Draw Points onto canvas using an offset? - c#

I have an array of Point variables. When drawn using Graphics.DrawLine, they create the expected image. My problem is that 0,0 is actually the center of the image (not the top left of my canvas as I expected), so the X and Y coordinates in my Points can be negative.
When I try to draw this to my Image, of course I get 1/4 of the total image as the remainder is drawn outside the bounds of my canvas. How do I center this drawing correctly onto my canvas?
I know the dimensions of the image I want to draw. I know where 0,0 is (width / 2, height / 2).
I suppose I can translate each and every single Point, but that seems like the hard way to do this.

TranslateTransform() can map coordinates for you if you set up a transformation during your drawing handlers.
Graphics.TranslateTransform # MSDN
Or, map your coordinates by adding half the width and half the height of the desired viewing area to each coordinate.
Also, you may need to scale your coordinates. You may use Graphics.ScaleTransform to do this.
Graphics.ScaleTransform # MSDN
If you don't wish to use this, divide each X coordinate by the factor you want to stretch the width by, and each Y coordinate by the factor you want to stretch the height by, where the factor is the percentage expressed as a decimal: 1.0 for 100%, 1.2 for 120%, 0.8 for 80%, and so on.
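For example, a minimal sketch of both transforms in a Paint handler (assuming a Panel named canvas and a Point[] field named points; both names are placeholders):

// Sketch only: points holds the centered coordinates, which may be negative.
private void canvas_Paint(object sender, PaintEventArgs e)
{
    Graphics g = e.Graphics;

    // Move the origin from the top-left corner to the center of the canvas.
    g.TranslateTransform(canvas.Width / 2f, canvas.Height / 2f);

    // Optional: stretch the drawing, e.g. 120% in X and 80% in Y.
    // g.ScaleTransform(1.2f, 0.8f);

    g.DrawLines(Pens.Black, points);
}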

Welcome to the Windows version of the Cartesian plane. Your last statement is correct: you do have to offset each and every point. The only real help you can give yourself is to move the offset logic into a separate method to keep your main drawing code clean.

When creating the array, add an offset to each x value equal to half of the width and an offset to y equal to half of the height. That way when the points are drawn, they're in the expected position.
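A minimal sketch of that approach (the helper name is made up for illustration):

// Returns a copy of the points shifted so that (0,0) maps to the center
// of a canvas with the given dimensions.
private static Point[] OffsetToCenter(Point[] points, int width, int height)
{
    var result = new Point[points.Length];
    for (int i = 0; i < points.Length; i++)
        result[i] = new Point(points[i].X + width / 2, points[i].Y + height / 2);
    return result;
}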

Related

Ordering frames (blocks) in a page for mobile devices (.NET)

I'm writing a program that traces the contour of individual frames within an image. The tracing is complete and works very well.
Basically I start at pixel 0,0 and loop through each row until I find a contour pixel, then, using the Moore neighborhood algorithm, I trace out the block until I reach my initial starting point.
However, if you have ever looked at a bitmap up close, you will see that the edges are not perfectly straight, and it's possible for frame #2 or #3 to have a slightly higher starting Y coordinate. Thus I will need to allow for some tolerance on the Y axis.
In a perfect world, I could sort the frames by (y) and then by (x) in ascending order.
Getting to the point: if I have the following image loaded into a Bitmap class, and let's say I already know the top-left X, top-left Y, width, and height for each frame, how could I programmatically sort the frames correctly?
Image: (figure 12, image a)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3629985/figure/F12/
You can conceptually align the nearly-aligned frames like this:
Sort the frame locations by X
Set each frame location within a few X pixels of the previous frame's location to the previous frame's X value.
Do the same for Y.
Then you can order them normally.
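A rough sketch of that idea, assuming each frame is described by a Rectangle and that a tolerance of a few pixels is acceptable (both the Rectangle representation and the tolerance value are assumptions):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

static List<Rectangle> OrderFrames(List<Rectangle> frames, int tolerance = 5)
{
    // Snap X: after sorting by X, pull any frame within "tolerance" pixels
    // of the previous frame onto the previous frame's X value.
    var snapped = frames.OrderBy(f => f.X).ToList();
    for (int i = 1; i < snapped.Count; i++)
        if (Math.Abs(snapped[i].X - snapped[i - 1].X) <= tolerance)
            snapped[i] = new Rectangle(snapped[i - 1].X, snapped[i].Y,
                                       snapped[i].Width, snapped[i].Height);

    // Snap Y the same way.
    snapped = snapped.OrderBy(f => f.Y).ToList();
    for (int i = 1; i < snapped.Count; i++)
        if (Math.Abs(snapped[i].Y - snapped[i - 1].Y) <= tolerance)
            snapped[i] = new Rectangle(snapped[i].X, snapped[i - 1].Y,
                                       snapped[i].Width, snapped[i].Height);

    // With the coordinates snapped, an ordinary row-then-column sort works.
    return snapped.OrderBy(f => f.Y).ThenBy(f => f.X).ToList();
}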

Is it safe to assume an image rotation manipulates the height and width exactly?

Say I have the following code for an image that's 200x100 or any arbitrary size really:
Image image = Bitmap.FromFile(fileName);
image.RotateFlip(RotateFlipType.Rotate90FlipNone);
image.Save(fileName);
Is it safe to assume that the output size is 100x200? That the width and height exactly swapped values? My coworker is convinced it may not be guaranteed. I think the matrix math involved is exact and reliable. Who's right? If it matters, we're working with tif images.
The Image.RotateFlip method only rotates in exact multiples of 90 degrees (0, 90, 180, 270).
This means after you use it, you will always end up with one of two results:
1. Either the width and height remain unchanged (for example, when you flip without rotation or rotate 180 degrees).
2. Or the width and height are exactly swapped.
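A quick way to check this (a minimal sketch; fileName is a placeholder for an existing image path):

using (Image image = Image.FromFile(fileName))
{
    Size before = image.Size;                      // e.g. 200x100
    image.RotateFlip(RotateFlipType.Rotate90FlipNone);
    Size after = image.Size;                       // e.g. 100x200

    // For a 90 or 270 degree rotation the dimensions are exactly swapped.
    Debug.Assert(after.Width == before.Height && after.Height == before.Width);
}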
Bonus info:
This type of right-angle rotation is "lossless" which means the color values are preserved and the operation can be reversed without any loss of pixel values.
On the other hand, any algorithm that rotates arbitrary angles that are not multiples of 90 degrees will be a lossy algorithm. The pixel locations and/or exact color values are not guaranteed to be restored by doing a counter-rotation.

.NET Graphics.FillEllipse()

So the .NET documentation says that this creates a filled ellipse with the UPPER LEFT CORNER at the X,Y coordinate specified.
But I need the ellipse to be CENTERED on the X,Y coordinate I supplied.
How do I do this?
Thanks!
From the desired center point, subtract half of the width from X and half of the height from Y, and pass that upper-left point to FillEllipse.
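For example, a small helper (the name is made up) that wraps FillEllipse so the ellipse is centered on the point you pass in:

static void FillEllipseCentered(Graphics g, Brush brush,
                                float centerX, float centerY,
                                float width, float height)
{
    // Convert the center point to the upper-left corner FillEllipse expects.
    g.FillEllipse(brush, centerX - width / 2f, centerY - height / 2f, width, height);
}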

Any ideas on what is causing my tiles to not fit together?

I have a tile system written in XNA, and there are problems with the tiles fitting together.
What I mean is, sometimes tiles are separated by 1 pixel (maybe 2 pixels? I can't tell), and they are supposed to fit together perfectly. I am certain the math I did to get them together is right, but I don't know what is causing the issue.
Surprisingly, this issue is fixed when I raise the size of my tiles (a double) to 1000. Size is only relative to my camera zoom, so this does not affect gameplay at all, but it bothers me that I have to do this.
Any ideas on what could be causing this?
Edit: in fact, anything below a tile size of 995 has the issue, but anything above is fine. This looks like some kind of precision issue. Is double math more accurate with large numbers or something?
Well, pixels are integers, and if the size of your tiles is a double, that means you must be doing some sort of conversion to get pixels, which in turn is probably where the separation comes from.
For example if tile A starts at 0, and is 9.9 in length, where do you put the next tile?
If you are rounding in this case, you would use Math.Round(value, 0, MidpointRounding.AwayFromZero), because Math.Round(value) does not return what you were mathematically taught "rounding" to be.
By default it uses banker's rounding (round half to even), which gives results like Math.Round(0.5) = 0 and Math.Round(1.5) = 2!
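A tiny illustration of the difference:

// The default overload uses banker's rounding (round half to even).
Console.WriteLine(Math.Round(0.5));                                    // 0
Console.WriteLine(Math.Round(1.5));                                    // 2

// MidpointRounding.AwayFromZero gives the "schoolbook" result.
Console.WriteLine(Math.Round(0.5, 0, MidpointRounding.AwayFromZero));  // 1
Console.WriteLine(Math.Round(1.5, 0, MidpointRounding.AwayFromZero));  // 2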
Hard to diagnose without seeing how you are handling tile placement. Are your tiles fixed width and height or variable?
Here is how I lay out my level consisting of tiles of size 32x32.
Tile class contains texture of tile and grid position of tile.
Iterate through Tile collection and draw them at:
X = gridPosition.X * textureWidth
Y = gridPosition.Y * textureHeight
So, if you have a tile of 32x32 at grid position (0, 0), then it will be drawn at (0, 0).
And if you have a tile of 32x32 at grid position (0, 1), then it will be drawn at (0, 32).
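A minimal sketch of that layout in XNA (the Tile class shown here is an assumption, not part of the framework):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using System.Collections.Generic;

class Tile
{
    public Texture2D Texture;
    public Point GridPosition;   // integer grid coordinates, not pixels
}

void DrawTiles(SpriteBatch spriteBatch, IEnumerable<Tile> tiles)
{
    const int tileSize = 32;
    spriteBatch.Begin();
    foreach (Tile tile in tiles)
    {
        // Pixel position = grid position * tile size, so tiles butt up exactly.
        var position = new Vector2(tile.GridPosition.X * tileSize,
                                   tile.GridPosition.Y * tileSize);
        spriteBatch.Draw(tile.Texture, position, Color.White);
    }
    spriteBatch.End();
}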

C# Scaling GDI positions but not font size or line thickness

I need to do a lot of drawing on a grid with spacing of 12.5 pixels in X and 20 pixels in Y (the PICA scale). The font needs to be a specific size and the lines need to stay one pixel thick. Currently I'm saving these values in floats and multiplying them (for example, text starting on row 3, column 6 is drawn at coords 2f*cx,5f*cy). I'd like to avoid all this unnecessary multiplication by using a scale transform, but unfortunately that affects the font size and line thickness as well. Is there a way to avoid this? Or would the compiler be silently doing this for me, since the cx/cy values are constants?
...also, Microsoft has left a little "hack" for us if you don't want lines to be scaled. Set the width of the line to 0px, and it will always be drawn a single pixel thick.
The compiler should reduce the constant portion of expressions to a single constant, but there will still have to be a multiply at runtime since the value of your float is not known at compile time. So, (1 + 2 + c) * 6 * f can be reduced to n * f by the compiler if c is a constant.
Your best bet to prevent scaling of your text is probably to set up a scaling transform, draw all the non-text graphics where you don't care about maintaining minimum line widths, and then draw your text without the transform. You can use the transform to locate where the text should start, to save yourself having to calculate that independently - a function like LPtoDP (logical point to device point) should do the trick.
Another way to approach this is to render the text under the transform, but apply a reverse scaling to the text size itself. So, if the transform scales down 5%, you scale the font size up by 5%. This will not give exact results, but might be close enough visually.
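A rough sketch of the first approach (scale transform for the geometry, text drawn with the transform reset); the cell sizes, grid position, and font are placeholders, and Graphics.TransformPoints is used here to map the logical position into device pixels:

// Requires System.Drawing and System.Drawing.Drawing2D (for CoordinateSpace).
void Draw(Graphics g)
{
    const float cx = 12.5f, cy = 20f;   // grid spacing from the question

    // Draw the grid/lines under the scale transform.
    g.ScaleTransform(cx, cy);
    using (var pen = new Pen(Color.Black, 0))   // 0-width pen stays one pixel thick
        g.DrawLine(pen, 0, 0, 10, 0);

    // Map the text's logical grid position to device pixels while the
    // transform is still active.
    var pts = new[] { new PointF(5f, 2f) };     // placeholder grid coordinates
    g.TransformPoints(CoordinateSpace.Device, CoordinateSpace.World, pts);

    // Reset the transform so the font size is unaffected, then draw the text.
    g.ResetTransform();
    using (var font = new Font("Consolas", 10f))
        g.DrawString("text", font, Brushes.Black, pts[0]);
}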
