How to design a game for multiple resolutions? - c#

Trying to figure out a good solution for handling different resolutions and resizing in a 2D side-scrolling shooter game built with OpenTK (OpenGL).
The ortho projection is set up as follows.
private void Setup2DGraphics()
{
    const double halfWidth = 1920 / 2;
    const double halfHeight = 1080 / 2;
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(-halfWidth, halfWidth, -halfHeight, halfHeight, -1.0, 1.0);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
}
The ortho is fixed at 1920*1080 so the projection stays a constant size; changing it would change game behaviour, since object movement is specified in pixels/sec, etc.
public void ResizeWindow(int width, int height)
{
    var screenSize = GetOptimalResolution(width, height);
    var offsetHeight = screenSize.Y < height ? (height - screenSize.Y) / 2 : 0;
    var offsetWidth = screenSize.X < width ? (width - screenSize.X) / 2 : 0;

    // Set the viewport
    GL.Viewport((int)offsetWidth, (int)offsetHeight, (int)screenSize.X, (int)screenSize.Y);
}
GetOptimalResolution returns the viewport size needed to maintain a 16:9 aspect ratio. For example, if the screen size is 1024*768, it returns 1024*576 (shrinking the height). The viewport is then centred in the window using half of the height difference as an offset, which results in black bars above and below the window.
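GetOptimalResolution itself isn't shown; for context, here is a minimal sketch of what it presumably does (pick the largest 16:9 size that fits the window), assuming an OpenTK Vector2 return type:
private Vector2 GetOptimalResolution(int width, int height)
{
    // Largest 16:9 area that fits inside the window
    // (assumption: this mirrors the behaviour described above, e.g. 1024x768 -> 1024x576).
    const float targetAspect = 16f / 9f;
    float windowAspect = (float)width / height;

    if (windowAspect > targetAspect)
        return new Vector2(height * targetAspect, height);  // wider than 16:9: full height, pillarbox
    else
        return new Vector2(width, width / targetAspect);    // narrower than 16:9: full width, letterbox
}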
This setup works and prevents aspect-ratio issues on all resolutions. However, I'm questioning whether this is the right way to do it. The issues I can think of with this design:
- On resolutions lower than 1920*1080, the view is scaled down. This affects how sprites look; for example, text starts to look horrible when scaled down.
- If I turn the resolution down for performance, will that actually help while the ortho stays the same size?
I did a lot of searching on this topic, but so far all I can find are plain instructions on how to perform individual actions rather than how to build a complete solution. I also found that using mipmaps might solve my scaling issue. Still, the question remains: am I on the right track, or should I look at the aspect-ratio problem from a different angle?

For example, text starts to look horrible when scaled down.
Don't use textures for text. Use a technique like signed distance fields for rendering clear text at any resolution.
For artwork, the ideal would be to have resolution specific artwork for a variety of common resolutions / DPIs. Failing that, mipmaps will drastically improve the quality of the rendered texture if it's scaled down from the source texture image.
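For context, enabling mipmapping with OpenTK only takes a few calls on an already-uploaded texture; a minimal sketch, where textureId is a placeholder for the sprite texture handle:
GL.BindTexture(TextureTarget.Texture2D, textureId);
GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
// Trilinear minification keeps sprites smooth when they are rendered smaller than the source.
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                (int)TextureMinFilter.LinearMipmapLinear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                (int)TextureMagFilter.Linear);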
It also might be easier to work the problem if you stop using pixel coordinates to do stuff. The default OpenGL screen coordinates run from (-1,-1) at the lower left to (1,1) at the upper right. However, you can alter these using an ortho transform to account for aspect ratio, like this:
double aspect = (double)windowWidth / windowHeight;
if (aspect > 1.0) {
    GL.Ortho(-1 * aspect, 1 * aspect, -1, 1, -1.0, 1.0);
} else {
    GL.Ortho(-1, 1, -1 / aspect, 1 / aspect, -1.0, 1.0);
}
This produces a projection matrix that is at least 1 in both dimensions. This lets you treat the screen as a 2x2 square with the origin in the middle, and any additional stuff the viewer can see because of the aspect is cake.

How can I use screen space coordinates directly with OpenGL?

I've followed the guide I found on https://learnopengl.com/ and managed to render a triangle in the window, but the whole process just seems overcomplicated to me. I've been searching the internet and found many similar questions, but they all seem outdated and none of them gives a satisfying answer.
I've read the Hello Triangle guide from the site mentioned above, which has a picture explaining the graphics pipeline; with that I got the idea of why a seemingly simple task like drawing a triangle on screen takes so many steps.
I have also read the Coordinate Systems guide from the same site, which explains the (to me) strange coordinate system that OpenGL uses (NDC) and why it uses it.
(Here's the picture from the guide mentioned above which I think would be useful for describing my question.)
The question is: Can I use the final SCREEN SPACE coordinates directly?
All I want is to do some 2D rendering (no z-axis) and the screen size is known (fixed), so I don't see any reason why I should use a normalized coordinate system instead of one bound to my screen.
e.g. on a 320x240 screen, (0,0) represents the top-left pixel and (319,239) represents the bottom-right pixel. It doesn't need to be exactly what I describe; the idea is that every integer coordinate maps to a corresponding pixel on the screen.
I know it's possible to set up such a coordinate system for my own use, but the coordinates would be transformed all around and end up back in screen space, which is what I had in the first place. All this just seems like wasted work to me; also, isn't it going to introduce precision loss when the coordinates get transformed?
Quote from the guide Coordinate Systems (picture above):
After the coordinates are in view space we want to project them to clip coordinates. Clip coordinates are processed to the -1.0 and 1.0 range and determine which vertices will end up on the screen.
So consider a 1024x768 screen, where I define the Clip coordinates as running from (0,0) to (1024,768):
(0,0)--------(1,0)--------(2,0)
|            |            |
|   First    |            |
|   Pixel    |            |
|            |            |
(0,1)--------(1,1)--------(2,1) . . .
|            |            |
|            |            |
|            |            |
|            |            |
(0,2)--------(1,2)--------(2,2)
.
.
.
(1022,766)---(1023,766)---(1024,766)
|            |            |
|            |            |
|            |            |
|            |            |
(1022,767)---(1023,767)---(1024,767)
|            |            |
|            |   Last     |
|            |   Pixel    |
|            |            |
(1022,768)---(1023,768)---(1024,768)
Let's say I want to put a picture at Pixel(11,11); the clip coordinate for that would be Clip(11.5,11.5), and this coordinate is then processed to the -1.0 to 1.0 range:
11.5f * 2 / 1024 - 1.0f = -0.977539063f // x
11.5f * 2 / 768 - 1.0f = -0.970052063f // y
And I have NDC(-0.977539063f,-0.970052063f)
And lastly we transform the clip coordinates to screen coordinates in a process we call viewport transform that transforms the coordinates from -1.0 and 1.0 to the coordinate range defined by glViewport. The resulting coordinates are then sent to the rasterizer to turn them into fragments.
So take the NDC coordinate and transform it back to screen coordinate:
(-0.977539063f + 1.0f) * 1024 / 2 = 11.5f // exact
(-0.970052063f + 1.0f) * 768 / 2 = 11.5000076f // error
The x value is exact since 1024 is a power of 2, but 768 isn't, so the y value is slightly off. The error is tiny, but it's not exactly 11.5f, so I guess there would be some sort of blending instead of a 1:1 representation of the original picture?
To avoid the rounding error mentioned above, I did something like this:
First I set the Viewport size to a size larger than my window, and make both width and height a power of two:
GL.Viewport(0, 240 - 256, 512, 256); // Window Size is 320x240
Then I setup the coordinates of vertices like this:
float[] vertices = {
    //   x        y       z
       0.5f,     0.5f,   0.0f,  // top-left
     319.5f,     0.5f,   0.0f,  // top-right
     319.5f,   239.5f,   0.0f,  // bottom-right
       0.5f,   239.5f,   0.0f,  // bottom-left
};
And I convert them manually in the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
void main()
{
    gl_Position = vec4(aPos.x * 2 / 512 - 1.0, 0.0 - (aPos.y * 2 / 256 - 1.0), 0.0, 1.0);
}
Finally I draw a quad; the result is:
This seems to produce a correct result (the quad has a 320x240 size), but I wonder if it's necessary to do all this.
What's the drawback of my approach?
Is there a better way to achieve what I did?
It seems rendering in wireframe mode hides the problem. I tried to apply a texture to my quad (actually I switched to 2 triangles), and I got different results on different GPUs, and none of them looks correct to me:
Left: Intel HD4000 | Right: Nvidia GT635M (optimus)
I set GL.ClearColor to white and disabled the texture.
While both results fill the window client area (320x240), Intel gives me a square sized 319x239 placed at the top left, while Nvidia gives me a square sized 319x239 placed at the bottom left.
This is what it looks like with texture turned on.
The texture:
(I have it flipped vertically so I can load it more easily in code.)
The vertices:
float[] vertices_with_texture = {
    //   x        y       texture x       texture y
       0.5f,     0.5f,    0.5f / 512,   511.5f / 512,  // top-left
     319.5f,     0.5f,  319.5f / 512,   511.5f / 512,  // top-right
     319.5f,   239.5f,  319.5f / 512,   271.5f / 512,  // bottom-right (511.5f - height 240 = 271.5f)
       0.5f,   239.5f,    0.5f / 512,   271.5f / 512,  // bottom-left
};
Now I'm completely stuck.
I thought I was placing the quad's edges on exact pixel centers (.5) and sampling the texture also at exact pixel centers (.5), yet the two cards give me two different results and neither of them is correct (zoom in and you can see the center of the image is slightly blurred, not a clean checkerboard pattern).
What am I missing?
I think I figured out what to do now; I've posted the solution as an answer and will leave this question here for reference.
Okay... I think I finally got everything to work as I expect. The problem was that I forgot pixels have size - it's so obvious now that I can't understand why I missed it.
In this answer, I'll refer to Clip Space coordinates as Clip(x,y), where x and y range from 0 to the screen width/height respectively. For example, on a 320x240 screen the Clip Space runs from Clip(0,0) to Clip(320,240).
Mistake 1:
When trying to draw a 10-pixel square, I drew it from Clip(0.5,0.5) to Clip(9.5,9.5).
It's true that those are the coordinates of the pixel centers of the square's begin (top-left) pixel and end (bottom-right) pixel, but the real space the square occupies does not run from the center of its begin pixel to the center of its end pixel.
Instead, the real space the square occupies runs from the top-left corner of the begin pixel to the bottom-right corner of the end pixel. So the correct coordinates to use for the square are Clip(0,0) - Clip(10,10).
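In other words, for the 10x10 square the quad corners sit on the pixel grid lines rather than on pixel centers; in the Clip convention above that could look like this (a hypothetical vertex array, x/y only):
float[] squareVertices = {
    //   x       y
       0.0f,    0.0f,   // top-left corner of the begin pixel
      10.0f,    0.0f,   // top-right
      10.0f,   10.0f,   // bottom-right corner of the end pixel
       0.0f,   10.0f,   // bottom-left
};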
Mistake 2:
Since I got the size of the square wrong, I was mapping the texture wrong as well. Now that I have the square size fixed, I could just fix the texture coordinates accordingly.
However, I found a better solution: Rectangle Texture, from which I quote:
When using rectangle samplers, all texture lookup functions automatically use non-normalized texture coordinates. This means that the values of the texture coordinates span (0..W, 0..H) across the texture, where the (W,H) refers to its dimensions in texels, rather than (0..1, 0..1).
This is VERY convenient for 2D rendering: for one, I don't have to do any coordinate transforms, and as a bonus I don't need to flip the texture vertically anymore.
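For reference, a minimal sketch of creating such a rectangle texture with OpenTK; width, height and pixelData are placeholders for whatever the atlas loader produces:
int tex = GL.GenTexture();
GL.BindTexture(TextureTarget.TextureRectangle, tex);
GL.TexImage2D(TextureTarget.TextureRectangle, 0, PixelInternalFormat.Rgba,
              width, height, 0, PixelFormat.Bgra, PixelType.UnsignedByte, pixelData);
// Rectangle textures have no mipmaps, so a plain linear filter is used.
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureMinFilter,
                (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureMagFilter,
                (int)TextureMagFilter.Linear);
// In GLSL the sampler is declared as sampler2DRect and sampled with texel coordinates.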
I tried it and it works just as I expected, but I came across a new problem: bleeding at the edges when the texture is not placed on exact pixel grids.
Solving the bleeding:
If I use different textures for every square, I can avoid this problem by having the sampler clamp everything outside the texture with TextureWrapMode.ClampToEdge
However, I'm using a texture atlas, a.k.a. a "sprite sheet". I've searched the internet and found solutions like:
Manually add padding to each sprite, basically leave safe spaces for bleeding error.
This is straightforward, but I really don't like it as I'd lose the ability to tightly pack my texture; it makes calculating the texture coordinates more complex, and it's just a waste of space anyway.
Use GL_NEAREST for GL_TEXTURE_MIN_FILTER and shift the coordinates by 0.5/0.375
This is very easy to code and it works fine for pixel art - I don't want a linear filter to make it blurry anyway. But I'd also like to keep the ability to display a picture and move it around smoothly rather than jumping from pixel to pixel, so I need to be able to use GL_LINEAR.
One solution: Manually clamp the texture coordinates.
It's basically the same idea as TextureWrapMode.ClampToEdge, but on a per-sprite basis rather than only at the edges of the entire sprite sheet. I coded the fragment shader like this (just a proof of concept; I certainly need to improve it):
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    mTexCoord.x = TexCoord.x <= 0.5 ? 0.5 : TexCoord.x >= 319.5 ? 319.5 : TexCoord.x;
    mTexCoord.y = TexCoord.y <= 0.5 ? 0.5 : TexCoord.y >= 239.5 ? 239.5 : TexCoord.y;
    FragColor = texture(texture1, mTexCoord);
}
(Yes, the "sprite" I use in this case is 320x240 which takes my entire screen.)
It's soooo easy to code thanks to the fact that Rectangle Texture uses non-normalized coordinates. It works well enough, so I just called it a day and wrapped it up here.
Another solution (untested yet): Use Array Texture instead of a texture atlas
The idea is simple: just set TextureWrapMode.ClampToEdge and let the sampler do its job. I haven't looked further into it, but it seems to work in concept. Still, I really like the way coordinates work with Rectangle Texture and would like to keep it if possible.
Rounding Error
When trying to animate my square on the screen, I got a very strange result (notice the lower-left corner of the square when the value on the left reads X.5):
This only happens on my iGPU (Intel HD4000) and not the dGPU (Nvidia GT635M via Optimus). It looks like this because I snapped all coordinates to pixel centers (.5) in the fragment shader:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    // Clamp to sprite
    mTexCoord.x = clamp(TexCoord.x, 0.5, 319.5);
    mTexCoord.y = clamp(TexCoord.y, 0.5, 239.5);
    // Snap to pixel
    mTexCoord.xy = trunc(mTexCoord.xy) + 0.5;
    FragColor = texture(texture1, mTexCoord);
}
My best guess is that the iGPU and dGPU round differently when converting coordinates to NDC (and back to screen coordinates).
Using a quad and a texture that are both sized to a power of two would avoid this problem. There are also workarounds like adding a small amount (0.0001 is enough on my laptop) to mTexCoord.xy before trunc.
Update: the solution
Okay, after a good sleep I came up with a relatively simple solution.
When dealing with pictures, nothing needs to be changed (let the linear filter do its job).
Since there's always going to be a rounding error, I basically just give up at this point and live with it. It won't be noticeable to human eyes anyway.
When trying to fit the pixels of the texture onto the pixel grid of the screen, in addition to snapping the texture coordinates in the fragment shader (as seen above), I also have to pre-shift the texture coordinates in the vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 projection;
void main()
{
    TexCoord = aTexCoord + mod(aPos, 1);
    gl_Position = projection * vec4(aPos.xy, 0.0, 1.0);
}
The idea is simple: when the square is placed at Clip(10.5, 10.5), shift the texture coordinate to Pixel(0.5, 0.5) as well. Of course this means the end coordinate gets shifted out of the sprite region to Pixel(320.5, 320.5), but this is fixed in the fragment shader using clamp, so there's no need to worry about it. Any coordinate in between (like Pixel(0.1, 0.1)) would be snapped to Pixel(0.5, 0.5) by the fragment shader as well, which creates a pixel-to-pixel result.
This gives me a consistent result across my iGPU(intel) and dGPU(Nvidia):
Positioned at Clip(10.5,10.5); notice the artifacts in the lower-left corner are gone.
To sum it up:
- Set up the Clip Space coordinates as a 1:1 representation of the screen.
- Remember that pixels have size when calculating the size of a sprite.
- Use the correct coordinates for textures and fix the bleeding on sprite edges.
- When trying to snap texture pixels to screen pixels, take extra care with vertex coordinates in addition to texture coordinates.
Can I use the final SCREEN SPACE coordinates directly
No you can't. You have to transform the coordinates. Either on the CPU or in the shader (GPU).
If you want to use window coordinates, then you have to set an orthographic projection matrix, which transforms the coordinates from x: [-1.0, 1.0], y: [-1.0, 1.0] (normalized device space) to your window coordinates x: [0, 320], y: [240, 0].
e.g. by the use of glm::ortho
glm::mat4 projection = glm::ortho(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
e.g. OpenTK Matrix4.CreateOrthographicOffCenter
OpenTK.Matrix4 projection =
    OpenTK.Matrix4.CreateOrthographicOffCenter(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
In the vertex shader, the vertex coordinate has to be multiplied by the projection matrix
in vec3 vertex;
uniform mat4 projection;
void main()
{
    gl_Position = projection * vec4(vertex.xyz, 1.0);
}
For the sake of completeness, legacy OpenGL with glOrtho:
(do not use the old and deprecated legacy OpenGL functionality)
glOrtho(0.0, 320.0, 240.0, 0.0, -1.0, 1.0);
As others have mentioned you need an orthographic projection matrix.
You can either implement it yourself by following guides such as these:
http://learnwebgl.brown37.net/08_projections/projections_ortho.html
Or maybe the framework you're using already has it.
If you set the right/left/top/bottom values to match your screen resolution then a point of coordinate x,y (z being irrelevant, you can use a 2d vector with a 4x4 matrix) will become the same x,y on the screen.
You can move the view around like a camera if you multiply this projection matrix by a translation matrix (first the translation then the projection).
Then you pass this matrix to your shader and multiply the position of the vertex by it.
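With OpenTK that could look roughly like this; shaderProgram, cameraX and cameraY are placeholders, and OpenTK's row-vector convention means the left-hand matrix is applied first (translation, then projection):
Matrix4 projection = Matrix4.CreateOrthographicOffCenter(0f, 320f, 240f, 0f, -1f, 1f);
Matrix4 camera = Matrix4.CreateTranslation(-cameraX, -cameraY, 0f);
Matrix4 viewProjection = camera * projection;   // translate by the camera offset, then project

int location = GL.GetUniformLocation(shaderProgram, "projection");
GL.UniformMatrix4(location, false, ref viewProjection);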

Diagonal Shadow with GDI+

I am trying to draw a diagonal shadow.
First I turn all pixels black:
Next, with a simple for loop, this is the result:
Now I want to stretch this image diagonally to simulate a shadow.
I have tried:
Bitmap b = new Bitmap(tImage.Width + 100, tImage.Height);
Graphics p = Graphics.FromImage(b);
p.RotateTransform(30f);
p.TranslateTransform(100f, -200f);
p.DrawImage(tImage, new Rectangle(0, -20, b.Width+20, b.Height));
but the image just ends up rotated and translated.
Does anyone have a solution for me?
I need it to look like this (created in Photoshop):
Creating a nice dropshadow is quite a task using Winforms and GDI+.
It features neither polygon scaling nor blurring; and let's not even think about 3D..! - But we can at least do a few things without too much work and get a nice result for many images..
Let's assume you already have an image that is cut out from its background.
The next step would be to turn all colors into black.
Then we most likely would want to add some level of transparency, so that the background the shadow falls on, still shines through.
Both tasks are done quite effectively by using a suitable ColorMatrix.
With a very transparent version we can also create simple blurring by drawing the image with offsets. For best results I would draw it nine times with 3 different weights/alpha values.
High quality blurring is an art as you can see by even just looking at the filters and adjustments in pro software like Adobe Photoshop or Affinity Photo. Here is a nice set of interesting links..
But since we are only dealing with a b/w bitmap, a simplistic approach is good enough. I use 3 alpha values of 5%, 10% and 20% for the 4 corner, the 4 edge and the 1 center drawings.
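As an illustration only (sourceBitmap, shadowBitmap and the offsets are placeholders, not the original code), the blackening and the alpha weighting can both be expressed with one ColorMatrix per pass:
// Every colour channel is forced to 0 (black); alpha is scaled to 20% of the original.
// The 0.2f would be 0.05f / 0.1f / 0.2f for the corner / edge / center passes described above.
var shadowMatrix = new ColorMatrix(new float[][]
{
    new float[] { 0, 0, 0, 0,    0 },   // red   -> 0
    new float[] { 0, 0, 0, 0,    0 },   // green -> 0
    new float[] { 0, 0, 0, 0,    0 },   // blue  -> 0
    new float[] { 0, 0, 0, 0.2f, 0 },   // alpha -> 20% of source alpha
    new float[] { 0, 0, 0, 0,    1 }
});

using (var attributes = new ImageAttributes())
using (Graphics g = Graphics.FromImage(shadowBitmap))
{
    attributes.SetColorMatrix(shadowMatrix);
    g.DrawImage(sourceBitmap,
                new Rectangle(offsetX, offsetY, sourceBitmap.Width, sourceBitmap.Height),
                0, 0, sourceBitmap.Width, sourceBitmap.Height,
                GraphicsUnit.Pixel, attributes);
}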
The final step is drawing the shadow with some skewing.
This is explained here; but while this is seemingly very simple, it is also somewhat impractical: the three points the DrawImage overload expects need to be calculated.
So here is a method that does just that; do note that it is a strongly simplified method:
The overload takes three points, that is 6 floats. We only use 3 numbers:
one for the amount of skewing; 0.5 means the top is shifted to the right by half the width of the bitmap.
the other two are the scaling of the resulting bounding box. 1 and 0.5 mean that the width is unchanged and the height is reduced to 50%.
Here is the function:
public Bitmap SkewBitmap(Bitmap inMap, float skewX, float ratioX, float ratioY)
{
    int nWidth = (int)(inMap.Width * (skewX + ratioX));
    int nHeight = (int)(Math.Max(inMap.Height, inMap.Height * ratioY));
    int yOffset = inMap.Height - nHeight;
    Bitmap outMap = new Bitmap(nWidth, nHeight);
    Point[] destinationPoints = {
        new Point((int)(inMap.Width * skewX), (int)(inMap.Height * ratioY) + yOffset),
        new Point((int)(inMap.Width * skewX + inMap.Width * ratioX),
                  (int)(inMap.Height * ratioY) + yOffset),
        new Point(0, inMap.Height + yOffset) };
    using (Graphics g = Graphics.FromImage(outMap))
        g.DrawImage(inMap, destinationPoints);
    return outMap;
}
Note a few simplifications:
If you want to drop the shadow to the left, you will need to not just move the first two points to the left but also adapt the calculation of the width and the way you overlay the object over the shadow.
If you study the MSDN example you will see that the DrawImage overload also allows you to do a rotation. I didn't add this to our function, as it is a good deal more complicated to calculate and even just to write a signature for.
If you wonder where the six numbers go, here is the full layout:
- 3 go into our parameters
- 1 would be the angle of the rotation we don't do
- 2 could be either the rotation center point or a point (deltaX & deltaY) by which the result is translated
If you look closely you can see the shadow of the left foot is a little below the foot. This is because the feet are not at the same level and with the vertical compression the base lines drift apart. To correct that we would either modify the image or add a tiny rotation after all.
Looking at your example image it is clear that you will need to take it apart and treat the 'house' and the 'tree' separately!
The signature is kept simple; this is always a balance between ease of use and coding effort. One could wish for a parameter that takes an angle to control the skewing. Feel free to work out the necessary calculations.
Note that adding the functions behind the other buttons would go beyond the scope of the question. Suffice it to say that most are just one line to do the drawing and a dozen or so to set up the ColorMatrix.
Here is the code in the 'Skew' button:
Bitmap bmp = SkewBitmap((Bitmap)pictureBox4.Image, 0.5f, 1f, 0.5f);
pictureBox5.Image = pictureBox1.Image;
pictureBox5.BackgroundImage = bmp;
pictureBox5.ClientSize = new Size(bmp.Width, bmp.Height);
Instead of drawing the object over the shadow I make use of the extra layer of the PictureBox. You would of course combine the two Bitmaps..

Fitting a rectangle into screen with XNA

I am drawing a rectangle with primitives in XNA. The width is:
width = GraphicsDevice.Viewport.Width
and the height is
height = GraphicsDevice.Viewport.Height
I am trying to fit this rectangle to the screen (using different screens and devices), but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far.
This is what I am using to get the camera distance:
// Height of pyramid
float alpha = 0;
float beta = 0;
float gamma = 0;
alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
position = new Vector3(0, 0, gamma);
Any idea where to put the camera on the Z-axis?
The trick to doing this is to draw the rectangle directly in raster space. This is the space that the GPU works in when actually rendering stuff.
Normally your model starts in its own model space, you apply a world transform to get it into world space. You then apply a view transform to move the points in the world around the camera (to get the effect of the camera moving around the world). Then you apply a projection matrix to get all those points into the space that the GPU uses for drawing.
It just so happens that this space is always - no matter the screen size - from (-1,-1) in the bottom left corner of the viewport, to (1,1) in the top right (and from 0 to 1 on the Z axis for depth).
So the trick is to set all your matrices (world, view, project) to Matrix.Identity, and then draw a rectangle from (-1,-1,0) to (1,1,0). You probably also want to set DepthStencilState.None so that it doesn't affect the depth buffer.
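A rough sketch of that approach (not the poster's actual code; BasicEffect stands in for whatever effect is being used):
// Draw a full-screen quad directly in clip space with identity matrices.
var effect = new BasicEffect(GraphicsDevice)
{
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = Matrix.Identity,
    VertexColorEnabled = true
};

VertexPositionColor[] quad =
{
    new VertexPositionColor(new Vector3(-1, -1, 0), Color.White),
    new VertexPositionColor(new Vector3(-1,  1, 0), Color.White),
    new VertexPositionColor(new Vector3( 1, -1, 0), Color.White),
    new VertexPositionColor(new Vector3( 1,  1, 0), Color.White),
};

GraphicsDevice.DepthStencilState = DepthStencilState.None;   // don't touch the depth buffer
GraphicsDevice.RasterizerState = RasterizerState.CullNone;   // ignore winding order

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
}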
An alternative method is to just use SpriteBatch and draw to a rectangle that is the same size as the Viewport.
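That variant is only a few lines (texture is assumed to hold the rectangle's image):
spriteBatch.Begin();
spriteBatch.Draw(texture, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();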

How to scale texture2d in XNA with window resizing

I'm developing a UI for a school project, and I've tried scaling methods similar to the ones listed here, but here is the issue:
Our project is developed at 1440 x 900, so I've made my own images to fit that screen resolution. When we have to demo our project in class, the projector can only render up to 1024 x 768, so many things on the screen go missing. I have added window-resizing capabilities, and I'm doing my scaling like this: I have my own class called "Button" which has a Texture2D and a Vector2 position, constructed by Button(Texture2D img, float width, float height).
My idea is to set the position of the image to a scalable % of the window width and height, so I'm attempting to set the position of the img to a number between 0-1 and then multiply by the window width and height to keep everything scaled properly.
(This isn't my exact code; I'm just trying to convey the point.)
Button button = new Button(texture, 0.01f, 0.01f);
int height = (int)(GraphicsDevice.Viewport.Height * button.Position.Y);
int width = (int)(GraphicsDevice.Viewport.Width * button.Position.X);
Rectangle rect = new Rectangle(0, 0, width, height);
spriteBatch.Begin();
spriteBatch.Draw(button.Img, rect, Color.White);
spriteBatch.End();
It doesn't end up scaling anything when I draw it and resize the window by dragging the mouse around. If I hard-code a different buffer height and buffer width to begin with, the image stays around the same size regardless of resolution, except that the smaller the resolution is, the more pixelated the image looks.
What is the best way to design my program to allow for dynamic Texture2D scaling?
As Hannesh said, if you run it in fullscreen you won't have these problems. However, you also have a fundamental problem with the way you are doing this. Instead of using the position of the sprite, which will not change at all during window resize, you must use the size of the sprite. I often do this using a property called Scale in my Sprite class. So instead of clamping the position of the sprite between 0 and 1, you should be clamping the Size property of the sprite between 0 and 1. Then as you rescale the window it will rescale the sprites.
In my opinion, a better way to do this is to have a default resolution, in your case 1440 x 900. Then, if the window is rescaled, just multiply all sprites' scaling factors by the ratio of the new screensize to the old screensize. This takes only 1 multiplication per resize, instead of a multiplication per update (which is what your method will do, because you have to convert from the clamped 0-1 value to the real scale every update).
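A minimal sketch of that idea (class, field and method names are illustrative, not from the question; texture loading is omitted):
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    const float DesignWidth = 1440f;
    const float DesignHeight = 900f;
    float spriteScale = 1f;
    SpriteBatch spriteBatch;
    Texture2D buttonTexture;                               // assumed to be loaded in LoadContent
    Vector2 buttonPosition = new Vector2(100f, 50f);       // position in 1440x900 design pixels

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        Window.ClientSizeChanged += (s, e) => RecomputeScale();
        RecomputeScale();
    }

    void RecomputeScale()
    {
        // One division per resize instead of one per update.
        spriteScale = Math.Min(GraphicsDevice.Viewport.Width / DesignWidth,
                               GraphicsDevice.Viewport.Height / DesignHeight);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        // Apply the cached scale to both position and size.
        spriteBatch.Draw(buttonTexture, buttonPosition * spriteScale, null, Color.White,
                         0f, Vector2.Zero, spriteScale, SpriteEffects.None, 0f);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}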
Also, the effects you noticed during manual rescale of the sprites is normal. Rescaling images to arbitrary sizes causes artifacts in the rendered image because the graphics device doesn't know what to do at most sizes. A good way to get around this is by using filler art during the development process and then create the final art in the correct resolution(s). Obviously this doesn't apply in your situation because you are resizing a window to arbitrary size, but in games you will usually only be able to switch to certain fixed resolutions.

How should I translate from screen space coordinates to image space coordinates in a WinForms PictureBox?

I have an application that displays an image inside of a Windows Forms PictureBox control. The SizeMode of the control is set to Zoom so that the image contained in the PictureBox will be displayed in an aspect-correct way regardless of the dimensions of the PictureBox.
This is great for the visual appearance of the application because you can size the window however you want and the image will always be displayed using its best fit. Unfortunately, I also need to handle mouse click events on the picture box and need to be able to translate from screen-space coordinates to image-space coordinates.
It looks like it's easy to translate from screen space to control space, but I don't see any obvious way to translate from control space to image space (i.e. the pixel coordinate in the source image that has been scaled in the picture box).
Is there an easy way to do this, or should I just duplicate the scaling math that they're using internally to position the image and do the translation myself?
I wound up just implementing the translation manually. The code's not too bad, but it did leave me wishing that they provided support for it directly. I could see such a method being useful in a lot of different circumstances.
I guess that's why they added extension methods :)
In pseudocode:
// Recompute the image scaling the zoom mode uses to fit the image on screen
imageScale ::= min(pictureBox.width / image.width, pictureBox.height / image.height)
scaledWidth ::= image.width * imageScale
scaledHeight ::= image.height * imageScale
// Compute the offset of the image to center it in the picture box
imageX ::= (pictureBox.width - scaledWidth) / 2
imageY ::= (pictureBox.height - scaledHeight) / 2
// Test the coordinate in the picture box against the image bounds
if pos.x < imageX or imageX + scaledWidth < pos.x then return null
if pos.y < imageY or imageY + scaledHeight < pos.y then return null
// Compute the normalized (0..1) coordinates in image space
u ::= (pos.x - imageX) / scaledWidth
v ::= (pos.y - imageY) / scaledHeight
return (u, v)
To get the pixel position in the image, you'd just multiply by the actual image pixel dimensions, but the normalized coordinates allow you to address the original responder's point about resolving ambiguity on a case-by-case basis.
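Translated into C# against a WinForms PictureBox, it might look like this (GetImageCoordinates is a made-up name, not a framework method):
// Maps a mouse position on a PictureBox (SizeMode = Zoom) to normalized (0..1)
// image coordinates, or null if the click landed on the letterbox area.
private PointF? GetImageCoordinates(PictureBox box, Point mouse)
{
    Image image = box.Image;
    if (image == null) return null;

    float scale = Math.Min((float)box.ClientSize.Width / image.Width,
                           (float)box.ClientSize.Height / image.Height);
    float scaledWidth = image.Width * scale;
    float scaledHeight = image.Height * scale;
    float offsetX = (box.ClientSize.Width - scaledWidth) / 2f;
    float offsetY = (box.ClientSize.Height - scaledHeight) / 2f;

    if (mouse.X < offsetX || mouse.X > offsetX + scaledWidth) return null;
    if (mouse.Y < offsetY || mouse.Y > offsetY + scaledHeight) return null;

    return new PointF((mouse.X - offsetX) / scaledWidth,
                      (mouse.Y - offsetY) / scaledHeight);
}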
Depending on the scaling, the relative image pixel could be anywhere within a range of pixels. For example, if the image is scaled down significantly, pixel (2, 10) could represent anything from (2, 10) all the way up to (20, 100), so you'll have to do the math yourself and take full responsibility for any inaccuracies! :-)
