How can I use screen space coordinates directly with OpenGL? - c#

I've followed the guide I found on https://learnopengl.com/ and managed to render a triangle on the window, but the whole process just seems overcomplicated to me. I've been searching the internet and found many similar questions, but they all seem outdated and none of them gives a satisfying answer.
I've read the guide Hello Triangle from the site mentioned above, which has a picture explaining the graphics pipeline; with that I got the idea of why a seemingly simple task such as drawing a triangle on screen takes so many steps.
I have also read the guide Coordinate Systems from the same site, which explains the "strange" (to me) coordinate system that OpenGL uses (NDC) and why it uses it.
(Here's the picture from the guide mentioned above which I think would be useful for describing my question.)
The question is: Can I use the final SCREEN SPACE coordinates directly?
All I want is to do some 2D rendering (no z-axis) and the screen size is known (fixed), so I don't see any reason why I should use a normalized coordinate system instead of a special one bound to my screen.
E.g.: on a 320x240 screen, (0,0) represents the top-left pixel and (319,239) represents the bottom-right pixel. It doesn't need to be exactly as I describe; the idea is that every integer coordinate = a corresponding pixel on the screen.
I know it's possible to set up such a coordinate system for my own use, but the coordinates would be transformed all around and in the end back to screen space - which is what I had in the first place. All of this just seems like wasted work to me; also, isn't it going to introduce precision loss when the coordinates get transformed?
Quote from the guide Coordinate Systems (picture above):
After the coordinates are in view space we want to project them to clip coordinates. Clip coordinates are processed to the -1.0 and 1.0 range and determine which vertices will end up on the screen.
So consider a 1024x768 screen, where I define the Clip coordinates as (0,0) to (1024,768), like so:
(0,0)--------(1,0)--------(2,0)
|            |            |
|   First    |            |
|   Pixel    |            |
|            |            |
(0,1)--------(1,1)--------(2,1) . . .
|            |            |
|            |            |
|            |            |
|            |            |
(0,2)--------(1,2)--------(2,2)
    .
    .
    .
(1022,766)---(1023,766)---(1024,766)
|            |            |
|            |            |
|            |            |
|            |            |
(1022,767)---(1023,767)---(1024,767)
|            |            |
|            |   Last     |
|            |   Pixel    |
|            |            |
(1022,768)---(1023,768)---(1024,768)
Let's say I want to put a picture at Pixel(11,11); the clip coordinates for that would be Clip(11.5,11.5). This coordinate is then processed into the -1.0 to 1.0 range:
11.5f * 2 / 1024 - 1.0f = -0.977539063f // x
11.5f * 2 / 768 - 1.0f = -0.970052063f // y
And I have NDC(-0.977539063f,-0.970052063f)
And lastly we transform the clip coordinates to screen coordinates in a process we call viewport transform that transforms the coordinates from -1.0 and 1.0 to the coordinate range defined by glViewport. The resulting coordinates are then sent to the rasterizer to turn them into fragments.
So take the NDC coordinate and transform it back to screen coordinate:
(-0.977539063f + 1.0f) * 1024 / 2 = 11.5f // exact
(-0.970052063f + 1.0f) * 768 / 2 = 11.5000076f // error
The x-axis is exact because 1024 is a power of two, but 768 isn't, so the y-axis is slightly off. The error is tiny, but it's not exactly 11.5f, so I guess some sort of blending would happen instead of a 1:1 representation of the original picture?
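The round trip above can be reproduced with a quick calculation. (Python here only for the arithmetic; single-precision GPU floats are emulated by rounding through `struct`.)

```python
import struct

def f32(x):
    # round a Python double to IEEE-754 single precision, like a GPU float
    return struct.unpack('f', struct.pack('f', x))[0]

def to_ndc(p, size):
    # clip/NDC transform: [0, size] -> [-1, 1]
    return f32(p * 2.0 / size - 1.0)

def to_screen(ndc, size):
    # viewport transform back: [-1, 1] -> [0, size]
    return f32((ndc + 1.0) * size / 2.0)

print(to_screen(to_ndc(11.5, 1024), 1024))  # 11.5 exactly: 1024 is a power of two
print(to_screen(to_ndc(11.5, 768), 768))    # slightly off 11.5, as described above
```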
To avoid the rounding error mentioned above, I did something like this:
First I set the Viewport size to a size larger than my window, and make both width and height a power of two:
GL.Viewport(0, 240 - 256, 512, 256); // Window Size is 320x240
Then I setup the coordinates of vertices like this:
float[] vertices = {
    //     x       y      z
      0.5f,   0.5f,   0.0f, // top-left
    319.5f,   0.5f,   0.0f, // top-right
    319.5f, 239.5f,   0.0f, // bottom-right
      0.5f, 239.5f,   0.0f, // bottom-left
};
And I convert them manually in the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
void main()
{
    gl_Position = vec4(aPos.x * 2 / 512 - 1.0, 0.0 - (aPos.y * 2 / 256 - 1.0), 0.0, 1.0);
}
Finally I draw a quad, the result is:
This seems to produce a correct result (the quad has a 320x240 size), but I wonder whether all of this is necessary.
What's the drawback of my approach?
Is there a better way to achieve what I did?
It seems rendering in wireframe mode hides the problem. I tried to apply a texture to my quad (actually I switched to 2 triangles), and I got a different result on each GPU, and neither of them seems correct to me:
Left: Intel HD4000 | Right: Nvidia GT635M (optimus)
I set GL.ClearColor to white and disabled the texture.
While both results fill the window client area (320x240), Intel gives me a square sized 319x239 placed at the top left, while Nvidia gives me a square sized 319x239 placed at the bottom left.
This is what it looks like with texture turned on.
The texture:
(I have it flipped vertically so I can load it easier in code)
The vertices:
float[] vertices_with_texture = {
    //     x       y     texture x     texture y
      0.5f,   0.5f,   0.5f / 512, 511.5f / 512, // top-left
    319.5f,   0.5f, 319.5f / 512, 511.5f / 512, // top-right
    319.5f, 239.5f, 319.5f / 512, 271.5f / 512, // bottom-right (511.5f - height 240 = 271.5f)
      0.5f, 239.5f,   0.5f / 512, 271.5f / 512, // bottom-left
};
Now I'm completely stuck.
I thought I was placing the quad's edges at exact pixel centers (.5) and sampling the texture also at exact pixel centers (.5), yet the two cards give me two different results and neither of them is correct (zoom in and you can see the center of the image is slightly blurred, not a clean checkerboard pattern).
What am I missing?
I think I've figured out what to do now; I've posted the solution as an answer and leave this question here for reference.

Okay... I think I finally got everything working as I expect. The problem was that I forgot pixels have size - it's so obvious now that I can't understand why I missed it.
In this answer, I'll refer the Clip Space coordinates as Clip(x,y), where x/y ranges from 0 to screen width/height respectively. For example, on a 320x240 screen the Clip Space is from Clip(0,0) to Clip(320,240)
Mistake 1:
When trying to draw a 10-pixel square, I drew it from Clip(0.5,0.5) to Clip(9.5,9.5).
It's true that those are the coordinates of the pixel centers of the square's begin (top-left) and end (bottom-right) pixels, but the real space the square occupies does not run from the center of its begin pixel to the center of its end pixel.
Instead, the real space the square occupies runs from the top-left corner of the begin pixel to the bottom-right corner of the end pixel. So the correct coordinates for the square are Clip(0,0) - Clip(10,10).
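The distinction can be checked with plain arithmetic (Python here just to illustrate; the function name is mine): a run of pixels 0..9 spans its pixel edges, not its pixel centers.

```python
def pixel_span(first_px, last_px):
    """Return (center span, edge span) for a run of pixels.

    Pixel centers sit at i + 0.5, but pixel i itself covers [i, i + 1],
    so geometry meant to cover the run must go edge to edge.
    """
    centers = (first_px + 0.5, last_px + 0.5)
    edges = (first_px, last_px + 1)
    return centers, edges

centers, edges = pixel_span(0, 9)
print(centers)  # (0.5, 9.5) -> only 9 units wide: the mistaken square
print(edges)    # (0, 10)    -> 10 units wide: the correct square
```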
Mistake 2:
Since I got the size of the square wrong, I was mapping the texture wrong as well. Now that I have the square's size fixed, I could just fix the texture coordinates accordingly.
However, I found a better solution: Rectangle Texture , from which I quote:
When using rectangle samplers, all texture lookup functions automatically use non-normalized texture coordinates. This means that the values of the texture coordinates span (0..W, 0..H) across the texture, where the (W,H) refers to its dimensions in texels, rather than (0..1, 0..1).
This is VERY convenient for 2D rendering: for one, I don't have to do any coordinate transforms, and as a bonus I don't need to flip the texture vertically anymore.
I tried it and it works just as I expected, but I came across a new problem: bleeding on edges when the texture is not placed on an exact pixel grid.
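The difference between the two sampler conventions is just a scale by the texture size; assuming a W x H texture, they relate like this (Python only to illustrate the mapping, function names are mine):

```python
def normalized_to_texel(u, v, w, h):
    # sampler2D convention: (u, v) in [0, 1] x [0, 1] covers the whole texture
    return u * w, v * h

def texel_to_normalized(s, t, w, h):
    # sampler2DRect convention: (s, t) in [0, W] x [0, H], i.e. raw texels
    return s / w, t / h

# center of the texture in a 512x512 atlas, in both conventions
print(normalized_to_texel(0.5, 0.5, 512, 512))  # (256.0, 256.0)
print(texel_to_normalized(256.0, 256.0, 512, 512))  # (0.5, 0.5)
```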
Solving the bleeding:
If I use a separate texture for each square, I can avoid this problem by having the sampler clamp everything outside the texture with TextureWrapMode.ClampToEdge.
However, I'm using a texture atlas, a.k.a. a "sprite sheet". I've searched the internet and found solutions like:
Manually add padding to each sprite, basically leave safe spaces for bleeding error.
This is straightforward, but I really don't like it, as I'd lose the ability to tightly pack my textures; it also makes calculating the texture coordinates more complex, and it's just a waste of space anyway.
Use GL_NEAREST for GL_TEXTURE_MIN_FILTER and shift the coordinates by 0.5/0.375
This is very easy to code and it works fine for pixel art - I don't want a linear filter to blur it anyway. But I'd also like to keep the ability to display a picture and move it around smoothly rather than jumping from pixel to pixel, so I need to be able to use GL_LINEAR.
One solution: Manually clamp the texture coordinates.
It's basically the same idea as TextureWrapMode.ClampToEdge, but on a per-sprite basis rather than only at the edges of the entire sprite sheet. I coded the fragment shader like this (just a proof of concept; I certainly need to improve it):
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    mTexCoord.x = TexCoord.x <= 0.5 ? 0.5 : TexCoord.x >= 319.5 ? 319.5 : TexCoord.x;
    mTexCoord.y = TexCoord.y <= 0.5 ? 0.5 : TexCoord.y >= 239.5 ? 239.5 : TexCoord.y;
    FragColor = texture(texture1, mTexCoord);
}
(Yes, the "sprite" I use in this case is 320x240, which takes up my entire screen.)
It's so easy to code thanks to the fact that Rectangle Texture uses non-normalized coordinates. It works well enough, so I called it a day and wrapped it up here.
Another solution (untested yet): Use Array Texture instead of texture atlas
The idea is simple: just set TextureWrapMode.ClampToEdge and let the sampler do its job. I haven't looked further into it, but it seems to work in concept. Still, I really like the way coordinates work with Rectangle Texture and would like to keep it if possible.
Rounding Error
When trying to animate my square on the screen, I came across a very strange result: (notice the lower-left corner of the square when the value on the left reads X.5)
This only happens on my iGPU (Intel HD4000), not on the dGPU (Nvidia GT635M via Optimus). It looks like this because I snap all coordinates to the pixel center (.5) in the fragment shader:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    // Clamp to sprite
    mTexCoord.x = clamp(TexCoord.x, 0.5, 319.5);
    mTexCoord.y = clamp(TexCoord.y, 0.5, 239.5);
    // Snap to pixel center
    mTexCoord.xy = trunc(mTexCoord.xy) + 0.5;
    FragColor = texture(texture1, mTexCoord);
}
My best guess is that the iGPU and dGPU round differently when converting coordinates to NDC (and back to screen coordinates).
Using a quad and a texture both sized to a power of two would avoid this problem. There are also workarounds like adding a small amount (0.0001 is enough on my laptop) to mTexCoord.xy before trunc.
Update: the solution
Okay, after a good sleep I came up with a relatively simple solution.
When dealing with pictures, nothing needs to change (let the linear filter do its job).
Since there's always going to be rounding error, I basically just give up at this point and live with it. It won't be noticeable to human eyes anyway.
When trying to fit the texture's pixels onto the screen's pixel grid, in addition to snapping the texture coordinates in the fragment shader (as seen above), I also have to pre-shift the texture coordinates in the vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 projection;
void main()
{
    TexCoord = aTexCoord + mod(aPos, 1);
    gl_Position = projection * vec4(aPos.xy, 0.0, 1.0);
}
The idea is simple: when the square is placed at Clip(10.5, 10.5), shift the texture coordinate to Pixel(0.5, 0.5) as well. Of course this means the end coordinate gets shifted out of the sprite region, but this is fixed in the fragment shader using clamp, so there's no need to worry about it. Any coordinate in between (like Pixel(0.1, 0.1)) would be snapped to Pixel(0.5, 0.5) by the fragment shader as well, which creates a pixel-to-pixel result.
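The pre-shift plus snap can be mimicked with plain arithmetic (Python here just to illustrate the two shader steps; function names are mine):

```python
def preshift(a_pos, a_tex):
    # vertex-shader step: carry the quad's sub-pixel offset over to the
    # texture coordinate, mirroring TexCoord = aTexCoord + mod(aPos, 1)
    return a_tex + (a_pos % 1.0)

def snap_to_center(t):
    # fragment-shader step: trunc(t) + 0.5, valid for non-negative coords
    return float(int(t)) + 0.5

# quad placed at Clip(10.5, ...): texture coordinate 0.0 shifts to 0.5,
# so the snap lands on the same texel regardless of the quad's position
print(snap_to_center(preshift(10.5, 0.0)))   # 0.5
print(snap_to_center(preshift(10.25, 3.0)))  # 3.5
```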
This gives me a consistent result across my iGPU (Intel) and dGPU (Nvidia):
Positioned at Clip(10.5,10.5); notice the artifact in the lower-left corner is gone.
To sum it up:
- Set up the Clip Space coordinates as a 1:1 representation of the screen.
- Remember that pixels have size when calculating the size of a sprite.
- Use the correct coordinates for textures and fix the bleeding on sprite edges.
- When trying to snap texture pixels to screen pixels, take extra care with vertex coordinates in addition to texture coordinates.

Can I use the final SCREEN SPACE coordinates directly?
No, you can't. You have to transform the coordinates, either on the CPU or in the shader (GPU).
If you want to use window coordinates, then you have to set an orthographic projection matrix, which maps your window coordinates x: [0, 320], y: [240, 0] to normalized device coordinates x: [-1.0, 1.0], y: [-1.0, 1.0].
e.g. by the use of glm::ortho:
glm::mat4 projection = glm::ortho(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
e.g. OpenTK Matrix4.CreateOrthographicOffCenter (the six-argument off-center variant; plain CreateOrthographic only takes width, height, zNear, zFar):
OpenTK.Matrix4 projection =
    OpenTK.Matrix4.CreateOrthographicOffCenter(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
In the vertex shader, the vertex coordinate has to be multiplied by the projection matrix
in vec3 vertex;
uniform mat4 projection;
void main()
{
    gl_Position = projection * vec4(vertex.xyz, 1.0);
}
For the sake of completeness, legacy OpenGL's glOrtho (do not use the old, deprecated legacy OpenGL functionality):
glOrtho(0.0, 320.0, 240.0, 0.0, -1.0, 1.0);
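As a sanity check, the off-center orthographic transform can be written out by hand: with left=0, right=320, bottom=240, top=0 it should send window (0,0) to NDC (-1,1) and window (320,240) to NDC (1,-1). (Python here only to verify the arithmetic; the function mirrors the standard glOrtho formulas.)

```python
def ortho(left, right, bottom, top, near, far):
    # return a function applying the orthographic projection to a point
    def apply(x, y, z=0.0):
        nx = 2 * (x - left) / (right - left) - 1
        ny = 2 * (y - bottom) / (top - bottom) - 1
        nz = -2 * z / (far - near) - (far + near) / (far - near)
        return nx, ny, nz
    return apply

project = ortho(0.0, 320.0, 240.0, 0.0, -1.0, 1.0)
print(project(0, 0)[:2])      # (-1.0, 1.0)  -> top-left of the window
print(project(320, 240)[:2])  # (1.0, -1.0)  -> bottom-right
print(project(160, 120)[:2])  # (0.0, 0.0)   -> center
```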

As others have mentioned you need an orthographic projection matrix.
You can either implement it yourself by following guides such as these:
http://learnwebgl.brown37.net/08_projections/projections_ortho.html
Or maybe the framework you're using already has it.
If you set the right/left/top/bottom values to match your screen resolution, then a point at coordinates (x,y) (z being irrelevant; you can use a 2D vector with a 4x4 matrix) will land at the same (x,y) on the screen.
You can move the view around like a camera by multiplying this projection matrix by a translation matrix (first the translation, then the projection).
Then you pass this matrix to your shader and multiply the position of the vertex by it.

Related

How to design a game for multiple resolutions?

I'm trying to figure out a good solution for handling different resolutions and resizing in a 2D side-scrolling shooter game built with OpenTK (OpenGL).
The ortho is set up as follows:
private void Setup2DGraphics()
{
    const double halfWidth = 1920 / 2;
    const double halfHeight = 1080 / 2;
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(-halfWidth, halfWidth, -halfHeight, halfHeight, -1.0, 1.0);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
}
The ortho is fixed at 1920*1080 to keep it a constant size; changing it would change game behaviour, since object movement is specified in pixels/sec, etc.
public void ResizeWindow(int width, int height)
{
    var screenSize = GetOptimalResolution(width, height);
    var offsetHeight = screenSize.Y < height ? (height - screenSize.Y) / 2 : 0;
    var offsetWidth = screenSize.X < width ? (width - screenSize.X) / 2 : 0;
    // Set the viewport
    GL.Viewport((int)offsetWidth, (int)offsetHeight, (int)screenSize.X, (int)screenSize.Y);
}
GetOptimalResolution returns the viewport size that maintains a 16:9 aspect ratio. Example: if the screen size is 1024*768, it returns 1024*576 (shrinking the height). It then centers the ortho in the viewport, using half of the height difference as an offset. This results in black bars above and below the window.
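A hypothetical sketch of the behaviour described for GetOptimalResolution and the viewport offsets (my names and Python, not the poster's actual code; integer 16:9 math avoids float truncation surprises):

```python
def optimal_resolution(width, height, aw=16, ah=9):
    """Largest aw:ah box that fits inside a width x height window."""
    if width * ah > height * aw:      # window wider than 16:9 -> pillarbox
        return height * aw // ah, height
    return width, width * ah // aw    # window taller than 16:9 -> letterbox

def viewport(width, height):
    vw, vh = optimal_resolution(width, height)
    # center the box; the leftover area becomes black bars
    return (width - vw) // 2, (height - vh) // 2, vw, vh

print(viewport(1024, 768))   # (0, 96, 1024, 576): bars above and below
print(viewport(1920, 1080))  # (0, 0, 1920, 1080): exact fit, no bars
```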
This setup works. It prevents aspect-ratio issues on all resolutions. However, I'm questioning whether this is the right way to do it. The issues I can think of with this design:
- On resolutions lower than 1920*1080, the view is scaled down. This affects how sprites look; for example, text starts to look horrible when scaled down.
- If I turn down the resolution for performance, will it have any effect if the ortho stays the same size?
I've done a lot of searching on this topic, but so far all I can find are plain instructions on how to perform individual actions, rather than how to build solutions. I also found that using mipmaps might solve my scaling issue. Still, the question remains: am I on the right track? Should I look at the aspect-ratio problem from a different angle?
For example, text starts to look horrible when scaled down.
Don't use textures for text. Use a technique like signed distance fields for rendering clear text at any resolution.
For artwork, the ideal would be to have resolution specific artwork for a variety of common resolutions / DPIs. Failing that, mipmaps will drastically improve the quality of the rendered texture if it's scaled down from the source texture image.
It also might be easier to work the problem if you stop using pixel coordinates. The default OpenGL screen coordinates run from (-1,-1) at the lower left to (1,1) at the upper right. However, you can alter these using an ortho transform to account for the aspect ratio, like this:
const double aspect = (double)windowWidth / windowHeight; // cast avoids integer division
if (aspect > 1.0) {
    GL.Ortho(-1 * aspect, 1 * aspect, -1, 1, -1.0, 1.0);
} else {
    GL.Ortho(-1, 1, -1 / aspect, 1 / aspect, -1.0, 1.0);
}
This produces a projection that spans at least [-1, 1] in both dimensions. It lets you treat the screen as a 2x2 square with the origin in the middle; any additional area the viewer can see because of the aspect ratio is a bonus.

How do I make a set of blocks look seamless when I know the size of the texture and the position and dimensions of the blocks?

So I have a seamless texture. The black lines in the picture represent the repeating texture. Then I have a series of blocks who have known x,y coordinates and known height and width. In Unity Textures have a 'scale' and 'offset'.
Scale is the amount of the texture that will show on the block. So the block starting at 0,0 will have a scale of about (.2,1.3): its width is .2x the width of a single texture (for the sake of simple numbers) and its height is about 1.3x the height of the texture.
Offset then moves the starting point of the texture. The block at 0,0 will have an offset of 0,0 because it is perfectly aligned with a corner, but the block immediately to its right would have an offset of about .2,0 because the texture needs to start about .2 units in to align properly. This is as far as I understand offset; I am pretty sure this is correct, but feel free to correct me if I am wrong.
Now, when you apply a texture to a block, Unity automatically scales the texture to start at the top-left corner of the block and stretches it to fit one full iteration inside that space. I obviously don't want this.
My question comes in for the three blocks labeled with the (x,y) coordinates. I have tried for several hours over a few weeks to get it right, unsuccessfully.
So how do I take in the x,y position and width/height to create a correct scale and offset so that those blocks will look like they are exactly where they are supposed to be in the texture?
It is not a particularly difficult concept but after staring at it I have no more ideas.
For the sake of the question assume a single texture is 12x12. The x,y and width/height are known values but are arbitrary.
I know it's normally good practice to post attempted code, but I would rather see a good way of doing it than answers that try to fix my failed attempts. I will post code if people want to see that I did try on my own, or how I initially tried.
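For reference, the scale/offset arithmetic the question describes reduces to dividing the block's size and position by the texture size. A sketch (Python just for the arithmetic; the function name is mine, and in Unity these values would go into a material's texture scale/offset):

```python
def tiling_for_block(x, y, width, height, tex_size=12.0):
    """Scale/offset that keep a block aligned to a world-anchored texture."""
    scale = (width / tex_size, height / tex_size)   # fraction of texture shown
    offset = (x / tex_size, y / tex_size)           # shift to the block's corner
    return scale, offset

# a block at world (2.4, 0) sized 2.4 x 15.6 against a 12x12 texture:
# scale comes out around (0.2, 1.3), offset around (0.2, 0),
# matching the question's example numbers
print(tiling_for_block(2.4, 0.0, 2.4, 15.6))
```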
What is a UV Map
Textures are applied to models via what is known as a UV map. The idea is that each (x,y,z) vertex has two (u,v) coordinates assigned; the UV coordinates define which point on the texture should correspond to that vertex. If you want to know more, I suggest the awesome (and free) Udacity 3D graphics course; Unit 8 talks about UV mapping.
How to solve your problem using UV Maps
Let's ignore all the vertices that are not visible - your blocks are basically rectangles. You can assign a UV mapping where the world position of each vertex becomes its UV coordinate. This way, all the blocks share the same origin in texture space ((0,0) in world position corresponds to (0,0) on the texture). The code looks like this:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < uvs.Length; i++)
{
    // Find the world position of each point
    Vector3 vertexPos = transform.TransformPoint(vertices[i]);
    // And assign its x,y coordinates to u,v
    uvs[i] = new Vector2(vertexPos.x, vertexPos.y);
}
mesh.uv = uvs;
You have to do this each time your block position changes.

Difference between AnisotropicClamp and AnisotropicWrap in XNA 4.0?

I understand that both are methods of texture filtering but what is the difference between clamp and wrap? Which one is better?
EDIT: I tried both with an integrated graphics card and found that AnisotropicWrap gave a lower FPS than AnisotropicClamp. Is it true that AnisotropicWrap renders better textures than AnisotropicClamp?
The sampler states are responsible for telling the graphics device how to translate texture coordinates into texels. Say you have a quadrilateral polygon with the UV coordinates arranged like this:
(0, 0)       (1, 0)
   o----------o
   |          |
   o----------o
(0, 1)       (1, 1)
Texture coordinates have the range [0, 1]. When this is rendered, the top-left corner of the texture will appear at the top-left corner of the polygon, the bottom-right corner of the texture will appear at the bottom-right corner of the polygon, and so on.
Now say you arrange your UV coordinates like this:
(-1, -1)     (2, -1)
   o----------o
   |          |
   o----------o
(-1, 2)      (2, 2)
What happens? There's no correct way to map these coordinates to the texture, because they're outside of our [0, 1] range!
The answer is that you have to tell the device what the correct way is, by specifying either WRAP or CLAMP sampler states.
A CLAMP state clamps the texture coordinates to the [0, 1] range; pixels with coordinates outside of this range will display the closest valid texel.
A WRAP state, on the other hand, assumes that the texture coordinates are cyclical. A coordinate of 1.5 will be treated as 0.5, a coordinate of -0.25 will be treated as 0.75, and so on. This causes the texture to wrap, giving it a tiled appearance.
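The two addressing modes boil down to simple coordinate math (illustrative Python, not the XNA implementation):

```python
import math

def wrap(t):
    # cyclical: keep only the fractional part, so 1.5 -> 0.5 and -0.25 -> 0.75
    return t - math.floor(t)

def clamp(t):
    # saturate into [0, 1]; out-of-range coords repeat the edge texel
    return min(max(t, 0.0), 1.0)

print(wrap(1.5), wrap(-0.25))    # 0.5 0.75 -> texture tiles
print(clamp(1.5), clamp(-0.25))  # 1.0 0.0  -> edge pixels smear outward
```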
I'm not an XNA developer, but they look different to me.
SamplerState.AnisotropicClamp
Contains default state for anisotropic filtering and texture coordinate clamping.
SamplerState.AnisotropicWrap
Contains default state for anisotropic filtering and texture coordinate wrapping.
Check these links also;
http://iloveshaders.blogspot.com/2011/04/using-state-objects-in-xna.html
https://code.google.com/p/slimdx/

Fitting a rectangle into screen with XNA

I am drawing a rectangle with primitives in XNA. The width is:
width = GraphicsDevice.Viewport.Width
and the height is
height = GraphicsDevice.Viewport.Height
I am trying to fit this rectangle to the screen (across different screens and devices), but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far.
This is what I am using to get the camera distance:
// Height of pyramid
float alpha = 0;
float beta = 0;
float gamma = 0;
alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
position = new Vector3(0, 0, gamma);
Any idea where to put the camera on the Z-axis?
The trick to doing this is to draw the rectangle directly in raster space. This is the space that the GPU works in when actually rendering stuff.
Normally your model starts in its own model space, you apply a world transform to get it into world space. You then apply a view transform to move the points in the world around the camera (to get the effect of the camera moving around the world). Then you apply a projection matrix to get all those points into the space that the GPU uses for drawing.
It just so happens that this space always runs - no matter the screen size - from (-1,-1) in the bottom-left corner of the viewport to (1,1) in the top-right (and from 0 to 1 on the Z-axis for depth).
So the trick is to set all your matrices (world, view, project) to Matrix.Identity, and then draw a rectangle from (-1,-1,0) to (1,1,0). You probably also want to set DepthStencilState.None so that it doesn't affect the depth buffer.
An alternative method is to just use SpriteBatch and draw to a rectangle that is the same size as the Viewport.

How do Vectors and Matrices work into computer graphics, such as Matrix Rotations?

I have been trying to wrap my head around how my linear and vector algebra knowledge fits in with computer graphics, particularly in C#.
The knowledge I mean is:
Points
Vectors
Matrices
Matrix multiplication - rotations, skews, etc.
Here's my goal: create a simple box and apply a rotation, translation, and skew to it via matrix multiplication. Afterwards, start messing around with the camera. I wish to do all of this myself, using only the functions that actually take in the data and draw it; I want to create all the logic in between.
Here's what I've got so far:
My custom Vector3 class, which holds:
- an X, Y, and Z variable (floats)
- several static matrices (as 2x2 2D float arrays) that hold ZERO and TRANSLATION matrices (for 2x2 and 3x3)
- methods:
1. Rotate(float inAngle) - creates a rotation matrix and multiplies the xyz by it.
2. Translate(inX, inY, inZ) - adds the inputs to the member variables
3. etc...
When complete, I translate the vector back into a C# Vector3 class and pass it to a drawing class, such as DrawPrimitiveShapes, which would draw lines.
The box class is like this:
4 Vector3's, UpperLeftX, UpperRightX, LowerLeftX, LowerRightX
a Draw class which uses the 4 points to then render lines to each one
My confusion comes down to this:
How do I rotate this box? Am I on the right track by using four Vector3s for the box?
Do I just rotate all four Vector3s by the angle and be done with it? And how does a texture get rotated when there's all this texture data in the middle?
The way I learned was by using the higher-level built-in XNA methods and using 'Reflector' to see inside those methods and learn how they work.
To rotate the box, each of the four vertices needs to be transformed from where it is: rotated a number of degrees about a particular axis.
In 2D XNA that axis is always the Z-axis, and it always runs through the world's origin - the top-left corner of the screen in XNA.
So to rotate your four rectangle vertices in XNA, you would do something like this (note that a foreach iteration variable can't be reassigned in C#, so index the array directly):
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i] = Vector2.Transform(vertices[i], Matrix.CreateRotationZ(someRadians));
}
This gets the vertices to rotate (orbit) the top left corner of the screen.
In order to have the box rotate in place, you first move the box to the top-left corner of the screen, rotate it a bit, then move it back. All this happens in a single frame, so all the user sees is the rectangle rotating in place. There are many ways to do that in code, but here is my favorite:
// assumes you know the center of the rectangle's x & y as a Vector2 'center'
foreach(Vector2 vert in vertices)
{
vert = Vector2.Transform(vert - center, Matrix.CreateRotationZ(someRadians)) + center;
}
Now if you were to reflect the Matrix.CreateRotationZ method, or the Vector2.Transform method, you would see the lines of code MS used to make it work. By working through them, you can learn the math more efficiently, without so much trial and error.
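The translate-rotate-translate step can be checked with plain 2D math (Python just for the arithmetic; a 90-degree rotation about a square's center should carry one corner onto the next):

```python
import math

def rotate_about(point, center, radians):
    # translate to the origin, rotate, translate back
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(radians), math.sin(radians)
    return (x * c - y * s + center[0], x * s + y * c + center[1])

# corner (4, 4) of a square centered at (5, 5), rotated 90 degrees CCW,
# lands on the adjacent corner (6, 4) up to floating-point noise
p = rotate_about((4, 4), (5, 5), math.pi / 2)
print(round(p[0], 6), round(p[1], 6))  # 6.0 4.0
```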
