I've followed the guide I found on https://learnopengl.com/ and managed to render a triangle in the window, but this whole process just seems overcomplicated to me. I've been searching the internet and found many similar questions, but they all seem outdated and none of them gives a satisfying answer.
I've read the Hello Triangle guide from the site mentioned above, which has a picture explaining the graphics pipeline; with that I got the idea of why a seemingly simple task such as drawing a triangle on screen takes so many steps.
I have also read the Coordinate Systems guide from the same site, which explains the "strange (to me) coordinates" that OpenGL uses (NDC) and why it uses them.
(Here's the picture from the guide mentioned above which I think would be useful for describing my question.)
The question is: Can I use the final SCREEN SPACE coordinates directly?
All I want to do is some 2D rendering (no z-axis) and the screen size is known (fixed), so I don't see any reason why I should use a normalized coordinate system instead of one bound to my screen.
e.g. on a 320x240 screen, (0,0) represents the top-left pixel and (319,239) represents the bottom-right pixel. It doesn't need to be exactly as I describe it; the idea is that every integer coordinate corresponds to a pixel on the screen.
I know it's possible to set up such a coordinate system for my own use, but the coordinates would be transformed all around and end up back in screen space, which is what I had in the first place. All this just seems like wasted work to me; besides, isn't it going to introduce precision loss when the coordinates get transformed?
Quote from the guide Coordinate Systems (picture above):
After the coordinates are in view space we want to project them to clip coordinates. Clip coordinates are processed to the -1.0 and 1.0 range and determine which vertices will end up on the screen.
So consider a 1024x768 screen, where I define the Clip coordinates as (0,0) to (1024,768):
(0,0)--------(1,0)--------(2,0)
| | |
| First | |
| Pixel | |
| | |
(0,1)--------(1,1)--------(2,1) . . .
| | |
| | |
| | |
| | |
(0,2)--------(1,2)--------(2,2)
.
.
.
(1022,766)---(1023,766)---(1024,766)
| | |
| | |
| | |
| | |
(1022,767)---(1023,767)---(1024,767)
| | |
| | Last |
| | Pixel |
| | |
(1022,768)---(1023,768)---(1024,768)
Let's say I want to put a picture at Pixel(11,11), so the clip coordinates for that would be Clip(11.5,11.5). This coordinate is then mapped to the -1.0 to 1.0 range:
11.5f * 2 / 1024 - 1.0f = -0.977539063f // x
11.5f * 2 / 768 - 1.0f = -0.970052063f // y
And I have NDC(-0.977539063f,-0.970052063f)
And lastly we transform the clip coordinates to screen coordinates in a process we call viewport transform that transforms the coordinates from -1.0 and 1.0 to the coordinate range defined by glViewport. The resulting coordinates are then sent to the rasterizer to turn them into fragments.
So take the NDC coordinates and transform them back to screen coordinates:
(-0.977539063f + 1.0f) * 1024 / 2 = 11.5f // exact
(-0.970052063f + 1.0f) * 768 / 2 = 11.5000076f // error
The x value is exact since 1024 is a power of two, but 768 isn't, so the y value is off. The error is tiny, but it's not exactly 11.5f, so I guess there would be some sort of blending happening instead of a 1:1 representation of the original picture?
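For the record, the arithmetic above can be reproduced with a few lines of plain C# float math (no OpenGL involved; this is just a minimal repro sketch):
using System;

class NdcRoundTrip
{
    static void Main()
    {
        // Pixel(11,11) center -> NDC -> back to screen, all in single precision.
        float yNdc  = 11.5f * 2 / 768 - 1.0f;    // pixel center to NDC
        float yBack = (yNdc + 1.0f) * 768 / 2;   // viewport transform back
        Console.WriteLine(yBack.ToString("R"));  // prints something like 11.5000076, not 11.5

        float xNdc  = 11.5f * 2 / 1024 - 1.0f;
        float xBack = (xNdc + 1.0f) * 1024 / 2;
        Console.WriteLine(xBack.ToString("R"));  // exactly 11.5, since 1024 is a power of two
    }
}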
To avoid the rounding error mentioned above, I did something like this:
First I set the viewport to a size larger than my window, making both width and height a power of two:
GL.Viewport(0, 240 - 256, 512, 256); // Window Size is 320x240
Then I set up the vertex coordinates like this:
float[] vertices = {
    //     x       y      z
      0.5f,    0.5f,  0.0f,  // top-left
    319.5f,    0.5f,  0.0f,  // top-right
    319.5f,  239.5f,  0.0f,  // bottom-right
      0.5f,  239.5f,  0.0f,  // bottom-left
};
And I convert them manually in the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
void main()
{
    gl_Position = vec4(aPos.x * 2 / 512 - 1.0, 0.0 - (aPos.y * 2 / 256 - 1.0), 0.0, 1.0);
}
Finally I draw a quad, and the result is:
This seems to produce a correct result (the quad has a 320x240 size), but I wonder whether all of this is really necessary.
What's the drawback of my approach?
Is there a better way to achieve what I did?
It seems rendering in wireframe mode hides the problem. I tried applying a texture to my quad (actually I switched to two triangles), and I got different results on different GPUs, none of which seems correct to me:
Left: Intel HD4000 | Right: Nvidia GT635M (optimus)
I set GL.ClearColor to white and disabled the texture.
While both results fill the window client area (320x240), Intel gives me a 319x239 square placed at the top left, while Nvidia gives me a 319x239 square placed at the bottom left.
This is what it looks like with texture turned on.
The texture:
(I have it flipped vertically so I can load it easier in code)
The vertices:
float[] vertices_with_texture = {
    //     x       y      texture x     texture y
      0.5f,    0.5f,    0.5f / 512,  511.5f / 512,  // top-left
    319.5f,    0.5f,  319.5f / 512,  511.5f / 512,  // top-right
    319.5f,  239.5f,  319.5f / 512,  271.5f / 512,  // bottom-right (511.5f - height 240 = 271.5f)
      0.5f,  239.5f,    0.5f / 512,  271.5f / 512,  // bottom-left
};
Now I'm completely stuck.
I thought I was placing the quad's edges on exact pixel centers (.5) and sampling the texture also at exact pixel centers (.5), yet the two cards give me two different results and neither of them is correct (zoom in and you can see the center of the image is slightly blurred, not a clean checkerboard pattern).
What am I missing?
I think I've figured out what to do now; I've posted the solution as an answer and left this question here for reference.
Okay... I think I finally got everything working as I expect. The problem was that I forgot that pixels have size; it's so obvious now that I can't understand why I missed it.
In this answer, I'll refer to the Clip Space coordinates as Clip(x,y), where x/y range from 0 to the screen width/height respectively. For example, on a 320x240 screen the Clip Space runs from Clip(0,0) to Clip(320,240).
Mistake 1:
When trying to draw a 10-pixel square, I drew it from Clip(0.5,0.5) to Clip(9.5,9.5).
It's true that those are the coordinates of the pixel centers of the square's begin (top-left) and end (bottom-right) pixels, but the real space the square occupies does not run from the center of its begin pixel to the center of its end pixel.
Instead, the real space the square occupies runs from the top-left corner of the begin pixel to the bottom-right corner of the end pixel. So the correct coordinates for the square are Clip(0,0) to Clip(10,10).
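Put as code, the quad for a sprite should be built from pixel corners; a little sketch (the helper name is mine):
// Sketch: the four clip-space corners (x, y pairs) for a w*h-pixel sprite
// whose top-left pixel is (px, py). The quad covers pixel corners, so a
// 10x10 square at (0,0) spans Clip(0,0)..Clip(10,10), not Clip(0.5,0.5)..Clip(9.5,9.5).
static float[] SpriteQuad(int px, int py, int w, int h)
{
    float l = px, t = py, r = px + w, b = py + h;
    return new float[]
    {
        l, t,  // top-left
        r, t,  // top-right
        r, b,  // bottom-right
        l, b,  // bottom-left
    };
}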
Mistake 2:
Since I got the size of the square wrong, I was mapping the texture wrong as well. Now that the square's size is fixed, I could simply fix the texture coordinates accordingly.
However, I found a better solution: Rectangle Texture, from which I quote:
When using rectangle samplers, all texture lookup functions automatically use non-normalized texture coordinates. This means that the values of the texture coordinates span (0..W, 0..H) across the texture, where the (W,H) refers to its dimensions in texels, rather than (0..1, 0..1).
This is VERY convenient for 2D rendering: for one, I don't have to do any coordinate transforms, and as a bonus I don't need to flip the texture vertically anymore.
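For reference, creating a rectangle texture with OpenTK might look roughly like this (a sketch; pixels, width and height are assumed to come from your own image loader):
// Sketch: create a GL_TEXTURE_RECTANGLE with OpenTK.
int tex = GL.GenTexture();
GL.BindTexture(TextureTarget.TextureRectangle, tex);
GL.TexImage2D(TextureTarget.TextureRectangle, 0, PixelInternalFormat.Rgba,
              width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, pixels);
// Rectangle textures have no mipmaps, so the min filter must be Nearest or Linear.
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureMinFilter,
                (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureMagFilter,
                (int)TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureWrapS,
                (int)TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.TextureRectangle, TextureParameterName.TextureWrapT,
                (int)TextureWrapMode.ClampToEdge);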
I tried it and it works just as I expected, but I came across a new problem: bleeding at the edges when the texture is not placed on the exact pixel grid.
Solving the bleeding:
If I used a separate texture for every square, I could avoid this problem by having the sampler clamp everything outside the texture with TextureWrapMode.ClampToEdge.
However, I'm using a texture atlas, a.k.a. a "sprite sheet". I've searched the internet and found solutions like:
Manually add padding to each sprite, basically leaving safe space for the bleeding error.
This is straightforward, but I really don't like it: I'd lose the ability to tightly pack my texture, it makes calculating the texture coordinates more complex, and it's a waste of space anyway.
Use GL_NEAREST for GL_TEXTURE_MIN_FILTER and shift the coordinates by 0.5/0.375.
This is very easy to code and works fine for pixel art; I don't want a linear filter making it blurry anyway. But I'd also like to keep the ability to display a picture and move it around smoothly rather than jumping from pixel to pixel, so I need to be able to use GL_LINEAR.
One solution: Manually clamp the texture coordinates.
It's basically the same idea as TextureWrapMode.ClampToEdge, but on a per-sprite basis rather than only at the edges of the entire sprite sheet. I wrote the fragment shader like this (just a proof of concept; I certainly need to improve it):
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    mTexCoord.x = TexCoord.x <= 0.5 ? 0.5 : TexCoord.x >= 319.5 ? 319.5 : TexCoord.x;
    mTexCoord.y = TexCoord.y <= 0.5 ? 0.5 : TexCoord.y >= 239.5 ? 239.5 : TexCoord.y;
    FragColor = texture(texture1, mTexCoord);
}
(Yes, the "sprite" I use in this case is 320x240 which takes my entire screen.)
It's so easy to code thanks to the fact that Rectangle Texture uses non-normalized coordinates. It works well enough, so I called it a day and wrapped it up here.
Another solution (untested yet): use an Array Texture instead of a texture atlas.
The idea is simple: just set TextureWrapMode.ClampToEdge and let the sampler do its job. I haven't looked further into it, but it seems to work in concept. Still, I really like the way coordinates work with Rectangle Texture and would like to keep it if possible.
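An untested OpenTK sketch of that idea (spriteWidth, spriteHeight, layerCount and spritePixels are my own placeholder names; note that every layer of an array texture must share the same dimensions):
// Sketch: a 2D array texture as an atlas replacement; each layer holds one
// equally-sized sprite, so ClampToEdge then works per sprite.
int tex = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2DArray, tex);
GL.TexImage3D(TextureTarget.Texture2DArray, 0, PixelInternalFormat.Rgba,
              spriteWidth, spriteHeight, layerCount, 0,
              PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
for (int layer = 0; layer < layerCount; layer++)
{
    // Upload one sprite per layer.
    GL.TexSubImage3D(TextureTarget.Texture2DArray, 0, 0, 0, layer,
                     spriteWidth, spriteHeight, 1,
                     PixelFormat.Rgba, PixelType.UnsignedByte, spritePixels[layer]);
}
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapS,
                (int)TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapT,
                (int)TextureWrapMode.ClampToEdge);
// In GLSL, sample with a sampler2DArray: texture(tex, vec3(u, v, layer)).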
Rounding Error
When trying to animate my square on the screen, I got a very strange result (notice the lower-left corner of the square when the value on the left reads X.5):
This only happens on my iGPU (Intel HD4000), not on the dGPU (Nvidia GT635M via Optimus). It looks like this because I snap all coordinates to pixel centers (.5) in the fragment shader:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DRect texture1;
void main()
{
    vec2 mTexCoord;
    // Clamp to sprite
    mTexCoord.x = clamp(TexCoord.x, 0.5, 319.5);
    mTexCoord.y = clamp(TexCoord.y, 0.5, 239.5);
    // Snap to pixel
    mTexCoord.xy = trunc(mTexCoord.xy) + 0.5;
    FragColor = texture(texture1, mTexCoord);
}
My best guess is that the iGPU and dGPU round differently when converting coordinates to NDC (and back to screen coordinates).
Using a quad and a texture both sized to a power of two would avoid this problem. There are also workarounds, like adding a small amount (0.0001 is enough on my laptop) to mTexCoord.xy before the trunc.
Update: the solution
Okay, after a good night's sleep I came up with a relatively simple solution.
When dealing with pictures, nothing needs to change (let the linear filter do its job).
Since there's always going to be rounding error, I basically just accept it and live with it; it won't be noticeable to human eyes anyway.
When trying to fit the pixels of a texture onto the pixel grid of the screen, in addition to snapping the texture coordinates in the fragment shader (as seen above), I also have to pre-shift the texture coordinates in the vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 projection;
void main()
{
    TexCoord = aTexCoord + mod(aPos, 1);
    gl_Position = projection * vec4(aPos.xy, 0.0, 1.0);
}
The idea is simple: when the square is placed at Clip(10.5, 10.5), shift the texture coordinates to Pixel(0.5, 0.5) as well. Of course, this means the end coordinate gets shifted out of the sprite region, to Pixel(320.5, 240.5), but that's fixed in the fragment shader using clamp, so there's no need to worry about it. Any coordinate in between (like Pixel(0.1, 0.1)) would be snapped to Pixel(0.5, 0.5) by the fragment shader as well, which creates a pixel-to-pixel result.
This gives me a consistent result across my iGPU (Intel) and dGPU (Nvidia):
Positioned at Clip(10.5,10.5); notice the artifacts in the lower-left corner are gone.
To sum it up:
Set up the Clip Space coordinates as a 1:1 representation of the screen.
Remember that pixels have size when calculating the size of a sprite.
Use the correct coordinates for textures and fix the bleeding at sprite edges.
When trying to snap texture pixels to screen pixels, take extra care with vertex coordinates in addition to texture coordinates.
Can I use the final SCREEN SPACE coordinates directly
No you can't. You have to transform the coordinates. Either on the CPU or in the shader (GPU).
If you want to use window coordinates, then you have to set an orthographic projection matrix, which transforms the coordinates from x: [-1.0, 1.0], y: [-1.0, 1.0] (normalized device space) to your window coordinates x: [0, 320], y: [240, 0].
e.g. by the use of glm::ortho:
glm::mat4 projection = glm::ortho(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
e.g. OpenTK Matrix4.CreateOrthographicOffCenter (note: Matrix4.CreateOrthographic only takes a width and height; the six-argument off-center overload is the one that matches glm::ortho):
OpenTK.Matrix4 projection =
    OpenTK.Matrix4.CreateOrthographicOffCenter(0.0f, 320.0f, 240.0f, 0.0f, -1.0f, 1.0f);
In the vertex shader, the vertex coordinate has to be multiplied by the projection matrix:
in vec3 vertex;
uniform mat4 projection;
void main()
{
    gl_Position = projection * vec4(vertex.xyz, 1.0);
}
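In OpenTK, building and uploading that matrix could look like this (a sketch; shaderProgram is assumed to be your linked program handle):
// Sketch: orthographic projection for a 320x240 window, uploaded to the shader.
Matrix4 projection = Matrix4.CreateOrthographicOffCenter(
    0.0f, 320.0f,   // left, right
    240.0f, 0.0f,   // bottom, top (flipped so y grows downward)
    -1.0f, 1.0f);   // near, far

int loc = GL.GetUniformLocation(shaderProgram, "projection");
GL.UniformMatrix4(loc, false, ref projection);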
For the sake of completeness, legacy OpenGL's glOrtho:
(do not use the old, deprecated legacy OpenGL functionality)
glOrtho(0.0, 320.0, 240.0, 0.0, -1.0, 1.0);
As others have mentioned, you need an orthographic projection matrix.
You can either implement it yourself by following guides such as these:
http://learnwebgl.brown37.net/08_projections/projections_ortho.html
Or maybe the framework you're using already has it.
If you set the right/left/top/bottom values to match your screen resolution, then a point at coordinates (x, y) (z being irrelevant; you can use a 2D vector with a 4x4 matrix) will end up at the same (x, y) on the screen.
You can move the view around like a camera if you multiply this projection matrix by a translation matrix (first the translation then the projection).
Then you pass this matrix to your shader and multiply the position of the vertex by it.
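For example, with OpenTK this could be sketched as follows (camX, camY and the projection/uniform handles are assumed from the earlier examples; OpenTK multiplies row vectors left to right, so the translation goes on the left so it is applied first):
// Sketch: scroll the view by (camX, camY) before projecting.
Matrix4 view = Matrix4.CreateTranslation(-camX, -camY, 0.0f);
Matrix4 viewProjection = view * projection;  // translation first, then projection
GL.UniformMatrix4(loc, false, ref viewProjection);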
In my project, I have a sprite of a box being drawn. The camera zooms out when a key is clicked. When I zoom out, I want my box to scale its dimensions so it stays consistent even though the camera has zoomed out and "shrunk" it.
I have tried multiplying the object's dimensions by 10%, which seems to be the viewpoint's adjustment when zooming out, but that doesn't seem to work. Now this may sound dumb, but would scaling the sprite in the draw function also change the sprite's dimensions?
Let's say the box is 64x64 pixels. I zoom out 10% and scale the sprite. Does the sprite still have 64x64 boundaries, or does the up-scaling also change its dimensions?
Scaling using SpriteBatch.Draw()'s scale argument will just draw the sprite smaller/bigger, i.e. a 64x64 one will appear as, say, 7x7 pixels (the outer pixels being alpha blended, if enabled). However, there are no size properties on the sprite; if you have your own rectangle or position variables for the sprite, SpriteBatch.Draw() of course will not change those.
An alternative is to draw the sprite in 3D space; then everything is scaled when you move your camera, so the sprite will appear smaller though it will still be a 64x64 sprite.
How to draw a sprite in 3D space? Here is a good tutorial: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php (you will need to take time to learn about using 3D viewports, cameras, etc.; see here: http://msdn.microsoft.com/en-us/library/bb197901.aspx).
To change the sprite's dimensions you need to change the Rectangle parameter for SpriteBatch.Draw. To calculate the zoom on the rectangle:
// where zoom defaults to 1.0f
Rectangle scaledRect = new Rectangle(
    originalRectangle.X,
    originalRectangle.Y,
    (int)(originalRectangle.Width * zoom),
    (int)(originalRectangle.Height * zoom));
When drawing use:
spriteBatch.Draw(Texture, scaledRect, Color.White);
Now I'm sorry to assume, but without knowing why you're doing what you're doing, I think you're doing something wrong.
You should use a camera transformation to zoom in/out. It is done like this:
var transform = Matrix.CreateTranslation(new Vector3(-Position.X, -Position.Y, 0)) * // camera position
                Matrix.CreateRotationZ(_rotation) *                                  // camera rotation, default 0
                Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *                     // zoom, default 1
                Matrix.CreateTranslation(new Vector3(
                    Device.Viewport.Width * 0.5f,
                    Device.Viewport.Height * 0.5f, 0)); // Device from DeviceManager; centers the camera on the given position

SpriteBatch.Begin(               // SpriteBatch variable
    SpriteSortMode.BackToFront,  // sprite sort mode - not related
    BlendState.NonPremultiplied, // BlendState - not related
    null,
    null,
    null,
    null,
    transform);                  // set the camera transformation
This changes how sprites are displayed inside the sprite batch; however, you now also must account for different mouse coordinates (if you're using mouse input). To do that, transform the mouse position by the inverse of the camera matrix:
// pos: mouse position, transform: your transformation matrix
public Vector2 ViewToWorld(Vector2 pos, Matrix transform)
{
    return Vector2.Transform(pos, Matrix.Invert(transform));
}
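Usage would look something like this (again untested):
// Convert the current mouse position to world space with the same
// `transform` built above.
MouseState mouse = Mouse.GetState();
Vector2 worldPos = ViewToWorld(new Vector2(mouse.X, mouse.Y), transform);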
I wrote this code without being able to test it directly, so if something doesn't work, feel free to ask.
This doesn't answer your question directly; if you could explain why you want to resize the sprite when zooming instead of zooming the camera, maybe I could answer your question better. You should also follow markmnl's link to understand world transformations and why you seem to need them in this situation.
So I have a seamless texture. The black lines in the picture represent the repeating texture. Then I have a series of blocks that have known (x,y) coordinates and known height and width. In Unity, textures have a 'scale' and an 'offset'.
Scale is the amount of the texture that will show on the block. So the block starting at (0,0) will have a scale of about (.2, 1.3): its width is .2x the width of a single texture (for the sake of simple numbers) and its height is about 1.3x the height of the texture.
Offset then moves the starting point of the texture. The block at (0,0) will have an offset of (0,0) because it is perfectly aligned with a corner, but the block immediately to its right would have an offset of about (.2, 0) because the texture needs to start about .2 units in to align properly. This is as far as I understand offset. I am pretty sure this is correct, but feel free to correct me if I am wrong.
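To illustrate my understanding in code (I believe the two properties are Material.mainTextureScale and Material.mainTextureOffset; x, y, width, height are a block's values, 12 is the texture size assumed below, and I'm ignoring any V-axis flip):
// Block of size (width, height) at position (x, y), texture spanning 12x12 units.
Renderer rend = GetComponent<Renderer>();
rend.material.mainTextureScale  = new Vector2(width / 12f, height / 12f); // fraction of the texture shown
rend.material.mainTextureOffset = new Vector2(x / 12f, y / 12f);          // where the texture starts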
Now, when you apply a texture to a block, Unity automatically scales the texture to start at the top-left corner of the block and stretches it to fit one full iteration inside that space. I obviously don't want this.
My question comes in for the three blocks labeled with the (x,y) coordinates. I have tried for several hours over a few weeks to get it right, unsuccessfully.
So how do I take in the x,y position and width/height to create a correct scale and offset so that those blocks will look like they are exactly where they are supposed to be in the texture?
It is not a particularly difficult concept but after staring at it I have no more ideas.
For the sake of the question assume a single texture is 12x12. The x,y and width/height are known values but are arbitrary.
I know it's normally good practice to post attempted code, but I would rather see a good way of doing it than answers that try to fix my failed attempts. I will post code if people want to see that I did try on my own, or how I initially tried.
What is a UV Map
Textures are applied to models by what is known as a UV map. The idea is that each (x,y,z) vertex has a pair of (u,v) coordinates assigned. UV coordinates define which point on the texture should correspond to that vertex. If you want to know more, I suggest the awesome (and free) Udacity 3D graphics course; Unit 8 talks about UV mapping.
How to solve your problem using UV Maps
Let's ignore all the vertices that are not visible; your blocks are basically rectangles. You can assign a UV mapping where the world position of each vertex becomes its UV coordinate. This way, all the blocks share the same origin point in texture space ((0,0) in world position corresponds to (0,0) on the texture). The code looks like this:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < uvs.Length; i++)
{
// Find world position of each point
Vector3 vertexPos = transform.TransformPoint(vertices[i]);
// And assign its x,y coordinates to u,v.
uvs[i] = new Vector2(vertexPos.x, vertexPos.y);
}
mesh.uv = uvs;
You have to do this each time your block position changes.
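One caveat: the mapping above repeats the texture once per world unit. If a single texture should span 12x12 units as in your question, divide the world position by the tile size when building the UVs, something like this inside the loop:
// Variant: one texture repeat per 12 world units instead of per 1 unit
// (declare tileSize before the loop, then use it when assigning each uv).
float tileSize = 12f;
uvs[i] = new Vector2(vertexPos.x / tileSize, vertexPos.y / tileSize);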
As the title states, I'm trying to resize a Texture2D before even considering SpriteBatch.Draw(). The reason is that I'm trying to fill an arbitrary polygon, laid out with vertices defined by Vector2s, with an arbitrary Texture2D.
What I'm thinking of is creating the rectangle that fits the polygon, scaling the Texture2D to that rectangle, and then making the pixels that are outside of the polygon transparent via Texture2D's GetData<>() and SetData<>().
I've gotten to the point of finding the rectangle that fits the polygon, but is there a way to resize the Texture2D, or am I going about it the completely wrong way? Thanks!
You're going about it the wrong way. Setting texture data is expensive. (And there are probably some issues with filtering, too.)
What you want to do is set the texture coordinates (the "UV coordinates") of the vertices you are drawing. This will cause a specific location of your texture to appear at that vertex of your polygon. The texture that would then fall outside your polygon is simply never drawn (it is "clipped" by the polygon edges).
Texture coordinates are specified in the range 0.0 to 1.0 (on the U and V axes, horizontally and vertically) from the top left to the bottom right of your texture.
If you are drawing using vertex buffers, XNA includes some built-in vertex structures like VertexPositionTexture and VertexPositionColorTexture that allow you to specify a TextureCoordinate value.
If you are using your own vertex structure, use VertexElementUsage.TextureCoordinate when specifying a VertexElement. If you are creating your own shader, the value will be exposed in TEXCOORD0 (for usage index 0).
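A hedged sketch of such a vertex structure (the struct name and field layout are mine):
// Custom XNA vertex type exposing a position and TEXCOORD0.
public struct PositionTexcoordVertex : IVertexType
{
    public Vector3 Position;
    public Vector2 TextureCoordinate;

    public static readonly VertexDeclaration Declaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0));

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return Declaration; }
    }
}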
If you are just drawing rectangles with SpriteBatch, then specify a sourceRectangle when you call Draw.
Sounds like you should be using the overloads of the Draw method (I realise that for some reason you don't want to do this, but it's like this for a good reason):
public void Draw (
Texture2D texture,
Vector2 position,
Nullable<Rectangle> sourceRectangle,
Color color,
float rotation,
Vector2 origin,
Vector2 scale,
SpriteEffects effects,
float layerDepth
)
The sourceRectangle, scale parameter, and origin should be enough. Don't modify the texture in memory; it's relatively expensive to do (especially every frame!).
http://msdn.microsoft.com/en-us/library/bb196420(v=xnagamestudio.31).aspx
Can you explain why you don't want to scale in Draw()?
I have an animated model that's spinning.
I want to hide/not draw any part of the model where y < 0.
What are the ways I can do it?
Ideas:
1) Draw a giant rectangular box right below y=0.
2) Tweak the camera matrix so that y < 0 falls outside the clipping volume (but I have no idea how).
Can someone point me in the right direction? =)
A purely mathematical approach:
Don't draw the polygons whose y's are all less than 0.
Draw the polygons whose y's are all greater than or equal to 0.
Clip the rest of the polygons with the y=0 plane and draw them.
If the polygons making up the model are triangles, clipping them is pretty trivial. You need to clip the two sides intersecting with the y=0 plane and replace the original vertices whose y's are less than 0 with the intersection points of those two sides with the clipping plane.
Use the line equations:
(x-x1) = (x2-x1)*(y-y1)/(y2-y1)
(z-z1) = (z2-z1)*(y-y1)/(y2-y1)
where 1 and 2 are the vertices of the side being clipped by the y=0 plane. Substitute their coordinates (x1, y1, z1, x2, y2, z2) and y=0 into the equations to get x and z of the intersection point. Use this point's coordinates instead of vertex 1's or 2's (whichever has y < 0).
If the polygons are texture-mapped, you'll need to recalculate the texture coordinates for the vertices that you got from the clipping. You do that in the same fashion.
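Put as code, clipping one edge (and its texture coordinates) could look like this (a sketch with XNA types; the helper name is mine):
// Intersect edge (v1, v2) with the y = 0 plane using the line equations
// above, and interpolate the UVs with the same factor t.
// Assumes v1.Y and v2.Y have opposite signs.
static void ClipEdgeAtY0(Vector3 v1, Vector3 v2, Vector2 uv1, Vector2 uv2,
                         out Vector3 point, out Vector2 uv)
{
    float t = -v1.Y / (v2.Y - v1.Y);  // fraction along the edge where y = 0
    point = new Vector3(v1.X + (v2.X - v1.X) * t, 0f, v1.Z + (v2.Z - v1.Z) * t);
    uv = uv1 + (uv2 - uv1) * t;       // texture coordinates, done in the same fashion
}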
It sounds like you need the BoundingFrustum class; see the MSDN documentation.
Here is a good tutorial from Nic's GameDev Site.