Resizing a Texture2D without Draw - c#

As the title states, I'm trying to resize a Texture2D before even considering SpriteBatch.Draw(). The reason is that I'm trying to fill an arbitrary polygon, laid out with vertices defined by Vector2s, with an arbitrary Texture2D.
What I'm thinking of is creating the rectangle that fits the polygon, scaling the Texture2D to that rectangle, and then making the pixels that are outside of the polygon transparent via Texture2D's GetData<>() and SetData<>().
I've gotten to the point of finding the rectangle that fits the polygon, but is there a way to resize the Texture2D, or am I going about it the completely wrong way? Thanks!

You're going about it the wrong way. Setting texture data is expensive. (And there are probably some issues with filtering, too.)
What you want to do is set the texture coordinates (the "UV coordinates") of the vertices you are drawing. This will cause a specific location of your texture to appear at that vertex of your polygon. The texture that would then fall outside your polygon is simply never drawn (it is "clipped" by the polygon edges).
Texture coordinates are specified in the range 0.0 to 1.0 (on the U and V axes - horizontally and vertically) from the top left to the bottom right of your texture.
If you are drawing using vertex buffers, XNA includes some built-in vertex structures like VertexPositionTexture and VertexPositionColorTexture that allow you to specify a TextureCoordinate value.
If you are using your own vertex structure, use VertexElementUsage.TextureCoordinate when specifying a VertexElement. If you are creating your own shader, the value will be exposed in TEXCOORD0 (for usage index 0).
If you are just drawing rectangles with SpriteBatch, then specify a sourceRectangle when you call Draw.
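For the vertex-buffer approach, a minimal sketch might look like this (polygonPoints and boundingRect are assumed names for the question's polygon vertices and fitted rectangle): each polygon vertex is mapped into [0,1] UV space using the bounding rectangle, so the texture stretches over the rectangle and is clipped by the polygon edges.
var vertices = new VertexPositionTexture[polygonPoints.Length];
for (int i = 0; i < polygonPoints.Length; i++)
{
    Vector2 p = polygonPoints[i];
    // Map the vertex position into [0,1] texture space via the bounding rectangle.
    vertices[i] = new VertexPositionTexture(
        new Vector3(p, 0f),
        new Vector2(
            (p.X - boundingRect.Left) / boundingRect.Width,
            (p.Y - boundingRect.Top) / boundingRect.Height));
}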

Sounds like you should be using the overloads of the Draw method (I realise you for some reason don't want to do this, but it's like this for a good reason):
public void Draw (
    Texture2D texture,
    Vector2 position,
    Nullable<Rectangle> sourceRectangle,
    Color color,
    float rotation,
    Vector2 origin,
    Vector2 scale,
    SpriteEffects effects,
    float layerDepth
)
The sourceRectangle, scale, and origin parameters should be enough. Don't modify the texture in memory; it's relatively expensive to do this (especially doing it every frame!).
http://msdn.microsoft.com/en-us/library/bb196420(v=xnagamestudio.31).aspx
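For example (a sketch; texture, position, and the scale values are placeholders):
spriteBatch.Draw(
    texture,
    position,
    null,                   // sourceRectangle: null draws the whole texture
    Color.White,
    0f,                     // rotation
    Vector2.Zero,           // origin
    new Vector2(2f, 0.5f),  // scale (per axis)
    SpriteEffects.None,
    0f);                    // layerDepth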
Can you explain why you don't want to scale in Draw()?

Related

c# XNA 4.0 Camera zoom and sprite resizing

Inside my project, I have a sprite of a box being drawn. I have the camera zoom out when a key is clicked. When I zoom out, I want my box to scale its dimensions so it stays consistent even though the camera has zoomed out and "shrunk" it.
I have tried multiplying the object's dimensions by 10%, which seems to be the viewport's adjustment when zooming out, but that doesn't seem to work. Now this may sound dumb, but would scaling the sprite in the draw function also change the sprite's dimensions?
Let's say the box is 64x64 pixels. I zoom out 10% and scale the sprite. Does the sprite still have 64x64 boundaries, or does the up-scaling also change its dimensions?
Scaling using SpriteBatch.Draw()'s scale argument will just draw the sprite smaller or bigger; e.g. a 64x64 sprite drawn at a scale of 0.1 will appear as roughly 7x7 pixels (the outer pixels being alpha blended, if blending is enabled). However, there are no size properties on the sprite; if you keep your own rectangle or position variables for the sprite, SpriteBatch.Draw() will of course not change those.
An alternative is to draw the sprite in 3D space; then everything is scaled when you move your camera, so the sprite will appear smaller, though it will still be a 64x64 sprite.
How do you draw a sprite in 3D space? Here is a good tutorial: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php (you will need to take time to learn about using 3D viewports, cameras, etc.; see here: http://msdn.microsoft.com/en-us/library/bb197901.aspx).
To change the sprite's dimensions you need to change the Rectangle parameter passed to SpriteBatch.Draw. To apply a zoom to the rectangle:
// zoom defaults to 1.0f
Rectangle scaledRect = new Rectangle(
    originalRectangle.X,
    originalRectangle.Y,
    (int)(originalRectangle.Width * zoom),
    (int)(originalRectangle.Height * zoom));
When drawing use:
spriteBatch.Draw(Texture, scaledRect, Color.White);
Now, I'm sorry to assume it, but without knowing why you're doing what you're doing, I think you're going about this the wrong way.
You should use a camera transformation to zoom in and out. It is done like this:
var transform =
    Matrix.CreateTranslation(new Vector3(-Position.X, -Position.Y, 0)) * // camera position
    Matrix.CreateRotationZ(_rotation) *                                  // camera rotation, default 0
    Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *                     // zoom, default 1
    Matrix.CreateTranslation(new Vector3(
        Device.Viewport.Width * 0.5f,
        Device.Viewport.Height * 0.5f,
        0)); // Device from the DeviceManager; centers the camera on the given position
spriteBatch.Begin(               // SpriteBatch variable
    SpriteSortMode.BackToFront,  // sprite sort mode - not related
    BlendState.NonPremultiplied, // BlendState - not related
    null,
    null,
    null,
    null,
    transform);                  // set the camera transformation
It will change how sprites are displayed inside the sprite batch. However, you must now also account for the transformed mouse coordinates (if you are using mouse input). To do that, transform the mouse position by the inverse of the camera transformation:
// pos: mouse position; transform: your camera transformation matrix
public Vector2 ViewToWorld(Vector2 pos, Matrix transform)
{
    return Vector2.Transform(pos, Matrix.Invert(transform));
}
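For example (assuming the transform matrix built above):
MouseState mouse = Mouse.GetState();
Vector2 worldPos = ViewToWorld(new Vector2(mouse.X, mouse.Y), transform);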
I wrote this code without being able to test it directly, so if something doesn't work, feel free to ask.
This is not a direct answer to your question; if you could explain why you want to resize the sprite when zooming instead of zooming the camera, maybe I could answer it better. You should also follow markmnl's link to understand world transformations and why you seem to need them in this situation.

Depth buffers for different screens in XNA

I don't quite understand how to handle ZBuffering for different screens in my program. I have a screenControllers class that calls a Draw method for each of my active Screen classes. The spriteBatch.Begin and spriteBatch.End calls are made in the screenController's Draw. The Screen's Draw just has a spriteBatch.Draw statement for each of the textures being drawn. I understand that I can specify depth when calling the
public void Draw (
    Texture2D texture,
    Rectangle destinationRectangle,
    Nullable<Rectangle> sourceRectangle,
    Color color,
    float rotation,
    Vector2 origin,
    SpriteEffects effects,
    float layerDepth
)
method for spriteBatch.
But say I open one screen with textures at some given depth, and then the second screen I open has other textures at the same depth values as the previous screen, but they should be drawn in order on TOP of the previous screen. I doubt I will need this functionality, because the second screen I open will probably be an options screen on top of my scene window, with everything in front, or it may be a portion of the interface that is not on top of the scene window but to the side. I am just wondering whether this layered functionality for multiple screens can be implemented in XNA.
If you use a different SpriteBatch to manage each screen, I don't think you will have this kind of problem.
If you use the same SpriteBatch, you could use a scaleFactor and an offsetValue to manage the layerDepth values of your Draws, in order to create a different interval for each screen's depth values.
I mean that if you have 2 screens, the first screen's textures are drawn in the [0, 0.5) interval and the second one's in [0.5, 1). If you have 3 of them, each interval is 0.33 wide, and so on...
The scaleFactor will be 1 / numberOfScreens of course.
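A sketch of the idea (screenIndex, numberOfScreens, and localDepth are assumed names):
float scaleFactor = 1f / numberOfScreens;
// localDepth is the depth within the screen, in [0, 1);
// the result lands in the screen's own sub-interval of [0, 1).
float layerDepth = (screenIndex + localDepth) * scaleFactor;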

How do I make a set of blocks look seamless when I know the size of the texture and the position and dimensions of the blocks?

So I have a seamless texture. The black lines in the picture represent the repeating texture. Then I have a series of blocks that have known x,y coordinates and known height and width. In Unity, textures have a 'scale' and an 'offset'.
Scale is the amount of the texture that will show on the block. So the block starting at 0,0 will have a scale of about (.2, 1.3): its width is .2x the width of a single texture (for the sake of simple numbers) and its height is about 1.3x the height of the texture.
Offset then moves the 'starting point' of the texture. The block at 0,0 will have an offset of (0, 0) because it is perfectly aligned with a corner, but the block immediately to its right would have an offset of about (.2, 0), because the texture needs to start about .2 units in to align properly. This is as far as I understand offset. I am pretty sure this is correct, but feel free to correct me if I am wrong.
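For illustration, here is what I mean by scale and offset in code (a sketch only; blockRenderer and the block variables are assumed names, using the 12x12 texture size from below):
float tile = 12f; // world size of one texture repeat
Material mat = blockRenderer.material;
// Scale: how many texture repeats fit across the block.
mat.mainTextureScale = new Vector2(blockWidth / tile, blockHeight / tile);
// Offset: how far into the texture the block's corner starts.
mat.mainTextureOffset = new Vector2(blockX / tile, blockY / tile);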
Now when you apply a texture to a block, Unity automatically scales the texture to start at the top left corner of the block and stretches it to fit one full iteration inside that space. I obviously don't want this.
My question comes in for the three blocks labeled with the (x,y) coordinates. I have tried for several hours over a few weeks to get it right, unsuccessfully.
So how do I take in the x,y position and width/height to create a correct scale and offset so that those blocks will look like they are exactly where they are supposed to be in the texture?
It is not a particularly difficult concept but after staring at it I have no more ideas.
For the sake of the question assume a single texture is 12x12. The x,y and width/height are known values but are arbitrary.
I know it's normally good practice to post attempted code but I would rather see a good way of doing it than see answers that try to fix my failed attempts. But I will post code if people want to see that I did try on my own or how I initially tried.
What is a UV Map
Textures are applied to models via what is known as a UV map. The idea is that each (x,y,z) vertex has a pair of (u,v) coordinates assigned. UV coordinates define which point on the texture should correspond to that vertex. If you want to know more, I suggest the awesome (and free) Udacity 3D graphics course; Unit 8 talks about UV mapping.
How to solve your problem using UV Maps
Let's ignore all the vertices that are not visible - your blocks are basically rectangles. You can assign a UV mapping where the world position of each vertex is turned into its UV coordinate. This way, all the blocks will have the same origin point in texture space (0,0 in world position corresponds to 0,0 on the texture). The code looks like this:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < uvs.Length; i++)
{
    // Find the world position of each vertex
    Vector3 vertexPos = transform.TransformPoint(vertices[i]);
    // and assign its x,y coordinates to u,v.
    uvs[i] = new Vector2(vertexPos.x, vertexPos.y);
}
mesh.uv = uvs;
You have to do this each time your block position changes.
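For example, a MonoBehaviour could re-run that loop whenever the block moves (a sketch; UpdateUVs is an assumed wrapper around the code above):
void Update()
{
    if (transform.hasChanged)
    {
        UpdateUVs();                 // the UV-assignment loop above
        transform.hasChanged = false;
    }
}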

How to color a mesh with values at the vertices in WPF 3D?

We've got a sphere which we want to display in 3D and color according to a function that depends on the spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how would that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
I've converted the algorithm to use a linear texture which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
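A sketch of this approach (Colormap, mesh, and normalizedValue are assumed names for illustration, not the exact code we used):
// Build a 256x1 BGRA bitmap from a colormap function.
const int width = 256;
byte[] pixels = new byte[width * 4];
for (int i = 0; i < width; i++)
{
    Color c = Colormap(i / (double)(width - 1)); // assumed: maps [0,1] to a color
    pixels[4 * i + 0] = c.B;
    pixels[4 * i + 1] = c.G;
    pixels[4 * i + 2] = c.R;
    pixels[4 * i + 3] = c.A;
}
var bitmap = BitmapSource.Create(width, 1, 96, 96, PixelFormats.Bgra32, null, pixels, width * 4);
var material = new DiffuseMaterial(new ImageBrush(bitmap));

// Each vertex's scalar value (normalized to [0,1]) becomes its U coordinate;
// V is constant since the texture is one pixel tall.
mesh.TextureCoordinates.Add(new Point(normalizedValue, 0.5));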

Transforming co-ordinates from a rectangle to a parallelogram

I am stuck on a simple yet vexing problem with basic geometry. Too bad I don't remember my high-school coordinate geometry; I'm looking for some help.
My problem is illustrated in this diagram: A rectangle rotated, scaled, and warped into a parallelogram http://img248.imageshack.us/img248/8011/transform.png
I am struggling with transforming a co-ordinate from the rectangle to a resized parallelogram. Any tips, pointers and/or code-examples would be wonderful!
Thanks,
M.
There are several steps in this transformation:
1. Scale about (x,y) to adjust to the final size W', H' (possibly with unequal scaling on the X and Y axes).
2. Apply a shear transform to convert the rectangle to a parallelogram (keeping x,y invariant).
3. Rotate about (x,y) to align to the final coordinate orientation.
4. Translate to the new location.
Create the coordinate matrices for each of these and composite (multiply) them together to create the overall transform. Wikipedia could be your starting point for learning about these transformation matrices.
Tip: it might be simplest to apply a translation that moves (x,y) to the origin first. Then the shear, scaling and rotation are a lot simpler to do. Then move the result to the new location.
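For example, the composition could be sketched with System.Numerics matrices (all inputs - x, y, the sizes Wp/Hp for W'/H', angles, and the shear amount - are assumed variables):
using System.Numerics;

Matrix3x2 toOrigin = Matrix3x2.CreateTranslation(-x, -y);     // move (x,y) to the origin
Matrix3x2 scale    = Matrix3x2.CreateScale(Wp / W, Hp / H);   // W,H -> W',H'
Matrix3x2 shear    = Matrix3x2.CreateSkew(shearAngle, 0f);    // rectangle -> parallelogram
Matrix3x2 rotate   = Matrix3x2.CreateRotation(angle);         // final orientation
Matrix3x2 toTarget = Matrix3x2.CreateTranslation(newX, newY); // new location

// Matrix3x2 uses row vectors, so transforms compose left to right.
Matrix3x2 transform = toOrigin * scale * shear * rotate * toTarget;
Vector2 mapped = Vector2.Transform(new Vector2(px, py), transform);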
