I am working on a game (in XNA/MonoGame) in which I have a large world consisting of many individual tiles with different textures. I store my world as a 2D array, and to render it I simply loop through every tile and draw it at the correct position.
Recently I implemented a zoom feature in my camera. My game's camera is made in the 'usual' XNA way, where you create a matrix based on position and scale and pass it into your SpriteBatch.Begin() calls. My transform matrix is calculated like so:
Transform = Matrix.Identity *
            Matrix.CreateTranslation(-(int)Position.X, -(int)Position.Y, 0) *
            Matrix.CreateTranslation(Origin.X, Origin.Y, 0) *
            Matrix.CreateScale(Scale);
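The matrix is then passed into SpriteBatch.Begin() along these lines (the sort mode and blend state here are illustrative, not my exact call):

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    null, null, null, null, Transform);
// ... loop over the 2D array and draw each tile here ...
spriteBatch.End();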
The problem I am now facing is when I zoom in (by changing the camera's scale variable) some tile textures look odd at some zoom levels. Here are some pictures showing what I mean:
Here is a map at perfectly fine zoom level:
Here is a zoomed out (and cropped) view of the same map; notice how the sand texture is weirdly "upscaled":
I do not have much experience with graphics programming and have no idea what causes this, but it makes the map look very janky.
I am talking about the Camera settings in Unity3D.
I'm trying to figure out if I can change (at least) the background color of the gray area in the screenshot. The limits of the camera are changed programmatically; the motivation is that the playing area has to change dynamically depending on whether a child or an adult is playing. The screen is huge, more than 83 inches. When the playing area is rescaled, the area that is not drawn is gray and a bit ugly. I would like to know if I can at least define the color, or better still, use an image if possible.
The screenshot you see is the screen capture in fullscreen mode, so it includes all the pixels.
After this brief explanation in words and images, let's go to the specifics of the technical details. This is how I resize the room design area:
public static void SetViewportCalibration()
{
    var camera = Camera.main;
    // Note: Rect's constructor takes (x, y, width, height)
    camera.pixelRect = new Rect(MinX, MinY, MaxX, MaxY);
}
Is it possible to set the color of that gray area outside the new Rect(MinX, MinY, MaxX, MaxY)?
There are two ways off the top of my head to accomplish this. Both use two Cameras.
The first way. Create a second Camera with a Depth LESS than the dynamic camera's. This second, "background" camera can then display anything you'd like: a separate Skybox, a separate UI, other scene content, and so on.
The second way. Don't resize your dynamic camera at all. Instead, render it to a Target Texture, use that texture in a material, and assign the material to a Quad mesh (the most appropriate choice). This mesh can then be used in your scene like any other 3D object, which means you can not only position it but also scale and even rotate it. The new camera that you add can have its own Skybox, UI, etc.
I would opt for the second way. Partly personal preference, but also because it sounds like it might suit your situation better and be easier to implement. You can also implement many more effects for extra "wow".
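A rough sketch of the second approach (the variable names here are placeholders for whatever your scene uses):

// Render the gameplay camera off-screen, then show the result on a quad.
var rt = new RenderTexture(1024, 768, 16);   // width, height, depth buffer bits
gameplayCamera.targetTexture = rt;           // camera now draws into the texture
quad.GetComponent<Renderer>().material.mainTexture = rt; // quad displays it

Scaling or positioning the quad then controls where, and how large, the gameplay view appears.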
Try to create another camera with no objects in its view and the following settings:
Clear Flags: Solid Color,
Background: Pick a color,
Viewport Rect: X = 0, Y = 0, W = 1, H = 1,
Depth: a smaller value than the other camera's (e.g., set this camera's depth to 0 and the other camera's to 1)
This camera will work as background of your screen.
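If you prefer to set this up from code, here is a minimal sketch (the color value is just an example):

// Create a full-screen, solid-color background camera from script.
var go = new GameObject("Background Camera");
var cam = go.AddComponent<Camera>();
cam.clearFlags = CameraClearFlags.SolidColor;
cam.backgroundColor = Color.black;   // pick any color
cam.rect = new Rect(0f, 0f, 1f, 1f); // cover the whole screen
cam.depth = 0;                       // the gameplay camera should use depth 1
cam.cullingMask = 0;                 // draw no objects, just the clear color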
I hope that I understood the question :)
I am writing a virtual globe using DirectX similar to Google Earth. So far, I have completed tessellation, and have tested with a wrapped texture over the entire sphere, which was successful. I have written the texture coordinates to correspond with the latitude and longitude (90lat,-180lon = 0,0 and -90lat,180lon = 1,1).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to set the texture to only span a specific texture coordinate space? I.e. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones).
You can create geometry that matches the part of the sphere covered by each tile and draw those pieces one after another, setting the correct texture before each draw call (if the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call).
You can write a pixel shader that evaluates the texture coordinates and chooses the appropriate texture using transformed texture coordinates (a sketch of this transform follows the list).
You can render all tiles into one big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it.
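For the second solution, the core of the shader is the coordinate transform. Here it is in plain C# for clarity (the 4x2 tile layout is an assumption based on the 90-by-90-degree tiles mentioned in the question):

// Map a global (u, v) on the sphere to a tile index and the local
// coordinate inside that tile. In HLSL the same math would select a
// texture (or texture array slice) and its sampling coordinate.
static void MapToTile(float u, float v, int tilesX, int tilesY,
    out int tileX, out int tileY, out float localU, out float localV)
{
    float su = u * tilesX;                 // scale into the tile grid
    float sv = v * tilesY;
    tileX = Math.Min((int)su, tilesX - 1); // clamp the u == 1.0 edge case
    tileY = Math.Min((int)sv, tilesY - 1);
    localU = su - tileX;                   // fractional part = local uv
    localV = sv - tileY;
}

This also answers the "span a specific texture coordinate space" question: a tile covering (0, 0) to (0.25, 0.5) corresponds to tilesX = 4, tilesY = 2, tile (0, 0).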
Inside my project, I have a sprite of a box being drawn. I have the camera zoom out when a key is clicked. When I zoom out, I want my box to scale its dimensions so it stays consistent even though the camera has zoomed out and "shrunk" it.
I have tried multiplying the object's dimensions by 10%, which seems to be the viewport's adjustment when zooming out, but that doesn't seem to work. Now this may sound dumb, but would scaling the sprite in the draw function also change the sprite's dimensions?
Let's say the box is 64x64 pixels. I zoom out 10% and scale the sprite. Does the sprite still have 64x64 boundaries, or does the up-scaling also change its dimensions?
Scaling using SpriteBatch.Draw()'s scale argument will just draw the sprite smaller or bigger; e.g., at a scale of 0.1, a 64x64 sprite will appear as roughly 7x7 pixels (with the outer pixels alpha blended, if blending is enabled). However, there are no size properties on the sprite; if you have your own rectangle or position variables for the sprite, SpriteBatch.Draw() will of course not change those.
An alternative is to draw the sprite in 3D space; then everything is scaled when you move your camera, so the sprite will appear smaller though it is still a 64x64 sprite.
How do you draw a sprite in 3D space? Here is a good tutorial: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php (you will need to take the time to learn about 3D viewports, cameras, etc.; see here: http://msdn.microsoft.com/en-us/library/bb197901.aspx).
To change the sprite's dimensions, you need to change the Rectangle parameter passed to SpriteBatch.Draw(). To apply zoom to the rectangle:
// zoom defaults to 1.0f
Rectangle scaledRect = new Rectangle(
    originalRectangle.X,
    originalRectangle.Y,
    (int)(originalRectangle.Width * zoom),
    (int)(originalRectangle.Height * zoom));
When drawing use:
spriteBatch.Draw(Texture, scaledRect, Color.White);
Now, I'm sorry to assume, but without knowing why you are doing what you are doing, I think you are going about it the wrong way.
You should use the camera transformation to zoom in and out. It is done like this:
var transform = Matrix.CreateTranslation(new Vector3(-Position.X, -Position.Y, 0)) * // camera position
    Matrix.CreateRotationZ(_rotation) *              // camera rotation, default 0
    Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * // zoom, default 1
    Matrix.CreateTranslation(new Vector3(
        Device.Viewport.Width * 0.5f,
        Device.Viewport.Height * 0.5f, 0)); // Device from DeviceManager; centers the camera on the given position

SpriteBatch.Begin(               // SpriteBatch variable
    SpriteSortMode.BackToFront,  // sprite sort mode - not related
    BlendState.NonPremultiplied, // BlendState - not related
    null,
    null,
    null,
    null,
    transform);                  // set the camera transformation
This changes how sprites are displayed inside the sprite batch; however, you must now also account for the changed mouse coordinates (if you are using mouse input). To do that, transform the mouse position by the inverse of the transformation matrix:
// pos: mouse position; transform: your transformation matrix
public Vector2 ViewToWorld(Vector2 pos, Matrix transform)
{
    return Vector2.Transform(pos, Matrix.Invert(transform));
}
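For example (Mouse.GetState() is the standard XNA input call; the rest follows from the code above):

MouseState mouse = Mouse.GetState();
Vector2 worldPos = ViewToWorld(new Vector2(mouse.X, mouse.Y), transform);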
I wrote this code without being able to test it directly, so if something doesn't work, feel free to ask.
This is not a direct answer to your question; if you could explain why you want to resize the sprite when zooming instead of zooming the camera, maybe I could answer it better. You should also follow markmnl's link to understand world transformations and why you seem to need one in this situation.
I am using the following code to set up my camera. I can see the elements in a range of some 100 units; I want the camera to see farther.
projection = Matrix.CreatePerspectiveFieldOfView((3.14159265f/10f), device.Viewport.AspectRatio, 0.2f, 40.0f);
How can I do this?
Look at the documentation for Matrix.CreatePerspectiveFieldOfView.
The last two parameters are the near and far plane distances. They determine the size of the view frustum associated with the camera. The view frustum looks like this:
Everything in the frustum is in the volume that the rasteriser uses for drawing - this includes a depth component. Everything outside this region is not drawn.
Increase the distance of the far plane from the camera.
But don't increase it further than you need to. The larger the distance between the near and far plane, the less resolution the Z-buffer has and the more likely you will see artefacts like Z-fighting.
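Concretely, using the call from the question (1000.0f is only an example value; pick the smallest distance that covers your scene):

projection = Matrix.CreatePerspectiveFieldOfView(
    (3.14159265f / 10f),         // field of view
    device.Viewport.AspectRatio, // aspect ratio
    0.2f,                        // near plane distance
    1000.0f);                    // far plane distance (was 40.0f)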
We've got a sphere which we want to display in 3D and color given a function that depends on spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how would that affect the coloring? Would it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
I've converted the algorithm to use a linear texture, which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed 256 x 1 texture.
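For anyone trying the same approach, here is a minimal sketch of such a 256 x 1 lookup texture in WPF (the blue-to-red gradient is just an example colormap):

// Build a 256x1 BGRA colormap bitmap and wrap it in a brush for a
// DiffuseMaterial; each vertex then selects its color by setting its
// texture coordinate to (value, 0.5), where value is in [0, 1].
const int width = 256;
var pixels = new byte[width * 4];
for (int i = 0; i < width; i++)
{
    pixels[i * 4 + 0] = (byte)(255 - i); // blue
    pixels[i * 4 + 1] = 0;               // green
    pixels[i * 4 + 2] = (byte)i;         // red
    pixels[i * 4 + 3] = 255;             // alpha
}
var bitmap = BitmapSource.Create(width, 1, 96, 96,
    PixelFormats.Bgra32, null, pixels, width * 4);
var material = new DiffuseMaterial(new ImageBrush(bitmap));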