Pixel perfect rendering in XNA - c#

I have a 2D game in XNA which has a scrolling camera. Unfortunately, when the screen is moved, I can see some artifacts - mostly blur and additional lines on the screen.
I thought about changing coordinates before drawing (approximating with Ceiling() or Floor() consistently), but this seems a little inefficient. Is this the only way?
I use SpriteBatch for rendering.
This is my drawing method from Camera:
Vector2D works on doubles, Vector2 works on floats (used by XNA), and Sprite is just a class with the data for spriteBatch.Draw.
public void DrawSprite(Sprite toDraw)
{
    Vector2D drawingPostion;
    Vector2 drawingPos;

    drawingPostion = toDraw.Position - transform.Position;
    drawingPos.X = (float)drawingPostion.X * UnitToPixels;
    drawingPos.Y = (float)drawingPostion.Y * UnitToPixels;

    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth + zsortingValue);
}
My idea is to do this:
drawingPos.X = (float) Math.Floor(drawingPostion.X*UnitToPixels);
drawingPos.Y = (float) Math.Floor(drawingPostion.Y*UnitToPixels);
And it solves the problem. I think I can accept it this way. But are there any other options?

GraphicsDevice.SamplerStates[0] = SamplerState.PointWrap;
This isn't so much a problem with your camera as it is with the sampler. Using a point sampler state tells the video card to take a single color sample directly from the texture depending on the position. Other modes like LinearWrap and LinearClamp (the SpriteBatch default) interpolate between texels (pixels on your source texture) and give everything a very mushy, blurred look. If you're going for pixel graphics, you need point sampling.
With linear interpolation, if you have red and white next to each other in your texture, and it samples between the two (by some aspect of the camera), you will get pink. With point sampling, you get either red or white. Nothing in between.
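For example, one common way to apply point sampling for the whole batch is to pass the sampler state to SpriteBatch.Begin (a minimal sketch using the XNA 4.0 overload; the blend, depth, and rasterizer states shown are just typical defaults, not taken from your code):
// Point sampling for every draw call in this batch (XNA 4.0).
spriteBatch.Begin(SpriteSortMode.Deferred,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp,   // no interpolation between texels
                  DepthStencilState.None,
                  RasterizerState.CullCounterClockwise);

// ... spriteBatch.Draw(...) calls ...

spriteBatch.End();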

Yes, it is possible... try something like this...
bool redrawSprite = false;
Sprite toDraw;

void MainRenderer()
{
    if (redrawSprite)
    {
        DrawSprite(toDraw);
        redrawSprite = false;
    }
}

void ManualRefresh()
{
    // Create or set your sprite and assign it to 'toDraw'
    redrawSprite = true;
}
This way you let the main loop do the work as intended.

Related

How to get color of the pixel I clicked on in Unity? Is there a GetPixel alternative that works with floats?

I'm using a Texture2D to display a map, and I need to get the color of the pixel I clicked on. I used Input.mousePosition to get the float coordinates, but using GetPixel to get the color requires the coordinates to be integers.
I am having trouble with getting GetPixel to find the coordinate that I clicked on.
When using floats and clicking on, say, the rightmost side of the texture, I get a number like 27.xxx, but when I cast it to an integer, it gives a coordinate 27 pixels from the leftmost side of the texture. The way floats map to pixels confuses me a great deal; clarifying that would help.
public class ProvinceSelectScript : MonoBehaviour
{
    public Material SpriteMain;
    public Color SelectedCol;
    public Color NewlySelectedCol;
    public Texture2D WorldColMap;
    Vector2 screenPosition;
    Vector2 worldPosition;

    void Start()
    {
        WorldColMap = (Texture2D)SpriteMain.GetTexture("_MainTexture");
        NewlySelectedCol = Color.blue;
    }

    private void OnMouseDown()
    {
        screenPosition = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
        worldPosition = Camera.main.ScreenToWorldPoint(screenPosition);
        SelectedCol = WorldColMap.GetPixel(((int)(worldPosition.x) + (WorldColMap.width / 2)), (int)((worldPosition.y) + (WorldColMap.height / 2)));
        SpriteMain.SetColor("_SelectedProvince", SelectedCol);
        SpriteMain.SetColor("_NewlySelectedProvince", NewlySelectedCol);
    }
}
The worldPosition in the question isn't calculated in a way that's useful if you're using a perspective camera or if your camera is pointed any direction but directly forward.
To find the world position of the click, the best way to go about that is to use Camera.ScreenPointToRay to calculate the position of the click when intersecting the plane made by the position of the sprite and its local forward.
Either way, a world position does not mean anything to the sprite, which could be positioned anywhere in world space. You should instead use transform.InverseTransformPoint to calculate the local position you're clicking on. At that point, you can use the SpriteRenderer's bounds to convert to normalized form (0-1, originating from the bottom-left, instead of world-unit lengths originating from the center).
Once you have the local sprite position of the click expressed in normalized form, you can try to use GetPixelBilinear to get the color at the (x, y) UV of the click. If the sprite is super simple, this MAY work. If it is animated, nine-sliced, or anything else, it probably won't, and you'll have to reverse-engineer which UV the mouse is actually hovering over.
Camera mainCam;
SpriteRenderer sr;

void Start()
{
    WorldColMap = (Texture2D)SpriteMain.GetTexture("_MainTexture");
    NewlySelectedCol = Color.blue;
    mainCam = Camera.main;               // cache for faster access
    sr = GetComponent<SpriteRenderer>(); // cache for faster access
}

private void OnMouseDown()
{
    // Plane takes the normal first, then a point on the plane.
    Plane spritePlane = new Plane(transform.forward, transform.position);
    Ray pointerRay = mainCam.ScreenPointToRay(Input.mousePosition);
    if (spritePlane.Raycast(pointerRay, out float distance))
    {
        Vector3 worldPositionClick = pointerRay.GetPoint(distance);
        Vector3 localSpriteClick = transform.InverseTransformPoint(worldPositionClick);

        // convert [(-extents,-extents),(extents,extents)] to [(0,0),(1,1)]
        Vector3 localSpriteExtents = sr.sprite.bounds.extents;
        localSpriteClick = localSpriteClick + localSpriteExtents;
        localSpriteClick.x /= localSpriteExtents.x * 2;
        localSpriteClick.y /= localSpriteExtents.y * 2;

        // You clicked on localSpriteClick; on a very simple sprite (where no UV magic is happening) this might work:
        SelectedCol = WorldColMap.GetPixelBilinear(localSpriteClick.x, localSpriteClick.y);
        SpriteMain.SetColor("_SelectedProvince", SelectedCol);
    }
}

Detect initial orientation for Gyroscope Camera, but heading only

So I'm using a script I found on here (How to enable gyroscope camera at current device orientation) BUT I only want this initial device orientation to affect the heading.
Quaternion offset;

void Awake()
{
    Input.gyro.enabled = true;
}

void Start()
{
    // Subtract Quaternion
    offset = transform.rotation * Quaternion.Inverse(GyroToUnity(Input.gyro.attitude));
}

void Update()
{
    GyroModifyCamera();
}

void GyroModifyCamera()
{
    // Apply offset
    transform.rotation = offset * GyroToUnity(Input.gyro.attitude);
}

private static Quaternion GyroToUnity(Quaternion q)
{
    return new Quaternion(q.x, q.y, -q.z, -q.w);
}
For the backstory, I have a few interior renderings, with a gyro control to look around the space. I don't want to affect the entire world orientation; I just want them to be facing in the "correct" direction, in terms of the best view of the space. (If not, it just depends on how the device is oriented, and it's common to jump into a space looking at a random, uninteresting corner of the room.)
With the script above, it resets the camera on all three axes (and therefore the entire world orientation), so if the user is holding the tablet at an unusual angle, then things get really weird.
Here's a quick image to assist the explanation:
I always want the initial view to be aligned with the green arrow, not something random like the red arrow. But just the heading, it's OK to be looking at the floor or ceiling as long as the heading is right in that sweet spot of the room.
Thanks!
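One possible approach (just a sketch, not from the referenced script, assuming the rest of the script above and that "heading" means rotation around the world up axis): build the offset from the horizontal projections of the forward vectors, so only the yaw is corrected and the device's pitch/roll is left alone.
void Start()
{
    Quaternion deviceRotation = GyroToUnity(Input.gyro.attitude);

    // Project both forward vectors onto the horizontal plane, so any
    // pitch/roll from how the device is held is ignored.
    Vector3 deviceForwardFlat = Vector3.ProjectOnPlane(deviceRotation * Vector3.forward, Vector3.up);
    Vector3 targetForwardFlat = Vector3.ProjectOnPlane(transform.forward, Vector3.up);

    // Rotation about the up axis that turns the gyro heading into the desired heading.
    offset = Quaternion.FromToRotation(deviceForwardFlat, targetForwardFlat);
}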

How do I resize sprites in a C# XNA game based on window size?

I'm making a game in C# and XNA 4.0. It uses multiple objects (such as a player character, enemies, platforms, etc.), each with their own texture and hitbox. The objects are created and drawn using code similar to the following:
class Object
{
    Texture2D m_texture;
    Rectangle m_hitbox;

    public Object(Texture2D texture, Vector2 position)
    {
        m_texture = texture;
        m_hitbox = new Rectangle((int)position.X, (int)position.Y, texture.Width, texture.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(m_texture, m_hitbox, Color.White);
    }
}
Everything works properly, but I also want to allow the player to resize the game window. The main game class uses the following code to do so:
private void Update(GameTime gameTime)
{
    if (playerChangedWindowSize == true)
    {
        graphics.PreferredBackBufferHeight = newHeight;
        graphics.PreferredBackBufferWidth = newWidth;
        graphics.ApplyChanges();
    }
}
This will inevitably cause the positions and hitboxes of the objects to become inaccurate whenever the window size is changed. Is there an easy way for me to change the positions and hitboxes based on a new window size? If the new window width was twice as big as it was before I could probably just double the width of every object's hitbox, but I'm sure that's a terrible way of doing it.
Consider normalizing your coordinate system to view space {0...1} and only applying the window-dimension scale at the point of rendering.
View Space to Screen Space Conversion
Pseudo code for co-ordinates:
x' = x * screenResX
y' = y * screenResY
Similarly for dimensions. Let's say you have a 32x32 sprite originally designed for 1920x1080 and wish to scale so that it fits the same logical space on screen (so it doesn't appear unnaturally small):
r = 32 * screenResX' / screenResY
width' = width * r
height' = height * r
Then it won't matter what resolution the user has set.
If you are concerned about the performance cost this may impose, you can perform the above once whenever the screen resolution changes. However, you should still always keep the original view-space {0...1} values.
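As a rough C# sketch of that conversion (screenResX and screenResY stand for the current backbuffer size, and the 1920x1080 design resolution is an illustrative assumption, not taken from the question's code):
// Position stored in view space {0...1} instead of pixels.
Vector2 viewPos = new Vector2(0.25f, 0.5f);

// Convert to screen space only when drawing.
Vector2 screenPos = new Vector2(viewPos.X * screenResX, viewPos.Y * screenResY);

// Scale a sprite authored for a 1920x1080 design resolution so it keeps
// the same logical size at the current resolution.
float scale = screenResX / 1920f;
Rectangle destination = new Rectangle(
    (int)screenPos.X,
    (int)screenPos.Y,
    (int)(m_texture.Width * scale),
    (int)(m_texture.Height * scale));

spriteBatch.Draw(m_texture, destination, Color.White);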
Collision Detection
It's arguably more efficient to perform collision detection on screen-space coordinates.
Hope this helps

How to auto-scale to different screen resolutions?

I'm making an infinite runner in Unity. I have a tile spawner/generator and it's generating GameObjects based on screen height and width. I managed to make it work with the width, but when changing the height the camera doesn't follow, and I can't manage to make that work.
Anyway, my code isn't good; I have spent the last 6 hours on this and I'm not happy with the result.
As I found out, you can define an aspect ratio on the camera and it will auto-scale to that ratio, but it distorts the image and doesn't look great.
Given all of that, what is the best way to auto-scale a 2D platform game (NOT CONSIDERING GUI, only GameObjects)?
I'm using this script to stretch sprites based on their size; it works for most cases. I use 5 for the camera's orthographicSize.
using UnityEngine;
using System.Collections;

#if UNITY_EDITOR
[ExecuteInEditMode]
#endif
public class SpriteStretch : MonoBehaviour
{
    public enum Stretch { Horizontal, Vertical, Both };
    public Stretch stretchDirection = Stretch.Horizontal;
    public Vector2 offset = new Vector2(0f, 0f);

    SpriteRenderer sprite;
    Transform _thisTransform;

    void Start()
    {
        _thisTransform = transform;
        sprite = GetComponent<SpriteRenderer>();
        StartCoroutine("stretch");
    }

#if UNITY_EDITOR
    void Update()
    {
        scale();
    }
#endif

    IEnumerator stretch()
    {
        yield return new WaitForEndOfFrame();
        scale();
    }

    void scale()
    {
        float worldScreenHeight = Camera.main.orthographicSize * 2f;
        float worldScreenWidth = worldScreenHeight / Screen.height * Screen.width;

        float ratioScale = worldScreenWidth / sprite.sprite.bounds.size.x;
        ratioScale += offset.x;
        float h = worldScreenHeight / sprite.sprite.bounds.size.y;
        h += offset.y;

        switch (stretchDirection)
        {
            case Stretch.Horizontal:
                _thisTransform.localScale = new Vector3(ratioScale, _thisTransform.localScale.y, _thisTransform.localScale.z);
                break;
            case Stretch.Vertical:
                _thisTransform.localScale = new Vector3(_thisTransform.localScale.x, h, _thisTransform.localScale.z);
                break;
            case Stretch.Both:
                _thisTransform.localScale = new Vector3(ratioScale, h, _thisTransform.localScale.z);
                break;
            default:
                break;
        }
    }
}
First I want to say there are no good solutions, only less bad ones.
The easiest is to support just one aspect ratio; that way you can scale everything up and down without distortion, but that is almost never an option.
The second easiest is to render the game at one fixed aspect ratio and add black bars (or something) to the edges so the actual game area keeps that aspect ratio, as in the sketch below.
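A minimal sketch of that black-bar (letterbox/pillarbox) idea, assuming a script attached to the camera and a fixed 16:9 target; the class name LetterboxCamera and the targetAspect value are illustrative, not from this answer:
using UnityEngine;

public class LetterboxCamera : MonoBehaviour
{
    public float targetAspect = 16f / 9f; // assumed design aspect ratio

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        float windowAspect = (float)Screen.width / Screen.height;
        float scaleHeight = windowAspect / targetAspect;

        Rect rect = cam.rect;
        if (scaleHeight < 1f)
        {
            // Window is narrower than the target: add bars top and bottom.
            rect.width = 1f;
            rect.height = scaleHeight;
            rect.x = 0f;
            rect.y = (1f - scaleHeight) / 2f;
        }
        else
        {
            // Window is wider than the target: add bars left and right.
            float scaleWidth = 1f / scaleHeight;
            rect.width = scaleWidth;
            rect.height = 1f;
            rect.x = (1f - scaleWidth) / 2f;
            rect.y = 0f;
        }
        cam.rect = rect;
    }
}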
If you still want to support different aspect ratios, the solution is to increase the game area, but that might give people with different aspect ratios an advantage since they can see further ahead or higher, and it might mess up your level design.
I just posted an answer on how to make everything work automatically as long as you have a constant aspect ratio here: https://stackoverflow.com/a/25160299/2885785

XNA - Mouse coordinates to world space transformation

I have a pretty annoying problem. I would like to create a drawing program using a WinForms + XNA combo.
The most important part is transforming the mouse position onto the XNA-drawn grid. I was able to make it work for translations, but it only works if I don't zoom in; when I do, the coordinates go horribly wrong.
And I have no idea what I'm doing wrong. I tried transforming with the scaling matrix, transforming with the inverse scaling matrix, and multiplying by the zoom, but none of them seems to work.
In the beginning (with zoom value = 1) the grid starts at (0,0,0) and goes to (Width, Height, 0). I was able to get coordinates based on this grid as long as the zoom value didn't change at all. I'm using a custom shader with an orthographic projection matrix, an identity view matrix, and the transformed world matrix.
Here are the two main methods:
internal void Update(RenderData data)
{
    KeyboardState keyS = Keyboard.GetState();
    MouseState mouS = Mouse.GetState();

    if (ButtonState.Pressed == mouS.RightButton)
    {
        camTarget.X -= (float)(mouS.X - oldMstate.X) / 2;
        camTarget.Y += (float)(mouS.Y - oldMstate.Y) / 2;
    }

    if (ButtonState.Pressed == mouS.MiddleButton || keyS.IsKeyDown(Keys.Space))
    {
        zVal += (float)(mouS.Y - oldMstate.Y) / 10;
        zoom = (float)Math.Pow(2, zVal);
    }

    oldKState = keyS;
    oldMstate = mouS;

    world = Matrix.CreateTranslation(new Vector3(-camTarget.X, -camTarget.Y, 0)) * Matrix.CreateScale(zoom / 2);
}

internal PointF MousePos
{
    get
    {
        Vector2 mousePos = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
        Matrix trans = Matrix.CreateTranslation(new Vector3(camTarget.X - (Width / 2), -camTarget.Y + (Height / 2), 0));
        mousePos = Vector2.Transform(mousePos, trans);
        return new PointF(mousePos.X, mousePos.Y);
    }
}
The second method should return the coordinates of the mouse cursor based on the grid (where the (0,0) point of the grid is the top-left corner.).
But it just doesn't work. I deleted the zoom transformation from the trans matrix, since I wasn't able to get any useful results with it (most of the time the coordinates were horribly wrong, often in the many thousands when the grid's size is 500x500).
Any ideas, or suggestions? I've been trying to solve this simple problem for two days now :\
Take a look at the GraphicsDevice.Viewport.Unproject method for converting screen-space locations into world space; it basically applies your world, view, and projection transformations in reverse order.
As for your zooming issue, instead of scaling the world transform, why not move the camera closer to the object you're viewing?
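A rough sketch of the Unproject idea, assuming projection, view, and world are the same matrices you already pass to your custom shader (those names are placeholders here, not from your code):
// Convert the mouse position back to grid/world space by reversing the
// world * view * projection transform used for drawing.
MouseState mouse = Mouse.GetState();

// Unproject expects a depth value in [0..1]; with an orthographic
// projection and a flat grid, 0 (the near plane) works.
Vector3 screenPoint = new Vector3(mouse.X, mouse.Y, 0f);

Vector3 worldPoint = GraphicsDevice.Viewport.Unproject(
    screenPoint,
    projection,   // the orthographic projection matrix from the shader setup
    view,         // the identity view matrix
    world);       // the translated/scaled world matrix built in Update()

PointF gridPos = new PointF(worldPoint.X, worldPoint.Y);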
