I have a tile system written in XNA, and there are problems with the tiles fitting together.
What I mean is that sometimes tiles are separated by 1 pixel (maybe 2 pixels? I can't tell) when they are supposed to fit together perfectly. I am certain the math I used to place them is right, so I don't know what is causing the issue.
Surprisingly, the issue goes away when I raise the size of my tiles (a double) to 1000. Size is only relative to my camera zoom, so this does not affect gameplay at all, but it bothers me that I have to do this.
Any ideas on what could be causing this?
Edit: in fact, anything below a tile size of 995 has the issue, but anything above is fine, so this looks like some kind of precision issue. Is double math more accurate with large numbers or something?
Well, pixels are integers, and if the sizes of your tiles are doubles, you must be doing some sort of conversion to get pixels, which is probably where the separation comes from.
For example, if tile A starts at 0 and is 9.9 in length, where do you put the next tile?
If you are rounding in this case, you would use Math.Round(value, 0, MidpointRounding.AwayFromZero), because Math.Round(value) does not return what you were mathematically taught "rounding" to be.
I can't remember the specifics, but by default it uses banker's rounding (midpoints go to the nearest even number), which produces results like Math.Round(0.5) = 0 and Math.Round(1.5) = 2!
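A quick way to see the difference (a minimal sketch; the values shown are what the default banker's rounding actually produces):
// Default Math.Round uses banker's rounding: midpoints go to the nearest even number.
Console.WriteLine(Math.Round(0.5));   // 0
Console.WriteLine(Math.Round(1.5));   // 2
Console.WriteLine(Math.Round(2.5));   // 2
// MidpointRounding.AwayFromZero gives the "schoolbook" rounding most people expect.
Console.WriteLine(Math.Round(0.5, 0, MidpointRounding.AwayFromZero)); // 1
Console.WriteLine(Math.Round(2.5, 0, MidpointRounding.AwayFromZero)); // 3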
Hard to diagnose without seeing how you are handling tile placement. Are your tiles fixed width and height or variable?
Here is how I lay out my level consisting of tiles of size 32x32.
The Tile class contains the tile's texture and its grid position.
Iterate through the Tile collection and draw each tile at:
X = gridPosition.X * textureWidth
Y = gridPosition.Y * textureHeight
So, if you have a tile of 32x32 at grid position (0, 0), then it will be drawn at (0, 0).
And if you have a tile of 32x32 at grid position (0, 1), then it will be drawn at (0, 32).
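As a rough sketch (assuming a hypothetical Tile class with a Texture and a GridPosition, and a SpriteBatch that has already begun):
// Hypothetical Tile class: holds the texture and the integer grid cell it occupies.
class Tile
{
    public Texture2D Texture;
    public Point GridPosition;
}

// Draw each tile at grid position * texture size so neighbours line up exactly.
void DrawTiles(SpriteBatch spriteBatch, List<Tile> tiles)
{
    foreach (Tile tile in tiles)
    {
        Vector2 position = new Vector2(
            tile.GridPosition.X * tile.Texture.Width,
            tile.GridPosition.Y * tile.Texture.Height);

        spriteBatch.Draw(tile.Texture, position, Color.White);
    }
}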
Say I have the following code for an image that's 200x100 or any arbitrary size really:
Image image = Bitmap.FromFile(fileName);
image.RotateFlip(RotateFlipType.Rotate90FlipNone);
image.Save(fileName);
Is it safe to assume that the output size is 100x200? That the width and height have exactly swapped? My coworker is convinced it may not be guaranteed. I think the matrix math involved is exact and reliable. Who's right? If it matters, we're working with TIFF images.
The Image.RotateFlip method only does rotations in exact multiples of 90 degrees (0, 90, 180, 270).
This means that after you use it, you will always end up with one of two results:
1. Either the width and height remain unchanged (for example, when you flip without rotating, or rotate 180 degrees).
2. Or the width and height are exactly swapped.
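If you want to reassure your coworker, a quick check along these lines (a minimal sketch using System.Drawing; fileName is whatever test image you have) shows the dimensions swap exactly:
// Load the image, remember its size, rotate 90 degrees, and compare.
using (Image image = Image.FromFile(fileName))
{
    int oldWidth = image.Width;
    int oldHeight = image.Height;

    image.RotateFlip(RotateFlipType.Rotate90FlipNone);

    // For a 90- or 270-degree rotation the dimensions are exactly swapped.
    Console.WriteLine(image.Width == oldHeight && image.Height == oldWidth); // True
}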
Bonus info:
This type of right-angle rotation is "lossless", meaning the color values are preserved and the operation can be reversed without any loss of pixel data.
On the other hand, any algorithm that rotates by an arbitrary angle that is not a multiple of 90 degrees will be lossy: the pixel locations and/or exact color values are not guaranteed to be restored by a counter-rotation.
I'm adding lighting to my XNA 2D tile based game.
I found this article useful, but the way it's done doesn't support collision. What I'd like is a method that does the following:
An always-lit point
Collision (if a light ray hits a block, dim the next block by some amount, and so on until it's dark, to simulate shadows)
I've been searching around for quite a while with no luck. (I did find Catalin's tutorial, but it seemed a bit advanced for me and didn't apply well to tiles, since it redraws the entire game for each light point.)
I'll share my method for applying a smooth lighting effect to a 2D tile grid. ClassicThunder's answer provides a nice link for shadows.
First off, we need to calculate the lighting value of each tile, which will be blurred later. Let me illustrate how this works before I get into the code.
Basically, we loop through all the tiles, starting from the top. If a tile is blank, we set the CurrentLight variable to maximum brightness; if we find a solid tile, we set its light to the CurrentLight value and subtract an "absorption" amount from CurrentLight. That way, when the next solid tile is set to the CurrentLight value, it will be slightly darker. This is repeated until the whole array has been iterated.
Now there will be a nice top-to-bottom lighting effect, but it isn't that great on its own. We repeat the process three more times: bottom to top, left to right, and right to left. It can be repeated even more times for better quality.
Basically, run this code on every tile in the loop (a fuller sketch of one pass follows the snippet):
if (tile.Light > CurrentLight) // If this tile is brighter than the running value, take its light
    CurrentLight = tile.Light;
else if (CurrentLight != 0f) // If it is darker, and the running value isn't fully dark, light the tile
    tile.Light = CurrentLight;

if (tile.Light == CurrentLight) // If the tile now carries the running value, subtract its absorption
    CurrentLight -= tile.Absorb;
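To make the pass structure concrete, here is a minimal sketch of one full top-to-bottom pass over a hypothetical Tile[,] grid with Light, Absorb and Solid fields (the other three passes just change the iteration order):
// One vertical pass, column by column, top to bottom.
for (int x = 0; x < tiles.GetLength(0); x++)
{
    float currentLight = 1f; // start each column at full brightness

    for (int y = 0; y < tiles.GetLength(1); y++)
    {
        Tile tile = tiles[x, y];

        if (!tile.Solid)
        {
            currentLight = 1f; // blank tiles reset the running light to maximum
            continue;
        }

        if (tile.Light > currentLight)
            currentLight = tile.Light;   // brighter tile: take its value
        else if (currentLight != 0f)
            tile.Light = currentLight;   // darker tile: light it with the running value

        if (tile.Light == currentLight)
            currentLight -= tile.Absorb; // absorb some light for the next tile down
    }
}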
And there you go: nice tile lighting. However, if you want a less "pixelized" look, you can check out my question on GameDev for that.
For per-pixel lighting, you might have to look somewhere else, because I don't know about that.
For per-tile lighting,
a few of the overloads of SpriteBatch.Draw take a Color. When you pass Color.White, the sprite is drawn with its normal colors.
Use color multiplication by creating a new Color(yourColor.R * brightness, yourColor.G * brightness, yourColor.B * brightness, 255).
Basically, to get that brightness float, work out a formula that calculates the brightness of the block from the nearby lights (stored in an array or list, probably). Since no normals are needed for 2D games, this formula should be relatively easy.
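As a minimal sketch (brightness is a hypothetical 0-1 value you would compute from your lights; tileTexture and position are whatever you already draw with):
// Scale the tile's base color by a 0-1 brightness value, keeping alpha opaque.
Color tint = Color.White;
float brightness = 0.6f; // hypothetical value computed from nearby lights

Color lit = new Color(
    (int)(tint.R * brightness),
    (int)(tint.G * brightness),
    (int)(tint.B * brightness),
    255);

spriteBatch.Draw(tileTexture, position, lit);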
I'm developing a UI for a school project, and I've tried methods similar to those listed here for scaling my textures, but here is the issue:
Our project is developed at 1440 x 900, so I've made my own images to fit that screen resolution. When we demo the project in class, the projector can only render up to 1024 x 768, so many things on the screen go missing. I have added window resizing capabilities, and I'm doing my scaling like this: I have my own class called "Button" which has a Texture2D and a Vector2 position, constructed by Button(Texture2D img, float width, float height).
My idea is to set the position of the image as a percentage of the window width and height, so I'm setting the position of the image to a number between 0 and 1 and then multiplying by the window width and height to keep everything scaled properly.
(This code is not exact syntax; I'm just trying to convey the point.)
Button button = new Button(texture, 0.01f, 0.01f);
int height = (int)(GraphicsDevice.Viewport.Height * button.Position.Y);
int width = (int)(GraphicsDevice.Viewport.Width * button.Position.X);
Rectangle rect = new Rectangle(0, 0, width, height);

spriteBatch.Begin();
spriteBatch.Draw(button.Img, rect, Color.White);
spriteBatch.End();
It doesn't end up scaling anything when I draw it and resize the window by dragging the mouse around. If I hard-code a different back buffer height and width to begin with, the image stays around the same size regardless of resolution, except that the smaller the resolution, the more pixelated the image looks.
What is the best way to design my program to allow for dynamic Texture2D scaling?
As Hannesh said, if you run it in fullscreen you won't have these problems. However, you also have a fundamental problem with the way you are doing this. Instead of using the position of the sprite, which will not change at all during window resize, you must use the size of the sprite. I often do this using a property called Scale in my Sprite class. So instead of clamping the position of the sprite between 0 and 1, you should be clamping the Size property of the sprite between 0 and 1. Then as you rescale the window it will rescale the sprites.
In my opinion, a better way to do this is to have a default resolution, in your case 1440 x 900. Then, if the window is rescaled, just multiply all sprites' scaling factors by the ratio of the new screensize to the old screensize. This takes only 1 multiplication per resize, instead of a multiplication per update (which is what your method will do, because you have to convert from the clamped 0-1 value to the real scale every update).
Also, the effects you noticed during manual rescaling of the sprites are normal. Rescaling images to arbitrary sizes causes artifacts in the rendered image because the graphics device doesn't know what to do at most sizes. A good way to get around this is to use filler art during development and then create the final art in the correct resolution(s). Obviously this doesn't apply in your situation, because you are resizing a window to an arbitrary size, but in games you will usually only be able to switch between certain fixed resolutions.
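A minimal sketch of the ratio idea (assuming a design resolution of 1440 x 900 and a hypothetical Sprite class with Scale, DesignPosition and ScreenPosition fields):
// Design resolution the art was authored for.
const float DesignWidth = 1440f;
const float DesignHeight = 900f;

// Call this once whenever the window is resized.
void OnResize(GraphicsDevice graphicsDevice)
{
    float scaleX = graphicsDevice.Viewport.Width / DesignWidth;
    float scaleY = graphicsDevice.Viewport.Height / DesignHeight;

    foreach (Sprite sprite in sprites) // hypothetical sprite collection
    {
        // Scale and position are stored in design-resolution units and rescaled here.
        sprite.Scale = new Vector2(scaleX, scaleY);
        sprite.ScreenPosition = sprite.DesignPosition * new Vector2(scaleX, scaleY);
    }
}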
It might be that my math is rusty, or that I'm just stuck in a box after trying to solve this for so long; either way, I need your help.
Background: I'm making a 2D game in C# using XNA. In that game I want a camera that can zoom in and out so that a certain set of objects is always in view. Needless to say, the objects move in two dimensions while the camera moves in three.
Situation: I'm currently using basic trigonometry to calculate the height the camera should be at for all objects to be visible. I also position the camera between those objects.
It looks something like this:
1. Loop through all objects to find the outer edges of what has to be shown: farRight, farLeft, farUp, farDown.
2. When we know those edges, calculate the center, which is also the camera position:
CenterX = farLeft + (farRight - farLeft) * 0.5f;
CenterY = farUp + (farDown - farUp) * 0.5f;
3. Loop through the edges to find the largest offset from the camera position, i.e. the furthest distance from the center of the screen.
4. Using that largest distance value, we can easily calculate the height needed to show all of those objects (points), as in the sketch after these steps:
float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
float height = (float)Math.Tan(MathHelper.ToRadians(T)) * (length);
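Putting the steps together, a minimal sketch (assuming a hypothetical object list where each object exposes a Vector2 Position; Constants.CAMERA_FIELDOFVIEW is in degrees, as in the snippet above):
// Find the extents of everything that has to stay in view.
float farLeft = float.MaxValue, farRight = float.MinValue;
float farUp = float.MaxValue, farDown = float.MinValue;

foreach (GameObject obj in objects) // hypothetical objects with a Vector2 Position
{
    farLeft = Math.Min(farLeft, obj.Position.X);
    farRight = Math.Max(farRight, obj.Position.X);
    farUp = Math.Min(farUp, obj.Position.Y);
    farDown = Math.Max(farDown, obj.Position.Y);
}

// Center the camera between the extents.
float centerX = farLeft + (farRight - farLeft) * 0.5f;
float centerY = farUp + (farDown - farUp) * 0.5f;

// Largest distance from the center to any edge.
float length = Math.Max(
    Math.Max(farRight - centerX, centerX - farLeft),
    Math.Max(farDown - centerY, centerY - farUp));

// Height at which half the field of view just covers that distance.
float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
float height = (float)Math.Tan(MathHelper.ToRadians(T)) * length;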
So far so good; the camera positions itself perfectly based on these calculations.
Problem:
a) My render target is 1280 x 720 with a field of view of 45 degrees, so one always sees a bit more on the X-axis, 560 pixels more in fact. This is not a problem per se, but it ties into b)...
b) I want the camera to be a bit further out than it is, so that one sees a bit more of what is happening beyond the furthest point. Sure, this already happens on the X-axis, but that is really just a side effect of my flawed logic. I want to see more on both the X- and Y-axes, and to control that behavior.
Question
So, to clarify: I would like some input on a way to make the camera position itself so that the following holds:
Objects never get closer than, say, 150 pixels to the edge of the screen on the X-axis and 100 pixels on the Y-axis. To achieve this, the camera should position itself along the Z-axis so that the field of view covers it all.
I don't need help with the coding, just the math and logic for calculating the height of my camera. As you can probably see, I have a hard time wrapping my head around this, and an even harder time trying to explain it to you.
If anyone out there has been dealing with this or is just better than me at math, I'd appreciate whatever you have to say! :)
Don't you just need to add or subtract 150 or 100 pixels' worth of margin (depending on which edge you are looking at) to each distance measurement in your loop at step 3, and carry that larger value into length at step 4? Or am I missing something?
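As a minimal sketch of that idea (largestDistanceX/Y, paddingX and paddingY are hypothetical values, with the padding already converted into the same world units as the distances from step 3):
// Pad the largest horizontal and vertical distances before computing the height,
// so the furthest object ends up paddingX / paddingY away from the screen edge.
float paddedX = largestDistanceX + paddingX;
float paddedY = largestDistanceY + paddingY;
float length = Math.Max(paddedX, paddedY);

float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
float height = (float)Math.Tan(MathHelper.ToRadians(T)) * length;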
I can't explore this area further at the moment, but if anyone has the same issue and is not satisfied by the provided answer, there is another possibility in XNA:
Viewport.Unproject()
This nifty feature converts a screen space coordinate to a world space one.
Viewport.Project()
does the opposite, converting world space to screen space. I just thought someone might want to go further than I did. As much as my OCD hates leaving things imperfect, I can't keep perfecting this... yet.
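For reference, a minimal sketch of how these calls are typically used (screenX, screenY and worldPosition are hypothetical; view and projection are whatever matrices your camera already builds):
Viewport viewport = GraphicsDevice.Viewport;

// Screen space -> world space: unproject the same pixel at the near and far planes
// to get a ray you can intersect with your 2D plane.
Vector3 nearPoint = viewport.Unproject(new Vector3(screenX, screenY, 0f), projection, view, Matrix.Identity);
Vector3 farPoint = viewport.Unproject(new Vector3(screenX, screenY, 1f), projection, view, Matrix.Identity);

// World space -> screen space: where a world position lands on the screen.
Vector3 screenPos = viewport.Project(worldPosition, projection, view, Matrix.Identity);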
I have an array of Point variables. When drawn using Graphics.DrawLine, they create the expected image. My problem is that (0, 0) is actually the center of the image (not the top left of my canvas, as expected). The X and Y coordinates in my Points can be negative.
When I try to draw this to my Image, I of course get only a quarter of the total image, as the rest is drawn outside the bounds of my canvas. How do I center the drawing correctly on my canvas?
I know the dimensions of the image I want to draw, and I know where (0, 0) should end up: (width / 2, height / 2).
I suppose I could translate each and every Point, but that seems like the hard way to do this.
TranslateTransform can map the coordinates for you if you set up a transformation in your drawing handler.
Graphics.TranslateTransform # MSDN
Or, map your coordinates by adding half the width and half the height of the desired viewing area to each coordinate.
Also, you may need to scale your coordinates. You may use Graphics.ScaleTransform to do this.
Graphics.ScaleTransform # MSDN
If you don't wish to use this, then multiply the X coordinates by the factor you wish to stretch the width by, and the Y coordinates by the factor you wish to stretch the height by: 1 for 100%, 1.2 for 120%, 0.8 for 80%, etc.
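A minimal sketch of the transform approach (assuming a Paint handler, an array of centered points called points, and the canvas dimensions):
using System.Drawing;

// Inside a Paint handler: move the origin to the middle of the canvas so that
// points with negative coordinates land inside the visible area.
void DrawCentered(Graphics g, Point[] points, int canvasWidth, int canvasHeight)
{
    g.TranslateTransform(canvasWidth / 2f, canvasHeight / 2f);

    // Optionally scale as well, e.g. 1.2f to stretch everything by 120%.
    // g.ScaleTransform(1.2f, 1.2f);

    g.DrawLines(Pens.Black, points);
}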
Welcome to the Windows version of the Cartesian plane. Your last statement is correct: you do have to offset each and every point. The only real help you can give yourself is to put the offset logic in a separate method to keep your main drawing code clean.
When creating the array, add an offset equal to half the width to each X value and an offset equal to half the height to each Y value. That way, when the points are drawn, they're in the expected position.
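For example, a minimal sketch of that offsetting (points, canvasWidth, canvasHeight and graphics are whatever you already have):
// Shift every centered point so that (0, 0) maps to the middle of the canvas.
Point[] offsetPoints = new Point[points.Length];
for (int i = 0; i < points.Length; i++)
    offsetPoints[i] = new Point(points[i].X + canvasWidth / 2, points[i].Y + canvasHeight / 2);

graphics.DrawLines(Pens.Black, offsetPoints);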