I am currently working on a game, and I've run into the following problem:
I want to draw some results as a graph on top of my Canvas, and so far I've found that my Canvas needs to be in Screen Space - Camera mode for this to work, as the LineRenderer is a 3D object and will be covered by a Screen Space - Overlay canvas otherwise.
And I've actually got it to work with this; it looks like the following:
But the problem I've encountered is that if I increase the screen size, e.g. by stretching the Game view or maximizing it, the line disappears, even though it has a negative z relative to all my UI elements and therefore appears in front of them in the Editor view:
If I try to fix this by applying a greater negative z-value relative to the screen size, the lines get distorted as they get closer and closer to the camera, and changing their alignment from View to the z-axis didn't help either.
What makes this even more confusing is that this happens to lines that are drawn lower (smaller y-value) first, meaning a line at the bottom of my graph disappears earlier. I really don't know why this is happening. Any help would be appreciated.
For 3D objects mixed with UI elements, I recommend using a separate camera with a greater depth than the camera drawing the standard UI elements. This way your 3D objects will always be rendered on top of the UI elements, and you won't have to worry about Z positions.
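A minimal sketch of that setup, assuming the LineRenderer objects sit on a dedicated layer (here called "Graph" -- a layer you would create yourself):

using UnityEngine;

// Attach to an empty GameObject; the same values can also be set in the Inspector.
public class GraphCameraSetup : MonoBehaviour
{
    void Start()
    {
        var graphCam = gameObject.AddComponent<Camera>();
        graphCam.depth = 1;                                // higher depth = rendered after (on top of) the UI camera
        graphCam.clearFlags = CameraClearFlags.Depth;      // keep what the UI camera already drew
        graphCam.cullingMask = LayerMask.GetMask("Graph"); // render only the graph lines
    }
}

Just make sure the UI camera does not also render the "Graph" layer, e.g. by removing that layer from its culling mask.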
I'm currently creating a large map that consists of a lot of rectangles (33,844), each of which has a unique name (label) that I'm drawing on top of it using a SpriteFont.
Drawing all of the rectangles takes no performance hit at all. But, as soon as I try to write all of their labels with DrawString(), my performance goes into the dumps.
In my head, I would like to draw all my rectangles and text to one texture all at once, and only have to keep redrawing that entire finished texture. My issue is, this is an enormous map, and some of the coordinates for the rectangles are very high (example: one slot's x is 14869 and y is 23622), and they're far bigger than a Texture2D allows.
Since this is a map, I really only need to draw the entire thing once, and then allow the user to scroll/move around it. There's no need for me to continually redraw all of the individual rectangles and their labels.
Does anyone have experience with this type of situation?
Try to render only the labels that are visible on screen, and if the user can zoom out far enough, just don't render them at all.
Text rendering is expensive, since it basically creates a rectangle (a quad) for every character in the string and then applies the font's RGBA texture to it. So the number of quads grows with the number of characters you write -- four new vertices per character.
Depending on what you write you could simply create a texture with the text already on it and render that, but it won't be very dynamic.
EDIT: I need to clarify something.
There's no need for me to continually redraw all of the individual rectangles and their labels.
This is wrong. You have to draw the whole thing every frame. Sure, it doesn't grow memory-wise, but it is still a lot to render, and you will need to render it every frame.
But as I said: try to render only the labels and rectangles that collide with the screen boundaries, and you should be fine.
There are two ways to solve your problem.
You can either render your map to a RenderTarget (2D/3D), or you can cull the rectangles/text that are off-screen. I am not 100% sure that RenderTargets can go as large as you would need, but you could always segment your map into multiple smaller RenderTargets.
For more information on RenderTargets, you might want to check out RB Whitaker's article on them: http://rbwhitaker.wikidot.com/render-to-texture
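A minimal sketch of the render-to-texture approach, assuming MonoGame/XNA and a hypothetical DrawMapContents method that issues your existing rectangle and DrawString calls:

RenderTarget2D mapCache;

// Run once (e.g. at load time), not every frame.
public void BakeMap(GraphicsDevice device, SpriteBatch batch)
{
    // One segment of the map; a map larger than the maximum texture size
    // (often 4096x4096) would need several of these.
    mapCache = new RenderTarget2D(device, 4096, 4096);

    device.SetRenderTarget(mapCache);
    device.Clear(Color.Transparent);

    batch.Begin();
    DrawMapContents(batch); // your existing rectangle + DrawString calls
    batch.End();

    device.SetRenderTarget(null); // back to the backbuffer
}

// Every frame: draw the one cached texture instead of thousands of strings.
public void DrawMap(SpriteBatch batch, Vector2 cameraOffset)
{
    batch.Begin();
    batch.Draw(mapCache, -cameraOffset, Color.White);
    batch.End();
}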
Culling, in case you are not familiar with the term in this context, means rendering only what is visible to the end user. There are various ways culling can be implemented. It does, however, require you to have already implemented a camera (or some type of view region): you perform a basic axis-aligned bounding box collision test (AABB collision, which MonoGame's Rectangle supports out of the box) of each rectangle against the camera's viewport, and only render it if there is a collision.
A basic implementation of culling would look something like this:
Rectangle myRect = new Rectangle(100, 100, 48, 32);

public void DrawMapItem(SpriteBatch batch, Rectangle viewRegion)
{
    // Intersects (rather than Contains) also keeps objects that are
    // only partially inside the view region.
    if (viewRegion.Intersects(myRect))
    {
        // Render your object here with the SpriteBatch
    }
}
Where 'viewRegion' is the area of your world that the camera/end user can actually see.
You can also combine the two methods, and render the map to multiple render targets, and then cull the render targets.
Hope this helps!
So I created this window editor in WPF that helps me create forms quickly. One feature I've worked on is a guideline tool. At its core, it just creates lines to help keep my UI elements organized on the screen. I will show you an example: the long black lines are the guidelines I spoke about earlier.
Now, I noticed that a lot of art programs (e.g. Photoshop) and popular IDEs that implement form designers have a "snap-to" feature, where a UI element will snap to a guideline or to another UI element in order to maintain alignment. Something like this:
I already have the guidelines showing up in my editor. Now, what I would like help understanding is how I would go about implementing the "snap-to" feature. I'm not asking for code, just a breakdown (a visual breakdown would be most welcome).
These are my questions:
How does an object know if one of its edges (top, bottom, left, right) touched a line?
How do I unsnap the UI element if the user keeps moving the mouse past the guideline?
If I have (say) 10 lines how do I make sure that the object attaches to the nearest line(s)?
UPDATE
When an object moves or is resized, keep track of its actual size/location relative to the mouse, and separately keep track of a snapped version of the same information. Check whether a given actual edge is within some arbitrary distance of a line -- say 4 pixels (arbitrary WPF units, really). If it is, set the snapped value to the position of the line it's close to. You still have the actual mouse-relative values as well, so you know to unsnap it if the user keeps on dragging and the edge leaves that 4-unit zone.
When an object is being resized, at most two edges of the bounding box will be changing position (assuming you can drag corners as well as edges). When you're moving an object, all four edges of the bounding box will move.
So you need to keep track of which edges are moving, and only do snap-line proximity testing on those edges. When you're moving an object, snapping the left or top edge to a line is easy: that's just the position of the object. But if you snap the right or bottom edge to a line, you're setting
snappedPos.X = nearestVerticalSnapLine.X - draggedObject.Width;
or
snappedPos.Y = nearestHorizontalSnapLine.Y - draggedObject.Height;
You may also have cases where opposite edges will both be in proximity to lines: Say you're dragging a seven-unit square across a ten-unit grid. When it's inside a grid box, all four sides will be in proximity to a grid line. Which wins? The closer one.
Locating the nearest snap lines is easy -- just compare each line's distance to the edge you're testing and keep the smallest one.
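A minimal sketch of that per-edge test, assuming vertical guidelines stored as an array of X coordinates (the names here are illustrative, not from the original editor):

const double SnapDistance = 4.0; // arbitrary WPF units

// Returns the snapped X for an edge, or the actual X if no line is close enough.
double SnapEdge(double actualEdgeX, double[] guidelineXs)
{
    double bestX = actualEdgeX;
    double bestDistance = SnapDistance;

    foreach (double lineX in guidelineXs)
    {
        double distance = Math.Abs(lineX - actualEdgeX);
        if (distance < bestDistance) // the nearest line wins
        {
            bestDistance = distance;
            bestX = lineX;
        }
    }
    return bestX;
}

While dragging, you keep the actual (mouse-relative) position and derive the snapped one from it each time, e.g. snappedPos.X = SnapEdge(actualPos.X, guidelineXs) for the left edge, or SnapEdge(actualPos.X + width, guidelineXs) - width for the right edge.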
How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting the anti-aliasing values in the Quality Settings and the Filter Mode in the Import Settings in the Unity Editor, but that doesn't change anything.
Ideally, I would like to keep the Filter Mode at Point (no filter) and anti-aliasing set to 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent its sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture each pixel pays attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
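As a sketch of what an integer multiple means in practice for a Unity orthographic camera (assuming, purely for illustration, sprites imported at 16 pixels per unit -- match this to your own import settings):

using UnityEngine;

// Attach this to your orthographic camera.
public class PixelPerfectCamera : MonoBehaviour
{
    public float pixelsPerUnit = 16f; // must match the sprite import setting
    public int zoom = 1;              // integer texel-to-pixel multiple (1:1, 2:1, ...)

    void Start()
    {
        // orthographicSize is half the vertical view height in world units.
        // This value maps each texel to exactly 'zoom' screen pixels.
        GetComponent<Camera>().orthographicSize = Screen.height / (2f * pixelsPerUnit * zoom);
    }
}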
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels-per-unit setting and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see that only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprites/Default shader and attaching it to the SpriteRenderer.
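If you'd rather snap in code than in the shader, a common alternative (a sketch only, assuming a known pixels-per-unit value; a fuller version would keep the unsnapped position separately so movement stays smooth) is to round the rendered position to the pixel grid each frame:

using UnityEngine;

public class PixelSnap : MonoBehaviour
{
    public float pixelsPerUnit = 16f; // match your sprite import setting

    void LateUpdate()
    {
        // Round the world position to the nearest texel boundary so the
        // sprite always lands exactly on the pixel grid.
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x * pixelsPerUnit) / pixelsPerUnit;
        p.y = Mathf.Round(p.y * pixelsPerUnit) / pixelsPerUnit;
        transform.position = p;
    }
}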
Also, I'd just like to point out that unless you are applying physics, you should never use FixedUpdate for movement. And if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached even if you're never going to use physics, to tell the engine that the Collider is going to move.
Same problem here. I noticed that the camera settings and scale are also rather important to fix the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality.
Under Quality, set the default quality to High for all platforms.
Set Anisotropic Textures to "Disabled".
Done -- the issue is resolved for me.
OK, in my XNA project I've added simple shader and model-loading code, and everything works. I created a very simple, low-detail model in 3ds Max, then exported it and imported it into XNA in FBX format.
The problem is:
If I move my simple camera some distance away from this model, one of its components starts to flicker. I tried another model and saw the same thing: some of the components start to flicker, and only when I get some distance away from the model.
This flickering (or blinking, or ...) appears only with textured objects (probably), and looks like this:
In each frame, random (or not so random) parts/pixels of the model are replaced with the object that is behind the model or its component... :(
UPDATE: Now I know the problem is in my model (I checked some other models). I don't understand why, but the Plane object gives that flickering. Then again, maybe the problem is not in the Plane object.
This is only an educated guess, but: your far plane is too far away, or your near plane is too close, or both.
A perspective camera gives you a viewable area that looks like this:
Your Z-buffer (depth buffer) covers the range between the near and far planes. A typical Z-buffer might have 24 bits of precision, giving you 2^24 possible values. The further apart your near and far planes are, the greater the world-space distance each possible value must cover. In other words, your Z-buffer is less accurate.
What you are seeing is known as "Z-fighting". This is where the Z-buffer is not accurate enough to differentiate between the depths of two given pixels, so pixels that should have been rejected as being "behind" what was already rendered get drawn instead.
(Alternately, your model has some coplanar or nearly coplanar triangles -- that is, triangles whose surfaces are too close together. Same issue: not enough precision in the Z-buffer to differentiate between the two surfaces.)
You may also wish to enable backface-culling (RasterizerState.CullCounterClockwise), if it is not already enabled. This culls triangles facing away from the camera, removing one possible source of Z-fighting.
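As a sketch of the first fix, in XNA you would tighten the planes wherever you build your projection matrix (the 0.5/500 values below are purely illustrative -- pick the tightest range that still contains your scene):

// Tighter near/far planes concentrate the 2^24 depth values over a
// smaller range, reducing Z-fighting.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,                           // 45-degree field of view
    graphics.GraphicsDevice.Viewport.AspectRatio,
    0.5f,   // near plane: push it out as far as you can get away with
    500f);  // far plane: pull it in as close as your scene allows

// And enable backface culling if it isn't already:
graphics.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;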
I have seen this happen before on models where two or more surfaces overlap in the same plane -- one surface is inside the other, but in the same plane -- so the system cannot tell which surface is in front of the other, and you usually end up with a mash-up of both surfaces.
It looks like you have a smaller rectangular surface intersecting with the larger rectangular surface that makes up the lower base shape of your model. Probably from another object inside the box? Or maybe from an object-subtraction error that left two rectangles inside each other on that surface?
Either way, modify the model so there are no longer two surfaces within each other.
I am making an RPG game using an isometric tile engine that I found here:
http://xnaresources.com/default.asp?page=TUTORIALS
However after completing the tutorial I found myself wanting to do some things with the camera that I am not sure how to do.
Firstly, I would like to zoom the camera in so that it displays a 1:1 pixel ratio.
Secondly, would it be possible to make this game 2.5D, in the sense that when the camera moves, sprites such as trees move properly? By this I mean that the bottom of the sprite stays planted while the top moves against the background, making for a very 3D-like experience. This effect is best seen in games like Diablo 2.
Here is the source code off their website:
http://www.xnaresources.com/downloads/tileengineseries9.zip
Any help would be great. Thanks!
Games like Diablo, The Sims 1 and 2, SimCity 1-3, X-COM 1 and 2, etc. were actually just 2D games. The 2.5D effect requires that tiles further away are exactly the same size as tiles nearby, and rotation in these games is restricted to 90-degree steps.
How they draw is basically the painter's algorithm: draw what is furthest away first, and overdraw with things that are nearer. Diablo is actually pretty simple; as far as I remember, it didn't introduce layers or height differences -- just a flat map. So you draw the floor tiles first (in this case back-to-front ordering isn't strictly necessary, since they are all at the same elevation), then draw the walls, characters, effects, etc. back to front.
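A minimal sketch of that back-to-front ordering in XNA, letting SpriteBatch do the sorting via layerDepth (the mapObjects collection and its FootY/Texture/ScreenPosition members, and worldHeight, are illustrative names, not from the tutorial):

// With SpriteSortMode.BackToFront, sprites with a larger layerDepth are
// drawn first, so objects lower on the screen (larger foot Y = nearer)
// get a smaller depth and overdraw what is behind them.
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);

foreach (var obj in mapObjects)
{
    // Map the object's foot position into the 0..1 depth range.
    float layerDepth = 1f - (obj.FootY / worldHeight);
    spriteBatch.Draw(obj.Texture, obj.ScreenPosition, null, Color.White,
                     0f, Vector2.Zero, 1f, SpriteEffects.None, layerDepth);
}

spriteBatch.End();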
Everything in these games was pre-rendered to bitmaps and drawn as bitmaps, even if the source was a textured 3D model.
If you want to add perspective or free rotation, then you need everything to be a 3D model. Your rendering will be simpler, because depth and render order aren't as critical: you would use z-buffering to solve those issues. The only major issue is rendering transparent parts in the right order, or else you may end up with some odd results. But even if your rendering is simpler, your animation and in-memory storage become more difficult: you need to animate 3D models instead of just stepping through an array of bitmaps. Selecting items on the screen also requires a little more work, since the position and size of the elements are no longer consistent or easily predictable.
So it depends on which features you want; that will dictate which sort of solution you can use. Either way has its pluses and minuses.