Texture/model flickering in distance (3D) - c#

OK, in my XNA project I've added simple shader and model-loading code, and everything works. I created a very simple, low-detail model in 3ds Max, then exported it and imported it into XNA in FBX format.
The problem is: if I move my simple camera some distance away from this model, one of its components starts to flicker. I tried another model and saw the same thing: some of its components start to flicker, and only once I'm a certain distance from the model.
This flickering (or blinking) appears only on textured objects (probably), and looks like this: in each frame, random (or not so random) parts/pixels of the model are replaced by whatever object is behind the model or its component... :(
UPDATE: Now I know the problem is in my model (I checked some other models). I don't understand why, but the Plane object gives that flickering. Or maybe the problem is not in the Plane object at all.

This is only an educated guess, but: your far plane is too far away, or your near plane is too close, or both.
A perspective camera gives you a viewable area that looks like this:
Your Z-buffer (depth buffer) covers the range between the near and far planes. A typical Z-buffer might have 24 bits of precision, giving you 2^24 possible values. The further apart your near and far planes are, the greater the world-space distance each possible value must cover. In other words: your Z-buffer is less accurate.
What you are seeing is known as "Z-fighting". This is where the Z-buffer is not accurate enough to differentiate between the depths of two given pixels, so pixels that should have been rejected as "behind" what was already rendered get drawn instead.
(Alternately, your model has some coplanar or nearly coplanar triangles - that is, triangles whose surfaces are too close together. Same issue: not enough precision in the Z-buffer to differentiate between the two surfaces.)
You may also wish to enable backface culling (RasterizerState.CullCounterClockwise), if it is not already enabled. This culls triangles facing away from the camera, removing one possible source of Z-fighting.
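For example, in XNA the near and far planes live in the projection matrix, so tightening the range (and enabling culling) might look like this - a minimal sketch, where the near/far values are illustrative and depend entirely on your scene's scale:

    // Push the near plane out and pull the far plane in as much as your
    // scene allows - most depth precision is spent close to the near plane.
    Matrix projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,                  // 45-degree field of view
        GraphicsDevice.Viewport.AspectRatio, // match the back buffer
        1.0f,                                // near plane: avoid tiny values like 0.001
        1000.0f);                            // far plane: avoid enormous values

    // Cull triangles facing away from the camera.
    GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;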

I have seen this happen before on models where two or more surfaces overlap in the same plane - one surface inside the other, but coplanar with it - so the system cannot tell which surface is in front of the other, and you usually end up with a mash-up of both surfaces.
It looks like you have a smaller rectangular surface intersecting the larger rectangular surface that makes up the lower base shape of your model. Perhaps from another object inside the box, or from an object-subtraction error that left two rectangles inside each other on that surface?
Either way, modify the model so that there are no longer two surfaces lying within each other.

Related

Getting "giggly" effect when slowly moving a sprite

How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting Antialiasing values in QualitySettings and Filter Mode in ImportSettings in the Unity Editor but that doesn't change anything.
Ideally, I would like to keep the Filter Mode set to Point (no filter) and anti-aliasing turned on at 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
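In Unity, for instance, you can size an orthographic camera so that each texel lands on an integer number of screen pixels. A minimal sketch (the class name is mine, and pixelsPerUnit and zoom are assumptions you'd match to your own import settings):

    using UnityEngine;

    // Attach to an orthographic camera. Assumes sprites are imported at
    // `pixelsPerUnit` and you want an integer `zoom` texel-to-pixel multiple.
    public class PixelPerfectCamera : MonoBehaviour
    {
        public float pixelsPerUnit = 16f; // must match the sprite's import setting
        public int zoom = 2;              // integer texel-to-pixel multiple

        void Start()
        {
            // orthographicSize is half the vertical view height in world units,
            // so this makes each texel exactly `zoom` screen pixels tall.
            GetComponent<Camera>().orthographicSize =
                Screen.height / (2f * pixelsPerUnit * zoom);
        }
    }

With a 720-pixel-tall screen, pixelsPerUnit = 16 and zoom = 2, that works out to an orthographicSize of 11.25, and every texel covers exactly 2x2 screen pixels.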
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels to units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprites/Default shader and attaching it to the SpriteRenderer.
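If you'd rather snap from a script than through the shader, a rough equivalent (my own sketch, not from the blog post) is to round the position to the texel grid after all movement for the frame:

    using UnityEngine;

    // Rounds this object's position to the nearest texel boundary in
    // LateUpdate, so the sprite always lands exactly on the pixel grid.
    public class PixelSnap : MonoBehaviour
    {
        public float pixelsPerUnit = 16f; // match the sprite's import setting

        void LateUpdate()
        {
            Vector3 p = transform.position;
            p.x = Mathf.Round(p.x * pixelsPerUnit) / pixelsPerUnit;
            p.y = Mathf.Round(p.y * pixelsPerUnit) / pixelsPerUnit;
            transform.position = p;
        }
    }

Note that snapping the transform directly can fight with your movement code; a common refinement is to keep the true position elsewhere and snap only a child object that holds the SpriteRenderer.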
Also, I'd just like to point out that unless you are applying physics, you should never use FixedUpdate. And if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached, even if you're never going to use physics, to tell the engine that the Collider is going to move.
Same problem here. I noticed that the camera settings and scale are also rather important for fixing the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality.
Under Quality, set the default quality level to High for all.
Set Anisotropic Textures to "Disabled".
Done - the issue is resolved for me.

Draw points along the surface of a cube

So for an assignment I have to morph a cube into a sphere. All that's required is a bunch of points along the surface of a cube (they don't need to be connected or actually lie on a cube mesh, but they have to form a cube, though connections would make it easier to look at) that smoothly translate into a sphere shape.
The main problem is that I never learned how to make points in XNA 4.0, and from what I've seen it's very different from what we did in OpenGL (we learned the old version in a previous class).
Would anyone be able to help me figure out making the cube shape I need? Each side would have 10x10 points, with the points on an edge shared by the surfaces that meet at that edge. The structure would need to be easy to copy or modify, since I need the start state, the end state, and an intermediate state to translate the points between the two.
If I left out anything that could be important, let me know.
First of all, you should familiarise yourself with the Primitives3D sample. It illustrates all of the rendering APIs that you need.
Here is how I would approach this problem (you can look up these classes and methods on MSDN and it will hopefully help you flesh out the details):
Create an array of Vector3[] that represents an appropriately tessellated unit cube around the origin (0,0,0)
Create a second array of Vector3[] and use Vector3.Normalize to copy in the vertices from your first array. This will create a unit sphere with vertices that match up with the original cube.
Create an array of VertexPositionColor[]. Fill in the colour data however you like.
Use Vector3.Lerp to loop through the first two arrays, interpolating each element to set positions in the third array. This gives you a parameter you can animate - you will have to do this each frame (in Update is probably best).
Create an array of indices (short[]) that describes a triangle list of the tessellated cube (and, in turn, the sphere and animation between the two).
Set up a BasicEffect for rendering. This involves setting its World, View and Projection matrices and maybe turning on VertexColorEnabled. If you want lighting, see the sample for details (you'll need to use a vertex type with normals, and animate those normals correctly).
The way to render with an effect is: foreach(EffectPass effectPass in effect.CurrentTechnique.Passes) { effectPass.Apply(); /* your stuff here */ }
You could create a DynamicVertexBuffer and IndexBuffer and draw with those. But for something simple like this, DrawUserIndexedPrimitives is much easier (here is a recent answer with some details, and here's another one with a complete example of BasicEffect).
Note that you should only create these objects at startup (LoadContent is a good place). During rendering you're simply using them.
If you have trouble with your 3D rendering, and you're drawing text or sprites, see this article for how to fix it.
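To make those steps concrete, here is a minimal sketch of how the morph and the draw call might fit together (the names cubeVertices, vertices, indices and basicEffect are illustrative; the tessellation from step 1 and the index list from step 5 are assumed to exist already):

    // Step 2: a unit sphere whose vertices pair up with the cube's.
    Vector3[] sphereVertices = new Vector3[cubeVertices.Length];
    for (int i = 0; i < cubeVertices.Length; i++)
        sphereVertices[i] = Vector3.Normalize(cubeVertices[i]);

    // Step 4, in Update: animate t between 0 (cube) and 1 (sphere),
    // then re-interpolate the positions actually drawn.
    for (int i = 0; i < vertices.Length; i++)
        vertices[i].Position = Vector3.Lerp(cubeVertices[i], sphereVertices[i], t);

    // In Draw: render with the BasicEffect from step 6.
    foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserIndexedPrimitives(
            PrimitiveType.TriangleList,
            vertices, 0, vertices.Length,
            indices, 0, indices.Length / 3);
    }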

Detect mouseover of non-square part of an Image

So I am working on a Risk-type game in XNA/C#. I have a map, similar to this one, and I need to be able to detect mouseovers on each territory (number). If these areas were squares it would be easy, as each could be represented by a rectangle. However, they are polygons of different shapes and sizes. Is there a polygon shape that behaves similarly to a rectangle? If there isn't, how would I go about doing this?
I suggest this: attach a colour to each number and recreate your picture in those colours, so that every shape is in its own particular colour. Don't draw it on screen; use it only as a reference map. Then, when the user clicks or moves the mouse over your original map, you simply project the mouse coordinates into the colour map, check the colour of the pixel lying under the mouse, and because you have each colour associated with a territory number...
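A sketch of that lookup in XNA (colorMap is the hypothetical reference texture, the same size as the on-screen map, and territoryByColor is a dictionary you fill in with packed colour -> territory number):

    Dictionary<uint, int> territoryByColor = new Dictionary<uint, int>();

    // Pull the pixels once at load time - Texture2D.GetData is far too
    // slow to call every frame.
    Color[] colorData = new Color[colorMap.Width * colorMap.Height];
    colorMap.GetData(colorData);

    int GetTerritoryAt(int mouseX, int mouseY)
    {
        // Assumes the mouse coordinates are already inside the map's bounds.
        Color c = colorData[mouseY * colorMap.Width + mouseX];
        int territory;
        return territoryByColor.TryGetValue(c.PackedValue, out territory)
            ? territory : -1;   // -1: not over any territory
    }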
This is not C#-specific (I've never written anything in the language, so I have no idea what APIs there are), but there are two algorithms that come to mind for detecting whether a point is inside a polygon (which can be used to detect whether the mouse point is over a polygon/map shape).
One is based on ray casting, where you cast a ray in one direction from the (mouse) point to "infinity" (the edge of the board in this case) and count the number of times it crosses the polygon's edges. If the count is odd, the point is inside the polygon; if it is even, the point is outside.
A wiki link to it: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
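Here's a compact C# sketch of that even-odd ray-casting test, assuming the polygon is given as an ordered array of vertices:

    // Casts a horizontal ray from p to +infinity and counts edge crossings.
    // Odd count = inside, even count = outside.
    static bool PointInPolygon(Vector2 p, Vector2[] poly)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            // Does edge (poly[j] -> poly[i]) straddle the ray's Y, and is
            // the crossing point to the right of p?
            if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
                p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                      (poly[j].Y - poly[i].Y) + poly[i].X)
            {
                inside = !inside;
            }
        }
        return inside;
    }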
The other algorithm that comes to mind works only for triangles, I think, but it can be simpler to implement (and glancing at your shapes, they can easily be broken down into triangles; some already are). It checks whether the point is on the same (internal) "side" of every edge of the triangle. To find out which "side" a point is on relative to an edge, you create two vectors: the first is the edge itself (between its two points), and the second runs from the first point of that edge to the input point. Then you calculate the cross product of those two vectors. The result will be negative or positive, which can be used to determine the "direction".
A link to it: http://www.blackpawn.com/texts/pointinpoly/default.html
(On that page is another algorithm that can also work for triangles)
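A sketch of that same-side test for a triangle, using XNA's Vector2 (the sign of the 2D cross product tells you which side of an edge the point falls on):

    // 2D cross product of (b - a) and (p - a): positive on one side of
    // the edge a -> b, negative on the other, zero exactly on the line.
    static float Cross(Vector2 a, Vector2 b, Vector2 p)
    {
        return (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);
    }

    // p is inside the triangle if it is on the same side of all three edges.
    static bool PointInTriangle(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
    {
        float d1 = Cross(a, b, p);
        float d2 = Cross(b, c, p);
        float d3 = Cross(c, a, p);
        bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
        bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
        return !(hasNeg && hasPos); // all same side (or on an edge)
    }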
Hit testing on a polygon is not so difficult to do in real time. You could use a KD-tree for optimisation if the map is huge. Otherwise, find a simple Contains method for a polygon and use that. I have one on another computer - let me know if you'd like it.

isometric tile engine

I am making an RPG game using an isometric tile engine that I found here:
http://xnaresources.com/default.asp?page=TUTORIALS
However, after completing the tutorial I found myself wanting to do some things with the camera that I am not sure how to do.
Firstly, I would like to zoom the camera in so that it displays a 1:1 pixel ratio.
Secondly, would it be possible to make this game 2.5D, in the sense that when the camera moves, sprites such as trees move properly? By this I mean that the bottom of the sprite stays planted while the top moves against the background, creating a very 3D-like experience. This effect can best be seen in games like Diablo 2.
Here is the source code from their website:
http://www.xnaresources.com/downloads/tileengineseries9.zip
Any help would be great. Thanks!
Games like Diablo, The Sims 1 and 2, SimCity 1-3, X-COM 1 and 2, etc. were actually just 2D games. The 2.5D effect requires that tiles further away are exactly the same size as tiles nearby, and rotation in these games is restricted to 90-degree steps.
How they draw is basically the painter's algorithm: draw what is furthest away first, and overdraw it with the things that are nearer. Diablo is actually pretty simple; as far as I remember it didn't introduce layers or height differences - just a flat map. So you draw the floor tiles first (in this case back-to-front order isn't strictly necessary, since they are all at the same elevation), then draw the walls, characters, effects, etc. back to front.
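In XNA, SpriteBatch can do the back-to-front sorting for you if you encode each tile's depth into layerDepth. A rough sketch (the Tile type and the depth formula are illustrative; the right formula depends on your map layout):

    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
    foreach (Tile tile in visibleTiles)
    {
        // Farther tiles (smaller MapX + MapY) get a larger layerDepth,
        // so SpriteBatch draws them first and nearer tiles overdraw them.
        float depth = 1f - (tile.MapX + tile.MapY) / (float)(mapWidth + mapHeight);
        spriteBatch.Draw(tileTexture, tile.ScreenPosition, tile.SourceRect,
                         Color.White, 0f, Vector2.Zero, 1f,
                         SpriteEffects.None, depth);
    }
    spriteBatch.End();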
Everything in these games was rendered to bitmaps and drawn as bitmaps, even though the source may have been a 3D textured model.
If you want to add perspective or free rotation, then you need everything to be a 3D model. Your rendering becomes simpler in one respect: depth and render order aren't as critical, because z-buffering solves those issues for you. The main remaining issue is rendering transparent parts in the right order, or you may end up with some odd results. However, even if the rendering is simpler, animation and in-memory storage become a bit more difficult: you need to animate 3D models instead of just stepping through an array of bitmaps. Selecting items on the screen also requires a little more work, since the position and size of elements are no longer consistent or easily predictable.
So it depends on which features you want; that will dictate which sort of solution you can use. Either way has its pluses and minuses.

Drawing massive amounts of textured cubes in XNA 4.0

I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split the texture map up into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in one file and simply use UV coordinates to map sub-textures onto the cube. This didn't do much for performance though, since the sheer number of vertices is simply too great. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and it occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've made to have cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and do RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is pasted into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs, called TechCraft:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area and put them all into one vertex buffer, and only update that buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is to hide the faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices (see the sketch after the diagram below).
_ _ _ _
|_|_| == |_ _|
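A sketch of that face-culling pass while baking a region (IsSolid and AddFace are hypothetical helpers over your cube array and vertex list):

    // Only emit a face when the neighbouring cell is empty - a face
    // shared between two solid cubes can never be seen.
    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
    for (int z = 0; z < size; z++)
    {
        if (!IsSolid(x, y, z)) continue;

        if (!IsSolid(x - 1, y, z)) AddFace(x, y, z, Face.Left);
        if (!IsSolid(x + 1, y, z)) AddFace(x, y, z, Face.Right);
        if (!IsSolid(x, y - 1, z)) AddFace(x, y, z, Face.Bottom);
        if (!IsSolid(x, y + 1, z)) AddFace(x, y, z, Face.Top);
        if (!IsSolid(x, y, z - 1)) AddFace(x, y, z, Face.Back);
        if (!IsSolid(x, y, z + 1)) AddFace(x, y, z, Face.Front);
    }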
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all - your shader may be able to calculate the vertex positions as it runs. If they are different sizes or not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes - anything behind them is simply not visible. A natural approach to this is an octree data structure, which divides 3D space into voxels (cubes). Using an octree you can quickly determine which cubes are visible and draw just those - so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes does not change much, so you may be able to use this "temporal coherence" to update your data structures and minimise the work needed to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing: it allows you to pass one model into the shader along with a stream of world positions, to "instance" that model hundreds (even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
Once you have a base model (could be one face of the block), its mesh will have an associated texture that you can then replace with whatever you want to allow you to dynamically change block texturing for each side and differing types of blocks.
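The core of the XNA 4.0 instancing draw looks roughly like this (a sketch along the lines of the linked sample; it requires the HiDef profile, the buffer and effect names are illustrative, and the shader must be written to read the second vertex stream):

    // Bind the model's geometry as stream 0 and the per-instance data
    // (e.g. one world transform per cube) as stream 1, which advances
    // once per instance rather than once per vertex.
    GraphicsDevice.SetVertexBuffers(
        new VertexBufferBinding(modelVertexBuffer, 0, 0),
        new VertexBufferBinding(instanceVertexBuffer, 0, 1));
    GraphicsDevice.Indices = modelIndexBuffer;

    foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        // One draw call renders every instance of the base model.
        GraphicsDevice.DrawInstancedPrimitives(
            PrimitiveType.TriangleList, 0, 0,
            vertexCount, 0, primitiveCount, instanceCount);
    }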
