I created a mesh in code.
The mesh itself is fine: 4 vertices and 2 triangles.
However, it renders differently on each side (see the pictures below).
Why are the front and the back different?
The side that is visible from one direction is transparent from the other.
What options should I add?
I only entered values for the vertices and triangle indices.
After you create your mesh, make sure you apply these:
mesh.RecalculateBounds();
mesh.RecalculateNormals();
Any time you modify or create a mesh from scratch, you have to recalculate its normals and bounds.
After modifying the vertices it is often useful to update the normals to reflect the change. Normals are calculated from all shared vertices.
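For reference, here is a minimal sketch of the whole flow for a quad like the one in the question (QuadBuilder is just a placeholder name, and the GameObject is assumed to already have a MeshFilter and MeshRenderer with a material):

using UnityEngine;

public class QuadBuilder : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // 4 vertices, 2 triangles - the same layout as in the question.
        mesh.vertices = new Vector3[]
        {
            new Vector3(0, 0, 0),
            new Vector3(1, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(1, 1, 0)
        };

        // The winding order determines which side is the front face.
        mesh.triangles = new int[] { 0, 2, 1, 2, 3, 1 };

        // Without these, lighting and frustum culling will misbehave.
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        GetComponent<MeshFilter>().mesh = mesh;
    }
}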
You could add "Cull Off" to your shader for render both mesh sides.
By default system renders only face side due to performance reasons,
as mentioned in this post https://answers.unity.com/questions/187252/display-both-sides.html
The reason is that graphics systems cull backfaces. Which face is the
front is usually determined by the winding order, i.e. the order in which
the vertices of a triangle are declared. Backfaces are culled because
it's an easy way to conserve computational power: it is usually not
necessary to render a face that's turned away from the viewer, such
as the inside of an object's mesh.
A code example: https://answers.unity.com/questions/209018/cutout-diffuse-shader-visible-from-both-sides-on-a.html
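If you would rather not touch the shader, an alternative is to append a mirrored copy of the geometry when building the mesh, so both sides have a front face. A sketch (note it roughly doubles the vertex and triangle counts, so "Cull Off" is usually the cheaper option):

// Duplicate the vertices and append reversed-winding triangles that
// reference the copies, so each side gets its own correct normals.
Vector3[] verts = mesh.vertices;
Vector3[] doubledVerts = new Vector3[verts.Length * 2];
verts.CopyTo(doubledVerts, 0);
verts.CopyTo(doubledVerts, verts.Length);

int[] tris = mesh.triangles;
int[] doubledTris = new int[tris.Length * 2];
tris.CopyTo(doubledTris, 0);
for (int i = 0; i < tris.Length; i += 3)
{
    // Flip the winding order and point at the duplicated vertices.
    doubledTris[tris.Length + i]     = tris[i]     + verts.Length;
    doubledTris[tris.Length + i + 1] = tris[i + 2] + verts.Length;
    doubledTris[tris.Length + i + 2] = tris[i + 1] + verts.Length;
}

mesh.vertices = doubledVerts;
mesh.triangles = doubledTris;
mesh.RecalculateNormals();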
I have created this icosahedron in code so that each triangle is its own mesh instead of one solid mesh with 20 faces. I want to add text on the outward side of each triangle, roughly centered. For a simple visual, think of a 20-sided die for D&D.
The constraint is that, while this version is a simple icosahedron with only the 20 faces, the script is written so that it can be refined recursively to add more faces and turn into an icosphere. I still want the text to scale and center correctly as the number of triangles grows.
I currently have a "triangle" prefab that holds the script each triangle will need, as well as a MeshFilter/MeshRenderer.
The only other thing in the scene is an empty GameObject with the script attached that creates the triangles and acts as the parent for all of them.
I have attempted various ways of adding text into the triangle prefab. The problem I keep running into is that, since the mesh itself does not exist in the prefab, I am not sure how to orient or scale the text so that it appears in the correct place.
I am not sure if the text needs to be completely generated at run time with the triangles, or if there is a way to add it to the prefab and then update it so that it scales and positions itself as the triangle is created.
My Google searching so far has only really brought me results that involve objects permanently in the scene, nothing on what to do with meshes generated at run time.
Is the text a texture saved as a PNG or other image format? If so, you can just create a material with the texture that holds the text.
The next step is to assign each vertex its UV coordinates, wrapping the texture onto the triangle that way.
If the text has to be generated on the fly, the best approach I can think of right now is to add a TextMesh to the triangle (since it is a prefab) and access that component.
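As a rough sketch of that run-time approach (the corners field and the TextMesh child are assumptions about your prefab setup; you may need to tweak the up vector, the sign of the normal, and the scale factor):

using UnityEngine;

public class TriangleLabel : MonoBehaviour
{
    public Vector3[] corners;  // the triangle's three world-space vertices
    public TextMesh label;     // a TextMesh child on the prefab

    public void PlaceLabel()
    {
        // Center the text on the triangle's centroid.
        Vector3 centroid = (corners[0] + corners[1] + corners[2]) / 3f;

        // The outward normal follows from the winding order.
        Vector3 normal = Vector3.Cross(corners[1] - corners[0],
                                       corners[2] - corners[0]).normalized;

        // Lift the text slightly off the surface to avoid z-fighting,
        // and face it along the normal.
        label.transform.position = centroid + normal * 0.01f;
        label.transform.rotation = Quaternion.LookRotation(-normal);

        // Scale with the edge length so the text shrinks as the
        // icosahedron is refined into an icosphere.
        float edge = (corners[1] - corners[0]).magnitude;
        label.transform.localScale = Vector3.one * edge * 0.1f;
    }
}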
OK, in my XNA project I've added simple shader + model loading code, and everything works. I created a very simple, low-detail model in 3ds Max and exported it to XNA in FBX format.
The problem is:
If I move my camera some distance away from the model, one of its components starts to flicker. I tried another model and saw the same thing: some components start to flicker, but only at a certain distance from the model.
This flickering (or blinking) appears only with textured objects (probably), and looks like this:
in each frame, random (or not so random) parts/pixels of the model are replaced with whatever object is behind the model or its component... :(
UPDATE: Now I know the problem is in my model (I checked some other models). I don't understand why, but the Plane object produces that flickering. Then again, maybe the problem is not in the Plane object.
This is only an educated guess but: Your far-plane is too far away, or your near-plane is too close, or both.
A perspective camera gives you a viewable area shaped like a truncated pyramid (a frustum), widening from the near plane out to the far plane.
Your Z-buffer (depth buffer) covers the range between the near and the far planes. A typical Z-buffer might have 24 bits of precision, giving you 2^24 possible values. The further apart your near and far planes are, the greater the world-space distance each possible value must cover. In other words: your Z-buffer is less accurate.
What you are seeing is known as "Z-fighting". This is where the Z-buffer is not accurate enough to differentiate between the depths of two given pixels, so pixels that should have been rejected as "behind" what was already rendered get drawn instead.
(Alternatively, your model has some coplanar or nearly coplanar triangles - that is, triangles whose surfaces are too close together. Same issue: not enough precision in the Z-buffer to differentiate between the two surfaces.)
You may also wish to enable backface-culling (RasterizerState.CullCounterClockwise), if it is not already enabled. This culls triangles facing away from the camera, removing one possible source of Z-fighting.
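In XNA 4.0 both fixes look roughly like this (the plane distances are illustrative; tighten them to fit your scene):

// Depth precision depends on the near/far ratio, so keep the near
// plane as far out, and the far plane as close in, as you can.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1.0f,       // near plane - avoid tiny values like 0.001f
    1000.0f);   // far plane - don't push it out "just in case"

// Cull triangles facing away from the camera (XNA's default state,
// but worth setting explicitly if you've changed it elsewhere).
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;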
I have seen this happen before on models where two or more surfaces overlap in the same plane - one surface is inside the other, but in the same plane - so the system cannot tell which surface is in front of the other, and you usually end up with a mash-up of both surfaces.
It looks like you have a smaller rectangular surface intersecting with the larger rectangular surface that makes up the lower base shape of your model. Probably from another object inside the box? Or maybe from an object-subtraction error that left two rectangles inside each other on that surface?
Either way, modify the model so that no two surfaces lie within each other.
So for an assignment I have to morph a cube into a sphere. All that's required is a bunch of points along the surface of a cube (they don't need to be connected or actually lie on a cube mesh, but they have to form a cube, though connections would make it easier to look at) that smoothly translate into a sphere shape.
The main problem is that I never learned how to create points in XNA 4.0, and from what I've seen it's very different from what we did in OpenGL (we learned the older OpenGL in a previous class).
Would anyone be able to help me figure out how to make the cube shape I need? Each side would have 10x10 points, with the points on each edge shared by the surfaces that meet at that edge. The structure would need to be easy to copy or modify, since I need the start state, the end state, and an intermediate state to translate the points between the two.
If I left out anything that could be important let me know.
First of all, you should familiarise yourself with the Primitives3D sample. It illustrates all of the rendering APIs that you need.
Here is how I would approach this problem (you can look up these classes and methods on MSDN, which should help you flesh out the details; a short sketch combining the first few steps follows the notes below):
Create an array of Vector3[] that represents an appropriately tessellated unit cube around the origin (0,0,0).
Create a second Vector3[] array and use Vector3.Normalize to copy in the vertices from your first array. This will create a unit sphere whose vertices match up with the original cube.
Create an array of VertexPositionColor[]. Fill in the colour data however you like.
Use Vector3.Lerp to loop through the first two arrays, interpolating each element to set positions in the third array. This gives you a parameter you can animate - you will have to do this each frame (in Update is probably best).
Create an array of indices (short[]) that describes a triangle list of the tessellated cube (and, in turn, the sphere and animation between the two).
Set up a BasicEffect for rendering. This involves setting its World, View and Projection matrices and maybe turning on VertexColorEnabled. If you want lighting, see the sample for details (you'll need to use a vertex type with normals, and animate those normals correctly).
The way to render with an effect is:
foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
{
    effectPass.Apply();
    /* your stuff here */
}
You could create a DynamicVertexBuffer and IndexBuffer and draw with those. But for something simple like this, DrawUserIndexedPrimitives is much easier (here is a recent answer with some details, and here's another one with a complete example of BasicEffect).
Note that you should only create these objects at startup (LoadContent is a good place). During rendering you're simply using them.
If you have trouble with your 3D rendering, and you're drawing text or sprites, see this article for how to fix it.
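A minimal sketch of steps 1-4 (BuildTessellatedCube stands in for your own tessellation code, CubeSphereMorph is a hypothetical name, and the cube is assumed to be centered on the origin so normalizing its points is well defined):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class CubeSphereMorph
{
    Vector3[] cubePoints;                   // step 1: tessellated unit cube
    Vector3[] spherePoints;                 // step 2: matching sphere points
    public VertexPositionColor[] Vertices;  // step 3: what you actually draw

    public CubeSphereMorph(Vector3[] tessellatedCube)
    {
        cubePoints = tessellatedCube;
        spherePoints = new Vector3[cubePoints.Length];
        Vertices = new VertexPositionColor[cubePoints.Length];
        for (int i = 0; i < cubePoints.Length; i++)
        {
            // Normalizing pushes each cube point onto the unit sphere.
            spherePoints[i] = Vector3.Normalize(cubePoints[i]);
            Vertices[i].Color = Color.CornflowerBlue;
        }
    }

    // Step 4: call each frame from Update; amount runs from
    // 0 (cube) to 1 (sphere).
    public void SetMorph(float amount)
    {
        for (int i = 0; i < Vertices.Length; i++)
            Vertices[i].Position =
                Vector3.Lerp(cubePoints[i], spherePoints[i], amount);
    }
}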
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file, and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer amount of vertices is simply too much. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've tried to make cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and do RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is pasted into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of TechCraft, an engine I wrote with a few other devs:
http://techcraft.codeplex.com
It's free and open source, and it should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest win is to hide the faces shared between two cubes. Imagine two cubes sitting right next to each other: you don't need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices (see the sketch after the diagram below).
_ _ _ _
|_|_| == |_ _|
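A sketch of the face test when baking a region (world, worldSize, AddFace, and CubeFace are placeholders for your own types):

// Only emit faces that border air - interior faces can never be seen.
void AddVisibleFaces(int x, int y, int z, List<VertexPositionTexture> verts)
{
    if (IsAir(x - 1, y, z)) AddFace(CubeFace.Left,   x, y, z, verts);
    if (IsAir(x + 1, y, z)) AddFace(CubeFace.Right,  x, y, z, verts);
    if (IsAir(x, y - 1, z)) AddFace(CubeFace.Bottom, x, y, z, verts);
    if (IsAir(x, y + 1, z)) AddFace(CubeFace.Top,    x, y, z, verts);
    if (IsAir(x, y, z - 1)) AddFace(CubeFace.Back,   x, y, z, verts);
    if (IsAir(x, y, z + 1)) AddFace(CubeFace.Front,  x, y, z, verts);
}

bool IsAir(int x, int y, int z)
{
    // Treat out-of-bounds neighbours as air so region borders get faces.
    if (x < 0 || y < 0 || z < 0 ||
        x >= worldSize || y >= worldSize || z >= worldSize)
        return true;
    return world[x, y, z] == 0;
}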
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all - your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader and it works out where to place the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes - anything behind them is simply not visible. A natural approach for this would be to use an octree data structure, which divides 3D space into voxels (cubes). Using an octree you can quickly determine which cubes are visible and draw just those - so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes does not change much, so you may be able to use this "temporal coherence" to update your data structures and minimise the work needed to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing: it allows you to pass one model into the shader along with a stream of world positions, to "instance" that model hundreds (even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
Once you have a base model (it could be just one face of the block), its mesh will have an associated texture that you can replace with whatever you want, letting you dynamically change the texturing for each side and for differing types of blocks.
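The core of the draw side in XNA 4.0 (HiDef profile) looks roughly like this; instanceVertexDeclaration, the model buffers and counts, and the instancing shader itself are all assumed to be set up as in the linked sample:

// Create the instance buffer once at load time; call SetData again
// only when the set of instances changes.
VertexBuffer instanceBuffer = new VertexBuffer(GraphicsDevice,
    instanceVertexDeclaration, instanceTransforms.Length,
    BufferUsage.WriteOnly);
instanceBuffer.SetData(instanceTransforms);

// Stream 0 carries the model geometry; stream 1 carries per-instance
// data and advances once per instance instead of once per vertex.
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(modelVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceBuffer, 0, 1));
GraphicsDevice.Indices = modelIndexBuffer;

// One draw call renders every instance.
GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList,
    0, 0, modelVertexCount, 0, modelPrimitiveCount,
    instanceTransforms.Length);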
I need to make a DirectX 3D mesh at run time using Managed DirectX from C#. I have not been able to locate any information on how to do this.
No, I can't use a 3D modeler program to make my objects. They must be precisely sized and shaped, and I don't have ANY of the size or shape information until runtime.
No, I can't build up the model from existing DirectX mesh capabilities. (A simple example: DirectX would let you easily model a pencil by using a cone mesh and a cylinder mesh. Of course, you have to carry around two meshes for your pencil, not just one, and properly position and orient each. But you cannot even make a model of a pencil split in half lengthwise, as there is no half-cylinder or half-cone mesh provided.)
At runtime, I have all of the vertices already computed and know which ones to connect to make the necessary triangles.
All I need is a solid color; I don't need texture mapping.
One can get a mesh of a sphere using this DirectX call:
Mesh sphere = Mesh.Sphere(device, sphereRadius, sphereSlices, sphereStacks);
This mesh is built at runtime.
What I need to know is how to make a similar function:
Mesh shape = MakeCustomMesh(device, vertexlist, trianglelist);
where the two lists could be any suitable container/format.
If anyone can point me to managed DirectX (C#) sample code, even if it just builds a mesh from 3 hardcoded triangles, that would be a great benefit.
There's some sample code showing how to do this on MDXInfo. This creates a mesh with multiple subsets - if you don't need that, it's even easier.
Basically, you just create the mesh:
Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed, CustomVertex.PositionColored.Format /* pick the vertex format you need */, device);
Then you can grab the mesh's vertex buffer and index buffer and overwrite them:
IndexBuffer indices = mesh.IndexBuffer;
VertexBuffer vertices = mesh.VertexBuffer;
Then, fill in the indices and vertices appropriately.
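Putting that together as the MakeCustomMesh function from the question - a sketch using the SetVertexBufferData/SetIndexBufferData convenience methods, which lock, copy, and unlock the buffers for you:

using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

static Mesh MakeCustomMesh(Device device, Vector3[] vertexList,
                           short[] triangleList, int color)
{
    // Each consecutive triple of indices in triangleList is one triangle.
    int numFaces = triangleList.Length / 3;
    Mesh mesh = new Mesh(numFaces, vertexList.Length, MeshFlags.Managed,
                         CustomVertex.PositionColored.Format, device);

    // Solid colour, no texture: PositionColored is enough.
    var vertices = new CustomVertex.PositionColored[vertexList.Length];
    for (int i = 0; i < vertexList.Length; i++)
        vertices[i] = new CustomVertex.PositionColored(
            vertexList[i].X, vertexList[i].Y, vertexList[i].Z, color);

    mesh.SetVertexBufferData(vertices, LockFlags.None);
    mesh.SetIndexBufferData(triangleList, LockFlags.None);
    return mesh;
}

Calling mesh.DrawSubset(0) in your render loop then draws it; pass something like Color.Red.ToArgb() for the colour parameter.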