I need to make a DirectX 3D mesh at run time using Managed DirectX from C#. I have not been able to locate any information on how to do this.
No, I can't use a 3D modeler program to make my objects. They must be precisely sized and shaped, and I don't have ANY of the size or shape information until runtime.
No, I can't build up the model from existing DirectX mesh capabilities. (A simple example: DirectX would let you easily model a pencil by using a cone mesh and a cylinder mesh. Of course, you have to carry around two meshes for your pencil, not just one, and properly position and orient each. But you cannot even model a pencil split in half lengthwise, as there is no half-cylinder or half-cone mesh provided.)
At runtime, I have all of the vertices already computed and know which ones to connect to make the necessary triangles.
All I need is a solid color. I don't need to texture map.
One can get a mesh of a sphere using this DirectX call:
Mesh sphere = Mesh.Sphere(device, sphereRadius, sphereSlices, sphereStacks);
This mesh is built at runtime.
What I need to know is how to make a similar function:
Mesh shape = MakeCustomMesh(device, vertexlist, trianglelist);
where the two lists could be any suitable container/format.
If anyone can point me to managed DirectX (C#) sample code, even if it just builds a mesh from 3 hardcoded triangles, that would be a great benefit.
There's some sample code showing how to do this on MDXInfo. This creates a mesh with multiple subsets - if you don't need that, it's even easier.
Basically, you just create the mesh:
Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed, CustomVertex.PositionColored.Format /* you'll need to set this */, device);
Then, you can grab the mesh's vertex buffer and index buffer and overwrite them, using:
IndexBuffer indices = mesh.IndexBuffer;
VertexBuffer vertices = mesh.VertexBuffer;
Then, fill in the indices and vertices appropriately.
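Putting that together, a MakeCustomMesh helper along the lines asked for above could look roughly like this. It's only a sketch, assuming the CustomVertex.PositionColored format (position plus a solid diffuse color, no texture coordinates) and a single subset; the parameter and variable names are placeholders.

using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

static Mesh MakeCustomMesh(Device device, Vector3[] vertexList, short[] triangleList)
{
    // triangleList holds three vertex indices per triangle.
    int numFaces = triangleList.Length / 3;
    int numVertices = vertexList.Length;

    Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed,
                         CustomVertex.PositionColored.Format, device);

    // Fill the vertex buffer: each vertex gets a position and the same solid color.
    CustomVertex.PositionColored[] verts = new CustomVertex.PositionColored[numVertices];
    int color = System.Drawing.Color.CornflowerBlue.ToArgb();
    for (int i = 0; i < numVertices; i++)
        verts[i] = new CustomVertex.PositionColored(vertexList[i], color);
    mesh.VertexBuffer.SetData(verts, 0, LockFlags.None);

    // Fill the index buffer with the triangle list.
    mesh.IndexBuffer.SetData(triangleList, 0, LockFlags.None);

    return mesh;
}

With a single subset you can then draw it with mesh.DrawSubset(0).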
I have created this icosahedron in code so that each triangle is its own mesh instead of one solid mesh with 20 faces. I want to add text on the outward side of each triangle that is roughly centered. For a simple visual think a 20 sided die for DnD.
The constraints would be that while this version is a simple icosahedron with only the 20 faces, the script is written so that it can be refined recursively to add more faces and turn it into an icosphere. I still want the text to scale and center as the number of triangles grows.
I currently have a prefab "triangle" object that holds the script each triangle will need, as well as a mesh filter/mesh renderer.
The only other thing in the scene is an empty GameObject with the script attached that creates the triangles and acts as the parent for all of them.
I have attempted various ways of adding text into the triangle prefab. The problem I keep running into is that since the mesh itself does not exist in the prefab I am not sure how to orient or scale the text so that it would appear in the correct place.
I am not sure if the text needs to be completely generated at run time with the triangles, or if there is a way to add it to the prefab and then update it so that it scales and positions as the triangle is created.
My Google searching so far has only really brought me results that involve objects permanently in the scene, nothing on what to do with meshes generated at run time.
Is the text a texture saved as a PNG or another image format? If that is the case, you can just create a material from the texture that holds the text.
The next step is to set the UV coordinates for each vertex, wrapping the texture onto the triangle that way.
If the text has to be generated on the fly, the best approach I can think of right now is to add a Text Mesh to the triangle prefab and access the component at runtime.
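To make that last idea concrete, here's a rough sketch of how the TextMesh in the prefab could be positioned and scaled once a triangle's corners are known. It assumes the TextMesh is a child of the triangle prefab and that the three corners are given in the triangle object's local space; the class name, field and scale factor are illustrative only.

using UnityEngine;

public class TriangleLabel : MonoBehaviour
{
    public TextMesh label;   // assigned on the prefab

    public void PlaceLabel(Vector3 v0, Vector3 v1, Vector3 v2)
    {
        // Put the text on the triangle's centroid, nudged outward along the
        // face normal so it doesn't z-fight with the surface.
        Vector3 centroid = (v0 + v1 + v2) / 3f;
        Vector3 normal = Vector3.Cross(v1 - v0, v2 - v0).normalized;
        label.transform.localPosition = centroid + normal * 0.01f;

        // A default TextMesh is readable when viewed along its +Z axis, so point
        // +Z inward (opposite the outward normal). Pass a different up vector if
        // a face normal happens to be vertical.
        label.transform.localRotation = Quaternion.LookRotation(-normal, Vector3.up);

        // Scale with the edge length so the text shrinks as the mesh is refined;
        // the 0.1 factor is just a starting point to tune.
        float edge = Vector3.Distance(v0, v1);
        label.transform.localScale = Vector3.one * edge * 0.1f;
    }
}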
I created a mesh in code.
The mesh is built correctly: 4 vertices and 2 triangles.
However, it renders differently depending on which side it's viewed from (see the pictures below).
Why are the front and the back different?
The side that is visible from the front appears transparent from the back.
What options should I add?
I only set the vertex positions and the triangle indices.
After you create your mesh, make sure you apply these:
mesh.RecalculateBounds();
mesh.RecalculateNormals();
Any time you modify or create a mesh from scratch, you have to recalculate the normals and the bounds.
After modifying the vertices it is often useful to update the normals to reflect the change. Normals are calculated from all shared vertices.
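To illustrate, a minimal quad built in code with consistent winding and those two calls might look like this (the component name is just for the example):

using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class QuadBuilder : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = new Vector3[]
        {
            new Vector3(0, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(1, 1, 0),
            new Vector3(1, 0, 0)
        };
        // Two triangles. The winding order (clockwise as seen from the -Z side)
        // determines which side Unity treats as the front face.
        mesh.triangles = new int[] { 0, 1, 2, 0, 2, 3 };

        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        GetComponent<MeshFilter>().mesh = mesh;
    }
}

Even with the normals and bounds recalculated, the back side is still culled by default, which is what the next point addresses.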
You could add "Cull Off" to your shader to render both sides of the mesh.
By default the system renders only the front faces for performance reasons, as mentioned in this post: https://answers.unity.com/questions/187252/display-both-sides.html
The reason is that graphics systems always cull backfaces. The faces are usually determined by the winding order, i.e. the order in which the vertices of a triangle are declared. Backfaces are culled because it's an easy way to conserve computational power - it is usually not necessary to render a face that's turning away from the viewer, such as the other side of an object's mesh, for instance.
And a code example: https://answers.unity.com/questions/209018/cutout-diffuse-shader-visible-from-both-sides-on-a.html
I am working on a 3D game and it has lots of game objects in a scene, so I'm working on reducing draw calls. I've used mesh combining on my static game objects. But my player isn't static, so I can't use mesh combining on it. My player is nothing but a combination of cubes that use the Standard shader with different colored materials on different parts. So I'm guessing I can use texture atlasing on my player to reduce draw calls, but I don't know how to do it.
Is my theory right? If I'm right, please help me with atlasing, and if I'm wrong, please point out my mistake.
Thanks in advance.
Put all the required images into the same texture. Create a material from that texture. Apply the same material to all of the cubes making up your character. UV map all cubes to the relevant part of the texture (use UV offset in the Unity editor if the UV blocks are quite simple, otherwise you'll need to move the UV elements in your 3D modelling program).
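As an illustration, here's a rough sketch of remapping a cube's UVs into one cell of a grid-style atlas at runtime; the grid layout and the class/method names are assumptions, not an official API.

using UnityEngine;

public static class AtlasUvMapper
{
    // Squeeze a cube's original 0..1 UVs into cell (cellX, cellY) of a gridX x gridY atlas.
    public static void MapToCell(MeshFilter filter, int cellX, int cellY, int gridX, int gridY)
    {
        // Accessing .mesh gives this object its own copy, so other cubes are unaffected.
        Mesh mesh = filter.mesh;
        Vector2[] uvs = mesh.uv;

        Vector2 cellSize = new Vector2(1f / gridX, 1f / gridY);
        Vector2 offset = new Vector2(cellX * cellSize.x, cellY * cellSize.y);

        for (int i = 0; i < uvs.Length; i++)
            uvs[i] = Vector2.Scale(uvs[i], cellSize) + offset;

        mesh.uv = uvs;
    }
}

Call it once per cube, e.g. AtlasUvMapper.MapToCell(cubeFilter, 2, 0, 4, 4) for the cell at column 2, row 0 of a 4 x 4 atlas; since every cube then shares one material, Unity can batch them.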
I searched for tutorials to learn HLSL, and every tutorial I found talked about vertex shaders and pixel shaders (the tutorials used a 3D game as an example).
So I don't know if a vertex shader is useful for 2D games.
As far as I know, a 2D game doesn't have any vertices, right?
I need to use shaders for post-processing.
My question is: do I have to learn vertex shaders for my 2D game?
2D graphics in XNA (and Direct3D, OpenGL, etc) are achieved by rendering what is essentially a 3D scene with an orthographic projection that basically disregards the depth of the vertices that it projects (things don't get smaller as they get further away). You could create your own orthographic projection matrix with Matrix.CreateOrthographic.
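For instance, a sketch of such a projection (using the off-center variant of that call, and assuming you want pixel coordinates with the origin at the top-left, like SpriteBatch uses) might be:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static Matrix Make2DProjection(Viewport viewport)
{
    // Maps pixel coordinates straight onto the screen and flattens depth into [0, 1];
    // y increases downward, as is conventional in 2D.
    return Matrix.CreateOrthographicOffCenter(
        0, viewport.Width,
        viewport.Height, 0,
        0, 1);
}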
While you could render arbitrary 3D geometry this way, sprite rendering - like SpriteBatch - simply draws camera-facing "quads" (a square or rectangle made up of 2 triangles formed by 4 vertices). These quads have 3D coordinates - the orthographic projection simply ignores the Z axis.
A vertex shader basically runs on the GPU and manipulates the vertices you pass in, before the triangles that they form are rasterised. You certainly can use a vertex shader on these vertices that are used for 2D rendering.
For all modern hardware, vertex transformations - including the above-mentioned orthographic projection transformation - are usually done in the vertex shader. This is basically boilerplate, and it's unusual to use complicated vertex shaders in a 2D game. But it's not unheard of - for example my own game Dark uses a vertex shader to do 2D shadow projection.
But, if you are using SpriteBatch, and all you want is a pixel shader, then you can just omit the VertexShader from your .fx file - your pixel shader will still work. (To use it, load it as an Effect and pass it to this overload of SpriteBatch.Begin).
If you don't replace it, SpriteBatch will use its default vertex shader. You can view the source for it here. It does a single matrix transformation (the passed matrix being an orthographic projection matrix, multiplied by an optional matrix passed to Begin).
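A short sketch of the pixel-shader-only setup described above (XNA 4.0), where "MyPostEffect" and someTexture are placeholder names:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// In LoadContent():  myEffect = Content.Load<Effect>("MyPostEffect");

// In your Game class:
void DrawWithEffect(SpriteBatch spriteBatch, Effect myEffect, Texture2D someTexture)
{
    // Passing the effect to Begin makes SpriteBatch run your pixel shader while
    // keeping its own default vertex shader.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      null, null, null, myEffect);
    spriteBatch.Draw(someTexture, Vector2.Zero, Color.White);
    spriteBatch.End();
}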
I don't think so. If you're focusing on a 2D game, you can avoid learning shaders and still complete your projects.
Another option is rendering in 2D a scene that is actually generated in 3D; in other words, you render a projection of your 3D scene to some surface. In that case you may end up needing shaders, especially if you want to do some interesting transformations without the CPU (often it's the only option).
Hope this helps.
We've got a sphere which we want to display in 3D and color given a function that depends on spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
I've converted the algorithm to use a linear texture which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
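For anyone doing the same thing, here's a rough sketch of that lookup-texture approach in WPF; the colormap delegate and the assumption that the per-vertex values are already normalized to 0..1 are mine, not from the original code.

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

static class ColormapTexture
{
    public static ImageBrush BuildBrush(Func<double, Color> colormap)
    {
        // One row of 256 pixels: pixel i holds colormap(i / 255).
        var bmp = new WriteableBitmap(256, 1, 96, 96, PixelFormats.Bgr32, null);
        var pixels = new int[256];
        for (int i = 0; i < 256; i++)
        {
            Color c = colormap(i / 255.0);
            pixels[i] = (c.R << 16) | (c.G << 8) | c.B;
        }
        bmp.WritePixels(new Int32Rect(0, 0, 256, 1), pixels, 256 * 4, 0);
        return new ImageBrush(bmp);
    }

    // For each vertex, the normalized function value becomes the U texture
    // coordinate; V doesn't matter for a one-pixel-high texture.
    public static void AddVertexValue(MeshGeometry3D mesh, double normalizedValue)
    {
        mesh.TextureCoordinates.Add(new Point(normalizedValue, 0.5));
    }
}

The brush goes into a DiffuseMaterial on the GeometryModel3D; WPF interpolates the texture coordinates across each triangle, which gives the smooth per-vertex-style shading discussed above.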