I have created this icosahedron in code so that each triangle is its own mesh instead of one solid mesh with 20 faces. I want to add text on the outward side of each triangle that is roughly centered. For a simple visual think a 20 sided die for DnD.
The constraints are that while this version is a simple icosahedron with only 20 faces, the script is written so that it can be refined recursively to add more faces and turn it into an icosphere. I still want the text to scale and center correctly as the number of triangles grows.
I currently have a prefab "triangle" object that holds the script each triangle will need, as well as a mesh filter/mesh renderer.
The only other thing in the scene is an empty GameObject with the script attached that creates the triangles and acts as the parent for all of them.
I have attempted various ways of adding text to the triangle prefab. The problem I keep running into is that since the mesh itself does not exist in the prefab, I am not sure how to orient or scale the text so that it appears in the correct place.
I am not sure if the text needs to be completely generated at run time with the triangles, or if there is a way to add it to the prefab and then update it so that it scales and positions as the triangle is created.
My Google searching so far has only really brought me results that involve objects permanently in the scene, nothing on what to do with meshes generated at run time.
Is the text a texture saved in PNG or another image format? If that is the case, you can just create a material with the texture that holds the text.
The next step is to define a UV for each vertex, wrapping the texture onto the triangle that way.
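For example, here is a minimal sketch of that UV step for one generated triangle (v0, v1, v2 are assumed to be the corner positions your generator already computes):

// Hedged sketch: assumes the per-triangle script already knows its three
// corner positions v0, v1, v2.
Mesh mesh = GetComponent<MeshFilter>().mesh;
mesh.vertices = new Vector3[] { v0, v1, v2 };
mesh.triangles = new int[] { 0, 1, 2 };
// Pin the triangle's corners into the texture; the text should sit in the
// middle of the texture so it lands near the triangle's center.
mesh.uv = new Vector2[] { new Vector2(0f, 0f), new Vector2(1f, 0f), new Vector2(0.5f, 1f) };
mesh.RecalculateNormals();
mesh.RecalculateBounds();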
If the text has to be generated on the fly, the best approach I can think of right now is to add a Text Mesh to the triangle (since it is a prefab) and access the component.
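Building on that idea, a hedged sketch of positioning, orienting, and scaling such a Text Mesh at runtime (the child name "Label" and the scale factor are illustrative assumptions):

// Hedged sketch: center the label on the triangle and face it outward.
// Assumes v0, v1, v2 are the corners and the prefab has a child "Label"
// holding a TextMesh component (both names are hypothetical).
Vector3 center = (v0 + v1 + v2) / 3f;
Vector3 normal = Vector3.Cross(v1 - v0, v2 - v0).normalized;
TextMesh label = transform.Find("Label").GetComponent<TextMesh>();
label.transform.position = center + normal * 0.01f; // lift slightly off the face
label.transform.rotation = Quaternion.LookRotation(-normal); // readable from outside
// Scale with the edge length so the text keeps fitting after subdivision.
float edge = Vector3.Distance(v0, v1);
label.transform.localScale = Vector3.one * edge * 0.1f; // 0.1 is a guessed factor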
Related
I created a mesh in code.
The mesh is well formed: 4 vertices and 2 triangles.
However, the two sides render differently; see the pictures below.
Why are the front and the back different?
The side that is visible from the front appears transparent from the back.
What options should I add?
I only entered values for the vertices and the triangle indices.
After you create your meshes, make sure you apply these:
mesh.RecalculateBounds();
mesh.RecalculateNormals();
Any time you modify or create a mesh from scratch, you have to recalculate its normals and bounds.
After modifying the vertices it is often useful to update the normals to reflect the change. Normals are calculated from all shared vertices.
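Put together, a minimal sketch of the 4-vertex, 2-triangle quad from the question with both calls applied (vertex positions are illustrative):

// Hedged sketch: a 1x1 quad built from 4 vertices and 2 triangles.
Mesh mesh = new Mesh();
mesh.vertices = new Vector3[] {
    new Vector3(0, 0, 0), // bottom left
    new Vector3(1, 0, 0), // bottom right
    new Vector3(0, 1, 0), // top left
    new Vector3(1, 1, 0)  // top right
};
// Clockwise winding (as seen by a camera looking along +Z) marks the front face.
mesh.triangles = new int[] { 0, 2, 1, 2, 3, 1 };
mesh.RecalculateNormals(); // lighting needs valid normals
mesh.RecalculateBounds();  // frustum culling needs valid bounds
GetComponent<MeshFilter>().mesh = mesh;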
You could add "Cull Off" to your shader to render both sides of the mesh.
By default, the system renders only the front side of each face for performance reasons, as mentioned in this post: https://answers.unity.com/questions/187252/display-both-sides.html
The reason is that graphics systems always cull backfaces. The faces are usually determined by the winding order, i.e. the order in which the vertices of a triangle are declared. Backfaces are culled because it's an easy way to conserve computational power - it is usually not necessary to render a face that's turning away from the viewer, such as the other side of an object's mesh, for instance.
and a code example: https://answers.unity.com/questions/209018/cutout-diffuse-shader-visible-from-both-sides-on-a.html
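If you would rather not touch the shader, a hedged alternative (not from the linked answers) is to duplicate the triangle list with reversed winding so the mesh has real geometry on both sides:

// Hedged sketch: append every triangle again with flipped winding order,
// so backface culling never hides the quad.
int[] tris = mesh.triangles;
int[] doubled = new int[tris.Length * 2];
tris.CopyTo(doubled, 0);
for (int i = 0; i < tris.Length; i += 3)
{
    doubled[tris.Length + i]     = tris[i];
    doubled[tris.Length + i + 1] = tris[i + 2]; // swapping two indices flips the face
    doubled[tris.Length + i + 2] = tris[i + 1];
}
mesh.triangles = doubled;

Note that with shared vertices the recalculated normals average out between the two sides, so lighting may look flat unless you duplicate the vertices as well; "Cull Off" avoids that issue.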
Since my first day in Unity development, I've seen tips and posts saying that we should never tint sprites. If we want sprites of different colors, create them and place them in the same texture, and then swap in the differently-colored sprites.
The reasoning is that tinting sprites will break batching.
I have created a small demo.
Situation 1
The same square sprite is used in 6 game objects. No surprise here. In the stats, there is 1 batch. 5 are saved by batching.
Situation 2
The same square sprite is used in 6 game objects again, but this time, all of them are tinted red with the same color value.
Shouldn't this be breaking batching already?
Situation 3
For the sake of completeness, I tinted the square sprites with different colors. Still, we have 1 batch and 5 saved. Nothing's changed.
Additional information
I captured Situations 1 & 2 during a single playthrough, and Situation 3 in a separate playthrough.
I tried tinting the sprites directly in the editor by changing the "Color" field of SpriteRenderer and through scripts by changing SpriteRenderer.color. The results are the same.
Changing the color of sprites via the SpriteRenderer component shouldn't break batching. What breaks batching is when you change the SpriteRenderer's sprite or material, or even when you merely access the material through the SpriteRenderer.material property.
For example,
This breaks batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.material.color = Color.red;
because you are accessing the material. Unity will create a new material instance the first time you access the material property.
This will not break batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.color = Color.red;
It will not, because it does not touch the material property. Even though it will not break batching, one issue with it is performance: setting the color this way, especially every frame, still has a cost of its own.
AFAIK, tinting is basically setting a vertex color. This is done for particles, where you can set random start colors and all the particles are rendered in a single draw call. Vertex color does not change/create a new material.
Not sure how that was handled in early Unity versions, but sprites should be simple quads and therefore support vertex colors. The Sprite shader probably works the same way in terms of tinting.
In general, you are right: changing shader properties will either create a duplicate of that material, or change the shared material itself and affect every instance using it. Did you use the tint on the sprite material, or on the SpriteRenderer?
By hand, I could think of using MaterialPropertyBlocks, but maybe Unity did exactly that for sprites.
Some more detail to clarify:
A) You have "Material01.mat" - it's green. You assign this material to 10 sprites. You want 10 different colors? You have to create 10 materials, each holding the desired color - 10 draw calls.
You can do the same by script, just changing material.color, but Unity will duplicate the materials for you. Still 10 draw calls. Some people are confused about why this breaks batching until they hear about the duplication.
B) You changed the RENDERER's tint. The SpriteRenderer will write the tint color into your sprite's vertices (probably 4?) using the vertex color attribute. It's basically free, because the vertex colors are transmitted to the GPU anyway (AFAIK).
As I said above, the same mechanism is used in the particle system to allow rainbow particles with 1 draw call.
So any particle shader, self-written shader, or sprite shader should work with this. All you need is something like a.albedo = c.rgb * IN.vert.color (a bit of pseudocode here).
That means, the same shader, and the same material can be used for multiple objects, having different vertex colors. That won't break batching.
You can even have different objects of any shape and vertex count, giving them different vertex colors (per vertex, like gradients etc) and it will still batch.
Mark things Static whenever possible, feed information into vertex colors, and for moving objects try to keep them under 300 vertices so dynamic batching can work.
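For a generated mesh, a minimal sketch of feeding a tint into vertex colors (this assumes the shader multiplies in the vertex color, as in the pseudocode above):

// Hedged sketch: tint via vertex colors instead of a material property.
Mesh mesh = GetComponent<MeshFilter>().mesh;
Color32[] colors = new Color32[mesh.vertexCount];
for (int i = 0; i < colors.Length; i++)
    colors[i] = new Color32(255, 0, 0, 255); // all red; vary per vertex for gradients
mesh.colors32 = colors;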
But for sprites, Unity automated this for you; you simply need to use the SpriteRenderer - that's why you don't use a quad with a texture, but a "Sprite" and a SpriteRenderer.
Again, I could be wrong and the SpriteRenderer actually uses MaterialPropertyBlocks, but it works almost the same. These variables can be set per object and do not create new DrawCalls. The variable values are used in the shader, so the material/shader is the same for multiple objects.
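For completeness, a hedged MaterialPropertyBlock sketch for non-sprite renderers (the "_Color" property name assumes a shader that exposes it):

// Hedged sketch: per-renderer color without duplicating the material.
Renderer rend = GetComponent<Renderer>();
MaterialPropertyBlock block = new MaterialPropertyBlock();
rend.GetPropertyBlock(block);        // preserve anything already set
block.SetColor("_Color", Color.red); // "_Color" is the conventional tint property
rend.SetPropertyBlock(block);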
I'm creating a simple game and I need to create an extending cylinder. I know how to normally do it - by changing the object's pivot point and scaling it up - but now I have a texture on it and I don't want it to stretch.
I thought about adding small segments to the end of the cylinder when it has to grow, but this won't be smooth and might affect performance. Do you know any solution to this?
This is the rope texture that I'm using right now (but it might change in the future):
Before scaling
After scaling
I don't want it to stretch
This is not easy to accomplish but it is possible.
You have two ways to do this.
1. One is to procedurally generate the rope mesh and assign the material at runtime. This is complicated for beginners, so I can't help with this one.
2. The other solution doesn't require procedural mesh generation: change the texture tiling while the object is changing size.
For this to work, your texture must be tileable; you can't just use any random texture found online. Also, you need a normal map to actually make it look like a rope. Here is a tutorial for making a rope texture with a normal map in Maya. There are other parts of the video you have to watch too.
Select the texture, change Texture Type to Texture, change Wrap Mode to Repeat, then click Apply.
Get the MeshRenderer of the mesh, then get the 3D object's Material from the MeshRenderer. Use ropeMat.SetTextureScale to change the tiling of the texture. For example, when you change the xTile and yTile values in the code below, the texture on the mesh will be re-tiled.
using UnityEngine;

public class RopeTiling : MonoBehaviour
{
    public float xTile, yTile;
    public GameObject rope;
    Material ropeMat;

    void Start()
    {
        // Grab the rope's material instance once.
        ropeMat = rope.GetComponent<MeshRenderer>().material;
    }

    void Update()
    {
        // Re-tile the main texture; tweak xTile/yTile to see the effect.
        ropeMat.SetTextureScale("_MainTex", new Vector2(xTile, yTile));
    }
}
Now, you have to find a way to map the xTile and yTile values to the size of the mesh. It's not simple: you need to calculate what values xTile and yTile should take whenever the mesh/rope is re-sized.
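As a starting point, here is a minimal sketch of one such mapping, reusing the rope and ropeMat fields from the snippet above and assuming the rope stretches along its local Y axis (unitsPerTile is an invented tuning value):

public float unitsPerTile = 1f; // world units one texture tile should cover (assumption)

void Update()
{
    float ropeLength = rope.transform.localScale.y; // assumes Y is the stretch axis
    float yTiles = ropeLength / unitsPerTile;       // longer rope -> more tiles
    ropeMat.SetTextureScale("_MainTex", new Vector2(1f, yTiles));
}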
I'm new to Unity/C# and building a simple game in order to learn it; essentially it involves flying over a terrain and shooting at things.
The question is: how can I create a non-flat, fixed-size (i.e., not endless) 2D terrain that I can randomise/generate once at the start of a scene?
Here's what I've thought/tried so far:
I started off by creating a massive, flat expanse using a game object with a Sprite Renderer and a Box Collider 2D. I'm using particles for rocket trails and explosions and set them to collide with a plane which worked well. This works, but the terrain is just flat and boring.
Next, I drew some rudimentary mountains in Photoshop, added them as sprites with Polygon Collider 2D components and dragged them onto the scene (with an idea to randomise their locations later). This is quick and works well, but it doesn't play nice with the particles because of the 2D/3D mix, and I imagine it would lead to all levels looking the same unless I draw a LOT of different terrain sprites. Is this a normal approach? Is it possible to add more planes to the objects to allow particles to collide?
Another thought was - could I create a polygon collider 2D / or edge collider 2D and basically fill the area with a texture? No luck on that one, it seems like it could work but I couldn't get a texture to appear and fit the shape. A shame, especially as I could then generate or randomise the position of the points later.
Finally, I wondered whether I should just use 3D shapes. I can texture them and they work great with the particles, but they don't interact with all my other 2D assets; I'd have to add 3D colliders to them, and that doesn't seem right, since Unity specifically provides a 2D toolset and I am trying to make a 2D game.
All of my solutions have pros and cons - maybe there's something I've missed?
You can create meshes procedurally in Unity from scripts. They can be either 2D or 3D depending on how you set the vertices. Colliders are independent of the rendered mesh. I don't have an in-depth tutorial for this, but I think this video will point you in the right direction and give you keywords to google or read about in the Unity documentation:
https://www.youtube.com/watch?v=nPX8dw283X4
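To make that concrete, here is a hedged sketch of the idea: generate a random height line once at startup, build a filled mesh from it, and feed the same surface points to an EdgeCollider2D so particles and physics line up with what you see (all names and values are illustrative):

using UnityEngine;

// Hedged sketch: a fixed-width 2D terrain, randomised once per scene.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer), typeof(EdgeCollider2D))]
public class Terrain2D : MonoBehaviour
{
    public int segments = 64;
    public float width = 100f, floorY = -5f, maxHeight = 5f;

    void Start()
    {
        float seed = Random.Range(0f, 1000f); // randomise once at scene start
        var surface = new Vector2[segments + 1];
        var verts = new Vector3[(segments + 1) * 2];
        var tris = new int[segments * 6];

        for (int i = 0; i <= segments; i++)
        {
            float x = i * width / segments;
            float y = Mathf.PerlinNoise(seed + i * 0.15f, 0f) * maxHeight; // smooth hills
            surface[i] = new Vector2(x, y);
            verts[i * 2] = new Vector3(x, y, 0f);          // top edge of the terrain
            verts[i * 2 + 1] = new Vector3(x, floorY, 0f); // bottom edge
        }

        for (int i = 0; i < segments; i++)
        {
            int v = i * 2, t = i * 6;
            // Clockwise winding so the faces point at the default 2D camera.
            tris[t] = v;     tris[t + 1] = v + 2; tris[t + 2] = v + 1;
            tris[t + 3] = v + 1; tris[t + 4] = v + 2; tris[t + 5] = v + 3;
        }

        var mesh = new Mesh { vertices = verts, triangles = tris };
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        GetComponent<MeshFilter>().mesh = mesh;

        // The collider is independent of the mesh: feed it just the surface line.
        GetComponent<EdgeCollider2D>().points = surface;
    }
}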
I need to make a DirectX 3D mesh at run time using Managed DirectX from C#. I have not been able to locate any information on how to do this.
No, I can't use a 3D modeler program to make my objects. They must be precisely sized and shaped, and I don't have ANY of the size or shape information until runtime.
No, I can't build up the model from existing DirectX mesh capabilities. (A simple example: DirectX would let you easily model a pencil by using a cone mesh and a cylinder mesh. Of course, you then have to carry around two meshes for your pencil, not just one, and properly position and orient each. But you cannot even make a model of a pencil split in half lengthwise, as there is no half-cylinder or half-cone mesh provided.)
At runtime, I have all of the vertices already computed and know which ones to connect to make the necessary triangles.
All I need is a solid color. I don't need to texture map.
One can get a mesh of a sphere using this DirectX call:
Mesh sphere = Mesh.Sphere(device, sphereRadius, sphereSlices, sphereStacks);
This mesh is built at runtime.
What I need to know is how to make a similar function:
Mesh shape = MakeCustomMesh(device, vertexlist, trianglelist);
where the two lists could be any suitable container/format.
If anyone can point me to managed DirectX (C#) sample code, even if it just builds a mesh from 3 hardcoded triangles, that would be a great benefit.
There's some sample code showing how to do this on MDXInfo. This creates a mesh with multiple subsets - if you don't need that, it's even easier.
Basically, you just create the mesh:
Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed, CustomVertex.PositionColored.Format /* you'll need to set this to your vertex format */, device);
Then you can grab the mesh's vertex buffer and index buffer and overwrite them using:
IndexBuffer indices = mesh.IndexBuffer;
VertexBuffer vertices = mesh.VertexBuffer;
Then, fill in the indices and vertices appropriately.
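Putting the pieces together, here is a hedged sketch of the MakeCustomMesh helper from the question (assuming Managed DirectX 1.1, 16-bit indices, a single subset, and a solid color baked into each vertex):

using System.Drawing;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

static Mesh MakeCustomMesh(Device device, Vector3[] vertexList, short[] triangleList)
{
    // One face per three indices; PositionColored = position + ARGB color.
    Mesh mesh = new Mesh(triangleList.Length / 3, vertexList.Length,
                         MeshFlags.Managed, CustomVertex.PositionColored.Format, device);

    // Bake a single solid color into every vertex (no texture mapping needed).
    var verts = new CustomVertex.PositionColored[vertexList.Length];
    for (int i = 0; i < vertexList.Length; i++)
        verts[i] = new CustomVertex.PositionColored(vertexList[i], Color.CornflowerBlue.ToArgb());

    mesh.VertexBuffer.SetData(verts, 0, LockFlags.None);       // fill vertices
    mesh.IndexBuffer.SetData(triangleList, 0, LockFlags.None); // fill triangle indices
    return mesh;
}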