Since day one in Unity development, I've seen tips saying that we should never tint sprites. If we want sprites of different colors, we should create them, place them in the same texture, and then swap in the differently-colored sprites.
The reasoning is that tinting sprites will break batching.
I have created a small demo.
Situation 1
The same square sprite is used in 6 game objects. No surprise here. In the stats, there is 1 batch. 5 are saved by batching.
Situation 2
The same square sprite is used in 6 game objects again, but this time, all of them are tinted red with the same color value.
Shouldn't this be breaking batching already?
Situation 3
For the sake of completeness, I tinted the square sprites with different colors. Still, we have 1 batch and 5 saved. Nothing's changed.
Additional information
I captured Situations 1 & 2 during a single playthrough, and Situation 3 in a separate playthrough.
I tried tinting the sprites directly in the editor by changing the "Color" field of SpriteRenderer and through scripts by changing SpriteRenderer.color. The results are the same.
Changing the color of sprites on the SpriteRenderer component shouldn't break batching. What breaks batching is changing the SpriteRenderer's sprite or its material, or even just accessing the material through the SpriteRenderer.material property.
For example,
This breaks batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.material.color = Color.red; // accessing .material clones the shared material
because you are accessing the material: Unity creates a new material instance the first time you access the material property.
This will not break batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.color = Color.red; // only rewrites the sprite's vertex colors; the shared material is untouched
It will not, because it never touches the material property. Even though it does not break batching, one caveat is performance: setting the color still does some work each time, so it is not free if you do it every frame.
AFAIK, tinting basically sets a vertex color. The same thing is done for particles, where you can set random start colors and all the particles are still rendered in a single draw call. Vertex colors do not change the material or create a new one.
Not sure how that was handled in early Unity versions, but sprites should be simple quads and therefore support vertex colors. The Sprite shader probably works the same way in terms of tinting.
In general, you are right: changing shader properties will either create a duplicate of that material, or change the material itself and affect all instances using it. Did you apply the tint on the sprite material, or on the SpriteRenderer?
Off the top of my head, I would think of using MaterialPropertyBlocks, but maybe Unity does exactly that for sprites.
Some more detail to clarify:
A) You have "Material01.mat" and it's green. You copy this material to 10 sprites. You want to have 10 colors? You have to create 10 materials, each holding the desired color: 10 draw calls.
You can do the same by script, just by changing material.color, but Unity will duplicate the materials for you. Still 10 draw calls. Some people are confused about why this breaks batching until they hear about it.
B) You change the RENDERER's tint. The SpriteRenderer writes the tint color into your sprite's vertices (probably 4 of them?) using the vertex color attribute. This is basically free, because vertex colors are transmitted to the GPU anyway (AFAIK).
As I said above, the same mechanism is used in the particle system to allow rainbow particles with one draw call.
So any particle shader, self-written shader, or sprite shader should work with this. All you need is something like o.Albedo = c.rgb * IN.color (a bit of pseudocode here).
That means the same shader and the same material can be used for multiple objects that have different vertex colors, and that won't break batching.
You can even have different objects of any shape and vertex count, give them different vertex colors (per vertex, like gradients, etc.), and it will still batch.
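To illustrate, here is a minimal sketch of writing per-vertex colors into a mesh in Unity; the gradient and the component itself are made up for the example, and any shader that reads vertex colors will pick them up:

using UnityEngine;

// Sketch: paint a gradient into a mesh's vertex colors.
// The material is never touched, so no duplicate material is created.
public class VertexColorGradient : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;
        Color[] colors = new Color[vertices.Length];

        // Blend from red at the bottom of the mesh to blue at the top.
        for (int i = 0; i < vertices.Length; i++)
            colors[i] = Color.Lerp(Color.red, Color.blue, vertices[i].y);

        mesh.colors = colors;
    }
}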
Check Static whenever possible, feed information into vertex colors, and for moving objects, try to keep them under 300 vertices so dynamic batching can work.
But for sprites, Unity automated this for you: you simply use the SpriteRenderer. That's why you don't use a quad with a texture, but a "Sprite" and a SpriteRenderer.
Again, I could be wrong and the SpriteRenderer may actually use MaterialPropertyBlocks, but it works almost the same way. These variables can be set per object and do not create new draw calls. The variable values are used in the shader, so the same material/shader serves multiple objects.
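For reference, here is a minimal sketch of the MaterialPropertyBlock approach for a generic renderer; the "_Color" property name is an assumption and must match whatever the shader exposes:

using UnityEngine;

// Sketch: per-renderer color override without duplicating the material.
public class PropertyBlockTint : MonoBehaviour
{
    public Color tint = Color.red;

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();
        MaterialPropertyBlock block = new MaterialPropertyBlock();

        rend.GetPropertyBlock(block);   // keep any values that are already set
        block.SetColor("_Color", tint); // assumes the shader has a "_Color" property
        rend.SetPropertyBlock(block);   // no material instance is created
    }
}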
Related
So I'm looking to create the effect of having a bubble around my player: when he enters a hidden area (hidden by tilemaps), the bubble activates and essentially gives an x-ray effect. I can see the background, the ground, and all the items inside the area; I just can't see the blocks themselves.
So pretty much going from this
To this
And as I go further in, more gets revealed
I have no idea what to even begin searching for, so any direction would be greatly appreciated.
First of all, I want to get something out of the way: making things appear when they are near the player is easy, you use a light and a shader. Making things disappear when they are near the player by that approach is impossible in 2D (3D has flags_use_shadow_to_opacity).
This is the plan: we are going to create a texture that will work as a mask for what to show and what not to show. Then we will use that texture mask with a shader to make a material that selectively disappears. To create that texture, we are going to use a Viewport, so we can get a ViewportTexture from it.
The Viewport setup is like this:
Viewport
├ ColorRect
└ Sprite
Set the Viewport with the following properties:
Size: give it the window size (the default is 1024 by 600)
Hdr: disable
Disable 3D: enable
Usage: 2D
Update mode: Always
For the Sprite you want a grayscale texture, perhaps with transparency. It will be the shape you want to reveal around the player.
And for the ColorRect you want to set the background color as either black or white. Whatever is the opposite of the color on the Sprite.
Next, you are going to attach a script to the Viewport. It has to deal with two concerns:
Move the Sprite to match the position of the player. That looks like this:
extends Viewport

export var target_path:NodePath

func _process(_delta:float) -> void:
    var target := get_node_or_null(target_path) as Node2D
    if target == null:
        return

    $Sprite.position = target.get_global_transform_with_canvas().origin
And you are going to set the target_path to reference the player avatar.
In this code, target.get_global_transform_with_canvas().origin gives us the position of the target node (the player avatar) on the screen, and we place the Sprite to match.
Handle window resizes. That looks like this:
func _ready():
    # warning-ignore:return_value_discarded
    get_tree().get_root().connect("size_changed", self, "_on_size_changed")

func _on_size_changed():
    size = get_tree().get_root().size
In this code we connect to the "size_changed" signal of the root Viewport (the one associated with the window), and change the size of this Viewport to match.
The next thing is the shader. Go to your TileMap or whatever you want to make disappear and add a shader material. This is the code for it:
shader_type canvas_item;

uniform sampler2D mask;

void fragment()
{
    COLOR.rgb = texture(TEXTURE, UV).rgb;
    COLOR.a = texture(mask, SCREEN_UV).r;
}
As you can see, the first line of the fragment function sets the red, green, and blue channels to match the texture the node already has, while the alpha channel is set to one of the channels of the mask texture (the red one in this case).
Note: The above code will make whatever is in the black parts fully invisible, and whatever is in the white parts fully visible. If you want to invert that, change COLOR.a = texture(mask, SCREEN_UV).r; to COLOR.a = 1.0 - texture(mask, SCREEN_UV).r;.
We, of course, need to set that mask texture. After you add that code, there should be a shader param under the shader material called "Mask"; set it to a new ViewportTexture and point it to the Viewport we set up before.
And we are done.
I tested this with this texture from publicdomainvectors.org:
Plus some tiles from Kenney. They are all, of course, under public domain.
This is how it looks:
Experiment with different textures for different results. Also, you can add a shader to the Sprite for extra effect. For example add some ripples, by giving a shader material to the Sprite with code like this one:
shader_type canvas_item;

void fragment()
{
    float width = SCREEN_PIXEL_SIZE.x * 16.0;
    COLOR = texture(TEXTURE, vec2(UV.x + sin(UV.y * 32.0 + TIME * 2.0) * width, UV.y));
}
So you get this result:
There is an instant when the above animation stutters; that is because I didn't cut the loop perfectly. It is not an issue in game. Also, the animation has far fewer frames per second than the game would.
Addendum: a couple of things I want to add.
You can create the mask texture by other means. I have a couple of other answers where I cover some of them:
How can I bake 2D sprites in Godot at runtime? where we use blit_rect. You might also be interested in blit_rect_mask.
Godot repeating breaks script where we are using lockbits.
I wrote a shader that outputs on the alpha channel here. Other options include:
Using BackBufferCopy.
Discarding fragments.
I have created this icosahedron in code so that each triangle is its own mesh instead of one solid mesh with 20 faces. I want to add text on the outward side of each triangle, roughly centered. For a simple visual, think of a 20-sided die for DnD.
The constraints would be that while this version is a simple icosahedron with only the 20 faces, the script is written so that it can be refined recursively to add more faces and turn it into an icosphere. I still want the text to scale and center as the number of triangles grows.
I currently have a prefab "triangle" object that holds the script each triangle will need, as well as a mesh filter/mesh renderer.
The only other thing in the scene is an empty GameObject with the script attached that creates the triangles and acts as the parent for all of them.
I have attempted various ways of adding text into the triangle prefab. The problem I keep running into is that since the mesh itself does not exist in the prefab I am not sure how to orient or scale the text so that it would appear in the correct place.
I am not sure if the text needs to be completely generated at run time with the triangles, or if there is a way to add it to the prefab and then update it so that it scales and positions as the triangle is created.
My Google searching so far has only really brought me results that involve objects permanently in the scene, nothing on what to do with meshes generated at run time.
Is the text a texture saved in PNG or another image format? If that is the case, you can just create a material with the texture that holds the text.
The next step is to define a UV for each vertex, wrapping the texture onto the triangle this way.
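For that first approach, here is a minimal sketch of assigning UVs on a generated triangle; the exact coordinates are placeholders and depend on where the text sits in your texture:

using UnityEngine;

// Sketch: map a region of a text texture onto a single triangle via UVs.
public class TriangleUVs : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // One UV per vertex; these placeholder coordinates stretch the
        // full texture across the triangle.
        mesh.uv = new Vector2[]
        {
            new Vector2(0.5f, 1f), // top vertex
            new Vector2(0f, 0f),   // bottom-left vertex
            new Vector2(1f, 0f)    // bottom-right vertex
        };
    }
}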
If the text has to be generated on the fly, the best approach I can think of right now is to add a TextMesh to the triangle (since it is a prefab) and access the component.
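Building on that, a minimal sketch of centering a TextMesh on a generated triangle; it assumes the prefab has a TextMesh child assigned in the Inspector and a mesh with exactly three vertices, and the offset and scale factors are guesses to tune:

using UnityEngine;

// Sketch: place a TextMesh at the centroid of a triangle, facing outward.
public class TriangleLabel : MonoBehaviour
{
    public TextMesh label; // assumed: a child with a TextMesh component

    public void PlaceLabel()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] v = mesh.vertices;

        // Centroid and outward normal of the triangle, in local space.
        Vector3 centroid = (v[0] + v[1] + v[2]) / 3f;
        Vector3 normal = Vector3.Cross(v[1] - v[0], v[2] - v[0]).normalized;

        // Lift the text slightly off the face to avoid z-fighting, and
        // orient it along the normal (flip the sign if it appears mirrored).
        label.transform.localPosition = centroid + normal * 0.01f;
        label.transform.localRotation = Quaternion.LookRotation(-normal);

        // Scale relative to an edge so the text shrinks as faces are subdivided.
        float edge = Vector3.Distance(v[0], v[1]);
        label.transform.localScale = Vector3.one * edge * 0.1f;
    }
}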
I'm creating a simple game and I need to create an extending cylinder. I know how to do it normally, by changing the object's pivot point and scaling it up, but now I have a texture on it and I don't want it to stretch.
I thought about adding small segments to the end of the cylinder when it has to grow, but this won't be smooth and might affect performance. Do you know of any solution to this?
This is the rope texture that I'm using right now (but it might change in the future):
Before scaling
After scaling
I don't want it to stretch
This is not easy to accomplish but it is possible.
You have two ways to do this.
1. One is to procedurally generate the rope mesh and assign the material in real time. This is complicated for beginners; I can't help with this one.
2. The other solution doesn't require procedural mesh generation: change the texture tiling while the object is changing size.
For this to work, your texture must be tileable. You can't just use any random texture online. Also, you need a normal map to actually make it look like a rope. Here is a tutorial for making a rope texture with normal map in Maya. There are other parts of the video you have to watch too.
Select the texture, change Texture Type to Texture, change Wrap Mode to Repeat, then click Apply.
Get the MeshRenderer of the mesh, then get the Material of the 3D object from the MeshRenderer. Use ropeMat.SetTextureScale to change the tiling of the texture. For example, when you change the xTile and yTile values in the code below, the texture of the mesh will be tiled.
public float xTile, yTile;
public GameObject rope;
Material ropeMat;

void Start()
{
    // Grab the rope's material once; accessing .material gives this
    // object its own instance, so tiling changes won't affect others.
    ropeMat = rope.GetComponent<MeshRenderer>().material;
}

void Update()
{
    // Retile the main texture; requires Wrap Mode = Repeat.
    ropeMat.SetTextureScale("_MainTex", new Vector2(xTile, yTile));
}
Now, you have to find a way to map the xTile and yTile values to the size of the mesh. It's not simple; you need to calculate what xTile and yTile should be whenever the mesh/rope is resized.
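A rough sketch of one such mapping, assuming the rope stretches along its local Y axis; tilesPerUnit is a made-up tuning value for how often the texture should repeat per world unit:

using UnityEngine;

// Sketch: keep the texture density constant while the rope is scaled.
public class RopeTiling : MonoBehaviour
{
    public float tilesPerUnit = 1f; // hypothetical: repeats per world unit of length
    Material ropeMat;

    void Start()
    {
        ropeMat = GetComponent<MeshRenderer>().material;
    }

    void Update()
    {
        // Assumes the cylinder is stretched along its local Y axis.
        float length = transform.localScale.y;
        ropeMat.SetTextureScale("_MainTex", new Vector2(1f, length * tilesPerUnit));
    }
}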
I have two objects, both with different textures and I want to make them the same at a certain point of time. The current code I am looking at is the following:
weaponObject.renderer.material.mainTexture = selectedWeapon.renderer.material.mainTexture;
Unfortunately, this does not seem to work. The "weaponObject" texture seems to remain the same but simply moves further back along the z axis. Any tips? Both objects are of type GameObject.
You need to make sure that the textures fit both GameObjects. Put simply, you can't apply the texture of an M4 to an M16; the texture won't align correctly.
You also need to make sure that both objects use the same type of material. Remember that a material affects how an object will look, so even the same texture on different materials will look different.
Example, same texture with different materials:
If the two objects are identical, which they should be if you want consistent results, then just swap the materials:
weaponObject.renderer.material = NewMaterial;
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file, and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer amount of vertices is simply too much. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've tried to make cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and do RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is pasted into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs called TechCraft:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer: take all of the cubes in a small area and put them all into one vertex buffer, and only update that buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is to hide faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices (there is a sketch of the neighbor test after the diagram below).
_ _ _ _
|_|_| == |_ _|
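Here is a minimal sketch of that neighbor test; the blocks array (where 0 means air) and the AddFaceVertices helper are hypothetical stand-ins for your own chunk data and vertex list:

// Sketch: only emit faces whose neighboring cell is empty (air).
static readonly int[,] Neighbors =
{
    { 1, 0, 0 }, { -1, 0, 0 },
    { 0, 1, 0 }, { 0, -1, 0 },
    { 0, 0, 1 }, { 0, 0, -1 }
};

bool IsAir(byte[,,] blocks, int x, int y, int z)
{
    // Treat out-of-bounds cells as air so faces on chunk borders are kept.
    if (x < 0 || y < 0 || z < 0 ||
        x >= blocks.GetLength(0) || y >= blocks.GetLength(1) || z >= blocks.GetLength(2))
        return true;
    return blocks[x, y, z] == 0;
}

void BuildChunk(byte[,,] blocks)
{
    for (int x = 0; x < blocks.GetLength(0); x++)
        for (int y = 0; y < blocks.GetLength(1); y++)
            for (int z = 0; z < blocks.GetLength(2); z++)
            {
                if (blocks[x, y, z] == 0)
                    continue; // air: nothing to draw

                for (int face = 0; face < 6; face++)
                {
                    int nx = x + Neighbors[face, 0];
                    int ny = y + Neighbors[face, 1];
                    int nz = z + Neighbors[face, 2];

                    // Hypothetical helper: appends one quad to the chunk's vertex list.
                    if (IsAir(blocks, nx, ny, nz))
                        AddFaceVertices(x, y, z, face);
                }
            }
}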
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all; your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader, and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes; anything behind them is just not visible. A natural approach for this would be to use an octree data structure, which divides 3D space into voxels (cubes). Using an octree you can quickly determine which cubes are visible, and draw just those cubes; rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes will not change much, so you may be able to use this "temporal coherence" to update your data structures and minimise the work that needs to be done to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing: it allows you to pass one model to the shader along with a stream of world positions, to "instance" that model hundreds (or even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
Once you have a base model (it could be one face of the block), its mesh will have an associated texture that you can then replace with whatever you want, allowing you to dynamically change block texturing for each side and for differing types of blocks.
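For orientation, here is a condensed sketch of the draw side of XNA 4.0 hardware instancing, loosely following the linked sample; the cube buffers and counts are assumed to exist, and the shader must read the per-instance world matrix from the second vertex stream:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch: one cube mesh, many instances, one draw call.
class InstancedCubeRenderer
{
    VertexBuffer cubeVertexBuffer;  // assumed: the cube's 24 vertices
    IndexBuffer cubeIndexBuffer;    // assumed: the cube's indices
    int cubeVertexCount, cubeTriangleCount;

    // A 4x4 world matrix packed into four float4 elements, as in the MSDN sample.
    static readonly VertexDeclaration InstanceDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 1),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 2),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 3));

    public void DrawCubes(GraphicsDevice device, Matrix[] instanceTransforms)
    {
        // Upload one world matrix per instance (in real code, reuse this buffer).
        var instanceBuffer = new VertexBuffer(
            device, InstanceDeclaration, instanceTransforms.Length, BufferUsage.WriteOnly);
        instanceBuffer.SetData(instanceTransforms);

        // Stream 0: cube geometry. Stream 1: per-instance data, advancing once per instance.
        device.SetVertexBuffers(
            new VertexBufferBinding(cubeVertexBuffer, 0, 0),
            new VertexBufferBinding(instanceBuffer, 0, 1));
        device.Indices = cubeIndexBuffer;

        device.DrawInstancedPrimitives(
            PrimitiveType.TriangleList,
            0, 0,
            cubeVertexCount,
            0,
            cubeTriangleCount,
            instanceTransforms.Length);
    }
}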