I have two objects, each with a different texture, and I want to make them the same at a certain point in time. The current code I am looking at is the following:
weaponObject.renderer.material.mainTexture = selectedWeapon.renderer.material.mainTexture;
Unfortunately this does not seem to work. The "weaponObject" texture seems to remain the same but simply moves further back along the z axis. Any tips? Both objects are of type GameObject.
You need to make sure that the textures fit both GameObjects. Put simply, you can't attach the texture of an M4 to an M16; the texture won't align correctly.
You also need to make sure that both objects use the same type of material. Remember that a material affects how the texture is rendered, so even the same texture on different materials will look different.
Example, same texture with different materials:
If the two objects are identical, which they should be if you want consistent results, then just swap the materials:
weaponObject.renderer.material = NewMaterial;
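For example, a minimal sketch of swapping at a chosen moment, using the same old-style renderer shortcut as your snippet (the field names and the key-press trigger are just assumptions for illustration):

using UnityEngine;

public class WeaponSwap : MonoBehaviour
{
    // Both assumed to be assigned in the Inspector.
    public GameObject weaponObject;
    public GameObject selectedWeapon;

    void Update()
    {
        // Swap the whole material (not just the texture) at whatever moment you choose.
        if (Input.GetKeyDown(KeyCode.E))
        {
            weaponObject.renderer.material = selectedWeapon.renderer.material;
        }
    }
}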
Since my first day of Unity development, I've seen advice saying that we should never tint sprites: if we want sprites of different colors, we should create them, place them in the same texture, and then swap in the differently colored sprites.
The reasoning is that tinting sprites will break batching.
I have created a small demo.
Situation 1
The same square sprite is used in 6 game objects. No surprise here. In the stats, there is 1 batch. 5 are saved by batching.
Situation 2
The same square sprite is used in 6 game objects again, but this time, all of them are tinted red with the same color value.
Shouldn't this be breaking batching already?
Situation 3
For the sake of completeness, I tinted the square sprites with different colors. Still, we have 1 batch and 5 saved. Nothing's changed.
Additional information
I captured Situations 1 and 2 during a single playthrough, and Situation 3 in a separate playthrough.
I tried tinting the sprites directly in the editor by changing the "Color" field of SpriteRenderer and through scripts by changing SpriteRenderer.color. The results are the same.
Changing the color of sprites via the SpriteRenderer component shouldn't break batching. What breaks batching is changing the SpriteRenderer's sprite, changing the material, or even just accessing the material through the SpriteRenderer.material property.
For example,
This breaks batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.material.color = Color.red;
because you are accessing the material. Unity creates a new material instance the first time you access the material property.
This will not break batching:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.color = Color.red;
It will not, because it does not access the material property. Even though it does not break batching, it does have a small performance cost of its own each time you do it.
AFAIK, tinting is basically setting a vertex color. This is done for particles, where you can set random start colors and all the particles are rendered in a single draw call. Vertex color does not change/create a new material.
Not sure how that was handled in early Unity versions, but sprites should be simple quads and therefore support vertex colors. The Sprite shader probably works the same way in terms of tinting.
In general, you are right: changing shader properties will either create a duplicate of that material, or it will change the material itself and affect all instances using it. Did you apply the tint on the sprite's material, or on the SpriteRenderer?
Doing it by hand, I could think of using MaterialPropertyBlocks; maybe Unity does exactly that for sprites.
Some more detail to clarify:
A) You have "Material01.mat" and it's green. You copy this material to 10 sprites. You want 10 colors? You have to create 10 materials, each holding the desired color: 10 draw calls.
You can do the same via script by just changing material.color, but Unity will duplicate the materials for you. Still 10 draw calls. Some people are confused about why this breaks batching until they hear about that duplication.
B) You change the RENDERER's tint. The SpriteRenderer will write the tint color into your sprite's vertices (probably 4?) using the vertex color attribute. It's basically free, because vertex colors are transmitted to the GPU anyway (AFAIK).
As I said above, the same technique is used in the Particle System to allow rainbow particles with one draw call.
So any particle shader, self-written shader, or sprite shader should work with this. All you need is something like a.albedo = c.rgb * IN.vert.color (a bit of pseudocode here).
That means, the same shader, and the same material can be used for multiple objects, having different vertex colors. That won't break batching.
You can even have different objects of any shape and vertex count, giving them different vertex colors (per vertex, like gradients etc) and it will still batch.
Tick the Static checkbox whenever possible, feed information into vertex colors, and for moving objects try to keep them under 300 verts so that dynamic batching can work.
But for sprites, Unity automated this for you; you simply need to use the SpriteRenderer. That's why you don't use a quad with a texture, but a "Sprite" and a SpriteRenderer.
Again, I could be wrong and the SpriteRenderer may actually use MaterialPropertyBlocks, but it works almost the same way. These variables can be set per object and do not create new draw calls. The variable values are used in the shader, so the material/shader stays the same for multiple objects.
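For non-sprite renderers, here is a minimal sketch of the MaterialPropertyBlock approach mentioned above (it assumes the material's shader exposes a _Color property):

using UnityEngine;

public class PerObjectTint : MonoBehaviour
{
    void Start()
    {
        // Set a per-renderer color override without duplicating the material.
        var block = new MaterialPropertyBlock();
        var rend = GetComponent<Renderer>();

        rend.GetPropertyBlock(block);          // read any existing overrides
        block.SetColor("_Color", Color.red);   // assumes the shader has a _Color property
        rend.SetPropertyBlock(block);          // apply; the shared material is untouched
    }
}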
I'm prototyping a climbing system for my stealth game. I want to mark some edges so that the player can grab them, but I'm not sure how to approach it. I know I can add separate colliders, but this seems quite tedious. Instead, what I'd like to do is mark the edges in the mesh itself. Can this be achieved? How?
You're probably better off storing that edge data externally (that is, elsewhere in your running game rather than in the mesh) in an accessible format. That said, if you truly want to embed it INTO the mesh...
Tuck the metadata into Mesh.colors or Mesh.uv2 like #Draco18s suggests. Most shaders ignore one or both of these. Of course, you'll be duplicating edge/face level info for each vertex on the face.
When a player tries to "grab" an edge, run your collision test, grab the face, grab any vertex on the face, and look up your edge/face info.
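As a minimal sketch of that lookup, assuming you baked a "grabbable" flag into the red channel of Mesh.colors at import time (the names and the encoding are just illustrative):

using UnityEngine;

public class GrabCheck : MonoBehaviour
{
    // Returns true if the hit surface was marked as grabbable in the vertex colors.
    bool IsGrabbable(RaycastHit hit)
    {
        var meshCollider = hit.collider as MeshCollider;   // triangleIndex only works for MeshColliders
        if (meshCollider == null || meshCollider.sharedMesh == null)
            return false;

        Mesh mesh = meshCollider.sharedMesh;
        Color[] colors = mesh.colors;
        int[] tris = mesh.triangles;
        if (colors.Length == 0)
            return false;

        // Any vertex of the hit triangle carries the face-level metadata.
        int vertexIndex = tris[hit.triangleIndex * 3];
        return colors[vertexIndex].r > 0.5f;   // red channel encodes "grabbable" (assumption)
    }
}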
You could also, for example, attach your edge metadata collection to a script attached to the mesh's game object. So long as you can access your collection of edges when you need to perform the test, where you store it is an implementation detail.
If you have the ability to add nested objects in your mesh, you could create a primitive box that represents the desired ledge and collider size. When you import the mesh in Unity, the child mesh object will appear as a child gameObject with its own MeshRenderer. You can disable the MeshRenderer for the ledge and add a BoxCollider component to it. The collider will be automatically sized to match the primitive mesh.
Blender Nested Objects example
Unity Nested Objects example
I have a Unity 3D scene with several cameras looking at the same object (a huge brain mesh, ~100k tris), but not necessarily from the same point of view.
In the same 3D scene there is also a huge number of spherical plot meshes (from 100 to 30,000).
In every camera I have to display the brain mesh along with a subset of the plot meshes.
Depending on the camera view, each plot can have a different size (mesh filter and spherical collider), a different material (opaque or transparent), and can be visible or not.
The spherical collider must have the same size as the mesh.
I set up a single shared mesh used by every spherical plot.
Their material can be one of the several shared materials I have defined.
Before rendering the scene, for each camera view I have to decide in the OnPreCull function which plots are visible and how they look.
This part can be very costly; I tried several things:
setting the GameObjects inactive: too costly
setting the local scale to Vector3(0,0,0): better, but I can see in the profiler that the rendering is still done
setting a fully transparent material: same result, but in the profiler the rendering now shows up as transparent instead of opaque
setting a layer that is not in the cameras' culling masks: huge script cost
I don't know if I can build an efficient culling system with all these cameras looking at the same point...
I welcome any new ideas.
First issue:
Regarding your specific question with the four dots.
Simply set renderer.enabled = false; that's all there is to it.
Note however that as I mention in a comment, you would never try to "cull yourself" in Unity (unless I have misunderstood your description).
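If you do end up toggling visibility per camera, a minimal sketch looks like this (the plot list and the visibility rule are placeholders for your own logic):

using UnityEngine;

// Attach one of these to each camera. It hides the renderers this particular
// camera should not see, then restores them after the camera has rendered.
public class PerCameraPlotCulling : MonoBehaviour
{
    public Renderer[] plotRenderers;   // assumed to be filled in elsewhere

    void OnPreCull()
    {
        foreach (var r in plotRenderers)
            r.enabled = ShouldBeVisible(r);
    }

    void OnPostRender()
    {
        foreach (var r in plotRenderers)
            r.enabled = true;          // restore for the other cameras
    }

    bool ShouldBeVisible(Renderer r)
    {
        return true;                   // placeholder: your per-camera rule goes here
    }
}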
Second issue:
Regarding the small spheres: I suspect you have very many in the scene. You simply can't do that. In video games (the most difficult of all 3D engineering), you do this with billboarding. It's how, say, "grass" is done in a scene. You can achieve this nicely with the particle system in Unity, or with other techniques. An implementation is beyond the scope of this answer, but you will have to fully investigate billboarding. Simply put, it's a small flat image which always faces the camera in the render pass.
Issue 2B:
Note however that sphere colliders are wonderful, and you can use as many as you want; I'm sure this is obvious for basic mathematical reasons. Side tip: folks often try to "write their own" thinking it will be faster. It's impossible to out-write the 100-odd person-years of spatial-culling research in PhysX, and moreover it runs on the metal, the GPU, so you can't beat it.
Issue three:
Is there a chance you're using a mesh collider somewhere in the project? Never use mesh colliders, at all. (It's extremely confusing they are mentioned or used in Unity; they only have one or two very specific limited uses.)
Issue four:
I'm confused about why you are turning things on and off. I have a guess.
I suspect you are not using more than one "stage"!
There's an amazing trick in video games when you have more than one camera: in fact you have "offscreen" scenes! So you may have players in a dungeon or the like. Off "to the side" you may have an entirely duplicated or triplicated setup of the whole thing running (you could "see it if the camera turned the wrong way") for the other cameras. (In your example you would give the doppelgangers different qualities, coloring, map style, or whatever the case may be.) Sometimes you make a whole double just to run physics calculations or address other problems.
Fascinating extreme example of that sort of thing.
In short in your situation,
You likely need one whole 'stage' of a camera and brain for each of the camera views!
Again, this can be seen at http://answers.unity3d.com/answers/299823/view.html, but it is indeed the everyday approach. In your overall scene you will see eight happy brains sitting in a row, each with its own camera. In each one you would display whatever items/angle etc. are relevant. (Obviously, if certain items are "identical, other than the viewing angle", you could use the "same brain with more than one camera", but I would not do that; it's best to have one brain and one camera per view.)
I believe that could be the fundamental issue you're having!
I have an object with a Diffuse shader, and at runtime I want the shader to switch to "Diffuse - Always Visible", but this should trigger only when the unit is behind a specific object in the layer named "obstacles".
First I tried to switch the object's shader with the following code; the shader changes in the Inspector, but not in the game during play. I tried placing the shader in Resources and calling it from there, and also created separate materials, but it's not working.
Here is the code I am using in C#
Unit.renderer.material.shader = Shader.Find("Diffuse - Always visible");
As for the rest I was thinking of using a raycast but not sure how to handle this.
Thanks in advance
I only need the other part of the concept now: how to change the shader when a unit is behind an object!
For the trigger, how to handle it will depend a lot on how your camera moves.
To check in a 3D game, raycasts should work. Because whether or not something is "behind" something else depends on the camera's perspective, I would personally try something like this to start:
Use Camera.WorldToScreenPoint to figure out where your object is on the screen.
Use Camera.ScreenPointToRay to convert this into a ray you'll send through the physics engine.
Use Physics.Raycast and check what gets hit. If it doesn't hit the object in question, something is in front of it.
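Putting those three steps together, a rough sketch (the camera and target references are assumptions you would wire up yourself):

using UnityEngine;

public class OcclusionCheck : MonoBehaviour
{
    public Camera cam;         // the camera doing the looking
    public Transform target;   // the unit we want to test

    // Returns true if something else is in front of the target from this camera's view.
    bool IsOccluded()
    {
        Vector3 screenPoint = cam.WorldToScreenPoint(target.position);
        if (screenPoint.z < 0f)
            return true;       // target is behind the camera; treat it as not visible

        Ray ray = cam.ScreenPointToRay(screenPoint);

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
            return hit.transform != target;   // the ray hit something other than the target

        return false;
    }
}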
Depending on your goals, you might want to check different points. If you're just checking if something is more than halfway covered by an object, doing a single raycast at its center is probably sufficient. If you want to see if it's at all occluded, you could try something like finding four approximate corners of the object and raycasting against them.
If it's a 2D game or the camera is always at a fixed angle (for example, in a side scroller), the problem is potentially a lot simpler. Add colliders to the objects, and use OnTriggerEnter or OnTriggerEnter2D (along with the corresponding Exit functions). That only works if the tiniest bit of overlap should trigger the effect, and it only works if depth doesn't really matter (e.g. in a 2D game) so your colliders will actually intersect.
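For that 2D/fixed-angle variant, a minimal sketch using the layer name from your question (the shader name follows your own code and may need adjusting):

using UnityEngine;

// Requires trigger colliders on the obstacles and a Rigidbody2D + collider on the unit.
public class ObstacleOverlap : MonoBehaviour
{
    public Renderer unitRenderer;   // the unit's renderer, assigned in the Inspector
    Shader normalShader;
    Shader alwaysVisibleShader;

    void Start()
    {
        normalShader = unitRenderer.material.shader;
        alwaysVisibleShader = Shader.Find("Diffuse - Always visible");
    }

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.gameObject.layer == LayerMask.NameToLayer("obstacles"))
            unitRenderer.material.shader = alwaysVisibleShader;
    }

    void OnTriggerExit2D(Collider2D other)
    {
        if (other.gameObject.layer == LayerMask.NameToLayer("obstacles"))
            unitRenderer.material.shader = normalShader;
    }
}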
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file, and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer amount of vertices is simply too much. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've tried to make cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and do RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is pasted into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs called TechCraft:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is hide faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices.
_ _ _ _
|_|_| == |_ _|
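A rough sketch of that neighbour test, assuming a Blocks[x, y, z] array and a BlockType.Air value (the names are placeholders for whatever your world storage looks like):

enum BlockType { Air, Dirt, Stone }   // placeholder block types

class ChunkMesher
{
    const int SizeX = 64, SizeY = 64, SizeZ = 64;
    BlockType[,,] Blocks = new BlockType[SizeX, SizeY, SizeZ];   // assumed to be filled elsewhere

    // Only emit a face if the neighbouring cell in that direction is empty
    // (or lies outside the region, in which case the face can be seen from outside).
    bool IsFaceVisible(int x, int y, int z, int dx, int dy, int dz)
    {
        int nx = x + dx, ny = y + dy, nz = z + dz;

        if (nx < 0 || ny < 0 || nz < 0 || nx >= SizeX || ny >= SizeY || nz >= SizeZ)
            return true;

        return Blocks[nx, ny, nz] == BlockType.Air;
    }

    // When building the region's vertex buffer, call this once per face direction, e.g.:
    //   if (IsFaceVisible(x, y, z, 0, 0, -1)) AddFrontFace(x, y, z);
    //   if (IsFaceVisible(x, y, z, 0, 1,  0)) AddTopFace(x, y, z);
    //   ...and so on for the remaining four directions.
}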
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all: your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes; anything behind them is simply not visible. A natural approach for this would be an octree data structure, which divides 3D space into voxels (cubes). Using an octree you could quickly determine which cubes are visible and draw just those, so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes does not change much, so you may be able to use this "temporal coherence" to update your data structures and minimise the work needed to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing, and it allows you to pass one model to the shader along with a stream of world positions, to "instance" that model hundreds (or even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
Once you have a base model (could be one face of the block), its mesh will have an associated texture that you can then replace with whatever you want to allow you to dynamically change block texturing for each side and differing types of blocks.
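If it helps, here is a rough sketch of the XNA 4.0 instancing setup from that sample, as it would appear in your draw code (the cube buffers, counts, and the effect are placeholders; the effect is assumed to read the per-instance world matrix from the second vertex stream, as the sample's shader does):

// One world matrix per cube instance (built elsewhere, e.g. from your visible-cube list).
Matrix[] instanceTransforms = BuildInstanceTransforms();

// The per-instance stream: a 4x4 world matrix packed as four Vector4s.
VertexDeclaration instanceDeclaration = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
    new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 1),
    new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 2),
    new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 3));

DynamicVertexBuffer instanceBuffer = new DynamicVertexBuffer(
    GraphicsDevice, instanceDeclaration, instanceTransforms.Length, BufferUsage.WriteOnly);
instanceBuffer.SetData(instanceTransforms, 0, instanceTransforms.Length, SetDataOptions.Discard);

// Bind the cube geometry on stream 0 and the instance data on stream 1
// (the trailing 1 means "advance this stream once per instance").
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(cubeVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceBuffer, 0, 1));
GraphicsDevice.Indices = cubeIndexBuffer;

foreach (EffectPass pass in instancedEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawInstancedPrimitives(
        PrimitiveType.TriangleList, 0, 0,
        cubeVertexCount, 0, cubePrimitiveCount,
        instanceTransforms.Length);
}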