I have identical character models, but when I look at the Frame Debugger, each one is rendered as a separate draw call. The models are absolutely identical, yet Unity draws them separately. The Dynamic Batching checkbox is enabled in Project Settings, and Enable GPU Instancing is ticked on the material.
Object in inspector
Material on objects
I looked in the Frame Debugger but didn't come to any conclusion; in theory it should combine these objects into a single draw call.
Solution: remove the SkinnedMeshRenderer component from the object and add a MeshFilter and a MeshRenderer instead. Unfortunately, objects using a SkinnedMeshRenderer are not batched.
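If you have many such objects, a small helper can do the swap for you. This is only a sketch of the idea (the class and method names are my own): it bakes the current pose of the skinned mesh into a regular mesh, which is only valid if the character does not actually need to stay animated.

```csharp
using UnityEngine;

// Hypothetical helper: replaces a SkinnedMeshRenderer with a MeshFilter +
// MeshRenderer so the object becomes eligible for batching.
// Only useful if the mesh does not need to remain animated.
public static class SkinnedToStatic
{
    public static void Convert(GameObject go)
    {
        var skinned = go.GetComponent<SkinnedMeshRenderer>();
        if (skinned == null) return;

        // Bake the current pose into a plain mesh.
        var baked = new Mesh();
        skinned.BakeMesh(baked);

        var materials = skinned.sharedMaterials;
        Object.Destroy(skinned);

        go.AddComponent<MeshFilter>().sharedMesh = baked;
        go.AddComponent<MeshRenderer>().sharedMaterials = materials;
    }
}
```

Keep in mind that dynamic batching only applies to fairly small meshes (on the order of a few hundred vertices), so for detailed characters GPU instancing is usually the better fit.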
I am experimenting with the terrain tools. The app has a cube (0.2x0.2x0.05) that flies around representing a quadcopter. It has a collider (not set as a trigger) and a Rigidbody. It is controlled by AddForce() etc., i.e. its position and rotation are not changed directly.
It works quite reliably in scenery constructed from primitives (cubes, spheres etc).
I am now trying out the Unity terrain tools package (I'm using editor 2019.3) and have a simple test terrain (500x500m) with rock, scree and sand layers.
What I find is that sometimes when flying it directly into the terrain just to check what happens, it goes straight through. Often it collides OK, but not always, maybe 50:50.
The FixedUpdate() rate is the standard 20 ms.
I'm not sure if there is any step I have missed. I am just using the default settings on the terrain, and it has the standard collider. It isn't set up as just a trigger or anything (and mostly it works OK).
Is this something I've done / not done, or is this a known issue - is there a workaround?
STOP PRESS: Testing is still in progress, but I have a feeling this may be to do with the collision detection type selected in the Rigidbody - see the answer below.
Testing is still in progress, but it strongly appears this is to do with the collision detection type selected in the Rigidbody. The default is Discrete, but the Unity docs note that for fast-moving bodies it may be necessary to use Continuous (at some CPU cost) to avoid missing some collisions.
The problem has not happened since the type was changed to continuous.
The quadcopter is, after all, quite a fast-moving body (up to 150 km/h) and small.
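For reference, the same change can be made from code rather than the inspector; a minimal sketch (the class name is just an example):

```csharp
using UnityEngine;

// Attach to the quadcopter: switches collision detection from the default
// Discrete mode to Continuous so fast movement doesn't tunnel through the terrain.
[RequireComponent(typeof(Rigidbody))]
public class QuadcopterPhysicsSetup : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Rigidbody>().collisionDetectionMode =
            CollisionDetectionMode.Continuous;
    }
}
```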
I've tried Fantastic glass from the asset store along with just making glass myself:
And because the orthographic camera doesn't support deferred rendering, or potentially due to other reasons, I can't get reflective glass despite having smoothness set to 1.
How can I get reflective glass in isometric?
I think I know what the issue might be here.
In deferred rendering, reflections are sometimes calculated by tracing rays against the depth and normal buffers to see whether any other pixel on the screen would be visible in the reflection (screen-space reflections). In forward rendering, only reflection probes are used, which have to be baked at some point. Anything that is to be baked into a reflection probe needs to have "Reflection Probe Static" set under its static flags - otherwise, only the skybox will be rendered into the probe.
In your case, the background looks pitch black - so if you were to bake a probe without any other objects visible, the reflection probe would also turn out black and no reflections would be visible.
If this is a builder game, this might be problematic since the environment is not static, though I guess you could set all the structures to "lightmap static" in their prefabs, and then use a real-time reflection probe which is set to update from a script, rendering it every time the scene is updated (see the sketch below). In that case, you might want to set the probe to a low resolution to avoid hiccups. If you want more accurate reflections, you can spread several probes out in a grid with slightly overlapping bounds and only update the closest one.
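A rough sketch of the scripted-update idea (the field and method names are placeholders), assuming the probe's Type is set to Realtime and its Refresh Mode to "Via scripting":

```csharp
using UnityEngine;

public class ReflectionRefresher : MonoBehaviour
{
    public ReflectionProbe probe;   // realtime probe, refresh mode "Via scripting"

    // Call this from your building code whenever a structure is placed or removed.
    public void OnStructuresChanged()
    {
        // Re-renders the probe's cubemap; RenderProbe() returns a render ID you
        // could poll with probe.IsFinishedRendering(id) if needed.
        probe.RenderProbe();
    }
}
```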
I have a CSV with the following columns: time, carId, x, y. I am writing a script which will read the CSV and spawn car objects and simply update their positions over time based on the data.
Over the span of about 20 minutes, around 3500 car objects will have to be instantiated. They won't all be in the simulation at the same time (once a car reaches certain points in my road network, it disappears), but I want to be prepared for a situation where hundreds of car objects move through the network at once.
I'm still new to Unity and its rendering pipeline when it comes to optimizing a project this big. I know that in some cases it's better to call SetActive(false) on GameObjects as opposed to Destroy(), and maybe this is one of them. What else should I consider when handling this many GameObjects in Unity?
What else should I consider when handling this many GameObjects in Unity?
For the most part, this really depends on the number of objects (cars) that will be displayed on screen at the same time.
If it's just a few cars, use object pooling to recycle them.
You should also use LOD to optimize and reduce the number of triangles rendered for an object.
If it's a traffic simulation with hundreds of cars moving at the same time, then you should use GPU instancing, which is now built into Unity. To enable it, use the Standard Shader and check the "Enable GPU Instancing" box on the material.
After enabling it on the material, you can use Graphics.DrawMeshInstanced to draw the objects and a MaterialPropertyBlock to change how they look.
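A minimal sketch of what that could look like (the mesh, material and the placeholder positions stand in for your own simulation data); note that DrawMeshInstanced draws at most 1023 instances per call:

```csharp
using UnityEngine;

public class CarInstancer : MonoBehaviour
{
    public Mesh carMesh;          // shared car mesh
    public Material carMaterial;  // Standard Shader with "Enable GPU Instancing" ticked

    // One matrix (position/rotation/scale) per visible car, max 1023 per draw call.
    Matrix4x4[] matrices = new Matrix4x4[512];

    void Update()
    {
        // Fill the matrices from your simulation data (placeholder positions here).
        for (int i = 0; i < matrices.Length; i++)
            matrices[i] = Matrix4x4.TRS(new Vector3(i * 3f, 0f, 0f),
                                        Quaternion.identity, Vector3.one);

        // Draws all cars in a single instanced call; a MaterialPropertyBlock
        // could be passed as an extra argument for per-instance properties.
        Graphics.DrawMeshInstanced(carMesh, 0, carMaterial, matrices, matrices.Length);
    }
}
```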
This sounds like a great opportunity to use object pooling. You are on the right track with using SetActive().
Follow this short tutorial: https://unity3d.com/learn/tutorials/topics/scripting/object-pooling
It should reduce a lot of the lag you would get from instantiating/destroying a lot of objects.
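For context, the core of a pool is only a few lines; a minimal sketch (the class, field and method names are just examples):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class CarPool : MonoBehaviour
{
    public GameObject carPrefab;                      // your car prefab
    readonly Queue<GameObject> inactive = new Queue<GameObject>();

    // Reuse a pooled car if one is available, otherwise instantiate a new one.
    public GameObject Spawn(Vector3 position)
    {
        GameObject car = inactive.Count > 0 ? inactive.Dequeue()
                                            : Instantiate(carPrefab);
        car.transform.position = position;
        car.SetActive(true);
        return car;
    }

    // Instead of Destroy(), deactivate the car and keep it for later reuse.
    public void Despawn(GameObject car)
    {
        car.SetActive(false);
        inactive.Enqueue(car);
    }
}
```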
I'm developing a game for Google Daydream, and the cameras used there are really confusing - all of their parameters are set to the same value when the scene starts. Initially I had a problem with distant object meshes clipping through each other. I found out that the solution is to set the near clipping plane to a higher value. This caused the next problem, which was the cockpit of my ship not being entirely rendered because the near clipping plane was now too far away. I tried to create another camera that would render only the cockpit, but because of what I mentioned previously it doesn't work (the cameras act weird), and when I set it through a script it also doesn't work properly.
Because of that, I need to change some property of the object, not the camera. I found this, but it only works for the far clipping plane (otherwise it would be perfect). Do you know how I can ignore the near clipping plane for one object/layer without adding a second camera?
Regarding further object meshes clipping, this is likely happening due to z-fighting: https://en.wikipedia.org/wiki/Z-fighting
Note that by default, Daydream in Unity will use a 16-bit depth buffer. This is accessible via Player Settings -> Other Settings -> Virtual Reality SDKs -> Daydream.
Switching to a 32-bit depth buffer might allow you to render both the objects in the cockpit and the objects far away using a small nearClippingPlane value. However, this is mostly a mitigation and you might still run into clipping problems, albeit much smaller ones. Additionally there's a performance impact by doing this since you're doubling the memory and bandwidth used by your depth buffer.
You should be able to use multiple cameras like you tried. Create a "Cockpit" and "Environment" camera, with the cockpit camera rendering second using the camera's Depth property. This will render the environment first, and you can ignore all the objects within the cockpit. This has the advantage that you can push out the near plane pretty far.
Next, set the cockpit camera to only clear depth. You can set the cockpit camera to enclose only objects that might be in your cockpit. Now the renderer will preserve the environment, but allow you to use two different depth ranges. Note that this also has performance implications on mobile devices, as the multiple render passes will incur an extra memory transfer, and you also need to clear the depth buffer in the middle of rendering.
You might want to consider creating separate layers for your objects, e.g. "cockpit" and "environment" to prevent things from being rendered twice.
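A rough sketch of that two-camera setup in code (the layer names are assumptions; you can equally configure all of this in the inspector):

```csharp
using UnityEngine;

public class DualCameraSetup : MonoBehaviour
{
    public Camera environmentCamera;
    public Camera cockpitCamera;

    void Awake()
    {
        // Environment renders first, with a pushed-out near plane.
        environmentCamera.depth = 0;
        environmentCamera.cullingMask = ~LayerMask.GetMask("Cockpit");
        environmentCamera.nearClipPlane = 1f;
        environmentCamera.farClipPlane = 2000f;

        // Cockpit renders second, keeps the colour buffer, clears only depth,
        // and uses a much tighter near/far range.
        cockpitCamera.depth = 1;
        cockpitCamera.clearFlags = CameraClearFlags.Depth;
        cockpitCamera.cullingMask = LayerMask.GetMask("Cockpit");
        cockpitCamera.nearClipPlane = 0.05f;
        cockpitCamera.farClipPlane = 5f;
    }
}
```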
Sample scene with two cameras, note the difference in near/far values
I need to render an object multiple times a frame using different textures. I was wondering about the most performant way to do so. My first approach was to have one Material and in OnRenderImage() call SetTexture() on it for the given number of textures I have. Now I'm wondering if it would be a noticeable improvement if I set up one Material per Texture in Start() and change between Materials in OnRenderImage(). Then I wouldn't need the SetTexture() call. But I can't find what SetTexture() actually does. Does it just set a flag or does it copy or upload the texture somewhere?
From working with low-end devices extensively, performance comes from batching. It's hard to pinpoint what would improve performance in your context without a clear understanding of the scope:
How many objects
How many images
Target platform
Are the images packaged or external at runtime
...
But as a general rule, you want to use the smallest number of materials and individual textures possible. If pre-processing is an option, I would recommend creating spritesheets that pack as many images as possible into a single texture. Then, using UV offsetting, you can show multiple images on multiple objects in one draw call.
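As an illustration of the UV-offsetting idea (assuming a simple 2x2 atlas; a real tool like TexturePacker would generate the coordinates for you), you can remap a mesh's UVs onto one tile of the atlas so that every object keeps sharing the same material:

```csharp
using UnityEngine;

// Remaps this object's UVs onto one tile of a 2x2 atlas. Because the material
// itself is untouched, all objects using the atlas can still batch together.
public class AtlasTile : MonoBehaviour
{
    public int column; // 0 or 1
    public int row;    // 0 or 1

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh; // per-object mesh copy
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
        {
            // Squeeze the original 0..1 UVs into the chosen quarter of the atlas.
            uvs[i] = new Vector2(uvs[i].x * 0.5f + column * 0.5f,
                                 uvs[i].y * 0.5f + row * 0.5f);
        }
        mesh.uv = uvs;
    }
}
```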
I have used a tool called TexturePacker extensively, and it supports Unity. It's not cheap (there's an app to buy plus a plugin for Unity), but it saves time, and draw calls, in the end.
Think packing hundreds of images into a few 4K textures, bringing hundreds of draw calls down to 3 or 4.
That might not be a solution in your case, but the concept is still valid.
Also note that Unity prefabs will not save draw calls; they only reduce memory usage.
hth.
J.