I'm developing a game for Google Daydream, and the cameras used there are really confusing - all of their params are reset to the same value when the scene starts. Initially I had a problem with distant object meshes clipping through each other. I found out that the solution is to set nearClippingPlane to a higher value. This caused the next problem, which was the cockpit of my ship not being entirely rendered because the nearClippingPlane was now too far. I tried to create another camera that would render only the cockpit, but because of what I mentioned previously it doesn't work (the cameras act weird), and when I set it through a script it also doesn't work properly.
Because of that, I need to change some property of the object, not the camera. I found this, but it only works for farClippingPlane (otherwise it would be perfect). Do you know how I can ignore nearClippingPlane for one object/layer without adding a second camera?
Regarding distant object meshes clipping through each other, this is likely z-fighting: https://en.wikipedia.org/wiki/Z-fighting
Note that by default, Daydream in Unity uses a 16-bit depth buffer. This setting is accessible via Player Settings -> Other Settings -> Virtual Reality SDKs -> Daydream.
Switching to a 32-bit depth buffer might allow you to render both the objects in the cockpit and the objects far away using a small nearClippingPlane value. However, this is mostly a mitigation: you might still run into clipping problems, albeit much smaller ones. There's also a performance cost, since you're doubling the memory and bandwidth used by your depth buffer.
You should be able to use multiple cameras like you tried. Create a "Cockpit" and an "Environment" camera, with the cockpit camera rendering second by giving it a higher value in the camera's Depth property. This renders the environment first, with the environment camera ignoring all the objects inside the cockpit. The advantage is that you can push the environment camera's near plane out pretty far.
Next, set the cockpit camera to clear only the depth buffer, and give it a near/far range that encloses only objects that might be in your cockpit. The renderer will then preserve the environment's color output while letting you use two different depth ranges. Note that this has performance implications on mobile devices: the multiple render passes incur an extra memory transfer, and you also need to clear the depth buffer in the middle of rendering.
You might also want to create separate layers for your objects, e.g. "Cockpit" and "Environment", and assign them to the cameras' culling masks to prevent things from being rendered twice.
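As a sketch, the two-camera setup described above could be configured from a script like this (the "Cockpit"/"Environment" layer names and the specific clip-plane values are assumptions to adjust for your scene):

```csharp
using UnityEngine;

// Illustrative sketch: configure an environment camera and a cockpit camera
// so each renders its own layer with its own depth range.
public class CockpitCameraSetup : MonoBehaviour
{
    public Camera environmentCamera;
    public Camera cockpitCamera;

    void Start()
    {
        // Environment renders first and ignores cockpit objects.
        environmentCamera.depth = 0;
        environmentCamera.cullingMask = LayerMask.GetMask("Environment");
        environmentCamera.nearClipPlane = 5f;    // pushed out to reduce z-fighting
        environmentCamera.farClipPlane = 5000f;

        // Cockpit renders second, clears only the depth buffer,
        // and uses a tight near/far range around the cockpit geometry.
        cockpitCamera.depth = 1;
        cockpitCamera.clearFlags = CameraClearFlags.Depth;
        cockpitCamera.cullingMask = LayerMask.GetMask("Cockpit");
        cockpitCamera.nearClipPlane = 0.05f;
        cockpitCamera.farClipPlane = 10f;
    }
}
```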
Sample scene with two cameras, note the difference in near/far values
I have identical character models, but when I look at the Frame Debugger, they are rendered as completely separate draw calls. The models are absolutely identical, yet Unity draws them separately. The Dynamic Batching checkbox is enabled in Project Settings, and the material has Enable GPU Instancing ticked.
Object in inspector
Material on objects
I looked in the Frame Debugger, but in the end I couldn't come to any conclusion, since it really should be combining these objects into one draw call.
Solution: remove the SkinnedMeshRenderer component from the mesh on the object and add a MeshFilter and MeshRenderer instead. Unfortunately, SkinnedMeshRenderer does not participate in dynamic batching.
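If the models no longer need skinned animation, the swap can be done at runtime rather than by hand. A hedged sketch using `SkinnedMeshRenderer.BakeMesh` (the component layout here is an assumption):

```csharp
using UnityEngine;

// Sketch: bake a SkinnedMeshRenderer's current pose into a static mesh
// so the object can use MeshFilter/MeshRenderer and become batchable.
// Only appropriate if the model no longer needs skinned animation.
public class BakeToStaticMesh : MonoBehaviour
{
    void Start()
    {
        var skinned = GetComponent<SkinnedMeshRenderer>();
        var baked = new Mesh();
        skinned.BakeMesh(baked);          // capture the current pose

        var filter = gameObject.AddComponent<MeshFilter>();
        filter.sharedMesh = baked;

        var meshRenderer = gameObject.AddComponent<MeshRenderer>();
        meshRenderer.sharedMaterial = skinned.sharedMaterial;

        Destroy(skinned);                 // remove the non-batching renderer
    }
}
```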
Since a decent frame rate is so important in VR apps, I was wondering if you can predict frame drops. If so, before the issue actually occurs, could you deactivate some scripts or other features, keeping only the camera transform updates and the rendering of the environment? That way, if performance drops (i.e. frames drop), no nausea would be experienced.
Predicting the future is not very likely, so you're going to have to adapt on the fly when you see performance drop. Either that or you could imagine creating a test environment a user could run where you try and figure out the capabilities of the user's hardware setup and tweak settings accordingly for future actual app runs. (i.e. "the test environment ran below the desired 120fps at medium settings, so default to low from now on")
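As a sketch of adapting on the fly, you could track a moving average of the frame time and step Unity's quality level down when it stays over budget. The 90 Hz budget, smoothing factor, and thresholds below are assumptions to tune:

```csharp
using UnityEngine;

// Sketch: step Unity's quality level down when the smoothed frame time
// exceeds the target budget for a sustained period.
public class AdaptiveQuality : MonoBehaviour
{
    const float TargetFrameTime = 1f / 90f;  // e.g. a 90 Hz headset
    float smoothed;
    float overBudgetTime;

    void Update()
    {
        // Exponential moving average of the unscaled frame time.
        smoothed = Mathf.Lerp(smoothed, Time.unscaledDeltaTime, 0.1f);

        if (smoothed > TargetFrameTime * 1.1f)
        {
            overBudgetTime += Time.unscaledDeltaTime;
            if (overBudgetTime > 2f && QualitySettings.GetQualityLevel() > 0)
            {
                QualitySettings.DecreaseLevel();  // drop to cheaper settings
                overBudgetTime = 0f;
            }
        }
        else
        {
            overBudgetTime = 0f;
        }
    }
}
```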
I don't know what platform you are on exactly, but in case you're on the Oculus ecosystem, you might be able to get some help.
By default on Oculus devices you're supported by what they refer to as "Asynchronous TimeWarp". This in essence decouples the headset's transform update and rendering from the framerate of your application. If no up-to-date frame is available, the latest frame will be transformed based on the latest head tracking information, reducing how noticeable such hiccups are. You will still want to avoid this having to kick in as much as possible though.
Additionally, Oculus supports "Fixed Foveated Rendering" on their mobile platforms where depending on your GPU utilization the device can render at lower resolutions at the edges of your view. In practice I've found this to be surprisingly effective, even though (as the name implies) it's fixed at the center of the view and does not include any eye tracking. But as with the previous method, not needing it is always better.
I'm unfortunately less familiar with options on other devices, but I'm sure others will pitch in if those exist.
I am working with the HoloLens in Unity and trying to map a large area (15x15x25 meters). I am able to map the whole area using the SpatialMapping prefab, but I want to do some spatial processing on that mesh to smooth out the floors and walls. I have been trying to use SpatialUnderstanding for this, but there seems to be a hard limit on how big an area you can scan with it, which has been detailed in a HoloLens forums thread.
Currently, I don't understand how the pipeline of data works from SpatialMapping to SpatialUnderstanding. Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding? Is there some better method of creating smooth surfaces?
This solution works best for pre-generated rooms. In other words, a general solution, one that end users could be expected to use, is not possible given the current limitations.
I will start with the last question: "Is there some better method of creating smooth surfaces?"
Yes: use a tripod on wheels to generate the initial scan. Given the limited resolution of the accelerometers and compasses in the hardware, reducing the variance in one linear axis (height) and one rotational axis (roll, which should not vary at all during a scan) will result in a much more accurate scan.
The other method to create smooth surfaces is to export the mesh to a 3D editing program and manually flatten the surfaces, then reimport the mesh into Unity3D.
"Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding?"
SpatialUnderstanding further divides the generated mesh into (8cm, 8cm, 8cm) voxels and then calculates surfels based on each voxel. To keep performance and memory utilization in check, a hard limit of approximately (10m, 10m, 10m) is enforced, implemented as (128, 128, 128) voxels (128 × 0.08 m ≈ 10.24 m per axis).
Any attempt to use SpatialUnderstanding beyond its defined limits will produce spurious results due to overflow of the underlying data structures.
I need to render an object multiple times a frame using different textures, and I was wondering about the most performant way to do so. My first approach was to have one Material and, in OnRenderImage(), call SetTexture() on it for each of the textures I have. Now I'm wondering if it would be a noticeable improvement to set up one Material per Texture in Start() and switch between Materials in OnRenderImage(); then I wouldn't need the SetTexture() call. But I can't find what SetTexture() actually does - does it just set a flag, or does it copy or upload the texture somewhere?
From working with low-end devices extensively: performance comes from batching. It's hard to pinpoint what would improve performance in your context without a clear understanding of the scope:
How many objects
How many images
Target platform
Are the images packaged or external at runtime
...
But as a general rule, you want to use the smallest number of materials and individual textures possible. If pre-processing is an option, I would recommend creating spritesheets with as many images as possible packed into a single texture. Then, using UV offsetting, you can show multiple images on multiple objects in one draw call.
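A minimal sketch of the UV-offsetting idea, remapping a mesh's UVs into one tile of an atlas so every object can keep sharing the same material (the grid layout and row-major indexing are assumptions):

```csharp
using UnityEngine;

// Sketch: remap this object's mesh UVs into one sub-rectangle of an atlas.
// Because the material itself is untouched and shared, objects showing
// different tiles can still batch into one draw call.
public class AtlasTile : MonoBehaviour
{
    public int columns = 4;
    public int rows = 4;
    public int index;   // which tile to show, row-major from the bottom-left

    void Start()
    {
        var mesh = GetComponent<MeshFilter>().mesh;  // per-object mesh copy
        var uvs = mesh.uv;
        var scale = new Vector2(1f / columns, 1f / rows);
        var offset = new Vector2((index % columns) * scale.x,
                                 (index / columns) * scale.y);
        for (int i = 0; i < uvs.Length; i++)
            uvs[i] = Vector2.Scale(uvs[i], scale) + offset;
        mesh.uv = uvs;
    }
}
```

Note that setting a per-object texture offset on the material instead (via `renderer.material`) would create a material instance and break batching, which is why the remap happens on the mesh here.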
I have used a tool called TexturePacker extensively, and it supports Unity. It's not cheap (there's an app to buy plus a plugin for Unity), but it saves time, and draw calls, in the end.
Think packing hundreds of images into a few 4K textures and getting down to 3 or 4 draw calls versus hundreds before.
That might not be a solution in your case, but the concept is still valid.
Also, Unity prefabs will not save draw calls, but they do reduce memory usage.
hth.
J.
I'm getting images from a C328R camera attached to a small arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx.
Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions?
I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
Vision can be surprisingly difficult, especially as you try to tolerate varying conditions. A few good things to research include blob finding (searching for contiguous pixels matching certain criteria, usually a brightness threshold), image segmentation (can you have multiple balls in an image?), and general theory on hue (most vision algorithms work with grayscale or binary images, so you'll first need to transform the image in a way that highlights "orangeness" as the criterion for selection).
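To make that concrete, here is a hedged C# sketch of the crudest version: classify each pixel as "orange" with channel-ratio thresholds and take the centroid of the matching pixels. The thresholds are assumptions you would tune for your camera and lighting:

```csharp
using System;

// Sketch: find the centroid of "orange" pixels in a raw RGB byte buffer.
static class OrangeFinder
{
    // Orange: red dominant, green moderate, blue low. Tune for your lighting.
    static bool IsOrange(byte r, byte g, byte b)
    {
        return r > 100 && g < (r * 3) / 4 && b < r / 3;
    }

    // rgb is packed as r,g,b per pixel, row-major. Returns false if no
    // orange pixels were found.
    public static bool TryFindCentroid(byte[] rgb, int width, int height,
                                       out int cx, out int cy)
    {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int i = (y * width + x) * 3;
            if (IsOrange(rgb[i], rgb[i + 1], rgb[i + 2]))
            {
                sumX += x; sumY += y; count++;
            }
        }
        cx = count > 0 ? (int)(sumX / count) : -1;
        cy = count > 0 ? (int)(sumY / count) : -1;
        return count > 0;
    }
}
```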
Since you are presumably tracking these objects in real time as you move toward them, you might also be interested in learning about tracking models, such as the Kalman filter. It's overkill for what you're doing, but it's interesting and the basic ideas are helpful. Since you presumably know that the object should not be moving very quickly, you can use that fact to filter out false positives that could otherwise lead you to move away from the object. You can put together a simpler version of this kind of filtering by simply ignoring frames that have moved too far from the previous accepted frame (with a few boundary conditions to avoid getting stuck ignoring the object).
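That simpler frame-gating filter might be sketched like this; the jump limit and the miss counter (the "boundary condition" that lets you re-acquire the ball) are assumptions to tune:

```csharp
using System;

// Sketch: reject detections that jump too far from the last accepted
// position, with a miss counter so we can re-acquire a lost target.
class JumpFilter
{
    int lastX = -1, lastY = -1, misses;
    const int MaxJump = 40;     // max pixels of movement per frame
    const int MaxMisses = 10;   // after this many rejections, accept anyway

    public bool Accept(int x, int y)
    {
        bool ok = lastX < 0    // no previous detection yet
                  || (Math.Abs(x - lastX) <= MaxJump &&
                      Math.Abs(y - lastY) <= MaxJump)
                  || misses >= MaxMisses;
        if (ok) { lastX = x; lastY = y; misses = 0; }
        else misses++;
        return ok;
    }
}
```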