Unity - can't get reflective glass with orthographic camera? - c#

I've tried Fantastic Glass from the Asset Store as well as making the glass myself.
Because the orthographic camera doesn't support deferred rendering (or possibly for other reasons), I can't get reflective glass despite having smoothness set to 1.
How can I get reflective glass in isometric?

I think I know what the issue might be here.
In deferred rendering, reflections can be calculated by raycasting against the normal buffer and checking whether any other pixel on the screen would be visible in the reflection (screen-space reflections). In forward rendering, only reflection probes are used, and these have to be baked at some point. Anything that should be baked into a reflection probe needs "Reflection Probe Static" set under its static flags - otherwise, only the skybox will be rendered.
In your case, the background looks pitch black - so if you were to bake a probe without any other objects visible, the reflection probe would also turn out black and no reflections would be visible.
If this is a builder game, this might be problematic since the environment is not static, though I guess you could set all the structures to "lightmap static" in their prefabs, and then have a real-time reflection probe which is set to update from script, and render it every time the scene is updated. In that case, you might want to set it to a low resolution to avoid hiccups. If you want more accurate reflections, you can spread some probes out in a grid with slightly overlapping bounds, and only update the closest one.
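If it helps, here is a minimal sketch of that "update from script" approach. The probe reference and the RefreshReflections entry point are assumptions - call it from whatever code places or removes structures:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class BuildReflectionUpdater : MonoBehaviour
    {
        [SerializeField] private ReflectionProbe probe; // assumed to be assigned in the Inspector

        private void Awake()
        {
            // Real-time probe that only re-renders when asked to.
            probe.mode = ReflectionProbeMode.Realtime;
            probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
            probe.resolution = 64; // keep it small to avoid hiccups
        }

        // Call this whenever the player adds or removes a structure.
        public void RefreshReflections()
        {
            probe.RenderProbe(); // re-renders the probe's cubemap
        }
    }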

Related

How to make object/layer ignore near clipping planes?

I'm developing a game for Google Daydream, and the cameras used there are really confusing - all of their parameters are set to the same value when the scene starts. Initially I had a problem with distant object meshes clipping through each other. I found out that the solution to this is to set nearClippingPlane to a higher value. This caused the next problem, which was the cockpit of my ship not being entirely rendered because the nearClippingPlane was now too far. I tried to create another camera that would render only the cockpit, but because of what I mentioned previously it doesn't work (the cameras act weird), and when I set it through a script it also doesn't work properly.
Because of that I need to change some property of the object, not the camera. I found this, but it only works for farClippingPlane (otherwise it would be perfect). Do you know how I can ignore the nearClippingPlane for one object/layer without adding a second camera?
Regarding the distant object meshes clipping through each other, this is most likely z-fighting: https://en.wikipedia.org/wiki/Z-fighting
Note that by default, Daydream in Unity will use a 16-bit depth buffer. This is accessible via the player settings -> other settings -> Virtual Reality SDKs -> Daydream.
Switching to a 32-bit depth buffer might allow you to render both the objects in the cockpit and the objects far away using a small nearClippingPlane value. However, this is mostly a mitigation and you might still run into clipping problems, albeit much smaller ones. Additionally there's a performance impact by doing this since you're doubling the memory and bandwidth used by your depth buffer.
You should be able to use multiple cameras like you tried. Create a "Cockpit" and "Environment" camera, with the cockpit camera rendering second using the camera's Depth property. This will render the environment first, and you can ignore all the objects within the cockpit. This has the advantage that you can push out the near plane pretty far.
Next, set the cockpit camera to only clear depth. You can set the cockpit camera to enclose only objects that might be in your cockpit. Now the renderer will preserve the environment, but allow you to use two different depth ranges. Note that this also has performance implications on mobile devices, as the multiple render passes will incur an extra memory transfer, and you also need to clear the depth buffer in the middle of rendering.
You might want to consider creating separate layers for your objects, e.g. "cockpit" and "environment" to prevent things from being rendered twice.
Sample scene with two cameras, note the difference in near/far values
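In script form, the setup above might look roughly like this. The layer names and clip values are assumptions, and in VR some camera parameters (such as field of view) are driven by the SDK, but depth, clear flags, culling masks and clip planes are still respected:

    using UnityEngine;

    public class CockpitCameraSetup : MonoBehaviour
    {
        [SerializeField] private Camera environmentCamera;
        [SerializeField] private Camera cockpitCamera;

        private void Awake()
        {
            // Environment renders first, with a pushed-out near plane to avoid z-fighting.
            environmentCamera.depth = 0;
            environmentCamera.cullingMask = LayerMask.GetMask("Environment");
            environmentCamera.nearClipPlane = 5f;
            environmentCamera.farClipPlane = 5000f;

            // Cockpit renders second, clears only depth so the environment stays visible,
            // and uses a tight depth range around the cockpit geometry.
            cockpitCamera.depth = 1;
            cockpitCamera.clearFlags = CameraClearFlags.Depth;
            cockpitCamera.cullingMask = LayerMask.GetMask("Cockpit");
            cockpitCamera.nearClipPlane = 0.05f;
            cockpitCamera.farClipPlane = 10f;
        }
    }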

Does drawing outside of screen bounds affect performance

In my 2D game I have a large map, and I scroll around it (like in Age of Empires).
In Draw I draw all the elements (which are textures/images). Some of them are on the screen, but most of them are not (because you see only about 10% of the map).
Does XNA know not to draw them by checking that the destination rectangle won't fall on the screen?
Or should I manually check it and avoid drawing them at all?
A sometimes overlooked aspect of CPU performance when using SpriteBatch is how you go about batching your sprites, and using DrawableGameComponents doesn't lend itself easily to batching sprites efficiently.
Basically, the GPU drawing the sprites is not slow, and the CPU organizing all the sprites into a batch is not slow. The slow part is the CPU communicating to the GPU what it needs to draw (sending it the batch info). This CPU-to-GPU communication does not happen when you call spriteBatch.Draw(); it happens when you call spriteBatch.End(). So the low-hanging fruit for efficiency is calling spriteBatch.End() less often (which of course means calling Begin() less often too). Also, use SpriteSortMode.Immediate very sparingly, because it causes the CPU to send each sprite's info to the GPU immediately (slow).
So if you call Begin() and End() in each game component class, and have many components, you are costing yourself a lot of time unnecessarily, and you will probably save more time by coming up with a better batching scheme than by worrying about offscreen sprites.
Aside: the GPU automatically discards offscreen pixels before they reach the pixel shader anyway, so culling offscreen sprites on the CPU won't save GPU time.
Reference here.
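To make that concrete, here is a rough sketch of the pattern: instead of each component owning its own Begin()/End() pair, one SpriteBatch is begun and ended once per frame and handed to everything that draws. The sceneDrawables list and DrawSprites method are assumptions standing in for your own drawable objects:

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);

        // One batch for the whole frame instead of one per component.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        foreach (var drawable in sceneDrawables)   // hypothetical list of drawables
            drawable.DrawSprites(spriteBatch);     // each one only calls spriteBatch.Draw(...)
        spriteBatch.End();                         // the single CPU-to-GPU submission point

        base.Draw(gameTime);
    }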
You must manually account for this, and it will greatly affect the performance of the game as it grows; this is otherwise known as culling. Culling is done not just because drawing stuff off screen reduces performance, but because calling Draw that many extra times is slow. Anything out of the viewport that you don't need to update should be excluded too. You can see more about how to do this and how SpriteBatch handles it here.
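If you do decide to cull on the CPU, a simple rectangle test against the visible portion of the map is usually enough. This is a sketch under the assumption that you track a camera offset and that each sprite knows its destination rectangle in world coordinates:

    // Visible portion of the world (camera scroll offset applied).
    Rectangle viewRect = new Rectangle((int)cameraX, (int)cameraY,
                                       GraphicsDevice.Viewport.Width,
                                       GraphicsDevice.Viewport.Height);

    foreach (var sprite in mapSprites)             // hypothetical sprite list
    {
        if (!viewRect.Intersects(sprite.Bounds))   // Bounds = destination rectangle in world space
            continue;                              // skip drawing (and any per-sprite work) entirely

        // Convert from world space to screen space before drawing.
        Rectangle screenRect = new Rectangle(sprite.Bounds.X - (int)cameraX,
                                             sprite.Bounds.Y - (int)cameraY,
                                             sprite.Bounds.Width, sprite.Bounds.Height);
        spriteBatch.Draw(sprite.Texture, screenRect, Color.White);
    }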

XNA SpriteSortMode Speed and Shaders (4.0)

Evening/afternoon;
I'm getting to grips with writing a little custom 2D XNA engine for my own personal use. When deciding how to draw the 2D sprites, I'm stuck in a quandary:
Firstly, I'd like, at some point, to implement some custom shader effects. Every tutorial I read on the internet says that I am therefore forced to use SpriteSortMode.Immediate - except one, which says that in XNA 4.0 this is no longer necessary.
Furthermore, I am unsure about which SpriteSortMode is fastest for my approach, regardless of shading effects. Ordering/layering of different sprites is definitely a necessity (i.e. to have a HUD in front of the game sprites, and the game sprites in front of a backdrop etc.); but would it be faster to implement a custom sorted list and just call the Draw()s in order, or use the BackToFront / FrontToBack options?
Thank you in advance.
Starting with XNA 4.0 you can use custom shader effects with any sprite sort mode. The Immediate sprite sort mode in XNA 3.1 was pretty broken. (see http://blogs.msdn.com/b/shawnhar/archive/2010/04/05/spritesortmode-immediate-in-xna-game-studio-4-0.aspx)
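By way of illustration, the XNA 4.0 Begin overload that takes a custom effect looks roughly like this (the effect asset name is a placeholder, and spriteTexture/position stand in for whatever you are drawing):

    // Load a custom pixel shader once (content name is a placeholder).
    Effect tintEffect = Content.Load<Effect>("Shaders/MyTint");

    // Any sort mode works in 4.0; the effect applies to the whole batch.
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      null, null, null, tintEffect);
    spriteBatch.Draw(spriteTexture, position, Color.White);
    spriteBatch.End();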
Concerning sorting, I would say to sort back to front for transparent sprites and front to back for opaque ones. See: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritesortmode.aspx
There is one extra interesting mode there (for a 2D engine at least): the Texture sorting mode, which sorts the draw calls by which texture is needed and thus reduces state changes. That could be a big performance win for the main game sprites.
And I wouldn't worry too much about performance until your profiler says otherwise. SpriteBatch does quite a lot of batching (yes, really) and that will be the biggest performance improvement because it minimizes the number of state changes.
The only other way I can think of to improve performance is to use instancing, but I think that with XNA it might be a bad idea (ideally you'll want SM3.0 hardware and a custom vertex shader at least; I'm not sure how that plays with the SpriteBatch class).

Offloading to HLSL/GPU without displaying?

As far as I know, certain mathematical functions like FFTs and Perlin noise can be much faster when done on the GPU as a pixel shader. My question is: if I wanted to exploit this to calculate results and stream them to bitmaps, could I do it without needing to actually display it in Silverlight or something?
More specifically, I was thinking of using this for large terrain generation involving lots of Perlin and other noise, and post-processing like high passes and deriving normals from heightmaps, etc.
The short answer is yes. The longer answer is that you can set (for example) a texture as the render target, which deposits your results there.
Unless you're really set on using a shader to do the calculation, you might want to consider using something that's actually designed for this kind of job, such as CUDA or OpenCL.
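As a rough illustration of the "texture as the render target" idea in XNA terms (other frameworks have direct equivalents; the target size, the noise effect, and the dummy texture here are placeholders):

    // Off-screen target the noise shader draws into; nothing ever reaches the screen.
    RenderTarget2D heightmap = new RenderTarget2D(GraphicsDevice, 1024, 1024);

    GraphicsDevice.SetRenderTarget(heightmap);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                      null, null, null, perlinEffect);   // perlinEffect: your noise pixel shader
    spriteBatch.Draw(dummyTexture, heightmap.Bounds, Color.White); // full-target quad
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);                // back to the backbuffer

    // Read the results back to the CPU if you need them as data.
    Color[] pixels = new Color[1024 * 1024];
    heightmap.GetData(pixels);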
Hmm, it's a good question.
Anything that can be displayed can be rendered into an instance of WriteableBitmap via its Render method, and you can access the output through its Pixels array property.
However (assuming GPU acceleration is turned on and the content is appropriately marked to make use of the GPU), whether such a render will actually use the GPU when going to a WriteableBitmap instead of the display, I don't know.

Collisions in a real world application

Here's my problem. I'm creating a game and I'm wondering how to do the collisions. I have several cases to analyze and to find the best solution for.
I'll say it beforehand: I'm not using any third-party physics library, I'm going to do it in-house (as this is an educational project, I don't have deadlines and I want to learn).
I have 2 types of mesh for which I have to handle collisions:
1) Static Meshes (which move around the screen, but do not have ANY animation)
2) Skinned/Boned Meshes (animated)
Currently I have this solution (quite hacky :|):
First of all I have a test against a bounding volume that encloses the full mesh (a capsule in my case); after that:
1) For the static meshes I divide them manually into blocks (in the modeler), and for each of these blocks I use a sphere/AABB test. (This works fine, but it's a little messy to slice every mesh :P) (I tried an automatic system to divide the mesh with planes, but it gives bad results :()
2) For the animated meshes, at the moment I'm dividing the mesh at runtime into x blocks (where x is the number of bones). Each block contains the vertices for which that bone is the major influence. (Sometimes it works, sometimes it gives really bad results. :|)
Please note that the division of the mesh is done at loading time, not every frame (otherwise it would run like a slideshow :D).
And here's the question :
What is the most sane approach to use for those 2 cases?
Any material for me to study these methods? (Some source code and explanations would be even better - the language is not important; once I understand the algorithm, the implementation is easy.)
Can you explain why that solution is better than the others?
I've heard a lot of talk about kd-trees, octrees, etc. While I understand their structure, I don't see their utility in a collision detection scenario.
Thanks a lot for the answers!!!
EDIT: I'm trying to find a k-DOP example with some explanation on the net. Still haven't found anything. :( Any clues?
I'm interested in HOW a k-DOP can be efficiently tested against other types of bounding volumes etc., but the documentation on the net seems highly lacking. :(
Prior to doing complex collision detection you should perform basic detection.
Using spheres or rectangles as bounding volumes is your best bet. Then, if this detects a collision, move on to your more complex methods.
What I'm getting at is that simple is often better, and quicker. Wrapping bounding volumes and splitting meshes up is costly, not to mention complex. You seem to be on the right track though.
As with all game programming, there are multiple ways of doing collision detection. My advice would be to start simple: take a cube and perfect your routines on that, then in theory you should be able to use any other model. For examples I'd check gamedev.net, as they have some nice articles. Much of my home-made collision detection is a combination of many methods, so I can't really recommend one definitive resource.
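For instance, the cheapest broad-phase check of all - a sphere-against-sphere overlap test - is only a few lines. This is a standalone sketch (not tied to any particular library) where a sphere is just a center plus a radius:

    using System.Numerics;

    public struct Sphere
    {
        public Vector3 Center;
        public float Radius;

        // Two spheres overlap when the squared distance between their centers
        // is at most the squared sum of their radii (squaring avoids the sqrt).
        public bool Intersects(Sphere other)
        {
            float radiusSum = Radius + other.Radius;
            return Vector3.DistanceSquared(Center, other.Center) <= radiusSum * radiusSum;
        }
    }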
The most common approach used in many current AAA games is "k-DOP" simplified collision for StaticMeshes, and a simplified physical body representation for SkeletalMeshes.
If you google for "kDOP collision" or "discrete orientation polytopes" you should find enough references. This is basically a bounding volume defined by several planes that are moved from outside towards the mesh until a triangle collision occurs. The "k" in k-DOP defines how many of these planes are used, and depending on your geometry and your "k" you can get really good approximations.
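To address the EDIT above about how k-DOPs are tested: because every k-DOP in a scene shares the same fixed set of plane directions, each DOP reduces to one min/max interval per direction, and the overlap test is just a per-axis interval comparison. Here is a rough sketch (the particular axis set, an 8-DOP with four directions, is only an example):

    using System.Numerics;

    public class KDop
    {
        // Fixed direction set shared by every k-DOP (k = 2 * number of directions).
        public static readonly Vector3[] Axes =
        {
            new Vector3(1, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(0, 0, 1),
            Vector3.Normalize(new Vector3(1, 1, 1)),
        };

        public float[] Min = new float[Axes.Length];
        public float[] Max = new float[Axes.Length];

        // Build the DOP by projecting every vertex onto every axis (done at load time).
        public static KDop Build(Vector3[] vertices)
        {
            var dop = new KDop();
            for (int i = 0; i < Axes.Length; i++)
            {
                dop.Min[i] = float.MaxValue;
                dop.Max[i] = float.MinValue;
            }
            foreach (var v in vertices)
                for (int i = 0; i < Axes.Length; i++)
                {
                    float d = Vector3.Dot(v, Axes[i]);
                    if (d < dop.Min[i]) dop.Min[i] = d;
                    if (d > dop.Max[i]) dop.Max[i] = d;
                }
            return dop;
        }

        // Quick rejection: if the intervals are disjoint on any shared axis, the two
        // volumes cannot intersect. If they overlap on every axis, they *may* intersect
        // and you fall through to a narrower (e.g. per-triangle) test.
        public bool Overlaps(KDop other)
        {
            for (int i = 0; i < Axes.Length; i++)
                if (Max[i] < other.Min[i] || other.Max[i] < Min[i])
                    return false;
            return true;
        }
    }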
For SkeletalMeshes the most common technique is to define simple geometry that is attached to specific bones. This geometry might be a box or a sphere. This collision model can then be used for quite accurate collision detection on animated meshes.
If you need per-triangle collision, the "Separating Axis Theorem" is the Google search term of your choice. This is useful for specific cases, but 75% of your collision detection needs should be covered by the methods mentioned above.
Keep in mind that you will most probably need a higher level of early collision rejection than a bounding volume. As soon as you have a lot of objects in the world, you will need some form of "spatial partitioning" to reject groups of objects from further testing as early as possible.
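To illustrate what spatial partitioning buys you (and the utility of the kd-trees/octrees you asked about): instead of testing every object against every other object, you only test objects that fall into the same region of space. The simplest version is a uniform grid; kd-trees and octrees serve the same purpose with adaptive subdivision. A minimal, self-contained sketch:

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Uniform grid keyed by integer cell coordinates. Only objects that share a cell
    // (or a neighbouring cell) need a pairwise collision test.
    public class UniformGrid<T>
    {
        private readonly float cellSize;
        private readonly Dictionary<(int, int, int), List<T>> cells =
            new Dictionary<(int, int, int), List<T>>();

        public UniformGrid(float cellSize) { this.cellSize = cellSize; }

        private (int, int, int) CellOf(Vector3 p) =>
            ((int)Math.Floor(p.X / cellSize),
             (int)Math.Floor(p.Y / cellSize),
             (int)Math.Floor(p.Z / cellSize));

        public void Insert(T item, Vector3 position)
        {
            var key = CellOf(position);
            if (!cells.TryGetValue(key, out var list))
                cells[key] = list = new List<T>();
            list.Add(item);
        }

        // Candidates that could possibly collide with something at 'position'.
        public IEnumerable<T> Query(Vector3 position)
        {
            var (cx, cy, cz) = CellOf(position);
            for (int x = cx - 1; x <= cx + 1; x++)
                for (int y = cy - 1; y <= cy + 1; y++)
                    for (int z = cz - 1; z <= cz + 1; z++)
                        if (cells.TryGetValue((x, y, z), out var list))
                            foreach (var item in list)
                                yield return item;
        }
    }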
The answer comes down to: how precise do you need to be?
Clearly, sphere bounding volumes are the most trivial. On the other side of the scale you have full triangle mesh-to-mesh collision detection, which has to happen each time an object moves.
Game physics engines rely on the art of approximation (I lurked in GameDev.net's math and physics forums years ago).
My opinion is that you will need some sort of bounding ellipsoid associated with each object. An object can be a general multi-mesh object, a mesh, or a submesh. This should provide a 'decent' amount of approximation.
Pick up Christer Ericson's book, Real-Time Collision Detection. He discusses these very issues in great detail.
When reading articles, remember that in a real-world game application you will be working under hard limits of memory and time - you get 16.6ms per frame and that's it! So be cautious of any article or paper that doesn't seriously discuss the memory and CPU footprint of its algorithm.
