I am working on a small game where I only want to draw an object (mesh) when it is inside an invisible box. I have gotten the clipping to work so that the mesh is only rendered inside the box (using the solution mentioned here: https://answers.unity.com/questions/1875660/urp-render-only-whats-inside-a-cube.html).
The only annoyance now is that even when the mesh is clipped, the SSAO is still being rendered as you can see in the following image (in the red box):
I assume this is because the object is still contributing to the depth normals, but I am unable to find more information about this, or even confirm that this is the actual issue.
Do any of you have a suggestion for how to prevent this from happening?
I am using Unity 2021.2.8f and URP v12.1.3 btw
The SSAO post-processing effect is applied to all layers seen (not culled) by your main camera. Try putting the object on a different layer and excluding that layer from the main camera's culling mask.
You could also add an additional forward renderer (plus a new camera) to your project which does not use the SSAO effect and takes care of your object.
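For the layer route, a minimal sketch; the layer name "ClippedObjects" is just a placeholder, so create it under Tags & Layers and assign the mesh to it first:

Camera mainCam = Camera.main;
// Clear the bit for the placeholder layer so the main camera no longer renders it.
mainCam.cullingMask &= ~(1 << LayerMask.NameToLayer("ClippedObjects"));

For the second route, URP 12 should let you stack an Overlay camera on the main one, give it a Renderer asset that does not include the SSAO Renderer Feature, and set its culling mask to only that layer.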
Related
I have a first person controller and currently if I walk into a slightly transparent object, it disappears until I walk out of it.
I have a 'water' cube object that is very light blue and transparent. When I move my camera into third-person view and then enter the water, my screen turns light blue, which is good. In first-person mode (which is what I'm trying to figure out), the cube disappears and my screen color remains the same.
I know it has something to do with my camera, but even after going through all the features in the Unity docs and changing a few settings in the inspector, it all remains the same.
As mentioned in the comments, your cube is not visible because the back side of its faces is being culled, meaning they are not rendered. This is not a camera setting but a property of the shader your water cube's material is using.
You could create a duplicate of the shader you are using and add Cull Off to change this.
Read about it here.
If you want to go that route you need the source files for your shader. Assuming it is one of the Unity built-in shaders, you can get them for your Unity version from the Unity Download Archive.
Clipping the camera through objects is rarely desirable, and you should look at a different way of achieving your effect, such as a post-processing volume, because the "water" would still be a cube and would be drawn behind everything in front of its faces.
For example, in an FPS-type game this would result in the gun not changing color.
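A minimal sketch of that alternative, faking the volume with a simple full-screen tint. It assumes the water cube has a collider marked Is Trigger, and that the player object (tagged "Player" here, a placeholder) has a collider plus a Rigidbody or CharacterController so trigger events fire:

using UnityEngine;

// Attach to the water cube (collider set to Is Trigger).
public class UnderwaterTint : MonoBehaviour
{
    public Color underwaterColor = new Color(0.6f, 0.8f, 1f, 0.35f);
    bool isUnderwater;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) isUnderwater = true;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) isUnderwater = false;
    }

    void OnGUI()
    {
        if (!isUnderwater) return;
        // Cheap stand-in for a post-processing volume: tint the whole screen.
        GUI.color = underwaterColor;
        GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), Texture2D.whiteTexture);
    }
}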
In OpenGL I have drawn a transparent cube in an orthographic projection; the environment has a front light.
The result is shown in the figure.
What I don't understand is: why is there a missing edge?
gl.Enable(OpenGL.GL_BLEND); // turn on blending
gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending
I use a DrawElements() function that uses indices.
Are there any suggestions?
This is caused by depth testing. OpenGL renders one triangle (or square) at a time. When depth testing is turned on, it skips rendering a pixel if it's already rendered something in front of that pixel. This is good for solid objects, but it doesn't work for transparent ones, because the back parts have to be rendered first or else they don't get rendered at all.
There are many ways to do transparency, but none of them are particularly nice. Unfortunately, whichever way you slice it, transparent objects are just not as easy to render as opaque ones.
So here are some ways to render transparent things:
Sort the faces and render back-to-front.
Sort the faces and render front-to-back, so the back is invisible.
Use face culling and render twice: cull front faces and render, then cull back faces and render (sketched in code after this list). This gives the same effect as back-to-front by getting OpenGL to do it for you. It only works for convex objects (cubes are convex), and if you have more than one transparent object you still have to sort the objects.
Use face culling to cull back faces. This gives the same effect as rendering front-to-back, by getting OpenGL to do it for you. Same caveats as the previous one.
Use a different blending mode where the rendering order doesn't matter, such as multiplicative or additive, and turn depth testing off. Multiplicative blending removes light without adding it; it looks like cellophane instead of stained glass, so you'd need a white background. Additive blending looks like one of those sci-fi spaceship control screens: it makes its own light, and you can also see through it.
Depth peeling and linked list buffers are two techniques which do separate sorting for each pixel, but they require more intense processing and very complicated shaders.
Raytracing (enough said)
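A sketch of the render-twice option, in the same SharpGL-style C# as your snippet; DrawCube() is a placeholder for your indexed DrawElements() call:

gl.Enable(OpenGL.GL_BLEND);
gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA);
gl.Enable(OpenGL.GL_CULL_FACE);

// Pass 1: cull front faces, so only the back of the cube is drawn.
gl.CullFace(OpenGL.GL_FRONT);
DrawCube(); // placeholder for your DrawElements() call

// Pass 2: cull back faces, drawing the front of the cube over the back.
gl.CullFace(OpenGL.GL_BACK);
DrawCube();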
If all the faces are the same colour, you can just turn depth testing off and it will look okay. Since the faces are all the same you can't tell they are rendered in the wrong order. This works for your red cube but it won't work if you add different colours or a texture.
To properly render a cube with a translucent interior, not just a translucent surface, you need a volumetric translucency effect which is also more complicated and out of scope here. You would render the back, the front, and apply a different amount of translucency depending on the distance between them.
I resolved it, thanks to user253751, by adding these instructions:
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_ALWAYS); // every fragment passes the depth test, so no faces are skipped
Does anyone know of, or can you point me to, some instructions or a GitHub repository showing how to create a script so that, in GoogleVR (Cardboard), gazing over an object makes a tooltip appear?
If anyone is familiar with it: in the Cardboard demos, under Arctic Journey > Learn, when you click on the fox a tooltip appears to showcase that item along with a brief description of it. I want something similar (maybe even the same thing), except that simply gazing over the object will automatically show it. Is this possible?
I want to do this for multiple objects in my project, so I want it built in a way that lets me easily swap out the text and so on.
Have a script with a reference to a World Space Canvas (WSC). The WSC will be your tooltip: it gets activated when you gaze over the object and deactivated when you don't.
You can set the images and texts of the WSC through the inspector, or through code if you keep references to them.
The script should also keep the canvas rotated to face the player.
You can use the SetActive(bool) method to show or hide the WSC.
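A minimal sketch, assuming your GoogleVR setup routes gaze through Unity's EventSystem (the Cardboard reticle pointer does this when the scene has an EventSystem with GvrPointerInputModule and the camera has a GvrPointerPhysicsRaycaster) and that the gazed object has a collider. tooltipCanvas is the WSC you assign in the inspector:

using UnityEngine;
using UnityEngine.EventSystems;

public class GazeTooltip : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    public Canvas tooltipCanvas; // the World Space Canvas for this object
    public Transform player;     // usually the main camera

    void Start()
    {
        tooltipCanvas.gameObject.SetActive(false);
    }

    public void OnPointerEnter(PointerEventData eventData)
    {
        tooltipCanvas.gameObject.SetActive(true);
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        tooltipCanvas.gameObject.SetActive(false);
    }

    void LateUpdate()
    {
        if (!tooltipCanvas.gameObject.activeSelf) return;
        // UI reads correctly when the canvas's forward axis points away from the viewer.
        tooltipCanvas.transform.rotation =
            Quaternion.LookRotation(tooltipCanvas.transform.position - player.position);
    }
}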
The UI system makes it easy to create UI that is positioned in the world among other 2D or 3D objects in the Scene.
Start by creating a UI element (such as an Image) if you don’t already have one in your scene by using GameObject > UI > Image. This will also create a Canvas for you.
Set the Canvas to World Space
Select your Canvas and change the Render Mode to World Space.
Now your Canvas is already positioned in the World and can be seen by all cameras if they are pointed at it, but it is probably huge compared to other objects in your Scene. We’ll get back to that.
https://docs.unity3d.com/Manual/HOWTO-UIWorldSpace.html
How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting Antialiasing values in QualitySettings and Filter Mode in ImportSettings in the Unity Editor but that doesn't change anything.
Ideally, I would like to keep the Filter Mode at Point (no filter) and anti-aliasing turned on at 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1, or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
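As a sketch of how you might enforce that in Unity (a hypothetical helper, assuming an orthographic camera and sprites imported with the matching Pixels Per Unit):

using UnityEngine;

public class PixelPerfectOrtho : MonoBehaviour
{
    public float pixelsPerUnit = 16f; // must match the sprite import setting
    public int zoom = 3;              // texel-to-pixel ratio, e.g. 3:1

    void Start()
    {
        // Orthographic size is half the visible height in world units, so this
        // makes one texel cover exactly 'zoom' screen pixels vertically.
        GetComponent<Camera>().orthographicSize =
            Screen.height / (2f * pixelsPerUnit * zoom);
    }
}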
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels to units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprites/Default shader and attaching it to the SpriteRenderer (a runtime sketch follows below).
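For that last step, a small sketch; as far as I know the "Pixel snap" checkbox on Sprites/Default maps to the PIXELSNAP_ON shader keyword, though creating the material once in the editor works just as well:

using UnityEngine;

public class PixelSnapSprite : MonoBehaviour
{
    void Awake()
    {
        // Build a material from Sprites/Default with pixel snapping enabled.
        var mat = new Material(Shader.Find("Sprites/Default"));
        mat.EnableKeyword("PIXELSNAP_ON");
        GetComponent<SpriteRenderer>().material = mat;
    }
}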
Also, I'd just like to point out that unless you are applying physics, you should never use FixedUpdate. And if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached, even if you're never going to use physics, to tell the engine that the Collider is going to move.
Same problem here. I noticed that the camera settings and scale are also rather important for fixing the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality.
Under Quality, set the default quality level to High for all.
Set Anisotropic Textures to "Disabled".
Done, and the issue is resolved for me.
I am working with C#, MonoGame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of each cube with a different shader than the faces. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within a cube object. Here is a small drawing to make clear what I want (sorry for my bad drawing skills, but I think you will get it).
I know how to render a cube with a specific shader, and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with that approach.
I tried post-effects like the edge detection of comic/toon shaders, but with that approach I am not able to render only specific edges. Besides that, if two cubes are next to each other, the shader does not recognize the edges between them.
I am not searching for a ready-to-use solution; I would be glad to get some tips, approaches, tutorials, similar projects, etc. on how to achieve my goal. Are there any shader experts out there? I am at my wit's end.
(If you would like to post a ready-to-use solution anyway, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges) and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using the depth as the secondary characteristic for outlining, you would use the material ID (or object ID, if you want EVERY cube to have an outline).
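A rough MonoGame sketch of that MRT setup; DrawCubes() and DrawFullScreenQuad() are placeholders for your own drawing code, and the cube shader is assumed to output (normal.xyz, materialId) to the second target:

// Inside your Game class: create the two targets once, e.g. in LoadContent.
int w = GraphicsDevice.Viewport.Width;
int h = GraphicsDevice.Viewport.Height;
RenderTarget2D colorTarget = new RenderTarget2D(
    GraphicsDevice, w, h, false, SurfaceFormat.Color, DepthFormat.Depth24);
RenderTarget2D normalMaterialTarget = new RenderTarget2D(
    GraphicsDevice, w, h, false, SurfaceFormat.Color, DepthFormat.Depth24);

// In Draw: render the scene into both targets at once (MRT).
GraphicsDevice.SetRenderTargets(colorTarget, normalMaterialTarget);
GraphicsDevice.Clear(Color.Black);
DrawCubes(); // placeholder: draws the cubes with the MRT shader

// Back to the backbuffer: run the 3x3 edge filter over normalMaterialTarget
// and composite black outlines over colorTarget.
GraphicsDevice.SetRenderTarget(null);
DrawFullScreenQuad(colorTarget, normalMaterialTarget); // placeholder post-process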