The attached picture sums it up. On the right is what I want; on the left is what I get. The colors are wrong. (The left face image is a captured screenshot from the game running in Unity; the right face image is the correct art pasted on top of the screenshot for comparison's sake.)
The code I'm calling from OnGUI() to overlay the face image:
GUI.DrawTextureWithTexCoords(toRect, texture, fromRect);
The texture in question comes from a 1024x1024 PNG saved from Photoshop. This PNG has the right colors in it when I open it, but not when DrawTextureWithTexCoords renders it to screen. I've experimented with different Photoshop export settings and texture import settings, but to no avail. It would be nice if I missed something there, but I seem to have run out of knobs to twiddle.
For troubleshooting purposes, I dragged the texture into the scene, applying it as a material to a surface of a mesh. The texture displayed with correct, original colors on the mesh. (Note - this is not the thing I'm trying to do. I need the texture to appear in GUI, not in-scene.)
As far as I know, no code is processing the texture or doing anything else that would affect the rendering. I tried turning off my post-processing as a test, but it doesn't affect the GUI.
This is running Unity 2019.1.13f1 on a 2015 iMac. I've also included a shot of the Inspector on the texture import settings below, and the almost-original texture PNG (watermark added).
How can I get the face art to render with the correct colors?
I found a solution. Answering my own question for posterity's sake.
I replaced the original code:
GUI.DrawTextureWithTexCoords(toRect, texture, fromRect);
...with:
Graphics.DrawTexture(toRect, texture, fromRect, 0, 0, 0, 0);
This fixed the issue. The code above is called from OnGUI(), and behaves exactly like the original code but with corrected colors.
@Erik Overflow, in addition to other helpful comments, pointed out that OnGUI() is called many times per frame. So I'll consider refactoring the above into a call from OnPostRender() for the sake of performance.
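In the meantime, since OnGUI() also runs for layout and input events, here's a minimal sketch of guarding the draw so it only happens on the repaint pass (using the same toRect, texture, and fromRect as above):

    // A minimal sketch: only issue the draw during the Repaint event,
    // since OnGUI() runs several times per frame (layout, input, repaint).
    void OnGUI()
    {
        if (Event.current.type == EventType.Repaint)
        {
            // The four trailing zeros are the left/right/top/bottom
            // border parameters of Graphics.DrawTexture.
            Graphics.DrawTexture(toRect, texture, fromRect, 0, 0, 0, 0);
        }
    }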
Related
I have a weird line on the left of my grass texture. I've looked far and wide, and no solution works.
Picture:
See those weird light green lines popping up near the other grass? That's it. It's kinda hard to see on the grass right there, but the grass around it also has the lines. (The trees are black because my laptop is very slow and is still processing lighting.) I've tried:
Setting Wrap Mode to Clamp
Setting Filter Mode to Point
Making it 16-bit, which made the background a solid color
Turning off mipmapping and playing around with some of its settings
and probably others I forget, since I've been trying to fix this for so long. Also, I tried this out on a different terrain: when there are no other detail meshes there is no line, but when there are, the line appears. Also, that's the only grass texture; every other detail mesh is a 3D object rendered as "Vertex Lit" instead of grass, so I don't know if those meshes can bleed into my grass texture, like other posts have mentioned.
I've tried just about every solution I've found across many pages of Google, like the ones listed above involving the grass texture and the terrain. Nothing worked.
Other pictures of grass also had this bug.
Original picture of grass as a png:
Check the texture itself; the line seems to follow the texture of the grass and looks as though it is actually in the texture. If so, use a photo-editing tool like GIMP to remove it.
I am working on a small game where I only want to draw objects (meshes) when they are inside an invisible box. I have gotten the clipping to work so that the mesh is only rendered inside the box (using the solution mentioned here: https://answers.unity.com/questions/1875660/urp-render-only-whats-inside-a-cube.html).
The only annoyance now is that even when the mesh is clipped, the SSAO is still being rendered as you can see in the following image (in the red box):
I assume it is because the object is still contributing to the depth normals, but I am unable to find more information about this, or even whether this is the actual issue.
Do any of you have a suggestion for how to prevent this from happening?
I am using Unity 2021.2.8f and URP v12.1.3 btw
The post-processing effect SSAO is applied to all layers seen (not culled) by your main camera. Try putting the object on a different layer and culling that layer on your main camera.
You could also add an additional Forward Renderer (plus a new camera) to your project which does not use the SSAO effect and takes care of your object.
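A minimal sketch of the first suggestion, assuming a layer named "NoSSAO" has been added under Tags & Layers (the layer name and clippedObject are placeholders):

    // Move the clipped object to a layer the main camera ignores, so SSAO
    // (computed from that camera's depth normals) skips it.
    int noSsaoLayer = LayerMask.NameToLayer("NoSSAO"); // assumed layer name
    clippedObject.layer = noSsaoLayer;                 // clippedObject: your GameObject

    // Remove that layer from the main camera's culling mask.
    Camera.main.cullingMask &= ~(1 << noSsaoLayer);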
How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting Antialiasing values in QualitySettings and Filter Mode in ImportSettings in the Unity Editor but that doesn't change anything.
Ideally, I would like to keep the Filter Mode at Point (no filter) and anti-aliasing set to 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
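As an illustration, one way to enforce that integer ratio is to derive the orthographic camera size from the screen height, the sprite's pixels-per-unit, and a whole-number zoom factor. A sketch, where pixelsPerUnit and zoom are assumptions for your project:

    // orthographicSize is half the visible height in world units, so the
    // on-screen size of one texel is:
    //   screenPixelsPerTexel = Screen.height / (2 * orthographicSize * pixelsPerUnit)
    // Picking an integer zoom and solving for orthographicSize keeps the
    // texel-to-pixel ratio exact.
    float pixelsPerUnit = 16f; // texels per world unit on the sprite (assumption)
    int zoom = 3;              // screen pixels per texel; must be a whole number

    Camera.main.orthographicSize = Screen.height / (2f * pixelsPerUnit * zoom);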
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels-to-units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprite/Default shader and attaching it to the SpriteRenderer.
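If you'd rather set that last item up from code than in the material inspector, something like this should do it (a sketch, assuming the built-in Sprites/Default shader):

    // Create a material from the built-in Sprites/Default shader with its
    // pixel-snap option enabled, then assign it to the SpriteRenderer.
    var mat = new Material(Shader.Find("Sprites/Default"));
    mat.EnableKeyword("PIXELSNAP_ON"); // the keyword behind the "Pixel snap" checkbox
    GetComponent<SpriteRenderer>().material = mat;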
Also, I'd just like to point out that unless you are applying physics, you should never use FixedUpdate. Also, if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached, even if you're never going to use physics, to tell the engine that the Collider is going to move.
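A minimal sketch of that setup (the speed value is a placeholder):

    using UnityEngine;

    // Move in Update (not FixedUpdate), with a kinematic Rigidbody2D on the
    // GameObject so the engine knows the Collider will be moving.
    public class SpriteMover : MonoBehaviour
    {
        public float speed = 1f; // world units per second (placeholder)

        void Update()
        {
            // Frame-rate independent movement without physics.
            transform.Translate(Vector3.right * speed * Time.deltaTime);
        }
    }
    // On the GameObject: SpriteRenderer + Collider2D + Rigidbody2D with
    // "Body Type" set to Kinematic.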
Same problem here. I noticed that the camera settings and scale are also rather important to fix the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality
Under Quality, set the default quality level to High for all platforms.
Set Anisotropic Textures to "Disabled".
Done; the issue was resolved for me.
Image Reference:
Alright, I've seen there are a number of threads about scaling a Texture2D, both on here and on the Unity forums. That line of searching led me to being able to scale Texture2Ds using this class: http://wiki.unity3d.com/index.php/TextureScale
Now, what I'm currently working on is applying decals directly to the targeted texture. That's working fine. BUT, depending on the texture size and how much it's being scaled over its geometry, the decal appears at different sizes (without any scaling, see attached image). That's what led me to even look into scaling a Texture2D.
The mystery is, I'm not sure what kind of math to throw at the scale function to make the decal appear the same size visually no matter which mesh and texture it's on.
Any help is much appreciated : ]
If the texture is twice as large, the decal will be twice as large. So, scale the decal by half, and it will be normal size.
In general, every place you multiply the scale for the texture, divide the scale for the decal by the same amount.
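A sketch of that rule in code (the names are hypothetical):

    // Wherever the texture's scale is multiplied by some factor, divide the
    // decal's scale by the same factor so its apparent size stays constant.
    Vector2 GetDecalScale(Vector2 baseDecalScale, float textureScaleFactor)
    {
        return baseDecalScale / textureScaleFactor;
    }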
I've recently started hacking on my Kinect and I want to remove the depth shadow. The shadow is caused by the IR emitter being positioned slightly to the side of the camera, so any close object casts a big shadow while distant objects cast little or none.
The shadow length is related to the distance between the closest and the farthest spot on each side of the shadow.
My goal is to be able to map the color image correctly onto the depth. This doesn't work without processing the shadow as this picture shows:
Does the depth shadow always come out black?
If so, you could use a simple method like a temporal median to calculate the background of the image (more info here: http://www.roborealm.com/help/Temporal_Median.php), and then whenever a pixel is black, set it to the background value at that pixel location.
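A rough sketch of that approach, assuming 8-bit grayscale frames where shadow pixels read as 0 (the history length and the black test are assumptions):

    using System.Collections.Generic;

    // Per-pixel temporal median over a short frame history, used as the
    // background estimate for filling shadow (black) pixels.
    static byte MedianAt(List<byte[]> history, int index)
    {
        var samples = new List<byte>(history.Count);
        foreach (byte[] frame in history)
            samples.Add(frame[index]);
        samples.Sort();
        return samples[samples.Count / 2];
    }

    static void FillShadows(byte[] current, List<byte[]> history)
    {
        for (int i = 0; i < current.Length; i++)
            if (current[i] == 0)                   // shadow marker: pure black
                current[i] = MedianAt(history, i); // replace with background
    }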
I did some preliminary work on this problem a few weeks ago. My code works directly on a WriteableBitmap rather than the depth data, but if you're only doing image processing, it should work. The algorithm isn't perfect and would benefit from some more tweaking. If you update the code at all, let me know; I'd be very interested to see what you're doing!
The source code is posted on my blog:
http://richardpianka.com/2011/02/trackingni-depth-correction/
I don't know how it is with C#, but OpenNI C++ has a function called xnSetViewPoint(). The only problem is that you lose the top 20 or so rows of image data due to the transformation.
The reason is that two different sensors are used, placed close to each other but not at exactly the same position.
Kinect Method - MapDepthFrameToColorFrame
Get the [x, y] positions in the depth frame, and use that method to fill in the corresponding color-frame coordinates.
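A sketch of how that call looks with the Kinect SDK 1.x API (later SDK versions moved this onto CoordinateMapper; sensor and depthPixels come from your own frame-capture code):

    // Map every depth pixel to its matching color-frame coordinate.
    ColorImagePoint[] colorPoints = new ColorImagePoint[depthPixels.Length];

    sensor.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution640x480Fps30,    // format of depthPixels
        depthPixels,                                // short[] of raw depth data
        ColorImageFormat.RgbResolution640x480Fps30,
        colorPoints);                               // out: one color [x, y] per depth pixel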
I'm sorry to say, but that shadow is caused by your body blocking the infrared dots from hitting that spot of the room, so it creates a black spot... Nothing you can do but change the base background to a color other than black so it won't be a noticeable shadow.
The color camera and the Kinect depth camera don't have the same dimensions, and the infrared dots don't originate from the same camera; they come from an IR projector a few centimeters beside it (that displacement is what's used to calculate depth).
However, the solution seems easy here: your shadow data is on the left side, so you need to extend the last known color data from before it went black. And to fit it better, translate the color camera data to the right.
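A sketch of that fill (all names are mine): walk each row and carry the last non-black color into shadow pixels, assuming 32-bit BGRA data where shadows read as pure black.

    static void FillShadowRows(byte[] pixels, int width, int height)
    {
        for (int y = 0; y < height; y++)
        {
            int last = -1; // offset of the last non-shadow pixel in this row
            for (int x = 0; x < width; x++)
            {
                int i = (y * width + x) * 4;
                bool shadow = pixels[i] == 0 && pixels[i + 1] == 0 && pixels[i + 2] == 0;
                if (!shadow)
                    last = i;           // remember the most recent real color
                else if (last >= 0)
                {
                    pixels[i]     = pixels[last];     // B
                    pixels[i + 1] = pixels[last + 1]; // G
                    pixels[i + 2] = pixels[last + 2]; // R
                }
            }
        }
    }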