I've recently started hacking on my Kinect and I want to remove the depth shadow. The shadow is caused by the IR emitter being positioned slightly to the side of the camera, so any close object gets a big shadow while distant objects get little or no shadow.
The shadow length is related to the distance between the closest and the farthest spot on each side of the shadow.
My goal is to be able to map the color image correctly onto the depth. This doesn't work without processing the shadow as this picture shows:
Does the depth shadow always come out black?
If so you could use a simple method like a temporal median to calculate the background of the image (more info here: http://www.roborealm.com/help/Temporal_Median.php) and then whenever a pixel is black, set it to the background value at that pixel location.
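If that's the case, a minimal sketch of the idea might look like this (assuming depth frames arrive as ushort[] arrays with shadow pixels reading 0; ShadowFiller and its members are hypothetical names, not a Kinect SDK API):

```csharp
using System;
using System.Collections.Generic;

class ShadowFiller
{
    private readonly Queue<ushort[]> history = new Queue<ushort[]>();
    private const int HistorySize = 9;  // frames in the temporal window

    public void Push(ushort[] frame)
    {
        history.Enqueue((ushort[])frame.Clone());
        if (history.Count > HistorySize)
            history.Dequeue();
    }

    // Replace every black (0) pixel with the temporal median at that location.
    public void FillShadows(ushort[] frame)
    {
        ushort[][] frames = history.ToArray();
        if (frames.Length == 0) return;
        var samples = new ushort[frames.Length];
        for (int i = 0; i < frame.Length; i++)
        {
            if (frame[i] != 0) continue;            // only touch shadow pixels
            for (int f = 0; f < frames.Length; f++)
                samples[f] = frames[f][i];
            Array.Sort(samples);
            frame[i] = samples[samples.Length / 2]; // median of the history
        }
    }
}
```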
I did some preliminary work on this problem a few weeks ago. My code works directly on a WriteableBitmap rather than the depth data, but if you're only doing image processing, it should work. The algorithm isn't perfect and would benefit from some more tweaking. If you update the code at all, let me know; I'd be very interested to see what you're doing!
The source code is posted on my blog:
http://richardpianka.com/2011/02/trackingni-depth-correction/
I don't know how it is with C#, but OpenNI's C++ API has a function called xnSetViewPoint(). The only problem is that you lose the top 20 or so rows of image data due to the transformation.
The reason is that two different sensors are used, placed close to each other but not at exactly the same position.
Kinect Method - MapDepthFrameToColorFrame
Get the [x,y] positions in the depth frame, and use that method to fill in an array of matching color-frame coordinates.
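A hedged sketch of that call, assuming the Kinect for Windows SDK v1.x at 640x480 (in later v1 releases the equivalent mapping lives on sensor.CoordinateMapper):

```csharp
using Microsoft.Kinect;

static ColorImagePoint[] MapDepthToColor(KinectSensor sensor, short[] depthPixels)
{
    // One entry per depth pixel; after the call, colorPoints[y * 640 + x]
    // holds the color-frame coordinates for the depth pixel at (x, y).
    var colorPoints = new ColorImagePoint[640 * 480];
    sensor.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution640x480Fps30, depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
    return colorPoints;
}
```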
I'm sorry to say, but that shadow is caused by your body blocking the infrared dots from hitting that spot of the room, so it creates a black spot. There's nothing you can do but change the base background to a color other than black so the shadow won't be as noticeable.
The color camera and the Kinect depth camera don't have the same dimensions, and the infrared dots don't originate from the depth camera itself: they come from an IR projector a few centimeters to the side (that displacement is what's used to calculate depth).
However, the solution seems easy here: your shadow data is on the left side,
so you need to extend the last known data from before each pixel went black.
And to make it fit better, translate the color camera data to the right.
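A minimal sketch of that fill idea, assuming the depth frame is a ushort[] of size width × height with shadow pixels reading 0:

```csharp
// Minimal sketch: scan each row left to right and carry the last valid
// depth value forward into black (0) shadow pixels.
static void ExtendLastKnown(ushort[] depth, int width, int height)
{
    for (int y = 0; y < height; y++)
    {
        ushort last = 0;
        for (int x = 0; x < width; x++)
        {
            int i = y * width + x;
            if (depth[i] != 0)
                last = depth[i];   // remember the last valid sample
            else if (last != 0)
                depth[i] = last;   // fill the shadow with it
        }
    }
}
```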
How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting the anti-aliasing values in QualitySettings and the Filter Mode in the Import Settings in the Unity Editor, but that doesn't change anything.
Ideally, I would like to keep the Filter Mode at Point (no filter) with anti-aliasing set to 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
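You can also see the doubling numerically. Here is a small hypothetical sketch of the sampling arithmetic for a 16-texel sprite stretched across 17 pixels with point filtering:

```csharp
// Sketch of the sampling arithmetic: with point filtering, output pixel px
// samples texel floor((px + 0.5) * 16 / 17). Somewhere in the run, one
// texel index appears twice -- the doubled column from the animation.
using System;

class SamplingDemo
{
    static void Main()
    {
        for (int px = 0; px < 17; px++)
        {
            int texel = (int)Math.Floor((px + 0.5) * 16.0 / 17.0);
            Console.WriteLine($"pixel {px} -> texel {texel}");
        }
        // Output shows pixels 8 and 9 both mapping to texel 8.
    }
}
```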
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
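As a hedged sketch of one way to enforce that in Unity (PixelPerfectCamera, pixelsPerUnit, and zoom are illustrative names; pixelsPerUnit must match your sprites' import setting):

```csharp
using UnityEngine;

public class PixelPerfectCamera : MonoBehaviour
{
    public float pixelsPerUnit = 16f; // must match the sprite import setting
    public int zoom = 2;              // integer texel-to-pixel multiple

    void Start()
    {
        // With an orthographic camera, orthographicSize is half the vertical
        // view height in world units. Choosing it this way makes each texel
        // cover exactly `zoom` screen pixels.
        GetComponent<Camera>().orthographicSize =
            Screen.height / (2f * pixelsPerUnit * zoom);
    }
}
```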
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels-to-units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprite/Default shader and attaching it to the SpriteRenderer.
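For that last step, a minimal sketch of building such a material in code, assuming Unity's built-in Sprites/Default shader, which exposes pixel snapping via the PIXELSNAP_ON keyword (the "Pixel snap" checkbox in the inspector):

```csharp
using UnityEngine;

public class EnablePixelSnap : MonoBehaviour
{
    void Start()
    {
        var mat = new Material(Shader.Find("Sprites/Default"));
        mat.EnableKeyword("PIXELSNAP_ON"); // snap vertices to whole pixels
        GetComponent<SpriteRenderer>().material = mat;
    }
}
```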
I'd also like to point out that unless you are applying physics, you should never use FixedUpdate. And if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached, even if you never use physics, to tell the engine that the Collider is going to move.
Same problem here. I noticed that the camera settings and scale are also rather important in fixing the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality
Under Quality, set the default quality level to High for all platforms.
Set Anisotropic Textures to "Disabled"
Done; the issue is resolved for me.
I am writing a little MonoGame game and I am wondering how I can draw a curve generated by some formula, then fill everything below that curve with some color, and then generate things at the top and drop them down until they collide with the curve. Check the image for a better understanding of my issue.
What type of object should I use for this kind of thing?
Alternative: if the curve and filled area can't be generated automatically, or it is too hard to implement, I can make an image where the "unfilled" area is simply invisible. But I still run into the same issue: how do I set the bounds so that they match the visible parts of the image and ignore the invisible parts?
Or, even simpler: how do I get the top Y bound value for a given X value?
I found the solution here: Per Pixel Collision - Could do with some general tips. Basically, it does color-detection collision.
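For the formula-driven route from the question, here is a hedged sketch (CurveTerrain, BuildHeightMap, and BakeTexture are hypothetical names): precompute the top Y for every X, which answers the "top Y bound for a given X" question directly, then bake a filled Texture2D.

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class CurveTerrain
{
    // topY[x] = first filled row at column x; use it for collision tests.
    public static int[] BuildHeightMap(int width, int height, Func<float, float> f)
    {
        var topY = new int[width];
        for (int x = 0; x < width; x++)
            topY[x] = Math.Max(0, Math.Min(height - 1, (int)f(x)));
        return topY;
    }

    // Bake a texture with everything below the curve filled with one color.
    public static Texture2D BakeTexture(GraphicsDevice device, int width,
                                        int height, int[] topY, Color fill)
    {
        var pixels = new Color[width * height];
        for (int x = 0; x < width; x++)
            for (int y = topY[x]; y < height; y++)  // everything below the curve
                pixels[y * width + x] = fill;
        var texture = new Texture2D(device, width, height);
        texture.SetData(pixels);
        return texture;
    }
}
```

A falling object at column x has then landed as soon as its bottom edge reaches topY[x], with no per-pixel test needed.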
I am working with C#, MonoGame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cubes with a different shader than the faces. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within a cube object. Here is a small painting to make clear what I want to do (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader, and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with that approach.
I tried post-effects like the edge-finding of comic (toon) shaders, but with that approach I am not able to render only specific edges. Besides that, if two cubes are next to each other, the shader does not recognize the edges between them.
I am not searching for a ready-to-use solution from you but I would be glad to get some tips/approaches/tutorials/similar projects/etc on how to achieve my goal. Are there some shader experts out there? I am at my wit's end.
(If, however, you would like to post a ready-to-use solution, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges) and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, then draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using the depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
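To make the filter test concrete, here is a CPU-side C# sketch of the logic (the real version would run in a pixel shader over the MRT output; IsOutlinePixel and the threshold are illustrative):

```csharp
using Microsoft.Xna.Framework;

static class OutlineFilter
{
    // gbuffer[y, x] holds (normal.xyz, materialId) as encoded at cube-draw
    // time; (x, y) is assumed to be at least one pixel away from the border.
    public static bool IsOutlinePixel(Vector4[,] gbuffer, int x, int y,
                                      float normalThreshold = 0.5f)
    {
        Vector4 center = gbuffer[y, x];
        var centerNormal = new Vector3(center.X, center.Y, center.Z);

        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                Vector4 n = gbuffer[y + dy, x + dx];
                // Large normal discontinuity -> silhouette or crease edge.
                if (Vector3.Dot(centerNormal, new Vector3(n.X, n.Y, n.Z))
                        < normalThreshold)
                    return true;
                // Material ID transition -> edge between different cube groups.
                if (n.W != center.W)
                    return true;
            }
        return false;
    }
}
```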
I'd like to achieve an effect in XNA where I move a clipping plane through an object, and the object gradually disappears WHILE the clipped area is filled with a custom color or texture.
This is what I was able to achieve via HLSL:
And this is what I would actually need:
I think the teapot isn't a really great example because the models I would use would always be fully closed at all times.
Are there any good solutions to this?
I think that if you have already calculated whether a pixel is front or back, you don't need to clip it: if the pixel is front, you should just set the pixel normal to the plane normal value and calculate light as usual...
We want a C# solution to correct a scanned image, because it is rotated. To solve this problem we must first detect the rotation angle and then rotate the image. That was our first thought. But then we thought image warping would be more accurate, as it would make the scanned image match our template. Then we can process it, since we know all the coordinates of our template. I searched for a free SDK or a free solution in C#. Any help here would be great, as this is the last task in our work. Thanks, everyone.
We used the PrimeOCR product to do this. It's not free, but we couldn't find a free program that was comparable.
So, the hard part is to detect the angle of the page.
If you have full control over the template, the simplest way to do this is probably to come up with an easily-detectable symbol (e.g. a solid black circle) and stick 3 of them on the template. Then detect them (in the case of a solid black circle, just look for big blocks of uniformly dark pixels).
You'll then have 3 sets of coordinates. If you have a top circle, a left circle, and a right circle, with all 3 circles at different distances from one another, detecting which circle is the top circle should be pretty easy.
Then just call a rotation function. This part is easy and has been done before (e.g. http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-rotate ).
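For the angle itself, a hedged sketch assuming you've located the centers of two markers that should be horizontally level on an upright page (SkewDetector is a hypothetical helper):

```csharp
using System;
using System.Drawing;

static class SkewDetector
{
    // Returns the page's skew in degrees; rotate the image by the negative
    // of this value to level it.
    public static float DetectRotationAngleDegrees(PointF left, PointF right)
    {
        double radians = Math.Atan2(right.Y - left.Y, right.X - left.X);
        return (float)(radians * 180.0 / Math.PI);
    }
}
```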
Edit:
I suggested a circle because it's easier to find the center, but a rectangle should work, too.
To be more explicit about how to actually locate the rectangles/circles, take the average brightness value of every a × a group of pixels. If that value is below some threshold b (the markers are solid black, so their blocks will be dark), then that a × a group of pixels is part of a marker. a and b are variables you'll want to come up with yourself.
Use flood-fill (or, more precisely, Connected Component Labeling) to group the resulting pixels together. The end result should give you your rectangles.
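A minimal sketch of that detection pass (FindMarkerBlobs is a hypothetical helper; a and b are the tuning variables from above): average brightness over a × a blocks, threshold, then flood-fill to group marked blocks into connected components.

```csharp
using System.Collections.Generic;
using System.Drawing;

static class MarkerFinder
{
    // Returns one list of block coordinates per connected dark region.
    public static List<List<Point>> FindMarkerBlobs(Bitmap image, int a, float b)
    {
        int cols = image.Width / a, rows = image.Height / a;
        var hit = new bool[cols, rows];

        // Mark every a x a block whose average brightness is below b.
        for (int cx = 0; cx < cols; cx++)
            for (int cy = 0; cy < rows; cy++)
            {
                float sum = 0;
                for (int x = 0; x < a; x++)
                    for (int y = 0; y < a; y++)
                        sum += image.GetPixel(cx * a + x, cy * a + y).GetBrightness();
                hit[cx, cy] = sum / (a * a) < b;
            }

        // Flood-fill (4-connected) to label groups of marked blocks.
        var blobs = new List<List<Point>>();
        var seen = new bool[cols, rows];
        for (int cx = 0; cx < cols; cx++)
            for (int cy = 0; cy < rows; cy++)
            {
                if (!hit[cx, cy] || seen[cx, cy]) continue;
                var blob = new List<Point>();
                var stack = new Stack<Point>();
                stack.Push(new Point(cx, cy));
                seen[cx, cy] = true;
                while (stack.Count > 0)
                {
                    Point p = stack.Pop();
                    blob.Add(p);
                    foreach (Point q in new[]
                    {
                        new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                        new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1)
                    })
                    {
                        if (q.X >= 0 && q.X < cols && q.Y >= 0 && q.Y < rows
                            && hit[q.X, q.Y] && !seen[q.X, q.Y])
                        {
                            seen[q.X, q.Y] = true;
                            stack.Push(q);
                        }
                    }
                }
                blobs.Add(blob);
            }
        return blobs;
    }
}
```

The centroid of each blob (scaled back up by a) gives a marker center you can feed into the angle calculation above.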