Fill clipped area with color - C#

I'd like to achieve an effect in XNA where I move a clipping plane through an object so that the object gradually disappears, while the clipped area is filled with a custom color or texture.
This is what I was able to achieve via HLSL:
And this is what I would actually need:
I think the teapot isn't really a great example, because the models I would use would always be fully closed.
Are there any good solutions to this?

I think that once you have worked out whether a pixel belongs to a front or a back face, you don't need to clip the back-facing ones: just set that pixel's normal to the plane's normal and calculate lighting as usual...
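If it helps, the application side of that idea is just a couple of effect parameters; the facing test and the normal swap happen in the pixel shader. A sketch, where the effect instance and the parameter names (ClipPlane, CutColor) are hypothetical:

    // Hypothetical parameter setup for a cut-plane effect in XNA. The pixel
    // shader (not shown) clips pixels in front of the plane and, for
    // back-facing pixels seen through the cut, swaps in the plane normal
    // (and optionally CutColor) before lighting as usual.
    Vector3 planeNormal = Vector3.Up;   // normal of the moving clip plane
    float planeD = -1.5f;               // plane offset, n . p + d = 0

    clipEffect.Parameters["ClipPlane"].SetValue(new Vector4(planeNormal, planeD));
    clipEffect.Parameters["CutColor"].SetValue(Color.Red.ToVector4());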

How to get object space in a shader from a UI Image (DisableBatching does not seem to work)

I have a very simple scene with an image and a canvas (its parent).
The image has a shader on it; all it does is multiply vertex.x by 2 (while in object space) before transforming it into clip space.
The result is the following:
It seems like the image used the canvas's object space instead of its own for the multiplication.
The whole shader looks like this:
I tried to use the tag "DisableBatching" = "True" to preserve the object space of the image in the shader, but with no success. I even tried different Unity versions. (Yes, I'm getting desperate here :D)
Thanks for any ideas in advance.
The UI system's vertex data is already provided relative to the screen, with (0, 0) in the center and - in my experience - the upper right corner being (screen.width/2, screen.height/2). This might, however, change depending on the platform or your canvas setup.
This is the behaviour you are seeing here: the x coordinate is scaled by 2 relative to the center of your canvas (which would most likely be the extent of your screen in the game view).
There is no "object space" per se; you would need to pass additional data (e.g. in a texture coordinate) depending on your needs.
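For example, a small BaseMeshEffect component can copy each vertex's rect-local position into a spare texture coordinate for the shader to read. A sketch, where the component name and the choice of uv1 are my own; the Canvas also needs TexCoord1 enabled under its "Additional Shader Channels":

    using UnityEngine;
    using UnityEngine.UI;

    // Copies each vertex's rect-local position into TEXCOORD1 so the shader
    // can recover something like object space despite canvas batching.
    [RequireComponent(typeof(Graphic))]
    public class PassLocalPosition : BaseMeshEffect
    {
        public override void ModifyMesh(VertexHelper vh)
        {
            if (!IsActive())
                return;

            UIVertex vert = new UIVertex();
            for (int i = 0; i < vh.currentVertCount; i++)
            {
                vh.PopulateUIVertex(ref vert, i);
                // Positions here are still local to the Image's RectTransform.
                vert.uv1 = new Vector2(vert.position.x, vert.position.y);
                vh.SetUIVertex(vert, i);
            }
        }
    }

In the shader you would then read TEXCOORD1 instead of the vertex position for the multiplication.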

How to animate a simple 3D shape using Helix Toolkit?

I'm trying to animate a simple rectangular shape so that it scales in size in a certain direction. As it is, I am making a rectangle that extends from point A to B. The end goal is to animate it so that it starts at A and is transformed to be the length required to get to B.
I'm pretty new to animation in general, so this process seems finicky to me.
Right now I am:
Creating a vector between the start and end point
Finding the 8 corners of the rectangle along that vector
Creating 2 triangles for each face of the rectangle
Rendering the shape
This is all being done by using a MeshBuilder object and adding the triangles and points individually.
So, the way I'm creating the prism doesn't really help with what I need to do. Ideally, I suppose, I would just create a short prism aligned between the points, and then extend it to the right length in an animation.
Any thoughts?
I solved this, in a sense, by scaling the 3D object from a size of 0 in X/Y/Z up to 1.0. So instead of the prism "extending" from A to B, it more or less "grows" to B.
Note that the ScaleTransform3D needed to have its CenterX/CenterY/CenterZ properties set to the coordinates of point A in order for it to be anchored at the correct position.
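A sketch of that setup, assuming a WPF/Helix Toolkit ModelVisual3D named prismVisual that already holds the prism mesh built from A to B; the name, point, and one-second duration are placeholders:

    using System;
    using System.Windows.Media.Animation;
    using System.Windows.Media.Media3D;

    void AnimateGrowth(ModelVisual3D prismVisual, Point3D pointA)
    {
        // Anchor the scale at point A so the prism "grows" from A toward B.
        var scale = new ScaleTransform3D(0, 0, 0)
        {
            CenterX = pointA.X,
            CenterY = pointA.Y,
            CenterZ = pointA.Z
        };
        prismVisual.Transform = scale;

        // Animate all three axes from 0 to 1; the mesh itself never changes.
        var grow = new DoubleAnimation(0.0, 1.0, new Duration(TimeSpan.FromSeconds(1)));
        scale.BeginAnimation(ScaleTransform3D.ScaleXProperty, grow);
        scale.BeginAnimation(ScaleTransform3D.ScaleYProperty, grow);
        scale.BeginAnimation(ScaleTransform3D.ScaleZProperty, grow);
    }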
If I find a better solution, I'll update this answer later.

C# Monogame Performance when Drawing Thousands of SpriteBatch.DrawString()

I'm currently creating a large map that consists of a lot of rectangles (33,844), each of which has a unique name (label) that I'm drawing on top of it using a SpriteFont.
Drawing all of the rectangles takes no performance hit at all. But, as soon as I try to write all of their labels with DrawString(), my performance goes into the dumps.
In my head, I would like to draw all my rectangles and text to one texture all at once, and only have to keep redrawing that entire finished texture. My issue is, this is an enormous map, and some of the coordinates for the rectangles are very high (example: one slot's x is 14869 and y is 23622), and they're far bigger than a Texture2D allows.
Since this is a map, I really only need to draw the entire thing once, and then allow the user to scroll/move around it. There's no need for me to continually redraw all of the individual rectangles and their labels.
Does anyone have experience with this type of situation?
Try to only render the labels that are visible on screen, and if the user can zoom out far enough, just don't render them at all.
Text rendering is expensive, since it basically creates a rectangle to draw on for every character and then applies the font's RGBA texture to it. So the number of rectangles grows with the number of characters you write; that means four new vertices per character.
Depending on what you write you could simply create a texture with the text already on it and render that, but it won't be very dynamic.
EDIT: I need to clarify something.
There's no need for me to continually redraw all of the individual rectangles and their labels.
This is wrong. You have to draw the whole thing every frame. Sure, the cost doesn't grow memory-wise, but it is still a lot to render, and you will need to render it every frame.
But as I said: try to only render the labels and rectangles that collide with the screen boundaries, and you should be fine.
There are two ways to solve your problem.
You can either render your map to a RenderTarget(2D/3D) or you can cull the rectangles/text that are offscreen. However, I am not 100% sure that RenderTargets can go as large as you would need, but you could always segment your map into multiple smaller RenderTargets.
For more information on RenderTargets, you might want to check out RB Whitaker's article on them: http://rbwhitaker.wikidot.com/render-to-texture
Culling, in case you aren't familiar with the term in this context, means only rendering what is visible to the end user. There are various ways culling can be implemented. It does, however, require you to have already implemented a camera (or some type of view region): you perform a basic axis-aligned bounding box collision (AABB collision, which MonoGame's Rectangle supports out of the box) of each rectangle against the camera's viewport, and only render it if there is a collision.
A basic implementation of culling would look something like this:
    Rectangle myRect = new Rectangle(100, 100, 48, 32);

    public void DrawMapItem(SpriteBatch batch, Rectangle viewRegion)
    {
        // Intersects (rather than Contains) also keeps partially visible items.
        if (viewRegion.Intersects(myRect))
        {
            // Render your object here with the SpriteBatch.
        }
    }
Where 'viewRegion' is the area of your world that the camera/end-user can actually see.
You can also combine the two methods, and render the map to multiple render targets, and then cull the render targets.
Hope this helps!
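For the first option, a sketch that combines the two ideas might look like this; TileSize, the grid layout, and DrawRectanglesAndLabels (standing in for the existing per-item drawing code) are all assumptions:

    // Bake the map once into a grid of 2048x2048 RenderTarget2D tiles (a size
    // most hardware supports), then each frame draw only the visible tiles.
    const int TileSize = 2048;

    RenderTarget2D[,] BakeMapTiles(GraphicsDevice device, SpriteBatch batch,
                                   int mapWidth, int mapHeight)
    {
        int cols = (mapWidth + TileSize - 1) / TileSize;   // round up
        int rows = (mapHeight + TileSize - 1) / TileSize;
        var tiles = new RenderTarget2D[cols, rows];

        for (int x = 0; x < cols; x++)
        {
            for (int y = 0; y < rows; y++)
            {
                tiles[x, y] = new RenderTarget2D(device, TileSize, TileSize);
                device.SetRenderTarget(tiles[x, y]);
                device.Clear(Color.Transparent);

                // Shift the world so this tile's region lands at the origin.
                Matrix offset = Matrix.CreateTranslation(-x * TileSize, -y * TileSize, 0);
                batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                            null, null, null, null, offset);
                DrawRectanglesAndLabels(batch);   // existing drawing code
                batch.End();
            }
        }

        device.SetRenderTarget(null);   // back to the backbuffer
        return tiles;
    }

Each frame you would then cull the tiles against the camera rectangle, exactly like the DrawMapItem example above, and draw the handful of visible tiles as ordinary textures.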

Remove Kinect depth shadow

I've recently started hacking on my Kinect and I want to remove the depth shadow. The shadow is caused by the IR emitter being positioned slightly to the side of the camera, so any close object casts a big shadow while distant objects cast little or none.
The shadow length is related to the distance between the closest and the farthest spot on each side of the shadow.
My goal is to be able to map the color image correctly onto the depth. This doesn't work without processing the shadow as this picture shows:
Does the depth shadow always come out black?
If so, you could use a simple method like a temporal median to calculate the background of the image (more info here: http://www.roborealm.com/help/Temporal_Median.php) and then, whenever a pixel is black, set it to the background value at that pixel location.
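A rough sketch of that approach, assuming depth frames arrive as byte arrays of equal length and that 0/black marks the shadow; the history length N is an arbitrary choice:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class TemporalMedianFiller
    {
        private readonly Queue<byte[]> history = new Queue<byte[]>();
        private const int N = 9;   // frames of history to keep

        public void Process(byte[] frame)
        {
            // Update the history with a copy of the incoming frame.
            history.Enqueue((byte[])frame.Clone());
            if (history.Count > N) history.Dequeue();

            var frames = history.ToArray();
            for (int i = 0; i < frame.Length; i++)
            {
                if (frame[i] != 0) continue;   // only fill shadow pixels

                // Per-pixel median of the valid history = background estimate.
                var samples = frames.Select(f => f[i])
                                    .Where(v => v != 0)
                                    .OrderBy(v => v)
                                    .ToArray();
                if (samples.Length > 0)
                    frame[i] = samples[samples.Length / 2];
            }
        }
    }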
I did some preliminary work on this problem a few weeks ago. My code works directly on a WriteableBitmap rather than the depth data, but if you're only doing image processing, it should work. The algorithm isn't perfect and would benefit from some more tweaking. If you update the code at all, let me know; I'd be very interested to see what you're doing!
The source code is posted on my blog:
http://richardpianka.com/2011/02/trackingni-depth-correction/
I don't know how it is with C#, but OpenNI's C++ API has a function called xnSetViewPoint(); the only problem is that you lose the top 20 or so rows of image data due to the transformation.
The reason is that two different sensors are used, placed close to each other but not exactly at the same position.
Kinect has a method for this: MapDepthFrameToColorFrame.
Get the [x, y] positions in the depth frame, and use that method to fill in the corresponding color pixels.
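A sketch of that call as it appears in the Kinect for Windows SDK 1.x, assuming 640x480 depth and color streams and a started sensor named sensor:

    short[] depthPixels = new short[640 * 480];   // raw depth frame data
    ColorImagePoint[] colorPoints = new ColorImagePoint[640 * 480];

    // Maps every depth pixel to its matching position in the color frame.
    sensor.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution640x480Fps30, depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, colorPoints);

    // colorPoints[y * 640 + x] now says which color pixel belongs to the
    // depth sample at (x, y), so shadowed samples can be handled explicitly.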
I'm sorry to say it, but that shadow is caused by your body blocking the infrared dots from hitting that spot of the room, which creates a black spot... There's nothing you can do but change the base background to a color other than black so it won't be a noticeable shadow.
The color camera and the Kinect depth camera don't have the same dimensions, and the infrared dots don't originate from the camera itself; they come from an IR projector a few cm to the side (that displacement is what's used to calculate depth).
However, the solution seems easy here: your shadow data is on the left side,
so you need to extend the last known data from before it went black.
And to make it fit better, translate the color camera data to the right.
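A minimal sketch of that idea, assuming the depth frame is a width x height array of ushorts in which 0 marks the shadow; each row is scanned left to right, carrying the last valid depth into the shadowed pixels that follow it:

    static void FillShadow(ushort[] depth, int width, int height)
    {
        for (int y = 0; y < height; y++)
        {
            ushort lastValid = 0;
            for (int x = 0; x < width; x++)
            {
                int i = y * width + x;
                if (depth[i] != 0)
                    lastValid = depth[i];   // remember the last valid depth
                else
                    depth[i] = lastValid;   // fill the shadowed pixel
            }
        }
    }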

Detect Rotation of a scanned image in C#

We want a C# solution to correct a scanned image, because it is rotated. To solve this problem, we must first detect the rotation angle and then rotate the image. That was our first thought. But then we thought image warping would be more accurate, as it should make the scanned image match our template; then we can process it, since we know all the coordinates of our template... I searched for a free SDK or a free solution in C#. Any help would be great, as this is the last task in our work. Really, thanks to all.
We used the PrimeOCR product to do this. It's not free, but we couldn't find a free program that was comparable.
So, the hard part is to detect the angle of the page.
If you have full control over the template, the simplest way to do this is probably to come up with an easily detectable symbol (e.g. a solid black circle) and stick 3 of them on the template. Then detect them (in the case of a solid black circle, just look for big blocks of dark pixels).
So, you'll then have 3 sets of coordinates. If you have a top circle, a left circle, and a right circle, with all 3 circles at different distances from one another, detecting which circle is the top one should be pretty easy.
Then just call a rotation function. This part is easy and has been done before (e.g. http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-rotate ).
Edit:
I suggested a circle because it's easier to find the center, but a rectangle should work, too.
To be more explicit about how to actually locate the rectangles/circles, take the average Brightness value of every a × a group of pixels. If that value is below a threshold b (i.e. the block is dark), then that a × a group of pixels is part of a marker. a and b are variables you'll want to tune yourself.
Use flood fill (or, more precisely, Connected Component Labeling) to group the resulting pixels together. The end result should give you your rectangles.
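A rough sketch of the detection step, using System.Drawing; the block size a and darkness threshold b are the variables mentioned above, and GetPixel would be replaced by LockBits in anything performance-sensitive:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    // Scans the page in a x a blocks and collects the centers of dark blocks;
    // grouping those centers (connected component labeling) then yields one
    // cluster per printed marker.
    static List<Point> FindDarkBlocks(Bitmap image, int a, float b)
    {
        var blocks = new List<Point>();
        for (int by = 0; by + a <= image.Height; by += a)
        {
            for (int bx = 0; bx + a <= image.Width; bx += a)
            {
                // Average brightness of this a x a group of pixels (0..1).
                float sum = 0f;
                for (int y = 0; y < a; y++)
                    for (int x = 0; x < a; x++)
                        sum += image.GetPixel(bx + x, by + y).GetBrightness();

                if (sum / (a * a) < b)   // dark enough to belong to a marker
                    blocks.Add(new Point(bx + a / 2, by + a / 2));
            }
        }
        return blocks;
    }

With the left and right marker centers located, the page angle is then Math.Atan2(right.Y - left.Y, right.X - left.X), which you feed into the rotation step.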
