3D shapes and Cellophane glasses - c#

How do I generate basic 3D shapes (red and blue) that can be seen as 3D with cellophane 3D glasses, using C# in a desktop app? (Note that this question is not limited to any particular language. If I can get a head start in any language, then that's great. I can always learn from that and eventually know enough to attempt to implement this in my desired language.)
I've seen so many questions about this, but the answers seem very complicated and don't lead anywhere in the end. I can't even find any docs or articles about this.

To generate anaglyph 3D images, you first have to render the scene from two slightly different viewpoints, one for each eye. The farther apart the two viewpoints are, the smaller the scene will appear and the stronger the 3D effect will be.
The easiest method would be to use some existing library to render the images. Using a "camera", position it slightly to the left (and right) of the center of view. Render two images, and get the pixels.
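As a concrete (untested) sketch of that step using Unity's API — any 3D library with an offscreen-render path would work the same way — you can offset one camera left and right of center and render each eye to its own texture. The names `eyeSeparation` and `RenderEye` are illustrative assumptions:

```csharp
using UnityEngine;

// Renders the scene from two eye positions separated by 'eyeSeparation'.
// Attach to a GameObject and assign a Camera in the inspector.
public class StereoCapture : MonoBehaviour
{
    public Camera eyeCamera;
    public float eyeSeparation = 0.065f; // ~6.5 cm, a typical interocular distance

    public Texture2D RenderEye(float offset, RenderTexture target)
    {
        // Shift the camera sideways from the center of view.
        eyeCamera.transform.localPosition = new Vector3(offset, 0, 0);
        eyeCamera.targetTexture = target;
        eyeCamera.Render();

        // Read the pixels back from the render target.
        RenderTexture.active = target;
        var tex = new Texture2D(target.width, target.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, target.width, target.height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;
        return tex;
    }

    // Call RenderEye(-eyeSeparation / 2, leftRT) for the left eye
    // and RenderEye(+eyeSeparation / 2, rightRT) for the right.
}
```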
The second step is to combine the two images into an Anaglyph 3D image. One way to do this, is to combine the red channel from one image with the green and blue channels from the other.
(Pseudo-C#:)
Color Combine(Color left, Color right)
{
    // Red channel from the left eye; green and blue from the right.
    return new Color(left.Red, right.Green, right.Blue);
}

Image Combine(Image left, Image right)
{
    Image result = new Image(left.Width, left.Height);
    for (int y = 0; y < left.Height; y++)
        for (int x = 0; x < left.Width; x++)
        {
            result.SetPixel(x, y, Combine(left.GetPixel(x, y), right.GetPixel(x, y)));
        }
    return result;
}
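For a desktop app, the pseudo-code above maps fairly directly onto System.Drawing. A minimal sketch, assuming both rendered views are available as Bitmap objects of equal size:

```csharp
using System.Drawing;

// Combine a left-eye and a right-eye render into a red/cyan anaglyph.
// Assumes both bitmaps have the same dimensions.
static Bitmap CombineAnaglyph(Bitmap left, Bitmap right)
{
    var result = new Bitmap(left.Width, left.Height);
    for (int y = 0; y < left.Height; y++)
    {
        for (int x = 0; x < left.Width; x++)
        {
            Color l = left.GetPixel(x, y);
            Color r = right.GetPixel(x, y);
            // Red channel from the left eye, green/blue from the right.
            result.SetPixel(x, y, Color.FromArgb(l.R, r.G, r.B));
        }
    }
    return result;
}
```

Note that GetPixel/SetPixel are slow; if you need this per frame rather than once, LockBits and raw byte access are the usual optimization.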

Related

How can I draw a Line animation to outline various shapes?

I am trying to draw a line animation outlining various shapes as in the image below. I am well aware that it's best practice to mention what I've been able to achieve so far to get specific help, but I am not sure where to begin; I only know that a Line Renderer could be a good approach. That said, how can I achieve this?
UPDATE
I think I didn't explain a few things clearly enough. I am interested in animating the outline of objects without arrows, just a line traced round the outline like the image below:
I would do the following: (pseudocode, untested)
For every prefab or gameobject, store a List of edges that define your outline.
I wouldn't recommend using the mesh's edges, it's probably better to have a specific predefined list of edges per shape to avoid the inner edges of the object. Every entry in the list is defined by two Vector3's which are the two vertices.
List<Vector3[]> outline = new List<Vector3[]>();
Now, you have many ways to actually draw the arrows: as individual GameObjects (probably not a good idea), as a particle system, or drawn directly from the parent object's Update function. I would recommend the latter.
Now you would store a bunch of floats that define where your arrows are along the outline (in degrees, 0–360):
public List<float> arrow_locations = new List<float>();
//adding one arrow
arrow_locations.Add(0.0f);
//now in the update function of your parent object, update the arrow locations
private float cycle = 0.0f;

void Update()
{
    float segment_size = 360.0f / outline.Count;
    for (int i = 0; i < arrow_locations.Count; i++)
    {
        arrow_locations[i] += 0.05f; // speed of spinning
        if (arrow_locations[i] >= 360.0f) arrow_locations[i] = 0.0f;

        // find which edge the arrow is currently on
        int which_edge = Mathf.FloorToInt((arrow_locations[i] / 360.0f) * outline.Count);

        // a number 0..1 telling us where along that edge the arrow is
        float weight_within_edge = (arrow_locations[i] - segment_size * which_edge) / segment_size;

        // lerp between the two vertices of the edge
        Vector3 new_loc = outline[which_edge][0] * (1.0f - weight_within_edge)
                        + outline[which_edge][1] * weight_within_edge;

        // now that we have the location of the arrow, draw it
        // note: you can get more efficient by instancing all arrows
        // you could also use line drawing, but I wouldn't recommend that
        DrawMesh(arrow_mesh, new_loc, Quaternion.identity);
    }
}
Please note that once you have the positions of the arrows, you can opt to draw them in 2D in the UI by projecting them onto the camera plane. The lines aside from the arrows are static, so you can draw them as part of the mesh very easily. Also note that I make no mention of the object's position; all values should probably be defined in local space and then transformed with the object. You can transform the drawn geometry in the DrawMesh function by supplying a transform matrix.
I think a shader with a parameterized radial mask would be the best way to do this. I have never done one myself, so I only have a general idea of how it's done, but here is how it would work AFAIK:
Create some kind of cel shader that can draw the edges of objects.
Create a filter/mask that has an angle shape extruding radially from the center to the edges; you can control the shape/angle using a parameter. Unity already has something similar to this in the Tanks! tutorial - Tank Health lesson.
Note: The tutorial might even be exactly this idea, but I don't remember with enough details to confirm; I'll update the answer after I take a look again.
The tutorial has the same idea, but it applies it using Unity's built-in UI system.
Using this mask, only the masked area of the shape's edge will be drawn to the screen.
By increasing the angle parameter of the mask over time, you can create the effect of the edge of the object getting revealed radially over time. Which seems to be exactly what you want.
To help visualize, a very professional diagram made in paint:
light blue = mask.
dark blue = "revealed" part of the mask (angle parameter). Plus how it would behave if the angle is increased (arrow).
green = object.
black = outline being drawn to the screen.

Monotouch iOS Recognize colors from a picture?

I don't know if this is possible with Monotouch so I thought I'd ask the experts. Let's say I want to be able to take a picture of a painted wall and recognize the general color from it - how would I go about doing that in C#/Monotouch?
I know I need to capture the image and do some image processing but I'm more curious about the dynamics of it. Would I need to worry about lighting conditions? I assume the flash would "wash out" my image, right?
Also, I don't need to know exact colors, just the general color family. I don't need to know a wall is royal blue, I just need it to return "blue"; not hunter green, just "green". I've never done that with image processing.
The code below relies on the .NET System.Drawing.Bitmap class and the System.Drawing.Color class, but I believe these are both supported in MonoTouch (at least based on my reading of the Mono Documentation).
So assuming you have an image in a System.Drawing.Bitmap object named bmp. You can obtain the average hue of that image with code like this:
float hue = 0;
int w = bmp.Width;
int h = bmp.Height;
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        Color c = bmp.GetPixel(x, y);
        hue += c.GetHue();
    }
}
hue /= (w * h);
That's iterating over the entire image which may be quite slow for a large image. If performance is an issue, you may want to limit the pixels evaluated to a smaller subsection of the image (as suggested by juhan_h), or just use a smaller image to start with.
Then given the average hue, which is in the range 0 to 360 degrees, you can map that number to a color name with something like this:
String[] hueNames = new String[] {
    "red", "orange", "yellow", "green", "cyan", "blue", "purple", "pink"
};
float[] hueValues = new float[] {
    18, 54, 72, 150, 204, 264, 294, 336
};
String hueName = hueNames[0];
for (int i = 0; i < hueNames.Length; i++) {
    if (hue < hueValues[i]) {
        hueName = hueNames[i];
        break;
    }
}
I've just estimated some values for the hueValues and hueNames tables, so you may want to adjust those tables to suit your requirements. The values are the point at which the color appears to change to the next name (e.g. the dividing line between red and orange occurs at around 18 degrees).
To get an idea of the range of colors represented by the hue values, look at a color wheel. Starting at the top, it goes from red/orange (around 0°, north) to yellow/green (around 90°, east), to cyan (around 180°, south), to blue/purple (around 270°, west).
You should note, however, that we are ignoring the saturation and brightness levels, so the results of this calculation will be less than ideal on faded colors and under low light conditions. However, if all you are interested in is the general color of the wall, I think it might be adequate for your needs.
I recently dealt with shifting white balance on iOS (original question here: iOS White point/white balance adjustment examples/suggestions) which included a similar problem.
I cannot give you code samples in C#, but here are the steps that I would take:
Capture the image
Decide what point/part of the image is of interest (the smaller the better)
Calculate the "color" of that point of the image
Convert the "color" to human readable form (I guess that is what you need?)
To accomplish step #2 I would either let the user choose the point or take the point to be in the center of the image, because that is usually the place to which the camera is actually pointed.
How to accomplish step #3 depends on how big the area chosen in step #2 is. If the area is 1x1 pixels, you render it in RGB and get the component (i.e. red, green, and blue) values from that single rendered pixel. If the area is larger, you need to get the RGB values of each pixel contained in that area and average them.
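A minimal sketch of that averaging step in C#, assuming a System.Drawing.Bitmap and a rectangular area of interest (the names `AverageColor` and `area` are illustrative):

```csharp
using System.Drawing;

// Average the R, G, and B components of the pixels inside 'area'.
// 'area' is assumed to lie entirely within the bitmap.
static Color AverageColor(Bitmap bmp, Rectangle area)
{
    long r = 0, g = 0, b = 0;
    for (int y = area.Top; y < area.Bottom; y++)
    {
        for (int x = area.Left; x < area.Right; x++)
        {
            Color c = bmp.GetPixel(x, y);
            r += c.R;
            g += c.G;
            b += c.B;
        }
    }
    int n = area.Width * area.Height;
    return Color.FromArgb((int)(r / n), (int)(g / n), (int)(b / n));
}
```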
If you only need a general color, this is mostly it. But if you need to compensate for lighting conditions, the problem becomes much more complicated. To compensate for lighting (i.e. white balancing) you need to do some transformations and make some guesses about the conditions in which the photo was taken. I will not go into details (I wrote my Bachelor's thesis on them), but Wikipedia's article on white balance is a good starting point.
It is also worth noting that any solution to the white-balancing problem will always be somewhat subjective and dependent on guesses about the light in which the photo was taken (at least as far as I know).
To accomplish step #4, you should search for tables that map RGB values to human-readable color names. I have not had the need for these kinds of tables, but I am sure they exist somewhere on the Internet.

How to approach drawing shapes in C# to make a tetris clone?

I would first like to note that I am NOT using any XNA or LINQ in this small project. Basically, I want to make a clone of Tetris using C# windows application. I have already drawn out my grid, my picturebox size 250x500, making each square block 25 pixels x 25 pixels.
Now, I am an amateur at drawing shapes. I can draw lines, rectangles, circles, ellipses, and polygons on a grid, and I can fill them with a color; that's about it. In other words, basic shapes, using Points I created to draw polygons such as the "T" shape in Tetris.
My question is, when making my Tetris shapes, should I draw them using the Drawing methods in C# or should I create and import bitmap pictures of the tetris shapes and use those to create my tetris clone?
Once I can figure out how to draw shapes, the rest I can figure out on my own. Also, when doing work on the game grid, do I inherit the Picturebox Properties from my class called GameGrid?
Using bitmaps and prerendered images is preferred, because it speeds up rendering of each frame. This is what most such games do.
The way you render the shapes will affect your collision detection. For instance, if you use a bitmap of the T shape, you will need a method of detecting when the T has collided, perhaps per-pixel collision or a separate structure that maintains the specific shape. Whereas if you simply maintain a list of the blocks in use, collision detection becomes far simpler.
If you have your shapes as a matrix of blocks, much like the original game, you may find the rendering, handling and collision far easier.
For instance, have a look at the following pseudo-code:
class Shape
{
    // 3x3 occupancy grid for the piece, plus its position on the board
    public bool[][] Blocks = { new bool[3], new bool[3], new bool[3] };
    public Pos Pos;
}

Shape T = new Shape();
T.Blocks[0][0] = true;
T.Blocks[0][1] = true;
T.Blocks[0][2] = true;
T.Blocks[1][0] = false;
T.Blocks[1][1] = true;
T.Blocks[1][2] = false;
T.Blocks[2][0] = false;
T.Blocks[2][1] = true;
T.Blocks[2][2] = false;
When rendering, you can do something along the following lines:
foreach (Shape s in currentBlocks)
{
    for (int x = 0; x < 3; x++)
    {
        for (int y = 0; y < 3; y++)
        {
            if (s.Blocks[x][y])
            {
                gameGrid.Render(s.Pos.X + x, s.Pos.Y + y);
            }
        }
    }
}
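Building on the same block matrix, collision detection against the play field reduces to checking occupied cells. A hedged sketch, where the `grid` array and the 10x20 field dimensions are illustrative assumptions:

```csharp
// Returns true if placing the piece at (posX, posY) would overlap an
// occupied cell or fall outside a 10x20 play field.
// 'blocks' is the piece's 3x3 occupancy grid; 'grid[x, y]' marks filled cells.
static bool Collides(bool[][] blocks, bool[,] grid, int posX, int posY)
{
    for (int x = 0; x < 3; x++)
    {
        for (int y = 0; y < 3; y++)
        {
            if (!blocks[x][y]) continue;
            int gx = posX + x;
            int gy = posY + y;
            if (gx < 0 || gx >= 10 || gy < 0 || gy >= 20)
                return true; // out of bounds
            if (grid[gx, gy])
                return true; // cell already occupied
        }
    }
    return false;
}
```

Testing this before applying a move or rotation is usually all the collision logic a grid-based Tetris needs.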

Creating a Textured 2D Sprite using Points in XNA

I am working on a smooth terrain generation algorithm in C# and using XNA to display the data.
Each iteration, it creates a new point halfway between each pair of points, at a random height between the two. This works OK, and I am currently getting randomly placed points.
Now what I want to do is turn these points into a primitive (I think that is what it is) and display it like a mountain, obviously using a mountain texture. Example below (using different point data, made up in paint)
Any help or tips are greatly appreciated, and look forward to your responses.
Thanks.
Twitchy
You can draw a triangle strip by alternating between each point in your primitive and a point on the bottom of the screen with the same x coordinate, stepping along the bottom of the screen.
I am not familiar with drawing primitives in XNA (just openGL), but it should be similar.
Take your points, e.g. A, B, C and D. To draw the strip, order the vertices as:
vertex1 = A
vertex2 = point(A.x, 0)
vertex3 = B
vertex4 = point(B.x, 0)
vertex5 = C
vertex6 = point(C.x, 0)
vertex7 = D
vertex8 = point(D.x, 0)
(I assume the bottom of the screen has a y coordinate of 0, it can be screen height or whatever y you choose)
http://en.wikipedia.org/wiki/Triangle_strip
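In XNA terms, the strip above could be built with VertexPositionTexture vertices and drawn with DrawUserPrimitives. A sketch under those assumptions (untested; `maxHeight` and the texture-coordinate mapping are illustrative choices):

```csharp
// Build a triangle strip from terrain points down to the baseline (y = 0).
// 'points' are the generated terrain points, ordered left to right.
VertexPositionTexture[] BuildStrip(Vector2[] points, float maxHeight)
{
    var verts = new VertexPositionTexture[points.Length * 2];
    for (int i = 0; i < points.Length; i++)
    {
        Vector2 p = points[i];
        float u = (float)i / (points.Length - 1); // texture x across the strip

        // top vertex: the terrain point itself
        verts[2 * i] = new VertexPositionTexture(
            new Vector3(p.X, p.Y, 0), new Vector2(u, 1 - p.Y / maxHeight));

        // bottom vertex: straight below it on the baseline
        verts[2 * i + 1] = new VertexPositionTexture(
            new Vector3(p.X, 0, 0), new Vector2(u, 1));
    }
    return verts;
}

// Draw with something like:
// device.DrawUserPrimitives(PrimitiveType.TriangleStrip, verts, 0, verts.Length - 2);
```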

Precise pixel movement in Canvas

I am writing a drawing application in WPF, and I have visual elements such as rectangles and lines in a Canvas. I have implemented dragging of those elements around the Canvas to move them. The motion must be aligned to pixels; I read that WPF uses device-independent points rather than pixels, so I'm concerned about whether my lines and rectangles are actually pixel-aligned. I tried using SnapsToDevicePixels, but I'm not sure it will do the trick, or whether it applies while the visuals are being moved.
Finally, I must implement moving visuals with the keyboard: a single cursor-key stroke should move the visual exactly one pixel. How can I do this from code-behind? I assume doing something like:
Canvas.SetLeft(visual, Canvas.GetLeft(visual) + 1);
will only add one point to its position, not one pixel. How can I move exactly one pixel in the Canvas?
Thank you very much.
It might help to set SnapsToDevicePixels on your canvas.
Is this what you are looking for?
Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow)
                             .CompositionTarget.TransformToDevice;
double pixelSizeX = m.M11;
double pixelSizeY = m.M22;
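Given that transform, one physical pixel corresponds to 1/M11 device-independent units horizontally (and 1/M22 vertically), so a one-pixel keyboard nudge could be sketched like this (untested; `visual` is your dragged element):

```csharp
// Move 'visual' exactly one physical pixel to the right.
// 'm' is the TransformToDevice matrix obtained above; at 96 DPI
// M11 is 1.0 and this reduces to moving by one unit.
double oneDevicePixelX = 1.0 / m.M11; // device-independent units per physical pixel
Canvas.SetLeft(visual, Canvas.GetLeft(visual) + oneDevicePixelX);
```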