WorldToScreen function (C#)

Can anyone please explain the world-to-screen function (C#) to me?
I want to understand what it does, how to use it, and what I need in order to use it.
So far I know that it converts 3D vectors to 2D vectors, which can then be shown on the monitor.
Now I want to know the steps needed to show a 3D point as a drawing on the screen.
Thanks in advance :)

I am going to assume that your desired screen space runs from top to bottom so that {0, 0} is top left and {screenWidth, screenHeight} is bottom right. I am also going to assume that normalized device coordinates are in the range of [{-1, -1, 0}, {1, 1, 1}].
The first thing you want to do is convert your 3D world coordinates into what are known as Normalized Device Coordinates (NDC for short). This is an intermediary step which simplifies the mapping to screen space.
Normalized Device Coordinates measure relative positions on the monitor; they are not pixel coordinates. The center of the screen is always {0, 0}, the top edge is always y = 1, the bottom edge is always y = -1, the left edge is always x = -1, and the right edge is always x = 1.
Knowing the view and projection matrices, the transformation is:
Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
To understand how this transformation works, you could take a look at the source code of the Paradox Game Engine (since renamed Stride), for example.
Once the coordinate is normalized, you want to map it into Screen Space:
To find the X coordinate, move it from the range [-1, 1] to [0, 2] by adding 1, then divide by 2 to move it to [0, 1]. Multiply that by the screen width and you have the X coordinate in screen space.
float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
Finding the Y coordinate is similar, but instead of adding 1, you subtract pointInNdc.Y from 1 to flip the coordinate upside down (moving from Y running bottom-to-top to Y running top-to-bottom):
float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
There you have both X and Y coordinates in screen space. There are many great articles out there that describe this exact process, and also how to go back from screen to world.
Here is the full listing of WorldToScreen:
Vector2 WorldToScreen(Vector3 pointInWorld, Matrix view, Matrix projection, int screenWidth, int screenHeight)
{
    Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
    float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
    float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
    return new Vector2(screenX, screenY);
}
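One caveat, depending on your framework: in XNA/MonoGame-style APIs, Vector3.Transform only multiplies by the matrix and does not perform the perspective divide, so the result is not yet in NDC. Here is a minimal sketch of the same function with an explicit divide, assuming an XNA/MonoGame-style Matrix/Vector4 API (an assumption, not what the answer above uses):

Vector2 WorldToScreenExplicit(Vector3 pointInWorld, Matrix view, Matrix projection, int screenWidth, int screenHeight)
{
    // Transform into homogeneous clip space first (w = 1 for a point).
    Vector4 clip = Vector4.Transform(new Vector4(pointInWorld, 1f), view * projection);

    // Perspective divide: clip space -> NDC. Real code should guard
    // against clip.W == 0 (a point on the camera plane).
    float ndcX = clip.X / clip.W;
    float ndcY = clip.Y / clip.W;

    float screenX = (ndcX + 1) / 2f * screenWidth;
    float screenY = (1 - ndcY) / 2f * screenHeight;
    return new Vector2(screenX, screenY);
}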

Related

3D data point to 2D data point

I'm using GDI+ to implement some simple graphics. I've taken the code from this example http://www.vcskicks.com/3d_gdiplus_drawing.php and can get it to do what I want, but I don't understand how it does the conversion from 3D data point to 2D data point:
//Convert 3D Points to 2D
Math3D.Point3D vec;
for (int i = 0; i < point3D.Length; i++)
{
    vec = cubePoints[i];

    if (vec.Z - camera1.Position.Z >= 0)
    {
        point3D[i].X = (int)((double)-(vec.X - camera1.Position.X) / (-0.1f) * zoom) + drawOrigin.X;
        point3D[i].Y = (int)((double)(vec.Y - camera1.Position.Y) / (-0.1f) * zoom) + drawOrigin.Y;
    }
    else
    {
        tmpOrigin.X = (int)((double)(cubeOrigin.X - camera1.Position.X) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.X;
        tmpOrigin.Y = (int)((double)-(cubeOrigin.Y - camera1.Position.Y) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.Y;

        point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
        point3D[i].Y = (float)(-(vec.Y - camera1.Position.Y) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.Y);
        point3D[i].X = (int)point3D[i].X;
        point3D[i].Y = (int)point3D[i].Y;
    }
}
I've found a couple of resources which discuss conversion from a 3d data point to a 2d one:
https://amycoders.org/tutorials/3dbasics.html
https://en.wikipedia.org/wiki/Isometric_projection
https://en.wikipedia.org/wiki/3D_projection
However, none of these resources seem to detail the maths used in the example above.
I'd be really grateful if someone could point me at the derivation for the maths and/or explain how the above code works.
The article and code are a bit confusing, indeed. Before we start, let's make some modifications to the rest of the code; through them, you will probably see what's going on more easily. Let's specify a static camera position. Instead of this weird formula:
double cameraZ = -(((anchorPoint.X - cubeOrigin.X) * zoom) / cubeOrigin.X) + anchorPoint.Z;
Let's just do this:
cameraZ = 200;
zoom = 100;
And after that, we keep
camera1.Position = new Math3D.Point3D(cubeOrigin.X, cubeOrigin.Y, cameraZ);
This will position the camera at a depth of 200 such that its x/y coordinates coincide with the cube center. I'll come back to the meaning of zoom.
The camera model uses a perspective projection and a right-handed coordinate system. That means the camera looks in the negative z-direction, and things that are far away appear smaller.
Let's take a closer look at the 3D->2D conversion code step by step:
if (vec.Z - camera1.Position.Z >= 0)
vec is the point that we want to project. A more intuitive way to write that would be:
if (vec.Z >= camera1.Position.Z)
So, this branch applies to all points that are behind the camera (remember that the camera looks into the negative z-direction). What happens in this branch is a bit hacky. It has nothing to do with real projections. What you actually want to do is cut off those points (as they are not visible). Luckily, in the example, none of the points lie behind the camera, so we don't need to care about this. I'll come back to that later.
Let's continue to the else branch.
tmpOrigin = ...
This variable is not used anywhere, so we can ignore it.
point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
This is the actual projection (I will only consider the X part. The same goes for the Y part). Let's take a look at the individual parts:
vec.X - camera1.Position.X
This is the vector from the camera position to the point drawn. Everything left of the camera has a negative coordinate, everything right of the camera has a positive coordinate.
vec.Z - camera1.Position.Z
This is the negative depth of the point relative to the camera. It is not clear why the negative depth is used here; it gives you a mirrored image. What you actually want (because the camera looks into the negative z-direction) is
camera1.Position.Z - vec.Z
Then,
(vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z)
is the perspective divide. The difference vector is scaled by its inverse depth (i.e. far objects become smaller).
* zoom
This scales the image from world space (which is very small) to image space (it converts world units to pixels). The factor is somewhat arbitrary (that's why we just specified 100); more involved camera models derive it from a field of view.
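For reference, here is a sketch of how such a factor could be derived from a vertical field of view (an assumption, not something the article does):

// Assumption, not from the article: derive the pixel scale from a
// vertical field of view fovY (in radians). A point at depth d and
// lateral offset h lands h / d * zoom pixels from the center; the edge
// of the view corresponds to tan(fovY / 2), so:
double zoom = (screenHeight / 2.0) / Math.Tan(fovY / 2.0);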
drawOrigin.X
And finally, we align the camera center with drawOrigin. Remember that points left of the camera had a negative coordinate; with this offset, they get a positive pixel coordinate (but still lie left of drawOrigin).
point3D[i].X = (int)point3D[i].X;
This is just a cast to int.
For the y-coordinate, there is an additional minus sign. This turns the y-axis around (in the pixel coordinate system of the image, the y-axis points downwards).
Let's go back to the hacky if branch. You see that the formula is exactly the same, except that the part that previously held the negative depth of the point is now the constant (-0.1f). So these points are treated as having a constant depth of 0.1. Pretty dubious and far from an actual projection.
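Putting those corrections together, here is a sketch of what the projection could look like with the mirroring fixed and points behind the camera culled instead of faked (the helper name TryProject and the use of System.Drawing.Point are mine, not the article's):

// Illustrative helper, not from the article: project a world-space point
// using depth measured as camera.Z - point.Z (the camera looks down the
// negative z-axis). Points at or behind the camera are culled rather
// than drawn with a fake constant depth.
bool TryProject(Math3D.Point3D vec, Math3D.Point3D cameraPos, double zoom, Point drawOrigin, out Point projected)
{
    double depth = cameraPos.Z - vec.Z;
    if (depth <= 0)
    {
        projected = Point.Empty;
        return false; // behind the camera: cull, don't draw
    }
    int x = (int)((vec.X - cameraPos.X) / depth * zoom) + drawOrigin.X;
    int y = (int)(-(vec.Y - cameraPos.Y) / depth * zoom) + drawOrigin.Y; // flip y for pixel coordinates
    projected = new Point(x, y);
    return true;
}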
And that's basically it. One more note: the article has a section about gimbal lock. Thing is, the properties of matrix multiplication that are described there have nothing to do with gimbal lock. So don't rely on this article too much; it's a nice practical application, but it has quite a few flaws.

Use of offsets for translation in Matrix3D

I have an application where I want to scale, rotate, and translate some 3D points.
I'm used to seeing 3x3 rotation matrices and storing translation in a separate array of values. But the .NET Matrix3D structure (https://msdn.microsoft.com/en-us/library/System.Windows.Media.Media3D.Matrix3D%28v=vs.110%29.aspx) is 4x4 and has a row of "offsets" - OffsetX, OffsetY, OffsetZ - which are apparently used for translation. But how, exactly, are they intended to be applied?
Say I have a Vector3D with X, Y, Z values, say 72, 24, 4. And say my Matrix3D has
.707 0 -.707 0
0 1 0 0
.707 0 .707 0
100 100 0 1
i.e., so the OffsetX and OffsetY values are 100.
Is there any method or operator for Matrix3D that will apply this as a translation to my points? Transform() doesn't seem to. If my code has...
Vector3D v = new Vector3D(72, 24, 0);
Vector3D vectorResult = new Vector3D();
vectorResult = MyMatrix.Transform(v);
vectorResult comes out as (8.484, 24, -8.484), and it has the same values if the offsets are 0.
Obviously I can manually apply the translation individually for each axis, but I thought since it's part of the structure there might be some method or operator where you give it a point and it applies the entire matrix including translation. Is there?
A 4x4 matrix represents a transform in 3D space using homogeneous coordinates. In this representation, a w component is added to the vector; its value depends on what the vector is meant to represent. Transforming a vector with the matrix is a simple multiplication:
transformed = vector * matrix
(where both vectors are row-vectors).
The w component is only picked up by the matrix's last row - the part where the translation is stored. So if you want to transform points, this component needs to be 1. If you want to transform directions, it needs to be 0 (because direction vectors do not change if you translate them).
This difference is expressed with two different structures in WPF. The Vector3D represents a direction (with w component 0) and the Point3D represents a point (with w component 1). So if you make v a Point3D, everything should work as you expect.
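A minimal sketch of the difference, using the matrix from the question (Matrix3D.Transform has overloads for both structures):

using System;
using System.Windows.Media.Media3D;

class OffsetDemo
{
    static void Main()
    {
        // The matrix from the question: a rotation with offsets 100, 100, 0.
        var m = new Matrix3D(
            0.707, 0, -0.707, 0,
            0,     1,  0,     0,
            0.707, 0,  0.707, 0,
            100, 100, 0, 1);

        // Point3D carries an implicit w = 1, so the offsets row applies.
        Point3D p = m.Transform(new Point3D(72, 24, 0));

        // Vector3D carries an implicit w = 0, so translation is ignored.
        Vector3D v = m.Transform(new Vector3D(72, 24, 0));

        Console.WriteLine(p); // rotated and translated
        Console.WriteLine(v); // rotated only
    }
}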

how to plot surface that has x,y,z vectors?

I have a file containing some xyz data points and am trying to create a surface plot out of the points in this dataset, but for some reason my plot always comes out looking horribly deformed.
If there is a grid available for the points, then it is possible to feed them directly to ILSurface. If not (scattered data), you would need to interpolate them in such a way as to get a grid. Or you can wait for our upcoming interpolation toolbox, which will provide such a feature!
This is currently not possible, I am afraid.
I think you're looking to project 3D points onto a 2D surface. The basic formula looks something like

projectedX = x / z
projectedY = y / z

and if you map that into screen space (scaling to pixels and centering on the screen), you have something like this:

projectedX = x * ScreenWidth / z + ScreenWidth / 2
projectedY = y * ScreenHeight / z + ScreenHeight / 2
This should get you started on the projection.
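As a rough sketch, those formulas in C# (illustrative only; real code would also need to cull points with z <= 0 and account for aspect ratio):

using System.Drawing;

// Illustrative sketch of the centered perspective formulas above.
static PointF Project(float x, float y, float z, float screenWidth, float screenHeight)
{
    float projectedX = x * screenWidth / z + screenWidth / 2f;
    float projectedY = y * screenHeight / z + screenHeight / 2f;
    return new PointF(projectedX, projectedY);
}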

Vector/Angle math

I have two objects in a game, which for this purpose can be considered points on a 2d plane, but I use Vector3s because the game itself is 3d.
I have a game camera which I want to align perpendicularly (also on the plane) to the two objects, so that they are both in view of the camera. Due to the nature of the game, the objects could be in any imaginable configuration of positions, so the directional vector between them could have any direction.
Part 1: How do I get the perpendicular angle from the two positional vectors?
I have:
Vector3 object1Position; // x and z are relevant
Vector3 object2Position;
I need:
float cameraEulerAngleY;
Part 2: Now, because of the way the game's assets are modelled, I want to allow the camera to view only within a 180-degree 'cone'. So if the camera passes a certain point, it should use the exact opposite of the position the above math might produce.
An image is attached of what I need, the circles are the objects, the box is the camera.
I hope this post is clear and you guys won't burn me alive for being total rubbish at vector math :P
greetings,
Draknir
You'll need to specify a distance from the object line, and an up vector:
Vector3 center = 0.5f * (object1Position + object2Position);
Vector3 vec12 = object2Position - object1Position;
Vector3 normal = Vector3.Cross(vec12, up);
normal.Normalize();
Vector3 offset = distance * normal;
Vector3 cameraA = center + offset;
Vector3 cameraB = center - offset;
// choose which camera position you want
Instead of using Euler angles, you should probably use something like LookAt() to orient your camera.
Assuming Y is always 0 (you mentioned X and Z are your relevant components), you can use some 2D math for this:
1. Find any perpendicular vector (there are two). You can get one by taking the difference between the two positions, swapping its X and Z components, and negating one of them.
Vector3 difference = (object1Position - object2Position);
Vector3 perpendicular = new Vector3(difference.z, 0, -difference.x);
2. Using your separating plane's normal, flip the direction of your new vector if it's pointing opposite of intended.

Vector3 separatingPlaneNormal = ...; // down?

if (Vector3.Dot(separatingPlaneNormal, perpendicular) < 0)
{
    perpendicular = -perpendicular;
}
// done.
Well, for the first bit, if you have points (x1, y1) and (x2, y2) describing the positions of your objects, just think of it in terms of triangles. The angle you're looking for ought to be described by
arctan((y2 - y1) / (x2 - x1)) + 90
I don't completely understand what you want to do with the second part, though.
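For the first part, here is a small sketch of that formula in C# (the helper name is mine, not the question's). Math.Atan2 handles all quadrants, unlike plain arctan, which breaks when x2 == x1 and cannot distinguish opposite directions:

// Yaw angle, in degrees, perpendicular to the line between two objects
// on the XZ plane. Atan2 covers all quadrants; plain Atan does not.
static float CameraYawDegrees(float x1, float z1, float x2, float z2)
{
    float lineAngle = (float)(Math.Atan2(z2 - z1, x2 - x1) * 180.0 / Math.PI);
    return lineAngle + 90f; // rotate 90 degrees to face the line
}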

Light is being altered by camera

I have for a long time put off fixing up the lighting in a project I am working on in OpenTK. The problem, basically, is that when I rotate the camera, the lighting displayed on the terrain I am showing also rotates.
Here's a snippet of my OnRenderFrame code:
protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    float dist = (float)Math.Sin(Math.PI / 3) * camZoom;
    Matrix4 view = Matrix4.LookAt(new Vector3(-(float)Math.Cos(camRot) * dist, dist, -(float)Math.Sin(camRot) * dist) + camPos, camPos, new Vector3(0, 1, 0));
    // Note: camPos isn't actually the position of the camera, rather its target. The actual camera position is calculated above.
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadMatrix(ref view);

    float tx = 50;
    float ty = 20;
    float tz = -15;
    GL.Light(LightName.Light0, LightParameter.Position, new float[] { tx, ty, tz });

    GL.Begin(BeginMode.Lines);
    DrawBox(tx - 2, tx + 2, ty - 2, ty + 2, tz - 2, tz + 2);
    GL.End();

    // ... drawing of terrain is here
I used my quick DrawBox function to draw a simple box around the location of the light, and it works fine; I even implemented some movement, such as a sun orbiting the earth. While the camera hadn't been moved, this was working great. But once I turned the camera, the lighting no longer showed what it should have. It seemed to have 'moved' the light without actually moving it (the box drawn wasn't moved, only the effect of the light).
According to the OpenGL spec for glLight with the GL_POSITION parameter:
GL_POSITION:
params contains four integer or floating-point values that specify the position of the light in homogeneous object coordinates. Both integer and floating-point values are mapped directly. Neither integer nor floating-point values are clamped.
The position is transformed by the modelview matrix when glLight is called (just as if it were a point), and it is stored in eye coordinates. If the w component of the position is 0, the light is treated as a directional source. Diffuse and specular lighting calculations take the light's direction, but not its actual position, into account, and attenuation is disabled. Otherwise, diffuse and specular lighting calculations are based on the actual location of the light in eye coordinates, and attenuation is enabled. The initial position is (0, 0, 1, 0); thus, the initial light source is directional, parallel to, and in the direction of the -z axis.
You only pass in three floats. I can't be certain what OpenTK uses as the fourth (w) component, but I would guess it sets any unspecified components to 0, thereby making your light directional and ignoring the position you give. I would try adding a 1.0 to the end of the array you pass in and see if that fixes things.
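That is, assuming the three-float call is the culprit, the fix would look like this (a sketch of the suggestion above, not tested against your setup):

// Pass an explicit w = 1 so the light is treated as positional rather
// than directional. Setting it after GL.LoadMatrix(ref view) keeps the
// position in consistent eye coordinates each frame.
GL.Light(LightName.Light0, LightParameter.Position, new float[] { tx, ty, tz, 1f });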
