Light is being altered by camera - C#

I have for a long time put off fixing the lighting in a project I am working on in OpenTK. The problem is that when I rotate the camera, the lighting displayed on the terrain rotates with it.
Here's a snippet of my onRenderFrame code:
protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    float dist = (float)Math.Sin(Math.PI / 3) * camZoom;
    Matrix4 view = Matrix4.LookAt(new Vector3(-(float)Math.Cos(camRot) * dist, dist, -(float)Math.Sin(camRot) * dist) + camPos, camPos, new Vector3(0, 1, 0));
    // Note: camPos isn't actually the position of the camera, rather its target. The actual camera position is calculated above.
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadMatrix(ref view);

    float tx = 50;
    float ty = 20;
    float tz = -15;
    GL.Light(LightName.Light0, LightParameter.Position, new float[] { tx, ty, tz });

    GL.Begin(BeginMode.Lines);
    DrawBox(tx - 2, tx + 2, ty - 2, ty + 2, tz - 2, tz + 2);
    GL.End();
... Drawing of terrain is here
I used my quick DrawBox function to draw a simple box around the location of the light, and that worked fine; I even implemented some movement, such as a sun orbiting the earth. As long as the camera wasn't moved, this worked great. But once I rotated the camera, the lighting no longer showed what it should have. It seemed to have 'moved' the light without actually moving it: the drawn box stayed put, only the effect of the light moved.

According to the OpenGL spec for glLight with the GL_POSITION parameter:
GL_POSITION:
params contains four integer or floating-point values that specify the position of the light in homogeneous object coordinates. Both integer and floating-point values are mapped directly. Neither integer nor floating-point values are clamped.
The position is transformed by the modelview matrix when glLight is called (just as if it were a point), and it is stored in eye coordinates. If the w component of the position is 0, the light is treated as a directional source. Diffuse and specular lighting calculations take the light's direction, but not its actual position, into account, and attenuation is disabled. Otherwise, diffuse and specular lighting calculations are based on the actual location of the light in eye coordinates, and attenuation is enabled. The initial position is (0, 0, 1, 0); thus, the initial light source is directional, parallel to, and in the direction of the -z axis.
You only pass in three floats. I can't be certain what OpenTK uses as the fourth (w) component, but I would guess it sets any unspecified components to 0, thereby making your light directional and ignoring the position you give. I would try adding a 1.0 to the end of the array you pass in and see if that fixes things.
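A minimal sketch of that fix (the 1.0f is the homogeneous w component; everything else is unchanged from the snippet above):
// With w = 1.0f the light is positional. glLight transforms the position by the
// current modelview matrix, so keep this call after GL.LoadMatrix(ref view) to
// anchor the light in world space regardless of camera rotation.
GL.Light(LightName.Light0, LightParameter.Position, new float[] { tx, ty, tz, 1.0f });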


Finding the Final Position from Original Position + Offset and Rotation

I have my Building System almost finished; the problem showed up when I debugged my Physics.OverlapBox() using a Gizmos.DrawWireCube() with the code shown below:
void OnDrawGizmos()
{
    Gizmos.matrix = Matrix4x4.TRS(transform.position + boundingOffset, transform.rotation, boundingExtents);
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one);
}
I saw the problem, but I will show it in an image, since I am struggling to explain it.
I have thought of a solution, but I could not find the equations or what the method is called. Basically, there is an original vector to which an offset is added (if defined), and the final vector is then affected by the rotation. It is explained more in another image.
Maybe it is a trigonometry-based calculation. I hope you can figure out how to do it.
Thanks in advance.
OK, if I understand you correctly this time, the issue is that your offset only works as long as the transform isn't scaled or rotated.
The reason is that
transform.position + boundingOffset
uses plain world-space coordinates. As you can see, the offset from the orange dot to the red dot is exactly the same (in world coordinates) in both pictures.
What you want instead is an offset relative to the transform's rotation and scale. You can treat boundingOffset as a local-space offset and convert it to world space using TransformPoint, which
Transforms position from local space to world space.
So instead use
Collider[] colliders = Physics.OverlapBox(transform.TransformPoint(boundingOffset), boundingExtents / 2, blueprint.transform.rotation);
What it does is basically
transform.position
    + transform.right   * boundingOffset.x * transform.lossyScale.x
    + transform.up      * boundingOffset.y * transform.lossyScale.y
    + transform.forward * boundingOffset.z * transform.lossyScale.z;
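If you also want the debug gizmo to match the physics query, here is a minimal sketch under the same assumptions (same fields as in the question; OnDrawGizmos is Unity's gizmo callback):
void OnDrawGizmos()
{
    // Use the same TransformPoint-based center as the OverlapBox call,
    // so the wire cube and the query stay in agreement.
    Gizmos.matrix = Matrix4x4.TRS(transform.TransformPoint(boundingOffset), transform.rotation, boundingExtents);
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one);
}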
Alternatively
A bit more maintainable approach I often use in such a case is not providing boundingOffset via a hardcoded/field vector but instead simply using a child GameObject, e.g. called OffsetPivot, and placing it in the scene however you want. Since it is a child of your transform, it will always keep the correct offset when you rotate/scale/move the parent.
Then in the code I would simply do
[SerializeField] private Transform boundingOffsetPivot;
...
Collider[] colliders = Physics.OverlapBox(boundingOffsetPivot.position, boundingExtents / 2, blueprint.transform.rotation);

3D data point to 2D data point

I'm using GDI+ to implement some simple graphics. I've taken the code from this example http://www.vcskicks.com/3d_gdiplus_drawing.php and can get it to do what I want, but I don't understand how it does the conversion from a 3D data point to a 2D data point:
// Convert 3D points to 2D
Math3D.Point3D vec;
for (int i = 0; i < point3D.Length; i++)
{
    vec = cubePoints[i];
    if (vec.Z - camera1.Position.Z >= 0)
    {
        point3D[i].X = (int)((double)-(vec.X - camera1.Position.X) / (-0.1f) * zoom) + drawOrigin.X;
        point3D[i].Y = (int)((double)(vec.Y - camera1.Position.Y) / (-0.1f) * zoom) + drawOrigin.Y;
    }
    else
    {
        tmpOrigin.X = (int)((double)(cubeOrigin.X - camera1.Position.X) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.X;
        tmpOrigin.Y = (int)((double)-(cubeOrigin.Y - camera1.Position.Y) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.Y;

        point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
        point3D[i].Y = (float)(-(vec.Y - camera1.Position.Y) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.Y);
        point3D[i].X = (int)point3D[i].X;
        point3D[i].Y = (int)point3D[i].Y;
    }
}
I've found a couple of resources which discuss conversion from a 3d data point to a 2d one:
https://amycoders.org/tutorials/3dbasics.html
https://en.wikipedia.org/wiki/Isometric_projection
https://en.wikipedia.org/wiki/3D_projection
However none of these resources seem to detail the maths used in the above example.
I'd be really grateful if someone could point me at the derivation for the maths and/or explain how the above code works.
The article and code are a bit confusing, indeed. Before we start, let's make some modifications to the rest of the code; through these modifications, you will probably see more easily what is going on. Let's specify a static camera position. Instead of this weird formula:
double cameraZ = -(((anchorPoint.X - cubeOrigin.X) * zoom) / cubeOrigin.X) + anchorPoint.Z;
Let's just do this:
cameraZ = 200;
zoom = 100;
And after that, we keep
camera1.Position = new Math3D.Point3D(cubeOrigin.X, cubeOrigin.Y, cameraZ);
This positions the camera at a depth of 200, such that its x/y coordinates coincide with the cube center. I'll come back to the meaning of zoom.
The camera model uses a perspective projection and a right-handed coordinate system. That means the camera looks in the negative z-direction, and things that are far away appear smaller.
Let's take a closer look at the 3D->2D conversion code step by step:
if (vec.Z - camera1.Position.Z >= 0)
vec is the point that we want to project. A more intuitive way to write that would be:
if (vec.Z >= camera1.Position.Z)
So, this branch applies to all points that are behind the camera (remember that the camera looks into the negative z-direction). What happens in this branch is a bit hacky and has nothing to do with real projections. What you actually want to do is cut those points off (as they are not visible). Luckily, in the example, none of the points lie behind the camera, so we don't need to care about this. I'll come back to that later.
Let's continue to the else branch.
tmpOrigin = ...
This variable is not used anywhere, so we can ignore it.
point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
This is the actual projection (I will only consider the X part. The same goes for the Y part). Let's take a look at the individual parts:
vec.X - camera1.Position.X
This is the vector from the camera position to the point drawn. Everything left of the camera has a negative coordinate, everything right of the camera has a positive coordinate.
vec.Z - camera1.Position.Z
This is the negative depth of the point with respect to the camera. It is not clear why the negative depth is used here; it gives you a mirrored image. What you actually want (since the camera looks into the negative z-direction) is
camera1.Position.Z - vec.Z
Then,
(vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z)
is the perspective divide. The difference vector is scaled by its inverse depth (i.e. far objects become smaller).
* zoom
This scales the image from world space (which is very small) to image space (converting world units to pixels). The factor is kind of arbitrary (that's why we just specified 100). More involved camera models use a field of view.
+ drawOrigin.X
And finally, we align the camera center with drawOrigin. Remember that points left of the camera have a negative coordinate; with this offset they end up with a positive pixel coordinate (but still left of drawOrigin).
point3D[i].X = (int)point3D[i].X;
This is just a cast to int.
For the y-coordinate, there is an additional minus sign. This turns the y-axis around (in the pixel coordinate system of the image, the y-axis points downwards).
Let's go back to the hacky if branch. You see that the formula is exactly the same, except that the part that held the negative depth of the point now has the constant (-0.1f). So these points are treated as having a constant depth of 0.1. Pretty dubious and far from an actual projection.
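Putting the pieces together, here is a minimal sketch of the projection this code implements, with the depth sign fixed as described above (the function name ProjectToScreen and the nullable return are illustrative, not from the article):
// Projects a 3D point to pixel coordinates; the camera looks down the negative z-axis.
static PointF? ProjectToScreen(Math3D.Point3D p, Math3D.Point3D cam, double zoom, Point drawOrigin)
{
    double depth = cam.Z - p.Z;   // positive for points in front of the camera
    if (depth <= 0)
        return null;              // behind the camera: cull instead of faking a depth

    float x = (float)((p.X - cam.X) / depth * zoom) + drawOrigin.X;
    float y = (float)(-(p.Y - cam.Y) / depth * zoom) + drawOrigin.Y;  // minus flips y for pixel coordinates
    return new PointF(x, y);
}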
And that's basically it. One more note: the article has a section about gimbal lock. Thing is, the properties of matrix multiplication described there have nothing to do with gimbal lock. So don't rely on that article too much. It's a nice practical application, but it has quite a few flaws.

WorldToScreen function C#

Can anyone please explain the world to screen function (C#) to me?
I want to understand what it does, how to use it, and what I need in order to use it.
So far I know that it converts 3D vectors to 2D vectors, which can then be shown on the monitor.
Now I want to know the steps needed to show a 3D point as a drawing on the screen.
Thx in advance :)
I am going to assume that your desired screen space runs from top to bottom so that {0, 0} is top left and {screenWidth, screenHeight} is bottom right. I am also going to assume that normalized device coordinates are in the range of [{-1, -1, 0}, {1, 1, 1}].
The first thing you want to do is convert your 3D world coordinates into what are known as Normalized Device Coordinates (NDC for short). This is an intermediate step that simplifies the mapping to screen space.
Normalized Device Coordinates are used to measure relative positions on the monitor. These are not pixel coordinates. The center of the screen is always 0, the top is always 1 and the bottom is always -1. The left side is always -1 and the right side is always 1.
Knowing the view and projection matrices, the transformation is as follows:
Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
To understand how this transformation works, you could take a look at the source code of Paradox Game Engine for example.
Once the coordinate is normalized, you want to map it into Screen Space:
To find the X coordinate, move it from the range [-1, 1] to [0, 2] by adding 1, then divide by 2 to move it from [0, 2] to [0, 1]. Multiply that by the screen width and you have the X coordinate in screen space.
float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
Finding the Y coordinate is similar, but instead of adding 1 you subtract pointInNdc.Y from 1, which flips the coordinate upside down (moving from Y running bottom-to-top to Y running top-to-bottom):
float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
There you have both X and Y coordinates of Screen Space. There are many great articles out there (such as this one) which describe this exact same process and also how to go back from ScreenToWorld.
Here is the full listing of WorldToScreen:
Vector2 WorldToScreen(Vector3 pointInWorld, Matrix view, Matrix projection, int screenWidth, int screenHeight)
{
    Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
    float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
    float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
    return new Vector2(screenX, screenY);
}
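A hypothetical usage example (the viewMatrix/projectionMatrix names and the 800x600 resolution are assumptions, not from the question):
// Project a world-space point one unit above the origin onto an 800x600 screen.
Vector2 screenPos = WorldToScreen(new Vector3(0, 1, 0), viewMatrix, projectionMatrix, 800, 600);
One caveat worth checking in your own math library: some Vector3.Transform overloads do not perform the homogeneous divide, in which case you would transform to a Vector4 and divide X and Y by W before the screen-space mapping.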

Vector/Angle math

I have two objects in a game, which for this purpose can be considered points on a 2d plane, but I use Vector3s because the game itself is 3d.
I have a game camera which I want to align perpendicularly (also on the plane) to the two objects, so that they are both in view of the camera. Due to the nature of the game, the objects could be in any imaginable configuration of positions, so the directional vector between them could have any direction.
Part 1: How do I get the perpendicular angle from the two positional vectors?
I have:
Vector3 object1Position; // x and z are relevant
Vector3 object2Position;
I need:
float cameraEulerAngleY;
Part 2: Now, because of the way the game's assets are modelled, I want to only allow the camera to view within a 180 degree 'cone'. So if the camera passes a certain point, it should use the exact opposite of the position the above math might produce.
An image is attached of what I need, the circles are the objects, the box is the camera.
I hope this post is clear and you guys won't burn me alive for being total rubbish at vector math :P
greetings,
Draknir
You'll need to specify a distance from the object line, and an up vector:
Vector3 center = 0.5f * (object1position + object2position);
Vector3 vec12 = object2position - object1position;
Vector3 normal = Vector3.Cross(vec12, up);
normal.Normalize();
Vector3 offset = distance * normal;
Vector3 cameraA = center + offset;
Vector3 cameraB = center - offset;
< choose which camera position you want >
Instead of using Euler angles, you should probably use something like LookAt() to orient your camera.
Assuming Y is always 0 (you mentioned X and Z are your relevant components), you can use some 2D math for this:
1. Find any perpendicular vector (there are two). You can get this by calculating the difference between the two positions, swapping its components, and negating one of them.
Vector3 difference = object1Position - object2Position;
Vector3 perpendicular = new Vector3(difference.z, 0, -difference.x);
2. Using your separating plane's normal, flip the direction of your new vector if it's pointing opposite of intended.
Vector3 separatingPlaneNormal = ...; // down?
if (Vector3.Dot(separatingPlaneNormal, perpendicular) < 0)
{
    perpendicular = -perpendicular;
}
// done.
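To get the float cameraEulerAngleY the question asks for, a hedged sketch (assuming Unity's Mathf and the convention that a yaw of 0 faces +Z) derives it from the perpendicular vector with Atan2, which handles all quadrants:
// Yaw (in degrees) that points the camera's forward axis along 'perpendicular'.
float cameraEulerAngleY = Mathf.Atan2(perpendicular.x, perpendicular.z) * Mathf.Rad2Deg;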
Well, for the first bit, if you have points (x1, y1) and (x2, y2) describing the positions of your objects, just think of it in terms of triangles. The angle you're looking for ought to be described by
arctan((y2 - y1) / (x2 - x1)) + 90
(using atan2(y2 - y1, x2 - x1) instead avoids the undefined case when x2 = x1). I don't completely understand what you want to do with the second part, though.

Get screen space coordinates of specific vertices of a 3D model when visible

I'd like to render a model and then, if particular vertices (how would I mark them?) are visible, render something 2D where they are.
How would I go about doing this?
First of all, you need your vertex position as a Vector4 (perspective projections require the use of homogeneous coordinates; set W = 1). I'll assume you know which vertex you want and how to get its position. Its position will be in model space.
Now simply transform that point into projection space by multiplying it by your World-View-Projection matrix:
Vector4 position = new Vector4(/* your position as a Vector3 */, 1);
IEffectMatrices e = /* a BasicEffect or whatever you are using to render with */
Matrix worldViewProjection = e.World * e.View * e.Projection;
Vector4 result = Vector4.Transform(position, worldViewProjection);
result /= result.W;
Now your result will be in projection space, which is (-1,-1) in the bottom left corner of the screen, and (1,1) in the top right corner. If you want to get your position in client space (which is what SpriteBatch uses), then simply transform it using the inverse of a matrix that matches the implicit View-Projection matrix used by SpriteBatch.
Viewport vp = GraphicsDevice.Viewport;
Matrix invClient = Matrix.Invert(Matrix.CreateOrthographicOffCenter(0, vp.Width, vp.Height, 0, -1, 1));
Vector2 clientResult = Vector2.Transform(new Vector2(result.X, result.Y), invClient);
Disclaimer: I haven't tested any of this code.
(Obviously, to check if a particular vertex is visible or not, simply check if it is in the (-1,-1) to (1,1) range in projection space.)
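A minimal sketch of that visibility check, continuing from the result computed above (the 0-to-1 z range is the DirectX/XNA projection convention):
// The vertex is on screen if its projection-space coordinates fall inside the unit volume.
bool visible = result.X >= -1 && result.X <= 1
            && result.Y >= -1 && result.Y <= 1
            && result.Z >= 0 && result.Z <= 1;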
Take a look at the occlusion culling features of your engine. For XNA, you can consult the framework guide (with samples) here.
http://roecode.wordpress.com/2008/02/18/xna-framework-gameengine-development-part-13-occlusion-culling-and-frustum-culling/
Probably the best way to do this is with a BoundingFrustum. It's basically a box that expands outward in one direction, similar to how a player's camera works; the shape it makes is a truncated pyramid. Then you can check whether the given point is contained in the BoundingFrustum, and if it is, render your object.
Example:
// A view frustum almost always is initialized with ViewMatrix * ProjectionMatrix.
BoundingFrustum viewFrustum = new BoundingFrustum(ActivePlayer.ViewMatrix * ProjectionMatrix);

// Check every entity in the game to see if it collides with the frustum.
foreach (Entity sourceEntity in entityList)
{
    // Create a collision sphere at the entity's location. Collision spheres have a
    // relative location to their entity, and this translates them to a world location.
    BoundingSphere sourceSphere = new BoundingSphere(sourceEntity.Position,
        sourceEntity.Model.Meshes[0].BoundingSphere.Radius);

    // Check if the entity is in viewing range; if not, it is not drawn.
    if (viewFrustum.Intersects(sourceSphere))
        sourceEntity.Draw(gameTime);
}
The example is actually for culling all objects in the game, but it can be pretty easily modified to handle what you want to do.
Example source: http://nfostergames.com/Lessons/SimpleViewCulling.htm
To get your world coordinates into screen space, take a look at Viewport.Project.
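A minimal sketch of that approach (assuming XNA's Viewport.Project signature and an identity world matrix):
// X and Y of the result are pixel coordinates; Z is the depth in [0, 1].
Vector3 screen = GraphicsDevice.Viewport.Project(worldPosition, projectionMatrix, viewMatrix, Matrix.Identity);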
