OpenTK Oblique Frustum (Lens Shift) - C#

I am very new to OpenGL and am using the latest version of OpenTK with C#.
My camera class currently does the following:
public Matrix4 GetProjectionMatrix()
{
return Matrix4.CreatePerspectiveFieldOfView(_fov, AspectRatio, 0.01f, 100f);
}
public Matrix4 GetViewMatrix()
{
Vector3 lookAt = new Vector3(myObject.Pos.X, myObject.Pos.Y, myObject.Pos.Z);
return Matrix4.LookAt(Position, lookAt, _up);
}
I have a slightly weird use case: my game window will be long, something like a 4:12 aspect ratio, and it will present a long object. From my reading online, the best way to present this the way I want is a lens shift (oblique frustum).
I've seen articles online on how to do this, namely:
http://www.terathon.com/code/oblique.html
https://docs.unity3d.com/Manual/ObliqueFrustum.html
But I am having trouble translating this to OpenTK.
I was wondering if anyone here has done something similar in OpenTK.
EDIT:
This kind of worked, but not quite what I was looking for :(
private float sgn(float a)
{
if (a > 0.0F) return (1.0F);
if (a < 0.0F) return (-1.0F);
return (0.0F);
}
public Matrix4 CreatePerspectiveFieldOfView(Matrix4 projectionMatrix)
{
Vector4 clipPlane = new Vector4(0.0f, 0.7f, 1.0f , 1.0f);
Vector4 q = new Vector4
{
X = (sgn(clipPlane.X) + projectionMatrix.M13) / projectionMatrix.M11,
Y = (sgn(clipPlane.Y) + projectionMatrix.M23) / projectionMatrix.M22,
Z = -1.0f,
W = (1.0F + projectionMatrix.M33) / projectionMatrix.M34
};
Vector4 c = clipPlane * (2.0F / Vector4.Dot(clipPlane, q));
projectionMatrix.M31 = c.X;
projectionMatrix.M32 = c.Y;
projectionMatrix.M33 = c.Z + 1.0f;
projectionMatrix.M34 = c.W;
return projectionMatrix;
}
EDIT 2:
Basically, what I am looking to do is bring the look-at point closer to the edge of the frustum, like so:

There are some obvious issues. OpenGL matrices are column-major. Hence, for the Mij properties of Matrix4, i is the column and j is the row:
In the following, columns run from top to bottom and rows from left to right, because that is the layout of the matrix fields in memory and how the matrix "looks" in the debugger:
         row1 row2 row3 row4      indices
column1 (M11, M12, M13, M14)  ( 0,  1,  2,  3)
column2 (M21, M22, M23, M24)  ( 4,  5,  6,  7)
column3 (M31, M32, M33, M34)  ( 8,  9, 10, 11)
column4 (M41, M42, M43, M44)  (12, 13, 14, 15)
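A quick way to double-check that translation rule (this helper is mine, not part of OpenTK): map a flat 0-based OpenGL array index to the corresponding 1-based Mij field name.

```csharp
using System;

// Hypothetical helper: flat 0-based OpenGL matrix index -> Matrix4 Mij name.
// Each group of four consecutive floats forms one M<i>* group of Matrix4.
static string FlatIndexToField(int i)
{
    int first = i / 4 + 1;  // first subscript of Mij
    int second = i % 4 + 1; // second subscript of Mij
    return $"M{first}{second}";
}

Console.WriteLine(FlatIndexToField(8));  // M31 (matches matrix[8] above)
Console.WriteLine(FlatIndexToField(14)); // M43 (matches matrix[14] above)
```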
Thus
q.x = (sgn(clipPlane.x) + matrix[8]) / matrix[0];
q.y = (sgn(clipPlane.y) + matrix[9]) / matrix[5];
q.z = -1.0F;
q.w = (1.0F + matrix[10]) / matrix[14];
has to be translated to:
Vector4 q = new Vector4
{
X = (sgn(clipPlane.X) + projectionMatrix.M31) / projectionMatrix.M11,
Y = (sgn(clipPlane.Y) + projectionMatrix.M32) / projectionMatrix.M22,
Z = -1.0f,
W = (1.0f + projectionMatrix.M33) / projectionMatrix.M43
};
and
matrix[2] = c.x;
matrix[6] = c.y;
matrix[10] = c.z + 1.0F;
matrix[14] = c.w;
has to be
projectionMatrix.M13 = c.X;
projectionMatrix.M23 = c.Y;
projectionMatrix.M33 = c.Z + 1.0f;
projectionMatrix.M43 = c.W;
If you want an asymmetric perspective projection, then consider creating the projection matrix with Matrix4.CreatePerspectiveOffCenter.
public Matrix4 GetProjectionMatrix()
{
var offset_x = -0.0f;
var offset_y = -0.005f; // just for instance
var tan_fov_2 = (float)Math.Tan(_fov / 2.0); // cast needed: CreatePerspectiveOffCenter takes floats
var near = 0.01f;
var far = 100.0f;
var left = -near * AspectRatio * tan_fov_2 + offset_x;
var right = near * AspectRatio * tan_fov_2 + offset_x;
var bottom = -near * tan_fov_2 + offset_y;
var top = near * tan_fov_2 + offset_y;
return Matrix4.CreatePerspectiveOffCenter(left, right, bottom, top, near, far);
}
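To see what the offsets do without any OpenTK calls, here is the same bound computation as plain math (the numbers are just example values): the near-plane window keeps its size and only its centre moves by the offset, which is exactly the lens-shift effect.

```csharp
using System;

// Same bound computation as above, as plain math (example values).
float fov = MathF.PI / 3f;      // 60 degrees
float aspect = 4f / 12f;        // the long 4:12 window
float near = 0.01f;
float offsetX = 0f, offsetY = -0.005f;
float tanFov2 = MathF.Tan(fov / 2f);

float left   = -near * aspect * tanFov2 + offsetX;
float right  =  near * aspect * tanFov2 + offsetX;
float bottom = -near * tanFov2 + offsetY;
float top    =  near * tanFov2 + offsetY;

// The window size is unchanged; its centre sits exactly at the offsets.
Console.WriteLine((right - left, top - bottom));
Console.WriteLine(((left + right) / 2f, (bottom + top) / 2f));
```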

Related

Unity: detect click event on UIVertex

I am drawing lines on a canvas using the 'UIVertex' struct and I would like to be able to detect click events on the lines I have drawn.
Here is how I draw lines (largely inspired by this tutorial => https://www.youtube.com/watch?v=--LB7URk60A):
void DrawVerticesForPoint(Vector2 point, float angle, VertexHelper vh)
{
vertex = UIVertex.simpleVert;
//vertex.color = Color.red;
vertex.position = Quaternion.Euler(0, 0, angle) * new Vector3(-thickness / 2, 0);
vertex.position += new Vector3(unitWidth * point.x, unitHeight * point.y);
vh.AddVert(vertex);
vertex.position = Quaternion.Euler(0, 0, angle) * new Vector3(thickness / 2, 0);
vertex.position += new Vector3(unitWidth * point.x, unitHeight * point.y);
vh.AddVert(vertex);
}
Any idea?
Here is the solution I have found thanks to this post:
public bool PointIsOnLine(Vector3 point, UILineRenderer line)
{
Vector3 point1 = line.points[0];
Vector3 point2 = line.points[1];
var dirNorm = (point2 - point1).normalized;
var t = Vector2.Dot(point - point1, dirNorm);
var tClamped = Mathf.Clamp(t, 0, (point2 - point1).magnitude);
var closestPoint = point1 + dirNorm * tClamped;
var dist = Vector2.Distance(point, closestPoint);
if(dist < line.thickness / 2)
{
return true;
}
return false;
}
The UILineRenderer class is the class I have which represents my lines.
line.points[0] and line.points[1] contain the coordinates of the two points which determine the line length and position. line.thickness is the... thickness of the line :O
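For reference, here is a self-contained version of the same closest-point-on-segment test using System.Numerics instead of Unity types (the names and the thickness parameter are mine):

```csharp
using System;
using System.Numerics;

// Self-contained version of the same test (System.Numerics instead of
// Unity types; the line thickness is passed in as a parameter here).
static bool PointIsOnSegment(Vector2 p, Vector2 a, Vector2 b, float thickness)
{
    Vector2 dir = b - a;
    float len = dir.Length();
    Vector2 dirNorm = dir / len;
    // Project p onto the segment and clamp to its endpoints.
    float t = Math.Clamp(Vector2.Dot(p - a, dirNorm), 0f, len);
    Vector2 closest = a + dirNorm * t;
    return Vector2.Distance(p, closest) < thickness / 2f;
}

Vector2 p1 = new Vector2(0, 0), p2 = new Vector2(10, 0);
Console.WriteLine(PointIsOnSegment(new Vector2(5, 0.1f), p1, p2, 1f)); // True:  0.1 < 0.5
Console.WriteLine(PointIsOnSegment(new Vector2(5, 2f),   p1, p2, 1f)); // False: 2 > 0.5
```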

Calculating polygon vertices with an angle produce the shape wrong size

When I call my function with startingAngle=0 it produces a good shape with the correct size.
Example:
var points = GetPolygonVertices(sides:4, radius:5, center:(5, 5), startingAngle:0), produces:
points[0] = {X = 10 Y = 5}
points[1] = {X = 5 Y = 0}
points[2] = {X = 0 Y = 5}
points[3] = {X = 5 Y = 10}
As observed, the side length is 10px, which is correct, but it produces a square rotated 45º from the viewer's perspective.
To fix this I added a switch/case to offset startingAngle so the square sits at the correct angle for the eye, rotating it by 45º. The rotation works, but the shape is no longer a 10x10px square; instead I lose 1 to 2px from the sides:
[0] = {X = 9 Y = 1}
[1] = {X = 1 Y = 1}
[2] = {X = 1 Y = 9}
[3] = {X = 9 Y = 9}
and it becomes worse as the radius grows, for example with radius=10:
[0] = {X = 17 Y = 3}
[1] = {X = 3 Y = 3}
[2] = {X = 3 Y = 17}
[3] = {X = 17 Y = 17}
I tried both floor and ceil instead of round, but it always ends up losing 1 or 2px...
Is there a way to improve the function so the shape keeps the same size no matter the number of sides and the rotation angle?
My function:
public static Point[] GetPolygonVertices(int sides, int radius, Point center, double startingAngle = 0)
{
if (sides < 3)
throw new ArgumentException("Polygons can't have less than 3 sides...", nameof(sides));
// Fix rotation
switch (sides)
{
case 3:
startingAngle += 90;
break;
case 4:
startingAngle += 45;
break;
case 5:
startingAngle += 22.5;
break;
}
var points = new Point[sides];
var step = 360.0 / sides;
int i = 0;
for (var angle = startingAngle; angle < startingAngle + 360.0; angle += step) //go in a circle
{
if (i == sides) break; // Fix floating problem
double radians = angle * Math.PI / 180.0;
points[i++] = new(
(int) Math.Round(Math.Cos(radians) * radius + center.X),
(int) Math.Round(Math.Sin(-radians) * radius + center.Y)
);
}
return points;
}
EDIT: I updated the function to get rid of the switch statement and produce shapes in the correct orientation when no angle is given. It still suffers from the same "problem":
public static Point[] GetPolygonVertices(int sides, int radius, Point center, double startingAngle = 0, bool flipHorizontally = false, bool flipVertically = false)
{
if (sides < 3)
throw new ArgumentException("Polygons can't have less than 3 sides...", nameof(sides));
var vertices = new Point[sides];
double deg = 360.0 / sides;//calculate the rotation angle
var rad = Math.PI / 180.0;
var x0 = center.X + radius * Math.Cos(-(((180 - deg) / 2) + startingAngle) * rad);
var y0 = center.Y - radius * Math.Sin(-(((180 - deg) / 2) + startingAngle) * rad);
var x1 = center.X + radius * Math.Cos(-(((180 - deg) / 2) + deg + startingAngle) * rad);
var y1 = center.Y - radius * Math.Sin(-(((180 - deg) / 2) + deg + startingAngle) * rad);
vertices[0] = new(
(int) Math.Round(x0),
(int) Math.Round(y0)
);
vertices[1] = new(
(int) Math.Round(x1),
(int) Math.Round(y1)
);
for (int i = 0; i < sides - 2; i++)
{
double dsinrot = Math.Sin((deg * (i + 1)) * rad);
double dcosrot = Math.Cos((deg * (i + 1)) * rad);
vertices[i + 2] = new(
(int)Math.Round(center.X + dcosrot * (x1 - center.X) - dsinrot * (y1 - center.Y)),
(int)Math.Round(center.Y + dsinrot * (x1 - center.X) + dcosrot * (y1 - center.Y))
);
}
if (flipHorizontally)
{
var startX = center.X - radius;
var endX = center.X + radius;
for (int i = 0; i < sides; i++)
{
vertices[i].X = endX - (vertices[i].X - startX);
}
}
if (flipVertically)
{
var startY = center.Y - radius;
var endY = center.Y + radius;
for (int i = 0; i < sides; i++)
{
vertices[i].Y = endY - (vertices[i].Y - startY);
}
}
return vertices;
}
EDIT 2: From Tim Roberts' answer, here are the functions to calculate the side length from the radius and the radius from the side length, which solves my problem. Thanks!
public static double CalculatePolygonSideLengthFromRadius(double radius, int sides)
{
return 2 * radius * Math.Sin(Math.PI / sides);
}
public static double CalculatePolygonVerticalLengthFromRadius(double radius, int sides)
{
return radius * Math.Cos(Math.PI / sides);
}
public static double CalculatePolygonRadiusFromSideLength(double length, int sides)
{
var theta = 360.0 / sides;
return length / (2 * Math.Cos((90 - theta / 2) * Math.PI / 180.0));
}
Your problem is one of mathematics. You said "As observed, the side length is 10px". It very definitely is not 10px. The distance from (10,5) to (5,0) is sqrt(5*5 + 5*5), which is 7.07. That's exactly what we expect for a square that is inscribed in a circle of radius 5: 5 x sqrt(2).
And that's what the other squares are as well.
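That arithmetic generalizes: a regular N-gon inscribed in a circle of radius R has side length 2·R·sin(π/N), which for the square of radius 5 gives 5·√2 ≈ 7.07.

```csharp
using System;

// Side length of a regular N-gon inscribed in a circle of radius R.
static double SideLength(double radius, int sides) =>
    2 * radius * Math.Sin(Math.PI / sides);

Console.WriteLine(SideLength(5, 4)); // ≈ 7.0710678, i.e. 5 * sqrt(2)
```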
FOLLOWUP
As an added bonus, here is a function that returns the radius of the circle that circumscribes a regular polygon with N sides of length L:
import math

def rad(length, nsides):
    theta = 360 / nsides
    r = length / (2 * math.cos((90 - theta / 2) * math.pi / 180))
    return r

for s in range(3, 9):
    print(s, rad(10, s))

C# - Use of compute shaders

I'm trying to implement, using SharpDX11, a ray/mesh intersection method using the GPU. I've seen from an older post (Older post) that this can be done using a compute shader, but I need help to create and define the buffers outside the .hlsl code.
My HLSL code is the following:
struct rayHit
{
float3 intersection;
};
cbuffer cbRaySettings : register(b0)
{
float3 rayFrom;
float3 rayDir;
uint TriangleCount;
};
StructuredBuffer<float3> positionBuffer : register(t0);
StructuredBuffer<uint3> indexBuffer : register(t1);
AppendStructuredBuffer<rayHit> appendRayHitBuffer : register(u0);
void TestTriangle(float3 p1, float3 p2, float3 p3, out bool hit, out float3 intersection)
{
//Perform ray/triangle intersection
//Compute vectors along two edges of the triangle.
float3 edge1, edge2;
float distance;
//Edge 1
edge1.x = p2.x - p1.x;
edge1.y = p2.y - p1.y;
edge1.z = p2.z - p1.z;
//Edge2
edge2.x = p3.x - p1.x;
edge2.y = p3.y - p1.y;
edge2.z = p3.z - p1.z;
//Cross product of ray direction and edge2 - first part of determinant.
float3 directioncrossedge2;
directioncrossedge2.x = (rayDir.y * edge2.z) - (rayDir.z * edge2.y);
directioncrossedge2.y = (rayDir.z * edge2.x) - (rayDir.x * edge2.z);
directioncrossedge2.z = (rayDir.x * edge2.y) - (rayDir.y * edge2.x);
//Compute the determinant.
float determinant;
//Dot product of edge1 and the first part of determinant.
determinant = (edge1.x * directioncrossedge2.x) + (edge1.y * directioncrossedge2.y) + (edge1.z * directioncrossedge2.z);
//If the ray is parallel to the triangle plane, there is no collision.
//This also means that we are not culling, the ray may hit both the
//back and the front of the triangle.
if (determinant == 0)
{
distance = 0.0f;
intersection = float3(0, 0, 0);
hit = false;
return; // without this, execution falls through and the final `hit = true` wins
}
float inversedeterminant = 1.0f / determinant;
//Calculate the U parameter of the intersection point.
float3 distanceVector;
distanceVector.x = rayFrom.x - p1.x;
distanceVector.y = rayFrom.y - p1.y;
distanceVector.z = rayFrom.z - p1.z;
float triangleU;
triangleU = (distanceVector.x * directioncrossedge2.x) + (distanceVector.y * directioncrossedge2.y) + (distanceVector.z * directioncrossedge2.z);
triangleU = triangleU * inversedeterminant;
//Make sure it is inside the triangle.
if (triangleU < 0.0f || triangleU > 1.0f)
{
distance = 0.0f;
intersection = float3(0, 0, 0);
hit = false;
return;
}
//Calculate the V parameter of the intersection point.
float3 distancecrossedge1;
distancecrossedge1.x = (distanceVector.y * edge1.z) - (distanceVector.z * edge1.y);
distancecrossedge1.y = (distanceVector.z * edge1.x) - (distanceVector.x * edge1.z);
distancecrossedge1.z = (distanceVector.x * edge1.y) - (distanceVector.y * edge1.x);
float triangleV;
triangleV = ((rayDir.x * distancecrossedge1.x) + (rayDir.y * distancecrossedge1.y)) + (rayDir.z * distancecrossedge1.z);
triangleV = triangleV * inversedeterminant;
//Make sure it is inside the triangle.
if (triangleV < 0.0f || triangleU + triangleV > 1.0f)
{
distance = 0.0f;
intersection = float3(0, 0, 0);
hit = false;
return;
}
//Compute the distance along the ray to the triangle.
float raydistance;
raydistance = (edge2.x * distancecrossedge1.x) + (edge2.y * distancecrossedge1.y) + (edge2.z * distancecrossedge1.z);
raydistance = raydistance * inversedeterminant;
//Is the triangle behind the ray origin?
if (raydistance < 0.0f)
{
distance = 0.0f;
intersection = float3(0, 0, 0);
hit = false;
return;
}
intersection = rayFrom + (rayDir * raydistance); // use the computed ray distance, not `distance`
hit = true;
}
[numthreads(64, 1, 1)]
void CS_RayAppend(uint3 tid : SV_DispatchThreadID)
{
if (tid.x >= TriangleCount)
return;
uint3 indices = indexBuffer[tid.x];
float3 p1 = positionBuffer[indices.x];
float3 p2 = positionBuffer[indices.y];
float3 p3 = positionBuffer[indices.z];
bool hit;
float3 p;
TestTriangle(p1, p2, p3, hit, p);
if (hit)
{
rayHit hitData;
hitData.intersection = p;
appendRayHitBuffer.Append(hitData);
}
}
The following is part of my C# implementation, but I'm not able to understand how to load the buffers for the compute shader.
int count = obj.Mesh.Triangles.Count;
int size = 8; //int+float for every hit
BufferDescription bufferDesc = new BufferDescription() {
BindFlags = BindFlags.UnorderedAccess | BindFlags.ShaderResource,
Usage = ResourceUsage.Default,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.BufferStructured,
StructureByteStride = size,
SizeInBytes = size * count
};
Buffer buffer = new Buffer(device, bufferDesc);
UnorderedAccessViewDescription uavDescription = new UnorderedAccessViewDescription() {
Buffer = new UnorderedAccessViewDescription.BufferResource() { FirstElement = 0, Flags = UnorderedAccessViewBufferFlags.None, ElementCount = count },
Format = SharpDX.DXGI.Format.Unknown,
Dimension = UnorderedAccessViewDimension.Buffer
};
UnorderedAccessView uav = new UnorderedAccessView(device, buffer, uavDescription);
context.ComputeShader.SetUnorderedAccessView(0, uav);
var code = HLSLCompiler.CompileFromFile(@"Shaders\TestTriangle.hlsl", "CS_RayAppend", "cs_5_0");
ComputeShader _shader = new ComputeShader(device, code);
Buffer positionsBuffer = new Buffer(device, Utilities.SizeOf<Vector3>(), ResourceUsage.Default, BindFlags.None, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
context.UpdateSubresource(ref data, positionsBuffer);
context.ComputeShader.Set(_shader);
In my C# implementation I'm considering only one ray (with its origin and direction) and I would like to use the shader to check the intersection with all the triangles of the mesh. I'm already able to do that on the CPU, but for 20k+ triangles the computation takes too long, even though I'm already using parallel code.
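As a sanity check for the shader output, here is a CPU reference of the same ray/triangle (Möller–Trumbore) test in plain C# with System.Numerics. It is my own translation, so treat it as a sketch rather than the poster's code; it is handy for validating the compute shader on a small mesh.

```csharp
using System;
using System.Numerics;

// CPU reference of the Möller–Trumbore ray/triangle test (my sketch).
static bool RayTriangle(Vector3 orig, Vector3 dir,
                        Vector3 p1, Vector3 p2, Vector3 p3, out Vector3 hit)
{
    hit = default;
    Vector3 edge1 = p2 - p1, edge2 = p3 - p1;
    Vector3 pvec = Vector3.Cross(dir, edge2);
    float det = Vector3.Dot(edge1, pvec);
    if (MathF.Abs(det) < 1e-8f) return false;   // ray parallel to triangle plane
    float invDet = 1f / det;
    Vector3 tvec = orig - p1;
    float u = Vector3.Dot(tvec, pvec) * invDet; // barycentric U
    if (u < 0f || u > 1f) return false;
    Vector3 qvec = Vector3.Cross(tvec, edge1);
    float v = Vector3.Dot(dir, qvec) * invDet;  // barycentric V
    if (v < 0f || u + v > 1f) return false;
    float t = Vector3.Dot(edge2, qvec) * invDet;
    if (t < 0f) return false;                   // triangle behind the ray origin
    hit = orig + dir * t;
    return true;
}

// Ray straight down -Z into a triangle lying in the z = 0 plane.
bool ok = RayTriangle(new Vector3(0.25f, 0.25f, 1f), new Vector3(0, 0, -1f),
                      new Vector3(0, 0, 0), new Vector3(1, 0, 0), new Vector3(0, 1, 0),
                      out Vector3 p);
Console.WriteLine((ok, p)); // hit at (0.25, 0.25, 0)
```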

Player Coordinates into 2D Screen Coordinates

My project's main goal is to project a player character from a 3D environment onto a 2D screen. I already found some useful math online, but in the last four days I couldn't make it work. Every time I move the character it appears in random places and/or goes off screen. Sometimes it follows horizontally, but it almost never follows vertically.
My question is pretty simple: what am I doing wrong?
// A few numbers to play with:
// Case #1: myPos: 1104.031, 3505.031, -91.9875; myMousePos: 0, 180; myRotation: 153, 153, 25; playerPos: 1072, 3504, -91.9687 (Player Middle of Screen, Standing Behind Player)
// Case #2: myPos: 511.7656, 3549.25, -28.02344; myMousePos: 0, 347.5854; myRotation: 44, 2, 22; playerPos: 1632, 3232, -91.96875 (Player Middle of Screen, I stand higher and 1166 away)
// Case #3: myPos: 1105.523, 2898.336, -11.96875; myMousePos: 0, 58.67249; myRotation: 232, 184, 159; playerPos 1632, 3232, -91.96875 (Player Right, Lower Of Screen)
Vect3d viewAngles;
Vect3d vForward, vRight, vUpward;
float ScreenX, ScreenY;
float[] fov;
bool World3dToScreen2d(Vect3d playerpos, Vect3d mypos, PlayerData myself) // renamed: a C# identifier cannot start with a digit
{
fov = new float[2];
viewAngles = new Vect3d();
Vect3d vLocal, vTransForm;
vTransForm = new Vect3d();
vForward = new Vect3d();
vRight = new Vect3d();
vUpward = new Vect3d();
fov[0] = myself.MouseX; // Sky: -89, Ground: 89, Middle: 0
fov[1] = myself.MouseY; // 360 To 0
viewAngles.x = myself.Rotation.x;
viewAngles.y = myself.Rotation.y;
viewAngles.z = myself.Rotation.z;
int screenCenterX = 320; // 640
int screenCenterY = 240; // 480
AngleVectors();
vLocal = SubVectorDist(playerpos, mypos);
vTransForm.x = vLocal.dotproduct(vRight);
vTransForm.y = vLocal.dotproduct(vUpward);
vTransForm.z = vLocal.dotproduct(vForward);
if (vTransForm.z < 0.01)
return false;
ScreenX = screenCenterX + (screenCenterX / vTransForm.z * (1 / fov[0])) * vTransForm.x;
ScreenY = screenCenterY - (screenCenterY / vTransForm.z * (1 / fov[1])) * vTransForm.y;
return true;
}
Vect3d SubVectorDist(Vect3d playerFrom, Vect3d playerTo)
{
return new Vect3d(playerFrom.x - playerTo.x, playerFrom.y - playerTo.y, playerFrom.z - playerTo.z);
}
private void AngleVectors()
{
float angle;
float sr, sp, sy, cr, cp, cy,
cpi = (3.141f * 2 / 360);
angle = viewAngles.y * cpi;
sy = (float)Math.Sin(angle);
cy = (float)Math.Cos(angle);
angle = viewAngles.x * cpi;
sp = (float)Math.Sin(angle);
cp = (float)Math.Cos(angle);
angle = viewAngles.z * cpi;
sr = (float)Math.Sin(angle);
cr = (float)Math.Cos(angle);
vForward.x = cp * cy;
vForward.y = cp * sy;
vForward.z = -sp;
vRight.x = (-1 * sr * sp * cy + -1 * cr * -sy);
vRight.y = (-1 * sr * sp * sy + -1 * cr * cy);
vRight.z = -1 * sr * cp;
vUpward.x = (cr * sp * cy + -sr * -sy);
vUpward.y = (cr * sp * sy + -sr * cy);
vUpward.z = cr * cp;
}
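For comparison, here is my sketch of the standard pinhole projection of a camera-space point to screen coordinates (ignoring aspect ratio). Note that the scale factor is 1/tan(fov/2), not 1/fov or raw mouse angles as in the snippet above, which may be one source of the erratic positions.

```csharp
using System;

// My sketch: pinhole projection of a camera-space point (x right, y up,
// z = depth in front of the camera) to screen pixels, ignoring aspect
// ratio. The scale is 1 / tan(fov/2), not 1 / fov.
static (float X, float Y) Project(float x, float y, float z,
                                  float fovDegrees, int centerX, int centerY)
{
    float f = 1f / MathF.Tan(fovDegrees * MathF.PI / 360f); // 1 / tan(fov/2)
    return (centerX + centerX * (x / z) * f,
            centerY - centerY * (y / z) * f);
}

var onAxis = Project(0f, 0f, 10f, 90f, 320, 240);
Console.WriteLine(onAxis); // a point on the view axis lands at the screen centre
```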

GL Project doesn't work properly

I'm using code which tries to work like Glu.Project() since OpenTK doesn't support Glu.
Vector4 pos = new Vector4(s.Position.X, 0.0f, s.Position.Y, 1.0f);
Matrix4 mov = new Matrix4();
Matrix4 prj = new Matrix4();
Matrix4 mpj = new Matrix4();
float[] vp = new float[4];
GL.GetFloat(GetPName.ModelviewMatrix, out mov);
GL.GetFloat(GetPName.ProjectionMatrix, out prj);
GL.GetFloat(GetPName.Viewport, vp);
Matrix4.Mult(ref prj, ref mov, out mpj);
Vector4.Transform(ref pos, ref mpj, out pos);
// Final mathematics as described in OpenGL 2.1 Glu specs
s.set2DPos(new Vector2f( (vp[0] + (vp[2] * (pos.X + 1) / 2.0f)),
(vp[1] + (vp[3] * (pos.Y + 1) / 2.0f)) ));
// Final mathematics as described in OpenGL 3 Vector specs
s.set2DPos(new Vector2f( (view[2] / 2 * pos.X + view[0]),
(view[3] / 2 * pos.X + view[1]) ));
// Neither of them works, but the OpenGL 3 vector-specs version comes closer.
s is a class which primary exists as a model in 3D space at s.Position.
But the values I'm getting from this are astronomically far beyond the window boundaries.
The ModelView matrix from a breakpoint:
{(1, 0, 0, 0)
(0, 0.7071068, 0.7071068, 0)
(0, -0.7071068, 0.7071068, 0)
(0, -141.4214, -141.4214, 1)}
The Projection matrix from a breakpoint:
{(1.931371, 0, 0, 0)
(0, 2.414213, 0, 0)
(0, 0, -1.0002, -1)
(0, 0, -2.0002, 0)}
Am I doing something wrong or did I get something wrong? Am I missing something?
Vector2 GetScreenCoordinates(Vector3 ObjectCoordinate)
{
// ref: http://www.songho.ca/opengl/gl_transform.html
Vector4 obj = new Vector4(ObjectCoordinate.X, ObjectCoordinate.Y, ObjectCoordinate.Z, 1.0f);
Matrix4 projection = new Matrix4();
Matrix4 modelView = new Matrix4();
Vector4 viewPort = new Vector4();
GL.GetFloat(GetPName.ModelviewMatrix, out modelView);
GL.GetFloat(GetPName.ProjectionMatrix, out projection);
GL.GetFloat(GetPName.Viewport, out viewPort);
Vector4
eye = Vector4.Transform(obj, modelView),
clip = Vector4.Transform(eye, projection);
Vector3
ndc = new Vector3(clip.X / clip.W, clip.Y / clip.W, clip.Z / clip.W);
Vector2
w = new Vector2(viewPort.Z / 2 * ndc.X + viewPort.X + viewPort.Z / 2,
viewPort.W / 2 * ndc.Y + viewPort.Y + viewPort.W / 2);
return w;
}
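The last step of GetScreenCoordinates is the standard viewport transform. As plain math (my sketch): NDC coordinates in [-1, 1] map linearly into the viewport rectangle.

```csharp
using System;

// The viewport transform used above, as plain math: NDC in [-1, 1]
// maps linearly to window coordinates inside the viewport rectangle
// (vpX, vpY = lower-left corner; vpW, vpH = width and height).
static (float X, float Y) NdcToWindow(float ndcX, float ndcY,
                                      float vpX, float vpY, float vpW, float vpH)
{
    return (vpW / 2f * ndcX + vpX + vpW / 2f,
            vpH / 2f * ndcY + vpY + vpH / 2f);
}

Console.WriteLine(NdcToWindow(0f, 0f, 0f, 0f, 640f, 480f));   // centre: (320, 240)
Console.WriteLine(NdcToWindow(-1f, -1f, 0f, 0f, 640f, 480f)); // corner: (0, 0)
```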
Have you sanity checked the s.Position value before using it? What about the projection and transformation matrices you apply to the vector, are they sane looking?
I'm not familiar with OpenTK, but the mathematics prior to set2DPos() look sensible enough.
Here is how it works:
GL.GetFloat(GetPName.ModelviewMatrix, out model);
GL.GetFloat(GetPName.ProjectionMatrix, out proj);
GL.GetFloat(GetPName.Viewport, view);
Matrix4.Transpose(ref model, out model);
Matrix4.Transpose(ref proj, out proj);
Vector4 posa = new Vector4(0.0f, s.Position.Y, 1.0f, s.Position.X);
Vector4 posb = new Vector4(s.Position.Y, 1.0f, s.Position.X, 0.0f);
Vector4 posc = new Vector4(1.0f, s.Position.X, 0.0f, s.Position.Y);
Vector4 one = new Vector4(1.0f, 1.0f, 1.0f, 1.0f);
Matrix4 posv = new Matrix4(pos, posa, posb, posc);
Matrix4 ProjPos = Matrix4.Mult(Matrix4.Mult(proj, model), posv);
Matrix4.Transpose(ref ProjPos, out ProjPos);
Vector2f posout = new Vector2f(
(0 + (this.glc.Width * (ProjPos.Column0.X / ProjPos.Column0.W + 1.0f)) - (this.glc.Width / 2.0f)),
(0 + (this.glc.Height * (ProjPos.Column0.Y / ProjPos.Column0.W + 1.0f)) - (this.glc.Height / 2.0f))
);
In case anyone needs it :)
