Retrieving 3D cylinder parameters to create a Bounding Box - c#

I'm implementing a Kinect application in XNA.
I'm pretty new to 3D programming and I'd like to know how to retrieve parameters such as radius or height from a cylinder model in order to create a bounding box around it for collision detection.
My problem is that my cylinders' position and angle are synchronized with the position of the player's forearm in the Kinect's field of view, so I don't know how to define the bounding box parameters (Center, Min or Max values...).
Here is the code for my bounding box creation method:
private BoundingBox CalculateBoundingBox(Model model, Matrix worldTransform)
{
    // Initialize minimum and maximum corners of the bounding box to max and min values
    Vector3 min = new Vector3(float.MaxValue, float.MaxValue, float.MaxValue);
    Vector3 max = new Vector3(float.MinValue, float.MinValue, float.MinValue);
    // For each mesh of the model
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            // Vertex buffer parameters
            int vertexStride = meshPart.VertexBuffer.VertexDeclaration.VertexStride;
            int vertexBufferSize = meshPart.NumVertices * vertexStride;
            // Get vertex data as float
            float[] vertexData = new float[vertexBufferSize / sizeof(float)];
            meshPart.VertexBuffer.GetData<float>(vertexData);
            // Iterate through vertices (possibly) growing bounding box, all calculations are done in world space
            for (int i = 0; i < vertexBufferSize / sizeof(float); i += vertexStride / sizeof(float))
            {
                Vector3 transformedPosition = Vector3.Transform(new Vector3(vertexData[i], vertexData[i + 1], vertexData[i + 2]), worldTransform);
                min = Vector3.Min(min, transformedPosition);
                max = Vector3.Max(max, transformedPosition);
            }
        }
    }
    // Create and return bounding box
    return new BoundingBox(min, max);
}
Here is the code for my collision detection method (note that CalculateBoundingBox needs the world transform of each model, so I pass those in as well):
private bool isCollisionDetected(Model m1, Matrix world1, Model m2, Matrix world2)
{
    // Build a world-space bounding box for each model, then test for intersection.
    BoundingBox b1 = CalculateBoundingBox(m1, world1);
    BoundingBox b2 = CalculateBoundingBox(m2, world2);
    return b1.Intersects(b2);
}

Each time you create a transformedPosition, add it to a List<Vector3>.
Then use that list to create a BoundingBox with the built-in method BoundingBox.CreateFromPoints(myListOfTransformedPositions).
That method will return the correct min and max.
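For example, here is a minimal adaptation of the method from the question (my sketch of the change described above; the only differences are the list and the final CreateFromPoints call, everything else is taken from the original code):

private BoundingBox CalculateBoundingBox(Model model, Matrix worldTransform)
{
    // Collect every transformed vertex, then let XNA compute the extents.
    List<Vector3> transformedPositions = new List<Vector3>();
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            int vertexStride = meshPart.VertexBuffer.VertexDeclaration.VertexStride;
            int vertexBufferSize = meshPart.NumVertices * vertexStride;
            float[] vertexData = new float[vertexBufferSize / sizeof(float)];
            meshPart.VertexBuffer.GetData<float>(vertexData);
            for (int i = 0; i < vertexData.Length; i += vertexStride / sizeof(float))
            {
                Vector3 position = new Vector3(vertexData[i], vertexData[i + 1], vertexData[i + 2]);
                transformedPositions.Add(Vector3.Transform(position, worldTransform));
            }
        }
    }
    // CreateFromPoints computes the min/max corners for you.
    return BoundingBox.CreateFromPoints(transformedPositions);
}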

Finding the closest vertex below a certain height, from a start position

I have procedurally generated islands with lakes; it's basically a 3D mesh that has points above the water line and points below it. Any vertex below the water level is water, everything above it is solid ground.
From any point on the mesh I want to know the closest distance to this water.
What I ended up doing was creating an array of Vector2s that contains all the points on the mesh that are below the water level.
Next I cycle through these elements and compare them all to find the closest one to my selected point. I am using Vector2.Distance for this because I only want the distance in the XZ plane, not up/down (the Y component).
The problem is that for most points I select this works absolutely fine and gives correct results, but sometimes it picks a water point that is further away instead of the closest one, even though the closer water point is confirmed to be in the array of water points being compared.
Here is my code:
chunk.Vertices = new Vertice[totalVertices];
for (int i = 0, z = 0; z <= chunkSizeZ; z++)
{
    for (int x = 0; x <= chunkSizeX; x++, i++)
    {
        Vertice vert = new Vertice();
        vert.index = i;
        vert.position = new Vector3(chunkStartPosition.x + x,
                                    chunkStartPosition.y,
                                    chunkStartPosition.z + z);
        vert.centerPosition = new Vector3(vert.position.x + 0.5f,
                                          vert.position.y,
                                          vert.position.z + 0.5f);
        vert.centerPos2 = new Vector2(vert.position.x + 0.5f,
                                      vert.position.z + 0.5f);
        chunk.Vertices[i] = vert;
    }
}
Here we get all the water positions:
for (int i = 0; i < totalVertices; i++)
{
    if (chunk.Vertices[i].position.y > heightCorrection + tileColliderMinimumY)
    {
        worldVectorsClean.Add(chunk.Vertices[i].position);
        worldIndexClean.Add(chunk.Vertices[i].index);
    }
    else
    {
        worldVectorsWater.Add(chunk.Vertices[i].centerPos2);
    }
}
Every single tile then calls this function on the generator itself, but only after the whole map and all water points have been added. The generator keeps track of all water points across all chunks; otherwise each chunk would only compare its own water points, which doesn't work because water from another chunk can be closer but would never be compared.
public float CalculateDistanceToWater(Vector2 pos)
{
    var distance = 9001f;
    foreach (Vector2 waterVector in worldVectorsWater)
    {
        var thisDistance = Vector2.Distance(pos, waterVector);
        if (thisDistance < distance)
            distance = thisDistance;
    }
    return distance;
}
Finally, we call it from:
IEnumerator FindWater()
{
    yield return new WaitForSeconds(Random.Range(0.8f, 2.55f));
    var pos = new Vector2(transform.position.x, transform.position.z);
    distanceToWater = ChunkGenerator.instance.CalculateDistanceToWater(pos);
}
Looking forward to some help on this.

How to find minimum translation vector of rotated squares?

I've made a collision detection algorithm that can detect if rotated squares are colliding. I am struggling to understand what I should do to resolve these collisions. I think the first step is to calculate a minimum translation vector (MTV) that can separate the two squares.
I think the way to do this is by calculating the overlap of the projections of the squares on the axes that are being tested, and then use the length of the smallest overlap and the angle of that axis to form the MTV.
The problem is that my code doesn't detect collisions by comparing projections; instead it uses this code:
double DotProduct(Vector vector0, Vector vector1)
{
    return (vector0.X * vector1.X) + (vector0.Y * vector1.Y);
}

bool TestIfPointIsInFrontOfEdge(int edgeNormalIndex, int vertexToTestIndex, Box observerBox, Box observedBox)
{
    // if (v - a) · n > 0 then vertex is in front of edge
    // v is the vertex to test
    // a is a vertex on the edge that relates to the edge normal
    // n is the edge normal
    Vector v = new Vector(observedBox.vertices[vertexToTestIndex].X, observedBox.vertices[vertexToTestIndex].Y);
    Vector a = new Vector(observerBox.vertices[edgeNormalIndex].X, observerBox.vertices[edgeNormalIndex].Y);
    Vector n = observerBox.edgeNormals[edgeNormalIndex];
    Vector vMinusA = Vector.Subtract(v, a);
    double dotProduct = DotProduct(vMinusA, n);
    //Console.WriteLine(dotProduct);
    return dotProduct > 0;
}

bool TestIfAllPointsAreInFrontOfEdge(int edgeIndex, Box observerBox, Box observedBox)
{
    for (int i = 0; i < observedBox.vertices.Length; i++)
    {
        if (!TestIfPointIsInFrontOfEdge(edgeIndex, i, observerBox, observedBox))
        {
            return false;
        }
    }
    return true;
}

bool TestIfAllPointsAreInFrontOfAnyEdge(Box observerBox, Box observedBox)
{
    for (int i = 0; i < observerBox.edgeNormals.Length; i++)
    {
        if (TestIfAllPointsAreInFrontOfEdge(i, observerBox, observedBox))
            return true;
    }
    return false;
}

bool TestBoxOverlap(Box observerBox, Box observedBox)
{
    if (TestIfAllPointsAreInFrontOfAnyEdge(observerBox, observedBox) || TestIfAllPointsAreInFrontOfAnyEdge(observedBox, observerBox))
        return false;
    return true;
}
Each Box contains an array of four PointF objects which represent the vertices (Box.vertices). It also contains an array of four Vector objects, normalised (unit) vectors that represent the normals to each of the edges (Box.edgeNormals).
I then call this function for each box to check if there is a collision:
if (TestBoxOverlap(observerBox, observedBox))
{
    narrowPhaseCollisionList.Add(collision);
}
collision is a two-element array containing observerBox and observedBox.
So how do I calculate the MTV?
Also how do I apply it to the boxes?
Only translate one box with the MTV?
Translate each box away from each other with half the MTV?
Somehow weight how much of the MTV is applied to each box depending on some property (mass / velocity) of the boxes?
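For reference, here is a minimal sketch (not a drop-in answer) of the projection-overlap (SAT) approach described in the question, assuming Box.vertices is a PointF[4] and Box.edgeNormals holds normalised Vector values as described:

// A sketch only. Returns true and the minimum translation vector (to apply to
// observedBox) if the boxes overlap; returns false if a separating axis is found.
bool TryGetMtv(Box observerBox, Box observedBox, out Vector mtv)
{
    double smallestOverlap = double.MaxValue;
    Vector mtvAxis = new Vector(0, 0);

    // SAT for convex polygons: test the edge normals of both boxes as candidate axes.
    foreach (Box box in new[] { observerBox, observedBox })
    {
        foreach (Vector axis in box.edgeNormals)
        {
            // Project both boxes onto the axis and record their extents.
            double minA = double.MaxValue, maxA = double.MinValue;
            double minB = double.MaxValue, maxB = double.MinValue;
            foreach (PointF p in observerBox.vertices)
            {
                double d = p.X * axis.X + p.Y * axis.Y;
                minA = Math.Min(minA, d);
                maxA = Math.Max(maxA, d);
            }
            foreach (PointF p in observedBox.vertices)
            {
                double d = p.X * axis.X + p.Y * axis.Y;
                minB = Math.Min(minB, d);
                maxB = Math.Max(maxB, d);
            }

            double overlap = Math.Min(maxA, maxB) - Math.Max(minA, minB);
            if (overlap <= 0)
            {
                // Separating axis found: the boxes do not collide.
                mtv = new Vector(0, 0);
                return false;
            }
            if (overlap < smallestOverlap)
            {
                smallestOverlap = overlap;
                mtvAxis = axis;
            }
        }
    }

    // Orient the MTV so that it pushes observedBox away from observerBox
    // (the vertex sums are 4x the centers, which doesn't affect the sign).
    double cxA = 0, cyA = 0, cxB = 0, cyB = 0;
    foreach (PointF p in observerBox.vertices) { cxA += p.X; cyA += p.Y; }
    foreach (PointF p in observedBox.vertices) { cxB += p.X; cyB += p.Y; }
    double sign = ((cxB - cxA) * mtvAxis.X + (cyB - cyA) * mtvAxis.Y) < 0 ? -1 : 1;
    mtv = new Vector(mtvAxis.X * smallestOverlap * sign, mtvAxis.Y * smallestOverlap * sign);
    return true;
}

How the MTV is applied is a separate design choice: translating only one box by the full MTV, moving each box by half of it in opposite directions, or splitting it in proportion to the bodies' inverse masses are all common; for two equal, movable boxes, half each is the usual default.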

ILNumerics plot a plane at specific location

I'm currently playing around with the ILNumerics API and started to plot a few points in a cube.
Then I calculated a regression plane right through those points.
Now I'd like to plot the plane in the same scene, but only about the same size as the point cloud.
I have the parameters of the plane (a, b, c): f(x,y) = a*x + b*y + c;
I know that only z is interesting for plotting a plane, but I've got no clue how to pass the right coordinates to the scene so that the plane is about the same size as the minimum and maximum extent of the points.
Could you give me a simple example of plotting a plane and a suggestion on how to set the bounds of that plane correctly?
Here is what I got so far:
private void ilPanel1_Load(object sender, EventArgs e)
{
    // get the X and Y bounds and calculate Z with parameters
    // plot it!
    var scene = new ILScene {
        new ILPlotCube(twoDMode: false) {
            new ILSurface( ??? ) {
            }
        }
    };
    // view angle etc
    scene.First<ILPlotCube>().Rotation = Matrix4.Rotation(
        new Vector3(1f, 0.23f, 1), 0.7f);
    ilPanel1.Scene = scene;
}
I hope that someone can help me ...
Thanks in advance !!!
You could take the Limits of the plotcube.Plots group and derive the plane's coordinates from its bounding box. This gives you the min and max X and Y coordinates for the plane. Use them to get the corresponding Z values by evaluating your plane equation.
Once you have X, Y and Z for the plane, use them with ILSurface to plot it.
If you need more help, I can try to add an example.
#Edit: the following example plots a plane through 3 arbitrary points. The plane's orientation and position are computed with the help of a plane function zEval. Its coefficients a, b, c are computed here from the 3 (concrete) points; you will have to compute your own equation coefficients.
The plane is realized with a surface. One might as well take the 4 coordinates computed in 'P' and use an ILTriangleFan and an ILLineStrip to create the plane and its border, but the surface already comes with a Fill and a Wireframe, so we take this as a quick solution.
private void ilPanel1_Load(object sender, EventArgs e) {
    // 3 arbitrary points
    float[,] A = new float[3, 3] {
        { 1.0f, 2.0f, 3.0f },
        { 2.0f, 2.0f, 4.0f },
        { 2.0f, -2.0f, 2.0f }
    };
    // construct a new plotcube and plot the points
    var scene = new ILScene {
        new ILPlotCube(twoDMode: false) {
            new ILPoints {
                Positions = A,
                Size = 4,
            }
        }
    };
    // Plane equation: this is derived from the concrete example points. In your
    // real world app you will have to adapt the weights a, b and c to your points.
    Func<float, float, float> zEval = (x, y) => {
        float a = 1, b = 0.5f, c = 1;
        return a * x + b * y + c;
    };
    // find bounding box of the plot contents
    scene.Configure();
    var limits = scene.First<ILPlotCube>().Plots.Limits;
    // Construct the surface / plane to draw.
    // The 'plane' will be a surface constructed from a 2x2 mesh only.
    // The x/y coordinates of the corners / grid points of the surface are taken from
    // the limits of the plots / points. The corresponding Z coordinates are computed
    // by the zEval function. So we give the ILSurface constructor not only Z coordinates
    // as a 2x2 matrix - but a Z,X,Y array of size 2x2x3.
    ILArray<float> P = ILMath.zeros<float>(2, 2, 3);
    Vector3 min = limits.Min, max = limits.Max;
    P[":;:;1"] = new float[,] { { min.X, min.X }, { max.X, max.X } };
    P[":;:;2"] = new float[,] { { max.Y, min.Y }, { max.Y, min.Y } };
    P[":;:;0"] = new float[,] {
        { zEval(min.X, max.Y), zEval(min.X, min.Y) },
        { zEval(max.X, max.Y), zEval(max.X, min.Y) },
    };
    // create the surface, make it semitransparent and modify the colormap
    scene.First<ILPlotCube>().Add(new ILSurface(P) {
        Alpha = 0.6f,
        Colormap = Colormaps.Prism
    });
    // give the scene to the panel
    ilPanel1.Scene = scene;
}
This creates a scene showing the points with a semitransparent plane through them.
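In case you also need to derive the coefficients a, b and c from three non-collinear points rather than hard-coding them, here is a small standalone helper (my own sketch, not part of ILNumerics):

// Hypothetical helper: derive z = a*x + b*y + c from three non-collinear points.
static float[] PlaneCoefficients(
    float x0, float y0, float z0,
    float x1, float y1, float z1,
    float x2, float y2, float z2)
{
    // Two edge vectors lying in the plane
    float ux = x1 - x0, uy = y1 - y0, uz = z1 - z0;
    float vx = x2 - x0, vy = y2 - y0, vz = z2 - z0;
    // Plane normal = u x v (nz must be non-zero for a z = f(x, y) representation)
    float nx = uy * vz - uz * vy;
    float ny = uz * vx - ux * vz;
    float nz = ux * vy - uy * vx;
    float a = -nx / nz;
    float b = -ny / nz;
    float c = z0 + (nx * x0 + ny * y0) / nz;
    return new[] { a, b, c };
}

For the three example points above it returns a = 1, b = 0.5, c = 1, matching the weights used in zEval.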
#Edit2: you asked how to disable the automatic scaling of the plot cube when adding the surface:
// before adding the surface:
var plotCube = scene.First<ILPlotCube>();
plotCube.AutoScaleOnAdd = false;
Alternatively, you can set the limits of the cube manually:
plotCube.Limits.Set(min,max);
You will probably want to disable some mouse interactions as well, since they would allow the user to rescale the cube in a similar (unwanted?) way:
plotCube.AllowZoom = false; // disables the mouse wheel zoom
plotCube.MouseDoubleClick += (_, arg) => {
    arg.Cancel = true; // disable the double click - resetting for the plot cube
};

XNA Get Mesh value of Y at value of X

I am having trouble trying to get a loaded mesh's Y value at a given X value in order to perform a very limited form of collision detection. Essentially, I am going to take the camera's X value and detect whether the Y value of my mesh at that X is, for example, 5. If so, there is a wall there.
I load my model with this:
landscape = Content.Load<Model>("landscape");
I draw the model with this:
foreach (ModelMesh mesh in landscape.Meshes)
{
    if (mesh.Name != "Billboards")
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.View = view;
            effect.Projection = projection;
            effect.LightingEnabled = true;
            effect.DirectionalLight0.Enabled = true;
            effect.DirectionalLight0.Direction = lightDirection;
            effect.DirectionalLight0.DiffuseColor = lightColor;
            //if (flashEnabled == true)
            //{
            effect.DirectionalLight1.Enabled = flashEnabled;
            effect.DirectionalLight1.Direction = cameraFront;
            effect.DirectionalLight1.DiffuseColor = lightColor;
            effect.DirectionalLight1.SpecularColor = colorFlashLight.ToVector3();
            //}
            effect.AmbientLightColor = ambientLightColor;
            effect.FogEnabled = fogEnabled;
            effect.FogColor = color.ToVector3();
            effect.FogStart = 9.75f;
            effect.FogEnd = 10.25f;
        }
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
        device.RasterizerState = RasterizerState.CullCounterClockwise;
        mesh.Draw();
    }
}
So moving on, in my update or input functions I would run an evaluative function to determine if LandscapeVertexY#CameraX is greater than a value of 5.
Any help?
If I understand your question correctly, you are trying to retrieve the rendered position of a vertex using a specified camera.
In this case you need to project the position of the vertex manually into your coordinate space using matrix multiplication. This can be done by multiplying the position with your model-view-projection matrix (it can be done with the XNA matrix and vector types).
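A minimal sketch of that projection in XNA (assuming world, view and projection are the matrices used to draw the mesh and vertexPosition is the model-space vertex; all of these names are placeholders):

// Combine the model (world), view and projection matrices used for drawing.
Matrix wvp = world * view * projection;
// Transform the model-space vertex into clip space.
Vector4 clip = Vector4.Transform(new Vector4(vertexPosition, 1f), wvp);
// Perspective divide gives normalized device coordinates in the [-1, 1] range.
Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;

GraphicsDevice.Viewport.Project(position, projection, view, world) performs the same projection and additionally maps the result to pixel coordinates.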

game programming in C#: sprite collision

I have a C# code snippet about sprite collision in C# game programming; I hope you can help me clarify it.
I don't understand the IsCollided method, especially the calculation of d1 and d2 to determine whether the sprites collide or not, the meaning and use of Matrix Invert() and Multiply() in this case, and the use of the Color alpha component to determine the collision of the 2 sprites.
Thank you very much.
public struct Vector
{
    public double X;
    public double Y;
    public Vector(double x, double y)
    {
        X = x;
        Y = y;
    }
    public static Vector operator -(Vector v, Vector v2)
    {
        return new Vector(v.X - v2.X, v.Y - v2.Y);
    }
    public double Length
    {
        get
        {
            return Math.Sqrt(X * X + Y * Y);
        }
    }
}

public class Sprite
{
    public Vector Position;
    protected Image _Image;
    protected Bitmap _Bitmap;
    protected string _ImageFileName = "";
    public string ImageFileName
    {
        get { return _ImageFileName; }
        set
        {
            _ImageFileName = value;
            _Image = Image.FromFile(value);
            _Bitmap = new Bitmap(value);
        }
    }
    public Matrix Transform
    {
        get
        {
            Vector v = Position;
            if (null != _Image)
                v -= new Vector(_Image.Size) / 2;
            Matrix m = new Matrix();
            m.RotateAt(50.0F, new PointF(10.0F, 100.0F));
            m.Translate((float)v.X, (float)v.Y);
            return m;
        }
    }
    public bool IsCollided(Sprite s2)
    {
        Vector v = this.Position - s2.Position;
        double d1 = Math.Sqrt(_Image.Width * _Image.Width + _Image.Height * _Image.Height) / 2;
        double d2 = Math.Sqrt(s2._Image.Width * s2._Image.Width + s2._Image.Height * s2._Image.Height) / 2;
        if (v.Length > d1 + d2)
            return false;
        Bitmap b = new Bitmap(_Image.Width, _Image.Height);
        Graphics g = Graphics.FromImage(b);
        Matrix m = s2.Transform;
        Matrix m2 = Transform;
        m2.Invert();
        Matrix m3 = m2;
        m3.Multiply(m);
        g.Transform = m3;
        Vector2F v2 = new Vector2F(0, 0);
        g.DrawImage(s2._Image, v2);
        for (int x = 0; x < b.Width; ++x)
            for (int y = 0; y < b.Height; ++y)
            {
                Color c1 = _Bitmap.GetPixel(x, y);
                Color c2 = b.GetPixel(x, y);
                if (c1.A > 0.5 && c2.A > 0.5)
                    return true;
            }
        return false;
    }
}
It's a little convoluted. There are two parts:
Part One (simple & fast)
The first part (involving v, d1 and d2) is probably confusing because, instead of doing collision tests using boxes, the dimensions of the images are used to construct bounding circles. A simple collision test is then carried out using these bounding circles. This is the 'quick n dirty' test that eliminates sprites that are clearly not colliding.
"Two circles overlap if the sum of there(sic) radii is greater than the distance between their centers. Therefore by Pythagoras we have a collision if:
(cx1-cx2)² + (cy1-cy2)² < (r1+r2)²"
See the "Bounding Circles" section of this link and the second link at the foot of this answer.
commented code:
// Get the vector between the two sprite centres
Vector v = this.Position - s2.Position;
// get the radius of a circle that will fit the first sprite
double d1 = Math.Sqrt(_Image.Width * _Image.Width + _Image.Height * _Image.Height)/2;
// get the radius of a circle that will fit the second sprite
double d2 = Math.Sqrt(s2._Image.Width * s2._Image.Width + s2._Image.Height * s2._Image.Height)/2;
// if the distance between the sprites is larger than the sum of the circles' radii, they do not collide
if (v.Length > d1 + d2)
    return false;
Note: You may want to consider using an axis-aligned bounding box test instead of a circle here. If you have rectangles with disparate widths and lengths, it will be a more effective/accurate first test.
Part Two (slower)
I haven't had time to check the code 100%, but the second part, having established in step one that the two sprites are potentially colliding, performs a more complicated, per-pixel collision test using the sprites' source images, specifically their alpha channels. In games, the alpha channel is often used to store transparency: the larger the value, the more opaque the pixel (0 is fully transparent, the maximum value is fully opaque).
In the code shown, the transforms first map sprite s2's image into this sprite's pixel space (see the commented lines below); then, if two overlapping pixels both have alpha values > 0.5f, they share the same space, represent solid geometry and are therefore colliding. This test means that transparent (read: invisible) parts of a sprite are not considered when testing for collisions, so you can have circular sprites that do not collide at their corners, etc.
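For the Matrix questions specifically, here is my reading of the transform lines, commented in the same style as the block above (the code itself is unchanged from the question):
// commented code:
Matrix m = s2.Transform;     // sprite 2: local (image) space -> world space
Matrix m2 = Transform;       // this sprite: local (image) space -> world space
m2.Invert();                 // inverted: world space -> this sprite's local space
Matrix m3 = m2;
m3.Multiply(m);              // combined: sprite 2's local space -> this sprite's local space
g.Transform = m3;            // so drawing s2's image into b redraws it in this sprite's pixel grid
g.DrawImage(s2._Image, v2);  // b and _Bitmap can now be compared pixel by pixel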
Here's a link that covers this in a bit more detail.
