Create mesh from an Object with Spatial Awareness Mesh Observer, Microsoft HoloLens 2 - C#

I'm new here and also quite new to C#, Unity 2019.4.14f with VS 2019, MRTK v2.5.3, and Microsoft HoloLens 2 programming. I would like to ask for advice on a problem that I have not been able to solve for weeks. First, a quick explanation of the problem: my task is to track an object that sits inside an examination cube with the spatial mesh, and to reproduce its shape as accurately as possible. [Explanation screen for the task description]
The calculation of where the examination cube is located in space works without any problems. But for some reason I cannot query the Spatial Awareness Mesh Observer: no meshes seem to be present, although they are visibly rendered in the scene.
Since I am at a complete loss and no one I have asked so far has been able to help me, I am posting my code for this function below. Please bear with me, as I am still a beginner at writing code.
public void ReadAndDrawMesh()
{
    // Provide a list for the cube coordinates (a Unity cube mesh has 24 vertices)
    Vector3[] cubeCoordinateList = new Vector3[24];

    // Convert local to world coordinates
    // (assumes this script sits on the cube; otherwise use Cube.transform.localToWorldMatrix)
    var localToWorld = transform.localToWorldMatrix;
    Vector3 cubeWorldPos = Cube.transform.position; // read out the centre position
    Vector3[] cubeVertices = Cube.GetComponent<MeshFilter>().mesh.vertices; // local coordinates

    for (int i = 0; i < cubeVertices.Length; i++)
    {
        cubeCoordinateList[i] = localToWorld.MultiplyPoint3x4(cubeVertices[i]);
    }

    // Cube edge from vertex A[0] to E[4]
    Vector3 direction1 = cubeCoordinateList[4] - cubeCoordinateList[0];
    float amount1 = direction1.magnitude; // edge length
    direction1 /= amount1;                // normalised direction

    // Cube edge from vertex A[0] to B[2]
    Vector3 direction2 = cubeCoordinateList[2] - cubeCoordinateList[0];
    float amount2 = direction2.magnitude;
    direction2 /= amount2;

    // Cube edge from vertex A[0] to D[3]
    Vector3 direction3 = cubeCoordinateList[3] - cubeCoordinateList[0];
    float amount3 = direction3.magnitude;
    direction3 /= amount3;

    // From MRTK 2.5.3:
    // use CoreServices to quickly get access to the IMixedRealitySpatialAwarenessSystem
    var spatialAwarenessService = CoreServices.SpatialAwarenessSystem;

    // Cast to IMixedRealityDataProviderAccess to get access to the data providers
    var dataProviderAccess = spatialAwarenessService as IMixedRealityDataProviderAccess;

    // Note: GetDataProvider expects the *name* of the registered observer,
    // not the name of its profile asset
    var meshObserverName = "SpatialAwarenessMeshObserverProfile";
    var meshObserver = dataProviderAccess.GetDataProvider<IMixedRealitySpatialAwarenessMeshObserver>(meshObserverName);
    if (meshObserver == null)
    {
        Debug.Log("No mesh observer found under this name");
        return;
    }

    foreach (SpatialAwarenessMeshObject meshObject in meshObserver.Meshes.Values)
    {
        // Read the spatial mesh of the room (vertices are in the spatial mesh's local space)
        Vector3[] meshObjectArray = meshObject.Filter.mesh.vertices;

        // List for the mesh coordinates that pass the scalar-product test
        List<Vector3> meshPositionList = new List<Vector3>();

        foreach (Vector3 vertexCoordinate in meshObjectArray)
        {
            // Direction from the centre of the cube to this vertex
            var dirVectorMesh = vertexCoordinate - cubeWorldPos;

            // Project onto the three cube axes; doubled so the projection can be
            // compared against the full edge length
            var result1 = Mathf.Abs(Vector3.Dot(dirVectorMesh, direction1)) * 2;
            var result2 = Mathf.Abs(Vector3.Dot(dirVectorMesh, direction2)) * 2;
            var result3 = Mathf.Abs(Vector3.Dot(dirVectorMesh, direction3)) * 2;

            // If the projection is within the cube's extent on all three axes,
            // the vertex lies inside the cube: write it into the list
            if (result1 < amount1 && result2 < amount2 && result3 < amount3)
            {
                meshPositionList.Add(vertexCoordinate);
            }
        }

        // Create a new visible mesh from the points in the list
        // (note: without triangle indices nothing is rendered, and
        // Graphics.DrawMeshNow needs a material set via Material.SetPass beforehand)
        Mesh mesh = new Mesh();
        mesh.vertices = meshPositionList.ToArray();
        mesh.RecalculateNormals();
        mesh.Optimize();
        Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
    }
}
I hope that one of you can help me, and I look forward to any constructive answers. Thank you to everyone who reads this post and perhaps responds.
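Edit: for debugging this, below is a minimal sketch (assuming the MRTK 2.5.x API) that logs the name and mesh count of every registered mesh observer. If the logged name differs from "SpatialAwarenessMeshObserverProfile", that would explain why the lookup above finds nothing.

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

public class ListMeshObservers : MonoBehaviour
{
    void Start()
    {
        // The spatial awareness system also implements IMixedRealityDataProviderAccess
        if (CoreServices.SpatialAwarenessSystem is IMixedRealityDataProviderAccess access)
        {
            // GetDataProviders returns every registered mesh observer;
            // observer.Name is the string that GetDataProvider(name) expects
            foreach (var observer in access.GetDataProviders<IMixedRealitySpatialAwarenessMeshObserver>())
            {
                Debug.Log($"Mesh observer: '{observer.Name}', meshes: {observer.Meshes.Count}");
            }
        }
    }
}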

Related

Simple 2d physics with polygons C#

I have been working on a 2D physics engine using polygons, and I am having trouble implementing the actual physics part. For a bit of background: I am not experienced at all when it comes to physics, so even if I found an explanation of the entire physics part online, I would not be able to implement it in my project.
My goal is:
To have polygons fall with gravity.
Have weight, drag, etc.
Collision between multiple polygons.
What I have already made:
A way of displaying and creating multiple polygons.
Moving and rotating a specified object (polygon).
Coefficients for drag, gravity and weight.
Hit boxes and visual boxes. (Visual boxes are what gets displayed; hit boxes are for physics.)
A center point for every object. (So far used for rotation.)
A tick on which everything gets calculated. (Game tick / tick rate or whatever you want to call it.)
What I was not able to add / am looking for (see the gravity/velocity sketch at the end of this post):
Actual gravity.
Collision detection.
Velocity for each object.
Collision between objects.
Code snippets / how stuff works so far:
Beware that my code is janky and could be made better or more efficient.
Efficiency is not what I'm looking for!
Function for creating object:
public Object CreateNew(PointF[] hb, PointF[] vb, float rt, Color cl, bool gr, PointF ps)
{
    Object obj = new Object
    {
        pos = ps,
        rotation = rt,
        offsets = vb,
        hitBox = hb,
        visBox = vb,
        gravity = gr,
        clr = cl,
    };

    #region center
    // Compute the centroid of the offsets and store it as the object's center
    List<Vector2> v2Points = new List<Vector2>();
    foreach (PointF p in obj.offsets)
    {
        v2Points.Add(new Vector2(p.X, p.Y));
    }
    PointF point = ToPoint(Centroid(v2Points));
    obj.center = new PointF(point.X, point.Y);
    #endregion

    return obj;
}
Function for changing the position of an object:
public Object ChangePosition(PointF pos, double rot, Object obj)
{
    // Shift the visual box and the hit box to the new position using the offsets
    for (int i = 0; i < obj.visBox.Length; i++)
    {
        obj.visBox[i] = new PointF(obj.offsets[i].X + pos.X, obj.offsets[i].Y + pos.Y);
    }
    for (int i = 0; i < obj.hitBox.Length; i++)
    {
        obj.hitBox[i] = new PointF(obj.offsets[i].X + pos.X, obj.offsets[i].Y + pos.Y);
    }
    obj.pos = pos;

    // Recompute the centers from the offsets and the hit box
    List<Vector2> v2Points = new List<Vector2>();
    foreach (PointF p in obj.offsets)
    {
        v2Points.Add(new Vector2(p.X, p.Y));
    }
    obj.center = ToPoint(Centroid(v2Points));

    List<Vector2> v2Points2 = new List<Vector2>();
    foreach (PointF p in obj.hitBox)
    {
        v2Points2.Add(new Vector2(p.X, p.Y));
    }
    obj.centerHitBox = ToPoint(Centroid(v2Points2));

    // Undo the previous rotation, then apply the new one
    obj.hitBox = RotatePolygon(obj.hitBox, obj.center, rotation * -1);
    obj.visBox = RotatePolygon(obj.visBox, obj.center, rotation * -1);
    obj.offsets = RotatePolygon(obj.offsets, obj.center, rotation * -1);
    obj.hitBox = RotatePolygon(obj.hitBox, obj.center, rot);
    obj.visBox = RotatePolygon(obj.visBox, obj.center, rot);
    obj.offsets = RotatePolygon(obj.offsets, obj.center, rot);
    rotation = rot;

    return obj;
}
Pastebin link to object script:
https://pastebin.com/9SnG4vyj
I will provide more information or scripts if anybody needs it!
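For reference, here is a minimal per-tick gravity/velocity sketch using simple Euler integration. The PhysicsObject type and its fields are hypothetical illustrations, not part of the project above; the constants would need tuning:

using System.Drawing;

public class PhysicsObject
{
    public PointF pos;
    public float vx, vy;      // velocity in pixels per second
    public float mass = 1f;
    public bool gravity = true;
}

public static class SimplePhysics
{
    const float Gravity = 9.81f * 50f; // pixels/s^2, scaled for screen space
    const float Drag = 0.1f;           // simple linear drag coefficient

    // Called once per game tick with the elapsed time in seconds
    public static void Step(PhysicsObject obj, float dt)
    {
        if (obj.gravity)
            obj.vy += Gravity * dt;           // accelerate downwards (screen Y grows down)

        obj.vx -= obj.vx * Drag * dt;         // apply linear drag
        obj.vy -= obj.vy * Drag * dt;

        obj.pos = new PointF(obj.pos.X + obj.vx * dt,   // integrate position
                             obj.pos.Y + obj.vy * dt);
    }
}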

Finding the closest vertex below a certain height, from a start position

I have procedurally generated islands with lakes. It's basically a 3D mesh with points above the water line and points below it; any vertex/point below the water level is water, everything above it is solid ground.
From any point on the mesh, I want to know the closest distance to this water.
What I ended up doing was creating an array of Vector2s containing all the points on the mesh that are below the water level.
Next, I cycle through these elements and compare them all to find the one closest to my selected point. I am using Vector2.Distance for this because I only want the distance in the XZ components, not up/down (the Y component).
The problem is that for most points I select this works absolutely fine, giving correct results, but sometimes it doesn't pick the closest water point; instead it returns one that is further away, even though the closer water point is confirmed to be in the array of water points being compared.
Here is my code:
chunk.Vertices = new Vertice[totalVertices];
for (int i = 0, z = 0; z <= chunkSizeZ; z++)
{
    for (int x = 0; x <= chunkSizeX; x++, i++)
    {
        Vertice vert = new Vertice();
        vert.index = i;
        vert.position = new Vector3(chunkStartPosition.x + x,
                                    chunkStartPosition.y,
                                    chunkStartPosition.z + z);
        vert.centerPosition = new Vector3(vert.position.x + 0.5f,
                                          vert.position.y,
                                          vert.position.z + 0.5f);
        vert.centerPos2 = new Vector2(vert.position.x + 0.5f,
                                      vert.position.z + 0.5f);
        chunk.Vertices[i] = vert;
    }
}
Here we get all the water positions:
for (int i = 0; i < totalVertices; i++)
{
    if (chunk.Vertices[i].position.y > heightCorrection + tileColliderMinimumY)
    {
        worldVectorsClean.Add(chunk.Vertices[i].position);
        worldIndexClean.Add(chunk.Vertices[i].index);
    }
    else
    {
        worldVectorsWater.Add(chunk.Vertices[i].centerPos2);
    }
}
Every single tile then calls the function below on the generator itself, but only AFTER the whole map and all water points have been added. The generator keeps track of ALL water points across all chunks; otherwise each chunk would only compare its own water points, which doesn't work, because water from another chunk can be closer but would never be considered.
public float CalculateDistanceToWater(Vector2 pos)
{
    var distance = 9001f;
    foreach (Vector2 waterVector in worldVectorsWater)
    {
        var thisDistance = Vector2.Distance(pos, waterVector);
        if (thisDistance < distance)
            distance = thisDistance;
    }
    return distance;
}
Finally, we call it from:
IEnumerator FindWater()
{
    yield return new WaitForSeconds(Random.Range(0.8f, 2.55f));
    var pos = new Vector2(transform.position.x, transform.position.z);
    distanceToWater = ChunkGenerator.instance.CalculateDistanceToWater(pos);
}
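One aside on the distance loop above: comparing squared magnitudes finds the same nearest point while avoiding a square root per water vector. A sketch of the same function under that change, assuming the same worldVectorsWater list:

public float CalculateDistanceToWater(Vector2 pos)
{
    float bestSqr = float.MaxValue;
    foreach (Vector2 waterVector in worldVectorsWater)
    {
        // sqrMagnitude spares a square root; the ordering of candidates is unchanged
        float sqr = (waterVector - pos).sqrMagnitude;
        if (sqr < bestSqr)
            bestSqr = sqr;
    }
    return Mathf.Sqrt(bestSqr); // one square root at the end
}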
Looking forward to some help on this.

How to set Texture Coordinates properly for Mesh3D

I'm new to 3D programming and am having a terrible time getting my texture to fill my meshes properly. I've got it sizing correctly on the walls, but the texture on the roof runs at an angle and is stretched out too far.
I have several methods that create the mesh, but they all eventually call the AddTriangle method, where the TextureCoordinates are set.
public static void AddTriangle(this MeshGeometry3D mesh, Point3D[] pts)
{
    // Create the points.
    int index = mesh.Positions.Count;
    foreach (Point3D pt in pts)
    {
        mesh.Positions.Add(pt);
        mesh.TriangleIndices.Add(index++);
        mesh.TextureCoordinates.Add(new Point(pt.X + pt.Z, 0 - pt.Y));
    }
}
Here is how my material is set up.
imageBrush.ImageSource = new BitmapImage(new Uri("pack://application:,,,/Textures/shingles1.jpg"));
imageBrush.TileMode = TileMode.Tile;
imageBrush.ViewportUnits = BrushMappingMode.Absolute;
imageBrush.Viewport = new Rect(0, 0, 25, 25);
SidingColor = new DiffuseMaterial(imageBrush);
SidingColor.Color = RGB(89, 94, 100);
My texture looks like this: [shingle texture image]
And here are the results I'm getting: [screenshot of the mis-mapped roof texture]
That's as close as I could get after hours of fooling around and googling.
Whew, that was a little more difficult than I anticipated.
Here are a few resources that helped me find a solution:
How to convert a 3D point on a plane to UV coordinates?
From the link below I realized the above formula was correct, but for a right-handed coordinate system. I converted it, and that was the final step.
http://www.math.tau.ac.il/~dcor/Graphics/cg-slides/geom3d.pdf
Here is the code that works in case someone else has this question.
public static void AddTriangle(this MeshGeometry3D mesh, Point3D[] pts)
{
    if (pts.Count() != 3) return;

    // Use the three points of the triangle to calculate the normal (angle of the surface)
    Vector3D normal = CalculateNormal(pts[0], pts[1], pts[2]);
    normal.Normalize();

    // Calculate the uv basis vectors
    Vector3D u;
    if (normal.X == 0 && normal.Z == 0) u = new Vector3D(normal.Y, -normal.X, 0);
    else u = new Vector3D(normal.X, -normal.Z, 0);
    u.Normalize();
    Vector3D n = new Vector3D(normal.Z, normal.X, normal.Y);
    Vector3D v = Vector3D.CrossProduct(n, u);

    int index = mesh.Positions.Count;
    foreach (Point3D pt in pts)
    {
        // Add the points to create the triangle
        mesh.Positions.Add(pt);
        mesh.TriangleIndices.Add(index++);

        // Apply the uv texture positions
        double u_coor = Vector3D.DotProduct(u, new Vector3D(pt.Z, pt.X, pt.Y));
        double v_coor = Vector3D.DotProduct(v, new Vector3D(pt.Z, pt.X, pt.Y));
        mesh.TextureCoordinates.Add(new Point(u_coor, v_coor));
    }
}

private static Vector3D CalculateNormal(Point3D firstPoint, Point3D secondPoint, Point3D thirdPoint)
{
    var u = new Point3D(firstPoint.X - secondPoint.X,
                        firstPoint.Y - secondPoint.Y,
                        firstPoint.Z - secondPoint.Z);
    var v = new Point3D(secondPoint.X - thirdPoint.X,
                        secondPoint.Y - thirdPoint.Y,
                        secondPoint.Z - thirdPoint.Z);
    return new Vector3D(u.Y * v.Z - u.Z * v.Y,
                        u.Z * v.X - u.X * v.Z,
                        u.X * v.Y - u.Y * v.X);
}
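For reference, a hypothetical call site (the point values here are made up) that builds one sloped roof face out of two triangles with the extension method above:

using System.Windows.Media.Media3D;

// Build one sloped roof face out of two triangles sharing the diagonal a-c
var mesh = new MeshGeometry3D();
var a = new Point3D(0, 3, 0);
var b = new Point3D(5, 3, 0);
var c = new Point3D(5, 5, 2);
var d = new Point3D(0, 5, 2);
mesh.AddTriangle(new[] { a, b, c }); // first half of the quad
mesh.AddTriangle(new[] { a, c, d }); // second half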

How can I create a plane out of many meshes?

What I want to do is to extrude a mesh plane.
The plane is in red in the scene view. Each square of the grid has two triangles.
First, I don't understand what Res X and Res Z are for.
What I want to create first is a plane built from vertices and triangles, at a size of 16x16 or any other size, given a height and a width (Length should be the height).
But after I set all the properties to 16, the plane is built from 15x15 squares, not 16x16.
My main goal is to extrude the plane: using OnMouseDown, I want to click on the plane, find the closest vertex (and its neighbours) to the point I clicked, and extrude that vertex or vertices, meaning change only one axis of the clicked vertices' position.
Something like the idea in this image (marked with a red circle): [reference image]
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class meshPlane : MonoBehaviour
{
    public int length;
    public int width;
    public int resX;
    public int resZ;

    private MeshFilter meshf;
    private Mesh mesh;
    private Vector3[] vertices;

    private void Start()
    {
        GenerateOrigin();
    }

    private void GenerateOrigin()
    {
        // You can change this line to provide another MeshFilter
        meshf = GetComponent<MeshFilter>();
        mesh = new Mesh();
        meshf.mesh = mesh;
        mesh.Clear();

        #region Vertices
        vertices = new Vector3[resX * resZ];
        for (int z = 0; z < resZ; z++)
        {
            // [ -length / 2, length / 2 ]
            float zPos = ((float)z / (resZ - 1) - .5f) * length;
            for (int x = 0; x < resX; x++)
            {
                // [ -width / 2, width / 2 ]
                float xPos = ((float)x / (resX - 1) - .5f) * width;
                vertices[x + z * resX] = new Vector3(xPos, 0f, zPos);
            }
        }
        #endregion

        #region Normals
        Vector3[] normals = new Vector3[vertices.Length];
        for (int n = 0; n < normals.Length; n++)
            normals[n] = Vector3.up;
        #endregion

        #region UVs
        Vector2[] uvs = new Vector2[vertices.Length];
        for (int v = 0; v < resZ; v++)
        {
            for (int u = 0; u < resX; u++)
            {
                uvs[u + v * resX] = new Vector2((float)u / (resX - 1), (float)v / (resZ - 1));
            }
        }
        #endregion

        #region Triangles
        int nbFaces = (resX - 1) * (resZ - 1);
        int[] triangles = new int[nbFaces * 6];
        int t = 0;
        for (int face = 0; face < nbFaces; face++)
        {
            // Retrieve the lower-left corner vertex from the face index:
            // column = face % (resX - 1), row = face / (resX - 1)
            int i = face % (resX - 1) + (face / (resX - 1) * resX);

            triangles[t++] = i + resX;
            triangles[t++] = i + 1;
            triangles[t++] = i;
            triangles[t++] = i + resX;
            triangles[t++] = i + resX + 1;
            triangles[t++] = i + 1;
        }
        #endregion

        mesh.vertices = vertices;
        mesh.normals = normals;
        mesh.uv = uvs;
        mesh.triangles = triangles;
        mesh.RecalculateBounds();
    }
}
When you say "the plane is built from 15x15 meshes" you mean the plane is built from 15x15 squares. That whole plane is the mesh.
ResX and ResZ are how many points there are in each direction. You get one less square because you need two edges for the first square. You need another two for each square you add, but they can share an edge with the previous one so you need only one more.
To make your mesh clickable you need to add a mesh collider to your gameobject and assign the mesh you generate to it. Then, you can use the camera class to get a ray, put that in a raycast and if your raycast hits anything you can use the triangle index and the triangles array you created to get the three points of the triangle that was hit. In addition you can see which weight in the barycentric coordinates is bigger to know which exact vertex your click was closest to. And finally, now that you have the exact vertex you can modify its height.
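Sketched out, that flow might look like this (Unity API; it assumes the generated mesh has also been assigned to a MeshCollider on the same GameObject, which the generator above does not yet do):

using UnityEngine;

public class MeshClicker : MonoBehaviour
{
    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        // Build a ray from the mouse position and test it against the colliders
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        MeshCollider meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null) return;

        Mesh mesh = meshCollider.sharedMesh;
        int[] triangles = mesh.triangles;
        Vector3[] verts = mesh.vertices;

        // The three vertex indices of the triangle that was hit
        int i0 = triangles[hit.triangleIndex * 3 + 0];
        int i1 = triangles[hit.triangleIndex * 3 + 1];
        int i2 = triangles[hit.triangleIndex * 3 + 2];

        // The largest barycentric weight marks the vertex closest to the click
        Vector3 bary = hit.barycentricCoordinate;
        int closest = i0;
        if (bary.y > bary.x && bary.y > bary.z) closest = i1;
        else if (bary.z > bary.x && bary.z > bary.y) closest = i2;

        // "Extrude": move the clicked vertex along one axis
        verts[closest] += Vector3.up * 0.5f;
        mesh.vertices = verts;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        // Re-assign the mesh so the collider picks up the new shape
        meshCollider.sharedMesh = null;
        meshCollider.sharedMesh = mesh;
    }
}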

Hitting objects dropping from the sky with bare hands or an object, using the Kinect SDK for Windows

I am following a tutorial in a book called Learn the Kinect API by Rob Miles.
Basically, it's an augmented-reality game where spiders fall from the top of the screen and you hit them with a mallet. After following the tutorial and looking at the code, I understand how the spiders fall, how their positions are randomized, etc.
I have a problem understanding the mallet, though; I wish to replace the mallet with an image of a basket instead.
This is what the code for the mallet looks like:
Brush malletHandleBrush = new SolidColorBrush(Colors.Black);
Brush malletHeadBrush = new SolidColorBrush(Colors.Red);
float malletHandleLength = 100;
float malletHeadLength = 50;
System.Windows.Vector malletPosition;
float malletHitRadius = 40;
bool malletValid = false;

void updateMallet(Joint j1, Joint j2)
{
    // If Joint 1 (right wrist) or Joint 2 (right hand) is not tracked, we stop here
    if (j1.TrackingState != JointTrackingState.Tracked || j2.TrackingState != JointTrackingState.Tracked)
        return;

    // Get the start and end positions of the mallet vector
    ColorImagePoint j1P = myKinect.CoordinateMapper.MapSkeletonPointToColorPoint(j1.Position, ColorImageFormat.RgbResolution640x480Fps30);
    ColorImagePoint j2P = myKinect.CoordinateMapper.MapSkeletonPointToColorPoint(j2.Position, ColorImageFormat.RgbResolution640x480Fps30);

    int dX = j2P.X - j1P.X;
    int dY = j2P.Y - j1P.Y;
    System.Windows.Vector malletDirection = new System.Windows.Vector(dX, dY);
    if (malletDirection.Length < 1) return;

    // Convert into a vector of length 1 unit
    malletDirection.Normalize();

    // Now set the length of the mallet handle
    System.Windows.Vector handleVector = malletDirection * malletHandleLength;

    Line handleLine = new Line();
    handleLine.Stroke = malletHandleBrush;
    handleLine.StrokeThickness = 10;
    handleLine.X1 = j1P.X;
    handleLine.Y1 = j1P.Y;
    handleLine.X2 = j1P.X + handleVector.X;
    handleLine.Y2 = j1P.Y + handleVector.Y;
    //malletCanvas.Children.Add(handleLine);

    Line headLine = new Line();
    headLine.Stroke = malletHeadBrush;
    headLine.StrokeThickness = 50;
    System.Windows.Vector headVector = malletDirection * malletHeadLength;
    headLine.X1 = handleLine.X2;
    headLine.Y1 = handleLine.Y2;
    headLine.X2 = handleLine.X2 + headVector.X;
    headLine.Y2 = handleLine.Y2 + headVector.Y;
    //malletCanvas.Children.Add(headLine);

    // The hit position is the centre of the mallet head
    malletPosition = new System.Windows.Vector(j1P.X, j1P.Y);
    malletPosition = malletPosition + (malletDirection * (malletHandleLength + (malletHeadLength / 2)));
    malletValid = true;
}
This is what the code for detecting whether the mallet hits an object looks like:
// Declare the hit vector for each spider
System.Windows.Vector _spiderHitVector = new System.Windows.Vector(malletPosition.X - _spiderCenterX, malletPosition.Y - _spiderCenterY);
Does anyone have any resources or give me some hints on how to work on this?
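One possible direction, as a sketch only: instead of drawing the handle and head lines, position an image of the basket at the tracked hand and keep using malletPosition as the hit centre. Here basketImage is a hypothetical WPF Image element added to the same canvas (not something from the book):

// Inside updateMallet, replacing the Line-drawing code:
// centre a basket image on the hand joint (j2P)
Canvas.SetLeft(basketImage, j2P.X - basketImage.Width / 2);
Canvas.SetTop(basketImage, j2P.Y - basketImage.Height / 2);

// Use the image centre as the "catch" position for the existing hit test
malletPosition = new System.Windows.Vector(j2P.X, j2P.Y);
malletValid = true;

The existing _spiderHitVector check can then stay as it is, comparing its length against malletHitRadius (or a radius matching the basket image).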
