I'm new to ARKit and the Xamarin environment.
I need help with translating an SCNNode in a scene using a pan gesture.
I used this guide as my first approach to PanGesture. Guide
After that...
I used the sample code, but I noticed that, as in the example, when I move an object in the scene it ONLY follows the X and Y axes.
In short, everything works as long as the ARKit scene's Cartesian axes line up with the view, with the camera's Z axis pointing at the observer.
If the camera position changes (the phone moves), how can I obtain the translation delta within the 3D space?
if (sender.State == UIGestureRecognizerState.Changed)
{
var translate = sender.TranslationInView(areaPanned);
// Only allows movement vertically or horizontally [OK, but how can I obtain the XYZ delta in the scene from the viewport's XY?]
node.LocalTranslate(new SCNVector3((float)translate.X / 10000f, (float)-translate.Y / 10000f, 0.0f));
}
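One common approach (a sketch only, in plain math rather than ARKit/Xamarin API calls) is to map the 2D pan delta onto the camera's world-space right and up axes, so the node moves in the plane facing the camera no matter how the phone is oriented. In ARKit the axes would come from the camera transform (ARCamera.Transform); the function and parameter names below are hypothetical:

```python
def pan_delta_to_world(dx, dy, camera_rotation, scale=1.0 / 10000.0):
    """Map a 2D screen-space pan delta onto the camera's right/up axes.

    camera_rotation is a 3x3 rotation matrix given as three rows: the
    camera's world-space right, up, and forward axes.  Screen Y grows
    downward, hence the minus sign (matching the original -translate.Y).
    """
    right, up, _forward = camera_rotation
    return tuple(scale * (dx * r - dy * u) for r, u in zip(right, up))

# With an identity camera orientation this reduces to the original X/Y-only move:
delta = pan_delta_to_world(100.0, -50.0, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

As the camera rotates, the same screen drag then produces a world-space delta in the camera's current view plane instead of the fixed scene X/Y plane.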
Following the OpenGL convention I thought of a solution like this:
[Pseudo code]
// Scale/offset from 0...1 to -1...1 coordinate space
var vS = new Coordinate(this.Scene.CurrentViewport.X, this.Scene.CurrentViewport.Y, 1.0);
var vWH = new Coordinate(this.Scene.CurrentViewport.Width, this.Scene.CurrentViewport.Height, 1.0);
var screenPos = new Coordinate(translate.X, -translate.Y, 1.0);
var normalized = (screenPos - vS) / vWH;
After that I need the matrix:
var inversePM = (projection * modelView).Inverse
where:
=> projection comes from ARCamera.ProjectionMatrix
=> modelView comes from ARCamera.Transform
To finish:
var result = normalized * inversePM;
But if I set the SCNNode position to this value, nothing works :(
Thanks
Problem solved!
Here is the Swift code to translate into C#... works fine!
I am working on a mobile app in C# using the Xamarin framework. I am trying to move a point by a fixed angle on a map, as in the first part of the GIF below. I believe I am using the right mathematical functions to compute the coordinates of the shifted points, since in the first part of the GIF, made in GeoGebra, everything seems fine.
But when it comes to the actual in-app implementation, the results are quite weird: the angle is not consistent and the distance between the center and the points varies as the target moves.
The GIF showing the issue
I don't have a clue what is wrong with the code. In the code below I use PolylineOptions to draw the lines, but I've tried with a Polygon and it displays the same results. Maybe it's because customMap.UserPin.Position returns the coordinates in decimal-degree format (e.g. 34.00462, -4.512221) and the gap between two positions is too small for a double.
Here are the two functions used to draw the lines.
// Add a cone's side to the variable coneLines
private void addConePolyline(double angle, CustomMap customMap, LatLng userPos)
{
// The coordinates of the end of the side to be drawn
LatLng conePoint = movePoint(angle, customMap.UserPin.Position, customMap.TargetPin.Position);
var polylineOptions = new PolylineOptions();
polylineOptions.InvokeWidth(10f);
polylineOptions.InvokeColor(Android.Graphics.Color.Argb(240, 255, 20, 147)); // Pink
polylineOptions.Add(userPos);
polylineOptions.Add(conePoint);
// Add the line to coneLines
coneLines.Add(map.AddPolyline(polylineOptions));
}
// Rotates initialPoint around rotationCenter by the given angle (in radians)
private LatLng movePoint(double angle, Position rotationCenter, Position initialPoint)
{
// Compute the components of the translation vector between rotationCenter and initialPoint
double dx = initialPoint.Latitude - rotationCenter.Latitude;
double dy = initialPoint.Longitude - rotationCenter.Longitude;
// Compute the moved point's position
double x = rotationCenter.Latitude + Math.Cos(angle) * dx - Math.Sin(angle) * dy;
double y = rotationCenter.Longitude + Math.Sin(angle) * dx + Math.Cos(angle) * dy;
LatLng res = new LatLng(x, y);
return res;
}
I hope someone can help me with this!
Thank you.
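For what it's worth, the movePoint formula above is the standard 2D rotation about a point; a minimal sketch to sanity-check it on plain Cartesian coordinates. Note that raw latitude/longitude degrees are not such a plane: a degree of longitude spans roughly cos(latitude) times the ground distance of a degree of latitude, which would produce exactly the kind of inconsistent angles and radii described.

```python
import math

def move_point(angle, cx, cy, px, py):
    """Rotate (px, py) around (cx, cy) by `angle` radians (standard 2D rotation)."""
    dx, dy = px - cx, py - cy
    return (cx + math.cos(angle) * dx - math.sin(angle) * dy,
            cy + math.sin(angle) * dx + math.cos(angle) * dy)

# Rotating (1, 0) by 90 degrees around the origin lands on (0, 1),
# and the distance from the center is preserved:
x, y = move_point(math.pi / 2, 0.0, 0.0, 1.0, 0.0)
```

On true planar coordinates the formula is exact, so any distortion seen in the app comes from treating degrees as an isometric plane rather than from the rotation itself.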
For my 3D board game I am trying to build a "see the whole board" feature, which should move the camera to a point from which it can see the whole board, without changing the rotation of the camera.
So far, I have computed the minimum and maximum points enclosing all objects of interest and used those two points to mock a sphere around them (visualized by actually placing a sphere there programmatically), so I am ending up with this right now:
Visualization of the sphere
My current problem is that I am unable to come up with a formula to calculate the position of the camera so that the whole sphere is in view (remember: the rotation must stay unchanged).
This is my code so far for finding the smallest and largest points and visualizing them by building the sphere:
// Find smallest and biggest point of all objects
var p1 = new Vector3(float.PositiveInfinity, float.PositiveInfinity, float.PositiveInfinity);
var p2 = new Vector3(float.NegativeInfinity, float.NegativeInfinity, float.NegativeInfinity);
foreach (var gameObject in gameObjects)
{
foreach (var vertex in gameObject.GetComponent<MeshFilter>().sharedMesh.vertices)
{
// Mesh vertices are in local space; adding only the position ignores rotation and scale
var p = vertex + gameObject.transform.position;
p1.x = Math.Min(p1.x, p.x);
p1.y = Math.Min(p1.y, p.y);
p1.z = Math.Min(p1.z, p.z);
p2.x = Math.Max(p2.x, p.x);
p2.y = Math.Max(p2.y, p.y);
p2.z = Math.Max(p2.z, p.z);
}
}
// Center of all objects
var average = (p1 + p2) / 2;
// Visualize by creating a sphere
var diameter = Vector3.Distance(p1, p2);
var sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
sphere.transform.position = average;
sphere.transform.localScale = new Vector3(diameter, diameter, diameter);
Can you help me with the formula to actually calculate the position of my camera?
Regards
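A common closed-form answer for a perspective camera with fixed rotation is to slide it back along its own forward axis until the bounding sphere fits the narrower field of view: the required distance from the sphere's center is r / sin(fov/2). A sketch in plain math (parameter names are hypothetical; in Unity the vertical FOV would come from Camera.fieldOfView in degrees and the aspect ratio from Camera.aspect):

```python
import math

def camera_position_to_fit_sphere(center, forward, radius, vfov_rad, aspect):
    """Point on the camera's backward ray from which the sphere just fits.

    vfov_rad: vertical field of view in radians; the horizontal FOV is
    derived from the aspect ratio.  The smaller half-angle is used so
    the sphere fits in both dimensions.
    """
    hfov_rad = 2.0 * math.atan(math.tan(vfov_rad / 2.0) * aspect)
    half = min(vfov_rad, hfov_rad) / 2.0
    dist = radius / math.sin(half)
    return tuple(c - dist * f for c, f in zip(center, forward))

# 60-degree vertical FOV, square aspect, unit sphere at the origin,
# camera looking along +Z: the camera backs up to z = -2.
pos = camera_position_to_fit_sphere((0, 0, 0), (0, 0, 1), 1.0, math.radians(60), 1.0)
```

A bounding sphere built from the AABB corners is conservative, so the result may leave some margin around the board.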
The easiest thing to do would be to move the camera back, with or without code. Just use the Game view to check whether the sphere is in view. Once you have found that point you could leave the camera there, or use code to set it:
public Vector3 camPos; // Set the camera position in the Inspector (the field must be public or serialized to appear there)
private void Start()
{
transform.position = camPos;
}
For each side of the view frustum, find the parallel plane that is as close to the scene as possible. Each of these planes should just touch one of the objects in the scene.
Each pair of opposite planes will intersect along a line. Find these two lines; they will be perpendicular.
The camera should be located on whichever line is farther from the scene, positioned so that the other line passes through the center of the field of view.
In the resulting view, one pair of opposite sides will just touch some objects in the scene, because the corresponding sides of the view frustum just touch those objects. In the other dimension, the objects will be centered.
What would be the best way to paint a Bezier curve (with set start and end points) onto a Unity terrain, so that the curve follows the ups and downs of the ground?
Right now I partly achieve it like this (I still need to connect the new points from groundedPoints as new Beziers):
int SegmentCount = Mathf.FloorToInt(BezierLength / SegmentLength);
//Rounded down to the next lower integer
var groundedPoints = new List<Vector3>();
for (int i = 0; i < SegmentCount; i++) {
//Cast to float: with integer division, i / SegmentCount would always be 0
Vector3 p = GetPoint(BezierPoints, (float)i / SegmentCount);
p = p.RayCastDown();
//RayCasting down to get the point on the terrain
if (i == 0 || i < SegmentCount - 1) {
groundedPoints.Add(p);
} else {
if (p.y != groundedPoints[groundedPoints.Count - 1].y) {
groundedPoints.Add(p);
}
}
}
Right now it's kind of inaccurate, but it doesn't have to be a really accurate solution.
Maybe someone can give me a hint? Thanks
Firstly, I would recommend using a centripetal Catmull–Rom spline, because it follows the points more strictly and needs fewer points to generate (it also only draws between p1 and p2), but I don't know exactly what you want to achieve, so:
I would transform your Bezier into a 2D Bezier and work only in 2D space with it; then, when you draw it (render it visually), you give it a Y value using https://docs.unity3d.com/ScriptReference/Terrain.SampleHeight.html
I do this with my splines and it gives quite an accurate spline in the end (road generation).
PLEASE NOTE:
The implicit Vector2/Vector3 conversion will not fit your needs; you need to add an extension method to convert Vector3 to Vector2 :)
(Vector(x,y,z) becomes Vector(x,y), but you need Vector(x,z))
Edit 1:
Code sample showing how to read out a terrain's actual height via Terrain.SampleHeight(), from a Vector2 coordinate that you are sure is above a terrain; if the Vector2 is not above the terrain it will give you back either null or the closest terrain height, I'm not sure which one at the moment (can't test it now) :)
public static Vector3 GetPoint_On_Terrain(Vector2 point){
float terrainHeightAtPoint = Terrain.activeTerrain.SampleHeight(new Vector3(point.x, 0, point.y));
return new Vector3(point.x, terrainHeightAtPoint, point.y);
}
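The 2D-then-sample idea can be sketched end to end: evaluate the curve in the ground (XZ) plane, then lift each sampled point with a height lookup. The height function below is a hypothetical stand-in for Terrain.SampleHeight:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier in 2D (the XZ ground plane)."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def ground_curve(p0, p1, p2, segments, sample_height):
    """Sample the 2D curve and lift each point onto the terrain.

    sample_height(x, z) stands in for Terrain.SampleHeight.  Note that
    i / segments must be real division (Python 3 gives this for free;
    in C# it needs a float cast).
    """
    pts = []
    for i in range(segments + 1):
        x, z = quadratic_bezier(p0, p1, p2, i / segments)
        pts.append((x, sample_height(x, z), z))
    return pts

# Flat "terrain" at height 2: every grounded point gets y == 2.
pts = ground_curve((0, 0), (1, 2), (2, 0), 4, lambda x, z: 2.0)
```

Working in 2D keeps the curve's shape independent of the terrain, so ray casting (and its misses at steep slopes) is avoided entirely.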
I am trying to extract the 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# using XNA, and I am using AForge.NET for the coplanar POSIT algorithm.
The steps I take in order to work out the distance:
1. Mark the corners on screen. Corners are represented in 2D vector form; the image centre is (0,0), up is positive Y, and right is positive X.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640; //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
new AVector3( -halfCornerSize, 0, halfCornerSize ),
new AVector3( halfCornerSize, 0, halfCornerSize ),
new AVector3( halfCornerSize, 0, -halfCornerSize ),
new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
new Vector3(halfCornerSize,0,-halfCornerSize),
new Vector3(-halfCornerSize,0,-halfCornerSize),
new Vector3(halfCornerSize,0,halfCornerSize),
new Vector3(-halfCornerSize,0,halfCornerSize)
};
Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
cornerPositions3d1[i] = (markerTransform * transform).Translation;
//DEBUG: project corner onto screen - represented by brown dots
Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
5. Look at the 3D distance between two corners of a marker; this represents 100mm. Find the scaling factor needed to convert that 3D distance to 100mm (I actually take the average scaling factor):
for (int i = 0; i < 4; i++)
{
//Distance scale;
distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4;
6. Finally, I find the 3D distance between related corners and multiply by the scaling factor to get the distance in mm:
for(int i = 0; i < 4; i++)
{
distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances obtained are never quite correct. I used the cutting board because it let me easily work out what the distances should be. The image above gives a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to match up exactly with the colored dots. The colored dots are the coordinates used as input to the coplanar POSIT; the brown dots are the positions calculated from the center of the marker via POSIT.
Does anyone have any idea what might be wrong here? I am pulling my hair out trying to figure it out. The code should be quite simple, and I don't think I have made any obvious mistakes with it. I am not great at maths, so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed, apply a 2D filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center. Maybe over multiple frames.
Step 3: Un-project to 3D. After 1 and 2 this should be easy.
Step 4: ???
Step 5: Profit! (Measure the distance in 3D and know your error.)
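For step 3, the un-projection for an ideal pinhole camera is just similar triangles: a pixel offset (u, v) from the principal point, observed at depth Z with focal length f in pixels, corresponds to X = u·Z/f and Y = v·Z/f in camera space. A minimal sketch under that pinhole assumption (no lens distortion):

```python
def unproject(u, v, depth, focal_px):
    """Pinhole un-projection: image offset (u, v) from the principal
    point, in pixels, to camera-space (X, Y, Z) at the given depth."""
    return (u * depth / focal_px, v * depth / focal_px, depth)

# A point 320 px right of center, seen at 1000 mm with f = 640 px,
# sits 500 mm to the right of the optical axis in camera space:
p = unproject(320.0, 0.0, 1000.0, 640.0)
```

This also shows why the focal length matters so much: any error in f scales every recovered X/Y linearly, which is why calibration comes before measurement.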
I think you need a 3D photo (two photos taken a set distance apart) so you can get the parallax depth from the image differences.
I have two objects (target and player), both with a Position (Vector3) and a Rotation (Quaternion). I want the target to rotate to face the player directly, so that when the target shoots something, it shoots right at the player.
I've seen plenty of examples of slerping toward the player, but I don't want incremental rotation. Well, I suppose that would be OK as long as I can make the slerp 100%, and as long as it actually worked.
FYI: I'm able to use the position and rotation to do plenty of other things, and it all works great except for this last piece, which I can't figure out.
EDIT
The code samples run in the target's class; Position is the target's position, Avatar is the player.
Using the value of 1 for the Slerp isn't working. The code below rotates some, but I think something is way off, because when it's drawn, the target scales up and then down as the player gets closer.
var A = new Vector3(Position.X, Position.Y, Position.Z);
var B = new Vector3(GameState.Avatar.Position.X, GameState.Avatar.Position.Y, GameState.Avatar.Position.Z);
A.Normalize();
B.Normalize();
var angle = Math.Acos(Vector3.Dot(A, B));
var axis = Vector3.Normalize(Vector3.Cross(A, B));
var rotOnAngle = new Quaternion(axis, (float)angle);
var newRot = Quaternion.Slerp(Quaternion.Identity, rotOnAngle, 1f);
Rotation = newRot;
Cannon.Shoot(Position, Rotation, this);
I tried this as well, and it doesn't quite work either... the target does rotate, but not to face the player. At least the scaling problem goes away, though.
Quaternion q = new Quaternion();
var pos = Vector3.Normalize(Position);
var pos2 = Vector3.Normalize(GameState.Avatar.Position);
var a = Vector3.Cross(Position, GameState.Avatar.Position);
q.X = a.X; q.Y = a.Y; q.Z = a.Z;
q.W = (float)Math.Sqrt(((int)Position.Length() ^ 2) * ((int)GameState.Avatar.Position.Length() ^ 2)) + Vector3.Dot(Position, GameState.Avatar.Position);
q.Normalize();
Rotation = q;
Cannon.Shoot(Position, Rotation, this);
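For reference, a shortest-arc quaternion should be built from the normalized direction between the two positions (player minus target), not from the normalized positions themselves, which is likely part of the problem in both attempts above. A sketch of that construction in plain math (not XNA-specific; it assumes the two directions are not exactly opposite):

```python
import math

def shortest_arc(f, t):
    """Quaternion (x, y, z, w) rotating unit vector f onto unit vector t."""
    cx = f[1] * t[2] - f[2] * t[1]          # cross(f, t)
    cy = f[2] * t[0] - f[0] * t[2]
    cz = f[0] * t[1] - f[1] * t[0]
    d = f[0] * t[0] + f[1] * t[1] + f[2] * t[2]  # dot(f, t)
    w = 1.0 + d                              # assumes f and t are not opposite
    n = math.sqrt(cx * cx + cy * cy + cz * cz + w * w)
    return (cx / n, cy / n, cz / n, w / n)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    # t = 2 * cross(q.xyz, v), then v' = v + w*t + cross(q.xyz, t)
    tx = 2 * (y * v[2] - z * v[1])
    ty = 2 * (z * v[0] - x * v[2])
    tz = 2 * (x * v[1] - y * v[0])
    return (v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx))

# Rotating the forward axis +Z onto +X:
q = shortest_arc((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
v = rotate(q, (0.0, 0.0, 1.0))
```

To use it here, f would be the target model's forward axis and t the normalized vector from the target's position to the player's position; the result is a full (non-incremental) rotation, so no slerp is needed.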
It's been a while since I did this sort of math, but I would have guessed that the third parameter there would simply be 1.
Edit: To qualify that, the last time I did this it was called Managed DirectX, not XNA!
I happened to ask the same question over on the Game Development Stack Exchange, and someone answered there. Make sure to read the comments on the answer; regardless, the answer/solution works great! Thanks, and sorry for asking here as well.
https://gamedev.stackexchange.com/questions/15070/orienting-a-model-to-face-a-target