Clone and resize SeismicCube - C#

I'm quite new to the Ocean Framework. I need to copy a SeismicCube object with a different size: specifically, I have to resize the K index of the cube for time/depth resampling. All I know how to do is clone a cube with exactly the same properties, something like this:
Template template = source.Template;
clone = collection.CreateSeismicCube(source, template);
where source is the original cube and clone is the result. Is there a way to give clone a different size, particularly along the K index (trace length)? I've explored the overloads of CreateSeismicCube but still can't work out how to fill in the correct parameters. Does anyone have a solution? Thanks in advance.

When you create a seismic cube using the overload that clones from another seismic cube, you cannot resize it in any direction (I, J, or K). If you want a different K dimension for your new cube, you have to create it using the long list of arguments that includes the vectors describing its rotation and spacing. You can generate those vectors from the original cube using the samples nearest its origin sample (0, 0, 0).
Consider the following locations in the cube, expressed by their I, J, K indexes. Since the K vector is easy to generate, needing only the sample rate, I'll focus on I and J here.
First, get the positions at the origin and at two neighboring traces.
Point3 I0J0 = inputCube.PositionAtIndex( new IndexDouble3( 0, 0, 0 ) );
Point3 I1J0 = inputCube.PositionAtIndex( new IndexDouble3( 1, 0, 0 ) );
Point3 I0J1 = inputCube.PositionAtIndex( new IndexDouble3( 0, 1, 0 ) );
Now build segments in the I and J directions and use them to create the vectors.
Vector3 iVector = new Vector3( new Segment3( I0J0, I1J0 ) );
Vector3 jVector = new Vector3( new Segment3( I0J0, I0J1 ) );
Now create the K vector from the input cube sampling. Note that you have to negate the value.
Vector3 kVector = new Vector3( 0, 0, -inputCube.SampleSpacingIJK.Z );
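Putting the pieces together, here is a rough sketch of how these vectors might feed into the long CreateSeismicCube overload. I have not verified the exact overload or parameter order here (check the Ocean API reference for your Petrel version), and NumSamplesIJK, the new sample rate and the commented-out call are assumptions for illustration only:
// Illustrative sketch only -- verify the exact CreateSeismicCube overload
// and parameter order against the Ocean API documentation.
Point3 origin = inputCube.PositionAtIndex(new IndexDouble3(0, 0, 0));
Index3 numSamples = inputCube.NumSamplesIJK;          // original I, J, K counts (assumed property)
double newSampleRate = 2.0;                           // e.g. resample K to a 2 ms interval
int newNumK = (int)Math.Ceiling(numSamples.K * inputCube.SampleSpacingIJK.Z / newSampleRate);
Vector3 newKVector = new Vector3(0, 0, -newSampleRate);

// Something along these lines, passing the vectors built above:
// clone = collection.CreateSeismicCube(
//     numSamples.I, numSamples.J, newNumK,
//     origin, iVector, jVector, newKVector,
//     template, domain);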

Related

Draw contours for determined landmark points (OpenCvForUnity)

I have worked with OpenCV in Python before, but I am finding it hard to use OpenCV with Unity.
I have trained data for specific points on the face. I can find the landmark points and show them on the Unity WebCamTexture, but I want to draw a contour around the landmark points that I determine. To draw a contour through the existing points, I first need to convert the landmark points to convex hull points. How can I convert them?
I tried
List<List<Vector2>> landmarkPoints = new List<List<Vector2>>();
OpenCVForUnityUtils.ConvertVector2ListToArray(landmarkPoints) ;
But the landmark points don't get converted. I need to convert the landmark points to hull points.
Imgproc.drawContours (rgbaMat, hullPoints, -1, new Scalar (0, 255, 0), 2);
Could you help me, please?
I finally found a solution. If you run into the same problem, this answer could help you.
// detect face landmark points.
OpenCVForUnityUtils.SetImage(faceLandmarkDetector, rgbaMat);
for (int i = 0; i < trackedRects.Count; i++)
{
    // rect is the UnityEngine.Rect for trackedRects[i]
    List<Vector2> points = faceLandmarkDetector.DetectLandmark(rect);
    // convert the landmark points to OpenCV Point objects
    List<Point> landmarkPointList = OpenCVForUnityUtils.ConvertVector2ListToPointList(points);
    // wrap them in a MatOfPoint so drawContours can use them
    MatOfPoint hullPointMat = new MatOfPoint();
    hullPointMat.fromList(landmarkPointList);
    List<MatOfPoint> hullPoints = new List<MatOfPoint>();
    hullPoints.Add(hullPointMat);
    // draw the filled contour over the landmark points
    Imgproc.drawContours(rgbaMat, hullPoints, -1, new Scalar(150, 100, 5, 255), -1);
}
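If you actually want the convex hull of the landmarks (rather than drawing the raw landmark contour as above), a hedged sketch using the Java-style Imgproc.convexHull wrapper could look like this; landmarkPointList is the list built inside the loop above:
// Compute the convex hull indices of the landmark points, then
// rebuild the hull polygon and draw its outline.
MatOfPoint pointMat = new MatOfPoint();
pointMat.fromList(landmarkPointList);

MatOfInt hullIndices = new MatOfInt();
Imgproc.convexHull(pointMat, hullIndices);

List<Point> hullPointList = new List<Point>();
foreach (int idx in hullIndices.toList())
{
    hullPointList.Add(landmarkPointList[idx]);
}

MatOfPoint hullMat = new MatOfPoint();
hullMat.fromList(hullPointList);
Imgproc.drawContours(rgbaMat, new List<MatOfPoint> { hullMat }, -1, new Scalar(0, 255, 0, 255), 2);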

How can I get the worldspace coordinates of a pixel on a texture?

I need to be able to get the coordinates of a pixel under the mouse to ensure that when I calculate the distance between 2 points, it ends up being the correct distance.
My issue is that when I try to translate the texture, the mouse (drag based translation) never appears to "grab" a specific pixel.
What I am trying to achieve:
I am trying to create an image viewer program using OpenGL 4 with basic pan and zoom functionality. I am trying to ensure that the following are always true:
The pixel that is under the mouse when the click is initiated (mouse down) is the same one under the mouse when the click is released (mouse up).
The pixel that is under the mouse when the user uses the scroll wheel to zoom is always the center point of the zoom (scale the image and then translate to ensure that pixel is in the center of the viewable window)
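For concreteness, the zoom invariant can be written out as a small hedged sketch (assuming screen = imagePoint * scale + translation; the names p, t0, s0 and s1 are illustrative, not from my code):
// Keep the image point p (the pixel under the cursor, in image coordinates)
// at the same screen position while the scale changes from s0 to s1.
// Derivation: p * s0 + t0 == p * s1 + t1  =>  t1 = t0 + p * (s0 - s1).
static Vector2 ZoomAboutPoint(Vector2 p, Vector2 t0, float s0, float s1)
{
    return t0 + p * (s0 - s1);
}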
What I have tried:
I have tried taking the screen coordinates for the start and end points and dividing them by the screen width/height for X and Y to get the "delta" (change). This never made the mouse appear to anchor to the image.
I then tried multiplying that number by the zoom level (a value between 0 and 1). That seemed to help a bit, but not much.
I also tried (my most recent attempt) using the image height/width instead of the screen height/width.
The last thing I have been trying to work with is using GL.ReadPixels, but it needs an IntPtr as the final parameter. I have not been able to find a good example of how that code would work in C#. There are a bunch of C++ examples, but almost no C# examples.
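For the GL.ReadPixels part specifically: OpenTK (which the GL/Matrix4/Vector3 types here suggest) exposes generic overloads that accept a managed array, so no IntPtr is needed. A minimal hedged sketch, with mouseX, mouseY and viewportHeight as assumed inputs:
// Read the RGBA value of the single pixel under the mouse.
// GL's y axis starts at the bottom of the framebuffer while mouse
// coordinates usually start at the top, hence the flip.
byte[] pixel = new byte[4];
int glY = viewportHeight - mouseY - 1;
GL.ReadPixels(mouseX, glY, 1, 1, PixelFormat.Rgba, PixelType.UnsignedByte, pixel);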
DrawBuffer code:
This is the code that calls all of the OpenGL code. The information that I need is how to ensure that the correct values get put into translationVector.
Extra information
The image viewer is built to handle two images (that are to be used for stereoscopic viewing). The main "image" that is going to be viewed is the left image.
The corrected offset is code that predates me, so I am not 100% sure about the following:
correctedOffset is the delta of a mouse movement in addition to an adjustment if there are 2 images.
normalizedOffset is a delta between 0 and 1 to be used to accurately translate the MVP matrix
The following classes are based on OpenGL tutorials by TheCherno:
VertexBuffer
VertexBufferLayout
IndexBuffer
Shader (the type of _shaderProgram)
Renderer
Texture (the type of _imageTexture)
VertexArray
GL.Viewport(new Size(OrthoBounds.Width, OrthoBounds.Height));
Vector3 translationVector = new Vector3();
// The model matrix controls the position of the model (object in your frame)
Matrix4 model;
// The view matrix controls the position of the "camera". In our use, we do not need to set this.
Matrix4 view = new Matrix4();
// The projection matrix is the field of view.
Matrix4 projection = Matrix4.CreateOrthographic(OrthoBounds.Left, OrthoBounds.Top, -1, 1);
// The identity matrix gives a starting point for matrix calculations.
Matrix4 identity = Matrix4.Identity;
//TODO: UI CHANGE
GL.ClearColor(0.09f, 0.09f, 0.09f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit);
// todo: if left and right image are different sizes?
// fit image to inside of control
float scale = Math.Min(
(float)Bounds.Width / LeftImage.Image.ImageSize.Width,
(float)Bounds.Height / LeftImage.Image.ImageSize.Height);
Size imageSize = new Size(
(int)(LeftImage.Image.ImageSize.Width * scale),
(int)(LeftImage.Image.ImageSize.Height * scale));
// use scaled image to calculate bounds
Rectangle imageBounds = new Rectangle(Point.Empty, imageSize);
imageBounds.Offset(-imageSize.Width / 2, -imageSize.Height / 2);
// todo: maybe use .Offset here, but it would need property changes
PointF correctedOffset = new PointF(
OrthoTranslate.X + (RegisterTranslate.X * (eyeSide == LateralityType.Left ? -1 : 1)),
OrthoTranslate.Y + (RegisterTranslate.Y * (eyeSide == LateralityType.Right ? -1 : 1))
);
/*
* List of vertices that we will need to render the image
* Layout:
* Vertex X Coord, Vertex Y Coord, Texture X Coord, Texture Y Coord
*/
float[] vertices = new float[]
{
imageBounds.Left, imageBounds.Top, 1.0f, 0.0f, // 0
imageBounds.Right, imageBounds.Top, 0.0f, 0.0f, // 1
imageBounds.Right, imageBounds.Bottom, 0.0f, 1.0f, // 2
imageBounds.Left, imageBounds.Bottom, 1.0f, 1.0f // 3
};
/*
* List of indices that we will use to identify which vertices
* to use for each triangle. This reduces the number of times we
* would need to specify vertices as there are no duplicates.
*/
int[] indices =
{
1, 2, 3,
3, 0, 1,
};
var normalizedOffset = new PointF(correctedOffset.X / imageBounds.Width,
correctedOffset.Y / imageBounds.Height);
translationVector = new Vector3(normalizedOffset.X, normalizedOffset.Y, 0.0f);
// Translate the model matrix
model = Matrix4.CreateTranslation(translationVector);
// Scale the model matrix
model *= Matrix4.CreateScale(Zoom, Zoom, 1);
/*
* The model, view, projection matrix is the final matrix that contains
* the information from all three matrices to form the one that will be
* used to render the object on the screen.
*/
_mvpMatrix = projection * identity * model;
Vector3.Unproject(translationVector, 0.0f, 0.1f, 1.0f, 1.0f, 0.0f, 1.0f, _mvpMatrix);
_shaderProgram.SetUniformMat4f("u_MVP", _mvpMatrix);
_shaderProgram.SetUniform1("brightness", Brightness - 1f);
_shaderProgram.SetUniform1("contrast", Contrast);
// set color channel parameter using glsl
_shaderProgram.SetUniform4f("channels", new Vector4(_channelState[RasterColorChannel.Red], _channelState[RasterColorChannel.Green], _channelState[RasterColorChannel.Blue], 1.0f));
// set texture unit identifiers
_shaderProgram.SetUniform1("image", _imageTexture.Id);
_shaderProgram.SetUniform1("overlay", _imageTexture.Id);
// update overlay parameter for shader
_shaderProgram.SetUniform1("drawOverlay", _drawOverlays ? 1 : 0);
_renderer.Clear();
//The vertex array
VertexArray va = new VertexArray();
//The vertex buffer
VertexBuffer vb = new VertexBuffer(vertices, 4 * 4 * sizeof(float));
//The vertex buffer layout
VertexBufferLayout layout = new VertexBufferLayout();
//The index buffer
IndexBuffer ib = new IndexBuffer(indices);
//Add a grouping of two for the vertex XY coordinates
layout.AddFloat(2);
//Add a grouping of two for the texture XY coordinates
layout.AddFloat(2);
//Add the vertex buffer with the associated layout to the vertex array object
va.AddBuffer(vb, layout);
_imageTexture.Bind();
_renderer.Draw(va, ib, _shaderProgram);
_imageTexture.Unbind();
Outcomes
What I Expected to happen vs what did happen
Scenario:
At the normal 0 zoom level, I dragged from the left side to the right side (to drag the image right).
Expected outcome:
The mouse would stay anchored to a specific part of the image as it was moving.
Actual outcome:
The mouse would start out close to the original point but drift further away from it as I dragged further to the right.
Other things I noticed
The more I zoom in, the more accurate it appears
This means that it stays closer to the original point that I dragged from
It will still move the image even if I drag from outside of the image bounds
I want it only to move the image when you are dragging from within the image.
Let me know if there is any other information that may be helpful!

AxisAngleRotation3D to Quaternion gives unexpected result

I am developing a controller in WPF 3D (using C#) to easily move and rotate a ProjectionCamera using Move() and Pitch() functions. My controller will become a Behavior<ProjectionCamera> that can be attached to a ProjectionCamera. In order to initialize the controller, I want to calculate the current rotation of the camera by looking at its current Up and Forward vectors and comparing them to the default camera orientation (Up = [0 1 0], Forward = [0 0 -1]). In other words, I want to calculate a rotation that will transform the camera's default orientation into its current one.
Ultimately, I want to express the rotation as a single Quaternion, but as an intermediate step I first calculate the Proper Euler Angle rotations of the form z-N-Z expressed as AxisAngleRotation3D-values, following the default definition of Wikipedia:
var alphaRotation = CalculateRotation(z, x, N);
var betaRotation = CalculateRotation(N, z, Z);
var gammaRotation = CalculateRotation(Z, N, X);
with
CalculateRotation(Vector3D axisOfRotation, Vector3D from, Vector3D to) : AxisAngleRotation3D
The Euler Angle Rotations seem to be calculated correctly, based on some unit tests. However, when I convert these rotations to a single Quaternion, the resulting Quaternion represents a rotation that differs from the Euler Angle Rotations, and I don't know why.
This is how I convert the Euler Angles to a single Quaternion:
var rotation =
new Quaternion(alphaRotation.Axis, alphaRotation.Angle) *
new Quaternion(betaRotation.Axis, betaRotation.Angle) *
new Quaternion(gammaRotation.Axis, gammaRotation.Angle);
For example, when I initialize a ProjectionCamera with an UpDirection of [1 0 0], meaning it's been rotated 90 degrees around its LookDirection axis ([0 0 -1]), the calculated Euler Angle Rotations are as follows:
alphaRotation --> 90 deg. around [0 1 0]
betaRotation --> 90 deg. around [0 0 -1]
gammaRotation --> -90 deg. around [1 0 0]
My test verifies that, when applied in order, these rotations will transform the default Up-vector ([0 1 0]) into the current Up-vector ([1 0 0]), effectively rotating it 90 deg. around the [0 0 -1] axis. (It's also reasonably straightforward to verify this by hand.)
However, when I apply the calculated QuaternionRotation to the default Up-vector, it is transformed to the vector [-1 0 0], which is obviously wrong. I have hard-coded these results within a Unit Test and got the same results:
[TestMethod]
public void ConversionTest()
{
    var vector = new Vector3D(0, 1, 0);
    var alphaRotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 90);
    var betaRotation = new AxisAngleRotation3D(new Vector3D(0, 0, -1), 90);
    var gammaRotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), -90);
    var a = new Quaternion(alphaRotation.Axis, alphaRotation.Angle);
    var b = new Quaternion(betaRotation.Axis, betaRotation.Angle);
    var c = new Quaternion(gammaRotation.Axis, gammaRotation.Angle);
    var combinedRotation = a * b * c;
    var x = Apply(vector, alphaRotation, betaRotation, gammaRotation);
    var y = Apply(vector, combinedRotation);
}
When you run the test above, you will see that x gives you the expected vector ([1 0 0]), but y will be different, even though it should represent exactly the same rotation.
What am I missing?
I solved the issue. Apparently, the order of multiplication of the individual Quaternions should be reversed. So, to convert Euler Angles (or any set of rotations) into a single Quaternion in .NET, you should do the following:
var rotation = gammaRotation * betaRotation * alphaRotation;
where rotation represents geometrically applying alphaRotation first, then betaRotation, and finally gammaRotation. I just wish they had documented this, since the meaning of the multiplication order depends on the specific library you are working with.
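For reference, here is a minimal sketch of the corrected combination applied through WPF's Media3D types (a, b and c are the quaternions from the test above):
// Reversed order: the combined quaternion now applies a (alpha) first,
// then b (beta), then c (gamma) when it transforms a vector.
var combinedRotation = c * b * a;

// Applying it via a RotateTransform3D should now agree with the
// step-by-step result x from the test: (0, 1, 0) -> (1, 0, 0).
var transform = new RotateTransform3D(new QuaternionRotation3D(combinedRotation));
Vector3D rotatedUp = transform.Transform(new Vector3D(0, 1, 0));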

Rotate model group according to mouse drag direction and location in the model

I added several cubes to a Viewport3D in WPF and now I want to manipulate groups of them with the mouse.
When I click and drag over one and a half of those cubes, I want the whole plane rotated in the direction of the drag. The rotation itself will be handled by RotateTransform3D, so that part won't be a problem.
The problem is that I don't know how I should handle the drag, more exactly:
How can I know which faces of the cubes were dragged over in order to determine what plane to rotate?
For example, in the case below I'd like to know that I need to rotate the right plane of cubes by 90 degrees clockwise, so that the row of blue faces ends up at the top instead of the white ones, which move to the back.
And in this example the top layer should be rotated 90 degrees counterclockwise:
Currently my idea is to place some sort of invisible areas over the cube, check which one the drag is happening in with VisualTreeHelper.HitTest, and then determine which plane I should rotate. This area matches the first drag example:
But when I add all four regions then I'm back to square one because I still need to determine the direction and which face to rotate according to which areas were "touched".
I'm open to ideas.
Please note that this cube can be freely moved, so it may not be in the initial position when the user clicks and drags, this is what bothers me the most.
PS:
The drag will be implemented with a combination of MouseLeftButtonDown, MouseMove and MouseLeftButtonUp.
MouseEvents
You'll need to use VisualTreeHelper.HitTest() to pick Visual3D objects (process may be simpler if each face is a separate ModelVisual3D). Here is some help on the HitTesting in general, and here is a very useful tidbit that simplifies the picking process tremendously.
Event Culling
Let's say that you now have two ModelVisual3D objects from your picking tests (one from the MouseDown event, one from the MouseUp event). First, we should detect if they are coplanar (to avoid picks going from one face to another). One way to do this is to compare the face Normals to see if they are pointing the same direction. If you have defined the Normals in your MeshGeometry3D, that's great. If not, then we can still find it. I'd suggest adding a static class for extensions. An example of calculating a normal:
public static class GeometricExtensions3D
{
    public static Vector3D FaceNormal(this MeshGeometry3D geo)
    {
        // get first triangle's positions
        var ptA = geo.Positions[geo.TriangleIndices[0]];
        var ptB = geo.Positions[geo.TriangleIndices[1]];
        var ptC = geo.Positions[geo.TriangleIndices[2]];
        // get specific vectors for right-hand normalization
        var vecAB = ptB - ptA;
        var vecBC = ptC - ptB;
        // normal is cross product
        var normal = Vector3D.CrossProduct(vecAB, vecBC);
        // unit vector for cleanliness
        normal.Normalize();
        return normal;
    }
}
Using this, you can compare the normals of the MeshGeometry3D from your Visual3D hits (lots of casting involved here) and see if they are pointing in the same direction. I would use a tolerance test on the X,Y,Z of the vectors as opposed to a straight equivalence, just for safety's sake. Another extension might be helpful:
public static double SSDifference(this Vector3D vectorA, Vector3D vectorB)
{
    // set vectors to length = 1
    vectorA.Normalize();
    vectorB.Normalize();
    // subtract to get difference vector
    var diff = Vector3D.Subtract(vectorA, vectorB);
    // sum of the squares of the difference (also happens to be the difference vector squared)
    return diff.LengthSquared;
}
If they are not coplanar (SSDifference > some arbitrary test value), you can return here (or give some kind of feedback).
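As a small usage sketch of that test (downMesh and upMesh stand in for the two MeshGeometry3D objects cast out of the hit results; the tolerance value is arbitrary):
// Compare the face normals of the two picked meshes; bail out if the
// drag crossed onto a face pointing in a different direction.
Vector3D downNormal = downMesh.FaceNormal();
Vector3D upNormal = upMesh.FaceNormal();
const double tolerance = 1e-6;   // arbitrary "same direction" tolerance
if (downNormal.SSDifference(upNormal) > tolerance)
{
    return;   // not coplanar: ignore this drag (or give feedback)
}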
Object Selection
Now that we have determined our two faces and that they are, indeed, ripe for our desired event-handling, we must deduce a way to bang out the information from what we have. You should still have the Normals you calculated before. We're going to be using them again to pick the rest of the faces to be rotated. Another extension method can be helpful for the comparison to determine if a face should be included in the rotation:
public static bool SharedColumn(this MeshGeometry3D basis, MeshGeometry3D compareTo, Vector3D normal)
{
    foreach (Point3D basePt in basis.Positions)
    {
        foreach (Point3D compPt in compareTo.Positions)
        {
            var compToBasis = basePt - compPt; // vector from compare point to basis point
            // at least one will point the same direction as the normal
            // if the faces are shared in a column
            if (normal.SSDifference(compToBasis) < float.Epsilon)
            {
                return true;
            }
        }
    }
    return false;
}
You'll need to cull faces for both of your meshes (MouseDown and MouseUp), iterating over all of the faces. Save the list of Geometries that need to be rotated.
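A rough sketch of that culling pass (allFaceGeometries, downMesh, upMesh and faceNormal are assumed to come from your own scene and hit-test code, not from anything defined above):
// Collect every face geometry that shares a column with either picked
// face, as seen along the shared face normal.
var taggedGeometries = new List<MeshGeometry3D>();
foreach (MeshGeometry3D geo in allFaceGeometries)
{
    if (downMesh.SharedColumn(geo, faceNormal) || upMesh.SharedColumn(geo, faceNormal))
    {
        taggedGeometries.Add(geo);
    }
}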
RotateTransform
Now the tricky part. An Axis-Angle rotation takes two parameters: a Vector3D representing the axis normal to the rotation (using right-hand rule) and the angle of rotation. But the midpoint of our cube may not be at (0, 0, 0), so rotations can be tricky. Ergo, first we must find the midpoint of the cube! The simplest way I can think of is to add the X, Y, and Z components of every point in the cube and then divide them by the number of points. The trick, of course, will be not to add the same point more than once! How you do that will depend on how your data is organized, but I'll assume it to be a (relatively) trivial exercise. Instead of applying transforms, you'll want to move the points themselves, so instead of creating and adding to a TransformGroup, we're going to build Matrices! A translate matrix looks like:
1, 0, 0, dx
0, 1, 0, dy
0, 0, 1, dz
0, 0, 0, 1
So, given the midpoint of your cube, your translation matrices will be:
var cp = GetCubeCenterPoint(); // user-defined method of retrieving cube's center point
// WPF's Matrix3D uses row vectors, so the translation offsets go in the last row
var matToCenter = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -cp.X, -cp.Y, -cp.Z, 1);
var matBackToPosition = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    cp.X, cp.Y, cp.Z, 1);
Which just leaves our rotation. Do you still have reference to the two meshes we picked from the MouseEvents? Good! Let's define another extension:
public static Point3D CenterPoint(this MeshGeometry3D geo)
{
    var midPt = new Point3D(0, 0, 0);
    var n = geo.Positions.Count;
    foreach (Point3D pt in geo.Positions)
    {
        midPt.Offset(pt.X, pt.Y, pt.Z);
    }
    midPt.X /= n; midPt.Y /= n; midPt.Z /= n;
    return midPt;
}
Get the vector from the MouseDown's mesh to the MouseUp's mesh (the order is important).
var swipeVector = MouseUpMesh.CenterPoint() - MouseDownMesh.CenterPoint();
And you still have the normal for our hit faces, right? We can (basically magically) get the rotation axis by:
var rotationAxis = Vector3D.CrossProduct(swipeVector, faceNormal);
Which will make your rotation angle always +90°. Make the RotationMatrix (source):
rotationAxis.Normalize();
var cosT = Math.Cos(Math.PI / 2);
var sinT = Math.Sin(Math.PI / 2);
var x = rotationAxis.X;
var y = rotationAxis.Y;
var z = rotationAxis.Z;
// build the matrix around the rotation axis (same row-vector layout as above)
var matRotate = new Matrix3D(
    cosT + x*x*(1 - cosT),    y*x*(1 - cosT) + z*sinT,  z*x*(1 - cosT) - y*sinT,  0,
    x*y*(1 - cosT) - z*sinT,  cosT + y*y*(1 - cosT),    z*y*(1 - cosT) + x*sinT,  0,
    x*z*(1 - cosT) + y*sinT,  y*z*(1 - cosT) - x*sinT,  cosT + z*z*(1 - cosT),    0,
    0, 0, 0, 1);
Combine them to get the Transformation matrix, note that the order is important. We want to take the point, transform it to coordinates relative to the origin, rotate it, then transform it back to original coordinates, in that order. So:
var matTrans = Matrix3D.Multiply(Matrix3D.Multiply(matToCenter, matRotate), matBackToPosition);
Then, you're ready to move the points. Iterate through each Point3D in each MeshGeometry3D that you previously tagged for rotation, and do:
foreach (MeshGeometry3D geo in taggedGeometries)
{
    for (int i = 0; i < geo.Positions.Count; i++)
    {
        geo.Positions[i] *= matTrans;
    }
}
And then... oh wait, we're done!

Trying to accurately measure 3D distance from a 2D image

I am trying to extract the 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# and I am using XNA. I am using AForge.NET for the coplanar POSIT.
The steps I take in order to work out the distance:
1. Mark corners on screen. Corners are represented in 2D vector form, Image centre is (0,0). Up is positive in the Y direction, right is positive in the X direction.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640; //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
    new AVector3( -halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, -halfCornerSize ),
    new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
    new Vector3(halfCornerSize, 0, -halfCornerSize),
    new Vector3(-halfCornerSize, 0, -halfCornerSize),
    new Vector3(halfCornerSize, 0, halfCornerSize),
    new Vector3(-halfCornerSize, 0, halfCornerSize)
};
Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
cornerPositions3d1[i] = (markerTransform * transform).Translation;
//DEBUG: project corner onto screen - represented by brown dots
Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
5. Look at the 3D distance between two corners on a marker, this represents 100mm. Find the scaling factor needed to convert this 3D distance to 100mm. (I actually get the average scaling factor):
for (int i = 0; i < 4; i++)
{
    //Distance scale;
    distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4;
6. Finally I find the 3D distance between related corners and multiply by the scaling factor to get distance in mm:
for (int i = 0; i < 4; i++)
{
    distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances acquired are never truly correct. I used the cutting board as it allowed me easy calculation of what the distances should be. The above image calculated a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is the fact that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to exactly match up with the colored dots. The colored dots are the coordinates used as input to the CoPlanar posit. The brown dots are the calculated positions from the center of the marker calculated via POSIT.
Does anyone have any idea what might be wrong here? I am pulling out my hair trying to figure it out. The code should be quite simple, I don't think I have made any obvious mistakes with the code. I am not great at maths so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed apply a 2d filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center, maybe over multiple frames.
Step 3: Un-project to 3d. After 1 and 2 this should be easy.
Step 4: ???
Step 5: Profit! (Measure distance in 3d and know your error)
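For instance, under a simple pinhole camera model the un-projection in Step 3 is just a scale by depth over focal length; a hedged sketch (u, v, z and fPixels are illustrative names):
// (u, v): pixel offset from the image centre; z: depth in mm;
// fPixels: focal length expressed in pixels.
static Vector3 Unproject(double u, double v, double z, double fPixels)
{
    return new Vector3((float)(u * z / fPixels), (float)(v * z / fPixels), (float)z);
}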
I think you need a 3D photo (two photos taken a known distance apart) so you can get the parallax distance from the image differences.
