Conversion of points in one Projected Coordinate System to Another - C#

How can I convert points from one projected coordinate system to another using ArcObjects in C#?
//Coordinates in feet
double feetLong = 2007816.711;
double feetLat = 393153.895;
//Coordinates in decimal degrees (Should be the resulting coordinates)
//long: -97.474575;
//lat: 32.747352;
double[] feetPair = new double[] { feetLong, feetLat };
//Our projection used in GIS
string epsg32038 = "PROJCS[\"NAD27 / Texas North Central\",GEOGCS[\"GCS_North_American_1927\",DATUM[\"D_North_American_1927\",SPHEROID[\"Clarke_1866\",6378206.4,294.9786982138982]],PRIMEM[\"Greenwich\",0],UNIT[\"Degree\",0.017453292519943295]],PROJECTION[\"Lambert_Conformal_Conic\"],PARAMETER[\"standard_parallel_1\",32.13333333333333],PARAMETER[\"standard_parallel_2\",33.96666666666667],PARAMETER[\"latitude_of_origin\",31.66666666666667],PARAMETER[\"central_meridian\",-97.5],PARAMETER[\"false_easting\",2000000],PARAMETER[\"false_northing\",0],UNIT[\"Foot_US\",0.30480060960121924]]";
//Google Maps projection
string epsg3785 = "PROJCS[\"Popular Visualisation CRS / Mercator\",GEOGCS[\"Popular Visualisation CRS\",DATUM[\"D_Popular_Visualisation_Datum\",SPHEROID[\"Popular_Visualisation_Sphere\",6378137,0]],PRIMEM[\"Greenwich\",0],UNIT[\"Degree\",0.017453292519943295]],PROJECTION[\"Mercator\"],PARAMETER[\"central_meridian\",0],PARAMETER[\"scale_factor\",1],PARAMETER[\"false_easting\",0],PARAMETER[\"false_northing\",0],UNIT[\"Meter\",1]]";
This is the beginning of my code. I've tried using the CoordinateSystemFactory but never got anything to work. I intend to use ProjNet to solve this although I am open to any other way. I am really new to using ArcObjects to create custom tools and have been stuck on this for a while.
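For what it's worth, a minimal ProjNet sketch of the WKT-to-WKT route (not ArcObjects) might look like the following. It assumes the ProjNet/ProjNet4GeoAPI package is referenced, targets WGS84 geographic to get decimal degrees (the Web Mercator WKT above would return metres instead), and ignores the NAD27 datum shift since the WKT has no TOWGS84 clause, so a small offset is to be expected:
// Sketch only: ProjNet route, not ArcObjects.
// Assumes the ProjNet (ProjNet4GeoAPI) NuGet package is referenced.
using ProjNet.CoordinateSystems;
using ProjNet.CoordinateSystems.Transformations;

var csFactory = new CoordinateSystemFactory();
var sourceCs = csFactory.CreateFromWkt(epsg32038);   // NAD27 / Texas North Central (US feet)
var targetCs = GeographicCoordinateSystem.WGS84;     // decimal degrees (my choice, not the Mercator WKT above)

var ctFactory = new CoordinateTransformationFactory();
var transform = ctFactory.CreateFromCoordinateSystems(sourceCs, targetCs);

// Transform takes { easting, northing } and returns { longitude, latitude }
double[] lonLat = transform.MathTransform.Transform(feetPair);
Console.WriteLine($"long: {lonLat[0]}, lat: {lonLat[1]}");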

Related

ML.NET plotting K-means clustering results?

I'm experimenting with ML.NET in an unsupervised clustering scenario. My starting data are fewer than 30 records with 5 features each, in a TSV file, e.g. (the label will of course be ignored):
Label S1 S2 S3 S4 S5
alpha 0.274167987321712 0.483359746434231 0.0855784469096672 0.297939778129952 0.0332805071315372
beta 0.378208470054279 0.405409549510871 0.162317151706584 0.292342604802355 0.0551994848048085
...
My starting point was the iris tutorial, a K-means clustering sample. In my case I want 3 clusters. As I'm just learning, once the model is created I'd like to use it to add the clustering data to each record in a copy of the original file, so I can examine the results and plot scatter graphs.
I started with this training code (say MyModel is the POCO class representing an input row, with properties for S1-S5):
// load data
MLContext mlContext = new MLContext(seed: 0);
IDataView dataView = mlContext.Data.LoadFromTextFile<MyModel>(
    dataPath, hasHeader: true, separatorChar: '\t');

// train model
const string featuresColumnName = "Features";
EstimatorChain<ClusteringPredictionTransformer<KMeansModelParameters>> pipeline =
    mlContext.Transforms
        .Concatenate(featuresColumnName, "S1", "S2", "S3", "S4", "S5")
        .Append(mlContext.Clustering.Trainers.KMeans(featuresColumnName,
            numberOfClusters: 3));
TransformerChain<ClusteringPredictionTransformer<KMeansModelParameters>> model =
    pipeline.Fit(dataView);

// save model
using (FileStream fileStream = new FileStream(modelPath,
    FileMode.Create, FileAccess.Write, FileShare.Write))
{
    mlContext.Model.Save(model, dataView.Schema, fileStream);
}
Then I load the saved model, read every record from the original data, and get its cluster ID. This sounds a bit convoluted, but my learning intent here is inspecting the results before playing with them. The results should be saved in a new file, together with the centroid coordinates and the point coordinates.
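For reference, that load-and-predict step looks roughly like the sketch below; the shape of MyPrediction is an assumption based on ML.NET's default K-means output column names ("PredictedLabel" and "Score"), and the Microsoft.ML, Microsoft.ML.Data and System.IO namespaces are assumed:
// Sketch of reloading the model and building the predictor used in the loop below.
public class MyPrediction
{
    [ColumnName("PredictedLabel")]
    public uint PredictedClusterId { get; set; }

    [ColumnName("Score")]
    public float[] Distances { get; set; }
}

// reload the model saved above
ITransformer model;
using (FileStream fileStream = new FileStream(modelPath, FileMode.Open, FileAccess.Read))
{
    model = mlContext.Model.Load(fileStream, out DataViewSchema schema);
}

// predictor used in the loop below
PredictionEngine<MyModel, MyPrediction> predictor =
    mlContext.Model.CreatePredictionEngine<MyModel, MyPrediction>(model);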
Yet this API does not seem transparent enough to easily access the centroids; I found only one post, which is rather old, and its code no longer compiles. I used it as a hint to recover the data via reflection, but this is a hack.
Also, I'm not sure about the details of the data provided by the framework. I can see that there are 3 centroid vectors (named cx, cy, cz in the sample code), each with 5 elements (the 5 features, in their concatenated input order, I presume, i.e. from S1 to S5); also, each prediction provides a 3-fold distance (dx, dy, dz). If these assumptions are OK, I could assign a cluster ID to each record like this:
// for each record in the original data
foreach (MyModel record in csvReader.GetRecords<MyModel>())
{
    // get its cluster ID
    MyPrediction prediction = predictor.Predict(record);

    // get the centroids just once, as of course they are the same
    // for all the records referring their distances to them
    if (cx == null)
    {
        // get centroids (via reflection...):
        // https://github.com/dotnet/machinelearning/blob/master/docs/samples/Microsoft.ML.Samples/Dynamic/Trainers/Clustering/KMeansWithOptions.cs#L49
        // https://social.msdn.microsoft.com/Forums/azure/en-US/c09171c0-d9c8-4426-83a9-36ed72a32fe7/kmeans-output-centroids-and-cluster-size?forum=MachineLearning
        VBuffer<float>[] centroids = default;
        var last = ((TransformerChain<ITransformer>)model).LastTransformer;
        KMeansModelParameters kparams = (KMeansModelParameters)
            last.GetType().GetProperty("Model").GetValue(last);
        kparams.GetClusterCentroids(ref centroids, out int k);
        cx = centroids[0].GetValues().ToArray();
        cy = centroids[1].GetValues().ToArray();
        cz = centroids[2].GetValues().ToArray();
    }
    float dx = prediction.Distances[0];
    float dy = prediction.Distances[1];
    float dz = prediction.Distances[2];
    // ... calculate and save full details for the record ...
}
Given this scenario, I suppose I can get all the details about each record position in a pretrained model in the following way:
dx, dy, dz are the distances.
cx[0], cy[0], cz[0] + the distances (dx, dy, and dz respectively) should be the position of the S1 point; cx[1], cy[1], cz[1] + the distances the position of S2; and so forth up to S5 (cx[4], etc.).
In this case, I could plot these data in a 3D scatter graph. Yet, I'm totally new to ML.NET, and thus I'm not sure about these assumptions, and it's well possible I'm on the wrong path. Could anyone point me in the right direction?
I just figured this out myself - it took a bit of digging, so for those interested here's some good info:
The centroids can now be retrieved right off the fit model via
VBuffer<float>[] centroids = default;
var modelParams = trainedModel.Model;
modelParams.GetClusterCentroids(ref centroids, out var k);
However, the documentation here is annoyingly misleading, because the centroids it claims are "coordinates" are not coordinates but rather the mean values of the feature columns for the cluster.
Based on your pipeline this probably makes them pretty useless if, like me, you have 700 feature columns and half a dozen transformation steps. As far as I can tell (please correct me if I'm wrong, anyone!) there is no way to transform the centroids into Cartesian coordinates for charting.
But we can still use them.
What I ended up doing was, after training my model on my data, running all my data through the model's prediction function. This gives me the predicted cluster ID and the Euclidean distances to all the other cluster centroids.
Using the predicted cluster ID and the centroid means for that cluster, you can map your data point's features over the means to get a "weighted value" of your data row based on the predicted cluster. Basically a centroid will say that one column has mean 0.6533, another column 0.211, and another column 0. By running your data point's features, let's say (5, 3, 1), through the centroid you'll get (3.2665, 0.633, 0), which is a representation of the data row as included in the predicted cluster.
This is still just a row of data, however. To turn it into Cartesian coordinates for a point graph I simply use the sum of the first half as X and the sum of the second half as Y. For the example data the coordinate would be (3.8995, 0).
Doing this, we can finally get pretty charts.
And here's a mostly complete code example:
VBuffer<float>[] centroids = default;
var modelParams = trainedModel.Model;
modelParams.GetClusterCentroids(ref centroids, out var k);

// extract from the VBuffer for ease
var cleanCentroids = Enumerable.Range(1, 5).ToDictionary(x => (uint)x, x =>
{
    var values = centroids[x - 1].GetValues().ToArray();
    return values;
});

var points = new Dictionary<uint, List<(double X, double Y)>>();
foreach (var dp in featuresDataset)
{
    var prediction = predictor.Predict(dp);
    var weightedCentroid = cleanCentroids[prediction.PredictedClusterId]
        .Zip(dp.Features, (x, y) => x * y);
    var point = (X: weightedCentroid.Take(weightedCentroid.Count() / 2).Sum(),
                 Y: weightedCentroid.Skip(weightedCentroid.Count() / 2).Sum());

    if (!points.ContainsKey(prediction.PredictedClusterId))
        points[prediction.PredictedClusterId] = new List<(double X, double Y)>();
    points[prediction.PredictedClusterId].Add(point);
}
Here featuresDataset is an array of objects that contain the feature columns being fed to the KMeans trainer. See the Microsoft docs link above for an example - featuresDataset would be testData in their sample.

C# calculate bearing from two GeoCoordinate

I have two geo coordinates positions on the earth.
Using .NET I can easily calculate the distance:
GeoCoordinate a = new GeoCoordinate(50, 8);
GeoCoordinate b = new GeoCoordinate(34, -118);
double distanceInMeters = a.GetDistanceTo(b);
This uses the Haversine formula and execution is extremely fast.
How can I get the bearing using the same spherical model that the Haversine formula uses?
I would be happy with something like:
double bearing = HaversineBearingCalculator.calcBearingInDegrees(a, b);
or even
double bearing = a.GetBearingTo(b);
However .NET does not seem to offer anything like it.
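Not a built-in .NET API, but a minimal sketch of the standard spherical (initial/forward) bearing formula, written as the HaversineBearingCalculator helper wished for above; GeoCoordinate comes from System.Device.Location, and the class itself is hypothetical:
// Sketch: initial great-circle bearing on a sphere, 0..360 degrees clockwise from north.
using System;
using System.Device.Location;

public static class HaversineBearingCalculator
{
    public static double CalcBearingInDegrees(GeoCoordinate a, GeoCoordinate b)
    {
        double lat1 = DegToRad(a.Latitude);
        double lat2 = DegToRad(b.Latitude);
        double dLon = DegToRad(b.Longitude - a.Longitude);

        double y = Math.Sin(dLon) * Math.Cos(lat2);
        double x = Math.Cos(lat1) * Math.Sin(lat2)
                 - Math.Sin(lat1) * Math.Cos(lat2) * Math.Cos(dLon);

        // Atan2 gives -180..180; normalise to 0..360
        double bearing = RadToDeg(Math.Atan2(y, x));
        return (bearing + 360.0) % 360.0;
    }

    private static double DegToRad(double deg) => deg * Math.PI / 180.0;
    private static double RadToDeg(double rad) => rad * 180.0 / Math.PI;
}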

Animate a UV sphere with (4D?) noise

I am using a C# port of libnoise with XNA (I know it's dead) to generate planets.
There is a function in libnoise that receives the coordinates of a vertex in a sphere surface (latitude and longitude) and returns a random value (from -1 to 1).
So with that value, I can change the height of each vertex on the surface of the sphere (the altitude), creating some elevation, simulating the surface of a planet (I'm not simply wrapping a texture around the sphere, I'm actually creating each vertex from scratch).
An example of what I have:
Now I want to animate the sphere, like this
But the thing is, libnoise only works with 3D noise.
The "planet" function maps the latitude and longitude to XYZ coordinates of a cube.
And I believe that, to animate a sphere like I want to, I need an extra coordinate there, to be the "time" dimension. Am I right? Or is it possible to do this with what libnoise offers?
OBS: As I mentioned, I'm using an UV sphere, not an icosphere or a spherical cube.
EDIT: Here is the algorithm used by libnoise to map lat/long to XYZ:
public double GetValue(double latitude, double longitude) {
    double x = 0, y = 0, z = 0;
    double PI = 3.1415926535897932385;
    double DEG_TO_RAD = PI / 180.0;
    double r = System.Math.Cos(DEG_TO_RAD * latitude);
    x = r * System.Math.Cos(DEG_TO_RAD * longitude);
    y = System.Math.Sin(DEG_TO_RAD * latitude);
    z = r * System.Math.Sin(DEG_TO_RAD * longitude);
    return GetNoiseValueAt(x, y, z);
}
An n-dimensional noise function takes n independent inputs (i1, i2, ..., in) and returns a value v, so 3D noise is sufficient to generate a height map that varies over time. In your case the inputs would be longitude, latitude and time, and the output would be the height offset.
The simple general algorithm would be:
at each time step (t) {
    for each vertex (v) on a sphere centered on some point (c) {
        calculate the longitude & latitude
        get the scalar noise value (n) for the longitude, latitude & time
        calculate the new vertex position (p) as follows: p = ((v - c) * n) + c
    }
}
Note: this assumes you are not replacing/modifying the original vertex values. You could either save a copy of them (uses less computation, but more memory) or recalculate them based on a distance from c (uses less memory, but more computation). Also, you might get a smoother animation by calculating 2 (or more) larger time steps and interpolating to get the intermediate frames.
To the best of my knowledge, this solution should work for a UV sphere, an icosphere or a spherical cube.
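A minimal C#/XNA-style sketch of that loop follows. Everything in it is an assumption of the sketch rather than part of libnoise: baseVertices is a saved copy of the undisplaced sphere vertices (as suggested in the note above), the noise delegate is the 3-input lookup (latitude, longitude, time) returning a value in [-1, 1], and the radius is scaled by (1 + amplitude * n) so the sphere keeps its base shape:
// Sketch: displace each vertex of a UV sphere by a time-varying noise value.
using System;
using Microsoft.Xna.Framework;

static void DisplaceSphere(Vector3[] baseVertices, Vector3[] vertices, Vector3 center,
                           float amplitude, double time,
                           Func<double, double, double, double> noise) // (lat, lon, time) -> [-1, 1]
{
    const double RAD_TO_DEG = 180.0 / Math.PI;

    for (int i = 0; i < baseVertices.Length; i++)
    {
        Vector3 offset = baseVertices[i] - center;
        Vector3 dir = Vector3.Normalize(offset);

        // recover latitude/longitude (in degrees) from the unit direction
        double latitude = Math.Asin(dir.Y) * RAD_TO_DEG;
        double longitude = Math.Atan2(dir.Z, dir.X) * RAD_TO_DEG;

        float n = (float)noise(latitude, longitude, time);

        // p = c + (v - c) * (1 + amplitude * n)
        vertices[i] = center + offset * (1f + amplitude * n);
    }
}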
Ok I think I made it.
I just added the time parameter to the mapped XYZ coordinates.
Using the same latitude and longitude but incrementing time by 0.01d gave me a nice result.
Here is my code:
public double GetValue(double latitude, double longitude, double time) {
    double x = 0, y = 0, z = 0;
    double PI = 3.1415926535897932385;
    double DEG_TO_RAD = PI / 180.0;
    double r = System.Math.Cos(DEG_TO_RAD * latitude);
    x = r * System.Math.Cos(DEG_TO_RAD * longitude);
    y = System.Math.Sin(DEG_TO_RAD * latitude);
    z = r * System.Math.Sin(DEG_TO_RAD * longitude);
    return GetNoiseValueAt(x + time, y + time, z + time);
}
If someone has a better solution please share it!
Sorry for the late answer, but I couldn't find a satisfactory answer elsewhere online, so I'm writing this up for anyone who has this problem in the future.
What worked for me was using multiple 3d perlin noise sources, and combining them into 1 single noise source. Adding time to the xyz coordinates just creates a very noticeable effect of terrain moving in the (-1,-1,-1) direction.
Averaging over 4 uncorrelated noise sources does change the noise characteristics a bit, so you might have to adapt some factors to your use case.
This solution still isn't perfect, but I haven't seen any visual artifacts.
Code is C++ libnoise, but it should translate equally well to other languages.
noise::module::Perlin perlin_noise[4];
float get_height(ofVec3f p, float time) {
    p *= 2;
    time /= 10;
    return (perlin_noise[0].GetValue(p.x, p.y, p.z) +
            perlin_noise[1].GetValue(p.x, p.y, time) +
            perlin_noise[2].GetValue(p.x, time, p.z) +
            perlin_noise[3].GetValue(time, p.y, p.z)) / 2;
}
Ideally, for a single 3D noise source, you would want to multiply your x, y, z coords by a monotonic function of t, such that it explores a constantly expanding sphere surface of the noise source, but I haven't figured out the math yet.
Edit: the framework I use (openFrameworks) has a 4D Perlin noise function built in: ofSignedNoise(glm::vec4).

Real World Distance: Stereo cam, cvPerspectiveTransform, Emgu, C#

I'd really appreciate some help with the following questions:
I have captured, then rectified, grayscale images from a calibrated stereo rig.
I am now attempting to get the real-world x, y, z coordinates, relative to the left camera, of specific points in the left image; I am trying to use cvPerspectiveTransform to do so.
My abbreviated code is below.
The code appears to work to some extent, and returns the following 4 data points:
(15.4510, -474.7451, -527.0327, -912.6536), which I understand to represent x,y,z and w.
Question 1) Is this assumption correct? It may be that division by w has already taken place and that XYZ have already been returned, in which case -912.6536 is an artefact to be ignored - any views on this are welcome.
Question 2) However, if, to get real-world coordinates X, Y, Z, each of 'x', 'y', 'z' respectively is to be divided by 'w', in what units are the resulting XYZ coordinates? I understand them to be related to the "points" used in calibration - in this case the chessboard corners were 2.5 cm apart, yet the distance of the object from the camera was approximately 60 cm... as you can see the math doesn't quite work.
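For clarity, the divide being asked about is sketched below, using the four values returned above; this is an assumption-based sketch, and the resulting X, Y, Z come out in whatever units the Q matrix was built with during calibration (the units of the baseline/translation), not automatically in cm:
// Sketch of the homogeneous divide from Question 1, assuming the fourth
// returned value really is w.
float x = 15.4510f, y = -474.7451f, z = -527.0327f, w = -912.6536f;
float X = x / w;  // real-world X relative to the left camera
float Y = y / w;
float Z = z / w;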
I have diligently read the relevant pages in the Bradski book (and searched online), but I must be missing something.
Matrix<float> inputMatLeft = new Matrix<float>(4, 1, 3);
inputMatLeft[0, 0] = xL; // xL, a float, the x coord of a point in the left image
inputMatLeft[1, 0] = yL; // yL, a float, the y coord of same point in left image
inputMatLeft[2, 0] = d;  // d, a float, the disparity between the same featurepoint in the left and right rectified images, is calc'd and defined elsewhere
inputMatLeft[3, 0] = 1F;

Matrix<float> rwCoords = new Matrix<float>(4, 1, 3);
rwCoords = computeRealWorldCoords(inputMatLeft);
// ....do stuff with rwCoords

public Matrix<float> computeRealWorldCoords(Matrix<float> leftSrc)
{
    Matrix<float> leftDest = new Matrix<float>(4, 1, 3);
    CvInvoke.cvPerspectiveTransform(leftSrc, leftDest, inputMatrixQ); // Q Matrix is 4x4 float
    return leftDest;
}
Thanks!

Trying to accurately measure 3D distance from a 2D image

I am trying to extract the 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# and I am using XNA. I am using AForge.net for the CoPlanar POSIT.
The steps I take in order to work out the distance:
1. Mark corners on screen. Corners are represented in 2D vector form; the image centre is (0,0). Up is positive in the Y direction, right is positive in the X direction.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640;   //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
    new AVector3( -halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, -halfCornerSize ),
    new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
    new Vector3(halfCornerSize, 0, -halfCornerSize),
    new Vector3(-halfCornerSize, 0, -halfCornerSize),
    new Vector3(halfCornerSize, 0, halfCornerSize),
    new Vector3(-halfCornerSize, 0, halfCornerSize)
};
Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
cornerPositions3d1[i] = (markerTransform * transform).Translation;

//DEBUG: project corner onto screen - represented by brown dots
Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
5. Look at the 3D distance between two corners on a marker, this represents 100mm. Find the scaling factor needed to convert this 3D distance to 100mm. (I actually get the average scaling factor):
for (int i = 0; i < 4; i++)
{
    //Distance scale;
    distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4;
6. Finally I find the 3D distance between related corners and multiply by the scaling factor to get distance in mm:
for (int i = 0; i < 4; i++)
{
    distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances acquired are never truly correct. I used the cutting board as it allowed me easy calculation of what the distances should be. The above image calculated a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is the fact that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to exactly match up with the colored dots. The colored dots are the coordinates used as input to the CoPlanar posit. The brown dots are the calculated positions from the center of the marker calculated via POSIT.
Does anyone have any idea what might be wrong here? I am pulling out my hair trying to figure it out. The code should be quite simple, I don't think I have made any obvious mistakes with the code. I am not great at maths so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch, without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed, apply a 2D filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center. Maybe over multiple frames.
Step 3: Un-project to 3D (see the sketch after this list). After 1 and 2 this should be easy.
Step 4: ???
Step 5: Profit! (Measure distance in 3D and know your error.)
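For the un-projection in Step 3, a hypothetical pinhole-model sketch (no lens distortion): Vector3 is the XNA type already used in the question, focalLength is assumed to be in pixels, and the depth of the pixel is assumed known, e.g. taken from the estimated marker pose:
// Sketch: back-project pixel (u, v) with a known depth into camera space.
// (cx, cy) is the principal point in pixels; with the question's
// "image centre is (0,0)" convention, cx = cy = 0.
static Vector3 Unproject(float u, float v, float depthZ,
                         float focalLength, float cx, float cy)
{
    float x = (u - cx) * depthZ / focalLength;
    float y = (v - cy) * depthZ / focalLength;
    return new Vector3(x, y, depthZ);
}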
I think you need a 3D photo (two photos taken a known distance apart) so you can get the parallax distance from the image differences.
