I'm currently trying to use the Fisheye.Calibrate and Fisheye.UndistorImage methods from the Emgu.CV library. As far as I've understood, the Calibrate method is used to calculate a camera matrix (K) and a distortion vector (D), which are then used to undistort fisheye images using the UndistorImage method. However, when I use these two methods the results are not convincing. This is the input image I'm testing on: fisheye input image, and this is the result: fisheye output image.
When I tried to look at the values of K and D by inspecting the data variable of the objects, it said null for both K and D. Therefore I'm unsure whether I'm using the Calibrate() method correctly. My code is as follows:
private void EmguCVUndistortFisheye()
{
string[] fileNames = Directory.GetFiles(@"C:\Users\Test\Desktop\Jakob\ImageAnalysis\Images\Calibration", "*.png");
Size patternSize = new Size(6, 8);
VectorOfVectorOfPoint3D32F objPoints = new VectorOfVectorOfPoint3D32F();
VectorOfVectorOfPointF imagePoints = new VectorOfVectorOfPointF();
foreach (string file in fileNames)
{
Mat img = CvInvoke.Imread(file, ImreadModes.Grayscale);
CvInvoke.Imshow("input", img);
VectorOfPointF corners = new VectorOfPointF(patternSize.Width * patternSize.Height);
bool find = CvInvoke.FindChessboardCorners(img, patternSize, corners);
if (find)
{
MCvPoint3D32f[] points = new MCvPoint3D32f[patternSize.Width * patternSize.Height];
int loopIndex = 0;
for (int i = 0; i < patternSize.Height; i++)
{
for (int j = 0; j < patternSize.Width; j++)
points[loopIndex++] = new MCvPoint3D32f(j, i, 0);
}
objPoints.Push(new VectorOfPoint3D32F(points));
imagePoints.Push(corners);
}
}
Size imageSize = new Size(1280, 1024);
Mat K = new Mat();
Mat D = new Mat();
Mat rotation = new Mat();
Mat translation = new Mat();
Fisheye.Calibrate(
objPoints,
imagePoints,
imageSize,
K,
D,
rotation,
translation,
Fisheye.CalibrationFlag.CheckCond,
new MCvTermCriteria(30, 0.1)
);
foreach (string file in fileNames)
{
Mat img = CvInvoke.Imread(file, ImreadModes.Grayscale);
Mat output = img.Clone();
Fisheye.UndistorImage(img, output, K, D);
CvInvoke.Imshow("output", output);
}
}
Is the reason for my strange results a consequence of wrong parameters to the Calibrate method, or is it simply a case of not using enough input images?
This looks like a problem similar to one I had recently when trying to pass a Mat into the calibration function when it needed a Matrix: as you've found, it just doesn't work, without reporting any errors. I think you'll need the following:
var K = new Matrix<double>(3, 3);
var D = new Matrix<double>(4, 1);
Also note that if you want to retrieve the rotation and translation vectors, passing a Mat in is fine, but you'll probably want to convert back to a Matrix if you want to perform calculations on them. I was using a normal camera calibration rather than fisheye, but the following working code fragment might be useful to get the idea:
var cameraMatrix = new Matrix<double>(3, 3);
var distortionCoeffs = new Matrix<double>(4, 1);
var termCriteria = new MCvTermCriteria(30, 0.1);
System.Drawing.PointF[][] imagePoints = imagePointsList.Select(p => p.ToArray()).ToArray();
MCvPoint3D32f[][] worldPoints = worldPointsList.Select(p => p.ToArray()).ToArray();
double error = CvInvoke.CalibrateCamera(worldPoints, imagePoints, imageSize, cameraMatrix, distortionCoeffs, CalibType.RationalModel, termCriteria, out Mat[] rotationVectors, out Mat[] translationVectors);
var rotation = new Matrix<double>(rotationVectors[0].Rows, rotationVectors[0].Cols, rotationVectors[0].DataPointer);
var translation = new Matrix<double>(translationVectors[0].Rows, translationVectors[0].Cols, translationVectors[0].DataPointer);
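Translating that back to your fisheye code, a minimal sketch (I haven't run this against the fisheye API, so treat it as a starting point) would be:
// Matrix<double> outputs instead of Mat, so Calibrate has something it can write into
var K = new Matrix<double>(3, 3);
var D = new Matrix<double>(4, 1);
Mat rotation = new Mat();
Mat translation = new Mat();
Fisheye.Calibrate(objPoints, imagePoints, imageSize, K, D, rotation, translation,
    Fisheye.CalibrationFlag.CheckCond, new MCvTermCriteria(30, 0.1));
// then, in your undistort loop, pass the same K and D:
Fisheye.UndistorImage(img, output, K, D);
If K and D still come back empty after that, then I'd suspect the calibration input (too few or too similar chessboard views) rather than the call itself.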
The CvInvoke.PCACompute method expects an IInputArray of data to do the analysis.
I tried using the source image as the input Mat, but the eigenvectors computed are abnormal, as per my understanding. And I am not able to convert my contour VectorOfPoint to a Mat that can be fed in.
I also could not find good literature online about implementing PCA analysis in EmguCV / C#.
Can someone please point me in the right direction?
Below is my code:
public static void getOrientation(Image<Gray,byte> inputImage)
{
Image<Gray, Byte> cannyGray = inputImage.Canny(85, 255);
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat eigen_vectors = new Mat(inputImage.Size,DepthType.Cv8U,1);
Mat mean_mat = new Mat(inputImage.Size, DepthType.Cv8U, 1);
CvInvoke.FindContours(cannyGray, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
Point[][] cont_points = contours.ToArrayOfArray();
Mat contour_mat = new Mat();
// this is where I try (and fail) to convert the first contour to a Mat
contour_mat.SetTo(cont_points[0]);
//CvInvoke.PCACompute(cannyGray.Mat, mean_mat, eigen_vectors,2);
CvInvoke.PCACompute(contours, mean_mat, eigen_vectors);
}
You have to convert each of your contours to a Mat containing its point coordinates.
Here is an example of how you can do it:
// points are the point of one contour
var pointList = points.ToArray();
// use DepthType.Cv64F to allow numbers > 255
Mat dataPoints = new Mat(pointList.Length, 2, DepthType.Cv64F, 1);
double[] pointsData = new double[((int)dataPoints.Total * dataPoints.NumberOfChannels)];
// store the points coordinates in the Mat
for (int i = 0; i < dataPoints.Rows; i++)
{
pointsData[i * dataPoints.Cols] = pointList[i].X;
pointsData[i * dataPoints.Cols + 1] = pointList[i].Y;
}
// copy the coordinate values into the Mat
dataPoints.SetTo(pointsData);
// compute PCA
Mat mean = new Mat();
Mat eigenvectors = new Mat();
Mat eigenvalues = new Mat();
CvInvoke.PCACompute(dataPoints, mean, eigenvectors);
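If what you're ultimately after is the contour's orientation (as the getOrientation name suggests), the first row of eigenvectors holds the principal axis. A minimal sketch of reading it back, assuming eigenvectors is the 2x2 CV_64F Mat produced above and that CopyTo accepts a Matrix<double>:
// copy the eigenvectors into an indexable Matrix<double>
var eig = new Matrix<double>(eigenvectors.Rows, eigenvectors.Cols);
eigenvectors.CopyTo(eig);
// angle of the principal axis in radians, measured from the x-axis
double angle = Math.Atan2(eig[0, 1], eig[0, 0]);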
I'm trying to build a 3D object model, but my code just renders the model in a single colour from the image.
How can I create a 3D object with six images, one for each surface, like a Rubik's cube?
This is my code, using the Aspose.3D library and C#:
private void Form1_Load(object sender, EventArgs e)
{
//Create a FBX file with embedded textures
Scene scene = new Scene();
scene.Open("BetterShirt.obj");
//Create an embedded texture
Texture tex = new Texture()
{
Content = CreateTextureContent(),
FileName = "face.png",
WrapModeU = Aspose.ThreeD.Shading.WrapMode.Wrap,
};
tex.SetProperty("TexProp", "value");
//create a material with custom property
//Aspose.ThreeD.Shading.
Material mat = scene.RootNode.ChildNodes[0].Material;
mat.SetTexture(Material.MapDiffuse, tex);
mat.SetProperty("MyProp", 1.0);
scene.RootNode.ChildNodes[0].Material = mat;
//save this to file
scene.Save("exported.obj", FileFormat.WavefrontOBJ);
}
private static byte[] CreateTextureContent()
{
using (var bitmap = new Bitmap(256, 256))
{
using (var g = Graphics.FromImage(bitmap))
{
g.Clear(Color.White);
LinearGradientBrush brush = new LinearGradientBrush(new Rectangle(0, 0, 128, 128),
Color.Moccasin, Color.Blue, 45);
using (var font = new Font(FontFamily.GenericSerif, 40))
{
g.DrawString("Aspose.3D", font, brush, Point.Empty);
}
}
using (var ms = new MemoryStream())
{
bitmap.Save(ms, ImageFormat.Png); // without this the stream is empty and the texture has no content
return ms.ToArray();
}
}
}
We have devised the code below based on your requirements. Comments have been added for your reference. Please try it in your environment and share your feedback with us.
private static void RubikCube()
{
Bitmap[] bitmaps = CreateRubikBitmaps();
Scene scene = new Scene();
//create a box and convert it to mesh, so we can manually specify the material per face
var box = (new Box()).ToMesh();
//create a material mapping; the box mesh generated from the Box primitive contains 6 polygons, so we can reference the material of each polygon (specified by MappingMode.Polygon) by index (ReferenceMode.Index)
var materials = (VertexElementMaterial)box.CreateElement(VertexElementType.Material, MappingMode.Polygon, ReferenceMode.Index);
//each polygon uses a different material; the indices of these materials are specified below
materials.SetIndices(new int[] {0, 1, 2, 3, 4, 5});
//create the node and materials(referenced above)
var boxNode = scene.RootNode.CreateChildNode(box);
for (int i = 0; i < bitmaps.Length; i++)
{
//create material with texture
var material = new LambertMaterial();
var tex = new Texture();
using (var ms = new MemoryStream())
{
bitmaps[i].Save(ms, ImageFormat.Png);
var bytes = ms.ToArray();
//Save it to Texture.Content as embedded texture, thus the scene with textures can be exported into a single FBX file.
tex.Content = bytes;
//Give it a name and save it to disk so it can be opened with the .obj file
tex.FileName = string.Format("cube_{0}.png", i);
File.WriteAllBytes(tex.FileName, bytes);
//Dispose the bitmap since we no longer need it.
bitmaps[i].Dispose();
}
//the texture is used as diffuse
material.SetTexture(Material.MapDiffuse, tex);
//attach it to the node that contains the box mesh
boxNode.Materials.Add(material);
}
//save it to file
//The 3D Viewer app in Windows 10 does not support multiple materials (you'll see the same texture on each face), but Autodesk's tools do
scene.Save("test.fbx", FileFormat.FBX7500ASCII);
//NOTE: Multiple materials of mesh in Aspose.3D's OBJ Exporter is not supported yet.
//But we can split the mesh with multiple materials into different meshes by using PolygonModifier.SplitMesh
PolygonModifier.SplitMesh(scene, SplitMeshPolicy.CloneData);
//the following save also generates a material library file (test.mtl) which uses the textures exported above
scene.Save("test.obj", FileFormat.WavefrontOBJ);
}
private static Bitmap[] CreateRubikBitmaps()
{
Brush[] colors = { Brushes.White, Brushes.Red, Brushes.Blue, Brushes.Yellow, Brushes.Orange, Brushes.Green};
Bitmap[] bitmaps = new Bitmap[6];
//initialize the cell colors
int[] cells = new int[6 * 9];
for (int i = 0; i < cells.Length; i++)
{
cells[i] = i / 9;
}
//shuffle the cells
Random random = new Random();
Array.Sort(cells, (a, b) => random.Next(-1, 2));
//paint each face
// size of each face is 256px
const int size = 256;
// size of cell is 80x80
const int cellSize = 80;
// calculate padding size between each cell
const int paddingSize = (size - cellSize * 3) / 4;
int cellId = 0;
for (int i = 0; i < 6; i++)
{
bitmaps[i] = new Bitmap(size, size, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
using (Graphics g = Graphics.FromImage(bitmaps[i]))
{
g.Clear(Color.Black);
for (int j = 0; j < 9; j++)
{
//calculate the cell's position
int row = j / 3;
int column = j % 3;
int y = row * (cellSize + paddingSize) + paddingSize;
int x = column * (cellSize + paddingSize) + paddingSize;
Brush cellBrush = colors[cells[cellId++]];
//paint cell
g.FillRectangle(cellBrush, x, y, cellSize, cellSize);
}
}
}
return bitmaps;
}
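One side note on the shuffle: Array.Sort with a random comparator is only a rough shuffle (and comparer contracts technically forbid it). If you want a uniform shuffle, a Fisher-Yates pass over cells is the usual replacement:
//Fisher-Yates shuffle: uniform, unlike sorting with a random comparator
for (int i = cells.Length - 1; i > 0; i--)
{
    int j = random.Next(i + 1);
    int tmp = cells[i];
    cells[i] = cells[j];
    cells[j] = tmp;
}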
PS: I work with Aspose as Developer Evangelist.
I am working on a project in which I try to compare two images in C#. I'm using EmguCV (a C# wrapper for OpenCV). I've tested some functions which work (CompareHist, for example).
I am now trying to use the implementation of the Earth Mover's Distance (EMD). As I use color images, I build a 2D histogram based on the HSV image. I then build the corresponding signature (as described in the docs, and explained here).
The problem is that I always obtain NaN as the output of my code. Since I'm new to C# and EmguCV, I tried the same steps in Python, and there it works: EMD returns a number without any error.
I've spent a lot of time on this problem, trying to change the type of the histogram (between the OpenCV Mat and the EmguCV Image) and checking the histogram values to verify that they are right, but I can't find what I'm doing wrong.
About the code:
I have a "Comparator" class which just contains two images:
class Comparator
{
public Image<Bgr, Byte> RefImage;
public Image<Bgr, Byte> TestImage;
public Comparator(string TestPath, string RefPath)
{
Image<Bgr, Byte> TestImagetemp = new Image<Bgr, Byte>(TestPath);
Image<Bgr, Byte> RefImagetemp = new Image<Bgr, Byte>(RefPath);
int newCols = Math.Min(TestImagetemp.Cols, RefImagetemp.Cols);
int newRows = Math.Min(RefImagetemp.Rows, TestImagetemp.Rows);
Rectangle roi = new Rectangle(0, 0, newCols, newRows);
this.RefImage = crop(RefImagetemp, roi);
this.TestImage = crop(TestImagetemp, roi);
string DiffPath = "C:\\Users\\EPIERSO\\Docs\\testdiff";
this.TestImage.Save(DiffPath + "testavant.png");
}
Here is the method used for computing the histogram:
public static Mat CalcHistHSV(Image<Bgr,Byte> image)
{
int[] histbins = new int[] { 30, 32 };
float[] ranges = new float[] { 0.0f, 180.0f, 0.0f, 256.0f };
Mat hist = new Mat();
VectorOfMat vm = new VectorOfMat();
Image<Hsv,float> imghsv = image.Convert<Hsv, float>();
vm.Push(imghsv);
CvInvoke.CalcHist(vm, new int[] { 0, 1 }, null, hist, histbins, ranges, false);
return hist;
}
And this is the method used for comparing with EMD:
public bool EMDCompare()
{
int hbins = 30;
int sbins = 32;
Mat histref = CalcHistHSV(RefImage);
Mat histtest = CalcHistHSV(TestImage);
//Computing the signatures
Mat sigref = new Mat(hbins*sbins,3,Emgu.CV.CvEnum.DepthType.Cv32F,1);
Mat sigtest = new Mat(hbins*sbins,3, Emgu.CV.CvEnum.DepthType.Cv32F, 1);
for (int h = 0; h<hbins; h++)
{
for (int s = 0; s < sbins; s++)
{
var bin = MatExtension.GetValue(histref,h,s);
MatExtension.SetValue(sigref, h * sbins + s, 0, bin);
MatExtension.SetValue(sigref, h * sbins + s, 1, h);
MatExtension.SetValue(sigref, h * sbins + s, 2, s);
var bin2 = MatExtension.GetValue(histtest, h, s);
MatExtension.SetValue(sigtest, h * sbins + s, 0, bin2);
MatExtension.SetValue(sigtest, h * sbins + s, 1, h);
MatExtension.SetValue(sigtest, h * sbins + s, 2, s);
}
}
float emd = CvInvoke.EMD(sigref, sigtest, DistType.L2);
return ((1 - emd) > 0.7);
}
For modifying Mat values, I use an extension named MatExtension, found here: How can I get and set pixel values of an EmguCV Mat image?
This is the equivalent Python code: https://pastebin.com/drhvNMNs
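In case it helps to narrow things down, this is the signature layout I'm aiming for, sketched with Matrix<float> indexing instead of the MatExtension helpers (assuming hbins, sbins and histref as in EMDCompare above):
// copy the 30x32 CV_32F histogram into an indexable Matrix<float>
var hist = new Matrix<float>(hbins, sbins);
histref.CopyTo(hist);
// one signature row per bin: (weight, h, s)
var sig = new Matrix<float>(hbins * sbins, 3);
for (int h = 0; h < hbins; h++)
{
    for (int s = 0; s < sbins; s++)
    {
        int row = h * sbins + s;
        sig[row, 0] = hist[h, s];
        sig[row, 1] = h;
        sig[row, 2] = s;
    }
}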
I'm using C# and AForge.NET to find the perimeter of objects in the following image.
Source Image
What I've done so far:
made the image binary,
found the blobs,
extracted an image for each blob,
used the MoravecCornersDetector class to find edge points.
When I draw the points, the result looks like the following image.
Final Image
Now my problem is sorting these points to form a polygon and summing the distances between them to find the perimeter.
Would you mind telling me how I can do this?
Do you know of any better way to find the perimeter?
private void Form1_Load(object sender, EventArgs e)
{
curImage = new Bitmap(@"1.jpg");
extractBlob(ThresholdImage(curImage));
}
Bitmap ThresholdImage(Bitmap image)
{
Bitmap result;
using (Bitmap aa = image)
{
// create grayscale filter (BT709)
Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
// apply the filter
Bitmap grayImage = filter.Apply(aa);
// create filter
Threshold filter2 = new Threshold(200);
// apply the filter
filter2.ApplyInPlace(grayImage);
result = new Bitmap(grayImage);
}
return result;
}
void extractBlob(Bitmap image)
{
BlobCounterBase bc = new BlobCounter();
bc.FilterBlobs = true;
bc.MinHeight = 5;
bc.MinWidth = 5;
bc.ProcessImage(image);
Blob[] blobs = bc.GetObjectsInformation();
for (int i = 0, n = blobs.Length; i < n; i++)
{
bc.ExtractBlobsImage(image, blobs[i], true);
Bitmap copy = blobs[i].Image.ToManagedImage();
Edge(copy);
// ------> Draw Edge(copy)
}
}
List<PointF> Edge(Bitmap image)
{
// create corner detector's instance
MoravecCornersDetector mcd = new MoravecCornersDetector();
// process image searching for corners
List<IntPoint> corners = mcd.ProcessImage(image);
List<PointF> eachObject = new List<PointF>();
foreach (var item in corners)
{
PointF p = new PointF(item.X, item.Y);
eachObject.Add(p);
}
return eachObject;
}
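For reference, the "sum the distances" part I have in mind would look something like the sketch below; it assumes the points are already sorted into polygon order, which is exactly the part I'm stuck on:
double Perimeter(List<PointF> polygon)
{
    double sum = 0;
    for (int i = 0; i < polygon.Count; i++)
    {
        // distance from each vertex to the next, wrapping back to the start
        PointF a = polygon[i];
        PointF b = polygon[(i + 1) % polygon.Count];
        sum += Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
    }
    return sum;
}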
I am trying to use the Clipper library to modify a graphics path.
I have a list of widths that represent outlines / strokes. I want to start with the largest first and work my way down to the smallest.
For this example, we will add two strokes with widths of 20 and 10.
I want to take my graphics path and expand / offset it by 20 pixels into a new graphics path, without altering the original path. Then I want to fill the new graphics path with a solid color.
Next, I want to take my original graphics path and expand / offset it by 10 pixels into a new graphics path, and fill this new path with a different color.
Then I want to fill my original path with a third color.
What is the proper way to do this? I have the following method that I created to try to do this, but it is not working properly.
private void createImage(Graphics g, GraphicsPath gp, List<int> strokeWidths)
{
ClipperOffset pathConverter = new ClipperOffset();
Clipper c = new Clipper();
gp.Flatten();
foreach(int strokeSize in strokeWidths)
{
g.Clear(Color.White); // Graphics.Clear requires a color; white assumed here
ClipperPolygons polyList = new ClipperPolygons();
GraphicsPath gpTest = (GraphicsPath)gp.Clone();
PathToPolygon(gpTest, polyList, 100);
gpTest.Reset();
c.Execute(ClipType.ctUnion, polyList, PolyFillType.pftPositive, PolyFillType.pftEvenOdd);
pathConverter.AddPaths(polyList, JoinType.jtMiter, EndType.etClosedPolygon);
pathConverter.Execute(ref polyList, strokeSize * 100);
for (int i = 0; i < polyList.Count; i++)
{
// reverses scaling
PointF[] pts2 = PolygonToPointFArray(polyList[i], 100);
gpTest.AddPolygon(pts2);
}
g.FillPath(new SolidBrush(Color.Red), gpTest);
}
}
private void PathToPolygon(GraphicsPath path, ClipperPolygons polys, Single scale)
{
GraphicsPathIterator pathIterator = new GraphicsPathIterator(path);
pathIterator.Rewind();
polys.Clear();
PointF[] points = new PointF[pathIterator.Count];
byte[] types = new byte[pathIterator.Count];
pathIterator.Enumerate(ref points, ref types);
int i = 0;
while (i < pathIterator.Count)
{
ClipperPolygon pg = new ClipperPolygon();
polys.Add(pg);
do
{
IntPoint pt = new IntPoint((int)(points[i].X * scale), (int)(points[i].Y * scale));
pg.Add(pt);
i++;
}
while (i < pathIterator.Count && types[i] != 0);
}
}
private PointF[] PolygonToPointFArray(ClipperPolygon pg, float scale)
{
PointF[] result = new PointF[pg.Count];
for (int i = 0; i < pg.Count; ++i)
{
result[i].X = (float)pg[i].X / scale;
result[i].Y = (float)pg[i].Y / scale;
}
return result;
}
While you've made a pretty reasonable start, you seem to be getting muddled in your createImage() function. You mention wanting different colors for the different offsets, so you're missing a colors array to match your strokeWidths array. Also, it's unclear to me what you're doing with the clipping (union) stuff; it's probably unnecessary.
So I suggest something like the following (Paths here is Clipper's List<List<IntPoint>> alias; GPathToCPaths, CPathsToPointFArrays and DrawMyPaths are helpers you'd supply):
static bool CreateImage(Graphics g, GraphicsPath gp,
    List<int> offsets, List<Color> colors)
{
    const float scale = 100;
    if (colors.Count < offsets.Count) return false;
    gp.Flatten();
    //convert the GraphicsPath to Clipper paths ...
    Paths cpaths = GPathToCPaths(gp, scale);
    //set up the ClipperOffset object once, outside the loop ...
    ClipperOffset co = new ClipperOffset();
    co.AddPaths(cpaths, JoinType.jtMiter, EndType.etClosedPolygon);
    //now loop through each offset ...
    for (int i = 0; i < offsets.Count; i++)
    {
        Paths csolution = new Paths();
        co.Execute(ref csolution, offsets[i] * scale);
        if (csolution.Count == 0) break; //useful for negative offsets
        //convert back to floating point coordinate arrays ...
        PointF[][] solution = CPathsToPointFArrays(csolution, scale);
        DrawMyPaths(g, solution, colors[i]);
    }
    return true;
}
And one thing to watch for: if you were to use increasingly larger offsets, each polygon drawn in the loop would hide the previously drawn polygons.
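To get the layering you describe (largest stroke underneath, original path on top), you'd call it with the offsets in descending order; a hypothetical usage:
//largest offset first so the smaller fills paint on top; 0 redraws the original outline
var offsets = new List<int> { 20, 10, 0 };
var colors = new List<Color> { Color.Red, Color.Green, Color.Blue };
CreateImage(g, gp, offsets, colors);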