I have a device that rotates an object and photographs a portion of it at regular intervals; I currently have 30 pictures. To stitch the images into a flat image, I take a slice of fixed width (between 50 and 75 pixels) right out of the center of each picture and stitch the slices together using the EMGU CV Stitching library, with the sample stitching code that ships with EMGU. I am testing with between 5 and 10 slices at a time. Sometimes I get the error "Error, need more images", and when I do get a result, it looks terrible, with weird curvatures. I don't need any spatial adjustments; I just want to stitch the slices linearly from left to right. Any ideas, either with EMGU or another library?
Here are a few slices and the result:
Why is the resulting image not the same height as the 4 slices? What must be done just to stitch these together in a linear fashion so that the text is continuous?
Here is the code I am using:
private void selectImagesButton_Click(object sender, EventArgs e)
{
    OpenFileDialog dlg = new OpenFileDialog();
    dlg.CheckFileExists = true;
    dlg.Multiselect = true;
    if (dlg.ShowDialog() == System.Windows.Forms.DialogResult.OK)
    {
        sourceImageDataGridView.Rows.Clear();
        Image<Bgr, byte>[] sourceImages = new Image<Bgr, byte>[dlg.FileNames.Length];
        for (int i = 0; i < sourceImages.Length; i++)
        {
            sourceImages[i] = new Image<Bgr, byte>(dlg.FileNames[i]);
            using (Image<Bgr, byte> thumbnail = sourceImages[i].Resize(200, 200, Emgu.CV.CvEnum.Inter.Cubic, true))
            {
                DataGridViewRow row = sourceImageDataGridView.Rows[sourceImageDataGridView.Rows.Add()];
                row.Cells["FileNameColumn"].Value = dlg.FileNames[i];
                row.Cells["ThumbnailColumn"].Value = thumbnail.ToBitmap();
                row.Height = 200;
            }
        }
        try
        {
            // Only use the GPU if you have built the native binary from source and enabled "NON_FREE".
            using (Stitcher stitcher = new Stitcher(false))
            {
                using (AKAZEFeaturesFinder finder = new AKAZEFeaturesFinder())
                {
                    stitcher.SetFeaturesFinder(finder);
                    using (VectorOfMat vm = new VectorOfMat())
                    {
                        Mat result = new Mat();
                        vm.Push(sourceImages);
                        Stopwatch watch = Stopwatch.StartNew();
                        this.Text = "Stitching";
                        Stitcher.Status stitchStatus = stitcher.Stitch(vm, result);
                        watch.Stop();
                        if (stitchStatus == Stitcher.Status.Ok)
                        {
                            resultImageBox.Image = result;
                            this.Text = String.Format("Stitched in {0} milliseconds.", watch.ElapsedMilliseconds);
                        }
                        else
                        {
                            MessageBox.Show(this, String.Format("Stitching Error: {0}", stitchStatus));
                            resultImageBox.Image = null;
                        }
                    }
                }
            }
        }
        finally
        {
            foreach (Image<Bgr, Byte> img in sourceImages)
            {
                img.Dispose();
            }
        }
    }
}
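Since I don't need any spatial adjustment, would plain left-to-right concatenation be the right approach instead of the stitcher? A minimal sketch of what I mean, using CvInvoke.HConcat (assuming every slice has the same height and channel count, and run before the images are disposed):

// Concatenate the slices side by side with no feature matching or warping.
Mat result = sourceImages[0].Mat.Clone();
for (int i = 1; i < sourceImages.Length; i++)
{
    Mat combined = new Mat();
    CvInvoke.HConcat(result, sourceImages[i].Mat, combined); // current strip + next slice
    result.Dispose();
    result = combined;
}
resultImageBox.Image = result;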
After applying a threshold to the destination image, I get a Mat as a reference. I then call the FindContours() function to extract all the contours in the target image. Finally, I try to convert the extracted contours one at a time to Bitmap, and at this point I get an error while converting Mat to Bitmap:
System.ArgumentException: "Number of channels must be 1, 3 or 4. Parameter name: src"
private void button1_Click(object sender, EventArgs e)
{
    OpenFileDialog getImag = new OpenFileDialog();
    getImag.Filter = "PNG,JPG|*.png;*.jpg";
    DialogResult result = getImag.ShowDialog();
    string Source_Logo_Link = string.Empty;
    if (result == DialogResult.OK)
    {
        Source_Logo_Link = getImag.FileName;
        System.Drawing.Image image = System.Drawing.Image.FromFile(Source_Logo_Link);
        Mat src = new OpenCvSharp.Mat(Source_Logo_Link);
        OpenCvSharp.Mat gray = src.CvtColor(ColorConversionCodes.BGR2GRAY);
        OpenCvSharp.Mat binary = gray.Threshold(0, 255, ThresholdTypes.Otsu);
        OpenCvSharp.Mat[] contoursQuery;
        OutputArray hierarchyQ = InputOutputArray.Create(new List<Vec4i>());
        binary.FindContours(out contoursQuery, hierarchyQ, RetrievalModes.CComp, ContourApproximationModes.ApproxTC89KCOS);
        List<Bitmap> images = new List<Bitmap>();
        for (int i = 0; i <= contoursQuery.Length; i++)
            images.add(contoursQuery[i].toBitmap());
    }
}
Sorry, this comes a bit late, but I hope it still helps. Apart from a few other issues (like the counter of the for loop), I suspect your main problem is that you misunderstood the result of FindContours.
It really just gives you a list of all the contour points it found in the image, not complete images.
Have a look at the documentation for FindContours; it clearly states for the contour parameter:
Each contour is stored as a vector of points
So if you want them "one at a time to Bitmap", you need to create those bitmaps first and then draw the contours into them. To make the drawing easier (and faster), there is the DrawContours command, which you can use for the drawing part.
All in all, it should look something like this:
private void button1_Click(object sender, EventArgs e)
{
    OpenFileDialog getImag = new OpenFileDialog();
    getImag.Filter = "PNG,JPG|*.png;*.jpg";
    DialogResult result = getImag.ShowDialog();
    string Source_Logo_Link = string.Empty;
    if (result == DialogResult.OK)
    {
        Source_Logo_Link = getImag.FileName;
        System.Drawing.Image image = System.Drawing.Image.FromFile(Source_Logo_Link);
        Mat src = new OpenCvSharp.Mat(Source_Logo_Link);
        OpenCvSharp.Mat gray = src.CvtColor(ColorConversionCodes.BGR2GRAY);
        OpenCvSharp.Mat binary = gray.Threshold(0, 255, ThresholdTypes.Otsu);
        // I'd really prefer to work with point lists and HierarchyIndex because
        // it's much more descriptive that way.
        OpenCvSharp.Point[][] contours;
        OpenCvSharp.HierarchyIndex[] hierarchyQ;
        binary.FindContours(out contours, out hierarchyQ, RetrievalModes.List, ContourApproximationModes.ApproxTC89KCOS);
        List<Bitmap> images = new List<Bitmap>();
        for (int i = 0; i < contours.Length; i++)
        {
            // Draw each contour in white into its own blank single-channel image.
            var singleContour = new OpenCvSharp.Mat(src.Size(), MatType.CV_8U, 0);
            singleContour.DrawContours(contours, i, Scalar.White);
            images.Add(singleContour.ToBitmap());
        }
    }
}
I also changed the RetrievalMode to "List", since you don't really seem to care about the hierarchical relationships.
Hope that helps ;-)
I want to display images in a PictureBox on a WinForms form in C#, using a Basler camera. For that I want to convert the camera's IGrabResult into a Mat, because I want to load the image into the PictureBox via Mat.
Please let me know a hint or solution.
PixelDataConverter converter = new PixelDataConverter();

public Form1()
{
    InitializeComponent();
    using (Camera camera = new Camera())
    {
        camera.CameraOpened += Configuration.AcquireContinuous;
        camera.Open();
        camera.Parameters[PLCameraInstance.MaxNumBuffer].SetValue(5);
        camera.StreamGrabber.Start();
        IGrabResult grabResult = camera.StreamGrabber.RetrieveResult(5000, TimeoutHandling.ThrowException);
        using (grabResult)
        {
            if (grabResult.GrabSucceeded)
            {
                Mat rtnMat = convertToMat(grabResult);
                Cv2.ImShow("test", rtnMat);
                pictureBox1.Image = BitmapConverter.ToBitmap(rtnMat);
            }
        }
        camera.StreamGrabber.Stop();
        camera.Close();
    }
}

private Mat convertToMat(IGrabResult rtnGrabResult)
{
    IImage image = rtnGrabResult;
    converter.OutputPixelFormat = PixelType.BGR8packed;
    byte[] buffer = image.PixelData as byte[];
    return new Mat(rtnGrabResult.Width, rtnGrabResult.Height, MatType.CV_8UC1, buffer);
}
Basler Image:
OpenCvSharp Image:
Here is the correct way to convert an IGrabResult into an OpenCvSharp.Mat.
I didn't try it without the converter, but your main problem was the order of the arguments to new Mat(...). In OpenCV you declare the rows first and then the columns; that means height first, then width. The MatType for a color image was also wrong, as @Nyerguds said: it has to be CV_8UC3.
Corrected code:
private Mat convertToMat(IGrabResult rtnGrabResult)
{
    converter.OutputPixelFormat = PixelType.BGR8packed;
    byte[] buffer = new byte[converter.GetBufferSizeForConversion(rtnGrabResult)];
    converter.Convert(buffer, rtnGrabResult);
    // Rows (height) first, then columns (width); BGR8packed means 3 channels.
    return new Mat(rtnGrabResult.Height, rtnGrabResult.Width, MatType.CV_8UC3, buffer);
}
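With the conversion fixed, the grab loop from the question should display correctly. One extra detail worth handling (my own suggestion, not part of the original answer): dispose the previously shown Bitmap before replacing it, since PictureBox does not do that for you:

Mat rtnMat = convertToMat(grabResult);
// Dispose the old Bitmap first to avoid leaking GDI+ handles in a live stream.
pictureBox1.Image?.Dispose();
pictureBox1.Image = BitmapConverter.ToBitmap(rtnMat);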
I am using Kinect v2 and have a small program that only shows the body and color streams, but the stream stops sending frames after fetching just 3 frames. Here is the code:
_sensor = KinectSensor.GetDefault();
if (_sensor != null)
{
    _sensor.Open();
    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Depth | FrameSourceTypes.Infrared | FrameSourceTypes.Body);
    _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
}
and here is how I am getting the frames:
Console.WriteLine("==== FRAME FOUND ====");
var reference = e.FrameReference.AcquireFrame();
// Body
using (var frame = reference.ColorFrameReference.AcquireFrame())
{
    if (frame != null)
    {
        //stream.Children.Clear();
        var c_frame = reference.ColorFrameReference.AcquireFrame();
        ImageBrush ib = new ImageBrush();
        Image im = new Image();
        rgb.Source = frame.ToBitmap();
        var b_frame = reference.BodyFrameReference.AcquireFrame();
        _bodies = new Body[b_frame.BodyFrameSource.BodyCount];
        b_frame.GetAndRefreshBodyData(_bodies);
        if (_bodies[0].IsTracked)
        {
            stream.DrawSkeleton(_bodies[0]);
            if (recording)
            {
                recorder.RecordFrame(_bodies[0]);
            }
        }
        b_frame.GetAndRefreshBodyData(_bodies);
    }
}
Most probably your build target is set to 32-bit. Set it to 64-bit (x64).
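For reference, a sketch of the corresponding project-file setting (these are the standard MSBuild properties; place them in the relevant PropertyGroup of your .csproj, per configuration as needed):

<PropertyGroup>
  <!-- Build the process as 64-bit. -->
  <PlatformTarget>x64</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>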
I was working on a face recognition project. After training the database and calling EigenObjectRecognizer, the result is a black image with an unrecognized label. When the code runs, it looks like this: http://www.mediafire.com/view/?ewns4iqvd51adsc. As shown in the picture, the detected face in the image box (which is supposed to be recognized and extracted) is totally black, even though the input image for recognition is exactly the same as the one the database was trained with. So why does it keep giving an Unknown or Unrecognized result?
Part of the code looks like this. The images from the training set are loaded as follows:
public FaceRecognizer()
{
    InitializeComponent();
    // Load faces from the dataset
    try
    {
        ContTrain = ContTrain + 1;
        // Load previously trained faces and the labels for each image from the database here
        string NameLabelsinfo = File.ReadAllText(Application.StartupPath + "/TrainedFaces/TrainedNameLables.txt");
        string[] NameLabels = NameLabelsinfo.Split('%');
        NumNameLabels = Convert.ToInt16(NameLabels[0]);
        string IDLabelsinfo = File.ReadAllText(Application.StartupPath + "/TrainedFaces/TrainedNameLables.txt");
        string[] IDLables = IDLabelsinfo.Split('%');
        NumIDLabels = Convert.ToInt16(IDLables[0]);
        if (NumNameLabels == NumIDLabels)
        {
            ContTrain = NumNameLabels;
            string LoadFaces;
            // Converting the master image to a bitmap
            for (int tf = 1; tf < NumNameLabels + 1; tf++)
            {
                LoadFaces = String.Format("face{0}.bmp", tf);
                trainingImages.Add(new Image<Gray, byte>(String.Format("{0}/TrainedFaces/{1}", Application.StartupPath, LoadFaces)));
                IDLabless.Add(IDLables[tf]);
                NameLabless.Add(NameLabels[tf]);
            }
        }
    }
    catch (Exception e)
    {
        // Shown if nothing is saved in the training set
        MessageBox.Show("Nothing in binary database, please add at least a face (simply train the prototype with the Add Face button).", "Trained faces load", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
    }
}
The face recognizer method looks like this:
private void RecognizeFaces()
{
    // Detect faces from the gray-scale image and store them in an array of type
    // 'var', i.e. 'MCvAvgComp[]'
    Image<Gray, byte> grayframe = GetGrayframe();
    stringOutput.Add("");
    // Assign user-defined values to the parameter variables:
    MinNeighbors = int.Parse(comboBoxMinNeigh.Text);         // the 3rd parameter
    WindowsSize = int.Parse(textBoxWinSiz.Text);             // the 5th parameter
    ScaleIncreaseRate = Double.Parse(comboBoxScIncRte.Text); // the 2nd parameter
    // Detect faces from an image and save them to var, i.e. MCvAvgComp[][]
    var faces = grayframe.DetectHaarCascade(haar, ScaleIncreaseRate, MinNeighbors,
        HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
        new Size(WindowsSize, WindowsSize))[0];
    if (faces.Length > 0 && trainingImages.ToArray().Length != 0)
    {
        Bitmap ExtractedFace; // empty
        ExtFaces = new Image<Gray, byte>[faces.Length];
        faceNo = 0;
        foreach (var face in faces)
        {
            //ImageFrame.Draw(face.rect, new Bgr(Color.Green), 3);
            // Set the size of the empty box (ExtractedFace) which will later
            // contain the detected face
            ExtractedFace = new Bitmap(face.rect.Width, face.rect.Height);
            ExtFaces[faceNo] = new Image<Gray, byte>(ExtractedFace);
            ExtFaces[faceNo] = ExtFaces[faceNo].Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
            // TermCriteria for face recognition, with the number of trained images
            // as maxIteration
            MCvTermCriteria termCrit = new MCvTermCriteria(ContTrain, 0.001);
            // Eigen face recognizer
            EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
                trainingImages.ToArray(),
                NameLabless.ToArray(),
                700,
                ref termCrit);
            stringOutput[faceNo] = recognizer.Recognize(ExtFaces[faceNo]);
            stringOutput.Add("");
            faceNo++;
        }
        pbExtractedFaces.Image = ExtFaces[0].ToBitmap(); // show the first detected face
        if (stringOutput[0] == "")
        {
            label1.Text = "Unknown";
            label9.Text = "";
        }
        // Draw the label for each face detected and recognized
        else
        {
            //string[] label = stringOutput[faceNo].Split(',');
            label1.Text = "Known";
            //for (int i = 0; i < 2; i++)
            //{
            label9.Text = stringOutput[0];
            //label7.Text = label[1];
            //}
        }
    }
    if (faceNo == 0)
    {
        MessageBox.Show("No face detected");
    }
    else
    {
        btnNextRec.Enabled = true;
        btnPreviousRec.Enabled = true;
    }
}
The training set is trained with detected faces as follows:
private void saveFaceToDB_Click(object sender, EventArgs e)
{
    abd = (Bitmap)pbExtractedFaces.Image;
    TrainedFaces = new Image<Gray, byte>(abd);
    trainingImages.Add(TrainedFaces);
    NameLabless.Add(StudentName.Text);
    IDLabless.Add(StudentID.Text);
    // Write the number of trained faces to a text file for later loading
    File.WriteAllText(Application.StartupPath + "/TrainedFaces/TrainedNameLables.txt", trainingImages.ToArray().Length + "%");
    File.WriteAllText(Application.StartupPath + "/TrainedFaces/TrainedIDLables.txt", trainingImages.ToArray().Length + "%");
    // Write the labels of the trained faces to a text file for later loading
    for (int i = 1; i < trainingImages.ToArray().Length + 1; i++)
    {
        trainingImages.ToArray()[i - 1].Save(String.Format("{0}/TrainedFaces/face{1}.bmp", Application.StartupPath, i));
        File.AppendAllText(Application.StartupPath + "/TrainedFaces/TrainedIDLables.txt", NameLabless.ToArray()[i - 1] + "%");
        File.AppendAllText(Application.StartupPath + "/TrainedFaces/TrainedNameLables.txt", IDLabless.ToArray()[i - 1] + "%");
    }
    MessageBox.Show(StudentName.Text + "'s face detected and added :)", "Training OK", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
Thanks
I am working on a people counter. For this I have a Microsoft Kinect installed over the door.
I am working with C# and EmguCV. I have extracted the heads of the people so that they appear as white blobs on a black image, and I have created a bounding box around each head. That works fine, so I now know how many blobs I have per frame and I also know their positions. But now I want to track the blobs, because I want to count how many people come in and go out, and I don't know how to do this. The problem is that in every frame, new blobs can appear and old blobs can disappear. Can anyone give me an algorithm, some code, or a paper?
Thanks a lot!
Sure. This is the code for the blobs:
using (MemStorage stor = new MemStorage())
{
    Contour<System.Drawing.Point> contours = head_image.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL, stor);
    for (int i = 0; contours != null; contours = contours.HNext)
    {
        i++;
        //if ((contours.Area > Math.Pow(sliderMinSize.Value, 2)) && (contours.Area < Math.Pow(sliderMaxSize.Value, 2)))
        {
            MCvBox2D box = contours.GetMinAreaRect();
            blobCount++;
            contour_image.Draw(box, new Bgr(System.Drawing.Color.Red), 1);
            new_position = new System.Drawing.Point((int)(box.center.X), (int)(box.center.Y));
            new_x = box.center.X;
            new_y = box.center.Y;
        }
    }
}
Please see Emgu CV Blob Detection for more information. Assuming you are using Emgu CV 2.1 or higher, the answer there will work. If you are on an older version (1.5 or later), see this thread on how to easily detect blobs, or look at the code below:
Capture capture = new Capture();
ImageViewer viewer = new ImageViewer();
BlobTrackerAutoParam param = new BlobTrackerAutoParam();
param.ForgroundDetector = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
param.FGTrainFrames = 10;
BlobTrackerAuto tracker = new BlobTrackerAuto(param);
Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
{
    tracker.Process(capture.QuerySmallFrame().PyrUp());
    Image<Gray, Byte> img = tracker.GetForgroundMask();
    //viewer.Image = tracker.GetForgroundMask();
    MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
    foreach (MCvBlob blob in tracker)
    {
        img.Draw(Rectangle.Round(blob), new Gray(255.0), 2);
        img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Gray(255.0));
    }
    viewer.Image = img;
});
viewer.ShowDialog();
Hope this helps!
EDIT
I think you should use this code every ten frames or so (~3 times a second) and do something like this:
Capture capture = new Capture();
ImageViewer viewer = new ImageViewer();
BlobTrackerAutoParam param = new BlobTrackerAutoParam();
param.ForgroundDetector = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
param.FGTrainFrames = 10;
BlobTrackerAuto tracker = new BlobTrackerAuto(param);
int frames = 0;
Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
{
    frames++; // add to the number of frames
    if (frames == 10)
    {
        frames = 0; // after 10 frames, do the processing and reset frames to 0
        tracker.Process(capture.QuerySmallFrame().PyrUp());
        Image<Gray, Byte> img = tracker.GetForgroundMask();
        //viewer.Image = tracker.GetForgroundMask();
        int blobs = 0;
        MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
        foreach (MCvBlob blob in tracker)
        {
            //img.Draw(Rectangle.Round(blob), new Gray(255.0), 2);
            //img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Gray(255.0));
            // Only uncomment these if you want to draw a rectangle around the blob and add text.
            blobs++; // count each blob
        }
        // use `blobs` as your counter here
        blobs = 0; // reset
        viewer.Image = img; // show the processed frame
    }
});
viewer.ShowDialog();
EDIT 2
If you just want to identify the blobs, then what you want is MCvBlob.ID. This is the ID of the blob, and you can check which IDs are still there and which are not. I would still do this only every ten frames so as not to slow things down too much. You just need a simple algorithm that observes what the IDs are and whether they have changed. I would store the IDs in a List<string> and check that list for changes every few frames. Example:
List<string> LastFrameIDs = new List<string>(), CurrentFrameIDs = new List<string>();
Capture capture = new Capture();
ImageViewer viewer = new ImageViewer();
BlobTrackerAutoParam param = new BlobTrackerAutoParam();
param.ForgroundDetector = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
param.FGTrainFrames = 10;
BlobTrackerAuto tracker = new BlobTrackerAuto(param);
int frames = 0;
Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
{
    frames++; // add to the number of frames
    if (frames == 10)
    {
        frames = 0; // after 10 frames, do the processing and reset frames to 0
        tracker.Process(capture.QuerySmallFrame().PyrUp());
        Image<Gray, Byte> img = tracker.GetForgroundMask();
        //viewer.Image = tracker.GetForgroundMask();
        int blobs = 0, i = 0;
        MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
        foreach (MCvBlob blob in tracker)
        {
            //img.Draw(Rectangle.Round(blob), new Gray(255.0), 2);
            //img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Gray(255.0));
            // Only uncomment these if you want to draw a rectangle around the blob and add text.
            CurrentFrameIDs.Add(blob.ID.ToString());
            if (i >= LastFrameIDs.Count || CurrentFrameIDs[i] != LastFrameIDs[i])
                img.Draw(Rectangle.Round(blob), new Gray(0), 2); // mark the new/changed blob
            blobs++; // count each blob
            i++;
        }
        // use `blobs` as your counter here
        blobs = 0; // reset
        LastFrameIDs = CurrentFrameIDs;
        CurrentFrameIDs = new List<string>();
        viewer.Image = img; // show the processed frame
    }
});
viewer.ShowDialog();
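To go from stable blob IDs to the actual in/out count the question asks about, one common approach (my own sketch, not part of the Emgu API; the class name, field names, and door-line position below are all hypothetical) is to remember each blob's previous centroid and count crossings of a virtual line across the doorway:

using System.Collections.Generic;

// Hypothetical helper: counts crossings of a horizontal "door line".
class DoorCounter
{
    private readonly Dictionary<string, float> _lastY = new Dictionary<string, float>();
    private readonly float _doorY; // assumed y-coordinate of the virtual door line

    public int CountIn { get; private set; }
    public int CountOut { get; private set; }

    public DoorCounter(float doorY) { _doorY = doorY; }

    // Call once per tracked blob per processed frame, e.g. with
    // blob.ID.ToString() and blob.Center.Y from the loop above.
    public void Update(string blobId, float currentY)
    {
        float previousY;
        if (_lastY.TryGetValue(blobId, out previousY))
        {
            if (previousY < _doorY && currentY >= _doorY)
                CountIn++;   // crossed the line moving down in the image
            else if (previousY > _doorY && currentY <= _doorY)
                CountOut++;  // crossed the line moving up in the image
        }
        _lastY[blobId] = currentY;
    }
}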