C# Emgu BackgroundSubtractorMOG2 returns black image

Hey, and thanks for your time.
I'm working on some OpenCV in C# (Emgu) and trying to display the foreground from BackgroundSubtractorMOG2, but with no luck. Here is my code:
internal Bitmap AdaptableBackgroundSubtraction(Bitmap org, int history = 2, bool detectShadows = true, int threshold = 16)
{
    Image<Bgr, byte> imageorg = org.ToImage<Bgr, byte>();
    var forgroundmask = imageorg;
    var mDetector = new BackgroundSubtractorMOG2(history, threshold, detectShadows);
    mDetector.Apply(imageorg, forgroundmask, 0.5);
    return forgroundmask.ToBitmap();
}
I have tried a few things and changed parameters, but I don't get any results. Any help will be appreciated.
Picture of program running
Link to the full code if needed
https://gitlab.com/Clipcometx/semesterportefolje/-/tree/master/Machinelearning/ComputerVision

According to the attached picture, you set the max hue value to 255, while the hue range is 0-179.
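The same limit applies anywhere you build an HSV mask. As a minimal, untested Emgu sketch (variable names are mine) with the hue bound capped at 179:

// In OpenCV's 8-bit HSV, hue is stored as H/2, so it only spans 0-179;
// an upper hue bound of 255 can never be reached and distorts the mask logic.
Image<Hsv, byte> hsv = imageorg.Convert<Hsv, byte>();
Image<Gray, byte> mask = hsv.InRange(new Hsv(0, 0, 0), new Hsv(179, 255, 255));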

Related

Threshold depth distance in image OpenCvSharp C# Intel RealSense

This may be a stupid question, but how can you make a threshold so that the depth distance of the camera can be changed? Right now I am using Cv2.Threshold to do that, but with the Otsu method the whole picture changes to one color instead of different shades of a color.
The code used:
var colorizedDepth = colorizer.Process<VideoFrame>(depthFrame).DisposeWith(frames);
Mat testcd = new Mat(colorizedDepth.Height, colorizedDepth.Width, MatType.CV_8UC3, colorizedDepth.Data);
Mat testgd = new Mat();
Cv2.CvtColor(testcd, testgd, ColorConversionCodes.RGBA2GRAY);
Mat testbd = new Mat();
Cv2.Threshold(testgd, testbd, 0, 255, ThresholdTypes.Otsu | ThresholdTypes.Binary);
Cv2.ImShow("camera", testgd);
Cv2.WaitKey(0);
The code to get the colored depth is from the librealsense C# wrapper:
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/csharp
Does anyone know what I am doing wrong with the threshold, so that the depth distances can be changed?
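For context on why Otsu behaves this way: Otsu derives the threshold from the image histogram and ignores the value you pass in, so it always splits the frame into just two levels. If the goal is a depth cutoff you can change, a plain binary threshold with an adjustable value is the usual route; a minimal, untested OpenCvSharp sketch (the cutoff value is a hypothetical one to tune or bind to a slider):

// Fixed, user-adjustable cutoff instead of Otsu's automatic one
int depthCutoff = 128; // hypothetical value; expose via a trackbar/slider
Mat testbd = new Mat();
Cv2.Threshold(testgd, testbd, depthCutoff, 255, ThresholdTypes.Binary);
Cv2.ImShow("camera", testbd); // show the thresholded image, not the plain gray one
Cv2.WaitKey(0);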

EmguCV crop detected shape automatically

I have an application which is going to be used to crop blank space from scanned documents, for example this image. What I want to do is extract only the card and remove all the white/blank area. I'm using EmguCV's FindContours to do this, and at the moment I'm able to find the card contour plus some noise captured by the scanner, as you can see below.
My question is: how can I crop the largest contour found, or how can I extract it by removing the other contours and the blank/white space? Or maybe it is possible with the contour index?
Edit: Another possible solution might be to draw the contour onto another PictureBox.
Here is the code that I'm using:
Image<Bgr, byte> imgInput;
Image<Bgr, byte> imgCrop;

private void abrirToolStripMenuItem_Click(object sender, EventArgs e)
{
    try
    {
        OpenFileDialog dialog = new OpenFileDialog();
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            imgInput = new Image<Bgr, byte>(dialog.FileName);
            pictureBox1.Image = imgInput.Bitmap;
            imgCrop = imgInput;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
private void shapeToolStripMenuItem_Click(object sender, EventArgs e)
{
    if (imgCrop == null)
    {
        return;
    }
    try
    {
        var temp = imgCrop.SmoothGaussian(5).Convert<Gray, byte>().ThresholdBinaryInv(new Gray(230), new Gray(255));
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat m = new Mat();
        CvInvoke.FindContours(temp, contours, m, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
        {
            double perimeter = CvInvoke.ArcLength(contours[i], true);
            VectorOfPoint approx = new VectorOfPoint();
            CvInvoke.ApproxPolyDP(contours[i], approx, 0.04 * perimeter, true);
            CvInvoke.DrawContours(imgCrop, contours, i, new MCvScalar(0, 0, 255), 2);
            pictureBox2.Image = imgCrop.Bitmap;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
I'll give you my answer in C++, but the same operations should be available in Emgu CV.
I propose the following approach:
1. Segment (that is, separate) the target object using the HSV color space.
2. Calculate a binary mask for the object of interest.
3. Get the biggest blob in the binary mask; this should be the card.
4. Compute the bounding box of the card.
5. Crop the card out of the input image.
Ok, first get (or read) the input image. Apply a median blur filter; it will help get rid of the high-frequency noise (the little grey blobs) that you see in the input. The main parameter to adjust is the size of the kernel (or filter aperture). Be careful, though: a high value will produce an aggressive effect and will likely destroy your image:
//read input image:
std::string imageName = "C://opencvImages//yoshiButNotYoshi.png";
cv::Mat imageInput = cv::imread( imageName );
//apply a median blur filter, the size of the kernel is 5 x 5:
cv::Mat blurredImage;
cv::medianBlur ( imageInput, blurredImage, 5 );
This is the result of the blur filter (the embedded image is resized):
Next, segment the image. Exploit the fact that the background is white, and everything else (the object of interest, mainly) has some color information. You can use the HSV color space. First, convert the BGR image into HSV:
//BGR to HSV conversion:
cv::Mat hsvImg;
cv::cvtColor( blurredImage, hsvImg, CV_BGR2HSV );
The HSV color space encodes color information differently than the typical BGR/RGB color space. Its advantage over other color models pretty much depends on the application, but in general it is more robust when working with hue gradients. I'll try to get an HSV-based binary mask for the object of interest.
In a binary mask, everything you are interested in on the input image is colored white, everything else black (or vice versa). You can obtain this mask using the inRange function. However, you must specify the color ranges that will be rendered white (or black) in the output mask. For your image, using the HSV color model, those values are:
cv::Scalar minColor( 0, 0, 100 ); //the lower range of colors
cv::Scalar maxColor( 0, 0, 255 ); //the upper range of colors
Now, get the binary mask:
//prepare the binary mask:
cv::Mat binaryMask;
//create the binary mask using the specified range of color
cv::inRange( hsvImg, minColor, maxColor, binaryMask );
//invert the mask:
binaryMask = 255 - binaryMask;
You get this image:
Now you can get rid of some of the noise (that survived the blur filter) via morphological filtering. Morphological filters are, essentially, logical rules applied to binary (or grayscale) images. They take a "neighborhood" of pixels in the input and apply logical functions to get an output. They are quite handy for cleaning up binary images. I'll apply a series of these filters to achieve just that.
I'll first erode the image and then dilate it using 3 iterations. The structuring element is a rectangle of size 3 x 3:
//apply some morphology to clean the binary mask a little bit:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
int morphIterations = 3;
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphIterations );
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphIterations );
You get this output. Check out how the noisy blobs are mostly gone:
Now comes the cool part. You can loop through all the contours in this image and get the biggest of them all. That's a typical operation I perform constantly, so I've written a function that does it, called findBiggestBlob; I'll present the function later. Check out the result you get after finding and extracting the biggest blob:
//find the biggest blob in the binary image:
cv::Mat biggestBlob = findBiggestBlob( binaryMask );
You get this:
Now, you can get the bounding box of the biggest blob using boundingRect:
//Get the bounding box of the biggest blob:
cv::Rect bBox = cv::boundingRect( biggestBlob );
Let's draw the bounding box on the input image:
cv::Mat imageClone = imageInput.clone();
cv::rectangle( imageClone, bBox, cv::Scalar(255,0,0), 2 );
Finally, let's crop the card out of the input image:
cv::Mat croppedImage = imageInput( bBox );
This is the cropped output:
This is the code for the findBiggestBlob function. The idea is just to compute all the contours in the binary input, calculate their area and store the contour with the largest area of the bunch:
//Function to get the largest blob in a binary image:
cv::Mat findBiggestBlob( cv::Mat &inputImage ){
    cv::Mat biggestBlob = inputImage.clone();
    double largest_area = 0;
    int largest_contour_index = 0;
    std::vector< std::vector<cv::Point> > contours; // Vector for storing contours
    std::vector< cv::Vec4i > hierarchy;
    // Find the contours in the image
    cv::findContours( biggestBlob, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
    for( int i = 0; i < (int)contours.size(); i++ ) {
        //Find the area of the contour
        double a = cv::contourArea( contours[i], false );
        //Store the index of the largest contour:
        if( a > largest_area ){
            largest_area = a;
            largest_contour_index = i;
        }
    }
    //Once you get the biggest blob, paint it black on a copy:
    cv::Mat tempMat = biggestBlob.clone();
    cv::drawContours( tempMat, contours, largest_contour_index, cv::Scalar(0),
        CV_FILLED, 8, hierarchy );
    //Erase the smaller blobs by subtracting the blacked-out copy:
    biggestBlob = biggestBlob - tempMat;
    tempMat.release();
    return biggestBlob;
}
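Since the question is about Emgu CV, here is an untested C# sketch of the same pipeline against Emgu's CvInvoke API. The color bounds are copied from the C++ above, and method and enum names should be double-checked against your Emgu version; treat this as a starting point, not a tested translation:

Image<Bgr, byte> input = new Image<Bgr, byte>(dialog.FileName); // your loaded scan
Mat blurred = new Mat();
CvInvoke.MedianBlur(input, blurred, 5);

Mat hsv = new Mat();
CvInvoke.CvtColor(blurred, hsv, ColorConversion.Bgr2Hsv);

// Same color bounds as the C++ answer; tweak for your scans.
Mat binaryMask = new Mat();
using (ScalarArray lower = new ScalarArray(new MCvScalar(0, 0, 100)))
using (ScalarArray upper = new ScalarArray(new MCvScalar(0, 0, 255)))
    CvInvoke.InRange(hsv, lower, upper, binaryMask);
CvInvoke.BitwiseNot(binaryMask, binaryMask); // invert the mask

// Morphological clean-up: 3 erosions followed by 3 dilations with a 3 x 3 rectangle.
Mat se = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));
CvInvoke.MorphologyEx(binaryMask, binaryMask, MorphOp.Erode, se, new Point(-1, -1), 3, BorderType.Default, new MCvScalar());
CvInvoke.MorphologyEx(binaryMask, binaryMask, MorphOp.Dilate, se, new Point(-1, -1), 3, BorderType.Default, new MCvScalar());

// Biggest contour -> bounding box -> crop.
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binaryMask, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
int biggestIndex = -1;
double biggestArea = 0;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > biggestArea) { biggestArea = area; biggestIndex = i; }
}
if (biggestIndex >= 0)
{
    Rectangle bBox = CvInvoke.BoundingRectangle(contours[biggestIndex]);
    Image<Bgr, byte> cropped = input.Copy(bBox); // the cropped card
    pictureBox2.Image = cropped.Bitmap;          // pictureBox2 from the question
}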

How to increase accuracy of OCR text detection in Tesseract with EmguCV?

Well, I am not able to get good text-detection accuracy with Tesseract. Please check the code and image below.
Mat imgInput = CvInvoke.Imread(@"D:\workspace\raw2\IMG_20200625_194541.jpg", ImreadModes.AnyColor);
int kernel_size = 11;

//Dilation
Mat imgDilatedEdges = new Mat();
CvInvoke.Dilate(
    imgInput,
    imgDilatedEdges,
    CvInvoke.GetStructuringElement(
        ElementShape.Rectangle,
        new Size(kernel_size, kernel_size),
        new Point(1, 1)),
    new Point(1, 1),
    1,
    BorderType.Default,
    new MCvScalar(0));

//Blur
Mat imgBlur = new Mat();
CvInvoke.MedianBlur(imgDilatedEdges, imgBlur, kernel_size);

//Abs diff
Mat imgAbsDiff = new Mat();
CvInvoke.AbsDiff(imgInput, imgBlur, imgAbsDiff);

//Normalize
Mat imgNorm = imgAbsDiff;
CvInvoke.Normalize(imgAbsDiff, imgNorm, 0, 255, NormType.MinMax, DepthType.Default);

//Getting threshold value
Mat imgThreshhold = new Mat();
double thresholdval = CvInvoke.Threshold(imgAbsDiff, imgThreshhold, 230, 0, ThresholdType.Trunc);

//Normalize
CvInvoke.Normalize(imgThreshhold, imgThreshhold, 0, 255, NormType.MinMax, DepthType.Default);
imgThreshhold.Save(@"D:\workspace\ocr_images\IMG_20200625_194541.jpg");

//Contrast correction
Mat lab = new Mat();
CvInvoke.CvtColor(imgThreshhold, lab, ColorConversion.Bgr2Lab);
VectorOfMat colorChannelB = new VectorOfMat();
CvInvoke.Split(lab, colorChannelB);
CvInvoke.CLAHE(colorChannelB[0], 3.0, new Size(12, 12), colorChannelB[0]);

//Merge
Mat clahe = new Mat();
CvInvoke.Merge(colorChannelB, clahe);

Image<Bgr, byte> output = new Image<Bgr, byte>(@"D:\workspace\ocr_images\IMG_20200625_194541.jpg");
Bitmap bmp = output.ToBitmap();
//Setting the image to 300 dpi since Tesseract likes that
bmp.SetResolution(300, 300);
bmp.Save(@"D:\workspace\ocr_images\IMG_20200625_194541.jpg");
I am not getting the expected accuracy. Please check how the image is converted.
source image
converted image
I have posted a few images above that you can refer to. For the first image I get garbage data; for the last two images I get partial data.
Converting the image to grayscale and playing with the threshold gives better output.
What I want to understand is: if the threshold is the key part, how can I get a dynamic threshold value for each new image? This is going to run as a service, so the user will simply pass in an image and get the result back. My app should be intelligent enough to process and understand the image.
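One standard way to get a per-image threshold automatically is Otsu's method, which derives the cutoff from the grayscale histogram. An untested Emgu sketch along the grayscale route mentioned above (variable names are mine):

Mat gray = new Mat();
CvInvoke.CvtColor(imgInput, gray, ColorConversion.Bgr2Gray);
Mat binary = new Mat();
//Otsu ignores the 0 passed below and computes the threshold from the histogram
double otsuValue = CvInvoke.Threshold(gray, binary, 0, 255, ThresholdType.Otsu | ThresholdType.Binary);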
Do I have to adjust the contrast and threshold more precisely? If yes, how do I do that? Or is the image itself at fault, i.e. is noise causing the problem?
Please let me know what I am doing wrong in the algorithm, or anything that will help me understand the issue. Can anyone tell me what the ideal image-preprocessing steps for OCR are?
I am using csharp, emucv and tesseract.
Any suggestion will be highly appreciated.

Resize Image<Gray, byte> without scaling. Emgu CV

I am using Image.InRange to create a mask from an image. To keep performance at its maximum, I am using Image.ROI to crop the image prior to using the InRange method. In order to actually work with the image, though, I need it to have the same dimensions as the original, but all that is apparent to me is how to scale an image, not how to change its dimensions while preserving the image.
Here is the code in question:
public Image<Gray, byte> Process(Image<Bgr, byte> frameIn, Rectangle roi)
{
    Image<Bgr, byte> rectFrame = null;
    Image<Gray, byte> mask = null;
    if (roi != Rectangle.Empty)
    {
        rectFrame = frameIn.Copy(roi);
    }
    else
    {
        rectFrame = frameIn;
    }
    if (Equalize)
    {
        rectFrame._EqualizeHist();
    }
    mask = rectFrame.InRange(minColor, maxColor);
    mask._Erode(Iterations);
    mask._Dilate(Iterations);
    if (roi != Rectangle.Empty)
    {
        //How do I give the image its original dimensions?
    }
    return mask;
}
Thank you,
Chris
I will assume you wish to return the mask with the same size as frameIn. The easiest way is to copy the mask into a new image that has the same size as frameIn. Alternatively, if your application isn't time sensitive, you could make the mask the same size as frameIn, set its ROI, and then do your operations; this takes longer to process, though, and isn't best practice.
Anyway, here is hopefully the code you're after. If not, let me know and I'll correct it accordingly.
if (roi != Rectangle.Empty)
{
    //Create a blank image with the correct size
    Image<Gray, byte> mask_return = new Image<Gray, byte>(frameIn.Size);
    //Set its ROI to the same size as mask, centred in the image (you may wish to change this)
    mask_return.ROI = new Rectangle((mask_return.Width - mask.Width) / 2, (mask_return.Height - mask.Height) / 2, mask.Width, mask.Height);
    //Copy the mask to the return image
    CvInvoke.cvCopy(mask, mask_return, IntPtr.Zero);
    //Reset the return image ROI so it has the same dimensions as frameIn
    mask_return.ROI = new Rectangle(0, 0, frameIn.Width, frameIn.Height);
    //Return the mask_return image instead of the mask
    return mask_return;
}
return mask;
Hope this helps,
Cheers,
Chris

How to make white blob tracking for video or camera capture on Emgu?

I want to make a program using C# with Emgu that can detect white blobs in images from a camera and also track them. The program should also be able to return the IDs of the tracked blobs.
Frame1: http://www.freeimagehosting.net/uploads/ff2ac19054.jpg
Frame2: http://www.freeimagehosting.net/uploads/09e20e5dd6.jpg
The Emgu sample project "VideoSurveilance" in the Emgu.CV.Example solution (Emgu.CV.Example.sln) demonstrates blob tracking and assigns IDs to the blobs.
I'm a newbie to OpenCV, but it seems to me that tracking only "white" blobs may be harder than it sounds. For example, the blobs in your sample picture aren't really "white", are they? What I think you are really trying to do is "get the blobs that are brighter than the background by a certain amount", i.e. find a gray blob on a black background or a white blob on a gray background.
It depends on what your background is like. If it is consistently dark, as in the images you attached, then you should be able to extract those "white" blobs with a simple threshold. For any smarter segmentation you'll need to use some other features as well (e.g. correlation, if your object is color consistent).
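As a minimal, untested Emgu illustration of that threshold idea (the frame variable and the cutoff of 200 are assumptions to tune):

//Convert the captured frame to grayscale and keep only pixels well above the dark background
Image<Gray, byte> gray = frame.Convert<Gray, byte>();
Image<Gray, byte> whiteBlobs = gray.ThresholdBinary(new Gray(200), new Gray(255));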
I cannot say the code will work because I haven't tested it.
The general idea is to take the captured frame (assuming you're capturing frames) and filter out the noise by modifying the saturation and value (brightness) channels. This modified HSV image is then processed as grayscale. Blobs can be labeled by looping through the blob collection generated by the tracker and drawing the assigned IDs and bounding boxes.
Also, you may be interested in AForge.net and the related article: Hands Gesture Recognition on the mechanics and implementation of using the histogram for computer vision.
This is a modified version of custom tracker code found on the NUI forums:
static void Main()
{
    Capture capture = new Capture(); //create a camera capture
    Image<Bgr, Byte> img = capture.QuerySmallFrame();
    OptimizeBlobs(img);

    BackgroundStatisticsModel bsm = new BackgroundStatisticsModel(img, Emgu.CV.CvEnum.BG_STAT_TYPE.FGD_STAT_MODEL);
    bsm.Update(img);

    BlobSeq oldBlobs = new BlobSeq();
    BlobSeq newBlobs = new BlobSeq();

    ForgroundDetector fd = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
    BlobDetector bd = new BlobDetector(Emgu.CV.CvEnum.BLOB_DETECTOR_TYPE.CC);
    BlobTracker bt = new BlobTracker(Emgu.CV.CvEnum.BLOBTRACKER_TYPE.CC);

    BlobTrackerAutoParam btap = new BlobTrackerAutoParam();
    btap.BlobDetector = bd;
    btap.ForgroundDetector = fd;
    btap.BlobTracker = bt;
    btap.FGTrainFrames = 5;
    BlobTrackerAuto bta = new BlobTrackerAuto(btap);

    Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
    { //run this until the application is closed (close button click on image viewer)
        //******* capture image *******
        img = capture.QuerySmallFrame();
        OptimizeBlobs(img);

        bd.DetectNewBlob(img, bsm.Foreground, newBlobs, oldBlobs);

        List<MCvBlob> blobs = new List<MCvBlob>(bta);
        MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
        foreach (MCvBlob blob in blobs)
        {
            //draw each blob's bounding box and ID in white on the BGR frame
            img.Draw(Rectangle.Round(blob), new Bgr(255, 255, 255), 2);
            img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Bgr(255, 255, 255));
        }
        Image<Gray, Byte> fg = bta.GetForgroundMask();
    });
}
public static Image<Gray, Byte> OptimizeBlobs(Image<Bgr, Byte> img)
{
    // can improve image quality, but expensive if capturing in real time
    img._EqualizeHist();
    // convert img to a temporary HSV object
    Image<Hsv, Byte> imgHSV = img.Convert<Hsv, Byte>();
    // break down HSV into channels
    Image<Gray, Byte>[] channels = imgHSV.Split();
    Image<Gray, Byte> imgHSV_saturation = channels[1]; // saturation channel
    Image<Gray, Byte> imgHSV_value = channels[2];      // value channel
    // use the saturation and value channels to filter noise [you will need to tweak these values]
    Image<Gray, Byte> saturationFilter = imgHSV_saturation.InRange(new Gray(0), new Gray(80));
    Image<Gray, Byte> valueFilter = imgHSV_value.InRange(new Gray(200), new Gray(255));
    // combine the filters to get the final image to process
    Image<Gray, Byte> imgTarget = valueFilter.And(saturationFilter);
    return imgTarget;
}
