I'm working on vein detection in an image using Emgu CV and I have a few questions. Is there a simple way to detect a color or a range of colors? Is there a simple way to replace that color with another (e.g. the average color of the image)? How can I achieve that without degrading performance?
Thanks in advance!
I can't believe this question has gone unanswered for about 3 years...
public static Image<Bgr, byte> BackgroundToGreen(Image<Bgr, byte> rgbimage)
{
    // Work on a copy so the input image is left untouched
    Image<Bgr, byte> ret = rgbimage.Copy();
    // Binary mask of near-white (background) pixels
    var mask = rgbimage.InRange(new Bgr(190, 190, 190), new Bgr(255, 255, 255));
    // Paint every masked pixel green (values are in BGR order)
    ret.Mat.SetTo(new MCvScalar(200, 237, 204), mask);
    return ret;
}
Why Mat?
http://www.emgu.com/wiki/index.php/Working_with_Images#Accessing_the_pixels_from_Mat
Unlike the Image<,> class, where memory is pre-allocated and fixed, the memory of a Mat can be automatically re-allocated by OpenCV function calls. We cannot pre-allocate managed memory and assume the same memory is used throughout the lifetime of the Mat object. As a result, the Mat class does not contain a Data property like the Image<,> class, where the pixels can be accessed through a managed array.
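To make the quoted difference concrete, here is a small sketch (variable names and values are my own): the Image<,> class exposes its pixels through a managed Data array, while a Mat has to be manipulated through methods such as SetTo.

```csharp
// Sketch only: Image<,> exposes a managed Data array indexed [row, column, channel]
Image<Bgr, byte> img = new Image<Bgr, byte>(640, 480);
byte blue = img.Data[10, 20, 0];   // read the blue channel of pixel (row 10, col 20)
img.Data[10, 20, 2] = 255;         // write the red channel directly

// A Mat has no Data property, because OpenCV may re-allocate its buffer;
// pixels are modified through methods instead, e.g.:
Mat mat = img.Mat;
mat.SetTo(new MCvScalar(0, 0, 0)); // fill the whole Mat with black
```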
What is InRange?
http://www.emgu.com/wiki/files/2.0.0.0/html/07eff70b-f81f-6313-98a9-02d508f7c7e0.htm
Checks that image elements lie between two scalars
Return Value
res[i,j] = 255 if inrange, 0 otherwise
What is SetTo?
http://www.emgu.com/wiki/files/2.4.0/document/html/0309f41d-aa02-2c0d-767f-3d7d8ccc9212.htm
Copies scalar value to every selected element of the destination GpuMat: GpuMat(I)=value if mask(I)!=0
So it is.
Quoted from: http://blog.zsxsoft.com/post/24 (Chinese only) (CC BY-NC-ND)
Image<Bgr, Byte> img;
// InRange returns a binary mask: 255 where the pixel is within the range, 0 otherwise
Image<Gray, Byte> grayImg = img.InRange(new Bgr(minB, minG, minR), new Bgr(maxB, maxG, maxR));
This shows only your color range as a binary mask, and it is the fastest way.
But if you want to detect a certain range of colors AND replace them:
Image<Bgr, Byte> img;
for (int i = 0; i < img.Height; i++)
{
    for (int j = 0; j < img.Width; j++)
    {
        Bgr currentColor = img[i, j];
        if (currentColor.Blue >= minB && currentColor.Blue <= maxB &&
            currentColor.Green >= minG && currentColor.Green <= maxG &&
            currentColor.Red >= minR && currentColor.Red <= maxR)
        {
            img[i, j] = new Bgr(B, G, R);
        }
    }
}
Related
In OpenCv C++ you can do:
int nbrLabel = connectedComponentsWithStats(img, labelsMat, stats, centroids);
int selectedLabel = 4;
Mat mask = (labelsMat == selectedLabel);
mask will be the same size as img and will record whether each pixel's label equals selectedLabel or not.
Emgu doesn't have a == operator for Mat.
What would be the best solution ?
The only solution I found was to use InRange from the Image<> class with the same lower and upper value:
int nbrLabel = CvInvoke.ConnectedComponentsWithStats(img, labelsMat, stats, centroids);
Image<Gray, byte> labelsImg = labelsMat.ToImage<Gray, byte>();
int selectedLabel = 4;
Mat mask = labelsImg.InRange(new Gray(selectedLabel), new Gray(selectedLabel)).Mat;
I am working on a project where I need to identify dots from IR lasers on a surface. For that I use a camera with an IR filter.
Some input images:
There can be several dots, too. So I tried to sharpen the image from the webcam and then use Emgu CV's FindContours method.
Here is my code:
public static Image<Gray, byte> Sharpen(Image<Gray, byte> image, int w, int h, double sigma1, double sigma2, int k)
{
    // Gaussian kernel dimensions must be odd
    w = (w % 2 == 0) ? w - 1 : w;
    h = (h % 2 == 0) ? h - 1 : h;
    // Apply Gaussian smoothing using w, h and sigma
    var gaussianSmooth = image.SmoothGaussian(w, h, sigma1, sigma2);
    // Obtain the mask by subtracting the smoothed image from the original one
    var mask = image - gaussianSmooth;
    // Scale the mask by the weight k
    mask *= k;
    // Add the weighted mask back to the original image
    image += mask;
    return image;
}
private void ProcessFrame(object sender, EventArgs arg)
{
    Mat frame = new Mat();
    if (_capture.Retrieve(frame, CameraDevice))
    {
        Image<Bgr, byte> original = frame.ToImage<Bgr, byte>();
        Image<Gray, byte> img = Sharpen(frame.ToImage<Gray, byte>(), 100, 100, 100, 100, 30);
        Image<Gray, byte> thresh = new Image<Gray, byte>(img.Size);
        // Down/up-sample to suppress noise
        CvInvoke.PyrDown(img, thresh);
        CvInvoke.PyrUp(thresh, thresh);
        Image<Gray, byte> cannyImg = thresh.Canny(10, 50);
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat hierarchy = new Mat();
        CvInvoke.FindContours(
            cannyImg,
            contours,
            hierarchy,
            RetrType.External,
            ChainApproxMethod.ChainApproxSimple
        );
        Image<Bgr, byte> resultImage = img.Copy().Convert<Bgr, byte>();
        int contCount = contours.Size;
        for (int i = 0; i < contCount; i++)
        {
            using (VectorOfPoint contour = contours[i])
            {
                resultImage.Draw(CvInvoke.BoundingRectangle(contour), new Bgr(255, 0, 0), 5);
            }
        }
        captureBox.Image = original.Bitmap;
        cvBox.Image = resultImage.Bitmap;
    }
}
Example of result image:
So it works as I expect almost all the time, but the framerate is very low. I'm getting around 10-15 fps at a resolution of 640x480. I need to be able to do the same thing at 1920x1080 with at least 30 fps. It's my first time with OpenCV and Emgu.CV. What can I do to make it perform better?
I solved this by just setting a threshold so that the image becomes black and white only. By adjusting the threshold I achieved the same results, if not better in terms of clarity, and performance improved drastically since there is no heavy processing going on.
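As a rough sketch of that approach in EmguCV (the threshold value of 200 is my own assumption and needs tuning for your camera and laser brightness):

```csharp
// Sketch only: binarize the frame so bright laser dots become white, everything else black
Image<Gray, byte> gray = frame.ToImage<Gray, byte>();
// Pixels above the threshold become 255; tune the value (200 here) for your setup
Image<Gray, byte> binary = gray.ThresholdBinary(new Gray(200), new Gray(255));
cvBox.Image = binary.Bitmap;
```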
Here is a snippet using the AForge.NET library instead of EmguCV:
var bitmap = eventArgs.Frame;
var filter = new Grayscale(0.2125, 0.7154, 0.0721);
var grayImage = filter.Apply(bitmap);
var thresholdFilter = new Threshold(CurrentThreshold);
thresholdFilter.ApplyInPlace(grayImage);
var blobCounter = new BlobCounter();
blobCounter.ProcessImage(grayImage);
var rectangles = blobCounter.GetObjectsRectangles();
I have a 12 bit gray-scale camera and I want to use EMGU to process the image.
My problem is that I want to process the image with "UInt16" TDepth and not the usual "Byte".
So initially I create an empty 2D image:
Image<Gray, UInt16> OnImage = new Image<Gray, UInt16>(960, 1280);
then I create a for loop to transfer my Image from 1D vector form to a 2D image:
for (int i = 0; i < 960; i++)
{
    for (int j = 0; j < 1280; j++)
    {
        OnImage[i, j] = MyImageVector[Counter];
        Counter++;
    }
}
where:
int[] MyImageVector = new int[1228800];
The problem is at the line :
OnImage[i, j] = MyImageVector[Counter];
where I get the following error message:
Cannot implicitly convert type "int" to "Emgu.CV.Structure.Gray"
Why is this happening?
Do you know any way I can store int values in an Emgu Image object?
Any alternative workaround would also be helpful.
Thank you
I found an alternative solution to pass a 1D vector into a 2D Emgu Image:
Image<Gray, Single> _myImage = new Image<Gray, Single>(Width, Height);
// Note: MyVector's element type must be Single here, since BlockCopy copies raw bytes
Buffer.BlockCopy(MyVector, 0, _myImage.Data, 0, MyVector.Length * sizeof(Single));
This works much faster than two for loops...
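For the original UInt16 case the same trick should apply, provided the vector's element type matches the image depth so the raw bytes line up (a sketch under that assumption):

```csharp
// Sketch only: copy a ushort[] straight into an Image<Gray, UInt16>'s Data buffer
ushort[] myImageVector = new ushort[1280 * 960];
// Note the constructor takes (width, height)
Image<Gray, UInt16> onImage = new Image<Gray, UInt16>(1280, 960);
Buffer.BlockCopy(myImageVector, 0, onImage.Data, 0, myImageVector.Length * sizeof(ushort));
```

Alternatively, the original loop compiles if the value is wrapped in the color structure, e.g. OnImage[i, j] = new Gray(MyImageVector[Counter]);, but the block copy avoids the per-pixel overhead.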
I would like to do motion detection in C# (using EmguCV 3.0) to remove objects in motion or in the foreground, in order to draw an overlay.
Here is a sample test I did with a Kinect (because it's a depth camera).
How can I get started with EmguCV 3.0?
I tried many background-removal samples that do not work.
OpticalFlow seems like a good start, but there is no example for EmguCV 3.0.
If I find the largest blob, how can I find its contours?
Can someone help me get started?
EDIT: 17/06/2015
In EmguCV 3.0.0 RC I don't see OpticalFlow in the package or the documentation:
http://www.emgu.com/wiki/files/3.0.0-rc1/document/html/b72c032d-59ae-c36f-5e00-12f8d621dfb8.htm
There is only DenseOpticalFlow and OpticalFlowDualTVL1. Is that it?
This is the AbsDiff code:
var grayFrame = frame.Convert<Gray, Byte>();
var motionFrame = grayFrame.AbsDiff(backFrame)
.ThresholdBinary(new Gray(20), new Gray(255))
.Erode(2)
.Dilate(2);
Result:
How do I get the motion in white?
This is the blob code:
Image<Bgr, Byte> smoothedFrame = new Image<Bgr, byte>(frame.Size);
CvInvoke.GaussianBlur(frame, smoothedFrame, new Size(3, 3), 1); // filter out noise
Mat foregroundMask = new Mat();
fgDetector.Apply(smoothedFrame, foregroundMask);
CvBlobs blobs = new CvBlobs();
blobDetector.Detect(foregroundMask.ToImage<Gray, byte>(), blobs);
blobs.FilterByArea(400, int.MaxValue);
blobTracker.Update(blobs, 1.0, 0, 1);
foreach (var pair in blobs)
{
    CvBlob b = pair.Value;
    CvInvoke.Rectangle(frame, b.BoundingBox, new MCvScalar(255.0, 255.0, 255.0), 2);
}
Result:
Why are there so many false positives?
This is the MOG2 code:
foregroundDetector.Apply(frame, foregroundMask);
motionHistory.Update(foregroundMask);
var motionMask = GetMotionMask();
Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
CvInvoke.InsertChannel(motionMask, motionImage, 0);
Rectangle[] rects;
using (VectorOfRect boundingRect = new VectorOfRect())
{
    motionHistory.GetMotionComponents(segMask, boundingRect);
    rects = boundingRect.ToArray();
}
foreach (Rectangle comp in rects) { ...
Result:
If I select the biggest area, how can I get the contour of the object?
First, I can give you some example Optical Flow code.
Let oldImage and newImage be variables that hold the previous and current frame. In my code, they are of type Image<Gray, Byte>.
// Prep containers for the x and y velocity vectors
Image<Gray, float> velx = new Image<Gray, float>(newImage.Size);
Image<Gray, float> vely = new Image<Gray, float>(newImage.Size);
// Use the Horn and Schunck dense optical flow algorithm
OpticalFlow.HS(oldImage, newImage, true, velx, vely, 0.1d, new MCvTermCriteria(100));
// Color each pixel
Image<Hsv, Byte> coloredMotion = new Image<Hsv, Byte>(newImage.Size);
for (int i = 0; i < coloredMotion.Width; i++)
{
    for (int j = 0; j < coloredMotion.Height; j++)
    {
        // Pull the relevant intensities from the velx and vely matrices
        double velxHere = velx[j, i].Intensity;
        double velyHere = vely[j, i].Intensity;
        // Determine the color (i.e., the angle)
        double degrees = Math.Atan(velyHere / velxHere) / Math.PI * 90 + 45;
        if (velxHere < 0)
        {
            degrees += 90;
        }
        coloredMotion.Data[j, i, 0] = (Byte)degrees;
        coloredMotion.Data[j, i, 1] = 255;
        // Determine the intensity (i.e., the distance)
        double intensity = Math.Sqrt(velxHere * velxHere + velyHere * velyHere) * 10;
        coloredMotion.Data[j, i, 2] = (Byte)(intensity > 255 ? 255 : intensity);
    }
}
// coloredMotion is now an image that shows intensity of motion by lightness
// and direction by color.
// coloredMotion is now an image that shows intensity of motion by lightness
// and direction by color.
Regarding the larger question of how to remove the foreground:
If you can get a static background image, that's the best way to start. Then the foreground can be detected with the AbsDiff method, smoothed using Erode and Dilate or a Gaussian blur, and passed to blob detection.
For simple foreground detection, I found Optical Flow to be far too much processing (8 fps max), whereas the AbsDiff method was just as accurate and had no effect on the framerate.
Regarding contours: if you're merely looking for size, position, and other moments, the blob detection in the AbsDiff approach above, which uses Image.FindContours(...), seems sufficient.
If not, I would start looking at the CvBlobDetector class as used in this tutorial. There's a built-in DrawBlob function that might come in handy.
I'm implementing a skin detection method in C# using EmguCV, following this article. I'm new to EmguCV. I just want to know how to get or set every pixel value of the image captured via webcam: if a pixel matches skin it should become white, otherwise black. I just want the RGB value of each pixel without degrading the application's performance.
To get or set every pixel value of an image, you can easily do the following:
Image<Bgr, Byte> img = ....
for (int i = 0; i < img.Height; i++)
{
    for (int k = 0; k < img.Width; k++)
    {
        // Get: the indexer returns a Bgr structure (Blue, Green, Red)
        Bgr c = img[i, k];
        // Set: the parameterless constructor produces black (0, 0, 0)
        img[i, k] = new Bgr();
    }
}
The writes happen in place.
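If the indexer turns out to be too slow for real-time skin detection, the managed Data array (mentioned in the Mat discussion above) avoids the per-pixel structure allocation. A sketch; the skin rule here is a placeholder you would replace with the test from the article:

```csharp
// Sketch only: iterate the managed Data array directly, indexed [row, column, channel]
byte[,,] data = img.Data;
for (int y = 0; y < img.Height; y++)
{
    for (int x = 0; x < img.Width; x++)
    {
        byte b = data[y, x, 0];
        byte g = data[y, x, 1];
        byte r = data[y, x, 2];
        // Placeholder skin rule (a common RGB heuristic); substitute the article's test
        bool isSkin = r > 95 && g > 40 && b > 20 && r > g && r > b;
        byte v = isSkin ? (byte)255 : (byte)0;
        data[y, x, 0] = v;
        data[y, x, 1] = v;
        data[y, x, 2] = v;
    }
}
```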