How to make PerspectiveTransform work? - c#

I just want to reproduce the result as posted here.
I rewrote the source in EmguCV form:
Image<Bgr, byte> image = new Image<Bgr, byte>(@"B:\perspective.png");
CvInvoke.cvShowImage("Hello World!", image);
float[,] scrp = { { 43, 18 }, { 280,40}, {19,223 }, { 304,200} };
float[,] dstp = { { 0,0}, { 320,0}, { 0,240 }, { 320,240 } };
float[,] homog = new float[3, 3];
Matrix<float> c1 = new Matrix<float>(scrp);
Matrix<float> c2 = new Matrix<float>(dstp);
Matrix<float> homogm = new Matrix<float>(homog);
CvInvoke.cvFindHomography(c1.Ptr, c2.Ptr, homogm.Ptr, Emgu.CV.CvEnum.HOMOGRAPHY_METHOD.DEFAULT, 0, IntPtr.Zero);
CvInvoke.cvGetPerspectiveTransform(c1, c2, homogm);
Image<Bgr, byte> newImage = image.WarpPerspective(homogm, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC, Emgu.CV.CvEnum.WARP.CV_WARP_DEFAULT, new Bgr(0, 0, 0));
CvInvoke.cvShowImage("newImage", newImage);
This is the testing image.
The newImage is always a blank image.
Can anyone help me make my source code work?

Fortunately, I found the answer by myself.
I had rewritten the source in the wrong form: I should use PointF[] arrays, and there is a CameraCalibration.GetPerspectiveTransform function to use.
PointF[] srcs = new PointF[4];
srcs[0] = new PointF(253, 211);
srcs[1] = new PointF(563, 211);
srcs[2] = new PointF(563, 519);
srcs[3] = new PointF(253, 519);
PointF[] dsts = new PointF[4];
dsts[0] = new PointF(234, 197);
dsts[1] = new PointF(520, 169);
dsts[2] = new PointF(715, 483);
dsts[3] = new PointF(81, 472);
HomographyMatrix mywarpmat = CameraCalibration.GetPerspectiveTransform(srcs, dsts);
Image<Bgr, byte> newImage = image.WarpPerspective(mywarpmat, Emgu.CV.CvEnum.INTER.CV_INTER_NN, Emgu.CV.CvEnum.WARP.CV_WARP_FILL_OUTLIERS, new Bgr(0, 0, 0));

Here is an extension method for EmguCV version 3 (in which many of the older functions are deprecated):
public static Image<TColor, TDepth> GetAxisAlignedImagePart<TColor, TDepth>(
    this Image<TColor, TDepth> input,
    Quadrilateral rectSrc,
    Quadrilateral rectDst,
    Size targetImageSize)
    where TColor : struct, IColor
    where TDepth : new()
{
    var src = new[] { rectSrc.P0, rectSrc.P1, rectSrc.P2, rectSrc.P3 };
    var dst = new[] { rectDst.P0, rectDst.P1, rectDst.P2, rectDst.P3 };
    using (var matrix = CvInvoke.GetPerspectiveTransform(src, dst))
    using (var cutImagePortion = new Mat())
    {
        CvInvoke.WarpPerspective(input, cutImagePortion, matrix, targetImageSize, Inter.Cubic);
        return cutImagePortion.ToImage<TColor, TDepth>();
    }
}
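Quadrilateral is not a built-in EmguCV type, so here is a minimal sketch of the shape the extension method expects, together with a hypothetical call site (the Quadrilateral definition and the corner order are assumptions, not part of the original answer):
// Minimal sketch of the Quadrilateral type assumed by the extension method above.
public struct Quadrilateral
{
    public PointF P0, P1, P2, P3;
    public Quadrilateral(PointF p0, PointF p1, PointF p2, PointF p3)
    {
        P0 = p0; P1 = p1; P2 = p2; P3 = p3;
    }
}
// Hypothetical usage: warp the skewed region from the question into a 320x240 image.
var src = new Quadrilateral(new PointF(43, 18), new PointF(280, 40), new PointF(304, 200), new PointF(19, 223));
var dst = new Quadrilateral(new PointF(0, 0), new PointF(320, 0), new PointF(320, 240), new PointF(0, 240));
Image<Bgr, byte> straightened = image.GetAxisAlignedImagePart(src, dst, new Size(320, 240));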

Related

How to use Facemarker in EMGUCV?

I'm trying to follow this OpenCV tutorial, but I have not managed to create the FaceInvoke.FaceDetectNative function. I tried to use this function instead, but the application stops working:
static bool MyDetector(IntPtr input, IntPtr output)
{
    CascadeClassifier faceDetector = new CascadeClassifier(@"..\..\Resource\EMGUCV\haarcascade_frontalface_default.xml");
    Image<Gray, byte> grayImage = (new Image<Bgr, byte>(CvInvoke.cvGetSize(input))).Convert<Gray, byte>();
    grayImage._EqualizeHist();
    Rectangle[] faces = faceDetector.DetectMultiScale(grayImage, 1.1, 10, Size.Empty);
    VectorOfRect rects = new VectorOfRect(faces);
    CvInvoke.cvCopy(rects.Ptr, output, IntPtr.Zero);
    return true;
}
On the other hand, I tried calling the GetFaces method passing a Mat object (new Mat()) as the IOutputArray, which also did not work (crash error).
FacemarkLBFParams fParams = new FacemarkLBFParams();
fParams.ModelFile = @"..\..\Resource\EMGUCV\facemarkmodel.yaml";
FacemarkLBF facemark = new FacemarkLBF(fParams);
facemark.SetFaceDetector(MyDetector);
VectorOfRect result = new VectorOfRect();
Image<Bgr, Byte> image = new Image<Bgr, byte>(@"C:\Users\matias\Documents\Proyectos\100-20.bmp");
bool success = facemark.GetFaces(image, result);
Rectangle[] faces = result.ToArray();
Thanks!
After several hours I have managed to detect the points of a face. For that, use the Fit method, which receives the image, the faces (as a VectorOfRect), and a VectorOfVectorOfPointF for the output:
public Image<Bgr, Byte> GetFacePoints()
{
    CascadeClassifier faceDetector = new CascadeClassifier(@"..\..\Resource\EMGUCV\haarcascade_frontalface_default.xml");
    FacemarkLBFParams fParams = new FacemarkLBFParams();
    fParams.ModelFile = @"..\..\Resource\EMGUCV\lbfmodel.yaml";
    fParams.NLandmarks = 68; // number of landmark points
    fParams.InitShapeN = 10; // multiplier used for data augmentation
    fParams.StagesN = 5; // number of refinement stages
    fParams.TreeN = 6; // number of trees in the model for each landmark point
    fParams.TreeDepth = 5; // the depth of each decision tree
    FacemarkLBF facemark = new FacemarkLBF(fParams);
    //facemark.SetFaceDetector(MyDetector);
    Image<Bgr, Byte> image = new Image<Bgr, byte>(@"C:\Users\matias\Downloads\personas-buena-vibra-caracteristicas-1200x600.jpg");
    Image<Gray, byte> grayImage = image.Convert<Gray, byte>();
    grayImage._EqualizeHist();
    VectorOfRect faces = new VectorOfRect(faceDetector.DetectMultiScale(grayImage));
    VectorOfVectorOfPointF landmarks = new VectorOfVectorOfPointF();
    facemark.LoadModel(fParams.ModelFile);
    bool success = facemark.Fit(grayImage, faces, landmarks);
    if (success)
    {
        Rectangle[] facesRect = faces.ToArray();
        for (int i = 0; i < facesRect.Length; i++)
        {
            image.Draw(facesRect[i], new Bgr(Color.Blue), 2);
            FaceInvoke.DrawFacemarks(image, landmarks[i], new Bgr(Color.Blue).MCvScalar);
        }
        return image;
    }
    return null;
}
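For reference, a minimal sketch of a call site (assuming this method lives on a WinForms form with a pictureBox1; that context is an assumption):
// Hypothetical call site for the method above.
Image<Bgr, Byte> landmarked = GetFacePoints();
if (landmarked != null)
    pictureBox1.Image = landmarked.ToBitmap();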
Now all that remains is to optimize the code and continue with my project.

EmguCV SURF - Determine matched pairs of points

I'm currently modifying EmguCV's (ver 3.0.0.2157) SurfFeature example (seen here).
I'm trying to determine the number of matched pairs of points in order to calculate a percentage of similarity between the input images.
From what I understand, this information is stored in the mask variable, but I don't know how to access it.
(This question has been asked before here, but the example source code being referenced uses an older version of EmguCV.)
Thanks in advance!
Here is my current version of the Draw method; the variable p below is my attempt to count the matches:
public static Image<Bgr, Byte> Draw(Image<Gray, Byte> modelImage, Image<Gray, byte> observedImage, out long matchTime, out int nonofZeroCount)
{
    int returnValue = 0;
    Stopwatch watch;
    HomographyMatrix homography = null;
    SURFDetector surfCPU = new SURFDetector(500, false);
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;
    Matrix<byte> mask;
    int k = 2;
    double uniquenessThreshold = 0.8;
    if (GpuInvoke.HasCuda)
    {
        GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
        //extract features from the object image
        using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(modelImage))
        using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
        using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
        using (GpuBruteForceMatcher<float> matcher = new GpuBruteForceMatcher<float>(DistanceType.L2))
        {
            modelKeyPoints = new VectorOfKeyPoint();
            surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
            watch = Stopwatch.StartNew();
            // extract features from the observed image
            using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(observedImage))
            using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
            using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
            using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, k, 1, true))
            using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuObservedDescriptors.Size.Height, k, 1, true))
            using (GpuMat<Byte> gpuMask = new GpuMat<byte>(gpuMatchIndices.Size.Height, 1, 1))
            using (Stream stream = new Stream())
            {
                matcher.KnnMatchSingle(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, k, null, stream);
                indices = new Matrix<int>(gpuMatchIndices.Size);
                mask = new Matrix<byte>(gpuMask.Size);
                //gpu implementation of VoteForUniqueness
                using (GpuMat<float> col0 = gpuMatchDist.Col(0))
                using (GpuMat<float> col1 = gpuMatchDist.Col(1))
                {
                    GpuInvoke.Multiply(col1, new MCvScalar(uniquenessThreshold), col1, stream);
                    GpuInvoke.Compare(col0, col1, gpuMask, CMP_TYPE.CV_CMP_LE, stream);
                }
                observedKeyPoints = new VectorOfKeyPoint();
                surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);
                //wait for the stream to complete its tasks
                //we could perform some other CPU-intensive work here while waiting for the stream to complete
                stream.WaitForCompletion();
                gpuMask.Download(mask);
                gpuMatchIndices.Download(indices);
                if (GpuInvoke.CountNonZero(gpuMask) >= 4)
                {
                    int nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                    if (nonZeroCount >= 4)
                        homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
                    returnValue = nonZeroCount;
                }
                watch.Stop();
            }
        }
    }
    else
    {
        //extract features from the object image
        modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
        Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);
        watch = Stopwatch.StartNew();
        // extract features from the observed image
        observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
        Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);
        BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
        matcher.Add(modelDescriptors);
        indices = new Matrix<int>(observedDescriptors.Rows, k);
        using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
        {
            matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
            mask = new Matrix<byte>(dist.Rows, 1);
            mask.SetValue(255);
            Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
        }
        int nonZeroCount = CvInvoke.cvCountNonZero(mask);
        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
        }
        returnValue = nonZeroCount;
        watch.Stop();
    }
    // mask entries are 255 for kept matches and 0 otherwise, so count the non-zero values
    int p = mask.ManagedArray.OfType<byte>().Count(q => q > 0);
    //Draw the matched keypoints
    Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox.KeypointDrawType.DEFAULT);
    #region draw the projected region on the image
    if (homography != null && p > 20)
    {
        //draw a rectangle along the projected model
        Rectangle rect = modelImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);
        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
    }
    #endregion
    matchTime = watch.ElapsedMilliseconds;
    nonofZeroCount = returnValue;
    return result;
}
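To get the similarity percentage the question asks about, one option (a sketch, not part of the original example; the choice of denominator is an assumption) is to relate the out parameter to the number of keypoints detected in the observed image:
// Sketch: similarity as the share of observed keypoints that survived the matching votes.
SURFDetector surf = new SURFDetector(500, false);
VectorOfKeyPoint observedKp = surf.DetectKeyPointsRaw(observedImage, null);
long matchTime;
int matchedPairs;
Image<Bgr, Byte> drawn = Draw(modelImage, observedImage, out matchTime, out matchedPairs);
double similarityPercent = 100.0 * matchedPairs / Math.Max(1, observedKp.Size);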

EmguCV HaarCascade issue

I have developed a working C# face recognition program using EmguCV.
However, if I load "haarcascade_fullbody.xml" instead of "haarcascade_frontalface_alt_tree.xml" I get the almighty Access Violation.
This is the code:
public Bitmap detection(Bitmap Source)
{
    List<Image<Gray, byte>> TrainedImages = this.TrainedImages;
    List<String> Names = this.Names;
    Image<Bgr, byte> ImageFrame = new Image<Bgr, byte>(Source);
    Image<Gray, byte> grayFrame = ImageFrame.Convert<Gray, byte>();
    Image<Bgr, byte> overlay = new Image<Bgr, byte>(Source.Width, Source.Height);
    Graphics FaceCanvas;
    List<String> finimg = new List<String>();
    //HaarCascade haar = new HaarCascade("haarcascade_frontalface_alt_tree.xml");
    HaarCascade haar = new HaarCascade("haarcascade_fullbody.xml");
    var faces = grayFrame.DetectHaarCascade(haar, 1.1, 3, HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new System.Drawing.Size(25, 25))[0];
    foreach (var face in faces)
    {
        overlay.Draw(face.rect, new Bgr(System.Drawing.Color.Green), 3);
        tempbmp = new Bitmap(100, 100);
        FaceCanvas = Graphics.FromImage(tempbmp);
        FaceCanvas.DrawImage(grayFrame.ToBitmap(), 0, 0, face.rect, GraphicsUnit.Pixel);
        detected.Add(tempbmp);
        if (doit)
        {
            saveBitmap(tempbmp, trainpath, trainnamer.Text);
            doit = false;
        }
        if (doit10)
        {
            for (int k = 1; k <= 10; k++)
                saveBitmap(tempbmp, trainpath, trainnamer.Text);
            doit10 = false;
        }
        try
        {
            MCvTermCriteria termCrit = new MCvTermCriteria(TrainedImages.ToArray().Length, 0.001);//????????????
            EigenObjectRecognizer recognizer = new EigenObjectRecognizer(TrainedImages.ToArray(), Names.ToArray(), 2500, ref termCrit);
            MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);
            String name = recognizer.Recognize(new Image<Gray, byte>(tempbmp));
            if (Names.Contains(name) == false)
                name = "Stranger";
            else
                name = removeformat(name);
            overlay.Draw(name, ref font, new System.Drawing.Point(face.rect.Left, face.rect.Top - 5), new Bgr(System.Drawing.Color.Green));
            finimg.Add(name);
        }
        catch (IndexOutOfRangeException)
        {
            MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);
            ImageFrame.Draw("Stranger", ref font, new System.Drawing.Point(face.rect.Left, face.rect.Top - 5), new Bgr(color));
            continue;
        }
    }
    detected.Clear();
    Bitmap supra = overlay.ToBitmap();
    supra.MakeTransparent(System.Drawing.Color.Black);
    return supra;
}
Apparently there is a problem with the xml, as any other haarcascade I try to load loads successfully.
I recommend using the HOGDescriptor instead, or "haarcascade_mcs_upperbody.xml", for pedestrian detection.
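A minimal sketch of the HOGDescriptor route (EmguCV 2.x-style API to match the code above; treat the exact overloads as assumptions for your EmguCV version):
// Sketch: pedestrian detection via HOG with the built-in people detector,
// reusing the ImageFrame and overlay variables from the detection method above.
HOGDescriptor hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
Rectangle[] people = hog.DetectMultiScale(ImageFrame);
foreach (Rectangle person in people)
    overlay.Draw(person, new Bgr(System.Drawing.Color.Green), 3);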

SharpDX - DepthStencil not working when using MRT (Multiple Render Targets)

I've been trying to port a renderer I wrote from SlimDX to SharpDX and ran into a problem. I want to render to multiple render targets (in this case, color and an object ID for picking).
This is the initialization of the render targets (all with the same dimensions and multisample settings):
//Swapchain, Device, Primary Rendertarget
var description = new SwapChainDescription()
{
    BufferCount = 1,
    Usage = Usage.RenderTargetOutput,
    OutputHandle = Form.Handle,
    IsWindowed = true,
    ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
    SampleDescription = new SampleDescription(1, 0),
    Flags = SwapChainFlags.AllowModeSwitch,
    SwapEffect = SwapEffect.Discard
};
this.Device = new Device(adapter);
this.SwapChain = new SwapChain(factory, Device, description);
this.backBuffer = SharpDX.Direct3D11.Texture2D.FromSwapChain<SharpDX.Direct3D11.Texture2D>(SwapChain, 0);
this.RenderTargetView = new RenderTargetView(Device, backBuffer);

//Depthbuffer
Texture2DDescription descDepth = new Texture2DDescription();
descDepth.Width = (int)Viewport.Width;
descDepth.Height = (int)Viewport.Height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = Format.D32_Float;
descDepth.Usage = ResourceUsage.Default;
descDepth.SampleDescription = new SampleDescription(1, 0);
descDepth.BindFlags = BindFlags.DepthStencil;
descDepth.CpuAccessFlags = 0;
descDepth.OptionFlags = 0;
using (Texture2D depthStencil = new Texture2D(Device, descDepth))
{
    depthView = new DepthStencilView(Device, depthStencil);
}

//Rendertargetview for the ID
Texture2DDescription IdMapDesc = new Texture2DDescription();
IdMapDesc.Width = (int)Viewport.Width;
IdMapDesc.Height = (int)Viewport.Height;
IdMapDesc.ArraySize = 1;
IdMapDesc.MipLevels = 1;
IdMapDesc.Format = Format.R16_UInt;
IdMapDesc.Usage = ResourceUsage.Default;
IdMapDesc.SampleDescription = new SampleDescription(1, 0);
IdMapDesc.BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget;
IdMapDesc.CpuAccessFlags = 0;
IdMapDesc.OptionFlags = 0;
using (Texture2D idMap = new Texture2D(Device, IdMapDesc))
{
    idView = new RenderTargetView(Device, idMap);
}
This is how I do the rendering:
public override void Render()
{
    Context.ClearDepthStencilView(depthView, DepthStencilClearFlags.Depth, 1f, 0);
    Context.OutputMerger.SetTargets(depthView, RenderTargetView);
    staticMeshRenderer.UpdateCameraConstants();
    foreach (TerrainSegment segment in terrain.SegmentMap)
    {
        terrainRenderer.Draw(segment, null);
    }
    objectManager.DrawContent(Device);
}
producing this output (I can't post images; it's a scene with a working depth stencil).
However, when using multiple render targets like this:
Context.OutputMerger.SetTargets(depthView, RenderTargetView, idView);
the depth stencil stops doing its job.
This is the HLSL code used for both attempts:
struct PS_Output
{
    float4 Color : SV_TARGET0;
    uint ID : SV_TARGET1;
};

PS_Output PShader(VS_OutputStatic input)
{
    PS_Output output;
    output.ID = 3; //test
    output.Color = Diffuse.Sample(StateLinear, input.TexCoords).rgba;
    return output;
}
What am I doing wrong here?
Thanks in advance!
Debugging with RenderDoc showed that my buffer dimensions were NOT all the same. Fixing that solved the problem.
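For reference, one way to rule out this class of bug is to derive every buffer's size and sample settings from the back buffer itself rather than from Viewport (a sketch reusing the descriptor variables above):
// Sketch: size the depth and ID buffers from the back buffer description so
// all render targets share identical dimensions and multisampling.
var bb = backBuffer.Description;
descDepth.Width = bb.Width;
descDepth.Height = bb.Height;
descDepth.SampleDescription = bb.SampleDescription;
IdMapDesc.Width = bb.Width;
IdMapDesc.Height = bb.Height;
IdMapDesc.SampleDescription = bb.SampleDescription;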

Eye and Mouth detection from face using haar-cascades

I have extracted the eyes and mouth from the face, but I want to extract emotions from the eyes and mouth. However, the mouth is not detected properly.
This is my code:
private void timer1_Tick(object sender, EventArgs e)
{
    using (Image<Bgr, byte> nextFrame = cap.QueryFrame())
    {
        if (nextFrame != null)
        {
            // there's only one channel (greyscale), hence the zero index
            //var faces = nextFrame.DetectHaarCascade(haar)[0];
            Image<Gray, byte> grayframe = nextFrame.Convert<Gray, byte>();
            Image<Gray, Byte> gray = nextFrame.Convert<Gray, Byte>();
            Image<Gray, Byte> gray1 = nextFrame.Convert<Gray, Byte>();
            var faces = grayframe.DetectHaarCascade(
                haar, 1.4, 4,
                HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                new Size(nextFrame.Width / 8, nextFrame.Height / 8)
                )[0];
            MCvAvgComp[][] eyes = gray.DetectHaarCascade(eye, 1.1, 1, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));
            gray.ROI = Rectangle.Empty;
            MCvAvgComp[][] mouthsDetected = gray.DetectHaarCascade(mouth, 1.1, 10, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));
            gray1.ROI = Rectangle.Empty;
            foreach (MCvAvgComp mouthsnap in mouthsDetected[0])
            {
                Rectangle mouthRect = mouthsnap.rect;
                // mouthRect.Offset(f.rect.X, f.rect.Y);
                nextFrame.Draw(mouthRect, new Bgr(Color.Red), 2);
                detectedmouth = mouthRect;
            }
            foreach (MCvAvgComp eyesnap in eyes[0])
            {
                Rectangle eyeRect = eyesnap.rect;
                // mouthRect.Offset(f.rect.X, f.rect.Y);
                nextFrame.Draw(eyeRect, new Bgr(Color.Green), 2);
            }
            foreach (var face in faces)
            {
                nextFrame.Draw(face.rect, new Bgr(Color.LightGreen), 3);
                facesnap = face.rect;
            }
            pictureBox1.Image = nextFrame.ToBitmap();
        }
    }
}

private void Form1_Load(object sender, EventArgs e)
{
    cap = new Capture(0);
    // adjust path to find your xml
    //haar = new HaarCascade("haarcascade_frontalface_alt2.xml");
    haar = new HaarCascade("haarcascade_frontalface_alt_tree.xml");
    mouth = new HaarCascade("Mouth.xml");
    eye = new HaarCascade("haarcascade_eye_tree_eyeglasses.xml");
}

private void button1_Click(object sender, EventArgs e)
{
    Image snap = pictureBox1.Image;
    snap.Save("c:\\snapshot.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
    pictureBox2.Image = snap;
    pictureBox3.Image = cropImage(snap, facesnap);
    pictureBox4.Image = cropImage(snap, detectedmouth);
}

private static Image cropImage(Image img, Rectangle croparea)
{
    Bitmap bmpImage = new Bitmap(img);
    Bitmap bmpCrop = bmpImage.Clone(croparea, bmpImage.PixelFormat);
    return (Image)bmpCrop;
}
Please help me with emotion detection and better mouth detection using C#.
I would try to look for a mouth in the face rectangle, instead of checking the whole picture:
var faces = grayframe.DetectHaarCascade(
    haar, 1.4, 4,
    HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
    new Size(nextFrame.Width / 8, nextFrame.Height / 8)
    )[0];
foreach (var f in faces)
{
    //draw the face detected in the 0th (gray) channel with blue color
    image.Draw(f.rect, new Bgr(Color.Blue), 2);
    //Set the region of interest on the faces
    gray.ROI = f.rect;
    var mouthsDetected = gray.DetectHaarCascade(mouth,
        1.1, 10,
        Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
        new Size(20, 20));
    gray.ROI = Rectangle.Empty;
    foreach (var m in mouthsDetected[0])
    {
        Rectangle mouthRect = m.rect;
        mouthRect.Offset(f.rect.X, f.rect.Y);
        image.Draw(mouthRect, new Bgr(Color.Red), 2);
    }
}
I have divided the face area into two rectangles, top and bottom, and applied the bottom rectangle to gray.ROI, and it works. This is the code for both rectangles:
int halfheight = facesnap.Height / 2;
int start = facesnap.X;
int start1 = facesnap.Y;
Rectangle top = new Rectangle(start, start1, facesnap.Width, halfheight);
int start2 = top.Bottom;
Rectangle bottom = new Rectangle(start, start2, facesnap.Width, halfheight);
nextFrame.Draw(bottom, new Bgr(Color.Yellow), 2);
//Set the region of interest on the faces
gray.ROI = bottom;
MCvAvgComp[][] mouthsDetected = gray.DetectHaarCascade(mouth,
    1.1, 10,
    Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
    new Size(20, 20));
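By the same logic, the eye search can be limited to the top rectangle (a sketch following the same pattern; this part is not from the original answer):
// Sketch: run the eye cascade only inside the top half of the face.
gray.ROI = top;
MCvAvgComp[][] eyesDetected = gray.DetectHaarCascade(eye,
    1.1, 1,
    Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
    new Size(20, 20));
gray.ROI = Rectangle.Empty;
foreach (MCvAvgComp eyesnap in eyesDetected[0])
{
    Rectangle eyeRect = eyesnap.rect;
    eyeRect.Offset(top.X, top.Y); // map ROI coordinates back to the full frame
    nextFrame.Draw(eyeRect, new Bgr(Color.Green), 2);
}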
