Eyes' positions in an image of a human face? - C#

How can I get the positions of the eyes in an image of a human face?
For instance, my program searches for eyes, and their positions could then be stored in 2D vectors like:
Vector2 leftEye = new Vector2(56, 50);
I heard about Emgu, but I really don't understand how it works with XMLs...

Here is an example using Emgu 3.4.1. The training-data XML is available on GitHub; you load it into a CascadeClassifier, which can then perform the detection.
using Emgu.CV;
using Emgu.CV.Structure;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net;
public class Program
{
private const string EYE_DETECTION_XML = "https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_eye.xml";
private const string SAMPLE_IMAGE = "https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/Lewis_Hamilton_2016_Malaysia_2.jpg/330px-Lewis_Hamilton_2016_Malaysia_2.jpg";
static void Main()
{
// download sample photo
WebClient client = new WebClient();
Bitmap image = null;
using (MemoryStream ms = new MemoryStream(client.DownloadData(SAMPLE_IMAGE)))
image = new Bitmap(Image.FromStream(ms));
// convert to Emgu image, convert to grayscale and increase brightness/contrast
Emgu.CV.Image<Bgr, byte> emguImage = new Emgu.CV.Image<Bgr, byte>(image);
var grayScaleImage = emguImage.Convert<Gray, byte>();
grayScaleImage._EqualizeHist();
// load eye classifier data
string eye_classifier_local_xml = @"c:\temp\haarcascade_eye.xml";
client.DownloadFile(EYE_DETECTION_XML, eye_classifier_local_xml);
CascadeClassifier eyeClassifier = new CascadeClassifier(eye_classifier_local_xml);
// perform detection which will return rectangles of eye positions
var eyes = eyeClassifier.DetectMultiScale(grayScaleImage, 1.1, 4);
// draw those rectangles on original image
foreach (Rectangle eye in eyes)
emguImage.Draw(eye, new Bgr(255, 0, 0), 3);
// save image and show it
string output_image_location = @"c:\temp\output.png";
emguImage.ToBitmap().Save(output_image_location, ImageFormat.Png);
Process.Start(output_image_location);
}
}
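To get positions like the Vector2 in the question, you can take the center of each rectangle that DetectMultiScale returns. A minimal sketch (Vector2 stands in for whatever 2D vector type you use, e.g. System.Numerics.Vector2; the variable names are illustrative):
using System.Numerics;
// 'eyes' is the Rectangle[] returned by eyeClassifier.DetectMultiScale above
Vector2[] eyeCenters = new Vector2[eyes.Length];
for (int i = 0; i < eyes.Length; i++)
{
    // the center of each bounding box is a reasonable eye position
    eyeCenters[i] = new Vector2(
        eyes[i].X + eyes[i].Width / 2f,
        eyes[i].Y + eyes[i].Height / 2f);
}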

Related

What is a quick way to scale down an image using SharpDX hardware acceleration in C#?

I'm trying to scale down large images (~ 23k x 1k) to be displayed in winforms. The current way I'm scaling the images is taking too long, which is why I want to use the GPU through SharpDX (C#) to improve performance. What would be a good way to do this?
I'm working on a method to scale an image by applying the scale effect (which I don't have access to right now), but I still don't fully understand SharpDX, so I'm wondering if there's a better way to go about this. I modeled my code on this example, but I removed the text overlay, the image saving, and the drawing portion, and I replaced the Gaussian blur with the scaling effect. Since I'm using GDI to do the drawing for simplicity, the image is in the form of a System.Drawing bitmap, so I initialize the encoder with a memory stream that I use to get the output image after the scaling effect is applied. The smaller tests I have done with this method don't seem to make the scaling much quicker, but I haven't been able to put this fully into action yet.
Is there a quicker way to scale down an image using SharpDX, or is something along the lines of my current method the quickest?
Based on what I found on https://csharp.hotexamples.com/examples/SharpDX.WIC/WICStream/-/php-wicstream-class-examples.html, it looks like SharpDX gives about twice the performance of GDI, or better.
Here is test code that works on my Windows 11 computer. It should be enough to get you started, even if you know as little about SharpDX as I do.
var inputPath = @"x:\Temp\1\_Landscape.jpg";
var data = File.ReadAllBytes(inputPath);
var sw = Stopwatch.StartNew();
var iu6 = new ImageUtilities6();
Debug.WriteLine($"Init: {sw.ElapsedMilliseconds}ms total");
for (int i = 0; i < 10; i++)
{
sw.Restart();
var image = iu6.ResizeImage(data, 799, 399);
Debug.WriteLine($"Resize: {sw.ElapsedMilliseconds}ms total");
File.WriteAllBytes(@"X:\TEMP\1\007-xxx.jpg", image);
}
sw.Restart();
iu6.Dispose();
Debug.WriteLine($"Dispose: {sw.ElapsedMilliseconds}ms total");
Here is the class I made, based on the samples found on that page.
using SharpDX;
using dw = SharpDX.DirectWrite;
using d2 = SharpDX.Direct2D1;
using d3d = SharpDX.Direct3D11;
using dxgi = SharpDX.DXGI;
using wic = SharpDX.WIC;
using System;
using System.IO;
using SharpDX.Direct3D11;
using SharpDX.WIC;
using SharpDX.DirectWrite;
namespace SharpDX_ImageResizingTest
{
public class ImageUtilities6 : IDisposable
{
private Device defaultDevice;
private Device1 d3dDevice;
private dxgi.Device dxgiDevice;
private d2.Device d2dDevice;
private ImagingFactory2 imagingFactory;
//private d2.DeviceContext d2dContext;
private Factory dwFactory;
private d2.PixelFormat d2PixelFormat;
public ImageUtilities6()
{
//SharpDX.Configuration.EnableObjectTracking = true; //Turn on memory leak logging
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// INITIALIZATION ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// initialize the D3D device, which will allow rendering any graphics (3D or 2D) to an image
defaultDevice = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware,
d3d.DeviceCreationFlags.VideoSupport
| d3d.DeviceCreationFlags.BgraSupport
| d3d.DeviceCreationFlags.Debug); // take out the Debug flag for better performance
d3dDevice = defaultDevice.QueryInterface<d3d.Device1>(); // get a reference to the Direct3D 11.1 device
dxgiDevice = d3dDevice.QueryInterface<dxgi.Device>(); // get a reference to DXGI device
//var dxgiSurface = d3dDevice.QueryInterface<dxgi.Surface>(); // get a reference to DXGI surface
d2dDevice = new d2.Device(dxgiDevice); // initialize the D2D device
imagingFactory = new wic.ImagingFactory2(); // initialize the WIC factory
dwFactory = new dw.Factory();
// specify a pixel format that is supported by both D2D and WIC
d2PixelFormat = new d2.PixelFormat(dxgi.Format.R8G8B8A8_UNorm, d2.AlphaMode.Premultiplied);
// if an R-G-B-A format was specified in D2D, use the same for WIC
}
public byte[] ResizeImage(byte[] image, int targetWidth, int targetHeight)
{
int dpi = 72; //96? does it even matter
var wicPixelFormat = wic.PixelFormat.Format32bppPRGBA;
// initialize the DeviceContext - it will be the D2D render target and will allow all rendering operations
var d2dContext = new d2.DeviceContext(d2dDevice, d2.DeviceContextOptions.None);
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// IMAGE LOADING ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
var imageStream = new MemoryStream(image);
//var decoder = new wic.PngBitmapDecoder(imagingFactory); // we will load a PNG image
var decoder = new wic.JpegBitmapDecoder(imagingFactory); // we will load a JPG image
var inputStream = new wic.WICStream(imagingFactory, imageStream); // open the image for reading
decoder.Initialize(inputStream, wic.DecodeOptions.CacheOnLoad);
// decode the loaded image to a format that can be consumed by D2D
var formatConverter = new wic.FormatConverter(imagingFactory);
var frame = decoder.GetFrame(0);
formatConverter.Initialize(frame, wicPixelFormat);
// load the base image into a D2D Bitmap
var inputBitmap = d2.Bitmap1.FromWicBitmap(d2dContext, formatConverter, new d2.BitmapProperties1(d2PixelFormat));
// store the image size - output will be of the same size
var inputImageSize = formatConverter.Size;
var pixelWidth = inputImageSize.Width;
var pixelHeight = inputImageSize.Height;
// Calculate correct aspect ratio
double aspectRatio = (double)pixelHeight / (double)pixelWidth;
double targetAspectRatio = (double)targetHeight / (double)targetWidth;
if (targetAspectRatio > aspectRatio)
{
targetHeight = (int)(targetHeight * (aspectRatio / targetAspectRatio));
}
else
{
targetWidth = (int)(targetWidth * (targetAspectRatio / aspectRatio));
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// EFFECT SETUP ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//Effect 1 : BitmapSource - take decoded image data and get a BitmapSource from it
//var bitmapSourceEffect = new d2.Effects.BitmapSource(d2dContext);
//bitmapSourceEffect.WicBitmapSource = formatConverter;
// Effect 2 : GaussianBlur - give the bitmapsource a gaussian blurred effect
//var gaussianBlurEffect = new d2.Effects.GaussianBlur(d2dContext);
//gaussianBlurEffect.SetInput(0, bitmapSourceEffect.Output, true);
//gaussianBlurEffect.StandardDeviation = 5f;
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// RENDER TARGET SETUP ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// create the d2d bitmap description using default flags (from SharpDX samples) and 96 DPI
var d2dBitmapProps = new d2.BitmapProperties1(d2PixelFormat, 96, 96, d2.BitmapOptions.Target | d2.BitmapOptions.CannotDraw);
// the render target
var d2dRenderTarget = new d2.Bitmap1(d2dContext, new Size2(targetWidth, targetHeight), d2dBitmapProps);
d2dContext.Target = d2dRenderTarget; // associate bitmap with the d2d context
d2dContext.BeginDraw();
//d2dContext.DrawImage(bitmapSourceEffect); //Way #1
//d2dContext.DrawImage(gaussianBlurEffect); //Way #2
//d2dContext.DrawBitmap(inputBitmap, 1, d2.InterpolationMode.Linear); //Way #3
d2dContext.DrawBitmap(inputBitmap, new SharpDX.Mathematics.Interop.RawRectangleF(0, 0, targetWidth, targetHeight), 1, d2.InterpolationMode.Linear, null, null); //Way #4 - resizing
d2dContext.EndDraw();
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// IMAGE SAVING ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// delete the output file if it already exists
//if (System.IO.File.Exists(outputPath)) System.IO.File.Delete(outputPath);
// use the appropriate overload to write either to a stream or to a file
var outputStream = new MemoryStream();
var stream = new wic.WICStream(imagingFactory, outputStream);
// select the image encoding format HERE
var encoder = new wic.JpegBitmapEncoder(imagingFactory);
encoder.Initialize(stream);
var bitmapFrameEncode = new wic.BitmapFrameEncode(encoder);
bitmapFrameEncode.Options.ImageQuality = 0.95f;
bitmapFrameEncode.Initialize();
bitmapFrameEncode.SetSize(targetWidth, targetHeight);
bitmapFrameEncode.SetPixelFormat(ref wicPixelFormat);
// this is the trick to write D2D1 bitmap to WIC
var imageEncoder = new wic.ImageEncoder(imagingFactory, d2dDevice);
imageEncoder.WriteFrame(d2dRenderTarget, bitmapFrameEncode, new wic.ImageParameters(d2PixelFormat, dpi, dpi, 0, 0, targetWidth, targetHeight));
bitmapFrameEncode.Commit();
encoder.Commit();
imageEncoder.Dispose();
bitmapFrameEncode.Dispose();
encoder.Dispose();
stream.Dispose();
formatConverter.Dispose();
d2dRenderTarget.Dispose();
inputStream.Dispose();
decoder.Dispose();
inputBitmap.Dispose();
frame.Dispose();
d2dContext.Dispose();
return outputStream.ToArray();
}
public void Dispose()
{
//bitmapSourceEffect.Dispose();
dwFactory.Dispose();
imagingFactory.Dispose();
d2dDevice.Dispose();
dxgiDevice.Dispose();
d3dDevice.Dispose();
defaultDevice.Dispose();
//System.Diagnostics.Debug.WriteLine(SharpDX.Diagnostics.ObjectTracker.ReportActiveObjects()); Log that memory leak
}
public byte[] ResizeImage1(byte[] data, int width, int height)
{
var ms = new MemoryStream(data);
//Image image = Image.FromStream(ms);
System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false, false);
System.Drawing.Bitmap result = new System.Drawing.Bitmap(width, height);
// set the resolutions the same to avoid cropping due to resolution differences
result.SetResolution(image.HorizontalResolution, image.VerticalResolution);
//use a graphics object to draw the resized image into the bitmap
using (System.Drawing.Graphics graphics = System.Drawing.Graphics.FromImage(result))
{
//set the resize quality modes to high quality
graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
//draw the image into the target bitmap
graphics.DrawImage(image, 0, 0, result.Width, result.Height);
}
var stream = new System.IO.MemoryStream();
result.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg); // save the resized bitmap, not the original image
stream.Position = 0;
return stream.ToArray();
}
}
}
The libraries used were SharpDX + SharpDX.Direct2D1 + SharpDX.Direct3D11 + SharpDX.DXGI, version 4.2.
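Since the class owns GPU/device resources, it pays to create one instance, reuse it for many resizes, and dispose it once (which is why the test harness above times Init and Dispose separately). A minimal usage sketch with deterministic cleanup, reusing the paths from the harness:
// create once, reuse for many resizes, dispose when done
using (var iu = new ImageUtilities6())
{
    byte[] input = File.ReadAllBytes(@"x:\Temp\1\_Landscape.jpg");
    byte[] resized = iu.ResizeImage(input, 799, 399);
    File.WriteAllBytes(@"x:\Temp\1\resized.jpg", resized);
}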

Extracting point coordinates (x, y) from a curve in C#

I have a curve that I draw on a PictureBox in C# using the method Graphics.DrawCurve(pen, points, tension).
Is there any way I can extract all the points (x, y coordinates) covered by the curve and save them into an array, a list, or anything else, so I can use them for different things?
My code:
void Curved()
{
Graphics gg = pictureBox1.CreateGraphics();
Pen pp = new Pen(Color.Green, 1);
int i,j;
Point[] pointss = new Point[counter];
for (i = 0; i < counter; i++)
{
pointss[i].X = Convert.ToInt32(arrayx[i]);
pointss[i].Y = Convert.ToInt32(arrayy[i]);
}
gg.DrawCurve(pp, pointss, 1.0F);
}
Many thanks in advance.
If you really want a list of pixel co-ordinates, you can still let GDI+ do the heavy lifting:
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Drawing2D;
namespace so_pointsfromcurve
{
class Program
{
static void Main(string[] args)
{
/* some test data */
var pointss = new Point[]
{
new Point(5,20),
new Point(17,63),
new Point(2,9)
};
/* instead of to the picture box, draw to a path */
using (var path = new GraphicsPath())
{
path.AddCurve(pointss, 1.0F);
/* use a unit matrix to get points per pixel */
using (var mx = new Matrix(1, 0, 0, 1, 0, 0))
{
path.Flatten(mx, 0.1f);
}
/* store points in a list */
var list_of_points = new List<PointF>(path.PathPoints);
/* show them */
int i = 0;
foreach(var point in list_of_points)
{
Debug.WriteLine($"Point #{ ++i }: X={ point.X }, Y={point.Y}");
}
}
}
}
}
This approach draws the spline to a path, then uses the built-in capability of flattening that path to a sufficiently dense set of line segments (in a way most vector drawing programs do, too) and then extracts the path points from the line mesh into a list of PointFs.
The artefacts of GDI+ device rendering (smoothing, anti-aliasing) are lost in this process.
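If you need whole-pixel coordinates rather than the PointF values, here is a small follow-up sketch continuing from the example above (rounding to the nearest pixel is an assumption about what "pixel co-ordinates" means for your use case). Note that Flatten yields the segment endpoints, not every pixel the curve crosses; tighten the flatness argument if you need denser coverage.
// round each flattened point to the nearest integer pixel
var pixels = new List<Point>();
foreach (var p in list_of_points)
{
    var candidate = new Point((int)Math.Round(p.X), (int)Math.Round(p.Y));
    // skip consecutive duplicates produced by rounding
    if (pixels.Count == 0 || pixels[pixels.Count - 1] != candidate)
        pixels.Add(candidate);
}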

Need a way to output a QR Code using DevExpress that is 1 inch in size, in WPF (C#, VS2017)

I am using DevExpress 18.1 on Windows 10 with VS 2017 for a WPF application. Additionally I am using the DevExpress BarCode Class. I am trying to create a QR Code that is 1 inch in size but am unable to do it without using something like Photoshop to shrink the output. I think I must be missing something in the process. Below is the code being used:
using System.Diagnostics;
using System.Drawing;
using System.Text;
using System.Windows;
using DevExpress.BarCodes;
namespace WpfBarcode01
{
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
private void Btn_1_Click(object sender, RoutedEventArgs e)
{
BarCode barCode = new BarCode();
barCode.Symbology = Symbology.QRCode;
barCode.CodeText =
"Alexander Johnathon Stevenson JR;Senior Software Developer;alexanderjohnathonstevensonjr#somesamplewebsite.com;20180709-08:00:00;9993334444;Los Angeles;CA;USA;ABC Company";
barCode.BackColor = Color.White;
barCode.ForeColor = Color.Black;
barCode.RotationAngle = 0;
barCode.CodeBinaryData = Encoding.Default.GetBytes(barCode.CodeText);
barCode.Options.QRCode.Version = QRCodeVersion.Version5;
barCode.Options.QRCode.CompactionMode = QRCodeCompactionMode.Byte;
barCode.Options.QRCode.ErrorLevel = QRCodeErrorLevel.H;
barCode.Options.QRCode.ShowCodeText = false;
barCode.DpiX = 100;
barCode.DpiY = 100;
barCode.AutoSize = false;
barCode.Unit = GraphicsUnit.Millimeter;
barCode.ImageWidth = (float)70;
barCode.ImageHeight = (float)70;
barCode.BarCodeImage.Save("d1.png", System.Drawing.Imaging.ImageFormat.Png);
Process.Start("d1.png");
}
}
}
When this runs, a QR Code is created which a hand-held scanner is able to scan, both on paper and on screen. The problem is that it is about 2.76 inches in size. I want one about 1 inch, so I end up importing the .png file into Photoshop and reducing the image size to 1 inch. This works, as the image then becomes small enough for label or document printing. This workflow seems too time-consuming, though, if someone has to do this for a few hundred QR Codes.
I tried different values for ImageWidth and ImageHeight, as well as different values for DpiX and DpiY, but no luck. I also tried changing the GraphicsUnit to Inches, but that option does not seem to work, as I always get an image of very irregular size. So I ended up using the Millimeter option for GraphicsUnit, on the basis that 1 inch = 25.4 millimeters. If I use an ImageWidth or ImageHeight value lower than 65, the QR Code box gets clipped and becomes invalid for scanning.
Is there something else I can do to make the output 1 inch and still valid? Or perhaps some graphics library call in DevExpress I can use to reduce the .png file to 1 inch, like Photoshop does? Thanks in advance.
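For reference, printed size is pixel size divided by DPI, so a 1-inch code at 200 DPI needs a 200 x 200 pixel image whose metadata reports 200 DPI. A minimal GDI+ sketch of that arithmetic (200 DPI is an arbitrary choice here; any value works as long as pixels and DPI agree):
// printed inches = pixels / DPI: 200 px / 200 DPI = 1.0 inch
Bitmap oneInch = new Bitmap(200, 200);
oneInch.SetResolution(200f, 200f); // embed the DPI so print layout honors the 1-inch size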
=====================================
Update July 9, 2018
Based on PepitoSH's suggested link, I was able to find a solution, which I have added here in the code update. This code produces a 1-inch .png QR Code file, resized from the original 2.76 inches.
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.Text;
using System.Windows;
using DevExpress.BarCodes;
namespace WpfBarcode01
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
private void Btn_1_Click(object sender, RoutedEventArgs e)
{
BarCode barCode = new BarCode();
barCode.Symbology = Symbology.QRCode;
barCode.CodeText =
"Alexander Johnathon Stevenson JR;Senior Software Developer;alexanderjohnathonstevensonjr#somesamplewebsite.com;20180709-08:00:00;9993334444;Los Angeles;CA;USA;ABC Company";
barCode.BackColor = Color.White;
barCode.ForeColor = Color.Black;
barCode.RotationAngle = 0;
barCode.CodeBinaryData = Encoding.Default.GetBytes(barCode.CodeText);
barCode.Options.QRCode.Version = QRCodeVersion.Version5;
barCode.Options.QRCode.CompactionMode = QRCodeCompactionMode.Byte;
barCode.Options.QRCode.ErrorLevel = QRCodeErrorLevel.H;
barCode.Options.QRCode.ShowCodeText = false;
barCode.Dpi = 200;
barCode.AutoSize = false; //needs to be off if specifying unit and widths
barCode.Unit = GraphicsUnit.Millimeter; // Note: 1 inch = 25.4 Millimeters
barCode.ImageWidth = 70F;
barCode.ImageHeight = 70F;
Bitmap bitmap = ResizeImage(barCode.BarCodeImage, 200, 200);
bitmap.Save("QRCode.png");
Process.Start("QRCode.png");
}
public static Bitmap ResizeImage(Image originalImage, int newWidthInPixels, int newHeightInPixels)
{
var destRect = new Rectangle(0, 0, newWidthInPixels, newHeightInPixels);
var destImage = new Bitmap(newWidthInPixels, newHeightInPixels);
destImage.SetResolution(originalImage.HorizontalResolution, originalImage.VerticalResolution);
using (var graphics = Graphics.FromImage(destImage))
{
graphics.CompositingMode = CompositingMode.SourceCopy;
graphics.CompositingQuality = CompositingQuality.HighQuality;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.SmoothingMode = SmoothingMode.HighQuality;
graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
using (var wrapMode = new ImageAttributes())
{
wrapMode.SetWrapMode(WrapMode.TileFlipXY);
graphics.DrawImage(originalImage, destRect, 0, 0, originalImage.Width, originalImage.Height, GraphicsUnit.Pixel, wrapMode);
}
}
return destImage;
}
}
}
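One caveat (an assumption based on how GDI+ resolution metadata behaves): ResizeImage copies the original image's resolution onto the destination bitmap, so the saved 200 x 200 px PNG may not actually report 200 DPI. If the printed size matters, set it explicitly before saving:
bitmap.SetResolution(200f, 200f); // 200 x 200 px at 200 DPI prints as exactly 1 inch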

C# AForge.NET image processing: drawing on an image

I'm using this example:
http://www.aforgenet.com/framework/features/blobs_processing.html
I tried using the last example and showing the output in a picture box after a button click:
using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;
using AForge.Math.Geometry;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Image_Processing_testings
{
public partial class Form1 : Form
{
Bitmap image = null;
public Form1()
{
InitializeComponent();
Bitmap bitmap = new Bitmap("C:\\Users\\user\\Desktop\\test.png");
Bitmap gsImage = Grayscale.CommonAlgorithms.BT709.Apply(bitmap);
DifferenceEdgeDetector filter = new DifferenceEdgeDetector();
image = filter.Apply(gsImage);
// process image with blob counter
BlobCounter blobCounter = new BlobCounter();
blobCounter.ProcessImage(image);
Blob[] blobs = blobCounter.GetObjectsInformation();
// create convex hull searching algorithm
GrahamConvexHull hullFinder = new GrahamConvexHull();
// lock image to draw on it
BitmapData data = image.LockBits(
new Rectangle(0, 0, image.Width, image.Height),
ImageLockMode.ReadWrite, image.PixelFormat);
int i = 0;
// process each blob
foreach (Blob blob in blobs)
{
List<IntPoint> leftPoints, rightPoints, edgePoints = new List<IntPoint>();
// get blob's edge points
blobCounter.GetBlobsLeftAndRightEdges(blob,
out leftPoints, out rightPoints);
edgePoints.AddRange(leftPoints);
edgePoints.AddRange(rightPoints);
// blob's convex hull
List<IntPoint> hull = hullFinder.FindHull(edgePoints);
Drawing.Polygon(data, hull, Color.Red);
i++;
}
image.UnlockBits(data);
MessageBox.Show("Found: " + i + " Objects");
}
private void button1_Click_1(object sender, EventArgs e)
{
pictureBox1.Image = image;
}
}
}
The result is that I'm getting the image after the filter, but without any polygons drawn on it.
I counted the number of blobs and got 3 for this picture.
The examples in the link you've provided assume that white pixels belong to the object and black pixels belong to the background. The image you've provided is the opposite. Therefore, invert the image before applying the algorithm, and it should work.
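A minimal sketch of that inversion using AForge's built-in Invert filter, applied to the grayscale image from your constructor before the edge detector runs (the exact placement is an assumption about your pipeline):
using AForge.Imaging.Filters;
// the blob/hull examples expect white objects on a black background
Invert invert = new Invert();
invert.ApplyInPlace(gsImage); // in-place inversion of the grayscale bitmap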

Emgu CV EigenObjectRecognizer not working

I've tried to code a face recognition program and need some help from the community.
The code posted below compiles with no errors, but the recognizer does not seem to be working.
Basically, target.jpg contains a person cropped out of pic1.jpg (3 people inside), so the recognizer should be able to detect them more easily.
The code below runs with no errors, but all 3 people in pic1.jpg are boxed, and GetEigenDistances returns 0 for all 3 faces. By rights, only the person from target.jpg should be boxed.
Any idea where I have gone wrong? Thanks in advance.
I'm using Emgu CV 2.4 with C# 2010 Express.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.Util;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.CV.CvEnum;
namespace FaceReco
{
public partial class Form1 : Form
{
private HaarCascade haar;
List<Image<Gray, byte>> trainingImages = new List<Image<Gray, byte>>();
Image<Gray, byte> TrainedFace, UnknownFace = null;
MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
// adjust path to find your XML file
haar = new HaarCascade("haarcascade_frontalface_alt_tree.xml");
//Read a target image
Image TargetImg = Image.FromFile(Environment.CurrentDirectory + "\\target\\target.jpg");
Image<Bgr, byte> TargetFrame = new Image<Bgr, byte>(new Bitmap(TargetImg));
//FACE DETECTION FOR TARGET FACE
if (TargetImg != null) // confirm that image is valid
{
//convert the image to gray scale
Image<Gray, byte> grayframe = TargetFrame.Convert<Gray, byte>();
var faces = grayframe.DetectHaarCascade(haar, 1.4, 4,
HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
new Size(25, 25))[0];
foreach (var face in faces)
{
//add into training array
TrainedFace = TargetFrame.Copy(face.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
trainingImages.Add(TrainedFace);
break;
}
TargetImageBox.Image = TrainedFace;
}
//Read an unknown image
Image UnknownImg = Image.FromFile(Environment.CurrentDirectory + "\\img\\pic1.jpg");
Image<Bgr, byte> UnknownFrame = new Image<Bgr, byte>(new Bitmap(UnknownImg));
//FACE DETECTION PROCESS
if (UnknownFrame != null) // confirm that image is valid
{
//convert the image to gray scale
Image<Gray, byte> grayframe = UnknownFrame.Convert<Gray, byte>();
//Detect faces from the gray-scale image and store into an array of type 'var',i.e 'MCvAvgComp[]'
var faces = grayframe.DetectHaarCascade(haar, 1.4, 4,
HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
new Size(25, 25))[0];
//draw a green rectangle on each detected face in image
foreach (var face in faces)
{
UnknownFace = UnknownFrame.Copy(face.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
MCvTermCriteria termCrit = new MCvTermCriteria(16, 0.001);
//Eigen face recognizer
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(trainingImages.ToArray(), ref termCrit);
// if recognise face, draw green box
if (recognizer.Recognize(UnknownFace) != null)
{
UnknownFrame.Draw(face.rect, new Bgr(Color.Green), 3);
}
float f = recognizer.GetEigenDistances(UnknownFace)[0];
// display threshold
UnknownFrame.Draw(f.ToString("R"), ref font, new Point(face.rect.X - 3, face.rect.Y - 3), new Bgr(Color.Red));
}
//Display the image
CamImageBox.Image = UnknownFrame;
}
}
}
}
This area is not yet my specialty, but if I can help I will try. This is what I am using, and it's working quite nicely.
Try to do all your work on the GPU; it's a lot faster than the CPU for this kind of thing!
List<Rectangle> faces = new List<Rectangle>();
List<Rectangle> eyes = new List<Rectangle>();
RightCameraImage = RightCameraImageCapture.QueryFrame().Resize(480, 360, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC); //Read the files as an 8-bit Bgr image
//Emgu.CV.GPU.GpuInvoke.HasCuda
if (GpuInvoke.HasCuda)
{
Video.DetectFace.UsingGPU(RightCameraImage, Main.FaceGpuCascadeClassifier, Main.EyeGpuCascadeClassifier, faces, eyes, out detectionTime);
}
else
{
Video.DetectFace.UsingCPU(RightCameraImage, Main.FaceCascadeClassifier, Main.EyeCascadeClassifier, faces, eyes, out detectionTime);
}
string PersonsName = string.Empty;
Image<Gray, byte> GreyScaleFaceImage;
foreach (Rectangle face in faces)
{
RightCameraImage.Draw(face, new Bgr(Color.Red), 2);
GreyScaleFaceImage = RightCameraImage.Copy(face).Convert<Gray, byte>().Resize(200, 200, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
if (KnownFacesList.Count > 0)
{
// MCvTermCriteria for face recognition...
MCvTermCriteria mCvTermCriteria = new MCvTermCriteria(KnownFacesList.Count, 0.001);
// Recognize Known Faces with Eigen Object Recognizer...
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(KnownFacesList.ToArray(), KnownNamesList.ToArray(), eigenDistanceThreashhold, ref mCvTermCriteria);
EigenObjectRecognizer.RecognitionResult recognitionResult = recognizer.Recognize(GreyScaleFaceImage);
if (recognitionResult != null)
{
// Set the Persons Name...
PersonsName = recognitionResult.Label;
// Draw the label for each face detected and recognized...
RightCameraImage.Draw(PersonsName, ref mCvFont, new Point(face.X - 2, face.Y - 2), new Bgr(Color.LightGreen));
}
else
{
// Draw the label for each face NOT Detected...
RightCameraImage.Draw(FaceUnknown, ref mCvFont, new Point(face.X - 2, face.Y - 2), new Bgr(Color.LightGreen));
}
}
}
My code in the Video.DetectFace class:
using System;
using Emgu.CV;
using Emgu.CV.GPU;
using System.Drawing;
using Emgu.CV.Structure;
using System.Diagnostics;
using System.Collections.Generic;
namespace Video
{
//-----------------------------------------------------------------------------------
// Copyright (C) 2004-2012 by EMGU. All rights reserved. Modified by Chris Sykes.
//-----------------------------------------------------------------------------------
public static class DetectFace
{
// Use me like this:
/*
//Emgu.CV.GPU.GpuInvoke.HasCuda
if (GpuInvoke.HasCuda)
{
DetectUsingGPU(...);
}
else
{
DetectUsingCPU(...);
}
*/
private static Stopwatch watch;
public static void UsingGPU(Image<Bgr, Byte> image, GpuCascadeClassifier face, GpuCascadeClassifier eye, List<Rectangle> faces, List<Rectangle> eyes, out long detectionTime)
{
watch = Stopwatch.StartNew();
using (GpuImage<Bgr, Byte> gpuImage = new GpuImage<Bgr, byte>(image))
using (GpuImage<Gray, Byte> gpuGray = gpuImage.Convert<Gray, Byte>())
{
Rectangle[] faceRegion = face.DetectMultiScale(gpuGray, 1.1, 10, Size.Empty);
faces.AddRange(faceRegion);
foreach (Rectangle f in faceRegion)
{
using (GpuImage<Gray, Byte> faceImg = gpuGray.GetSubRect(f))
{
//For some reason a clone is required.
//Might be a bug of GpuCascadeClassifier in opencv
using (GpuImage<Gray, Byte> clone = faceImg.Clone())
{
Rectangle[] eyeRegion = eye.DetectMultiScale(clone, 1.1, 10, Size.Empty);
foreach (Rectangle e in eyeRegion)
{
Rectangle eyeRect = e;
eyeRect.Offset(f.X, f.Y);
eyes.Add(eyeRect);
}
}
}
}
}
watch.Stop();
detectionTime = watch.ElapsedMilliseconds;
}
public static void UsingCPU(Image<Bgr, Byte> image, CascadeClassifier face, CascadeClassifier eye, List<Rectangle> faces, List<Rectangle> eyes, out long detectionTime)
{
watch = Stopwatch.StartNew();
using (Image<Gray, Byte> gray = image.Convert<Gray, Byte>()) //Convert it to Grayscale
{
//normalizes brightness and increases contrast of the image
gray._EqualizeHist();
//Detect the faces from the gray scale image and store the locations as rectangle
//The first dimensional is the channel
//The second dimension is the index of the rectangle in the specific channel
Rectangle[] facesDetected = face.DetectMultiScale(gray, 1.1, 10, new Size(20, 20), Size.Empty);
faces.AddRange(facesDetected);
foreach (Rectangle f in facesDetected)
{
//Set the region of interest on the faces
gray.ROI = f;
Rectangle[] eyesDetected = eye.DetectMultiScale(gray, 1.1, 10, new Size(20, 20), Size.Empty);
gray.ROI = Rectangle.Empty;
foreach (Rectangle e in eyesDetected)
{
Rectangle eyeRect = e;
eyeRect.Offset(f.X, f.Y);
eyes.Add(eyeRect);
}
}
}
watch.Stop();
detectionTime = watch.ElapsedMilliseconds;
}
} // END of CLASS...
}// END of NAMESPACE...
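A closing note on the recognizer setup above: the EigenObjectRecognizer overload that takes labels and an eigenDistanceThreshold makes Recognize() return null when the best match's eigen distance exceeds the threshold, which is what lets unknown faces fall through to the else branch. A minimal sketch of the construction (the 2000 here is an illustrative assumption; tune it against your own GetEigenDistances values):
MCvTermCriteria termCrit = new MCvTermCriteria(KnownFacesList.Count, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    KnownFacesList.ToArray(),  // training faces
    KnownNamesList.ToArray(),  // one label per training face
    2000,                      // eigenDistanceThreshold: smaller is stricter
    ref termCrit);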
