emgucv: pan card improper skew detection in C#

I have three images of a PAN card for testing image skew detection using EmguCV and C#.
The 1st image (top) is detected as 180 degrees, which is correct.
The 2nd image (middle) is detected as 90 degrees but should be detected as 180 degrees.
The 3rd image is detected as 180 degrees but should be detected as 90 degrees.
One observation I want to share here: when I crop the unwanted parts above and below the PAN card using Paint, the code below gives me the expected result.
Now I want to understand how I can remove the unwanted parts programmatically.
I have played with contours and ROIs, but I cannot figure out how to apply them here. I do not understand whether EmguCV selects the contour itself or whether I have to do something more.
Please suggest any suitable code example.
Please check the code below for the angle detection and please help me. Thanks in advance.
imgInput = new Image<Bgr, byte>(impath);
Image<Gray, Byte> img2 = imgInput.Convert<Gray, Byte>();
Bitmap imgs;
Image<Gray, byte> imgout = imgInput.Convert<Gray, byte>().Not().ThresholdBinary(new Gray(50), new Gray(125));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Emgu.CV.Mat hier = new Emgu.CV.Mat();
var blurredImage = imgInput.SmoothGaussian(5, 5, 0, 0);
CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);
CvInvoke.FindContours(imgout, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
if (contours.Size >= 1)
{
    for (int i = 0; i <= contours.Size; i++)
    {
        Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
        RotatedRect box = CvInvoke.MinAreaRect(contours[i]);
        PointF[] Vertices = box.GetVertices();
        PointF point = box.Center;
        PointF edge1 = new PointF(Vertices[1].X - Vertices[0].X, Vertices[1].Y - Vertices[0].Y);
        PointF edge2 = new PointF(Vertices[2].X - Vertices[1].X, Vertices[2].Y - Vertices[1].Y);
        double r = edge1.X + edge1.Y;
        double edge1Magnitude = Math.Sqrt(Math.Pow(edge1.X, 2) + Math.Pow(edge1.Y, 2));
        double edge2Magnitude = Math.Sqrt(Math.Pow(edge2.X, 2) + Math.Pow(edge2.Y, 2));
        PointF primaryEdge = edge1Magnitude > edge2Magnitude ? edge1 : edge2;
        double primaryMagnitude = edge1Magnitude > edge2Magnitude ? edge1Magnitude : edge2Magnitude;
        PointF reference = new PointF(1, 0);
        double refMagnitude = 1;
        double thetaRads = Math.Acos(((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y)) / (primaryMagnitude * refMagnitude));
        double thetaDeg = thetaRads * 180 / Math.PI;
        imgInput = imgInput.Rotate(thetaDeg, new Bgr());
        imgout = imgout.Rotate(box.Angle, new Gray());
        Bitmap bmp = imgout.Bitmap;
        break;
    }
}

The Problem
Let us start with the problem before the solution:
Your Code
When you submit code asking for help, at least make some effort to "clean" it. Help people help you! There are so many lines of code here that do nothing. You declare variables that are never used. Add some comments to let people know what you think your code should do.
Bitmap imgs;
var blurredImage = imgInput.SmoothGaussian(5, 5, 0, 0);
Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
PointF point = box.Center;
double r = edge1.X + edge1.Y;
// Etc
Adaptive Thresholding
The following line of code produces the following images:
CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);
Image 1
Image 2
Image 3
Clearly this is not what you're aiming for since the primary contour, the card edge, is completely lost. As a tip, you can always use the following code to display images at runtime to help you with debugging.
CvInvoke.NamedWindow("Output");
CvInvoke.Imshow("Output", imgout);
CvInvoke.WaitKey();
The Solution
Since in your example images the card has primarily a similar Value (in the HSV sense) to the background, I do not think simple grayscale thresholding is the correct approach in this case. I propose the following:
Algorithm
1. Use Canny Edge Detection to extract the edges in the image.
2. Dilate the edges so that the card content combines.
3. Use Contour Detection to filter for the combined edges with the largest bounding box.
4. Fit this primary contour with a rotated rectangle in order to extract the corner points.
5. Use the corner points to define a transformation matrix to be applied using WarpAffine.
6. Warp and crop the image.
The Code
You may wish to experiment with the parameters of the Canny Detection and Dilation.
// Working Images
Image<Bgr, byte> imgInput = new Image<Bgr, byte>("Test1.jpg");
Image<Gray, byte> imgEdges = new Image<Gray, byte>(imgInput.Size);
Image<Gray, byte> imgDilatedEdges = new Image<Gray, byte>(imgInput.Size);
Image<Bgr, byte> imgOutput;

// 1. Edge Detection
CvInvoke.Canny(imgInput, imgEdges, 25, 80);

// 2. Dilation
CvInvoke.Dilate(
    imgEdges,
    imgDilatedEdges,
    CvInvoke.GetStructuringElement(
        ElementShape.Rectangle,
        new Size(3, 3),
        new Point(-1, -1)),
    new Point(-1, -1),
    5,
    BorderType.Default,
    new MCvScalar(0));

// 3. Contours Detection
VectorOfVectorOfPoint inputContours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(
    imgDilatedEdges,
    inputContours,
    hierarchy,
    RetrType.External,
    ChainApproxMethod.ChainApproxSimple);
VectorOfPoint primaryContour = (from contour in inputContours.ToList()
                                orderby contour.GetArea() descending
                                select contour).FirstOrDefault();

// 4. Corner Point Extraction
RotatedRect bounding = CvInvoke.MinAreaRect(primaryContour);
PointF topLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(point.Y, 2))
                  select point).FirstOrDefault();
PointF topRight = (from point in bounding.GetVertices()
                   orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(point.Y, 2))
                   select point).FirstOrDefault();
PointF botLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                  select point).FirstOrDefault();
PointF botRight = (from point in bounding.GetVertices()
                   orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                   select point).FirstOrDefault();
double boundingWidth = Math.Sqrt(Math.Pow(topRight.X - topLeft.X, 2) + Math.Pow(topRight.Y - topLeft.Y, 2));
double boundingHeight = Math.Sqrt(Math.Pow(botLeft.X - topLeft.X, 2) + Math.Pow(botLeft.Y - topLeft.Y, 2));
bool isLandscape = boundingWidth > boundingHeight;

// 5. Define warp criteria as triangles
PointF[] srcTriangle = new PointF[3];
PointF[] dstTriangle = new PointF[3];
Rectangle ROI;
if (isLandscape)
{
    srcTriangle[0] = botLeft;
    srcTriangle[1] = topLeft;
    srcTriangle[2] = topRight;
    dstTriangle[0] = new PointF(0, (float)boundingHeight);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingWidth, 0);
    ROI = new Rectangle(0, 0, (int)boundingWidth, (int)boundingHeight);
}
else
{
    srcTriangle[0] = topLeft;
    srcTriangle[1] = topRight;
    srcTriangle[2] = botRight;
    dstTriangle[0] = new PointF(0, (float)boundingWidth);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingHeight, 0);
    ROI = new Rectangle(0, 0, (int)boundingHeight, (int)boundingWidth);
}
Mat warpMat = CvInvoke.GetAffineTransform(srcTriangle, dstTriangle);

// 6. Apply the warp and crop
CvInvoke.WarpAffine(imgInput, imgInput, warpMat, imgInput.Size);
imgOutput = imgInput.Copy(ROI);
imgOutput.Save("Output1.bmp");
Two extension methods are used:
static List<VectorOfPoint> ToList(this VectorOfVectorOfPoint vectorOfVectorOfPoint)
{
    List<VectorOfPoint> result = new List<VectorOfPoint>();
    for (int contour = 0; contour < vectorOfVectorOfPoint.Size; contour++)
    {
        result.Add(vectorOfVectorOfPoint[contour]);
    }
    return result;
}

static double GetArea(this VectorOfPoint contour)
{
    RotatedRect bounding = CvInvoke.MinAreaRect(contour);
    return bounding.Size.Width * bounding.Size.Height;
}
Outputs
Meta Example

Related

Getting NaN for image angle in emgucv with c#

I have referenced the following link to develop the following C# code to detect the angle of an image: https://stackoverflow.com/a/34285205/7805023
Image<Gray, byte> imgout = imgInput.Convert<Gray, byte>().Not().ThresholdBinary(new Gray(50), new Gray(255));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(imgout, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
for (int i = 0; i <= 1; i++)
{
    Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
    RotatedRect box = CvInvoke.MinAreaRect(contours[i]);
    PointF[] Vertices = box.GetVertices();
    PointF point = box.Center;
    PointF edge1 = new PointF(Vertices[1].X - Vertices[0].X, Vertices[1].Y - Vertices[0].Y);
    PointF edge2 = new PointF(Vertices[2].X - Vertices[1].X, Vertices[2].Y - Vertices[1].Y);
    double edge1Magnitude = Math.Sqrt(Math.Pow(edge1.X, 2) + Math.Pow(edge1.Y, 2));
    double edge2Magnitude = Math.Sqrt(Math.Pow(edge2.X, 2) + Math.Pow(edge2.Y, 2));
    PointF primaryEdge = edge1Magnitude > edge2Magnitude ? edge1 : edge2;
    PointF reference = new PointF(Vertices[1].X, Vertices[0].Y);
    double thetaRads = Math.Acos((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y)) / (edge1Magnitude * edge2Magnitude);
    double thetaDeg = thetaRads * 180 / Math.PI;
}
I am getting NaN as output for the angle. Please go through the code and let me know what I have done wrong.
I have used EmguCV for the image processing.
Any help will be highly appreciated.
NaN, Not a Number, is returned by Math.Acos() when the provided value is not between -1 and 1.
Your calculation of thetaRads is wrong: you need to take the Acos of the vectors' dot product divided by the product of the vectors' magnitudes, as per:
double thetaRads = Math.Acos(((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y)) / (primaryMagnitude * refMagnitude));
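For completeness, here is a minimal sketch of the corrected computation, mirroring the working code from the question above; it assumes the reference vector is the horizontal unit vector (1, 0), so refMagnitude is 1 and primaryMagnitude is the length of the longer edge:
// Reference vector: the horizontal unit vector, so its magnitude is 1.
PointF reference = new PointF(1, 0);
double refMagnitude = 1;

// Magnitude of the longer (primary) edge of the rotated rectangle.
double primaryMagnitude = Math.Max(edge1Magnitude, edge2Magnitude);

// Dot product divided by the product of the magnitudes; the quotient is
// guaranteed to lie in [-1, 1], so Math.Acos never returns NaN.
double thetaRads = Math.Acos(
    ((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y))
    / (primaryMagnitude * refMagnitude));
double thetaDeg = thetaRads * 180 / Math.PI;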

Logo recognition using emguCV

I have implemented this code and have detected the logo in a couple of images.
I was able to get some results like this, but I need to count how many images contain this logo,
maybe something like finding all the keypoints of the logo inside the big image, or something else.
I can see I have found the logo inside the big image, but I want to confirm it programmatically, using EmguCV.
Please help.
-- edited
This is the piece of code with the homography. Can you guide me a bit here? I am totally new to EmguCV and OpenCV. Please help me count these inliers.
public static Mat Draw(Mat modelImage, Mat observedImage, out long matchTime)
{
    Mat homography;
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
    {
        Mat mask;
        FindMatch(modelImage, observedImage, out matchTime, out modelKeyPoints, out observedKeyPoints, matches,
            out mask, out homography);

        // Draw the matched keypoints
        Mat result = new Mat(); // new Size(400,400), modelImage.Depth, modelImage.NumberOfChannels);
        Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
            matches, result, new MCvScalar(255, 255, 255), new MCvScalar(255, 255, 255), mask);

        #region draw the projected region on the image
        if (homography != null)
        {
            // draw a rectangle along the projected model
            Rectangle rect = new Rectangle(Point.Empty, modelImage.Size);
            PointF[] pts = new PointF[]
            {
                new PointF(rect.Left, rect.Bottom),
                new PointF(rect.Right, rect.Bottom),
                new PointF(rect.Right, rect.Top),
                new PointF(rect.Left, rect.Top)
            };
            pts = CvInvoke.PerspectiveTransform(pts, homography);
            Point[] points = Array.ConvertAll<PointF, Point>(pts, Point.Round);
            using (VectorOfPoint vp = new VectorOfPoint(points))
            {
                CvInvoke.Polylines(result, vp, true, new MCvScalar(255, 0, 0, 255), 5);
            }
        }
        #endregion

        return result;
    }
}
I think my answer is a bit too late, but maybe it can help someone else. With the following code snippet you can count the matching feature points that belong to your question (counting the drawn lines). The important variable is the mask variable; it contains the information.
private int CountHowManyPairsExist(Mat mask)
{
    Matrix<Byte> matrix = new Matrix<Byte>(mask.Rows, mask.Cols);
    mask.CopyTo(matrix);
    var matched = matrix.ManagedArray;
    var list = matched.OfType<byte>().ToList();
    // Each non-zero entry in the mask marks one matched (inlier) pair.
    var count = list.Count(a => a.Equals(1));
    return count;
}
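To then answer the original question ("does this image contain the logo?") you might, for example, compare the inlier count against a threshold. A minimal sketch, where the threshold of 10 is just an illustrative assumption:
// mask comes from the FindMatch call in the Draw method above.
int inliers = CountHowManyPairsExist(mask);
// 10 is an arbitrary example threshold; tune it for your template.
bool logoFound = homography != null && inliers >= 10;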

EmguCV: Draw contour on object in Motion using Optical Flow?

I would like to do motion detection in C# (using EmguCV 3.0) to remove objects in motion or in the foreground and draw an overlay.
Here is a sample test I did with a Kinect (because it's a depth camera).
How can I get started with EmguCV 3.0?
I tried many background removal samples that do not work.
It seems OpticalFlow is a good start, but there is no example for EmguCV 3.0.
If I find the largest blob, how can I find its contours?
Can someone help me get started?
EDIT: 17/06/2015
In EmguCV 3.0.0 RC I don't see OpticalFlow in the package and documentation:
http://www.emgu.com/wiki/files/3.0.0-rc1/document/html/b72c032d-59ae-c36f-5e00-12f8d621dfb8.htm
There are only: DenseOpticalFlow, OpticalFlowDualTVL1?
This is the AbsDiff code:
var grayFrame = frame.Convert<Gray, Byte>();
var motionFrame = grayFrame.AbsDiff(backFrame)
.ThresholdBinary(new Gray(20), new Gray(255))
.Erode(2)
.Dilate(2);
Result:
I don't know how to get only the motion in white.
This is the blob code:
Image<Bgr, Byte> smoothedFrame = new Image<Bgr, byte>(frame.Size);
CvInvoke.GaussianBlur(frame, smoothedFrame, new Size(3, 3), 1); // filter out noise

Mat forgroundMask = new Mat();
fgDetector.Apply(smoothedFrame, forgroundMask);

CvBlobs blobs = new CvBlobs();
blobDetector.Detect(forgroundMask.ToImage<Gray, byte>(), blobs);
blobs.FilterByArea(400, int.MaxValue);
blobTracker.Update(blobs, 1.0, 0, 1);

foreach (var pair in blobs)
{
    CvBlob b = pair.Value;
    CvInvoke.Rectangle(frame, b.BoundingBox, new MCvScalar(255.0, 255.0, 255.0), 2);
}
Result:
Why are there so many false positives?
This is the MOG2 code:
forgroundDetector.Apply(frame, forgroundMask);
motionHistory.Update(forgroundMask);

var motionMask = GetMotionMask();
Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
CvInvoke.InsertChannel(motionMask, motionImage, 0);

Rectangle[] rects;
using (VectorOfRect boundingRect = new VectorOfRect())
{
    motionHistory.GetMotionComponents(segMask, boundingRect);
    rects = boundingRect.ToArray();
}
foreach (Rectangle comp in rects) { ...
Result:
If I select the biggest area, how can I get the contour of the object?
First, I can give you some example Optical Flow code.
Let oldImage and newImage be variables that hold the previous and current frame. In my code, they are of type Image<Gray, Byte>.
// prep containers for x and y vectors
Image<Gray, float> velx = new Image<Gray, float>(newImage.Size);
Image<Gray, float> vely = new Image<Gray, float>(newImage.Size);

// use the Horn and Schunck dense optical flow algorithm.
OpticalFlow.HS(oldImage, newImage, true, velx, vely, 0.1d, new MCvTermCriteria(100));

// color each pixel
Image<Hsv, Byte> coloredMotion = new Image<Hsv, Byte>(newImage.Size);
for (int i = 0; i < coloredMotion.Width; i++)
{
    for (int j = 0; j < coloredMotion.Height; j++)
    {
        // Pull the relevant intensities from the velx and vely matrices
        double velxHere = velx[j, i].Intensity;
        double velyHere = vely[j, i].Intensity;

        // Determine the color (i.e., the angle)
        double degrees = Math.Atan(velyHere / velxHere) / Math.PI * 90 + 45;
        if (velxHere < 0)
        {
            degrees += 90;
        }
        coloredMotion.Data[j, i, 0] = (Byte)degrees;
        coloredMotion.Data[j, i, 1] = 255;

        // Determine the intensity (i.e., the distance)
        double intensity = Math.Sqrt(velxHere * velxHere + velyHere * velyHere) * 10;
        coloredMotion.Data[j, i, 2] = (Byte)((intensity > 255) ? 255 : intensity);
    }
}
// coloredMotion is now an image that shows intensity of motion by lightness
// and direction by color.
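Regarding your edit: in EmguCV 3.0 the old OpticalFlow helper class is gone, and the dense flow functions live on CvInvoke instead. A rough sketch using the Farneback variant follows; the overload and enum names are assumptions based on the EmguCV 3.x API and may differ slightly in your version, and the parameter values are just common defaults, not tuned:
// Assumption: EmguCV 3.x exposes a Farneback overload that fills two
// Image<Gray, float> flow components directly; check the exact signature
// in your version.
Image<Gray, float> velx = new Image<Gray, float>(newImage.Size);
Image<Gray, float> vely = new Image<Gray, float>(newImage.Size);
CvInvoke.CalcOpticalFlowFarneback(
    oldImage, newImage, velx, vely,
    0.5,  // pyramid scale
    3,    // pyramid levels
    15,   // averaging window size
    3,    // iterations per level
    5,    // pixel neighborhood for polynomial expansion
    1.2,  // Gaussian sigma for the expansion
    Emgu.CV.CvEnum.OpticalflowFarnebackFlag.Default);
// velx and vely can then feed the coloring loop above unchanged.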
Regarding the larger question of how to remove the foreground:
If I had a way to get a static background image, that's the best way to start. Then the foreground would be detected by the AbsDiff method, smoothed using Erode and Dilate or a Gaussian blur, and then passed to blob detection.
For simple foreground detection, I found Optical Flow to be way too much processing (8 fps max), whereas the AbsDiff method was just as accurate but had no effect on framerate.
Regarding contours, if you're merely looking to find the size, position, and other moments, then the blob detection in the AbsDiff tutorial above seems to be sufficient, which uses Image.FindContours(...).
If not, I would start looking at the CvBlobDetector class as used in this tutorial. There's a built-in DrawBlob function that might come in handy.
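To answer the "biggest blob" sub-question directly, here is a minimal sketch, assuming motionFrame is the thresholded Image<Gray, byte> mask from the AbsDiff code above and frame is the original color frame; it selects the largest external contour by area and draws it onto the frame:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(motionFrame, contours, hierarchy, RetrType.External, ChainApproxMethod.ChainApproxSimple);

// Find the index of the contour with the largest area.
int largestIndex = -1;
double largestArea = 0;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > largestArea)
    {
        largestArea = area;
        largestIndex = i;
    }
}

// Draw the largest contour (the biggest moving object) over the frame.
if (largestIndex >= 0)
    CvInvoke.DrawContours(frame, contours, largestIndex, new MCvScalar(0, 255, 0), 2);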

Emgu CV - How can I get all occurrences of a pattern in an image

Hi, I already have a functioning solution, but there is one issue:
// The screenshot will be stored in this bitmap.
Bitmap capture = new Bitmap(rec.Width, rec.Height, PixelFormat.Format24bppRgb);
using (Graphics g = Graphics.FromImage(capture))
{
    g.CopyFromScreen(rec.Location, new System.Drawing.Point(0, 0), rec.Size);
}

MCvSURFParams surfParam = new MCvSURFParams(500, false);
SURFDetector surfDetector = new SURFDetector(surfParam);

// Template image
Image<Gray, Byte> modelImage = new Image<Gray, byte>("template.jpg");
// Extract features from the object image
ImageFeature[] modelFeatures = surfDetector.DetectFeatures(modelImage, null);

// Prepare current frame
Image<Gray, Byte> observedImage = new Image<Gray, byte>(capture);
ImageFeature[] imageFeatures = surfDetector.DetectFeatures(observedImage, null);

// Create a SURF Tracker using k-d Tree
Features2DTracker tracker = new Features2DTracker(modelFeatures);

Features2DTracker.MatchedImageFeature[] matchedFeatures = tracker.MatchFeature(imageFeatures, 2);
matchedFeatures = Features2DTracker.VoteForUniqueness(matchedFeatures, 0.8);
matchedFeatures = Features2DTracker.VoteForSizeAndOrientation(matchedFeatures, 1.5, 20);
HomographyMatrix homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);

// Merge the object image and the observed image into one image for display
Image<Gray, Byte> res = modelImage.ConcateVertical(observedImage);

#region draw lines between the matched features
foreach (Features2DTracker.MatchedImageFeature matchedFeature in matchedFeatures)
{
    PointF p = matchedFeature.ObservedFeature.KeyPoint.Point;
    p.Y += modelImage.Height;
    res.Draw(new LineSegment2DF(matchedFeature.SimilarFeatures[0].Feature.KeyPoint.Point, p), new Gray(0), 1);
}
#endregion

#region draw the projected region on the image
if (homography != null)
{
    // draw a rectangle along the projected model
    Rectangle rect = modelImage.ROI;
    PointF[] pts = new PointF[]
    {
        new PointF(rect.Left, rect.Bottom),
        new PointF(rect.Right, rect.Bottom),
        new PointF(rect.Right, rect.Top),
        new PointF(rect.Left, rect.Top)
    };
    homography.ProjectPoints(pts);

    for (int i = 0; i < pts.Length; i++)
        pts[i].Y += modelImage.Height;

    res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 2);
}
#endregion

pictureBoxScreen.Image = res.ToBitmap();
The result is:
My problem is that homography.ProjectPoints(pts) gets only the first occurrence of the pattern (the white rectangle in the picture above).
How can I project all occurrences of the template, i.e., how can I get every occurrence of the template rectangle in the image?
I faced a problem similar to yours in my master's thesis. Basically you have two options:
Use a clustering algorithm such as hierarchical k-means, or a point-density one such as DBSCAN (it depends on two parameters, but you can make it threshold-free in the bidimensional R^2 space).
Use a multiple robust model fitting estimation technique such as J-Linkage. With this more advanced technique you cluster points that share a homography, instead of clustering points that are close to each other in Euclidean space.
Once you partition your matches into "clusters", you can estimate homographies between matches belonging to corresponding clusters; a rough sketch of the first option follows below.
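As a minimal illustration of the first option (note: this is a naive greedy proximity clustering, not real DBSCAN, and the 80-pixel radius in the usage example is an arbitrary assumption), you could partition the matched features by the position of their observed keypoints, then run the same homography estimation per cluster:
// Greedily group matched features whose observed keypoints lie within
// maxDistance pixels of an existing cluster member.
static List<List<Features2DTracker.MatchedImageFeature>> ClusterByPosition(
    Features2DTracker.MatchedImageFeature[] matches, float maxDistance)
{
    var clusters = new List<List<Features2DTracker.MatchedImageFeature>>();
    foreach (var match in matches)
    {
        PointF p = match.ObservedFeature.KeyPoint.Point;
        var cluster = clusters.FirstOrDefault(c => c.Any(other =>
        {
            PointF q = other.ObservedFeature.KeyPoint.Point;
            return Math.Sqrt(Math.Pow(p.X - q.X, 2) + Math.Pow(p.Y - q.Y, 2)) < maxDistance;
        }));
        if (cluster == null)
        {
            cluster = new List<Features2DTracker.MatchedImageFeature>();
            clusters.Add(cluster);
        }
        cluster.Add(match);
    }
    return clusters;
}

// Usage: one homography (and hence one projected rectangle) per cluster.
// foreach (var cluster in ClusterByPosition(matchedFeatures, 80))
// {
//     HomographyMatrix h = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(cluster.ToArray());
//     if (h != null) { /* project the template rectangle with h.ProjectPoints(...) */ }
// }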

How to draw a dashed line over an object?

I am drawing a line on a control on my Windows form like this:
// Get Graphics object from chart
Graphics graph = e.ChartGraphics.Graphics;
PointF point1 = PointF.Empty;
PointF point2 = PointF.Empty;
// Set Maximum and minimum points
point1.X = -110;
point1.Y = -110;
point2.X = 122;
point2.Y = 122;
// Convert relative coordinates to absolute coordinates.
point1 = e.ChartGraphics.GetAbsolutePoint(point1);
point2 = e.ChartGraphics.GetAbsolutePoint(point2);
// Draw connection line
graph.DrawLine(new Pen(Color.Yellow, 3), point1, point2);
I would like to know if it is possible to draw a dashed (dotted) line instead of a regular solid line?
It's pretty simple once you figure out the formatting that defines the dashes:
float[] dashValues = { 5, 2, 15, 4 };
Pen blackPen = new Pen(Color.Black, 5);
blackPen.DashPattern = dashValues;
e.Graphics.DrawLine(blackPen, new Point(5, 5), new Point(405, 5));
The numbers in the float array represent alternating dash and gap lengths. So for a simple dash of 2 pixels on (black) and 2 pixels off, your array would look like {2, 2}; the pattern then repeats. If you wanted 5-wide dashes with a space of 2 pixels you would use {5, 2}.
In your code it would look like:
// Get Graphics object from chart
Graphics graph = e.ChartGraphics.Graphics;
PointF point1 = PointF.Empty;
PointF point2 = PointF.Empty;
// Set Maximum and minimum points
point1.X = -110;
point1.Y = -110;
point2.X = 122;
point2.Y = 122;
// Convert relative coordinates to absolute coordinates.
point1 = e.ChartGraphics.GetAbsolutePoint(point1);
point2 = e.ChartGraphics.GetAbsolutePoint(point2);
// Draw (dashed) connection line
float[] dashValues = { 4, 2 };
Pen dashPen= new Pen(Color.Yellow, 3);
dashPen.DashPattern = dashValues;
graph.DrawLine(dashPen, point1, point2);
I think you can accomplish this by changing the pen you use to draw your line.
So, replace the last 2 lines in your example with:
var pen = new Pen(Color.Yellow, 3);
pen.DashStyle = DashStyle.Dash;
graph.DrawLine(pen, point1, point2);
Pen has a public property that is defined as
public DashStyle DashStyle { get; set; }
You can set DashStyle.Dash if you want to draw a dashed line.
Pen.DashPattern will do this. Look here for an example
In more modern C#:
var dottedPen = new Pen(Color.Gray, width: 1) { DashPattern = new[] { 1f, 1f } };
To answer this question regarding the generation of a dashed line using code-behind (WPF, in this case):
Pen dashPenTest = new(Brushes.DodgerBlue, 1);
Line testLine = new()
{
    Stroke = dashPenTest.Brush, // Brushes.Aqua,
    StrokeThickness = dashPenTest.Thickness, // 1,
    StrokeDashArray = new DoubleCollection() { 8, 4 },
    X1 = 0,
    X2 = canvas.Width,
    Y1 = 10,
    Y2 = 10
};
canvas.Children.Add(testLine);
This answer makes use of a canvas generated in the XAML:
<Canvas x:Name="canvas" Background="White" Height="300" Width="300">
The important property here is StrokeDashArray, which generates the dashes for the line drawn. More information is given here: Shape.StrokeDashArray
