This question is quite difficult for me to explain, so I'll be illustrating with some images as well as text.
For a steel engraving machine I need to use .NET's normal graphics framework to create a "document" that is sent to the engraving machine - it is treated just like a normal printer. The machine in question is this one:
http://www.rolanddga.com/products/impactprinters/mpx90/features.asp
I can print a text-outline on it in C# with this:
// ALL UNITS ARE SET IN MILLIMETERS (MM)
Graphics g = <instantiated from my printers-printpage-event>;
// The following values are set as "constants" here for the purpose of my question
// they normally are passed as parameters
string s = "ABC";
float fontSize = 4.0F;
RectangleF r = new RectangleF(0, 30.0F, 100.0F, 40.0F);
StringFormat sfDraw = new StringFormat();
sfDraw.Alignment = StringAlignment.Center;
FontStyle fStyle = FontStyle.Regular;
FontFamily fFamily = new FontFamily("Arial"); // example family; the real code passes this in as a parameter
using (var gpDraw = new GraphicsPath())
{
    gpDraw.AddString(s, fFamily, (int)fStyle, fontSize, r, sfDraw);
    SolidBrush brushFG = new SolidBrush(Color.Black);
    Pen pen = new Pen(brushFG, 0.01F);
    g.DrawPath(pen, gpDraw);
}
It gives an output similar to this: http://i47.tinypic.com/mruu4j.jpg
What I want now is to fill this outline. Not simply with a brush-fill (as can easily be accomplished with g.FillPath(brushFG, gpDraw)).
It should instead be "filled" with smaller and smaller outlines, like shown on this image: http://i46.tinypic.com/b3kb29.png
(the different line colors are only used to make the example clearer).
As I made the example in Photoshop, I realized that what I am actually trying to do is mimic the functionality you find in Photoshop's Select/Modify/Contract.
But I am at my wit's end as to how I accomplish this.
Any help? I'm not looking for a complete solution, but at the moment I am completely stuck. I've tried simple scaling, which is probably the wrong approach (since it does not produce the right result...).
UPDATE 2012-07-16: I am now using the Clipper Library http://www.angusj.com/delphi/clipper.php which has a wonderful function called OffsetPolygons.
My test-code is shown here: http://pastie.org/4264890
It works fine with "single" polygons - e.g. a "C" since it only consists of a single polygon. An "O" consist of two polygons - an inside and outside. Likewise with "A". And these give me some trouble. See these images:
C: http://i46.tinypic.com/ap304.png
O: http://i45.tinypic.com/35k60xg.jpg
A: http://i50.tinypic.com/1zyaibm.png
B: http://i49.tinypic.com/5lbb40.png
You get the picture (heh heh... ;-)
I think the problem is that I extract everything from GraphicsPath as a single polygon, when there are actually two (in the case of A and O) and three in the case of B.
Clipper's OffsetPolygons actually takes an array of polygons, so I guess it is able to handle this correctly. But I don't know how to extract my paths from GraphicsPath as separate polygons.
UPDATE 2012-07-16 (later in the day):
Okay I've actually managed to pull it off now, and will explain it in an answer, in the hope that it might help others with similar problems.
And a big thank you to everybody who helped along the way! The only reason I accept my own answer is so that others might benefit from this question with a fully worked solution.
Take a look at An algorithm for inflating/deflating (offsetting, buffering) polygons -- the questioner there is actually asking about the reverse operation, but the answers there apply to your case as well. One of them (the highest rated) has a pointer to an open source library that has a C# version.
The usual name for the operation you describe is "polygon offsetting", by the way.
Using the Clipper library was only half of the battle.
I extracted all points from GraphicsPath into a single array, thus inadvertently creating a misshapen polygon based on two separate polygons (in the case of "A").
Instead I needed to examine the PathTypes array property on GraphicsPath. Every time a point has a path type of 0, it marks the beginning of a new polygon (figure). So the extraction method should use this and return an array of polygons instead of just a single polygon:
// pointScale converts the floating-point GraphicsPath coordinates (mm) to the integer
// coordinates Clipper works with; it is defined at class level in the full code.
private ClipperPolygons graphicsPathToPolygons(GraphicsPath gp)
{
    ClipperPolygons polyList = new ClipperPolygons();
    ClipperPolygon poly = null;
    for (int i = 0; i < gp.PointCount; i++)
    {
        PointF p = gp.PathPoints[i];
        byte pType = gp.PathTypes[i];
        if (pType == 0) // PathPointTypeStart: a new figure (polygon) begins here
        {
            if (poly != null)
                polyList.Add(poly);
            poly = new ClipperPolygon();
        }
        IntPoint ip = new IntPoint();
        ip.X = (int)(p.X * pointScale);
        ip.Y = (int)(p.Y * pointScale);
        poly.Add(ip);
    }
    if (poly != null)
        polyList.Add(poly);
    return polyList;
}
Clipper's OffsetPolygons actually WANTS a list of polygons, so this ought to have been obvious to me earlier.
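For completeness, the repeated insetting itself is just a loop that calls OffsetPolygons with an increasingly negative delta and draws each result. A minimal sketch of that loop (ClipperPolygons/ClipperPolygon are assumed to be aliases for Clipper's Polygons/Polygon types, the JoinType argument reflects the Clipper API as I recall it, and drawPolygons() is a hypothetical helper that scales the integer points back down by pointScale and draws them):

// Shrink the glyph outlines step by step and draw each generation.
ClipperPolygons polys = graphicsPathToPolygons(gpDraw);
float stepMm = 0.25F;                    // distance between successive outlines (mm)
double delta = -stepMm * pointScale;     // negative delta = inset (shrink)

while (polys.Count > 0)
{
    drawPolygons(g, polys);              // hypothetical helper: convert back and DrawPath
    polys = Clipper.OffsetPolygons(polys, delta, JoinType.jtRound);
    // Once the polygons collapse, OffsetPolygons returns an empty list and the loop ends.
}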
The entire code can be seen here: http://pastie.org/4265265
And if you're curious, I've zipped the entire test-project here to open in Visual Studio and compile.
http://gehling.dk/wp-content/uploads/2012/07/TestClipper.zip
It has not been optimized for speed in any way.
/ Carsten
I am using NetTopologySuite with C# to filter points inside the precise boundaries of a country in a pretty simple way:
var path = "fr.shp"; // "big" country and boundaries need to be precise in my case
var reader = new ShapeDataReader(path);
var mbr = reader.ShapefileBounds;
var result = reader.ReadByMBRFilter(mbr);
var polygons = new List<Geometry>();
using (var coll = result.GetEnumerator())
{
    while (coll.MoveNext())
    {
        var item = coll.Current;
        if (item == null)
        {
            continue;
        }
        polygons.Add(item.Geometry);
    }
}
var polygon = new GeometryCombiner(polygons).Combine();
var points = new List<Point>();
List<Point> pointsToFilterWithBorders; // loaded from DB, not visible here, but we have 1,350,000 points to filter
Parallel.ForEach(pointsToFilterWithBorders, point =>
{
    if (polygon.Contains(point))
        points.Add(point);
});
It's working fine (filtering works great!) but it's pretty slow... like one day to do the filtering for only 1,350,000 points!
Any idea on how to improve that?
I tried Parallel.ForEach but it is still very slow, and I looked for something like a "batch" compare in NetTopologySuite but couldn't find a quicker way to filter my points against this big shapefile...
Presumably the polygon that defines the border is quite large and detailed, otherwise it should not take such a long time. There are a few approaches I would consider:
Do an initial Bounding box check
Create an axis-aligned bounding box for the polygon and test whether a point is inside this box before continuing with any more complex check (see the sketch below). This should be very easy to implement.
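A minimal sketch of that pre-check, assuming the NTS Geometry/Point types and variable names from the question (EnvelopeInternal is NTS's cached axis-aligned bounding box of a geometry):

// Compute the envelope (axis-aligned bounding box) once, outside the loop.
var envelope = polygon.EnvelopeInternal;

foreach (var point in pointsToFilterWithBorders)
{
    // Cheap rejection test first; only run the expensive polygon test when it passes.
    if (envelope.Contains(point.Coordinate) && polygon.Contains(point))
        points.Add(point);
}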
Render the polygon
Create a large bitmap and render your polygon to it; you will need some kind of transform to translate between GIS coordinates and pixel coordinates. Then you can simply check the pixel value for each point in the bitmap. Just make sure to disable anti-aliasing. Note that this would be an approximate solution.
You could also do something like rendering the polygon in one color and then rendering the border in another color using a pen more than one pixel wide. Any points landing on the border color are uncertain and may need a more accurate test, while points landing away from the border can be classified as inside or outside directly from the fill color. A sketch of the basic idea follows below.
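A rough sketch with System.Drawing, assuming a hypothetical ToPixel() transform from GIS coordinates to pixel coordinates and the border polygon's exterior ring already converted to a PointF[] in pixel space:

// Illustrative only: ring and ToPixel() are assumptions, not part of the question's code.
var mask = new Bitmap(8192, 8192);
using (var g = Graphics.FromImage(mask))
{
    g.SmoothingMode = SmoothingMode.None;        // no anti-aliasing: pixels are either in or out
    g.Clear(Color.Black);                        // black = outside
    g.FillPolygon(Brushes.White, ring);          // white = inside
    g.DrawPolygon(new Pen(Color.Red, 3), ring);  // red band = near the border, needs an exact test
}

// Later, per point (somePoint is one of the GIS points to classify):
Point px = ToPixel(somePoint);
Color c = mask.GetPixel(px.X, px.Y);
bool nearBorder = c.R > 200 && c.G < 100;        // red-ish: fall back to polygon.Contains()
bool inside     = c.R > 200 && c.G > 200;        // white-ish: definitely inside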
Use a tree structure to speed up the check
The typical approach to speeding up any kind of repeated search is to build a search structure, often a tree. In this specific case I believe a Binary Space Partition (BSP) tree might be suitable. The advantage is that the number of checks needed would be O(log n), where n is the number of lines in your polygons, instead of the O(n) that I suspect the .Contains() method is.
But I have never implemented a BSP tree, so I refer to other sources on how to implement one.
You could also consider simplifying your border polygon, but it might be more difficult to ensure the simplified polygon is either fully contained by the true polygon, or fully contains the true polygon.
Note that most methods assume that you are using a planar coordinate system, so you might need to do some conversions if you are using anything else.
If possible, use the spatial capabilities of your database and only load points into your set that can possibly intersect with your MultiPolygon, i.e., those within its bounding box.
Then for 1:n geometry checks use NetTopologySuite's PreparedGeometry predicates:
// polygon and pointsToFilterWithBorders from the sample above
var prep = NetTopologySuite.Geometries.Prepared.PreparedGeometryFactory.Prepare(polygon);
var points = new List<Point>();
foreach (var pt in pointsToFilterWithBorders)
    if (prep.Contains(pt)) points.Add(pt);
Thanks everyone for the help! At the end, I found a solution inspired by your ideas :-)
I can't share the code itself because it's inside a closed source code but here is the idea:
My code was already grouping all the points by squares of 1 km² for other purposes: I just had to check whether each of these squares was fully inside the boundary. When it was not, I checked every point inside that square individually!
Still a little slow (but less than one hour for France, and it's mixed with other pieces of code that were already a bit slow), but clearly quicker!
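For readers who want the shape of that idea in code, a minimal sketch, assuming the prepared geometry from the answer above and two hypothetical pieces: a squares collection with a Points list per cell, and a GetSquarePolygon() helper that builds the 1 km² cell as an NTS Geometry:

// Hypothetical grid-based filter: whole squares inside the border skip the per-point test.
var prep = NetTopologySuite.Geometries.Prepared.PreparedGeometryFactory.Prepare(polygon);
var inside = new List<Point>();

foreach (var square in squares)
{
    var cell = GetSquarePolygon(square); // hypothetical: the 1 km² cell as an NTS Geometry
    if (prep.Contains(cell))
    {
        // The whole square is inside the country: accept every point without testing.
        inside.AddRange(square.Points);
    }
    else if (prep.Intersects(cell))
    {
        // The square straddles the border: fall back to per-point tests.
        inside.AddRange(square.Points.Where(p => prep.Contains(p)));
    }
    // Squares entirely outside the border are skipped.
}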
My goal is to detect the different regions within a simple drawing constructed of various lines. Please click the following link to view a visual example of my goal for clarification. I am of course able to get the position of the drawn lines, but since one line can cross multiple 'regions' I don't think this information alone will be sufficient.
Any ideas, suggestions or points to other websites are welcome. I am using C# in combination with WPF - I am not certain which search words might lead to an answer to this problem. I did come across this shape checker article from AForge, but it seems to focus on detecting shapes that are already there, not so much on regions that still have to be 'discovered'. As a side note, I hope to find a solution that works not only with rectangles but also with other types of shapes.
Thank you very much in advance.
Update:
foreach (Line canvasObject in DrawingCanvas.Children.OfType<Line>())
{
    LineGeometry lineGeometry1 = new LineGeometry();
    lineGeometry1.StartPoint = new Point(canvasObject.X1, canvasObject.Y1);
    lineGeometry1.EndPoint = new Point(canvasObject.X2, canvasObject.Y2);
    if (canvasObject.X1 != canvasObject.X2)
    {
        foreach (Line canvasObject2 in DrawingCanvas.Children.OfType<Line>())
        {
            if (canvasObject.X1 == canvasObject2.X1 && canvasObject.X2 == canvasObject2.X2 &&
                canvasObject2.Y1 == canvasObject2.Y2 && canvasObject.Y2 == canvasObject2.Y2)
            {
                return;
                // prevent the system from 'colliding' the same two lines
            }
            LineGeometry lineGeometry2 = new LineGeometry
            {
                StartPoint = new Point(canvasObject2.X1, canvasObject2.Y1),
                EndPoint = new Point(canvasObject2.X2, canvasObject2.Y2)
            };
            if (lineGeometry1.FillContainsWithDetail(lineGeometry2).ToString() != "Empty")
            {
                // collision detected
                Rectangle rectangle = new Rectangle
                {
                    Width = Math.Abs(canvasObject.X2 - canvasObject.X1),
                    Height = 20,
                    Fill = Brushes.Red
                };
                //rectangle.Height = Math.Abs(canvasObject.Y2 - canvasObject.Y1);
                DrawingCanvas2.Children.Add(rectangle);
                Canvas.SetTop(rectangle, canvasObject.Y1);
                Canvas.SetLeft(rectangle, canvasObject.X1);
            }
        }
    }
}
I have experimented with the code above to give you an impression of how I tried to tackle this problem. Initially I thought I had found a partial solution by checking for collisions between lines. Unfortunately I had just created a second copy of each line (which of course collided 'with itself'). After I added a simple if check (see the code above) this no longer occurs, but now I don't get any collisions anymore... so I will probably need a new technique.
Update 2:
After some more digging and searching the internet for solutions, I have a new potential solution in mind. Hopefully this can also be of use to anyone looking for answers in the future. Using a flood-fill algorithm I am able to 'fill' each region with a specific color, much like the paint bucket tool in an image editing application. Summarized, this is done by taking a 'screenshot' of the Canvas element, starting at a certain pixel and expanding over and over until pixels of a different color are found (these would be the lines). It works pretty well and is able to return an image with the various regions. However, my current problem is accessing these regions as 'objects' in C#/WPF. I would like to draw the regions myself (using a Polygon object or something similar?), making it possible to use the objects for further calculations or interactions.
I have tried saving the smallest and largest X and Y positions in the flood-fill algorithm after each pixel check, but this makes the algorithm run very, very slowly. If anyone has an idea, I would love to know. :)
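For what it's worth, tracking the bounding box inside the fill loop should cost only a couple of comparisons per pixel, so a queue-based flood fill along these lines is one way to get both the region's pixels and its extent in a single pass. This is only an illustrative sketch, not the poster's code; it assumes the canvas has already been captured into an int[width, height] of packed pixel colors (e.g. via RenderTargetBitmap/CopyPixels):

// Returns the connected region around (seedX, seedY) plus its bounding box.
// lineColor is the color of the drawn lines that delimit regions.
static (List<(int X, int Y)> Region, int MinX, int MinY, int MaxX, int MaxY)
    FloodFill(int[,] pixels, int seedX, int seedY, int lineColor)
{
    int width = pixels.GetLength(0), height = pixels.GetLength(1);
    var visited = new bool[width, height];
    var region = new List<(int, int)>();
    int minX = seedX, minY = seedY, maxX = seedX, maxY = seedY;

    var queue = new Queue<(int X, int Y)>();
    queue.Enqueue((seedX, seedY));
    visited[seedX, seedY] = true;

    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        region.Add((x, y));
        if (x < minX) minX = x; if (x > maxX) maxX = x;   // cheap incremental bounds
        if (y < minY) minY = y; if (y > maxY) maxY = y;

        foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
        {
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            if (visited[nx, ny] || pixels[nx, ny] == lineColor) continue;
            visited[nx, ny] = true;
            queue.Enqueue((nx, ny));
        }
    }
    return (region, minX, minY, maxX, maxY);
}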
I am using EmguCV 2.3.0.1416 from a simple console application (.NET 4.0 and C#) and I have a question about Canny edge detection. Given the following code:
var colours = new[]
{
    new Bgr(Color.YellowGreen),
    new Bgr(Color.Turquoise),
    new Bgr(Color.Blue),
    new Bgr(Color.DeepPink)
};

// Convert to grayscale, remove noise and get the canny
using (var image = new Image<Bgr, byte>(fileName)
    .Convert<Gray, byte>()
    .PyrDown()
    .PyrUp()
    .Canny(new Gray(180), new Gray(90)))
{
    // Save the canny out to a file and then get each contour within
    // the canny and get the polygon for it, colour each a different
    // colour from a selection so we can easily see if they join up
    image.Save(cannyFileName);
    var contours = image.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                                      RETR_TYPE.CV_RETR_EXTERNAL);
    using (var debug = new Image<Bgr, byte>(image.Size))
    {
        int colIndex = 0;
        for (; contours != null; contours = contours.HNext)
        {
            Contour<Point> poly = contours.ApproxPoly(contours.Perimeter * 0.05,
                                                      contours.Storage);
            debug.Draw(poly, colours[colIndex], 1);
            colIndex++;
            if (colIndex > 3) colIndex = 0;
        }
        debug.Save(debugFileName);
    }
}
I get this output (this is actually just a part of the image but it shows what I am asking about):
As you can see it has a blue line with a little bit of pink and then a green line. The real thing has just a solid edge here so I want this to be a single line in order that I can be sure it is the edge of what I am looking at.
The original image looks like this (I have zoomed it but you can see it has a very distinctive edge that I was expecting to be able to find easily).
If I look at just the canny I can see the gap there, so I tried adjusting the parameters for creating the canny (the threshold and linking threshold), but they made no difference.
I also dilated and then eroded the canny (using the same value for the iterations parameter - 10 incidentally) and that seemed to do the trick but could I lose accuracy by doing this (it just feels a bit wrong somehow)?
So, how should I ensure that I get a single line in this instance?
Did you try smoothing before the Canny?
I found this link, which may be useful for you:
http://www.indiana.edu/~dll/B657/B657_lec_hough.pdf
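In EmguCV terms that would mean inserting a Gaussian blur into the existing chain before the Canny call. A small illustrative sketch (the kernel size of 3 is just an example value to experiment with):

// Smooth before edge detection so small gaps along the edge are less likely.
using (var image = new Image<Bgr, byte>(fileName)
    .Convert<Gray, byte>()
    .SmoothGaussian(3)          // example kernel size; tune for your images
    .Canny(new Gray(180), new Gray(90)))
{
    image.Save(cannyFileName);
}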
What exactly do you mean by single line? Perhaps you are trying to thicken your line:
debug.Draw(poly, colours[colIndex], 2);
Instead of:
debug.Draw(poly, colours[colIndex], 1);
Or whatever thickness of line you want.
Here's the emgucv Draw Method for polygon.
Perhaps look at this link too.
The first argument to ApproxPoly() is exactly what you are looking for. Just fiddle with that and you will get exactly what you want.
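For example, lowering that tolerance makes the approximation hug the contour more closely, so it is less likely to break what should be one edge into several segments (the 0.05 in the question is quite coarse; the exact value is something to experiment with):

// Smaller fraction of the perimeter = finer approximation of the contour.
Contour<Point> poly = contours.ApproxPoly(contours.Perimeter * 0.01, contours.Storage);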
How can I set the colour of every pixel in an image to its closest colour match from a list of colours in RGB format (no alpha), which can be of any length, in C#?
It's basically creating a custom BitmapPalette, but since you can't do that (trust me, I've tried everything possible), I need an alternative.
Does anyone know a way to do this?
Boy... I hope you love your maths...
This is a tough question. To determine the "closeness of fit" between two colors, you first must understand the color space/color model in which you are working. The RGB color model (not counting the alpha channel) is essentially Euclidean in nature: each color maps to a point in 3D space. Ergo, the putative distance between two colors, C1 and C2, is:
Distance = SQRT( (C1.red - C2.red)² + (C1.green - C2.green)² + (C1.blue - C2.blue)² )
WRT "normal" human visual perception, this is not necessarily correct. To take that into account gets much more complicated.
Try these two papers as jumping-off points:
Colour Metric
The Color FAQ
The Color FAQ also provides many links to other colorspace resources.
Some more links at http://www.golden-gryphon.com/software/misc/color-links.html
Here's a paper on color differences that might help also: http://www.axiphos.com/Reports/ColorDifferences.pdf
Bruce Lindbloom's web site has lots of stuff as well, including a color difference calculator, that works in the CIE color space (which has provision for distance computations).
ColorMine is an open source C# library that has methods for converting between color spaces and comparing colors via a couple of delta-E methods.
For example, this will give you a similarity score based on the most common delta-E method (Cie76):
var a = new Rgb { R = 23, G = 117, B = 114 };
var b = new Rgb { R = 113, G = 27, B = 11 };
var deltaE = a.Compare(b, new Cie1976Comparison());
The function below prints a color raster image to a PCL-5 printer. The function was adapted from a 2-color (1bpp) printing function we had that worked perfectly, except for the grainy 2-color printing. The problem is that the image comes out with a large black bar extending from the right of the image to the edge of the page like this:
IMAGE#########################################
IMAGE#########AREA COMPLETELY BLACK###########
IMAGE#########################################
The image itself looks perfect, otherwise.
Various PCL-to-PDF tools don't show the image at all, which leads me to believe I've forgotten to do something. Appropriate resets (\u001bE\u001b%-12345X) were sent before, and page feeds after.
Any PCL experts out there? I've got the PCL 5 Color Technical Reference Manual, and it's gotten me this far. This last thing is driving me crazy though.
Edit:
I now know what command is causing the problem, but I don't know why:
stream("\u001b*r0F");
This should keep the image rotated along with the page (portrait, landscape). If I remove this, the problem goes away. I can compensate by rotating the bitmap beforehand, but I really want to know what caused this!
static void PrintImage()
{
    // Get an image into memory
    Image original = Image.FromFile("c:\\temp\\test.jpg");
    Bitmap newBitmap = new Bitmap(original, original.Width, original.Height);

    stream(String.Format("\u001b*p{0:d}x*p{1:d}Y", 1000, 1000)); // Set cursor.
    stream("\u001b*t300R"); // 300 DPI
    stream(String.Format("\u001b*r{0:d}T", original.Height)); // Height
    stream(String.Format("\u001b*r{0:d}S", original.Width));  // Width
    stream("\u001b*r3U"); // 8-bit color palette
    stream("\u001b*r0F"); // Follow logical page layout (landscape, portrait, etc..)

    // Set palette depth, 3 bytes per pixel RGB
    stream("\u001b*v6W\u0000\u0003\u0000\u0008\u0008\u0008");
    stream("\u001b*r1A"); // Start raster graphics
    stream("\u001b*b0M"); // Compression 0 = None, 1 = Run Length Encoding

    // Not fast, but fast enough.
    List<byte> colors = new List<byte>();
    for (int y2 = 0; y2 < original.Height; y2++)
    {
        colors.Clear();
        for (int x2 = 0; x2 < original.Width; x2++)
        {
            Color c = newBitmap.GetPixel(x2, y2);
            colors.Add(c.R);
            colors.Add(c.G);
            colors.Add(c.B);
        }
        stream(String.Format("\u001b*b{0}W", colors.Count)); // Length of data to send
        streamBytes(colors.ToArray()); // Binary data
    }
    stream("\u001b*rB"); // End raster graphics (also tried *rC -- no effect)
}
There are a few problems with your code. First off, your cursor-position code is incorrect; it should read:
"\u001b*p{0:d}x{1:d}Y", 1000, 1000
This equates to:
<esc>*p1000x1000Y
you had:
<esc>*p1000x*p1000Y
When joining PCL commands together, you match up commands with the same parameterized prefix and group, and then simply chain value + parameterized character + value + parameterized character, etc. Ensure that the final parameterized character is a capital letter, which signifies the end of the PCL command.
Also, when defining an image I recommend you specify the width and height in decipoints; this should help with the scaling of the image (*r3A) on the page. So add this (just after your resolution command should be an okay place for it):
// Destination raster size in decipoints (1/720 inch): *t#H = width, *t#V = height.
Int32 deciWidth = original.Width * 720 / (int)original.HorizontalResolution;
Int32 deciHeight = original.Height * 720 / (int)original.VerticalResolution;
stream(String.Format("\u001b*t{0:d}h{1:d}V", deciWidth, deciHeight));
The other recommendation is to write all of this to a file (watch your encodings) and use one of the handful of PCL viewers to view your data instead of always printing it. It should save you some time and a forest or two! I've tried all of them and would recommend spending the $89 and purchasing pclWorks. They also have a complete SDK if you are going to do a lot of PCL. We don't use that, as we hand-code all our PCL ourselves, but it does look good.
As for rotation, we've had problems on some devices. You could just rotate the JPEG first (original.RotateFlip) and then write it out.
I don't have much time today but hope that my comments assist. I can test your code on Monday or Tuesday and work with it and post any further comments.
Keep in mind that even though PCL is a standard, its support varies from manufacturer to manufacturer and device to device, and that can be a problem. When doing basic things most devices seem okay; however, if you get into macros or complex graphics you will find differences.