How do I detect irregular borders in an image? - c#

Given the image below, what algorithm might I use to detect whether regions one and two (identified by color) have a border?
http://img823.imageshack.us/img823/4477/borders.png
If there's a C# example out there, that would be awesome, but I'm really just looking for any example code.
Edit: Using Jaro's advice, I came up with the following...
public class Shape
{
    private const int MAX_BORDER_DISTANCE = 15;

    public List<Point> Pixels { get; set; }

    public Shape()
    {
        Pixels = new List<Point>();
    }

    public bool SharesBorder(Shape other)
    {
        var shape1 = this;
        var shape2 = other;
        foreach (var pixel1 in shape1.Pixels)
        {
            foreach (var pixel2 in shape2.Pixels)
            {
                var xDistance = Math.Abs(pixel1.X - pixel2.X);
                var yDistance = Math.Abs(pixel1.Y - pixel2.Y);
                if (xDistance > 1 && yDistance > 1)
                {
                    if (xDistance * yDistance < MAX_BORDER_DISTANCE)
                        return true;
                }
                else
                {
                    if (xDistance < Math.Sqrt(MAX_BORDER_DISTANCE) &&
                        yDistance < Math.Sqrt(MAX_BORDER_DISTANCE))
                        return true;
                }
            }
        }
        return false;
    }

    // ...
}
Clicking on two shapes that do share a border returns fairly quickly, but very distant shapes, or shapes with a large number of pixels, can take 3+ seconds at times. What options do I have for optimizing this?

Two regions sharing a border means that within some small area all three colors should be present: red, black and green.
So a very ineffective solution presents itself:
using Color pixelColor = myBitmap.GetPixel(x, y); you could scan an area for those three colors. The area must be larger than the width of the border, of course.
There is of course plenty of room for optimization (such as going in 50-pixel steps and continually decreasing the precision).
Since black is the least used color, you would search around black areas first.
This should explain what I have written in various comments in this topic:
namespace Phobos.Graphics
{
    public class BorderDetector
    {
        private Color region1Color = Color.FromArgb(222, 22, 46);
        private Color region2Color = Color.FromArgb(11, 189, 63);
        // The border is black in the sample image; this value must differ from the region colours.
        private Color borderColor = Color.FromArgb(0, 0, 0);

        private List<Point> region1Points = new List<Point>();
        private List<Point> region2Points = new List<Point>();
        private List<Point> borderPoints = new List<Point>();

        private Bitmap b;

        private const int precision = 10;
        private const int distanceThreshold = 25;

        public long Milliseconds1 { get; set; }
        public long Milliseconds2 { get; set; }

        public BorderDetector(Bitmap b)
        {
            if (b == null) throw new ArgumentNullException("b");
            this.b = b;
        }

        private void ScanBitmap()
        {
            Color c;
            for (int x = precision; x < this.b.Width; x += BorderDetector.precision)
            {
                for (int y = precision; y < this.b.Height; y += BorderDetector.precision)
                {
                    c = this.b.GetPixel(x, y);
                    if (c == region1Color) region1Points.Add(new Point(x, y));
                    else if (c == region2Color) region2Points.Add(new Point(x, y));
                    else if (c == borderColor) borderPoints.Add(new Point(x, y));
                }
            }
        }

        /// <summary>Returns the Manhattan distance between two points (inaccurate but very fast).</summary>
        private int GetDistance(Point p1, Point p2)
        {
            return Math.Abs(p1.X - p2.X) + Math.Abs(p1.Y - p2.Y);
        }

        /// <summary>Finds the closest 2 points among the points in the 2 sets.</summary>
        private int FindClosestPoints(List<Point> r1Points, List<Point> r2Points, out Point foundR1, out Point foundR2)
        {
            int minDistance = Int32.MaxValue;
            int distance = 0;
            foundR1 = Point.Empty;
            foundR2 = Point.Empty;

            foreach (Point r1 in r1Points)
                foreach (Point r2 in r2Points)
                {
                    distance = this.GetDistance(r1, r2);
                    if (distance < minDistance)
                    {
                        foundR1 = r1;
                        foundR2 = r2;
                        minDistance = distance;
                    }
                }

            return minDistance;
        }

        public bool FindBorder()
        {
            Point r1;
            Point r2;

            Stopwatch watch = new Stopwatch();
            watch.Start();
            this.ScanBitmap();
            watch.Stop();
            this.Milliseconds1 = watch.ElapsedMilliseconds;

            // Reset so the second measurement only covers FindClosestPoints.
            watch.Reset();
            watch.Start();
            int distance = this.FindClosestPoints(this.region1Points, this.region2Points, out r1, out r2);
            watch.Stop();
            this.Milliseconds2 = watch.ElapsedMilliseconds;

            this.b.SetPixel(r1.X, r1.Y, Color.Green);
            this.b.SetPixel(r2.X, r2.Y, Color.Red);

            return (distance <= BorderDetector.distanceThreshold);
        }
    }
}
It is very simple. Searching this way only takes about 2 + 4 ms (scanning and finding the closest points).
You could also do the search coarse-to-fine: first with precision = 1000, then precision = 100 and finally precision = 10 for large images.
FindClosestPoints will practically give you an estimated rectangular area where the border should be situated (borders usually look like that).
Then you could use the vector approach I have described in other comments.
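For illustration, here is a minimal sketch of that coarse-to-fine idea. It is not part of the original answer; it reuses the fields and the FindClosestPoints helper of the BorderDetector class above and simply shrinks the scan window around the best pair of points after each pass:
public bool FindBorderCoarseToFine()
{
    // Sketch only: scan with a large step first, then rescan a smaller window
    // around the closest pair of points with a finer step.
    int[] steps = { 100, 10, 1 };
    Rectangle window = new Rectangle(0, 0, this.b.Width, this.b.Height);
    Point r1 = Point.Empty, r2 = Point.Empty;
    int distance = Int32.MaxValue;

    foreach (int step in steps)
    {
        List<Point> p1 = new List<Point>();
        List<Point> p2 = new List<Point>();

        for (int x = window.Left; x < window.Right; x += step)
            for (int y = window.Top; y < window.Bottom; y += step)
            {
                Color c = this.b.GetPixel(x, y);
                if (c == region1Color) p1.Add(new Point(x, y));
                else if (c == region2Color) p2.Add(new Point(x, y));
            }

        // One of the regions was not hit at this resolution: give up.
        if (p1.Count == 0 || p2.Count == 0) return false;

        distance = this.FindClosestPoints(p1, p2, out r1, out r2);

        // Shrink the window to the neighbourhood of the best pair for the next pass.
        int margin = 2 * step;
        window = Rectangle.Intersect(
            new Rectangle(0, 0, this.b.Width, this.b.Height),
            Rectangle.FromLTRB(
                Math.Min(r1.X, r2.X) - margin, Math.Min(r1.Y, r2.Y) - margin,
                Math.Max(r1.X, r2.X) + margin, Math.Max(r1.Y, r2.Y) + margin));
    }

    return distance <= distanceThreshold;
}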

I read your question as asking whether the two points exist in different regions. Is this correct? If so, I would probably use a variation of flood fill. It's not super difficult to implement (don't implement it recursively, or you will almost certainly run out of stack space), and it can handle complex situations such as a U-shaped region that has a border running between two points which are nevertheless in the same region. Basically, run flood fill and return true when your coordinate matches the target coordinate (or perhaps when it's close enough for your satisfaction, depending on your use case).
[Edit] Here is an example of flood fill that I wrote for a project of mine. The project is CPAL-licensed, but the implementation is pretty specific to what I use it for anyway, so don't worry about copying parts of it. And it doesn't use recursion, so it should be able to scale to pixel data.
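Since the linked project is not reproduced here, the following is only a generic queue-based sketch of the idea (not the CPAL-licensed code mentioned above): it answers whether the target pixel can be reached from the start pixel without leaving the start pixel's colour.
using System.Collections.Generic;
using System.Drawing;

static class RegionTest
{
    // Queue-based flood fill: returns true if target is reachable from start
    // through 4-connected pixels that share the start pixel's colour.
    public static bool SameRegion(Bitmap bmp, Point start, Point target)
    {
        int startColor = bmp.GetPixel(start.X, start.Y).ToArgb();
        var visited = new HashSet<Point>();
        var queue = new Queue<Point>();
        queue.Enqueue(start);
        visited.Add(start);

        while (queue.Count > 0)
        {
            Point p = queue.Dequeue();
            if (p == target) return true;

            foreach (Point n in new[]
            {
                new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1)
            })
            {
                if (n.X < 0 || n.Y < 0 || n.X >= bmp.Width || n.Y >= bmp.Height)
                    continue;                                   // outside the image
                if (visited.Contains(n)) continue;              // already queued
                if (bmp.GetPixel(n.X, n.Y).ToArgb() != startColor)
                    continue;                                   // different region
                visited.Add(n);
                queue.Enqueue(n);
            }
        }
        return false;
    }
}
GetPixel is slow; for real images you would read the pixels into an array first (for example via LockBits) and index into that instead.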
[Edit2] I misunderstood the task. I don't have any example code that does exactly what you're looking for, but I can say that comparing the two regions pixel by pixel in their entirety is not something you want to do. You can reduce the complexity by partitioning each region into a coarser grid (maybe 25x25-pixel cells), comparing those sectors first, and only doing a pixel-by-pixel comparison within two sectors that are close enough.
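As a rough sketch of that sector idea applied to the Shape class from the question (the 25-pixel cell size and the 1-pixel adjacency test are just example choices, not a statement of what "sharing a border" has to mean in your case):
// Bucket each shape's pixels into coarse grid cells, compare cells first,
// and only fall back to pixel-by-pixel checks for neighbouring cells.
private const int CellSize = 25;

private static Dictionary<Point, List<Point>> ToCells(Shape s)
{
    var cells = new Dictionary<Point, List<Point>>();
    foreach (var p in s.Pixels)
    {
        var key = new Point(p.X / CellSize, p.Y / CellSize);
        if (!cells.TryGetValue(key, out var list))
            cells[key] = list = new List<Point>();
        list.Add(p);
    }
    return cells;
}

public bool SharesBorderFast(Shape other)
{
    var cells1 = ToCells(this);
    var cells2 = ToCells(other);

    foreach (var c1 in cells1)
        foreach (var c2 in cells2)
        {
            // Pixels within one pixel of each other can only live in the same
            // or an adjacent cell, so all other cell pairs are skipped cheaply.
            if (Math.Abs(c1.Key.X - c2.Key.X) > 1 ||
                Math.Abs(c1.Key.Y - c2.Key.Y) > 1) continue;

            foreach (var p1 in c1.Value)
                foreach (var p2 in c2.Value)
                    if (Math.Abs(p1.X - p2.X) <= 1 && Math.Abs(p1.Y - p2.Y) <= 1)
                        return true;
        }

    return false;
}
Caching the cell dictionary on each Shape instead of rebuilding it per call would help further.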
[Edit2.5] A quadtree might be able to help you too. I don't have a lot of experience with it, but I know it's popular in 2D collision detection, which is similar to what you're doing here. It might be worth researching.

Related

C# creating LinkedPixels from a Bitmap in an Optimized way

I am doing image manipulation. I have a class that I've inherited and can't change, because it's referenced too many times to completely refactor. Here is the class as it stands.
I have a Bitmap and I am trying to get a collection of all of the LinkedPixels. For example, pixel
(x, y) would be a key, and a pixel in the center would be attached to four pixels: (x+1, y), (x, y+1), (x-1, y), (x, y-1). Pixels on the sides would have three attached pixels, and pixels in the corners only two.
imWrap.Arr is an array with all of the pixels inside a Bitmap; it is built as the program spools up, so it is not built on the fly, it is already available.
Since I cannot change the class itself (I could change it to a struct if necessary), I need to find the fastest way to create the LinkedPix objects. I am aware that there are many better ways to do this, but I am locked in to this "LinkedPix" class; there are over 100 references to it.
public class LinkedPix
{
public Point P;
public override string ToString()
{
return P.ToString() + " " + Links.Count;
}
public HashSet<LinkedPix> Links;
public SegObject MemberOfObject { get; set; }
public bool IsMemberOfObject { get => MemberOfObject != null; }
private void Init()
{
MemberOfObject = null;
Links = new HashSet<LinkedPix>();
}
public LinkedPix(int x, int y)
{
Init();
P = new Point(x, y);
}
public LinkedPix(Point p)
{
Init();
P = p;
}
public LinkedPix RotateCopy(double theta_radians)
{
//Assumes that 0,0 is the origin
LinkedPix LP2 = new LinkedPix(new Point(
(int)(P.X * Math.Cos(theta_radians) - P.Y * Math.Sin(theta_radians)),
(int)(P.Y * Math.Cos(theta_radians) + P.X * Math.Sin(theta_radians))));
return LP2;
}
public void LinksRemove(LinkedPix ToRemove, bool TwoWay = false)
{
//Remove from each others
if (TwoWay) ToRemove.Links.Remove(this);
Links.Remove(ToRemove);
}
/// <summary>
/// Removes this from each of its links and removes these links
/// </summary>
internal void LinksCrossRemove()
{
foreach (LinkedPix tPix in Links) tPix.LinksRemove(this, false);
Links = new HashSet<LinkedPix>();
}
}
Here are the three functions I used to populate the information about how LinkedPix are related.
Dictionary<Point, LinkedPix> PointDict = new Dictionary<Point, LinkedPix>();
void EstablishLinks(int xi, int yi, LinkedPix Pixi)
{
if (imWrap.Arr[xi, yi] > threshold)
{
Point pi = new Point(xi, yi);
LinkedPix Pix2 = GetOrAdd(pi);
Pixi.Links.Add(Pix2);
Pix2.Links.Add(Pixi);
}
}
LinkedPix Pix; Point p;
int height = imWrap.Height;
for (int y = 0; y < height - 1; y++)
{
if (Slice && y % Slice_ModulusFactor < 1) continue; //This is for Axon Degeneration to break up the axons
for (int x = 0; x < height - 1; x++)
{
if (Slice && x % Slice_ModulusFactor < 1) continue;
if (imWrap.Arr[x, y] > threshold)
{
Pixels++;
if (CreateLinks)
{
p = new Point(x, y);
Pix = GetOrAdd(p);
if (y + 1 < height) EstablishLinks(x, y + 1, Pix);
if (x + 1 < height) EstablishLinks(x + 1, y, Pix);
}
}
}
}
}
public LinkedPix GetOrAdd(Point p)
{
if (PointDict.ContainsKey(p))
{
return PointDict[p];
}
else
{
LinkedPix LP = new LinkedPix(p);
PointDict.Add(p, LP);
return LP;
}
}
}
imWrap.Arr[x, y] is an array that has already been pre-populated with the contents of a bitmap.
Things I have tried:
Switching to ConcurrentDictionary and running in Parallel. The inferior hashing performance of ConcurrentDictionary compared to HashSet means running this in parallel actually took more time, at least when I tried it.
Switching to a different collection with faster adds, such as a Queue. This gains me some speed, but because of the way LinkedPix is used I ultimately have to convert to a HashSet anyway, which costs time in the end, so it is not ultimately a win.
Various kinds of asynchronicity; I either gain very little or nothing at all.
I'm definitely stuck and could use some outside-of-the-box thinking. I definitely need to speed this up, and any help would be greatly appreciated (one more idea is sketched after the timings below).
These are some timings
GetOrAdd: 4.190254371881144 ticks
LinksAdd: 1.7282694093881512 ticks
LinksAddTwo: 1.6570632131524925 ticks
PointCreateTime: 0.20380008184398646 ticks
ArrayCheck: 0.3061780882729328 ticks
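Not one of the things tried above, just a sketch of one more option: since the keys are dense pixel coordinates, a plain 2-D array sized to the bitmap could stand in for the Dictionary<Point, LinkedPix>, which turns GetOrAdd into a direct array lookup with no hashing at all (this assumes the wrapper exposes the image width as well as the height).
// Allocated once, e.g. grid = new LinkedPix[width, height];
LinkedPix[,] grid;

public LinkedPix GetOrAdd(Point p)
{
    LinkedPix lp = grid[p.X, p.Y];
    if (lp == null)
    {
        lp = new LinkedPix(p);
        grid[p.X, p.Y] = lp;
    }
    return lp;
}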

Collision detection on GPU in C#

I have a Dictionary of circles (randomly distributed in 2D space) and a position in 2D space.
The method should return just ANY circle where the position is within the radius of the circle. (There might even be more than one, but I don't care which one it chooses.)
My current implementation looks like this:
int GetAnyCircleWithinRadius(Dictionary<int, Circle> circles, Position position)
{
    int circleIndex = -1;
    Parallel.ForEach(circles.Values, (circle, state) =>
    {
        // Squared distance from the query position to the circle centre.
        double deltaX = circle.center.x - position.x;
        double deltaY = circle.center.y - position.y;
        double distanceSquared = deltaX * deltaX + deltaY * deltaY;

        // Comparing squared values avoids the square root.
        if (distanceSquared < circle.radius * circle.radius)
        {
            circleIndex = circle.index;
            state.Break();
        }
    });
    return circleIndex;
}
Basically it runs through all circles in parallel on the CPU and checks if the distance to the center is smaller than its radius, which would mean that the position is inside the circle.
Now my question is:
Is there a simple way, to run the same routine on the GPU instead of the CPU?
What I have tried so far is playing around a bit with Cloo, and I was able to run a prime searcher (see below), but I have no idea how to translate my own "circle problem" C# code into such a program.
using Cloo.Extensions;
...
static void Main(string[] args)
{
primes.ClooForEach(IsPrime);
}
static string IsPrime
{
get
{
return
#"
kernel void GetIfPrime(global int* message)
{
int index = get_global_id(0);
int upperl = (int)sqrt((float)message[index]);
for(int i = 2; i <= upperl; i++)
{
if(message[index]%i==0)
{
message[index]=0;
return;
}
}
}";
}
}
I'm also glad to receive any other performance hints! :)
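For reference, a kernel for the circle test written in the same style as the prime example might look like the sketch below. Unlike the prime kernel it needs several inputs (circle centres, radii, the query position and an output slot), so the simple ClooForEach helper is not enough; the host-side buffer setup would have to go through the regular Cloo API and is not shown here. The property name FindCircleKernel is made up for this sketch.
static string FindCircleKernel
{
    get
    {
        return
@"
kernel void FindCircle(global const float2* centers,
                       global const float*  radii,
                       const float2 position,
                       global int* result)
{
    int index = get_global_id(0);
    float2 delta = centers[index] - position;

    // Compare squared distance with squared radius so no sqrt is needed.
    if (dot(delta, delta) < radii[index] * radii[index])
    {
        result[0] = index;   // any hit will do; the last writer wins
    }
}";
    }
}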

Drawing zig-zag lines is much slower than drawing straight lines

While using a self-written graphing control I noticed that painting the graph was much slower when displaying noisy data than when displaying clean data.
I dug further into it and narrowed the problem down to its bare minimum difference: drawing the same number of lines with varying Y values versus drawing lines with the same Y value.
So for example I put together the following tests. I generate lists of points, one with random Y values, one with the same Y, and one with a Zig-Zag Y pattern.
private List<PointF> GenerateRandom(int n, int width, int height)
{
//Generate random pattern
Random rnd = new Random();
float stepwidth = Convert.ToSingle(width / n);
float mid = Convert.ToSingle(height / 2);
float lastx = 0;
float lasty = mid;
List<PointF> res = new List<PointF>();
res.Add(new PointF(lastx, lasty));
for (int i = 1; i <= n; i++)
{
var x = stepwidth * i;
var y = Convert.ToSingle(height * rnd.NextDouble());
res.Add(new PointF(x, y));
}
return res;
}
private List<PointF> GenerateUnity(int n, int width, int height)
{
//Generate points along a simple line
float stepwidth = Convert.ToSingle(width / n);
float mid = Convert.ToSingle(height / 2);
float lastx = 0;
float lasty = mid;
List<PointF> res = new List<PointF>();
res.Add(new PointF(lastx, lasty));
for (int i = 1; i <= n; i++)
{
var x = stepwidth * i;
var y = mid;
res.Add(new PointF(x, y));
}
return res;
}
private List<PointF> GenerateZigZag(int n, int width, int height)
{
//Generate an Up/Down List
float stepwidth = Convert.ToSingle(width / n);
float mid = Convert.ToSingle(height / 2);
float lastx = 0;
float lasty = mid;
List<PointF> res = new List<PointF>();
res.Add(new PointF(lastx, lasty));
var state = false;
for (int i = 1; i <= n; i++)
{
var x = stepwidth * i;
var y = mid - (state ? 50 : -50);
res.Add(new PointF(x, y));
state = !state;
}
return res;
}
I now draw each list of points a few times and compare how long it takes:
private void DoTheTest()
{
Bitmap bmp = new Bitmap(970, 512);
var random = GenerateRandom(2500, bmp.Width, bmp.Height).ToArray();
var unity = GenerateUnity(2500, bmp.Width, bmp.Height).ToArray();
var ZigZag = GenerateZigZag(2500, bmp.Width, bmp.Height).ToArray();
using (Graphics g = Graphics.FromImage(bmp))
{
var tUnity = BenchmarkDraw(g, 200, unity);
var tRandom = BenchmarkDraw(g, 200, random);
var tZigZag = BenchmarkDraw(g, 200, ZigZag);
MessageBox.Show(tUnity.ToString() + "\r\n" + tRandom.ToString() + "\r\n" + tZigZag.ToString());
}
}
private double BenchmarkDraw(Graphics g, int n, PointF[] Points)
{
var Times = new List<double>();
for (int i = 1; i <= n; i++)
{
g.Clear(Color.White);
System.DateTime d3 = DateTime.Now;
DrawLines(g, Points);
System.DateTime d4 = DateTime.Now;
Times.Add((d4 - d3).TotalMilliseconds);
}
return Times.Average();
}
private void DrawLines(Graphics g, PointF[] Points)
{
g.DrawLines(Pens.Black, Points);
}
I come up with the following durations per draw:
Straight Line: 0.095 ms
Zig-Zag Pattern: 3.24 ms
Random Pattern: 5.47 ms
So it seems to get progressively worse the more change there is in the lines to be drawn, and that is also the real-world effect I encountered in the control painting mentioned at the beginning.
My questions are thus the following:
Why does it make such a brutal difference which lines are to be drawn?
How can I improve the drawing speed for the noisy data?
Three reasons come to mind:
Line Length: Depending on the actual numbers, sloped lines may be longer by just a few pixels, by a lot, or even by some substantial factor. Looking at your code I suspect the latter.
Algorithm: Drawing sloped lines takes some algorithm to find the next pixels. Even fast drawing routines need to do some computation, as opposed to vertical or horizontal lines, which run straight through the pixel arrays.
Anti-Aliasing: Unless you turn off anti-aliasing completely (with all the ugly consequences), the number of pixels to paint will also be around 2-3 times higher, as all those anti-aliasing pixels above and below the center lines must also be calculated and drawn. Not to forget calculating their colors!
The remedy for the last part is obviously to turn off anti-aliasing, but the other problems are simply the way things are. So best don't worry, and be happy about the speedy straight lines :-)
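If you want to experiment with the anti-aliasing part, the switch in GDI+ is the Graphics.SmoothingMode property. A fresh Graphics actually defaults to SmoothingMode.None, so this only changes anything if anti-aliasing is enabled somewhere in your control; a small sketch based on the benchmark above:
using System.Drawing.Drawing2D;
// ...
using (Graphics g = Graphics.FromImage(bmp))
{
    // No anti-aliasing for lines and curves; AntiAlias is the slow, pretty option.
    g.SmoothingMode = SmoothingMode.None;
    var tRandom = BenchmarkDraw(g, 200, random);
}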
If you really have a lot of lines, or your lines can be very long (a few times the size of the screen), or you have a lot of nearly zero-length lines, you will have to write code to reduce useless drawing of lines.
Well, here are some ideas (a sketch of the first one follows this answer):
If you draw many lines at the same x, you could replace them with a single line between the min and max y at that x.
If a line goes way beyond the screen boundary, you should clip it.
If a line is completely outside the visible area, you should skip it.
If a line has zero length, you should not draw it.
If a line is a single pixel long, you should draw only that pixel.
Obviously, the benefit depends a lot on how many lines you draw... and the alternative might not give exactly the same result...
In practice, if you draw a chart on a screen and display only useful information, it should be pretty fast on modern hardware.
If you use styles or colors, it might not be as trivial to optimize the display of the data.
Alternatively, there are charting components that are optimized for displaying large data sets. The good ones are generally expensive, but they might still be worth it. Trials are often available, so you can get a good idea of how much performance you might gain and then decide what to do.
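A sketch of the first idea in the list above, assuming the points are ordered by x as in the generators from the question: collapse all points that fall on the same integer x column into one vertical min/max segment before calling DrawLines.
private PointF[] ReducePoints(PointF[] points)
{
    // For every integer x keep only the lowest and highest y; for dense,
    // noisy data this draws one vertical segment per column instead of
    // many overlapping zig-zag segments.
    var reduced = new List<PointF>();
    int i = 0;
    while (i < points.Length)
    {
        int x = (int)points[i].X;
        float minY = points[i].Y, maxY = points[i].Y;
        while (i < points.Length && (int)points[i].X == x)
        {
            minY = Math.Min(minY, points[i].Y);
            maxY = Math.Max(maxY, points[i].Y);
            i++;
        }
        reduced.Add(new PointF(x, minY));
        reduced.Add(new PointF(x, maxY));
    }
    return reduced.ToArray();
}
You would then call g.DrawLines(Pens.Black, ReducePoints(Points)) in place of the direct call. As noted above, the result is not pixel-identical to drawing every original segment, but for 2500 noisy points on a 970-pixel-wide bitmap it removes most of the overdraw.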

Flood fill missing a tile or two

edit: It works now. I moved getting X and Y into a separate function, and that seems to have fixed it (I'm not sure exactly what was wrong, sorry).
I've been making a Tile Editor using WPF and C#, and I seem to have come across a rather odd problem.
When I first run the fill algorithm on a blank canvas, it fills up all tiles except one (usually around (2,7)). Depending on where I start, the missing tile jumps around slightly in that area.
What really confuses me is that if I flood fill that same area again, it fixes the missing tile.
I tried the recursive flood fill as well as one using a Queue. Here's the code for the recursive one:
private void FloodFill(Tile node, int targetID, int replaceID) {
if (targetID == replaceID) return;
if (node.TileID != targetID) return;
node.TileID = replaceID;
SetImageAtCoord(node.X, node.Y);
if (node.Y + 1 != Map.Height) FloodFill(Map.TileInformation[node.X, node.Y + 1], targetID, replaceID);
if (node.Y - 1 != -1) FloodFill(Map.TileInformation[node.X, node.Y - 1], targetID, replaceID);
if (node.X - 1 != -1) FloodFill(Map.TileInformation[node.X - 1, node.Y], targetID, replaceID);
if (node.X + 1 != Map.Width) FloodFill(Map.TileInformation[node.X + 1, node.Y], targetID, replaceID);
}
Now, since I also tried a different method and got the same problem, I thought the issue might be in SetImageAtCoord():
private void SetImageAtCoord(int x, int y) {
IEnumerable<Rectangle> rectangles = MapViewer.Children.OfType<Rectangle>();
Map.TileInformation[x, y].TileID = CurrentTile;
foreach (Rectangle rect in rectangles) {
int X = (int)Canvas.GetLeft(rect) / 32;
int Y = (int)Canvas.GetTop(rect) / 32;
if (X == x && Y == y) {
rect.Fill = CurrentImage;
}
}
}
My only guess is that it either misses the tile on that pass (yet somehow makes it every other time), or it has something to do with the placement of the rectangle on the Canvas.
edit:
Here is the code for selecting what CurrentImage (and CurrentTile) are:
private void Sprite_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) {
Rectangle curr = (Rectangle)e.OriginalSource;
CurrentImage = (ImageBrush)curr.Fill;
int x = (int)Canvas.GetLeft(curr) / 32;
int y = (int)Canvas.GetTop(curr) / 32;
CurrentTile = y * Map.SpriteSheet.GetLength(0) + x;
}
and the code that calls FloodFill
if (CurrentContol == MapEditControl.Fill) {
Rectangle ClickedRectangle = (Rectangle)e.OriginalSource;
ClickedRectangle.Fill = CurrentImage;
int x = (int)Canvas.GetLeft(ClickedRectangle) / 32;
int y = (int)Canvas.GetTop(ClickedRectangle) / 32;
int oldTile = Map.TileInformation[x, y].TileID;
FloodFill(Map.TileInformation[x, y], oldTile, CurrentTile);
}
Map.TileInformation is really only used to store the ID for exporting to JSON; the actual rectangles aren't bound to it in any way (they probably should have been).

game programming in C#: sprite collision

I have a C# code snippet about sprite collision from C# game programming, and I hope you can help me clarify it.
I don't understand the IsCollided method, especially the calculation of d1 and d2 to determine whether the sprites collide, the meaning and use of Matrix Invert() and Multiply() in this case, and the use of the color's alpha component to determine the collision of the two sprites.
Thank you very much.
public struct Vector
{
public double X;
public double Y;
public Vector(double x, double y)
{
X = x;
Y = y;
}
public static Vector operator -(Vector v, Vector v2)
{
return new Vector(v.X-v2.X, v.Y-v2.Y);
}
public double Length
{
get
{
return Math.Sqrt(X * X + Y * Y);
}
}
}
public class Sprite
{
public Vector Position;
protected Image _Image;
protected Bitmap _Bitmap;
protected string _ImageFileName = "";
public string ImageFileName
{
get { return _ImageFileName; }
set
{
_ImageFileName = value;
_Image = Image.FromFile(value);
_Bitmap = new Bitmap(value);
}
}
public Matrix Transform
{
get
{
Vector v = Position;
if (null != _Image)
v -= new Vector(_Image.Size) / 2;
Matrix m = new Matrix();
m.RotateAt(50.0F, new PointF(10.0F, 100.0F));
m.Translate((float)v.X, (float)v.Y);
return m;
}
}
public bool IsCollided(Sprite s2)
{
Vector v = this.Position - s2.Position;
double d1 = Math.Sqrt(_Image.Width * _Image.Width + _Image.Height * _Image.Height)/2;
double d2 = Math.Sqrt(s2._Image.Width * s2._Image.Width + s2._Image.Height * s2._Image.Height)/2;
if (v.Length > d1 + d2)
return false;
Bitmap b = new Bitmap(_Image.Width, _Image.Height);
Graphics g = Graphics.FromImage(b);
Matrix m = s2.Transform;
Matrix m2 = Transform;
m2.Invert();
Matrix m3 = m2;
m3.Multiply(m);
g.Transform = m3;
Vector2F v2 = new Vector2F(0,0);
g.DrawImage(s2._Image, v2);
for (int x = 0; x < b.Width; ++x)
for (int y = 0; y < b.Height; ++y)
{
Color c1 = _Bitmap.GetPixel(x, y);
Color c2 = b.GetPixel(x, y);
if (c1.A > 0.5 && c2.A > 0.5)
return true;
}
return false;
}
}
It's a little convoluted. The two parts:
Part One (simple & fast)
The first part (involving v, d1 + d2) is probably confusing because, instead of doing collision tests using boxes, the dimensions of the image are used to construct bounding circles instead. A simple collision test is then carried out using these bounding circles. This is the 'quick n dirty' test that eliminates sprites that are clearly not colliding.
"Two circles overlap if the sum of there(sic) radii is greater than the distance between their centers. Therefore by Pythagoras we have a collision if:
(cx1-cx2)² + (cy1-cy2)² < (r1+r2)²"
See the "Bounding Circles" section of this link and the second link at the foot of this answer.
commented code:
// Get the vector between the two sprite centres
Vector v = this.Position - s2.Position;
// get the radius of a circle that will fit the first sprite
double d1 = Math.Sqrt(_Image.Width * _Image.Width + _Image.Height * _Image.Height)/2;
// get the radius of a circle that will fit the second sprite
double d2 = Math.Sqrt(s2._Image.Width * s2._Image.Width + s2._Image.Height * s2._Image.Height)/2;
// if the distance between the sprite centres is larger than the sum of the two radii, they do not collide
if (v.Length > d1 + d2)
return false;
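The same inequality from the quote can also be tested without the square roots used in the snippet (a small sketch, not taken from the original code):
// Broad-phase test: two circles overlap when the squared distance between
// their centres is smaller than the square of the sum of their radii.
static bool CirclesOverlap(double cx1, double cy1, double r1,
                           double cx2, double cy2, double r2)
{
    double dx = cx1 - cx2;
    double dy = cy1 - cy2;
    double rSum = r1 + r2;
    return dx * dx + dy * dy < rSum * rSum;
}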
Note: You may want to consider using an axis-aligned bounding box test instead of a circle here. If you have rectangles with disparate widths and heights, it will be a more effective/accurate first test.
Part Two (slower)
I haven't had time to check the code 100%, but the second part, having established in step one that the two sprites are potentially colliding, performs a more complicated test: a per-pixel collision test using the sprites' image sources, specifically their alpha channels. In games, the alpha channel is often used to store a representation of transparency: the larger the value, the more opaque the image (so the minimum value is 100% transparent and the maximum value is 100% opaque).
In the code shown, if pixels from the two sprites overlap and both have an alpha value above the threshold (Color.A is a byte from 0 to 255, so the > 0.5 check effectively means "not fully transparent"), the pixels share the same space and represent solid geometry, and are thus colliding. Doing the test this way means that transparent (read: invisible) parts of a sprite are not considered when testing for collisions, so you can have circular sprites that do not collide at the corners, etc.
Here's a link that covers this in a bit more detail.
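One practical note: Bitmap.GetPixel inside that double loop is very slow. The usual alternative is to lock the bitmap once and read the alpha bytes directly; a sketch, assuming 32-bits-per-pixel ARGB data:
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static byte[] ReadAlpha(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    try
    {
        // Copy the raw pixel bytes out in one go.
        byte[] raw = new byte[data.Stride * data.Height];
        Marshal.Copy(data.Scan0, raw, 0, raw.Length);

        // In Format32bppArgb the bytes per pixel are ordered B, G, R, A.
        byte[] alpha = new byte[bmp.Width * bmp.Height];
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                alpha[y * bmp.Width + x] = raw[y * data.Stride + x * 4 + 3];

        return alpha;
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
With both alpha maps extracted once, the inner loop of IsCollided becomes two array reads per pixel instead of two GetPixel calls.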
