Mandelbrot Fractal not working - c#

I tried making this Mandelbrot fractal generator, but when I run it, the output looks like a circle.
I'm not sure exactly why this happens. I think something may be wrong with my coloring, but even if so, the shape is also incorrect.
public static Bitmap Generate(
    int width,
    int height,
    double realMin,
    double realMax,
    double imaginaryMin,
    double imaginaryMax,
    int maxIterations,
    int bound)
{
    var bitmap = new FastBitmap(width, height);
    var planeWidth = Math.Abs(realMin) + Math.Abs(realMax); // Total width of the plane.
    var planeHeight = Math.Abs(imaginaryMin) + Math.Abs(imaginaryMax); // Total height of the plane.
    var realStep = planeWidth / width; // Amount to step by on the real axis per pixel.
    var imaginaryStep = planeHeight / height; // Amount to step by on the imaginary axis per pixel.
    var realScaling = width / planeWidth;
    var imaginaryScaling = height / planeHeight;
    var boundSquared = bound ^ 2;

    for (var real = realMin; real <= realMax; real += realStep) // Loop through the real axis.
    {
        for (var imaginary = imaginaryMin; imaginary <= imaginaryMax; imaginary += imaginaryStep) // Loop through the imaginary axis.
        {
            var z = Complex.Zero;
            var c = new Complex(real, imaginary);
            var iterations = 0;
            for (; iterations < maxIterations; iterations++)
            {
                z = z * z + c;
                if (z.Real * z.Real + z.Imaginary * z.Imaginary > boundSquared)
                {
                    break;
                }
            }

            if (iterations == maxIterations)
            {
                bitmap.SetPixel(
                    (int)((real + Math.Abs(realMin)) * realScaling),
                    (int)((imaginary + Math.Abs(imaginaryMin)) * imaginaryScaling),
                    Color.Black);
            }
            else
            {
                var nsmooth = iterations + 1 - Math.Log(Math.Log(Complex.Abs(z))) / Math.Log(2);
                var color = MathHelper.HsvToRgb(0.95f + 10 * nsmooth, 0.6, 1.0);
                bitmap.SetPixel(
                    (int)((real + Math.Abs(realMin)) * realScaling),
                    (int)((imaginary + Math.Abs(imaginaryMin)) * imaginaryScaling),
                    color);
            }
        }
    }
    return bitmap.Bitmap;
}

Here's one error:
var boundSquared = bound ^ 2;
This should be:
var boundSquared = bound * bound;
In C#, the ^ operator is bitwise XOR, not exponentiation, so bound ^ 2 does not square bound.
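A second thing worth checking: Math.Abs(realMin) + Math.Abs(realMax) only equals the true range realMax - realMin when the range straddles zero, and stepping through the plane by accumulated floating-point increments can skip or double-cover pixels. A minimal sketch of a pixel-indexed loop instead (same FastBitmap/Complex types and escape test as the question; an untested suggestion, not a drop-in fix):

// Sketch: iterate over pixels and map each one to the complex plane,
// instead of stepping through the plane and mapping back to pixels.
for (var px = 0; px < width; px++)
{
    for (var py = 0; py < height; py++)
    {
        var c = new Complex(
            realMin + px * (realMax - realMin) / width,
            imaginaryMin + py * (imaginaryMax - imaginaryMin) / height);
        var z = Complex.Zero;
        var iterations = 0;
        while (iterations < maxIterations
               && z.Real * z.Real + z.Imaginary * z.Imaginary <= boundSquared)
        {
            z = z * z + c;
            iterations++;
        }
        // Colour the pixel at (px, py) from 'iterations' exactly as before.
    }
}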


Procedural Island Terrain Generation

Edit: Rewrote my question after trying a few things and made it more specific.
Hi, so I'm creating a mobile RTS game with procedurally generated maps. I've worked out how to create a terrain with basic Perlin noise on it, and tried to integrate the method from https://gamedev.stackexchange.com/questions/54276/a-simple-method-to-create-island-map-mask to create an island procedurally. This is the result so far:
The image below, from http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/, shows the kind of terrain I'm after. The tutorial there is great, but implementing it fully would be too intensive, hence this post.
I want the Random Shaped island with Perlin noise generated land mass, surrounded by water.
edit: Basic Perlin terrain gen working now =)
Here is my code, a script attached to an empty GameObject with a button that calls Begin():
using UnityEngine;
using System.Collections;
using System.Runtime.InteropServices;

public class Gen_Perlin : MonoBehaviour {

    public float Tiling = 0.5f;
    private bool active = false;
    public int mapHeight = 10;

    public void Begin()
    {
        if (active == false) {
            TerrainData terrainData = new TerrainData ();
            const int size = 513;
            terrainData.heightmapResolution = size;
            terrainData.size = new Vector3 (2000, mapHeight, 2000);
            terrainData.heightmapResolution = 513;
            terrainData.baseMapResolution = 1024;
            terrainData.SetDetailResolution (1024, 1024);

            Terrain.CreateTerrainGameObject (terrainData);
            GameObject obj = GameObject.Find ("Terrain");
            obj.transform.parent = this.transform;
            if (obj.GetComponent<Terrain> ()) {
                GenerateHeights (obj.GetComponent<Terrain> (), Tiling);
            }
        } else {
            GameObject obj = GameObject.Find ("Terrain");
            if (obj.GetComponent<Terrain> ()) {
                GenerateHeights (obj.GetComponent<Terrain> (), Tiling);
            }
        }
    }

    public void GenerateHeights(Terrain terrain, float tileSize)
    {
        Debug.Log ("Start_Height_Gen");
        float[,] heights = new float[terrain.terrainData.heightmapWidth, terrain.terrainData.heightmapHeight];

        for (int i = 0; i < terrain.terrainData.heightmapWidth; i++)
        {
            for (int k = 0; k < terrain.terrainData.heightmapHeight; k++)
            {
                heights[i, k] = 0.25f + Mathf.PerlinNoise(((float)i / (float)terrain.terrainData.heightmapWidth) * tileSize, ((float)k / (float)terrain.terrainData.heightmapHeight) * tileSize);
                heights[i, k] *= makeMask( terrain.terrainData.heightmapWidth, terrain.terrainData.heightmapHeight, i, k, heights[i, k] );
            }
        }

        terrain.terrainData.SetHeights(0, 0, heights);
    }

    public static float makeMask( int width, int height, int posX, int posY, float oldValue ) {
        int minVal = ( ( ( height + width ) / 2 ) / 100 * 2 );
        int maxVal = ( ( ( height + width ) / 2 ) / 100 * 10 );
        if( getDistanceToEdge( posX, posY, width, height ) <= minVal ) {
            return 0;
        } else if( getDistanceToEdge( posX, posY, width, height ) >= maxVal ) {
            return oldValue;
        } else {
            float factor = getFactor( getDistanceToEdge( posX, posY, width, height ), minVal, maxVal );
            return oldValue * factor;
        }
    }

    private static float getFactor( int val, int min, int max ) {
        int full = max - min;
        int part = val - min;
        float factor = (float)part / (float)full;
        return factor;
    }

    public static int getDistanceToEdge( int x, int y, int width, int height ) {
        int[] distances = new int[]{ y, x, ( width - x ), ( height - y ) };
        int min = distances[ 0 ];
        foreach( var val in distances ) {
            if( val < min ) {
                min = val;
            }
        }
        return min;
    }
}
Yeah, the article in question uses a far more complex method than you need.
The best way of doing this is to take a function that represents the shape of your basic island, with height values between 0 and 1. For the type of island in the picture, you basically want something which smoothly rises from the edges, and smoothly dips back to zero where you want lakes.
Now you either add that surface to your basic fractal surface (if you want to preserve spikiness at low elevations) or multiply by it (if you want lower elevations to be smooth). Then you define a height below which is water.
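Here is a rough illustration of those two combine modes as a hedged sketch (not code from the article; the 0-to-1 coordinate convention and the parabolic falloff are my own assumptions):

static float IslandHeight(float x, float y, float noise, bool multiply)
{
    // x, y are normalised to [0, 1]; noise is a fractal/Perlin sample in [0, 1].
    float dx = x - 0.5f, dy = y - 0.5f;
    // dist is 0 at the map centre and 1 at the nearest edge midpoint.
    float dist = Mathf.Sqrt(dx * dx + dy * dy) * 2f;
    // Mask falls smoothly from 1 at the centre to 0 at the edge.
    float mask = Mathf.Max(0f, 1f - dist * dist);
    return multiply
        ? noise * mask              // low elevations come out smooth
        : 0.5f * (noise + mask);    // spikiness preserved, rescaled to [0, 1]
}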
Here is my very quick go at doing this, rendered with Terragen:
I used a function that rises in a ring from the edge of the map to halfway to the middle, then drops again, to match a similar shape to the one from the article. In practice, you might only use this to get the shape of the island, and then carve the bit of terrain that matches the contour, and bury everything else.
I used my own fractal landscape generator as described here: https://fractal-landscapes.co.uk for the basic fractal.
Here is the C# code that modifies the landscape:
public void MakeRingIsland()
{
    this.Normalize(32768);
    var ld2 = (double) linearDimension / 2;
    var ld4 = 4 / (double) linearDimension;
    for (var y = 0u; y < linearDimension; y++)
    {
        var yMul = y * linearDimension;
        for (var x = 0u; x < linearDimension; x++)
        {
            var yCoord = (y - ld2) * ld4;
            var xCoord = (x - ld2) * ld4;
            var dist = Math.Sqrt(xCoord * xCoord + yCoord * yCoord);
            var htMul = dist > 2 ? 0 :
                (dist < 1 ?
                    dist + dist - dist * dist :
                    1 - (dist - 1) * (dist - 1));
            var height = samples[x + yMul];
            samples[x + yMul] = (int) (height + htMul * 32768);
        }
    }
}
The image you are showing comes from an article describing how to generate it.

Hough with AForge on Kinect. How can i get the right lines?

I want to use my Kinect to recognise certain objects. One of the methods I want to use is the Hough transformation, and to do this I am using the AForge library.
But the lines that I find are totally wrong: important ones are missing and there are many useless lines in the picture.
To take care of this problem I thought:
First I detect the 100 most intense lines via Hough, to be sure that all the correct lines are among them.
Then I write all points from edge detection into a list, and for each detected line I check whether at least X of those points lie on it (I used 15, 50 and 150), but nonetheless my results are bad.
Have you got any idea how I can find the right lines before drawing them into the picture and analysing the geometry further? Or maybe there is just a grave mistake in my code that I don't see.
SobelEdgeDetector Kante = new SobelEdgeDetector();
System.Drawing.Bitmap neu2 = Kante.Apply(neu1);
neu2.Save("test.png");

// collect all edge-point candidates in a list
for (int a = 0; a < 320; a++)
{
    for (int b = 0; b < 240; b++)
    {
        color = neu2.GetPixel(a, b);
        if ((color.R + color.G + color.B) / 3 >= 50)
        {
            Kantenpunkte.Add(new System.Drawing.Point(a, b));
        }
    }
}

Bitmap Hough = new Bitmap(320, 240);
Hough.Save("C:\\Users\\Nikolas Rieble\\Desktop\\Hough.png");

// Hough transform
HoughLineTransformation lineTransform = new HoughLineTransformation();
// apply Hough line transform
lineTransform.ProcessImage(neu2);
Bitmap houghLineImage = lineTransform.ToBitmap();
houghLineImage.Save("1.png");

// get most intensive lines
HoughLine[] lines = lineTransform.GetMostIntensiveLines(100);
UnmanagedImage fertig = UnmanagedImage.FromManagedImage(neu2);
foreach (HoughLine line in lines)
{
    // get line's radius and theta values
    int r = line.Radius;
    double t = line.Theta;

    // check if line is in lower part of the image
    if (r < 0)
    {
        t += 180;
        r = -r;
    }

    // convert degrees to radians
    t = (t / 180) * Math.PI;

    // get image centers (all coordinates are measured relative to center)
    int w2 = neu2.Width / 2;
    int h2 = neu2.Height / 2;
    double x0 = 0, x1 = 0, y0 = 0, y1 = 0;

    if (line.Theta != 0)
    {
        // non-vertical line
        x0 = -w2; // leftmost point
        x1 = w2;  // rightmost point

        // calculate corresponding y values
        y0 = (-Math.Cos(t) * x0 + r) / Math.Sin(t);
        y1 = (-Math.Cos(t) * x1 + r) / Math.Sin(t);
    }
    else
    {
        // vertical line
        x0 = line.Radius;
        x1 = line.Radius;
        y0 = h2;
        y1 = -h2;
    }

    // count number of detected edge points that are on this line
    int a = 0;
    foreach (System.Drawing.Point p in Kantenpunkte)
    {
        double m1 = ((double)p.Y - y0) / ((double)p.X - x0);
        double m2 = (y0 - y1) / (x0 - x1);
        if (m1 - m2 < 0.0001)
        {
            a = a + 1;
        }
    }

    // only draw lines which cover at least A points
    if (a > 150)
    {
        AForge.Imaging.Drawing.Line(fertig,
            new IntPoint((int)x0 + w2, h2 - (int)y0),
            new IntPoint((int)x1 + w2, h2 - (int)y1),
            System.Drawing.Color.Red);
    }
}
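Two things in the point-counting loop look suspicious: m1 - m2 < 0.0001 passes whenever m1 is much smaller than m2 (it needs Math.Abs), and both slopes blow up for near-vertical lines. A perpendicular point-to-line distance test avoids both problems; a minimal sketch, assuming the same (x0, y0)-(x1, y1) endpoints as above (note the points in Kantenpunkte are in image coordinates while those endpoints are centre-relative, so one of the two needs shifting first):

int CountPointsNearLine(List<System.Drawing.Point> points,
                        double x0, double y0, double x1, double y1,
                        double tolerance)
{
    double dx = x1 - x0, dy = y1 - y0;
    double length = Math.Sqrt(dx * dx + dy * dy);
    int count = 0;
    foreach (System.Drawing.Point p in points)
    {
        // |cross product| / |direction| = perpendicular distance to the line.
        double dist = Math.Abs(dy * (p.X - x0) - dx * (p.Y - y0)) / length;
        if (dist <= tolerance)
        {
            count++;
        }
    }
    return count;
}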

Random elements (coordinates)

I have to do something like this.
When I click on a node, it expands, and this is OK (I am using Powercharts to do it).
My big problem is creating random coordinates so that when I open the subnode, it doesn't overlap with another node/subnode.
In Powercharts I have to pass the coordinates in, so the big problem is coming up with them.
I have to do the random coordinates in C#.
//-------------------------------------------------------
This is what I did so far:
It is not overlapping, but I have a problem:
how can I start drawing the circles from a starting point?
For example, start in the middle (300, 300) and then place circles around it. Is that possible?
private void button2_Click(object sender, EventArgs e)
{
    g = pictureBox1.CreateGraphics();
    g.Clear(pictureBox1.BackColor);
    double angle;

    Circle item0 = new Circle();
    item0.x = 200;
    item0.y = 150;
    item0.r = 50;
    listaCirculos.Add(item0);

    Random randomMember = new Random();
    g.DrawEllipse(pen1, 200, 150, 50, 50);

    while (listaCirculos.Count != 11)
    {
        int[] it = GenerateNewCircle(600);
        Circle item = new Circle();
        item.x = it[0];
        item.y = it[1];
        item.r = 50;
        if (circleIsAllowed(listaCirculos, item))
        {
            listaCirculos.Add(item);
            g.DrawEllipse(pen1, Convert.ToInt32(item.x), Convert.ToInt32(item.y), 50, 50);
        }
    }
}

bool circleIsAllowed(List<Circle> circles, Circle newCircle)
{
    foreach (Circle it in circles)
    {
        //double sumR = it.x + newCircle.r;
        //double dx = it.x - newCircle.x;
        //double dy = it.y - newCircle.y;
        //double squaredDist = dx * dx + dy * dy;

        double aX = Math.Pow(it.x - newCircle.x, 2);
        double aY = Math.Pow(it.y - newCircle.y, 2);
        double Dif = Math.Abs(aX - aY);
        double ra1 = it.r / 2;
        double ra2 = it.r / 2;
        double raDif = Math.Pow(ra1 + ra2, 2);
        if ((raDif + 1) > Dif) return false;

        //if (squaredDist < sumR*sumR) return false;
    }
    return true; // no existing circle overlaps
}

public int[] GenerateNewCircle(int maxSize)
{
    int x, y;
    Random randomMember = new Random();
    x = randomMember.Next(0, maxSize);
    if (x - 50 < 0)
        y = randomMember.Next(x + 50, maxSize);
    else if (x + 50 > 600)
        y = randomMember.Next(0, x - 50);
    else
    {
        // in this case, x splits the range 0..n into 2 subranges.
        // get a random number and skip the "gap" if necessary
        y = randomMember.Next(0, maxSize - 50);
    }
    if (y > x - 50)
    {
        y += 20;
    }
    int[] abc = new int[2];
    abc[0] = x;
    abc[1] = y;
    return abc;
}
Size sizeShape = new Size(someWidth, someHeight); // placeholder dimensions
List<Rectangle> rects = new List<Rectangle>();
Random random = new Random();

while (rects.Count != someCount) // placeholder count
{
    Point rPoint = new Point(random.Next(500), random.Next(500));
    bool isNew = true;
    foreach (Rectangle r in rects)
    {
        if (r.Contains(rPoint))
            isNew = false;
    }
    if (isNew)
        rects.Add(GetRect(rPoint, sizeShape));
}

private Rectangle GetRect(Point p, Size s)
{
    return new Rectangle(p, s);
}
After that you can use rects; it holds shapes at random coordinates.
Here's the link to let you know how to create random numbers in C#.
You have to keep in mind that you will generate random numbers, but you also have to make sure the generated numbers don't make the objects overlap.
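A minimal sketch of that overlap guard using rejection sampling (the canvas size, shape size and count are placeholders; Rectangle.IntersectsWith catches partial overlap, not just containment of a corner point):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

// Keep drawing random positions until one does not collide with anything
// placed so far; fine for small counts on a large canvas.
var random = new Random();
var placed = new List<Rectangle>();
var size = new Size(50, 50);   // placeholder shape size

while (placed.Count < 10)      // placeholder count
{
    var candidate = new Rectangle(new Point(random.Next(500), random.Next(500)), size);
    if (!placed.Any(r => r.IntersectsWith(candidate)))
        placed.Add(candidate);
}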
Despite this being pretty poorly worded, I believe what you are looking for is the Random class. It is instantiated as such:
Random thisRandom = new Random();
Then from there, if you want to generate a random coordinate, you need to know the maximum possible X and Y coordinates. Knowing these will allow you to generate X and Y coordinates within the given canvas as such:
int maxX = 500; //This is the maximum X value on your canvas
int maxY = 500; //This is the maximum Y value on your canvas
int newX, newY;
newX = thisRandom.Next(maxX);
newY = thisRandom.Next(maxY);
While it's not the absolute best in terms of "true" randomization, this should give you what you need.

How to find rectangle of difference between two images

I have two images of the same size. What is the best way to find the rectangle in which they differ? Obviously I could go through the image four times in different directions, but I'm wondering if there's an easier way.
Example:
A naive approach would be to start at the origin and work line by line, column by column. Compare each pixel, keeping note of the topmost, leftmost, rightmost, and bottommost differences, from which you can calculate your rectangle. There will be cases where this single-pass approach is faster (e.g. where the differing area is very small).
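A minimal sketch of that single pass (GetPixel keeps the idea clear but is slow; the LockBits answer further down shows the fast way to read pixels):

Rectangle GetDiffBounds(Bitmap a, Bitmap b)
{
    // Assumes both bitmaps have the same dimensions.
    int left = int.MaxValue, top = int.MaxValue, right = int.MinValue, bottom = int.MinValue;
    for (int y = 0; y < a.Height; y++)
        for (int x = 0; x < a.Width; x++)
            if (a.GetPixel(x, y) != b.GetPixel(x, y))
            {
                left = Math.Min(left, x);
                right = Math.Max(right, x);
                top = Math.Min(top, y);
                bottom = Math.Max(bottom, y);
            }
    // Empty rectangle if the images are identical.
    return left > right ? Rectangle.Empty
                        : Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
}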
If you want a single rectangle, use int.MaxValue for the threshold.
var diff = new ImageDiffUtil(filename1, filename2);
var diffRectangles = diff.GetDiffRectangles(int.MaxValue);
If you want multiple rectangles, use a smaller threshold.
var diff = new ImageDiffUtil(filename1, filename2);
var diffRectangles = diff.GetDiffRectangles(8);
ImageDiffUtil.cs
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

namespace diff_images
{
    public class ImageDiffUtil
    {
        Bitmap image1;
        Bitmap image2;

        public ImageDiffUtil(string filename1, string filename2)
        {
            image1 = Image.FromFile(filename1) as Bitmap;
            image2 = Image.FromFile(filename2) as Bitmap;
        }

        public IList<Point> GetDiffPixels()
        {
            var widthRange = Enumerable.Range(0, image1.Width);
            var heightRange = Enumerable.Range(0, image1.Height);

            var result = widthRange
                .SelectMany(x => heightRange, (x, y) => new Point(x, y))
                .Select(point => new
                {
                    Point = point,
                    Pixel1 = image1.GetPixel(point.X, point.Y),
                    Pixel2 = image2.GetPixel(point.X, point.Y)
                })
                .Where(pair => pair.Pixel1 != pair.Pixel2)
                .Select(pair => pair.Point)
                .ToList();

            return result;
        }

        public IEnumerable<Rectangle> GetDiffRectangles(double distanceThreshold)
        {
            var result = new List<Rectangle>();
            var differentPixels = GetDiffPixels();

            while (differentPixels.Count > 0)
            {
                var cluster = new List<Point>()
                {
                    differentPixels[0]
                };
                differentPixels.RemoveAt(0);

                while (true)
                {
                    var left = cluster.Min(p => p.X);
                    var right = cluster.Max(p => p.X);
                    var top = cluster.Min(p => p.Y);
                    var bottom = cluster.Max(p => p.Y);
                    var width = Math.Max(right - left, 1);
                    var height = Math.Max(bottom - top, 1);
                    var clusterBox = new Rectangle(left, top, width, height);

                    var proximal = differentPixels
                        .Where(point => GetDistance(clusterBox, point) <= distanceThreshold)
                        .ToList();
                    proximal.ForEach(point => differentPixels.Remove(point));

                    if (proximal.Count == 0)
                    {
                        result.Add(clusterBox);
                        break;
                    }
                    else
                    {
                        cluster.AddRange(proximal);
                    }
                }
            }

            return result;
        }

        static double GetDistance(Rectangle rect, Point p)
        {
            var dx = Math.Max(rect.Left - p.X, 0);
            dx = Math.Max(dx, p.X - rect.Right);
            var dy = Math.Max(rect.Top - p.Y, 0);
            dy = Math.Max(dy, p.Y - rect.Bottom);
            return Math.Sqrt(dx * dx + dy * dy);
        }
    }
}
Form1.cs
using System.Drawing;
using System.Linq;
using System.Windows.Forms;

namespace diff_images
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            var filename1 = @"Gelatin1.PNG";
            var filename2 = @"Gelatin2.PNG";

            var diff = new ImageDiffUtil(filename1, filename2);
            var diffRectangles = diff.GetDiffRectangles(8);

            var img3 = Image.FromFile(filename2);
            Pen redPen = new Pen(Color.Red, 1);
            var padding = 3;
            using (var graphics = Graphics.FromImage(img3))
            {
                diffRectangles
                    .ToList()
                    .ForEach(rect =>
                    {
                        var largerRect = new Rectangle(rect.X - padding, rect.Y - padding, rect.Width + padding * 2, rect.Height + padding * 2);
                        graphics.DrawRectangle(redPen, largerRect);
                    });
            }

            var pb1 = new PictureBox()
            {
                Image = Image.FromFile(filename1),
                Left = 8,
                Top = 8,
                SizeMode = PictureBoxSizeMode.AutoSize
            };
            var pb2 = new PictureBox()
            {
                Image = Image.FromFile(filename2),
                Left = pb1.Left + pb1.Width + 16,
                Top = 8,
                SizeMode = PictureBoxSizeMode.AutoSize
            };
            var pb3 = new PictureBox()
            {
                Image = img3,
                Left = pb2.Left + pb2.Width + 16,
                Top = 8,
                SizeMode = PictureBoxSizeMode.AutoSize
            };
            Controls.Add(pb1);
            Controls.Add(pb2);
            Controls.Add(pb3);
        }
    }
}
Image processing like this is expensive; there are a lot of bits to look at. In real applications, you almost always need to filter the image to get rid of artifacts induced by imperfect image captures.
A common library used for this kind of bit whacking is OpenCV, it takes advantage of dedicated CPU instructions available to make this fast. There are several .NET wrappers available for it, Emgu is one of them.
I don't think there is an easier way.
In fact, doing this will take just a (very) few lines of code, so unless you find a library that does it for you directly, you won't find a shorter way.
Idea:
Consider an image as a 2D array, with each array element being a pixel of the image. Image differencing is then nothing but 2D array differencing.
The idea is to scan through the array elements width-wise and find the places where the pixel values differ. If the values at the same [x, y] coordinates of the two 2D arrays are different, our rectangle-finding logic starts. Later on the rectangles are used to patch the last updated frame buffer.
We scan along the boundaries of each rectangle for differences, and if any difference is found on the boundary, the boundary is grown width-wise or height-wise depending on the type of scan made.
Say I scanned the 2D arrays width-wise and found a location where the two arrays hold different values: I create a rectangle with starting position [x-1, y-1] and with width and height of 2 and 2 respectively. Please note that width and height refer to the number of pixels.
e.g. Rect info:
X = 20
Y = 35
W = 26
H = 23
i.e. the width of the rectangle runs from coordinate [20, 35] to [20, 35 + 26 - 1]. The code below should make this clearer.
Also, there may be smaller rectangles inside a bigger rectangle you have found; those need to be removed from our list, because they mean nothing to us except wasted space.
This logic is useful in a VNC server implementation, where you need rectangles denoting the differences in the currently captured image. Those rectangles can be sent over the network to the VNC client, which patches them into its local copy of the frame buffer, thereby updating the VNC client display.
P.S.:
I am attaching the code in which I implemented my own algorithm. I would ask readers to comment on any mistakes or possible performance tuning, and to suggest any better algorithm that would make life simpler.
Code:
Class Rect:
public class Rect {
    public int x; // Array index
    public int y; // Array index
    public int w; // Number of hops along the horizontal
    public int h; // Number of hops along the vertical

    @Override
    public boolean equals(Object obj) {
        Rect rect = (Rect) obj;
        if (rect.x == this.x && rect.y == this.y && rect.w == this.w && rect.h == this.h) {
            return true;
        }
        return false;
    }
}
Class ImageDifference:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.LinkedList;
import javax.imageio.ImageIO;

public class ImageDifference {
    long start = 0, end = 0;

    public LinkedList<Rect> differenceImage(int[][] baseFrame, int[][] screenShot, int xOffset, int yOffset, int width, int height) {
        int xRover = 0;
        int yRover = 0;
        int index = 0;
        int limit = 0;
        int rover = 0;
        boolean isRectChanged = false;
        boolean shouldSkip = false;
        LinkedList<Rect> rectangles = new LinkedList<Rect>();
        Rect rect = null;
        start = System.nanoTime();

        // xRover - roves over the height of the 2D array
        // yRover - roves over the width of the 2D array
        int verticalLimit = xOffset + height;
        int horizontalLimit = yOffset + width;

        for (xRover = xOffset; xRover < verticalLimit; xRover += 1) {
            for (yRover = yOffset; yRover < horizontalLimit; yRover += 1) {
                if (baseFrame[xRover][yRover] != screenShot[xRover][yRover]) {

                    // Skip over the already processed rectangles
                    for (Rect itrRect : rectangles) {
                        if (((xRover < itrRect.x + itrRect.h) && (xRover >= itrRect.x)) && ((yRover < itrRect.y + itrRect.w) && (yRover >= itrRect.y))) {
                            shouldSkip = true;
                            yRover = itrRect.y + itrRect.w - 1;
                            break;
                        }
                    }

                    if (shouldSkip) {
                        shouldSkip = false;
                        // Need to fall out of the difference check below, hence the "continue"
                        continue;
                    }

                    rect = new Rect();
                    rect.x = ((xRover - 1) < xOffset) ? xOffset : (xRover - 1);
                    rect.y = ((yRover - 1) < yOffset) ? yOffset : (yRover - 1);
                    rect.w = 2;
                    rect.h = 2;

                    /* Boolean variable used to re-scan the currently found rectangle
                       for any change due to previous scanning of boundaries */
                    isRectChanged = true;

                    while (isRectChanged) {
                        isRectChanged = false;
                        index = 0;

                        /* I */
                        /* Scanning of left-side boundary of rectangle */
                        index = rect.x;
                        limit = rect.x + rect.h;
                        while (index < limit && rect.y != yOffset) {
                            if (baseFrame[index][rect.y] != screenShot[index][rect.y]) {
                                isRectChanged = true;
                                rect.y = rect.y - 1;
                                rect.w = rect.w + 1;
                                index = rect.x;
                                continue;
                            }
                            index = index + 1;
                        }

                        /* II */
                        /* Scanning of bottom boundary of rectangle */
                        index = rect.y;
                        limit = rect.y + rect.w;
                        while ((index < limit) && (rect.x + rect.h != verticalLimit)) {
                            rover = rect.x + rect.h - 1;
                            if (baseFrame[rover][index] != screenShot[rover][index]) {
                                isRectChanged = true;
                                rect.h = rect.h + 1;
                                index = rect.y;
                                continue;
                            }
                            index = index + 1;
                        }

                        /* III */
                        /* Scanning of right-side boundary of rectangle */
                        index = rect.x;
                        limit = rect.x + rect.h;
                        while ((index < limit) && (rect.y + rect.w != horizontalLimit)) {
                            rover = rect.y + rect.w - 1;
                            if (baseFrame[index][rover] != screenShot[index][rover]) {
                                isRectChanged = true;
                                rect.w = rect.w + 1;
                                index = rect.x;
                                continue;
                            }
                            index = index + 1;
                        }
                    }

                    // Remove those rectangles that lie inside the "rect" rectangle.
                    int idx = 0;
                    while (idx < rectangles.size()) {
                        Rect r = rectangles.get(idx);
                        if (((rect.x <= r.x) && (rect.x + rect.h >= r.x + r.h)) && ((rect.y <= r.y) && (rect.y + rect.w >= r.y + r.w))) {
                            rectangles.remove(r);
                        } else {
                            idx += 1;
                        }
                    }

                    // Giving a head start to the yRover when a rectangle is found
                    rectangles.addFirst(rect);
                    yRover = rect.y + rect.w - 1;
                    rect = null;
                }
            }
        }

        end = System.nanoTime();
        return rectangles;
    }

    public static void main(String[] args) throws IOException {
        LinkedList<Rect> rectangles = null;

        // Buffering the base image and the screenshot image
        BufferedImage screenShotImg = ImageIO.read(new File("screenShotImg.png"));
        BufferedImage baseImg = ImageIO.read(new File("baseImg.png"));

        int width = baseImg.getWidth();
        int height = baseImg.getHeight();
        int xOffset = 0;
        int yOffset = 0;
        int length = baseImg.getWidth() * baseImg.getHeight();

        // Creating 2 two-dimensional arrays for image processing
        int[][] baseFrame = new int[height][width];
        int[][] screenShot = new int[height][width];

        // Creating 2 one-dimensional arrays to retrieve the pixel values
        int[] baseImgPix = new int[length];
        int[] screenShotImgPix = new int[length];

        // Reading the pixels from the buffered images
        baseImg.getRGB(0, 0, baseImg.getWidth(), baseImg.getHeight(), baseImgPix, 0, baseImg.getWidth());
        screenShotImg.getRGB(0, 0, screenShotImg.getWidth(), screenShotImg.getHeight(), screenShotImgPix, 0, screenShotImg.getWidth());

        // Copying the one-dimensional arrays into the two-dimensional arrays
        long start = System.nanoTime();
        for (int row = 0; row < height; row++) {
            System.arraycopy(baseImgPix, (row * width), baseFrame[row], 0, width);
            System.arraycopy(screenShotImgPix, (row * width), screenShot[row], 0, width);
        }
        long end = System.nanoTime();
        System.out.println("Array Copy : " + ((double)(end - start) / 1000000));

        // Finding differences between the base image and the screenshot image
        ImageDifference imDiff = new ImageDifference();
        rectangles = imDiff.differenceImage(baseFrame, screenShot, xOffset, yOffset, width, height);

        // Displaying the rectangles found
        int index = 0;
        for (Rect rect : rectangles) {
            System.out.println("\nRect info : " + (++index));
            System.out.println("X : " + rect.x);
            System.out.println("Y : " + rect.y);
            System.out.println("W : " + rect.w);
            System.out.println("H : " + rect.h);

            // Creating bounding box
            for (int i = rect.y; i < rect.y + rect.w; i++) {
                screenShotImgPix[(rect.x * width) + i] = 0xFFFF0000;
                screenShotImgPix[((rect.x + rect.h - 1) * width) + i] = 0xFFFF0000;
            }
            for (int j = rect.x; j < rect.x + rect.h; j++) {
                screenShotImgPix[(j * width) + rect.y] = 0xFFFF0000;
                screenShotImgPix[(j * width) + (rect.y + rect.w - 1)] = 0xFFFF0000;
            }
        }

        // Creating the resultant image
        screenShotImg.setRGB(0, 0, width, height, screenShotImgPix, 0, width);
        ImageIO.write(screenShotImg, "PNG", new File("result.png"));

        double d = ((double)(imDiff.end - imDiff.start) / 1000000);
        System.out.println("\nTotal Time : " + d + " ms" + " Array Copy : " + ((double)(end - start) / 1000000) + " ms");
    }
}
Description:
There is a function named
public LinkedList<Rect> differenceImage(int[][] baseFrame, int[][] screenShot, int xOffset, int yOffset, int width, int height)
which does the job of finding differences between the images and returns a linked list of objects. The objects are nothing but the rectangles.
There is a main function which does the job of testing the algorithm.
Two sample images are passed into the code in the main function; they are nothing but the "baseFrame" and "screenShot", and the resultant image produced is named "result".
I don't have enough reputation to post the resultant image, which would be very interesting.
There is a blog post, Image Difference, which shows the output.
I don't think there can be anything better than exhaustively searching from each side in turn for the first point of difference in that direction. Unless, that is, you know a fact that in some way constrains the set of points of difference.
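For illustration, here is what that directional search could look like for one side (a hedged sketch; the other three edges are symmetric, and each scan stops at the first difference it meets):

int FindTopEdge(Bitmap a, Bitmap b)
{
    // Scan rows top-down; the first row containing any difference is the top edge.
    for (int y = 0; y < a.Height; y++)
        for (int x = 0; x < a.Width; x++)
            if (a.GetPixel(x, y) != b.GetPixel(x, y))
                return y;
    return -1; // images are identical
}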
So here comes the easy way, if you know how to use LockBits :)
Bitmap originalBMP = new Bitmap(pictureBox1.ImageLocation);
Bitmap changedBMP = new Bitmap(pictureBox2.ImageLocation);

int width = Math.Min(originalBMP.Width, changedBMP.Width),
    height = Math.Min(originalBMP.Height, changedBMP.Height),
    xMin = int.MaxValue,
    xMax = int.MinValue,
    yMin = int.MaxValue,
    yMax = int.MinValue;

// Note: the index arithmetic below assumes a 32-bit (4 bytes per pixel) format.
var originalLock = originalBMP.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, originalBMP.PixelFormat);
var changedLock = changedBMP.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, changedBMP.PixelFormat);

for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // generate the address of the colour pixel
        int pixelIdxOrg = y * originalLock.Stride + (x * 4);
        int pixelIdxCh = y * changedLock.Stride + (x * 4);
        // compare the R, G and B bytes of both pixels
        if ((Marshal.ReadByte(originalLock.Scan0, pixelIdxOrg + 2) != Marshal.ReadByte(changedLock.Scan0, pixelIdxCh + 2))
            || (Marshal.ReadByte(originalLock.Scan0, pixelIdxOrg + 1) != Marshal.ReadByte(changedLock.Scan0, pixelIdxCh + 1))
            || (Marshal.ReadByte(originalLock.Scan0, pixelIdxOrg) != Marshal.ReadByte(changedLock.Scan0, pixelIdxCh)))
        {
            xMin = Math.Min(xMin, x);
            xMax = Math.Max(xMax, x);
            yMin = Math.Min(yMin, y);
            yMax = Math.Max(yMax, y);
        }
    }
}

originalBMP.UnlockBits(originalLock);
changedBMP.UnlockBits(changedLock);

// +1 so the bottom/right difference rows are included; note this throws if the
// images are identical (xMin etc. keep their sentinel values in that case).
var result = changedBMP.Clone(new Rectangle(xMin, yMin, xMax - xMin + 1, yMax - yMin + 1), changedBMP.PixelFormat);
pictureBox3.Image = result;
Disclaimer: it looks like your two pictures contain more differences than we can see with the naked eye, so the result will be wider than you expect, but you can add a tolerance so it will fit even if the rest isn't 100% identical (see the sketch below).
To speed things up, you may be able to use Parallel.For, but do it only for the outer loop.
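A tolerance could look something like this (a hedged sketch; the threshold of 16 is an arbitrary illustration, and it would replace the exact != byte comparisons in the loop above):

static bool PixelsDiffer(byte r1, byte g1, byte b1,
                         byte r2, byte g2, byte b2,
                         int tolerance = 16)
{
    // Treat pixels as equal unless some channel differs by more than 'tolerance'.
    return Math.Abs(r1 - r2) > tolerance
        || Math.Abs(g1 - g2) > tolerance
        || Math.Abs(b1 - b2) > tolerance;
}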

Deskew scanned images

I am working on an OMR project and we are using C#. When we scan the answer sheets, the images come out skewed. How can we deskew them?
VB.NET code for this is available here; however, since you asked for C#, here is a C# translation of their Deskew class. (Note: Binarize (strictly not necessary, but it works much better) and Rotate are left as exercises for the reader.)
public class Deskew
{
    // Representation of a line in the image.
    private class HougLine
    {
        // Count of points in the line.
        public int Count;
        // Index in Matrix.
        public int Index;
        // The line is represented as all x,y that solve y*cos(alpha)-x*sin(alpha)=d
        public double Alpha;
    }

    // The bitmap
    Bitmap _internalBmp;

    // The range of angles to search for lines
    const double ALPHA_START = -20;
    const double ALPHA_STEP = 0.2;
    const int STEPS = 40 * 5;
    const double STEP = 1;

    // Precalculation of sin and cos.
    double[] _sinA;
    double[] _cosA;

    // Range of d
    double _min;
    int _count;

    // Count of points that fit in a line.
    int[] _hMatrix;

    public Bitmap DeskewImage(Bitmap image, int type, int binarizeThreshold)
    {
        Size oldSize = image.Size;
        _internalBmp = BitmapFunctions.Resize(image, new Size(1000, 1000), true, image.PixelFormat);
        Binarize(_internalBmp, binarizeThreshold);
        return Rotate(image, GetSkewAngle());
    }

    // Calculate the skew angle of the image cBmp.
    private double GetSkewAngle()
    {
        // Hough transformation
        Calc();

        // Top 20 of the detected lines in the image.
        HougLine[] hl = GetTop(20);

        // Average angle of the lines
        double sum = 0;
        int count = 0;
        for (int i = 0; i <= 19; i++)
        {
            sum += hl[i].Alpha;
            count += 1;
        }
        return sum / count;
    }

    // Calculate the 'count' lines in the image with the most points.
    private HougLine[] GetTop(int count)
    {
        HougLine[] hl = new HougLine[count];
        for (int i = 0; i <= count - 1; i++)
        {
            hl[i] = new HougLine();
        }

        for (int i = 0; i <= _hMatrix.Length - 1; i++)
        {
            if (_hMatrix[i] > hl[count - 1].Count)
            {
                hl[count - 1].Count = _hMatrix[i];
                hl[count - 1].Index = i;

                int j = count - 1;
                while (j > 0 && hl[j].Count > hl[j - 1].Count)
                {
                    HougLine tmp = hl[j];
                    hl[j] = hl[j - 1];
                    hl[j - 1] = tmp;
                    j -= 1;
                }
            }
        }

        for (int i = 0; i <= count - 1; i++)
        {
            int dIndex = hl[i].Index / STEPS;
            int alphaIndex = hl[i].Index - dIndex * STEPS;
            hl[i].Alpha = GetAlpha(alphaIndex);
            //hl[i].D = dIndex + _min;
        }
        return hl;
    }

    // Hough transformation:
    private void Calc()
    {
        int hMin = _internalBmp.Height / 4;
        int hMax = _internalBmp.Height * 3 / 4;
        Init();
        for (int y = hMin; y <= hMax; y++)
        {
            for (int x = 1; x <= _internalBmp.Width - 2; x++)
            {
                // Only lower edges are considered.
                if (IsBlack(x, y))
                {
                    if (!IsBlack(x, y + 1))
                    {
                        Calc(x, y);
                    }
                }
            }
        }
    }

    // Calculate all lines through the point (x,y).
    private void Calc(int x, int y)
    {
        int alpha;
        for (alpha = 0; alpha <= STEPS - 1; alpha++)
        {
            double d = y * _cosA[alpha] - x * _sinA[alpha];
            int calculatedIndex = (int)CalcDIndex(d);
            int index = calculatedIndex * STEPS + alpha;
            try
            {
                _hMatrix[index] += 1;
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.ToString());
            }
        }
    }

    private double CalcDIndex(double d)
    {
        return Convert.ToInt32(d - _min);
    }

    private bool IsBlack(int x, int y)
    {
        Color c = _internalBmp.GetPixel(x, y);
        double luminance = (c.R * 0.299) + (c.G * 0.587) + (c.B * 0.114);
        return luminance < 140;
    }

    private void Init()
    {
        // Precalculation of sin and cos.
        _cosA = new double[STEPS];
        _sinA = new double[STEPS];
        for (int i = 0; i < STEPS; i++)
        {
            double angle = GetAlpha(i) * Math.PI / 180.0;
            _sinA[i] = Math.Sin(angle);
            _cosA[i] = Math.Cos(angle);
        }

        // Range of d:
        _min = -_internalBmp.Width;
        _count = (int)(2 * (_internalBmp.Width + _internalBmp.Height) / STEP);
        _hMatrix = new int[_count * STEPS];
    }

    private static double GetAlpha(int index)
    {
        return ALPHA_START + index * ALPHA_STEP;
    }
}
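Since Rotate is left to the reader, here is a minimal sketch of what it could look like with System.Drawing (my own assumption, not part of the original translation; the sign of the angle may need flipping depending on how you measure skew):

static Bitmap Rotate(Bitmap image, double angleDegrees)
{
    var result = new Bitmap(image.Width, image.Height);
    result.SetResolution(image.HorizontalResolution, image.VerticalResolution);
    using (var g = Graphics.FromImage(result))
    {
        // Rotate around the image centre by the negative of the detected skew.
        g.TranslateTransform(image.Width / 2f, image.Height / 2f);
        g.RotateTransform((float)-angleDegrees);
        g.TranslateTransform(-image.Width / 2f, -image.Height / 2f);
        g.DrawImage(image, new Point(0, 0));
    }
    return result;
}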
Scanned documents are almost always skewed, typically by an angle in the [-10, +10] degree range.
It's easy to deskew them using the Hough transform, as Lou Franco said. This transform detects lines in your image at several angles; you just have to select the one corresponding to your document's horizontal lines, then rotate the image.
Try to isolate the pixels corresponding to your document's horizontal lines (for instance, black pixels that have a white pixel below them).
Run the Hough transform. Don't forget to use 'unsafe' mode in C# to speed up processing of the whole image with a pointer.
Rotate your document by the opposite of the angle found.
Works like a charm on binary documents (and is easily extendable to grey-level ones).
Disclaimer: I work at Atalasoft, DotImage Document Imaging can do this with a couple of lines of code.
Deskew is a term of art that describes what you are trying to do. As Ben Voigt said, it's technically rotation, not skew -- however, you will find algorithms under automatic deskew if you search.
The normal way to do this is to do a hough transform to look for the prevalent lines in the image. With normal documents, many of them will be orthogonal to the sides of the paper.
Are you sure it's "skew" rather than "rotation"? (Rotation preserves angles; skew doesn't.)
Use some sort of registration mark in at least two places, which you can recognize even when rotated.
Find the coordinates of these marks and calculate the rotation angle (as sketched below).
Apply a rotation transformation matrix to the image.
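A hedged sketch of the angle calculation (the mark positions are hypothetical; 'expected' is where the two marks sit on an unrotated sheet, 'found' is where they were detected):

static double RotationAngleDegrees(PointF expectedA, PointF expectedB,
                                   PointF foundA, PointF foundB)
{
    // Angle of the line through each pair of marks, measured from the x-axis.
    double expected = Math.Atan2(expectedB.Y - expectedA.Y, expectedB.X - expectedA.X);
    double found = Math.Atan2(foundB.Y - foundA.Y, foundB.X - foundA.X);
    // Rotate the image back by the negative of this to deskew.
    return (found - expected) * 180.0 / Math.PI;
}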
