For a large number of lines (n > 1000), the performance of Graphics.DrawLines is very poor (multiple seconds) when the lines cross each other. See the following example:
private static readonly Random r = new Random();

private void Form1_Paint(object sender, PaintEventArgs e)
{
    int n = 10000;
    using (Pen pen = new Pen(Color.Black, 1))
    {
        Point[] points = new Point[n];
        for (int i = 0; i < n; i++)
        {
            int ii = i * 1000 / n;   // used in place of x or y in the fast variant described below
            int x = r.Next(0, 1001);
            int y = r.Next(0, 1001);
            points[i] = new Point(x, y);
        }
        e.Graphics.DrawLines(pen, points);
    }
}
When I replace x or y by ii, the performance is good; in that case the lines do not cross each other.
I also observed that the line width has an impact: line widths larger than 1 are even slower.
Is there any way to improve the performance of DrawLines?
As Matthew Watson points out, the most probable reason is simply that you are drawing more pixels. If you change your code to
for (int i = 0; i < n; i += 4)
{
    points[i]     = new Point(0, 0);
    points[i + 1] = new Point(1000, 1000);
    points[i + 2] = new Point(1000, 0);
    points[i + 3] = new Point(0, 1000);
}
you would repeatedly be drawing a cross. When I test this I get about 230 ms for drawing a line strip of 10k segments to a bitmap, which seems to be perfectly within expectations. I would expect you to get similar performance when drawing to screen using double buffering.
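As an aside, a minimal sketch of enabling double buffering on a WinForms form, assuming the drawing happens in the form's Paint handler:

// Minimal sketch: let WinForms compose the frame off-screen and blit it once.
// Assumes this runs in the form's constructor, after InitializeComponent().
public Form1()
{
    InitializeComponent();
    this.DoubleBuffered = true;   // protected property on Control, settable here
}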
When drawing random points there sometimes seems to be a performance degradation, especially when using wider lines. My only explanation is that it hits some edge case that is handled much less efficiently. I would recommend checking whether you get acceptable performance with your actual data, and making another post with said data if performance is much worse than expected.
You might also want to take a look at Ramer–Douglas–Peucker or Visvalingam–Whyatt to simplify your line. If this is a plot it might not make much sense to have many more points than you have pixels on your monitor.
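For illustration, here is a minimal recursive sketch of Ramer–Douglas–Peucker; the epsilon tolerance and the PerpendicularDistance helper are my own naming, not anything from the question:

// Ramer–Douglas–Peucker sketch: drop every point whose perpendicular distance
// to the simplified line is below epsilon (in pixels).
static List<PointF> Simplify(IList<PointF> pts, int first, int last, float epsilon)
{
    float maxDist = 0; int index = first;
    for (int i = first + 1; i < last; i++)
    {
        float d = PerpendicularDistance(pts[i], pts[first], pts[last]);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist > epsilon)
    {
        var left = Simplify(pts, first, index, epsilon);
        var right = Simplify(pts, index, last, epsilon);
        left.RemoveAt(left.Count - 1);   // avoid duplicating the split point
        left.AddRange(right);
        return left;
    }
    return new List<PointF> { pts[first], pts[last] };
}

static float PerpendicularDistance(PointF p, PointF a, PointF b)
{
    float dx = b.X - a.X, dy = b.Y - a.Y;
    float len = (float)Math.Sqrt(dx * dx + dy * dy);
    if (len == 0)   // a == b: fall back to point distance
        return (float)Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
    return Math.Abs(dy * (p.X - a.X) - dx * (p.Y - a.Y)) / len;
}

You would then draw, e.g., Simplify(pointList, 0, pointList.Count - 1, 1.5f).ToArray() instead of the raw points.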
I have a Metafile object. For reasons outside of my control, it has been provided much larger (thousands of times larger) than what would be required to fit the image drawn inside it.
For example, it could be 40 000 x 40 000, yet only contains "real" (non-transparent) pixels in an area 2000 x 1600.
Originally, this metafile was simply drawn to a control, and the control bounds limited the area to a reasonable size.
Now I am trying to split it into different chunks of dynamic size, depending on user input. What I want to do is count how many of those chunks there will be (in x and in y, since the splitting is into a two-dimensional grid of chunks).
I am aware that, technically, I could go the O(N²) way, and just check the pixels one by one to find the "real" bounds of the drawn image.
But this will be painfully slow.
I am looking for a way of getting the position (x,y) of the very last drawn pixel in the entire metafile, without iterating through every single one of them.
Since the DrawImage method is not painfully slow, at least not N² slow, I assume that the metafile object has some optimisations on the inside that would allow something like this. Just like the List object has a .Count property that is much faster than actually counting the objects, is there some way of getting the practical bounds of a metafile?
The drawn content, in this scenario, will always be rectangular. I can safely assume that the last pixel will be the same, whether I loop in x then y, or in y then x.
How can I find the coordinates of this "last" pixel?
Finding the bounding rectangle of the non-transparent pixels for such a large image is indeed an interesting challenge.
The most direct approach would be tackling the WMF content but that is also by far the hardest to get right.
Let's instead render the image to a bitmap and look at the bitmap.
First the basic approach, then a few optimizations.
To get the bounds one needs to find the left, top, right and bottom borders.
Here is a simple function to do that:
Rectangle getBounds(Bitmap bmp)
{
    int l, r, t, b; l = t = r = b = 0;

    // left border: first column (from the left) containing a non-transparent pixel
    for (int x = 0; x < bmp.Width; x++)
        for (int y = 0; y < bmp.Height; y++)
            if (bmp.GetPixel(x, y).A > 0) { l = x; goto l1; }
    l1:
    // right border: first column from the right
    for (int x = bmp.Width - 1; x > l; x--)
        for (int y = 0; y < bmp.Height; y++)
            if (bmp.GetPixel(x, y).A > 0) { r = x; goto l2; }
    l2:
    // top border: only the columns between l and r need testing
    for (int y = 0; y < bmp.Height; y++)
        for (int x = l; x <= r; x++)
            if (bmp.GetPixel(x, y).A > 0) { t = y; goto l3; }
    l3:
    // bottom border
    for (int y = bmp.Height - 1; y > t; y--)
        for (int x = l; x <= r; x++)
            if (bmp.GetPixel(x, y).A > 0) { b = y; goto l4; }
    l4:
    return Rectangle.FromLTRB(l, t, r, b);
}
Note that it optimizes the last, vertical loops a little by looking only at the portion not already tested by the horizontal loops.
It uses GetPixel, which is painfully slow; even LockBits gains only about 10x or so. So we need to reduce the sheer numbers; we need to do that anyway, because 40k x 40k pixels is too large for a Bitmap.
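For reference, a hedged sketch of the LockBits variant of such a scan, shown only for the left border. It assumes the render bitmap uses PixelFormat.Format32bppArgb (4 bytes per pixel, B-G-R-A order, positive stride) and needs using System.Drawing.Imaging and System.Runtime.InteropServices:

// Lock the bitmap once, copy the raw bytes out, and test alpha bytes directly.
static int FindLeftBorder(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    try
    {
        byte[] pixels = new byte[data.Stride * data.Height];
        Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
        for (int x = 0; x < bmp.Width; x++)
            for (int y = 0; y < bmp.Height; y++)
                if (pixels[y * data.Stride + x * 4 + 3] > 0)  // alpha channel
                    return x;
        return -1;  // fully transparent
    }
    finally { bmp.UnlockBits(data); }
}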
Since a WMF is usually filled with vector data we can probably scale it down a lot. Here is an example:
string fn = "D:\\_test18b.emf";
Image img = Image.FromFile(fn);
int w = img.Width;
int h = img.Height;
float scale = 100;
Rectangle rScaled = Rectangle.Empty;
using (Bitmap bmp = new Bitmap((int)(w / scale), (int)(h / scale)))
using (Graphics g = Graphics.FromImage(bmp))
{
    g.ScaleTransform(1f / scale, 1f / scale);
    g.Clear(Color.Transparent);
    g.DrawImage(img, 0, 0);
    rScaled = getBounds(bmp);
    Rectangle rUnscaled = Rectangle.Round(
        new RectangleF(rScaled.Left * scale, rScaled.Top * scale,
                       rScaled.Width * scale, rScaled.Height * scale));
}
Note that to properly draw the wmf file one may need to adapt the resolutions. Here is an example I used for testing:
using (Graphics g2 = pictureBox.CreateGraphics())
{
    float scaleX = g2.DpiX / img.HorizontalResolution / scale;
    float scaleY = g2.DpiY / img.VerticalResolution / scale;
    g2.ScaleTransform(scaleX, scaleY);
    g2.DrawImage(img, 0, 0); // draw the original emf image.. (*)
    g2.ResetTransform();
    // g2.DrawImage(bmp, 0, 0); // .. it will look the same as (*)
    g2.DrawRectangle(Pens.Black, rScaled);
}
I left this out above, but for fully controlling the rendering it ought to have been included in the first snippet as well.
This may or may not be good enough, depending on the accuracy needed.
To measure the bounds perfectly one can use this trick: take the bounds from the scaled-down test and measure, unscaled, only a tiny stripe around each of the four bound values. When creating the render bitmap we move the origin accordingly.
Example for the right bound:
Rectangle rScaled2 = Rectangle.Empty;
int delta = 80;
int right = (int)(rScaled.Right * scale);
using (Bitmap bmp = new Bitmap(delta * 2, h))
using (Graphics g = Graphics.FromImage(bmp))
{
    g.Clear(Color.Transparent);
    // shift the image left so the stripe [right - delta, right + delta]
    // lands at x = 0 in the small bitmap
    g.DrawImage(img, -(right - delta), 0);
    rScaled2 = getBounds(bmp);
}
I could have optimized further by not going over the full height but only the portion (plus delta) already found.
Further optimization can be achieved if one can use knowledge about the data: if we know the image content is connected, we could use larger steps in the loops until a pixel is found and then trace back one step.
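A hedged sketch of that stepped scan for the left border; the step size is an assumption about how "thick" the connected content is at minimum:

// Coarse-then-fine sketch: stride through the bitmap in steps of 'step'
// pixels, then walk back column by column to the true border.
// Assumes the drawn content is connected and at least 'step' pixels across.
static int FindLeftBorderStepped(Bitmap bmp, int step)
{
    for (int x = 0; x < bmp.Width; x += step)
        for (int y = 0; y < bmp.Height; y += step)
            if (bmp.GetPixel(x, y).A > 0)
            {
                int left = x;  // found a hit; trace back to the first column with content
                while (left > 0 && ColumnHasPixel(bmp, left - 1)) left--;
                return left;
            }
    return -1;
}

static bool ColumnHasPixel(Bitmap bmp, int x)
{
    for (int y = 0; y < bmp.Height; y++)
        if (bmp.GetPixel(x, y).A > 0) return true;
    return false;
}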
I have tested two rescaling functions by applying them to FFT convolution outputs.
The first one is collected from this link.
public static void RescaleComplex(Complex[,] convolve)
{
    int imageWidth = convolve.GetLength(0);
    int imageHeight = convolve.GetLength(1);
    double maxAmp = 0.0;
    for (int i = 0; i < imageWidth; i++)
    {
        for (int j = 0; j < imageHeight; j++)
        {
            maxAmp = Math.Max(maxAmp, convolve[i, j].Magnitude);
        }
    }
    double scale = 1.0 / maxAmp;
    for (int i = 0; i < imageWidth; i++)
    {
        for (int j = 0; j < imageHeight; j++)
        {
            convolve[i, j] = new Complex(convolve[i, j].Real * scale,
                                         convolve[i, j].Imaginary * scale);
        }
    }
}
Here the problem is incorrect contrast.
The second one is collected from this link.
public static void RescaleComplex(Complex[,] convolve)
{
    int imageWidth = convolve.GetLength(0);
    int imageHeight = convolve.GetLength(1);
    double scale = imageWidth * imageHeight;
    for (int j = 0; j < imageHeight; j++)
    {
        for (int i = 0; i < imageWidth; i++)
        {
            double re = Math.Max(0.0, Math.Min(convolve[i, j].Real * scale, 1.0));
            double im = Math.Max(0.0, Math.Min(convolve[i, j].Imaginary * scale, 1.0));
            convolve[i, j] = new Complex(re, im);
        }
    }
}
Here the output is totally white.
So, of the two versions, one gives a correct output and the other an incorrect one.
How can I solve this dilemma?
Note: the "matrix kernel" is the following:
0 -1 0
-1 5 -1
0 -1 0
Source Code. Here is my FFT Convolution function.
private static Complex[,] ConvolutionFft(Complex[,] image, Complex[,] kernel)
{
    Complex[,] imageCopy = (Complex[,])image.Clone();
    Complex[,] kernelCopy = (Complex[,])kernel.Clone();
    Complex[,] convolve = null;
    int imageWidth = imageCopy.GetLength(0);
    int imageHeight = imageCopy.GetLength(1);
    int kernelWidth = kernelCopy.GetLength(0);
    int kernelHeight = kernelCopy.GetLength(1);
    if (imageWidth == kernelWidth && imageHeight == kernelHeight)
    {
        Complex[,] fftConvolved = new Complex[imageWidth, imageHeight];
        Complex[,] fftImage = FourierTransform.ForwardFFT(imageCopy);
        Complex[,] fftKernel = FourierTransform.ForwardFFT(kernelCopy);
        for (int j = 0; j < imageHeight; j++)
        {
            for (int i = 0; i < imageWidth; i++)
            {
                fftConvolved[i, j] = fftImage[i, j] * fftKernel[i, j];
            }
        }
        convolve = FourierTransform.InverseFFT(fftConvolved);
        RescaleComplex(convolve);
        convolve = FourierShifter.ShiftFft(convolve);
    }
    else
    {
        throw new Exception("Padded image and kernel dimensions must be same.");
    }
    return convolve;
}
This is not really a dilemma. This is just an issue of the limited range of the display, and of your expectations, which are different in the two cases.
(top): this is a normalized kernel (its elements sum up to 1). It doesn't change the contrast of the image. But because of negative values in it, it can generate values outside the original range.
(bottom): this is not a normalized kernel. It changes the contrast of the output.
For example, play around with the kernel
0, -1, 0
-1, 6, -1
0, -1, 0
(notice the 6 in the middle). It sums up to 2. The image contrast will be doubled. That is, in a region where the input is all 0, the output is 0 as well, but where the input is all 1, the output will be 2 instead.
Typically, a convolution filter, if it is not meant to change image contrast, is normalized. If you apply such a filter, you don't need to re-scale the output for display (though you might want to clip out-of-range values if they appear). However, it is possible that the out-of-range values are relevant, in this case you need to re-scale the output to match the display range.
In your case 2 (the image kernel), you could normalize the kernel to avoid re-scaling the output. But this is not a solution in general. Some filters add up to 0 (e.g. the Sobel kernels or the Laplace kernel, both of which are based on derivatives which remove the DC component). These cannot be normalized, you will always have to re-scale the output image for display (though you wouldn't re-scale their output for analysis, since their output values have a physical meaning that is destroyed upon re-scaling).
That is to say, the convolution sometimes is meant to produce an output image with the same contrast (within approximately the same range) as the input image, and sometimes it isn't. You need to know what filter you are applying for the output to make sense, and to be able to display the output on a screen that expects images to be in a specific range.
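If you do go the normalization route for a kernel with a non-zero sum, a minimal sketch (my own helper, dividing every element by the element sum; assumes System.Numerics.Complex as in the question) could look like:

// Normalize a real-valued kernel so its elements sum to 1.
// Only valid for kernels whose sum is non-zero (not Sobel or Laplace).
static void NormalizeKernel(Complex[,] kernel)
{
    double sum = 0.0;
    foreach (Complex c in kernel) sum += c.Real;
    if (sum == 0.0)
        throw new InvalidOperationException("Kernel sums to zero; cannot normalize.");
    for (int i = 0; i < kernel.GetLength(0); i++)
        for (int j = 0; j < kernel.GetLength(1); j++)
            kernel[i, j] /= sum;
}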
EDIT: explanation of what is going on in your figures.
1st figure: Here you are rescaling so that the full image intensity range is visible. Logically here you don't get any saturated pixels. But because the matrix kernel enhances high frequencies, the output image has values outside the original range. Rescaling to fit the full range within the display's range reduces the contrast of the image.
2nd figure: You are rescaling the frequency-domain convolution result by N = imageWidth * imageHeight. This yields the right output. That you need to apply this scaling indicates that your forward FFT scales by 1/N, and your inverse FFT doesn't scale.
For IFFT(FFT(img))==img, it is necessary that either the FFT or the IFFT are scaled by 1/N. Typically it is the IFFT that is scaled. The reason is that then the convolution does as expected without any further scaling. To see this, imagine an image where all pixels have the same value. FFT(img) will be zero everywhere except for the 0 frequency component (DC component), which will be sum(img). The normalized kernel sums up to 1, so its DC component is sum(kernel)==1. Multiply these two, we obtain again a frequency spectrum like the input's, with a DC component of sum(img). Its inverse transform will be equal to img. This is exactly what we expect for this convolution.
Now, use the other form of normalization (i.e. the one used by the FFT you have access to). The DC component of FFT(img) will be sum(img)/N. The DC component of the kernel will be 1/N. Multiply these two, and obtain a DC component of sum(img)/(N*N). Its inverse transform will be equal to img/N. Thus, you need to multiply by N to obtain the expected result. This is exactly what you're seeing in your frequency-domain convolution for the "matrix kernel", which is normalized.
As I mentioned above, the "image kernel" isn't normalized. The DC component of FFT(kernel) is sum(kernel)/N, the multiplication of that by FFT(img) has a DC component sum(img)*sum(kernel)/(N*N), and so the inverse transform has a contrast multiplied by sum(kernel)/N; multiplying by N still leaves you with a factor sum(kernel) too large. If you were to normalize the kernel, you would be dividing it by sum(kernel), which would bring your output into the expected range.
While using a self-written graphing control I noticed that the painting of the graph was much slower while displaying noisy data than when it displayed clean data.
I dug into it further and narrowed the problem down to its bare minimum difference: drawing the same number of lines with varying Y values versus drawing lines with the same Y value.
So, for example, I put together the following tests. I generate lists of points: one with random Y values, one with the same Y, and one with a zig-zag Y pattern.
private List<PointF> GenerateRandom(int n, int width, int height)
{
    //Generate random pattern
    Random rnd = new Random();
    float stepwidth = width / (float)n;  // cast before dividing to avoid integer division
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = Convert.ToSingle(height * rnd.NextDouble());
        res.Add(new PointF(x, y));
    }
    return res;
}
private List<PointF> GenerateUnity(int n, int width, int height)
{
    //Generate points along a simple line
    float stepwidth = width / (float)n;  // cast before dividing to avoid integer division
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = mid;
        res.Add(new PointF(x, y));
    }
    return res;
}
private List<PointF> GenerateZigZag(int n, int width, int height)
{
    //Generate an Up/Down List
    float stepwidth = width / (float)n;  // cast before dividing to avoid integer division
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    var state = false;
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = mid - (state ? 50 : -50);
        res.Add(new PointF(x, y));
        state = !state;
    }
    return res;
}
I now draw each list of points a few times and compare how long it takes:
private void DoTheTest()
{
    Bitmap bmp = new Bitmap(970, 512);
    var random = GenerateRandom(2500, bmp.Width, bmp.Height).ToArray();
    var unity = GenerateUnity(2500, bmp.Width, bmp.Height).ToArray();
    var ZigZag = GenerateZigZag(2500, bmp.Width, bmp.Height).ToArray();
    using (Graphics g = Graphics.FromImage(bmp))
    {
        var tUnity = BenchmarkDraw(g, 200, unity);
        var tRandom = BenchmarkDraw(g, 200, random);
        var tZigZag = BenchmarkDraw(g, 200, ZigZag);
        MessageBox.Show(tUnity.ToString() + "\r\n" + tRandom.ToString() + "\r\n" + tZigZag.ToString());
    }
}
private double BenchmarkDraw(Graphics g, int n, PointF[] Points)
{
    var Times = new List<double>();
    // Stopwatch resolves microseconds; DateTime.Now only ticks roughly every
    // 15 ms, which is far too coarse for sub-millisecond draw times.
    var sw = new System.Diagnostics.Stopwatch();
    for (int i = 1; i <= n; i++)
    {
        g.Clear(Color.White);
        sw.Restart();
        DrawLines(g, Points);
        sw.Stop();
        Times.Add(sw.Elapsed.TotalMilliseconds);
    }
    return Times.Average();
}
private void DrawLines(Graphics g, PointF[] Points)
{
    g.DrawLines(Pens.Black, Points);
}
I come up with the following durations per draw:
Straight Line: 0.095 ms
Zig-Zag Pattern: 3.24 ms
Random Pattern: 5.47 ms
So it seems to get progressively worse the more change there is in the lines to be drawn, and that is also the real-world effect I encountered in the control painting I mentioned at the beginning.
My questions are thus the following:
Why does it make such a brutal difference which lines are to be drawn?
How can I improve the drawing speed for the noisy data?
Three reasons come to mind:
Line length: depending on the actual numbers, sloped lines may be longer by just a few pixels or by some substantial factor. Looking at your code I suspect the latter.
Algorithm: drawing sloped lines takes some algorithm to find the next pixels. Even fast drawing routines need to do some computation, as opposed to vertical or horizontal lines, which run straight through the pixel arrays.
Anti-aliasing: unless you turn off anti-aliasing completely (with all the ugly consequences), the number of pixels to paint will be around 2-3 times higher, as all those anti-aliasing pixels above and below the center lines must also be calculated and drawn. Not to forget calculating their colors!
The remedy for the latter part is obviously to turn off anti-aliasing (see the snippet below), but the other problems are simply the way things are. So best don't worry and be happy about the speedy straight lines :-)
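For reference, turning anti-aliasing off is a single property on the Graphics object; SmoothingMode lives in System.Drawing.Drawing2D:

// Disable anti-aliasing before drawing: the lines get jagged but cheaper,
// since no smoothing pixels need to be computed and painted.
g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.None;
g.DrawLines(Pens.Black, points);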
If you really have a lot of lines, or your lines can be very long (a few times the size of the screen), or you have a lot of almost-zero-pixel lines, you have to write code to reduce the useless drawing of lines.
Well, here are some ideas:
If you draw many lines at the same x, you could replace those with a single line between the min and max y at that x (see the sketch after this list).
If a line goes way beyond the screen boundary, you should clip it.
If a line is completely outside the visible area, you should skip it.
If a line has zero length, you should not draw it.
If a line is a single pixel long, you should draw only that pixel.
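A hedged sketch of the first idea, collapsing all points that share the same integer x into one min/max pair; it assumes the points are ordered by x, as in a typical chart:

// Replace many segments at the same integer x by one vertical min/max pair.
// The drawn result approximates the dense data rather than matching it exactly.
static List<PointF> CollapseColumns(IList<PointF> pts)
{
    var res = new List<PointF>();
    int i = 0;
    while (i < pts.Count)
    {
        int x = (int)pts[i].X;
        float min = pts[i].Y, max = pts[i].Y;
        int j = i;
        while (j < pts.Count && (int)pts[j].X == x)   // scan the run at this x
        {
            min = Math.Min(min, pts[j].Y);
            max = Math.Max(max, pts[j].Y);
            j++;
        }
        res.Add(new PointF(x, min));
        if (max != min) res.Add(new PointF(x, max));
        i = j;
    }
    return res;
}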
Obviously, the benefit depends a lot on how many lines you draw... And also the alternative might not give the exact same result...
In practice, if you draw a chart on a screen and display only useful information, it should be pretty fast on modern hardware.
If you use styles or colors, it might not be as trivial to optimize the display of the data.
Alternatively, there are charting components that are optimized for displaying large data sets. The good ones are generally expensive, but they might still be worth it. Trials are often available, so you can get a good idea of how much you might increase performance and then decide what to do.
I'm trying to draw the Mandelbrot fractal, using the following method that I wrote:
public void Mendelbrot(int MAX_Iterations)
{
    int iterations = 0;
    for (float x = -2; x <= 2; x += 0.001f)
    {
        for (float y = -2; y <= 2; y += 0.001f)
        {
            Graphics gpr = panel.CreateGraphics();
            //System.Numerics
            Complex C = new Complex(x, y);
            Complex Z = new Complex(0, 0);
            for (iterations = 0; iterations < MAX_Iterations && Complex.Abs(Z) < 2; iterations++)
                Z = Complex.Pow(Z, 2) + C;
            //ARGB color based on iterations
            int r = (iterations % 32) * 7;
            int g = (iterations % 16) * 14;
            int b = (iterations % 128) * 2;
            int a = 255;
            Color c = Color.FromArgb(a, r, g, b);
            Pen p = new Pen(c);
            //Transform the coordinates x (real part) and y (imaginary part)
            //of the Gauss plane into x and y of the Cartesian plane
            float X = (panel.Width * (x + 2)) / 4;
            float Y = (panel.Height * (y + 2)) / 4;
            //Draw a single pixel using a Rectangle
            gpr.DrawRectangle(p, X, Y, 1, 1);
        }
    }
}
It works, but it's slow, which is a problem because I need to add the possibility of zooming. With this method of drawing that isn't feasible, so I need something fast. I tried to use a FastBitmap, but it isn't enough; the SetPixel of the FastBitmap doesn't increase the speed of drawing. So I'm searching for something very fast. I know that C# isn't like C and ASM, but it would be interesting to do this in C# and WinForms.
Suggestions are welcome.
EDIT: Mendelbrot Set Zoom Animation
I assume it would be significantly more efficient to first populate your RGB values into a byte array in memory, then write them in bulk into a Bitmap using LockBits and Marshal.Copy (follow the link for an example), and finally draw the bitmap using Graphics.DrawImage.
You need to understand some essential concepts, such as stride and image formats, before you can get this to work.
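A hedged sketch of that approach: IterationsAt(px, py) is a hypothetical helper standing in for the escape-time loop of the question, and with Format32bppArgb each pixel is 4 bytes (B, G, R, A) so the stride equals width * 4. It needs using System.Drawing.Imaging and System.Runtime.InteropServices:

// Fill a BGRA byte buffer first, then copy it into a Bitmap in one go.
int width = panel.Width, height = panel.Height;
byte[] buffer = new byte[width * height * 4];
for (int py = 0; py < height; py++)
    for (int px = 0; px < width; px++)
    {
        int iterations = IterationsAt(px, py);   // hypothetical escape count per pixel
        int idx = (py * width + px) * 4;
        buffer[idx + 0] = (byte)((iterations % 128) * 2);  // blue
        buffer[idx + 1] = (byte)((iterations % 16) * 14);  // green
        buffer[idx + 2] = (byte)((iterations % 32) * 7);   // red
        buffer[idx + 3] = 255;                             // alpha
    }
using (Bitmap bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb))
{
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    Marshal.Copy(buffer, 0, data.Scan0, buffer.Length);
    bmp.UnlockBits(data);
    using (Graphics gpr = panel.CreateGraphics())
        gpr.DrawImage(bmp, 0, 0);   // one DrawImage call instead of one per pixel
}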
As the comments said, move CreateGraphics() out of the double loop; this is already a good improvement.
But also
Enable double buffering
For zooming, use the matrix transformation functions:
ScaleTransform
RotateTransform
TranslateTransform
An interesting article on CodeProject can be found here. It goes a little further than just the function calls by actually explaining the matrix calculus (in a simple way, don't worry), which is good and not difficult to understand, in order to know what is going on behind the scenes.
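As a hedged illustration of zooming with these calls, where zoomFactor and center are hypothetical parameters and the fractal has already been rendered to a bitmap:

// Zoom into a point: translate it to the origin, scale, translate back.
// GDI+ prepends transforms by default, so points are mapped as
// center + zoomFactor * (p - center), i.e. a zoom around 'center'.
void DrawZoomed(Graphics g, Bitmap fractal, float zoomFactor, PointF center)
{
    g.TranslateTransform(center.X, center.Y);
    g.ScaleTransform(zoomFactor, zoomFactor);
    g.TranslateTransform(-center.X, -center.Y);
    g.DrawImage(fractal, 0, 0);
    g.ResetTransform();
}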
I've been racking my brain trying to figure out how to animate an effect. This is related to a question I asked on math.stackexchange.com.
https://math.stackexchange.com/questions/91120/equal-division-of-rectangles-to-make-total/
As a side note, I didn't implement the drawing algorithm that was defined in the question above -- instead I used my own, in order to change the perspective and make it look more condensed.
I've been able to draw a stationary 3d style effect, but I am having trouble wrapping my brain around the logic to make the lines below look like they are coming towards you.
My code is as follows,
List<double> sizes = new List<double>();

private void Form1_Load(object sender, EventArgs e)
{
    for (int y = 1; y < 10; y++)
    {
        double s = ((240 / 2) / y) / 4;
        sizes.Add(s);
    }
    sizes.Add(0);
}

int offset = 0;

private void button1_Click(object sender, EventArgs e)
{
    Bitmap b = new Bitmap(320, 480);
    using (Graphics g = Graphics.FromImage(b))
    {
        Color firstColor = Color.DarkGray;
        Color secondColor = Color.Gray;
        Color c = firstColor;
        int yOffset = 0;
        for (int i = 0; i < sizes.Count; i++)
        {
            c = (i % 2 == 0) ? firstColor : secondColor;
            int y = (int)Math.Round(b.Height - yOffset - sizes[i]);
            int height = (int)Math.Round(sizes[i]);
            g.FillRectangle(new SolidBrush(c), new Rectangle(0, y + offset, b.Width, height + offset));
            yOffset += (int)sizes[i];
        }
    }
    this.BackgroundImage = b;
    offset += 1;
}
Each button click should cause the rectangles to resize and move closer. However, my rectangles aren't growing as they should. My logic draws fine, but simply doesn't work as far as moving goes.
So my question is:
Is there an existing algorithm for this effect that I am not aware of, or is this something pretty simple that I'm over thinking? Any help in correcting my logic or pointing me in the right direction would be very appreciated.
Interesting...
(video of the answer here: http://youtu.be/estq62yz7v0)
I would do it like that:
First, drop all RECTANGLE drawing and draw your effect line by line. Like so:
for (int y = start; y < end; y++)
{
    color = DetermineColorFor(y - start);
    DrawLine(left, y, right, y, color);
}
This is of course pseudo-code not to be troubled with GDI+ or something.
Everything is clear here except how to code the DetermineColorFor() method. That method will have to return the color of the line at the specified PROJECTED height.
Now, on the picture, you have:
your point of view (X) - didn't know how to draw an eye
a red line (that's your screen - the projection plane)
your background (alternating stripes at the bottom)
and a few projecting lines that should help you devise the DetermineColorFor() method
Hint - use triangle similarity to go from screen coordinates to 'bar' coordinates.
Next hint - when you are in 'bar' coordinates, use modulo operator to determine color.
I'll add more hints if needed, but it would be great if you solved this on your own.
I was somewhat inspired by the question, and have created code for the solution. Here it is:
int _offset = 0;
double period = 20.0;

private void timer1_Tick(object sender, EventArgs e)
{
    for (int y = Height / 3; y < Height; y++)
    {
        using (Graphics g = CreateGraphics())
        {
            Pen p = new Pen(GetColorFor(y - Height / 3));
            g.DrawLine(p, 0, y, Width, y);
            p.Dispose();
        }
    }
    _offset++;
}
private Color GetColorFor(int y)
{
    double d = 10.0;
    double h = 20.0;
    double z = 0.0;
    if (y != 0)
    {
        z = d * h / (double)y + _offset;
    }
    double l = 128 + 127 * Math.Sin(z * 2.0 * Math.PI / period);
    return Color.FromArgb((int)l, (int)l, (int)l);
}
Experiment with:
d - distance from the eye to the projection screen
h - height of the eye from the 'bar'
period - stripe width on the 'bar'
I had a timer on the form with its event properly hooked up. The timer interval was 20 ms.
Considering that you're talking here about 2D rendering, as far as I understood, it seems to me that you're going to reinvent the wheel. Because what you need, IMHO, is to use the matrix transformations already available in GDI+ for 2D rendering.
Example of applying it in GDI+: GDI+ and MatrixTransformations
For this they use the System.Drawing.Drawing2D.Matrix class, which the Graphics class exposes via its Transform property.
The best 2D rendering framework I have ever used is the Piccolo2D framework, which I used with great success in a big real production project. Definitely consider it for your 2D rendering projects, but you will first need to study it a little bit.
Hope this helps.