Scrolling through a waveform - C#

I have a waveform visualiser I am trying to make for some audio editing, and need to be able to scroll through the waveform. The code I'm currently using comes from this question and works after I made some modifications to allow specifying a start and end audio time:
public Texture2D PaintWaveformSpectrum(AudioClip audio, int textWidth, int textHeight, int audioStart, int audioEnd, Color col) {
    Texture2D tex = new Texture2D(textWidth, textHeight, TextureFormat.RGBA32, false);
    float[] samples = new float[audio.samples * audio.channels];
    float[] waveform = new float[textWidth];
    audio.GetData(samples, 0);
    int packSize = ((audioEnd - audioStart) / textWidth) + 1;
    if (audioStart != 0) {
        audioStart += packSize % audioStart;
    }
    int s = 0;
    for (int i = audioStart; i < audioEnd; i += packSize) {
        waveform[s] = Mathf.Abs(samples[i]);
        s++;
    }
    for (int x = 0; x < textWidth; x++) {
        for (int y = 0; y < textHeight; y++) {
            tex.SetPixel(x, y, Color.gray);
        }
    }
    for (int x = 0; x < waveform.Length; x++) {
        for (int y = 0; y <= waveform[x] * ((float)textHeight * .75f); y++) {
            tex.SetPixel(x, (textHeight / 2) + y, col);
            tex.SetPixel(x, (textHeight / 2) - y, col);
        }
    }
    tex.Apply();
    return tex;
}
The issue, however, is that the waveform changes as I scroll through the audio. It does scroll, but it now shows different values. This is because there are significantly more samples than pixels, so the data needs to be downsampled. At the moment every nth sample is chosen, but with a different start point, different samples will be chosen. Images below for comparison (additionally, here's a video of what I want the scroll to look like):
As you can see they are slightly different. The overall structure is there but the waveform is ultimately different.
I thought this would be an easy fix: shift the start audio value to the nearest packSize (i.e., audioStart += packSize % audioStart when audioStart != 0), but this didn't work. The same issue still occurred.
If anyone has any suggestions on how I can keep the waveform consistent while scrolling it would be much appreciated.

Despite years of programming experience, I still can't seem to correctly round a number. It was as simple as that.
The line
if (audioStart != 0) {
    audioStart += packSize % audioStart;
}
should be
audioStart = (int)Mathf.Round((float)audioStart / packSize) * packSize;
(note the cast to float, so the division isn't truncated before rounding)
Adding one extra element to waveform is also necessary, as half the time the rounding will cause one extra sample to be included. As such, waveform should be defined as:
float[] waveform = new float[textWidth+1];
This solves the issue and the samples are chosen consistently. I'm not quite sure how programs like Audacity manage to get nice-looking waveforms that aren't super noisy (comparison below for the same song: mine on top, Audacity below), but that's for another question.
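For reference, editors like Audacity typically draw the peak (or min/max) of every fixed-size bucket of samples instead of picking every nth sample. Because bucket boundaries are aligned to absolute sample indices, scrolling only changes which buckets are visible, never their contents, and taking the peak also makes the display far less noisy. A minimal sketch of the idea (plain C#; the helper name and signature are my own, not Unity API):

```csharp
using System;

// Peak-per-bucket downsampling. Buckets are aligned to absolute sample
// indices (bucket b covers samples [b*packSize, (b+1)*packSize)), so the
// same samples always land in the same bucket regardless of audioStart.
static float[] PeakPerBucket(float[] samples, int packSize, int audioStart, int audioEnd)
{
    int firstBucket = audioStart / packSize;
    int lastBucket = (audioEnd - 1) / packSize;
    var peaks = new float[lastBucket - firstBucket + 1];
    for (int b = firstBucket; b <= lastBucket; b++)
    {
        float peak = 0f;
        int end = Math.Min((b + 1) * packSize, samples.Length);
        for (int i = b * packSize; i < end; i++)
            peak = Math.Max(peak, Math.Abs(samples[i]));
        peaks[b - firstBucket] = peak;
    }
    return peaks;
}
```

Scrolling by one bucket then just shifts the rendered array by one element; the bucket values themselves never change.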

Related

Enhance performance to paint image, is SIMD perhaps a solution?

I have no experience with SIMD, but I have a method that is too slow. I now get 40 fps, and I need more.
Does anyone know how I could make this paint method faster? Perhaps SIMD instructions are a solution?
The sourceData is currently a byte[] (videoBytes) but could use a pointer too.
public bool PaintFrame(IntPtr layerBuffer, ushort vStart, byte vScale)
{
    for (ushort y = 0; y < height; y++)
    {
        ushort eff_y = (ushort)(vScale * (y - vStart) / 128);
        var newY = tileHeight > 0 ? eff_y % tileHeight : 0;
        uint y_add = (uint)(newY * tileWidth * bitsPerPixel >> 3);
        for (int x = 0; x < width; x++)
        {
            var newX = tileWidth > 0 ? x % tileWidth : 0;
            ushort x_add = (ushort)(newX * bitsPerPixel >> 3);
            uint tile_offset = y_add + x_add;
            byte color = videoBytes[tile_offset];
            var colorIndex = BitsPerPxlCalculation(color, newX);
            // Apply palette offset
            if (paletteOffset > 0)
                colorIndex += paletteOffset;
            var place = x + eff_y * width;
            Marshal.WriteByte(layerBuffer + place, colorIndex);
        }
    }
    return true;
}

private void UpdateBitPerPixelMethod()
{
    // Convert tile byte to indexed color
    switch (bitsPerPixel)
    {
        case 1:
            BitsPerPxlCalculation = (color, newX) => color;
            break;
        case 2:
            BitsPerPxlCalculation = (color, newX) => (byte)((color >> (6 - ((newX & 3) << 1))) & 3);
            break;
        case 4:
            BitsPerPxlCalculation = (color, newX) => (byte)((color >> (4 - ((newX & 1) << 2))) & 0xf);
            break;
        case 8:
            BitsPerPxlCalculation = (color, newX) => color;
            break;
    }
}
More info
Depending on the settings, the bpp can change. The indexed colors and the palette colors are stored separately. Here I have to recreate the image's pixel indexes; later I use the palette and color indexes in WPF (Windows) or SDL (Linux, Mac) to display the image.
vStart allows cropping the image at the top.
UpdateBitPerPixelMethod() will not change during frame rendering, only before it. During the loop, no settings data can change.
So I was hoping that some parts could be written with SIMD, because the procedure is the same for all pixels.
Hi,
Your code is not the clearest to me. Are you trying to create a new matrix/image? If so, create a new 2D allocation and calculate the entire image into it; zero it out once you no longer need the calculations.
Replace the Marshal.WriteByte(layerBuffer + place, colorIndex); with a write into a 2D image (maybe this is the image?).
Regarding the rest, the non-uniform offsets in indexing and the jumps are a problem. They will make developing a SIMD solution difficult (you need masking and such). My suggestion would be to calculate everything for all the indices and save it into individual 2D matrices, allocated once at the beginning.
For example:
ushort eff_y = (ushort)(vScale * (y - vStart) / 128);
This is calculated for every image row. You could calculate it once as an array, since I don't believe the image format size changes during the run.
I don't know whether vStart and vScale are constants defined at program start, but you should do this for every calculation that uses constants, and later just read from the precalculated matrices.
SIMD can help, but only if every iteration calculates the same thing, and only if you avoid branching and switch cases.
Addition 1
You have multiple problems and design considerations, from my standpoint.
First of all, you need to get away from the idea that SIMD alone is going to help in your case. You would need to remove all the conditional statements; SIMD units are not built to deal with conditionals.
Your goal should be to split the logic into manageable pieces, so you can see which piece of the code takes the most time.
One big problem is the byte-wise write through Marshal: it tells the runtime to handle one single byte at a time, exclusively. I'm guessing this creates a big bottleneck.
Looking at the code, I also see checks being done in every loop iteration. This must be restructured.
Assuming the image is rarely cropped, the crop-dependent values can be separated from the per-pixel calculations:
ushort[] eff_y = new ushort[height];
uint[] y_add = new uint[height];
for (ushort y = 0; y < height; y++)
{
    eff_y[y] = (ushort)(vScale * (y - vStart) / 128);
    var newY = tileHeight > 0 ? eff_y[y] % tileHeight : 0;
    y_add[y] = (uint)(newY * tileWidth * bitsPerPixel >> 3);
}
So this can be precalculated and recomputed only when the cropping changes.
Now it gets really tricky.
paletteOffset - the if statement only makes sense if paletteOffset can be negative; if it can't, adding zero is harmless, so drop the check.
bitsPerPixel - this looks like a fixed value for the duration of the rendering,
so remove UpdateBitPerPixelMethod and pass the calculation in as a parameter.
for (ushort y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        var newX = tileWidth > 0 ? x % tileWidth : 0; // conditional statement
        ushort x_add = (ushort)(newX * bitsPerPixel >> 3);
        uint tile_offset = y_add[y] + x_add;
        byte color = videoBytes[tile_offset];
        var colorIndex = BitsPerPxlCalculation(color, newX);
        // Apply palette offset
        if (paletteOffset > 0) // conditional statement
            colorIndex += paletteOffset;
        var place = x + eff_y[y] * width;
        Marshal.WriteByte(layerBuffer + place, colorIndex);
    }
}
These are only a few of the things that need to be done before you try anything with SIMD. By that time, the changes will also give the compiler hints about what you want to do, which can improve the generated machine code. You also need to measure the performance of your code to pinpoint the bottleneck; it is very hard to guess it correctly just by reading the code.
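To illustrate the WriteByte point above: one low-effort change is to compute each row into a managed byte[] and push it into the unmanaged buffer with a single Marshal.Copy per row instead of one interop call per pixel. A hedged sketch (the pixel delegate stands in for the tile/palette logic from the question; names are mine):

```csharp
using System;
using System.Runtime.InteropServices;

// One Marshal.Copy per row instead of width * height Marshal.WriteByte calls.
static void PaintRows(IntPtr layerBuffer, int width, int height, Func<int, int, byte> pixel)
{
    byte[] row = new byte[width];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
            row[x] = pixel(x, y);   // per-pixel work stays in managed code
        Marshal.Copy(row, 0, layerBuffer + y * width, width);
    }
}
```

This reduces the number of managed/unmanaged transitions from one per pixel to one per row, which is usually the cheapest win before attempting any vectorization.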
Good luck

Rescaling Complex data after FFT Convolution

I have tested two rescaling functions by applying them to FFT convolution outputs.
The first one is taken from this link.
public static void RescaleComplex(Complex[,] convolve)
{
    int imageWidth = convolve.GetLength(0);
    int imageHeight = convolve.GetLength(1);
    double maxAmp = 0.0;
    for (int i = 0; i < imageWidth; i++)
    {
        for (int j = 0; j < imageHeight; j++)
        {
            maxAmp = Math.Max(maxAmp, convolve[i, j].Magnitude);
        }
    }
    double scale = 1.0 / maxAmp;
    for (int i = 0; i < imageWidth; i++)
    {
        for (int j = 0; j < imageHeight; j++)
        {
            convolve[i, j] = new Complex(convolve[i, j].Real * scale,
                                         convolve[i, j].Imaginary * scale);
        }
    }
}
Here the problem is incorrect contrast.
The second one is taken from this link.
public static void RescaleComplex(Complex[,] convolve)
{
    int imageWidth = convolve.GetLength(0);
    int imageHeight = convolve.GetLength(1);
    double scale = imageWidth * imageHeight;
    for (int j = 0; j < imageHeight; j++)
    {
        for (int i = 0; i < imageWidth; i++)
        {
            double re = Math.Max(0.0, Math.Min(convolve[i, j].Real * scale, 1.0));
            double im = Math.Max(0.0, Math.Min(convolve[i, j].Imaginary * scale, 1.0));
            convolve[i, j] = new Complex(re, im);
        }
    }
}
Here the output is totally white.
So, you can see that the two versions give different outputs: one looks correct and the other doesn't.
How can I solve this dilemma?
Note: the matrix kernel is the following:
0 -1 0
-1 5 -1
0 -1 0
Source Code. Here is my FFT Convolution function.
private static Complex[,] ConvolutionFft(Complex[,] image, Complex[,] kernel)
{
    Complex[,] imageCopy = (Complex[,])image.Clone();
    Complex[,] kernelCopy = (Complex[,])kernel.Clone();
    Complex[,] convolve = null;
    int imageWidth = imageCopy.GetLength(0);
    int imageHeight = imageCopy.GetLength(1);
    int kernelWidth = kernelCopy.GetLength(0);
    int kernelHeight = kernelCopy.GetLength(1);
    if (imageWidth == kernelWidth && imageHeight == kernelHeight)
    {
        Complex[,] fftConvolved = new Complex[imageWidth, imageHeight];
        Complex[,] fftImage = FourierTransform.ForwardFFT(imageCopy);
        Complex[,] fftKernel = FourierTransform.ForwardFFT(kernelCopy);
        for (int j = 0; j < imageHeight; j++)
        {
            for (int i = 0; i < imageWidth; i++)
            {
                fftConvolved[i, j] = fftImage[i, j] * fftKernel[i, j];
            }
        }
        convolve = FourierTransform.InverseFFT(fftConvolved);
        RescaleComplex(convolve);
        convolve = FourierShifter.ShiftFft(convolve);
    }
    else
    {
        throw new Exception("Padded image and kernel dimensions must be same.");
    }
    return convolve;
}
This is not really a dilemma. This is just an issue of the limited range of the display, and of your expectations, which are different in the two cases.
(top): this is a normalized kernel (its elements sum up to 1). It doesn't change the contrast of the image. But because of negative values in it, it can generate values outside the original range.
(bottom): this is not a normalized kernel. It changes the contrast of the output.
For example, play around with the kernel
0, -1, 0
-1, 6, -1
0, -1, 0
(notice the 6 in the middle). It sums up to 2. The image contrast will be doubled. That is, in a region where the input is all 0, the output is 0 as well, but where the input is all 1, the output will be 2 instead.
Typically, a convolution filter, if it is not meant to change image contrast, is normalized. If you apply such a filter, you don't need to re-scale the output for display (though you might want to clip out-of-range values if they appear). However, it is possible that the out-of-range values are relevant, in this case you need to re-scale the output to match the display range.
In your case 2 (the image kernel), you could normalize the kernel to avoid re-scaling the output. But this is not a solution in general. Some filters add up to 0 (e.g. the Sobel kernels or the Laplace kernel, both of which are based on derivatives which remove the DC component). These cannot be normalized, you will always have to re-scale the output image for display (though you wouldn't re-scale their output for analysis, since their output values have a physical meaning that is destroyed upon re-scaling).
That is to say, the convolution sometimes is meant to produce an output image with the same contrast (within approximately the same range) as the input image, and sometimes it isn't. You need to know what filter you are applying for the output to make sense, and to be able to display the output on a screen that expects images to be in a specific range.
EDIT: explanation of what is going on in your figures.
1st figure: Here you are rescaling so that the full image intensity range is visible. Logically here you don't get any saturated pixels. But because the matrix kernel enhances high frequencies, the output image has values outside the original range. Rescaling to fit the full range within the display's range reduces the contrast of the image.
2nd figure: You are rescaling the frequency-domain convolution result by N = imageWidth * imageHeight. This yields the right output. That you need to apply this scaling indicates that your forward FFT scales by 1/N, and your inverse FFT doesn't scale.
For IFFT(FFT(img))==img, it is necessary that either the FFT or the IFFT are scaled by 1/N. Typically it is the IFFT that is scaled. The reason is that then the convolution does as expected without any further scaling. To see this, imagine an image where all pixels have the same value. FFT(img) will be zero everywhere except for the 0 frequency component (DC component), which will be sum(img). The normalized kernel sums up to 1, so its DC component is sum(kernel)==1. Multiply these two, we obtain again a frequency spectrum like the input's, with a DC component of sum(img). Its inverse transform will be equal to img. This is exactly what we expect for this convolution.
Now, use the other form of normalization (i.e. the one used by the FFT you have access to). The DC component of FFT(img) will be sum(img)/N. The DC component of the kernel will be 1/N. Multiply these two, and obtain a DC component of sum(img)/(N*N). Its inverse transform will be equal to img/N. Thus, you need to multiply by N to obtain the expected result. This is exactly what you're seeing in your frequency-domain convolution for the "matrix kernel", which is normalized.
As I mentioned above, the "image kernel" isn't normalized. The DC component of FFT(kernel) is sum(kernel)/N; multiplying that by FFT(img) gives a DC component of sum(img)*sum(kernel)/(N*N), and so the inverse transform has its contrast multiplied by sum(kernel)/N. Multiplying by N still leaves you with a factor sum(kernel) too large. If you were to normalize the kernel, you would be dividing it by sum(kernel), which would bring your output into the expected range.
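To make the last point concrete, here is a small sketch of normalizing a kernel by its element sum (my own helper, assuming a real-valued kernel with a nonzero sum):

```csharp
using System;

// Divide a kernel by the sum of its elements so its DC component becomes 1.
// Not applicable to zero-sum kernels such as Sobel or Laplace.
static void NormalizeKernel(double[,] kernel)
{
    double sum = 0.0;
    foreach (double v in kernel) sum += v;
    if (Math.Abs(sum) < 1e-12)
        throw new ArgumentException("Kernel sums to zero and cannot be normalized.");
    for (int i = 0; i < kernel.GetLength(0); i++)
        for (int j = 0; j < kernel.GetLength(1); j++)
            kernel[i, j] /= sum;
}
```

For instance, the 6-centered kernel above sums to 2, so normalizing halves every element and brings the output contrast back to that of the input.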

How to properly run an FFT on a windowed set of data from a pure sine wave

I am trying to experiment with Math.Net, specifically the FFT portion. I am attempting to extract the frequency domain information from a pure sine wave. Here is the code:
private void Form1_Load(object sender, EventArgs e)
{
    //Set up the wave and derive some useful info
    Double WaveFreq = 500;
    Double WavePeriod = 1 / WaveFreq;
    Double SampleFreq = 20000;
    Double SampleTime = (1 / SampleFreq);
    //Generate the wave using the above parameters
    var points = Generate.Sinusoidal(100000, SampleFreq, WaveFreq, 1);
    //Array to hold our complex numbers
    var data = new Complex[points.Length];
    //Set up the series to display our raw wave
    Series WaveSeries = new Series("Waveform");
    WaveSeries.ChartType = SeriesChartType.Line;
    //Create the series for displaying the FFT
    Series FFTSeries = new Series("FFT Test");
    FFTSeries.ChartType = SeriesChartType.Column;
    //Populate both the wave series and the data array
    for (int i = 0; i < points.Length; i++)
    {
        Double x = SampleTime * i;
        WaveSeries.Points.AddXY(x, points[i]);
        data[i] = new Complex(x, points[i]);
    }
    //Create the window to evaluate (using a window 5 times wider than the wavelength of the lowest frequency being measured)
    int WindowWidth = (int)Math.Round((1 / WaveFreq) / (1 / SampleFreq) * 5 + 0.5f);
    var HannWindow = Window.HannPeriodic(WindowWidth);
    var window = new Complex[WindowWidth];
    for (int i = 0; i < WindowWidth; i++)
    {
        var y = data[i].Imaginary * HannWindow[i];
        window[i] = new Complex(data[i].Real, y);
    }
    //Perform the FFT
    Fourier.Forward(window);
    //Add the calculated FFT to our FFTSeries
    foreach (Complex sample in window)
    {
        FFTSeries.Points.AddXY(sample.Phase, sample.Magnitude);
    }
    chart2.Series.Add(WaveSeries);
    chart2.ChartAreas[0].AxisX.Minimum = 0;
    chart2.ChartAreas[0].AxisX.Maximum = .01;
    chart2.ChartAreas[0].AxisY.Minimum = -2;
    chart2.ChartAreas[0].AxisY.Maximum = 2;
    chart1.Series.Add(FFTSeries);
    chart1.ChartAreas[0].AxisX.Minimum = 0;
    chart1.ChartAreas[0].AxisX.Maximum = 1000;
    chart1.ChartAreas[0].AxisY.Minimum = 0;
    chart1.ChartAreas[0].AxisY.Maximum = 5;
}
As you can see, I am generating a sine wave at a frequency of 500 Hz, sampling at 20 kHz, and generating 100k samples.
The output is the following (FFT on the left, wave on the right):
The FFT shows absolutely nothing (aside from a peak of 1.8 around 0 Hz)! I suspect it is probably an error with the windowing, but for the life of me I can't see what it is.
There seems to be some misunderstanding of complex numbers here. In your code they are used like points (x/y tuples), but they have nothing to do with points at all. The complex equivalent of your real data points is an array where the real parts match your real data points and the imaginary parts are all zero. Essentially:
var window = new Complex[WindowWidth];
for (int i = 0; i < WindowWidth; i++)
{
    window[i] = new Complex(points[i] * HannWindow[i], 0.0);
}
If you need an easy way to get the correct x axis for your frequency plot, you can use the FrequencyScale function, along the lines of:
var scale = Fourier.FrequencyScale(WindowWidth, SampleFreq);
for (int i = 0; i < WindowWidth; i++)
{
    FFTSeries.Points.AddXY(scale[i], window[i].Magnitude);
}
You should see a spike at index 5, which according to the computed scale array corresponds to frequency 500, which matches with your wave frequency.
Note that the FFT routine returns the full spectrum including negative frequencies, so you should also see a spike of the same size at frequency -500.
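The index-to-frequency mapping that FrequencyScale encapsulates is simply k * sampleRate / N for the first half of the spectrum (a sketch of the formula, not the Math.NET implementation):

```csharp
using System;

// Frequency represented by FFT bin k of an N-point transform.
// Bins above N/2 correspond to the mirrored negative frequencies.
static double BinToFrequency(int k, int n, double sampleRate)
{
    return k * sampleRate / n;
}
```

With a 200-sample window at 20 kHz, the bins are 100 Hz apart, which is why the 500 Hz tone lands in bin 5.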
The FFT is definitely there, but the scale you have mapped is wrong.
Just change the X axis and you will see it:
chart1.ChartAreas[0].AxisX.Maximum = 10;
It also seems like the sinusoidal waveform you generate is not right, although I am no Math.NET expert, so I don't know. The center doesn't seem to be at zero.

Drawing zig-zag lines is much slower than drawing straight lines

While using a self-written graphing control, I noticed that painting the graph was much slower when displaying noisy data than when displaying clean data.
I dug further into it and narrowed the problem down to its bare minimum: drawing the same number of lines with varying Y values versus drawing lines with the same Y value.
So, for example, I put together the following tests. I generate lists of points: one with random Y values, one with a constant Y, and one with a zig-zag Y pattern.
private List<PointF> GenerateRandom(int n, int width, int height)
{
    //Generate random pattern
    Random rnd = new Random();
    float stepwidth = Convert.ToSingle(width / n);
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = Convert.ToSingle(height * rnd.NextDouble());
        res.Add(new PointF(x, y));
    }
    return res;
}

private List<PointF> GenerateUnity(int n, int width, int height)
{
    //Generate points along a simple line
    float stepwidth = Convert.ToSingle(width / n);
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = mid;
        res.Add(new PointF(x, y));
    }
    return res;
}

private List<PointF> GenerateZigZag(int n, int width, int height)
{
    //Generate an up/down list
    float stepwidth = Convert.ToSingle(width / n);
    float mid = Convert.ToSingle(height / 2);
    float lastx = 0;
    float lasty = mid;
    List<PointF> res = new List<PointF>();
    res.Add(new PointF(lastx, lasty));
    var state = false;
    for (int i = 1; i <= n; i++)
    {
        var x = stepwidth * i;
        var y = mid - (state ? 50 : -50);
        res.Add(new PointF(x, y));
        state = !state;
    }
    return res;
}
I now draw each list of points a few times and compare how long it takes:
private void DoTheTest()
{
    Bitmap bmp = new Bitmap(970, 512);
    var random = GenerateRandom(2500, bmp.Width, bmp.Height).ToArray();
    var unity = GenerateUnity(2500, bmp.Width, bmp.Height).ToArray();
    var ZigZag = GenerateZigZag(2500, bmp.Width, bmp.Height).ToArray();
    using (Graphics g = Graphics.FromImage(bmp))
    {
        var tUnity = BenchmarkDraw(g, 200, unity);
        var tRandom = BenchmarkDraw(g, 200, random);
        var tZigZag = BenchmarkDraw(g, 200, ZigZag);
        MessageBox.Show(tUnity.ToString() + "\r\n" + tRandom.ToString() + "\r\n" + tZigZag.ToString());
    }
}

private double BenchmarkDraw(Graphics g, int n, PointF[] Points)
{
    var Times = new List<double>();
    for (int i = 1; i <= n; i++)
    {
        g.Clear(Color.White);
        System.DateTime d3 = DateTime.Now;
        DrawLines(g, Points);
        System.DateTime d4 = DateTime.Now;
        Times.Add((d4 - d3).TotalMilliseconds);
    }
    return Times.Average();
}

private void DrawLines(Graphics g, PointF[] Points)
{
    g.DrawLines(Pens.Black, Points);
}
I come up with the following durations per draw:
Straight Line: 0.095 ms
Zig-Zag Pattern: 3.24 ms
Random Pattern: 5.47 ms
So it seems to get progressively worse the more variation there is in the lines to be drawn, and that matches the real-world effect I encountered in the control painting mentioned at the beginning.
My questions are thus the following:
Why does it make such a brutal difference which lines are to be drawn?
How can I improve the drawing speed for the noisy data?
Three reasons come to mind:
Line length: Depending on the actual numbers, sloped lines may be longer by just a few pixels, or by a substantial factor. Looking at your code, I suspect the latter.
Algorithm: Drawing sloped lines takes some algorithm to find the next pixels. Even fast drawing routines need to do some computation, as opposed to vertical or horizontal lines, which run straight through the pixel arrays.
Anti-aliasing: Unless you turn off anti-aliasing completely (with all the ugly consequences), the number of pixels to paint will be around 2-3 times higher, as all the anti-aliasing pixels above and below the center lines must also be calculated and drawn. Not to forget calculating their colors!
The remedy for the last part is obviously to turn off anti-aliasing, but the other issues are simply the way things are. So best don't worry, and be happy about the speedy straight lines :-)
If you really have a lot of lines, or your lines can be very long (a few times the size of the screen), or you have a lot of almost-zero-pixel lines, you have to write code to reduce the useless drawing of lines.
Well, here are some ideas:
If you draw many lines at the same x, you could replace them with a single line between the min and max y at that x.
If a line goes way beyond the screen boundary, you should clip it.
If a line is completely outside of the visible area, you should skip it.
If a line has zero length, you should not draw it.
If a line is a single pixel long, you should draw only that pixel.
Obviously, the benefit depends a lot on how many lines you draw... and the alternative might not give exactly the same result.
In practice, if you draw a chart on screen and display only useful information, it should be pretty fast on modern hardware.
If you use styles or colors, though, optimizing the display of the data might not be as trivial.
Alternatively, there are charting components optimized for displaying large data sets. The good ones are generally expensive, but they might still be worth it. Trials are often available, so you can get a good idea of how much you might gain in performance before deciding what to do.
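The first suggestion above can be sketched like this (a rough reduction, using value tuples instead of PointF to keep the sketch self-contained; drawing order and styling concerns are ignored):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Collapse all points that fall on the same x pixel into one min-to-max
// vertical segment, which is all the eye sees at that column anyway.
static List<(float X, float Y)> ReduceToColumns(IEnumerable<(float X, float Y)> points)
{
    var result = new List<(float X, float Y)>();
    foreach (var col in points.GroupBy(p => (int)p.X).OrderBy(g => g.Key))
    {
        float min = col.Min(p => p.Y);
        float max = col.Max(p => p.Y);
        result.Add((col.Key, min));
        if (max > min) result.Add((col.Key, max));   // single point when min == max
    }
    return result;
}
```

For 2500 noisy samples squeezed into 970 pixels, this caps the drawn geometry at roughly two points per column, regardless of how noisy the data is.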

Faster method for drawing in C#

I'm trying to draw the Mandelbrot fractal, using the following method that I wrote:
public void Mendelbrot(int MAX_Iterations)
{
    int iterations = 0;
    for (float x = -2; x <= 2; x += 0.001f)
    {
        for (float y = -2; y <= 2; y += 0.001f)
        {
            Graphics gpr = panel.CreateGraphics();
            //System.Numerics
            Complex C = new Complex(x, y);
            Complex Z = new Complex(0, 0);
            for (iterations = 0; iterations < MAX_Iterations && Complex.Abs(Z) < 2; iterations++)
                Z = Complex.Pow(Z, 2) + C;
            //ARGB color based on iterations
            int r = (iterations % 32) * 7;
            int g = (iterations % 16) * 14;
            int b = (iterations % 128) * 2;
            int a = 255;
            Color c = Color.FromArgb(a, r, g, b);
            Pen p = new Pen(c);
            //Transform the coordinates x (real part) and y (imaginary part)
            //of the Gauss graph into x and y of the Cartesian graph
            float X = (panel.Width * (x + 2)) / 4;
            float Y = (panel.Height * (y + 2)) / 4;
            //Draw a single pixel using a rectangle
            gpr.DrawRectangle(p, X, Y, 1, 1);
        }
    }
}
It works, but it's slow, and I need to add the possibility of zooming, which isn't feasible with this method of drawing, so I need something fast. I tried a FastBitmap, but it isn't enough: its SetPixel doesn't increase the drawing speed. So I'm searching for something very fast. I know C# isn't like C and ASM, but it would be interesting to do this in C# and WinForms.
Suggestions are welcome.
EDIT: Mendelbrot Set Zoom Animation
I assume it would be significantly more efficient to first populate your RGB values into a byte array in memory, then write them in bulk into a Bitmap using LockBits and Marshal.Copy (follow the link for an example), and finally draw the bitmap using Graphics.DrawImage.
You need to understand some essential concepts, such as stride and image formats, before you can get this to work.
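A sketch of that bulk approach, adapted from the question's loop (the buffer layout here is 32bpp BGRA, the color mapping mirrors the question's, and the LockBits/Marshal.Copy step is indicated in the usage note; the function name is my own):

```csharp
using System;
using System.Numerics;

// Compute every pixel's color into a managed byte array in one pass,
// with no per-pixel Graphics calls. The array can then be copied into a
// Bitmap in one go and drawn with Graphics.DrawImage.
static byte[] RenderMandelbrot(int width, int height, int maxIterations)
{
    var pixels = new byte[width * height * 4]; // 32bpp BGRA
    for (int py = 0; py < height; py++)
    {
        for (int px = 0; px < width; px++)
        {
            // Map the pixel to the complex plane [-2, 2] x [-2, 2].
            var c = new Complex(-2.0 + 4.0 * px / width,
                                -2.0 + 4.0 * py / height);
            var z = Complex.Zero;
            int it = 0;
            while (it < maxIterations && z.Magnitude < 2.0)
            {
                z = z * z + c;   // cheaper than Complex.Pow(z, 2)
                it++;
            }
            int i = (py * width + px) * 4;
            pixels[i] = (byte)((it % 128) * 2);      // B
            pixels[i + 1] = (byte)((it % 16) * 14);  // G
            pixels[i + 2] = (byte)((it % 32) * 7);   // R
            pixels[i + 3] = 255;                     // A
        }
    }
    return pixels;
}
```

After this, `bmp.LockBits(...)`, `Marshal.Copy(pixels, 0, bitmapData.Scan0, pixels.Length)` and `bmp.UnlockBits(...)` transfer the whole frame at once, and zooming becomes a matter of changing the pixel-to-complex-plane mapping.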
As a comment said, move CreateGraphics() out of the double loop; this alone is already a good improvement.
But also
Enable double buffering
For zooming, use the matrix transformation functions, like:
ScaleTransform
RotateTransform
TranslateTransform
An interesting article on CodeProject can be found here. It goes a little further than just function calls by actually explaining the matrix calculus (in a simple way, don't worry), which is good and not difficult to understand, so you know what is going on behind the scenes.
