Operating on images in emgucv - c#

I have 2 Emgu.CV.Image images:
Image<Gray, byte> img1 = new Image<Gray, byte>(@"xyz.gif");
Image<Gray, byte> img2 = new Image<Gray, byte>(@"abc.gif");
I want to perform operations on the images, like adding them pixel by pixel (without using built-in functions):
for (int i = 0; i < width1; i++)
{
    for (int j = 0; j < height1; j++)
    {
        img2[i][j] = img1[i][j] + img2[i][j];
    }
}
How can I do so?

If you need to alter the image pixel by pixel, then it's back to using the Image.Data property.
If you are using colour images, note that Data is a three-dimensional array indexed as [row, column, channel], with one layer per colour channel (for the usual Image<Bgr, byte> that is Blue, Green and Red in layers 0, 1 and 2 respectively). The following code will allow you to access data from the image and adjust it.
for (int i = 0; i < height1; i++)      // rows
{
    for (int j = 0; j < width1; j++)   // columns
    {
        img1.Data[i, j, 0] = img1.Data[i, j, 0] + img2.Data[i, j, 0];
        img1.Data[i, j, 1] = img1.Data[i, j, 1] + img2.Data[i, j, 1];
        img1.Data[i, j, 2] = img1.Data[i, j, 2] + img2.Data[i, j, 2];
    }
}
For the int<>byte conversion error, you need to cast the result to a byte (you can assign ints to Data provided they're in range), i.e.
img1.Data[i, j, 0] = (byte)(img1.Data[i, j, 0] + img2.Data[i, j, 0]);
This tells .NET that you are willing to accept data loss. Note, however, that you are adding two bytes of image data. Their values are 0-255, so the sum can be anywhere from 0 to 510. To account for this you must normalise your result back to the 0-255 range expected in images, e.g. by averaging:
img1.Data[i, j, 0] = (byte)((img1.Data[i, j, 0] + img2.Data[i, j, 0]) / 2);
As you are using greyscale images, your image data will only have one layer. The code is similar in format, except you only use layer 0:
for (int i = 0; i < height1; i++)      // rows
{
    for (int j = 0; j < width1; j++)   // columns
    {
        gray_image1.Data[i, j, 0] = (byte)((gray_image1.Data[i, j, 0] + gray_image2.Data[i, j, 0]) / 2);
    }
}
The TDepth type parameter lets you change the type of data held in the .Data structure. While some conversions are not supported, doubles are, mainly because the way they are stored allows code to work with them more efficiently. Using this is good practice, but not essential.
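Putting the pieces together, here is a minimal, self-contained sketch of pixel-by-pixel addition of two greyscale images. The file names are placeholders and the choice of averaging (rather than clamping) is an assumption for illustration, not the only valid approach:
using System;
using Emgu.CV;
using Emgu.CV.Structure;

class AddImagesExample
{
    static void Main()
    {
        // Placeholder file names; both images are assumed to be the same size.
        Image<Gray, byte> img1 = new Image<Gray, byte>(@"xyz.gif");
        Image<Gray, byte> img2 = new Image<Gray, byte>(@"abc.gif");

        // Data is indexed as [row, column, channel]; greyscale images have a single channel 0.
        for (int i = 0; i < img1.Rows; i++)
        {
            for (int j = 0; j < img1.Cols; j++)
            {
                // Average the two pixels so the result stays within 0-255.
                int sum = img1.Data[i, j, 0] + img2.Data[i, j, 0];
                img1.Data[i, j, 0] = (byte)(sum / 2);
            }
        }

        img1.Save(@"result.png");
    }
}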

ImageSharp: convert from PixelFormats to ColorSpace

I am trying to use ImageSharp for some image processing. I would like to get HSL values for an individual pixel. For that I think that I need to convert PixelFormat to a ColorSpace. How do I convert to/access Hsl color space values?
I have tried the following, using ColorSpaceConverter, to no avail.
for (int y = 0; y < image.Height; y++)
{
    Span<Rgb24> pixelRowSpan = image.GetPixelRowSpan(y);
    Span<Hsl> hslRowSpan = new Span<Hsl>();
    var converter = new ColorSpaceConverter();
    converter.Convert(pixelRowSpan, hslRowSpan);
}
I do get the following errors:
error CS1503: Argument 1: cannot convert from
'System.Span<SixLabors.ImageSharp.PixelFormats.Rgb24>' to
'System.ReadOnlySpan<SixLabors.ImageSharp.ColorSpaces.CieLch>'
error CS1503: Argument 2: cannot convert from
'System.Span<SixLabors.ImageSharp.ColorSpaces.Hsl>' to 'System.Span<SixLabors.ImageSharp.ColorSpaces.CieLab>'
Rgb24 has an implicit conversion to Rgb, but as you have discovered, that doesn't allow implicit conversion of spans.
I would allocate a pooled buffer equivalent to one row of Rgb outside the loop and populate the buffer for each y.
// I would probably pool these buffers.
Span<Rgb> rgb = new Rgb[image.Width];
Span<Hsl> hsl = new Hsl[image.Width];
ColorSpaceConverter converter = new();

for (int y = 0; y < image.Height; y++)
{
    Span<Rgb24> row = image.GetPixelRowSpan(y);
    for (int x = 0; x < row.Length; x++)
    {
        rgb[x] = row[x];
    }

    converter.Convert(rgb, hsl);
}
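Once the conversion has run, the hsl buffer holds one Hsl value per pixel of the current row. As a small follow-up sketch (it belongs inside the y loop above, right after the Convert call; the lightness threshold and the console output are purely illustrative):
// Hsl exposes H (0-360 degrees), S and L (both 0-1) as float properties.
for (int x = 0; x < hsl.Length; x++)
{
    Hsl value = hsl[x];
    if (value.L > 0.5f)
    {
        Console.WriteLine($"({x}, {y}) H={value.H} S={value.S} L={value.L}");
    }
}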

Accessing processed values from FFT

I am attempting to use the Lomont FFT to obtain complex numbers in order to build a spectrogram / spectral density chart in C#.
I am having trouble understanding how to return values from the class.
Here is the code I have put together thus far which appears to be working.
int read = 0;
Double[] data;
byte[] buffer = new byte[1024];
FileStream wave = new FileStream(args[0], FileMode.Open, FileAccess.Read);
read = wave.Read(buffer, 0, 44);
read = wave.Read(buffer, 0, 1024);
data = new Double[read];
for (int i = 0; i < read; i += 2)
{
    data[i] = BitConverter.ToInt16(buffer, i) / 32768.0;
    Console.WriteLine(data[i]);
}
LomontFFT LFFT = new LomontFFT();
LFFT.FFT(data, true);
What I am not clear on is how to return/access the values from the Lomont FFT implementation back in my application (console).
Being pretty new to C# development, I think I am missing something fundamental about how to retrieve the processed values from the instance of the Lomont class, or perhaps I am even calling it incorrectly.
Console.WriteLine(LFFT.A); // Returns 0
Console.WriteLine(LFFT.B); // Returns 1
I have been searching for a code snippet or explanation of how to do this, but so far have come up with nothing that I understand or explains this particular aspect of the issue I am facing. Any guidance would be greatly appreciated.
A subset of the results held in the data array noted in the code above can be found below and, based on my current understanding, they appear to be valid:
0.00531005859375
0.0238037109375
0.041473388671875
0.0576171875
0.07183837890625
0.083465576171875
0.092193603515625
0.097625732421875
0.099639892578125
0.098114013671875
0.0931396484375
0.0848388671875
0.07354736328125
0.05963134765625
0.043609619140625
0.026031494140625
0.007476806640625
-0.011260986328125
-0.0296630859375
-0.047027587890625
-0.062713623046875
-0.076141357421875
-0.086883544921875
-0.09454345703125
-0.098785400390625
-0.0994873046875
-0.0966796875
-0.090362548828125
-0.080810546875
-0.06842041015625
-0.05352783203125
-0.036712646484375
-0.0185546875
What am I actually attempting to do? (perspective)
I am looking to load a wave file into a console application and return a spectrogram/spectral density chart/image as a jpg/png for further processing.
The wave files I am reading are mono.
UPDATE 1
I receive slightly different results depending on which FFT function is used.
Using RealFFT
for (int i = 0; i < read; i += 2)
{
    data[i] = BitConverter.ToInt16(buffer, i) / 32768.0;
    //Console.WriteLine(data[i]);
}

LomontFFT LFFT = new LomontFFT();
LFFT.RealFFT(data, true);

for (int i = 0; i < buffer.Length / 2; i++)
{
    System.Console.WriteLine("{0}",
        Math.Sqrt(data[2 * i] * data[2 * i] + data[2 * i + 1] * data[2 * i + 1]));
}
Partial Result of RealFFT
0.314566983321381
0.625242818210924
0.30314888696868
0.118468857708093
0.0587697011760449
0.0369034115568654
0.0265842582236275
0.0207195964060356
0.0169601273233317
0.0143745438577886
0.012528799609089
0.0111831275153128
0.0102313284519146
0.00960198279358434
0.00920236001619566
Using FFT
for (int i = 0; i < read; i += 2)
{
    data[i] = BitConverter.ToInt16(buffer, i) / 32768.0;
    //Console.WriteLine(data[i]);
}

double[] bufferB = new double[2 * data.Length];
for (int i = 0; i < data.Length; i++)
{
    bufferB[2 * i] = data[i];
    bufferB[2 * i + 1] = 0;
}

LomontFFT LFFT = new LomontFFT();
LFFT.FFT(bufferB, true);

for (int i = 0; i < bufferB.Length / 2; i++)
{
    System.Console.WriteLine("{0}",
        Math.Sqrt(bufferB[2 * i] * bufferB[2 * i] + bufferB[2 * i + 1] * bufferB[2 * i + 1]));
}
Partial Result of FFT:
0.31456698332138
0.625242818210923
0.303148886968679
0.118468857708092
0.0587697011760447
0.0369034115568653
0.0265842582236274
0.0207195964060355
0.0169601273233317
0.0143745438577886
0.012528799609089
0.0111831275153127
0.0102313284519146
0.00960198279358439
0.00920236001619564
Looking at the LomontFFT.FFT documentation:
Compute the forward or inverse Fourier Transform of data, with
data containing complex valued data as alternating real and
imaginary parts. The length must be a power of 2. The data is
modified in place.
This tells us a few things. First, the function expects complex-valued data, whereas your data is real. A quick fix for this is to create another buffer of twice the size and set all the imaginary parts to 0:
double[] buffer = new double[2 * data.Length];
for (int i = 0; i < data.Length; i++)
{
    buffer[2 * i] = data[i];
    buffer[2 * i + 1] = 0;
}
The documentation also tells us that the computation is done in place. That means that after the call to FFT returns, the input array is replaced with the computed result. You could thus print the spectrum with:
LomontFFT LFFT = new LomontFFT();
LFFT.FFT(buffer, true);

for (int i = 0; i < buffer.Length / 2; i++)
{
    System.Console.WriteLine("{0}",
        Math.Sqrt(buffer[2 * i] * buffer[2 * i] + buffer[2 * i + 1] * buffer[2 * i + 1]));
}
Note that since your input data is real-valued, you could also use LomontFFT.RealFFT. In that case, given a slightly different packing rule, you would obtain the FFT results using:
LomontFFT LFFT = new LomontFFT();
LFFT.RealFFT(data, true);

System.Console.WriteLine("{0}", Math.Abs(data[0]));
for (int i = 1; i < data.Length / 2; i++)
{
    System.Console.WriteLine("{0}",
        Math.Sqrt(data[2 * i] * data[2 * i] + data[2 * i + 1] * data[2 * i + 1]));
}
System.Console.WriteLine("{0}", Math.Abs(data[1]));
This would give you the non-redundant lower half of the spectrum (unlike LomontFFT.FFT, which provides the entire spectrum). Also, numerical differences on the order of double precision (around 1e-16 times the spectrum's peak value) with respect to LomontFFT.FFT are to be expected.
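Since the stated goal is a spectrogram, a common next step is to convert each magnitude to decibels before mapping it to a pixel intensity. A minimal sketch of that step, assuming the RealFFT result in data from above (the -120 dB display floor is an arbitrary assumption):
// Print the lower half of the spectrum in decibels instead of linear magnitude.
const double floorDb = -120.0; // assumed display floor, avoids log10(0)
for (int i = 1; i < data.Length / 2; i++)
{
    double mag = Math.Sqrt(data[2 * i] * data[2 * i] + data[2 * i + 1] * data[2 * i + 1]);
    double db = mag > 0 ? Math.Max(floorDb, 20.0 * Math.Log10(mag)) : floorDb;
    System.Console.WriteLine("{0:F2} dB", db);
}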

smoothing stacked line graph

I have a C# Windows Forms application that generates a stacked line graph using the standard MS Chart control, as in the example below.
Is there a way of "smoothing" the lines by formatting the series or some other property?
Looking at MSDN and Google I can't seem to find a way to do this; in Excel there is a Series.Smooth property...
Have I missed it, or is it not possible?
If you liked the smooth look of the SplineAreas you can calculate the necessary values to get just that look:
A few notes:
I have reversed the order of the series; there are many ways to get the colors right (instead, one probably should accumulate in reverse).
The stacked DataPoints need to be aligned, as usual, and any empty DataPoints should have their Y-Values set to 0.
Of course, in the new series you can't access the actual data values any longer, as you now have the accumulated values instead; at least not without reversing the calculation. So if you need them, keep them around somewhere; the new DataPoints' Tag property is one option (see the sketch after the example code below).
You can control the 'smoothness' of each Series by setting its LineTension custom attribute:
chart2.Series[0].SetCustomProperty("LineTension", "0.15");
Here is the full example code that creates the above screenshots calculating a 'stacked' SplineArea chart2 from the data in a StackedArea chart1:
// preparation
for (int i = 0; i < 4; i++)
{
    Series s = chart1.Series.Add("S" + i);
    s.ChartType = SeriesChartType.StackedArea;
    Series s2 = chart2.Series.Add("S" + i);
    s2.ChartType = SeriesChartType.SplineArea;
}

for (int i = 0; i < 30; i++) // some test data
{
    chart1.Series[0].Points.AddXY(i, Math.Abs(Math.Sin(i / 8f)));
    chart1.Series[1].Points.AddXY(i, Math.Abs(Math.Sin(i / 4f)));
    chart1.Series[2].Points.AddXY(i, Math.Abs(Math.Sin(i / 1f)));
    chart1.Series[3].Points.AddXY(i, Math.Abs(Math.Sin(i / 0.5f)));
}

// the actual calculations:
int sCount = chart1.Series.Count;
for (int i = 0; i < chart1.Series[0].Points.Count; i++)
{
    double v = chart1.Series[0].Points[i].YValues[0];
    chart2.Series[sCount - 1].Points.AddXY(i, v);
    for (int j = 1; j < sCount; j++)
    {
        v += chart1.Series[j].Points[i].YValues[0];
        chart2.Series[sCount - j - 1].Points.AddXY(i, v);
    }
}

// optionally control the tension:
chart2.Series[0].SetCustomProperty("LineTension", "0.15");
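As mentioned in the notes above, one way to keep the original (non-accumulated) values available is to stash them in each new DataPoint's Tag. A minimal sketch, replacing the inner AddXY call in the loop above (AddXY returns the index of the point it just added):
int index = chart2.Series[sCount - j - 1].Points.AddXY(i, v);
chart2.Series[sCount - j - 1].Points[index].Tag = chart1.Series[j].Points[i].YValues[0];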

EmguCv: Reduce the grayscales

Is there a way to reduce the grayscales of a gray image in OpenCV?
Normally I have gray values from 0 to 255 for an
Image<Gray, byte> inputImage.
In my case I just need gray values from 0-10. Is there a good way to do that with OpenCV, especially for C#?
There's nothing built into OpenCV that does this sort of thing directly.
Nevertheless, you can write something yourself. Take a look at this C++ implementation and just translate it to C#:
void colorReduce(cv::Mat& image, int div = 64)
{
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // number of elements per line
    for (int j = 0; j < nl; j++)
    {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);
        for (int i = 0; i < nc; i++)
        {
            // process each pixel
            data[i] = data[i] / div * div + div / 2;
        }
    }
}
Just send a grayscale Mat to this function and play with the div parameter.
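For reference, a rough C# translation of the same idea for an Emgu CV Image<Gray, byte> might look like the sketch below; the method name and the default div of 26 (which leaves roughly ten gray levels out of 0-255) are assumptions for illustration:
// Quantise a greyscale image in place, snapping each pixel to the centre of its bin.
static void ColorReduce(Image<Gray, byte> image, int div = 26)
{
    byte[,,] data = image.Data; // indexed as [row, column, channel]
    for (int y = 0; y < image.Rows; y++)
    {
        for (int x = 0; x < image.Cols; x++)
        {
            data[y, x, 0] = (byte)(data[y, x, 0] / div * div + div / 2);
        }
    }
}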

New image overlays previous bitmap

There are a number of posts about this, but I still can't figure it out. I am rather new at this, so please be forgiving.
I display an image, then grab a new image and try to display it. When the new image is displayed, it has remnants of the old image. I have tried Picture1.Image = null to no avail.
Is it an issue with managed memory? I suspect it has to do with how the memory is being managed - that somehow the code copies a new image over an old image in a way that leaves some data from the previous image.
Here is the code to display the data in scaled1 (from this helpful earlier post):
Edit:
Code added showing processing of arrays that are plotted. The overlaying behavior stops if the arrays are cleared using the Array.Clear method. Perhaps when this is cleared up I can post a canonical snippet demonstrating the issue.
This reframes the question as: why do the arrays need to be cleared when every value of the array is rewritten? How can an array retain information from previous values?
ushort[] frame = null;
byte[] scaled1 = null;
double[][] frameringSin;
double[][] frameringCos;
double[] sumsin;
double[] sumcos;

frame = new ushort[mImageWidth * mImageHeight];
scaled1 = new byte[mImageWidth * mImageHeight];
frameringSin = new double[RingSize][];
frameringCos = new double[RingSize][];
ringsin = new double[RingSize];
ringcos = new double[RingSize];

// Fill array with images
for (int ring = 0; ring < nN; ++ring)
{
    mCamera.GrabFrameReduced(framering[ring], reduced, out preset);
}

// Process images
for (int i = 0; i < nN; ++i)
{
    Array.Clear(frameringSin[i], 0, frameringSin.Length);
    Array.Clear(frameringCos[i], 0, frameringSin.Length);
}
Array.Clear(sumsin, 0, sumsin.Length);
Array.Clear(sumcos, 0, sumcos.Length);

for (int r = 0; r < nN; ++r)
{
    for (int i = 0; i < frame.Length; ++i) // up to 12 ms
    {
        frameringSin[r][i] = framering[r][i] * ringsin[r] / nN;
        frameringCos[r][i] = framering[r][i] * ringcos[r] / nN;
    }
}

for (int i = 0; i < sumsin.Length; ++i) // up to 25 ms
{
    for (int r = 0; r < nN; ++r)
    {
        sumsin[i] += frameringSin[r][i];
        sumcos[i] += frameringCos[r][i];
    }
}

for (int r = 0; r < nN; ++r)
{
    for (int i = 0; i < sumsin.Length; ++i)
    {
        A[i] = Math.Sqrt(sumsin[i] * sumsin[i] + sumcos[i] * sumcos[i]);
    }

    // extract scaling parameters
    ...

    // Scale Image
    for (i1 = 0; i1 < frame.Length; ++i1)
        scaled1[i1] = (byte)((Math.Min(Math.Max(min1, frameA[i1]), max1) - min1) * scale1);

    bmp1 = new Bitmap(mImageWidth, mImageHeight, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
    var bdata1 = bmp1.LockBits(new Rectangle(new Point(0, 0), bmp1.Size), ImageLockMode.WriteOnly, bmp1.PixelFormat);
    try
    {
        Marshal.Copy(scaled1, 0, bdata1.Scan0, scaled1.Length);
    }
    finally
    {
        bmp1.UnlockBits(bdata1);
    }

    Picture1.Image = bmp1;
    Picture1.Refresh();
Actually, I first suspected you were not replacing all values in the arrays because of the for cycles, i.e. that they should look like this:
for (i1 = 0; i1 < frame.Length; i1++)
    scaled1[i1] = (byte)((Math.Min(Math.Max(min1, frameA[i1]), max1) - min1)
        * scale1);
However, the difference (i++ vs ++i) does not actually matter: as the increment clause of a for loop both forms behave identically in C#, so neither skips the first or last index (see the edit below).
Also, note that for performance reasons, it's very handy if you're going through the array like this:
for (var i = 0; i < array.Length; i++)
/* do work with array[i] */
The JIT compiler recognizes this and avoids bounds checks, because it knows the index can never go out of range. When you're doing a lot of work with arrays, this can give you a massive performance boost, even if you access multiple arrays through the same index (one of them will not have the checks, the others will - it still saves a lot of work).
The default JIT isn't very smart about this (it has to be quite fast), so pretty much anything else will reintroduce the bounds check. If performance is a concern for you, you'd want to profile the code anyway, of course.
EDIT: Ah, my bad. Anyway, I believe your problem isn't having to clear the frameringXXX arrays, but rather, the sumsin and sumcos arrays - you're always adding to those, so you'd be adding to the value that was already there, rather than starting from zero again. So you need to reset those arrays to zeroes first (which is what Array.Clear does).
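To make that concrete, here is a minimal sketch of the accumulation step with the reset in place, using the variable names from the question's code above:
// Zero the accumulators before summing, otherwise values from the
// previous set of frames leak into the new result.
Array.Clear(sumsin, 0, sumsin.Length);
Array.Clear(sumcos, 0, sumcos.Length);

for (int r = 0; r < nN; ++r)
{
    for (int i = 0; i < sumsin.Length; ++i)
    {
        sumsin[i] += frameringSin[r][i];
        sumcos[i] += frameringCos[r][i];
    }
}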
