On MSDN I found this definition of the image stride: in short, it is the width in bytes of one line (scan line) in the buffer.
Now I have an RGB image in a similar buffer. I have been given the stride and the image width in pixels, and I want to know how many padding bytes have been added.
Is this simply stride - 3 * imageWidth, since I have 3 bytes (RGB) per pixel?
unsafe private void setPrevFrame(IntPtr pBuffer)
{
    prevFrame = new byte[m_videoWidth, m_videoHeight, 3];
    Byte* b = (byte*) pBuffer;
    for (int i = 0; i < m_videoWidth; i++)
    {
        for (int j = 0; j < m_videoHeight; j++)
        {
            for (int k = 0; k < 3; k++)
            {
                prevFrame[i,j,k] = *b;
                b++;
            }
        }
        b += (m_stride - 3 * m_videoHeight);
    }
}
It's the last line of code, the stride adjustment, that I'm not sure about.
Stride will be padded to a 4-byte boundary, so if your image width is W and you have 3 bytes per pixel:
int strideWidth = ((W * 3) - 1) / 4 * 4 + 4; // rounds W * 3 up to the next multiple of 4
//or simply
int strideWidth = Math.Abs(bData.Stride);
//and so
int padding = strideWidth - W * 3;
If you are using the .Stride property of the BitmapData object, you must take its absolute value (Math.Abs), because it can be negative for bottom-up bitmaps.
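For reference, here is a quick sketch of where that Stride value comes from when you lock a bitmap yourself (24bpp assumed; the file name is only an example):
Bitmap bmp = new Bitmap("input.bmp");
BitmapData bData = bmp.LockBits(
    new Rectangle(0, 0, bmp.Width, bmp.Height),
    ImageLockMode.ReadOnly,
    PixelFormat.Format24bppRgb);

int strideWidth = Math.Abs(bData.Stride);  // Stride can be negative for bottom-up bitmaps
int padding = strideWidth - bmp.Width * 3; // padding bytes at the end of each row

bmp.UnlockBits(bData);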
Following your edit of the question:
There is no need for you to know the padding in order to iterate all the bytes of the image:
unsafe private void setPrevFrame(IntPtr pBuffer)
{
    for (int j = 0; j < m_videoHeight; j++)
    {
        byte* b = (byte*)pBuffer + j * m_stride; // the first byte of the first pixel of the row
        for (int i = 0; i < m_videoWidth; i++)
        {
            ...
        }
    }
}
Your previous method will also fail on bottom-up images, as described in the MSDN link you posted.
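For completeness, here is a sketch of that row-pointer approach reusing the question's field names (it assumes a 24bpp buffer exposed through a GDI+ BitmapData object; illustrative only, not the poster's exact code). Scan0 points at the first scan line and Stride is simply negative for bottom-up bitmaps, so Scan0 + j * Stride lands on row j in either orientation:
unsafe private void setPrevFrame(BitmapData bd)
{
    prevFrame = new byte[m_videoWidth, m_videoHeight, 3];
    for (int j = 0; j < m_videoHeight; j++)
    {
        // Start of row j; any padding at the end of the previous row is skipped automatically.
        byte* row = (byte*)bd.Scan0 + j * bd.Stride;
        for (int i = 0; i < m_videoWidth; i++)
        {
            for (int k = 0; k < 3; k++)
            {
                prevFrame[i, j, k] = row[i * 3 + k];
            }
        }
    }
}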
Related
I have a byte[] for an RGBA array. I have the following method that flips the image vertically:
private byte[] FlipPixelsVertically(byte[] frameData, int height, int width)
{
    byte[] data = new byte[frameData.Length];
    int k = 0;
    for (int j = height - 1; j >= 0 && k < height; j--)
    {
        for (int i = 0; i < width * 4; i++)
        {
            data[k * width * 4 + i] = frameData[j * width * 4 + i];
        }
        k++;
    }
    return data;
}
The reason I am creating a new byte[] is that I do not want to alter the contents of frameData, since the original info will be used elsewhere. So for now, I just have a nested for loop that copies each byte to the proper place in data.
As height and width increase, this will become an expensive operation. How can I optimize this so that the copy/swap is faster?
Using Buffer.BlockCopy:
private byte[] FlipPixelsVertically(byte[] frameData, int height, int width)
{
    byte[] data = new byte[frameData.Length];
    for (int k = 0; k < height; k++)
    {
        int j = height - k - 1; // destination row is the mirror of the source row
        Buffer.BlockCopy(
            frameData, k * width * 4,
            data, j * width * 4,
            width * 4); // copy one whole row (4 bytes per pixel) in a single call
    }
    return data;
}
I'm trying to write an efficient algorithm to scale down YUV 4:2:2 by a factor of 2, and one which doesn't require a conversion to RGB (which is CPU intensive).
I've seen plenty of code on Stack Overflow for YUV to RGB conversion, but only one example of scaling for YUV 4:2:0, which I have based my code on. However, this produces an image which is effectively 3 columns of the same image with corrupt colours, so something is wrong with the algorithm when applied to 4:2:2.
Can anybody see what is wrong with this code?
public static byte[] HalveYuv(byte[] data, int imageWidth, int imageHeight)
{
    byte[] yuv = new byte[imageWidth / 2 * imageHeight / 2 * 3 / 2];
    int i = 0;
    for (int y = 0; y < imageHeight; y += 2)
    {
        for (int x = 0; x < imageWidth; x += 2)
        {
            yuv[i] = data[y * imageWidth + x];
            i++;
        }
    }
    for (int y = 0; y < imageHeight / 2; y += 2)
    {
        for (int x = 0; x < imageWidth; x += 4)
        {
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
            i++;
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x + 1)];
            i++;
        }
    }
    return yuv;
}
A fast way to generate a low-quality thumbnail would be to discard half of the data in each dimension.
We break the image into 4x2 blocks of pixels - each pair of pixels in a block is represented by 4 bytes. In the down-scaled image, we take the colour values of the first 2 pixels in the block by copying the first 4 bytes, whilst discarding the other 12 bytes' worth of data.
This scaling can be generalized to any power of 2 (1/2, 1/4, 1/8, ...) - the method is quick because it doesn't use any interpolation. It gives a lower-quality image which appears blocky, however - for better results consider some sampling approach.
public static byte[] FastResize(
    byte[] data,
    int imageWidth,
    int imageHeight,
    int scaleDownExponent)
{
    var scaleDownFactor = (uint)Math.Pow(2, scaleDownExponent);
    var outputImageWidth = imageWidth / scaleDownFactor;
    var outputImageHeight = imageHeight / scaleDownFactor;
    // 2 bytes per pixel.
    byte[] yuv = new byte[outputImageWidth * outputImageHeight * 2];
    var pos = 0;
    // Process every scaleDownFactor-th line.
    for (uint pixelY = 0; pixelY < imageHeight; pixelY += scaleDownFactor)
    {
        // Work in blocks of 2*scaleDownFactor pixels; only the first pair is kept.
        for (uint pixelX = 0; pixelX < imageWidth; pixelX += 2 * scaleDownFactor)
        {
            // Byte position of the first pixel of the block (2 bytes per pixel).
            var start = ((pixelY * imageWidth) + pixelX) * 2;
            yuv[pos] = data[start];
            yuv[pos + 1] = data[start + 1];
            yuv[pos + 2] = data[start + 2];
            yuv[pos + 3] = data[start + 3];
            pos += 4;
        }
    }
    return yuv;
}
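As a usage sketch (the dimensions and variable name are only examples): scaleDownExponent = 1 halves both dimensions, 2 quarters them, and so on.
// Example only: halve a 640x480 YUV 4:2:2 frame to 320x240; 'frame' is your packed input buffer.
byte[] half = FastResize(frame, 640, 480, 1);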
I assume that the original data is in the following order (as it seems to be from your example code): first there are the luminance (Y) values of the pixels of the image (size = imageWidth*imageHeight bytes). After that come the chrominance components U and V, interleaved so that the values for a single pixel follow each other. This means that the total size of the original image is 3*size.
Now, 4:2:2 subsampling means that every other horizontal chrominance sample is discarded. This reduces the data to size + 0.5*size + 0.5*size = 2*size, i.e., the luminance is kept completely and both chrominance components are halved. Therefore, the result image should be allocated as:
byte[] yuv = new byte[2*imageWidth*imageHeight];
As the first part of the image is copied in full, the first loop becomes:
int i = 0;
for (int y = 0; y < imageHeight; y++)
{
    for (int x = 0; x < imageWidth; x++)
    {
        yuv[i] = data[y * imageWidth + x];
        i++;
    }
}
Because this just copies the beginning of data, it can be simplified to:
int size = imageHeight*imageWidth;
int i = 0;
for (; i < size; i++)
{
    yuv[i] = data[i];
}
Now, to copy the rest, we need to skip every other horizontal coordinate:
for (int y = 0; y < imageHeight; y++)
{
    for (int x = 0; x < imageWidth; x += 2) // +2 skips every other horizontal component
    {
        yuv[i] = data[size + y*2*imageWidth + 2*x];
        i++;
        yuv[i] = data[size + y*2*imageWidth + 2*x + 1];
        i++;
    }
}
The factor of two in the data-array index is needed because there are 2 bytes for each pixel (both chrominance components), so each "row" has 2*imageWidth bytes of data.
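Putting the pieces together, a sketch of the whole conversion under the layout assumption above (the method name is only illustrative):
public static byte[] SubsampleTo422(byte[] data, int imageWidth, int imageHeight)
{
    int size = imageWidth * imageHeight;
    byte[] yuv = new byte[2 * size];

    // The luminance plane is kept in full.
    Buffer.BlockCopy(data, 0, yuv, 0, size);

    // Keep the chrominance pair of every other pixel in each row.
    int i = size;
    for (int y = 0; y < imageHeight; y++)
    {
        for (int x = 0; x < imageWidth; x += 2)
        {
            yuv[i++] = data[size + y * 2 * imageWidth + 2 * x];     // first chroma byte of the pixel
            yuv[i++] = data[size + y * 2 * imageWidth + 2 * x + 1]; // second chroma byte of the pixel
        }
    }
    return yuv;
}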
First time working with C# here. I am reading a few image files, doing some calculations, and outputting an array of double. I need to be able to save this double array (or these, since I will have multiple arrays) as a greyscale image. I have been looking around on the internet but couldn't find much. I have done this in Python and MATLAB, but C# doesn't seem to be as friendly to me. Here is what I have done so far (for creating the image from doubles).
static Image MakeImage(double[,] data)
{
    Image img = new Bitmap(data.GetUpperBound(1), data.GetUpperBound(0));
    //Bitmap bitmap = new Bitmap(data.GetUpperBound(1), data.GetUpperBound(0));
    for (int i = 0; i < data.GetUpperBound(1); i++)
    {
        for (int k = 0; k < data.GetUpperBound(0); k++)
        {
            //bitmap.SetPixel(k, i, Color.FromArgb((int)data[i, k],(int) data[i, k],(int) data[i, k]));
        }
    }
    return img;
}
This code actually doesn't do much. It creates my blank image template, but Color doesn't take double as input. I have no idea how to create an image from the data... I am stuck =)
Thank you in advance.
If you can accept using an unsafe block this is pretty fast:
private Image CreateImage(double[,] data)
{
    // Needs: using System.Drawing; using System.Drawing.Imaging; using System.Linq;
    // A double[,] has no Min()/Max() of its own, so flatten it with Cast<double>() first.
    double min = data.Cast<double>().Min();
    double max = data.Cast<double>().Max();
    double range = max - min;
    byte v;
    Bitmap bm = new Bitmap(data.GetLength(0), data.GetLength(1));
    BitmapData bd = bm.LockBits(new Rectangle(0, 0, bm.Width, bm.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    // This is much faster than calling Bitmap.SetPixel() for each pixel.
    unsafe
    {
        byte* ptr = (byte*)bd.Scan0;
        for (int j = 0; j < bd.Height; j++)
        {
            for (int i = 0; i < bd.Width; i++)
            {
                // Scale the value into 0..255; the second index flips the image vertically.
                v = (byte)(255 * (data[i, bd.Height - 1 - j] - min) / range);
                ptr[0] = v;         // blue
                ptr[1] = v;         // green
                ptr[2] = v;         // red
                ptr[3] = (byte)255; // alpha (opaque)
                ptr += 4;
            }
            ptr += (bd.Stride - (bd.Width * 4)); // skip the padding at the end of each row
        }
    }
    bm.UnlockBits(bd);
    return bm;
}
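A usage sketch (the helper name and output file are only examples):
// Example only: 'GetMyResults' stands for whatever produces your double[,] data.
double[,] values = GetMyResults();
Image img = CreateImage(values);
img.Save("output.png", ImageFormat.Png);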
I am writing a class for printing bitmaps to a portable Bluetooth printer in Android via Mono for Android. My class is used to obtain the pixel data from the stream so that it can be sent to the printer in the correct format. Right now the class is simple: it just reads the height, width, and bits per pixel.
Using the offset, it reads and returns the pixel data to the printer. Right now I am just working with 1-bit-per-pixel black and white images. The bitmaps I am working with are in Windows format.
Here is the original image:
Here is the result of printing: the first image is without any transformation, and the second one is the result of modifying the BitArray with the following code:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
for (int i = 0, j = bits.Length - 1; i < bits.Length; i++, j--)
{
    flippedBits[i] = bits[j];
}
My Question is:
How do I flip the image vertically when I am working with a byte array? I am having trouble finding the algorithm for doing this; all examples seem to suggest using established graphics libraries, which I cannot use.
Edit:
My bitmap is saved in a one-dimensional array, with the first row's bytes, then the second, third, etc.
You need to do something like this:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
for (int i = 0; i < bits.Length; i += width) {
    for (int j = 0, k = width - 1; j < width; ++j, --k) {
        flippedBits[i + j] = bits[i + k];
    }
}
If you need to mirror the picture upside-down, use this code:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
for (int i = 0, j = bits.Length - width; i < bits.Length; i += width, j -= width) {
    for (int k = 0; k < width; ++k) {
        flippedBits[i + k] = bits[j + k];
    }
}
For the format with width*height bits in row order, you just need to view the bit array as a two-dimensional array.
for(int row = 0; row < height; ++row) {
    for(int column = 0; column < width; ++column) {
        flippedBits[row*width + column] = bits[row*width + (width-1 - column)];
    }
}
It would be a bit more complicated if there were more than one bit per pixel.
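To illustrate that last point, here is a hedged sketch of the same horizontal mirror when each pixel occupies several whole bytes (the helper name and parameters are made up):
// Mirror one row in place when a pixel is bytesPerPixel bytes wide (e.g. 3 for 24bpp).
static void MirrorRow(byte[] row, int width, int bytesPerPixel)
{
    for (int left = 0, right = width - 1; left < right; ++left, --right)
    {
        for (int b = 0; b < bytesPerPixel; ++b)
        {
            byte tmp = row[left * bytesPerPixel + b];
            row[left * bytesPerPixel + b] = row[right * bytesPerPixel + b];
            row[right * bytesPerPixel + b] = tmp;
        }
    }
}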
You need to use two loops: the first to iterate over all the rows, and the second to iterate over the bytes inside each row.
for (int y = 0; y < height; y++)
{
    int row_start = (width/8) * y;
    int flipped_row = (width/8) * (height-1 - y);
    for (int x = 0; x < width/8; x++)
    {
        flippedBits[flipped_row+x] = bits[row_start+x];
    }
}
I've been playing with Huffman compression on images to reduce size while keeping the image lossless, but I've also read that you can use predictive coding to further compress image data by reducing entropy.
From what I understand, in the lossless JPEG standard each pixel is predicted as the weighted average of the 4 adjacent pixels already encountered in raster order (three above and one to the left), e.g., trying to predict the value of a pixel a based on the preceding pixels x to its left and above it:
x x x
x a
Then calculate and encode the residual (difference between predicted and actual value).
But what I don't get is: if the sum of the 4 neighboring pixels isn't a multiple of 4, you'd get a fraction, right? Should that fraction be ignored? If so, would the proper encoding of an 8-bit image (saved in a byte[]) be something like:
public static void Encode(byte[] buffer, int width, int height)
{
    var tempBuff = new byte[buffer.Length];
    for (int i = 0; i < buffer.Length; i++)
    {
        tempBuff[i] = buffer[i];
    }
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);
            int a = tempBuff[offsetUp];
            int b = tempBuff[offsetUp + 1];
            int c = tempBuff[offsetUp + 2];
            int d = tempBuff[offset];
            int pixel = tempBuff[offset + 1];
            var ave = (a + b + c + d) / 4;
            var val = (byte)(ave - pixel);
            buffer[offset + 1] = val;
        }
    }
}

public static void Decode(byte[] buffer, int width, int height)
{
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);
            int a = buffer[offsetUp];
            int b = buffer[offsetUp + 1];
            int c = buffer[offsetUp + 2];
            int d = buffer[offset];
            int pixel = buffer[offset + 1];
            var ave = (a + b + c + d) / 4;
            var val = (byte)(ave - pixel);
            buffer[offset + 1] = val;
        }
    }
}
I don't see how this will really reduce entropy. How will this help compress my images further while still being lossless?
Thanks for any enlightenment.
EDIT:
So after playing with the predictive-coding images, I noticed that the histogram data shows a lot of +-1's for the various pixels. This reduces entropy quite a bit in some cases. Here is a screenshot:
Yes, just truncate. It doesn't matter, because you store the difference. It reduces entropy because you only store small values; a lot of them will be -1, 0 or 1. There are a couple of off-by-one bugs in your snippet, by the way.
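To make the truncation point concrete, here is a minimal sketch (the helper names are illustrative, not the poster's code): as long as the decoder computes the same truncated prediction as the encoder, the original pixel is recovered exactly, so the scheme stays lossless.
// Integer division truncates the fraction; encoder and decoder use the same prediction.
static byte Predict(byte a, byte b, byte c, byte d)
{
    return (byte)((a + b + c + d) / 4);
}

static byte EncodeResidual(byte prediction, byte pixel)
{
    return (byte)(prediction - pixel);    // wraps modulo 256
}

static byte DecodePixel(byte prediction, byte residual)
{
    return (byte)(prediction - residual); // same prediction => exact original pixel
}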