In C++, I write a float image into a file:
FILE* fp = fopen("image.fft", "wb");
float* pixels = getPixel();
fwrite((unsigned char*)pixels, sizeof(float), width*height, fp);
To analyze the image, we need to read the float image into C#. I am stuck on how to read the float image "image.fft" into C#. I know the width and height of the float image.
You could use this Bitmap constructor http://msdn.microsoft.com/en-us/library/zy1a2d14.aspx and pin the byte array from the file with GCHandle to get an IntPtr, or do something like this:
Bitmap BytesToBitmap(byte[] bmpBytes, Size imageSize)
{
    Bitmap bmp = new Bitmap(imageSize.Width, imageSize.Height);
    BitmapData bData = bmp.LockBits(new Rectangle(0, 0, bmp.Size.Width, bmp.Size.Height),
                                    ImageLockMode.WriteOnly,
                                    PixelFormat.Format32bppRgb);
    // Copy the bytes to the bitmap object
    Marshal.Copy(bmpBytes, 0, bData.Scan0, bmpBytes.Length);
    bmp.UnlockBits(bData);
    return bmp;
}
Use the Bitmap class's GetPixel and SetPixel methods to read and write individual pixels; see the Bitmap documentation for more information.
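The raw float values themselves can be read back with ordinary file I/O before any Bitmap is involved. A minimal sketch, assuming the file contains exactly width*height raw 32-bit floats with no header and that reader and writer use the same endianness (ReadFloatImage is just an illustrative name):

using System;
using System.IO;

// Reads width*height raw 32-bit floats written by the C++ fwrite call above.
static float[] ReadFloatImage(string path, int width, int height)
{
    byte[] raw = File.ReadAllBytes(path);
    float[] pixels = new float[width * height];
    Buffer.BlockCopy(raw, 0, pixels, 0, pixels.Length * sizeof(float));
    return pixels;
}

To display the result through BytesToBitmap above, each float would additionally have to be mapped to a displayable pixel value (for example scaled to a 0-255 gray level), since Format32bppRgb expects integer color data rather than raw floats.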
Related
I have a method in C# that does nothing but call LockBits and then UnlockBits, yet the input and output images (converted to byte arrays) are different. The output is a hundred-and-something bytes smaller than the input. This happens only with .jpg files. Checking the files in HxD, I came to the understanding that it's removing part of the header, the EXIF signature to be exact. But I don't know how or why.
Does someone know what this is doing?
Here's the code:
public Image Validate(Image image)
{
    BitmapData original = null;
    Bitmap originalBMP = null;
    try
    {
        originalBMP = image as Bitmap;
        original = originalBMP.LockBits(new Rectangle(0, 0,
                                            originalBMP.Width, originalBMP.Height),
                                        ImageLockMode.ReadWrite,
                                        originalBMP.PixelFormat);
        originalBMP.UnlockBits(original);
    }
    catch { }
    return image;
}
Calling Bitmap.LockBits() followed by Bitmap.UnlockBits() does nothing.
The behavior you observe is caused by loading a JPEG image and then saving it again. JPEG uses a lossy compression algorithm. So what happens is:
You load the JPEG from disk
The JPEG data gets decoded into individual pixels with color information, i.e. a bitmap
You save the bitmap again in the JPEG format, resulting in a different file than #1
In doing so, you also potentially lose metadata that was present in the JPEG file. So yes, the file is different and probably smaller, because every time you do this you lose some pixel data or metadata.
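A quick way to convince yourself that the re-encode alone changes the file, without any LockBits call at all, is a round trip like the following sketch (file names are placeholders):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Load a JPEG and save it back out without touching a single pixel.
// The result generally differs in size and content from the original, because the
// pixels are re-encoded and metadata such as EXIF is not carried over.
long originalSize = new FileInfo("input.jpg").Length;
using (var bmp = new Bitmap("input.jpg"))
{
    bmp.Save("roundtrip.jpg", ImageFormat.Jpeg);
}
long roundTripSize = new FileInfo("roundtrip.jpg").Length; // usually differs from originalSize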
LockBits/UnlockBits are used to allow the program to manipulate the image data in memory. Nothing more, nothing less. See also the documentation for those methods.
Use the LockBits method to lock an existing bitmap in system memory so that it can be changed programmatically. You can change the color of an image with the SetPixel method, although the LockBits method offers better performance for large-scale changes.
A Rectangle structure that specifies the portion of the Bitmap to lock.
Example:
private void LockUnlockBitsExample(PaintEventArgs e)
{
    // Create a new bitmap.
    Bitmap bmp = new Bitmap("c:\\fakePhoto.jpg");

    // Lock the bitmap's bits.
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    System.Drawing.Imaging.BitmapData bmpData =
        bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
                     bmp.PixelFormat);

    // Get the address of the first line.
    IntPtr ptr = bmpData.Scan0;

    // Declare an array to hold the bytes of the bitmap.
    int bytes = Math.Abs(bmpData.Stride) * bmp.Height;
    byte[] rgbValues = new byte[bytes];

    // Copy the RGB values into the array.
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);

    // Set every third value to 255. A 24bpp bitmap will look red.
    for (int counter = 2; counter < rgbValues.Length; counter += 3)
        rgbValues[counter] = 255;

    // Copy the RGB values back to the bitmap.
    System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);

    // Unlock the bits.
    bmp.UnlockBits(bmpData);

    // Draw the modified image.
    e.Graphics.DrawImage(bmp, 0, 150);
}
I have two monochromatic images as byte[] taken from cameras. I want to combine these images and write the combined image into a writeable bitmap.
Merging images: OpenCV
Using OpenCV, I create Mat objects (of type CV_8UC1) from the byte arrays (greenMat for the green and redMat for the red color channel image) and merge them via Cv2.Merge:
Mat mergedMat = new Mat(greenMat.Height, greenMat.Width, MatType.CV_8UC3);
Mat emptyMat = Mat.Zeros(greenMat.Height, greenMat.Width, MatType.CV_8UC1);
Cv2.Merge(new[]
{
redMat, greenMat, emptyMat
}, mergedMat);
The mergedMat.ToBytes() call now returns a buffer of size 2175338. Why is it this size? Merging three one-channel matrices (CV_8UC1) of size 2448x2048 into one three-channel matrix (CV_8UC3) should yield a buffer of size 15040512, or am I missing something here?
Display merged image: WriteableBitmap
I want to display the merged image by writing it into an existing WriteableBitmap that is initialized via
ImageSource = new WriteableBitmap(
width, //of the green image
height, //of the green image
96, //dpiX: does not affect image as far as I tested
96, //dpiY
PixelFormats.Rgb24, //since the combined image is a three channel color image
null); //no bitmap palette needed for used PixelFormat
Is this initialization correct? I'm unsure about the PixelFormat here. The dpiX and dpiY values also seem to have no effect. What is their use?
The tricky part I can't get to work properly now is to write the image data into the WriteableBitmap.
Using the following code to obtain the byte array of the merged image and using WriteableBitmap's WritePixels method, with the stride computed from the ImageSource's PixelFormat, yields a System.ArgumentException: 'Buffer size is not sufficient.'
var mergedBuffer = mergedMat.ToBytes();
var bytesPerPixel = (PixelFormats.Rgb24.BitsPerPixel + 7) / 8;
var stride = bytesPerPixel * width;
((WriteableBitmap)ImageSource).WritePixels(
    new Int32Rect(0, 0, width, height),
    mergedBuffer,
    stride,
    0);
Edit: Stride is calculated according to this answer, although that's not necessary with a fixed PixelFormat like the one used here.
Why is the buffer too small? I did initialize the WriteableBitmap
with the correct width and height and PixelFormats.Rgb24. Am I
calculating my stride correctly?
Solution
Thanks to @Micka's suggestions in the comments I realized that the mergedMat.ToBytes method does not do what I expected (it encodes the Mat into a compressed image format rather than returning the raw pixel buffer). Instead, I used the mergedMat.Data pointer like so:
var mergedBufferSize = (int)(mergedMat.Total() * mergedMat.Channels());
byte[] mergedBuffer = new byte[mergedBufferSize];
System.Runtime.InteropServices.Marshal.Copy(
mergedMat.Data,
mergedBuffer,
0,
mergedBufferSize);
Everything works fine this way.
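If the extra managed copy is unwanted, WritePixels also has an overload that takes an IntPtr and a buffer size, so the Mat's native buffer can be written directly. A sketch of that variant, assuming the OpenCvSharp Mat exposes Data and Step() as used here and that mergedMat stays alive for the duration of the call:

// Requires System.Windows, System.Windows.Media.Imaging and OpenCvSharp.
var wb = (WriteableBitmap)ImageSource;
int stride = (int)mergedMat.Step();          // bytes per row of the Mat, including any padding
int bufferSize = stride * mergedMat.Height;  // total size of the native pixel buffer
wb.WritePixels(
    new Int32Rect(0, 0, width, height),
    mergedMat.Data,                          // IntPtr to the Mat's pixel data
    bufferSize,
    stride);

Copying through a managed array as in the solution above remains the safer option, though, since it decouples the WriteableBitmap from the native Mat memory.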
I'm using Bitmap.GetHbitmap to pass an image to a C++ DLL from C#, like below.
Bitmap img = (Bitmap)Image.FromFile(PATH); // this is a 24bppRgb bitmap.
IntPtr hBit = img.GetHbitmap(); // Make hbitmap for c++ dll.
Here's problem:
Bitmap temp = Bitmap.FromHbitmap(hBit); // It changes to 32bppRGB.
I need a 24bpp bitmap for the C++ dll methods, but GetHbitmap() changes the bit count.
How can I make a 24bpp HBITMAP?
Short Version
Use Bitmap.LockBits and CreateDIBSection to manually create your desired HBITMAP.
Long Version
Bitmap.GetHbitmap will always return a 32bpp HBITMAP, no matter what format the image you loaded actually has, because that is the format the image is stored in internally. You will need to create the HBITMAP yourself.
You can use CreateDIBSection to create your own 24bpp bitmap. The function returns a pointer where you can place the raw pixel data.
Fortunately GDI+ lets you obtain the pixel data in whatever format you want by calling LockBits and specifying the desired pixel format (e.g. PixelFormat24bppRGB). Then you can just copy the pixel data. There may be issues with negative Stride values and top-down versus bottom-up bitmaps.
Define a BITMAPINFO for the bitmap we want to create:
BITMAPINFO bm;
bm.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bm.bmiHeader.biWidth = img.Width;
bm.bmiHeader.biHeight = -img.Height; // oriented top-down
bm.bmiHeader.biPlanes = 1;
bm.bmiHeader.biBitCount = 24; // 24bpp
bm.bmiHeader.biCompression = BI_RGB; // no compression
bm.bmiHeader.biSizeImage = 0; // let Windows determine size
bm.bmiHeader.biXPelsPerMeter = 0; // Not used by CreateDIBSection
bm.bmiHeader.biYPelsPerMeter = 0; // Not used by CreateDIBSection
Create a DIB section based on the bitmap info, and get the buffer where we place our pixel data:
Pointer dibPixels; //will receive a pointer where we can stuff our pixel data
HBITMAP bmp = CreateDIBSection(0, bm, DIB_RGB_COLORS, out dibPixels, 0, 0);
Use Bitmap.LockBits to obtain a pointer to pixel data in the format you want:
BitmapData bitmapData = img.LockBits(
img.Bounds, //get entire image
ImageLockModeRead,
PixelFormat24bppRGB //we want the pixel data in 24bpp format
);
Copy pixel data from our bitmapData source image into the pixel buffer returned by CreateDIBSection:
int stride = bitmapData.Stride;
int bufferSize = stride * bitmapData.Height;
CopyMemory(dibPixels, bitmapData.Scan0, bufferSize); // CopyMemory(destination, source, length)
Unlock the bits:
img.UnlockBits(bitmapData);
Now you have your HBITMAP ready to pass to your dll, seven years later.
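For completeness, here is a rough C# translation of those steps. It is a sketch, not tested code: the P/Invoke declaration, the DIB stride calculation and the row-by-row copy are my own assumptions about how to wire the pseudocode above together, and the caller is responsible for releasing the returned handle with DeleteObject.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct BITMAPINFOHEADER
{
    public uint biSize;
    public int biWidth;
    public int biHeight;
    public ushort biPlanes;
    public ushort biBitCount;
    public uint biCompression;
    public uint biSizeImage;
    public int biXPelsPerMeter;
    public int biYPelsPerMeter;
    public uint biClrUsed;
    public uint biClrImportant;
}

static class NativeMethods
{
    // For BI_RGB with no color table, BITMAPINFO is just the header, so we pass it directly.
    [DllImport("gdi32.dll")]
    public static extern IntPtr CreateDIBSection(IntPtr hdc, ref BITMAPINFOHEADER pbmi,
        uint usage, out IntPtr ppvBits, IntPtr hSection, uint offset);
}

static IntPtr To24bppHBitmap(Bitmap img)
{
    var info = new BITMAPINFOHEADER
    {
        biSize = (uint)Marshal.SizeOf(typeof(BITMAPINFOHEADER)),
        biWidth = img.Width,
        biHeight = -img.Height,   // negative height: top-down DIB, same row order as LockBits
        biPlanes = 1,
        biBitCount = 24,          // 24bpp
        biCompression = 0         // BI_RGB, no compression
    };

    IntPtr dibPixels;
    IntPtr hBitmap = NativeMethods.CreateDIBSection(IntPtr.Zero, ref info,
        0 /* DIB_RGB_COLORS */, out dibPixels, IntPtr.Zero, 0);

    // DIB rows are padded to 4-byte boundaries; GDI+ 24bpp strides use the same padding,
    // but copying row by row keeps the code safe if the strides ever differ.
    int dibStride = ((img.Width * 3) + 3) & ~3;
    var rect = new Rectangle(0, 0, img.Width, img.Height);
    BitmapData data = img.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    try
    {
        byte[] row = new byte[img.Width * 3];
        for (int y = 0; y < img.Height; y++)
        {
            Marshal.Copy(data.Scan0 + y * data.Stride, row, 0, row.Length);
            Marshal.Copy(row, 0, dibPixels + y * dibStride, row.Length);
        }
    }
    finally
    {
        img.UnlockBits(data);
    }
    return hBitmap; // caller must release it with DeleteObject when done
}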
Bonus Reading
The Supercomputing Blog: Using LockBits in GDI+ archive
I’m trying to crop a 24bpp image using memcpy like I read here: cropping an area from BitmapData with C#. The problem I’m having is that it only works when my sourceImage is 32bpp. It gives me a corrupt image when my sourceImage is 24bpp.
class Program
{
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static unsafe extern int memcpy(byte* dest, byte* src, long count);

    static void Main(string[] args)
    {
        var image = new Bitmap(@"C:\Users\Vincent\Desktop\CroppedScaledBitmaps\adsadas.png");
        //Creates a 32bpp image - Will work even though I treat it as a 24bpp image in the CropBitmap method...
        //Bitmap newBitmap = new Bitmap(image);
        //Creates a 24bpp image - Will produce a corrupt cropped bitmap
        Bitmap newBitmap = (Bitmap)image.Clone();
        var croppedBitmap = CropBitmap(newBitmap, new Rectangle(0, 0, 150, 150));
        croppedBitmap.Save(@"C:\Users\Vincent\Desktop\CroppedScaledBitmaps\PieceOfShit.png", ImageFormat.Png);
        Console.ReadLine();
    }

    static public Bitmap CropBitmap(Bitmap sourceImage, Rectangle rectangle)
    {
        Console.WriteLine("Bits per pixel of sourceImage: {0}", Image.GetPixelFormatSize(sourceImage.PixelFormat));
        var sourceBitmapdata = sourceImage.LockBits(rectangle, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        var croppedImage = new Bitmap(rectangle.Width, rectangle.Height, PixelFormat.Format24bppRgb);
        var croppedBitmapData = croppedImage.LockBits(new Rectangle(0, 0, rectangle.Width, rectangle.Height), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        unsafe
        {
            byte* sourceImagePointer = (byte*)sourceBitmapdata.Scan0.ToPointer();
            byte* croppedImagePointer = (byte*)croppedBitmapData.Scan0.ToPointer();
            memcpy(croppedImagePointer, sourceImagePointer, croppedBitmapData.Stride * rectangle.Height);
        }
        sourceImage.UnlockBits(sourceBitmapdata);
        croppedImage.UnlockBits(croppedBitmapData);
        return croppedImage;
    }
}
I’m very confused, because the only thing I’m changing is the sourceImage PixelFormat, not any of the code in the CropBitmap method. So I always call LockBits with the 24bpp PixelFormat, even if the sourceImage is 32bpp.
I’ve tried different methods of calculating the number of bytes I’m copying but everything resulted in more or less the same corrupted image.
Any help is appreciated!
You are trying to copy the data as if it was one continuous block, but it isn't.
The image data is arranged in scan lines, but as you are selecting a part of the image, you don't want all the data from each scan line, you only want the data that represents the pixels that you have selected. A scan line contains the data for the pixels that you specified when you called LockBits, but also data for the pixels outside that area.
The Stride value is the difference in memory address from one scan line to the next. The Stride value may also include padding between the scan lines. Note also that the Stride value can be negative, which happens when the image data is stored upside down in memory.
You want to copy the relevant data from one line of the source image to the line in the destination image. As there can be gaps both in the source data and destination data, you can't copy the data as a single chunk of data.
You would need to loop through the lines and copy each line separately. I haven't tested this code, but something like this should work:
byte* sourceImagePointer = (byte*)sourceBitmapdata.Scan0.ToPointer();
byte* croppedImagePointer = (byte*)croppedBitmapData.Scan0.ToPointer();
int width = rectangle.Width * 3; // for 24 bpp pixel data
for (int y = 0; y < rectangle.Height; y++) {
    memcpy(croppedImagePointer, sourceImagePointer, width);
    sourceImagePointer += sourceBitmapdata.Stride;
    croppedImagePointer += croppedBitmapData.Stride;
}
I'm coding a live control/remote desktop solution using DFMirage's free mirror driver. There is a C# sample on how to interface and control the mirror driver here. You would need the mirror driver installed first, of course, here. So, the concept is, the client (helper) requests a screen update, and the server (victim) sends one, using raw pixel encoding. The concept of a mirror driver eliminates the need to expensively poll for screen changes, because a mirror driver is notified of all screen drawing operations in real-time. The mirror driver receives the location and size of the update rectangle, and can simply query memory for the new pixel bytes and send them.
Should be easy, except that I don't know how to do that part where we query memory for the new pixel bytes. The sample shows how to query memory to grab the pixels of the entire screen using something with raw bitmap data and scan lines and stride and all that good stuff:
Bitmap result = new Bitmap(_bitmapWidth, _bitmapHeight, format);
Rectangle rect = new Rectangle(0, 0, _bitmapWidth, _bitmapHeight);
BitmapData bmpData = result.LockBits(rect, ImageLockMode.WriteOnly, format);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
int bytes = bmpData.Stride * _bitmapHeight;
var getChangesBuffer = (GetChangesBuffer)Marshal
.PtrToStructure(_getChangesBuffer, typeof (GetChangesBuffer));
var data = new byte[bytes];
Marshal.Copy(getChangesBuffer.UserBuffer, data, 0, bytes);
// Copy the RGB values into the bitmap.
Marshal.Copy(data, 0, ptr, bytes);
result.UnlockBits(bmpData);
return result;
This is great and works fine. The resulting Bitmap object now has the pixels of the entire screen. But if I wanted to just extract a rectangle of pixel data instead of getting the pixel data from the whole screen, how would I be able to do that? I guess this is more of a rawbitmap-scan-stride question, but I typed all of this so you might know where this is coming from. So any insight on how to get just a portion of pixel data instead of the entire screen's pixel data?
Update: Found something interesting (code portion only).
Here's a function to copy a rectangular area from some source image buffer to a Bitmap:
private static Bitmap ExtractImageRectangle(byte[] sourceBuffer, int sourceStride, PixelFormat sourcePixelFormat, Rectangle rectangle)
{
    Bitmap result = new Bitmap(rectangle.Width, rectangle.Height, sourcePixelFormat);
    BitmapData resultData = result.LockBits(new Rectangle(0, 0, result.Width, result.Height), ImageLockMode.WriteOnly, result.PixelFormat);
    int bytesPerPixel = GetBytesPerPixel(sourcePixelFormat); // Left as an exercise for the reader
    try
    {
        // Bounds checking omitted for brevity
        for (int rowIndex = 0; rowIndex < rectangle.Height; ++rowIndex)
        {
            // The address of the start of this row in the destination image
            IntPtr destinationLineStart = resultData.Scan0 + resultData.Stride * rowIndex;
            // The index at which the current row of our rectangle starts in the source image
            int sourceIndex = sourceStride * (rowIndex + rectangle.Top) + rectangle.Left * bytesPerPixel;
            // Copy the row from the source to the destination
            Marshal.Copy(sourceBuffer, sourceIndex, destinationLineStart, rectangle.Width * bytesPerPixel);
        }
    }
    finally
    {
        result.UnlockBits(resultData);
    }
    return result;
}
You could then use it like this:
Rectangle roi = new Rectangle(100, 150, 200, 250);
Bitmap result = ExtractImageRectangle(getChangesBuffer.UserBuffer, getChangesBuffer.Stride, getChangesBuffer.PixelFormat, roi);
This assumes that GetChangesBuffer has properties for the stride and pixel format of the source image buffer. It most likely doesn't, but you should have some means of determining the stride and pixel format of your input image. In your example you are assuming that the stride of the input image is equal to the stride of your output image, which is a shaky assumption.
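If all you know are the width and the pixel format, the stride of a GDI-style buffer can usually be derived from them, since each scan line is padded to a multiple of 4 bytes. A small helper along those lines (GetStride is a hypothetical name, and it assumes the source buffer really uses 4-byte row alignment):

// Requires System.Drawing and System.Drawing.Imaging.
// Computes the stride for a buffer whose scan lines are padded to 4-byte boundaries,
// which is the convention used by GDI/GDI+ bitmaps.
static int GetStride(int width, PixelFormat pixelFormat)
{
    int bitsPerPixel = Image.GetPixelFormatSize(pixelFormat);
    return ((width * bitsPerPixel + 31) / 32) * 4;
}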