I'm working with image processing in WinForms and it works very well: when I have a Bitmap and BitmapData, I can easily get an IntPtr from them. But in UWP I have found no way to get an IntPtr from them. Is there any way to do that?
UPDATE: If we cannot get an IntPtr value, can we get the pointer address for that image? Something like this in WinForms:
byte* src = (byte*)bitmapData.Scan0.ToPointer();
You could get pixel data from a file stream via BitmapDecoder and PixelDataProvider:
Windows.Storage.Streams.IRandomAccessStream random = await Windows.Storage.Streams.RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Assets/StoreLogo.png")).OpenReadAsync();
Windows.Graphics.Imaging.BitmapDecoder decoder = await Windows.Graphics.Imaging.BitmapDecoder.CreateAsync(random);
Windows.Graphics.Imaging.PixelDataProvider pixelData = await decoder.GetPixelDataAsync();
byte[] buffer = pixelData.DetachPixelData();
Then you could get an IntPtr from the byte array via unsafe code:
unsafe
{
    fixed (byte* p = buffer)
    {
        IntPtr ptr = (IntPtr)p;
        // do your stuff here
    }
}
To compile unsafe code, you need to enable the Allow Unsafe Code option in the project's build properties.
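Note that the pointer obtained this way is only valid inside the fixed block, since the garbage collector may move the array once it is unpinned. If you need a longer-lived pointer, pinning the array with GCHandle is an alternative (a minimal sketch; remember to free the handle when done):
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    // The address stays valid until the handle is freed.
    IntPtr ptr = handle.AddrOfPinnedObject();
    // do your stuff here
}
finally
{
    handle.Free(); // unpin the buffer
}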
Related
I'm implementing a Custom Credential Provider in C#, using a C++ project as an example. This piece of C++ code provides an image to Windows. The way I see it, phbmp is a pointer to the image bitmap. The code either updates the pointer so it points to a new bitmap (read from a resource), or it loads the bitmap to the address pointed to by phbmp; I'm not sure if the pointer itself is changed or not.
// Get the image to show in the user tile
HRESULT CSampleCredential::GetBitmapValue(DWORD dwFieldID, _Outptr_result_nullonfailure_ HBITMAP *phbmp)
{
    HRESULT hr;
    *phbmp = nullptr;

    if (SFI_TILEIMAGE == dwFieldID)
    {
        HBITMAP hbmp = LoadBitmap(HINST_THISDLL, MAKEINTRESOURCE(IDB_TILE_IMAGE));
        if (hbmp != nullptr)
        {
            hr = S_OK;
            *phbmp = hbmp;
        }
        else
        {
            hr = HRESULT_FROM_WIN32(GetLastError());
        }
    }
    else
    {
        hr = E_INVALIDARG;
    }

    return hr;
}
Below is the C# equivalent I'm implementing:
public int GetBitmapValue(uint dwFieldID, IntPtr phbmp)
{
    if (dwFieldID == 2)
    {
        Bitmap image = Resource1.TileImage;
        ImageConverter imageConverter = new ImageConverter();
        byte[] bytes = (byte[])imageConverter.ConvertTo(image, typeof(byte[]));
        Marshal.Copy(bytes, 0, phbmp, bytes.Length);
        return HResultValues.S_OK;
    }
    return HResultValues.E_INVALIDARG;
}
What I'm trying to do:
- Load the image from the resource (this works; it has the correct length)
- Convert the Bitmap to an array of bytes
- Copy those bytes to the address pointed to by phbmp
This crashes, I assume because of memory allocation.
The parameters in this method are defined by an interface (in CredentialProvider.Interop.dll, which is provided by Microsoft - I think). So I'm pretty sure it's correct and phbmp is not an out-parameter.
Because it is not an out-parameter, I cannot change phbmp to point to my bitmap, right? I have assigned phbmp to Bitmap.GetHbitmap() and that doesn't crash, but it isn't working either. I assume the change to phbmp is only local to this method.
I can understand that it is not possible to allocate memory at a predefined address; it's the other way around: you allocate memory and get a pointer to it. But then the change is local again. How does this work?
Although some people agreed that phbmp should be an out-parameter (see the comments in https://syfuhs.net/2017/10/15/creating-custom-windows-credential-providers-in-net/), the answer was actually:
var bmp = new Bitmap(imageStream);
Marshal.WriteIntPtr(phbmp, bmp.GetHbitmap());
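In other words, phbmp already points at an HBITMAP-sized slot, and writing the GDI handle into that slot is all that is needed. Combined with the method from the question, a corrected version might look like this (a sketch, assuming the same interop signature and HResultValues constants):
public int GetBitmapValue(uint dwFieldID, IntPtr phbmp)
{
    if (dwFieldID == 2)
    {
        Bitmap image = Resource1.TileImage;
        // phbmp is effectively an HBITMAP*: write the handle into the
        // location it points to instead of copying pixel bytes there.
        Marshal.WriteIntPtr(phbmp, image.GetHbitmap());
        return HResultValues.S_OK;
    }
    return HResultValues.E_INVALIDARG;
}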
I've been using the FFmpeg.AutoGen wrapper (https://github.com/Ruslan-B/FFmpeg.AutoGen) to decode my H264 video for some time with great success, and now have to add AAC audio decoding (previously I was using G711 and NAudio for this).
I have the AAC stream decoding using avcodec_decode_audio4; however, the output buffer/frame is in floating-point format (FLT) and I need it in S16. For this I have found unmanaged examples using swr_convert, and FFmpeg.AutoGen does have this function P/Invoked as:
[DllImport(SWRESAMPLE_LIBRARY, EntryPoint="swr_convert", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
public static extern int swr_convert(SwrContext* s, byte** #out, int out_count, byte** #in, int in_count);
My trouble is that I can't find a successful way of converting/casting my managed byte[] into a byte** to provide as the destination buffer.
Has anyone done this before?
My non-working code...
packet.ResetBuffer(m_avFrame->linesize[0] * 2);
fixed (byte* pData = packet.Payload)
{
    byte** src = &m_avFrame->data_0;
    //byte** dst = *pData;
    IntPtr d = new IntPtr(pData);
    FFmpegInvoke.swr_convert(m_pConvertContext, (byte**)d.ToPointer(), packet.Length, src, (int)m_avFrame->linesize[0]);
}
Thanks for any help.
Cheers
Dave
The function you are trying to call is documented here: http://www.ffmpeg.org/doxygen/2.0/swresample_8c.html#a81af226d8969df314222218c56396f6a
The out_arg parameter is declared like this:
uint8_t* out_arg[SWR_CH_MAX]
That is an array of length SWR_CH_MAX containing byte pointers. Your translation renders that as byte** and so forces you to use unsafe code. Personally, I would avoid that. I would declare the parameter like this:
[MarshalAs(UnmanagedType.LPArray)]
IntPtr[] out_arg
Declare the array like this:
IntPtr[] out_arg = new IntPtr[channelCount];
I am guessing that the CH in SWR_CH_MAX is shorthand for channel.
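Put together, a hand-translated import along these lines might look like this (a sketch that reuses the constants from the question's declaration; the SwrContext pointer is passed as an opaque IntPtr to avoid unsafe code):
[DllImport(SWRESAMPLE_LIBRARY, EntryPoint = "swr_convert", CallingConvention = CallingConvention.Cdecl)]
public static extern int swr_convert(
    IntPtr s,                                             // SwrContext* (opaque)
    [MarshalAs(UnmanagedType.LPArray)] IntPtr[] out_arg,  // uint8_t* out_arg[SWR_CH_MAX]
    int out_count,
    [MarshalAs(UnmanagedType.LPArray)] IntPtr[] in_arg,   // const uint8_t* in_arg[SWR_CH_MAX]
    int in_count);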
Then you need to allocate memory for the output buffer. I'm not sure how you want to do that. You could allocate one byte array per channel and pin those arrays to get hold of a pointer to pass down to the native code. That would be my preferred approach because upon return you'd have your channels in nice managed arrays. Another way would be a call to Marshal.AllocHGlobal.
The input buffer would need to be handled in the same way.
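For instance, the pin-per-channel approach might look something like this (a minimal sketch; channelCount and bufferSize are placeholders for values derived from your stream):
// Allocate one managed buffer per channel and pin each one so the GC
// cannot move it, collecting the pinned addresses for the native call.
byte[][] channelBuffers = new byte[channelCount][];
GCHandle[] handles = new GCHandle[channelCount];
IntPtr[] out_arg = new IntPtr[channelCount];
try
{
    for (int i = 0; i < channelCount; i++)
    {
        channelBuffers[i] = new byte[bufferSize];
        handles[i] = GCHandle.Alloc(channelBuffers[i], GCHandleType.Pinned);
        out_arg[i] = handles[i].AddrOfPinnedObject();
    }
    // ... call swr_convert with out_arg here ...
}
finally
{
    foreach (GCHandle h in handles)
        if (h.IsAllocated) h.Free(); // unpin the buffers
}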
I would not use the automated P/Invoke translation that you are currently using. It seems hell-bent on forcing you to use pointers and unsafe code, which is not massively helpful. I'd translate it by hand.
I'm sorry not to give more specific details but it's a little hard because your question did not contain any information about the types used in your code samples. I hope the general advice is useful.
Thanks to David Heffernan's answer I've managed to get the following working, and I'm posting it as an answer since examples of managed use of FFmpeg are very rare.
fixed (byte* pData = packet.Payload)
{
    IntPtr[] in_buffs = new IntPtr[2];
    in_buffs[0] = new IntPtr(m_avFrame->data_0);
    in_buffs[1] = new IntPtr(m_avFrame->data_1);
    IntPtr[] out_buffs = new IntPtr[1];
    out_buffs[0] = new IntPtr(pData);
    FFmpegInvoke.swr_convert(m_pConvertContext, out_buffs, m_avFrame->nb_samples, in_buffs, m_avFrame->nb_samples);
}
And in the complete context of decoding a buffer of AAC audio...
protected override void DecodePacket(MediaPacket packet)
{
    int frameFinished = 0;
    AVPacket avPacket = new AVPacket();
    FFmpegInvoke.av_init_packet(ref avPacket);

    byte[] payload = packet.Payload;
    fixed (byte* pData = payload)
    {
        avPacket.data = pData;
        avPacket.size = packet.Length;
        if (packet.KeyFrame)
        {
            avPacket.flags |= FFmpegInvoke.AV_PKT_FLAG_KEY;
        }

        int in_len = packet.Length;
        int count = FFmpegInvoke.avcodec_decode_audio4(CodecContext, m_avFrame, out frameFinished, &avPacket);
        if (count != packet.Length)
        {
            // entire packet was not consumed; not handled here
        }
        if (count < 0)
        {
            throw new Exception("Can't decode frame!");
        }
    }
    FFmpegInvoke.av_free_packet(ref avPacket);

    if (frameFinished > 0)
    {
        if (!mConverstionContextInitialised)
        {
            InitialiseConverstionContext();
        }

        packet.ResetBuffer(m_avFrame->nb_samples * 4); // need to find a better way of getting the out buff size
        fixed (byte* pData = packet.Payload)
        {
            IntPtr[] in_buffs = new IntPtr[2];
            in_buffs[0] = new IntPtr(m_avFrame->data_0);
            in_buffs[1] = new IntPtr(m_avFrame->data_1);
            IntPtr[] out_buffs = new IntPtr[1];
            out_buffs[0] = new IntPtr(pData);
            FFmpegInvoke.swr_convert(m_pConvertContext, out_buffs, m_avFrame->nb_samples, in_buffs, m_avFrame->nb_samples);
        }

        packet.Type = PacketType.Decoded;
        if (mFlushRequest)
        {
            //mRenderQueue.Clear();
            packet.Flush = true;
            mFlushRequest = false;
        }
        mFirstFrame = true;
    }
}
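As an aside on the "out buff size" comment above: for interleaved S16 output the required size is samples * channels * bytes per sample, so the hard-coded *4 corresponds to two channels at two bytes per sample (a sketch; the stereo assumption merely matches the two input planes used above):
const int bytesPerSample = 2; // S16
int channels = 2;             // assumed stereo, matching data_0/data_1 above
int outSize = m_avFrame->nb_samples * channels * bytesPerSample;
packet.ResetBuffer(outSize);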
I am fairly new to using P/Invoke calls and am wondering if someone can guide me on how to retrieve the raw pixel data (unsigned char*) from an HBITMAP.
This is my scenario:
I am loading a .NET Bitmap object on the C# side and sending its IntPtr to my unmanaged C++ method. Once I receive the HBITMAP pointer on the C++ side, I would like to access the bitmap's pixel data. I already have a method that accepts an unsigned char* representing the raw pixel data, but I found that extracting the byte[] on the C# side is fairly slow. This is why I want to send in the Bitmap pointer instead of converting the Bitmap into a byte[] and sending that to my C++ method.
C# code for getting Bitmap IntPtr
Bitmap srcBitmap = new Bitmap(m_testImage);
IntPtr hbitmap = srcBitmap.GetHbitmap();
C# code for importing the C++ method
[SuppressUnmanagedCodeSecurityAttribute()]
[DllImport("MyDll.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
public static extern int ResizeImage(IntPtr srcImg);
C++ method that will receive the HBITMAP handle
int Resize::ResizeImage(unsigned char* srcImg){
    // access srcImg's raw pixel data (preferably in unsigned char* format)
    // do work with that
    return status;
}
Questions:
1) Since I am sending in an IntPtr, can my C++ method parameter be an unsigned char*?
2) If not, how can I access the bitmap's raw data from C++?
The GetHbitmap method does not retrieve pixel data. It yields a GDI bitmap handle, of type HBITMAP. Your unmanaged code would receive that as a parameter of type HBITMAP. You can obtain the pixel data from that using GDI calls. But it is not, in itself, the raw pixels.
In fact, I'm pretty sure you are attacking this problem the wrong way. You are probably heading this way because GetPixel and SetPixel are slow. That is quite true; indeed, their GDI equivalents are too. What you need to do is use LockBits. This will allow you to operate on the entire pixel data in C# in an efficient way. A good description of the subject can be found here: https://web.archive.org/web/20141229164101/http://bobpowell.net/lockingbits.aspx. Note that, for efficiency, this is one type of C# code where unsafe code and pointers are often the best solution.
If, for whatever reason, you still wish to operate on the pixel data using C++ code, then you can still use LockBits as the simplest way to get a pointer to the pixel data. It's certainly much easier than the unmanaged GDI equivalents.
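For reference, the basic LockBits pattern looks something like this (a minimal sketch; assumes srcBitmap is the Bitmap from the question and that unsafe code is enabled):
Rectangle rect = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData data = srcBitmap.LockBits(rect, ImageLockMode.ReadWrite, srcBitmap.PixelFormat);
try
{
    unsafe
    {
        byte* scan0 = (byte*)data.Scan0;
        for (int y = 0; y < data.Height; y++)
        {
            // Stride can exceed Width * bytes-per-pixel, so address row by row.
            byte* row = scan0 + y * data.Stride;
            // operate on row[0 .. Width * bytes-per-pixel) here
        }
    }
}
finally
{
    srcBitmap.UnlockBits(data);
}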
First, an HBITMAP shouldn't be an unsigned char*. If you are passing an HBITMAP to C++, then the parameter should be an HBITMAP:
int Resize::ResizeImage(HBITMAP hBmp)
Next, to convert from an HBITMAP to pixels:
std::vector<unsigned char> ToPixels(HBITMAP BitmapHandle, int &width, int &height)
{
    BITMAP Bmp = {0};
    BITMAPINFO Info = {0};
    std::vector<unsigned char> Pixels = std::vector<unsigned char>();

    HDC DC = CreateCompatibleDC(NULL);
    std::memset(&Info, 0, sizeof(BITMAPINFO)); //not necessary really..
    HBITMAP OldBitmap = (HBITMAP)SelectObject(DC, BitmapHandle);
    GetObject(BitmapHandle, sizeof(Bmp), &Bmp);

    Info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    Info.bmiHeader.biWidth = width = Bmp.bmWidth;
    Info.bmiHeader.biHeight = height = Bmp.bmHeight;
    Info.bmiHeader.biPlanes = 1;
    Info.bmiHeader.biBitCount = Bmp.bmBitsPixel;
    Info.bmiHeader.biCompression = BI_RGB;
    Info.bmiHeader.biSizeImage = ((width * Bmp.bmBitsPixel + 31) / 32) * 4 * height;

    Pixels.resize(Info.bmiHeader.biSizeImage);
    GetDIBits(DC, BitmapHandle, 0, height, &Pixels[0], &Info, DIB_RGB_COLORS);
    SelectObject(DC, OldBitmap);
    height = std::abs(height);
    DeleteDC(DC);
    return Pixels;
}
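On the C# side the import from the question can stay as it was; the IntPtr simply carries the HBITMAP handle through (shown with the parameter renamed for clarity):
[SuppressUnmanagedCodeSecurityAttribute()]
[DllImport("MyDll.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
public static extern int ResizeImage(IntPtr hBmp); // hBmp comes from Bitmap.GetHbitmap()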
Apparently sending in the pointer from Scan0 is equivalent to what I was searching for. I am able to manipulate the data as expected by sending in the IntPtr retrieved from the BitmapData.Scan0 property.
Bitmap srcBitmap = new Bitmap(m_testImage);
Rectangle rect = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData bmpData = srcBitmap.LockBits(rect, ImageLockMode.ReadWrite, srcBitmap.PixelFormat);
//Get ptr to pixel data of image
IntPtr ptr = bmpData.Scan0;
//Call C++ method
int status = myDll.ResizeImage(ptr);
srcBitmap.UnlockBits(bmpData);
To further help clarify, the only code I changed from my initial post was the first block of code. All the rest remained the same. (C++ method still accepts unsigned char * as a param)
I am trying to write images acquired from a webcam to a file via FileStream in C#. They are 16-bit monochrome, so I cannot just write out the Bitmap object. I am using Marshal.Copy() to work around this as follows:
unsafe private void RecordingFrame()
{
    Bitmap bm16;
    BitmapData bmd;
    Emgu.CV.Image<Gray, UInt16> currentFrame;
    const int ORIGIN_X = 0;
    const int ORIGIN_Y = 0;

    // get image here and put it in bm16...

    bmd = bm16.LockBits(new Rectangle(ORIGIN_X, ORIGIN_Y, bm16.Width, bm16.Height),
        ImageLockMode.ReadOnly, bm16.PixelFormat);
    var length = bmd.Stride * bmd.Height;
    byte[] bytes = new byte[length];
    Marshal.Copy(bmd.Scan0, bytes, 0, length);
    fsVideoWriter.Write(bytes, 0, length);
    bm16.UnlockBits(bmd);
}
Is this the best way to accomplish this? I wanted to simply pass the BitmapData's Scan0 member as a pointer to the FileStream, but I couldn't figure out how to do this, so I copied the data into a byte buffer instead. This reduces performance slightly, so if I can improve it to achieve a higher frame rate, I'd like to do so.
You could create an UnmanagedMemoryStream from bmd.Scan0 and then call CopyTo(fsVideoWriter). But I'm not sure if this would be any faster than what you have now.
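For illustration, that suggestion might slot into the method above like this (a sketch; length is bmd.Stride * bmd.Height as before, and unsafe code must be enabled):
unsafe
{
    // Wrap the locked pixel memory in a stream without building a byte[] first.
    using (var ums = new UnmanagedMemoryStream((byte*)bmd.Scan0, length))
    {
        ums.CopyTo(fsVideoWriter); // data is still copied, but in a single call
    }
}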
It is possible to get a pointer from a managed array:
byte [] buffer = new byte[length + byteAlignment];
GCHandle bufferHandle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
IntPtr ptr = bufferHandle.AddrOfPinnedObject();
Is there any way to do the opposite: getting a byte array from a pinned pointer without copying?
Sure, that's what Marshal.Copy is for - there is no way (well, no way without copying of some variety) to otherwise move memory between the managed and unmanaged worlds... well, that's not 100% true, but I'm assuming you don't want to rely solely on Win32/C and P/Invoke to copy memory around.
Marshal.Copy use would look like:
IntPtr addressOfThing = ....;   // pointer to the unmanaged data
byte[] buffer = new byte[...];  // sized to hold that data
Marshal.Copy(addressOfThing, buffer, 0, buffer.Length);