C#: Object with custom marshaller not containing data after PInvoke call

I am having a problem with P/Invoking some WinAPI functions that accept WAVEFORMATEX structures as parameters. Since the length of the WAVEFORMATEX structure can vary, I implemented a WaveFormatEx class that is marshalled by a custom marshaller class (which implements ICustomMarshaler). This follows an example provided by Aaron Lerch in his blog (Part 1, Part 2), but with a few modifications on my side.
When I call the API function from my code, the methods MarshalManagedToNative and MarshalNativeToManaged of the custom marshaller are called, and at the end of MarshalNativeToManaged, the managed object contains the correct values. But when the execution returns to my calling code, the WaveFormatEx object does not contain the values read during the API call.
So the question is: Why does the data that is correctly marshalled back from native to managed not show up in my WaveFormatEx object after the native API call? What am I doing wrong here?
Edit:
To clarify: the function call succeeds, and so does the marshalling of the WaveFormatEx object back to managed code. But once execution returns from the marshalling method to the scope from which the method was called, the WaveFormatEx object declared in that calling scope does not contain the result data.
Here are the function prototype and the WaveFormatEx class:
[DllImport("avifil32.dll")]
public static extern int AVIStreamReadFormat(
int Stream,
int Position,
[In, Out, MarshalAs(UnmanagedType.CustomMarshaler,
MarshalTypeRef = typeof(WaveFormatExMarshaler))]
WaveFormatEx Format,
ref int Size
);
[StructLayout(LayoutKind.Sequential)]
public class WaveFormatEx
{
    public int FormatTag;
    public short Channels;
    public int SamplesPerSec;
    public int AvgBytesPerSec;
    public short BlockAlign;
    public short BitsPerSample;
    public short Size;
    public byte[] AdditionalData;

    public WaveFormatEx(short AdditionalDataSize)
    {
        Size = AdditionalDataSize;
        AdditionalData = new byte[AdditionalDataSize];
    }
}
The marshalling methods look like this:
public object MarshalNativeToManaged(System.IntPtr NativeData)
{
    WaveFormatEx ManagedObject = (WaveFormatEx)Marshal.PtrToStructure(
        NativeData, typeof(WaveFormatEx));
    ManagedObject.AdditionalData = new byte[ManagedObject.Size];

    // If there is extra data, marshal it
    if (ManagedObject.Size > 0)
    {
        NativeData = new IntPtr(
            NativeData.ToInt32() +
            Marshal.SizeOf(typeof(WaveFormatEx)));
        Marshal.Copy(NativeData, ManagedObject.AdditionalData, 0,
            ManagedObject.Size);
    }
    return ManagedObject;
}
public System.IntPtr MarshalManagedToNative(object Object)
{
    WaveFormatEx ManagedObject = (WaveFormatEx)Object;
    IntPtr NativeStructure = Marshal.AllocHGlobal(
        GetNativeDataSize(ManagedObject) + ManagedObject.Size);
    Marshal.StructureToPtr(ManagedObject, NativeStructure, false);

    // Marshal extra data
    if (ManagedObject.Size > 0)
    {
        IntPtr dataPtr = new IntPtr(NativeStructure.ToInt32()
            + Marshal.SizeOf(typeof(WaveFormatEx)));
        Marshal.Copy(ManagedObject.AdditionalData, 0, dataPtr, Math.Min(
            ManagedObject.Size,
            ManagedObject.AdditionalData.Length));
    }
    return NativeStructure;
}
And this is my calling code:
WaveFormatEx test = new WaveFormatEx(100);
int Size = System.Runtime.InteropServices.Marshal.SizeOf(test);
// After this call, test.FormatTag should be set to 1 (PCM audio),
// but it is still 0, as well as all the other members
int Result = Avi.AVIStreamReadFormat(AudioStream, 0, test, ref Size);

There are several mistakes in the code and the declarations that prevent this code from working on a 64-bit operating system. Be sure to set the Platform Target to x86.
Are you sure the native function actually returns data? What is the Result return value? A non-zero value indicates failure.
The proper way to call this function is to call it twice. First with the lpFormat argument set to null (IntPtr.Zero) so it tells you how large a buffer it needs (returned through lpcbFormat). Then you create the buffer and call it again.
Instead of a custom marshaller, I would just create the buffer with Marshal.AllocHGlobal after the first call and pass the IntPtr it returns as the lpFormat argument in the second call. Then, if you get a success return code, use Marshal.PtrToStructure to read the data back into a WaveFormatEx, and Marshal.Copy to get the additional data.
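To make that concrete, here is a minimal sketch of the two-call approach, not the custom-marshaller code from the question. It assumes the P/Invoke is redeclared to take the stream and the format as IntPtr, and it uses a hypothetical blittable WaveFormatExHeader struct in place of the question's WaveFormatEx class; adapt the names to your own code.
// Hedged sketch: assumed redeclaration (IntPtr for the stream and the format)
[DllImport("avifil32.dll")]
static extern int AVIStreamReadFormat(IntPtr stream, int position, IntPtr format, ref int size);

// Hypothetical blittable header matching the fixed-size part of WAVEFORMATEX
[StructLayout(LayoutKind.Sequential, Pack = 2)]
struct WaveFormatExHeader
{
    public short FormatTag;
    public short Channels;
    public int SamplesPerSec;
    public int AvgBytesPerSec;
    public short BlockAlign;
    public short BitsPerSample;
    public short Size;           // number of extra bytes that follow the header
}

static WaveFormatExHeader ReadFormat(IntPtr audioStream, out byte[] extraData)
{
    // First call: lpFormat = IntPtr.Zero, so the API reports the required size
    int size = 0;
    AVIStreamReadFormat(audioStream, 0, IntPtr.Zero, ref size);

    IntPtr buffer = Marshal.AllocHGlobal(size);
    try
    {
        // Second call: fill the buffer we just allocated
        int hr = AVIStreamReadFormat(audioStream, 0, buffer, ref size);
        if (hr != 0)
            throw new InvalidOperationException("AVIStreamReadFormat failed: " + hr);

        var header = (WaveFormatExHeader)Marshal.PtrToStructure(buffer, typeof(WaveFormatExHeader));
        int headerSize = Marshal.SizeOf(typeof(WaveFormatExHeader));

        // Copy whatever extra bytes follow the fixed-size header
        extraData = new byte[Math.Max(0, size - headerSize)];
        if (extraData.Length > 0)
            Marshal.Copy(new IntPtr(buffer.ToInt64() + headerSize), extraData, 0, extraData.Length);
        return header;
    }
    finally
    {
        Marshal.FreeHGlobal(buffer);
    }
}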
Fwiw, using ref causes the P/Invoke marshaller to pass a WaveFormatEx** to the function but it expects a WaveFormatEx*. Which will cause it to overwrite data in the garbage collected heap, destroying its internal format. A kaboom is next when the CLR notices this.
Check out the NAudio project as a good alternative for doing this yourself.

Related

Marshal a std::vector<uint64_t> from C++ to C#

No matter what I try, I appear to get garbage results when I marshal the data across. The data after the marshal copy just contains an array of what looks like uninitialized data, pure garbage.
Thanks for your help in advance!
C++
typedef uint64_t TDOHandle;
extern "C" DATAACCESSLAYERDLL_API const TDOHandle * __stdcall DB_GetRecords()
{
const Database::TDBRecordVector vec = Database::g_Database.GetRecords();
if (vec.size() > 0)
{
return &vec[0];
}
return nullptr;
}
C#
The declaration
[System.Security.SuppressUnmanagedCodeSecurity()]
[DllImport("DataCore.dll")]
static private extern IntPtr DB_GetRecords();

// The marshalling process
IntPtr ptr_records = DB_GetRecords();
if (ptr_records != IntPtr.Zero)
{
    Byte[] recordHandles = new Byte[DB_GetRecordCount() * sizeof(UInt64)];
    Marshal.Copy(ptr_records, recordHandles, 0, recordHandles.Length);
    Int64[] int64Array = new Int64[DB_GetRecordCount()];
    Buffer.BlockCopy(recordHandles, 0, int64Array, 0, recordHandles.Length);
}
You are returning the address of memory owned by a local variable. When the function returns, the local variable is destroyed. Hence the address you returned is now meaningless.
You need to allocate dynamic memory and return that. For instance, allocate it with CoTaskMemAlloc. Then the consuming C# can deallocate it with a call to Marshal.FreeCoTaskMem.
Or allocate the memory using new, but also export a function from your unmanaged code that can deallocate the memory.
For example:
if (vec.size() > 0)
{
    TDOHandle* records = new TDOHandle[vec.size()];
    // code to copy content of vec to records
    return records;
}
return nullptr;
And then you would export another function that exposed the deallocator:
extern "C" DATAACCESSLAYERDLL_API void __stdcall DB_DeleteRecords(
const TDOHandle * records)
{
if (records)
delete[] record;
}
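For completeness, here is a sketch of what the consuming C# side might look like with this allocate/deallocate pair. The DllImport declarations (including the return type of DB_GetRecordCount) are assumptions based on the code in the question and answer, not verified signatures.
[DllImport("DataCore.dll")]
static extern int DB_GetRecordCount();

[DllImport("DataCore.dll")]
static extern IntPtr DB_GetRecords();

[DllImport("DataCore.dll")]
static extern void DB_DeleteRecords(IntPtr records);

static ulong[] GetRecordHandles()
{
    int count = DB_GetRecordCount();
    ulong[] handles = new ulong[count];

    IntPtr ptr = DB_GetRecords();
    if (ptr == IntPtr.Zero)
        return handles;

    try
    {
        if (count > 0)
        {
            // Marshal.Copy has no ulong overload, so copy as Int64 and reinterpret
            long[] temp = new long[count];
            Marshal.Copy(ptr, temp, 0, count);
            Buffer.BlockCopy(temp, 0, handles, 0, count * sizeof(ulong));
        }
        return handles;
    }
    finally
    {
        // With the CoTaskMemAlloc variant you would call
        // Marshal.FreeCoTaskMem(ptr) here instead
        DB_DeleteRecords(ptr);
    }
}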
All that said, it seems that you can obtain the array length before you call the function to populate the array. You do that with DB_GetRecordCount(). In that case you should create an array in your managed code, and pass that to the unmanaged code for it to populate. That side steps all the issues of memory management.
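A minimal sketch of that caller-allocates approach, assuming the native export is changed to something like void __stdcall DB_FillRecords(TDOHandle* buffer, size_t count) (the name and signature are hypothetical):
[DllImport("DataCore.dll")]
static extern int DB_GetRecordCount();

// Hypothetical export: native code copies the vector into the supplied buffer
[DllImport("DataCore.dll")]
static extern void DB_FillRecords(
    [Out, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] ulong[] buffer,
    IntPtr count);

static ulong[] GetRecordHandles()
{
    int count = DB_GetRecordCount();
    ulong[] handles = new ulong[count];  // managed side owns the memory
    if (count > 0)
        DB_FillRecords(handles, (IntPtr)count);
    return handles;
}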
I'll add that there is another way to do it:
public sealed class ULongArrayWithAllocator
{
    // Not necessary, StdCall is the default
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    public delegate IntPtr AllocatorDelegate(IntPtr size);

    private GCHandle Handle;
    private ulong[] allocated { get; set; }

    public ulong[] Allocated
    {
        get
        {
            // We free the handle the first time the property is
            // accessed (we are already C#-side when it is accessed)
            if (Handle.IsAllocated)
            {
                Handle.Free();
            }
            return allocated;
        }
    }

    // We could/should implement a full IDisposable interface, but
    // the point of this class is that you use it when you want
    // to let C++ allocate some memory and you want to retrieve it,
    // so you'll access Allocated and free the handle
    ~ULongArrayWithAllocator()
    {
        if (Handle.IsAllocated)
        {
            Handle.Free();
        }
    }

    // I'm using IntPtr for size because normally
    // sizeof(IntPtr) == sizeof(size_t) and vector<>.size()
    // returns a size_t
    public IntPtr Allocate(IntPtr size)
    {
        if (allocated != null)
        {
            throw new NotSupportedException();
        }
        allocated = new ulong[(long)size];
        Handle = GCHandle.Alloc(allocated, GCHandleType.Pinned);
        return Handle.AddrOfPinnedObject();
    }
}
[DllImport("DataCore.dll", CallingConvention = CallingConvention.StdCall)]
static private extern IntPtr DB_GetRecords(ULongArrayWithAllocator.AllocatorDelegate allocator);
and to use it:
var allocator = new ULongArrayWithAllocator();
DB_GetRecords(allocator.Allocate);
// Here the Handle is freed
ulong[] allocated = allocator.Allocated;
and C++ side
extern "C" DATAACCESSLAYERDLL_API void __stdcall DB_GetRecords(TDOHandle* (__stdcall *allocator)(size_t)) {
...
// This is a ulong[vec.size()] array, that you can
// fill C++-side and can retrieve C#-side
TDOHandle* records = (*allocator)(vec.size());
...
}
or something similar :-) You pass a delegate to the C++ function that can allocate memory C#-side :-) Then, C#-side, you can retrieve the memory that was allocated. It is important that you don't make more than one allocation C++-side in this way in a single call, because the class saves a single allocated reference, and that reference is what "protects" the allocated memory from the GC (so don't do (*allocator)(vec.size());(*allocator)(vec.size());)
Note that it took me 1 hour to write correctly the calling conventions of the function pointers, so this isn't for the faint of heart :-)

C++ API and PInvoke in C#

I get a "System.AccessViolationException" when I try to call a method from a C++ API. In resultXML_out I get properly formatted XML with the data returned as expected, but the exception is raised (exactly on this method) and I cannot handle it even with a try/catch block. I assume that I should somehow allocate the memory for resultXML_out, but I don't know how to do this.
Here is C++ API method declaration:
SW_ErrCode SW_GetMyUserInfo (SW_LoginID lh, SW_XML *resultXML_out)
Declaration of SW_XML:
const char * SW_XML
Here is my code:
[StructLayout(LayoutKind.Sequential)]
public struct SW_LoginID
{
    public int loginId;
}

[StructLayout(LayoutKind.Sequential)]
public struct SW_XML
{
    public string xml;
}

[DllImport("sw_api.dll")]
[HandleProcessCorruptedStateExceptionsAttribute]
public static extern SW_ErrCode SW_GetMyUserInfo(SW_LoginID sh, out SW_XML resultXML_out);
And here is the call of this method:
SW_XML resultXML_out = new SW_XML();
resultXML_out.xml = "";
SW_ErrCode d = SW_GetMyUserInfo(login, out resultXML_out);
In the API I also found this method, but I don't know how to use it properly (or even whether it is necessary):
char* SW_AllocateString (unsigned size)
But after I pass e.g. 1000, the program terminates without even an exception... Here is the description of this function in the API documentation:
Allocate a string.
Returns a string allocated by the API
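No answer is included for this question here, but since SW_XML is declared as const char*, the resultXML_out parameter is effectively a char**. Below is only a hedged sketch of how such a signature is often declared, under the assumption that the API itself owns or allocates the returned string (check the documentation for SW_AllocateString and any matching free function before relying on this); SW_ErrCode, SW_LoginID and login are taken from the question's code.
[DllImport("sw_api.dll")]
public static extern SW_ErrCode SW_GetMyUserInfo(SW_LoginID lh, out IntPtr resultXML_out);

// usage
IntPtr xmlPtr;
SW_ErrCode err = SW_GetMyUserInfo(login, out xmlPtr);
string xml = xmlPtr != IntPtr.Zero ? Marshal.PtrToStringAnsi(xmlPtr) : null;
// Who frees the string (and how) depends on the API; do not free memory you do not own.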

What is the correct P/Invoke signature of double*& which points to an unmanaged array?

I am wrapping a C++ DLL that does high-quality sample rate conversion in C#, and I am not sure what type I should use for the op0 parameter. A C++ wrapper would call it like this:
int _cdecl process(double* const ip0, int l, double*& op0)
The documentation says about the parameter:
"#param[out] op0 This variable receives the pointer to the resampled data.
This pointer may point to the address within the "ip0" input buffer, or to
*this object's internal buffer. In real-time applications it is suggested
to pass this pointer to the next output audio block and consume any data
left from the previous output audio block first before calling the
process() function again. The buffer pointed to by the "op0" on return may
be owned by the resampler, so it should not be freed by the caller."
What I would like to do is the following:
[DllImport("r8bsrc.dll", EntryPoint="process", CallingConvention = CallingConvention.Cdecl)]
public static extern int Process([in] double[] ip0,
    int length,
    [out] double[] op0);
But I am pretty sure this would not work, since the marshaller cannot know how big the memory behind op0 is, or am I wrong?
So I guess I have to copy the values behind op0 back to a managed array myself. Maybe:
[DllImport("r8bsrc.dll", EntryPoint="process", CallingConvention = CallingConvention.Cdecl)]
public static extern int Process([in] double[] ip0,
    int length,
    out IntPtr op0); //or do i need out double* ?
And then wrap it again with:
private IntPtr FOutBufferPtr; // reuse it as recommended

public int Process(double[] input, out double[] output)
{
    var outSamples = R8BrainDLLWrapper.Process(input, input.Length, out FOutBufferPtr);
    output = new double[outSamples];
    Marshal.Copy(FOutBufferPtr, output, 0, outSamples);
    return outSamples;
}
What is the optimal way which involves the least number of copies?
EDIT2:
This is the current code, it works perfectly:
public int Process(double[] input, ref double[] output)
{
    // pin the input during process
    var pinnedHandle = GCHandle.Alloc(input, GCHandleType.Pinned);

    // resample
    var outSamples = R8BrainDLLWrapper.Process(FUnmanagedInstance,
        pinnedHandle.AddrOfPinnedObject(), input.Length, out FOutBufferPtr);

    // copy to output array
    if (output.Length < outSamples)
        output = new double[outSamples];
    Marshal.Copy(FOutBufferPtr, output, 0, outSamples);

    // free pin
    pinnedHandle.Free();
    return outSamples;
}
The signature is now:
[DllImport("r8bsrc.dll", EntryPoint="r8b_process", CallingConvention = CallingConvention.Cdecl)]
public static extern int Process(IntPtr instance,
    IntPtr ip0,
    int length,
    out IntPtr op0);
#param[out] op0
This variable receives the pointer to the resampled data. This pointer may point to the address within the "ip0" input buffer, or to *this object's internal buffer. In real-time applications it is suggested to pass this pointer to the next output audio block and consume any data left from the previous output audio block first before calling the process() function again. The buffer pointed to by the "op0" on return may be owned by the resampler, so it should not be freed by the caller.
This immediately presents a constraint on ip0. You must arrange that the buffer that ip0 points to is stable beyond the end of the call to the function. That implies that you must pin it before calling the function. Which in turn implies that it must be declared as IntPtr.
For op0, this points to either memory owned by the resampler, or to a location within the ip0 input buffer. So, again you are going to have to use an IntPtr, this time an out parameter.
So, the declaration must be:
[DllImport("r8bsrc.dll", EntryPoint="process",
CallingConvention = CallingConvention.Cdecl)]
public static extern int Process(IntPtr ip0, int length, out IntPtr op0);
And as discussed above, the pointer you pass in ip0 must be obtained using the GCHandle class so that you can pin the array.

Safe and Correct Struct Marshalling

Unmanaged and Managed Memory Regions
I am attempting to execute unmanaged code from a C library. One of the methods takes a void* as a parameter, but under the hood it is cast to a struct of type nc_vlen_t.
C struct for nc_vlen_t
/** This is the type of arrays of vlens. */
typedef struct {
    size_t len;   /**< Length of VL data (in base type units) */
    void *p;      /**< Pointer to VL data */
} nc_vlen_t;
Calling the method works and returns correct results; I am more concerned about the pinning and safe handling of managed and unmanaged memory regions. I want to be as certain as possible that I am not going to cause memory leaks or a SEGFAULT. I wrote a struct that is marshalled to and from nc_vlen_t when I execute the C-library method calls.
C# struct
[StructLayout(LayoutKind.Sequential)]
public struct VlenStruct {
    public Int32 len;
    public IntPtr p; // Data
}
The struct consists of a size_t that indicates the array length and a void * to the data. Inside the library it has attributes that allow it to cast the (void*) to the appropriate numeric types and I've had great success with that so far.
What I want to understand is the best way to handle the memory regions. After reading some articles and other SO questions, this is my best guess for how to handle it. I have a class that acts as an arbiter for creating and managing the structs and their memory. I rely on a destructor to free the handle, which unpins the array so that the GC can do its job.
C# Vlen Helper
public class Vlen {
    private GCHandle handle;
    private VlenStruct vlen_t;

    public Vlen() {
        isNull = true;
    }

    public Vlen(Array t) {
        isNull = false;
        handle = GCHandle.Alloc(t, GCHandleType.Pinned); // Pin the array
        vlen_t.len = t.Length;
        vlen_t.p = Marshal.UnsafeAddrOfPinnedArrayElement(t, 0); // Get the pointer for &t[0]
    }

    ~Vlen() {
        if (!isNull) {
            handle.Free(); // Unpin the array
        }
    }

    public VlenStruct ToStruct() {
        VlenStruct retval = new VlenStruct();
        retval.len = vlen_t.len;
        retval.p = vlen_t.p;
        return retval;
    }

    private bool isNull;
}
C Method Declaration
//int cmethod(const int* typep, void *data)
// cmethod copies the array contents of the vlen struct to a file
// returns 0 after successful write
// returns -1 on fail
[DllImport("somelib.dll", CharSet = CharSet.Ansi, SetLastError = true, ExactSpelling = true, CallingConvention=CallingConvention.Cdecl)]
public static extern Int32 cmethod(ref Int32 typep, ref VlenStruct data);
If I use this class to create the struct, is there a possibility that the GC will clean up the array before the C-library is called in this situation:
C# Use-case
{
    double[] vlenBuffer = new double[] { 0, 12, 4 };
    Vlen data = new Vlen(vlenBuffer); // The instance now pins vlenBuffer
    VlenStruct s = data.ToStruct();
    Int32 type = VLEN_TYPE;
    cmethod(ref type, ref s);
}
Is it possible for the data instance to be collected, and thereby unpin vlenBuffer, which could cause unpredictable behavior while the external library method executes?
Yes, you certainly have a problem here. As far as the jitter is concerned, the lifetime of your "data" object ends just before the ToStruct() method returns. Check this answer for the reason why. Which permits the finalizer to run while your unmanaged code is running. Which unpins your array. It would actually take another garbage collection to corrupt the data that the unmanaged code uses. Very rare indeed but not impossible. You are not likely to get an exception either, just random data corruption.
One workaround is to extend the lifetime of the Vlen object beyond the call, like this:
Vlen data = ...
...
cmethod(ref type, ref s);
GC.KeepAlive(data);
Which works but doesn't win any prizes, easy to forget. I would do this differently:
public static void CallHelper<T>(int type, T[] data) {
    var hdl = GCHandle.Alloc(data, GCHandleType.Pinned);
    try {
        var vlen = new VlenStruct();
        vlen.len = data.Length;
        vlen.p = hdl.AddrOfPinnedObject();
        cmethod(ref type, ref vlen);
    }
    finally {
        hdl.Free();
    }
}
Usage:
var arr = new int[] { 1, 2, 3 };
CallHelper(42, arr);
Which, beyond avoiding the early collection problem, also keeps the array pinned for as short a time as possible. Do note that the ref on the first argument of this function is pretty strange; you would not expect this function to alter the data type.

Pass .NET Bitmap object to COM (DirectShow filter)

I'm trying to create a source filter that makes a live video stream based on a sequence of pictures.
To do this, I created an interface derived from IUnknown:
[ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("F18FC642-5BA2-460D-8D12-23B7ECFA8A3D")]
public interface IVirtualCameraFilter_Crop
{
    void SetCurrentImage(Bitmap img);
    ...
};
And in my program I obtain it like this:
pUnk = Marshal.GetIUnknownForObject(sourceFilter);
Marshal.AddRef(pUnk);
filterInterface = Marshal.GetObjectForIUnknown(pUnk) as IVirtualCameraFilter_Crop;
When I pass simple types everything works fine. But when I try to pass a C# Bitmap object I get an error "unable to cast COM object to <my object type>", or the application shuts down with an APPCRASH error.
filterInterface.SetCurrentImage(frame);
I understand that this is not the correct way, but I do not know of other ways to pass the parameters. I tried to pass an IntPtr to the BitmapData and then I get the same application crash.
So how can I pass the bitmap to the DirectShow filter?
Result:
For a complete picture, here is the code I ended up with.
Create an interface:
[ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("F18FC642-5BA2-460D-8D12-23B7ECFA8A3D")]
public interface IVirtualCameraFilter_Crop
{
    unsafe void SetPointerToByteArr(byte* array, int length);
};
implementation:
unsafe public void SetPointerToByteArr(byte* array, int length)
{
    this.array = new byte[length];
    Marshal.Copy(new IntPtr(array), this.array, 0, length);
}
In application:
byte[] text = ...; // get data
unsafe
{
    fixed (byte* ptr = &text[0])
    {
        filterInterface.SetPointerToByteArr(ptr, text.Length);
    }
}
System.Drawing.Bitmap is a .NET type, not a COM type, and there is no equivalent for it in COM, so you cannot use it as a parameter of a COM interface.
Either use the COM interface IStream (which is not easy to use from C#, since .NET's MemoryStream does not implement it), use the COM interface IPicture, or just use an array of bytes.
Also be aware that your DirectShow filter will usually be called on a thread that is not the UI thread, so you are supposed to put the proper locking mechanisms inside your filter.
You can't pass the Bitmap object directly from .NET into native code. You can pass the BitmapData IntPtr, but you should copy that data into your own buffer inside the filter, because the pointer will no longer be valid once you unlock the bitmap. Passing an array of bytes should work fine; you can do it this way:
// interface method declaration
interface IVirtualCameraFilter_Crop
{
    [PreserveSig]
    int SetImageData([In, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] byte[] _array, [In] int size);
}

// Code implementation
IVirtualCameraFilter_Crop _filter = // your filter interface
BitmapData _data = // your BitmapData from LockBits
int _size = // your image size while you lock it: Width * Height * BPP / 8
byte[] _array = new byte[_size];
Marshal.Copy(_data.Scan0, _array, 0, _size);
_filter.SetImageData(_array, _size);

// Passing an IntPtr instead is similar; only the method declaration in the interface changes:
[PreserveSig]
int SetImageData([In] IntPtr _array, [In] int size);
// This way you can just pass the _data.Scan0 value

// The C++ interface declaration and handler are the same for both ways
// C++ interface declaration
DECLARE_INTERFACE(IVirtualCameraFilter_Crop, IUnknown)
{
    STDMETHOD(SetImageData)(LPBYTE _array, long size) PURE;
};

// C++ implementation
STDMETHODIMP CFilter::SetImageData(LPBYTE _array, long size)
{
    CheckPointer(_array, E_POINTER);
    CopyMemory(_internalArray, _array, size);
    return NOERROR;
}
Other things to check while your filter is crashing:
1 - you do not copy the data from the given buffer
2 - you apply data to the existing buffer without locking the executing thread
3 - you do not calculate the data size properly while copying inside the C++ method
4 - your resulting buffer is not allocated.
Actually, I think you should try to make your filter entirely in C#.
Here is BaseClasses.NET with examples:
Pure NET DirectShow Filters in Csharp
Here is virtual camera implementation:
DirectShow Virtual Video Capture Source Filter in C#
Regards,
Maxim.
