How many bytes does my function use? (C#)

I would like to calculate how many bytes my function occupies so that I can inject it into another process using CreateRemoteThread(). Once I know the number of bytes, I can write them into the remote process using the function's pointer. I have found an article online (see http://www.codeproject.com/KB/threads/winspy.aspx#section_3, chapter III) where they do the following in C++:
// ThreadFunc - the code being injected.
// Return value: password length
static DWORD WINAPI ThreadFunc (INJDATA *pData)
{
//Code to be executed remotely
}
// This function marks the memory address after ThreadFunc.
static void AfterThreadFunc (void) {
}
Then they calculate the number of bytes ThreadFunc fills using :
const int cbCodeSize = ((LPBYTE) AfterThreadFunc - (LPBYTE) ThreadFunc);
Using cbCodeSize they allocate memory in the remote process for the injected ThreadFunc and write a copy of ThreadFunc to the allocated memory:
pCodeRemote = (PDWORD) VirtualAllocEx( hProcess, 0, cbCodeSize, MEM_COMMIT, PAGE_EXECUTE_READWRITE );
if (pCodeRemote == NULL)
__leave;
WriteProcessMemory( hProcess, pCodeRemote, &ThreadFunc, cbCodeSize, &dwNumBytesXferred );
I would like to do this in C#. :)
I have tried creating delegates, getting their pointers, and subtracting them like this:
// Thread proc, to be used with Create*Thread
public delegate int ThreadProc(InjectionData param);
//Function pointer
ThreadFuncDeleg = new ThreadProc(ThreadFunc);
ThreadFuncPtr = Marshal.GetFunctionPointerForDelegate(ThreadFuncDeleg);
//FunctionPointer
AfterThreadFuncDeleg = new ThreadProc(AfterThreadFunc);
IntPtr AfterThreadFuncDelegPtr= Marshal.GetFunctionPointerForDelegate(AfterThreadFuncDeleg);
//Number of bytes
int cbCodeSize = (AfterThreadFuncDelegPtr.ToInt32() - ThreadFuncPtr.ToInt32())*4 ;
It just does not seem right, as I get a static number no matter what I do with the code.
My question is, if possible, how does one calculate the number of bytes a function's code fills in C#?
Thank you in advance.

I don't think this is possible, due to dynamic optimization and code generation in .NET. You can try to measure the IL code length, but measuring the length of the machine-dependent code will fail in the general case.
By 'fail' I mean you can't dynamically get a correct, meaningful size using this technique.
Of course, you can dig into how NGEN and the JIT compiler work and into the PDB structure and try to measure it. You can determine the size of your code by inspecting the generated machine code in Visual Studio, for example:
How to see the Assembly code generated by the JIT using Visual Studio
If you really need to determine the size, start with NET Internals and Code Injection / NET Internals and Native Compiling, but I can't imagine why you would ever want this.
Be aware that all the internals of how exactly the JIT works are subject to change, so any solution that depends on them can be broken by a future version of .NET.
If you want to stick with IL: check Profiling Interfaces (CLR Profiling API), and the somewhat older articles Rewrite MSIL Code on the Fly with the .NET Framework Profiling API and No Code Can Hide from the Profiling API in the .NET Framework 2.0. There are also some topics about the CLR Profiling API here on SO.
But the simplest way to explore an assembly is the Reflection API; you want MethodBody there. You can check the length of MethodBody.GetILAsByteArray to get the method length in IL instructions.
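For example, a minimal sketch of reading a method's IL length via Reflection (the class and method names here are illustrative, not taken from the question):
using System;
using System.Reflection;

class IlSizeDemo
{
    static int ThreadFunc(object param) { return 0; }

    static void Main()
    {
        MethodInfo mi = typeof(IlSizeDemo).GetMethod(
            "ThreadFunc", BindingFlags.NonPublic | BindingFlags.Static);
        byte[] il = mi.GetMethodBody().GetILAsByteArray();
        // Length of the IL stream only - this says nothing about the size of the
        // JIT-compiled machine code.
        Console.WriteLine("IL length in bytes: " + il.Length);
    }
}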

Related

Clear stack variable memory

I have 8 uints which represent a security key like this:
uint firstParam = ...;
uint secondParam = ...;
uint thirdParam = ...;
uint etcParam = ...;
uint etcParam = ...;
They are allocated as local variables, inside of an UNSAFE method.
Those keys are very sensitive.
I was wondering: do those locals on the stack get cleared when the method is over? Does the UNSAFE method have an effect on this? MSDN says that unsafe code is automatically pinned in memory.
If they are not removed from memory, will assigning them all to 0 help at the end of the method, even though analyzers will say this has no effect?
So I tested zeroing out the variables. However, in x64 Release mode the zeroing is removed from the final product (checked using ILSpy)
Is there any way to stop this?
Here is the sample code (in x64 Release)
private static void Main(string[] args)
{
int num = new Random().Next(10, 100);
Console.WriteLine(num);
MethodThatDoesSomething(num);
num = 0; // This line is removed!
Console.ReadLine();
}
private static void MethodThatDoesSomething(int num)
{
Console.WriteLine(num);
}
The num = 0 statement is removed in x64 release.
I cannot use SecureString because I'm P/Invoking into a native method which takes the uints as parameters.
I'm P/Invoking into the unmanaged method AllocateAndInitializeSid, which takes 8 uints as parameters. What could I do in this scenario?
I have tried adding
[MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
to the sample code (above Main method), however, the num = 0 is STILL removed!
EDIT: after some reasoning I've come to correct this answer.
DO NOT use SecureString; as @Servy and @Alejandro point out in the comments, it is no longer considered really secure and will give a false sense of security, probably leading to further unconsidered exposure.
I have struck through the passages I'm no longer comfortable with and, in their place, would recommend as follows.
To assign firstParam use:
firstParam = value ^ OBFUSCATION_MASK;
To read firstParam use (again):
firstParam ^ OBFUSCATION_MASK;
The ^ (bitwise XOR) operator is its own inverse, so applying it twice returns the original value. By reducing the time the value exists without obfuscation (for the CPU, time is really the number of machine-code cycles), its exposure is also reduced. Whenever the value is stored long-term (say, 2-3 microseconds) it should always be obfuscated. For example:
private static uint firstParam; // use static so that the compiler cannot remove apparently "useless" assignments
public void f()
{
// somehow acquire the value (network? encrypted file? user input?)
firstParam = externalSourceFunctionNotInMyCode() ^ OBFUSCATION_MASK; // obfuscate immediately
}
Then, several microseconds later:
public void g()
{
// use the value
externalUsageFunctionNotInMyCode(firstParam ^ OBFUSCATION_MASK);
}
The two external[Source|Usage]FunctionNotInMyCode() are entry and exit points of the value. The important thing is that as long as the value is stored in my code it is never in the plain, it's always obfuscated. What happens before and after my code is not under our control and we must live with it. At some point values must enter and/or exit. Otherwise what program would that be?
One last note is about the OBFUSCATION_MASK. I would randomize it at every start of the application, but ensure that the entropy is high enough; that means the count of 0s and 1s may not be exactly fifty/fifty, but should be near it. I think RNGCryptoServiceProvider will suffice. If not, it's always possible to count the bits or compute the entropy:
private static readonly uint OBFUSCATION_MASK = cryptographicallyStrongRandomizer();
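For instance, a minimal sketch of how such a cryptographicallyStrongRandomizer() helper could be written with RNGCryptoServiceProvider (the helper name comes from the line above; everything else is an assumption):
using System;
using System.Security.Cryptography;

static class MaskFactory
{
    // Returns a random 32-bit mask; intended to be called once per application start.
    public static uint CryptographicallyStrongRandomizer()
    {
        byte[] bytes = new byte[4];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes); // cryptographically strong random bytes
        }
        return BitConverter.ToUInt32(bytes, 0);
    }
}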
At that point it's relatively difficult to identify the sensitive values in the binary soup and maybe even irrelevant if the data was paged out to disk.
As always, security must be balanced with cost and efficiency (in this case, also readability and maintainability).
ORIGINAL ANSWER:
Even with pinned unmanaged memory you cannot be sure if the physical memory is paged out to the disk by the OS.
In fact, in nations where Internet Bars are very common, clients may use your program on a publicly accessible machine. An attacker may try and do as follows:
compromise a machine by running a process that occasionally allocates all the RAM available;
wait for other clients to use that machine and run a program with sensitive data (such as username and password);
once the rogue program exhausts all RAM, the OS will page out the virtual memory pages to disk;
after several hours of usage by other clients the attacker comes back to the machine to copy unused sectors and slack space to an external device;
his hope is that pagefile.sys changed sectors several times (this occurs through sector rotation and such, which may not be avoided by the OS and can depend on hardware/firmware/drivers);
he brings the external device to his dungeon and slowly but patiently analyzes the gathered data, which is mainly binary gibberish but may contain slews of ASCII characters.
By analyzing the data with all the time in the world and no pressure at all, he may find those sectors to which pagefile.sys has been written several "writes" before. There, the content of the RAM and thus heap/stack of programs can be inspected.
If a program stored sensitive data in a string, this procedure would expose it.
Now, you're using uint not string, but the same principles still apply. To be sure to not expose any sensitive data, even if paged out to disk, you can use secure versions of types, such as SecureString.
The usage of uint somewhat protects you from ASCII scanning, but to be really sure you should never store sensitive data in unsafe variables, which means you should somehow convert the uint into a string representation and store it exclusively in a SecureString.
Hope that helps someone implementing secure apps.
In .NET, you can never be sure that variables are actually cleared from memory.
Since the CLR manages the memory, it's free to move variables around, liberally leaving old copies behind, even if you purposely overwrite them with zeroes or other random values. A memory analyzer or a debugger may still be able to get them if it has enough privileges.
So what can you do about it?
Just terminating the method leaves the data behind on the stack, and it will eventually be overwritten by something else, without any certainty of when (or if) that will happen.
Manually overwriting it will help, provided the compiler doesn't optimize out the "useless" assignment (see this thread for details). This is more likely to succeed if the variables are short-lived (before the GC has had a chance to move them around), but you still have NO guarantee that there won't be other copies in other places.
The next best thing you can do is terminate the whole process immediately, preferably after overwriting the values too. This way the memory returns to the OS, and it will be cleared before being given to another process. You're still at the mercy of kernel-mode analyzers, though, but now you've raised the bar significantly.

Getting most performance out of C++/CLI

Profiling my application reveals that 50% of runtime is being spent in a packArrays() function which performs array transformations where C++ strongly outperforms C#.
To improve performance, I used unsafe in packArrays, gaining only low single-digit percentage improvements in runtime. To rule out cache as the bottleneck and to estimate the ceiling of the possible improvement, I wrote packArrays in C++ and timed the difference between both languages. The C++ version runs approx. 5x faster than C#. I decided to give C++/CLI a try.
As a result, I have three implementations:
C++ - a simple packArrays() function
C# - packArrays() is wrapped into a class, however the code inside the function is identical to the C++ version
C++/CLI - shown below, but again the implementation of packArrays() is identical (literally) to the previous two
The C++/CLI implementation is as follows
QCppCliPackArrays.cpp
public ref class QCppCliPackArrays
{
public:
void pack(array<bool> ^ xBoolArray, int xLen, array<bool> ^% yBoolArray, int % yLen)
{
// prepare variables
pin_ptr<bool> xBoolArrayPinned = &xBoolArray[0];
bool * xBoolArray_ = xBoolArrayPinned;
pin_ptr<bool> yBoolArrayPinned = &yBoolArray[0];
bool * yBoolArray_ = yBoolArrayPinned;
// go
packArrays(xBoolArray_, xLen, yBoolArray_, yLen);
}
};
packArraysWorker.cpp
#pragma managed(push, off)
void packArrays(bool * xArray, int xLen, bool * yArray, int & yLen)
{
... actual code that is identical across languages code ...
}
#pragma managed(pop)
QCppCliPackArrays.cpp is compiled with the /clr option; packArraysWorker.cpp is compiled with the No Common Language RunTime Support option.
The problem: when using a C# application to run both the C# and C++/CLI implementations, the C++/CLI implementation is still only marginally faster than C#.
Questions:
Is there any other option/setting/keyword I can use to increase the performance of C++/CLI?
Can the performance loss of C++/CLI compared to C++ be wholly attributed to interop? Currently, for 10K repetitions C# runs some 4.5 seconds slower than C++, putting interop at 0.45 milliseconds per repetition. As all types being passed are blittable, I would expect the interop to .. well, just pass over some pointers.
Would I gain anything by using P/Invoke (see the sketch after this list)? From what I read, no, but it's always better to ask.
Is there any other method I can use? Leaving a five-fold increase in performance on the table is just too much.
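For reference, a sketch of what the P/Invoke route might look like, assuming packArrays were exported unmanaged from a native DLL (the DLL name and the export itself are assumptions, not something provided in the post):
using System.Runtime.InteropServices;

static class NativePack
{
    // Hypothetical DLL name; packArrays would need to be exported (e.g. with
    // extern "C" __declspec(dllexport)) for this to resolve.
    [DllImport("PackArraysNative.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern void packArrays(
        [In, MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1)] bool[] xArray,
        int xLen,
        [In, Out, MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1)] bool[] yArray,
        ref int yLen);

    public static void Pack(bool[] x, bool[] y, ref int yLen)
    {
        // Note: bool is not blittable, so the marshaler may copy these arrays on
        // each call; switching to byte[] (blittable) would avoid that copy.
        packArrays(x, x.Length, y, ref yLen);
    }
}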
All timings are made in Release/x64 from the command line (not from VS) on a single thread.
EDIT:
In order to determine the performance loss due to interop, I placed a Stopwatch around the QCppCliPackArrays::packArrays() call as well as a chrono::high_resolution_clock inside packArrays() itself. The results show that the C# <-> C++/CLI switch costs approx. 5 milliseconds per 10K calls. The switch from managed C++/CLI to unmanaged C++/CLI, according to the results, costs nothing.
Hence, interop can be ruled out as the cause of the performance degradation.
On the other hand, it's obvious that packArrays() is NOT run as unmanaged! But why?
EDIT 2:
I tried to link the packArrays() as a .lib file exported from a separate unmanaged C++ library. Results are still the same.
EDIT 3:
The actual packArrays is this
public void packArrays(bool[] xConditions, int[] xValues, int xLen, ref int[] yValuesPacked, ref int yPackedLen)
{
// alloc
yPackedLen = xConditions.trueCount();
yValuesPacked = new int [yPackedLen];
// fill
int xPackedIdx = 0;
for (int xIdx = 0; xIdx < xLen; xIdx++)
if (xConditions[xIdx] == true)
yValuesPacked[xPackedIdx++] = xValues[xIdx];
}
It puts into yValuesPacked all values from xValues for which the corresponding xConditions[i] is true.
Now, I am facing a new issue - I have several implementations aiming to solve this problem, all of them working correctly (tested). When I run a benchmark that individually calls these different implementations 50K times on arrays 86K items long, I get the following timings in seconds:
The original implementation originalArray is the code listed above. Clearly, the QCsCpp* versions dominate the benchmark - these are the implementations using C++/CLI. However, when I replace originalArray in my original application, which calls packArrays a vast number of times, with either QCsCpp* implementation, the whole application runs SLOWER. With this result, I am really clueless and I must admit that it honestly crushed me. How can this be true? As always, any insight is much appreciated.

How to free memory in C# that is allocated in C++

I have a C++ DLL which is reading video frames from a camera. These frames get allocated in the DLL and are returned via pointer to the caller (a C# program).
When C# is done with a particular frame of video, it needs to clean it up. The DLL interface and memory management are wrapped in a disposable class in C# so it's easier to control things. However, it seems like the memory doesn't get freed/released. The memory footprint of my process grows and grows, and in less than a minute I get allocation errors in the C++ DLL as there isn't any memory left.
The video frames are a bit over 9 MB each. There is a lot of code, so I'll simply provide the allocation/deallocations/types/etc.
First: allocation in C++ of the raw buffer for the camera bytes.
dst = new unsigned char[mFrameLengthInBytes];
Second: the raw pointer is passed back across the DLL boundary as an unsigned char * and into an IntPtr in C#.
IntPtr pFrame = VideoSource_GetFrame(mCamera, ImageFormat.BAYER);
return new VideoFrame(pFrame, .... );
So now the IntPtr is passed into the CTOR of the VideoFrame class. Inside the CTOR the IntPtr is copied to an internal member of the class as follows:
IntPtr dataPtr;
public VideoFrame(IntPtr pDataToCopy, ...)
{
...
this.dataPtr = pDataToCopy;
}
My understanding is that this is a shallow copy and the class now references the original data buffer. The frame is used/processed/etc. Later, the VideoFrame class is disposed and the following is used to clean up the memory:
Marshal.FreeHGlobal(this.dataPtr);
I suspect the problem is that... dataPtr is an IntPtr and C# has no way to know that the underlying buffer is actually 9 MB, right? Is there a way to tell it how much memory to release at that point? Am I using the wrong C# free method? Is there one specifically for this sort of situation?
You need to call the corresponding "free" method in the library you're using.
Memory allocated via new is part of the C++ runtime, and calling FreeHGlobal won't work. You need to call (one way or the other) delete[] against the memory.
If this is your own library then create a function (eg VideoSource_FreeFrame) that deletes the memory. Eg:
void VideoSource_FreeFrame(unsigned char *buffer)
{
delete[] buffer;
}
And then call this from C#, passing in the IntPtr you got back.
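For example, a minimal sketch of the C# side, assuming the native module is named VideoSource.dll and exports VideoSource_FreeFrame with C linkage (the DLL name is an assumption; the function name comes from the suggestion above):
using System;
using System.Runtime.InteropServices;

public class VideoFrame : IDisposable
{
    // "VideoSource.dll" is a guess at the module name.
    [DllImport("VideoSource.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern void VideoSource_FreeFrame(IntPtr buffer);

    private IntPtr dataPtr;

    public VideoFrame(IntPtr pDataToCopy)
    {
        this.dataPtr = pDataToCopy;
    }

    public void Dispose()
    {
        if (this.dataPtr != IntPtr.Zero)
        {
            // Hand the buffer back so that delete[] runs in the same C++ runtime
            // and heap that allocated it.
            VideoSource_FreeFrame(this.dataPtr);
            this.dataPtr = IntPtr.Zero;
        }
    }
}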
You need to (in C++) call delete[] dst; since the buffer was allocated with new[]. That means you need to provide an API that the C# code can call, like FreeFrame(...), which does exactly that.
I agree with the first answer. Do NOT free it in C# code using any magical, liturgical incantations. Write a method in C++ that frees the memory, and call it from your C# code. Do NOT get into the habit of allocating memory in one heap (native) and freeing it in another heap (managed); that's just bad news.
Remember one of the rules from the book effective C++: Allocate memory in the constructor, and deallocate in the destructor. And if you can't do it in the destructor, do it in an in-class method, not some global (or even worse) friend function.

.NET EventWaitHandle slow

I'm using waveOutWrite with a callback function, and under native code everything is fast. Under .NET it is much slower, to the point I think I'm doing something very wrong, 5 or 10 times slower sometimes.
I can post both sets of code, but that seems like too much, so I'll just post the C code that is fast and point out the minor variances in the .NET code.
HANDLE WaveEvent;
const int TestCount = 100;
HWAVEOUT hWaveOut[1]; // don't ask why this is an array, just test code
WAVEHDR woh[1][20];
void CALLBACK OnWaveOut(HWAVEOUT,UINT uMsg,DWORD,DWORD,DWORD)
{
if(uMsg != WOM_DONE)
return;
assert(SetEvent(WaveEvent)); // .NET code uses EventWaitHandle.Set()
}
void test(void)
{
WaveEvent = CreateEvent(NULL,FALSE,FALSE,NULL);
assert(WaveEvent);
WAVEFORMATEX wf;
memset(&wf,0,sizeof(wf));
wf.wFormatTag = WAVE_FORMAT_PCM;
wf.nChannels = 1;
wf.nSamplesPerSec = 8000;
wf.wBitsPerSample = 16;
wf.nBlockAlign = WORD(wf.nChannels*(wf.wBitsPerSample/8));
wf.nAvgBytesPerSec = (wf.wBitsPerSample/8)*wf.nSamplesPerSec;
assert(waveOutOpen(&hWaveOut[0],WAVE_MAPPER,&wf,(DWORD)OnWaveOut,0,CALLBACK_FUNCTION) == MMSYSERR_NOERROR);
for(int x=0;x<2;x++)
{
memset(&woh[0][x],0,sizeof(woh[0][x]));
woh[0][x].dwBufferLength = PCM_BUF_LEN;
woh[0][x].lpData = (char*) malloc(woh[0][x].dwBufferLength);
assert(waveOutPrepareHeader(hWaveOut[0],&woh[0][x],sizeof(woh[0][x])) == MMSYSERR_NOERROR);
assert(waveOutWrite(hWaveOut[0],&woh[0][x],sizeof(woh[0][x])) == MMSYSERR_NOERROR);
}
int bufferIndex = 0;
DWORD times[TestCount];
for(int x=0;x<TestCount;x++)
{
DWORD t = timeGetTime();
assert(WaitForSingleObject(WaveEvent,INFINITE) == WAIT_OBJECT_0); // .NET code uses EventWaitHandle.WaitOne()
assert(woh[0][bufferIndex].dwFlags & WHDR_DONE);
assert(waveOutWrite(hWaveOut[0],&woh[0][bufferIndex],sizeof(woh[0][bufferIndex])) == MMSYSERR_NOERROR);
bufferIndex = bufferIndex == 0 ? 1 : 0;
times[x] = timeGetTime() - t;
}
}
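For reference, a rough sketch of what the .NET counterpart of the loop above might look like (the original post does not include it; AutoResetEvent stands in for the auto-reset event created by CreateEvent, and writeNextBuffer is a hypothetical stand-in for the waveOutWrite call):
using System;
using System.Diagnostics;
using System.Threading;

class WaveOutTiming
{
    static readonly AutoResetEvent waveEvent = new AutoResetEvent(false);

    // Called from the waveOut callback when a buffer completes (WOM_DONE),
    // i.e. the equivalent of SetEvent(WaveEvent) above.
    static void OnWaveOutDone()
    {
        waveEvent.Set();
    }

    static long[] RunTest(int testCount, Action<int> writeNextBuffer)
    {
        var times = new long[testCount];
        int bufferIndex = 0;
        for (int x = 0; x < testCount; x++)
        {
            var sw = Stopwatch.StartNew();
            waveEvent.WaitOne();          // corresponds to WaitForSingleObject
            writeNextBuffer(bufferIndex); // corresponds to the waveOutWrite call
            bufferIndex = bufferIndex == 0 ? 1 : 0;
            times[x] = sw.ElapsedMilliseconds;
        }
        return times;
    }
}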
The times[] array for the C code always has values around 80, which is the PCM buffer length I am using. The .NET code also shows similar values sometimes, however, it sometimes shows values as high as 1000, and more often values in the 300 to 500 range.
Doing the part that is in the bottom loop inside the OnWaveOut callback instead of using events, makes it fast all the time, with .NET or native code. So it appears the issue is with the wait events in .NET only, and mostly only when "other stuff" is happening on the test PC -- but not a lot of stuff, can be as simple as moving a window around, or opening a folder in my computer.
Maybe .NET events are just really bad about context switching, or .NET apps/threads in general? In the app I'm using to test my .NET code, the code just runs in the constructor of a form (easy place to add test code), not on a thread-pool thread or anything.
I also tried using the version of waveOutOpen that takes an event instead of a function callback. This is also slow in .NET but not in C, so again, it points to an issue with events and/or context switching.
I'm trying to keep my code simple and setting an event to do the work outside the callback is the best way I can do this with my overall design. Actually just using the event driven waveOut is even better, but I tried this other method because straight callbacks are fast, and I didn't expect normal event wait handles to be so slow.
Maybe not 100% related, but I faced somewhat the same issue: calling EventWaitHandle.Set X times is fine, but then, after a threshold I can't pin down, each call of this method takes a full second!
It appears that some .NET ways to synchronize threads are much slower than the ones you use in C++.
The almighty @jonskeet once made a post on his web site (https://jonskeet.uk/csharp/threads/waithandles.html) where he also refers to the very complex concept of .NET synchronization domains, explained here: https://www.drdobbs.com/windows/synchronization-domains/184405771
He mentions that .NET and the OS must communicate in a very time-precise way, with objects that must be converted from one environment to another. All this is very time consuming.
I have summarized a lot here, not to take credit for the answer, but there is an explanation. There are some recommendations here (https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives) about how to choose a synchronization mechanism depending on the context, and the performance aspect is mentioned a little.

Access Violation Exception/Crash from C++ callback to C# function

So I have a native 3rd party C++ code base I am working with (.lib and .hpp files) that I used to build a wrapper in C++/CLI for eventual use in C#.
I've run into a particular problem when switching from Debug to Release mode, in that I get an Access Violation Exception when a callback's code returns.
The code from the original hpp files for callback function format:
typedef int (*CallbackFunction) (void *inst, const void *data);
Code from the C++/CLI Wrapper for callback function format:
(I'll explain why I declared two in a moment)
public delegate int ManagedCallbackFunction (IntPtr oInst, const IntPtr oData);
public delegate int UnManagedCallbackFunction (void* inst, const void* data);
--Quickly, the reason I declared a second "UnManagedCallbackFunction" is that I tried to create an "intermediary" callback in the wrapper, so the chain changed from Native C++ > C# to a version of Native C++ > C++/CLI Wrapper > C#...Full disclosure, the problem still lives, it's just been pushed to the C++/CLI Wrapper now on the same line (the return).
And finally, the crashing code from C#:
public static int hReceiveLogEvent(IntPtr pInstance, IntPtr pData)
{
Console.WriteLine("in hReceiveLogEvent...");
Console.WriteLine("pInstance: {0}", pInstance);
Console.WriteLine("pData: {0}", pData);
// provide object context for static member function
helloworld hw = (helloworld)GCHandle.FromIntPtr(pInstance).Target;
if (hw == null || pData == null)
{
Console.WriteLine("hReceiveLogEvent: received null instance pointer or null data\n");
return 0;
}
// typecast data to DataLogger object ptr
IntPtr ip2 = GCHandle.ToIntPtr(GCHandle.Alloc(new DataLoggerWrap(pData)));
DataLoggerWrap dlw = (DataLoggerWrap)GCHandle.FromIntPtr(ip2).Target;
//Do Logging Stuff
Console.WriteLine("exiting hReceiveLogEvent...");
Console.WriteLine("pInstance: {0}", pInstance);
Console.WriteLine("pData: {0}", pData);
Console.WriteLine("Setting pData to zero...");
pData = IntPtr.Zero;
pInstance = IntPtr.Zero;
Console.WriteLine("pData: {0}", pData);
Console.WriteLine("pInstance: {0}", pInstance);
return 1;
}
All writes to the console are done and then we see the dreaded crash on the return:
Unhandled exception at 0x04d1004c in
helloworld.exe: 0xC0000005: Access
violation reading location 0x04d1004c.
If I step into the debugger from here, all I see is that the last entry on the call stack is: > "04d1004c()" which evaluates to a decimal value of: 80805964
Which is only interesting if you look at the console which shows:
entering registerDataLogger
pointer to callback handle: 790848
fp for callback: 2631370
pointer to inst: 790844
in hReceiveLogEvent...
pInstance: 790844
pData: 80805964
exiting hReceiveLogEvent...
pInstance: 790844
pData: 80805964
Setting pData to zero...
pData: 0
pInstance: 0
Now, I know that between Debug and Release some things are quite different in the Microsoft world. I am, of course, worried about byte padding and initialization of variables, so if there is something I am not providing here, just let me know and I'll add it to the (already long) post. I also think the managed code may NOT be releasing all ownership, and then the native C++ code (which I don't have the source for) may be trying to delete or kill off the pData object, thus crashing the app.
More full disclosure, it all works fine (seemingly) in Debug mode!
A real head scratch issue that would appreciate any help!
I think the stack gets smashed because of mismatched calling conventions:
try putting the attribute
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
on the callback delegate declaration.
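Applied to the delegate from the question, that would look roughly like this (a sketch; Cdecl is the guess here, and the const qualifier from the original declaration is dropped because it is not valid C#):
// requires: using System.Runtime.InteropServices;
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate int ManagedCallbackFunction(IntPtr oInst, IntPtr oData);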
This doesn't directly answer your question, but it may lead you in the right direction as far as debug mode okay vs. release mode not okay:
Since the debugger adds a lot of record-keeping information to the stack, generally padding out the size and layout of my program in memory, I was “getting lucky” in debug mode by scribbling over 912 bytes of memory that weren’t very important. Without the debugger, though, I was scribbling on top of rather important things, eventually walking outside of my own memory space, causing Interop to delete memory it didn’t own.
What is the definition of DataLoggerWrap? A char field may be too small for the data you are receiving.
I'm not sure what you are trying to achieve.
A few points:
1) The garbage collector is more aggressive in release mode so with bad ownership the behaviour you describe is not uncommon.
2) I don't understand what the code below is trying to do:
IntPtr ip2 = GCHandle.ToIntPtr(GCHandle.Alloc(new DataLoggerWrap(pData)));
DataLoggerWrap dlw = (DataLoggerWrap)GCHandle.FromIntPtr(ip2).Target;
You use GCHandle.Alloc to lock an instance of DataLoggerWrap in memory, but then you never pass it out to unmanaged code - so why do you lock it?
You also never free it?
The second line then grabs back a reference - why the circular path? why the reference - you never use it?
3) You set the IntPtrs to null - why? - this will have no effect outside of the function scope.
4) You need to know what the contract of the callback is. Who owns pData the callback or the calling function?
I'm with @jdehaan, except CallingConvention.StdCall could be the answer, especially when the 3rd-party lib is written in BC++, for example.
