I have an unmanaged C++ exe that I could call from inside my C# code directly (I have the C++ source, so I could build it as a lib) or by spawning a process and grabbing the data from its OutputStream. What are the advantages/disadvantages of each option?
Since you have the source code of the C++ library, you can use C++/CLI to compile it into a mixed-mode DLL that is easy to consume from the C# application.
The benefit of this is maximum flexibility in the data flow (input and output to that C++ module).
Running the C++ code out of process has one benefit, though: if your C++ code is not very robust, this keeps your main C# process stable, so it cannot be crashed by the C++ code.
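For illustration, a minimal sketch of the out-of-process approach in C# (the executable name and arguments are placeholders, not from the question):

using System;
using System.Diagnostics;

var psi = new ProcessStartInfo("mytool.exe", "--input data.txt")
{
    RedirectStandardOutput = true,
    UseShellExecute = false,   // required for stream redirection
    CreateNoWindow = true
};

using (var process = Process.Start(psi))
{
    // Everything comes back as untyped text; parsing and validation are on you.
    string output = process.StandardOutput.ReadToEnd();
    process.WaitForExit();
    Console.WriteLine("Exit code {0}: {1}", process.ExitCode, output);
}

A crash in the child process shows up here as a non-zero exit code instead of taking down the C# host.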
The big downside to scraping the OutputStream is the lack of data typing. I'd much rather do the work of exporting a few functions and reusing an existing library, but that's really just a preference.
Another disadvantage of spawning a process is that, on Windows, spawning a process is a very expensive (slow) operation. If you intend to call the C++ code quite often, this is worth considering.
An advantage is that you are automatically better isolated from crashes in the C++ program.
Drop-in replacement of the C++ executable can be an advantage as well.
Furthermore, writing interop code in C# can be a big hassle. If it's a complicated interface and you decide to do interop, have a look at C++/CLI for the interop layer.
You're far better off taking a subset of the functions of the C++ executable and building it into a library. You'll keep type safety, and you'll be able to better leverage exception handling (not to mention finer-grained control over how you manage the calls into the library's functions).
If you go with grabbing the data from the OutputStream of the executable, you'll have no visibility into what the executable is doing, no real exception handling, and you'll lose any type information you may have had.
The main disadvantage to being in process would be making sure you handle the managed/native interactions correctly.
1) The C++ code will probably depend on deterministic destruction for cleanup, resource freeing, etc. I say probably because this is common and good practice in C++.
In the managed code this means you have to be careful to dispose of your C++/CLI wrapper objects properly. If an object is used once, a using statement in C# will do this for you. If the object needs to live for a while as a member, you'll find that the Dispose call needs to be chained the whole way through your application.
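A minimal sketch of both cases, assuming a hypothetical C++/CLI wrapper type called NativeWrapper:

// One-shot use: a using statement gives deterministic cleanup.
using (var wrapper = new NativeWrapper())
{
    wrapper.DoWork();
} // Dispose runs here and releases the native resources.

// Long-lived member: the owner must itself be IDisposable and chain the call.
class Owner : IDisposable
{
    private readonly NativeWrapper _wrapper = new NativeWrapper();

    public void Dispose()
    {
        _wrapper.Dispose(); // chained all the way up the object graph
    }
}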
2) Another issue depends on how memory-hungry your application is. The managed garbage collector can be lazy: it is guaranteed to kick in if a managed allocation needs more space than is available, but the unmanaged allocator is not connected to it in any way. You therefore need to manually inform the garbage collector that you will be making unmanaged allocations and that it should keep that space in mind. This is done using the GC.AddMemoryPressure method.
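For example (the size is illustrative only):

// Tell the GC that this managed object also holds ~10 MB of native memory,
// so collections are scheduled as if the managed heap were that much bigger.
const long nativeBytes = 10 * 1024 * 1024;
GC.AddMemoryPressure(nativeBytes);

// ... use the unmanaged allocation ...

// Must be balanced with a matching call when the native memory is freed.
GC.RemoveMemoryPressure(nativeBytes);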
The main disadvantages to being out of process are:
1) Speed.
2) Code overhead to manage the communication.
3) Code overhead to watch for one or the other process dying unexpectedly.
I was wondering if there is any way to allocate memory in a process and have that memory be readable, writable, and executable.
I found System.Runtime.InteropServices.Marshal.AllocHGlobal, but I don't know if that's what I'm looking for. If so, how does it work? I don't really understand it; where is the allocated memory located?
This is a task for the VirtualAlloc and VirtualProtect API calls rather than the interop marshaller. You will have to declare them with [DllImport]. However, this entire process would be painful enough that I would seriously consider using a different language: perhaps a C++ project that provides just the interop calls you need, while the UI remains in C#. (Honestly, interop is the only area where I see C++ and .NET working well together.)
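The declarations look roughly like this (a sketch; verify the signatures and flag values against MSDN before relying on them):

using System;
using System.Runtime.InteropServices;

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                  uint flAllocationType, uint flProtect);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize,
                                  uint flNewProtect, out uint lpflOldProtect);

const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000;
const uint PAGE_READWRITE = 0x04, PAGE_EXECUTE_READ = 0x20;

// Allocate a read/write page, fill it with code, then make it executable.
IntPtr page = VirtualAlloc(IntPtr.Zero, (UIntPtr)4096,
                           MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
// ... copy machine code into the page ...
VirtualProtect(page, (UIntPtr)4096, PAGE_EXECUTE_READ, out uint oldProtect);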
I'm attempting to marshal a forest of objects from C# .NET to native C++. That is: I have a graph of hundreds of millions of objects (if not more) that I wish to use in native C++. See it as a normal 'leaf'/'node' construction with pointers between leaves and nodes. I control both the C++ and the C# code, so I can make adjustments to the code.
The inner loop of the software is going to be implemented in native C++ for performance reasons. I basically want to tell the GC to stop for a while (to ensure objects aren't moved), then do the fancy C++ routine, and then continue the GC once it's done.
There are also things that I don't want to do:
Make my own mark & sweep algorithm to pin all objects in the graph. Not only will that be very time consuming, it'll also cost a lot of memory because I then have to keep track of all these GCHandle objects by myself.
Use native allocation methods like malloc. I've had a native C++ application in the past, and it suffered greatly from memory fragmentation, that .NET 'automatically' solves just fine... not to mention the benefit of GC.
So, any suggestions on how to do this?
I would look at using managed C++ (C++/CLI).
Maybe accessing the .NET objects from managed C++ will be fast enough.
Otherwise, use managed C++ to "walk" the .NET objects and create native C++ objects, deleting them all once done.
Or create a class-factory class in managed C++, callable from C#, that creates the native C++ objects; once again, delete them all once done.
Or do as Marc Gravell says and manually allocate a buffer of unmanaged memory, dealing with structs inside that space, perhaps using a code generator driven by attributes on your C# classes; a sketch of this follows.
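A rough illustration of that last option, where Node is a hypothetical blittable struct standing in for your real node type:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Node
{
    public int Value;
    public int LeftIndex;   // indices into the buffer instead of object references
    public int RightIndex;
}

// One big unmanaged block: the GC never moves it, so native C++ can walk it
// freely while the managed side keeps only this single IntPtr.
int count = 1000000;
int size = Marshal.SizeOf(typeof(Node));
IntPtr buffer = Marshal.AllocHGlobal(count * size);
try
{
    var root = new Node { Value = 42, LeftIndex = -1, RightIndex = -1 };
    Marshal.StructureToPtr(root, buffer, false); // write node 0
    // ... pass 'buffer' to the native inner loop ...
}
finally
{
    Marshal.FreeHGlobal(buffer);
}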
I have some native applications we have written in C++ and compiled to executables (.exe, etc.).
We need to run this in a service from C#, and I am wondering if it makes any difference whether we create a wrapper project that makes it easy to call from C# (either a C++/CLI project or just P/Invoke), or simply start a process that calls the .exe file as we would from the command line.
Of course it's easier to consume from C# if it's just a matter of importing a namespace and calling a C# function that takes care of things. But I could just as easily create a function that starts a process, calls the command-line exe, and gets the result that way.
Is there any performance difference between the two methods? That will most likely be the key factor here, as we can implement both ways easily.
Also, a C++/CLI wrapper makes transferring variables easier.
"Of course it's easier to consume from C# if it's just a matter of importing a namespace and calling a C# function that takes care of things."
It depends on the nature of what your app is going to do with that C++ code.
A wrapper will certainly be easier to debug.
Performance should be better with a wrapper: with an EXE you pay the startup cost of launching the executable, which can take a non-trivial amount of time. With a wrapper, the performance bottleneck is instead the transfer of data between managed and unmanaged code, which, by the way, should cost less than an executable run.
All this depends on the concrete application lifecycle and can really only be measured by you, in your concrete context.
My choice would be:
If calls are not very frequent, keep it as a separate executable.
If calls are still not very frequent but the amount of data you need to pass to and from each call is big enough, consider a wrapper. Here the EXE may still be a valid choice if your data is, or can be, a file, so you just pass file paths to the executable.
If calls are frequent, use a wrapper.
To repeat: these are just considerations that may or may not bring practical benefit in your concrete case.
There is a performance difference. However, as with all performance tweaking, the key is profiling. Do it the way that's easier, and check whether that is good enough.
The main performance difference between the P/Invoke and C++/CLI approaches is marshalling, which is automatic in P/Invoke (with some customization available).
I have a project in which I'll have to process hundreds if not thousands of messages a second, and process/plot this data on graphs accordingly. (The user will search for a set of data for which the graph will be plotted in real time; it's not literally plotting thousands of values on a graph.)
I'm having trouble understanding how to use DLLs so that the bulk of the message processing happens in C++ while the information is handed to a C# interface. Can someone dumb it down for me here?
Also, as speed will be a priority, I was wondering whether crossing between two different layers of code will have more of a performance hit than programming the project in its entirety in C#, or of course, C++. However, I've read bad things about programming a GUI in C++, and this application must also look modern, clean, professional, etc. So I was thinking C# would be the way forward (perhaps XAML, WPF).
Thanks for your time.
The simplest way to interop between a C/C++ DLL and a .NET Assembly is through p/invoke. On the C/C++ side, create a DLL as you would any other. On the C# side you create a p/invoke declaration. For example, say your DLL is mydll.dll and it exports a method void Foo():
[DllImport("mydll.dll")]
private static extern void Foo();
That's it. You simply call Foo like any other static class method. The hard part is getting data marshalled, and that is a complicated subject. If you are writing the DLL, you can go out of your way to make the exported functions easy to marshal. For more on the topic of P/Invoke marshalling, see http://msdn.microsoft.com/en-us/magazine/cc164123.aspx.
You will take a performance hit when using p/invoke. Every time a managed application makes an unmanaged method call, it takes a hit crossing the managed/unmanaged boundary and then back again. When you marshal data, a lot of copying goes on. The copying can be reduced if necessary by using 'unsafe' C# code (using pointers to access unmanaged memory directly).
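For instance, a hedged sketch of reading native memory in place, where GetBuffer is a hypothetical export of mydll.dll:

using System.Runtime.InteropServices;

[DllImport("mydll.dll")]
static extern unsafe int* GetBuffer(out int length);

static unsafe long Sum()
{
    int* data = GetBuffer(out int length);
    long sum = 0;
    for (int i = 0; i < length; i++)
        sum += data[i];   // reads unmanaged memory directly, no marshalling copy
    return sum;
}

This requires compiling with the /unsafe switch.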
What you should be aware of is that all .NET applications are chock full of p/invoke calls. No .NET application can avoid making Operating System calls and every OS call has to cross into the unmanaged world of the OS. WinForms and even WPF GUI applications make that journey many hundreds, even thousands of times a second.
If it were my task, I would first do it 100% in C#. I would then profile it and tweak performance as necessary.
If speed is your priority, C++ might be the better choice. Try to make some estimations about how hard the calculation really is (1000 messages can be trivial to handle in C# if the calculation per message is easy, and they can be too hard for even the best optimized program). C++ might have some more advantages (regarding performance) over C# if your algorithms are complex, involving different classes, etc.
You might want to take a look at this question for a performance comparison.
Separating back-end and front-end is a good idea. Whether you get a performance penalty from having one in C++ and the other in C# depends on how much data conversion is actually necessary.
I don't think programming the GUI is a pain in general. MFC might be painful, Qt is not (IMHO).
Maybe this gives you some points to start with!
Another possible way to go: sounds like this task is a prime target for parallelization. Build your app in such a way that it can split its workload on several CPU cores or even different machines. Then you can solve your performance problems (if there will be any) by throwing hardware at them.
If you have C/C++ source, consider linking it into C++/CLI .NET Assembly. This kind of project allows you to mix unmanaged code and put managed interfaces on it. The result is a simple .NET assembly which is trivial to use in C# or VB.NET projects.
There is built-in marshaling of simple types, so that you can call functions from the managed C++ side into the unmanaged side.
The only thing you need to be aware of is that when you marshal a delegate into a function pointer, it doesn't hold a reference, so if you need the C++ to hold managed callbacks, you need to arrange for a reference to be held. Other than that, most of the built-in conversions work as expected. Visual Studio will even let you debug across the boundary (turn on unmanaged debugging).
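A minimal sketch of the delegate-lifetime pitfall, where RegisterCallback and native.dll are made-up names:

using System;
using System.Runtime.InteropServices;

delegate void ProgressCallback(int percent);

[DllImport("native.dll")]
static extern void RegisterCallback(ProgressCallback callback);

// The marshalled function pointer does not keep the delegate alive, so
// store a reference for as long as the native side may invoke it.
static ProgressCallback _keepAlive;

static void Hook()
{
    _keepAlive = p => Console.WriteLine("{0}%", p);
    RegisterCallback(_keepAlive);
}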
If you have a .lib, you can use it in a C++/CLI project as long as it's linked to the C-Runtime dynamically.
You should really prototype this in C# before you start screwing around with marshalling and unmarshalling data into unsafe structures so that you can invoke functions in a C++ DLL. C# is very often faster than you think it'll be. Prototyping is cheap.
I'm writing some native C++ code which needs to be called from C# (and I can't replace the C++ native code with C# code).
I found memory corruptions while allocating/deallocating some memory in the native C++ code using malloc/free. Then I used LocalAlloc/LocalFree and HeapAlloc/HeapFree and had the same problems.
The allocations/deallocations seem to be correct, and they happen in a separate thread, created by the native code.
I was wondering which is the best allocation strategy to use in a native C++ library called by C#.
EDIT: Found the problem. It wasn't in the allocation/deallocation code, but in some memory being written to after it had been deallocated.
As long as the C# side of the code is compiled with the /unsafe switch and the fixed keyword is used to pin the buffer of data, I think you should be OK.
As to the question of your memory allocation, it may not be the C++ memory allocation code that is causing the problem; it could be the way the C++ code interacts with the driver. Maybe try the VirtualAlloc/VirtualFree pair as per the MSDN docs.
Edit: When you try to allocate the buffer to hold the data from the C++ side after interacting with the driver, possibly a race condition or interrupt latency is causing the memory corruption. Just a thought.
Hope this helps.
Your question is missing essential details; it isn't at all clear whether the memory allocated by the C++ code needs to be released on the C# side. That's normally done automatically with, say, the P/Invoke marshaller or the COM interop layer in the CLR. Or it can be done manually by declaring a method argument as IntPtr and then using the Marshal class.
If it is done automatically, you must use the COM memory allocator, CoTaskMemAlloc(). If you marshal yourself, you could also use GlobalAlloc() and release on the C# side with Marshal.FreeHGlobal(). There isn't any advantage to using GlobalAlloc(), though; you might as well use CoTaskMemAlloc() and release with Marshal.FreeCoTaskMem().
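For instance, in the manual case, assuming a hypothetical export GetData whose C++ implementation allocates the result with CoTaskMemAlloc():

using System;
using System.Runtime.InteropServices;

[DllImport("native.dll")]
static extern IntPtr GetData(out int length);

static byte[] ReadData()
{
    IntPtr p = GetData(out int length);
    try
    {
        byte[] managed = new byte[length];
        Marshal.Copy(p, managed, 0, length); // copy to the managed heap
        return managed;
    }
    finally
    {
        Marshal.FreeCoTaskMem(p); // matches the allocator used on the C++ side
    }
}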
But you should have noticed this yourself: allocating with malloc() or HeapAlloc() on the C++ side causes leaks rather than corruption if the managed code releases the memory. Vista and Win7 have a much stricter heap manager; it terminates the program if it notices a bad release.
It sounds to me that you have simple heap corruption in your C++ code. That is the most common scourge of unmanaged C++ programming, over-running the end of a buffer, writing to memory that's been freed, bad pointer values. The way to get rid of bugs like these is a careful code review and use of a debug allocator, such as the one provided by <crtdbg.h>. Good luck with it.
The Windows Driver Development Kit recommends against the use of C++ for drivers.
Also, the best strategy is to have the driver manage its own memory. When the C# side needs to see the data, pass it a marshalled buffer and have the driver fill it, as in the sketch below.
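A rough sketch of that pattern, where ReadDeviceData and driverwrapper.dll are made-up names:

using System.Runtime.InteropServices;

[DllImport("driverwrapper.dll")]
static extern int ReadDeviceData(byte[] buffer, int length);

// The C# side owns the buffer; the marshaller pins it for the duration
// of the call and the native side simply fills it.
byte[] buf = new byte[4096];
int bytesRead = ReadDeviceData(buf, buf.Length);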