I am a little confused about using a 32-bit interop DLL with a 64-bit process.
In order to get access to 8 TB of memory I am going to build my application for 64 bit. Unfortunately, it uses a statistical interop library which is built in 32-bit mode. I don't have the sources for this library, so I cannot rebuild it for 64 bit.
In this article the suggestion is to create a 64-bit surrogate process that will communicate with my app using IPC (e.g. WCF). Here we can find a solution that uses a Runtime Callable Wrapper (RCW). Which is better? I started to implement the surrogate process, and just today I found the second solution, and I don't know whether it is suitable for my needs.
I should mention that this statistical interop library has hundreds of interfaces and classes, though I only need some of them. I have started creating a WCF service hosting several of them as endpoints, and it seems it will be a lot of code/work.
Can I use the second method (RCW) with the interop DLL?
Regards,
jotbek
Well, "better" is a loaded term. But, yes, COM surrogates can make it a helluvalot simpler to get this going. If you can use the system surrogate, odds are almost always good when the library was well designed, then you just need to duplicate the registry keys into the 64-bit keys and tweak a few of them to use the surrogate and it all works without you writing any code at all. The MSDN starting page is here.
It won't work out if the library doesn't support cross-apartment marshaling. If you have no idea whether it does, try calling a library function from a worker thread; if that doesn't work, don't bother trying. And you'll lose the "better" if this library is prone to crashing bugs; that invariably turns out poorly in an out-of-process scenario. Speed might be an issue, since out-of-process calls have a lot of overhead, but you're stuck with that either way. You'll get good answers instead of SO guesses by contacting the library owner for support.
Related
I have a 32-bit application which I have no control over. I currently have it communicating with a 32-bit C++ DLL which massages the data and passes it on to a 32-bit .NET DLL where my C# application can get to it. I would like to upgrade my C# app to 64 bits to get greater memory access.
Is there a way to get my 32-bit C++ DLL to talk to the .NET concurrent collections, specifically ConcurrentQueue? Will this allow inter-process communication? If so, is there an example I could learn from? If not, does anyone know of an API or library for bidirectional 32/64-bit inter-process communication?
It seems that this would be a major problem for a lot of developers, but my searching has not uncovered a fast, reliable and ideally easy solution. I find it strange that the major component developers do not offer any solutions (that I have found).
Thanks in advance for any assistance.
There's no way to have an in-process mixture of 32-bit and 64-bit code; an application and the DLLs it loads must all be the same bitness.
If you want to communicate between a 32-bit and a 64-bit application, then you've got a few options:
Package up the 32-bit library into a COM object and host it out of process. COM will marshal the calls from 64-bit to 32-bit and back again.
Use IPC (such as named pipes, TCP or WCF) to communicate between the 32-bit and 64-bit applications (a minimal named-pipe sketch follows this list).
Use memory-mapped files to exchange data through shared data structures. This is probably the most complicated solution.
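As mentioned above, here is a minimal named-pipe sketch in C# for the second option. The pipe name "calc32" and the line-based protocol are just examples; a real solution needs framing, error handling, and probably WCF or something similar layered on top.

// 32-bit helper process: serves one request over a named pipe.
using System.IO;
using System.IO.Pipes;

class Helper32
{
    static void Main()
    {
        using (var server = new NamedPipeServerStream("calc32"))
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            using (var writer = new StreamWriter(server) { AutoFlush = true })
            {
                string request = reader.ReadLine();       // whatever the 64-bit side asks for
                writer.WriteLine(Process32(request));     // stand-in for the real 32-bit work
            }
        }
    }

    static string Process32(string request)
    {
        return request == null ? "" : request.ToUpperInvariant();
    }
}

// 64-bit side:
//   using (var client = new NamedPipeClientStream(".", "calc32"))
//   {
//       client.Connect();
//       // write a request line, read the reply line
//   }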
Short question: is it possible (on an x64 OS, of course)? If not, why exactly?
I have developed a C# plugin DLL for 32-bit Excel.
When compiled as x86 it works fine.
When compiled as x64 the COM call fails.
Do I need a 64-bit version of Excel?
I thought COM was agnostic of the compilation architecture and made communication possible between DLLs developed in different technologies and with different architectures, but I guess the latter is wrong.
I guess an x64 DLL obviously cannot be called via COM (or by any other means) from a 32-bit app.
COM supports two kinds of servers, in-process and out-of-process. Office extensions are in-process components: a DLL that gets loaded into the process. A hard rule for 32-bit processes is that they cannot load 64-bit DLLs, and the other way around. This is enforced by the registry itself: a 32-bit process cannot directly access the registration information for 64-bit COM servers; it is redirected to the HKLM\Software\Wow6432Node keys. In other words, it cannot even see components of the wrong bitness.
Out-of-process components don't have that restriction; they run in their own process. COM marshals the calls between the two processes using RPC and papers over the bitness difference. This is also a way to get an in-process 64-bit server to work with a 32-bit host: you can run the component in a surrogate process. This is tricky to get going and almost never worth the hassle; out-of-process calls are much more expensive than in-process calls due to the required marshaling and context switching. Not just a little more expensive either: it is roughly 10,000 times slower, mostly because an in-process function call is so very fast. It is only ever used to keep a legacy 32-bit server working with a 64-bit program. Look at COM+ hosting if you want to try this; I don't know much about it.
I have a project in which I'll have to process hundreds if not thousands of messages a second, and process/plot this data on graphs accordingly. (The user will search for a set of data for which the graph will be plotted in real time; I won't literally have to plot thousands of values on a graph.)
I'm having trouble understanding how to use DLLs to do the bulk of the message processing in C++ but then hand the information over to a C# interface. Can someone dumb it down for me here?
Also, as speed will be a priority, I was wondering whether crossing between two different layers of code will cost more performance than writing the project entirely in C# or, of course, C++. However, I've read bad things about programming a GUI in C++, and this application must also look modern, clean, professional, etc. So I was thinking C# would be the way forward (perhaps XAML/WPF).
Thanks for your time.
The simplest way to interoperate between a C/C++ DLL and a .NET assembly is P/Invoke. On the C/C++ side, create a DLL as you would any other. On the C# side, you write a P/Invoke declaration. For example, say your DLL is mydll.dll and it exports a function void Foo():
[DllImport("mydll.dll")]
extern static void Foo();
That's it. You simply call Foo like any other static class method. The hard part is getting data marshalled, and that is a complicated subject. If you are writing the DLL, you can probably go out of your way to make the exported functions easy to marshal. For more on the topic of P/Invoke marshalling, see here: http://msdn.microsoft.com/en-us/magazine/cc164123.aspx.
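To give a feel for the marshalling part, here is a hedged sketch of a slightly richer declaration. The DLL name, the Analyze export and the struct are invented for illustration; the real signatures come from your library's headers.

using System;
using System.Runtime.InteropServices;

// Field order and packing must match the native struct exactly.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
struct AnalysisResult
{
    public int StatusCode;
    public double Value;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
    public string Message;                                   // fixed-size char[64] on the native side
}

static class NativeMethods
{
    // Hypothetical export: int Analyze(const char* input, AnalysisResult* result);
    [DllImport("mydll.dll", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
    internal static extern int Analyze(string input, out AnalysisResult result);
}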
You will take a performance hit when using P/Invoke. Every time a managed application makes an unmanaged method call, it takes a hit crossing the managed/unmanaged boundary and then back again. When you marshal data, a lot of copying goes on. The copying can be reduced if necessary by using 'unsafe' C# code (using pointers to access unmanaged memory directly).
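A small sketch of that 'unsafe' approach: reading a native buffer in place through a pointer instead of copying it into a managed array. The GetBuffer export is hypothetical; compile with /unsafe.

using System;
using System.Runtime.InteropServices;

static class UnsafeReader
{
    // Hypothetical export: const int* GetBuffer(int* length);
    [DllImport("mydll.dll")]
    static extern IntPtr GetBuffer(out int length);

    internal static unsafe long SumBuffer()
    {
        int length;
        int* p = (int*)GetBuffer(out length).ToPointer();   // no copy: walk the unmanaged memory directly
        long sum = 0;
        for (int i = 0; i < length; i++)
            sum += p[i];
        return sum;
    }
}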
What you should be aware of is that all .NET applications are chock full of P/Invoke calls. No .NET application can avoid making operating system calls, and every OS call has to cross into the unmanaged world of the OS. WinForms and even WPF GUI applications make that journey many hundreds, even thousands of times a second.
If it were my task, I would first do it 100% in C#. I would then profile it and tweak performance as necessary.
If speed is your priority, C++ might be the better choice. Try to estimate how heavy the computation really is (1,000 messages can be trivial to handle in C# if the calculation per message is easy, or too much even for the best-optimized program if it is not). C++ may have more of a performance advantage over C# if your algorithms are complex, involve many different classes, etc.
You might want to take a look at this question for a performance comparison.
Separating back-end and front-end is a good idea. Whether you get a performance penalty from having one in C++ and the other in C# depends on how much data conversion is actually necessary.
I don't think programming a GUI in C++ is painful in general. MFC might be painful; Qt is not (IMHO).
Maybe this gives you some points to start with!
Another possible way to go: it sounds like this task is a prime target for parallelization. Build your app in such a way that it can split its workload across several CPU cores or even different machines. Then you can solve your performance problems (if there are any) by throwing hardware at them.
If you have the C/C++ source, consider linking it into a C++/CLI .NET assembly. This kind of project allows you to mix in unmanaged code and put managed interfaces on top of it. The result is an ordinary .NET assembly which is trivial to use from C# or VB.NET projects.
There is built-in marshaling of simple types, so that you can call functions from the managed C++ side into the unmanaged side.
The only thing you need to be aware of is that when you marshal a delegate into a function pointer, it doesn't hold a reference, so if you need the C++ side to hold on to managed callbacks, you need to arrange for a reference to be kept alive. Other than that, most of the built-in conversions work as expected. Visual Studio will even let you debug across the boundary (turn on unmanaged debugging).
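A hedged C# sketch of that keep-a-reference point, which applies whether the boundary is C++/CLI or plain P/Invoke (the delegate type and its use are illustrative):

using System;
using System.Runtime.InteropServices;

class CallbackBridge
{
    delegate void ProgressCallback(int percent);

    // The unmanaged side only sees the raw function pointer; this field is what
    // keeps the delegate itself alive so the GC cannot collect it mid-callback.
    static ProgressCallback _keepAlive;

    public static IntPtr CreateCallbackPointer()
    {
        _keepAlive = percent => Console.WriteLine("progress: {0}%", percent);
        // Hand the returned IntPtr to the unmanaged code (that API is not shown here).
        return Marshal.GetFunctionPointerForDelegate(_keepAlive);
    }
}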
If you have a .lib, you can use it in a C++/CLI project as long as it's linked to the C-Runtime dynamically.
You should really prototype this in C# before you start screwing around with marshalling and unmarshalling data into unsafe structures so that you can invoke functions in a C++ DLL. C# is very often faster than you think it'll be. Prototyping is cheap.
The frequency with which I come across the situation where I have to call native 32-bit code from a managed 64-bit process is increasing as 64-bit machines and applications become prevalent. I don't want to mark my application as 32-bit, and I cannot obtain 64-bit versions of the code that is being called.
The solution that I currently use is to create C++ COM shims that are loaded out of process to make the 32-bit calls from the 64-bit process.
This COM shim solution works well, and the cross-process calls are handled behind the scenes by COM, which minimises the overhead of this approach.
I would, however, like to keep all the new development that we undertake in C#, and wondered whether there are any frameworks that minimise the overhead of doing this. I have looked at IpcChannel, but I feel that this approach is not as neat as the COM shim solution.
thanks,
Ed
I had the same problem and my solution was to use remoting. Basically the project consisted of:
A platform-independent CalculatorRemote.dll library, with:
a CalculatorNative internal static class containing the x32 P/Invoke methods;
a RemoteCalculator class derived from MarshalByRefObject which used the native methods from CalculatorNative;
The main platform-independent C# library (e.g. Calculator.dll), referencing CalculatorRemote.dll, with a Calculator class which privately used a singleton of the RemoteCalculator class to invoke the x32 functions where needed;
An x32 console application which hosted RemoteCalculator from CalculatorRemote.dll for Calculator.dll to consume via an IpcChannel.
So if the main application started in x64 mode, it spawned the RemoteCalculator host application and used the remoted RemoteCalculator instance. (When in x32 it just used a local instance of RemoteCalculator.) The tricky part was telling the calculator-host application to shut down.
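A rough sketch of that wiring (names follow the description above; the CalculatorNative signature and the DLL it imports are hypothetical, and real code also needs the shutdown handling just mentioned):

using System;
using System.Runtime.InteropServices;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;   // reference System.Runtime.Remoting.dll

// In CalculatorRemote.dll: the x32 P/Invoke wrapper (hypothetical export).
internal static class CalculatorNative
{
    [DllImport("calculator32.dll")]
    internal static extern double Compute(double input);
}

// In CalculatorRemote.dll: remotable because it derives from MarshalByRefObject.
public class RemoteCalculator : MarshalByRefObject
{
    public double Compute(double input)
    {
        return CalculatorNative.Compute(input);   // the x32 call happens in whichever process hosts this object
    }
}

// In the x32 console host:
//   ChannelServices.RegisterChannel(new IpcChannel("CalculatorHost"), false);
//   RemotingConfiguration.RegisterWellKnownServiceType(
//       typeof(RemoteCalculator), "calc", WellKnownObjectMode.Singleton);
//   Console.ReadLine();   // keep the host alive

// In the x64 main application (Calculator.dll):
//   var calc = (RemoteCalculator)Activator.GetObject(
//       typeof(RemoteCalculator), "ipc://CalculatorHost/calc");
//   double result = calc.Compute(42.0);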
I think this is better than using COM because:
You don't have to register COM classes anywhere;
Interoperating with COM should be slower than .NET remoting;
Sometimes, if something goes wrong on the COM side, you need to restart your application to recover from it (possibly I'm just not very familiar with COM);
When running in x32 mode there won't be any performance penalty with remoting -- all methods will be invoked in the same AppDomain.
Pretty much the only answer is out-of-process communication. You could create a .NET project that builds a 32-bit executable, which makes all of the needed 32-bit calls, and communicate with it via Windows messages, WCF, named pipes, memory-mapped files (added in .NET 4.0), etc. I am pretty sure this is how Paint.NET does its WIA (Windows Image Acquisition) from a 64-bit process.
In the case of PDN, they simply pass the name of the file they expect as the output, but more complex communication isn't difficult. It could be a better way to go depending on what you're doing.
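For that simple file-based handoff, something along these lines is enough (the helper executable name and output path are made up for the example):

using System.Diagnostics;

static class Acquisition
{
    // 64-bit app: launch the 32-bit helper and wait for it to write the agreed-upon file.
    public static void RunHelper(string outputPath)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "Acquire32.exe",            // hypothetical helper compiled as x86
            Arguments = "\"" + outputPath + "\"",  // tell the helper where to put its result
            UseShellExecute = false
        };

        using (var helper = Process.Start(startInfo))
        {
            helper.WaitForExit();
        }
        // The 64-bit process then reads the file at outputPath.
    }
}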
We use an open-source library written in C# that wraps the Windows BITS COM component. However, the code is only safe to run in x86 mode. I would like to contribute to the library by making it safe for both x86 and x64; however, I have no deep knowledge in this field.
Could you please list good/bad practices, typical issues, maybe guiding principles, etc.: what should I watch out for?
For example, I have seen in the code that an IntPtr is cast to System.Int32, which does not fly well on x64. How would you address this and similar issues in a platform-agnostic manner?
I think you are talking about SharpBits.NET, a wrapper for the BITS component. Yes, there are several places where the author fumbled the interop. There is otherwise no reason why it couldn't work; BITS is available both as a 32-bit and a 64-bit COM server. One example of such a fumble is in this thread.
Getting a P/Invoke declaration wrong, or doing manual marshaling improperly, accounts for the vast majority of all 64-bit interop problems. I've left lots of hints on 64-bit coding problems in this thread.
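To make the IntPtr-cast-to-Int32 issue from the question concrete: the usual fix is to keep handles and pointers typed as IntPtr end to end. A hedged sketch, with an invented DLL name and export:

using System;
using System.Runtime.InteropServices;

static class NativeJobs
{
    // Hypothetical native export: HANDLE OpenJob(void);

    // x86-only: an int silently truncates the 64-bit handle/pointer value on x64.
    //   [DllImport("bitswrap.dll")] static extern int OpenJob();

    // Bitness-agnostic: IntPtr is 4 bytes on x86 and 8 bytes on x64.
    [DllImport("bitswrap.dll")]
    internal static extern IntPtr OpenJob();

    // If a numeric value is ever needed, use ToInt64() (never ToInt32()) or check IntPtr.Size.
}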
Hm, you don't ;)
Seriously.
The problem is that 64-bit and 32-bit COM objects are not interoperable in the same process, so you definitely need either some lazy-load setup without strong binding (and then we are not talking plain COM but COM with IDispatch), or two different wrapper assemblies.
The whole x86/x64 thing is a SIGNIFICANT gap, in that it is very hard to cross. For example, you CANNOT load a 32-bit DLL into a 64-bit process; wrapper or not, you just cannot load it.
MS designed it this way, on purpose. As such, there is no way around it.