Short question: is it possible (on an x64 OS, of course)? If not, why exactly?
I have developed a C# plugin DLL for 32-bit Excel.
When compiled in x86 it works fine.
When compiled in x64 the COM call fails.
Do I need a 64-bit version of Excel?
I thought COM was agnostic of the compilation architecture and made communication possible between DLLs developed in different technologies and with different architectures, but I guess that assumption is wrong.
I guess a 64-bit DLL obviously cannot be called via COM (or otherwise) from a 32-bit app.
COM supports two kinds of servers, in-process and out-of-process. Office extensions are in-process components, a DLL that gets loaded into the process. A hard rule for 32-bit processes is that they cannot load 64-bit DLLs. And the other way around. This is enforced by the registry itself: a 32-bit process cannot directly access the registration information for 64-bit COM servers. They are redirected to the HKLM\Software\Wow6432Node keys. Or in other words, they cannot even see components of the wrong bitness.
Out-of-process components don't have that restriction; they run in their own process. COM marshals the calls between the two processes using RPC and papers over the bitness difference. This is also a way to get an in-process 64-bit server to work with a 32-bit host: you can run the component in a surrogate process. This is tricky to get going and almost never worth the hassle; out-of-process calls are much more expensive than in-process calls due to the required marshaling and context switching. Not just a little more expensive either, it is about 10,000 times slower, mostly because an in-process function call is so very fast. It is only ever used to keep a legacy 32-bit server working with a 64-bit program. Look at COM+ hosting if you want to try this, I don't know much about it.
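To see that registry split in action, here is a minimal C# sketch (the CLSID below is just a placeholder): it opens the same CLSID path through an explicit 32-bit and an explicit 64-bit registry view, which is exactly the redirection that keeps a 32-bit host from ever seeing a 64-bit-only registration.

    using System;
    using Microsoft.Win32;

    class RegistryViewDemo
    {
        static void Main()
        {
            // Placeholder CLSID; substitute the CLSID of the COM server you care about.
            const string clsidPath = @"CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32";

            foreach (var view in new[] { RegistryView.Registry32, RegistryView.Registry64 })
            {
                // Open HKEY_CLASSES_ROOT through an explicit 32-bit or 64-bit view.
                // The 32-bit view is transparently redirected to ...\Wow6432Node\...,
                // so a registration that exists only for the other bitness shows up
                // as "not registered".
                using (var root = RegistryKey.OpenBaseKey(RegistryHive.ClassesRoot, view))
                using (var key = root.OpenSubKey(clsidPath))
                {
                    Console.WriteLine("{0}: {1}", view,
                        key == null ? "not registered" : key.GetValue(null));
                }
            }
        }
    }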
Related
I need to load different hardware drivers that are provided in .dll files.
The problem appears to be that the drivers for one device are given in a 64-bit DLL, while the other device (rather old) apparently relies on drivers given in a 32-bit DLL. I want to control them through a program written in C# which will be run through a Python wrapper.
Obviously I can't run both devices from one program directly, but I need a way to address them depending on each other - for example: device 1 waiting for device 2 to finish some job. Is there any way to circumvent this issue or will I need to run them in two separate programs and manage actions depending on each other through the Python wrapper?
On 64-bit Windows, 64-bit processes cannot use 32-bit DLLs and 32-bit processes cannot use 64-bit DLLs. Microsoft has documented this:
On 64-bit Windows, a 64-bit process cannot load a 32-bit dynamic-link library (DLL). Additionally, a 32-bit process cannot load a 64-bit DLL.
You would need a 32-bit process that communicates with the 32-bit DLL and a 64-bit process to communicate with the 64-bit DLL. Microsoft says this:
However, 64-bit Windows supports remote procedure calls (RPC) between 64-bit and 32-bit processes (both on the same computer and across computers).
The problem then becomes one of how to have Python communicate with these processes. Some form of Interprocess Communication (IPC) would be needed. Microsoft created a technology decades ago that can do just that - COM interfaces using Out of Process COM servers (out-of-proc).
The general idea is:
Create a 64-bit out-of-proc COM server that wraps (and exposes) the needed methods and data of the 64-bit DLL.
Create a 32-bit out-of-proc COM server that wraps (and exposes) the needed methods and data of the 32-bit DLL.
Write either 32-bit or 64-bit client code that instantiates the COM objects and calls their interfaces. Python can be used as a COM client via win32com.
COM provides an IPC mechanism under the hood that allows a 64-bit client to access both a 64-bit and a 32-bit out-of-proc COM server. You can have 32-bit clients communicate with 32-bit and 64-bit out-of-proc COM servers as well.
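For a C# client the late-bound version might look like the sketch below; the ProgID and the method name are made up, standing in for whatever the out-of-proc server actually exposes.

    using System;

    class ComClientDemo
    {
        static void Main()
        {
            // "MyVendor.DeviceServer" is a made-up ProgID; COM looks it up in the
            // registry and, because the server is registered out-of-proc
            // (LocalServer32), starts it in its own process regardless of this
            // client's bitness.
            Type serverType = Type.GetTypeFromProgID("MyVendor.DeviceServer");
            dynamic server = Activator.CreateInstance(serverType);

            // The call is dispatched across the process boundary by the COM runtime
            // (late-bound via IDispatch, much like Python's win32com).
            server.WaitForDevice(2);   // hypothetical method exposed by the server
        }
    }

Both a 32-bit and a 64-bit client can activate the same out-of-proc server this way; COM's RPC marshaling hides the bitness difference.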
I haven't done low-level Windows work using the newer MS languages. When I had to do what you need in your question, the two main technologies that made it easy to write COM servers and COM interfaces were:
MSVC/C++ using Microsoft Foundation Classes (MFC)
MSVC/C++ using Active Template Library (ATL).
I had a preference for ATL since it didn't require the MFC library and had less overhead.
Yes, you will need two separate processes, running from different executables. Only 32-bit executables can load 32-bit DLLs. (See @MichaelPetch's answer for details of how to get one to communicate with the other with a remote-procedure-call mechanism that can simulate calling 32-bit functions from 64-bit code or vice versa.)
x86 32-bit and x86-64 are two separate architectures that just happen to be both executable by the same CPU, in different modes. Their machine-code is very similar but not compatible, and many other things are different too, including object file format, and ABI details like pointer width being 8 vs. 4 bytes.
Having a 64-bit process start a thread that does a jmp far to a 32-bit code segment is technically possible (because the GDT has 32 and 64-bit code segment entries), but that's insane and very poorly supported by everything including the DLL dynamic loading / symbol resolving code. (Also including the kernel, so this is not even safe: an interrupt or system call could return to 32-bit code in 64-bit mode if you tried this, because the kernel knows your thread / process started in 64-bit mode.)
You won't be able to convince a compiler to generate 32-bit code and link it with 64-bit code, even if you were using C on an OS where it was "safe" to do this. A higher-level managed language makes it even more unusable even if it was "safe" with hand-written asm.
I mention this just in case you're curious about what it would technically require to make this happen, not because anyone should ever do this.
But if it was safe, in theory you could write (by hand in asm) wrapper functions for every function in the 32-bit DLL that changes to 32-bit mode before calling the function.
Apparently this was a thing in early 32-bit Windows; you could call 16-bit DLLs from 32-bit code via a "thunk" wrapper that the OS supplied. But there's no similar support for 32-bit DLLs from 64-bit code or vice versa.
I am a little confused about using a 32-bit interop DLL with a 64-bit process.
In order to get access to 8 TB of memory I am going to build my application for 64-bit; unfortunately, it uses a statistical interop library which is built in 32-bit mode. I don't have the sources for this library, so I cannot rebuild it for 64-bit.
In this article the suggestion is to create a surrogate process that hosts the 32-bit library and communicates with my app using IPC (e.g. WCF). Here we can find a solution that uses a Runtime Callable Wrapper (RCW). Which is better? I started to implement the surrogate process, and just today I found the second solution, which I don't know whether it is suitable for my needs.
I need to mention that this statistical interop library has hundreds of interfaces and classes. Still, I need just some of them. I have started creating a WCF service hosting several of them as endpoints, and it seems it will be a lot of code/work.
Can I use the second method (RCW) with the interop DLL?
Regards,
jotbek
Well, "better" is a loaded term. But, yes, COM surrogates can make it a helluvalot simpler to get this going. If you can use the system surrogate (the odds are almost always good when the library was well designed), then you just need to duplicate the registry keys into the 64-bit keys and tweak a few of them to use the surrogate, and it all works without you writing any code at all. The MSDN starting page is here.
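As a rough sketch of what "duplicate and tweak the registry keys" can amount to (the GUIDs below are placeholders, and the exact keys you need depend on how the library registered itself and on the Windows version), the canonical system-surrogate setup adds two values: an AppID on the CLSID, and an empty DllSurrogate on that AppID so dllhost.exe hosts the component.

    using Microsoft.Win32;

    class SurrogateSetupSketch
    {
        // Placeholder GUIDs; use the library's real CLSID and pick (or reuse) an AppID.
        const string Clsid = "{11111111-1111-1111-1111-111111111111}";
        const string AppId = "{22222222-2222-2222-2222-222222222222}";

        static void Main()   // needs to run elevated, it writes under HKLM
        {
            // The 32-bit server's existing registration lives in the 32-bit view
            // (physically under Wow6432Node). Point its CLSID at an AppID...
            using (var hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
            {
                using (var key = hklm32.CreateSubKey(@"SOFTWARE\Classes\CLSID\" + Clsid))
                    key.SetValue("AppID", AppId);

                // ...and mark that AppID as hosted by the default system surrogate;
                // an empty DllSurrogate value means "use dllhost.exe".
                using (var key = hklm32.CreateSubKey(@"SOFTWARE\Classes\AppID\" + AppId))
                    key.SetValue("DllSurrogate", "");
            }
            // Depending on the Windows version you may also need the CLSID/AppID
            // visible in the 64-bit view so a 64-bit client can find them.
        }
    }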
It won't work out when the library doesn't support cross-apartment marshaling. If you have no idea whether it does, then try calling a library function from a worker thread. If that doesn't work, then don't bother trying. And you'll lose the "better" if this library is prone to crashing bugs; that invariably turns out poorly in an out-of-process scenario. Speed might be an issue too, out-of-process calls have a lot of overhead. But you're stuck with that either way. You'll get good answers instead of SO guesses by contacting the library owner for support.
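If you want a quick way to run that worker-thread test, something along these lines works; the ProgID "StatLib.Engine" and the Compute method are made up, so substitute one of the library's real classes. The object is created on an STA thread, and a plain .NET worker thread lives in the MTA, so the second call only succeeds if the object can be marshaled across apartments.

    using System;
    using System.Threading;

    class ApartmentMarshalingTest
    {
        [STAThread]
        static void Main()
        {
            // "StatLib.Engine" is a made-up ProgID; use one of the library's classes.
            Type t = Type.GetTypeFromProgID("StatLib.Engine");
            dynamic engine = Activator.CreateInstance(t);
            engine.Compute();   // same (STA) thread that created the object: should work

            var worker = new Thread(() =>
            {
                try
                {
                    // A plain .NET thread joins the MTA; this call only succeeds if the
                    // object can be marshaled across apartments. A failure typically
                    // shows up as InvalidCastException / "No such interface supported".
                    engine.Compute();
                    Console.WriteLine("Cross-apartment call worked.");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Cross-apartment call failed: " + ex.Message);
                }
            });
            worker.Start();
            worker.Join();
        }
    }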
I created a multithreaded service to perform image processing. Everything worked fine until one of our clients installed the product on a 16-processor server with lots of memory. Now the service throws lots of out-of-memory errors, which is understandable because a 32-bit process can only get about 1.5 GB of memory regardless of how much is installed.
What is the accepted solution for this situation? Should this service instead spawn off a separate worker process? Should I have one worker process per CPU talking via named pipes to the main service?
EDIT: we are running on a 64-bit server, but can't target x64 because of imaging library limitations.
Thank you
There are multiple solutions for this. These are some of the options:
Link your .exe with the /LARGEADDRESSAWARE option. That will give your app up to 3 GB of address space on 32-bit Windows (and 4 GB when running on 64-bit Windows), and no other changes are required.
Ask the software vendor who provided the 32-bit binaries for a 64-bit version.
Move your 32-bit dependencies out of process (e.g. communicating via COM or WCF), and change your EXE architecture to 64-bit.
Spawn new processes for each execution action, rather than threads.
Convert your code to use Address Windowing Extensions.
Options #1 and #2 are the easiest to implement, #5 is most difficult.
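To illustrate option #4, here is a bare-bones sketch (ImageWorker.exe and its command line are placeholders): each job runs in its own short-lived 32-bit worker process, so every job gets a fresh address space instead of sharing the service's limit.

    using System.Diagnostics;

    class WorkerLauncherSketch
    {
        // Runs one image-processing job in a separate 32-bit worker process.
        // "ImageWorker.exe" and its argument format are placeholders.
        public static int RunJob(string inputFile, string outputFile)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "ImageWorker.exe",
                Arguments = "\"" + inputFile + "\" \"" + outputFile + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (var worker = Process.Start(psi))
            {
                worker.WaitForExit();     // the calling thread waits, but the memory
                return worker.ExitCode;   // pressure lives in the worker's address space
            }
        }
    }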
EDIT
I noticed the C# tag in your question. For managed apps you can still set the Large Address Aware flag using the EditBin.exe tool.
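For example (assuming the service executable is named ImageService.exe, which is just a placeholder): editbin /LARGEADDRESSAWARE ImageService.exe, run from a Visual Studio developer command prompt, typically wired up as a post-build step because a rebuild produces a fresh binary without the flag.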
The frequency with which I am coming across the situation where I have to call native 32-bit code from a managed 64-bit process is increasing as 64-bit machines and applications become prevalent. I don't want to mark my application as 32-bit, and I cannot obtain 64-bit versions of the code that is being called.
The solution that I currently use is to create C++ COM shims that are loaded out of process to make the 32-bit calls from the 64-bit process.
This COM shim solution works well and the cross process calls are handled behind the scenes by COM, which minimises the overhead of this approach.
I would however like to keep all the new development that we undertake using C# and wondered if there are any frameworks that minimise the overhead of doing this. I have looked at IPCChannel but I feel that this approach is not as neat as the COM shim solution.
thanks,
Ed
I had the same problem and my solution was to use remoting. Basically the project consisted of:
Platform-independent CalculatorRemote.dll library with
CalculatorNative internal static class with x32 P/Invoke methods
RemoteCalculator class derived from MarshalByRefObject which used native methods from CalculatorNative;
Main platform-independent C# library (e.g. Calculator.dll), referencing CalculatorRemote.dll, with a Calculator class which was privately using a singleton of the RemoteCalculator class to invoke x32 functions where needed;
x32 console application which hosted RemoteCalculator from CalculatorRemote.dll to consume by Calculator.dll via IpcChannel.
So if the main application started in x64 mode, it spawned the RemoteCalculator host application and used a remoted RemoteCalculator instance. (When in x32 it just used a local instance of RemoteCalculator.) The tricky part was telling the calculator-host application to shut down.
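A condensed sketch of how those pieces can fit together (the native export, the DLL name, and the channel/URI names are illustrative, not the exact code from that project):

    using System;
    using System.Runtime.InteropServices;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Ipc;

    // CalculatorRemote.dll --------------------------------------------------
    internal static class CalculatorNative
    {
        // Hypothetical 32-bit native export; the real library has its own functions.
        [DllImport("statlib32.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern double Mean(double[] values, int count);
    }

    public class RemoteCalculator : MarshalByRefObject
    {
        public double Mean(double[] values)
        {
            return CalculatorNative.Mean(values, values.Length);
        }
    }

    // x32 console host (the process that actually loads the 32-bit DLL) -----
    internal static class CalculatorHost
    {
        private static void Main()
        {
            // Expose RemoteCalculator over an IPC (named pipe) channel.
            ChannelServices.RegisterChannel(new IpcChannel("CalculatorHost"), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(RemoteCalculator), "calc", WellKnownObjectMode.Singleton);
            Console.ReadLine();   // keep the host alive until told to shut down
        }
    }

    // Inside the x64 application (Calculator.dll) ----------------------------
    //   var calc = (RemoteCalculator)Activator.GetObject(
    //       typeof(RemoteCalculator), "ipc://CalculatorHost/calc");
    //   double m = calc.Mean(new[] { 1.0, 2.0, 3.0 });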
I think this is better than using COM because:
You don't have to register COM classes anywhere;
Interoperating with COM should be slower than .NET remoting;
Sometimes if something is going wrong on the COM-side you need to restart your application to recover from that; (possibly I'm just not very familiar with COM)
When running in x32 mode there won't be any performance penalty with remoting -- all methods will be invoked in the same AppDomain.
Pretty much the only answer is out of process communication. You could create a .NET project that is a 32-bit executable that makes all of the 32-bit calls needed and communicate with it via Windows Messages, WCF, Named Pipes, Memory Mapped Files (4.0), etc. I am pretty sure this is how Paint.NET does their WIA (Windows Imaging Acquisition) from a 64-bit process.
In the case of PDN, they simply pass the name of the file they expect as the output, but more complex communication isn't difficult. It could be a better way to go depending on what you're doing.
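For the named-pipe variant, a bare-bones sketch (the pipe name and the one-line request/reply protocol are made up for illustration):

    using System.IO;
    using System.IO.Pipes;

    // The 32-bit helper executable: waits for a file name, does the 32-bit work.
    class Pipe32BitWorker
    {
        static void Main()
        {
            using (var server = new NamedPipeServerStream("imaging-pipe"))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                using (var writer = new StreamWriter(server) { AutoFlush = true })
                {
                    string inputFile = reader.ReadLine();     // request from the 64-bit side
                    // ... call into the 32-bit imaging code here ...
                    writer.WriteLine(inputFile + ".out");     // reply with the output file name
                }
            }
        }
    }

    // In the 64-bit process:
    //   using (var client = new NamedPipeClientStream(".", "imaging-pipe"))
    //   {
    //       client.Connect();
    //       var writer = new StreamWriter(client) { AutoFlush = true };
    //       var reader = new StreamReader(client);
    //       writer.WriteLine(@"c:\images\scan1.tif");
    //       string outputFile = reader.ReadLine();
    //   }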
We use an open source library written in C# wrapping the Windows BITS COM component. However, the code is only safe to run in x86 mode. I would like to contribute to the library by making it safe for both x86 and x64; however, I have no deep knowledge in this field.
Could you please list here good/bad practices, typical issues, maybe principles also, etc, what to watch out for?
For example, I have seen in the code that IntPtr is cast to System.Int32, which does not fly well on x64. How would you address this and similar issues in a platform-agnostic manner?
I think you are talking about SharpBits.NET, a wrapper for the BITS component. Yes, there are several places where the author fumbled the interop. There is otherwise no reason why it couldn't work; BITS is available both as a 32-bit and a 64-bit COM server. One example of such a fumble is in this thread.
Getting the P/Invoke declarations wrong or doing manual marshaling improperly accounts for the vast majority of all 64-bit interop problems. I've left lots of hints on 64-bit coding problems in this thread.
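To make the IntPtr point concrete, here is a typical before/after sketch; the DLL and export names are made up, standing in for the real BITS interop declarations.

    using System;
    using System.Runtime.InteropServices;

    static class InteropFixSketch
    {
        // Broken on x64: a native handle/pointer truncated to 32 bits.
        [DllImport("somelib.dll")]            // made-up DLL and export, for illustration
        static extern int DoWorkBad(int handle, int buffer);

        // Safe on both x86 and x64: IntPtr matches the platform pointer size.
        [DllImport("somelib.dll")]
        static extern int DoWork(IntPtr handle, IntPtr buffer);

        static void Example(IntPtr handle)
        {
            long value = handle.ToInt64();    // never ToInt32(): it throws OverflowException
                                              // for pointer values that don't fit in 32 bits
            Console.WriteLine(value);
            DoWork(handle, IntPtr.Zero);
        }
    }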
Hm, you don't ;)
Seriously.
The problem is that 64-bit and 32-bit COM objects are not interoperable, so you definitely need either some lazy-load setup without strong binding (and then we are not talking early-bound COM but COM via IDispatch), or two different wrapper assemblies.
The whole x86/x64 thing is a SIGNIFICANT gap - in that it is very hard to cross. For example, you CAN NOT load a 32-bit DLL into a 64-bit process - wrapper or not, you just can not load it.
MS designed it in this way, on purpose. As such - there is no way around it.