Let's say I am executing an exe written in C# (just my choice of language). It has the following piece of code:
var comObj = new ComClass();
comObj.DoSomething();
Now, I would like to know in which process the DoSomething method is executed. Is it the same process where the current exe is running, or does a different process respond to the DoSomething call?
This is entirely transparent in COM; you cannot find out from your program either. It is determined by configuration information stored in the registry, which is the core reason why COM servers need to be registered. The different scenarios are:
On the same thread that creates the object. Used when the server is registered as an in-process server and the thread's apartment is compatible with the threading model of the COM object. The most common case, particularly so when you create objects on the UI thread of a program.
On another thread, if necessary created by COM, to give the object a thread-safe home. This commonly happens when your new statement runs on a thread that's in the MTA, the multi-threaded apartment; commonly a worker thread. The object you create is a proxy: its primary job is to serialize the arguments you pass to a method and deserialize them in the stub, which runs on the other thread. It ensures that all calls on the object are thread-safe. Otherwise the same kind of mechanism as used in .NET Remoting. The underlying layer that takes care of the marshaling is LRPC, an obscure Windows component that was optimized to make inter-thread and inter-process calls as fast as possible. A sketch of this scenario follows the list.
Inside a surrogate process for an in-process component. Not very common but surrogates can be very handy to bridge a process bitness problem for example. Allowing you to use a 32-bit server in a 64-bit process. Requires both 32-bit and 64-bit proxy/stubs.
Inside another process that was registered as an out-of-process server. The canonical examples are Microsoft Office programs like Word and Excel, very common in .NET programming. This is where COM starts to get brittle: unexpected program aborts tend to cause a mess when the server keeps running. A very common question at SO.
Inside another process on another machine. Called DCOM or Distributed COM. An extra configuration step is necessary to ensure the target machine and proper account privileges can be selected. Pretty notorious for giving humans a splitting headache, it doesn't get used much anymore these days. DCOM's biggest claim to fame was enabling Java to eat Microsoft's lunch in the middle-ware wars of the late 90s.
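To see the first two scenarios from C#, here is a minimal sketch, assuming the ComClass from the question is registered as an in-process server with ThreadingModel = Apartment:

using System;
using System.Threading;

class ApartmentDemo
{
    static void Main()
    {
        // STA thread: an apartment-threaded in-proc object is created
        // directly on this thread; no proxy is involved.
        var sta = new Thread(() => { var c = new ComClass(); c.DoSomething(); });
        sta.SetApartmentState(ApartmentState.STA);
        sta.Start();
        sta.Join();

        // MTA thread: for the same object, COM creates it on a COM-owned
        // STA thread and hands this thread a proxy; every call is marshaled.
        var mta = new Thread(() => { var c = new ComClass(); c.DoSomething(); });
        mta.SetApartmentState(ApartmentState.MTA);
        mta.Start();
        mta.Join();
    }
}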
If you have no idea which of these scenarios applies in your case, a utility like SysInternals' Process Monitor tends to provide insight. You'll see your program reading the registry, telling you where to look, and then loading a DLL or starting an EXE.
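If you'd rather peek at that registry information from code, something along these lines works. A rough sketch only: "My.COMClass" is a hypothetical ProgID, 32-bit and 64-bit processes see different registry views, and COM's real activation logic considers more than these keys:

using System;
using Microsoft.Win32;

class ComServerKind
{
    static void Main()
    {
        // ProgID -> CLSID (the default value of the ProgID's CLSID subkey)
        var clsid = Registry.ClassesRoot
            .OpenSubKey(@"My.COMClass\CLSID")?.GetValue(null) as string;
        if (clsid == null) { Console.WriteLine("ProgID not registered"); return; }

        using (var key = Registry.ClassesRoot.OpenSubKey(@"CLSID\" + clsid))
        {
            if (key?.OpenSubKey("InprocServer32") != null)
                Console.WriteLine("In-process server (DLL)");
            if (key?.OpenSubKey("LocalServer32") != null)
                Console.WriteLine("Out-of-process server (EXE)");
            if (key?.GetValue("AppID") != null)
                Console.WriteLine("Has an AppID: may use a surrogate or DCOM");
        }
    }
}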
From COM Clients and Servers
There are two main types of servers, in-process and out-of-process. In-process servers are implemented in a dynamic-link library (DLL), and out-of-process servers are implemented in an executable file (EXE). Out-of-process servers can reside either on the local computer or on a remote computer.
I do think that the names are quite explicit :-)
Note that even for out-of-process COM servers, there will be some code in-process that does the marshaling between the COM client and the COM server.
EDIT: Question updated with information gleaned from the comments below
Client: 32-bit COM client
Server: 64-bit COM in-process server configured to run out-of-process. The server makes calls to native C++ code
I am trying to run an out-of-process COM object with the help of dllhost. The 32-bit test client runs fine with each individual test case. However, when I try to run the cases consecutively using a batch file, it crashes with InteropServices.COMException: RPC failed (HRESULT 0x800706BE). Each test case is a program with the following structure:
var ComType = Type.GetTypeFromProgID("My.COMClass");   // look up the registered class
var ComObject = Activator.CreateInstance(ComType);     // activate it (this spins up dllhost)
ComType.InvokeMember("SomeFunction", BindingFlags.InvokeMethod, null, ComObject, null);
Marshal.ReleaseComObject(ComObject);                   // drop the RCW's reference to the proxy
The crash happens when I run the tests in the following fashion:
//test1.bat
TestA.exe
TestB.exe //crash
//test2.bat
TestB.exe
TestA.exe //crash
There is no problem when I run each test individually. I also noticed that if I wait for the dllhost process to completely finish (and disappear) before calling the next test, the whole batch file will run without problem.
//test3.bat
TestA.exe
pause //wait a few seconds then press enter
TestB.exe //ok
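A scripted stand-in for that pause would be to wait for the surrogate to exit before launching the next test. A rough sketch; note it waits on every dllhost.exe instance, not just the one hosting this particular server:

using System.Diagnostics;

class WaitForSurrogate
{
    static void Main()
    {
        // Block until every dllhost.exe surrogate has exited, so the next
        // test gets a fresh surrogate instead of the dying one.
        foreach (var p in Process.GetProcessesByName("dllhost"))
            p.WaitForExit();
    }
}

//test4.bat
TestA.exe
WaitForSurrogate.exe
TestB.exe //ok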
Since each test runs perfectly individually, I assumed the code was fine and it's just a problem with how I executed the tests. However, I couldn't find anything about this problem, so I'd appreciate any insight.
The 64-bit COM server is actually just a COM wrapper for a C++ DLL; we wrote the wrapper based on this. It's also worth mentioning that my original C++ DLL was working normally before attempting the COM wrapper.
I have found the cause of my problem. It's because my original C++ DLL creates threads to handle concurrency on its own, and according to Inside COM+: Base Services, COM object code isn't supposed to do that:
Since the system automatically spawns threads as necessary to enable concurrent access to the component, you should avoid calling the Win32 CreateThread function. In fact, calling CreateThread to enable concurrency in a component is strongly discouraged because in most cases this will interfere with the system's thread pooling algorithm.
I'm still not entirely sure how the thread handling can lead to this problem, so it would be great if someone could share a more detailed explanation. For now, it's fine as long as I remove the multi-threaded code in the C++ DLL.
I've written a C# program that talks to a COM server to conduct simulations. It works without any trouble, but the simulations being carried out by the COM server are fairly processor-intensive and only run on a single core.
As such, I've used Parallel.For to distribute the workload amongst multiple threads. It appears, however, that all the simulation results generated by the COM server are shared amongst all instances of its classes, so when I'm running the parallel task with only 1 thread, everything works as expected, but when I'm running the task with multiple threads, the results are completely garbled (as multiple threads are effectively causing the simulation engine to replace its results with new ones as they are being read).
I was wondering if there was a way to connect to the COM server multiple times in order to stop the results-sharing of class instances?
Edit
My process for connecting to the COM server was to:
Add a reference using Project->Add References->COM (VS2010)
Use the following code to instantiate the simulator object:
dss = new OpenDSSengine.DSS();
dss.Start(0);
The above code is called in the local thread data initialiser (localInit) parameter of Parallel.For, and thus a new dss object is created for each thread, but the results obtained seem to be common across all threads.
The COM server is a dll.
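For reference, the localInit pattern described above looks roughly like this; simulationCount and RunSimulation are hypothetical placeholders, and OpenDSSengine is the COM reference added in the steps above:

using System.Runtime.InteropServices;
using System.Threading.Tasks;

class ParallelSim
{
    static void RunAll(int simulationCount)
    {
        Parallel.For(0, simulationCount,
            // localInit: runs once per worker thread, giving it its own engine
            () =>
            {
                var dss = new OpenDSSengine.DSS();
                dss.Start(0);
                return dss;
            },
            // body: run one simulation on this thread's private engine
            (i, loop, dss) =>
            {
                RunSimulation(dss, i);   // hypothetical per-simulation work
                return dss;
            },
            // localFinally: release the engine when the thread finishes
            dss => Marshal.ReleaseComObject(dss));
    }

    static void RunSimulation(OpenDSSengine.DSS dss, int i) { /* hypothetical */ }
}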
As you specify that your COM server is actually an in-proc server (a .dll instead of an .exe), it means that every time you execute new DSS() you actually create a new instance (unless it is created with a singleton class factory, which is rare but possible).
The problem, according to your description, seems to be with the fact that the DSS implementation uses some static/global state which results in garbled data when you parallelize the execution.
In that case you can run each instance of the server in a separate process by using a DllSurrogate. If the default surrogate (dllhost.exe) doesn't suffice, it is possible to write a custom one. Be aware that moving the server into another process will introduce marshaling overhead for each method call made against the server.
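For illustration, enabling the default surrogate boils down to two registry entries, sketched here in code. The GUID is a placeholder for your server's real CLSID, and writing to HKEY_CLASSES_ROOT normally requires administrator rights:

using Microsoft.Win32;

class EnableSurrogate
{
    // Placeholder CLSID; substitute your server's real CLSID.
    const string Clsid = "{00000000-0000-0000-0000-000000000000}";

    static void Main()
    {
        // Point the class at an AppID (reusing the CLSID as the AppID is common).
        using (var key = Registry.ClassesRoot.CreateSubKey(@"CLSID\" + Clsid))
            key.SetValue("AppID", Clsid);

        // An empty DllSurrogate value selects the default surrogate, dllhost.exe.
        using (var key = Registry.ClassesRoot.CreateSubKey(@"AppID\" + Clsid))
            key.SetValue("DllSurrogate", "");
    }
}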
Please also note that if you are using an STA COM server, your parallelization will have no effect, as all the calls to the server are serialized by the COM infrastructure.
All that being said, before going there make sure that the problem is not on the caller side, i.e. with your parallelization and not the server itself.
First try creating multiple instances of the COM object (just call new OpenDSSengine.DSS() multiple times, storing the results in separate variables, or in an array). If the COM server was implemented well, those multiple instances will co-exist in your process without interfering with each other, and your multi-threaded client code can use them simultaneously.
If you still find that those instances are interfering with each other, that means the COM server is using some state that is global to the process. The only way to get around that would be to invoke the multiple COM objects via multiple surrogate processes, as others have suggested.
I have to create a custom download manager that will replace a standard download manager in Internet Explorer. After googling I've learned that I have to create a COM component that implements the IDownloadManager interface.
As far as I understand, I have to create a DLL, generate a GUID for it, register it using the regasm.exe utility, and then add a specific entry to the Windows registry for IE.
I have a few questions:
I want my program to be an exe, and I want to be able to run it manually and add URLs to it, as well as have IE run it after clicking on a downloadable link.
Although I would prefer to have a single executable, I think that to achieve this I have to create a DLL and an exe; from the DLL I should check whether the exe is running (by window ID), run it if it isn't, and communicate with it somehow. Is this the correct approach?
I want to share my program with other users, and I don't want them to register the COM component manually. Is it possible to do it from code? Or perhaps I should create an installer (which I would like to avoid)?
I'll start with a WARNING: do not create a .NET component that will be loaded in IE. Ask yourself the question "What would happen if another app does the same, and it uses a different version of the CLR?" IE does not guarantee any order of loading the different COM components it needs, so there's no guarantee that your version of the CLR will be loaded in the process by the time IE calls you.
Now, on to your problem. There are several issues with your scenario:
.NET does not support creating out-of-proc COM components natively. Yes, it is possible to create one by doing a bunch of hacks and manual registration; however, it is not a simple task and requires deep knowledge of how COM works;
with the above in mind, your option is really to create a .NET DLL and use the ComVisible attribute to expose the classes you need to COM (a minimal skeleton is sketched after this answer). As you mentioned, you will need to register it using RegAsm.exe for IE to be able to use it;
since you want the main functionality of your download manager to be in a standalone executable, you will have to use a .NET-supported cross-process communication mechanism. .NET Remoting is likely the easiest way to implement it, and should for the most part meet your requirements. The alternative is to implement the download functionality in-proc. However, besides the consideration that you could now easily hose the IE process if you are not careful to listen to its quit notification (which requires a lot more work by itself), there's also the whole enchilada with IE7+ protected mode, which severely limits what your in-proc code can do (limited file access, registry access, Windows APIs and other limitations);
there are certain complications arising from the IE8 and IE9 process model. Besides the top frame process, IE8/9 create a pool of processes and load-balance the tabs onto these. I don't know which process will try to create your COM component, or whether it's going to be one per tab, per process, or one for the whole IE session (which spans multiple processes), so you have to be prepared for multiple instances running concurrently in multiple processes. If this is the case, you will have to figure out how to ensure that the communication between the in-proc COM component and the executable is not serialized one instance at a time, or you might affect the browsing experience for the user. (A simple scenario would be a page with multiple download links and the user right-clicking on each link and selecting Open in new tab, thus launching multiple downloads in several tabs at once);
even if there is one instance per IE session, elevated IE instances run in a separate session from the regular user IE instances for security reasons. There's the interesting complication that your .Net Remoting call from the in-proc COM component in the elevated IE session will result in a second copy of your executable being launched also elevated. Thus, your download manager will have to be prepared that there might be two processes accessing the same download queue;
starting with IE7, IE protected mode (the default) will intercept any calls that result in starting a new process and show a dialog to the user. The only way to avoid this would be to register a silent IE elevation policy for your process. The elevation policies are registered in HKEY_LOCAL_MACHINE, which means that you will need an installer, or at least a simple script for the users to run as administrator;
even if you decide against the elevation policy and to live with the bad experience of this dialog, to register your download manager with IE, you still will have to write to the HKEY_LOCAL_MACHINE registry hive, otherwise IE will not know of it and won't use it. In other words, you still need some kind of installer or a deployment script;
IE is fairly aggressive in measuring the performance of the code that runs on the UI thread and in terminating background threads when exiting the process. So whatever functionality you have in the in-proc component, you will have to balance between being as fast as possible on the UI thread (which means less work or you'll impact the user experience) and doing work on the background threads (which means be prepared you might be killed without notification at any moment);
I think this list covers the main issues you will have to solve. The biggest problem you will encounter is that a lot of the specifics around IE process model are not well documented on MSDN, and there are almost no examples of implementing this scenario in managed code (and of those that exist, most are old and are not updated for IE8/IE9, and some even won't work in IE7).
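As promised above, here is a minimal ComVisible skeleton. Everything in it is hedged: the GUID is a placeholder, and the real IDownloadManager interface must be declared via ComImport with the IID and method signatures taken from the Windows SDK headers, not from this sketch:

using System;
using System.Runtime.InteropServices;

// Hedged skeleton only. The GUID below is a placeholder CLSID, and the
// real IDownloadManager COM interface has to be translated from the
// Windows SDK headers before this class can actually implement it.
[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000001")]   // placeholder CLSID
[ClassInterface(ClassInterfaceType.None)]
public class MyDownloadManager /* : IDownloadManager */
{
    // Download(...) would hand the URL off to the standalone exe here,
    // e.g. over a .NET Remoting channel, and return quickly so the
    // IE UI thread is not blocked.
}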
I want to have an application that works as a host to many other small applications. Each of those applications should work as a kind of plugin to this main application. I call them plugins not in the sense that they add something to the main application, but because they can only work with this host application, as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application has a set of services that my plugins will want to use, and from what I understand, making data flow in and out of different app domains is not that great of a thing.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they often need to use the host application's services), but on the other hand I'd like my main application not to suffer if any of them crashes.
What is the best (.NET) approach to this kind of situation? Make them all run in the same AppDomain but each one in a different thread? Use different AppDomains, one for each "plugin"? How would I make them communicate with the host application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
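For reference, that load/unload cycle looks roughly like this (assembly and type names are hypothetical; the plugin type must derive from MarshalByRefObject so only a proxy crosses into the host's domain):

using System;

public interface IPlugin { void DoWork(); }

class Host
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("PluginDomain");
        var plugin = (IPlugin)domain.CreateInstanceAndUnwrap(
            "MyPluginAssembly",        // hypothetical assembly name
            "MyPlugins.SomePlugin");   // hypothetical type name

        plugin.DoWork();               // marshaled across the domain boundary

        AppDomain.Unload(domain);      // unloads the plugin's code and metadata
    }
}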
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.Addin namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its app domain, in a separate domain for all of the addins, or each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
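As a rough sketch of that discovery/activation step (the pipeline path and the IController host view are hypothetical; AddInStore and AddInToken are the real MAF types):

using System.AddIn.Hosting;

class AddinHost
{
    static void Main()
    {
        const string pipelineRoot = @"C:\MyApp\Pipeline";   // hypothetical path

        // Rebuild the pipeline cache and find addins matching the host view.
        // IController is the host view type from the pipeline (hypothetical).
        string[] warnings = AddInStore.Update(pipelineRoot);
        var tokens = AddInStore.FindAddIns(typeof(IController), pipelineRoot);

        // Activating with a security level gives the addin its own AppDomain.
        IController addin = tokens[0].Activate<IController>(AddInSecurityLevel.Internet);
    }
}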
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they return to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this is where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
Regarding unloading: yes, you can only unload the code and metadata for an assembly if you place it in a separate AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead of keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop using them, so the fact that the code is still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way)
My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not use AppDomains for the sole purpose of reducing the possible memory footprint by allowing unloading.
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through reflection, but this would only enable communication and would not provide a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" that publishes your plugins in a ServiceHost on a different thread. As this needs to be responsive and require no configuration, the host application could communicate with the plugins through a named-pipe channel (NetNamedPipeBinding in WCF), which only talks over local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
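A rough sketch of that named-pipe idea (IPluginBroker and PluginBrokerService are hypothetical; ServiceHost, NetNamedPipeBinding and ChannelFactory are the standard WCF types):

using System;
using System.ServiceModel;

class Demo
{
    static void Main()
    {
        // Host side: publish the broker over a local named pipe.
        var host = new ServiceHost(typeof(PluginBrokerService),
                                   new Uri("net.pipe://localhost/pluginhost"));
        host.AddServiceEndpoint(typeof(IPluginBroker), new NetNamedPipeBinding(), "broker");
        host.Open();

        // Plugin side: connect through the same pipe.
        var factory = new ChannelFactory<IPluginBroker>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/pluginhost/broker"));
        IPluginBroker broker = factory.CreateChannel();
        Console.WriteLine(broker.GetServiceData("demo"));
    }
}

[ServiceContract]
public interface IPluginBroker
{
    [OperationContract]
    string GetServiceData(string pluginName);
}

// Hypothetical implementation of the host-side service.
public class PluginBrokerService : IPluginBroker
{
    public string GetServiceData(string pluginName) => "data for " + pluginName;
}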
Imagine an untrusted application (plugin) that reads from the standard input and writes to the standard output.
How can I get the output returned for the specified input by this application while preventing any side effects?
For example, if the application deletes a file on disk, this should be detected and the attempt canceled.
It's some kind of wrapper application. Is it possible to build it?
A less complicated task is also interesting: build this wrapper using .NET (both host and client written in a .NET language).
The safest way would be to load that plugin into a separate AppDomain, which you configure with security evidence matching the requirements you have.
When you create an AppDomain, you can specify exactly the kinds of things code can do in this sandbox. Code that runs there is restricted to the limits you set. But this process can be confusing the first time you do it and may still leave you open to vulnerabilities.
Using AppDomains to isolate assemblies is an interesting process. You'd think you load your plugins into the other AppDomain and then use them via proxies in your AppDomain, but it's the other way around: they need to use your proxies in their AppDomain. If you fail to understand and do this right, you'll end up loading your plugin code within your main AppDomain and executing it there instead of in the restricted domain. There are lots of gotchas you'll get bitten by (subscribing to events has some interesting side effects) if you don't do things correctly.
I'd suggest prototyping, brush up on the AppDomain chapter in CLR Via C#, and read as much as you can on the subject.
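A minimal sketch of such a restricted domain on .NET Framework (the path and type names are hypothetical; the grant set below allows pure execution only, with no file, registry, or network access):

using System;
using System.Security;
using System.Security.Permissions;

class Sandbox
{
    static void Main()
    {
        var setup = new AppDomainSetup { ApplicationBase = @"C:\Plugins" }; // hypothetical path
        var grant = new PermissionSet(PermissionState.None);
        grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, grant);

        // The plugin type must derive from MarshalByRefObject so its code
        // executes inside the sandbox rather than in this domain.
        var plugin = sandbox.CreateInstanceAndUnwrap("UntrustedPlugin", "Plugin.Main");
    }
}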
Here's a test app I made to investigate cross-appdomain events.
http://cid-f8be9de57b85cc35.skydrive.live.com/self.aspx/Public/appdomainevents.rar