Starting a .NET process within an AppDomain - C#

The title of my question might already give away the fact that I'm not sure about what I want, as it might not make sense.
For a project I want to be able to run executables within my application, while redirecting their standard in and out so that my application can communicate with them through those streams.
At the same time, I do not want to allow these executables to perform certain actions like use the network, or read/write outside of their own working directory (basically I only want to allow them to write and read from the standard in and out).
I read in different places on the internet that these permissions can be set with PermissionStates when creating an AppDomain in which you can then execute the executables. However, I did not find a way to then communicate with the executables through their standard in and out, which is essential. I can, however, do this when starting a new Process (Process.Start()), though then I cannot set boundaries on what the executable is allowed to do.
My intuition tells me I should somehow execute the Process inside the AppDomain, so that the process kind of 'runs' in the domain, though I cannot see a way to directly do that.
A colleague of mine accomplished this by creating a proxy-application, which basically is another executable in which the AppDomain is created, in which the actual executable is executed. The proxy-application is then started by a Process in the main application. I think this is a cool idea, though I feel like I shouldn't need this step.
I could add some code showing what I've done so far creating a Process and an AppDomain, though the question is pretty long already. I'll add it if you want me to.

The "proxy" application sounds like a very reasonable approach (given that you only ever want to run .NET assemblies).
You get the isolation of different processes, which allows you to communicate via stdin/stdout and gives the additional robustness that the untrusted executable cannot crash your main application (which it could if it was running in an AppDomain inside your main application's process).
The proxy application would then setup a restricted AppDomain and execute the sandboxed code, similar to the approach described here:
How to: Run Partially Trusted Code in a Sandbox
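A minimal sketch of what the proxy could do, following that article (the paths, names and the exact permission set are illustrative, and the untrusted executable is assumed to be a managed assembly):

using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class ProxyHost
{
    static void Main(string[] args)
    {
        // args[0] = path of the untrusted executable (illustrative).
        string untrustedDir = Path.GetDirectoryName(Path.GetFullPath(args[0]));

        // Grant execution plus read/write only inside its own working directory.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        permissions.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.Read | FileIOPermissionAccess.Write, untrustedDir));

        var setup = new AppDomainSetup { ApplicationBase = untrustedDir };

        // No network or UI permissions are granted, so such calls throw SecurityException.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, permissions);

        // Runs the untrusted Main inside the restricted domain; its Console.In/Out
        // are the proxy's own streams, which the parent process redirected.
        sandbox.ExecuteAssembly(args[0]);
    }
}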
In addition, you can make use of operating-system-level mechanisms to reduce the attack surface of a process. This can be achieved, e.g., by starting the proxy process with lowest integrity, which removes write access to most resources (e.g. files can only be written under AppData\LocalLow). See here for an example.
Of course, you need to consider whether this level of sandboxing is sufficient for you. Sandboxing, in general, is hard, and the isolation will only ever be partial.
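On the main application's side, the proxy (or any child executable) would then be started with redirected streams roughly like this; a sketch, with "Proxy.exe", "Untrusted.exe" and the messages as placeholders:

using System.Diagnostics;

class Launcher
{
    static void Main()
    {
        var psi = new ProcessStartInfo("Proxy.exe", "Untrusted.exe")
        {
            UseShellExecute = false,          // required for stream redirection
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (Process proxy = Process.Start(psi))
        {
            proxy.StandardInput.WriteLine("hello plugin");   // write to the child's stdin
            string reply = proxy.StandardOutput.ReadLine();  // read its answer from stdout
            proxy.WaitForExit();
        }
    }
}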

Related

Block file system and internet access in a C# application [duplicate]

Over the past months, I've developed a personal tool that I'm using to compile C# 3.5 XAML projects online. Basically, I'm compiling with the CodeDom compiler. I'm thinking about making it public, but the problem is that it is very, very easy to do anything on the server with this tool.
The reason I want to protect my server is because there's a 'Run' button to test and debug the app (in screenshot mode).
Is it possible to run an app in a sandbox - in other words, limiting memory access, hard drive access and BIOS access - without having to run it in a VM? Or should I just analyze every piece of code, or 'disable' the Run mode?
Spin up an AppDomain, load assemblies in it, look for an interface you control, Activate up the implementing type, call your method. Just don't let any instances cross that AppDomain barrier (including exceptions!) that you don't 100% control.
Controlling the security policies for your external-code AppDomain is a bit much for a single answer, but you can check this link on MSDN or just search for "code access security msdn" to get details about how to secure this domain.
Edit: There are exceptions you cannot stop, so it is important to watch for them and record in some manner the assemblies that caused the exception so you will not load them again.
Also, it is always better to inject into this second AppDomain a type that you will then use to do all loading and execution. That way you ensure that no type you don't fully control (one that could bring down your entire application) will cross any AppDomain boundary. I've found it useful to define a type that extends MarshalByRefObject whose methods you call to execute insecure code in the second AppDomain. It should never return an unsealed type that isn't marked Serializable across the boundary, either as a method parameter or as a return type. As long as you can accomplish this you are 90% of the way there.
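A minimal sketch of that MarshalByRefObject pattern, with made-up interface and type names ("sandbox" is assumed to be the restricted AppDomain you created):

using System;
using System.Reflection;

// Defined in an assembly you control and loaded into both domains.
public interface IRunner
{
    string Execute(string input);   // stick to sealed / Serializable parameter and return types
}

// Instantiated inside the restricted AppDomain; only a proxy crosses the boundary.
public class UntrustedLoader : MarshalByRefObject
{
    public string Run(string assemblyPath, string input)
    {
        // The untrusted assembly is loaded in *this* (restricted) domain, not the host's.
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        foreach (Type type in asm.GetTypes())
        {
            if (typeof(IRunner).IsAssignableFrom(type) && !type.IsAbstract)
            {
                var runner = (IRunner)Activator.CreateInstance(type);
                return runner.Execute(input);
            }
        }
        throw new InvalidOperationException("No IRunner implementation found.");
    }
}

// In the host:
// var loader = (UntrustedLoader)sandbox.CreateInstanceAndUnwrap(
//     typeof(UntrustedLoader).Assembly.FullName, typeof(UntrustedLoader).FullName);
// string result = loader.Run(pluginPath, "some input");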

Run .exe file from code behind asynchronously

I have a console application that writes information retrieved from a database to a txt file. Until now I have manually executed the executable generated by the console application.
Now I need to automate the invocation of the .exe from my web application, so that each time a specific condition occurs in my code-behind I can run the .exe with a "fire and forget" logic.
My goals are:
1) Users must not be affected in any way by the console application's execution (the SQL queries and txt file generation might take around 3 to 5 minutes), hence the "fire and forget" logic delegated to a separate process.
2) Since the executable will still be run manually in some cases, I would prefer having all the logic in one place, in order to avoid the risk of diverging behaviour.
Can I safely use System.Diagnostics.Process to achieve this?
System.Diagnostics.Process cmd = new System.Diagnostics.Process();
cmd.StartInfo.FileName = "Logger.exe";
cmd.Start();
Does the process automatically end, or do I have to set a timeout and explicitly close it? Is it "safe", in a web application environment with different users accessing the web application, to let them call the executable without the risk of concurrent accesses?
Thanks.
EDIT:
Changed to use the built in class for more clarity, thanks for the hint.
As far as the mechanics, I assume CommandLineProcess wraps Process? If so, I don't see anything necessarily wrong with it, at first glance. I just have some issue with running this as an executable from a web application, as you are more likely to reduce security to get it working than rearchitect (if you follow the normal path I see in development).
If you encapsulate the actual business code in a class library, you can run the code in the web application. The main rule is that the folder it saves to should be under the webroot (physically or logically) so you don't have to reduce security. But if the logic is encapsulated, you can run the "file creator" in the web process without spinning up a Process.
Your other option is wrap the process in a service (I like a non-HTTP WCF service, but you can go windows service, if you want). I would only go this direction if it makes sense to follow a SOA path with a service endpoint. As this is likely to be isolated to a single application, in process makes more sense (unless you are saving to a directory outside of webroot).
Hope this makes sense.
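To illustrate the class-library route, a rough fire-and-forget sketch from the code-behind; LoggerCore and GenerateTextFile are made-up names for whatever the shared library exposes:

using System;
using System.Threading;

public partial class ReportPage : System.Web.UI.Page
{
    void OnConditionMet()
    {
        // Fire and forget: the request returns immediately, the work runs on a pool thread.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                new LoggerCore().GenerateTextFile();   // same code path the console app calls
            }
            catch (Exception)
            {
                // log the failure; never let a background fault reach the request pipeline
            }
        });
    }
}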
Yes, it will die on its own - provided that the .exe file terminates on its own. It will run with the same credentials as the web server.
Keep in mind this is considered unsafe, since you are executing code based on whatever your webapp is doing. However, the problem is with .exe files being executed this way in general and not with the actual users accessing the app.
Similar question here How do I run a command line process from a web application?

Creating COM Component in .NET to override IE functionality (custom download manager)

I have to create a custom download manager that will replace a standard download manager in Internet Explorer. After googling I've learned that I have to create a COM component that implements the IDownloadManager interface.
As far as I understand I have to create a DLL, generate a GUID for it and register it using the regasm.exe utility, and then add a specific entry in the Windows registry for IE.
I have a few questions:
I want my program to be an exe and I want to be able to run it manually and pass a URL to it, as well as have it run by IE after clicking on a downloadable link.
Although I would prefer to have a single executable, I think to achieve this I have to create a DLL and an EXE; from the DLL I should check whether the EXE is running (by window ID), run it if it isn't, and communicate with it somehow. Is this the correct approach?
I want to share my program with other users, and I don't want them to register the COM component manually. Is it possible to do it from code? Or perhaps I should create an installer (which I would like to avoid)?
I'll start with a WARNING: Do not create a .NET component that will be loaded in IE. Ask yourself the question "What would happen if another app does the same, and it uses a different version of the CLR?". IE does not guarantee any order of loading the different COM components it needs, so there's no guarantee that your version of the CLR will be loaded in the process by the time IE calls you.
Now onto your problem. There are several issues with your scenario:
.NET does not support creating out-of-proc COM components natively. Yes, it is possible to create one by doing a bunch of hacks and manual registration; however, it is not a simple task and requires deep knowledge of how COM works;
with the above in mind, your option is really to create a .NET DLL and use the ComVisible attribute to expose the classes you need to COM (a small sketch follows at the end of this answer). As you mentioned, you will need to register it using RegAsm.exe for IE to be able to use it;
since you want the main functionality of your download manager to be in a standalone executable, you will have to use a .Net supported cross-process communication mechanism. .Net Remoting is likely the easiest way to implement it, and should for the most part meet your requirements. The alternative is to implement the download functionality in-proc. However, beside the consideration that you now could easily hose the IE process, if you are not careful to listen to its quit notification (which require a lot more work by itself), there's also the whole enchilada with the IE7+ protected mode, which severely limits what your in-proc code can do (limited file access, registry access, Windows APIs and other limitations);
there are certain complications arising from the IE8 and IE9 process model. Besides the top frame process, IE8/9 create a pool of processes and load-balance the tabs into these. I don't know which process will try to create your COM component, and whether it's going to be one per tab, one per process, or one for the whole IE session (which spans multiple processes), so you have to be prepared that you might have multiple instances in multiple processes running concurrently. If this is the case, you will have to figure out how to ensure that the communication between the in-proc COM component and the executable is not serialized one instance at a time, or you might affect the browsing experience for the user. (A simple scenario would be a page with multiple download links and the user right-clicking on each link and selecting Open in new tab, thus launching multiple downloads in several tabs at once);
even if there is one instance per IE session, elevated IE instances run in a separate session from the regular user IE instances for security reasons. There's the interesting complication that your .Net Remoting call from the in-proc COM component in the elevated IE session will result in a second copy of your executable being launched also elevated. Thus, your download manager will have to be prepared that there might be two processes accessing the same download queue;
starting with IE7, IE protected mode (the default) will intercept any calls that result in starting a new process and show a dialog to the user. The only way to avoid this would be to register a silent IE elevation policy for your process. The elevation policies are registered in HKEY_LOCAL_MACHINE, which means that you will need an installer, or at least a simple script for the users to run as administrator;
even if you decide against the elevation policy and to live with the bad experience of this dialog, to register your download manager with IE, you still will have to write to the HKEY_LOCAL_MACHINE registry hive, otherwise IE will not know of it and won't use it. In other words, you still need some kind of installer or a deployment script;
IE is fairly aggressive in measuring the performance of the code that runs on the UI thread and in terminating background threads when exiting the process. So whatever functionality you have in the in-proc component, you will have to balance between being as fast as possible on the UI thread (which means less work or you'll impact the user experience) and doing work on the background threads (which means be prepared you might be killed without notification at any moment);
I think this list covers the main issues you will have to solve. The biggest problem you will encounter is that a lot of the specifics around IE process model are not well documented on MSDN, and there are almost no examples of implementing this scenario in managed code (and of those that exist, most are old and are not updated for IE8/IE9, and some even won't work in IE7).
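To illustrate the ComVisible point above: a bare-bones sketch where the GUID, the names and the download handling are placeholders, and the actual IDownloadManager COM interop interface declaration is omitted:

using System;
using System.Runtime.InteropServices;

// Exposed to COM so IE can create it once the assembly is registered with RegAsm.exe
// and the class is referenced from the IE download-manager registry entry.
[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000001")]   // placeholder; generate your own GUID
[ClassInterface(ClassInterfaceType.None)]
public class MyDownloadManager /* : IDownloadManager (interop definition not shown) */
{
    // Called when IE hands over a download; forward the URL to the standalone EXE
    // over whatever cross-process channel you pick (.NET Remoting, named pipes, ...).
    public void HandleDownload(string url)
    {
        // illustrative only
    }
}

// Registration (from an elevated prompt):
//   RegAsm.exe MyDownloadManager.dll /codebase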

Question about how to implement a c# host application with a plugin-like architecture

I want to have an application that works as a Host to many other small applications. Each one of those applications should work as kind of plugin to this main application. I call them plugins not in the sense they add something to the main application, but because they can only work with this Host application as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application should have a set of services that my plugins will want to use and from what is my understanding making data flow in and out from different app domains is not that great of a thing.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they frequently need to use the host application's services), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.AddIn namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its app domain, in a separate domain for all of the addins, or each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they return to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
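For flavour, a minimal host-side MAF activation sketch; IMyAddin and the pipeline path are placeholders for your own host view and addin folder:

using System;
using System.AddIn.Hosting;
using System.Collections.ObjectModel;

public interface IMyAddin { void Start(); }   // placeholder host view of the addin

class AddinHost
{
    static void Main()
    {
        string pipelineRoot = @"C:\MyApp\Pipeline";   // placeholder pipeline folder

        // Rebuild the addin cache, then find addins that satisfy the host view.
        AddInStore.Update(pipelineRoot);
        Collection<AddInToken> tokens = AddInStore.FindAddIns(typeof(IMyAddin), pipelineRoot);

        foreach (AddInToken token in tokens)
        {
            // Each addin is activated in its own AppDomain with a restricted permission set.
            IMyAddin addin = token.Activate<IMyAddin>(AddInSecurityLevel.Internet);
            addin.Start();
        }
    }
}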
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regards to Unloading, yes, you can only unload the code and metadata for an assembly if you place it in an AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead associated with keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop 'using' it, so the fact that it's still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way)
My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not prefer AppDomains for the sole purpose of reducing possible memory footprint by allowing Unload.
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through Reflection, but this would only allow communication and would not bring a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster", publishing your plugins in a ServiceHost on a different thread. As this needs to be really responsive and require no configuration, the host application could communicate with the plugins through a named pipes channel (NetNamedPipeBinding in WCF), which only talks over local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
Regards.
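A rough sketch of that named-pipe idea; the contract, names and addresses are made up:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IPluginHostService
{
    [OperationContract]
    string Query(string request);   // whatever host service the plugins need
}

public class PluginHostService : IPluginHostService
{
    public string Query(string request) { return "host answer for " + request; }
}

class HostBootstrap
{
    static void RunHost()
    {
        // net.pipe is local-machine only: no network configuration or firewall rules needed.
        var host = new ServiceHost(typeof(PluginHostService),
                                   new Uri("net.pipe://localhost/myhost"));
        host.AddServiceEndpoint(typeof(IPluginHostService),
                                new NetNamedPipeBinding(), "plugins");
        host.Open();
    }
}

// A plugin would connect like this:
// var factory = new ChannelFactory<IPluginHostService>(
//     new NetNamedPipeBinding(), "net.pipe://localhost/myhost/plugins");
// IPluginHostService svc = factory.CreateChannel();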

How to wrap an untrusted application?

Imagine an untrusted application (plugin) that reads from the standard input and writes to the standard output.
How can I get the output this application returns for a specified input, while preventing any side effects?
For example, if the application deletes a file on disk, this should be detected and the attempt canceled.
It's some kind of wrapper application. Is it possible to build it?
A less complicated task is also interesting: building this wrapper using .NET (both host and client written in a .NET language).
The safest way would be to load that plugin into a separate AppDomain which you configure with the security evidence for the requirements you have.
When you create an AppDomain, you can specify exactly the kinds of things code can do in this sandbox. Code that runs there is restricted to the limits you set. But this process can be confusing the first time you do it and may still leave you open to vulnerabilities.
Using AppDomains to isolate assemblies is an interesting process. You'd think you load your plugins into the other AppDomain and then use them via proxies in your AppDomain, but it's the other way around: they need to use your proxies in their AppDomain. If you fail to understand and do this right, you'll end up loading your plugin code within your main AppDomain and executing it there instead of in the restricted domain. There are lots of gotchas that you'll get bit by (subscribing to events has some interesting side effects) if you don't do things correctly.
I'd suggest prototyping, brush up on the AppDomain chapter in CLR Via C#, and read as much as you can on the subject.
Here's a test app I made to investigate cross-appdomain events.
http://cid-f8be9de57b85cc35.skydrive.live.com/self.aspx/Public/appdomainevents.rar
