The problem (and some unnecessary information): I am creating a chat bot in C# (not chatterbot), and want users to be able to run custom code on the bot. Basically, you send a string message over the network, and the bot runs the code contained in it.
I have looked into and actually implemented/used CSharpCodeProvider; however, every time custom code is compiled this adds another assembly to the AppDomain, and individual assemblies cannot be unloaded. When you take into account that tens or hundreds of separate custom-code invocations may occur in a single lifetime, this becomes a problem.
My idea is that there might be an interpreted language or some such thing that can be invoked from C#.
You can remove an assembly only by unloading the entire AppDomain. So you could create a fresh AppDomain, load the assembly there (or compile it from there) and unload the domain after use.
You could recycle the appdomain every 100 statements or so in order to amortize the (small) time it takes to cycle one.
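A minimal sketch of that approach, assuming .NET Framework (AppDomain.Unload does not exist on .NET Core/5+); the names ScriptRunner and the Script.Main convention are illustrative, not from the question:

```csharp
using System;

// Runs inside the child domain. Because it extends MarshalByRefObject, the
// main domain only ever holds a proxy, so the compiled user assembly never
// leaks into the main AppDomain.
public class ScriptRunner : MarshalByRefObject
{
    public void Execute(string source)
    {
        var provider = new Microsoft.CSharp.CSharpCodeProvider();
        var options = new System.CodeDom.Compiler.CompilerParameters
        {
            GenerateInMemory = true   // loads into *this* (child) domain only
        };
        var results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("compile failed");

        // Convention assumed here: the user code defines Script.Main().
        results.CompiledAssembly
               .GetType("Script")
               .GetMethod("Main")
               .Invoke(null, null);
    }
}

public static class Sandbox
{
    public static void Run(string source)
    {
        AppDomain domain = AppDomain.CreateDomain("scripts");
        try
        {
            var runner = (ScriptRunner)domain.CreateInstanceAndUnwrap(
                typeof(ScriptRunner).Assembly.FullName,
                typeof(ScriptRunner).FullName);
            runner.Execute(source);
        }
        finally
        {
            // Discards every assembly compiled above in one step.
            AppDomain.Unload(domain);
        }
    }
}
```

To amortize the recycling cost, you could keep the domain in a field, count calls, and only unload/recreate it every N executions instead of in the finally block.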
Related
I'm working on an application extension system (plugins) where each plugin should be isolated into a separate AppDomain. The work is about to be completed, but there is still one important question about how long an AppDomain should live.
The system is used server-side, and it uses the plugins regularly; say it calls each plugin once every ten minutes. In this case, taking every kind of AppDomain overhead into account, which is more appropriate?
Create the AppDomain instances once and keep them alive for the entire life-cycle of the application (so each plugin call will go into the same AppDomain per plugin).
Create the AppDomain instances for each plugin call and then unload them.
Using AppDomain.CreateDomain(...):
1) Create a new AppDomain for each plugin and keep it alive for the entire application lifetime
Pros: no overhead for creating the AppDomain, loading .dlls, etc. on each plugin call
Cons: the .dlls from all AppDomains consume memory for the entire application lifetime; you need to be careful with static variables; no sandboxing between calls (if one call corrupts the AppDomain, all subsequent calls will fail)
2) Create a new AppDomain for each plugin call and unload it afterwards
Pros: sandboxing between calls; memory is released between calls
Cons: overhead for creating the AppDomain, loading .dlls, etc. on each plugin call
If you have many calls per plugin and large batch of .dlls for it, use option 1
If you have many calls per plugin and small batch of .dlls for it, use option 2
If you have few calls per plugin and small batch of .dlls for it, use option 2
If you want sandboxing between calls, use option 2
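Option 1 can be sketched roughly like this (a hypothetical host, assuming .NET Framework; PluginBase, PluginHost and HelloPlugin are illustrative names, not part of the question):

```csharp
using System;

// Base type plugins derive from; MarshalByRefObject so calls from the main
// domain go through a proxy instead of loading the plugin assembly locally.
public abstract class PluginBase : MarshalByRefObject
{
    public abstract string Execute();
}

// A hypothetical plugin implementation, used here only for illustration.
public class HelloPlugin : PluginBase
{
    public override string Execute()
    {
        return "running in " + AppDomain.CurrentDomain.FriendlyName;
    }
}

public class PluginHost : IDisposable
{
    private readonly AppDomain _domain;
    private readonly PluginBase _proxy;   // cached proxy into the plugin's domain

    public PluginHost(string assemblyName, string typeName)
    {
        _domain = AppDomain.CreateDomain("plugin:" + typeName);
        _proxy = (PluginBase)_domain.CreateInstanceAndUnwrap(assemblyName, typeName);
    }

    // Every call reuses the same long-lived domain (option 1). For option 2
    // you would instead create the domain here and unload it in a finally.
    public string Call()
    {
        return _proxy.Execute();
    }

    public void Dispose()
    {
        AppDomain.Unload(_domain);   // frees all assemblies the plugin loaded
    }
}
```

With option 1 you would keep one PluginHost per plugin alive for the application's lifetime; disposing and recreating it around every call turns this into option 2.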
The title of my question might already give away the fact that I'm not sure about what I want, as it might not make sense.
For a project I want to be able to run executables within my application, while redirecting their standard in and out so that my application can communicate with them through those streams.
At the same time, I do not want to allow these executables to perform certain actions like use the network, or read/write outside of their own working directory (basically I only want to allow them to write and read from the standard in and out).
I read in different places on the internet that these permissions can be set with PermissionStates when creating an AppDomain, in which you can then execute the executables. However, I did not find a way to then communicate with the executables through their standard in and out, which is essential. I can, however, do this when starting a new Process (Process.Start()), though then I cannot set boundaries on what the executable is allowed to do.
My intuition tells me I should somehow execute the Process inside the AppDomain, so that the process kind of 'runs' in the domain, though I cannot see a way to directly do that.
A colleague of mine accomplished this by creating a proxy-application, which basically is another executable in which the AppDomain is created, in which the actual executable is executed. The proxy-application is then started by a Process in the main application. I think this is a cool idea, though I feel like I shouldn't need this step.
I could add some code containing what I've done so far creating a process and appdomain, though the question is pretty long already. I'll add it if you want me to.
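For reference, the stdin/stdout redirection the question describes can be sketched like this (StdIoHost is an illustrative name, not from the question):

```csharp
using System.Diagnostics;

public static class StdIoHost
{
    // Sends `input` to the child's standard input, then returns everything
    // the child writes to standard output before exiting.
    public static string Exchange(string fileName, string input)
    {
        var psi = new ProcessStartInfo(fileName)
        {
            UseShellExecute = false,        // required for stream redirection
            RedirectStandardInput = true,
            RedirectStandardOutput = true
        };
        using (Process child = Process.Start(psi))
        {
            child.StandardInput.Write(input);
            child.StandardInput.Close();    // EOF so the child can finish
            string output = child.StandardOutput.ReadToEnd();
            child.WaitForExit();
            return output;
        }
    }
}
```

For example, StdIoHost.Exchange("sort", "b\na\n") pipes two lines through the system sort utility and returns its output. This gives you the communication channel but, as the question notes, no sandboxing by itself.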
The "proxy" application sounds like a very reasonable approach (given that you only ever want to run .NET assemblies).
You get the isolation of different processes, which allows you to communicate via stdin/stdout and gives the additional robustness that the untrusted executable cannot crash your main application (which it could if it were running in an AppDomain inside your main application's process).
The proxy application would then setup a restricted AppDomain and execute the sandboxed code, similar to the approach described here:
How to: Run Partially Trusted Code in a Sandbox
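The core of the setup from that article looks roughly like this (a sketch, assuming .NET Framework; SandboxFactory is an illustrative name):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

public static class SandboxFactory
{
    public static AppDomain Create(string applicationBase)
    {
        // Start from an empty permission set and grant only Execution, so
        // code in the domain cannot open files, sockets, etc.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // ApplicationBase should point at the untrusted code's directory,
        // never at the host's own directory.
        var setup = new AppDomainSetup { ApplicationBase = applicationBase };

        return AppDomain.CreateDomain("Sandbox", null, setup, permissions);
    }
}
```

The proxy application would create such a domain, CreateInstanceAndUnwrap a MarshalByRefObject loader into it, and run the untrusted executable through that loader.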
In addition, you can make use of operating-system-level mechanisms to reduce the attack surface of a process. This can be achieved, e.g., by starting the proxy process with low integrity, which removes write access to most resources (e.g. it allows writing files only under AppData\LocalLow). See here for an example.
Of course, you need to consider whether this level of sandboxing is sufficient for you. Sandboxing in general is hard, and the isolation will always be only partial.
Over the past months, I've developed a personal tool that I'm using to compile C# 3.5 Xaml projects online. Basically, I'm compiling with the CodeDom compiler. I'm thinking about making it public, but the problem is that it is -very-very- easy to do anything on the server with this tool.
The reason I want to protect my server is because there's a 'Run' button to test and debug the app (in screenshot mode).
Is it possible to run an app in a sandbox - in other words, limiting memory access, hard drive access and BIOS access - without having to run it in a VM? Or should I just analyze every piece of code, or disable the Run mode?
Spin up an AppDomain, load the assemblies into it, look for an interface you control, activate the implementing type, and call your method. Just don't let any instances you don't 100% control (including exceptions!) cross that AppDomain barrier.
Controlling the security policies for your external-code AppDomain is a bit much for a single answer, but you can check this link on MSDN or just search for "code access security msdn" to get details about how to secure this domain.
Edit: There are exceptions you cannot stop, so it is important to watch for them and record in some manner the assemblies that caused the exception so you will not load them again.
Also, it is always better to inject into this second AppDomain a type that you then use to do all loading and execution. That way you ensure that no type that could bring down your entire application crosses an AppDomain boundary. I've found it useful to define a type that extends MarshalByRefObject, with methods you call to execute the insecure code in the second AppDomain. It should never pass an unsealed type that isn't marked Serializable across the boundary, either as a method parameter or as a return type. As long as you can accomplish this, you are 90% of the way there.
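A sketch of such a type (UntrustedRunner and the bool-for-success convention are illustrative assumptions, not a prescribed API):

```csharp
using System;
using System.Reflection;

// Lives in the second AppDomain; the main domain only holds a proxy.
// Only sealed/serializable data (strings, bools) ever crosses back.
public sealed class UntrustedRunner : MarshalByRefObject
{
    public bool TryRun(string assemblyPath, string typeName, string methodName)
    {
        try
        {
            // Loads into *this* (second) domain, not the caller's.
            Assembly asm = Assembly.LoadFrom(assemblyPath);
            Type type = asm.GetType(typeName);
            object instance = Activator.CreateInstance(type);
            type.GetMethod(methodName).Invoke(instance, null);
            return true;
        }
        catch (Exception)
        {
            // Exceptions from untrusted code must not cross the boundary;
            // record assemblyPath as "bad" here so it is never loaded again.
            return false;
        }
    }
}
```

You would CreateInstanceAndUnwrap this runner into the second AppDomain and call TryRun through the proxy.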
I'm building an extensible service application for my work, and the way it 'extends' is by loading DLL's and executing methods within. It is designed this way so that I do not need to recompile and re-deploy every time we have a new job for it to do. Currently, the service loads a DLL using Assembly.LoadFrom() and then it registers the assembly with the service. In the registration, a Func<object, bool> is passed that dictates the entry point for the new job.
My question is would it be better if I created the instance every time I needed to run the task via something similar to this:
IRunable run = (IRunable)asm.CreateInstance(t.FullName, true);
run.Run();
or would it be better to do it the way I am currently, where I store the Func<> in a class that is called based of a timer?
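One middle ground, assuming the IRunable interface from the snippet above (JobRegistry and NoOpJob are illustrative names): pay the reflection cost once at registration, cache the instance, and have the timer only call Run().

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public interface IRunable { void Run(); }

// A trivial example job, included only so the sketch is self-contained.
public class NoOpJob : IRunable
{
    public static int Calls;
    public void Run() { Calls++; }
}

public class JobRegistry
{
    private readonly List<IRunable> _jobs = new List<IRunable>();

    public void Register(Assembly asm, string typeName)
    {
        // Reflection cost is paid once here, not on every timer tick.
        _jobs.Add((IRunable)asm.CreateInstance(typeName, true));
    }

    // Wire this to the timer; each tick just invokes the cached instances.
    public void OnTimerTick()
    {
        foreach (IRunable job in _jobs)
            job.Run();
    }
}
```

Creating a fresh instance per call only buys you anything if the jobs hold state you want discarded between runs.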
Timer!
If performance becomes a problem because of it (I've always had this problem in C#), you can still change it later, but a timer gives you more control over the program itself...
Implementing a buffer should also work just fine.
Imagine an untrusted application (plugin) that reads from the standard input and writes to the standard output.
How can I get the output this application returns for a given input while preventing any side effects?
For example, if the application deletes a file on disk, this should be detected and the attempt canceled.
It's some kind of wrapper application. Is it possible to build it?
A less complicated variant is also interesting: build this wrapper using .NET (both host and client written in a .NET language).
Safest way would be to load that plugin into a separate AppDomain which you configure with the security evidence for the requirements you have.
When you create an AppDomain, you can specify exactly the kinds of things code can do in this sandbox. Code that runs there is restricted to the limits you set. But this process can be confusing the first time you do it and may still leave you open to vulnerabilities.
Using AppDomains to isolate assemblies is an interesting process. You'd think you load your plugins into the other AppDomain and then use them via proxies in your AppDomain, but it's the other way around: they need to use your proxies in their AppDomain. If you fail to understand this and do it right, you'll end up loading your plugin code into your main AppDomain and executing it there instead of in the restricted domain. There are lots of gotchas you'll get bitten by (subscribing to events has some interesting side effects) if you don't do things correctly.
I'd suggest prototyping, brush up on the AppDomain chapter in CLR Via C#, and read as much as you can on the subject.
Here's a test app I made to investigate cross-appdomain events.
http://cid-f8be9de57b85cc35.skydrive.live.com/self.aspx/Public/appdomainevents.rar