Execute code in a different AppDomain to extend application memory - C#

My problem is that I'm using a 32-bit application, so I have limited memory available.
I need to execute a piece of code (that works with a database) which needs a lot of memory, in parallel, and I thought of running this code in separate processes (if I'm not mistaken, each process gets approximately 2 GB of memory). Another advantage is that a crash in such a process won't affect the main application.
I'm wondering: does an AppDomain really not share memory with the main application?
If so, will this solution help me?
Executing Code in a Separate Application Domain Using C#

App domains do use the main application's memory; however, once the app domain is unloaded, all of its memory is reclaimed. Creating and unloading an app domain has a performance cost, and if the app domain contains lots of static objects it can actually inflate the size of the process, since static objects are tied to the app domain rather than the process. See Understanding Application Domains.
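As a rough illustration of the create/run/unload cycle described above, here is a minimal sketch (the MemoryHungryWorker class is a hypothetical example, not from the question):

    using System;

    // The worker must derive from MarshalByRefObject so it can be
    // called across the AppDomain boundary via a proxy.
    public class MemoryHungryWorker : MarshalByRefObject
    {
        public void Run()
        {
            // ... allocate and process large data sets here ...
        }
    }

    public static class WorkerHost
    {
        public static void RunIsolated()
        {
            AppDomain domain = AppDomain.CreateDomain("WorkerDomain");
            try
            {
                var worker = (MemoryHungryWorker)domain.CreateInstanceAndUnwrap(
                    typeof(MemoryHungryWorker).Assembly.FullName,
                    typeof(MemoryHungryWorker).FullName);
                worker.Run();
            }
            finally
            {
                // Unloading the domain reclaims everything it allocated.
                AppDomain.Unload(domain);
            }
        }
    }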
If the memory-intensive part of your application runs for a limited amount of time, you can benefit from this approach. However, running in a separate process will allow you to use more memory, especially if it is an x64 process, though you may then need some way to communicate between the two processes.
You can also look at reducing the memory pressure of your application by pooling and reusing objects that consume a lot of memory.

See Difference between AppDomain, Assembly, Process, and a Thread
An AppDomain isn't usually run in a separate process, to my knowledge; I don't think this would help you there.
Why not spawn a new process directly?

Related

Is it possible to implement a "light" AppDomain by writing a dedicated native application that would host the .NET runtime?

I have a Background Job Engine that runs jobs. There could be 50 jobs at the same time. All run in a single AppDomain, on different threads.
The problem is that it is impossible to:
Kill a job (killing a thread is not an option)
Get a job's memory usage
Theoretically, the solution is to run each job in its own AppDomain, but having 50 AppDomains is impractical. AppDomains are heavy, and one of the reasons is that each one loads all the assemblies, even if they are all the same.
Now I realize there is no solution from within the .NET realm, but what if my Background Job Engine were a native C++ application hosting the .NET runtime? Would I have more options there? Could I implement a sort of "light" AppDomain that would enable me to run 50 jobs, each in its own "light" AppDomain?
A "light" AppDomain would use the shared set of .NET assemblies and should come up in a snap, but it should still provide a fair amount of isolation: enough to allow me to bring it down along with everything running inside it. In this case, only assemblies loaded directly into this AppDomain would be unloaded. It would also be great to be able to collect the memory usage of an individual "light" AppDomain.
Any ideas?
EDIT 1
Suppose that I could run 50 AppDomains and I am OK with whatever control that gives me over the code running in an AppDomain. Now we all understand that running 50 AppDomains is unrealistic.
But what makes it so unrealistic? I can think of one reason - the necessity to load all the assemblies, could be other reasons as well.
But why is it working like this?
Isn't it true that the assembly code is read-only?
Or is it because of the static variables that are mapped within the assembly code space?
In short, what makes the model where different AppDomains share the same assemblies so problematic?
I am not familiar with .NET runtime hosting, so I am curious whether the shared-assemblies model could be implemented manually through some advanced .NET runtime hosting API. In that model, the host would preload some assemblies, which would be shared by all the AppDomains.
I can think of at least two possible .NET-only solutions, with various trade-offs for each.
Each job gets its own thread (or use async/await if you want to reduce threads). Each job has a Cancel() method that sets both a boolean flag and a ManualResetEvent. During any async operation you use WaitHandle.WaitAny(workEvent, cancelEvent), which gives you the option to cancel the current async work. At various points in your code you can also check whether the cancel flag is set, and terminate the job if it is; you can optionally do cleanup after cancellation if desired. When your jobs allocate or deallocate resources, manually keep a running counter for the subset of resources that are likely to consume the most memory, as a rough approximation of memory usage. This won't give you exact numbers, but relative to other jobs it should help you identify the worst offenders, and could give you more detailed insight into why they're consuming memory.
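A minimal sketch of that cooperative-cancellation pattern, assuming a hypothetical CancellableJob class (the names are illustrative, not from the original post):

    using System;
    using System.Threading;

    public class CancellableJob
    {
        private readonly ManualResetEvent _cancelEvent = new ManualResetEvent(false);
        private readonly AutoResetEvent _workEvent = new AutoResetEvent(false);
        private volatile bool _cancelRequested;

        // Called from the controlling thread to request cancellation.
        public void Cancel()
        {
            _cancelRequested = true;
            _cancelEvent.Set();
        }

        public void Run()
        {
            while (!_cancelRequested)
            {
                // Wait for either new work or a cancellation request.
                int signaled = WaitHandle.WaitAny(
                    new WaitHandle[] { _workEvent, _cancelEvent });
                if (signaled == 1)
                    break; // the cancel event fired

                DoUnitOfWork();

                // Re-check the flag periodically inside long loops too.
                if (_cancelRequested)
                    break;
            }
            // Optional cleanup after cancellation goes here.
        }

        private void DoUnitOfWork() { /* job-specific work */ }
    }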
Break this into two .NET executables: a job controller and a worker. The job controller can create as many worker processes as it wants using Process.Start(), and can track exact memory consumption for each Process. It can also terminate individual processes and be guaranteed that all resources will be freed. Downsides: additional per-process overhead, and potential messiness left behind if cleanup is needed.
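A hedged sketch of the controller side, using only standard System.Diagnostics calls ("Worker.exe" is a placeholder name):

    using System;
    using System.Diagnostics;

    public static class JobController
    {
        // Start one worker process per job.
        public static Process StartWorker(string jobArguments)
        {
            return Process.Start("Worker.exe", jobArguments);
        }

        // Exact per-job memory usage, as seen by the OS.
        public static long GetMemoryBytes(Process worker)
        {
            worker.Refresh();            // refresh cached process info
            return worker.WorkingSet64;  // physical memory in use
        }

        // Killing the process guarantees the OS reclaims everything.
        public static void KillWorker(Process worker)
        {
            if (!worker.HasExited)
                worker.Kill();
            worker.Dispose();
        }
    }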
Edit: a few more options
Implement your jobs as Tasks and use CancellationTokens (see the sketch after this list).
Use something like Hangfire to manage your jobs. It supports cancellation tokens.
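For completeness, a minimal Task-based sketch with a CancellationToken (illustrative only; the work loop is a stand-in):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public static class TaskJobExample
    {
        public static void Main()
        {
            var cts = new CancellationTokenSource();
            CancellationToken token = cts.Token;

            Task job = Task.Run(() =>
            {
                for (int i = 0; i < 1000; i++)
                {
                    token.ThrowIfCancellationRequested(); // cooperative check
                    Thread.Sleep(100); // stand-in for one unit of work
                }
            }, token);

            cts.Cancel(); // request cancellation from the controller side

            try { job.Wait(); }
            catch (AggregateException) { /* contains a TaskCanceledException */ }
        }
    }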

DotNet AppDomain - Does an AppDomain give the same benefits as multiple processes?

In our server we are loading a third-party assembly which creates its own resources like memory and threads. We need to create multiple instances of this third-party plug-in, and as the number of instances increases, the memory and threads of our server hit their limits.
One way is to load these plug-ins in different exes, which frees up server resources. This will work, as each process gets the benefit of its own resource pool.
The question is: if we use an AppDomain and isolate the plug-in, will it give an advantage similar to hosting it in a different process with respect to resource availability?
An AppDomain does not get a new 'resource space' from the OS. For example, the available memory space is not enlarged when creating a new AppDomain, which can be a limitation on 32-bit systems. For the same reason, an error which causes the process to die (like out of memory) will kill all AppDomains in the process together, and the same goes for uncaught exceptions. That is not the case with separate processes.
However, .NET does treat AppDomains as somewhat separate units. For example, garbage collection is performed for each AppDomain on its own, so a GC pass in one AppDomain will not disturb a different AppDomain. This may reduce the CPU time AppDomains consume by lowering the dependencies between them (although I've never measured how much it actually helps).
From your question it sounds like a process is the preferred solution, since you want a separate resource pool for each plug-in, but the answer depends on exactly what resource types you are talking about.
My experience is that processes are more flexible and stable than AppDomains. There are some fatal limitations of AppDomains compared to processes:
If threads are created in the AppDomain, the host has no idea about the new threads
If threads are created in the AppDomain and an unhandled exception is thrown in one of those threads, the whole process will be killed (which is a disaster when hosting third-party components)
An AppDomain cannot be unloaded properly if it contains unmanaged code
Even if the AppDomain contains only managed code, you may still not be able to unload it or abort its threads (e.g. an infinite loop in a finally block), while a process can easily be killed

.NET 4.5 Memory Leak

I have a problem with an application that I wrote in .NET/C#. It consists of a server which manages a few other machines and runs tests on them. It is a Windows Forms application. In order to run tests with proper error handling, I have two threads for each machine: one for running tests and one that pings it continuously. Each machine has a running queue in which the tasks to be run on that particular machine are stored.
The issue is that after some time, when more than a few tasks are present in the queue, the memory the application consumes (according to Process Explorer and Task Manager) gradually increases from about 50-100 MB to 1.6-1.8 GB. At about this limit, almost every transaction with the remote machines (file copy to a share, remote WMI access) fails with either "Not enough storage" or "Out of memory". I tried some tools in order to localize the leak, and the closest I got was .NET Memory Profiler. That wasn't of great help, because the largest amount of memory resided in "Private Data - Unidentified". I'm guessing this is unmanaged data, because I can account for every other piece of data (from my program) down to each string and int, and every instance of it.
Can anyone suggest a tool I can use in order to properly localize the leak and maybe fix it? It would help me a lot if I knew the DLL (from my app) or the thread that uses that memory, or at least if I could somehow view what is in that memory.
Note: a lot of posts are out there about the two exceptions, "Not enough storage" and "Out of memory". Most of them suggest increasing the IRPStackSize on the 'server' machine (in my case, the clients). I have an IRPStackSize of 50 (0x32) on all of the machines, including the server.
EDIT
Regarding the comments: yes, I do maintain a log, but nothing strange happens. Using a memory profiler, I discovered that the .NET side of my application uses about 20 MB of memory while the unmanaged part is well over 1 GB. With the help of WinDbg I found out what resides in most of that extra memory. In order to access the machines and run different tests on them I use WMI, for which I have a wrapper. Everything I use is being disposed (using statements, and in some cases explicit calls to the Dispose method). Strangely though, the memory is filled with clones of this class. Does anyone know why a class would clone itself in memory?
Note: the rate at which the memory usage increases is about 5 MB/s, so it's not really over a long period of time. I also wonder why it is not being freed by the garbage collector. I am using C# classes to work with WMI, not COM or unmanaged code. Also, among the objects on the heap I see a lot of data belonging to wmiutils, CWbemError. Oddly enough, Google doesn't even know the word (no results for CWbemError).
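One pattern worth double-checking, offered as a guess rather than a confirmed diagnosis: with System.Management, the ManagementObject instances yielded while enumerating results need disposing too, not just the searcher and the collection. A defensive sketch (requires a reference to System.Management.dll; the query is illustrative):

    using System;
    using System.Management;

    public static class WmiQuery
    {
        public static void QueryRemoteMachine(string machine)
        {
            var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
            var query = new ObjectQuery("SELECT * FROM Win32_OperatingSystem");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            using (ManagementObjectCollection results = searcher.Get())
            {
                foreach (ManagementObject obj in results)
                {
                    using (obj) // dispose each result, not just the collection
                    {
                        Console.WriteLine(obj["Caption"]);
                    }
                }
            }
        }
    }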

How do I sandbox calling an external unmanaged application from managed code?

We are developing an online test application for XSLT processors in ASP.NET; however, I'm a bit worried about how to limit the vulnerabilities of our system. Is it possible with .NET to sandbox a third-party unmanaged or managed application? It should:
not be allowed to start any other process by any means or vulnerability;
have no access to other existing processes on the system;
be killed when it takes too much processing power or memory;
work with both managed and unmanaged external applications;
not be able to make system calls.
Some applications have a managed API, but that doesn't suffice, because then I would need to run them in the same processing space as ASP.NET, with all the potential risks (stack overflow, memory exceptions, buffer overflow). I'm not aware whether .NET offers sandboxing of unmanaged applications.
We currently execute the external program in a console with specific processor affinity and monitor it, but that doesn't feel like a correct, or even remotely safe, approach.
You can execute managed code within an AppDomain, which can be configured to provide some level of protection; however, as soon as you allow unmanaged code to run, it pretty much has access to everything the user it's running under has access to.
I'm pretty sure you can prevent unmanaged/unsafe code from being executed within an AppDomain, though.
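For the managed side, a minimal sketch of a restricted AppDomain with Execution-only permissions (a general pattern, not tested against the poster's setup, and it does nothing for unmanaged code):

    using System;
    using System.Security;
    using System.Security.Permissions;

    public static class Sandbox
    {
        public static AppDomain Create(string appBasePath)
        {
            // Grant only the right to execute managed code: no file,
            // registry, network, or process access.
            var permissions = new PermissionSet(PermissionState.None);
            permissions.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            var setup = new AppDomainSetup { ApplicationBase = appBasePath };

            return AppDomain.CreateDomain("Sandbox", null, setup, permissions);
        }
    }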

AppDomain communication and performance

I am hosting a WCF service where the requirement is that an object, of a type the WCF service does not directly reference, is created and some (common) methods are run on it. The type is created via reflection and AssemblyResolve: this is OK.
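For context, a minimal sketch of that resolve-and-create pattern (the folder variable and type name are placeholders, not the actual service's values):

    using System;
    using System.IO;
    using System.Reflection;

    static class PluginLoader
    {
        // Resolve plug-in assemblies from a known folder on demand.
        public static void RegisterResolver(string pluginFolder)
        {
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                string file = new AssemblyName(args.Name).Name + ".dll";
                string path = Path.Combine(pluginFolder, file);
                return File.Exists(path) ? Assembly.LoadFrom(path) : null;
            };
        }

        // Create the unreferenced type by name; AssemblyResolve fires
        // if its assembly is not already loaded.
        public static object CreateWorker(string assemblyQualifiedTypeName)
        {
            Type type = Type.GetType(assemblyQualifiedTypeName, throwOnError: true);
            return Activator.CreateInstance(type);
        }
    }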
I then got to thinking: we are expecting maybe 50-100 of these assemblies/types to arrive, especially once we start versioning them. This should presumably bloat the memory usage and performance of the service host application (still theory here rather than practice), due to all these assemblies being kept in memory.
As a result, we should unload them, but the only way to do this is via an AppDomain. The thinking is that each assembly would somehow run in its own AppDomain, and the WCF service would actually just pass messages to the appropriate AppDomain. If an AppDomain is not used for some_period_of_time, then we simply unload it.
Some guidance would be useful on:
is this an insane idea?
should the process run fine with ~100 assemblies in memory?
communication with appdomains would presumably come at some cost (via remoting / named pipes): does this disqualify the idea?
creating an appdomain to basically service one type of .dll would involve many appdomains; is this a bad idea?
I have no experience in this area. My worries are about the size and performance of the app if I don't do something like this. At the same time, the AppDomain idea sounds like massive over-engineering. The requirement to host these unknown .dlls is not something I can change.
Is this idea as bad as it sounds, and what are the pros/cons associated with it?
should the process run fine with ~100 assemblies in memory?
You'll have to try (it's easy to create a mock-up), but you're only stuck with the code. So at 1 MB apiece you would be using 100 MB of discardable memory; I don't expect a problem.
Provided your instances are released and collected.
If you have the memory available and want better performance, you can either wait until the first call is made and let the assemblies be loaded lazily (subsequent calls will be faster), or, if you don't want any slow calls at all, eagerly load the assemblies when the service starts. I don't see a reason to load/unload the assemblies on each call when memory is cheap. If you notice a performance problem, then I'd say think about unloading the assemblies when they're not being used.
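Eager loading at start-up can be as simple as the following sketch (the plug-in directory is an assumed convention; JIT compilation still happens on first use):

    using System;
    using System.IO;
    using System.Reflection;

    public static class PluginPreloader
    {
        // Load every plug-in assembly up front so the first WCF call
        // does not pay the assembly-load cost.
        public static void PreloadAll(string pluginDirectory)
        {
            foreach (string path in Directory.GetFiles(pluginDirectory, "*.dll"))
            {
                Assembly.LoadFrom(path);
            }
        }
    }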
This is essentially what IIS App Pools and Worker Processes do. Probably not insane, but there's a lot of room for implementation here that could lead to happy or unhappy results.
