Looks like I'm once again trying to do something that others haven't run into. At least, from Google searching, it seems pretty unique.
I have a server running with a directory full of DLLs. Each DLL is used to process a certain type of order and is managed by different people. When an order comes in, a table determines which DLL to spin up; we run it and then spin it down (free it from memory).
The process we started with was (sketched in code below):
Create an AppDomain
Load needed dll into the domain
Run the code we need
Unload the AppDomain
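Roughly, the code looked like this (a minimal sketch, not our actual code; IOrderProcessor, the type names, and paths are illustrative, and the processor type must derive from MarshalByRefObject so calls are marshaled into the new domain):
using System;

// Shared contract, defined in an assembly that both the service and every processor DLL reference.
public interface IOrderProcessor { void Process(string orderId); }

static class OrderRunner
{
    public static void ProcessOrder(string assemblyPath, string typeName, string orderId)
    {
        // assemblyPath and typeName come from the lookup table mentioned above
        AppDomain domain = AppDomain.CreateDomain("OrderDomain-" + orderId);
        try
        {
            var processor = (IOrderProcessor)domain.CreateInstanceFromAndUnwrap(
                assemblyPath, typeName);
            processor.Process(orderId);
        }
        finally
        {
            // Unloading frees the domain's managed state, but native DLLs the
            // assembly P/Invoked (e.g. third-party Pascal ones) can stay mapped.
            AppDomain.Unload(domain);
        }
    }
}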
The issue we had with this was that the server would run out of memory, or we would get "token errors". I think the cause is that we call some third-party Pascal DLLs that I don't think are getting freed.
We updated the process to:
Create an AppDomain on process load
Load the dll into this "global" AppDomain
Run the dll's code
This got rid of our memory issues and we have received zero token errors. However, when we want to move a DLL from our test server to our production server, we have to kill the whole service so that the DLLs are no longer locked in memory, and then reprocess any orders that died with the service or came in while the service was down.
I've worked with the group and tried to unload the AppDomain, but once it is unloaded it doesn't just free the DLLs that are running; it basically kills the whole domain. I've considered moving this DLL loader to its own System.Process, so that the OS can clean up the memory when the process ends. However, I'm not sure how to have one process run code in another process, and this seems like something basic OS security features would prevent, because viruses would have a heyday.
Does anyone have an idea how we can run DLLs in an AppDomain and then have them unlocked from memory so that they can be updated? (Without creating and disposing many AppDomains, because they don't seem to clean everything up.) It would be great to have a memory pool that I can nuke, but .NET isn't about manual memory management.
I'm working on an application extension system (plugins) where each plugin should be isolated into a separate AppDomain. The work is about to be completed, but there is still one important question about how long an AppDomain should live.
The system is used server-side, and it uses the plugins regularly; say it calls each plugin once every ten minutes. In this case, taking every kind of AppDomain overhead into account, which is more appropriate?
Create the AppDomain instances once and keep them alive for the entire life-cycle of the application (so each plugin call will go into the same AppDomain per plugin).
Create the AppDomain instances for each plugin call and then unload them.
Using AppDomain.CreateDomain(...), you have two options (both sketched in code after the recommendations below):
1). create new app domain for each plugin and keep it alive during the entire application lifetime
pros: no overhead for: creating app domain, loading .dlls, etc on each plugin call
cons: all .dlls from all app domains are eating the memory during the entire application lifetime; need to be careful with static variables; no sandboxing between calls (if one breaks the app domain then all calls will fail)
2). create new app domain for each plugin call and unload after
pros: sandboxing between calls; releasing memory between calls
cons: overhead for: creating app domain, loading .dlls, etc on each plugin call
If you have many calls per plugin and large batch of .dlls for it, use option 1
If you have many calls per plugin and small batch of .dlls for it, use option 2
If you have few calls per plugin and small batch of .dlls for it, use option 2
If you want sandboxing between calls, use option 2
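For concreteness, here is a minimal sketch of both options (IPlugin, PluginHost, and the path/type parameters are illustrative; plugin classes must derive from MarshalByRefObject so they execute in the child domain):
using System;
using System.Collections.Generic;

// Shared contract referenced by both the host and every plugin .dll.
public interface IPlugin { void Run(); }

public class PluginHost
{
    readonly Dictionary<string, AppDomain> _domains = new Dictionary<string, AppDomain>();

    // Option 1: one long-lived domain per plugin, reused on every call.
    public void CallLongLived(string name, string assemblyPath, string typeName)
    {
        AppDomain domain;
        if (!_domains.TryGetValue(name, out domain))
        {
            domain = AppDomain.CreateDomain("Plugin-" + name);
            _domains[name] = domain;
        }
        var plugin = (IPlugin)domain.CreateInstanceFromAndUnwrap(assemblyPath, typeName);
        plugin.Run();
    }

    // Option 2: a fresh domain per call, unloaded immediately afterwards.
    public void CallSandboxed(string assemblyPath, string typeName)
    {
        AppDomain domain = AppDomain.CreateDomain("Plugin-call");
        try
        {
            var plugin = (IPlugin)domain.CreateInstanceFromAndUnwrap(assemblyPath, typeName);
            plugin.Run();
        }
        finally
        {
            AppDomain.Unload(domain); // releases the plugin's assemblies and memory
        }
    }
}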
I have an advanced question related to remoting DLLs in memory. I have a WCF service that takes DLLs (a Starter.DLL entry type). There is also a Process DLL which is referenced by a WPF application. The steps are:
1) The WCF service takes the DLLs (adds them to a memory stream)
2) The Process DLL reads them from the WCF service
3) The Process DLL loads each DLL into its current domain:
// load each decompressed assembly image into this (the Process DLL's) AppDomain
foreach (byte[] binary in deCompressBinaries)
{
    AppDomain.CurrentDomain.Load(binary);
}
4) The WPF app calls the Process DLL's CreateApplication method:
this._btnStartApp.Click += (s, args) =>
{
    var appMngr = new Process(); // our own Process class, not System.Diagnostics.Process
    appMngr.CreateApplication();
};
The CreateApplication method:
object obj = appLoader.CreateInstance(appLoader.EntryType);
MethodInfo minfo = obj.GetType().GetMethod("Execute", BindingFlags.Instance | BindingFlags.Public | BindingFlags.CreateInstance);
But I cannot see the Execute method in the GetMethod result when inspecting it with QuickWatch. How do I run the Execute method in the other current domain? These DLLs are added to the Process DLL's current domain, not the WPF app's current domain.
AppDomains are tricky to work with when dynamically loading DLLs.
First: if you cannot see the method, verify that you can get the type of the object. If you can see the type, the object should be loaded into the current domain. Then you can iterate over the methods and inspect them to see if something is mismatched in the GetMethod call, which could make it fail to show up. The CreateInstance flag might be the issue; I believe that one is used for constructors.
Second: if you want things to run in a separate AppDomain (very important when you want to unload the DLL later), then you have to build three things (sketched in code below).
Create a set of classes that can handle all the loading and execution of the dynamic DLL.
Create an interface that allows your main code to talk to this set of classes; this interface cannot return any types that came from the dynamic DLL. If it does, you either load that DLL into your main domain or you crash. I recall MarshalByRefObject may be needed, but I'm uncertain; it has been a while.
Attach event handlers to your main AppDomain that notify you when you load a DLL into that AppDomain. This will be how you can debug whether you are inadvertently loading the DLL into your current AppDomain.
http://msdn.microsoft.com/library/system.appdomain.assemblyload.aspx
Now you can create a new AppDomain, call your loading code in there, and get results through the interface. If you can complete the call and no load events were triggered, you've got it working. You can dispose of the AppDomain when you are done, and the loaded DLL goes along with it.
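Roughly, the three pieces fit together like this (a sketch from recollection; all names and paths are illustrative, and the dynamic DLL's path and type would come from your own configuration):
using System;
using System.Reflection;

// 1) The cross-domain contract: only simple/serializable types cross the boundary.
public interface IDynamicRunner { string Execute(string input); }

// 2) The loader that lives inside the new AppDomain. Deriving from
//    MarshalByRefObject keeps the dynamic DLL's types out of the main domain.
public class RunnerProxy : MarshalByRefObject, IDynamicRunner
{
    public string Execute(string input)
    {
        Assembly asm = Assembly.LoadFrom(@"plugins\Dynamic.dll"); // illustrative path
        object obj = asm.CreateInstance("Dynamic.EntryPoint");    // illustrative type
        MethodInfo execute = obj.GetType().GetMethod(
            "Execute", BindingFlags.Instance | BindingFlags.Public); // no CreateInstance flag
        return (string)execute.Invoke(obj, new object[] { input });
    }
}

// 3) In the main domain: watch for accidental loads, then call through the proxy.
class Program
{
    static void Main()
    {
        AppDomain.CurrentDomain.AssemblyLoad += (s, e) =>
            Console.WriteLine("Loaded into MAIN domain: " + e.LoadedAssembly.FullName);

        AppDomain domain = AppDomain.CreateDomain("DynamicDomain");
        var runner = (IDynamicRunner)domain.CreateInstanceAndUnwrap(
            typeof(RunnerProxy).Assembly.FullName, typeof(RunnerProxy).FullName);

        Console.WriteLine(runner.Execute("hello"));
        AppDomain.Unload(domain); // the dynamic DLL goes with it
    }
}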
Good luck, this was tricky to figure out and get working correctly.
I have a C# Windows Service that I recently moved from .NET 3.5 to .NET 4.0. No other code changes were made.
When running on 3.5, memory utilization for a given workload was roughly 1.5 GB and throughput was 20 X per second. (The X doesn't matter in the context of this question.)
The exact same service running on 4.0 uses between 3 GB and 5 GB+ of memory, and gets less than 4 X per second. In fact, the service will typically end up stalling out as memory usage continues to climb until my system is sitting at 99% utilization and page file swapping goes nuts.
I'm not sure if this has to do with garbage collection or what, but I'm having trouble figuring it out. My Windows service uses the "Server" GC via the config file switch seen below:
<runtime>
  <gcServer enabled="true"/>
</runtime>
Changing this option to false didn't seem to make a difference. Furthermore, from the reading I've done on the new GC in 4.0, the big changes only affect the workstation GC mode, not the server GC mode. So perhaps GC has nothing to do with the issue.
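For what it's worth, one way to confirm at runtime which GC mode actually took effect (in case the config isn't being picked up):
// GCSettings lives in the System.Runtime namespace; prints True under server GC
Console.WriteLine("Server GC: " + System.Runtime.GCSettings.IsServerGC);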
Ideas?
Well this was an interesting one.
The root cause turns out to be a change in the behavior of SQL Server Reporting Services' LocalReport class (v2010) when running this on top of .NET 4.0.
Basically, Microsoft altered the behavior of RDLC processing so that each time a report is processed, it is done in a separate application domain. This was actually done specifically to address a memory leak caused by the inability to unload assemblies from app domains. When the LocalReport class processes an RDLC file, it actually creates an assembly on the fly and loads it into the app domain.
In my case, due to the large volume of reports I was processing, this was resulting in very large numbers of System.Runtime.Remoting.ServerIdentity objects being created. This was my tip-off to the cause, as I was confused about why processing an RDLC required remoting.
Of course, to call a method on a class in another app domain, remoting is exactly what you use. In .NET 3.5, this wasn't necessary as, by default, the RDLC-assembly was loaded into the same app domain. In .NET 4.0, however, a new app domain is created by default.
The fix was fairly easy. First I needed to enable the legacy security policy using the following config:
<runtime>
  <NetFx40_LegacySecurityPolicy enabled="true"/>
</runtime>
Next, I needed to force the RDLCs to be processed in the same app domain as my service by calling the following:
myLocalReport.ExecuteReportInCurrentAppDomain(AppDomain.CurrentDomain.Evidence);
This resolved the issue.
I ran into this exact issue, and it is true that app domains are created and not cleaned up. However, I wouldn't recommend reverting to the legacy security policy; the domains can be cleaned up by ReleaseSandboxAppDomain().
LocalReport report = new LocalReport();
...
report.ReleaseSandboxAppDomain();
Some other things I also do to clean up (sketched below):
Unsubscribe from any SubreportProcessing events,
Clear Data Sources,
Dispose the report.
Our Windows service processes several reports a second and there are no leaks.
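Put together, the cleanup looks roughly like this (a sketch; OnSubreportProcessing is a placeholder for whatever handler you attached, and Dispose assumes a ReportViewer version where LocalReport implements IDisposable):
report.SubreportProcessing -= OnSubreportProcessing; // unsubscribe any handlers you attached
report.DataSources.Clear();                          // drop references to the data
report.ReleaseSandboxAppDomain();                    // unload the sandbox AppDomain
report.Dispose();                                    // release the rest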
I'm pretty late to this, but I have a real solution and can explain why!
It turns out that LocalReport is using .NET Remoting to dynamically create a sub-AppDomain and run the report, in order to avoid a leak somewhere internally. We then noticed that, eventually, the report releases all the memory, after 10 to 20 minutes. For people generating a lot of PDFs, this isn't going to work. However, the key here is that they are using .NET Remoting. One of the key parts of Remoting is something called "leasing": the runtime keeps the marshaled object around for a while, since Remoting is usually expensive to set up and the object is probably going to be used more than once. LocalReport's RDLC processing is abusing this.
By default, the lease time is... 10 minutes! Also, if something makes calls into the object, that adds another 2 minutes to the wait time! Thus, it can be anywhere between 10 and 20 minutes depending on how the calls line up. Luckily, you can change this timeout. Unluckily, you can only set it once per app domain, so if you need Remoting for anything other than PDF generation, you will probably need to run that in a separate service so you can change the defaults. To do this, all you need to do is run these four lines of code at startup:
// LifetimeServices is in the System.Runtime.Remoting.Lifetime namespace
LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);
LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5);
LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);
LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);
You'll see the memory use start to rise, and then within a few seconds you should see it start coming back down. It took me days with a memory profiler to really track this down and realize what was happening.
You can't wrap ReportViewer in a using statement (Dispose crashes), but you should be able to if you use LocalReport directly. After that disposes, you can call GC.Collect() if you want to be doubly sure you are doing everything you can to free up that memory.
Hope this helps!
Edit
Apparently, you should call GC.Collect(0) after generating a PDF report, or else memory use can still climb for some reason.
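In practice, the render-and-release pattern looks something like this (a sketch; the report path, data source name, and items collection are illustrative, and the using statement assumes a ReportViewer version where LocalReport implements IDisposable):
byte[] pdf;
using (var report = new LocalReport())
{
    report.ReportPath = "MyReport.rdlc";                          // illustrative
    report.DataSources.Add(new ReportDataSource("Items", items)); // illustrative
    pdf = report.Render("PDF");
}
GC.Collect(0); // per the edit above: collect gen 0 after each render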
You might want to
profile the heap
use WinDbg + SOS.dll to establish what resource is being leaked and from where the reference is held
Perhaps some API has changed semantics, or there might even be a bug in the 4.0 version of the framework.
Just for completeness, if anyone is looking for the equivalent ASP.NET web.config setting, it is:
<system.web>
  <trust legacyCasModel="true" level="Full"/>
</system.web>
ExecuteReportInCurrentAppDomain works the same.
Thanks to this Social MSDN reference.
It seems as though Microsoft tried putting the report into its own separate memory space to work around all of the memory leaks rather than fix them. In doing so, they introduced some hard crashes, and ended up having more memory leaks anyway. They seem to cache the report definition, but never use it and never clean it up, and every new report creates a new report definition, taking up more and more memory.
I played around with doing the same thing: use a separate app domain and marshal the report over to it. I think that is a terrible solution and makes a mess very quickly.
What I did instead is similar: split the reporting part of your program out into its own separate reports program. This turns out to be a good way to organize your code anyway.
The tricky part is passing information to the separate program. Use the Process class to start a new instance of the reports program and pass any parameters it needs on the command line. The first parameter should be an enum or similar value indicating the report that should be printed. My code for this in the main program looks something like:
// requires: using System.Diagnostics; (Process) and using System.Linq; (Select)
const string sReportsProgram = "SomethingReports.exe";
public static void RunReport1(DateTime pDate, int pSomeID, int pSomeOtherID) {
    RunWithArgs(ReportType.Report1, pDate, pSomeID, pSomeOtherID);
}
public static void RunReport2(int pSomeID) {
    RunWithArgs(ReportType.Report2, pSomeID);
}
// TODO: currently no support for quoted args
static void RunWithArgs(params object[] pArgs) {
    // .Join here is my own extension method which calls string.Join
    RunWithArgs(pArgs.Select(arg => arg.ToString()).Join(" "));
}
static void RunWithArgs(string pArgs) {
    Console.WriteLine("Running Report Program: {0} {1}", sReportsProgram, pArgs);
    var process = new Process();
    process.StartInfo.FileName = sReportsProgram;
    process.StartInfo.Arguments = pArgs;
    process.Start();
}
And the reports program looks something like:
[STAThread]
static void Main(string[] pArgs) {
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    var reportType = (ReportType)Enum.Parse(typeof(ReportType), pArgs[0]);
    using (var reportForm = GetReportForm(reportType, pArgs))
        Application.Run(reportForm);
}
static Form GetReportForm(ReportType pReportType, string[] pArgs) {
    switch (pReportType) {
        case ReportType.Report1: return GetReport1Form(pArgs);
        case ReportType.Report2: return GetReport2Form(pArgs);
        default: throw new ArgumentOutOfRangeException("pReportType", pReportType, null);
    }
}
Your GetReportForm methods should pull the report definition, make use of relevant arguments to obtain the dataset, pass the data and any other arguments to the report, and then place the report in a report viewer on a form and return a reference to the form. Note that it is possible to extract much of this process so that you can basically say 'give me a form for this report from this assembly using this data and these arguments'.
Also note that both programs must be able to see your data types that are relevant to this project, so hopefully you have extracted your data classes into their own library, which both of these programs can share a reference to. It would not work to have all of the data classes in the main program, because you would have a circular dependency between the main program and the report program.
Don't overdo it with the arguments, either. Do any database querying you need in the reports program; don't pass a huge list of objects (which probably wouldn't work anyway). You should just be passing simple things like database ID fields, date ranges, etc. If you have particularly complex parameters, you might need to push that part of the UI to the reports program too, and not pass them as arguments on the command line.
You can also put a reference to the reports program in your main program, and the resulting .exe and any related .dlls will be copied to the same output folder. You can then run it without specifying a path and just use the executable filename by itself (i.e. "SomethingReports.exe"). You can also remove the reporting DLLs from the main program.
One issue with this is that you will get a manifest error if you've never actually published the reports program. Just dummy publish it once, to generate a manifest and then it will work.
Once you have this working, it's very nice to see your regular program's memory stay constant when printing a report. The reports program appears, taking up more memory than your main program, and then disappears, cleaning up completely, with your main program taking up no more memory than it already had.
Another issue might be that each report instance will now take up more memory than before, since they are now entire separate programs. If the user prints a lot of reports and never closes them, it will use up a lot of memory very fast. But I think this is still much better since that memory can easily be reclaimed simply by closing the reports.
This also makes your reports independent of your main program. They can stay open even after closing the main program, and you can generate them from the command line manually, or from other sources as well.
Is there a way to release an object that was accessed using late-binding (i.e. created by the Activator.CreateInstance() method)?
I have an application that transforms files from one format to another. The assemblies that perform these translations live in a folder in my application directory.
When the application first starts up, I can delete these assemblies from the translation folder without any errors. However, once I process a document through the application (and have bound to one of the translation assemblies using late-binding), I can no longer delete the translation assemblies. At this point, I'm receiving an error message stating that the file is "in use by another application".
Is there a way to "release" the late-bound object in my application once I'm finished using it?
Once an assembly is loaded into an application domain, it'll remain until the app domain shuts down.
To get around this, load the assembly into its own application domain, for example:
AppDomain app = AppDomain.CreateDomain("PlugInDomain");
ObjectHandle objectHandle = app.CreateInstanceFrom(assemblyPath,
    "MyNamespace.MyComponent");
// Note: MyComponent must derive from MarshalByRefObject; otherwise Unwrap
// pulls the assembly into the calling domain, defeating the purpose.
MyComponent component = (MyComponent) objectHandle.Unwrap();
// do stuff
// Now kill the app domain; the assembly can be overwritten after this.
AppDomain.Unload(app);
Once an assembly is loaded into the executing AppDomain, it cannot be unloaded (regardless of whether it was created via reflection with Activator.CreateInstance).
The recommended approach here is to use a secondary AppDomain with its own lifetime, which can be unloaded when you want to dispose of the assemblies.
There are tons of examples, but here is one:
http://www.dotnet247.com/247reference/msgs/28/142174.aspx.
Since managing the lifetime of secondary AppDomains can be a pain, there is an alternative if you are using ASP.NET and are looking to load many dynamic assemblies: you can detect when your current AppDomain becomes saturated with dynamically loaded assemblies by binding to the AppDomain.CurrentDomain.AssemblyLoad event and keeping count, then request that the hosting environment recycle the current AppDomain when the count hits a critical number (say 500), like:
HostingEnvironment.InitiateShutdown();
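A sketch of that counting approach (the threshold is the "critical number" above; the class and method names are illustrative):
using System;
using System.Threading;
using System.Web.Hosting;

static class AppDomainRecycler
{
    const int MaxDynamicAssemblies = 500;
    static int _loadedCount;

    // Call once at startup.
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyLoad += (s, e) =>
        {
            if (Interlocked.Increment(ref _loadedCount) >= MaxDynamicAssemblies)
                HostingEnvironment.InitiateShutdown(); // ask ASP.NET to recycle the domain
        };
    }
}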