I have a problem with a memory leak in a .NET Core 3.1 API. The application is hosted in an Azure App Service.
It is clearly visible on a graph that under constant load the memory grows very slowly, and it only goes down after an app restart.
I created two memory dumps, one with high memory and one after a restart, and it is clearly visible that the cause is the app trying to load XmlSerialization.dll multiple times.
We have multiple other APIs that use almost identical serialization code, and I'm not exactly sure why the problem occurs only in this one; possibly because this one receives much higher traffic.
I've read some articles about the XmlSerializer class having memory issues, but those were reported for constructors we are not using. The only place where XmlSerializer is used directly in code uses the XmlSerializer(Type) constructor.
private static async Task<T> ParseResponseContentAsync<T>(HttpResponseMessage response, Accept accept)
{
try
{
using (Stream contentStream = await response.Content.ReadAsStreamAsync())
{
using (StreamReader reader = new StreamReader(contentStream, Encoding.UTF8))
{
switch (accept)
{
case Accept.Xml:
XmlSerializer serializer = new XmlSerializer(typeof(T));
return (T)serializer.Deserialize(reader);
case Accept.Json:
string stringContent = await reader.ReadToEndAsync();
return JsonConvert.DeserializeObject<T>(stringContent);
default:
throw new CustomHttpResponseException(HttpStatusCode.NotImplemented, $"Unsupported Accept type '{accept}'");
}
}
}
}
catch (Exception ex)
{
throw new InvalidOperationException($"Response content could not be deserialized as {accept} to {typeof(T)}", ex);
}
}
But I'm pretty sure this method is not used in this API anyway.
So another potentially problematic place could be the serialization of controller responses.
Startup.cs registration:
services
.AddControllers(options =>
{
options.OutputFormatters.Add(new XmlSerializerOutputFormatter(
new XmlWriterSettings
{
OmitXmlDeclaration = false
}));
options.Filters.Add<CustomHttpResponseExceptionFilter>();
})
.AddNewtonsoftJson(options => options.SerializerSettings.Converters.Add(
new StringEnumConverter(typeof(CamelCaseNamingStrategy))))
.AddXmlSerializerFormatters();
Example of an endpoint:
[Produces(MimeType.ApplicationXml, MimeType.TextXml, MimeType.ApplicationJson, MimeType.TextJson)]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status401Unauthorized)]
[HttpGet("EndpointName")]
[Authorize]
public async Task<ActionResult<ResponseDto>> Get([FromModel] InputModel inputModel)
{
//some code
return responseDto;
}
Dto returned from the API:
[XmlRoot(ElementName = "SomeName")]
public class ResponseDto
{
[XmlElement(ElementName = "Result")]
public Result Result { get; set; }
[XmlAttribute(AttributeName = "Status")]
public string Status { get; set; }
[XmlAttribute(AttributeName = "DoneSoFar")]
public int DoneSoFar { get; set; }
[XmlAttribute(AttributeName = "OfTotal")]
public int OfTotal { get; set; }
}
Now I haven't been able to find any documented cases of .AddXmlSerializerFormatters() causing these kinds of issues, and I'm not sure what the solution or a workaround should be. Any help would be greatly appreciated.
EDIT:
I've run some additional tests as #dbc suggested.
Now it seems that we are not even hitting the line new XmlSerializer(typeof(T)) in our scenarios, since nothing was logged after the logger code was added. We do, however, use default XML serialization for some of our API endpoints. One thing I noticed that might be causing this behavior is that the paths in the memory dump logs don't match the files that actually exist in the root folder.
The paths visible in the memory dumps are *.Progress.Lib.XmlSerializers.dll or *.Domain.Lib.XmlSerializers.dll.
Now I wonder if this isn't the issue documented here - link - since I can't see those files in the wwwroot directory.
If it is, I'm not sure whether the solution would be to somehow reference the .dlls directly?
Edit2:
Adding a screenshot of how memory looks after deploying the cached serializer suggested by #dbc. There is no constant growth, but it seems that after a few hours memory rises and doesn't go down. It is possible that the main problem is resolved, but since it takes a lot of time to notice big differences, we will monitor this for now. Nothing shows up in the large object heap, and no large amount of memory is allocated in the managed heap. This API, however, ran at around 250 MB when first deployed and is now at 850 MB after one day. When we turned off the load test tool, the memory didn't really go down much.
Edit3:
So we looked closer at some historical data, and it seems that the last screenshot shows normal behavior. The memory never grows beyond a certain point. Not sure why that happens, but this is acceptable.
The assemblies that the new XmlSerializer(typeof(T)) constructor is trying to load are Microsoft XML Serializer Generator assemblies, a.k.a. Sgen.exe assemblies, which may or may not have been created at the time the app was built.
But what are Sgen assemblies? In brief, XmlSerializer works by generating code to serialize and deserialize the type passed into the constructor, then compiling that generated code into a DLL and loading it into the application domain to do the actual serialization. This run-time DLL generation can be time-consuming, but as long as you use the XmlSerializer(Type) or XmlSerializer(Type, String) constructors it will only be done once per type T, with the resulting assembly being cached internally in a static dictionary by XmlSerializer.
As you might imagine this can cause the first call to new XmlSerializer(typeof(T)) to be slow, so (in .NET 2 I believe, this is all very old code) Microsoft introduced a tool to generate those run-time serialization DLLs at application build time: SGen.exe. This tool doesn't work for all types (e.g. generics) and was, if I recall correctly, finicky to use, but when it did work it did speed up serializer construction. Once loaded successfully the Sgen assembly is cached in the same cache used for generated assemblies.
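To make the caching behavior concrete, here is a minimal sketch (MyDto is a hypothetical type; the caching rules for these constructors are documented in the XmlSerializer remarks):
// Cached: XmlSerializer(Type) generates (or loads) the serialization assembly once
// per type and reuses it for every subsequent construction.
var first = new XmlSerializer(typeof(MyDto));
var second = new XmlSerializer(typeof(MyDto)); // reuses the cached assembly

// NOT cached: the overloads taking XmlAttributeOverrides generate a fresh dynamic
// assembly on every call -- the classic XmlSerializer memory leak mentioned above.
var overrides = new XmlAttributeOverrides();
var leaky = new XmlSerializer(typeof(MyDto), overrides); // new assembly each call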
And it seems like you have stumbled across a bug in .NET Core 3.1, 5, and 6 related to this:
The base class method OutputFormatter.CanWriteResult(OutputFormatterCanWriteContext context) of XmlSerializerOutputFormatter tests whether a type can be serialized by calling XmlSerializerOutputFormatter.CanWriteType(Type type). This in turn tests whether a type is serializable by XmlSerializer by attempting to construct a serializer for the type, returning false if construction fails with any exception. The serializer is cached if construction succeeds, but nothing is cached if construction fails.
The new XmlSerializer(Type) constructor tries to load an Sgen assembly unless an assembly has already been cached for the type by a previous successful call to the constructor.
But if a type is not serializable by XmlSerializer, the constructor will throw an exception and nothing will be cached. Thus successive attempts to construct a serializer for the same non-serializable type will result in multiple calls to load Sgen assemblies.
As you yourself found, .NET Core itself permanently leaks a small amount of IndividualAssemblyLoadContext memory every time assembly load fails: Failed Assembly.Load and Assembly.LoadFile leaks memory #58093.
Putting all this together, enabling XML serialization when some of your DTOs are not serializable (because e.g. they don't have parameterless constructors) can result in ever-growing IndividualAssemblyLoadContext memory use.
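For example, a DTO like the following (hypothetical, for illustration) is not serializable by XmlSerializer because it lacks a public parameterless constructor:
// XmlSerializer requires a public parameterless constructor; this type has none.
public class NonSerializableDto
{
    public NonSerializableDto(string value) { Value = value; }
    public string Value { get; }
}

// Every construction attempt throws InvalidOperationException, nothing is cached,
// and each attempt retries (and fails) the Sgen assembly load, growing
// IndividualAssemblyLoadContext memory on affected runtimes:
// var serializer = new XmlSerializer(typeof(NonSerializableDto));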
So, what are your options for a workaround?
Firstly, issue #58093 was apparently fixed in .NET 7 with pull #68502, so if you upgrade to that version the problem may resolve itself.
Secondly, you could subclass XmlSerializerOutputFormatter to cache returned XmlSerializer instances even when null. This will prevent multiple attempts to create serializers for non-serializable types.
First, subclass XmlSerializerOutputFormatter and override XmlSerializerOutputFormatter.CreateSerializer(Type) as follows:
public class CachedXmlSerializerOutputFormatter : XmlSerializerOutputFormatter
{
// Cache and reuse the serializers returned by base.CreateSerializer(t). When null is returned for a non-serializable type,
// a null serializer will be cached and returned.
static readonly ConcurrentDictionary<Type, XmlSerializer> Serializers = new ConcurrentDictionary<Type, XmlSerializer>();
public CachedXmlSerializerOutputFormatter() : base() { }
public CachedXmlSerializerOutputFormatter(ILoggerFactory loggerFactory) : base(loggerFactory) { }
public CachedXmlSerializerOutputFormatter(XmlWriterSettings writerSettings) : base(writerSettings) { }
public CachedXmlSerializerOutputFormatter(XmlWriterSettings writerSettings, ILoggerFactory loggerFactory) : base(writerSettings, loggerFactory) { }
protected override XmlSerializer CreateSerializer(Type type) { return Serializers.GetOrAdd(type, (t) => base.CreateSerializer(t)); }
}
Then replace use of XmlSerializerOutputFormatter with your subclassed version as follows:
services
.AddControllers(options =>
{
options.OutputFormatters.Add(new CachedXmlSerializerOutputFormatter(
new XmlWriterSettings
{
OmitXmlDeclaration = false
}));
options.Filters.Add<CustomHttpResponseExceptionFilter>();
})
.AddNewtonsoftJson(options => options.SerializerSettings.Converters.Add(
new StringEnumConverter(typeof(CamelCaseNamingStrategy))))
.AddXmlSerializerFormatters();
This should in theory eliminate the repeated failing calls to load Sgen assemblies.
Notes:
If you have enabled XML model binding and some of your input types are not XML-serializable, you may need to similarly subclass XmlSerializerInputFormatter. Its CreateSerializer(Type type) method also fails to cache failed attempts to construct a serializer.
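A minimal (untested) sketch of the analogous input-side subclass, mirroring the output formatter above:
public class CachedXmlSerializerInputFormatter : XmlSerializerInputFormatter
{
    // Same idea as the output formatter: cache serializers per type, including the
    // null results for non-deserializable types, to avoid repeated failing Sgen loads.
    static readonly ConcurrentDictionary<Type, XmlSerializer> Serializers = new ConcurrentDictionary<Type, XmlSerializer>();
    public CachedXmlSerializerInputFormatter(MvcOptions options) : base(options) { }
    protected override XmlSerializer CreateSerializer(Type type) { return Serializers.GetOrAdd(type, (t) => base.CreateSerializer(t)); }
}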
Demo fiddles:
Demo fiddle showing that multiple calls to XmlSerializerOutputFormatter.CanWriteType() for a non-serializable DTO result in multiple assembly load failures here: demo #1.
Demo fiddle showing that CachedXmlSerializerOutputFormatter fixes this problem here: demo #2.
Demo fiddle showing that multiple calls to XmlSerializerOutputFormatter.CanWriteType() for a serializable DTO do not result in multiple assembly load failures, and hence don't cause growing IndividualAssemblyLoadContext memory use, here: demo #3.
This might not be feasible, but could you offload the XML generation onto Azure API Management?
https://learn.microsoft.com/en-us/azure/api-management/api-management-transformation-policies#ConvertJSONtoXML
Related
I'm starting to dive into Orleans Streams and I'm running into an issue using ImplicitStreamSubscription. I'm building upon the QuickStart example by adding a new project that implements both the interfaces and the grains. Here is all of the code I have so far in my grains.
[ImplicitStreamSubscription("RANDOMDATA")]
public class VSMDiscovery : Grain, IVSMDiscovery
{
public override Task OnActivateAsync()
{
Console.WriteLine("Started" + this.GetPrimaryKey());
return base.OnActivateAsync();
}
}
public interface IVSMDiscovery : IGrainWithIntegerKey
{
}
In the DevTest main, I simply send an event using
var guid = Guid.NewGuid();
//Get one of the providers which we defined in config
var streamProvider = Orleans.GrainClient.GetStreamProvider("SMSProvider");
//Get the reference to a stream
var stream = streamProvider.GetStream<int>(guid, "RANDOMDATA");
stream.OnNextAsync(1);
Everything seems to execute fine: a new grain is instantiated and OnActivateAsync is called, which writes the message to the console. However, I get this error:
VSM Started206d105b-d21b-496c-997a-9dac3cf370b3
Extension not installed on grain Draco.VSMConnection.VSMDiscovery attempting to invoke type Orleans.Streams.OrleansCodeGenStreamConsumerExtensionMethodInvoker from invokable Orleans.Runtime.ActivationData
Exception = Orleans.Runtime.GrainExtensionNotInstalledException: Extension not installed on grain Draco.VSMConnection.VSMDiscovery attempting to invoke type Orleans.Streams.OrleansCodeGenStreamConsumerExtensionMethodInvoker from invokable Orleans.Runtime.ActivationData
[2016-03-09 05:53:41.007 GMT 14 WARNING 103405 InsideRuntimeClient 127.0.0.1:11111] Extension not installed on grain Draco.VSMConnection.VSMDiscovery attempting to invoke type Orleans.Streams.OrleansCodeGenStreamConsumerExtensionMethodInvoker from invokable Orleans.Runtime.ActivationData for message NewPlacement Request S127.0.0.1:11111:195198808*cli/5853f180#9c59fabf->S127.0.0.1:11111:195198808*grn/EB2C0203/ac9d7a99#0e33939b #5: global::Orleans.Streams.IStreamConsumerExtension:DeliverItem()
As I mentioned, everything appears to be running ok, but having this error is very concerning. Any help would be greatly appreciated.
For me, this was caused by having a grain with an implicit subscription attribute which forgot to subscribe to the stream in the OnActivateAsync method (which is required and is outlined in the quick start mentioned above). This is not clear from the error message at all. Hope this saves someone else some pain.
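For reference, a minimal sketch of what that subscription might look like in the grain from the question (Orleans 1.x-era API; the provider name must match your configuration):
[ImplicitStreamSubscription("RANDOMDATA")]
public class VSMDiscovery : Grain, IVSMDiscovery
{
    public override async Task OnActivateAsync()
    {
        // Resolve the same stream the producer writes to and attach a handler,
        // so the runtime can deliver items to this implicitly subscribed grain.
        var streamProvider = GetStreamProvider("SMSProvider");
        var stream = streamProvider.GetStream<int>(this.GetPrimaryKey(), "RANDOMDATA");
        await stream.SubscribeAsync((item, token) =>
        {
            Console.WriteLine("Received: " + item);
            return Task.CompletedTask;
        });
        await base.OnActivateAsync();
    }
}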
You need to make sure that the "SMSProvider" stream provider is correctly specified in the config file, for both client and silo, like here: https://github.com/dotnet/orleans/blob/master/test/Tester/OrleansConfigurationForStreamingUnitTests.xml#L9
Initially, I thought PowerShell instantiates a class only when the cmdlet tagged on that class is called. On execution, each cmdlet goes through the BeginProcessing -> ProcessRecord -> EndProcessing (StopProcessing) path, and after EndProcessing is done, it seems the processing ends and all these class objects are collected as garbage. Therefore each class should live in its own life cycle and not share any resources.
However, I find that classes in the same module do share static values. For example, assume in my project I have two classes:
namespace PSDSL
{
    [Cmdlet(VerbsCommon.Get, "MyTest")]
    public class GetMyTest : Cmdlet
    {
        public static string GlobalUserName = "";

        [Parameter(Mandatory = false)]
        public string Filepath { get; set; }

        protected override void ProcessRecord()
        {
            if (Filepath != null)
            {
                GlobalUserName = Filepath;
            }
            Console.WriteLine(GlobalUserName);
        }
    }
}
namespace PSDSL
{
    [Cmdlet(VerbsCommon.Get, "MyTest2")]
    public class GetMyTest2 : Cmdlet
    {
        [Parameter(Mandatory = false)]
        public string Filepath { get; set; }

        protected override void ProcessRecord()
        {
            if (Filepath != null)
            {
                GetMyTest.GlobalUserName = Filepath;
            }
            Console.WriteLine(GetMyTest.GlobalUserName);
        }
    }
}
The two commands are pretty similar except that one defines a static GlobalUserName. Calling these two cmdlets shows that GlobalUserName can be read and written from both cmdlets.
My confusion is: when are the classes instantiated?
The whole assembly is loaded at once and stays loaded until the PowerShell process is restarted.
Details:
The smallest unit of code isolation in .NET is the Assembly (in most cases a single managed DLL).
A process that uses the managed runtime can't load less than a single assembly at a time, so all classes from that assembly (and related ones, on demand) are loaded together. As a result, all static fields are present in memory at the same time (note that static fields are initialized "before first use of the class", which means they are not necessarily initialized on load of the assembly).
There is also no way to "unload" a class or even an assembly without using separate AppDomains. PowerShell does not use multiple AppDomains to load assemblies for different modules (generally cross-AppDomain calls require special attention during implementation, and you'd know about it by now). As a result, a module, once loaded, stays in memory until you quit PowerShell (covered in Powershell Unload Module... completely).
Since an assembly is loaded once for all cmdlets in it, all static fields are present at once and keep their values until PowerShell exits.
Side note: I'd strongly recommend avoiding static fields for anything but truly static immutable data. It is way too easy to leave some random values there and impact future code. In PowerShell, the pipeline is the way to pass information between cmdlets; other types of processes (WinForms, ASP.NET, ...) have their own preferred mechanisms for passing data instead of using statics.
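For example, instead of stashing a value in a static field, a cmdlet can emit it to the pipeline (a hypothetical sketch):
[Cmdlet(VerbsCommon.Get, "MyTest3")]
public class GetMyTest3 : Cmdlet
{
    [Parameter(Mandatory = false, ValueFromPipeline = true)]
    public string Filepath { get; set; }

    protected override void ProcessRecord()
    {
        // Write the value to the pipeline; the next cmdlet receives it as input,
        // with no shared mutable state between cmdlet classes.
        WriteObject(Filepath);
    }
}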
I'm currently trying to load and use the Gephi Toolkit from within a .Net 4 C# website.
I have a version of the toolkit jar file compiled against the IKVM virtual machine, which works as expected from a command line application using the following code:
var controller = (ProjectController)Lookup.getDefault().lookup(typeof(ProjectController));
controller.closeCurrentProject();
controller.newProject();
var project = controller.getCurrentProject();
var workspace = controller.getCurrentWorkspace();
The three instances are correctly instantiated in a form similar to org.gephi.project.impl.ProjectControllerImpl#8ddb93.
If, however, I run the exact same code in the website, with the exact same using statements and references, the very first line loading the ProjectController instance returns null.
I have tried a couple of solutions
Firstly, I have tried ignoring the Lookup.getDefault().lookup(type) call, instead trying to create my own instances:
var controller = new ProjectControllerImpl();
controller.closeCurrentProject();
controller.newProject();
var project = controller.getCurrentProject();
var workspace = controller.getCurrentWorkspace();
This fails at the line controller.newProject(); I think because internally (as seen using Reflector) the same Lookup.getDefault().lookup(type) call is used in a constructor, returns null, and then throws an exception.
Secondly, from here: Lookup in Jython (and Gephi) I have tried to set the %CLASSPATH% to the location of both the toolkit JAR and DLL files.
Is there a reason why the Lookup.getDefault().lookup(type) would not work in a web environment? I'm not a Java developer, so I am a bit out of my depth with the Java side of this.
I would have thought it possible to create all of the instances myself, but haven't been able to find a way to do so.
I also cannot find a way of seeing why the ProjectController load returned null. No exception is thrown, and unless I'm being very dumb, there doesn't appear to be a method to see the result of the attempted load.
Update - Answer
Based on the answer from Jeroen Frijters, I resolved the issue like this:
public class Global : System.Web.HttpApplication
{
    public Global()
    {
        // root is the physical path to the folder containing the IKVM-compiled toolkit DLL
        var assembly = Assembly.LoadFrom(Path.Combine(root, "gephi-toolkit.dll"));
        var acl = new AssemblyClassLoader(assembly);
        java.lang.Thread.currentThread().setContextClassLoader(new MySystemClassLoader(acl));
    }
}

internal class MySystemClassLoader : ClassLoader
{
    public MySystemClassLoader(ClassLoader parent)
        : base(new AppDomainAssemblyClassLoader(typeof(MySystemClassLoader).Assembly))
    { }
}
The code ikvm.runtime.Startup.addBootClassPathAssemby() didn't seem to work for me, but from the provided link, I was able to find a solution that seems to work in all instances.
This is a Java class loader issue. In a command-line app, your main executable functions as the system class loader and knows how to load assembly dependencies, but in a web process there is no main executable, so the system class loader doesn't know how to load anything useful.
One of the solutions is to call ikvm.runtime.Startup.addBootClassPathAssemby() to add the relevant assemblies to the boot class loader.
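Something along these lines should work (note the method name really is spelled "Assemby" in IKVM; the assembly path here is an assumption for illustration):
// Run once at startup, e.g. in Application_Start, before any Lookup calls.
var toolkit = System.Reflection.Assembly.LoadFrom(Server.MapPath("~/bin/gephi-toolkit.dll"));
ikvm.runtime.Startup.addBootClassPathAssemby(toolkit);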
For more on IKVM class loading issues see http://sourceforge.net/apps/mediawiki/ikvm/index.php?title=ClassLoader
I have an AppDomain that I'm using to load modules into a sandbox with:
class PluginLoader
{
public static AppDomain PluginSandbox;
static PluginLoader()
{
AppDomainSetup ads = new AppDomainSetup();
ads.ApplicationName = "Plugin Modules";
PermissionSet trustedLoadFromRemoteSourceGrantSet =
new PermissionSet(PermissionState.Unrestricted);
PluginSandbox =
AppDomain.CreateDomain("Plugin App Domain",
null, ads, trustedLoadFromRemoteSourceGrantSet);
}
And then later on, I'll pull in the DLL I need and create an object instance:
public IPlugin FindPlugin(string pluginName)
{
ObjectHandle handle =
PluginSandbox.CreateInstance(pluginName,
"Plugins." + pluginName);
IPlugin ip = (IPlugin)handle.Unwrap();
return ip;
}
I run through this a couple of times with no problems. Getting instances of various objects out in the Sandbox, with no problems.
A bit later in the code, in another method, I need to find the assembly to get an embedded resource (a compiled-in data file, a manifest resource). So I call:
Assembly [] ar = PluginSandbox.GetAssemblies();
And the error gets thrown:
A first chance exception of type 'System.IO.FileNotFoundException'
occurred in PluginRunner.dll.
Additional information: Could not load file or assembly '10wl4qso,
Version=1.0.3826.25439, culture info=neutral, PublicKeyToken=null'
or one of its dependencies. The system cannot find the file specified.
I'm not surprised. '10wl4qso' isn't the name of the assembly, the DLL, or anything like it. In fact it seems pseudo-random for each run. Plus, for added fun, GetAssemblies isn't even documented to throw this exception.
Now I can call GetAssemblies right after I get the initial object just fine, and everything is peachy. But a couple of seconds later, in a different method I get this. Being remoted, PluginSandbox has no useful information at all in the debugger.
I'm catching UnhandledException and DomainUnload on the AppDomain and neither is being triggered.
Why does my AppDomain suddenly not know about its assemblies?
Where's that garbage data coming from?
What can I do to prevent either/both of these from happening?
This weirdly named assembly you're seeing is probably generated by XmlSerializer. The XML serializer emits a dynamic assembly so it can quickly serialize and deserialize a specific type. Check your code for uses of XmlSerializer, comment them out, and see if the problem occurs again.
I don't know if it helps you...
Try overriding InitializeLifetimeService on your IPlugin implementation. Your IPlugin implementation should inherit from MarshalByRefObject first.
public class PluginSample : MarshalByRefObject, IPlugin
{
    public override object InitializeLifetimeService()
    {
        return null; // Return null to give the remote object an infinite lifetime.
    }
    //...implementation
}
Take a look at this article:
RemotingException when raising events across AppDomains
I have been trying to get the following code to work (everything is defined in the same assembly):
namespace SomeApp
{
    public class A : MarshalByRefObject
    {
        public byte[] GetSomeData() { /* ... */ }
    }

    public class B : MarshalByRefObject
    {
        private A remoteObj;

        public void SetA(A remoteObj)
        {
            this.remoteObj = remoteObj;
        }
    }

    public class C
    {
        A someA = new A();

        public void Init()
        {
            AppDomain domain = AppDomain.CreateDomain("ChildDomain");
            string currentAssemblyPath = Assembly.GetExecutingAssembly().Location;
            B remoteB = domain.CreateInstanceFromAndUnwrap(currentAssemblyPath, "SomeApp.B") as B;
            remoteB.SetA(someA); // this throws an ArgumentException "Object type cannot be converted to target type."
        }
    }
}
What I'm trying to do is pass a reference of an 'A' instance created in the first AppDomain to the child domain and have the child domain execute a method on the first domain. At some point in B's code I'm going to call remoteObj.GetSomeData(). This has to be done because the byte[] from the GetSomeData method must be calculated in the first AppDomain.
What should I do to avoid the exception, or what can I do to achieve the same result?
The actual root cause was that your DLL was getting loaded from different locations in the two different AppDomains. This causes .NET to think they are different assemblies, which of course means the types are different (even though they have the same class name, namespace, etc.).
The reason Jeff's test failed when run through a unit test framework is because unit test frameworks generally create AppDomains with ShadowCopy set to "true". But your manually created AppDomain would default to ShadowCopy="false". This would cause the dlls to be loaded from different locations which leads to the nice "Object type cannot be converted to target type." error.
UPDATE: After further testing, it does seem to come down to the ApplicationBase being different between the two AppDomains. If they match, then the above scenario works. If they are different, it doesn't (even though I've confirmed with windbg that the dll is loaded into both AppDomains from the same directory). Also, if I turn on ShadowCopy="true" in both of my AppDomains, then it fails with a different message: "System.InvalidCastException: Object must implement IConvertible".
UPDATE 2: Further reading leads me to believe it is related to Load Contexts. When you use one of the "From" methods (Assembly.LoadFrom or AppDomain.CreateInstanceFromAndUnwrap), if the assembly is found in one of the normal load paths (the ApplicationBase or one of the probing paths) then it is loaded into the Default Load Context. If the assembly isn't found there, then it is loaded into the Load-From Context. So when both AppDomains have matching ApplicationBases, then even though we use a "From" method, both are loaded into their respective AppDomain's Default Load Context. But when the ApplicationBases are different, one AppDomain will have the assembly in its Default Load Context while the other has it in its Load-From Context.
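Based on that, a minimal sketch of creating the child AppDomain with a matching ApplicationBase, which should keep the assembly in the Default Load Context of both domains:
var setup = new AppDomainSetup
{
    // Match the parent's ApplicationBase so the assembly resolves identically
    // in both domains and ends up in the Default Load Context of each.
    ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
};
AppDomain domain = AppDomain.CreateDomain("ChildDomain", null, setup);
B remoteB = domain.CreateInstanceFromAndUnwrap(currentAssemblyPath, "SomeApp.B") as B; // cast now succeeds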
I can duplicate the issue, and it seems to be related to TestDriven.net and/or xUnit.net. If I run C.Init() as a test method, I get the same error message. However, if I run C.Init() from a console application, I do not get the exception.
Are you seeing the same thing, running C.Init() from a unit test?
Edit: I'm also able to duplicate the issue using NUnit and TestDriven.net. I'm also able to duplicate the error using the NUnit runner instead of TestDriven.net. So the problem seems to be related to running this code through a testing framework, though I'm not sure why.
This is a comment to #RussellMcClure, but as it is too complex for a comment I post it as an answer:
I am inside an ASP.NET application and turning off shadow-copy (which would also solve the problem) is not really an option, but I found the following solution:
AppDomainSetup adSetup = new AppDomainSetup();
if (AppDomain.CurrentDomain.SetupInformation.ShadowCopyFiles == "true")
{
var shadowCopyDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
if (shadowCopyDir.Contains("assembly"))
shadowCopyDir = shadowCopyDir.Substring(0, shadowCopyDir.LastIndexOf("assembly"));
var privatePaths = new List<string>();
foreach (var dll in Directory.GetFiles(AppDomain.CurrentDomain.SetupInformation.PrivateBinPath, "*.dll"))
{
var shadowPath = Directory.GetFiles(shadowCopyDir, Path.GetFileName(dll), SearchOption.AllDirectories).FirstOrDefault();
if (!String.IsNullOrWhiteSpace(shadowPath))
privatePaths.Add(Path.GetDirectoryName(shadowPath));
}
adSetup.ApplicationBase = shadowCopyDir;
adSetup.PrivateBinPath = String.Join(";", privatePaths);
}
else
{
adSetup.ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase;
adSetup.PrivateBinPath = AppDomain.CurrentDomain.SetupInformation.PrivateBinPath;
}
This will use the shadow-copy directory of the main app-domain as the application-base and add all shadow-copied assemblies to the private path if shadow-copy is enabled.
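The resulting setup is then used to create the child domain, e.g.:
// Create the sandbox domain with the shadow-copy-aware setup built above.
AppDomain childDomain = AppDomain.CreateDomain("ChildDomain", null, adSetup);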
If someone has a better way of doing this please tell me.