Is there a strong reason why Microsoft chose not to support AppDomains in .NET Core?
AppDomains are particularly useful when building long-running server apps, where we may want to update the assemblies loaded by the server in a graceful manner, without shutting down the server.
Without AppDomains, how are we going to replace our assemblies in a long running server process?
AppDomains also provide a way to isolate different parts of server code. For example, a custom WebSocket server could keep its socket code in the primary AppDomain while our services run in a secondary AppDomain. Without AppDomains, that scenario is not possible.
I can see an argument for using cloud VMs to handle assembly changes rather than incurring the overhead of AppDomains. But is this what Microsoft actually thinks or says? Or do they have a specific reason, and alternatives, for the above scenarios?
The point of the .NETCore subset was to keep a .NET install small and easy to port. That is why you can, say, run a Silverlight app on both Windows and OSX and not wait very long when you visit the web page; downloading and installing the complete runtime and framework takes a handful of seconds, give or take.
Keeping it small inevitably requires features to be cut. Remoting was very high on that list; it is quite expensive. It is otherwise well hidden, but you can, for example, see that delegates no longer have a functional BeginInvoke() method. That put AppDomain on the cut list as well, since you can't run code in an app domain without remoting support. So this is entirely by design.
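You can see the remoting cut for yourself (a minimal sketch; this compiles everywhere but fails at runtime on .NET Core):

using System;

Func<int> f = () => 42;
// Works on the .NET Framework; throws PlatformNotSupportedException on
// .NET Core, because async delegate invocation relied on remoting.
IAsyncResult ar = f.BeginInvoke(null, null);
int answer = f.EndInvoke(ar);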
Update for .NET Standard 2 and .NET Core 2
In .NET Standard 2 the AppDomain class is in there. However, many parts of that API will throw a PlatformNotSupportedException for .NET Core.
The main reason it's still in there is for basic stuff like registering an unhandled exception handler which will work.
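For example, these commonly used members work fine on .NET Core, while domain creation does not (a minimal sketch):

using System;

// Both of these are supported on .NET Core:
AppDomain.CurrentDomain.UnhandledException +=
    (sender, e) => Console.WriteLine(e.ExceptionObject);
Console.WriteLine(AppDomain.CurrentDomain.BaseDirectory);

// This throws PlatformNotSupportedException on .NET Core:
// AppDomain.CreateDomain("Sandbox");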
The .NET Standard FAQ has this explanation:
Is AppDomain part of .NET Standard?
The AppDomain type is part of .NET Standard. Not all platforms will support the creation of new app domains, for example, .NET Core will not, so the method AppDomain.CreateDomain while being available in .NET Standard might throw PlatformNotSupportedException.
The primary reason we expose this type in .NET Standard is because the usage is fairly high and typically not associated with creating new app domains but for interacting with the current app domain, such as registering an unhandled exception handler or asking for the application's base directory.
Apart from that, the other answers also nicely explain why the bulk of the AppDomain API was still cut (e.g. it throws a not-supported exception).
App Domains
Why was it discontinued? AppDomains require runtime support and are generally quite expensive. While still implemented by CoreCLR, it’s not available in .NET Native and we don’t plan on adding this capability there.
What should I use instead? AppDomains were used for different purposes. For code isolation, we recommend processes and/or containers. For dynamic loading of assemblies, we recommend the new AssemblyLoadContext class.
Source: Porting to .NET Core | .NET Blog
You don't need AppDomains anymore; you now have load contexts:

using System.Reflection;
using System.Runtime.Loader;

public class CollectibleAssemblyLoadContext : AssemblyLoadContext
{
    public CollectibleAssemblyLoadContext() : base(isCollectible: true)
    { }

    // Returning null defers to the default context for shared dependencies.
    protected override Assembly Load(AssemblyName assemblyName)
    {
        return null;
    }
}
byte[] result = null; // Assembly Emit-result from Roslyn (see the sketch below)
AssemblyLoadContext context = new CollectibleAssemblyLoadContext();
System.IO.Stream ms = new System.IO.MemoryStream(result);
Assembly assembly = context.LoadFromStream(ms);
Type programType = assembly.GetType("RsEval");
MyAbstractClass eval = (MyAbstractClass)Activator.CreateInstance(programType);
eval.LoadContext = context;
eval.Stream = ms;
// do something here with the dynamically created class "eval"
and then you can say
eval.LoadContext.Unload();
eval.Stream.Dispose();
Bonus: if you fold that cleanup into an IDisposable implementation on the abstract class, you can simply use a using block, if you want to.
Note:
This assumes a fixed abstract class in a common assembly
public abstract class MyAbstractClass
{
    // These let the host unload the context and dispose the stream later.
    public AssemblyLoadContext LoadContext { get; set; }
    public System.IO.Stream Stream { get; set; }

    public virtual void foo()
    { }
}
and a dynamically runtime-generated class (built with Roslyn), referencing the abstract class in the common assembly, which implements it, e.g.:
public class RsEval : MyAbstractClass
{
    public override void foo()
    { }
}
At one point, I heard that unloading assemblies would be enabled without using domains. I think that the System.Runtime.Loader.AssemblyLoadContext type in System.Runtime.Loader.dll is related to this work, but I don't see anything there that enables unloading yet.
I have heard in a community standup or some Microsoft talk that the isolation feature of AppDomains is better handled by processes (which is actually the common pattern on other platforms), and that unloading is indeed planned as a normal feature unrelated to AppDomains.
Related
I'm currently working on a C# product that will use a plugin-type system. This isn't anything new, and I have seen much info around about how to implement this functionality quite easily using an interface.
I've also seen methods to implement backwards compatibility by updating the interface name, e.g.: Interface change between versions - how to manage?
There are multiple scenarios which I can foresee with our product in regards to version mismatches between the main exe and the plugin:
1. Main program is the same version as the plugin
2. Main program is newer than the plugin
3. Main program is older than the plugin
From the info I've been able to gather, 1 and 2 work just fine. But I haven't been able to figure out how to correctly implement "forward" compatibility (3).
It is our intention to only ADD methods to the plugin API.
Any ideas would be a great help.
Isolated PluginAPI DLL
First, your PluginAPI (containing the interfaces) should be a separate DLL from your main application. Your main application will reference the PluginAPI, and each plugin will reference the PluginAPI. You're most likely already doing this.
Interface Versioning
Second, structurally, you should create a new interface each time you add a new property or method.
For example:
Version 1: Plugins.IPerson
Version 2: Plugins.V2.IPerson : Plugins.IPerson
Version 3: Plugins.V3.IPerson : Plugins.V2.IPerson
In rare cases where you decide to remove or completely redesign your API, example:
Version 4: Plugins.V4.IPerson //Without any Interface inheritance
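In code, the additive versioning above might look like this (a sketch; the members are illustrative):

namespace Plugins
{
    public interface IPerson            // Version 1
    {
        string Name { get; }
    }
}

namespace Plugins.V2
{
    public interface IPerson : Plugins.IPerson   // Version 2 only adds members
    {
        int Age { get; }
    }
}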
Isolated PluginAPI DLL Versioning
Finally, I am not 100% sure how versioning of the PluginAPI .dll itself will go, even with this structural architecture of interface versioning. It may work as-is, or you may need matching dlls for each version (each referencing the previous version(s)). We will assume that this is the case.
Solution for case 3
So let's now take your case [3], main program older than plugin:
The Person plugin implements Plugins.V2.IPerson and references the V3 .dll (just to make it interesting).
Main Program references the V1 .dll
The plugin folder will contain the V2 and V3 plugin .dlls
The main app folder will only contain the V1 plugin .dll (among other files)
Main App will find and load the Person plugin and reference it through the V1 definition of the IPerson interface
Of course, only V1 methods and properties will be accessible from the plugin to the Main App
(Additional methods will be accessible through reflection - not that you would want to)
Bonus Update
When you might use plugins
Third parties extending your system. Source code would be better if that's an option, or, if it's web-based, redirect to their URL. This is a dream for many software projects, but you should wait until you have an interested third-party partner before doing the extra work to build the plugin framework.
User-editable "scripts". You should not build your own scripting language; instead, compile the user's C# code against a restrictive interface, in an AppDomain that is very locked down (disabling reflection and other risky capabilities).
Security grouping - Your core software might use trusted platform calls. Riskier modules can be separated into another library and optionally excluded by end-users.
When not to use Plugins
I am an advocate for less-is-more. Don't overengineer. If you are building modular software that's great, use classes and namespaces (don't get carried away with interfaces). "Modular" means you are striving to adhere to SOLID principles, but that doesn't mean you need Plugin architecture. Even inversion of control is overkill in many situations.
If you plan to open to third-parties in the future, don't make it a plugin architecture to start with. You can build out a plugin framework later in stages: i) derive interfaces; ii) define your plugins with interfaces within the same project; iii) load your internal plugins with a plugin loader class; iv) finally, you can implement an external library loader. Each of these 4 steps leave you with a working system on their own and move you toward a finished plugin system.
Hot Swappable Plugins
When designing a plugin architecture, you may be interested to know that you can make plugins hot swappable:
Without freeing memory - just keep loading the new plugin. This is usually fine, unless it's server software which you expect i) to run for a very long time without restarting; AND ii) to see many plugin changes and upgrades during that time. When you load a plugin at runtime, the assembly is loaded into memory and cannot be unloaded. See [2] for why.
With freeing memory - you can unload an AppDomain. An AppDomain runs in the same process but is reference-isolated: you can't reference or call its objects directly. Instead, calls must be marshalled and data must be serialised between AppDomains. The added complexity is not worth it if you're not going to change plugins often: there is i) a performance penalty due to marshalling/serialisation; ii) much more coding complexity (you can't simply use events, delegates and methods as normal); and iii) all of this leads to more bugs and makes debugging more difficult. A sketch of this style follows below.
So if option [2] entices you, please try [1] first, and use that architecture until you actually hit the problems [2] solves. Never over-architect. Trust me, I built a [2]-style architecture back in university; it's fun, but in most cases it's overkill and will likely kill your project (by spending too much time on non-business functions).
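For reference, the AppDomain-based option [2] on the .NET Framework looks roughly like this (a sketch; the type, assembly and domain names are illustrative, and this API is not available on .NET Core):

using System;

// The plugin type must derive from MarshalByRefObject so that calls
// from the main domain are marshalled across the domain boundary.
public class PluginProxy : MarshalByRefObject
{
    public void DoWork() { /* plugin work here */ }
}

// Load, use, and unload the plugin in an isolated AppDomain:
AppDomain domain = AppDomain.CreateDomain("PluginDomain");
var proxy = (PluginProxy)domain.CreateInstanceAndUnwrap(
    "MyPluginAssembly",             // assembly name (illustrative)
    typeof(PluginProxy).FullName);  // type to instantiate
proxy.DoWork();                     // marshalled cross-domain call
AppDomain.Unload(domain);           // frees the plugin's memory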
You need to assume that your plugins only implement the interface(s) exposed. If you release a new version of your main program with a new interface, you will check whether your plugins support that interface. Therefore, if a new plugin is presented to an old version of the main program, it will either support the requested interface, or it will not and will fail the test as a valid plugin.
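That validity check could be as simple as this (a sketch; the file, type and interface names are illustrative):

using System;
using System.Reflection;

var assembly = Assembly.LoadFrom("MyPlugin.dll");
Type pluginType = assembly.GetType("MyPlugin.Plugin");

// Accept the plugin only if it implements the interface this build expects.
if (typeof(IPlugin).IsAssignableFrom(pluginType))
{
    var plugin = (IPlugin)Activator.CreateInstance(pluginType);
}
else
{
    // The plugin does not support the requested interface: reject it.
}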
Sorry if I am not clear enough, I've had a hard time writing this question.
I downloaded an open source software. I would like to expand its functionality, so I would like to create modules that encapsulate the new features; these modules would be .dll files.
I would like to have each one completely independent from the others: if I set a key to true in the config file and the DLL is present in the folder, the plugin should be loaded.
The problem is: how can I call the plugin dynamically (only calling a plugin that is actually enabled)?
If I referenced the plugin classes directly, I would have to reference the plugin dll, but I want to be able to run the core software without the plugin. Is there any design pattern or other mechanism that would allow me to load and use the DLL only if the plugin is enabled, and still make it possible to run the core software without it?
There are various ways to achieve this and I will describe one simple solution here.
Make a common interface that each plugin must implement in order to be integrated with core application. Here is an example:
// Interface which plugins must implement
public interface IPlugin
{
    void DoSomething(int Data);
}

// Custom plugin which implements the interface
public class Plugin : IPlugin
{
    public void DoSomething(int Data)
    {
        // Do something
    }
}
To actually load your plugin from the dll, you will need to use reflection, for example:

using System;
using System.Reflection;

// Load the plugin dll and create a plugin instance
var a = Assembly.LoadFrom("MyCustomPlugin.dll");
var t = a.GetType("MyCustomPlugin.Plugin");
var p = (IPlugin)Activator.CreateInstance(t);

// Use the plugin instance later
p.DoSomething(123);
You can use some kind of naming convention for your plugin assemblies and classes so that you can load them easily.
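Tying this back to the config-file switch in the question, a hedged sketch (the key name EnableMyCustomPlugin is an assumption):

using System;
using System.Configuration;
using System.IO;
using System.Reflection;

// Load the plugin only if its config key is true and the dll is present.
bool enabled = "true".Equals(
    ConfigurationManager.AppSettings["EnableMyCustomPlugin"],
    StringComparison.OrdinalIgnoreCase);

if (enabled && File.Exists("MyCustomPlugin.dll"))
{
    var asm = Assembly.LoadFrom("MyCustomPlugin.dll");
    var type = asm.GetType("MyCustomPlugin.Plugin");
    var plugin = (IPlugin)Activator.CreateInstance(type);
    plugin.DoSomething(123);
}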
You can use MEF.
The Managed Extensibility Framework (MEF) is a composition layer for .NET that improves the flexibility, maintainability and testability of large applications. MEF can be used for third-party plugin extensibility, or it can bring the benefits of a loosely-coupled plugin-like architecture to regular applications.
Here is the programming guide.
Plugins, or DLLs in .NET jargon, are called assemblies. Check out the Assembly.Load method, and also this guide on MSDN.
The System.Reflection namespace provides many tools that will help you with this scenario.
You can
inspect assemblies (DLL files) to examine the objects inside them,
find the types that you are looking for (specific classes, classes which implement specific interfaces, etc)
create new instances of those classes, and
invoke methods and access properties of those classes.
Typically you would write a class in the extension which does some work, create a method (e.g. DoWork()), and then invoke that method dynamically.
The MEF mentioned in this question does exactly this, just with a lot more framework.
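For example, scanning an extension assembly for classes exposing a DoWork() method and invoking it dynamically might look like this (a sketch; the file and method names are illustrative):

using System;
using System.Linq;
using System.Reflection;

var assembly = Assembly.LoadFrom("SomeExtension.dll");

// Inspect every concrete class in the assembly.
foreach (Type type in assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
{
    MethodInfo doWork = type.GetMethod("DoWork");
    if (doWork != null)
    {
        object instance = Activator.CreateInstance(type);
        doWork.Invoke(instance, null); // dynamic invocation
    }
}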
If you had to expose functionality externally as a DLL, but only a subset of that functionality (meaning you can't provide the core DLL, as it would expose everything), what is the best way to do this?
At the moment I can't really see any way of doing it that doesn't involve recreating parts of the core library in a separate DLL.
You could use internal along with Friend Assemblies. Your API can be a friend of the core library, allowing access to internal members.
See here for more details - http://msdn.microsoft.com/en-us/library/0tke9fxk(v=vs.90).aspx
This would allow you to keep your core objects internalised whilst allowing the API access to them.
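A sketch of the friend-assembly declaration (the assembly names are illustrative):

// In the core library, e.g. in AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyPublicApi")]

// This internal type is now visible to the MyPublicApi assembly only:
internal class CoreEngine
{
    internal int Compute() { return 42; }
}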
Note that you will STILL need to supply the core library. There's no way around this unless you use something to merge the .NET assemblies, or you compile the core code into your API library.
However, I think that is a bad idea and you should keep such entities separate. I don't see why it is an issue to ship more than one library these days.
FYI - ILMerge will let you merge .NET assemblies, you can get it from here - http://research.microsoft.com/en-us/people/mbarnett/ilmerge.aspx
Surely you can do this by just creating a new project that wraps the core DLL, exposing only the methods you want exposed, each of which acts more or less as a "pass-through" to the same method in the core?
So if your core is called Core :)
it might have:
public int Foo()
{
    //blah
}

public int Bar()
{
    //blah
}
and if you want to only expose Foo, then you create a new project which references Core, and looks like this:
using Core;
public class MyApi
{
    private Core _coreInstance = new Core(); // some way of reaching Core, in other words

    public int Foo()
    {
        return _coreInstance.Foo();
    }
}
An advantage of creating a separate assembly here is that you are then treating your core functionality as one concept, and the exposure of it publicly (to a particular purpose or audience) as another. You may very well want to expose "publicly" different functionality at a later stage, but to a different audience - you now have 2 different public APIs required: therefore any notion of what was "public" in your core assembly is now potentially ambiguous.
I think it depends on what you aim to achieve by hiding the core libraries.
If you don't want to allow your customers to CALL the code - for example, because calling it directly might break the usage scenarios of your libraries or cause undesirable behavior - you can make the protected classes internal and use InternalsVisibleToAttribute to grant access to the facade assembly. I would even use one more build configuration if I still needed the core classes to be visible in my own applications:
#if PUBLIC_BUILD
internal
#else
public
#endif
class ProtectedCoreClass
Of course, if you have many classes, a script should be prepared to change the existing classes, and Visual Studio's new-class template should be modified.
Another case is if you want to prevent your source code from being INSPECTED by your customers, in order to hide some unique algorithms or the like. Then you should look into a code obfuscator. But there is absolutely no way to 100% guarantee that code cannot be decompiled and analyzed; it's only a question of the price crackers or competitors pay to do it.
And if HIDING the source code is still extremely important, you should probably just host your code on your own servers (to make sure the code is physically inaccessible) or in the cloud, and provide a WCF or web service that your exposed assembly will call.
Well, I have a project, and at the moment I am using .NET 4.0, because I would like this application to be compatible with Windows XP, and EF 5.0 is only for Windows 7 and above.
However, I would like to implement some parts of the application with the features of .NET 4.5, such as EF 5.0.
So, for my database access, I have a repository class that currently uses EF 4.0. This is an independent dll, so I can create another repository dll that uses EF 5, import both dlls in my project, and then instantiate the correct repository according to the version of EF that I can use. This is a parameter in the config file. Is this the best way?
I ask this because I don't know where I must declare my interface. My repository classes need to implement this interface, but that ties my dlls to my application, and I need to use these repositories in two different applications. I want to implement them once and use them in many applications: independent dlls, because right now there are two applications, but in the future there can be more.
The reason for wanting an interface in the application that consumes the repositories is that I would like to instantiate the correct repository at runtime, according to the config file settings. That way, in the future I can implement new repositories and there is no need to change the code.
EDIT1: I read about multi-targeting, but if in my project I use features of, for example, .NET 4.0 and I want to compile for 3.5, I get an error because the feature does not exist in 3.5. That's correct; so is the only way to maintain two different projects? That would be double work.
Thanks.
Daimroc.
So, for my database access, I have a repository class that currently uses EF 4.0. This is an independent dll, so I can create another repository dll that uses EF 5, import both dlls in my project, and then instantiate the correct repository according to the version of EF that I can use. This is a parameter in the config file. Is this the best way?
You can go this route, and I don't really see an issue with it unless you think it could cause maintenance/development headaches in the future. There are a couple of other things that you can look into doing; I think both are completely valid, and the choice is probably just personal opinion/preference.
Modules: You can go a modular route where your repository DLLs are loaded dynamically. Look into Microsoft's Prism library (built on the Unity container). It lets you create an IModule in each of your repository DLLs that will set up your application as needed. Then just create a UnityBootstrapper class to tell it how to find your modules (add them manually, look in a directory, etc.). This should allow you to hot-swap your repository DLLs without even having to set a config file entry if you don't want one.
Preprocessor directives: With preprocessor directives you define how your code compiles. Depending on how your classes are structured, this may be fairly simple to set up, or a complete nightmare that makes you want to abstract and refactor your classes. This question: Detect target framework version at compile time has an answer for handling different compile results depending on the target framework. Personally, though, I like the modular route.
I ask this because I don't know where I must declare my interface. My repository classes need to implement this interface, but that ties my dlls to my application, and I need to use these repositories in two different applications. I want to implement them once and use them in many applications: independent dlls, because right now there are two applications, but in the future there can be more.
The reason for wanting an interface in the application that consumes the repositories is that I would like to instantiate the correct repository at runtime, according to the config file settings. That way, in the future I can implement new repositories and there is no need to change the code.
It sounds like you need to create another library that is used to communicate between your UI and your repository libraries. This can be a little tricky and overwhelming to set up just right. Basically, you want your gateway DLL to house the interfaces and business objects. Your application would reference this DLL, and this DLL would reference your repositories.
Depending on your needs, you may actually need to set up another intermediary DLL housing just your interfaces and most basic utility classes. This would allow your EF objects to implement the same interface that your application is using, without your gateway DLL having to map business objects and EF objects back and forth.
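A rough sketch of the config-driven instantiation (the interface lives in the shared/gateway DLL; the key and type names are illustrative):

using System;
using System.Configuration;

// In the shared "contracts" DLL, referenced by every application:
public interface IRepository
{
    void Save(object entity);
}

// In the application: pick the implementation named in the config file, e.g.
// <add key="RepositoryType" value="Repositories.Ef5.Repository, Repositories.Ef5" />
string typeName = ConfigurationManager.AppSettings["RepositoryType"];
var repository = (IRepository)Activator.CreateInstance(Type.GetType(typeName));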
EDIT1: I read about multi-targeting, but if in my project I use features of, for example, .NET 4.0 and I want to compile for 3.5, I get an error because the feature does not exist in 3.5. That's correct; so is the only way to maintain two different projects? That would be double work.
I believe you can get around this by using the preprocessor directives I mentioned above. Below is an example of making a method behave differently depending on whether the target framework is .NET 2.0; it's just an example and not tested. The DefineConstants will need to be set up in the project configuration, but this should let you keep one project for multiple framework targets while still using newer .NET features where they are available.
public Person FindPersonByName(List<Person> people, string name)
{
#if DOTNET_20
    // .NET 2.0 build: LINQ is unavailable, so iterate manually.
    foreach (Person person in people)
    {
        if (person.Name == name)
            return person;
    }
    return null;
#else
    // Newer frameworks: use LINQ.
    return people.FirstOrDefault(p => p.Name == name);
#endif
}
I hope this was helpful and the best of luck in finding the right solution.
I've recently become a heavy user of Autofac's OwnedInstances feature. For example, I use it to provide a factory for creating a Unit of Work for my database, which means my classes which depend on the UnitOfWork factory are asking for objects of type:
Func<Owned<IUnitOfWork>>
This is incredibly useful--great for keeping IDisposable out of my interfaces--but it comes with a price: since Owned<> is part of the Autofac assembly, I have to reference Autofac in each of my projects that knows about Owned<>, and put "using Autofac.Features.OwnedInstances" in every code file.
Func<> has the great benefit of being built into the .NET framework, so I have no doubts that it's fine to use Func as a universal factory wrapper. But Owned<> is in the Autofac assembly, and every time I use it I'm creating a hard reference to Autofac (even when my only reference to Autofac is an Owned<> type in an interface method argument).
My question is: is this a bad thing? Will this start to bite me back in some way that I'm not yet taking into account? Sometimes I'll have a project which is referenced by many other projects, and so naturally I need to keep its dependencies as close as possible to zero; am I doing evil by passing a Func<Owned<IUnitOfWork>> (which is effectively a database transaction provider) into methods in these interfaces (which would otherwise be autofac-agnostic)?
Perhaps if Owned<> was a built-in .NET type, this whole dilemma would go away? (Should I even hold my breath for that to happen?)
I agree with @steinar; I would consider Autofac yet another 3rd-party dll that supports your project. Your system depends on it, so why restrict yourself from referencing it? I would be more concerned if ILifetimeScope or IComponentContext were sprinkled around your code.
That said, I feel your concern. After all, a DI container should work behind the scenes and not "spill" into the code. But we can easily create a wrapper and an interface to hide even Owned<T>. Consider the following interface and implementation:
using System;
using Autofac.Features.OwnedInstances;
using Autofac.Util; // provides the Disposable base class used below

public interface IOwned<out T> : IDisposable
{
    T Value { get; }
}

public class OwnedWrapper<T> : Disposable, IOwned<T>
{
    private readonly Owned<T> _ownedValue;

    public OwnedWrapper(Owned<T> ownedValue)
    {
        _ownedValue = ownedValue;
    }

    public T Value { get { return _ownedValue.Value; } }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            _ownedValue.Dispose();
    }
}
The registration can be done either using a registration source or a builder, e.g. like this:

var cb = new ContainerBuilder();
cb.RegisterGeneric(typeof(OwnedWrapper<>)).As(typeof(IOwned<>)).ExternallyOwned();
cb.RegisterType<SomeService>();
var c = cb.Build();
You can now resolve as usual:
using (var myOwned = c.Resolve<IOwned<SomeService>>())
{
    var service = myOwned.Value;
}
You could place this interface in a common namespace in your system for easy inclusion.
Both the Owned<T> and OwnedWrapper<T> are now hidden from your code, only IOwned<T> is exposed. Should requirements change and you need to replace Autofac with another DI container there's a lot less friction with this approach.
I would say that it's fine to reference a well-defined set of core 3rd-party DLLs in every project of an "enterprise application" solution (or any application that needs flexibility). I see nothing wrong with having a dependency on at least the following in every project that needs it:
A logging framework (e.g. log4net)
Some IoC container (e.g. Autofac)
The fact that these aren't part of the core .NET Framework shouldn't stop us from using them liberally.
The only possible negatives I can see are relatively minor compared to the possible benefits:
This may make the application harder to understand for the average programmer
You could have version compatibility problems in the future which you wouldn't encounter if you were just using the .NET framework
There is an obvious but minor overhead with adding all of these references to every solution
Perhaps if Owned<> was a built-in .NET type, this whole dilemma would go away? (Should I even hold my breath for that to happen?)
It will become a built-in .NET type: ExportLifetimeContext<T>. Despite the name, this class isn't really bound to .NET's ExportFactory<T>. The constructor simply takes a value, and an Action to invoke when the lifetime of that value is disposed.
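The shape of that API is roughly this (a sketch; unitOfWork stands in for any disposable value you want to hand out):

using System.ComponentModel.Composition;

// Pairs a value with the action that releases it.
var context = new ExportLifetimeContext<IUnitOfWork>(
    unitOfWork,
    () => unitOfWork.Dispose()); // invoked when context.Dispose() is called

IUnitOfWork value = context.Value;
context.Dispose(); // runs the dispose action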
For now, it is only available in Silverlight though. For the regular .NET framework you'll have to wait until .NET 4.x (or whatever the next version after 4.0 will be).
I don't think referencing the Autofac assembly is the real problem - I consider things like Owned<T> appearing in application code a 'code smell'. Application code shouldn't care about which DI framework is being used, and having Owned<T> in your code creates a hard dependency on Autofac. All DI-related code should be cleanly contained in a set of configuration classes (Modules, in the Autofac world).