I'm developing an application that heavily relies on a plugin architecture (*).
However, I'm not sure what design pattern to use for dependencies between plugins, e.g. when plugin A depends on plugin B, possibly with some constraints (plugin B version between v1.05 and v1.30, say).
My thoughts so far:
I could specify an interface for plugin B that never changes, and have plugin A reference this interface project only. Plugin B is then free to implement this in whatever way with versioning, and the latest available implementation will just be dependency-injected into the requested interfaces.
This could work, but defining an interface that is tailored so closely to one specific plugin's functions seems a bit unnecessary; plus I suppose I'd then be stuck with that interface: in future versions I could easily enhance the plugin's implementation, but not the interface itself.
I could ignore interfaces and just develop the plugins' implementations. Plugin A's project could then directly reference Plugin B's .dll. But as far as I know, this would cause errors when replacing Plugin B's .dll with a newer version, unless I add explicit assembly binding redirects to my application's config, wouldn't it?
Are there any best practices? I suppose this issue is very similar to NuGet packages' dependencies - does anyone happen to know how they have solved it?
Thanks
(*) in case it matters, my plugin architecture works as follows: I have all my plugins implement an interface IPlugin.
My main app then scans the plugin directory for all .dlls, picks out every class that implements IPlugin, and uses Ninject to add a binding from IPlugin to that specific implementation (in the end there'll be several bindings available for IPlugin, e.g. IPlugin -> Plugin1, IPlugin -> Plugin2, etc.). I then use Ninject to request/create a singleton instance of each plugin and register it in my main app. That way, my plugins can "request" dependencies via constructor arguments and Ninject/DI takes care of providing them.
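Roughly, that discovery and binding step looks like the following sketch (the "Plugins" folder name and the RegisterPlugin call are illustrative, and the usual Ninject / System.IO / System.Linq / System.Reflection usings are assumed):

var kernel = new StandardKernel();
foreach (var file in Directory.GetFiles("Plugins", "*.dll"))
{
    var assembly = Assembly.LoadFrom(file);
    foreach (var type in assembly.GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && typeof(IPlugin).IsAssignableFrom(t)))
    {
        // One binding per implementation, so GetAll<IPlugin>() returns them all.
        kernel.Bind<IPlugin>().To(type).InSingletonScope();
    }
}
foreach (var plugin in kernel.GetAll<IPlugin>())
{
    RegisterPlugin(plugin);   // hypothetical registration hook in the main app
}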
As far as I am aware, NuGet tracks library dependencies using the metadata stored in the package's .nuspec manifest. If I were you I'd avoid implementing arbitrary restrictions. What if one of your plugin developers wants to create a shared support library of useful classes, for example?
To my mind, a plugin should be a black box of functionality. If a plugin needs another plugin, then they should communicate via a standardized messaging platform rather than directly.
That said, you could always scrape all interface implementations from the library you load and hook those up as well as your plugins. That way the plugin developer can "request" implementations of those interfaces as well as plugins.
You'll need to cope with massive class libraries (I recommend only hooking up in Ninject the interfaces that are actually referenced in plugin constructors; see the sketch below) and with potential conflicts (two plugins might expect separate implementations of the same interface). That conflict risk is the main reason I believe a plugin should take care of itself internally, rather than hoping its design-time expectations are fulfilled by the external plugin manager.
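A rough sketch of that "only bind what the constructors ask for" idea (hypothetical variable names; it assumes you already have the plugin types and the full set of scanned types in hand):

foreach (var pluginType in pluginTypes)
{
    var wantedInterfaces = pluginType.GetConstructors()
        .SelectMany(c => c.GetParameters())
        .Select(p => p.ParameterType)
        .Where(t => t.IsInterface && t != typeof(IPlugin))
        .Distinct();

    foreach (var serviceType in wantedInterfaces)
    {
        if (kernel.GetBindings(serviceType).Any())
            continue;   // already bound; conflicting expectations would surface here

        var implementation = allScannedTypes.FirstOrDefault(
            t => t.IsClass && !t.IsAbstract && serviceType.IsAssignableFrom(t));
        if (implementation != null)
            kernel.Bind(serviceType).To(implementation).InSingletonScope();
    }
}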
And in answer to (2): as long as the methods and properties you reference don't change name or signature, you shouldn't have any problems using a newer version of DLL B with DLL A. If you change a return type, change a public field (which shouldn't exist in the first place) to a public property, change the parameters of a method, or do anything of that nature to a class from DLL B that DLL A uses, then a recompile of A would be required.
Related
How to version abstractions in .Net when applying Dependency Inversion in a high code-reuse environment
I am interested in shifting toward using Dependency Inversion in .Net, but have come across something that puzzles me.
I don’t believe it is tied to a particular method or provider of DIP; it's more a fundamental issue that perhaps others have solved. The issue I'm solving for is best laid out step by step in the scenario below.
Assumption / Restriction
A considerable assumption or restriction to put out there up front is that my development team has stuck with a rule of keeping our deployed assemblies to one and only one Assembly Version, specifically version “1.0.0.0”.
Thus far, we have not supported having more than this one Assembly Version of any given assembly we’ve developed deployed on a server, for the sake of simplicity. This may be limiting, and there may be many good reasons to move away from it, but nevertheless it is currently a rule we work with. So with this practice in mind, continue below.
Scenario
You have an IDoStuff interface with 2 methods, contained in an abstraction assembly Stuff.Abstractions.dll.
You compile component A.dll with a class explicitly implementing IDoStuff with its 2 methods.
You move A.dll to production use, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.0.
You move Stuff.Abstractions.dll to production use, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.0.
Everything works fine. Time passes by.
You add another method (“DoMoreStuff” for example) to the IDoStuff interface so that a different Component B can call it.
(Keeping Interface Segregation OO principle in mind, let’s say the DoMoreStuff method makes sense to be in this relatively small IDoStuff interface.)
You now have IDoStuff with 3 methods in Stuff.Abstractions.dll, and you’ve built Component B to use the new 3rd method.
You move Stuff.Abstractions.dll to production use (upgrade it), Assembly Version 1.0.0.0, Assembly File Version 1.0.0.1.
(note that the file version is incremented, but the assembly version, and therefore the strong name, stays the same)
You move B.dll to production use, Assembly Version 1.0.0.0, Assembly File version 1.0.0.17.
You don’t do a thing to A.dll. You figure there are no changes needed at this time.
Now you call code that attempts to execute A.dll on the same production server where it had been working before. At runtime the Dependency Inversion framework resolves the IDoStuff interface to a class inside A.dll and tries to create it.
The problem is that the class in A.dll implemented the now-extinct 2-method IDoStuff interface. As one might expect, you will get an exception like this one:
Method ‘DoMoreStuff’ in type ‘the IDoStuff Class inside A.dll’ from assembly ‘strong name of assembly A.dll’ does not have an implementation.
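To make the mismatch concrete (all member and class names other than DoMoreStuff are made up for illustration; the two snippets represent different compilation snapshots):

// Stuff.Abstractions.dll after the upgrade (still Assembly Version 1.0.0.0)
public interface IDoStuff
{
    void DoStuff();        // original member (name made up)
    void DoOtherStuff();   // original member (name made up)
    void DoMoreStuff();    // the newly added third method
}

// Inside A.dll, which was compiled against the old 2-method IDoStuff
public class StuffDoer : IDoStuff   // class name made up
{
    public void DoStuff() { }
    public void DoOtherStuff() { }
    // No DoMoreStuff() here, so the runtime refuses to load StuffDoer
    // against the upgraded abstraction assembly.
}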
I can think of two ways to deal with this scenario whenever I have to add a method to an existing interface:
1) Update every functionality-providing assembly that uses Stuff.Abstractions.dll to have an implementation of the new ‘DoMoreStuff’ method.
This is doing things the hard way, but brute force would painfully work.
2) Bend the Assumption / Restriction stated above and start allowing more than one Assembly Version to exist (at least for abstraction definition assemblies).
This would be a bit different, and make for a few more assemblies on our servers, but it should allow for the following end state:
A.dll depends on stuff.abstractions.dll, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.22 (AFV doesn’t matter other than identifying the build)
B.dll depends on stuff.abstractions.dll, Assembly Version 1.0.0.1, Assembly File Version 1.0.0.23 (AFV doesn’t matter other than identifying the build)
Both happily able to execute on the same server.
If both versions of stuff.abstractions.dll are installed on the server, then everything should get along fine. A.dll should not need to be altered either. Whenever it needs mods next, you’d have the option to implement a stub and upgrade the interface, or do nothing. Perhaps it would be better to keep it down to the 2 methods it had access to in the first place if it only ever needed them.
As a side benefit, we’d know that anything referencing stuff.abstractions.dll, version 1.0.0.0 only has access to the 2 interface methods, whereas users of 1.0.0.1 have access to 3 methods.
Is there a better way or an accepted deployment pattern for versioning abstractions?
Are there better ways to deal with versioning abstractions if you’re trying to implement a Dependency Inversion scheme in .Net?
Where you have one monolithic application, it seems simple since it’s all contained – just update the interface users and implementers.
The particular scenario I’m trying to solve for is a high code-reuse environment where you have lots of components that depend on lots of components. Dependency Inversion will really help break things up and make Unit Testing feel a lot less like System Testing (due to layers of tight coupling).
Part of the problem may be that you're depending directly on interfaces which were designed with a broader purpose in mind. You can mitigate the problem by having your classes depend on abstractions which were created for them.
If you define interfaces as needed to represent the dependencies of your classes rather than depending on external interfaces, you'll never have to worry about implementing interface members that you don't need.
Suppose I'm writing a class that involves an order shipment, and I realize that I'm going to need to validate the address. I might have a library or a service that performs such validations. But I wouldn't necessarily want to just inject that interface right into my class, because now my class has an outward-facing dependency. If that interface grows, I'm potentially violating the Interface Segregation Principle by depending on an interface I don't use.
Instead, I might stop and write an interface:
public interface IAddressValidator
{
ValidationResult ValidateAddress(Address address);
}
I inject that interface into my class and keep writing my class, deferring writing an implementation until later.
Then it comes time to implement that class, and that's when I can bring in my other service which was designed with a broader intent than just to service this one class, and adapt it to my interface.
public class MyOtherServiceAddressValidator : IAddressValidator
{
private readonly IOtherServiceInterface _otherService;
public MyOtherServiceAddressValidator(IOtherServiceInterface otherService)
{
_otherService = otherService;
}
public ValidationResult ValidateAddress(Address address)
{
    // Adapt the address to whatever input the other service requires,
    // and adapt its response into the ValidationResult I want to return.
    // (Validate and IsValid are placeholder members, used only to complete the sketch.)
    var response = _otherService.Validate(address);
    return new ValidationResult(response.IsValid);
}
}
IAddressValidator exists because I defined it to do what I need for my class, so I never have to worry about having to implement interface members that I don't need. There won't ever be any.
There's always the option to version the interfaces; e.g., if there is
public interface IDoStuff
{
void GoFirst();
void GoSecond();
}
There could then be
public interface IDoStuffV2 : IDoStuff
{
void GoThird();
}
Then ComponentA can reference IDoStuff and ComponentB can be written against IDoStuffV2. Some people frown on interface inheritance, but I don't see any other way to easily version interfaces.
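A small usage sketch of how a consumer copes when it is only handed an IDoStuff reference (the plugin variable is assumed to come from whatever loads your components):

// ComponentB: prefer the V2 contract when available, fall back otherwise.
if (plugin is IDoStuffV2 v2)
{
    v2.GoThird();
}
else
{
    plugin.GoFirst();   // only the original members are available
}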
I have a PCL that I want to contain a bunch of base classes, so I do not have to recreate them for each project. Now I am contemplating adding in Facebook, but then I would have to reference an external dll each time I want to use my PCL in a project, even a project with only a few screens, because I would get build errors if I don't.
For those saying that's not an issue: I am planning on adding even more external dlls that I won't need every time.
How can I solve this? I want to include the code to use this dll in my PCL, but I don't want to be forced to include the dll each time I use the PCL.
The problem here is that you probably want to use types from the external library in your code, and you can't do that without referencing the library.
A way around this problem is to use reflection, but your code will become much more complex and you'll wish you hadn't.
Another solution is to:
Create an interface for each external dependency in your "common PCL" (e.g. ISocialMediaPlatform for Facebook).
Create a new PCL for each external dependency, one that references both your "common PCL" and the external library and has a class that implements one of these interfaces (e.g. FacebookSocialMediaPlatform : ISocialMediaPlatform)
This implementation can then reference the external dependency and use its types directly
Inject the implementation of each interface into your "common PCL" using reflection or a Dependency Injection framework
This does add another layer of complexity, but as a side effect it also makes your common PCL code testable.
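A bare-bones sketch of the first two steps (the Post member is made up; the point is that the common PCL never references the Facebook dll):

// In the common PCL: the abstraction, with no reference to the Facebook library.
public interface ISocialMediaPlatform
{
    void Post(string message);   // made-up member
}

// In a separate PCL that references both the common PCL and the Facebook library.
public class FacebookSocialMediaPlatform : ISocialMediaPlatform
{
    public void Post(string message)
    {
        // Call the Facebook library here; the external dependency never
        // leaks into the common PCL.
    }
}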
Finally, the solution I personally would prefer, is to not have a huge "common PCL" at all, but to split it into a few smaller ones that fulfill one specific role.
I’m a C++ guy who has to work with some C# projects, hence this question. I have two projects, hosted on different svn servers, that need to share interface classes. How should this be solved in C#?
For example, I have a .cs file which contains an interface and a class used to pass data to that interface, i.e.:
public class data
{
    public int a;
    public int b;
}

public interface Ifoo
{
    int foo(data d);
}
This interface is implemented in ProjectA and used by ProjectB.
I want to be able to choose the implementation of the interface, so that in tests of ProjectB I can use a special implementation of the Ifoo interface, choosing a different dll using:
Assembly assembly = Assembly.LoadFrom(asm_name);
fooer = assembly.CreateInstance(class_name) as Ifoo;
Where should I place the Ifoo interface?
I thought it should be placed in ProjectA's svn repo (as ProjectA is the owner of the interface) and then checked out as an svn external alongside the checkout of ProjectB.
Can you tell me what the rule of thumb is in such a case?
BR
Krzysztof
First of all, wherever you decide to put your interface and associated data class (project A's svn, project B's, or a new one), the first (and quite obvious) recommendation is to put them together in their own library (DLL), without any dependency on other objects, so that it becomes easy to share them across different projects.
To use it in a different project (no matter whether that project is in another svn repository or not), you will have to give that project physical access to this interface/data class. With the interface/data class in its own dll and without the constraint of requiring other objects, it's a simple matter of adding a reference to the library in the project.
With local copies of both projects, you don't need to copy the library itself into the other project.
In any case, you have to think your interface and data through carefully, so that you do not constantly make changes to them, in order to avoid compatibility problems between the projects. If you need to "add" something to the interface because of new features, create a new interface instead (and put it in another DLL), as sketched below. This way you maintain compatibility with other projects that do not implement the new features.
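For example (the assembly names and the IfooV2/fooEx members are made up for illustration), the original contract stays frozen and additions go into a new interface in a new dll:

// Shared.Contracts.dll - published once and then left alone
public interface Ifoo
{
    int foo(data d);
}

// Shared.Contracts.V2.dll - new features go here
public interface IfooV2 : Ifoo
{
    int fooEx(data d);   // made-up new member
}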
If the data associated with the interface is so specific that any class implementing this interface will be used ONLY BY project A, then the obvious place to put the DLL is in project A. Usually this is the case when a piece of software has the ability to use plugins: the interfaces live in a dll that can be provided "publicly" to plugin developers who do not have access to the main project itself, which can be as simple as making the DLL available for download. Since the SAME dll is used by both the main project and the plugins, there will be no problems (hence the reason not to change it).
But if your interface is more "generic" and is used to create something like a framework, where different, unrelated projects can use it on their own, then the suggestion to separate it into a third project (with its own svn) is more interesting. With good policies regarding the development of this interface, maintaining the framework will be less problematic.
In the comments you said you can relate the "interface" to project A, but if you can use it in project B without project A being involved, then you can relate the interface to project B as well, and so the option of moving the interface/associated data to a separate project is preferable.
In any case, the underlying implementation is irrelevant, as the main reason we use interfaces in C# is exactly to be able to use an object in a "generic way" without (necessarily) having to care about how it is implemented.
I'm currently working on a C# product that will use a plugin-type system. This isn't anything new and I have seen much info around about how to use an interface to implement this functionality quite easily.
I've also seen methods to implement backwards compatibility by updating the interface name, e.g.: Interface change between versions - how to manage?
There are multiple scenarios which I can foresee with our product in regards to version mismatches between the main exe and the plugin.
Main Program same version as plugin
Main Program newer than plugin
Main Program older than plugin
From the info I've been able to gather, 1 & 2 work just fine. But I haven't been able to figure out how to correctly implement "forward" compatibility (3).
It is our intention to only ADD methods to the plugin API.
Any ideas would be a great help.
Isolated PluginAPI DLL
First, your PluginAPI (containing the interfaces) should be a separate DLL from your main application. Your main application will reference the PluginAPI, and each plugin will reference the PluginAPI. You're most likely already doing this.
Interface Versioning
Second, structurally, you should create a new interface each time you add a new property or method.
For example:
Version 1: Plugins.IPerson
Version 2: Plugins.V2.IPerson : Plugins.IPerson
Version 3: Plugins.V3.IPerson : Plugins.V2.IPerson
In rare cases where you decide to remove or completely redesign your API, for example:
Version 4: Plugins.V4.IPerson //Without any Interface inheritance
Isolated PluginAPI DLL Versioning
Finally, I am not 100% sure how versioning of the PluginAPI .dll will go even with this structural architecture of Interface versioning. It may work
OR
You may need to have matching dlls for each version (each referencing the previous version(s)). We will assume that this is the case.
Solution for case 3
So let's now take your case [3], main program older than plugin:
The Person plugin implements Plugins.V2.IPerson and references the V3 .dll (just to make it interesting).
Main Program references the V1 .dll
The plugin folder will contain the V2 and V3 plugin .dlls
The main app folder will only contain the V1 plugin .dll (among other files)
The Main App will find and load the Person plugin and reference it through the V1 definition of the IPerson interface
Of course, only V1 methods and properties will be accessible from the plugin to the Main App
(Additional methods will be accessible through reflection - not that you would want to)
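In code, the Main App side of that looks roughly like this sketch (type discovery omitted; discoveredPluginType and DoV1Thing are made-up names; the point is that the host only ever talks to the V1 contract):

// Main App, compiled against the V1 PluginAPI only.
object instance = Activator.CreateInstance(discoveredPluginType);   // a V2/V3 implementation
var person = instance as Plugins.IPerson;                           // the V1 view of it
if (person != null)
{
    // Only V1 members are visible here, even though the plugin implements more.
    person.DoV1Thing();   // made-up V1 member
}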
Bonus Update
When you might use plugins
Third-parties extending your system. Source code would be better if that's an option, or if it's web-based, redirect to their URL. This is a dream for many software projects, but you should wait until you have an interested third-party partner before doing the extra work to build the plugin framework.
User Editable "Scripts". You should not build your own scripting language; instead you should compile the user's C# code against a restrictive interface, in an AppDomain that is itself very restrictive (disabling reflection and the like).
Security grouping - Your core software might use trusted platform calls. Riskier modules can be separated into another library and optionally excluded by end-users.
When not to use Plugins
I am an advocate for less-is-more. Don't overengineer. If you are building modular software that's great, use classes and namespaces (don't get carried away with interfaces). "Modular" means you are striving to adhere to SOLID principles, but that doesn't mean you need Plugin architecture. Even inversion of control is overkill in many situations.
If you plan to open to third-parties in the future, don't make it a plugin architecture to start with. You can build out a plugin framework later in stages: i) derive interfaces; ii) define your plugins with interfaces within the same project; iii) load your internal plugins with a plugin loader class; iv) finally, you can implement an external library loader. Each of these 4 steps leave you with a working system on their own and move you toward a finished plugin system.
Hot Swappable Plugins
When designing a plugin architecture, you may be interested to know that you can make plugins hot swappable:
Without Freeing Memory - Just keep loading the new plugin. This is usually fine, unless it's server software which you expect i) to run for a very long time without restarting, AND ii) to see many plugin changes and upgrades during that time. When you load a plugin at runtime, its assembly is loaded into memory and cannot be unloaded. See [2] for why.
With Freeing Memory - You can unload an AppDomain. An AppDomain runs in the same process but is reference-isolated: you can't reference or call its objects directly; instead, calls must be marshalled and data must be serialised between appdomains. The added complexity is not worth it if you're not going to change plugins often: there is i) a performance penalty due to marshalling/serialisation, ii) much more coding complexity (you can't simply use events, delegates, and methods as normal), and iii) all of this leads to more bugs and makes debugging harder. A minimal sketch of this route follows below.
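For completeness, a minimal sketch of the AppDomain route (classic .NET Framework API; the assembly and type names are made up, IPlugin stands for whatever contract your PluginAPI dll defines, and the plugin class must inherit MarshalByRefObject):

// Load the plugin into its own AppDomain so it can be unloaded later.
AppDomain pluginDomain = AppDomain.CreateDomain("PluginDomain");
var plugin = (IPlugin)pluginDomain.CreateInstanceAndUnwrap(
    "MyCustomPlugin",           // assembly name (made up)
    "MyCustomPlugin.Plugin");   // type name (made up)

plugin.DoSomething(123);        // made-up member; calls are marshalled across the domain boundary

AppDomain.Unload(pluginDomain); // frees the assemblies loaded in that domain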
So if option [2] entices you, please try [1] first, and use that architecture until you actually hit the problems that justify [2]. Never over-architect. Trust me, I built a [2] architecture back at university; it's fun, but in most cases it's overkill and will likely kill your project (you end up spending too much time on non-business functions).
You need to assume that your plugins only implement the interface(s) exposed. If you release a new version of your main program with a new interface, you will check whether your plugins support that interface. Therefore, if a new plugin is presented to an old version of the main program, it will either support the requested interface or it will not, in which case it fails the test as a valid plugin.
Sorry if I am not clear enough, I've had a hard time writing this question.
I downloaded an open-source application. I would like to expand its functionality, so I would like to create modules that encapsulate that functionality; these modules would be .dll files.
I would like each module to be completely independent from the others: if I set a key to true in the config file and the DLL is present in the folder, the plugin should be loaded.
The problem is: how can I call into the plugin dynamically, so that the call is only made when the plugin is applied?
If I reference the plugin classes directly, I would have to reference the plugin dll, but I want to be able to run the core software without the plugin. Is there any design pattern or other mechanism that would allow me to load and use the DLL only if the plugin is applied and still be possible to run the core software without the plugin?
There are various ways to achieve this and I will describe one simple solution here.
Make a common interface that each plugin must implement in order to be integrated with the core application. Here is an example:
// Interface which plugins must implement
public interface IPlugin
{
void DoSomething(int Data);
}
// Custom plugin which implements interface
public class Plugin : IPlugin
{
public void DoSomething(int Data)
{
// Do something
}
}
To actually load your plugin from dll, you will need to use reflection, for example:
// Load plugin dll and create plugin instance
var a = Assembly.LoadFrom("MyCustomPlugin.dll");
var t = a.GetType("MyCustomPlugin.Plugin");
var p = (IPlugin)Activator.CreateInstance(t);
// Use plugin instance later
p.DoSomething(123);
You can use some kind of naming convention for your plugin assemblies and classes so that you can load them easily, for example:
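A simple sketch (the "*.Plugin.dll" convention and the "Plugins" folder name are made up, it reuses the IPlugin interface from above, and System.IO/System.Linq/System.Reflection usings are assumed):

// Load every assembly that follows the naming convention and instantiate
// each IPlugin implementation found inside it.
foreach (var file in Directory.GetFiles("Plugins", "*.Plugin.dll"))
{
    var assembly = Assembly.LoadFrom(file);
    foreach (var type in assembly.GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && typeof(IPlugin).IsAssignableFrom(t)))
    {
        var plugin = (IPlugin)Activator.CreateInstance(type);
        plugin.DoSomething(123);
    }
}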
You can use MEF.
The Managed Extensibility Framework (MEF) is a composition layer for .NET that improves the flexibility, maintainability and testability of large applications. MEF can be used for third-party plugin extensibility, or it can bring the benefits of a loosely-coupled plugin-like architecture to regular applications.
Here is the programming guide.
Plugins (DLLs) are called assemblies in .NET jargon. Check out the Assembly.Load method, and also this guide on MSDN.
The System.Reflection namespace provides many tools that will help you with this scenario.
You can
inspect assemblies (DLL files) to examine the objects inside them,
find the types that you are looking for (specific classes, classes which implement specific interfaces, etc)
create new instances of those classes, and
invoke methods and access properties of those classes.
Typically you would write a class in the extension which does some work, create a method (e.g. DoWork()), and then invoke that method dynamically.
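A condensed sketch of that flow (the file name and the DoWork convention are placeholders; System.Linq and System.Reflection usings are assumed):

// Inspect the extension dll, find a suitable type, and invoke DoWork() dynamically.
var assembly = Assembly.LoadFrom("MyExtension.dll");                 // placeholder file name
var workerType = assembly.GetTypes()
    .First(t => t.IsClass && !t.IsAbstract && t.GetMethod("DoWork") != null);
object worker = Activator.CreateInstance(workerType);
workerType.GetMethod("DoWork").Invoke(worker, null);                 // late-bound call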
The MEF mentioned in this question does exactly this, just with a lot more framework.