I have developed a web application in .NET 4.5, and now a customer has asked me for a customization of some modules of the application (for example, a different implementation of invoices).
My question is: can I "intercept" this customer so that a customized assembly is loaded for them, while the general assembly is loaded for every other customer?
Can I do this simply with reflection?
The key idea is to design the software so that its parts can be easily replaced. You should separate your solution into multiple projects, so that you can quickly swap different implementations of your interfaces.
Furthermore, there's a technique called Dependency Injection, which basically means you can inject a different implementation depending on your needs, either at runtime or via a config file, for instance. For ease of use there are frameworks already prepared for you, like Ninject or Unity.
The application needs a solid architecture to support such possibilities. If you had provided more information about your system I could have been more specific, but I believe doing some research on dependency injection will give you a good start.
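For example, a minimal sketch of config-driven selection (the IInvoiceService contract, the two implementations and the UseCustomInvoices app setting are invented names, not something from your project) could look like this:
using System.Configuration; // reference System.Configuration.dll

public interface IInvoiceService
{
    string CreateInvoice(decimal amount);
}

public class DefaultInvoiceService : IInvoiceService
{
    public string CreateInvoice(decimal amount)
    {
        return "Standard invoice: " + amount;
    }
}

public class CustomInvoiceService : IInvoiceService
{
    public string CreateInvoice(decimal amount)
    {
        return "Customer-specific invoice: " + amount;
    }
}

public static class InvoiceServiceFactory
{
    // Reads a hypothetical <add key="UseCustomInvoices" value="true" /> app setting.
    public static IInvoiceService Create()
    {
        bool useCustom;
        bool.TryParse(ConfigurationManager.AppSettings["UseCustomInvoices"], out useCustom);

        return useCustom
            ? (IInvoiceService)new CustomInvoiceService()
            : new DefaultInvoiceService();
    }
}
Classes that need invoicing then take an IInvoiceService (via constructor injection or the factory) and never mention the concrete implementations.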
Yes, you can. You can load an assembly from file, like this
var asmbly = System.Reflection.Assembly.LoadFrom(path);
And then use Reflection to load types.
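For example, a small sketch of the next step (the IInvoiceService contract and the type name are placeholders for whatever your customized assembly actually exposes):
using System;
using System.Reflection;

// Load the customer-specific assembly from disk.
var assembly = Assembly.LoadFrom(path);

// Pick a type out of it - here by name; you could also scan for
// types implementing a shared interface instead.
Type type = assembly.GetType("CustomerX.Invoicing.CustomInvoiceService");

// Create an instance and cast it to the contract the core application knows about.
var service = (IInvoiceService)Activator.CreateInstance(type);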
There are several ways you can achieve this pluggability. One way is to extract the interface of the "module" and code your "module client" against that interface, decoupling the concrete implementation from your "client" code. Then, at run time, look inside the assembly, load the type that implements said interface and inject it into the "module client".
I am pasting here some code I've written as a proof of concept for exactly that type of run-time loading of "modules" (this is MVC 3):
// "Module User" was decoupled from "module", by coding it against module's interface:
public class CustomerController : Controller
{
    private ICustomerRepository _customerRepository;

    // dependency injection constructor
    public CustomerController(
        ...
        ICustomerRepository customerRepository,
        ...)
    {
        _customerRepository = customerRepository;
    }

    public ActionResult Details(Nullable<Guid> id)
    {
        Customer c = _customerRepository.GetByKey(id.Value.ToString());
        ...
    }

    ...
}
At run time:
// I first load the assembly
System.Reflection.Assembly dal = System.Reflection.Assembly.LoadFrom(System.IO.Path.Combine(pBinPath, pDALAssemblyName));
// I then look for types implementing the repository interface (IAddressRepository in this snippet)
var addressRepositoryContract = typeof(QSysCamperCore.Domain.IAddressRepository);
var addressRepositoryImplementation = dal.GetTypes()
    .First(p => addressRepositoryContract.IsAssignableFrom(p));
...
Note that this type of programming requires a few extra permissions - I am already a few years "rusty" on this code, but I remember issues with the trust level and, of course, file system access has to be considered.
There are frameworks that aid this application style. This is again a few years old, but there used to be a so-called "Composite Application Block" on Microsoft's Patterns and Practices site, which was the base of two frameworks - Smart Client Software Factory and its web equivalent - Web Client Software Factory. They were a little heavier to stand up, but provided a great skeleton for such modularized (composite) applications.
In book "Dependency Injection in .Net" by Mark Seemann, in second chapter, is analysis of some badly written 3-layer asp.net application. The main point is: application fails because the lowest layer, data access, can not be converted from SQL with Entity Framework to Azure with no-SQL database. Here is exact quotation:
To enable the e-commerce application as a cloud application, the Data Access library
must be replaced with a module that uses the Table Storage Service. Is this possible?
From the dependency graph in figure 2.10, we already know that both User Interface and Domain libraries depend on the Entity Framework-based Data Access library.
If we try to remove the Data Access library, the solution will no longer compile,
because a required DEPENDENCY is missing.
In a big application with dozens of modules, we could also try to remove those
modules that don’t compile to see what would be left. In the case of Mary’s application, it’s evident that we’d have to remove all modules, leaving nothing behind.
Although it would be possible to develop an Azure Table Data Access
library that mimics the API exposed by the original Data Access
library, there’s no way we could inject it into the application.
(Figure 2.10, the dependency graph from the book, is not reproduced here.)
My question is: why can a module that imitates the previous behavior not be injected into the application, and what does that really mean? Is it related to Azure specifics? I have not worked much with no-SQL databases before.
Essentially, what he means is that your UI code is directly dependent on the code in the Data Access library. An example of how this might look in the UI layer:
public class SomeController : Controller
{
    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Here we're using the data component directly
        var dataComponent = new DataAccessLayer.DataComponent();

        return View(dataComponent.GetSomeData());
    }
}
If we want to swap out the DataAccess-library it means we would have to go into all our controllers and change the code to use the new component (unless we create exactly the same class names in the same namespaces, but that's unlikely).
On the other hand, we could also write the controller like this:
public class SomeController : Controller
{
    IDataComponent _dataComponent;

    public SomeController(IDataComponent dataComponent)
    {
        _dataComponent = dataComponent;
    }

    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Now we're using the interface that was injected
        return View(_dataComponent.GetSomeData());
    }
}
By defining the class like this, we can externally specify which concrete class implementing the IDataComponent interface should be injected into the constructor. This allows us to "wire" our application externally: we're injecting a concrete class into another class.
Dependency Injection is one way to make it easier to "program against an interface, not a concrete class".
The example Mark Seemann gives relates to databases vs Azure Table Storage, but it's just that, an example. This is not related to NoSql (or storage mechanisms in general). The same principles apply for everything that depends on other classes (generally service-type classes).
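To make the "wiring externally" part concrete, here is a rough manual sketch (in a real MVC application this would live in a dependency resolver or controller factory; AzureTableDataComponent is a hypothetical implementation, and it's assumed both components implement IDataComponent):
// At the application's entry point (the composition root) we decide
// which implementation the controller receives.
IDataComponent dataComponent;

if (useAzureTableStorage) // hypothetical configuration flag
{
    dataComponent = new AzureTableDataComponent();
}
else
{
    dataComponent = new DataAccessLayer.DataComponent();
}

var controller = new SomeController(dataComponent);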
EDIT after comments:
It's indeed true that you could just modify the internals of the DataComponent (or repository if that's what you're using).
However, using DI (and programming against an interface in general) gives you more options:
You could have various implementations at the same time and inject a different implementation depending on which controller it is (for example)
You could reuse the same instance in all your controllers by specifying the lifecycle in the registration (probably not usable in this case)
For testing purposes, you could inject a different implementation into the controller (such as a mock, which you can test for invocations)
I always see people talking about using frameworks like Ninject, Unity, or Windsor to do dependency resolution and injection. Take the following code for example:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }
}
My question is: why can't we simply write it as:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController() : this(null)
    {
    }

    public ProductsController(IProductRepository repository)
    {
        _repository = repository ?? new ProductRepository();
    }
}
In that case it seems we don't need any framework; even for unit tests we can easily mock.
So what's the real purpose of those frameworks?
Thanks in advance!
In that case your ProductsController still depends on a low level component (the concrete ProductRepository in your case) which is a violation of the Dependency Inversion Principle. Whether or not this is a problem depends on multiple factors, but it causes the following problems:
The creation of the ProductRepository is still duplicated throughout the application, causing you to make sweeping changes throughout the application when the constructor of ProductRepository changes (assuming that ProductRepository is used in more places, which is quite reasonable), which would be an Open/Closed Principle violation.
It causes you to make sweeping changes whenever you decide to wrap the ProductRepository with a decorator or interceptor that adds cross-cutting concerns (such as logging, audit trailing, security filtering, etc.), since you surely don't want to repeat that code throughout all your repositories (again an OCP violation); a sketch of such a decorator follows below.
It forces the ProductsController to know about the ProductRepository, which might be a problem depending on the size and complexity of the application you are writing.
So this is not about the use of frameworks, it's about applying software design principles. If you decide to adhere to these principles to make your application more maintainable, the frameworks like Ninject, Autofac and Simple Injector can help you with making the startup path of your application more maintainable. But nothing is preventing you from applying these principles without the use of any tool or library.
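As a rough sketch of the decorator point above (LoggingProductRepositoryDecorator and its GetAll method are invented names; the real members of IProductRepository aren't shown in the question):
using System;
using System.Collections.Generic;

// Hypothetical logging decorator: it implements the same interface
// and forwards every call to the wrapped repository.
public class LoggingProductRepositoryDecorator : IProductRepository
{
    private readonly IProductRepository _inner;

    public LoggingProductRepositoryDecorator(IProductRepository inner)
    {
        _inner = inner;
    }

    public IEnumerable<Product> GetAll()
    {
        Console.WriteLine("Fetching all products");
        return _inner.GetAll();
    }
}

// At the composition root the cross-cutting concern is applied once:
var repository = new LoggingProductRepositoryDecorator(new ProductRepository());
var controller = new ProductsController(repository);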
Small disclaimer: I'm an avid Unity user, and here are my 2 cents.
1st: Violation of SOLID (SRP/OCP/DIP)
As already stated by @democodemonkey and @thumbmunkeys, you couple the two classes tightly. Let's say that some classes (call them ProductsThingamajigOne and ProductsThingamajigTwo) are using the ProductsController through its default constructor. What if the architect decides that the system should not use a ProductRepository that saves products into files, but should use a database or cloud storage instead? What would the impact be on those classes?
2nd: What if the ProductRepository needs another dependency?
If the repository is based on a database, you might need to provide it with a connection string. If it's based on files, you might need to provide it with a settings class giving the exact path of where to save the files - and the truth is that, in general, applications tend to contain dependency trees (A depends on B and C, B depends on D, C depends on E, D depends on F and G, and so on) that have more than two levels, so the SOLID violations hurt more, as more code has to be changed to perform some task - but even before that, can you imagine the code that would create the whole app?
The fact is, classes can have many dependencies of their own - and in that case, the issues described earlier multiply.
That's usually the job of the Bootstrapper - it defines the dependency structure, and performs (usually) a single resolve that brings the whole system up, like a puppet on a string.
3rd: What if the Dependency-Tree is not a tree, but a Graph?
Consider the following case: class A depends on classes B and C, B and C both depend on class D, and both expect to use the same instance of D. A common practice was to make D a singleton, but that can cause a lot of issues. The other option is to pass an instance of D into the constructor of A and have it create B and C, or to create B and C outside and pass their instances to A - and the complexity goes on and on. A small sketch of the manual wiring follows below.
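A minimal sketch of that diamond-shaped case, wired by hand so that B and C share a single D (the classes are of course placeholders):
public class D { }

public class B
{
    private readonly D _d;
    public B(D d) { _d = d; }
}

public class C
{
    private readonly D _d;
    public C(D d) { _d = d; }
}

public class A
{
    private readonly B _b;
    private readonly C _c;
    public A(B b, C c) { _b = b; _c = c; }
}

public static class CompositionRoot
{
    public static A Compose()
    {
        // Create D once and hand the same instance to B and C.
        var d = new D();
        return new A(new B(d), new C(d));
    }
}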
4th: Packing (Assemblies)
Your code assumes that ProductsController can see ProductRepository (assembly-wise). What if there's no reference between them? The assembly map can be non-trivial. Usually, the bootstrapping code (I'm assuming it's in code and not in a configuration file for a second here) is written in an assembly that references the entire solution. (This was also described by @Steven.)
5th: Cool stuff you can do with IoC containers
Singletons are made easy (with Unity: simply use a ContainerControlledLifetimeManager when registering).
Lazy instantiation is made really easy (with Unity: register a mapping for T and ask in the constructor for a Func<T>). A small sketch follows below.
Those are just a couple of examples of things that IoC containers give you for (almost) free.
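A small sketch of those two points with Unity 2 or later (IProductRepository/ProductRepository are reused from the question; ReportService is an invented consumer, and the exact registrations are illustrative rather than a drop-in snippet):
using System;
using Microsoft.Practices.Unity; // Unity 2.x/3.x

// Invented consumer that asks for a Func<T> to get lazy instantiation.
public class ReportService
{
    private readonly Func<IProductRepository> _repositoryFactory;

    public ReportService(Func<IProductRepository> repositoryFactory)
    {
        _repositoryFactory = repositoryFactory;
    }

    public void Run()
    {
        var repository = _repositoryFactory(); // the repository is only created here
        // ... use the repository
    }
}

public static class Bootstrapper
{
    public static ReportService Compose()
    {
        var container = new UnityContainer();

        // Singleton: one ProductRepository instance for the whole container.
        container.RegisterType<IProductRepository, ProductRepository>(
            new ContainerControlledLifetimeManager());

        // Unity resolves Func<IProductRepository> automatically (deferred resolution).
        return container.Resolve<ReportService>();
    }
}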
Of course you could do that, but this would cause the following issues:
The dependency on IProductRepository is not explicit anymore; it looks like an optional dependency
Other parts of the code might instantiate a different implementation of IProductRepository, which would probably be a problem in this case
ProductsController becomes tightly coupled to ProductRepository, as it internally creates the dependency
In my opinion this is not a question about a framework. The point is to make modules composable, by exposing their dependencies in a constructor or property. Your example somewhat obfuscates that.
If class ProductRepository is not defined in the same assembly as ProductsController (or if you would ever want to move it to a different assembly) then you have just introduced a dependency that you don't want.
That's an anti-pattern described as "Bastard Injection" in the seminal book "Dependency Injection in .NET" by Mark Seemann.
However, if ProductRepository is ALWAYS going to be in the same assembly as ProductsController and if it does not depend on anything that the rest of the ProductsController assembly depends upon, it could be a local default - in which case it would be ok.
From the class names, I'm betting that such a dependency SHOULD NOT be introduced, and you are looking at bastard injection.
Here ProductsController is responsible for creating the ProductRepository.
What happens if ProductRepository requires an additional parameter in its constructor? Then ProductsController will have to change, and this violates the SRP.
It also adds more complexity to all of your objects.
And it makes it unclear whether a caller needs to pass the child object or whether it is optional.
The main purpose is to decouple object creation from its usage or consumption. The creation of the object "usually" is taken care of by factory classes. In your case, the factory classes will be designed to return an object of a type which implements IProductRepository interface.
In some frameworks, like Spring.NET, the factory classes instantiate objects that are declared in the configuration (i.e. in the app.config or web.config), thus making the program totally independent of the objects it needs to create. This can be quite powerful at times.
It is important to understand that dependency injection and inversion of control containers are not the same thing. You can use dependency injection without IoC frameworks like Unity, Ninject, etc., performing the injection manually - what is often called poor man's DI.
In my blog I recently wrote a post about this issue
http://xurxodeveloper.blogspot.com.es/2014/09/Inyeccion-de-Dependencias-DI.html
Going back to your example, I see weaknesses in the implementation.
1 - ProductsController depends on a concretion and not an abstraction, violating SOLID.
2 - If the interface and the repository live in different projects, you'd be forced to have a reference to the project where the repository is located.
3 - If in the future you need to add a parameter to the constructor, you would have to modify the controller even though it's simply a client of the repository.
4 - Controller and repository can be developed by different programmers; the controller programmer would have to know how to create the repository.
Consider this use case:
Suppose, in the future, you want to inject a CustomProductRepository instead of ProductRepository into ProductsController, in software that is already deployed to a client site.
With Spring.NET you can just update the Spring configuration file (XML) to use your CustomProductRepository. With this, you can avoid re-compiling and re-installing the software on the client site, since you have not modified any code.
Sorry if I am not clear enough, I've had a hard time writing this question.
I downloaded an open source application. I would like to expand its functionality, so I would like to create modules that encapsulate that functionality; these modules would be .dll files.
I would like each one to be completely independent from the others: if I set a key to true in the config file and the DLL is present in the folder, the plugin should be loaded.
The problem is: how can I call the plugin dynamically (only calling the plugin if it is applied)?
If I reference the plugin classes directly, I would have to reference the plugin dll, but I want to be able to run the core software without the plugin. Is there any design pattern or other mechanism that would allow me to load and use the DLL only if the plugin is applied and still be possible to run the core software without the plugin?
There are various ways to achieve this and I will describe one simple solution here.
Make a common interface that each plugin must implement in order to be integrated with the core application. Here is an example:
// Interface which plugins must implement
public interface IPlugin
{
    void DoSomething(int Data);
}

// Custom plugin which implements interface
public class Plugin : IPlugin
{
    public void DoSomething(int Data)
    {
        // Do something
    }
}
To actually load your plugin from dll, you will need to use reflection, for example:
// Load plugin dll and create plugin instance
var a = Assembly.LoadFrom("MyCustomPlugin.dll");
var t = a.GetType("MyCustomPlugin.Plugin");
var p = (IPlugin)Activator.CreateInstance(t);
// Use plugin instance later
p.DoSomething(123);
You can use some kind of naming convention for your plugin assemblies and classes
so that you can load them easily.
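To tie this to the config-key requirement in the question, a rough sketch (the MyCustomPluginEnabled key and the file name are made up for illustration) could look like this:
using System;
using System.Configuration; // reference System.Configuration.dll
using System.IO;
using System.Reflection;

// Only load the plugin when the config key is true and the dll actually exists.
IPlugin plugin = null;
bool enabled;
bool.TryParse(ConfigurationManager.AppSettings["MyCustomPluginEnabled"], out enabled);

string pluginPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyCustomPlugin.dll");

if (enabled && File.Exists(pluginPath))
{
    var assembly = Assembly.LoadFrom(pluginPath);
    var type = assembly.GetType("MyCustomPlugin.Plugin");
    plugin = (IPlugin)Activator.CreateInstance(type);
}

// The core application keeps running when the plugin is absent.
if (plugin != null)
{
    plugin.DoSomething(123);
}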
You can use MEF.
The Managed Extensibility Framework (MEF) is a composition layer for
.NET that improves the flexibility, maintainability and testability of
large applications. MEF can be used for third-party plugin
extensibility, or it can bring the benefits of a loosely-coupled
plugin-like architecture to regular applications.
Here is the programming guide.
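For a feel of what that looks like, here is a minimal sketch using the System.ComponentModel.Composition types (the IPlugin interface is the one from the answer above; the "Plugins" folder name is an assumption):
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// In the plugin assembly: export the implementation.
[Export(typeof(IPlugin))]
public class MyPlugin : IPlugin
{
    public void DoSomething(int Data)
    {
        // plugin behaviour
    }
}

// In the host application: import whatever is found in a folder.
public class PluginHost
{
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }

    public void LoadPlugins()
    {
        // Scans the Plugins folder for assemblies that export IPlugin.
        var catalog = new DirectoryCatalog("Plugins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}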
Plugins or DLLs in .NET jargon are called assemblies. Check out the Assembly.Load method, and also this guide on MSDN.
The System.Reflection namespace provides many tools that will help you with this scenario.
You can
inspect assemblies (DLL files) to examine the objects inside them,
find the types that you are looking for (specific classes, classes which implement specific interfaces, etc)
create new instances of those classes, and
invoke methods and access properties of those classes.
Typically you would write a class in the extension which does some work, create a method (e.g. DoWork()), and then invoke that method dynamically.
The MEF mentioned in this question does exactly this, just with a lot more framework.
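As a sketch of those steps (the ExtensionLoader class, the DoWork name and the assembly path are only illustrative), scanning an extension assembly and invoking a method dynamically could look roughly like this:
using System;
using System.Linq;
using System.Reflection;

public static class ExtensionLoader
{
    public static void RunExtensions(string assemblyPath)
    {
        // Inspect the assembly (DLL file) to examine the objects inside it.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);

        // Find the concrete classes that expose a parameterless DoWork method.
        var candidates = assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && t.GetMethod("DoWork", Type.EmptyTypes) != null);

        foreach (var type in candidates)
        {
            // Create an instance and invoke the method dynamically.
            object instance = Activator.CreateInstance(type);
            type.GetMethod("DoWork", Type.EmptyTypes).Invoke(instance, null);
        }
    }
}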
I am brand new to IoC and thus have been following the examples provided by Jeffrey Palermo in his posts at http://jeffreypalermo.com/blog/the-onion-architecture-part-1/ and in his book hosted here https://github.com/jeffreypalermo/mvc2inaction/tree/master/manuscript/Chapter23
Most important to note is that I am not using a pre-rolled IoC container, mostly because I want to understand all the moving parts.
However, I am creating a Windows service rather than an ASP.NET MVC web app, so I am a little bogged down on the startup portion. Specifically, in the web.config he registers an IHttpModule implementation INSIDE the infrastructure project as the startup module and then uses a post-build event to copy the necessary dlls into the website directory to get around having a direct dependency in the web project itself.
I don't think I have this type of luxury in a true Windows service, so how do I achieve something similar? Should I have a small startup project which has dependencies on both the Infrastructure and Core, or is there another method to get around the compile-time restrictions of the Windows service?
Thanks in advance.
Based on the tags of this question (c#) I'm assuming that you'll implement the Windows Service by deriving from ServiceBase. If so, the OnStart method will be your Composition Root - this is where you compose the application's object graph. After you've composed the object graph, composition is over and the composed object graph takes over.
In OnStop you can decommission the object graph again.
There's nothing stopping you from implementing the various components of the resolved object graph in separate assemblies. That's what I would do.
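A minimal sketch of that shape (InvoiceProcessor and SqlInvoiceRepository are invented names standing in for your Core and Infrastructure types; the point is only where the composition happens):
using System.ServiceProcess;

public class MyWindowsService : ServiceBase
{
    private InvoiceProcessor _processor;

    protected override void OnStart(string[] args)
    {
        // Composition Root: build the object graph from the Core and
        // Infrastructure assemblies, then let it take over.
        var repository = new SqlInvoiceRepository("connection string here");
        _processor = new InvoiceProcessor(repository);
        _processor.Start();
    }

    protected override void OnStop()
    {
        // Decommission the object graph.
        _processor.Stop();
        _processor = null;
    }
}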
I think you misunderstood the role of an IoC framework.
To answer your question
but doesn't the reference imply dependency?
Yes, it does, but on another level. IoC is about dependencies between classes.
Instead of using new Something() in your class, you provide a constructor which requires all the interfaces it depends on. This way the class has no control over which implementation is passed to it. This is inversion of control. The IoC container is just an aid to help manage the dependencies in a nice manner.
Say you have an ICustomerNotificationService interface with an implementation like
public class MailNotificationService : ICustomerNotificationService
{
    IMailerService _mailer;
    ICustomerRepository _customerRepo;
    IOrderRepository _orderRepo;

    public MailNotificationService(IMailerService mailer,
                                   ICustomerRepository customerRepo,
                                   IOrderRepository orderRepo)
    {
        // set fields...
    }

    public void Notify(int customerId, int productId)
    {
        // load customer and order, format mail and send.
    }
}
So if your application requests an instance of ICustomerNotificationService, the container figures out which concrete implementation to take and tries to satisfy all the dependencies the requested class has.
The advantage is that you can configure all dependencies in your bootstrapping logic and change the behaviour of your application very easily.
For example, when testing, you start the application with an IMailerService implementation which writes the mails to a file, while in production mode a real mail service is wired up. This would not be possible if you newed up, say, a MailerService in your constructor instead of taking it as a parameter.
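A rough sketch of that bootstrapping decision (FileMailerService, SmtpMailerService and the two repository classes are invented names used only to show where the wiring happens):
public static class Bootstrapper
{
    public static ICustomerNotificationService Compose(bool isTestRun)
    {
        // Pick the mailer implementation in one place,
        // instead of newing it up inside MailNotificationService.
        IMailerService mailer = isTestRun
            ? (IMailerService)new FileMailerService(@"C:\temp\mails")
            : new SmtpMailerService();

        return new MailNotificationService(
            mailer,
            new SqlCustomerRepository(),
            new SqlOrderRepository());
    }
}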
A good IoC container can handle much more for you, like lifetime management, singletons, scanning assemblies for types you want to register, and more. We based our entire plugin system on StructureMap, for example.
You may want to take a look at this blog article and its second part.
I've recently become a heavy user of Autofac's OwnedInstances feature. For example, I use it to provide a factory for creating a Unit of Work for my database, which means my classes which depend on the UnitOfWork factory are asking for objects of type :
Func<Owned<IUnitOfWork>>
This is incredibly useful--great for keeping IDisposable out of my interfaces--but it comes with a price: since Owned<> is part of the Autofac assembly, I have to reference Autofac in each of my projects that knows about Owned<>, and put "using Autofac.Features.OwnedInstances" in every code file.
Func<> has the great benefit of being built into the .NET framework, so I have no doubts that it's fine to use Func as a universal factory wrapper. But Owned<> is in the Autofac assembly, and every time I use it I'm creating a hard reference to Autofac (even when my only reference to Autofac is an Owned<> type in an interface method argument).
My question is: is this a bad thing? Will this start to bite me back in some way that I'm not yet taking into account? Sometimes I'll have a project which is referenced by many other projects, and so naturally I need to keep its dependencies as close as possible to zero; am I doing evil by passing a Func<Owned<IUnitOfWork>> (which is effectively a database transaction provider) into methods in these interfaces (which would otherwise be autofac-agnostic)?
Perhaps if Owned<> was a built-in .NET type, this whole dilemma would go away? (Should I even hold my breath for that to happen?)
I agree with @steinar; I would consider Autofac as yet another 3rd party dll that supports your project. Your system depends on it, so why should you restrict yourself from referencing it? I would be more concerned if ILifetimeScope or IComponentContext were sprinkled around your code.
That said, I feel your concern. After all, a DI container should work behind the scenes and not "spill" into the code. But we could easily create a wrapper and an interface to hide even the Owned<T>. Consider the following interface and implementation:
public interface IOwned<out T> : IDisposable
{
    T Value { get; }
}

public class OwnedWrapper<T> : Disposable, IOwned<T>
{
    private readonly Owned<T> _ownedValue;

    public OwnedWrapper(Owned<T> ownedValue)
    {
        _ownedValue = ownedValue;
    }

    public T Value { get { return _ownedValue.Value; } }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            _ownedValue.Dispose();
    }
}
The registration could be done, either using a registration source or a builder, e.g. like this:
var cb = new ContainerBuilder();
cb.RegisterGeneric(typeof (OwnedWrapper<>)).As(typeof (IOwned<>)).ExternallyOwned();
cb.RegisterType<SomeService>();
var c = cb.Build();
You can now resolve as usual:
using (var myOwned = c.Resolve<IOwned<SomeService>>())
{
var service = myOwned.Value;
}
You could place this interface in a common namespace in your system for easy inclusion.
Both the Owned<T> and OwnedWrapper<T> are now hidden from your code, only IOwned<T> is exposed. Should requirements change and you need to replace Autofac with another DI container there's a lot less friction with this approach.
I would say that it's fine to reference a well defined set of core 3rd party DLLs in every project of an "enterprise application" solution (or any application that needs flexibility). I see nothing wrong with having a dependency on at least the following in every project that needs it:
A logging framework (e.g. log4net)
Some IoC container (e.g. Autofac)
The fact that these aren't part of the core .NET framework shouldn't stop us from using them just as liberally.
The only possible negatives I can see are relatively minor compared to the possible benefits:
This may make the application harder to understand for the average programmer
You could have version compatibility problems in the future which you wouldn't encounter if you were just using the .NET framework
There is an obvious but minor overhead with adding all of these references to every solution
Perhaps if Owned<> was a built-in .NET type, this whole dilemma would go away? (Should I even hold my breath for that to happen?)
It will become a built-in .NET type: ExportLifetimeContext<T>. Despite the name, this class isn't really bound to the .NET ExportFactory<T>. The constructor simply takes a value, and an Action to invoke when the lifetime of that value is disposed.
For now, it is only available in Silverlight though. For the regular .NET framework you'll have to wait until .NET 4.x (or whatever the next version after 4.0 will be).
I don't think referencing the Autofac assembly is the real problem - I consider things like Owned appearing in application code a 'code smell'. Application code shouldn't care about what DI framework is being used and having Owned in your code now creates a hard dependency on Autofac. All DI related code should be cleanly contained in a set of configuration classes (Modules in the Autofac world).