In wanting to get some hands-on experience of good OO design I've decided to try to apply separation of concerns on a legacy app.
I decided that I wasn't comfortable with these calls being scattered all over the code base.
ConfigurationManager.AppSettings["key"]
While I've already tackled this before by writing a helper class to encapsulate those calls into static methods I thought it could be an opportunity to go a bit further.
I realise that ultimately I should be aiming to use dependency injection and always be 'coding to interfaces'. But I don't want to take what seems like too big a step. In the meantime I'd like to take smaller steps towards that ultimate goal.
Can anyone enumerate the steps they would recommend?
Here are some that come to mind:
Have client code depend on an interface not a concrete implementation
Manually inject dependencies into an interface via constructor or property?
Before going to the effort of choosing and applying an IoC container, how do I keep the code running?
In order to fulfil a dependency, the default constructor of any class that needs a configuration value could use a Factory (with a static CreateObject() method)?
Surely I'll still have a concrete dependency on the Factory?...
I've dipped into Michael Feathers' book so I know that I need to introduce seams but I'm struggling to know when I've introduced enough or too many!
Update
Imagine that Client calls methods on WidgetLoader passing it the required dependencies (such as an IConfigReader)
WidgetLoader reads config to find out what Widgets to load and asks WidgetFactory to create each in turn
WidgetFactory reads config to know what state to put the Widgets into by default
WidgetFactory delegates to WidgetRepository to do the data access, which reads config to decide what diagnostics it should log
In each case above should the IConfigReader be passed like a hot potato between each member in the call chain?
Is a Factory the answer?
To clarify following some comments:
My primary aim is to gradually migrate some app settings out of the config file and into some other form of persistence. While I realise that with an injected dependency I can Extract and Override to get some unit testing goodness, my primary concern is not testing so much as encapsulating enough that the code can be ignorant of where the settings actually get persisted.
When refactoring a legacy code-base you want to iteratively make small changes over time. Here is one approach:
Create a new static class (e.g. MyConfigManager) with a method to get the app setting (e.g. GetAppSettingString(string key))
Do a global search and replace of ConfigurationManager.AppSettings["key"] and replace instances with MyConfigManager.GetAppSettingString("key")
Test and check-in
Now your dependency on the ConfigurationManager is in one place. You can store your settings in a database or wherever, without having to change tons of code. Down side is that you still have a static dependency.
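A minimal sketch of what that wrapper might look like (names follow the steps above; at this stage it still just delegates to ConfigurationManager):
using System.Configuration;

public static class MyConfigManager
{
    // Single choke point for configuration access; later this body can read
    // from a database or another store without touching any callers.
    public static string GetAppSettingString(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}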
Next step would be to change MyConfigManager into a regular instance class and inject it into classes where it is used. Best approach here is to do it incrementally.
Create an instance class (and an interface) alongside the static class.
Now that you have both, you can refactor the using classes slowly until they are all using the instance class. Inject the instance into the constructor (using the interface). Don't try for the big bang check-in if there are lots of usages. Just do it slowly and carefully over time.
Then just delete the static class.
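As a rough sketch (names are illustrative), the instance version and its interface might look like this, with consumers receiving the interface through their constructors:
public interface IConfigManager
{
    string GetAppSettingString(string key);
}

public class ConfigManager : IConfigManager
{
    public string GetAppSettingString(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}

public class WidgetLoader
{
    private readonly IConfigManager _config;

    // The dependency is now explicit and can be faked in tests.
    public WidgetLoader(IConfigManager config)
    {
        _config = config;
    }
}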
Usually it's very difficult to clean up a legacy application in small steps, because it wasn't designed to be changed in this way. If the code is completely intermingled and you have no SoC, it is difficult to change one thing without being forced to change everything else... It is also often very hard to unit test anything.
But in general you have to:
1) Find the simplest (smallest) class not refactored yet
2) Write unit tests for this class so that you have confidence that your refactoring didn't break anything
3) Do the smallest possible change (this depends on the project and your common sense)
4) Make sure all the tests pass
5) Commit and goto 1
I would like to recommend "Refactoring" by Martin Fowler to give you more ideas: http://www.amazon.com/exec/obidos/ASIN/0201485672
For your example, the first thing I'd do is to create an interface exposing the functionality you need to read config e.g.
public interface IConfigReader
{
string GetAppSetting(string key);
...
}
and then create an implementation which delegates to the static ConfigurationManager class:
public class StaticConfigReader : IConfigReader
{
    public string GetAppSetting(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}
Then for a particular class with a dependency on the configuration you can create a seam which initially just returns an instance of the static config reader:
public class ClassRequiringConfig
{
public void MethodUsingConfig()
{
string setting = this.GetConfigReader().GetAppSetting("key");
}
protected virtual IConfigReader GetConfigReader()
{
return new StaticConfigReader();
}
}
And replace all references to ConfigurationManager with usages of your interface. Then for testing purposes you can subclass this class and override the GetConfigReader method to inject fakes so you don't need any actual config file:
public class TestClassRequiringConfig : ClassRequiringConfig
{
public IConfigReader ConfigReader { get; set; }
protected override IConfigReader GetConfigReader()
{
return this.ConfigReader;
}
}
[Test]
public void TestMethodUsingConfig()
{
ClassRequiringConfig sut = new TestClassRequiringConfig { ConfigReader = fakeConfigReader };
sut.MethodUsingConfig();
//Assertions
}
Then eventually you will be able to replace this with property/constructor injection when you add an IoC container.
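As a sketch, the constructor-injected version of the class above might then look like this:
public class ClassRequiringConfig
{
    private readonly IConfigReader _configReader;

    // The seam becomes an explicit constructor dependency.
    public ClassRequiringConfig(IConfigReader configReader)
    {
        _configReader = configReader;
    }

    public void MethodUsingConfig()
    {
        string setting = _configReader.GetAppSetting("key");
    }
}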
EDIT:
If you're not happy with injecting instances into individual classes like this (which would be quite tedious if many classes depend on configuration) then you could create a static configuration class, and then allow temporary changes to the config reader for testing:
public static class Configuration
{
    private static Func<IConfigReader> _configReaderFunc = () => new StaticConfigReader();

    public static IConfigReader GetConfigReader()
    {
        return _configReaderFunc();
    }

    public static IDisposable CreateConfigScope(IConfigReader reader)
    {
        return new ConfigReaderScope(() => reader);
    }

    private class ConfigReaderScope : IDisposable
    {
        private readonly Func<IConfigReader> _oldReaderFunc;

        public ConfigReaderScope(Func<IConfigReader> newReaderFunc)
        {
            this._oldReaderFunc = _configReaderFunc;
            _configReaderFunc = newReaderFunc;
        }

        public void Dispose()
        {
            _configReaderFunc = this._oldReaderFunc;
        }
    }
}
Then your classes just access the config through the static class:
public void MethodUsingConfig()
{
string value = Configuration.GetConfigReader().GetAppSetting("key");
}
and your tests can use a fake through a temporary scope:
[Test]
public void TestMethodUsingConfig()
{
using(var scope = Configuration.CreateConfigScope(fakeReader))
{
new ClassUsingConfig().MethodUsingConfig();
//Assertions
}
}
Related
I have a library with some classes that realize the same interface:
internal class MyObj1 : IMyObj {
public MyObj1(string param1, int param2) {}
}
internal class MyObj2 : IMyObj {
public MyObj2(bool param1, string param2, int param3) {}
}
internal class MyObj3 : IMyObj {
public MyObj3(string param1, int param2) {}
}
I want to create an object factory that allows access to MyObj1, MyObj2, and MyObj3 only through IMyObj:
public class MyObjFactory {
public IMyObj Create<T>() {
return (IMyObj)Activator.CreateInstance(typeof(T));
}
}
I don't know how to pass constructor arguments to the factory method. Any idea?
It sounds like this is where you're at:
a) You don't want classes to create the additional classes they depend on, because that couples them together. Each class would have to know too much about the classes it depends on, such as their constructor arguments.
b) You create a factory to separate the creation of those objects.
c) You discover that the problem you had in (a) has now moved to (b), but it's exactly the same problem, only with more classes. Now your factory has to create class instances. But where will it get the constructor arguments it needs to create those objects?
One solution is using a DI container. If that is entirely unfamiliar then that's 10% bad news and 90% good news. There's a little bit of a learning curve, but it's not bad. The 90% good news part is that you've reached a point where you realize you need it, and it's going to become an extraordinarily valuable tool.
When I say "DI container" - also called an "IoC (Inversion of Control) container," that refers to tools like Autofac, Unity, or Castle Windsor. I work primarily with Windsor so I use that in examples.
A DI container is a tool that creates objects for you without explicitly calling the constructors. (This explanation is 100% certain to be insufficient - you'll need to Google more. Trust me, it's worth it.)
Suppose you have a class that depends on several abstractions (interfaces), and the implementations of those interfaces depend on more abstractions:
public class ClassThatDependsOnThreeThings
{
private readonly IThingOne _thingOne;
private readonly IThingTwo _thingTwo;
private readonly IThingThree _thingThree;
public ClassThatDependsOnThreeThings(IThingOne thingOne, IThingTwo thingTwo, IThingThree thingThree)
{
_thingOne = thingOne;
_thingTwo = thingTwo;
_thingThree = thingThree;
}
}
public class ThingOne : IThingOne
{
private readonly IThingFour _thingFour;
private readonly IThingFive _thingFive;
public ThingOne(IThingFour thingFour, IThingFive thingFive)
{
_thingFour = thingFour;
_thingFive = thingFive;
}
}
public class ThingTwo : IThingTwo
{
private readonly IThingThree _thingThree;
private readonly IThingSix _thingSix;
public ThingTwo(IThingThree thingThree, IThingSix thingSix)
{
_thingThree = thingThree;
_thingSix = thingSix;
}
}
public class ThingThree : IThingThree
{
private readonly string _connectionString;
public ThingThree(string connectionString)
{
_connectionString = connectionString;
}
}
This is good because each individual class is simple and easy to test. But how in the world are you going to create a factory to create all of these objects for you? That factory would have to know/contain everything needed to create every single one of the objects.
The individual classes are better off, but composing them or creating instances becomes a major headache. What if there are parts of your code that only need one of these - do you create another factory? What if you have to change one of these classes so that now it has more or different dependencies? Now you have to go back and fix all your factories. That's a nightmare.
A DI container (again, this example is using Castle.Windsor) allows you to do this. At first it's going to look like more work, or just moving the problem around. But it's not:
var container = new WindsorContainer();
container.Register(
Component.For<ClassThatDependsOnThreeThings>(),
Component.For<IThingOne, ThingOne>(),
Component.For<IThingTwo, ThingTwo>(),
Component.For<IThingThree, ThingThree>()
.DependsOn(Dependency.OnValue("connectionString", ConfigurationManager.ConnectionStrings["xyz"].ConnectionString)),
Component.For<IThingFour, ThingFour>(),
Component.For<IThingFive, ThingFive>(),
Component.For<IThingSix, ThingSix>()
);
Now, if you do this:
var thing = container.Resolve<ClassThatDependsOnThreeThings>();
or
var thingTwo = container.Resolve<IThingTwo>();
as long as you've registered the type with the container and you've also registered whatever types are needed to fulfill all the nested dependencies, the container creates each object as needed, calling the constructor of each object, until it can finally create the object you asked for.
Another detail you'll probably notice is that none of these classes create the things they depend on. There is no new ThingThree(). Whatever each class depends on is specified in its constructor. That's one of the fundamental concepts of dependency injection. If a class just receives an instance of IThingThree then it really never knows what the implementation is. It only depends on the interface and doesn't know anything about the implementation. That works toward Dependency Inversion, the "D" in SOLID. It helps protect your classes from getting coupled to specific implementation details.
That's very powerful. It means that, when properly configured, at any point in your code you can just ask for the dependency you need - usually as an interface - and just receive it. The class that needs it doesn't have to know how to create it. That means that 90% of the time you don't even need a factory at all. The constructor of your class just says what it needs, and the container provides it.
(If you actually do need a factory, which does happen in some cases, Windsor and some other containers help you to create one. Here's an example.)
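As a rough sketch of that idea (the factory interface name here is hypothetical), Windsor's typed factory facility can generate the factory implementation from an interface you declare:
using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IThingTwoFactory
{
    IThingTwo Create();            // Windsor generates this implementation for you
    void Release(IThingTwo thing); // hands the instance back to the container
}

public static class FactoryRegistrationExample
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.AddFacility<TypedFactoryFacility>();
        container.Register(
            Component.For<IThingTwo>().ImplementedBy<ThingTwo>(),
            Component.For<IThingTwoFactory>().AsFactory());
        return container;
    }
}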
Part of getting this to work involves learning how to configure the type of application you're using to use a DI container. For example, in an ASP.NET MVC application you would configure the container to create your controllers for you. That way if your controllers depend on more things, the container can create those things as needed. ASP.NET Core makes it easier by providing its own DI container so that all you have to do is register your various components.
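For example, with the built-in ASP.NET Core container the registrations for the classes above might look roughly like this (a sketch; the "xyz" connection string name mirrors the Windsor example):
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class ThingRegistrations
{
    public static void AddThings(this IServiceCollection services, IConfiguration configuration)
    {
        services.AddTransient<IThingOne, ThingOne>();
        services.AddTransient<IThingTwo, ThingTwo>();
        // ThingThree needs a raw connection string, so use a factory delegate:
        services.AddTransient<IThingThree>(_ =>
            new ThingThree(configuration.GetConnectionString("xyz")));
    }
}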
This is an incomplete answer because it describes what the solution is without telling you how to implement it. That will require some more searching on your part, such as "How do I configure XYZ for dependency injection," or just learning more about the concept in general. One author called it something like a $5 term for a $.50 concept. It looks complicated and confusing until you try it and see how it works. Then you'll see why it's built into ASP.NET Core, Angular, and why all sorts of languages use dependency injection.
When you reach the point - as you have - where you have the problems that DI solves, that's really exciting because it means you realize that there must be a better, cleaner way to accomplish what you're trying to do. The good news is that there is. Learning it and using it will have a ripple effect throughout your code, enabling you to better apply SOLID principles and write smaller classes that are easier to unit test.
I would recommend not using Activator.CreateInstance since it is relatively slow, and you lose compile-time safety (e.g. if you get the number of constructor parameters wrong it will throw an exception at runtime).
I would suggest something like:
public IMyObj CreateType1(string param1, int param2)
{
return new MyObj1(param1, param2);
}
public IMyObj CreateType2(bool param1, string param2, int param3)
{
return new MyObj2(param1, param2, param3);
}
Use Activator.CreateInstance Method (Type, Object[])
Creates an instance of the specified type using the constructor that
best matches the specified parameters.
public IMyObj Create<T>(params object[] args)
{
return (IMyObj)Activator.CreateInstance(typeof(T),args);
}
Alternatively
public IMyObj Create<T>(string param1, int param2) where T : MyObj1
{
    return (IMyObj)Activator.CreateInstance(typeof(T), param1, param2);
}
public IMyObj Create<T>(bool param1, string param2, int param3) where T : MyObj2
{
    return (IMyObj)Activator.CreateInstance(typeof(T), param1, param2, param3);
}
...
...
New to OOP here. I have defined an interface with one method, and in my derived class I defined another public method. My client code is conditionally instantiating a class of the interface type, and of course the compiler doesn't know about the method in one of the derived classes as it is not part of the underlying interface definition. Here is what I am talking about:
public interface IFileLoader
{
public bool Load();
}
public class FileLoaderA : IFileLoader
{
    public bool Load()
    {
        //implementation
    }

    public void SetStatus(FileLoadStatus status)
    {
        //implementation
    }
}
public class FileLoaderB : IFileLoader
{
    public bool Load()
    {
        //implementation
    }

    //note B does not have a SetStatus method
}
public enum FileLoadStatus
{
Started,
Done,
Error
}
// client code
IFileLoader loader;
if (Config.UseMethodA)
{
loader = new FileLoaderA();
}
else
{
loader = new FileLoaderB();
}
//does not know about this method
loader.SetStatus(FileLoadStatus.Done);
I guess I have two questions:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
I know people talk of IoC/DI all the time. Being new to OOP, what is the advantage of using an IoC to say, "when my app asks for an IFileLoader type, use concrete class X", as opposed to simply using an App.Config file to get the setting?
Referring to your two questions and your other post I'd recommend the following:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
You don't necessarily need to find out the concrete implementation at runtime in your client code. Following that approach you'd largely defeat the purpose of an interface. Hence it's better to just naïvely use the interface and let the concrete logic behind it decide what to do.
So in your case, if one implementation's just able to load a file - fine. If your other implementation is able to do the same and a bit more, that's fine, too. But the client code (in your case your console application) shouldn't care about it and should just use Load().
Maybe some code says more than a thousand words:
public class ThirdPartyLoader : IFileLoader
{
public bool Load(string fileName)
{
// simply acts as a wrapper around your 3rd party tool
}
}
public class SmartLoader : IFileLoader
{
    private readonly ICanSetStatus _statusSetter;

    public SmartLoader(ICanSetStatus statusSetter)
    {
        _statusSetter = statusSetter;
    }

    public bool Load(string fileName)
    {
        _statusSetter.SetStatus(FileStatus.Started);
        // do whatever's necessary to load the file ;)
        _statusSetter.SetStatus(FileStatus.Done);
        return true;
    }
}
Note that the SmartLoader does a bit more. But as a matter of separation of concerns its purpose is the loading part. The setting of a status is another class' task:
public interface ICanSetStatus
{
void SetStatus(FileStatus fileStatus);
// maybe add a second parameter with information about the file, so that an
// implementation of this interface knows everything that's needed
}
public class StatusSetter : ICanSetStatus
{
public void SetStatus(FileStatus fileStatus)
{
// do whatever's necessary...
}
}
Finally your client code could look something like the following:
static void Main(string[] args)
{
bool useThirdPartyLoader = GetInfoFromConfig();
IFileLoader loader = FileLoaderFactory.Create(useThirdPartyLoader);
var files = GetFilesFromSomewhere();
ProcessFiles(loader, files);
}
public static class FileLoaderFactory
{
public static IFileLoader Create(bool useThirdPartyLoader)
{
if (useThirdPartyLoader)
{
return new ThirdPartyLoader();
}
return new SmartLoader(new StatusSetter());
}
}
Note that this is just one possible way to do what you're looking for without having to determine IFileLoader's concrete implementation at runtime. There may be other, more elegant ways, which leads me to your next question.
I know people talk of IOC/DI all the time. Being new OOP, what is the advantage of using an IOC [...], as opposed to simply using an App.Config file to get the setting?
First of all, separating classes' responsibilities is always a good idea, especially if you want to painlessly unit test your classes. Interfaces are your friends here, as you can easily substitute or "mock" instances by e.g. utilizing NSubstitute. Moreover, small classes are generally more maintainable.
The attempt above already relies on some sort of inversion of control. The main-method knows barely anything about how to instantiate a Loader (although the factory could do the config lookup as well. Then main wouldn't know anything, it would just use the instance).
Broadly speaking: Instead of writing the boilerplate factory instantiation code, you could use a DI-Framework like Ninject or maybe Castle Windsor which enables you to put the binding logic into configuration files which might best fit your needs.
To make a long story short: You could simply use a boolean appSetting in your app.config that tells your code which implementation to use. But you could use a DI-Framework instead and make use of its features to easily instantiate other classes as well. It may be a bit oversized for this case, but it's definitely worth a look!
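A minimal sketch of the app.config route (the key name is just an example), which could serve as the GetInfoFromConfig() helper used above:
using System.Configuration;

public static class LoaderConfig
{
    // app.config:
    //   <appSettings>
    //     <add key="UseThirdPartyLoader" value="true" />
    //   </appSettings>
    public static bool UseThirdPartyLoader()
    {
        bool result;
        bool.TryParse(ConfigurationManager.AppSettings["UseThirdPartyLoader"], out result);
        return result;
    }
}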
Use something like:
if((loader as FileLoaderA) != null)
{
((FileLoaderA)loader).SetStatus(FileStatus.Done);
}
else
{
// Do something with it as FileLoaderB type
}
IoC is normally used in situations where your class depends on another class that needs to be set up first; the IoC container can instantiate/set up an instance of that class for your class to use and inject it into your class, usually via the constructor. It then hands you an instance of your class that is set up and ready to go.
EDIT:
I was just trying to keep the code concise and easy to follow. I agree that this is not the most efficient form for this code (it actually performs the cast twice).
For the purpose of determining if a particular cast is valid Microsoft suggests using the following form:
var loaderA = loader as FileLoaderA;
if(loaderA != null)
{
loaderA.SetStatus(FileStatus.Done);
// Do any remaining FileLoaderA stuff
return;
}
var loaderB = loader as FileLoaderB;
if(loaderB != null)
{
// Do FileLoaderB stuff
return;
}
I do not agree with using is in the if. The is keyword was designed to determine if an object was instantiated from a class that implements a particular interface, rather than if a cast is viable. I have found it does not always return the expected result (especially if a class implements multiple interfaces through direct implementation or inheritance of a base class).
I've been reading up on how to write testable code and stumbled upon the Dependency Injection design pattern.
This design pattern is really easy to understand and there is really nothing to it: the object asks for the values rather than creating them itself.
However, now that I'm thinking about how this could be used in the application I'm currently working on, I realize that there are some complications to it. Imagine the following example:
public class A {
    public String getValue() {
        return "abc";
    }
}

public class B {
    private A a;

    public B(A a) {
        this.a = a;
    }

    public void someMethod() {
        String str = a.getValue();
    }
}
Unit testing someMethod() would now be easy, since I can create a mock of A and have getValue() return whatever I want.
Class B's dependency on A is injected through the constructor, but this means that A has to be instantiated outside class B, so the dependency has moved to another class instead. This would be repeated many layers down, and at some point the instantiation has to be done.
Now to the question: is it true that when using Dependency Injection, you keep passing the dependencies through all these layers? Wouldn't that make the code less readable and more time-consuming to debug? And when you reach the "top" layer, how would you unit test that class?
I hope I understand your question correctly.
Injecting Dependencies
No we don't pass the dependencies through all the layers. We only pass them to layers that directly talk to them. For example:
public class PaymentHandler {
    private CustomerRepository customerRepository;

    public PaymentHandler(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public void handlePayment(CustomerId customerId, Money amount) {
        Customer customer = customerRepository.findById(customerId);
        customer.charge(amount);
    }
}

public interface CustomerRepository {
    public Customer findById(CustomerId customerId);
}

public class DefaultCustomerRepository implements CustomerRepository {
    private Database database;

    public DefaultCustomerRepository(Database database) {
        this.database = database;
    }

    public Customer findById(CustomerId customerId) {
        Result result = database.executeQuery(...);
        // do some logic here
        return customer;
    }
}

public interface Database {
    public Result executeQuery(Query query);
}
PaymentHandler does not know about the Database, it only talks to CustomerRepository. The injection of Database stops at the repository layer.
Readability of the code
When doing manual injection without frameworks or libraries to help, we might end up with Factory classes that contain a lot of boilerplate code like return new D(new C(new B(), new A())), which at some point can become less readable. To solve this problem we tend to use DI frameworks like Guice to avoid writing so many factories.
However, for classes that actually do work / business logic, they should be more readable and understandable as they only talk to their direct collaborators and do the work they need to do.
Unit Testing
I assume that by "Top" layer you mean the PaymentHandler class. In this example, we can create a stub CustomerRepository class and have it return a Customer object that we can check against, then pass the stub to the PaymentHandler to check whether the correct amount is charged.
The general idea is to pass in fake collaborators to control their output so that we can safely assert the behavior of the class under test (in this example the PaymentHandler class).
Why interfaces
As mentioned in the comments above, it is preferable to depend on interfaces instead of concrete classes; they provide better testability (easy to mock/stub) and easier debugging.
Hope this helps.
Well yes, that would mean you have to pass the dependencies over all the layers. However, that's where Inversion of Control containers come in handy. They allow you to register all components (classes) in the system. Then you can ask the IoC container for an instance of class B (in your example), which would automatically call the correct constructor for you, creating any objects the constructor depends upon (in your case class A).
A nice discussion can be found here: Why do I need an IoC container as opposed to straightforward DI code?
IMO, your question demonstrates that you understand the pattern.
Used correctly, you would have a Composition Root where all dependencies are resolved and injected. Use of an IoC container here would resolve dependencies and pass them down through the layers for you.
This is in direct opposition to the Service Location pattern, which is considered by many as an anti-pattern.
Use of a Composition Root shouldn't make your code less readable/understandable, as well-designed classes with clear and relevant dependencies should be reasonably self-documenting. I'm not sure about unit testing a Composition Root. It has a discrete role, so it should be testable.
Let's say I am defining a browser implementation class for my application:
class InternetExplorerBrowser : IBrowser {
private readonly string executablePath = @"C:\Program Files\...\...\ie.exe";
...code that uses executablePath
}
This might at first glance look like a good idea, as the executablePath data is near the code that will use it.
The problem comes when I try to run this same application on my other computer, that has a foreign-language OS: executablePath will have a different value.
I could solve this through an AppSettings singleton class (or one of its equivalents) but then no-one knows my class is actually dependent on this AppSettings class (which goes against DI ideas). It might pose a difficulty for unit testing, too.
I could solve both problems by having executablePath being passed in through the constructor:
class InternetExplorerBrowser : IBrowser {
private readonly string executablePath;
public InternetExplorerBrowser(string executablePath) {
this.executablePath = executablePath;
}
}
but this will raise problems in my Composition Root (the startup method that will do all the needed classes wiring) as then that method has to know both how to wire things up and has to know all these little settings data:
class CompositionRoot {
public void Run() {
ClassA classA = new ClassA();
string ieSetting1 = "C:\asdapo\poka\poskdaposka.exe";
string ieSetting2 = "IE_SETTING_ABC";
string ieSetting3 = "lol.bmp";
ClassB classB = new ClassB(ieSetting1);
ClassC classC = new ClassC(classB, ieSetting2, ieSetting3);
...
}
}
which will easily turn into a big mess.
I could turn this problem around by instead passing an interface of the form
interface IAppSettings {
object GetData(string name);
}
to all the classes that need some sort of settings. Then I could implement this either as a regular class with all the settings embedded in it, or as a class that reads data off an XML file, something along those lines. If doing this, should I have a single AppSettings instance for the whole system, or an AppSettings class associated with each class that might need one? That certainly seems like a bit of overkill. Also, having all the application settings in the same place makes it easy to see all the changes I need to make when trying to move the program to a different platform.
What might be the best way to approach this common situation?
Edit:
And what about using an IAppSettings with all its settings hardcoded in it?
interface IAppSettings {
string IE_ExecutablePath { get; }
int IE_Version { get; }
...
}
This would allow for compile-time type-safety. If I saw the interface/concrete classes grow too much I could create other smaller interfaces of the form IMyClassXAppSettings. Would it be a burden too heavy to bear in medium/large-sized projects?
I've also been reading about AOP and its advantages in dealing with cross-cutting concerns (I guess this is one). Couldn't it also offer solutions to this problem? Maybe tagging variables like this:
class InternetExplorerBrowser : IBrowser {
[AppSetting] string executablePath;
[AppSetting] int ieVersion;
...code that uses executablePath
}
Then, when compiling the project we'd also have compile-time safety (the compiler checking that we actually implemented code that would weave in the data). This would, of course, tie our API to this particular aspect.
The individual classes should be as free from infrastructure as possible - constructs like IAppSettings, IMyClassXAppSettings, and [AppSetting] bleed composition details to classes which, at their simplest, really only depend on raw values such as executablePath. The art of Dependency Injection is in the factoring of concerns.
I have implemented this exact pattern using Autofac, which has modules similar to Ninject and should result in similar code (I realize the question doesn't mention Ninject, but the OP does in a comment).
Modules organize applications by subsystem. A module exposes a subsystem's configurable elements:
public class BrowserModule : Module
{
private readonly string _executablePath;
public BrowserModule(string executablePath)
{
_executablePath = executablePath;
}
public override void Load(ContainerBuilder builder)
{
builder
.Register(c => new InternetExplorerBrowser(_executablePath))
.As<IBrowser>()
.InstancePerDependency();
}
}
This leaves the composition root with the same problem: it must supply the value of executablePath. To avoid the configuration soup, we can write a self-contained module which reads configuration settings and passes them to BrowserModule:
public class ConfiguredBrowserModule : Module
{
public override void Load(ContainerBuilder builder)
{
var executablePath = ConfigurationManager.AppSettings["ExecutablePath"];
builder.RegisterModule(new BrowserModule(executablePath));
}
}
You could consider using a custom configuration section instead of AppSettings; the changes would be localized to the module:
public class BrowserSection : ConfigurationSection
{
[ConfigurationProperty("executablePath")]
public string ExecutablePath
{
get { return (string) this["executablePath"]; }
set { this["executablePath"] = value; }
}
}
public class ConfiguredBrowserModule : Module
{
public override void Load(ContainerBuilder builder)
{
var section = (BrowserSection) ConfigurationManager.GetSection("myApp.browser");
if(section == null)
{
section = new BrowserSection();
}
builder.RegisterModule(new BrowserModule(section.ExecutablePath));
}
}
This is a nice pattern because each subsystem has an independent configuration which gets read in a single place. For a single string setting the only benefit is more obvious intent; for non-string values or complex schemas, though, we can let System.Configuration do the heavy lifting.
I'd go with the last option - pass in an object that complies with the IAppSettings interface. In fact, I recently performed that refactor at work in order to sort out some unit tests and it worked nicely. However, there were few classes dependent on the settings in that project.
I'd go with creating a single instance of the settings class, and pass that in to anything that's dependent upon it. I can't see any fundamental problem with that.
However, I think you've already thought about this and seen how it can be a pain if you have lots of classes dependent on the settings.
If this is a problem for you, you can work around it by using a dependency injection framework such as Ninject (sorry if you're already aware of projects like Ninject - this might sound a bit patronizing - if you're unfamiliar, the "Why use Ninject?" sections on GitHub are a good place to learn).
Using ninject, for your main project you can declare that you want any class with a dependency on IAppSettings to use a singleton instance of your AppSettings based class without having to explicitly pass it in to constructors everywhere.
You can then set up your system differently for your unit tests by stating that you want to use an instance of MockAppSettings wherever IAppSettings is used, or by simply explicitly passing in your mock objects directly.
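With Ninject that could look roughly like this (module and class names are assumptions based on the description above):
using Ninject.Modules;

// Production bindings: one shared AppSettings instance wherever IAppSettings is requested.
public class ProductionModule : NinjectModule
{
    public override void Load()
    {
        Bind<IAppSettings>().To<AppSettings>().InSingletonScope();
    }
}

// Test bindings: substitute a fake without changing any constructors.
public class TestModule : NinjectModule
{
    public override void Load()
    {
        Bind<IAppSettings>().To<MockAppSettings>();
    }
}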
I hope I've got the gist of your question right and that I've helped - you already sound like you know what you're doing :)
I'm a complete newbie to ninject
I've been pulling apart someone else's code and found several instances of Ninject modules - classes that derive from Ninject.Modules.Module and have a Load method that contains most of their code.
These classes are called by invoking the LoadModule method of an instance of StandardKernel and passing it an instance of the module class.
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
The Ninject modules are the tools used to register the various types with the IoC container. The advantage is that these modules are then kept in their own classes. This allows you to put different tiers/services in their own modules.
// some method early in your app's life cycle
public Kernel BuildKernel()
{
var modules = new INinjectModule[]
{
new LinqToSqlDataContextModule(), // just my L2S binding
new WebModule(),
new EventRegistrationModule()
};
return new StandardKernel(modules);
}
// in LinqToSqlDataContextModule.cs
public class LinqToSqlDataContextModule : NinjectModule
{
public override void Load()
{
Bind<IRepository>().To<LinqToSqlRepository>();
}
}
Having multiple modules allows for separation of concerns, even within your IoC container.
The rest of you question sounds like it is more about IoC and DI as a whole, and not just Ninject. Yes, you could use static Configuration objects to do just about everything that an IoC container does. IoC containers become really nice when you have multiple hierarchies of dependencies.
public interface IInterfaceA {}
public interface IInterfaceB {}
public interface IInterfaceC {}
public class ClassA : IInterfaceA {}
public class ClassB : IInterfaceB
{
public ClassB(IInterfaceA a){}
}
public class ClassC : IInterfaceC
{
public ClassC(IInterfaceB b){}
}
Building ClassC is a pain at this point, with multiple depths of interfaces. It's much easier to just ask the kernel for an IInterfaceC.
var newc = ApplicationScope.Kernel.Get<IInterfaceC>();
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
Yes, you can just call a bunch of Bind<X>().To<Z>() statements to set up the bindings, without a module.
The difference is that if you put these statements in a module then:
IKernel.Load(IEnumerable<Assembly>) can dynamically discover such modules through reflection and load them.
the bindings are logically grouped together under a name; you can use this name to unload them again with IKernel.Unload(string)
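For example, a sketch of both mechanisms (the module name string must match the module's Name property, which by default is derived from its type):
using System;
using Ninject;

var kernel = new StandardKernel();

// Discover and load every NinjectModule in the currently loaded assemblies:
kernel.Load(AppDomain.CurrentDomain.GetAssemblies());

// Later, unload one named group of bindings again:
kernel.Unload("LinqToSqlDataContextModule");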
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
For us, it is the ability to add tests at a later time very easily. Just override a few bindings with mock objects and voilà. On legacy code without DI that wired everything up, it is near impossible to start inserting test cases without some rework. With DI in place, and as long as it was used properly so the container wired everything up, it is very simple to do so even on legacy code that may be very ugly.
In many DI frameworks, you can use the production module for your tests together with a test module that overrides specific bindings with mock objects (leaving the rest of the wiring in place). These may be system tests more than unit tests, but I tend to prefer higher-level tests than the average developer, as they exercise the integration between classes and are great documentation for someone who joins the project and wants to see a whole feature in action (instead of just parts of it) without having to set up a whole system.