I have a library with some classes that implement the same interface:
internal class MyObj1 : IMyObj {
    public MyObj1(string param1, int param2) {}
}

internal class MyObj2 : IMyObj {
    public MyObj2(bool param1, string param2, int param3) {}
}

internal class MyObj3 : IMyObj {
    public MyObj3(string param1, int param2) {}
}
I want to create an object factory that allows access to MyObj1, MyObj2, and MyObj3 only through IMyObj:
public class MyObjFactory {
    public IMyObj Create<T>() {
        return (IMyObj)Activator.CreateInstance(typeof(T));
    }
}
I don't know how to pass constructor arguments to the factory method. Any idea?
It sounds like this is where you're at:
a) You don't want classes to create the additional classes they depend on, because that couples them together. Each class would have to know too much about the classes it depends on, such as their constructor arguments.
b) You create a factory to separate the creation of those objects.
c) You discover that the problem you had in (a) has now moved to (b), but it's exactly the same problem, only with more classes. Now your factory has to create class instances. But where will it get the constructor arguments it needs to create those objects?
One solution is to use a DI container. If that is entirely unfamiliar, then that's 10% bad news and 90% good news. The bad news is that there's a little bit of a learning curve, but it's not bad. The 90% good news is that you've reached a point where you realize you need it, and it's going to become an extraordinarily valuable tool.
When I say "DI container" - also called an "IoC (Inversion of Control) container," that refers to tools like Autofac, Unity, or Castle Windsor. I work primarily with Windsor so I use that in examples.
A DI container is a tool that creates objects for you without explicitly calling the constructors. (This explanation is 100% certain to be insufficient - you'll need to Google more. Trust me, it's worth it.)
Suppose you have a class that depends on several abstractions (interfaces), and the implementations of those interfaces depend on more abstractions:
public class ClassThatDependsOnThreeThings
{
    private readonly IThingOne _thingOne;
    private readonly IThingTwo _thingTwo;
    private readonly IThingThree _thingThree;

    public ClassThatDependsOnThreeThings(IThingOne thingOne, IThingTwo thingTwo, IThingThree thingThree)
    {
        _thingOne = thingOne;
        _thingTwo = thingTwo;
        _thingThree = thingThree;
    }
}

public class ThingOne : IThingOne
{
    private readonly IThingFour _thingFour;
    private readonly IThingFive _thingFive;

    public ThingOne(IThingFour thingFour, IThingFive thingFive)
    {
        _thingFour = thingFour;
        _thingFive = thingFive;
    }
}

public class ThingTwo : IThingTwo
{
    private readonly IThingThree _thingThree;
    private readonly IThingSix _thingSix;

    public ThingTwo(IThingThree thingThree, IThingSix thingSix)
    {
        _thingThree = thingThree;
        _thingSix = thingSix;
    }
}

public class ThingThree : IThingThree
{
    private readonly string _connectionString;

    public ThingThree(string connectionString)
    {
        _connectionString = connectionString;
    }
}
This is good because each individual class is simple and easy to test. But how in the world are you going to create a factory to create all of these objects for you? That factory would have to know/contain everything needed to create every single one of the objects.
The individual classes are better off, but composing them or creating instances becomes a major headache. What if there are parts of your code that only need one of these - do you create another factory? What if you have to change one of these classes so that now it has more or different dependencies? Now you have to go back and fix all your factories. That's a nightmare.
A DI container (again, this example is using Castle.Windsor) allows you to do this. At first it's going to look like more work, or just moving the problem around. But it's not:
var container = new WindsorContainer();
container.Register(
    Component.For<ClassThatDependsOnThreeThings>(),
    Component.For<IThingOne>().ImplementedBy<ThingOne>(),
    Component.For<IThingTwo>().ImplementedBy<ThingTwo>(),
    Component.For<IThingThree>().ImplementedBy<ThingThree>()
        .DependsOn(Dependency.OnValue("connectionString", ConfigurationManager.ConnectionStrings["xyz"].ConnectionString)),
    Component.For<IThingFour>().ImplementedBy<ThingFour>(),
    Component.For<IThingFive>().ImplementedBy<ThingFive>(),
    Component.For<IThingSix>().ImplementedBy<ThingSix>()
);
Now, if you do this:
var thing = container.Resolve<ClassThatDependsOnThreeThings>();
or
var thingTwo = container.Resolve<IThingTwo>();
as long as you've registered the type with the container and you've also registered whatever types are needed to fulfill all the nested dependencies, the container creates each object as needed, calling the constructor of each object, until it can finally create the object you asked for.
Another detail you'll probably notice is that none of these classes create the things they depend on. There is no new ThingThree(). Whatever each class depends on is specified in its constructor. That's one of the fundamental concepts of dependency injection. If a class just receives an instance of IThingThree, it never knows what the implementation is. It only depends on the interface and doesn't know anything about the implementation. That works toward Dependency Inversion, the "D" in SOLID. It helps protect your classes from getting coupled to specific implementation details.
That's very powerful. It means that, when properly configured, at any point in your code you can just ask for the dependency you need - usually as an interface - and simply receive it. The class that needs it doesn't have to know how to create it. That means that 90% of the time you don't even need a factory at all. The constructor of your class just says what it needs, and the container provides it.
(If you actually do need a factory, which does happen in some cases, Windsor and some other containers help you to create one. Here's an example.)
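For a rough idea of what that looks like with Windsor's typed factory facility (the IThingTwoFactory name below is made up for illustration; the container generates the implementation at runtime):

// Assumes: using Castle.Facilities.TypedFactory;
public interface IThingTwoFactory
{
    IThingTwo Create();
}

// At registration time:
container.AddFacility<TypedFactoryFacility>();
container.Register(Component.For<IThingTwoFactory>().AsFactory());

// Consumers take an IThingTwoFactory in their constructor and call
// factory.Create() when they need a new IThingTwo.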
Part of getting this to work involves learning how to configure the type of application you're using to use a DI container. For example, in an ASP.NET MVC application you would configure the container to create your controllers for you. That way if your controllers depend on more things, the container can create those things as needed. ASP.NET Core makes it easier by providing its own DI container so that all you have to do is register your various components.
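For instance, in ASP.NET Core the registration step looks roughly like this (a minimal sketch reusing the IThing names from above and assuming the usual Startup.Configuration property):

// In Startup.ConfigureServices, using the built-in container
// (Microsoft.Extensions.DependencyInjection):
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IThingOne, ThingOne>();
    services.AddTransient<IThingTwo, ThingTwo>();
    // Primitive constructor values can be supplied with a factory delegate:
    services.AddTransient<IThingThree>(sp =>
        new ThingThree(Configuration.GetConnectionString("xyz")));
    // Controllers that ask for these interfaces in their constructors
    // then get them from the framework automatically.
}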
This is an incomplete answer because it describes what the solution is without telling you how to implement it. That will require some more searching on your part, such as "How do I configure XYZ for dependency injection," or just learning more about the concept in general. One author called it something like a $5 term for a $.50 concept. It looks complicated and confusing until you try it and see how it works. Then you'll see why it's built into ASP.NET Core and Angular, and why dependency injection is used across all sorts of languages and frameworks.
When you reach the point - as you have - where you have the problems that DI solves, that's really exciting because it means you realize that there must be a better, cleaner way to accomplish what you're trying to do. The good news is that there is. Learning it and using it will have a ripple effect throughout your code, enabling you to better apply SOLID principles and write smaller classes that are easier to unit test.
I would recommend not using Activator.CreateInstance, since it is relatively slow and you lose compile-time safety (e.g. if you get the number or types of constructor parameters wrong, it will only fail at runtime with an exception).
I would suggest something like:
public IMyObj CreateType1(string param1, int param2)
{
    return new MyObj1(param1, param2);
}

public IMyObj CreateType2(bool param1, string param2, int param3)
{
    return new MyObj2(param1, param2, param3);
}
Use the Activator.CreateInstance(Type, Object[]) method overload:
Creates an instance of the specified type using the constructor that
best matches the specified parameters.
public IMyObj Create<T>(params object[] args)
{
    return (IMyObj)Activator.CreateInstance(typeof(T), args);
}
Alternatively
public IMyObj Create<T>(string param1, int param2) where T : MyObj1
{
    return (IMyObj)Activator.CreateInstance(typeof(T), param1, param2);
}

public IMyObj Create<T>(bool param1, string param2, int param3) where T : MyObj2
{
    return (IMyObj)Activator.CreateInstance(typeof(T), param1, param2, param3);
}
...
...
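For completeness, a usage sketch of the params-based factory above (the argument values are made up, and the call assumes the caller can see the concrete types):

var factory = new MyObjFactory();

// The arguments must match one of the target type's constructors,
// otherwise Activator.CreateInstance throws at runtime.
IMyObj obj1 = factory.Create<MyObj1>("abc", 42);
IMyObj obj2 = factory.Create<MyObj2>(true, "xyz", 7);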
Is this dependency injection if I change the code below
class Needer
{
    Needed obj;
    AnotherNeeded obj2;

    public Needer()
    {
        obj = new Needed();
        obj2 = new AnotherNeeded();
    }
}
To this code
class Needer
{
    Needed obj;
    AnotherNeeded obj2;

    public Needer(Needed param1, AnotherNeeded param2)
    {
        obj = param1;
        obj2 = param2;
    }
}
Robert C. Martin described the Dependency Inversion principle in his SOLID design principles. It basically states that:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
Notice a word being used a lot in that description? "Abstraction".
You got part of the problem right in your second example: you no longer manually instantiate instances of the class, you pass them in through the constructor instead. This leads to a new potential problem though: what if you need a different implementation of some class (e.g. a "mock" versus a "real" service)? If your constructor took the abstractions instead of the concrete types, you could swap the implementations in your IoC configuration.
Any sort of service or functional class should typically have an abstraction behind it. This allows your code to be more flexible, extendable and easier to maintain. So to make your second example use true dependency injection:
class Needer
{
    public INeeded obj { get; set; }
    public IAnotherNeeded obj2 { get; set; }

    public Needer(INeeded param1, IAnotherNeeded param2)
    {
        obj = param1;
        obj2 = param2;
    }
}
Now you can have all sorts of implementations:
public class MockNeeded : INeeded
public class ApiNeeded : INeeded
etc, etc
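For illustration (the IAnotherNeeded implementations here are made-up names), the same Needer can then be composed with either implementation:

// In a unit test
var testSubject = new Needer(new MockNeeded(), new MockAnotherNeeded());

// In production composition (or via an IoC container registration)
var needer = new Needer(new ApiNeeded(), new ApiAnotherNeeded());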
The first option tightly couples the dependent class to its dependencies by newing them up in the constructor. This makes testing that class in isolation very difficult (but not impossible).
The second option follows what is sometimes referred to as the Explicit Dependencies Principle.
The Explicit Dependencies Principle states:
Methods and classes should explicitly require (typically through method parameters or constructor parameters) any collaborating objects they need in order to function correctly.
That said, it is also usually advised to have dependent classes depend on abstractions and not on concrete types or implementation concerns.
So, assuming that those needed classes have interfaces/abstractions that they derive from, it would look like this:
class Needer {
    private readonly INeeded obj;
    private readonly IAnotherNeeded obj2;

    public Needer(INeeded param1, IAnotherNeeded param2) {
        obj = param1;
        obj2 = param2;
    }

    //...
}
This allows more flexibility with the dependent class, as it decouples it from implementation concerns, which in turn makes the class easier to test in isolation.
The second example is DI (dependency injection).
DI is usually used with an IoC (Inversion of Control) container, which takes care of creating the dependencies for you based on some configuration.
Have a look at Autofac.
The injection doesn't happen on its own.
Instead, there should be some kind of a factory that generates objects and uses a dependency resolver, if available.
For instance, the ASP.NET MVC framework is that kind of "factory": when it creates an instance of, let's say, a Controller class, it uses the DependencyResolver.Current property to populate the constructor arguments.
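As a rough sketch (ASP.NET MVC 5 API; the resolver class and the INeeded/ApiNeeded mapping are illustrative only, and a real resolver would delegate to a container):

// Assumes: using System.Web.Mvc; using System.Collections.Generic; using System.Linq;
public class SimpleDependencyResolver : IDependencyResolver
{
    public object GetService(Type serviceType)
    {
        if (serviceType == typeof(INeeded)) return new ApiNeeded();
        return null; // fall back to MVC's default behavior
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return Enumerable.Empty<object>();
    }
}

// In Global.asax, Application_Start:
// DependencyResolver.SetResolver(new SimpleDependencyResolver());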
I've got this app that uses Ninject for DI and I've got the following construction:
public class SomeServicHost
{
    private ISomeService service;

    public SomeServicHost(ISomeService service)
    {
        this.service = service;
    }

    public void DoSomething(int id)
    {
        // Injected instance
        this.service.DoWhatever("Whatever");

        // Without injection - this would be how it would be instantiated
        var s = new SomeService(new Repo(id));
        s.DoWhatever("Whatever");
    }
}

interface ISomeService
{
    void DoWhatever(string id);
}

class SomeService : ISomeService
{
    private IRepo SomeRepo;

    public SomeService(IRepo someRepo)
    {
        this.SomeRepo = someRepo;
    }

    public void DoWhatever(string id)
    {
        SomeRepo.DoSomething();
    }
}

interface IRepo
{
    void DoSomething();
}

class Repo : IRepo
{
    private int queueId;

    public Repo(int queueId)
    {
        this.queueId = queueId;
    }

    public void DoSomething()
    {
        // Whatever happens
    }
}
So normal injection is not going to work. I'm going to need a factory. So I could do something like:
interface IServiceFactory
{
    ISomeService GetService(int id);
}

class ServiceFactory : IServiceFactory
{
    public ISomeService GetService(int id)
    {
        return new SomeService(new Repo(id));
    }
}
Which I acually have already. But if I do that, I lose all of the DI goodness, like lifecycle management, or swap out one implementation with another. Is there a more elegant way of doing this?
Keeping the factory as you describe it seems fine to me. That's what they are for: when you have services that can only be instantiated at runtime depending on some runtime value (like id in your case), a factory is the way to go.
But if I do that, I lose all of the DI goodness, like lifecycle
management, or swap out one implementation with another.
DI is not an alternative to factories. You need both worlds, which leads to a flexible, loosely coupled design. Factories are for new-ing up dependencies; they know how to create them, with what configuration, and when to dispose them. And if you want to swap implementations, you still change only one place: only this time it's the factory, not your DI configuration.
As a last note, you might be tempted to use the service locator anti-pattern, as proposed in another answer. You'll just end up with a worse design: less testable, less clear, and possibly tightly coupled to your DI container. Your idea of using a factory is far better (here are more examples, similar to yours).
Instead of doing
new SomeService(new Repo(id));
you can do
IResolutionRoot.Get<Repo>(new ConstructorArgument("id", id));
IResolutionRoot is the type resolution interface of the kernel, and you can have it constructor injected. This will allow you to benefit from the "DI goodness", as you called it. Please note that you might also want to use the ninject.extensions.contextpreservation extension.
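As a rough sketch of that (the factory class name is illustrative, and it assumes Bind<ISomeService>().To<SomeService>() and Bind<IRepo>().To<Repo>() registrations):

// Assumes: using Ninject; using Ninject.Parameters; using Ninject.Syntax;
public class SomeServiceFactory : IServiceFactory
{
    private readonly IResolutionRoot resolutionRoot;

    public SomeServiceFactory(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public ISomeService GetService(int id)
    {
        // The kernel builds SomeService and its dependencies; the runtime id
        // is passed as a constructor argument that is inherited by the child
        // request for Repo (the final argument, true, enables that inheritance).
        return resolutionRoot.Get<ISomeService>(new ConstructorArgument("queueId", id, true));
    }
}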
The ninject.extensions.factory extension, which @Simon Whitehead pointed out already, helps you eliminate boilerplate code and basically does exactly what I've described above.
By moving the IResolutionRoot.Get<> call to another class (a factory), you unburden SomeService from the instantiation; the ninject.extensions.factory extension does exactly that. Since the extension does not need an actual implementation, you can even save yourself some unit tests! Furthermore, what if SomeService is extended and requires some more dependencies which the DI manages? No worries: since you are using the kernel to instantiate the class, it will work automatically, and the IoC container will manage your dependencies as it should.
If you weren't using DI to do that instantiation, you might end up using very little DI in the end.
As @zafeiris.m pointed out, this is called Service Locator, and it is also often called an anti-pattern. Rightly so! Whenever you can achieve a better solution, you should. Here you probably can't, so suffice it to say it's better to stick with the best solution no matter what people call it.
There may be ways to cut down on the use of service location. For example, if you have a business case like "UpdateOrder(id)" where the method creates a service for that ID, then another service for that ID, then a logger for that ID, you may want to change it to create one object that takes the ID as an inherited constructor argument and have the ID-specific services and logger injected into it, reducing the three factory/service-locator calls to one.
I've been reading up on how to write testable code and stumbled upon the Dependency Injection design pattern.
This design pattern is really easy to understand, and there is really nothing to it: the object asks for the values rather than creating them itself.
However, now that I'm thinking about how this could be used in the application I'm currently working on, I realize that there are some complications to it. Imagine the following example:
public class A {
    public string getValue() {
        return "abc";
    }
}

public class B {
    private A a;

    public B(A a) {
        this.a = a;
    }

    public void someMethod() {
        string str = a.getValue();
    }
}
Unit testing someMethod() would now be easy, since I can create a mock of A and have getValue() return whatever I want.
Class B's dependency on A is injected through the constructor, but this means that A has to be instantiated outside of class B, so the dependency has simply moved to another class. This would be repeated many layers down, and at some point the instantiation has to happen.
Now to the question: is it true that when using Dependency Injection, you keep passing the dependencies through all these layers? Wouldn't that make the code less readable and more time-consuming to debug? And when you reach the "top" layer, how would you unit test that class?
I hope I understand your question correctly.
Injecting Dependencies
No we don't pass the dependencies through all the layers. We only pass them to layers that directly talk to them. For example:
public class PaymentHandler {
    private CustomerRepository customerRepository;

    public PaymentHandler(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public void handlePayment(CustomerId customerId, Money amount) {
        Customer customer = customerRepository.findById(customerId);
        customer.charge(amount);
    }
}

public interface CustomerRepository {
    public Customer findById(CustomerId customerId);
}

public class DefaultCustomerRepository implements CustomerRepository {
    private Database database;

    public DefaultCustomerRepository(Database database) {
        this.database = database;
    }

    public Customer findById(CustomerId customerId) {
        Result result = database.executeQuery(...);
        // do some logic here
        return customer;
    }
}

public interface Database {
    public Result executeQuery(Query query);
}
PaymentHandler does not know about the Database, it only talks to CustomerRepository. The injection of Database stops at the repository layer.
Readability of the code
When doing manual injection without frameworks or libraries to help, we might end up with factory classes that contain a lot of boilerplate code like return new D(new C(new B(), new A()));, which at some point can become hard to read. To solve this problem we tend to use DI frameworks like Guice to avoid writing so many factories.
However, classes that actually do work / business logic should be more readable and understandable, as they only talk to their direct collaborators and do the work they need to do.
Unit Testing
I assume that by "Top" layer you mean the PaymentHandler class. In this example, we can create a stub CustomerRepository class and have it return a Customer object that we can check against, then pass the stub to the PaymentHandler to check whether the correct amount is charged.
The general idea is to pass in fake collaborators to control their output so that we can safely assert the behavior of the class under test (in this example the PaymentHandler class).
Why interfaces
As mentioned in the comments above, it is preferable to depend on interfaces instead of concrete classes; they provide better testability (easy to mock/stub) and easier debugging.
Hope this helps.
Well yes, that would mean you have to pass the dependencies through all the layers. However, that's where Inversion of Control containers come in handy. They allow you to register all components (classes) in the system. Then you can ask the IoC container for an instance of class B (in your example), which will automatically call the correct constructor for you, creating any objects the constructor depends upon (in your case, class A).
A nice discussion can be found here: Why do I need an IoC container as opposed to straightforward DI code?
IMO, your question demonstrates that you understand the pattern.
Used correctly, you would have a Composition Root where all dependencies are resolved and injected. Use of an IoC container here would resolve dependencies and pass them down through the layers for you.
This is in direct opposition to the Service Location pattern, which is considered by many as an anti-pattern.
Use of a Composition Root shouldn't make your code less readable/understandable, as well-designed classes with clear and relevant dependencies should be reasonably self-documenting. I'm not sure about unit testing a Composition Root. It has a discrete role, so it should be testable.
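To make that concrete with the A/B classes from the question, a minimal hand-wired Composition Root looks like this (an IoC container would do the same wiring from registrations):

public class Program {
    public static void Main() {
        // Composition Root: the only place where the object graph is wired up.
        A a = new A();
        B b = new B(a);

        b.someMethod();
    }
}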
I'm using C# with Microsoft's Unity framework. I'm not quite sure how to solve this problem. It probably has something to do with my lack of understanding DI with Unity.
My problem can be summed up using the following example code:
class Train(Person p) { ... }
class Bus(Person p) { ... }
class Person(string name) { ... }
Person dad = new Person("joe");
Person son = new Person("timmy");
When I call the resolve method on Bus, how can I be sure that the Person 'son' with the name 'timmy' is injected, and when resolving Train, how can I be sure that the Person 'dad' with the name 'joe' is resolved?
I'm thinking maybe use named instances? But I'm at a loss. Any help would be appreciated.
As an aside, I would rather not create an IPerson interface.
Unless you register "joe" and "timmy" as named dependencies, you can't be sure that "timmy" is injected into Bus. In fact, if you attempt to register two instances of the same class as unnamed dependencies, you will have an ambiguous setup, and you will not be able to resolve Person at all.
In general, if you have to register a lot of named instances you are probably going about DI in the wrong way. The main idea of DI is to resolve Domain Services more than Domain Objects.
The primary idea of DI is to provide a mechanism that allows you to resolve abstract types (interfaces or abstract classes) into concrete types. Your example has no abstract types, so it doesn't really make a lot of sense.
One way to solve this would be to use an injection constructor with a named registration.
// Register timmy this way
Person son = new Person("Timmy");
container.RegisterInstance<Person>("son", son);
// OR register timmy this way
container.RegisterType<Person>("son", new InjectionConstructor("Timmy"));
// Either way, register bus this way.
container.RegisterType<Bus>(new InjectionConstructor(container.Resolve<Person>("son")));
// Repeat for Joe / Train
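Then resolution works as usual (a small usage sketch, assuming the "dad"/Train registration is repeated as the comment above says):

// Unity builds Bus using the InjectionConstructor configured above,
// so the "son" instance ("Timmy") ends up in its constructor.
Bus bus = container.Resolve<Bus>();

// With the equivalent "dad" registrations in place:
Train train = container.Resolve<Train>();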
Mark Seemann got it right. And I sympathize with your confusion; I went through it myself when I learned to use automatic dependency injection containers. The problem is that there are many valid and reasonable ways to design and use objects, yet only some of those approaches work with automatic dependency injection containers.
My personal history: I learned OO principles of object construction and Inversion Of Control long before I learned how to use Inversion of Control containers like the Unity or Castle Windsor containers. I acquired the habit of writing code like this:
public class Foo
{
    IService _service;
    int _accountNumber;

    public Foo(IService service, int accountNumber)
    {
        _service = service;
        _accountNumber = accountNumber;
    }

    public void SaveAccount()
    {
        _service.Save(_accountNumber);
    }
}

public class Program
{
    public static void Main()
    {
        Foo foo = new Foo(new Service(), 1234);
        foo.SaveAccount();
    }
}
In this design, my Foo class is responsible for saving accounts to the database. It needs an account number to do that and a service to do the dirty work. This is somewhat similar to the concrete classes you provided above, where each object takes some unique values in its constructor. This works fine when you instantiate the objects with your own code; you can pass in the appropriate values at the right time.
However, when I learned about automatic dependency injection containers, I found that I was no longer instantiating Foo by hand. The container would instantiate the constructor arguments for me. This was a great convenience for services like IService, but it obviously does not work so well for integers and strings and the like. In those cases it would provide a default value (like zero for an integer), whereas I had been accustomed to passing in context-specific values like account number, name, etc... So I had to adjust my style of coding and design to be like this:
public class Foo
{
    IService _service;

    public Foo(IService service)
    {
        _service = service;
    }

    public void SaveAccount(int accountNumber)
    {
        _service.Save(accountNumber);
    }
}

public class Program
{
    public static void Main()
    {
        Foo foo = new Foo(new Service());
        foo.SaveAccount(1234);
    }
}
It appears that both Foo classes are valid designs, but the second is usable with automatic dependency injection and the first is not.
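A minimal sketch of why, using Castle Windsor as earlier in this thread (and assuming Service implements IService):

// Assumes: using Castle.Windsor; using Castle.MicroKernel.Registration;
var container = new WindsorContainer();
container.Register(
    Component.For<IService>().ImplementedBy<Service>(),
    Component.For<Foo>());

// Only services are registered; the runtime value stays a method argument.
var foo = container.Resolve<Foo>();
foo.SaveAccount(1234);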
In wanting to get some hands-on experience of good OO design I've decided to try to apply separation of concerns on a legacy app.
I decided that I wasn't comfortable with these calls being scattered all over the code base.
ConfigurationManager.AppSettings["key"]
While I've already tackled this before by writing a helper class to encapsulate those calls into static methods I thought it could be an opportunity to go a bit further.
I realise that ultimately I should be aiming to use dependency injection and always be 'coding to interfaces'. But I don't want to take what seems like too big a step. In the meantime I'd like to take smaller steps towards that ultimate goal.
Can anyone enumerate the steps they would recommend?
Here are some that come to mind:
Have client code depend on an interface, not a concrete implementation.
Manually inject dependencies into an interface via constructor or property?
Before going to the effort of choosing and applying an IoC container, how do I keep the code running?
In order to fulfil a dependency, the default constructor of any class that needs a configuration value could use a Factory (with a static CreateObject() method)? Surely I'll still have a concrete dependency on the Factory?...
I've dipped into Michael Feathers' book so I know that I need to introduce seams but I'm struggling to know when I've introduced enough or too many!
Update
Imagine that Client calls methods on WidgetLoader passing it the required dependencies (such as an IConfigReader)
WidgetLoader reads config to find out what Widgets to load and asks WidgetFactory to create each in turn
WidgetFactory reads config to know what state to put the Widgets into by default
WidgetFactory delegates to WidgetRepository to do the data access, which reads config to decide what diagnostics it should log
In each case above should the IConfigReader be passed like a hot potato between each member in the call chain?
Is a Factory the answer?
To clarify following some comments:
My primary aim is to gradually migrate some app settings out of the config file and into some other form of persistence. While I realise that with an injected dependency I can Extract and Override to get some unit testing goodness, my primary concern is not testing so much as encapsulating enough that the code can be ignorant of where the settings actually get persisted.
When refactoring a legacy code-base you want to iteratively make small changes over time. Here is one approach:
Create a new static class (e.g. MyConfigManager) with a method to get the app setting (e.g. GetAppSettingString(string key)).
Do a global search for ConfigurationManager.AppSettings["key"] and replace each instance with MyConfigManager.GetAppSettingString("key").
Test and check-in
Now your dependency on the ConfigurationManager is in one place. You can store your settings in a database or wherever, without having to change tons of code. The downside is that you still have a static dependency.
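A minimal sketch of what that static wrapper might look like (the names follow the steps above):

// Assumes: using System.Configuration;
public static class MyConfigManager
{
    public static string GetAppSettingString(string key)
    {
        // The only place that touches ConfigurationManager; later this body
        // can read from a database or another store without changing callers.
        return ConfigurationManager.AppSettings[key];
    }
}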
Next step would be to change MyConfigManager into a regular instance class and inject it into classes where it is used. Best approach here is to do it incrementally.
Create an instance class (and an interface) alongside the static class.
Now that you have both, you can refactor the using classes slowly until they are all using the instance class. Inject the instance into the constructor (using the interface). Don't try for the big bang check-in if there are lots of usages. Just do it slowly and carefully over time.
Then just delete the static class.
Usually it's very difficult to clean up a legacy application in small steps, because it was not designed to be changed in this way. If the code is completely intermingled and you have no SoC, it is difficult to change one thing without being forced to change everything else... It is also often very hard to unit test anything.
But in general you have to:
1) Find the simplest (smallest) class not refactored yet
2) Write unit tests for this class so that you have confidence that your refactoring didn't break anything
3) Do the smallest possible change (this depends on the project and your common sense)
4) Make sure all the tests pass
5) Commit and goto 1
I would like to recommend "Refactoring" by Martin Fowler to give you more ideas: http://www.amazon.com/exec/obidos/ASIN/0201485672
For your example, the first thing I'd do is to create an interface exposing the functionality you need to read config e.g.
public interface IConfigReader
{
    string GetAppSetting(string key);
    ...
}
and then create an implementation which delegates to the static ConfigurationManager class:
public class StaticConfigReader : IConfigReader
{
    public string GetAppSetting(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}
Then for a particular class with a dependency on the configuration you can create a seam which initially just returns an instance of the static config reader:
public class ClassRequiringConfig
{
    public void MethodUsingConfig()
    {
        string setting = this.GetConfigReader().GetAppSetting("key");
    }

    protected virtual IConfigReader GetConfigReader()
    {
        return new StaticConfigReader();
    }
}
Then replace all references to ConfigurationManager with usages of your interface. For testing purposes you can subclass this class and override the GetConfigReader method to inject fakes, so you don't need any actual config file:
public class TestClassRequiringConfig : ClassRequiringConfig
{
    public IConfigReader ConfigReader { get; set; }

    protected override IConfigReader GetConfigReader()
    {
        return this.ConfigReader;
    }
}

[Test]
public void TestMethodUsingConfig()
{
    ClassRequiringConfig sut = new TestClassRequiringConfig { ConfigReader = fakeConfigReader };
    sut.MethodUsingConfig();
    //Assertions
}
Then eventually you will be able to replace this with property/constructor injection when you add an IoC container.
EDIT:
If you're not happy with injecting instances into individual classes like this (which would be quite tedious if many classes depend on configuration) then you could create a static configuration class, and then allow temporary changes to the config reader for testing:
public static class Configuration
{
    private static Func<IConfigReader> _configReaderFunc = () => new StaticConfigReader();

    public static IConfigReader GetConfigReader()
    {
        return _configReaderFunc();
    }

    public static IDisposable CreateConfigScope(IConfigReader reader)
    {
        return new ConfigReaderScope(() => reader);
    }

    private class ConfigReaderScope : IDisposable
    {
        private readonly Func<IConfigReader> _oldReaderFunc;

        public ConfigReaderScope(Func<IConfigReader> newReaderFunc)
        {
            this._oldReaderFunc = _configReaderFunc;
            _configReaderFunc = newReaderFunc;
        }

        public void Dispose()
        {
            _configReaderFunc = this._oldReaderFunc;
        }
    }
}
Then your classes just access the config through the static class:
public void MethodUsingConfig()
{
    string value = Configuration.GetConfigReader().GetAppSetting("key");
}
and your tests can use a fake through a temporary scope:
[Test]
public void TestMethodUsingConfig()
{
    using(var scope = Configuration.CreateConfigScope(fakeReader))
    {
        new ClassUsingConfig().MethodUsingConfig();
        //Assertions
    }
}