Let's say I am defining a browser implementation class for my application:
class InternetExplorerBrowser : IBrowser {
    private readonly string executablePath = @"C:\Program Files\...\...\ie.exe";
    ...code that uses executablePath
}
This might at first glance look like a good idea, as the executablePath data is near the code that will use it.
The problem comes when I try to run the same application on my other computer, which has a foreign-language OS: executablePath will need a different value there.
I could solve this through an AppSettings singleton class (or one of its equivalents), but then no one knows my class actually depends on this AppSettings class (which goes against DI ideas). It might make unit testing harder, too.
I could solve both problems by having executablePath being passed in through the constructor:
class InternetExplorerBrowser : IBrowser {
    private readonly string executablePath;
    public InternetExplorerBrowser(string executablePath) {
        this.executablePath = executablePath;
    }
}
but this raises problems in my Composition Root (the startup method that does all the class wiring), because that method now has to know both how to wire everything up and all of these little settings values:
class CompositionRoot {
    public void Run() {
        ClassA classA = new ClassA();
        string ieSetting1 = @"C:\asdapo\poka\poskdaposka.exe";
        string ieSetting2 = "IE_SETTING_ABC";
        string ieSetting3 = "lol.bmp";
        ClassB classB = new ClassB(ieSetting1);
        ClassC classC = new ClassC(classB, ieSetting2, ieSetting3);
        ...
    }
}
which can easily turn into a big mess.
I could turn this problem around by instead passing an interface of the form
interface IAppSettings {
    object GetData(string name);
}
to all the classes that need some sort of settings. Then I could implement this either as a regular class with all the settings embedded in it, or as a class that reads the data off an XML file, something along those lines. If I do this, should I have one general AppSettings instance for the whole system, or an AppSettings class associated with each class that might need one? The latter certainly seems like overkill. Also, having all the application settings in the same place makes it easy to look and see what changes I need to make when trying to move the program to a different platform.
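For illustration, here is a minimal sketch of the file-based option, assuming the values live in the standard appSettings section of the config file (the class name ConfigFileAppSettings is made up):
// Hypothetical implementation of IAppSettings that reads from ConfigurationManager.AppSettings.
class ConfigFileAppSettings : IAppSettings {
    public object GetData(string name) {
        // Returns null if the key is missing; callers cast the result to the expected type.
        return ConfigurationManager.AppSettings[name];
    }
}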
What might be the best way to approach this common situation?
Edit:
And what about using an IAppSettings with all its settings hardcoded in it?
interface IAppSettings {
    string IE_ExecutablePath { get; }
    int IE_Version { get; }
    ...
}
This would allow for compile-time type safety. If I saw the interface/concrete classes grow too much I could create other, smaller interfaces of the form IMyClassXAppSettings. Would it be too heavy a burden to bear in medium/big-sized projects?
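A concrete implementation of that interface could again pull the raw values from wherever you decide to persist them; as a rough sketch, assuming ConfigurationManager.AppSettings and made-up key names:
class ConfigAppSettings : IAppSettings {
    // Hypothetical keys; the point is that consumers see typed properties, not string lookups.
    public string IE_ExecutablePath {
        get { return ConfigurationManager.AppSettings["IE_ExecutablePath"]; }
    }
    public int IE_Version {
        get { return int.Parse(ConfigurationManager.AppSettings["IE_Version"]); }
    }
}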
I've also been reading about AOP and its advantages in dealing with cross-cutting concerns (I guess this is one). Couldn't it also offer a solution to this problem? Maybe by tagging variables like this:
class InternetExplorerBrowser : IBrowser {
    [AppSetting] string executablePath;
    [AppSetting] int ieVersion;
    ...code that uses executablePath
}
Then, when compiling the project we'd also have compile-time safety (the compiler could check that we actually implemented code that would weave in the data). This would, of course, tie our API to this particular aspect.
The individual classes should be as free from infrastructure as possible - constructs like IAppSettings, IMyClassXAppSettings, and [AppSetting] bleed composition details into classes which, at their simplest, really only depend on raw values such as executablePath. The art of Dependency Injection is in the factoring of concerns.
I have implemented this exact pattern using Autofac, which has modules similar to Ninject and should result in similar code (I realize the question doesn't mention Ninject, but the OP does in a comment).
Modules organize applications by subsystem. A module exposes a subsystem's configurable elements:
public class BrowserModule : Module
{
    private readonly string _executablePath;

    public BrowserModule(string executablePath)
    {
        _executablePath = executablePath;
    }

    public override void Load(ContainerBuilder builder)
    {
        builder
            .Register(c => new InternetExplorerBrowser(_executablePath))
            .As<IBrowser>()
            .InstancePerDependency();
    }
}
This leaves the composition root with the same problem: it must supply the value of executablePath. To avoid the configuration soup, we can write a self-contained module which reads configuration settings and passes them to BrowserModule:
public class ConfiguredBrowserModule : Module
{
    public override void Load(ContainerBuilder builder)
    {
        var executablePath = ConfigurationManager.AppSettings["ExecutablePath"];
        builder.RegisterModule(new BrowserModule(executablePath));
    }
}
You could consider using a custom configuration section instead of AppSettings; the changes would be localized to the module:
public class BrowserSection : ConfigurationSection
{
    [ConfigurationProperty("executablePath")]
    public string ExecutablePath
    {
        get { return (string) this["executablePath"]; }
        set { this["executablePath"] = value; }
    }
}

public class ConfiguredBrowserModule : Module
{
    public override void Load(ContainerBuilder builder)
    {
        var section = (BrowserSection) ConfigurationManager.GetSection("myApp.browser");

        if (section == null)
        {
            section = new BrowserSection();
        }

        builder.RegisterModule(new BrowserModule(section.ExecutablePath));
    }
}
This is a nice pattern because each subsystem has an independent configuration which gets read in a single place. With a single string value the only benefit here is a more obvious intent; for non-string values or complex schemas, though, we can let System.Configuration do the heavy lifting.
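To tie this together, the composition root then only registers the self-contained modules and never sees the raw setting values; a minimal sketch using the types above:
// Composition root: no raw setting values appear here anymore.
var builder = new ContainerBuilder();
builder.RegisterModule(new ConfiguredBrowserModule());
// ...register other subsystem modules here...
using (var container = builder.Build())
{
    var browser = container.Resolve<IBrowser>();
    // ...run the application...
}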
I'd go with the last option - pass in an object that complies with the IAppSettings interface. In fact, I recently performed that refactor at work in order to sort out some unit tests, and it worked nicely. However, only a few classes in that project depended on the settings.
I'd go with creating a single instance of the settings class, and pass that in to anything that's dependent upon it. I can't see any fundamental problem with that.
However, I think you've already thought about this and seen how it can be a pain if you have lots of classes dependent on the settings.
If this is a problem for you, you can work around it by using a dependency injection framework such as Ninject (sorry if you're already aware of projects like Ninject - this might sound a bit patronizing; if you're unfamiliar, the "why use Ninject" sections on GitHub are a good place to learn).
Using Ninject, for your main project you can declare that you want any class with a dependency on IAppSettings to use a singleton instance of your AppSettings-based class, without having to explicitly pass it in to constructors everywhere.
You can then set up your system differently for your unit tests by stating that you want to use an instance of MockAppSettings wherever IAppSettings is used, or by simply passing your mock objects in directly.
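As a rough sketch of what that Ninject wiring could look like (AppSettings and MockAppSettings are the hypothetical implementations mentioned above):
// Production wiring: every class asking for IAppSettings gets the same instance.
kernel.Bind<IAppSettings>().To<AppSettings>().InSingletonScope();

// Test wiring: swap in a fake without changing the classes under test.
kernel.Bind<IAppSettings>().To<MockAppSettings>();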
I hope I've got the gist of your question right and that I've helped - you already sound like you know what you're doing :)
Related
New to OOP here. I have defined an interface with one method, and in my derived class I defined another public method. My client code is conditionally instantiating a class of the interface type, and of course the compiler doesn't know about the method in one of the derived classes as it is not part of the underlying interface definition. Here is what I am talking about:
public interface IFileLoader
{
    bool Load();
}

public class FileLoaderA : IFileLoader
{
    public bool Load()
    {
        //implementation
    }

    public void SetStatus(FileLoadStatus status)
    {
        //implementation
    }
}

public class FileLoaderB : IFileLoader
{
    public bool Load()
    {
        //implementation
    }

    //note B does not have a SetStatus method
}

public enum FileLoadStatus
{
    Started,
    Done,
    Error
}
// client code
IFileLoader loader;
if (Config.UseMethodA)
{
    loader = new FileLoaderA();
}
else
{
    loader = new FileLoaderB();
}
//does not know about this method
loader.SetStatus(FileLoadStatus.Done);
I guess I have two questions:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
I know people talk of IoC/DI all the time. Being new to OOP, what is the advantage of using an IoC container in order to say, "when my app asks for an IFileLoader type, use concrete class X", as opposed to simply using an App.config file to get the setting?
Referring to your two questions and your other post I'd recommend the following:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
You don't necessarily need to find out the concrete implementation at runtime in your client code. Following that approach, you kinda defeat the crucial purpose of an interface. Hence it's rather useful to just naïvely use the interface and let the concrete logic behind it decide what to do.
So in your case, if one implementation is just able to load a file - fine. If your other implementation is able to do the same and a bit more, that's fine, too. But the client code (in your case your console application) shouldn't care about it and should just use Load().
Maybe some code says more than thousand words:
public class ThirdPartyLoader : IFileLoader
{
    public bool Load(string fileName)
    {
        // simply acts as a wrapper around your 3rd party tool
        return true;
    }
}

public class SmartLoader : IFileLoader
{
    private readonly ICanSetStatus _statusSetter;

    public SmartLoader(ICanSetStatus statusSetter)
    {
        _statusSetter = statusSetter;
    }

    public bool Load(string fileName)
    {
        _statusSetter.SetStatus(FileStatus.Started);
        // do whatever's necessary to load the file ;)
        _statusSetter.SetStatus(FileStatus.Done);
        return true;
    }
}
Note that the SmartLoader does a bit more. But as a matter of separation of concerns its purpose is the loading part. The setting of a status is another class' task:
public interface ICanSetStatus
{
    void SetStatus(FileStatus fileStatus);
    // maybe add a second parameter with information about the file, so that an
    // implementation of this interface knows everything that's needed
}

public class StatusSetter : ICanSetStatus
{
    public void SetStatus(FileStatus fileStatus)
    {
        // do whatever's necessary...
    }
}
Finally your client code could look something like the following:
static void Main(string[] args)
{
    bool useThirdPartyLoader = GetInfoFromConfig();
    IFileLoader loader = FileLoaderFactory.Create(useThirdPartyLoader);
    var files = GetFilesFromSomewhere();
    ProcessFiles(loader, files);
}

public static class FileLoaderFactory
{
    public static IFileLoader Create(bool useThirdPartyLoader)
    {
        if (useThirdPartyLoader)
        {
            return new ThirdPartyLoader();
        }

        return new SmartLoader(new StatusSetter());
    }
}
Note that this is just one possible way to do what you're looking for without needing to determine IFileLoader's concrete implementation at runtime. There may be other, more elegant ways, which furthermore leads me to your next question.
I know people talk of IoC/DI all the time. Being new to OOP, what is the advantage of using an IoC [...], as opposed to simply using an App.config file to get the setting?
First of all, separating classes' responsibilities is always a good idea, especially if you want to painlessly unit-test your classes. Interfaces are your friends in those moments, as you can easily substitute or "mock" instances by e.g. utilizing NSubstitute. Moreover, small classes are generally more easily maintainable.
The attempt above already relies on some sort of inversion of control. The Main method knows barely anything about how to instantiate a loader (although the factory could do the config lookup as well; then Main wouldn't know anything and would just use the instance).
Broadly speaking: instead of writing the boilerplate factory instantiation code, you could use a DI framework like Ninject or maybe Castle Windsor, which lets you put the binding logic into configuration files if that best fits your needs.
To make a long story short: You could simply use a boolean appSetting in your app.config that tells your code which implementation to use. But you could use a DI-Framework instead and make use of its features to easily instantiate other classes as well. It may be a bit oversized for this case, but it's definitely worth a look!
Use something like:
if ((loader as FileLoaderA) != null)
{
    ((FileLoaderA)loader).SetStatus(FileStatus.Done);
}
else
{
    // Do something with it as FileLoaderB type
}
IoC is normally used in situations where your class depends on another class that needs to be set up first. The IoC container can instantiate/set up an instance of that class for your class to use and inject it into your class, usually via the constructor. It then hands you an instance of your class that is set up and ready to go.
EDIT:
I was just trying to keep the code concise and easy to follow. I agree that this is not the most efficient form for this code (it actually performs the cast twice).
For the purpose of determining if a particular cast is valid Microsoft suggests using the following form:
var loaderA = loader as FileLoaderA;
if (loaderA != null)
{
    loaderA.SetStatus(FileStatus.Done);
    // Do any remaining FileLoaderA stuff
    return;
}

var loaderB = loader as FileLoaderB;
if (loaderB != null)
{
    // Do FileLoaderB stuff
    return;
}
I do not agree with using is in the if. The is keyword was designed to determine if an object was instantiated from a class that implements a particular interface, rather than if a cast is viable. I have found it does not always return the expected result (especially if a class implements multiple interfaces through direct implementation or inheritance of a base class).
I have implemented a solution that has some core reusable classes that are easily registered and resolved using StructureMap. I then have an abstract factory to load additional families of products at runtime.
If I have StructureMap registries like this one:
public ProductAClaimsRegistry()
{
    var name = InstanceKeys.ProductA;

    this.For<IClaimsDataAccess>().LifecycleIs(new UniquePerRequestLifecycle()).Use<ProductAClaimsDataAccess>().Named(name)
        .Ctor<Func<DbConnection>>().Is(() => new SqlConnection(ConfigReader.ClaimsTrackingConnectionString));

    this.For<IClaimPreparer>().LifecycleIs(new UniquePerRequestLifecycle()).Use<ProductAClaimPreparer>().Named(name);
    this.For<IHistoricalClaimsReader>().LifecycleIs(new UniquePerRequestLifecycle()).Use<ProductAHistoricalClaimReader>().Named(name);
    this.For<IProviderClaimReader>().LifecycleIs(new UniquePerRequestLifecycle()).Use<ProductAProviderClaimReader>().Named(name);
}
There may be a version for ProductB, ProductC and so on.
My abstract factory then loads the correct named instance like this:
public abstract class AbstractClaimsFactory
{
    private IClaimsReader claimsReader;
    private IClaimPreparer claimPreparer;

    protected string InstanceKey { get; set; }

    public virtual IClaimsReader CreateClaimReader()
    {
        return this.claimsReader;
    }

    public virtual IClaimPreparer CreateClaimPreparer()
    {
        return this.claimPreparer;
    }

    public void SetInstances()
    {
        this.CreateInstances();

        var historicalReader = ObjectFactory.Container.GetInstance<IHistoricalClaimsReader>(this.InstanceKey);
        var providerReader = ObjectFactory.Container.GetInstance<IProviderClaimReader>(this.InstanceKey);
        this.claimsReader = new ClaimsReader(historicalReader, providerReader);

        this.claimPreparer = ObjectFactory.Container.GetInstance<IClaimPreparer>(this.InstanceKey);
    }

    protected abstract void CreateInstances();
}
At runtime there is a processor class that has a concrete factory injected like this:
public void Process(AbstractClaimsFactory claimsFactory)
{
// core algorithm implemented
}
A concrete factory then exists for each product:
public class ProductAClaimsFactory : AbstractClaimsFactory
{
    public ProductAClaimsFactory()
    {
        SetInstances();
    }

    protected override void CreateInstances()
    {
        InstanceKey = InstanceKeys.ProductA;
    }
}
EDIT
The classes loaded in the factory are used by other classes that are Product agnostic - but they need to inject ProductA or ProductB behaviour.
public ClaimsReader(IHistoricalClaimsReader historicalClaimsReader, IProviderClaimReader providerClaimsReader)
{
    this.historicalClaimsReader = historicalClaimsReader;
    this.providerClaimsReader = providerClaimsReader;
}
I'm not exactly sure if this is a textbook abstract factory pattern, and I'm new to StructureMap and more advanced DI in general.
With this solution I think I have enforced a core algorithm and reused code where appropriate.
I also think that it is extensible as ProductN can easily be added without changing existing code.
The solution also has very good code coverage and the tests are very simple.
So, bottom line is: I am fairly happy with this solution, but a colleague has questioned it, specifically around using ObjectFactory.Container.GetInstance<IClaimPreparer>(this.InstanceKey); to load named instances, and he said it looks like the Service Locator anti-pattern.
Is he correct?
If so, can anyone point out the downsides of this solution and how I might go about improving it?
This is service location. It's a problem as you have introduced a dependency on your service locator, ObjectFactory, rather than the interface, IClaimPreparer, your AbstractClaimsFactory class actually needs. This is going to make testing harder as you'll struggle to fake an implementation of IClaimPreparer. It also clouds the intention of your class as the class's dependencies are 'opaque'.
You need to look into the use of a Composition Root to resolve the anti-pattern. Have a look at Mark Seemann's work to find out more.
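As a rough sketch of the direction that implies (this is not the OP's actual code, and it deliberately drops the abstract base class for brevity), the factory would declare the dependencies it needs and only the composition root would ever talk to the container:
// Hypothetical reworking: dependencies are stated up front,
// so there are no ObjectFactory calls inside the class.
public class ProductAClaimsFactory
{
    private readonly IClaimsReader claimsReader;
    private readonly IClaimPreparer claimPreparer;

    public ProductAClaimsFactory(IHistoricalClaimsReader historicalReader,
                                 IProviderClaimReader providerReader,
                                 IClaimPreparer claimPreparer)
    {
        this.claimsReader = new ClaimsReader(historicalReader, providerReader);
        this.claimPreparer = claimPreparer;
    }

    public IClaimsReader CreateClaimReader() { return this.claimsReader; }
    public IClaimPreparer CreateClaimPreparer() { return this.claimPreparer; }
}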
He's partially correct. Given a good DI container it is possible to register all your components and resolve the root object in your object tree... the DI container handles creating all the dependencies for the root object (recursively) and creates the whole object tree for you. Then you can throw the DI container away. The nice thing about doing it that way is that all references to the DI container are confined to the entry point of your app.
However, you are at least one step ahead of the curve since you didn't resolve dependencies in the constructor (or somewhere else) of the object using them, but instead resolved those in the factory and passed them in to the objects that need them via constructor injection ;) (That's something I see often in code I work on, and that is definitely an anti-pattern.)
Here's a bit more about service locators and how they can be an anti-pattern:
http://martinfowler.com/articles/injection.html
http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/
Here's a bit more about the configure-resolve-release type pattern I hinted at:
http://blog.ploeh.dk/2010/08/30/Dontcallthecontainer;itllcallyou/
http://kozmic.net/2010/06/20/how-i-use-inversion-of-control-containers/
Okay, so recently I've been reading into Ninject but I am having trouble understanding what makes it better than what they refer to as 'poor man's' DI on the wiki page. The sad thing is I went over all their pages on the wiki and still don't get it =(.
Typically I will wrap my service classes in a factory pattern that handles the DI like so:
public static class SomeTypeServiceFactory
{
    public static SomeTypeService GetService()
    {
        SomeTypeRepository someTypeRepository = new SomeTypeRepository();
        return new SomeTypeService(someTypeRepository);
    }
}
Which to me seems a lot like the modules:
public class WarriorModule : NinjectModule {
    public override void Load() {
        Bind<IWeapon>().To<Sword>();
        Bind<Samurai>().ToSelf().InSingletonScope();
    }
}
Where each class would have its associated module and you Bind its constructor to a concrete implementation. While the Ninject code is one line shorter, I am just not seeing the advantage: any time you add/remove constructors or change the implementation behind an interface, you'd have to change the module pretty much the same way as you would in the factory, no? So I'm not seeing the advantage here.
Then I thought I could come up with a generic convention based factory like so:
public static TServiceClass GetService<TServiceClass>()
    where TServiceClass : class
{
    TServiceClass serviceClass = null;
    string repositoryName = typeof(TServiceClass).ToString().Replace("Service", "Repository");
    Type repositoryType = Type.GetType(repositoryName);

    if (repositoryType != null)
    {
        object repository = Activator.CreateInstance(repositoryType);
        serviceClass = (TServiceClass)Activator.CreateInstance(typeof(TServiceClass), new[] { repository });
    }

    return serviceClass;
}
However, this is crappy for 2 reasons: 1) It's tightly dependent on the naming convention; 2) It assumes the repository will never have any constructor parameters (not true) and that the service's only constructor parameter will be its corresponding repo (also not true). I was told "hey, this is where you should use an IoC container, it would be great here!" And thus my research began... but I am just not seeing it and am having trouble understanding it...
Is there some way Ninject can automatically resolve the constructors of a class without a specific declaration, so that it could be used in my generic factory? (I also realize I could just do this manually using reflection, but that's a performance hit, and Ninject says right on their page they don't use reflection.)
Enlightment on this issue and/or showing how it could be used in my generic factory would be much appreciated!
EDIT: Answer
So thanks to the explanation below I was able to fully understand the awesomeness of Ninject, and my generic factory looks like this:
public static class EntityServiceFactory
{
    public static TServiceClass GetService<TServiceClass>()
        where TServiceClass : class
    {
        IKernel kernel = new StandardKernel();
        return kernel.Get<TServiceClass>();
    }
}
Pretty awesome. Everything is handled automatically since concrete classes have implicit binding.
The benefit of IoC containers grows with the size of the project. For small projects their benefit compared to "Poor Man's DI" like your factory is minimal. Imagine a large project which has thousands of classes and some services are used in many classes. In this case you only have to say once how these services are resolved. In a factory you have to do it again and again for every class.
Example: If you have a service MyService : IMyService and a class A that requires IMyService, you have to tell Ninject how it shall resolve these types, just like in your factory. Here the benefit is minimal. But as soon as your project grows and you add a class B which also depends on IMyService, you just have to tell Ninject how to resolve B. Ninject already knows how to get the IMyService. In the factory, on the other hand, you have to define again how B gets its IMyService.
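A rough sketch of that difference, using the type names from the example above:
// Declared once.
kernel.Bind<IMyService>().To<MyService>();

// Both consumers are resolved without repeating how IMyService is built;
// Ninject supplies it to A's and B's constructors automatically.
var a = kernel.Get<A>();
var b = kernel.Get<B>();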
To take it one step further, you shouldn't define bindings one by one in most cases. Instead, use convention-based configuration (Ninject.Extensions.Conventions). With this you can group classes together (Services, Repositories, Controllers, Presenters, Views, ...) and configure them in the same way. E.g. tell Ninject that all classes whose names end with Service shall be singletons and publish all their interfaces. That way you have one single configuration and no change is required when you add another service.
Also, IoC containers aren't just factories. There is much more: e.g. lifecycle management, interception, .... For example, the convention-based configuration mentioned above could look like this:
kernel.Bind(
    x => x.FromThisAssembly()
          .SelectAllClasses()
          .InNamespace("Services")
          .BindToAllInterfaces()
          .Configure(b => b.InSingletonScope()));

kernel.Bind(
    x => x.FromThisAssembly()
          .SelectAllClasses()
          .InNamespace("Repositories")
          .BindToAllInterfaces());
To be fully analogous, your factory code should read:
public static class SomeTypeServiceFactory
{
    public static ISomeTypeService GetService()
    {
        SomeTypeRepository someTypeRepository = new SomeTypeRepository();
        // Somewhere in here I need to figure out if I'm in testing mode
        // and I have to do this in a scope which is not in the setup of my
        // unit tests
        return new SomeTypeService(someTypeRepository);
    }

    private static ISomeTypeService GetServiceForTesting()
    {
        SomeTypeRepository someTypeRepository = new SomeTypeRepository();
        return new SomeTestingTypeService(someTypeRepository);
    }
}
And the equivalent in Ninject would be:
public class WarriorModule : NinjectModule {
    public override void Load() {
        Bind<ISomeTypeService>().To<SomeTypeService>();
    }
}

public class TestingWarriorModule : NinjectModule {
    public override void Load() {
        Bind<ISomeTypeService>().To<SomeTestingTypeService>();
    }
}
Here, you can define the dependencies declaratively, ensuring that the only differences between your testing and production code are confined to the setup phase.
The advantage of an IoC is not that you don't have to change the module each time the interface or constructor changes; it's the fact that you can declare the dependencies declaratively and that you can plug and play different modules for different purposes.
I'm a complete newbie to ninject
I've been pulling apart someone else's code and found several instances of Ninject modules - classes that derive from Ninject.Modules.Module and have a Load method that contains most of their code.
These classes are called by invoking the LoadModule method of an instance of StandardKernel and passing it an instance of the module class.
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
The Ninject modules are the tools used to register the various types with the IoC container. The advantage is that these modules are then kept in their own classes. This allows you to put different tiers/services in their own modules.
// some method early in your app's life cycle
public IKernel BuildKernel()
{
    var modules = new INinjectModule[]
    {
        new LinqToSqlDataContextModule(), // just my L2S binding
        new WebModule(),
        new EventRegistrationModule()
    };

    return new StandardKernel(modules);
}
// in LinqToSqlDataContextModule.cs
public class LinqToSqlDataContextModule : NinjectModule
{
    public override void Load()
    {
        Bind<IRepository>().To<LinqToSqlRepository>();
    }
}
Having multiple modules allows for separation of concerns, even within your IoC container.
The rest of your question sounds like it is more about IoC and DI as a whole, and not just Ninject. Yes, you could use static configuration objects to do just about everything that an IoC container does. IoC containers become really nice when you have multiple hierarchies of dependencies.
public interface IInterfaceA {}
public interface IInterfaceB {}
public interface IInterfaceC {}

public class ClassA : IInterfaceA {}

public class ClassB : IInterfaceB
{
    public ClassB(IInterfaceA a) {}
}

public class ClassC : IInterfaceC
{
    public ClassC(IInterfaceB b) {}
}
Building ClassC is a pain at this point, with multiple depths of interfaces. It's much easier to just ask the kernel for an IInterfaceC.
var newc = ApplicationScope.Kernel.Get<IInterfaceC>();
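For that call to work, the bindings just need to be registered somewhere, for example in a module (the module name here is made up):
public class ExampleModule : NinjectModule
{
    public override void Load()
    {
        Bind<IInterfaceA>().To<ClassA>();
        Bind<IInterfaceB>().To<ClassB>();
        Bind<IInterfaceC>().To<ClassC>();
        // Get<IInterfaceC>() now builds ClassC, its IInterfaceB dependency,
        // and that dependency's IInterfaceA dependency automatically.
    }
}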
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
Yes, you can just call a bunch of Bind<X>().To<Z>() statements to set up the bindings, without a module.
The difference is that if you put these statements in a module then:
IKernel.Load(IEnumerable<Assembly>) can dynamically discover such modules through reflection and load them.
the bindings are logically grouped together under a name; you can use this name to unload them again with IKernel.Unload(string)
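A small sketch of both points (the assembly scan and the module name are illustrative; by default a module's name is its type's full name):
// Discover and load every module in the given assemblies via reflection.
kernel.Load(AppDomain.CurrentDomain.GetAssemblies());

// Later, unload one logical group of bindings by its module name.
kernel.Unload("MyApp.LinqToSqlDataContextModule");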
Maybe I'm missing something obvious here, but what is the benefit of this over just creating a plain old class and calling its method, or perhaps a static class with a static method?
For us, it is the ability to add tests at a later time very easily. Just override a few bindings with mock objects and voilà. On legacy code without DI, where "everything" is wired up by hand, it is near impossible to start inserting test cases without some rework. With DI in place, and as long as it was used properly so the DI container wired everything up, it is very simple to do so even on legacy code that may be very ugly.
In many DI frameworks, you can use the production module for your test together with a test module that overrides specific bindings with mock objects (leaving the rest of the wiring in place). These may be system tests more than unit tests, but I tend to prefer higher-level tests than the average developer, as they test the integration between classes and are great documentation for someone who joins the project and can see the whole feature in action (instead of just parts of the feature) without having to set up a whole system.
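With Ninject that could look roughly like this (TestOverridesModule and MockRepository are made-up names; Rebind replaces an existing binding instead of adding a second one):
public class TestOverridesModule : NinjectModule
{
    public override void Load()
    {
        // Keep the production wiring, but swap the repository for a mock.
        Rebind<IRepository>().To<MockRepository>();
    }
}

// Test setup: load the production module first, then the overrides.
var kernel = new StandardKernel(new LinqToSqlDataContextModule(), new TestOverridesModule());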
In wanting to get some hands-on experience of good OO design I've decided to try to apply separation of concerns on a legacy app.
I decided that I wasn't comfortable with these calls being scattered all over the code base.
ConfigurationManager.AppSettings["key"]
While I've already tackled this before by writing a helper class to encapsulate those calls into static methods I thought it could be an opportunity to go a bit further.
I realise that ultimately I should be aiming to use dependency injection and always be 'coding to interfaces'. But I don't want to take what seems like too big a step. In the meantime I'd like to take smaller steps towards that ultimate goal.
Can anyone enumerate the steps they would recommend?
Here are some that come to mind:
Have client code depend on an interface not a concrete implementation
Manually inject dependencies into an interface via constructor or property?
Before going to the effort of choosing and applying an IoC container, how do I keep the code running?
In order to fulfil a dependency, the default constructor of any class that needs a configuration value could use a Factory (with a static CreateObject() method)?
Surely I'll still have a concrete dependency on the Factory?...
I've dipped into Michael Feathers' book so I know that I need to introduce seams but I'm struggling to know when I've introduced enough or too many!
Update
Imagine that Client calls methods on WidgetLoader passing it the required dependencies (such as an IConfigReader)
WidgetLoader reads config to find out what Widgets to load and asks WidgetFactory to create each in turn
WidgetFactory reads config to know what state to put the Widgets into by default
WidgetFactory delegates to WidgetRepository to do the data access, which reads config to decide what diagnostics it should log
In each case above should the IConfigReader be passed like a hot potato between each member in the call chain?
Is a Factory the answer?
To clarify following some comments:
My primary aim is to gradually migrate some app settings out of the config file and into some other form of persistence. While I realise that with an injected dependency I can Extract and Override to get some unit-testing goodness, my primary concern is not testing so much as encapsulating enough that I can stay ignorant of where the settings actually get persisted.
When refactoring a legacy code-base you want to iteratively make small changes over time. Here is one approach:
Create a new static class (e.g. MyConfigManager) with a method to get the app setting (e.g. GetAppSettingString(string key))
Do a global search for ConfigurationManager.AppSettings["key"] and replace instances with MyConfigManager.GetAppSettingString("key")
Test and check-in
Now your dependency on the ConfigurationManager is in one place. You can store your settings in a database or wherever, without having to change tons of code. Down side is that you still have a static dependency.
Next step would be to change MyConfigManager into a regular instance class and inject it into classes where it is used. Best approach here is to do it incrementally.
Create an instance class (and an interface) alongside the static class.
Now that you have both, you can refactor the using classes slowly until they are all using the instance class. Inject the instance into the constructor (using the interface). Don't try for the big bang check-in if there are lots of usages. Just do it slowly and carefully over time.
Then just delete the static class.
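A minimal sketch of those two stages, using the names from the steps above (the interface name IConfigManager is made up):
// Stage 1: a static wrapper, so ConfigurationManager is referenced in exactly one place.
public static class MyConfigManager
{
    public static string GetAppSettingString(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}

// Stage 2: an instance class behind an interface, injected via constructors over time.
public interface IConfigManager
{
    string GetAppSettingString(string key);
}

public class ConfigManager : IConfigManager
{
    public string GetAppSettingString(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}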
Usually it's very difficult to clean a legacy application in small steps, because it wasn't designed to be changed in this way. If the code is completely intermingled and you have no SoC, it is difficult to change one thing without being forced to change everything else... Also it is often very hard to unit test anything.
But in general you have to:
1) Find the simplest (smallest) class not refactored yet
2) Write unit tests for this class so that you have confidence that your refactoring didn't break anything
3) Do the smallest possible change (this depends on the project and your common sense)
4) Make sure all the tests pass
5) Commit and goto 1
I would like to recommend "Refactoring" by Martin Fowler to give you more ideas: http://www.amazon.com/exec/obidos/ASIN/0201485672
For your example, the first thing I'd do is to create an interface exposing the functionality you need to read config e.g.
public interface IConfigReader
{
    string GetAppSetting(string key);
    ...
}
and then create an implementation which delegates to the static ConfigurationManager class:
public class StaticConfigReader : IConfigReader
{
    public string GetAppSetting(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}
Then for a particular class with a dependency on the configuration you can create a seam which initially just returns an instance of the static config reader:
public class ClassRequiringConfig
{
    public void MethodUsingConfig()
    {
        string setting = this.GetConfigReader().GetAppSetting("key");
    }

    protected virtual IConfigReader GetConfigReader()
    {
        return new StaticConfigReader();
    }
}
And replace all references to ConfigurationManager with usages of your interface. Then for testing purposes you can subclass this class and override the GetConfigReader method to inject fakes, so you don't need any actual config file:
public class TestClassRequiringConfig : ClassRequiringConfig
{
    public IConfigReader ConfigReader { get; set; }

    protected override IConfigReader GetConfigReader()
    {
        return this.ConfigReader;
    }
}

[Test]
public void TestMethodUsingConfig()
{
    ClassRequiringConfig sut = new TestClassRequiringConfig { ConfigReader = fakeConfigReader };
    sut.MethodUsingConfig();
    //Assertions
}
Then eventually you will be able to replace this with property/constructor injection when you add an IoC container.
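For comparison, the constructor-injected version that an IoC container (or your composition root) would eventually wire up could look roughly like this:
public class ClassRequiringConfig
{
    private readonly IConfigReader configReader;

    // The dependency is now explicit and visible at construction time.
    public ClassRequiringConfig(IConfigReader configReader)
    {
        this.configReader = configReader;
    }

    public void MethodUsingConfig()
    {
        string setting = this.configReader.GetAppSetting("key");
    }
}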
EDIT:
If you're not happy with injecting instances into individual classes like this (which would be quite tedious if many classes depend on configuration) then you could create a static configuration class, and then allow temporary changes to the config reader for testing:
public static class Configuration
{
    private static Func<IConfigReader> _configReaderFunc = () => new StaticConfigReader();

    public static IConfigReader GetConfigReader()
    {
        return _configReaderFunc();
    }

    public static IDisposable CreateConfigScope(IConfigReader reader)
    {
        return new ConfigReaderScope(() => reader);
    }

    private class ConfigReaderScope : IDisposable
    {
        private readonly Func<IConfigReader> _oldReaderFunc;

        public ConfigReaderScope(Func<IConfigReader> newReaderFunc)
        {
            this._oldReaderFunc = _configReaderFunc;
            _configReaderFunc = newReaderFunc;
        }

        public void Dispose()
        {
            _configReaderFunc = this._oldReaderFunc;
        }
    }
}
Then your classes just access the config through the static class:
public void MethodUsingConfig()
{
    string value = Configuration.GetConfigReader().GetAppSetting("key");
}
and your tests can use a fake through a temporary scope:
[Test]
public void TestMethodUsingConfig()
{
    using (var scope = Configuration.CreateConfigScope(fakeReader))
    {
        new ClassUsingConfig().MethodUsingConfig();
        //Assertions
    }
}