Is this a bad use of a static property? - c#

If I have a class with a service that I want all derived classes to have access to (say a security object, or a repository) then I might do something like this:
public abstract class A
{
    static ISecurity _security;
    public ISecurity Security { get { return _security; } }
    public static void SetSecurity(ISecurity security) { _security = security; }
}
public class Bootstrapper
{
    public Bootstrapper()
    {
        A.SetSecurity(new Security());
    }
}
It seems like lately I see static properties being shunned everywhere as something to absolutely avoid. To me, this seems cleaner than adding an ISecurity parameter to the constructor of every single derived class I make. Given all I've read lately though, I'm left wondering:
Is this an acceptable application of dependency injection, or am I violating some major design principle that could come back to haunt me later? I am not doing unit tests at this point, so maybe if I were then I would suddenly realize the answer to my question. To be honest, though, I probably won't change my design over that, but if there is some other important reason why I should change it then I very well might.
Edit: I made a couple stupid mistakes the first time I wrote that code... it's fixed now. Just thought I'd point that out in case anyone happened to notice :)
Edit: SWeko makes a good point about all deriving classes having to use the same implementation. In cases where I've used this design, the service is always a singleton so it effectively enforces an already existing requirement. Naturally, this would be a bad design if that weren't the case.

This design could be problematic for a couple of reasons.
You already mention unit testing, which is rather important. Such a static dependency can make testing much harder. Whenever the fake ISecurity has to be anything other than a Null Object implementation, you will find yourself having to remove the fake implementation during test tear-down. Removing it during tear-down prevents other tests from being influenced when you forget to remove that fake object. A tear-down makes your tests more complicated. Not much more complicated, but it adds up when many tests have tear-down code, and you'll have a hard time finding a bug in your test suite when one test forgets to run its tear-down. You will also have to make sure the registered ISecurity fake object is thread-safe and won't influence other tests that might run in parallel (test frameworks such as MSTest run tests in parallel for obvious performance reasons).
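To make the tear-down problem concrete, here is a sketch of what such a test fixture tends to look like. The NUnit-style attributes and the FakeSecurity class are hypothetical stand-ins, not part of the original code:

```csharp
[TestFixture]
public class OrderServiceTests
{
    [SetUp]
    public void SetUp()
    {
        // Every test that touches a subclass of A needs the fake registered.
        A.SetSecurity(new FakeSecurity());
    }

    [TearDown]
    public void TearDown()
    {
        // Forgetting this lets FakeSecurity leak into unrelated tests,
        // and tests running in parallel can still observe each other's fakes.
        A.SetSecurity(null);
    }
}

// Hypothetical fake; essentially a Null Object implementation of ISecurity.
internal class FakeSecurity : ISecurity { }
```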
Another possible problem with injecting the dependency as a static member is that you force this ISecurity dependency to be a singleton (and probably to be thread-safe). This makes it impossible, for instance, to apply interceptors or decorators that have a lifestyle other than singleton.
Another problem is that removing this dependency from the constructor disables any analysis or diagnostics that could be done by the DI framework on your behalf. Since you manually set this dependency, the framework has no knowledge about this dependency. In a sense you move the responsibility of managing dependencies back to the application logic, instead of allowing the Composition Root to be in control over the way dependencies are wired together. Now the application has to know that ISecurity is in fact thread-safe. This is a responsibility that in general belongs to the Composition Root.
The fact that you want to store this dependency in a base type might even be an indication of a violation of a general design principle: the Single Responsibility Principle (SRP). It has some resemblance to a design mistake I made myself in the past. I had a set of business operations that all inherited from a base class. This base class implemented all sorts of behavior, such as transaction management, logging, audit trailing, adding fault tolerance, and adding security checks. This base class became an unmanageable God Object. It was unmanageable simply because it had too many responsibilities; it violated the SRP. Here's my story if you want to know more about this.
So instead of having this security concern (it's probably a cross-cutting concern) implemented in a base class, try removing the base class altogether and use a decorator to add security to those classes. You can wrap each class with one or more decorators, and each decorator can handle one specific concern. This makes each decorator class easy to follow, because each one follows the SRP.
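As a sketch of that idea, the security check can live in its own decorator. The ICommandHandler&lt;T&gt; abstraction and the CheckAccess member on ISecurity are illustrative assumptions, not part of the original code:

```csharp
// Illustrative abstraction for a business operation.
public interface ICommandHandler<T>
{
    void Handle(T command);
}

// The security concern as a single-responsibility decorator.
public class SecurityCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ISecurity security;
    private readonly ICommandHandler<T> decoratee;

    public SecurityCommandHandlerDecorator(ISecurity security, ICommandHandler<T> decoratee)
    {
        this.security = security;
        this.decoratee = decoratee;
    }

    public void Handle(T command)
    {
        this.security.CheckAccess(); // assumed ISecurity member, for illustration
        this.decoratee.Handle(command);
    }
}
```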

The problem is that this is not really dependency injection, even if it is encapsulated in the definition of the class. Admittedly,
static Security _security;
would be worse than ISecurity, but still, the instances of A do not get to use whatever security the caller passes to them; they have to depend on the global setting of a static property.
What I'm trying to say is that your usage is not that different from:
public static class Globals
{
    public static ISecurity Security { get; set; }
}

Related

Architecture: Dependency Injection, Loosely Coupled Assemblies, Implementation Hiding

I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose, but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class. However, some books I've been reading while building this solution have spoken against this. The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them). As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
Martin Style
What I've normally seen
I immediately saw the advantage in Martin's diagram, that it allows the lower assemblies to be swapped out for another, given that it has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: If you want to swap out the assembly from an upper layer, you essentially "steal" the interface away that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice; once with a public class and once with an internal class. This way, the public class could merely wrap/decorate the internal class, like this:
Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}
namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that if you move the abstractions to a different library and let both the consuming and the implementing assembly depend on that library, those two assemblies no longer have to depend on each other. This means that it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of their own), however, is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
This is a somewhat opinion based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical, and you have to weigh the practical value against the costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Would sealed classes help enforce your architecture instead?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way of how to provide/use interfaces. If your answers to the questions indicate additional value by further splitting up/protecting the code that may be fine, too. But you'd have to tell us more about your application domain and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures but it is very hard to design them as simple as possible. Simple is most of the time better.
When coming up with an architecture you want to consider those factors upfront otherwise they'll come and haunt you later in the form of technical debt.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes, that are bound to abstractions in your Composition Root, could probably be used in an explicit way somewhere else for some other reasons. I don't see any benefit from hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the fact that you think of DI and the Composition Root as if there must be a container behind them.
In fact, the infrastructure can be completely "container-agnostic", in the sense that you still have your dependencies injected but you don't think about "how". A Composition Root that uses a container is one choice; a Composition Root where you manually compose dependencies is just as valid. In other words, the Composition Root could be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not the idea of a Dependency Inversion container.
A short tutorial of mine can possibly shed some light here
http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html
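For illustration, a hand-composed ("Pure DI") Composition Root for the IMechanism example from the question might be sketched like this; no container is involved, and only this one class knows the concrete type:

```csharp
using MechanismProxy;
using MechanismImpl;

// A container-agnostic Composition Root: the only place that references concrete types.
public static class CompositionRoot
{
    public static IMechanism CreateMechanism()
    {
        // Compose dependencies by hand; swap the concrete type here and nowhere else.
        return new Mechanism();
    }
}
```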

Autofac Interception with custom attributes

I've been looking for a specific solution for AOP logging. I need an interception mechanism that makes it possible to do something like this:
[MyCustomLogging("someParameter")]
The thing is, I've seen examples in other DI frameworks that make this possible. But my project is already using Autofac for DI, and I don't know if it is a good idea to mix it with Unity (for example). In Autofac.Extras.DynamicProxy2, the InterceptAttribute class is sealed.
Does anyone have an idea for this problem?
Ps.: I would be satisfied with this:
[Intercept(typeof(MyLoggingClass), "anotherParameter")]
Although using attributes to enrich types with metadata that cross-cutting concerns can consume isn't bad, using attributes to mark classes or methods so that some cross-cutting concern runs on them usually is.
Marking code with an attribute as you've shown has some serious downsides:
It makes your code dependent on the used interception library, making code harder to change and makes it harder to replace external libraries. The number of dependencies the core of your application has with external libraries should be kept to an absolute minimum. It would be ironic if your code was littered with dependencies upon the Dependency Injection library; the tool that is used to allow minimizing external dependencies and increasing loose coupling.
To apply cross-cutting concerns to a wide range of classes (which is what you usually want to do), you will have to go through the complete code base to add or remove attributes from methods. This is time consuming and error prone. But even worse, making sure that aspects are run in a particular order is hard with attributes. Some frameworks allow you to specify an order to the attribute (using some sort of Order property), but changing the order means making sweeping changes through the code to change the Order of the attributes. Forgetting one will cause bugs. This is a violation of the Open/closed principle.
Since the attribute references the aspect class (in your example typeof(MyLoggingClass)), your code is still statically dependent on the cross-cutting concern. Replacing the class with another will again force sweeping changes to your code base, and keeping the hard dependency makes it much harder to reuse code or to decide at runtime or at deployment time whether the aspect should be applied. In many cases, you can't have this dependency from your code to the aspect, because the code lives in a base library, while the aspect is specific to the application framework. For instance, you might have the same business logic that runs both in a web application and a Windows service. When run in a web application, you want to log in a different way. In other words, you are violating the Dependency Inversion Principle.
I therefore consider applying attributes this way bad practice.
Instead of using attributes like this, apply cross-cutting concerns transparently, using either interception or decorators. Decorators are my preferred approach, because their use is much cleaner, simpler, and therefore more maintainable. Decorators can be written without taking a dependency on any external library, and they can therefore be placed in any suitable place in your application. The downside of decorators, however, is that they are very cumbersome to write and apply if your design isn't SOLID and DRY, and you're not following the Reused Abstraction Principle.
But if you use the right application design using SOLID and message based patterns, you'll find out that applying cross-cutting concerns such as logging is just a matter of writing a very simple decorator such as:
public class LoggingCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ILogger logger;
    private readonly ICommandHandler<T> decoratee;

    public LoggingCommandHandlerDecorator(ILogger logger, ICommandHandler<T> decoratee) {
        this.logger = logger;
        this.decoratee = decoratee;
    }

    public void Handle(T command) {
        // JsonConvert comes from the Json.NET (Newtonsoft.Json) library.
        this.logger.Log("Handling {0}. Data: {1}", typeof(T).Name,
            JsonConvert.SerializeObject(command));
        this.decoratee.Handle(command);
    }
}
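Since the question uses Autofac, wiring such a generic decorator might look like the sketch below. RegisterGenericDecorator is available in Autofac 4.9 and later; MyCommandHandler is a hypothetical stand-in for your concrete handlers:

```csharp
var builder = new ContainerBuilder();

// Register every concrete ICommandHandler<T> implementation in the assembly.
builder.RegisterAssemblyTypes(typeof(MyCommandHandler).Assembly)
       .AsClosedTypesOf(typeof(ICommandHandler<>));

// Wrap each handler with the logging decorator; no attributes required.
builder.RegisterGenericDecorator(
    typeof(LoggingCommandHandlerDecorator<>),
    typeof(ICommandHandler<>));

var container = builder.Build();
```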
Without a proper design, you can still use interception (without attributes), because interception allows you to 'decorate' any types that seem to have no relationship in code (share no common interface). Defining which types to intercept and which not can be cumbersome, but you will usually still be able to define this in one place of the application, thus without having to make sweeping changes throughout the code base.
Side note. As I said, using attributes to describe pure metadata is fine and preferable. For instance, take some code that is only allowed to run for users with certain permissions. You can mark that code as follows:
[Permission(Permissions.Crm.ManageCompanies)]
public class BlockCompany : ICommand {
    public Guid CompanyId;
}
This attribute does not describe what aspects are run, nor does it reference any types from an external library (the PermissionAttribute is something you can (and should) define yourself), or any AOP-specific types. It solely enriches the code with metadata.
In the end, you obviously want to apply some cross-cutting concern that checks whether the current user has the right permissions, but the attribute doesn't force you into a specific direction. With the attribute above, I could imagine the decorator to look as follows:
public class PermissionCommandHandlerDecorator<T> : ICommandHandler<T>
{
    // GetCustomAttribute<T> requires a using System.Reflection directive.
    private static readonly Guid requiredPermissionId =
        typeof(T).GetCustomAttribute<PermissionAttribute>().PermissionId;

    private readonly IUserPermissionChecker checker;
    private readonly ICommandHandler<T> decoratee;

    public PermissionCommandHandlerDecorator(IUserPermissionChecker checker,
        ICommandHandler<T> decoratee) {
        this.checker = checker;
        this.decoratee = decoratee;
    }

    public void Handle(T command) {
        this.checker.CheckPermission(requiredPermissionId);
        this.decoratee.Handle(command);
    }
}

How do I design a class that would normally be static when I'm using dependency injection?

I have a class that encapsulates a bunch of strings that serve as defaults for app settings that haven't been otherwise explicitly specified by the user.
I'm currently using a plain old class with relevantly-named instance methods—this sort of thing:
class SiteConfigurationConventions : ISiteConfigurationConventions
{
    public String GetConfigurationFileName()
    {
        return "SiteConfiguration.xml";
    }
}
It seems that a static class would be more conceptually appropriate (like System.Math) since these strings won't ever change at run time and no fields are required, but I'm not sure how compatible static classes are with DI. For example, it doesn't seem possible to register a static class with the container so it returns it to constructors asking for it in other objects being resolved by the container.
As it is now, I register
container.RegisterType<ISiteConfiguration, SiteConfiguration>();
So that the requesting constructor gets what it needs:
public SiteGenerator(ISiteConfiguration siteConfiguration)
My design options would seem to be:
Refactor to a static class and reference the concrete type directly in my consuming class rather than using constructor injection
Leave it as-is (class and instance resolved to an interface), perhaps optionally registering it using the singleton lifetime for the sake of correctness
Creating some kind of facade or factory to hide the static class behind. However, for some reason this option just strikes me as silly.
The notion of an "instance" of a class like this seems odd—static seems more conceptually correct. The only reason I'd be making it an instantiable class is to make it more DI friendly. Does that sound OK, or correct? Am I missing something entirely?
Any counsel would be most appreciated. :)
Most DI libraries give you the option to specify that a single instance be used for all injections (the container creates one instance and hands that same instance out every time). This is a form of singleton, and would probably suit your problem well.
For example, using MS Unity library, you would put:
container.RegisterInstance(new SiteConfiguration());
I consider the static keyword to be a form of built-in singleton implementation, while the DI route does much the same thing, but without using the compiler to take care of the details.
OK, after a bit of research, Googling, and thinking, I believe I've arrived at my own conclusions.
The use of static classes is in a sense at odds with the IoC principle and loose coupling that I intend to bake into my architecture. The static modifier is a way of saying that only one implementation can answer a particular purpose, which is at odds with DI generally (loose coupling, programming to interfaces, testability, and all the things that go with that).
Equally, the static modifier is really just a way of telling the compiler we want to restrict the number of instances of a class to one while simultaneously never allowing it to be assigned to a variable (i.e., no use of the new operator). If we are to employ IoC, we should be leaving lifestyle management like this up to the composition root, and we're never directly referencing concrete classes (other than FCL classes) this way anyway. So static classes serve little purpose to us.
Therefore, I say leave it as a plain old (non-static) class and apply a singleton lifestyle at the composition root. Unless, of course, you think your would-be static class is unlikely ever to change and that you'll never need to fake it in testing, in which case you could just treat it like a stable dependency (like an FCL class) and exclude it from your normal DI scheme, referencing the concrete class directly in consuming classes.
If you must depend on a third-party class that uses static methods or is itself entirely static that you want to inject as a dependency (and thus be able to substitute for testing, etc., purposes), you should perhaps still create an interface and rely on an instantiable adapter that calls the static methods to get those values.
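Such an adapter might be sketched like this; LegacyClock is a hypothetical third-party static API used purely for illustration:

```csharp
// Hypothetical third-party static API we cannot change:
// public static class LegacyClock { public static DateTime GetTime() { ... } }

// Our own abstraction, which consuming classes depend on...
public interface IClock
{
    DateTime UtcNow { get; }
}

// ...and an instantiable adapter that forwards to the static member.
// The adapter is what gets registered with the container and faked in tests.
public class LegacyClockAdapter : IClock
{
    public DateTime UtcNow
    {
        get { return LegacyClock.GetTime(); }
    }
}
```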

Possible Valid Use of a Singleton?

I've got to the point in my design, where I am seriously considering a singleton.
As we all know, the "common" argument is "Never do it! It's terrible!", as if we'd littered our code with a bunch of goto statements.
ServiceStack is a wonderful framework. Myself and my team are sold on it, and we have a complicated web-service based infrastructure to implement. I have been encouraging an asynchronous design, and where possible - using SendAsync on the service-stack clients.
Given we have all these different systems doing different things, it occurred to me I'd like to have a common logger, (A web service in itself actually, with a fall-back to a local text file if the web service is not available - e.g. some demons are stalking the building). Whilst I am a big fan of Dependency Injection, it doesn't seem clean (at least, to me) to be passing a reference to a "use this logger client" to every single asynchronous request.
Given that ServiceStack's failure signature is a Func<TRESPONSE, Exception> (and I have no fault with this), I am not even sure that if the enclosing method that made the call in the first place would have a valid handle.
However, if we had a singleton logger at this point, it doesn't matter where we are in the world, what thread we are on, and what part of a myriad of anonymous functions we are in.
Is this an accepted valid case, or is it a non-argument - down with singletons?
Logging is one of the areas where it makes sense to use a singleton: it should never have any side effects on your code, and you will almost always want the same logger to be used globally. The primary thing you should be concerned with when using singletons is thread safety, and most loggers are thread-safe by default.
ServiceStack's Logging API allows you to provide a substitutable logging implementation by configuring it globally at App_Start with:
LogManager.LogFactory = new Log4NetFactory(configureLog4Net:true);
After this point every class now has access to Log4Net's logger defined in the Factory above:
class Any
{
    static ILog log = LogManager.GetLogger(typeof(Any));
}
In all Test projects I prefer everything to be logged to the Console, so I just need to set it once with:
LogManager.LogFactory = new ConsoleLogFactory();
By default ServiceStack.Logging, logs to a benign NullLogger which ignores each log entry.
There's only one problem with the classic implementation of a singleton: it is easily accessible, which provokes direct use, and that leads to strong coupling, god objects, etc. By classic implementation I mean this:
class Singleton
{
    public static readonly Singleton Instance = new Singleton();

    private Singleton() {}

    public void Foo() {}
    public void Bar() {}
}
If you use singleton only as an object-lifecycle strategy, and let the IoC framework manage this for you while maintaining loose coupling, there is nothing wrong with having 'just one' instance of a class for the entire lifetime of the application, as long as you make sure it is thread-safe.
If you are placing that common logging behind a static facade that application code calls, ask yourself how you would actually unit test that code. This is a problem that Dependency Injection tries to solve, but you are reintroducing it by letting application logic depend on a static class.
There are two other problems you might be having. The questions I have for you are: are you sure you don't log too much, and are you sure you aren't violating the SOLID principles?
I've written an SO answer a year back that discusses those two questions. I advise you to read it.
As always, I prefer to have a factory. This way I can change the implementation in future and maintain the client contract.
You could say that a singleton's implementation could also change, but factories are just more general. For example, a factory could implement an arbitrary lifetime policy and change this policy over time or according to your needs. On the other hand, while it is technically possible to implement different lifetime policies for a singleton, what you get then should probably not be considered a "singleton" but rather a "singleton with a specific lifetime policy". And that is probably just as bad as it sounds.
Whenever I am to use a singleton, I first consider a factory and most of the times, the factory just wins over singleton. If you really don't like factories, create a static class - a stateless class with static methods only. Chances are, you just don't need an object, just a set of methods.
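A minimal sketch of that trade-off, assuming a hypothetical ILogger abstraction: the factory owns the lifetime policy, so callers never know (or care) whether they share an instance:

```csharp
using System;

public interface ILogger
{
    void Log(string message);
}

public interface ILoggerFactory
{
    ILogger Create();
}

public class ConsoleLoggerFactory : ILoggerFactory
{
    // Today's policy happens to be "one shared, lazily created instance"...
    private readonly Lazy<ILogger> shared = new Lazy<ILogger>(() => new ConsoleLogger());

    // ...but the policy can change here without touching any caller.
    public ILogger Create()
    {
        return shared.Value;
    }

    private class ConsoleLogger : ILogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }
}
```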

What do programmers mean when they say, "Code against an interface, not an object."?

I've started the very long and arduous quest to learn and apply TDD to my workflow. I'm under the impression that TDD fits in very well with IoC principles.
After browsing some of TDD tagged questions here in SO, I read it's a good idea to program against interfaces, not objects.
Can you provide simple code examples of what this is, and how to apply it in real use cases? Simple examples is key for me (and other people wanting to learn) to grasp the concepts.
Consider:
class MyClass
{
//Implementation
public void Foo() {}
}
class SomethingYouWantToTest
{
public bool MyMethod(MyClass c)
{
//Code you want to test
c.Foo();
}
}
Because MyMethod accepts only a MyClass, if you want to replace MyClass with a mock object in order to unit test, you can't. Better is to use an interface:
interface IMyClass
{
    void Foo();
}

class MyClass : IMyClass
{
    // Implementation
    public void Foo() {}
}

class SomethingYouWantToTest
{
    public bool MyMethod(IMyClass c)
    {
        // Code you want to test
        c.Foo();
        return true; // placeholder return value so the method compiles
    }
}
Now you can test MyMethod, because it uses only an interface, not a particular concrete implementation. Then you can implement that interface to create any kind of mock or fake that you want for test purposes. There are even libraries like Rhino Mocks' Rhino.Mocks.MockRepository.StrictMock<T>(), which take any interface and build you a mock object on the fly.
It's all a matter of intimacy. If you code to an implementation (a realized object) you are in a pretty intimate relationship with that "other" code, as a consumer of it. It means you have to know how to construct it (ie, what dependencies it has, possibly as constructor params, possibly as setters), when to dispose of it, and you probably can't do much without it.
An interface in front of the realized object lets you do a few things -
For one, you can/should leverage a factory to construct instances of the object. IoC containers do this very well for you, or you can make your own. With construction duties outside of your responsibility, your code can just assume it is getting what it needs. On the other side of the factory wall, you can construct either real instances or mock instances of the class. In production you would use the real ones, of course, but for testing you may want to create stubbed or dynamically mocked instances to test various system states without having to run the system.
You don't have to know where the object is. This is useful in distributed systems where the object you want to talk to may or may not be local to your process or even system. If you ever programmed Java RMI or old skool EJB you know the routine of "talking to the interface" that was hiding a proxy that did the remote networking and marshalling duties that your client didn't have to care about. WCF has a similar philosophy of "talk to the interface" and let the system determine how to communicate with the target object/service.
** UPDATE **
There was a request for an example of an IOC Container (Factory). There are many out there for pretty much all platforms, but at their core they work like this:
You initialize the container on your applications startup routine. Some frameworks do this via config files or code or both.
You "Register" the implementations that you want the container to create for you as a factory for the interfaces they implement (eg: register MyServiceImpl for the Service interface). During this registration process there is typically some behavioral policy you can provide such as if a new instance is created each time or a single(ton) instance is used
When the container creates objects for you, it injects any dependencies into those objects as part of the creation process (ie, if your object depends on another interface, an implementation of that interface is in turn provided and so on).
Pseudo-codishly it could look like this:
IocContainer container = new IocContainer();
//Register my impl for the Service Interface, with a Singleton policy
container.RegisterType<Service, ServiceImpl>(LifecyclePolicy.SINGLETON);
//Use the container as a factory
Service myService = container.Resolve<Service>();
//Blissfully unaware of the implementation, call the service method.
myService.DoGoodWork();
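To make the pseudo-code above concrete, here is a minimal hand-rolled sketch of such a container. `IocContainer`, `IService`, and `ServiceImpl` are illustrative names, not a real framework; a production container (Unity, Autofac, etc.) adds constructor-injection, configuration, and much more.

```csharp
using System;
using System.Collections.Generic;

public interface IService { string DoGoodWork(); }
public class ServiceImpl : IService { public string DoGoodWork() => "done"; }

// A minimal container: registrations map an interface type to a factory.
// The singleton policy is implemented by caching the first instance in a Lazy<T>.
public class IocContainer
{
    private readonly Dictionary<Type, Func<object>> _factories =
        new Dictionary<Type, Func<object>>();

    public void RegisterSingleton<TInterface, TImpl>() where TImpl : TInterface, new()
    {
        var lazy = new Lazy<object>(() => new TImpl());
        _factories[typeof(TInterface)] = () => lazy.Value;
    }

    public T Resolve<T>() => (T)_factories[typeof(T)]();
}

public static class Program
{
    public static void Main()
    {
        var container = new IocContainer();
        container.RegisterSingleton<IService, ServiceImpl>();

        // Blissfully unaware of the implementation, call the service method.
        IService myService = container.Resolve<IService>();
        Console.WriteLine(myService.DoGoodWork()); // prints "done"

        // Singleton policy: both resolutions yield the same instance.
        Console.WriteLine(ReferenceEquals(myService, container.Resolve<IService>())); // prints "True"
    }
}
```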
When programming against an interface you will write code that uses an instance of an interface, not a concrete type. For instance, you might use the following pattern, which incorporates constructor injection. Constructor injection and other parts of inversion of control aren't required to be able to program against interfaces; however, since you're coming from the TDD and IoC perspective, I've wired it up this way to give you some context you're hopefully familiar with.
public class PersonService
{
private readonly IPersonRepository repository;
public PersonService(IPersonRepository repository)
{
this.repository = repository;
}
public IList<Person> PeopleOverEighteen
{
get
{
return (from e in repository.Entities where e.Age > 18 select e).ToList();
}
}
}
The repository object is passed in and is an interface type. The benefit of passing in an interface is the ability to 'swap out' the concrete implementation without changing the usage.
For instance one would assume that at runtime the IoC container will inject a repository that is wired to hit the database. During testing time, you can pass in a mock or stub repository to exercise your PeopleOverEighteen method.
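That test-time swap might look like the sketch below, using a hand-rolled stub rather than a mocking framework. `StubPersonRepository`, its canned data, and the `Entities` property type (`IEnumerable<Person>`) are assumptions for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person { public string Name; public int Age; }

public interface IPersonRepository { IEnumerable<Person> Entities { get; } }

// A stub that returns canned in-memory data -- no database required.
public class StubPersonRepository : IPersonRepository
{
    public IEnumerable<Person> Entities => new[]
    {
        new Person { Name = "Alice", Age = 25 },
        new Person { Name = "Bob",   Age = 12 },
    };
}

public class PersonService
{
    private readonly IPersonRepository repository;
    public PersonService(IPersonRepository repository) { this.repository = repository; }

    public IList<Person> PeopleOverEighteen =>
        (from e in repository.Entities where e.Age > 18 select e).ToList();
}

public static class Program
{
    public static void Main()
    {
        // Exercise PeopleOverEighteen against the stub instead of a real database.
        var service = new PersonService(new StubPersonRepository());
        Console.WriteLine(service.PeopleOverEighteen.Single().Name); // prints "Alice"
    }
}
```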
It means think generic. Not specific.
Suppose you have an application that notifies the user by sending them a message. If you work against an interface, IMessage for example
interface IMessage
{
void Send();
}
you can customize, per user, the way they receive the message. For example, some users want to be notified with an email, so your IoC container creates an EmailMessage concrete class. Others want SMS, and you create an instance of SMSMessage.
In all these cases the code for notifying the user never changes, even if you add another concrete class.
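A sketch of that scenario follows. The concrete classes and the `Notifier.CreateFor` factory are illustrative names; in a real application the selection would come from an IoC container or user preferences store, and `Send` would call an actual email gateway or SMS provider.

```csharp
using System;

interface IMessage
{
    void Send();
}

// Illustrative concrete implementations of the interface.
class EmailMessage : IMessage { public void Send() => Console.WriteLine("sending email"); }
class SmsMessage   : IMessage { public void Send() => Console.WriteLine("sending SMS"); }

static class Notifier
{
    // The only place that knows about concrete types; the notifying
    // code only ever sees IMessage.
    public static IMessage CreateFor(string preference) =>
        preference == "email" ? new EmailMessage() : (IMessage)new SmsMessage();
}

static class Program
{
    static void Main()
    {
        IMessage message = Notifier.CreateFor("email");
        message.Send(); // prints "sending email"
    }
}
```

Adding a third channel (push notification, say) means one new class and one new branch in the factory; the notifying code stays untouched.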
The big advantage of programming against interfaces when performing unit testing is that it allows you to isolate a piece of code from any dependencies you want to test separately or simulate during the testing.
An example I've mentioned here before somewhere is the use of an interface to access configuration values. Rather than looking directly at ConfigurationManager you can provide one or more interfaces that let you access config values. Normally you would supply an implementation that reads from the config file but for testing you can use one that just returns test values or throws exceptions or whatever.
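One possible shape for such a configuration interface is sketched below. `IAppSettings` and `FakeSettings` are illustrative names; the production implementation would simply wrap `ConfigurationManager.AppSettings`.

```csharp
using System;
using System.Collections.Generic;

// Business code depends on this interface rather than on ConfigurationManager.
public interface IAppSettings
{
    string Get(string key);
}

// The production implementation would wrap ConfigurationManager.AppSettings[key].
// For tests, a fake returning canned values is enough:
public class FakeSettings : IAppSettings
{
    private readonly Dictionary<string, string> _values;
    public FakeSettings(Dictionary<string, string> values) => _values = values;
    public string Get(string key) => _values[key];
}

public static class Program
{
    public static void Main()
    {
        IAppSettings settings = new FakeSettings(
            new Dictionary<string, string> { ["Timeout"] = "30" });
        Console.WriteLine(settings.Get("Timeout")); // prints "30"
    }
}
```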
Consider also your data access layer. Having your business logic tightly coupled to a particular data access implementation makes it hard to test without having a whole database handy with the data you need. If your data access is hidden behind interfaces you can supply just the data you need for the test.
Using interfaces increases the "surface area" available for testing allowing for finer grained tests that really do test individual units of your code.
Test your code like someone who would use it after reading the documentation. Do not test anything based on knowledge you have because you have written or read the code. You want to make sure that your code behaves as expected.
In the best case you should be able to use your tests as examples; doctests in Python are a good example of this.
If you follow these guidelines changing the implementation shouldn't be an issue.
Also, in my experience it is good practice to test each "layer" of your application. You will have atomic units, which in themselves have no dependencies, and you will have units which depend on other units, until you eventually get to the application, which is itself a unit.
You should test each layer; do not rely on the fact that by testing unit A you also test unit B, which unit A depends on (the rule applies to inheritance as well). This, too, should be treated as an implementation detail, even though you might feel as if you are repeating yourself.
Keep in mind that, once written, tests are unlikely to change, while the code they test will almost certainly change.
In practice there is also the problem of IO and the outside world, so you want to use interfaces so that you can create mocks if necessary.
In more dynamic languages this is not that much of an issue, here you can use duck typing, multiple inheritance and mixins to compose test cases. If you start disliking inheritance in general you are probably doing it right.
This screencast explains agile development and TDD in practice for C#.
Coding against an interface means that, in your tests, you can use a mock object instead of the real object. With a good mock framework, you can make your mock object do whatever you like.
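A hand-rolled sketch of what such a mock does under the hood, using the question's ISecurity interface (mock frameworks such as Moq generate this kind of class for you, and the `IsAuthorized` member is an assumption for illustration):

```csharp
using System;
using System.Collections.Generic;

public interface ISecurity
{
    bool IsAuthorized(string user);
}

// A hand-rolled mock: records every call and returns a canned answer,
// so a test can both control and inspect the dependency.
public class MockSecurity : ISecurity
{
    public List<string> CheckedUsers { get; } = new List<string>();
    public bool AnswerToGive { get; set; }

    public bool IsAuthorized(string user)
    {
        CheckedUsers.Add(user);
        return AnswerToGive;
    }
}

public static class Program
{
    public static void Main()
    {
        var mock = new MockSecurity { AnswerToGive = true };
        ISecurity security = mock; // code under test sees only the interface

        Console.WriteLine(security.IsAuthorized("alice")); // prints "True"
        Console.WriteLine(mock.CheckedUsers.Count);        // prints "1"
    }
}
```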
