Is this just a functional implementation of the Command pattern? - c#

Recently I've found myself implementing code following a pattern like this:
public class SomeClass
{
    private T Execute<T>(Func<T> function)
    {
        // Do some common stuff for every function, like logging and try-catch
        return function();
    }

    public Type1 Command1()
    {
        return Execute<Type1>(() => funcForCommand1());
    }

    public Type2 Command2()
    {
        return Execute<Type2>(() => funcForCommand2());
    }
}
Is this just a functional take on the Command pattern? Depending on the situation I use different versions of this. You could probably achieve exactly the same thing by letting funcForCommandX inherit from some kind of ICommand that defines the Execute function, but in many situations I like my way better, since most of the time the commands are only used in one location in the code and don't need to be exposed to the rest of the code. Of course, you should implement the real Command pattern if it is used in more locations in the code.

It depends on your needs; that is all I can say.
But I would like to point out that this is not the Command pattern - it is closer to method delegation.
The Command pattern focuses mainly on the execution of a method/task. The following are standard behaviors you can expect from a Command in this pattern:
Undo/Redo
Transactions
Composite Command Execution
Macros
When you wrap a method/task implementation in a Command, you can provide an implementation of how to reverse/undo what has been done, as well as default implementations for attaching the current execution to a transaction, macro recording, thread-safe execution, etc.
With your approach you don't have that, and it isn't easily doable unless you wrap each task/method in a Command wrapper and provide the behaviors mentioned above.
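For contrast, here is a minimal sketch of what wrapping work in a classic Command might look like; the DepositCommand example and the invoker are hypothetical illustrations, not code from the question:
using System;
using System.Collections.Generic;

// A classic Command abstraction: execution plus a hook for reversing it.
public interface ICommand
{
    void Execute();
    void Undo();
}

// Example command that remembers enough state to reverse itself.
public class DepositCommand : ICommand
{
    private readonly List<decimal> _account;
    private readonly decimal _amount;

    public DepositCommand(List<decimal> account, decimal amount)
    {
        _account = account;
        _amount = amount;
    }

    public void Execute() => _account.Add(_amount);
    public void Undo() => _account.Remove(_amount);
}

// An invoker that keeps history, which is what enables undo/redo and macro-style replay.
public class CommandInvoker
{
    private readonly Stack<ICommand> _history = new Stack<ICommand>();

    public void Run(ICommand command)
    {
        command.Execute();
        _history.Push(command);
    }

    public void UndoLast()
    {
        if (_history.Count > 0)
            _history.Pop().Undo();
    }
}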
Take a look at the Wikipedia article on the Command pattern for further details.

Related

How do I deal with two situations that could be candidates for a strategy pattern solution?

I am designing a client that will call methods based on certain inputs. I will be sending in a billing system enum and calling an endpoint to determine which billing system is appropriate for an existing patient. Once I get the billing system, I have to check to see what type of operation I need to perform and make an API call based on the billing system.
For example, if I need to update a patient record and the patient is in BillingSystemA, I need to call a PUT-based method of the API for BillingSystemA.
I need to have CRUD methods for each billing system.
Selecting between the two billing systems and allowing for future growth made me think that the strategy pattern was a good fit. Strategy seems to work for the billing system, but what about the CRUD operations?
I have a BillingStrategy abstract class that has Create, Update, Get and Delete methods, but I need those methods to work against a variety of types. Can I just make the methods generic, like T Create<T> or bool Update<T> or do I need a strategy within a strategy to manage this? I've analyzed myself into a corner and could use some advice.
Here's a rough illustration. I invented a lot of the specifics, and the names aren't so great. I tend to revisit names as I refactor. The main point is to illustrate how we can break up the problem into pieces.
This assumes that there are classes for Patient and Treatment and an enum for InsuranceType. The goal is to bill a patient for a treatment and determine where to send the bill based on the patient's insurance.
Here's a class:
public class PatientBilling
{
    private readonly IBillingHandlerByInsuranceSelector _billingHandlerSelector;
    private readonly IBillingHandler _directPatientBilling;

    public PatientBilling(
        IBillingHandlerByInsuranceSelector billingHandlerSelector,
        IBillingHandler directPatientBilling)
    {
        _billingHandlerSelector = billingHandlerSelector;
        _directPatientBilling = directPatientBilling;
    }

    public void BillPatientForTreatment(Patient patient, Treatment treatment)
    {
        var billingHandler = _billingHandlerSelector.GetBillingHandler(patient.Insurance);
        var result = billingHandler.BillSomeone(patient, treatment);
        if (!result.Accepted)
        {
            _directPatientBilling.BillSomeone(patient, treatment);
        }
    }
}
and a few interfaces:
public interface IBillingHandler
{
    BillSomeoneResult BillSomeone(Patient patient, Treatment treatment);
}

public interface IBillingHandlerByInsuranceSelector
{
    IBillingHandler GetBillingHandler(InsuranceType insurance);
}
As you can see this will rely heavily on dependency injection. This class is simple because it doesn't know anything at all about the different insurance types.
All it does is:
select a billing handler based on the insurance type,
try to submit the bill to the insurance,
and if it's rejected, bill the patient.
It doesn't know or care how any of that billing is implemented. It could be a database call, an API call, or anything else. That makes this class very easy to read and test. We've deferred whatever isn't related to this class. That's going to make it easier to solve future problems one at a time.
The implementation of IBillingHandlerByInsuranceSelector can be an abstract factory that will create and return the correct implementation of IBillingHandler according to the patient's insurance. (I'm glossing over that but there's plenty of information on how to create abstract factories with dependency injection containers.)
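As a rough sketch only (the answer leaves this part to the DI container), a hand-rolled selector might look like the following, assuming the handler implementations are registered in a dictionary keyed by insurance type:
using System;
using System.Collections.Generic;

public class BillingHandlerByInsuranceSelector : IBillingHandlerByInsuranceSelector
{
    // Maps each insurance type to the handler that knows how to bill it.
    private readonly IReadOnlyDictionary<InsuranceType, IBillingHandler> _handlers;

    public BillingHandlerByInsuranceSelector(
        IReadOnlyDictionary<InsuranceType, IBillingHandler> handlers)
    {
        _handlers = handlers;
    }

    public IBillingHandler GetBillingHandler(InsuranceType insurance)
    {
        if (!_handlers.TryGetValue(insurance, out var handler))
            throw new NotSupportedException($"No billing handler registered for {insurance}.");
        return handler;
    }
}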
In a sense we could say that the first part of this problem is solved (although we're likely to refactor some more.) The reason why is that we can write unit tests for it, and any of the work specific to one insurance type or another will be in different classes.
Next we can write those insurance-specific implementations. Suppose one of the insurance types is WhyCo, and now we need to create an IBillingHandler for them. We're essentially going to repeat the same process.
For the sake of illustration, let's say that submitting a bill to WhyCo is done in two steps. First we have to make a request to check eligibility, and then we have to submit the bill. Maybe other insurance APIs do this in one step. That's okay, because no two implementations have to have anything in common with each other. They just implement the interface.
At this point we're dealing with the specifics of one particular insurance company, so somewhere in here we'll need to convert our Patient and Treatment information into whatever data they expect to receive.
public class WhyCoBillingHandler : IBillingHandler
{
    private readonly IWhyCoService _whyCoService;

    public WhyCoBillingHandler(IWhyCoService whyCoService)
    {
        _whyCoService = whyCoService;
    }

    public BillSomeoneResult BillSomeone(Patient patient, Treatment treatment)
    {
        // populate this from the patient and treatment
        WhyCoEligibilityRequest eligibilityRequest = ...;
        var eligibility = _whyCoService.CheckEligibility(eligibilityRequest);
        if (!eligibility.IsEligible)
            return new BillSomeoneResult(false, eligibility.Reason);

        // create the bill
        WhyCoBillSubmission bill = ...;
        _whyCoService.SubmitBill(bill);
        return new BillSomeoneResult(true);
    }
}

public interface IWhyCoService
{
    WhyCoEligibilityResponse CheckEligibility(WhyCoEligibilityRequest request);
    void SubmitBill(WhyCoBillSubmission bill);
}
At this point we still haven't written any code that talks to the WhyCo API. That makes WhyCoBillingHandler easy to unit test. Now we can write an implementation of IWhyCoService that calls the actual API. We can write unit tests for WhyCoBillingHandler and integration tests for the implementation of IWhyCoService.
(Perhaps it would have been better if translating our Patient and Treatment data into what they expect happened even closer to the concrete implementation.)
At each step we're writing pieces of the code, testing them, and deferring parts for later. The API class might be the last step in implementing WhyCo billing. Then we can move on to the next insurance company.
At each step we also decide how much should go into each class. Suppose we have to write a private method, and that method ends up being so complicated that it's bigger than the public method that calls it and it's hard to test. That might be where we replace that private method with another dependency (abstraction) that we inject into the class.
Or we might realize up front that some new functionality should be separated into its own class, and we can just start off with that.
The reason why I illustrated it this way is this:
I've analyzed myself into a corner
It's easy to become paralyzed when our code has to do so many things. This helps to avoid paralysis because it continually gives us a path forward. We write part of it to depend on abstractions, and then that part is done (sort of.) Then we implement those abstractions. The implementations require more abstractions, and we repeat (writing unit tests all the way in between.)
This doesn't enforce best practices and principles, but it gently guides us toward them. We're writing small, single-responsibility classes. They depend on abstractions. We're defining those abstractions (interfaces, in this case) from the perspective of the classes that need them, which leads to interface segregation. And each class is easy to unit test.
Some will point out that it's easy to get carried away with all the abstractions and create too many interfaces and too many layers of abstraction, and they are correct. But that's okay. At every single step we're likely to go a little off balance one way or the other.
As you can see, the problems that occur when we have to deal with the differences between billing systems become simpler. We just create each implementation differently.
Strategy seems to work for the billing system, but what about the CRUD operations?
The fact that they all have different CRUD operations is fine. We've made components similar where they need to be similar (the interfaces through which we interact with them) but the internal implementations can be as different as they need to be.
We've also sidestepped the question of which design patterns to use, except that IBillingHandlerByInsuranceSelector is an abstract factory. That's okay too, because we don't want to start off too focused on a design pattern. If this were a real application, I'd assume that a lot of what I'm doing would need to be refactored. Because the classes are small, unit tested, and depend on abstractions, it's easier to introduce design patterns when their use becomes obvious. When that happens we can likely isolate them to the classes that need them. Or, if we've just realized that we've gone in the wrong direction, it's still easier to refactor. Unless we can see the future, that's certain to happen.
It's worth taking some time up front to understand the various implementation details to make sure that the way you have to work with them lines up with the code you're writing. (In other words, we still need to spend some time on design.) For example, if your billing providers don't give you immediate responses - you have to wait hours or days - then code that models it as an immediate response wouldn't make sense.
One more benefit is that this helps you to work in parallel with other developers. If you've determined that a given abstraction is a good start for your insurance companies and maybe you've written a few, then it's easy to hand off other ones to other developers. They don't have to think about the whole system. They can just write an implementation of an interface, creating whatever dependencies they need to.

Invoke a method by name with factory - what is the best practice solution

I want to find the best design solution for a home automation project.
I have the following entities:
// Switch light on/off
public interface ISwitchable
{
    void Switch();
}

public interface IDevice
{
}

public class Lamp : IDevice, ISwitchable
{
    public void Switch()
    {
        // toggle the lamp
    }
}
If I receive an input from the user to switch the lamp on, I want to invoke the following:
Lamp.Switch(...)
Where the switch is a string input.
I'll use the factory method to select a specific device (Lamp), but what about a specific function?
What is the best-practice solution to invoke a desired method (if it exists)?
Is reflection the way to go, or can I come up with a design that avoids reflection?
Thanks!
I think you're looking for the strategy pattern with an abstract factory to create the strategies.
Each different "operation" the you describe is one strategy. All of these strategies must implement the same interface (probably a method called something like Do/Invoke/Execute taking a parameter of the device).
The abstract factory is given something to indicate which strategy is required, and returns the specific implementation of that strategy (typically via a switch).
Taking one example, you would have a "turn on" strategy implemented by a class, and in the strategy's method, if the device it is given is appropriate (e.g. if it is ISwitchable), it could call the method on that device like this:
ISwitchable switchable = device as ISwitchable;
if (switchable != null)
{
    switchable.Switch();
}
At this point you can see that the ISwitchable interface in the question is no use, since you are in an operation that was trying to turn something on, but all it has to work with is a method which will toggle the switch (the device may already be 'on'). But I'm sure you can work out those details. And bear in mind that different operations may need devices to implement other interfaces.
I think the IDevice interface in the question is redundant - just use object. Certainly most interfaces with no members are worthless. What matters more is which of the operational interfaces each device implements, and that should suffice.
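A minimal sketch of the strategy plus abstract factory described above, assuming hypothetical operation names ("on") and the ISwitchable interface from the question:
using System;

public interface IDeviceOperation
{
    // Each concrete strategy performs one operation on a device, if the device supports it.
    void Execute(object device);
}

public class TurnOnOperation : IDeviceOperation
{
    public void Execute(object device)
    {
        if (device is ISwitchable switchable)
        {
            // Note: as the answer says, a toggle-style Switch() is a poor fit
            // for "turn on"; a richer interface would be needed in practice.
            switchable.Switch();
        }
    }
}

public static class DeviceOperationFactory
{
    // The factory maps the user's string input to a concrete strategy.
    public static IDeviceOperation Create(string operationName)
    {
        switch (operationName)
        {
            case "on": return new TurnOnOperation();
            default: throw new ArgumentException($"Unknown operation: {operationName}");
        }
    }
}

// Usage: DeviceOperationFactory.Create("on").Execute(lamp);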

Registration of Concrete Type with Derived Interface

Wondering how to register these types given their inheritance, etc.:
public interface ICommandHandler<in TCommand>
{
    void Handle(TCommand command);
}

public abstract class AbstractCommandHandler<T> : ICommandHandler<T>
{
    public abstract void Handle(T command);
}

public interface ILoginCommandHandler : ICommandHandler<object> { }

public class LoginCommandHandler : AbstractCommandHandler<object>, ILoginCommandHandler
{
    public override void Handle(object command) { }
}
Currently, I'm doing the following:
var container = new Container();
container.Register<ICommandHandler<object>, LoginCommandHandler>();
container.Register<ILoginCommandHandler, LoginCommandHandler>();
container.Verify();
var instance = container.GetInstance<ILoginCommandHandler>();
instance.Handle(new object());
This works, but I'm wondering if this is the correct approach. The ILoginCommandHandler is just an easier way to identify the command and reduce code clutter. In addition, I can add other specific methods there if I need to at a later point.
Also, I'm going to have at least one hundred of these, so I'm going to want to use a package inside each satellite assembly. What's the best method for registering packages from multiple satellite assemblies? I have found that sometimes SimpleInjector didn't like where I placed the registrations (or I just did it wrong).
I understand that you are trying to minimize the use of generic typing in your code, but by doing this you completely disallow generic decorators from being applied. Being able to easily apply decorators around a large range of command handler implementations is one of the strongest arguments for using the ICommandHandler<T> abstraction - for instance, just by using the following registrations:
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(TransactionCommandHandlerDecorator<>));
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(DeadlockRetryCommandHandlerDecorator<>));
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(ValidationCommandHandlerDecorator<>));
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(SecurityCommandHandlerDecorator<>));
However, if you wish to resolve an ILoginCommandHandler, that means that all registered decorators need to implement ILoginCommandHandler; otherwise your container will never be able to return such a decorator. Applying one or two of those interfaces to your decorators wouldn't be that bad, but if you are
"going to have at least one hundred of these" interfaces, this will lead to an unworkable situation.
And even if you don't have any decorators at the moment, I would advise against doing this, because having these interfaces completely disallows adding cross-cutting concerns in the future, which will lead to a situation with much more code clutter than what you're currently seeing with the extra amount of generic typing.
I can add other specific methods there if I need to at a later point.
You shouldn't do this, because that completely conflicts with the principles this pattern is based on. Why would a consumer need more methods for executing that use case? Are those two methods two different use cases? Is one of those methods a query? You will be breaking three out of five SOLID principles when you do that (as explained here). It also disallows applying cross-cutting concerns, and don't forget that by introducing hundreds of interfaces you're reintroducing complexity. Right now you have one very simple (generic) abstraction, and that makes your application design very clear.
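To make the cross-cutting-concern argument concrete, here is a minimal sketch of what one such generic decorator could look like (the logging decorator itself is my illustration, not from the answer; only the ICommandHandler<T> abstraction comes from the question):
using System;

// Wraps any ICommandHandler<T> and adds behavior around it without touching the handler.
public class LoggingCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ICommandHandler<T> _decoratee;

    public LoggingCommandHandlerDecorator(ICommandHandler<T> decoratee)
    {
        _decoratee = decoratee;
    }

    public void Handle(T command)
    {
        Console.WriteLine($"Handling {typeof(T).Name}");
        _decoratee.Handle(command);
        Console.WriteLine($"Handled {typeof(T).Name}");
    }
}

// Because the decorator is generic, a single registration covers every handler:
// container.RegisterDecorator(typeof(ICommandHandler<>), typeof(LoggingCommandHandlerDecorator<>));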

Strategy or Command pattern?

Assuming I have a list of financial transactions, I need to execute a list of validation rules against those transactions. For example, if I have a transaction to purchase a product, I first need to validate that the account in the transaction has enough available funds, that the product is not sold out, etc. As a result of these rules the transaction may be marked as rejected, and an error code should be specified.
Naturally I am thinking towards fronting my rules with an interface, allowing the executing code to roll through the rules executing each one until the first one rejects the transaction.
Each rule will need to be configured with parameters (e.g. ValidateMinimumBalance will need to know that minimumBalance = 30). The result of executing a rule can be as simple as setting the rejection code and error code on the transaction object, or as complicated as automatically modifying multiple other properties of the transaction.
My basic understanding of design patterns points to me either Strategy or Command patterns, but I am not entirely sure which one is better suited for this scenario.
Command Pattern
Each command will implement some sort of IValidate interface
The constructor of the command will take an instance of the transaction as the receiver in order to be able to read/validate the transaction as well as modify aspects of it. The constructor will also take an array of key/value pairs as parameters for the validation logic.
When I try to picture how the Strategy Pattern fits this scenario it looks very similar. In most examples the strategy is a simple object with a single method, however in my case the strategy will need a reference to the transaction as well as validation parameters.
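To make that concrete, here is a rough sketch of the kind of parameterized rule I have in mind (the Transaction shape and names are invented for illustration):
public class Transaction
{
    public decimal AvailableFunds { get; set; }
    public bool Rejected { get; set; }
    public string ErrorCode { get; set; }
}

public interface IValidate
{
    // Validates the transaction it was constructed with; marks it rejected on failure.
    void Validate();
}

public class ValidateMinimumBalance : IValidate
{
    private readonly Transaction _transaction;   // receiver
    private readonly decimal _minimumBalance;    // configuration parameter, e.g. 30

    public ValidateMinimumBalance(Transaction transaction, decimal minimumBalance)
    {
        _transaction = transaction;
        _minimumBalance = minimumBalance;
    }

    public void Validate()
    {
        if (_transaction.AvailableFunds < _minimumBalance)
        {
            _transaction.Rejected = true;
            _transaction.ErrorCode = "INSUFFICIENT_FUNDS";
        }
    }
}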
Strategy is used more to swap out algorithms; it's not really meant for chaining validations. If you have a situation with one validation per type, then you could use Strategy. But if you find yourself having to use multiple validators, or needing to reuse validators, then you'll either have to find another way to do it (i.e. Chain of Responsibility) or use Chain of Responsibility within your strategy.
I would actually answer 'other'. I think a combination of the Chain of Responsibility pattern and the Composite pattern, or Decorator for the validators, is much better suited to your needs.
I'm typing up an example implementation now, but at a high level:
Chain of Responsibility
The design would revolve around something like:
// Minimal request type so the example compiles.
class Request
{
}

abstract class Handler
{
    protected Handler next;

    public Handler(Handler h)
    {
        this.next = h;
    }

    public abstract bool Validate(Request request);
    public abstract void Handle(Request request);
}

class CoreLogic : Handler
{
    public CoreLogic(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class ValidateBalance : Handler
{
    public ValidateBalance(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class MainApp
{
    static void Main()
    {
        Handler h = new ValidateBalance(new CoreLogic(null));
        h.Handle(new Request());
    }
}
Other useful links:
Chain of Responsibility (Wikipedia)
A Strategy would be something used to 'parameterize' a Command (telling it how parts of the operation should be executed).
When I try to picture how the Strategy Pattern fits this scenario it looks very similar.
Similar? It should look identical.
The distinction is one of how the context and delegation works. In principle a Command is the "active" agent. A Strategy is injected into some active agent. That distinction is pretty subtle.
It barely changes the design. What does change is the expectation.
Command objects (more-or-less) stand alone. They're built to do their work, and then they can vanish. No one cares about them any more. Perhaps they also use the Memento pattern, and have some future life, but perhaps not.
Strategy objects (more-or-less) live with the object into which they're injected. A Strategy would be part of some larger object, and could be replaced by a different implementation without breaking or changing anything else.
But the essential interface is largely the same.
In most examples the strategy is a simple object with a single method,
Those are poor examples.
however in my case the strategy will need a reference to the transaction as well as validation parameters.
Not unusual. Nothing wrong with it.
but I am not entirely sure which one is better suited for this scenario
Neither :)
I strongly recommend looking at the Interpreter pattern. Your validator rules are really just predicates formulated over your transactions, and it's quite possible that you will soon need to combine these rules with AND, OR, NOT, etc.
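A minimal sketch of that idea, treating each rule as a predicate that can be composed; this is my illustration of the Interpreter/Specification direction, and the Transaction type and rule names are invented for the example:
using System;

public class Transaction
{
    public decimal AvailableFunds { get; set; }
    public bool ProductSoldOut { get; set; }
}

public interface IRule
{
    bool IsSatisfiedBy(Transaction transaction);
}

// Leaf rules
public class MinimumBalanceRule : IRule
{
    private readonly decimal _minimumBalance;
    public MinimumBalanceRule(decimal minimumBalance) => _minimumBalance = minimumBalance;
    public bool IsSatisfiedBy(Transaction t) => t.AvailableFunds >= _minimumBalance;
}

public class ProductAvailableRule : IRule
{
    public bool IsSatisfiedBy(Transaction t) => !t.ProductSoldOut;
}

// Combinators: AND and NOT (OR follows the same shape).
public class AndRule : IRule
{
    private readonly IRule _left, _right;
    public AndRule(IRule left, IRule right) { _left = left; _right = right; }
    public bool IsSatisfiedBy(Transaction t) => _left.IsSatisfiedBy(t) && _right.IsSatisfiedBy(t);
}

public class NotRule : IRule
{
    private readonly IRule _inner;
    public NotRule(IRule inner) => _inner = inner;
    public bool IsSatisfiedBy(Transaction t) => !_inner.IsSatisfiedBy(t);
}

// Usage: var rule = new AndRule(new MinimumBalanceRule(30), new ProductAvailableRule());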

Good Case For Interfaces

I work at a company where some require justification for the use of an Interface in our code (Visual Studio C# 3.5).
I would like an ironclad argument for what interfaces are required for. (My goal is to PROVE that interfaces are a normal part of programming.)
I don't need convincing, I just need a good argument to use in the convincing of others.
The kind of argument I am looking for is fact based, not comparison based (ie "because the .NET library uses them" is comparison based.)
The argument against them is thus: If a class is properly setup (with its public and private members) then an interface is just extra overhead because those that use the class are restricted to public members. If you need to have an interface that is implemented by more than 1 class then just setup inheritance/polymorphism.
Code decoupling. By programming to interfaces you decouple the code using the interface from the code implementing the interface. This allows you to change the implementation without having to refactor all of the code using it. This works in conjunction with inheritance/polymorphism, allowing you to use any of a number of possible implementations interchangeably.
Mocking and unit testing. Mocking frameworks are most easily used when the methods are virtual, which you get by default with interfaces. This is actually the biggest reason why I create interfaces.
Defining behavior that may apply to many different classes that allows them to be used interchangeably, even when there isn't a relationship (other than the defined behavior) between the classes. For example, a Horse and a Bicycle class may both have a Ride method. You can define an interface IRideable that defines the Ride behavior and any class that uses this behavior can use either a Horse or Bicycle object without forcing an unnatural inheritance between them.
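A minimal sketch of that last point (Horse and Bicycle come from the answer; the member details are made up for illustration):
using System;

public interface IRideable
{
    void Ride();
}

// Two unrelated classes share behavior through the interface,
// without forcing an unnatural common base class on them.
public class Horse : IRideable
{
    public void Ride() => Console.WriteLine("Riding the horse.");
}

public class Bicycle : IRideable
{
    public void Ride() => Console.WriteLine("Pedaling the bicycle.");
}

public static class Commuter
{
    // Works with anything rideable, regardless of its inheritance hierarchy.
    public static void GoToWork(IRideable transport) => transport.Ride();
}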
The argument against them is thus: If a class is properly setup (with its public and private members) then an interface is just extra overhead because those that use the class are restricted to public members. If you need to have an interface that is implemented by more than 1 class then just setup inheritance/polymorphism.
Consider the following code:
interface ICrushable
{
    void Crush();
}

public class Vehicle
{
}

public class Animal
{
}

public class Car : Vehicle, ICrushable
{
    public void Crush()
    {
        Console.WriteLine("Crrrrrassssh");
    }
}

public class Gorilla : Animal, ICrushable
{
    public void Crush()
    {
        Console.WriteLine("Sqqqquuuuish");
    }
}
Does it make any sense at all to establish a class hierarchy that relates Animals to Vehicles even though both can be crushed by my giant crushing machine? No.
In addition to the things explained in the other answers, interfaces allow you to simulate multiple inheritance in .NET, which is otherwise not allowed.
Alas, as someone said:
Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.
To enable unit testing of the class.
To track dependencies efficiently (if the interface isn't checked out and touched, only the semantics of the class can possibly have changed).
Because there is no runtime overhead.
To enable dependency injection.
...and perhaps because it's friggin' 2009, not the 70's, and modern language designers actually have a clue about what they are doing?
Not that interfaces should be thrown at every class interface: just those which are central to the system, and which are likely to experience significant change and/or extension.
Interfaces and abstract classes model different things. You derive from a class when you have an isA relationship so the base class models something concrete. You implement an interface when your class can perform a specific set of tasks.
Think of something that's Serializable: it doesn't really make sense (from a design/modelling point of view) to have a base class called Serializable, because it doesn't make sense to say something isA Serializable. Having something implement a Serializable interface makes more sense, as it says 'this is something the class can do, not what the class is'.
Interfaces are not 'required' at all; it's a design decision. I think you need to convince yourself, on a case-by-case basis, why it is beneficial to use an interface, because there IS an overhead in adding one. On the other hand, to counter the argument that you can 'simply' use inheritance instead: inheritance has its drawbacks. One of them is that - at least in C# and Java - you can only use inheritance once (single inheritance). But the second, and maybe more important, one is that inheritance requires you to understand the workings of not only the parent class but all of the ancestor classes, which makes extension harder and also more brittle, because a change in the parent class's implementation could easily break the subclasses. This is the crux of the "composition over inheritance" argument that the GoF book taught us.
You've been given a set of guidelines that your bosses have thought appropriate for your workplace and problem domain. So to be persuasive about changing those guidelines, it's not about proving that interfaces are a good thing in general, it's about proving that you need them in your workplace.
How do you prove that you need interfaces in the code you write in your workplace? By finding a place in your actual codebase (not in some code from somebody else's product, and certainly not in some toy example about Duck implementing the makeNoise method in IAnimal) where an interface-based solution is better than an inheritance-based solution. Show your bosses the problem you're facing, and ask whether it makes sense to modify the guidelines to accommodate situations like that. It's a teachable moment where everyone is looking at the same facts instead of hitting each other over the head with generalities and speculations.
The guideline seems to be driven by a concern about avoiding overengineering and premature generalisation. So if you make an argument along the lines of we should have an interface here just in case in future we have to..., it's well-intentioned, but for your bosses it sets off the same over-engineering alarm bells that motivated the guideline in the first place.
Wait until there's a good objective case for it, that goes both for the programming techniques you use in production code and for the things you start arguments with your managers about.
Test Driven Development
Unit Testing
Without interfaces producing decoupled code would be a pain. Best practice is to code against an interface rather than a concrete implementation. Interfaces seem rubbish at first but once you discover the benefits you'll always use them.
You can implement multiple interfaces. You cannot inherit from multiple classes.
..that's it. The points others are making about code decoupling and test-driven development don't get to the crux of the matter because you can do those things with abstract classes too.
Interfaces allow you to declare a concept that can be shared amongst many types (IEnumerable) while allowing each of those types to have its own inheritance hierarchy.
In this case, what we're saying is "this thing can be enumerated, but that is not its single defining characteristic".
Interfaces allow you to make the minimum amount of decisions necessary when defining the capabilities of the implementer. When you create a class instead of an interface, you have already declared that your concept is class-only and not usable for structs. You also make other decisions when declaring members in a class, such as visibility and virtuality.
For example, you can make an abstract class with all public abstract members, and that is pretty close to an interface, but you have declared that concept as overridable in all child classes, whereas you wouldn't have to have made that decision if you used an interface.
They also make unit testing easier, but I don't believe that is a strong argument, since you can build a system without unit tests (not recommended).
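As a small illustration of the testing point (the interface and the fake here are invented for the example): an interface lets a test substitute a hand-rolled fake for the real dependency.
using System;
using System.Collections.Generic;

public interface IMessageSender
{
    void Send(string recipient, string message);
}

// Production code depends only on the abstraction.
public class OrderNotifier
{
    private readonly IMessageSender _sender;
    public OrderNotifier(IMessageSender sender) => _sender = sender;

    public void NotifyShipped(string customer) =>
        _sender.Send(customer, "Your order has shipped.");
}

// A hand-rolled fake records calls instead of sending anything.
public class FakeMessageSender : IMessageSender
{
    public List<(string Recipient, string Message)> Sent { get; } = new List<(string, string)>();
    public void Send(string recipient, string message) => Sent.Add((recipient, message));
}

public static class OrderNotifierTest
{
    public static void Main()
    {
        var fake = new FakeMessageSender();
        new OrderNotifier(fake).NotifyShipped("alice@example.com");

        // Plain assertion to keep the sketch framework-free.
        Console.WriteLine(fake.Sent.Count == 1 ? "PASS" : "FAIL");
    }
}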
If your shop is performing automated testing, interfaces are a great boon to dependency injection and being able to test a unit of software in isolation.
The problem with the inheritance argument is that you'll either have a gigantic god class or a hierarchy so deep it'll make your head spin. On top of that, you'll end up with methods on a class that you don't need or that don't make any sense.
I see a lot of "no multiple inheritance", and while that's true, it probably won't faze your team, because you can use multiple levels of inheritance to get what they'd want.
An IDisposable implementation comes to mind. Your team would put a Dispose method on the Object class and let it propagate through the system, whether or not it made sense for a given object.
An interface declares a contract that any object implementing it will adhere to. This makes ensuring quality in code much easier than trying to enforce a written (not code) or verbal structure: the moment a class is decorated with the interface reference, the requirements/contract are clear, and the code won't compile until you've implemented that interface completely and in a type-safe way.
There are many other great reasons for using interfaces (listed here), but they probably don't resonate with management quite as well as a good, old-fashioned 'quality' statement ;)
Well, my first reaction is that if you have to explain why you need interfaces, it's an uphill battle anyway :)
That being said, other than all the reasons mentioned above, interfaces are the only way to get loosely coupled programming and n-tier architectures where you need to update/replace components on the fly, etc. In my personal experience, however, that was too esoteric a concept for the head of the architecture team, with the result that we lived in DLL hell - in the .NET world, no less!
Please forgive me for the pseudo code in advance!
Read up on the SOLID principles. There are a few reasons among the SOLID principles for using interfaces. Interfaces allow you to decouple your code from the implementation of its dependencies. You can take this a step further by using a tool like StructureMap to really make the coupling melt away.
Where you might be used to
Widget widget1 = new Widget();
This specifically says that you want to create a new instance of Widget. However if you do this inside of a method of another object you are now saying that the other object is directly dependent on the use of Widget. So we could then say something like
public class AnotherObject
{
    public void SomeMethod(Widget widget1)
    {
        //..do something with widget1
    }
}
We are still tied to the use of Widget here. But at least this is more testable in that we can inject the implementation of Widget into SomeMethod. Now if we were to use an Interface instead we could further decouple things.
public class AnotherObject
{
    public void SomeMethod(IWidget widget1)
    {
        //..do something with widget1
    }
}
Notice that we are now not requiring a specific implementation of Widget but instead we are asking for anything that conforms to IWidget interface. This means that anything could be injected which means that in the day to day use of the code we could inject an actual implementation of Widget. But this also means that when we want to test this code we could inject a fake/mock/stub (depending on your understanding of these terms) and test our code.
But how can we take this further? With the use of StructureMap we can decouple this code even more. With the last code example, our calling code may look something like this:
public class AnotherObject
{
    public void SomeMethod(IWidget widget1)
    {
        //..do something with widget1
    }
}

public class CallingObject
{
    public void AnotherMethod()
    {
        IWidget widget1 = new Widget();
        new AnotherObject().SomeMethod(widget1);
    }
}
As you can see in the above code, we removed the dependency in SomeMethod by passing in an object that conforms to IWidget. But in CallingObject.AnotherMethod we still have the dependency. We can use StructureMap to remove this dependency too!
[PluginFamily("Default")]
public interface IAnotherObject
{
...
}
[PluginFamily("Default")]
public interface ICallingObject
{
...
}
[Pluggable("Default")]
public class AnotherObject : IAnotherObject
{
private IWidget _widget;
public AnotherObject(IWidget widget)
{
_widget = widget;
}
public void SomeMethod()
{
//..do something with _widget
}
}
[Pluggable("Default")]
public class CallingObject : ICallingObject
{
public void AnotherMethod()
{
ObjectFactory.GetInstance<IAnotherObject>().SomeMethod();
}
}
Notice that nowhere in the above code are we instantiating an actual implementation of AnotherObject. Because everything is wired up for StructureMap, we can let StructureMap pass in the appropriate implementations depending on when and where the code is run. Now the code is truly flexible, in that we can specify, via configuration or programmatically in a test, which implementation we want to use. This configuration can be done on the fly or as part of a build process, etc., but it doesn't have to be hard-wired anywhere.
Apologies, as this doesn't directly answer your question regarding a case for interfaces.
However, I suggest getting the person in question to read:
Head First Design Patterns
-- Lee
I don't understand how it's extra overhead.
Interfaces provide flexibility, manageable code, and reusability. By coding to an interface you don't need to worry about the concrete implementation code or logic of the particular class you are using; you just expect a result. Many classes have different implementations of the same capability (StreamWriter, StringWriter, XmlWriter) - you do not need to worry about how they implement the writing, you just need to call it.
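As a concrete illustration of that last point: in the BCL, StreamWriter and StringWriter both derive from the abstract TextWriter, so code written against that abstraction works with either; an interface gives you the same kind of decoupling. The ReportPrinter class below is invented for the example.
using System;
using System.IO;

public static class ReportPrinter
{
    // Depends only on the abstraction; callers decide where the text goes.
    public static void Print(TextWriter writer)
    {
        writer.WriteLine("Report generated " + DateTime.Now);
    }
}

public static class Program
{
    public static void Main()
    {
        // Same code, two destinations: a string in memory, or a file on disk.
        using (var memory = new StringWriter())
        {
            ReportPrinter.Print(memory);
            Console.WriteLine(memory.ToString());
        }

        using (var file = new StreamWriter("report.txt"))
        {
            ReportPrinter.Print(file);
        }
    }
}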
