I have learnt the factory method design pattern, and at the same time I have come across the Activator object and how to use it from reading a tutorial (I have come across this object in IntelliSense a lot, though).
Activator allows late binding, which can be extremely useful when we don't know at compile time which class we want to instantiate. The factory method deals with the same problem in software engineering.
At a simple level, a bunch of ifs or a case statement and then instantiating an object based on the if condition is an implementation of the factory method, right?
On a related topic, I have read that polymorphism can reduce coupling between objects by eliminating case statements. Is there an example of this?
Thanks
If you know at compile time all of the potential classes you would want to instantiate, use the Factory pattern, it will be faster and lets the compiler check your type safety.
On the other hand, if you don't know all of the classes that might need to be instantiated (for example, if you are trying to provide a plugin architecture), your only option is to use Activator.
The simple rule of thumb here is this: Choose a factory over using Activator (or any other type of runtime binding) as long as the scenario allows it.
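A minimal sketch of the two approaches (the shape types are made up for illustration, not from the question). Note how the switch is confined to the factory: callers work with IShape polymorphically, which is the coupling reduction the question asks about.

using System;

public interface IShape { void Draw(); }
public class Circle : IShape { public void Draw() { Console.WriteLine("circle"); } }
public class Square : IShape { public void Draw() { Console.WriteLine("square"); } }

public static class ShapeFactory
{
    // Factory: every candidate type is known at compile time,
    // so the compiler verifies that each case really is an IShape.
    public static IShape Create(string kind)
    {
        switch (kind)
        {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new ArgumentException("Unknown shape: " + kind);
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        // Compile-time binding via the factory.
        ShapeFactory.Create("circle").Draw();

        // Late binding via Activator: the type name could come from a
        // plugin's config file. A typo fails only when this line runs.
        Type type = Type.GetType("Square");
        IShape shape = (IShape)Activator.CreateInstance(type);
        shape.Draw();
    }
}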
I'm reading an example about the Decorator pattern provided in the "Head First Design Patterns" book.
I have noticed 2 things:
if you will need to remove a decorator from the stack of the wrapped decorators, you will have to iterate one by one through the component reference, which is O(n) complexity.
Conceptually I find it wrong to wrap (encapsulate) the base component into the decorator object. It should be reversed; the component object should encapsulate the decorating objects.
I'm new to design patterns, and there is high probability that I'm wrong. Please explain to me what is specifically wrong with the way I think so I can learn.
I have created a different design, which solves the problems that I have mentioned. Maybe they add new problems; please feel free to point out the issues.
Here is the UML Diagram of the suggestion:
Basically, what I did is create a dictionary in the Component class which records which decorators have been added, and make the Decorator abstract class inherit not from the Component but from the interface (the same interface the component abstract class implements).
In this way, we can remove any decoration we want with O(1) complexity, and it is more logically constructed in that the component wraps the decorators, not vice versa.
I understand that maybe I didn't notice some advantage of the original Decorator pattern design. Please advise me.
Here is my code url.
Edit:
An example of when a customer will need to remove a decorator:
Say, for example, the customer is choosing condiments: he adds whip, removes caramel, and sees each time how the total price varies based on what he has chosen to add as a decorator.
if you will need to remove a decorator from the stack of the wrapped decorators, you will have to iterate one by one through the component reference, which is O(n) complexity.
It's true that removing decorators is theoretically more complex when they're wrapped. However, you need to consider what a likely n is. I'd guess that for the decorator pattern as it was proposed, there will be a small number of decorators (say, max(n) == 20). Iterating over that many is not a practical problem.
Conceptually I find it wrong to wrap (encapsulate) the base component into the decorator object. It should be reversed; the component object should encapsulate the decorating objects.
The Decorator pattern seeks to add functionality (via concrete decorators) without having to modify the component class. In some cases, a Component can't be changed (e.g., it comes from a standard library). With the approach you propose, the Component would have to be modified to encapsulate its decorators. This is not the intention of Decorator.
In the original GoF book, the design patterns have clear definitions of the problem they solve, and the consequences (which are not all positive!) of the design pattern. In line with your complexity point (removing decorators), the authors mentioned this consequence:
Lots of little objects. A design that uses Decorator often results in systems composed of lots of little objects that all look alike. The objects differ only in the way they are interconnected, not in their class or in the value of their variables. Although these systems are easy to customize by those who understand them, they can be hard to learn and debug.
Why would you need to "remove a decorator from the stack of the wrapped decorators"? What does this mean? I think you're confusing two different concepts: the decorator pattern and stacks. The first is a design pattern in object-oriented programming; the latter is a data structure.
The decorator pattern exists so that new functionality can be added to a base component without the need to redefine the components that use or depend on it. That's why the decorator component "encapsulates" the base component: so that it can use whatever functionality the base contains while adding what it needs on top.

If the base component encapsulated the decorator components instead, how would you reference functionality present in any given one of them? Following your example, imagine I call Mocca.GetCost(). If it's not overridden or redefined, CondimentDecorated.GetCost() will be called, which in turn, I imagine, considering what you're trying to do, will call Beverage.GetCost(). What will this method do? Iterate over the dictionary to look for which decorator method to call? This doesn't make sense, as in calling CondimentDecorated.GetCost() you'll only be calling Beverage.GetCost() again. And how will all this work if you can, as you said, "remove any decoration you want" from the decorators dictionary? What will the behaviour be then, when you call Mocca.GetCost()?
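For contrast, here is a minimal C# sketch of the book's wrapping design (the book's examples are in Java; the class names follow the book, and the prices are made up):

using System;

public abstract class Beverage
{
    public abstract double GetCost();
}

public class Espresso : Beverage
{
    public override double GetCost() { return 1.99; }
}

public abstract class CondimentDecorator : Beverage
{
    protected readonly Beverage beverage;   // the wrapped component
    protected CondimentDecorator(Beverage beverage) { this.beverage = beverage; }
}

public class Whip : CondimentDecorator
{
    public Whip(Beverage beverage) : base(beverage) { }
    // Each decorator adds its own cost, then delegates inward.
    public override double GetCost() { return 0.10 + beverage.GetCost(); }
}

public class Mocha : CondimentDecorator
{
    public Mocha(Beverage beverage) : base(beverage) { }
    public override double GetCost() { return 0.20 + beverage.GetCost(); }
}

public static class DecoratorDemo
{
    public static void Main()
    {
        // Mocha wraps Whip wraps Espresso; GetCost() walks the chain inward.
        Beverage drink = new Mocha(new Whip(new Espresso()));
        Console.WriteLine(drink.GetCost()); // 2.29
    }
}

Removing caramel in this design means rebuilding the chain without that decorator, which is exactly the O(n) walk being discussed; the pattern accepts that cost in exchange for never having to modify Beverage itself.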
It's not that what you're trying to do isn't possible, and it's great to question why something is the way it is. But there are a lot of OOP misconceptions and violations here. So question not only how things could be made better, but also why they are done the way they are.
I have a situation where the implementation of an interface is determined at runtime. For example, I check a string and then determine which subclass to use, without IoC it looks like the following:
if (fruitStr == "Apple")
{
    new AppleImpl().SomeMethod();
}
else
{
    new BananaImpl().SomeMethod();
}
Both classes, AppleImpl and BananaImpl, are implementations of the same interface, say IFruit.
How can this be done using IoC/Dependency Injection, especially in Castle Windsor?
This is the single most-asked question about Dependency Injection; it comes up over and over again on Stack Overflow.
In short, it is best to use patterns to solve runtime creation rather than trying to use the container for more than composing object graphs, which is all it is designed for.
There are several patterns that can be used for this, but among the best options are to use Abstract Factory, Strategy, or a combination of the two. The exact solution depends on how the instance will be used - use a factory if you will be needing several short-lived instances and want to discard them after use, or use a strategy if you need to use the instances over and over again in a loop without having to recreate them each time. The combination is a tradeoff between high performance and low memory consumption.
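For instance, a hand-rolled abstract factory for the fruit example might look like this (IFruit, AppleImpl, and BananaImpl come from the question; the factory and consumer are illustrative):

public interface IFruit { void SomeMethod(); }
public class AppleImpl : IFruit { public void SomeMethod() { } }
public class BananaImpl : IFruit { public void SomeMethod() { } }

public interface IFruitFactory
{
    IFruit Create(string fruitStr);
}

// The factory owns the run-time decision; the container only needs
// to know how to build the factory itself.
public class FruitFactory : IFruitFactory
{
    public IFruit Create(string fruitStr)
    {
        if (fruitStr == "Apple")
            return new AppleImpl();
        return new BananaImpl();
    }
}

// Consumers depend only on abstractions; Windsor injects the factory.
public class FruitConsumer
{
    private readonly IFruitFactory factory;

    public FruitConsumer(IFruitFactory factory)
    {
        this.factory = factory;
    }

    public void Run(string fruitStr)
    {
        factory.Create(fruitStr).SomeMethod();
    }
}

You register FruitFactory and FruitConsumer with Windsor as usual; the string check happens inside the factory at run time, so the container itself never has to make the decision.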
I've been working on creating my own IoC container for learning purposes. After asking a couple of questions about them, I was shown that creating a factory to "resolve" the objects was the best solution (see the third solution here). User Krzysztof Koźmic showed that Castle Windsor actually can implement this for you.
I've been reading the source of CW all morning. I know that when Resolve is called, it "returns the interface". How does this interface "intercept" calls (since there is no implementation behind it) and call its own methods?
I know there's obviously some reflection trickery going on here, and it's quite amazing. I'm just not at all sure how the "interception" is done. I tried venturing down the rabbit hole myself on git, but I got lost. If anyone could point me in the right direction, it'd be much appreciated.
Also - wouldn't creating a typed factory put a dependency on the container inside the calling code? At least in ASP.NET MVC terms, that's how it seems to me.
EDIT: Found Reflection.Emit... could this be what's used?
EDIT2: The more and more I look into this, the more complicated it sounds to automatically create factories. I might end up just sticking with the repetitive code.
There are two separate concepts here:
Dependency injection merely instantiates an existing class that implements the interface. For example, you might have a MyServices class that implements IMyServices. IoC frameworks give you various ways to specify that when you ask for an IMyServices, it will resolve to an instance of MyServices. There might be some IL Emit magic going on to set up the factory or helper methods, but the actual instances are simply classes you've defined.
Mocking allows you to instantiate a class that implements an interface, without actually having to code that class. This does usually make use of Reflection and IL Emit, as you thought. Typically the emitted IL code is fairly simple, delegating the bulk of the work to methods written in C#. Most of the complexity of mocking has to do with specifying the behavior of the method itself, as the frameworks often allow you to specify behavior with a fluent syntax. Some, like Moles, simply let you specify a delegate to implement the method, though Moles can do other, crazier things like redirecting calls to static methods.
To elaborate a bit further, you don't actually need to use IL to implement IoC functionality, but this is often valuable to avoid the overhead of repeated Reflection calls, since Reflection is relatively expensive. Here is some information on what Castle Windsor is doing.
To answer your question, the most helpful place I found to start was the OpCodes class. This is a good summary of the available functionality in IL and how the OpCodes function. It's essentially a stack-based assembly language (no registers to worry about), but strongly-typed and with first-class access to object symbols and concepts, like types, fields, and methods. Here is a good Code Project article introducing the basics of IL. If you're interested, I can also send you some helper classes I've created over the last few years that I use for my own Emit code.
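To give a flavour of it, here is a minimal, self-contained DynamicMethod example; the generated delegate just doubles its argument, but it is the same machinery proxy generators use at a much larger scale:

using System;
using System.Reflection.Emit;

public static class EmitDemo
{
    public static void Main()
    {
        // Build a Func<int, int> at run time that doubles its argument.
        var method = new DynamicMethod("Double", typeof(int), new[] { typeof(int) });
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push the argument onto the evaluation stack
        il.Emit(OpCodes.Ldc_I4_2); // push the constant 2
        il.Emit(OpCodes.Mul);      // multiply the two values on top of the stack
        il.Emit(OpCodes.Ret);      // return the result
        var doubler = (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(doubler(21)); // prints 42
    }
}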
The typed factory is implemented using the Castle DynamicProxy library. It generates a type on the fly that implements the interface and forwards all calls you make via the interface to the interceptor.
It imposes no dependency on your code. The interface is created in your assembly, which you control and which doesn't reference Windsor. In another assembly (the entry point to the app) you tell Windsor about that interface and tell it to make it a factory; Windsor learns about your interface and does its work with it. That's Inversion of Control in all its glory.
It's actually not that complicated :)
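For reference, wiring up a typed factory looks roughly like this (the widget names are made up, and I'm writing this from memory, so check the Windsor docs for the exact API):

using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IWidget { void Frob(); }
public class Widget : IWidget { public void Frob() { } }

// You write only this interface; Windsor's DynamicProxy emits the
// implementing type at run time and routes each call to the container.
public interface IWidgetFactory
{
    IWidget Create();
    void Release(IWidget widget);
}

public static class Bootstrapper
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();
        container.AddFacility<TypedFactoryFacility>();
        container.Register(
            Component.For<IWidget>().ImplementedBy<Widget>(),
            Component.For<IWidgetFactory>().AsFactory());
        return container;
    }
}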
ImpromptuInterface creates DLR dynamic proxies based on interfaces. It allows you to have a dynamic implementation with a static interface. In fact, it even has a base class, ImpromptuFactory, that provides the starting point for creating factories with a dynamic implementation based on the interface.
Currently I have created an ABCFactory class that has a single method creating ABC objects. Now that I think of it, maybe instead of having a factory, I could just make a static method in my ABC class. What are the pros and cons of making this change? Won't it lead to the same thing? I don't foresee having other classes inherit from ABC, but one never knows!
Thanks
Having a single static method makes this much more difficult to test, whereas an instantiable object is easier to test. Also, dependency injection remains an option later with the non-static solution.
Of course, if you don't need any of this, then these are not good arguments.
The main advantage of the factory method is the ability to hide the reference to a specific class behind an interface. Since static methods cannot be part of an interface, a static factory method is basically the same as the constructor itself. The only useful application of static factory methods is to provide access to a private constructor, which is commonly used for singleton implementations.
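For example, the singleton case looks like this (a minimal sketch; the class name is invented):

public sealed class Configuration
{
    private static readonly Configuration instance = new Configuration();

    // The private constructor prevents outside code from calling
    // "new Configuration()".
    private Configuration() { }

    // The static factory method is the only way to obtain an instance.
    public static Configuration GetInstance()
    {
        return instance;
    }
}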
In reality, if you want to get the benefits of a factory class, you need the static method in its own class. This allows you to later create new factory classes, or reconfigure the existing one, to get different behaviours. For example, one factory class might create Unicorns, which implement the IFourHoovedAnimal interface. You might have an algorithm written that does things with IFourHoovedAnimals and needs to instantiate them. Later you can create a new factory class that instead instantiates Pegasuses, which also implement IFourHoovedAnimal. The old algorithm can now be reused for Pegasuses just by using the new factory! To make this work, both PegasusFactory and UnicornFactory must inherit from some common base class (usually an abstract class).
So you see, by placing the static method in its own factory class, you can swap factory classes out for newer ones to reuse old algorithms. This also improves testability, because unit tests can now be fed a factory that creates mock objects.
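Sketched in C# (the Unicorn and Pegasus names are from above; the interface member and the algorithm are invented for illustration):

public interface IFourHoovedAnimal
{
    void Gallop();
}

public class Unicorn : IFourHoovedAnimal { public void Gallop() { } }
public class Pegasus : IFourHoovedAnimal { public void Gallop() { } }

// Common base class for all factories, so algorithms can accept any of them.
public abstract class AnimalFactory
{
    public abstract IFourHoovedAnimal CreateAnimal();
}

public class UnicornFactory : AnimalFactory
{
    public override IFourHoovedAnimal CreateAnimal() { return new Unicorn(); }
}

public class PegasusFactory : AnimalFactory
{
    public override IFourHoovedAnimal CreateAnimal() { return new Pegasus(); }
}

public static class StampedeAlgorithm
{
    // Written once against the abstractions; reused for any new factory.
    public static void Run(AnimalFactory factory)
    {
        IFourHoovedAnimal animal = factory.CreateAnimal();
        animal.Gallop();
    }
}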
I have done the latter before (a static factory method on the class you are creating instances of), but only for very small projects, and only because I needed it to help refactor some old code while keeping changes to a minimum. In that case I had factored out a chunk of code that created a bunch of ASP.NET controls and stuffed all those controls into a user control. I wanted to make my new user control property-based, but it was easier for the old legacy code to create the user control with a parameter-based constructor.
So I created a static factory method that took all the parameters, then instantiated the user control and set its properties based on the parameters. The old legacy code used this static method to create the user control, and future code would use the "prettier" properties instead.
For concrete classes, factory methods are really just a method of indirection around creating the actual type (which isn't to say they aren't useful, but as you've found, the factory method could really be anywhere).
Where the factory method really shines, though, is when your method creates instances of an interface type.
The "D" in Uncle Bob's SOLID Principles of Object Oriented Design is "The Dependency Inversion Priciple" Depend on abstractions, not on concretions.
An extreme following of that principle could have your main class create all your factories, with each factory using other factories via interfaces. The only appearance of "new" (creating concrete objects) would be in your main class, and your factories. All your objects would work with interfaces (abstractions), with the concrete dependencies obtained from supplied factory implementations.
You could then very easily adjust, or provide multiple Main classes customised for different scenarios.
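A tiny sketch of that shape (all names hypothetical): the only "new" for concrete types lives in Main, and everything else sees abstractions.

public interface IMessageSink { void Send(string text); }
public interface IMessageSinkFactory { IMessageSink Create(); }

public class ConsoleSink : IMessageSink
{
    public void Send(string text) { System.Console.WriteLine(text); }
}

public class ConsoleSinkFactory : IMessageSinkFactory
{
    public IMessageSink Create() { return new ConsoleSink(); }
}

public class Notifier
{
    private readonly IMessageSinkFactory factory;
    public Notifier(IMessageSinkFactory factory) { this.factory = factory; }
    public void Notify(string text) { factory.Create().Send(text); }
}

public static class Program
{
    // The composition root: swap the factory here to customise a scenario.
    public static void Main()
    {
        var notifier = new Notifier(new ConsoleSinkFactory());
        notifier.Notify("hello");
    }
}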
Overusing design patterns is dangerous, and creational design patterns make sense when you have class hierarchies with defined interfaces, or need to build rather complex objects. If you have a simple design, use simple solutions. In your case, therefore, a factory method would be enough.
Yes, you are right, it is another design pattern :)
I have a class which is going to need to use the strategy design pattern. At run time I am required to switch different algorithms in and out to see the effects on the performance of the application.
The class in question currently takes four parameters in the constructor, each representing an algorithm.
How, using Ninject (or a generalised approach), could I still use IoC but also use the strategy pattern?
The current limitation is that my kernel (container) is aware of each algorithm interface, but each interface can only be bound to one concrete class. The only way around this I can see at the moment is to pass in all eight algorithms at construction under different interfaces, but this seems totally unnecessary. I wouldn't do this if I were not using an IoC container, so there must be some way around it.
Code example:
class MyModule : NinjectModule
{
    public override void Load()
    {
        Bind<Person>().ToSelf();
        Bind<IAlgorithm>().To<TestAlgorithm>();
        Bind<IAlgorithm>().To<ProductionAlgorithm>();
    }
}
Person needs to make use of both algorithms so I can switch at run time, but only TestAlgorithm is ever used, as it's the first one registered in the container.
Let's take a step back and examine a slightly bigger picture. Since you want to be able to switch Strategy at run time, there must be some kind of signalling mechanism that tells Person to switch the Strategy. If your application is UI-driven, perhaps there's a button or drop-down list where the user can select which Strategy to use; but even if this is not the case, some outside caller must map a piece of run-time data to an instance of the Strategy.
The standard DI solution when you need to map a run-time instance to a dependency is to use an Abstract Factory.
Instead of registering the individual Strategies with the container, you register the factory.
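In Ninject terms, that could look like this (IAlgorithm, TestAlgorithm, ProductionAlgorithm, and Person are from your question; the factory interface and its boolean selection key are illustrative):

using Ninject.Modules;

public interface IAlgorithmFactory
{
    // true selects the test algorithm; the key could equally be an
    // enum or a string coming from the UI.
    IAlgorithm Create(bool useTest);
}

public class AlgorithmFactory : IAlgorithmFactory
{
    // The run-time decision lives in the factory, not in the container.
    public IAlgorithm Create(bool useTest)
    {
        return useTest ? (IAlgorithm)new TestAlgorithm()
                       : new ProductionAlgorithm();
    }
}

class MyModule : NinjectModule
{
    public override void Load()
    {
        Bind<Person>().ToSelf();
        Bind<IAlgorithmFactory>().To<AlgorithmFactory>();
    }
}

Person then takes a single IAlgorithmFactory in its constructor and asks it for whichever strategy the current signal selects, instead of receiving all the concrete algorithms up front.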
It is entirely possible to write a complete API so that it's DI-friendly, but still DI Container-agnostic.
If you need to vary the IAlgorithm implementation at run-time, you can change Person to require an algorithm factory that provides different concrete algorithms based on run-time conditions.
Some dependency injection containers let you bind to anonymous creational delegates - if Ninject supports that, you could put the decision logic in one of those.
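Ninject does have ToMethod bindings, which fit that description; a sketch (the useTestMode flag is hypothetical, standing in for whatever run-time signal you have):

// Inside MyModule.Load(). In practice the flag would come from
// configuration or the UI rather than a local variable.
bool useTestMode = true;
Bind<IAlgorithm>().ToMethod(ctx =>
    useTestMode ? (IAlgorithm)new TestAlgorithm()
                : new ProductionAlgorithm());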