Correct pattern for exposing available actions - C#

I have a class which can perform many analytics on a given object and return sets of results:
public class AnalyserClass
{
    private SomeObject _someObject;

    public AnalyserClass(SomeObject someObject)
    {
        _someObject = someObject;
    }

    public IEnumerable<Result> DoA()
    {
        // checks A on _someObject and returns some results
    }

    public IEnumerable<Result> DoB()
    {
        // checks B on _someObject and returns some results
    }

    // etc.
}

public class Result
{
    // various properties with result information
}

public class SomeObject
{
    // this is the object which is analysed
}
I would like to expose these actions (DoA, DoB, etc.) in a CheckedListBox in a WinForm. The user would then tick the actions they want performed and click a Run button.
I would ideally like exposing the actions to be dynamic - so, if I develop a new action within my AnalyserClass, it will automatically show up and be executable from the WinForm without any code changes anywhere else.
I am a fairly new C# programmer. I have been researching how best to structure this and I have become a little bit confused between various patterns and which one would be most appropriate to use.
First of all I read up on the MVVM pattern, but this seems to be more complicated than is required here and I don't understand what the Model would be.
Then I looked at the Command pattern. But from what I understand, I would have to create a class wrapper for every single action (there are a lot of them), which would be quite time-consuming and a bit cumbersome (changing code in multiple places, so not 'dynamic'). I also don't understand how I could build the list of checkboxes from the command classes. This does seem to be the most appropriate pattern that I could find, but I am uncertain about it because of my lack of experience.
Your guidance is much appreciated.

I would not choose Reflection here, because it makes things unnecessarily complicated.
Furthermore, with your current approach, you would need to extend your AnalyserClass with new functionality every time you need a new analyzer tool, and that:
breaks the "open-closed" principle of SOLID,
breaks the "single responsibility" principle of SOLID,
makes your class too large and pretty unmaintainable.
I would introduce in your AnalyserClass a collection of supported actions:
class AnalyserClass
{
    public IEnumerable<IAnalyzer> Analyzers { get; private set; }
}
...where the IAnalyzer interface describes your actions:
interface IAnalyzer
{
    string Description { get; } // this is what the user will see as the action name
    Result Perform(SomeObject input);
}
Then you can implement the IAnalyzer in various classes as needed, even in different modules etc.
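For illustration, a minimal implementation might look like this (the analyzer name and its check are invented for the example):
class NullFieldAnalyzer : IAnalyzer
{
    public string Description => "Check for null fields"; // shown to the user

    public Result Perform(SomeObject input)
    {
        // Run the check against the input and report the outcome.
        return new Result();
    }
}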
The only open point would be - how to add all the IAnalyzer instances into your AnalyserClass.Analyzers collection?
Well:
you can use a DI framework (e.g. MEF) and let it discover all the things automatically,
you can inject them manually via DI,
you can use Reflection and scan the types manually,
you can add them manually, e.g. in the constructor of the AnalyserClass (simple but not recommended),
and so on...
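For completeness, here is what the Reflection option might look like as a minimal sketch. It assumes the analyzers live in the executing assembly and have parameterless constructors, and that the form has a CheckedListBox named checkedListBox1; someObject stands for whatever instance you are analysing:
// Requires: using System; using System.Linq; using System.Reflection;
// Discover every concrete IAnalyzer implementation and instantiate it.
var analyzers = Assembly.GetExecutingAssembly()
    .GetTypes()
    .Where(t => typeof(IAnalyzer).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
    .Select(t => (IAnalyzer)Activator.CreateInstance(t))
    .ToList();

// Show the analyzers in the CheckedListBox.
checkedListBox1.DisplayMember = nameof(IAnalyzer.Description); // or override ToString() in each analyzer
foreach (var analyzer in analyzers)
    checkedListBox1.Items.Add(analyzer);

// In the Run button's click handler, execute only the checked analyzers.
foreach (IAnalyzer analyzer in checkedListBox1.CheckedItems)
{
    var result = analyzer.Perform(someObject);
    // ...display or collect the results
}
A new analyzer class then shows up in the list automatically, which is exactly the "no code changes anywhere else" behaviour asked for.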

Related

Unity injection with too many constructor parameters

I have the following Unity-related question. The code stub below sets up the basic scenario and the question is at the bottom.
NOTE that the [Dependency] attribute does not work for the example below and results in a StackOverflowException, but constructor injection does work.
NOTE (2): Some of the comments below started to assign "labels" like code smell, bad design, etc. So, for the avoidance of confusion, here is the business setup without any design first.
The question seems to cause severe controversy even among some of the best-known C# gurus. In fact, the question goes far beyond C# and falls more into pure computer science. It is based on the well-known "battle" between the service locator pattern and pure dependency injection: https://martinfowler.com/articles/injection.html vs http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/ and a subsequent update to remedy the situation when the dependency injection becomes too complicated: http://blog.ploeh.dk/2010/02/02/RefactoringtoAggregateServices/
Here is the situation, which does not fit nicely into what is described in the last two but seems to fit perfectly into the first one.
I have a large (50+) collection of what I called micro services. If you have a better name, please "apply" it when reading. Each of them operates on a single object, let's call it quote. However, a tuple (context + quote) seems more appropriate. Quote is a business object, which gets processed and serialized into a database and context is some supporting information, which is necessary while quote is being processed, but is not saved into the database. Some of that supporting information may actually come from database or from some third-party services. This is irrelevant. Assembly line comes to mind as a real-world example: an assembly worker (micro service) receives some input (instruction (context) + parts (quote)), processes it (does something with parts according to instruction and / or modifies instruction) and passes it further if successful OR discards it (raises exception) in case of issues. The micro services eventually get bundled up into a small number (about 5) of high-level services. This approach linearizes processing of some very complex business object and allows testing each micro service separately from all others: just give it an input state and test that it produces expected output.
Here is where it gets interesting. Because of the number of steps involved, high-level services start to depend on many micro services: 10+ and more. This dependency is natural, and it just reflects the complexity of the underlying business object. On top of that, micro services can be added / removed on a nearly constant basis: basically, they are business rules, which are almost as fluid as water.
That severely clashes with Mark's recommendation above: if I have 10+ effectively independent rules applied to a quote in some high-level service, then, according to the third blog, I should aggregate them into logical groups of, let's say, no more than 3-4 instead of injecting all 10+ via the constructor. But there are no logical groups! While some of the rules are loosely dependent, most of them are not, and so artificially bundling them together will do more harm than good.
Throw in that the rules change frequently, and it becomes a maintenance nightmare: all real / mocked calls must be updated every time the rules change.
And I have not even mentioned that the rules are US-state dependent, so, in theory, there are about 50 collections of rules, one collection per state and per workflow. And while some of the rules are shared among all states (like "save quote to the database"), there are a lot of state-specific rules.
Here is a very simplified example.
Quote - business object, which gets saved into database.
public class Quote
{
    public string SomeQuoteData { get; set; }
    // ...
}
Micro services. Each of them performs some small update(s) to the quote. Higher-level services can also be built from lower-level micro services.
public interface IService_1
{
    Quote DoSomething_1(Quote quote);
}
// ...
public interface IService_N
{
    Quote DoSomething_N(Quote quote);
}
All quote processors implement this interface.
public interface IQuoteProcessor
{
    List<Func<Quote, Quote>> QuotePipeline { get; }
    Quote ProcessQuote(Quote quote = null);
}
// Low-level quote processor. It does all workflow-related work.
public abstract class QuoteProcessor : IQuoteProcessor
{
    public abstract List<Func<Quote, Quote>> QuotePipeline { get; }

    public Quote ProcessQuote(Quote quote = null)
    {
        // Aggregate over QuotePipeline: applies each step of the workflow to the quote.
        return QuotePipeline.Aggregate(quote, (q, step) => step(q));
    }
}
One of the high-level "workflow" services:
public interface IQuoteCreateService
{
    Quote CreateQuote(Quote quote = null);
}
and its actual implementation, where we use many of the low-level micro services:
public class QuoteCreateService : QuoteProcessor, IQuoteCreateService
{
    protected IService_1 Service_1;
    // ...
    protected IService_N Service_N;

    public override List<Func<Quote, Quote>> QuotePipeline =>
        new List<Func<Quote, Quote>>
        {
            Service_1.DoSomething_1,
            // ...
            Service_N.DoSomething_N
        };

    public Quote CreateQuote(Quote quote = null) =>
        ProcessQuote(quote);
}
There are two main ways to achieve DI:
The standard approach is to inject all dependencies through the constructor:
public QuoteCreateService(
    IService_1 service_1,
    // ...
    IService_N service_N)
{
    Service_1 = service_1;
    // ...
    Service_N = service_N;
}
And then register all types with Unity:
public static class UnityHelper
{
    public static void RegisterTypes(this IUnityContainer container)
    {
        container.RegisterType<IService_1, Service_1>(
            new ContainerControlledLifetimeManager());
        // ...
        container.RegisterType<IService_N, Service_N>(
            new ContainerControlledLifetimeManager());

        container.RegisterType<IQuoteCreateService, QuoteCreateService>(
            new ContainerControlledLifetimeManager());
    }
}
Then Unity will do its "magic" and resolve all services at run time. The problem is that currently we have about 30 such micro services and the number is expected to increase. As a result, some of the constructors are already getting 10+ services injected. This is inconvenient to maintain, mock, etc...
Sure, it is possible to use the idea from here: http://blog.ploeh.dk/2010/02/02/RefactoringtoAggregateServices/ However, the micro services are not really related to each other, so bundling them together is an artificial process without any justification. In addition, it would defeat the purpose of making the whole workflow linear and independent (a micro service takes a current "state", performs some action with the quote, and then just moves on). None of them cares about any other micro services before or after them.
An alternative idea seems to be to create a single "service repository":
public interface IServiceRepository
{
    IService_1 Service_1 { get; set; }
    // ...
    IService_N Service_N { get; set; }

    IQuoteCreateService QuoteCreateService { get; set; }
}

public class ServiceRepository : IServiceRepository
{
    protected IUnityContainer Container { get; }

    public ServiceRepository(IUnityContainer container)
    {
        Container = container;
    }

    private IService_1 _service_1;
    public IService_1 Service_1
    {
        get => _service_1 ?? (_service_1 = Container.Resolve<IService_1>());
        set => _service_1 = value;
    }

    // ...
}
Then register it with Unity and change the constructor of all relevant services to something like this:
public QuoteCreateService(IServiceRepository repo)
{
    Service_1 = repo.Service_1;
    // ...
    Service_N = repo.Service_N;
}
The benefits of this approach (in comparison to the previous one) are as follows:
All micro services and higher-level services can be created in a unified form: new micro services can easily be added / removed without the need to fix the constructor calls of the services and all unit tests. As a result, maintenance effort and complexity decrease.
Due to the IServiceRepository interface, it is easy to create an automated unit test that iterates over all properties and validates that all services can be instantiated, which means there will be no nasty run-time surprises.
The problem with this approach is that it starts looking a lot like a service locator, which some people consider an anti-pattern: http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/ and then people start to argue that all dependencies must be made explicit and not hidden as in ServiceRepository.
What shall I do with that?
I would just create one interface:
public interface IDoSomethingAble
{
Quote DoSomething(Quote quote);
}
And an aggregate:
public interface IDoSomethingAggregate : IDoSomethingAble {}

public class DoSomethingAggregate : IDoSomethingAggregate
{
    private readonly IEnumerable<IDoSomethingAble> _somethingAbles;

    public DoSomethingAggregate(IEnumerable<IDoSomethingAble> somethingAbles)
    {
        _somethingAbles = somethingAbles;
    }

    public Quote DoSomething(Quote quote)
    {
        foreach (var somethingAble in _somethingAbles)
        {
            quote = somethingAble.DoSomething(quote);
        }
        return quote;
    }
}
Note: Dependency injection doesn't mean you need to use it everywhere.
I would go for a factory:
public class DoSomethingAggregateFactory
{
    public IDoSomethingAggregate Create()
    {
        return new DoSomethingAggregate(GetItems());
    }

    private IEnumerable<IDoSomethingAble> GetItems()
    {
        yield return new Service1();
        yield return new Service2();
        yield return new Service3();
        yield return new Service4();
        yield return new Service5();
    }
}
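Usage would then be something like this (assuming a Quote instance in a variable called quote):
var aggregate = new DoSomethingAggregateFactory().Create();
quote = aggregate.DoSomething(quote);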
Anything that is not constructor-injected hides explicit dependencies.
As a last resort, you could also create a DTO-like object and inject every needed service via the constructor (but only once).
This way you can request the ProcessorServiceScope and have all services available without writing the constructor logic for every class:
public class ProcessorServiceScope
{
    public Service1 Service1 { get; }
    public ServiceN ServiceN { get; }

    public ProcessorServiceScope(Service1 service1, ServiceN serviceN)
    {
        Service1 = service1;
        ServiceN = serviceN;
    }
}

public class Processor1
{
    public Processor1(ProcessorServiceScope serviceScope)
    {
        //...
    }
}

public class ProcessorN
{
    public ProcessorN(ProcessorServiceScope serviceScope)
    {
        //...
    }
}
It seems like a ServiceLocator, but it does not hide the dependencies, so I think this is kind of OK.
Consider the various interface methods listed:
Quote DoSomething_1(Quote quote);
Quote DoSomething_N(Quote quote);
Quote ProcessQuote(Quote quote = null)
Quote CreateQuote(Quote quote = null);
Apart from the names, they're all identical. Why make things so complicated? Considering the Reused Abstractions Principle, I'd argue that it'd be better if you had fewer abstractions, and more implementations.
So instead, introduce a single abstraction:
public interface IQuoteProcessor
{
    Quote ProcessQuote(Quote quote);
}
This is a nice abstraction because it's an endomorphism over Quote, which we know is composable. You can always create a Composite of an endomorphism:
public class CompositeQuoteProcessor : IQuoteProcessor
{
    private readonly IReadOnlyCollection<IQuoteProcessor> processors;

    public CompositeQuoteProcessor(params IQuoteProcessor[] processors)
    {
        this.processors = processors ?? throw new ArgumentNullException(nameof(processors));
    }

    public Quote ProcessQuote(Quote quote)
    {
        var q = quote;
        foreach (var p in processors)
            q = p.ProcessQuote(q);
        return q;
    }
}
At this point, you're essentially done, I should think. You can now compose various services (those called microservices in the OP). Here's a simple example:
var processor = new CompositeQuoteProcessor(new Service1(), new Service2());
Such composition should go in the application's Composition Root.
The various services can have dependencies of their own:
var processor =
    new CompositeQuoteProcessor(
        new Service3(
            new Foo()),
        new Service4());
You can even nest the Composites, if that's useful:
var processor =
    new CompositeQuoteProcessor(
        new CompositeQuoteProcessor(
            new Service1(),
            new Service2()),
        new CompositeQuoteProcessor(
            new Service3(
                new Foo()),
            new Service4()));
This nicely addresses the Constructor Over-injection code smell, because the CompositeQuoteProcessor class only has a single dependency. Since that single dependency is a collection, however, you can compose arbitrarily many other processors.
In this answer, I completely ignore Unity. Dependency Injection is a question of software design. If a DI Container can't easily compose a good design, you'd be better off with Pure DI, which I've implied here.
If you must use Unity, you can always create concrete classes that derive from CompositeQuoteProcessor and take Concrete Dependencies:
public class SomeQuoteProcessor1 : CompositeQuoteProcessor
{
    public SomeQuoteProcessor1(Service1 service1, Service3 service3) :
        base(service1, service3)
    {
    }
}
Unity should be able to auto-wire that class, then...
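If you do go through Unity, the registration might look something like this (a sketch only; Unity can auto-wire the concrete Service1/Service3 constructor parameters on its own):
container.RegisterType<IQuoteProcessor, SomeQuoteProcessor1>(
    new ContainerControlledLifetimeManager());
// ...
var processor = container.Resolve<IQuoteProcessor>();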
Unity supports property injection. Instead of passing all those values into the constructor, just have public setters available with the [Dependency] attribute. This allows you to add as many injections as you need without having to update the constructor.
public class QuoteCreateService : QuoteProcessor, IQuoteCreateService
{
    [Dependency]
    public IService_1 Service_1 { get; set; }
    // ...
    [Dependency]
    public IService_N Service_N { get; set; }

    public override List<Func<Quote, Quote>> QuotePipeline =>
        new List<Func<Quote, Quote>>
        {
            Service_1.DoSomething_1,
            // ...
            Service_N.DoSomething_N
        };

    public Quote CreateQuote(Quote quote = null) =>
        ProcessQuote(quote);
}
I never thought that I would answer my own question, though a substantial part of the credit should go to https://softwareengineering.stackexchange.com/users/115084/john-wu - he was the one who set my mind in the proper direction.
Nevertheless, nearly two years have passed since I asked the question, and while I built the solution shortly after asking it (and thanks to everyone who replied), it took more than a year for most of the developers in the company I work for to actually understand how it works and what it does (and yes, they are all well-above-average developers, and yes, the code is written in pure C# with no external libraries). So I think it could be important for others who might have similar business scenarios.
As mentioned in the question, the root of our problem is that the parameter space we are dealing with is too large. We have about 6-8 values of what we call a workflow (call it W), about 30-40 values of what we call a state config (call it S) - a combination of US state code and two other parameters, though not all triples are possible (the actual content of a state config is irrelevant here), and about 30-50 values of what we call a risk rule (call it R) - that number depends on the product, but this is also irrelevant, as different products are treated differently.
So, the total dimension of parameter space is N = W * S * R and it is around 10K (and I am not much concerned about a precise value). Which means that when the code runs, we need approximately the following: for each workflow (obviously only one is running at a time but all of them do run at some time) and each state config (again only one is running at a time but any of them could run at some time) we need to evaluate all risk rules, which are relevant for that workflow and that state config.
Well, if the dimension of parameter space is around some N, then the number of tests needed to cover the whole space is at least on the order of that N. And this is exactly what the legacy code and tests were trying to do and what resulted in the question.
The answer turned out to be in pure math rather than in pure computer science, and it is based on what is called separable spaces: https://en.wikipedia.org/wiki/Separable_space and what in group-theory terms is called an irreducible representation: https://en.wikipedia.org/wiki/Irreducible_representation . Though I have to admit that the latter was more of an inspiration than an actual application of group theory.
If I have already lost you, that's fine. Just, please, read the math mentioned above before proceeding further.
The space separability here means that we can choose such a space N so that subspaces W, S, and R become independent (or separable). To the best of my understanding, this can always be done for finite spaces that we are dealing with in CS.
This means that we can describe the N space as, e.g., S lists (or sets) of rules, where each rule is made applicable to some of the W workflows by assigning a set of applicable workflows to it. And yes, if we have some bad rules that originally had to be applied in weird combinations of workflows and state configs, then we can split them into more than one rule, which then allows maintaining separability. In code, that assignment might look as sketched below.
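Here is a minimal sketch of that shape (names invented; Workflow stands for whatever identifies our 6-8 workflows):
public enum Workflow { /* the 6-8 workflow values */ }

public interface IRiskRule
{
    // The subset of workflows in which this rule applies.
    ISet<Workflow> ApplicableWorkflows { get; }
    Quote Apply(Quote quote);
}

// For a given workflow, only the applicable slice of a state config's rules runs:
var applicable = rules.Where(r => r.ApplicableWorkflows.Contains(currentWorkflow));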
This, of course, can be generalized, but I will skip the details as they are irrelevant.
At this point, someone may wonder what the point is. Well, if we can split an N-dimensional space (and N is about 10K in our case) into independent subspaces, then the magic happens: instead of writing on the order of N = W * S * R tests to cover the whole parameter space, we only need to write on the order of W + S + R tests. In our case the difference is about 100X.
But that's still not all. As we can describe the subspaces in terms of sets or lists (depending on the needs), that naturally brings us to the notion of useless tests.
Wait, did I just say useless tests? Yes, I did. Let me explain. A typical TDD paradigm is that if the code failed, then the first thing we need to do is create a test which would have caught that bug. Well, if the code is described by a static list or set (i.e. a list or set that is hard-coded in the code) and the test would be described by an identity transformation from that list/set, then such a test is useless, as it would have to repeat the original list/set...
The state configs form a historical pattern. Let's say we had some set of rules for the state of CA some time in 2018. That set might be changed slightly into some other set of rules in 2019 and into yet another in 2020. These changes are small: a collection might pick up or lose a few rules, and/or a rule might be tweaked a little - e.g. if a rule compares some value against a threshold, the value of that threshold might change at some point for some state config. And once a rule or collection of rules is changed, it should stay as it is until it is changed again. Meanwhile other rules could change, and every such change requires the introduction of what we call a state config. So for each US state we have an ordered collection (list) of these state configs, and for each state config we have a collection of rules. Most of the rules don't change, but some of them do sporadically change as described.

A natural IOC approach is to register each rule collection and each rule for each state config with the IOC container, e.g. Unity, using a combination of the unique "name" of the state config and the name of the rule / collection (we actually run more than one collection of rules during a workflow), where each rule already carries the collection of workflows in which it is applicable. Then, when the code runs for a given state config and a given workflow, we can pull the collection out of Unity. The collection contains the names of the rules that should be run. Combining the name of each rule with the name of the state config, we can pull the actual rules out of Unity, filter the collection to leave only the rules applicable to the given workflow, and then apply them all.
What happens here is that rule names / collection names form some closed sets, and they benefit greatly from being described that way. We obviously don't want to register each rule / collection for each state config by hand, as that would be tedious and error-prone. So we use what we call "normalizers". Let's say we have a general rule - a rule that is the same for all state configs. Then we register it by name only, and the normalizer "automatically" registers it for all state configs. The same goes for the historical versioning: once we register a rule / collection with Unity by rule / collection name + state config, the normalizer fills in the gap until we change the rule at some later state config.
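As a rough sketch of what that registration could look like with Unity (all names invented; the key is a composite of state config name and rule name):
// Registration (a normalizer would emit entries like this for every state config in the gap):
container.RegisterType<IRiskRule, ThresholdRule>(
    "CA-2018-01:ThresholdRule", new ContainerControlledLifetimeManager());

// Resolution at run time for a given state config and rule name:
var rule = container.Resolve<IRiskRule>($"{stateConfigName}:{ruleName}");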
As a result, each rule becomes extremely simple. Most of them have either zero or one injected constructor parameter, a few of them have two, and I know only one rule that has three injected parameters. As rules are independent and very simple, the tests for rules become very simple as well.
We do have some ideas to make the core of whatever I wrote above open source, provided that it could bring some value to the community...

Generic Interface w/ Polymorphism to handle Objects

Previous Post removed; Updated:
So I have a unique issue, which is possibly fairly common though. Properties are quite possibly our most commonly used code, as they give our data a constant value store. So I thought about how I could implement this, and then about how easy Generics can make life. Unfortunately, we can't just use a Property in a Generic without some heavy legwork. So here is my solution / problem; I'm not sure it is the best method - that is why I was seeking review from my peers.
Keep in mind the application will be massive; this is a very simple example.
Abstract:
Presentation Layer: The interface will have a series of fields; or even data to go across the wire through a web-service to our database.
// Interface:
public interface IHolder<T>
{
    void objDetail(List<T> obj);
}
So my initial thought was an interface that will allow me to Generically handle each one of my objects.
// User Interface:
public class UI : IHolder<object>
{
    public void objDetail(List<object> obj)
    {
        // Create an instance
        List<object> l = new List<object>();
        // Add UI fields:
        l.Add(Guid.NewGuid());
        l.Add(txtFirst.Text);
        l.Add(txtLast.Text);
        // Copy l into our obj
        obj.Clear();
        obj.AddRange(l);
    }
}
Now I have an interface which has been used by our UI to put information in. This is where the root of my curiosity has been thrown into the mixture.
// Create an Object Class
public class Customer : IHolder<Customer>
{
    // Member variables:
    private Guid _Id;
    private String _First;
    private String _Last;

    public Guid Id
    {
        get { return _Id; }
        set { _Id = value; }
    }

    public String First
    {
        get { return _First; }
        set { _First = value; }
    }

    public String Last
    {
        get { return _Last; }
        set { _Last = value; }
    }

    public virtual void objDetail(List<Customer> obj)
    {
        // Enumerate through the list and assign to properties.
    }
}
Now this is where I thought it would be cool: if I could use Polymorphism to use the same interface, but override the method to do something different. So the Interface utilizes a Generic, with the ability to morph to our given Object Class.
Now our Object Classes can move toward our Entity interface, which will handle basic CRUD operations.
I know this example isn't the best for my intention, as you really don't need to use Polymorphism. But this is the overall idea / goal...
Interface to Store Presentation Layer UI Field Value
Implement the Properties to a Desired Class
Create a Wrapper Around my Class; which can be Polymorphed.
Morphed to a Generic for Crud Operation
Am I on the right path, or is this taboo? Should I not do this? My application needs to hold each instance, but I need the flexibility to adapt very quickly without breaking every single instance in the process. That was how I thought I could solve the issue. Any thoughts? Suggestions? Am I missing a concept here, or am I over-thinking it? Did I miss the boat and implement my idea completely wrong? That is where I'm lost...
After pondering this scenario a bit, I thought about what would provide that flexibility while still ensuring the code is optimized for modification and the business. I'm not sure this is the right solution, but it appears to work. Not only does it work, it works nicely. It appears to be fairly robust.
When is this approach useful? Well, when you intend to decouple your User Interface from your Logic. I'll gradually build each aspect so you can see the entire structure.
public interface IObjContainer<T>
{
    void container(List<T> obj);
}
This particular structure will be important, as it will store all of the desired content.
So to start you would create a Form with a series of Fields.
Personal Information
Address Information
Payment Information
Order Information
So as you can see all of these can be separate Database Tables, but belong to a similar Entity Model you are manipulating. This is quite common.
So a Separation of Concerns starts to show slightly: the fields will be manipulated and passed through an Interface.
public interface IPersonalInformation
{
    string FirstName { get; set; }
    string LastName { get; set; }
}
So essentially the form is passing its values through the Interface. You could build one interface to handle that entire form, or individual interfaces that you call separately so that they remain reusable.
So now you have a series of Interfaces, or a single one, containing all these variables to use. You would now create a class:
public class CustomerProperties : IPersonalInformation, IOrderInformation
{
    // Implement each interface property, e.g.:
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // ...and the IOrderInformation properties likewise
}
Now you've created a container that will hold all of your values. What is nifty about this container is you can reuse the same values for another class in your application or choose different ones. But it will logically separate the User Interface.
So essentially this is acting similar to a Repository.
Now you can take these values and perform the desired logic. What becomes wonderful now is that after you've performed your logic, you pass the object into our generic list. Then you simply implement that method in another class for your goal and iterate through your list, as sketched below.
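A rough sketch of that last step, using the IObjContainer<T> interface from above (CustomerWriter and its body are invented for illustration):
public class CustomerWriter : IObjContainer<CustomerProperties>
{
    public void container(List<CustomerProperties> obj)
    {
        foreach (var customer in obj)
        {
            // map customer.FirstName, customer.LastName, ... to your
            // web-service / database call here
        }
    }
}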
In all honesty, it appears to work well and decouple nicely. I feel that it was a lot of work to do something similar to a normal Repository and Unit of Work. This answers the question, but as to whether it is ideal for your project, I would look into Repository, Unit of Work, Separation of Concerns, Inversion of Control, and Dependency Injection. They may achieve this same approach more cleanly.
Update:
After I wrote this up, I noticed you could actually put those property values into the generic list structure directly, bypassing the series of interfaces; but that would introduce consistency issues, as you'd have to be aware of what data is being passed in each time, and in what order. It's possible, but may not be ideal.

Proper way for form events to reach into application

I have some debugging functions that I would like to refactor, but seeing as they are debugging functions, it seems like they would be less likely to follow proper design. They pretty much reach into the depths of the app to mess with things.
The main form of my app has a menu containing the debug functions, and I catch the events in the form code. Currently, the methods ask for a particular object in the application, if it's not null, and then mess with it. I'm trying to refactor so that I can remove the reference to this object everywhere, and use an interface for it instead (the interface is shared by many other objects which have no relation to the debugging features.)
As a simplified example, imagine I have this logic code:
public class Logic
{
    public SpecificState SpecificState { get; private set; }
    public IGenericState GenericState { get; private set; }
}
And this form code:
private void DebugMethod_Click(object sender, EventArgs e)
{
    if (myLogic.SpecificState != null)
    {
        myLogic.SpecificState.MessWithStuff();
    }
}
So I'm trying to get rid of the SpecificState reference. It's been eradicated from everywhere else in the app, but I can't think of how to rewrite the debug functions. Should they move their implementation into the Logic class? If so, what then? It would be a complete waste to put the many MessWithStuff methods into IGenericState as the other classes would all have empty implementations.
edit
Over the course of the application's life, many IGenericState instances come and go. It's a DFA / strategy pattern kind of thing. But only one implementation has debug functionality.
Aside: Is there another term for "debug" in this context, referring to test-only features? "Debug" usually just refers to the process of fixing things, so it's hard to search for this stuff.
Create a separate interface to hold the debug functions, such as:
public interface IDebugState
{
    void ToggleDebugMode(bool enabled); // Or whatever your debug can do
}
You then have two choices: you can either inject IDebugState the same way you inject IGenericState, as in:
public class Logic
{
    public IGenericState GenericState { get; private set; }
    public IDebugState DebugState { get; private set; }
}
Or, if you're looking for a quicker solution, you can simply do an interface test in your debug-sensitive methods:
private void DebugMethod_Click(object sender, EventArgs e)
{
    var debugState = myLogic.GenericState as IDebugState;
    if (debugState != null)
        debugState.ToggleDebugMode(true);
}
This conforms just fine with DI principles because you're not actually creating any dependency here, just testing to see if you already have one - and you're still relying on abstractions over concretions.
Internally, of course, you still have your SpecificState implementing both IGenericState and IDebugState, so there's only ever one instance - but that's up to your IoC container, none of your dependent classes need know about it.
I'd highly recommend reading Ninject's walkthrough of dependency injection (be sure to read through the entire tutorial). I know this may seem like a strange recommendation given your question; however, I think this will save you a lot of time in the long run and keep your code cleaner.
Your debug code seems to depend on SpecificState; therefore, I would expect that your debug menu items would ask the DI container for their dependencies, or a provider that can return the dependency or null. If you're already working on refactoring to include DI, then providing your debug menu items with the proper internal bits of your application as dependencies (via the DI container) seems to be an appropriate way to achieve that without breaking solid design principles. So, for instance:
public sealed class DebugMenuItem : ToolStripMenuItem
{
    private SpecificStateProvider _prov;

    public DebugMenuItem(SpecificStateProvider prov) : base("Debug Item")
    {
        _prov = prov;
    }

    // other stuff here

    protected override void OnClick(EventArgs e)
    {
        base.OnClick(e);
        SpecificState state = _prov.GetState();
        if (state != null)
            state.MessWithStuff();
    }
}
This assumes that an instance of SpecificState isn't always available, and needs to be accessed through a provider that may return null. By the way, this technique does have the added benefit of fewer event handlers in your form.
As an aside, I'd recommend against violating design principles for the sake of debugging, and have your debug "muck with stuff" methods interact with your internal classes the same way any other piece of code must - by its interface "contract". You'll save yourself a headache =)
I'd be inclined to look at dependency injection and decorators for relatively large apps, as FMM has suggested, but for smaller apps you could make a relatively easy extension to your existing code.
I assume that you push an instance of Logic down to the parts of your app somehow - either through static classes or fields, or by passing it into the constructor.
I would then extend Logic with this interface:
public interface ILogicDebugger
{
    IDisposable PublishDebugger<T>(T debugger);
    T GetFirstOrDefaultDebugger<T>();
    IEnumerable<T> GetAllDebuggers<T>();
    void CallDebuggers<T>(Action<T> call);
}
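The implementation is left open; a minimal in-memory version might look like this (a sketch only - not thread-safe):
public class LogicDebugger : ILogicDebugger
{
    private readonly List<object> _debuggers = new List<object>();

    public IDisposable PublishDebugger<T>(T debugger)
    {
        _debuggers.Add(debugger);
        // The caller disposes this subscription to unpublish the debugger.
        return new Subscription(() => _debuggers.Remove(debugger));
    }

    public T GetFirstOrDefaultDebugger<T>()
    {
        return _debuggers.OfType<T>().FirstOrDefault();
    }

    public IEnumerable<T> GetAllDebuggers<T>()
    {
        return _debuggers.OfType<T>().ToList();
    }

    public void CallDebuggers<T>(Action<T> call)
    {
        foreach (var debugger in GetAllDebuggers<T>())
            call(debugger);
    }

    private sealed class Subscription : IDisposable
    {
        private Action _onDispose;

        public Subscription(Action onDispose) { _onDispose = onDispose; }

        public void Dispose()
        {
            _onDispose?.Invoke();
            _onDispose = null;
        }
    }
}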
Then deep down inside your code some class that you want to debug would call this code:
var subscription =
    logic.PublishDebugger(new MessWithStuffHere(/* with params */));
Now in your top-level code you can call something like this:
var debugger = logic.GetFirstOrDefaultDebugger<MessWithStuffHere>();
if (debugger != null)
{
    debugger.Execute();
}
A shorter way to call methods on your debug class would be to use CallDebuggers like this:
logic.CallDebuggers<MessWithStuffHere>(x => x.Execute());
Back, deep down in your code, when your class that you're debugging is about to go out of scope, you would call this code to remove its debugger:
subscription.Dispose();
Does that work for you?

C#: Abstract Strategy base class serving as Abstract Factory for Strategy objects

I am trying to create a web-based tool for my company that, in essence, uses geographic input to produce tabular results. Currently, three different business areas use my tool and receive three different kinds of output. Luckily, all of the outputs are based on the same idea of Master Table - Child Table, and they even share a common Master Table.
Unfortunately, in each case the related rows of the Child Table contain vastly different data. Because this is the only point of contention I extracted a FetchChildData method into a separate class called DetailFinder. As a result, my code looks like this:
DetailFinder DetailHandler;
if (ReportType == "Planning")
    DetailHandler = new PlanningFinder();
else if (ReportType == "Operations")
    DetailHandler = new OperationsFinder();
else if (ReportType == "Maintenance")
    DetailHandler = new MaintenanceFinder();

DataTable ChildTable = DetailHandler.FetchChildData(Master);
Where PlanningFinder, OperationsFinder, and MaintenanceFinder are all subclasses of DetailFinder.
I have just been asked to add support for another business area and would hate to continue this if block trend. What I would prefer is to have a parse method that would look like this:
DetailFinder DetailHandler = DetailFinder.Parse(ReportType);
However, I am at a loss as to how to have DetailFinder know what subclass handles each string, or even what subclasses exist without just shifting the if block to the Parse method. Is there a way for subclasses to register themselves with the abstract DetailFinder?
You could use an IoC container; many of them allow you to register multiple services with different names or policies.
For instance, with a hypothetical IoC container you could do this:
IoC.Register<DetailHandler, PlanningFinder>("Planning");
IoC.Register<DetailHandler, OperationsFinder>("Operations");
...
and then:
DetailHandler handler = IoC.Resolve<DetailHandler>("Planning");
or some variation on this theme.
You can look at the following IoC implementations:
AutoFac
Unity
Castle Windsor
You might want to use a map of types to creational methods:
public class DetailFinder
{
    private static Dictionary<string, Func<DetailFinder>> Creators;

    static DetailFinder()
    {
        Creators = new Dictionary<string, Func<DetailFinder>>();
        Creators.Add( "Planning", CreatePlanningFinder );
        Creators.Add( "Operations", CreateOperationsFinder );
        ...
    }

    public static DetailFinder Create( string type )
    {
        return Creators[type].Invoke();
    }

    private static DetailFinder CreatePlanningFinder()
    {
        return new PlanningFinder();
    }

    private static DetailFinder CreateOperationsFinder()
    {
        return new OperationsFinder();
    }

    ...
}
Used as:
DetailFinder detailHandler = DetailFinder.Create( ReportType );
I'm not sure this is much better than your if statement, but it does make it trivially easy to both read and extend. Simply add a creational method and an entry in the Creators map.
Another alternative would be to store a map of report types and finder types, then use Activator.CreateInstance on the type if you are always simply going to invoke the constructor. The factory method detailed above would probably be more appropriate if there were more complexity in the creation of the object.
public class DetailFinder
{
    private static Dictionary<string, Type> Creators;

    static DetailFinder()
    {
        Creators = new Dictionary<string, Type>();
        Creators.Add( "Planning", typeof(PlanningFinder) );
        ...
    }

    public static DetailFinder Create( string type )
    {
        Type t = Creators[type];
        return Activator.CreateInstance(t) as DetailFinder;
    }
}
As long as the big if block or switch statement or whatever it is appears in only one place, it isn't bad for maintainability, so don't worry about it for that reason.
However, when it comes to extensibility, things are different. If you truly want new DetailFinders to be able to register themselves, you may want to take a look at the Managed Extensibility Framework which essentially allows you to drop new assemblies into an 'add-ins' folder or similar, and the core application will then automatically pick up the new DetailFinders.
However, I'm not sure that this is the amount of extensibility you really need.
To avoid an ever-growing if..else block, you could switch it round so the individual finders register which type they handle with the factory class.
The factory class on initialisation will need to discover all the possible finders and store them in a hashmap (dictionary). This could be done by reflection and/or using the managed extensibility framework as Mark Seemann suggests.
However - be wary of making this overly complex. Prefer to do the simplest thing that could possibly work now, with a view to refactoring when you need it. Don't go and build a complex self-configuring framework if you'll only ever need one more finder type ;)
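As a rough sketch of the reflection-based discovery (the ReportTypeAttribute is invented for the example; this scans only the assembly that contains DetailFinder):
[AttributeUsage(AttributeTargets.Class)]
public sealed class ReportTypeAttribute : Attribute
{
    public string Name { get; private set; }
    public ReportTypeAttribute(string name) { Name = name; }
}

[ReportType("Planning")]
public class PlanningFinder : DetailFinder { /* ... */ }

public static class DetailFinderFactory
{
    // Build the name-to-type map once by scanning for attributed subclasses.
    private static readonly Dictionary<string, Type> Finders =
        typeof(DetailFinder).Assembly
            .GetTypes()
            .Where(t => typeof(DetailFinder).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => new { Type = t, Attr = (ReportTypeAttribute)Attribute.GetCustomAttribute(t, typeof(ReportTypeAttribute)) })
            .Where(x => x.Attr != null)
            .ToDictionary(x => x.Attr.Name, x => x.Type);

    public static DetailFinder Create(string reportType)
    {
        return (DetailFinder)Activator.CreateInstance(Finders[reportType]);
    }
}
Adding a new finder is then just a matter of adding a new attributed subclass; no factory edits required.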
You can use reflection.
Here is sample code for a Parse method on DetailFinder (remember to add error checking to that code):
public static DetailFinder Parse(string reportType)
{
    string detailFinderClassName = GetDetailFinderClassNameByReportType(reportType);
    return Activator.CreateInstance(Type.GetType(detailFinderClassName)) as DetailFinder;
}
The GetDetailFinderClassNameByReportType method can get the class name from a database, a configuration file, etc.
I think information about "Plugin" pattern will be useful in your case: P of EAA: Plugin
Like Mark said, a big if/switch block isn't bad since it will all be in one place (all of computer science is basically about getting similarity in some kind of space).
That said, I would probably just use polymorphism (thus making the type system work for me). Have each report implement a FindDetails method (I'd have them inherit from a Report abstract class), since you're going to end up with several kinds of detail finders anyway. This also simulates pattern matching and algebraic datatypes from functional languages.

C# Design Pattern - How to write code based on highly configurable user selections

I would like to write code without a lot of switch, if/else, and other typical statements that would execute logic based on user input.
For example, let's say I have a Car class that I want to assemble and call Car.Run(). More importantly, let's say for the tires I have a choice of 4 different Tire classes to choose from based on the user input.
For the, I dunno, body type, let's say I have 10 body type classes to choose from to construct my Car object, and so on and so on.
What is the best pattern to use when this example is magnified by 1000 in the number of configurable parameters?
Is there even a pattern for this? I've looked at the Factory and Abstract Factory patterns; they don't quite fit the bill here, although it seems like they should.
I don't think the factory pattern would be remiss here. This is how I would set it up. I don't see how you can get away from switch/if-based logic, as fundamentally your user is making a choice.
public class Car {
    public Engine Engine { get; set; }
    // more properties here
}

public enum EngineType {
    Big,
    Small
}

public class EngineFactory {
    public Engine CreateEngine(EngineType type) {
        switch (type) {
            case EngineType.Big:
                return new BigEngine();
            case EngineType.Small:
                return new SmallEngine();
            default:
                throw new ArgumentOutOfRangeException(nameof(type));
        }
    }
}

public class Engine {
}

public class BigEngine : Engine {
}

public class SmallEngine : Engine {
}

public class CarCreator {
    private readonly EngineFactory _engineFactory = new EngineFactory();
    // more factories

    public Car Create() {
        Car car = new Car();
        car.Engine = _engineFactory.CreateEngine((EngineType)ddlEngineType.SelectedValue);
        // more setup to follow
        return car;
    }
}
The problem you speak of can be solved using Dependency Injection.
There are many frameworks implementing this pattern (for example, for .NET, the excellent Castle Windsor container).
I think elder_george is correct: you should look into DI containers. However, you might want to check the builder pattern (here too), which deals with "constructing" complex objects by assembling multiple pieces. If anything, this might provide you with some inspiration, and it sounds closer to your problem than the Factory.
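For a taste of what a builder could look like here (a sketch reusing the Car/Engine types from the factory answer above; the fluent method names are made up):
public class CarBuilder
{
    private Engine _engine = new SmallEngine(); // default part
    // ...more parts here

    public CarBuilder WithEngine(Engine engine)
    {
        _engine = engine;
        return this;
    }

    public Car Build()
    {
        return new Car { Engine = _engine };
    }
}

// Usage:
Car car = new CarBuilder()
    .WithEngine(new BigEngine())
    .Build();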
You can get around having to use a lot of if or switch statements if you introduce registration logic into your factory. A registration entry adds a binding to a dictionary in your factory:
private Dictionary<Type, Func<Engine>> _knownEngines = new Dictionary<Type, Func<Engine>>();
In the line above, you bind a type to a factory function, for example like so:
private void RegisterEngine<TEngineType>(Func<Engine> factoryFunc) where TEngineType : Engine
{
    _knownEngines.Add(typeof(TEngineType), factoryFunc);
}
This would allow you to call:
RegisterEngine<BigEngine>(() => new BigEngine());
on your factory
So now you have a way of allowing your factory to know about a large number of engines without needing to resort to if/switch statements. If all your engines have a parameterless constructor you could even improve the above to:
public void RegisterEngine<TEngineType>() where TEngineType : Engine, new()
{
    _knownEngines.Add(typeof(TEngineType), () => new TEngineType());
}
which would allow you to register your engines that your factory can create like so:
RegisterEngine<BigEngine>();
Now we simply need a way of associating user input with the right type.
If we have some sort of enumeration, then we might want to map the enum values to their corresponding types. There are many ways to achieve this: either with a dictionary, in a similar way to what we have done already, but this time with an enum as the key and a type as the value, or by decorating the enum values with their corresponding type as demonstrated here (if you have a very large number of values, this possibility could be interesting).
But, we can skip all this and just take a shortcut and associate the enumeration with the factory function directly.
So we would make our Dictionary look like this:
Dictionary<MyEngineEnumeration,Func<Engine>> _knownEngines;
You would register your engines
public void RegisterEngine<TEngineType>(MyEngineEnumeration key) where TEngineType : Engine, new()
{
    _knownEngines.Add(key, () => new TEngineType());
}
like so:
RegisterEngine<BigEngine>(MyEngineEnumeration.BigEngine);
And then you would have some sort of create method on your factory class that takes your enumeration value as key:
public Engine ResolveEngine(MyEngineEnumeration key)
{
    // some extra safety checks can go here
    return _knownEngines[key];
}
So your code would set:
car.Engine = engineFactory.ResolveEngine((MyEngineEnumeration)ddlEngine.SelectedValue);
You could follow the same pattern with wheels and so on.
Depending on your requirements, following a registration/resolution approach would allow you to configure your available engines externally in an XML file or a database, and to make more engines available without modifying released code, simply by deploying a new assembly - an interesting prospect.
Good luck!
You could use something like this:
Define a class representing an option within a set of options, i.e. a TireType class, a BodyType class.
Create an instance of the class for each option, getting the data from a store. Fill a collection, i.e. TireTypeCollection.
Use the collection to fill any control that you show the user for selecting options; this way, what the user picks is actually the option class itself.
Use the selected objects to build the car.
If any functionality requires changes in behavior, you could use lambdas to represent that functionality and serialize the representation of the code to save it to the store; or you could use delegates, creating a method for each piece of functionality, selecting the correct method, and saving it into a delegate on object creation.
What I would consider important in this approach is that any option presented to the user is fully functional, not only a list of names or IDs. A minimal sketch follows.
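A minimal sketch of that idea (all names invented; assumes a WinForms-style list control called ddlTireType, a Tire base class, and some LoadTireTypesFromStore helper for however you load the data):
public class TireType
{
    public int Id { get; set; }
    public string Name { get; set; }
    // Behavior travels with the option, e.g. as a factory delegate:
    public Func<Tire> CreateTire { get; set; }
}

// Fill from your store and bind, so the user picks a fully functional object:
List<TireType> tireTypes = LoadTireTypesFromStore();
ddlTireType.DataSource = tireTypes;
ddlTireType.DisplayMember = "Name";

// Later, when assembling the car:
var selected = (TireType)ddlTireType.SelectedItem;
Tire tire = selected.CreateTire();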
You can try the policy class technique in C++.
http://beta.boost.org/community/generic_programming.html#policy
Are you simply asking if you can create an instance of a class based on a string (or maybe even a Type object)?
You can use Activator.CreateInstance for that.
Type wheelType = Type.GetType("Namespace.WheelType");
Wheel w = Activator.CreateInstance(wheelType) as Wheel;
You'd probably want to add some checking around the classes that you wind up creating, but that's another story.
