I have a cache class that is registered as a single instance with Autofac. Whenever I clear the cache I call the method ExecuteCacheCleared();
The cache class looks like this
public class CacheService : ICacheService
{
    private readonly IEnumerable<ICacheCleared> _cacheCleared;

    public CacheService(IEnumerable<ICacheCleared> cacheCleared)
    {
        _cacheCleared = cacheCleared;
    }

    private void ExecuteCacheCleared()
    {
        if (_cacheCleared != null)
        {
            foreach (var cacheCleared in _cacheCleared)
            {
                cacheCleared.EntityCacheChanged();
            }
        }
    }
}
I then have several concrete implementations of ICacheCleared that are called when ExecuteCacheCleared is called.
So currently I am registering the bits in Autofac as follows:
builder.RegisterType<CacheService>().As<ICacheService>().SingleInstance();
builder.RegisterType<CacheCleared>().As<ICacheCleared>().InstancePerRequest();
With the above, I get an error (which I understand, since a SingleInstance component cannot depend on an InstancePerRequest one), but in my CacheCleared concrete class I also inject other dependencies that need to be InstancePerRequest.
Hopefully, you can see what I am trying to achieve (basically trying to notify subscribing classes of changes) but I'm stuck on how to achieve this.
What you are trying to do will not be possible by taking a dependency on IEnumerable<>. That relationship type is intended for supporting multiple implementations of a service, not for resolving all instances.
I don't know of any way to do what you're trying to do OOTB. It's a fairly esoteric and difficult scenario. The best way I can think of is to invert the control: require your ICacheCleared implementations to take a dependency on ICacheService. In turn, give ICacheService a method like AddNotifier(ICacheCleared) which could be called from the constructor (if it's cheap--I highly suggest making it cheap) or on first use (if it's expensive--e.g., returns a Task).
AddNotifier() would add each to a collection which would then be called from ExecuteCacheCleared().
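A minimal sketch of that inversion (AddNotifier is a name I'm proposing here, not an existing API):

public interface ICacheService
{
    void AddNotifier(ICacheCleared notifier);
    // ... the existing cache members ...
}

public class CacheCleared : ICacheCleared
{
    public CacheCleared(ICacheService cacheService)
    {
        // Registration is cheap, so doing it in the constructor is fine.
        cacheService.AddNotifier(this);
    }

    public void EntityCacheChanged()
    {
        // React to the cache being cleared.
    }
}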
This is going to be pretty complex, because you're going to have to be VERY careful with this code in a multi-threaded environment--which you surely are in, since you're using InstancePerRequest(). Also, you're going to have to figure out a way of removing all those instances when the request has completed (use a collection of WeakReferences? Make ICacheCleared implement IDisposable and call ICacheService.Remove() when disposed?). And there are probably many other issues I haven't considered.
The complexity of this is high enough that you may want to take a moment and reconsider the design itself before you head down the rabbit-hole.
Good luck!
It's not entirely clear what the intention of your design is. I'm using Autofac and have caches which are instanced per request and others which are single-instanced. Generally requests should be short-lived; anything cached per request should be a snapshot at the time of the request.
Data cached per request is generally there to avoid reloading or recalculating data within the one request. It's dubious that you have a requirement to clear a cache instanced per request, and even more dubious that you want to clear these caches across all current requests; I don't understand how or why you would need such functionality.
Anyhow, I think the answer could be very simple: either ICacheService also needs to be InstancePerRequest, or ICacheCleared needs to be single-instanced.
Alternatively, you could break up the design of your caches so that the caches instanced per request are separated from those which are single-instanced, i.e. have an ICacheService which has a collection of ICacheCleared objects and an IRequestCacheService which has a collection of IRequestCacheCleared objects.
You can inject single-instanced objects into objects instanced per request, but not vice versa. So, if you like, the IRequestCacheService could also be injected with the ICacheService and call ExecuteCacheCleared on the ICacheService when ExecuteCacheCleared is called on the IRequestCacheService.
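A rough sketch of that split (the Request* names are assumed, and it presumes ExecuteCacheCleared is exposed on the interfaces):

builder.RegisterType<CacheService>().As<ICacheService>().SingleInstance();
builder.RegisterType<CacheCleared>().As<ICacheCleared>().SingleInstance();
builder.RegisterType<RequestCacheService>().As<IRequestCacheService>().InstancePerRequest();
builder.RegisterType<RequestCacheCleared>().As<IRequestCacheCleared>().InstancePerRequest();

public class RequestCacheService : IRequestCacheService
{
    private readonly IEnumerable<IRequestCacheCleared> _cacheCleared;
    private readonly ICacheService _cacheService; // single-instanced, safe to inject here

    public RequestCacheService(
        IEnumerable<IRequestCacheCleared> cacheCleared,
        ICacheService cacheService)
    {
        _cacheCleared = cacheCleared;
        _cacheService = cacheService;
    }

    public void ExecuteCacheCleared()
    {
        foreach (var cacheCleared in _cacheCleared)
        {
            cacheCleared.EntityCacheChanged();
        }
        _cacheService.ExecuteCacheCleared(); // cascade to the single-instance side
    }
}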
I have the following Unity-related question. The code stub below sets up the basic scenario and the question is at the bottom.
NOTE that the [Dependency] attribute does not work for the example below and results in a StackOverflowException, but constructor injection does work.
NOTE(2): Some of the comments below started to assign "labels", like code smell, bad design, etc... So, to avoid confusion, here is the business setup first, without any design.
The question seems to cause a severe controversy even among some of the best-known C# gurus. In fact, the question is far beyond C# and it falls more into pure computer science. The question is based on the well-known "battle" between a service locator pattern and pure dependency injection pattern: https://martinfowler.com/articles/injection.html vs http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/ and a subsequent update to remedy the situation when the dependency injection becomes too complicated: http://blog.ploeh.dk/2010/02/02/RefactoringtoAggregateServices/
Here is the situation, which does not fit nicely into what is described in the last two but seems to fit perfectly into the first one.
I have a large (50+) collection of what I called micro services. If you have a better name, please "apply" it when reading. Each of them operates on a single object, let's call it quote. However, a tuple (context + quote) seems more appropriate. Quote is a business object, which gets processed and serialized into a database and context is some supporting information, which is necessary while quote is being processed, but is not saved into the database. Some of that supporting information may actually come from database or from some third-party services. This is irrelevant. Assembly line comes to mind as a real-world example: an assembly worker (micro service) receives some input (instruction (context) + parts (quote)), processes it (does something with parts according to instruction and / or modifies instruction) and passes it further if successful OR discards it (raises exception) in case of issues. The micro services eventually get bundled up into a small number (about 5) of high-level services. This approach linearizes processing of some very complex business object and allows testing each micro service separately from all others: just give it an input state and test that it produces expected output.
Here is where it gets interesting. Because of the number of steps involved, high-level services start to depend on many micro services: 10+ and more. This dependency is natural, and it just reflects the complexity of the underlying business object. On top of that micro services can be added / removed nearly on a constant basis: basically, they are some business rules, which are almost as fluid as water.
That severely clashes with Mark's recommendation above: if I have 10+ effectively independent rules applied to a quote in some high-level service, then, according to the third blog, I should aggregate them into some logical groups of, let's say no more than 3-4 instead of injecting all 10+ via constructor. But there are no logical groups! While some of the rules are loosely dependent, most of them are not and so artificially bundling them together will do more harm than good.
Throw in that the rules change frequently, and it becomes a maintenance nightmare: all real / mocked calls must be updated every time the rules change.
And I have not even mentioned that the rules are US state dependent and so, in theory, there are about 50 collections of rules with one collection per each state and per each workflow. And while some of the rules are shared among all states (like "save quote to the database"), there are a lot of state specific rules.
Here is a very simplified example.
Quote - business object, which gets saved into database.
public class Quote
{
public string SomeQuoteData { get; set; }
// ...
}
Micro services. Each of them performs some small update(s) to the quote. Higher-level services can also be built from lower-level micro services.
public interface IService_1
{
Quote DoSomething_1(Quote quote);
}
// ...
public interface IService_N
{
Quote DoSomething_N(Quote quote);
}
All quote processors implement this interface.
public interface IQuoteProcessor
{
List<Func<Quote, Quote>> QuotePipeline { get; }
Quote ProcessQuote(Quote quote = null);
}
// Low level quote processor. It does all workflow related work.
public abstract class QuoteProcessor : IQuoteProcessor
{
public abstract List<Func<Quote, Quote>> QuotePipeline { get; }
public Quote ProcessQuote(Quote quote = null)
{
// Apply each step from the pipeline to the quote in order (requires using System.Linq).
return QuotePipeline.Aggregate(quote, (q, step) => step(q));
}
}
One of the high-level "workflow" services.
public interface IQuoteCreateService
{
Quote CreateQuote(Quote quote = null);
}
and its actual implementation, where we use many of the low-level micro services.
public class QuoteCreateService : QuoteProcessor, IQuoteCreateService
{
protected IService_1 Service_1;
// ...
protected IService_N Service_N;
public override List<Func<Quote, Quote>> QuotePipeline =>
new List<Func<Quote, Quote>>
{
Service_1.DoSomething_1,
// ...
Service_N.DoSomething_N
};
public Quote CreateQuote(Quote quote = null) =>
ProcessQuote(quote);
}
There are two main ways to achieve DI:
The standard approach is to inject all dependencies through the constructor:
public QuoteCreateService(
IService_1 service_1,
// ...
IService_N service_N
)
{
Service_1 = service_1;
// ...
Service_N = service_N;
}
And then register all types with Unity:
public static class UnityHelper
{
public static void RegisterTypes(this IUnityContainer container)
{
container.RegisterType<IService_1, Service_1>(
new ContainerControlledLifetimeManager());
// ...
container.RegisterType<IService_N, Service_N>(
new ContainerControlledLifetimeManager());
container.RegisterType<IQuoteCreateService, QuoteCreateService>(
new ContainerControlledLifetimeManager());
}
}
Then Unity will do its "magic" and resolve all services at run time. The problem is that we currently have about 30 such micro services, and the number is expected to increase. Consequently, some of the constructors already take 10+ injected services, which is inconvenient to maintain, mock, etc...
Sure, it is possible to use the idea from here: http://blog.ploeh.dk/2010/02/02/RefactoringtoAggregateServices/ However, the micro services are not really related to each other, so bundling them together is an artificial process without any justification. In addition, it would defeat the purpose of keeping the whole workflow linear and independent (a micro service takes the current "state", performs some action on the quote, and then just moves on). None of them cares about any other micro service before or after it.
An alternative idea is to create a single "service repository":
public interface IServiceRepository
{
IService_1 Service_1 { get; set; }
// ...
IService_N Service_N { get; set; }
IQuoteCreateService QuoteCreateService { get; set; }
}
public class ServiceRepository : IServiceRepository
{
protected IUnityContainer Container { get; }
public ServiceRepository(IUnityContainer container)
{
Container = container;
}
private IService_1 _service_1;
public IService_1 Service_1
{
get => _service_1 ?? (_service_1 = Container.Resolve<IService_1>());
set => _service_1 = value;
}
// ...
}
Then register it with Unity and change the constructor of all relevant services to something like this:
public QuoteCreateService(IServiceRepository repo)
{
Service_1 = repo.Service_1;
// ...
Service_N = repo.Service_N;
}
The benefits of this approach (in comparison to the previous one) are as follows:
All micro services and higher-level services can be created in a unified form: new micro services can easily be added / removed without the need to fix the constructor calls of the services and all the unit tests. Consequently, maintenance effort and complexity decrease.
Due to the interface IServiceRepository, it is easy to create an automated unit test which iterates over all the properties and validates that all services can be instantiated, which means that there will be no nasty run-time surprises.
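A minimal sketch of such a test (assuming NUnit and the UnityHelper.RegisterTypes extension above):

[Test]
public void AllServicesCanBeResolved()
{
    var container = new UnityContainer();
    container.RegisterTypes(); // the extension method from UnityHelper above

    var repo = new ServiceRepository(container);
    foreach (var property in typeof(IServiceRepository).GetProperties())
    {
        Assert.IsNotNull(property.GetValue(repo), $"{property.Name} could not be resolved.");
    }
}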
The problem with this approach is that it starts looking a lot like a service locator, which some people consider an anti-pattern: http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/ and then people start to argue that all dependencies must be made explicit and not hidden as in ServiceRepository.
What shall I do with that?
I would just create one interface:
public interface IDoSomethingAble
{
Quote DoSomething(Quote quote);
}
And an aggregate:
public interface IDoSomethingAggregate : IDoSomethingAble {}
public class DoSomethingAggregate : IDoSomethingAggregate
{
private readonly IEnumerable<IDoSomethingAble> _somethingAbles;
public DoSomethingAggregate(IEnumerable<IDoSomethingAble> somethingAbles)
{
_somethingAbles = somethingAbles;
}
public Quote DoSomething(Quote quote)
{
foreach(var somethingAble in _somethingAbles)
{
somethingAble.DoSomething(quote);
}
return quote;
}
}
Note: Dependency injection doesn't mean you need to use it everywhere.
I would go for a factory:
public class DoSomethingAggregateFactory
{
public IDoSomethingAggregate Create()
{
return new DoSomethingAggregate(GetItems());
}
private IEnumerable<IDoSomethingAble> GetItems()
{
yield return new Service1();
yield return new Service2();
yield return new Service3();
yield return new Service4();
yield return new Service5();
}
}
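Usage is then just a matter of asking the factory for the composed aggregate:

var aggregate = new DoSomethingAggregateFactory().Create();
var processedQuote = aggregate.DoSomething(new Quote());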
Everything that is not constructor-injected hides its dependencies.
As a last resort, you could also create some DTO object and inject every needed service via the constructor (but only once).
This way you can request the ProcessorServiceScope and have all services available without writing the constructor logic in every class:
public class ProcessorServiceScope
{
public Service1 Service1 { get; }
public ServiceN ServiceN { get; }
public ProcessorServiceScope(Service1 service1, ServiceN serviceN)
{
Service1 = service1;
ServiceN = serviceN;
}
}
public class Processor1
{
public Processor1(ProcessorServiceScope serviceScope)
{
//...
}
}
public class ProcessorN
{
public ProcessorN(ProcessorServiceScope serviceScope)
{
//...
}
}
It seems like a ServiceLocator, but it does not hide the dependencies, so I think this is kind of OK.
Consider the various interface methods listed:
Quote DoSomething_1(Quote quote);
Quote DoSomething_N(Quote quote);
Quote ProcessQuote(Quote quote = null)
Quote CreateQuote(Quote quote = null);
Apart from the names, they're all identical. Why make things so complicated? Considering the Reused Abstractions Principle, I'd argue that it'd be better if you had fewer abstractions, and more implementations.
So instead, introduce a single abstraction:
public interface IQuoteProcessor
{
Quote ProcessQuote(Quote quote);
}
This is a nice abstraction because it's an endomorphism over Quote, which we know is composable. You can always create a Composite of an endomorphism:
public class CompositeQuoteProcessor : IQuoteProcessor
{
private readonly IReadOnlyCollection<IQuoteProcessor> processors;
public CompositeQuoteProcessor(params IQuoteProcessor[] processors)
{
this.processors = processors ?? throw new ArgumentNullException(nameof(processors));
}
public Quote ProcessQuote(Quote quote)
{
var q = quote;
foreach (var p in processors)
q = p.ProcessQuote(q);
return q;
}
}
At this point, you're essentially done, I should think. You can now compose various services (those called microservices in the OP). Here's a simple example:
var processor = new CompositeQuoteProcessor(new Service1(), new Service2());
Such composition should go in the application's Composition Root.
The various services can have dependencies of their own:
var processor =
new CompositeQuoteProcessor(
new Service3(
new Foo()),
new Service4());
You can even nest the Composites, if that's useful:
var processor =
new CompositeQuoteProcessor(
new CompositeQuoteProcessor(
new Service1(),
new Service2()),
new CompositeQuoteProcessor(
new Service3(
new Foo()),
new Service4()));
This nicely addresses the Constructor Over-injection code smell, because the CompositeQuoteProcessor class only has a single dependency. Since that single dependency is a collection, however, you can compose arbitrarily many other processors.
In this answer, I completely ignore Unity. Dependency Injection is a question of software design. If a DI Container can't easily compose a good design, you'd be better off with Pure DI, which I've implied here.
If you must use Unity, you can always create concrete classes that derive from CompositeQuoteProcessor and take Concrete Dependencies:
public class SomeQuoteProcessor1 : CompositeQuoteProcessor
{
public SomeQuoteProcessor1(Service1 service1, Service3 service3) :
base(service1, service3)
{
}
}
Unity should be able to auto-wire that class, then...
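A sketch of the corresponding registrations, following the UnityHelper style from the question (the concrete types and lifetimes are assumed):

container.RegisterType<Foo>(new ContainerControlledLifetimeManager());
container.RegisterType<Service1>(new ContainerControlledLifetimeManager());
container.RegisterType<Service3>(new ContainerControlledLifetimeManager());
container.RegisterType<IQuoteProcessor, SomeQuoteProcessor1>(
    new ContainerControlledLifetimeManager());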
Unity supports property injection. Instead of passing all those values into the constructor, just expose public setters marked with the [Dependency] attribute. This allows you to add as many injected services as you need without having to update the constructor.
public class QuoteCreateService : QuoteProcessor, IQuoteCreateService
{
[Dependency]
public IService_1 Service_1 { get; set; }
// ...
[Dependency]
public IService_N Service_N { get; set; }
public override List<Func<Quote, Quote>> QuotePipeline =>
new List<Func<Quote, Quote>>
{
Service_1.DoSomething_1,
// ...
Service_N.DoSomething_N
};
public Quote CreateQuote(Quote quote = null) =>
ProcessQuote(quote);
}
I never thought that I would answer my own question, though a substantial part of the credit should go to https://softwareengineering.stackexchange.com/users/115084/john-wu - he was the one who set my mind in the proper direction.
Nevertheless, nearly two years have passed since I asked the question, and while I built the solution shortly after asking it (thanks to everyone who replied), it took more than a year for most of the developers in the company I work for to actually understand how it works and what it does (and yes, they are all well above average developers, and yes, the code is written in pure C# with no external libraries). So I think it could be important for others who might have similar business scenarios.
As mentioned in the question, the root of our problem is that the parameter space we are dealing with is too large. We have about 6-8 values of what we call a workflow (call it W), about 30-40 values of what we call a state config (call it S) – a combination of US state code and two other parameters, though not all triples are possible (what exactly a state config contains is irrelevant) – and about 30-50 values of what we call a risk rule (call it R); that value depends on the product, but this is also irrelevant, as different products are treated differently.
So the total dimension of the parameter space is N = W * S * R, which is around 10K (and I am not much concerned about the precise value). This means that when the code runs, we need approximately the following: for each workflow (only one runs at a time, but all of them run at some point) and each state config (again, only one at a time, but any of them could run) we need to evaluate all risk rules relevant to that workflow and that state config.
Well, if the dimension of the parameter space is around some N, then the number of tests needed to cover the whole space is at least on the order of that N. And this is exactly what the legacy code and tests were trying to do, and what resulted in the question.
The answer turned out to lie in pure math rather than pure computer science, and it is based on what are called separable spaces: https://en.wikipedia.org/wiki/Separable_space and what in group theory terms is called an irreducible representation: https://en.wikipedia.org/wiki/Irreducible_representation . Though I have to admit the latter was more of an inspiration than an actual application of group theory.
If I've already lost you, that's fine. Just, please, read the math mentioned above before proceeding further.
Separability here means that we can choose the space N so that the subspaces W, S, and R become independent (separable). To the best of my understanding, this can always be done for the finite spaces we deal with in CS.
It means we can describe the N space as, e.g., S lists (or sets) of rules, where each rule carries the set of workflows in which it applies. And yes, if we have some bad rule that originally had to be applied in some weird combination of workflows and state configs, we can split it into more than one rule, which then preserves separability.
This, of course, can be generalized, but I will skip the details as they are irrelevant.
At this point, someone may wonder what the point is. Well, if we can split the N-dimensional space (N is about 10K in our case) into independent subspaces, then the magic happens: instead of writing on the order of N = W * S * R tests to cover the whole parameter space, we only need on the order of W + S + R tests. In our case the difference is about 100X.
But that's still not all. Since we can describe the subspaces as sets or lists (depending on the needs), this naturally brings us to the notion of useless tests.
Wait, did I just say useless tests? Yes, I did. Let me explain. A typical TDD paradigm is that if the code failed, the first thing to do is create a test that would have caught that bug. But if the code is described by a static (hard-coded) list or set, and the test is an identity transformation of that list/set, then such a test is useless: it would merely repeat the original list/set.
The state configs form a historical pattern. Say we had some set of rules for the state of CA some time in 2018. That set might change slightly into some other set in 2019 and into yet another set in 2020. The changes are small: a collection might pick up or lose a few rules, and/or a rule might be tweaked a little, e.g. if we compare some value against a threshold, the value of that threshold might change at some point for some state config. Once a rule or a collection of rules changes, it stays as it is until it changes again. Meanwhile some other rules could change, and every such change requires introducing what we call a state config. So for each US state we have an ordered collection (list) of these state configs, and for each state config we have a collection of rules. Most of the rules don't change, but some of them sporadically do, as described.
A natural IoC approach is to register each rule collection and each rule for each state config with the IoC container (e.g. Unity), using a combination of the unique "name" of the state config and the name of the rule / collection (we actually run more than one collection of rules during a workflow), whereas each rule already carries the collection of workflows in which it is applicable. Then, when the code runs for a given state config and a given workflow, we can pull the collection out of Unity. The collection contains the names of the rules to run. Combining the rule name with the state config name, we pull the actual rules out of Unity, filter the collection to leave only the rules applicable to the given workflow, and then apply all the rules.
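A minimal sketch of that resolution step (IRule, IRuleCollection, and the naming scheme are illustrative assumptions, not our actual API):

// Pull the collection registered for this state config, then the rules it names.
// Requires using System.Linq.
var collection = container.Resolve<IRuleCollection>($"{stateConfig}:{collectionName}");
var applicableRules = collection.RuleNames
    .Select(ruleName => container.Resolve<IRule>($"{stateConfig}:{ruleName}"))
    .Where(rule => rule.Workflows.Contains(workflow));

foreach (var rule in applicableRules)
{
    quote = rule.Apply(quote);
}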
What happens here is that the rule names / collection names form some closed sets, and they benefit greatly from being described that way. We obviously don't want to register each rule / collection for each state config by hand, as that would be tedious and error-prone. So we use what we call "normalizers". Let's say we have a general rule - a rule that is the same for all state configs. Then we register it by name only, and the normalizer "automatically" registers it for all state configs. The same goes for the historic versioning: once we register a rule / collection with Unity by rule / collection name + state config, the normalizer fills in the gap until we change the rule at some later state config.
As a result, each rule becomes extremely simple. Most of them have either zero or one injected constructor parameter, a few have two, and I know of only one rule that has three. As the rules are independent and very simple, the tests for the rules become very simple as well.
We do have some ideas to make the core of whatever I wrote above open source, provided that it could bring some value to the community...
Right now I am studying the common design patterns, and for the most part I understand the purpose of the decorator pattern. But what I don't get is: what is the purpose of wrapping an existing object in a decorator class?
Consider this scenario: since Progress<T> is part of the observer pattern, I want to limit the rate of updates to its subscribers to prevent the UI thread from locking up.
So I have decorated the class to only update once every 50 milliseconds.
public class ProgressThrottle<T> : Progress<T>
{
private DateTime _time = DateTime.Now;
public ProgressThrottle(Action<T> handler) : base(handler)
{
}
protected override void OnReport(T value)
{
// Only forward the report if at least 50 ms have passed since the last one.
if (_time.AddMilliseconds(50) <= DateTime.Now)
{
base.OnReport(value);
_time = DateTime.Now;
}
}
}
public class ProgressThrottle2<T> : IProgress<T>
{
private DateTime _time = DateTime.Now;
private readonly IProgress<T> _wrapper;
public ProgressThrottle2(IProgress<T> wrapper)
{
_wrapper = wrapper;
}
public void Report(T value)
{
// Only forward the report if at least 50 ms have passed since the last one.
if (_time.AddMilliseconds(50) <= DateTime.Now)
{
_wrapper.Report(value);
_time = DateTime.Now;
}
}
}
Both classes accomplish the same thing, except that I find the first version better because it allows me to use the base constructor to set the delegate for progress updates. The base class already supports overriding the method, so why do I need to wrap the object?
Are both classes example of the decorator pattern? I would much rather use the first option but I rarely see examples in that manner.
Imagine you have n different implementations of the IProgress<T> interface.
For the sake of this example, let's consider two implementations:
EndpointProgress<T>, this would poll an endpoint and Report every time the response is different.
QueryProgress<T>, this would execute a database query periodically and Report every time the result is different.
In order to throttle both of these implementations using your first approach, you'd have to create two implementations of your ProgressThrottle<T>, one inheriting from EndpointProgress<T>, and another one inheriting from QueryProgress<T>.
In order to throttle both of these implementations using the second approach you'd just have to use a wrapped instance of EndpointProgress<T> and QueryProgress<T>.
var throttledEndpointProgress = new ProgressThrottle2<int>(new EndpointProgress<int>());
var throttledQueryProgress = new ProgressThrottle2<int>(new QueryProgress<int>());
Edit:
So in a scenario where I am certain I will not extend a class more than once to add functionality, is it acceptable to not use a wrapper?
I would still use the second implementation of the decorator (I'm not even sure the first implementation would be considered the decorator pattern) for several reasons:
SOLID's open/closed principle states that:
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
If you have to modify your current Progress implementation in order to extend it, you are violating Open/Closed.
Having ProgressThrottle inherit from Progress means that every time Progress' constructor changes, ProgressThrottle also needs its constructor changed.
By using wrapper decorators, you're able to compose and combine decorators. Let's consider an implementation of IProgress<T> that logs every report. You could, based on configuration, environment, etc., compose these decorators in different ways to achieve different goals:
var progress1 = new LoggingProgress<int>(
new ProgressThrottle2<int>(new Progress<int>())
);
var progress2 = new ProgressThrottle2<int>(
new LoggingProgress<int>(new Progress<int>())
);
Here, progress2 will only log the throttled reported progress. progress1 will log all the reported progress, but will pass it on in a throttled manner. Depending on what your objectives are, you might want one implementation or the other; or you might want both of them, one for diagnostics in staging and another one for prod. But the most important thing is that you don't have to change the implementation of your decorators in order to change this behavior.
Long time listener - first time caller. I am hoping to get some advice. I have been reading about caching in .NET - both with System.Web.Caching and System.Runtime.Caching. I am wondering what additional benefits I can get vs. simply creating a static variable with locking. My current (simple-minded) caching method is like this:
public class Cache
{
private static List<Category> _allCategories;
private static readonly object _lockObject = new object();
public static List<Category> AllCategories
{
get
{
lock (_lockObject)
{
if (_allCategories == null)
{
_allCategories = //DB CALL TO POPULATE
}
}
return _allCategories;
}
}
}
Other than expiration (and I wouldn't want this to expire), I am at a loss to see what the benefits of the built-in caching are.
Maybe there are benefits for more complex caching scenarios that don't apply to me - or maybe I am just missing something (would not be the first time).
So, what is the advantage of using the cache if I want a cache that never expires? Don't static variables do this?
First of all, Xaqron makes a good point that what you're talking about probably doesn't qualify as caching. It's really just a lazily-loaded globally-accessible variable. That's fine: as a practical programmer, there's no point bending over backward to implement full-on caching where it's not really beneficial. If you're going to use this approach, though, you might as well be Lazy and let .NET 4 do the heavy lifting:
private static Lazy<IEnumerable<Category>> _allCategories
= new Lazy<IEnumerable<Category>>(() => /* Db call to populate */);
public static IEnumerable<Category> AllCategories
{
get { return _allCategories.Value; }
}
I took the liberty of changing the type to IEnumerable<Category> to prevent callers from thinking they can add to this list.
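Note that Lazy<T> uses thread-safe initialization by default (LazyThreadSafetyMode.ExecutionAndPublication), so this preserves the locking semantics of your original code.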
That said, any time you're accessing a public static member, you're missing out on a lot of flexibility that Object-Oriented Programming has to offer. I'd personally recommend that you:
Rename the class to CategoryRepository (or something like that),
Make this class implement an ICategoryRepository interface, with a GetAllCategories() method on the interface, and
Have this interface be constructor-injected into any classes that need it.
This approach will make it possible for you to unit test classes that are supposed to do things with all the categories, with full control over which "categories" are tested, and without the need for a database call.
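A minimal sketch of that refactoring (CategoryMenuBuilder is just an illustrative consumer):

public interface ICategoryRepository
{
    IEnumerable<Category> GetAllCategories();
}

public class CategoryRepository : ICategoryRepository
{
    private static readonly Lazy<IEnumerable<Category>> _allCategories =
        new Lazy<IEnumerable<Category>>(LoadFromDb);

    public IEnumerable<Category> GetAllCategories() => _allCategories.Value;

    private static IEnumerable<Category> LoadFromDb()
    {
        // DB call to populate, as in the original property.
        throw new NotImplementedException();
    }
}

// A consumer declares its dependency explicitly and can be tested with a fake:
public class CategoryMenuBuilder
{
    private readonly ICategoryRepository _categories;

    public CategoryMenuBuilder(ICategoryRepository categories)
    {
        _categories = categories;
    }
}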
System.Runtime.Caching and System.Web.Caching have automatic expiration control that can be based on file changes or SQL Server DB changes, and you can implement your own change provider. You can even create dependencies between cache entries.
See this link for more about the System.Runtime.Caching namespace:
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
All of the features I have mentioned are documented in the link.
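For instance, a minimal sketch of file-change-based expiration (the file path and LoadAllCategories are placeholders):

// Requires the System.Runtime.Caching assembly and namespace.
var cache = MemoryCache.Default;
var policy = new CacheItemPolicy();
// Evict the entry automatically whenever the backing file changes.
policy.ChangeMonitors.Add(
    new HostFileChangeMonitor(new List<string> { @"C:\data\categories.xml" }));
cache.Set("AllCategories", LoadAllCategories(), policy);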
Using static variables would require manual control of expiration, and you would have to build yourself the file system watchers and change monitors that the caching namespace already provides.
Other than expiration (and I wouldn't want this to expire) ...
If you don't care about the lifetime, then the name isn't really "caching" anymore.
Still, using the Application object (your case) and the Session object are best practices for storing application-level and session-level data.
So I have an order manager class that looks like:
public class OrderManager
{
private IDBFactory _dbFactory;
private Order _order;
public OrderManager(IDBFactory dbFactory)
{
_dbFactory = dbFactory;
}
public void Calculate()
{
// Compute _order.SubTotal, _order.ShippingTotal,
// _order.TaxTotal and _order.GrandTotal here.
}
}
Now, the point here is to have a flexible/testable design.
I am very concerned about being able to write solid unit tests around this Calculate method.
Considerations:
1. Shipping has to be abstracted out and loosely coupled, since the implementation of shipping could vary depending on USPS, UPS, FedEx, etc. (they have their own APIs).
2. The same goes for calculating tax.
Should I just create Tax and Shipping manager classes, and have a tax/shipping factory in the constructor (exactly how I have designed my OrderManager class)?
(The only thing I can think of, in terms of what I am "missing", is IoC, but I don't mind that and don't need that extra level of abstraction in my view.)
Well, you are already moving towards dependency injection in your approach, so why not go the whole hog and use some sort of IoC container to handle this for you?
Yes, if you want it abstracted out, then create a separate class for it. If you want to truly unit test what is left, abstract out an interface and use mock testing. The problem is, the more you abstract out like this, the more plumbing there is to do, and the more you will find yourself wishing you were using an IoC framework of some kind.
You are suggesting constructor injection, which is a common approach. You will also come across property injection (parameterless constructor, set properties instead). And there are also frameworks that ask you to implement an initialization interface of some kind, which allows the IoC framework to do the initialization for you in a method call. Use whatever you feel most comfortable with.
I do think an IoC container would help with the plumbing of instantiating the correct concrete classes, but you still need to get your design the way you want it. I do think you need to abstract away the shipping with an interface that you implement with a class for each of your shippers (USPS, UPS, FedEx, etc.), and you could use a factory class (ShippingManager) to hand the correct one out, or depend on the IoC container to do that for you.
public interface IShipper
{
//whatever goes into calculating shipping.....
decimal CalculateShippingCost(GeoData geo, decimal packageWeight);
}
You could also just inject IShipper and ITaxer implementations into your OrderManager, so your Calculate method simply calls into those classes; an IoC container can handle that wiring nicely.
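A rough sketch of that wiring (ITaxer, its method, and the Order members used here are assumptions for illustration):

public interface ITaxer
{
    decimal CalculateTax(GeoData geo, decimal subTotal);
}

public class OrderManager
{
    private readonly IShipper _shipper;
    private readonly ITaxer _taxer;
    private Order _order;

    public OrderManager(IShipper shipper, ITaxer taxer)
    {
        _shipper = shipper;
        _taxer = taxer;
    }

    public void Calculate()
    {
        // Delegate the parts that vary; keep only the aggregation here.
        _order.ShippingTotal = _shipper.CalculateShippingCost(_order.ShipTo, _order.Weight);
        _order.TaxTotal = _taxer.CalculateTax(_order.ShipTo, _order.SubTotal);
        _order.GrandTotal = _order.SubTotal + _order.ShippingTotal + _order.TaxTotal;
    }
}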
Just a thought:
Your Calculate() method, taking no parameters, returning nothing and acting on private fields, is not how I would do it. I would write it as a static method that takes in some numbers, an IShippingProvider and an ITaxJurisdiction, and returns a dollar total. That way you get an opportunity to cache the expensive calls to UPS and your tax tables using memoization.
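As a sketch (the interface members here are illustrative assumptions):

public static decimal CalculateGrandTotal(
    decimal subTotal,
    decimal packageWeight,
    GeoData geo,
    IShippingProvider shipping,
    ITaxJurisdiction tax)
{
    // Both providers can memoize their expensive lookups internally.
    return subTotal + shipping.GetRate(geo, packageWeight) + tax.GetTax(geo, subTotal);
}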
Could be that I'm prejudiced against public methods that work like that. They have burned me in the past trying to bind to controls, use code generators, etc.
EDIT: as for dependency injection/IOC, I don't see the need. This is what interfaces were made for. You're not going to be loading up a whole array of wacky classes, just some implementations of the same weight/zipcode combo.
That's what I would say if I were your boss.
I would take the Calculate method out into its own class. Depending on your circumstances, an OrderCalculator might need to be aware of VAT, currency, discounts, ...
Just a thought.