Should I decouple the repository interface from the domain model - c#

Let’s say I have some DDD service that requires some IEnumerable<Foo> to perform some calculations. I came up with two designs:
1. Abstract the data access with an IFooRepository interface, which is quite typical:
public class FooService
{
    private readonly IFooRepository _fooRepository;

    public FooService(IFooRepository fooRepository)
        => _fooRepository = fooRepository;

    public int Calculate()
    {
        var fooModels = _fooRepository.GetAll();
        return fooModels.Sum(f => f.Bar);
    }
}
2. Do not rely on the IFooRepository abstraction and inject IEnumerable<Foo> directly:
public class FooService
{
    private readonly IEnumerable<Foo> _foos;

    public FooService(IEnumerable<Foo> foos)
        => _foos = foos;

    public int Calculate()
        => _foos.Sum(f => f.Bar);
}
This second design seems better in my opinion as FooService now does not care where the data is coming from and Calculate becomes pure domain logic (ignoring the fact that IEnumerable may come from an impure source).
Another argument for using the second design is that when IFooRepository performs asynchronous IO over the network, usually it will be desirable to use async-await like:
public class AsyncDbFooRepository : IFooRepository
{
    public async Task<IEnumerable<Foo>> GetAll()
    {
        // Asynchronously fetch results from database
    }
}
But since async needs to be applied all the way down, FooService is now forced to change its signature to async Task<int> Calculate(). This seems to violate the dependency inversion principle.
However, there are also issues with the second design. First of all, you have to rely on the DI container (using Simple Injector as an example here) or the composition root to resolve the data access code like:
public class CompositionRoot
{
    public void ComposeDependencies()
    {
        container.Register<IFooRepository, AsyncDbFooRepository>(Lifestyle.Scoped);

        // Not sure if the syntax is right, but it demonstrates the concept
        container.Register<FooService>(async () => new FooService(await GetFoos(container)));
    }

    private async Task<IEnumerable<Foo>> GetFoos(Container container)
    {
        var fooRepository = container.GetInstance<IFooRepository>();
        return await fooRepository.GetAll();
    }
}
Also in my specific scenario, AsyncDbFooRepository requires some sort of runtime parameter to construct, and that means you need an abstract factory to construct AsyncDbFooRepository.
With the abstract factory, now I have to manage the life cycles of all dependencies under AsyncDbFooRepository (the object graph under AsyncDbFooRepository is not trivial). I have a hunch that I am using DI incorrectly if I opt for the second design.
In summary, my questions are:
Am I using DI incorrectly in my second design?
How can I compose my dependencies satisfactorily for my second design?

One aspect of async/await is that, by definition, it needs to be applied "all the way down," as you rightfully state. You can't, however, avoid Task<T> by injecting an IEnumerable<T>, as you suggest in your second option. You would have to inject a Task<IEnumerable<T>> into constructors to ensure data is retrieved asynchronously. Injecting an IEnumerable<T> means either that your thread gets blocked when the collection is enumerated, or that all data must be loaded during object graph construction.
Loading data during object graph construction, however, is problematic, for the reasons I explained here. Besides that, since we're dealing with collections of data, it means that all data must be fetched from the database on each request, even though not all of it might be required or even used. This can cause quite a performance penalty.
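To make that concrete, here is a minimal sketch (reusing Foo and Bar from the question) of what the asynchronous variant of the second design would have to look like; note that the async signature propagates into the domain service anyway:

public class FooService
{
    private readonly Task<IEnumerable<Foo>> _foos;

    public FooService(Task<IEnumerable<Foo>> foos) => _foos = foos;

    public async Task<int> Calculate()
    {
        // Awaiting here keeps the call non-blocking, but Calculate is no
        // longer a synchronous, pure-looking domain method.
        var foos = await _foos;
        return foos.Sum(f => f.Bar);
    }
}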
Am I using DI incorrectly in my second design?
That's hard to say. An IEnumerable<T> is a stream, so you could consider it a factory, which means that injecting an IEnumerable<T> does not require the runtime data to be loaded during object construction. As long as that condition is met, injecting an IEnumerable<T> could be fine, but still makes it impossible to make the system asynchronous.
However, when injecting an IEnumerable<T> you might end up with ambiguity, because it might not be very clear what it means to be injecting an IEnumerable<T>. Is that collection a stream that is lazily evaluated or not? Does it contain all elements of T? Is T runtime data or a service?
To prevent this confusion, moving the loading of this runtime information behind an abstraction is typically the best thing to do. To make your life easier, you could make the repository abstraction generic as well:
public interface IRepository<T> where T : Entity
{
    Task<IEnumerable<T>> GetAll();
}
This allows you to have one generic implementation and make one single registration for all entities in the system.
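As a rough sketch of what that could look like (the Entity Framework-style DbContext, Set<T>() and ToListAsync() usage here are assumptions, not part of the question, and Entity is assumed to be a base class), one open-generic registration then covers every entity:

public class EntityFrameworkRepository<T> : IRepository<T> where T : Entity
{
    private readonly DbContext context;

    public EntityFrameworkRepository(DbContext context) => this.context = context;

    // One implementation serves all entities; Set<T>() picks the right table.
    public async Task<IEnumerable<T>> GetAll() =>
        await this.context.Set<T>().ToListAsync();
}

// Composition root: a single open-generic registration for all IRepository<T>.
container.Register(typeof(IRepository<>), typeof(EntityFrameworkRepository<>), Lifestyle.Scoped);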
How can I compose my dependencies satisfactorily for my second design?
You can't. To be able to do this, your DI container must be able to resolve object graphs asynchronously. For instance, it requires the following API:
Task<T> GetInstanceAsync<T>()
But Simple Injector doesn't have such an API, and neither does any other existing DI container, and that's for good reason. The reason is that object construction must be simple, fast, and reliable, and you lose that when doing I/O during object graph construction.
So not only is your second design undesirable, it is impossible to achieve when data is loaded during object construction without breaking the asynchronicity of the system and causing threads to block while using a DI container.

I try as much as possible (until now I've succeeded every time) not to inject any service that does IO into my domain models, as I like to keep them pure and free of side effects.
That being said, the second solution seems better, but there is a problem with the signature of the method public int Calculate(): it uses hidden data to perform the calculation, so it is not explicit. In cases like this I like to pass the transient input data directly to the method as a parameter, like this:
public int Calculate(IEnumerable<Foo> foos)
In this way it is very clear what the method needs and what it returns (based on the combination of class name and method name).
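To sketch how that composes (this assumes the async Task<IEnumerable<Foo>> GetAll() signature from the question's AsyncDbFooRepository, and the FooCalculationHandler name is purely illustrative), the impure, asynchronous work moves to an outer application-layer class while the domain logic stays pure and synchronous:

// The domain logic stays pure and synchronous...
public class FooService
{
    public int Calculate(IEnumerable<Foo> foos) => foos.Sum(f => f.Bar);
}

// ...while an outer application-layer component does the impure, async work.
public class FooCalculationHandler
{
    private readonly IFooRepository _fooRepository;
    private readonly FooService _fooService;

    public FooCalculationHandler(IFooRepository fooRepository, FooService fooService)
    {
        _fooRepository = fooRepository;
        _fooService = fooService;
    }

    public async Task<int> Handle()
    {
        // The async boundary stops here; the domain service never sees it.
        var foos = await _fooRepository.GetAll();
        return _fooService.Calculate(foos);
    }
}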

Related

Simple injector - create a generic decorator for EF Core caching

I'm trying to implement caching for EF Core in my .NET Core project using Simple Injector as my DI. I'm using the CQRS pattern so I have a bunch of queries I'd like to cache (not all).
I have created a generic interface for a cached query, which takes the return type of the query and the query arguments:
public interface ICachedQuery<T, P>
{
    T Execute(P args);
    string CacheStringKey { get; set; }
}
And here is one of my queries:
public class GetAssetsForUserQuery : ICachedQuery<Task<List<Asset>>, User>
{
    readonly IDataContext dataContext;

    public string CacheStringKey { get; set; }

    public GetAssetsForUserQuery(IDataContext dataContext)
    {
        CacheStringKey = "GetAssetsForUserQuery";
        this.dataContext = dataContext;
    }

    public async Task<List<Asset>> Execute(User user)
    {
        var allAssets = dataContext.Assets.ToList();
        return allAssets;
    }
}
My decorator is not too relevant in this case, but here is its signature:
public class CachedCachedQueryDecorator<T, P> : ICachedQuery<T, P>
I register my query and decorator in Startup.cs like so:
Container.RegisterDecorator(typeof(ICachedQuery<,>), typeof(CachedCachedQueryDecorator<,>));
Container.Register<GetAssetsForUserQuery>();
And I inject my GetAssetsForUserQuery like so:
readonly GetAssetsForUserQuery getAssetsForUserQuery;

public GetTagsForUserQuery(GetAssetsForUserQuery getAssetsForUserQuery)
{
    this.getAssetsForUserQuery = getAssetsForUserQuery;
}
But my decorator is never hit! Now, if I register my query to the interface ICachedQuery in Startup.cs like so:
Container.Register(typeof(ICachedQuery<,>), typeof(GetAssetsForUserQuery));
And I inject ICachedQuery instead of GetAssetsForUserQuery, then my decorator is hit. But ICachedQuery is a generic so I can't have it resolve for one specific query.
I know I am doing something fundamentally wrong, any help?
But my decorator is never hit!
That's correct. To understand why this is the case, it's best to visualize the object graph that you wish to be constructed:
new GetTagsForUserQuery(
    new CachedCachedQueryDecorator<Task<List<Asset>>, User>(
        new GetAssetsForUserQuery()))
PRO TIP: For many DI-related problems, it is very useful to construct the required object graph in plain C#, as the previous code snippet shows. This presents you with a clear mental model. This not only is a useful model for yourself, it is a useful way of communicating to others what it is you are trying to achieve. This is often much harder to comprehend when just showing DI registrations.
If you try this, however, this code won't compile. It won't compile because GetTagsForUserQuery requires a GetAssetsForUserQuery in its constructor, but a CachedCachedQueryDecorator<Task<List<Asset>>, User> is not a GetAssetsForUserQuery. They are both an ICachedQuery<Task<List<Asset>>, User>, but that's not what GetTagsForUserQuery requires.
Because of this, it is technically impossible to wrap GetAssetsForUserQuery with a CachedCachedQueryDecorator and inject that decorator into GetTagsForUserQuery. And the same holds when you would be resolving GetAssetsForUserQuery directly from Simple Injector like this:
GetAssetsForUserQuery query = container.GetInstance<GetAssetsForUserQuery>();
In this case you are requesting a GetAssetsForUserQuery from the container, and this type is compile-time enforced. Also in this case it is impossible to wrap GetAssetsForUserQuery with the decorator while preserving GetAssetsForUserQuery's type.
What would work, though, is requesting the type by its abstraction:
ICachedQuery<Task<List<Asset>>, User> query =
    container.GetInstance<ICachedQuery<Task<List<Asset>>, User>>();
In this case, you are requesting an ICachedQuery<Task<List<Asset>>, User> and the container is free to return you any type, as long as it implements ICachedQuery<Task<List<Asset>>, User>.
The same holds for your GetTagsForUserQuery. Only when you let it depend on ICachedQuery<,> does it become possible to decorate its dependency. The solution, therefore, is to register GetAssetsForUserQuery by its abstraction:
Container.RegisterDecorator(
    typeof(ICachedQuery<,>),
    typeof(CachedCachedQueryDecorator<,>));

Container.Register<ICachedQuery<Task<List<Asset>>, User>, GetAssetsForUserQuery>();
Here are a few tips, though:
Whether or not your queries (I typically call them the 'handlers', but what's in a name) are cacheable is an implementation detail. You shouldn't have to define a different abstraction for cacheable queries, and consumers shouldn't have to be aware of that.
Instead of exposing a separate CacheStringKey, try using the P args as the cache key. This can be done, for instance, by serializing the args to a JSON object. This makes caching more transparent. In case the args object is very complex, the number of cache entries will be too big anyway, so you typically only want to cache results of very simple arg requests.
Whether or not to cache is rather an implementation detail that should either be incorporated in the Composition Root or be part of the query (handler) implementation. I typically do this by marking that implementation with an attribute, but an interface can work as well. You can then apply the decorator conditionally (see the sketch after these tips).
Prevent supplying full-blown entities both as input and as output for your query (handlers). Instead, use separate data-centric POCOs (like DTOs). What does it mean to send a User as input? It's much clearer when you send a GetAllUserAssets object. That GetAllUserAssets can probably just contain a UserId property. This makes it very easy to turn the object into a cacheable entry. The same holds for output objects. Entities are very hard to cache reliably; this is much easier with POCOs or DTOs, which can be serialized with much less effort and risk.
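To illustrate the attribute-based approach mentioned above (a sketch only; CacheableAttribute is an assumed marker attribute, not an existing type), the decorator can be applied conditionally:

// Hypothetical opt-in marker for query implementations whose results may be cached.
[AttributeUsage(AttributeTargets.Class)]
public sealed class CacheableAttribute : Attribute { }

// Only wrap implementations that are explicitly marked as cacheable:
Container.RegisterDecorator(
    typeof(ICachedQuery<,>),
    typeof(CachedCachedQueryDecorator<,>),
    c => c.ImplementationType.IsDefined(typeof(CacheableAttribute), inherit: true));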
I've written about CQRS-styled architectures in the past myself. See for instance this article. That article explains some of the points summed up above.

Are Func<T> parameters in constructor slowing down my IoC resolving?

I'm trying to improve the performance of my IoC container. We are using Unity and SimpleInjector and we have a class with this constructor:
public AuditFacade(
    IIocContainer container,
    Func<IAuditManager> auditManagerFactory,
    Func<ValidatorFactory> validatorCreatorFactory,
    IUserContext userContext,
    Func<ITenantManager> tenantManagerFactory,
    Func<IMonitoringComponent> monitoringComponentFactory)
    : base(container, auditManagerFactory, GlobalContext.CurrentTenant,
        validatorCreatorFactory, userContext, tenantManagerFactory)
{
    _monitoringComponent = new Lazy<IMonitoringComponent>(monitoringComponentFactory);
}
I also have another class with this constructor:
public AuditTenantComponent(Func<IAuditTenantRepository> auditTenantRepository)
{
    _auditTenantRepository = new Lazy<IAuditTenantRepository>(auditTenantRepository);
}
I'm seeing that the second one resolves in about 1 millisecond most of the time, whereas the first one takes 50-60 milliseconds on average. I'm sure the slower one is slower because it has more parameters. But how can I improve its performance? Is it the fact that we are using Func<T> parameters? What can I change if that is causing the slowness?
There is possibly a lot to improve on your current design. These improvements can be placed in five different categories, namely:
Possible abuse of base classes
Use of Service Locator anti-pattern
Use of Ambient Context anti-pattern
Leaky abstractions
Doing too much in injection constructors
Possible abuse of base classes
The general consensus is that you should prefer composition over inheritance. Inheritance is often overused and often adds more complexity compared to using composition. With inheritance, the derived class is strongly coupled to the base class implementation. I often see a base class being used as a practical utility class containing all sorts of helper methods for cross-cutting concerns and other behavior that some of the derived classes may need.
An often better approach is to remove the base class altogether and inject a service into the implementation (the AuditFacade class in your case) that exposes just the functionality the implementation needs. Or, in the case of cross-cutting concerns, don't inject that behavior at all, but wrap the implementation with a decorator that extends the class's behavior with those concerns.
In your case, I think this complication is clearly happening, since 6 out of the 7 injected dependencies are not used by the implementation but are only passed on to the base class. In other words, those 6 dependencies are implementation details of the base class, while the implementation is still forced to know about them. By abstracting (part of) that base class behavior behind a service, you can reduce the number of dependencies AuditFacade needs to two: the Func<IMonitoringComponent> and the new abstraction. The implementation behind that abstraction will have 6 constructor dependencies, but AuditFacade (and other implementations) are oblivious to that.
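As a hypothetical sketch of the decorator approach (IAuditFacade and AuditEntry are assumed names, not types from your code base), a cross-cutting concern such as monitoring can be layered on without any base class:

public interface IAuditFacade
{
    void Audit(AuditEntry entry);
}

public class MonitoringAuditFacadeDecorator : IAuditFacade
{
    private readonly IAuditFacade decoratee;

    public MonitoringAuditFacadeDecorator(IAuditFacade decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Audit(AuditEntry entry)
    {
        // The cross-cutting concern lives here, not in a base class.
        var watch = Stopwatch.StartNew();
        this.decoratee.Audit(entry);
        Trace.WriteLine(string.Format("Audit took {0} ms", watch.ElapsedMilliseconds));
    }
}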
Use of Service Locator anti-pattern
The AuditFacade depends on an IIocContainer abstraction, and this is very likely an implementation of the Service Locator pattern. Service Locator should be considered an anti-pattern because:
it hides a class's dependencies, causing run-time errors instead of compile-time errors, as well as making the code more difficult to maintain because it becomes unclear when you would be introducing a breaking change.
There are always better alternatives to injecting your container, or an abstraction over your container, into application code. Do note that at times you might want to inject the container into factory implementations, but as long as those are placed inside your Composition Root, there's no harm in that, since Service Locator is about roles, not mechanics.
Use of Ambient Context anti-pattern
The static GlobalContext.CurrentTenant property is an implementation of the Ambient Context anti-pattern. Mark Seemann and I write about this pattern in our book:
The problems with AMBIENT CONTEXT are related to the problems with SERVICE LOCATOR. The main problems are:
The DEPENDENCY is hidden.
Testing becomes more difficult.
It becomes very hard to change the DEPENDENCY based on its context. [paragraph 5.3.3]
The use in this case is really weird IMO, because you grab the current tenant from some static property from inside your constructor to pass it on to the base class. Why doesn't the base class call that property itself?
But no one should call that static property. The use of such static properties makes your code harder to read and maintain. It makes unit testing harder, and since your code base will usually be littered with calls to such statics, it becomes a hidden dependency; it has the same downsides as the use of Service Locator.
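A small sketch of the alternative (ITenantContext, Tenant, and the adapter name are assumptions): hide the ambient state behind an injected abstraction so that only one adapter still touches the static property:

public interface ITenantContext
{
    Tenant CurrentTenant { get; }
}

// The adapter is the single place left that reads the static property;
// everything else depends on ITenantContext and can be unit tested.
public class GlobalTenantContext : ITenantContext
{
    public Tenant CurrentTenant => GlobalContext.CurrentTenant;
}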
Leaky abstractions
A Leaky Abstraction is a Dependency Inversion Principle violation, where the abstraction violates the second part of the principle, namely:
B. Abstractions should not depend on details. Details should depend on abstractions.
Although Lazy<T> is not an abstraction by itself (Lazy<T> is a concrete type), it can become a leaky abstraction when used as a constructor argument. For instance, if you inject a Lazy<IMonitoringComponent> instead of an IMonitoringComponent directly (which is basically what you are doing in your code), the Lazy<IMonitoringComponent> dependency leaks implementation details: it communicates to the consumer that the used IMonitoringComponent implementation is expensive or time consuming to create. But why should the consumer care about this?
But there are more problems with this. If at some point in time the used IUserContext implementation becomes costly to create, we must start making sweeping changes throughout the application (a violation of the Open/Closed Principle), because all IUserContext dependencies need to be changed to Lazy<IUserContext> and all consumers of that IUserContext must be changed to use userContext.Value instead. And you'll have to change all your unit tests as well. And what happens if you forget to change one IUserContext reference to Lazy<IUserContext>, or when you accidentally depend on IUserContext when you create a new class? You have a bug in your code, because at that point the user context implementation is created right away, and this will cause a performance problem (which is the reason you are using Lazy<T> in the first place).
So why exactly are we making sweeping changes to our code base and polluting it with that extra layer of indirection? There is no reason for this. The fact that a dependency is costly to create is an implementation detail. You should hide it behind an abstraction. Here's an example:
public class LazyMonitoringComponentProxy : IMonitoringComponent {
    private Lazy<IMonitoringComponent> component;

    public LazyMonitoringComponentProxy(Lazy<IMonitoringComponent> component) {
        this.component = component;
    }

    void IMonitoringComponent.MonitoringMethod(string someVar) {
        this.component.Value.MonitoringMethod(someVar);
    }
}
In this example we've hidden the Lazy<IMonitoringComponent> behind a proxy class. This allows us to replace the original IMonitoringComponent implementation with this LazyMonitoringComponentProxy without having to make any change to the rest of the application. With Simple Injector, we can register this type as follows:
container.Register<IMonitoringComponent>(() => new LazyMonitoringComponentProxy(
    new Lazy<IMonitoringComponent>(() => container.GetInstance<CostlyMonitoringComp>())));
And just as Lazy<T> can be abused as a leaky abstraction, the same holds for Func<T>, especially when you're doing this for performance reasons. When DI is applied correctly, there is most of the time no need to inject factory abstractions such as Func<T> into your code.
Do note that if you are injecting Lazy<T> and Func<T> all over the place, you are complicating your code base unnecessarily.
Doing too much in injection constructors
But besides Lazy<T> and Func<T> being leaky abstractions, the fact that you need them a lot is an indication of a problem with your application, because injection constructors should be simple. If constructors take a long time to run, your constructors are doing too much. Constructor logic is often hard to test, and if such a constructor makes a call to the database or requests data from HttpContext, verification of your object graphs becomes much harder, to the point that you might skip verification altogether. Skipping verification of the object graph is a terrible thing to do, because this forces you to click through the complete application to find out whether or not your DI container is configured correctly.
I hope this gives you some ideas about improving the design of your classes.
You can hook into Simple Injector's pipeline and add profiling, which allows you to spot which types are slow to create. Here's a helper method that you can use:
public struct ProfileData {
    public readonly ExpressionBuildingEventArgs Info;
    public readonly TimeSpan Elapsed;

    public ProfileData(ExpressionBuildingEventArgs info, TimeSpan elapsed) {
        this.Info = info;
        this.Elapsed = elapsed;
    }
}

static void EnableProfiling(Container container, List<ProfileData> profileLog) {
    container.ExpressionBuilding += (s, e) => {
        Func<Func<object>, object> profilingWrapper = creator => {
            var watch = Stopwatch.StartNew();
            var instance = creator.Invoke();
            profileLog.Add(new ProfileData(e, watch.Elapsed));
            return instance;
        };

        Func<object> instanceCreator =
            Expression.Lambda<Func<object>>(e.Expression).Compile();

        e.Expression = Expression.Convert(
            Expression.Invoke(
                Expression.Constant(profilingWrapper),
                Expression.Constant(instanceCreator)),
            e.KnownImplementationType);
    };
}
And you can use this as follows:
var container = new Container();

// TODO: Your registrations here.

// Hook the profiler. Call this after all registrations.
List<ProfileData> profileLog = new List<ProfileData>(1000);
EnableProfiling(container, profileLog);

// Trigger verification to allow everything to be precompiled.
container.Verify();
profileLog.Clear();

// Resolve a type:
container.GetInstance<AuditFacade>();

// Display resolve times, slowest first.
var slowestFirst = profileLog.OrderByDescending(line => line.Elapsed);

foreach (var line in slowestFirst)
{
    Console.WriteLine("{0} ms: {1}",
        line.Elapsed.TotalMilliseconds,
        line.Info.KnownImplementationType.Name);
}
Do note that the shown times include the time it takes to resolve a type's dependencies, but this should still let you see pretty easily which type causes the delay.
There are two important things I want to note about the given code here:
This code will have a severely negative impact on the performance of resolving object graphs, and
The code is NOT thread-safe.
So don't use it in your production environment.
Everything you do has a cost associated with it. Typically, more constructor parameters that are resolved recursively take longer than fewer parameters. But you must decide if the cost is ok or too high.
In your case, will the 50 ms cause a bottleneck? Are you only creating one instance, or are you churning them out in a tight loop? Just comparing the 1 ms with the 50 ms might cause you to condemn the slower one, but if the user cannot tell that 50 ms passed and it doesn't cause a problem elsewhere in your app, why jump through hoops to make it faster when you don't know it will ever be needed?

Entity Framework new dbContext in DAL method without using() scope

I'm a little bit familiar with Entity Framework for some simple projects, but now I want to go deeper and write better code.
There are plenty of topics discussing whether or not to use static methods in the DAL. For the moment I'm more on the side of the people who think yes, we can use static methods.
But I'm still wondering whether some practices are good or not.
Lots of people do it like this:
public IList<Person> GetAll()
{
    using (var dbContext = new MyDbContext())
    {
        return dbContext.Persons.ToList();
    }
}
But I'm wondering if doing it like this is a good practice:
public static IQueryable<Person> GetAll()
{
    var dbContext = new MyDbContext();
    return dbContext.Persons;
}
The goal is to use only static methods in a static class, which I think is legitimate because this class is just a DAL and will never have any properties. I also need to do it like this, instead of using the using() scope, to avoid disposing the context, since this method returns an IQueryable.
I'm sure some people already think "OMG no, your context will never be disposed", so please read this article: http://blog.jongallant.com/2012/10/do-i-have-to-call-dispose-on-dbcontext.html
I tried it myself, and yes, the context is disposed once I don't need it anymore.
I repeat, the goal here is to use static methods, so I can't use a dbContext property which the constructor instantiates.
So why do people always use the using() scope?
Is it bad practice to do it the way I would like to?
Another bonus question: where is the [NotMapped] attribute in EF6? I've checked both System.ComponentModel.DataAnnotations and System.ComponentModel.DataAnnotations.Schema but can't find it; the attribute is not recognized by the compiler.
Thanks for your answers.
Following the Repository pattern, IQueryable<T> shall never be returned anyway.
Repository pattern, done right
Besides, your repositories depend on your DbContext. Let's say you have to work on customers in an accounting system.
Customer
public class Customer {
    public int Id { get; protected set; }
    public string GivenName { get; set; }
    public string Surname { get; set; }
    public string Address { get; set; }
}
CustomerRepository
public class CustomerRepository {
    public CustomerRepository(DbContext context) {
        if (context == null) throw new ArgumentNullException("context");
        this.context = context;
    }

    public IList<Customer> GetAll() {
        return context.Set<Customer>().ToList();
    }

    public IList<Invoice> GetInvoicesFor(Customer customer) {
        return context.Set<Invoice>()
            .Where(invoice => invoice.Customer.Id == customer.Id)
            .ToList();
    }

    private readonly DbContext context;
}
So in fact, to answer your question more concisely and precisely: I think neither approach is good. I would rather use a DbContext per business concern. When you access, let's say, the customer management features, instantiate a single DbContext that shall be shared across all of your required repositories, then dispose of this very DbContext once you exit that set of features. This way you won't have to use using statements inside the repository methods, and your contexts should be managed adequately.
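A sketch of that scoping (AccountingDbContext is an assumed concrete DbContext; the repository is the one shown above): the context lives for the duration of the business concern and is disposed of once, rather than per call:

using (var context = new AccountingDbContext())
{
    // The same context instance is handed to every repository serving this concern.
    var customers = new CustomerRepository(context);

    foreach (var customer in customers.GetAll())
    {
        var invoices = customers.GetInvoicesFor(customer);
        // ... customer management work with the customer and its invoices ...
    }
} // the context is disposed of here, when leaving the customer management concern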
Here's another short and simple good reference for the Repository pattern:
Repository (Martin Fowler)
In response to comments from the OP
But actually the point is I don't want to follow the repository pattern. People say, "What if your data source changes?" I want to answer, what if it never changes? What's the point of having such a powerful class but not using it, just in case one day the database provider may change?
Actually, the Repository pattern doesn't only serve the purpose of making a data source change easier; it also encourages better separation of concerns and an approach closer to the business domain, as the members of the repository all revolve around business terminology.
For sure, the repository itself cannot take control of disposing the data context, or whatever object it uses to access the underlying data source, since that object doesn't belong to it; it is only lent to it so that it can fulfill its tasks.
As for your point about whether the data source will change someday: no one can predict it. It most likely never will in most of the systems I have worked on. A database change is more likely to happen some 10 years after the initial development, for modernization purposes. That day, however, you'll understand how the Repository pattern saves you time and headaches compared to tightly coupled code. I work with tightly coupled code in legacy systems, and I see the benefits firsthand. Prevention is better than cure.
But please let's focus on instantiating the dbContext in methods without the using() statement. Is it really bad? I mean, also when we inject the context in the constructor we don't handle the Dispose(); we let Entity Framework do it, and it manages it pretty well.
No, it isn't necessarily bad not to use using statements, as long as you dispose of resources once they are no longer used. The using statement serves this purpose by doing it automatically, instead of you having to take care of it.
As for the Repository pattern, it can't dispose of the context that is passed to it, and it shouldn't, because the context is scoped to a certain matter and is used across other features within a given business context.
Let's say you have customer management features. Within them, you might also need the invoices for a customer, along with the transaction history. A single data context shall be used for all of the data access as long as the user works within the customer management business context. You then have one DbContext injected into your customer management feature, and this very same DbContext is shared across all of the repositories used to access your data source.
Once the user exits the customer management functionality, the DbContext shall be disposed of accordingly, as keeping it alive may cause memory leaks. It is false to believe that, as soon as it is no longer used, everything gets garbage collected. You never know exactly when the .NET Framework reclaims resources or how long it will take to finalize your DbContext; you only know that it might get cleaned up somehow, someday.
If the DbContext gets disposed of immediately after each data access is performed, you'll have to instantiate a new instance every time you need to access the underlying data source. It's a matter of common sense. You have to define the context under which the DbContext shall be used, make it shared across the identified resources, and dispose of it as soon as it is no longer needed. Otherwise, it could cause memory leaks and other such problems.
In response to comment by Mick
I would go further and suggest you should always return IQueryable so you can reuse that result by passing it into other calls on your repositories. Sorry, but your argument makes absolutely no sense to me. Repositories are not meant to be stand-alone one-stop shops; they should be used to break up logic into small, understandable, encapsulated, easily maintained chunks.
I disagree with always returning IQueryable<T> from a repository; otherwise, what good is it to have multiple methods? To retrieve the data, one could then simply have:
public class Repository<T> where T : class {
    public Repository(DbContext dataContext) { context = dataContext; }

    public IQueryable<T> GetAll() { return context.Set<T>(); }

    private readonly DbContext context;
}
and place predicates everywhere in your code to filter the data according to each view's needs. When it is time to change a filter criterion, you'll have to browse all of your code to make sure no one used a filter that was actually unexpected and might cause the system to misbehave.
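A short sketch of the alternative (IsActive is an assumed flag that is not part of the Customer class shown above): a purpose-named member on the repository keeps the criterion in one place instead of scattering predicates across call sites:

// One place owns the definition of "active"; callers cannot drift apart.
public IList<Customer> GetActiveCustomers()
{
    return context.Set<Customer>()
        .Where(customer => customer.IsActive)
        .ToList();
}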
On a side note, I do understand your point, and I admit that for some of the reasons described in your comment it might be useful to return IQueryable<T>. Still, I wonder what it is good for, since a repository's responsibility is to provide a class with everything it needs to get its data. So if one needs to pass IQueryable<T>s along to another repository, it sounds to me as if one hasn't completely investigated every possible way to retrieve the data. If one needs some data to process another query, lazy loading can do it, and there is no need to return IQueryables. As its name says, an IQueryable is made to perform queries, and the repository's responsibility is to access the data, that is, to perform the queries.

When and where to call factories at runtime?

I recently asked about doing DI properly, and got some links to blog posts about it. I think I have a better understanding now - separate object construction from logic, by putting it in factories. But all of the examples are for things like websites, and say to do all the wiring at startup. Call one large factory which news up everything and passes in all the dependencies.
But what if I don't want to instantiate everything up front? I have an object which contains a list of other objects which it can delegate to, but they are expensive, and used one at a time, so I construct them when needed and let them get collected when I'm done. I don't want to put new B() inside the logic of A because I would rather use DI - but how? Can A call the factory? That doesn't seem much better, unless the factory is maintaining state including the current dependencies. I just don't want to pass the full list of Bs into A when it's constructed, since it would be wasteful. If you want, B doesn't necessarily have to be inside A, although it makes logical sense (A is a game level, B is a single screen), but in any case the logic of A dictates when B is created.
So, who calls the factory to get B, and when?
Clarification: I'm not using framework for DI. I wonder if the term DI implies that?
In Ninject, you can register a Func<B> and request that in the constructor of A.
Autofac will automagically supply a Func<B> if B is already registered.
Or you can take the more straightforward approach and define an explicit factory for B and request that factory in the constructor; it's just more typing, as you'd have to create a factory for every dependency you want to lazily initialize.
Here's another SO answer that shows Ninject style factory methods: How do I handle classes with static methods with Ninject?
Not using a framework: If you can, I'd probably look into using one: an IoC/DI framework will usually handle delayed creation for you out of the box.
If you want to continue to roll your own, then just pass the factory that creates B to your A object. Or, if you just don't like raw Funcs and don't want to have to create explicit factories for all your objects, then you could look into using Lazy<B> for a more formalized solution.
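Here is a rough hand-rolled sketch along those lines, using the question's A (level) and B (screen); B's Run method and constructor arguments are placeholders:

public class A
{
    private readonly Func<B> createScreen;

    public A(Func<B> createScreen)
    {
        this.createScreen = createScreen;
    }

    public void AdvanceToNextScreen()
    {
        // B is constructed only here, at the moment A's logic decides it is
        // needed, and it can be collected once A is done with it.
        B screen = this.createScreen();
        screen.Run();
    }
}

// Composition root: only the wiring knows how to construct B; A does not.
var level = new A(() => new B(/* B's expensive dependencies go here */));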
There are typically two patterns for using rarely needed objects that are expensive to create. The first pattern is using a factory, as David Faivre suggests. The other is by using a proxy.
A proxy is, from a design perspective, probably the cleanest solution, although it might need more code to implement. It is the cleanest because the application can be totally unaware of it, since you don't need an extra interface (as the factory approach does).
Here is a simple example with an interface and an expensive implementation:
interface IAmAService
{
    void DoSomething();
}

class ExpensiveImpl : IAmAService
{
    private object x = [some expensive init];

    void IAmAService.DoSomething() { }
}
Now you can implement a proxy based on that interface that delays the creation of that implementation:
class ServiceProxy : IAmAService
{
    private readonly Func<IAmAService> factory;
    private IAmAService instance;

    public ServiceProxy(Func<IAmAService> factory)
    {
        this.factory = factory;
    }

    void IAmAService.DoSomething()
    {
        this.GetInstance().DoSomething();
    }

    private IAmAService GetInstance()
    {
        // TODO: Implement a double-checked lock if only a single
        // instance may exist per ServiceProxy.
        if (this.instance == null)
        {
            this.instance = this.factory();
        }

        return this.instance;
    }
}
This proxy class accepts a factory delegate as a dependency, just as David Faivre described in his answer, but this way the application doesn't have to depend on Func<IAmAService>; it can simply depend on IAmAService.
Now instead of injecting an ExpensiveImpl, you can inject a ServiceProxy into other instances:
// Create the proxy
IAmAService service =
    new ServiceProxy(() => new ExpensiveImpl());

// Inject it into whatever you wish, such as:
var customerService = new CustomerService(service);

Is there anything wrong with having a few private methods exposing IQueryable<T> and all public methods exposing IEnumerable<T>?

I'm wondering if there is a better way to approach this problem. The objective is to reuse code.
Let’s say that I have a Linq-To-SQL datacontext and I've written a "repository style" class that wraps up a lot of the methods I need and exposes IQueryables. (so far, no problem).
Now, I'm building a service layer to sit on top of this repository, many of the service methods will be 1<->1 with repository methods, but some will not. I think a code sample will illustrate this better than words.
public class ServiceLayer
{
    MyClassDataContext context;
    IMyRepository rpo;

    public ServiceLayer(MyClassDataContext ctx)
    {
        context = ctx;
        rpo = new MyRepository(context);
    }

    private IQueryable<MyClass> ReadAllMyClass()
    {
        // pretend there is some complex business logic here
        // and maybe some filtering of the current user's access to "all"
        // that I don't want to repeat in all of the public methods that access
        // MyClass objects.
        return rpo.ReadAllMyClass();
    }

    public IEnumerable<MyClass> GetAllMyClass()
    {
        // call the private IQueryable so we can do additional "in-database" processing
        return this.ReadAllMyClass();
    }

    public IEnumerable<MyClass> GetActiveMyClass()
    {
        // call the private IQueryable so we can do additional "in-database" processing,
        // in this case a .Where() clause
        return this.ReadAllMyClass().Where(mc => mc.IsActive.Equals(true));
    }

    #region "Something my class MAY need to do in the future"

    private IQueryable<MyOtherTable> ReadAllMyOtherTable()
    {
        // there could be additional constraints which define
        // "all" for the current user
        return context.MyOtherTable;
    }

    public IEnumerable<MyOtherTable> GetAllMyOtherTable()
    {
        return this.ReadAllMyOtherTable();
    }

    public IEnumerable<MyOtherTable> GetInactiveOtherTable()
    {
        return this.ReadAllMyOtherTable().Where(ot => ot.IsActive.Equals(false));
    }

    #endregion
}
This particular case is not the best illustration, since I could just call the repository directly in the GetActiveMyClass method, but let’s presume that my private IQueryable does some extra processing and business logic that I don't want to replicate in both of my public methods.
Is that a bad way to attack an issue like this? I don't see it being so complex that it really warrants building a third class to sit between the repository and the service class, but I'd like to get your thoughts.
For the sake of argument, let's presume two additional things.
This service is going to be exposed through WCF, and each of these public IEnumerable methods will call .Select(m => m.ToViewModel()) on the returned collection to convert it to a POCO for serialization.
The service will eventually need to expose some context.SomeOtherTable which won't be wrapped in the repository.
I think it's a good model, since you can create basic private IQueryable functions that can be used by the functions you expose publicly. This way your public methods don't need to recreate the common functionality your IQueryable methods perform; they can extend it as needed, deferring execution while still hiding that functionality publicly.
Take, for example, how to get X out of some table, which may take a lot of logic that you don't need in its raw form. You have that as a private method, as you do in your example, and then the public method adds the finalizing criteria or queries to generate a usable set of data, which can differ from function to function. Why keep reinventing the wheel over and over... just create the basic design (which your IQueryable does) and drop on the tread pattern that is required as needed (which your public IEnumerable does) :)
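A small sketch of that, folding in the ToViewModel() projection the question presumes (MyClassViewModel is an assumed name): the private IQueryable keeps composing in the database, and the public method is where the query executes and the results become serializable POCOs:

public IEnumerable<MyClassViewModel> GetActiveMyClass()
{
    return this.ReadAllMyClass()
        .Where(mc => mc.IsActive)          // still composed into the SQL query
        .ToList()                          // the query executes here
        .Select(mc => mc.ToViewModel())    // POCO conversion for WCF serialization
        .ToList();
}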
+1 for a good design IMO.
