Maintain instances throughout the application without a singleton - C#

I have researched a lot, and found a few different options and opinions, but I'm not sure how to proceed.
I'm working on a project that has both classes that almost never change (e.g. Users) and classes that change frequently (e.g. Meetings). I have a repository for each class which gets data directly from the database, but I would like to implement a service layer for the rarely-changing classes, so that I can load them just once, at application startup.
What can I do to load Users at startup and keep them in RAM, so I don't have to query the database every time I need a specific User?
Is the singleton pattern a good option? I've been avoiding it because people say it's an anti-pattern.
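A minimal sketch of the usual alternative, assuming ASP.NET Core's built-in container (IUserCache, InMemoryUserCache, IUserRepository, and its GetAll() are illustrative names, and it assumes the repository is safe to consume from a singleton): let the DI container manage a single instance instead of hand-rolling the singleton pattern.

    // Program.cs -- one container-managed instance for the whole app.
    builder.Services.AddSingleton<IUserCache, InMemoryUserCache>();

    // Illustrative cache that loads all Users once and serves them from RAM.
    public interface IUserCache
    {
        User? GetById(int id);
    }

    public class InMemoryUserCache : IUserCache
    {
        private readonly Dictionary<int, User> _users;

        public InMemoryUserCache(IUserRepository repository)
        {
            // Loaded once, on first resolution (effectively at startup).
            _users = repository.GetAll().ToDictionary(u => u.Id);
        }

        public User? GetById(int id)
            => _users.TryGetValue(id, out var user) ? user : null;
    }

The class itself stays an ordinary, testable class; only its lifetime is "singleton", and only the container knows that.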

Related

Is Transient an acceptable scope for injecting a service in Blazor Server that uses DbContext?

I'm working on my first Blazor Server project and I am slowly fixing a lot of initial design errors that I made when I started out. I've been using C# for a while, but I'm new to web development, new to ASP.NET, new to Blazor, and new to web architecture standards, which is why I made so many mistakes early on, before I had a strong understanding of how best to implement my project in a way that promotes clean code and long-term maintainability.
I've recently restructured my solution so that it follows the "Clean Architecture" outlined in this Microsoft documentation. I now have the following projects, which aim to mirror those described in the document:
CoreWebApp: A Blazor project, pages and components live here.
Core: A Class Library project, the domain model, interfaces, business logic, etc, live here.
Infrastructure: Anything to do with having EF Core access the underlying database lives here, e.g. ApplicationDbContext, any implementations of Repositories, etc.
I am at a point where I want to move existing implementations of the repository pattern into the Infrastructure project. This will allow me to decouple the Core project from the Infrastructure project by utilising the Dependency Injection system so that any business logic that uses the repositories depends only on the interfaces to those repositories (as defined in Core) and not the actual implementations themselves (to be defined in Infrastructure).
Both the Microsoft documentation linked above, and this video by CodeWrinkles on YouTube make the following two suggestions on how to correctly use DbContext in a Blazor Server project (I'll talk specifically about using DbContext in the context of a repository):
Scope usage of a repository to each individual database request. Basically every time you need the repository you instantiate a new instance, do what needs to be done, and as soon as the use of the repo goes out of scope it is automatically disposed. This is the shortest lived scope for the underlying DbContext and helps to prevent concurrency issues, but also forgoes the benefits of change tracking.
Scope the usage of a repository to the lifecycle of a component. Basically you create an instance of a repository in OnInitializedAsync, and destroy the repository in the Dispose() method of the component. This allows usage of EF Core's change tracking (sketched below).
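A sketch of that second suggestion (OrderRepository, AppDbContext, and GetAllAsync are illustrative names; note the repository has to be new'd up by hand):

    using Microsoft.AspNetCore.Components;
    using Microsoft.EntityFrameworkCore;

    public class OrdersPage : ComponentBase, IDisposable
    {
        [Inject]
        private IDbContextFactory<AppDbContext> ContextFactory { get; set; } = default!;

        private OrderRepository? _repository;
        private List<Order> _orders = new();

        protected override async Task OnInitializedAsync()
        {
            // Created by hand, not injected -- the coupling problem noted next.
            _repository = new OrderRepository(ContextFactory.CreateDbContext());
            _orders = await _repository.GetAllAsync();
        }

        // The repository (and its DbContext) lives exactly as long as the component.
        public void Dispose() => _repository?.Dispose();
    }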
The problem with these two approaches is that they don't allow for use of the DI system: in both cases the repository must be new'd up manually, and thus the coupling between Core and Infrastructure remains unbroken.
The one thing that I can't seem to understand is why case 2 can't be achieved by declaring the repository as a Transient service in Program.cs. (I suppose case 1 could also be achieved, you'd just hide spinning up a new DbContext on every access to the repository within the methods it exposes). In both the Microsoft documentation and the CodeWrinkles video they seem to lean pretty heavily on this wording for why the Transient scope isn't well aligned with DbContext:
Transient results in a new instance per request; but as components can be long-lived, this results in a longer-lived context than may be intended.
It seems counterintuitive to make this statement, and then offer a solution to the DbContext lifetime problem that produces exactly the longer lifetime the statement warns about.
Scoping a repository to the lifetime of a component seems, to me, to be exactly the same as injecting a Transient instance of a repository as a service. When the component is created a new instance of the service is created, when the user navigates away from the page this instance is destroyed. If the user comes back to the page another instance is created and it will be different to the previous instance due to the nature of Transient services.
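For reference, the registration I have in mind is nothing more exotic than this (IOrderRepository and OrderRepository stand in for my actual repository types):

    // Program.cs
    builder.Services.AddTransient<IOrderRepository, OrderRepository>();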
What I'd like to know is if there is any reason why I shouldn't create my repositories as Transient services? Is there some deeper aspect to the problem that I've missed? Or is the information that has been provided trying to lead me into not being able to take advantage of the DI system for no apparent reason? Any discussion on this is greatly appreciated!
It's a complex issue, with no silver-bullet solution. Basically, you can't have your cake and eat it.
You either use EF as an ORM (Object-Relational Mapper), or you let EF manage your complex objects and in the process surrender your "Clean Design" architecture.
In a Clean Design solution, you map data classes to tables or views. Each transaction uses a "unit of work" DbContext obtained from a DbContextFactory. You only enable tracking on Create/Update/Delete transactions.
An order is a good example.
A Clean Design solution has data classes for the order and the order items. A composite order object in the Core domain is built by making two queries into the data pipeline: one item query to get the order and one list query to get the order items associated with that order.
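A sketch of that composite build, assuming EF Core with the DbContextFactory mentioned above and an injected IDbContextFactory<AppDbContext> _factory (OrderComposite and the other type names are illustrative):

    // Build the composite order in the Core domain from two separate
    // short-lived, no-tracking "unit of work" queries.
    public async Task<OrderComposite> GetOrderComposite(int orderId)
    {
        using var db = await _factory.CreateDbContextAsync();

        // One item query for the order itself...
        var order = await db.Orders.AsNoTracking()
            .SingleAsync(o => o.Id == orderId);

        // ...and one list query for its order items.
        var items = await db.OrderItems.AsNoTracking()
            .Where(i => i.OrderId == orderId)
            .ToListAsync();

        return new OrderComposite(order, items);
    }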
EF lets you build a data class which includes both the order data and a list of order items. You can load that data class in a DbContext, "process" the order by making changes, and then call SaveChangesAsync to save it back to the database. EF does all the complex work of building the queries and tracking the changes. It also holds the DbContext open for a long period.
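By contrast, a sketch of letting EF manage the composite directly, given a tracked context db (names again illustrative):

    // Load the order and its items as one tracked object graph.
    var order = await db.Orders
        .Include(o => o.Items)
        .SingleAsync(o => o.Id == orderId);

    // "Process" the order by changing the tracked objects...
    order.Items.Add(new OrderItem { ProductId = 7, Quantity = 2 });

    // ...and let the change tracker work out the SQL. The context stays
    // open (and tracking) for as long as the order is being processed.
    await db.SaveChangesAsync();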
Using EF to manage your complex objects closely couples your application domain with your infrastructure domain. Your application is welded to EF and the data stores it supports. It's why you will see some authors asserting that implementing the Repository Pattern with EF is an anti-pattern.
Taking the Order example above, you normally use a Scoped DI View Service to hold and manage the Order data. Your Order Form (Component) injects the service, calls an async get method to populate the service with the current data and displays it. You will almost certainly only ever have one Order open in an SPA. The data lives in the view service not the UI front end.
You can use transient services, but you must ensure they:
Don't use DbContexts
Don't implement IDisposable
Why? The DI container retains a reference to any Transient service it creates that implements IDisposable - it needs to make sure the service is disposed. However, it only disposes that service when the container itself is disposed. You build up redundant instances until the SPA closes down.
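A sketch of how the build-up happens (ReportBuilder is an illustrative name):

    // An innocuous-looking transient service that happens to be IDisposable.
    public sealed class ReportBuilder : IDisposable
    {
        public void Dispose() { /* release resources */ }
    }

    // Program.cs
    builder.Services.AddTransient<ReportBuilder>();

    // Every resolution (e.g. @inject ReportBuilder Builder in a component)
    // creates a new instance. Because the type implements IDisposable, the
    // container keeps a reference to each instance so it can dispose it --
    // but it only does so when the container (here, the circuit scope, i.e.
    // the SPA session) is itself disposed, so instances accumulate until then.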
There are some situations where a Scoped service is too broad but the Transient option isn't applicable, such as a service that implements IDisposable. Using OwningComponentBase can help you solve that problem, but it can introduce a new set of problems of its own.
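A sketch of the OwningComponentBase approach (IOrderRepository and GetAllAsync are illustrative):

    using Microsoft.AspNetCore.Components;

    // The component owns its own DI scope. Service resolves IOrderRepository
    // from that scope, and the scope -- including anything disposable in it --
    // is disposed together with the component.
    public class OrderListPage : OwningComponentBase<IOrderRepository>
    {
        private List<Order> _orders = new();

        protected override async Task OnInitializedAsync()
        {
            _orders = await Service.GetAllAsync();
        }
    }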
If you want to see a working Clean Design Repository pattern example there's an article here - https://www.codeproject.com/Articles/5350000/A-Different-Repository-Pattern-Implementation - with a repo.

How to persist aggregates with repositories?

I am trying to learn some concepts about DDD and the part of persisting Aggregates is confusing me a bit. I have read various answers on the topic on SO but none of them seem to answer my question.
Let's say I have an Aggregate root of Product. Now I do not want to inject the ProductRepository that will persist this aggregate root into the constructor of the Product class itself. Imagine me writing code like
var prod = new Product(Factory.CreateProductRepository(), name, costprice);
in the UI layer. If I do not want to inject my repository via dependency injection in the Aggregate Root, then the question is where should this code go? Should I create a class only for persisting this AR? Can anyone suggest what is the correct & recommended approach to solve this issue?
My concern is not which ORM to use or how to make this AR ORM friendly or easy to persist, my question is around the right use of repositories or any persistence class.
Application Services
You are right, the domain layer should know nothing about persistence. So injecting the repository into Product is indeed a bad idea.
The DDD concept you are looking for is called Application Service. An application service is not part of the domain layer, but lives in the service layer (sometimes called application layer). Application services represent a use case (as opposed to a domain concept) and have the following responsibilities:
Perform input validation
Enforce access control
Perform transaction control
The last point means that an application service will query a repository for an aggregate of a specific type (e.g. by ID), modify it by using one of its methods, and then pass it back to the repository for updating the DB.
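A minimal sketch of such an application service (IProductRepository, the ChangeCostPrice domain method, and the validation are illustrative):

    public class ProductApplicationService
    {
        private readonly IProductRepository _repository;

        public ProductApplicationService(IProductRepository repository)
            => _repository = repository;

        public async Task ChangeCostPrice(Guid productId, decimal newCostPrice)
        {
            // 1. Input validation (access control would sit here too).
            if (newCostPrice < 0)
                throw new ArgumentOutOfRangeException(nameof(newCostPrice));

            // 2. Query the repository for the aggregate...
            var product = await _repository.GetById(productId);

            // 3. ...modify it through one of its own methods...
            product.ChangeCostPrice(newCostPrice);

            // 4. ...and hand it back to the repository to update the DB.
            await _repository.Update(product);
        }
    }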
Repository Granularity
Concerning your second question
Should I create a class only for persisting this AR?
Yes, creating one repository per aggregate is a common approach. Often, standard repository operations like getById(), update(), delete(), etc. are extracted into a reusable class (either a base class or by aggregation).
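A sketch of the base-class variant (EF Core-based; the names are illustrative):

    using Microsoft.EntityFrameworkCore;

    // Standard operations extracted once, reused by every aggregate repository.
    public abstract class RepositoryBase<T> where T : class
    {
        protected readonly DbContext Context;

        protected RepositoryBase(DbContext context) => Context = context;

        public async Task<T?> GetById(Guid id)
            => await Context.Set<T>().FindAsync(id);

        public async Task Update(T aggregate)
        {
            Context.Update(aggregate);
            await Context.SaveChangesAsync();
        }

        public async Task Delete(T aggregate)
        {
            Context.Remove(aggregate);
            await Context.SaveChangesAsync();
        }
    }

    // One repository per aggregate.
    public class ProductRepository : RepositoryBase<Product>
    {
        public ProductRepository(DbContext context) : base(context) { }
    }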
You can also create additional repositories for non-domain information, e.g. statistical data. In these cases, make sure that you don't accidentally miss a domain concept, however.

Implementing caching of services in Domain project being used by a Web API

My question is: how do I implement caching in my domain project, which works as a normal layered stack using the repository pattern?
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project for instance have:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods would have to call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? That seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could make some cache layer, but I guess that would mean my WebAPI wouldn't call my OrderRepository anymore, but my CacheOrderRepository-something?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point, you have at least two reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
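A minimal sketch of option 1, a caching decorator behind the same interface (IOrderRepository, Order, and the key scheme are illustrative; System.Runtime.Caching.MemoryCache is just one possible cache):

    using System.Runtime.Caching;

    // Same interface as the real repository, so callers don't know or care
    // that caching is involved.
    public class CachedOrderRepository : IOrderRepository
    {
        private readonly IOrderRepository _inner;
        private readonly MemoryCache _cache = MemoryCache.Default;

        public CachedOrderRepository(IOrderRepository inner) => _inner = inner;

        public Order Get(int id)
        {
            var key = $"Order_{id}";
            if (_cache.Get(key) is Order cached)
                return cached;

            var order = _inner.Get(id);
            _cache.Set(key, order, DateTimeOffset.Now.AddMinutes(5));
            return order;
        }

        public void Update(Order order)
        {
            _inner.Update(order);
            _cache.Remove($"Order_{order.Id}");  // invalidate on write
        }
    }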
There are probably a million different ways to do this, but given that the intent of caching is to improve performance, it seems to me you could implement the cache along the lines of a repository pattern: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, since any data-structure-related changes may need to be implemented in multiple places. Concurrency issues also enter the fray. Just some thoughts...
SqlCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx . This gets you caching that is invalidated when other systems apply updates as well.
There are multiple levels of caching depending on the situation. If you are looking for generic, centralized caching with a low number of changes, I think you will want EF second-level caching; for more details see http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also use caching at the Web API level.
If MVC and the Web API are hosted in two different data centers, consider the network traffic between them.
For a portal with heavy read access, you might consider Redis (http://redis.io).
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcache. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer because the MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly. It prevents you from having to wait for data to be assembled from a cache that merely mirrors your data layer. For example, requesting Order123 would require only a single cache read rather than several reads for any FK data. Your caching layer would of course need to contain the logic to invalidate the cache on UPDATEs you perform. A recommended way would be to retrieve the cached order object and modify its properties directly, then persist to the DB asynchronously.
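A sketch of that recommended flow (OrderDto, SaveOrderAsync, and the key are illustrative):

    var cache = System.Runtime.Caching.MemoryCache.Default;

    // A single cache read instead of several FK lookups.
    if (cache.Get("Order_123") is OrderDto order)
    {
        // Modify the cached object directly...
        order.Status = "Shipped";

        // ...then persist to the database asynchronously.
        await SaveOrderAsync(order);
    }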

How many entities should RIA domain service include?

I was wondering how exactly to implement a domain service in RIA. Is it common to include all entities of the entire domain model in a single domain service, thus making the service responsible for the entire database? Is this the way it's normally done? I really have no reason to separate data access into different services, but I was wondering if this is considered good practice, and what the pros and cons of such an approach would be.
Also, is it considered a good or bad practice to register domain context as a singleton with IOC, so that the entire application works with the same set of data, thus avoiding concurrency issues and similar problems?
Thoughts?
Thank you
We have two separate services in our app: one for the data model and one strictly used for authentication. We took this design from MS's business sample app structure.
We considered breaking up our data domain service into smaller components but decided against it because it didn't seem to add any advantage (other than reducing service class size). If you have distinct data models that are completely independent of each other, then going that route might make sense. Intuitively, the domain service should represent the entire domain. If your domains are independent (with the occasional need for crossover) then it makes logical sense to segregate them in that way.
Regarding using the context as a Singleton: I tried that and ended up creating class-scope instances instead. We haven't experienced any issues doing it this way as they all use the same underlying data connection. I don't know what the "official" best practice is, but this is the way I've seen it done in numerous RIA apps.
Thanks Nick. I actually did the same thing as you, I built two services, one for authentication and one for data access. That seems most logical to me.
As for making datacontext a singleton, I've tried that as well and it works nicely. No need to constantly reload and refresh data and worry about concurrency issues in other classes :)

Domain Driven Design Layout Question

I'm new to the DDD thing. I have a Profile class and a ProfileRepository class.
The Profile class contains the following fields: Id, Description, ImageFilePath.
So when I add a new Profile, I upload the image to the server and store the path to it in my db.
When I delete the profile, the image should be removed from my file system as well.
My Question:
Where do I add the logic for this? My profile repository has a Delete method. Should I add this logic there, or should I add a service to encapsulate both actions?
Any comment would be appreciated...
Thanks
You have two different "actions" related to the images: a "physical" process and a "logical" process. The logical process is persisting the information about the image into the domain repository, since it is part of the domain. The physical processes of adding (and deleting) the file are a prerequisite to the logical process.
Taking a step back, the physical process is completely independent of the logical process, but the opposite is not true. You obviously do not want to persist meta-information about the image (in the domain) if the image was not saved. Also, you don't want to remove the information from the domain if you cannot remove the physical file.
The domain should contain the information required to remove the logical instance of the image from the datasource. Think of the domain as a physically separate application. In this case, the domain has no actual knowledge that the data it is persisting has anything to do with a physical file. Make sure to keep it this way.
Generally, I have my entities in one assembly, and my repositories and domain services in another. The application services live outside of the domain model, but leverage it to do their work. So application services use one or more domain services or other application services, and domain services can use one or more repositories.
Keeping this in mind, you have two places for the actual deletion logic, and a third place to coordinate them. Here is how it would work if I were doing it. The domain service will leverage the repository for the logical delete from the underlying datasource (as well as the retrieval, which you will also need). It is not aware of anything other than working with the domain object instance. I also would have an application service (outside of the domain) which specifically deals with removing the physical instance. For argument's sake, I will assume you have an "ImageRepository" class and an "ImageServices" class, which contain your domain repository and your domain services, respectively. Your ImageServices needs a Delete() method, as well as whatever Find() methods you are using. I usually explicitly name the find methods FindBy...() (e.g., FindByKey(), FindByName(), etc.).
You don't want to remove the logical instance if you haven't been able to remove the physical instance, so make sure you have a means of measuring success of the removal operation for the physical image. I would probably go with some sort of a custom exception in this case (since I would consider deleting a file to be a standard operation that should not commonly fail). This usually falls in the realm of "management". So usually I have an application service named something like "ImageManagementService". For simplicity sake, this service (since it is part of the application and not the domain) can have a private method to do the physical delete. Let's call it "DeleteImageFile()".
The third place is a coordination of these two operations, also as an application service. I would just make this the public method in the "ImageManagementService". We can call this one "RemoveImage". This application service will do the following:
Retrieve the instance information from the domain services (a passthrough call to your repository).
Use the instance information to locate the physical file and remove it (the first application service mentioned, again).
If the physical removal is successful, delete the instance (back to the domain service, facading the repository again).
So, what happens is the application itself calls the RemoveImage() method on the "ImageManagementService" instance. Internally, RemoveImage() first calls the FindBy...() method from the domain's "ImageServices" to get an instance from the domain. The file path from that instance is used to call the private DeleteImageFile() method in the "ImageManagementService" instance. Upon success, it then calls the Delete() method in the domain's "ImageServices", which is acting as a facade to your repository.
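A sketch of that coordination, using the names from above (the exact signatures are illustrative):

    // Application service coordinating physical and logical removal.
    public class ImageManagementService
    {
        private readonly ImageServices _imageServices;  // domain service facading the repository

        public ImageManagementService(ImageServices imageServices)
            => _imageServices = imageServices;

        public void RemoveImage(int imageId)
        {
            // 1. Retrieve the instance via the domain service.
            var image = _imageServices.FindByKey(imageId);

            // 2. Remove the physical file; this throws on failure, so we
            //    never reach the logical delete if the file can't be removed.
            DeleteImageFile(image.FilePath);

            // 3. Physical removal succeeded: delete the logical instance.
            _imageServices.Delete(image);
        }

        private static void DeleteImageFile(string path)
        {
            try
            {
                System.IO.File.Delete(path);
            }
            catch (System.IO.IOException ex)
            {
                // The custom exception suggested above.
                throw new ImageRemovalException($"Could not delete '{path}'.", ex);
            }
        }
    }

    public class ImageRemovalException : Exception
    {
        public ImageRemovalException(string message, Exception inner)
            : base(message, inner) { }
    }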
I think it is very important to focus on the separation of concerns in this case, because if you have an explicit separation (which you can do with different assemblies) you will become comfortable with knowing which kind of logic can go in which place. I highly recommend Evans' book. Also, for a quick hit on the SoC concept as it relates to DDD, I recommend taking a look at Jeffrey Palermo's three-part series on the "Onion Architecture".
Just a couple of notes as to why you would use a domain service instead of calling the repository directly from the application service. Primarily, the repository has more complicated instancing than the domain service. Remember, it is mostly a facade, but it might have additional logic that does not fit anywhere else in the domain. A good example of this might be if you wanted to enforce a unique filename. The domain object itself has no direct knowledge of other domain objects in other aggregates, so the domain service might check for an existing instance with the same name prior to a save operation. Very handy, indeed! Also, a domain service is not limited to a single repository. You can have a domain service coordinate efforts between multiple repositories. If you have overlapping aggregates, you might need to work with two related aggregate roots at the same time. You can do this in the domain service, keeping that sort of logic in the domain and not letting it bleed into the application.
Hope this helps. I am sure that there are other ways to do this, but this is the way that I have found success in my own applications with similar scenarios.
#joseph.ferris: "Generally, I have my entities in an assembly, then my repositories and domain services in another. "
Personally, I prefer to see assemblies as a unit of deployment, not a separation of concerns design tool. For that, I'd rather use namespaces.
Ensuring no cyclic-dependencies (between those namespaces) that way is harder, but tools like NDepend can help out.
On a first approach, I think I would opt for the simplest thing and delete the physical image from disk inside the ImageRepository.
It is maybe not the most 'correct' or 'pure' solution, but it is the simplest one, and this conforms to the 'choose the simplest solution that works' adage.
When, in a later phase of the project, you feel that this solution is not good and you need a more complex (and maybe more pure) solution like the one proposed by joseph.ferris, you can always refactor it.
It is easier to refactor a simple solution than to refactor a complex one. :)
