Trying to understand which service lifetime is best for a service layer: transient or scoped (and why).
I am looking for the pros and cons of registering the service layer as scoped instead of transient. Does a transient service work well with database transactions, or is keeping the service layer scoped a bad idea?
Thanks
Usually, you should default to transient lifetimes. These are easy to understand and will generally discourage you from keeping state in your services. It’s also the most compatible lifetime with other services since it can be used from anywhere. So unless you have certain requirements, just choose transient by default.
Scoped services are good when you have expensive operations or temporary state that should be kept for the duration of the request. Database connections are a good example: a connection is not cheap, and using a single connection for handling the single request of a single user (which isn't happening concurrently) works pretty well. Other examples are values calculated on top of the request data, e.g. data retrieved from external sources about the user (although here you might even consider a longer-lived cache).
If you aren't creating your database connection yourself, chances are that you already have some service through which you need to go in order to work with the database. That service is then hopefully already registered as a scoped service. An example of this is the DbContext from Entity Framework Core, which is registered as a scoped dependency by default.
If you consume such services, you can consume them from a transient service. Multiple (transient) services will simply end up receiving the same instance, but that's an implementation detail your services shouldn't bother with. So the default suggestion still stands: register the service as transient.
When deciding between transient and scoped, it's also a good idea to consider the following questions: Is the service resolved multiple times during the handling of a single request? Is there a problem with creating a separate instance each time (e.g. is construction expensive)? If so, choosing a scoped lifetime may help you.
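To make the default concrete, here is a minimal sketch of a transient service consuming a scoped DbContext. ShopContext, Order, IOrderService, and OrderService are hypothetical names, and ShopContext is assumed to be an EF Core DbContext with an Orders set:

// Program.cs -- registration sketch
builder.Services.AddDbContext<ShopContext>();                    // scoped by default
builder.Services.AddTransient<IOrderService, OrderService>();    // the suggested default

public interface IOrderService
{
    Task<Order?> GetOrderAsync(int id);
}

// A stateless transient service. Every transient resolved during the same
// request receives the same scoped ShopContext instance.
public class OrderService : IOrderService
{
    private readonly ShopContext _db;

    public OrderService(ShopContext db) => _db = db;

    public Task<Order?> GetOrderAsync(int id) =>
        _db.Orders.FindAsync(id).AsTask();
}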
I'm working on my first Blazor Server project and am slowly fixing a lot of initial design errors that I made when I started out. I've been using C# for a while, but I'm new to web development, new to ASP.NET, new to Blazor, and new to web architecture standards, which is why I made so many mistakes early on, before I had a strong understanding of how best to structure my project for clean code and long-term maintainability.
I've recently restructured my solution so that it follows the "Clean Architecture" outlined in this Microsoft documentation. I now have the following projects, which aim to mirror those described in the document:
CoreWebApp: A Blazor project, pages and components live here.
Core: A Class Library project, the domain model, interfaces, business logic, etc, live here.
Infrastructure: Anything to do with having EF Core access the underlying database lives here, i.e. ApplicationDbContext, any implementations of repositories, etc.
I am at a point where I want to move existing implementations of the repository pattern into the Infrastructure project. This will allow me to decouple the Core project from the Infrastructure project by utilising the Dependency Injection system so that any business logic that uses the repositories depends only on the interfaces to those repositories (as defined in Core) and not the actual implementations themselves (to be defined in Infrastructure).
Both the Microsoft documentation linked above, and this video by CodeWrinkles on YouTube make the following two suggestions on how to correctly use DbContext in a Blazor Server project (I'll talk specifically about using DbContext in the context of a repository):
Scope usage of a repository to each individual database request. Basically every time you need the repository you instantiate a new instance, do what needs to be done, and as soon as the use of the repo goes out of scope it is automatically disposed. This is the shortest lived scope for the underlying DbContext and helps to prevent concurrency issues, but also forgoes the benefits of change tracking.
Scope the usage of a repository to the lifecycle of a component. Basically, you create an instance of a repository in OnInitializedAsync and destroy it in the component's Dispose() method. This allows usage of EF Core's change tracking.
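For concreteness, a hedged sketch of option 2 in a component's code-behind, using an injected IDbContextFactory. OrdersPage, AppDbContext, and OrderRepository are hypothetical names, and the repository is assumed to dispose the DbContext it wraps:

// Code-behind for a component that owns a repository for its lifetime.
public partial class OrdersPage : ComponentBase, IDisposable
{
    [Inject] public IDbContextFactory<AppDbContext> DbFactory { get; set; } = default!;

    private OrderRepository? _repo;

    protected override async Task OnInitializedAsync()
    {
        // The repository (and its DbContext) is created when the component
        // initializes, so change tracking spans the component's lifetime...
        _repo = new OrderRepository(DbFactory.CreateDbContext());
        await _repo.LoadOrdersAsync();
    }

    // ...and released when the user navigates away.
    public void Dispose() => _repo?.Dispose();
}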
The problem with these two approaches is that they don't allow for use of the DI system: in both cases the repository must be new'd up manually, so the coupling between Core and Infrastructure remains unbroken.
The one thing that I can't seem to understand is why case 2 can't be achieved by declaring the repository as a Transient service in Program.cs. (I suppose case 1 could also be achieved; you'd just hide spinning up a new DbContext on every access to the repository within the methods it exposes.) In both the Microsoft documentation and the CodeWrinkles video they seem to lean pretty heavily on this wording for why the Transient scope isn't well aligned with DbContext:
Transient results in a new instance per request; but as components can be long-lived, this results in a longer-lived context than may be intended.
It seems counterintuitive to make this statement, and then provide a solution to the DbContext lifetime problem that will enable a lifetime that will align with the stated problem.
Scoping a repository to the lifetime of a component seems, to me, to be exactly the same as injecting a Transient instance of a repository as a service. When the component is created a new instance of the service is created, when the user navigates away from the page this instance is destroyed. If the user comes back to the page another instance is created and it will be different to the previous instance due to the nature of Transient services.
What I'd like to know is if there is any reason why I shouldn't create my repositories as Transient services? Is there some deeper aspect to the problem that I've missed? Or is the information that has been provided trying to lead me into not being able to take advantage of the DI system for no apparent reason? Any discussion on this is greatly appreciated!
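For reference, the registration being asked about would presumably be a one-liner in Program.cs (IOrderRepository and OrderRepository are hypothetical names):

// Core defines the interface; Infrastructure provides the implementation.
builder.Services.AddTransient<IOrderRepository, OrderRepository>();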
It's a complex issue with no silver-bullet solution. Basically, you can't have your cake and eat it.
You either use EF as an ORM (Object-Relational Mapper), or you let EF manage your complex objects and in the process surrender your "Clean Design" architecture.
In a Clean Design solution, you map data classes to tables or views. Each transaction uses a "unit of work" DbContext obtained from a DbContextFactory. You only enable tracking on Create/Update/Delete transactions.
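A minimal sketch of that unit-of-work pattern, assuming a repository that holds an injected IDbContextFactory (AppDbContext, Order, and OrderItem are hypothetical names):

public class OrderRepository
{
    private readonly IDbContextFactory<AppDbContext> _factory;

    public OrderRepository(IDbContextFactory<AppDbContext> factory) => _factory = factory;

    // Read: one short-lived context, change tracking disabled.
    public async Task<List<OrderItem>> GetOrderItemsAsync(int orderId)
    {
        using var db = _factory.CreateDbContext();
        return await db.OrderItems
            .AsNoTracking()
            .Where(i => i.OrderId == orderId)
            .ToListAsync();
    }

    // Write: tracking only for the duration of this one transaction.
    public async Task UpdateOrderAsync(Order order)
    {
        using var db = _factory.CreateDbContext();
        db.Orders.Update(order);
        await db.SaveChangesAsync();
    }
}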
An order is a good example.
A Clean Design solution has data classes for the order and the order items. A composite order object in the Core domain is built by making two queries into the data pipeline: one item query to get the order and one list query to get the order items associated with that order.
EF lets you build a data class which includes both the order data and a list of order items. You can load that data class in a DbContext, "process" the order by making changes, and then call SaveChangesAsync to save it back to the database. EF does all the complex work of building the queries and tracking the changes. It also holds the DbContext open for a long period.
Using EF to manage your complex objects closely couples your application domain with your infrastructure domain. Your application is welded to EF and the data stores it supports. It's why you will see some authors asserting that implementing the Repository Pattern with EF is an anti-pattern.
Taking the Order example above, you normally use a Scoped DI View Service to hold and manage the Order data. Your Order Form (Component) injects the service, calls an async get method to populate the service with the current data and displays it. You will almost certainly only ever have one Order open in an SPA. The data lives in the view service not the UI front end.
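A hedged sketch of such a view service; OrderViewService, IOrderDataBroker, and the method names are all hypothetical:

// Registered once per scope: builder.Services.AddScoped<OrderViewService>();
public class OrderViewService
{
    private readonly IOrderDataBroker _broker;   // hypothetical data pipeline

    public OrderViewService(IOrderDataBroker broker) => _broker = broker;

    public Order? Order { get; private set; }

    public async Task LoadOrderAsync(int orderId)
    {
        // One item query for the order, one list query for its items.
        var order = await _broker.GetOrderAsync(orderId);
        order.Items = await _broker.GetOrderItemsAsync(orderId);
        Order = order;   // the data lives here, not in the UI front end
    }
}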
You can use transient services, but you must ensure they:
Don't use DbContexts
Don't implement IDisposable
Why? The DI container retains a reference to any transient service it creates that implements IDisposable, because it needs to make sure the service gets disposed. However, it only disposes those services when the container itself is disposed, so you build up redundant instances until the SPA closes down.
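This retention is easy to demonstrate with the default Microsoft.Extensions.DependencyInjection container; ThrowawayService is a hypothetical disposable service:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<ThrowawayService>();
using var provider = services.BuildServiceProvider();

// Each resolution creates a new instance, and the container keeps a
// reference to every one so it can dispose them later...
for (var i = 0; i < 1000; i++)
    provider.GetRequiredService<ThrowawayService>();

// ...but disposal only happens here, when the provider itself is disposed.
// In Blazor Server that is when the circuit shuts down, so instances pile up.

public class ThrowawayService : IDisposable
{
    public void Dispose() { }
}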
There are some situations where a scoped service is too broad but the transient option isn't applicable, such as a service that implements IDisposable. Using OwningComponentBase can help you solve that problem, but it can introduce a new set of problems of its own.
If you want to see a working Clean Design Repository pattern example there's an article here - https://www.codeproject.com/Articles/5350000/A-Different-Repository-Pattern-Implementation - with a repo.
I have a service registered in the ASP.NET service container. I register this service as a singleton, and its function is to maintain several data structures (Dictionary, List, Queue) in memory.
The service is perfectly accessible from the controllers; my doubts stem from my lack of knowledge of the internal workings of ASP.NET.
My questions are:
Should I worry about making the singleton thread-safe, or does the container take care of that?
Are calls to the service's methods queued onto a single thread, or can they be made concurrently? I ask in order to know whether I have to use the concurrent collections instead of the generic ones.
Would it be convenient to make the methods asynchronous?
Any suggestions and examples are welcome.
Thanks in advance.
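For illustration, ASP.NET does handle requests concurrently, so a singleton of this kind is typically written around the concurrent collections. A minimal sketch with hypothetical names:

using System.Collections.Concurrent;

// Registered with services.AddSingleton<InMemoryStateService>();
public class InMemoryStateService
{
    // The container does NOT synchronize access: concurrent requests can
    // call into a singleton in parallel, hence the concurrent collections.
    private readonly ConcurrentDictionary<string, int> _counts = new();
    private readonly ConcurrentQueue<string> _events = new();

    public void Record(string key)
    {
        _counts.AddOrUpdate(key, 1, (_, current) => current + 1);
        _events.Enqueue(key);
    }

    public int GetCount(string key) =>
        _counts.TryGetValue(key, out var count) ? count : 0;
}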
I have an API application plus service and repository class library projects. In the service part I write business logic, and the repository only communicates with the database. My question: which lifetime is best for the repository and the service?
services.AddScoped<ITicketRepository, TicketRepository>();
services.AddTransient<ITicketRepository, TicketRepository>();
services.AddSingleton<ITicketRepository, TicketRepository>();
Like always, it depends. My suggestion is the following:
Scoped: in my opinion, there can be two main reasons for using this:
Your dependency has a dependency which has a scoped lifetime. In this case, you cannot use singleton, but can use scoped or transient. Which one you should take is based on the other criteria.
Your dependency has some state which makes it unsuitable for singleton scope, but it is heavyweight enough that you don't want to register it as transient. Another possibility is that, again, it cannot be used in singleton scope, but it is fine to share the same instance per request (scope), and you don't want the overhead of constructing new ones when two types depend on the same thing and both are used to serve a single request.
Transient: this is the simplest approach. Every time an instance of a dependency registered this way is required, a new instance is created. This is probably the most foolproof option, but it can cause serious overhead if its usage is not justified. @Tony Ngo pointed out in his answer, quoting the official docs, that this works best for lightweight, stateless objects. I'd argue, however, that statelessness is a very good indicator that you may want a singleton lifetime instead, since statelessness guarantees that the same object can be used concurrently just fine. Whether you choose transient or singleton in that case really depends on whether you care about performance aspects like GC cost, which is obviously much higher if you create a new instance every time the dependency is required, even though you could avoid doing so. Having said that, many developers use transient in this scenario as well, probably because of its foolproofness, or simply because they treat it as the default choice.
Singleton: the points above basically summarize this one: you can choose this when there is absolutely no reason to create a new instance of the dependency for each request (scope) or for each dependent instance. Note that, as said before, you cannot use a singleton lifetime when the type has a dependency which is registered as scoped.
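To make the three behaviours concrete, here is a small runnable sketch against the default container (the three *Thing type names are hypothetical):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<TransientThing>();
services.AddScoped<ScopedThing>();
services.AddSingleton<SingletonThing>();
using var root = services.BuildServiceProvider();

using (var scope = root.CreateScope())
{
    var sp = scope.ServiceProvider;
    // Transient: a new instance on every resolution.
    Console.WriteLine(ReferenceEquals(
        sp.GetRequiredService<TransientThing>(),
        sp.GetRequiredService<TransientThing>()));   // False
    // Scoped: one instance per scope (per request in ASP.NET Core).
    Console.WriteLine(ReferenceEquals(
        sp.GetRequiredService<ScopedThing>(),
        sp.GetRequiredService<ScopedThing>()));      // True
}

// Singleton: one instance for the whole container lifetime.
Console.WriteLine(ReferenceEquals(
    root.GetRequiredService<SingletonThing>(),
    root.GetRequiredService<SingletonThing>()));     // True

public class TransientThing { }
public class ScopedThing { }
public class SingletonThing { }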
Transient lifetime services (AddTransient) are created each time they're requested from the service container. This lifetime works best for lightweight, stateless services.
Scoped lifetime services (AddScoped) are created once per client request (connection).
Singleton lifetime services (AddSingleton) are created the first time they're requested (or when Startup.ConfigureServices is run and an instance is specified with the service registration).
So depending on what you need, you can choose the correct lifetime; you can read more about it here.
I am assuming your TicketRepository depends on your EF Core DbContext, and the EF Core DbContext is registered as a scoped service by default, so registering TicketRepository as a singleton service is out of consideration, because:
It's dangerous to resolve a scoped service from a singleton. It may cause the service to have incorrect state when processing subsequent requests.
For more details: Dependency injection in ASP.NET Core-Service lifetimes
Now you can choose between AddTransient<> and AddScoped<> where:
Transient lifetime services (AddTransient) are created each time they're requested from the service container. This lifetime works best for lightweight, stateless services.
Scoped lifetime services (AddScoped) are created once per client request (connection).
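Putting that together, a hedged sketch of the scoped option; TicketContext is a hypothetical DbContext:

// The DbContext is scoped by default, so the repository can safely be scoped
// too; both then live exactly as long as the request being served.
services.AddDbContext<TicketContext>();
services.AddScoped<ITicketRepository, TicketRepository>();

public class TicketRepository : ITicketRepository
{
    private readonly TicketContext _db;   // same instance for the whole request

    public TicketRepository(TicketContext db) => _db = db;
}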
I'm working on a web application that uses a couple of services to synchronize data with external resources. The application and the services share the same data layer and use Castle Windsor to implement IoC.
In the web application there is a PerWebRequest lifestyle, which limits the lifetime of an instance to the lifetime of a request. I want to use something similar in the services.
The services are triggered every once in a while to do the synchronization. I want the services and repositories in the data layer to be singletons within a single iteration of the service, similar to the PerWebRequest lifestyle in the web application.
What I've come up with is the concept of a Run. A run is a single invocation of the synchronization code within the service. That looks like this:
using (_runManager.Run())   // opens a new "run"
{
    // Instances resolved with the PerRun lifestyle inside this block
    // are shared until the run is disposed.
    var sync = _usageRepoFactory.CreateInstance();
    sync.SynchronizeUsage();
}
When disposed at the end of the using block, the implementation of IRun releases all instances with the PerRunLifeStyle that were resolved since its creation.
This code looks quite clean, but I wonder if there is a better way of doing this. I have tried using child containers but found these rather 'heavy' after profiling the solution.
Any feedback is welcome. If needed I can post the IRun implementation as well.
Update
Based on the comments I've cleaned up the code a bit. I've introduced a new service IRunManager which is basically a factory for IRun. I've also started using a factory to get rid of the ServiceLocator invocation.
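For comparison, later Castle Windsor versions ship a built-in scoped lifestyle that can serve the same purpose, so an IRun can be little more than a wrapper around BeginScope. A hedged sketch, assuming an existing IWindsorContainer named container; IUsageRepository and UsageRepository are hypothetical names:

using Castle.MicroKernel.Lifestyle;      // provides container.BeginScope()
using Castle.MicroKernel.Registration;

container.Register(
    Component.For<IUsageRepository>()
             .ImplementedBy<UsageRepository>()
             .LifestyleScoped());        // one instance per scope

// The equivalent of a "run": scoped components resolved inside the block
// share one instance and are released when the scope is disposed.
using (container.BeginScope())
{
    var sync = container.Resolve<IUsageRepository>();
    sync.SynchronizeUsage();
}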
Take a look at this contextual lifestyle
I am moving to a new team that has implemented a solution using SOA with WCF. The services are all very vertical, for example: a CustomerService, an AddressService, an AccountService, etc. To return fully populated objects, the services may call another service over a WCF endpoint.
There are a few very high level vertical areas, but underneath they can reuse a lot of the core service logic.
How valid is the following new architecture:
The web services are thin layers that handle remote calls; they are strictly for communication. The real functionality would be implemented in what we might call "business or domain services".
Domain Service responsibilities:
Reference data access / repository interfaces for working with the infrastructure
Call multiple repository methods to create fully populated objects
Process data against the complex business rules
Call other domain services (not having to call WCF)
This would give us domain services that can be tested outside of specific WCF and SQL Server implementations.
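A sketch of that split, with the WCF class as a pure pass-through. All names here are hypothetical, and the plumbing WCF needs for constructor injection (a custom instance provider) is omitted:

// Domain service: plain C#, no WCF dependency, unit-testable.
public class CustomerDomainService
{
    private readonly ICustomerRepository _customers;
    private readonly IAddressRepository _addresses;

    public CustomerDomainService(ICustomerRepository customers,
                                 IAddressRepository addresses)
    {
        _customers = customers;
        _addresses = addresses;
    }

    public Customer GetFullCustomer(int id)
    {
        // Compose the fully populated object from multiple repositories
        // instead of calling another service over a WCF endpoint.
        var customer = _customers.GetById(id);
        customer.Addresses = _addresses.GetForCustomer(id);
        return customer;
    }
}

// WCF layer: strictly communication, delegating straight to the domain.
public class CustomerService : ICustomerService
{
    private readonly CustomerDomainService _domain;

    public CustomerService(CustomerDomainService domain) => _domain = domain;

    public Customer GetCustomer(int id) => _domain.GetFullCustomer(id);
}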
The web services reusing the different business services seems to be the biggest gain and yet the biggest potential pitfall.
On one hand the logic can be reused for multiple services, eliminating web service calling web service calling web service.
On the other hand, if someone changes one of the assemblies multiple services need to be updated, potentially breaking multiple applications.
Have people tried this and had success? Are there better approaches?
At first blush, it sounds like the design you've walked into might be an SOA antipattern identified in this article: a group of 'chatty services,' a term the authors use to describe a situation in which ...
developers realize a service by implementing a number of Web services where each communicates a tiny piece of data. Another flavor of the same antipattern is when the implementation of a service ends up in a chatty dialog communicating tiny pieces of information rather than composing the data in a comprehensive document-like form.
The authors continue:
Degradation in performance and costly development are the major consequences of this antipattern. Additionally, consumers have to expend extra effort to aggregate these too finely grained services to realize any benefit, as well as have the knowledge of how to use these services together.
That can be a valid approach. The pitfall of updating multiple services depends on how closely related the services are. Do you have a use case where, if CustomerService is updated and AddressService is not, the clients can still work, or is it more common that all services are used by the same client and hence should be updated together? Remember, the service only changes if the WSDL changes, not the implementation. If you manage not to change the DataContracts and OperationContracts of the front-end services, there is nothing to worry about.
One approach you may investigate is using in-proc WCF services for your domain services. Alternatively, the front-end web services can use domain managers/engines in separately layered assemblies, which in turn use repositories. You can have coarse-grained web service implementations and fine-grained managers for domain entities that are mockable and unit-testable.