Injecting a singleton into a scoped service - C#

I'm working on adding push notifications to my ASP.NET Core 2.0 web app. I want a notification service with a badgeCount member that I would update when I send out notifications or when I mark something as read.
I wanted to make this a singleton, but it seems like I can't use dependency injection for singletons. I need access to my dbContext and maybe some other Identity and/or Entity Framework services later.
Would it make sense to make my notification service a scoped service instead of a singleton so that I can use DI? Then have a notificationBadge singleton that I inject into my scoped service so I can maintain it?
I'm doing this so that I don't have to calculate the badge count each time (it involves database queries).
EDIT: Actually, after writing this I realized that singletons are probably only instantiated once at server startup, not per user. So my initial approach wouldn't work even if I could use DI. I'd probably have to add a field on my user class that extends IdentityUser then, right? Or is there a way around this so that I don't have to update/save this to any db record?

Understanding DI
So, to cover your question: DI is certainly what you want for most things inside your application and website. It can do singletons as well as scoped and transient services (a new copy every time).
To really understand DI, and specifically the .NET Core implementation, I actually use the DI from .NET Core in a stand-alone .NET Standard open-source library, so you can see how it is done.
Video explaining DI and showing me build and use it outside of an ASP.NET Core setting: https://www.youtube.com/watch?v=PrCoBaQH_aI
Source code: https://github.com/angelsix/dna-framework
This should answer your question regarding how to access the DbContext if you do not understand it already from the video above: https://www.youtube.com/watch?v=JrmtZeJyLgg
Scoped/Transient vs Singleton
What you have to remember when deciding whether to use a singleton instance is that singletons are always in memory, so you should always consider making things scoped or transient to save memory, provided the creation of that service is not intensive or slow. It is basically a trade-off between RAM usage and speed, on general grounds.
If you then have specific types of service the decision becomes a different one. For example for DbContext objects you can think of them like a "live, in-memory database query/proxy" and so just like SQL queries you want to create them, execute them and be done with them. That is why they are made scoped, so that when a controller is created (per request) a new DbContext is created, injected, used by an action and then destroyed.
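For illustration, here is a minimal registration sketch. AppDbContext, INotificationService and the connection-string name are assumptions, not anything from the question; AddDbContext registers the context with a scoped lifetime by default:

```csharp
// Startup.ConfigureServices in ASP.NET Core 2.x (names are illustrative).
// AddDbContext registers the context as scoped, so every HTTP request
// gets its own short-lived instance.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Default")));

    // A scoped notification service can then take AppDbContext as a
    // constructor dependency without any lifetime mismatch.
    services.AddScoped<INotificationService, NotificationService>();
}
```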
I guess the simple answer is that it usually doesn't matter too much, and most applications won't have any major concerns or issues, but you do have to remember that singletons stay in memory for the lifetime of your application (or of the app domain, if you are in a rare multi-domain setup).
Notification Count
So the main question is really about badges. There are many things involved in this process and setup, so I will limit my answer to the presumption that you are talking about a client logged into a website whose UI you provide and where you want to show the badge count, not about, say, an Android/iOS app or a desktop application.
In terms of generating the badge count, it would be a combination of all unread messages or items in your database for that user. I would do this calculation on request, when the user visits a page that needs the information (so in an action, returned to the view via Razor or the ViewBag, for example), or via Ajax if you have a more responsive/Ajax-style site.
Again, I presume that is not an issue; I state it just for completeness.
So the issue you are asking about is basically that every time the page changes, or the badge count is re-requested, you are concerned about the time it takes to get that information from the database, correct?
Personally, I would not bother trying to "cache" this outside of the database: it is a fast-changing value, and you will likely take more of a hit trying to keep the cache in sync than you would just calling the database.
Instead, if you are concerned that working out the badge count will be an intensive query, I would do the following: every time a new/unread item is added to the database, or an item is marked as read, make a "SetUnreadCount" call that calculates that value and writes it to the database as a single integer. Your call to get the unread count is then a scalar database read and SUPER quick.
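A rough sketch of that idea, assuming hypothetical Notifications/Users tables and an UnreadCount column; none of these names come from the question:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class NotificationService
{
    private readonly AppDbContext _db; // hypothetical EF Core context

    public NotificationService(AppDbContext db) => _db = db;

    // Call after adding a notification or marking one as read:
    // recalculates the count once and stores it as a single integer.
    public async Task SetUnreadCountAsync(string userId)
    {
        var count = await _db.Notifications
            .CountAsync(n => n.UserId == userId && !n.IsRead);

        var user = await _db.Users.FindAsync(userId);
        user.UnreadCount = count; // denormalized badge column
        await _db.SaveChangesAsync();
    }

    // Reading the badge is now a cheap scalar query.
    public Task<int> GetUnreadCountAsync(string userId) =>
        _db.Users.Where(u => u.Id == userId)
                 .Select(u => u.UnreadCount)
                 .SingleAsync();
}
```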

Related

Is Transient an acceptable scope for injecting a service in Blazor Server that uses DbContext?

I'm working on my first Blazor Server project and I am slowly fixing a lot of the initial design errors I made when I started out. I've been using C# for a while, but I'm new to web development, ASP.NET, Blazor, and web architecture standards, hence why I made so many mistakes early on, before I had a strong understanding of how best to implement my project in a way that promotes clean code and long-term maintainability.
I've recently restructured my solution so that it follows the "Clean Architecture" outlined in this Microsoft documentation. I now have the following projects, which aim to mirror those described in the document:
CoreWebApp: A Blazor project, pages and components live here.
Core: A Class Library project, the domain model, interfaces, business logic, etc, live here.
Infrastructure: Anything to do with having EF Core access the underlying database lives here, ie ApplicationDbContext, any implementations of Repositories, etc.
I am at a point where I want to move existing implementations of the repository pattern into the Infrastructure project. This will allow me to decouple the Core project from the Infrastructure project by utilising the Dependency Injection system so that any business logic that uses the repositories depends only on the interfaces to those repositories (as defined in Core) and not the actual implementations themselves (to be defined in Infrastructure).
Both the Microsoft documentation linked above, and this video by CodeWrinkles on YouTube make the following two suggestions on how to correctly use DbContext in a Blazor Server project (I'll talk specifically about using DbContext in the context of a repository):
Scope usage of a repository to each individual database request. Basically every time you need the repository you instantiate a new instance, do what needs to be done, and as soon as the use of the repo goes out of scope it is automatically disposed. This is the shortest lived scope for the underlying DbContext and helps to prevent concurrency issues, but also forgoes the benefits of change tracking.
Scope the usage of a repository to the lifecycle of a component. Basically you create an instance of a repository in OnInitializedAsync, and destroy it in the Dispose() method of the component (a rough sketch of this follows below). This allows usage of EF Core's change tracking.
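As a sketch of option 2, with all type names assumed (this is my reading of the suggestion, not code taken from the documentation or the video):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

// Sketch: the component owns the repository for its own lifetime and
// disposes it (and the DbContext it wraps) when the component goes away.
public class OrderPage : ComponentBase, IDisposable
{
    private OrderRepository _repo; // hypothetical repository type

    protected override async Task OnInitializedAsync()
    {
        _repo = new OrderRepository(/* new DbContext created here */);
        await _repo.LoadOrdersAsync();
    }

    public void Dispose() => _repo?.Dispose();
}
```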
The problem with these two approaches is that they don't allow for use of the DI system, in both cases the repository must be new'd and thus the coupling between Core and Infrastructure remains unbroken.
The one thing that I can't seem to understand is why case 2 can't be achieved by declaring the repository as a Transient service in Program.cs. (I suppose case 1 could also be achieved, you'd just hide spinning up a new DbContext on every access to the repository within the methods it exposes). In both the Microsoft documentation and the CodeWrinkles video they seem to lean pretty heavily on this wording for why the Transient scope isn't well aligned with DbContext:
Transient results in a new instance per request; but as components can be long-lived, this results in a longer-lived context than may be intended.
It seems counterintuitive to make this statement, and then provide a solution to the DbContext lifetime problem that will enable a lifetime that will align with the stated problem.
Scoping a repository to the lifetime of a component seems, to me, to be exactly the same as injecting a Transient instance of a repository as a service. When the component is created a new instance of the service is created, when the user navigates away from the page this instance is destroyed. If the user comes back to the page another instance is created and it will be different to the previous instance due to the nature of Transient services.
What I'd like to know is if there is any reason why I shouldn't create my repositories as Transient services? Is there some deeper aspect to the problem that I've missed? Or is the information that has been provided trying to lead me into not being able to take advantage of the DI system for no apparent reason? Any discussion on this is greatly appreciated!
It's a complex issue, with no silver-bullet solution. Basically, you can't have your cake and eat it.
You either use EF as an ORM (Object Relational Mapper), or you let EF manage your complex objects and in the process surrender your "Clean Design" architecture.
In a Clean Design solution, you map data classes to tables or views. Each transaction uses a "unit of work" DbContext obtained from a DbContextFactory. You only enable tracking on Create/Update/Delete transactions.
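A minimal sketch of that unit-of-work style, assuming EF Core's IDbContextFactory<TContext>; the broker and entity names are illustrative:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Sketch: every call gets its own short-lived context from the factory.
// Queries run untracked; tracking is only used for the write.
public class OrderDataBroker
{
    private readonly IDbContextFactory<AppDbContext> _factory;

    public OrderDataBroker(IDbContextFactory<AppDbContext> factory)
        => _factory = factory;

    public async Task<Order> GetOrderAsync(int id)
    {
        using var db = _factory.CreateDbContext();
        return await db.Orders.AsNoTracking()
            .SingleOrDefaultAsync(o => o.Id == id);
    }

    public async Task<bool> UpdateOrderAsync(Order order)
    {
        using var db = _factory.CreateDbContext();
        db.Orders.Update(order); // tracking only for this unit of work
        return await db.SaveChangesAsync() == 1;
    }
}
```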
An order is a good example.
A Clean Design solution has data classes for the order and the order items. A composite order object in the Core domain is built by making two queries into the data pipeline: one item query to get the order, and one list query to get the order items associated with that order.
EF lets you build a data class that includes both the order data and a list of order items. You can load that object in a DbContext, "process" the order by making changes, and then call "SaveAsync" to persist it back to the database. EF does all the complex work of building the queries and tracking the changes. It also holds the DbContext open for a long period.
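For contrast, a sketch of that EF-managed shape (names assumed):

```csharp
using System.Collections.Generic;

// Sketch: one entity graph, loaded and change-tracked by a single
// long-lived context.
public class Order
{
    public int Id { get; set; }
    public List<OrderItem> Items { get; set; } = new List<OrderItem>();
}

// var order = await db.Orders.Include(o => o.Items)
//                            .SingleAsync(o => o.Id == orderId);
// ...mutate order and its items; EF tracks every change...
// await db.SaveChangesAsync();
```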
Using EF to manage your complex objects closely couples your application domain with your infrastructure domain. Your application is welded to EF and the data stores it supports. It's why you will see some authors asserting that implementing the Repository Pattern with EF is an anti-pattern.
Taking the Order example above, you normally use a Scoped DI View Service to hold and manage the Order data. Your Order Form (Component) injects the service, calls an async get method to populate the service with the current data and displays it. You will almost certainly only ever have one Order open in an SPA. The data lives in the view service not the UI front end.
You can use transient services, but you must ensure they:
Don't use DBContexts
Don't implement IDisposable
Why? The DI container retains a reference to any Transient service it creates that implements IDisposable - it needs to make sure the service is disposed. However, it only disposes that service when the container itself is disposed. You build up redundant instances until the SPA closes down.
There are some situations where the Scoped service is too broad, but the Transient option isn't applicable such as a service that implements IDisposable. Using OwningComponentBase can help you solve that problem, but it can introduce a new set of problems.
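For illustration, a minimal OwningComponentBase sketch (the repository interface and its members are assumed):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

// Sketch: OwningComponentBase<T> creates a DI scope owned by the
// component, so Service (and anything IDisposable it owns) is disposed
// with the component rather than living for the whole circuit.
public class OrderList : OwningComponentBase<IOrderRepository>
{
    private List<Order> _orders;

    protected override async Task OnInitializedAsync()
    {
        _orders = await Service.GetOrdersAsync(); // scoped to this component
    }
}
```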
If you want to see a working Clean Design Repository pattern example there's an article here - https://www.codeproject.com/Articles/5350000/A-Different-Repository-Pattern-Implementation - with a repo.

Mediatr - Where is the right place to invalidate/update cache

This question stems from this other question I asked about too many interfaces, CQRS and the MediatR library (request/response):
Mediatr: reducing number of DI'ed objects
I have created a bunch of commands and queries, and I have a bunch of behaviors, one of which is a cache behavior that, for every query, checks the cache for the value before the query is actually executed against the db. So far this is working great, but the dilemma comes in when I have an UpdateSomethingCommand: once I update the underlying object in the db, I would like to refresh the cache with what was successfully saved to the db.
My question is specifically when to actually update the cache:
In the UpdateSomethingCommandHandler (this might be breaking the SOLID principles)
Call another command in UpdateSomethingCommandHandler that is specifically designed to update caches (not sure this is a good design principle)
Introduce another behavior that is specifically designed for updating caches (not sure how to go about this yet)
Is there a better solution?
We had a similar need on a project that uses MediatR and ended up incorporating caching into the mediator pipeline, including cache invalidation as you describe.
The basic premise is that we have two different behaviors inserted into the pipeline, one for caching a response from a request, and one for invalidating a cached request response from a different request.
There is a little bit of interplay between the two behaviors in the fact that they need to exchange a cache key in order to invalidate the correct request.
I've recently pulled some of this work into a stand-alone library that in theory can be dropped in as-is to any project using MediatR. In your case, you may just want to look at the techniques we've used here and recreate them as needed.
Rather than repeat everything here and now, I'll point you at the project page where there is some documentation under the Getting Started link on the homepage:
https://github.com/Imprise/Imprise.MediatR.Extensions.Caching
In my opinion, the cache invalidation makes the whole process extremely simple and straightforward, but there are cases where we needed finer control over when the invalidation occurs. In those cases the other approach we have taken is to inject an ICache<TRequest, TResponse> cache into INotificationHandlers and then call _cache.Remove(key); manually as needed. Then, from any request you know should invalidate the cache, just raise a notification that is handled by the INotificationHandler, e.g. _mediator.Publish(new SomethingUpdated());
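A sketch of that notification-driven invalidation. ICache<,> here is a stand-in for the library's cache abstraction, and every name below is an assumption:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Stand-in for the library's cache abstraction (assumed shape).
public interface ICache<TRequest, TResponse>
{
    void Remove(string key);
}

public class GetSomethingQuery : IRequest<SomethingDto> { }
public class SomethingDto { }
public class SomethingUpdated : INotification { }

// Sketch: the handler knows which cached request/response pair to evict.
public class SomethingUpdatedCacheInvalidator : INotificationHandler<SomethingUpdated>
{
    private readonly ICache<GetSomethingQuery, SomethingDto> _cache;

    public SomethingUpdatedCacheInvalidator(ICache<GetSomethingQuery, SomethingDto> cache)
        => _cache = cache;

    public Task Handle(SomethingUpdated notification, CancellationToken cancellationToken)
    {
        _cache.Remove("something-cache-key"); // key scheme is illustrative
        return Task.CompletedTask;
    }
}

// From the update handler: await _mediator.Publish(new SomethingUpdated());
```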
My suggestion is to use a cache behavior that acts on requests implementing some sort of ICacheableRequest marker interface, and to invalidate the cache as a step in the corresponding Update/Delete command handlers (like you mentioned in point 1).
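A minimal sketch of such a behavior; the marker interface and key scheme are assumptions, and the Handle signature shown matches recent MediatR versions (older versions order the parameters differently):

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Caching.Memory;

public interface ICacheableRequest
{
    string CacheKey { get; }
}

// Sketch: short-circuit the pipeline when a cached response exists.
public class CachingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IMemoryCache _cache;

    public CachingBehavior(IMemoryCache cache) => _cache = cache;

    public async Task<TResponse> Handle(TRequest request,
        RequestHandlerDelegate<TResponse> next, CancellationToken cancellationToken)
    {
        if (!(request is ICacheableRequest cacheable))
            return await next();

        if (_cache.TryGetValue(cacheable.CacheKey, out TResponse cached))
            return cached;

        var response = await next();
        _cache.Set(cacheable.CacheKey, response);
        return response;
    }
}
```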
If you choose to create an invalidator behavior there are a few problems.
First, it's unclear that the command is invalidating the cache. Whenever I need to check what's going on when an entity is updated/deleted, I just follow the command handler; a separate cache invalidator creates side effects that are harder to follow.
Second, even if putting the invalidation code in a separate file better follows the SRP, you have to choose where to put the cache invalidator class. Does it go next to the cached query, or next to the command handler that triggers the invalidation?
Third, in many scenarios you won't have enough information about the key used to cache the request in the associated command; you'll only get that, and any other extra invalidation conditions, in the CommandHandler.

Static Class Persistence in MVC Applications

I have over the last couple of months migrated my Webforms knowledge to MVC knowledge, and I have to say that after originally being an MVC skeptic I am loving MVC and the way it works.
The only thing I am still a bit unclear about is how static classes are persisted in MVC. In Webforms, static class values were shared amongst the different clients accessing the application, which could lead to one user overwriting another user's values should you decide to use static classes to store user-related variables.
My first question is whether or not this is still the case in MVC?
Then my second question is about where to keep the DbContext instance in my MVC application. Currently I have it as a public variable in a static DAL class. The single context is then shared amongst all the clients.
The more I read about this the more I am starting to believe that this is the incorrect approach, but recreating the context inside each controller seems repetitive.
Is there a downside to having the Context in a static class?
The context is supposed to be short-lived, so it might seem repetitive, but yes, create a context inside each controller. Use a single context per request to be precise.
Persisting a single DbContext instance across the lifetime of an application doesn't sound like a good idea to me. I usually use one DbContext instance per Request.
Following are two points that you might want to consider while deciding the appropriate lifetime for the DbContext in your application:
Re-using the same context for multiple requests lets you benefit from the cached entities, and you could save many hits to the database; but you might then run into performance issues, because you could end up having all your database entities in memory at some point.
Re-instantiating the context too often, on the other hand, is also not recommended, because it is an expensive operation.
You have to find a balance somewhere between these 2 approaches and for me, instantiating DbContext Per Request works best in most scenarios.
The DbContext is not thread safe; EF allows only one concurrent operation on the same context. Therefore it is not a good idea to share it across requests.
In most cases a context per request is the best solution. (Hint: there are some IoC frameworks, like Autofac, which can create instances per request.)
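For example, with Autofac's MVC integration (a registration sketch; MyDbContext and MvcApplication are placeholders):

```csharp
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

// Sketch (Global.asax Application_Start): one DbContext per HTTP request.
var builder = new ContainerBuilder();
builder.RegisterControllers(typeof(MvcApplication).Assembly);
builder.RegisterType<MyDbContext>().InstancePerRequest();
DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
```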

Implementing caching of services in Domain project being used by a Web API

My question is: how do I implement caching in my domain project, which is working like a normal stack with the repository pattern.
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project for instance have:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should be caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods would call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? Because that seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could make some cache layer, but I guess that would mean my Web API wouldn't call my OrderRepository anymore, but a CacheOrderRepository of some kind?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point you have at least two likely reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
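A compact sketch of option 1, a caching decorator over the existing interface; the interface members and the cache policy are assumptions:

```csharp
using System;
using System.Runtime.Caching;

// Sketch: adds caching in front of the real repository without modifying
// OrderRepository itself; register this as IOrderRepository in the
// container so callers are unaware of the caching.
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;
    private readonly ObjectCache _cache = MemoryCache.Default;

    public CachedOrderRepository(IOrderRepository inner) => _inner = inner;

    public Order Get(int id)
    {
        string key = "Order_" + id;
        if (_cache.Get(key) is Order cached)
            return cached;

        var order = _inner.Get(id);
        _cache.Set(key, order, DateTimeOffset.Now.AddMinutes(5));
        return order;
    }

    public void Update(Order order)
    {
        _inner.Update(order);
        _cache.Remove("Order_" + order.Id); // invalidate on write
    }
}
```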
There are probably a million different ways to do this, but it seems to me (given that the intent of caching is to improve performance) you could implement the cache much like a repository pattern: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, since any data-structure-related changes may need to be implemented in multiple places. Concurrency issues enter the fray. Just some thoughts...
SQLCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx . This will get you caching that is invalidated when other systems apply updates as well.
There are multiple levels of caching depending on the situation. However, if you are looking for generic, centralized caching with a low number of changes, I think you are looking for EF second-level caching; for more details check the following: http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also use caching at the Web API level.
Kindly also consider the network traffic between the MVC site and the Web API if they are hosted in two different data centers.
And for a portal with huge read access you might consider Redis: http://Redis.io
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcached. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer, because MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly, because it saves you from having to assemble data out of a cache that merely mirrors your data layer. For example, requesting Order123 would require only a single cache read, rather than several reads for any FK data. Your caching layer would of course need to contain the logic to invalidate the cache on UPDATEs you perform. A recommended way would be to retrieve the cached order object and modify its properties directly, then persist it to the DB asynchronously.
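A tiny sketch of that, using MemoryCache from System.Runtime.Caching; the wrapper and key names are mine:

```csharp
using System;
using System.Runtime.Caching;

// Sketch: works in MVC, Web API, services, or console apps alike,
// with no dependency on System.Web.
public static class DtoCache
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public static void Put<T>(string key, T value, TimeSpan ttl) =>
        Cache.Set(key, value, DateTimeOffset.Now.Add(ttl));

    public static T Get<T>(string key) where T : class =>
        Cache.Get(key) as T;
}

// Usage: DtoCache.Put("Order_123", orderDto, TimeSpan.FromMinutes(10));
//        var dto = DtoCache.Get<OrderDto>("Order_123");
```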

Need a solution for middleware caching in C#

I have an ASP.NET application that uses some common business objects that we have created. These business objects are also used in a few Windows services or console applications.
The problem I am running into is that if I have a class "foo" and a class "bar", and each has a function loadClient(), then calling foo.loadClient() and bar.loadClient() will each hit the database. I figure implementing some sort of cache would reduce unnecessary round trips to the DB.
Here's the catch. I want the cache to be specific to each HTTP request that comes in on the ASP.net App. That is, a new request gets a brand new cache. The cache can exist for the lifetime of the other console applications since 90% of them are utilities.
I know I can use System.Web.Cache but I don't want my middleware tied to the System.Web libraries.
Hope that explains it. Can anyone point me in the right direction?
Thanks!
Are you reusing objects during the lifetime of a request? If not, then the model you have suggests that each postback will also create a new set of objects, in effect obviating the need for a cache. Typically a cache has value when objects are shared across requests.
As far as using a non web specific caching solution I've found the Microsoft Caching Application Block very robust and easy to use.
I think you can take a look at the Velocity project.
http://msdn.microsoft.com/en-us/data/cc655792.aspx - there is a brief article.
If you are looking for interprocess caching, then that's difficult.
But if you don't want your middleware tied to System.Web, you can write an interface library that serves as a bridge between your middleware and System.Web.
If in future you want to tie it to another cache manager, you can just rewrite your bridge interface library, keeping your middleware absolutely independent of the actual cache manager.
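A sketch of that bridge; the interface and its shape are made up for illustration:

```csharp
// Sketch: the middleware depends only on this interface; each host
// supplies its own implementation.
public interface ICacheProvider
{
    T Get<T>(string key) where T : class;
    void Set<T>(string key, T value) where T : class;
}

// A web host could implement this over System.Web.Caching (or over
// HttpContext.Items for a strictly per-request cache), while console
// apps and services could wrap a simple in-memory dictionary.
```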
The System.Runtime.Caching.MemoryCache is recommended by Microsoft in lieu of System.Web.Caching. It can be used in the context of the MS Caching Application Block suggested by Abhijeet Patel.
See:
Is System.Web.Caching or System.Runtime.Caching preferable for a .NET 4 web application
http://msdn.microsoft.com/en-us/library/dd997357.aspx
