This question stems from another question I asked about too many interfaces, CQRS and the MediatR library (request/response):
Mediatr: reducing number of DI'ed objects
I have created a bunch of commands and queries, and I have a bunch of behaviors, one of them being a cache behavior that, for every query, checks the cache for the value before the query is actually executed against the db. So far this is working great, but the dilemma comes in when I have an UpdateSomethingCommand: once I update the underlying object in the db, I would like to refresh the cache with what was successfully saved to the db.
My question is specifically when to actually update the cache:
In the UpdateSomethingCommandHandler (this might be breaking the SOLID principles)
Call another command in the UpdateSomethingCommandHandler that is specifically designed to update caches (not sure this is a good design principle)
Introduce another behavior that is specifically designed for updating caches (not sure how to go about this yet)
Is there a better solution?
We had a similar need on a project that uses MediatR and ended up incorporating caching into the mediator pipeline, including cache invalidation as you describe.
The basic premise is that we have two different behaviors inserted into the pipeline, one for caching a response from a request, and one for invalidating a cached request response from a different request.
There is a little bit of interplay between the two behaviors in that they need to exchange a cache key in order to invalidate the correct request.
I've recently pulled some of this work into a stand-alone library that in theory can be dropped in as-is to any project using MediatR. In your case, you may just want to look at the techniques we've used here and recreate them as needed.
Rather than repeat everything here and now, I'll point you at the project page where there is some documentation under the Getting Started link on the homepage:
https://github.com/Imprise/Imprise.MediatR.Extensions.Caching
In my opinion, the cache invalidation makes the whole process extremely simple and straightforward, but there are cases where we needed finer control over when the invalidation occurs. In those cases the other approach we have taken is to inject an ICache<TRequest, TResponse> cache into INotificationHandlers and then call _cache.Remove(key); manually as needed. Then, from any request handler you know should invalidate, just raise a notification that is handled by the INotificationHandler, e.g. _mediator.Publish(SomethingUpdated);
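As a rough sketch of that manual approach (the SomethingUpdated, GetSomethingQuery and Something types are placeholders I made up, and the exact ICache<TRequest, TResponse>.Remove signature is assumed from the description above rather than taken from the library), the notification handler could look something like this:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Placeholder notification raised after a successful update.
public class SomethingUpdated : INotification
{
    public int Id { get; set; }
}

public class SomethingUpdatedCacheInvalidator : INotificationHandler<SomethingUpdated>
{
    // ICache<TRequest, TResponse> is the abstraction from the linked library.
    private readonly ICache<GetSomethingQuery, Something> _cache;

    public SomethingUpdatedCacheInvalidator(ICache<GetSomethingQuery, Something> cache)
    {
        _cache = cache;
    }

    public Task Handle(SomethingUpdated notification, CancellationToken cancellationToken)
    {
        // Drop the cached response so the next GetSomethingQuery goes back to the database.
        _cache.Remove(notification.Id.ToString());
        return Task.CompletedTask;
    }
}

// From the command handler that performed the update, after a successful save:
// await _mediator.Publish(new SomethingUpdated { Id = command.Id });
```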
My suggestion is to use a cache behavior that acts on requests implementing some sort of ICacheableRequest marker interface, and to invalidate the cache as a step in the corresponding Update/Delete command handlers (like you mentioned in point 1); there is a rough sketch after the points below.
If you choose to create an invalidator behavior there are a few problems.
First, it's unclear that the command is invalidating the cache. Whenever I need to check what's going on when an entity is updated/deleted, I just follow the command handler; a separate cache invalidator introduces side effects that are harder to follow.
Second, even if putting the invalidation code in a separate file better follows the SRP, you will have to choose where to put the cache invalidator class. Does it go next to the cached query or next to the command handler that invalidates the cache?
Third, in many scenarios the associated command won't have enough information about the key used to cache the request; you'll only get that, and any other extra invalidation conditions, in the CommandHandler.
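To illustrate the suggested approach, here is a hypothetical sketch. All of the type names and the cache key format are made up; IMemoryCache is ASP.NET Core's Microsoft.Extensions.Caching.Memory abstraction, and the IPipelineBehavior signature shown is the one used by MediatR versions prior to 12:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Caching.Memory;

// Queries opt in to caching by exposing a cache key.
public interface ICacheableRequest
{
    string CacheKey { get; }
}

// Pipeline behavior that serves cached responses for any ICacheableRequest.
public class CachingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : ICacheableRequest
{
    private readonly IMemoryCache _cache;

    public CachingBehavior(IMemoryCache cache) => _cache = cache;

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        // Serve from cache when possible.
        if (_cache.TryGetValue(request.CacheKey, out TResponse cached))
            return cached;

        // Otherwise run the real handler and cache its response.
        var response = await next();
        _cache.Set(request.CacheKey, response);
        return response;
    }
}

// In UpdateSomethingCommandHandler, invalidate as a step after the save succeeds:
// _cache.Remove("Something_" + command.Id);
```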
I'm working on adding push notifications to my ASP.NET Core 2.0.0 web app. I want to have a notification service that would have a badgeCount member which I would update when I send out notifications or when I mark something as read.
I wanted to make this a singleton, but it seems like I can't use dependency injection for singletons. I need access to my dbContext and maybe some other Identity or Entity services later.
Would it make sense for me to make my notification service a scoped service instead of a singleton so that I can use DI? Then have a notificationBadge singleton that I would inject into my scoped service so I can maintain it?
I'm doing this so that I don't have to calculate the badge count each time (it involves running queries).
EDIT: Actually, after writing this I realized that singletons are probably only instantiated once on server startup and not per user. So my initial approach wouldn't work even if I could use DI. I'd probably have to add a field on my user class that extends IdentityUser then, right? Or is there a way around this so that I don't have to update/save this to any db record?
Understanding DI
So to try and cover your question: DI is certainly what you want for most things inside your application and website. It can do singletons, as well as scoped and transient services (a new copy every time).
To really understand DI, and specifically the .NET Core implementation, I actually make use of the DI from .NET Core in a stand-alone .NET Standard open source library, so you can see how it is done.
Video explaining DI and showing how to create and use it outside of ASP.NET Core: https://www.youtube.com/watch?v=PrCoBaQH_aI
Source code: https://github.com/angelsix/dna-framework
This should answer your question regarding how to access the DbContext if you do not understand it already from the video above: https://www.youtube.com/watch?v=JrmtZeJyLgg
Scoped/Transient vs Singleton
What you have to remember when deciding whether or not to use a singleton is that singletons are always in memory, so you should always consider making things scoped or transient to save memory, provided the creation of that service is not intensive or slow. It is basically a trade-off between RAM usage and speed, on general grounds.
If you then have specific types of service, the decision becomes a different one. For example, you can think of DbContext objects as a "live, in-memory database query/proxy", so just like SQL queries you want to create them, execute them and be done with them. That is why they are made scoped: when a controller is created (per request) a new DbContext is created, injected, used by an action and then destroyed.
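As a minimal sketch of the three lifetimes in ASP.NET Core's built-in container (the service and interface names here are placeholders, not anything from your app):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Singleton: one instance for the whole application lifetime, held in memory.
    services.AddSingleton<IBadgeCountCache, BadgeCountCache>();

    // Scoped: one instance per HTTP request. AddDbContext registers the DbContext as scoped.
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddScoped<INotificationService, NotificationService>();

    // Transient: a new instance every time the service is resolved.
    services.AddTransient<IEmailSender, EmailSender>();
}
```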
I guess the simple answer is that it doesn't usually matter too much, and most applications won't have any major issues, but you do have to remember that singletons stay in memory for the lifetime of your application (or the app domain, if you are in a rare multi-domain setup).
Notification Count
So the main question is really about badges. There are many things involved in this process and setup, so I will limit my answer to the presumption that you are talking about a client logged into a website whose UI you provide and in which you want to show the badge count, and not about, for example, an Android/iOS app or desktop application.
In terms of generating the badge count, it would be a combination of all unread messages or items in your database for the user. I would do this calculation on request, when the user visits a page that needs that information (so in an action, returned to the view via Razor or ViewBag, for example), or via Ajax if you are using a more responsive/Ajax-style site.
Again, I presume that is not an issue; I state it just for completeness.
So the issue you are asking about is basically that every time the page changes or the badge count is re-requested, you are concerned about the time it takes to get that information from the database, correct?
Personally, I would not bother trying to "cache" this outside of the database, as it is a fast-changing value and you will likely spend more effort keeping the cache in sync than you save over just calling the database.
Instead, if you are concerned that the query to work out the badge count will be intensive, then every time an unread/new item is added to the database, or an item is marked as read, make a "SetUnreadCount" call that recalculates the value and writes it as a single integer to the database. Your call to get the unread count is then a scalar query and super quick.
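Here's a rough sketch of what that could look like, assuming EF Core; the _db context, the Notifications table and the UnreadBadgeCount column are all illustrative names, not anything from your project:

```csharp
// Recalculate once, whenever a notification is added or marked as read...
public async Task SetUnreadCountAsync(string userId)
{
    var unread = await _db.Notifications
        .CountAsync(n => n.UserId == userId && !n.IsRead);

    var user = await _db.Users.FindAsync(userId);
    user.UnreadBadgeCount = unread;
    await _db.SaveChangesAsync();
}

// ...so reading the badge on every page/Ajax request is a single scalar query:
public Task<int> GetUnreadCountAsync(string userId) =>
    _db.Users.Where(u => u.Id == userId)
             .Select(u => u.UnreadBadgeCount)
             .SingleAsync();
```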
I am new to event sourcing, so this might be a dreadfully incompetent question; please bear with me:
We have an event-sourced CQRS system with Cassandra for persistence. We have a sequence/version number to handle conflicting modifications on an aggregate.
We need a read model for an administrative interface that has to display quite a few details from several bounded contexts and make them available for editing via a REST API.
What is the best practice for handling concurrency in this read model? Thoughts are the following:
1)
It would be nice to have a clean read model including all relevant data that we can get with one request. This raises the problem: when multiple fields can be edited independently, how do we actually create this read model while guaranteeing that we handle the sequence for all fields? We could add a sequence number per field and handle that somehow, but that would totally clutter our read model.
2)
We could have a read model per field, making everything easy in theory but creating a lot of requests, which would be generally stupid, but easy to manage.
3)
We could create a sequence table separate from the read model and keep track that way, having both a general sequence and a per-field sequence, and then use that to write a new read model when necessary.
Any thoughts?
We use strict event ordering, with each event carrying an aggregate version as metadata. We also have one single-threaded projection. This guarantees that there will be no overrides.
The question arises about the scalability of such a system, and so far we have not hit the limits with this approach, but the day might come. Daniel's suggestion to keep the aggregate version together with the read model makes sense, but again, it will fail when scaling, assuming you will need competing consumers that try to update your model simultaneously.
As we know, ordering is an issue with competing consumers, so I don't really have a ready-made answer for this. If the chance of a concurrent update to the same field is real (not just hypothetical), I would also consider field-level read models and UI composition.
The assumption in a CQRS/ES-based system is that you (the user) don't edit read models directly. Read models are the result of all relevant events in the event stream, i.e. a projection. Therefore, read models change as a result of events, which were in turn caused by commands.
In order to handle concurrency in that scenario, I favour using a version number stored against read models. When you form a command it should include the version of the read model table that was used to supply the context for the command.
You can then check that the events being raised have the right number. If not, you can throw a concurrency exception, or you can create a concurrency resolver service, which would check whether the events that happened since your command was issued actually conflict or not.
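As a purely illustrative sketch (every type and member name below is invented to show the shape of the check, not taken from any particular framework), the version check in a command handler might look like this:

```csharp
public class UpdateCustomerAddressCommand
{
    public Guid CustomerId { get; set; }
    public string NewAddress { get; set; }
    public int ExpectedVersion { get; set; }   // the read model version the user was shown
}

public async Task Handle(UpdateCustomerAddressCommand command)
{
    var customer = await _repository.Load(command.CustomerId);

    if (customer.Version != command.ExpectedVersion)
    {
        // Either fail fast with a concurrency exception...
        throw new ConcurrencyException(
            $"Expected version {command.ExpectedVersion} but the aggregate is at {customer.Version}.");
        // ...or hand off to a resolver service that inspects the events raised since
        // ExpectedVersion and decides whether they really conflict with this change.
    }

    customer.ChangeAddress(command.NewAddress);
    await _repository.Save(customer, command.ExpectedVersion);
}
```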
If you want to know more about concurrency management in an event sourced system you can check out this post:
Handling Concurrency Conflicts in a CQRS and Event Sourced System
Hope that helps.
My project group and I are to develop a generic workflow system, and we have decided to implement a single Node (a task in the workflow) as a C# Visual Studio Web API project (using the ASP.NET MVC structure).
In the process of implementing a Node's logic, we've come across the problem of how to store data in our Node. Our Node specifically consists of a few lists of URIs leading to other Nodes, as well as some status/state boolean values. These values are currently stored in a regular class, but with all the values as internal static fields.
We're wondering if there's a better way to do this? In particular, as we'd like to later apply a locking mechanism, it would be preferable to have an object that we can interact with. However, we are unsure how we can access this "common" object in various controllers - or rather in a single controller, which handles the HTTP requests that we receive for our Node.
Is there a way to make the controller class use a modified constructor which takes this object? And if so, the next step: where can we ensure that the controller receives the object in this constructor? There appears to be no code which instantiates Web API controllers.
Accessing static fields in some class seems to do the trick, data-wise, but it forces us to implement our own locking mechanism using a boolean value or similar, instead of simply being able to lock the object when it is altered.
If I am not making any sense, do tell. Any answers that might help are welcome! Thanks!
Based on your comments, I would say the persistence mechanism you are after is probably one of the server-side caching options (System.Runtime.Caching or System.Web.Caching).
System.Runtime.Caching is the newer of the two technologies and provides an abstract ObjectCache type that could potentially be extended to be file-based. Alternatively, there is a built-in MemoryCache type.
Unlike static fields, caches will persist state for all users based on a timeout (either fixed or rolling), and can potentially have cache dependencies that will cause the cache to be immediately invalidated. The general idea is to reload the data from a store (file or database) after the cache expires. The cache protects the store from being hit by every request - the store is only hit after the timeout is reached or the cache is otherwise invalidated.
In addition, you can specify that items are "Not Removable", which prevents them from being evicted under memory pressure (note that an in-memory cache still won't survive an application pool restart; the reload-from-store logic covers that case).
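Here is a small sketch using System.Runtime.Caching.MemoryCache; the NodeState type, the cache key and the loader are placeholders for your own state object and file/database access:

```csharp
using System;
using System.Runtime.Caching;

public static class NodeStateCache
{
    public static NodeState GetNodeState()
    {
        var cache = MemoryCache.Default;

        var state = cache.Get("NodeState") as NodeState;
        if (state == null)
        {
            state = LoadNodeStateFromStore();                 // only hit the store on a cache miss
            cache.Set("NodeState", state, new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(10), // rolling timeout
                Priority = CacheItemPriority.NotRemovable     // never evicted under memory pressure
            });
        }
        return state;
    }

    private static NodeState LoadNodeStateFromStore()
    {
        // Placeholder: read the Node's URI lists and status flags from a file or database.
        return new NodeState();
    }
}

public class NodeState
{
    // Lists of URIs to other Nodes, status/state flags, etc.
}
```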
More info: http://bartwullems.blogspot.com/2011/02/caching-in-net-4.html
My question is: how do I implement caching in my domain project, which works like a normal stack with the repository pattern?
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project for instance have:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should the caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods would have to call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? Because that seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could make some cache layer, but I guess that would mean my WebAPI wouldn't call my OrderRepository anymore, but my CacheOrderRepository-something?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point you have at least two likely reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
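As a sketch of option 1 (the ICacheHandler interface and the exact IOrderRepository members are assumptions on my part, not taken from your project), a caching decorator could look like the following; Windsor can register it as a decorator around the real OrderRepository:

```csharp
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;   // the real repository
    private readonly ICacheHandler _cache;      // hypothetical generic cache abstraction

    public CachedOrderRepository(IOrderRepository inner, ICacheHandler cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public Order Get(int id)
    {
        var key = "OrderRepository_" + id;

        var cached = _cache.Get<Order>(key);
        if (cached != null)
            return cached;

        var order = _inner.Get(id);             // fall through to the real repository
        _cache.Set(key, order);
        return order;
    }

    public void Update(Order order)
    {
        _inner.Update(order);
        _cache.Remove("OrderRepository_" + order.Id);   // invalidate on write
    }
}
```

The repository class itself stays free of caching concerns, and the caching logic lives in one place behind the same interface.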
There are probably a million different ways to do this, but it seems to me (given that the intent of caching is to improve performance) that you could implement the cache similarly to a repository pattern: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, since any data-structure-related change may need to be implemented in multiple places, and concurrency issues enter the fray. Just some thoughts...
SqlCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx . This will give you caching that is invalidated when other systems apply updates as well.
There are multiple levels of caching depending on the situation. However, if you are looking for generic, centralized caching with a low number of changes, I think you will be looking for EF second-level caching; for more details check the following: http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also use caching at the Web API level.
Also consider the network traffic between MVC and the Web API if they are hosted in two different data centers.
For a portal with heavy read access you might consider Redis: http://Redis.io
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcache. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer because the MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly. It saves you from having to wait for the data to be re-assembled from your data layer on every request. For example, requesting Order123 would only require a single cache read rather than several reads for any FK data. Your caching layer would of course need to contain the logic to invalidate the cache on the UPDATEs you perform. A recommended way would be to retrieve the cached order object and modify its properties directly, then persist to the DB asynchronously.
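A rough sketch of that read/update pattern with System.Runtime.Caching.MemoryCache follows; OrderDto, the key format and the repository calls are placeholders, not your actual types:

```csharp
using System;
using System.Runtime.Caching;

// Read path: one cache lookup instead of re-assembling the DTO from several queries.
public OrderDto GetOrder(int id)
{
    var key = "Order_" + id;

    var cached = MemoryCache.Default.Get(key) as OrderDto;
    if (cached != null)
        return cached;

    var dto = _orderRepository.GetDto(id);   // assemble from the data layer on a miss
    MemoryCache.Default.Set(key, dto, new CacheItemPolicy
    {
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
    return dto;
}

// Update path: refresh the cached DTO directly so readers see the change immediately,
// then persist to the database.
public void UpdateOrder(OrderDto dto)
{
    MemoryCache.Default.Set("Order_" + dto.Id, dto, new CacheItemPolicy
    {
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
    _orderRepository.Update(dto);
}
```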
I'm new to EF and it appears that I have made a mistake with it but I would like clarification.
My scenario:
Winforms App (ClickOnce)
A static class whose only responsibility is to update the DB via a DataServiceContext - single URI
Only one control in the entire application uses this class
Within the static class I created a single readonly instance of a DataServiceContext. There is also a Get method which gets the data using ToList() on the context - this list is then used for data binding. I just need simple CRUD, so there are Save/Delete methods; entities are passed in and updated.
As I've read a bit more about EF, I understand that shared contexts are bad due to issues with concurrency. It seems that I would get away with a static context in this scenario, as there would only ever be a single user accessing the same context per application instance - or would I? I want to keep things as simple as possible. I'm starting to think perhaps I should turn the static class into a regular class with an immutable DataServiceContext instance shared between methods as a safeguard? Perhaps I should apply a using(DataServiceContext) within each method that makes a service call via SaveChanges to tighten things up even more? Do I need to do these things now, or might it be YAGNI?
As I'm self-taught here (no mentors), I might be in danger of going AWOL. I probably need some ground rules about EF that my current reading has not led me to as yet. Please help.
This isn't just about concurrency (but yes: that is an important concern) - it is also about correctness. If you have a single data-context, there are a few issues:
Firstly, memory: it will slowly grow over the life of the application, as more data is attached to the identity manager and change tracker.
Secondly, freshness: once things are attached to the data-context, you'll see the in-memory object - it may stop showing the up-to-date state of the objects in the database.
Thirdly, corruption: if anything goes wrong, the normal way of handling that is to simply roll back any in-flight changes, discard the data-context, and report the error and/or retry the operation (on a fresh data-context); you can't keep using the old data-context - it is now in an undefined state.
For all of these reasons, the general pattern is that you use a data-context only as a unit-of-work, to perform a single operation or a set of related / scoped operations. After that, burn it and start again.
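A minimal sketch of that unit-of-work usage, assuming an EF DbContext named OrdersContext with an Orders set (a DataServiceContext works the same way, except it isn't IDisposable, so you simply let it go out of scope rather than wrapping it in a using block):

```csharp
public void MarkOrderShipped(int orderId)
{
    using (var context = new OrdersContext())   // fresh context for this one operation
    {
        var order = context.Orders.Find(orderId);
        order.Status = "Shipped";
        context.SaveChanges();
    }                                           // discarded here: nothing stays tracked,
                                                // nothing goes stale, nothing to corrupt
}
```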