Allow access to an operation by only one user at a time - C#

I have an ASP.NET Core application which communicates with a web API.
There is a business operation involving a few steps (do something on one page, then go to the next and the next). This multi-step operation is done in the context of one element.
So let's say you have a list of business objects and your task is to accept object 3 from this list. Accepting is our multi-step operation, and while I am accepting object 3 no one else should be able to enter the accepting operation for object 3. When I finish the operation, it should be unlocked.
Hope the problem is understandable.
We don't want a very time-consuming solution. The simplest idea was to create a database table which records when a user starts the operation: it saves the id of the object and the id of the user, and automatically removes itself after, for example, 5 minutes; if someone else wants to access the operation we check whether it is blocked for this object. But it is kind of hacky and not very clean (what if the user goes for a coffee and continues the operation after 10 minutes?).
I'm looking for a better way to implement this kind of behaviour and appreciate any ideas.

If I were to implement that behavior, I'd also use a database, but in a somewhat different way. I'd make a table of objects (object 3 being one of its rows), adding a column for UserId, a boolean OnProcess (to mark whether the object is being processed) and a StartProcess timestamp.
For a user to be able to enter the operation, run a query like:
UPDATE Objects
SET UserId = @CurrentUser, StartProcess = SYSUTCDATETIME(), OnProcess = 1
OUTPUT inserted.Id
WHERE Id = 3
  AND (
       OnProcess = 0
    OR (OnProcess = 1 AND UserId = @CurrentUser)
    OR StartProcess < DATEADD(MINUTE, -15, SYSUTCDATETIME())
  )
disclaimer: the query above is written in a T-SQL flavor and may need adjusting to your schema and database, but it should be clear enough to understand what it does.
With the query above, the object's Id will be returned when:
the object is not being processed by another user
the object is being processed by the current user, which also resets StartProcess (a kind of sliding expiration). This way, if the current user goes AFK for a while (without exceeding the threshold) and comes back, he/she can comfortably continue the operation
the object has not been processed for the last 15 minutes. This is the threshold I mentioned in the previous point. How long it is (15 minutes in my example) is really up to you.
If the object's Id is returned for a user, then he/she is able to enter the operation.
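For completeness, a minimal ADO.NET sketch of how such a claim query might be executed from the ASP.NET Core side (Microsoft.Data.SqlClient, the Objects table/column names and the 15-minute threshold are assumptions carried over from the pseudo-query above):
using Microsoft.Data.SqlClient;
using System.Threading.Tasks;

// Returns true if the current user managed to claim the object (i.e. the UPDATE hit a row).
public static async Task<bool> TryClaimObjectAsync(string connectionString, int objectId, int currentUserId)
{
    const string sql = @"
        UPDATE Objects
        SET UserId = @UserId, StartProcess = SYSUTCDATETIME(), OnProcess = 1
        OUTPUT inserted.Id
        WHERE Id = @ObjectId
          AND (OnProcess = 0
               OR (OnProcess = 1 AND UserId = @UserId)
               OR StartProcess < DATEADD(MINUTE, -15, SYSUTCDATETIME()));";

    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();

    await using var command = new SqlCommand(sql, connection);
    command.Parameters.AddWithValue("@UserId", currentUserId);
    command.Parameters.AddWithValue("@ObjectId", objectId);

    // OUTPUT returns the claimed Id when the row was updated; otherwise no row comes back.
    return await command.ExecuteScalarAsync() != null;
}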

You're looking for a semaphore. The lock keyword is the most basic of semaphores, but you can also use Semaphore/SemaphoreSlim, which provide the ability to do things like rate-limiting, whereas lock will literally gate one operation at a time. However, your goal is to gate one operation at a time per resource, which makes SemaphoreSlim a better choice, specifically a ConcurrentDictionary<string, SemaphoreSlim>.
You'll need a class with singleton lifetime (one instance for the entire life of the application). There, you'll add an instance field:
private readonly ConcurrentDictionary<string, SemaphoreSlim> _semaphores = new ConcurrentDictionary<string, SemaphoreSlim>();
Then, you'll add the following code around the operation you want to gate:
var semaphore = _semaphores.GetOrAdd("object3", _ => new SemaphoreSlim(1, 1));
await semaphore.WaitAsync();
try { /* do something */ }
finally { semaphore.Release(); }
The "object3" there is obviously just a placeholder. You'll want to use whatever makes sense (ID, etc.) - something that uniquely identifies the particular resource you're gating. This then will only hold operations for that particular resource if there's an existing operation on that particular resource. A different resource would get its own semaphore and thus its own gate.

Is there a way to lock a concurrent dictionary from being used

I have this static class
static class LocationMemoryCache
{
public static readonly ConcurrentDictionary<int, LocationCityContract> LocationCities = new();
}
My process
Api starts and initializes an empty dictionary
A background job starts and runs once every day to reload the dictionary from the database
Requests come in to read from the dictionary or update a specific city in the dictionary
My problem
If a request comes in to update the city
I update the database
If the update was successful, update the city object in the dictionary
At the same time, the background job started and queried all cities before I updated the specific city
The request finishes and the dictionary city now has the old values because the background job finished last
My solution I thought about first
Is there a way to lock/reserve the concurrent dictionary from reads/writes and then release it when I am done?
This way when the background job starts, it can lock/reserve the dictionary only for itself and when it's done it will release it for other requests to be used.
Then a request might have been waiting for the dictionary to be released and update it with the latest values.
Any ideas on other possible solutions?
Edit
What is the purpose of the background job?
If I manually update/delete something in the database I want those changes to show up after the background job runs again. This could take a day for the changes to show up and I am okay with that.
What happens when the Api wants to access the cache but it's not loaded?
When the Api starts I block requests to this particular "Location" project until the background job sets IsReady to true. The cache I implemented is thread safe until I add the background job.
How much time does it take to reload the cache?
I would say less than 10 seconds for a total of 310,000+ records in the "Location" project.
Why I chose the answer
I chose Xerillio's answer because it solves the background job problem by keeping track of date times, similar to an "object version" approach. I won't be taking this path, as I have decided that if I do a manual update in the database, I might as well create an API route that does it for me so that I can update the DB and the cache at the same time. So I might remove the background job after all, or just run it once a week. Thank you for all the answers. I am OK with a possible data inconsistency in the way I am updating the objects, because if one route updates 2 specific values and another route updates 2 different specific values, the possibility of having a problem is very minimal.
Edit 2
Let's imagine I have this cache now and 10,000 active users
static class LocationMemoryCache
{
public static readonly ConcurrentDictionary<int, LocationCityUserLogContract> LocationCityUserLogs = new();
}
Things I took into consideration
An update will only happen to objects that the user owns and the rate at which the user might update those objects is most likely once every minute. So that reduces the possibility of a problem by a lot for this specific example.
Most of my cache objects are related only to a specific user so it relates with bullet point 1.
The application owns the data, I don't. So I should never manually update the database unless it's critical.
Memory might be a problem but 1,000,000 normalish objects is somewhere between 80MB - 150MB. I can have a lot of objects in memory to gain performance and reduce the load on the database.
Having a lot of objects in memory will put pressure on garbage collection, and that is not good, but I don't think it's bad at all for me, because garbage collection only runs when memory gets low and all I have to do is plan ahead to make sure there is enough memory. Yes, it will run because of day-to-day operations, but it won't have a big impact.
All of these considerations just so that I can have an in-memory cache right at my fingertips.
I would suggest adding an UpdatedAt/CreatedAt property to your LocationCityContract or creating a wrapper object (CacheItem<LocationCityContract>) with such a property. That way you can check whether the item you're about to add/update is newer than the existing object, like so:
public class CacheItem<T>
{
    public T Item { get; }
    public DateTime CreatedAt { get; }

    // In case of system clock synchronization issues, consider making CreatedAt
    // a long and using Environment.TickCount64. See comment from #Theodor
    public CacheItem(T item, DateTime? createdAt = null)
    {
        Item = item;
        CreatedAt = createdAt ?? DateTime.UtcNow;
    }
}

// Use it like...
static class LocationMemoryCache
{
    public static readonly
        ConcurrentDictionary<int, CacheItem<LocationCityContract>> LocationCities = new();
}

// From some request...
var newItem = new CacheItem<LocationCityContract>(newLocation);
// or the background job...
var newItem = new CacheItem<LocationCityContract>(newLocation, updateStart);

LocationMemoryCache.LocationCities
    .AddOrUpdate(
        newLocation.Id,
        newItem,
        (_, existingItem) =>
            newItem.CreatedAt > existingItem.CreatedAt
                ? newItem
                : existingItem);
When a request wants to update the cache entry, it does as above, using the timestamp of when it finished writing the item to the database (see notes below).
The background job should, as soon as it starts, save a timestamp (let's call it updateStart). It then reads everything from the database and adds the items to the cache like above, where CreatedAt for the newLocation is set to updateStart. This way, the background job only updates the cache items that haven't been updated since it started. Perhaps you're not reading all items from DB as the first thing in the background job, but instead you read them one at a time and update the cache accordingly. In that case updateStart should instead be set right before reading each value (we could call it itemReadStart instead).
Since the way of updating the item in the cache is a little more cumbersome and you might be doing it from a lot of places, you could make a helper method to make the call to LocationCities.AddOrUpdate a little easier.
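Such a helper might look roughly like this (a sketch; it assumes LocationCityContract exposes the Id used as the dictionary key):
using System;
using System.Collections.Concurrent;

public static class LocationCacheHelpers
{
    // Adds the item, or replaces the existing entry only if the new item is newer.
    public static void SetIfNewer(
        this ConcurrentDictionary<int, CacheItem<LocationCityContract>> cache,
        LocationCityContract location,
        DateTime? createdAt = null)
    {
        var newItem = new CacheItem<LocationCityContract>(location, createdAt);
        cache.AddOrUpdate(
            location.Id,
            newItem,
            (_, existingItem) => newItem.CreatedAt > existingItem.CreatedAt ? newItem : existingItem);
    }
}
A request would then call LocationMemoryCache.LocationCities.SetIfNewer(newLocation), and the background job would call LocationMemoryCache.LocationCities.SetIfNewer(location, updateStart).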
Note:
Since this approach does not synchronize (lock) updates to the database, there's a race condition that means you might end up with a slightly out-of-date item in the cache. This can happen if two requests want to update the same item simultaneously. You can't know for sure which one updated the DB last, so even if you set CreatedAt to the timestamp right after each update, it might not truly reflect which one was updated last. Since you're OK with a 24-hour delay from manually updating the DB until the background job updates the cache, perhaps this race condition is not a problem for you, as the background job will fix it when it runs.
As #Theodor mentioned in the comments, you should avoid updating the object from the cache directly. Either use the C# 9 record type (as opposed to a class type) or clone the object if you want to cache new updates. That means, don't do LocationMemoryCache.LocationCities[locationId].Item.CityName = updatedName. Instead you should e.g. clone it like:
// You need to implement a copy constructor or similar to clone the object,
// depending on how complex it is
var newLoc = new LocationCityContract(LocationMemoryCache.LocationCities[locationId].Item);
newLoc.CityName = updatedName;
var newItem = new CacheItem<LocationCityContract>(newLoc);
LocationMemoryCache.LocationCities
.AddOrUpdate(...); /* <- like above */
By not locking the whole dictionary you avoid having requests being blocked by each other because they're trying to update the cache at the same time. If the first point is not acceptable you can also introduce locking based on the location ID (or whatever you call it) when updating the database, so that DB and cache are updated atomically. This avoids blocking requests that are trying to update other locations so you minimize the risk of requests affecting each other.
No, there is no way to lock a ConcurrentDictionary on demand from reads/writes and then release it when you are done. This class does not offer that functionality. You could manually take a lock every time you access the ConcurrentDictionary, but by doing so you would lose all the advantages this specialized class has to offer (low contention under heavy usage), while keeping all its disadvantages (awkward API, overhead, allocations).
My suggestion is to use a normal Dictionary protected with a lock. This is a pessimistic approach that will occasionally result in some threads being blocked unnecessarily, but it is also very simple and easy to reason about its correctness. Essentially all access to the dictionary and the database will be serialized:
Every time a thread wants to read an object stored in the dictionary, it will first have to take the lock, and keep the lock until it's done reading the object.
Every time a thread wants to update the database and then the corresponding object, it will first have to take the lock (before even updating the database), and keep the lock until all the properties of the object have been updated.
Every time the background job wants to replace the current dictionary with a new dictionary, it will first have to take the lock (before even querying the database), and keep the lock until the new dictionary has taken the place of the old one.
In case the performance of this simple approach proves to be unacceptable, you should look at more sophisticated solutions. But the complexity gap between this solution and the next simplest solution (that also offers guaranteed correctness) is likely to be quite significant, so you'd better have good reasons before going down that route.
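A bare-bones sketch of that pessimistic approach (LocationCityContract and its Id come from the question; the rest of the names are illustrative):
using System;
using System.Collections.Generic;

public class LocationCache
{
    private readonly object _lock = new object();
    private Dictionary<int, LocationCityContract> _cities = new();

    public bool TryGet(int id, out LocationCityContract city)
    {
        lock (_lock) { return _cities.TryGetValue(id, out city); }
    }

    // Take the lock before touching the database, keep it until the cached object is replaced.
    public void Update(LocationCityContract city, Action updateDatabase)
    {
        lock (_lock)
        {
            updateDatabase();
            _cities[city.Id] = city;
        }
    }

    // The background job queries the database and swaps the whole dictionary while holding the lock.
    public void ReloadAll(Func<Dictionary<int, LocationCityContract>> loadFromDatabase)
    {
        lock (_lock)
        {
            _cities = loadFromDatabase();
        }
    }
}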

Event sourcing incremental int id

I have looked at a lot of event sourcing tutorials, and all of them use simple demos to focus on the tutorial's topic (event sourcing).
That's fine until you hit something in a real-world application that is not covered in one of these tutorials :)
I hit something like this.
I have two databases, one event store and one projection store (read models).
All aggregates have a GUID Id, which was 100% fine until now.
Now I created a new JobAggregate and a Job projection.
And my company requires a unique, incremental int64 Job Id.
Now I'm looking stupid :)
An additional issue is that jobs are created multiple times per second!
That means the method to get the next number has to be really safe.
In the past (without ES) I had a table, defined the PK as auto increment int64, save Job, DB does the job to give me the next number, done.
But how can I do this within my aggregate or command handler?
Normally the projection job is created by the event handler, but that's too late in the process, because the aggregate should already have the int64. (So that replaying the aggregate on an empty DB yields the same Aggregate Id -> Job Id relation.)
How should I solve this issue?
Kind regards
In the past (without ES) I had a table, defined the PK as auto increment int64, save Job, DB does the job to give me the next number, done.
There's one important thing to notice in this sequence, which is that the generation of the unique identifier and the persistence of the data into the book of record both share a single transaction.
When you separate those ideas, you are fundamentally looking at two transactions -- one that consumes the id, so that no other aggregate tries to share it, and another to write that id into the store.
The best answer is to arrange that both parts are part of the same transaction -- for example, if you were using a relational database as your event store, then you could create an entry in your "aggregate_id to long" table in the same transaction as the events are saved.
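For illustration, assuming a relational event store, the id reservation and the event append could share one ADO.NET transaction like this (the AggregateLongIds and Events tables are hypothetical; the event payload is assumed to be serialized by the caller):
using System;
using Microsoft.Data.SqlClient;

public static long ReserveJobIdAndAppendEvent(string connectionString, Guid aggregateId, string payload)
{
    using var connection = new SqlConnection(connectionString);
    connection.Open();
    using var transaction = connection.BeginTransaction();

    // Hypothetical mapping table: AggregateLongIds(JobId BIGINT IDENTITY, AggregateId UNIQUEIDENTIFIER).
    // The identity column hands out the incremental Job Id inside this transaction.
    using var reserve = new SqlCommand(
        "INSERT INTO AggregateLongIds (AggregateId) OUTPUT inserted.JobId VALUES (@AggregateId);",
        connection, transaction);
    reserve.Parameters.AddWithValue("@AggregateId", aggregateId);
    var jobId = (long)reserve.ExecuteScalar();

    // Hypothetical Events table; the JobCreated payload already carries the reserved jobId.
    using var append = new SqlCommand(
        "INSERT INTO Events (AggregateId, Type, Payload) VALUES (@AggregateId, @Type, @Payload);",
        connection, transaction);
    append.Parameters.AddWithValue("@AggregateId", aggregateId);
    append.Parameters.AddWithValue("@Type", "JobCreated");
    append.Parameters.AddWithValue("@Payload", payload);
    append.ExecuteNonQuery();

    // Either both the id reservation and the event are persisted, or neither is.
    transaction.Commit();
    return jobId;
}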
Another possibility is to treat the "create" of the aggregate as a Prepare followed by a Created; with an event handler that responds to the prepare event by reserving the long identifier post facto, and then sends a new command to the aggregate to assign the long identifier to it. So all of the consumers of Created see the aggregate with the long assigned to it.
It's worth noting that you are assigning what is effectively a random long to each aggregate you are creating, so you better dig in to understand what benefit the company thinks it is getting from this -- if they have expectations that the identifiers are going to provide ordering guarantees, or completeness guarantees, then you had best understand that going in.
There's nothing particularly wrong with reserving the long first; depending on how frequently the save of the aggregate fails, you may end up with gaps. For the most part, you should expect to be able to maintain a small failure rate (ie - you check to ensure that you expect the command to succeed before you actually run it).
In a real sense, the generation of unique identifiers falls under the umbrella of set validation; we usually "cheat" with UUIDs by abandoning any pretense of ordering and pretending that the risk of collision is zero. Relational databases are great for set validation; event stores maybe not so much. If you need unique sequential identifiers controlled by the model, then your "set of assigned identifiers" needs to be within an aggregate.
The key phrase to follow is "cost to the business" -- make sure you understand why the long identifiers are valuable.
Here's how I'd approach it.
I agree with the idea of an Id generator for the "business Id", which is distinct from the "technical Id".
Here the core is to have an application-level JobService that deals with all the infrastructure services to orchestrate what is to be done.
Controllers (like web controller or command-lines) will directly consume the JobService of the application level to control/command the state change.
It's in PHP-like pseudocode, but here we talk about architecture and processes, not syntax. Adapt it to C# and the idea is the same.
Application level
class MyNiceWebController
{
public function createNewJob( string $jobDescription, xxxx $otherData, ApplicationJobService $jobService )
{
$projectedJob = $jobService->createNewJobAndProjectIt( $jobDescription, $otherData );
$this->doWhateverYouWantWithYourAleadyExistingJobLikeForExample301RedirectToDisplayIt( $projectedJob );
}
}
class MyNiceCommandLineCommand
{
private $jobService;
public function __construct( ApplicationJobService $jobService )
{
$this->jobService = $jobService;
}
public function createNewJob()
{
$jobDescription = // Get it from the command line parameters
$otherData = // Get it from the command line parameters
$projectedJob = $this->jobService->createNewJobAndProjectIt( $jobDescription, $otherData );
// print, echo, console->output... confirmation with Id or print the full object.... whatever with ( $projectedJob );
}
}
class ApplicationJobService
{
// In application level because it just serves the first-level request
// to controllers, commands, etc but does not add "domain" logic.
private $application;
private $jobIdGenerator;
private $jobEventFactory;
private $jobEventStore;
private $jobProjector;
public function __construct( Application $application, JobBusinessIdGeneratorService $jobIdGenerator, JobEventFactory $jobEventFactory, JobEventStoreService $jobEventStore, JobProjectorService $jobProjector )
{
$this->application = $application; // I like to track which "application execution run" is responsible for all domain effects; I can then trace IPs, cookies, etc., crossing data from another data lake.
$this->jobIdGenerator = $jobIdGenerator;
$this->jobEventFactory = $jobEventFactory;
$this->jobEventStore = $jobEventStore;
$this->jobProjector = $jobProjector;
}
public function createNewJobAndProjectIt( string $jobDescription, xxxx $otherData ) : Job
{
$applicationExecutionId = $this->application->getExecutionId();
$businessId = $this->jobIdGenerator->getNextJobId();
$jobCreatedEvent = $this->jobEventFactory->createNewJobCreatedEvent( $applicationExecutionId, $businessId, $jobDescription, $otherData );
$this->jobEventStore->storeEvent( $jobCreatedEvent ); // Throws an exception if it fails, so no projector will be invoked if the event was not stored.
$entityId = $jobCreatedEvent->getId();
$projectedJob = $this->jobProjector->project( $entityId );
return $projectedJob;
}
}
Note: if projecting is too expensive for synchronous projection just return the Id:
// ...
$entityId = $jobCreatedEvent->getId();
$this->jobProjector->enqueueProjection( $entityId );
return $entityId;
}
}
Infrastructure level (common to various applications)
class JobBusinessIdGenerator implements DomainLevelJobBusinessIdGeneratorInterface
{
// In infrastructure because it accesses persistence layers.
// In the creator, get persistence objects and so... database, files, whatever.
public function getNextJobId() : int
{
$this->lockGlobalCounterMaybeAtDatabaseLevel();
$current = $this->persistence->getCurrentJobCounter();
$next = $current + 1;
$this->persistence->setCurrentJobCounter( $next );
$this->unlockGlobalCounterMaybeAtDatabaseLevel();
return $next;
}
}
Domain Level
class JobEventFactory
{
// It's in this factory that we create the entity Id.
private $idGenerator;
public function __construct( EntityIdGenerator $idGenerator )
{
$this->idGenerator = $idGenerator;
}
public function createNewJobCreatedEvent( Id $applicationExecutionId, int $businessId, string $jobDescription, xxxx $otherData ) : JobCreatedEvent
{
$eventId = $this->idGenerator->createNewId();
$entityId = $this->idGenerator->createNewId();
// The only place where we allow "new" is in the factories. No other places should do a "new" ever.
$event = new JobCreatedEvent( $eventId, $entityId, $applicationExecutionId, $businessId, $jobDescription, $otherData );
return $event;
}
}
If you don't like the factory creating the entityId (it could seem ugly to some eyes), just pass it in as a parameter with a specific type, and push the responsibility of creating a fresh one (and not reusing an existing one) onto some other intermediate service (never the application service).
Nevertheless, if you do so, take care: what if a "silly" service creates two JobCreatedEvents with the same entity Id? That would be really ugly. In the end, creation only occurs once, and the Id is created at the very core of the creation of the JobCreatedEvent. Your choice anyway.
Other classes...
class JobCreatedEvent;
class JobEventStoreService;
class JobProjectorService;
Things that do not matter in this post
We could discuss at length whether the projectors should live at the infrastructure level, global to the multiple applications calling them... or even in the domain (as I need "at least" one way to read the model), or whether they belong more to the application (maybe the same model can be read in 4 different ways by 4 different applications, each with their own projectors)...
We could discuss at length where the side effects are triggered, whether implicitly in the event store or at the application level (I've not called any side-effects processor == event listener). I think of side effects as being in the application layer, as they depend on infrastructure...
But all this... is not the topic of this question.
I don't care about all those things for this post. Of course they are not negligible topics and you will have your own strategy for them, and you have to design all this very carefully. But here the question was where to create the auto-incremental Id coming from a business requirement, and doing all those projectors (sometimes called calculators) and side effects (sometimes called reactors) in a "clean-code" way here would blur the focus of this answer. You get the idea.
Things that I care in this post
What I care is that:
If the experts want an "autonumber", then it's a domain requirement, and therefore it's a property at the same level of definition as "description" or "other data".
The fact that they want this property does not conflict with the fact that all entities have an "internal id" in whatever format the coder chooses, be it a UUID, a SHA1 or whatever.
If you need sequential ids for that property, you need a "supplier of values", AKA the JobBusinessIdGeneratorService, which has nothing to do with the "entity Id" itself.
That Id generator is responsible for ensuring that once the number has been auto-incremented, it is synchronously persisted before being returned to the caller, so it is impossible to return the same id twice, even on failures.
Drawbacks
There's a sequence-leak you'll have to deal with:
If the Id generator points to 4007, the next call to getNextJobId() will increment it to 4008, persist the pointer as "current = 4008" and then return.
If for some reason the creation and persistence of the job fails, the next call will give 4009. We will then have a sequence of [ 4006, 4007, 4009, 4010 ], with 4008 missing.
That is because, from the generator's point of view, 4008 was "actually used", and it, as a generator, does not know what you did with it; the same as if you had a dummy loop that just extracts 100 numbers.
Never compensate with a ->rollback() in a catch of a try/catch block, because that can create concurrency problems: if you get 4008, another process gets 4009, and then the first process fails, the rollback would break the sequence. Just assume that on failure the Id was "simply consumed", and do not blame the generator. Blame whatever failed.
I hope it helps!
#SharpNoizy, very simple.
Create your own Id generator. Say an alphanumeric string, for example "DB3U8DD12X", which gives you billions of possibilities. Now, what you want to do is generate these ids in sequential order by giving each character an ordered value...
0 - 0
1 - 1
2 - 2
.....
10 - A
11 - B
Get the idea? So, what you do next is create a function that will increment each index of your "D74ERT3E4" string using that matrix.
So, "R43E4D", "R43E4E", "R43E4F", "R43E4G"... get the idea?
Then, when your application loads, you look at the database and find the latest Id generated. Then you load into memory the next 50,000 combinations (in case you want super speed) and create a static class/method that is going to give you that value back.
Aggregate.Id = IdentityGenerator.Next();
This way you have control over the generation of your IDs, because that's the only class that has that power.
I like this approach because it is more "readable" when using it in your web API, for example. GUIDs are hard (and tedious) to read, remember, etc.
GET api/job/DF73 is way better to remember than api/job/XXXX-XXXX-XXXXX-XXXX-XXXX
Does that make sense?
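A minimal in-memory sketch of that generator idea (illustrative only; persisting the counter back to the database and pre-loading batches, as described above, are omitted):
using System.Text;
using System.Threading;

// A fixed-length base-36 counter ("0"-"9" then "A"-"Z"), seeded from the last id in the database.
public static class IdentityGenerator
{
    private const string Alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static long _counter;   // numeric value of the last issued id
    private static int _length;

    // Call once at startup with the latest id found in the database.
    public static void Initialize(string lastIssuedId)
    {
        _length = lastIssuedId.Length;
        _counter = 0;
        foreach (var c in lastIssuedId)
            _counter = _counter * Alphabet.Length + Alphabet.IndexOf(c);
    }

    public static string Next()
    {
        // Interlocked keeps concurrent callers from ever receiving the same value.
        var value = Interlocked.Increment(ref _counter);
        var sb = new StringBuilder();
        for (var i = 0; i < _length; i++)
        {
            sb.Insert(0, Alphabet[(int)(value % Alphabet.Length)]);
            value /= Alphabet.Length;
        }
        return sb.ToString();
    }
}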

Modify the original object in Service Fabric Reliable Collections

I read this article about working with Reliable Collections, and it is mentioned there that you MUST NOT modify an object once you have given it to a reliable collection; the correct way to update a value in a reliable collection is to get a copy (clone) of the value, change the cloned value and then update the cloned value in the RC.
Bad use:
using (ITransaction tx = StateManager.CreateTransaction()) {
    // Use the user’s name to look up their data
    ConditionalValue<User> user =
        await m_dic.TryGetValueAsync(tx, name);
    // The user exists in the dictionary, update one of their properties.
    if (user.HasValue) {
        // The line below updates the property’s value in memory only; the
        // new value is NOT serialized, logged, & sent to secondary replicas.
        user.Value.LastLogin = DateTime.UtcNow; // Corruption!
        await tx.CommitAsync();
    }
}
My question is: why can't I modify the object once I've given it to the RC? Why do I have to clone the object before changing something in it? Why can't I do something like this (update the object in the same transaction)?
using (ITransaction tx = StateManager.CreateTransaction()) {
    // Use the user’s name to look up their data
    ConditionalValue<User> user =
        await m_dic.TryGetValueAsync(tx, name);
    // The user exists in the dictionary, update one of their properties.
    if (user.HasValue) {
        // The line below updates the property’s value in memory only; the
        // new value is NOT serialized, logged, & sent to secondary replicas.
        user.Value.LastLogin = DateTime.UtcNow;
        // Update
        await m_dic.SetAsync(tx, name, user.Value);
        await tx.CommitAsync();
    }
}
Thanks!
Reliable Dictionary is a replicated object store. If you update the objects inside Reliable Dictionary without going through Reliable Dictionary (e.g. TryUpdateAsync), then you can corrupt the state.
For example, if you change the object inside Reliable Dictionary using your reference, then the change will not be replicated to the secondary replicas.
This is because Reliable Dictionary does not know that you changed one of the TValues. Hence, the change will be lost if the replica ever fails over.
Above is the simplest example. Modifying objects directly can cause other serious problems, like breaking ACID in multiple ways.
Technically you can do what you want. But don't forget about lock modes and isolation levels.
Here we can read: “Any Repeatable Read operation by default takes Shared locks. However, for any read operation that supports Repeatable Read, the user can ask for an Update lock instead of the Shared lock”.
That means that TryGetValueAsync takes only a Shared lock, and an attempt to update this value later could cause a deadlock.
The next statement is: “An Update lock is an asymmetric lock used to prevent a common form of deadlock that occurs when multiple transactions lock resources for potential updates at a later time.”
So, the correct code would be
await m_dic.TryGetValueAsync(tx, name, LockMode.Update)
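Putting both answers together, the second snippet from the question could end up looking roughly like this (a sketch; the User copy constructor is assumed, since the stored instance must not be mutated):
using (ITransaction tx = StateManager.CreateTransaction())
{
    // Ask for an Update lock up front to avoid the deadlock described above.
    ConditionalValue<User> user = await m_dic.TryGetValueAsync(tx, name, LockMode.Update);

    if (user.HasValue)
    {
        // Never modify the stored object; clone it (hypothetical copy constructor), change the clone...
        User updated = new User(user.Value) { LastLogin = DateTime.UtcNow };

        // ...and write the clone back through the dictionary so the change is replicated.
        await m_dic.SetAsync(tx, name, updated);
        await tx.CommitAsync();
    }
}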

Best way to prevent race conditions in a multi instance web environment?

Say you have an Action in ASP.NET MVC in a multi-instance environment that looks something like this*:
public void AddLolCat(int userId)
{
var user = _Db.Users.ById(userId);
user.LolCats.Add( new LolCat() );
user.LolCatCount = user.LolCats.Count();
_Db.SaveChanges();
}
When a user repeatedly presses a button or refreshes, race conditions can occur, making it possible for LolCatCount to not match the actual number of LolCats.
Question
What is the common way to fix these issues? You could fix it client side in JavaScript, but that might not always be possible. I.e. when something happens on a page refresh, or because someone is screwing around in Fiddler.
I guess you have to make some kind of a network based lock?
Do you really have to suffer the extra latency per call?
Can you tell an Action that it is only allowed to be executed once per User?
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Do you return early, or do you really lock the process?
When you return early, is there an 'established' response / response code I should return?
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
* just a stupid example shown for brevity. Real world examples are a lot more complicated.
Answer 1: (The general approach)
If the data store supports transactions you could do the following:
using (var trans = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
{
    var user = _Db.Users.ById(userId);
    user.LolCats.Add( new LolCat() );
    user.LolCatCount = user.LolCats.Count();
    _Db.SaveChanges();
    trans.Complete();
}
This will lock the user record in the database, making other requests wait until the transaction has been committed.
Answer 2: (Only possible with single process)
Enabling sessions and using session will cause implicit locking between requests from the same user (session).
Session["TRIGGER_LOCKING"] = true;
Answer 3: (Example specific)
Deduce the number of LolCats from the collection instead of keeping track of it in a separate field and thus avoid inconsistency issues.
Answers to your specific questions:
I guess you have to make some kind of a network based lock?
yes, database locks are common
Do you really have to suffer the extra latency per call?
say what?
Can you tell an Action that it is only allowed to be executed once per User?
You could implement an attribute that uses the implicit session locking, or some custom variant of it, but that won't work across processes.
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Common practice is to use locks in the database to solve the multi instance issue. No filter or attribute that I know of.
Do you return early, or do you really lock the process?
Depends on your use case. Commonly you wait ("lock the process"). However if your database store supports the async/await pattern you would do something like
var user = await _Db.Users.ByIdAsync(userId);
this will free the thread to do other work while waiting for the lock.
When you return early, is there an 'established' response / response code I should return?
I don't think so, pick something that fits your use case.
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
I guess you should consider using queues.
By "multi-instance" you're obviously referring to a web farm or maybe a web garden situation where just using a mutex or monitor isn't going to be sufficient to serialize requests.
So... do you you have just one database on the back end? Why not just use a database transaction?
It sounds like you probably don't want to force serialized access to this one section of code for all user ids, right? You want to serialize requests per user id?
It seems to me that the right thinking about this is to serialize access to the source data, which is the LolCats records in the database.
I do like the idea of disabling the button or link in the browser for the duration of a request, to prevent the user from hammering away on the button over and over again before previous requests finish processing and return. That seems like an easy enough step with a lot of benefit.
But I doubt that is enough to guarantee the serialized access you want to enforce.
You could also implement shared session state and some kind of lock on a session-based object, but it would probably need to be a collection (of user ids) in order to enforce the serialized-per-user paradigm.
I'd vote for using a database transaction.
I suggest, and personally use, a mutex in this case.
I have written about this here: Mutex release issues in ASP.NET C# code, a class that handles the mutex, but you can make your own.
So base on the class from this answer your code will be look like:
public void AddLolCat(int userId)
{
// I add some text in front of the number because the userId is an integer,
// so it's better to make the lock name a little more distinctive to avoid conflicts
var gl = new MyNamedLock("SiteName." + userId.ToString());
try
{
//Enter lock
if (gl.enterLockWithTimeout())
{
var user = _Db.Users.ById(userId);
user.LolCats.Add( new LolCat() );
user.LolCatCount = user.LolCats.Count();
_Db.SaveChanges();
}
else
{
// log the error
throw new Exception("Failed to enter lock");
}
}
finally
{
//Leave lock
gl.leaveLock();
}
}
Here the lock is based on the user, so different users will not block each other.
About Session Lock
If you use the ASP.NET session in your call then you may get a free lock "ticket" from the session. The session is locked on each call until the page is returned.
Read about that on this q/a:
Web app blocked while processing another web app on sharing same session
Does ASP.NET Web Forms prevent a double click submission?
jQuery Ajax calls to web service seem to be synchronous
Well, MVC is stateless, meaning that you'll have to handle this yourself manually. From a purist perspective I would recommend preventing the multiple presses by using a client-side lock, although my preference is to disable the button and apply an appropriate CSS class to show its disabled state. My reasoning is that we cannot fully determine the consumer of the action, so while you provide the example of Fiddler, there is no way to truly determine whether multiple clicks are applicable or not.
However, if you wanted to pursue a server-side locking mechanism, this article provides an example storing the requester's information in the server-side cache and returns an appropriate response depending on the timeout / actions you would want to implement.
HTH
One possible solution is to avoid the redundancy which can lead to inconsistent data.
i.e. if LolCatCount can be determined at runtime, determine it at runtime instead of persisting this redundant information.

Conditional locking based on value

I am writing a web service that allows users to create jobs within the system. Each user has an allowance for the number of jobs they can create. I have a method which checks that the user has some remaining credits, which looks like this:
private bool CheckRemainingCreditsForUser(string userId)
{
    lock (lockObj)
    {
        var user = GetUserFromDB(userId);
        if (user.RemaingCredit == 0) return false;
        RemoveOneCreditFromUser(user);
        SaveUserToDB(user);
        return true;
    }
}
The problem I can see with this approach is that if multiple different users make a request at the same time, they will only be processed one at a time, which could cause performance issues for clients. Would it be possible to do something like this?
private bool CheckRemainingCreditsForUser(string userId)
{
    //If there is a current lock on the value of userId then wait
    //If not, get a lock on the value of userId
    var user = GetUserFromDB(userId);
    if (user.RemaingCredit == 0) return false;
    RemoveOneCreditFromUser(user);
    SaveUserToDB(user);
    //Release lock on the value of userId
    return true;
}
This would mean that requests with different userIds could be processed at the same time, but requests with the same userId would have to wait for the previous request to finish
Yes, you could do that with a Dictionary<string, object>, linking a lock object to every userId.
The problem would be cleaning up that dictionary every so often.
But I would first verify that there really is a bottleneck here. Don't fix problems you don't have.
The alternative is to have a (optimistic) concurrency check in your db and just handle the (rare) conflict cases.
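For illustration, a per-userId lock could look roughly like this (a sketch using a ConcurrentDictionary so the lock lookup itself is thread safe; GetUserFromDB, RemoveOneCreditFromUser and SaveUserToDB are the question's existing methods):
using System.Collections.Concurrent;

private static readonly ConcurrentDictionary<string, object> _userLocks =
    new ConcurrentDictionary<string, object>();

private bool CheckRemainingCreditsForUser(string userId)
{
    // One lock object per userId: requests for the same user are serialized,
    // requests for different users run in parallel.
    var userLock = _userLocks.GetOrAdd(userId, _ => new object());
    lock (userLock)
    {
        var user = GetUserFromDB(userId);
        if (user.RemaingCredit == 0) return false;
        RemoveOneCreditFromUser(user);
        SaveUserToDB(user);
        return true;
    }
}
Note that, like the original, this still holds the lock for the whole database round trip; it only removes the contention between different users.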
Instead of locking in every method, why aren't you using a singleton that manages the users' credits?
It would be responsible for giving out the remaining allowances AND managing them at the same time, without losing the thread safety of the code.
By the way, a method named CheckRemainingCreditsForUser should not remove allowances, since the name doesn't imply it. You may be the only developer on this project, but it won't hurt to split this into 2 methods, for reusability and code comprehension.
EDIT: And this object should also hold the Users dictionary
