Pattern for handling multiple simultaneous requests with Redis - C#

I have an API that executes a query that takes 1 minute to process. When someone makes a GET request to this API, I execute the query and save the results in Redis.
New requests to this API will use the cached data from Redis, avoiding doing this 1 minute query again.
My problem is: at 8 AM, my cache is dropped because new data is available in the database. The first API request will execute the 1-minute query. The second request will also execute the same query, since the first one hasn't finished yet and Redis is still empty.
In the end, I have thousands of queries running, the database can't handle all of them, and no query can finish because the database stops working.
Is there a known pattern to handle this?
What I'm doing to handle this is setting a flag "isQueryRunning" (made thread-safe by a lock) to allow just one thread to execute at a time, leaving the others waiting, but I would like to know if there are other known strategies.

There are several strategies. The one you mentioned is valid but somewhat basic: it won't work well behind a load balancer, because your lock is not distributed.
A common way around this is to store the state in a persistent store. In your case, this state flag could be stored in Redis itself, which gets you past the non-distributed-lock problem.
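As a minimal sketch of such a Redis-held lock using the StackExchange.Redis client (the key name, token, and expiry are illustrative assumptions, not from the question):

    using System;
    using StackExchange.Redis;

    var redis = ConnectionMultiplexer.Connect("localhost");
    var db = redis.GetDatabase();
    string lockKey = "report:rebuild-lock";       // hypothetical key
    string token = Guid.NewGuid().ToString();     // identifies this lock holder

    // LockTake issues SET key value NX PX, so only one caller across
    // all servers acquires the lock; the expiry guards against crashes.
    if (db.LockTake(lockKey, token, TimeSpan.FromMinutes(2)))
    {
        try
        {
            // run the 1-minute query and write the result to the cache
        }
        finally
        {
            db.LockRelease(lockKey, token);       // only the holder can release
        }
    }
    else
    {
        // another server is already rebuilding: serve stale data,
        // or wait/poll until the cache key reappears
    }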
However, this ties up the server, because you're keeping request threads waiting. In REST it is common for an API to simply check the state and either
return stale data (a different cached copy still available while the cache is being rebuilt) or
return a 202 ACCEPTED HTTP status with a LOCATION header containing a URI that points to the new data. A client can then poll that location. This of course means you have to code that other endpoint, which will continue to return 202 until the data is available, and then either
return 200 with the data, or
return 301 or 307 (redirects back to the original URI)
The first is very simple if stale data is acceptable. You can simply do a "swap" in the cache (very quick) when the new data is available. (Incidentally, this swap is probably better than dropping the data altogether before replacing it.)
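As a sketch of that swap with StackExchange.Redis (reusing the db handle from the lock sketch above; the key names and newReportJson are illustrative): build the new payload under a staging key, then rename it over the live key, which Redis performs atomically:

    // build the new payload under a temporary key first
    db.StringSet("report:staging", newReportJson);
    // RENAME is atomic on the server: readers see either the old data
    // or the new data, never an empty cache
    db.KeyRename("report:staging", "report");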
The second is, of course, more complex, but it scales well and avoids stale data as much as possible. More than just a location can be returned: you may return information such as an estimated ready time for the data (e.g. one minute), a value indicating how much of the data has been retrieved (e.g. a percentage), or other status. See here for an example.
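A minimal sketch of that 202 flow in ASP.NET Core (the controller, routes, and rebuild helper are all hypothetical names, not from the answer):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Caching.Memory;

    [ApiController]
    [Route("api/report")]
    public class ReportController : ControllerBase
    {
        private readonly IMemoryCache _cache;
        public ReportController(IMemoryCache cache) => _cache = cache;

        [HttpGet]
        public IActionResult Get()
        {
            if (_cache.TryGetValue("report", out object report))
                return Ok(report);                   // 200: data is ready
            StartRebuildIfNotRunning();              // hypothetical: starts the 1-minute query once
            return Accepted("/api/report/status");   // 202 with a Location header to poll
        }

        [HttpGet("status")]
        public IActionResult Status() =>
            _cache.TryGetValue("report", out object _)
                ? Redirect("/api/report")            // 302 back to the original URI (301/307 also work)
                : Accepted();                        // still building: 202 again

        private void StartRebuildIfNotRunning()
        {
            // out of scope for this sketch: acquire the distributed lock,
            // run the query on a background thread, populate the cache
        }
    }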

Related

Concurrency for many API requests

I have an API that is used to add/update records in a DB.
At the start of such a request I try to get data from the DB by some identifiers in the request and then do some updates.
If there are a few concurrent requests to my API, duplicates may be created.
So I am thinking about "waiting till the previous request is finished".
For this I found a solution: use new SemaphoreSlim(1, 1) to allow only one request at a time.
But now I am wondering if it is a good solution: given that one request may take up to a minute of processing, will the other requests stay alive until the SemaphoreSlim allows them to proceed?
Surely that depends on configuration, but it will always be some approximate number in the settings, possibly limited by additional threshold settings.
The canonical way to do this is to use database transactions.
For example, SQL Server's transaction isolation level "serializable" ensures that even if transactions are concurrent, the effect will be as if they had been executed one after another. This will give you the best of both worlds: Your requests can be processed in parallel, and the database engine ensures that locks and serialization happen if, and only if, it's required to avoid transactional inconsistency.
Conveniently, "serializable" is the default isolation level used by TransactionScope. Thus, if your DB library provider supports it, wrapping your code in a TransactionScope block might be all you need.
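A minimal sketch of that (the read-then-write body is illustrative):

    using System.Transactions;

    // TransactionScope defaults to Serializable isolation; shown explicitly here.
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    {
        // 1) read the existing record(s) by identifier
        // 2) insert or update depending on what was found
        // Under Serializable, a concurrent request doing the same
        // check-then-insert is serialized by the engine, so no duplicates.
        scope.Complete();   // commit; disposing without Complete() rolls back
    }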

Caching and multi-thread synchronization with ReaderWriterLockSlim

I have a web service which is called by several web-service clients. This web service returns the current inventory list of an inventory. This list can be big, 10K+ product IDs, and it takes quite some time (~4 minutes) to refresh the list by reading data from the database. I don't want to refresh the list every time this web service is called, as it may consume too many resources on my database server, and the performance is consistently poor.
What I intend to do is give the inventory list a time-to-live value: when a client asks for the inventory list, if the data is not out of date I just return it right away; if the data is obsolete I read it from the database, update the list and its time-to-live value, and then return the refreshed data to the client. As several clients may call this web service, it looks like I need multi-thread synchronization (multiple-reader single-writer, the ReaderWriterLockSlim class?) to protect this inventory list, but I haven't found a good design that gives this web service good performance: only one client should refresh the data, the other clients shouldn't redo the work while the data is still within its time-to-live window, and the web service should return the result as soon as possible after the writer thread completes the update.
I have also thought about another solution (also using the ReaderWriterLockSlim class): creating a separate thread that refreshes the inventory list periodically (a writer thread refreshes the data every 15 minutes), and letting all the web-service clients use reader threads to read the data. This may work, but I don't really like it, as it still wastes web-server resources: even if there are no client requests, the system has to refresh the inventory list every 15 minutes.
Please suggest some solution. Thanks.
I would suggest using a MemoryCache.
https://stackoverflow.com/a/22935978/34092 can be used to detect when the item expires. https://msdn.microsoft.com/en-us/library/7kxdx246.aspx is also worth a read.
At this point, the first line of the code you write (in CacheRemovedCallback) should write the value back to the MemoryCache - this will allow readers to keep reading it.
Then it should get the latest data, and then write that new data to the MemoryCache (again passing in a CacheItemPolicy to allow the callback to be called when the latest version is removed). And so on and so on...
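Putting those steps together, a minimal sketch with System.Runtime.Caching (LoadInventoryFromDb is a hypothetical stand-in for the ~4-minute query):

    using System;
    using System.Runtime.Caching;

    public static class InventoryCache
    {
        static readonly MemoryCache Cache = MemoryCache.Default;

        public static void SetInventory(object value)
        {
            var policy = new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(15),
                RemovedCallback = args =>
                {
                    // step 1: write the stale value straight back so readers keep reading it
                    SetInventory(args.CacheItem.Value);
                    // step 2: fetch fresh data and overwrite; the new entry carries the
                    // same policy, so the cycle repeats at the next expiration
                    SetInventory(LoadInventoryFromDb());
                }
            };
            Cache.Set("inventory", value, policy);
        }

        static object LoadInventoryFromDb()
        {
            // hypothetical stand-in for the expensive query
            return new object();
        }
    }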
Do you only ever run one instance of your service? Then in-memory caching is enough for you. Or use a ConcurrentDictionary if you don't want to implement the locking yourself.
If you run multiple instances of the service, it might be advisable to use an out-of-process cache like Redis.
Also, you could maintain the cached list continuously so that it is always in sync with what you have in the database.
There are many different cache vendors; .NET, ASP.NET, and ASP.NET Core each have different solutions. For distributed caching there are also many options. Just pick whatever fits best with the framework you use.
You can also use libraries like CacheManager to help you implement what you need more easily.

How to design a web service that returns cached results which are updated by an independent recurring thread?

I am looking to redesign a service that is used by several client applications. These applications make repeated requests, at 30-to-60-second intervals, to one particular method on the service. This method gets data and then caches it for approximately 30 to 45 seconds. Because the method is driven by requests, on every request it checks whether the time since the last cache refresh is > 30 seconds and, if so, refreshes the cache before returning the results.
While I'd eventually like to move to a pub / sub model, for now I have to stay with polling. What I would like to do is create a repeating background process that refreshes the cache on a specified time interval independent of requests to the service. Then as requests to the method come it would always just return from cache.
I am not sure exactly how to accomplish this. I don't believe I want to tie the kickoff of the background thread to an initial request, but I'm not sure how else to start it. Do I have to create some kind of Windows service that shares an App Domain, or is there a better way?
Why don't you want to use a cache expiration mechanism? That way you can be sure the returned data is correct whenever the cached data becomes stale, and you don't make extra (possibly unneeded) requests to the DB.
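A minimal sketch of that lazy-expiration approach (class, field, and method names are hypothetical): refreshes happen only when a request actually arrives and finds the data stale, and double-checked locking ensures only one thread does the refresh:

    using System;
    using System.Collections.Generic;

    public class CachedInventory
    {
        private static readonly object Gate = new object();
        private static List<string> _inventory;        // hypothetical cached data
        private static DateTime _loadedAtUtc;
        private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(30);

        public List<string> GetInventory()
        {
            if (_inventory == null || DateTime.UtcNow - _loadedAtUtc > Ttl)
            {
                lock (Gate)                            // one refresher at a time
                {
                    if (_inventory == null || DateTime.UtcNow - _loadedAtUtc > Ttl)
                    {
                        _inventory = LoadFromDb();     // hypothetical DB read
                        _loadedAtUtc = DateTime.UtcNow;
                    }
                }
            }
            return _inventory;                         // everyone else reads the cached copy
        }

        private static List<string> LoadFromDb()
        {
            // hypothetical: the expensive query goes here
            return new List<string>();
        }
    }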

Write to DB efficiently from a multithreaded application

I have a server application that receives data from clients that must be stored in a database.
Client/server communication is made with ServiceStack, and for every client call there can be 1 or more records to be written.
The clients don't need to wait for the data to be written, or to know whether it has been written.
At my customer's site the database may sometimes be unavailable for short periods, so I want to retry the write until the database is available again.
I can't use a service bus or other software; it must be only my server and the database.
I considered two possibilities:
1) fire a thread for every call that writes a record (or a group of records with a multiple insert) and, in case of failure, retries until it succeeds
2) enqueue the data to be written in a global in-memory list, and have a single background thread that continuously makes a single call to the DB (with a multiple insert)
What do you consider the most efficient way to do it? Or do you have another proposal?
Option 1 is easier, but I'm worried about having too many threads running at the same time, especially if the DB becomes unavailable.
In case I follow the second route, my idea is:
1) every server thread opened by a client locks the global list, inserts one or more records to be written to the DB, releases the lock, and exits
2) the background thread locks the global list (which has, say, 50 records), makes a deep copy to a temp list, and unlocks the global list
3) the server threads continue adding data to the global list; in the meantime the background thread tries to write the 50 records, retrying until it succeeds
4) when the background thread manages to write, it locks the global list again (which by now may have 80 records), removes the first 50 that have been written, and everything starts again
Is there a better way to do this?
--------- EDIT ----------
My issue is that I don't want the client to have to wait in any way, not even for the record-to-be-sent to be added to a locked list (which happens while the writing thread writes, or tries to write, the list to the DB).
That's why in my solution I lock the list only for the time it takes to copy it to a temporary list that will be written to the DB.
I'm just wondering if this is crazy and there is a much simpler solution that I'm not seeing.
My understanding of the problem is as follows:
1. Client sends data to be inserted into the DB
2. Server receives the data and inserts it into the DB
3. Client doesn't want to know whether the data was inserted properly or not
In this case, I would suggest that the server create a single queue holding the data to be inserted into the DB: the receiving thread just takes the data from the client and adds it to the in-memory queue, and another thread empties the queue and takes care of writing to the DB for persistence.
You may use a file-based queue, a priority queue, or just an in-memory queue for storing the records temporarily.
If you use the .NET thread pool you don't need to worry about creating too many threads, as thread lifetime is managed for you.
Task.Factory.StartNew(DbWriteMethodHere)
If you want to be smarter you could add the records you want to commit to a BlockingCollection<T>, then have a thread call Take() in a loop (it blocks while the collection is empty) until it has gathered a big enough batch, say 50 records, to commit.
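A minimal sketch of that batching consumer (Record and WriteBatchWithRetry are hypothetical placeholders):

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    var queue = new BlockingCollection<Record>();

    // producers: each incoming ServiceStack call just adds and returns,
    // never touching the DB itself
    // queue.Add(record);

    // single consumer: drain records and flush in batches of up to 50
    Task.Factory.StartNew(() =>
    {
        var batch = new List<Record>(50);
        foreach (var record in queue.GetConsumingEnumerable())  // blocks while empty
        {
            batch.Add(record);
            if (batch.Count >= 50 || queue.Count == 0)          // full, or queue drained
            {
                WriteBatchWithRetry(batch);  // hypothetical: multi-row insert, retried until the DB is back
                batch.Clear();
            }
        }
    }, TaskCreationOptions.LongRunning);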

Is there a fast and scalable solution to save data?

I'm developing a service that needs to be scalable on the Windows platform.
Initially it will receive approximately 50 connections per second (each connection sending approximately 5 KB of data), but it needs to scale to more than 500 in the future.
It's impractical (I guess) to save the received data to a common database like Microsoft SQL Server.
Is there another solution to save the data? Considering that it will receive more than 6 millions "records" per day.
There are 5 steps:
Receive the data via http handler (c#);
Save the received data; <- HERE
Request the saved data to be processed;
Process the requested data;
Save the processed data. <- HERE
My pre-solution is:
Receive the data via http handler (c#);
Save the received data to a message queue;
Request the saved data from the message queue for processing, using a Windows service;
Process the requested data;
Save the processed data to Microsoft SQL Server (here's the bottleneck);
6 million records per day doesn't sound particularly huge. In particular, that's not 500 per second for 24 hours a day - do you expect traffic to be "bursty"?
I wouldn't personally use message queue - I've been bitten by instability and general difficulties before now. I'd probably just write straight to disk. In memory, use a producer/consumer queue with a single thread writing to disk. Producers will just dump records to be written into the queue.
Have a separate batch task which will insert a bunch of records into the database at a time.
Benchmark the optimal (or at least a "good") number of records to upload in a batch. You may well want to have one thread reading from disk and a separate one writing to the database (with the file thread blocking if the database thread has a big backlog) so that you don't wait for both file access and the database at the same time.
I suggest that you do some tests nice and early, to see what the database can cope with (and letting you test various different configurations). Work out where the bottlenecks are, and how much they're going to hurt you.
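For the batch-insert step, a sketch using SqlBulkCopy (the connection string, table name, and batch size are placeholders to tune against your benchmarks):

    using System.Data;
    using System.Data.SqlClient;

    static void BulkInsert(DataTable pending)
    {
        using (var conn = new SqlConnection("Server=.;Database=Intake;Integrated Security=true"))
        using (var bulk = new SqlBulkCopy(conn)
        {
            DestinationTableName = "dbo.RawRecords",  // placeholder table
            BatchSize = 1000                          // tune via benchmarking
        })
        {
            conn.Open();
            bulk.WriteToServer(pending);              // one bulk round trip per batch
        }
    }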
I think that you're prematurely optimizing. If you need to send everything into a database, then see if the database can handle it before assuming that the database is the bottleneck.
If the database can't handle it, then maybe turn to a disk-based queue like Jon Skeet is describing.
Why not do this:
1.) Receive data
2.) Process data
3.) Save the original and processed data at once
That would save you the trouble of requesting the data again when you already have it. I'd be more worried about your table structure and your database machine than about the actual flow, though. I'd make sure that your inserts are as cheap as possible. If that isn't possible, then queuing up the work makes some sense. I wouldn't use a message queue myself. Assuming you have a decent SQL Server machine, 6 million records a day should be fine, provided you're not writing a ton of data in each record.
