Concurrent Instances - Data / Design problem - C#

I'm trying to come up with a design for an application (C#/Avalonia) which will allow creating views for data coming from multiple sources. The overall idea is to link the sources and present the outcome using various visualization components.
There are multiple sources of data:
Database 1
Database 2/3/4
SOAP
Database 1 is going to be used to store everything related to the application itself (users, permissions and so on).
Databases 2-4+ are only data feeds.
SOAP - this is where I struggle; I'm not quite sure how to handle it. There could be 10-50 concurrent instances of the application running, and each of them could request the same data update from SOAP (the provider's restrictions make that impossible).
What I was thinking was to take the following approach:
Request initial data from SOAP
Cache the data in database 1 with a timestamp
Define a delay between the requests
Once a user requests a data update from SOAP, check if we should return cached or fresh data based on timestamp and delay value.
This approach has a failure mode when a user terminates the application in the middle of requesting new data:
User 1 requests new data and marks the database to ensure no other requests are processed
User 2 requests new data - nothing happens at this stage, so it waits and queries again
User 1 terminates before clearing the mark - no user ever gets new data
Is the approach completely wrong, and is trying to handle this client-side only doomed to fail?
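One way to sketch the cache-plus-delay idea while fixing the terminated-client problem is to make the "mark" a lease that expires. This is only an illustration - `SoapCacheCoordinator`, `RefreshDelay`, and `LeaseTimeout` are hypothetical names, and the timestamp/lease values are assumed to live in Database 1:

```csharp
using System;

// Sketch: decide whether to serve cached data or refresh from SOAP.
// The refresh lock is a lease with a timeout, so a client that dies
// mid-refresh can never block updates forever.
public class SoapCacheCoordinator
{
    public TimeSpan RefreshDelay { get; }  // minimum time between SOAP requests
    public TimeSpan LeaseTimeout { get; }  // how long a refresh lock may be held

    public SoapCacheCoordinator(TimeSpan refreshDelay, TimeSpan leaseTimeout)
    {
        RefreshDelay = refreshDelay;
        LeaseTimeout = leaseTimeout;
    }

    // Should this instance call SOAP, or serve the cached copy?
    public bool ShouldRefresh(DateTime lastUpdatedUtc, DateTime? leaseTakenUtc, DateTime nowUtc)
    {
        // Cache is still fresh: always serve it.
        if (nowUtc - lastUpdatedUtc < RefreshDelay)
            return false;

        // Another instance holds a live lease: wait and serve the cache.
        if (leaseTakenUtc.HasValue && nowUtc - leaseTakenUtc.Value < LeaseTimeout)
            return false;

        // Cache is stale and no live lease: this instance may refresh.
        return true;
    }
}
```

In the real application the lease would be taken atomically in Database 1 (e.g. a conditional UPDATE), and a client that completes a refresh would clear the lease and update the timestamp in one transaction.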

Related

Need to perform an operation in which multiple users send requests, where each request takes a long time to execute a single operation

We are using Angular 9 as the front end, .NET Core 3.1 as the backend, and SQL Server as the database.
Each user sends a single request to the server, but the server takes almost 90 minutes to execute each request's operations.
We need a solution where the backend holds all requests, or executes a few at a time, without overlapping the calculations of requests that are already running.
We are thinking of introducing a queue in the backend; we already tried Parallel.ForEach, but it messes up some calculations.
Is there any other way to handle this?
Thanks in advance
It sounds like you have an asynchronous operation. There are multiple possible solutions; I will describe two simple ones which you can easily extend.
First, you have to separate triggering the "job" from querying its state.
Given the limited information about the whole system, I assume you have an API running on a server. You could use the file system to persist the running jobs and query them via a separate request: one request schedules the "job", and another returns the information about its state.
If there is no file system available, you can also use a database for this, or just store the state in memory (be aware that an undesired restart of the program will drop that information).
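The in-memory variant described above can be sketched as follows; `JobStore` and `JobState` are illustrative names, and a `SemaphoreSlim` serializes the work so long-running calculations never overlap (as the question requires). As noted, this state is lost on restart:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public enum JobState { Queued, Running, Done }

public class JobStore
{
    private readonly ConcurrentDictionary<Guid, JobState> _jobs = new();
    private readonly SemaphoreSlim _gate = new(1, 1); // one job at a time: no overlapping calculations

    // First endpoint: schedule the job and return a ticket immediately.
    public Guid Schedule(Func<Task> work)
    {
        var id = Guid.NewGuid();
        _jobs[id] = JobState.Queued;
        _ = Task.Run(async () =>
        {
            await _gate.WaitAsync();          // queue behind any running job
            try
            {
                _jobs[id] = JobState.Running;
                await work();                 // the ~90-minute calculation
            }
            finally
            {
                _jobs[id] = JobState.Done;
                _gate.Release();
            }
        });
        return id;
    }

    // Second endpoint: the client polls the state with its ticket.
    public JobState GetState(Guid id) =>
        _jobs.TryGetValue(id, out var s) ? s : throw new KeyNotFoundException("unknown ticket");
}
```

A real service would persist the job table (file system or database, as above) so tickets survive restarts.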
I achieved this functionality using Hangfire: https://docs.hangfire.io/en/latest/

ASP.NET + Entity Framework - handling intermittent traffic spikes

I have an MVC and WebAPI application that needs to log activities performed by the users back to my database. This is almost always a single insert into a table that has fewer than 5 columns (i.e. very little data is crossing the wire). The data interface that I am currently using is Entity Framework 6.
Every once in a while, I'll get a large number of users needing to log that they performed a single activity. In this case, "Large Number" could be a couple hundred requests every second. This typically will only last for a few minutes at most. The rest of the time, I see very manageable traffic to the site.
When the traffic spikes, some of my clients are getting timeout errors because the page doesn't finish loading until the server has inserted the data into the database. Now, the actual inserting of the data into the database isn't necessary for the user to continue using the application, so I can cache these requests somewhere locally and then batch insert them later.
Are there any good solutions for ASP.NET MVC to buffer incoming request data and then batch insert it into the database every few seconds?
As for my environment, I have several servers running Server 2012 R2 in a load balanced Web Farm. I would prefer to stay stateless if at all possible, because users might hit different servers per request.
When the traffic spikes, some of my clients are getting timeout errors because the page doesn't finish loading until the server has inserted the data into the database.
I would suggest using a message queue. Have the website rendering code simply post an object to the queue representing the action, and have a separate process (e.g. Windows Service) read off the queue and write to the database using Entity Framework.
UPDATE
Alternatively you could log access to a file (fast), and have a separate process read the file and write the information into your database.
I prefer the message queue option, but it does add another piece of architecture.
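A minimal in-process sketch of the buffer-then-batch idea (the message queue remains the more robust choice for a web farm): the request path only enqueues, and a background timer flushes batches. `LogEntry`, `ActivityBuffer`, and the flush delegate are illustrative stand-ins for the real EF insert code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public record LogEntry(int UserId, string Activity, DateTime WhenUtc);

public class ActivityBuffer : IDisposable
{
    private readonly ConcurrentQueue<LogEntry> _queue = new();
    private readonly Action<IReadOnlyList<LogEntry>> _flush;
    private readonly Timer _timer;

    public ActivityBuffer(Action<IReadOnlyList<LogEntry>> flush, TimeSpan interval)
    {
        _flush = flush; // e.g. a batched EF insert or SqlBulkCopy
        _timer = new Timer(_ => Flush(), null, interval, interval);
    }

    // Called from the request path: cheap, never blocks on the database.
    public void Log(LogEntry entry) => _queue.Enqueue(entry);

    // Drain everything queued so far and hand it to the writer in one batch.
    public void Flush()
    {
        var batch = new List<LogEntry>();
        while (_queue.TryDequeue(out var e)) batch.Add(e);
        if (batch.Count > 0) _flush(batch);
    }

    public void Dispose() { _timer.Dispose(); Flush(); }
}
```

Note this buffer is per-server state, so in a load-balanced farm each node flushes its own batches; entries queued at the moment of a hard crash are lost, which is the trade-off against the external queue.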

Maintaining Data across machines in the Cloud

I'm working on a Cloud-Hosted ZipFile creation service.
This is a Cross-Origin WebApi2 service used to provide ZipFiles from a file system that cannot host any server side code.
The basic operation goes like this:
User makes a POST request with a string[] of Urls that correlate to file locations
WebApi reads the array into memory, and creates a ticket number
WebApi returns the ticket number to the user
AJAX callback then redirects the user to a web address with the ticket number appended, which returns the zip file in the HttpResponseMessage
In order to handle the ticket system, my design approach was to set up a Global Dictionary that paired a randomly generated 10 digit number to a List<String> value, and the dictionary was paired to a Queue storing 10,000 entries at a time. ([Reference here][1])
This is partially due to the fact that WebApi does not support Cache
When I make my AJAX call locally, it works 100% of the time. When I make the call remotely, it works about 20% of the time.
When it fails, this is the error I get:
The given key was not present in the dictionary.
Meaning, the ticket number was not found in the Global Dictionary Object.
We (with the help of Stack) tracked down the issue to multiple servers in the Cloud.
In this case, there are three.
That doesn't mean there is a one-in-three chance of this working; what seems to be going on is this:
Calls made while the browser is on the cloud site work 100% of the time because the same machine handles the whole operation end-to-end
Calls made from other sites work far less often because there is no continuity between the machine that takes the AJAX call and the machine that takes the subsequent REDIRECT to the website to download the file. It's simple luck of the draw whether the same machine handles both.
Now, I'm sure we could create a database to handle requests, but that seems like a lot more work to maintain state among these machines.
Is there any non-database way for these machines to maintain the same Dictionary across all sessions that doesn't involve setting up a fourth machine just to handle queue?
Is the reason for the dictionary simply to have a queue of operations?
It seems you either need:
A third machine that hosts the queue (despite your objection). If you're using Azure, an obvious choice might be the distributed Azure Cache Service.
To forget about the dictionary and just have the server package and deliver the requested result, perhaps in an asynchronous operation.
If your ASP.NET web app uses session state, you will need to configure an external session state provider (either the Redis Cache Service or a SQL Server session state provider).
There's a step-by-step guide here.
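One way to move toward the shared-store option above is to put the ticket dictionary behind an interface, so the in-memory version can be swapped for a distributed cache (Azure Cache / Redis) that all three machines see. `ITicketStore` and both implementations' names are illustrative; only the single-machine version is sketched here:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public interface ITicketStore
{
    string Create(IReadOnlyList<string> urls);  // returns a 10-digit ticket number
    IReadOnlyList<string> Take(string ticket);  // one-shot retrieval by ticket
}

// Works only while a single machine handles both requests - exactly the
// failure mode in the question. A Redis/Azure Cache implementation of the
// same interface would make the ticket visible to every machine.
public class InMemoryTicketStore : ITicketStore
{
    private readonly ConcurrentDictionary<string, IReadOnlyList<string>> _tickets = new();
    private readonly Random _rng = new();

    public string Create(IReadOnlyList<string> urls)
    {
        // 10-digit random ticket, matching the design in the question.
        var ticket = _rng.NextInt64(1_000_000_000L, 10_000_000_000L).ToString();
        _tickets[ticket] = urls;
        return ticket;
    }

    public IReadOnlyList<string> Take(string ticket) =>
        _tickets.TryRemove(ticket, out var urls)
            ? urls
            : throw new KeyNotFoundException("Ticket not on this machine - use a shared store.");
}
```

The controller would depend only on `ITicketStore`, so switching from the local dictionary to the distributed cache is a one-line change at composition time.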

Data persistence for sending customer emails

I'm developing a system to handle sending transactional emails to our customers.
This is how it works:
1. An event occurs during the order's life cycle, for example 'shipped'
2. This event will trigger the creation of an email in the database (email queue)
3. A separate Windows service polls the DB table for new emails to send. When it finds one, it calls a Web service with all the required data. It's the Web service's responsibility to handle the actual sending of the email.
My question relates to step 2.
When an email triggering event occurs, should I take a snapshot of all the data required by the service (thereby duplicating data and introducing new tables) or should I get the required data from the transactional db tables only at the point where I'm ready to call the Web service.
It totally depends on your data volume. If you have a large amount of data, go with the first solution: denormalize the data into a separate table (accepting some duplication) and then send the email.
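The snapshot option from step 2 can be sketched as a self-contained queue row: at trigger time everything the email needs is serialized into the row, so the later send does not depend on the live order tables. `OrderShippedData`, `EmailQueueRow`, and `EmailQueue` are illustrative names:

```csharp
using System;
using System.Text.Json;

// Example payload for a 'shipped' event; in reality this would carry
// whatever fields the email template needs.
public record OrderShippedData(int OrderId, string CustomerEmail, string TrackingNumber);

// One row in the email queue table: the payload is a frozen snapshot.
public record EmailQueueRow(Guid Id, string EventType, string PayloadJson, DateTime CreatedUtc);

public static class EmailQueue
{
    // Step 2 from the question: the event creates a self-contained queue row.
    public static EmailQueueRow Enqueue(OrderShippedData data) =>
        new(Guid.NewGuid(), "shipped", JsonSerializer.Serialize(data), DateTime.UtcNow);

    // Step 3: the Windows service later rehydrates the snapshot and calls the Web service.
    public static OrderShippedData Read(EmailQueueRow row) =>
        JsonSerializer.Deserialize<OrderShippedData>(row.PayloadJson)!;
}
```

Serializing the payload as JSON avoids introducing a whole set of new denormalized tables: one queue table with a payload column covers every event type, at the cost of not being queryable by field.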

How do I reuse TcpClients in synchronous blocking mode?

I have a TcpClient that connects to a backend system, sending XML queries and receiving XML responses.
The backend requires that the client log on and set some environment settings before any querying can take place. This is an expensive operation, so it makes sense to create the TcpClient and keep it open for repeated queries.
The backend, I'm told, is optimised for handling many connections, so for performance reasons I'd like to have numerous TcpClients connecting.
The queries are in the form of a list which contains thousands of items.
My question is how best to create a group of reusable connected tcpclients so I can execute a number of simultaneous requests from the list (say 10 at a time), what pattern would suit this scenario and are there any examples I can learn best practice from?
Currently it just executes them one by one using a single service which encapsulates the connection and logon process.
var service = new QueryService(server, port, user, pass, parameters, app);
foreach (var item in queries)
{
    service.ExecuteRequest(item);
}
service.Disconnect();
What you need is a thread pool or the object pool pattern. Basically, you create a pool of service objects, and whenever any part of the client application needs the service, it borrows one of the pooled objects based on some criterion (e.g. whichever is free).
For this to work, requests must be stateless, so that when an arbitrary service object is selected to make a request, the connection's history does not cause problems.
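A sketch of that pool, assuming the asker's `QueryService` is hidden behind an illustrative `IQueryService` interface: N connected, logged-in instances are shared through a `BlockingCollection`, and the query list is drained with at most N requests in flight (e.g. N = 10, as the question suggests):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for the asker's QueryService (connect/logon happens in its constructor).
public interface IQueryService { string ExecuteRequest(string query); }

public class ServicePool
{
    private readonly BlockingCollection<IQueryService> _pool = new();
    private readonly int _size;

    public ServicePool(Func<IQueryService> connect, int size)
    {
        _size = size;
        for (int i = 0; i < size; i++) _pool.Add(connect()); // pay the logon cost once per client
    }

    // Run every query with at most _size requests in flight.
    public void RunAll(IEnumerable<string> queries, Action<string> onResult)
    {
        Parallel.ForEach(queries,
            new ParallelOptions { MaxDegreeOfParallelism = _size },
            query =>
            {
                var svc = _pool.Take();        // borrow a connected client (blocks if all busy)
                try { onResult(svc.ExecuteRequest(query)); }
                finally { _pool.Add(svc); }    // return it for reuse
            });
    }
}
```

Borrow/return in a try/finally keeps a failed request from leaking a client out of the pool; a production version would also add a Disconnect/Dispose pass over the pooled clients when shutting down.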
