I have an ASP.NET Core Web API project that has one controller with a method called GetLocations.
GetLocations connects to 5 other web services on the internet, gathers some info, and returns a collection as JSON. In this method I am caching the data for 5 minutes using in-memory caching.
If the cache expires, it tries to connect to all 5 services and get the info and so on.
My problem is:
I have a lot of users requesting this data constantly, 50 requests a second to this API.
When the cache expires I believe there is some kind of thread contention. I have limited visibility into the project at the moment, but I suspect that all these requests call the method and reach out to the 5 dependent services until one of them gets a completed response from all 5.
Is my assumption right? If so, how can I go about fixing this? Will I need to make each call to the web services async? Will that help this scenario? I am not 100% sure, because the requests are what trigger the method call.
You should definitely make the calls to the external services use async/await.
That's just a given: the best practice is to always use async for I/O-heavy operations (such as calling a third-party service).
Now, you should also create a class that manages these calls. You can register it as a singleton in your IoC configuration. In that class, make sure you're "locking" so that you avoid the issue you just described and don't call the underlying services numerous times while the cache is being rebuilt.
Check here:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/lock-statement
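For illustration, here is a minimal sketch of such a singleton, assuming ASP.NET Core's IMemoryCache and a SemaphoreSlim (a plain lock block cannot contain an await); the class, cache key, and helper names are placeholders, not anything from your project:

public class LocationCache
{
    private readonly IMemoryCache _cache;                                    // Microsoft.Extensions.Caching.Memory
    private readonly SemaphoreSlim _refreshLock = new SemaphoreSlim(1, 1);   // only one refresher at a time
    private const string CacheKey = "locations";

    public LocationCache(IMemoryCache cache) => _cache = cache;

    public async Task<IReadOnlyList<string>> GetLocationsAsync()
    {
        if (_cache.TryGetValue(CacheKey, out IReadOnlyList<string> cached))
            return cached;

        await _refreshLock.WaitAsync();
        try
        {
            // Re-check after acquiring the lock: another request may have
            // already rebuilt the cache while we were waiting.
            if (_cache.TryGetValue(CacheKey, out cached))
                return cached;

            IReadOnlyList<string> data = await LoadFromExternalServicesAsync();
            _cache.Set(CacheKey, data, TimeSpan.FromMinutes(5));
            return data;
        }
        finally
        {
            _refreshLock.Release();
        }
    }

    private async Task<IReadOnlyList<string>> LoadFromExternalServicesAsync()
    {
        // Placeholder for the five downstream calls (ideally issued in parallel).
        await Task.Delay(100);
        return new List<string>();
    }
}

You would register it with services.AddMemoryCache() and services.AddSingleton<LocationCache>(). With this in place, when the cache expires only one request rebuilds it; the rest wait briefly and then read the cached value.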
You are facing this issue for the following reason.
You are using a cache, and it expires after a fixed time.
After the cache expires, you call the external web service methods to collect the data. At this point another queued request may be chosen for execution.
That request also checks the cache, finds no data in it, and so it too calls the external services, and the same goes for every other request.
The solution:
First, check whether the cache contains data.
If it does not, take a lock so that the following section is executed by only a single thread.
Inside the lock, check the cache again; if it now contains data, simply return it, otherwise call the external services.
If another thread is selected for execution at this point, it has to wait for the exclusive section to finish its work.
Once that section completes, it stores the data in the cache, so any queued or new request afterwards reads the data from the cache.
Note: it should look something like this.
public List<string> GetData()
{
    if (Cache[key] == null)
    {
        lock (obj) // obj should be a static object
        {
            // Check again inside the lock: another thread may have filled the cache already.
            if (Cache[key] == null)
            {
                // Load data from the external services into `data`, then cache it
                Cache[key] = data;
            }
        }
    }
    return (List<string>)Cache[key];
}
I have an issue with using a database from a thread in my ASP.NET application.
When my application starts, I want to start a thread called "BackgroundWorker" with it, which runs in the background until the whole application is stopped.
The problem is that I have massive problems with the DbContext in the thread.
If I try to start the worker in my Startup.cs, in the "ConfigureServices" or "Configure" methods, and then initialize the DbContext in the worker's constructor like this, "dbContext = new ApplicationContext()", it tells me that the connection is not configured when I try to operate on the database inside the while (true) loop.
If I instead write a separate controller for the worker, which receives an ApplicationContext in its constructor and then starts a thread like this when I call the controller once with a GET request:
public BackgroundWorker(ChronicusContext dbContext)
{
    _dbContext = dbContext;
    _messageService = new MailMessageService();
}

// GET: api/backgroundworker
[HttpGet]
[Route("start")]
public void StartWorker()
{
    //Thread thread = new Thread(this.DoBackGroundWork);
    Thread thread = new Thread(() => DoBackGroundWork(this._dbContext));
    thread.Start();
}

public void DoBackGroundWork(ChronicusContext _dbContext)
{
    while (true)
    {
        if (_dbContext.PollModels.Any()) // Here is the exception
        {
            ...
        }
    }
}
Then I receive a System.ObjectDisposedException inside the while (true) loop, telling me the object is already disposed.
I have tried these and similar things in many different ways, but I always get exceptions like these two, or that the database connection is closed.
Can somebody help me and tell me how this works?
Thank you!
Generally, server-side multithreading in web applications does not happen often and is, most of the time, a huge no-no.
Conceptually, your server is already "multithreaded": it handles many HTTP requests from clients/users/other servers. In mobile and web architecture/design, your server(s) process multiple requests while your clients make asynchronous calls and deal with waiting for responses from long-running calls like your API method StartWorker.
Think of this scenario: you make a request to your Web API method StartWorker, and the client making the request is waiting for a response. Putting the work on another thread does nothing, because the client is still waiting for a response.
For example, let's consider your client to be an HTML page with an Ajax call. You call StartWorker via Ajax to load data into an HTML table. From a UX perspective, you will want to show a progress spinner while that long-running StartWorker call responds to your page's Ajax request. When StartWorker responds, the Ajax callback loads the HTML table with the response, so StartWorker has to respond with the data. If StartWorker responds before the work is done, you will have to send a push notification (via SignalR, for example) when the other thread completes and has the data you need for the HTML table.
Hopefully you can see that the call to the Web API method takes the same amount of time from the Ajax request/response perspective, so multithreading becomes pointless in this scenario, the most common web application scenario.
You can have your client UI load other UI elements and show a progress spinner in the HTML table area until your database call completes and responds to the Ajax call with the data. That way your users know things are happening and something is still loading.
If you still need the additional thread in your API for your project's needs, I believe you have to be using Entity Framework 6 or greater to support asynchronous queries; see this tutorial:
http://www.codeproject.com/Tips/805923/Asynchronous-programming-in-Web-API-ASP-NET-MVC
UPDATE
Now that I know you need to run a SQL query on a repeating schedule, and that you have an Azure Web App, what you want is Azure Automation if you are using SQL Azure, or a SQL Server job if you are using a SQL Server instance as your backend.
DbContext is not thread safe. You need to create a new context from inside your thread.
public void DoBackGroundWork()
{
    ChronicusContext anotherContext = new ChronicusContext();
    while (true)
    {
        if (anotherContext.PollModels.Any())
        {
            ...
        }
    }
}
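If the ASP.NET Core DI container is configuring your context (as in the question above, where new ApplicationContext() complains that the connection is not configured), one way to get a correctly configured context inside the thread is to inject an IServiceScopeFactory and create a fresh scope per iteration. A rough sketch; everything other than IServiceScopeFactory/CreateScope/GetRequiredService is either taken from the question or purely illustrative:

using System;
using System.Linq;
using System.Threading;
using Microsoft.Extensions.DependencyInjection;

public class BackgroundWorker
{
    private readonly IServiceScopeFactory _scopeFactory;

    // IServiceScopeFactory is a singleton, so it is safe to hold onto it here.
    public BackgroundWorker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void DoBackGroundWork()
    {
        while (true)
        {
            // A new scope (and therefore a new, fully configured ChronicusContext) per iteration.
            using (var scope = _scopeFactory.CreateScope())
            {
                var dbContext = scope.ServiceProvider.GetRequiredService<ChronicusContext>();
                if (dbContext.PollModels.Any())
                {
                    // ... process the polls ...
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(30)); // avoid a hot loop; the interval is arbitrary
        }
    }
}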
I'm very new to Web API and I have an unusual pattern that I need to implement. The Post method of my controller takes an object which includes a CallbackURL. It must immediately return an HTTP response to the caller. Afterwards, it should use a 3rd-party, off-site API to perform some work with the object. Once that work is done, the controller is to post the results of that work to the CallbackURL.
However, I do not know how to implement this in Web API. Once I return the HTTP response, the controller's lifecycle is over, correct? If so, how do I perform the work I need to do after I return the response?
If you only need to post results to a URL and not back to the client that initiated the call, you could possibly do something as simple as this:
public string MyAPIMethod(object input)
{
    Task.Factory.StartNew(() =>
    {
        // Call the third-party service and post the result to the callback URL here.
    });

    return "Success!";
}
The API call will return right away, and the Task you created will continue the processing on a different thread.
Creating a task to finish the request (as suggested by Jason P above) will most likely solve the problem, thread safety permitting. However, that approach might hurt the performance of your web service if calls to the 3rd-party API take a significant amount of time to complete and/or you expect many concurrent clients. If that is the case, your problem seems to be a perfect candidate for a service pattern called "Request/Acknowledge/Callback" (also "Request/Acknowledge/Relay"). Using that pattern, your Web API method just stores each request (including the callback URL) in a queue or database and returns quickly. A separate module (possibly running on more than one machine, depending on the number and complexity of the tasks) takes care of completing the tasks and subsequently notifies completion through the callback URL (see http://servicedesignpatterns.com/ClientServiceInteractions/RequestAcknowledge).
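A very rough sketch of the "acknowledge" half of that pattern, assuming classic ASP.NET Web API 2 (the WorkItem type, the route, and the in-memory queue are illustrative only; a real implementation would use a durable queue or database table so requests survive restarts):

using System.Collections.Concurrent;
using System.Web.Http;

public class WorkItem
{
    public string CallbackUrl { get; set; }
    public string Payload { get; set; }
}

public class WorkController : ApiController
{
    // Illustrative only: use a durable store in practice.
    private static readonly ConcurrentQueue<WorkItem> PendingWork = new ConcurrentQueue<WorkItem>();

    [HttpPost]
    public IHttpActionResult Post(WorkItem item)
    {
        // Store the request (including the callback URL) and acknowledge immediately.
        PendingWork.Enqueue(item);
        return Ok("Accepted");
    }
}

A separate worker (a Windows service, an Azure WebJob, etc.) would then drain the store, call the 3rd-party API, and POST the result to each item's CallbackUrl.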
This is presuming you want to return the results of your 3rd-party query to the caller.
You're correct, this is outside of what's possible with Web API. Once you return the HTTP response, the client no longer has a connection to your server.
You should look into ASP.NET SignalR, which allows a persistent connection between the client and server. It works in modern browsers, even back to IE7 (though officially unsupported), and also supports non-browser clients.
You can then do a couple of things, all of which require the client to connect to SignalR first.
Option 1: You can call your Web API controller, which can return, but not before launching a task. That task can query the 3rd-party API and then invoke a function on the caller via SignalR with the results you want to provide.
Option 2: You can call a SignalR hub action, which can talk back to your client. You can send the client the immediate response, query the 3rd-party API, then return the results you want to provide.
I have created a large application and I have now run into a problem.
I have separated customers by virtual directories, so they have always been in different application pools. I took advantage of this and set static variables for the DB connection string and other context info in Session_Start, so it was available to me throughout my app.
Now I have been overwhelmed by the number of virtual directories I have had to create (over 500 and quickly growing), and I feel I need to move these to one (or several) application pools. The problem is that I don't pass the "session context" I get from the URL throughout the app. Changing the app to pass the context down would basically mean rewriting it.
Is there a way I could set this context for a session (i.e. one call to my API) and not for the entire app domain? Your help is greatly appreciated!
Context example:
- DB connection string
- customer log folder
EDIT: I was thinking I could maybe have a table that links the context information to the thread id (System.Threading.Thread.CurrentThread.ManagedThreadId)?
What do you mean by "context"? Do you mean the request/response/session information? You do not need to pass those manually. For the duration of processing an HTTP request, the ASP.NET framework exposes all of that in a static way:
var ctx = System.Web.HttpContext.Current;
var req = ctx.Request;
var rsp = ctx.Response;
var sess = ctx.Session;
var cache = ctx.Cache;
var myOtherFoos = ctx.Items;
In an ASP.NET application, you can access the static current context from just about anywhere, provided you have added a reference to the System.Web assembly.
If you do not mean that "context", but rather some additional information of your own that you need to pass along with the request processing, then the Items collection of HttpContext is exactly for that! It is a fresh collection created for each new request, and you can use it as a lightweight scratchpad to keep your things during single-request processing. Unlike Cache or Session, the Items evaporate the moment the request processing ends, so there is no worry about leaks or mixing data between requests. Just be careful which keys you pick; try not to collide with the framework things that sit there :)
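For example, a sketch of the Items approach for the per-customer context from the question (the keys and the three lookup helpers are hypothetical):

// In Global.asax: resolve the customer from the incoming URL once per request.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    string customerId = ResolveCustomerFromUrl(HttpContext.Current.Request);                    // hypothetical helper
    HttpContext.Current.Items["CustomerConnectionString"] = GetConnectionStringFor(customerId); // hypothetical helper
    HttpContext.Current.Items["CustomerLogFolder"] = GetLogFolderFor(customerId);               // hypothetical helper
}

// Anywhere else during the same request:
string connStr = (string)HttpContext.Current.Items["CustomerConnectionString"];
string logFolder = (string)HttpContext.Current.Items["CustomerLogFolder"];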
I've got several web services: ASMX and WCF. A couple of them have methods which take a lot of time to process, but the input data for these methods is small and does not take much time to transfer over the wire. I want to move to a non-synchronous model: the client passes data to the service, the service answers that the data transfer was correct, and then processes it on a background thread without a connection to the client. So after the transfer, the connection should be closed. Is this possible? Can you help me with articles, or maybe just a Google search query?
John is right: once you close an HTTP connection, it is done. You can't get back to the same process.
So if you can use another technology that allows duplex on one connection (e.g. WCF), do it!
However, if you have no choice but to use web services, here are three ways to make it work. You may get timeouts with any of them.
Option 1:
Forget the part about "the client answers that the data was correct". Just have each client thread make its request and wait for the data.
Option 2:
Now, assuming that won't work and you must do the validation, this way requires the client to make 2 requests.
First request: returns valid/invalid.
Second request: returns the long-running results.
Variation of option 2:
If you have timeout problems, you could have the first request generate a GUID or a unique database key, start another process, pass it this key, and return the key to the client. (That depends on whether your security settings allow the server to start a process; if not, you may be able to start an async thread and have it keep running after the web service call ends.) The process does the long task and, when finished, updates the row in the database for that unique ID, revealing the results plus a "done" flag. The second request from the client always returns immediately: if the processing is not done, it says so; if it is done, it returns the results. The client repeats this every 5 seconds or so until the work is done.
These are hacks, I know, but we don't always have a choice about the technology we use.
Don't do this with ASMX web services; they weren't designed for it. If you must use ASMX, then have the ASMX service pass the data off to a Windows Service that does the actual work in the background.
This is more practical with WCF.
We have been writing software to interact with the UK government website, and the way they handle something similar is that you send your request and data to the server, and it responds saying, roughly, "thanks very much, we're processing it now, please call back later using this ID", all in an XML message. Then, at some point later, you send a new HTTP request to the service saying, essentially, "I'm enquiring about the status of this particular request ID", and the server returns a result that says either it has processed OK, or processed with errors, or is still processing, please try again in xx seconds.
This is similar to option 2 described previously.
It's a polling solution rather than a callback or two-way conversation, but it seems to work.
The server will need to keep, or have access to, some form of persistent table or log of each request's state; it can contain, e.g., the ID, the original request, the current stage in the workflow, any error messages so far, the result (if any), etc. And the web service should probably pass the bulk of the request off to a separate Windows service, as already mentioned.
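A bare-bones sketch of that submit-then-poll shape (the types, statuses, and in-memory dictionary are purely illustrative; a real implementation would persist the state in a database as described above):

using System;
using System.Collections.Concurrent;

public enum RequestStatus { Processing, Done, Failed }

public class RequestState
{
    public RequestStatus Status { get; set; }
    public string Result { get; set; }
}

public static class LongRunningRequests
{
    // Illustrative only: should be a persistent table/log, not an in-memory dictionary.
    private static readonly ConcurrentDictionary<Guid, RequestState> States =
        new ConcurrentDictionary<Guid, RequestState>();

    // First call: accept the work, hand back an ID, and let a background worker do the rest.
    public static Guid Submit(string requestData)
    {
        var id = Guid.NewGuid();
        States[id] = new RequestState { Status = RequestStatus.Processing };
        // ... hand (id, requestData) to the background worker / Windows service ...
        return id;
    }

    // Second call (repeated by the client every few seconds): report progress or the result.
    public static RequestState CheckStatus(Guid id)
    {
        return States.TryGetValue(id, out var state) ? state : new RequestState { Status = RequestStatus.Failed };
    }
}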
Not sure if this is the right terminology; let me explain what I want.
I have a web service that's available on the network - the web service has 1 web method.
What I want is: if the web service is running and performing tasks and another call is made to it, I want the 2nd call to fail, or to pend for a certain period of time and then fail, because only 1 instance of this web service should be called at once.
I was thinking of writing a value to the application object (like in ASP.NET), but then I would have to be very careful to make sure that the value gets updated; in case of any errors it might not be, so this is dangerous and could leave the web service in a state where no one can get to it.
Is there not a more dynamic way to determine whether the web service is being called or not?
You cannot do this with legacy ASMX web services. They have no support for different instance schemes.
I believe you can do this with WCF, as you can configure the service to have only a single instance.
If you are using WCF, this is simple. Use the service throttling settings to specify MaxConcurrentCalls = 1 and MaxConcurrentInstances = 1. You'll also want to set ConcurrencyMode to Single in your ServiceBehavior.
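A sketch of those settings in code, assuming a self-hosted WCF service (the contract, service name, and address are placeholders; the same throttling values can also be set in the serviceThrottling element in config):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface ISingleCallService
{
    [OperationContract]
    string DoWork(string input);
}

// One instance, processing one call at a time.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class SingleCallService : ISingleCallService
{
    public string DoWork(string input)
    {
        // Long-running work; concurrent callers queue here and time out if they wait too long.
        return "done: " + input;
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(new SingleCallService(), new Uri("http://localhost:8080/single"));
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 1,
            MaxConcurrentInstances = 1,
            MaxConcurrentSessions = 1
        });
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}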
I don't know much about whether you can configure a web server to start only one instance of your web service, but you could try creating a mutex within your web service.
A Mutex is an interprocess synchronization object which can be used to detect whether another instance of your web service is running.
So, what you can do is create a mutex with a name, then wait on it. If more than one instance of your web service is alive, the mutex will make the later ones wait.
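Something like this rough sketch, assuming an ASMX web method (the mutex name and the timeout are arbitrary):

using System;
using System.Threading;
using System.Web.Services;

public class SingleCallService : WebService
{
    [WebMethod]
    public string DoWork()
    {
        // A named mutex is visible across processes, so it also covers multiple worker processes.
        using (var mutex = new Mutex(initiallyOwned: false, name: @"Global\MyWebServiceSingleCall"))
        {
            // Wait up to 30 seconds for any other call to finish, then give up.
            if (!mutex.WaitOne(TimeSpan.FromSeconds(30)))
            {
                throw new InvalidOperationException("The service is busy; try again later.");
            }

            try
            {
                // ... the actual long-running work ...
                return "done";
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}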
You could implement the check inside the web method, since it will be running in the same IIS process.
You could create a poor man's mutex by having the first instance create a file and having subsequent instances check for the existence of that file. Wrap your web method in a try/catch and put the deletion of the file in the finally block.
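A rough sketch of that file-based variant (the path is arbitrary; FileMode.CreateNew is used instead of a separate existence check so that creating the "lock file" is atomic):

using System;
using System.IO;
using System.Web.Services;

public class FileLockedService : WebService
{
    private const string LockFilePath = @"C:\temp\mywebservice.lock"; // arbitrary location

    [WebMethod]
    public string DoWork()
    {
        FileStream lockFile;
        try
        {
            // CreateNew fails if the file already exists, i.e. if another call is running.
            lockFile = new FileStream(LockFilePath, FileMode.CreateNew);
        }
        catch (IOException)
        {
            throw new InvalidOperationException("Another call is already running.");
        }

        try
        {
            // ... the actual long-running work ...
            return "done";
        }
        finally
        {
            lockFile.Dispose();
            File.Delete(LockFilePath);
        }
    }
}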
If you are using WCF I recommend bobbymcr's answer, but for a legacy web service you can use Monitor instead of a Mutex, since a Mutex is costly (it is a kernel object). If you do not care about the performance and responsiveness of the service, simply use the Mutex.
See this sample using the Monitor class:
private static readonly object lockObject = new object();
private const int millisecondsTimeout = 5000; // example timeout

public void SingleMethod()
{
    bool lockTaken = false;
    try
    {
        // Returns false if the lock could not be acquired within the timeout.
        lockTaken = Monitor.TryEnter(lockObject, millisecondsTimeout);
        if (!lockTaken)
        {
            throw new InvalidOperationException("Another call is already in progress.");
        }

        // method code
    }
    finally
    {
        // Only release the lock if this thread actually acquired it.
        if (lockTaken)
        {
            Monitor.Exit(lockObject);
        }
    }
}