Hangfire job insertion impacting API performance of the application - c#

I'm working with an ASP.NET Core 2.2 Web API multi-tenant application. The application uses Hangfire to run background tasks, and we are trying to improve its performance.
Hangfire stores all the jobs in a separate database (the Hangfire DB), but inserting them impacts API performance. I traced an API request in order to check where the request time goes.
Here is the code:
public async Task<string> AddUser(UserModel user)
{
    CreateUserInBackground(user);
    // removed code
    return "some status";
}

[Queue(Constants.Critical)]
public void CreateUserInBackground(UserModel user)
{
    BackgroundJob.Enqueue(() => CreateUser(user));
}

public async Task CreateUser(UserModel user)
{
    try
    {
        // Other code
    }
    catch (Exception ex)
    {
        _logger.Error(ex.Message, ex);
    }
}
Judging by the trace logs, the background-job calls impact the performance of the request. Is there a way to reduce this time, or should I use another approach?

Because some information is missing, here are suggestions on what to check:
The issue might be with SQL Server THREADPOOL waits rather than with the C# code:
https://www.sqlskills.com/help/waits/threadpool/
You may also get more information about long-running queries by checking SQL Server tools:
https://www.sqlshack.com/how-to-identify-slow-running-queries-in-sql-server/
Also be aware that each transaction locks the table; if many transactions are queued, they can significantly reduce performance as well.
The [Queue(Constants.Critical)] attribute definitely adds logic that the main request thread has to execute before the method itself starts. It could also be reasonable to try something like
Task.Run(() => CreateUserInBackground(user));
in AddUser(UserModel user) and check how that influences the timings.
It is unusual practice to use background jobs this way. Background jobs are normally used as schedulers for the parts of the business logic that are not triggered by users, or for logic that needs to run at a specific time or interval, e.g. calculating something every day, week, or month.
If you are using background jobs only to return a response to the user faster, it would be better to use a queue / service bus such as RabbitMQ to publish events, and then add listeners that execute the required tasks on demand in an on_message_received handler. That way Hangfire is not used for things it was not really designed for, nor for such a massive volume of user requests, and it would significantly reduce the number of Hangfire-related requests to its database and the overall pressure on SQL Server.
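As a rough illustration of that suggestion, here is a minimal sketch of publishing a "user created" event with the RabbitMQ.Client package. The host name, the "user-created" queue name, and the JSON serialization are assumptions for the example, not part of the original code:

using System.Text;
using Newtonsoft.Json;
using RabbitMQ.Client;

public class UserEventPublisher
{
    public void PublishUserCreated(UserModel user)
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed host
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // "user-created" is a hypothetical queue name; a separate listener
            // service would consume from it and run the actual CreateUser logic.
            channel.QueueDeclare(queue: "user-created", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);
            var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(user));
            channel.BasicPublish(exchange: "", routingKey: "user-created",
                                 basicProperties: null, body: body);
        }
    }
}

A publish like this is one small network write to the broker, instead of a SQL insert plus the polling traffic Hangfire generates against its database.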

Related

Migrating polling to SignalR without polling on the server?

I have a piece of functionality in my web application that kicks off a long-running report on my server. Currently, I have an AJAX method on my client that runs constantly to check whether the currently running report is complete. It does this by repeatedly calling a method that queries the database to determine whether a given report's status has changed from Running to either Error or Complete. Once the report is finished, I perform an AJAX load to get the data.
I'm looking at implementing SignalR to add some additional functionality to other pages, and I figured this would be a good test case to get things going. My main concern is how to alert the client when the report is complete. Using SignalR, I can simply say something like:
public class ReportHub : Hub
{
    public async Task ReportComplete(string userId, ReportRunStatus guid)
    {
        await Clients.User(userId).SendAsync("ReportComplete", guid);
    }
}
However, I want to try to avoid putting a long running loop on the server as I'm afraid this could degrade performance as operations scale up. Is there a better way to handle checking the report status and alerting clients than simply polling until completion? Or is there some easy way to constantly be looking at the table and alerting on completed reports?
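One way to avoid both the server-side loop and the client-side polling (a sketch, not from the original post) is to push the notification from wherever the report actually finishes, via an injected IHubContext. ReportHub, ReportComplete, and ReportRunStatus are the names from the question; ReportRunner is a hypothetical service that produces the report:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class ReportRunner
{
    private readonly IHubContext<ReportHub> _hubContext;

    public ReportRunner(IHubContext<ReportHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public async Task RunReportAsync(string userId, ReportRunStatus guid)
    {
        // ... generate the report ...

        // Notify the interested client the moment the work completes,
        // instead of having the client poll the database for status changes.
        await _hubContext.Clients.User(userId).SendAsync("ReportComplete", guid);
    }
}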

HostingEnvironment.QueueBackgroundWorkItem() in ASP.NET for small background tasks

I came across a nice little tool that was added to ASP.NET in v4.5.2.
I am wondering how safe it is and how one can effectively utilize it in an ASP.NET MVC or Web API scenario.
I always find myself wanting to do a quick and simple fire-and-forget task in my web applications. For example:
Sending emails
Sending push notifications
Logging analytics or errors to the db
Now typically I just create a method called
public async Task SendEmailAsync(string to, string body)
{
    //TODO: send email
}
and I would use it like so:
public async Task<ActionResult> Index()
{
    ...
    await SendEmailAsync(User.Identity.Username, "Hello");
    return View();
}
Now my concern with this is that I am delaying the user in order to send them an email. That doesn't make much sense to me.
So I first considered just doing:
Task.Run(() => SendEmailAsync(User.Identity.Username, "Hello"));
However, from reading up on it, this is apparently not the best thing to do in an IIS environment (I'm not 100% sure on the specifics).
So this is where I came across HostingEnvironment.QueueBackgroundWorkItem(x => SendEmailAsync(User.Identity.Username, "Hello"));
This is a very quick and easy way to offload the send-email task to a background worker and serve up the user's View() much quicker.
Now, I am aware this is not for tasks running longer than 90 seconds and that execution is not 100% guaranteed.
But my question is:
Is HostingEnvironment.QueueBackgroundWorkItem() sufficient for sending emails, push notifications, db queries, etc. in a standard ASP.NET web site?
It depends.
The main benefit of QueueBackgroundWorkItem is the following, emphasis mine (source):
Differs from a normal ThreadPool work item in that ASP.NET can keep track of how many work items registered through this API are currently running, and the ASP.NET runtime will try to delay AppDomain shutdown until these work items have finished executing.
Essentially, QueueBackgroundWorkItem helps you run tasks that might take a couple of seconds by attempting not to shutdown your application while there's still a task running.
Running a normal database query or sending out a push notification should be a matter of a couple hundred milliseconds (or a few seconds); neither should take a very long time and should thus be fine to run within QueueBackgroundWorkItem.
However, there is no guarantee that the task will finish; as you said, the task is not awaited. It all depends on the importance of the task to execute. If the task must complete, it's not a good candidate for QueueBackgroundWorkItem.
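For reference, a minimal sketch of the call site (SendEmailAsync and User.Identity.Username are the question's names; using the Func<CancellationToken, Task> overload and checking the token are choices made for this example):

using System.Web.Hosting;

// The cancellation token is signaled when ASP.NET begins shutting the
// app domain down, so the work item can bail out or clean up gracefully.
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    if (cancellationToken.IsCancellationRequested)
        return;

    await SendEmailAsync(User.Identity.Username, "Hello");
});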

Make Controller methods asynchronous or leave synchronous? [duplicate]

This question already has answers here:
When should I use Async Controllers in ASP.NET MVC?
(8 answers)
Closed 6 years ago.
I am working with a pre-existing C# ASP.NET MVC web application, and I'm adding some functionality to it that I can't decide whether or not to make async.
Right now, the application home page just processes a page and a user logs in. Nothing more, and nothing asynchronous going on at all.
I am adding functionality that will, when the homepage is visited, generate a call to a Web API that subsequently calls a database that grabs an identifier and returns it to an HTML tag on the home page. This identifier will not be visible on the screen, only on the source/HTML view (this is being added for various internal tracking purposes).
The Web API/database call is simple: just grab an identifier and return it to the controller. Regardless, I'm wondering whether the app should make this call asynchronously. The website traffic isn't immense, but I'm still wondering about concurrency, performance, and future scalability.
The one catch is that I'd have to make the entire action method async, and I'm not sure what the effects of that would be. The basic pattern, currently synchronous, is below:
public ActionResult Index()
{
    var result = GetID();
    ViewBag.result = result.Data;
    return View();
}
public JsonResult GetID()
{
    var result = "";
    var url = "http://APIURL/GetID";
    using (WebClient client = new WebClient())
    {
        result = client.DownloadString(url);
    }
    return Json(result, JsonRequestBehavior.AllowGet);
}
Any thoughts?
First and foremost, realize the purpose of async in the context of a web application. A web server has what's called a thread pool. Generally speaking, 1 thread == 1 request, so if you have 1000 threads in the pool (typical), your website can roughly serve 1000 simultaneous requests. Also keep in mind that it often takes many requests to render a single resource. The HTML document itself is one request, but each image, JS file, CSS file, etc. is also a request. Then there's any AJAX requests the page may issue. In other words, it's not uncommon for a request for a single resource to generate 20+ requests to the web server.
Given that, when your server hits its max requests (all threads are being utilized), any further requests are queued and processed in order as threads are made available. What async does is buy you some additional headroom. If there are threads in a wait state (waiting for the results of a database query, the response from a web service, a file to be read from the filesystem, etc.), then async allows these threads to be returned to the pool, where they are then able to field some of those waiting requests. When whatever the thread was waiting on completes, a new thread is requested to finish servicing the request.
What is important to note here is that a new thread is requested to finish servicing the request. Once the thread has been released to the pool, you have to wait for a thread again, just like a brand-new request. This means running async can sometimes take longer than running sync, depending on the availability of threads in the pool. Also, async carries with it a not-insignificant amount of overhead that adds to the overall load time.
Async != faster. It can many times be slower, but it allows your web server to utilize resources more efficiently, which could mean the difference between falling down and gracefully bearing load. Because of this, there's no one universal answer to a question like "Should I just make everything async?" Async is a trade-off between raw performance and efficiency. In some situations it may not make sense to use async at all, while in others you might want to use it for everything that's applicable. What you need to do is first identify the stress points of your application. For example, if your database instance resides on the same server as your web server (not a good idea, BTW), using async on your database queries would be fairly pointless. The chief culprit of waiting is network latency, whereas filesystem access is typically relatively quick. On the other hand, if your database server is in a remote datacenter and traffic not only has to travel the pipes between there and your web server but also traverse things like firewalls, then your network latency is much more significant, and async is probably a very good idea.
Long and short, you need to evaluate your setup, your environment, and the needs of your application. Then, and only then, can you make smart decisions about this. That said, even given the overhead of async, if there's any network latency involved at all, it's a pretty safe bet that async should be used. It's perfectly acceptable to err on the side of caution and just use async everywhere it's applicable, and many do just that. If you're looking to optimize for raw performance, though (perhaps you're starting the next Facebook?), then you'd want to be much more judicious.
Here, the reason to use async IO is to avoid having many threads running at the same time. Threads consume OS resources and memory, and the thread pool can also be a little slow to adjust to sudden load. If your thread count in a web app is below 100 and load is not extremely spiky, you have nothing to worry about.
Generally, the slower a web service and the more often it is called, the more beneficial async IO can be. You will need on average (latency * frequency) threads running. So 100ms call time and 10 calls per second is about 1 thread on average.
Run the numbers and see if you need to change anything or not.
Any thoughts?
Yes, lots of thoughts... but that alone doesn't count as an answer. ;)
There is no real good answer here since there isn't much context provided. But let's address what we know.
Since we are a web application, each request/response cycle has a direct impact on performance and can be a bottleneck.
Since we are internally invoking another API call from ours, we shouldn't assume that it is hosted on the same server - as such this should be treated just like all I/O bound operations.
With the two known factors above, we should make our calls async. Consider the following:
public async Task<ActionResult> Index()
{
    var result = await GetIdAsync();
    ViewBag.result = result.Data;
    return View();
}

public async Task<JsonResult> GetIdAsync()
{
    var result = "";
    var url = "http://APIURL/GetID";
    using (WebClient client = new WebClient())
    {
        // Note the API change here? DownloadStringTaskAsync returns a
        // Task<string>, so it can be awaited. (DownloadStringAsync returns
        // void and belongs to the older event-based async pattern.)
        result = await client.DownloadStringTaskAsync(url);
    }
    return Json(result, JsonRequestBehavior.AllowGet);
}
Now we are correctly using the async and await keywords with our Task<T>-returning operations, which helps ensure ideal use of the thread pool. Notice the API change to client.DownloadStringTaskAsync too; this is very important, because that is the overload that returns an awaitable Task<string>.
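As an aside (not part of the original answer): on later frameworks WebClient is effectively superseded by HttpClient, so an equivalent sketch, under the same assumed URL, would be:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class HomeController : Controller
{
    // HttpClient is designed to be reused rather than created per request.
    private static readonly HttpClient Client = new HttpClient();

    public async Task<JsonResult> GetIdAsync()
    {
        // GetStringAsync is natively Task-based, so no API adaptation is needed.
        var result = await Client.GetStringAsync("http://APIURL/GetID");
        return Json(result, JsonRequestBehavior.AllowGet);
    }
}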

Perform Tasks at Different Intervals in Azure Worker Role

I have a simple Azure Worker role running that performs a task every few seconds. Below is the code that accomplishes this.
public override void Run()
{
    try
    {
        while (true)
        {
            DoSomething();
            System.Threading.Thread.Sleep(3000);
        }
    }
    catch (Exception ex)
    {
        Log.Add(ex, true);
    }
}
What I'd like to do now is add a second task DoSomethingElse() that fires once and only once per day. I've thought of a couple of ways to accomplish this:
Add a counter that calls the new task every nth loop
Add conditional logic to the new task that compares the current time to a prescribed time of day
Use some TBD scheduler library (such as Quartz.NET)
The first two solutions strike me as very brittle without additional code to deal with situations where the service is stopped and restarted. The third solution strikes me as potentially overkill.
My question is, what is the best practice for scheduling tasks at different intervals within an Azure Worker Role? I have a slight preference for sticking with straight .NET and not using a third-party library (though I'm not ruling it out).
Note, #3 above comes from this older question Recommend a C# Task Scheduling Library
The first two options are the simplest, but they are brittle, especially in the cloud where roles can be recycled, load balanced, etc. If the persistence is in memory, or even disk-based, in the cloud, it will be brittle.
Outside of other third-party options, you could look at persisting the schedule and execution data in external storage (table services, SQL Azure, etc.). On a periodic timer, the worker role can query for the jobs that are due, record that they are starting, and then run them. That also allows you to potentially scale out the number of worker roles, since the persistence is external.
This can get complicated in a hurry, but if you keep it simple, with just a frequency and recorded run times, it can be fairly straightforward; see the sketch below.
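A minimal sketch of that pattern, under stated assumptions: IScheduleStore and ScheduledJob are hypothetical types, and a real implementation would back the store with table storage or SQL Azure rather than leave it unresolved. Log.Add comes from the question's snippet:

using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical persistence abstraction over table services / SQL Azure.
public interface IScheduleStore
{
    IEnumerable<ScheduledJob> GetDueJobs(DateTime utcNow);
    void RecordRun(string jobId, DateTime startedUtc, DateTime nextRunUtc);
}

public class ScheduledJob
{
    public string Id { get; set; }
    public TimeSpan Interval { get; set; }  // e.g. 3 seconds, or 1 day
    public Action Execute { get; set; }
}

public class WorkerRole : RoleEntryPoint
{
    private IScheduleStore _store; // assumed to be initialized in OnStart()

    public override void Run()
    {
        while (true)
        {
            foreach (var job in _store.GetDueJobs(DateTime.UtcNow))
            {
                try
                {
                    job.Execute();
                }
                catch (Exception ex)
                {
                    Log.Add(ex, true);
                }
                // Persisting the next run time is what survives role recycles
                // and lets several role instances share one schedule.
                _store.RecordRun(job.Id, DateTime.UtcNow, DateTime.UtcNow + job.Interval);
            }
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }
    }
}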
Steve Marx wrote a nice couple of blog entries on how to build a task scheduler on Windows Azure using blob leases, I think you will find this very useful.

Optimizing an ASMX web service with Multiple Long-Running Operations

I'm writing an ASP.NET web service using C# that has a DoLookup() function. For each call to the DoLookup() function I need my code to execute two separate queries: one to another web service at a remote site and one to a local database. Both queries have to complete before I can compile the results and return them as the response to the DoLookup method. The problem I'm dealing with is that I want to make this as efficient as possible, both in terms of response time and resource usage on the web server. We are expecting up to several thousand queries per hour. Here's a rough C#-like overview of what I have so far:
public class SomeService : System.Web.Services.WebService
{
    public SomeResponse DoLookup()
    {
        // Do the lookup at the remote web service and get the response
        WebResponse wr = RemoteProvider.DoRemoteLookup();

        // Do the lookup at the local database and get the response
        DBResponse dbr = DoDatabaseLookup();

        SomeResponse resp = new SomeResponse(wr, dbr);
        return resp;
    }
}
The above code does everything sequentially and works great but now I want to make it more scalable. I know that I can call the DoRemoteLookup() function asynchronously ( RemoteProvider has BeginRemoteLookup / EndRemoteLookup methods) and that I can also do the database lookup asynchronously using the BeginExecuteNonQuery / EndExecuteNonQuery methods.
My question (finally) is this: how do I fire both the remote web service lookup AND the database lookup simultaneously on separate threads and ensure that they have both completed before returning the response?
The reason I want to execute both requests on separate threads is that they both potentially have long response times (1 or 2 seconds) and I'd like to free up the resources of the web server to handle other requests while it is waiting for responses. One additional note - I do have the remote web service lookup running asynchronously currently, I just didn't want to make the sample above too confusing. What I'm struggling with is getting both the remote service lookup AND the database lookup started at the same time and figuring out when they have BOTH completed.
Thanks for any suggestions.
You can use a pair of AutoResetEvents, one for each thread. At the end of each thread's execution, you call AutoResetEvent.Set() to trigger the event.
After spawning the threads, you use WaitHandle.WaitAll() with the two AutoResetEvents. This will cause the calling thread to block until both events are set.
The caveat to this approach is that you must ensure Set() is guaranteed to be called, otherwise you will block forever. Additionally, ensure that you exercise proper exception handling in the threads, or you will inadvertently cause more performance issues when unhandled exceptions cause your web application to restart.
MSDN has sample code regarding AutoResetEvent usage.
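A rough sketch of that approach (using thread-pool work items rather than raw threads; the lookup methods and response types come from the question, and real code would also call Set() on failure paths, per the caveat above):

using System.Threading;

public SomeResponse DoLookup()
{
    var webDone = new AutoResetEvent(false);
    var dbDone = new AutoResetEvent(false);
    WebResponse wr = null;
    DBResponse dbr = null;

    ThreadPool.QueueUserWorkItem(_ =>
    {
        wr = RemoteProvider.DoRemoteLookup();
        webDone.Set();   // signal that the remote lookup finished
    });
    ThreadPool.QueueUserWorkItem(_ =>
    {
        dbr = DoDatabaseLookup();
        dbDone.Set();    // signal that the database lookup finished
    });

    // Block until both lookups have signaled completion.
    WaitHandle.WaitAll(new WaitHandle[] { webDone, dbDone });
    return new SomeResponse(wr, dbr);
}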
See Asynchronous XML Web Service Methods, How to: Create Asynchronous Web Service Methods and How to: Chain Asynchronous Calls with a Web Service Method.
But note the first paragraph of those articles:
This topic is specific to a legacy technology. XML Web services and XML Web service clients should now be created using Windows Communication Foundation (WCF).
BTW, doing things the way these articles say is important because it frees up the ASP.NET worker thread while the long-running task runs. Otherwise, you might be blocking the worker thread, preventing it from servicing further requests, and impacting scalability.
Assuming you can have a callback function for both the web request and the database lookup, something along these lines may work:
private readonly object syncRoot = new object();
private bool webLookupDone = false;
private bool databaseLookupDone = false;

private void FinishedDBLookupCallBack()
{
    // The lock keeps the two callbacks from racing and either calling
    // FinishMethod() twice or missing the call entirely.
    lock (syncRoot)
    {
        databaseLookupDone = true;
        if (!webLookupDone)
            return;
    }
    FinishMethod();
}

private void FinishedWebLookupCallBack()
{
    lock (syncRoot)
    {
        webLookupDone = true;
        if (!databaseLookupDone)
            return;
    }
    FinishMethod();
}
I guess I don't have enough rep to upvote or to comment, so this is a comment on John Saunders' answer and Alan's comment on it.
You definitely want to go with John's answer if you are concerned about scalability and resource consumption.
There are two considerations here: speeding up an individual request, and making your system handle many concurrent requests efficiently. The former both Alan's and John's answers achieve by performing the external calls in parallel.
The latter, and it sounds like that was your main concern, is achieved by not having threads blocked anywhere, i.e. John's answer.
Don't spawn your own threads. Threads are expensive, and there are already plenty of threads in the I/O thread pool that will handle your external calls for you if you use the async methods provided by the .NET Framework.
Your service's web method needs to be async as well. Otherwise a worker thread will be blocked until your external calls are done (still 1-2 seconds even if they run in parallel), and you only have 12 threads per CPU handling incoming requests (if your machine.config is set according to the recommendation). That is, you would at most be able to handle 12 concurrent requests times the number of CPUs. On the other hand, if your web method is async, Begin returns pretty much instantaneously and the thread goes back to the worker thread pool, ready to handle another incoming request, while your external calls are waited on by the I/O completion port, where they will be handled by threads from the I/O thread pool once they return.
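For what it's worth, a later-framework sketch of combining both calls (not from the answers above; it requires .NET 4+, and it assumes the question's Begin methods take the standard (AsyncCallback, object) parameters — BeginDatabaseLookup/EndDatabaseLookup stand in for the ADO.NET BeginExecuteNonQuery/EndExecuteNonQuery wiring):

using System.Threading.Tasks;

// Task.Factory.FromAsync adapts APM Begin/End pairs to Tasks.
// Neither of these lines blocks; both operations start immediately.
var remoteTask = Task<WebResponse>.Factory.FromAsync(
    RemoteProvider.BeginRemoteLookup, RemoteProvider.EndRemoteLookup, null);
var dbTask = Task<DBResponse>.Factory.FromAsync(
    BeginDatabaseLookup, EndDatabaseLookup, null);

// WaitAll blocks a worker thread, so in a truly async web method you would
// register a continuation (or, on .NET 4.5+, await Task.WhenAll) instead.
Task.WaitAll(remoteTask, dbTask);
var resp = new SomeResponse(remoteTask.Result, dbTask.Result);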
