I'm working on a project with ASP.NET MVC 5, Web API, and SQL Server. I need to implement a piece of functionality that requires scheduling.
I have users in this system, each of whom can register an order, and I save the order information in the database.
Problem: for each order registered in the system, I should send a message to its owner 2 minutes later notifying them about the order status.
How should I check each order's status 2 minutes after it has been registered?
Should I schedule a task per registered order? (It could be more than 500 orders per second, so I don't think that is a good solution.)
I want a solution that handles this with good performance.
Your best solution here is Hangfire.
Hangfire is built for this kind of challenge. It really doesn't matter how many jobs you have. Once you have configured Hangfire, you can simply pass methods to the queue. You can also delay execution with a TimeSpan:
BackgroundJob.Schedule(
    () => Console.WriteLine("Delayed!"),
    TimeSpan.FromMinutes(2));
You can even chain jobs, which is very handy if you have multiple steps in your order process:
BackgroundJob.Schedule(
    () => SomeProcessToComplete(),
    TimeSpan.FromMinutes(2));

static void SomeProcessToComplete()
{
    // after this code runs, add another job to the queue
    BackgroundJob.Schedule(
        () => Console.WriteLine("Delayed!"),
        TimeSpan.FromMinutes(2));
}
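Applied to the original question, a minimal sketch could look like the following when the order is saved. OrderNotifier and the controller action are hypothetical names, and this assumes Hangfire is already configured (e.g. with SQL Server storage) at startup:

using System;
using System.Web.Mvc;
using Hangfire;

public class OrdersController : Controller
{
    [HttpPost]
    public ActionResult Create(int orderId /* id of the order your own code just saved */)
    {
        // The job is persisted by Hangfire, so it survives app pool recycles,
        // and 500 orders/sec simply means 500 rows in the job storage.
        BackgroundJob.Schedule<OrderNotifier>(
            n => n.NotifyOwner(orderId),
            TimeSpan.FromMinutes(2));

        return Json(new { orderId });
    }
}

public class OrderNotifier
{
    public void NotifyOwner(int orderId)
    {
        // Load the order, check its current status, and send the message to its owner.
    }
}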
Another solution could be: each time an order is saved, write it to a text file on the server, and add a file watcher that notifies you when the text file has changed, which will trigger your service to send the message.
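A rough sketch of that watcher idea using FileSystemWatcher (the folder path and the notification call are placeholders):

using System;
using System.IO;

public static class OrderFileWatcher
{
    private static FileSystemWatcher _watcher;

    public static void Start()
    {
        // Hypothetical folder where the web app writes one small file per saved order.
        _watcher = new FileSystemWatcher(@"C:\orders", "*.txt");

        _watcher.Created += (sender, e) =>
        {
            // e.FullPath is the newly written order file; parse it here and
            // hand the notification work to your message-sending service.
            Console.WriteLine("New order file: " + e.FullPath);
        };

        _watcher.EnableRaisingEvents = true;
    }
}

Note that the watcher fires as soon as the file is written, so the 2-minute delay would still have to be tracked by whatever consumes the files; at hundreds of orders per second, the Hangfire approach above is likely the simpler fit.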
I have an ASP.NET Core Web API project that has one controller with a method called GetLocations.
GetLocations connects to 5 other web services on the internet, gathers some info, and returns a collection as JSON. In this method I am caching the data for 5 minutes using in-memory caching.
If the cache expires, it tries to connect to all 5 services and get the info and so on.
My problem is:
I have a lot of users requesting this data constantly, 50 requests a second to this API.
When the cache expires I believe there is some kind of thread locking. I have limited visibility into the project at the moment but I suspect that all these requests are calling the method and reaching out to the 5 dependent services until one of them gets a completed response from all 5.
Is my assumption right? If so, how can I go about fixing this? Will I need to make each call to the web services async? Will that help this scenario? I am not 100% sure, because the requests are what trigger the method call.
You should definitely make the calls to the external services use async/await.
That's a given, as the best practice is to always use async for I/O-heavy operations (such as calling a third-party service).
Now, you should also create a class that manages these calls. You can register it as a singleton in your IoC config. In that class, make sure you're locking to avoid the issue you just described, so the underlying services aren't called numerous times while the cache is being rebuilt.
Check here:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/lock-statement
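Below is a minimal sketch of what such a singleton could look like. The class name, cache key, and CallTheFiveServicesAsync are hypothetical, and it uses SemaphoreSlim rather than lock because you cannot await inside a lock block:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Registered as a singleton in Startup.ConfigureServices.
public class LocationAggregator
{
    private readonly IMemoryCache _cache;
    private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

    public LocationAggregator(IMemoryCache cache) => _cache = cache;

    public async Task<IReadOnlyList<string>> GetLocationsAsync()
    {
        if (_cache.TryGetValue("locations", out IReadOnlyList<string> cached))
            return cached;

        await _lock.WaitAsync();
        try
        {
            // Re-check after acquiring the lock: another request may have
            // already rebuilt the cache while we were waiting.
            if (_cache.TryGetValue("locations", out cached))
                return cached;

            IReadOnlyList<string> data = await CallTheFiveServicesAsync();
            _cache.Set("locations", data, TimeSpan.FromMinutes(5));
            return data;
        }
        finally
        {
            _lock.Release();
        }
    }

    private Task<IReadOnlyList<string>> CallTheFiveServicesAsync()
    {
        // Placeholder for the real calls to the five external services.
        return Task.FromResult<IReadOnlyList<string>>(new[] { "example" });
    }
}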
You are facing this issue for the following reason.
You are using a cache, and it expires at a definite time.
After the cache expires, you call the external web service to collect the data. At this point another queued request may be chosen for execution.
That request also checks the cache, finds no data in it, and then calls the external service as well, and so on for every other request.
The solution:
First, check whether the cache contains data.
If not, take a lock so the following section is executed by only a single thread.
Inside that lock, check the cache again: if it now contains data, simply return it; if it still doesn't, call the external service.
If another thread is selected for execution at this point, it has to wait for the exclusive section to finish its work.
Once that section completes, it stores the data in the cache, so any queued or new request will read the data from the cache.
Note: it should look something like this.
public List<string> GetData()
{
    if (Cache[key] == null)
    {
        lock (obj) // obj should be a static object
        {
            if (Cache[key] == null)
            {
                // Call the external services here, then cache the result
                List<string> data = LoadDataFromServices();
                Cache[key] = data;
            }
        }
    }

    return (List<string>)Cache[key];
}
I have a web service called S.
My client has a web service called C.
1. My client sends a request to my web service (S).
2. Web service S sends a response to the client (C).
3. After that, my service (S) creates an invoice message and sends it to the client's web service (C).
4. The client's web service returns the result to my web service (S).
How do I implement this?
As I understand it, you want to return a response to the client app but still continue with some processing.
There are a few possibilities here:
Start a new thread in step 2 that will create the invoice and send it to the client WS. This can, however, be error-prone - your web service might die or be shut down in the middle of creating the invoice, and the client WS will never know.
Use something like Hangfire to schedule the invoice creation (see the sketch after this list). Hangfire stores scheduled tasks in a database, so the job will eventually be executed even in case of failure. This requires no additional configuration other than setting up the backing database; processing happens in the same hosting process as your service.
Use a service bus or MSMQ - the idea is similar to Hangfire: you send a message (saying, for example, "create invoice with parameters X") to the bus, and the bus makes sure the message gets delivered to anyone listening for it. You then register a listener that handles that kind of message and creates the invoice. This requires more work, since you have to choose a service bus engine, take a moment to understand it, install it, configure it, etc.
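A rough sketch of the Hangfire option, assuming Hangfire is already configured and that InvoiceService, OrderRequest, and OrderResponse are hypothetical types of yours:

using Hangfire;

public class OrderService
{
    public OrderResponse HandleRequest(OrderRequest request)
    {
        // 1) Validate and persist the incoming request.
        int orderId = Save(request);

        // 2) Fire-and-forget job: Hangfire persists it and retries on failure,
        //    so the invoice is created even if this process restarts.
        BackgroundJob.Enqueue<InvoiceService>(
            s => s.CreateAndSendInvoice(orderId));

        // 3) Return immediately so the client gets its response right away.
        return new OrderResponse { Accepted = true };
    }

    private int Save(OrderRequest request) { /* persist and return the new id */ return 42; }
}

public class InvoiceService
{
    public void CreateAndSendInvoice(int orderId)
    {
        // Build the invoice and call the client's web service (C) here.
    }
}

public class OrderRequest { }
public class OrderResponse { public bool Accepted { get; set; } }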
This is a good case for a domain event. I don't know what the first request is - perhaps placing an order?
When the order is placed then you would raise an event indicating that an order was placed. The event could contain either some information about the order or a reference (id) that can be used to retrieve it. Then other listeners would respond accordingly.
One benefit is that it keeps different parts of your application decoupled. For example, the class that submits an order doesn't need to know that there's going to be an invoice. It just raises an event indicating that an order has been placed and then goes on its way.
That becomes even more important if you want to have multiple behaviors when an order is placed. Perhaps you also want to send an email confirming that you received the order. Now you can add that additional behavior as an event listener with no modification to the code that places the order.
Also, your application could grow so that perhaps there's another service for placing orders. (I'm running with "placing orders" although I don't know what the specific event is.) You don't want multiple points in your application that follow all of the post-ordering steps. If those steps change then you'd have to modify code in all of those places. Instead you just raise the event.
Here's a popular article that describes the concept well. There are numerous implementations of an event bus. Here's one.
In pseudocode, you could now have a few event handlers, each of which is completely decoupled from your ordering code.
The event itself is raised immediately after the order is submitted.
var order = SubmitOrder(info);
eventBus.Raise(new OrderSubmitEvent(order));
Then you have some event handlers which are registered to respond to that event.
public class SendInvoiceOrderEventHandler : IEventHandler<OrderSubmitEvent>
{
    public void HandleEvent(OrderSubmitEvent e)
    {
        // e contains details about the order. Send an invoice request.
    }
}

public class SendConfirmationOrderEventHandler : IEventHandler<OrderSubmitEvent>
{
    public void HandleEvent(OrderSubmitEvent e)
    {
        // Send an email confirmation.
    }
}
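To make the pseudocode concrete, here is one possible minimal in-memory event bus; the interfaces and the bus itself are illustrative, not a specific library:

using System;
using System.Collections.Generic;

public interface IEvent { }

public interface IEventHandler<in TEvent> where TEvent : IEvent
{
    void HandleEvent(TEvent e);
}

// Illustrative event carrying whatever order data the handlers need.
public class OrderSubmitEvent : IEvent
{
    public OrderSubmitEvent(object order) { Order = order; }
    public object Order { get; }
}

public class EventBus
{
    private readonly Dictionary<Type, List<Action<IEvent>>> _handlers =
        new Dictionary<Type, List<Action<IEvent>>>();

    public void Register<TEvent>(IEventHandler<TEvent> handler) where TEvent : IEvent
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new List<Action<IEvent>>();

        list.Add(e => handler.HandleEvent((TEvent)e));
    }

    public void Raise<TEvent>(TEvent e) where TEvent : IEvent
    {
        if (_handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (var handle in list)
                handle(e);
    }
}

With this in place, registering both handlers and then calling eventBus.Raise(new OrderSubmitEvent(order)) from the ordering code dispatches to the invoice and confirmation handlers without that code knowing either one exists.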
In ASP.NET I want to give the customer an immediate response and close the connection, and then continue execution, which may be lengthy and display unimportant messages. None of this should be visible to the customer.
I already tried Response.Flush / Close / End, CompleteRequest, and anonymous delegates, but couldn't get it to work with any of these.
Sample:
Response.Write("time: "+HttpContext.Current.Session["test"]);
MagicallyEndResponse(); //But how?
Thread.Sleep(10000); //Customer should not experience any delay
HttpContext.Current.Session["test"] = DateTime.Now; //This should be available when reloading 15s later
Response.Write("BORING INFO!"); //Customer should not see this
I wouldn't recommend background thread processing in an ASP.NET application; it's not what ASP.NET or IIS is designed for.
My advice would be to look at having a separate service (e.g. an internal Windows Service) which picks up work from the website and processes it; this would allow you to write a more robust multi-threaded application. You could use a durable messaging system like MSMQ / NServiceBus to pass messages to/from the service (this would mean no work is lost if the website happened to go down or restart).
The natural response for these types of requests would be 202 Accepted, and then possibly exposing an API for the client to query to check on the progress.
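As an illustration of the 202 Accepted approach in ASP.NET Web API (JobRequest, JobQueue, and JobStatusStore are placeholder names standing in for your own queue and status storage):

using System;
using System.Net;
using System.Web.Http;

public class JobRequest { /* whatever the customer submits */ }

public class JobsController : ApiController
{
    [HttpPost]
    public IHttpActionResult Submit(JobRequest request)
    {
        var jobId = Guid.NewGuid();

        // Hand the work off to the separate service, e.g. via MSMQ / NServiceBus.
        JobQueue.Enqueue(jobId, request);

        // Tell the caller "accepted, check back later" and return immediately.
        return Content(HttpStatusCode.Accepted,
                       new { jobId, statusUrl = "/api/jobs/" + jobId });
    }

    [HttpGet]
    public IHttpActionResult Status(Guid jobId)
    {
        // The background service updates this store as it works through the job.
        return Ok(JobStatusStore.Get(jobId));
    }
}

// Placeholder stand-ins for the queue and status storage.
public static class JobQueue
{
    public static void Enqueue(Guid jobId, JobRequest request) { /* push to MSMQ, a table, etc. */ }
}

public static class JobStatusStore
{
    public static object Get(Guid jobId) { return new { jobId, state = "Processing" }; }
}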
I'm working on an ASP.NET MVC application where I need to keep updating a file at a time interval. I will eventually be hosting this website on Windows Azure.
I was just wondering whether the approach mentioned in Phil Haack's post
The Dangers of Implementing Recurring Background Tasks In ASP.NET is still the best approach, or if I should look into creating a console app or so and using Azure WebJobs to run it.
Any thoughts appreciated.
Thanks,
Daniel
You can do something like this:
private void AddHourlyTask(string task)
{
    DateTime expiration = DateTime.Now.AddHours(1);
    expiration = new DateTime(expiration.Year, expiration.Month, expiration.Day,
                              expiration.Hour, expiration.Minute, expiration.Second,
                              expiration.Kind);

    OnCacheRemove = new CacheItemRemovedCallback(CacheItemRemoved);
    HttpRuntime.Cache.Insert(
        task,
        task,
        null,
        expiration,
        Cache.NoSlidingExpiration,
        CacheItemPriority.NotRemovable,
        OnCacheRemove);
}
And then in a separate function:
public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
{
    if (k == "HelloWorld")
    {
        // Do the recurring work here, then re-register the task.
        Console.Write("Hello, World!");
        AddHourlyTask(k);
    }
}
This would go into your Application_Start() function as:
AddHourlyTask("HelloWorld");
In order for this to work, you also need to add this somewhere in your class:
private static CacheItemRemovedCallback OnCacheRemove = null;
The functions would all sit in your Global.asax.cs file
You might look at scheduling a simple console app, batch file, Perl script, etc. with the Windows Task Scheduler. Depending on what it needs to do, it could be as simple as invoking a web method in your ASP.NET MVC web app.
One option is looking into Quartz.
Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems.
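For example, a minimal sketch using the Quartz.NET 2.x-style synchronous API (UpdateFileJob and FileUpdateScheduler are hypothetical names; you would call Start from Application_Start or, better, from a worker process):

using Quartz;
using Quartz.Impl;

public class UpdateFileJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Update the file here.
    }
}

public static class FileUpdateScheduler
{
    public static void Start()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<UpdateFileJob>().Build();

        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInHours(1).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}

Keeping the AppDomain alive so the scheduler actually fires is still your problem if you host this inside the web app, as the answer quoted below points out, which is why hosting Quartz in a Windows Service or WebJob is usually safer.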
I asked a similar question on Programmers: How do I make my ASP.NET application take an action based on time?
Accepted Answer:
A scheduled task triggered by either the Task Scheduler or Sql Server is the way to go here. But if you really want to manage it within your webapp, you might want to look at something like Quartz.NET. It will let you do scheduled tasks from the CLR. Then your challenge is "how do I make sure the AppDomain stays up to run the tasks."
Another way to do it as a scheduled task yet keep most of the "smarts" on the server is to make it a task that can be called over HTTP with some sort of authorization key. This lets you write a relatively simple program to call it -- if not a simple shell script -- and to keep most of the complexity in the web app which is likely already capable of running the task.
Either way rolling your own task manager is really a path fraught with peril.
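To illustrate the HTTP-triggered variant described above, here is a minimal sketch; the controller name, key handling, and work are placeholders, and in practice the key would live in configuration:

using System.Web.Mvc;

public class MaintenanceController : Controller
{
    // Placeholder secret; read it from configuration in a real app.
    private const string ExpectedKey = "some-long-random-secret";

    [HttpPost]
    public ActionResult RunScheduledTask(string key)
    {
        if (key != ExpectedKey)
            return new HttpStatusCodeResult(403);

        // Do the actual recurring work here, reusing the web app's existing code.
        return new HttpStatusCodeResult(204);
    }
}

The Windows Task Scheduler entry then only needs a tiny script or console app that POSTs to /Maintenance/RunScheduledTask with the key.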
Scheduling tasks in an ASP.NET MVC project is possible using the Revalee open source project.
Revalee is a service that allows you to schedule web callbacks to your web application. In your case, you would schedule a callback that would perform your desired action (i.e., update a file). Revalee works very well with tasks that are discrete transactional actions, like updating a database value or sending an automated email message (read: not long-running). The code to perform your action would all reside within your MVC app. When your application launches for the very first time, you would schedule the first web callback. When your application is called back to perform its action, you would schedule the next callback.
To use Revalee, you would:
Install the Revalee Service, a Windows Service, on your server. The Windows Service is available in the source code (which you would compile yourself) or in a precompiled version available at the Revalee website.
Use the MVC-specific Revalee client library in your Visual Studio project. (There is a non-MVC version too.) The client library is available in the source code (which, again, you would compile yourself) or in a precompiled version available via NuGet.
You would register a future callback when your application launches for the very first time via the ScheduleHourlyCallback() method (this example assumes that you need your action to run once per hour).
private void ScheduleHourlyCallback()
{
    // Schedule your callback for an hour from now
    var callbackTime = DateTimeOffset.Now.AddHours(1.0);

    // Your web app's Uri, including any query string parameters your app might need
    Uri callbackUrl = new Uri("http://yourwebapp.com/Callback/UpdateFile");

    // Register the callback request with the Revalee service
    RevaleeRegistrar.ScheduleCallback(callbackTime, callbackUrl);
}
When Revalee calls your application back, your app would perform whatever action you have coded it to do and your app schedules the next callback too (by calling the ScheduleHourlyCallback() method from within your controller's action).
I hope this helps.
Note: The code example above uses a synchronous version of ScheduleCallback(); the Revalee client library also supports asynchronous calls à la:
RevaleeRegistrar.ScheduleCallbackAsync(callbackTime, callbackUrl);
Disclaimer: I was one of the developers involved with the Revalee project. To be clear, however, Revalee is free, open source software. The source code is available on GitHub.
I've got several web services: ASMX and WCF. A couple of them have methods that take a long time to process, but the input data for these methods is small and doesn't take much time to transfer on the wire. I want to move to a non-synchronous model: the client passes data to the service, the service answers that the data transfer was correct, and then processes it on a background thread without a connection to the client. So after the transfer, the connection should be closed. Is this possible? Can you point me to articles, or maybe just a search term to google?
John is right - once you close an HTTP connection, it is done. You can't get back to the same process.
So if you can use another technology that allows duplex on one connection (e.g. WCF), do it!
However, if you have no choice but to use web services, here are three ways to make it work. You may get timeouts on any of them.
Option 1:
Forget the part about 'client answers data was correct.' Just have each thread make its request and wait for the data.
Option 2:
Now, assuming that won't work and you must do the validation, this way requires the client to make 2 requests.
First request: returns valid/invalid.
Second request: returns the long-running results.
Variation of option 2:
If you have timeout problems, you could have the first request generate a GUID or unique database key, start another process, pass it this key, and return the key to the client. (This assumes you can get the server to allow you to start a process - it depends on your security settings/needs; if not, you may be able to start an async thread and have it keep running after the web service call ends.) The process does the long task and, when finished, updates the row in the database with the unique id, revealing the results plus a 'done' flag. The second request by the client can always return immediately: if the processing is not done, it returns that; if it is, it returns the results. The client repeats this every 5 seconds or so until done.
Hacks, I know, but we don't always have a choice for the technology we use.
Don't do this with ASMX web services. They weren't designed for that. If you must do it with ASMX, then have the ASMX pass the data off to a Windows Service that will do the actual work, in the background.
This is more practical with WCF.
We have been writing stuff to interact with the UK gov website and the way they handle something similar is that you send your request and data to the server and it responds saying, roughly, "thanks very much - we're processing it now, please call back later using this id" - all in an XML message. You then, at some point later, send a new http request to the service saying, essentially, "I'm enquiring about the status of this particular request id" and the server returns a result that says either it has processed OK, or processed with errors, or is still processing, please try again in xx seconds.
Similar to option 2 described previously.
It's a polling solution rather than a callback or 2 way conversation but it seems to work.
The server will need to keep, or have access to, some form of persistent table or log for each request's state - it can contain, e.g., the id, the original request, the current stage through the workflow, any error messages so far, the result (if any), etc. And the web service should probably pass the bulk of the request off to a separate Windows service, as already mentioned.
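A minimal sketch of that polling pattern follows; the service methods and the status store are hypothetical, and the in-process Task stands in for the Windows Service / queue you would use in production:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public enum RequestState { Processing, Done, Failed }

public class RequestStatus
{
    public RequestState State { get; set; }
    public string Result { get; set; }   // filled in when State == Done
    public string Error { get; set; }    // filled in when State == Failed
}

public class LongRunningService
{
    // Stand-in for a persistent table keyed by request id.
    private static readonly ConcurrentDictionary<Guid, RequestStatus> Requests =
        new ConcurrentDictionary<Guid, RequestStatus>();

    // First call: validate, accept, return an id immediately.
    public Guid Submit(string data)
    {
        var id = Guid.NewGuid();
        Requests[id] = new RequestStatus { State = RequestState.Processing };

        // In production, hand this to a Windows Service or message queue
        // rather than a Task inside the web process.
        Task.Run(() =>
        {
            try
            {
                Requests[id].Result = DoTheLongWork(data);
                Requests[id].State = RequestState.Done;
            }
            catch (Exception ex)
            {
                Requests[id].Error = ex.Message;
                Requests[id].State = RequestState.Failed;
            }
        });

        return id;
    }

    // Second call, repeated by the client until State is Done or Failed.
    public RequestStatus CheckStatus(Guid id)
    {
        return Requests.TryGetValue(id, out var status) ? status : null;
    }

    private string DoTheLongWork(string data)
    {
        // The slow processing goes here.
        return "result for " + data;
    }
}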