MVC 5 Shared Long Running Task - C#

I have a long running action/method that is called when a user clicks a button on an internal MVC5 application. The button is shared by all users, meaning a second person can come in and click it seconds after it has been clicked. The long running task updates a shared task window for all clients via SignalR.
Is there a recommended way to check whether the task is still busy and simply notify the user that it's still working? Is there another recommended approach? (I can't use an external Windows service for the work.)
What I am doing currently seems like a bad idea, though I could be wrong and it may be feasible. See below for a sample of what I am doing.
public static Task WorkerTask { get; set; }

public JsonResult SendData()
{
    if (WorkerTask == null)
    {
        WorkerTask = Task.Factory.StartNew(async () =>
        {
            // Do the 2-15 minute long running job
        });
        WorkerTask = null;
    }
    else
    {
        TempData["Message"] = "Data is already being exported. Please see task window for the status.";
    }
    return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
}

I don't think what you're doing will work at all. I see three issues:
1. You are storing the WorkerTask on the controller (I think). A new controller is created for every request, so a new WorkerTask will always be created.
2. If #1 weren't true, you would still need to wrap the instantiation of WorkerTask in a lock, because multiple clients could reach the WorkerTask == null check at the same time.
3. You shouldn't have long running tasks in your web application. The app pool could restart at any time, killing your WorkerTask.
If you want to skip the best practices advice of "don't do long running work in your web app", you could use HostingEnvironment.QueueBackgroundWorkItem, introduced in .NET 4.5.2, to kick off the long running task. You could store a flag in the HttpRuntime.Cache (the shared application cache) to indicate whether the long running process has been kicked off.
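A minimal sketch of that approach; the cache key and the DoExport method are illustrative names of mine, not part of any real API, and a lock would still be needed to fully close the race from issue #2:

public JsonResult SendData()
{
    // Crude "already running?" flag in the shared cache.
    if (HttpRuntime.Cache["ExportRunning"] == null)
    {
        HttpRuntime.Cache.Insert("ExportRunning", true);
        HostingEnvironment.QueueBackgroundWorkItem(async ct =>
        {
            try
            {
                await DoExport(ct); // the 2-15 minute job; honour ct on app shutdown
            }
            finally
            {
                HttpRuntime.Cache.Remove("ExportRunning"); // clear the flag even on failure
            }
        });
    }
    else
    {
        TempData["Message"] = "Data is already being exported. Please see task window for the status.";
    }
    return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
}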
This solution has more than a few issues (it won't work in a web farm, the app pool could die, etc.). A more robust solution would be to use something like Quartz.net or Hangfire.

Related

Making async call to Controller function (void). Not getting control back to UI

I am working on an MVC 5 based report generating web application (Excel files). On one "GenerateReports" page, a button click calls the StartMonthly function. This passes control to a void method "GenerateReportMainMonthly" in the controller, which calls another void method "GenerateReportMain". In GenerateReportMain, five other void functions are called.
I do not want control to get stuck at the back end until the report generation is completed. On button click, an alert box should show "Report Generation started." and control should come back to the "GenerateReports" page.
I have tried ajax but have not been able to get control back to the UI. How can I get control back to the same page without waiting for the back-end process to complete?
$('#btnStart').on('click', StartMonthly);

function StartMonthly() {
    var url = '/GenerateReport/GenerateReportMainMonthly';
    window.location.href = url;
}

public void GenerateReportMainMonthly()
{
    _isDaily = false;
    GenerateReportMain();
}
It seems you are after running background tasks in your controllers. This is generally a bad idea (see this SO answer) as you might find that your server process has been killed mid-way and your client will have to handle it somehow.
If you absolutely must run long-ish processes in your controller and cannot extract the work into a background worker of some sort, you can opt for something like this SO answer suggests. Implementation will vary depending on your setup and how fancy you are willing/able to go, but the basics will remain the same (a rough sketch follows the list):
you make an instantaneous call to initiate your long action and return back a job id to refer back to
your backend will process the task and update the status accordingly
your client will periodically check for status and do your desired behaviour when the job is reported complete.
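Here is a rough sketch of that pattern, under the same caveats (single server, and the app pool can still recycle mid-job); all of the names here are illustrative:

public class GenerateReportController : Controller
{
    // In-memory job registry; survives only as long as the app domain.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public JsonResult StartMonthly()
    {
        var jobId = Guid.NewGuid();
        Jobs[jobId] = "Running";
        HostingEnvironment.QueueBackgroundWorkItem(ct =>
        {
            GenerateReportMain();      // the long-running work from the question
            Jobs[jobId] = "Complete";
        });
        return Json(new { jobId });    // returns immediately; client keeps the id
    }

    [HttpGet]
    public JsonResult Status(Guid jobId)
    {
        string status;
        return Json(Jobs.TryGetValue(jobId, out status) ? status : "Unknown",
                    JsonRequestBehavior.AllowGet);
    }
}

The client then calls StartMonthly via ajax, shows the "Report Generation started." alert, and polls Status(jobId) every few seconds.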
If I were to tackle this problem I'd try to avoid periodic polling and rather opt for SignalR updates as described in this blog post (not mine, I just googled it up as an example).
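With SignalR the status endpoint disappears and the worker pushes updates itself. A minimal sketch with an assumed ProgressHub (the hub name and the reportProgress client method are mine):

// An empty hub is enough for server-to-client pushes.
public class ProgressHub : Hub { }

// Inside the background work item, once the job finishes:
var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
hub.Clients.All.reportProgress(jobId, "Complete"); // clients handle reportProgress in JS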

That async-ing feeling - httpclient and mvc thread blocking

Dilemma, dilemma...
I've been working up a solution to a problem that uses async calls to the HttpClient library (GetAsync => ConfigureAwait(false) etc). In a console app, my dll is very responsive and the mixture of async await calls and Parallel.ForEach(=>) really makes me glow.
Now for the issue. After moving from this test harness to the target app, things have become problematic. I'm using ASP.NET MVC 4 and have hit a few issues. The main issue is that calling my process on a controller action actually blocks the main thread until the async actions are complete. I've tried using an async controller pattern, I've tried using Task.Factory, I've tried using new Threads. You name it, I've tried all the flavours - and then some!
Now, I appreciate that the nature of http is not designed to facilitate long processes like this and there are a number of articles here on SO that say don't do it. However, there are mitigating reasons why I NEED to use this approach. The main reason that I need to run this in mvc is that I actually update the live data cache (on the mvc app) in realtime by raising an event in my dll's code. This means that fragments of the 50-60 data feeds can be pushed out live before the entire async action is complete. Therefore, client apps can receive partial updates within seconds of the async action being instigated. If I were to delegate the process out to a console app that ran the entire process in the background, I'd no longer be able to harness those partial fragment updates, and this is the raison d'etre behind the entire choice of this architecture.
Can anyone shed light on a solution that would allow me to mitigate the blocking of the thread, whilst at the same time allowing each async fragment to be consumed by my object model and fed out to the client apps (I'm using SignalR to make these client updates)? A kind of nirvana would be a scenario where an out-of-process cache object could be shared between numerous processes - the cache update could then be triggered and consumed by my mvc process (aka - http://devproconnections.com/aspnet-mvc/out-process-caching-aspnet). And so back to reality...
I have also considered using a secondary webservice to achieve this, but would welcome other options before once again over engineering my solution (there are already many moving parts and a multitude of async Actions going on).
Sorry not to have added any code; I'm hoping for practical philosophy/insights rather than code help on this, though I would of course welcome coded examples that illustrate a solution to my problem.
I'll update the question as we move in time, as my thinking process is still maturing on this.
[edit] - for the sake of clarity, the snippet below is my Brothers Grimm code collision (extracted from a larger body of work):
Parallel.ForEach(scrapeDataBases, new ParallelOptions()
{
    MaxDegreeOfParallelism = Environment.ProcessorCount * 15
},
async dataBase =>
{
    await dataBase.ScrapeUrlAsync().ConfigureAwait(false);
    await UpdateData(dataType, (DataCheckerScrape)dataBase);
});
async and Parallel.ForEach do not mix naturally, so I'm not sure what your console solution looks like. Furthermore, Parallel should almost never be used on ASP.NET at all.
It sounds like what you would want is to just use Task.WhenAll.
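Something along these lines, reusing the names from the snippet in the question:

// One task per database, all awaited together; no Parallel.ForEach involved.
var tasks = scrapeDataBases.Select(async dataBase =>
{
    await dataBase.ScrapeUrlAsync().ConfigureAwait(false);
    await UpdateData(dataType, (DataCheckerScrape)dataBase);
});
await Task.WhenAll(tasks);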
On a side note, I think your reasoning around background processing on ASP.NET is incorrect. It is perfectly possible to have a separate process that updates the clients via SignalR.
Given that your question is pretty high level without a lot of code, you could try Reactive Extensions (Rx).
Something like this:
private IEnumerable<Task<Scraper>> ScrappedUrls()
{
    // Return the 50 to 60 tasks, one for each website.
    // I assume they all return the same type.
    // return .ScrapeUrlAsync().ConfigureAwait(false);
    throw new NotImplementedException();
}

public async Task<IEnumerable<ScrapeOdds>> GetOdds()
{
    var results = new Collection<ScrapeOdds>();
    var urlRequest = ScrappedUrls();
    var observableUrls = urlRequest.Select(u => u.ToObservable()).Merge();
    var publisher = observableUrls.Publish();
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<OddsHub>();
    publisher.Subscribe(scraper =>
    {
        // Convert to the result set here
        var scrapedOdds = scraper.GetOdds();
        results.Add(scrapedOdds);
        // Update anything else you want when it arrives.
        // Update SignalR here
        hubContext.Clients.All.UpdatedOdds(scrapedOdds);
    });
    publisher.Connect(); // start the underlying subscriptions; without this the published sequence never fires
    // Will not continue until all the subscriptions are done.
    await publisher;
    return results;
}
The Merge will process the results as they come in. You can then update the SignalR hubs plus whatever else you need to update as the results arrive. The controller action has to wait for them all to come in; that's why there is an await on the publisher.
I don't really know whether HttpClient is going to like having 50-60 web calls in flight at once. If it doesn't, you can just take the IEnumerable to an array and break it down into smaller chunks (see the sketch below). There should also be some error checking in there. With Rx you can also tell it to SubscribeOn and ObserveOn different threads, but I think with everything being pretty much async that wouldn't be necessary.
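For the chunking idea, a rough sketch; the batch size is arbitrary, and urls/ScrapeUrlAsync stand in for whatever produces the calls:

// Run the 50-60 scrapes in batches of 10 so they don't all hit HttpClient at once.
const int batchSize = 10;
var pending = urls.ToList();
for (int i = 0; i < pending.Count; i += batchSize)
{
    var batch = pending.Skip(i).Take(batchSize)
                       .Select(u => u.ScrapeUrlAsync());
    await Task.WhenAll(batch); // next batch starts only when this one is done
}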

C# console app to send email at scheduled times

I've got a C# console app running on Windows Server 2003 whose purpose is to read a table called Notifications and a field called "NotifyDateTime" and send an email when that time is reached. I have it scheduled via Task Scheduler to run hourly, check to see if the NotifyDateTime falls within that hour, and then send the notifications.
It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour.
Is there a lightweight process/console app I could leave running on the server that reads in the day's notifications from the table and issues them exactly when they're due?
I thought of a Windows service, but that seems like overkill.
My suggestion is to write a simple application which uses Quartz.NET.
Create two jobs:
The first fires once a day, reads all awaiting notification times planned for that day from the database, and creates triggers based on them.
The second, registered for those triggers (prepared by the first job), sends your notifications.
What's more,
I strongly advise you to create a Windows service for this purpose, so as not to have a lonely console application constantly running. It could be accidentally terminated by anyone who has access to the server under the same account. Moreover, if the server is restarted you have to remember to start such an application again manually, while a service can be configured to start automatically.
If you're using a web application you could always host this logic within the IIS application pool process, although that is a bad idea overall: such a process is periodically restarted by default, so you would have to change its default configuration to be sure it is still working in the middle of the night when the application is not in use; otherwise your scheduled tasks will be terminated.
UPDATE (code samples):
Manager class, internal logic for scheduling and unscheduling jobs. For safety reasons implemented as a singleton:
internal class ScheduleManager
{
    private static readonly ScheduleManager _instance = new ScheduleManager();
    private readonly IScheduler _scheduler;

    private ScheduleManager()
    {
        var properties = new NameValueCollection();
        properties["quartz.scheduler.instanceName"] = "notifier";
        properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
        properties["quartz.threadPool.threadCount"] = "5";
        properties["quartz.threadPool.threadPriority"] = "Normal";

        var sf = new StdSchedulerFactory(properties);
        _scheduler = sf.GetScheduler();
        _scheduler.Start();
    }

    public static ScheduleManager Instance
    {
        get { return _instance; }
    }

    public void Schedule(IJobDetail job, ITrigger trigger)
    {
        _scheduler.ScheduleJob(job, trigger);
    }

    public void Unschedule(TriggerKey key)
    {
        _scheduler.UnscheduleJob(key);
    }
}
First job, for gathering required information from the database and scheduling notifications (second job):
internal class Setup : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            foreach (var kvp in DbMock.ScheduleMap)
            {
                var email = kvp.Value;
                var notify = new JobDetailImpl(email, "emailgroup", typeof(Notify))
                {
                    JobDataMap = new JobDataMap { { "email", email } }
                };
                var time = new DateTimeOffset(DateTime.Parse(kvp.Key).ToUniversalTime());
                var trigger = new SimpleTriggerImpl(email, "emailtriggergroup", time);
                ScheduleManager.Instance.Schedule(notify, trigger);
            }
            Console.WriteLine("{0}: all jobs scheduled for today", DateTime.Now);
        }
        catch (Exception e) { /* log error */ }
    }
}
Second job, for sending emails:
internal class Notify : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            var email = context.MergedJobDataMap.GetString("email");
            SendEmail(email);
            ScheduleManager.Instance.Unschedule(new TriggerKey(email));
        }
        catch (Exception e) { /* log error */ }
    }

    private void SendEmail(string email)
    {
        Console.WriteLine("{0}: sending email to {1}...", DateTime.Now, email);
    }
}
Database mock, just for purposes of this particular example:
internal class DbMock
{
    public static IDictionary<string, string> ScheduleMap =
        new Dictionary<string, string>
        {
            {"00:01", "foo@gmail.com"},
            {"00:02", "bar@yahoo.com"}
        };
}
Main entry of the application:
public class Program
{
    public static void Main()
    {
        FireStarter.Execute();
    }
}

public class FireStarter
{
    public static void Execute()
    {
        var setup = new JobDetailImpl("setup", "setupgroup", typeof(Setup));
        var midnight = new CronTriggerImpl("setuptrigger", "setuptriggergroup",
                                           "setup", "setupgroup",
                                           DateTime.UtcNow, null, "0 0 0 * * ?");
        ScheduleManager.Instance.Schedule(setup, midnight);
    }
}
If you're going to use a service, just put this main logic in the OnStart method (I advise starting the actual logic in a separate thread, so as not to wait for the service to start and likewise to avoid possible timeouts; not an issue in this particular example obviously, but in general):
protected override void OnStart(string[] args)
{
    try
    {
        var thread = new Thread(x => WatchThread(new ThreadStart(FireStarter.Execute)));
        thread.Start();
    }
    catch (Exception e) { /* log error */ }
}
If so, encapsulate the logic in some wrapper, e.g. WatchThread, which will catch any errors from the thread:
private void WatchThread(object pointer)
{
    try
    {
        ((Delegate)pointer).DynamicInvoke();
    }
    catch (Exception e) { /* log error and stop service */ }
}
You are trying to implement a polling approach, where a job monitors a record in the DB for changes.
The problem is that you are hitting the DB periodically; if the one-hour delay is later reduced to one minute, this solution becomes a performance bottleneck.
Approach 1
For this scenario, use a queue-based approach to avoid any issues; you can also scale up the number of instances if you are sending many emails.
I understand there is a program that updates NotifyDateTime in a table; the same program can push a message to the queue informing that there is a notification to handle.
A Windows service watches this queue for incoming messages; when there is a message it performs the required operation (i.e. sending the email).
Approach 2
http://msdn.microsoft.com/en-us/library/vstudio/zxsa8hkf(v=vs.100).aspx
You can also invoke C# code from a SQL Server stored procedure if you are using MS SQL Server, but in that case you are using your SQL Server process to send mail, which is not good practice.
However, you can invoke a web service or a WCF service which can send the emails.
Approach 1, though, is error free, scalable, trackable and asynchronous, and doesn't trouble your database or app: you have a separate process to send the email.
Queues
Use MSMQ, which is part of Windows Server.
You can also try https://www.rabbitmq.com/dotnet.html
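A bare-bones sketch of Approach 1 with MSMQ (System.Messaging); the queue path and message body are placeholders, and a real payload would identify the notification row:

const string path = @".\Private$\EmailNotifications";

// Producer side (the program that updates NotifyDateTime):
if (!MessageQueue.Exists(path))
    MessageQueue.Create(path);
using (var queue = new MessageQueue(path))
    queue.Send("notificationId=42", "New notification");

// Consumer side (the Windows service):
using (var queue = new MessageQueue(path))
{
    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
    var message = queue.Receive();    // blocks until a message arrives
    var body = (string)message.Body;  // look up the notification and send the e-mail
}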
Pre-scheduled tasks (at undefined times) are generally a pain to handle, as opposed to scheduled tasks where Quartz.NET seems well suited.
Furthermore, another distinction is to be made between fire-and-forget for tasks that shouldn't be interrupted/change (ex. retries, notifications) and tasks that need to be actively managed (ex. campaign or communications).
For the fire-and-forget type tasks a message queue is well suited. If the destination is unreliable, you will have to opt for retry levels (ex. try send (max twice), retry after 5 minutes, try send (max twice), retry after 15 minutes) that at least require specifying message-specific TTLs with a send and retry queue. Here's an explanation with a link to code to set up a retry level queue.
The managed pre-scheduled tasks will require a database queue approach (click here for a CodeProject article on designing a database queue for scheduled tasks). This will allow you to update, remove or reschedule notifications, given that you keep track of ownership identifiers (ex. specify a user id and you can delete all pending notifications when the user should no longer receive them, such as being deceased/unsubscribed).
Scheduled e-mail tasks (including any communication tasks) require finer grained control (expiration, retry and time-out mechanisms). The best approach to take here is to build a state machine that is able to process the e-mail task through its steps (expiration, pre-validation, pre-mailing steps such as templating, inlining css, making links absolute, adding tracking objects for open tracking, shortening links for click tracking, post-validation and sending and retrying).
Hopefully you are aware that the .NET SmtpClient isn't fully compliant with the MIME specifications and that you should be using a SaaS e-mail provider such as Amazon SES, Mandrill, Mailgun, Customer.io or SendGrid. I'd suggest you look at Mandrill or Mailgun. Also, if you have some time, take a look at MimeKit, which you can use to construct MIME messages for the providers that allow sending raw e-mail and don't necessarily support things like attachments/custom headers/DKIM signing.
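For illustration, constructing a simple message with MimeKit looks roughly like this (names and addresses are placeholders):

// Build a plain-text MIME message with MimeKit.
var message = new MimeMessage();
message.From.Add(new MailboxAddress("Notifier", "noreply@example.com"));
message.To.Add(new MailboxAddress("User", "user@example.com"));
message.Subject = "Your notification";
message.Body = new TextPart("plain") { Text = "Hello from the scheduler." };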
I hope this sets you on the right path.
Edit
You will have to use a service to poll at specific intervals (ex. 15 seconds or 1 minute). The database load can be somewhat reduced by checking out a certain amount of due tasks at a time and keeping an internal pool of messages due for sending (with a time-out mechanism in place). When no messages are returned, just 'sleep' the polling for a while. I would advise against building such a system against a single table in a database; instead, design an independent e-mail scheduling system that you can integrate with.
I would turn it into a service instead.
You can use a System.Threading.Timer with a callback for each of the scheduled times.
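A minimal sketch, assuming SendEmail, address and notifyDateTime come from your notifications table; note you must keep a reference to the timer or it can be garbage collected before it fires:

// One single-shot timer per notification, firing at its due time.
var due = notifyDateTime - DateTime.Now;
if (due < TimeSpan.Zero) due = TimeSpan.Zero;
var timer = new System.Threading.Timer(
    _ => SendEmail(address),       // callback runs on a thread-pool thread
    null,
    due,                           // fire once after this delay
    Timeout.InfiniteTimeSpan);     // no repeat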
Scheduled tasks can be scheduled to run just once at a specific time (as opposed to hourly, daily, etc.), so one option would be to create the scheduled task when the specific field in your database changes.
You don't mention which database you use, but some databases support the notion of a trigger, e.g. in SQL: http://technet.microsoft.com/en-us/library/ms189799.aspx
If you know when the emails need to be sent ahead of time then I suggest that you use a wait on an event handle with the appropriate timeout. At midnight look at the table then wait on an event handle with the timeout set to expire when the next email needs to be sent. After sending the email wait again with the timeout set based on the next mail that should be sent.
Also, based on your description, this should probably be implemented as a service but it is not required.
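A sketch of that loop; GetNextDue and SendDueEmails are placeholders, and signalling the event handle lets you interrupt the wait early (e.g. when a new row is inserted):

var wake = new AutoResetEvent(false);
while (true)
{
    DateTime next = GetNextDue();          // earliest NotifyDateTime still pending
    TimeSpan timeout = next - DateTime.Now;
    if (timeout < TimeSpan.Zero) timeout = TimeSpan.Zero;
    wake.WaitOne(timeout);                 // sleeps until due time or an early signal
    SendDueEmails();
}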
I dealt with the same problem about three years ago. I changed the process several times before it was good enough; I'll tell you why:
The first implementation used a special daemon from the web host which called the IIS website. The website checked the caller IP, then checked the database and sent the emails. This worked until one day I got a lot of very angry emails from users whose mailboxes I had totally spammed. The drawback of keeping email in the database and sending via SMTP is that there is NOTHING which ensures a DB-to-SMTP transaction. You are never sure whether the email was successfully sent or not: sending can succeed, fail, or report a false positive or a false negative (the SMTP client tells you the email was not sent, but it was). There was a problem with the SMTP server, and it returned false (email not sent) even though the email had been sent. The daemon kept resending the email every hour for a whole day before the angry emails appeared.
Second implementation: to prevent spamming, I changed the algorithm so that the email is considered sent even if sending failed (my email notifications were not too important). My first advice: don't launch the daemon too often, because a false negative SMTP error will make users upset.
After several months there were some changes on the server and the daemon stopped working well. I got an idea from Stack Overflow: bind a .NET timer to the web application domain. It wasn't a good idea, because it seems IIS can restart the application from time to time because of memory leaks, and the timer never fires if the restarts are more frequent than the timer ticks.
The last implementation: every hour the Windows scheduler fires a Python batch which reads the local website, and this fires the ASP.NET code. The advantage is that the Windows scheduler calls the local batch and the website reliably; IIS doesn't hang, since it can restart. The timer page is part of my website, so it's still one project (you can use a console app instead). Simple is better. It just works!
Your first choice is the correct option in my opinion. Task Scheduler is the MS recommended way to perform periodic jobs. Moreover it's flexible, can reports failures to ops, is optimized and amortized amongst all tasks in the system, ...
Creating any console-kind app that runs all the time is fragile. It can be shut down by anyone, needs an open session, doesn't restart automatically, ...
The other option is creating some kind of service. It's guaranteed to be running all the time, so that would at least work. But what was your motivation?
"It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour."
Oh yeah, optimization... So you want to add a new permanently running service to your computer so that you avoid one potentially unneeded SQL query every hour? The cure looks worse than the disease to me.
And I didn't mention all the drawbacks of the service. On one hand, your task uses no resource when it doesn't run. It's very simple, lightweight and the query efficient (provided you have the right index).
On the other hand, if your service crashes it's probably gone for good. It needs a way to be notified of new e-mails that may need to be sent earlier than what's currently scheduled. It permanently uses computer resources, such as memory. Worse, it may contain memory leaks.
I think that the cost/benefit ratio is very low for any solution other than the trivial periodic task.

BackgroundWorker in MVC3 application freezes UI

I'm developing an MVC3 based web application which at one point needs to cache a large amount of data from an external database. The process takes about 10-30 min (depending on traffic), so I put it in a BackgroundWorker. When you click a particular button on the webpage, ajax calls another method in the controller, and depending on the returned value the proper information is displayed on the user interface.
Controller:
if (IsDbLocked())
{
    return this.Json(new
    {
        success = false,
        message = "There is already an update requested by other user on the way."
    });
}

this.model.DataUpdateBegin();
return this.Json(new { success = true });
Model:
public void DataUpdateBegin()
{
    var backgroundWorker = new BackgroundWorker
    {
        WorkerSupportsCancellation = false,
        WorkerReportsProgress = true
    };
    backgroundWorker.DoWork += this.DataUpdateWorkerDoWork;
    backgroundWorker.ProgressChanged += this.DataUpdaterWorkerProgressChanged;
    backgroundWorker.RunWorkerCompleted += this.DataUpdaterWorkerRunWorkerCompleted;

    if (this.DataUpdateLockDb(true))
    {
        backgroundWorker.RunWorkerAsync();
    }
}
Now when I do an update, the UI still freezes. While debugging the controller I can see that it starts the BackgroundWorker and instantly continues to the return statement (with success = true), but then it just finishes and nothing else happens (the returned message never reaches the webpage).
I can see the page from another browser/user and everything works OK, but this particular thread is locked for several minutes (not the entire 10-30 min, as it gets unlocked after about 5 min - timeout?)
My question is, what did I do wrong and how do I fix it? I expect the BackgroundWorker to just run in the background and leave the user free to move around the page wherever he wishes. Also, making an entry in the database and having some external application fetch it and do all the work (in a real background process) is not an option for me.
Do not use BackgroundWorker like this. Yes, the work will be done on another thread, but still within the scope of that web request. Web requests are not meant to be alive for 30 minutes; there are plenty of things that can go wrong (timeouts, app pool restarts, other IIS behaviour...).
If you have this type of long-running task, you should do it in some worker - a Windows service, maybe a console application, etc. - and from the web you just start it (console) or mark it as work to be done (message queue, Azure queue).
Also, I hope you are not locking the database (your method IsDbLocked()) for 30 minutes? Just do your import in a transaction and use a proper isolation level (read committed) so the DB works normally the whole time and the changes are all there the instant the import finishes.
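A rough sketch of that transactional import with System.Data.SqlClient; the connection string and the commands are placeholders:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Wrap the whole import in one transaction at the suggested isolation level.
    using (var tx = connection.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        // ... run the import commands with command.Transaction = tx ...
        tx.Commit(); // the imported changes become visible to readers only after this point
    }
}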
I'm developing MVC3 based web application
... so I put it in BackgroundWorker
BackgroundWorker is designed to work with Windows Forms applications; to achieve something similar in a web application, use Task.Factory.StartNew or a Thread (more primitive).
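A sketch of the Task-based shape, keeping the names from the model above (DataUpdateWork stands in for the body of DataUpdateWorkerDoWork):

public void DataUpdateBegin()
{
    if (this.DataUpdateLockDb(true))
    {
        Task.Factory.StartNew(
            () => this.DataUpdateWork(),         // runs on a background thread
            TaskCreationOptions.LongRunning);    // hint that it will run for a while
    }
}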

How do I manage threads in a C# web app?

I built a little web application that displays charts. I was thinking that it might be useful for the superuser of the app to do a complete data refresh, however this process takes around 10 minutes to complete. I was thinking perhaps the user could click a button that would start off a new thread to do a data refresh and subsequent clicks would kill the thread and restart the data population process. The user would then be free to browse about the site and view the charts as their data is populated.
Is there a simple method of accomplishing something like this?
You can twist ASP.NET to do this sort of thing, but it violates a few good general rules for ASP.NET development -- and could really cause problems in a server farm.
So, the most obvious route is to do this work in a web service. You can have the method return a chunk of HTML if you want. You could also add status methods to see how the thread is progressing.
Other options include: handing the intense processing off to a database server (sounds like this might be a good use of OLAP), or, as another cheap trick, setting up the click to fire off a scheduled task that runs on the server. Can you provide some additional detail about the environment? Single server? Data storage platform, version of .NET?
OK, I didn't use either answer, so here is what I did. I decided that it would be better if subsequent clicks simply returned instead of killing the currently executing thread. Thanks for your answers, guys.
//code behind
protected void butRefreshData_Click(object sender, EventArgs e)
{
    Thread t = new Thread(new ThreadStart(DataRepopulater.DataRepopulater.RepopulateDatabase));
    t.Start();
}

//DataRepopulater.cs
namespace DataRepopulater
{
    public static class DataRepopulater
    {
        private static string myLock = "My Lock";

        public static void RepopulateDatabase()
        {
            // If another refresh is already running, TryEnter fails and we just return.
            if (Monitor.TryEnter(myLock))
            {
                try
                {
                    DoWork();
                }
                finally
                {
                    Monitor.Exit(myLock); // always release, even if DoWork throws
                }
            }
        }
    }
}
