In my webapi application, I want to schedule an action at a certain time. Here is my code:
private readonly ConcurrentDictionary<string, Timer> timers;

public void Schedule(TimeSpan when, Action<ElapsedEventArgs, Item> expiredCallback, Item item)
{
    Timer timer = null;
    if (this.timers.TryGetValue(item.Label, out timer))
    {
        return;
    }

    timer = new Timer();
    timer.Interval = when.TotalMilliseconds;

    var addResult = this.timers.TryAdd(item.Label, timer);
    if (addResult)
    {
        timer.Elapsed += (sender, e) =>
        {
            Timer expiredTimer = null;
            if (this.timers.TryRemove(item.Label, out expiredTimer))
            {
                expiredTimer.Enabled = false;
                expiredTimer.Dispose();
            }

            expiredCallback(e, item);
        };

        timer.Start();
    }
}
The problem with this code is that if the application pool recycles after I schedule an action, the action will not be executed, since the timers are held only in memory.
A better solution would be to schedule a task using a scheduler API and have that scheduled task call back into my API, but this would complicate things. So is there a simple way to make this code work in the scenario I've described?
Postgres has pgAgent to control schedules. It is a free, easy-to-use time scheduler where you can schedule all the tasks that need to be run. All the tasks are saved in tables, so nothing is lost if the web server restarts. And it logs to a log table if a schedule fails.
I have a Windows service which performs multiple tasks that I have separated into functions; some will take, let's say, 5 minutes to complete, while others will take less.
private System.Timers.Timer tim = null;

protected override void OnStart(string[] args)
{
    tim = new System.Timers.Timer();
    this.tim.Interval = 30000;
    this.tim.Elapsed += new System.Timers.ElapsedEventHandler(this.OnTimedEvent_Tick);
    tim.Enabled = true;
}

private void OnTimedEvent_Tick(Object source, System.Timers.ElapsedEventArgs e)
{
    Task task0 = Task.Factory.StartNew(() => Function1()); // doing some database operations
    Task task1 = Task.Factory.StartNew(() => Function2()); // doing some other database operation
    Task task10 ......Up to Function10()
    Task.WaitAll(task0, task1, task2, task3, task4, task5, task6, task7, task8, task9, task10);
}
Is there a drawback to the above method if my Windows service is to run, let's say, every 30 seconds? If there is, how do I approach it?
This would work fine in your case since there is only a limited number of tasks. For cases where the number of tasks that can be created is unknown, consider using Parallel.ForEach instead. The one thing that you do need to handle here is exceptions: put your code in a try...catch statement.
try
{
    // ...your code here...
}
catch (AggregateException e)
{
    var x = e.Flatten();
    // Log or do whatever
}
The correct answer depends on what those tasks are actually doing. If all tasks must complete before any of them is restarted, set tim.AutoReset = false; then, after Task.WaitAll(), call tim.Start(). This ensures your wait time falls between complete executions of all tasks. Otherwise, if your timer interval is smaller than the task execution time, you won't see any wait time at all.
If some of your functions periodically take longer than the timer interval (30 seconds), the thread count will grow without any control, so you will end up exhausting the available threads, which will result in processing delays. If the timer interval is shorter than the processing time, consider applying a pause-resume timer scheme.
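A minimal sketch of that pause-resume scheme, assuming the `tim` field and the functions from the question (only two tasks shown for brevity):

```csharp
// Sketch: stop the timer while the work runs and restart it afterwards,
// so overlapping ticks cannot pile up threads.
private void OnTimedEvent_Tick(object source, System.Timers.ElapsedEventArgs e)
{
    this.tim.Stop(); // pause: no further ticks while the tasks run

    try
    {
        Task task0 = Task.Factory.StartNew(() => Function1());
        Task task1 = Task.Factory.StartNew(() => Function2());
        Task.WaitAll(task0, task1);
    }
    finally
    {
        // resume: the 30-second interval restarts only after all work completes,
        // even if one of the tasks threw an exception
        this.tim.Start();
    }
}
```

With this shape the interval measures the gap between runs rather than between starts, which is usually what you want when run time can exceed the interval.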
I created an activity which executes a web request and stores the result in the database. I found out that for these long-running activities I should write different code so that the workflow engine thread won't be blocked.
public sealed class WebSaveActivity : NativeActivity
{
    protected override void Execute(NativeActivityContext context)
    {
        GetAndSave(); // This takes 1 hour to accomplish.
    }
}
How should I rewrite this activity to meet the requirements for a long-running activity?
You could either spawn a thread within your existing process using e.g. ThreadPool.QueueUserWorkItem() so the rest of your workflow will continue to run if that is desired. Be sure to understand first what multithreading and thread synchronization means, though.
Or you could look into Hangfire or similar components to offload the entire job into a different process.
EDIT:
Based on your comment, you could look into the Task-based Asynchronous Pattern (TAP): Link 1, Link 2. It gives you a nice model for writing code that continues to work on things that can be done while waiting for the result of your long-running action. I am, however, not certain it covers all your needs. In Windows Workflow Foundation specifically, you might want to look into some form of workflow hibernation/persistence.
This scenario is where using WF's persistence feature shines. It allows you to persist a workflow instance to a database, to allow for some long running operation to complete. Once that completes, a second thread or process can re-hydrate the workflow instance and allow it to resume.
First you specify to the workflow application a workflow instance store. Microsoft provides a SQL workflow instance store implementation you can use, and provides the SQL scripts you can run on your SQL Server.
namespace MySolution.MyWorkflowApp
{
    using System.Activities;
    using System.Activities.DurableInstancing;
    using System.Activities.Statements;
    using System.Threading;

    internal static class Program
    {
        internal static void Main(string[] args)
        {
            var autoResetEvent = new AutoResetEvent(false);
            var workflowApp = new WorkflowApplication(new Sequence());
            workflowApp.InstanceStore = new SqlWorkflowInstanceStore("server=mySqlServer;initial catalog=myWfDb;...");
            workflowApp.Completed += e => autoResetEvent.Set();
            workflowApp.Unloaded += e => autoResetEvent.Set();
            workflowApp.Aborted += e => autoResetEvent.Set();
            workflowApp.Run();
            autoResetEvent.WaitOne();
        }
    }
}
Your activity would spin up a secondary process or thread that actually performs the save operation. There are a variety of ways you could do this:
On a secondary thread
By invoking a web method asynchronously that actually does the heavy lifting of performing the save operation
Your activity would look like this:
public sealed class WebSaveActivity : NativeActivity
{
    public InArgument<MyBigObject> ObjectToSave { get; set; }

    protected override bool CanInduceIdle
    {
        get
        {
            // This notifies the WF engine that the activity can be unloaded / persisted to an instance store.
            return true;
        }
    }

    protected override void Execute(NativeActivityContext context)
    {
        var currentBigObject = this.ObjectToSave.Get(context);
        currentBigObject.WorkflowInstanceId = context.WorkflowInstanceId;
        StartSaveOperationAsync(this.ObjectToSave.Get(context)); // This method should offload the actual save process to a thread or even a web method, then return immediately.

        // This tells the WF engine that the workflow instance can be suspended and persisted to the instance store.
        context.CreateBookmark("LongSaveOperation", AfterSaveCompletesCallback);
    }

    private void AfterSaveCompletesCallback(NativeActivityContext context, Bookmark bookmark, object value)
    {
        // Do more things after the save completes.
        var saved = (bool)value;
        if (saved)
        {
            // yay!
        }
        else
        {
            // boo!!!
        }
    }
}
The bookmark creation signals to the WF engine that the workflow instance can be unloaded from memory until something wakes up the workflow instance.
In your scenario, you'd like the workflow to resume once the long save operation completes. Let's assume the StartSaveOperationAsync method writes a small message to a queue of some sort, which a second thread or process polls to perform the save operations:
public static void StartSaveOperationAsync(MyBigObject myObjectToSave)
{
    var targetQueue = new MessageQueue(@".\private$\pendingSaveOperations");
    var message = new Message(myObjectToSave);
    targetQueue.Send(message);
}
In my second process, I can then poll the queue for new save requests and re-hydrate the persisted workflow instance so it can resume after the save operation finishes. Assume that the following method is in a different console application:
internal static void PollQueue()
{
    var targetQueue = new MessageQueue(@".\private$\pendingSaveOperations");
    while (true)
    {
        // This waits for a message to arrive on the queue.
        var message = targetQueue.Receive();
        var myObjectToSave = message.Body as MyBigObject;

        // Perform the long running save operation.
        LongRunningSave(myObjectToSave);

        // Once the save operation finishes, you can resume the associated workflow.
        var autoResetEvent = new AutoResetEvent(false);
        var workflowApp = new WorkflowApplication(new Sequence());
        workflowApp.InstanceStore = new SqlWorkflowInstanceStore("server=mySqlServer;initial catalog=myWfDb;...");
        workflowApp.Completed += e => autoResetEvent.Set();
        workflowApp.Unloaded += e => autoResetEvent.Set();
        workflowApp.Aborted += e => autoResetEvent.Set();

        // I'm assuming the object to save has a field somewhere that refers to the workflow instance that's running it.
        workflowApp.Load(myObjectToSave.WorkflowInstanceId);
        workflowApp.ResumeBookmark("LongSaveOperation", true); // The 'true' parameter is just our way of saying the save completed successfully. You can use any object type you desire here.
        autoResetEvent.WaitOne();
    }
}
private static void LongRunningSave(object myObjectToSave)
{
    throw new NotImplementedException();
}

public class MyBigObject
{
    public Guid WorkflowInstanceId { get; set; } = Guid.NewGuid();
}
Now the long running save operation will not impede the workflow engine, and it'll make more efficient use of system resources by not keeping workflow instances in memory for long periods of time.
I'm trying to figure out the best way to implement a delay into a Task such that, after the delay, it calls itself again to attempt the same work.
My application is a server that generates reports from the database after the mobile devices sync their data with the server. However, if another user has called the report generation method recently, I want it to pause for a period of time and then attempt to run again.
This is my current attempt:
private static DateTime _lastRequest = DateTime.MinValue;

public async void IssueReports()
{
    await Task.Run(() =>
    {
        if (DateTime.Now < _lastRequest + TimeSpan.FromMinutes(3)) // checks to see when a user last completed this method
        {
            Task.Delay(TimeSpan.FromMinutes(2));
            IssueReports(); // calls itself again after the delay
            return;
        }
    });

    // code to generate reports goes here
    _lastRequest = DateTime.Now; // updates last request into the static variable after it has finished running
}
Initially, if it failed the check, the task would just end. This prevented two users hitting the database at the same time and causing duplicate reports to be generated. However, the problem is that if two users sync within that same window, the second user's reports wouldn't be sent until another sync call is made.
The delay is supposed to give the server time to finish generating the reports and updating the database before the next batch is requested by calling itself.
Am I overcomplicating things? I'm worried about it potentially hammering system resources with multiple loops in the event the reports take a long time to process.
The following example runs a background service every 10 seconds in a loop. This method is recommended only if you believe your task will complete within 10 seconds.
public frm_testform()
{
    InitializeComponent();
    dispatcherTimer_Tick().DoNotAwait();
}

private async Task dispatcherTimer_Tick()
{
    DispatcherTimer timer = new DispatcherTimer();
    TaskCompletionSource<bool> tcs = null;
    EventHandler tickHandler = (s, e) => tcs.TrySetResult(true);
    timer.Interval = TimeSpan.FromSeconds(10);
    timer.Tick += tickHandler;
    timer.Start();

    while (true)
    {
        tcs = new TaskCompletionSource<bool>();
        await Task.Run(() =>
        {
            // Run your background service and UI update here
        });
        await tcs.Task; // wait for the next timer tick before looping again
    }
}
I was wondering the best way to get round this issue.
I have created a Windows Service that connects to a mailbox, processes the emails, then cleans up after itself, waits a certain amount of time and repeats.
protected override void OnStart(string[] args)
{
    this._mainTask = new Task(this.Poll, this._cancellationToken.Token, TaskCreationOptions.LongRunning);
    this._mainTask.Start();
}

private void Poll()
{
    CancellationToken cancellation = this._cancellationToken.Token;
    TimeSpan interval = TimeSpan.Zero;

    while (!cancellation.WaitHandle.WaitOne(interval))
    {
        using (IImapClient emailClient = new S22ImapClient())
        {
            ImapClientSettings chatSettings = ...;
            emailClient.Connect(chatSettings); // CAN SOMETIMES HANG HERE

            // SOME WORK DONE HERE
        }

        interval = this._waitAfterSuccessInterval;

        // check the cancellation state.
        if (cancellation.IsCancellationRequested)
        {
            break;
        }
    }
}
Now, I am using a 3rd-party IMAP client, S22.Imap. When I create the email client object, on occasion it will hang on creation as it attempts to log in. This in turn hangs my Windows service indefinitely.
public class S22ImapClient : IImapClient
{
    private ImapClient _client;

    public void Connect(ImapClientSettings imapClientSettings)
    {
        this._client = new ImapClient(
            imapClientSettings.Host,
            imapClientSettings.Port,
            imapClientSettings.EmailAddress,
            imapClientSettings.Password,
            AuthMethod.Login,
            true);
    }
}
How would I change the S22ImapClient.Connect() call to, behind the covers, use some method that attempts to connect for a set amount of time and aborts if it has not been able to?
The solution to this will also be used for anything else I need to do with the mail client, for example GetMessage(), DeleteMessage(), etc.
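One generic way to bound any of those calls is to run the blocking operation on the thread pool and stop waiting after a timeout. This is a sketch, not part of either answer below; `RunWithTimeout` is a hypothetical helper, and note that the abandoned call keeps running on its thread pool thread, so this only stops you waiting for it:

```csharp
// Hypothetical helper: wrap any blocking client call with an upper bound
// on how long we are willing to wait. The underlying call is NOT aborted;
// we merely stop waiting and throw.
private static T RunWithTimeout<T>(Func<T> operation, TimeSpan timeout)
{
    Task<T> task = Task.Run(operation);
    if (!task.Wait(timeout))
    {
        throw new TimeoutException("The IMAP operation did not complete in time.");
    }
    return task.Result;
}
```

Usage would look like `var message = RunWithTimeout(() => emailClient.GetMessage(uid), TimeSpan.FromSeconds(30));`, where `GetMessage` stands in for whichever wrapper method you expose.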
You could use a CancellationTokenSource and give it a time after which to cancel, in the event that it hangs too long. Otherwise, you would have to extend the third-party class and implement an async version of the Connect method. This is untested but should give you the basic idea.
private void Poll()
{
    CancellationTokenSource source = new CancellationTokenSource();
    TimeSpan interval = TimeSpan.Zero;

    while (!source.Token.WaitHandle.WaitOne(interval))
    {
        using (IImapClient emailClient = new S22ImapClient())
        {
            ImapClientSettings chatSettings = ...;
            var task = Task.Run(() =>
            {
                source.CancelAfter(TimeSpan.FromSeconds(5));
                emailClient.Connect(chatSettings); // CAN SOMETIMES HANG HERE
            }, source.Token);

            // SOME WORK DONE HERE
        }

        interval = this._waitAfterSuccessInterval;

        // check the cancellation state.
        if (source.IsCancellationRequested)
        {
            break;
        }
    }
}
I decided to stop using the S22.Imap email client for this particular problem and use another 3rd-party component, ActiveUp MailSystem, as it includes async calls out of the box.
This way I can do code like this:
IAsyncResult connectResult = this._client.BeginConnectSsl(imapClientSettings.Host, imapClientSettings.Port, null);
if (!connectResult.AsyncWaitHandle.WaitOne(this._connectionTimeout))
{
    throw new EmailTimeoutException(this._connectionTimeout);
}
I need to run a background thread for my MVC 4 app, where the thread wakes up every hour or so to delete old files in the database, then goes back to sleep. The method is below:
// delete old files from the database
public void CleanDB()
{
    while (true)
    {
        using (UserZipDBContext db = new UserZipDBContext())
        {
            // delete old files
            DateTime timePoint = DateTime.Now.AddHours(-24);
            foreach (UserZip file in db.UserFiles.Where(f => f.UploadTime < timePoint))
            {
                db.UserFiles.Remove(file);
            }
            db.SaveChanges();
        }

        // sleep for 1 hour
        Thread.Sleep(new TimeSpan(1, 0, 0));
    }
}
But where should I start this thread? The answer in this question creates a new Thread and starts it in Global.asax, but this post also mentions that "ASP.NET is not designed for long running tasks". My app would run on a shared host where I don't have admin privileges, so I don't think I can install a separate program for this task.
In short:
Is it okay to start the thread in Global.asax, given that my thread doesn't do much (it sleeps most of the time and does a small amount of DB work)?
I read that the risk of this approach is that the thread might get killed (though I'm not sure why). How can I detect when the thread is killed, and what can I do about it?
If this is a VERY bad idea, what else can I do on a shared host?
Thanks!
UPDATE
@usr mentioned that methods in Application_Start can be called more than once and suggested using Lazy. Before I read up on that topic, I thought of this approach. Calling SimplePrint.startSingletonThread() multiple times would only instantiate a single thread (I think). Is that correct?
public class SimplePrint
{
    private static Thread tInstance = null;

    private SimplePrint()
    {
    }

    public static void startSingletonThread()
    {
        if (tInstance == null)
        {
            tInstance = new Thread(new ThreadStart(new SimplePrint().printstuff));
            tInstance.Start();
        }
    }

    private void printstuff()
    {
        DateTime d = DateTime.Now;
        while (true)
        {
            Console.WriteLine("thread started at " + d);
            Thread.Sleep(2000);
        }
    }
}
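Note that the check-then-act in startSingletonThread is not by itself thread-safe: two early requests could both observe tInstance == null and start two threads. A sketch of a race-free variant (assuming the same tInstance field and a `using System.Threading;` directive) uses Interlocked.CompareExchange so only one caller ever wins:

```csharp
// Race-free variant (sketch): only the caller whose CompareExchange
// succeeds actually starts a thread; every other call is a no-op.
public static void startSingletonThread()
{
    var candidate = new Thread(new ThreadStart(new SimplePrint().printstuff));
    if (Interlocked.CompareExchange(ref tInstance, candidate, null) == null)
    {
        candidate.Start(); // we won the race; tInstance now refers to this thread
    }
}
```

A losing caller's candidate Thread object is simply discarded without being started, which is cheap since an unstarted Thread holds no OS resources.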
I think you should try Hangfire.
Incredibly easy way to perform fire-and-forget, delayed and recurring
tasks inside ASP.NET applications. No Windows Service required.
Backed by Redis, SQL Server, SQL Azure, MSMQ, RabbitMQ.
So you don't need admin privileges.
// Hangfire job bodies are expression trees, so the scheduled lambda must be
// a simple method call; DeleteOldFiles is a wrapper around the cleanup code
// from your question.
RecurringJob.AddOrUpdate(() => DeleteOldFiles(), Cron.Hourly);

public static void DeleteOldFiles()
{
    using (UserZipDBContext db = new UserZipDBContext())
    {
        // delete old files
        DateTime timePoint = DateTime.Now.AddHours(-24);
        foreach (UserZip file in db.UserFiles.Where(f => f.UploadTime < timePoint))
        {
            db.UserFiles.Remove(file);
        }
        db.SaveChanges();
    }
}
ASP.NET is not designed for long-running tasks, yes, but only because their work and data can be lost at any time when the worker process restarts.
You do not keep any state between iterations of your task, and the task can safely abort at any time, so this is safe to run in ASP.NET.
Starting the thread in Application_Start is a problem because that function can be called multiple times (surprisingly). I suggest you make sure to only start the deletion task once, for example by using Lazy<T> and accessing its Value property in Application_Start.
static readonly Lazy<object> workerFactory =
    new Lazy<object>(() => { StartThread(); return null; });

In Application_Start:

var dummy = workerFactory.Value;
For some reason I cannot think of a better init-once pattern right now, at least nothing without locks, volatile, or Interlocked, which are solutions of last resort.