I have several Machine classes that have a state (online/offline) and a DateTime EndsAt for when they will turn offline if they are online. They are mapped to the database using Entity Framework. When I turn a machine on I pass the number of seconds it should stay online and create a System.Threading.Timer to change its state back to offline when the time comes (EndsAt == DateTime.Now). Turning them on works fine, but they never turn off: turnOff() is never called. And even if it were called and the object changed its own properties, would those changes be saved by Entity Framework?
public class Machine
{
    private Timer timer = null;

    [Key]
    public int MachineId { get; set; }
    public bool Online { get; set; }
    public bool Occupied { get; set; }
    public DateTime EndsAt { get; set; }

    public void TurnOn(TimeSpan amount)
    {
        Debug.WriteLine("Turn on reached");
        if (!Online)
        {
            EndsAt = DateTime.Now.Add(amount);
            Online = true;
            setTimer();
        }
    }

    private void turnOff(object state)
    {
        Online = false;
        Occupied = false;
        Debug.WriteLine("Timer ended!");
    }

    private void setTimer()
    {
        Debug.WriteLine("Timer being set");
        if (EndsAt > DateTime.Now)
        {
            timer = new Timer(new TimerCallback(turnOff));
            int msUntilTime = (int)((EndsAt - DateTime.Now).TotalMilliseconds);
            timer.Change(msUntilTime, Timeout.Infinite);
        }
        else
        {
            Debug.WriteLine("EndsAt is smaller than current date");
        }
    }
}
Controller method where TurnOn() is called:
[HttpPost]
public ActionResult TurnOn()
{
    bool isChanged = false;
    if (Request["machineId"] != null && Request["amount"] != null)
    {
        byte machineId = Convert.ToByte(Request["machineId"].ToString());
        int amount = Convert.ToInt32(Request["amount"].ToString());
        foreach (var machine in db.Machines.ToList())
        {
            if (machine.MachineId == machineId)
            {
                machine.TurnOn(TimeSpan.FromSeconds(amount));
                db.Entry(machine).State = EntityState.Modified;
                db.SaveChanges();
                isChanged = true;
            }
        }
    }
    if (isChanged)
        return new HttpStatusCodeResult(HttpStatusCode.OK);
    else
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
The problem comes not from Entity Framework but from ASP.NET.
The best way I can describe it is to imagine your page request in ASP.NET as a console application: on every new request the application starts up, handles the request, responds to the user, waits a tiny bit for another request to come in, then exits the Main() function.
If you created a Timer in that kind of application, once the "tiny bit" runs out and Main() returns, your timer is no longer running and the thing you were waiting for will never happen. IIS does this exact process with AppDomain recycling: if no requests come in, it shuts down the AppDomain and kills your timer.
There are two ways I know of to handle this problem:
The first is to make a second application that runs as a Windows service outside of IIS and is therefore always running; it is what holds the timer. When you want to run any kind of long-running operation that will outlive a page request, your web app uses WCF or some other technology to tell the service to start the timer, and when the timer is done the service executes whatever operation you wanted.
The second is to save the timer request in a database, then in the background, before every request, check the database of events and see if any need to be executed. Libraries like Hangfire make this process easy; they also have tricks to keep the AppDomain alive longer or wake it back up if it shuts down (often they use two websites that talk to each other, each keeping the other one alive).
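A minimal sketch of that second approach using Hangfire (assuming Hangfire is already configured with persistent storage; the MachineService class and the MachinesContext EF context are illustrative names, not part of the original code):

using System;
using Hangfire;

public class MachineService
{
    // Hangfire persists the scheduled call in its storage, so it survives
    // AppDomain recycles and is executed by the Hangfire server when due.
    public string ScheduleTurnOff(int machineId, TimeSpan amount)
    {
        return BackgroundJob.Schedule<MachineService>(
            svc => svc.TurnOff(machineId),
            amount);
    }

    // Must be public so Hangfire can invoke it after deserializing the job.
    public void TurnOff(int machineId)
    {
        using (var db = new MachinesContext()) // hypothetical EF context
        {
            var machine = db.Machines.Find(machineId);
            if (machine != null && machine.Online)
            {
                machine.Online = false;
                db.SaveChanges(); // this is what persists the state change
            }
        }
    }
}

This also answers the Entity Framework half of the question: changes to an entity's properties are only saved when some code loads the entity into a context and calls SaveChanges(), so the turn-off job has to do exactly that.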
Even though this specific question has been answered, here's some related discussion I hope can be helpful when a timer callback is not firing.
Important considerations when using Threading.Timer
1.) A Timer is subject to garbage collection: even if it is active, it may be collected when nothing holds a reference to it (see the sketch after this list).
2.) .NET has many different types of timers, and it's important to use the right kind in the right way because threading is involved: use Forms.Timer for Windows Forms, Threading.Timer (or its Timers.Timer wrapper, whose thread safety is debated) for general work, or Web.UI.Timer with ASP.NET for web page postbacks.
3.) The callback method is defined when the timer is instantiated and cannot be changed.
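A minimal sketch of point 1 (the class name is illustrative): a timer held only by a local variable can be collected mid-flight, while one stored in a field stays reachable as long as its owner does.

using System;
using System.Threading;

public class Poller
{
    // The field keeps the timer reachable, so the GC will not
    // collect it while the Poller instance itself is alive.
    private readonly Timer _timer;

    public Poller()
    {
        _timer = new Timer(_ => Console.WriteLine("tick"), null, 0, 1000);
    }

    public static void Risky()
    {
        var t = new Timer(_ => Console.WriteLine("tick"), null, 0, 1000);
        // Once Risky() returns, 't' is unreachable; the timer may be
        // garbage collected, silently stopping the callbacks.
    }
}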
Timer-related tools
1.) You can use Thread.Sleep to release CPU resources and place your thread in the WaitSleepJoin state, which essentially stops it.
2.) Sometimes a Task can be used along with, or instead of, a timer (see the sketch after this list).
3.) A Stopwatch can be used in different ways, for example polled inside a loop.
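A minimal sketch of point 2, using Task.Delay as a one-shot replacement for a timer (the helper name is illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

public static class DelayedAction
{
    // Runs 'action' once after 'delay'; cancelling the token
    // cancels the pending delay before the action fires.
    public static async Task RunAfterAsync(TimeSpan delay, Action action,
        CancellationToken ct = default(CancellationToken))
    {
        await Task.Delay(delay, ct);
        action();
    }
}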
Related
I've got a Timer that does a 60-second countdown. When the ticks hit 60 seconds, it stops and disposes - no problem (I think). This runs in the context of a Web API service. I need to be able to cancel the countdown from a UI, so I've exposed a method to handle that. Since the controller is transient (thanks Luaan) and, as Daniel points out, the app pool is not predictable, I need a way to give clients a "cancellable" countdown. Ideas anyone?
[HttpGet]
public IHttpActionResult CancelCountdown()
{
    // DOES NOTHING BECAUSE THERE'S A NEW INSTANCE OF THE CONTROLLER
    timer.Stop();
    timer.Dispose();
    return Ok();
}

private void StartCountdown()
{
    // MAY BE A BAD SOLUTION BECAUSE THE APP POOL MAY RECYCLE
    timer.Interval = _timeIntervalInMilliseconds;
    timer.Elapsed += BroadcastToClients;
    timer.Start();
}

private void BroadcastToClients(object sender, EventArgs e)
{
    _elapsed += 1;
    if (_elapsed == _duration) // _duration is 60
    {
        timer.Stop();
        timer.Dispose();
        return;
    }
    _messageHub.Clients.All.shutdown(_elapsed);
}
It's kind of hard to provide an adequate solution without knowing what you're trying to accomplish, but I'll give it a shot.
As Luaan pointed out, controllers are designed to be essentially stateless, so you shouldn't put instance variables on them other than their external dependencies, since each request creates a new instance of the controller class.
You could store the timers in a static dictionary indexed by a GUID, return the GUID from your controller, and use it as the cancellation token.
Something like:
private static Dictionary<Guid, Timer> timers = new Dictionary<Guid, Timer>();

public Guid StartCountdown()
{
    // MAY BE A BAD SOLUTION BECAUSE THE APP POOL MAY RECYCLE
    var timer = new Timer();
    timer.Interval = _timeIntervalInMilliseconds;
    timer.Elapsed += BroadcastToClients;
    var guid = Guid.NewGuid();
    timers.Add(guid, timer);
    timer.Start();
    return guid;
}

public IHttpActionResult CancelCountdown(Guid cancellationToken)
{
    // If the timer no longer exists or the user supplied a wrong token
    if (!timers.ContainsKey(cancellationToken))
        return BadRequest();
    var timer = timers[cancellationToken];
    timer.Stop();
    timer.Dispose();
    timers.Remove(cancellationToken);
    return Ok();
}
However this won't solve the problem of the app pool recycling. For a more robust solution, instead of using a timer, you could store the start date and time of each countdown in more permanent storage (say a SQL database, a NoSQL database, a Redis server, or whatever), and have a running thread, a global timer, or something like Hangfire, initialized on startup, constantly check your countdown storage. If enough time has passed to send a broadcast message, you send it and mark the countdown as finished. If a user wants to cancel a countdown, the controller simply marks the appropriate record as cancelled and your running thread ignores it from then on (a sketch of the in-memory side follows the considerations below).
If you go with this approach, you'll need to take some considerations into account:
If the check interval is too short, you could create a performance bottleneck by hitting permanent storage too often; if it is too long, the countdown won't be very precise.
To alleviate this, keep the countdowns in memory in a static variable for quick access, and write their start times to permanent storage only so they can be restored if the app pool recycles.
Please note that if you're working with a server farm instead of a single server, static variables won't be shared across instances.
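A minimal sketch of the in-memory side of that design (the type and member names are illustrative; a real implementation would also write each countdown to the permanent storage and reload the dictionary on startup):

using System;
using System.Collections.Concurrent;
using System.Threading;

public static class CountdownStore
{
    public class Countdown
    {
        public DateTime StartedUtc;
        public TimeSpan Duration;
        public bool Cancelled;
    }

    private static readonly ConcurrentDictionary<Guid, Countdown> _countdowns =
        new ConcurrentDictionary<Guid, Countdown>();

    // One global timer polls every countdown instead of one timer per countdown.
    private static readonly Timer _checker =
        new Timer(_ => CheckAll(), null, 1000, 1000);

    public static Guid Start(TimeSpan duration)
    {
        var id = Guid.NewGuid();
        _countdowns[id] = new Countdown { StartedUtc = DateTime.UtcNow, Duration = duration };
        return id; // the caller hands this out as the cancellation token
    }

    public static void Cancel(Guid id)
    {
        Countdown c;
        if (_countdowns.TryGetValue(id, out c))
            c.Cancelled = true;
    }

    private static void CheckAll()
    {
        foreach (var pair in _countdowns)
        {
            var c = pair.Value;
            if (c.Cancelled || DateTime.UtcNow - c.StartedUtc >= c.Duration)
            {
                Countdown removed;
                _countdowns.TryRemove(pair.Key, out removed);
                if (!c.Cancelled)
                {
                    // expired normally: broadcast to clients here (e.g. via the SignalR hub)
                }
            }
        }
    }
}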
I have a long-running action/method that is called when a user clicks a button in an internal MVC5 application. The button is shared by all users, meaning a second person can come along and click it seconds after it has been clicked. The long-running task updates a shared task window for all clients via SignalR.
Is there a recommended way to check whether the task is still busy and simply notify the user that it's still working? Is there another recommended approach? (I can't use an external Windows service for the work.)
Currently what I am doing seems like a bad idea, or I could be wrong and it's feasible. See below for a sample of what I am doing.
public static Task WorkerTask { get; set; }

public JsonResult SendData()
{
    if (WorkerTask == null)
    {
        WorkerTask = Task.Factory.StartNew(async () =>
        {
            // Do the 2-15 minute long running job
        });
        WorkerTask = null;
    }
    else
    {
        TempData["Message"] = "Data is already being exported. Please see task window for the status.";
    }
    return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
}
I don't think what you're doing will work at all. I see three issues:
You are storing the WorkerTask on the controller (I think). A new controller is created for every request. Therefore, a new WorkerTask will always be created.
If #1 weren't true, you would still need to wrap the instantiation of WorkerTask in a lock because multiple clients could reach the WorkerTask == null check at the same time.
You shouldn't have long running tasks in your web application. The app pool could restart at any time killing your WorkerTask.
If you want to skip the best-practices advice of "don't do long-running work in your web app", you could use HostingEnvironment.QueueBackgroundWorkItem, introduced in .NET 4.5.2, to kick off the long-running task, and store a flag in the HttpApplication cache to indicate whether the long-running process has been kicked off.
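A minimal sketch of that combination (the cache key, lock object, and DoLongRunningExportAsync worker are illustrative):

using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.Hosting;
using System.Web.Mvc;

public class ExportController : Controller
{
    private static readonly object ExportLock = new object();
    private const string ExportRunningKey = "ExportRunning";

    public JsonResult SendData()
    {
        // The lock closes the race between the "is it running?" check
        // and the start of the work item (issue #2 above).
        lock (ExportLock)
        {
            if (HttpRuntime.Cache[ExportRunningKey] == null)
            {
                HttpRuntime.Cache.Insert(ExportRunningKey, true);
                // Registers the work with ASP.NET, which delays shutdown briefly
                // and signals the token when the app domain is going down.
                HostingEnvironment.QueueBackgroundWorkItem(async ct =>
                {
                    try { await DoLongRunningExportAsync(ct); }
                    finally { HttpRuntime.Cache.Remove(ExportRunningKey); }
                });
            }
            else
            {
                TempData["Message"] = "Data is already being exported. Please see task window for the status.";
            }
        }
        return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
    }

    private static Task DoLongRunningExportAsync(CancellationToken ct)
    {
        return Task.Delay(1000, ct); // placeholder for the 2-15 minute job
    }
}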
This solution still has more than a few issues (it won't work in a web farm, the app pool could die, etc.). A more robust solution would be to use something like Quartz.NET or Hangfire.
For ease of understanding I will take a simple abstraction of a DHCP lease as an example: the lease contains the IP and MAC address and the time it was granted, and it can be renewed with a given time span. Once it expires, an event is invoked. Again, this is just the most minimal example I could come up with:
using System;
using System.Net;
using System.Net.NetworkInformation;
using System.Timers;

namespace Example
{
    public class Lease
    {
        public IPAddress IP { get; private set; }
        public PhysicalAddress MAC { get; private set; }
        public DateTime Granted { get; private set; }

        public event EventHandler Expired;

        private readonly Timer timer;

        public Lease(IPAddress ip, PhysicalAddress mac, TimeSpan available)
        {
            IP = ip;
            MAC = mac;
            timer = new Timer();
            timer.AutoReset = false;
            timer.Elapsed += timerElapsed;
            Renew(available);
        }

        public void timerElapsed(object sender, EventArgs e)
        {
            var handle = Expired;
            if (handle != null)
            {
                handle(this, EventArgs.Empty);
            }
        }

        public void Renew(TimeSpan available)
        {
            Granted = DateTime.Now;
            timer.Interval = available.TotalMilliseconds;
            timer.Enabled = true;
        }
    }
}
Is there anything to consider when creating, for example, "a few thousand" instances of such a class? I am mostly concerned about the timers. Should I consider another design pattern for this task (like a manager for all the leases, or not using timers at all?), or is there nothing to worry about when creating a lot of timers, and this is the appropriate way? At least I always try to be cautious when it comes to timers and events.
Rather than creating thousands of timers, you could just store the expiration time of each Lease object, then query for the expired ones periodically from a single thread.
An off-the-top-of-my-head code example:

var leases = new List<Lease>();
var running = true;
var expiredChecker = Task.Factory.StartNew(() =>
{
    while (running)
    {
        var expired = leases.Where(l => l.ExpirationDate < DateTime.Now).ToList();
        // do something with the expired lease objects
        Thread.Sleep(1000); // check once a second instead of spinning
    }
});

Assuming you have an IEnumerable<Lease> and a DateTime property called ExpirationDate on your Lease object, you can then stop the checker by setting running to false.
I suppose this depends partly on what resources are available on your server and what kind of accuracy and performance you need.
An alternative approach might be to store something as simple as a timestamp in each instance, check that value regularly, compare it to the current time, and update it appropriately. I have a hunch this might be easier on performance, but you should benchmark it to be sure.
Of course, if you have a large number of instances, iterating over all of them might also take some time, so pooling them into groups, where each group is handled on a separate thread at regular (adjustable?) intervals, might be an option.
It's a bit hard to give a great answer without more info about your performance requirements, so you should probably just create a proof of concept, test a couple of strategies you think might work, and benchmark them to see which fits best.
According to the System.Timers.Timer MSDN page:
The server-based Timer is designed for use with worker threads in a multithreaded environment. Server timers can move among threads to handle the raised Elapsed event, resulting in more accuracy than Windows timers in raising the event on time.
Which means it is not very likely to cause issues when you are running a couple thousand timers at the same time.
That doesn't mean it is a good approach, though; you should probably look for a more centralized solution to this problem.
I recommend using a System.Threading.Timer instead of System.Timers.Timer. The latter is a wrapper around the former that adds design-time support, which is unnecessary if you don't need it. Internally the timer calls ThreadPool.QueueUserWorkItem, and the thread pool is responsible for running the callbacks on timer ticks; a single thread maintains all the timer objects and decides when each timer's tick should queue a new work item.
So I can't see any real overhead unless your timers tick so quickly that you cannot finish the per-tick work in time and you simply queue too much work in the thread pool.
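For comparison, a minimal sketch of the Lease timing rewritten directly on System.Threading.Timer (the class name is illustrative):

using System;
using System.Threading;

public class LeaseTimer
{
    private readonly Timer timer;

    public event EventHandler Expired;

    public LeaseTimer(TimeSpan available)
    {
        // One-shot timer: fires once after 'available' and never repeats.
        timer = new Timer(OnTimer, null, available, Timeout.InfiniteTimeSpan);
    }

    public void Renew(TimeSpan available)
    {
        // Push the due time forward; the infinite period keeps it one-shot.
        timer.Change(available, Timeout.InfiniteTimeSpan);
    }

    private void OnTimer(object state)
    {
        var handler = Expired;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}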
I'm working on a Windows Forms application and fighting a very harsh error. The application is supposed to run on a local machine and handle requests from a server application. The client application looks like this:
public Reader mr_obj;

public Form1()
{
    mr_obj = new MyReader.Reader(7137);
    mr_obj.UserEvent += new ReaderEvent(UserEvent);
}

private void UserEvent(UserEvent e, long threadID)
{
    Thread.Sleep(1000);
    SafeSomethingToDB();
}
The Reader object connects the client application to the server application, so after this the server application is able to trigger the UserEvent() method in the client application. The problem is that the client application, which handles the UserEvents, crashes if the UserEvent() method gets triggered twice within one second.
(It's actually not crashing, just hanging until you kill the task; a try/catch won't return an error.)
What I've tried so far is delegating the Thread.Sleep() and SafeSomethingToDB() calls to another thread. This doesn't work because the server application does not wait until that thread is finished, so it does not find the data in the DB, because it isn't waiting the one second...
The same problem happens when I do it with background workers.
Is there a possibility to handle these two triggers, which come from the same server application, in a parallel way at the same time?
Any suggestions very much appreciated.
EDIT: I think locking the method does not make the application process both triggers at the same time. To make this visible I've tried this:
private void UserEventHandler(UserEvent e, long threadID)
{
    lock (_lockObject)
    {
        MessageBox.Show("Messagebox 1");
        MessageBox.Show("Messagebox 2");
    }
}
When the first request triggers UserEvent(), "Messagebox 1" appears. If you press OK, "Messagebox 2" appears. But if UserEvent() gets triggered a second time while "Messagebox 2" is still open, "Messagebox 1" does not appear again; instead the application starts hanging. Shouldn't "Messagebox 1" appear again, triggered by the second call to UserEvent(), if the two triggers really were being processed at the same time? So the two triggers are not being performed in parallel, or am I mistaken here?
Without knowing why you do the Sleep, what exactly SafeSomethingToDB does, or what causes your problems, try synchronizing the calls:
private readonly object _lockObject = new object();

private void UserEvent(UserEvent e, long threadID)
{
    lock (_lockObject)
    {
        Thread.Sleep(1000);
        SafeSomethingToDB();
    }
}
I think a simple lock for synchronization will work for you; try this:
public Reader mr_obj;
private static readonly object sync = new object();

public Form1()
{
    mr_obj = new MyReader.Reader(7137);
    mr_obj.UserEvent += new ReaderEvent(UserEvent);
}

private void UserEvent(UserEvent e, long threadID)
{
    lock (sync)
    {
        SafeSomethingToDB();
    }
}
As you write in the comments, if SafeSomethingToDB() is called a second time before the first call has finished, it crashes. In other words: SafeSomethingToDB() is not re-entrant.
What you can do is use a Mutex (which stands for "mutual exclusion") to define a "critical section" in your code, meaning code that only one thread can be executing at any one time.
For instance:
private static Mutex mutex = new Mutex();

public void SafeSomethingToDB()
{
    mutex.WaitOne(); // wait until it is safe to enter the critical section
    try
    {
        // critical section begins here
        DoWorkAndStuff();
    }
    finally
    {
        mutex.ReleaseMutex(); // release even if the work throws
    }
}
For more about System.Threading.Mutex, see http://msdn.microsoft.com/en-us/library/system.threading.mutex(v=vs.110).aspx.
I'm fairly new to C#, and recently built a small web app using .NET 4.0. The app has two parts: one is designed to run permanently and continuously fetch data from given resources on the web; the other accesses that data upon request to analyze it. I'm struggling with the first part.
My initial approach was to set up a Timer object that would execute a fetch operation (whatever that operation is doesn't really matter here) every, say, 5 minutes. I would define that timer in Application_Start and let it live after that.
However, I recently realized that applications are created and destroyed based on user requests (from my observation they seem to be destroyed after some period of inactivity). As a consequence, my background activity will stop and resume out of my control, where I would like it to run continuously, with absolutely no interruption.
So here comes my question: is that achievable in a web app, or do I absolutely need a separate Windows service for that kind of thing?
Thanks in advance for your precious help!
Guillaume
While doing this in a web app is not ideal, it is achievable, given that the site is always up.
Here's a sample: I'm creating a Cache item in global.asax with an expiration time. When it expires, an event is fired; you can fetch your data or whatever in the OnRemove() callback.
Then a call to a page (preferably a very small one) triggers code in Application_BeginRequest that re-adds the Cache item with an expiration time.
global.asax:
private const string VendorNotificationCacheKey = "VendorNotification";
private const int IntervalInMinutes = 60; // expires after X minutes & runs tasks

protected void Application_Start(object sender, EventArgs e)
{
    // Set value in cache with expiration time
    CacheItemRemovedCallback callback = OnRemove;
    Context.Cache.Add(VendorNotificationCacheKey, DateTime.Now, null,
        DateTime.Now.AddMinutes(IntervalInMinutes), TimeSpan.Zero,
        CacheItemPriority.Normal, callback);
}

private void OnRemove(string key, object value, CacheItemRemovedReason reason)
{
    SendVendorNotification();
    // Need access to HttpContext so the cache item can be re-added, so let's call a page.
    // Application_BeginRequest will re-add the cache item.
    var siteUrl = ConfigurationManager.AppSettings.Get("SiteUrl");
    var client = new WebClient();
    client.DownloadData(siteUrl + "default.aspx");
    client.Dispose();
}

private void SendVendorNotification()
{
    // Do tasks here
}

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Re-add if it doesn't exist
    if (HttpContext.Current.Request.Url.ToString().ToLower().Contains("default.aspx") &&
        HttpContext.Current.Cache[VendorNotificationCacheKey] == null)
    {
        CacheItemRemovedCallback callback = OnRemove;
        Context.Cache.Add(VendorNotificationCacheKey, DateTime.Now, null,
            DateTime.Now.AddMinutes(IntervalInMinutes), TimeSpan.Zero,
            CacheItemPriority.Normal, callback);
    }
}
This works well if your scheduled task is quick.
If it's a long-running process, you definitely need to keep it out of your web app.
As long as the first request has started the application, this will keep firing every 60 minutes, even if the site has no visitors.
I suggest putting it in a Windows service. You avoid all the hoops mentioned above, the big one being IIS restarts. A Windows service also has the following benefits (a sketch of such a service follows this list):
It can start automatically when the server starts. If you are running in IIS and your server reboots, you have to wait until a request is made to start your process.
You can place the data-fetching process on another machine if needed.
If you end up load-balancing your website across multiple servers, you won't accidentally have multiple data-fetching processes causing problems.
It's easier to maintain the code separately (single responsibility principle): the service just does what it needs to do, without also trying to fool IIS.
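A minimal sketch of such a service (the names and the FetchData body are illustrative; it would be installed with sc.exe or an installer project):

using System;
using System.ServiceProcess;
using System.Timers;

public class DataFetcherService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(5 * 60 * 1000); // every 5 minutes
        _timer.Elapsed += (s, e) => FetchData();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void FetchData()
    {
        // fetch from the web resources and persist the results
        // somewhere the web app can read them (DB, files, etc.)
    }

    public static void Main()
    {
        ServiceBase.Run(new DataFetcherService());
    }
}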
Alternatively, you can create a static class whose static constructor sets up a timer event. However, as Steve Sloka mentioned, IIS has timeouts that you will have to manipulate to keep the site going:
using System.Runtime.Remoting.Messaging;

public static class Variables
{
    static Variables()
    {
        m_wClass = new WorkerClass();

        // creates and registers an event timer
        m_flushTimer = new System.Timers.Timer(1000);
        m_flushTimer.Elapsed += new System.Timers.ElapsedEventHandler(OnFlushTimer);
        m_flushTimer.Start();
    }

    private static void OnFlushTimer(object o, System.Timers.ElapsedEventArgs args)
    {
        // determine the frequency of your update
        if (System.DateTime.Now - m_timer1LastUpdateTime > new System.TimeSpan(0, 1, 0))
        {
            // call your class to do the update
            m_wClass.DoMyThing();
            m_timer1LastUpdateTime = System.DateTime.Now;
        }
    }

    private static readonly System.Timers.Timer m_flushTimer;
    private static System.DateTime m_timer1LastUpdateTime = System.DateTime.MinValue;
    private static readonly WorkerClass m_wClass;
}

public class WorkerClass
{
    public delegate WorkerClass MyDelegate();

    public void DoMyThing()
    {
        m_test = "Hi";
        m_test2 = "Bye";

        // create async call to do the work
        MyDelegate myDel = new MyDelegate(Execute);
        AsyncCallback cb = new AsyncCallback(CommandCallBack);
        IAsyncResult ar = myDel.BeginInvoke(cb, null);
    }

    private WorkerClass Execute()
    {
        // do my stuff in an async call
        m_test2 = "Later";
        return this;
    }

    public void CommandCallBack(IAsyncResult ar)
    {
        // this is called when your task is complete
        AsyncResult asyncResult = (AsyncResult)ar;
        MyDelegate myDel = (MyDelegate)asyncResult.AsyncDelegate;
        WorkerClass command = myDel.EndInvoke(ar);
        // command is a reference to the original class that invoked the async call
        // m_test will equal "Hi"
        // m_test2 will equal "Later"
    }

    private string m_test;
    private string m_test2;
}
I think you could achieve it with a BackgroundWorker, but I would rather suggest going with a service.
Your application context lives as long as your worker process in IIS is running. IIS has default timeouts that control when the worker process recycles, e.g. the number of idle minutes (20) or a regular restart interval (1740 minutes).
That said, if you adjust those settings in IIS (see the snippet below), you should be able to keep the application alive; the other answers suggesting a service would work as well. It's just a matter of how you want to implement it.
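For example, both recycling triggers can be switched off in applicationHost.config (a sketch; the pool name is illustrative, and a value of 00:00:00 disables the respective timeout):

<applicationPools>
  <add name="MyAppPool">
    <processModel idleTimeout="00:00:00" />
    <recycling>
      <periodicRestart time="00:00:00" />
    </recycling>
  </add>
</applicationPools>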
I recently built file-upload functionality for loading Access files into the database (not the best way, but just a temporary fix to a long-term issue).
I solved it by creating a background thread that ran through the ProcessAccess function and was deleted when it completed.
Unless IIS has a setting that kills a thread after a set amount of time regardless of activity, you should be able to create a thread that calls a function that never ends. Don't use recursion, because the number of open function calls will eventually blow up in your face; just use an endless for (;;) loop so it keeps busy :)
The Application Initialization Module for IIS 7.5 does precisely this type of init work. More details on the module are available here: Application Initialization Module.
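As a sketch, the module is driven by configuration like the following (the pool name and warm-up URL are illustrative; the site's application also needs preloadEnabled="true" in applicationHost.config):

<!-- web.config: request a warm-up URL whenever the app (re)starts -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/warmup" />
  </applicationInitialization>
</system.webServer>

<!-- applicationHost.config: start the pool without waiting for a request -->
<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>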