I've got a Timer that's doing a 60-second countdown. When the ticks hit 60 seconds, it stops and disposes; no problem (I think). This is run in the context of a WebApi service. I need to be able to cancel the countdown from a UI, so I've exposed a method to handle this. Since the controller is transient (thanks Luaan) and, as Daniel points out, the app pool is not predictable, I need a way to send a "cancellable" countdown to clients. Ideas anyone?
[HttpGet]
public IHttpActionResult CancelCountdown()
{
    // DOES NOTHING BECAUSE THERE'S A NEW INSTANCE OF THE CONTROLLER
    timer.Stop();
    timer.Dispose();
    return Ok();
}

private void StartCountdown()
{
    // MAY BE A BAD SOLUTION BECAUSE THE APP POOL MAY RECYCLE
    timer.Interval = _timeIntervalInMilliseconds;
    timer.Elapsed += BroadcastToClients;
    timer.Start();
}

private void BroadcastToClients(object sender, EventArgs e)
{
    _elapsed += 1;
    if (_elapsed == _duration) // _duration is 60
    {
        timer.Stop();
        timer.Dispose();
        return;
    }
    _messageHub.Clients.All.shutdown(_elapsed);
}
It's kind of hard to provide an adequate solution without knowing what you're trying to accomplish with this, but I'll give it a shot.
As Luaan pointed out, controllers are designed to be essentially stateless, so you shouldn't put instance variables on them except for their external dependencies, since each request creates a new instance of the controller class.
You could store the timer in a static dictionary, indexed by a GUID, return the GUID from your controller, and use it as the cancellation token.
Something like:
private static readonly Dictionary<Guid, Timer> timers = new Dictionary<Guid, Timer>();
// note: a plain Dictionary isn't thread-safe; consider ConcurrentDictionary for concurrent requests

public Guid StartCountdown()
{
    // MAY BE A BAD SOLUTION BECAUSE THE APP POOL MAY RECYCLE
    var timer = new Timer();
    timer.Interval = _timeIntervalInMilliseconds;
    timer.Elapsed += BroadcastToClients;
    var guid = Guid.NewGuid();
    timers.Add(guid, timer);
    timer.Start();
    return guid;
}

public IHttpActionResult CancelCountdown(Guid cancellationToken)
{
    // The timer no longer exists or the user supplied a wrong token
    if (!timers.ContainsKey(cancellationToken)) return NotFound();
    var timer = timers[cancellationToken];
    timer.Stop();
    timer.Dispose();
    timers.Remove(cancellationToken);
    return Ok();
}
However, this won't solve the problem of the app pool recycling. For a more robust solution, instead of using a timer, you could store the start date and time of each countdown in more permanent storage (say an SQL database, a NoSQL database, a Redis server, or whatever), and have a running thread, a global timer, or something like Hangfire, initialized on startup, that constantly checks your countdown storage. If enough time has passed to send a broadcast message, you send it and mark the countdown as finished. If a user wants to cancel the countdown, the controller simply reads the appropriate record and marks it as cancelled, and your running thread can ignore it.
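For illustration, here is a minimal sketch of that approach, assuming a hypothetical ICountdownStore interface backed by whatever permanent storage you pick:

using System;
using System.Collections.Generic;
using System.Timers;

// Hypothetical storage contract; the implementation could be SQL, Redis, etc.
public interface ICountdownStore
{
    IEnumerable<Countdown> GetActive();
    void MarkFinished(Guid id);
}

public class Countdown
{
    public Guid Id { get; set; }
    public DateTime StartedUtc { get; set; }
    public TimeSpan Duration { get; set; }
    public bool Cancelled { get; set; }
}

// One global timer, started once at application startup, that sweeps the store.
public class CountdownSweeper
{
    private readonly ICountdownStore _store;
    private readonly Timer _timer = new Timer(1000); // sweep once a second

    public CountdownSweeper(ICountdownStore store)
    {
        _store = store;
        _timer.Elapsed += (s, e) =>
        {
            foreach (var c in _store.GetActive())
            {
                if (c.Cancelled) continue; // a cancelled countdown is simply ignored
                if (DateTime.UtcNow - c.StartedUtc >= c.Duration)
                    _store.MarkFinished(c.Id); // broadcast to clients here as well
            }
        };
        _timer.Start();
    }
}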
If you go with this approach, you'll need to take into account some considerations:
If the timer interval is set too short, you could have a performance bottleneck from accessing permanent storage too often. If the interval is too long, the countdown won't be very precise.
To alleviate this problem, you could keep the countdowns' start times in permanent storage, in case the app pool recycles and you need to restore them, and also keep them in memory in a static variable for quicker access.
Please note that if you're working with a server farm instead of a single server, static variables won't be shared across instances.
Edit: If useful, this project is on GitHub at https://github.com/lostchopstik/BetterBlync
I am building an application for the Blync status light using their provided API. This application polls the Lync/Skype for Business client and converts the status to the appropriate light color. All aspects thus far work as expected; however, when I leave this program running for an extended period of time, the memory usage grows until a System.OutOfMemoryException occurs.
I have narrowed the problem down to the DispatcherTimer holding the timer in memory and preventing it from being GCed. After reading some things online, I found you can manually call for garbage collection, but this is bad practice. Regardless, here is what I have in my code right now:
private void initTimer()
{
    timer = new DispatcherTimer();
    timer.Interval = new TimeSpan(0, 0, 0, 0, 200);
    timer.Tick += new EventHandler(Timer_Tick);
    timer.Start();
}

private void Timer_Tick(object sender, EventArgs e)
{
    // Check to see if any new lights are connected
    blync.FindBlyncLights();
    // Get current status from Lync client
    lync.GetStatus();
    // Change to new color
    setStatusLight();
    if (count++ == 100)
    {
        count = 0;
        GC.Collect();
    }
}
The timer ticks every 200ms. I commented out all methods inside the timer and just let it run empty, and it still burned memory.
I am wondering what the proper way to handle this timer is. I've used the DispatcherTimer in the past and not had this issue.
I would also be open to trying something besides the DispatcherTimer.
If it is also useful, I have been messing with MemProfiler and here is my current graph with manual GC:
http://imgur.com/Iut91mF
It's a little hard to tell without seeing the rest of the code or the class the timer belongs to. I don't see anywhere you call Stop() on the timer. Does it need to be stopped?
You could also keep a local reference to the timer in whatever class you're in and call Start() and Stop() as needed.
If the timer never needs to be stopped and runs indefinitely, I would certainly look at what you're allocating as the timer runs and that's probably where your issue is.
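If it helps, here is a minimal sketch of keeping such a reference and tearing the timer down when it's no longer needed (assuming a WPF window or similar class owns it; the names are illustrative):

using System;
using System.Windows.Threading;

public class StatusPoller
{
    private DispatcherTimer timer; // keep the reference for the lifetime of the owner

    public void Start()
    {
        timer = new DispatcherTimer();
        timer.Interval = TimeSpan.FromMilliseconds(200);
        timer.Tick += Timer_Tick;
        timer.Start();
    }

    public void Stop()
    {
        // Stop and unsubscribe so the Dispatcher no longer roots the handler
        timer.Stop();
        timer.Tick -= Timer_Tick;
    }

    private void Timer_Tick(object sender, EventArgs e)
    {
        // poll the Lync client and update the light here
    }
}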
I have several Machine classes which have state for whether they are online/offline, and a DateTime EndsAt for when they will turn offline if they are online. They are mapped to the database using EF. When I turn them on I pass the amount of seconds for them to stay online and create a System.Threading.Timer to change the state back to offline when the time comes (EndsAt == DateTime.Now). Turning them on works fine; however, they don't turn off: turnOff() is never called. And on top of that, if it were called and the object changed its own variables, would they be saved by Entity Framework?
public class Machine
{
    private Timer timer = null;

    [Key]
    public int MachineId { get; set; }
    public bool Online { get; set; }
    public bool Occuppied { get; set; }
    public DateTime EndsAt { get; set; }

    public void TurnOn(TimeSpan amount)
    {
        Debug.WriteLine("Turn on reached");
        if (!Online)
        {
            EndsAt = DateTime.Today.Add(amount);
            Online = true;
            setTimer();
        }
    }

    private void turnOff(object state)
    {
        Online = false;
        Occuppied = false;
        Debug.WriteLine("Timer ended!");
    }

    private void setTimer()
    {
        Debug.WriteLine("Timer being set");
        if (EndsAt.CompareTo(DateTime.Now) == 1)
        {
            timer = new Timer(new TimerCallback(turnOff));
            int msUntilTime = (int)((EndsAt - DateTime.Now).TotalMilliseconds);
            timer.Change(msUntilTime, Timeout.Infinite);
        }
        else
        {
            Debug.WriteLine("EndsAt is smaller than current date");
        }
    }
}
Controller method where TurnOn() is called:
[HttpPost]
public ActionResult TurnOn()
{
    bool isChanged = false;
    if (Request["machineId"] != null && Request["amount"] != null)
    {
        byte machineId = Convert.ToByte(Request["machineId"].ToString());
        int amount = Convert.ToInt32(Request["amount"].ToString());
        foreach (var machine in db.Machines.ToList())
        {
            if (machine.MachineId == machineId)
            {
                machine.TurnOn(TimeSpan.FromSeconds(amount));
                db.Entry(machine).State = EntityState.Modified;
                db.SaveChanges();
                isChanged = true;
            }
        }
    }
    if (isChanged)
        return new HttpStatusCodeResult(HttpStatusCode.OK);
    else
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
The problem comes not from Entity Framework but from ASP.NET.
The best way I can describe it is: imagine your page request in ASP.NET is a console application. Every new request, the application starts up, handles the request, responds to the user, waits a tiny bit for another request to come in, then exits the Main() function.
If you created a Timer in that kind of application, once the "tiny bit" runs out and Main() returns, your timer will not be running anymore, and the thing you were waiting for will never happen. IIS does this exact process, but it does it with AppDomain recycling: if no requests come in, it will shut down the AppDomain and kill your timer.
There are two ways I know of to handle this problem:
The first way is to make a second application that runs as a Windows service outside of IIS and is always running; it is what holds the timer. When you want to run any kind of long-running operation that will outlive a page request, you use WCF or some other technology for your web app to communicate with the service and start the timer; when the timer is done, the service executes whatever operation you wanted done.
The second way is to save the timer request in a database, then, in the background before every request, check the database of events and see if any need to be executed. There are libraries like Hangfire that make this process easy; they also have tricks to keep the AppDomain alive longer or wake it back up if it shuts down (often they use two websites that talk to each other, each keeping the other one alive).
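For example, here is a hedged sketch of the second approach using Hangfire's delayed-job API; MachineJobs and MachineContext are hypothetical names, not part of the question's code:

using System;
using System.Net;
using System.Web.Mvc;
using Hangfire;

public class MachineController : Controller
{
    [HttpPost]
    public ActionResult TurnOn(int machineId, int amount)
    {
        // The job is persisted (e.g. in SQL Server), so it survives an AppDomain recycle:
        BackgroundJob.Schedule(() => MachineJobs.TurnOff(machineId),
                               TimeSpan.FromSeconds(amount));
        return new HttpStatusCodeResult(HttpStatusCode.OK);
    }
}

public static class MachineJobs
{
    // Runs on a Hangfire worker; re-loads the entity so the change is actually saved.
    public static void TurnOff(int machineId)
    {
        using (var db = new MachineContext()) // hypothetical EF context
        {
            var machine = db.Machines.Find(machineId);
            machine.Online = false;
            db.SaveChanges();
        }
    }
}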
Even though this specific question has been answered, here's some related discussion that I hope is helpful when a timer callback isn't working.
Important considerations when using Threading.Timer
1.) Timer is subject to garbage collection. Even if active, it may be collected when nothing holds a reference to it (see the sketch after these lists).
2.) .NET has many different types of timers, and it's important to use the right kind in the right way, because threading is involved. Use Forms.Timer for WinForms; Threading.Timer, or Timers.Timer which wraps it (there's debate about thread safety); or Web.UI.Timer with ASP.NET for web page postbacks.
3.) The callback method is defined when the timer is instantiated and cannot be changed.
Timer-Related Tools
1.) You can use Thread.Sleep to release CPU resources and place your thread in a WaitSleepJoin state, which is essentially stopped.
2.) Sometimes a Task can be used along with or instead of a timer.
3.) Stopwatch can be used in different ways, for example, with an empty loop.
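To illustrate point 1, a minimal sketch of how a Threading.Timer with no live reference can stop firing after a collection (exact timing varies by build and runtime, so treat this as illustrative):

using System;
using System.Threading;

class TimerGcDemo
{
    static void Main()
    {
        // No field or local keeps this timer alive after construction,
        // so the GC is free to collect it, at which point the ticks stop.
        new Timer(_ => Console.WriteLine("tick"), null, 0, 500);

        Thread.Sleep(1000);
        GC.Collect(); // after this, the unreferenced timer may be finalized

        // Holding a reference (and keeping it reachable) prevents that:
        var kept = new Timer(_ => Console.WriteLine("kept tick"), null, 0, 500);
        Console.ReadLine();
        GC.KeepAlive(kept); // keeps 'kept' reachable until here
    }
}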
For good understanding I will take a simple abstraction of a DHCP lease as an example: the lease contains the IP and MAC address and the time it was granted, and it can be renewed with a given time span. Once it expires, an event will be invoked. Again, this is just serving as the most minimal example I could come up with:
using System;
using System.Net;
using System.Net.NetworkInformation;
using System.Timers;

namespace Example
{
    public class Lease
    {
        public IPAddress IP { get; private set; }
        public PhysicalAddress MAC { get; private set; }
        public DateTime Granted { get; private set; }

        public event EventHandler Expired;

        private readonly Timer timer;

        public Lease(IPAddress ip, PhysicalAddress mac, TimeSpan available)
        {
            IP = ip;
            MAC = mac;
            timer = new Timer();
            timer.AutoReset = false;
            timer.Elapsed += timerElapsed;
            Renew(available);
        }

        public void timerElapsed(object sender, EventArgs e)
        {
            var handle = Expired;
            if (handle != null)
            {
                handle(this, EventArgs.Empty);
            }
        }

        public void Renew(TimeSpan available)
        {
            Granted = DateTime.Now;
            timer.Interval = available.TotalMilliseconds;
            timer.Enabled = true;
        }
    }
}
Is there anything to consider when creating, for example, "a few thousand" instances of such a class? I am mostly concerned about the timers. Should I consider another design pattern for such a task (like a manager for all the leases, or not using timers at all?), or is there nothing to worry about when creating a lot of timers, and this is the appropriate way? At least I always try to be cautious when it comes to timers and events.
Rather than creating thousands of timers, you could just store the expiration time of each Lease object, then in a single thread query for the expired ones periodically.
An off-the-top-of-my-head code example:
var leases = new List<Lease>();
var running = true;
var expiredChecker = Task.Factory.StartNew(() =>
{
    while (running)
    {
        var expired = leases.Where(l => l.ExpirationDate < DateTime.Now);
        // do something with the expired lease objects
        Thread.Sleep(1000); // check once a second instead of spinning
    }
});
Assuming you have an IEnumerable<Lease> and a DateTime property called ExpirationDate on your Lease object, you can then cancel this by setting running to false when you want to stop.
I would suppose this depends partly on what resources you have available on your server, and what kind of accuracy and performance you need.
An alternative approach might be to store something as simple as a time stamp in each instance, and checking that value regularly, comparing it to current time, and updating it appropriately. I have a hunch that this might be easier on performance - but you should try to benchmark it somehow to be sure.
Of course, if you have a large number of instances, iterating over all of them might also take some time, so perhaps pooling these into groups, where each group is handled in a separate thread on regular (adjustable?) intervals might be an option.
It's a bit hard to give a great answer here without some info about performance, so you should probably just create a proof of concept, and test a couple of strategies that you think might work, and try to benchmark them to see which fits best.
According to the System.Timers.Timer MSDN page:
The server-based Timer is designed for use with worker threads in a multithreaded environment. Server timers can move among threads to handle the raised Elapsed event, resulting in more accuracy than Windows timers in raising the event on time.
Which means it is not very likely to cause issues when you are running a couple thousand timers at the same time.
That doesn't mean it is a good approach, you should probably be looking for a more centralized solution to this problem.
I recommend using a System.Threading.Timer instead of System.Timers.Timer. The second is a wrapper around the first that makes it usable at design time, which isn't necessary if you don't need design-time support. The timer internally calls ThreadPool.QueueUserWorkItem, so the thread pool is responsible for running the callback on each tick. The thread pool uses only one thread to maintain all the timer objects, and this thread decides when each timer queues new work on a tick.
So I can't see any overhead, unless your timers tick so quickly that you can't finish the per-tick work in time and you simply queue too much work in the thread pool.
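For reference, a minimal sketch of a one-shot System.Threading.Timer for a lease; the callback body is a placeholder:

using System;
using System.Threading;

class LeaseTimerSketch
{
    static void Main()
    {
        var timer = new Timer(
            _ => Console.WriteLine("lease expired"), // runs on a thread-pool thread
            null,                                    // no state object
            TimeSpan.FromSeconds(30),                // fire once, 30 seconds from now
            Timeout.InfiniteTimeSpan);               // no repetition

        Console.ReadLine();
        timer.Dispose(); // release the timer when done
    }
}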
I'm fairly new to C#, and recently built a small webapp using .NET 4.0. This app has 2 parts: one is designed to run permanently and will continuously fetch data from given resources on the web. The other one accesses that data upon request to analyze it. I'm struggling with the first part.
My initial approach was to set up a Timer object that would execute a fetch operation (whatever that operation is doesn't really matter here) every, say, 5 minutes. I would define that timer on Application_Start and let it live after that.
However, I recently realized that applications are created/destroyed based on user requests (from my observation they seem to be destroyed after some time of inactivity). As a consequence, my background activity will stop and resume out of my control, whereas I would like it to run continuously, with absolutely no interruption.
So here comes my question: is that achievable in a webapp? Or do I absolutely need a separate Windows service for that kind of things?
Thanks in advance for your precious help!
Guillaume
While doing this in a web app is not ideal, it is achievable, given that the site is always up.
Here's a sample: I'm creating a Cache item in global.asax with an expiration. When it expires, an event is fired. You can fetch your data or whatever in the OnRemove() event.
Then you can set up a call to a page (preferably a very small one) that will trigger code in Application_BeginRequest to add the Cache item back with an expiration.
global.asax:
private const string VendorNotificationCacheKey = "VendorNotification";
private const int IntervalInMinutes = 60; // Expires after X minutes & runs tasks

protected void Application_Start(object sender, EventArgs e)
{
    // Set value in cache with expiration time
    CacheItemRemovedCallback callback = OnRemove;
    Context.Cache.Add(VendorNotificationCacheKey, DateTime.Now, null,
        DateTime.Now.AddMinutes(IntervalInMinutes), TimeSpan.Zero,
        CacheItemPriority.Normal, callback);
}

private void OnRemove(string key, object value, CacheItemRemovedReason reason)
{
    SendVendorNotification();
    // Need access to HttpContext so the cache item can be re-added, so let's call a page.
    // Application_BeginRequest will re-add the cache item.
    var siteUrl = ConfigurationManager.AppSettings.Get("SiteUrl");
    var client = new WebClient();
    client.DownloadData(siteUrl + "default.aspx");
    client.Dispose();
}

private void SendVendorNotification()
{
    // Do tasks here
}

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Re-add if it doesn't exist
    if (HttpContext.Current.Request.Url.ToString().ToLower().Contains("default.aspx") &&
        HttpContext.Current.Cache[VendorNotificationCacheKey] == null)
    {
        // Re-add
        CacheItemRemovedCallback callback = OnRemove;
        Context.Cache.Add(VendorNotificationCacheKey, DateTime.Now, null,
            DateTime.Now.AddMinutes(IntervalInMinutes), TimeSpan.Zero,
            CacheItemPriority.Normal, callback);
    }
}
This works well if your scheduled task is quick.
If it's a long-running process, you definitely need to keep it out of your web app.
As long as the first request has started the application, this will keep firing every 60 minutes, even if the site has no visitors.
I suggest putting it in a Windows service. You avoid all the hoops mentioned above, the big one being IIS restarts. A Windows service also has the following benefits:
Can automatically start when the server starts. If you are running in IIS and your server reboots, you have to wait until a request is made to start your process.
Can place this data fetching process on another machine if needed
If you end up load-balancing your website on multiple servers, you could accidentally have multiple data fetching processes causing you problems
Easier to maintain the code separately (single responsibility principle). It's easier to maintain code that's just doing what it needs to do, and not also trying to fool IIS.
Create a static class with a static constructor that creates a timer event.
However, like Steve Sloka mentioned, IIS has a timeout that you will have to manipulate to keep the site going.
using System.Runtime.Remoting.Messaging;

public static class Variables
{
    static Variables()
    {
        m_wClass = new WorkerClass();

        // creates and registers an event timer
        m_flushTimer = new System.Timers.Timer(1000);
        m_flushTimer.Elapsed += new System.Timers.ElapsedEventHandler(OnFlushTimer);
        m_flushTimer.Start();
    }

    private static void OnFlushTimer(object o, System.Timers.ElapsedEventArgs args)
    {
        // determine the frequency of your update
        if (System.DateTime.Now - m_timer1LastUpdateTime > new System.TimeSpan(0, 1, 0))
        {
            // call your class to do the update
            m_wClass.DoMyThing();
            m_timer1LastUpdateTime = System.DateTime.Now;
        }
    }

    private static readonly System.Timers.Timer m_flushTimer;
    private static System.DateTime m_timer1LastUpdateTime = System.DateTime.MinValue;
    private static readonly WorkerClass m_wClass;
}

public class WorkerClass
{
    public delegate WorkerClass MyDelegate();

    public void DoMyThing()
    {
        m_test = "Hi";
        m_test2 = "Bye";

        // create async call to do the work
        MyDelegate myDel = new MyDelegate(Execute);
        AsyncCallback cb = new AsyncCallback(CommandCallBack);
        IAsyncResult ar = myDel.BeginInvoke(cb, null);
    }

    private WorkerClass Execute()
    {
        // do my stuff in an async call
        m_test2 = "Later";
        return this;
    }

    public void CommandCallBack(IAsyncResult ar)
    {
        // this is called when your task is complete
        AsyncResult asyncResult = (AsyncResult)ar;
        MyDelegate myDel = (MyDelegate)asyncResult.AsyncDelegate;
        WorkerClass command = myDel.EndInvoke(ar);

        // command is a reference to the original class that invoked the async call
        // m_test will equal "Hi"
        // m_test2 will equal "Later"
    }

    private string m_test;
    private string m_test2;
}
I think you can achieve it by using a BackgroundWorker, but I would rather suggest you go with a service.
Your application context lives only as long as your worker process in IIS is functioning. IIS has some default timeouts that control when the worker process recycles (e.g. an idle time-out of 20 minutes, or a regular recycle interval of 1,740 minutes).
That said, if you adjust those settings in IIS, you should be able to keep the process alive; the other answers suggesting a service would work as well. It's just a matter of how you want to implement it.
I recently made file-upload functionality for uploading Access files to the database (not the best way, but just a temporary fix to a long-term issue).
I solved it by creating a background thread that ran through the ProcessAccess function and was deleted when completed.
Unless IIS has a setting that kills a thread after a set amount of time regardless of activity, you should be able to create a thread that calls a function that never ends. Don't use recursion, because the number of open functions will eventually blow up in your face; just have a for (;;) loop run 5,000,000 times so it'll keep busy :)
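A hedged sketch of that kind of background polling thread; DoFetch is a hypothetical placeholder for the periodic work:

using System;
using System.Threading;

public static class BackgroundFetcher
{
    // Call once, e.g. from Application_Start; the thread dies with the worker process.
    public static void Start()
    {
        var worker = new Thread(() =>
        {
            for (;;)
            {
                DoFetch(); // hypothetical periodic fetch/processing routine
                Thread.Sleep(TimeSpan.FromMinutes(5)); // sleep instead of spinning
            }
        });
        worker.IsBackground = true; // don't block process shutdown
        worker.Start();
    }

    private static void DoFetch() { /* fetch data here */ }
}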
The Application Initialization Module for IIS 7.5 does precisely this type of init work. More details on the module are available here: Application Initialization Module
Needed:
A Windows Service That Executes Jobs from a Job Queue in a DB
Wanted:
Example Code, Guidance, or Best Practices for this type of Application
Background:
A user will click on an ashx link that will insert a row into the DB.
I need my windows service to periodically poll for rows in this table, and it should execute a unit of work for each row.
Emphasis:
This isn't completely new terrain for me.
EDIT: You can assume that I know how to create a Windows Service and basic data access.
But I need to write this service from scratch.
And I'd just like to know upfront what I need to consider.
EDIT: I'm most worried about jobs that fail, contention for jobs, and keeping the service running.
Given that you are dealing with a database queue, a fair cut of the job is already done for you due to the transactional nature of databases. A typical queue-driven application has a loop that does:
while (true) {
    start transaction;
    dequeue item from queue;
    process item;
    save new state of item;
    commit;
}
If processing crashes midway, the transaction rolls back and the item is processed on the next service start up.
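For concreteness, a hedged C# sketch of that loop using TransactionScope; the DequeueItem/ProcessItem/SaveItem helpers are hypothetical stubs, assumed to enlist in the ambient transaction:

using System;
using System.Transactions;

public class QueueProcessor
{
    private volatile bool running = true;

    public void Run()
    {
        while (running)
        {
            using (var scope = new TransactionScope())
            {
                var item = DequeueItem(); // reads and locks one queue row
                if (item != null)
                {
                    ProcessItem(item);    // the actual unit of work
                    SaveItem(item);       // persists the item's new state
                }
                scope.Complete();         // commit; a crash before this rolls back
            }
        }
    }

    // Hypothetical data-access helpers; real ones would hit the database.
    private object DequeueItem() { return null; }
    private void ProcessItem(object item) { }
    private void SaveItem(object item) { }
}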
But writing queues in a database is actually a lot trickier than you'd believe. If you deploy a naive approach, you'll find that your enqueue and dequeue block each other, and the ashx page becomes unresponsive. Next you'll discover that dequeue vs. dequeue deadlocks, and your loop is constantly hitting error 1205. I strongly urge you to read the article Using Tables as Queues.
Your next challenge is going to be getting the polling rate "just right". Too aggressive, and your database will be burning hot from the polling requests. Too lax, and your queue will grow at rush hour and drain too slowly. You should consider an entirely different approach: use a SQL Server built-in QUEUE object and rely on the magic of the WAITFOR(RECEIVE) semantics. This allows for completely poll-free, self-load-tuning service behavior. Actually, there is more: you don't need a service to start with. See Asynchronous Procedures Execution for an explanation of what I'm talking about: launching processing asynchronously in SQL Server from a web service call, in a completely reliable manner. And finally, if the logic must be in a C# process, you can leverage the External Activator, which allows the processing to be hosted in standalone processes as opposed to T-SQL procedures.
First you'll need to consider:
How often to poll
Whether your service just stops and starts, or supports pause and continue
Concurrency: services can increase the likelihood of encountering a problem
Implementation
Use a System.Timers.Timer, not a Threading.Timer
Make sure you set Timer.AutoReset to false. This will stop the reentrancy problem.
Make sure to account for execution time when setting the next interval
Here's the basic framework of all those ideas. It includes a way to debug the service, which is otherwise a pain.
public partial class Service : ServiceBase
{
    System.Timers.Timer timer;
    DateTime LastChecked;

    public Service()
    {
        timer = new System.Timers.Timer();
        // When AutoReset is true there are reentrancy problems
        timer.AutoReset = false;
        timer.Elapsed += new System.Timers.ElapsedEventHandler(DoStuff);
    }

    private void DoStuff(object sender, System.Timers.ElapsedEventArgs e)
    {
        var stuff = GetData(); // your data-access call returning the pending jobs
        LastChecked = DateTime.Now;
        foreach (var item in stuff)
        {
            try
            {
                item.DoSomething();
            }
            catch (System.Exception ex)
            {
                this.EventLog.Source = "SomeService";
                this.EventLog.WriteEntry(ex.ToString());
                this.Stop();
            }
        }

        // Account for the time spent processing when scheduling the next tick
        TimeSpan ts = DateTime.Now.Subtract(LastChecked);
        TimeSpan MaxWaitTime = TimeSpan.FromMinutes(5);
        if (MaxWaitTime.Subtract(ts).CompareTo(TimeSpan.Zero) > -1)
            timer.Interval = MaxWaitTime.Subtract(ts).TotalMilliseconds;
        else
            timer.Interval = 1;
        timer.Start();
    }

    protected override void OnPause()
    {
        base.OnPause();
        this.timer.Stop();
    }

    protected override void OnContinue()
    {
        base.OnContinue();
        this.timer.Interval = 1;
        this.timer.Start();
    }

    protected override void OnStop()
    {
        base.OnStop();
        this.timer.Stop();
    }

    protected override void OnStart(string[] args)
    {
        foreach (string arg in args)
        {
            if (arg == "DEBUG_SERVICE")
                DebugMode();
        }
#if DEBUG
        DebugMode();
#endif
        timer.Interval = 1;
        timer.Start();
    }

    private static void DebugMode()
    {
        Debugger.Break();
    }
}
EDIT Fixed loop in Start()
EDIT Turns out Milliseconds is not the same as TotalMilliseconds
You may want to have a look at Quartz.NET to manage scheduling the jobs. Not sure if it will fit your particular situation, but it's worth a look.
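If Quartz.NET fits, a hedged sketch of what the polling job could look like (API names as in Quartz.NET 2.x; PollQueueJob is a hypothetical job class):

using Quartz;
using Quartz.Impl;

// Hypothetical job that polls the DB queue each time it fires.
public class PollQueueJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // read pending rows and process them here
    }
}

public static class SchedulerSetup
{
    public static void Start()
    {
        var scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        var job = JobBuilder.Create<PollQueueJob>().Build();
        var trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInSeconds(30).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger); // fire PollQueueJob every 30 seconds
    }
}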
Some things I can think of, based on your edit:
Re: job failure:
Determine whether a job can be retried and do one of the following:
Move the row to an "error" table for logging / reporting later OR
Leave the row in the queue so that it will be reprocessed by the job service
You could add a column like WaitUntil or something similar to delay retrying the job after a failure
Re: contention:
Add a timestamp column such as "JobStarted" or "Locked" to track when the job was started. This will prevent other threads (assuming your service is multithreaded) from trying to execute the job simultaneously; a sketch follows this list.
You'll need some cleanup process that goes through and clears stale jobs for reprocessing (in the event the job service fails and your lock is never released).
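Here is a hedged sketch of claiming a job atomically with such a column, assuming a hypothetical dbo.Jobs table with an Id column and a nullable JobStarted column:

using System.Data.SqlClient;

public static class JobClaimer
{
    // Claims at most one unclaimed job and returns its Id (null when none are pending).
    // The single UPDATE is atomic, so two service threads can't claim the same row.
    public static object ClaimNextJob(string connectionString)
    {
        const string sql = @"
            UPDATE TOP (1) dbo.Jobs
            SET JobStarted = SYSUTCDATETIME()
            OUTPUT inserted.Id
            WHERE JobStarted IS NULL;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            return cmd.ExecuteScalar(); // null when there is nothing to claim
        }
    }
}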
Re: keeping the service running
You can tell Windows to restart a service if it fails.
You can detect a previous failure upon startup by keeping some kind of file open while the service is running and deleting it upon successful shutdown. If your service starts up and that file already exists, you know the service previously failed, and you can alert an operator or perform the necessary cleanup operations (see the sketch below).
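A hedged sketch of that sentinel-file technique; the file name and class are illustrative:

using System.IO;

public class CrashSentinel
{
    private const string SentinelPath = "service.running"; // arbitrary file name
    private FileStream handle;

    // Returns true when the previous run did not shut down cleanly.
    public bool StartupAndDetectCrash()
    {
        bool crashedLastTime = File.Exists(SentinelPath);
        // Keep the file open and locked for the lifetime of the service.
        handle = File.Open(SentinelPath, FileMode.Create, FileAccess.Write, FileShare.None);
        return crashedLastTime;
    }

    // Call on clean shutdown only; a crash leaves the file behind.
    public void CleanShutdown()
    {
        handle.Dispose();
        File.Delete(SentinelPath);
    }
}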
I'm really just poking around in the dark here. I'd strongly suggest prototyping the service and returning with any specific questions about the way it functions.