Heartbeat implementation with Thread.Sleep() - C#

In my application I have a "heartbeat" feature that is currently implemented in a long-running thread along the following lines (pseudocode):
while (shouldBeRunning)
{
    Thread.Sleep(smallInterval);
    if (DateTime.UtcNow - lastHeartbeat > heartbeatInterval)
    {
        sendHeartbeat();
        lastHeartbeat = DateTime.UtcNow;
    }
}
Now, when my application goes through a period of intensive CPU use (several minutes of heavy calculations during which the CPU is more than 90% busy), the heartbeats get delayed, even though smallInterval << heartbeatInterval.
To put some numbers on it: heartbeatInterval is 60 seconds, smallInterval is 0.1 seconds, and the reported delay can be up to 15 s. So, as I understand it, a Sleep(100) can end up lasting like a Sleep(15000) when the CPU is very busy.
I have already tried setting the thread priority to AboveNormal. How can I improve my design to avoid this problem?

Is there any reason you can't use a Timer for this? There are three sorts you can use and I usually go for System.Timers.Timer. The following article discusses the differences though:
http://msdn.microsoft.com/en-us/magazine/cc164015.aspx
Essentially timers will allow you to set up a timer with a periodic interval and fire an event whenever that period ticks past. You can then subscribe to the event with a delegate that calls sendHeartbeat().
Timers should serve you better, since they won't be affected by CPU load in the same way as your sleeping thread. This approach also has the advantage of being a bit neater in code terms (the timer setup is very simple and readable), and you won't have a spare thread lying around.

You seem to be trying to reinvent one of the timer classes.
How about using System.Timers.Timer for example?
var timer = new System.Timers.Timer(heartbeatInterval);  // raise Elapsed once per heartbeat period
timer.Elapsed += (s, a) => sendHeartbeat();
timer.Enabled = true;
One of the issues here may be, at a guess, how often your thread gets scheduled when the CPU is under load. Your timer implementation is inherently single-threaded and blocks. A move to one of the framework timers should alleviate this, as (taking the above timer as an example) the Elapsed event is raised on a thread pool thread, of which there are many.

Unfortunately, Windows is not a real-time OS, so there are few guarantees about when threads are executed. Thread.Sleep() only schedules the earliest time at which the thread should be woken up next; it is up to the OS to actually wake the thread when there is a free time slice. The exact criteria for waking a sleeping thread are probably not documented, so that the Windows kernel team can change the implementation as they see fit.
I'm not sure that Timer objects will solve this as the heartbeat thread still needs to be activated after the timer has expired.
One solution is to elevate the priority of the heartbeat thread so that it gets a chance of executing more often.
However, heartbeats are usually used to determine whether a sub-system has got stuck in an infinite loop, for example, so they generally run at low priority. When you have a CPU-intensive section, call Thread.Sleep(0) at key points to give lower-priority threads a chance to execute.
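As a rough sketch of that suggestion (not the original code; workItems and ProcessItem are placeholders standing in for the heavy calculation), the yield points might look like this:
// Inside the CPU-intensive section: periodically give up the rest of the
// current time slice so other threads, such as the heartbeat thread, can run.
int processed = 0;
foreach (var item in workItems)
{
    ProcessItem(item);            // the actual heavy work (placeholder)

    if (++processed % 500 == 0)   // every few hundred items...
        Thread.Sleep(0);          // ...yield the remainder of the time slice
}
On .NET 4 and later, Thread.Yield() is a similar alternative to Thread.Sleep(0).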

Related

Inconsistent intervals with System.Windows.Forms.Timer

Please be kind; I'm just learning C#, and this application, inherited from a former employee, is my first C# project.
I am observing inconsistent and slow periods with System.Windows.Forms.Timer. The application is written in C# with MS Visual Studio.
The timer is set for an interval of 100 msec yet I am observing periods ranging from 110 msec to 180 msec.
I am using several tools to observe this including:
- a SW oscilloscope (the Iocomp.Instrumentation.Plotting.Plot package),
- a real oscilloscope,
- letting the timer run for some time and comparing the number of ticks * 100 msec to both the system time and to a stopwatch.
In all cases I am observing a 10% lag that becomes evident within the first few seconds.
The methods that are executed with each tick take fewer than 4 msec to run. There is no time-consuming asynchronous processing happening, either. This shouldn't matter, though, as the timer tick is an interrupt, not an event added to an event handler queue (as far as I know).
Has anyone experienced a problem like this before? What were the root causes?
Thanks.
Timers are only as accurate as the operating system's clock interrupt, which ticks 64 times per second by default, i.e. every 15.625 msec. You cannot get a clean 100 msec interval from that; it isn't divisible by 15.625. You get the next integer multiple: 7 x 15.625 = 109.375 msec. Very close to the 110 msec you observed.
You need to add the latency of handling the timer notification to this theoretical minimum. Timers have to compete with everything else that's going on in your UI thread, and they are treated as the least important notification to be delivered: sent messages go first, user input next, painting next, and timer messages last. So if you have an elaborate user interface that takes a while to repaint, the Tick event is going to be delayed until that's done. The same goes for any event handler you write that does something non-trivial, like reading a file or querying a database.
To get a more responsive timer that doesn't suffer from this kind of latency, you need an asynchronous timer: System.Threading.Timer or System.Timers.Timer (avoid the latter). Their callback runs on a threadpool thread, so it can get running pretty quickly. Be very careful what you do in that callback, though; many things are off-limits because they are not thread-safe.
You can make these timers more accurate by changing the clock interrupt rate. That requires P/Invoke: call timeBeginPeriod(), then timeEndPeriod() when you're done.
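A minimal P/Invoke sketch of that (the HighResolutionClock wrapper and its Run method are illustrative names, not framework types):
using System;
using System.Runtime.InteropServices;

static class HighResolutionClock
{
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Run(Action work)
    {
        timeBeginPeriod(1);       // raise the clock interrupt rate to ~1 msec
        try
        {
            work();               // timers started here tick with finer granularity
        }
        finally
        {
            timeEndPeriod(1);     // always restore the default rate when done
        }
    }
}
Keep in mind this raises the interrupt rate machine-wide and increases power consumption, so scope it as tightly as possible.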
Yes, I have always faced this issue with System.Windows.Forms.Timer, as it doesn't tick accurately most of the time.
You can try System.Timers.Timer instead; it raises its Elapsed event much more precisely (at least at 100 ms precision).
System.Windows.Forms.Timer is really just a wrapper for the native WM_TIMER message. This means that the timer message is placed in the message queue at a time roughly close to the interval you requested (plus or minus; there's no guarantee here). When that message is processed is entirely dependent on the other messages in the queue and how long each takes to process. For example, if you block the UI thread (and thus block the queue from processing new messages), you won't get the timer event until after you unblock.
Windows is not a real-time operating system, so you can't expect fine-grained accuracy from timers. If you want something more fine-grained, a multimedia timer is the best choice.
This is old, but in case anyone comes here looking for an actually correct answer:
From https://msdn.microsoft.com/en-us/library/system.windows.forms.timer(v=vs.110).aspx (emphasis mine):
The Windows Forms Timer component is single-threaded, and is limited to an accuracy of 55 milliseconds. If you require a multithreaded timer with greater accuracy, use the Timer class in the System.Timers namespace.
So with Windows.Forms.Timer you can get 55 ms, 110 ms, 165 ms, and so on, which is consistent with what you were seeing. If you need higher precision, try System.Timers.Timer or System.Threading.Timer.
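For example, a minimal sketch with System.Threading.Timer at a 100 ms period (OnTick is a placeholder for whatever the handler should do):
using System;
using System.Threading;

class Sampler
{
    private Timer _timer;

    public void Start()
    {
        // The callback runs on a thread pool thread every 100 ms, independently
        // of the UI message queue, so repaints no longer delay it.
        _timer = new Timer(OnTick, null, 0, 100);
    }

    private void OnTick(object state)
    {
        // NOTE: this is not the UI thread; use Control.Invoke/BeginInvoke before
        // touching any Windows Forms controls from here.
    }

    public void Stop()
    {
        _timer.Dispose();
    }
}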

Understanding the strange behavior of multimedia timer

I am using multimedia timers in my application (C# .NET) to increase timer accuracy and achieve a 1 ms timer frequency. My application had been working well until recently, when it started behaving strangely. I am trying to understand what is wrong with it. Here is the situation:
- The timer frequency is set to 1 ms, so the callback should be called every 1 ms.
- There are 4 threads, each creating its own timer object set to call the callback every 1 ms. These are individual instances, not shared.
- The old code's callback took about 0.3 ms to execute. This worked fine until the next step.
- The application code was changed, and the timer callback now takes about 1.2 ms to execute. This is clearly a problem. (I am going to optimize the code later; for now I just want to understand the multimedia timer behavior.)
- Now only the 1st thread's callback keeps being called, whereas for the other threads the callback fires only two or three times and then never again.
It looks as though the timer event is missed for the other threads (?) and they cannot catch up (it is missed on every interrupt).
Could you please explain the behavior of the timer objects to me? Are all the threads actually pointing to the same timer object, since it is a single process?
Why are the other threads' callbacks not being called?
The maximum resolution of the multimedia timer is 1 ms. This causes the programmable interrupt controller (on the hardware) to fire every 1 ms. If you fire up 4 threads that all create timers with 1 ms timings, that does not mean you will get events more than once per millisecond.
I encourage you to read the "Why are the Multimedia Timer APIs (timeSetEvent) not as accurate as I would expect?" blog post on MSDN.
Some quotes that are applicable here (emphasis mine):
The MM Timer APIs allow the developer to reprogram the Programmable Interrupt Controller (PIC) on the machine. You can specify the new timer resolution. Typically, we will set this to 1 millisecond. This is the maximum resolution of the timer. We can't get sub-millisecond accuracy. The effect of this reprogramming of the PIC is to cause the OS to wake up more often. This increases the chances that our application will be notified by the operating system at the time we specified. I say "increases the chances" because we still can't guarantee that we will actually receive the notification even though the OS woke up when we told it to.
And:
Remember that the PIC is used to wake up the OS so that it can decide what thread should be run next. The OS uses some very complex rules to determine what thread gets to occupy the processor next. Two of the things that the OS looks at to determine if it should run a thread or not are thread priority and thread quantum.
So, even if you put the resolution down to the maximum of 1ms, you are not guaranteed that your thread will be the one chosen to do its work.
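For reference, a minimal sketch of how timeSetEvent is typically wrapped from C# (the MultimediaTimer class is illustrative, not the questioner's code; error handling is omitted):
using System;
using System.Runtime.InteropServices;

class MultimediaTimer
{
    private delegate void TimerProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")]
    private static extern uint timeSetEvent(uint delay, uint resolution, TimerProc proc, UIntPtr user, uint eventType);

    [DllImport("winmm.dll")]
    private static extern uint timeKillEvent(uint timerId);

    private const uint TIME_PERIODIC = 1;

    private readonly TimerProc _proc;    // kept in a field so the GC doesn't collect the delegate
    private uint _timerId;

    public MultimediaTimer(Action callback)
    {
        _proc = (id, msg, user, dw1, dw2) => callback();
    }

    public void Start(uint intervalMs)
    {
        // 1 ms resolution; the callback fires on a system thread, so it must finish
        // well within intervalMs or ticks will be skipped, as described above.
        _timerId = timeSetEvent(intervalMs, 1, _proc, UIntPtr.Zero, TIME_PERIODIC);
    }

    public void Stop()
    {
        timeKillEvent(_timerId);
    }
}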
I suppose that you use a system timer that runs callbacks on a single dedicated thread.
Then you set the timer interval to 1 ms. Before your change each callback took 0.3 ms to complete, so the callbacks of the 4 timers took 4 * 0.3 = 1.2 ms in total. They managed to complete within 1-2 timer intervals and could all start again after that.
But after your change each callback takes 1.2 ms by itself. So by the time the first callback finishes, there are pending requests from threads 2-4 plus a new request from thread 1 (because its interval has already elapsed). Which request gets served next then depends on the timer implementation; it turns out to be the one from the first thread.

Using Timers to create a single timeout frequently

I'm working with a timeout that is set to occur after a certain elapsed period, after which I would like to get a callback. Right now I am doing this with a Timer that disposes of itself when it fires.
public class Timeouter
{
    public void CreateTimeout(int timeout, Action onTimeout)
    {
        Timer t = null;
        t = new Timer(_ =>
        {
            onTimeout();
            t.Dispose();
        }, new object(), timeout, Timeout.Infinite);
    }
}
I'm a bit concerned about the resource use of this timer, since it could potentially be called quite frequently and would thus set up a lot of timers that fire just once and dispose of themselves. The fact that the timer is IDisposable suggests to me that it does use some sort of expensive resource to accomplish its task.
Am I worrying too much about the resource usage of the Timer, or is the solution fine as it is?
Do I have any other options? Would it be better to have a single timer and fiddle with its frequency, starting and stopping it as necessary, in order to accommodate several of these timeouts? Any other, potentially more lightweight, option to have a task execute once after a given period of time has elapsed?
.NET has two or three timer classes that are expensive. However, the System.Threading.Timer class you're using is a very cheap one. It does not use kernel resources or put a thread to sleep waiting for the timeout. Instead it uses a single thread for all Timer instances, so you can easily have thousands of timers and still keep a tiny processor and memory footprint. You must call Dispose only to tell the system to stop tracking a particular timer instance, but that does not imply this is an expensive class at all.
Once the timeout is reached, the class schedules the callback to be executed by a ThreadPool thread, so it does not start a new thread of its own.
Though this is not quite an answer, due to its length I am adding it as one.
In a server/client environment, AFAIK, using timers on the server is not the best approach. If you have thick clients (or even thin ones), you should instead devise a polling mechanism on the client when it wants a certain operation performed on the server on its behalf, since a client could potentially disconnect after setting up a timer, then reconnect and set up another timer, and so on, eventually making your server unavailable at some point in the future (a potential DoS attack).
Alternatively, think of a single-timer strategy to deal with all clients, implementing sliding expirations or client-specific policies.
One other option is to maintain a sorted list of things that will time out: add them to the list with their expiry time instead of their duration, keep the list sorted by expiry time, and just pop the first item off the list when it expires.
You will of course need to do most of this on a secondary thread and invoke your callbacks from there. You don't actually need to keep that thread spinning either: you could have it wait on a wait handle with a timeout set to (a bit less than) the duration until the next timeout is due, and have the add method signal that handle. See here for more information on waiting with a timeout. A sketch of this idea follows below.
I don't know if this would be better than creating lots of timers.
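A minimal sketch of that idea, assuming a hypothetical TimeoutScheduler class (duplicate expiry times and shutdown are glossed over):
using System;
using System.Collections.Generic;
using System.Threading;

public sealed class TimeoutScheduler
{
    // Pending timeouts, sorted by absolute expiry time.
    private readonly SortedList<DateTime, Action> _pending = new SortedList<DateTime, Action>();
    private readonly AutoResetEvent _changed = new AutoResetEvent(false);
    private readonly object _gate = new object();

    public TimeoutScheduler()
    {
        var worker = new Thread(Run) { IsBackground = true };
        worker.Start();
    }

    public void CreateTimeout(int timeoutMs, Action onTimeout)
    {
        lock (_gate)
        {
            // A real implementation would handle duplicate expiry times; skipped here.
            _pending.Add(DateTime.UtcNow.AddMilliseconds(timeoutMs), onTimeout);
        }
        _changed.Set();                 // wake the worker so it can re-evaluate the next due time
    }

    private void Run()
    {
        while (true)
        {
            Action due = null;
            TimeSpan wait = Timeout.InfiniteTimeSpan;
            lock (_gate)
            {
                if (_pending.Count > 0)
                {
                    var remaining = _pending.Keys[0] - DateTime.UtcNow;
                    if (remaining <= TimeSpan.Zero)
                    {
                        due = _pending.Values[0];   // first item has expired: pop it
                        _pending.RemoveAt(0);
                    }
                    else
                    {
                        wait = remaining;           // only wait until the next expiry
                    }
                }
            }
            if (due != null)
                ThreadPool.QueueUserWorkItem(_ => due());   // run callbacks off the worker thread
            else
                _changed.WaitOne(wait);             // blocks until a new timeout is added or the wait elapses
        }
    }
}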
Creating a cheap timer that can time many intervals is intuitively simple. You only need one timer. Set it up for the closest due time. When it ticks, fire the callback or event for every timer that was due. Then just repeat, looking again through the list of active timers for the next due time. If a timer changes its interval then just repeat the search again.
Something potentially expensive might happen in the callback. Best way to deal with that is to run that code on a threadpool thread.
That's an extraordinarily frugal use of system resources: just one timer and the cheapest possible threads. You pay for that with a little overhead whenever a timer's state changes, and O(n) complexity to look through the list of active timers; you can make most of it O(log n) with a SortedList. But that overhead is very small.
You can easily write that code yourself.
But you don't have to; System.Timers.Timer already works that way. Don't help.

Thread.Sleep() sleeps for longer

I have a WinForms application that needs to wait for about 3-4 hours. I can't close and somehow reopen the app, as it does a few things in the background while it waits.
To achieve the wait without troubling the UI thread (and for other reasons), I have a BackgroundWorker to which I pass the number of milliseconds to wait, and I call Thread.Sleep(waitTime) in its DoWork event handler. In the RunWorkerCompleted event handler I do what the program is supposed to do after the wait.
This works fine on the development machine, i.e. the wait ends when it should. But on the test machine it keeps waiting longer. This has happened twice: the first time it waited exactly 1 hour more than the specified time, and the second time it waited about 2 hours and 40 minutes longer.
Could there be any obvious reason for this to happen, or am I missing something?
The dev machine is Windows XP and the test machine is Windows 7.
I propose to use ManualResetEvent instead:
http://msdn.microsoft.com/en-us/library/system.threading.manualresetevent.aspx
ManualResetEvent mre = new ManualResetEvent(false);
mre.WaitOne(waitTime);
...
//your background worker process
mre.Set();
As a bonus, you will have the ability to interrupt this sleep early if needed.
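A minimal sketch of how this could fit the BackgroundWorker from the question (backgroundWorker1 and DoWorkAfterWait are placeholders for the questioner's own members; the wait length is assumed to be passed via RunWorkerAsync):
private readonly ManualResetEvent _wakeEarly = new ManualResetEvent(false);

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    int waitMilliseconds = (int)e.Argument;   // e.g. backgroundWorker1.RunWorkerAsync(waitMilliseconds)
    // Blocks for up to waitMilliseconds, but returns immediately if another part
    // of the program calls _wakeEarly.Set() to cut the wait short.
    _wakeEarly.WaitOne(waitMilliseconds);
}

private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    DoWorkAfterWait();   // whatever the program is supposed to do after the wait
}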
Have a look at this article which explains the reason:
Thread.Sleep(n) means block the current thread for at least the number of timeslices (or thread quantums) that can occur within n milliseconds. The length of a timeslice is different on different versions/types of Windows and different processors and generally ranges from 15 to 30 milliseconds. This means the thread is almost guaranteed to block for more than n milliseconds. The likelihood that your thread will re-awaken exactly after n milliseconds is about as impossible as impossible can be. So, Thread.Sleep is pointless for timing.
By the way it also explains why not to use Thread.Sleep ;)
I agree with the other recommendations to use a Timer instead of Thread.Sleep.
In my humble opinion, the difference in wait time cannot be explained solely by the information you have given us. I suspect the cause lies in the moment the sleep is started, i.e. the actual Thread.Sleep(waitTime) call. Are you sure the sleep is invoked at the moment you think it is?
And, as suggested in the comments, if you really need to wait this long, consider using a Timer to start the events needed, or even scheduling of some sort within your application. Of course, this depends on your actual implementation and is easier said than done. But it 'feels' silly letting a BackgroundWorker sleep for so long.
PREFIX: This requires .NET 4.5 or newer
Consider making your function async and simply doing:
await Task.Delay(waitTime);
Alternatively, if you can't make your function async (or don't want to), you could also do:
Task.Delay(waitTime).Wait();
This is a one-line solution and anyone with a copy of Reflector can verify that Task.Delay uses a timer internally.
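As a rough sketch of how that could replace both the BackgroundWorker and the Thread.Sleep call (DoWorkAfterWait is a placeholder for the post-wait logic):
private async Task WaitThenContinueAsync(TimeSpan waitTime)
{
    await Task.Delay(waitTime);   // no thread is blocked while waiting
    DoWorkAfterWait();            // continues on the UI context when called from the UI thread
}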

Best way to check when a specified date occurs

Are there any classes in the .NET Framework I can use to raise an event when the time has caught up with a specified DateTime object?
If there aren't, what are the best practices for checking this? Create a new thread that checks constantly? A timer (heaven forbid ;) )?
I wouldn't go with the thread approach. While a sleeping thread doesn't consume user CPU time, it does use Kernel/system CPU time. Secondly, in .NET you can't adjust the Thread's stack size. So even if all it does is sleep, you are stuck with a 2MB hit (I believe that is the default stack size of a new thread) for nothing.
Use System.Threading.Timer instead. It uses an efficient timer queue: you can have hundreds of lightweight timers that all execute on a single thread reused between them (assuming most of the timers aren't firing at the same time).
When a thread is sleeping it consumes no CPU. A very simple way would be to have a thread that sleeps until the target DateTime. For example:
DateTime future = DateTime.Now.Add(TimeSpan.FromSeconds(30));
new Thread(() =>
{
    Thread.Sleep(future - DateTime.Now);
    //RaiseEvent();
}).Start();
This basically says: get a date in the future (thirty seconds from now), then create a thread that sleeps for the difference between the times, then raise your event.
Edit: Adding some more info about timers. There is nothing wrong with timers, but I think they might be more work. You could have a timer with an interval equal to the difference between the times; this will cause the Tick event to fire when the time has caught up with the DateTime object.
An alternative, which I would not recommend (and I seem to think you have thought of this), is to have a timer go off every five seconds and check whether the times match. I would avoid that approach and stick with having the thread sleep until there is work to be done.
A timer is probably not a bad way to go. Just use DateTime.Now to detect whether it's past the target time. Don't use == unless you normalize the times to the minute or the hour or something.
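For instance, a minimal sketch with a one-shot System.Threading.Timer scheduled for the target time (RaiseEvent is a placeholder):
DateTime target = DateTime.Now.AddSeconds(30);
TimeSpan due = target - DateTime.Now;
if (due < TimeSpan.Zero)
    due = TimeSpan.Zero;                        // target already passed: fire immediately

System.Threading.Timer timer = null;
timer = new System.Threading.Timer(_ =>
{
    RaiseEvent();                               // time has caught up with the target
    timer.Dispose();                            // one-shot: release the timer afterwards
}, null, due, System.Threading.Timeout.InfiniteTimeSpan);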
