I've started developing a desktop app in C# (VS2010, .NET Framework 4.0) involving several timers.
At first, I was using System.Timers to send data over USB to a data bus. The point is that I need to send different periodic binary messages at several specific intervals, such as 10ms, 20ms, 40ms, 100ms, 500ms, 1000ms, and so on.
Each timer has a callback function that has to send the correct data flow for each raised event.
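Roughly, the setup looks like this (a simplified sketch; SendFrame is a placeholder for the real send routines, which build and write the actual binary messages):
// Simplified sketch - SendFrame stands in for the real send routines.
// Requires: using System; using System.Collections.Generic; using System.Timers;
var intervals = new Dictionary<int, Action>
{
    { 10,   () => SendFrame(10)   },
    { 40,   () => SendFrame(40)   },
    { 100,  () => SendFrame(100)  },
    { 1000, () => SendFrame(1000) },
};
var timers = new List<System.Timers.Timer>();
foreach (var pair in intervals)
{
    var send = pair.Value;                                  // copy for the closure
    var t = new System.Timers.Timer(pair.Key) { AutoReset = true };
    t.Elapsed += (s, e) => send();                          // callback sends this period's data
    timers.Add(t);
    t.Start();
}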
The problem is that when I have the full set of signals running, the real intervals differ from the expected ones (3000ms rises to 3500ms, 100ms to 340ms), because once I add the 10ms to 40ms timers the CPU is overloaded at almost 100% and all precision is lost.
I'm working in VMware with 2GB RAM and 2 CPU cores. The host machine is an i7-2600K CPU @ 3.40GHz. I don't think this is the problem (but I'm not certain).
Before writing here, I was looking for an answer on how to get more exact timing with the lowest possible resource consumption, but everything I found was unspecific.
I already know about System.Diagnostics.Stopwatch from reading about it here, but that class has no events. There is also System.Windows.Forms.Timer, but it is even more imprecise and has low resolution.
There is a good implementation here with microsecond resolution, but it is exactly what is overloading my CPU!
What do you think my next steps should be? I'll appreciate any help or ideas you have.
10% timing accuracy is the goal on the 10ms interval!
I will clarify any extra info you need!
Related
Thread.Sleep used to be tied to the system clock, which ticked at an interval of roughly 16ms, so anything below 16ms would yield a 16ms sleep. The behaviour seems to have changed somewhere down the line.
What has changed? Is Thread.Sleep no longer tied to the system clock but to the high res timer? Or has the default system clock frequency increased in Windows 10?
edit:
It seems people are interested in knowing why I use Thread.Sleep; that's actually out of the scope of the question, which is why the behaviour has changed. Anyway, I noticed the change in my open source project FreePIE https://andersmalmgren.github.io/FreePIE/
It's an input/output emulator controlled by the end user using IronPython, and it runs in the background. It has no natural interrupts, so I need to marshal the scripting thread so it does not starve an entire core.
Thanks to Hans Passant's comment, which I missed at first, I found the problem. It turns out Nvidia's driver suite is the troublemaker.
Platform Timer Resolution: Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 13396
Requesting Process Path \Device\HarddiskVolume2\Program Files\NVIDIA Corporation\NVIDIA GeForce Experience\NVIDIA Share.exe
This is so idiotic on so many levels. In the long run this is even bad for the environment, since any computer with Nvidia hardware will use more power.
edit: Hans Comment, relevant part:
Too many programs want to mess with it, on top of the list is a free product of a company that sells a mobile operating system. Run powercfg /energy from an elevated command prompt to find the evildoer, "Platform Timer Resolution" in the generated report.
The newer hardware does have a more precise timer, indeed. But the purpose of Thread.Sleep is not to be precise, and it should not be used as a timer.
It is all in the documentation:
If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
Also very important: after the time elapses, your thread is not automatically run. Instead, it is marked as ready to run. This means it will get CPU time only when there are no higher-priority threads that are also ready to run, and not before the currently running threads consume their quantum or enter a waiting state.
I'm looking for some kind of timer that has a higher resolution than the Windows default of ~15ms. I don't need a timer for time measurement but rather a timer that is able to wait X milliseconds (or call an event every X milliseconds). I know it's possible to change the Windows timer resolution with NtSetTimerResolution, but that affects all applications (which I don't want). I don't need much precision, so say if I'm looking for 2ms then 1.5ms and 2.5ms would be OK too.
Using spinners works, but it obviously causes too much CPU usage. Creative ideas are welcome too, as long as they get the job done.
NtSetTimerResolution and timeBeginPeriod can increase the timer resolution, but they are system-wide. If anyone has a good idea, please tell me.
I don't recommend that you do this. Google has modified Chrome to increase the timer frequency only when necessary, which works in most cases.
The default timer resolution on Windows is 15.6ms – the timer interrupts 64 times per second. Programs that increase the timer frequency increase power consumption and impair battery life. They also waste more computing power than you might expect – they slow down your computer! Because of these problems, Microsoft has been telling developers for years not to increase the timer frequency.
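For reference, here is a minimal P/Invoke sketch of the timeBeginPeriod/timeEndPeriod pair mentioned in the question; the call is system-wide, so if you must use it, follow the Chrome approach and raise the resolution only around the code that actually needs it.
// Sketch: temporarily raise the global timer resolution to 1ms, then restore it.
// timeBeginPeriod/timeEndPeriod live in winmm.dll and affect the whole machine.
using System;
using System.Runtime.InteropServices;

static class TimerResolution
{
    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeEndPeriod(uint uMilliseconds);

    public static void RunWithOneMsResolution(Action work)
    {
        timeBeginPeriod(1);               // request a 1ms system tick (global!)
        try { work(); }
        finally { timeEndPeriod(1); }     // always undo the request
    }
}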
OK, that title was perhaps vague, but allow me to explain.
I'm dealing with a large list, of hundreds of messages to be sent to a CAN bus as byte arrays. Each of these messages has an Interval property detailing how often the message must be sent, in milliseconds. But I'll get back to that.
So I have a thread. The thread loops through this giant list of messages until stopped, with the body roughly like this:
Stopwatch timer = new Stopwatch();
timer.Start();
while (!ShouldStop)
{
    foreach (Message msg in list)
    {
        if (msg.IsReadyToSend(timer)) msg.Send();
    }
}
This works great, with phenomenal accuracy in honoring the Message objects' Interval. However, it hogs an entire CPU. The problem is that, because of the massive number of messages and the nature of the CAN bus, there is generally less than half a millisecond before the thread has to send another message. There would never be a case where the thread would be able to sleep for, say, more than 15 milliseconds.
What I'm trying to figure out is if there is a way to do this that allows for the thread to block or yield momentarily, allowing the processor to sleep and save some cycles. Would I get any kind of accuracy at all if I try splitting the work into a thread per message? Is there any other way of doing this that I'm not seeing?
EDIT: It may be worth mentioning that the Message's Interval property is not absolute. As long as the thread continues to spew messages, the receiver should be happy, but if the thread regularly sleeps for, say, 25 ms because of higher priority threads stealing its time-slice, it could raise red flags for the receiver.
Based on the updated requirement, there is a very good chance that the default setup with Sleep(0) could be enough - messages may be sent in small bursts, but it sounds like that is OK. Using a multimedia timer may make the bursts less noticeable. Building more tolerance into the receiver of the messages may be a better approach (if possible).
If you need hard millisecond accuracy with good guarantees, C# on Windows is not the best choice - separate hardware (even an Arduino) may be needed, or at least lower-level code than C#.
Windows is not RT OS, so you can't really get sub-millisecond accuracy.
A busy loop (possibly on a high-priority thread) like you have is the common approach if you need sub-millisecond accuracy.
You can try using multimedia timers (sample - Multimedia timer interrupts in C# (first two interrupts are bad)), as well as changing the default time slice to 1ms (see Why are .NET timers limited to 15 ms resolution? for a sample/explanation).
In any case you should be aware that your code can lose its time-slice if there are other higher-priority threads to be scheduled, and all your efforts would be lost.
Note: you should obviously consider whether a more sensible data structure is suitable (i.e. a heap or priority queue may work better for finding the next item to send).
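As a rough illustration of the Sleep(0) suggestion applied to the original loop (a sketch only, assuming a hypothetical NextDueInMs helper on the poster's Message type): send whatever is due, and give up the time-slice whenever nothing is due within about a millisecond, accepting the small bursts mentioned above.
// Sketch - NextDueInMs is a hypothetical helper returning milliseconds until the message is next due.
Stopwatch timer = Stopwatch.StartNew();
while (!ShouldStop)
{
    double earliest = double.MaxValue;
    foreach (Message msg in list)
    {
        if (msg.IsReadyToSend(timer))
            msg.Send();
        else
            earliest = Math.Min(earliest, msg.NextDueInMs(timer));
    }
    if (earliest > 1.0)
        Thread.Sleep(0);   // nothing due soon: yield the rest of the time-slice
}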
As you have discovered, the most accurate way to "wait" on a CPU is to poll the RTC. However, that is computationally intensive. If you need clock-level accuracy in your timing, there is no other way.
However, in your original post, you said that the timing was in the order of 15ms.
On my 3.3GHz Quad Core i5 at home, 15ms x 3.3GHz = 50 Million Clock cycles (or 200 million if you count all the cores).
That is an eternity.
Loose sleep timing is most likely more than accurate enough for your purposes.
To be frank, if you need hard RT, C# on the .NET VM running with the .NET GC on the Windows kernel is the wrong choice.
Every n*x milliseconds I perform an action where n = 0, 1, 2, ...; x is some increment.
Example - every 25 milliseconds I perform a calculation.
This action can take fewer than x milliseconds for each increment. As a result, I need a way in C# to wait the remaining (x - actual_time) milliseconds.
Example - if the calculation only takes 20 milliseconds, I need to wait 5 more milliseconds before re-running the calculation.
Please advise.
Thanks,
Kevin
I need a way in C# to wait the remaining (x - actual_time) milliseconds.
I presume that is C# running on Windows.
And there is your problem. Windows is not a "realtime" operating system.
The best you can do if you need millisecond-grade timing precision is to set the thread priority of your thread extremely high, and then busy-wait while querying the high performance timer (Stopwatch).
You cannot yield to another thread; the other thread could run for as much as 16 milliseconds before the operating system context switches it, and of course unless you are the highest priority thread, you have no guarantee that control is coming back to you after those 16 milliseconds are up.
Now, setting thread priority high and then busy waiting is one of the most rude things you can possibly do; essentially you will be taking control of the user's machine and not allowing them to do anything else with it.
Therefore what I would do is abandon this course of action entirely. Either, (1) consider obtaining an operating system designed for realtime process control if that is in fact your application, rather than an operating system designed for multitasking a bunch of line-of-business applications. Or (2) abandon your requirement that the action happen exactly every 25 milliseconds. Just perform the calculation once and yield the remainder of your quantum to another thread. When you get control back, see if more than 25 ms has passed since you yielded; if it has not, yield again. If it has, start over and perform the calculation.
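A minimal sketch of option (2), assuming the 25ms figure from the question and a DoCalculation placeholder for the real work; Thread.Yield simply gives up the rest of the quantum, so individual gaps can still exceed 25ms when higher-priority threads are busy.
// Sketch: run the calculation, then keep yielding until at least 25ms have passed since it started.
using System;
using System.Diagnostics;
using System.Threading;

static class PeriodicRunner
{
    public static void Run(Action doCalculation, int periodMs = 25)
    {
        var clock = Stopwatch.StartNew();
        while (true)
        {
            long started = clock.ElapsedMilliseconds;
            doCalculation();                                   // placeholder for the real work
            while (clock.ElapsedMilliseconds - started < periodMs)
                Thread.Yield();                                // give the quantum away, then re-check
        }
    }
}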
That level of accuracy will be very difficult to achieve in a non real-time operating system like Windows. Your best bet might be to look into the multimedia timers.
The other .NET timers won't have the kind of resolution you need.
At 25ms, you may be on the wrong side of the resolution of the timers available in .NET.
However, as a general solution I'd probably attempt this a different way from your "do calculation... wait until 25ms has passed" approach.
A better way may well be to use a System.Timers.Timer, on a 25ms trigger, to trigger the calculation.
// Requires: using System.Timers;
var timer = new Timer(25);
timer.Elapsed += (sender, eventArgs) =>
{
    DoCalc();
};
timer.Start();
In the above example, a DoCalc method will be called every 25ms (timer resolution issues notwithstanding). You would need to consider what to do if your calculation overran its allotted time, though. As it stands, the above code would allow a second calculation to start even if the previous one had not completed.
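One possible way to handle that overrun, sketched here as an assumption rather than a recommendation: turn off AutoReset and re-arm the timer only after each calculation finishes, so a slow run delays the next tick instead of overlapping it.
// Sketch: non-overlapping variant - each tick is re-armed only after the previous DoCalc completes.
var timer = new System.Timers.Timer(25) { AutoReset = false };
timer.Elapsed += (sender, eventArgs) =>
{
    DoCalc();
    timer.Start();   // schedule the next tick 25ms from now, never overlapping
};
timer.Start();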
This is a difficult one, and your options are fairly limited, as Eric Lippert and Matt Burland indicate. Basically, you can either
resort to using multimedia timers (google "multimedia timer component" or "winmm.dll"), which, although supporting time resolutions down to 0.500 ms, are no longer recommended as of Windows Vista, require Win32 interop and may bump up your CPU usage quite noticeably, or
come up with an approximated time slicing algorithm that will use the standard timer (whose resolution is usually 15.625 ms on multicore desktops), dynamically varying the timer interval upon each tick based on the difference of desired and actual time elapsed since the last timer tick (you can measure this fairly accurately using high resolution CPU performance counters, e.g. the Stopwatch class).
The latter solution will statistically give you a 40Hz timer in your sample use case, but you'll have significant jitter due to the low resolution of the timer you are using.
This is the tradeoff, the call is yours to make.
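A rough sketch of the second approach, assuming a 25ms target and the same DoCalc placeholder: a one-shot System.Timers.Timer whose next interval is recomputed from a Stopwatch on every tick, so the long-run rate stays correct even though individual ticks jitter.
// Sketch: self-correcting timer - each new interval absorbs the error of the previous tick.
var clock = Stopwatch.StartNew();
const double periodMs = 25.0;
int ticks = 0;
var timer = new System.Timers.Timer(periodMs) { AutoReset = false };
timer.Elapsed += (sender, eventArgs) =>
{
    DoCalc();                                           // placeholder for the real work
    ticks++;
    double nextDue = (ticks + 1) * periodMs;            // where the next tick should land
    timer.Interval = Math.Max(1.0, nextDue - clock.Elapsed.TotalMilliseconds);
    timer.Start();
};
timer.Start();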
Here's a high-precision timer I wrote. I get roughly <1ms avg precision on 25ms interval. But if Windows is busy it may come in late. It's fairly lean on CPU utilization. Feel free to use. It uses Sleep(1) when the next tick is more than 15ms away and then SpinUntil (which yields as needed) to keep CPU usage at bay. Uses .NET4 features.
Link: High Precision Timer
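For readers who only want the idea rather than the linked code, here is a hedged sketch of that hybrid wait: coarse Sleep(1) while the deadline is far away, then SpinWait.SpinUntil (which yields as it spins) for the final stretch. This is an illustration of the approach described above, not the linked implementation.
// Sketch: wait until 'deadlineTicks' (Stopwatch ticks) cheaply, then with fine precision at the end.
static void WaitUntil(Stopwatch clock, long deadlineTicks)
{
    long slack = Stopwatch.Frequency * 15 / 1000;       // ~15ms; Sleep(1) can itself take a full system tick
    while (deadlineTicks - clock.ElapsedTicks > slack)
        Thread.Sleep(1);                                // coarse wait, cheap on CPU
    SpinWait.SpinUntil(() => clock.ElapsedTicks >= deadlineTicks);
}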
I am using the code below
Thread.Sleep(5);
at the end of a while loop, to try and get a 5ms delay between iterations.
Sometimes it sleeps for 16ms. This I understand and accept, since it depends on when the CPU gets around to servicing the thread. However, once it has woken from such a sleep, on the next iteration it seems to wake immediately after the sleep call (I am logging with timestamps). Is there a problem with using such a short sleep interval that it gets treated as zero?
Your problem is likely that on most modern machines, DateTime.UtcNow has a resolution of about 10-15ms (though I see the documentation says it's about 10ms since NT 3.5). If you want higher-resolution timing, see the Stopwatch class, specifically Stopwatch.GetTimestamp().
Also note that Stopwatch will only use the high-resolution timers if they are available (Stopwatch.IsHighResolution will tell you at run-time). If not, it falls back to DateTime.UtcNow.Ticks.
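As a small sketch of that suggestion (the loop body and flag are placeholders), timestamping with Stopwatch.GetTimestamp and converting via Stopwatch.Frequency gives much finer-grained log intervals than DateTime.UtcNow:
// Sketch: high-resolution timestamps around the Thread.Sleep(5) loop.
long previous = Stopwatch.GetTimestamp();
while (keepRunning)                                     // placeholder loop condition
{
    DoWork();                                           // placeholder for the loop body
    Thread.Sleep(5);
    long now = Stopwatch.GetTimestamp();
    double elapsedMs = (now - previous) * 1000.0 / Stopwatch.Frequency;
    Console.WriteLine("iteration took {0:F2} ms", elapsedMs);
    previous = now;
}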
Most likely, the problem is simply that your timer has limited resolution. If it only updates, say, every 10ms, you're going to see the same timestamp on some iterations, even if 5ms have passed.
Which timer are you using to generate your timestamps?
If I remember correctly, NT time slicing, which was introduced in the NT kernel and was still active in the same way as of XP, operates right around the 5ms mark. We were building a realtime application and ran into that problem. You will not be able to consistently get a 5ms sleep time. What we found was that you will sometimes get 10-16ms, sometimes no delay at all, and only occasionally, rarely, the requested 5ms.
I was doing these tests around 5 years ago though so things may have changed since then.
What sort of system are you running this on? Small intervals can depend on the processor and how high a resolution it supports. I ran an app on a handheld once where the resolution of the timer itself was 16ms, so it may be hardware related. Try increasing the time period to, say, 30ms and see if it works then.