I tried out the .NET Stopwatch class for fun, but I got some unexpected results.
I was fully expecting an execution time of about 100 ms from this program.
Is the Stopwatch class inaccurate, or what is going on here?
Code:
using System;
using System.Diagnostics;
using System.Threading;

namespace Timer
{
    class Program
    {
        static Stopwatch s = new Stopwatch(); // must be static to be usable from Main

        static void Main(string[] args)
        {
            s.Start();
            for (int i = 0; i < 100; i++)
            {
                Thread.Sleep(1);
            }
            s.Stop();
            Console.WriteLine("Elapsed Time " + s.ElapsedMilliseconds + " ms");
            Console.ReadKey();
        }
    }
}
Result is 190 ms
Because Thread.Sleep(n) means: sleep for at least n milliseconds.
Some time ago I wrote an answer (on another topic though), which contained an external reference:
Thread.Sleep(n) means block the current thread for at least the number
of timeslices (or thread quantums) that can occur within n
milliseconds.
The length of a timeslice is different on different versions/types of
Windows and different processors and generally ranges from 15 to 30
milliseconds. This means the thread is almost guaranteed to block for
more than n milliseconds. The likelihood that your thread will
re-awaken exactly after n milliseconds is about as impossible as
impossible can be. So, Thread.Sleep is pointless for timing.
According to MSDN:
The system clock ticks at a specific rate called the clock resolution.
The actual timeout might not be exactly the specified timeout, because
the specified timeout will be adjusted to coincide with clock ticks.
Since you are not on a real-time OS, your program can be interrupted by something else, and using Sleep increases the chances that this will happen, especially because the number of milliseconds your thread waits is at least n milliseconds. Try a single plain Thread.Sleep(100) and you will probably find something closer.
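To see the difference, here is a minimal sketch of that comparison (assuming the usings from the question's code); the exact numbers vary with the system timer resolution, but the looped version consistently overshoots far more:

var sw = Stopwatch.StartNew();
for (int i = 0; i < 100; i++)
{
    Thread.Sleep(1); // each call blocks for at least 1 ms, often a whole timeslice
}
sw.Stop();
Console.WriteLine("100 x Sleep(1): {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
Thread.Sleep(100); // a single wait pays the rounding-up cost only once
sw.Stop();
Console.WriteLine("1 x Sleep(100): {0} ms", sw.ElapsedMilliseconds);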
Related
I'm comparing the execution time of different Redis clients. I'm following the advice of several Stack Overflow answers, like this one (with regard to using Stopwatch as a simple timer). Notice my results below: the Visual Studio execution time is 926 ms, but the timer value is 16,815 ms. Stopwatch results are consistently higher (by quite a lot). What am I doing wrong?
The Stopwatch measures elapsed wall-clock time between Start and Stop by counting ticks of the underlying timer mechanism. That means that if you use F10 to step over statements, the Stopwatch keeps running in the background the whole time you are paused in the debugger.
Let's perform a test comparing the two times to get a better understanding. Suppose there is a breakpoint on the first line and another on the last line. Press F5 when execution is paused at the first breakpoint, then compare the time shown by the debugger with the time shown by the Stopwatch; those should be comparable.
Stopwatch stopwatch = Stopwatch.StartNew(); // Breakpoint
Print();                                    // Print() contains a Thread.Sleep(1000)
stopwatch.Stop();
int count = 10;                             // Breakpoint
In my case Stopwatch showed elapsed time as 1002ms and debugger showed <1005ms.
Please note that the times will differ significantly if you step over each line one at a time: the Stopwatch keeps counting while you step, so it will report a much greater elapsed time.
I want to time the total runtime of a program I am working on.
Currently the code looks similar to this (sw is a member of the Program class):
static void Main(string[] args)
{
    sw.Start();
    // Code here starts a thread
    // Code here joins the thread
    // Code operates on results of the threaded operation
    sw.Stop();
    Console.WriteLine("Operation took {0}", sw.Elapsed.ToString());
}
I assume this issue is caused by the thread taking control of the program's execution, but that operation is what takes the most time, and I'm supposed to avoid changing that code since other projects depend on it as well.
For reference, simple observation shows that the code takes nearly half an hour to run, but the elapsed time according to the stopwatch is only 23 seconds or so. Exact output: Operation took 00:00:23.1064841
In this case, the best solution is to use Environment.TickCount as it measures milliseconds since the system booted and is not processor-core dependent like Stopwatch.
int startTime = Environment.TickCount;
// Your code to measure here
int endTime = Environment.TickCount;
int runningTime = endTime - startTime; // running time in milliseconds
// code to do whatever you want with runningTime
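One caveat worth noting: Environment.TickCount is a signed 32-bit value, so it wraps around after roughly 24.9 days of uptime. On .NET Core 3.0 and later you can avoid the wrap-around with Environment.TickCount64; a minimal sketch:

long start = Environment.TickCount64;
// Your code to measure here
long runningTime = Environment.TickCount64 - start; // running time in milliseconds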
I'm running a process in a loop which has a limit on the number of operations it does per day. When it reaches this limit, it currently checks the time in a loop to see whether it is a new date.
Would the best option be to:
Keep checking the time every second for new date
Calculate the number of seconds until midnight and sleep that length of time
Something else?
Don't use Thread.Sleep for this type of thing. Use a Timer and calculate the duration you need to wait.
var now = DateTime.Now;
var tomorrow = now.AddDays(1);
var durationUntilMidnight = tomorrow.Date - now;
var t = new Timer(o => { /* Do work */ }, null, durationUntilMidnight, TimeSpan.FromDays(1));
Replace the /* Do work */ delegate with the callback that will resume your work; with these arguments the timer first fires at midnight and then repeats once a day.
Edit: As mentioned in the comments, there are many things that can go wrong if you assume the "elapsed time" an application will wait for is going to match real-world time. For this reason, if timing is important to you, it is better to use smaller polling intervals to find out if the clock has reached the time you want your work to happen at.
Even better would be to use Windows Task Scheduler to run your task at the desired time. This will be much more reliable than trying to implement it yourself in code.
Windows has a task scheduler that handles exactly this duty. Create the program to do that which it is supposed to do. Then set it up as a scheduled task.
Just calculate the period to wait and run an asynchronous timer; this way you avoid extra CPU consumption while waiting:
var nextDateStartDateTime = DateTime.Now.AddDays(1).Subtract(DateTime.Now.TimeOfDay);
double millisecondsToWait = (nextDateStartDateTime - DateTime.Now).TotalMilliseconds;
System.Threading.Timer timer = new Timer(
    (o) => { Debug.WriteLine("New day coming on"); },
    null,
    (uint)millisecondsToWait, // due time: fire when the next day starts
    0);                       // period 0: fire once, do not repeat
Considering the two options you've provided:
There are 60*60*24 = 86,400 seconds per day, so you could potentially do a lot of checking if you hit the limit early. Additionally, busy waiting is a waste of CPU cycles, and it will slow down everything else that is running.
You should calculate the number of seconds until midnight and sleep that long (note that Thread.Sleep takes milliseconds rather than seconds, so a simple conversion is needed; see the sketch below).
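A minimal sketch of that calculation, assuming the daily limit resets at local midnight:

var now = DateTime.Now;
var untilMidnight = now.Date.AddDays(1) - now; // time remaining in the current day
Thread.Sleep((int)untilMidnight.TotalMilliseconds);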
EDIT:
An additional benefit of calculating then sleeping is that a user who wants to bypass your restriction by changing the clock will not easily be able to, since a clock reading midnight won't wake the process the way it would with continual checking. However, with a better understanding of how your program works internally, the user could change the clock to just before midnight every time they are about to reach the operation limit, causing the thread to wake up within a few minutes or even a few seconds. It's a more complicated exploit than your first suggestion would allow, but it can be done.
This is how I make a thread sleep until 6 AM tomorrow:
int minutesToSleep = (int)(DateTime.Today.AddDays(1).AddHours(6) - DateTime.Now).TotalMinutes;
Console.WriteLine("Sleeping for {0} minutes (until tomorrow 6AM)", minutesToSleep);
Thread.Sleep(TimeSpan.FromMinutes(minutesToSleep));
I am using Visual Studio Express Edition, and it doesn't have a profiler or code analyzer.
My code has two delegates performing the same task, one using an anonymous method and one using a lambda expression. I want to compare which one takes less time.
How can I do this in VS Express (not only for delegates, but for methods too)?
If it is a duplicate, please link it.
Thanks
I tried it like this:
/** Start date time **/
DateTime startTime = DateTime.Now;

/******** square a number using a LAMBDA EXPRESSION ********/
returnSqr myDel = x => x * x;
Console.WriteLine("By Lambda Expression Square of {0} is: {1}", a, myDel(a));

/** Stop date time **/
DateTime stopTime = DateTime.Now;
TimeSpan duration = stopTime - startTime;
Console.WriteLine("Execution time 1:" + duration.Milliseconds);

/** Start date time **/
DateTime startTime2 = DateTime.Now;

/******** square a number using an ANONYMOUS METHOD ********/
returnSqr myDel1 = delegate(int x) { return x * x; };
Console.WriteLine("By Anonymous Method Square of {0} is: {1}", a, myDel1(a));

/** Stop date time **/
DateTime stopTime2 = DateTime.Now;
TimeSpan duration2 = stopTime2 - startTime2;
Console.WriteLine("Execution Time 2:" + duration2.Milliseconds);
Output gives:
Execution time 1 : 0
Execution time 2 : 0
Why like this?
You can use the Stopwatch class:
Stopwatch sw = Stopwatch.StartNew();
// rest of the code
sw.Stop();
Console.WriteLine("Total time (ms): {0}", sw.ElapsedMilliseconds);
Use the Stopwatch class to start a timer before the code runs and stop it after the code has ended. Do this for both code snippets and find out which takes more time.
This is not a perfect solution, but it helps.
You might consider using the Scenario class which supports simple begin/end usage, with optional logging via ETW.
From their Wiki:
You can use the Scenario class to add performance instrumentation to an application (either .NET or native C++). The shortest description of a Scenario is "a named stopwatch that can log when you start and stop it".
Just using the stopwatch class should do the trick.
Most likely, your code is executing more quickly than the maximum resolution of your timing device. That's why it's reporting "0" milliseconds as the execution time for both pieces of code: the amount of time that has passed is undetectable. For some people, that might be enough to conclude it probably doesn't matter which way you do it.
However, if it's important to get an accurate comparison of each method's speed, you could try running the code in a tight loop. Essentially, do whatever it is you are doing about 100 or 1,000 or even a million times in a row, however many it takes to pass a sufficient amount of time that will register on your time-keeping device. Obviously, it won't take anywhere near that long to run the routine once or even several times in your final code, but at least it will give you an idea as to the relative speed of each option.
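As a rough sketch of that approach, reusing the returnSqr delegate from the question (the iteration count is arbitrary; pick whatever registers on your machine):

const int iterations = 1000000;
returnSqr myDel = x => x * x;

Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    myDel(i); // invoke the delegate many times so the total becomes measurable
}
sw.Stop();
Console.WriteLine("Lambda: {0} ms total, {1:0.####} ns per call",
    sw.ElapsedMilliseconds,
    sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations);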
I have code running in a loop, and it saves state based on the current time. Sometimes the iterations are just milliseconds apart, but for some reason DateTime.Now seems to always return values at least 10 ms apart, even if the real gap is only 2 or 3 ms. This presents a major problem, since the state I'm saving depends on the time it was saved (e.g. recording something).
My test code that returns each value 10 ms apart:
public static void Main()
{
    var dt1 = DateTime.Now;
    System.Threading.Thread.Sleep(2);
    var dt2 = DateTime.Now;
    // On my machine the values will be at least 10 ms apart
    Console.WriteLine("First: {0}, Second: {1}", dt1.Millisecond, dt2.Millisecond);
}
Is there another solution for getting the current time accurately, down to the millisecond?
Someone suggested looking at the Stopwatch class. Although the Stopwatch class is very accurate, it does not tell me the current time, which is something I need in order to save the state of my program.
Curiously, your code works perfectly fine on my quad core under Win7, generating values exactly 2 ms apart almost every time.
So I've done a more thorough test. Here's my example output for Thread.Sleep(1). The code prints the number of ms between consecutive calls to DateTime.UtcNow in a loop:
Each row contains 100 characters, and thus represents 100ms of time on a "clean run". So this screen covers roughly 2 seconds. The longest preemption was 4ms; moreover, there was a period lasting around 1 second when every iteration took exactly 1 ms. That's almost real-time OS quality!1 :)
So I tried again, with Thread.Sleep(2) this time:
Again, almost perfect results. This time each row is 200ms long, and there's a run almost 3 seconds long where the gap was never anything other than exactly 2ms.
Naturally, the next thing to see is the actual resolution of DateTime.UtcNow on my machine. Here's a run with no sleeping at all; a . is printed if UtcNow didn't change at all:
Finally, while investigating a strange case of timestamps being 15ms apart on the same machine that produced the above results, I've run into the following curious occurrences:
There is a function in the Windows API called timeBeginPeriod, which applications can use to temporarily increase the timer frequency, so this is presumably what happened here. Detailed documentation of the timer resolution is available via the Hardware Dev Center Archive, specifically Timer-Resolution.docx (a Word file).
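If you want to experiment with this yourself, here is a minimal P/Invoke sketch of timeBeginPeriod/timeEndPeriod from winmm.dll; note that raising the timer frequency affects the whole system and should always be undone with a matching timeEndPeriod call:

using System.Runtime.InteropServices;

class TimerResolutionDemo
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uPeriod);

    static void Main()
    {
        timeBeginPeriod(1); // request 1 ms timer resolution (system-wide)
        try
        {
            // ... timing-sensitive code here ...
        }
        finally
        {
            timeEndPeriod(1); // restore the previous resolution
        }
    }
}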
Conclusions:
DateTime.UtcNow can have a much higher resolution than 15ms
Thread.Sleep(1) can sleep for exactly 1ms
On my machine, UtcNow grows by exactly 1 ms at a time (give or take a rounding error; Reflector shows that there's a division in UtcNow).
It is possible for the process to switch on the fly between a low-res mode, where everything is based on 15.6 ms slices, and a high-res mode with 1 ms slices.
Here's the code:
static void Main(string[] args)
{
    Console.BufferWidth = Console.WindowWidth = 100;
    Console.WindowHeight = 20;
    long lastticks = 0;
    while (true)
    {
        long diff = DateTime.UtcNow.Ticks - lastticks;
        if (diff == 0)
            Console.Write(".");
        else
            switch (diff)
            {
                case 10000: case 10001: case 10002: Console.ForegroundColor = ConsoleColor.Red; Console.Write("1"); break;
                case 20000: case 20001: case 20002: Console.ForegroundColor = ConsoleColor.Green; Console.Write("2"); break;
                case 30000: case 30001: case 30002: Console.ForegroundColor = ConsoleColor.Yellow; Console.Write("3"); break;
                default: Console.Write("[{0:0.###}]", diff / 10000.0); break;
            }
        Console.ForegroundColor = ConsoleColor.Gray;
        lastticks += diff;
    }
}
It turns out there exists an undocumented function which can alter the timer resolution. I haven't investigated the details, but I thought I'd post a link here: NtSetTimerResolution.
1Of course I made extra certain that the OS was as idle as possible, and there are four fairly powerful CPU cores at its disposal. If I load all four cores to 100% the picture changes completely, with long preemptions everywhere.
The problem with DateTime when dealing with milliseconds isn't due to the DateTime class at all; rather, it has to do with CPU ticks and thread slices. Essentially, when an operation is paused by the scheduler to allow other threads to execute, it must wait at least one time slice before resuming, which is around 15 ms on modern Windows OSes. Therefore, any attempt to pause for less than this 15 ms precision will lead to unexpected results.
If you take a snapshot of the current time before you do anything, you can just add the Stopwatch's elapsed time to the time you stored, no?
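A minimal sketch of that idea (the PreciseNow helper name is made up for illustration); note that over long runs this can drift from the wall clock, since the Stopwatch and the system clock tick independently:

static readonly DateTime Baseline = DateTime.Now;       // wall-clock snapshot, taken once
static readonly Stopwatch Clock = Stopwatch.StartNew(); // high-resolution offset since the snapshot

static DateTime PreciseNow()
{
    return Baseline + Clock.Elapsed;
}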
You should ask yourself if you really need accurate time, or just close enough time plus an increasing integer.
You can do good things by getting now() just after a wait event such as a mutex, select, poll, WaitFor*, etc, and then adding a serial number to that, perhaps in the nanosecond range or wherever there is room.
You can also use the rdtsc machine instruction (some libraries provide an API wrapper for this; I'm not sure about doing it in C# or Java) to get a cheap timestamp from the CPU and combine that with the time from now(). The problem with rdtsc is that on systems with speed scaling you can never be quite sure what it's going to do. It also wraps around fairly quickly.
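For what it's worth, the closest managed analogue in C# is Stopwatch.GetTimestamp(), which reads the OS high-resolution counter (QueryPerformanceCounter on Windows) without allocating a Stopwatch instance:

long t1 = Stopwatch.GetTimestamp();
// ... work ...
long t2 = Stopwatch.GetTimestamp();
double elapsedMs = (t2 - t1) * 1000.0 / Stopwatch.Frequency; // Frequency is in ticks per second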
All that I used to accomplish this task 100% accurately is a timer control and a label.
The code does not require much explanation; it's fairly simple.
Global Variables:
int timer = 0;
This is the tick event:
private void timeOfDay_Tick(object sender, EventArgs e)
{
    timeOfDay.Enabled = false;
    timer++;
    if (timer <= 1)
    {
        timeOfDay.Interval = 1000;
        timeOfDay.Enabled = true;
        lblTime.Text = "Time: " + DateTime.Now.ToString("h:mm:ss tt");
        timer = 0;
    }
}
Here is the form load:
private void DriverAssignment_Load(object sender, EventArgs e)
{
    timeOfDay.Interval = 1;
    timeOfDay.Enabled = true;
}
Answering the second part of your question regarding a more precise API: the comment from AnotherUser led me to this solution, which in my scenario overcomes the DateTime.Now precision issue:
static FileTime time;

public static DateTime Now()
{
    GetSystemTimePreciseAsFileTime(out time);
    // Combine the two 32-bit halves of the FILETIME into one 64-bit tick count
    var newTime = (ulong)time.dwHighDateTime << 32 | time.dwLowDateTime;
    var newTimeSigned = Convert.ToInt64(newTime);
    return new DateTime(newTimeSigned).AddYears(1600).ToLocalTime();
}

public struct FileTime
{
    public uint dwLowDateTime;
    public uint dwHighDateTime;
}

[DllImport("Kernel32.dll")]
public static extern void GetSystemTimePreciseAsFileTime(out FileTime lpSystemTimeAsFileTime);
In my own benchmarks over 1M iterations, it averages 3 ticks per call versus 2 ticks for DateTime.Now.
The 1600-year offset is needed because a FILETIME counts 100-nanosecond intervals from January 1, 1601, whereas DateTime ticks start at January 1, 0001; adding 1600 years lines the two epochs up.
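As a side note, the framework can do this conversion for you; here is a sketch that reuses the same P/Invoke but lets DateTime.FromFileTime handle the 1601 epoch and the local-time conversion (NowPrecise is a made-up name):

public static DateTime NowPrecise()
{
    GetSystemTimePreciseAsFileTime(out FileTime ft);
    long fileTime = (long)((ulong)ft.dwHighDateTime << 32 | ft.dwLowDateTime);
    return DateTime.FromFileTime(fileTime); // converts from the 1601 epoch to local time
}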
EDIT: This is still an issue on Win10. Anybody interested can run this piece of evidence:
void Main()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine(Now().ToString("yyyy-MM-dd HH:mm:ss.fffffff"));
        Console.WriteLine(DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"));
        Console.WriteLine();
    }
}
// include the code above
You could use DateTime.Now.Ticks; read the article on MSDN:
"A single tick represents one hundred nanoseconds or one ten-millionth of a second."