I am passing data from a C++ DLL through to a C# application using DllImport.
What I would like to do is time the data transfer. So I would like to get the system time in milliseconds in the DLL function, then do the same again on the C# side, and take the difference between the two to calculate the time taken.
On the C++ side, I am sending a long that I obtain like this:
boost::posix_time::ptime current_date_microseconds = boost::posix_time::microsec_clock::local_time();
long millisecondStamp2 = current_date_microseconds.time_of_day().total_milliseconds();
I send that long through to C# as a variable named timestamp, and then run:
long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
long elapsed = milliseconds - timestamp;
When I print the values they look like this:
63705280140098 //c#
54540098 //c++
63705225600000 // elapsed
Why are the C++ value and the C# value so different?
How can I get equivalent values from the system clock in this way?
Please ignore the comment that claims that .NET DateTime ticks are divided into two parts. That comment is not correct. The DateTime.Ticks property returns a tick count whose unit is one ten-millionth of a second, measured from 0:00:00 UTC on January 1, 0001, in the Gregorian calendar. It is a plain integer value; every bit contributes to the total according to its significance.
Now, as far as the discrepancy in your result goes…
The C++ expression current_date_microseconds.time_of_day().total_milliseconds() is giving you the total milliseconds for the day, i.e. the total number of milliseconds since midnight (based on the value, it appears you executed the code around 3 PM local time).
On the other hand, the .NET expression using DateTime.Now is measuring milliseconds since the start of the epoch, i.e. since Jan 1, 0001.
The two values are not comparable at all. They represent two completely different time periods.
In theory, you could fix this problem by using DateTime.Now.TimeOfDay.TotalMilliseconds on the .NET side instead. This would get you a lot closer to the value you expected.
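For example (a sketch, where timestamp is the value received from the C++ DLL):
// milliseconds since local midnight, comparable to total_milliseconds() on the C++ side
long milliseconds = (long)DateTime.Now.TimeOfDay.TotalMilliseconds;
long elapsed = milliseconds - timestamp;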
However…
It's not clear to me that there's any guarantee that the C++ POSIX API you're using will use exactly the same clock reference as the .NET API. Furthermore, even if it is, there is some overhead in the API itself, along with thread-scheduling perturbations that may introduce error into the calculation.
It seems to me that a much better approach would be for you on the .NET side to use the System.Diagnostics.Stopwatch class to measure the entire time that the call into the C++ DLL takes, and then in the C++ DLL, use your POSIX API to measure the time that the C++ code takes to execute and pass that back to the C# side.
Then the C# side can just subtract the C++ time from its own time, to determine roughly what the total overhead of the call was. (Making sure, of course, to use exactly the same units for each value…e.g. milliseconds.)
Even so, it's important to keep in mind:
If you return the C++ time value in the same call, that in and of itself could affect the total overhead of the call.
Some of the apparent overhead could be thread-scheduling effects. I.e. if your thread gets pre-empted during the call, then part of your measurement will be the time during which the thread was pre-empted.
At least on the .NET side, and probably on the C++ side as well, there are still limitations to the precision of the timing. The Stopwatch class is definitely more precise and preferable over DateTime, but if the overhead is small enough, you may not get useful results (but of course, if it's that small, then it's probably good enough to discover that it's too small to get useful results :) ).
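Putting the Stopwatch suggestion above into code, here is a minimal sketch; the DLL name, the export name, and the out parameter used to return the C++-side elapsed milliseconds are all assumptions for illustration:
[DllImport("MyNative.dll")]                    // hypothetical DLL and export
static extern void TransferData(out long nativeElapsedMs);

static void MeasureCall()
{
    var sw = Stopwatch.StartNew();
    long nativeElapsedMs;
    TransferData(out nativeElapsedMs);         // the C++ side reports its own elapsed milliseconds
    sw.Stop();
    long overheadMs = sw.ElapsedMilliseconds - nativeElapsedMs;   // rough interop/transfer overhead
    Console.WriteLine("Total: " + sw.ElapsedMilliseconds + " ms, native: " + nativeElapsedMs + " ms, overhead: " + overheadMs + " ms");
}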
Related
I currently use a solution for getting a higher resolution timestamp in C# by taking a start time using DateTime.UtcNow and then using a Stopwatch to add ticks to it as time goes by. I came across Stopwatch.GetTimestamp() as a potential alternative or even better solution, but I cannot find reliable information on exactly what this function returns.
Best source of info seems to be this.
GetTimestamp() returns machine-dependent ticks which can be converted into seconds by dividing by the stopwatch frequency. If I do this, I get a value that appears to be a UTC UNIX timestamp which is exactly what I'm after - but I haven't seen anything that states that this is what I should expect from it.
One clue from MSDN states that:
If the Stopwatch class uses a high-resolution performance counter, GetTimestamp returns the current value of that counter. If the Stopwatch class uses the system timer, GetTimestamp returns the current DateTime.Ticks property of the DateTime.Now instance.
Looking then at DateTime.Ticks, we then see:
The value of this property represents the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001 (0:00:00 UTC on January 1, 0001, in the Gregorian calendar), which represents DateTime.MinValue.
I'm therefore not clear how simply dividing some machine-dependent tick count by the frequency can get me a UNIX 1970+ timestamp. Is it possible that if a high-resolution timer is not available on the target platform, I might get a year 0001-based timestamp instead? Or maybe something else entirely, again depending on the available high-resolution timer?
Can you describe your use case? If you're interested in extra precision, I don't see how you could possibly get it by starting out with DateTime.UtcNow and then, separately, calling Stopwatch.Start() -- if you add Stopwatch.Elapsed to DateTime.UtcNow, the value is going to be inaccurate, because you have no way of knowing how long after the DateTime.UtcNow call the stopwatch actually started. If you start the stopwatch first, you have the same problem in reverse.
Generally speaking, in .NET 4.6, there is a ToUnixTimeMilliseconds call on DateTimeOffset that may be helpful (e.g. DateTimeOffset.UtcNow.ToUnixTimeMilliseconds())
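To illustrate the distinction with a sketch: a Stopwatch.GetTimestamp() reading is only meaningful as a difference between two readings, while an absolute Unix timestamp comes from the wall-clock APIs:
// machine-dependent counter: only the difference between two readings means anything
long t0 = Stopwatch.GetTimestamp();
// ... do some work ...
long t1 = Stopwatch.GetTimestamp();
double elapsedSeconds = (double)(t1 - t0) / Stopwatch.Frequency;
// absolute wall-clock Unix timestamp in milliseconds (since 1970-01-01 UTC), .NET 4.6+
long unixMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();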
I am trying to calculate a video framerate in a program. To do this I take
DateTime.Now
at the beginning of a stream, and then again after every frame, whilst also incrementing a frame counter.
Then I calculate the FPS like so:
int fps = (int)(frames / (TimeSpan.FromTicks(CurrentTime.Ticks).Seconds - TimeSpan.FromTicks(StartTime.Ticks).Seconds));
The problem is that I occasionally get a negative number out, meaning the start time must be later than the current time. How can this be the case? Does anyone know enough about these functions to explain?
Seconds gives you the seconds component of the TimeSpan, not the total duration of the TimeSpan converted to seconds. This means that Seconds will never be greater than 59.
Use TotalSeconds instead
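For example, using the DateTime values from the question:
// total elapsed duration in seconds, not just the seconds component of the TimeSpan
double elapsedSeconds = (CurrentTime - StartTime).TotalSeconds;
int fps = (int)(frames / elapsedSeconds);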
You should consider using Stopwatch for such needs; it has much better precision.
The DateTime functions are probably not precise enough for your needs; you may want to look into performance counters instead. I think the Stopwatch class is what you're looking for: System.Diagnostics.Stopwatch. It uses the QueryPerformanceFrequency and QueryPerformanceCounter functions for the timing.
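A sketch of the same frame-rate calculation using Stopwatch (frames is the frame counter from the question):
var sw = Stopwatch.StartNew();   // start when the stream starts
// ... process the stream, incrementing frames after each frame ...
int fps = (int)(frames / sw.Elapsed.TotalSeconds);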
I don't know if the title makes sense, but I am trying to time two different methods and see how many times they execute per second, or say per 10 seconds.
For instance:
DividePolygons1(Polygon[] polys)
DividePolygons2(Polygon[] polys)
DividePolygons1 ran:
1642 times per 1 second
DividePolygons2 ran:
1890 times per 1 second
The System.Diagnostics.Stopwatch class will help you here, but be careful to use the results somehow so that the optimizer doesn't eliminate the logic you're trying to measure.
Beyond that, just run the code you're profiling several million times in a loop (adjust the iteration count to make it take between 1 and 30 seconds), then divide the number of iterations by the time taken to get the throughput in executions per second.
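A minimal sketch of that approach (DividePolygons1 and polys are taken from the question; since its return type isn't shown there, consuming the result is left as a comment):
const int iterations = 1000000;   // adjust so the loop takes between 1 and 30 seconds
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    DividePolygons1(polys);       // if this returns a value, accumulate or otherwise use it
}
sw.Stop();
double callsPerSecond = iterations / sw.Elapsed.TotalSeconds;
Console.WriteLine("DividePolygons1 ran " + callsPerSecond.ToString("N0") + " times per second");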
What I would do:
Start a Stopwatch.
In each function, increment a simple counter variable (long, float, or double, depending on how often you expect it to be called) so it's incremented on each call.
Call the first function.
Stop the Stopwatch and check its TotalSeconds against the counter you've been incrementing.
Repeat for the second function.
Visual Studio 2010 has a profiler which can determine the exact number of method calls per time unit.
I need to format the day time using QueryPerformanceCounter Win32 API.
The format is HH:mm:ss.ffffff, containing hours, minutes, seconds, and microseconds.
I need to use THIS function, because another process (written in C) is using this function and the purpose is using the same function in both places.
Thanks
The System.Diagnostics.Stopwatch class uses QueryPerformanceCounter(), saves you from having to P/Invoke it.
You should not use QueryPerformanceCounter to determine time of day. It can only be used to determine an elapsed interval with a very high resolution as it returns the number of ticks that passed since the computer was last restarted.
As such, at best, you may only determine how many hours, minutes, and seconds have passed since a previous reading of QueryPerformanceCounter which must not have happened too long in the past.
In order to convert from ticks to seconds, you need to determine the frequency (using QueryPerformanceFrequency) of the ticks on the computer where you're running QueryPerformanceCounter, and then divide your reading by that frequency:
// P/Invoke declarations (class level) for the Win32 high-resolution counter APIs
[DllImport("kernel32.dll")]
static extern bool QueryPerformanceCounter(out long lpPerformanceCount);
[DllImport("kernel32.dll")]
static extern bool QueryPerformanceFrequency(out long lpFrequency);

// obtain frequency (counts per second)
long freq;
QueryPerformanceFrequency(out freq);
// then obtain your first reading
long start_count;
long end_count;
QueryPerformanceCounter(out start_count);
// .. do some work
// obtain your second reading
QueryPerformanceCounter(out end_count);
// calculate time elapsed
long milliseconds_elapsed = (long)(((double)(end_count - start_count) / freq) * 1000);
// from here on you can format milliseconds_elapsed any way you need to
An alternative to the above example would be to use the TimeSpan structure available in .NET, which has a constructor that takes ticks. Note that TimeSpan ticks are fixed 100-nanosecond units, whereas QueryPerformanceCounter counts are machine-dependent, so the counter difference has to be converted first:
// then obtain your first reading
long start_count;
long end_count;
QueryPerformanceCounter(out start_count);
// .. do some work
// obtain your second reading
QueryPerformanceCounter(out end_count);
// convert machine-dependent counts to 100-nanosecond TimeSpan ticks
long elapsed_ticks = (long)(((double)(end_count - start_count) / freq) * TimeSpan.TicksPerSecond);
TimeSpan time_elapsed = new TimeSpan(elapsed_ticks);
Console.WriteLine("Time Elapsed: " + time_elapsed.ToString());
You can use either of the following:
1) The System.Diagnostics.Stopwatch class uses QueryPerformanceCounter() internally, saving you from having to P/Invoke it.
2) You can call it directly by importing it from the Win32 DLL, i.e. a [DllImport("kernel32.dll")] declaration plus the name of the function.
Possibly I misunderstand the question, as for me none of the previous answers are relevant at all.
I had the problem (which sent me here): Given a value from QueryPerformanceCounter, because something out of my control specifies timestamps using that function, how can I convert these values to a normal date / time?
I figured out that QueryPerformanceCounter effectively returns the number of seconds since the system booted, multiplied by (and with its resolution determined by) QueryPerformanceFrequency.
Thus, the simplest solution is to take the current date/time, subtract the number of seconds given by QueryPerformanceCounter / QueryPerformanceFrequency (which gives you the boot time), and then add the counter values you want to format as a time of day.
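A sketch of that conversion in C#, reusing the QueryPerformanceCounter/QueryPerformanceFrequency P/Invoke declarations shown earlier (externalCount is a hypothetical counter value received from the other process):
long freq, nowCount;
QueryPerformanceFrequency(out freq);
QueryPerformanceCounter(out nowCount);
// estimate the boot time by subtracting the counter's current offset from "now"
DateTime bootTime = DateTime.Now - TimeSpan.FromSeconds((double)nowCount / freq);
// convert the externally supplied counter value into a wall-clock timestamp
DateTime stamp = bootTime + TimeSpan.FromSeconds((double)externalCount / freq);
Console.WriteLine(stamp.ToString("HH:mm:ss.ffffff"));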
I'm having problems deciding on what is the best way is to handle and store time measurements.
I have an app that has a textbox that allows the users to input time in either hh:mm:ss or mm:ss format.
So I was planning on parsing this string, tokenizing it on the colons and creating TimeSpan (or using TimeSpan.Parse() and just adding a "00:" to the mm:ss case) for my business logic. Ok?
How do I store this in a database, though? What would the field type be? DateTime seems wrong. I don't want a time of 00:54:12 to be stored as 1901-01-01 00:54:12; that seems a bit poor.
TimeSpan has an Int64 Ticks property that you can store instead, and a constructor that takes a Ticks value.
I think the simplest is to just convert user input into a integer number of seconds. So 54:12 == 3252 seconds, so store the 3252 in your database or wherever. Then when you need to display it to the user, you can convert it back again.
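A sketch of that round trip, handling both hh:mm:ss and mm:ss input as described in the question:
// parse user input, padding mm:ss up to hh:mm:ss
string input = "54:12";                                   // example value
if (input.Split(':').Length == 2) input = "00:" + input;
TimeSpan duration = TimeSpan.Parse(input);
// store a plain integer number of seconds
int secondsToStore = (int)duration.TotalSeconds;          // 3252
// later, convert back for display
TimeSpan loaded = TimeSpan.FromSeconds(secondsToStore);
string display = loaded.ToString();                       // "00:54:12"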
For periods less than a day, just use seconds, as others have said.
For longer periods, it depends on your DB engine. In SQL Server prior to version 2008, you want a datetime; it's okay, you can just ignore the default 1/1/1900 date they'll all have. If you are fortunate enough to have SQL Server 2008, there are separate Date and Time data types you can use. The advantage of using a real datetime/time type is being able to use the DATEDIFF function for comparing durations.
Most databases have some sort of time interval type. The answer depends on which database you're talking about. For Oracle, it's just a floating point NUMBER that represents the number of days (including fractional days). You can add/subtract that to/from any DATE type and you get the right answer.
As an integer count of seconds (or milliseconds, as appropriate).
Are you collecting both the start time and the stop time? If so, you could use the "timestamp" data type, if your DBMS supports it; if not, just use a date/time type. Now, you've said you don't want the date part to be stored, but consider the case where the time period spans midnight (you start at 23:55:01 and end at 00:05:14, for example): you can't handle that correctly unless you also keep the date. There are standard built-in functions to return the elapsed time (in seconds) between two date-time values.
Go with integers for seconds or minutes; seconds is probably better. You'll never kick yourself for choosing something with too much precision. Also, for your UI, consider using multiple text inputs so you don't have to worry about the user actually typing the ":" properly. It's also much easier to add other constraints, such as the minute and second values only containing 0-59.
An int type should do it, storing it as seconds and parsing it back and forth.
http://msdn.microsoft.com/en-us/library/ms187745.aspx