Why is the minimum resolution of a DateTime based on Ticks (100-nanosecond units) rather than on Milliseconds?
TimeSpan and DateTime use the same ticks, making operations like adding a TimeSpan to a DateTime trivial.
More precision is good. It's mainly useful for TimeSpan, but the reason above transfers that to DateTime.
For example, Stopwatch measures short time intervals, often shorter than a millisecond, and it can return a TimeSpan.
In one of my projects I used TimeSpan to address audio samples. 100 ns is short enough for that; milliseconds wouldn't be.
Even with millisecond ticks you'd need an Int64 to represent DateTime. But then you'd be wasting most of the range, since years outside 0 to 9999 aren't really useful. So they chose ticks as small as possible while still allowing DateTime to represent the year 9999.
It takes about 2^61.5 ticks of 100 ns to cover the years 0001 to 9999. Since DateTime also needs two bits for timezone-related (DateTimeKind) tagging, 100 ns is the smallest power-of-ten tick size that fits an Int64.
So using longer ticks would decrease precision without gaining anything, and using shorter ticks wouldn't fit in 64 bits. 100 ns is therefore the optimal value given the constraints.
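A quick sketch of that arithmetic, using values taken from the framework itself:

```csharp
using System;

class TickRangeDemo
{
    static void Main()
    {
        // Ticks needed to cover the years 0001..9999 at 100 ns per tick:
        long maxTicks = DateTime.MaxValue.Ticks;
        Console.WriteLine(maxTicks);              // 3155378975999999999, ~3.16 * 10^18

        // That is roughly 2^61.5, leaving a couple of spare bits in an Int64
        // (DateTime uses two of them for DateTimeKind tagging):
        Console.WriteLine(Math.Log(maxTicks, 2)); // ~61.45

        // Ten-times-finer ticks (10 ns) would overflow a signed 64-bit value:
        Console.WriteLine(maxTicks > long.MaxValue / 10); // True
    }
}
```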
From MSDN:
A single tick represents one hundred nanoseconds or one ten-millionth
of a second. There are 10,000 ticks in a millisecond.
The Ticks property represents the number of ticks elapsed since midnight on January 1st of the year 0001 (in local time). A tick is also the smallest unit for TimeSpan. Since ticks are stored as an Int64, using milliseconds instead of ticks could lose information.
It could also simply be a default CLS implementation choice.
Just for the information:
1 millisecond = 10 000 ticks
1 second = 10 000 000 ticks
Using the difference (delta) of two tick values, you can get more granular precision (converting them later to milliseconds or seconds).
In a C# DateTime context, ticks run from 0 (DateTime.MinValue.Ticks) up to DateTime.MaxValue.Ticks:
new DateTime(0) // any tick count between 0 and (864*10^9 - 1) produces the date 01/01/0001
new DateTime(DateTime.MaxValue.Ticks) // the maximum tick value produces 12/31/9999
The system time advances by 864 billion ticks per day.
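A quick check of these numbers (TimeSpan.TicksPerDay is the framework's own constant):

```csharp
using System;

class TicksPerDayDemo
{
    static void Main()
    {
        // 24 h * 60 min * 60 s * 10,000,000 ticks/s = 864 billion ticks per day
        long ticksPerDay = 24L * 60 * 60 * 10_000_000;
        Console.WriteLine(ticksPerDay);                         // 864000000000
        Console.WriteLine(ticksPerDay == TimeSpan.TicksPerDay); // True

        // Any tick count below one day's worth still falls on 01/01/0001:
        Console.WriteLine(new DateTime(ticksPerDay - 1).Year);  // 1
        Console.WriteLine(new DateTime(ticksPerDay).Day);       // 2 (i.e. 01/02/0001)
    }
}
```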
Because it gives higher time resolution, even though you don't need it most of the time. The tick is what the system clock works with.
I have a DateTime represented as a long (8 bytes) that came from DateTime.ToBinary(); let's call it dateTimeBin. Is there an optimal way of dropping the time information (I only care about the date) so I can compare it to a start of day? Let's say we have this sample value as a start of day:
DateTime startOfDay = new DateTime(2020,3,4,0,0,0);
long startOfDayBin = startOfDay.ToBinary();
I obviously know I can always convert to a DateTime object and then get the date component. However, this operation is going to happen billions of times, and every little performance tweak helps.
Is there an efficient way of extracting the Date info of dateTimeBin without converting it to DateTime? Or any arithmetic operation on the long that will return the date only?
Is there a way to match startOfDay (or startOfDayBin) and dateTimeBin if they have the same date components?
Is there a way to check if (dateTimeBin >= startOfDayBin)? I don't think the plain long comparison is valid.
N.B. all the dates are UTC
Since you are working only with UTC dates, it makes sense to use DateTime.Ticks instead of DateTime.ToBinary, because the former has a relatively clear meaning: the number of ticks since the epoch, just like Unix time. The only differences are that the Unix time interval is a second rather than a tick (where a tick is 1/10,000,000 of a second), and the epoch is midnight on January 1st of the year 0001 rather than 1970. ToBinary, on the other hand, only promises that you can restore the original DateTime value back, and that's it.
With ticks it's easy to extract the time and the date. To extract the time, take the remainder of dividing the ticks by the number of ticks in a full day:
long binTicks = myDateTime.Ticks;
long ticksInDay = 24L * 60 * 60 * 10_000_000;
long time = binTicks % ticksInDay;
You can then convert that to a TimeSpan:
var ts = TimeSpan.FromTicks(time);
for convenience, or use it as is. The same goes for extracting only the date: just subtract the time:
long date = binTicks - (binTicks % ticksInDay);
A regular comparison (dateTimeBin >= startOfDayBin) is also valid for tick values.
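Putting those pieces together, a minimal self-contained sketch (the sample date/time values here are hypothetical):

```csharp
using System;

class DateFromTicksDemo
{
    static void Main()
    {
        DateTime myDateTime = new DateTime(2020, 3, 4, 13, 45, 30, DateTimeKind.Utc);
        long binTicks = myDateTime.Ticks;
        long ticksInDay = TimeSpan.TicksPerDay; // same as 24L * 60 * 60 * 10_000_000

        // Time-of-day component:
        long time = binTicks % ticksInDay;
        Console.WriteLine(TimeSpan.FromTicks(time));      // 13:45:30

        // Date component (ticks at midnight of the same day):
        long date = binTicks - time;
        Console.WriteLine(date == myDateTime.Date.Ticks); // True

        // Plain long comparison works directly on tick values:
        long startOfDayBin = new DateTime(2020, 3, 4, 0, 0, 0, DateTimeKind.Utc).Ticks;
        Console.WriteLine(binTicks >= startOfDayBin);     // True
    }
}
```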
I have this simple bit of code:
//LastUpdate = DateTime.Now
//These two lines occur every frame
TimeSpan timeSpan = DateTime.Now - data.LastUpdate;
Debug.Log(timeSpan.Milliseconds);
But the result of this shows the milliseconds not really increasing; it fluctuates between 100 and 900 ms. It should be ever increasing, since the time passed is increasing.
I have checked that LastUpdate does not change, so that isn't the cause.
I guess I am misunderstanding how TimeSpan works. I am trying to get the milliseconds that have passed between LastUpdate and the current frame.
Am I using it wrong? I don't understand the issue.
From TimeSpan docs:
.Milliseconds (emphasis mine):
Gets the milliseconds component of the time interval represented by the current TimeSpan structure.
You want to use .TotalMilliseconds:
Gets the value of the current TimeSpan structure expressed in whole and fractional milliseconds.
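To illustrate the difference between the two properties:

```csharp
using System;

class MillisecondsDemo
{
    static void Main()
    {
        TimeSpan timeSpan = TimeSpan.FromSeconds(2.5);

        // .Milliseconds is only the milliseconds *component* (0-999):
        Console.WriteLine(timeSpan.Milliseconds);      // 500

        // .TotalMilliseconds is the whole interval expressed in milliseconds:
        Console.WriteLine(timeSpan.TotalMilliseconds); // 2500
    }
}
```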
I am trying to convert an incoming datetime value that comes to our system as a string. It seems that when the fractional-second precision is higher than 7 digits, .NET cannot convert/parse the value. I am a bit stuck on what to do about this. My only current thought is that there is a limit on the fractional-second precision and that any more precision is not possible, but I want to confirm this is the case rather than assume.
Example:
string candidateDateTimeString = "2017-12-08T15:14:38.123456789Z";
if (!success)
{
    success = DateTime.TryParseExact(candidateDateTimeString,
        "yyyy-MM-dd'T'HH:mm:ss.fffffffff'Z'",
        CultureInfo.InvariantCulture, dateTimeStyles, out dateTime);
}
If I reduce the 'f' count down to just 7, then the parsing works fine. Is there a limit, or am I doing something obviously wrong?
According to the Custom Date and Time Format Strings docs, seven is the maximum supported number of fractional-second digits.
Internally, all DateTime values are represented as the number of ticks (the number of 100-nanosecond intervals) that have elapsed since 12:00:00 midnight, January 1, 0001.
https://msdn.microsoft.com/en-us/library/system.datetime(v=vs.110).aspx
see also: https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings
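One practical workaround, if you can preprocess the string, is to trim the fraction to seven digits before parsing, since DateTime can't store more precision anyway. A sketch (the regex-based trimming is my own suggestion, not something from the docs):

```csharp
using System;
using System.Globalization;
using System.Text.RegularExpressions;

class TruncateFractionDemo
{
    static void Main()
    {
        string input = "2017-12-08T15:14:38.123456789Z";

        // Keep at most 7 fractional digits, dropping the rest:
        string trimmed = Regex.Replace(input, @"(\.\d{7})\d+", "$1");
        Console.WriteLine(trimmed); // 2017-12-08T15:14:38.1234567Z

        DateTime dt = DateTime.ParseExact(
            trimmed,
            "yyyy-MM-dd'T'HH:mm:ss.fffffff'Z'",
            CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal);
        Console.WriteLine(dt.ToString("o"));
    }
}
```

The extra digits are lost, but they could not have been represented in a DateTime in the first place.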
Precision of date and time values is more complex than you might think. There are different levels of precision involved:
Precision of DateTime
DateTime stores the number of ticks since 01.01.0001 00:00 as a 64-bit value. One tick is 100 nanoseconds. Since this is the maximum precision that can be stored, it makes no sense to format to a precision higher than that; you can just add as many zeros as you need to represent a higher precision. If you need to represent time spans shorter than 100 nanoseconds, you need to use a different type, such as an Int64 with a custom tick size.
Precision of DateTime.Now
When you call DateTime.Now, you get a much lower precision than DateTime can store. The exact value depends on the system clock, but it is usually in the milliseconds range.
Precision of Stopwatch
When you measure time with Stopwatch, depending on your system, you might get the time from a high-performance clock, which is more precise than the clock used for DateTime.Now, but still no finer than 100 nanoseconds. On a system without a high-performance clock, the precision is that of the regular system clock.
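You can inspect the resolution of the Stopwatch clock on your own system; a small sketch:

```csharp
using System;
using System.Diagnostics;

class ClockResolutionDemo
{
    static void Main()
    {
        // Stopwatch exposes the frequency of the underlying timer.
        // On many modern Windows systems this is 10 MHz, i.e. one
        // increment every 100 ns; other systems may differ.
        Console.WriteLine(Stopwatch.Frequency);
        Console.WriteLine(Stopwatch.IsHighResolution);

        // Duration of one Stopwatch increment, in nanoseconds:
        double nsPerIncrement = 1_000_000_000.0 / Stopwatch.Frequency;
        Console.WriteLine($"{nsPerIncrement} ns per increment");
    }
}
```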
Summary
Unless the value that you are parsing originates from a high-precision clock (like an atomic clock), parsing it to the full precision of DateTime makes little practical sense. And if it does come from such a source, you need to resort to a different data type to represent the value.
I'm trying to convert a ulong to DateTime. Since the DateTime constructor accepts ticks as a long parameter, here's how I do it:
ulong time = 12354;
new DateTime((long)time).ToString("HH:mm:ss");
The result of this is 00:00:00.
I don't understand the result. Am I doing something wrong?
P.S. i.Time is not 0; I checked multiple times.
Citing the documentation:
Initializes a new instance of the DateTime structure to a specified number of ticks.
ticks
Type: System.Int64
A date and time expressed in the number of 100-nanosecond intervals that have elapsed since January 1, 0001 at 00:00:00.000 in the Gregorian calendar.
This is 100 nanoseconds, which is a super small time unit. So unless your number is larger than 10,000,000, you don't even get a single second:
Console.WriteLine(new DateTime((long)10000000).ToString());
// 01.01.0001 00:00:01
So you should really think about what your "time left" (i.Time) value is supposed to mean. Is it really a time in units of 100 nanoseconds? Very likely not; it's probably seconds or something completely different.
By the way, if the number you have does not actually represent a moment in time, you should not use DateTime; you should use TimeSpan instead. Its long constructor has the same tick behavior, but you can use one of the handy static factory methods to create a time span with the correct unit:
var ts = TimeSpan.FromSeconds(1000);
Console.WriteLine(ts.ToString());
// 00:16:40
Because a tick is 100 nanoseconds, 12354 ticks is only 1,235,400 nanoseconds, which is only 0.0012354 seconds. So your DateTime is 0.0012354 seconds after midnight on 1 Jan in the year one.
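A short check of this arithmetic:

```csharp
using System;

class SmallTickDemo
{
    static void Main()
    {
        // 12354 ticks * 100 ns/tick = 1,235,400 ns = 0.0012354 s
        var dt = new DateTime(12354L);
        Console.WriteLine(dt.TimeOfDay.TotalSeconds);       // 0.0012354

        // Whole seconds are all zero, so "HH:mm:ss" shows nothing:
        Console.WriteLine(dt.ToString("HH:mm:ss"));         // 00:00:00

        // A format that shows the sub-second part reveals the value:
        Console.WriteLine(dt.ToString("HH:mm:ss.fffffff")); // 00:00:00.0012354
    }
}
```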
I'm getting timestamp numbers like these via WSDL from a C# application:
6.3527482515083E+17
6.3527482515047E+17
6.352748251638E+17
6.3527482514463E+17
All of them are times in the past (this year, probably).
I think this is a datetime counted from year zero. I tried counting up the seconds from zero and got something around 63537810544, but this is not exact because it misses leap years.
Is there any PHP function to get a UNIX timestamp from this, or to convert it to a datetime string?
I get the values via WSDL, so I can't reformat them at the source.
They are 100-nanosecond ticks (1/10,000,000 of a second) since 12:00:00 midnight, January 1, 0001. This information can be found in this MSDN article:
A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond, or 10 million ticks in a second.
The value of this property represents the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001, which represents DateTime.MinValue. It does not include the number of ticks that are attributable to leap seconds.
The magic constant that represents the number of 100 ns ticks between 12:00:00 midnight January 1, 0001 and 12:00:00 midnight January 1, 1970 (the Unix epoch) is 621355968000000000. So if you subtract that from your numbers, you get 100 ns ticks since the beginning of the Unix epoch. Divide that by 10,000,000 and you get seconds, and that is usable in PHP. The sample code below demonstrates ($unixepoch is in seconds):
<?php
$msdatetime = 6.3527482515083E+17;
$unixepoch = ($msdatetime - 621355968000000000)/10000000 ;
echo date("Y-m-d H:i:s", $unixepoch);
?>
Output:
2014-02-08 11:55:15
I found this constant listed on this helpful site dealing with the problem of getting unix epoch time from other formats.