.NET datetime Millisecond precision issue when converting from string to datetime - c#

Hi.
I am trying to convert an incoming datetime value that arrives in our system as a string. It seems that when there are more than 7 fractional-second digits, the datetime parsing in .NET does not like the value and cannot convert/parse it. I am a bit stuck on what to do about this. My only current thought is that there is a limit on the fractional-second precision and that any more precision is not possible, but I want to confirm this is the case rather than assume.
Example:
string candidateDateTimeString = "2017-12-08T15:14:38.123456789Z";
if (!success)
{
    success = DateTime.TryParseExact(candidateDateTimeString,
        "yyyy-MM-dd'T'HH:mm:ss.fffffffff'Z'",
        CultureInfo.InvariantCulture, dateTimeStyles, out dateTime);
}
If I reduce the 'f' values down to just 7, the DateTime parsing works fine. Is there a limit, or am I doing something obviously wrong?

According to the Custom Date and Time Format Strings docs, 7 is the maximum supported number of fractional-second digits.

Internally, all DateTime values are represented as the number of ticks (the number of 100-nanosecond intervals) that have elapsed since 12:00:00 midnight, January 1, 0001.
https://msdn.microsoft.com/en-us/library/system.datetime(v=vs.110).aspx
see also: https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings
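If you only need the value as a DateTime, one workaround is to trim the fractional part down to the seven digits a DateTime can actually store before parsing, since the extra digits cannot be represented anyway. A minimal sketch, assuming the input uses '.' as the fraction separator, has at least seven fractional digits, and ends in 'Z':
using System;
using System.Globalization;

string input = "2017-12-08T15:14:38.123456789Z";

// Keep only the first 7 fractional digits (the DateTime tick resolution).
int dot = input.IndexOf('.');
string trimmed = input.Substring(0, dot + 1 + 7) + "Z";

bool success = DateTime.TryParseExact(trimmed, "yyyy-MM-dd'T'HH:mm:ss.fffffff'Z'",
    CultureInfo.InvariantCulture, DateTimeStyles.None, out DateTime result);
Console.WriteLine(success);            // True
Console.WriteLine(result.Millisecond); // 123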

Precision of date and time values is more complex than you might think. There are different levels of precision involved:
Precision of DateTime
DateTime stores the number of ticks since 01.01.0001 00:00 as a 64-bit value. One tick is 100 nanoseconds. Since this is the maximum precision that can be stored, it makes no sense to format to a precision higher than that; you can just add as many zeros as you need to represent a higher precision. If you need to represent time spans shorter than 100 nanoseconds, you need to use a different type, such as an Int64 with a custom tick size.
Precision of DateTime.Now
When you call DateTime.Now, you get a much lower precision than DateTime can store. The exact value depends on the system clock, but it is usually in the millisecond range.
Precision of Stopwatch
When you measure the time with Stopwatch, depending on your system, you might get the time from a high-performance clock, which is more precise than the clock used for DateTime.Now, though typically still no finer than 100 nanoseconds. On a system without a high-performance clock, the precision is that of the regular system clock.
Summary
Unless the value that you are parsing originates from a high-precision clock (like an atomic clock), parsing it to the full precision of DateTime makes little practical sense. And if it does come from such a source, you need to resort to a different data type to represent the value.
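A quick way to see these levels side by side is to print the relevant constants; a rough sketch (the actual numbers depend on your OS, hardware, and .NET version):
using System;
using System.Diagnostics;

// DateTime/TimeSpan resolution: 1 tick = 100 ns, i.e. 10,000,000 ticks per second.
Console.WriteLine(TimeSpan.TicksPerSecond);   // 10000000

// Resolution of the Stopwatch timer on this machine (ticks of its own clock per second).
Console.WriteLine(Stopwatch.Frequency);
Console.WriteLine(Stopwatch.IsHighResolution);

// DateTime.Now reads the system clock; its actual update granularity depends on the
// OS and .NET version and is typically far coarser than the 100 ns storage resolution.
Console.WriteLine(DateTime.Now.Ticks);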

Related

Using DateTime for a duration of time e.g. 45:00

I'd just like to know if it is possible to use the DateTime type for durations such as 45:00 (45 minutes) or 120:00 (120 minutes). These values also need to be stored in a local SQL Server DB. If it is possible, could anyone hint at how this could be done using DateTime, or if not, let me know a way it could be done using a different type.
Thank you in advance,
Jamie
You should use the TimeSpan structure
TimeSpan interval = new TimeSpan(0, 45, 0);
Console.WriteLine(interval.ToString());
For the database storing part, you could store the Ticks property, because one of the TimeSpan constructors lets you instantiate a new TimeSpan from a ticks value
long ticks = GetTimeSpanValueFromDb();
TimeSpan interval = new TimeSpan(ticks);
I also wish to add that you need a BIGINT T-SQL field to store a .NET long
I store durations in seconds in the database and then convert to HH:MM:SS format when it comes time to display the data.
Why don't you use TimeSpan instead? You can convert them to Ticks (a long), store them in the db and then reverse the process when you need the value.
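A minimal sketch of both storage options mentioned above (ticks in a BIGINT column, or whole seconds in an INT column); the actual data-access calls are left out:
using System;

TimeSpan duration = TimeSpan.FromMinutes(45);

// Option 1: store ticks (needs a BIGINT column, since Ticks is an Int64).
long ticks = duration.Ticks;                      // value to save
TimeSpan fromTicks = TimeSpan.FromTicks(ticks);   // rebuild after loading

// Option 2: store whole seconds (an INT is enough for most durations).
int seconds = (int)duration.TotalSeconds;         // value to save
TimeSpan fromSeconds = TimeSpan.FromSeconds(seconds);

Console.WriteLine(fromTicks.ToString(@"hh\:mm\:ss"));   // 00:45:00
Console.WriteLine(fromSeconds.ToString(@"hh\:mm\:ss")); // 00:45:00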
This is merely a matter of interpretation. SQL Server stores datetime as two four-byte integers. One is a signed int count of days from a reference date, the other is an unsigned time of day such that 32 bits exactly map 24 hours. Without the implicit epoch, this isn't a datetime, it's a duration. Nothing prevents you from interpreting it that way.
Of course, it would be more convenient to pick a unit and simply use a float. This is what Windows does, storing datetime as a number of days from a reference date expressed as an 8-byte float (a double).
Personally I don't like "day" as a unit of time. The rotational period of our planet is not constant, and it is necessary to mess about with leap seconds to maintain the illusion that there are 86400 seconds in every day. A better choice is the SI unit, the second, which is defined in terms of repeatable, invariant physical constants.
Better again would be the picosecond, since we could dump the double and use an int64, with all the attendant arithmetical and comparative performance advantages. Depiction in mixed human scale units (yyyy mmm d HH:mm:ss) is already something of a trial. Mapping functions that currently work with fractional days could trivially be scaled to microseconds, although the compensation for leap seconds and leap days would have to be rewritten.
I say picosecond because this is the finest division that fits in 64 bits while encompassing a useful span of time (50,000 years). Femto fits, but fifty years isn't wide enough. I know that eventually there will be a year 50K problem, but frankly I doubt anyone but archaeologists will care about records from 50,000 years ago.

Why is DateTime based on Ticks rather than Milliseconds?

Why is the minimum resolution of a DateTime based on Ticks (100-nanosecond units) rather than on Milliseconds?
TimeSpan and DateTime use the same ticks, making operations like adding a TimeSpan to a DateTime trivial.
More precision is good. It is mainly useful for TimeSpan, but the above reason transfers that to DateTime.
For example, Stopwatch measures short time intervals, often shorter than a millisecond. It can return a TimeSpan.
In one of my projects I used TimeSpan to address audio samples. 100ns is short enough for that, milliseconds wouldn't be.
Even using millisecond ticks, you need an Int64 to represent DateTime. But then you're wasting most of the range, since years outside 0 to 9999 aren't really useful. So they chose ticks as small as possible while still allowing DateTime to represent the year 9999.
There are about 2^61.5 ticks of 100 ns in that range. Since DateTime needs two bits for timezone-related tagging, 100 ns ticks are the smallest power-of-ten interval that fits in an Int64.
So using longer ticks would decrease precision, without gaining anything. Using shorter ticks wouldn't fit 64 bits. => 100ns is the optimal value given the constraints.
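You can check the arithmetic directly; a small sketch (Math.Log2 is available from .NET Core 3.0 on):
using System;

// Ticks from 01/01/0001 up to the end of year 9999, at 100 ns per tick.
long maxTicks = DateTime.MaxValue.Ticks;   // 3,155,378,975,999,999,999

// About 61.45 bits are needed, which fits in the 62 bits left after the two Kind bits.
Console.WriteLine(Math.Log2(maxTicks));    // ~61.45

// Ticks ten times finer (10 ns) would need roughly 64.8 bits and no longer fit in an Int64.
Console.WriteLine(Math.Log2(maxTicks) + Math.Log2(10)); // ~64.77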
From MSDN:
A single tick represents one hundred nanoseconds or one ten-millionth
of a second. There are 10,000 ticks in a millisecond.
The Ticks property of a DateTime represents the number of ticks that have elapsed since midnight on January 1st in the year 0001. A tick is also the smallest unit for TimeSpan. Since ticks are stored as an Int64, using milliseconds instead of ticks could lose information.
It could also simply be the default CLS implementation.
Just for the information:
1 millisecond = 10 000 ticks
1 second = 10 000 000 ticks
Using the difference (delta) of two tick values, you can get more granular precision (converting the result to milliseconds or seconds later); see the sketch below.
In a C# DateTime context, ticks start from 0 (DateTime.MinValue.Ticks) and go up to DateTime.MaxValue.Ticks
new DateTime(0) // any tick count between 0 and 864*10^9 - 1 produces the same date, 01/01/0001
new DateTime(DateTime.MaxValue.Ticks) // the maximum tick count gives 12/31/9999
The tick count increases by 864 billion ticks per day.
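For example, a delta of two tick values can be converted back to milliseconds or seconds like this; a rough sketch (for real measurements, Stopwatch is usually the better tool, as noted earlier):
using System;

DateTime start = DateTime.UtcNow;
// ... do some work ...
DateTime end = DateTime.UtcNow;

long deltaTicks = end.Ticks - start.Ticks;
double elapsedMs = deltaTicks / (double)TimeSpan.TicksPerMillisecond; // 10,000 ticks per ms
double elapsedSeconds = deltaTicks / (double)TimeSpan.TicksPerSecond; // 10,000,000 ticks per s
Console.WriteLine($"{elapsedMs} ms ({elapsedSeconds} s)");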
Ticks are there for higher time resolution, even though you don't need it most of the time.
The tick is what the system clock works with.

Disparity between date/time calculations in C# versus Delphi

Delphi:
SecondsBetween(StrToDateTime('16/02/2009 11:25:34 p.m.'), StrToDateTime('1/01/2005 12:00:00 a.m.'));
130289133
C#:
TimeSpan span = DateTime.Parse("16/02/2009 11:25:34 p.m.").Subtract(DateTime.Parse("1/01/2005 12:00:00 a.m."));
130289134
It's not consistent either. Some dates will add up the same, e.g.:
TimeSpan span = DateTime.Parse("16/11/2011 11:25:43 p.m.").Subtract(DateTime.Parse("1/01/2005 12:00:00 a.m."));
SecondsBetween(StrToDateTime('16/11/2011 11:25:43 p.m.'), StrToDateTime('1/01/2005 12:00:00 a.m.'));
both give
216905143
The total number of seconds is actually being used to encode data, and I'm trying to port the application to C#, so even one second completely throws everything off.
Can anybody explain the disparity? And is there a way to get C# to match Delphi?
Edit: In response to suggestions that it might be leap-second related: both date ranges contain the same number of leap seconds (2), so you would expect a mismatch for both. But instead we're seeing an inconsistency:
16/02/2009 - 1/01/2005 = Delphi and C# calculate a different total seconds
16/11/2011 - 1/01/2005 = They calculate the same total seconds
The issue seems to be related to QC 59310; the bug was fixed in Delphi XE.
One of the two will likely be dealing with leap seconds; .NET does not, as far as I'm aware.
You don't mention how you convert the C# TimeSpan into a number. The TotalSeconds property is a floating-point value - perhaps it's a rounding problem in the double-to-int conversion?
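For reference, computing the C# difference with explicit DateTime values (avoiding culture-sensitive parsing of "p.m.") reproduces the 130289134 figure, and because the difference is a whole number of seconds, truncating and rounding TotalSeconds agree here; the sketch just shows how to be explicit about that conversion:
using System;

DateTime later = new DateTime(2009, 2, 16, 23, 25, 34);
DateTime earlier = new DateTime(2005, 1, 1, 0, 0, 0);
TimeSpan span = later - earlier;

long truncated = (long)span.TotalSeconds;           // drops any fractional part
long rounded = (long)Math.Round(span.TotalSeconds); // rounds to the nearest whole second

Console.WriteLine(truncated); // 130289134
Console.WriteLine(rounded);   // 130289134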

Why does formatting a DateTime as a string truncate and not round the milliseconds?

When a Double is formatted as a string, rounding is used. E.g.
Console.WriteLine(12345.6.ToString("F0"));
outputs
12346
However, when a DateTime is formatted as a string, truncation is used. E.g.
var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2011-09-14T15:18:42.999", ci);
Console.WriteLine(dateTime.ToString("o", ci));
Console.WriteLine(dateTime.ToString("s", ci));
Console.WriteLine(dateTime.ToString("yyyy-MM-hhThh:mm:ss.f", ci));
outputs
2011-09-14T15:18:42.9990000
2011-09-14T15:18:42
2011-09-14T15:18:42.9
What is the reasoning (if any) behind this behavior?
Rounding to the nearest second can be achieved by adding half a second before formatting as a string:
var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2010-12-31T23:59:59.999", ci);
Console.WriteLine(dateTime.ToString("s", ci));
var roundedDateTime = dateTime.AddMilliseconds(500);
Console.WriteLine(roundedDateTime.ToString("s", ci));
outputs
2010-12-31T23:59:59
2011-01-01T00:00:00
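A slightly more general variant of the same trick rounds to any interval by working in ticks; a sketch using a helper of my own (RoundToNearest, not a framework method), assuming midpoint values should round up:
using System;
using System.Globalization;

static DateTime RoundToNearest(DateTime value, TimeSpan interval)
{
    // Add half the interval, then truncate down to a whole number of intervals.
    long half = interval.Ticks / 2;
    long roundedTicks = (value.Ticks + half) / interval.Ticks * interval.Ticks;
    return new DateTime(roundedTicks, value.Kind);
}

var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2010-12-31T23:59:59.999", ci);
Console.WriteLine(RoundToNearest(dateTime, TimeSpan.FromSeconds(1)).ToString("s", ci));
// 2011-01-01T00:00:00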
This is a bit subjective, but I would say that rounding date and time values, as opposed to truncating them, would result in "more" unexpected behavior.
For example, rounding new DateTime(2011, 1, 1, 23, 59, 59, 999) would result in a completely new day. That seems much weirder than just truncating the value.
Old question, but it has been referred to from a newer one, and the answers discuss reasons for rounding or not (which of course are valid) but leave the question without an answer.
The reason for not rounding is, that ToString just prints the date/time parts you ask for.
Thus, for example, neither will it round to the nearest minute:
Console.WriteLine(dateTime.ToString("yyyy-MM-hhThh:mm", ci));
Output:
2011-09-03T03:18
With no parameter, ToString uses the default date/time format string of your environment.
At the final unit of measurement, if events occur at roughly that frequency, rounding cuts way down on aliasing.
For example, if jitter makes one once-per-second data frame arrive 2 ms into its second and the next frame arrive 990 ms into the same second, truncation stamps them both with the same second. A jitter of only a few milliseconds would thus produce many scattered non-unique key values.
Rounding would put them into separate seconds cleanly, until the jitter got much more severe, say +/- 499 ms.
The purpose of rounding is to stop the resolution from going on forever. When the uncertainty is well below the resolution, it cuts aliasing tremendously.
"Cascading" can only occur at less than the resolution of the boundary. For example, a toggling year number seems shocking, but this can only happen less than a second (or millisecond, etc.) from midnight on New Year's. There is nothing unexpected or especially inaccurate about that.
To truly prevent aliasing (the same time stamped twice), you need to implement anti-aliasing (as is done in graphics), after rounding.

Safely comparing local and universal DateTimes

I just noticed what seems like a ridiculous flaw with DateTime comparison.
DateTime d = DateTime.Now;
DateTime dUtc = d.ToUniversalTime();
d == dUtc; // false
d.Equals(dUtc); //false
DateTime.Compare(d, dUtc) == 0; // false
It appears that all comparison operations on DateTimes fail to do any kind of smart conversion if one is DateTimeKind.Local and the other is DateTimeKind.Utc. Is there a better way to reliably compare DateTimes, aside from always converting both values involved in the comparison to UTC?
When you call .Equals or .Compare, internally the value of InternalTicks is compared, which is the underlying 64-bit value with its top two (Kind) bits masked off. This value is unequal because it has been adjusted by a couple of hours to represent the time as universal time: when you call ToUniversalTime(), it adjusts the time by the offset of the current system's local time zone settings.
You should see it this way: the DateTime object represents a time in an unnamed timezone, not a universal time plus a timezone. The timezone is Local (the timezone of your system), UTC, or Unspecified. You might consider this a shortcoming of the DateTime class, but historically it has been implemented as "the number of ticks since 01/01/0001" and doesn't contain timezone info.
When converting to another timezone, the time is — and should be — adjusted. This is probably why Microsoft chose to use a method as opposed to a property, to emphasize that an action is taken when converting to UTC.
Originally I wrote here that the structs are compared and the flag for System.DateTime.Kind is different. This is not true: it is the number of ticks that differs:
d.Ticks == dUtc.Ticks; // false
d.Ticks.Equals(dUtc.Ticks); // false
To safely compare two dates, you could convert them to the same Kind. If you convert both to universal time before comparing, you'll get the results you're after:
DateTime t1 = DateTime.Now;
DateTime t2 = someOtherTime;
DateTime.Compare(t1.ToUniversalTime(), t2.ToUniversalTime()); // 0 when they represent the same moment
DateTime.Equals(t1.ToUniversalTime(), t2.ToUniversalTime()); // true when they represent the same moment
Converting to UTC time without changing the local time
Instead of converting to UTC (which keeps the moment in time the same but changes the number of ticks), you can also overwrite the DateTimeKind and set it to UTC. This changes the moment the value represents (the same clock time is now read as UTC), but it compares as equal to the original, because the number of ticks is unchanged.
var t1 = DateTime.Now;
var t2 = DateTime.SpecifyKind(t1, DateTimeKind.Utc);
var areEqual = t1 == t2;        // true
var stillEqual = t1.Equals(t2); // true
I guess that DateTime is one of those rare types that can be bitwise unequal, but compare as equal, or can be bitwise equal (the time part) and compare unequal.
Changes in .NET 6
In .NET 6.0, we now have TimeOnly and DateOnly. You can use these to store "just the time of day" or "just the date". Combine the two in a struct and you'll have a date & time struct without the historical nuisances of the original DateTime.
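A minimal sketch of that combination on .NET 6 (DateOnly and TimeOnly live in the System namespace):
using System;

DateTime now = DateTime.Now;
DateOnly date = DateOnly.FromDateTime(now);
TimeOnly time = TimeOnly.FromDateTime(now);

// Combine them back into a DateTime when you need one, choosing the Kind explicitly.
DateTime combined = date.ToDateTime(time, DateTimeKind.Local);
Console.WriteLine(combined);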
Alternatives
Working properly with DateTime, TimeZoneInfo, leap seconds, calendars, shifting timezones, durations, etc. is hard in .NET. I personally prefer NodaTime by Jon Skeet, which gives control back to the programmer in a meaningful and unambiguous way.
Often, when you’re not interested in the timezones per se, but just the offsets, you can get by with DateTimeOffset.
This insightful post by Jon Skeet explains in great depth the troubles a programmer can still run into when trying to sidestep all DateTime issues by just storing everything in UTC.
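As a quick illustration of the DateTimeOffset route (a sketch; the offsets are arbitrary): two values that describe the same instant compare as equal even when their offsets differ, which is exactly what the plain DateTime comparison above fails to do.
using System;

var cet = new DateTimeOffset(2017, 12, 8, 16, 14, 38, TimeSpan.FromHours(1)); // 16:14:38 +01:00
var utc = new DateTimeOffset(2017, 12, 8, 15, 14, 38, TimeSpan.Zero);         // 15:14:38 +00:00

Console.WriteLine(cet == utc);           // true  - same instant in time
Console.WriteLine(cet.EqualsExact(utc)); // false - the offsets themselves differ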
Background info from the source
If you check the DateTime struct in the .NET source, you'll find a note that explains how originally (in .NET 1.0) the DateTime was just the number of ticks, but that later they added the ability to store whether it was Universal or Local time. If you serialize, however, this info is lost.
This is the note in the source:
// This value type represents a date and time. Every DateTime
// object has a private field (Ticks) of type Int64 that stores the
// date and time as the number of 100 nanosecond intervals since
// 12:00 AM January 1, year 1 A.D. in the proleptic Gregorian Calendar.
//
// Starting from V2.0, DateTime also stored some context about its time
// zone in the form of a 3-state value representing Unspecified, Utc or
// Local. This is stored in the two top bits of the 64-bit numeric value
// with the remainder of the bits storing the tick count. This information
// is only used during time zone conversions and is not part of the
// identity of the DateTime. Thus, operations like Compare and Equals
// ignore this state. This is to stay compatible with earlier behavior
// and performance characteristics and to avoid forcing people into dealing
// with the effects of daylight savings. Note, that this has little effect
// on how the DateTime works except in a context where its specific time
// zone is needed, such as during conversions and some parsing and formatting
// cases.
To deal with this, I created my own DateTime object (let's call it SmartDateTime) that contains the DateTime and the TimeZone. I override all operators like == and Compare and convert to UTC before doing the comparison using the original DateTime operators.
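A rough sketch of that idea; the type and member names here are mine (hypothetical), and it only covers the comparison part:
using System;

// Hypothetical wrapper: a DateTime plus the time zone it belongs to.
public readonly struct SmartDateTime
{
    public DateTime Value { get; }
    public TimeZoneInfo Zone { get; }

    public SmartDateTime(DateTime value, TimeZoneInfo zone)
    {
        // Stored as Unspecified so ConvertTimeToUtc accepts any source zone.
        Value = DateTime.SpecifyKind(value, DateTimeKind.Unspecified);
        Zone = zone;
    }

    // Convert to UTC before comparing, so wall-clock times in different zones
    // that describe the same instant compare as equal.
    private DateTime Utc => TimeZoneInfo.ConvertTimeToUtc(Value, Zone);

    public static bool operator ==(SmartDateTime a, SmartDateTime b) => a.Utc == b.Utc;
    public static bool operator !=(SmartDateTime a, SmartDateTime b) => a.Utc != b.Utc;
    public static int Compare(SmartDateTime a, SmartDateTime b) => DateTime.Compare(a.Utc, b.Utc);

    public override bool Equals(object obj) => obj is SmartDateTime other && this == other;
    public override int GetHashCode() => Utc.GetHashCode();
}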
