We have a Scala/Java back end that is generating the equivalent of DateTime.MaxValue in .NET.
I am sent the following date as a string "9999-12-31T23:59:59.999999999Z".
If I call DateTime.TryParse("9999-12-31T23:59:59.999999999Z", out var dateTime), it throws an ArgumentOutOfRangeException ("The added or subtracted value results in an un-representable DateTime. Parameter name: value").
I didn't expect this, since I was calling TryParse. Perhaps returning false would have been more intuitive?
If I reduce the year, I can see .NET is rolling the date over to the following day, which obviously won't work for a max date/time!
DateTime.TryParse("9998-12-31T23:59:59.999999999Z", out var dateTime);
dateTime.ToString().Dump();
Outputs: 01/01/9999 00:00:00
If I reduce the precision of the fractional seconds by two digits, then it works:
DateTime.TryParse("9998-12-31T23:59:59.9999999Z", out var dateTime);
dateTime.ToString().Dump();
Outputs: 31/12/9998 23:59:59
This really looks like a bug in .NET. Is this expected behaviour?
Passing Min/Max/Infinity etc. values between different platforms is a bad idea. Each platform may have its own representation of special values (not only dates). Therefore the only robust option is to pass epoch values (milliseconds are preferable in most cases), since they are known to both parties.
If the above is impossible for some reason then you have two ugly options:
Replace special values in your Scala/Java output with your own "encoding", for example "MaxValue", or take your pick as you see fit. On the .NET side you will detect special values and translate them accordingly.
Insert some simple preprocessing into your .NET code. For example, check
"9999-12-31T23:59:59.999999999".StartsWith("9999")
for max values.
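A minimal sketch of that preprocessing (the helper name `ParseOrMax` and the "9999" prefix check are assumptions; adapt them to your payload):

```csharp
using System;
using System.Globalization;

static class MaxDateGuard
{
    // Treat any year-9999 timestamp from the Scala/Java side as "no upper bound".
    public static DateTime ParseOrMax(string s)
    {
        if (s.StartsWith("9999", StringComparison.Ordinal))
            return DateTime.MaxValue;

        // RoundtripKind keeps the UTC value instead of shifting to local time.
        return DateTime.Parse(s, CultureInfo.InvariantCulture,
                              DateTimeStyles.RoundtripKind);
    }
}
```

With this in place, ParseOrMax("9999-12-31T23:59:59.999999999Z") maps to DateTime.MaxValue instead of throwing.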
You have too many nines in your string. The exception you are observing is a precision issue.
Try doing the following:
DateTime.MaxValue.ToString("o")
it will result in "9999-12-31T23:59:59.9999999" rather than "9999-12-31T23:59:59.999999999Z", i.e. two fewer nines at the end.
Use "9999-12-31T23:59:59.9999999Z" as an input and it will parse successfully into DateTime.MaxValue.
PS: TryParse will also convert this value to your local timezone, which I assume is not what you would anticipate. Use the extended overload instead:
DateTime.TryParse("9999-12-31T23:59:59.9999999Z", CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind, out var dateTime);
Adding DateTimeStyles.AdjustToUniversal as an argument fixes it for .NET Framework,
e.g. for Windows PowerShell:
$d = [DateTime]::Parse("9999-12-31T23:59:59.9999999Z", $null, 16)
Get-Date $d
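Putting the answer together as a small C# self-check (a sketch; note that DateTime equality compares ticks only, so the Utc Kind of the parsed value does not prevent equality with DateTime.MaxValue):

```csharp
using System;
using System.Globalization;

bool ok = DateTime.TryParse(
    "9999-12-31T23:59:59.9999999Z",   // seven fractional digits, not nine
    CultureInfo.InvariantCulture,
    DateTimeStyles.RoundtripKind,     // keep UTC; don't shift to local time
    out DateTime parsed);

Console.WriteLine(ok);                    // True
Console.WriteLine(parsed == DateTime.MaxValue); // True: same tick count
```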
Related
This is a follow-up question to this question:
Standard conformant way of converting std::time_t to System::DateTime?
I need to convert a C++11 time_t to a .NET DateTime via a string. The program needs to take the value of the current time_t, create a string which is going to be written to a file, and then another program written in C# is going to read that value and convert it to DateTime.
The problem is that I don't have access to the C# code so I need to make the needed string in the c++ side. Currently, the C# side knows how to convert strings in this format:
2018-05-13T10:03:18.4195735+03:00
I'm not sure if this is a regular conventional string that represents a DateTime and how exactly to create it.
Edit: In addition to the time_t value I also have the milliseconds value as a chrono::seconds::rep, so you can assume I have both.
This is the ISO 8601 format, an international standard for representing dates and times.
In your case, you have <date>T<time><zone>
<date> = 2018-05-13 (YYYY-MM-DD)
<time> = 10:03:18.4195735 (hh:mm:ss.sssssss)
<zone> = +03:00
You can see more information about this convention here: ISO 8601
Probably they're using something to parse it to DateTime.
I don't know much about C++, but it looks like you can use GetTimeZoneInformation to get the timezone (Windows only). The time and date I think you already have, though they're asking for more precision than milliseconds. It looks like you can typically get 1/1,000,000 of a second (one microsecond) in C++ with chrono, which would be 6 decimal places. Do they really need it so precise? If not, you can just add a 0 in the 7th decimal place.
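For reference, the string layout described above matches .NET's round-trip ("o") format, so the C# side can parse it exactly like this (a sketch; note that ParseExact with "o" requires exactly seven fractional digits, which is why padding a microsecond value with a trailing zero works):

```csharp
using System;
using System.Globalization;

// Parse the string the C++ side emits using the round-trip pattern
// "yyyy-MM-ddTHH:mm:ss.fffffffzzz".
var dto = DateTimeOffset.ParseExact(
    "2018-05-13T10:03:18.4195735+03:00",
    "o",
    CultureInfo.InvariantCulture);

Console.WriteLine(dto.Offset);   // 03:00:00
```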
Is there a universal string-to-date conversion method available? I have a situation where I get a datetime value as a string, but the value could be in any of the valid datetime formats.
How do I parse it to to DateTime object in C#?
Edit
I have a possible list of formats but it can be quite large. I have read about the DateTime.TryParseExact(string value, string[] formats, provider, style, out result) overload where I can pass multiple formats, and one of the possible formats would be matched. How much performance impact would this approach have? –
Quick answer: universal: No.
Because there are written date formats that are ambiguous. E.g. to me today is "08/04/14", but in a historical context to the west of the Atlantic that could be the 4th of August in an earlier century.1
If you have a subset of formats to process then DateTime.TryParseExact has an overload that takes an array of formats to try in turn.
Any method that you do not pass a date format to will use the current locale settings to start the parsing: which is likely to lead to invalid parses because you don't know the input format.
The best solution is to use the ISO format (yyyy-MM-dd); the next best is to get the precise rules from whoever is specifying the system.
EDIT: Additional. While the best and next best may not be possible, the various DateTime Parse, TryParse, and -Exact methods have overloads that accept an IFormatProvider (which CultureInfo implements) that would allow you to be explicit about the source of the data.
In terms of performance (from updated question): in an earlier version of the framework I looked at DateTime.TryParseExact in some detail, and I seem to recall it simply tried each format in turn finishing as soon as it got a successful parse. Thus if the most common format is first in the array its performance will be little different to the single format overload. But (1) if you need this functionality then you need it, whatever the overhead; (2) it seems unlikely your application's performance will be much affected by this (compared to, say, the cost of reading the data into and writing it out of the application).
1 Strictly speaking this is not true, I've worked too often with the, IMHO, horrible US format to know to write today as "2014-04-08".
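The array overload mentioned above might be sketched like this (the format list is an example; order it by expected frequency, since the formats are tried in turn):

```csharp
using System;
using System.Globalization;

string[] formats =
{
    "yyyy-MM-dd",    // ISO first: unambiguous, and likely most common
    "dd/MM/yyyy",
    "MM/dd/yyyy"
};

bool ok = DateTime.TryParseExact(
    "2014-04-08",
    formats,
    CultureInfo.InvariantCulture,
    DateTimeStyles.None,
    out DateTime when);
```

Note the ambiguity warning still applies: an input like "08/04/2014" will match whichever slash format appears first in the array.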
I'm working on a program to import data from an IBM iSeries server into an MSSQL 2008 R2 database. Unfortunately, some developer long ago decided to store dates as a decimal type, effectively breaking the CYYMMDD format being used to store the dates.
For example, in that format, August 1st, 1995 would be stored as: 0950801. However, what's actually being stored in the database is 95081, which obviously throws an exception if I try to convert it to System.DateTime.
If it were a simple matter of missing a leading 0, I could easily add that to the string before trying to convert it. However, there are several (thousands, really) of dates that are only 3 or 4 digits, which I really don't know what to make of. For example, there's a date stored as 1128. I don't know what to make of that at all. If I just tack on 3 leading 0's to that and convert it, it produces an obviously incorrect date.
So, does anyone know a reliable way to parse these dates? Either directly through SQL select statements, or by doing some manipulation in C#? Or am I just to assume 3- and 4-digit dates were never entered correctly in the first place, and simply discard them?
I would suggest looking at the programs (and particularly any change comments, assuming they exist) that insert/update the table (query DB2 for this). The change comments will hopefully tell you if the date format was changed (e.g. for Y2K) and why.
Also look at any programs that read the DB; there may be special code to handle the date, or code to determine the date format.
The 95081 could also be an ordinal date (YYDDD), where DDD is the day of the year. See Ordinal or Julian date. These dates were popular at one stage.
I would guess the DB field was originally YYMMDD with no century. The format was probably changed to CYYMMDD for Y2K. Dates like 1128 (and 221) are probably YYMMDD dates that were created before the Y2K changes were implemented (or were missed in the original Y2K implementation and changed later).
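If the conversion is done on the C# side instead of in SQL, the guesses above can be sketched like this (the pivot year 40 and the century handling are assumptions; validate them against your data):

```csharp
using System;
using System.Globalization;

static class CymdDate
{
    // Sketch: interpret a decimal CYYMMDD (or legacy YYMMDD) value.
    // C=0 means 1900s, C=1 means 2000s; bare YYMMDD values use a
    // two-digit-year pivot (yy >= 40 -> 1900s), which is an assumption.
    public static DateTime FromDecimal(long value)
    {
        long cymd = value > 999_999
            ? 19_000_000 + value          // CYYMMDD: 0yymmdd -> 19yymmdd, 1yymmdd -> 20yymmdd
            : value / 10_000 >= 40
                ? 19_000_000 + value      // yy >= 40: assume 1900s
                : 20_000_000 + value;     // yy <  40: assume 2000s

        return DateTime.ParseExact(cymd.ToString("D8"),
                                   "yyyyMMdd",
                                   CultureInfo.InvariantCulture);
    }
}
```

For example, the stored value 0950801 (which the decimal column surfaces as 950801) comes back as 1995-08-01. Genuinely truncated 3- and 4-digit values still need a human decision, as discussed above.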
After a lot of trial and error, I think I've found the solution.
SELECT
(CASE WHEN INT(SUBSTR(DIGITS(DTPSTD), 1,2)) > MOD(YEAR( CURRENT DATE),100)
THEN DATE(CONCAT(CONCAT(CONCAT(SUBSTR(DIGITS(DTPSTD), 3,2), '/'), CONCAT(SUBSTR(DIGITS(DTPSTD), 5,2), '/')), CONCAT('19', SUBSTR(DIGITS(DTPSTD), 1,2))))
ELSE DATE(CONCAT(CONCAT(CONCAT(SUBSTR(DIGITS(DTPSTD), 3,2), '/'), CONCAT(SUBSTR(DIGITS(DTPSTD), 5,2), '/')), CONCAT('20', SUBSTR(DIGITS(DTPSTD), 1,2))))
END) AS TransactionDate
FROM TABLE_NAME
WHERE CUSTOMER_ID = 1
DTPSTD is "Date Posted"
As far as I can tell, this works for any date in the 1900s or 2000s, but wouldn't work for dates before 1-Jan-1900. In my case that's fine, since I don't have any dates older than 1920 or so stored.
Decimal CYYMMDD was a standard IBM format, where C was zero for 1900's and 1 for 2000's. This dates back to the S/38 (around 1982) or perhaps earlier. But I don't recall seeing them use it prior to the S/38, which was the predecessor to the AS/400 and iSeries.
I suggest creating a user-defined function in DB2 to convert your decimal dates into ISO date values. DB2 for i will cache results of DETERMINISTIC functions, so the function doesn't have to be recomputed every time it sees a date value it has processed before.
Update
Here is an example I adapted to convert a packed decimal(8,0) value in ccyymmdd or yymmdd format into a DB2 Date:
CREATE OR REPLACE FUNCTION
Cvt_Dec8cymd_to_Date ( dtin dec(8,0) )
returns date
LANGUAGE SQL
CONTAINS SQL
DETERMINISTIC -- caches results
NO EXTERNAL ACTION
RETURNS NULL ON NULL INPUT
NOT FENCED
SET OPTION DBGVIEW = *SOURCE
prc: BEGIN NOT ATOMIC -- don't rollback on error
DECLARE ans date;
DECLARE cymd dec(8,0);
-- add declarations for conditions and handlers here
SET ans = null;
CASE
WHEN dtin > 999999 THEN -- more than 6 digits given
set cymd = dtin;
WHEN dtin < 400000 THEN -- yr < 40 means 2000's
SET cymd = 20000000 + dtin;
ELSE -- yr >= 40 means 1900's
SET cymd = 19000000 + dtin;
END CASE;
--convert to date
SET ans = date( insert(insert(digits(cymd),7,0,'-'),5,0,'-') );
RETURN ans;
END prc
;
It is simple logic, without error handling for invalid values.
Someone else may have a better example, or might improve this one.
I have searched around but have yet to find a satisfying answer for my issue:
Overview: I am working on a small app that writes several DateTime fields to a single table using WCF. We are trying to eliminate null fields in all of our tables. Up until now, whenever we convert a datetime field selected from a table, we first check whether it is null before we display or otherwise use it. If the value is NULL, we substitute DateTime.MinValue for the value. Since we are now removing the nullable aspect of all fields, we need to insert a common value representing null; since DateTime.MinValue is substituted everywhere in the code, it seems like a viable value to put into the field as a null substitute.
The problem: Inserting DateTime.MinValue causes the generic "problem executing this request" error.
Solution? : As has been documented elsewhere, DateTime.MinValue has an unspecified DateTimeKind, so... we add the ToUniversalTime call, as in:
DateTime nullDate = DateTime.MinValue.ToUniversalTime(); // 1/1/0001 6:00:00 AM
This doesn't work, perhaps because of some "acceptable range" setting within Date conversions on WCF?
Noted in the comment is the resulting value of "1/1/0001 6:00:00 AM". The odd 6:00 AM and the year 1 aside, that seems fine. (Default epochs like 1/1/1970 12:00 AM are still fairly standard, but I digress...)
Yes, I know that DateTime is not C#, it's .NET. I'm also aware that WCF has some form of "min" and "max" times allowed within its conversion constraints. I don't know how to set those (nor do I know how to set the VerboseMessages bit; I'm fairly new to WCF.)
The suggestion to manually create a new field value that converts nicely into the table is viable, and it works:
DateTime nullDate = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
(BTW, if we use the year 1 instead of 1970, for example, the insert will also fail.)
Even if we were to use the above nullDate as a workable substitute, the problem is that everywhere in the code we are still evaluating against DateTime.MinValue as representing a null date, and those two dates aren't quite the same.
The only viable solution that I can think of is to override or otherwise create an extended class from DateTime to create a common NullDate and modify all the code accordingly.
Does anyone see a good alternative? Does anyone know a solution to inserting System.DateTime.MinValue into a table via WCF, such as altering the acceptable boundaries of a good/bad date?
I feel like I'm going to have to bite the bullet and change all references to MinValue... I'm trying to avoid that because it doesn't follow any sort of standard logical thought in evaluating "default" values.
Suggestions anyone?
It depends on the database you are using to store the data. SQL Server, for example, has a minimum datetime of 1/1/1753; see MSDN. I am not sure about Oracle.
If you have to use a magic date, you can use that, or something else specific (I use 10 December 1815 in honor of Lady Ada Lovelace, the first programmer). Generally speaking, if you have a situation where you have no known value, the data store should represent the value as null. In practice, that's not always viable. What you could do is refactor the nullable columns into a subtable, and only include a child record when you in fact have something to record.
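One way to standardize on such a magic date is a sentinel that fits in SQL Server's datetime range. A sketch, using SqlDateTime.MinValue (1753-01-01) since that is the floor of the datetime type; the helper names here are made up:

```csharp
using System;
using System.Data.SqlTypes;

static class NullDate
{
    // Sentinel that fits in a SQL Server datetime column (1753-01-01),
    // unlike DateTime.MinValue (year 1), which the insert rejects.
    public static readonly DateTime Value = SqlDateTime.MinValue.Value;

    // Accept the old in-memory sentinel too, so existing checks keep working.
    public static bool IsNull(DateTime d) =>
        d == Value || d == DateTime.MinValue;
}
```

Centralizing the check in one helper like this also avoids scattering two different "null" dates across the code, which is the mismatch the question describes.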
My C# unit test has the following statement:
Assert.AreEqual(logoutTime, log.First().Timestamp);
Why does it fail with the following message:
Assert.AreEqual failed. Expected:<4/28/2010 2:30:37 PM>. Actual:<4/28/2010 2:30:37 PM>.
Are they not the same?
Update:
Use this if you only care about second precision:
Assert.AreEqual(logoutTime.ToString(), log.First().Timestamp.ToString());
Have you verified that the number of ticks/milliseconds are equal?
If you call DateTime.Now twice back to back, the results will appear to be the same down to the minute and probably even down to the second, but they will often vary by ticks. If you want to check equality only to the minute, compare each DateTime only to that degree. For information on rounding DateTimes, see here
A note about resolution:
The Now property is frequently used to measure performance. However, because of its low resolution, it is not suitable for use as a benchmarking tool. A better alternative is to use the Stopwatch class.
Try something like Assert.AreEqual(logoutTime.Ticks, log.First().Timestamp.Ticks)
The Assert fail method is probably calling ToString() on the DateTime which returns a truncated, human-readable form of the date without the milliseconds component. This is why it appears they are equal when, in fact, the DateTime object has a precision of a 100-nanosecond unit (known as a Tick). That means it is highly unlikely two DateTime objects will have the exact same value. To compare you probably want to truncate the value, perhaps by formatting the date to the fidelity you require.
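One way to do the truncation the answer suggests is to drop the sub-second ticks before comparing (a small helper sketch; the names are made up):

```csharp
using System;

static class DateTimeTruncate
{
    // Drop sub-second ticks so two timestamps that differ only in
    // milliseconds/ticks compare equal at second precision.
    public static DateTime ToSeconds(DateTime d) =>
        new DateTime(d.Ticks - d.Ticks % TimeSpan.TicksPerSecond, d.Kind);
}
```

Then the assertion becomes Assert.AreEqual(DateTimeTruncate.ToSeconds(logoutTime), DateTimeTruncate.ToSeconds(log.First().Timestamp)), which is more explicit about the intended precision than comparing ToString() output.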
Using entity framework, if you fetch from the database using .AsNoTracking() the DateTime property will be rounded ever so slightly, whereas it won't necessarily be rounded without .AsNoTracking() if the original value is still in memory. Thus for integration tests involving a round-trip to the database, I guess it's best to use .ToString() because the database will reduce the precision slightly.
Are you sure that logoutTime and log.First().Timestamp are both typed as DateTime?
If so, they might also have different values for the more specific time information (e.g., milliseconds).
Assuming that logoutTime and log.First().Timestamp are both of type DateTime, you should try using this instead:
Assert.AreEqual(logoutTime.Ticks, log.First().Timestamp.Ticks);
While working on a unit test, I found the steps below very useful for comparing a date with a mock date.
Mock the date field as below:
mockDate = new DateTime(2020, 10, 10)
Call service method.
Assertion can be done like this:
Assert.AreEqual("10/10/2020 12:00:00 AM", _service.startDate.ToString());
Notes on the assertion:
We have to provide the date as: 10/10/2020 12:00:00 AM
Then apply ToString() to the date item from the service; this converts the date/time into a string value for comparison.
If we just have to assert against today's date:
Assert.AreEqual(DateTime.Today, _service.startDate);
I suppose Assert.AreEqual<T> uses Object.Equals() to determine equality of the objects rather than their values.
Probably this statement is comparing two different objects and is therefore returning false.