Why does DateTime.AddDays round to nearest millisecond? - c#

I am dividing two doubles in .NET and using the result to work out an end date from a start date (dtStart is predefined):
var dValue = 1500.0/8400.0;
var dtEnd = dtStart.AddDays(dValue);
After inspecting dtEnd I found that the result was only accurate to the nearest millisecond. After looking it up I found that .AddMilliseconds etc. all round the number and TimeSpan.FromDays does a similar thing. I was wondering if there was a reason why this rounding was done since it seems like the only way to get the correct value here is to use .AddTicks?
For reference, .AddDays calls the following (where MillisPerDay = 86400000):
public DateTime AddDays(double value)
{
    return Add(value, MillisPerDay);
}
which calls
private DateTime Add(double value, int scale)
{
    long millis = (long)(value * scale + (value >= 0 ? 0.5 : -0.5));
    if (millis <= -MaxMillis || millis >= MaxMillis)
        throw new ArgumentOutOfRangeException("value", Environment.GetResourceString("ArgumentOutOfRange_AddValue"));
    return AddTicks(millis * TicksPerMillisecond);
}
}

Edit: After thinking things over, I now realize the first version of my answer was wrong.
Here are the comments in Microsoft's source code:
// Returns the DateTime resulting from adding a fractional number of
// xxxxs to this DateTime. The result is computed by rounding the
// fractional number of xxxxs given by value to the nearest
// millisecond, and adding that interval to this DateTime. The
// value argument is permitted to be negative.
These comments appear on five different AddXxxxs(double value) methods, where Xxxx = Days, Hours, Milliseconds, Minutes and Seconds.
Note that this is only for the methods that accept a floating point value. (And one may question whether or not it is a good idea to involve floating point values in date calculations - but that's a topic for another day.)
Now, as the OP correctly points out, these five methods all call this method:
// Returns the DateTime resulting from adding a fractional number of
// time units to this DateTime.
private DateTime Add(double value, int scale)
{
    long millis = (long)(value * scale + (value >= 0 ? 0.5 : -0.5));
    if (millis <= -MaxMillis || millis >= MaxMillis)
        throw new ArgumentOutOfRangeException("value", Environment.GetResourceString("ArgumentOutOfRange_AddValue"));
    return AddTicks(millis * TicksPerMillisecond);
}
So what is being done is that the value being added to the DateTime is rounded to the nearest whole number of milliseconds before being added. But not the result - only the value being added (or subtracted).
This is actually documented; for example, http://msdn.microsoft.com/en-us/library/system.datetime.adddays%28v=vs.110%29.aspx states: "The value parameter is rounded to the nearest millisecond."
Why it does this I don't know. Maybe the programmers figured that if you're using floating point values, you should be aware that your values are typically not completely accurate anyway. Or maybe they wanted to simulate, to some degree, Java-style times, which are based on milliseconds.
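To see the difference in practice, here is a minimal sketch (the start date is assumed; the fraction is the one from the question) comparing AddDays with the AddTicks workaround the OP mentions:

```csharp
using System;

class Program
{
    static void Main()
    {
        var dtStart = new DateTime(2020, 1, 1);
        double dValue = 1500.0 / 8400.0; // fractional days

        // AddDays rounds the offset to the nearest millisecond first.
        var viaAddDays = dtStart.AddDays(dValue);

        // AddTicks keeps the full tick (100 ns) resolution.
        var viaAddTicks = dtStart.AddTicks((long)(dValue * TimeSpan.TicksPerDay));

        // The AddDays offset is a whole number of milliseconds (a multiple
        // of 10,000 ticks); the AddTicks offset is not.
        Console.WriteLine(viaAddDays.Ticks - dtStart.Ticks);  // 154285710000
        Console.WriteLine(viaAddTicks.Ticks - dtStart.Ticks); // 154285714285
    }
}
```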

Related

Checking for number 3 in Text
error:
The input string was not in the correct format
public Text Timer;

private void FixedUpdate()
{
    if (Convert.ToDouble(Timer.text) == 3)
    {
        Debug.Log("w");
    }
}
We don't know what string content your Timer.text has, but you should rather use double.TryParse, or float.TryParse since everything in the Unity API uses float anyway. Alternatively you can cast the parsed value to (float); since you basically want to compare it to an int value, the precision doesn't really matter.
A second point: you should never directly compare floating point numbers via == (see How should I do floating point comparison?). A float (or double) value that logically should be 3, e.g. (3f + 3f + 3f) / 3f, can, due to floating point imprecision, end up as 2.9999999999 or 3.000000001, in which case a comparison to 3 might fail unexpectedly.
Rather use the Unity built-in Mathf.Approximately, which basically equals using

if (Math.Abs(a - b) <= epsilon)

where epsilon is Mathf.Epsilon:

The smallest value that a float can have different from zero.
So I would recommend doing something like

if (double.TryParse(Timer.text, out var time) && Mathf.Approximately((float)time, 3))
{
    Debug.Log("w");
}
Note, however, that if this is a timer that is continuously increased, you might never exactly hit the value 3, so you might want to use a certain threshold range around it, like

if (double.TryParse(Timer.text, out var time) && Math.Abs(time - 3) <= someThreshold)
{
    Debug.Log("w");
}
or simply use e.g.

if (double.TryParse(Timer.text, out var time) && time >= 3)
{
    Debug.Log("w");
}
if you only want any value bigger or equal to 3.

More elegant way of comparing large numbers divided by a common modulus

I have this comparison of time ticks, but I only want to know whether the ticks are equal up to a certain granularity. For that I've come up with the idea of dividing the ticks by a common modulus, subtracting the remainder, and comparing what's left.
long value1 = DateTime.UtcNow.Ticks;
long value2 = 8884736516532874;
Assert.IsTrue((value1 - value1 % 1000) == (value2 - value2 % 1000));
I am sure there's gotta be a more elegant, better way of doing that.
Comparing with inaccuracies is a pretty common thing, in particular when dealing with floating-point numbers such as float.
In your case you can get much the same effect by calculating the difference of both values and checking whether it is smaller than a pre-defined epsilon:
var epsilon = 1000;
Assert.IsTrue(Math.Abs(value1 - value2) < epsilon);
As per your comment
I know that both values will be within same date, same hours and within 10 minute lag between each other
So you can go even a bit further and compare dates directly:
var difference = date1.Subtract(date2);
Assert.IsTrue(Math.Abs(difference.TotalMinutes) < 10);
I think that condition is equivalent to integer division (note that ^ is XOR in C#, not exponentiation):

value1 / 1000 == value2 / 1000
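Note that the two approaches above are not interchangeable, though. A small sketch (values assumed) showing where the bucket comparison and the absolute-difference comparison disagree:

```csharp
using System;

class Program
{
    static void Main()
    {
        long value1 = 999;
        long value2 = 1001;

        // Truncation ("bucket") comparison: these land in different 1000-tick buckets.
        bool sameBucket = value1 / 1000 == value2 / 1000;

        // Absolute-difference comparison: they are only 2 ticks apart.
        bool withinEpsilon = Math.Abs(value1 - value2) < 1000;

        Console.WriteLine($"{sameBucket} {withinEpsilon}"); // False True
    }
}
```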

Mathematically determine the precision and scale of a decimal value

I have been looking for some way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or they have misleading titles (they are really about SQL Server or some other database, not C#), or no answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale(), and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    // arrange
    var number = 12345.67890M;

    // act
    var scale = number.Scale();
    var precision = number.Precision();

    // assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said, convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is by removing the decimal point (i.e. using the Decimal constructor to reconstruct the number with a scale of 0) and then using the logarithm. (Note that the intermediate cast to double can lose precision for decimals with more than about 15 significant digits, so the digit count can be off in extreme cases.)
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
// Use false for the sign (false = positive), because we don't care about it.
// Use 0 for the last argument instead of bits[3] to drop the decimal point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        // Use false for the sign (false = positive), because we don't care about it.
        // Use 0 for the last argument instead of bits[3] to drop the decimal point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
And here is a fiddle.
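Putting the two snippets together as a self-contained check (test value from the question):

```csharp
using System;

class Program
{
    static void Main()
    {
        // The decimal literal stores its scale, including the trailing zero.
        decimal x = 12345.67890M;
        int[] bits = decimal.GetBits(x);

        int scale = (bits[3] >> 16) & 0x7F;

        // Rebuild the number with scale 0 to count all digits.
        decimal unscaled = new decimal(bits[0], bits[1], bits[2], false, 0); // 1234567890
        int precision = (int)Math.Floor(Math.Log10((double)unscaled)) + 1;

        Console.WriteLine($"{scale} {precision}"); // 5 10
    }
}
```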
First of all, solve the "physical" problem: how are you going to decide which digits are significant? The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are 2 fundamental ways to determine each digit (and thus, their number):
get+interpret the meaningful parts
calculate mathematically
The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatInfo) is actually a reliable way since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it acts as a minimum (the docs are unclear on this), so set it to 0; trailing zeros are still printed if there are any.
The semantics of GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
flags:
bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
(thus (flags>>16)&0xFF is the raw value of this field)
bit 31 - sign (doesn't concern us)
As you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
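As a sketch of decoding those fields directly from the flags word (test value assumed; note the sign bit this time):

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal d = -12345.67890M;
        int[] bits = decimal.GetBits(d);

        // Bits 16..23 of the flags word hold the scale
        // (the number of fractional decimal digits).
        int scale = (bits[3] >> 16) & 0xFF;

        // Bit 31 of the flags word holds the sign.
        bool isNegative = (bits[3] & int.MinValue) != 0;

        Console.WriteLine($"scale={scale} negative={isNegative}"); // scale=5 negative=True
    }
}
```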
Racil's answer gives you the value of the internal scale of the decimal, which is correct, although if the internal representation ever changes, that code will break.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        // Use the invariant culture so the decimal separator is always "."
        // (requires using System.Globalization).
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
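Assuming the extension above is in scope (and a culture whose decimal separator is "."), usage would look like:

```csharp
Console.WriteLine(12345.67890M.GetInfo()); // Scale=5, Length=10
Console.WriteLine(12345M.GetInfo());       // Scale=0, Length=5
```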

Multiplying fractions in safe way c#

I am trying to calculate the total cost of a vehicle repair job in a numerically safe way, avoiding rounding errors. I get the amount of time spent on a job, then multiply it by a constant labour rate to get the cost of labour; however, it is not working out how it should. Here is my example when 20 minutes have been spent on the job.
This clearly works out wrong as a third of £30 is £10, so how do I avoid the rounding error I am getting?
Here is how I get the total time.
TimeSpan totalTime = TimeSpan.Zero;
foreach (DataRow timeEntry in dhJob.DataStore.Tables[jobTimeCollectionName].Rows)
{
    DateTime start = Convert.ToDateTime(timeEntry["jobtimestart"]);
    DateTime end = Convert.ToDateTime(timeEntry["jobtimeend"]);
    totalTime += (end - start);
}

tb_labourtime.Text = Convert.ToString(Math.Round(totalTime.TotalHours, 2));
tb_labourtotal.Text = (Convert.ToDouble(tb_labourtime.Text) * Convert.ToInt32(tb_labourrate.Text)).ToString();
Any help / advice is appreciated.
Firstly, you are converting totalTime to a string representation with only two digits after the decimal point, and then calculating with that rounded value, which is not going to be very accurate.
Secondly, when doing financial calculations you should generally use the decimal type rather than double, which will give you greater accuracy (although it still isn't exact for quantities like one third).
The first thing to do is to use totalTime to calculate the total wages rather than a value round-tripped through a string:
tb_labourtotal.Text = (totalTime.TotalHours * Convert.ToInt32(tb_labourrate.Text)).ToString();
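If you want to keep the monetary arithmetic in decimal, a minimal sketch (rate and time assumed, matching the 20-minute example) of calculating at full precision and rounding only for display:

```csharp
using System;

class Program
{
    static void Main()
    {
        TimeSpan totalTime = TimeSpan.FromMinutes(20);
        decimal labourRate = 30m; // £30 per hour (assumed)

        // Keep the full precision for the calculation...
        decimal total = (decimal)totalTime.TotalHours * labourRate;

        // ...and round only when formatting for display.
        Console.WriteLine(total.ToString("0.00")); // 10.00
    }
}
```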

Getting a Weighted Average Date Value?

If I have 50% weight on 6/3/2011 and 50% weight on 6/1/2011, the weighted average will be 6/2/2011.
Now, I can't seem to figure out how I can do this with uneven weights, since it's not like you can multiply a DateTime by a double, and sum up the results (or can you?).
DateTime dateA = ...;
DateTime dateB = ...;
TimeSpan difference = dateA - dateB;
long units = difference.Ticks;
// Apply your weighted logic to 'units' here, e.g. units = (long)(units * weightA);
DateTime average = dateB + new TimeSpan(units);
Something like the above (you get the idea: you basically need to normalise the difference into a form you can do arithmetic on, i.e. ticks).
You can find the difference between the two dates (start and end) in days, apply the weight to that number of days, and get the final output date as startdate + weightedDays.
You can't multiply a DateTime by a double, but you can place date1 and date2 on a scale (1 to 100) and work out where the weighted value falls in between. The 1-to-100 scale ends up at 50 in your 50/50 scenario.
You then have to work out the number of days in the range, multiply by the weight (as a percentage) to turn that into a number of days, and then add that number of days to the first value.
Since you can turn dates into numbers, this gives some pretty interesting other means of accomplishing this. A TimeSpan is one way of setting this up as a number.
Use the long Ticks values of the DateTime objects.
Can you use the Ticks property of DateTime? Something like:

DateTime firstDate = new DateTime(2011, 6, 5);
DateTime secondDate = new DateTime(2011, 6, 1);
double weight1 = 0.4;
double weight2 = 0.6;

// Since the weights sum to 1, no further division is needed.
// (Note: double has only ~15-16 significant digits, so multiplying
// raw Ticks values can lose sub-millisecond precision.)
var averageTicks = (firstDate.Ticks * weight1) + (secondDate.Ticks * weight2);
DateTime averageDate = new DateTime(Convert.ToInt64(averageTicks));
I think the weighted average formula would be sumproduct(weights * dates) / sum(weights), where sumproduct means the sum of all the factors, each factor being a weight multiplied by a date's Ticks. If the weights sum to 1, only the numerator remains.
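Pulling the ideas above together, here is a sketch of a general weighted average of dates (the helper name and tuple shape are assumed). Working with offsets from a base date keeps the doubles small, avoiding the precision loss of multiplying raw Ticks values:

```csharp
using System;
using System.Linq;

class Program
{
    // Hypothetical helper: weighted average of dates; weights need not sum to 1.
    static DateTime WeightedAverage((DateTime date, double weight)[] items)
    {
        DateTime baseDate = items.Min(i => i.date);
        double totalWeight = items.Sum(i => i.weight);

        // Offsets from baseDate are small, so the double arithmetic stays precise.
        double avgTicks = items.Sum(i => (i.date - baseDate).Ticks * i.weight) / totalWeight;
        return baseDate.AddTicks((long)Math.Round(avgTicks));
    }

    static void Main()
    {
        // The 50/50 example from the question: the average lands on 6/2/2011.
        var result = WeightedAverage(new[]
        {
            (new DateTime(2011, 6, 1), 0.5),
            (new DateTime(2011, 6, 3), 0.5),
        });
        Console.WriteLine(result); // 2011-06-02 00:00:00 (format is culture-dependent)
    }
}
```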
I had the same problem, only with money (cashflows), which means there is a compounding/discounting rate of growth/decay of value, known as the time value of money.
To help you avoid that mistake I composed a spreadsheet, found at:
https://1drv.ms/x/s!AqGuYeJW3VHggc9ARWCxcHIeodd2Pg
You just have to experiment with it! Note that at a 0% interest rate the results are identical to the weighted average described in the other posts, but as the rate increases there is a deviation. You then have to build an XNPV function for your programming language and fuse it with your app.
There are also a number of examples with growth in algebra books; maybe there is an intersection between them and the datetime offsets found in cashflows.
