If I have 50% weight on 6/3/2011 and 50% weight on 6/1/2011, the weighted average will be 6/2/2011.
Now, I can't seem to figure out how I can do this with uneven weights, since it's not like you can multiply a DateTime by a double, and sum up the results (or can you?).
DateTime dateA = ...;
DateTime dateB = ...;
TimeSpan difference = dateA - dateB;
double units = difference.Ticks;
// Do your weighted logic here on 'units', e.g. units *= weightOfDateA;
DateTime average = dateB + new TimeSpan((long)units);
Something like the above (you get the idea - basically you need to normalise the difference into a numeric form you can work with, i.e. ticks, etc.).
You can find the difference between the two dates (start and end) in terms of days, apply the weight to that difference, and get the final output date as startdate + weightedDays.
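Something like this, for example (assuming the weight applies to the end date and the two weights sum to 1):
DateTime start = new DateTime(2011, 6, 1);
DateTime end = new DateTime(2011, 6, 3);
double endWeight = 0.5; // the start date implicitly gets 1 - endWeight

double differenceDays = (end - start).TotalDays;
DateTime weightedAverage = start.AddDays(differenceDays * endWeight); // 6/2/2011 in the 50/50 case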
You can't multiply a DateTime by a double, but you can put date1 and date2 on a scale (say 1 to 100) and figure out where the weighted value falls between them. In your 50/50 scenario that lands on 50, the midpoint.
You then work out the number of days in the range, multiply it by the weight (as a decimal percentage) to get a number of days, and add that number of days to the first date.
Since you can turn dates into numbers, this opens up some other interesting ways of accomplishing this. A TimeSpan is one way of setting this up as a number.
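For example, a rough sketch using a TimeSpan as that number:
DateTime date1 = new DateTime(2011, 6, 1);
DateTime date2 = new DateTime(2011, 6, 3);
double percentTowardsDate2 = 0.5; // the "50 out of 100" from the 50/50 scenario

TimeSpan range = date2 - date1;
TimeSpan weightedOffset = TimeSpan.FromTicks((long)(range.Ticks * percentTowardsDate2));
DateTime result = date1 + weightedOffset;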
Use the long timestamps of the DateTime Objects.
Can you use the Ticks property of DateTime? Something like:
DateTime firstDate = new DateTime(2011, 6, 5);
DateTime secondDate = new DateTime(2011, 6, 1);
double weight1 = 0.4;
double weight2 = 0.6;
var averageTicks = (firstDate.Ticks * weight1) + (secondDate.Ticks * weight2); // the weights already sum to 1, so no further division is needed
DateTime averageDate = new DateTime(Convert.ToInt64(averageTicks));
I think the weighted average formula would be sumproduct(Weights * Dates) / sum(Weights),
where sumproduct means the sum of all the products of each weight with its date's ticks. If the weights sum to 1, only the numerator remains.
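A sketch of that formula in C#, using ticks as the numeric form of each date (the parallel arrays here are just an assumption for illustration; note that going through double loses a little sub-millisecond precision):
static DateTime WeightedAverage(DateTime[] dates, double[] weights)
{
    double weightedTicks = 0.0;
    double totalWeight = 0.0;
    for (int i = 0; i < dates.Length; i++)
    {
        weightedTicks += dates[i].Ticks * weights[i]; // sumproduct of weights and ticks
        totalWeight += weights[i];
    }
    return new DateTime((long)(weightedTicks / totalWeight));
}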
I had the same problem, only with money (cashflows), which means there is a compounding/discounting rate of growth/decay of value, known as the time value of money.
To help you avoid that mistake I composed a spreadsheet, found at:
https://1drv.ms/x/s!AqGuYeJW3VHggc9ARWCxcHIeodd2Pg
You just have to experiment with it, and understand that at a 0% interest rate the results are identical to the weighted average described in other posts, but as the rate increases there is a deviation. Then you have to build an XNPV function for your programming language and fuse it with your app.
There are also a number of examples with growth in algebra books; maybe there is an intersection between them and the datetime offsets found in cashflows.
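For reference, a minimal XNPV-style sketch in C# (assumptions for illustration: the first date is the valuation date, annual compounding, and an Actual/365 day count; the name Xnpv and the array-based signature are made up here, not a library API):
static double Xnpv(double annualRate, DateTime[] dates, double[] amounts)
{
    double npv = 0.0;
    for (int i = 0; i < dates.Length; i++)
    {
        // Discount each cashflow back to the first date.
        double years = (dates[i] - dates[0]).TotalDays / 365.0;
        npv += amounts[i] / Math.Pow(1.0 + annualRate, years);
    }
    return npv;
}
At a 0% rate every discount factor is 1, which is consistent with the point above that the results then coincide with the plain weighted average.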
I have this comparison of time ticks, but I only want to know that the ticks are equal up to a certain granularity. For that I've come up with the idea of taking each tick count modulo a common granularity, subtracting the remainder, and comparing what's left.
long value1 = DateTime.UtcNow.Ticks;
long value2 = 8884736516532874;
Assert.IsTrue((value1 - value1 % 1000)==(value2 - value2 % 1000));
I am sure there's gotta be a more elegant, better way of doing that.
Comparing with inaccuracies is a pretty common thing, in particular when dealing with floating-point numbers such as float.
In your case you can achieve the same by calculating the difference of both values and checking if it is smaller than a pre-defined epsilon:
var epsilon = 1000;
Assert.IsTrue(Math.Abs(value1 - value2) < epsilon);
As per your comment
I know that both values will be within same date, same hours and within 10 minute lag between each other
So you can go even a bit further and compare dates directly:
var difference = date1.Subtract(date2);
Assert.IsTrue(Math.Abs(difference.TotalMinutes) < 10);
I think that condition is equivalent to
value1 /(10^11) == value2/(10^11)
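The divisor just needs to match whatever granularity you care about; with the 1000-tick granularity from the original snippet (and assuming non-negative tick values, where integer division truncates), the same check can be written as:
const long granularity = 1000;
Assert.IsTrue(value1 / granularity == value2 / granularity);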
I am trying to work out the total cost of a vehicle repair job in a numerically safe way, avoiding rounding errors. I get the amount of time spent on a job, then multiply it by a constant labour rate to get the cost of the labour, but it is not working out as it should. Here is my example when 20 minutes have been spent on the job.
This clearly works out wrong as a third of £30 is £10, so how do I avoid the rounding error I am getting?
Here is how I get the total time.
TimeSpan totalTime = TimeSpan.Zero;
foreach (DataRow timeEntry in dhJob.DataStore.Tables[jobTimeCollectionName].Rows)
{
    DateTime start = Convert.ToDateTime(timeEntry["jobtimestart"]);
    DateTime end = Convert.ToDateTime(timeEntry["jobtimeend"]);
    totalTime += (end - start);
}
tb_labourtime.Text = Convert.ToString(Math.Round(totalTime.TotalHours, 2));
tb_labourtotal.Text = (Convert.ToDouble(tb_labourtime.Text) * Convert.ToInt32(tb_labourrate.Text)).ToString();
Any help / advice is appreciated.
Firstly, you are converting totalTime to a string representation rounded to two decimal places and then feeding that rounded value back into the calculation, which is not going to be very accurate.
Secondly, when doing financial calculations you should generally use the decimal type rather than the double type, which will give you greater accuracy (although it still isn't completely exact).
The first thing to do is to use the totalTime to calculate the total wages rather than using a converted string value:
tb_labourtotal.Text = (totalTime.TotalHours * Convert.ToInt32(tb_labourrate.Text)).ToString();
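Putting both points together, a rough sketch using decimal for the money maths (assuming tb_labourrate holds a whole-number hourly rate, and rounding only when displaying):
decimal rate = Convert.ToDecimal(tb_labourrate.Text);
decimal totalHours = (decimal)totalTime.TotalHours;
decimal total = totalHours * rate;

tb_labourtime.Text = Math.Round(totalHours, 2).ToString();
tb_labourtotal.Text = Math.Round(total, 2).ToString();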
I am dividing two doubles in .NET and using the result to work out an end date from a start date by calling (dtStart is predefined):
var dValue = 1500.0/8400.0;
var dtEnd = dtStart.AddDays(dValue);
After inspecting dtEnd I found that the result was only accurate to the nearest millisecond. After looking it up I found that .AddMilliseconds etc. all round the number and TimeSpan.FromDays does a similar thing. I was wondering if there was a reason why this rounding was done since it seems like the only way to get the correct value here is to use .AddTicks?
For reference .AddDays calls (where MillisPerDay = 86400000)
public DateTime AddDays(double value)
{
    return Add(value, MillisPerDay);
}
which calls
private DateTime Add(double value, int scale)
{
    long millis = (long)(value * scale + (value >= 0? 0.5: -0.5));
    if (millis <= -MaxMillis || millis >= MaxMillis)
        throw new ArgumentOutOfRangeException("value", Environment.GetResourceString("ArgumentOutOfRange_AddValue"));
    return AddTicks(millis * TicksPerMillisecond);
}
Edit: After thinking things over, I now realize the first version of my answer was wrong.
Here are the comments in Microsoft's source code:
// Returns the DateTime resulting from adding a fractional number of
// xxxxs to this DateTime. The result is computed by rounding the
// fractional number of xxxxs given by value to the nearest
// millisecond, and adding that interval to this DateTime. The
// value argument is permitted to be negative.
These comments appear on five different AddXxxxs(double value) methods, where Xxxx = Days, Hours, Milliseconds, Minutes and Seconds.
Note that this is only for the methods that accept a floating point value. (And one may question whether or not it is a good idea to involve floating point values in date calculations - but that's a topic for another day.)
Now, as the OP correctly points out, these five methods all call this method:
// Returns the DateTime resulting from adding a fractional number of
// time units to this DateTime.
private DateTime Add(double value, int scale) {
    long millis = (long)(value * scale + (value >= 0? 0.5: -0.5));
    if (millis <= -MaxMillis || millis >= MaxMillis)
        throw new ArgumentOutOfRangeException("value", Environment.GetResourceString("ArgumentOutOfRange_AddValue"));
    return AddTicks(millis * TicksPerMillisecond);
}
So what is being done is that the value being added to the DateTime is rounded to the nearest number of millisecond before being added. But not the result - only the value being added (or subtracted).
This is actually documented, for example http://msdn.microsoft.com/en-us/library/system.datetime.adddays%28v=vs.110%29.aspx "The value parameter is rounded to the nearest millisecond."
Why it does this I don't know. Maybe the programmers figured that if you're using floating point values you should be aware that your values are typically not completely accurate. Or maybe they want to simulate to some degree Java-style times, which are based on milliseconds.
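If you do need the full fractional precision from the question's example, one workaround (just a sketch, not something the framework does for you) is to convert the fractional days to ticks yourself and call AddTicks:
double dValue = 1500.0 / 8400.0;

// TimeSpan.TicksPerDay is 864,000,000,000, so this keeps tick-level precision
// instead of being rounded to the nearest millisecond by AddDays(double).
DateTime dtEnd = dtStart.AddTicks((long)(dValue * TimeSpan.TicksPerDay));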
I am calculating the average of some values. Everything works fine.
What I want to do is to round the double to the 2nd decimal place.
e.g.
I would have 0.833333333333333333 displayed as
0.83
Is there any way to do this?
Round the double itself like:
Math.Round(0.83333, 2, MidpointRounding.AwayFromZero);
(You should specify MidpointRounding.AwayFromZero to get the expected results here. By default this function uses banker's rounding; read more about banker's rounding at http://www.xbeat.net/vbspeed/i_BankersRounding.htm to see why the default won't give you the results you expect.)
Or just the display value for two decimals:
myDouble.ToString("F");
Or for any decimals determined by the number of #
myDouble.ToString("#.##")
You say displays as - so that would be:
var d = value.ToString("f2");
See Standard numeric format strings
If you actually want to adjust the value down to 2dp then you can do what #middelpat has suggested.
Use
Math.Round(decimal d,int decimals);
as
Math.Round(0.833333333,2);
This will give you the result 0.83.
double d = 0.833333333333333333;
Math.Round(d, 2).ToString();
Try
decimal decimalVar = 0.833333333333333333m;
decimalVar.ToString("#.##");
If you want to see 0.85781... as 0.85, the easiest way is to multiply by 100, cast to int, and divide by 100.0 (dividing by the integer 100 would do integer division and give you 0):
int val = (int)(0.833333333333333333 * 100);
double result = val / 100.0;
It should produce the result you're looking for.
I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers less than 1, but as the numbers grow the tolerance really needs to be changed so we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with that tolerance as well:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From msdn:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to 16 - magnitude caused the tests to fail; setting it to 14 - magnitude obviously caused them to pass, as the tolerance was greater.
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a * b == 0) { // a or b or both are zero
        // relative error is not meaningful here
        return diff < (epsilon * epsilon);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
How about converting the items each to string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
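For example, a sketch of that extra handling (the absolute tolerance used near zero is an arbitrary choice for illustration):
if (y == 0.0)
    Assert.That(x, Is.EqualTo(0.0).Within(1e-12)); // absolute tolerance near zero
else
    Assert.That(x, Is.EqualTo(y).Within(10).Percent);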
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each value by 10 raised to the power of the precision you're seeking, storing the results as longs, and comparing the two longs.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;
// For a precision of 4 decimal places:
long lActual = (long)(actual * 10000);
long lExpected = (long)(expected * 10000);
if (lActual == lExpected)
{
    // The values agree to 4 decimal places; perform the desired actions.
}
This is a quick idea, but how about shifting them down until they are below one? It should be something like num / (10^ceil(log10(num))). I'm not too sure how well it would work, but it's an idea.
1632.4587642911599 / (10^ceil(log10(1632.4587642911599))) = 0.16324587642911599
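A rough C# sketch of that normalisation (only meaningful for positive values, and you would want to divide both numbers by the same scale before comparing them):
static double Normalize(double num)
{
    // Scale the value down below 1 by dividing by 10^ceil(log10(num)).
    return num / Math.Pow(10, Math.Ceiling(Math.Log10(num)));
}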
How about:
const double significantFigures = 10;
Assert.AreEqual(actual / expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by 10 raised to the desired precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
open FsUnit
actual |> should (equalWithin errorMargin) expected