Is it possible to use TimeSpan.FromHours with values of type decimal? I need it to be as precise as possible. I know I could cast to double, but that would make the result inaccurate. I was thinking about writing an overloaded extension method, but I am not sure how to do that.
Edit
A little background information: the database column uses the decimal type, which maps to decimal in C#. If I used double in C#, I would have to use float in the database, which is known to be bad.
Multiply the decimal value by the number of ticks per hour (TimeSpan.TicksPerHour).
Cast the result to long, and then use TimeSpan.FromTicks.
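For illustration, a minimal sketch of those two steps as a static helper (the class and method names are my own; you can't add a static overload to TimeSpan itself via extension methods):

```csharp
public static class TimeSpanUtil
{
    // Convert a decimal number of hours to a TimeSpan with tick precision.
    public static TimeSpan FromHours(decimal hours)
    {
        // Multiply while still in decimal to keep precision, then truncate to ticks.
        // The cast throws OverflowException if the value doesn't fit in a long,
        // and any sub-tick fraction is silently dropped (see the risks below).
        return TimeSpan.FromTicks((long)(hours * TimeSpan.TicksPerHour));
    }
}
```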
The risks of this are:
You have a decimal representing a number of hours that can't be represented in a TimeSpan
You have a decimal which requires sub-tick precision
In neither case can you actually end up with a TimeSpan which accurately represents your value - so you'd need to look for an alternative type. (If you want to use my Noda Time library, the Duration type has a precision of nanoseconds and a range of just under 46,000 years - both positive and negative. But you'd really need to move to Noda Time everywhere.)
A System.TimeSpan is backed by a long, which still would not hold enough precision for the full range of a decimal. If you need that precision, I'd suggest writing your own TimeSpan.
Related
I am using .NET 4.6 and I am using DateTimeOffset.FromUnixTimeMilliseconds to convert nanoseconds to a DateTimeOffset.
long j = 1580122686878258600;
var X = DateTimeOffset.FromUnixTimeMilliseconds(Convert.ToInt64(j * 0.000001));
I am storing the nanoseconds as a long, but I still have to convert back to Int64 after multiplying by 0.000001 to turn nanoseconds into milliseconds.
Is there a better way?
Yes: if you don't want to convert back to long again, you can divide by 1000000 instead of multiplying by 0.000001.
But if you need to multiply, then no, you must convert the result of multiplying a long by a double back to a long in this case. First, the value 0.000001 is of type double. Second, the compiler implicitly converts the long to double for the multiplication between these two types, so the result is a double as well. The conversion back to long has to be explicit, because it can lose information (the fractional part). The method DateTimeOffset.FromUnixTimeMilliseconds() only accepts a single long parameter (long and Int64 are the same data type; long is just an alias for Int64), so you have to convert your result back.
In the case of dividing by 1000000, dividing one long by another still produces a long; any fractional part is simply truncated.
In both cases, you may want to consider the effect of rounding and precision loss. You get a different value if you use a nano value of 1580122686878999999: multiplying the long by the double ((long)(1580122686878999999 * 0.000001)) gives 1580122686879, while long division (1580122686878999999 / 1000000) gives 1580122686878.
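Restating those two results as code (the numbers are just the ones from the example above):

```csharp
long j = 1580122686878999999;

Console.WriteLine((long)(j * 0.000001)); // 1580122686879 - goes through double, rounds up
Console.WriteLine(j / 1000000);          // 1580122686878 - pure long division, truncates
```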
I have some side comments on the implementation, as well, offering some alternatives:
If you don't like the Convert.ToInt64() notation/call itself, you can use a standard cast instead (i.e. (long)(j * 0.000001)). If you don't like doing this either, you can construct the DateTimeOffset using the constructor that accepts "ticks"; you can get ticks from a TimeSpan, whose FromMilliseconds() method accepts a double. Note, though, that the ticks in that constructor count from 0001-01-01, not from the Unix epoch, so the epoch's tick count has to be added as well. The cast seems to be the most straightforward and concise code.
Further, expanding on the "ticks" constructor above, the best solution might be to divide the value down to "ticks", which are 100-nanosecond units and therefore more precise than milliseconds. You get ticks from nanoseconds by dividing by 100; doing that as integer division (j / 100) avoids the round-trip through double entirely, so none of the original precision is lost (see the sketch below). I only offer this with the thought that you may want the most precision possible from the initial nanoseconds value.
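A small sketch of that tick-based approach, assuming the value is nanoseconds since the Unix epoch (the epoch constant below is the number of .NET ticks at 1970-01-01T00:00:00Z, i.e. new DateTime(1970, 1, 1).Ticks):

```csharp
// .NET ticks (100 ns units) at the Unix epoch; the DateTimeOffset ticks
// constructor counts from 0001-01-01, so this offset has to be added.
const long unixEpochTicks = 621355968000000000L;

long j = 1580122686878258600;              // nanoseconds since the Unix epoch

long ticksSinceEpoch = j / 100;            // integer division: 1 tick = 100 ns, no precision lost
var x = new DateTimeOffset(unixEpochTicks + ticksSinceEpoch, TimeSpan.Zero);

Console.WriteLine(x.ToString("o"));        // 2020-01-27T10:58:06.8782586+00:00
```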
From what I understand decimal is used for precision and is recommended for monetary calculations. Double gives better range, but less precision and is a lot faster than decimal.
What if I have both time and rate? I feel like double is suited for time and decimal for rate, but I can't mix the two in calculations without casting, which is yet another performance bottleneck. What's the best approach here? Just use decimal for both time and rate?
Use double for both. decimal is for currency or other situations where the base-10 representation of the number is important. If you don't care about the base-10 representation of a number, don't use decimal. For things like time or rates of change of physical quantities, the base-10 representation generally doesn't matter, so decimal is not the most appropriate choice.
The important thing to realize is that decimal is still a floating-point type. It still suffers from rounding error and cannot represent certain "simple" numbers (such as 1/3). Its one advantage (and one purpose) is that it can represent decimal numbers with fewer than 29 significant digits exactly. That means numbers like 0.1 or 12345.6789. Basically any decimal you can write down on paper with fewer than 29 digits. If you have a repeating decimal or an irrational number, decimal offers no major benefits.
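A short illustration of that difference (standard decimal and double behaviour, nothing project-specific):

```csharp
Console.WriteLine(0.1m);                // 0.1 exactly - a short base-10 value
Console.WriteLine(1m / 3m);             // 0.3333333333333333333333333333 - rounded, decimal can't hold 1/3
Console.WriteLine((1m / 3m) * 3m);      // 0.9999999999999999999999999999, not 1

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: double stores the nearest binary fractions
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores these base-10 values exactly
```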
The rule of thumb is to use the type that is more suitable to the values you will handle. This means that you should use DateTime or TimeSpan for time, unless you only care about a specific unit, like seconds, days, etc., in which case you can use any integer type. Usually for time you need precision and don't want any error due to rounding, so I wouldn't use any floating point type like float or double.
For anything related to money, of course you don't want any rounding error either, so you should really use decimal here.
Finally, only if some very specific requirement calls for absolute speed in a calculation that runs millions of times, and decimal turns out not to be fast enough, would I consider another, faster type. I would first try integer values (multiplying by a power of 10 if you have decimal places) and only divide by that power of 10 at the end. If that can't be done, only then would I think of using double. Don't optimize prematurely if you are not sure it's needed.
We are using these double values to represent bill amounts. I read that it is better to use the decimal datatype rather than double to minimize rounding errors, but it's a very big project and changing all datatypes to decimal is a herculean task.
So we tried Math.Round with both kinds of midpoint rounding, but nothing works; there is always some kind of error.
Is there any way to round to 2 decimal places accurately?
EDIT:
Sorry for not providing examples. The problem: when the values (there are 24 double values in total, originally with up to 15 decimal places) are added before rounding, the sum comes to 18167.04, which is the desired result. But when they are rounded to 2 decimal places first (using Math.Round, with or without MidpointRounding), the sum is 18167.07 (off by 0.03).
The decimal datatype is apt for monetary calculations, but since it's a huge project, changing the datatype is not feasible for now.
No rounding method works.
Is the problem really with the datatype here or because of rounding?
Will the same rounding method work if the decimal datatype is used?
Floating point accuracy is not defined by decimal points, it is defined by significant digits. 123456789123456789.0 is no more or less accurate than 0.123456789123456789.
There is a problem that frequently occurs when dealing with financial values in programming:
Float values do not translate well to decimal fractions.
Many developers tend to think of float values as decimal fractions, because that is how they are usually represented when converting to and from strings. This is not the case: float values store their fractional part as a binary fraction, as described here.
This makes float values (and calculations with them) slightly askew from their decimal representations.
One way out of this problem is (as you stated) to use the decimal data type, which was constructed to do calculations that translate directly to decimal fractions (as financial calculations do).
Another would be to round all results of floating point calculations, before displaying or storing them. For financial calculations one should use:
Math.Round([your value here], 2, MidpointRounding.AwayFromZero);
I would advise opting for the former whenever possible. It spares many a headache once calculations are done. With the rounding approach, one has to take rounding errors into account at many points. It may be a big task to convert an existing project to decimal, but it will most probably pay off in the long run (if it is possible in the first place)...
Regarding your edit:
This problem arises because the rounding errors accumulate when you round before you sum. You could instead round the sum after adding up the un-rounded values. Whether you use double or decimal is irrelevant in this regard, as it applies to both.
Whether this is allowed, depends on the type of financial calculation you do.
If the summands appear in printed form, e.g. on an invoice, then it is most probably not allowed to do so.
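To make the effect concrete, a small sketch with made-up line amounts (not your actual 24 values): rounding each value first accumulates the per-item error, while rounding the final sum does not.

```csharp
using System;
using System.Linq;

double[] amounts = { 10.004, 10.004, 10.004, 10.004, 10.004 };

double roundAfterSum = Math.Round(amounts.Sum(), 2, MidpointRounding.AwayFromZero);
double roundThenSum  = amounts.Sum(a => Math.Round(a, 2, MidpointRounding.AwayFromZero));

Console.WriteLine(roundAfterSum); // 50.02
Console.WriteLine(roundThenSum);  // 50 - five dropped 0.004s add up to a 0.02 difference
```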
We are storing financial data in a SQL Server database using the decimal data type and we need 6-8 digits of precision in the decimal. When we get this value back through our data access layer into our C# server, it is coming back as the decimal data type.
Due to some design constraints that are beyond my control, this needs to be converted. Converting to a string isn't a problem. Converting to a double is, because as the MS documentation says, "[converting from decimal to double] can produce round-off errors because a double-precision floating-point number has fewer significant digits than a decimal."
As the double (or string) we can round to 2 decimal places after any calculations are done, so what is the "right" way to do the decimal conversion to ensure that we don't lose any precision before the rounding?
The conversion won't produce errors within the first 8 digits. double has 15-16 digits of precision - less than the 28-29 of decimal, but enough for your purposes by the sounds of it.
You should definitely put in place some sort of plan to avoid using double in the future, however - it's an unsuitable datatype for financial calculations.
If you round to 2dp, IMO the "right" way would be to store an integer that is the scaled value - i.e. for 12.34 you store the integer 1234. No more double rounding woe.
If you must use double, this still works; all integers are guaranteed to be stored exactly in double - so still use the same trick.
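A minimal sketch of that scaled-integer idea: keep amounts as whole hundredths (cents) and only convert at the edges.

```csharp
long priceCents = 1234;                            // represents 12.34
long shippingCents = 99;                           // represents 0.99

long totalCents = priceCents + shippingCents;      // exact integer arithmetic, no rounding
decimal totalForDisplay = totalCents / 100m;       // convert only when formatting/reporting

Console.WriteLine(totalForDisplay);                // 13.33
```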
Is it appropriate to use the double type to store percentage values (for example a discount percentage in a shop application) or would it be better to use the decimal type?
Floating-point types (float and double) are particularly ill-suited to financial applications.
Financial calculations are almost always decimal, while floating-point types are almost always binary. Many common values that are easy to represent in decimal are impossible to represent in binary. For example, 0.2d = 0.00110011...b. See http://en.wikipedia.org/wiki/Binary_numeral_system#Fractions_in_binary for a good discussion.
It's also worth talking about how you're representing prices in your system. decimal is a good choice, but floating point is not, for reasons listed above. Because you believe in Object Oriented Programming, you're going to wrap that decimal in a new Money type, right? A nice treatment of money comes in Kent Beck's Test Driven Development by Example.
Perhaps you will consider representing percentages as an integer, and then dividing by 100 every time you use it. However, you are setting yourself up for bugs (oops, I forgot to divide) and future inflexibility (customer wants 1/10ths of a percent, so go fix every /100 to be /1000. Oops, missed one - bug.)
That leaves you with two good options, depending on your needs. One is decimal. It's great for whole percentages like 10%, but not for things like "1/3rd off today only!", as 1/3 can't be represented exactly in decimal. You'd like it if buying 3 of something at 1/3rd off comes out as a whole number, right?
Another is to use a Fraction type, which stores an integer numerator and denominator. This allows you to represent exact values for all rational numbers. Either implement your own Fraction type or pick one up from a library (search the internet).
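If you roll your own, here is a minimal sketch of what such a Fraction type might look like (a hypothetical type, not from any particular library):

```csharp
public readonly struct Fraction
{
    public long Numerator { get; }
    public long Denominator { get; }

    public Fraction(long numerator, long denominator)
    {
        if (denominator == 0)
            throw new ArgumentException("Denominator cannot be zero.", nameof(denominator));

        // Normalize so equal values compare equal, e.g. 2/6 becomes 1/3.
        long gcd = Gcd(Math.Abs(numerator), Math.Abs(denominator));
        Numerator = numerator / gcd;
        Denominator = denominator / gcd;
    }

    public static Fraction operator *(Fraction a, Fraction b) =>
        new Fraction(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    // Only convert to decimal at the edge, e.g. for display.
    public decimal ToDecimal() => (decimal)Numerator / Denominator;

    private static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);
}
```

With that, a 1/3 discount applied to a quantity of 3 (new Fraction(1, 3) * new Fraction(3, 1)) comes out as exactly 1/1, with no rounding along the way.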
You can probably get away with saving the discount percentage as an integer. Just store 10 or 25 or whatever, and when you need to work out the price of something:
newprice = price * discount / 100
decimal does come at a performance cost, but it's usually worth it for financial uses. The reason it has low performance (the worst of all numeric types) is that it doesn't map directly to a hardware type. That means it requires more of the work to be done in software.
Note that it is not only an issue of size. decimal is an integer scaled by a power of 10, while the float and double types are scaled by powers of 2. That means terminating decimal values like 0.1 can be exactly represented using decimal, while they are non-terminating (and thus rounded) for float and double.
I try to avoid floating-point whenever possible. Nothing irritates me more than having .25 not equal to .25, something that happens when you start dealing with them.
A regular float should be fine, unless you need accuracy to like, five decimal places.