C# Limit left decimal places

How do you limit the number of places on the left hand side of a decimal?
So 123.45 needs to be 23.45.
I would like the output to be a decimal.

One option is modulo combined with ToString(string format):
var resultString = (number % 100).ToString("#00.00");
Note that this produces a string; if you need the result as a decimal, keep the modulo result and skip the ToString call.

Well, if you need "one hundred twenty-three" to become "twenty-three," one arithmetic way to do that is taking the value mod 100. It works equally well for positive and negative numbers, since the sign is preserved.
Another approach might be to convert the value to a string, then lop off some of the leading digits/characters. (This would avoid applying yet another arithmetic operation to a floating-point value and risking the dreaded "off by one cent" error that gets accountants and other such types so upset.) It requires more debugging on your part, though.
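A minimal sketch of the arithmetic approach, assuming the input is already a decimal (the variable names are just illustrative); the modulo result stays a decimal, so no string conversion is needed unless you want formatted output:

using System;

class TruncateLeftDigitsDemo
{
    static void Main()
    {
        decimal number = 123.45m;

        decimal truncated = number % 100m;  // 23.45, still a decimal
        decimal negative = -123.45m % 100m; // -23.45, sign preserved

        Console.WriteLine(truncated);
        Console.WriteLine(negative);
        Console.WriteLine(truncated.ToString("#00.00")); // "23.45" if a string is needed
    }
}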

Related

C# decimal and double

From what I understand, decimal is used for precision and is recommended for monetary calculations. Double gives better range but less precision, and is a lot faster than decimal.
What if I have both time and rate? I feel like double is suited for time and decimal for rate, but I can't mix the two and run calculations without casting, which is yet another performance bottleneck. What's the best approach here? Just use decimal for both time and rate?
Use double for both. decimal is for currency or other situations where the base-10 representation of the number is important. If you don't care about the base-10 representation of a number, don't use decimal. For things like time or rates of change of physical quantities, the base-10 representation generally doesn't matter, so decimal is not the most appropriate choice.
The important thing to realize is that decimal is still a floating-point type. It still suffers from rounding error and cannot represent certain "simple" numbers (such as 1/3). Its one advantage (and one purpose) is that it can represent decimal numbers with fewer than 29 significant digits exactly. That means numbers like 0.1 or 12345.6789. Basically any decimal you can write down on paper with fewer than 29 digits. If you have a repeating decimal or an irrational number, decimal offers no major benefits.
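A small sketch of both points, assuming ordinary literals: short base-10 values are exact in decimal but not in double, while a repeating value like 1/3 is inexact in both:

using System;

class DecimalPrecisionDemo
{
    static void Main()
    {
        // 0.1 and 0.2 are exact as decimals, but not as doubles.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False

        // decimal is still floating point: 1/3 cannot be represented exactly.
        Console.WriteLine(1m / 3m * 3m == 1m);  // False
    }
}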
The rule of thumb is to use the type that is more suitable to the values you will handle. This means that you should use DateTime or TimeSpan for time, unless you only care about a specific unit, like seconds, days, etc., in which case you can use any integer type. Usually for time you need precision and don't want any error due to rounding, so I wouldn't use any floating point type like float or double.
For anything related to money, of course you don't want any rounding error either, so you should really use decimal here.
Finally, only if some very specific requirement demands absolute speed in a calculation that is done millions of times, and decimal happens not to be fast enough, would I think of using another, faster type. I would first try integer values (multiplying your value by a power of 10 if you have decimals, as sketched below) and only divide by that power of 10 at the end. If that can't be done, only then would I consider a double. Don't do premature optimization if you are not sure it's needed.
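A minimal sketch of that scaled-integer idea, assuming amounts with two decimal places (the scale factor of 100 is illustrative):

using System;

class ScaledIntegerDemo
{
    static void Main()
    {
        // Work in hundredths, sum as integers, and divide by the scale factor only at the end.
        const long Scale = 100;
        long[] amountsInHundredths = { 1050, 333, 12 }; // 10.50, 3.33, 0.12

        long totalInHundredths = 0;
        foreach (long amount in amountsInHundredths)
            totalInHundredths += amount;

        decimal total = (decimal)totalInHundredths / Scale;
        Console.WriteLine(total); // 13.95
    }
}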

Is this a good solution for R#'s complaint about loss of precision?

There was some code in my project that compared two double values to see if their difference exceeded 0, such as:
if (totValue != 1.0)
ReSharper complained about this, suggesting I should use an "EPSILON" constant, and it rewrote the comparison accordingly (when invited to do so). However, it does not define the constant itself or suggest what value it should be. Is this a good solution:
const double EPSILON = double.Epsilon; // see http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
. . .
if (Math.Abs(totValue - 1.0) > EPSILON)
    compValue = Convert.ToString(totValue * Convert.ToDouble(compValue));
?
UPDATE
I changed it to this:
const double EPSILON = 0.001;
...thinking that's probably both large and small enough to work for typical double values (not scientific stuff, just "I have 2.5 of these," etc.)
No, it is not a sensible value for your epsilon. The code you have is no different than a straight equality check.
double.Epsilon is the smallest possible difference that there can be between any two doubles. There is no way for any two doubles to be closer to each other than by being double.Epsilon apart, so the only way for this check to be true is for them to be exactly equal.
As for what epsilon value you should actually use, that simply depends, which is why one isn't automatically generated for you. It all depends on what types of operations you're doing to the data (which affects the possible deviation from the "true value") along with how much precision you actually care about in your application (and of course if your margin of error is larger than the precision you care about, you have a problem). Your epsilon needs to be greater than (or equal to) the possible margin of error of all operations performed on either numeric value, while being less than the precision you actually care about.
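A hedged sketch of the difference this makes (the 0.0001 tolerance is purely illustrative; pick one from your own precision requirements):

using System;

class EpsilonDemo
{
    static void Main()
    {
        double sum = 0.1 + 0.1 + 0.1; // actually 0.30000000000000004

        // double.Epsilon is far too small to absorb rounding error, so this behaves like !=.
        Console.WriteLine(Math.Abs(sum - 0.3) > double.Epsilon); // True

        // A tolerance chosen from the precision you actually need treats the values as equal.
        const double Tolerance = 0.0001;
        Console.WriteLine(Math.Abs(sum - 0.3) > Tolerance);      // False
    }
}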
Yes. But even better is to not use floating point. Use decimal instead.
However, if for some reason you have to stick with double, never compare directly; that is, never rely on something like a - b == 0 (with a and b being double values that are meant to be equal).
Floating point arithmetic is fast, but not precise, and taking that into account, R# is correct.
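For comparison, a small sketch of the decimal suggestion, assuming the values come from exact decimal literals; the direct equality check then does what you expect:

using System;

class DecimalComparisonDemo
{
    static void Main()
    {
        decimal totValue = 0.25m * 4;        // exactly 1.00m
        Console.WriteLine(totValue == 1.0m); // True: no epsilon needed

        double asDouble = 0.1 + 0.1 + 0.1;
        Console.WriteLine(asDouble == 0.3);  // False: the case R# is warning about
    }
}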

Convert.ToDecimal(double x) - System.OverflowException

What is the maximum double value that can be represented/converted to a decimal?
How can this value be derived - example please.
Update
Given a maximum value for a double that can be converted to a decimal, I would expect to be able to round-trip the double to a decimal and then back again. However, given a figure such as (2^52) - 1, as in @Jirka's answer, this does not work. For example:
[Test]
public void round_trip_double_to_decimal()
{
    double maxDecimalAsDouble = (Math.Pow(2, 52) - 1);
    decimal toDecimal = Convert.ToDecimal(maxDecimalAsDouble);
    double toDouble = Convert.ToDouble(toDecimal);
    // Fails.
    Assert.That(toDouble, Is.EqualTo(maxDecimalAsDouble));
}
All integers between -9,007,199,254,740,992 and 9,007,199,254,740,991 can be exactly represented in a double. (Keep reading, though.)
The upper bound is derived as 2^53 - 1. The internal representation of it is something like (0x1.fffffffffffff * 2^52) if you pardon my hexadecimal syntax.
Outside of this range, many integers can be still exactly represented if they are a multiple of a power of two.
The highest integer whatsoever that can be exactly represented is therefore 9,007,199,254,740,991 * (2 ^ 971), i.e. Double.MaxValue, which is even higher than Decimal.MaxValue, but this is a pretty meaningless fact, given that the value does not even bother to change when, for example, you subtract 1 in double arithmetic.
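A quick sketch of that last claim: at this magnitude the gap between adjacent doubles is astronomically larger than 1, so subtracting 1 changes nothing:

using System;

class LargeDoubleDemo
{
    static void Main()
    {
        Console.WriteLine(double.MaxValue - 1.0 == double.MaxValue); // True
    }
}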
Based on the comments and further research, I am adding info on the .NET and Mono implementations of C# that puts most of the conclusions you and I might want to make into perspective.
Math.Pow does not seem to guarantee any particular accuracy and it seems to deliver a bit or two fewer than what a double can represent. This is not too surprising with a floating point function. The Intel floating point hardware does not have an instruction for exponentiation and I expect that the computation involves logarithm and multiplication instructions, where intermediate results lose some precision. One would use BigInteger.Pow if integral accuracy was desired.
However, even (decimal)(double)9007199254740991M results in a round trip violation. This time it is, however, a known bug, a direct violation of Section 6.2.1 of the C# spec. Interestingly I see the same bug even in Mono 2.8. (The referenced source shows that this conversion bug can hit even with much lower values.)
Double literals are less rounded, but still a little: 9007199254740991D prints out as 9007199254740990D. This is an artifact of internal multiplication by 10 when parsing the string literal (before the upper and lower bound converge to the same double value based on the "first zero after the decimal point"). This again violates the C# spec, this time Section 9.4.4.3.
Unlike C, C# has no hexadecimal floating-point literals, so we cannot avoid that multiplication by 10 by any other syntax, except perhaps by going through Decimal or BigInteger, if only these provided accurate conversion operators. I have not tested BigInteger.
The above could almost make you wonder whether C# invents its own unique floating-point format with reduced precision. It does not: Section 11.1.6 references the 64-bit IEC 60559 representation, so the above are indeed bugs.
So, to conclude, you should be able to fit even 9007199254740991M in a double precisely, but it's quite a challenge to get the value in place!
The moral of the story is that the traditional belief that "Arithmetic should be barely more precise than the data and the desired result" is wrong, as this famous article demonstrates (page 36), albeit in the context of a different programming language.
Don't store integers in floating point variables unless you have to.
MSDN Double data type
Decimal vs double
The value of Decimal.MaxValue is positive 79,228,162,514,264,337,593,543,950,335.
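As a practical guard against the OverflowException, a minimal sketch (the TryToDecimal helper is hypothetical, not part of the framework) that converts only when the double fits in a decimal:

using System;

class SafeConvertDemo
{
    // Hypothetical helper: returns null instead of throwing when the double
    // is outside the range a decimal can hold (or is NaN/infinity).
    static decimal? TryToDecimal(double value)
    {
        try
        {
            return Convert.ToDecimal(value);
        }
        catch (OverflowException)
        {
            return null;
        }
    }

    static void Main()
    {
        Console.WriteLine(TryToDecimal(12345.678).HasValue);  // True: well within range
        Console.WriteLine(TryToDecimal(1e29).HasValue);       // False: above Decimal.MaxValue (~7.9e28)
        Console.WriteLine(TryToDecimal(double.NaN).HasValue); // False
    }
}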

new type consists of decimal and double

In my application I have calculations that need the precision of decimal and the range of double.
If I use decimal, the calculations overflow, and if I use double, they lose precision.
What's the solution?
Check if this fits your needs:
http://www.codeproject.com/Articles/88980/Rational-Numbers-NET-4-0-Version-Rational-Computin
In any case, you can use it as a starting point to build your own "high-precision floating point number". Keep in mind that anything you build this way will be much slower than the primitive types.
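If a library is not an option, here is a minimal, illustrative sketch of the same rational-number idea built on System.Numerics.BigInteger (the Rational type and its members are invented for this example; a real implementation would also need subtraction, division, comparisons, sign normalization, and so on):

using System;
using System.Numerics;

// Exact add/multiply on fractions; convert to double or decimal only at the edges.
struct Rational
{
    public readonly BigInteger Numerator;
    public readonly BigInteger Denominator;

    public Rational(BigInteger numerator, BigInteger denominator)
    {
        if (denominator.IsZero) throw new DivideByZeroException();
        BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
        Numerator = numerator / gcd;
        Denominator = denominator / gcd;
    }

    public static Rational operator +(Rational a, Rational b) =>
        new Rational(a.Numerator * b.Denominator + b.Numerator * a.Denominator,
                     a.Denominator * b.Denominator);

    public static Rational operator *(Rational a, Rational b) =>
        new Rational(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    public double ToDouble() => (double)Numerator / (double)Denominator;

    public override string ToString() => Numerator + "/" + Denominator;
}

class RationalDemo
{
    static void Main()
    {
        var oneTenth = new Rational(1, 10);
        var sum = oneTenth + oneTenth + oneTenth; // exactly 3/10
        Console.WriteLine(sum);                   // 3/10
        Console.WriteLine(sum.ToDouble());        // 0.3, rounded only at the final conversion
    }
}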

Is a double really unsuitable for money?

I always say that in C# a variable of type double is not suitable for money. All kinds of weird things could happen. But I can't seem to create an example to demonstrate some of these issues. Can anyone provide such an example?
(Edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)
(Edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only.)
Very, very unsuitable. Use decimal.
double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false
(example from Jon's page here - recommended reading ;-p)
You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.
Here's a concrete example:
using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
Yes, it's unsuitable.
If I remember correctly, double has about 17 significant digits, so normally rounding errors will take place far behind the decimal point. Most financial software uses 4 digits after the decimal point; that leaves 13 digits to work with, so the maximum number you can handle in single operations is still very much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations will make this worse. For example, adding large amounts to small amounts will cause a significant loss of precision.
You need fixed-point datatypes for money operations. Most people don't mind if you lose a cent here and there, but accountants aren't like most people.
edit
According to this site, http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits instead of 17.
@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or hundredths of a cent, as I've seen used), like Boojum mentions, are actually better suited.
Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10 ^ 1, whereas a double would represent it as something like 3602879701896397 / 2 ^ 55, which is only approximately 0.1.
A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
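A tiny sketch of that representation, using decimal.GetBits to expose the integer mantissa and the power-of-ten scale:

using System;

class DecimalBitsDemo
{
    static void Main()
    {
        int[] bits = decimal.GetBits(0.1m);
        int mantissaLow = bits[0];          // 1
        int scale = (bits[3] >> 16) & 0xFF; // 1
        Console.WriteLine(mantissaLow + " / 10^" + scale); // "1 / 10^1"
    }
}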
My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.
IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
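A quick sketch checking that claim: integers up to 2^53 round-trip through a double, while just past it distinct integers start to collapse:

using System;

class IntegerRangeDemo
{
    static void Main()
    {
        long limit = 1L << 53; // 9,007,199,254,740,992

        // Every integer up to 2^53 is exactly representable...
        Console.WriteLine((long)(double)(limit - 1) == limit - 1); // True

        // ...but beyond it the gaps grow to 2, so adjacent integers map to the same double.
        Console.WriteLine((double)(limit + 1) == (double)limit);   // True: precision lost
    }
}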
Using double when you don't know what you are doing is unsuitable.
"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.
But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
Depending on where you live, using 64-bit integers to represent cents or pennies or kopeks or whatever is the smallest unit in your country will usually work just fine. For example, 64-bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32-bit integers are usually unsuitable.
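A small sketch of that approach, keeping amounts as a 64-bit count of cents and making the rounding rule explicit (the 98.125 amount is chosen so the two common midpoint rules actually differ; pick whichever rule your business requires):

using System;

class CentsDemo
{
    static void Main()
    {
        decimal computed = 98.125m; // some intermediate result landing exactly on half a cent

        long bankersCents = (long)Math.Round(computed * 100m, MidpointRounding.ToEven);       // 9812 -> $98.12
        long awayFromZero = (long)Math.Round(computed * 100m, MidpointRounding.AwayFromZero); // 9813 -> $98.13

        Console.WriteLine(bankersCents);
        Console.WriteLine(awayFromZero);
    }
}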
No. A double will always have rounding errors; use decimal if you're on .NET...
Actually floating-point double is perfectly well suited to representing amounts of money as long as you pick a suitable unit.
See http://www.idinews.com/moneyRep.html
So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.
Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.
