We have a class responsible for converting internal units to display units. So if our display units are m and internal units are mm, it divides the internal units by a conversionFactor of 1000. The user can add entities into the system at varying x, y, z co-ordinates. We have an odd occurrence where a user is inputting units at 1000 mm, so the display shows 1 m. The input is consistently 1000 mm, but every now and again the division of 1000/1000 seems to be throwing up 0.9999999 m instead of 1 m. So in our grid we have 1 m, 1 m, 1 m, 1 m, 0.9999 m, 1 m, 1 m, etc. Sometimes the 0.9999 m never appears, sometimes it appears straight away, and sometimes it occurs only after 20 to 100 inputs. We are investigating whether something odd is happening on the input side, but I wondered if anyone else has come across something like this?
I should say we are converting it to a string to display.
If the two numbers you're dividing are floating-point values (i.e. double, float, decimal) then you may be experiencing a rounding error. Try changing them to non-floating types if possible and see if you can replicate the problem.
I'm guessing it's a display thing... what happens when you format the string to, say, 9 decimal places?
var str = string.Format("{0:0.000000000}", funkyVal);
I'd ask this via comment, but apparently I'm not a high enough level ;(
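For what it's worth, 1000.0 / 1000.0 is exact in IEEE-754 doubles, so the stray 0.9999 is more likely coming in on the input side. Here is a hypothetical sketch (in Python, whose float is the same IEEE-754 double) of how an accumulated input can sit just below the expected value, and how a wider format reveals what a short one hides:

```python
# Hypothetical input path: ten increments of 0.1 accumulated in binary floating point.
total = sum([0.1] * 10)       # mathematically 1.0, actually just below it

print(format(total, ".4f"))   # 1.0000 - a short display format hides the drift
print(format(total, ".17g"))  # 0.99999999999999989 - full precision reveals it
print(total == 1.0)           # False
```

Formatting the displayed string with enough significant digits ("G17" in C#) would show whether the internal value really is 1000 mm before the division ever happens.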
Thanks for all your help. We have tracked it down to a weird side effect of inputting a different object.
The issue is that if a different object is inserted after objectA has been input any multiple of 3 times, the error is triggered. E.g. objectA is input 3 times at 1 m, all okay; then objectB is input at 1 m and 0.9999 m appears. However, if objectA is input 1, 2, 4 or 5 times there is no problem; 6 times and the problem reappears, 9 times, etc. What fun we have.
Hello, I'm currently following the book Computing with C# and the .NET Framework and I'm having difficulty with one of the exercises, which is:
Write a C# program to make change. Enter the cost of an item that is less than one dollar. Output
the coins given as change, using quarters, dimes, nickels, and pennies. Use the fewest coins
possible. For example, if the item cost 17 cents, the change would be three quarters, one nickel,
and three pennies
Since I'm still trying to grasp C# programming, the best method I came up with is using a while loop.
while (costOfItem >= 0.50)
{
    costOfItem -= 0.50;
    fiftyPence++;
}
I have one of these for each coin value: 20, 10, 5, etc.
I'm checking whether the amount is greater than or equal to 50 pence; if so, I subtract 50 pence from the amount given by the user and add 1 to the fiftyPence variable.
Then it moves on to the next while loop, which I have for each coin. The problem is that somewhere along the line one of the loops subtracts, let's say, 20 pence, and costOfItem becomes something like "0.1999999999999". Then it never drops down to 0, which it should in order to give the correct amount of change.
Any help is appreciated; please don't suggest overly complex procedures that I haven't yet covered.
Never use double or float for money operations. Use decimal.
For other calculation-accuracy problems you have to use epsilon-based comparison: instead of testing for exact equality (or greater than, less than, less than or equal to, greater than or equal to), compare the difference against a small tolerance, in the style of Double.Epsilon comparisons.
If you do the calculation in cents, you can use integers, and then you don't run into floating-point rounding problems.
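A sketch of that integer-cents approach (in Python for brevity; the same divmod logic maps directly onto C#'s integer / and % operators), using the US coins from the book's exercise:

```python
def make_change(cents):
    """Return the fewest coins for `cents` using quarters, dimes, nickels, pennies."""
    coins = {}
    for name, value in [("quarters", 25), ("dimes", 10), ("nickels", 5), ("pennies", 1)]:
        coins[name], cents = divmod(cents, value)  # integer division: no rounding drift
    return coins

# Book example: the item costs 17 cents, so the change from one dollar is 83 cents.
print(make_change(100 - 17))  # {'quarters': 3, 'dimes': 0, 'nickels': 1, 'pennies': 3}
```

Because everything stays an integer, the amount reaches exactly 0 and there is nothing like 0.1999999999999 to worry about.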
Sounds to me like you are using float or double as the datatype for costOfItem.
float and double store their values in binary. But not all decimal values have an exact representation in binary. Therefore, a very close approximation is stored. This is why you get values like "0.1999999999999".
For money calculations, you should always use decimal, since they avoid those small inaccuracies.
You can read more about the difference in Jon Skeet's awesome answer to Difference between Decimal, Float and Double in .NET?
Thank you all for the fast replies. The issue was that I used double instead of decimal; the links provided have people explaining why it's wrong to do so in some cases. Using double should be avoided in this kind of arithmetic, since some decimal numbers do not have an exact binary representation, so instead of 0.23 it gives 0.2299999999999. Changing the variable to decimal fixed my issue.
I have a little trouble with Math.Ceiling() in C#.
I call it on the result of a division by a number followed by a multiplication by the same number, e.g. 20000 / 184 * 184. I would expect that result to be 20000, but it is 20001. Are there any possible ways to avoid this behavior when trying to round up a value?
Thank you in advance
When running the code you supplied, we have the following:
twentyThousand / oneEightyFour * oneEightyFour
The answer is 20000.000000000000000000000001.
Hence when you take the ceiling, you get 20001.
Based on the following article, I think the result is due to an inaccuracy introduced when performing the division, which yields 108.69565217391304347826086957, and as Jon stated:
As a very broad rule of thumb, if you end up seeing a very long string representation (ie most of the 28/29 digits are non-zero) then chances are you've got some inaccuracy along the way.
http://csharpindepth.com/Articles/General/Decimal.aspx
As light pointed out in the comments, you shouldn't be getting 20001 at all.
20000 / 184 would yield 108, which would then give you 19872 when multiplied by 184.
Somewhere you are doing something other than what you posted. Where is Math.Ceiling() even called?
I will say, if the numbers are hard-coded, you can put a decimal point (or the m suffix) in the code and they will be treated as such. If you are using variables that represent numbers, be sure they are declared as some floating-point type (decimal, double, float) depending on the accuracy needed.
Console.WriteLine(20000 / 184 * 184); // 19872
Console.WriteLine((20000.0 / 184.0 * 184.0)); // 20000
Are there any possible ways to avoid this behavior when trying to round up a value?
In this particular case you can avoid the problem by multiplying first, then dividing:
result = (20000m * 184m) / 184m;
Since precision is lost in the division, multiplying first prevents that imprecision from being magnified when you multiply.
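The effect is easy to reproduce (sketched here in Python, whose decimal module rounds to a fixed number of significant digits much like C#'s decimal):

```python
from decimal import Decimal
from math import ceil

a, b = Decimal(20000), Decimal(184)

divide_first = a / b * b      # the division rounds 2500/23; the multiply magnifies it
multiply_first = (a * b) / b  # the product is exact, so the division divides out evenly

print(divide_first > 20000)     # True: slightly above 20000...
print(ceil(divide_first))       # ...so the ceiling is 20001
print(multiply_first == 20000)  # True: multiplying first stays exact
```

The reordering works whenever the intermediate product fits in the type's range, because only the final division can introduce rounding.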
I'm a new user of C#, but I've learned to make small, simple games, so I'm having fun and practising C# that way.
However, now I need to change a moving object's speed by 0.2, so I can change the speed at an interval without the object bugging out. I'm using int values to set the speed of the objects. My objects move at 2 pixels per millisecond (1/1000 sec). I have tried multiplying by 2, but after doing this once or twice the objects move so fast that they bug out.
Looked through other questions on the site, but can't find anything, which seems to help me out.
So:
Is it possible to make an 'int' which holds a decimal value?
If yes, then how can I do it without risking bugs in the program?
Thanks in advance!
Is it possible to make an 'int' which holds a decimal value?
No, a variable of type int can only contain an integer number. In the world of C# and the CLR, an int is any integer number that can be represented by 32 bits. Nothing less, nothing more. However, a decimal value can be represented by integers; please see the update below and the comments.
In your case, I think that a float or a double would do the job. (I don't suggest decimal, since decimal is meant for financial calculations.)
Update
One important outcome of the comments below, coming from mike-wise, is the fact that a float can be represented by integers, and this was actually the case before computers got floating-point registers. One more contribution from mike is that more information on this can be found in The Art of Computer Programming, Volume 2, chapter 4.
If you want to keep only the integer part, and also have the decimal value when necessary, you could use a float (or double) and force a cast to int.
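A minimal sketch of that idea (in Python; int() truncates like a C# (int) cast, and the variable names here are made up):

```python
speed = 2.0        # keep the speed as a float instead of an int
speed += 0.2       # fractional adjustments are now possible

x = 0.0            # keep the true position as a float...
x += speed * 16    # ...advance one 16 ms frame at 2.2 px/ms

pixel_x = int(x)   # ...and truncate to an int only when drawing
print(pixel_x)     # 35 (from 35.2)
```

Keeping the true position in a float and casting only at draw time avoids the accumulated truncation you would get by storing the position itself as an int.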
I want to change the CultureInfo of a double, or of its string.
For example, I get a double value from the code in a format like 3015.0.
I don't know what unit this is in, but I need the value in meters, and this is not in meters because I am at an altitude of ca. 100 m.
I have tried: double.Parse(test, new System.Globalization.CultureInfo("hr-HR"));
and double.Parse(test, new System.Globalization.CultureInfo());
but neither gives the format I need.
Any idea what I can do? This is a Windows Forms C# application, if that's important. Framework 4.0.
EDIT:
As you can see at this LINK, I had a similar problem before and it was solved with culture info. The problem is that picture 1 shows the values that I get and picture 2 shows the real values that I need to get (when I say real, I mean in the right format). I think the problem is in the culture somewhere, as my previous question was also a problem with decimal values.
This is not related to Culture info.
Looks like you are getting a measurement in feet while you are expecting it to be in meters. In fact, 100 meters = 328.08399 feet, and your measurements might be in tenths of feet, i.e. 3015.0 = 301.5 feet (some GPS receivers do not support floats or doubles and therefore return only integers multiplied by 10, to give one decimal of accuracy).
If you are using a cheap GPS receiver then this is expected, as the accuracy is not that great (this would explain why you are getting 3015.0 instead of 3280).
I hope this helps.
Your problem has nothing to do with CultureInfo but with unit conversion. You will probably have to do a unit conversion. Are you sure that the number is not 301.5? That would mean the altitude is given in feet.
double altitudeMeters = 0.3048 * altitudeFeet;
The setting of the current culture will not convert units for you. It only affects the formatting of numbers (for example, some cultures use a comma instead of a period for the decimal point). You'll have to do the units conversion yourself.
double.Parse will simply convert the string into a number. It doesn't do unit conversions. The different culture information is for when there is a decimal comma (e.g. French) etc.
You will have to build some logic into your application to convert the number from what looks like feet to meters. If you can be sure that the data is always going to be in the "wrong" format then a simple feet to meters (1 foot = 0.3048 meters) conversion will work. Given that this is a GPS device you might be able to assume this.
If the numbers can be in any format then you will need to analyse the number and, if it's outside the sensible range, convert it. However, this will fail if someone enters "100". Is this meters or feet?
To ensure you get the right units, you will either have to have the user select the units in a separate control or include the units in the input string. If you do the latter, you'll need to parse the string to see whether it contains a units suffix, strip it off, parse the number, and then do the conversion.
Altitude comes from the $GPGGA sentence, which indicates the units being used. What does the $GPGGA string look like?
see http://aprs.gids.nl/nmea/#gga
If you look at the raw data in the string, you will know whether you are collecting the right numbers and their units.
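As a sketch of that check (in Python, with a made-up example sentence), the antenna altitude and its unit are fields 9 and 10 of the GGA sentence:

```python
# Hypothetical $GPGGA sentence; field 9 is the antenna altitude, field 10 its unit.
sentence = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"

fields = sentence.split(",")
altitude, unit = float(fields[9]), fields[10]
print(altitude, unit)      # 545.4 M -> already metres, nothing to convert

FEET_TO_METERS = 0.3048    # only needed if the unit ever turns out not to be metres
if unit != "M":
    altitude *= FEET_TO_METERS
```

If the unit field really is "M" but the numbers are still ten times too large, the receiver is probably scaling integers, as suggested above.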
Alex
Why can't C# do exact operations?
Math.Pow(Math.Sqrt(2.0),2) == 2.0000000000000004
I know how doubles work, I know where the rounding error comes from, I know that it's almost the correct value, and I know that you can't store infinitely many digits in a finite double. But why isn't there a way for C# to calculate it exactly, when my calculator can do it?
Edit
It's not about my calculator, I was just giving an example:
http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2
Cheers
Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.
Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.
If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
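Both points can be checked concretely (sketched in Python: its float is an IEEE-754 double, and its decimal module stands in for "writing out 50 decimal places"):

```python
import math
from decimal import Decimal, getcontext

# Double precision: the correctly rounded square root does not square back to 2.
r = math.sqrt(2.0)
print(r * r)                  # 2.0000000000000004

# Even with 50 significant digits, the rounded root squared is close to 2, not 2.
getcontext().prec = 50
s = Decimal(2).sqrt()
print(s * s == 2)                          # False
print(abs(s * s - 2) <= Decimal("1e-48"))  # True: tiny, but not zero
```

More digits shrink the error but never remove it, because any finite rounding of an irrational root squares to something slightly off.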
Your calculator is not calculating it exactly; it's just that the rounding error is so small that it's not displayed.
I believe most calculators use binary-coded decimal, which is roughly the equivalent of C#'s decimal type (and thus avoids binary representation errors). That is, each byte contains two digits of the number, and the maths is done via logarithms.
What makes you think your calculator can do it? It's almost certainly displaying less digits than it calculates with and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).
I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.
The square root of 2 is one of those annoying irrational numbers, like π, and therefore can't be represented exactly with normal IEEE-754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic maths, where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.
The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify equations (among other things), but most calculators will display only a portion of the real value calculated, after applying a rounding function to the result. For example, a calculator may store the square root of 2 to (let's say) 54 decimals but display only 12 rounded decimals. Thus taking the square root of 2 and then raising the result to the power of 2 returns the same value, because the displayed result is rounded. In any case, unless a calculator can keep an infinite number of decimals, you'll always get a best-approximation result from complex operations.
By the way, try to represent 0.1 in binary and you'll realize that you can't represent it exactly; you end up with (something like) 0.000110011001100...
Your calculator has methods which recognize and manipulate irrational input values.
For example: 2^(1/2) is likely not evaluated to a number in the calculator unless you explicitly tell it to do so (as on the TI-89/92).
Additionally, the calculator has logic it can use to manipulate such values, such as x^(1/2) * y^(1/2) = (x*y)^(1/2), where it can then wash, rinse, and repeat the method for working with irrational values.
If you were to give C# some method to do this, I suppose it could as well. After all, algebraic solvers such as Mathematica are not magical.
It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values to mathematical calculations, something not covered by the CPU.
The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations as a program you will have simplified it already, or you will only need a numerical result.
2.0000000000000004 and 2.0 both round to the same value in single precision (binary 10.0). In your case, casting the result to single precision in C# would give the exact answer.
For your other example, Wolfram Alpha probably uses higher-than-machine precision for the calculation. This adds a big performance penalty. For instance, in Mathematica, going to higher precision makes calculations about 300 times slower:
k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]
It's 0.01 second vs 3 seconds on my machine
You can see the difference in results between single precision and double precision by doing something like the following in Java:
public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}
You can see that the single-precision result is exact, whereas the double-precision result is off by one bit.
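The same effect can be sketched outside Java too (here in Python, using struct to round a double to single precision, since Python has no float32 type of its own):

```python
import math
import struct

def to_float32(x):
    """Round a double to the nearest IEEE-754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

d = math.sqrt(2.0) ** 2
print(d)              # 2.0000000000000004 - double precision keeps the error
print(to_float32(d))  # 2.0 - the error is below single precision, so rounding removes it
```

The error in the double result is about one part in 10^16, far smaller than single precision can express, so the narrowing conversion rounds it away rather than computing anything more exactly.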