Convert fraction to float in C#

Can anyone please help.
I'm following a tutorial found here, as I have a situation where I have to get the equation of a line in point-slope form, i.e. y − y1 = m(x − x1).
I get up to step 3 of the tutorial no problem, but then I get stuck. In order to go from this equation y − 3 = **3/11**(x − 4) to this 11y − 33 = 3(x − 4) (getting rid of the fraction on the right), I have to multiply both sides by 11.
However, my problem is that I obviously won't be using fractions but floating point decimal numbers in C#. So my value would be 0.272727 rather than 3/11. So what would I need to multiply both sides by to get the correct answer? Or can this even be done?
My question is this: how can I get from y − 3 = **0.272727**(x − 4) to 11y − 33 = 3(x − 4) in decimal form?
Does anyone have any suggestions or alternatives that I can use?
Thanks in advance

Fraction Class
You can actually use fractions in C#.
Using one, you avoid the error that rounding to a decimal introduces.
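There's no fraction type built into the base class library, but writing a small one is easy. A minimal sketch (this Fraction struct and its members are my own illustration, not a specific library's API):

using System;

public readonly struct Fraction
{
    public long Numerator { get; }
    public long Denominator { get; }

    public Fraction(long numerator, long denominator)
    {
        if (denominator == 0) throw new DivideByZeroException();
        long g = Gcd(Math.Abs(numerator), Math.Abs(denominator));
        Numerator = numerator / g;     // store in lowest terms
        Denominator = denominator / g;
    }

    private static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    // Multiplying by the denominator clears the fraction exactly: (3/11) * 11 == 3/1.
    public static Fraction operator *(Fraction f, long n)
        => new Fraction(f.Numerator * n, f.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

With this, the slope stays 3/11 exactly, and new Fraction(3, 11) * 11 prints 3/1 instead of an approximation like 0.272727.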

I think you're mistaking the equation-solving step for the calculation.
You first need to solve your equation into some form you can actually compute.
Normal programming languages (unlike Mathematica and the like) can't deal with symbolic calculations or unknowns.
They can only compute the result of an expression given concrete values for all the variables used.

Firstly, before trying to evaluate the expression that calculates your equation, you should detect which value is the denominator (with a substring, or whatever else); after that, multiply your equation through by it, and only then try to calculate it.
Or, another way is to use a Fraction class.
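For the "detect the denominator" idea, rather than parsing strings you can recover a fraction from the double numerically. A sketch using continued fractions (the ToFraction name, the bound, and the tolerance are mine; it assumes a positive value):

static (long Num, long Den) ToFraction(double value, long maxDenominator = 10000)
{
    // Build continued-fraction convergents until the denominator bound is hit
    // or the remainder is negligible.
    long prevNum = 0, num = 1;
    long prevDen = 1, den = 0;
    double x = value;

    while (true)
    {
        long a = (long)Math.Floor(x);        // next continued-fraction term
        long nextNum = a * num + prevNum;
        long nextDen = a * den + prevDen;
        if (nextDen > maxDenominator) break; // keep the last convergent within bounds

        (prevNum, num) = (num, nextNum);
        (prevDen, den) = (den, nextDen);

        double frac = x - a;
        if (frac < 1e-9) break;              // close enough: stop expanding
        x = 1.0 / frac;
    }
    return (num, den);
}

ToFraction(3.0 / 11.0) returns (3, 11), which tells you to multiply both sides by 11.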

Related

double type Multiplication in C# giving me wrong values [duplicate]

This question already has answers here: Is floating point math broken?
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary as it has infinite recurring decimal places but this is not the case for 0.69. And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in binary floating point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format you chose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake - you're thinking of floating point numbers as if they were base-10 (i.e. decimal - hence my emphasis).
So you're thinking that there are two whole-number parts to this double: 69, and a divide-by-100 that moves the decimal place - which could also be expressed as 69 × 10^-2.
However, floats and doubles store the 'position of the point' in base-2: a whole-number significand multiplied by 2 to the power of some negative number.
The closest such value to 0.69 is not 0.69 exactly - written out in decimal, it is 0.68999999999999995 (to 17 significant digits).
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
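You can see what is actually stored with a couple of BCL calls. A quick sketch:

using System;

double d = 0.69;

// "G17" round-trips a double: it shows the value that is actually stored.
Console.WriteLine(d.ToString("G17"));        // 0.68999999999999995
Console.WriteLine((10 * d).ToString("G17")); // 6.8999999999999995

// The raw 64 bits: 1 sign bit, 11 exponent bits, 52 significand bits.
long bits = BitConverter.DoubleToInt64Bits(d);
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));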
but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation and explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However if this is a financial application decimal is precisely what you should use - you've seen Superman III (and Office Space) haven't you ;)
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
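A sketch of both approaches:

using System;

double i = 10 * 0.69;
Console.WriteLine(i.ToString("F2")); // 6.90 -- rounded to two decimal places for display

// For money, decimal stores base-10 digits, so 0.69 is held exactly:
decimal m = 10m * 0.69m;
Console.WriteLine(m);                // 6.90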
For the same reason that 1 / 3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.

Calculating the total amount of change given

Hello I'm currently following the Computing with C# and the .NET Framework book and I'm having difficulty on one of the exercises which is
Write a C# program to make change. Enter the cost of an item that is less than one dollar. Output
the coins given as change, using quarters, dimes, nickels, and pennies. Use the fewest coins
possible. For example, if the item cost 17 cents, the change would be three quarters, one nickel,
and three pennies
Since I'm still trying to grasp C# programming, the best method I came up with is using a while loop.
while (costOfItem >= 0.50)
{
    costOfItem -= 0.50;
    fiftyPence++;
}
I have one of these loops for each of the coin values: 20, 10, 5, etc.
I'm checking if the amount is greater than or equal to 50 pence; if so, I subtract 50 pence from the amount given by the user and add 1 to the fiftyPence variable.
Then it moves on to the next while loop, one for each coin value. The problem is, somewhere along the line one of the loops subtracts, let's say, 20 pence, and costOfItem becomes something like "0.1999999999999"; it then never drops down to 0, which it must for the change to come out correctly.
Any help is appreciated, please don't suggest over complex procedures that I have yet covered.
Never use double or float for money operations. Use Decimal.
For all other calculation-accuracy problems, use epsilon-based comparison (see Double.Epsilon) in place of exact equality, greater than, less than, less than or equal to, and greater than or equal to.
If you do the calculation in cents, you can use integers and then you don't get into floating point rounding problems
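For example, a sketch of the exercise in whole cents (coin values and names are from the book's US-coin version of the problem):

using System;

int change = 83; // change from $1.00 for a 17-cent item
int[] values = { 25, 10, 5, 1 };
string[] names = { "quarters", "dimes", "nickels", "pennies" };

for (int c = 0; c < values.Length; c++)
{
    Console.WriteLine($"{change / values[c]} {names[c]}"); // how many of this coin fit
    change %= values[c];                                   // remainder still owed
}
// Prints: 3 quarters, 0 dimes, 1 nickels, 3 pennies

Integer division and remainder are exact, so the total always reaches 0.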
Sounds to me you are using float or double as datatype for costOfItem.
float and double store their values in binary. But not all decimal values have an exact representation in binary. Therefore, a very close approximation is stored. This is why you get values like "0.1999999999999".
For money calculations, you should always use decimal, since they avoid those small inaccuracies.
You can read more about the difference in Jon Skeet's awesome answer to Difference between Decimal, Float and Double in .NET?
Thank you all for the fast replies. The issue was that I used double instead of decimal; the links provided have people explaining why that's wrong to do in some cases. double should be avoided in arithmetic like this, since some decimal numbers have no exact binary representation, so instead of 0.23 it gives 0.2299999999999. Changing the variable to decimal fixed my issue.

Is it possible to make an 'int' with a decimal value in C#?

I'm a new user of C#, but learned to make small simple games. So I'm having fun and training C# that way.
However, now I need to change a moving object's speed by 0.2, so I can change the speed on an interval without the object bugging out. I'm using 'int' values for the objects' speeds. My objects move at 2 pixels per millisecond (1/1000 sec). I have tried multiplying by 2, but after doing this once or twice, the objects move so fast they bug out.
Looked through other questions on the site, but can't find anything, which seems to help me out.
So:
Is it possible to make an 'int', which hold a decimal value ?
If yes, then how can I make it, without risking bugs in the program ?
Thanks in advance!
Is it possible to make an 'int', which hold a decimal value ?
No, a variable of type int can only contain an integer number. In the world of C# and the CLR, an int is any integer number that can be represented by 32 bits. Nothing less, nothing more. However, a decimal value can be represented by integers; please see the update below and the comments.
In your case, I think that a float or a double would do the job. (I don't refer to decimal, since we use decimal for financial calculations).
Update
One important outcome of the comments below, from mike-wise, is that a float can be represented by integers, and this was actually done before computers got floating point registers. Mike also notes that more information on this can be found in The Art of Computer Programming, Volume 2, chapter 4.
If you want the integer part, while still keeping the decimal part when you need it, you could use a float (or double) and force a cast to int.
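A sketch of that idea, assuming a tick-based game loop:

using System;

// Keep speed and position as double internally; cast to int only when drawing.
double speed = 2.0;   // pixels per millisecond
speed += 0.2;         // fractional speed changes are now possible

double position = 0.0;
for (int tick = 0; tick < 5; tick++)
{
    position += speed;          // accumulate the true position
    int pixelX = (int)position; // truncate to whole pixels for rendering
    Console.WriteLine(pixelX);  // 2, 4, 6, 8, 11
}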

c# floating point for loop, unexpected results

Can anyone explain to me why this program:
for(float i = -1; i < 1; i += .1F)
Console.WriteLine(i);
Outputs this:
-1
-0.9
-0.8
-0.6999999
-0.5999999
-0.4999999
-0.3999999
-0.2999999
-0.1999999
-0.09999993
7.450581E-08
0.1000001
0.2000001
0.3000001
0.4000001
0.5000001
0.6000001
0.7000001
0.8000001
0.9000002
Where is the rounding error coming from??
I'm sure this question must have been asked in some form before but I can't find it anywhere quickly. :)
The answer comes down to the way that floating point numbers are represented. You can go into the technical detail via Wikipedia, but simply put, a decimal number doesn't necessarily have an exact floating point representation...
The way floating point numbers (base-2 floating point, anyway, like doubles and floats) work[0] is by adding up powers of 1/2 to get to what you want. So 0.5 is just 1/2, 0.75 is 1/2 + 1/4, and so on.
The problem is that you can never represent 0.1 in this binary system without an unending stream of increasingly smaller powers of 2, so the best a computer can do is store a number that is very close to, but not quite, 0.1.
Usually you don't notice these differences, but they are there, and sometimes you can make them manifest themselves. There are a lot of ways to deal with these issues, and which one you use depends very much on what you are actually doing.
[0] in the slightly handwavey close enough kind of way
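A quick illustration of both cases:

using System;

Console.WriteLine(0.75 == 0.5 + 0.25);        // True: 1/2 + 1/4 is an exact sum of powers of 1/2
Console.WriteLine(0.1.ToString("G17"));       // 0.10000000000000001 -- the nearest double to 0.1
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004 -- the errors surface in sums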
Floating point numbers are not exact in decimal; they are approximations, because they must be rounded.
They are precise only in their binary representation.
Different CPUs or PCs can even produce different results.
Take a look at the Wikipedia page.
The big issue is that 0.1 cannot be represented in binary, just like 1 / 3 or 1 / 7 cannot be represented in decimal. So since the computer has to cut off at some point, it will accumulate a rounding error.
Try doing 0.1 + 0.7 == 0.8 in pretty much any programming language; you'll get false as a result.
In C# to get around this, use the decimal type to get better precision.
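For example:

using System;

Console.WriteLine(0.1 + 0.7 == 0.8);     // False: both sides carry binary rounding error
Console.WriteLine(0.1m + 0.7m == 0.8m);  // True: decimal stores these values exactly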
This will explain everything about floating-point:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The rounding error comes from the fact that float is not a precise data type (when converted to decimal); it is an approximation. Note that in the C# Reference, float is specified as having 7 digits of decimal precision.
This is fundamental to any floating point variable. The reasons are complex, but there is plenty of information if you google it.
Try using Decimal instead.
As other posters have intimated, the problem stems from the assumption that floating point numbers are a precise decimal representation. They are not: they are a precise binary (base-2) representation of a number. The problem you are experiencing is that you cannot always express a precise decimal number in binary format, just like you cannot express 1/3 in decimal format (.33333333...). At some point, rounding must occur.
In your example, rounding occurs when you express .1F (because that is not a value that can be expressed precisely in base-2).

Why can't c# calculate exact values of mathematical functions

Why can't C# do any exact operations?
Math.Pow(Math.Sqrt(2.0),2) == 2.0000000000000004
I know how doubles work, I know where the rounding error comes from, I know that it's almost the correct value, and I know that you can't store an infinite number of digits in a finite double. But why isn't there a way for C# to calculate it exactly, when my calculator can do it?
Edit
It's not about my calculator, I was just giving an example:
http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2
Cheers
Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.
Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.
If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
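You can actually run that experiment in C#. decimal can't hold 50 digits, but 20 is enough to make the point (the digits below are sqrt(2) truncated to 20 significant figures):

using System;

decimal root = 1.4142135623730950488m; // sqrt(2), truncated -- decimal holds all 20 digits exactly
decimal square = root * root;

Console.WriteLine(square);       // just below 2: truncating the root guarantees the square falls short
Console.WriteLine(square == 2m); // False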
Your calculator is not calculating it exactly; it's just that the rounding error is so small that it's not displayed.
I believe most calculators use binary-coded decimal, which is the equivalent of C#'s decimal type (and thus exact for decimal fractions). That is, each byte contains two decimal digits of the number, and the maths is done via logarithms.
What makes you think your calculator can do it? It's almost certainly displaying fewer digits than it calculates with, and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).
I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.
The square root of 2 is one of those annoying irrational numbers, like pi, and therefore can't be represented exactly with normal IEEE 754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic math, where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.
The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify equations (among other things), but most calculators will display only a portion of the real value calculated, after applying a rounding function to the result. For example, a calculator might find the square root of 2 and store (let's say) 54 decimals, but only display 12 rounded decimals. Thus taking the square root of 2 and then raising the result to the power of 2 returns the same value, since the result is rounded. In any case, unless the calculator can keep an infinite number of decimals, you'll always get a best-approximate result from complex operations.
By the way, try to represent 0.1 in binary and you'll realize that you can't represent it evenly; you'll end up with (something like) 0.100000000000...01.
Your calculator has methods which recognize and manipulate irrational input values.
For example: 2^(1/2) is likely not evaluated to a number in the calculator unless you explicitly tell it to do so (as on the TI-89/92).
Additionally, the calculator has logic it can use to manipulate such values, for example x^(1/2) * y^(1/2) = (x*y)^(1/2), where it can then wash, rinse, and repeat the method for working with irrational values.
If you were to give C# some method to do this, I suppose it could do it as well. After all, algebraic solvers such as Mathematica are not magical.
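A toy sketch of what such a method might look like (entirely illustrative; real CAS systems generalize this far beyond square roots):

using System;

// Keep "sqrt(n)" unevaluated so that squaring is exact.
public readonly struct Sqrt
{
    public long Radicand { get; }
    public Sqrt(long radicand) => Radicand = radicand;

    public long Squared() => Radicand;  // (sqrt(n))^2 == n, with no rounding at all

    public static Sqrt operator *(Sqrt a, Sqrt b)
        => new Sqrt(a.Radicand * b.Radicand); // sqrt(x) * sqrt(y) == sqrt(x * y)

    public double ToDouble() => Math.Sqrt(Radicand); // the only lossy step

    public override string ToString() => $"sqrt({Radicand})";
}

// new Sqrt(2).Squared() is exactly 2; Math.Pow(Math.Sqrt(2.0), 2) is not.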
It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values for mathematical calculations, something the CPU's numeric types don't offer.
The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations as a program you will have simplified it already, or you will only need a numerical result.
2.0000000000000004 and 2.0 are both rounded to the same single-precision number: exactly 2.0. In this particular case, casting the result to float in C# would give the exact answer.
For your other example, Wolfram Alpha may use higher than machine precision for its calculations. That adds a big performance penalty: in Mathematica, for instance, going to higher precision makes calculations about 300 times slower.
k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]
It's 0.01 second vs 3 seconds on my machine
You can see the difference between the single-precision and double-precision results by doing something like the following in Java:
public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}
You can see that the single-precision result is exact, whereas the double-precision result is off by one bit.
