I'm trying to parse a binary STereoLithography file (.stl) in .NET (C#), which contains 32-bit IEEE-754 floating-point numbers.
I need to parse those numbers and then later store them in a string representation in a POV-Ray script, which is a plain text file.
I tried this in Node.js using the readFloatLE function, which gives me back a Number (a double-precision value).
In .NET I only found the BitConverter.ToSingle function, which reads the 32 binary bits and gives me a float with less decimal precision (7 digits) than the Node.js parsing.
The Node.js parsing gives a POV-Ray script with numbers like: -14.203535079956055
While .NET only gives me: -14.2035351
So, how do I parse the binary 32 bits to a double in .NET to get the higher precision?
[Edit]
Using the answer from taffer: cast the converted float to a double and then use the 'round-trip' format specifier for the string representation.
Comparing to the Node.js output there are still minor rounding differences, but those are in the 13th-16th decimals.
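A minimal sketch of that fix applied end to end (the class name is illustrative; the bytes are the little-endian IEEE-754 encoding of the float in question):

```csharp
using System;
using System.Globalization;

class StlFloatDemo
{
    static void Main()
    {
        // Little-endian IEEE-754 bytes of the 32-bit float -14.2035351f.
        byte[] buffer = { 174, 65, 99, 193 };

        // Read the 32 bits as a float; the cast to double is lossless.
        double value = (double)BitConverter.ToSingle(buffer, 0);

        // "R" (round-trip) emits enough digits to recover the exact value;
        // the invariant culture avoids comma-vs-dot decimal separators.
        Console.WriteLine(value.ToString("R", CultureInfo.InvariantCulture));
        // -14.203535079956055
    }
}
```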
You did not lose any precision. Unlike JavaScript, the default .NET ToString uses general number formatting, which may truncate the last digits.
But if you use the round-trip format specifier you get the exact result:
var d = double.Parse("-14.203535079956055");
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
Edit
In JavaScript there is no 32-bit floating-point number type; every number is a double. So even if you use the readFloatLE function, the 32 bits it reads are widened to a double. See the C# code below, which demonstrates what actually happens:
var d = (double)float.Parse("-14.2035351");
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
Or even more precisely if you read the numbers from a byte buffer:
var d = (double)BitConverter.ToSingle(new byte[] { 174,65,99,193 }, 0);
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
Related
So I just started experimenting with C#. I have one line of code and the output is a "?".
Code: Console.WriteLine(Math.Pow(Math.Exp(1200), 0.005d));
Output: ?
I am using Visual Studio and it also says exited with code 0.
It should output 403.4287934927351. I also tried it in GeoGebra and got the correct result, so it's not infinity.
The C# double type (which Math.Exp returns) is a fixed-size type (64 bits), so it cannot represent arbitrarily large numbers. The largest number it can represent is the double.MaxValue constant, which is on the order of 10^308, less than what you are trying to compute (e^1200 is roughly 10^521).
When the result of a computation exceeds the maximum representable value, a special "number" representing infinity is returned. So:
double x = Math.Exp(1200); // cannot represent this with the double type
bool isInfinite = double.IsInfinity(x); // true
Once you have this "infinity", all further computations involving it just return infinity back; there is nothing else they can do. So the whole expression Math.Pow(Math.Exp(1200), 0.005d) returns "infinity".
When you try to write the result to the console, it gets converted to a string. The rules for converting this infinity to a string are as follows:
Regardless of the format string, if the value of a Single or Double
floating-point type is positive infinity, negative infinity, or not a
number (NaN), the formatted string is the value of the respective
PositiveInfinitySymbol, NegativeInfinitySymbol, or NaNSymbol property
that is specified by the currently applicable NumberFormatInfo object.
In your current culture, PositiveInfinitySymbol is likely "∞", but your console encoding likely cannot represent this symbol, so it outputs "?". You can change the console encoding to UTF-8 like this:
Console.OutputEncoding = Encoding.UTF8;
And then it will show "∞" correctly.
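Putting those pieces together, a minimal self-contained sketch (the class name is illustrative; Encoding.UTF8 lives in System.Text):

```csharp
using System;
using System.Text;

class InfinityDemo
{
    static void Main()
    {
        double x = Math.Exp(1200);               // overflows double
        Console.WriteLine(double.IsInfinity(x)); // True

        Console.OutputEncoding = Encoding.UTF8;  // lets the console render "∞"
        Console.WriteLine(Math.Pow(x, 0.005d));  // your culture's infinity symbol
    }
}
```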
As far as I know, there is no framework-provided type for arbitrarily sized rational numbers. For integers there is the BigInteger type, though.
In this specific case you can do fine with just double, because (e^a)^b = e^(a*b), so you can compute the same thing with:
Console.WriteLine(Math.Exp(1200 * 0.005d));
// outputs 403.4287934927351
Now there are no intermediate results exceeding capacity of double, and so it works fine.
For cases where that's not possible, there are third-party libraries that allow working with arbitrarily large rationals.
When converting a binary number to a float in C/C++ and C#, we get two different results from the rounding.
For example, take 01000010000010110110000010111111: in C# we get 34.84448, while in C/C++ we get 34.844479. Why this minor difference?
Converting in C#:
float f = System.BitConverter.ToSingle(bytearray,0);
//bytearray is an array that contains our binary number
in C++:
int a = 1108041919; // the integer whose bits represent the float
float f = *(float *)&a; // note: type-punning via a pointer cast is undefined behavior in standard C++; memcpy is the safe alternative
There are many ways to unambiguously represent the same floating point value in decimal. For example, you can add arbitrarily many zeros after the exact output (note that since each power of two has a decimal representation of finite length, so does every floating point number).
The criterion for "when can I stop printing extra digits" is usually chosen as "you can stop printing extra digits when you would get the exact same value back if you parsed the decimal output into a float again". In short: It is expected that outputting a float "round-trips".
If you parse the decimal representations 34.844479 and 34.84448, you will find that they both convert back to the floating point value 0x420b60bf or 01000010000010110110000010111111. So both these strings represent the same floating point number. (Source: Try it yourself on https://www.h-schmidt.net/FloatConverter/IEEE754.html)
Your question boils down to "Why do the different runtime libraries print out different values for the same float?", to which the answer is "it's generally up to the library to figure out when to stop printing digits, they are not required to stop at the bare minimum". As long as you can get the same float back when you parse it again, the library did its job.
If you want to see the exact same decimal strings, you can achieve that with appropriate formatting options.
Since the values are the same, the minor difference must come from the printing function that formats the value. :-)
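This is easy to check in C# (a small sketch; the class name is illustrative, and the invariant culture avoids decimal-separator surprises):

```csharp
using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        // Both decimal strings parse back to the very same 32-bit float...
        float a = float.Parse("34.84448", CultureInfo.InvariantCulture);
        float b = float.Parse("34.844479", CultureInfo.InvariantCulture);
        Console.WriteLine(a == b); // True

        // ...whose bit pattern is the one quoted above.
        uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(a), 0);
        Console.WriteLine(bits.ToString("X8")); // 420B60BF
    }
}
```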
I am trying to convert some Java code into C# and ran across an issue with parsing very large exponential numbers in .NET.
The number I am trying to parse is "1.79769313486232E+308".
I have tried using both double (which is what is used in the code I am translating) and decimal, but both throw an overflow exception saying the number is too large.
double result = double.Parse("1.79769313486232E+308",
System.Globalization.NumberStyles.Float,
System.Globalization.CultureInfo.InvariantCulture)
I have tried various other combinations as well, such as using NumberStyles.Any.
This is working fine in Java. But before I attempt to convert the code from Java, I was hoping that there is another (native) option in .NET. Any ideas?
System.Numerics.BigInteger result = System.Numerics.BigInteger.Parse("1.79769313486232E+308", System.Globalization.NumberStyles.Float);
You can try BigInteger; you will need to add a reference to System.Numerics to your project.
EDIT
As noted in the comments, this number can be represented exactly as an integer (without losing any information), because it is an integer itself. It is written in scientific notation, which translates as follows. For example:
1.23E+11 means 1.23 x 10^11
So in this case:
1.79769313486232E+308 = 1.79769313486232 * 10^308
which is outside the range of double but fits in a BigInteger. The number is an integer itself, so there is no problem!
Because it is too large.
A double can hold at most double.MaxValue:
1.7976931348623157E+308
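A sketch showing the boundary. Note this is runtime-dependent: .NET Framework throws OverflowException when parsing a value above double.MaxValue, while .NET Core 3.0 and later return Infinity instead:

```csharp
using System;
using System.Globalization;

class MaxValueDemo
{
    static void Main()
    {
        // double.MaxValue needs 17 significant digits to round-trip.
        Console.WriteLine(double.MaxValue.ToString("R", CultureInfo.InvariantCulture));
        // 1.7976931348623157E+308

        // "1.79769313486232E+308" rounds above MaxValue, hence the failure:
        bool ok = double.TryParse("1.79769313486232E+308",
            NumberStyles.Float, CultureInfo.InvariantCulture, out double d);
        Console.WriteLine(ok ? (double.IsInfinity(d) ? "Infinity" : "finite") : "overflow");
    }
}
```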
I am reading in an XML file and reading a specific value of 10534360.9
When I parse the string for a decimal ( decimal.Parse(afNodes2[0][column1].InnerText) ) I get 10534360.9, but when I parse it for a float ( float.Parse(afNodes2[0][column1].InnerText) ) I get 1.053436E+07.
Can someone explain why?
You are seeing the value represented in "E" or "exponential" notation.
1.053436E+07 is equivalent to 1.053436 x 10^7, i.e. about 10,534,360: the System.Single (float) data type is 32 bits with only about 7 significant digits of precision, so this is as close as it can get to displaying 10,534,360.9.
You're seeing it represented that way because it is the default format produced by the Single.ToString() method, which the debugger uses to display values on the watch screens, console, etc.
EDIT
It probably goes without saying, but if you want to retain the precision of the original value in the XML, you could choose a data type that can retain more information about your numbers:
System.Double (64 bits, 15-16 digits)
System.Decimal (128 bits, 28-29 significant digits)
1.053436E+07 == 10534360.9
Those numbers represent the same parsed value, just displayed differently.
Because float has a precision of 7 digits.
Decimal has a precision of 28 digits.
When viewing the data in a debugger, or displaying it via .ToString, by default it might be formatted using scientific notation:
Some examples of the return value are "100", "-123,456,789", "123.45e+6", "500", "3.1416", "600", "-0.123", and "-Infinity".
To format it as exact output, use the R (round trip) format string:
myFloat.ToString("R");
http://msdn.microsoft.com/en-us/library/dwhawy9k(v=vs.110).aspx
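A small side-by-side sketch (illustrative class name; note that since .NET Core 3.0 the default float.ToString() prints the shortest round-trippable string, so you may see 10534361 rather than 1.053436E+07 on newer runtimes):

```csharp
using System;
using System.Globalization;

class PrecisionDemo
{
    static void Main()
    {
        var inv = CultureInfo.InvariantCulture;

        decimal dec = decimal.Parse("10534360.9", inv); // exact: 10534360.9
        float   flt = float.Parse("10534360.9", inv);   // nearest float: 10534361
        double  dbl = double.Parse("10534360.9", inv);  // plenty of precision

        Console.WriteLine(dec);                    // 10534360.9
        Console.WriteLine(flt.ToString(inv));      // 1.053436E+07 on .NET Framework
        Console.WriteLine(dbl.ToString("R", inv)); // 10534360.9
    }
}
```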
Does anyone know of an elegant way to get only the decimal part of a number? In particular, I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator.
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments: if we are dealing with very large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem and force the full number to be printed is to use a digit-placeholder pattern such as d.ToString("0.#"), adding one # per decimal place you want to keep.
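For example (a minimal sketch; the invariant culture keeps the decimal separator a period):

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        var inv = CultureInfo.InvariantCulture;
        double big = 1e16;

        Console.WriteLine(big.ToString(inv));        // 1E+16
        Console.WriteLine(big.ToString("0.#", inv)); // 10000000000000000

        Console.WriteLine((3.5).ToString("0.#", inv)); // 3.5 -- small values keep their form
    }
}
```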
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built in formatting with CultureInfo.InvariantCulture and then finding everything to the right of "."
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 0x00FF0000) >> 16 // 0x00FF0000 == 16711680
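A quick sketch of that expression in action (variable names are illustrative):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        decimal foo = 98.2765m;

        // The scale (digits after the decimal point) sits in bits 16-23
        // of the fourth element; 0x00FF0000 == 16711680.
        int scale = (decimal.GetBits(foo)[3] & 0x00FF0000) >> 16;
        Console.WriteLine(scale); // 4

        // Note the trailing-zero caveat from above: 1m and 1.0m differ.
        Console.WriteLine((decimal.GetBits(1.0m)[3] & 0x00FF0000) >> 16); // 1
    }
}
```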
You could use the Int() function (Math.Truncate in C#) to get the whole-number component, then subtract it from the original.