What is the best way to convert/parse a string into a ulong in C# and keep precision?
A direct cast is not possible, and the Convert class does not provide a ulong conversion, so I used an intermediate decimal variable, but I am losing precision.
decimal d = Decimal.Parse("1.0316584");
Console.Write(d); // displays 1.0316584
ulong u = (ulong)d;
Console.Write(u); // displays 1 - precision is lost
I first tried to use a long parser, but it threw:
long l = Int64.Parse("1.0316584"); // throws System.FormatException
EDIT:
OK, sorry, my bad: my question was very badly put. "long" is indeed an integer type in C#; I was confused by other languages I had used previously. Also, I had to use ulong because that is what the third-party code I am using expects. So the multiplying factor suggested in an answer was indeed the way to go.
ulong is an integer type and can never represent fractional/decimal values.
Strings can store a lot more data than a long. If you convert to a long, you run the risk of not being able to convert it back.
e.g. if I have the string "Now is the time for all good men to come to the aid of their party.", that can't really be converted to a long. "Precision" will be lost.
Having said that, a long is a 64-bit integer. It can't store that kind of data unless you're willing to change the "encoding" somehow. If you have code that looks like this:
decimal d = Decimal.Parse("1.0316584");
Console.Write(d); // displays 1.0316584
ulong u = (ulong)(d * 1000000000m);
Console.Write(u / 1000000000m); // displays 1.0316584 - precision is not lost
Currently, I am having a bit of a problem with my C# code. I have a set of code that is supposed to turn a string in the form of "x^y^z..." into a number, so I have set up a method that looks like this.
public long valueOfPower()
{
    long[] number = Array.ConvertAll(this.power.Split('^'), long.Parse);
    if (number.Length == 1)
    {
        return number[0];
    }
    long result = number[0];
    long power = number[number.Length - 1];
    for (long i = number.Length - 1; i > 1; i--)
    {
        power = (long)Math.Pow((int)number[(int)i - 1], (int)power);
    }
    result = (long)Math.Pow((int)result, (int)power);
    return result;
}
The problem I am having is that when something like 2^2^2^2^2 is entered, I get an extremely large negative number. I am not sure if it is something wrong with my code, or because 2^2^2^2^2 is too large a number for the long type, but I don't understand what is happening.
So, the question is, why is my code returning a large negative number when "this.power" is 2^2^2^2^2, but normal numbers for smaller inputs (like 2^2^2^2)?
(Sorry about the random casting, that came from me experimenting with different number types.)
What is happening is overflow. Each data type is stored as a certain number of bits. Because that number of bits is limited, the biggest number any data type can store is limited. Because the most significant bit often represents the sign of the number, when the maximum value for a data type is exceeded, that bit flips and the computer now interprets it as a negative number.
You can use the checked keyword to throw an exception if your math would overflow. More info on that here: https://msdn.microsoft.com/en-us/library/74b4xzyw.aspx
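For example, a minimal sketch of checked in action (the values here are illustrative, not the poster's original expression):
long big = long.MaxValue;
try
{
    checked
    {
        long doubled = big * 2; // would silently wrap to a negative value without checked
        Console.WriteLine(doubled);
    }
}
catch (OverflowException)
{
    Console.WriteLine("The multiplication overflowed the range of long.");
}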
Another possible solution would be using a BigInteger. More info here: https://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx
See this for the max values of data types in C#: http://timtrott.co.uk/data-types-ranges/
See this for more info on overflow: https://en.wikipedia.org/wiki/Integer_overflow
2^2^2^2^2 is, well, quite a large number, and as a result it overflows the maximum value of the long data type (9,223,372,036,854,775,807) by some margin.
You could try using the BigInteger class from System.Numerics, or come up with some other method of representing such a number.
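As a rough sketch, evaluating the exponent chain right-to-left gives 2^65536, which BigInteger handles without overflowing:
using System.Numerics;

// 2^2^2^2^2 = 2^(2^(2^(2^2))) = 2^65536, far beyond long.MaxValue
BigInteger big = BigInteger.Pow(2, 65536);
Console.WriteLine(big.ToString().Length); // 19729 - nearly twenty thousand decimal digits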
The overflow problem you are experiencing occurs because you are downcasting.
number and power are supposed to be long, but in your calculation, for example:
power = (long)Math.Pow((int)number[(int)i-1], (int)power);
// you are downcasting number and power into int
When you do the calculation in int, the value becomes negative because of overflow, and then you convert it back to long.
Also, Math.Pow only accepts double parameters and returns a double; the int values you pass are implicitly widened to double.
So, to fix your issue, it should look like this:
power = (long)Math.Pow((double)number[(int)i-1], (double)power);
// and
result = (long)Math.Pow((double)result, (double)power);
Then, if you want to get something bigger than long, consider using BigInteger.
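A minimal sketch of the method rewritten around BigInteger (it takes the expression as a parameter for clarity, and assumes every value used as an exponent still fits in an int, which BigInteger.Pow requires):
using System;
using System.Numerics;

public static BigInteger ValueOfPower(string expression)
{
    BigInteger[] numbers = Array.ConvertAll(expression.Split('^'), BigInteger.Parse);

    // Exponentiation is right-associative: 2^2^3 means 2^(2^3)
    BigInteger result = numbers[numbers.Length - 1];
    for (int i = numbers.Length - 2; i >= 0; i--)
    {
        // (int)result throws OverflowException if the exponent no longer fits in an int
        result = BigInteger.Pow(numbers[i], (int)result);
    }
    return result;
}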
I have a float number, say 1.2999, that when put into a Convert.ToDecimal returns 1.3. The reason I want to convert the number to a decimal is for precision when adding and subtracting, not for rounding up. I know for sure that the decimal type can hold that number, since it can hold numbers bigger than a float can.
Why is it rounding the number up? Is there any way to stop it from rounding?
Edit: I don't know why mine is rounding and yours is not, here is my exact code:
decNum += Convert.ToDecimal((9 * 0.03F) + 0);
I am really confused now. When I go into the debugger and see the output of the (9 * 0.03F) + 0 part, it shows 0.269999981 as float, but then it converts it into 0.27 decimal. I know however that 3% of 9 is 0.27. So does that mean that the original calculation is incorrect, and the convert is simply fixing it?
Damn I hate numbers so much lol!
What you say is happening doesn't appear to happen.
This program:
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            float f = 1.2999f;
            Console.WriteLine(f);

            Decimal d = Convert.ToDecimal(f);
            Console.WriteLine(d);
        }
    }
}
Prints:
1.2999
1.2999
I think you might be having problems when you convert the value to a string.
Alternatively, as ByteBlast notes below, perhaps you gave us the wrong test data.
Using float f = 1.2999999999f; does print 1.3
The reason for that is the float value is not precise enough to represent 1.2999999999f exactly. That particular value ends up being rounded to 1.3 - but note that it is the float value that is being rounded before it is converted to the decimal.
If you use a double instead of a float, this doesn't happen unless you go to even more digits of precision (when you reach 1.299999999999999)
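A small sketch of where each type runs out of digits:
float  f = 1.2999999999f;   // more digits than a float can hold: it parses to the same value as 1.3f
double d = 1.2999999999;    // a double still distinguishes this from 1.3

Console.WriteLine(Convert.ToDecimal(f)); // 1.3
Console.WriteLine(Convert.ToDecimal(d)); // 1.2999999999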
[EDIT] Based on your revised question, I think it's just expected rounding errors, so definitely read the following:
See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
Also see this link (recommended by Tim Schmelter in comments below).
Another thing to be aware of is that the debugger might display numbers to a different level of precision than the default double.ToString() (or equivalent) so that can lead to you seeing slightly different numbers.
Aside:
You might have some luck with the "round trip" format specifier:
Console.WriteLine(1.299999999999999.ToString());
Prints 1.3
But:
Console.WriteLine(1.299999999999999.ToString("r"));
Prints 1.2999999999999989
(Note the sneaky little 8 at the penultimate digit!)
For ultimate precision you can use the Decimal type, as you are already doing. That's optimised for base-10 numbers and provides a great many more digits of precision.
However, be aware that it's hundreds of times slower than float or double and that it can also suffer from rounding errors, albeit much less.
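For the calculation in the question, a quick sketch of the difference (the exact text printed for the float depends on the runtime's default formatting):
float   f = (9 * 0.03F) + 0;  // 0.03F is only a binary approximation, so the product is slightly off
decimal m = (9 * 0.03M) + 0;  // 0.03M is stored exactly in base 10

Console.WriteLine(f);                    // stored as ~0.269999981; may display as 0.27 or 0.26999998
Console.WriteLine(m);                    // 0.27
Console.WriteLine(Convert.ToDecimal(f)); // 0.27 - the conversion rounds the float's error away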
I have the following code:
float f = 0.3f;
double d1 = System.Convert.ToDouble(f);
double d2 = System.Convert.ToDouble(f.ToString());
The results are equivalent to:
d1 = 0.30000001192092896;
d2 = 0.3;
I'm curious to find out why this is?
It's not a loss of precision: 0.3 is not representable in floating point. When the system converts the value to a string it rounds; if you print out enough significant digits you will get something that makes more sense.
To see it more clearly
float f = 0.3f;
double d1 = System.Convert.ToDouble(f);
double d2 = System.Convert.ToDouble(f.ToString("G20"));
string s = string.Format("d1 : {0} ; d2 : {1} ", d1, d2);
output
"d1 : 0.300000011920929 ; d2 : 0.300000012 "
You're not losing precision; you're upcasting to a more precise representation (double, 64-bits long) from a less precise representation (float, 32-bits long). What you get in the more precise representation (past a certain point) is just garbage. If you were to cast it back to a float FROM a double, you would have the exact same precision as you did before.
What happens here is that you've got 32 bits allocated for your float. You then upcast to a double, adding another 32 bits for representing your number (for a total of 64). Those new bits are the least significant (the farthest to the right of your decimal point), and have no bearing on the actual value since they were indeterminate before. As a result, those new bits have whatever values they happened to have when you did your upcast. They're just as indeterminate as they were before -- garbage, in other words.
When you downcast from a double to a float, it'll lop off those least-significant bits, leaving you with 0.300000 (7 digits of precision).
The mechanism for converting from a string to a float is different; the compiler needs to analyze the semantic meaning of the character string '0.3f' and figure out how that relates to a floating point value. It can't be done with bit-shifting like the float/double conversion -- thus, the value that you expect.
For more info on how floating point numbers work, you may be interested in checking out this wikipedia article on the IEEE 754-1985 standard (which has some handy pictures and good explanation of the mechanics of things), and this wiki article on the updates to the standard in 2008.
edit:
First, as #phoog pointed out below, upcasting from a float to a double isn't as simple as adding another 32 bits to the space reserved to record the number. In reality, you get an additional 3 bits for the exponent (for a total of 11) and an additional 29 bits for the fraction (for a total of 52). Add in the sign bit and you've got your total of 64 bits for the double.
Additionally, suggesting that there are 'garbage bits' in those least significant locations is a gross generalization, and is probably not correct for C#. A bit of explanation and some testing below suggest to me that this is deterministic for C#/.NET, and probably the result of some specific mechanism in the conversion rather than of reserving memory for additional precision.
Way back in the beforetimes, when your code would compile into a machine-language binary, compilers (C and C++ compilers, at least) would not add any CPU instructions to 'clear' or initialize the value in memory when you reserved space for a variable. So, unless the programmer explicitly initialized a variable to some value, the values of the bits that were reserved for that location would maintain whatever value they had before you reserved that memory.
In .NET land, your C# or other .NET language compiles into an intermediate language (CIL, Common Intermediate Language), which is then just-in-time compiled by the CLR to execute as native code. There may or may not be a variable initialization step added by either the C# compiler or the JIT compiler; I'm not sure.
Here's what I do know:
I tested this by casting the float to three different doubles. Each one of the results had the exact same value.
That value was exactly the same as #rerun's value above: double d1 = System.Convert.ToDouble(f); result: d1 : 0.300000011920929
I get the same result if I cast using double d2 = (double)f; Result: d2 : 0.300000011920929
With three of us getting the same values, it looks like the upcast value is deterministic (and not actually garbage bits), indicating that .NET is doing something the same way across all of our machines. It's still true to say that the additional digits are no more or less precise than they were before, because 0.3f isn't exactly equal to 0.3 -- it's equal to 0.3, up to seven digits of precision. We know nothing about the values of additional digits beyond those first seven.
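A quick sketch of that round-trip claim:
float f = 0.3f;
double d = f;           // widening exposes the float's binary error in the extra digits
float back = (float)d;  // narrowing again

Console.WriteLine(d);          // 0.30000001192092896 (exact text depends on the runtime's formatting)
Console.WriteLine(back == f);  // True - nothing was gained or lost by the round trip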
I use a decimal cast to get the correct result in this case and in other similar cases:
float ff = 99.95f;
double dd = (double)(decimal)ff;
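A sketch of what that prints (the exact digits for the plain cast depend on formatting):
float ff = 99.95f;
Console.WriteLine((double)ff);          // about 99.9499969482422 - widening exposes the float's error
Console.WriteLine((double)(decimal)ff); // 99.95 - float-to-decimal keeps only ~7 significant digits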
The question is, why do these code snippets give different results?
private void InitializeOther()
{
    double d1, d2, d3;
    int i1;
    d1 = 4.271343859532459e+18;
    d2 = 4621333065.0;
    i1 = 5;
    d3 = (i1 * d1) - Utils.Sqr(d2);
    MessageBox.Show(d3.ToString());
}
and
procedure TForm1.InitializeOther;
var
  d1, d2, d3: Double;
  i1: Integer;
begin
  d1 := 4.271343859532459e+18;
  d2 := 4621333065.0;
  i1 := 5;
  d3 := i1 * d1 - Sqr(d2);
  ShowMessage(FloatToStr(d3));
end;
The Delphi code gives me 816, while the c# code gives me 0. Using a calculator, I get 775. Can anybody please give me a detailed explanation?
Many thanks!
Delphi stores intermediate values as Extended (an 80-bit floating point type). This expression is Extended:
i1*d1-Sqr(d2);
The same may not be true of C# (I don't know). The extra precision could be making a difference.
Note that you're at the limits of the precision of the Double data type here, which means that calculations here won't be accurate.
Example:
d1 = 4.271343859532459e+18
which can be said to be the same as:
d1 = 4271343859532459000
and so:
d1 * i1 = 21356719297662295000
In reality, the value in .NET will be more like this:
2.1356719297662296E+19
Note the rounding there. Hence, at this level, you're not getting the right answers.
This is certainly not an explanation of this exact situation but it will help to explain the problem.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
A C# double has at most 16 digits of precision. Taking 4.271343859532459e+18 (already 19 digits) and multiplying it by 5 gives a 20-digit number. You want a result that is accurate down to its last 3 digits. Double cannot do this.
In C#, the Decimal type can handle this example -- if you know to use the 123M format to initialize the Decimal values.
Decimal d1, d2, d3;
int i1;
d1 = 4.271343859532459e+18M;
d2 = 4621333065.0M;
i1 = 5;
d3 = (i1 * d1) - (d2*d2);
MessageBox.Show(d3.ToString());
This gives 775.00 which is the correct answer.
Any calculation such as this is going to lead to dramas with typical floating point arithmetic. The larger the difference in scaling of the numbers, the bigger the chance of an accuracy problem.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems gives a good overview.
I think this is an error caused by limited precision (above all, because doubles are used instead of integers). Perhaps d1 isn't exactly the same after the assignment. d2*d2 will surely differ from the correct value, as it's bigger than 2^32.
As 5*d1 is even bigger than 2^64, even using 64-bit integers won't help. You'd have to use bignums or a 128-bit integer class to get the correct result.
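A sketch of the same arithmetic done with exact integers via BigInteger (both inputs happen to be whole numbers, so nothing is lost):
using System.Numerics;

BigInteger d1 = BigInteger.Parse("4271343859532459000"); // 4.271343859532459e+18
BigInteger d2 = 4621333065;
BigInteger d3 = 5 * d1 - d2 * d2;
Console.WriteLine(d3); // 775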
Basically, as other people have pointed out, double-precision isn't precise enough for the scale of the computation you're trying to do. Delphi uses "extended precision" by default, which adds another 16 bits over Double to allow for more precise computation. The .NET framework doesn't have an extended-precision data type.
Not sure what type your calculator is using, but it's apparently doing something different from both Delphi and C#.
As commented by the others, the double isn't precise enough for your computation.
The decimal type is a good alternative, even though someone pointed out that it would be rounded; it is not.
It was claimed that "In C#, the Decimal type cannot handle this example easily either, since 4.271343859532459e+18 will be rounded to 4271343859532460000."
This is not the case. The answer will be correct if you use decimal. But, as he said, the range is different.
Why does (string)int32 always throw: Cannot convert type 'int' to 'string'
public class Foo
{
    private int FooID;

    public Foo()
    {
        FooID = 4;
        string s = (string)FooID;      // throws compile error
        string sss = FooID.ToString(); // no compile error
    }
}
Because there is no type conversion defined from Int32 to string. That's what the ToString method is for.
If you did this:
string s = (string)70;
What would you expect to be in s?
A. "70" the number written the way humans would read it.
B. "+70" the number written with a positive indicator in front.
C. "F" the character represented by ASCII code 70.
D. "\x00\x00\x00F" the four bytes of an int each separately converted to their ASCII representation.
E. "\x0000F" the int split into two sets of two bytes each representing a Unicode character.
F. "1000110" the binary representation for 70.
G. "$70" the integer converted to a currency
H. Something else.
The compiler can't tell so it makes you do it the long way.
There are two "long ways". The first is to use one of the Convert.ToString() overloads, something like this:
string s = Convert.ToString(-70, 10);
This means that it will convert the number to a string using base 10 notation. If the number is negative it displays a "-" at the start, otherwise it just shows the number. However, if you convert to binary, octal or hexadecimal, negative numbers are displayed in two's complement, so Convert.ToString(-70, 16) becomes "ffffffba".
The other "long way" is to use ToString with a string formatter like this:
string s2 = 70.ToString("D");
The D is a formatter code and tells the ToString method how to convert to a string. Some of the interesting codes are listed below:
"D" Decimal format which is digits 0-9 with a "-" at the start if required. E.g. -70 becomes "-70".
"D8" I've shown 8 but could be any number. The same as decimal, but it pads with zeros to the required length. E.g. -70 becomes "-00000070".
"N" Thousand separators are inserted and ".00" is added at the end. E.g. -1000 becomes "-1,000.00".
"C" A currency symbol is added at the start after the "-" then it is the same as "N". E.g. Using en-Gb culture -1000 becomes "-£1,000.00".
"X" Hexadecimal format. E.g. -70 becomes "46".
Note: These formats are dependent upon the current culture settings so if you are using en-Us you will get a "$" instead of a "£" when using format code "C".
For more information on format codes see MSDN - Standard Numeric Format Strings.
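A short sketch of those codes in action (the "N" and "C" output depends on the current culture; the values shown assume en-GB):
int n = -1000;
Console.WriteLine(n.ToString("D"));     // -1000
Console.WriteLine(n.ToString("D8"));    // -00001000
Console.WriteLine(n.ToString("N"));     // -1,000.00
Console.WriteLine(n.ToString("C"));     // -£1,000.00 under en-GB
Console.WriteLine(70.ToString("X"));    // 46
Console.WriteLine((-70).ToString("X")); // FFFFFFBA (two's complement)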
When performing a cast operation like (string)30, you're basically telling the computer that "this object is similar enough to the object I'm casting to that I won't lose much (or any) data/functionality".
When casting a number to a string, there is a "loss" in data. A string cannot perform mathematical operations, and it can't do number-related things.
That is why you can do this:
int test1 = 34;
double test2 = (double)test1;
But not this:
int test1 = 34;
string test2 = (string)test1;
This example isn't perfect since (IIRC) the conversion here is implicit; however, the idea is that when converting to a double, you don't lose any information. That data type can still basically act the same whether it's a double or an int. The same can't be said of an int to a string.
Casting is (usually) only allowed when you won't lose much functionality after the conversion is done.
The .ToString() method is different from casting because it's just a method that returns a string data type.
Just another note: In general, if you want to do an explicit conversion like this (and I agree with many other answers here as to why it needs to be an explicit conversion) don't overlook the Convert type. It is designed for these sorts of primitive/simple type conversions.
I know that when starting in C# (and coming from C++) I found myself running into type casts that seemed like they should have worked. C++ is just a bit more liberal when it comes to this sort of thing, so in C# the designers wanted you to know when your type conversions were ambiguous or ill-advised. The Convert type then fills in the gaps by allowing you to explicitly convert and understand the side effects.
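For example, a small sketch using Convert for the field from the question:
int fooID = 4;
string s = Convert.ToString(fooID);     // "4"
string hex = Convert.ToString(255, 16); // "ff" - the base-16 overload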
Int doesn't have an explicit conversion operator to string. However, there is Int32.ToString(). Maybe you can create one with extension methods, if you really want to.
C# itself does not know how to convert an int to a string; the library method (ToString) does.
Int32 can't be cast to string because the C# compiler doesn't know how to convert from one type to the other.
Why is that, you ask? The reason is simple: Int32 is a numeric type and string is, well, a character type. You may be able to convert from one numeric type to another (e.g. float to int, but be warned of a possible loss of accuracy).
If all possible casts were coded into the compiler, the compiler would become too slow, and it would certainly miss many of the possible casts for user-defined types, which is a major no-no.
So the workaround is that every object in C# inherits a .ToString() method that knows how to handle its own type, because .ToString() is specifically implemented for that type and, as you guessed by now, returns a string.
After all, the Int32 type is just some bits in memory (32 bits, to be exact) and the string type is another pile of bits (how many depends on how much has been allocated); the compiler/runtime doesn't know anything just by looking at that data. The .ToString() function accesses the specific metadata needed to convert from one to the other.
I just found the answer. ToString works fine on an int variable. You just have to make sure you add the brackets.
i.e.:
int intMyInt = 32;
string strMyString = intMyInt.ToString();
VB.NET is perfectly capable of casting from an int to a string... in fact, so is Java. Why should C# have a problem with it? Sometimes it is necessary to use a numeric value in calculations and then display the result as a string. How else would you manage to display the result of a calculation in a TextBox?
The responses given here make no sense. There must be a way to cast an int as a string.