This question already has answers here:
How do you deal with numbers larger than UInt64 (C#)
Hi all, I have a 20-digit number, 34432523434234423434. I tried converting it using long and UInt64, but I still get an exception: "Value was either too large or too small". Can someone help me out with how to convert this value?
Your value is actually 65 bits long, so no matter how you change its type, it will not fit into a 64-bit variable.
2**64 = 18446744073709551616
your value = 34432523434234423434
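In C#, the usual way past the 64-bit limit is System.Numerics.BigInteger, which has no fixed upper bound. A minimal sketch (assuming a .NET project that can reference System.Numerics):

using System;
using System.Numerics;

class Program
{
    static void Main()
    {
        // The 20-digit value from the question is larger than ulong.MaxValue
        // (18446744073709551615), so no built-in integer type can hold it.
        BigInteger value = BigInteger.Parse("34432523434234423434");

        // BigInteger supports ordinary arithmetic with arbitrary precision.
        Console.WriteLine(value);       // 34432523434234423434
        Console.WriteLine(value * 2);   // 68865046868468846868
    }
}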
Big integers aren't actually limited to 20 digits, they're limited to the numbers that can be expressed in 64 bits (for example, the number 99,999,999,999,999,999,999 is not a valid big integer despite it being 20 digits long).
The reason you have this limitation is that native format integers can be manipulated relatively fast by the underlying hardware whereas textual versions of a number (tend to) need to be processed one digit at a time.
If you want a number larger than the largest 64-bit unsigned integer 18,446,744,073,709,551,615 then you will need to store it as a string (or other textual field) and hope that you don't need to do much mathematical manipulation on it.
Alternatively, you can look into floating point numbers which have a larger range but less precision, or decimal numbers which should be able to give you 65 digits for an integral value, with decimal(65,0) as the column type.
This question already has answers here:
Is floating point math broken?
I have a problem with conversion of double values.
In the picture, the project's framework is .NET 6; I tried .NET 5 as well and I get the same value again on x64.
Can you explain this situation to me?
And how can I get the value unchanged on x64?
Thank you.
This is expected behavior: floating point numbers have a limited number of significant digits, and a number that has a finite number of digits in its decimal representation may require infinitely many digits in its binary representation.
E.g., the decimal 0.1 is, in binary, 0.00011001100110011... repeating. When this is stored in a float or double, the digits are truncated.
Floating point numbers behave similarly to, but not identically to, "real" numbers. This is something you should be aware of.
For financial mathematics, in C#, use the decimal type.
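A quick way to see both effects side by side, as a minimal sketch (nothing project-specific assumed):

using System;

class Program
{
    static void Main()
    {
        // double: 0.1 and 0.2 have no exact binary representation,
        // so the truncation shows up in the sum.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("R"));   // 0.30000000000000004

        // decimal: stores base-10 digits, so 0.1 and 0.2 are exact here.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
    }
}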
Here you find a good "what every developer should know" overview: https://floating-point-gui.de/.
The standard governing the most common (almost ubiquitous) implementation of floating point types is IEEE 754.
This question already has answers here:
c# string to float conversion invalid?
float.Parse("534818068")
returns: 534818080
I understand that there are many complications with float and decimal values. But maybe someone could explain this behaviour to me.
Thanks!
Floating point numbers have a relative precision, i.e. something like 7 or 8 significant digits for a float. So only the first 7 or 8 digits are correct, independent of the actual magnitude of the number.
Floating point numbers are stored internally using the IEEE 754 standard (a sign, a biased exponent and a fraction).
float numbers are stored with a 32 bits representation, which means they will have a precision of 7 digits.
On the other hand, double are stored with a 64 bits representation, thus having 15-16 digits (source).
This is also why, for instance, you shouldn't usually compare floats for equality.
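To make the effect from the question concrete, here is a minimal sketch comparing float and double for the same text:

using System;

class Program
{
    static void Main()
    {
        // 534818068 needs 9 significant digits, more than a float holds,
        // so it is rounded to the nearest representable float.
        float f = float.Parse("534818068");
        Console.WriteLine((double)f);    // 534818080

        // A double has 15-16 significant digits and stores this integer exactly.
        double d = double.Parse("534818068");
        Console.WriteLine(d);            // 534818068
    }
}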
So a common question you see on SO is how to convert between type x and type z, but I want to know: how does the computer actually do this?
For example, how does it take an int out of a string?
My theory is that a string is a char array at its core, so it goes index by index and checks each character against the ASCII table. If it falls within the digit range, it's added to the integer. Does it happen at an even lower level than this? Is there bitmasking taking place? How does this happen?
Disclaimer: not for school, just curious.
This question can only be answered when restricting the types to a somewhat manageable subset. To do so, let us consider the three interesting types: strings, integers and floats.
The only other truly different basic type is a pointer, which is not usually converted in any meaningful manner (even the NULL check is not actually a conversion, but a special built-in semantic for the 0 literal).
int to float and vice versa
Converting integers to floats and vice versa is simple, since modern CPUs provide an instruction to deal with that case directly.
string to integer type
Conversion from string to integer is fairly simple, because no rounding errors can happen. Indeed, any string is just a sequence of code points (which may or may not be represented by char or wchar_t), and the common method works along the lines of the following:
#include <limits>
#include <string>

unsigned parse_unsigned(const std::string& str) {
    unsigned result = 0;
    for(size_t i = 0; i < str.size(); ++i) {
        // Map the code point to a digit value; anything outside 0..9 is not a digit.
        unsigned c = static_cast<unsigned>(str[i]) - static_cast<unsigned>('0');
        if(c > 9) {
            if(i) return result;            // ok: the integer is over
            else throw "no integer found";  // first character was not a digit
        }
        // result * 10 + c would exceed the maximum value: report overflow.
        if((std::numeric_limits<unsigned>::max() - c) / 10 < result) throw "integer overflow";
        result = result * 10 + c;
    }
    return result;
}
If you wish to consider things like additional bases (e.g. strings like 0x123 as a hexadecimal representation) or negative values, it obviously requires a few more tests, but the basic algorithm stays the same.
int to string
As expected, this basically works in reverse: an implementation will repeatedly take the remainder of a division by 10 and then divide by 10. Since this yields the digits in reverse order, one can either print into a buffer from the back or reverse the result again.
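As a rough sketch of that reverse process (shown in C# here for brevity; IntToString is just an illustrative name and only handles non-negative values):

using System.Text;

static class Conversions
{
    public static string IntToString(int value)
    {
        if (value == 0) return "0";
        var digits = new StringBuilder();
        while (value > 0)
        {
            // The remainder of division by 10 is the least significant digit...
            digits.Insert(0, (char)('0' + value % 10));
            // ...and the integer division drops it, moving on to the next digit.
            value /= 10;
        }
        return digits.ToString();
    }
}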
string to floating point type
Parsing strings to a double (or float) is significantly more complex, since the conversion is supposed to happen with the highest possible accuracy. The basic idea is to read the number as a string of digits, remembering only where the dot was and what the exponent is. Then you assemble the mantissa (which is basically a 53-bit integer) and the exponent from this information, build the actual bit pattern for the resulting number, and copy it into your target value.
While this approach works perfectly fine, there are literally dozens of different approaches in use, all varying in performance, correctness and robustness.
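As a deliberately naive C# sketch of the "digits plus dot position" idea only (a real implementation assembles the bit pattern directly and takes far more care with rounding; NaiveParseDouble is just an illustrative name):

using System;

static class NaiveFloatParsing
{
    // Reads decimal digits, remembers how many came after the dot,
    // and scales at the end. No sign, exponent or rounding handling.
    public static double NaiveParseDouble(string s)
    {
        long digits = 0;
        int fractionDigits = 0;
        bool seenDot = false;
        foreach (char ch in s)
        {
            if (ch == '.') { seenDot = true; continue; }
            digits = digits * 10 + (ch - '0');
            if (seenDot) fractionDigits++;
        }
        return digits / Math.Pow(10, fractionDigits);
    }
}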
Actual implementations
Note that actual implementations may have to do one more important (and horribly ugly) thing, which is locale. For example, in the German locale the "," is the decimal point and not the thousands separator, so pi is roughly "3,1415926535".
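In C#, for instance, that locale dependence surfaces through CultureInfo; a small sketch:

using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // In the German culture "," is the decimal separator, so this parses as pi.
        double de = double.Parse("3,1415926535", new CultureInfo("de-DE"));
        Console.WriteLine(de);

        // For culture-independent parsing, use the invariant culture.
        double inv = double.Parse("3.1415926535", CultureInfo.InvariantCulture);
        Console.WriteLine(inv);
    }
}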
Perl string to double
TCL string to double
David M. Gay AT&T Paper string to double, double to string and source code
Boost Spirit
Just learning C# at the moment.
I don't get why anyone would use integers when they could use floats or doubles...
Floats will add/subtract whole numbers AND decimal numbers, so why would anyone ever bother using a plain old integer?
It seems like floats or doubles will take care of anything an integer can do, with the bonus of being able to handle decimal numbers too...
Thanks!
The main reason is the same reason we often prefer to use integer fractions instead of fixed-precision decimals. With rational fractions, (1/3) times 3 is always 1. (1/3) plus (2/3) is always 1. (1/3) times 2 is (2/3).
Why? Because integer fractions are exact, just like integers are exact.
But with fixed-precision real numbers -- it's not so pretty. If (1/3) is .33333, then 3 times (1/3) will not be 1. And if (2/3) is .66666, then (1/3)+(2/3) will not be one. But if (2/3) is .66667, then (1/3) times 2 will not be (2/3) and 1 minus (1/3) will not be (2/3).
And, of course, you can't fix this by using more places. No number of decimal digits will allow you to represent (1/3) exactly.
Floating point is a fixed-precision real format, much like my fixed-precision decimals above. It doesn't always follow the naive rules you might expect. See the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
To answer your question, to a first approximation, you should use integers whenever you possibly can and use floating point numbers only when you have to. And you should always remember that floating point numbers have limited precision and comparing two floating point numbers to see if they are equal can give results you might not expect.
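A small sketch of that last point, comparing an exact integer accumulation with the same loop over doubles:

using System;

class Program
{
    static void Main()
    {
        // Integers are exact: adding 1 ten times is exactly 10.
        int count = 0;
        for (int i = 0; i < 10; i++) count += 1;
        Console.WriteLine(count == 10);        // True

        // Doubles are not: 0.1 has no exact binary representation,
        // so ten additions do not land exactly on 1.0.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;
        Console.WriteLine(sum == 1.0);         // False
        Console.WriteLine(sum.ToString("R"));  // 0.9999999999999999
    }
}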
As you can see here, the different types each have their own size.
For example, when working with big databases, choosing a larger type than needed (say, a 64-bit type instead of a 32-bit int or float) can double the required storage.
You have different datatypes because they use a different amount of bits to store data.
Typically an integer will use less memory than a double, that is why one doesn't just use the largest possible datatype.
http://en.wikipedia.org/wiki/Data_type
Most of the answers given here deal directly with math concepts. There are purely computational reasons for using integers. Several jump to mind:
loop counters (yes, I know you can use a decimal/double in a for loop, but do you really need that degree of complexity?)
internal enumeration values
array indices
There are several reasons. Performance, memory, even the desire to not see decimals (without having to play with format strings).
Floating point:
In computing, floating point describes a method of representing an approximation to real numbers in a way that can support a wide range of values. Numbers are, in general, represented approximately to a fixed number of significant digits and scaled using an exponent.
Integer:
In computer science, an integer is a datum of integral data type, a data type which represents some finite subset of the mathematical integers.
So, precision is one argument.
There are several reasons. First off, like people have already said, double stores 64-bit numeric values, while int only requires 32-bit.
Float is a different case. Both int and float store 32-bit numbers, but float is less precise. A float value is precise up to 7 digits, but beyond that it is just an approximation. If you have larger numbers, or if there is some case where you purposefully want to force only integer values with no fractional numbers, int is the way to go. If you don't care about loss of precision and want to allow a wider range of values, you can use float instead.
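You can check the sizes directly; a quick sketch (sizeof on these built-in types is allowed in safe C# code):

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));     // 4 bytes, 32 bits
        Console.WriteLine(sizeof(float));   // 4 bytes, 32 bits
        Console.WriteLine(sizeof(double));  // 8 bytes, 64 bits
    }
}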
The primary reason for using integers is memory consumption and performance.
doubles are in most cases stored in 64-bit memory blocks (compared to 32 bits for an int) and use a somewhat complicated standard for representation (in some cases an approximation of the real value)
the complicated representation requires calculation of a mantissa and an exponent
in most cases it requires the use of a dedicated coprocessor for floating point arithmetic
for integers, complements and shifting can be used to speed up the arithmetic operations
there are a number of use cases where it is more appropriate (natural) to use integers, like indexing arrays, loops, getting the remainder of a division, counting, etc.
Also, if you needed to do all of this using types that represent real numbers, your code would be more error prone.
?(1.0-0.9-0.1)
-0.000000000000000027755575615628914
?((double)1.0-(double)0.9-(double)0.1)
-0.000000000000000027755575615628914
?((double)1.0-(double)0.9-(double)0.1).GetType()
{Name = "Double" FullName = "System.Double"}
?((double)1.0-(double)0.9-(double)0.1).ToString()
"-2,77555756156289E-17"
How can Double.ToString() display more characters (32) than a double's precision (15-16 digits)?
I expect that MyObject.ToString() represents just MyObject and not MyObject+SomeTrashFromComputer
Why
?0.1
0.1
?0.2-0.1
0.1
?0.1-0.1
0.0
BUT
?1.0-0.9-0.1
-0.000000000000000027755575615628914
WHY
?1.0-0.1-0.9
0.0
BUT
?1.0-0.9-0.1
-0.000000000000000027755575615628914
How can Double.ToString() display more characters (32) than a double's precision (15-16 digits)?
It isn't displaying 32 significant digits, it is displaying 17; leading zeros don't count. Floating point means the magnitude (the exponent) is tracked separately from the significant digits.
I expect that MyObject.ToString() represents just MyObject
It does; there may be a slight difference due to the mechanics of floating point numbers, but the true number is represented by the string precisely.
not MyObject+SomeTrashFromComputer
There is no trash, there is floating point inaccuracy. It exists in decimal too: try to write down 1/3 as a decimal number exactly. You can't; it involves a repeating decimal. Doubles are stored in base 2, so even 0.1 becomes a repeating "decimal".
Also note that you are getting two different representations because you are calling two different display methods. ToString has specific semantics, while your debugging window probably has different ones. Also look up scientific notation if you want to know what the E means.
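To see where the order dependence from the question comes from, it helps to print the intermediate results with a round-trip format; a small sketch:

using System;

class Program
{
    static void Main()
    {
        // 1.0 - 0.1 happens to round to exactly the same double as 0.9...
        Console.WriteLine((1.0 - 0.1).ToString("G17"));        // 0.90000000000000002
        // ...so subtracting 0.9 afterwards gives exactly 0.
        Console.WriteLine((1.0 - 0.1 - 0.9).ToString("G17"));  // 0

        // 1.0 - 0.9 does NOT land on the same double as 0.1...
        Console.WriteLine((1.0 - 0.9).ToString("G17"));        // 0.099999999999999978
        // ...so the final subtraction leaves a tiny residue.
        Console.WriteLine((1.0 - 0.9 - 0.1).ToString("G17"));  // -2.7755575615628914E-17
    }
}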
Check the documentation for System.Double:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
A value might not roundtrip if a floating-point number is involved. A value is said to roundtrip if an operation converts an original floating-point number to another form, an inverse operation transforms the converted form back to a floating-point number, and the final floating-point number is equal to the original floating-point number. The roundtrip might fail because one or more least significant digits are lost or changed in a conversion.
I think you're unclear on two aspects of a floating point number; the precision and the range.
The precision of a floating point representation is how closely it can approximate a given decimal. The precision of a double is 15-16 digits.
The range of a floating point representation is related to how large or small of a number can be approximated by that representation. The range of a double is +/-5.0e-324 to +/-1.7e308.
So in your case, the calculation is precise to roughly 16 significant digits, and beyond that it is not, as would be expected.
Some numbers that would seem simple are just not representable in standard floating point representation. If you require absolutely no deviation, you should use a different data type like decimal.