I have a number: 94,800,620,800
float is a 4-byte data type.
Int32 is also a 4-byte data type.
float f = 94800620800; // ok
Int32 t = 94800620800; // error
Please explain this problem. Why do I get an error when using Int32, and why can I use this number with the float data type when both of them are 4-byte data types? Thanks.
Because the number you are trying to assign is larger than the largest possible value for a number of type Int32, which happens to be 2,147,483,647. To note, the maximum value for a Single is 3.402823 × 10^38.
The max value for Int32 is 2,147,483,647 - which is less than 94,800,620,800.
A float can take a value in the following range: ±1.5 × 10^−45 to ±3.4 × 10^38
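If you want to check these limits yourself, the framework exposes them as constants (a quick sketch):

using System;

Console.WriteLine(int.MaxValue);    // 2147483647
Console.WriteLine(float.Epsilon);   // smallest positive float, about 1.4E-45 (documented as ~1.5 × 10^-45)
Console.WriteLine(float.MaxValue);  // largest float, about 3.4E+38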
Also, check out this SO question - what the difference between the float and integer data type when the size is same in java?. It's a Java question, but the concept is the same and there's a detailed explanation of the difference, even though they're the same size.
Because that number is too big for a 4 byte int. Scalar values like Int32 have a minimum and maximum limit (which are −2^31 and 2^31 − 1 in this case, respectively), and you simply can't store a value outside this range.
Floating point numbers are stored totally differently, so you won't get compiler errors with huge values, only possible precision problems later, during runtime.
Because of those types' internal representation.
float uses something like i.d × 2^n, where i is the integral part, d is the fractional part, and n is the exponent (of course, this all happens in base 2).
This way, you can store bigger numbers in a float, but it won't be as accurate as, say, an Int32 at storing integral numbers. If you try to convert
float f = 94800620800;
to an integral type large enough to hold its value, the result may not be the same as the initial 94800620800.
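For example (a minimal sketch; the exact stored value may differ from the literal in its last bits):

using System;

float f = 94800620800F;   // the literal is rounded to the nearest representable float
long back = (long)f;      // convert back to an integral type
Console.WriteLine(back);  // prints a nearby value, not necessarily 94800620800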
Integer types are exact representations, while floating point numbers are a combination of significant digits and an exponent.
The floating point wiki page explains how this works.
Maybe you should read the error message? ;)
Error Integral constant is too large
The maximum value of a 32-bit int is 2,147,483,647.
The float, on the other hand, works because it stores a mantissa and an exponent rather than just a plain number, so it can cope with a much bigger range at the expense of possibly losing precision.
Try printing out your float and you will get 94800620000 instead of 94800620800, as the lower bits are lost.
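For example (the exact formatting depends on the .NET runtime, but the precision loss is the same):

using System;

float f = 94800620800F;
Console.WriteLine(f.ToString("F0"));  // roughly 94800620000 - the low bits are gone
Console.WriteLine(94800620800);       // the 64-bit integer literal prints exactly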
Int32 has a max value of 2,147,483,647. Your value is much higher.
Take a look at System.Int32.MaxValue.
The provided constant value is not within the range of the int data type.
The range of Int32 goes from −2,147,483,648 to 2,147,483,647. Your variable is way out of range.
Is -10 between 1.5e-45 and 3.4e+38?
If yes, explain to me why. I am not that good at maths, so sorry for the basic level of these questions.
According to the C# documentation on the Microsoft site, the float type ranges between 1.5 × 10^−45 and 3.4 × 10^38.
But 1e-3 equals 0.001 and 1e-6 equals 0.000001. That means the more negative the exponent gets, the smaller the resulting value becomes, yet it is still greater than zero.
Here comes the problem: I tried to use a float variable, giving it -10 as a value and expecting to get an error, but to my surprise -10 was accepted.
I am confused.
The documentation doesn't say so explicitly, but the two floating-point types are signed. So -10 can be represented by a floating-point type just as 10 can.
There's a few things to note.
The first is that float (and for that matter, double) is signed, so the values can be either positive or negative (or zero).
The other is that the range is a matter of precision. If you try to set a float to a value of smaller absolute value than it can handle, like 1E-50, it will be set to zero rather than raise an error; that is what you get when you round 1 × 10^−50 to the precision float can cope with. If you try to give it a value of larger absolute value than it can handle, like 1E50, it will be set to ∞ (or −∞ for -1E50), because again that is as precisely as it can represent something that large.
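To see this behaviour (a minimal sketch; note that an out-of-range literal such as 1E50F is rejected at compile time, so the overflow below is produced at run time):

using System;

float tiny = 1E-50F;            // too small for float: silently rounds to zero
Console.WriteLine(tiny);        // 0

float big = float.MaxValue;
float huge = big * 10;          // too large: overflows to Infinity at run time
Console.WriteLine(huge);        // Infinity

float negative = -10F;          // the sign is stored separately; -10 is fine
Console.WriteLine(negative);    // -10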
That C# documentation is poorly written and has some errors or omissions:
The range it describes is only for the magnitude of positive finite numbers. The floating-point formats also include zero, negative numbers, and infinities. Thus, the true range is from −∞ to +∞.
It uses approximate decimal values to describe the range. The true values for float are:
The smallest representable positive number is exactly 2^−149, which is 1.40129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125e-45.
The largest representable finite number is exactly 2^128 − 2^104, which is 340282346638528859811704183484516925440.
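Both exact values can be verified in C# (a sketch; 2^-149, 2^128 and 2^104 are all exactly representable as doubles, so Math.Pow should introduce no rounding here):

using System;

Console.WriteLine(float.Epsilon == (float)Math.Pow(2, -149));                       // True
Console.WriteLine(float.MaxValue == (float)(Math.Pow(2, 128) - Math.Pow(2, 104)));  // True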
I may very well not have a proper understanding of significant figures, but the book
C# 6.0 in a Nutshell by Joseph Albahari and Ben Albahari (O’Reilly).
Copyright 2016 Joseph Albahari and Ben Albahari, 978-1-491-92706-9.
provides the table below for comparing double and decimal:

              double                        decimal
Precision     15-16 significant figures     28-29 significant figures
Range         ±(~10^-324 to ~10^308)        ±(~10^-28 to ~10^28)
Is it not counter-intuitive that, on the one hand, a double can hold fewer significant figures, while on the other it can represent numbers way bigger than decimal, which can hold more significant figures?
Imagine you were told you can store a value, but were given a limitation: You can only store 10 digits, 0-9 and a negative symbol. You can create the rules to decode the value, so you can store any value.
The first way you store things is simply as the value xxxxxxxxxx, meaning the number 123 is stored as 0000000123. Simple to store and read. This is how an int works.
Now you decide you want to store fractional numbers, so you change the rules a bit. Now you store xxxxxxyyyy, where x is the integer portion and y is the fractional portion. So, 123.98 would be stored as 0001239800. This is roughly how a Decimal value works. You can see the largest value I can store is 9999999999, which translates to 999999.9999. This means I have a hard upper limit on the size of the value, but the number of significant digits is large at 10.
There is a way to store larger values, and that's to store the x and y components of the formula x × 10^y in xxxxxxyyyy. So, to store 123.98, you need to store 123980-003, which decodes as 123980 × 10^-3 = 123.98. This means I can store much bigger numbers by changing y, but the number of significant digits is basically fixed at 6. This is basically how a double works.
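The same trade-off is easy to see with the real C# types (a small sketch):

using System;

decimal dec = 1234567890123456789012345678m;  // 28 significant digits, stored exactly
double dbl = (double)dec;                     // only about 15-16 of them survive

Console.WriteLine(dec);   // 1234567890123456789012345678
Console.WriteLine(dbl);   // approximately 1.23456789012346E+27 - the tail is rounded away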
The answer lies in the way that doubles are encoded. Rather than just being a direct binary representation of a number, they have 3 parts: sign, exponent, and fraction.
The sign is obvious, it controls + or -.
The fraction part is also obvious. It's a binary fraction that represents a number between 0 and 1.
The exponent is where the magic happens. It signifies a scaling factor.
The final floating-point value comes out to (-1)^sign * (1 + fraction) * 2^exponent.
This allows much higher values than a straight decimal number because of the exponent. There's a lot of reading out there on why this works and how to do addition and multiplication with these encoded numbers. Google around for "IEEE floating point format" or whatever topic you need. Hope that helps!
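As an illustration, here is one way to pull those three parts out of a double in C# and rebuild the value (a sketch that assumes a normal, i.e. non-denormalized, number):

using System;

double d = -123.98;
ulong bits = (ulong)BitConverter.DoubleToInt64Bits(d);

int sign = (int)(bits >> 63);                // 1 bit
int exponent = (int)((bits >> 52) & 0x7FF);  // 11 bits, stored with a bias of 1023
ulong fraction = bits & 0xFFFFFFFFFFFFFUL;   // 52 bits

// (-1)^sign * (1 + fraction) * 2^exponent, with the bias removed:
double rebuilt = Math.Pow(-1, sign)
               * (1 + fraction / Math.Pow(2, 52))
               * Math.Pow(2, exponent - 1023);
Console.WriteLine(rebuilt);                  // -123.98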
The range has nothing to do with the precision. double has a binary representation (base 2). Not all numbers can be represented exactly as we humans know them in decimal format, not to mention the accumulated rounding errors of addition and division. A larger range means a greater max value and a smaller min value than decimal.
decimal, on the other hand, is base 10. It has a smaller range (a smaller max value and a larger min value). This has nothing to do with precision: since it is not represented using a floating binary point, it can represent decimal numbers more precisely, and thus it is recommended for human-made numbers and calculations.
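A classic one-liner makes the difference visible (sketch):

using System;

Console.WriteLine(0.1 + 0.2 == 0.3);     // False: 0.1 and 0.2 have no exact base-2 representation
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: decimal works in base 10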
I have the following code:
int varOut;
int.TryParse(txt1.Text, out varOut); // Here txt1.Text = 4286656181793660
Here txt1.Text is a random 16-digit integer generated by JavaScript. But the above code always returns false, i.e., the varOut value is always zero.
What am I doing wrong here?
The limit for int (32-bit integer) is -2,147,483,648 to 2,147,483,647. Your number is too large.
For a large integer number such as yours, try parsing with long.TryParse (or Int64.TryParse, since Int64 is long in C#) instead. The range of long is about -9.2e18 to 9.2e18*.
long varOut;
long.TryParse(txt1.Text, out varOut); // Here txt1.Text = 4286656181793660
It should be sufficient for your number, which is only around 4.2e15 (4,286,656,181,793,660).
Alternatively, you may want to consider using decimal.TryParse if you want to have a decimal number (containing a fraction, with higher precision).
decimal varOut;
decimal.TryParse(txt1.Text, out varOut); // Here txt1.Text = 4286656181793660
It is a 128-bit data type with a range of -7.9e28 to 7.9e28 and 28-29 significant digits of precision; it fits best for any calculation involving money.
And, as a last remark to complete the answer: it may be unsafe to use double here - do not use it. Although double has a very wide range of ±5.0 × 10^−324 to ±1.7 × 10^308, its precision is only about 15-16 digits (reference).
double varOut;
double.TryParse(txt1.Text, out varOut); // Not a good idea, since the input number has 16 digits. Here txt1.Text = 4286656181793660
In this case, your number consists of 16 digits, which is on the borderline of double precision. Thus, in some cases, you may end up with a wrong result. Only if you are sure that your number will have at most 15 digits are you safe to use double.
*-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
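Whichever type you choose, note that TryParse reports success or failure through its return value, so it is safer to check it instead of relying on the out variable alone (a sketch using the question's txt1 textbox):

using System;

long varOut;
if (long.TryParse(txt1.Text, out varOut))
{
    Console.WriteLine(varOut);               // parsed successfully, e.g. 4286656181793660
}
else
{
    Console.WriteLine("Not a valid long.");  // parsing failed; varOut stays 0
}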
int is just shorthand for Int32; it's a 32-bit (signed) integer, meaning that it can't hold a number larger than around 2 billion. Your number is larger than that, and so is not a valid int value.
Use the MaxLength property to limit the number of digits, so the user cannot enter more than an Int32 can hold:
txt1.MaxLength = 9;
Looks like you may be using value(s) which exceed the capacity of the type you're using... look at https://msdn.microsoft.com/en-us/library/system.int32.maxvalue%28v=vs.110%29.aspx
Store it as a long instead of an int.
https://msdn.microsoft.com/en-us/library/ctetwysk.aspx
You should use long instead of int. Your number is too large for an int.
Use long.TryParse()
Your number is too large to convert into int.
Or you can use Int64.TryParse.
I feel like this should be easy to find somewhere online but I'm having a hard time.
Does anyone know what the c# value is for Double.Epsilon? I'm looking for the exact numerical value.
Here is its declaration:
[__DynamicallyInvokable]
public const double Epsilon = 4.94065645841247E-324;
No, this certainly is not correct.
First: The value of Double.Epsilon can easily be found out by either a small program or by reading the documentation:
4.94065645841247E-324
Second: Don't confuse this value with Machine Epsilon which is usually used in comparisons between two double values. See this question for more details on "Machine Epsilon".
MSDN page:
The value of this constant is 4.94065645841247e-324.
None of these answers gives the exact numerical value. The exact value is a power of 2, namely 2^-1074, as this is how IEEE floating point numbers are actually stored in modern computers. All the other answers given are decimal approximations. If you assign that decimal approximation to a double, it will round off to 2^-1074, so internally the register or memory location will receive the true "Epsilon" value. So, using the decimal constant to initialize a storage location to the minimum floating point value works, but the decimal constant is still not the actual value of this minimum floating point value.
Explanation: The smallest positive value written in IEEE notation is
0.0000000000000000000000000000000000000000000000000001B × 2^-1022.
(The B is for binary base)
That is a 1 shifted 52 bits to the right of the binary point, then shifted another 1022 bits for a total of 1074 bits. The leading sign bit is zero. Sign bit (1 bit) plus coefficient (52 bits) plus exponent (11 bits) give 64 bits of storage.
Note also this is a "denormalized" (subnormal) floating point value: the exponent is at its minimum of -1022 and there is no implicit leading 1 bit.
See https://en.wikipedia.org/wiki/IEEE_floating_point and search for "1074".
P.S. My calculator gives a more precise representation of the value of Double.Epsilon = 2^-1074 as 4.9406564584124654417656879286822e-324.
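You can confirm all of this from C# (a sketch; Int64BitsToDouble(1) builds the double whose 64-bit pattern has only the lowest bit set):

using System;

Console.WriteLine(double.Epsilon == BitConverter.Int64BitsToDouble(1));  // True: bit pattern 0x0000000000000001
Console.WriteLine(double.Epsilon == Math.Pow(2, -1074));                 // True: exactly 2^-1074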
Refreshing on floating points (also PDF), IEEE-754, and taking part in this discussion on floating point rounding when converting to strings got me tinkering: how can I get the maximum and minimum decimal values for a given floating point number, i.e. the values whose binary representations are equal?
Disclaimer: for this discussion, I like to stick to 32 bit and 64 bit floating point as described by IEEE-754. I'm not interested in extended floating point (80-bits) or quads (128 bits IEEE-754-2008) or any other standard (IEEE-854).
Background: Computers are bad at representing 0.1 in binary representation. In C#, a float represents this as 3DCCCCCD internally (C# uses round-to-nearest) and a double as 3FB999999999999A. The same bit patterns are used for decimal 0.100000005 (float) and 0.1000000000000000124 (double), but not for 0.1000000000000000144 (double).
For convenience, the following C# code gives these internal representations:
string GetHex(float f)
{
return BitConverter.ToUInt32(BitConverter.GetBytes(f), 0).ToString("X");
}
string GetHex(double d)
{
return BitConverter.ToUInt64(BitConverter.GetBytes(d), 0).ToString("X");
}
// float
Console.WriteLine(GetHex(0.1F));
// double
Console.WriteLine(GetHex(0.1));
In the case of 0.1, there is no lower decimal number that is represented with the same bit pattern, any 0.99...99 will yield a different bit representation (i.e., float for 0.999999937 yields 3F7FFFFF internally).
My question is simple: how can I find the lowest and highest decimal value for a given float (or double) that is internally stored in the same binary representation.
Why: (I know you'll ask) to find the error in rounding in .NET when it converts to a string and when it converts from a string, to find the internal exact value and to understand my own rounding errors better.
My guess is something like: take the mantissa, remove the rest, get its exact value, get one (mantissa-bit) higher, and calculate the mean: anything below that will yield the same bit pattern. My main problem is: how to get the fractional part as an integer (bit manipulation is not my strongest asset). Jon Skeet's DoubleConverter class may be helpful.
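For the bit-manipulation part, I guess the fraction bits can be masked out directly, along the lines of the GetHex helpers above (a sketch):

using System;

ulong bits = (ulong)BitConverter.DoubleToInt64Bits(0.1);
ulong mantissa = bits & 0xFFFFFFFFFFFFFUL;          // the low 52 fraction bits, as an integer
int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  // the unbiased exponent
Console.WriteLine(mantissa + " x 2^" + exponent);   // fraction bits of 0.1 and its exponent (-4)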
One way to get at your question is to find the size of an ULP, or Unit in the Last Place, of your floating-point number. Simplifying a little bit, this is the distance between a given floating-point number and the next larger number. Again, simplifying a little bit, given a representable floating-point value x, any decimal string whose value is between (x - 1/2 ulp) and (x + 1/2 ulp) will be rounded to x when converted to a floating-point value.
The trick is that (x +/- 1/2 ulp) is not a representable floating-point number, so actually calculating its value requires that you use a wider floating-point type (if one is available) or an arbitrary width big decimal or similar type to do the computation.
How do you find the size of an ulp? One relatively easy way is roughly what you suggested, written here in C-ish pseudocode because I don't know C#:
float absX = absoluteValue(x);
uint32_t bitPattern = getRepresentationOfFloat(absX);
bitPattern++;
float nextFloatNumber = getFloatFromRepresentation(bitPattern);
float ulpOfX = (nextFloatNumber - absX);
This works because adding one to the bit pattern of x exactly corresponds to adding one ulp to the value of x. No floating-point rounding occurs in the subtraction because the values involved are so close (in particular, there is a theorem of IEEE-754 floating-point arithmetic that if two numbers x and y satisfy y/2 <= x <= 2y, then x - y is computed exactly). The only caveats here are:
if x happens to be the largest finite floating point number, this won't work (it will return inf, which is clearly wrong).
if your platform does not correctly support gradual underflow (say an embedded device running in flush-to-zero mode), this won't work for very small values of x.
It sounds like you're not likely to be in either of those situations, so this should work just fine for your purposes.
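Since the question is about C#, here is a direct translation of that pseudocode (a sketch with the same caveats, following the style of the GetHex helpers in the question):

using System;

float UlpOf(float x)
{
    float absX = Math.Abs(x);
    uint bitPattern = BitConverter.ToUInt32(BitConverter.GetBytes(absX), 0);
    bitPattern++;  // the next representable float above absX
    float nextFloatNumber = BitConverter.ToSingle(BitConverter.GetBytes(bitPattern), 0);
    return nextFloatNumber - absX;  // exact, by the theorem quoted above
}

Console.WriteLine(UlpOf(0.1F));  // about 7.45E-09, i.e. 2^-27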
Now that you know what an ulp of x is, you can find the interval of values that rounds to x. You can compute ulp(x)/2 exactly in floating-point, because floating-point division by 2 is exact (again, barring underflow). Then you need only compute the value of x ± ulp(x)/2 in a suitably larger floating-point type (double will work if you're interested in float) or in a big decimal type, and you have your interval.
I made a few simplifying assumptions through this explanation. If you need this to really be spelled out exactly, leave a comment and I'll expand on the sections that are a bit fuzzy when I get the chance.
One other note about the following statement in your question:
In the case of 0.1, there is no lower
decimal number that is represented
with the same bit pattern
is incorrect. You just happened to be looking at the wrong values (0.999999... instead of 0.099999... -- an easy typo to make).
Python 3.1 just implemented something like this: see the changelog (scroll down a bit), bug report.