How to convert hexadecimal string to number in C#?
I tried as below but it is giving a negative value:
var dec1 = long.Parse("F0A6AFE69D2271E7", System.Globalization.NumberStyles.HexNumber);
// Result: dec1 = -1106003253459258905
While in JavaScript it works fine, as below:
var dec2 = parseInt("F0A6AFE69D2271E7", 16);
// Result: dec2 = 17340740820250292000
The number you're parsing is outside the range of long - long.MaxValue is 0x7FFFFFFFFFFFFFFF, and your value is 0xF0A6AFE69D2271E7.
Use ulong.Parse instead and it should be fine.
I suspect it's "working" in JavaScript because (at the time of writing) all JavaScript numbers are 64-bit floating point values, so have a huge range - but less precision, which is why a value which is clearly odd (last hex digit 7) is giving an even result.
See https://en.wikipedia.org/wiki/Two%27s_complement to understand why your number was interpreted as a negative number. Precisely speaking, it's not an "out of range" problem: your hex representation simply maps to a negative value. The long type is a signed integer and cannot hold a full 64-bit positive number, because its most significant bit is reserved for representing negative numbers. Try using ulong instead.
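A minimal sketch of the suggested fix (the value is the one from the question):
using System;
using System.Globalization;

class HexParseDemo
{
    static void Main()
    {
        // ulong covers the full 64-bit unsigned range, so the top bit is not a sign bit.
        ulong dec1 = ulong.Parse("F0A6AFE69D2271E7", NumberStyles.HexNumber);

        Console.WriteLine(dec1);   // 17340740820250292711 (exact; the JavaScript double rounds it to ...292000)
    }
}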
So I just started experimenting with C#. I have one line of code and the output is a "?".
Code: Console.WriteLine(Math.Pow(Math.Exp(1200), 0.005d));
Output: ?
I am using Visual Studio and it also says exited with code 0.
It should output 403.4287934927351. I also tried it out with GeoGebra and it's correct, as shown in the image, so it's not infinity.
The C# double type (which Math.Exp returns) is a fixed-size type (64 bits), so it cannot represent arbitrarily large numbers. The largest number it can represent is the double.MaxValue constant, which is on the order of 10^308 - far less than what you are trying to compute (e^1200 is roughly 10^521).
When the result of a computation exceeds the maximum representable value, a special "number" representing infinity is returned. So
double x = Math.Exp(1200); // cannot represent this with double type
bool isInfinite = double.IsInfinity(x); // true
Once you have this "infinity", all further computations involving it just return infinity back; there is nothing else they can do. So the whole expression Math.Pow(Math.Exp(1200), 0.005d) returns "infinity".
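A minimal sketch of that propagation (what it prints for infinity depends on your culture and console encoding, as explained below):
using System;

class InfinityDemo
{
    static void Main()
    {
        double x = Math.Exp(1200);        // overflows double, so x is double.PositiveInfinity
        double y = Math.Pow(x, 0.005d);   // any further operation on infinity stays infinity

        Console.WriteLine(double.IsInfinity(y)); // True
        Console.WriteLine(y);                    // the culture's PositiveInfinitySymbol, e.g. "∞" (or "?" if the console cannot encode it)
    }
}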
When you try to write the result to the console, it gets converted to a string. The rules for converting the aforementioned infinity to a string are as follows:
Regardless of the format string, if the value of a Single or Double
floating-point type is positive infinity, negative infinity, or not a
number (NaN), the formatted string is the value of the respective
PositiveInfinitySymbol, NegativeInfinitySymbol, or NaNSymbol property
that is specified by the currently applicable NumberFormatInfo object.
In your current culture, PositiveInfinitySymbol is likely "∞", but your console encoding probably cannot represent that symbol, so it outputs "?". You can change the console encoding to UTF-8 like this:
Console.OutputEncoding = System.Text.Encoding.UTF8;
And then it will show "∞" correctly.
There is no framework-provided type for working with arbitrarily sized rational numbers, as far as I know. For integers there is the BigInteger type, though.
In this specific case you can do fine with just double, because you can do the same thing with:
Console.WriteLine(Math.Exp(1200 * 0.005d));
// outputs 403.4287934927351
Now no intermediate result exceeds the capacity of double, so it works fine.
For cases where that's not possible, there are third-party libraries that allow working with arbitrarily large rationals.
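For completeness, a minimal sketch of the framework-provided System.Numerics.BigInteger mentioned above - it handles arbitrarily large integers (not rationals, so it is not a direct fix for e^1200 itself):
using System;
using System.Numerics;

class BigIntegerDemo
{
    static void Main()
    {
        // BigInteger has no fixed size, so it never overflows to infinity.
        BigInteger big = BigInteger.Pow(10, 400);   // 10^400, far beyond double.MaxValue (~1.8e308)
        Console.WriteLine(big.ToString().Length);   // 401 (digits)
    }
}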
I have a line in my function that calculates the sum of two digits.
I get the sum with this syntax:
sum += get2DigitSum((acctNumber[0] - '0') * 2);
which multiplies the number at index 0 by 2.
public static int get2DigitSum(int num)
{
    // Sum of the tens digit and the ones digit.
    return (num / 10) + (num % 10);
}
Let's say we have the number 9 at index 0. If I have acctNumber[0] - '0', it passes 9 into the other function. But if I don't have the - '0' after acctNumber[0], it passes 12. I don't understand why I get the wrong result if I don't use - '0'.
The text "0" and the number 0 are not at all equal to a computer.
The character '0' in fact has the ASCII code 48 (0x30 in hex), so to convert the character '0' into the number 0 you need to subtract 48. In C and most languages based on it, this can be written as subtracting the character '0' itself, since it has the numerical value 48.
The beauty is that the character '1' has the ASCII number 49, so subtracting the number 48 (or the character '0') gives 49 - 48 = 1, and so on.
So the important part is: computers are not only sensitive to the data (patterns of bits in some part of the machine) but also to the interpretation of that data. In your case, interpreting it as text and interpreting it as a number are not the same; they differ by 48, which you need to get rid of with a subtraction.
Because you are providing acctNumber[0] to get2DigitSum.
get2DigitSum accepts an integer, but acctNumber[0] is not an integer; it holds a char, which represents a character with an underlying integer value.
Therefore, you need to subtract the '0' to get the integer.
'0' to '9' have ASCII values of 48 to 57.
When you subtract two char values, their ASCII values are actually subtracted. That's why you need to subtract '0'.
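A small sketch using the question's get2DigitSum to show where the unexpected 12 comes from (the account number here is made up):
using System;

class CharDigitDemo
{
    public static int get2DigitSum(int num)
    {
        return (num / 10) + (num % 10);
    }

    static void Main()
    {
        string acctNumber = "9123";   // hypothetical account number

        Console.WriteLine((int)acctNumber[0]);          // 57: the char '9' is stored as code 57
        Console.WriteLine(get2DigitSum(acctNumber[0])); // 12: 5 + 7, computed from 57, not from 9
        Console.WriteLine(acctNumber[0] - '0');         // 9:  57 - 48 gives the actual digit
        Console.WriteLine(get2DigitSum((acctNumber[0] - '0') * 2)); // 9: 1 + 8 from 18
    }
}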
Internally, all characters are represented as numbers; they only get converted into nice glyphs for display.
Now, the digits 0-9 are ASCII codes 48-57; basically they are offset by +48. Past 57 you find the English alphabet, first in upper case and then in lower case, and before that various operators and even a bunch of unprintable characters.
Normally you would not be doing this kind of math at all. You would feed the whole string into a Parse() or TryParse() function and then work with the parsed numbers (see the sketch after this list). There are a few cases where you would not do that and instead go for "math with characters":
You did not know about Parse and integers when you made it.
You want to support arbitrarily sized numbers in your calculations. This is a common beginner approach (the proper way is BigInteger).
You might be doing stuff like sorting mixed letter/number strings by the fully interpreted number (so 01 would come before 10), the same way Windows sorts files with numbers in them.
You do not have a prewritten parse function, like when I started learning with C++ back in 2000.
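A minimal sketch of the usual Parse/TryParse route (the input string is made up):
using System;

class ParseDemo
{
    static void Main()
    {
        string acctNumber = "1234";   // hypothetical input

        // Parse the whole string at once instead of doing per-character math.
        if (int.TryParse(acctNumber, out int value))
        {
            Console.WriteLine(value * 2);   // 2468
        }
        else
        {
            Console.WriteLine("Not a valid number");
        }
    }
}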
Now I know that converting an int to hex is simple, but I have an issue here.
I have an int that I want to convert to hex and then add another hex value to it.
The simple solution is int.ToString("X"), but after my int is turned into hex it is also turned into a string, so I can't add anything to it until it is turned back into an int again.
So my question is: is there a way to turn an int into hex without it being turned into a string as well? I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
No.
Look at it this way: what is the difference between these two?
var i = 10;
var i = 0xA;
As values, they are exactly the same. As representations, the first one is decimal notation and the second one is hexadecimal notation. The "X" you use is the hexadecimal format specifier, which generates the hexadecimal notation of that numeric value.
Be aware that you can parse this hexadecimal-notation string back to an integer any time you want.
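A minimal sketch of that round trip (the values are arbitrary):
using System;
using System.Globalization;

class HexRoundTripDemo
{
    static void Main()
    {
        int value = 0xA;   // the same value as writing 10
        value += 20;       // just normal integer math

        string hex = value.ToString("X");                   // "1E"
        int back = int.Parse(hex, NumberStyles.HexNumber);  // 30 again

        Console.WriteLine($"{value} {hex} {back}");         // 30 1E 30
    }
}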
C# convert integer to hex and back again
There is no need to convert. The number ten is ten whether you write it in binary or hex; the representation differs depending on the base, but the value is the same. So just add another integer to your integer, and convert the final result to a hex string when you need it.
Take an example. Assume you have
int x = 10 + 10; // answer is 20 or 0x14 in Hex.
Now, if you added
int x = 0x0A + 0x0A; // x == 0x14
The result would still be 0x14. See?
The numerals 10 and 0x0A have the same value; they are just written in different bases.
A hexadecimal string, though, is a different beast.
In the above case that could be "0x14".
For the computer this would be stored as '0', 'x', '1', '4': four separate characters (or the bytes representing those characters in some encoding), whereas an integer is stored as a single binary-encoded value.
I guess you are missing the point of what hex and int are. They both represent numbers: 1, 2, 3, 4, and so on. Decimal and hexadecimal are just two ways of looking at numbers; in the end they are the same numbers. For example, 5 + 5 is 10 in decimal and A in hex, but it is the same number; only the view of it is different.
Hex is just a way to represent a number. The same statement is true for the decimal and binary number systems, although (with the exception of some custom-made number types, such as BigNums) everything will be stored as binary as long as it is an integer (by integer I mean not a floating-point number). What you really want to do is probably perform calculations on integers and then print them as hex, which has already been described in this topic: C# convert integer to hex and back again
The short answer: no, and there is no need.
The integer one hundred and seventy-nine (179) is B3 in hex, 179 in base 10, 10110011 in base 2, and 20122 in base 3. The base doesn't change the value of the number: B3, 179, 10110011, and 20122 are all the same number, just represented differently. So as long as you do your mathematical operations on the numbers themselves, it doesn't matter what base they are written in.
In your case, hex numbers can contain characters such as 'A', 'B', 'C', and so on. So when a value's hex representation contains a letter, it has to be a string, as letters are not ints. To do what you want, it is best to convert both numbers to regular ints, do the math, and convert to hex afterwards. If you wanted to add them (or do any other operation) while they still look like hex, you would have to change the behavior of the desired operator on strings, which is a hassle.
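A small sketch of that approach, using made-up hex strings:
using System;

class HexMathDemo
{
    static void Main()
    {
        // Parse the hex strings into plain ints and do the math on the ints...
        int a = Convert.ToInt32("B3", 16);   // 179
        int b = Convert.ToInt32("0A", 16);   // 10

        int sum = a + b;                     // 189

        // ...and only go back to a hex string for display.
        Console.WriteLine(sum.ToString("X")); // BD
    }
}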
I am reading in an XML file and reading a specific value of 10534360.9
When I parse the string for a decimal ( decimal.Parse(afNodes2[0][column1].InnerText) ) I get 10534360.9, but when I parse it for a float ( float.Parse(afNodes2[0][column1].InnerText) ) I get 1.053436E+07.
Can someone explain why?
You are seeing the value represented in "E" or "exponential" notation.
1.053436E+07 is equivalent to 1.053436 x 10^7. The System.Single (float) data type (32 bits, about 7 significant digits of precision) cannot hold 10,534,360.9 exactly; it stores the nearest value it can represent (10,534,361) and by default displays only 7 significant digits of it.
You're seeing it represented that way because it is the default format produced by the Single.ToString() method, which the debugger uses to display values on the watch screens, console, etc.
EDIT
It probably goes without saying, but if you want to retain the precision of the original value in the XML, you could choose a data type that can retain more information about your numbers:
System.Double (64 bits, 15-16 digits)
System.Decimal (128 bits, 28-29 significant digits)
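A small sketch of the difference, assuming the XML text is "10534360.9" (the float output shown is what older .NET Framework prints; newer .NET prints the shortest round-trippable form instead):
using System;
using System.Globalization;

class PrecisionDemo
{
    static void Main()
    {
        string text = "10534360.9";   // the value read from the XML

        float f = float.Parse(text, CultureInfo.InvariantCulture);
        decimal d = decimal.Parse(text, CultureInfo.InvariantCulture);

        Console.WriteLine(f);   // 1.053436E+07 (float keeps only ~7 significant digits)
        Console.WriteLine(d);   // 10534360.9   (decimal keeps all the digits)
    }
}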
1.053436E+07 == 1.053436 x 10^7 ≈ 10534360.9
Those are essentially the same number, just displayed differently.
Because float has a precision of only about 7 significant digits.
Decimal has a precision of 28-29 significant digits.
When viewing the data in a debugger, or displaying it via .ToString, by default it might be formatted using scientific notation:
Some examples of the return value are "100", "-123,456,789", "123.45e+6", "500", "3.1416", "600", "-0.123", and "-Infinity".
To format it as exact output, use the R (round trip) format string:
myFloat.ToString("R");
http://msdn.microsoft.com/en-us/library/dwhawy9k(v=vs.110).aspx
I am trying to convert the double value 9007199254740992.0 to a string.
But there seems to be a rounding error (the last 2 becomes a 0):
(9007199254740992.0).ToString("#") // Returns "9007199254740990"
(9007199254740992.0).ToString() // Returns "9.00719925474099E+15"
First I thought that maybe the number couldn't be represented as a double. But it can. This can be seen by casting it to a long and then converting it to a string.
((long)9007199254740991.0).ToString() // Returns "9007199254740991"
((long)9007199254740992.0).ToString() // Returns "9007199254740992"
Also, I found that if I use the "R" format, it works.
(9007199254740992.0).ToString("R") // Returns "9007199254740992"
Can anyone explain why ToString("#") doesn't return the double with the integer part at full precision?
As can be seen on MSDN:
By default, the return value only contains 15 digits of precision
although a maximum of 17 digits is maintained internally. If the value
of this instance has greater than 15 digits, ToString returns
PositiveInfinitySymbol or NegativeInfinitySymbol instead of the
expected number. If you require more precision, specify format with
the "G17" format specification, which always returns 17 digits of
precision, or "R", which returns 15 digits if the number can be
represented with that precision or 17 digits if the number can only be
represented with maximum precision.
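A small sketch of the difference in practice (the default-ToString output shown matches the question, i.e. older .NET Framework; current .NET's default ToString already round-trips doubles):
using System;

class RoundTripDemo
{
    static void Main()
    {
        double value = 9007199254740992.0;   // 2^53, exactly representable as a double

        Console.WriteLine(value.ToString());      // 9.00719925474099E+15 (15 significant digits)
        Console.WriteLine(value.ToString("G17")); // 9007199254740992
        Console.WriteLine(value.ToString("R"));   // 9007199254740992
    }
}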