The title says everything. I'm trying to convert a char into an int in Visual Studio.
I have already tried this:
int a;
a = (int)x;
System.Console.WriteLine(a);
but it's not giving me anything besides this (from trying to understand the code):
114117105
This will just work:
//a char is a 16-bit numerical value,
//usually used for the representation of characters
//here I assign 'a' to it; keep in mind: 'a' also has a numeric representation,
//i.e. 97
char x = 'a';
int a;
//`a` is an int and, through the cast, receives the numeric value of x
//apart from the bit width, the data contents are the same;
//assigning to an `int` only changes the "purpose", i.e. the representation
a = (int)x;
//since we put the value in an `int`, a number is shown (because of its purpose)
//if we wrote out x instead, the character 'a' would be shown
System.Console.WriteLine(a);
Output
97
As you may have understood by now, a string is an array of chars.
Therefore a string is hard to represent as a single number, because it is two-dimensional.
It would be the same as asking to convert 0, 4, 43434, 878728, 3477, 3.14159265 into a single number.
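If the value that was cast actually came from a whole string rather than a single char, every character contributes its own number, which is likely where the long run of digits in the question comes from. A minimal sketch (the string "rui" is only an assumed example input):
string s = "rui";
foreach (char ch in s)
{
    int value = ch;              // implicit char -> int conversion
    System.Console.Write(value); // prints 114, 117 and 105 back to back: 114117105
}
System.Console.WriteLine();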
https://dotnetfiddle.net/qSYUdP
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/char
As for why the output for 'a' is 97: you can look it up in a character table, e.g. ASCII.
Please note that the actual character that is displayed is determined by the chosen font/character table. Most fonts implement ASCII, but it's not guaranteed, so 97 will not always produce 'a'.
Related
I have a line in my function that calculates the sum of two digits.
I get the sum with this syntax:
sum += get2DigitSum((acctNumber[0] - '0') * 2);
which multiplies the number at index 0 by 2.
public static int get2DigitSum(int num)
{
    return (num / 10) + (num % 10); //sum of the tens digit and the ones digit
}
Let's say we have the number 9 at index 0. If I have acctNumber[0] - '0' it passes the 9 into the other function. But if I don't have the - '0' after acctNumber[0] it passes 12. I don't understand why I get the wrong result if I don't use - '0'.
The text "0" and the number 0 are not at all equal to a computer.
The character '0' has in fact the ASCII number 48 (or 0x30 in hex), so to convert the character '0' into the number 0 you need to subtract 48 - in C and most languages based on it, this can be written as subtracting the character '0', which has the numerical value 48.
The beauty is that the character '1' has the ASCII number 49, so subtracting the number 48 (or the character '0') gives 49 - 48 = 1, and so on.
So the important part is: computers are not only sensitive to data (patterns of bits in some part of the machine), but also to the interpretation of this data. In your case, interpreting it as text and interpreting it as a number is not the same, but gives a difference of 48, which you need to get rid of by a subtraction.
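As a small illustration (a hedged sketch, not the asker's actual code; acctNumber is just an assumed digit string):
string acctNumber = "9123";     // assumed example account number
char digitChar = acctNumber[0]; // '9', whose character code is 57
int wrong = digitChar;          // 57: the character code, not the digit
int right = digitChar - '0';    // 57 - 48 = 9: the digit we actually want
System.Console.WriteLine($"{wrong} vs {right}");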
Because you are providing acctNumber[0] to get2DigitSum.
get2DigitSum accepts an integer, but acctNumber[0] is not an integer; it holds a char, which represents a character with an integer value.
Therefore, you need to subtract the '0' to get the integer.
'0' to '9' have ASCII values of 48 to 57.
When you subtract two char values, their ASCII values actually get subtracted. That's why you need to subtract '0'.
Internally, all characters are represented as numbers, numbers that only get converted into nice pictograms during display.
Now the digits 0-9 are ASCII codes 48-57; basically they are offset by +48. Past 57 you find the English alphabet in upper and then lower case, and before that various operators and even a bunch of unprintable characters.
Normally you would not be doing this kind of math at all. You would feed the whole string into a Parse() or TryParse() function and then work with the parsed numbers (see the sketch after this list). There are a few cases where you would not do that and instead go for "math with characters":
You did not know about Parse and integers when you made it.
You want to support arbitrarily sized numbers in your calculations. This is a common beginner approach (the proper way is BigInteger).
You might be doing stuff like sorting mixed letter/number strings by the fully interpreted number (so 01 would come before 10), the same way Windows sorts files with numbers in them.
You do not have a prewritten parse function, as was my situation when I started learning C++ back in 2000.
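A sketch of that usual approach, assuming the input is a plain digit string (the variable name is made up for the example):
string accountNumber = "12345678";
if (int.TryParse(accountNumber, out int parsed))
{
    System.Console.WriteLine(parsed * 2); // arithmetic on the parsed number, no char math
}
else
{
    System.Console.WriteLine("not a number");
}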
So I was looking up C# Caesar ciphers online and I found this website:
https://www.programmingalgorithms.com/algorithm/caesar-cipher
I looked through and generally the code made sense to me until this part:
char offset = char.IsUpper(ch) ? 'A' : 'a';
return (char)((((ch + key) - offset) % 26) + offset);
I understand the ternary operator; it's mainly the second line that I can't make sense of. It returns a character, but somehow adds a character and a number together, subtracts a character, takes the modulus and then adds on a character?
The only explanation I've come up with is that each character has an ID and it's doing the operations on that rather than the character itself?
Honestly it's a bit beyond me, if someone could explain it that would be great.
Thanks in advance.
Say you have a key pressed, say it was F; the ASCII code will be 0x46 (70 in decimal),
thus:
int ch = 0x46;
then the value is shifted by the key parameter (let's take 21):
int key = 21;
the offset is just the gap between a letter's ASCII code and its index in the alphabet:
'A' - 'A' = 0 -> A is at index 0 of letters
'B' - 'A' = 1 -> B is at index 1 of letters
...
'Z' - 'A' = 25 -> Z is at index 25
same thing when letters are lowercase, using 'a'.
now the % 26 performs a round robin on the letters,
thus (('F' + 21) - 'A') % 26 gives 0,
and then coming back into the letter range:
0 + 'A' = 'A'
As described in your title, this is just a Caesar cipher in C#.
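Put together, the arithmetic can be wrapped in a small method; this is only a sketch of the same idea, not the exact code from the linked page:
static char Shift(char ch, int key)
{
    if (!char.IsLetter(ch)) return ch;                   // leave non-letters untouched
    char offset = char.IsUpper(ch) ? 'A' : 'a';          // start of the relevant letter range
    return (char)(((ch - offset + key) % 26) + offset);  // rotate within 0..25, then map back
}
System.Console.WriteLine(Shift('F', 21));                // prints A, matching the walk-through above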
According to ECMA-334 (C# language spec):
The char type is used to represent Unicode code units. A variable of type char represents a single 16-bit Unicode code unit.
According to the unicode.org glossary:
Code Unit. The minimal bit combination that can represent a unit of encoded text for processing or interchange. The Unicode Standard uses 8-bit code units in the UTF-8 encoding form, 16-bit code units in the UTF-16 encoding form, and 32-bit code units in the UTF-32 encoding form.
From these two resources we can infer that the char type is a 16-bit wide field of binary digits. What better way to implement a 16-bit wide field of binary digits than as a 16-bit integer, hmmmm?
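A quick way to convince yourself of this in C# (just a sketch):
System.Console.WriteLine(sizeof(char));       // 2 bytes, i.e. 16 bits
System.Console.WriteLine((int)char.MaxValue); // 65535, the largest 16-bit value
System.Console.WriteLine((ushort)'a');        // 97: the same bits read as a plain number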
I have some simple code as follows:
int n=Console.Read();
Console.WriteLine(n);
When I give 100 as input, it prints only 49, which is the ASCII decimal code for 1; what about the remaining zeros? I also found on the MSDN website: "The next character from the input stream, or negative one (-1) if there are currently no more characters to be read.", and Read() has an integer return type. Is it actually returning the number of characters read? Then what is the use of it?
It returns the next character in the input stream. 1 is the first thing you input, and the 0s must still be in the input stream.
The int value you were getting is a char being cast as an int. If you change int n to char n (and cast it as a char) the output will be "1".
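If the goal is to read the whole line "100" as one number rather than one character at a time, the usual pattern is ReadLine plus parsing (a minimal sketch):
string line = System.Console.ReadLine();      // reads "100" in one go
if (int.TryParse(line, out int n))
{
    System.Console.WriteLine(n);              // prints 100
}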
Now I know that converting an int to hex is simple, but I have an issue here.
I have an int that I want to convert to hex and then add another hex value to it.
The simple solution is int.ToString("X"), but after my int is turned to hex it is also turned into a string, so I can't add anything to it until it is turned back to an int again.
So my question is: is there a way to turn an int to hex while avoiding having it turned into a string as well? I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
I mean a quick way such as int.ToString("X") but without the int being turned to string.
No.
Look at it this way: what is the difference between these?
var i = 10;
var i = 0xA;
As a value, they are exactly the same. As a representation, the first one is decimal notation and the second one is hexadecimal notation. The X you use is the hexadecimal format specifier, which generates the hexadecimal notation of that numeric value.
Be aware that you can parse this hexadecimal notation string back to an integer any time you want.
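For example, a sketch of the round trip between the value and its hexadecimal notation:
int value = 255;
string hex = value.ToString("X");            // "FF": a string, only the notation changes
int back = System.Convert.ToInt32(hex, 16);  // 255 again
int sum = back + 0x10;                       // plain integer math: 271, i.e. 0x10F
System.Console.WriteLine($"{hex} -> {back} -> {sum:X}");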
C# convert integer to hex and back again
There is no need to convert. The number ten is ten; write it in binary or hex and the representation will differ depending on which base you write it in, but the value is the same. So just add another integer to your integer, and convert the final result to a hex string when you need it.
Take an example. Assume you have
int x = 10 + 10; // answer is 20 or 0x14 in Hex.
Now, if you added
int x = 0x0A + 0x0A; // x == 0x14
Result would still be 0x14. See?
Numeric 10 and 0x0A have the same value; they are just written in a different base.
A hexadecimal string, though, is a different beast.
In the above case that could be "0x14".
For the computer this would be stored as '0', 'x', '1', '4': four separate characters (or bytes representing these characters in some encoding), while an integer is stored as a single number (encoded in binary form).
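As a tiny illustration of that difference (just a sketch):
int number = 0x14;    // stored as a single integer with value 20
string text = "0x14"; // stored as the four characters '0', 'x', '1', '4'
System.Console.WriteLine(number + 1);  // 21: integer math works directly
System.Console.WriteLine(text.Length); // 4: it is just text
System.Console.WriteLine(System.Convert.ToInt32(text.Substring(2), 16)); // parse back to 20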
I guess you are missing the point of what hex and int are. They both represent numbers: 1, 2, 3, 4, etc. Decimal and hexadecimal are just two ways of looking at the same numbers; in the end they are the same numbers. For example 5 + 5 = 10 (as decimal) and A (as hex), but it is the same number; only the view of it is different.
Hex is just a way to represent a number. The same statement is true for the decimal and binary number systems, although, with the exception of some custom-made number types (BigNums etc.), everything will be stored as binary as long as it is an integer (by integer I mean not a floating-point number). What you probably really want to do is perform calculations on integers and then print them as hex, which has already been described in this topic: C# convert integer to hex and back again
The short answer: no, and there is no need.
The integer one hundred and seventy-nine (179) is B3 in hex, 179 in base-10, 10110011 in base-2 and 20122 in base-3. The base of the number doesn't change its value. B3, 179, 10110011, and 20122 are all the same number; they are just represented differently. So as long as you do your mathematical operations on numbers in the same base, it doesn't matter what that base is.
So in your case with hex numbers, they can contain characters such as 'A', 'B', 'C', and so on. When you get a value in hex, if its representation contains a letter it will have to be a string, as letters are not ints. To do what you want, it would be best to convert both numbers to regular ints, do the math, and convert to hex afterwards. The reason is that if you want to be able to add (or do whatever operation) with them looking like hex, you are going to need to change the behavior of the desired operator on strings, which is a hassle.
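If you want to see those representations from code, Convert.ToString with a base covers 2, 8, 10 and 16 (base-3 is not supported, so this sketch leaves it out):
int n = 179;
System.Console.WriteLine(System.Convert.ToString(n, 16)); // "b3"
System.Console.WriteLine(System.Convert.ToString(n, 10)); // "179"
System.Console.WriteLine(System.Convert.ToString(n, 2));  // "10110011"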
I have the character '¿'. If I cast it to an integer in C, the result is -61, and the same cast in C# gives 191. Can someone explain the reason to me?
C Code
char c = '¿';
int I = (int)c;
Result I = -62
C# Code
char c = '¿';
int I = (int)c;
Result I = 191
This is how signed/unsigned numbers are represented and converted.
It looks like your C compiler's default is to use a signed byte as the underlying type for char (since you are not explicitly specifying unsigned char, the compiler's default is used; see Why is 'char' signed by default in C++?).
So 191 (0xBF) as a signed byte means a negative number (the most significant bit is 1): -65.
If you used unsigned char, the value would stay positive as you expect.
If your compiler used a wider type for char (e.g. short), that 191 would stay a positive 191 irrespective of whether char is signed or not.
In C#, char is always unsigned; see MSDN char:
Type: char
Range: U+0000 to U+FFFF
So 191 will always convert to int as you expect.
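C# has no signed char, but the same reinterpretation can be imitated with byte versus sbyte (a sketch of the idea, not of what the C compiler literally does):
byte unsignedView = 0xBF;                           // 191
sbyte signedView = unchecked((sbyte)unsignedView);  // same bit pattern read as signed: -65
System.Console.WriteLine(unsignedView);             // 191
System.Console.WriteLine(signedView);               // -65
System.Console.WriteLine((int)'¿');                 // 191: in C# char is an unsigned 16-bit value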