Convert int to alphabetical value - C#

I would like to convert ints between 1 and 26 to the corresponding alphabetical letter as a char.
I saw somewhere that I could use char outPut = (char)currentValue; or Convert.ToChar(currentValue), however neither seems to work. I am testing this by having the program print the char to the console; both attempts have resulted in an empty character being printed.
I am making a program that takes in a letter as a char, converts it to an int to do some maths on it, and then converts the final int back to a letter and prints that char to the console.

char outPut = (char)currentValue;
would give you a char that represents a non-printable control character (the character codes 0 through 31 are control characters), which is why you see an empty character printed.
What you actually want is a char representing a letter. The following code will give you the ith letter of the alphabet in lowercase:
char c = (char)((int)'a' + i - 1);
Or in uppercase:
char c = (char)((int)'A' + i - 1);
It uses the fact that letters are represented by consecutive integers.
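For example, a short sketch (with a hypothetical 1-based index i) that converts in both directions:
int i = 3;                            // 1 = 'a', 26 = 'z'
char lower = (char)('a' + i - 1);     // 'c'
char upper = (char)('A' + i - 1);     // 'C'
int backToIndex = lower - 'a' + 1;    // 3 -- subtract the base letter and add 1 to get back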

You can offset by whichever base character you want; in this case, I use 'a'.
Console.WriteLine((char)(5 + 'a'));
As your range starts at 1, you need to offset by 'a' - 1.
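For example, a quick sketch assuming a value of 5 in your 1-based range:
int currentValue = 5;
Console.WriteLine((char)(currentValue + 'a' - 1)); // prints e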

int and char can be converted back and forth using the ASCII table.
The ASCII value for 'A' is 65, so you can get the char 'A' by doing
char a = (char) 65;
To convert it back to the 1-26 representation, just do
int num = a - 65 + 1;
You would need to do some boundary checking to make sure the number is within range, of course.
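A minimal sketch of that check, assuming the 1-26 range from the question:
int num = 3;  // hypothetical input, expected to be between 1 and 26
if (num < 1 || num > 26)
{
    throw new ArgumentOutOfRangeException(nameof(num), "Value must be between 1 and 26.");
}
char letter = (char)(num + 65 - 1);  // 65 is 'A', so 1 maps to 'A' and 26 to 'Z'
Console.WriteLine(letter);           // prints C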

Related

C# - How to convert from int to short?

I'm doing an exercise where the output is different from the input.
For example, if I input the number "6" into the console, the output will be 54. What is the reason behind this output, and how could I get the same number as the input?
static void Main()
{
    int intVal;
    short ShortVal;
    intVal = Console.Read();
    ShortVal = (short)intVal;
    Console.WriteLine("{0}", ShortVal);
}
The answer is in the documentation for Console.Read:
Returns [...] The next character from the input stream, or negative one (-1) if there are currently no more characters to be read.
It is written a bit sloppily, because technically it returns the Unicode character code of the character.[1]
If you type in the character '6', what is the numerical Unicode character value for this character? It's 54 (hex 0x0036; think of ASCII codes, except the encoding here is Unicode rather than ASCII). And that's precisely what you are seeing/getting here.
If you want the numeric value of the digit the character represents (which is not the same as its numerical Unicode character code), take a look at the Unicode character codes for the characters 0...9. They occupy the character code range from 0x0030 (for the '0' character) to 0x0039 (for the '9' character). It shouldn't be hard to notice that you can simply subtract the Unicode character code of the '0' character to get the value of the digit these characters represent:
intVal = Console.Read();
if (intVal < '0' || intVal > '9')
{
    Console.WriteLine("Not a numerical digit.");
}
else
{
    var digitValue = intVal - '0';
    Console.WriteLine("{0}", digitValue);
}
Now, this hopefully helps further your understanding of character codes. However, the example code I have given here could be written differently, as the char type has a convenient method which provides the digit value of characters that represent digits (or represent numerical values more generally, like Roman numerals for example):
intVal = Console.Read();
if (intVal <= -1)
{
    Console.WriteLine("No character entered");
}
else
{
    var digitValue = char.GetNumericValue((char) intVal);
    if (digitValue < 0)
    {
        Console.WriteLine("Not a numerical digit.");
    }
    else
    {
        Console.WriteLine("{0}", digitValue);
    }
}
[1] If the method were to return a character, its return type would be char and not int. The char type can, however, be converted/cast to a numeric type such as int or short and back, to go between a character and its Unicode character code, so the distinction between characters and Unicode character code values seldom matters in practice. Here, though, making that distinction may help in understanding what you observed.
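For example, a small illustration of that round trip between the int that Console.Read returns and the character it encodes:
int code = Console.Read();     // typing 'A' yields 65
if (code != -1)
{
    char ch = (char)code;      // cast the character code back to the character 'A'
    Console.WriteLine(ch);     // prints A
    Console.WriteLine(code);   // prints 65
}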

When I use string.Split and Convert.ToInt32 the value changes, why?

For example,
string a = "4,3,2";
a.Split(',');
int one = Convert.ToInt32(a[0]);
int two = Convert.ToInt32(a[2]);
If I do Console.WriteLine(a[0]) it gives me 4, and Console.WriteLine(a[2]) gives me 2. However, Console.WriteLine(one) and Console.WriteLine(two) give me 52 and 50 respectively. Why is this so?
The character '4' is Unicode code point 52 and the character '3' is Unicode code point 51. You're converting characters instead of strings. The problem is that you're ignoring the result of a.Split(',') and then indexing into the individual characters of a, and Convert.ToInt32(char) does:
Converts the value of the specified Unicode character to the equivalent 32-bit signed integer.
and
The ToInt32(Char) method returns a 32-bit signed integer that represents the UTF-16 encoded code unit of the value argument.
Instead use the strings after splitting:
string[] split = a.Split(',');
int one = Convert.ToInt32(split[0]);
int two = Convert.ToInt32(split[1]);
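As a small illustration of the difference the quoted documentation describes, the char and string overloads of Convert.ToInt32 behave quite differently:
Console.WriteLine(Convert.ToInt32('4'));  // 52 -- the UTF-16 code unit of the character '4'
Console.WriteLine(Convert.ToInt32("4"));  // 4  -- the string "4" parsed as a number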

How can I get third and fourth letters from string?

I want to get the third and fourth characters from PlayerPrefs.GetString("String") and parse them to an int.
For example:
string playerLevelstr = PlayerPrefs.GetString("Player")[2] + PlayerPrefs.GetString("Player")[3];
// My Player string is "0012000000"; when I add the third and fourth letters, playerLevelstr should be "12" but it is "96".
int playerLevelint = int.Parse(playerLevelstr);
The indexer on a string returns a char.
If you use the + operator on two chars, it essentially does integer math on them.
See this question for more information on that.
Though that means you should get 99 ('1' is 49, '2' is 50), not 96. But maybe that was a typo on one end or the other?
Regardless, you should either convert the chars to strings (call .ToString() on them) or use the Substring function on the string instead. And don't forget your length/null checks!
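For instance, a rough sketch of the ToString approach with a basic length check, assuming the same Unity PlayerPrefs call and "0012000000" value as in the question:
// Requires: using UnityEngine;
string player = PlayerPrefs.GetString("Player");   // e.g. "0012000000"
if (player.Length >= 4)
{
    string playerLevelstr = player[2].ToString() + player[3].ToString(); // "12"
    int playerLevelint = int.Parse(playerLevelstr);                      // 12
}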
This code, PlayerPrefs.GetString("Player")[2], returns a char, which is being converted to an int (the character code, i.e. the ASCII value, of the character) when you add it to another char.
Do this instead:
string playerLevelstr = PlayerPrefs.GetString("Player").Substring(2,2);

Why does Visual Studio add a slash at the start of an int, when it's converted to char?

When I convert the int 0 to char, it changes to \0. Why? I need the char '0'.
\0 is just how Visual Studio shows the null character in its debug windows.
The \ marks an escape sequence, so \0 tells you this is the null character.
If you want the character '0', you need to use the correct ASCII value for that character, which is 48.
char n = (char)0; // The _null_ character, displayed \0
char z = (char)48; // The character 0
char z2 = '0'; // The character 0, using a character literal
To receive the char '0', convert the int 48 to char, or simply assign it:
var c = '0';
When you convert the int 0 to char, you get the ASCII symbol at decimal position 0, which is the null character. The char '0' has the ASCII value 48, so converting 48 to char results in the char '0'.
You can check ASCII symbols and their decimal representations in an ASCII table. Remember that any time you convert an int to a char, this ASCII table determines the result.
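As a small illustration of that mapping:
char zero = (char)48;         // '0' -- 48 is the character code of '0'
char five = (char)('0' + 5);  // '5' -- offsetting from '0' by the digit value
int value = '5' - '0';        // 5  -- and back from the character to the digit value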
If you want the char '0', you can use either a character literal:
'0'
or, you can use
0.ToString()[0]
Depending on where the value is coming from.
\0 is an escape sequence for the character with the ASCII value 0, and this is different from the character for the digit 0.

How to get a char from an ASCII Character Code in C#

I'm trying to parse a file in C# that has field (string) arrays separated by ASCII character codes 0, 1 and 2 (in Visual Basic 6 you can generate these by using Chr(0) or Chr(1) etc.)
I know that for character code 0 in C# you can do the following:
char separator = '\0';
But this doesn't work for character codes 1 and 2?
Two options:
char c1 = '\u0001';
char c1 = (char) 1;
You can simply write:
char c = (char) 2;
or
char c = Convert.ToChar(2);
or, as a more complex option for ASCII encoding only:
char[] characters = System.Text.Encoding.ASCII.GetChars(new byte[]{2});
char c = characters[0];
It is important to note that in C# the char type is stored as UTF-16 (Unicode).
From ASCII equivalent integer to char
char c = (char)88;
or
char c = Convert.ToChar(88);
From char to ASCII equivalent integer
int asciiCode = (int)'A';
The character must be within the ASCII range for the resulting integer to be an ASCII code. For example:
string str = "Xสีน้ำเงิน";
Console.WriteLine((int)str[0]);
Console.WriteLine((int)str[1]);
will print
88
3626
Extended ASCII ranges from 0 to 255; standard ASCII covers 0 to 127.
From a UTF-16 character literal to char
Using the character itself:
char c = 'X';
Using a Unicode escape sequence:
char c = '\u0058';
Using a hexadecimal escape sequence:
char c = '\x0058';
