How to get a char from an ASCII Character Code in C#

I'm trying to parse a file in C# that has arrays of (string) fields separated by ASCII character codes 0, 1, and 2 (in Visual Basic 6 you can generate these with Chr(0), Chr(1), etc.).
I know that for character code 0 in C# you can do the following:
char separator = '\0';
But how do I do the same for character codes 1 and 2?

Two options:
char c1 = '\u0001';
char c1 = (char) 1;

You can simply write:
char c = (char) 2;
or
char c = Convert.ToChar(2);
or, as a more involved option that works through ASCII encoding explicitly:
char[] characters = System.Text.Encoding.ASCII.GetChars(new byte[]{2});
char c = characters[0];
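Tying this back to the parsing problem, here is a minimal sketch that splits each line on those separator characters (the file name "data.txt" and the line-oriented layout are assumptions, not something from the question):
using System;
using System.IO;

class SeparatorDemo
{
    static void Main()
    {
        // Control characters 0, 1 and 2 as char literals
        char[] separators = { '\u0000', '\u0001', '\u0002' };

        // Assumed file name; each line holds fields delimited by the codes above
        foreach (string line in File.ReadLines("data.txt"))
        {
            string[] fields = line.Split(separators, StringSplitOptions.RemoveEmptyEntries);
            Console.WriteLine(string.Join(" | ", fields));
        }
    }
}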

It is important to note that in C# the char type is stored as UTF-16.
From ASCII equivalent integer to char
char c = (char)88;
or
char c = Convert.ToChar(88);
From char to ASCII equivalent integer
int asciiCode = (int)'A';
The character must actually fall within the ASCII range for the result to be an ASCII code. For example:
string str = "Xสีน้ำเงิน";
Console.WriteLine((int)str[0]);
Console.WriteLine((int)str[1]);
will print
88
3626
because 'X' lies within the ASCII range (code 88), while the Thai character does not.
Extended ASCII ranges from 0 to 255.
From default UTF-16 literal to char
Using the symbol
char c = 'X';
Using the Unicode escape
char c = '\u0058';
Using the hexadecimal escape
char c = '\x0058';
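A small self-contained example tying these conversions together (the values in the comments are standard Unicode/ASCII codes, nothing specific to this question):
using System;

class CharCodeDemo
{
    static void Main()
    {
        char fromInt = (char)88;        // 'X'
        int fromChar = (int)'A';        // 65
        char fromUnicode = '\u0058';    // 'X'
        char fromHex = '\x0058';        // 'X'

        Console.WriteLine(fromInt);                 // X
        Console.WriteLine(fromChar);                // 65
        Console.WriteLine(fromUnicode == fromHex);  // True
    }
}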

Related

Convert int to alphabetical value

I would like to convert ints between 1 and 26 to the corresponding alphabetical char (letter).
I saw somewhere that I could use char outPut = (char)currentValue; or Convert.ToChar(currentValue), however neither seems to work. I am testing this by having the program print the char to the console; both attempts have resulted in an empty character being printed.
I am making a program that takes in a char letter, converts it to an int to do some maths on it, and then converts the final int back to a char letter and prints it to the console.
char outPut = (char)currentValue;
would give you a char that represents a non-printing control character (the values 1 through 26 all fall below 32, which is where the control codes live).
What you actually want is a char representing a letter. The following code will give you the ith letter of the alphabet in lowercase:
char c = (char)((int)'a' + i - 1);
Or in uppercase:
char c = (char)((int)'A' + i - 1);
It uses the fact that letters are represented by consecutive integers.
You can offset from whichever character you want; in this case I use 'a'.
Console.WriteLine((char)(5 + 'a'));
As your range starts at 1, you need to offset from 'a' - 1.
int and char can be converted back and forth using the ASCII table.
The ASCII value for 'A' is 65, so you can get the char 'A' by doing
char a = (char)65;
To convert it back to the 1-26 representation, just do
int num = a - 65 + 1;
You would need to do some boundary check to make sure the number is within range of course.
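Here is a hedged sketch combining both directions, including the boundary check mentioned above (the class and method names are my own, purely for illustration):
using System;

static class AlphabetMapper
{
    // Maps 1..26 to 'a'..'z'; rejects anything outside that range
    public static char ToLetter(int value)
    {
        if (value < 1 || value > 26)
            throw new ArgumentOutOfRangeException(nameof(value));
        return (char)('a' + value - 1);
    }

    // Maps 'a'..'z' (or 'A'..'Z') back to 1..26
    public static int ToNumber(char letter)
    {
        char lower = char.ToLowerInvariant(letter);
        if (lower < 'a' || lower > 'z')
            throw new ArgumentOutOfRangeException(nameof(letter));
        return lower - 'a' + 1;
    }
}
For example, AlphabetMapper.ToLetter(5) returns 'e' and AlphabetMapper.ToNumber('E') returns 5.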

In C#, when is the right time to use apostrophes vs. quotation marks?

I would like to know why some things have to be within a pair of apostrophes and others within quotation marks.
void trythis() {
    char myChar = 'Stuff';
    String myString = "Blah";
    int myInteger = '22';
    Serial.print(myChar);
    Serial.print(myString);
    Serial.print(myInteger);
}
Numbers have no quotes: int x = 56;
Characters have single quotes: char ch = 'a';
Strings have double quotes: string name = "Bob";
Character literals use a single quote. So when you're dealing with char, that's 'x'.
String literals use double quotes. So when you're dealing with string, that's "x".
A char is a single UTF-16 code unit - in most cases "a single character". A string is a sequence of UTF-16 code units, i.e. "a piece of text" of (nearly) arbitrary length.
Your final example, after making it compile, would look something like:
int myInteger = 'x';
That's using a character literal, but then implicitly converting it to int - equivalent to:
char tmp = 'x';
int myInteger = tmp;
The code you wrote doesn't compile at all.
Single quotes are used for character literals (single characters, which are stored as UTF-16 in .NET). Integers are not quoted.
This would be valid:
char myChar = 's';
string myString = "Blah";
int myInteger = 22;
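For completeness, here is a compilable version printed to the console (Console.WriteLine stands in for the Serial.print calls from the question, which are not part of C#):
using System;

class LiteralDemo
{
    static void Main()
    {
        char myChar = 'S';          // single quotes: exactly one character
        string myString = "Blah";   // double quotes: a string
        int myInteger = 22;         // no quotes: a number

        Console.WriteLine(myChar);
        Console.WriteLine(myString);
        Console.WriteLine(myInteger);
    }
}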

Why does Visual Studio add a slash at the start of an int, when it's converted to char?

When I convert int 0 to char, it changes to \0. Why? I need char 0.
\0 is just how Visual Studio shows the character null in debug windows.
The \ is an escape sequence, so \0 tells you this is the null character.
If you want the character '0', you need to use the correct ASCII value for that character, which is 48.
char n = (char)0; // The _null_ character, displayed \0
char z = (char)48; // The character 0
char z2 = '0'; // The character 0, using a character literal
To receive the char '0', convert the int 48 to char, or simply assign it:
var c = '0';
When you convert the int 0 to char, you get the symbol at decimal position 0 of the ASCII table, which is the null character. The char '0' has an ASCII value of 48, so converting 48 to char results in the '0' char.
Check ASCII symbols and their decimal representations in an ASCII table. Remember that any time you convert an int to a char, this mapping is what is used.
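A quick sketch that makes the difference visible:
using System;

class ZeroDemo
{
    static void Main()
    {
        char nul = (char)0;    // the null character, shown as \0 in the debugger
        char zero = '0';       // the digit character

        Console.WriteLine((int)nul);          // 0
        Console.WriteLine((int)zero);         // 48
        Console.WriteLine(zero == (char)48);  // True
    }
}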
If you want the char '0', you can use either a character literal:
'0'
or
0.ToString()[0]
depending on where the value is coming from.
\0 is an escape sequence for the character with the ASCII value 0, and this is different from the character for the digit 0.

How do I get the STX character of hex 02

I have a device to which I'm trying to connect via a socket, and according to the manual, I need the "STX character of hex 02".
How can I do this using C#?
Just a comment to GeoffM's answer (I don't have enough points to comment the proper way).
You should never embed STX (or other characters) that way using only two digits.
If the next character (after "\x02") was a valid hex digit, that would also be parsed and it would be a mess.
string s1 = "\x02End";
string s2 = "\x02" + "End";
string s3 = "\x0002End";
Here, s1 equals ".nd", since 2E is the dot character, while s2 and s3 equal STX + "End".
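A quick way to verify this behaviour (the expected values in the comments follow directly from how the \x escape consumes up to four hex digits):
using System;

class EscapeLengthDemo
{
    static void Main()
    {
        string s1 = "\x02End";       // \x02E is parsed as one char ('.'), leaving "nd"
        string s2 = "\x02" + "End";  // STX followed by "End"
        string s3 = "\x0002End";     // four-digit escape, unambiguous

        Console.WriteLine(s1 == ".nd");  // True
        Console.WriteLine(s2 == s3);     // True
    }
}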
You can use a Unicode character escape: \u0002
Cast the Integer value of 2 to a char:
char cChar = (char)2;
\x02 is the STX code; you can check the ASCII table:
checkFinal = checkFinal.Replace("\x02", "End").ToString().Trim();
Within a string, clearly the Unicode format is best, but for use as a byte, this approach works:
byte chrSTX = 0x02; // Start of Text
byte chrETX = 0x03; // End of Text
// etc...
You can embed the STX within a string like so:
byte[] myBytes = System.Text.Encoding.ASCII.GetBytes("\x02Hello, world!");
socket.Send(myBytes);
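Putting the pieces together, a minimal sketch of sending an STX-framed message over a TCP socket (the host, port, and STX/ETX framing are assumptions; check the device manual for the actual protocol):
using System;
using System.Net.Sockets;
using System.Text;

class StxSendDemo
{
    static void Main()
    {
        const char STX = '\u0002'; // Start of Text
        const char ETX = '\u0003'; // End of Text

        // Assumed framing: STX + payload + ETX
        byte[] frame = Encoding.ASCII.GetBytes(STX + "Hello, device!" + ETX);

        // Assumed host and port
        using (var client = new TcpClient("192.0.2.10", 5000))
        using (NetworkStream stream = client.GetStream())
        {
            stream.Write(frame, 0, frame.Length);
        }
    }
}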

How do I convert C# characters to their hexadecimal code representation

What I need to do is convert a C# character to an escaped Unicode string:
So, 'A' -> "\x0041".
Is there a better way to do this than:
char ch = 'A';
string strOut = String.Format("\\x{0}", Convert.ToUInt16(ch).ToString("x4"));
Cast and use composite formatting:
char ch = 'A';
string strOut = String.Format(@"\x{0:x4}", (ushort)ch);
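The same idea with string interpolation, plus a small helper that escapes every character of a string (the helper names are made up for this sketch):
using System;
using System.Linq;

static class HexEscape
{
    // "\x0041" for 'A', via an interpolated verbatim string
    public static string EscapeChar(char ch) => $@"\x{(ushort)ch:x4}";

    // Escapes every character: "AB" -> "\x0041\x0042"
    public static string EscapeString(string s) =>
        string.Concat(s.Select(EscapeChar));
}
For example, Console.WriteLine(HexEscape.EscapeChar('A')) prints \x0041.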
