I need to send an array of bytes to a hardware device (an SDZ16 matrix) over a serial port. The catch is that the hardware expects strings of hexadecimal and ASCII characters.
When assigning values to the array of bytes, even if I set a byte to an explicit hexadecimal value (bytes[0] = 0xF2, for instance), it prints the equivalent decimal value (242 instead of F2).
I suspect the problem is in Console.WriteLine(), which seems to print each byte as an integer by default(?). How does C# keep track that there is a hexadecimal value inside an int?
If I assign bytes[0] = 0xF2;, will the hardware understand it as hexadecimal even if Console.WriteLine() shows something different while testing?
If you want to get a string representation in hex format, you can do so by using the corresponding numeric format string:
byte value = 0xF2;
string hexString = string.Format("{0:X2}", value);
Note that Console.WriteLine has an overload that takes a format string and a parameter list:
Console.WriteLine("{0:X2}", value);
Update: I just had a glimpse at the documentation here, and it seems that you need to send commands by providing the corresponding ASCII representation in the form of a string. You can get the ASCII representation using:
byte value = 0x01;
string textValue = value.ToString().PadLeft(2, '0');
byte[] ascii = Encoding.ASCII.GetBytes(textValue);
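Putting those pieces together, here is a minimal sketch of writing such an ASCII command to the serial port (the port name, baud rate, and command value are assumptions; check the SDZ16 manual for the real protocol):
using System.IO.Ports;
using System.Text;

byte value = 0x01;
string textValue = value.ToString().PadLeft(2, '0'); // "01"
byte[] ascii = Encoding.ASCII.GetBytes(textValue);   // { 0x30, 0x31 }

using (SerialPort port = new SerialPort("COM1", 9600)) // settings are assumptions
{
    port.Open();
    port.Write(ascii, 0, ascii.Length);
}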
My tip would be to carefully check the documentation of your equipment to find out which exact format is expected.
it prints the equivalent decimal value (242 instead of F2).
Yes, because 0xF2 is still 242. 0xF2 is just hexadecimal notation for the same number, and 0x is the most common prefix in this notation. Even in the debugger, you will see the decimal notation.
I suspect the problem is in Console.WriteLine(), which seems to print each byte as an integer by default(?)
No, the Console.WriteLine() method does nothing special here.
How does C# keep track that there is a hexadecimal value inside an int?
There is no such thing as a hexadecimal value inside an int. It is just notation; the int stores the same number either way.
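A quick demonstration that the two notations denote the very same value:
using System;

int fromHex = 0xF2;    // hexadecimal notation
int fromDecimal = 242; // decimal notation
Console.WriteLine(fromHex == fromDecimal); // True: same number, different notation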
If you want the hexadecimal notation of a number, you can use the hexadecimal "X" format specifier, like:
byte b = 0xF2;
Console.WriteLine(b.ToString("X")); //F2
If you want it with the 0x prefix, you can do:
byte b = 0xF2;
Console.WriteLine("0x{0}", b.ToString("X")); //0xF2
I am using the XmlReader class to parse an SVG font file. I cannot read the glyph tags' unicode attribute string value:
<svg><font><glyph unicode="" /></font></svg>
I tried:
if (xmlReader.GetAttribute("unicode") != null)
{
    string unicode = xmlReader.GetAttribute("unicode");
}
The output I get is:
unicode=""
I need the exact unicode string value. Can anyone answer, please?
There's nothing wrong with the response - that's the Unicode character U+E600. That code point is in the Private Use Area, which means there is no standard glyph for it, so a default glyph is used.
A char is a 16-bit number representing a UTF-16 code unit, so you already have the code point. If you want to format it as a hex string, cast it to an int and use the x4 format (the cast is needed because char ignores format specifiers), e.g.:
char theChar = unicode[0];
string hexString = String.Format("{0:x4}", (int)theChar);
This will return "e600" (use "X4" if you want uppercase "E600").
If you want to output the same form as in the original XML, you can use the character-reference format "&#x{0:x4};".
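Here is a self-contained sketch of the whole flow (the sample SVG string with a U+E600 glyph is an assumption for illustration):
using System;
using System.IO;
using System.Xml;

// Hypothetical sample: a glyph whose unicode attribute is U+E600.
string svg = "<svg><font><glyph unicode=\"\uE600\" /></font></svg>";

using (XmlReader xmlReader = XmlReader.Create(new StringReader(svg)))
{
    while (xmlReader.Read())
    {
        if (xmlReader.NodeType == XmlNodeType.Element &&
            xmlReader.Name == "glyph" &&
            xmlReader.GetAttribute("unicode") != null)
        {
            string unicode = xmlReader.GetAttribute("unicode");
            char theChar = unicode[0];
            // Prints the code point as a character reference: &#xe600;
            Console.WriteLine("&#x{0:x4};", (int)theChar);
        }
    }
}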
I'm writing a little program based on some Python code that I found. There are a few lines I need help with. They are about hashing a value using SHA-256.
The python code is as follows:
first = hashlib.sha256((valueOne + valueTwo).encode()).hexdigest()
second = hashlib.sha256((str(timestamp) + value).encode()).hexdigest()
And when I execute it, my values are as follows:
first: 93046e57a3c183186e9e24ebfda7ca04e7eb4d8119060a8a39b48014d4c5172b
second: bde1c946749f6716fde713d46363d90846a841ad56a4cf7eaccbb33aa1eb1b70
My C# code is:
string first = sha256_hash((secret + auth_token));
string second = sha256_hash((timestamp.ToString() + secret));
And when I execute it, my values are:
first: 9346e57a3c183186e9e24ebfda7ca4e7eb4d81196a8a39b48014d4c5172b
second: bde1c946749f6716fde713d46363d9846a841ad56a4cf7eaccbb33aa1eb1b70
As you can see, the values are slightly different. The Python code returns two values, both 64 characters long, whereas in C# the values are 60 and 63 characters respectively.
My sha256_hash method is from here: Obtain SHA-256 string of a string
Any help would be appreciated, thanks.
Your hex digest method is not producing length-2 hex values for bytes < 16. The byte \x08 is being added to your hex output as just '8' instead of '08', leading to an output that is too short.
Adjust the format to produce 0-padded hex characters:
foreach (Byte b in result)
    Sb.Append(b.ToString("x2"));
See Standard Numeric Format Strings for more information on how to format bytes to hexadecimal strings.
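For reference, a corrected version of the linked helper might look like this (a sketch; the method name sha256_hash and the UTF-8 input encoding are assumptions based on the question):
using System.Security.Cryptography;
using System.Text;

static string sha256_hash(string value)
{
    StringBuilder Sb = new StringBuilder();

    using (SHA256 hash = SHA256.Create())
    {
        byte[] result = hash.ComputeHash(Encoding.UTF8.GetBytes(value));

        // "x2" zero-pads, so every byte contributes exactly two hex characters.
        foreach (byte b in result)
            Sb.Append(b.ToString("x2"));
    }

    return Sb.ToString();
}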
I convert a byte array to a string, and then convert that string back to a byte array.
The two byte arrays are different.
As below:
byte[] tmp = Encoding.ASCII.GetBytes(Encoding.ASCII.GetString(b));
Suppose b is a byte array.
b[0]=3, b[1]=188, b[2]=2 //decimal system
Result:
tmp[0]=3, tmp[1]=63, tmp[2]=2
So that's my problem, what's wrong with it?
188 is out of range for ASCII. Characters that are not in the target character set are transposed to '?' by design (would you prefer transposing to "1/4"? In Latin-1, byte 188 is '¼').
ASCII is 7-bit only, so any byte above 127 is invalid. By default the encoder replaces invalid bytes with ?, and that's why you get a ?.
For 8-bit character sets, you should be looking at either extended ASCII (later standardized as ISO 8859-1) or code page 437 (which is often confused with extended ASCII, but is in fact different).
You can use the following code:
Encoding enc = Encoding.GetEncoding("iso-8859-1");
// For CP437, use Encoding.GetEncoding(437)
byte[] tmp = enc.GetBytes(enc.GetString(b));
The character 188 is not defined for ASCII. Instead, you're getting 63, which is a question mark.
The ASCII character set covers the range 0 to 127. You can see that 188 is not in this range, so it is converted to ? (ASCII 63).
Not every sequence of bytes is necessarily a valid sequence of encoded values for a particular encoding.
So the result of Encoding.ASCII.GetString(b) on an arbitrary array of bytes, b, is poorly defined (and the same is true for any other encoding).
If you need to take an arbitrary byte array and obtain a sequence of characters, you might want to look into the Convert class's ToBase64String and FromBase64String methods. If that's not what you're trying to do, maybe explain the original problem to us.
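A minimal sketch of that round trip, using the byte values from this question:
using System;

byte[] b = { 3, 188, 2 };

// Base64 represents arbitrary bytes losslessly, unlike a lossy text encoding.
string text = Convert.ToBase64String(b);       // "A7wC"
byte[] roundTripped = Convert.FromBase64String(text);

Console.WriteLine(roundTripped[1]);            // 188, preserved exactly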
188 isn't in the range of ASCII (7-bit); you should use Encoding.Default to get the ANSI encoding:
byte[] b = new byte[3]{ 3, 188, 2 };
byte[] tmp = Encoding.Default.GetBytes(Encoding.Default.GetString(b));
Does Ruby have an equivalent to .NET's Encoding.ASCII.GetString(byte[])?
Encoding.ASCII.GetString(bytes[]) takes an array of bytes and returns a string after decoding the bytes using the ASCII encoding.
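For reference, a minimal C# sketch of the call being described:
using System;
using System.Text;

byte[] bytes = { 104, 101, 108, 108, 111 };
Console.WriteLine(Encoding.ASCII.GetString(bytes)); // "hello"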
Assuming your data is in an array like so (each element is a byte, and further, from the description you posted, no larger than 127 in value, that is, a 7-bit ASCII character):
array = [104, 101, 108, 108, 111]
string = array.pack("c*")
After this, string will contain "hello", which is what I believe you're requesting.
The pack method "Packs the contents of arr into a binary sequence according to the directives in the given template string".
"c*" asks the method to interpret each element of the array as a "char". Use "C*" if you want to interpret them as unsigned chars.
http://ruby-doc.org/core/classes/Array.html#M002222
The example given in the documentation page uses the function to convert a string with Unicode characters. In Ruby I believe this is best done using Iconv:
require "iconv"
require "pp"
#Ruby representation of unicode characters is different
unicodeString = "This unicode string contains two characters " +
                "with codes outside the ASCII code range, " +
                "Pi (\xCE\xA0) and Sigma (\xCE\xA3)."
#printing original string
puts unicodeString
i = Iconv.new("ASCII//IGNORE","UTF-8")
#Printing converted string, unicode characters stripped
puts i.iconv(unicodeString)
bytes = i.iconv(unicodeString).unpack("c*")
#printing array of bytes of converted string
pp bytes
Read up on Ruby's Iconv here.
You might also want to check this question.
Is it possible to simplify this code into a cleaner/faster form?
StringBuilder builder = new StringBuilder();
var encoding = Encoding.GetEncoding(936);
// convert the text into a byte array
byte[] source = Encoding.Unicode.GetBytes(text);
// convert that byte array to the new codepage.
byte[] converted = Encoding.Convert(Encoding.Unicode, encoding, source);
// take multi-byte characters and encode them as separate ascii characters
foreach (byte b in converted)
    builder.Append((char)b);
// return the result
string result = builder.ToString();
Simply put, it takes a string with Chinese characters such as 鄆 and converts them to ài.
For example, that Chinese character in decimal is 37126 or 0x9106 in hex.
See http://unicodelookup.com/#0x9106/1
Converted to a byte array, we get [145, 6] (145 * 256 + 6 = 37126). When encoded in code page 936 (Simplified Chinese), we get [224, 105]. If we break this byte array down into individual characters, we get 224 = 0xE0 = 'à' and 105 = 0x69 = 'i' in Unicode.
See http://unicodelookup.com/#0x00e0/1
and
http://unicodelookup.com/#0x0069/1
Thus, we're doing an encoding conversion and ensuring that all characters in our output Unicode string can be represented using at most two bytes.
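A quick sketch verifying those byte values (the expected output comes from the values given above):
using System;
using System.Text;

string text = "\u9106"; // 鄆 (U+9106 = 37126 decimal)

byte[] cp936Bytes = Encoding.GetEncoding(936).GetBytes(text);
Console.WriteLine(BitConverter.ToString(cp936Bytes)); // "E0-69", i.e. [224, 105]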
Update: I need this final representation because this is the format my receipt printer is accepting. Took me forever to figure it out! :) Since I'm not an encoding expert, I'm looking for simpler or faster code, but the output must remain the same.
Update (Cleaner version):
return Encoding.GetEncoding("ISO-8859-1").GetString(Encoding.GetEncoding(936).GetBytes(text));
Well, for one, you don't need to convert the "built-in" string representation to a byte array before calling Encoding.Convert.
You could just do:
byte[] converted = Encoding.GetEncoding(936).GetBytes(text);
To then reconstruct a string from that byte array whereby the char values directly map to the bytes, you could do...
static string MangleTextForReceiptPrinter(string text) {
return new string(
Encoding.GetEncoding(936)
.GetBytes(text)
.Select(b => (char) b)
.ToArray());
}
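Usage, with the sample character from the question:
string mangled = MangleTextForReceiptPrinter("\u9106"); // 鄆
Console.WriteLine(mangled); // "ài", per the mapping described in the question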
I wouldn't worry too much about efficiency; how many MB/sec are you going to print on a receipt printer anyhow?
Joe pointed out that there's an encoding that directly maps byte values 0-255 to code points, and it's age-old Latin1, which allows us to shorten the function to...
return Encoding.GetEncoding("Latin1").GetString(
Encoding.GetEncoding(936).GetBytes(text)
);
By the way, if this is a buggy Windows-only API (which it is, by the looks of it), you might be dealing with code page 1252 instead (which is almost identical). You might try Reflector to see what it's doing with your System.String before it sends it over the wire.
Almost anything would be cleaner than this - you're really abusing text here, IMO. You're trying to represent effectively opaque binary data (the encoded text) as text data... so you'll potentially get things like bell characters, escapes etc.
The normal way of encoding opaque binary data in text is base64, so you could use:
return Convert.ToBase64String(Encoding.GetEncoding(936).GetBytes(text));
The resulting text will be entirely ASCII, which is much less likely to cause you hassle.
EDIT: If you need that output, I would strongly recommend that you represent it as a byte array instead of as a string... pass it around as a byte array from that point onwards, so you're not tempted to perform string operations on it.
Does your receipt printer have an API that accepts a byte array rather than a string?
If so you may be able to simplify the code to a single conversion, from a Unicode string to a byte array using the encoding used by the receipt printer.
Also, if you want to convert an array of bytes to a string whose character values correspond 1-1 to the values of the bytes, you can use the code page 28591 aka Latin1 aka ISO-8859-1.
I.e., the following
foreach (byte b in converted)
builder.Append((char)b);
string result = builder.ToString();
can be replaced by:
// All three of the following are equivalent
// string result = Encoding.GetEncoding(28591).GetString(converted);
// string result = Encoding.GetEncoding("ISO-8859-1").GetString(converted);
string result = Encoding.GetEncoding("Latin1").GetString(converted);
Latin1 is a useful encoding when you want to encode binary data in a string, e.g. to send through a serial port.
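For example, a quick sketch of that round trip, showing arbitrary byte values surviving the conversion (the sample bytes are arbitrary):
using System;
using System.Text;

byte[] data = { 0xE0, 0x69, 0x00, 0xFF }; // arbitrary sample bytes

// Latin1 maps every byte 0-255 to the code point with the same value...
string text = Encoding.GetEncoding("Latin1").GetString(data);

// ...so converting back recovers the original bytes exactly.
byte[] roundTripped = Encoding.GetEncoding("Latin1").GetBytes(text);

Console.WriteLine(BitConverter.ToString(roundTripped)); // "E0-69-00-FF"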