I have an API which returns a byte[] over the network which represents information about a device.
It is in format 15ab1234cd\r\n where the first 2 characters are a HEX representation of the amount of data in the message.
I am aware I can convert this to a string via ASCIIEncoding.ASCII.GetString and then use Convert.ToInt32(string.Substring(0, 2), 16) to achieve this. However, the data stays a byte array throughout the life of the program I am writing, and I don't want to convert it to a string just to get the packet length.
Any suggestions of converting array of chars in hex format to an int in C#?
There is no .NET-provided function that does it. Converting the first 2 bytes to a string with Encoding.GetString is very readable (though possibly not the most performant):
var hexValue = ASCIIEncoding.ASCII.GetString(byteData, 0, 2);
var intValue = Convert.ToInt32(hexValue, 16);
You can easily write the conversion code yourself: map the '0'-'9' and 'a'-'f' / 'A'-'F' ranges to their corresponding integer values and combine them.
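For illustration, here is a minimal sketch of that hand-rolled approach (the helper names HexCharToInt and HexPairToInt are my own, not from any library):

static int HexCharToInt(byte b)
{
    char c = char.ToUpper((char)b);
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw new FormatException("Not a hex character");
}

// Combine two ASCII hex bytes: high nibble * 16 + low nibble.
static int HexPairToInt(byte[] data)
{
    return HexCharToInt(data[0]) * 16 + HexCharToInt(data[1]);
}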
Here is a one-statement conversion, strictly for entertainment purposes. The resulting lambda (the part before ((byte)'0',(byte)'A') in the sample) takes 2 byte arguments, assumes they are ASCII characters, and converts them into an integer.
((Func<Func<char, int>, Func<byte, byte, int>>)
    (charToInt => (c, c1) =>
        charToInt(char.ToUpper((char)c)) * 16 + charToInt(char.ToUpper((char)c1))))
((Func<char, int>)(
    c => c >= '0' && c <= '9' ? c - '0' : c >= 'A' && c <= 'F' ? c - 'A' + 10 : 0))
((byte)'0', (byte)'A')
If you know the first two values are valid hexadecimal characters (0-9, A-F, a-f), it is possible to convert to a hex value using logical operators:
int GetIntFromHexBytes(byte[] s, int start, int length)
{
int ret = 0;
for (int i = start; i < start+length; i++)
{
ret <<= 4;
ret |= (byte)((s[i] & 0x0f) + ((s[i] & 0x40) >> 6) * 9);
}
return ret;
}
(This works because c & 0x0f returns the 4 least significant bits, which range from 0-9 for the characters '0'-'9' and from 1-6 for both capital and lowercase hex letters ('a'-'f' and 'A'-'F'). s[i] & 0x40 is 0 for numeric characters and 0x40 for alpha characters; shifting right by six bits yields 0 for numeric characters and 1 for alphabetic characters. Multiplying that by 9 adds a bias of 9 for alpha characters, mapping A-F and a-f from 1-6 to 10-15.)
Given the byte array:
byte[] b = { (byte)'7', (byte)'f', (byte)'1', (byte)'c' };
Calling GetIntFromHexBytes(b, 0, 2) will return 127 (0x7f), the first two bytes of the array, as required.
As a caution: this approach does no bounds checking. A check can be added in the loop if needed to ensure that the input bytes are valid hex characters.
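A minimal sketch of such a check, assuming ASCII input (the IsHexDigit helper is my own naming):

static bool IsHexDigit(byte b)
{
    return (b >= (byte)'0' && b <= (byte)'9')
        || (b >= (byte)'A' && b <= (byte)'F')
        || (b >= (byte)'a' && b <= (byte)'f');
}

// Inside the loop, before the bit manipulation:
// if (!IsHexDigit(s[i])) throw new FormatException("Invalid hex character at index " + i);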
I need to convert an integer, which is in the form of a string, to a byte array in binary representation.
For example: I have the value "29". I want to convert this value to its binary equivalent (2 -> 0010 and 9 -> 1001) and store it in a byte array where the 0th index holds 0010 and the 1st index holds 1001.
I have tried this, but it gives me an array of 8 bytes.
var sb = new StringBuilder();
var val = "29".ToCharArray();
var a = Convert.ToString(Convert.ToInt32(Convert.ToString(val[0])), 2).PadLeft(4, '0');
var b = Convert.ToString(Convert.ToInt32(Convert.ToString(val[1])), 2).PadLeft(4, '0');
var c = a.ToList();
c.ForEach(x => sb.Append(Convert.ToString(x) + " "));
var f = sb.ToString().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
var g = f.ToList();
byte[] buff = new byte[g.Count];
for (int z = 0; z < g.Count; z++)
{
buff[z] = (byte)Convert.ToInt32(g[z]);
}
var h = b.ToList();
sb.Clear();
h.ForEach(x => sb.Append(Convert.ToString(x) + " "));
var i = sb.ToString().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
var j = i.ToList();
byte[] buff2 = new byte[j.Count];
for (int k = 0; k < j.Count; k++)
{
buff2[k] = (byte)Convert.ToInt32(j[k]);
}
byte[] buffer = buff.Concat(buff2).ToArray();
You can do this much more easily:
string s = "29";
var buffer = new byte[s.Length];
for (int i = 0; i < buffer.Length; i++) {
buffer[i] = (byte)(s[i] - '0');
}
Explanation:
We create a byte buffer with the same length as the input string since every character in the string is supposed to be a decimal digit.
In C#, a character is a numeric type. We subtract the character '0' from the character representing our digit to get its numeric value. We get this character by using the string indexer, which allows us to access single characters in a string.
The result is an integer that we cast to byte and then insert into the buffer.
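For example:

char ch = '9';        // UTF-16 code unit 57
int value = ch - '0'; // 57 - 48 = 9
byte b = (byte)value; // small enough to fit into a byte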
Console.WriteLine(buffer[0]) prints 2 because numbers are converted to strings in decimal format for display. Everything the debugger, the console, or a textbox displays is always a string that the data has been converted to. This conversion is called formatting. That is why you do not see the result as binary; but believe me, it is stored in the bytes in the requested binary format.
You can use Convert.ToString and specify the desired numeric base as second parameter to see the result in binary.
foreach (byte b in buffer) {
Console.WriteLine($"{b} --> {Convert.ToString(b, toBase: 2).PadLeft(4, '0')}");
}
If you want to store it in this visual binary format, then you must store it in a string array:
var stringBuffer = new string[s.Length];
for (int i = 0; i < stringBuffer.Length; i++) {
stringBuffer[i] = Convert.ToString(s[i] - '0', toBase: 2).PadLeft(4, '0');
}
Note that everything is stored in a binary format with 0s and 1s in a computer, but you never see these 0s and 1s directly. What you see is always an image on your screen. And this image was created from images of characters in a specific font. And these characters result from converting some data into a string, i.e., from formatting your data. The same data might look different on PCs using a different culture, but the underlying data is stored with the same pattern of 0s and 1s.
The difference between storing the numeric value of the digit as byte and storing this digit as character (possibly being an element of a string) is that a different encoding is used.
The byte stores it as a binary number equivalent to the decimal number. I.e., 9 (decimal) becomes 00001001 (binary).
The string or character stores the digit using the UTF-16 character table in .NET. This table is equivalent to the ASCII table for Latin letters without accents or umlauts, for digits and for the most common punctuation, except that it uses 16 bits per character instead of 7 bits (expanded to 8 when stored as byte). According to this table, the character '9' is represented by the binary 00111001 (decimal 57).
The string "1001" is stored in UTF-16 as
00000000 00110001 00000000 00110000 00000000 00110000 00000000 00110001
where 0 is encoded as 00000000 00110000 (decimal 48) and 1 is encoded as 00000000 00110001 (decimal 49). Additional data is also stored for a string, such as its length, a NUL character terminator, and data related to its class nature.
Alternative ways to store the result would be to use an array of the BitArray Class or to use an array of array of bytes where each byte in the inner array would store one bit only, i.e., be either 0 or 1.
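A minimal sketch of the BitArray variant (assuming a most-significant-bit-first order within each digit):

using System.Collections;

string s = "29";
var bits = new BitArray[s.Length];
for (int i = 0; i < s.Length; i++)
{
    int digit = s[i] - '0';
    // Four bits per decimal digit, most significant first: 9 -> 1001.
    bits[i] = new BitArray(new[] {
        (digit & 8) != 0, (digit & 4) != 0,
        (digit & 2) != 0, (digit & 1) != 0
    });
}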
I'm trying to count the number of times an int appears in a string
int count = numbers
.Where(x => x == '0')
.Count();
If I type in the literal character I want to check for, as seen above with '0', it works.
But I want to use this as a method where I can check for other digits. This, unfortunately, doesn't work when I convert an int to a char and insert the digit variable
char digit = Convert.ToChar(0);
int count= numbers
.Where(x => x == digit)
.Count();
What am I doing wrong?
Convert.ToChar(Int32) returns the Unicode character equivalent to the value of the passed int. For example, it will convert 65 to A:
Console.WriteLine(Convert.ToChar(65)); // prints "A"
If I understand your requirement correctly, you can call ToString and take the first char of the resulting string instead of Convert.ToChar(0):
char digit = 0.ToString()[0];
Or as this answer suggests:
char c = (char)(i + 48);
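Applied to your counting code, a minimal sketch (the CountDigit helper name is hypothetical):

using System.Linq;

static int CountDigit(string numbers, int digit)
{
    // '0' + digit yields the character for a single decimal digit 0-9.
    char c = (char)('0' + digit);
    return numbers.Count(x => x == c);
}

// Usage: CountDigit("102030", 0) returns 3.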
Convert.ToChar(0) converts your bit representation to a character according to the character encoding table (e.g. ASCII). 0 in this case is the NUL character, whereas the bit representation of the '0' character is 48 according to the table.
You can use this snippet to compare your int digit to the chars:
string numbers = "33";
int digit = 3;
int count = numbers
// '0' -'0' (48 - 48) will give you 0 int value,
// '1' - '0' (49 - 48) will give you 1 etc.
.Where(x => x - '0' == digit)
.Count();
but keep in mind that your digit might be greater than 9; it would then take more than one character, and that is a slightly different problem.
I'm trying to convert from a Base64 string. First I tried this:
string a = "BTQmJiI6JzFkZ2ZhY";
byte[] b = Convert.FromBase64String(a);
string c = System.Text.Encoding.ASCII.GetString(b);
Then I got the exception: System.FormatException was caught, Message=Invalid length for a Base-64 char array.
So after googling,I tried this:
string a1 = "BTQmJiI6JzFkZ2ZhY";
int mod4 = a1.Length % 4;
if (mod4 > 0)
{
a1 += new string('=', 4 - mod4);
}
byte[] b1 = Convert.FromBase64String(a1);
string c1 = System.Text.Encoding.ASCII.GetString(b1);
Here I got the exception: System.FormatException was caught, Message=Invalid character in a Base-64 string.
Is there any invalid character in "BTQmJiI6JzFkZ2ZhY"? Or is it the length issue?
EDIT: I first decrypt the input string using the below code:
string sourstr, deststr, strchar;
int strlen;
decimal ascvalue, ConvValue;
deststr = "";
sourstr = "InputString";
strlen = sourstr.Length;
for (int intI = 0; intI <= strlen - 1; intI++)
{
strchar = sourstr.Substring(intI, 1);
ascvalue = (decimal)strchar[0];
ConvValue = (decimal)((int)ascvalue ^ 85);
if ((char)ConvValue.ToString().Length == 0)
{
deststr = deststr + strchar;
}
else
{
deststr = deststr + (char)ConvValue;
}
}
This output deststr is passed to below code
Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(deststr));
This is where I got "BTQmJiI6JzFkZ2ZhY"
You cannot get such a base64 string by encoding a whole number of bytes. While encoding, every 3 bytes are represented as 4 characters, because 3 bytes is 24 bits, and each base64 character encodes 6 bits (2^6 = 64), so 4 of them is also 24 bits. If the number of bytes to encode is not divisible by 3, you have some bytes left over: either 1 or 2.
If you have 2 bytes left - that's 16 bits and you need at least 3 characters to encode that (2 characters is just 12 bits - not enough). So in case you have 2 bytes left - you encode them with 3 characters and apply "=" padding.
If you have 1 byte left - that's 8 bits. You need at least 2 characters for that. You encode to 2 characters and apply "==" padding.
Note that there is no way to encode something to just one character (and for that reason - there is no "===" padding).
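You can see these rules in action by encoding 1, 2 and 3 bytes:

Console.WriteLine(Convert.ToBase64String(new byte[] { 1 }));       // "AQ==" - 1 byte, 2 chars + "=="
Console.WriteLine(Convert.ToBase64String(new byte[] { 1, 2 }));    // "AQI=" - 2 bytes, 3 chars + "="
Console.WriteLine(Convert.ToBase64String(new byte[] { 1, 2, 3 })); // "AQID" - 3 bytes, 4 chars, no padding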
Your string can be divided into 4-character blocks: "BTQm", "JiI6", "JzFk", "Z2Zh", "Y". The first 4 blocks each represent 3 bytes, but what does "Y" represent? Who knows. You could say it represents 1 byte in the range 0-63, but from the above you can see that's not how base64 works, so to interpret it like that you would have to do it yourself.
From the above you can see that you cannot get a base64 string of length 17 (without padding). You can get 16, 18, 19, or 20, but never 17.
Are you sure you took all the chars from the base64 output?
Appending "==" at the end of the string will make your first approach work without any problems, although there is a strange character at the beginning of the output. So the next question is: are you sure it is ASCII encoding?
I need to traverse the string, which should be a string of digits, and perform some arithmetic operations on these digits:
for (int i = data.Length - 1; i >= 0; --i)
{
uint curDigit;
//Convert to uint the current rightmost digit, if convert fails return false (the valid data should be numeric)
try
{
curDigit = Convert.ToUInt32(data[i]);
//arithmetic ops...
}
catch
{
return false;
}
}
I test it with the following input data string.
"4000080706200002"
For i = 15, corresponding to the rightmost digit 2, I get 50 as the output from
curDigit = Convert.ToUInt32(data[i]);
Can someone please explain what is wrong and how to correct the issue?
50 is the ASCII code for '2'. What you need is '2' - '0' (50 - 48):
byte[] digits = "4000080706200002".Select(x => (byte)(x - '0')).ToArray();
http://www.asciitable.com/
What you are getting back is the ASCII value of the character '2'. You can call ToString on the character and then call Convert.ToUInt32. Consider the example:
char x = '2';
uint curDigit = Convert.ToUInt32(x.ToString());
This will give you back 2 as curDigit.
For your code you can just use:
curDigit = Convert.ToUInt32(data[i].ToString());
Another option is to use char.GetNumericValue like:
uint curDigit = (UInt32) char.GetNumericValue(data[i]);
char.GetNumericValue returns a double, and you can cast the result back to UInt32.
The problem is that data[i] returns a char, which is essentially an integer holding the ASCII code of the character. So '2' corresponds to 50.
There are 2 things you can do to overcome this behaviour:
curDigit = Convert.ToUInt32(data[i] - '0'); // better: subtract the ASCII representation of '0' from the char
curDigit = Convert.ToUInt32(data.Substring(i, 1)); // use Substring to return a string instead of a char
Note that the second method is less efficient, as Convert from string essentially splits the string into chars and subtracts '0' from each and every one of them.
You're getting the ASCII (or Unicode) values for those characters. The problem is that the code points for the characters '0' … '9' are not 0 … 9, but 48 … 57. To fix this, you need to adjust by that offset. For example:
curDigit = Convert.ToUInt32(data[i] - '0');
Or
curDigit = Convert.ToUInt32(data[i] - 48);
Rather than messing around with ASCII calculations, you could use UInt32.TryParse as an alternative solution. However, this method requires a string input, not a char, so you would have to modify your approach a little:
string input = "4000080706200002";
string[] digits = input.Select(x => x.ToString()).ToArray();
foreach(string digit in digits)
{
uint curDigit = 0;
if(UInt32.TryParse(digit, out curDigit))
{
//arithmetic ops...
}
//else failed to parse
}
I'm currently trying to convert a .NET JSON encoder to NETMF but have hit a problem with Convert.ToString(), as there is no such thing in NETMF.
The original line of the encoder looks like this:
Convert.ToString(codepoint, 16);
And after looking at the documentation for Convert.ToString(Int32, Int32), it says it converts an Int32 to its string representation in base 2, 8, 10 or 16, by providing the int as the first parameter and the base as the second.
What would be some low-level code to do this, or how would I go about doing it?
As you can see from the code, I only need conversion from an Int32 to base 16.
EDIT
Ah, the encoder also then wants to do:
PadLeft(4, '0');
on the string. Is this just adding four '0' characters ('0' + '0' + '0' + '0') to the start of the string?
If you mean you want to change a 32-bit integer value into a string which shows the value in hexadecimal:
string hex = intValue.ToString("x");
For variations, please see Stack Overflow question Convert a number into the hex value in .NET.
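As an aside, in full .NET a width in the format string also folds in the padding step the encoder performs afterwards (whether NETMF honors this is another question):

int codepoint = 0x2A;
string hex = codepoint.ToString("x4"); // "002a" - padded with leading zeros to 4 hex digits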
Disclaimer: I'm not sure if this function exists in NETMF, but it is so fundamental that I think it should.
Here’s some sample code for converting an integer to hexadecimal (base 16):
int num = 48764; // assign your number
// Generate hexadecimal number in reverse.
var sb = new StringBuilder();
do
{
sb.Append(hexChars[num & 15]);
num >>= 4;
}
while (num > 0);
// Pad with leading 0s for a minimum length of 4 characters.
while (sb.Length < 4)
sb.Append('0');
// Reverse string and get result.
char[] chars = new char[sb.Length];
sb.CopyTo(0, chars, 0, sb.Length);
Array.Reverse(chars);
string result = new string(chars);
PadLeft(4, '0') prepends leading 0s to the string to ensure a minimum length of 4 characters.
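For example:

Console.WriteLine("7f".PadLeft(4, '0'));   // "007f"
Console.WriteLine("26f5".PadLeft(4, '0')); // "26f5" (already 4 characters, so unchanged)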
The hexChars value lookup may be trivially defined as a string:
internal static readonly string hexChars = "0123456789ABCDEF";
Edit: Replacing StringBuilder with List<char>:
// Generate hexadecimal number in reverse.
List<char> builder = new List<char>();
do
{
builder.Add(hexChars[num & 15]);
num >>= 4;
}
while (num > 0);
// Pad with leading 0s for a minimum length of 4 characters.
while (builder.Count < 4)
builder.Add('0');
// Reverse string and get result.
char[] chars = new char[builder.Count];
for (int i = 0; i < builder.Count; ++i)
chars[i] = builder[builder.Count - i - 1];
string result = new string(chars);
Note: Refer to the “Hexadecimal Number Output” section of Expert .NET Micro Framework for a discussion of this conversion.