I'm trying to read from an accelerometer over BLE from my Arduino device. The only problem is that I'm not sure how to convert it back into a readable string value.
My Arduino sketch (part of it) looks like this:
if ((x - lastX) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else if ((y - lastY) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else if ((z - lastZ) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else {
  STATUS = "STOPPED";
  toggleIsStopped();
}
lastX = x;
lastY = y;
lastZ = z;
Serial.print(STATUS);
The hex value that I receive in my Xamarin Android application (through BLE) is this:
A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C
My current implementation is:
public static string FromHex(string hex) {
    hex = hex.Replace("-", "");
    byte[] raw = new byte[hex.Length / 2];
    for (int i = 0; i < raw.Length; i++)
        raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return Encoding.ASCII.GetString(raw);
}
The result is:
?������MOVING?<
Why is this happening and how can I convert this back to a readable string in C#?
Update 1
I did some research in the bean-sdk source code and found this:
/**
 * Represents a LightBlue Serial Transport Message
 *
 * Defined as:
 *
 *   [1 byte]        - Length (Message ID + Payload)
 *   [1 byte]        - Reserved
 *   [2 bytes] BE    - Message ID
 *   [0-64 bytes] LE - Payload
 *   [2 bytes] LE    - CRC (Everything before CRC)
 *
 * @param messageId
 * @param definition
 */
Update 2
Finally got it working by implementing the protocol described in Update 1. The actual payload ("MOVING") is framed inside the message, so it has to be extracted from the raw bytes; a sketch of that extraction is below.
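For illustration, here is a minimal sketch of that extraction in C#. It assumes the leading 0xA0 is a one-byte transport header in front of the framed message (that byte is not covered by the format quoted above) and it skips CRC validation:

using System;
using System.Linq;
using System.Text;

static class LightBlueMessage
{
    // Parses a packet such as "A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C".
    // Layout assumed here: [0] transport header, [1] length (message ID
    // + payload), [2] reserved, [3..4] big-endian message ID, then the
    // payload, then a 2-byte CRC.
    public static string ExtractPayload(string hex)
    {
        byte[] bytes = hex.Split('-')
                          .Select(h => Convert.ToByte(h, 16))
                          .ToArray();
        int payloadLength = bytes[1] - 2; // length minus the 2-byte message ID
        return Encoding.ASCII.GetString(bytes, 5, payloadLength);
    }
}

With the sample packet above this returns "MOVING".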
Anything over 0x7F is NOT ASCII. The first character is 0xA0, which in both Unicode and ISO Latin-1 is NBSP (a non-breaking space). The sequence A0-08-00-00-00 is probably meaningful, but not as plain text. You should have a look at the specs of your accelerometer to see what it is actually sending and interpret it correctly.
EDIT
Also, since in your code you set the variable STATUS to "MOVING", it could be that the characters before and after MOVING are spurious characters.
I have used the following:
private static string hexToASCII(string hexValue)
{
    StringBuilder output = new StringBuilder("");
    for (int i = 0; i < hexValue.Length; i += 2)
    {
        string str = hexValue.Substring(i, 2);
        output.Append((char)Convert.ToInt32(str, 16));
    }
    return output.ToString();
}
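A quick usage check, assuming you pass it only the payload slice of the hex string:

string status = hexToASCII("4D4F56494E47"); // the payload bytes of the sample packet
Console.WriteLine(status); // MOVING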
Reference: converted from a Java function at Convert Hex to ASCII and ASCII to Hex
Related
I have code in C# that converts a UInt64 to a float; the entered value is, for example, '4537294320117481472'. The code that does the work is in the first block, the second block shows the relevant functions, and the results are at the bottom.
byte[] rawParameterData = new byte[8];
Console.Write("Enter Value: ");
string rawDataString = Console.ReadLine();
UInt64 rawParameterInteger = UInt64.Parse(rawDataString);
rawParameterData = ConvertFromUInt64(rawParameterInteger);
float convertedParameterData = ConvertToFloat(rawParameterData, 0);
rawParameterData now equals a byte array of [62,247,181,37,0,0,0,0]
convertedParameterData now equals 0.4838039
public static byte[] ConvertFromUInt64(UInt64 data)
{
    var databuf = BitConverter.GetBytes(data);
    return SwapBytes(databuf, 8); // DIS is big-endian; need to convert to little-endian: least significant byte is at lower byte location.
}

static float ConvertToFloat(byte[] data, int offset)
{
    // CopyData (not shown) copies 4 bytes from 'data' starting at 'offset'.
    var databuf = CopyData(data, offset, 4);
    return BitConverter.ToSingle(databuf, 0);
}

static byte[] SwapBytes(byte[] srcbuf, int datalength)
{
    var destbuf = new byte[srcbuf.Length];
    for (var i = 0; i < datalength; i++)
        destbuf[datalength - 1 - i] = srcbuf[i];
    return destbuf;
}
It seems that the code is relying on the BitConverter.ToSingle(databuf, 0) function that is part of C#.
Can this be done in Python? Thanks.
In Python, this is as simple as:
import struct
a = 4537294320117481472
b = struct.pack('Q', a)
f = struct.unpack('ff', b)
print(f) # (0.0, 0.4838038980960846)
https://docs.python.org/3/library/struct.html
import struct

# We only care about the first 4 of the 8 bytes, so we shift right by
# the character width * number of characters; then we have just databuf
# in our significantBits variable.
# So we are converting from 3E F7 B5 25 00 00 00 00 to 3E F7 B5 25.
characterWidth = 8
significantBits = 4537294320117481472 >> (characterWidth * 4)

# We can use this function to convert from bits to a float.
# Taken from https://stackoverflow.com/a/14431225/825093
def bitsToFloat(b):
    s = struct.pack('>l', b)
    return struct.unpack('>f', s)[0]

value = bitsToFloat(significantBits)  # named 'value' so it doesn't shadow the builtin float
print(value)  # 0.483803898096
I am trying to take a string with hex FFFFFFFF7DA98035 and display its extended ASCII characters in a TextBox in my program. I am having problems with 80, as it's -128 and displays nothing.
Visual Studio compiles without errors, but throws an exception when it parses the string.
private static string ConvertHextoAscii(string HexString)
{
    byte[] data = new byte[HexString.Length / 2];
    for (int i = 0; i < HexString.Length - 1; i += 2)
    {
        data[i / 2] = byte.Parse(HexString.Substring(i, 2));
    }
    return Encoding.GetEncoding("Windows-1252").GetString(data);
}
Any help would be appreciated.
byte.Parse is expecting a string that contains an integer (in decimal). However, HexString.Substring(i, 2) will return a hex number (as a string).
Do the following to instruct byte.Parse to expect a hex number:
data[i / 2] = byte.Parse(HexString.Substring(i, 2), NumberStyles.HexNumber);
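For reference, here is the whole method with that one-line fix applied; NumberStyles lives in System.Globalization, so that using directive is needed:

// Requires: using System.Globalization; and using System.Text;
private static string ConvertHextoAscii(string HexString)
{
    byte[] data = new byte[HexString.Length / 2];
    for (int i = 0; i < HexString.Length - 1; i += 2)
    {
        // Parse each two-character chunk as a hexadecimal byte.
        data[i / 2] = byte.Parse(HexString.Substring(i, 2), NumberStyles.HexNumber);
    }
    return Encoding.GetEncoding("Windows-1252").GetString(data);
}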
I'm using PacketDotNet to retrieve data from RTP header. But there are times when the timestamp is a negative value.
long GetTimeStamp(UdpPacket packetUdp)
{
    byte[] packet = packetUdp.PayloadData;
    long timestamp = GetRTPHeaderValue(packet, 32, 63);
    return timestamp;
}
private static int GetRTPHeaderValue(byte[] packet, int startBit, int endBit)
{
    int result = 0;
    // Number of bits in value
    int length = endBit - startBit + 1;
    // Values in RTP header are big endian, so need to do these conversions
    for (int i = startBit; i <= endBit; i++)
    {
        int byteIndex = i / 8;
        int bitShift = 7 - (i % 8);
        result += ((packet[byteIndex] >> bitShift) & 1) *
                  (int)Math.Pow(2, length - i + startBit - 1);
    }
    return result;
}
It could be caused by the RTCP packets. If the RTP data is coming from a phone, the phone sends periodic RTCP reports; they seem to pop in about every 200th packet. Their format is different, and your code is probably reading them the same way as RTP - you will need to handle the RTCP packets.
The packet format: http://www.cl.cam.ac.uk/~jac22/books/mm/book/node162.html
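One way to filter them out before parsing, sketched here with a made-up method name: RTCP packet types SR, RR, SDES, BYE and APP use the values 200-204 in the second header byte, so a quick check on that byte lets you skip RTCP packets.

private static bool LooksLikeRtcp(byte[] packet)
{
    // RTCP packet types: SR = 200, RR = 201, SDES = 202, BYE = 203, APP = 204.
    // Per the usual RTP/RTCP multiplexing rule, RTP avoids payload types
    // that collide with this range of the second byte.
    if (packet.Length < 2)
        return false;
    return packet[1] >= 200 && packet[1] <= 204;
}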
I have a program in C# .net which writes 1 integer and 3 strings to a file, using BinaryWriter.Write().
Now I am programming in Java (for Android, and I'm new in Java), and I have to access the data which were previously written to a file using C#.
I tried using DataInputStream.readInt() and DataInputStream.readUTF(), but I can't get proper results. I usually get a UTFDataFormatException:
java.io.UTFDataFormatException: malformed input around byte 21
or the String and int I get is wrong...
FileInputStream fs = new FileInputStream(strFilePath);
DataInputStream ds = new DataInputStream(fs);
int i;
String str1,str2,str3;
i=ds.readInt();
str1=ds.readUTF();
str2=ds.readUTF();
str3=ds.readUTF();
ds.close();
What is the proper way of doing this?
I wrote a quick example of how to read .NET's BinaryWriter format in Java here.
Excerpt from the link:
/**
 * Get string from binary stream. The length prefix is encoded as follows:
 *
 *   - if len < 0x7F, it is encoded on one byte as b0 = len
 *   - if len < 0x3FFF, it is encoded on 2 bytes as b0 = (len & 0x7F) | 0x80,
 *     b1 = len >> 7
 *   - if len < 0x1FFFFF, it is encoded on 3 bytes as b0 = (len & 0x7F) | 0x80,
 *     b1 = ((len >> 7) & 0x7F) | 0x80, b2 = len >> 14
 *   - etc.
 *
 * @param is
 * @return
 * @throws IOException
 */
public static String getString(final InputStream is) throws IOException {
    int val = getStringLength(is);
    byte[] buffer = new byte[val];
    if (is.read(buffer) < 0) {
        throw new IOException("EOF");
    }
    return new String(buffer);
}
/**
 * Binary files are encoded with a variable-length prefix that tells you
 * the size of the string. The prefix is encoded in a 7-bit format where the
 * 8th bit tells you if you should continue. If the 8th bit is set, it means
 * you need to read the next byte.
 * @param is
 * @return
 * @throws IOException
 */
public static int getStringLength(final InputStream is) throws IOException {
    int count = 0;
    int shift = 0;
    boolean more = true;
    while (more) {
        byte b = (byte) is.read();
        count |= (b & 0x7F) << shift;
        shift += 7;
        if ((b & 0x80) == 0) {
            more = false;
        }
    }
    return count;
}
As its name implies, BinaryWriter writes in binary format (.NET binary format, to be precise), and since Java is not a .NET language, it has no built-in way of reading it. You have to use an interoperable format.
You can choose an existing format, like xml or json or any other interop format.
Or you can create your own format, provided your data is simple enough (it seems to be the case here). Just write a string to your file (using a StreamWriter, for instance) in a layout you know, then read the file from Java as a string and parse it; a sketch of this approach follows.
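A minimal sketch of that idea on the C# side, assuming a made-up pipe-delimited format (the delimiter and file name are arbitrary choices, not anything from the question):

using System.IO;

int id = 42;
string s1 = "alpha", s2 = "beta", s3 = "gamma";

// One record per line; fields separated by '|'.
using (var writer = new StreamWriter("data.txt"))
{
    writer.WriteLine($"{id}|{s1}|{s2}|{s3}");
}

On the Java side, BufferedReader.readLine() followed by String.split("\\|") recovers the four fields.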
There is a very good explanation of the format used by BinaryWriter in this question: Right Here. It should be possible to read the data with a ByteArrayInputStream and write a simple translator.
I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good; it will give me a hexadecimal string.
Now I would like to use something different than hexadecimal characters: I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about this?
Edit: the reason I would like to do this is that, compared to a hexadecimal string, I would get a smaller string.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var length = number.Length;
    var result = string.Empty;
    var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
    int newlen;
    do
    {
        var value = 0;
        newlen = 0;
        for (var i = 0; i < length; ++i)
        {
            value = value * fromBase + nibbles[i];
            if (value >= toBase)
            {
                if (newlen == nibbles.Count)
                {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = value / toBase;
                value %= toBase;
            }
            else if (newlen > 0)
            {
                if (newlen == nibbles.Count)
                {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = 0;
            }
        }
        length = newlen;
        result = digits[value] + result;
    }
    while (newlen != 0);
    return result;
}
As it's coming from PHP it might not be too idiomatic C#, and there are no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
See it in action.
Earlier tonight I came across a codereview question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from an string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount = 8; // number of bits in a byte

// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits = "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor = Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36 = new BigInteger(36);

// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
    var bi = new BigInteger();
    for (int x = 0; x < chars.Length; x++)
    {
        int i = kBase36Digits.IndexOf(chars[x]);
        if (i < 0) return null; // invalid character
        bi *= kBigInt36;
        bi += i;
    }
    return bi.ToByteArray();
}

// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
    // Estimate the result's length so we don't waste time realloc'ing
    int result_length = (int)
        Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);
    // We use a List so we don't have to CopyTo a StringBuilder's characters
    // to a char[], only to then Array.Reverse it later
    var result = new System.Collections.Generic.List<char>(result_length);
    var dividend = new BigInteger(bytes);
    // IsZero's computation is less complex than evaluating "dividend > 0"
    // which invokes BigInteger.CompareTo(BigInteger)
    while (!dividend.IsZero)
    {
        BigInteger remainder;
        dividend = BigInteger.DivRem(dividend, kBigInt36, out remainder);
        int digit_index = Math.Abs((int)remainder);
        result.Add(kBase36Digits[digit_index]);
    }
    // orientate the characters in big-endian ordering
    result.Reverse();
    // ToArray will also trim the excess chars used in length prediction
    return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base64 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on Code Review).
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provides the least overhead for you.
If you want a shorter string and can accept [a-zA-Z0-9] and + and /, then look at Convert.ToBase64String.
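For example (sample bytes made up):

byte[] hash = { 0x1F, 0x8B, 0x08, 0x00, 0xC8, 0xB4, 0x22 };
string encoded = Convert.ToBase64String(hash);
Console.WriteLine(encoded); // prints "H4sIAMi0Ig=="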
Using BigInteger (needs the System.Numerics reference)
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";

// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have the same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
    if (bytes.Length == 0 || len == 0)
    {
        return String.Empty;
    }

    // BigInteger stores the sign in the last byte: > 0x7F negative,
    // <= 0x7F positive. If we have a "negative" number, we will
    // prepend a 0 byte.
    byte[] bytes2;
    if (littleEndian)
    {
        if (bytes[bytes.Length - 1] <= 0x7F)
        {
            bytes2 = bytes;
        }
        else
        {
            // Note that Array.Resize doesn't modify the original array,
            // but creates a copy and sets the passed reference to the
            // new array
            bytes2 = bytes;
            Array.Resize(ref bytes2, bytes.Length + 1);
        }
    }
    else
    {
        bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];

        // We copy and reverse the array
        for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
        {
            bytes2[j] = bytes[i];
        }
    }

    BigInteger bi = new BigInteger(bytes2);

    // A little optimization. We will do many divisions based on
    // chars.Length.
    BigInteger length = chars.Length;

    // We pre-calc the length of the string. We know the bits of
    // "information" of a byte are 8. Using Log2 we calc the bits of
    // information of our new base.
    if (len == -1)
    {
        len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
    }

    // We will build our string on a char[]
    var chs = new char[len];
    int chsIndex = 0;

    while (bi > 0)
    {
        BigInteger remainder;
        bi = BigInteger.DivRem(bi, length, out remainder);
        chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
        chsIndex++;

        // The buffer is full; if digits remain, the requested length
        // is too small to hold the value.
        if (chsIndex >= len)
        {
            if (bi > 0)
            {
                throw new OverflowException();
            }
        }
    }

    // We append the zeros that we skipped at the beginning
    if (littleEndian)
    {
        while (chsIndex < len)
        {
            chs[chsIndex] = chars[0];
            chsIndex++;
        }
    }
    else
    {
        while (chsIndex < len)
        {
            chs[len - chsIndex - 1] = chars[0];
            chsIndex++;
        }
    }

    return new string(chs);
}
public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
    if (str.Length == 0 || len == 0)
    {
        return new byte[0];
    }

    // This should be the maximum length of the byte[] array. It's
    // the opposite of the one used in ToBaseN.
    // Note that it can be passed as a parameter
    if (len == -1)
    {
        len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
    }

    BigInteger bi = BigInteger.Zero;
    BigInteger length2 = chars.Length;
    BigInteger mult = BigInteger.One;

    for (int j = 0; j < str.Length; j++)
    {
        int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);

        // We didn't find the character
        if (ix == -1)
        {
            throw new ArgumentOutOfRangeException();
        }

        bi += ix * mult;
        mult *= length2;
    }

    var bytes = bi.ToByteArray();
    int len2 = bytes.Length;

    // BigInteger adds a 0 byte for positive numbers that have the
    // last byte > 0x7F
    if (len2 >= 2 && bytes[len2 - 1] == 0)
    {
        len2--;
    }

    int len3 = Math.Min(len, len2);

    byte[] bytes2;
    if (littleEndian)
    {
        if (len == bytes.Length)
        {
            bytes2 = bytes;
        }
        else
        {
            bytes2 = new byte[len];
            Array.Copy(bytes, bytes2, len3);
        }
    }
    else
    {
        bytes2 = new byte[len];
        for (int i = 0; i < len3; i++)
        {
            bytes2[len - i - 1] = bytes[i];
        }
    }

    for (int i = len3; i < len2; i++)
    {
        if (bytes[i] != 0)
        {
            throw new OverflowException();
        }
    }

    return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow! (2 minutes for 100k). To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (this is because the whole array needs to be divided by 36). Unless you want to work with blocks of 5 bytes and lose some bits: each symbol of Base36 carries around 5.169925001 bits, so 8 of these symbols carry 41.35940001 bits, very near 40 bits (5 bytes).
Note that these methods can work both in little-endian and in big-endian mode; the endianness of the input and of the output is the same. Both methods accept a len parameter, which you can use to trim excess 0 (zeroes). Note that if you try to make an output too small to contain the input, an OverflowException will be thrown. A round-trip usage sketch follows.
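A minimal round-trip sketch using the methods above (the sample bytes are arbitrary; SequenceEqual needs using System.Linq):

byte[] input = { 200, 180, 34 };

// Encode to Base36 (little-endian by default).
string encoded = ToBaseN(input, chars);

// Decode back, passing len so we get exactly input.Length bytes instead
// of the default estimate, which may include an extra trailing zero byte.
byte[] decoded = FromBaseN(encoded, chars, true, input.Length);

Console.WriteLine(decoded.SequenceEqual(input)); // True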
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a char separator. To convert back you will need to split on that separator and convert the string[] to byte[] using the same approach with .Select().
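For instance, a sketch of the round trip (requires using System.Linq):

byte[] bytes = { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
// result == "200a180a34"

byte[] decoded = result.Split('a').Select(byte.Parse).ToArray();
// decoded == { 200, 180, 34 }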
Usually a power of 2 is used - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits per character. The only challenge in that case is how to deserialize variable-length strings.
For 36 characters you could treat the data as a large number, and then:
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps; a minimal sketch of this loop is below.
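A sketch of that loop with BigInteger, treating the bytes as an unsigned big-endian number (the function name is made up):

using System;
using System.Numerics;
using System.Text;

static string EncodeBase36(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";

    // BigInteger expects little-endian input and reads the top byte as
    // a sign, so reverse the data and leave a trailing 0x00 byte to keep
    // the value positive.
    byte[] unsigned = new byte[data.Length + 1];
    for (int i = 0; i < data.Length; i++)
        unsigned[i] = data[data.Length - 1 - i];

    var value = new BigInteger(unsigned);
    var sb = new StringBuilder();

    // Repeatedly divide by 36, collecting remainders as digits.
    do
    {
        value = BigInteger.DivRem(value, 36, out BigInteger rem);
        sb.Insert(0, digits[(int)rem]);
    }
    while (value > 0);

    return sb.ToString();
}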
You can use modulo. This example encodes your byte array to a string of [0-9][a-z]; change it if you want.
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length];
    for (i = 0; i < byteArr.Length; i++)
    {
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[i] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[i] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
If you don't want to lose data for decoding, you can use this example:
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length * 2];
    for (i = 0; i < byteArr.Length; i++)
    {
        charArr[2 * i] = (char)((int)byteArr[i] / 36 + 48);
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[2 * i + 1] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[2 * i + 1] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
Now you have a string of double length, where each odd char is the multiple of 36 and each even char is the remainder. For example: 200 = 36*5 + 20 => "5k".
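For completeness, a sketch of the matching decoder (the name is made up), reversing the quotient/remainder pairs described above:

public byte[] stringToByte(string str)
{
    byte[] byteArr = new byte[str.Length / 2];
    for (int i = 0; i < byteArr.Length; i++)
    {
        int quotient = str[2 * i] - 48;               // the multiple of 36
        char c = str[2 * i + 1];
        int remainder = (c <= '9') ? c - 48 : c - 87; // digit or letter
        byteArr[i] = (byte)(quotient * 36 + remainder);
    }
    return byteArr;
}

For example, "5k" decodes back to 5*36 + 20 = 200.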