I am trying to store two bytes in a ushort, so that the first 8 bits hold the first value and the last 8 bits the second. I almost have it working, but I get this error on the line where I bit-shift:
Severity Code Description Project File Line Suppression State
Error CS0266 Cannot implicitly convert type 'int' to 'ushort'. An explicit conversion exists (are you missing a cast?) Bit Stuff C:\Users\perqj\Dropbox\GM\Bit Stuff\Bit Stuff\Form1.cs 42 Active
Here is the code:
byte val1 = 1;
byte val2 = 1;
byte[] val = new byte[2];
val[0] = val1;
val[1] = val2;
ushort asShort = BitConverter.ToUInt16(val, 0);
ushort mask1 = 0x00ff; //0b_0000_0000_1111_1111 Haven't tried, yet
ushort mask2 = 0xff00; //0b_1111_1111_0000_0000 Haven't tried, yet
ushort short1 = asShort;
ushort short2 = asShort;
ushort byteShift = 8;
short1 &= mask1;
short2 &= mask2;
short2 = short2 >> byteShift; // error CS0266 here: '>>' on a ushort yields an int
string binaryMask1 = Convert.ToString(mask1, 2);
string binaryMask2 = Convert.ToString(mask2, 2);
string binaryShort1 = Convert.ToString(short1, 2);
string binaryShort2 = Convert.ToString(short2, 2);
listBox1.Items.Add("val1: " + val1);
listBox1.Items.Add("val2: " + val2);
listBox1.Items.Add("Short: " + asShort);
listBox1.Items.Add("mask1: " + mask1 + " " + binaryMask1);
listBox1.Items.Add("mask2: " + mask2 + " " + binaryMask2);
listBox1.Items.Add("val1: " + short1 + " " + binaryShort1);
listBox1.Items.Add("val2: " + short2 + " " + binaryShort2);
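For reference, a minimal fix for the CS0266 error: the >> and & operators promote ushort operands to int, so the result has to be cast back explicitly. A sketch using the names above (the packing line assumes a little-endian machine, matching the BitConverter call):
short2 = (ushort)(short2 >> byteShift); // cast the int result back to ushort
// The same pack/unpack without BitConverter:
ushort packed = (ushort)(val1 | (val2 << 8)); // low byte = val1, high byte = val2
byte first  = (byte)(packed & 0x00FF);        // val1
byte second = (byte)((packed & 0xFF00) >> 8); // val2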
I usually use code like this:
class Program
{
static void Main(string[] args)
{
byte[] input = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
Data[] output = input
    .Select((x, i) => new { x, i })
    .GroupBy(p => p.i / 2)
    .Select(g => new Data() { upper = g.First().x, lower = g.Last().x })
    .ToArray();
}
}
public class Data
{
public byte upper { get; set; }
public byte lower { get; set; }
}
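For the sample input above this pairs consecutive bytes, so output[0] is { upper = 0, lower = 1 }, output[1] is { upper = 2, lower = 3 }, and so on; the GroupBy key i / 2 relies on integer division to bucket each adjacent pair.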
I am trying to port an AES GCM implementation in Python from the OpenTLS project to C# (.NET). Below is the Python code from OpenTLS:
#######################
### Galois Counter Mode
#######################
class AES_GCM:
def __init__(self, keys, key_size, hash):
key_size //= 8
hash_size = hash.digest_size
self.client_AES_key = keys[0 : key_size]
self.server_AES_key = keys[key_size : 2*key_size]
self.client_IV = keys[2*key_size : 2*key_size+4]
self.server_IV = keys[2*key_size+4 : 2*key_size+8]
self.H_client = bytes_to_int(AES.new(self.client_AES_key, AES.MODE_ECB).encrypt('\x00'*16))
self.H_server = bytes_to_int(AES.new(self.server_AES_key, AES.MODE_ECB).encrypt('\x00'*16))
def GF_mult(self, x, y):
product = 0
for i in range(127, -1, -1):
product ^= x * ((y >> i) & 1)
x = (x >> 1) ^ ((x & 1) * 0xE1000000000000000000000000000000)
return product
def H_mult(self, H, val):
product = 0
for i in range(16):
product ^= self.GF_mult(H, (val & 0xFF) << (8 * i))
val >>= 8
return product
def GHASH(self, H, A, C):
C_len = len(C)
A_padded = bytes_to_int(A + b'\x00' * (16 - len(A) % 16))
if C_len % 16 != 0:
C += b'\x00' * (16 - C_len % 16)
tag = self.H_mult(H, A_padded)
for i in range(0, len(C) // 16):
tag ^= bytes_to_int(C[i*16:i*16+16])
tag = self.H_mult(H, tag)
tag ^= bytes_to_int(nb_to_n_bytes(8*len(A), 8) + nb_to_n_bytes(8*C_len, 8))
tag = self.H_mult(H, tag)
return tag
def decrypt(self, ciphertext, seq_num, content_type, debug=False):
iv = self.server_IV + ciphertext[0:8]
counter = Counter.new(nbits=32, prefix=iv, initial_value=2, allow_wraparound=False)
cipher = AES.new(self.server_AES_key, AES.MODE_CTR, counter=counter)
plaintext = cipher.decrypt(ciphertext[8:-16])
# Computing the tag is actually pretty time consuming
if debug:
auth_data = nb_to_n_bytes(seq_num, 8) + nb_to_n_bytes(content_type, 1) + TLS_VERSION + nb_to_n_bytes(len(ciphertext)-8-16, 2)
auth_tag = self.GHASH(self.H_server, auth_data, ciphertext[8:-16])
auth_tag ^= bytes_to_int(AES.new(self.server_AES_key, AES.MODE_ECB).encrypt(iv + '\x00'*3 + '\x01'))
auth_tag = nb_to_bytes(auth_tag)
print('Auth tag (from server): ' + bytes_to_hex(ciphertext[-16:]))
print('Auth tag (from client): ' + bytes_to_hex(auth_tag))
return plaintext
def encrypt(self, plaintext, seq_num, content_type):
iv = self.client_IV + os.urandom(8)
# Encrypts the plaintext
plaintext_size = len(plaintext)
counter = Counter.new(nbits=32, prefix=iv, initial_value=2, allow_wraparound=False)
cipher = AES.new(self.client_AES_key, AES.MODE_CTR, counter=counter)
ciphertext = cipher.encrypt(plaintext)
# Compute the Authentication Tag
auth_data = nb_to_n_bytes(seq_num, 8) + nb_to_n_bytes(content_type, 1) + TLS_VERSION + nb_to_n_bytes(plaintext_size, 2)
auth_tag = self.GHASH(self.H_client, auth_data, ciphertext)
auth_tag ^= bytes_to_int(AES.new(self.client_AES_key, AES.MODE_ECB).encrypt(iv + b'\x00'*3 + b'\x01'))
auth_tag = nb_to_bytes(auth_tag)
# print('Auth key: ' + bytes_to_hex(nb_to_bytes(self.H)))
# print('IV: ' + bytes_to_hex(iv))
# print('Key: ' + bytes_to_hex(self.client_AES_key))
# print('Plaintext: ' + bytes_to_hex(plaintext))
# print('Ciphertext: ' + bytes_to_hex(ciphertext))
# print('Auth tag: ' + bytes_to_hex(auth_tag))
return iv[4:] + ciphertext + auth_tag
An attempt to translate this to C# code is below (sorry for the amateurish code, I am a newbie):
EDIT:
Created an array with the values returned by GetBytes and printed the result:
byte[] incr = BitConverter.GetBytes((int) 2);
cf.printBuf(incr, (String) "Array:");
return;
Noticed that the result was "02 00 00 00", so I guess my machine is little-endian.
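For what it's worth, the framework exposes this directly, so the check can be a one-liner:
bool isLittleEndian = BitConverter.IsLittleEndian; // true on x86/x64, where GetBytes((int)2) yields 02 00 00 00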
Made some changes to the code as rodrigogq mentioned. Below is the latest code. It is still not working:
Verified that GHASH, GF_mult and H_mult are giving same results. Below is the verification code:
Python:
key = "\xab\xcd\xab\xcd"
key = key * 10
h = "\x00\x00"
a = AES_GCM(key, 128, h)
H = 200
A = "\x02" * 95
C = "\x02" * 95
D = a.GHASH(H, A, C)
print(D)
C#:
BigInteger H = new BigInteger(200);
byte[] A = new byte[95];
byte[] C = new byte[95];
for (int i = 0; i < 95; i ++)
{
A[i] = 2;
C[i] = 2;
}
BigInteger a = e.GHASH(H, A, C);
Console.WriteLine(a);
Results:
For both: 129209628709014910494696220101529767594
EDIT: Now the outputs agree between Python and C#, so essentially the porting is done :) However, these outputs still don't agree with Wireshark, so the handshake is still failing. Maybe something is wrong with the procedure or the contents.
EDIT: Finally managed to get the code working. Below is the code that resulted in a successful handshake.
Working Code:
/*
* Receiving seqNum as UInt64 and content_type as byte
*
*/
public byte[] AES_Encrypt_GCM(byte[] client_write_key, byte[] client_write_iv, byte[] plaintext, UInt64 seqNum, byte content_type)
{
int plaintext_size = plaintext.Length;
List<byte> temp = new List<byte>();
byte[] init_bytes = new byte[16];
Array.Clear(init_bytes, 0, 16);
byte[] encrypted = AES_Encrypt_ECB(init_bytes, client_write_key, 128);
Array.Reverse(encrypted);
BigInteger H_client = new BigInteger(encrypted);
if (H_client < 0)
{
temp.Clear();
temp.TrimExcess();
temp.AddRange(H_client.ToByteArray());
temp.Add(0);
H_client = new BigInteger(temp.ToArray());
}
Random rnd = new Random();
byte[] random = new byte[8];
rnd.NextBytes(random);
/*
* incr is little endian, but it needs to be in big endian format
*
*/
byte[] incr = BitConverter.GetBytes((int) 2);
Array.Reverse(incr);
/*
* Counter = First 4 bytes of IV + 8 Random bytes + 4 bytes of sequential value (starting at 2)
*
*/
temp.Clear();
temp.TrimExcess();
temp.AddRange(client_write_iv);
temp.AddRange(random);
byte[] iv = temp.ToArray();
temp.AddRange(incr);
byte[] counter = temp.ToArray();
AES_CTR aesctr = new AES_CTR(counter);
ICryptoTransform ctrenc = aesctr.CreateEncryptor(client_write_key, null);
byte[] ctext = ctrenc.TransformFinalBlock(plaintext, 0, plaintext_size);
byte[] seq_num = BitConverter.GetBytes(seqNum);
/*
* Using UInt16 instead of short
*
*/
byte[] tls_version = BitConverter.GetBytes((UInt16) 771);
Console.WriteLine("Plain Text size = {0}", plaintext_size);
byte[] plaintext_size_array = BitConverter.GetBytes((UInt16) plaintext_size);
/*
* Size was returned as 10 00 instead of 00 10
*
*/
Array.Reverse(plaintext_size_array);
temp.Clear();
temp.TrimExcess();
temp.AddRange(seq_num);
temp.Add(content_type);
temp.AddRange(tls_version);
temp.AddRange(plaintext_size_array);
byte[] auth_data = temp.ToArray();
BigInteger auth_tag = GHASH(H_client, auth_data, ctext);
Console.WriteLine("H = {0}", H_client);
this.printBuf(plaintext, "plaintext = ");
this.printBuf(auth_data, "A = ");
this.printBuf(ctext, "C = ");
this.printBuf(client_write_key, "client_AES_key = ");
this.printBuf(iv.ToArray(), "iv = ");
Console.WriteLine("Auth Tag just after GHASH: {0}", auth_tag);
AesCryptoServiceProvider aes2 = new AesCryptoServiceProvider(); // note: aes2 and transform1 below are never used; AES_Encrypt_ECB performs the ECB step
aes2.Key = client_write_key;
aes2.Mode = CipherMode.ECB;
aes2.Padding = PaddingMode.None;
aes2.KeySize = 128;
ICryptoTransform transform1 = aes2.CreateEncryptor();
byte[] cval = {0, 0, 0, 1};
temp.Clear();
temp.TrimExcess();
temp.AddRange(iv);
temp.AddRange(cval);
byte[] encrypted1 = AES_Encrypt_ECB(temp.ToArray(), client_write_key, 128);
Array.Reverse(encrypted1);
BigInteger nenc = new BigInteger(encrypted1);
if (nenc < 0)
{
temp.Clear();
temp.TrimExcess();
temp.AddRange(nenc.ToByteArray());
temp.Add(0);
nenc = new BigInteger(temp.ToArray());
}
this.printBuf(nenc.ToByteArray(), "NENC = ");
Console.WriteLine("NENC: {0}", nenc);
auth_tag ^= nenc;
byte[] auth_tag_array = auth_tag.ToByteArray();
Array.Reverse(auth_tag_array);
this.printBuf(auth_tag_array, "Final Auth Tag Byte Array: ");
Console.WriteLine("Final Auth Tag: {0}", auth_tag);
this.printBuf(random, "Random sent = ");
temp.Clear();
temp.TrimExcess();
temp.AddRange(random);
temp.AddRange(ctext);
temp.AddRange(auth_tag_array);
return temp.ToArray();
}
public void printBuf(byte[] data, String heading)
{
int numBytes = 0;
Console.Write(heading + "\"");
if (data == null)
{
return;
}
foreach (byte element in data)
{
Console.Write("\\x{0}", element.ToString("X2"));
numBytes = numBytes + 1;
if (numBytes == 32)
{
Console.Write("\r\n");
numBytes = 0;
}
}
Console.Write("\"\r\n");
}
public BigInteger GF_mult(BigInteger x, BigInteger y)
{
BigInteger product = new BigInteger(0);
BigInteger e10 = BigInteger.Parse("00E1000000000000000000000000000000", NumberStyles.AllowHexSpecifier);
/*
* Below operation y >> i fails if i is UInt32, so leaving it as int
*
*/
int i = 127;
while (i != -1)
{
product = product ^ (x * ((y >> i) & 1));
x = (x >> 1) ^ ((x & 1) * e10);
i = i - 1;
}
return product;
}
public BigInteger H_mult(BigInteger H, BigInteger val)
{
BigInteger product = new BigInteger(0);
int i = 0;
/*
* Below operation (val & 0xFF) << (8 * i) fails if i is UInt32, so leaving it as int
*
*/
while (i < 16)
{
product = product ^ GF_mult(H, (val & 0xFF) << (8 * i));
val = val >> 8;
i = i + 1;
}
return product;
}
public BigInteger GHASH(BigInteger H, byte[] A, byte[] C)
{
int C_len = C.Length;
List <byte> temp = new List<byte>();
int plen = 16 - (A.Length % 16);
byte[] zeroes = new byte[plen];
Array.Clear(zeroes, 0, zeroes.Length);
temp.AddRange(A);
temp.AddRange(zeroes);
temp.Reverse();
BigInteger A_padded = new BigInteger(temp.ToArray());
temp.Clear();
temp.TrimExcess();
byte[] C1;
if ((C_len % 16) != 0)
{
plen = 16 - (C_len % 16);
byte[] zeroes1 = new byte[plen]; // a new byte[] is already zero-filled
Array.Clear(zeroes1, 0, zeroes1.Length);
temp.AddRange(C);
temp.AddRange(zeroes1);
C1 = temp.ToArray();
}
else
{
C1 = new byte[C.Length];
Array.Copy(C, 0, C1, 0, C.Length);
}
temp.Clear();
temp.TrimExcess();
BigInteger tag = H_mult(H, A_padded);
this.printBuf(H.ToByteArray(), "H Byte Array:");
for (int i = 0; i < (int) (C1.Length / 16); i ++)
{
byte[] toTake = C1.Skip(i * 16).Take(16).ToArray(); // Skip(0) is a no-op, so i == 0 needs no special case
Array.Reverse(toTake);
BigInteger tempNum = new BigInteger(toTake);
tag ^= tempNum;
tag = H_mult(H, tag);
}
byte[] A_arr = BitConverter.GetBytes((long) (8 * A.Length));
/*
* Want length to be "00 00 00 00 00 00 00 xy" format
*
*/
Array.Reverse(A_arr);
byte[] C_arr = BitConverter.GetBytes((long) (8 * C_len));
/*
* Want length to be "00 00 00 00 00 00 00 xy" format
*
*/
Array.Reverse(C_arr);
temp.AddRange(A_arr);
temp.AddRange(C_arr);
temp.Reverse();
BigInteger array_int = new BigInteger(temp.ToArray());
tag = tag ^ array_int;
tag = H_mult(H, tag);
return tag;
}
Using SSL decryption in Wireshark (with the private key), I found that:
The nonce calculated by the C# code is the same as the one in Wireshark (the fixed part is client_write_IV and the variable part is 8 random bytes)
The value of the AAD (auth_data above: seqNum + content_type + tls_version + plaintext_size) matches the Wireshark value
The ciphertext (ctext above, the C in GHASH(H, A, C)) also matches the value Wireshark calculates
However, the auth_tag calculation (GHASH(H_client, auth_data, ctext)) is failing. It would be great if someone could point out what might be wrong in the GHASH function. A basic comparison of the GF_mult results between Python and C# shows they don't match either.
This is not a final solution, just some advice. I see you are using the function BitConverter.GetBytes a lot, with int instead of Int32 or Int16.
The remarks in the official documentation say:
The order of bytes in the array returned by the GetBytes method
depends on whether the computer architecture is little-endian or
big-endian.
As for when you are using the BigInteger structure, it seems to be expecting always the little-endian order:
value
Type: System.Byte[]
An array of byte values in little-endian order.
Prefer the explicit Int32 and Int16 types, and pay attention to the byte order before using the results in these calculations.
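To illustrate both remarks, here is a small sketch (mine, not from the original post; it assumes System, System.Linq and System.Numerics are imported) that normalizes BitConverter output to big-endian and builds a non-negative BigInteger from big-endian bytes:
byte[] bytes = BitConverter.GetBytes((Int32)2);   // 02 00 00 00 on a little-endian host
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);                         // now big-endian: 00 00 00 02
// BigInteger wants little-endian input and reads a set high bit as a sign bit,
// so reverse big-endian data and append 0x00 to keep the value non-negative:
byte[] beBlock = { 0xE1, 0x00, 0x00, 0x01 };      // example big-endian bytes
BigInteger value = new BigInteger(beBlock.Reverse().Concat(new byte[] { 0 }).ToArray());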
Use log4net to log all the operations. It would be nice to add the same logs to the Python program so that you can compare them side by side and see exactly where the calculations diverge.
Hope this gives you some tips on where to start.
I'm trying to figure out the final size of a file serialized with protobuf-net, so I can choose the best approach.
I made some comparison tests with different proto configurations and a binary serialization, but I still don't understand how the "varint to bytes" conversion works.
Classes
public class Pt2D
{
public Pt2D() { }
public Pt2D(double x, double y)
{
X = x;
Y = y;
}
public double X { get; set; }
public double Y { get; set; }
}
public class Pt3D : Pt2D
{
public Pt3D() { }
public Pt3D(double x, double y, double z) : base(x, y)
{
Z = z;
}
public double Z { get; set; }
}
public class FullPt3D
{
public FullPt3D() { }
public FullPt3D(double x, double y, double z)
{
X = x;
Y = y;
Z = z;
}
public double X { get; set; }
public double Y { get; set; }
public double Z { get; set; }
}
Test case
private void ProtoBufferTest()
{
var model = RuntimeTypeModel.Default;
model.Add(typeof(Pt2D), false)
.Add(1, "X")
.Add(2, "Y")
.AddSubType(101, typeof(Pt3D));
model[typeof(Pt3D)]
.Add(1, "Z");
model.Add(typeof(FullPt3D), false)
.Add(1, "X")
.Add(2, "Y")
.Add(3, "Z");
double x = 5.6050692524784562;
double y = 0.74161805247031987;
double z = 8.5883424750474937;
string filename = "testPt3D.pb";
using (var file = File.Create(filename))
{
Serializer.Serialize(file, new Pt3D(x, y, z));
}
Console.WriteLine(filename + " length = " + new FileInfo(filename).Length + " bytes") ;
filename = "testFullPt3D.pb";
using (var file = File.Create(filename))
{
Serializer.Serialize(file, new FullPt3D(x, y, z));
}
Console.WriteLine(filename + " length = " + new FileInfo(filename).Length + " bytes");
filename = "testBinaryWriter.bin";
using (var file = File.Create(filename))
{
using (var writer = new BinaryWriter(file))
{
writer.Write(x);
writer.Write(y);
writer.Write(z);
}
}
Console.WriteLine(filename + " length = " + new FileInfo(filename).Length + " bytes");
}
Test results
1) testPt3D.pb length = 30 bytes
2) testFullPt3D.pb length = 27 bytes
3) testBinaryWriter.bin length = 24 bytes
Q1) 24 bytes are used to store the 3 double values, and that's fine, but what values are stored in cases 1) and 2) to reach 30 and 27 bytes? (I suppose the int values used in the model mapping)
Q2) I made some tests by changing the SubType mapping for Pt2D, but I cannot understand the resulting size changes:
model.Add(typeof(Pt2D), false)
.Add(1, "X")
.Add(2, "Y")
.AddSubType(3, typeof(Pt3D));
Result: testPt3D.pb length = 29 bytes
model.Add(typeof(Pt2D), false)
.Add(1, "X")
.Add(2, "Y")
.AddSubType(21, typeof(Pt3D));
Result: testPt3D.pb length = 30 bytes
model.Add(typeof(Pt2D), false)
.Add(1, "X")
.Add(2, "Y")
.AddSubType(1111, typeof(Pt3D));
Result: testPt3D.pb length = 30 bytes
I tried to use this tool to understand better, but it gives different byte-conversion results.
Why do I get the same size using 21, 101 or 1111?
1) testPt3D.pb length = 30 bytes
(subclass comes first) [field 101, string] = 2 bytes: the header encodes 3 bits for the wire type ("string") plus the field number 101, which needs 7 bits; a varint packs 7 bits per byte with a continuation bit, so those 10 bits take 2 bytes (total = 2; see the sketch after this answer)
[data length "9"] = 1 byte (total = 3)
[field 1, fixed 64] = 1 byte (total = 4)
[payload 1] = 8 bytes (total = 12)
[field 1, fixed 64] = 1 byte (total = 13)
[payload 1] = 8 bytes (total = 21)
[field 2, fixed 64] = 1 byte (total = 22)
[payload 2] = 8 bytes (total = 30)
2) testFullPt3D.pb length = 27 bytes
[field 1, fixed 64] = 1 byte (total = 1)
[payload 1] = 8 bytes (total = 9)
[field 2, fixed 64] = 1 byte (total = 10)
[payload 2] = 8 bytes (total = 18)
[field 3, fixed 64] = 1 byte (total = 19)
[payload 3] = 8 bytes (total = 27)
There are other options in protobuf when dealing with repeated data - "packed" and "grouped"; they only make sense when discussing more data than 3 values, though.
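To make the header arithmetic above concrete, here is a small sketch (my own, not from the answer) that computes how many bytes a protobuf field header takes; the header value is (fieldNumber << 3) | wireType, encoded as a varint 7 bits per byte:
static int HeaderSize(int fieldNumber, int wireType = 2)
{
    uint header = (uint)((fieldNumber << 3) | wireType);
    int bytes = 1;
    while (header >= 0x80) { header >>= 7; bytes++; } // each extra byte carries 7 more bits
    return bytes;
}
// HeaderSize(3) == 1 (the 29-byte file); HeaderSize(21), HeaderSize(101) and
// HeaderSize(1111) are all 2, which is why 21, 101 and 1111 all give 30 bytes.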
I am trying to rewrite part of a program from C# to Python, but I have run into problems with the bitwise operations.
Here is the C# code:
private string _generateConfirmationHashForTime(long time, string tag)
{
time = 1459152870;
byte[] decode = Convert.FromBase64String("TphBbTrbbVGJuXQ15OVZVZeBB9M=");
int n2 = 8;
if (tag != null)
{
if (tag.Length > 32)
{
n2 = 8 + 32;
}
else
{
n2 = 8 + tag.Length;
}
}
byte[] array = new byte[n2];
int n3 = 8;
while (true)
{
int n4 = n3 - 1;
if (n3 <= 0)
{
break;
}
array[n4] = (byte)time;
time >>= 8;
n3 = n4;
}
if (tag != null)
{
Array.Copy(Encoding.UTF8.GetBytes(tag), 0, array, 8, n2 - 8);
}
try
{
HMACSHA1 hmacGenerator = new HMACSHA1();
hmacGenerator.Key = decode;
byte[] hashedData = hmacGenerator.ComputeHash(array);
string encodedData = Convert.ToBase64String(hashedData, Base64FormattingOptions.None);
Console.WriteLine(encodedData);
return encodedData;
}
catch (Exception)
{
return null; //Fix soon: catch-all is BAD!
}
}
I rewrote it to Python:
def _generateConfirmationHashForTime(self, time, tag):
time = 1459152870
decode = base64.b64decode("TphBbTrbbVGJuXQ15OVZVZeBB9M=")
n2 = 8
if tag is not None:
if len(tag) > 32:
n2 = 8 + 32
else:
n2 = 8 + len(tag)
arrayb = [hex(time >> i & 0xff) for i in (56, 48, 40, 32, 24, 16, 8, 0)]
if tag is not None:
for ch in range(0, len(tag)):
arrayb.append(hex(ord(tag[ch])))
arrayc = 0
n4 = len(arrayb) - 1
for i in range(0, len(arrayb)):
arrayc <<= 8
arrayc |= int(arrayb[n4], 16)
n4 -= 1
array_binary = binascii.a2b_hex("{:016x}".format(arrayc))
hmacGenerator = hmac.new(decode, array_binary, hashlib.sha1)
hashedData = hmacGenerator.digest()
encodedData = base64.b64encode(hashedData)
print encodedData
The hashing results are not equal: the encodedData values do not match :(
Can you point out where the error in the code might be?
OK, now I remember why I don't use Python. Language snarks aside though...
The C# code composes an array of bytes, 8 from the time variable (in big-endian form, MSB first) and up to 32 from the UTF8 encoding of the tag string... but limited by the length of the original string, ignoring multi-byte encoding. Not exactly ideal, but we can handle that.
The bytes from the time variable are simple enough:
arr = struct.pack(">Q", time)
For the tag string convert it to UTF8, then slice the first 32 bytes off and append it to the array:
arr += str(tag).encode("utf-8")[0:min(32, len(str(tag)))]
Up to here we're fine. I compared the base64 encoding of arr against the composed message in C# and they match for my test data, as does the resultant HMAC message digest.
Here's the full code:
def _generateConfirmationHashForTime(time, tag):
time = 1459152870
decode = base64.b64decode("TphBbTrbbVGJuXQ15OVZVZeBB9M=")
arr = struct.pack(">Q", time)
arr += str(tag).encode("utf-8")[0:min(32, len(str(tag)))]
hmacGenerator = hmac.new(decode, arr, hashlib.sha1)
hashedData = hmacGenerator.digest()
encodedData = base64.b64encode(hashedData)
return encodedData
I have the following code, which receives bytes and has to convert them to float and display the converted values:
public float DecodeFloat(byte[] data)
{
float x = data[3]|data[2]<<8|data[1]<<16|data[0]<<24;
return x;
}
// receive thread
private void ReceiveData()
{
int count=0;
IPEndPoint remoteIP = new IPEndPoint(IPAddress.Parse("10.0.2.213"), port);
client = new UdpClient(remoteIP);
while (true)
{
try
{
IPEndPoint anyIP = new IPEndPoint(IPAddress.Any, 0);
byte[] data = client.Receive(ref anyIP);
Vector3 vec,rot;
float x= DecodeFloat (data);
float y= DecodeFloat (data + 4);
float z= DecodeFloat (data + 8);
float alpha= DecodeFloat (data + 12);
float theta= DecodeFloat (data +16);
float phi= DecodeFloat (data+20);
vec.Set(x,y,z);
rot.Set (alpha,theta,phi);
print(">> " + x.ToString() + ", "+ y.ToString() + ", "+ z.ToString() + ", "
+ alpha.ToString() + ", "+ theta.ToString() + ", "+ phi.ToString());
// latest UDPpacket
lastReceivedUDPPacket=x.ToString()+" Packet#: "+count.ToString();
count = count+1;
}
Can anyone point me in the right direction, please?
Given 4 bytes, you would normally only "shift" (<<) if it is integer data. The code in the question basically reads the data as an int (via the shifts), then casts that int to a float, which is almost certainly not what was intended.
Since you want to interpret it as float, you should probably use:
float val = BitConverter.ToSingle(data, offset);
where offset is the 0, 4, 8, 12 etc shown in your data + 4, data + 8, etc. This treats the 4 bytes (relative to offset) as raw IEEE 754 floating point data. For example:
float x= BitConverter.ToSingle(data, 0);
float y= BitConverter.ToSingle(data, 4);
float z= BitConverter.ToSingle(data, 8);
float alpha= BitConverter.ToSingle(data, 12);
float theta= BitConverter.ToSingle(data, 16);
float phi= BitConverter.ToSingle(data, 20);
Note that this makes assumptions about "endianness" - see BitConverter.IsLittleEndian.
Edit: from comments, it sounds like the data is other-endian; try:
public static float ReadSingleBigEndian(byte[] data, int offset)
{
if (BitConverter.IsLittleEndian)
{
byte tmp = data[offset];
data[offset] = data[offset + 3];
data[offset + 3] = tmp;
tmp = data[offset + 1];
data[offset + 1] = data[offset + 2];
data[offset + 2] = tmp;
}
return BitConverter.ToSingle(data, offset);
}
public static float ReadSingleLittleEndian(byte[] data, int offset)
{
if (!BitConverter.IsLittleEndian)
{
byte tmp = data[offset];
data[offset] = data[offset + 3];
data[offset + 3] = tmp;
tmp = data[offset + 1];
data[offset + 1] = data[offset + 2];
data[offset + 2] = tmp;
}
return BitConverter.ToSingle(data, offset);
}
...
float x= ReadSingleBigEndian(data, 0);
float y= ReadSingleBigEndian(data, 4);
float z= ReadSingleBigEndian(data, 8);
float alpha= ReadSingleBigEndian(data, 12);
float theta= ReadSingleBigEndian(data, 16);
float phi= ReadSingleBigEndian(data, 20);
If you need to optimize this massively, there are also things you can do with unsafe code to build an int from shifting (picking the endianness when shifting), then do an unsafe coerce to get the int as a float; for example (noting that I haven't checked endianness here - it might misbehave on a big-endian machine, but most people don't have those):
public static unsafe float ReadSingleBigEndian(byte[] data, int offset)
{
int i = (data[offset++] << 24) | (data[offset++] << 16) |
(data[offset++] << 8) | data[offset];
return *(float*)&i;
}
public static unsafe float ReadSingleLittleEndian(byte[] data, int offset)
{
int i = (data[offset++]) | (data[offset++] << 8) |
(data[offset++] << 16) | (data[offset] << 24);
return *(float*)&i;
}
Or crazier, and CPU-safer:
public static float ReadSingleBigEndian(byte[] data, int offset)
{
return ReadSingle(data, offset, false);
}
public static float ReadSingleLittleEndian(byte[] data, int offset)
{
return ReadSingle(data, offset, true);
}
private static unsafe float ReadSingle(byte[] data, int offset,
bool littleEndian)
{
fixed (byte* ptr = &data[offset])
{
if (littleEndian != BitConverter.IsLittleEndian)
{ // other-endian; swap
byte b = ptr[0];
ptr[0] = ptr[3];
ptr[3] = b;
b = ptr[1];
ptr[1] = ptr[2];
ptr[2] = b;
}
return *(float*)ptr;
}
}
I have the following Delphi code reading from a socket:
type
RegbusReq2=packed record
Funct:char;
Device:char;
Device1:char;
Starting:integer;
Quantity:smallint;
_CRC:Word;
stroka:char;
end;
type
crcReg=packed record
buf:array[0..2] of byte;
value:array[0..5] of byte;
end;
type
myRB=record
case byte of
0:(data:RegbusReq2);
1:(Buff:crcReg);
end;
type
outVal=packed record
cap:array[0..8] of byte;
val:array[0..3] of single;
end;
type
outValBuff=record
case byte of
0:(val:outVal);
1:(Buff:array [1..25] of byte);
end;
var
Form1: TForm1;
hCommFile:THandle;
typeCon:byte;
cs1: TClientSocket;
...
Timer tick for reading the data:
Procedure TForm1.Timer1Timer(Sender: TObject);
var
DataReq:myRB;
output:outValbuff;
Wr,Rd:Cardinal;
i:integer;
st:string;
begin
//fill in the request
DataReq.data.Funct:=chr(63); //command "?"
DataReq.data.Device:=chr(48); //device number, high digit
DataReq.data.Device1:=chr(49); //device number, low digit
DataReq.data.Starting:=2088; //address in decimal form
DataReq.data.Quantity:=16; //data size
DataReq.data._CRC:=CRC2(@DataReq.Buff.value,6); //checksum
DataReq.data.stroka:=chr(13); //line feed
PurgeComm(hCommFile,PURGE_RXCLEAR or PURGE_TXCLEAR);
if typecon=1 then begin //COM port
WriteFile(hCommFile,DataReq.data,SizeOf(DataReq.data),Wr,nil);
ReadFile(hCommFile,Output.buff,SizeOf(Output.Buff),Rd,nil);
end;
if typecon=2 then begin //ethernet
cs1.Active:=true;
cs1.Socket.SendBuf(DataReq.data,SizeOf(DataReq.data));
cs1.Socket.ReceiveBuf(output.buff,SizeOf(Output.Buff));
cs1.Active:=false;
end;
for i:=1 to 25 do
st:=st + '_' + inttostr(output.buff[i]);
memo1.Lines.Add(st);
edit1.Text:=FloatToStr(Round(output.val.val[0]
*exp(2*ln(10)))/(exp(2*ln(10))));
edit2.Text:=FloatToStr(Round(output.val.val[1]
*exp(2*ln(10)))/(exp(2*ln(10))));
edit3.Text:=FloatToStr(Round(output.val.val[2]
*exp(2*ln(10)))/(exp(2*ln(10))));
edit4.Text:=FloatToStr(Round(output.val.val[3]
*exp(2*ln(10)))/(exp(2*ln(10))));
end;
I have the following C# code:
[StructLayout(LayoutKind.Sequential, Pack = 1, CharSet = CharSet.Ansi)]
public struct RegBusRec
{
public char Funct;
public char Device;
public char Device1;
public int Starting;
public short Quantity;
public ushort _CRC;
public char Message;
}
...
private void timer1_Tick(object sender, EventArgs e)
{
byte[] CRCc = new byte[6];
byte[] tmp;
byte[] output = new byte[25];
RegBusRec req2 = new RegBusRec();
Crc16 crc16 = new Crc16();
req2.Funct = '?';
req2.Device = '0';
req2.Device1 = '1';
req2.Starting = 2088;
req2.Quantity = 16;
req2.Message = '\r';
tmp = BitConverter.GetBytes(req2.Starting);
CRCc[0] = tmp[0];
CRCc[1] = tmp[1];
CRCc[2] = tmp[2];
CRCc[3] = tmp[3];
tmp = BitConverter.GetBytes(req2.Quantity);
CRCc[4] = tmp[0];
CRCc[5] = tmp[1];
req2._CRC = crc16.ComputeChecksum(CRCc);
textBox6.Text += Environment.NewLine;
textBox6.Text += "CRC: " + req2._CRC;
cl.Client.Send(StructureToByteArray(req2));
cl.Client.Receive(output);
byte[] val = new byte[4];
val[0] = output[15];
val[1] = output[16];
val[2] = output[17];
val[3] = output[18];
textBox6.Text += Environment.NewLine;
textBox6.Text += "Query: ";
for (int i = 0; i < StructureToByteArray(req2).Length; i++)
{
textBox6.Text += StructureToByteArray(req2)[i] + "_";
}
textBox2.Text = BitConverter.ToSingle(val,0).ToString();
textBox6.Text += Environment.NewLine;
textBox6.Text += "Data: ";
for (int i = 0; i < output.Length; i++)
{
textBox6.Text += output[i] + "_";
}
}
...
static byte[] StructureToByteArray(object obj)
{
int len = Marshal.SizeOf(obj);
byte[] arr = new byte[len];
IntPtr ptr = Marshal.AllocHGlobal(len);
Marshal.StructureToPtr(obj, ptr, true);
Marshal.Copy(ptr, arr, 0, len);
Marshal.FreeHGlobal(ptr);
return arr;
}
The CRC is calculated correctly, but I get wrong numbers in textBox2.Text - seemingly random values. How can I read these numbers correctly? Thanks in advance.
Screenshot of the Delphi debugger:
OK, I think I might have deciphered it, but I wouldn't count on it.
You seem to be saying that the C# code displays an unexpected value in textBox2. Looking at the C# code, textBox2 displays data coming from val. And val is assigned like so:
val[0] = output[15];
val[1] = output[16];
val[2] = output[17];
val[3] = output[18];
Note that output is a C# byte array and so uses zero-based indexing.
In your Delphi code, the matching data structures are:
outVal=packed record
cap:array[0..8] of byte;
val:array[0..3] of single;
end;
outValBuff=record
case byte of
0:(val:outVal);
1:(Buff:array [1..25] of byte);
end;
So, cap consumes the first 9 bytes and the next 16 are the 4 single-precision values. In terms of the C# byte array:
cap is output[0] to output[8],
val[0] is output[9] to output[12],
val[1] is output[13] to output[16],
val[2] is output[17] to output[20],
val[3] is output[21] to output[24].
But you are reading output[15] to output[18], and so combine half of val[1] with half of val[2].
In short, you just need to fix up your indexing.
Now, you are making it much more complex than it needs to be. You could do something like this to get hold of all 4 single values:
float[] val = new float[4];
int byteIndex = 9;
for (int i = 0; i < 4; i++)
{
    val[i] = BitConverter.ToSingle(output, byteIndex); // read from the received buffer
    byteIndex += 4;
}
And rather than copying things around byte by byte into CRCc, do this:
Buffer.BlockCopy(BitConverter.GetBytes(req2.Starting), 0, CRCc, 0, 4);
Buffer.BlockCopy(BitConverter.GetBytes(req2.Quantity), 0, CRCc, 4, 2);