Why does this output differently in C# and C++?

In C#
var buffer = new byte[] {71, 20, 0, 0, 9, 0, 0, 0};
var g = (ulong) ((uint) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
(long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);
In C++
#define byte unsigned char
#define uint unsigned int
#define ulong unsigned long long
byte buffer[8] = {71, 20, 0, 0, 9, 0, 0, 0};
ulong g = (ulong) ((uint) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
(long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);
C# outputs 38654710855, C++ outputs 5199.
Why? I have been scratching my head on this for hours...
Edit: C# has the correct output.
Thanks for the help everyone :) Jack Aidley's answer was the first so I will mark that as the accepted answer. The other answers were also correct, but I can't accept multiple answers :\

The C++ is not working because you're casting to long, which is typically 32 bits in most current C++ implementations but whose exact width is left to the implementation. You want long long.
Also, please read Bikeshedder's more complete answer below. He's quite correct that fixed-size typedefs are a more reliable way of doing this.

The problem is that the long type in C++ is still 4 bytes, or 32 bits, on most compilers, and thus your calculation overflows it. In C#, however, long is equivalent to C++'s long long and is 64 bits wide, so the result of the expression fits into the type.

Your unsigned long is not 64 bits wide. You can easily check this using sizeof(unsigned long), which should return 4 (= 32 bits) instead of 8 (= 64 bits).
Don't use int/short/long if you expect them to be of a specific size. The standard only says that short <= int <= long <= long long and defines minimum sizes; they could actually all be the same size. long is guaranteed to be at least 32 bits and long long at least 64 bits (see page 22 of the C++ standard). Still, I would highly recommend against relying on this and stick to stdint if you really want to work with specific sizes.
Use <cstdint> (C++11) or <stdint.h> (from C99) and the defined types uint8_t, uint16_t, uint32_t, and uint64_t instead.
Corrected C++ code
#include <stdint.h>
#include <iostream>
int main(int argc, char *argv[]) {
    uint8_t buffer[8] = {71, 20, 0, 0, 9, 0, 0, 0};
    uint64_t g = (uint64_t) ((uint32_t) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
                             (int64_t) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);
    std::cout << g << std::endl;
    return 0;
}
Demo with output: http://codepad.org/e8GOuvMp

There is a subtle error in your castings.
long in C# is a 64-bit integer.
long in C++ is usually a 32-bit integer.
Thus your (long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32 has a different meaning when you execute it in C# or in C++.
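To make the 64-bit path concrete, here is a minimal C# sketch (using the buffer from the question) that prints the two halves separately:
var buffer = new byte[] { 71, 20, 0, 0, 9, 0, 0, 0 };

// Low 32 bits: 71 | (20 << 8) = 5191
uint low = (uint)(buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24);
// High half shifted into place: 9 << 32 = 38654705664, which only fits in a 64-bit type
long high = (long)(buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32;

Console.WriteLine(low);                  // 5191
Console.WriteLine(high);                 // 38654705664
Console.WriteLine((ulong)(low | high));  // 38654710855
A 32-bit C++ long simply cannot represent the high half, which is why the two languages diverge.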

Related

c# FTDI 24bit two's complement implement

I am a beginner at C# and would like some advice on how to solve the following problem:
My code reads 3 bytes from an ADC through an FTDI IC and calculates a value. The problem is that sometimes I get values below zero (less than 0000 0000 0000 0000 0000 0000) and my calculated value jumps to a big number. I need to implement two's complement, but I don't know how.
byte[] readData1 = new byte[3];
ftStatus = myFtdiDevice.Read(readData1, numBytesAvailable, ref numBytesRead); //read data from FTDI
int vrednost1 = (readData1[2] << 16) | (readData1[1] << 8) | (readData1[0]); //convert LSB MSB
double v = ((((4.096 / 16777216) * vrednost1) / 4) * 250); //calculate
Convert the data left-justified, so that your sign-bit lands in the spot your PC processor treats as a sign bit. Then right-shift back, which will automatically perform sign-extension.
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;
Equivalently, one can mark the byte containing a sign bit as signed, so the processor will perform sign extension:
int result = (unchecked((sbyte)readData[2]) << 16) | (readData[1] << 8) | readData[0];
The second approach with a cast only works if the sign bit is already aligned left within any one of the bytes. The left-justification approach can work for arbitrary sizes, such as 18-bit two's complement readings. In this situation the cast to sbyte wouldn't do the job.
int left_justified18 = (readData[2] << 30) | (readData[1] << 22) | (readData[0] << 14);
int result18 = left_justified18 >> 14;
Well, the only subtle point here is endianness (big or little). Assuming that byte[0] stands for the least significant byte, you can put:
private static int FromInt24(byte[] data) {
    int result = (data[2] << 16) | (data[1] << 8) | data[0];

    return data[2] < 128
        ? result                          // positive number
        : -((~result & 0xFFFFFF) + 1);    // negative number, its abs value is 2-complement
}
Demo:
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },
    new byte[] { 44, 1, 0 },
};

string report = string.Join(Environment.NewLine, tests
    .Select(test => $"[{string.Join(", ", test.Select(x => $"0x{x:X2}"))}] == {FromInt24(test)}"));

Console.Write(report);
Outcome:
[0xFF, 0xFF, 0xFF] == -1
[0x2C, 0x01, 0x00] == 300
If you have big-endian data (e.g. 300 == {0, 1, 44}), you have to swap the bytes:
private static int FromInt24(byte[] data) {
    int result = (data[0] << 16) | (data[1] << 8) | data[2];

    return data[0] < 128
        ? result
        : -((~result & 0xFFFFFF) + 1);
}
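A quick check of the big-endian variant, mirroring the demo above (the test values follow from the 300 == {0, 1, 44} example):
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },  // -1 in either byte order
    new byte[] { 0, 1, 44 },       // 300 with the most significant byte first
};

foreach (var test in tests)
    Console.WriteLine(FromInt24(test));  // prints -1, then 300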

Generate a unique key by encoding 4 smaller numeric primitive types into a long (Int64)

I have the following method which should create a unique long for any combination of the provided params:
private static long GenerateKey(byte sT, byte srcT, int nId, ushort aId)
{
    var nBytes = BitConverter.GetBytes(nId);
    var aBytes = BitConverter.GetBytes(aId);

    var byteArray = new byte[8];
    Buffer.BlockCopy(new[] { sT }, 0, byteArray, 7, 1);
    Buffer.BlockCopy(new[] { srcT }, 0, byteArray, 6, 1);
    Buffer.BlockCopy(aBytes, 0, byteArray, 4, 2);
    Buffer.BlockCopy(nBytes, 0, byteArray, 0, 4);

    var result = BitConverter.ToInt64(byteArray, 0);
    return result;
}
So I end up with:
  1    2    3    4    5    6    7    8
-----------------------------------------
|byte|byte| ushort  |        int        |
-----------------------------------------
Can this be done with bitwise operations? I tried the below, but it seemed to generate the same number for different values:
var hash = ((byte)sT << 56) ^ ((byte)srcT<< 48) ^ (aId << 32) ^ nId;
What went wrong here is that something like this
((byte)sT << 56)
does not do what you want. What it actually does is cast sT to a byte (which it already is); then it's implicitly converted to an int (because you do math with it), and then it's shifted left by 24 (shift counts are masked so they stay below the bit width of the left operand).
So actually instead of casting to a narrow type, you should cast to a wide type, like this:
((long)sT << 56)
Also, note that implicitly converting an int to long sign-extends it, which means that if nId is negative, you would end up complementing all the other fields (xor with all ones is complement).
So try this:
((long)sT << 56) | ((long)srcT << 48) | ((long)aId << 32) | (nId & 0xffffffffL)
I've also changed the XORs to ORs; it's more idiomatic for combining non-overlapping fields.
In the following line:
var hash = ((byte)sT << 56) ^ ((byte)srcT<< 48) ^ (aId << 32) ^ nId;
You are shifting more bits than the types can hold. The byte and ushort operands are promoted to int, which is only 32 bits wide, so the shifted bits don't end up where you expect. Just cast everything to long and you'll get the expected result:
var hash = ((long)sT << 56)
^ ((long)srcT << 48)
^ ((long)aId << 32)
^ (uint)nId; // Cast to uint is required to prevent sign extension
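Putting both answers together, here is a self-contained sketch of a bitwise GenerateKey (the field layout matches the Buffer.BlockCopy version on a little-endian machine; the sample values are for illustration only):
using System;

static class KeyDemo
{
    // sT -> bits 56-63, srcT -> bits 48-55, aId -> bits 32-47, nId -> bits 0-31
    private static long GenerateKey(byte sT, byte srcT, int nId, ushort aId)
    {
        return ((long)sT << 56)
             | ((long)srcT << 48)
             | ((long)aId << 32)
             | (uint)nId;          // cast to uint prevents sign extension of a negative nId
    }

    static void Main()
    {
        // Prints 01020003FFFFFFFB: 01 = sT, 02 = srcT, 0003 = aId, FFFFFFFB = nId (-5)
        Console.WriteLine(GenerateKey(1, 2, -5, 3).ToString("X16"));
    }
}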

Bitshift over Int32.max fails

Ok, so I have two methods
public static long ReadLong(this byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");

    long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
                | data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
    return length;
}

public static void WriteLong(this byte[] data, long i)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");

    data[0] = (byte)((i >> (8*0)) & 0xFF);
    data[1] = (byte)((i >> (8*1)) & 0xFF);
    data[2] = (byte)((i >> (8*2)) & 0xFF);
    data[3] = (byte)((i >> (8*3)) & 0xFF);
    data[4] = (byte)((i >> (8*4)) & 0xFF);
    data[5] = (byte)((i >> (8*5)) & 0xFF);
    data[6] = (byte)((i >> (8*6)) & 0xFF);
    data[7] = (byte)((i >> (8*7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing what's happening is that the OR operations are happening as 32-bit ints, so anything above Int32.MaxValue rolls over. I'm not sure how to avoid that. My first instinct was to make an int from the lower half and an int from the upper half and combine them, but I'm not quite knowledgeable enough to know where to start with that, so this is what I tried...
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");

    long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
    long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
    return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers; it seems to work for everything below zero.
Here's how I run it:
void Main()
{
    var larr = new long[5]{
        long.MinValue,
        0,
        long.MaxValue,
        1,
        -2000000000
    };

    foreach(var l in larr)
    {
        var arr = new byte[8];
        WriteLong(arr, l);
        Console.WriteLine(ByteString(arr));

        var end = ReadLong(arr);
        var end2 = BitConverter.ToInt64(arr, 0);
        Console.WriteLine(l + " == " + end + " == " + end2);
    }
}
and here's what I get (using the modified ReadLong method):
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000
The problem is not the OR, it is the bit shift. This has to be done with longs. Currently, the data[i] are implicitly converted to int. Just change that to long and that's it, i.e.:
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");

    long length = (long)data[0]       | (long)data[1] << 8  | (long)data[2] << 16 | (long)data[3] << 24
                | (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
    return length;
}
You are doing int arithmetic and then assigning to long; try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs, forcing it to use long arithmetic.
EDIT: Turns out this may not work, according to the comments below: while the shift operator accepts many types on the left, it only takes an int on the right. Georg's should be the accepted answer.

Convert 3 Hex (byte) to Decimal

How do I take the hex bytes 0A 25 10 A2 and get the end result of 851.00625? The combined value must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24):
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
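For reference, a self-contained sketch of the corrected computation (the 0.000005 factor comes from the question):
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;

// 0x0A2510A2 == 170201250, and 170201250 * 0.000005 == 851.00625
decimal BaseFrequency = Convert.ToDecimal(oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24)) * 0.000005M;
Console.WriteLine(BaseFrequency);  // 851.006250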

How can I combine 4 bytes into a 32 bit unsigned integer?

I'm trying to convert 4 bytes into a 32 bit unsigned integer.
I thought maybe something like:
UInt32 combined = (UInt32)((map[i] << 32) | (map[i+1] << 24) | (map[i+2] << 16) | (map[i+3] << 8));
But this doesn't seem to be working. What am I missing?
Your shifts are all off by 8. Shift by 24, 16, 8, and 0.
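Applied to the line from the question, that gives something like this (map and i are the question's variables; map[i] is treated as the most significant byte, matching the original ordering):
UInt32 combined = (UInt32)((map[i] << 24) | (map[i + 1] << 16) | (map[i + 2] << 8) | map[i + 3]);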
Use the BitConverter class, specifically the BitConverter.ToInt32(byte[], int) overload.
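A minimal usage sketch; BitConverter.ToUInt32 is the unsigned counterpart if a UInt32 is wanted. Note that BitConverter uses the machine's byte order (little-endian on most platforms), whereas the shifts in the question treat map[i] as the most significant byte:
UInt32 combined = BitConverter.ToUInt32(map, i);  // reads 4 bytes starting at index i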
You can always do something like this:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
    fixed (byte* numRef = &(value[startIndex]))
    {
        if ((startIndex % 4) == 0)
        {
            return *(((int*)numRef));
        }
        if (IsLittleEndian)
        {
            return (((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18));
        }
        return ((((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3]);
    }
}
But this would be reinventing the wheel, as this is actually how BitConverter.ToInt32() is implemented.
