Bitshift over Int32.MaxValue fails - C#

Ok, so I have two methods
public static long ReadLong(this byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
                | data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
    return length;
}

public static void WriteLong(this byte[] data, long i)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    data[0] = (byte)((i >> (8 * 0)) & 0xFF);
    data[1] = (byte)((i >> (8 * 1)) & 0xFF);
    data[2] = (byte)((i >> (8 * 2)) & 0xFF);
    data[3] = (byte)((i >> (8 * 3)) & 0xFF);
    data[4] = (byte)((i >> (8 * 4)) & 0xFF);
    data[5] = (byte)((i >> (8 * 5)) & 0xFF);
    data[6] = (byte)((i >> (8 * 6)) & 0xFF);
    data[7] = (byte)((i >> (8 * 7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing what's happening is that the OR operations are happening as 32-bit ints, so at Int32.MaxValue it rolls over. I'm not sure how to avoid that. My first instinct was to make an int from the lower half and an int from the upper half and combine them, but I'm not quite knowledgeable enough to know where to start with that, so this is what I tried:
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
    long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
    return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers; it seems to work for everything below zero.
Here's how I run it:
void Main()
{
    var larr = new long[5] {
        long.MinValue,
        0,
        long.MaxValue,
        1,
        -2000000000
    };

    foreach (var l in larr)
    {
        var arr = new byte[8];
        WriteLong(arr, l);
        Console.WriteLine(ByteString(arr)); // ByteString: helper that joins the bytes with ':'
        var end = ReadLong(arr);
        var end2 = BitConverter.ToInt64(arr, 0);
        Console.WriteLine(l + " == " + end + " == " + end2);
    }
}
and here's what I get (using the modified ReadLong method):
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000

The problem is not the OR, it is the bitshift: it has to be done on longs. Currently, the data[i] values are implicitly converted to int, so every shift happens in 32-bit arithmetic. Just change that to long and that's it, i.e.
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = (long)data[0] | (long)data[1] << 8 | (long)data[2] << 16 | (long)data[3] << 24
                | (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
    return length;
}
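This also explains why the two-halves attempt fails: when data[3] is 0x80 or greater, data[3] << 24 yields a negative int, and widening that int to long sign-extends it, filling the upper 32 bits of l1 with ones. That is why long.MaxValue came back as -1 while the negative test values happened to survive.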

You are doing int arithmetic and then assigning to long; try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs, forcing it to use long arithmetic.
EDIT: Turns out this doesn't work: as the comments below point out, while the shift operator accepts several types on the left, the right-hand operand must be an int, so << 8L doesn't compile. Georg's should be the accepted answer.

Related

c# FTDI 24bit two's complement implement

I am a beginner at C# and would like some advice on how to solve the following problem:
My code reads 3 bytes from an ADC through an FTDI IC and calculates a value. The problem is that sometimes I get readings below zero (less than 0000 0000 0000 0000 0000 0000 in the 24-bit reading), and my calculated value jumps to a big number. I need to implement two's complement but I don't know how.
byte[] readData1 = new byte[3];
ftStatus = myFtdiDevice.Read(readData1, numBytesAvailable, ref numBytesRead); // read data from FTDI
int vrednost1 = (readData1[2] << 16) | (readData1[1] << 8) | readData1[0];    // combine bytes, LSB first
double v = (((4.096 / 16777216) * vrednost1) / 4) * 250;                      // calculate
Convert the data left-justified, so that your sign-bit lands in the spot your PC processor treats as a sign bit. Then right-shift back, which will automatically perform sign-extension.
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;
Equivalently, one can mark the byte containing a sign bit as signed, so the processor will perform sign extension:
int result = (unchecked((sbyte)readData[2]) << 16) | (readData[1] << 8) | readData[0];
The second approach with a cast only works if the sign bit is already aligned left within one of the bytes. The left-justification approach works for arbitrary sizes, such as 18-bit two's complement readings, where the cast to sbyte wouldn't do the job:
int left_justified18 = (readData[2] << 30) | (readData[1] << 22) | (readData[0] << 14);
int result18 = left_justified18 >> 14;
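As a quick sanity check of the left-justify approach (a minimal sketch; assumes readData[0] is the least significant byte, as in the question):

byte[] readData = { 0xFF, 0xFF, 0xFF }; // the 24-bit two's complement encoding of -1
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
Console.WriteLine(left_justified >> 8); // prints -1 rather than 16777215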
Well, the only subtle point here is endianness (big or little). Assuming that data[0] stands for the least significant byte, you can write:
private static int FromInt24(byte[] data) {
    int result = (data[2] << 16) | (data[1] << 8) | data[0];
    return data[2] < 128
        ? result                        // positive number
        : -((~result & 0xFFFFFF) + 1);  // negative number; its absolute value is the two's complement
}
Demo:
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },
    new byte[] { 44, 1, 0 },
};

string report = string.Join(Environment.NewLine, tests
    .Select(test => $"[{string.Join(", ", test.Select(x => $"0x{x:X2}"))}] == {FromInt24(test)}"));

Console.Write(report);
Outcome:
[0xFF, 0xFF, 0xFF] == -1
[0x2C, 0x01, 0x00] == 300
If your data is big-endian (e.g. 300 == {0, 1, 44}) you have to swap the bytes:
private static int FromInt24(byte[] data) {
    int result = (data[0] << 16) | (data[1] << 8) | data[2];
    return data[0] < 128
        ? result
        : -((~result & 0xFFFFFF) + 1);
}

Get integer from last 12 bits of 2 bytes in C#

I suspect this is an easy one.
I need to get a number from the first 4 bits and another number from the last 12 bits
of 2 bytes.
So here is what I have, but it doesn't seem to be right:
byte[] data = new byte[2];
// assume the byte array contains data
var _4bit = data[0] >> 4;
var _12bit = data[0] >> 8 | data[1] & 0xff;
data[0] >> 8 is 0. Remember that data is declared as byte[], so each item holds only 8 bits; shifting right by 8 cuts off ALL the bits of data[0].
What you want instead is to take the lowest 4 bits of that byte with a bitwise AND (00001111 = 0x0F) and then shift them leftwards as needed.
So try this:
var _4bit = data[0] >> 4;
var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff);
It's also worth noting that the final & 0xFF is not needed, as data[1] is already a byte.
On bits, step by step:

byte[2] data = { aaaabbbb, cccccccc }

var _4bit = data[0] >> 4;
         = aaaabbbb >> 4
         = 0000aaaa

var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff);
          = ((aaaabbbb & 0x0F) << 8) | (cccccccc & 0xff)
          = (0000bbbb << 8) | (cccccccc)
          = (0000bbbb00000000) | (cccccccc)
          = 0000bbbbcccccccc

BTW, note that the results of the & and | operators are typed as int, so 32 bits; I've omitted the leading zeroes and written only 8/16 bits to keep it brief!
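To make that concrete, a minimal check with sample bytes (0xAB and 0xCD are made-up values):

byte[] data = { 0xAB, 0xCD };                    // aaaabbbb = 0xAB, cccccccc = 0xCD
var _4bit = data[0] >> 4;                        // 0xA
var _12bit = ((data[0] & 0x0F) << 8) | data[1];  // 0xBCD
Console.WriteLine($"{_4bit:X}, {_12bit:X}");     // prints A, BCD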

Convert 3 Hex (byte) to Decimal

How do I take the hex bytes 0A 25 10 A2 and get the end result of 851.00625? The combined value must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24):
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
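Sanity check: 0xA2 | (0x10 << 8) | (0x25 << 16) | (0x0A << 24) = 0x0A2510A2 = 170201250, and 170201250 * 0.000005 = 851.00625, as expected. (The original << 32 didn't just drop oct6: shifting an int by 32 is masked to a shift by 0 in C#, so oct6 was OR-ed into the low byte instead.)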

Optimize C# Code Fragment

I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve the performance of this method?
Changing the input parameter p to ulong[] would create a macro inefficiency.
static ulong Fetch64(byte[] p, int ofs = 0)
{
    unchecked
    {
        ulong result = p[0 + ofs] +
                       ((ulong)p[1 + ofs] << 8) +
                       ((ulong)p[2 + ofs] << 16) +
                       ((ulong)p[3 + ofs] << 24) +
                       ((ulong)p[4 + ofs] << 32) +
                       ((ulong)p[5 + ofs] << 40) +
                       ((ulong)p[6 + ofs] << 48) +
                       ((ulong)p[7 + ofs] << 56);
        return result;
    }
}
Why not use BitConverter? I've got to believe Microsoft has spent some time tuning that code. Plus it deals with endianness issues.
Here's how BitConverter turns a byte[] into a long/ulong (ToUInt64 converts it as signed and then casts it to unsigned):
[SecuritySafeCritical]
public static unsafe long ToInt64(byte[] value, int startIndex)
{
    if (value == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.value);
    }
    if (((ulong) startIndex) >= value.Length)
    {
        ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.startIndex, ExceptionResource.ArgumentOutOfRange_Index);
    }
    if (startIndex > (value.Length - 8))
    {
        ThrowHelper.ThrowArgumentException(ExceptionResource.Arg_ArrayPlusOffTooSmall);
    }
    fixed (byte* numRef = &(value[startIndex]))
    {
        if ((startIndex % 8) == 0)
        {
            return *(((long*) numRef));
        }
        if (IsLittleEndian)
        {
            int num = ((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18);
            int num2 = ((numRef[4] | (numRef[5] << 8)) | (numRef[6] << 0x10)) | (numRef[7] << 0x18);
            return ((long)((uint) num)) | ((long) num2 << 0x20);
        }
        int num3 = (((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3];
        int num4 = (((numRef[4] << 0x18) | (numRef[5] << 0x10)) | (numRef[6] << 8)) | numRef[7];
        return ((long)((uint) num4)) | ((long) num3 << 0x20);
    }
}
I suspect that doing the conversion one 32-bit word at a time is for 32-bit efficiency: with no 64-bit registers on a 32-bit CPU, dealing with 64-bit ints is a lot more expensive.
If you know for sure you're targeting 64-bit hardware, it might be faster to do the conversion in one fell swoop.
Try using a for loop instead of unrolling it; you may be able to save time on bounds checks.
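A minimal sketch of that idea (Fetch64Loop is a made-up name; it assumes the same little-endian byte order as the original):

static ulong Fetch64Loop(byte[] p, int ofs = 0)
{
    ulong result = 0;
    // start at the most significant byte and shift the accumulator up each step
    for (int i = 7; i >= 0; i--)
    {
        result = (result << 8) | p[ofs + i];
    }
    return result;
}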
Try BitConverter.ToUInt64 - http://msdn.microsoft.com/en-us/library/system.bitconverter.touint64.aspx - if that is what you are looking for.
For reference, Microsoft's .NET 4.0 BitConverter.ToInt64 (Shared Source Initiative at http://referencesource.microsoft.com/netframework.aspx):
// Converts an array of bytes into a long.
[System.Security.SecuritySafeCritical] // auto-generated
public static unsafe long ToInt64(byte[] value, int startIndex) {
    if (value == null) {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.value);
    }
    if ((uint) startIndex >= value.Length) {
        ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.startIndex, ExceptionResource.ArgumentOutOfRange_Index);
    }
    if (startIndex > value.Length - 8) {
        ThrowHelper.ThrowArgumentException(ExceptionResource.Arg_ArrayPlusOffTooSmall);
    }

    fixed (byte* pbyte = &value[startIndex]) {
        if (startIndex % 8 == 0) { // data is aligned
            return *((long*) pbyte);
        }
        else {
            if (IsLittleEndian) {
                int i1 = (*pbyte) | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);
                int i2 = (*(pbyte + 4)) | (*(pbyte + 5) << 8) | (*(pbyte + 6) << 16) | (*(pbyte + 7) << 24);
                return (uint)i1 | ((long)i2 << 32);
            }
            else {
                int i1 = (*pbyte << 24) | (*(pbyte + 1) << 16) | (*(pbyte + 2) << 8) | (*(pbyte + 3));
                int i2 = (*(pbyte + 4) << 24) | (*(pbyte + 5) << 16) | (*(pbyte + 6) << 8) | (*(pbyte + 7));
                return (uint)i2 | ((long)i1 << 32);
            }
        }
    }
}
Why not go unsafe?

unsafe static ulong Fetch64(byte[] p, int ofs = 0)
{
    fixed (byte* bp = p)
    {
        // reads 8 bytes in native (little-endian on x86/x64) order; no bounds check on ofs
        return *((ulong*)(bp + ofs));
    }
}

How can I combine 4 bytes into a 32 bit unsigned integer?

I'm trying to convert 4 bytes into a 32 bit unsigned integer.
I thought maybe something like:
UInt32 combined = (UInt32)((map[i] << 32) | (map[i+1] << 24) | (map[i+2] << 16) | (map[i+3] << 8));
But this doesn't seem to be working. What am I missing?
Your shifts are all off by 8. Shift by 24, 16, 8, and 0.
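Applied to the expression from the question (keeping its map and i names):

UInt32 combined = (UInt32)((map[i] << 24) | (map[i + 1] << 16) | (map[i + 2] << 8) | map[i + 3]);

The cast stays legal in the default unchecked context even when map[i] << 24 goes negative.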
Use the BitConverter class; specifically, the BitConverter.ToInt32() overload (or BitConverter.ToUInt32() for an unsigned result).
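For example (a sketch assuming map holds the four bytes in the machine's byte order, which is little-endian on x86/x64):

UInt32 combined = BitConverter.ToUInt32(map, i);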
You can always do something like this:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
    fixed (byte* numRef = &(value[startIndex]))
    {
        if ((startIndex % 4) == 0)
        {
            return *(((int*) numRef));
        }
        if (BitConverter.IsLittleEndian)
        {
            return (((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18));
        }
        return ((((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3]);
    }
}
But this would be reinventing the wheel, as this is actually how BitConverter.ToInt32() is implemented.
