Convert BitArray to a small byte array - C#

I've read the other posts on BitArray conversions and tried several myself but none seem to deliver the results I want.
My situation is this: I have some C# code that controls an LED strip. To issue a single command to the strip I need at most 28 bits:
1 bit for selecting between the 2 LED strips
6 bits for position (max 48 addressable LEDs)
7 bits x3 for color (a 0-127 value per channel)
Suppose I create a BitArray for that structure and as an example we populate it semi-randomly.
BitArray ba = new BitArray(28);
for (int i = 0; i < 28; i++)
{
    if (i % 3 == 0)
        ba.Set(i, true);
    else
        ba.Set(i, false);
}
Now I want to stick those 28 bits in 4 bytes (the last 4 bits can be a stop signal), and finally turn that into a string so I can send it via USB to the LED strip.
All the methods I've tried convert each 1 and 0 to a literal char, which is not the goal.
Is there a straightforward way to do this bit compacting in C#?

Well you could use BitArray.CopyTo:
byte[] bytes = new byte[4];
ba.CopyTo(bytes, 0);
Or:
int[] ints = new int[1];
ba.CopyTo(ints, 0);
It's not clear what you'd want the string representation to be though - you're dealing with naturally binary data rather than text data...
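If the device really does expect text, one option (an assumption about the protocol, not something stated in the question) is to hex- or Base64-encode the packed bytes:
string hex = BitConverter.ToString(bytes);       // "49-92-24-09" for the sample data above
string base64 = Convert.ToBase64String(bytes);   // compact ASCII-safe form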

I wouldn't use a BitArray for this. Instead, I'd use a struct, and then pack that into an int when I need to:
struct Led
{
    public readonly bool Strip;
    public readonly byte Position;
    public readonly byte Red;
    public readonly byte Green;
    public readonly byte Blue;

    public Led(bool strip, byte pos, byte r, byte g, byte b)
    {
        Strip = strip;
        Position = pos;
        Red = r;
        Green = g;
        Blue = b;
    }

    public int ToInt()
    {
        const int StripBit = 0x08000000;  // bit 27
        const int PositionMask = 0x3F;    // 6 bits
        const int PositionShift = 21;     // bits 21 through 26
        const int ColorMask = 0x7F;       // 7 bits per color channel
        const int RedShift = 14;
        const int GreenShift = 7;

        int val = Strip ? 0 : StripBit;
        val = val | ((Position & PositionMask) << PositionShift);
        val = val | ((Red & ColorMask) << RedShift);
        val = val | ((Green & ColorMask) << GreenShift);
        val = val | (Blue & ColorMask);
        return val;
    }
}
That way you can create your structures easily without having to fiddle with bit arrays:
var blue17 = new Led(true, 17, 0, 0, 127);
var blah22 = new Led(false, 22, 15, 97, 42);
and to get the values:
int blue17_value = blue17.ToInt();
You can turn the int into a byte array easily enough with BitConverter:
var blue17_bytes = BitConverter.GetBytes(blue17_value);
It's unclear to me why you want to send that as a string.
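If the strip hangs off a serial/COM port, the packed bytes can usually be written directly with no string conversion at all; a rough sketch (the port name and baud rate are placeholders):
using (var port = new System.IO.Ports.SerialPort("COM3", 9600))
{
    port.Open();
    port.Write(blue17_bytes, 0, blue17_bytes.Length); // send the 4 raw bytes
}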

Related

Setting bits in a byte array with C#

I have a requirement to encode a byte array from a short integer value.
The encoding rules are:
The bits representing the integer are bits 0 - 13
bit 14 is set if the number is negative
bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter
byte[] roll = BitConverter.GetBytes(x);
But I can't find how to meet my requirement.
Anyone know how to do this?
You should use Bitwise Operators.
Solution is something like this:
Int16 x = 7;
if (x < 0)
{
    Int16 mask14 = 16384; // 0b0100000000000000
    x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000 (as a signed Int16)
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
You cannot rely on GetBytes for negative numbers since it complements the bits and that is not what you need.
Instead you need to do bounds checking to make sure the number is representable, then use GetBytes on the absolute value of the given number.
The method's parameter is a short, so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383; // 2^14 - 1

public static byte[] EncodeSigned14Bit(short x)
{
    var absoluteX = Math.Abs(x);
    if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
    byte[] roll = BitConverter.GetBytes(absoluteX);
    if (x < 0)
    {
        roll[1] |= 0b01000000; // x is negative, set the 14th bit
    }
    roll[1] |= 0b10000000; // 15th bit is always set
    return roll;
}

static void Main(string[] args)
{
    // testing some values
    var r1 = EncodeSigned14Bit(16383);  // r1[0] = 255, r1[1] = 191
    var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}
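A possible decoder for this format, as a sketch assuming the encoding above (it is not part of the original answer):
public static short DecodeSigned14Bit(byte[] roll)
{
    // Bits 0-13 hold the magnitude; bit 14 is the sign flag; bit 15 is always set.
    int magnitude = roll[0] | ((roll[1] & 0b00111111) << 8);
    bool negative = (roll[1] & 0b01000000) != 0;
    return (short)(negative ? -magnitude : magnitude);
}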

How to get little endian data from big endian in C# using the BitConverter.ToInt32 method?

I am making an application in C# which has a byte array containing hex values.
I am getting the data as big-endian but I want it as little-endian, and I am using the BitConverter.ToInt32 method to convert the value to an integer.
My problem is that before converting the value, I have to copy the 4 bytes of data into a temporary array from the source byte array and then reverse that temporary byte array.
I can't reverse the source array because it also contains other data.
Because of that my application becomes slow.
In the code I have one source array of byte as waveData[] which contains a lot of data.
byte[] tempForTimestamp=new byte[4];
tempForTimestamp[0] = waveData[290];
tempForTimestamp[1] = waveData[289];
tempForTimestamp[2] = waveData[288];
tempForTimestamp[3] = waveData[287];
int number = BitConverter.ToInt32(tempForTimestamp, 0);
Is there any other method for that conversion?
Add a reference to the System.Memory NuGet package and use BinaryPrimitives.ReverseEndianness():
using System.Buffers.Binary;
number = BinaryPrimitives.ReverseEndianness(number);
It supports both signed and unsigned integers (byte/short/int/long).
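For example, with the offset from the question (assuming a little-endian host, where BitConverter reads the bytes in the wrong order and the reversal corrects it):
int raw = BitConverter.ToInt32(waveData, 287);
int number = BinaryPrimitives.ReverseEndianness(raw);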
In modern-day LINQ the one-liner and easiest-to-understand version would be:
int number = BitConverter.ToInt32(waveData.Skip(287).Take(4).Reverse().ToArray(), 0);
You could also...
byte[] tempForTimestamp = new byte[4];
Array.Copy(waveData, 287, tempForTimestamp, 0, 4);
Array.Reverse(tempForTimestamp);
int number = BitConverter.ToInt32(tempForTimestamp);
:)
If you know the data is big-endian, perhaps just do it manually:
int value = (buffer[i++] << 24) | (buffer[i++] << 16)
| (buffer[i++] << 8) | buffer[i++];
this will work reliably on any CPU, too. Note i is your current offset into the buffer.
Another approach would be to shuffle the array:
byte tmp = buffer[i+3];
buffer[i+3] = buffer[i];
buffer[i] = tmp;
tmp = buffer[i+2];
buffer[i+2] = buffer[i+1];
buffer[i+1] = tmp;
int value = BitConverter.ToInt32(buffer, i);
i += 4;
I find the first immensely more readable, and there are no branches / complex code, so it should work pretty fast too. The second could also run into problems on some platforms (where the CPU is already running big-endian).
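Wrapped into a small helper (the name and signature here are mine, just a sketch of the same idea):
static int ReadInt32BigEndian(byte[] buffer, int offset)
{
    // Reads 4 big-endian bytes starting at offset, independent of host endianness.
    return (buffer[offset] << 24) | (buffer[offset + 1] << 16)
         | (buffer[offset + 2] << 8) | buffer[offset + 3];
}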
Here you go:
public static int SwapEndianness(int value)
{
    var b1 = (value >> 0) & 0xff;
    var b2 = (value >> 8) & 0xff;
    var b3 = (value >> 16) & 0xff;
    var b4 = (value >> 24) & 0xff;
    return b1 << 24 | b2 << 16 | b3 << 8 | b4 << 0;
}
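For instance, a usage sketch with the question's offset (assuming a little-endian host):
int number = SwapEndianness(BitConverter.ToInt32(waveData, 287));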
Declare this class:
using static System.Net.IPAddress;

namespace BigEndianExtension
{
    public static class BigEndian
    {
        public static short ToBigEndian(this short value) => HostToNetworkOrder(value);
        public static int ToBigEndian(this int value) => HostToNetworkOrder(value);
        public static long ToBigEndian(this long value) => HostToNetworkOrder(value);
        public static short FromBigEndian(this short value) => NetworkToHostOrder(value);
        public static int FromBigEndian(this int value) => NetworkToHostOrder(value);
        public static long FromBigEndian(this long value) => NetworkToHostOrder(value);
    }
}
Example, create a form with a button and a multiline textbox:
using BigEndianExtension;
private void button1_Click(object sender, EventArgs e)
{
short int16 = 0x1234;
int int32 = 0x12345678;
long int64 = 0x123456789abcdef0;
string text = string.Format("LE:{0:X4}\r\nBE:{1:X4}\r\n", int16, int16.ToBigEndian());
text += string.Format("LE:{0:X8}\r\nBE:{1:X8}\r\n", int32, int32.ToBigEndian());
text += string.Format("LE:{0:X16}\r\nBE:{1:X16}\r\n", int64, int64.ToBigEndian());
textBox1.Text = text;
}
The most straightforward way is to use the BinaryPrimitives.ReadInt32BigEndian(ReadOnlySpan<byte>) method introduced in .NET Standard 2.1:
var number = BinaryPrimitives.ReadInt32BigEndian(waveData[287..291]);
If you won't ever again need that reversed, temporary array, you could just create it as you pass the parameter, instead of making four assignments. For example:
int i = 287;
int value = BitConverter.ToInt32(new byte[] {
    waveData[i + 3],
    waveData[i + 2],
    waveData[i + 1],
    waveData[i]
}, 0);
I use the following helper functions
// requires: using System.Linq; (for Skip/Take/Reverse)
public static Int16 ToInt16(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt16(data.Skip(offset).Take(2).Reverse().ToArray(), 0);
    return BitConverter.ToInt16(data, offset);
}

public static Int32 ToInt32(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt32(data.Skip(offset).Take(4).Reverse().ToArray(), 0);
    return BitConverter.ToInt32(data, offset);
}

public static Int64 ToInt64(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt64(data.Skip(offset).Take(8).Reverse().ToArray(), 0);
    return BitConverter.ToInt64(data, offset);
}
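With the question's data this could be called as, for example:
int number = ToInt32(waveData, 287);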
You can also use Jon Skeet "Misc Utils" library, available at https://jonskeet.uk/csharp/miscutil/
His library has many utility functions. For Big/Little endian conversions you can check the MiscUtil/Conversion/EndianBitConverter.cs file.
var littleEndianBitConverter = new MiscUtil.Conversion.LittleEndianBitConverter();
littleEndianBitConverter.ToInt64(bytes, offset);
var bigEndianBitConverter = new MiscUtil.Conversion.BigEndianBitConverter();
bigEndianBitConverter.ToInt64(bytes, offset);
His software is from 2009 but I guess it's still relevant.
I dislike BitConverter because (as Marc Gravell answered) it is specced to rely on system endianness, meaning you technically have to do a system-endianness check every time you use BitConverter to make sure you don't have to reverse the array. And usually, with saved files, you know which endianness you're trying to read, and that might not be the same. You might simply be handling file formats with big-endian values, such as PNG chunks.
Because of that, I just wrote my own methods for this, which take a byte array, the read offset and read length as arguments, as well as a boolean to specify the endianness, and which use bit shifting for efficiency:
public static UInt64 ReadIntFromByteArray(Byte[] data, Int32 startIndex, Int32 bytes, Boolean littleEndian)
{
    Int32 lastByte = bytes - 1;
    if (data.Length < startIndex + bytes)
        throw new ArgumentOutOfRangeException("startIndex", "Data array is too small to read a " + bytes + "-byte value at offset " + startIndex + ".");
    UInt64 value = 0;
    for (Int32 index = 0; index < bytes; index++)
    {
        Int32 offs = startIndex + (littleEndian ? index : lastByte - index);
        value |= ((UInt64)data[offs]) << (8 * index);
    }
    return value;
}
This code can handle any value between 1 and 8 bytes, both little-endian and big-endian. The only small usage peculiarity is that you need to both give the amount of bytes to read, and need to specifically cast the result to the type you want.
Example from some code where I used it to read the header of some proprietary image type:
Int16 imageWidth = (Int16) ReadIntFromByteArray(fileData, hdrOffset, 2, true);
Int16 imageHeight = (Int16) ReadIntFromByteArray(fileData, hdrOffset + 2, 2, true);
This will read two consecutive 16-bit integers off an array, as signed little-endian values. You can of course just make a bunch of overload functions for all possibilities, like this:
public Int16 ReadInt16FromByteArrayLe(Byte[] data, Int32 startIndex)
{
return (Int16) ReadIntFromByteArray(data, startIndex, 2, true);
}
But personally I didn't bother with that.
And, here's the same for writing bytes:
public static void WriteIntToByteArray(Byte[] data, Int32 startIndex, Int32 bytes, Boolean littleEndian, UInt64 value)
{
    Int32 lastByte = bytes - 1;
    if (data.Length < startIndex + bytes)
        throw new ArgumentOutOfRangeException("startIndex", "Data array is too small to write a " + bytes + "-byte value at offset " + startIndex + ".");
    for (Int32 index = 0; index < bytes; index++)
    {
        Int32 offs = startIndex + (littleEndian ? index : lastByte - index);
        data[offs] = (Byte)((value >> (8 * index)) & 0xFF);
    }
}
The only requirement here is that you have to cast the input arg to 64-bit unsigned integer when passing it to the function.
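As an illustration (the buffer and values here are made up), writing 0x1234 as a little-endian 16-bit value at offset 4:
byte[] buffer = new byte[8];
WriteIntToByteArray(buffer, 4, 2, true, 0x1234UL);
// buffer[4] == 0x34, buffer[5] == 0x12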
If unsafe is allowed... Based on Marc Gravell's post:
public static unsafe int Reverse(int value)
{
    byte* p = (byte*)&value;
    return (*p << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}
This will reverse the data inline if unsafe code is allowed...
fixed (byte* wavepointer = waveData)
new Span<byte>(wavepointer + offset, 4).Reverse();
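A sketch of the same in-place reverse without unsafe code, using the Span<T> extension methods (like the version above, this mutates waveData):
waveData.AsSpan(287, 4).Reverse();
int number = BitConverter.ToInt32(waveData, 287);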

C# Language: Changing the First Four Bits in a Byte

In order to utilize a byte to its fullest potential, I'm attempting to store two unique values into a byte: one in the first four bits and another in the second four bits. However, I've found that, while this practice allows for optimized memory allocation, it makes changing the individual values stored in the byte difficult.
In my code, I want to change the first set of four bits in a byte while maintaining the value of the second four bits in the same byte. While bitwise operations allow me to easily retrieve and manipulate the first four bits, I'm finding it difficult to combine this new value with the second set of four bits in a byte. The question is: how can I erase the first four bits from a byte (or, more accurately, set them all to zero) and add the new set of 4 bits in their place, thus preserving the last 4 bits in a byte while changing the first four?
Here's an example:
// Changes the first four bits in a byte to the parameter value
public void changeFirstFourBits(byte newFirstFour)
{
// If 'newFirstFour' is 0101 in binary, make 'value' 01011111 in binary, changing
// the first four bits but leaving the second four alone.
}
private byte value = 255; // binary: 11111111
Use bitwise AND (&) to clear out the old bits, shift the new bits to the correct position and bitwise OR (|) them together:
value = (byte)((value & 0xF) | (newFirstFour << 4));
Here's what happens:
value : abcdefgh
newFirstFour : 0000xyzw
0xF : 00001111
value & 0xF : 0000efgh
newFirstFour << 4 : xyzw0000
(value & 0xF) | (newFirstFour << 4) : xyzwefgh
When I have to do bit-twiddling like this, I make a readonly struct to do it for me. A four-bit integer is called a nybble, of course:
struct TwoNybbles
{
    private readonly byte b;
    public byte High { get { return (byte)(b >> 4); } }
    public byte Low { get { return (byte)(b & 0x0F); } }
    public TwoNybbles(byte high, byte low)
    {
        this.b = (byte)((high << 4) | (low & 0x0F));
    }
}
And then add implicit conversions between TwoNybbles and byte. Now you can just treat any byte as having a High and Low byte without putting all that ugly bit twiddling in your mainline code.
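Those conversion operators might look something like this (a sketch; they would go inside TwoNybbles):
public static implicit operator byte(TwoNybbles n) => n.b;
public static implicit operator TwoNybbles(byte b) =>
    new TwoNybbles((byte)(b >> 4), (byte)(b & 0x0F));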
You first mask out the high four bits using value & 0xF. Then you shift the new bits to the high four bits using newHighFour << 4 and finally you combine them together using binary OR.
public void changeHighFourBits(byte newHighFour)
{
    value = (byte)((value & 0x0F) | (newHighFour << 4));
}
public void changeLowFourBits(byte newLowFour)
{
    value = (byte)((value & 0xF0) | newLowFour);
}
I'm not really sure what your method there is supposed to do, but here are some methods for you:
void setHigh(ref byte b, byte val) {
    b = (byte)((b & 0xf) | (val << 4));
}
byte high(byte b) {
    return (byte)((b & 0xf0) >> 4);
}
void setLow(ref byte b, byte val) {
    b = (byte)((b & 0xf0) | val);
}
byte low(byte b) {
    return (byte)(b & 0xf);
}
Should be self-explanatory.
public int SplatBit(int Reg, int Val, int ValLen, int Pos)
{
    int mask = ((1 << ValLen) - 1) << Pos;
    int newv = Val << Pos;
    int res = (Reg & ~mask) | newv;
    return res;
}
Example:
Reg = 135
Val = 9 (ValLen = 4, because 9 = 1001)
Pos = 2
135 = 10000111
9 = 1001
9 << Pos = 100100
Result = 10100111
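A quick usage sketch matching those numbers:
int result = SplatBit(135, 9, 4, 2);
Console.WriteLine(Convert.ToString(result, 2)); // 10100111 (decimal 167)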
A quick look would indicate that a bitwise AND can be achieved using the & operator. So to clear the first four bits you should be able to do:
byte value1 = 255; // 11111111
byte value2 = 15;  // 00001111
return (byte)(value1 & value2);
Assuming newVal contains the value you want to store in origVal.
Do this for the 4 least significant bits:
byte origVal = ???;
byte newVal = ???
orig = (origVal & 0xF0) + newVal;
and this for the 4 most significant bits:
byte origVal = ???;
byte newVal = ???
orig = (origVal & 0xF) + (newVal << 4);
I know you asked specifically about clearing out the first four bits, which has been answered several times, but I wanted to point out that if you have two values <= decimal 15, you can combine them into 8 bits simply with this:
public int setBits(int upperFour, int lowerFour)
{
return upperFour << 4 | lowerFour;
}
The result will be xxxxyyyy where
xxxx = upperFour
yyyy = lowerFour
And that is what you seem to be trying to do.
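For completeness, a possible counterpart that splits the packed byte back apart (my own sketch, not part of the answer):
public int getUpperFour(int packed) { return packed >> 4; }
public int getLowerFour(int packed) { return packed & 0xF; }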
Here's some code, but I think the earlier answers will do it for you. This is just to show some test code you can copy and paste into a simple console project (the WriteBits method may be of help):
static void Main(string[] args)
{
int b1 = 255;
WriteBits(b1);
int b2 = b1 >> 4;
WriteBits(b2);
int b3 = b1 & ~0xF ;
WriteBits(b3);
// Store 5 in first nibble
int b4 = 5 << 4;
WriteBits(b4);
// Store 8 in second nibble
int b5 = 8;
WriteBits(b5);
// Store 5 and 8 in first and second nibbles
int b6 = 0;
b6 |= (5 << 4) + 8;
WriteBits(b6);
// Store 2 and 4
int b7 = 0;
b7 = StoreFirstNibble(2, b7);
b7 = StoreSecondNibble(4, b7);
WriteBits(b7);
// Read First Nibble
int first = ReadFirstNibble(b7);
WriteBits(first);
// Read Second Nibble
int second = ReadSecondNibble(b7);
WriteBits(second);
}
static int ReadFirstNibble(int storage)
{
return storage >> 4;
}
static int ReadSecondNibble(int storage)
{
return storage &= 0xF;
}
static int StoreFirstNibble(int val, int storage)
{
return storage |= (val << 4);
}
static int StoreSecondNibble(int val, int storage)
{
return storage |= val;
}
static void WriteBits(int b)
{
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(b),0));
}

C++ to C#, static_cast into enum

I'm trying to convert a bit of VC 6.0 C++ code to C#. Specifically, I'm parsing through a binary dat file and I've run into a problem converting this bit of code:
ar.GetFile()->Read(buf,sizeof(int));
memmove(&x,buf,4);
pEBMA->before_after = static_cast<enum EBMA_Reserve>(x);
pEBMA->method = static_cast<enum EBMA_Method>(x >> 4);
Here is some related code.
struct EBMA_Data *pEBMA = &EBMA_data;
typedef CArray<struct EBMA_Data,struct EBMA_Data&> EBMA_data;
enum EBMA_Reserve
{EBMA_DONT_RESERVE,
EBMA_BEFORE,
EBMA_AFTER
};
enum EBMA_Method
{EBMA_CENTER,
EBMA_ALL_MATERIAL,
EBMA_FRACTION,
EBMA_RESERVE
};
struct EBMA_Data
{double reserved;
double fraction;
enum EBMA_Method method : 4;
enum EBMA_Reserve before_after : 4;
};
I've read this thread here Cast int to Enum in C#, but my code isn't giving me the same results as the legacy program.
Here is some of my code in C#:
reserved = reader.ReadDouble();
fraction = reader.ReadDouble();
beforeAfter = (EBMAReserve)Enum.ToObject(typeof(EBMAReserve), x);
method = (EBMAMethod)Enum.ToObject(typeof(EBMAMethod), (x >> 4));
I do have an endianness problem so I am reversing the endianness like so.
public override double ReadDouble()
{
byte[] b = this.ConvertByteArrayToBigEndian(base.ReadBytes(8));
double d = BitConverter.ToDouble(b, 0);
return d;
}
private byte[] ConvertByteArrayToBigEndian(byte[] b)
{
if (BitConverter.IsLittleEndian)
{
Array.Reverse(b);
}
return b;
}
So then I thought that maybe the endianness issue was still throwing me off so here is another attempt:
byte[] test = reader.ReadBytes(8);
Array.Reverse(test);
int test1 = BitConverter.ToInt32(test, 0);
int test2 = BitConverter.ToInt32(test, 4);
beforeAfter = (EBMAReserve)test1;
method = (EBMAMethod)test2;
I hope I've given enough details about what I'm trying to do.
EDIT:
This is how I solved my issue, apparently the values I needed were stored in the first byte of a 4 byte segment in the binary file. This is in a loop.
byte[] temp = reader.ReadBytes(4);
byte b = temp[0];
res = (EBMAReserve)(b & 0x0f);
meth = (EBMAMethod)(b >> 4);
EDIT: It actually looks like the structure size of EBMA_Data is 17 bytes.
struct EBMA_Data
{
    double reserved;                    // (8 bytes)
    double fraction;                    // (8 bytes)
    enum EBMA_Method method : 4;        // (this is packed to 4 bits, not bytes)
    enum EBMA_Reserve before_after : 4; // (this too is packed to 4 bits)
};
so your read code should look something more like this:
EBMA_Data data = new EBMA_Data();
data.reserved = reader.ReadDouble();
data.fraction = reader.ReadDouble();
byte b = reader.ReadByte();
data.method = (EBMAMethod)(b >> 4);
data.before_after = (EBMAReserve)(b & 0x0f);
Not 100% sure, but it looks like the code that does the shift x >> 4 may be the underlying issue that's being overlooked. If EBMAReserve is the lower 4 bits of x and EBMAMethod is the top 4 bits, maybe this code would work?
EBMAReserve res = (EBMAReserve)(x & 0x0f);
EBMAMethod meth = (EBMAMethod)(x >> 4);
I think that is what the : 4 means after the enumerations in the struct: it packs the two enums into the structure as a single byte instead of 2 bytes.

Convert a long to two int for the purpose of reconstruction

I need to pass a parameter as two int parameters to a Telerik Report since it cannot accept long parameters. What is the easiest way to split a long into two ints and reconstruct it without losing data?
Using masking and shifting is your best bet. long is guaranteed to be 64 bit and int 32 bit, according to the documentation, so you can mask off the bits into the two integers and then recombine.
See:
static int[] long2doubleInt(long a) {
int a1 = (int)(a & uint.MaxValue);
int a2 = (int)(a >> 32);
return new int[] { a1, a2 };
}
static long doubleInt2long(int a1, int a2)
{
long b = a2;
b = b << 32;
b = b | (uint)a1;
return b;
}
static void Main(string[] args)
{
long a = 12345678910111213;
int[] al = long2doubleInt(a);
long ap = doubleInt2long(al[0],al[1]);
System.Console.WriteLine(ap);
System.Console.ReadKey();
}
Note the use of bitwise operations throughout. This avoids the problems one might get when using addition or other numerical operations that might occur using negative numbers or rounding errors.
Note you can replace int with uint in the above code if you are able to use unsigned integers (this is always preferable in this sort of situation, as it's a lot clearer what's going on with the bits).
Doing bit-manipulation in C# can be awkward at times, particularly when dealing with signed values. You need to be using unsigned values whenever you plan on doing bit-manipulation. Unfortunately it's not going to yield the nicest looking code.
const long LOW_MASK = ((1L << 32) - 1);
long value = unchecked((long)0xDEADBEEFFEEDDEAD);
int valueHigh = (int)(value >> 32);
int valueLow = (int)(value & LOW_MASK);
long reconstructed = unchecked((long)(((ulong)valueHigh << 32) | (uint)valueLow));
If you want a nicer way to do this, get the raw bytes for the long and get the corresponding integers from the bytes. The conversion to/from representations doesn't change very much.
long value = unchecked((long)0xDEADBEEFFEEDDEAD);
byte[] valueBytes = BitConverter.GetBytes(value);
int valueHigh = BitConverter.ToInt32(valueBytes, BitConverter.IsLittleEndian ? 4 : 0);
int valueLow = BitConverter.ToInt32(valueBytes, BitConverter.IsLittleEndian ? 0 : 4);
byte[] reconstructedBytes = BitConverter.IsLittleEndian
? BitConverter.GetBytes(valueLow).Concat(BitConverter.GetBytes(valueHigh)).ToArray()
: BitConverter.GetBytes(valueHigh).Concat(BitConverter.GetBytes(valueLow)).ToArray();
long reconstructed = BitConverter.ToInt64(reconstructedBytes, 0);
For unsigned values the following will work:
ulong value = ulong.MaxValue - 12;
uint low = (uint)(value & (ulong)uint.MaxValue);
uint high = (uint)(value >> 32);
ulong value2 = ((ulong)high << 32) | low;
long x = long.MaxValue;
int lo = (int)(x & 0xffffffff);
int hi = (int)((x - ((long)lo & 0xffffffff)) >> 32);
long y = ((long)hi << 32) | ((long)lo & 0xffffffff);
Console.WriteLine(System.Convert.ToString(x, 16));
Console.WriteLine(System.Convert.ToString(lo, 16));
Console.WriteLine(System.Convert.ToString(hi, 16));
Console.WriteLine(System.Convert.ToString(y, 16));
Converting it to and from a string would be much simpler than converting it to and from a pair of ints. Is this an option?
string myStringValue = myLongValue.ToString();
myLongValue = long.Parse(myStringValue);
Instead of mucking with bit operations, just use a faux union. This also would work for different combinations of data types, not just long & 2 ints. More importantly, that avoids the need to be concerned about signs, endianness or other low-level details when you really only care about reading & writing bits in a consistent manner.
using System;
using System.Runtime.InteropServices;
public class Program
{
    [StructLayout(LayoutKind.Explicit)]
    private struct Mapper
    {
        [FieldOffset(0)]
        public long Aggregated;

        [FieldOffset(0)]
        public int One;

        [FieldOffset(sizeof(int))]
        public int Two;
    }

    public static void Main()
    {
        var layout = new Mapper { Aggregated = 0x0000000200000001 };
        var one = layout.One;
        var two = layout.Two;
        Console.WriteLine("One: {0}, Two: {1}", one, two);

        var secondLayout = new Mapper { One = one, Two = two };
        var aggregated = secondLayout.Aggregated;
        Console.WriteLine("Aggregated: {0}", aggregated.ToString("X"));
    }
}
