Should I be able to bit-shift >> a byte array? - c#

I am trying to understand why BigInteger is throwing an overflow exception. I tried to visualize this by converting the BigInteger to a byte[] and iteratively incrementing the shift until I see where the exception occurs.
Should I be able to bit-shift >> a byte[], or is C# simply not able to?
Code causing an exception
uint amountToShift2 = 12;
BigInteger num = new BigInteger(-126);
uint compactBitsRepresentation = (uint)(num >> (int)amountToShift2);

Regarding your edited question with:
uint amountToShift2 = 12;
BigInteger num = new BigInteger(-126);
uint compactBitsRepresentation = (uint)(num >> (int)amountToShift2);
The bit shift works OK and produces a BigInteger of value -1 (negative one).
But the conversion to uint throws an exception because -1 is outside the range of a uint. The conversion from BigInteger to uint does not "wrap around" modulo 2^32; it simply throws.
You can get around that with:
uint compactBitsRepresentation = (uint)(int)(num >> (int)amountToShift2);
which will not throw in unchecked context (which is the usual context).

There are no >> or << bit-shift operators for byte arrays in C#. You need to write that code by hand (and pay attention to the bits that fall off each byte).
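For illustration, here is a minimal hand-rolled sketch (my own, untested) that right-shifts a byte[] treated as one big big-endian number, carrying the bits that fall off each byte into its less significant neighbour:
static byte[] ShiftRight(byte[] input, int shift)
{
    var result = new byte[input.Length];
    int byteShift = shift / 8; // whole bytes to move
    int bitShift = shift % 8;  // remaining bits within a byte

    for (int i = input.Length - 1; i >= byteShift; i--)
    {
        int src = i - byteShift;
        byte value = (byte)(input[src] >> bitShift);
        if (bitShift > 0 && src > 0)
        {
            // carry in the low bits of the more significant neighbour
            value |= (byte)(input[src - 1] << (8 - bitShift));
        }
        result[i] = value; // bits shifted past the end are simply lost
    }
    return result;
}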

The >> operator isn't defined for reference types like arrays; it works on the primitive integral types.
An int is represented by a series of bits, so say
int i = 6;
i is represented as
00000000000000000000000000000110
the >> shifts all the bits to the right, changing it to
00000000000000000000000000000011
or 3
If you really need to shift the byte array, it shouldn't be too terribly hard to define your own method to move all the items of the array over 1 slot. It will have O(n) time complexity though.

Related

C# extract bit ranges from byte array

I need to extract some bit ranges from a 16-byte value, e.g.:
bit 0 = first thing
next 54 bits = second thing
next 52 bits = third thing
last 21 bits = fourth thing
.NET doesn't have a UInt128 structure; well, it has the BigInteger class, but I'm not sure that's right for the job. Maybe it is?
I have found a third-party library that can read bits from a stream, but when trying to convert them back to UInt64s using BitConverter, it fails: 54 bits isn't enough for a UInt64, but it's too many for a UInt32.
My immediate thought was that bit shifting was the way to do this, but now I'm not so sure how to proceed, since I can't think of a good way of handling the original 16 bytes.
Any suggestions or comments would be appreciated.
Here's some untested code. I'm sure that there are bugs in it (whenever I write code like this, I get shifts, masks, etc. wrong). However, it should be enough to get you started. If you get this working and there are only a few problems, let me know in the comments and I'll fix things. If you can't get it to work, let me know as well, and I'll delete the answer. If it requires a major rewrite, post your working code as an answer and let me know.
The other thing to worry about with this (since you mentioned that this comes from a file) is endian-ness. Not all computer architectures represent values in the same way. I'll leave any byte swizzling (if needed) to you.
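For example, if the file format is documented as little-endian, you can read each half with an explicit byte order instead of the machine's native one (a sketch using System.Buffers.Binary, available since .NET Core 2.1; raw stands for the 16 bytes read from the file, low half first):
using System.Buffers.Binary;

ulong bottom = BinaryPrimitives.ReadUInt64LittleEndian(raw.AsSpan(0, 8));
ulong top = BinaryPrimitives.ReadUInt64LittleEndian(raw.AsSpan(8, 8));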
First, structs in C++ are basically the same as classes (though people think they are different). In C#, they are very different. A struct in C# is a Value Type. When you do value type assignment, the compiler makes a copy of the value of the struct, rather than just making a copy to a reference to the object (like it does with classes). Value types have an implicit default constructor that initializes all members to their default (zero or null) values.
Marking the struct with [StructLayout(LayoutKind.Sequential)] tells the compiler to lay out the members in the specified order (the compiler doesn't have to, normally). This allows you to pass a reference to one of these (via P/Invoke) to a C program if you want to.
So, my struct starts off this way:
[StructLayout(LayoutKind.Sequential)]
public struct Struct128
{
    //not using auto-properties with private setters on purpose.
    //This should look like a single 128-bit value (in part, because of LayoutKind.Sequential)
    private ulong _bottom64bits;
    private ulong _top64bits;
}
Now I'm going to add members to that struct. Since you are getting the 128 bits from a file, don't try to read the data into a single 128-bit structure (if you can figure out how (look up serialization), you can, but...). Instead, read 64 bits at a time and use a constructor like this one:
public Struct128(ulong bottom64, ulong top64)
{
    _top64bits = top64;
    _bottom64bits = bottom64;
}
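For example, reading one of these from a file might look like this (a sketch; path is a placeholder, and I'm assuming the low half is stored first):
using (var reader = new BinaryReader(File.OpenRead(path)))
{
    ulong bottom = reader.ReadUInt64(); // BinaryReader reads little-endian
    ulong top = reader.ReadUInt64();
    var value = new Struct128(bottom, top);
}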
If you need to write the data in one of these back into the file, read it out 64 bits at a time using read-only properties like this:
//read access to the raw storage
public ulong Top64 => _top64bits;
public ulong Bottom64 => _bottom64bits;
Now we need to get and set the various bit-ish values out of our structure. Getting (and setting) the first thing is easy:
public bool FirstThing
{
    get => (_bottom64bits & 0x01) == 1;
    set
    {
        //set or clear the 0 bit
        if (value)
        {
            _bottom64bits |= 1ul;
        }
        else
        {
            _bottom64bits &= (~1ul);
        }
    }
}
Getting/setting the second and fourth things are very similar. In both cases, to get the value, you mask away all but the important bits and then shift the result. To set the value, you take the property value, shift it to the right place, zero out the bits in the appropriate (top or bottom) value stored in the structure and OR in the new bits (that you set up by shifting)
//bits 1 through 54 (the 54-bit second thing)
private const ulong SecondThingMask = 0b111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1110;
public ulong SecondThing
{
    get => (_bottom64bits & SecondThingMask) >> 1;
    set
    {
        var shifted = (value << 1) & SecondThingMask;
        _bottom64bits = (_bottom64bits & (~SecondThingMask)) | shifted;
    }
}
and
//top 21 bits
private const ulong FourthThingMask = 0b1111_1111_1111_1111_1111_1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000;
//to shift the top 21 bits down to the bottom 21 bits, need to shift 64-21
private const int FourthThingShift = 64 - 21;
public uint FourthThing
{
    get => (uint)((_top64bits & FourthThingMask) >> FourthThingShift);
    set
    {
        var shifted = ((ulong)value << FourthThingShift) & FourthThingMask;
        _top64bits = (_top64bits & (~FourthThingMask)) | shifted;
    }
}
It's the third thing that is tricky. To get the value, you need to mask the correct bits out of both the top and bottom values, shift them to the right positions and return the ORed result.
To set the value, you need to take the property value, split it into upper and lower portions and then do the same kind of magic ORing that was done for the second and fourth things:
//the third thing is the hard part.
//The bottom 55 bits of _bottom64bits are dedicated to the 1st and 2nd things, so the next 9 are the bottom 9 of the 3rd thing
//The other 52-9 (=43) bits come from/go to _top64bits
//top 9 bits
private const ulong ThirdThingBottomMask = 0b1111_1111_1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000;
//bottom 43 bits
private const ulong ThirdThingTopMask = 0b111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111;
private const int ThirdThingBottomShift = 64 - 9;
//bottom 9 bits
private const ulong ThirdThingBottomSetMask = 0b1_1111_1111;
//all but the bottom 9 bits
private const ulong ThirdThingTopSetMask = 0b1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1110_0000_0000;
//52 bits total
private const ulong ThirdThingOverallMask = 0b1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111;
public ulong ThirdThing
{
    get
    {
        var bottom = (_bottom64bits & ThirdThingBottomMask) >> ThirdThingBottomShift;
        var top = (_top64bits & ThirdThingTopMask) << 9;
        return top | bottom;
    }
    set
    {
        var masked = value & ThirdThingOverallMask;
        var bottom = (masked & ThirdThingBottomSetMask) << ThirdThingBottomShift;
        //clear the top 9 bits of the bottom word before ORing in the new bits
        _bottom64bits = (_bottom64bits & (~ThirdThingBottomMask)) | bottom;
        var top = (masked & ThirdThingTopSetMask) >> 9;
        //likewise, clear the bottom 43 bits of the top word
        _top64bits = (_top64bits & (~ThirdThingTopMask)) | top;
    }
}
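To sanity-check the whole struct, a round-trip along these lines (my sketch, with arbitrary in-range values) should print True four times:
var s = new Struct128(0, 0);
s.FirstThing = true;
s.SecondThing = 0x2A_FFFF_FFFF_FFFF;                      // fits in 54 bits
s.ThirdThing = 0xF_FFFF_FFFF_FFFF;                        // fits in 52 bits
s.FourthThing = 0x1F_FFFF;                                // fits in 21 bits
Console.WriteLine(s.FirstThing);                          // True
Console.WriteLine(s.SecondThing == 0x2A_FFFF_FFFF_FFFF);  // True
Console.WriteLine(s.ThirdThing == 0xF_FFFF_FFFF_FFFF);    // True
Console.WriteLine(s.FourthThing == 0x1F_FFFF);            // True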
I hope this is useful. Let me know.

Convert a variable size hex string to signed number (variable size bytes) in C#

C# provides the methods Convert.ToUInt16("FFFF", 16) and Convert.ToInt16("FFFF", 16) to convert hex strings into unsigned and signed 16-bit integers. These methods work fine for 16/32-bit values, but not for 12-bit values.
I would like to convert a 3-character hex string to a signed integer. How could I do it? I would prefer a solution that takes the number of bits as a parameter to decide the sign:
Convert(string hexString, int fromBase, int size)
Convert("FFF", 16, 12) returns -1.
Convert("FFFF", 16, 16) returns -1.
Convert("FFF", 16, 16) returns 4095.
The easiest way I can think of to convert 12-bit signed hex to a signed integer is as follows:
string value = "FFF";
int convertedValue = (Convert.ToInt32(value, 16) << 20) >> 20; // -1
The idea is to shift the result as far left as possible so that the negative bits line up, then shift right again to the original position. This works because a "signed shift right" operation keeps the negative bit in place.
You can generalize this into a method as follows:
int Convert(string value, int fromBase, int bits)
{
    int bitsToShift = 32 - bits;
    // fully qualify System.Convert so it doesn't collide with this method's own name
    return (System.Convert.ToInt32(value, fromBase) << bitsToShift) >> bitsToShift;
}
You can cast the result to a short if you want a 16 bit value when working with 12 bit hex strings. Performance of this method will be the same as a 16 bit version because bit shift operators on short cast the values to int anyway and this gives you more flexibility to specify more than 16 bits if needed without writing another method.
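For example, with the method above in scope, the cases from the question come out as expected:
Console.WriteLine(Convert("FFF", 16, 12));  // -1
Console.WriteLine(Convert("FFFF", 16, 16)); // -1
Console.WriteLine(Convert("FFF", 16, 16));  // 4095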
Ah, you'd like to calculate the Two's Complement for a certain number of bits (12 in your case, but really it should work with anything).
Here's the code in C#, blatantly stolen from the Python example in the Wikipedia article:
int Convert(string hexString, int fromBase, int num_bits)
{
    var i = System.Convert.ToUInt16(hexString, fromBase);
    var mask = 1 << (num_bits - 1);
    return (-(i & mask) + (i & ~mask));
}
Convert("FFF", 16, 12) returns -1
Convert("4095", 10, 12) is also -1 as expected

Defining a bit[] array in C#

Currently I'm working on a solution for a prime-number calculator/checker. The algorithm is already working and very efficient (0.359 seconds for the first 9012330 primes). Here is a part of the upper region where everything is declared:
const uint anz = 50000000;
uint a = 3, b = 4, c = 3, d = 13, e = 12, f = 13, g = 28, h = 32;
bool[,] prim = new bool[8, anz / 10];
uint max = 3 * (uint)(anz / (Math.Log(anz) - 1.08366));
uint[] p = new uint[max];
Now I wanted to go to the next level and use ulongs instead of uints to cover a larger range (you can see that already), which is where I ran into my problem: the bool array.
As everybody knows, a bool takes up a whole byte, which costs a lot of memory when creating a large array... So I'm looking for a more resource-friendly way to do that.
My first idea was a bit array -> not a byte array! <- to store the bools, but I haven't figured out how to do it yet. So if someone has ever done something like this, I would appreciate any kind of tips and solutions. Thanks in advance :)
You can use the BitArray collection:
http://msdn.microsoft.com/en-us/library/system.collections.bitarray(v=vs.110).aspx
MSDN Description:
Manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0).
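For example, here's a minimal sketch of the built-in System.Collections.BitArray in use (this ignores the 2D layout from the question):
using System.Collections;

var flags = new BitArray(50000000); // one bit per candidate instead of one byte; all bits start false
flags[4] = true;                    // set a bit
bool b = flags[4];                  // read it back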
You can (and should) use well tested and well known libraries.
But if you're looking to learn something (as it seems to be the case) you can do it yourself.
Another reason you may want to use a custom bit array is to use the hard drive to store the array, which comes in handy when calculating primes. To do this you'd need to further split addr, for example lowest 3 bits for the mask, next 28 bits for 256MB of in-memory storage, and from there on - a file name for a buffer file.
Yet another reason for a custom bit array is to compress memory use when specifically searching for primes. After all, more than half of your bits will be 'false' because the numbers corresponding to them are even, so you can both speed up your calculation and reduce memory requirements by not storing the even bits at all. You can do that by changing the way addr is interpreted. Furthermore, you can also exclude numbers divisible by 3 (only 2 out of every 6 numbers have a chance of being prime), thus reducing memory requirements by 60% compared to a plain bit array.
Notice the use of shift and logical operators to make the code a bit more efficient.
byte mask = (byte)(1 << (int)(addr & 7)); for example can be written as
byte mask = (byte)(1 << (int)(addr % 8));
and addr >> 3 can be written as addr / 8
Testing shift/logical operators vs division shows 2.6s vs 4.8s in favor of shift/logical for 200000000 operations.
Here's the code:
void Main()
{
    var barr = new BitArray(10);
    barr[4] = true;
    Console.WriteLine("Is it " + barr[4]);
    Console.WriteLine("Is it Not " + barr[5]);
}

public class BitArray
{
    private readonly byte[] _buffer;

    public bool this[long addr]
    {
        get
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            byte val = _buffer[(int)(addr >> 3)];
            bool bit = (val & mask) == mask;
            return bit;
        }
        set
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            int offs = (int)(addr >> 3);
            if (value)
                _buffer[offs] = (byte)(_buffer[offs] | mask);  // set the bit
            else
                _buffer[offs] = (byte)(_buffer[offs] & ~mask); // clear the bit
        }
    }

    public BitArray(long size)
    {
        _buffer = new byte[size / 8 + 1]; // a byte buffer sized to hold 8 bools per byte; the spare +1 avoids dealing with rounding
    }
}

C#: The result of casting a negative integer to a byte

I was a looking at the source code of a project, and I noticed the following statement (both keyByte and codedByte are of type byte):
return (byte)(keyByte - codedByte);
I'm now trying to understand what the result would be in cases where keyByte is smaller than codedByte, which yields a negative integer.
After some experiments to understand the result of casting a negative integer which has a value in the range [-255 : -1], I got the following results:
byte result = (byte) (-6); // result = 250
byte result = (byte) (-50); // result = 206
byte result = (byte) (-17); // result = 239
byte result = (byte) (-20); // result = 236
So, provided that -256 < a < 0, I was able to determine the result by:
result = 256 + a;
My question is: should I always expect this to be the case?
Yes, that will always be the case (i.e. it is not simply dependent on your environment or compiler, but is defined as part of the C# language spec). See http://msdn.microsoft.com/en-us/library/aa691349(v=vs.71).aspx:
In an unchecked context, the result is truncated by discarding any high-order bits that do not fit in the destination type.
The next question is, if you take away the high-order bits of a negative int between -256 and -1, and read it as a byte, what do you get? This is what you've already discovered through experimentation: it is 256 + x.
Note that endianness does not matter because we're discarding the high-order (or most significant) bits, not the "first" 24 bits. So regardless of which end we took it from, we're left with the least significant byte that made up that int.
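If you want to convince yourself, the 256 + x relationship can be checked exhaustively in a couple of lines (my own check; unchecked is the default context):
for (int a = -255; a < 0; a++)
{
    System.Diagnostics.Debug.Assert((byte)a == 256 + a);
}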
Yes. Remember, there's no such thing as "-" in the domain of a .Net "Byte":
http://msdn.microsoft.com/en-us/library/e2ayt412.aspx
Because Byte is an unsigned type, it cannot represent a negative
number. If you use the unary minus (-) operator on an expression that
evaluates to type Byte, Visual Basic converts the expression to Short
first. (Note: substitute any CLR/.Net language for "Visual Basic")
ADDENDUM:
Here's a sample app:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace TestByte
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = -255; i < 256; i++)
            {
                byte b = (byte)i;
                System.Console.WriteLine("i={0}, b={1}", i, b);
            }
        }
    }
}
And here's the resulting output:
testbyte|more
i=-255, b=1
i=-254, b=2
i=-253, b=3
i=-252, b=4
i=-251, b=5
...
i=-2, b=254
i=-1, b=255
i=0, b=0
i=1, b=1
...
i=254, b=254
i=255, b=255
Here is an algorithm that performs the same logic as casting to byte, to help you understand it:
For positives:
byte bNum = (byte)(iNum % 256);
For negatives:
byte bNum = (byte)(256 + (iNum % 256));
It's like searching for the k that brings iNum + 256k into the range 0 to 255. There can only be one such k, and the value it produces is the result of casting to byte.
Another way of looking at it is as if it "cycles around the byte value range":
Let's use iNum = -712 as an example, and define bNum = 0.
We shall do iNum++; bNum--; until iNum == 0:
iNum = -712;
bNum = 0;
iNum++; // -711
bNum--; // 255 (cycles to the maximum value)
iNum++; // -710
bNum--; // 254
... // And so on, as if the iNum value is being *consumed* within the byte value range cycle.
This is, of course, just an illustration to see how logically it works.
This is what happens in an unchecked context. You could say that the runtime (or the compiler, if the Int32 you cast to Byte is known at compile time) adds or subtracts 256 as many times as needed until it finds a representable value.
In a checked context, an exception (or compile-time error) results. See http://msdn.microsoft.com/en-us/library/khy08726.aspx
Yes - unless you get an exception.
.NET defines all arithmetic operations only on 4-byte and larger data types. So the only non-obvious point is how converting an int to a byte works.
For a conversion from an integral type to another integral type, the result of conversion depends on overflow checking context (says the ECMA 334 standard, Section 13.2.1).
So, in the following context
checked
{
    return (byte)(keyByte - codedByte);
}
you will see a System.OverflowException. Whereas in the following context:
unchecked
{
    return (byte)(keyByte - codedByte);
}
you are guaranteed to always get the wrapped-around result, whether or not you add a multiple of 256 to the difference first; for example, (byte)(2 - 255) yields 3.
This is true regardless of how the hardware represents signed values. The CLR standard (ECMA 335) specifies, in Section 12.1, that the Int32 type is a "32-bit two's-complement signed value". (Well, that also matches all platforms on which .NET or mono is currently available anyway, so one could almost guess that it would work anyway, but it is good to know that the practice is supported by the language standard and portable.)
Some teams do not want to specify overflow checking contexts explicitly, because they have a policy of checking for overflows early in the development cycle but not in released code. In these cases you can safely do byte arithmetic like this (the + 256 keeps the intermediate value non-negative, so the cast cannot overflow even in a checked context):
return (byte)((keyByte - codedByte + 256) % 256);
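A quick sanity check of that pattern under explicit overflow checking (my sketch, not from the original answer):
checked
{
    byte keyByte = 2, codedByte = 255;
    byte result = (byte)((keyByte - codedByte + 256) % 256);
    Console.WriteLine(result); // prints 3, no OverflowException
}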

Is there a nice way to split an int into two shorts (.NET)?

I think that this is not possible, because Int32 has 1 sign bit and 31 bits of numeric information, while Int16 has 1 sign bit and 15 bits of numeric information; that leads to 2 sign bits and only 30 bits of information.
If this is true, then I cannot fit one Int32 into two Int16s. Is that right?
Thanks in advance.
EXTRA INFORMATION: I'm using VB.Net, but I think I can translate a C# answer without problems.
What I initially wanted was to convert one UInt32 into two UInt16s, as this is for a library that interacts with WORD-based machines. Then I realized that UInt32 is not CLS-compliant and tried to do the same with Int32 and Int16.
EVEN WORSE: Doing a = CType(c And &HFFFF, Int16) throws an OverflowException. I expected that statement to be the same as a = (Int16)(c & 0xffff); (which does not throw an exception).
This can certainly be done with no loss of information. In both cases you end up with 32 bits of information. Whether they're used for sign bits or not is irrelevant:
int original = ...;
short firstHalf = (short) (original >> 16);
short secondHalf = (short) (original & 0xffff);
int reconstituted = (firstHalf << 16) | (secondHalf & 0xffff);
Here, reconstituted will always equal original, hence no information is lost.
Now the meaning of the signs of the two shorts is a different matter - firstHalf will be negative iff original is negative, but secondHalf will be negative if bit 15 (counting 0-31) of original is set, which isn't particularly meaningful in the original form.
This should work:
int original = ...;
byte[] bytes = BitConverter.GetBytes(original);
short firstHalf = BitConverter.ToInt16(bytes, 0);
short secondHalf = BitConverter.ToInt16(bytes, 2);
EDIT:
tested with 0x7FFFFFFF, it works
byte[] recbytes = new byte[4];
recbytes[0] = BitConverter.GetBytes(firstHalf)[0];
recbytes[1] = BitConverter.GetBytes(firstHalf)[1];
recbytes[2] = BitConverter.GetBytes(secondHalf)[0];
recbytes[3] = BitConverter.GetBytes(secondHalf)[1];
int reconstituted = BitConverter.ToInt32(recbytes, 0);
Jon's answer, translated into Visual Basic, and without overflow:
Module Module1
    Function MakeSigned(ByVal x As UInt16) As Int16
        Dim juniorBits As Int16 = CType(x And &H7FFF, Int16)
        If x > Int16.MaxValue Then
            Return juniorBits + Int16.MinValue
        End If
        Return juniorBits
    End Function

    Sub Main()
        Dim original As Int32 = &H7FFFFFFF
        Dim firstHalfUnsigned As UInt16 = CType(original >> 16, UInt16)
        Dim secondHalfUnsigned As UInt16 = CType(original And &HFFFF, UInt16)
        Dim firstHalfSigned As Int16 = MakeSigned(firstHalfUnsigned)
        Dim secondHalfSigned As Int16 = MakeSigned(secondHalfUnsigned)
        Console.WriteLine(firstHalfUnsigned)
        Console.WriteLine(secondHalfUnsigned)
        Console.WriteLine(firstHalfSigned)
        Console.WriteLine(secondHalfSigned)
    End Sub
End Module
Results:
32767
65535
32767
-1
In .NET, CType(c And &HFFFF, Int16) causes an overflow, while (short)(c & 0xffff) gives -1 (without overflow). This is because the C# compiler uses unchecked operations by default, while VB.NET uses checked ones.
Personally I like Agg's answer, because my code is more complicated and Jon's would cause an overflow exception in a checked environment.
I also created another answer, based on the code of the BitConverter class and optimized for this particular task. However, it uses unsafe code.
Yes, it can be done using masking and bit shifts:
Int16 a, b;
Int32 c;
a = (Int16)(c & 0xffff);
b = (Int16)((c >> 16) & 0xffff);
EDIT
To answer the comment: reconstruction works fine:
Int16 a, b;
Int32 c = -1;
a = (Int16)(c & 0xffff);
b = (Int16)((c >> 16) & 0xffff);
Int32 reconst = (((Int32)a)&0xffff) | ((Int32)b << 16);
Console.WriteLine("reconst = " + reconst);
Tested it and it prints -1 as expected.
EDIT2: Changed the reconstruction. The promotion of Int16 to Int32 sign-extends the value (I forgot that), so it has to be ANDed with 0xffff.
Why not? Let's reduce the number of bits for the sake of simplicity: say we have 8 bits, of which the leftmost is the minus bit.
[1001 0110] // representing -22
You can store it in 2 times 4 bits
[1001] [0110] // representing -1 and 6
I don't see why it wouldn't be possible; in both cases you have 8 bits of information.
EDIT: For the sake of simplicity, I didn't just reduce the bits, I also avoided the two's-complement method. In my examples, the leftmost bit denotes minus; the rest is to be interpreted as a normal positive binary number.
Unsafe code in C#, overflow doesn't occur, detects endianness automatically:
using System;

class Program
{
    static void Main(String[] args)
    {
        checked // Yes, it works without overflow!
        {
            Int32 original = Int32.MaxValue;
            Int16[] result = GetShorts(original);
            Console.WriteLine("Original int: {0:x}", original);
            Console.WriteLine("Senior Int16: {0:x}", result[1]);
            Console.WriteLine("Junior Int16: {0:x}", result[0]);
            Console.ReadKey();
        }
    }

    static unsafe Int16[] GetShorts(Int32 value)
    {
        byte[] buffer = new byte[4];
        fixed (byte* numRef = buffer)
        {
            *((Int32*)numRef) = value;
            if (BitConverter.IsLittleEndian)
                return new Int16[] { *((Int16*)numRef), *((Int16*)numRef + 1) };
            // big-endian: the junior (low-order) half lives in the last two bytes
            return new Int16[] {
                (Int16)((numRef[2] << 8) | numRef[3]),
                (Int16)((numRef[0] << 8) | numRef[1])
            };
        }
    }
}
You can use StructLayout in VB.NET:
(Correction: a word is 16 bits, a dword is 32 bits.)
<StructLayout(LayoutKind.Explicit, Size:=4)> _
Public Structure UDWord
    ' Field offsets assume a little-endian machine (e.g. x86), where the low word is stored first.
    <FieldOffset(0)> Public Value As UInt32
    <FieldOffset(2)> Public High As UInt16
    <FieldOffset(0)> Public Low As UInt16

    Public Sub New(ByVal value As UInt32)
        Me.Value = value
    End Sub

    Public Sub New(ByVal high As UInt16, ByVal low As UInt16)
        Me.High = high
        Me.Low = low
    End Sub
End Structure
Signed would be the same, just using the signed types instead:
<StructLayout(LayoutKind.Explicit, Size:=4)> _
Public Structure DWord
    ' Same little-endian layout as above.
    <FieldOffset(0)> Public Value As Int32
    <FieldOffset(2)> Public High As Int16
    <FieldOffset(0)> Public Low As Int16

    Public Sub New(ByVal value As Int32)
        Me.Value = value
    End Sub

    Public Sub New(ByVal high As Int16, ByVal low As Int16)
        Me.High = high
        Me.Low = low
    End Sub
End Structure
EDIT:
I kind of rushed the few times I posted/edited my answer and never explained this solution, so I felt my answer was incomplete. Let me do so now:
Using StructLayout as explicit on a structure requires you to provide the positioning of each field (by byte offset) via the FieldOffset attribute.
With these two attributes in use, you can create overlapping fields, a.k.a. unions.
The first field (DWord.Value) is the 32-bit integer, with an offset of 0 (zero). To split this 32-bit integer you have two additional 16-bit fields: on a little-endian machine the low half sits at offset 0 and the high half 2 bytes further on, because a 16-bit (short) integer is 2 bytes apiece.
From what I recall, when you split an integer, the first (most significant) half is usually called "high" and the second half "low"; thus the naming of my two other fields.
With a structure like this, you can then create overloads for operators and type widening/narrowing, to easily convert between the Int32 type and this DWord structure, as well as comparisons; see Operator Overloading in VB.NET.
You can use StructLayout to do this:
[StructLayout(LayoutKind.Explicit)]
struct Helper
{
    [FieldOffset(0)]
    public int Value;

    [FieldOffset(0)]
    public short Low;

    [FieldOffset(2)]
    public short High;
}
Using this, you can get the full Value as an int, and the low part and high part as shorts.
Something like:
var helper = new Helper { Value = 12345 };
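Reading the halves back out then looks like this (on a little-endian machine such as x86, where offset 0 holds the low word):
var helper = new Helper { Value = 0x12345678 };
Console.WriteLine(helper.Low.ToString("x"));  // 5678
Console.WriteLine(helper.High.ToString("x")); // 1234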
Due to storage width (32 bits vs. 16 bits), converting an Int32 to an Int16 may imply a loss of information if your Int32 is greater than 32767.
If you look at the bit representation, then you are correct.
You can do this with unsigned ints though, as they don't have the sign bit.
Int32 num = 70000;
string str = Convert.ToString(num, 2);  // convert the Int32 to a binary string
Int32 strl = str.Length;                // detect the string length
string strhi, strlo;
if (strl > 16)                          // value needs more than 16 bits
{
    int lg = strl - 16;                 // number of bits in the high word
    strlo = str.Substring(lg, 16);      // move the low word string to strlo
    strhi = str.Substring(0, lg);       // move the high word string to strhi
}
else                                    // value fits in 16 bits
{
    strhi = "0";                        // set the high word to zero
    strlo = str;                        // move the low word string to strlo
}
Int16 lowword, hiword;
lowword = Convert.ToInt16(strlo, 2);    // convert the binary strings to Int16
hiword = Convert.ToInt16(strhi, 2);
I did not use bitwise operators, but for unsigned values this may work:
public (ushort, ushort) SplitToUnsignedShorts(uint value)
{
    ushort v1 = (ushort)(value / 0x10000);
    ushort v2 = (ushort)(value % 0x10000);
    return (v1, v2);
}
Or an expression body version of it:
public (ushort, ushort) SplitToUShorts(uint value)
    => ((ushort)(value / 0x10000), (ushort)(value % 0x10000));
As for signs, you have to decide how you want to split the data; only one of the two outputs can carry the sign. Remember that a signed value always sacrifices one bit to store the negative state of the number, which essentially 'halves' the maximum value the variable can hold. This is also why a uint can store twice as much as a signed int.
As for encoding it to your target format, you can either make the second number an unsigned short, to preserve the numerical value, or manually encode it so that one bit represents the sign of the value. That way, although you lose the originally intended numeric value to a sign bit, you don't lose the original binary data, and you can always reconstruct it to the original value.
In the end it comes down to how you want to store and process the data. You don't lose the bits, and by extension the data, as long as you know how to extract the data from (or merge it into) your encoded values.
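For completeness, here is a sketch of the reverse operation in the same division/modulus style (my own helper; the uint arithmetic avoids int overflow):
public uint MergeToUInt(ushort v1, ushort v2)
    => (uint)v1 * 0x10000u + v2;

// round-trip check
var (v1, v2) = SplitToUnsignedShorts(0xDEADBEEF);
Console.WriteLine(MergeToUInt(v1, v2) == 0xDEADBEEF); // True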
You can use the NuGet package LSharpCode.XExtensions.
Once you have installed it, you can use it this way:
using LSharpCode.XExtensions.MathExtensions;
Int32 varInt32name;
Int16 varint16nameLow;
Int16 varint16nameHigh;
varInt32name.ToTwoInt16(out varint16nameLow, out varint16nameHigh);
