Here is a method -
using System;
class Program
{
static void Main(string[] args)
{
//
// Create an array of four bytes.
// ... Then convert it into an integer and unsigned integer.
//
byte[] array = new byte[4];
array[0] = 1; // Lowest
array[1] = 64;
array[2] = 0;
array[3] = 0; // Sign bit
//
// Use BitConverter to convert the bytes to an int and a uint.
// ... The int and uint can have different values if the sign bit differs.
//
int result1 = BitConverter.ToInt32(array, 0); // Start at first index
uint result2 = BitConverter.ToUInt32(array, 0); // First index
Console.WriteLine(result1);
Console.WriteLine(result2);
Console.ReadLine();
}
}
Output
16385
16385
I just want to know how this happens.
The docs for BitConverter.ToInt32 actually have some pretty good examples. Assuming BitConverter.IsLittleEndian returns true, array[0] is the least significant byte, as you've shown... although array[3] isn't just the sign bit, it's the most significant byte which includes the sign bit (as bit 7) but the rest of the bits are for magnitude.
So in your case, the least significant byte is 1, and the next byte is 64 - so the result is:
( 1 * (1 << 0) ) + // Bottom 8 bits
(64 * (1 << 8) ) + // Next 8 bits, i.e. multiply by 256
( 0 * (1 << 16)) + // Next 8 bits, i.e. multiply by 65,536
( 0 * (1 << 24)) // Top 7 bits and sign bit, multiply by 16,777,216
which is 16385. If the sign bit were set, you'd need to consider the two cases differently, but in this case it's simple.
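To see that divergence, here is a quick sketch (not part of the original question) that sets the sign bit; on a little-endian machine the bit pattern is 0x80004001:
byte[] array = new byte[4];
array[0] = 1;
array[1] = 64;
array[3] = 128; // 0x80: sets the sign bit of the most significant byte

int result1 = BitConverter.ToInt32(array, 0);   // -2147467263 (two's complement)
uint result2 = BitConverter.ToUInt32(array, 0); // 2147500033 (same bits, unsigned)
Both read the same four bytes; only the interpretation of the top bit differs.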
It converts as if it were a number in base 256. So in your case: 1 + 64*256 = 16385
Looking at the .NET 4.0 Framework reference source, BitConverter does work the way Jon's answer describes, though it uses pointers (unsafe code) to work with the array.
However, if the second argument (i.e., startIndex) is divisible by 4 (as is the case in your example), the framework takes a shortcut. It takes a byte pointer to value[startIndex], casts it to an int pointer, then dereferences it. This trick works regardless of whether IsLittleEndian is true.
From a high level, this basically just means the code is pointing at 4 bytes in the byte array and categorically declaring, "the chunk of memory over there is an int!" (and then returning a copy of it). This makes perfect sense when you take into account that under the hood, an int is just a chunk of memory.
Below is the framework's source code for the ToUInt32 method:
return (uint)ToInt32(value, startIndex);
array[0] = 1;  // 0x01 (lowest)
array[1] = 64; // 0x40
array[2] = 0;  // 0x00
array[3] = 0;  // 0x00 (sign bit)
If you combine the hex values, most significant byte first, you get 0x00004001, which is 16385 in decimal.
The MSDN documentation explains everything.
You can look for yourself - https://referencesource.microsoft.com/#mscorlib/system/bitconverter.cs,e8230d40857425ba
If the data is word-aligned, it simply casts the byte pointer to an int pointer and dereferences it:
return *((int *) pbyte);
Otherwise, it uses bitwise logic from the byte memory pointer values.
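That unaligned branch in the linked reference source looks roughly like this (paraphrased; pbyte is the pinned pointer to value[startIndex]):
if (BitConverter.IsLittleEndian)
{
    return (*pbyte) | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);
}
else
{
    return (*pbyte << 24) | (*(pbyte + 1) << 16) | (*(pbyte + 2) << 8) | (*(pbyte + 3));
}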
For those of you who are having trouble with little-endian and big-endian byte order, I use the following wrapper functions to take care of it. They read big-endian data correctly on either kind of machine (note they need using System.Linq; for Skip/Take/Reverse):
public static Int16 ToInt16(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        // Reverse the relevant slice so the big-endian bytes read correctly.
        return BitConverter.ToInt16(data.Skip(offset).Take(2).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt16(data, offset);
}
public static Int32 ToInt32(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt32(data.Skip(offset).Take(4).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt32(data, offset);
}
public static Int64 ToInt64(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt64(data.Skip(offset).Take(8).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt64(data, offset);
}
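For example, reading a big-endian 32-bit field (a hypothetical network packet header) gives the same result on any platform:
byte[] packet = { 0x00, 0x00, 0x40, 0x01 }; // big-endian encoding of 16385
int value = ToInt32(packet, 0);             // 16385 on both little- and big-endian machines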
Related
I need to combine two Bytes into one int value.
I receive a 16-bit image from my camera where two successive bytes hold the intensity value of one pixel. My goal is to combine these two bytes into one int value.
I manage to do this using the following code:
for (int i = 0; i < VectorLength * 2; i = i + 2)
{
NewImageVector[ImagePointer] = ((int)(buffer.Array[i + 1]) << 8) | ((int)(buffer.Array[i]));
ImagePointer++;
}
My image is 1280*960, so VectorLength == 1228800 and the incoming buffer size is 2*1228800 = 2457600 elements...
Is there any way that I can speed this up?
Maybe there is another way so I don't need to use a for-loop.
Thank you
You could use the equivalent of a C union. I'm not sure if it's faster, but it's more elegant:
// Requires: using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Explicit)]
struct byte_array
{
[FieldOffset(0)]
public byte byte1;
[FieldOffset(1)]
public byte byte2;
[FieldOffset(0)]
public short int0;
}
use it like this:
byte_array ba = new byte_array();
//insert the two bytes
ba.byte1 = (byte)(buffer.Array[i]);
ba.byte2 = (byte)(buffer.Array[i + 1]);
//get the integer
NewImageVector[ImagePointer] = ba.int0;
You can fill in the two bytes and read out the combined value. To find out which way is faster, use the Stopwatch class and compare the two approaches like this:
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//The code
stopWatch.Stop();
MessageBox.Show(stopWatch.ElapsedTicks.ToString()); //Or milliseconds ,...
Assuming you can (re-)define NewImageVector as a short[], and every two consecutive bytes in Buffer should be transformed into a short (which is basically what you're doing now, only you cast to an int afterwards), you can use Buffer.BlockCopy to do it for you.
As the documentation tells you, Buffer.BlockCopy copies bytes from one array to another, so in order to copy the bytes in buffer you need to do the following:
Buffer.BlockCopy(Buffer, 0, NewImageVector, 0, [NumberOfExpectedShorts] * 2)
This tells BlockCopy that you want to start copying bytes from Buffer, starting at index 0, to NewImageVector starting at index 0, and you want to copy [NumberOfExpectedShorts] * 2 bytes (since every short is two bytes long).
No loops, but it does depend on the ability of using a short[] array instead of an int[] array (and indeed, on using an array to begin with).
Note that this also requires the bytes in Buffer to be in little-endian order (i.e. Buffer[index] contains the low byte, buffer[index + 1] the high byte).
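A minimal sketch with the asker's dimensions (assuming buffer.Array is the raw byte source and a short[] result is acceptable):
short[] newImageVector = new short[1228800];                   // 1280 * 960 pixels
Buffer.BlockCopy(buffer.Array, 0, newImageVector, 0, 2457600); // 1228800 shorts * 2 bytes each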
You can achieve a small performance increase by using unsafe pointers to iterate the arrays. The following code assumes that source is the input byte array (buffer.Array in your case). It also assumes that source has an even number of elements. In production code you would obviously have to check these things.
public static unsafe int[] ToIntArray(byte[] source) // hypothetical wrapper name
{
    int[] output = new int[source.Length / 2];
    fixed (byte* pSource = source)
    fixed (int* pDestination = output)
    {
        byte* sourceIterator = pSource;
        int* destIterator = pDestination;
        for (int i = 0; i < output.Length; i++)
        {
            // Low byte first, then high byte shifted up: little-endian pairs.
            (*destIterator) = ((*sourceIterator) | (*(sourceIterator + 1) << 8));
            destIterator++;
            sourceIterator += 2;
        }
    }
    return output;
}
Suppose I have an array that ends with the most significant bit of the most significant byte equaling 1.
My understanding is that if this is the case, BigInteger will treat this as a negative number, by design.
BigInteger numberToShorten = new BigInteger(toEncode);
if (numberToShorten.Sign == -1)
{
// problem with two's complement, or the last bit of the last byte is equal to 1
throw new Exception("Unexpected negative number");
}
To solve this problem, I think I need to add a dummy zero bit to the array, prior to converting the array. I can easily do this using Array.Resize().
My question is, how should I test if the last bit is indeed equal to 1?
I'm pretty weak on my Boolean logic now, and am thinking I need to AND the values and test for equality, but I'm not able to get the syntax correct in C#. Something like this:
byte temp = toEncode[toEncode.Length - 1];
if (temp == ???)
{
Array.Resize(ref toEncode, toEncode.Length +1);
}
If the number to be converted to BigInteger is always positive then you don't actually need to test at all; just appending a zero byte will always work correctly.
For completeness, checking whether the MSB of any one byte value is set is done with
if ((value & 0x80) == 0x80) ...
Note the parentheses are required: in C#, == binds more tightly than &, so value & 0x80 == 0x80 would not even compile.
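Putting that together with the asker's Array.Resize idea, a sketch might look like this (GetBytesSomehow is a hypothetical stand-in for wherever the bytes come from):
byte[] toEncode = GetBytesSomehow(); // hypothetical source of the magnitude bytes
if ((toEncode[toEncode.Length - 1] & 0x80) == 0x80)
{
    // Append a zero byte so BigInteger treats the value as positive.
    Array.Resize(ref toEncode, toEncode.Length + 1);
}
BigInteger number = new BigInteger(toEncode); // Sign is now 0 or 1, never -1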
Found the answer at the bottom of this MSDN article (I think)
ulong originalNumber = UInt64.MaxValue;
byte[] bytes = BitConverter.GetBytes(originalNumber);
if (originalNumber > 0 && (bytes[bytes.Length - 1] & 0x80) > 0)
{
byte[] temp = new byte[bytes.Length];
Array.Copy(bytes, temp, bytes.Length);
bytes = new byte[temp.Length + 1];
Array.Copy(temp, bytes, temp.Length);
}
BigInteger newNumber = new BigInteger(bytes);
Console.WriteLine("Converted the UInt64 value {0:N0} to {1:N0}.",
originalNumber, newNumber);
// The example displays the following output:
// Converted the UInt64 value 18,446,744,073,709,551,615 to 18,446,744,073,709,551,615.
I'm not sure if this is necessary; have you tried it? In any case, assuming that temp contains a byte and you are trying to test whether that byte's lowest bit is 1, this is how you should do it:
if((temp & 0x01)==1)
{
...
}
That amounts to:
Temp: ???????z
0x01: 00000001
Temp & 0x01: 0000000z
Where z represents the bit you are trying to test, and you can now just test whether the result is equal to 1 or not.
If you are trying to test whether the byte starts with 1, do:
if((temp & 0x80)==0x80) //0x80 is 10000000 in binary
{
...
}
I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits in a int) I can figure out what the backfill is, and perhaps some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
StringBuilder sb = new StringBuilder();
uint bits = (uint)num;
while (bits!=0)
{
bits >>= 1;
isBitSet = // somehow do an | operation on the first bit.
// I'm unsure if it's possible to handle different data types here
// or if unsafe code and a PTR is needed
if (isBitSet)
sb.Append("1");
else
sb.Append("0");
}
}
Convert.ToString(56,2).PadLeft(8,'0') returns "00111000"
This is for a byte; it works for an int as well, just increase the numbers (e.g. PadLeft(32, '0')).
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after); otherwise you'd miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
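Folding those two lines back into the asker's method, a complete sketch might be (looping all 32 bits so leading zeros are kept, and prepending so the most significant bit comes first):
static string GetBits(int num)
{
    var sb = new StringBuilder();
    uint bits = (uint)num;
    for (int i = 0; i < 32; i++)
    {
        sb.Insert(0, (bits & 1) == 1 ? '1' : '0'); // prepend: the last bit tested is the MSB
        bits >>= 1;
    }
    return sb.ToString();
}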
But a better option would be to use the static methods of the BitConverter class to get the actual bytes used to represent the number in memory into a byte array. The advantage (or disadvantage depending on your needs) of this method is that this reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while(bitPos < 8 * bytes.Length)
{
int byteIndex = bitPos / 8;
int offset = bitPos % 8;
bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
// isSet = [True] if the bit at bitPos is set, false otherwise
bitPos++;
}
I'm trying to port a C++ function that uses bitwise operations to C#, but am running into issues. Namely, when the input is 0x01123456 the C# version returns zero, but it should be 12.
BN_set_word and other BN_* functions are in OpenSSL. What do these functions do (in detail) ... Is there a .NET equivalent in big integer?
What is an MPI representation? (google is of no use)
Should I be concerned about Masking in the shift count as mentioned in the comments here?
C++ Original
Git Source
// The "compact" format is a representation of a whole
// number N using an unsigned 32bit number similar to a
// floating point format.
// The most significant 8 bits are the unsigned exponent of base 256.
// This exponent can be thought of as "number of bytes of N".
// The lower 23 bits are the mantissa.
// Bit number 24 (0x800000) represents the sign of N.
// N = (-1^sign) * mantissa * 256^(exponent-3)
//
// Satoshi's original implementation used BN_bn2mpi() and BN_mpi2bn().
// MPI uses the most significant bit of the first byte as sign.
// Thus 0x1234560000 is compact (0x05123456)
// and 0xc0de000000 is compact (0x0600c0de)
// (0x05c0de00) would be -0x40de000000
//
// Bitcoin only uses this "compact" format for encoding difficulty
// targets, which are unsigned 256bit quantities. Thus, all the
// complexities of the sign bit and using base 256 are probably an
// implementation accident.
//
// This implementation directly uses shifts instead of going
// through an intermediate MPI representation.
CBigNum& SetCompact(unsigned int nCompact)
{
unsigned int nSize = nCompact >> 24;
bool fNegative =(nCompact & 0x00800000) != 0;
unsigned int nWord = nCompact & 0x007fffff;
if (nSize <= 3)
{
nWord >>= 8*(3-nSize);
BN_set_word(this, nWord);
}
else
{
BN_set_word(this, nWord);
BN_lshift(this, this, 8*(nSize-3));
}
BN_set_negative(this, fNegative);
return *this;
}
C# attempted port
internal static System.Numerics.BigInteger SetCompact(uint numToCompact)
{
// SetCompact
// Extract the number from bits 0..23
uint nWord = numToCompact & 0x007fffff;
BigInteger ret = new BigInteger(nWord);
// Add zeroes to the left according to bits 25..32
var ttt = ret.ToByteArray();
uint size = numToCompact >> 24;
uint amountToShift = 0;
if (size <= 3)
{
amountToShift = 8 * (3 - size);
ret = ret >> (int)amountToShift;
}
else
{
    amountToShift = 8 * (size - 3); // compute the shift before using it
    ret = ret << (int)amountToShift;
}
// Set the value negative if required per bit 24
UInt32 isNegative = 0x00800000 & numToCompact;
if (isNegative != 0)
ret =BigInteger.Negate(ret);
var test = ret.ToByteArray();
Console.WriteLine();
Console.WriteLine(ret.ToString("X"));
return ret;
}
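With the swapped lines in the else branch fixed (see above), a quick sanity check of the failing input, assuming the expected "12" is hexadecimal:
var result = SetCompact(0x01123456);
// size = 0x01, word = 0x123456, shifted right by 8 * (3 - 1) = 16 bits
Console.WriteLine(result.ToString("X")); // prints 12 (that is, 0x12 == decimal 18)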
I have this code for converting a byte[] to float[].
public float[] ConvertByteToFloat(byte[] array)
{
float[] floatArr = new float[array.Length / sizeof(float)];
int index = 0;
for (int i = 0; i < floatArr.Length; i++)
{
floatArr[i] = BitConverter.ToSingle(array, index);
index += sizeof(float);
}
return floatArr;
}
Problem is, I usually get a NaN result! Why should this be? I checked if there is data in the byte[] and the data seems to be fine. If it helps, an example of the values are:
new byte[] {
231,
255,
235,
255,
}
But this returns NaN (Not a Number) after conversion to float. What could be the problem? Are there other better ways of converting byte[] to float[]? I am sure that the values read into the buffer are correct since I compared it with my other program (which performs amplification for a .wav file).
If endianness is the problem, you should check the value of BitConverter.IsLittleEndian to determine if the bytes have to be reversed:
public static float[] ConvertByteToFloat(byte[] array) {
float[] floatArr = new float[array.Length / 4];
for (int i = 0; i < floatArr.Length; i++) {
if (BitConverter.IsLittleEndian) {
Array.Reverse(array, i * 4, 4);
}
floatArr[i] = BitConverter.ToSingle(array, i * 4);
}
return floatArr;
}
Isn't the problem that 255 in the exponent field represents NaN (see the exponent section of the Wikipedia article)? So you should expect a NaN. Try changing the last 255 to something else...
If it's the endianness that is wrong (you are reading big-endian numbers), try these (be aware that they are "unsafe", so you have to check the "Allow unsafe code" flag in your project properties):
public static unsafe int ToInt32(byte[] value, int startIndex)
{
fixed (byte* numRef = &value[startIndex])
{
var num = (uint)((numRef[0] << 0x18) | (numRef[1] << 0x10) | (numRef[2] << 0x8) | numRef[3]);
return (int)num;
}
}
public static unsafe float ToSingle(byte[] value, int startIndex)
{
int val = ToInt32(value, startIndex);
return *(float*)&val;
}
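For what it's worth, running the asker's sample bytes through this big-endian reader yields a huge negative float rather than NaN:
byte[] data = { 231, 255, 235, 255 };
float f = ToSingle(data, 0); // bit pattern 0xE7FFEBFF, roughly -2.417E+24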
I assume that your byte[] doesn't contain the binary representation of floats, but rather a sequence of Int8s or Int16s. The examples you posted don't look like audio samples based on float (neither NaN nor -2.41E+24 is in the range -1 to 1). While some newer audio formats might support floats outside that range, traditionally audio data consists of signed 8- or 16-bit integer samples.
Another thing you need to be aware of is that often the different channels are interleaved. For example it could contain the first sample for the left, then for the right channel, and then the second sample for both... So you need to separate the channels while parsing.
It's also possible, but uncommon that the samples are unsigned. In which case you need to remove the offset from the conversion functions.
So you first need to parse current position in the byte array into an Int8/16. And then convert that integer to a float in the range -1 to 1.
If the format is little endian you can use BitConverter. Another possibility that works with both endiannesses is getting two bytes manually and combining them with a bit-shift. I don't remember if little or big endian is common. So you need to try that yourself.
This can be done with functions like the following (I didn't test them; note C# has no Int8 type, so sbyte is used for the 8-bit case, and the scale factor is 2 so the result spans -1 to 1):
float Int8ToFloat(sbyte i)
{
    return ((i - sbyte.MinValue) * (2f / 0xFF)) - 1f;
}
float Int16ToFloat(Int16 i)
{
    return ((i - Int16.MinValue) * (2f / 0xFFFF)) - 1f;
}
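Combined with the interleaving point above, a hedged sketch for little-endian signed 16-bit interleaved stereo data (raw, left, and right are assumed names) might be:
// Assumes: raw is the incoming byte[] of interleaved little-endian Int16 samples.
float[] left = new float[raw.Length / 4];
float[] right = new float[raw.Length / 4];
for (int i = 0; i < left.Length; i++)
{
    left[i] = Int16ToFloat(BitConverter.ToInt16(raw, i * 4));      // left channel sample
    right[i] = Int16ToFloat(BitConverter.ToInt16(raw, i * 4 + 2)); // right channel sample
}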
Depends on how you want to convert the bytes to float. Start out by trying to pin-point what is actually stored in the byte array. Perhaps there is a file format specification? Other transformations in your code?
In the case each byte should be converted to a float between 0 and 255:
public float[] ConvertByteToFloat(byte[] array)
{
return array.Select(b => (float)b).ToArray();
}
If the byte array contains the binary representation of floats, there are several possible representations, and if the representation stored in your file does not match the C# standard floating-point representation (IEEE 754), weird things like this will happen.