How do I set each bit in the following byte array, which has 21 bytes (168 bits), to either zero or one?
byte[] logonHours
Thank you very much
Well, to clear every bit to zero you can just use Array.Clear:
Array.Clear(logonHours, 0, logonHours.Length);
Setting each bit is slightly harder:
for (int i = 0; i < logonHours.Length; i++)
{
logonHours[i] = 0xff;
}
If you find yourself filling an array often, you could write an extension method:
public static void FillArray<T>(this T[] array, T value)
{
// TODO: Validation
for (int i = 0; i < array.Length; i++)
{
array[i] = value;
}
}
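For example, a minimal usage (assuming the extension method above is defined in a static class that is in scope):
logonHours.FillArray((byte)0xff); // every bit in every byte set to 1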
Another option is BitArray.SetAll:
System.Collections.BitArray a = new System.Collections.BitArray(logonHours);
a.SetAll(true);
Note that this copies the data from the byte array. It's not just a wrapper around it.
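If you need the result back in the original byte array afterwards, here is a sketch of the round trip (BitArray.CopyTo supports byte[] targets):
System.Collections.BitArray a = new System.Collections.BitArray(logonHours);
a.SetAll(true);
a.CopyTo(logonHours, 0); // write the bits back into the byte array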
This may be more than you need, but ...
Usually when dealing with individual bits in any data type, I define a const for each bit position, then use the binary operators |, &, and ^.
i.e.
const byte bit1 = 1;
const byte bit2 = 2;
const byte bit3 = 4;
const byte bit4 = 8;
...
const byte bit8 = 128;
Then you can turn whatever bits you want on and off using the bit operations.
byte byTest = 0;
byTest |= bit4;
would turn bit 4 on but leave the rest untouched.
You would use & (with an inverted mask) and ^ to turn bits off or toggle them, as in the lines below.
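For example, two illustrative lines using the same constants:
byTest = (byte)(byTest & ~bit4); // turn bit 4 off, leave the rest untouched
byTest ^= bit4;                  // toggle bit 4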
Obviously, since you only want to turn all the bits on or off, you can just set each byte to 0 or 255; that turns them all off or on in one go.
So I've got this code I'm writing in C#. It's supposed to reverse the order of the bits in a BitArray (not invert them). Through heavy use of breakpoints and watches, I've determined that the function somehow modifies both the input parameter array and tempArray, the array I copied it into in an attempt to keep the function from changing the input array.
static BitArray reverseBits(BitArray array)
{
BitArray tempArray = array;
int length = tempArray.Length;
int mid = length / 2;
for (int i = 0; i < mid; i++)
{
bool tempBit = tempArray[i];
tempArray[i] = tempArray[length - 1 - i]; //the problem seems to be happening
tempArray[length - 1 - i] = tempBit; //somewhere in this vicinity
}
return tempArray;
}
I have no idea why it's behaving like this. Granted, pointers were never my strong suit, but I try to avoid them whenever possible, and they don't seem to be used much at all in C#, which is why I'm puzzled by this behavior.
TL;DR: if you pass my function 00000001, the function returns 10000000 AND the array that was passed in from the outside is changed to that as well.
P.S. This is for an FFT-related task; that's why I'm bothering with the bit reversal at all.
I believe you want to create a new instance of a BitArray like this:
BitArray tempArray = new BitArray(array);
This should create a new instance of a BitArray instead of creating another variable referencing the original array.
You haven't copied the array, you've just assigned the reference to another variable.
BitArray is a class, so a variable of that type holds a reference to the object (similar to a pointer in C); assigning it copies the reference, not the bits.
If you want to copy the array, use the new BitArray(array) constructor or the CopyTo method.
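Putting that together, a non-mutating version of the function from the question could look like this (a sketch; it assumes using System.Collections):
static BitArray ReverseBits(BitArray array)
{
    // Copy the data so the caller's array is left untouched
    BitArray result = new BitArray(array);
    int length = result.Length;
    for (int i = 0; i < length / 2; i++)
    {
        bool temp = result[i];
        result[i] = result[length - 1 - i];
        result[length - 1 - i] = temp;
    }
    return result;
}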
Maybe this similar byte-oriented function could help you:
/// <summary>
/// Reverse bit order in each byte (8 bits) of a BitArray
/// (change endian bit order)
/// </summary>
public static void BytewiseReverse(BitArray bitArr)
{
int byteCount = bitArr.Length / 8;
for (int i = 0; i < byteCount; i++)
{
for (int j = 0; j < 4; j++)
{
bool temp = bitArr[i * 8 + 7 - j];
bitArr[i * 8 + 7 - j] = bitArr[i * 8 + j];
bitArr[i * 8 + j] = temp;
}
}
}
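For example (assuming the BitArray length is a multiple of 8):
BitArray bits = new BitArray(new byte[] { 0x01 }); // only bit 0 is set
BytewiseReverse(bits);                             // now only bit 7 is set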
I need to combine two bytes into one int value.
I receive a 16-bit image from my camera where two successive bytes hold the intensity value of one pixel. My goal is to combine these two bytes into one "int" value.
I managed to do this using the following code:
for (int i = 0; i < VectorLength * 2; i = i + 2)
{
NewImageVector[ImagePointer] = ((int)(buffer.Array[i + 1]) << 8) | ((int)(buffer.Array[i]));
ImagePointer++;
}
My image is 1280*960, so VectorLength == 1228800 and the incoming buffer size is 2*1228800 = 2457600 elements...
Is there any way that I can speed this up?
Maybe there is another way so I don't need to use a for-loop.
Thank you
You could use the equivalent of a C union. I'm not sure whether it's faster, but it's more elegant:
using System.Runtime.InteropServices; // for StructLayout / FieldOffset

[StructLayout(LayoutKind.Explicit)]
struct byte_array
{
[FieldOffset(0)]
public byte byte1;
[FieldOffset(1)]
public byte byte2;
[FieldOffset(0)]
public ushort int0; // ushort avoids sign-extension when the value is assigned to an int
}
use it like this:
byte_array ba = new byte_array();
//insert the two bytes
ba.byte1 = (byte)(buffer.Array[i]);
ba.byte2 = (byte)(buffer.Array[i + 1]);
//get the combined value
NewImageVector[ImagePointer] = ba.int0;
You can fill the two bytes and then read the combined value. To find out which way is faster, use the Stopwatch class and compare the two approaches like this:
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//The code
stopWatch.Stop();
MessageBox.Show(stopWatch.ElapsedTicks.ToString()); //Or milliseconds ,...
Assuming you can (re-)define NewImageVector as a short[], and every two consecutive bytes in buffer should be transformed into a short (which is basically what you're doing now, only you cast to an int afterwards), you can use Buffer.BlockCopy to do it for you.
As the documentation explains, Buffer.BlockCopy copies bytes from one array to another, so in order to copy your bytes in buffer you need to do the following:
Buffer.BlockCopy(Buffer, 0, NewImageVector, 0, [NumberOfExpectedShorts] * 2)
This tells BlockCopy that you want to start copying bytes from Buffer, starting at index 0, to NewImageVector starting at index 0, and you want to copy [NumberOfExpectedShorts] * 2 bytes (since every short is two bytes long).
No loops, but it does depend on the ability of using a short[] array instead of an int[] array (and indeed, on using an array to begin with).
Note that this also requires the bytes in buffer to be in little-endian order (i.e. buffer[index] contains the low byte, buffer[index + 1] the high byte).
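As a concrete sketch (assuming buffer.Array is the raw byte[] from the question and a little-endian machine, so the result matches the manual loop):
short[] pixels = new short[1228800]; // VectorLength from the question
Buffer.BlockCopy(buffer.Array, 0, pixels, 0, pixels.Length * 2);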
You can achieve a small performance increase by using unsafe pointers to iterate the arrays. The following code assumes that source is the input byte array (buffer.Array in your case). It also assumes that source has an even number of elements and that the project is compiled with unsafe code enabled (/unsafe). In production code you would obviously have to check these things.
int[] output = new int[source.Length / 2];
fixed (byte* pSource = source)
fixed (int* pDestination = output)
{
byte* sourceIterator = pSource;
int* destIterator = pDestination;
for (int i = 0; i < output.Length; i++)
{
(*destIterator) = ((*sourceIterator) | (*(sourceIterator + 1) << 8));
destIterator++;
sourceIterator += 2;
}
}
return output;
I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits in an int) I can figure out what the backfill is, and perhaps some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
StringBuilder sb = new StringBuilder();
uint bits = (uint)num;
while (bits!=0)
{
bits >>= 1;
isBitSet = // somehow do an | operation on the first bit.
// I'm unsure if it's possible to handle different data types here
// or if unsafe code and a PTR is needed
if (isBitSet)
sb.Append("1");
else
sb.Append("0");
}
}
Convert.ToString(56,2).PadLeft(8,'0') returns "00111000"
This is for a byte; it works for an int as well, just increase the numbers.
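For a full 32-bit int, for example:
Convert.ToString(num, 2).PadLeft(32, '0')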
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after), otherwise you'd miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
But a better option would be to use the static methods of the BitConverter class to get the actual bytes used to represent the number in memory into a byte array. The advantage (or disadvantage depending on your needs) of this method is that this reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while(bitPos < 8 * bytes.Length)
{
int byteIndex = bitPos / 8;
int offset = bitPos % 8;
bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
// isSet = [True] if the bit at bitPos is set, false otherwise
bitPos++;
}
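Putting the pieces together, a complete version of the GetBits method from the question might look like this (a sketch; it prints the most significant bit first and assumes using System.Text):
static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;
    // Walk from the most significant bit down so the string reads left to right
    for (int i = 31; i >= 0; i--)
    {
        bool isBitSet = ((bits >> i) & 1) == 1;
        sb.Append(isBitSet ? '1' : '0');
    }
    return sb.ToString();
}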
Here is a method -
using System;
class Program
{
static void Main(string[] args)
{
//
// Create an array of four bytes.
// ... Then convert it into an integer and unsigned integer.
//
byte[] array = new byte[4];
array[0] = 1; // Lowest
array[1] = 64;
array[2] = 0;
array[3] = 0; // Sign bit
//
// Use BitConverter to convert the bytes to an int and a uint.
// ... The int and uint can have different values if the sign bit differs.
//
int result1 = BitConverter.ToInt32(array, 0); // Start at first index
uint result2 = BitConverter.ToUInt32(array, 0); // First index
Console.WriteLine(result1);
Console.WriteLine(result2);
Console.ReadLine();
}
}
Output
16385
16385
I just want to know how this is happening?
The docs for BitConverter.ToInt32 actually have some pretty good examples. Assuming BitConverter.IsLittleEndian returns true, array[0] is the least significant byte, as you've shown... although array[3] isn't just the sign bit, it's the most significant byte which includes the sign bit (as bit 7) but the rest of the bits are for magnitude.
So in your case, the least significant byte is 1, and the next byte is 64 - so the result is:
( 1 * (1 << 0) ) + // Bottom 8 bits
(64 * (1 << 8) ) + // Next 8 bits, i.e. multiply by 256
( 0 * (1 << 16)) + // Next 8 bits, i.e. multiply by 65,536
( 0 * (1 << 24)) // Top 7 bits and sign bit, multiply by 16,777,216
which is 16385. If the sign bit were set, you'd need to consider the two cases differently, but in this case it's simple.
It converts as if it were a number in base 256. So in your case: 1 + 64 * 256 = 16385.
Looking at the .Net 4.0 Framework reference source, BitConverter does work how Jon's answer said, though it uses pointers (unsafe code) to work with the array.
However, if the second argument (i.e., startIndex) is divisible by 4 (as is the case in your example), the framework takes a shortcut. It takes a byte pointer to value[startIndex], casts it to an int pointer, then dereferences it. This trick works regardless of whether IsLittleEndian is true.
From a high level, this basically just means the code is pointing at 4 bytes in the byte array and categorically declaring, "the chunk of memory over there is an int!" (and then returning a copy of it). This makes perfect sense when you take into account that under the hood, an int is just a chunk of memory.
Below is the source code of the framework ToUint32 method:
return (uint)ToInt32(value, startIndex);
array[0] = 1;  // 0x01 (lowest)
array[1] = 64; // 0x40
array[2] = 0;  // 0x00
array[3] = 0;  // 0x00 (sign bit)
If you combine the hex values, you get 0x00004001.
The MSDN documentation explains everything.
You can look for yourself - https://referencesource.microsoft.com/#mscorlib/system/bitconverter.cs,e8230d40857425ba
If the data is word-aligned, it will simply cast the memory pointer to an int32.
return *((int *) pbyte);
Otherwise, it uses bitwise logic from the byte memory pointer values.
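The little-endian branch of that fallback looks roughly like this (paraphrased from the reference source linked above):
return (*pbyte) | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);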
For those of you who are having trouble with little-endian and big-endian byte order: I use the following wrapper functions to take care of it. They read the data as big-endian regardless of the machine's native order (and require using System.Linq;).
public static Int16 ToInt16(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        // Reverse the relevant bytes so the big-endian input is read correctly
        return BitConverter.ToInt16(data.Skip(offset).Take(2).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt16(data, offset);
}
public static Int32 ToInt32(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt32(data.Skip(offset).Take(4).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt32(data, offset);
}
public static Int64 ToInt64(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt64(data.Skip(offset).Take(8).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt64(data, offset);
}
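For example (a buffer holding 0x00004001 in big-endian order):
byte[] data = { 0x00, 0x00, 0x40, 0x01 };
int value = ToInt32(data, 0); // 16385 regardless of the machine's endianness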
How can I use LINQ to XOR bytes together in an array? I'm taking the output of an MD5 hash and would like to XOR every four bytes together so that I can get a 32-bit int out of it. I could easily do this with a loop, but I thought it was an interesting problem for LINQ.
public static byte[] CompressBytes(byte[] data, int length)
{
byte[] buffer = new byte[length];
for (int i = 0; i < data.Length; i++)
{
for (int j = 0; j < length; j++)
{
if (i * length + j >= data.Length)
break;
buffer[j] ^= data[i * length + j];
}
}
return buffer;
}
Slightly off topic, but is this even a good idea? If I need a int, would I be better off with a different hash function, or should I just take the first 4 bytes of the MD5 because XORing them all wouldn't help any? Comments are welcome.
You can use the Aggregate extension method (not strictly LINQ query syntax, but most people refer to the LINQ-related extension methods as LINQ) to perform a custom aggregate. For example, you could compute the total XOR of a list of bytes like this:
var xor = list.Aggregate((acc, val) => (byte)(acc ^ val));
You can create a virtually unreadable chain of extension method calls to do what you're after:
int integer = BitConverter.ToInt32(Enumerable.Range(0, 4).
    Select(i => data.Skip(i * 4).Take(4).
        Aggregate((acc, val) => (byte)(acc ^ val))).ToArray(), 0);
To address the "off topic" part, I'd suggest just lopping off the first 32 bits of the MD5 hash. Or consider a simpler non-crypto hash such as CRC32.
Like other cryptographic hashes, MD5 is supposed to appear as random as possible, so XOR'ing other bytes won't really make a difference, IMO.
In case you need the XOR of two byte arrays:
byteArray1.Select((x, i) => (byte)(x ^ byteArray2[i])).ToArray();
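A quick example (assuming both arrays have the same length and using System.Linq; otherwise you'd need to guard against an out-of-range index):
byte[] a = { 0x0F, 0xF0 };
byte[] b = { 0xFF, 0xFF };
byte[] c = a.Select((x, i) => (byte)(x ^ b[i])).ToArray(); // { 0xF0, 0x0F }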