Hello, I'm new to using bytes in C#.
Say I want to compare bytes of the form 0xxxxxxx and 1xxxxxxx. How would I get that first bit for my comparison and, at the same time, remove it from the front?
Any help will be greatly appreciated.
Not sure I understand, but in C#, to write the binary number 1000 0000 you use hex notation (or, since C# 7.0, a binary literal such as 0b1000_0000). So to check whether the left-most (most significant) bits of two bytes match, you can do e.g.
byte a = ...;
byte b = ...;
if ((a & 0x80) == (b & 0x80))
{
    // match
}
else
{
    // opposite
}
This uses bit-wise AND. To clear the most significant bit, you may use:
byte aModified = (byte)(a & 0x7f);
or if you want to assign back to a again:
a &= 0x7f;
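Putting the two together: read the leading bit, then clear it (a small sketch with an example value):
byte a = 0xB5;                // 10110101
int topBit = (a & 0x80) >> 7; // 1 -- the leading bit, ready for comparison
a &= 0x7f;                    // now 00110101 -- the byte with that bit removed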
You need to use bitwise operations like
a & 0x80 (test the top bit)
a << 1 (shift left by one)
This will check two bytes and compare each bit. If the bit is the same, it will clear that bit.
static void Main(string[] args)
{
    byte byte1 = 255;
    byte byte2 = 255;
    for (var i = 0; i <= 7; i++)
    {
        if ((byte1 & (1 << i)) == (byte2 & (1 << i)))
        {
            // position i in byte1 is the same as position i in byte2
            // clear the bit that is the same in both numbers
            ClearBit(ref byte1, i);
            ClearBit(ref byte2, i);
        }
        else
        {
            // if not the same.. do something here
        }
        Console.WriteLine(Convert.ToString(byte1, 2).PadLeft(8, '0'));
    }
    Console.ReadKey();
}

private static void ClearBit(ref byte value, int position)
{
    value = (byte)(value & ~(1 << position));
}
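Since both bytes start at 255, every bit matches, so each pass clears one more bit and the output is:
11111110
11111100
11111000
11110000
11100000
11000000
10000000
00000000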
Let's say we have two bitmaps represented by unsigned long (64-bit) arrays, and I want to merge the two bitmaps using a specific shift (offset).
For example, merge bitmap1 (bigger) into bitmap2 (smaller) starting at offset 3. Offset 3 means that bit 3 of bitmap1 corresponds to bit 0 of bitmap2.
By merge I mean a logical OR operation. What is the cleanest way to do this?
Currently I have done this with a simple, inefficient for loop:
const ulong BitsPerUlong = 64;
void MergeAt(ulong startIndex, Bitmap bitmap2)
{
    for (ulong i = startIndex; i < bitmap2.Capacity; i++)
    {
        bool newVal = bitmap2.GetAt(i) | bitmap1.GetAt(i);
        bitmap2.SetAt(i, newVal);
    }
}
bool GetAt(ulong index)
{
    var dataOffset = BitOffsetToUlongOffset(index);
    ulong mask = 0x1ul << ((int)(index % BitsPerUlong));
    return (_data[dataOffset] & mask) == mask;
}

void SetAt(ulong index, bool value)
{
    var dataOffset = BitOffsetToUlongOffset(index);
    ulong mask = 0x1ul << ((int)(index % BitsPerUlong));
    if (value)
    {
        _data[dataOffset] |= mask;
    }
    else
    {
        _data[dataOffset] &= ~mask;
    }
}

ulong BitOffsetToUlongOffset(ulong index)
{
    var dataOffset = index / BitsPerUlong;
    return dataOffset;
}
(C/C++/C# accepted).
As you probably figured out yourself, if offset < BitsPerULong the first block can be merged with:
data1[0] |= data2[0] << offset;
Which leaves some bits in data2[0] unmerged, but you can get those with:
data2[0] >> (BitsPerULong - offset)
So the next merge for i > 0 becomes:
data1[i] |= (data2[i] << offset) | (data2[i-1] >> (BitsPerULong - offset));
from which you can construct a for-loop to merge all data. Of course, this still means a couple of bits from data2 will "fall off" but I think that's inherent to your problem description?
If you need a more generic solution where offset can also be greater than BitsPerULong, this needs a bit more work.
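For the in-range case, the loop might look like this (a sketch assuming both arrays have the same length and 0 < offset < 64; C# masks shift counts to 0-63, so offset == 0 would need its own branch):
static void Merge(ulong[] data1, ulong[] data2, int offset)
{
    data1[0] |= data2[0] << offset;
    for (int i = 1; i < data1.Length; i++)
    {
        // bits that fit in this block, plus the bits carried over from the previous one
        data1[i] |= (data2[i] << offset) | (data2[i - 1] >> (64 - offset));
    }
}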
I presume you mean that you want to "merge" the smaller INTO the bigger.
Have you tried: bitmapLarger |= ( bitmapSmaller << 3 ) ?
I am continuing from my previous question. I am making a C# program where the user enters a 7-bit binary number and the computer prints out the number with an even parity bit to the right of the number. I am struggling. I have code, but it says "BitArray is a namespace but is used like a type". Also, is there a way I could improve the code and make it simpler?
namespace BitArray
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Please enter a 7-bit binary number:");
            int a = Convert.ToInt32(Console.ReadLine());
            byte[] numberAsByte = new byte[] { (byte)a };
            BitArray bits = new BitArray(numberAsByte);
            int count = 0;
            for (int i = 0; i < 8; i++)
            {
                if (bits[i])
                {
                    count++;
                }
            }
            if (count % 2 == 1)
            {
                bits[7] = true;
            }
            bits.CopyTo(numberAsByte, 0);
            a = numberAsByte[0];
            Console.WriteLine("The binary number with a parity bit is:");
            Console.WriteLine(a);
        }
    }
}
Might be more fun to duplicate the circuit they use to do this..
bool odd = false;
for (int i = 6; i >= 0; i--)
    odd ^= (number & (1 << i)) > 0;
Then if you want even parity set bit 7 to odd, odd parity to not odd.
or
bool even = true;
for (int i = 6; i >= 0; i--)
    even ^= (number & (1 << i)) > 0;
The circuit is dual function returns 0 and 1 or 1 and 0, does more than 1 bit at a time as well, but this is a bit light for TPL....
PS: you might want to check that the input is < 128, otherwise things are going to go badly wrong.
ooh didn't notice the homework tag, don't use this unless you can explain it.
Almost the same process, only much faster on a larger number of bits. Using only bitwise operators (SHR and XOR), without loops:
public static bool is_parity(int data)
{
    //data ^= data >> 32; // if arg >= 64-bit (notice argument length)
    //data ^= data >> 16; // if arg >= 32-bit
    //data ^= data >> 8;  // if arg >= 16-bit
    data ^= data >> 4;
    data ^= data >> 2;
    data ^= data >> 1;
    return (data & 1) != 0;
}

public static byte fix_parity(byte data)
{
    if (is_parity(data)) return data;
    return (byte)(data ^ 128);
}
Using a BitArray does not buy you much here; if anything, it makes your code harder to understand. Your problem can be solved with basic bit manipulation with the & and | and << operators.
For example to find out if a certain bit is set in a number you can & the number with the corresponding power of 2. That leads to:
int bitsSet = 0;
for (int i = 0; i < 7; i++)
    if ((number & (1 << i)) > 0)
        bitsSet++;
Now the only thing remaining is to determine whether bitsSet is even or odd, and then set the parity bit if necessary.
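For example (a sketch; like the question's code, this puts the parity bit in bit 7):
if (bitsSet % 2 == 1)
    number |= 1 << 7; // an odd count of 1s: set the parity bit so the total is even
Console.WriteLine(Convert.ToString(number, 2).PadLeft(8, '0'));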
In order to utilize a byte to its fullest potential, I'm attempting to store two distinct values in one byte: one in the first four bits and another in the second four bits. However, I've found that, while this practice optimizes memory use, it makes changing the individual values stored in the byte difficult.
In my code, I want to change the first set of four bits in a byte while maintaining the value of the second four bits in the same byte. While bitwise operations let me easily retrieve and manipulate the first four bits, I'm finding it difficult to combine the new value with the second set of four bits. The question is: how can I erase the first four bits of a byte (more accurately, set them all to zero) and replace them with a new set of 4 bits, preserving the last 4 bits of the byte while changing the first four?
Here's an example:
// Changes the first four bits in a byte to the parameter value
public void changeFirstFourBits(byte newFirstFour)
{
    // If 'newFirstFour' is 0101 in binary, make 'value' 01011111 in binary,
    // changing the first four bits but leaving the second four alone.
}
private byte value = 255; // binary: 11111111
Use bitwise AND (&) to clear out the old bits, shift the new bits to the correct position and bitwise OR (|) them together:
value = (byte)((value & 0xF) | (newFirstFour << 4));
Here's what happens:
value : abcdefgh
newFirstFour : 0000xyzw
0xF : 00001111
value & 0xF : 0000efgh
newFirstFour << 4 : xyzw0000
(value & 0xF) | (newFirstFour << 4) : xyzwefgh
When I have to do bit-twiddling like this, I make a readonly struct to do it for me. A four-bit integer is called a nybble, of course:
struct TwoNybbles
{
    private readonly byte b;

    public byte High { get { return (byte)(b >> 4); } }
    public byte Low { get { return (byte)(b & 0x0F); } }

    public TwoNybbles(byte high, byte low)
    {
        this.b = (byte)((high << 4) | (low & 0x0F));
    }
}
And then add implicit conversions between TwoNybbles and byte. Now you can just treat any byte as having a High and Low nybble without putting all that ugly bit twiddling in your mainline code.
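Those conversions might look like this (a sketch, declared inside the struct):
public static implicit operator byte(TwoNybbles t)
{
    return t.b;
}

public static implicit operator TwoNybbles(byte value)
{
    return new TwoNybbles((byte)(value >> 4), (byte)(value & 0x0F));
}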
You first mask out the high four bits using value & 0x0F. Then you shift the new bits to the high four bits using newHighFour << 4, and finally you combine them together using binary OR.
public void changeHighFourBits(byte newHighFour)
{
    value = (byte)((value & 0x0F) | (newHighFour << 4));
}

public void changeLowFourBits(byte newLowFour)
{
    value = (byte)((value & 0xF0) | newLowFour);
}
I'm not really sure what your method there is supposed to do, but here are some methods for you:
void setHigh(ref byte b, byte val) {
    b = (byte)((b & 0xf) | (val << 4));
}
byte high(byte b) {
    return (byte)((b & 0xf0) >> 4);
}
void setLow(ref byte b, byte val) {
    b = (byte)((b & 0xf0) | val);
}
byte low(byte b) {
    return (byte)(b & 0xf);
}
Should be self-explanatory.
public int SplatBit(int Reg, int Val, int ValLen, int Pos)
{
    int mask = ((1 << ValLen) - 1) << Pos;
    int newv = Val << Pos;
    int res = (Reg & ~mask) | newv;
    return res;
}
Example:
Reg = 135
Val = 9 (ValLen = 4, because 9 = 1001)
Pos = 2
135 = 10000111
9 = 1001
9 << Pos = 100100
Result = 10100111
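So here, SplatBit(135, 9, 4, 2) returns 167 (10100111 in binary): the four bits of Val dropped into positions 2-5 of Reg.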
A quick look would indicate that a bitwise AND can be achieved using the & operator. So to clear the first four bits you should be able to do:
byte value1 = 255; // 11111111
byte value2 = 15;  // 00001111
return (byte)(value1 & value2);
Assuming newVal contains the value you want to store in origVal.
Do this for the 4 least significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF0) + newVal);
and this for the 4 most significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF) + (newVal << 4));
I know you asked specifically about clearing out the first four bits, which has been answered several times, but I wanted to point out that if you have two values <= decimal 15, you can combine them into 8 bits simply with this:
public int setBits(int upperFour, int lowerFour)
{
    return upperFour << 4 | lowerFour;
}
The result will be xxxxyyyy where
xxxx = upperFour
yyyy = lowerFour
And that is what you seem to be trying to do.
Here's some code, but I think the earlier answers will do it for you. This is just to show some test code to copy and paste into a simple console project (the WriteBits method may be of help):
static void Main(string[] args)
{
    int b1 = 255;
    WriteBits(b1);

    int b2 = b1 >> 4;
    WriteBits(b2);

    int b3 = b1 & ~0xF;
    WriteBits(b3);

    // Store 5 in first nibble
    int b4 = 5 << 4;
    WriteBits(b4);

    // Store 8 in second nibble
    int b5 = 8;
    WriteBits(b5);

    // Store 5 and 8 in first and second nibbles
    int b6 = 0;
    b6 |= (5 << 4) + 8;
    WriteBits(b6);

    // Store 2 and 4
    int b7 = 0;
    b7 = StoreFirstNibble(2, b7);
    b7 = StoreSecondNibble(4, b7);
    WriteBits(b7);

    // Read first nibble
    int first = ReadFirstNibble(b7);
    WriteBits(first);

    // Read second nibble
    int second = ReadSecondNibble(b7);
    WriteBits(second);
}

static int ReadFirstNibble(int storage)
{
    return storage >> 4;
}

static int ReadSecondNibble(int storage)
{
    return storage & 0xF;
}

static int StoreFirstNibble(int val, int storage)
{
    return storage | (val << 4);
}

static int StoreSecondNibble(int val, int storage)
{
    return storage | val;
}

static void WriteBits(int b)
{
    Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));
}
I have this function in C# to convert a little endian byte array to an integer number:
int LE2INT(byte[] data)
{
    return (data[3] << 24) | (data[2] << 16) | (data[1] << 8) | data[0];
}
Now I want to convert it back to a little-endian byte array.
Something like
byte[] INT2LE(int data)
{
    // ...
}
Any idea?
Thanks.
The BitConverter class can be used for this; it works on both little- and big-endian systems.
Of course, you'll have to keep track of the endianness of your data. For communications for instance, this would be defined in your protocol.
You can then use the BitConverter class to convert a data type into a byte array and vice versa, and then use the IsLittleEndian flag to see if you need to convert it on your system or not.
The IsLittleEndian flag will tell you the endianness of the system, so you can use it as follows (this example is from the MSDN page on the BitConverter class):
int value = 12345678; //your value
//Your value in bytes... in your system's endianness (let's say: little endian)
byte[] bytes = BitConverter.GetBytes(value);
//Then, if we need big endian for our protocol for instance,
//Just check if you need to convert it or not:
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes); // reverse it so we get big endian
Hope this helps anyone coming here :)
Just reverse it. Note that this code (like the others) works only on a little-endian machine. (Edit: that was wrong, since this code returns little-endian by definition.)
byte[] INT2LE(int data)
{
    byte[] b = new byte[4];
    b[0] = (byte)data;
    b[1] = (byte)(((uint)data >> 8) & 0xFF);
    b[2] = (byte)(((uint)data >> 16) & 0xFF);
    b[3] = (byte)(((uint)data >> 24) & 0xFF);
    return b;
}
Just do it in reverse (with result as a byte[4]):
result[3] = (byte)((data >> 24) & 0xff);
result[2] = (byte)((data >> 16) & 0xff);
result[1] = (byte)((data >> 8) & 0xff);
result[0] = (byte)(data & 0xff);
Could you use the BitConverter class? It will only work on little-endian hardware, I believe, but it should handle most of the heavy lifting for you.
The following is a contrived example that illustrates the use of the class:
if (BitConverter.IsLittleEndian)
{
    int someInteger = 100;
    byte[] bytes = BitConverter.GetBytes(someInteger);
    int convertedFromBytes = BitConverter.ToInt32(bytes, 0);
}
BitConverter.GetBytes(1000).Reverse<byte>().ToArray();
Depending on what you're actually doing, you could rely on letting the framework handle the details of endianness for you by using IPAddress.HostToNetworkOrder and the corresponding reverse function. Then just use the BitConverter class to go to and from byte arrays.
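For example (a sketch; needs using System.Net, and note that network order is big-endian):
int value = 12345678;
byte[] bigEndian = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(value));
int roundTripped = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(bigEndian, 0));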
Try using BinaryPrimitives in System.Buffers.Binary; it has helper methods for reading and writing all the .NET primitives in both little- and big-endian form.
byte[] IntToLittleEndian(int data)
{
    var output = new byte[sizeof(int)];
    BinaryPrimitives.WriteInt32LittleEndian(output, data);
    return output;
}

int LittleEndianToInt(byte[] data)
{
    return BinaryPrimitives.ReadInt32LittleEndian(data);
}
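For example, using the two helpers above:
byte[] le = IntToLittleEndian(0x12345678); // 78 56 34 12
int back = LittleEndianToInt(le);          // 0x12345678 again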
public static string decimalToHexLittleEndian(int _iValue, int _iBytes)
{
    string sBigEndian = String.Format("{0:x" + (2 * _iBytes).ToString() + "}", _iValue);
    string sLittleEndian = "";
    for (int i = _iBytes - 1; i >= 0; i--)
    {
        sLittleEndian += sBigEndian.Substring(i * 2, 2);
    }
    return sLittleEndian;
}
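Note that this builds a hex string rather than a byte array. For example, decimalToHexLittleEndian(258, 4) returns "02010000", since 258 is 00000102 as big-endian hex.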
You can use this if you don't want to use new heap allocations:
public static void Int32ToFourBytes(Int32 number, out byte b0, out byte b1, out byte b2, out byte b3)
{
    b3 = (byte)number;
    b2 = (byte)(((uint)number >> 8) & 0xFF);
    b1 = (byte)(((uint)number >> 16) & 0xFF);
    b0 = (byte)(((uint)number >> 24) & 0xFF);
}
I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
    byte[] nByte = BitConverter.GetBytes((byte)i);
    foreach (byte b in nByte)
    {
        byteList.Add(b);
    }
}
for (int g = 256; g <= 1000; g++)
{
    UInt16 st = Convert.ToUInt16(g);
    byte[] xByte = BitConverter.GetBytes(st);
    foreach (byte c in xByte)
    {
        byteList.Add(c);
    }
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm uninclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
    uint num = (uint)value;
    while (num >= 0x80)
    {
        this.Write((byte)(num | 0x80));
        num = num >> 7;
    }
    this.Write((byte)num);
}
(taken from System.IO.BinaryWriter, which uses it to write the length prefix in Write(String)). For example, 300 is written as the two bytes 0xAC 0x02: the low seven bits with the continuation flag set, then the remaining bits.
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
    byte num3;
    int num = 0;
    int num2 = 0;
    do
    {
        // 5 bytes of 7 bits each is the most a 32-bit value can need
        if (num2 == 0x23)
        {
            throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
        }
        num3 = this.ReadByte();
        num |= (num3 & 0x7f) << num2; // take the low 7 bits of each byte
        num2 += 7;
    }
    while ((num3 & 0x80) != 0); // a set high bit means another byte follows
    return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;

namespace EncodedNumbers
{
    class Program
    {
        protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
        {
            uint num = (uint)value;
            while (num >= 0x80)
            {
                bin.Write((byte)(num | 0x80));
                num = num >> 7;
            }
            bin.Write((byte)num);
        }

        static void Main(string[] args)
        {
            MemoryStream ms = new MemoryStream();
            BinaryWriter bin = new BinaryWriter(ms);
            for (int i = 1; i <= 1000; i++)
            {
                Write7BitEncodedInt(bin, i);
            }
            byte[] data = ms.ToArray();
            int size = data.Length;
            Console.WriteLine("Total # of Bytes = " + size);
            Console.ReadLine();
        }
    }
}
The total size comes to 1873 bytes for the numbers 1-1000 (127 one-byte values plus 873 two-byte values).
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
    byte[] bytes = BitConverter.GetBytes(value);
    int skip = bytes.Length - 1;
    while (skip > 0 && bytes[skip] == 0)
    {
        skip--;
    }
    for (int i = 0; i <= skip; i++)
    {
        bin.Write(bytes[i]);
    }
}
This ignores any bytes that are zero (from MSB to LSB). So for 0-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1745 bytes (as opposed to 1873 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store numbers above 255 in one byte. The easiest way would be to use a short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values).
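A sketch of that kind of packing (the layout here is just one possible choice; bits are written LSB-first into the array):
static byte[] Pack10Bit(int[] values)
{
    // 10 bits per value, rounded up to whole bytes
    var bytes = new byte[(values.Length * 10 + 7) / 8];
    int bitPos = 0;
    foreach (int v in values)
    {
        for (int i = 0; i < 10; i++, bitPos++)
        {
            if ((v & (1 << i)) != 0)
                bytes[bitPos / 8] |= (byte)(1 << (bitPos % 8));
        }
    }
    return bytes;
}
The numbers 1-1000 then take 1000 * 10 / 8 = 1250 bytes.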
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
    byte[] nByte = BitConverter.GetBytes(i);
    foreach (byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
Will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class, or gin up a Huffman encoding implementation using an agreed-upon set of frequencies.
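For instance, with the built-in GZipStream from System.IO.Compression (one possible choice of built-in class):
static byte[] Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        // the gzip stream must be closed before the result is read back
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }
}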