If I declare
[Flags]
public enum MyColor
{
Red = 1,
Green = 2,
Blue = 4,
White = 8,
Magenta = 16,
... (etc)
}
Is there a way to determine/set the number of bytes that this enum takes up? Also, what byte order would it end up in? (E.g., do I have to do a HostToNetwork() to properly send it over the wire?) Also, in order to call HostToNetwork, can I cast it as a byte array and iterate?
[Flags]
public enum MyColor : byte // sets the underlying type.
{
Red = 1,
Green = 2,
Blue = 4,
White = 8,
Magenta = 16,
... (etc)
}
It's not possible to directly set the endianness. You can use carefully crafted numbers that simulate big-endian byte order on a little-endian system. However, I'd always use explicit APIs for converting byte orders.
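For example, one explicit API is IPAddress.HostToNetworkOrder; a minimal sketch (note it only has signed overloads, so the flags value goes through int):

int hostValue = (int)(MyColor.Red | MyColor.Blue);
int networkValue = System.Net.IPAddress.HostToNetworkOrder(hostValue);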
The complete answer is:
Is there a way to determine/set the number of bytes that this enum takes up?
Yes:
[Flags]
public enum MyColor : byte // sets the underlying type.
{
Red = 1,
Green = 2,
Blue = 4,
White = 8,
Magenta = 16,
... (etc)
}
Also, what byte order would it end up in?
Whatever byte order the platform it runs on uses; in my case, x86 (little-endian).
Also, in order to call HostToNetwork, can I cast it as a byte array and iterate?
This is where it's tricky. I found out a few things:
the enum's underlying type (set by the ": byte" or ": long" you tag onto the end of the declaration) must be one of the built-in integral types, so it is actually impossible to do what I was really trying to do (an enum of 6 bytes).
the serialization of this structure to an array of bytes (to be converted to network order and sent over the wire) is far from straightforward. The BitConverter class does the trick, and this is pretty helpful for dancing between endiannesses: http://snipplr.com/view/15179/adapt-systembitconverter-to-handle-big-endian-network-byte-ordering-in-order-to-create-number-types-from-bytes-and-viceversa/
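For illustration, a minimal sketch of that BitConverter round trip, assuming the flags value is widened to uint for transmission:

MyColor color = MyColor.Red | MyColor.Blue;
byte[] bytes = BitConverter.GetBytes((uint)color); // host order (little-endian on x86)
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes); // flip to network (big-endian) order before sending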
Related
According to this question, C# will assign a 4-byte size to a field of type Fruits whether it is defined like this:
enum Fruits : byte { Apple, Orange, Banana }
or like this:
enum Fruits { Apple, Orange, Banana }
I'm still curious if there is any way of sidestepping this and making the size of an enum smaller than 4 bytes. I know that this probably wouldn't be very efficient or desirable, but it's still interesting to know if it's possible at all.
Data alignment (typically on a 1, 2, or 4 byte border) is used for faster access to the data (an int should be aligned on a 4-byte border).
For instance
(let me use byte and int instead of enum for readability, and struct instead of class, since sizeof makes it easy to get the size of a struct):
// sizeof() == 8 == 1 + 3 (padding) + 4
public struct MyDemo {
public byte A; // Padded with 3 unused bytes
public int B; // Aligned on 4 byte
}
// sizeof() == 8 == 1 + 1 + 2 (padding) + 4
public struct MyDemo {
public byte A; // Bytes should be aligned on 1 Byte Border
public byte B; // Padded with 2 unused bytes
public int C; // Aligned on 4 byte
}
// sizeof() == 2 == 1 + 1
public struct MyDemo {
public byte A; // Bytes should be aligned on 1 Byte Border
public byte B; // Bytes should be aligned on 1 Byte Border
}
So far so good; you can see the effect even for fields within a class (struct), e.g.:
public struct MyClass {
// 4 bytes in total: 1 + 1 + 2 (we are lucky: no padding here)
private Fruits m_Fruits; // Aligned on 1 byte border (Fruits : byte)
private byte m_MyByte;   // Aligned on 1 byte border
private short m_MyShort; // Aligned on 2 byte border
}
In the case of a collection (array), all the values are of the same type and are aligned the same way, which is why no padding is required:
// Length * 1Byte == Length byte in total
byte[] array = new [] {
byte1, // 1 Byte alignment
byte2, // 1 Byte alignment
byte3, // 1 Byte alignment
...
byteN, // 1 Byte alignment
}
For the vast majority of applications the size overhead will not matter at all. For some specialized applications, like image processing, it may make sense to use constant byte values and do bit-manipulations instead. This can also be a way to pack multiple values into a single byte, or combine flag-bits with values:
const byte Apple = 0x01;
const byte Orange = 0x02;
const byte Banana = 0x03;
const byte FruitMask = 0x0f; // bits 0-3 represent the fruit value
const byte Red = 0x10;
const byte Green = 0x20;
const byte ColorMask = 0x70; // bits 4-6 represents color
const byte IsValidFlag = 0x80; // bit 7 represent value flag
...
var fruitValue = myBytes[i] & FruitMask;
var isRed = (myBytes[i] & ColorMask) == Red;
var isValid = (myBytes[i] & IsValidFlag) > 0;
According to this question, C# will assign a 4-byte size to a field of type Fruits whether it is defined like this
I would say that this is not actually what is written there. The post describes memory alignment on the stack, which seems to use 4 bytes even for a byte variable (this can be platform dependent):
byte b = 1;
results in the same IL_0000: ldc.i4.1 instruction as var fb1 = FruitsByte.Apple and int i = 1; (see it at sharplab.io), and the same 4-byte difference (Core CLR 6.0.322.12309 on x86) in the move instructions.
Using such enums as struct fields, though, will result in the fields being aligned on the corresponding borders:
// Unsafe.SizeOf comes from System.Runtime.CompilerServices
Console.WriteLine(Unsafe.SizeOf<C>());  // prints 2
Console.WriteLine(Unsafe.SizeOf<C1>()); // prints 8
public enum Fruits : byte { Apple, Orange, Banana }
public enum Fruits1 { Apple, Orange, Banana }
public struct C {
public Fruits f1;
public Fruits f2;
}
public struct C1 {
public Fruits1 f1;
public Fruits1 f2;
}
The same will happen for arrays, which allocate a contiguous region of memory without aligning individual elements.
Useful reading:
StructLayoutAttribute
Blittable and Non-Blittable Types
Article about blittable types with a lot of links
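As a side note on StructLayoutAttribute, here is a minimal sketch of suppressing the padding discussed above with Pack = 1 (trading aligned access for size):

using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct Packed
{
    public byte A; // no padding bytes inserted after this field
    public int B;  // now starts at offset 1
}
// Unsafe.SizeOf<Packed>() == 5 instead of 8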
I need to extract some bit ranges from a 16-byte value, e.g.:
bit 0 = first thing
next 54 bits = second thing
next 52 bits = third thing
last 21 bits = fourth thing
.NET doesn't have a UInt128 structure. It does have the BigInteger class, but I'm not sure that's right for the job; maybe it is?
I have found a third-party library that can read bits from a stream, but when trying to convert them back to UInt64s using BitConverter, it fails: 54 bits doesn't fill a UInt64 but is too long for a UInt32.
My immediate thought was that bit shifting was the way to do this, but now I'm not so sure how to proceed, since I can't think of a good way of handling the original 16 bytes.
Any suggestions or comments would be appreciated.
Here's some untested code. I'm sure that there are bugs in it (whenever I write code like this, I get shifts, masks, etc. wrong). However, it should be enough to get you started. If you get this working and there are only a few problems, let me know in the comments and I'll fix things. If you can't get it to work, let me know as well, and I'll delete the answer. If it requires a major rewrite, post your working code as an answer and let me know.
The other thing to worry about with this (since you mentioned that this comes from a file) is endian-ness. Not all computer architectures represent values in the same way. I'll leave any byte swizzling (if needed) to you.
First, structs in C++ are basically the same as classes (though people think they are different). In C#, they are very different. A struct in C# is a value type. When you do a value-type assignment, the compiler copies the value of the struct, rather than just copying a reference to the object (as it does with classes). Value types have an implicit default constructor that initializes all members to their default (zero or null) values.
Marking the struct with [StructLayout(LayoutKind.Sequential)] tells the compiler to lay out the members in the specified order (the compiler doesn't have to, normally). This allows you to pass a reference to one of these (via P/Invoke) to a C program if you want to.
So, my struct starts off this way:
[StructLayout(LayoutKind.Sequential)]
public struct Struct128
{
//not using auto-properties with private setters on purpose.
//This should look like a single 128-bit value (in part, because of LayoutKind.Sequential)
private ulong _bottom64bits;
private ulong _top64bits;
}
Now I'm going to add members to that struct. Since you are getting the 128 bits from a file, don't try to read the data into a single 128-bit structure (if you can figure out how (look up serialization), you can, but...). Instead, read 64 bits at a time and use a constructor like this one:
public Struct128(ulong bottom64, ulong top64)
{
_top64bits = top64;
_bottom64bits = bottom64;
}
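For instance, a sketch of filling one of these from a file ('path' is a hypothetical file name; BinaryReader reads in little-endian order):

using var reader = new BinaryReader(File.OpenRead(path));
ulong bottom = reader.ReadUInt64(); // bytes 0..7
ulong top = reader.ReadUInt64();    // bytes 8..15
var value = new Struct128(bottom, top);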
If you need to write the data in one of these back into the file, go get it 64-bits at a time using read-only properties like this:
//read access to the raw storage
public ulong Top64 => _top64bits;
public ulong Bottom64 => _bottom64bits;
Now we need to get and set the various bit-ish values out of our structure. Getting (and setting) the first thing is easy:
public bool FirstThing
{
get => (_bottom64bits & 0x01) == 1;
set
{
//set or clear the 0 bit
if (value)
{
_bottom64bits |= 1ul;
}
else
{
_bottom64bits &= (~1ul);
}
}
}
Getting/setting the second and fourth things are very similar. In both cases, to get the value, you mask away all but the important bits and then shift the result. To set the value, you take the property value, shift it to the right place, zero out the bits in the appropriate (top or bottom) value stored in the structure and OR in the new bits (that you set up by shifting)
//bits 1 through 54
private const ulong SecondThingMask = 0b111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1110;
public ulong SecondThing
{
get => (_bottom64bits & SecondThingMask) >> 1;
set
{
var shifted = (value << 1) & SecondThingMask;
_bottom64bits = (_bottom64bits & (~SecondThingMask)) | shifted;
}
}
and
//top 21 bits
private const ulong FourthThingMask = 0b1111_1111_1111_1111_1111_1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000;
//to shift the top 21 bits down to the bottom 21 bits, need to shift 64-21
private const int FourthThingShift = 64 - 21;
public uint FourthThing
{
get => (uint)((_top64bits & FourthThingMask) >> FourthThingShift);
set
{
var shifted = ((ulong)value << FourthThingShift) & FourthThingMask;
_top64bits = (_top64bits & (~FourthThingMask)) | shifted;
}
}
It's the third thing that is tricky. To get the value, you need to mask the correct bits out of both the top and bottom values, shift them to the right positions and return the ORed result.
To set the value, you need to take the property value, split it into upper and lower portions and then do the same kind of magic ORing that was done for the second and fourth things:
//the third thing is the hard part.
//The bottom 55 bits of _bottom64bits are dedicated to the 1st and 2nd things, so the next 9 are the bottom 9 of the 3rd thing
//The other 52 - 9 (= 43) bits come from / go to _top64bits
//top 9 bits
private const ulong ThirdThingBottomMask = 0b1111_1111_1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000;
//bottom 43 bits
private const ulong ThirdThingTopMask = 0b111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111;
private const int ThirdThingBottomShift = 64 - 9;
//bottom 9 bits
private const ulong ThirdThingBottomSetMask = 0b1_1111_1111;
//all but the bottom 9 bits
private const ulong ThirdThingTopSetMask = 0b1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1110_0000_0000;
//52 bits total
private const ulong ThirdThingOverallMask = 0b1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111;
public ulong ThirdThing
{
get
{
var bottom = (_bottom64bits & ThirdThingBottomMask) >> ThirdThingBottomShift;
var top = (_top64bits & ThirdThingTopMask) << 9;
return top | bottom;
}
set
{
var masked = value & ThirdThingOverallMask;
var bottom = (masked & ThirdThingBottomSetMask) << ThirdThingBottomShift;
_bottom64bits = (_bottom64bits & (~ThirdThingBottomMask)) | bottom; // clear the top 9 bits, then OR in the new ones
var top = (masked & ThirdThingTopSetMask) >> 9;
_top64bits = (_top64bits & (~ThirdThingTopMask)) | top; // clear the bottom 43 bits, then OR in the new ones
}
}
I hope this is useful. Let me know.
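A quick round-trip sketch for exercising the properties (each value below is the maximum for its bit width):

var s = new Struct128(0, 0)
{
    FirstThing = true,
    SecondThing = 0x3F_FFFF_FFFF_FFFF, // 54-bit max
    ThirdThing = 0xF_FFFF_FFFF_FFFF,   // 52-bit max
    FourthThing = 0x1F_FFFF            // 21-bit max
};
Console.WriteLine(s.SecondThing == 0x3F_FFFF_FFFF_FFFF); // expect True
Console.WriteLine(s.ThirdThing == 0xF_FFFF_FFFF_FFFF);   // expect True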
Quite often when using hardware interfaces you'll have to set groups of bits or set them without changing the rest of the bits. The interface description says something like:
you get a System.UInt32; bit 0 is set if available; bits 1..7 are the minimum value; bits 8..14 are the maximum value; bits 15..17 are the threshold, etc.
I have to do this for a lot of values, each with their own start and stop bits.
That's why I'd like to create a class that can convert the values (start bit; stop bit; raw UINT32 value) into the value it represents, and back.
So something like:
class RawParameterInterpreter
{
public int StartBit {get; set;} // counting from 0..31
public int StopBit {get; set;} // counting from 0..31
UInt32 ExtractParameterValue(UInt32 rawValue);
UInt32 InsertParameterValueToRawValue(UInt32 parameterValue,
    UInt32 rawValue);
}
I understand the part with handling the bits:
// example bits 4..7:
extract parameter from raw value: (rawvalue & 0x000000F0) >> startbit;
insert parameter into raw: (parameter << startbit) | (rawValue & 0xFFFFFF0F)
The problem is, how to initialize the 0x000000F0 and 0xFFFFFF0F from values startBit and endBit? Is there a general method to calculate these values?
I would use something like this
UInt32 bitPattern = 0;
for (int bitNr = startBit; bitNr <= stopBit; ++bitNr)
{
bitPattern = (bitPattern << 1) | 1;
}
bitPattern = bitPattern << startBit;
I know the class System.Collections.BitArray. This would make it even easier to set the bits, but how do I convert the BitArray back to a UInt32?
So question: what is the best method for this?
Well, your question is very general, but you could use an enum with a Flags attribute.
[Flags]
public enum BitPattern
{
Start = 1,
Stop = 1 << 31
}
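To answer the general part directly (how to derive 0x000000F0 and 0xFFFFFF0F from the start/stop bits), a minimal sketch assuming inclusive bit positions 0..31:

static uint MakeMask(int startBit, int stopBit)
{
    int width = stopBit - startBit + 1;
    // handle width == 32 separately: a shift count of 32 wraps to 0 in C#
    uint ones = width == 32 ? uint.MaxValue : (1u << width) - 1;
    return ones << startBit; // MakeMask(4, 7) == 0x000000F0
}

static uint ExtractParameterValue(uint raw, int startBit, int stopBit) =>
    (raw & MakeMask(startBit, stopBit)) >> startBit;

static uint InsertParameterValueToRawValue(uint parameter, uint raw, int startBit, int stopBit)
{
    uint mask = MakeMask(startBit, stopBit); // ~mask plays the role of 0xFFFFFF0F
    return (raw & ~mask) | ((parameter << startBit) & mask);
}

As for converting a System.Collections.BitArray back to a UInt32: CopyTo into a one-element int[] does it (bits.CopyTo(buffer, 0); then (uint)buffer[0]).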
Currently I'm working on a solution for a prime-number calculator/checker. The algorithm is already working and very efficient (0.359 seconds for the first 9012330 primes). Here is a part of the upper region where everything is declared:
const uint anz = 50000000;
uint a = 3, b = 4, c = 3, d = 13, e = 12, f = 13, g = 28, h = 32;
bool[,] prim = new bool[8, anz / 10];
uint max = 3 * (uint)(anz / (Math.Log(anz) - 1.08366));
uint[] p = new uint[max];
Now I wanted to go to the next level and use ulongs instead of uints to cover a larger range (you can see that already), which is where I ran into my problem: the bool array.
As everybody should know, a bool takes up a whole byte, which wastes a lot of memory when creating the array... So I'm searching for a more resource-friendly way to do this.
My first idea was a bit array -> not a byte array! <- to store the bools, but I haven't figured out how to do that yet. So if someone has ever done something like this, I would appreciate any kind of tips and solutions. Thanks in advance :)
You can use BitArray collection:
http://msdn.microsoft.com/en-us/library/system.collections.bitarray(v=vs.110).aspx
MSDN Description:
Manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0).
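For the sieve use case, a minimal sketch with the built-in class (the 50,000,000 bound is borrowed from the question's anz constant):

using System.Collections;

const int n = 50_000_000;
var composite = new BitArray(n + 1); // one bit per candidate instead of one byte
for (int i = 2; (long)i * i <= n; i++)
    if (!composite[i])
        for (long j = (long)i * i; j <= n; j += i)
            composite[(int)j] = true; // mark every multiple of i as composite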
You can (and should) use well tested and well known libraries.
But if you're looking to learn something (as it seems to be the case) you can do it yourself.
Another reason you may want to use a custom bit array is to use the hard drive to store the array, which comes in handy when calculating primes. To do this you'd need to further split addr: for example, the lowest 3 bits select the bit mask, the next 28 bits address 256MB of in-memory storage, and the bits above that select the name of a buffer file.
Yet another reason for a custom bit array is to compress memory use when specifically searching for primes. After all, more than half of your bits will be 'false' because the numbers corresponding to them are even, so you can both speed up your calculation AND reduce memory requirements by not storing the even bits at all. You can do that by changing the way addr is interpreted. Furthermore, you can also exclude numbers divisible by 3 (only 2 out of every 6 numbers have a chance of being prime), thus reducing memory requirements by 60% compared to a plain bit array.
Notice the use of shift and logical operators to make the code a bit more efficient.
For example, byte mask = (byte)(1 << (int)(addr & 7)); can be written as
byte mask = (byte)(1 << (int)(addr % 8));
and addr >> 3 can be written as addr / 8.
Testing shift/logical operators vs division shows 2.6s vs 4.8s in favor of shift/logical for 200000000 operations.
Here's the code:
void Main()
{
var barr = new BitArray(10);
barr[4] = true;
Console.WriteLine("Is it "+barr[4]);
Console.WriteLine("Is it Not "+barr[5]);
}
public class BitArray{
private readonly byte[] _buffer;
public bool this[long addr]{
get{
byte mask = (byte)(1 << (int)(addr & 7));
byte val = _buffer[(int)(addr >> 3)];
bool bit = (val & mask) == mask;
return bit;
}
set{
    byte mask = (byte)(1 << (int)(addr & 7));
    int offs = (int)(addr >> 3);
    if (value)
        _buffer[offs] = (byte)(_buffer[offs] | mask);  // turn the bit on
    else
        _buffer[offs] = (byte)(_buffer[offs] & ~mask); // turn the bit off
}
}
public BitArray(long size){
_buffer = new byte[size/8 + 1]; // define a byte buffer sized to hold 8 bools per byte. The spare +1 is to avoid dealing with rounding.
}
}
How do I set each bit in the following byte array, which has 21 bytes or 168 bits, to either zero or one?
byte[] logonHours
Thank you very much
Well, to clear every bit to zero you can just use Array.Clear:
Array.Clear(logonHours, 0, logonHours.Length);
Setting each bit is slightly harder:
for (int i = 0; i < logonHours.Length; i++)
{
logonHours[i] = 0xff;
}
If you find yourself filling an array often, you could write an extension method:
public static void FillArray<T>(this T[] array, T value)
{
// TODO: Validation
for (int i = 0; i < array.Length; i++)
{
array[i] = value;
}
}
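Usage would then be a one-liner: logonHours.FillArray((byte)0xFF);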
BitArray.SetAll:
System.Collections.BitArray a = new System.Collections.BitArray(logonHours);
a.SetAll(true);
Note that this copies the data from the byte array. It's not just a wrapper around it.
This may be more than you need, but ...
Usually when dealing with individual bits in any data type, I define a const for each bit position, then use the binary operators |, &, and ^.
i.e.
const byte bit1 = 1;
const byte bit2 = 2;
const byte bit3 = 4;
const byte bit4 = 8;
.
.
const byte bit8 = 128;
Then you can turn whatever bits you want on and off using the bit operations.
byte byTest = 0;
byTest = byTest | bit4;
would turn bit 4 on but leave the rest untouched.
You would use & and ^ to turn them off or to do more complex manipulations.
Obviously, since you only want to turn all the bits on or off, you can just set the byte to 0 or 255. That would turn them all on or off.
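For completeness, a sketch of turning a bit off and toggling it (the cast is needed because the ~ operator promotes the byte to int):

byTest = (byte)(byTest & ~bit4); // turn bit 4 off, leave the rest untouched
byTest ^= bit4;                  // toggle bit 4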