I am using a Flags Enum to track the completion stages of a data migration process for each data record. I need a way to reset back to a specified stage where I can begin reprocessing the migration of a data record. How does one reset the higher bytes in a Flags Enum?
Example Enum:
[Flags]
public enum MigrationStages
{
    None = 0,
    Started = 1,
    MiddleStage = 2,
    WrappingUp = 4,
    Finished = 8
}
My current value:
var currentStage =
MigrationStages.None
| MigrationStages.Started
| MigrationStages.MiddleStage
| MigrationStages.WrappingUp
| MigrationStages.Finished;
I want to reset back to MigrationStages.MiddleStage to cause reprocessing to occur starting there.
Bitwise math is not something we use much anymore. As such, when I went searching for an answer to this I found nothing that helped so I worked it out. Sharing my math with the world in case others find it useful.
I created a simple helper method to do this, as follows:
public static MigrationStages ClearHigherFlags(MigrationStages orig, MigrationStages highBit)
{
    // Modulus strips highBit and everything above it; adding highBit back keeps that stage set.
    var lowerBits = (int)orig % (int)highBit;
    return highBit + lowerBits;
}
Usage example:
currentStage = ClearHigherFlags(currentStage, MigrationStages.MiddleStage);
Obviously, if you want to clear higher flags including the highBit, just don't add it back. To clear lower flags, return orig - lowerBits.
In bitwise math, modulus (%) is often your friend.
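To make the arithmetic concrete, here is a minimal usage sketch (assuming the MigrationStages enum and the ClearHigherFlags helper above):

// Minimal sketch, assuming the MigrationStages enum and ClearHigherFlags helper above.
var currentStage = MigrationStages.Started | MigrationStages.MiddleStage
                 | MigrationStages.WrappingUp | MigrationStages.Finished;   // 1|2|4|8 = 15

// lowerBits = 15 % 2 = 1 (everything below MiddleStage); add MiddleStage back: 2 + 1 = 3
currentStage = ClearHigherFlags(currentStage, MigrationStages.MiddleStage);

Console.WriteLine(currentStage);        // Started, MiddleStage
Console.WriteLine((int)currentStage);   // 3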
Addendum
There are those who will find this answer and think that it's not really bit math. I hope this assuages those folks.
First, recall that we are talking about flags here, a very specific subset of bit manipulation where modulus makes the math easier to read and is entirely appropriate. The equivalent purely bitwise version is something like what follows, which I find much less intuitive to read.
public static MigrationStages ClearHigherFlags(MigrationStages orig, MigrationStages highBit)
{
    // highBit is a single flag (a power of two), so highBit - 1 is a mask of every lower bit.
    var bitMask = (int)highBit - 1;
    var lowerBits = (int)orig & bitMask;
    return highBit + lowerBits;
}
It's really not too hard to read but the conversion to a bit mask is done implicitly in my original solution.
If you want to use bitwise manipulation you can do it this way:
var lowbits = MigrationStages.MiddleStage | MigrationStages.Started;
Then to clear the high bits in your example:
currentStage = currentStage & lowbits;
Maybe this will make more sense:
               8 4 2 1
              =========
lowbits        0 0 1 1
currentvalue   1 1 1 1
              =========
AND (&)        0 0 1 1

which clears the two high bits.
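The same thing as a short runnable sketch (again assuming the MigrationStages enum above):

// Sketch of the mask approach, assuming the MigrationStages enum above.
var currentStage = MigrationStages.Started | MigrationStages.MiddleStage
                 | MigrationStages.WrappingUp | MigrationStages.Finished;   // 0b1111

var lowbits = MigrationStages.MiddleStage | MigrationStages.Started;        // 0b0011

currentStage &= lowbits;                // 0b1111 & 0b0011 = 0b0011

Console.WriteLine(currentStage);        // Started, MiddleStage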
Related
Context
Let's say that I have a system model comprising 8 Boolean variables. Together they form a byte that can express the 256 state permutations of my system. Let this byte be stateByte, where each bit is one of my variables.
Now, suppose I have some enumerable states, such as:
public enum States
{
    READY     = 0b_0000_0001,
    OPERATING = 0b_0100_0000,
    FAULT     = 0b_1000_0000
}
If each of the States were determined by the entire byte, I could simply write States currentState = (States)stateByte. However, my problem is:
Each of my states depends only on a subset of specific bits, not the entire byte; some bits are irrelevant depending on the state. In pseudo-notation, I have the scenario below, where x marks an irrelevant bit:
public enum States
{
    READY     = 0b_0000_0001, // Exactly this permutation
    OPERATING = 0b_0100_0000, // Exactly this permutation
    FAULT     = 0b_1xxx_xxxx  // Only bit 7 need be high to determine a fault
}
Question
How can I use logical, bitwise operators (masking) in order to enumerate states from only relevant bits?
Further Context
For those sticklers for detail who would question why I am trying to do this or why I cannot simply use thresholds, please see below the full state table of the hardware I am integrating:
If the flags solution is valid then it would be done like this:
[Flags]
public enum States
{
READY = 0b_0000_0001,
OPERATING = 0b_0100_0000,
FAULT = 0b_1000_0000
}
static void Main(string[] args)
{
    var s = (States)5;
    // Does s match exactly FAULT | OPERATING? (false here: 5 is READY plus an undefined bit)
    var check = s == (States.FAULT | States.OPERATING);
}
You could use the binary AND operator (&) to mask values so that only certain bits are kept:
0b_1xxx_xxxx & 0b_1000_0000 = 0b_1000_0000
0b_1xxx_xxxx & (1 << 7) = 0b_1000_0000
0b_1xxx_xxxx & States.Fault = 0b_1000_0000
If you want to access certain bits often you could write an extension method like this:
public static bool GetBit(this byte bitmask, int index) =>
    ((bitmask >> index) & 1) != 0;
0b_1xxx_xxxx.GetBit(7) = true
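A concrete usage sketch (the 0b_1xxx_xxxx value above is pseudo-notation; here a real byte stands in for it):

// Usage sketch for the GetBit extension method above.
byte status = 0b_1010_0001;

bool bit7 = status.GetBit(7);   // true  - bit 7 is set
bool bit1 = status.GetBit(1);   // false - bit 1 is clear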
If you want to check multiple bits at once, you can use a pattern that matches all bits you want to check and compare them with another pattern containing all "correct" bits and 0s everywhere else:
0b_x0xx_1000
& 0b_0100_1111 // Only look at bits 0-3 and 6
== 0b_0000_1000 // Check that bit 6 is 0, 3 is 1 and 0-2 are 0
// Other bits are 0 due to the logical and
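Putting the masking together for the whole States example, here is a sketch (GetState is a hypothetical helper name, not something from the question):

// Sketch: decode a state from only the relevant bits, assuming the [Flags]
// States enum above. GetState is a hypothetical helper for illustration.
static States GetState(byte stateByte)
{
    // FAULT only needs bit 7 to be high, so mask and test that bit alone.
    if ((stateByte & (byte)States.FAULT) != 0)
        return States.FAULT;

    // READY and OPERATING require an exact bit pattern, so compare the whole byte.
    if (stateByte == (byte)States.READY)
        return States.READY;
    if (stateByte == (byte)States.OPERATING)
        return States.OPERATING;

    throw new ArgumentException("Unrecognised state byte: " + Convert.ToString(stateByte, 2));
}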
I have a 3-element enum that defines one of three contexts, for example red, green, or blue. This enum is used in a loop with millions of iterations, for example one per pixel. The fields are currently one int apart, the default. Given a desired production order of R,G,B,R,G,B..., I have currently resorted to checking whether the value is B, in which case I assign it to R, otherwise incrementing the value.
private enum CHANNEL_CONTEXT {RED, GREEN, BLUE} //here is a sample enum
//here is a sample loop with the relevant construct
CHANNEL_CONTEXT current = CHANNEL_CONTEXT.RED;
while (condition)
{
    // use current ...
    // ...
    if (current == CHANNEL_CONTEXT.BLUE)
        current = CHANNEL_CONTEXT.RED;
    else
        current += 1;
}
Is there a way to wrap a 3-field enum with a single operation, such that no branch is required to determine when to wrap? I know modulus (%) fits the bill, but my motivation is performance, and I'd break even at best with such an expensive operation (testing corroborated this, though not exhaustively).
To put my agenda in perspective, if I had 256 relevant fields, I could create a byte-based enum and increment with impunity and intended overflow. Alas, I only have three, and I can't think of a way to manipulate any integral primitive so that three values are produced cyclically, using a lightweight ALU operation (+, -, &, ^, |, <<, etc.). I also wouldn't have been able to think of a way to swap bits with no temporary using such operations, but there is a rarely practical yet possible way to do so.
Can someone guide me to a way to distribute 3 integral enum values such that they are traversable periodically, with no branch required and no division-based operators (like modulus) used?
While it sounds very unlikely that you can beat x = (x + 1) % 3, you can try a mapping table:
var map = new[]{1,2,0};
x = map[x];
You would probably need to wrap that in unsafe code to remove the bounds checks on the array access.
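A self-contained sketch of the lookup approach (assuming the CHANNEL_CONTEXT enum from the question, with underlying values 0, 1, 2):

// Sketch: branch-free cycling via a lookup table, assuming the
// CHANNEL_CONTEXT enum from the question (RED = 0, GREEN = 1, BLUE = 2).
int[] map = { 1, 2, 0 };                               // RED -> GREEN -> BLUE -> RED

var current = CHANNEL_CONTEXT.RED;
for (int i = 0; i < 9; i++)
{
    Console.Write(current + " ");                      // RED GREEN BLUE RED GREEN BLUE ...
    current = (CHANNEL_CONTEXT)map[(int)current];      // no branch, no modulus
}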
If you are really set on bit manipulation, irrespective of the readability of the code, the table of conversions you are interested in is small enough to build manually for each bit and then combine.
Truth table:
Source        Result
Bit2  Bit1    Bit2  Bit1
 0     0       0     1
 0     1       1     0
 1     0       0     0
 1     1       x     x
As you can see, the values we are interested in produce only two non-zero result bits, so the resulting expression is very simple: one case for the lower bit and one for the higher bit (assuming the value never falls outside the range 0-2, which is safe if this is the only transformation applied).
var b1 = (x & 1) >> 0; // extract bit 0 (the lower bit)
var b2 = (x & 2) >> 1; // extract bit 1 (the higher bit)
// only the input pairs that produce a 1 in the result matter
var resultBit1 = 1 & (~b1 & ~b2);       // 00 -> x1, every other case gives 0
var resultBit2 = (1 & (b1 & ~b2)) << 1; // 01 -> 1x, every other case gives 0
x = resultBit1 | resultBit2;
Or, inlining it all into one unreadable line:
x = 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;
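A quick sketch to confirm the inlined expression really cycles 0 → 1 → 2 → 0:

// Confirm the bit-twiddled expression cycles 0 -> 1 -> 2 -> 0.
int x = 0;
for (int i = 0; i < 6; i++)
{
    Console.Write(x + " ");                                    // prints: 0 1 2 0 1 2
    x = 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;
}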
Why are people always using enum values like 0, 1, 2, 4, 8 and not 0, 1, 2, 3, 4?
Does this have something to do with bit operations?
I would really appreciate a small sample snippet on how this is used correctly :)
[Flags]
public enum Permissions
{
None = 0,
Read = 1,
Write = 2,
Delete = 4
}
Because they are powers of two and I can do this:
var permissions = Permissions.Read | Permissions.Write;
And perhaps later...
if( (permissions & Permissions.Write) == Permissions.Write )
{
// we have write access
}
It is a bit field, where each set bit corresponds to some permission (or whatever the enumerated value logically corresponds to). If these were defined as 1, 2, 3, ... you would not be able to use bitwise operators in this fashion and get meaningful results. To delve deeper...
Permissions.Read == 1 == 00000001
Permissions.Write == 2 == 00000010
Permissions.Delete == 4 == 00000100
Notice a pattern here? Now if we take my original example, i.e.,
var permissions = Permissions.Read | Permissions.Write;
Then...
permissions == 00000011
See? Both the Read and Write bits are set, and I can check that independently (Also notice that the Delete bit is not set and therefore this value does not convey permission to delete).
It allows one to store multiple flags in a single field of bits.
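For example, a single integer value can carry the whole permission set and be tested or updated with bitwise operators (a sketch using the Permissions enum above):

// Sketch: one value stores several flags at once, assuming the Permissions enum above.
var permissions = Permissions.Read | Permissions.Write;   // single stored value: 3

bool canWrite  = (permissions & Permissions.Write)  == Permissions.Write;   // true
bool canDelete = (permissions & Permissions.Delete) == Permissions.Delete;  // false

permissions |= Permissions.Delete;   // grant Delete  -> value is now 7
permissions &= ~Permissions.Write;   // revoke Write  -> value is now 5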
If it is still not clear from the other answers, think about it like this:
[Flags]
public enum Permissions
{
None = 0,
Read = 1,
Write = 2,
Delete = 4
}
is just a shorter way to write:
public enum Permissions
{
DeleteNoWriteNoReadNo = 0, // None
DeleteNoWriteNoReadYes = 1, // Read
DeleteNoWriteYesReadNo = 2, // Write
DeleteNoWriteYesReadYes = 3, // Read + Write
DeleteYesWriteNoReadNo = 4, // Delete
DeleteYesWriteNoReadYes = 5, // Read + Delete
DeleteYesWriteYesReadNo = 6, // Write + Delete
DeleteYesWriteYesReadYes = 7, // Read + Write + Delete
}
There are eight possibilities but you can represent them as combinations of only four members. If there were sixteen possibilities then you could represent them as combinations of only five members. If there were four billion possibilities then you could represent them as combinations of only 33 members! It is obviously far better to have only 33 members, each (except zero) a power of two, than to try to name four billion items in an enum.
Because these values represent unique bit locations in binary:
1 == binary 00000001
2 == binary 00000010
4 == binary 00000100
etc., so
1 | 2 == binary 00000011
EDIT:
3 == binary 00000011
3 in binary is represented by a value of 1 in both the ones place and the twos place. It is actually the same as the value 1 | 2. So when you are trying to use the binary places as flags to represent some state, 3 isn't usually meaningful (unless there is a logical value that actually is the combination of the two).
For further clarification, you might want to extend your example enum as follows:
[Flags]
public enum Permissions
{
    None   = 0, // Binary 0000000
    Read   = 1, // Binary 0000001
    Write  = 2, // Binary 0000010
    Delete = 4, // Binary 0000100
    All    = 7  // Binary 0000111
}
Therefore, if I have Permissions.All, I also implicitly have Permissions.Read, Permissions.Write, and Permissions.Delete.
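A short sketch showing that check (assuming the Permissions enum with All above):

// Sketch: All (7) contains each individual flag, assuming the enum above.
var p = Permissions.All;

Console.WriteLine((p & Permissions.Read)  == Permissions.Read);    // True
Console.WriteLine((p & Permissions.Write) == Permissions.Write);   // True
Console.WriteLine(p.HasFlag(Permissions.Delete));                  // True (same check via HasFlag)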
[Flags]
public enum Permissions
{
    None   = 0,      // 0000000
    Read   = 1,      // 0000001
    Write  = 1 << 1, // 0000010
    Delete = 1 << 2, // 0000100
    Blah1  = 1 << 3, // 0001000
    Blah2  = 1 << 4  // 0010000
}
I think writing the values with the left-shift operator << is easier to understand and read, and you don't need to calculate them yourself.
These are used to represent bit flags, which allow combinations of enum values. I think it's clearer if you write the values in hex notation:
[Flags]
public enum Permissions
{
    None   = 0x00,
    Read   = 0x01,
    Write  = 0x02,
    Delete = 0x04,
    Blah1  = 0x08,
    Blah2  = 0x10
}
This is really more of a comment, but since that wouldn't support formatting, I just wanted to include a method I've employed for setting up flag enumerations:
[Flags]
public enum FlagTest
{
None = 0,
Read = 1,
Write = Read * 2,
Delete = Write * 2,
ReadWrite = Read|Write
}
I find this approach especially helpful during development in the case where you like to maintain your flags in alphabetical order. If you determine you need to add a new flag value, you can just insert it alphabetically and the only value you have to change is the one it now precedes.
Note, however, that once a solution is published to any production system (especially if the enum is exposed without tight coupling, such as over a web service), it is highly inadvisable to change any existing value within the enum.
Lots of good answers to this one… I'll just say: if you do not like, or cannot easily grasp, what the << syntax is trying to express, I personally prefer an alternative (and, dare I say, straightforward) enum declaration style…
typedef NS_OPTIONS(NSUInteger, Align) {
AlignLeft = 00000001,
AlignRight = 00000010,
AlignTop = 00000100,
AlignBottom = 00001000,
AlignTopLeft = 00000101,
AlignTopRight = 00000110,
AlignBottomLeft = 00001001,
AlignBottomRight = 00001010
};
NSLog(@"%ld == %ld", AlignLeft | AlignBottom, AlignBottomLeft);
LOG 513 == 513
So much easier (for me, at least) to comprehend. Line up the ones, describe the result you desire, and get the result you want. No "calculations" necessary.
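The example above is Objective-C; in C# the same "line up the ones" style can be written literally with binary literals (a sketch with a hypothetical Align enum):

// C# sketch of the same idea using binary literals (hypothetical Align flags).
[Flags]
public enum Align
{
    Left        = 0b0001,
    Right       = 0b0010,
    Top         = 0b0100,
    Bottom      = 0b1000,
    TopLeft     = 0b0101,   // Top | Left
    TopRight    = 0b0110,   // Top | Right
    BottomLeft  = 0b1001,   // Bottom | Left
    BottomRight = 0b1010    // Bottom | Right
}

// (Align.Left | Align.Bottom) == Align.BottomLeft  ->  true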
How do I use logical operators to determine if a bit is set, or is bit-shifting the only way?
I found this question that uses bit shifting, but I would think I can just AND out my value.
For some context, I'm reading a value from Active Directory and trying to determine if it is a Schema Base Object. I think my problem is a syntax issue, but I'm not sure how to correct it.
foreach (DirectoryEntry schemaObjectToTest in objSchema.Children)
{
var resFlag = schemaObjectToTest.Properties["systemFlags"].Value;
//if bit 10 is set then can't be made confidential.
if (resFlag != null)
{
byte original = Convert.ToByte( resFlag );
byte isFlag_Schema_Base_Object = Convert.ToByte( 2);
var result = original & isFlag_Schema_Base_Object;
if ((result) > 0)
{
//A non zero result indicates that the bit was found
}
}
}
When I look at the debugger:
resFlag is an object{int} and the value is 0x00000010.
isFlag_Schema_Base_Object, is 0x02
resFlag is 0x00000010 which is 16 in decimal, or 10000 in binary. So it seems like you want to test bit 4 (with bit 0 being the least significant bit), despite your comment saying "if bit 10 is set".
If you do need to test bit 4, then isFlag_Schema_Base_Object needs to be initialised to 16, which is 0x10.
Anyway, you are right - you don't need to do bit shifting to see if a bit is set, you can AND the value with a constant that has just that bit set, and see if the result is non-zero.
If the bit is set:

    original                     xxx1xxxx
AND isFlag_Schema_Base_Object    00010000
    -------------------------------------
    =                            00010000  (non-zero)

But if the bit isn't set:

    original                     xxx0xxxx
AND isFlag_Schema_Base_Object    00010000
    -------------------------------------
    =                            00000000  (zero)
Having said that, it might be clearer to initialise isFlag_Schema_Base_Object using the value 1<<4, to make it clear that you're testing whether bit 4 is set.
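A minimal sketch of that suggestion (FLAG_SCHEMA_BASE_OBJECT is a hypothetical constant name; resFlag is the value read in the question's loop):

// Sketch: test bit 4 with an explicit shift so the intent is obvious.
const int FLAG_SCHEMA_BASE_OBJECT = 1 << 4;                 // 0x10, i.e. bit 4

int flags = Convert.ToInt32(resFlag);                        // resFlag comes from the loop above
bool isSchemaBaseObject = (flags & FLAG_SCHEMA_BASE_OBJECT) != 0;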
If you know which bit to check and you're dealing with int's you can use BitVector32.
int yourValue = 5;
BitVector32 bv = new BitVector32(yourValue);
int bitPositionToCheck = 3;
// Chain CreateMask calls: CreateMask() is bit 0, and each further call moves one bit to the left.
int mask = Enumerable.Range(0, bitPositionToCheck)
                     .Aggregate(BitVector32.CreateMask(), (m, _) => BitVector32.CreateMask(m));
bool isSet = bv[mask];   // false here: bit 3 of 5 (0b101) is not set
Using bitshifting is probably cleaner than using CreateMask. But it's there :)
I have some old code like this:
private int ParseByte(byte theByte)
{
byte[] bytes = new byte[1];
bytes[0] = theByte;
BitArray bits = new BitArray(bytes);
if (bits[0])
return 1;
else
return 0;
}
It's long and I figured I could trim it down like this:
private int ParseByte(byte theByte)
{
return theByte >> 7;
}
But, I'm not getting the same values as the first function. The byte either contains 00000000 or 10000000. What am I missing here? Am I using an incorrect operator?
The problem is that, in the first function, bits[0] returns the least significant bit, but the second function is returning the most significant bit. To modify the second function to get the least significant bit:
private int ParseByte(byte theByte)
{
    return theByte & 0b0000_0001;   // mask off everything except the least significant bit
}
To modify the first function to return the most significant bit, you should use bits[7] -- not bits[0].
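Side by side, the two extractions look like this (a small sketch):

// Sketch: least significant bit vs. most significant bit of a byte.
byte theByte = 0b1000_0000;

int lsb = theByte & 1;          // 0 - what bits[0] / the modified function reads
int msb = (theByte >> 7) & 1;   // 1 - what theByte >> 7 reads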
The equivalent of the first snippet is:
return theByte & 1;
In the second snippet you were checking the most significant bit, while the first snippet checks the least significant one.
Do you want to return an int or a string? Anyway, you can use modulo:
return theByte % 2 == 0 ? "0" : "1"
OK, you edited ... and want to return an int.
A word about your shifting operation: to read the least significant bit this way you would have to use << instead of >>. But that returns (when you cast the result to byte) 0 or 128, not 0 or 1. So you could rewrite your second solution as:
return (byte)(theByte << 7) == 128 ? 1 : 0;
But the other answers contain better solutions than this.
Perhaps the first function should check for bits[7] ?
You have an extra zero in your binary numbers (you have 9 digits in each). I'm assuming that's just a typo.
Are you sure you're doing your ordering correctly? Binary is traditionally written right-to-left, not left-to-right like most other numbering systems. If the binary number you showed is properly formatted (meaning that 10000000 is really the number 128 and not the number 1), then your first code snippet shouldn't work and the second should. If you're writing it backwards (meaning 10000000 is 1, not 128), then you don't even need to bit-shift. Just AND it with 1 (theByte & 1).
In fact, regardless of the approach a bitwise AND (the & operator) seems more appropriate. Given that your first function works and the second does not, I'm assuming you just wrote the number backwards and need to AND it with 1 as described above.
According to a user on Microsoft's site, BitArray internally stores the bits in Int32s with big-endian bit order. That could cause the problem. For a solution and further info you can visit the link.
First, the first function does not work as it tries to return a string instead of an int.
But what you might want is this:
private static int ParseByte(byte theByte)
{
return theByte & 1;
}
However you might also want this:
private static string ParseByteB(byte theByte)
{
return (theByte & 1).ToString();
}