I ran into a bit of code similar to the code below, and I was curious if someone could help me understand what it's doing:
int flag = 5;
Console.WriteLine(0x0E & flag);
// 5 returns 4, 6 returns 6, 7 returns 6, 8 returns 8
Sandbox:
https://dotnetfiddle.net/NnLyvJ
This is the bitwise AND operator.
It performs an AND operation on the bits of a number.
A logical AND on two boolean values returns True if both values are True, and False otherwise.
A bitwise AND on two numbers returns a number whose bits are 1 only in the positions where both input numbers have a 1.
Example:
5 = 101
4 = 100
AND = 100 = 4
Therefore, 5 & 4 = 4.
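Applying the same breakdown to the snippet in the question (a quick sketch; 0x0E is binary 1110, so the AND clears bit 0 and keeps bits 1 to 3):

for (int flag = 5; flag <= 8; flag++)
    Console.WriteLine($"{flag} & 0x0E = {flag & 0x0E}");
// prints: 5 & 0x0E = 4, 6 & 0x0E = 6, 7 & 0x0E = 6, 8 & 0x0E = 8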
This logic is heavily used for storing flags: assign each flag a power of two (1, 2, 4, 8, etc.) so that each flag occupies a different bit of the flags number. Then flags & FLAG_VALUE returns FLAG_VALUE if the flag is set, and 0 otherwise.
C# provides a "cleaner" way to do this using enums and the Flags attribute.
[Flags]
public enum MyFlags
{
    Flag0 = 1 << 0, // using the bitwise shift operator to make it more readable
    Flag1 = 1 << 1,
    Flag2 = 1 << 2,
    Flag3 = 1 << 3,
}

void a()
{
    var flags = MyFlags.Flag0 | MyFlags.Flag1 | MyFlags.Flag3;
    Console.WriteLine(Convert.ToString((int)flags, 2)); // prints the binary representation of flags, "1011" (11 in base 10)
    Console.WriteLine(flags); // because the enum has the Flags attribute, this prints "Flag0, Flag1, Flag3" instead of treating 11 as an unnamed value and printing "11"
    Console.WriteLine(flags.HasFlag(MyFlags.Flag1)); // Enum.HasFlag is syntactic sugar for "(flags & MyFlags.Flag1) == MyFlags.Flag1"
}
Excuse my bad English.
Ok so I am new to C#, and for the life of me I cannot comprehend what exactly the below code (from a legacy project) is supposed to do:
[Flags]
public enum EAccountStatus
{
    None = 0,
    FreeServiceApproved = 1 << 0,
    GovernmentAccount = 1 << 1,
    PrivateOrganisationAccount = 1 << 2,
    All = 8
}
What exactly does the << operator do here on the enums? Why do we need this?
Behind the scenes, the enumeration is actually an int.
<< is the bitwise left-shift operator.
An equivalent way of writing this code is:
[Flags]
public enum EAccountStatus
{
    None = 0,
    FreeServiceApproved = 1,
    GovernmentAccount = 2,
    PrivateOrganisationAccount = 4,
    All = 8
}
Please note that this enumeration has the Flags attribute.
As stated on MSDN:
Use the FlagsAttribute custom attribute for an enumeration only if a
bitwise operation (AND, OR, EXCLUSIVE OR) is to be performed on a
numeric value.
This way, if you want to have multiple options set you can use:
var combined = EAccountStatus.FreeServiceApproved | EAccountStatus.GovernmentAccount
which is equivalent to:
00000001 // =1 - FreeServiceApproved
| 00000010 // =2 - GovernmentAccount
---------
00000011 //= 3 - FreeServiceApproved and GovernmentAccount
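To test for an individual option afterwards, you AND the combined value with the flag you care about (a small sketch using the enum above; the boolean variable names are mine):

var combined = EAccountStatus.FreeServiceApproved | EAccountStatus.GovernmentAccount;
bool isGovernment = (combined & EAccountStatus.GovernmentAccount) != 0; // true: bit 1 is set
bool isPrivate = (combined & EAccountStatus.PrivateOrganisationAccount) != 0; // false: bit 2 is not set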
This SO thread has a rather good explanation of the Flags attribute.
<< simply does what it says: a shift-left operation.
As for why it is used in an enum: enum member initializers allow constant expressions, which are evaluated at compile time, so 1 << n is just a more readable way of writing the n-th power of two.
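For instance (my illustration, not from the original answer; each shift doubles the value):

Console.WriteLine(1 << 0); // 1
Console.WriteLine(1 << 1); // 2
Console.WriteLine(1 << 2); // 4
Console.WriteLine(1 << 3); // 8, so 1 << n is simply 2 to the power n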
I have the enum:
[Flags]
enum Editions
{
    Educational,
    Basic,
    Pro,
    Ultra
}
why do I get this behavior?
var x = Editions.Basic;
var y = Editions.Educational;
var test = x.HasFlag(y); // why is this true!?
// and!!!
var test2 = y.HasFlag(x); // this is false!
When using the [Flags] attribute you should explicitly map the enum values to integers with non-overlapping bit patterns. That is, each enum value should be mapped to a power of two:
[Flags]
enum Editions
{
    Educational = 1,
    Basic = 2,
    Pro = 4,
    Ultra = 8
}
Without the explicit numbering, Educational will be mapped to 0 and Basic to 1.
Enum.HasFlag checks whether all bits that are set in the parameter are also set in the tested enum value. In your case, x is 1 and y is 0. That means x contains all of the bits set in 0 (that is, no bits at all), so x.HasFlag(y) is true. However, 0 does not contain the bits set in 1 when testing the other way around, so y.HasFlag(x) is false.
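You can reproduce the behaviour with the equivalent bitwise test (a sketch assuming the original, unnumbered enum, where Educational is 0 and Basic is 1):

var x = Editions.Basic;       // bit pattern 0001
var y = Editions.Educational; // bit pattern 0000
Console.WriteLine((x & y) == y); // True: (1 & 0) == 0, every value "contains" the empty flag
Console.WriteLine((y & x) == x); // False: (0 & 1) != 1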
It returns true because you implemented Editions incorrectly. You must explicitly set the values when you use the [Flags] attribute.
[Flags]
enum Editions
{                         // binary format
    Educational = 1 << 0, // 0001
    Basic = 1 << 1,       // 0010
    Pro = 1 << 2,         // 0100
    Ultra = 1 << 3        // 1000
}
If you do not assign numbers, the compiler auto-assigns them starting at 0 and going up by 1 for each option after that (this is standard enum behavior; adding the [Flags] attribute does not change it):
[Flags]
enum Editions
{                    // binary format
    Educational = 0, // 0000
    Basic = 1,       // 0001
    Pro = 2,         // 0010
    Ultra = 3        // 0011
}
So the two tests you did were: does 0001 have the flag 0000 set (trivially true, since you are not testing for any flags), and does 0000 have the flag 0001 set (definitely false).
However, looking at your names, I doubt you should be using [Flags] at all, because whatever you are modeling probably cannot be Basic and Pro at the same time. If you are only using flags to make testing enum values easier, just use a switch statement instead:
Editions edition = GetEdition();
switch (edition)
{
    case Editions.Basic:
        DoSomethingSpecialForBasic();
        break;
    case Editions.Pro:
    case Editions.Ultra:
        DoSomethingSpecialForProAndUltra();
        break;
    // Does nothing if Editions.Educational
}
I was wondering how the following enum masking works.
If I have an enum structure:
public enum DelMask
{
    pass = 1,
    fail = 2,
    abandoned = 4,
    distinction = 8,
    merit = 16,
    defer = 32,
}
I have seen the following code:
int pass = 48;
if ((pass & (int)DelMask.defer) > 0)
//Do something
else if ((pass & (int)DelMask.merit ) > 0)
//Do something else
I am wondering: can anyone help me figure out which block will get executed?
Basic bit logic at work here. The integer 48 looks like this in binary:
0011 0000
Defer, 32, is:
0010 0000
Merit, 16, is:
0001 0000
Now when you perform a bitwise AND (&), the bits of the result are set only where they are set in both inputs:
pass & (int)DelMask.defer
0011 0000
0010 0000
========= &
0010 0000
The result will be 32, so ((pass & (int)DelMask.defer) > 0) will evaluate to true. Both conditions would evaluate to true in your example, because both flags are present in the input. The second one won't be evaluated, though, because it's an else if.
Both conditions are true, so the first block will get executed.
16 is 10000
32 is 100000
48 is 16+32 so it is 110000
10000 & 110000 is 10000
100000 & 110000 is 100000
Both are bigger than zero.
48 = 16 (merit) + 32 (defer).
Thus pass & (int)DelMask.defer evaluates to 32, so the first block runs.
If that wasn't the case, pass & (int)DelMask.merit evaluates to 16, so the second block would run if it got that far.
This only works because the values in the enum are all different powers of 2 and thus correspond to independent bits in the underlying int. This is what is known as a bit flags enum.
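You can verify this directly (a quick check using the question's enum and value):

int pass = 48;
Console.WriteLine(pass & (int)DelMask.defer); // 32, so the first branch runs
Console.WriteLine(pass & (int)DelMask.merit); // 16, also > 0, but the else-if is never reached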
First, it should be int pass = 48;
Basically this code checks whether a bit is set in the binary representation of the number. Each & operation produces a result whose bits are all zero except, possibly, a one in the position selected by the mask. For instance:
48: 110000
defer = 32: 100000
______
& 100000
So you can use this code:
int pass = 48;
if ((pass & (int)DelMask.defer) == (int)DelMask.defer)
//Do something
else if ((pass & (int)DelMask.merit ) == (int)DelMask.merit)
//Do something else
Well, you need to think of those numbers as binary. I'll use the d suffix for decimal notation and the b suffix for binary notation.
enum values:
01d = 000001b
02d = 000010b
04d = 000100b
08d = 001000b
16d = 010000b
32d = 100000b
pass value:
48d = 110000b
Now the & is the bit-wise AND operator. Which means that if c = a&b, the nth bit in c will be 1 if and only if the nth bit is 1 in both a and b.
So:
16d & 48d = 010000b = 16d > 0
32d & 48d = 100000b = 32d > 0
As you can see, your number 48d "matches" both 16d and 32d. That is why this kind of enum is generally described as a "flag" enum: a single integer can carry the values of several "flags".
As for your code, the first if condition is satisfied, which means you will enter it and "Do something". You will not "Do something else".
Generally in C#, we use the [Flags] attribute for flag enums (note that the attribute doesn't assign power-of-two values for you; it mainly makes the enum's string representation list the set flags). As usual, the example in the MSDN is useless, so I'll refer to this SO question for more details about how to use it (note that to know whether a value x has a flag f set, you can do either x & f == f or x | f == x; the & form is the more common idiom).
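For completeness, here is what the two idioms look like with the question's enum (my sketch; both tests are equivalent for a value that has the flag):

var x = DelMask.merit | DelMask.defer; // 48
var f = DelMask.defer;
Console.WriteLine((x & f) == f); // True
Console.WriteLine((x | f) == x); // True
Console.WriteLine(x.HasFlag(f)); // True: the built-in equivalent of the & test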
Why are people always using enum values like 0, 1, 2, 4, 8 and not 0, 1, 2, 3, 4?
Has this something to do with bit operations, etc.?
I would really appreciate a small sample snippet on how this is used correctly :)
[Flags]
public enum Permissions
{
    None = 0,
    Read = 1,
    Write = 2,
    Delete = 4
}
Because they are powers of two and I can do this:
var permissions = Permissions.Read | Permissions.Write;
And perhaps later...
if ((permissions & Permissions.Write) == Permissions.Write)
{
    // we have write access
}
It is a bit field, where each set bit corresponds to some permission (or whatever the enumerated value logically corresponds to). If these were defined as 1, 2, 3, ... you would not be able to use bitwise operators in this fashion and get meaningful results. To delve deeper...
Permissions.Read == 1 == 00000001
Permissions.Write == 2 == 00000010
Permissions.Delete == 4 == 00000100
Notice a pattern here? Now if we take my original example, i.e.,
var permissions = Permissions.Read | Permissions.Write;
Then...
permissions == 00000011
See? Both the Read and Write bits are set, and I can check that independently (Also notice that the Delete bit is not set and therefore this value does not convey permission to delete).
It allows one to store multiple flags in a single field of bits.
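The usual operations on such a bit field are set, clear, and test (a minimal sketch using the Permissions enum above):

var p = Permissions.Read | Permissions.Write; // set two flags
p |= Permissions.Delete;                      // set another flag
p &= ~Permissions.Write;                      // clear a flag
bool canRead = (p & Permissions.Read) == Permissions.Read; // test a flag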
If it is still not clear from the other answers, think about it like this:
[Flags]
public enum Permissions
{
    None = 0,
    Read = 1,
    Write = 2,
    Delete = 4
}
is just a shorter way to write:
public enum Permissions
{
    DeleteNoWriteNoReadNo = 0,   // None
    DeleteNoWriteNoReadYes = 1,  // Read
    DeleteNoWriteYesReadNo = 2,  // Write
    DeleteNoWriteYesReadYes = 3, // Read + Write
    DeleteYesWriteNoReadNo = 4,  // Delete
    DeleteYesWriteNoReadYes = 5, // Read + Delete
    DeleteYesWriteYesReadNo = 6, // Write + Delete
    DeleteYesWriteYesReadYes = 7 // Read + Write + Delete
}
There are eight possibilities but you can represent them as combinations of only four members. If there were sixteen possibilities then you could represent them as combinations of only five members. If there were four billion possibilities then you could represent them as combinations of only 33 members! It is obviously far better to have only 33 members, each (except zero) a power of two, than to try to name four billion items in an enum.
Because these values represent unique bit locations in binary:
1 == binary 00000001
2 == binary 00000010
4 == binary 00000100
etc., so
1 | 2 == binary 00000011
EDIT:
3 == binary 00000011
3 in binary has a 1 in both the ones place and the twos place; it is actually the same value as 1 | 2. So when you are using the binary places as flags to represent state, 3 isn't usually meaningful on its own (unless there is a logical value that actually is the combination of the two).
For further clarification, you might want to extend your example enum as follows:
[Flags]
public enum Permissions
{
    None = 0,   // Binary 0000000
    Read = 1,   // Binary 0000001
    Write = 2,  // Binary 0000010
    Delete = 4, // Binary 0000100
    All = 7     // Binary 0000111
}
Therefore, if I have Permissions.All, I also implicitly have Permissions.Read, Permissions.Write, and Permissions.Delete.
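For example (a small check, assuming the enum above):

var p = Permissions.All; // 0000111
Console.WriteLine(p.HasFlag(Permissions.Read));  // True: the Read bit is part of 7
Console.WriteLine((p & Permissions.Write) != 0); // True: the Write bit is set as well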
[Flags]
public enum Permissions
{
    None = 0,        // 0000000
    Read = 1,        // 0000001
    Write = 1 << 1,  // 0000010
    Delete = 1 << 2, // 0000100
    Blah1 = 1 << 3,  // 0001000
    Blah2 = 1 << 4   // 0010000
}
I think writing the values with the binary shift operator << is easier to understand and to read, and you don't need to calculate the powers of two yourself.
These are used to represent bit flags, which allows combining enum values. I think it's clearer if you write the values in hex notation:
[Flags]
public enum Permissions
{
    None = 0x00,
    Read = 0x01,
    Write = 0x02,
    Delete = 0x04,
    Blah1 = 0x08,
    Blah2 = 0x10
}
This is really more of a comment, but since that wouldn't support formatting, I just wanted to include a method I've employed for setting up flag enumerations:
[Flags]
public enum FlagTest
{
    None = 0,
    Read = 1,
    Write = Read * 2,
    Delete = Write * 2,
    ReadWrite = Read | Write
}
I find this approach especially helpful during development in the case where you like to maintain your flags in alphabetical order. If you determine you need to add a new flag value, you can just insert it alphabetically and the only value you have to change is the one it now precedes.
Note, however, that once a solution is published to any production system (especially if the enum is exposed without a tight coupling, such as over a web service), it is highly inadvisable to change any existing value within the enum.
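For instance, inserting a hypothetical Execute flag into an alphabetized version of the enum above only requires updating the member it now precedes (Read):

[Flags]
public enum FlagTest
{
    None = 0,
    Delete = 1,
    Execute = Delete * 2, // new flag, slotted in alphabetically
    Read = Execute * 2,   // the only existing line that had to change
    Write = Read * 2,
    ReadWrite = Read | Write
}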
Lots of good answers to this one… I'll just say: if you do not like, or cannot easily grasp, what the << syntax is trying to express, I personally prefer an alternative (and, dare I say, straightforward) enum declaration style…
typedef NS_OPTIONS(NSUInteger, Align) {
    AlignLeft        = 00000001,
    AlignRight       = 00000010,
    AlignTop         = 00000100,
    AlignBottom      = 00001000,
    AlignTopLeft     = 00000101,
    AlignTopRight    = 00000110,
    AlignBottomLeft  = 00001001,
    AlignBottomRight = 00001010
};

NSLog(@"%ld == %ld", AlignLeft | AlignBottom, AlignBottomLeft);
// logs: 513 == 513
So much easier (for myself, at least) to comprehend. Line up the ones… describe the result you desire, get the result you want. No "calculations" necessary. (One caveat: the leading zero makes these literals octal, not binary; the trick only works because every digit is a 0 or a 1, so each octal digit acts as an independent flag and the bitwise OR never carries across digits.)
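In C# (the language of the rest of this page), the same line-up-the-ones style is available directly through binary literals since C# 7.0, without the octal caveat (my sketch, not part of the original answer):

[Flags]
public enum Align
{
    Left       = 0b0000_0001,
    Right      = 0b0000_0010,
    Top        = 0b0000_0100,
    Bottom     = 0b0000_1000,
    TopLeft    = Top | Left,    // 0b0000_0101
    BottomLeft = Bottom | Left  // 0b0000_1001
}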
So I'm building an MSNP (Windows Live Messenger) client, and I've got this list of capabilities:
public enum UserCapabilities : long
{
    None = 0,
    MobileOnline = 1 << 0,
    MSN8User = 1 << 1,
    RendersGif = 1 << 2,
    ....
    MsgrVersion7 = 1 << 30,
    MsgrVersion8 = 1 << 31,
    MsgrVersion9 = 1 << 32,
}
full list here http://paste.pocoo.org/show/383240/
The server sends each user's capabilities to the client as a long integer, which I take and cast to UserCapabilities:
capabilities = Int64.Parse(e.Command.Args[3]);
user._capabilities = (UserCapabilities)capabilities;
This is fine, and with at least one user (with a capability value of 1879474220) I can do
Debug.WriteLine(_msgr.GetUser(usr).Capabilities);
and this will output
RendersGif, RendersIsf, SupportsChunking, IsBot, SupportsSChannel, SupportsSipInvite, MsgrVersion5, MsgrVersion6, MsgrVersion7
But with another user, who has a capability value of 3055849760, when I do the same I just get the same number output:
3055849760
What I would like to be seeing is a list of capabilities, as it is with the other user.
I'm sure there is a very valid reason for this happening, but no matter how hard I try to phrase the question to Google, I am not finding an answer.
Please help me :)
The definition of the shift operators means that only the 5 least significant bits of the shift count are used for 32-bit numbers, and only the low 6 bits for 64-bit numbers; meaning:
1 << 5
is identical to
1 << 37
(both are 32)
By making it:
MsgrVersion9 = 1L << 32
you make it a 64-bit number, which is why @leppie's fix works; otherwise the << is evaluated first (and note that 1 << 32 is identical to 1 << 0, i.e. 1), and only then is the resulting 1 converted to a long; so it is still 1.
From §14.8 in the ECMA spec:
For the predefined operators, the number of bits to shift is computed as follows:
When the type of x is int or uint, the shift count is given by the low-order five bits of count. In other words, the shift count is computed from count & 0x1F.
When the type of x is long or ulong, the shift count is given by the low-order six bits of count. In other words, the shift count is computed from count & 0x3F.
If the resulting shift count is zero, the shift operators simply return the value of x.
Shift operations never cause overflows and produce the same results in checked and unchecked context
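A quick check of the masking rule (my sketch; constant and runtime shifts behave the same way here):

Console.WriteLine(1 << 32);  // 1: the count is masked with 0x1F, so 32 becomes 0
Console.WriteLine(1 << 33);  // 2: 33 & 0x1F == 1
Console.WriteLine(1L << 32); // 4294967296: a long left operand uses six bits of the count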
The problem could be with arithmetic overflow.
Specifically at:
MsgrVersion8 = 1 << 31,
MsgrVersion9 = 1 << 32,
I suggest you make it:
MsgrVersion8 = 1L << 31,
MsgrVersion9 = 1L << 32,
To prevent accidental overflow.
Update:
Seems likely, as the smaller number only "touches" 31 bits, while the bigger one "touches" 32 bits.