I've seen some explanations of & (and plenty of explanations of |) around SO (here etc), but none which clarify the use of the & in the following scenario:
else if ((e.AllowedEffect & DragDropEffects.Move) == DragDropEffects.Move) {...}
Taken from MSDN
Can anyone explain this, specific to this usage?
Thanks.
e.AllowedEffect is possibly a combination of bitwise flags. The & operator performs a bitwise AND, comparing the two operands bit by bit. As a result, if the bit under test is set, the result is that single flag.
The test could be written this way with exactly the same result:
else if ((e.AllowedEffect & DragDropEffects.Move) != 0 ) {...}
Let's explain with an example. The flag values are these:
None = 0,
Copy = 1,
Move = 2,
Link = 4,
So in binary:
None = 00000000,
Copy = 00000001,
Move = 00000010,
Link = 00000100,
So consider the case where the value under test is the combination of Copy and Move, i.e. the value will be:
00000011
by bitwise and with move we have:
00000011 -->Copy | Move
00000010 -->Move
======== &
00000010 === Move
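This compiles as a small self-contained sketch; the Effects enum here is a hypothetical stand-in that mirrors the DragDropEffects values listed above:

```csharp
using System;

[Flags]
enum Effects
{
    None = 0,  // 00000000
    Copy = 1,  // 00000001
    Move = 2,  // 00000010
    Link = 4   // 00000100
}

class Program
{
    static void Main()
    {
        Effects allowed = Effects.Copy | Effects.Move;  // 00000011

        // Masking with Move keeps only the Move bit:
        // 00000011 & 00000010 == 00000010
        bool moveAllowed = (allowed & Effects.Move) == Effects.Move;

        // The != 0 form gives exactly the same answer for a single-bit flag.
        bool moveAllowed2 = (allowed & Effects.Move) != 0;

        Console.WriteLine(moveAllowed);   // True
        Console.WriteLine(moveAllowed2);  // True
    }
}
```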
Suppose :
DragDropEffects.Move has 1 value.
e.AllowedEffect has 0 value.
It will do a bitwise AND of the two (1 & 0 = 0), hence the result will be 0:
DragDropEffects.Move & e.AllowedEffect will be 0 in this case.
Consider this now :
DragDropEffects.Move has 1 value.
e.AllowedEffect has 1 value.
In that case the bitwise AND will return 1 (as 1 & 1 = 1 in bitwise AND), so now the result will be 1.
For each bit position, bitwise AND returns 0 if either of the bits being ANDed is 0, and returns 1 only if both are 1.
The second answer in this post which you linked in your question explains it well.
DragDropEffects.Move has one bit set, the second least significant, making it equal to the number 2.
If you & something with 2 then if that bit is set you will get 2 and if that bit is not set, you will get 0.
So (x & DragDropEffects.Move) == DragDropEffects.Move will be true if the flag for DragDropEffects.Move is set in x and false otherwise.
In languages which allow automatic conversion to boolean it's common to use the more concise x & DragDropEffects.Move. The lack of concision is a disadvantage of C#'s not allowing such automatic conversion, but that restriction also makes a lot of mistakes simply not happen.
Some people prefer the alternative (x & DragDropEffects.Move) != 0 (and conversely (x & DragDropEffects.Move) == 0 to test for a flag not being set) which has the advantage of 0 working here no matter what the enum type or what flag is tested. (And potentially a minor advantage in resulting in very slightly smaller CIL if it is turned straight into a brzero instruction, but I think it generally doesn't anyway).
DragDropEffects is not just an enum; it is a set of flags, so in your example we check whether e.AllowedEffect has the bit for DragDropEffects.Move set or not.
I hope you understand how bitwise operators work.
If e.AllowedEffect has DragDropEffects.Move set, then their & will result in DragDropEffects.Move, i.e.
e.AllowedEffect = 1
DragDropEffects.Move = 1
e.AllowedEffect & DragDropEffects.Move = 1
From the MSDN example, it roughly means:
'if AllowedEffect is set to/includes DragDropEffects.Move, then do this...'
Possible Duplicate:
Should an Enum start with a 0 or a 1?
Why should I never use 0 in a flag enum? I have read this multiple times now and would like to know the reason :)
Why should I never use 0 in a flag enum?
The question is predicated on an incorrect assumption. You should always use zero in a flag enum. It should always be set to mean "None".
You should never use it for anything other than to represent "none of the flags are set".
Why not?
Because it gets really confusing if zero has a meaning other than "none". One has the reasonable expectation that ((e & E.X) == E.X) means "Is the X flag set?" but if X is zero then this expression will always be true, even if logically the flag is not "set".
Because Flag enums are bit collections, or sets of options.
A 0 value would be part of all sets, or of none. It just wouldn't work.
Although a zero means none of the bits are set, it is often very useful to have a named constant for 0.
When I set up flag words, I define the names of the bits so that they all represent the non-default value. That is, the enum value is always initialised to zero, turning 'off' all the options the bitfield represents.
This provides forwards compatibility for your enum, so that anyone who creates a new value knows that any zero bits are going to be 'safe' to use if you later add more flags to your bitfield.
Similarly it is very useful to combine several flags to make a new constant name, which makes code more readable.
The danger of this (and the reason for the rule you cite) is just that you have to be aware of the difference between single bit values (flags) and values that represent groups or combinations of bits.
Flag enums are used like this:
Flag flags = Flag.First | Flag.Next | Flag.Last;
Then you should define your Flag like this:
enum Flag {First = 1, Next = 2, Last = 4};
This way you can see if a Flag has been used e.g.:
if ((flags & Flag.First) != 0) Console.WriteLine("First is set");
if ((flags & Flag.Next) != 0) Console.WriteLine("Next is set");
if ((flags & Flag.Last) != 0) Console.WriteLine("Last is set");
This is why you can only use values that are powers of 2, e.g. 1, 2, 4, 8, 16, 32, 64, 128, ...
If flags is 0 then it is considered blank.
I hope that this will increase your understanding of flag enums.
Because typically you use flags as follows:
var myFlagEnum = MyEnum.Foo | MyEnum.Bar | MyEnum.Bat;
// ... snip ...
if ((myFlagEnum & MyEnum.Foo) == MyEnum.Foo) { ... do something ... };
(The parentheses around the & are required in C#, since == binds more tightly than &.) If MyEnum.Foo were zero, the above wouldn't work (it would return true in all cases), whereas if it were 1 it would work.
A flag enum assumes that each one of its values represents the presence of an option, encoded in one of the enum's bits. So if a particular option is present (or true), the equivalent bit in the enum's value is set (1); otherwise it is not set (0).
So each one of the enum's fields is a value with only one bit set. If none of the options are present or true, then the combined enum value is zero, which means none of the bits are set. So the only zero field in a flags enum is the one that is supposed to mean that no option is set, true, or selected.
For example assume we have a flags enum that encodes the presence of borders in a table cell
public enum BorderType
{
None = 0x00, // 00000000 in binary
Top = 0x01, // 00000001 in binary
Left = 0x02, // 00000010 in binary
Right = 0x04, // 00000100 in binary
Bottom = 0x08 // 00001000 in binary
}
if you want to show that a cell has the top and bottom borders present, then you should use a value of
Cell.Border = BorderType.Top | BorderType.Bottom; // 0x01 | 0x08 = 0x09 = 00001001 in binary
if you want to show that a cell has no borders present, then you should use a value of
Cell.Border = BorderType.None; // 0x00 = 00000000 in binary
So you should NEVER use zero as a value for an option in a flag enum, but you should always use zero as the value that means that none of the flags are set.
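Putting the BorderType example above together as a runnable sketch:

```csharp
using System;

[Flags]
public enum BorderType
{
    None   = 0x00,  // 00000000 in binary
    Top    = 0x01,  // 00000001 in binary
    Left   = 0x02,  // 00000010 in binary
    Right  = 0x04,  // 00000100 in binary
    Bottom = 0x08   // 00001000 in binary
}

class Program
{
    static void Main()
    {
        // Top and bottom borders: 0x01 | 0x08 = 0x09 = 00001001 in binary
        BorderType border = BorderType.Top | BorderType.Bottom;

        // Test individual flags with bitwise AND.
        Console.WriteLine((border & BorderType.Top) == BorderType.Top);    // True
        Console.WriteLine((border & BorderType.Left) == BorderType.Left);  // False

        // "No borders" is tested by comparing against the zero value itself.
        Console.WriteLine(border == BorderType.None);                      // False
    }
}
```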
I really don't see the problem.
enum Where {Nowhere=0x00, Left=0x01, Right=0x02, Both=Left|Right};
Where thevalue = Where.Both;
bool result = (thevalue&Where.Nowhere)==Where.Nowhere;
Of course the result is true! What did you expect? Here, think about this.
bool result1 = (thevalue&Where.Left)==Where.Left;
bool result2 = (thevalue&Where.Right)==Where.Right;
bool result3 = (thevalue&Where.Both)==Where.Both;
These are all true! Why should Nowhere be special? There is nothing special about 0!
I have this code :
int flags = some integer value;
compte.actif = !Convert.ToBoolean(flags & 0x0002);
It is working very well; the problem is I don't really understand how it's working.
The & operation is a bitwise AND, I assume, so imagine 110110 & 000010: I assume it will result in 001011 (maybe I'm wrong from here). The goal is to check whether the 2's bit in the first term is set. So in this case it is true.
I don't really understand how it can be converted to a boolean.
Thanks for help
Bitwise and of 110110 & 000010 is 000010.
The ToBoolean looks for a non-zero value, so basically, this code checks that flags has the 2nd bit set, then negates it (!). So it is checking "is the 2nd bit clear".
A more traditional test there might be:
compte.actif = (flags & 0x02) == 0;
The bitwise AND operation will give you an integer containing bits that were set on both numbers. I.e. 0b110011 & 0b010100 yields 0b010000.
The exclamation mark negates the boolean, yielding true only if the 2nd bit is NOT set.
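A self-contained sketch of the two equivalent tests (the flags value here is just an example, and a local bool stands in for compte.actif):

```csharp
using System;

class Program
{
    static void Main()
    {
        int flags = 0b110110;  // bit 1 (value 0x0002) is set

        // flags & 0x0002 isolates bit 1; Convert.ToBoolean treats any
        // non-zero int as true, and ! negates that.
        bool actif = !Convert.ToBoolean(flags & 0x0002);
        Console.WriteLine(actif);  // False: the bit is set, so "bit is clear" is false

        // The more conventional equivalent:
        bool actif2 = (flags & 0x0002) == 0;
        Console.WriteLine(actif == actif2);  // True: both forms agree
    }
}
```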
I recently had a test question:
byte a,b,c;
a = 190;
b = 4;
c = (byte)(a & b);
What is the value of c?
I have never used a logical operand in this manner, what's going on here? Stepping through this, the answer is 4, but why?
Also, where would this come up in the real world? I would argue that using logical operands in this manner, with a cast, is just bad practice, but I could be wrong.
You are doing a bitwise AND in this case, not a logical AND. It combines the bits of the two values, giving you a result that has only the bits set that are set in both a and b: in this case, just the 4s-place bit.
190 = 10111110
& 4 = 00000100
-------------------
= 4 00000100
Edit: Interestingly, MSDN itself makes the issue of whether to call it logical vs. bitwise a bit muddy. In its description of the logical operators (&, &&, |, ||, etc.) it says "logical operators (bitwise and bool)", but then in the description of & itself it indicates that & performs a bitwise AND for integers and a logical AND for bools. It appears & is still considered a logical operator, but the action between integer types is a bitwise AND.
http://msdn.microsoft.com/en-us/library/sbf85k1c(v=vs.71).aspx
The logical AND operator, when applied to integers performs a bitwise AND operation. The result is 1 in each position in which a 1 appears in both of the operands.
0011
& 0101
------
0001
The decimal value 190 is equivalent to binary 10111110. Decimal 4 is binary 00000100.
Do a logical AND operation on the bits like this:
10111110
& 00000100
----------
00000100
So the result is 4.
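The computation from the question, runnable as-is:

```csharp
using System;

class Program
{
    static void Main()
    {
        byte a = 190;  // 10111110 in binary
        byte b = 4;    // 00000100 in binary

        // & promotes its byte operands to int, which is why the question's
        // cast back to byte is required; the AND itself keeps only the
        // bits set in both operands.
        byte c = (byte)(a & b);

        Console.WriteLine(c);  // 4: only the 4s-place bit is set in both
    }
}
```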
Also, where would this come up in the real world? I would argue that using logical operands in this manner, with a cast, is just bad practice, but I could be wrong.
These operations are useful in several circumstances. The most common is when using Enum values as flags.
[Flags]
public enum MyFileOptions
{
None = 0,
Read = 1, // 2^0
Write = 2, // 2^1
Append = 4, // 2^2
}
If an Enum has values that are powers of two, then they can be combined into a single integer variable (with the Logical OR operator).
MyFileOptions openReadWrite = MyFileOptions.Read | MyFileOptions.Write;
In this variable, both bits are set, so it indicates that both the Read and Write options are selected.
The logical AND operator can be used to test values.
bool openForWriting = ((openReadWrite & MyFileOptions.Write) == MyFileOptions.Write);
NOTE
A lot of people are pointing out that this is actually a bitwise AND, not a logical AND. I looked it up in the spec before I posted, and I was surprised to learn that both versions are referred to as "logical AND" in the spec. This makes sense because it is performing the logical AND operation on each bit. So you are actually correct in the title of the question.
This is a bitwise AND, meaning the bits on both bytes are compared, and a 1 is returned if both bits are 1.
10111110 &
00000100
--------
00000100
One of the uses of a logical & on bytes is in networking, where it is called the binary AND test. Basically, you AND bytes by writing them out in binary and ANDing every pair of bits.
In binary
4 == 100
190 == 10111110
& is an AND operation on the bits, so it does a binary AND on 4 and 190 in byte format: 10111110 AND 00000100 gives you 00000100, so the result is 4.
This is a bitwise AND, not a logical AND. Logical AND is a test of a condition ie:
if(a && b) doSomething();
A bitwise AND looks at the binary value of the two variables and combines them. Eg:
10101010 &
00001000
---------
00001000
This lines up the binary of the two numbers, like this:
10111110
00000100
--------
00000100
On each column, it checks to see if both numbers are 1. If it is, it will return 1 on the bottom. Otherwise, it will return 0 on the bottom.
I would argue that using logical operands in this manner, with a cast, is just bad practice, but I could be wrong.
There is no cast. & is defined on integers, and intended to be used this way. And just so everyone knows, this technically is a logical operator according to MSDN, which I find kind of crazy.
Edit: It seems most people misunderstood my question.
I know how enum works, and I know binary. I'm wondering why the enums with the [Flags] attribute is designed the way it is.
Original post:
This might be a duplicate, but I didn't find any other posts, so here goes.
I bet there has been some good rationale behind it, I just find it a bit bug prone.
[Flag]
public enum Flagged
{
One, // 0
Two, // 1
Three, // 2
Four, // 3
}
Flagged f; // Defaults to Flagged.One = 0
f = Flagged.Four;
(f & Flagged.One) != 0; // Sure.. One defaults to 0
(f & Flagged.Two) != 0; // 3 & 1 == 1
(f & Flagged.Three) != 0; // 3 & 2 == 2
Wouldn't it have made more sense if it did something like this?
[Flag]
public enum Flagged
{
One = 1 << 0, // 1
Two = 1 << 1, // 2
Three = 1 << 2, // 4
Four = 1 << 3, // 8
}
Flagged f; // Defaults to 0
f = Flagged.Four;
(f & Flagged.One) != 0; // 8 & 1 == 0
(f & Flagged.Two) != 0; // 8 & 2 == 0
(f & Flagged.Three) != 0; // 8 & 4 == 0
(f & Flagged.Four) != 0; // 8 & 8 == 8
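For reference, the power-of-two variant compiles and behaves as described (note the attribute must be spelled [Flags], and each test yields a boolean):

```csharp
using System;

[Flags]
enum Flagged
{
    One   = 1 << 0,  // 1
    Two   = 1 << 1,  // 2
    Three = 1 << 2,  // 4
    Four  = 1 << 3   // 8
}

class Program
{
    static void Main()
    {
        Flagged f = Flagged.Four;

        // With distinct bits, exactly one of the tests succeeds.
        Console.WriteLine((f & Flagged.One) != 0);    // False: 8 & 1 == 0
        Console.WriteLine((f & Flagged.Two) != 0);    // False: 8 & 2 == 0
        Console.WriteLine((f & Flagged.Three) != 0);  // False: 8 & 4 == 0
        Console.WriteLine((f & Flagged.Four) != 0);   // True:  8 & 8 == 8
    }
}
```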
Of course.. I'm not quite sure how it should handle custom flags like this
[Flag]
public enum Flagged
{
One, // 1
Two, // 2
LessThanThree = One | Two,
Three, // 4? start from Two?
LessThanFour = Three | LessThanThree,
Four, // 8? start from Three?
}
The spec gives some guidelines
Define enumeration constants in powers of two, that is, 1, 2, 4, 8, and so on. This means the individual flags in combined enumeration constants do not overlap.
But this should perhaps be done automatically as I bet you would never want my first example to occur. Please enlighten me :)
The Flags attribute is only used for formatting the values as multiple values. The bit operations work on the underlying type with or without the attribute.
The first item of an enumeration is zero unless explicitly given some other value. It is often best practice to have a zero value for flags enumerations as it provides a semantic meaning to the zero value such as "No flags" or "Turned off". This can be helpful in maintaining code as it can imply intent in your code (although comments also achieve this).
Other than that, it really is up to you and your design as to whether you require a zero value or not.
As flag enumerations are still just enumerations (the FlagsAttribute merely instructs the debugger to interpret the values as combinations of other values), the next value in an enumeration is always one more than the previous value. Therefore, you should be explicit in specifying the bit values as you may want to express combinations as bitwise-ORs of other values.
That said, it is not unreasonable to imagine a syntax for flags enumerations that demands all bitwise combinations are placed at the end of the enumeration definition or are marked in some way, so that the compiler knows how to handle everything else.
For example (assuming a flags keyword and that we're in the northern hemisphere),
flags enum MyFlags
{
January,
February,
March,
April,
May,
June,
July,
August,
September,
October,
November,
December,
Winter = January | February | March,
Spring = April | May | June,
Summer = July | August | September,
Autumn = October | November | December
}
With this syntax, the compiler could create the 0 value itself, and assign flags to the other values automatically.
The attribute is [Flags] not [Flag] and there's nothing magical about it. The only thing it seems to affect is the ToString method. When [Flags] is specified, the values come out comma delimited. It's up to you to specify the values to make it valid to be used in a bit field.
There's nothing in the annotated C# 3 spec. I think there may be something in the annotated C# 4 spec - I'm not sure. (I think I started writing such an annotation myself, but then deleted it.)
It's fine for simple cases, but as soon as you start adding extra flags, it gets a bit tricky:
[Flags]
enum Features
{
Frobbing, // 1
Blogging, // 2
Widgeting, // 4
BloggingAndWidgeting = Blogging | Widgeting, // 6
Mindnumbing // ?
}
What value should Mindnumbing have? The next bit that isn't used? What about if you set a fixed integer value?
I agree that this is a pain. Maybe some rules could be worked out that would be reasonable... but I wonder whether the complexity vs value balance would really work out.
Simply put, Flags is an attribute. It doesn't apply until after the enumeration is created, and thus doesn't change the values assigned to the enumeration.
Having said that, the MSDN page Designing Flags Enumerations says this:
Do use powers of two for a flags
enumeration's values so they can be
freely combined using the bitwise OR
operation.
Important: If you do not use powers of two or
combinations of powers of two, bitwise
operations will not work as expected.
Likewise, the page for the FlagsAttribute says
Define enumeration constants in powers
of two, that is, 1, 2, 4, 8, and so
on. This means the individual flags in
combined enumeration constants do not
overlap.
In C, it's possible to (ab)use the preprocessor to generate power-of-two enumerations automatically. If one has a macro make_things which expands to "make_thing(flag1) make_thing(flag2) make_thing(flag3)" etc. it's possible to invoke that macro multiple times, with different definitions of make_thing, so as to achieve a power-of-two sequence of flag names as well as some other goodies.
For example, start by defining make_thing(x) as "LINEAR_ENUM_##x," (including the comma), and then use an enum statement to generate a list of enumerations (including, outside the make_things macro, LINEAR_NUM_ITEMS). Then create another enumeration, this time with make_thing(x) defined as "FLAG_ENUM_##x = 1<<LINEAR_ENUM_##x," so that each flag value is two raised to the power of the corresponding linear value.
Rather nifty, some of the things that can be done that way, with flag and linear values automatically kept in sync; code can do nice things like "if (thingie[LINEAR_ENUM_foo]) thing_errors |= FLAG_ENUM_foo;" (using both linear and flag values). Unfortunately, I know of no way to do anything remotely similar in C# or VB.net.
Continuing my previous question
Why I cannot derive from long?
I found an interesting problem.
Step one:
4294967296 & 0xFFFFFFFF00000000
Result: 4294967296.
Step two.
4294967296 & 0x00000000FFFFFFFF
Result: 0
Aha, So here I assume that 4294967296 == 0xFFFFFFFF
Let's check
(long)0x00000000FFFFFFFF
Result: 4294967295. Fail.
Let's double check
4294967296 >> 32
Result: 1. Fail.
The only explanation I can see is that I am using long, where one bit is reserved for the sign. In C I would use unsigned long.
What do you think guys?
4294967296 & 0xFFFFFFFF00000000 = 4294967296
This indicates that the value 4294967296 has no bits set in the lower 32-bits. In fact, 4294967296 is 0x100000000, so this is true.
4294967296 >> 32 = 1
Again, consistent.
In other words, your conclusion that 4294967296 is 0xFFFFFFFF is wrong so the remaining checks will not support this.
Um... I'm not sure why you came to the conclusions you did, but 4294967296 is 0x100000000. To write out the bitwise ANDs in easily readable hex...
0x0000000100000000 &
0x00000000FFFFFFFF =
0x0000000000000000
0x0000000100000000 &
0xFFFFFFFF00000000 =
0x0000000100000000
Both of those make perfect sense. Perhaps you're misunderstanding a bitwise AND... it maps the bits that are the same in both. Your comments seem more appropriate to a bitwise XOR than a bitwise AND (Which is not the operation you're using)...
I think you are failing to understand the bitwise AND operation. The bitwise AND returns only the bits that are set in both operands. If the two values were the same, then
(4294967296 & 0xFFFFFFFF00000000) == 4294967296
and
(4294967296 & 0xFFFFFFFF00000000) == 0xFFFFFFFF00000000
would both hold, but the second obviously doesn't.
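All of the observations from this thread can be reproduced in one sketch (note the 0xFFFFFFFF00000000 literal is a ulong in C#, so it needs an unchecked cast to long before the AND):

```csharp
using System;

class Program
{
    static void Main()
    {
        long value = 4294967296;  // 0x100000000: a single bit in the upper 32 bits

        // Upper-half mask keeps the whole value, since its only set bit is bit 32.
        Console.WriteLine(value & unchecked((long)0xFFFFFFFF00000000));  // 4294967296

        // Lower-half mask yields 0: no bits are set in the lower 32 bits.
        Console.WriteLine(value & 0x00000000FFFFFFFF);                   // 0

        // Shifting the single set bit down by 32 leaves 1.
        Console.WriteLine(value >> 32);                                  // 1

        // And 0xFFFFFFFF is one less than the value, not equal to it.
        Console.WriteLine((long)0x00000000FFFFFFFF);                     // 4294967295
    }
}
```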