Why should I never use 0 in a flag enum [duplicate] - c#

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Should an Enum start with a 0 or a 1?
Why should I never use 0 in a flag enum? I have read this multiple times now and would like to
know the reason :)

Why should I never use 0 in a flag enum?
The question is predicated on an incorrect assumption. You should always use zero in a flag enum. It should always be set to mean "None".
You should never use it for anything other than to represent "none of the flags are set".
Why not?
Because it gets really confusing if zero has a meaning other than "none". One has the reasonable expectation that ((e & E.X) == E.X) means "Is the X flag set?" but if X is zero then this expression will always be true, even if logically the flag is not "set".
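A minimal sketch of that trap (the enum and member names here are illustrative, not from any real API):

```csharp
using System;

[Flags]
enum E
{
    X = 0,  // zero-valued "flag": the mistake under discussion
    Y = 1
}

class Demo
{
    static void Main()
    {
        E e = E.Y;  // X was never set

        // The usual membership test is vacuously true for X,
        // because (anything & 0) == 0 and X == 0:
        Console.WriteLine((e & E.X) == E.X);  // True, misleadingly
        Console.WriteLine((e & E.Y) == E.Y);  // True, correctly
    }
}
```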

Because Flag enums are bit collections, or sets of options.
A 0 value would be part of all sets, or of none. It just wouldn't work.

Although a zero means none of the bits are set, it is often very useful to have a named constant for 0.
When I set up flag words, I define the names of the bits so that they all represent the non-default value. That is, the enum value is always initialised to zero, turning 'off' all the options the bitfield represents.
This provides forwards compatibility for your enum, so that anyone who creates a new value knows that any zero bits are going to be 'safe' to use if you later add more flags to your bitfield.
Similarly it is very useful to combine several flags to make a new constant name, which makes code more readable.
The danger of this (and the reason for the rule you cite) is just that you have to be aware of the difference between single bit values (flags) and values that represent groups or combinations of bits.
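A sketch of that layout, with a named zero and a named combination (the Options enum and its members are made-up names for illustration):

```csharp
using System;

[Flags]
enum Options
{
    None      = 0,            // named constant for "no flags set"
    Bold      = 1 << 0,       // single-bit values: the actual flags
    Italic    = 1 << 1,
    Underline = 1 << 2,
    Emphasis  = Bold | Italic // named combination: NOT a single flag
}

class Demo
{
    static void Main()
    {
        Options o = Options.Bold;

        // A single-bit flag can be tested directly:
        Console.WriteLine((o & Options.Bold) != 0);                    // True

        // A combination constant needs the "all bits present" comparison:
        Console.WriteLine((o & Options.Emphasis) == Options.Emphasis); // False: only Bold is set
    }
}
```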

Flag enums are used like this:
Flag flags = Flag.First | Flag.Next | Flag.Last;
Then you should define your Flag like this:
enum Flag {First = 1, Next = 2, Last = 4};
This way you can see if a Flag has been used e.g.:
if ((flags & Flag.First) != 0) Console.WriteLine("First is set");
if ((flags & Flag.Next) != 0) Console.WriteLine("Next is set");
if ((flags & Flag.Last) != 0) Console.WriteLine("Last is set");
This is why you can only use values that are powers of 2, e.g. 1, 2, 4, 8, 16, 32, 64, 128, ...
If flags is 0 then it is considered blank.
I hope that this will increase your understanding of flag enums.

Because typically you use flags as follows:
var myFlagEnum = MyEnum.Foo | MyEnum.Bar | MyEnum.Bat;
// ... snip ...
if ((myFlagEnum & MyEnum.Foo) == MyEnum.Foo) { ... do something ... };
If MyEnum.Foo were zero, the above wouldn't work (it would return true in all cases), whereas if it were 1, it would work. (Note the parentheses around the & expression: in C#, == binds more tightly than &, so they are required for this to even compile.)

A flag enum assumes that each of its values represents the presence of an option, encoded in one of the enum's bits. So if a particular option is present (or true), the equivalent bit in the enum's value is set (1); otherwise it is not set (0).
So each of the enum's fields is a value with only one bit set. If none of the options are present or true, the combined enum value is zero, which means none of the bits are set. So the only zero field in a flags enum is the one that is supposed to mean that no option is set, true, or selected.
For example assume we have a flags enum that encodes the presence of borders in a table cell
public enum BorderType
{
None = 0x00, // 00000000 in binary
Top = 0x01, // 00000001 in binary
Left = 0x02, // 00000010 in binary
Right = 0x04, // 00000100 in binary
Bottom = 0x08 // 00001000 in binary
}
if you want to show that a cell has the top and bottom borders present, then you should use a value of
Cell.Border = BorderType.Top | BorderType.Bottom; // 0x01 | 0x08 = 0x09 = 00001001 in binary
if you want to show that a cell has no borders present, then you should use a value of
Cell.Border = BorderType.None; // 0x00 = 00000000 in binary
So you should NEVER use zero as a value for an option in a flag enum, but you should always use zero as the value that means that none of the flags are set.
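Testing such a value, using the BorderType enum from above, might look like this (both the bitwise idiom and HasFlag, available since .NET 4.0, are shown):

```csharp
using System;

[Flags]
enum BorderType
{
    None   = 0x00,
    Top    = 0x01,
    Left   = 0x02,
    Right  = 0x04,
    Bottom = 0x08
}

class Demo
{
    static void Main()
    {
        var border = BorderType.Top | BorderType.Bottom;  // 0x09

        Console.WriteLine((border & BorderType.Top) != 0);   // True
        Console.WriteLine(border.HasFlag(BorderType.Left));  // False

        // "No borders" is an equality test, not a bit test:
        Console.WriteLine(border == BorderType.None);        // False
    }
}
```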

I really don't see the problem.
enum Where {Nowhere=0x00, Left=0x01, Right=0x02, Both=Left|Right};
Where thevalue = Where.Both;
bool result = (thevalue&Where.Nowhere)==Where.Nowhere;
Of course the result is true! What did you expect? Here, think about this.
bool result1 = (thevalue&Where.Left)==Where.Left;
bool result2 = (thevalue&Where.Right)==Where.Right;
bool result3 = (thevalue&Where.Both)==Where.Both;
These are all true! Why should Nowhere be special? There is nothing special about 0!
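The practical consequence is simply that a zero member cannot be tested with &; it is tested with plain equality. A short sketch using the Where enum above:

```csharp
using System;

[Flags]
enum Where { Nowhere = 0x00, Left = 0x01, Right = 0x02, Both = Left | Right }

class Demo
{
    static void Main()
    {
        Where thevalue = Where.Both;

        // The & test against Nowhere is true for ANY value, so it tells you nothing:
        Console.WriteLine((thevalue & Where.Nowhere) == Where.Nowhere);  // True

        // The meaningful "no flags set" test is equality:
        Console.WriteLine(thevalue == Where.Nowhere);                    // False
    }
}
```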

Related

Use of Bitwise AND (&) in this scenario

I've seen some explanations of & (and plenty of explanations of |) around SO (here etc), but none which clarify the use of the & in the following scenario:
else if ((e.AllowedEffect & DragDropEffects.Move) == DragDropEffects.Move) {...}
Taken from MSDN
Can anyone explain this, specific to this usage?
Thanks.
e.AllowedEffect is possibly a combination of bitwise flags. The & operator performs a bit-by-bit logical "and" operation. As a result, if the bit under test is set, the result is that single flag.
The test could be written this way with exactly the same result:
else if ((e.AllowedEffect & DragDropEffects.Move) != 0 ) {...}
Let's explain with an example; the flag values are these:
None = 0,
Copy = 1,
Move = 2,
Link = 4,
So in binary:
None = 00000000,
Copy = 00000001,
Move = 00000010,
Link = 00000100,
So we consider the case in which under test we have the combination of Copy and Move, ie the value will be:
00000011
by bitwise and with move we have:
00000011 -->Copy | Move
00000010 -->Move
======== &
00000010 === Move
Suppose:
DragDropEffects.Move has the value 1.
e.AllowedEffect has the value 0.
The bitwise AND of the two (1 & 0 = 0) will be 0, hence:
DragDropEffects.Move & e.AllowedEffect will be 0 in this case.
Consider this now :
DragDropEffects.Move has 1 value.
e.AllowedEffect has 1 value.
In that case the bitwise AND will return 1 (as 1 & 1 = 1), so now the result will be 1.
For each bit position, bitwise AND returns 0 if either of the bits being ANDed is 0, and returns 1 only if both are 1.
The second answer in this post which you linked in your question explains it well.
DragDropEffects.Move has one bit set, the second least significant making it the same as the number 2.
If you & something with 2 then if that bit is set you will get 2 and if that bit is not set, you will get 0.
So (x & DragDropEffects.Move) == DragDropEffects.Move will be true if the flag for DragDropEffects.Move is set in x and false otherwise.
In languages which allow automatic conversion to boolean it's common to use the more concise x & DragDropEffects.Move. The lack of concision is a disadvantage with C# not allowing such automatic conversion, but it does make a lot of mistakes just not happen.
Some people prefer the alternative (x & DragDropEffects.Move) != 0 (and conversely (x & DragDropEffects.Move) == 0 to test for a flag not being set) which has the advantage of 0 working here no matter what the enum type or what flag is tested. (And potentially a minor advantage in resulting in very slightly smaller CIL if it is turned straight into a brzero instruction, but I think it generally doesn't anyway).
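The two idioms are interchangeable for single-bit flags but differ for combination masks. A sketch (the Effects enum here is illustrative, mirroring DragDropEffects values but with a made-up combined member):

```csharp
using System;

[Flags]
enum Effects
{
    None = 0,
    Copy = 1,
    Move = 2,
    Link = 4,
    CopyOrMove = Copy | Move  // multi-bit mask
}

class Demo
{
    static void Main()
    {
        Effects x = Effects.Copy;

        // "== mask" requires ALL bits of the mask to be set:
        Console.WriteLine((x & Effects.CopyOrMove) == Effects.CopyOrMove); // False

        // "!= 0" requires ANY bit of the mask to be set:
        Console.WriteLine((x & Effects.CopyOrMove) != 0);                  // True
    }
}
```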
DragDropEffects is not just an enum; it is a set of flags, so in your example we check whether e.AllowedEffect has the bit for DragDropEffects.Move set or not.
I hope you understand how bitwise operators work.
If e.AllowedEffect includes DragDropEffects.Move, then their & will result in DragDropEffects.Move, i.e.
e.AllowedEffect = 1
DragDropEffects.Move = 1
e.AllowedEffect & DragDropEffects.Move = 1
From the MSDN example, it roughly means:
'if AllowedEffect includes DragDropEffects.Move then do this...'

Comparing 2 FileAttributes must always return true in C#

I recently got some great help here on stackoverflow. One of the answers puzzled me somewhat and I didn't feel it was appropriate to get an explanation due to the limitations of the comments box.
Please review the code below.
if ((File.GetAttributes(fileName) & FileAttributes.Archive) == FileAttributes.Archive)
{
// Archive file.
}
My question is: why would you include the logic after the &, i.e. the == comparison in
(File.GetAttributes(fileName) & FileAttributes.Archive) == etc
Surely FileAttributes.Archive == FileAttributes.Archive will always match?
Does anyone have an explanation for this? (IMO it's probably a typo/mistake, but I've assumed too many things before only to be corrected later on!)
The second question is what does the tilde ~ do in this code:
File.SetAttributes(fileName, File.GetAttributes(fileName) & ~FileAttributes.Archive);
Some Enums are flags. That is, it can have any combination of the members of the enum and still be valid.
In the case of the FileAttributes enum, a file can be ReadOnly and Hidden at the same time. Likewise a file could be Hidden, ReadOnly and System. Writing an enum member for each combination would give 16 different members! Very inefficient.
When using flag-type enums, the way to check whether a value contains a specified enum member is to compare it with itself in a bitwise (binary) fashion.
Given the following simplified definition of the FileAttributes enum:
[Serializable, Flags]
public enum FileAttributes
{
Archive = 32,
Hidden = 2,
Normal = 128,
ReadOnly = 1,
System = 4,
Temporary = 256
}
A System file which is also marked ReadOnly will have the value 5 (4 + 1).
Trying to determine if the file is ReadOnly by using the code
File.GetAttributes(fileName) == FileAttributes.System
will evaluate as such:
5 == 4
and the result will be False.
The best way to determine whether the file has got the System attribute set is to do binary AND operation on the file's attribute and the attribute whose presence you want to determine. In code, you would write this:
(File.GetAttributes(fileName) & FileAttributes.System) == FileAttributes.System
This strips off all attributes other than the System attribute before doing the comparison. Mathematically it evaluates as such:
    0101 (System + ReadOnly)
AND 0100 (System)
--------------------------
    0100 (System)
Then the result (0100) is compared to the System attribute (0100), and the result is True.
On one line, in binary, the code would be (0b0101 & 0b0100) == 0b0100, which evaluates to True.
Starting from .NET 4.0, Microsoft has included the Enum.HasFlag method to determine the presence or absence of flags in an enum value. You therefore do not have to type all that code yourself. When dealing with an Enum type that has the Flags attribute, you can simply use the HasFlag method to check if a particular flag is present. Your line would therefore be written as
File.GetAttributes(fileName).HasFlag(FileAttributes.System)
The tilde (~) mark, when used on an integral value (or any type which can be 'degenerated' into int, uint, long or ulong), has the effect of flipping all the bits of the number, producing the number's complement (every bit inverted relative to the one specified).
For example, given the 4-bit number 4 (0100 in binary), its complement (~4) would be 11 (1011 in binary):
0100 -> 1011
The tilde mark has the same effect as doing an XOR with the all-ones value of the width being considered. For a 4-bit number, that value is 15 (1111), so the tilde evaluates as:
    0100
XOR 1111
--------
    1011
The effect in your code File.SetAttributes(fileName, File.GetAttributes(fileName) & ~FileAttributes.Archive) will therefore get the file's attributes, remove the Archive attribute and then set it back to the file.
Assuming the file's attributes are Archive + Hidden, the value will be 34 (00100010 in binary) and ~Archive will have the value 11011111 in binary.
Evaluating will be as such:
    (Archive + Hidden) 00100010
AND (~Archive)         11011111
-------------------------------
    Hidden             00000010
The file's attributes will subsequently be changed to Hidden only (the Archive attribute will be removed).
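The same bitwise operators give you the full set of flag manipulations. A sketch using the real System.IO.FileAttributes enum (purely in-memory, no file is touched):

```csharp
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        var attrs = FileAttributes.Hidden | FileAttributes.Archive;

        attrs |= FileAttributes.ReadOnly;   // set a flag
        attrs &= ~FileAttributes.Archive;   // clear a flag (the tilde idiom)
        attrs ^= FileAttributes.Hidden;     // toggle a flag

        Console.WriteLine(attrs);           // ReadOnly
    }
}
```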
The File.GetAttributes method returns an enumeration which has a Flags attribute that allows a bitwise combination of its member values. In other words, all the bit values of ALL of the relevant attributes are combined together in a single integer object. The '&' or bitwise and operator allows you to pull out the relevant bits of the object. The comparison with the original attribute is for clarity, it would be equally logically correct to simply look for a non-zero value.
http://msdn.microsoft.com/en-us/library/system.io.fileattributes.aspx
The problem is that FileAttributes has the Flags attribute, which means the values can be combined (e.g. Archive AND Hidden).
To find out whether a specific value is really set, you have to mask out all the other values. For this purpose the HasFlag method on the Enum class also exists, which could be used as follows in your example:
if(File.GetAttributes(fileName).HasFlag(FileAttributes.Archive))
{
// Archive file.
}
The second example removes an exact value from the bitmask. It removes the archive attribute without touching any of the other bits in the mask (e.g. readonly or hidden). No dedicated method exists on the enum class for this task.
This is a so-called masked comparison.
(File.GetAttributes(fileName) & FileAttributes.Archive)
returns FileAttributes.Archive if the Archive bit is set in the attributes, and zero in any other case.
Example:
if file attributes has value:
hidden archive readonly
1 1 0
bitwise and (File.GetAttributes(fileName) & FileAttributes.Archive)
returns
hidden archive readonly
0 1 0
and it is equal FileAttributes.Archive.
if file attributes has value:
hidden archive readonly
1 0 1
bitwise and (File.GetAttributes(fileName) & FileAttributes.Archive)
returns
hidden archive readonly
0 0 0
The '~' operator is a bitwise NOT (complement). See Bitwise Complement Operator

How do the binary operation and boolean conversion work?

I have this code :
int flags = some integer value;
compte.actif = !Convert.ToBoolean(flags & 0x0002);
It works very well; the problem is that I don't really understand how it's working.
The & operation is a bitwise AND, I assume, so imagine 110110 & 000010. I assume it will result in 001011 (maybe I'm wrong from here). The goal is to check if the 2's bit in the first term is filled. So in this case it is true.
I don't really understand how it can be converted into a boolean.
Thanks for the help.
Bitwise and of 110110 & 000010 is 000010.
The ToBoolean looks for a non-zero value, so basically, this code checks that flags has the 2nd bit set, then negates it (!). So it is checking "is the 2nd bit clear".
A more traditional test there might be:
compte.actif = (flags & 0x02) == 0;
The bitwise AND operation will give you an integer containing bits that were set on both numbers. I.e. 0b110011 & 0b010100 yields 0b010000.
The exclamation mark negates the boolean, yielding true only if the 2nd bit is NOT set.
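A sketch of the conversion, showing that Convert.ToBoolean on the masked value and the explicit == 0 test agree:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int flags = 0b110110;  // bit 2 (value 0x0002) is set

        Console.WriteLine(Convert.ToBoolean(flags & 0x0002));   // True: non-zero converts to true
        Console.WriteLine(!Convert.ToBoolean(flags & 0x0002));  // False
        Console.WriteLine((flags & 0x0002) == 0);               // False: the equivalent "clear" test
    }
}
```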

Why do [Flag]'d enums start at 0 and increment by 1?

Edit: It seems most people misunderstood my question.
I know how enums work, and I know binary. I'm wondering why enums with the [Flags] attribute are designed the way they are.
Original post:
This might be a duplicate, but I didn't find any other posts, so here goes.
I bet there has been some good rationale behind it, I just find it a bit bug prone.
[Flag]
public enum Flagged
{
One, // 0
Two, // 1
Three, // 2
Four, // 3
}
Flagged f; // Defaults to Flagged.One = 0
f = Flagged.Four;
(f & Flagged.One) != 0; // Sure.. One defaults to 0
(f & Flagged.Two) != 0; // 3 & 1 == 1
(f & Flagged.Three) != 0; // 3 & 2 == 2
Wouldn't it have made more sense if it did something like this?
[Flag]
public enum Flagged
{
One = 1 << 0, // 1
Two = 1 << 1, // 2
Three = 1 << 2, // 4
Four = 1 << 3, // 8
}
Flagged f; // Defaults to 0
f = Flagged.Four;
(f & Flagged.One) != 0; // 8 & 1 == 0
(f & Flagged.Two) != 0; // 8 & 2 == 0
(f & Flagged.Three) != 0; // 8 & 4 == 0
(f & Flagged.Four) != 0; // 8 & 8 == 8
Of course.. I'm not quite sure how it should handle custom flags like this
[Flag]
public enum Flagged
{
One, // 1
Two, // 2
LessThanThree = One | Two,
Three, // 4? start from Two?
LessThanFour = Three | LessThanThree,
Four, // 8? start from Three?
}
The spec gives some guidelines
Define enumeration constants in powers of two, that is, 1, 2, 4, 8, and so on. This means the individual flags in combined enumeration constants do not overlap.
But this should perhaps be done automatically as I bet you would never want my first example to occur. Please enlighten me :)
The Flags attribute is only used for formatting the values as multiple values. The bit operations work on the underlying type with or without the attribute.
The first item of an enumeration is zero unless explicitly given some other value. It is often best practice to have a zero value for flags enumerations as it provides a semantic meaning to the zero value such as "No flags" or "Turned off". This can be helpful in maintaining code as it can imply intent in your code (although comments also achieve this).
Other than that, it really is up to you and your design as to whether you require a zero value or not.
As flag enumerations are still just enumerations (the FlagsAttribute merely instructs the debugger to interpret the values as combinations of other values), the next value in an enumeration is always one more than the previous value. Therefore, you should be explicit in specifying the bit values as you may want to express combinations as bitwise-ORs of other values.
That said, it is not unreasonable to imagine a syntax for flags enumerations that demands all bitwise combinations are placed at the end of the enumeration definition or are marked in some way, so that the compiler knows how to handle everything else.
For example (assuming a flags keyword and that we're in the northern hemisphere),
flags enum MyFlags
{
January,
February,
March,
April,
May,
June,
July,
August,
September,
October,
November,
December,
Winter = January | February | March,
Spring = April | May | June,
Summer = July | August | September,
Autumn = October | November | December
}
With this syntax, the compiler could create the 0 value itself, and assign flags to the other values automatically.
The attribute is [Flags] not [Flag] and there's nothing magical about it. The only thing it seems to affect is the ToString method. When [Flags] is specified, the values come out comma delimited. It's up to you to specify the values to make it valid to be used in a bit field.
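That ToString effect is easy to demonstrate (the Flagged values below have been given power-of-two values by hand, as the answers recommend):

```csharp
using System;

[Flags]
enum Flagged { One = 1, Two = 2, Three = 4, Four = 8 }

class Demo
{
    static void Main()
    {
        var f = Flagged.One | Flagged.Three;

        Console.WriteLine(f);       // "One, Three": [Flags] makes ToString comma-delimited
        Console.WriteLine((int)f);  // 5: the underlying value is unaffected
    }
}
```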
There's nothing in the annotated C# 3 spec. I think there may be something in the annotated C# 4 spec - I'm not sure. (I think I started writing such an annotation myself, but then deleted it.)
It's fine for simple cases, but as soon as you start adding extra flags, it gets a bit tricky:
[Flags]
enum Features
{
Frobbing, // 1
Blogging, // 2
Widgeting, // 4
BloggingAndWidgeting = Blogging | Widgeting, // 6
Mindnumbing // ?
}
What value should Mindnumbing have? The next bit that isn't used? What about if you set a fixed integer value?
I agree that this is a pain. Maybe some rules could be worked out that would be reasonable... but I wonder whether the complexity vs value balance would really work out.
Simply put, Flags is an attribute. It doesn't apply until after the enumeration is created, and thus doesn't change the values assigned to the enumeration.
Having said that, the MSDN page Designing Flags Enumerations says this:
Do use powers of two for a flags
enumeration's values so they can be
freely combined using the bitwise OR
operation.
Important: If you do not use powers of two or
combinations of powers of two, bitwise
operations will not work as expected.
Likewise, the page for the FlagsAttribute says
Define enumeration constants in powers
of two, that is, 1, 2, 4, 8, and so
on. This means the individual flags in
combined enumeration constants do not
overlap.
In C, it's possible to (ab)use the preprocessor to generate power-of-two enumerations automatically. If one has a macro make_things which expands to "make_thing(flag1) make_thing(flag2) make_thing(flag3)" etc. it's possible to invoke that macro multiple times, with different definitions of make_thing, so as to achieve a power-of-two sequence of flag names as well as some other goodies.
For example, start by defining make_thing(x) as "LINEAR_ENUM_##x," (including the comma), and then use an enum statement to generate a list of enumerations (including, outside the make_things macro, LINEAR_NUM_ITEMS). Then create another enumeration, this time with make_thing(x) defined as "FLAG_ENUM_##x = 1 << LINEAR_ENUM_##x," so that each flag name gets the power-of-two value corresponding to its linear counterpart.
Some of the things that can be done that way are rather nifty, with flag and linear values automatically kept in sync; code can mix the two freely, e.g. testing a condition indexed by LINEAR_ENUM_foo and recording it with "thing_errors |= FLAG_ENUM_foo;". Unfortunately, I know of no way to do anything remotely similar in C# or VB.net.

enum with value 0x0001?

I have an enum declaration like this:
public enum Filter
{
a = 0x0001,
b = 0x0002
}
What does that mean? They are using this to filter an array.
It means they're the integer values assigned to those names. Enums are basically just named numbers. You can cast between the underlying type of an enum and the enum value.
For example:
public enum Colour
{
Red = 1,
Blue = 2,
Green = 3
}
Colour green = (Colour) 3;
int three = (int) Colour.Green;
By default an enum's underlying type is int, but you can use any of byte, sbyte, short, ushort, int, uint, long or ulong:
public enum BigEnum : long
{
BigValue = 0x5000000000 // Couldn't fit this in an int
}
It just means that if you use Filter.a, you get 1; Filter.b is 2.
The weird hex notation is just that: notation.
EDIT:
Since this is a 'filter' the hex notation makes a little more sense.
By writing 0x1, you specify the following bit pattern:
0000 0001
And 0x2 is:
0000 0010
This makes it clearer on how to use a filter.
So for example, if you wanted to filter out data that has the lower 2 bits set, you could do:
Filter.a | Filter.b
which would correspond to:
0000 0011
The hex notation makes the concept of a filter clearer (for some people). For example, it's relatively easy to figure out the binary of 0x83F0 by looking at it, but much more difficult for 33776 (the same number in base 10).
It's not clear what it is that you find unclear, so let's discuss it all:
The enum values have been given explicit numerical values. Each enum value is always represented as a numerical value for the underlying storage, but if you want to be sure what that numerical value is you have to specify it.
The numbers are written in hexadecimal notation, this is often used when you want the numerical values to contain a single set bit for masking. It's easier to see that the value has only one bit set when it's written as 0x8000 than when it's written as 32768.
In your example it's not as obvious as you have only two values, but for bit filtering each value represents a single bit so that each value is twice as large as the previous:
public enum Filter {
First = 0x0001,
Second = 0x0002,
Third = 0x0004,
Fourth = 0x0008
}
You can use such an enum to filter out single bits in a value:
if ((num & Filter.First) != 0 && (num & Filter.Third) != 0) {
Console.WriteLine("First and third bits are set.");
}
It could mean anything. We need to see more code than that to be able to understand what it's doing.
0x001 is the number 1. Anytime you see the 0x it means the programmer has entered the number in hexadecimal.
Those are literal hexadecimal numbers.
The main reason is readability: hex notation is easier to read when writing numbers of the form "2 to the power of x".
To use an enum type as a bit flag, we need to increment enum values by powers of 2:
1, 2, 4, 8, 16, 32, 64, etc. To keep it readable, hex notation is used.
Ex: 2^16 is 0x10000 in hex (neat and clean), but it is written 65536 in classical decimal notation. Same for 0x200 (hex notation) and 512 (2^9).
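For what it's worth, later C# versions offer two more spellings of the same powers of two: shift expressions, and (since C# 7.0) binary literals.

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Three spellings of the same number, 512 == 2^9:
        Console.WriteLine(0x200);           // hex literal
        Console.WriteLine(1 << 9);          // shift expression
        Console.WriteLine(0b10_0000_0000);  // binary literal (C# 7.0+)
    }
}
```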
Those look like they are bit masks of some sort. But their actual values are 1 and 2...
You can assign values to enums such as:
enum Example {
a = 10,
b = 23,
c = 0x00FF
}
etc...
Using hexadecimal notation like that usually indicates that there may be some bit manipulation. I've used this notation often when dealing with this very thing, for the very reason you asked this question: this notation sort of pops out at you and says "Pay attention to me, I'm important!"
We could use plain integers; in fact we could omit the values entirely, since by default an enum assigns 0 to its first member and an incremented value to each subsequent member. Developers use explicit hex values to hit two targets with one bow:
The bit pattern of each value is easy to see at a glance, which matters when the enum is used as a bit field.
Powers of two are easier to write and verify in hex than in decimal.
Note that the notation has no effect on performance; the compiled value is the same number either way. It is simply a handy technique for code that plays with bits, such as encryption/decryption routines.
