I have the following code
Font oldf;
Font newf;
oldf = this.richText.SelectionFont;
if (oldf.Bold)
newf = new Font(oldf, oldf.Style & ~FontStyle.Bold);
else
newf = new Font(oldf, oldf.Style | FontStyle.Bold);
I understand what the code does, but I don't know what these symbols &, | and ~ mean.
Do they mean (and, or, not), or am I wrong?
Like others have stated, they are bitwise operators. FontStyle is a bit field (set of flags).
oldf.Style & ~FontStyle.Bold
This means "remove bold"; looking at the underlying math, you get something like this:
(a) FontStyle.Bold = 0b00000010; // just a guess, it doesn't really matter
(b) oldf.Style = 0b11100111; // random mix here
// we want Bold "unset"
(c) ~FontStyle.Bold = 0b11111101;
=> (b) & (c) = 0b11100101; // oldf without Bold
new Font(oldf, oldf.Style | FontStyle.Bold)
This means that we want to bold the font, by OR'ing the flag into the existing value (which also means that something that's already bold will remain bold).
(a) FontStyle.Bold = 0b00000010; // just a guess, it doesn't really matter
(b) oldf.Style = 0b11100000; // random mix here
=> (a) | (b) = 0b11100010; // oldf with Bold
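A runnable sketch of the same pattern, using a stand-in [Flags] enum (the real System.Drawing.FontStyle uses similar power-of-two values, but the exact numbers don't matter here):

```csharp
using System;

// Stand-in for FontStyle; the values are illustrative powers of two.
[Flags]
enum Style { Regular = 0, Bold = 1, Italic = 2, Underline = 4 }

class Demo
{
    static void Main()
    {
        Style s = Style.Bold | Style.Italic;  // 0b011

        Style withoutBold = s & ~Style.Bold;  // clear the Bold bit -> 0b010
        Style withBold = s | Style.Bold;      // set the Bold bit (already set, so unchanged)

        Console.WriteLine(withoutBold);       // Italic
        Console.WriteLine(withBold);          // Bold, Italic
    }
}
```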
Yes, roughly:
& is the bitwise AND
| is the bitwise OR
~ is the bitwise NOT
See the C# operator reference for descriptions of all the operators.
They are bitwise operations. | is OR, & is AND, and ~ is NOT.
You're combining flags of an enumeration, so each flag (Bold, Italic, etc.) is a number that is some power of two. The magic of bit manipulation with flags like this is that a bitwise OR of two flags produces a value with both bits set. By using bit masking, someone can figure out which values you OR'ed together, or whether a particular flag is present.
The first expression takes the old style AND NOT Bold, i.e. the same style with the Bold bit cleared.
The second takes the old style OR Bold, i.e. the same style with the Bold bit set.
You are right, they are bitwise operators.
new Font(oldf, oldf.Style & ~FontStyle.Bold);
Has the effect of removing the bold attribute from your overall style (this always seemed a bit backwards to me when I began, having to AND something to get rid of it, but you'll get used to it).
new Font(oldf, oldf.Style | FontStyle.Bold);
ORing will add the bold enum to your style.
Do a bit of reading and then work out what is happening with a bit of paper, it is pretty clever and this sort of coding is used all over the place.
They are the logical bitwise operators.
This:
newf = new Font(oldf, oldf.Style & ~FontStyle.Bold);
Is taking the old font style and removing bold by performing a bitwise AND with every bit EXCEPT (bitwise negate) bold.
However this:
newf = new Font(oldf, oldf.Style | FontStyle.Bold);
Is setting the bit represented by FontStyle.Bold.
Those are bitwise operators: http://msdn.microsoft.com/en-us/library/6a71f45d%28v=vs.71%29.aspx (the row "Logical (boolean and bitwise)")
Basically, they work on the bit level. & is AND, | is OR, ~ is NOT. Here's an example:
00000001b & 00000011b == 00000001b (any bits contained by both bytes)
00000001b | 00001000b == 00001001b (any bits contained in either byte)
~00000001b == 11111110b (toggle all bits)
I used single bytes here, but it works for multibyte values as well.
The variables are bit-flag enumerations, so you can AND them together with the bitwise AND operator "&" or OR them together with the bitwise OR operator "|". They're used with enumerations to let you specify multiple options; example below.
[Flags]
enum Numbers {
one = 1, // 001
two = 2, // 010
three = 4 // 100
}
var holder = Numbers.two | Numbers.one; //one | two == 011
if ((holder & Numbers.one) == Numbers.one) {
//Hit
}
if ((holder & Numbers.two) == Numbers.two) {
//Hit
}
if ((holder & Numbers.three) == Numbers.three) {
//Not hit in this example
}
Related
I'm trying to learn C# right now using W3Schools (the website), but I came across some operators I don't completely understand.
I know there are many questions asking what &= does in C#, but I couldn't find anything relevant to my issue (dealing with numbers rather than true or false values).
From what I've gathered online, the && operator is just an AND operator, and the & operator is also an AND operator but with all conditions evaluated.
But when I searched for &=, I couldn't find anything relevant to the way it's used on the W3Schools website. It shows the &= operator in use with numbers rather than booleans, and in the section that lets you try it out, I was receiving output that I couldn't understand.
This is the link to the website page:
https://www.w3schools.com/cs/cs_operators_assignment.php
This is the link to the 'try it out' section where I got the code:
https://www.w3schools.com/cs/trycs.php?filename=demo_oper_ass7
Here's the code:
int x = 5;
x &= 3;
Console.WriteLine(x);
When I leave it as shown in the code above, I get an output of 1
When x = 10, output is 2
When x = 15, output is 3
When x = 20, output is 0
When x = 4329, output is 1
etc...
Please can somebody explain the &= operator, and if possible, the |=, ^= operators too?
I understand the use of all these operators by themselves when I search them up; however, those explanations don't match the usage of the ...= versions shown on the website.
Thanks a lot
Let me preface this with the fact that I technically don't know C#, but I do know C, C++, and Java, and I also know that C# is a language in the C/C++ family (just like Java), so I would literally bet my life on this being correct in any and all of the languages mentioned.
Generally, for any binary operator _, the expression a _= b is (supposed to be considered; C++ kind of messes with this, but that's beside the point) equivalent to a = a _ b. As such,
a &= b is a = a & b
a |= b is a = a | b
a ^= b is a = a ^ b
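A quick check of those equivalences with the question's numbers (5 is 101 in binary, 3 is 011):

```csharp
using System;

class Demo
{
    static void Main()
    {
        int x = 5;             // 101
        x &= 3;                // 101 & 011 = 001
        Console.WriteLine(x);  // 1

        int y = 5;
        y |= 3;                // 101 | 011 = 111
        Console.WriteLine(y);  // 7

        int z = 5;
        z ^= 3;                // 101 ^ 011 = 110
        Console.WriteLine(z);  // 6
    }
}
```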
All of those operators (&, |, ^) are, as you correctly recognized, boolean (as in, concerning only "true" and "false") operations. However, they are bitwise (as in binary numbers) boolean operations.
The difference between the "logical" operators (as they are generally called, although the term is fairly misleading) and the "bitwise" operators is that the "logical" versions consider the veracity ("true-or-false-ness") of their operands as a whole, whereas the "bitwise" versions (remember that "bit" is short for "binary digit") consider each bit, i.e. each digit of their operands when written in the binary system (Wiki "Positional notation" and "Binary number" for more information).
Thus, && and || (there is no "logical" version of ^ in any C-like language I know) work on truth values as a whole, which, in the case of Java and C#, means type boolean/bool. For example, operator && indicates logical AND:
a     | b     | a && b
------+-------+-------
true  | true  | true
true  | false | false
false | true  | false
false | false | false
Analogously, operator || indicates logical OR.
The bitwise operations, however, consider each binary digit of their operands (and since "binary" basically means "having two", this equates to the whole "ones and zeroes" thing you see/hear everywhere), where (predictably) "0" indicates "false" and "1" indicates "true". Ergo, you can use the table above for "a & b" if you simply replace every "false" with "0" and every "true" with "1".
This explains why your program outputs what it does: in the initial example, you print 5 & 3, which, in binary notation, is 101 & 011. Ergo, in the result, only the digits where both operands have a one (namely the last place) will be "1". Observe:
101
& 011
== 001
(If the binary notation is an issue, 5 = 2^2 + 2^0 = 4 + 1, and 3 = 2^1 + 2^0 = 2 + 1.) Since "1 AND 0" is "0" (leftmost digit), "0 AND 1" is "0" (middle digit), and "1 AND 1" is "1" (rightmost digit).
If it helps understanding, consider the program as a mathematician would: argue that any variable can have only one definition, and thus only one value. Thus, when you assign a new value, the system would actually have to introduce a "new" ("hidden") variable for the new definition, so your
x = 5
x = x & 3
print x
is actually
x0 = 5
x1 = x0 & 3
print x1 // not a math thing, just here for completeness
(This is actually how the compiler views your program; Wiki "Static single-assignment form" if interested.)
The bottom line is: if you have several boolean values (true or false), you can combine them into a single integer value (byte/sbyte/short/ushort/int/uint/long/ulong) by assigning one binary place to each boolean value, and then use the bitwise operations (& → AND, | → OR, ^ → XOR) to combine all of the boolean values at once. Consider the following definitions (and note again that I don't actually know C# per se, so I'm pretty much winging this based on some Googling; there might be some issues, but I trust the principle becomes clear):
static readonly int CHEAP = 0; // binary: 000
static readonly int EXPENSIVE = 1; // binary: 001
static readonly int LIGHT = 0; // binary: 000
static readonly int HEAVY = 2; // binary: 010
static readonly int WEAK = 0; // binary: 000
static readonly int POWERFUL = 4; // binary: 100
// and now, the combinations:
static readonly int GENERIC = 0; // binary: 000 (cheap, light, weak)
static readonly int LUXURY = 3; // binary: 011 (expensive, heavy, ?)
static readonly int MUSCLE = 7; // binary: 111 (expensive, heavy, powerful)
static readonly int PONY = 6; // binary: 110 (cheap, heavy, powerful)
Here, we consider the following veracities:
The 2^0 (i.e. rightmost binary) place indicates "expensive".
The 2^1 (i.e. middle binary) place indicates "heavy".
The 2^2 (i.e. leftmost binary) place indicates "powerful".
Now, we can put all three boolean values into an integer (would fit into a byte, but usually people use int):
class Car {
readonly string name;
readonly int traits;
Car(string name, int traits) {
this.name = name;
this.traits = traits;
}
bool isGeneric() {
// if and only if all veracities are "false"
// should be the same as "return traits == GENERIC", but somebody might set traits to > 7
// note that EXPENSIVE | HEAVY | POWERFUL == 7, so we only consider the three "defined" bools
return (traits & (EXPENSIVE | HEAVY | POWERFUL)) == GENERIC;
}
bool isLuxury() {
// if and only if the two veracities dictated by LUXURY match, with no regard for power
return (traits & LUXURY) == LUXURY;
}
bool isMuscle() {
// if and only if all veracities are "true"
// should be the same as "return traits == MUSCLE", but somebody might set traits to > 7
return (traits & MUSCLE) == MUSCLE;
}
bool isPony() {
// if and only if all three veracities dictated by PONY match
// note that EXPENSIVE | HEAVY | POWERFUL == 7, so we only consider the three "defined" bools
// also note that this requires that EXPENSIVE not be set,
// i.e. "== PONY" is equivalent to "== (HEAVY | POWERFUL)" and "== (CHEAP | HEAVY | POWERFUL)"
return (traits & (EXPENSIVE | HEAVY | POWERFUL)) == PONY;
}
}
Then, we can do something like:
// "PONY" could also be written as "CHEAP | HEAVY | POWERFUL" or "HEAVY | POWERFUL"
Car fordMustang = new Car("Ford Mustang", PONY);
(And yes, I drive a Mustang. :P)
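For what it's worth, the idiomatic C# version of the same idea uses a [Flags] enum rather than int constants; here's a rough sketch (the enum name and members are my own, not from any library):

```csharp
using System;

// Hypothetical flags enum mirroring the CHEAP/EXPENSIVE, LIGHT/HEAVY, WEAK/POWERFUL traits above.
[Flags]
enum CarTraits { None = 0, Expensive = 1, Heavy = 2, Powerful = 4 }

class Demo
{
    static void Main()
    {
        var pony = CarTraits.Heavy | CarTraits.Powerful;  // cheap, heavy, powerful

        var allTraits = CarTraits.Expensive | CarTraits.Heavy | CarTraits.Powerful;

        // "has every flag in the mask" test: (value & mask) == mask
        bool isMuscle = (pony & allTraits) == allTraits;
        bool isPony = (pony & allTraits) == (CarTraits.Heavy | CarTraits.Powerful);

        Console.WriteLine(isMuscle);  // False
        Console.WriteLine(isPony);    // True
    }
}
```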
I have a 3-element enum that defines one of three contexts, for example red, green, or blue. This enum is used in a loop with millions of iterations (one per pixel, say). The values are currently one int apart, the default. Given a desired production order of R,G,B,R,G,B..., I currently check whether the value is B, in which case I reset it to R; otherwise I increment it.
private enum CHANNEL_CONTEXT {RED, GREEN, BLUE} //here is a sample enum
//here is a sample loop with the relevant construct
CHANNEL_CONTEXT current = CHANNEL_CONTEXT.RED;
while(condition)
{
use current;
//...
if(current == CHANNEL_CONTEXT.BLUE)
current = CHANNEL_CONTEXT.RED
else
current+=1;
}
Is there a way to wrap a 3-value enum with a single operation, such that no branch is required to determine when it is time to wrap? I know modulus (%) fits the bill, but my motivation is performance, and at best I'd break even with such an expensive operation (testing corroborated this, though not exhaustively).
To put my agenda in perspective: if I had 256 relevant values, I could create a byte-based enum and increment with impunity, relying on intended overflow. Alas, I only have three, and I can't think of a way to manipulate any integral primitive so that three values are produced cyclically using a lightweight ALU operation (+, -, &, ^, |, <<, etc.). I also wouldn't have been able to think of a way to swap bits without a temporary using such operations, yet a rarely practical but possible way to do so exists.
Can someone guide me to a way to distribute 3 integral enum values such that they can be traversed periodically, with no branch and no division-based operator (like modulus)?
While it sounds very unlikely that you can beat x = (x + 1) % 3, you can try to use a mapping table:
var map = new[]{1,2,0};
x = map[x];
You probably would need to wrap that in unsafe to remove boundary checks on array access.
If you are really set on bit manipulation, irrespective of code readability, the table of the numbers you are interested in is small enough to build manually for each bit and then combine.
Truth table:
Source       Result
Bit2 Bit1    Bit2 Bit1
 0    0       0    1
 0    1       1    0
 1    0       0    0
 1    1       x    x
As you can see, the values we are interested in only produce 2 non-zero bits, so the resulting expression is very simple: one case for the lower bit and one for the higher bit (assuming values never fall outside the range 0-2, which is safe if this is the only transformation).
var b1 = (x & 1) >> 0; // extract lower bit 0
var b2 = (x & 2) >> 1; // extract higher bit 1
// only care of cases when pair of bits results in 1
var resultBit1 = 1 & (~b1 & ~b2); // 00 -> x1, other cases is 0
var resultBit2 = (1 & (b1 & ~b2)) << 1; // 01 -> 1x, other cases is 0
x = resultBit1 | resultBit2;
Or inlining all into one unreadable line:
x = 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;
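A small self-contained sketch to convince yourself the inline expression really cycles 0 → 1 → 2 → 0 (wrapped in a helper here only for readability):

```csharp
using System;

class Demo
{
    // Branch-free step: 0 -> 1 -> 2 -> 0, using only AND/OR/NOT/shift.
    static int Next(int x) => 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;

    static void Main()
    {
        int x = 0;
        for (int i = 0; i < 6; i++)
        {
            Console.Write(x);  // prints the current channel index
            x = Next(x);
        }
        // output: 012012
    }
}
```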
I am using a Flags enum to track the completion stages of a data migration process for each data record. I need a way to reset back to a specified stage where I can begin reprocessing the migration of a data record. How does one reset the higher bits in a Flags enum?
Example Enum:
[Flags]
public enum MigrationStages {
None = 0,
Started = 1,
MiddleStage = 2,
WrappingUp = 4,
Finished = 8
}
My current value:
var currentStage =
MigrationStages.None
| MigrationStages.Started
| MigrationStages.MiddleStage
| MigrationStages.WrappingUp
| MigrationStages.Finished;
I want to reset back to MigrationStages.MiddleStage to cause reprocessing to occur starting there.
Bitwise math is not something we use much anymore, so when I went searching for an answer to this I found nothing that helped, and I worked it out myself. Sharing my math with the world in case others find it useful.
I created a simple helper method to do this, as follows:
public static MigrationStages ClearHigherFlags(MigrationStages orig, MigrationStages highBit)
{
var lowerBits = (int)orig % (int)highBit;
return highBit + lowerBits;
}
Usage example:
currentStage = ClearHigherFlags(currentStage, MigrationStages.MiddleStage);
Obviously, if you want to clear higher flags including the highBit, just don't add it back. To clear lower flags, return orig - lowerBits.
In bitwise math, modulus (%) is often your friend.
Addendum
There are those who will find this answer and think that it's not really bit math. I hope this assuages those folks.
First, recall that we're talking about flags here, a very specific subset of bit manipulation where modulus makes the math easier to read and is quite appropriate. The pure bitwise equivalent is something like what follows, which I find much less intuitive to read.
public static MigrationStages ClearHigherFlags(MigrationStages orig, MigrationStages highBit)
{
var bitMask = highBit - 1;
var lowerBits = orig & bitMask;
return highBit | lowerBits;
}
It's really not too hard to read but the conversion to a bit mask is done implicitly in my original solution.
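Here's a self-contained sketch of the mask-based approach against the enum from the question (using | rather than + to put the high bit back, which is equivalent since the bit is guaranteed clear after masking):

```csharp
using System;

[Flags]
enum MigrationStages { None = 0, Started = 1, MiddleStage = 2, WrappingUp = 4, Finished = 8 }

class Demo
{
    // Clears every flag at or above highBit, then sets highBit itself.
    static MigrationStages ClearHigherFlags(MigrationStages orig, MigrationStages highBit)
        => highBit | (orig & (highBit - 1));

    static void Main()
    {
        var current = MigrationStages.Started | MigrationStages.MiddleStage
                    | MigrationStages.WrappingUp | MigrationStages.Finished;

        current = ClearHigherFlags(current, MigrationStages.MiddleStage);
        Console.WriteLine(current);  // Started, MiddleStage
    }
}
```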
If you want to use bitwise manipulation you can do it this way:
var lowbits = MigrationStages.MiddleStage | MigrationStages.Started;
Then to clear the high bits in your example:
currentStage = currentStage & lowbits;
Maybe this will make more sense:
8 4 2 1
==========
lowbits 0 0 1 1
currentvalue 1 1 1 1
==========
AND (&) 0 0 1 1
which clears the two high bits
I was wondering how the following enum masking works
If I have an Enum structure
public enum DelMask
{
pass = 1,
fail = 2,
abandoned = 4,
distinction = 8,
merit = 16,
defer = 32,
}
I have seen the following code
int pass = 48;
if ((pass & (int)DelMask.defer) > 0)
//Do something
else if ((pass & (int)DelMask.merit ) > 0)
//Do something else
Can anyone help me figure out which block will get executed, and why?
Basic bit logic at work here. The integer 48 looks like this in binary:
0011 0000
Defer, 32, is:
0010 0000
Merit, 16, is:
0001 0000
Now when you perform a logical AND (&), the resulting bits are set where they are both in the input:
pass & (int)DelMask.defer
0011 0000
0010 0000
========= &
0010 0000
The result will be 32, so ((pass & (int)DelMask.defer) > 0) evaluates to true. Both conditions would evaluate to true in your example because both flags are present in the input, but the second one won't be evaluated, because it's an else if.
Both are correct so the first will get executed.
16 is 10000
32 is 100000
48 is 16+32 so it is 110000
10000 & 110000 is 10000
100000 & 110000 is 100000
Both are bigger than zero.
48 = 16 (merit) + 32 (defer).
Thus pass & (int)DelMask.defer evaluates to 32, so the first block runs.
If that wasn't the case, pass & (int)DelMask.merit evaluates to 16, so the second block would run if it got that far.
This only works because the values in the enum are all different powers of 2 and thus correspond to independent bits in the underlying int. This is what is known as a bit flags enum.
First, it should be int pass = 48;
Basically this code checks whether a bit is set in the binary representation of the number. Each & operation produces a result whose bits are zero everywhere except where both the value and the mask have a one. For instance:
48: 110000
defer = 32: 100000
______
& 100000
So you can use this code:
int pass = 48;
if ((pass & (int)DelMask.defer) == (int)DelMask.defer)
//Do something
else if ((pass & (int)DelMask.merit ) == (int)DelMask.merit)
//Do something else
Well, you need to think of those numbers as binary. I'll use the d suffix for decimal notation and the b suffix for binary notation.
enum values:
01d = 000001b
02d = 000010b
04d = 000100b
08d = 001000b
16d = 010000b
32d = 100000b
pass value:
48d = 110000b
Now the & is the bit-wise AND operator. Which means that if c = a&b, the nth bit in c will be 1 if and only if the nth bit is 1 in both a and b.
So:
16d & 48d = 010000b = 16d > 0
32d & 48d = 100000b = 32d > 0
As you see, your number 48d "matches" both 16d and 32d. That is why this kind of enum is generally described as a "flags" enum: a single integer can carry the values of several "flags".
As for your code, the first if condition is satisfied, which means you will enter it and "Do something". You will not "Do something else".
Generally in C#, we mark flag enums with the [Flags] attribute. Note that it doesn't assign the power-of-two values for you; it mainly documents intent and makes ToString print the set flags. As usual, the example in the MSDN is not very helpful, so I'll refer to this SO question for more details about how to use it. (Note that to know whether a value x has a flag f set, you can do either x & f == f or x | f == x; the latter seems to be the more common idiom.)
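A sketch showing the equivalent flag tests against the enum from the question (member names capitalized per C# convention, and [Flags] added):

```csharp
using System;

[Flags]
enum DelMask { Pass = 1, Fail = 2, Abandoned = 4, Distinction = 8, Merit = 16, Defer = 32 }

class Demo
{
    static void Main()
    {
        var grade = DelMask.Merit | DelMask.Defer;  // 48, as in the question

        // Three equivalent ways to ask "is Defer set?"
        Console.WriteLine((grade & DelMask.Defer) == DelMask.Defer);  // True
        Console.WriteLine((grade | DelMask.Defer) == grade);          // True
        Console.WriteLine(grade.HasFlag(DelMask.Defer));              // True
    }
}
```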
How do I use logical operators to determine if a bit is set, or is bit-shifting the only way?
I found this question that uses bit shifting, but I would think I can just AND out my value.
For some context, I'm reading a value from Active Directory and trying to determine if it is a Schema Base Object. I think my problem is a syntax issue, but I'm not sure how to correct it.
foreach (DirectoryEntry schemaObjectToTest in objSchema.Children)
{
var resFlag = schemaObjectToTest.Properties["systemFlags"].Value;
//if bit 10 is set then can't be made confidential.
if (resFlag != null)
{
byte original = Convert.ToByte( resFlag );
byte isFlag_Schema_Base_Object = Convert.ToByte( 2);
var result = original & isFlag_Schema_Base_Object;
if ((result) > 0)
{
//A non zero result indicates that the bit was found
}
}
}
When I look at the debugger:
resFlag is an object{int} and the value is 0x00000010.
isFlag_Schema_Base_Object, is 0x02
resFlag is 0x00000010 which is 16 in decimal, or 10000 in binary. So it seems like you want to test bit 4 (with bit 0 being the least significant bit), despite your comment saying "if bit 10 is set".
If you do need to test bit 4, then isFlag_Schema_Base_Object needs to be initialised to 16, which is 0x10.
Anyway, you are right - you don't need to do bit shifting to see if a bit is set, you can AND the value with a constant that has just that bit set, and see if the result is non-zero.
If the bit is set:
original xxx1xxxx
AND
isFlag_Schema_Base_Object 00010000
-----------------------------------
= 00010000 (non-zero)
But if the bit isn't set:
original xxx0xxxx
AND
isFlag_Schema_Base_Object 00010000
-----------------------------------
= 00000000 (zero)
Having said that, it might be clearer to initialise isFlag_Schema_Base_Object using the value 1<<4, to make it clear that you're testing whether bit 4 is set.
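A minimal sketch of that test, assuming the relevant flag really is bit 4 (the constant name here is my own, not from Active Directory's documentation):

```csharp
using System;

class Demo
{
    static void Main()
    {
        int systemFlags = 0x10;  // the value seen in the debugger: only bit 4 set

        // Hypothetical name; 1 << 4 makes the bit position explicit.
        const int FLAG_SCHEMA_BASE_OBJECT = 1 << 4;

        // AND with the mask; a non-zero result means the bit is set.
        bool isSet = (systemFlags & FLAG_SCHEMA_BASE_OBJECT) != 0;
        Console.WriteLine(isSet);  // True
    }
}
```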
If you know which bit to check and you're dealing with int's you can use BitVector32.
using System.Collections.Specialized;
using System.Linq;

int yourValue = 5;
BitVector32 bv = new BitVector32(yourValue);
int bitPositionToCheck = 3;
int mask = Enumerable.Range(0, bitPositionToCheck).Select(BitVector32.CreateMask).Last();
bool isSet = bv[mask];
Using bitshifting is probably cleaner than using CreateMask. But it's there :)