How does a logical & work on two bytes in C#?

I recently had a test question:
byte a,b,c;
a = 190;
b = 4;
c = (byte)(a & b);
What is the value of c?
I have never used a logical operator in this manner. What's going on here? Stepping through it, the answer is 4, but why?
Also, where would this come up in the real world? I would argue that using logical operators in this manner, with a cast, is just bad practice, but I could be wrong.

You are doing a bitwise AND in this case, not a logical AND. It combines the bits of the two values a and b and gives you a result in which only the bits set in both a and b are set; in this case, just the 4s-place bit.
  190 = 10111110
&   4 = 00000100
----------------
    4 = 00000100
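For reference, here is a minimal sketch that reproduces the calculation and prints the operands in binary; the Convert.ToString overload with base 2 is used only for display:
byte a = 190;              // 10111110
byte b = 4;                // 00000100
byte c = (byte)(a & b);    // & promotes both bytes to int, so the result is cast back to byte
Console.WriteLine(Convert.ToString(a, 2).PadLeft(8, '0'));   // 10111110
Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));   // 00000100
Console.WriteLine(Convert.ToString(c, 2).PadLeft(8, '0'));   // 00000100
Console.WriteLine(c);                                        // 4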
Edit: Interestingly, MSDN itself makes the question of whether to call it logical vs. bitwise a bit muddy. On its overview of the logical operators (&, &&, |, ||, etc.) it refers to them as logical operators (bitwise and boolean), but on the page for & itself it says the operator performs a bitwise AND on integers and a logical AND on bools. So it appears & is still considered a logical operator, but its action on integer types is a bitwise AND.
http://msdn.microsoft.com/en-us/library/sbf85k1c(v=vs.71).aspx

The logical AND operator, when applied to integers, performs a bitwise AND operation. The result bit is 1 in each position where a 1 appears in both operands.
0011
& 0101
------
0001
The decimal value 190 is equivalent to binary 10111110. Decimal 4 is binary 00000100.
Do a logical AND operation on the bits like this:
10111110
& 00000100
----------
00000100
So the result is 4.
Also, where would this come up in the real world? I would argue that using logical operators in this manner, with a cast, is just bad practice, but I could be wrong.
These operations are useful in several circumstances. The most common is when using Enum values as flags.
[Flags]
public enum MyFileOptions
{
None = 0,
Read = 1, // 2^0
Write = 2, // 2^1
Append = 4, // 2^2
}
If an Enum has values that are powers of two, then they can be combined into a single integer variable (with the Logical OR operator).
MyFileOptions openReadWrite = MyFileOptions.Read | MyFileOptions.Write;
In this variable, both bits are set, so it indicates that both the Read and Write options are selected.
The logical AND operator can be used to test values.
bool openForWriting = ((openReadWrite & MyFileOptions.Write) == MyFileOptions.Write);
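As a further illustration, here is a minimal sketch (using the MyFileOptions enum above) of how flags are commonly set, cleared, and tested with the bitwise operators:
MyFileOptions options = MyFileOptions.Read;                 // start with Read only
options |= MyFileOptions.Append;                            // set the Append bit with OR
options &= ~MyFileOptions.Write;                            // clear the Write bit with AND NOT
bool canAppend = (options & MyFileOptions.Append) != MyFileOptions.None;   // test a bit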
NOTE
A lot of people are pointing out that this is actually a bitwise AND, not a logical AND. I looked it up in the spec before I posted, and I was surprised to learn that both versions are referred to as "logical AND" in the spec. This makes sense because it is performing the logical AND operation on each bit. So you are actually correct in the title of the question.

This is a bitwise AND, meaning the bits on both bytes are compared, and a 1 is returned if both bits are 1.
10111110 &
00000100
--------
00000100

One of the uses of a bitwise & on bytes comes up in networking and is called the binary AND test: you AND the bytes together bit by bit, which is how a subnet mask is applied to an IP address to find the network address.
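As a rough sketch (the address and mask values here are made up), applying a subnet mask looks like this:
byte[] address = { 192, 168, 1, 190 };    // example IP address
byte[] mask    = { 255, 255, 255, 0 };    // example subnet mask
byte[] network = new byte[4];
for (int i = 0; i < 4; i++)
    network[i] = (byte)(address[i] & mask[i]);   // yields 192.168.1.0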

In binary
4 == 100
190 == 10111110
& is an AND operation applied to each pair of bits, so it does a binary AND on 4 and 190 in byte form: 10111110 AND 00000100 gives 00000100, so the result is 4.

This is a bitwise AND, not a logical AND. A logical AND is a test of a condition, i.e.:
if(a && b) doSomething();
A bitwise AND looks at the binary value of the two variables and combines them. Eg:
10101010 &
00001000
---------
00001000
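To make the distinction concrete, here is a small sketch: && works on bools and tests a condition, while & on integers combines the bits:
bool x = true, y = false;
if (x && y) Console.WriteLine("both true");   // logical AND on bools: not printed

int m = 0xAA;               // 10101010
int n = 0x08;               // 00001000
Console.WriteLine(m & n);   // 8, i.e. 00001000: bitwise AND on integers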

This lines up the binary of the two numbers, like this:
10111110
00000100
--------
00000100
In each column, it checks whether both bits are 1. If they are, the result bit on the bottom is 1; otherwise it is 0.

I would argue that using logical operators in this manner, with a cast, is just bad practice, but I could be wrong.
The cast is only there because & on two bytes promotes its operands to int, so the int result has to be cast back down to byte. & is defined on integers and intended to be used this way. And just so everyone knows, this technically is a logical operator according to MSDN, which I find kind of crazy.
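A small sketch of why the cast shows up at all, assuming two byte locals: the & operator has no byte overload, so both operands are promoted to int:
byte a = 190, b = 4;
int asInt = a & b;          // fine: & on two bytes produces an int
// byte c = a & b;          // does not compile: cannot implicitly convert int to byte
byte c = (byte)(a & b);     // the cast narrows the int result back down to a byte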

Related

How do this binary operation and boolean conversion work?

I have this code:
int flags = some integer value;
compte.actif = !Convert.ToBoolean(flags & 0x0002);
It works very well; the problem is that I don't really understand how it works.
The & operation is a bitwise AND, I assume. So imagine 110110 & 000010; I assume it will result in 001011 (maybe I'm wrong from here). The goal is to check whether the 2's bit in the first term is set, so in this case it is true.
I don't really understand how it can be converted to a boolean.
Thanks for the help.
Bitwise and of 110110 & 000010 is 000010.
The ToBoolean looks for a non-zero value, so basically, this code checks that flags has the 2nd bit set, then negates it (!). So it is checking "is the 2nd bit clear".
A more traditional test there might be:
compte.actif = (flags & 0x02) == 0;
The bitwise AND operation will give you an integer containing the bits that were set in both numbers. E.g. 0b110011 & 0b010100 yields 0b010000.
The exclamation mark negates the boolean, giving true only if the 2's bit is NOT set.
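A minimal sketch, with a made-up flags value, showing that the Convert.ToBoolean version and the explicit comparison agree:
int flags = 0x0005;                                      // 0101 in binary: the 0x02 bit is clear
bool viaConvert = !Convert.ToBoolean(flags & 0x0002);    // true: the bit is clear
bool viaCompare = (flags & 0x0002) == 0;                 // true: same test written explicitly
Console.WriteLine(viaConvert == viaCompare);             // True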

Curiosity: why do the shift operators have lower precedence than the additive operators?

I'm wondering why the shift operators (<< and >>), being equivalent to a multiplication and a division respectively, have lower precedence than an additive operator such as +.
In other words:
int a = 1 + 2 * 8; //yields 17
whereas:
int a = 1 + 2 << 3; //yields 24
Anyone knows what's the reason behind this behavior?
NOTE: Please, don't answer me "because the specs say so"!
Thank you all in advance.
EDIT: I realized that a left shift by one can be obtained by adding the left operand to itself. Could this be the reason?
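For comparison, a small sketch showing how the two expressions parse and what parenthesizing the shift changes:
int a = 1 + 2 * 8;      // 17: * binds tighter than +
int b = 1 + 2 << 3;     // 24: parsed as (1 + 2) << 3, because << binds looser than +
int c = 1 + (2 << 3);   // 17: parenthesized, the shift acts like the multiplication above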
The relative priority of arithmetic operators and bitwise operators is irrelevant because you should never be using them together anyway. If you want to treat an integer as an array of bits, then don't be adding and subtracting it like a number. If you want to treat an integer as a number, then don't be shifting, or-ing and and-ing it like an array of bits.
Frankly if I had my way there would be no bit shifting operations on integers; you'd have to cast the integer to a BitArray type, that would not have arithmetic on it. The fact that ints are treated as both bit arrays and numbers is an unfortunate design flaw that exists for historical reasons.
The notion that bit shifting is a kind of multiplication and division is a strange one; bit shifting is bit shifting, not multiplication.
If I were to ascribe a rational decision to it, I'd say it's because it is convenient in some common situations, such as when constructing bit patterns: A + B << 8 places the value A + B in the second-to-least significant byte.
It could just as well be an arbitrary precedence assignment, of course, because Dennis Ritchie didn't have any idea where it would fit better (I blatantly assume C# inherits its operator precedence from C). Unfortunately, he isn't here to tell us anymore. :(

C# strange code

Does anyone know what the following code does?
The question is about the following operators: & and |, and 0xfc.
salt[0] = (byte)((salt[0] & 0xfc) | (saltLen & 0x03));
salt[1] = (byte)((salt[1] & 0xf3) | (saltLen & 0x0c));
salt[2] = (byte)((salt[2] & 0xcf) | (saltLen & 0x30));
salt[3] = (byte)((salt[3] & 0x3f) | (saltLen & 0xc0));
The question is about the following operators: & and |, and 0xfc.
& is the bitwise and operator. See http://msdn.microsoft.com/en-us/library/sbf85k1c.aspx.
| is the bitwise or operator. See http://msdn.microsoft.com/en-us/library/kxszd0kx.aspx.
0xfc isn't an operator, it's an integer constant (i.e., a number). See http://msdn.microsoft.com/en-us/library/aa664674(VS.71).aspx and http://en.wikipedia.org/wiki/Hexadecimal.
Well the comment above explains what it's doing, but if you're looking for a breakdown of the operators:
1. Perform a bitwise AND on salt[i] and a hex number (the & operator).
2. Perform a bitwise AND on saltLen and a second hex number.
3. Perform a bitwise OR on the results of steps 1 and 2 (the | operator).
4. Cast the result of step 3 to a byte.
5. Store the result in salt[i].
The result is what is noted in the comment block. Numbers of the form 0xc0 are in hexadecimal, which is base 16; i.e. c0 in hex is 12*16 + 0 = 192 in decimal. In hex, since you run out of digits at 9, you begin using letters: a=10, b=11, c=12, d=13, e=14, f=15. f is the highest "digit", because when you reach 16 you move over by one place (as 16 is the base).
See also:
Bitwise operation
Hexadecimal
// Split salt length (always one byte) into four two-bit pieces and
// store these pieces in the first four bytes of the salt array.
This is a cocky answer, but my intention is to indicate that it is already answered, so please let me know if you need more detail :)
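As a hedged sketch of the whole round trip, with made-up salt bytes and a made-up length: the four two-bit pieces can also be masked back out to recover saltLen.
byte[] salt = { 0x12, 0x34, 0x56, 0x78 };   // made-up salt bytes
byte saltLen = 16;                          // 0x10: made-up length to hide

// pack: the same operations as in the question
salt[0] = (byte)((salt[0] & 0xfc) | (saltLen & 0x03));
salt[1] = (byte)((salt[1] & 0xf3) | (saltLen & 0x0c));
salt[2] = (byte)((salt[2] & 0xcf) | (saltLen & 0x30));
salt[3] = (byte)((salt[3] & 0x3f) | (saltLen & 0xc0));

// unpack: mask each two-bit piece back out and OR the pieces together
byte recovered = (byte)((salt[0] & 0x03) | (salt[1] & 0x0c) |
                        (salt[2] & 0x30) | (salt[3] & 0xc0));
Console.WriteLine(recovered);               // 16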

Why does an arithmetic shift halve a number only in SOME cases?

Hey, I'm teaching myself about bitwise operations, and I saw somewhere on the internet that an arithmetic shift (>>) by one halves a number. I wanted to test it:
44 >> 1 returns 22, ok
22 >> 1 returns 11, ok
11 >> 1 returns 5, and not 5.5, why?
Another Example:
255 >> 1 returns 127
127 >> 1 returns 63 and not 63.5, why?
Thanks.
The bit shift operator doesn't actually divide by 2. Instead, it moves the bits of the number to the right by the number of positions given on the right hand side. For example:
00101100 = 44
00010110 = 44 >> 1 = 22
Notice how the bits in the second line are the same as the line above, merely
shifted one place to the right. Now look at the second example:
00001011 = 11
00000101 = 11 >> 1 = 5
This is exactly the same operation as before. However, the result is 5 because the last bit is shifted off the right end and disappears. Because of this behavior, the right-shift operator is generally equivalent to dividing by two and then throwing away any remainder or decimal portion.
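A quick sketch confirming the truncation; the comparison with integer division shows the same discarded remainder:
Console.WriteLine(44 >> 1);   // 22: 00101100 -> 00010110
Console.WriteLine(11 >> 1);   // 5:  00001011 -> 00000101, the low 1 bit falls off
Console.WriteLine(11 / 2);    // 5:  integer division discards the remainder the same way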
11 in binary is 1011
11 >> 1
means you shift your binary representation to the right by one step.
1011 >> 1 = 101
Then you have 101 in binary which is 1*1 + 0*2 + 1*4 = 5.
If you had done 11 >> 2 you would have as a result 10 in binary i.e. 2 (1*2 + 0*1).
Shifting right by 1 transforms sum(A_i * 2^i) for i = 0..n into sum(A_(i+1) * 2^i) for i = 0..n-1;
that's why, if your number is even (i.e. A_0 = 0), it is divided exactly by two. (Sorry for the improvised LaTeX syntax. :))
Binary has no concept of decimal numbers. It's returning the truncated (int) value.
11 = 1011 in binary. Shift to the right and you have 101, which is 5 in decimal.
Bit shifting by n is the same as multiplication or division by 2^n. In integer arithmetic a right shift rounds the result down, towards negative infinity (for non-negative values this is the same as simply truncating). In floating-point arithmetic, bit shifting is not permitted.
Internally, bit shifting really does just shift bits, and the rounding simply means that bits falling off the edge are removed (it does not actually compute the precise value and then round it). The new bits that appear on the opposite edge are always zeroes on the right-hand side, and zeroes on the left-hand side for positive values. For negative values, one bits are appended on the left-hand side, so that the value stays negative (see how two's complement works) and the arithmetic definition above still holds.
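A small sketch of the sign extension described above, assuming a 32-bit int:
int positive = 11;
int negative = -11;
Console.WriteLine(positive >> 1);   // 5:  zeroes shifted in from the left
Console.WriteLine(negative >> 1);   // -6: ones shifted in, rounding towards negative infinity
Console.WriteLine(negative / 2);    // -5: integer division rounds towards zero instead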
In most statically-typed languages, the return type of the operation is e.g. "int". This precludes a fractional result, much like integer division.
(There are better answers about what's 'under the hood', but you don't need to understand those to grok the basics of the type system.)

Long type, left shift and right shift operations

Continuing my previous question
Why I cannot derive from long?
I found an interesting problem.
Step one:
4294967296 & 0xFFFFFFFF00000000
Result: 4294967296.
Step two.
4294967296 & 0x00000000FFFFFFFF
Result: 0
Aha! So here I assume that 4294967296 == 0xFFFFFFFF.
Let's check
(long)0x00000000FFFFFFFF
Result: 4294967295. Fail.
Let's double check
4294967296 >> 32
Result: 1. Fail.
The only explanation I can think of is that I am using long, where one bit is reserved for the sign. In C I would use unsigned long.
What do you think guys?
4294967296 & 0xFFFFFFFF00000000 = 4294967296
This indicates that the value 4294967296 has no bits set in the lower 32 bits. In fact, 4294967296 is 0x100000000, so this is true.
4294967296 >> 32 = 1
Again, consistent.
In other words, your conclusion that 4294967296 is 0xFFFFFFFF is wrong so the remaining checks will not support this.
Um... I'm not sure why you came to the conclusions you did, but 4294967296 is 0x100000000. To write out the bitwise ANDs in easily readable hex...
0x0000000100000000 &
0x00000000FFFFFFFF =
0x0000000000000000
0x0000000100000000 &
0xFFFFFFFF00000000 =
0x0000000100000000
Both of those make perfect sense. Perhaps you're misunderstanding a bitwise AND: it keeps only the bits that are set in both operands. Your comments seem more appropriate to a bitwise XOR than a bitwise AND (which is not the operation you're using)...
I think you are failing to understand the bitwise AND operation. The bitwise AND returns only the bits that are set in both operands. If the two values were the same, then
(4294967296 & 0xFFFFFFFF00000000) == 4294967296
and
(4294967296 & 0xFFFFFFFF00000000) == 0xFFFFFFFF00000000
would both hold, but the second one obviously doesn't.
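A short sketch that prints the values involved, which may make the hex easier to follow (the unchecked cast is needed to turn the ulong mask literal into a long):
long value = 4294967296L;                             // 0x100000000: only bit 32 is set
long high  = unchecked((long)0xFFFFFFFF00000000);     // mask for the upper 32 bits
long low   = 0x00000000FFFFFFFFL;                     // mask for the lower 32 bits

Console.WriteLine((value & high) == value);   // True: the set bit is in the upper half
Console.WriteLine(value & low);               // 0
Console.WriteLine(value >> 32);               // 1
Console.WriteLine(value.ToString("X"));       // 100000000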
