FlagsAttribute Enum problems - C#

So I'm building an MSNP (Windows Live Messenger) client, and I've got this list of capabilities:
[Flags]
public enum UserCapabilities : long
{
None = 0,
MobileOnline = 1 << 0,
MSN8User = 1 << 1,
RendersGif = 1 << 2,
// ...
MsgrVersion7 = 1 << 30,
MsgrVersion8 = 1 << 31,
MsgrVersion9 = 1 << 32,
}
full list here http://paste.pocoo.org/show/383240/
The server sends each user's capabilities to the client as a long integer, which I parse and cast to UserCapabilities:
capabilities = Int64.Parse(e.Command.Args[3]);
user._capabilities = (UserCapabilities)capabilities;
This is fine, and with at least one user (with a capability value of 1879474220), I can do
Debug.WriteLine(_msgr.GetUser(usr).Capabilities);
and this will output
RendersGif, RendersIsf, SupportsChunking, IsBot, SupportsSChannel, SupportsSipInvite, MsgrVersion5, MsgrVersion6, MsgrVersion7
But with another user, who has a capability value of 3055849760, when I do the same, I just get the number itself output:
3055849760
What I would like to be seeing is a list of capabilities, as it is with the other user.
I'm sure there is a very valid reason for this happening, but no matter how hard I try to phrase the question to Google, I am not finding an answer.
Please help me :)

The definition of the shift operators means that only the five least significant bits of the shift count are used for 32-bit numbers, and only the low-order six bits for 64-bit numbers; meaning:
1 << 5
is identical to
1 << 37
(both are 32)
By making it:
MsgrVersion9 = 1L << 32
you make it a 64-bit number, which is why leppie's fix works; otherwise the << is evaluated first (and note that 1<<32 is identical to 1<<0, i.e. 1), and then the resulting 1 is converted to a long; so it is still 1.
From §14.8 in the ECMA spec:
For the predefined operators, the number of bits to shift is computed as follows:
When the type of x is int or uint, the shift count is given by the low-order five bits of count. In other words, the shift count is computed from count & 0x1F.
When the type of x is long or ulong, the shift count is given by the low-order six bits of count. In other words, the shift count is computed from count & 0x3F.
If the resulting shift count is zero, the shift operators simply return the value of x.
Shift operations never cause overflows and produce the same results in checked and unchecked contexts.
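To see the masking concretely, here's a quick sketch you can drop into a console app (plain C#, nothing assumed beyond the operators above):
int count = 32;
Console.WriteLine(1 << count);  // 1, because count & 0x1F == 0
Console.WriteLine(1 << 5);      // 32
Console.WriteLine(1 << 37);     // 32, because 37 & 0x1F == 5
Console.WriteLine(1L << count); // 4294967296, because the left operand is 64-bit and count & 0x3F == 32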

The problem could be with arithmetic overflow.
Specifically at:
MsgrVersion8 = 1 << 31,
MsgrVersion9 = 1 << 32,
I suggest you make it:
MsgrVersion8 = 1L << 31,
MsgrVersion9 = 1L << 32,
To prevent accidental overflow.
Update:
Seems likely, as the smaller number only 'touches' 31 bits, while the bigger one 'touches' 32 bits.
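To illustrate the fix (a sketch only; it assumes the full enum from the paste above, recompiled with the 1L forms):
[Flags]
public enum UserCapabilities : long
{
    // ... earlier members unchanged ...
    MsgrVersion8 = 1L << 31, // 2147483648 rather than -2147483648
    MsgrVersion9 = 1L << 32, // 4294967296 rather than 1
}

// Bit 31 of 3055849760 now maps to a named flag, so Enum.ToString
// can decompose the value into flag names again:
Debug.WriteLine((UserCapabilities)3055849760L);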

Related

Convert a variable size hex string to signed number (variable size bytes) in C#

C# provides the methods Convert.ToUInt16("FFFF", 16) / Convert.ToInt16("FFFF", 16) to convert hex strings into unsigned and signed 16-bit integers. These methods work fine for 16/32-bit values, but not so for 12-bit values.
I would like to convert a 3-character hex string to a signed integer. How could I do it? I would prefer a solution that takes the size (in bits) as a parameter to decide the signedness:
Convert(string hexString, int fromBase, int size)
Convert("FFF", 16, 12) return -1.
Convert("FFFF", 16, 16) return -1.
Convert("FFF", 16, 16) return 4095.
The easiest way I can think of to convert 12-bit signed hex to a signed integer is as follows:
string value = "FFF";
int convertedValue = (Convert.ToInt32(value, 16) << 20) >> 20; // -1
The idea is to shift the value as far left as possible so that the sign bits line up, then shift right again to the original position. This works because a "signed shift right" (an arithmetic shift) replicates the sign bit.
You can generalize this into a method as follows:
int Convert(string value, int fromBase, int bits)
{
int bitsToShift = 32 - bits;
return (System.Convert.ToInt32(value, fromBase) << bitsToShift) >> bitsToShift; // the System. qualifier is needed because this method is itself named Convert
}
You can cast the result to a short if you want a 16-bit value when working with 12-bit hex strings. Performance will match a dedicated 16-bit version, because the bit-shift operators on short promote the values to int anyway, and this gives you the flexibility to specify more than 16 bits if needed without writing another method.
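A quick check against the outputs requested in the question:
Console.WriteLine(Convert("FFF", 16, 12));  // -1
Console.WriteLine(Convert("FFFF", 16, 16)); // -1
Console.WriteLine(Convert("FFF", 16, 16));  // 4095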
Ah, you'd like to calculate the two's complement for a certain number of bits (12 in your case, but really it should work with any width).
Here's the code in C#, blatantly stolen from the Python example in the wiki article:
int Convert(string hexString, int fromBase, int num_bits)
{
var i = System.Convert.ToUInt16(hexString, fromBase);
var mask = 1 << (num_bits - 1);
return (-(i & mask) + (i & ~mask));
}
Convert("FFF", 16, 12) returns -1
Convert("4095", 10, 12) is also -1 as expected

C# hexadecimal & comparison

I ran into a bit of code similar to the code below and was just curious if someone could help me understand what it's doing:
int flag = 5;
Console.WriteLine(0x0E & flag);
// 5 returns 4, 6 returns 6, 7 returns 6, 8 returns 8
Sandbox:
https://dotnetfiddle.net/NnLyvJ
This is the bitwise AND operator.
It performs an AND operation on the corresponding bits of two numbers.
A logical AND operation on two Boolean values returns True if both values are True, and False otherwise.
A bitwise AND operation on two numbers returns a number whose bits are 1 (True) only where the corresponding bits of both inputs are 1.
Example:
5 = 101
4 = 100
AND = 100 = 4
Therefore, 5 & 4 = 4.
This logic is heavily used for storing flags: you assign each flag a power of 2 (1, 2, 4, 8, etc.) so that each flag is stored in a different bit of the flags number; then flags & FLAG_VALUE returns FLAG_VALUE if the flag is set, and 0 otherwise.
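For instance, with plain integers (illustrative values, not from the question):
const int FLAG_A = 1, FLAG_B = 2, FLAG_C = 4;
int flags = FLAG_A | FLAG_C;        // 101 in binary
Console.WriteLine(flags & FLAG_C);  // 4: the flag is set
Console.WriteLine(flags & FLAG_B);  // 0: the flag is not set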
C# provides a "cleaner" way to do this using enums and the Flags attribute.
[Flags]
public enum MyFlags
{
Flag0 = 1 << 0, // using the bitwise shift operator to make it more readable
Flag1 = 1 << 1,
Flag2 = 1 << 2,
Flag3 = 1 << 3,
}
void a()
{
var flags = MyFlags.Flag0 | MyFlags.Flag1 | MyFlags.Flag3;
Console.WriteLine(Convert.ToString((int) flags, 2)); // prints the binary representation of flags: "1011" (11 in base 10)
Console.WriteLine(flags); // because the enum has the Flags attribute, this prints "Flag0, Flag1, Flag3" instead of treating 11 as an invalid value
Console.WriteLine(flags.HasFlag(MyFlags.Flag1)); // HasFlag is defined on System.Enum and is equivalent to (flags & MyFlags.Flag1) == MyFlags.Flag1
}
Excuse my bad English.

Understanding this snippet of code _num = (_num & ~(1L << 63));

Can any explain what this section of code does: _num = (_num & ~(1L << 63));
I have been reading up on RNGCryptoServiceProvider and came across http://codethinktank.blogspot.co.uk/2013/04/cryptographically-secure-pseudo-random.html with the code; I can follow most of the code except for the section above.
I understand it's ensuring that all numbers are positive, but I do not know how it's doing that.
Full code
public static long GetInt64(bool allowNegativeValue = false)
{
using (RNGCryptoServiceProvider _rng = new RNGCryptoServiceProvider())
{
byte[] _obj = new byte[8];
_rng.GetBytes(_obj);
long _num = BitConverter.ToInt64(_obj, 0);
if (!allowNegativeValue)
{
_num = (_num & ~(1L << 63));
}
return _num;
}
}
Any help explaining it would be appreciated
<< is the bit-shift operator: 1L << 63 results in shifting the 1 left 63 places, i.e. a 1 followed by 63 zeros.
~ is bitwise NOT, so applying it to the above results in a 0 followed by 63 ones.
& is bitwise AND; it applies the AND operation bitwise to both operands.
Ultimately this filters the value down to 63 bits of data, since the highest bit will be zeroed out by the AND.
The reason this forces the value to be positive is that the highest bit (#64 in your case) is used as the sign bit in two's complement representation, and this code simply zeroes it out, forcing the result to be non-negative, i.e. positive.
a = ~(1L << 63) = ~0x8000000000000000 = 0x7FFFFFFFFFFFFFFF
so m &= a clears the highest bit of m, thus ensuring it's positive, assuming the two's complement encoding of signed integers is used.
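You can verify the mask directly (a minimal sketch):
long mask = ~(1L << 63);
Console.WriteLine(mask.ToString("X"));   // 7FFFFFFFFFFFFFFF
Console.WriteLine(long.MinValue & mask); // 0
Console.WriteLine(-1L & mask);           // 9223372036854775807 (long.MaxValue)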

integer overflow in C# with left shift

I have the following line of code in C#:
ulong res = (1<<(1<<n))-1;
for some integer n.
As long as n is lower than 5, I get the correct answer.
However, for n>=5, it does not work.
Any idea, using bitwise operators, how to get the correct answer even for n=5 and n=6?
For n=6, the result should be ~0UL, and for n=5, the result should be 0xFFFFFFFF.
As long as n is lower than 5, I get the correct answer. However, for n>=5, it does not work.
Well, it obeys the specification. From section 7.9 of the C# 5 spec:
The << operator shifts x left by a number of bits computed as described below.
For the predefined operators, the number of bits to shift is computed as follows:
When the type of x is int or uint, the shift count is given by the low-order five bits of count. In other words, the shift count is computed from count & 0x1F.
When n is 5, 1 << n (the inner shift) is 32, so you've then effectively got:
int x = 32;
ulong res = (1 << x) - 1;
Now 32 & 0x1f is 0... hence you have (1 << 0) - 1 which is 0.
Now if you make the first operand of the "outer" shift operator 1UL as suggested by p.s.w.g, you then run into this part of the specification instead:
When the type of x is long or ulong, the shift count is given by the low-order six bits of count. In other words, the shift count is computed from count & 0x3F.
So the code will do as it seems you expect, at least for n = 5 - but not for n = 6.
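Putting both rules together (with 1UL as the first operand, per p.s.w.g's fix):
for (int n = 4; n <= 6; n++)
{
    // n = 4: 65535; n = 5: 4294967295; n = 6: 0, because the shift count 64 & 0x3F is 0
    Console.WriteLine((1UL << (1 << n)) - 1);
}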
I believe the problem is that the constant 1 is considered a System.Int32, so the compiler assumes that's the datatype you want to operate on, but the result quickly overflows the bounds of that datatype. If you change it to:
ulong res = (1ul<<(1<<n))-1;
It works for me:
var ns = new[] { 0, 1, 2, 3, 4, 5, 6 };
var output = ns.Select(n => (1ul<<(1<<n))-1);
// { 0x1ul, 0x3ul, 0xful, 0xfful, 0xfffful, 0xfffffffful, 0ul }
The problem is that the literal '1' is a 32-bit signed integer, not a 64-bit unsigned long. You're exceeding the range of a 32-bit integer when n is 5 or more.
Changing the appropriate 1 to 1UL fixes the issue, and works for n=5 (but not n=6, which exceeds the range of a ulong).
ulong res = (1UL<<(1<<n))-1;
Getting it to work for n=6 (i.e. to get 0xFFFFFFFFFFFFFFFF) is not as easy. One simple solution is to use a BigInteger, which avoids the problem that a shift by 64 on a 64-bit integer is reduced to a shift by 0.
// (reference and using System.Numerics)
ulong res = (ulong)((BigInteger.One << (1 << n)) - 1); // note the parentheses: - binds tighter than <<
However, that won't be particularly fast. Maybe an array of the constants?
var arr = new ulong[] {0x1, 0x3, 0xF, 0xFF, 0xFFFF, 0xFFFFFFFF, 0xFFFFFFFFFFFFFFFF};
ulong res = arr[n];
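Alternatively (my suggestion, not from the answers above), you can special-case the one shift count that gets masked away, and skip both BigInteger and the table:
ulong MaskFor(int n) => n >= 6 ? ulong.MaxValue : (1UL << (1 << n)) - 1;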

Working out Enum masks in C#

I was wondering how the following enum masking works
If I have an Enum structure
public enum DelMask
{
pass = 1,
fail = 2,
abandoned = 4,
distinction = 8,
merit = 16,
defer = 32,
}
I have seen the following code
int pass = 48;
if ((pass & (int)DelMask.defer) > 0)
//Do something
else if ((pass & (int)DelMask.merit ) > 0)
//Do something else
I am wondering, can anyone help me figure out which block will get executed?
Basic bit logic at work here. The integer 48 looks like this in binary:
0011 0000
Defer, 32, is:
0010 0000
Merit, 16, is:
0001 0000
Now when you perform a bitwise AND (&), the resulting bits are set only where they are set in both inputs:
pass & (int)DelMask.defer
0011 0000
0010 0000
========= &
0010 0000
The result will be 32, so ((pass & (int)DelMask.defer) > 0) will evaluate to true. Both conditions are true in your example, because both flags are present in the input; the second one won't be evaluated, though, because it's an else if.
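Printing the intermediate values confirms it:
int pass = 48;
Console.WriteLine(pass & (int)DelMask.defer); // 32, so the first branch runs
Console.WriteLine(pass & (int)DelMask.merit); // 16, but the else if is never reached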
Both conditions are true, so the first block will get executed.
16 is 10000
32 is 100000
48 is 16+32 so it is 110000
10000 & 110000 is 10000
100000 & 110000 is 100000
Both are bigger than zero.
48 = 16 (merit) + 32 (defer).
Thus pass & (int)DelMask.defer evaluates to 32, so the first block runs.
If that wasn't the case, pass & (int)DelMask.merit evaluates to 16, so the second block would run if it got that far.
This only works because the values in the enum are all different powers of 2 and thus correspond to independent bits in the underlying int. This is what is known as a bit flags enum.
Basically this code checks whether a bit is set in the binary representation of the number. Each & operation produces a result that is either all zeroes, or has a single one exactly where the mask has it. For instance:
48:         110000
defer = 32: 100000
            ______ &
            100000
So you can use this code:
int pass = 48;
if ((pass & (int)DelMask.defer) == (int)DelMask.defer)
//Do something
else if ((pass & (int)DelMask.merit ) == (int)DelMask.merit)
//Do something else
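The == form starts to matter once the mask has more than one bit set; a quick sketch (the combined mask here is built inline, it is not a member of DelMask):
int pass = 32; // defer only
int both = (int)(DelMask.defer | DelMask.merit); // 48
Console.WriteLine((pass & both) > 0);     // True: at least one bit of the mask matches
Console.WriteLine((pass & both) == both); // False: not all bits of the mask match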
Well, you need to think of those numbers as binary. I'll use the d suffix to show decimal notation and the b suffix for binary notation.
enum values:
01d = 000001b
02d = 000010b
04d = 000100b
08d = 001000b
16d = 010000b
32d = 100000b
pass value:
48d = 110000b
Now the & is the bit-wise AND operator. Which means that if c = a&b, the nth bit in c will be 1 if and only if the nth bit is 1 in both a and b.
So:
16d & 48d = 010000b = 16d > 0
32d & 48d = 100000b = 32d > 0
As you see, your number 48d "matches" both 16d and 32d. That is why this kind of enum is generally described as a "flags" enum: a single integer can carry the values of several "flags".
As for your code, the first if condition will be satisfied, which means that you will enter it and "Do something". You will not "Do something else".
Generally in C#, we use the [Flags] attribute for flags enums; note that the attribute doesn't assign the power-of-2 values for you, but it does make ToString print the names of the set flags. As usual, the example in the MSDN is useless, so I'll refer to this SO question for more details about how to use it (note that to know whether a value x has a flag f set, you can do either x & f == f or x | f == x, but usage seems to generally favor the latter).
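For completeness, the equivalent membership tests mentioned above, applied to the question's enum:
var x = DelMask.pass | DelMask.defer;
var f = DelMask.defer;
Console.WriteLine((x & f) == f); // True
Console.WriteLine((x | f) == x); // True, the equivalent test
Console.WriteLine(x.HasFlag(f)); // True; HasFlag is defined on System.Enum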
