How is a Hex Value manipulated bitwise? - C#

I have a very basic understanding of bitwise operators. I am at a loss, however, to understand how the value is assigned. If someone can point me in the right direction I would be very grateful.
My Hex Address: 0xE0074000
The Decimal value: 3758571520
The Binary Value: 11100000000001110100000000000000
I am trying to program a simple Micro Controller and use the Register access Class in the Microsoft .Net Micro Framework to make the Controller do what I want it to do.
Register T2IR = new Register(0xE0074000);
T2IR.Write(1 << 22);
In my above example, how are the bits in the binary representation moved? I don't understand how the bit manipulation relates to the address in binary form.

Forget about decimals for a start. You'll get back to that later.
First you need to see the relationship between HEX and BINARY.
Okay, for a byte you have 8 bits (#7-0)
#7 = 0x80 = %1000 0000
#6 = 0x40 = %0100 0000
#5 = 0x20 = %0010 0000
#4 = 0x10 = %0001 0000
#3 = 0x08 = %0000 1000
#2 = 0x04 = %0000 0100
#1 = 0x02 = %0000 0010
#0 = 0x01 = %0000 0001
When you read a byte in binary, like this one: %00001000
the bit that is set is the 4th from the right, aka bit #3, which has a value of 0x08 (in fact also 8 in decimal, but keep ignoring decimal while you figure out hex/binary).
Now if we have the binary number %10000000,
this is bit #7 turned on. That has a hex value of 0x80.
So all you have to do is sum them up in "nibbles" (each half of a hex byte is called a nibble by some geeks).
The maximum you can get in a nibble is 15 decimal, or F: for the upper nibble, 0x10 + 0x20 + 0x40 + 0x80 = 0xF0 = binary %11110000.
So all lights on (4 bits) in a nibble = F in hex (15 decimal).
The same goes for the lower nibble: 0x01 + 0x02 + 0x04 + 0x08 = 0x0F = %00001111.
Do you see the pattern?
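For reference, here's a small C# sketch (an illustration of the table above, not from the original answer) that prints the same mapping:
for (int bit = 7; bit >= 0; bit--)
{
    int value = 1 << bit;
    // e.g. bit 7 prints "#7 = 0x80 = %10000000"
    Console.WriteLine($"#{bit} = 0x{value:X2} = %{Convert.ToString(value, 2).PadLeft(8, '0')}");
}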

Refer to @BerggreenDK's answer for what a shift is. Here's some info about what it looks like in hex (same thing, just a different representation):
Shifting is a very simple concept to understand. The register is of a fixed size, and whatever bits won't fit simply fall off the end. So, take this example:
uint num = 0xffffu << 16;
Your variable in hex would now be 0xffff0000. Note how the right end is filled with zeros. Now, let's shift it again.
num = num << 8;
num = num >> 8;
num is now 0x00ff0000. You don't get your old bits back. The same applies to right shifts as well. (uint is used here because right-shifting a signed int is an arithmetic shift: it fills from the left with copies of the sign bit rather than zeros, so the int version would give 0xffff0000 back.)
Trick: left-shifting by 1 is like multiplying the number by 2, and right-shifting by 1 is like integer-dividing it by 2.
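Tying this back to the question: T2IR.Write(1 << 22) writes a value with only bit #22 set. A quick sketch of what that mask looks like (illustration only):
int mask = 1 << 22;
Console.WriteLine($"0x{mask:X8}"); // 0x00400000
Console.WriteLine(Convert.ToString(mask, 2).PadLeft(32, '0')); // 00000000010000000000000000000000
The hex address 0xE0074000 only identifies which register to write to; the shifted value determines which bit gets set in it.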

Related

Parse 4 digit code from binary hash in C#

6 bits from the beginning of the hash and 7 bits from the end of the hash are taken. The resulting 13 bits are converted to a decimal number and printed out. The code is a 4-digit decimal number in the range 0000...8192; all 4 digits are always displayed (e.g. 0041).
Example:
Hash value: 2f665f6a6999e0ef0752e00ec9f453adf59d8cb6
Binary representation of hash: 0010 1111 0110 0110 1111 .... 1000 1100 1011 0110
code – binary value: 0010110110110
code – decimal value: 1462
Example No.2:
Hash value: otr+8gszp9ey/gcCu4Q8ValEbewEb5zL+mvyKakl1Pp7S1Be3klDYh0RcLktBpgs6VbbgCTjsmvjEZtMbUkaEg==
(calculations)
code – decimal value: 5138
With example no. 1 I'm not sure which SHA variant was used. But the second one is definitely SHA-512, and I can't get the desired code 5138.
Steps I made:
Created sha:
var sha = SHA512.Create();
Computed hash 3 ways (str -> hash value from above):
1) var _hash = sha.ComputeHash(Encoding.UTF8.GetBytes(str));
2) var _hash = sha.ComputeHash(Convert.FromBase64String(str));
3) Call procedure: string stringifiedHash = ToBinaryString(Encoding.UTF8, str);
static string ToBinaryString(Encoding encoding, string text)
{
    return string.Join("", encoding.GetBytes(text).Select(n => Convert.ToString(n, 2).PadLeft(8, '0')));
}
Stringified hashes 1 and 2 to a binary value string (this part was ported from a Java project with the exact same calculations):
(((0xFC & _hash[0]) << 5) | (_hash[_hash.Length - 1] & 0x7F)).ToString("0000")
OR substring stringifiedHash and take 6 bits from the beginning and 7 from the end, and then convert:
Convert.ToInt32({value from substring and join}, 2)
But every attempt returned an incorrect answer. Where did I make a mistake while trying to parse the desired code?
Your bitwise logic is incorrect.
upper = 0b111111
lower = 0b0010101
value = upper << 5 | lower
value = 0b11111110101 // this is not the value you wanted; it never goes beyond 11 bits.
Essentially the above operation looks like this
// Notice the lowest two bits of the top will overwrite the top two bits of the bottom.
  0b11111100000
| 0b00000010101
---------------
  0b11111110101 // Only 11 bits...
You need to leave room for the final 7 bits after your | operation. If you only bit-shift 5 times, then when you | with the lower 7 bits you will overwrite 2 of them.
So, updating the shift in your code should solve the problem.
public static string Get13FormattedBits(byte[] hash)
{
    int upper = hash[0] >> 2;                 // the first 6 bits of the first byte
    int lower = hash[hash.Length - 1] & 0x7F; // the last 7 bits of the last byte
    int value = (upper << 7) | lower;         // the 13-bit code
    return value.ToString("0000");
}
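As a sanity check against example no. 1 above: the hash starts with byte 0x2F and ends with byte 0xB6, so the arithmetic works out like this (illustration only):
int upper = 0x2F >> 2;           // 0b001011 = 11 (the first 6 bits)
int lower = 0xB6 & 0x7F;         // 0b0110110 = 54 (the last 7 bits)
int code = (upper << 7) | lower; // 11 * 128 + 54 = 1462
Console.WriteLine(code.ToString("0000")); // "1462", the expected code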

How to manipulate bits of a binary string in C#

I am working on a C# desktop application. I have a bit string and I want to toggle it.
c3 = DecimalToBinary(Convert.ToInt32(tbVal3.Text)).PadLeft(16, '0');
// above c3 is 0000001011110100
Splitting the above string into two (substring):
string part1 = c3.Substring(0, 8); // 00000010
string part2 = c3.Substring(8, 8); // 11110100
The MSB of the first octet shall be set to 1, and the MSB of the second (last) octet shall be set to 0, with that bit shifted into the LSB of the first octet. This gives binary part1 = 10000101 and part2 = 01110100.
I have checked this solution Binary array after M range toggle operations but I still don't understand it.
Rule
in the case of the application context name LN referencing with no ciphering
the arc labels of the object identifier are (2, 16, 756, 5, 8, 1, 1);
• the first octet of the encoding is the combination of the first two
numbers into a single number, following the rule of
40*First+Second -> 40*2 + 16 = 96 = 0x60;
• the third number of the Object Identifier (756) requires two octets: its
hexadecimal value is 0x02F4, which is 00000010 11110100, but following the above rule,
the MSB of the first octet shall be set to 1 and the MSB of the second (last) octet shall
be set to 0, thus this bit shall be shifted into the LSB of the first octet. This gives
binary 10000101 01110100, which is 0x8574;
• each remaining number of the Object Identifier requires one octet;
• this results in the encoding 60 85 74 05 08 01 01.
How can I perform this toggle with binary strings?
Any help would be highly appreciated
You can convert the strings to bytes to be able to manipulate the bits, then convert the values back to strings.
So you can use this:
// Convert string of binary value (base 2) to byte
byte v1 = Convert.ToByte("00000010", 2);
byte v2 = Convert.ToByte("11110100", 2);
// Operate bits
v1 = (byte)( v1 | 0b10000000 );
v1 = (byte)( v1 | ( v2 & 0b10000000 ) >> 7 );
v2 = (byte)( v2 & ~0b10000000 );
// Convert to string formatted
string result1 = Convert.ToString(v1, 2).PadLeft(8, '0');
string result2 = Convert.ToString(v2, 2).PadLeft(8, '0');
Console.WriteLine(result1);
Console.WriteLine(result2);
Result:
10000011
01110100
It forces the MSB of part1.
It copies the MSB of part2 to LSB of part1.
It clears the MSB of part2.
If you don't use C#7, use 128 instead of 0b10000000.
This does what you requested, if I understood correctly. But you say part1 = 10000101 and part2 = 01110100... so I don't follow what you wrote; perhaps there is a mistake and you wanted to write 10000011 instead of 10000101.
If you want to toggle bits (0=>1 and 1=>0), use:
v1 = (byte)( v1 ^ 128);
v1 = (byte)( v1 | ( v2 & 128 ) >> 7 );
v2 = (byte)( v2 ^ 128 );
It toggles the MSB of part1.
It copies the MSB of part2 to LSB of part1.
It toggles the MSB of part2.
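Side note: the 10000101 01110100 result the quoted rule expects comes from regrouping the 16-bit value into 7-bit chunks (base-128 style), not from copying single bits between the original octets. A sketch of that reading of the rule (verify it against your spec):
int value = 756;                          // 0x02F4 = 00000010 11110100
byte first = (byte)(0x80 | (value >> 7)); // 10000101 = 0x85 (MSB forced to 1)
byte second = (byte)(value & 0x7F);       // 01110100 = 0x74 (MSB forced to 0)
Console.WriteLine(Convert.ToString(first, 2).PadLeft(8, '0'));  // 10000101
Console.WriteLine(Convert.ToString(second, 2).PadLeft(8, '0')); // 01110100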
Bitwise operators
Bitwise and shift operators (C# reference)
Boolean algebra

Understanding this snippet of code _num = (_num & ~(1L << 63));

Can any explain what this section of code does: _num = (_num & ~(1L << 63));
I've have been reading up on RNGCryptoServiceProvider and came across http://codethinktank.blogspot.co.uk/2013/04/cryptographically-secure-pseudo-random.html with the code, I can follow most the code except for the section above.
I understand it ensures that all numbers are positive, but I do not know how it's doing that.
Full code
public static long GetInt64(bool allowNegativeValue = false)
{
using (RNGCryptoServiceProvider _rng = new RNGCryptoServiceProvider())
{
byte[] _obj = new byte[8];
_rng.GetBytes(_obj);
long _num = BitConverter.ToInt64(_obj, 0);
if (!allowNegativeValue)
{
_num = (_num & ~(1L << 63));
}
return _num;
}
}
Any help explaining it would be appreciated
<< is the bit-shift operator: 1L << 63 shifts the 1 left 63 places, giving a 1 followed by 63 zeros.
~ is bitwise NOT, so applied to the above it gives a 0 followed by 63 ones.
& is bitwise AND; it applies the AND operation to both operands bit by bit.
Ultimately this filters the value down to 63 bits of data, since the highest bit is zeroed out by the AND.
The reason this forces the result to be positive is that the highest bit (#64 in your case, i.e. bit 63) is used as the sign bit in two's complement notation, and this code simply zeroes it out, forcing the value to be non-negative.
a = ~(1L << 63) = ~0x8000000000000000 = 0x7fffffffffffffff
so m &= a clears the highest bit of m, thus ensuring it's non-negative, assuming the two's complement encoding of signed integers is used.
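A quick illustration of the mask at work (sketch only):
long mask = ~(1L << 63);                        // 0x7FFFFFFFFFFFFFFF
long n = -1;                                    // all 64 bits set
Console.WriteLine((n & mask) == long.MaxValue); // True: only the sign bit was cleared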

Removing bits from first byte and then rejoining the bits

I have a devious little problem to which I think I've come up with a solution far more difficult than needs to be.
The problem is that I have two bytes. The most significant two bits of the first byte are to be removed (since the value is little-endian, these bits are effectively in the middle of the 16-bit value). Then the least significant two bits of the second byte are to be moved to the most significant bit locations of the first byte, in place of the removed bits.
My solution is as follows:
byte firstByte = (byte)stream.ReadByte(); // 01000100
byte secondByte = (byte)stream.ReadByte(); // 00010010
// the first and second byte equal the decimal 4676 in this little endian example
byte remainderOfFirstByte = (byte)(firstByte & 63); // 01000100 & 00111111 = 00000100
byte transferredBits = (byte)(secondByte << 6); // 00010010 << 6 = 10000000
byte remainderOfSecondByte = (byte)(secondByte >> 2); // 00010010 >> 2 = 00000100
byte newFirstByte = (byte)(transferredBits | remainderOfFirstByte); // 10000000 | 00000100 = 10000100
int result = BitConverter.ToInt32(new byte[]{newFirstByte, remainderOfSecondByte, 0, 0}, 0); // bytes 10000100, 00000100 -> 0x0484 (the result is decimal 1156)
Is there an easier way* to achieve this?
*less verbose, perhaps an inbuilt function or trick I'm missing? (with the exception of doing both the & and << on the same line)
You don't have to mask out bits that a shift would throw away anyway. And you don't have to transfer those bits manually. So it becomes this: (not tested)
int result = (secondByte << 6) | (firstByte & 0x3F);
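Plugging the example bytes from the question into that one-liner confirms it (quick check):
byte firstByte = 0b01000100, secondByte = 0b00010010;
int result = (secondByte << 6) | (firstByte & 0x3F);
Console.WriteLine(result); // 1156, same as the longer version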

C# integer masking into byte array

I'm confused as to why this isn't working; can someone please provide some insight?
I have a function that takes in an integer value, but I would like to store the upper two hex digits (the high byte) of the value into a byte array element.
Let's say Distance is 24135 decimal, or 0x5E47 hex.
public ConfigureReportOptionsMessageData(int Distance, int DistanceCheckTime)
{
...
this._data = new byte[9];
this._data[0] = (byte)(Distance & 0x00FF); // shows 47
this._data[1] = (byte)(Distance & 0xFF00); // shows 00
this._data[2] = (byte)(DistanceCheckTime & 0xFF);
...
}
this._data[1] = (byte)(Distance >> 8);
?
This seems like you should be using BitConverter.GetBytes - it will provide a much simpler option.
The reason you get 0 for _data[1] is that the upper 3 bytes are lost when you cast to byte.
Your intermediate result looks like this:
Distance & 0xff00 = 0x00005e00;
When this is converted to a byte, you only retain the low order byte:
(byte)0x00005e00 = 0x00;
You need to shift by 8 bits:
0x00005e00 >> 8 = 0x0000005e;
before you cast to byte and assign to _data[1]
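Putting the two answers together, a minimal sketch (illustration only):
int distance = 0x5E47; // 24135
byte[] data = new byte[2];
data[0] = (byte)(distance & 0xFF); // 0x47
data[1] = (byte)(distance >> 8);   // 0x5E
// Or let BitConverter do it (byte order follows BitConverter.IsLittleEndian):
byte[] bytes = BitConverter.GetBytes((ushort)distance); // { 0x47, 0x5E } on little-endian platforms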
