How to manipulate bits of a binary string in C#

I am working on a C# desktop application. I have a bit string and I want to toggle its bits.
c3 = DecimalToBinary(Convert.ToInt32(tbVal3.Text)).PadLeft(16, '0');
// above c3 is 0000001011110100
Splitting the above string into two (Substring):
string part1 = c3.Substring(0, 8); // 00000010
string part2 = c3.Substring(8, 8); // 11110100
For part1, the MSB of the first octet shall be set to 1. For part2, the MSB of the second (last) octet shall be set to 0, and this bit shall be shifted into the LSB of the first octet. This gives binary part1 = 10000101 and part2 = 01110100.
I have checked this solution, Binary array after M range toggle operations, but I still can't make sense of it.
Rule
In the case of the application context name LN referencing with no ciphering, the arc labels of the object identifier are (2, 16, 756, 5, 8, 1, 1);
• the first octet of the encoding is the combination of the first two numbers into a single number, following the rule 40*First + Second -> 40*2 + 16 = 96 = 0x60;
• the third number of the Object Identifier (756) requires two octets: its hexadecimal value is 0x02F4, which is 00000010 11110100, but following the above rule, the MSB of the first octet shall be set to 1 and the MSB of the second (last) octet shall be set to 0, thus this bit shall be shifted into the LSB of the first octet. This gives binary 10000101 01110100, which is 0x8574;
• each remaining number of the Object Identifier is encoded on one octet;
• this results in the encoding 60 85 74 05 08 01 01.
How can I perform this toggle with binary strings?
Any help would be highly appreciated

You can convert the strings to bytes in order to manipulate the bits, and then convert the values back to strings.
So you can use this:
// Convert string of binary value (base 2) to byte
byte v1 = Convert.ToByte("00000010", 2);
byte v2 = Convert.ToByte("11110100", 2);
// Operate bits
v1 = (byte)( v1 | 0b10000000 );
v1 = (byte)( v1 | ( v2 & 0b10000000 ) >> 7 );
v2 = (byte)( v2 & ~0b10000000 );
// Convert to string formatted
string result1 = Convert.ToString(v1, 2).PadLeft(8, '0');
string result2 = Convert.ToString(v2, 2).PadLeft(8, '0');
Console.WriteLine(result1);
Console.WriteLine(result2);
Result:
10000011
01110100
It sets the MSB of part1.
It copies the MSB of part2 to the LSB of part1.
It clears the MSB of part2.
If you aren't using C# 7, use 128 instead of 0b10000000 (binary literals require C# 7).
This is what you asked for, if I understood correctly. But you say part1 = 10000101 and part2 = 01110100... so perhaps there is a mistake in your question and you meant to write 10000011 instead of 10000101.
If you want to toggle bits (0=>1 and 1=>0), use:
v1 = (byte)( v1 ^ 128);
v1 = (byte)( v1 | ( v2 & 128 ) >> 7 );
v2 = (byte)( v2 ^ 128 );
It toggles the MSB of part1.
It copies the MSB of part2 to the LSB of part1.
It toggles the MSB of part2.
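Note that the code above copies only the single MSB. If the goal is the exact result from the quoted rule (part1 = 10000101 and part2 = 01110100), the remaining bits of part2 have to shift up as well: the 16-bit value is regrouped into two 7-bit chunks. A minimal sketch of that regrouping, as I read the rule (not tested against the standard):
ushort n = Convert.ToUInt16("0000001011110100", 2); // 0x02F4 = 756
byte part1 = (byte)(0x80 | (n >> 7)); // high 7 bits with MSB set to 1: 10000101
byte part2 = (byte)(n & 0x7F);        // low 7 bits with MSB left as 0: 01110100
Console.WriteLine(Convert.ToString(part1, 2).PadLeft(8, '0')); // 10000101
Console.WriteLine(Convert.ToString(part2, 2).PadLeft(8, '0')); // 01110100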
Bitwise operators
Bitwise and shift operators (C# reference)
Boolean algebra

Related

Parse 4 digit code from binary hash in C#

6 bits from the beginning of the hash and 7 bits from the end of the hash are taken. The resulting 13 bits are transformed into a decimal number and printed out. The code is a decimal 4-digit number in the range 0000...8192; 4 digits are always displayed (e.g. 0041).
Example:
Hash value: 2f665f6a6999e0ef0752e00ec9f453adf59d8cb6
Binary representation of hash: 0010 1111 0110 0110 1111 .... 1000 1100 1011 0110
code – binary value: 0010110110110
code – decimal value: 1462
Example No.2:
Hash value: otr+8gszp9ey/gcCu4Q8ValEbewEb5zL+mvyKakl1Pp7S1Be3klDYh0RcLktBpgs6VbbgCTjsmvjEZtMbUkaEg==
(calculations)
code – decimal value: 5138
With example no. 1 I'm not sure which SHA was used. But the second one is definitely SHA-512, and I can't get the desired code 5138.
Steps I made:
Created sha:
var sha = SHA512.Create();
Computed the hash 3 ways (str = the hash value from above):
1) var _hash = sha.ComputeHash(Encoding.UTF8.GetBytes(str));
2) var _hash = sha.ComputeHash(Convert.FromBase64String(str));
3) Call procedure: string stringifiedHash = ToBinaryString(Encoding.UTF8, str);
static string ToBinaryString(Encoding encoding, string text)
{
    // needs using System.Linq;
    return string.Join("", encoding.GetBytes(text).Select(n => Convert.ToString(n, 2).PadLeft(8, '0')));
}
Stringified (hashes 1 and 2) to a binary value string (this part was redone from a Java project with the exact same calculations):
(((0xFC & _hash[0]) << 5) | (_hash[_hash.Length - 1] & 0x7F)).ToString("0000")
OR substring stringifiedHash, take 6 bits from the beginning and 7 from the end, and then convert:
Convert.ToInt32({value from substring and join}, 2)
But every attempt returned an incorrect answer. Where did I make a mistake while trying to parse the desired code?
Your bitwise logic is incorrect.
upper = 0b111111
lower = 0b0010101
value = upper << 5 | lower
value = 0b11111110101 // this is not the value you wanted; it never goes beyond 11 bits.
Essentially, the above operation looks like this:
// Notice the lowest two bits of the top overlap the top two bits of the bottom.
0b11111100000
| 0b0010101
-------------------
0b11111110101 // Only 11 bits...
You need room for the final 7 bits after your | operation. If you only bit shift 5 times when you | with the lower 7 bits you will overwrite 2 of them.
So, extracting the upper bits as a 6-bit value and shifting it left by 7 solves the problem:
public static string Get13FormattedBits(byte[] hash)
{
    int upper = hash[0] >> 2;                 // the 6-bit value from the top of the first byte
    int lower = hash[hash.Length - 1] & 0x7F; // the 7-bit value from the last byte
    int value = (upper << 7) | lower;         // 6 bits + 7 bits = 13 bits
    return value.ToString("0000");
}
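For completeness, usage combined with the hashing steps from the question might look like this (assuming the Base64-decoding variant is the intended input):
using (var sha = SHA512.Create())
{
    byte[] hash = sha.ComputeHash(Convert.FromBase64String(str)); // str as in the question
    Console.WriteLine(Get13FormattedBits(hash));                  // the 4-digit code
}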

Bit shifting with hex in Python

I am trying to understand how to perform bit shift operations in Python. Coming from C#, it doesn't work the same way.
The C# code is;
var plain=0xabcdef0000000; // plaintext
var key=0xf0f0f0f0f123456; // encryption key
var L = plain;
var R = plain>>32;
The output is:
000abcdef0000000 00000000000abcde
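For reference, padded output like that presumably comes from a fixed-width format specifier in the printing code (not shown above), something like:
Console.WriteLine("{0:x16} {1:x16}", L, R); // pads each value to 16 hex digits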
What is the equivalent in Python? I have tried:
plain = 0xabcdef0000000
key = 0xf0f0f0f0f123456
print plain
left = plain
right = plain >> 32
print hex(left)
print hex(right)
However, it doesn't work; the output is different in Python. The zero padding is missing. Any help would be appreciated!
The hex() function does not pad numbers with leading zeros, because Python integers are unbounded. C# integers have a fixed size (64 bits in this case), so they have an upper bound and can therefore be padded out. Those extra padding zeros don't carry any meaning; the integer value is the same.
You'll have to explicitly add those zeros, using the format() function to produce the output:
print format(left, '#018x')
print format(right, '#018x')
The # tells format() to include the 0x prefix, and the leading 0 before the field width asks format() to pad the output:
>>> print format(left, '#018x')
0x000abcdef0000000
>>> print format(right, '#018x')
0x00000000000abcde
Note that the width includes the 0x prefix; there are 16 hex digits in that output, representing 64 bits of data.
If you wanted to use a dynamic width based on the number of characters used in key, then calculate that from int.bit_length(); every 4 bits produce a hex character:
format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
Demo:
>>> (key.bit_length() + 3) // 4 + 2
17
>>> print format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
0x0000000000abcde
But note that the key itself is only 60 bits in length, so C# would pad that value with a 0 as well.
I see no problem with what you tried:
>>> hex(0xabcdef0000000)
'0xabcdef0000000'
>>> hex(0xabcdef0000000 >> 32)
'0xabcde'
In [83]: plain=0xabcdef0000000
In [84]: plain>>32
Out[84]: 703710
In [85]: plain
Out[85]: 3022415462400000
In [87]: hex(plain)
Out[87]: '0xabcdef0000000'
if
In [134]: left = plain
In [135]: right = plain >> 32
Then
In [140]: '{:0x}'.format(left)
Out[140]: 'abcdef0000000'
In [143]: '{:018x}'.format(right)
Out[143]: '0000000000000abcde'

Array of chars in hex format to integer?

I have an API which returns a byte[] over the network which represents information about a device.
It is in format 15ab1234cd\r\n where the first 2 characters are a HEX representation of the amount of data in the message.
I am aware I can convert this to a string via ASCIIEncoding.ASCII.GetString, and then use Convert.ToInt32(string.Substring(0, 2), 16) to achieve this. However, the data stays a byte array throughout the life of the program, and I don't want to convert it to a string just for the purpose of getting the packet length.
Any suggestions for converting an array of chars in hex format to an int in C#?
There is no .NET-provided function that does this. Converting the first 2 bytes to a string with Encoding.GetString is very readable (though possibly not the most performant):
var hexValue = ASCIIEncoding.ASCII.GetString(byteData, 0, 2);
var intValue = Convert.ToInt32(hexValue, 16);
You can easily write the conversion yourself (map the '0'-'9' and 'a'-'f' / 'A'-'F' ranges to their corresponding integer values and combine them), as sketched below.
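For example, a minimal sketch of that hand-written mapping (the helper name is mine), applied to the first two bytes of the buffer from the question:
static int HexDigit(byte c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    throw new ArgumentException("Not a hex character");
}

int packetLength = HexDigit(byteData[0]) * 16 + HexDigit(byteData[1]); // 0x15 = 21 for "15ab1234cd"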
Here is a one-statement conversion, strictly for entertainment purposes. The resulting lambda (the part before ((byte)'0',(byte)'A') in the sample) takes two byte arguments, assumes they are ASCII characters, and converts them into an integer.
((Func<Func<char,int>, Func<byte, byte, int>>)
(charToInt=> (c, c1)=>
charToInt(char.ToUpper((char)c)) * 16 + charToInt(char.ToUpper((char)c1))))
((Func<char, int>)(
c => c >= '0' && c <='9' ? c-'0' : c >='A' && c <= 'F' ? c - 'A' + 10 : 0))
((byte)'0',(byte)'A')
If you know the first two values are valid hexadecimal characters (0-9, A-F, a-f), it is possible to convert to a hex value using logical operators.
int GetIntFromHexBytes(byte[] s, int start, int length)
{
int ret = 0;
for (int i = start; i < start+length; i++)
{
ret <<= 4;
ret |= (byte)((s[i] & 0x0f) + ((s[i] & 0x40) >> 6) * 9);
}
return ret;
}
(This works because s[i] & 0x0f returns the 4 least significant bits, which range from 0-9 for the characters '0'-'9' and from 1-6 for both capital and lowercase letters 'a'-'f' and 'A'-'F'. s[i] & 0x40 is 0 for numeric characters and 0x40 for alpha characters; shifting right by six bits turns that into 0 for numeric characters and 1 for alphabetic characters. Multiplying by 9 then adds a bias of 9 for alpha characters, mapping A-F and a-f from 1-6 to 10-15.)
Given the byte array:
byte[] b = { (byte)'7', (byte)'f', (byte)'1', (byte)'c' };
Calling GetIntFromHexBytes(b, 0, 2) will return 127 (0x7f), the first two bytes of the array, as required.
As a caution: this approach does no bounds checking. A check can be added in the loop if needed to ensure that the input bytes are valid hex characters.

Removing bits from first byte and then rejoining the bits

I have a devious little problem for which I think I've come up with a solution far more complicated than it needs to be.
The problem is that I have two bytes. The first two bits of the first byte are to be removed (as the value is little endian, these bits are effectively in the middle of the 16-bit value). Then the least significant two bits of the second byte are to be moved into the most significant bit positions of the first byte, in place of the removed bits.
My solution is as follows:
byte firstByte = (byte)stream.ReadByte(); // 01000100
byte secondByte = (byte)stream.ReadByte(); // 00010010
// the first and second byte equal the decimal 4676 in this little endian example
byte remainderOfFirstByte = (byte)(firstByte & 63); // 01000100 & 00111111 = 00000100
byte transferredBits = (byte)(secondByte << 6); // 00010010 << 6 = 10000000
byte remainderOfSecondByte = (byte)(secondByte >> 2); // 00010010 >> 2 = 00000100
byte newFirstByte = (byte)(transferredBits | remainderOfFirstByte); // 10000000 | 00000100 = 10000100
int result = BitConverter.ToInt32(new byte[]{newFirstByte, remainderOfSecondByte, 0, 0}, 0); // 10000100 00000100 (the result is decimal 1156)
Is there an easier way* to achieve this?
*less verbose, perhaps an inbuilt function or trick I'm missing? (with the exception of doing both the & and << on the same line)
You don't have to mask out bits that a shift would throw away anyway. And you don't have to transfer those bits manually. So it becomes this: (not tested)
int result = (secondByte << 6) | (firstByte & 0x3F);
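A quick check against the example values from the question (binary literals need C# 7; use 0x44 and 0x12 otherwise):
byte firstByte = 0b01000100;  // 0x44
byte secondByte = 0b00010010; // 0x12
int result = (secondByte << 6) | (firstByte & 0x3F);
Console.WriteLine(result);    // 1156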

How is a Hex Value manipulated bitwise?

I have a very basic understanding of bitwise operators. I am at a loss to understand how the value is assigned however. If someone can point me in the right direction I would be very grateful.
My Hex Address: 0xE0074000
The Decimal value: 3758571520
The Binary Value: 11100000000001110100000000000000
I am trying to program a simple Micro Controller and use the Register access Class in the Microsoft .Net Micro Framework to make the Controller do what I want it to do.
Register T2IR = new Register(0xE0074000);
T2IR.Write(1 << 22);
In my example above, how are the bits in the binary representation moved? I don't understand how the manipulated bits are assigned to the address in binary form.
Forget about decimals for a start. You'll get back to that later.
First you need to see the relationship between HEX and BINARY.
Okay, for a byte you have 8 bits (#7-0)
#7 = 0x80 = %1000 0000
#6 = 0x40 = %0100 0000
#5 = 0x20 = %0010 0000
#4 = 0x10 = %0001 0000
#3 = 0x08 = %0000 1000
#2 = 0x04 = %0000 0100
#1 = 0x02 = %0000 0010
#0 = 0x01 = %0000 0001
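If it helps, this table can be generated with a short loop (illustration only):
for (int i = 7; i >= 0; i--)
{
    int bit = 1 << i;
    Console.WriteLine($"#{i} = 0x{bit:X2} = %{Convert.ToString(bit, 2).PadLeft(8, '0')}");
}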
When you read a byte in binary, like this one: %00001000
the bit that is set is the 4th from the right, aka bit #3, which has a hex value of 0x08 (the same in decimal, but keep ignoring decimal while you figure out hex/binary).
Now if we have the binary number %10000000
This is the #7 bit which is on. That has a hex value of 0x80
So all you have to do is to sum them up in "nibbles" (each part of the hex byte is called a nibble by some geeks)
the maximum you can get in a nibble is 15 decimal, or F hex, since 0x10 + 0x20 + 0x40 + 0x80 = 0xF0 = binary %11110000
so all lights on (4 bits) in a nibble = F in hex (15 decimal)
same goes for the lower nibble.
Do you see the pattern?
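In code, splitting a byte into its nibbles is just a shift and a mask, e.g.:
byte b = 0x4C;            // %0100 1100
int highNibble = b >> 4;  // 0x4 = %0100
int lowNibble = b & 0x0F; // 0xC = %1100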
Refer to #BerggreenDK's answer for what a shift is. Here's some info about what it's like in hex (same thing, just different representation):
Shifting is a very simple concept to understand. The register is of a fixed size, and whatever bits won't fit fall off the end. So, take this example:
uint num = 0xffffu << 16;
Your variable in hex would now be 0xffff0000. Note how the right end is filled with zeros. Now, let's shift it again.
num = num << 8;
num = num >> 8;
num is now 0x00ff0000. You don't get your old bits back. The same applies to right shifts as well. (uint is used here on purpose: with a signed int, the right shift would sign-extend and fill with ones instead of zeros.)
Trick: left shifting by 1 is like multiplying the number by 2, and right shifting by 1 is like integer division by 2.
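Applied to the question's register example, 1 << 22 produces a value with only bit #22 set (a quick sketch):
uint value = 1u << 22;                   // 0x00400000: only bit #22 is set
Console.WriteLine(value.ToString("X8")); // 00400000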
