Parse 4-digit code from binary hash in C#

6 bits from the beginning of the hash and 7 bits from the end of the hash are taken. The resulting 13 bits are converted to a decimal number and printed out. The code is a 4-digit decimal number in the range 0000...8191; all four digits are always displayed (e.g. 0041).
Example:
Hash value: 2f665f6a6999e0ef0752e00ec9f453adf59d8cb6
Binary representation of hash: 0010 1111 0110 0110 1111 .... 1000 1100 1011 0110
code – binary value: 0010110110110
code – decimal value: 1462
Example No.2:
Hash value: otr+8gszp9ey/gcCu4Q8ValEbewEb5zL+mvyKakl1Pp7S1Be3klDYh0RcLktBpgs6VbbgCTjsmvjEZtMbUkaEg==
(calculations)
code – decimal value: 5138
With example no. 1 I'm not sure which SHA variant was used, but the second one is definitely SHA-512, and I can't get the desired code 5138.
Steps I made:
Created sha:
var sha = SHA512.Create();
Computed the hash in 3 ways (str holds the hash value from above):
1) var _hash = sha.ComputeHash(Encoding.UTF8.GetBytes(str));
2) var _hash = sha.ComputeHash(Convert.FromBase64String(str));
3) Called a helper: string stringifiedHash = ToBinaryString(Encoding.UTF8, str);
static string ToBinaryString(Encoding encoding, string text)
{
    // Requires using System.Linq; for Select.
    return string.Join("", encoding.GetBytes(text).Select(n => Convert.ToString(n, 2).PadLeft(8, '0')));
}
Converted hashes 1 and 2 to a binary value string (this part was ported from a Java project with exactly the same calculations):
(((0xFC & _hash[0]) << 5) | (_hash[_hash.Length - 1] & 0x7F)).ToString("0000")
OR took substrings of stringifiedHash (6 bits from the beginning and 7 from the end), joined them, and converted:
Convert.ToInt32({value from substring and join}, 2)
But every attempt returned an incorrect answer. Where did I make a mistake while trying to parse the desired code?

Your bitwise logic is incorrect.
upper = 0b111111
lower = 0b0010101
value = upper << 5 | lower
value = 0b11111110101 // this is not the value you wanted; it never goes beyond 11 bits.
Essentially, the above operation looks like this:
// Notice the lowest two bits of the top will overwrite the top two bits of the bottom.
0b11111100000
| 0b0010101
-------------------
0b11111110101 // Only 11 bits...
You need room for the final 7 bits after your | operation. If you only bit-shift 5 times, then when you | with the lower 7 bits you will collide with 2 of them.
So, right-aligning the top 6 bits and updating the shift to 7 in your code should solve the problem.
public static string Get13BitCode(byte[] hash)
{
    int upper = hash[0] >> 2;                 // top 6 bits, right-aligned
    int lower = hash[hash.Length - 1] & 0x7F; // bottom 7 bits
    int value = (upper << 7) | lower;         // 6 bits followed by 7 bits = 13 bits
    return value.ToString("0000");
}
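For instance, here is a minimal, self-contained check of that method (the stub hash, class name, and Main are mine, for illustration; only the first and last bytes matter for the 13-bit code). Note that the Base64 string from example no. 2 already appears to be the SHA-512 digest, so it is decoded directly rather than hashed again:
using System;
class Demo
{
    public static string Get13BitCode(byte[] hash)
    {
        int upper = hash[0] >> 2;                 // top 6 bits, right-aligned
        int lower = hash[hash.Length - 1] & 0x7F; // bottom 7 bits
        return ((upper << 7) | lower).ToString("0000");
    }
    static void Main()
    {
        // Example no. 1: SHA-1 first byte 0x2F, last byte 0xB6.
        Console.WriteLine(Get13BitCode(new byte[] { 0x2F, 0xB6 })); // 1462
        // Example no. 2: decode the Base64 hash and apply the same extraction.
        var hash2 = Convert.FromBase64String(
            "otr+8gszp9ey/gcCu4Q8ValEbewEb5zL+mvyKakl1Pp7S1Be3klDYh0RcLktBpgs6VbbgCTjsmvjEZtMbUkaEg==");
        Console.WriteLine(Get13BitCode(hash2)); // 5138
    }
}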

Related

How to manipulate bits of a binary string in C#

I am working on a C# desktop application. I have a bit string and I want to toggle it.
c3 = DecimalToBinary(Convert.ToInt32(tbVal3.Text)).PadLeft(16, '0');
// above c3 is 0000001011110100
Splitting the above string into two (substrings):
string part1 = c3.Substring(0, 8); // 00000010
string part2 = c3.Substring(8, 8); // 11110100
For part1, the MSB of the first octet shall be set to 1; for part2, the MSB of the second (last) octet shall be set to 0, and this bit shall be shifted into the LSB of the first octet. This gives binary part1 = 10000101 and part2 = 01110100.
I have checked this solution, Binary array after M range toggle operations, but it still isn't understandable to me.
Rule
in the case of the application context name LN referencing with no ciphering
the arc labels of the object identifier are (2, 16, 756, 5, 8, 1, 1);
• the first octet of the encoding is the combination of the first two
numbers into a single number, following the rule of
40*First+Second -> 40*2 + 16 = 96 = 0x60;
• the third number of the Object Identifier (756) requires two octets: its
hexadecimal value is 0x02F4, which is 00000010 11110100, but following the above rule,
the MSB of the first octet shall be set to 1 and the MSB of the second (last) octet shall
be set to 0, thus this bit shall be shifted into the LSB of the first octet. This gives
binary 10000101 01110100, which is 0x8574;
• each remaining number of the Object Identifier requires one octet to encode;
• this results in the encoding 60 85 74 05 08 01 01.
How can I perform this toggle with binary strings?
Any help would be highly appreciated
You can convert the strings to bytes in order to manipulate the bits, and then convert the values back to strings.
So you can use this:
// Convert string of binary value (base 2) to byte
byte v1 = Convert.ToByte("00000010", 2);
byte v2 = Convert.ToByte("11110100", 2);
// Operate bits
v1 = (byte)( v1 | 0b10000000 );
v1 = (byte)( v1 | ( v2 & 0b10000000 ) >> 7 );
v2 = (byte)( v2 & ~0b10000000 );
// Convert to string formatted
string result1 = Convert.ToString(v1, 2).PadLeft(8, '0');
string result2 = Convert.ToString(v2, 2).PadLeft(8, '0');
Console.WriteLine(result1);
Console.WriteLine(result2);
Result:
10000011
01110100
It forces the MSB of part1.
It copies the MSB of part2 to LSB of part1.
It clears the MSB of part2.
If you don't use C#7, use 128 instead of 0b10000000.
This is what you requested, if I understood what you asked for. But you say part1 = 10000101 and part2 = 01110100... so I don't understand what you wrote; perhaps there is a mistake and you meant 10000011 instead of 10000101.
If you want to toggle bits (0=>1 and 1=>0), use:
v1 = (byte)( v1 ^ 128);
v1 = (byte)( v1 | ( v2 & 128 ) >> 7 );
v2 = (byte)( v2 ^ 128 );
It toggles the MSB of part1.
It copies the MSB of part2 to LSB of part1.
It toggles the MSB of part2.
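For what it's worth, the expected 10000101 01110100 follows from the encoding rule quoted in the question: 756 (0x02F4) is split into 7-bit groups, and every octet except the last gets its MSB set to 1 as a continuation flag. Here is a minimal sketch of that base-128 encoding, with names of my own choosing:
using System;
using System.Collections.Generic;
class OidDemo
{
    // Encode one OID sub-identifier in base-128 with continuation bits.
    static byte[] EncodeSubIdentifier(uint value)
    {
        var octets = new List<byte> { (byte)(value & 0x7F) };  // last octet, MSB = 0
        value >>= 7;
        while (value > 0)
        {
            octets.Insert(0, (byte)(0x80 | (value & 0x7F)));   // continuation octet, MSB = 1
            value >>= 7;
        }
        return octets.ToArray();
    }
    static void Main()
    {
        foreach (var b in EncodeSubIdentifier(756))
            Console.Write(Convert.ToString(b, 2).PadLeft(8, '0') + " ");
        // Prints: 10000101 01110100  (0x85 0x74)
    }
}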
Bitwise operators
Bitwise and shift operators (C# reference)
Boolean algebra

C# How to remove the n-th bit in integer?

I am trying to find a way to remove a bit from an integer. The solution must not use string operations.
For example, I have the number 27, which is 11011 in binary.
I want to remove the third bit so it leaves me with 1011.
Or we have 182 (10110110); remove the 6th bit and the result is 1110110 (which is 118). I am trying to work out the algorithm for that, but so far no luck, and I can't find useful information on the internet.
I know how to use bitwise operators and how to extract or manipulate bits in integers (change values, exchange values etc), but I don't know how to 'remove' a certain bit.
I am not looking for code, just the logic of the operation. If anyone could help me, that would be awesome!
Regards,
Toni
No problem, just decompose the number into the "upper part" and the "lower part", and put them together without the middle bit that now disappeared.
Not tested:
uint upper = x & 0xFFFFFFF0; // bits above the removed bit (here n = 3)
uint lower = x & 7;          // bits below it
return (upper >> 1) | lower; // shift the upper part down one position
More generally: (also not tested)
uint upper = x & (0xFFFFFFFE << n); // bits above bit n
uint lower = x & ((1u << n) - 1);   // bits below bit n
return (upper >> 1) | lower;
In order to do this you need two bit masks and a shift.
The first bit mask gives you the portion of the number above bit n, exclusive of the n-th bit. The mask is constructed as follows:
var top = ~((1U<<(n+1))-1); // 1111 1111 1000 0000, 0xFF80
The second bit mask gives you the portion of the number below bit n, exclusive of the n-th bit:
var bottom = (1U<<n)-1; // 0000 0000 0011 1111, 0x003F
Comments above show the values for your second example (i.e. n == 6)
With the two masks in hand, you can construct the result as follows:
var res = ((original & top)>>1) | (original & bottom);
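Putting the two masks together, here is a small self-contained check against both examples from the question (the method and class names are mine):
using System;
class RemoveBitDemo
{
    static uint RemoveBit(uint x, int n)
    {
        uint top = ~((1U << (n + 1)) - 1); // everything above bit n
        uint bottom = (1U << n) - 1;       // everything below bit n
        return ((x & top) >> 1) | (x & bottom);
    }
    static void Main()
    {
        Console.WriteLine(RemoveBit(27, 3));  // 11  (11011    -> 1011)
        Console.WriteLine(RemoveBit(182, 6)); // 118 (10110110 -> 1110110)
    }
}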
You could use the following approach:
int value = 27;
string binary = Convert.ToString(value, 2);
binary = binary.Remove(binary.Length - 3 - 1, 1); // remove bit index 3 (the 4th bit from the right)
int newValue = Convert.ToInt32(binary, 2);
Console.WriteLine(newValue);
Hope it helps!
int Place = 7; // 1-based bit position, counted from the right
int TheInt = 182;
string binary = Convert.ToString(TheInt, 2);
MessageBox.Show(binary.Remove(binary.Length - Place, 1)); // shows 1110110
Here is a version that needs slightly fewer operations than the solution by harold:
x ^ (((x >> 1) ^ x) & (0xffffffff << n));
The idea is that below n, bits are xored with zero, leaving them unchanged, while from n and above the two x xored cancel each other out, leaving x >> 1.
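A quick sanity check of that variant (the method name is mine, for illustration):
static uint RemoveBitXor(uint x, int n) =>
    x ^ (((x >> 1) ^ x) & (0xFFFFFFFFu << n));
// RemoveBitXor(27, 3)  == 11
// RemoveBitXor(182, 6) == 118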
int a = 27; // int = 4 bytes = 32 bits
string binary = "";
for (int i = 0; i < 32; i++)
{
    if ((a & 1) == 0) // if a's least significant bit is 0, prepend '0'
    {
        binary = "0" + binary;
    }
    else // if a's least significant bit is 1, prepend '1'
    {
        binary = "1" + binary;
    }
    a = a >> 1; // shift right by one, dropping the LSB
    // We do this 32 times because an int has 32 bits.
}
Console.WriteLine("Integer to Binary= " + binary);
// Now you can operate on the string (binary) however you want.
binary = binary.Remove(binary.Length - 4, 1); // remove the 4th bit from the right (bit index 3)

Bit shifting with hex in Python

I am trying to understand how to perform bit shift operations in Python. Coming from C#, it doesn't work in the same way.
The C# code is;
var plain=0xabcdef0000000; // plaintext
var key=0xf0f0f0f0f123456; // encryption key
var L = plain;
var R = plain>>32;
The output is:
000abcdef0000000 00000000000abcde
What is the equivalent in Python? I have tried:
plain = 0xabcdef0000000
key = 0xf0f0f0f0f123456
print plain
left = plain
right = plain >> 32
print hex(left)
print hex(right)
However, it doesn't work: the output is different in Python, and the zero padding is missing. Any help would be appreciated!
The hex() function does not pad numbers with leading zeros, because Python integers are unbounded. C# integers have a fixed size (64 bits in this case), so they have an upper bound and can therefore be padded out. This doesn't mean those extra padding zeros carry any meaning; the integer value is the same.
You'll have to explicitly add those zeros, using the format() function to produce the output:
print format(left, '#018x')
print format(right, '#018x')
The # tells format() to include the 0x prefix, and the leading 0 before the field width asks format() to pad the output:
>>> print format(left, '#018x')
0x000abcdef0000000
>>> print format(right, '#018x')
0x0000000000abcde
Note that the width includes the 0x prefix; there are 16 hex digits in that output, representing 64 bits of data.
If you wanted to use a dynamic width based on the number of characters used in key, then calculate that from int.bit_length(); every 4 bits produce a hex character:
format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
Demo:
>>> (key.bit_length() + 3) // 4 + 2
17
>>> print format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
0x0000000000abcde
But note that even the key is only 60 bits long, and C# would pad that value with a 0 as well.
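For comparison, the C# side gets its fixed width the same way, just via an explicit format specifier; a small sketch using the question's values:
long plain = 0xabcdef0000000;
long right = plain >> 32;
Console.WriteLine(plain.ToString("x16")); // 000abcdef0000000
Console.WriteLine(right.ToString("x16")); // 00000000000abcde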
I have no problem with what you tried:
>>> hex(0xabcdef0000000)
'0xabcdef0000000'
>>> hex(0xabcdef0000000 >> 32)
'0xabcde'
In [83]: plain=0xabcdef0000000
In [84]: plain>>32
Out[84]: 703710
In [85]: plain
Out[85]: 3022415462400000
In [87]: hex(plain)
Out[87]: '0xabcdef0000000'
if
In [134]: left = plain
In [135]: right = plain >> 32
Then
In [140]: '{:0x}'.format(left)
Out[140]: 'abcdef0000000'
In [143]: '{:018x}'.format(right)
Out[143]: '0000000000000abcde'

Removing bits from first byte and then rejoining the bits

I have a devious little problem for which I think I've come up with a solution far more complicated than it needs to be.
The problem is that I have two bytes. The first two bits of the first byte are to be removed (as the value is little endian, these bits are effectively in the middle of the 16 bit value). Then the least significant two bits of the second byte are to be moved to the most significant bit locations of the first byte, in place of the removed bits.
My solution is as follows:
byte firstByte = (byte)stream.ReadByte(); // 01000100
byte secondByte = (byte)stream.ReadByte(); // 00010010
// the first and second byte equal the decimal 4676 in this little endian example
byte remainderOfFirstByte = (byte)(firstByte & 63); // 01000100 & 00111111 = 00000100
byte transferredBits = (byte)(secondByte << 6); // 00010010 << 6 = 10000000
byte remainderOfSecondByte = (byte)(secondByte >> 2); // 00010010 >> 2 = 00000100
byte newFirstByte = (byte)(transferredBits | remainderOfFirstByte); // 10000000 | 00000100 = 10000100
int result = BitConverter.ToInt32(new byte[]{newFirstByte, remainderOfSecondByte, 0, 0}, 0); // 10000100 00000100 (the result is decimal 1156)
Is there an easier way* to achieve this?
*less verbose, perhaps an inbuilt function or trick I'm missing? (with the exception of doing both the & and << on the same line)
You don't have to mask out bits that a shift would throw away anyway. And you don't have to transfer those bits manually. So it becomes this: (not tested)
int result = (secondByte << 6) | (firstByte & 0x3F);
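A quick check with the bytes from the question (C# 7 binary literals used for readability):
byte firstByte  = 0b0100_0100; // 0x44
byte secondByte = 0b0001_0010; // 0x12
int result = (secondByte << 6) | (firstByte & 0x3F);
Console.WriteLine(result); // 1156, matching the original round-trip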

How is a Hex Value manipulated bitwise?

I have a very basic understanding of bitwise operators. I am at a loss to understand how the value is assigned however. If someone can point me in the right direction I would be very grateful.
My Hex Address: 0xE0074000
The Decimal value: 3758571520
The Binary Value: 11100000000001110100000000000000
I am trying to program a simple Micro Controller and use the Register access Class in the Microsoft .Net Micro Framework to make the Controller do what I want it to do.
Register T2IR = new Register(0xE0074000);
T2IR.Write(1 << 22);
In my example above, how are the bits in the binary representation moved? I don't understand how the bit manipulation relates to the value at that address.
Forget about decimals for a start. You'll get back to that later.
First you need to see the logic between HEX and BINARY.
Okay, for a byte you have 8 bits (#7-0)
#7 = 0x80 = %1000 0000
#6 = 0x40 = %0100 0000
#5 = 0x20 = %0010 0000
#4 = 0x10 = %0001 0000
#3 = 0x08 = %0000 1000
#2 = 0x04 = %0000 0100
#1 = 0x02 = %0000 0010
#0 = 0x01 = %0000 0001
When you read a byte in binary, like this one: %00001000
then the set bit is the 4th from the right, aka bit #3, which has a value of 0x08 (in fact also 8 decimal, but still, forget about decimal while you figure out hex/binary).
Now if we have the binary number %10000000
This is the #7 bit which is on. That has a hex value of 0x80
So all you have to do is sum them up in "nibbles" (each half of a hex byte is called a nibble by some geeks).
The maximum you can get in a nibble is 15 decimal, or F; for the upper nibble that is 0x10 + 0x20 + 0x40 + 0x80 = 0xF0 = binary %11110000,
so all lights on (4 bits) in a nibble = F in hex (15 decimal).
same goes for the lower nibble.
Do you see the pattern?
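As a small illustration of that pattern, here is the register address from the question split nibble by nibble (the snippet is mine, for illustration):
uint address = 0xE0074000;
// Print each nibble as 4 binary digits: E -> 1110, 0 -> 0000, 7 -> 0111, 4 -> 0100 ...
for (int shift = 28; shift >= 0; shift -= 4)
{
    long nibble = (address >> shift) & 0xF;
    Console.Write(Convert.ToString(nibble, 2).PadLeft(4, '0') + " ");
}
// Prints: 1110 0000 0000 0111 0100 0000 0000 0000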
Refer to #BerggreenDK's answer for what a shift is. Here's some info about what it's like in hex (same thing, just different representation):
Shifting is a very simple concept to understand. The register has a fixed size, and whatever bits don't fit fall off the end. So, take this example:
uint num = 0xffffu << 16; // unsigned, so the right shift below won't sign-extend
Your variable in hex would now be 0xffff0000. Note how the right end is filled with zeros. Now, let's shift it again.
num = num << 8;
num = num >> 8;
num is now 0x00ff0000. You don't get your old bits back. The same applies to right shifts as well.
Trick: left-shifting by 1 is like multiplying the number by 2, and right-shifting by 1 is like integer-dividing it by 2.
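Tying this back to the question, here is what 1 << 22 produces before it is written to the register (the Register class itself belongs to the .NET Micro Framework, so only the shift is shown; the snippet is mine):
int mask = 1 << 22;
Console.WriteLine(mask);                      // 4194304
Console.WriteLine(Convert.ToString(mask, 2)); // 1 followed by 22 zeros
// T2IR.Write(1 << 22) therefore writes a value with only bit #22 set
// to the register at 0xE0074000.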
