Bit shifting with hex in Python - C#

I am trying to understand how to perform bit shift operations in Python. Coming from C#, they don't seem to work the same way.
The C# code is:
var plain=0xabcdef0000000; // plaintext
var key=0xf0f0f0f0f123456; // encryption key
var L = plain;
var R = plain>>32;
The output is:
000abcdef0000000 00000000000abcde
What is the equivalent in Python? I have tried:
plain = 0xabcdef0000000
key = 0xf0f0f0f0f123456
print plain
left = plain
right = plain >> 32
print hex(left)
print hex(right)
However, it doesn't work; the output is different in Python and the padding zeros are missing. Any help would be appreciated!

The hex() function does not pad numbers with leading zeros, because Python integers are unbounded. C# integers have a fixed size (64 bits in this case), so they have an upper bound and can therefore be padded out. This doesn't mean those extra padding zeros carry any meaning; the integer value is the same.
You'll have to explicitly add those zeros, using the format() function to produce the output:
print format(left, '#018x')
print format(right, '#018x')
The # tells format() to include the 0x prefix, and the leading 0 before the field width asks format() to pad the output:
>>> print format(left, '#018x')
0x000abcdef0000000
>>> print format(right, '#018x')
0x00000000000abcde
Note that the width includes the 0x prefix; there are 16 hex digits in that output, representing 64 bits of data.
If you wanted to use a dynamic width based on the number of characters used in key, then calculate that from int.bit_length(); every 4 bits produce a hex character:
format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
Demo:
>>> (key.bit_length() + 3) // 4 + 2
17
>>> print format(right, '#0{}x'.format((key.bit_length() + 3) // 4 + 2))
0x0000000000abcde
But note that the key itself is only 60 bits long, so C# would pad that value with a leading 0 as well.
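If the goal is to reproduce the C# output exactly (16 hex digits, no 0x prefix), a minimal sketch along these lines should work; the mask is only there to mimic a fixed 64-bit value, and the names match the question:
plain = 0xabcdef0000000
left = plain & 0xFFFFFFFFFFFFFFFF           # emulate a 64-bit value
right = (plain >> 32) & 0xFFFFFFFFFFFFFFFF
print('{:016x} {:016x}'.format(left, right))
# 000abcdef0000000 00000000000abcde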

I have no problem with what you tried:
>>> hex(0xabcdef0000000)
'0xabcdef0000000'
>>> hex(0xabcdef0000000 >> 32)
'0xabcde'

In [83]: plain=0xabcdef0000000
In [84]: plain>>32
Out[84]: 703710
In [85]: plain
Out[85]: 3022415462400000
In [87]: hex(plain)
Out[87]: '0xabcdef0000000'
If
In [134]: left = plain
In [135]: right = plain >> 32
Then
In [140]: '{:0x}'.format(left)
Out[140]: 'abcdef0000000'
In [143]: '{:018x}'.format(right)
Out[143]: '0000000000000abcde'


Parse 4 digit code from binary hash in C#

6 bits from the beginning of the hash and 7 bits from the end of the hash are taken. The resulting 13 bits are converted to a decimal number and printed out. The code is a 4-digit decimal number in the range 0000...8192; all 4 digits are always displayed (e.g. 0041).
Example:
Hash value: 2f665f6a6999e0ef0752e00ec9f453adf59d8cb6
Binary representation of hash: 0010 1111 0110 0110 1111 .... 1000 1100 1011 0110
code – binary value: 0010110110110
code – decimal value: 1462
Example No.2:
Hash value: otr+8gszp9ey/gcCu4Q8ValEbewEb5zL+mvyKakl1Pp7S1Be3klDYh0RcLktBpgs6VbbgCTjsmvjEZtMbUkaEg==
(calculations)
code – decimal value: 5138
With example no. 1 I'm not sure which SHA variant was used, but the second one is definitely SHA-512 and I can't get the desired code 5138.
Steps I made:
Created sha:
var sha = SHA512.Create();
Computed hash 3 ways (str -> hash value from above):
1) var _hash = sha.ComputeHash(Encoding.UTF8.GetBytes(str));
2) var _hash = sha.ComputeHash(Convert.FromBase64String(str));
3) Call procedure: string stringifiedHash = ToBinaryString(Encoding.UTF8, str);
static string ToBinaryString(Encoding encoding, string text)
{
return string.Join("", encoding.GetBytes(text).Select(n => Convert.ToString(n, 2).PadLeft(8, '0')));
}
Stringified (hashes 1 and 2) to a binary string (this part was redone from a Java project with the exact same calculations):
(((0xFC & _hash[0]) << 5) | (_hash[_hash.Length - 1] & 0x7F)).ToString("0000")
OR substring stringifiedHash, take 6 bits from the beginning and 7 from the end, and then convert:
Convert.ToInt32({value from substring and join}, 2)
But every attempt returned an incorrect answer. Where did I make a mistake while trying to parse the desired code?
Your bitwise logic is incorrect.
upper = 0b111111
lower = 0b0010101
value = upper << 5 | lower
value = 0b11111110101 // this is not the value you wanted; it never goes beyond 11 bits.
Essentially the above operation looks like this
// Notice how the lowest two bits of the shifted top value overlap the top two bits of the bottom value.
  0b11111100000
| 0b00000010101
---------------
  0b11111110101 // Only 11 bits...
You need room for the final 7 bits before the | operation. If you only shift by 5, then when you | with the lower 7 bits, 2 of them will overlap the shifted upper value.
So, updating the shift in your code should solve the problem.
public static string Get13FormattedBits(byte[] hash)
{
UInt16 upper = (UInt16)(hash[0] >> 2); // top 6 bits of the first byte
UInt16 lower = (UInt16)(hash[hash.Length - 1] & 0x7F); // bottom 7 bits of the last byte
UInt16 value = (UInt16)((upper << 7) | lower); // 6 + 7 = 13 bits
return value.ToString("0000");
}
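For a quick sanity check, here is a small Python 3 sketch of the same 13-bit extraction (top 6 bits of the first hash byte, bottom 7 bits of the last) applied to the hex digest from example no. 1; it reproduces the expected 1462:
digest = bytes.fromhex('2f665f6a6999e0ef0752e00ec9f453adf59d8cb6')
upper = digest[0] >> 2       # top 6 bits of the first byte
lower = digest[-1] & 0x7F    # bottom 7 bits of the last byte
code = (upper << 7) | lower  # 6 + 7 = 13 bits
print('{:04d}'.format(code)) # prints 1462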

How to convert from string to 16-bit unsigned integer in python?

I'm currently working on some encoding and decoding of strings in Python. I was supposed to convert some code from C# to Python; however, I encountered a problem, as below:
So now I have a string that looks like this: 21-20-89-00-67-00-45-78
The code is supposed to eliminate the - in between the numbers, pack every 2 digits into 1 group, and then convert each group into a byte. In C#, it was done like this:
var value = "21-20-89-00-67-00-45-78";
var valueNoDash = value.Replace("-", null);
for (var i = 0; i < DataSizeInByte; i++)
{
//convert every 2 digits into 1 byte
Data[i] = Convert.ToByte(valueNoDash.Substring(i * 2, 2), 16);
}
The above code represents Step 1: remove - from the string; Step 2: use the Substring method to split it into groups of 2 digits; Step 3: use Convert.ToByte with base 16 to convert them into 16-bit unsigned integers. The results in Data are:
33
32
137
0
103
0
69
120
So far I have no problem with this C# code; however, when I try to do the same in Python, I cannot get the same result as the C# code. My Python code is below:
from textwrap import wrap
import struct
values = "21-20-89-00-67-00-45-78"
values_no_dash = values.replace('-', '')
values_grouped = wrap(values_no_dash, 2)
values_list = []
for value in values_grouped:
    values_list.append(struct.pack('i', int(value)))
In Python, this gives me a list of bytes objects, as below:
b'\x15\x00\x00\x00'
b'\x14\x00\x00\x00'
b'Y\x00\x00\x00'
b'\x00\x00\x00\x00'
b'C\x00\x00\x00'
b'\x00\x00\x00\x00'
b'-\x00\x00\x00'
b'N\x00\x00\x00'
These are bytes objects; however, when I convert them to decimal, I get the exact same values as in the original string: 21, 20, 89, 0, 67, 0, 45, 78.
That means I did not convert them into 16-bit unsigned integers successfully, right? How can I do this in Python? I've tried using str.encode() but the result is still different. How can I achieve what the C# code does in Python?
Thanks, and I'd appreciate it if anyone can help!
I think this is the solution you're looking for:
values = "21-20-89-00-67-00-45-78"
values_no_dash_grouped = values.split('-') #deletes dashes and groups numbers simultaneously
for value in values_no_dash_grouped:
    print(int(value, 16)) #converts the number from base 16 to base 10 and prints it
Hope it helps!
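On Python 3 you can also get the same byte values in one step with bytes.fromhex; a small sketch, assuming the same input string as above:
values = "21-20-89-00-67-00-45-78"
data = bytes.fromhex(values.replace('-', ''))  # strip the dashes, then parse each hex pair
print(list(data))  # [33, 32, 137, 0, 103, 0, 69, 120], same as the C# Data array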

Verifying modular sum checksum in c#

I'm working with an embedded system that returns ASCII data that includes (what I believe to be) a modular sum checksum. I would like to verify this checksum, but I've been unable to do so based on the manufacturer's specification. I've also been unable to accomplish the opposite and calculate the same checksum based on the description.
Each response from the device is in the following format:
╔═════╦═══════════════╦════════════╦════╦══════════╦═════╗
║ SOH ║ Function Code ║ Data Field ║ && ║ Checksum ║ ETX ║
╚═════╩═══════════════╩════════════╩════╩══════════╩═════╝
Example:
SOHi11A0014092414220&&FBEA
Where SOH is ASCII 1, e.g.
#define SOH "\x01"
The description of the checksum is as follows:
The Checksum is a series of four ASCII-hexadecimal characters which provide a check on the integrity of all the characters preceding it, including the control characters. The four characters represent a 16-bit binary count which is the 2's complemented sum of the 8-bit binary representation of the message characters after the parity bit (if enabled) has been cleared. Overflows are ignored. The data integrity check can be done by converting the four checksum characters to the 16-bit binary number and adding the 8-bit binary representation of the message characters to it. The binary result should be zero.
I've tried a few different interpretations of the specification, including ignoring SOH as well as the ampersands, and even the function code. At this point I must be missing something very obvious in either my interpretation of the spec or the code I've been using to test. Below you'll find a simple example (the data was taken from a live system); if it were correct, the lower word of the validate variable would be 0:
static void Main(string[] args)
{
unchecked
{
var data = String.Format("{0}{1}", (char) 1, @"i11A0014092414220&&");
const string checkSum = "FBEA";
// Checksum is 16 bit word
var checkSumValue = Convert.ToUInt16(checkSum, 16);
// Sum of message chars preceding checksum
var mySum = data.TakeWhile(c => c != '&').Aggregate(0, (current, c) => current + c);
var validate = checkSumValue + mySum;
Console.WriteLine("Data: {0}", data);
Console.WriteLine("Checksum: {0:X4}", checkSumValue);
Console.WriteLine("Sum of chars: {0:X4}", mySum);
Console.WriteLine("Validation: {0}", Convert.ToString(validate, 2));
Console.ReadKey();
}
}
Edit
While the solution provided by @tinstaafl works for this particular example, it doesn't work when providing a larger record such as the one below:
SOHi20100140924165011000007460904004608B40045361000427DDD6300000000427C3C66000000002200000745B4100045B3D8004508C00042754B900000000042774D8D0000000033000007453240004531E000459F5000420EA4E100000000427B14BB000000005500000744E0200044DF4000454AE000421318A0000000004288A998000000006600000744E8C00044E7200045469000421753E600000000428B4DA50000000&&
BA6C
Theoretically you could keep incrementing/decrementing a value in the string until the checksum matched; it just so happened that using the character '1' rather than the ASCII SOH control character gave it just the right value, a coincidence in this case.
Not sure if this is exactly what you're looking for, but by using the literal character '1' for the SOH instead of the char value 1, taking the sum of all the characters, and converting the validate variable to a 16-bit integer, I was able to get validate to equal 0:
var data = @"1i11A0014092414220&&";
const string checkSum = "FBEA";
// Checksum is 16 bit word
var checkSumValue = Convert.ToUInt16(checkSum, 16);
// Sum of message chars preceding checksum
var mySum = data.Sum<char>(c => c);
var validate = (UInt16)( checkSumValue + mySum);
Console.WriteLine("Data: {0}", data);
Console.WriteLine("Checksum: {0:X4}", checkSumValue);
Console.WriteLine("Sum of chars: {0:X4}", mySum);
Console.WriteLine("Validation: {0}", Convert.ToString(validate, 2));
Console.ReadKey();
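To make the arithmetic easier to reproduce, here is a short Python sketch of the spec's verification rule (sum the 8-bit message characters, add the 16-bit checksum, ignore overflow, and check that the low 16 bits are zero). It confirms that this record only validates when a literal '1' is used in place of the SOH control character, which is the coincidence noted in the edit above:
def low16_is_zero(message_bytes, checksum_hex):
    total = sum(message_bytes) + int(checksum_hex, 16)  # overflow is ignored by the mask below
    return (total & 0xFFFF) == 0

print(low16_is_zero(b'\x01i11A0014092414220&&', 'FBEA'))  # False - real SOH, as in the question
print(low16_is_zero(b'1i11A0014092414220&&', 'FBEA'))     # True  - literal '1' character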

How to create byte[] with length 16 using FromBase64String [duplicate]

This question already has an answer here:
Calculate actual data size from Base64 encoded string length
(1 answer)
Closed 10 years ago.
I have a requirement to create a byte[] with length 16 (a byte array of 128 bits, to be used as a key in AES encryption).
The following is a valid string:
"AAECAwQFBgcICQoLDA0ODw=="
What is the algorithm that determines whether a string will decode to 128 bits? Or is trial and error the only way to create such 128-bit strings?
CODE
static void Main(string[] args)
{
string firstString = "AAECAwQFBgcICQoLDA0ODw=="; //String Length = 24
string secondString = "ABCDEFGHIJKLMNOPQRSTUVWX"; //String Length = 24
int test = secondString.Length;
byte[] firstByteArray = Convert.FromBase64String((firstString));
byte[] secondByteArray = Convert.FromBase64String((secondString));
int firstLength = firstByteArray.Length;
int secondLength = secondByteArray.Length;
Console.WriteLine("First Length: " + firstLength.ToString());
Console.WriteLine("Second Length: " + secondLength.ToString());
Console.ReadLine();
}
Findings:
For 256 bits, we need 256/6 = 42.66 chars, rounded up to 43 chars. [To make the length divisible by 4, add =]
For 512 bits, we need 512/6 = 85.33 chars, rounded up to 86 chars. [To make the length divisible by 4, add ==]
For 128 bits, we need 128/6 = 21.33 chars, rounded up to 22 chars. [To make the length divisible by 4, add ==]
A base64 string for 16 bytes will always be 24 characters and have == at the end, as padding.
(At least when it's decodable using the .NET method. The padding is not always included in all uses of base64 strings, but the .NET implementation requires it.)
In Base64 encoding, '=' is a special symbol added to the end of the Base64 string to indicate that there is no data for those chars in the original value.
Each char encodes 6 bits of the original data, so to produce 8-bit values the string length has to be divisible by 4 without a remainder (6 bits * 4 = 8 bits * 3). When the resulting Base64 string would be shorter than 4n, '=' characters are added at the end to make it valid.
Update
The last char before '==' encodes only 2 bits of information, so replacing it with all possible Base64 chars will give you only 4 different keys out of the 64 possible combinations. In other words, by generating strings in the format "bbbbbbbbbbbbbbbbbbbbbb==" (where 'b' is a valid Base64 character) you'll get 15 duplicate keys for each unique key.
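The same length arithmetic is easy to check with Python's standard base64 module (which, like Convert.FromBase64String, expects the padding to be present); a quick sketch:
import base64
key = base64.b64decode('AAECAwQFBgcICQoLDA0ODw==')
print(len(key))                     # 16 bytes -> 128 bits
print(base64.b64encode(bytes(16)))  # b'AAAAAAAAAAAAAAAAAAAAAA==' - 22 chars plus '==' for any 16-byte value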
You can use PadRight() to pad the end of the string with a char that you will later remove once decrypted.

How to convert a byte to a char, e.g. 1 -> '1'?

How to convert a byte to a char? I don't mean an ASCII representation.
I have a variable of type byte and want it as a character.
I want just the following conversions from byte to char:
0 ->'0'
1 ->'1'
2 ->'2'
3 ->'3'
4 ->'4'
5 ->'5'
6 ->'6'
7 ->'7'
8 ->'8'
9 ->'9'
(char)1 and Convert.ToChar(1) do not work. They result in the unprintable control character '\u0001', because they treat 1 as the ASCII code.
Use the number's .ToString():
one.ToString(); // one.ToString()[0] - first char -'1'
two.ToString(); // two.ToString()[0] - first char -'2'
Note that you can't really convert a byte to a char directly: a char is a single character, while a byte's value can be up to three digits long!
If you want to use LINQ and you're sure the byte won't exceed one digit (i.e. it's less than 10), you can use this:
number.ToString().Single();
Simply using variable.ToString() should be enough. If you want to get fancy, add the ASCII code of 0 to the variable before converting:
Convert.ToChar(variable + Convert.ToByte('0'));
Use this for conversion.
(char)(mybyte + 48);
where mybyte = 0, 1, and so on
OR
Convert.ToChar(1 + 48); // specific case for 1
While others have given solutions, I'll tell you why your (char)1 and Convert.ToChar(1) are not working.
When you convert the byte 1 to a char, that 1 is taken as an ASCII code.
The ASCII code of '1' is not 1.
Add 48 to it, because the ASCII code of '1' is 1 + 48 = 49. Similar cases apply for '0', '2', and so on.
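For comparison, the same offset trick in Python (purely illustrative):
for b in range(10):
    print(b, chr(b + ord('0')))  # ord('0') == 48, so 1 -> '1', 2 -> '2', ...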
Assume you have a variable byte x;
Just use (char)(x + '0')
Use Convert.ToString() to perform this.
