GF(256) finite field multiplication function in C#

I'm implementing AES in C# and at some point (MixColumns function) I have to multiply two Bytes over the GF(2^8) finite field.
So, I have three options:
1. Use a built-in function that .NET has (does it have something like that?)
2. Write a custom function which does that
3. Use lookup tables
For the custom function I found a piece of C code which I tried to rewrite for C#, but it doesn't work (I get wrong results). (*)
Here is the original C piece of code (source):
/* Multiply two numbers in the GF(2^8) finite field defined
 * by the polynomial x^8 + x^4 + x^3 + x + 1 */
uint8_t gmul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    uint8_t counter;
    uint8_t hi_bit_set;
    for (counter = 0; counter < 8; counter++) {
        if (b & 1)
            p ^= a;
        hi_bit_set = (a & 0x80);
        a <<= 1;
        if (hi_bit_set)
            a ^= 0x1b; /* x^8 + x^4 + x^3 + x + 1 */
        b >>= 1;
    }
    return p;
}
And this is what I rewrote:
public Byte GMul(Byte a, Byte b) { // Galois Field (256) Multiplication
    Byte p = 0;
    Byte counter;
    Byte hi_bit_set;
    for (counter = 0; counter < 8; counter++) {
        if ((b & 1) != 0) {
            p ^= a;
        }
        hi_bit_set = (Byte)(a & 0x80);
        a <<= 1;
        if (hi_bit_set != 0) {
            a ^= 0x1b; /* x^8 + x^4 + x^3 + x + 1 */
        }
        b >>= 1;
    }
    return p;
}
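A quick sanity check is the multiplication example worked out in the AES specification (FIPS-197), which gives {57} * {83} = {c1}:
Byte product = GMul(0x57, 0x83);
Console.WriteLine(product == 0xC1); // True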
I also found some lookup tables here, and they seemed like a simple and workable approach, but I don't really know how to use them, though I have a hunch. (**)
Bottom line: which option should I choose, and how can I make it work, given that what I wrote above is all I have so far and that I don't really want to go very deep into the math?
UPDATE:
*) Meanwhile I realised my rewritten C# code was producing correct answers; I had simply made a mistake when verifying them.
**) The tables can be used as Byte[256] arrays, and the answer for, say, x * 3 is table_3[x], with the byte value x used directly as the index into the table array.
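For instance, a multiply-by-3 table can be precomputed with the GMul function above (a small sketch, using the table_3 name from above):
Byte[] table_3 = new Byte[256];
for (int x = 0; x < 256; x++)
    table_3[x] = GMul((Byte)x, 3);
// from here on, GMul(b, 3) == table_3[b] for any byte b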

In order to multiply x * 3 in GF(2^8), one just accesses x = table_3[x];
There's also a three-lookup-table method that uses a logarithm approach.
Just as with regular numbers, where a * b = 2^(log2(a) + log2(b)), the same holds in GF(2^8), but without floating point or rounding errors.
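For illustration, here is a minimal sketch of that log/antilog idea in C#, assuming the AES polynomial 0x11B and the generator 3 (the table and method names are mine, not standard):
static readonly byte[] Exp = new byte[256];
static readonly byte[] Log = new byte[256];

static void InitTables()
{
    int x = 1;
    for (int i = 0; i < 255; i++)
    {
        Exp[i] = (byte)x; // Exp[i] = 3^i in GF(2^8)
        Log[x] = (byte)i; // inverse mapping
        // step to the next power of the generator: x*3 = (x*2) XOR x, reduced mod 0x11B
        int x2 = x << 1;
        if ((x2 & 0x100) != 0)
            x2 ^= 0x11B;
        x = x2 ^ x;
    }
}

static byte GMulTable(byte a, byte b)
{
    if (a == 0 || b == 0)
        return 0; // the log of zero is undefined, so handle zero explicitly
    return Exp[(Log[a] + Log[b]) % 255];
}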

Related

How to encode a decimal number to binary in 16 bits in C#?

The problem asks:
The user gives me an integer n,
I convert it to binary in 16 bits,
reverse the binary,
then decode the reversed binary into a new integer.
example:
14769 is 0011100110110001 (the 2 zeros in the front are the problem for me)
reverse the binary:
1000110110011100
Decode:
36252
I wrote the code, but when I convert to binary it only gives me
11100110110001, without the 00 in front, so the whole reversed binary changes and the new integer comes out different.
This is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

namespace HelloWorld
{
    public class Program
    {
        public static void Main(string[] args)
        {
            long n, n1, p, i, r, sum, inv, inv1, newint;
            Console.WriteLine("give n:");
            n = long.Parse(Console.ReadLine());
            n1 = n;
            p = 1;
            sum = 0;
            i = n;
            // the for below builds the binary representation of n
            for (i = n; i != 0; i = i / 2)
            {
                r = i % 2;
                sum = sum + r * p;
                p = p * 10;
            }
            inv = 0;
            // the for below reverses the above binary representation
            for (i = sum; i != 0; i = i / 10)
            {
                r = i % 10;
                inv = 10 * inv + r;
            }
            inv1 = inv;
            newint = 0;
            p = 0;
            // the for below decodes the reversed binary back to a decimal integer
            for (i = inv; i != 0; i = i / 10)
            {
                r = i % 10;
                newint = newint + r * (long)Math.Pow(2, p);
                p = p + 1;
            }
            Console.WriteLine("The number that you gave = {0} \nIts binary representation = {1} \n\nThe inverse binary representation = {2} \nThe integer corresponding to the inverse binary number = {3}", n1, sum, inv1, newint);
        }
    }
}
So how can I encode on 16 bits?
Edit:
1) We didn't learn built-in functions
2) We didn't learn padding or Convert.Int...
3) We only know the for loop (+ the while loop, but better not to use it)
4) We can't use strings either
You could reverse the bits using some simple bitwise operators.
ushort num = 14769;
ushort result = 0;
// ushort is 16 bits, therefore exactly 16 iterations are required
for (var i = 0; i < 16; i++, num >>= 1)
{
    // shift result bits left by 1 position
    result <<= 1;
    // copy the current lowest bit of num into the freed position
    result |= (ushort)(num & 1);
}
Console.WriteLine(result); // 36252
You can try using Convert to obtain the binary representation and Aggregate (Linq) to get back to decimal:
using System.Linq;
...
int value = 14769;
int result = Convert
    .ToString(value, 2)   // Binary representation
    .PadLeft(16, '0')     // Ensure it is 16 characters long
    .Reverse()            // Reverse
    .Aggregate(0, (s, a) => s * 2 + a - '0'); // Back to decimal
Console.Write($"{value} => {result}");
Output:
14769 => 36252
Edit: Loop solution (if you are not allowed to use the classes above...)
int value = 14769;
int result = 0;
for (int i = 0, v = value; i < 16; ++i, v /= 2)
    result = result * 2 + v % 2;
Console.Write($"{value} => {result}");
Explanation (how the for loop above works):
First of all, how can we get all 16 bits of the number? We can use the standard algorithm based on remainders:
14769 / 1 % 2 == 1,
14769 / 2 % 2 == 0,
14769 / 4 % 2 == 0,
14769 / 8 % 2 == 0,
14769 / 16 % 2 == 1,
...
these are the bits of 11100110110001, read from right to left. Typical code can be:
int v = value; // we don't want to change value, so work with its copy v
for (int i = 0; i < 16; ++i) {
    // rightmost bit
    int bit = v % 2;
    // divide v by two to get rid of the rightmost bit
    v = v / 2;
}
Note that we compute bits from right to left - in reverse order - the very order we are looking for! How can we build result from these bits?
result = (...((bit0 * 2 + bit1) * 2 + bit2) * 2 + ...) * 2 + bit15
(the bit extracted first ends up in the most significant position, which is exactly the reversal)
So we can easily modify our loop into
int result = 0;
int v = value; // we don't want to change value, so work with its copy v
for (int i = 0; i < 16; ++i) {
    // rightmost bit
    int bit = v % 2;
    result = result * 2 + bit;
    // divide v by two to get rid of the rightmost bit
    v = v / 2;
}
Finally, if we get rid of the bit variable and declare v inside the for header, we arrive at my loop solution above.

Unset All Bits Except Most Significant Bit C#

Is there a quick and easy way to unset all the bits in a number except the most significant set bit? In other words, I would like to take an integer x and AND it with 1 left-shifted to the position of the highest set bit in x.
Example:
return UnsetAllBitsExceptMSB(400);
should return 256
Yes, there is a trick:
private int UnsetAllBitsExceptMSB(int x)
{
    x |= x >> 16;
    x |= x >> 8;
    x |= x >> 4;
    x |= x >> 2;
    x |= x >> 1;
    x ^= x >> 1;
    return x;
}
This works by first turning on all the bits to the right of the most significant set bit (00110000 becomes 00111111). It then XORs the result with itself right-shifted by one to turn all but the top bit off. (00111111 XOR 00011111 = 00100000)
There are other ways of doing this that will perform better in some circumstances, but this has a predictable performance no matter the input. (5 OR, 6 right shifts, and an XOR).
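A worked trace for the example input 400 makes the sequence concrete:
int x = 400;  // 1_1001_0000 in binary
x |= x >> 16; // 1_1001_0000 (no set bits that far up, unchanged)
x |= x >> 8;  // 1_1001_0001
x |= x >> 4;  // 1_1001_1001
x |= x >> 2;  // 1_1111_1111
x |= x >> 1;  // 1_1111_1111 (every bit below the MSB is already set)
x ^= x >> 1;  // 1_0000_0000 = 256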
I'm not sure about "quick and easy", but you don't need any bitwise operations for this... your question could be reworded as "how can I find the largest power of 2 that's no greater than my input?" So a simple way to do that:
private int UnsetAllBitsExceptMSB(int x)
{
    int y = 1;
    while (y <= x)
    {
        y *= 2;
    }
    return y / 2;
}
Given that int represents a 32-bit signed integer, I guess the sign bit shouldn't be taken into consideration. So this should get what you want:
int result = 1 << 30;
while ((result & myInt) != result)
    result >>= 1;
Here is another option to consider:
public static int GetTopBitValue(int number)
{
    if (number < 0)
    {
        throw new ArgumentOutOfRangeException(nameof(number), "Non-negative numbers are expected");
    }
    int i = 1;
    while (i <= number)
        i = i << 1;
    return i >> 1;
}
Edited to cover corner cases.

rewrite array manipulations from C++ to C#

I have a C++ code that I'm trying to reuse on my C# project and I need some help.
Here is the code in question:
for (int i = 0; i < numOfSamples; i++)
{
    *(((double*)m_Buffer) + i)
        = max(*(((double*)m_Buffer) + i * 4), *(((double*)m_Buffer) + i * 4 + 1));
}
where m_Buffer is an array of float. This part of the code reads each two "floats" of the array as one "double" and then does some manipulations (shifts it, takes the max, etc.).
The question is: how can I do the same operation in C#?
For example, I have an array [12,45,26,32,07,89,14,11] and I have to transform the items at positions 0 and 1 (12 and 45) so that I get a new number (of type double) whose highest (I'm not sure - maybe lowest) bits are formed from 12 and whose lowest are formed from 45.
It should be something like:
for (int i = 0; i < numOfSamples; i++)
{
    m_Buffer[i] = Math.Max(m_Buffer[i * 4], m_Buffer[i * 4 + 1]);
}
Where m_Buffer must contain at least (numOfSamples - 1) * 4 + 2 elements.
So, I got the solution. The key point here is a struct with explicit layout:
[StructLayout(LayoutKind.Explicit)]
struct MyStruct
{
    [FieldOffset(0)]
    public double Double;
    [FieldOffset(0)]
    public float Float1;
    [FieldOffset(4)]
    public float Float2;
}
I simply create a new array, put array[2*i] into Float1 and array[2*i+1] into Float2, and then apply Math.Max to each new_array[i].Double.
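A brief sketch of that usage (the array names here are illustrative, not from the original post):
float[] samples = { 12, 45, 26, 32, 07, 89, 14, 11 };
var packed = new MyStruct[samples.Length / 2];
for (int i = 0; i < packed.Length; i++)
{
    packed[i].Float1 = samples[2 * i];     // occupies bytes 0-3 of the double
    packed[i].Float2 = samples[2 * i + 1]; // occupies bytes 4-7 of the double
    Console.WriteLine(packed[i].Double);   // the pair reinterpreted as one double
}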

How can I compute a base 2 logarithm without using the built-in math functions in C#?

How can I compute a base 2 logarithm without using the built-in math functions in C#?
I call Math.Log and BigInteger.Log millions of times in an application, and it has become painfully slow.
I am interested in alternatives that use binary manipulation to achieve the same. Please bear in mind that I can make do with Log approximations in case that helps speed up execution times.
Assuming you're only interested in the integral part of the logarithm, you can do something like this:
static int LogBase2(uint value)
{
    int log = 31;
    while (log >= 0)
    {
        uint mask = 1u << log;
        if ((mask & value) != 0)
            return log;
        log--;
    }
    return -1;
}
(note that the return value for 0 is wrong; it should be negative infinity, but there is no such value for integral datatypes so I return -1 instead)
http://graphics.stanford.edu/~seander/bithacks.html
For the BigInteger you could use the ToByteArray() method, manually find the most significant set bit, and count the number of bits below it. This would give you the base-2 logarithm with integer precision.
The bit hacks page is useful for things like this.
Find the log base 2 of an integer with a lookup table
The code there is in C, but the basic idea will work in C# too.
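A rough C# port of that lookup-table approach might look like this (my sketch, modeled on the C version from the bit hacks page):
static readonly int[] LogTable256 = BuildLogTable();

static int[] BuildLogTable()
{
    var table = new int[256];
    table[0] = -1; // log2(0) is undefined; -1 is a sentinel
    for (int i = 2; i < 256; i++)
        table[i] = 1 + table[i / 2]; // table[1] stays 0
    return table;
}

static int Log2(uint v)
{
    uint t;
    if ((t = v >> 24) != 0) return 24 + LogTable256[t];
    if ((t = v >> 16) != 0) return 16 + LogTable256[t];
    if ((t = v >> 8) != 0) return 8 + LogTable256[t];
    return LogTable256[v];
}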
If you can make do with approximations, then use a trick that Intel chips use: precalculate the values into an array of suitable size and then reference that array. You can make the array start and end with any min/max values, and you can create as many in-between values as you need to achieve the desired accuracy.
You can try this C algorithm to get the binary logarithm (base 2) of a double N:
static double native_log_computation(const double n) {
    // Basic logarithm computation.
    static const double euler = 2.7182818284590452354;
    unsigned a = 0, d;
    double b, c, e, f;
    if (n > 0) {
        for (c = n < 1 ? 1 / n : n; (c /= euler) > 1; ++a);
        c = 1 / (c * euler - 1), c = c + c + 1, f = c * c, b = 0;
        for (d = 1, c /= 2; e = b, b += 1 / (d * c), b - e /* > 0.0000001 */ ;)
            d += 2, c *= f;
    } else b = (n == 0) / 0.;
    return n < 1 ? -(a + b) : a + b;
}

static inline double native_ln(const double n) {
    // Returns the natural logarithm (base e) of N.
    return native_log_computation(n);
}

static inline double native_log_base(const double n, const double base) {
    // Returns the logarithm (base b) of N.
    // The right-hand side can be precomputed when the base is fixed at 2.
    return native_log_computation(n) / native_log_computation(base);
}
Source

Find most significant bit of a BigInteger

I have read many fine algorithms for identifying the most significant bit for 32- and 64-bit integers (including other posts here on SO). But I am using BigIntegers, and will be dealing with numbers up to 4000 bits long. (The BigInteger will hold the Hilbert index into the Hilbert space-filling curve that meanders through a 1000-dimension hypercube at a fractal depth of 4.) But the bulk of the cases will involve numbers that could fit inside a 64 bit integer, so I want a solution that is optimal for the common cases but can handle the extreme cases.
The naive way is:
BigInteger n = BigInteger.Parse("234762348763498247634"); // too large for a ulong literal
int count = 0;
while (n > 0) {
    n >>= 1;
    count++;
}
I was thinking of converting common cases to Longs and using a 64-bit algorithm on those, otherwise using a different algorithm for the really big numbers. But I am not sure how expensive the conversion to a Long is, and whether that will swamp the efficiencies of doing the remainder of the computation on a 64-bit quantity. Any thoughts?
One intended use for this function is to help optimize inverse gray code calculations.
Update. I coded two approaches and ran a benchmark.
If the number was under ulong.MaxValue, then converting to a ulong and doing the binary search approach was twice as fast as using BigInteger.Log.
If the number was very large (I went as high as 10000 bits), then Log was 3.5 times faster.
96 msec elapsed for one million calls to MostSignificantBitUsingLog (convertible to Long).
42 msec elapsed for one million calls to MostSignificantBitUsingBinarySearch (convertible to Long).
74 msec elapsed for ten thousand calls to MostSignificantBitUsingLog (too big to convert).
267 msec elapsed for ten thousand calls to MostSignificantBitUsingBinarySearch (too big to convert).
Here is the code for using Log:
public static int MostSignificantBitUsingLog(BigInteger i)
{
    int bit;
    if (i == 0)
        bit = -1;
    else
        bit = (int)BigInteger.Log(i, 2.0);
    return bit;
}
Here is my approach to binary search. It could be improved to extend the binary division up into the BigInteger range. I will try that next.
public static int MostSignificantBitUsingBinarySearch(BigInteger i)
{
    int bit;
    if (i.IsZero)
        bit = -1;
    else if (i < ulong.MaxValue)
    {
        ulong y = (ulong)i;
        ulong s;
        bit = 0;
        s = y >> 32;
        if (s != 0)
        {
            bit = 32;
            y = s;
        }
        s = y >> 16;
        if (s != 0)
        {
            bit += 16;
            y = s;
        }
        s = y >> 8;
        if (s != 0)
        {
            bit += 8;
            y = s;
        }
        s = y >> 4;
        if (s != 0)
        {
            bit += 4;
            y = s;
        }
        s = y >> 2;
        if (s != 0)
        {
            bit += 2;
            y = s;
        }
        s = y >> 1;
        if (s != 0)
            bit++;
    }
    else
        return 64 + MostSignificantBitUsingBinarySearch(i >> 64);
    return bit;
}
Update 2: I changed my binary search algorithm to work against BigIntegers up to one million binary digits and not call itself recursively in 64 bit chunks. Much better. Now it takes 18 msec to run my test, and is four times faster than calling Log! (In the code below, MSB is my ulong function that does the same sort of thing, with the loop unrolled.)
public static int MostSignificantBitUsingBinarySearch(BigInteger i)
{
    int bit;
    if (i.IsZero)
        bit = -1;
    else if (i < ulong.MaxValue)
        bit = MSB((ulong)i);
    else
    {
        bit = 0;
        int shift = 1 << 20; // Accommodate up to one million bits.
        BigInteger remainder;
        while (shift > 0)
        {
            remainder = i >> shift;
            if (remainder != 0)
            {
                bit += shift;
                i = remainder;
            }
            shift >>= 1;
        }
    }
    return bit;
}
You can calculate the log2, which gives the number of bits needed:
var numBits = (int)Math.Ceiling(BigInteger.Log(bigInt, 2));
(Beware that this is off by one for exact powers of two: ceil(log2(8)) is 3, yet 8 needs 4 bits.)
You can treat it like a binary-search problem.
You have an upper limit of 4000 bits (add some room maybe):
int m = (lo + hi) / 2;
BigInteger x = BigInteger.One << m;
if (x > n) ...
else ...
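Fleshed out, the search might look like this (a sketch of my own, assuming a 4096-bit ceiling as in the question):
static int MostSignificantBit(BigInteger n)
{
    if (n <= 0)
        return -1; // no set bit to report
    int lo = 0, hi = 4096; // upper limit with some room
    while (lo < hi)
    {
        int m = (lo + hi + 1) / 2;
        if ((BigInteger.One << m) <= n)
            lo = m;     // 2^m still fits below n: the MSB is at position m or higher
        else
            hi = m - 1; // 2^m exceeds n: the MSB is below position m
    }
    return lo;
}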
In .NET 5 this is now built-in:
long bitLength = myBigInt.GetBitLength();
For a positive value, the zero-based index of the most significant bit is then bitLength - 1.
If you can use Java rather than C#, there is a library for arbitrary precision Hilbert curve indexing that you can find at http://uzaygezen.googlecode.com. For the implementation of the gray code inverse, you may want to have a closer look at LongArrayBitVector.grayCodeInverse or perhaps BitSetBackedVector.grayCodeInverse in the mentioned project.
8 years late: to find the MSB / top bit (aka Log2) I came up with this quick method...
static int GetTopBit(BigInteger value)
{
    if (value < 0)
        value = BigInteger.Negate(value);
    int lowerBytes = value.GetByteCount(true) - 1;
    int t = value.ToByteArray(true)[lowerBytes];
    int top = t > 127 ? 8 : t > 63 ? 7 : t > 31 ? 6 : t > 15 ? 5 : t > 7 ? 4 : t > 3 ? 3 : t > 1 ? 2 : 1;
    int topbit = top + lowerBytes * 8;
    return topbit;
}
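Note that the returned value is the 1-based bit count rather than the zero-based MSB index, so it is floor(Log2) + 1 for positive inputs. For example:
Console.WriteLine(GetTopBit(new BigInteger(400))); // 9 (400 = 1_1001_0000 needs nine bits; floor(log2(400)) is 8)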
