Non-linear operations on byte array [duplicate] - c#

Possible Duplicate:
Finding Byte logarithm
I am implementing the SAFER+ algorithm. It works on a 16-byte array and performs its operations on bytes.
The first phase consists of XOR and ADDITION operations with the subkeys; no problems to mention there.
The second phase is the nonlinear layer, which uses POWERS and LOGARITHMS of the byte values. The problem is that when we take the log to base 45 of a value, the result is a floating-point double, and this value should be passed to phase 3 as a byte so it can be handled the same way as in phase one.

Create an exponentiation table that looks like this:
exp | log
----+----
0 | 1
1 | 45
2 | 226
3 | 147
... | ...
128 | 0
... | ...
255 | 40
---------
The "log" values are 45exp % 257. You'll need an arbitrary precision arithmetic library with a modPow function (raise a number to a power, modulo some value) to build this table. You can see that the value for "exp" 128 is a special case, since normally the logarithm of zero is undefined.
Compute the logarithm of a number by finding it in the "log" column; the value in the "exp" column of that row is the logarithm.
Here's a sketch of the initialization:
// Java sketch; needs java.math.BigInteger
BigInteger V45 = BigInteger.valueOf(45);
BigInteger V257 = BigInteger.valueOf(257);
int[] exp = new int[256];
int[] log = new int[256];
for (int idx = 0; idx < 256; ++idx)
    exp[idx] = V45.modPow(BigInteger.valueOf(idx), V257).intValue() % 256; // 45^idx mod 257, with 256 folded to 0
for (int idx = 0; idx < 256; ++idx)
    log[exp[idx]] = idx; // invert the table: log[45^idx mod 257] = idx
With this setup, for example, log_45(131) = log[131] = 63, and 45^38 = exp[38] = 59.
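Since the question is tagged C#, here is roughly the same initialization using System.Numerics.BigInteger.ModPow (a sketch, assuming .NET 4 or later; not part of the original answer):
using System.Numerics;

int[] exp = new int[256];
int[] log = new int[256];
for (int idx = 0; idx < 256; ++idx)
    exp[idx] = (int)BigInteger.ModPow(45, idx, 257) % 256; // 45^idx mod 257, with 256 folded to 0
for (int idx = 0; idx < 256; ++idx)
    log[exp[idx]] = idx; // invert the table: log[45^idx mod 257] = idx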

You can do this with a LINQ expression as follows:
inputBytes.Select(b => b == 0 ? (byte)128 : Convert.ToByte(System.Math.Log(Convert.ToDouble(b), 45))).ToArray();
But this will truncate the double, as it has to...
Edited after looking at SAFER+: it uses the convention that log_45(0) = 128, since the logarithm of zero is otherwise undefined.
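If the exp/log tables from the answer above are available, the same Select shape works without any floating-point truncation (a sketch assuming a 256-entry log table built as described there, where log[0] = 128 by the SAFER+ convention):
byte[] output = inputBytes.Select(b => (byte)log[b]).ToArray();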

Related

Binary of a number

Is there a simple way to convert decimal/ASCII 6-bit decimal numbers from 1 to 100 to a binary representation?
To be more specific, I'm interested in 6-bit binary ASCII. So I made this to get an Int32.
For example, "u" is changed to 61 instead of 117 as in standard decimal ASCII.
Then this 61 needs to be "111101" instead of the traditional "01110101", but after the -48/-8 arithmetic that part isn't important, as it's now normal binary, just with only 6 bits used.
foreach (char c in partToDecode)
{
    var sum = c - 48;
    if (sum > 40)
    {
        sum = sum - 8;
    }
}
I found this, but I don't have a clue how to transpose it to C#:
void binary(unsigned n) {
    unsigned i;
    // loop from the most significant bit down to the least
    for (i = 1u << 31; i > 0; i >>= 1)
        printf("%u", !!(n & i));
}
. . .
binary(65);
You can try Convert.ToString, e.g.
int source = 61;
// "111101"
string result = Convert.ToString(source, 2).PadLeft(6, '0');
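Putting the two together, a minimal sketch that applies the question's own offset logic and then formats each character as 6 bits (the -48/-8 adjustment is taken from the question as-is):
string partToDecode = "u";
string bits = "";
foreach (char c in partToDecode)
{
    int sum = c - 48;  // the question's 6-bit offset
    if (sum > 40)
        sum -= 8;
    bits += Convert.ToString(sum, 2).PadLeft(6, '0'); // 'u' -> 61 -> "111101"
}
Console.WriteLine(bits); // 111101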

Single (and inexpensive) operation to cycle through a 3-value C# enum periodically (i.e. with wrap)

I have a 3-element enum; it defines one of three contexts, for example red, green, or blue. This enum is used in a loop with millions of iterations, for example one per pixel. The fields are currently one int apart, the default. Given a desired production order of R,G,B,R,G,B..., I have currently resorted to checking whether the value is B, in which case I assign it back to R, and otherwise incrementing the value.
private enum CHANNEL_CONTEXT {RED, GREEN, BLUE} //here is a sample enum
//here is a sample loop with the relevant construct
CHANNEL_CONTEXT current = CHANNEL_CONTEXT.RED;
while (condition)
{
    // use current...
    if (current == CHANNEL_CONTEXT.BLUE)
        current = CHANNEL_CONTEXT.RED;
    else
        current += 1;
}
Is there a way to wrap a 3-field enum with a single operation, such that no branch is required to determine when it is time to wrap? I know modulus (%) fits the bill, but my motivation is performance, and I'd break even at best with such an expensive operation (testing corroborated this, though not exhaustively).
To put my agenda in perspective: if I had 256 relevant fields, I could create a byte-based enum and increment with impunity and intended overflow. Alas, I only have three, and I can't think of a way to manipulate any integral primitive so that three values are produced cyclically using a lightweight ALU operation (+, -, &, ^, |, <<, etc.). I also wouldn't have been able to think of a way to swap bits with no temporary using such operations, yet a rarely practical but possible way to do so exists.
Can someone guide me to a way to distribute 3 integral enum values such that they can be traversed periodically, with no branch required and no division-based operators used (like modulus)?
While it sounds very unlikely that you can beat x = (x + 1) % 3, you can try a mapping table:
var map = new[]{1,2,0};
x = map[x];
You would probably need to wrap that in unsafe code to remove the bounds checks on the array access.
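Applied to the enum from the question, the lookup might look like this (a sketch; it assumes the enum keeps its default underlying values 0, 1, 2):
private static readonly int[] Next = { 1, 2, 0 }; // RED -> GREEN -> BLUE -> RED

CHANNEL_CONTEXT current = CHANNEL_CONTEXT.RED;
while (condition)
{
    // use current...
    current = (CHANNEL_CONTEXT)Next[(int)current];
}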
If you are really set on bit manipulation irrespective of the readability of the code: the table of the conversions you are interested in is small enough to build manually for each bit and then combine.
Truth table:
Source      Result
Bit2 Bit1   Bit2 Bit1
  0   0       0   1
  0   1       1   0
  1   0       0   0
  1   1       x   x
As you can see, the values we are interested in produce only two non-zero result bits, so the resulting expression will be very simple: one case yields a 1 for the lower bit and one case yields a 1 for the higher bit (assuming values never fall outside the range 0-2, which is safe if this is the only transformation applied).
var b1 = (x & 1) >> 0; // extract lower bit 0
var b2 = (x & 2) >> 1; // extract higher bit 1
// only care of cases when pair of bits results in 1
var resultBit1 = 1 & (~b1 & ~b2); // 00 -> x1, other cases are 0
var resultBit2 = (1 & (b1 & ~b2)) << 1; // 01 -> 1x, other cases are 0
x = resultBit1 | resultBit2;
Or inlining all into one unreadable line:
x = 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;
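A quick sanity check of the inlined expression (a minimal sketch, not part of the original answer):
for (int x0 = 0; x0 < 3; x0++)
{
    int x = x0;
    x = 1 & ~(x | x >> 1) | 2 & (x & 1 & ~x >> 1) << 1;
    Console.WriteLine($"{x0} -> {x}"); // prints 0 -> 1, 1 -> 2, 2 -> 0
}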

Is ampersand followed by int.MaxValue rounding down?

I have a piece of C# code that another developer copied from a blog post; it is used to encode/obfuscate an integer. This code contains some syntax that I am unfamiliar with. It looks like it might be rounding down the result of the calculation to prevent it from exceeding the maximum size of an integer. If that is the case, I am worried that two input values could potentially result in the same output. The obfuscated values need to be unique, so I'm worried about using this code without understanding how it works.
This is a simplified version of the code:
public static int DecodeNumber(int input)
{
return (input * PrimeInverse) & int.MaxValue;
}
So my question is:
what is the meaning of the ampersand in this context, and will this code produce an output that is unique to the input?
No, there is no "rounding" going on here. This is a sneaky way of truncating the most significant bit when multiplication results in overflow.
According to the documentation, int.MaxValue is 2,147,483,647, which is 0x7FFFFFFF in hex. Performing a bitwise AND with this value simply clears out the most significant bit.
Since the intention of the code is to use int.MaxValue for its binary pattern, rather than for its numeric value as the highest int representable by Int32, I would recommend either using the 0x7FFFFFFF constant explicitly, or computing it with a ~ expression:
return (input * PrimeInverse) & ~(1 << 31);
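A quick way to convince yourself (a sketch with arbitrary numbers, not the blog's actual constants):
int product = unchecked(2000000000 * 2);       // overflows and wraps to -294967296
Console.WriteLine(product & int.MaxValue);     // 1852516352: the same low 31 bits, sign bit cleared
Console.WriteLine(~(1 << 31) == int.MaxValue); // True: both are 0x7FFFFFFF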
The ampersand is the bitwise AND operator. The numbers on either side of the operator are considered in binary form, and a logical AND is performed on each pair of bits of the same significance.
int.MaxValue equals 2,147,483,647. The result of this operation is explained below:
operation:
a = x & int.MaxValue;
result:
if x is non-negative, then a = x;
if x is negative, then a = x + 2,147,483,648.
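Both cases can be checked directly (a minimal sketch):
int positive = 5;
int negative = -5;
Console.WriteLine(positive & int.MaxValue); // 5          (unchanged)
Console.WriteLine(negative & int.MaxValue); // 2147483643 (= -5 + 2,147,483,648)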
EDIT:
Logical Operations:
Logical operations like AND, OR, XOR, etc. are defined to work on Boolean (logical) values. A Boolean variable can have either 1 or 0 as its value. The result of an AND operation between two logical variables is 1 if and only if both variables are equal to 1. This is shown below:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
The bitwise AND operator on numbers works using this basic AND operation. First the two numbers on either side of the operator are converted to binary. If they do not have the same number of digits, zeros are added to the left of the shorter one so that both have the same width. Then the digits of the same significance are ANDed one by one as explained above, and the result of each operation is written in the position of the same significance, building up the result. The bitwise AND of 12 and 7 is shown below; 12 is 1100 in binary and 7 is 0111.
12 = 0b1100
7 = 0b0111
12 & 7 = ?
1 1 0 0 &
0 1 1 1
----------
0 1 0 0 = 4

Interlacing two binary numbers together in Arduino C

So I've come across a strange need to 'merge' two numbers:
byte one;
byte two;
into an int three; with the first bit being the first bit of one, the second bit being the first bit of two, the third being the second bit of one and so on.
So with these two numbers:
01001000
00010001
would result in
0001001001000010
A more didactic illustration of the interlacing operation:
byte one = 0 1 0 0 1 0 0 0
byte two = 0 0 0 1 0 0 0 1
result = 00 01 00 10 01 00 00 10
UPDATE: Sorry, I misread your question completely.
The following code should do:
public static int InterlacedMerge(byte low, byte high)
{
var result = 0;
for (var offset = 0; offset < 8; offset++)
{
var mask = 1 << offset;
result |= ((low & mask) | ((high & mask)) << 1) << offset;
}
return result;
}
I am not, by any means, very smart when it comes to bit twiddling, so there is probably a more efficient way to do this. That said, I think this will do the job, but I haven't tested it, so make sure you do.
P.S.: There are some unnecessary parentheses in the code, but I'm never sure about bitwise operator precedence, so I find it easier to read the way it's written.
UPDATE2: Here is the same code a little more verbose to make it easier to follow:
public static int InterlacedMerge(byte low, byte high)
{
var result = 0;
for (var offset = 0; offset < 8; offset++)
{
//Creates a mask with the current bit set to one: 00000001,
//00000010, 00000100, and so on...
var mask = 1 << offset;
//Creates a number with the current bit set to low's bit value.
//All other bits are 0
var lowAndMask = low & mask;
//Creates a number with the current bit set to high's bit value.
//All other bits are 0
var highAndMask = high & mask;
//Create a merged pair where the lowest bit is low's bit value
//and the highest bit is high's bit value.
var mergedPair = lowAndMask | (highAndMask << 1);
//Ors the mergedPair into the result shifted left offset times
//Because we are merging two bits at a time, we need to
//shift 1 additional time for each preceding bit.
result |= mergedPair << offset;
}
return result;
}
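For what it's worth, plugging the question's sample values into the method above appears to reproduce the expected result (assuming C# 7+ binary literals):
byte one = 0b01001000;
byte two = 0b00010001;
int three = InterlacedMerge(one, two);
Console.WriteLine(Convert.ToString(three, 2).PadLeft(16, '0')); // 0001001001000010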
#inbetween answered while I was writing this; similar solution, different phrasing.
You'll have to write a loop. You'll test one bit in each of the two inputs, set a bit in the output for each, and shift everything along one place. Maybe something like this (untested):
#define TOPBIT 0x8000u              /* bit 15 of the 16-bit result */

unsigned int out = 0;
for (int i = 0; i < 8; i++) {       /* 8 bit-pairs */
    out >>= 1;
    if (value1 & 1) out |= TOPBIT;  /* bit of value1 -> even position */
    out >>= 1;
    if (value2 & 1) out |= TOPBIT;  /* bit of value2 -> odd position */
    value1 >>= 1;
    value2 >>= 1;
}

how to loop through the digits of a binary number?

I have a binary number, 1011011. How can I loop through all of its binary digits one after the other?
I know how to do this for decimal integers by using modulo and division.
int n = 0x5b; // 1011011
Really you should just do this; hexadecimal is in general a much better representation:
printf("%x", n); // this prints "5b"
To get it in binary (with emphasis on easy understanding), try something like this:
// needs <stdio.h>, <limits.h> (for CHAR_BIT) and <stdbool.h>
printf("%s", "0b"); // common prefix to denote that binary follows
bool leading = true; // we're still inside the leading zeroes
// from the most significant bit down to the least
for (int i = sizeof(n) * CHAR_BIT - 1; i >= 0; --i) {
    int bit = (n >> i) & 1;
    if (bit) leading = false; // the first 1 ends the leading zeroes
    if (!leading)
        printf("%d", bit);
}
if (leading) // n was zero, so just print 0
    printf("0");
// at this point, for n = 0x5b, we'll have printed 0b1011011
You can use modulo and division by 2 exactly as you would in base 10. You could also use binary operators, but if you already know how to do it in base 10, it is easier to just use division and modulo.
Expanding on Frédéric and Gabi's answers, all you need to do is realise that the rules in base 2 are no different from those in base 10 - you just need to do your division and modulus with a divisor of 2 instead of 10.
The next step is simply to use number >> 1 instead of number / 2, and number & 0x1 instead of number % 2, to improve performance. Mind you, with modern optimising compilers there's probably no difference...
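For example, a minimal sketch of the two equivalent forms (prints the digits least-significant first):
int number = 91; // 1011011 in binary
while (number != 0)
{
    Console.Write(number % 2); // or: number & 0x1
    number /= 2;               // or: number >>= 1
}
// prints 1101101; reverse it to read the digits in the usual order (1011011)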
Use an AND with increasing powers of two...
In C, at least, you can do something like:
while (val != 0)
{
    printf("%d", val & 0x1); /* note: prints the bits least-significant first */
    val = val >> 1;
}
To expand on #Marco's answer with an example:
uint value = 0x82fa9281;
for (int i = 0; i < 32; i++)
{
bool set = (value & 0x1) != 0;
value >>= 1;
Console.WriteLine("Bit set: {0}", set);
}
What this does is test the lowest bit and then shift everything right by one bit.
If you're already starting with a string, you could just iterate through each of the characters in the string:
var values = "1011011".Reverse().ToArray(); // least significant digit first
for (var index = 0; index < values.Length; index++) {
    var isSet = values[index] == '1'; // Boolean.Parse only works on "true"/"false", not 0/1
    // do whatever
}
byte input = Convert.ToByte("1011011", 2);
BitArray arr = new BitArray(new[] { input }); // bits are enumerated least-significant first
foreach (bool value in arr)
{
// ...
}
You can simply loop through every bit. The following C-like pseudocode lets you set the bit number you want to check. (You might also want to google endianness.)
for ( /* each bit you care about */ )
{
    bitnumber = <your bit>;
    printf("%d", (val & 1 << bitnumber) ? 1 : 0);
}
The code basically prints 1 if the bit is set and 0 if not. We shift the value 1 (which in binary is 1 ;) ) left by bitnumber places and then AND it with val to see if that bit is set. Simple as that!
So if bitnumber is 2 we simply do this:
00000100 (the value 1 shifted left 2 places)
AND
10110110 (whatever your value is)
=
00000100 = true! Both values have bit 2 set.
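A concrete C# version of the same idea, walking from the most significant bit of a byte down to the least (a sketch, not from the original answer):
byte val = 182; // 10110110 in binary
for (int bitnumber = 7; bitnumber >= 0; bitnumber--)
{
    Console.Write((val & (1 << bitnumber)) != 0 ? 1 : 0);
}
// prints 10110110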
