How to get random double value out of random byte array values? - c#

I would like to use RNGCryptoServiceProvider as my source of random numbers. As it can only output them as an array of byte values, how can I convert them to a double in the range 0 to 1 while preserving the uniformity of the results?

byte[] result = new byte[8];
rng.GetBytes(result);
return (double)BitConverter.ToUInt64(result,0) / ulong.MaxValue;
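Be aware that dividing by ulong.MaxValue can round to exactly 1.0, because a double has only 53 bits of mantissa and values near the top of the ulong range round up when converted. A quick check (assuming the code above):
ulong v = ulong.MaxValue - 1;
double d = (double)v / ulong.MaxValue; // d == 1.0 exactly
The next answer sidesteps this by keeping only 53 bits.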

This is how I would do this.
private static readonly System.Security.Cryptography.RNGCryptoServiceProvider _secureRng = new System.Security.Cryptography.RNGCryptoServiceProvider();
public static double NextSecureDouble()
{
var bytes = new byte[8];
_secureRng.GetBytes(bytes);
var v = BitConverter.ToUInt64(bytes, 0);
// We only use the 53 bits of integer precision available in an IEEE 754 64-bit double.
// The result is a fraction r = k / 9007199254740992, where k is an integer
// in [0, 9007199254740991], so 0 <= r && r < 1.
v &= ((1UL << 53) - 1);
var r = (double)v / (double)(1UL << 53);
return r;
}
Coincidentally, 9007199254740991 / 9007199254740992 is ~= 0.99999999999999988897769753748436, which is what the Random.NextDouble method will return as its maximum value (see https://msdn.microsoft.com/en-us/library/system.random.nextdouble(v=vs.110).aspx).
In general, the standard deviation of a continuous uniform distribution is (max - min) / sqrt(12).
With a sample size of 1000 I'm reliably getting within a 2% error margin.
With a sample size of 10000 I'm reliably getting within a 1% error margin.
Here's how I verified these results.
[Test]
public void Randomness_SecureDoubleTest()
{
RunTrials(1000, 0.02);
RunTrials(10000, 0.01);
}
private static void RunTrials(int sampleSize, double errorMargin)
{
var q = new Queue<double>();
while (q.Count < sampleSize)
{
q.Enqueue(Randomness.NextSecureDouble());
}
for (int k = 0; k < 1000; k++)
{
// rotate
q.Dequeue();
q.Enqueue(Randomness.NextSecureDouble());
var avg = q.Average();
// Dividing by n−1 gives a better estimate of the population standard
// deviation for the larger parent population than dividing by n,
// which gives a result which is correct for the sample only.
var actual = Math.Sqrt(q.Sum(x => (x - avg) * (x - avg)) / (q.Count - 1));
// see http://stats.stackexchange.com/a/1014/4576
var expected = (q.Max() - q.Min()) / Math.Sqrt(12);
Assert.AreEqual(expected, actual, errorMargin);
}
}

You can use the BitConverter.ToDouble(...) method. It takes in a byte array and will return a Double. There are corresponding methods for most of the other primitive types, as well as a method to go from the primitives to a byte array.

Use BitConverter to convert a sequence of random bytes to a Double:
byte[] random_bytes = new byte[8]; // BitConverter will expect an 8-byte array
new RNGCryptoServiceProvider().GetBytes(random_bytes);
double my_random_double = BitConverter.ToDouble(random_bytes, 0);
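Be aware, though, that this reinterprets the 8 random bytes as an IEEE 754 value: the result ranges over the entire double space, including NaN and the infinities, and is not uniform in [0, 1). If you need a uniform 0-to-1 value, use the 53-bit fraction approach shown above.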

Related

How would you compress a 256-byte string consisting of only "F" and "G"?

Theoretically, how much can you compress this 256-byte string containing only "F" and "G"?
FGFFFFFFGFFFFGGGGGGGGGGGGGFFFFFGGGGGGGGGGGGFFGFGGGFFFGGGGGGGGFFFFFFFFFFFFFFFFFFFFFGGGGGGFFFGFGGFGFFFFGFFGFGGFFFGFGGFGFFFGFGGGGFGGGGGGGGGFFFFFFFFGGGGGGGFFFFFGFFGGGGGGGFFFGGGFFGGGGGGFFGGGGGGGGGFFGFFGFGFFGFFGFFFFGGGGFGGFGGGFFFGGGFFFGGGFFGGFFGGGGFFGFGGFFFGFGGF
While I don't see a real-world application, it is intriguing that compression algorithms like gz, bzip2 and deflate are at a disadvantage in this case.
Well, I have this answer and the C# code to demonstrate:
using System;
public class Program
{
public static void Main()
{
string testCase = "FGFFFFFFGFFFFGGGGGGGGGGGGGFFFFFGGGGGGGGGGGGFFGFGGGFFFGGGGGGGGFFFFFFFFFFFFFFFFFFFFFGGGGGGFFFGFGGFGFFFFGFFGFGGFFFGFGGFGFFFGFGGGGFGGGGGGGGGFFFFFFFFGGGGGGGFFFFFGFFGGGGGGGFFFGGGFFGGGGGGFFGGGGGGGGGFFGFFGFGFFGFFGFFFFGGGGFGGFGGGFFFGGGFFFGGGFFGGFFGGGGFFGFGGFFFGFGGF";
uint[] G = new uint[8]; // 256 bit
for (int i = 0; i < testCase.Length; i++)
G[(i / 32)] += (uint)(((testCase[i] & 1)) << (i % 32));
for (int i = 0; i < 8; i++)
Console.WriteLine(G[i]);
string gTestCase = string.Empty;
//G 71 0100 0111
//F 70 0100 0110
for (int i = 0; i < 256; i++)
gTestCase += (char)((((uint)G[i / 32] & (1 << (i % 32))) >> (i % 32)) | 70);
Console.WriteLine(testCase);
Console.WriteLine(gTestCase);
if (testCase == gTestCase)
Console.WriteLine("OK.");
}
}
It may sound silly, but I have the following idea for improving the algorithm so that this 256-bit number can be compressed further:
(Note: the following are different topics of discussion, but related to compressing the 256 bytes further)
From my understanding of Microsoft's implementation of Decimal,
96-bit + 96-bit = 128-bit decimal.
Which implies that a 192-byte string containing any two distinct characters can be encoded as a 128-bit number instead of a 192-bit number. Correct?
My questions are:
Can I do the same with 256-byte strings?
(by splitting each of them into a pair of two numbers before adding those two as a Decimal shorter than 256-bit)?
How do I decode the above-mentioned 128-bit Decimal back to a pair of two 96-bit numbers, while maintaining the compressed data size less than 192-bit?
Sorry for my previous rather vague question.
The following code demonstrates how to add two 96-char "binary" strings as a 128-char binary string.
public static string AddBinary(string a, string b) // 96-char binary strings
{
int[] x = { 0, 0, 0 };
int[] y = { 0, 0, 0 };
string c = String.Empty;
for (int z = 0; z < a.Length; z++)
x[(z / 32)] |= ((byte)(a[a.Length - z - 1]) & 1) << (z % 32);
for (int z = 0; z < b.Length; z++)
y[(z / 32)] |= ((byte)(b[b.Length - z - 1]) & 1) << (z % 32);
decimal m = new decimal(x[0], x[1], x[2], false, 0); //96-bit
decimal n = new decimal(y[0], y[1], y[2], false, 0); //96-bit
decimal k = decimal.Add(m, n);
int[] l = decimal.GetBits(k); //128-bit
Console.WriteLine(k);
for (int z = 127; z >= 0; z--)
c += (char)(((l[(z / 32)] & (1 << (z % 32))) >> (z % 32)) | 48);
return c.Contains("1") ? c.TrimStart('0') : "0";
}
96-bit + 96-bit = 128-bit decimal.
That is a misunderstanding. Decimal is a 96-bit integer mantissa, a sign, and an exponent from 0 to 28 (~5 bits) forming a scaling factor for the mantissa.
Addition goes from 2×(1+5+96) bits to 1×(1+5+96) bits, with inevitable rounding errors and overflow.
You can't easily recover the summands from a sum - for starters, addition is symmetrical: there is no earthly way of knowing which of the two summands was first and which was second.
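A minimal illustration of why that is so - many distinct pairs map to the same sum:
decimal a = decimal.Add(1m, 2m); // 3
decimal b = decimal.Add(0m, 3m); // 3
Console.WriteLine(a == b);       // True: the sum alone cannot identify its summands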
Paul Hankin mentioned the programmer's variant of compressibility: Kolmogorov complexity.
In all fairness, you'd have to add to the 256 bits of your recoding of the input string the size of a program to turn those bits into the original string.
(As would gz, bzip2, deflate (and LZW) - decoders for "pure LZ" can be very small. The usual escape is to define a file format, including a recognisable header.)
Lasse V. Karlsen mentioned one consequence of the pigeonhole principle: to tell each combination of 192 bits from every other one, you need no fewer than 2^192 codes - far more than the 2^128 distinct values a 128-bit encoding can represent.

Setting bits in a byte array with C#

I have a requirement to encode a byte array from a short integer value.
The encoding rules are
The bits representing the integer are bits 0 - 13
bit 14 is set if the number is negative
bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter
byte[] roll = BitConverter.GetBytes(x);
But I can't find how to meet my requirement.
Anyone know how to do this?
You should use Bitwise Operators.
Solution is something like this:
Int16 x = 7;
if(x < 0)
{
Int16 mask14 = 16384; // 0b0100000000000000;
x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000;
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
You cannot rely on GetBytes for negative numbers since it complements the bits and that is not what you need.
Instead you need to do bounds checking to make sure the number is representable, then use GetBytes on the absolute value of the given number.
The method's parameter is short, so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383;// 2^14-1
public static byte[] EncodeSigned14Bit(short x)
{
var absoluteX = Math.Abs(x);
if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
byte[] roll = BitConverter.GetBytes(absoluteX);
if (x < 0)
{
roll[1] |= 0b01000000; //x is negative, set 14th bit
}
roll[1] |= 0b10000000; // 15th bit is always set
return roll;
}
static void Main(string[] args)
{
// testing some values
var r1 = EncodeSigned14Bit(16383); // r1[0] = 255, r1[1] = 191
var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}

fast way to convert integer array to byte array (11 bit)

I have integer array and I need to convert it to byte array
but I need to take (only and just only) the first 11 bits of each element of the integer array
and then convert it to a byte array
I tried this code
// ***********convert integer values to byte values
//***********to avoid the left zero padding on the byte array
// *********** first step : convert to binary string
// ***********second step : convert binary string to byte array
// *********** first step
string ByteString = Convert.ToString(IntArray[0], 2).PadLeft(11,'0');
for (int i = 1; i < IntArray.Length; i++)
ByteString = ByteString + Convert.ToString(IntArray[i], 2).PadLeft(11, '0');
// ***********second step
int numOfBytes = ByteString.Length / 8;
byte[] bytes = new byte[numOfBytes];
for (int i = 0; i < numOfBytes; ++i)
{
bytes[i] = Convert.ToByte(ByteString.Substring(8 * i, 8), 2);
}
But it takes too long (if the file is large, the code takes more than 1 minute).
I need very fast code (only a few milliseconds).
Can anyone help me?
Basically, you're going to be doing a lot of shifting and masking. The exact nature of that depends on the layout you want. If we assume that we pack little-endian from each int, appending on the left, so two 11-bit integers with positions:
abcdefghijk lmnopqrstuv
become the 8-bit chunks:
defghijk rstuvabc 00lmnopq
(i.e. take the lowest 8 bits of the first integer, which leaves 3 left over, so pack those into the low 3 bits of the next byte, then take the lowest 5 bits of the second integer, then finally the remaining 6 bits, padding with zero), then something like this should work:
using System;
using System.Linq;
static class Program
{
static string AsBinary(int val) => Convert.ToString(val, 2).PadLeft(11, '0');
static string AsBinary(byte val) => Convert.ToString(val, 2).PadLeft(8, '0');
static void Main()
{
int[] source = new int[1432];
var rand = new Random(123456);
for (int i = 0; i < source.Length; i++)
source[i] = rand.Next(0, 2047); // 11 bits
// Console.WriteLine(string.Join(" ", source.Take(5).Select(AsBinary)));
var raw = Encode(source);
// Console.WriteLine(string.Join(" ", raw.Take(6).Select(AsBinary)));
var clone = Decode(raw);
// now prove that it worked OK
if (source.Length != clone.Length)
{
Console.WriteLine($"Length: {source.Length} vs {clone.Length}");
}
else
{
int failCount = 0;
for (int i = 0; i < source.Length; i++)
{
if (source[i] != clone[i] && failCount++ == 0)
{
Console.WriteLine($"{i}: {source[i]} vs {clone[i]}");
}
}
Console.WriteLine($"Errors: {failCount}");
}
}
static byte[] Encode(int[] source)
{
long bits = source.Length * 11L; // 11L avoids int overflow for very large inputs
int len = (int)(bits / 8);
if ((bits % 8) != 0) len++;
byte[] arr = new byte[len];
int bitOffset = 0, index = 0;
for (int i = 0; i < source.Length; i++)
{
// note: this encodes little-endian
int val = source[i] & 2047;
int bitsLeft = 11;
if(bitOffset != 0)
{
val = val << bitOffset;
arr[index++] |= (byte)val;
bitsLeft -= (8 - bitOffset);
val >>= 8;
}
if(bitsLeft >= 8)
{
arr[index++] = (byte)val;
bitsLeft -= 8;
val >>= 8;
}
if(bitsLeft != 0)
{
arr[index] = (byte)val;
}
bitOffset = bitsLeft;
}
return arr;
}
private static int[] Decode(byte[] source)
{
long bits = source.Length * 8L; // 8L avoids int overflow for large inputs
int len = (int)(bits / 11);
// note no need to worry about remaining chunks - no ambiguity since 11 > 8
int[] arr = new int[len];
int bitOffset = 0, index = 0;
for(int i = 0; i < source.Length; i++)
{
int val = source[i] << bitOffset;
int bitsLeftInVal = 11 - bitOffset;
if(bitsLeftInVal > 8)
{
arr[index] |= val;
bitOffset += 8;
}
else if(bitsLeftInVal == 8)
{
arr[index++] |= val;
bitOffset = 0;
}
else
{
arr[index++] |= (val & 2047);
if(index != arr.Length) arr[index] = val >> 11;
bitOffset = 8 - bitsLeftInVal;
}
}
return arr;
}
}
If you need a different layout you'll need to tweak it.
This encodes 512 MiB in just over a second on my machine.
Overview to the Encode method:
The first thing it does is pre-calculate the amount of space that is going to be required and allocate the output buffer; since each input contributes 11 bits to the output, this is just some modulo math:
long bits = source.Length * 11L;
int len = (int)(bits / 8);
if ((bits % 8) != 0) len++;
byte[] arr = new byte[len];
We know the output position won't match the input, and we know we're going to be starting each 11-bit chunk at different positions in bytes each time, so allocate variables for those, and loop over the input:
int bitOffset = 0, index = 0;
for (int i = 0; i < source.Length; i++)
{
...
}
return arr;
So: taking each input in turn (where the input is the value at position i), take the low 11 bits of the value - and observe that we have 11 bits (of this value) still to write:
int val = source[i] & 2047;
int bitsLeft = 11;
Now, if the current output value is partially written (i.e. bitOffset != 0), we should deal with that first. The amount of space left in the current output is 8 - bitOffset. Since we always have 11 input bits we don't need to worry about having more space than values to fill, so: left-shift our value by bitOffset (pads on the right with bitOffset zeros, as a binary operation), and "or" the lowest 8 bits of this with the output byte. Essentially this says "if bitOffset is 3, write the 5 low bits of val into the 5 high bits of the output buffer"; finally, fixup the values: increment our write position, record that we have fewer bits of the current value still to write, and use right-shift to discard the 8 low bits of val (which is made of bitOffset zeros and 8 - bitOffset "real" bits):
if(bitOffset != 0)
{
val = val << bitOffset;
arr[index++] |= (byte)val;
bitsLeft -= (8 - bitOffset);
val >>= 8;
}
The next question is: do we have (at least) an entire byte of data left? We might not, if bitOffset was 1 for example (so we'll have written 7 bits already, leaving just 4). If we do, we can just stamp that down and increment the write position - then once again track how many are left and throw away the low 8 bits:
if(bitsLeft >= 8)
{
arr[index++] = (byte)val;
bitsLeft -= 8;
val >>= 8;
}
And it is possible that we've still got some left-over; for example, if bitOffset was 7 we'll have written 1 bit in the first chunk, 8 bits in the second, leaving 2 more to write - or if bitOffset was 0 we won't have written anything in the first chunk, 8 in the second, leaving 3 left to write. So, stamp down whatever is left, but do not increment the write position - we've written to the low bits, but the next value might need to write to the high bits. Finally, update bitOffset to be however many low bits we wrote in the last step (which could be zero):
if(bitsLeft != 0)
{
arr[index] = (byte)val;
}
bitOffset = bitsLeft;
The Decode operation is the reverse of this logic - again, calculate the sizes and prepare the state:
long bits = source.Length * 8L;
int len = (int)(bits / 11);
int[] arr = new int[len];
int bitOffset = 0, index = 0;
Now loop over the input:
for(int i = 0; i < source.Length; i++)
{
...
}
return arr;
Now, bitOffset is the start position that we want to write to in the current 11-bit value; it will be 0 on the first byte, then 8. The low 3 bits of the second byte complete the first 11-bit integer and its remaining 5 bits become part of the second, so bitOffset is 5 on the 3rd byte, etc. We can calculate the number of bits left in the current integer by subtracting from 11:
int val = source[i] << bitOffset;
int bitsLeftInVal = 11 - bitOffset;
Now we have 3 possible scenarios:
1) if we have more than 8 bits left in the current value, we can stamp down our input (as a bitwise "or") but do not increment the write position (as we have more to write for this value), and note that we're 8-bits further along:
if(bitsLeftInVal > 8)
{
arr[index] |= val;
bitOffset += 8;
}
2) if we have exactly 8 bits left in the current value, we can stamp down our input (as a bitwise "or") and increment the write position; the next loop can start at zero:
else if(bitsLeftInVal == 8)
{
arr[index++] |= val;
bitOffset = 0;
}
3) otherwise, we have less than 8 bits left in the current value; so we need to write the first bitsLeftInVal bits to the current output position (incrementing the output position), and whatever is left to the next output position. Since we already left-shifted by bitOffset, what this really means is simply: stamp down (as a bitwise "or") the low 11 bits (val & 2047) to the current position, and whatever is left (val >> 11) to the next if that wouldn't exceed our output buffer (padding zeros). Then calculate our new bitOffset:
else
{
arr[index++] |= (val & 2047);
if(index != arr.Length) arr[index] = val >> 11;
bitOffset = 8 - bitsLeftInVal;
}
And that's basically it. Lots of bitwise operations - shifts (<< / >>), masks (&) and combinations (|).
If you wanted to store the least significant 11 bits of an int into two bytes such that the least significant byte has bits 1-8 inclusive and the most significant byte has 9-11:
int toStore = 123456789;
byte msb = (byte) ((toStore >> 8) & 7); //or 0b111
byte lsb = (byte) (toStore & 255); //or 0b11111111
To check this, 123456789 in binary is:
0b111010110111100110100010101
                  MMMLLLLLLLL
The bits above the Ls are the lsb and have a value of 21; the bits above the Ms are the msb and have a value of 5.
Doing the work is the shift operator >> where all the binary digits are slid to the right 8 places (8 of them disappear from the right hand side - they're gone, into oblivion):
0b111010110111100110100010101 >> 8 =
0b1110101101111001101
And the mask operator & (the mask operator works by only keeping bits where, in each position, they're 1 in the value and also 1 in the mask):
0b111010110111100110100010101 &
0b000000000000000000011111111 (255) =
0b000000000000000000000010101
If you're processing an int array, just do this in a loop:
byte[] bs = new byte[ intarray.Length*2 ];
for(int x = 0, b=0; x < intarray.Length; x++){
int toStore = intarray[x];
bs[b++] = (byte) ((toStore >> 8) & 7);
bs[b++] = (byte) (toStore & 255);
}
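And to read the values back, a small reversal sketch of my own, assuming the two-bytes-per-value layout produced by the loop above:
int[] decoded = new int[bs.Length / 2];
for (int x = 0, b = 0; x < decoded.Length; x++)
{
    int msb = bs[b++];             // the 3 high bits
    int lsb = bs[b++];             // the 8 low bits
    decoded[x] = (msb << 8) | lsb; // reassemble the 11-bit value
}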

"Random-looking" sequence generator

I can't think of a good way to do this, and would appreciate some help, if possible!
I'm afraid I don't have any code to post yet as I haven't got that far.
I need to generate a sequence of values from 3 (or possibly more) parameters in the range 0-999.
The value must always be the same for the given inputs but with a fair distribution between upper and lower boundaries so as to appear random.
For example:
function (1, 1, 1) = 423
function (1, 1, 2) = 716
function (1, 2, 1) = 112
These must be reasonably fast to produce, by which I mean I should be able to generate 100-200 during web page load with no noticeable delay.
The method must be doable in C# but also in JavaScript; otherwise I'd probably use a CRC32 or MD5 hash algorithm.
If it helps this will be used as part of a procedural generation routine.
I had a go at asking this previously, but I think the poor quality of my explanation let me down.
I apologise if this is worded badly. Please just let me know if so and I'll try to explain further.
Thanks very much for any help.
Here's one:
function sequence(x, y, z) {
return Math.abs(441*x-311*y+293*z) % 1000;
}
It even produces the output from your example!
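For the C# side, a direct translation of the same formula (a minimal sketch):
static int Sequence(int x, int y, int z)
{
    return Math.Abs(441 * x - 311 * y + 293 * z) % 1000;
}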
Using the Marsaglia generator from the Wiki
public class SimpleMarsagliaRandom
{
private const uint original_w = 1023;
private uint m_w = original_w; /* must not be zero */
private uint m_z = 0; /* must not be zero, initialized by the constructor */
public SimpleMarsagliaRandom()
{
this.init(666);
}
public void init(uint z)
{
this.m_w = original_w;
this.m_z = z;
}
public uint get_random()
{
this.m_z = 36969 * (this.m_z & 65535) + (this.m_z >> 16);
this.m_w = 18000 * (this.m_w & 65535) + (this.m_w >> 16);
return (this.m_z << 16) + this.m_w; /* 32-bit result */
}
public uint get_random(uint min, uint max)
{
// max excluded
uint num = max - min;
return (this.get_random() % num) + min;
}
}
and
simpleMarsagliaRandom = function()
{
var original_w = 1023 >>> 0;
var m_w = 0, m_z = 0;
this.init = function(z)
{
m_w = original_w;
m_z = z >>> 0;
};
this.init(666);
var internalRandom = function()
{
m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) >>> 0;
m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) >>> 0;
return (((m_z << 16) >>> 0) + m_w) >>> 0; /* 32-bit result */
};
this.get_random = function(min, max)
{
if (arguments.length < 2)
{
return internalRandom();
}
var num = ((max >>> 0) - (min >>> 0)) >>> 0;
return ((internalRandom() % num) + min) >>> 0;
}
};
In JavaScript, all the >>> operations are there to coerce the number to uint.
Totally untested
Be aware that what is done in get_random to make numbers from x to y is wrong. Low numbers will come up a little more often than high numbers. To make an example: let's say you have a standard 6-faced die. You roll it, you get 1-6. Now let's say you print the numbers 0-5 on it. You roll it, you get 0-5. No problems. But you need the numbers in the range 0-3. So you do roll % 4... So we have:
rolled => rolled % 4
0 => 0,
1 => 1,
2 => 2,
3 => 3,
4 => 0,
5 => 1.
The results 0 and 1 are more common.
Ideone for C# version: http://ideone.com/VQudcV
JSFiddle for Javascript version: http://jsfiddle.net/dqayk/
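A common fix for that modulo bias is rejection sampling - a sketch of my own, written as an extra method for the SimpleMarsagliaRandom class above (not part of the original):
public uint get_random_unbiased(uint min, uint max)
{
    // max excluded; rejects raw draws from the incomplete top bucket
    // so that every residue class is equally likely
    uint range = max - min;
    uint limit = uint.MaxValue - (uint.MaxValue % range);
    uint raw;
    do
    {
        raw = this.get_random();
    } while (raw >= limit);
    return (raw % range) + min;
}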
You should be able to use MD5 hashing in both C# and JS.
In C#:
int Hash(params int[] values)
{
var hasher = System.Security.Cryptography.MD5.Create();
string valuesAsString = string.Join(",", values);
var hash = hasher.ComputeHash(Encoding.UTF8.GetBytes(valuesAsString));
var hashAsInt = BitConverter.ToInt32(hash, 0);
return Math.Abs(hashAsInt % 1000);
}
In JS, implement the same method using some MD5 algorithm (e.g. jshash)

Converting an Int to a BCD byte array [duplicate]

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
if (year < 0 || year > 9999) throw new ArgumentException();
int bcd = 0;
for (int digit = 0; digit < 4; ++digit) {
int nibble = year % 10;
bcd |= nibble << (digit * 4);
year /= 10;
}
return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result; that's a bit unusual.
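A quick sanity check against the example in the question:
byte[] bytes = Year2Bcd(2010); // yields { 0x20, 0x10 }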
Use this method.
public static byte[] ToBcd(int value){
if(value<0 || value>99999999)
throw new ArgumentOutOfRangeException("value");
byte[] ret=new byte[4];
for(int i=0;i<4;i++){
ret[i]=(byte)(value%10);
value/=10;
ret[i]|=(byte)((value%10)<<4);
value/=10;
}
return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
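For example (tracing the code above by hand), ToBcd(2010) returns { 0x10, 0x20, 0x00, 0x00 }: the least significant pair of digits comes first, unlike the big-endian Year2Bcd earlier.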
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
if (input > 9999 || input < 0)
throw new ArgumentOutOfRangeException("input");
int thousands = input / 1000;
int hundreds = (input -= thousands * 1000) / 100;
int tens = (input -= hundreds * 100) / 10;
int ones = (input -= tens * 10);
byte[] bcd = new byte[] {
(byte)(thousands << 4 | hundreds),
(byte)(tens << 4 | ones)
};
return bcd;
}
maybe a simple parse function containing this loop:
int i = 0;
while (id > 0)
{
    int twodigits = id % 100; // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted up 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
Byte currentByte = 0;
Boolean odd = true;
while (value > 0)
{
if (odd)
currentByte = 0;
Decimal rest = value % 10;
value = (value-rest)/10;
currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
if(!odd)
yield return currentByte;
odd = !odd;
}
if(!odd)
yield return currentByte;
}
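Tracing it by hand, GetBytes(2010m) yields 0x10 then 0x20 - the same little-endian digit order as ToBcd above, but without trailing zero bytes.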
Same version as Peter O. but in VB.NET
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
For i As Integer = 0 To 3
ret(i) = CByte(pValue Mod 10)
pValue = Math.Floor(pValue / 10.0)
ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
pValue = Math.Floor(pValue / 10.0)
If pValue = 0 Then Exit For
Next
Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value so if for instance the argument is "16", the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    // note: this simply splits the value into two big-endian binary bytes;
    // the output is not actually BCD-coded digits
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
