Can't increment value of a floating-point variable - c#

I am a newbie in C# and I can't understand why my variable won't increment past 16777215.
I have this code:
float fullPerc = 100;
float bytesRead = 1;
long fileSize = 1;
ProgressBar pb1;
int blockSizeBytes = 0;
float counter = 0;
using (FileStream inFs = new FileStream(inFile, FileMode.Open))
{
    fileSize = inFs.Length;
    do
    {
        counter = counter + 1;
        if (counter > 16777215) //&& counter < 16777230)
        {
            counter = counter + 10;
            //Console.WriteLine(((counter + 10) /100));
        }
        count = inFs.Read(data, 0, blockSizeBytes);
        offset += count;
        outStreamEncrypted.Write(data, 0, count);
        bytesRead += blockSizeBytes;
    }
    while (count > 0);
    inFs.Close();
}
https://bitbucket.org/ArtUrlWWW/myfileencoderdecoder/src/f7cce67b78336636ef83058cd369027b0d146b17/FileEncoder/Encrypter.cs?at=master&fileviewer=file-view-default#Encrypter.cs-166
When the value of counter reaches 16777216, the increment counter = counter + 1; stops working, but this code
if (counter > 16777215) //&& counter < 16777230)
{
    counter = counter + 10;
    //Console.WriteLine(((counter + 10) /100));
}
works fine.
I.e., if I comment out this if block, my counter grows up to 16777216 and then stops at that value. Only incrementing by 10 grows the variable once it is >= 16777216.
Why?

You have mantissa overflow and, as a result, precision loss. The float (or Single) type has a 24-bit mantissa, which can represent exact integers only up to 16777216:
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
Let's see what's going on:
private static String MakeReport(float value) {
    return String.Join(" ", BitConverter
        .GetBytes(value)
        .Select(b => Convert.ToString(b, 2).PadLeft(8, '0')));
}
...
float f = 16777215;
// Mantissa (the low 23 bits; bytes shown little-endian) is all 1's;
// the leading 0 in the 3rd byte is the exponent's low bit
// 11111111 11111111 01111111 01001011
Console.Write(MakeReport(f));
// Overflow! Precision loss
// 00000000 00000000 10000000 01001011
Console.Write(MakeReport(f + 1));
// Overflow! Precision loss
// 00000000 00000000 10000000 01001011
Console.Write(MakeReport(f + 2));
// Overflow! Precision loss
// 00000100 00000000 10000000 01001011
Console.Write(MakeReport(f + 10));
Remedy: do not use a floating-point type as the counter; use an integer instead:
int counter = 0;
To avoid integer division, cast the value to double:
float x = (float) ((((double)counter * blockSizeBytes) / fileSize) * fullPerc);
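A minimal sketch of the fixed loop, reusing the question's variable names (data, offset, count and outStreamEncrypted are assumed to be declared as in the linked source): an int counter never loses precision, and the (double) cast keeps the division fractional.
int counter = 0;
do
{
    counter++;
    count = inFs.Read(data, 0, blockSizeBytes);
    offset += count;
    outStreamEncrypted.Write(data, 0, count);
    float percent = (float)((double)counter * blockSizeBytes / fileSize * fullPerc);
    // pb1.Value = (int)percent; // update the progress bar as needed
} while (count > 0);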

Related

fast way to convert integer array to byte array (11 bit)

I have an integer array and I need to convert it to a byte array,
but I need to take (only and just only) the first 11 bits of each element of the integer array
and then convert it to a byte array.
I tried this code:
// *********** convert integer values to byte values
// *********** to avoid the left zero padding on the byte array
// *********** first step: convert to binary string
// *********** second step: convert binary string to byte array

// *********** first step
string ByteString = Convert.ToString(IntArray[0], 2).PadLeft(11, '0');
for (int i = 1; i < IntArray.Length; i++)
    ByteString = ByteString + Convert.ToString(IntArray[i], 2).PadLeft(11, '0');

// *********** second step
int numOfBytes = ByteString.Length / 8;
byte[] bytes = new byte[numOfBytes];
for (int i = 0; i < numOfBytes; ++i)
{
    bytes[i] = Convert.ToByte(ByteString.Substring(8 * i, 8), 2);
}
But it takes too long (if the file is large, the code takes more than 1 minute).
I need very fast code (a few milliseconds only).
Can anyone help me?
Basically, you're going to be doing a lot of shifting and masking. The exact nature of that depends on the layout you want. If we assume that we pack little-endian from each int, appending on the left, so two 11-bit integers with positions:
abcdefghijk lmnopqrstuv
become the 8-bit chunks:
defghijk rstuvabc 00lmnopq
(i.e. take the lowest 8 bits of the first integer, which leaves 3 left over, so pack those into the low 3 bits of the next byte, then take the lowest 5 bits of the second integer, then finally the remaining 6 bits, padding with zero), then something like this should work:
using System;
using System.Linq;

static class Program
{
    static string AsBinary(int val) => Convert.ToString(val, 2).PadLeft(11, '0');
    static string AsBinary(byte val) => Convert.ToString(val, 2).PadLeft(8, '0');

    static void Main()
    {
        int[] source = new int[1432];
        var rand = new Random(123456);
        for (int i = 0; i < source.Length; i++)
            source[i] = rand.Next(0, 2047); // 11 bits
        // Console.WriteLine(string.Join(" ", source.Take(5).Select(AsBinary)));
        var raw = Encode(source);
        // Console.WriteLine(string.Join(" ", raw.Take(6).Select(AsBinary)));
        var clone = Decode(raw);
        // now prove that it worked OK
        if (source.Length != clone.Length)
        {
            Console.WriteLine($"Length: {source.Length} vs {clone.Length}");
        }
        else
        {
            int failCount = 0;
            for (int i = 0; i < source.Length; i++)
            {
                if (source[i] != clone[i] && failCount++ == 0)
                {
                    Console.WriteLine($"{i}: {source[i]} vs {clone[i]}");
                }
            }
            Console.WriteLine($"Errors: {failCount}");
        }
    }

    static byte[] Encode(int[] source)
    {
        long bits = source.Length * 11;
        int len = (int)(bits / 8);
        if ((bits % 8) != 0) len++;
        byte[] arr = new byte[len];
        int bitOffset = 0, index = 0;
        for (int i = 0; i < source.Length; i++)
        {
            // note: this encodes little-endian
            int val = source[i] & 2047;
            int bitsLeft = 11;
            if (bitOffset != 0)
            {
                val = val << bitOffset;
                arr[index++] |= (byte)val;
                bitsLeft -= (8 - bitOffset);
                val >>= 8;
            }
            if (bitsLeft >= 8)
            {
                arr[index++] = (byte)val;
                bitsLeft -= 8;
                val >>= 8;
            }
            if (bitsLeft != 0)
            {
                arr[index] = (byte)val;
            }
            bitOffset = bitsLeft;
        }
        return arr;
    }

    private static int[] Decode(byte[] source)
    {
        int bits = source.Length * 8;
        int len = (int)(bits / 11);
        // note no need to worry about remaining chunks - no ambiguity since 11 > 8
        int[] arr = new int[len];
        int bitOffset = 0, index = 0;
        for (int i = 0; i < source.Length; i++)
        {
            int val = source[i] << bitOffset;
            int bitsLeftInVal = 11 - bitOffset;
            if (bitsLeftInVal > 8)
            {
                arr[index] |= val;
                bitOffset += 8;
            }
            else if (bitsLeftInVal == 8)
            {
                arr[index++] |= val;
                bitOffset = 0;
            }
            else
            {
                arr[index++] |= (val & 2047);
                if (index != arr.Length) arr[index] = val >> 11;
                bitOffset = 8 - bitsLeftInVal;
            }
        }
        return arr;
    }
}
If you need a different layout you'll need to tweak it.
This encodes 512 MiB in just over a second on my machine.
Overview of the Encode method:
The first thing it does is pre-calculate the amount of space that is going to be required, and allocate the output buffer; since each input contributes 11 bits to the output, this is just some modulo math:
long bits = source.Length * 11;
int len = (int)(bits / 8);
if ((bits % 8) != 0) len++;
byte[] arr = new byte[len];
We know the output position won't match the input, and we know we're going to be starting each 11-bit chunk at different positions in bytes each time, so allocate variables for those, and loop over the input:
int bitOffset = 0, index = 0;
for (int i = 0; i < source.Length; i++)
{
    ...
}
return arr;
So: taking each input in turn (where the input is the value at position i), take the low 11 bits of the value - and observe that we have 11 bits (of this value) still to write:
int val = source[i] & 2047;
int bitsLeft = 11;
Now, if the current output value is partially written (i.e. bitOffset != 0), we should deal with that first. The amount of space left in the current output is 8 - bitOffset. Since we always have 11 input bits we don't need to worry about having more space than values to fill, so: left-shift our value by bitOffset (pads on the right with bitOffset zeros, as a binary operation), and "or" the lowest 8 bits of this with the output byte. Essentially this says "if bitOffset is 3, write the 5 low bits of val into the 5 high bits of the output buffer"; finally, fixup the values: increment our write position, record that we have fewer bits of the current value still to write, and use right-shift to discard the 8 low bits of val (which is made of bitOffset zeros and 8 - bitOffset "real" bits):
if (bitOffset != 0)
{
    val = val << bitOffset;
    arr[index++] |= (byte)val;
    bitsLeft -= (8 - bitOffset);
    val >>= 8;
}
The next question is: do we have (at least) an entire byte of data left? We might not, if bitOffset was 1 for example (so we'll have written 7 bits already, leaving just 4). If we do, we can just stamp that down and increment the write position - then once again track how many are left and throw away the low 8 bits:
if (bitsLeft >= 8)
{
    arr[index++] = (byte)val;
    bitsLeft -= 8;
    val >>= 8;
}
And it is possible that we've still got some left-over; for example, if bitOffset was 7 we'll have written 1 bit in the first chunk, 8 bits in the second, leaving 2 more to write - or if bitOffset was 0 we won't have written anything in the first chunk, 8 in the second, leaving 3 left to write. So, stamp down whatever is left, but do not increment the write position - we've written to the low bits, but the next value might need to write to the high bits. Finally, update bitOffset to be however many low bits we wrote in the last step (which could be zero):
if (bitsLeft != 0)
{
    arr[index] = (byte)val;
}
bitOffset = bitsLeft;
The Decode operation is the reverse of this logic - again, calculate the sizes and prepare the state:
int bits = source.Length * 8;
int len = (int)(bits / 11);
int[] arr = new int[len];
int bitOffset = 0, index = 0;
Now loop over the input:
for (int i = 0; i < source.Length; i++)
{
    ...
}
return arr;
Now, bitOffset is the start position that we want to write to in the current 11-bit value; if we start at the start, it will be 0 on the first byte, then 8; 3 bits of the second byte join the first 11-bit integer, so the other 5 bits start the second 11-bit integer - so bitOffset is 5 on the 3rd byte, etc. We can calculate the number of bits left in the current integer by subtracting from 11:
int val = source[i] << bitOffset;
int bitsLeftInVal = 11 - bitOffset;
Now we have 3 possible scenarios:
1) if we have more than 8 bits left in the current value, we can stamp down our input (as a bitwise "or") but do not increment the write position (as we have more to write for this value), and note that we're 8-bits further along:
if (bitsLeftInVal > 8)
{
    arr[index] |= val;
    bitOffset += 8;
}
2) if we have exactly 8 bits left in the current value, we can stamp down our input (as a bitwise "or") and increment the write position; the next loop can start at zero:
else if (bitsLeftInVal == 8)
{
    arr[index++] |= val;
    bitOffset = 0;
}
3) otherwise, we have less than 8 bits left in the current value; so we need to write the first bitsLeftInVal bits to the current output position (incrementing the output position), and whatever is left to the next output position. Since we already left-shifted by bitOffset, what this really means is simply: stamp down (as a bitwise "or") the low 11 bits (val & 2047) to the current position, and whatever is left (val >> 11) to the next if that wouldn't exceed our output buffer (padding zeros). Then calculate our new bitOffset:
else
{
    arr[index++] |= (val & 2047);
    if (index != arr.Length) arr[index] = val >> 11;
    bitOffset = 8 - bitsLeftInVal;
}
And that's basically it. Lots of bitwise operations - shifts (<< / >>), masks (&) and combinations (|).
If you wanted to store the least significant 11 bits of an int into two bytes such that the least significant byte has bits 1-8 inclusive and the most significant byte has 9-11:
int toStore = 123456789;
byte msb = (byte) ((toStore >> 8) & 7); //or 0b111
byte lsb = (byte) (toStore & 255); //or 0b11111111
To check this, 123456789 in binary is:
0b111010110111100110100010101
                  MMMLLLLLLLL
The bits above the L's are the lsb and have a value of 21; the bits above the M's are the msb and have a value of 5.
Doing the work is the shift operator >> where all the binary digits are slid to the right 8 places (8 of them disappear from the right hand side - they're gone, into oblivion):
0b111010110111100110100010101 >> 8 =
0b1110101101111001101
And the mask operator & (the mask operator works by only keeping bits where, in each position, they're 1 in the value and also 1 in the mask) :
0b111010110111100110100010101 &
0b000000000000000000011111111 (255) =
0b000000000000000000000010101
If you're processing an int array, just do this in a loop:
byte[] bs = new byte[intarray.Length * 2];
for (int x = 0, b = 0; x < intarray.Length; x++)
{
    int toStore = intarray[x];
    bs[b++] = (byte)((toStore >> 8) & 7);
    bs[b++] = (byte)(toStore & 255);
}

How to extract multiple parameters from a binary chromosome

I am trying to use AForge.Net Genetics library to create a simple application for optimization purposes. I have a scenario where I have four input parameters, therefore I tried to modify the "OptimizationFunction2D.cs" class located in the AForge.Genetic project to handle four parameters.
While converting the binary chromosomes into 4 parameters (type = double) I am not sure if my approach is correct as I don't know how to verify the extracted values. Below is the code segment where my code differs from the original AForge code:
public double[] Translate( IChromosome chromosome )
{
    // get chromosome's value
    ulong val = ((BinaryChromosome) chromosome).Value;
    // chromosome's length
    int length = ((BinaryChromosome) chromosome).Length;
    // length of W component
    int wLength = length / 4;
    // length of X component
    int xLength = length / 4;
    // length of Y component
    int yLength = length / 4;
    // length of Z component
    int zLength = length / 4;
    // W maximum value - equal to W mask
    ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
    // X maximum value
    ulong xMax = 0xFFFFFFFFFFFFFFFF >> (64 - xLength);
    // Y maximum value
    ulong yMax = 0xFFFFFFFFFFFFFFFF >> (64 - yLength);
    // Z maximum value
    ulong zMax = 0xFFFFFFFFFFFFFFFF >> (64 - zLength);
    // W component
    double wPart = val & wMax;
    // X component
    double xPart = (val >> wLength) & xMax;
    // Y component
    double yPart = (val >> (wLength + xLength)) & yMax;
    // Z component
    double zPart = val >> (wLength + xLength + yLength);
    // translate to optimization's function space
    double[] ret = new double[4];
    ret[0] = wPart * _rangeW.Length / wMax + _rangeW.Min;
    ret[1] = xPart * _rangeX.Length / xMax + _rangeX.Min;
    ret[2] = yPart * _rangeY.Length / yMax + _rangeY.Min;
    ret[3] = zPart * _rangeZ.Length / zMax + _rangeZ.Min;
    return ret;
}
I am not sure if I am correctly separating the chromosome value into four parts (wPart/xPart/yPart/zPart). The original function in the AForge.Genetic library looks like this:
public double[] Translate( IChromosome chromosome )
{
    // get chromosome's value
    ulong val = ( (BinaryChromosome) chromosome ).Value;
    // chromosome's length
    int length = ( (BinaryChromosome) chromosome ).Length;
    // length of X component
    int xLength = length / 2;
    // length of Y component
    int yLength = length - xLength;
    // X maximum value - equal to X mask
    ulong xMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - xLength );
    // Y maximum value
    ulong yMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - yLength );
    // X component
    double xPart = val & xMax;
    // Y component
    double yPart = val >> xLength;
    // translate to optimization's function space
    double[] ret = new double[2];
    ret[0] = xPart * rangeX.Length / xMax + rangeX.Min;
    ret[1] = yPart * rangeY.Length / yMax + rangeY.Min;
    return ret;
}
Can someone please confirm whether my conversion process is correct, or suggest a better way of doing it?
It works, but it doesn't need to be so complicated.
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
this expression yields the same value for all of wMax, xMax, yMax and zMax, so just compute it once and call it componentMask:
part = (val >> (wLength * pos)) & componentMask;
where pos is the 0-based position of the component: 0 for W, 1 for X, and so on.
The rest is OK.
EDIT:
If the length is not divisible by 4, you can make the last part just val >> (wLength * pos) so that it takes all the remaining bits.
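Putting it together, a hedged sketch of the simplified method (assuming the chromosome length divides evenly by 4 and that _rangeW.._rangeZ are the AForge.Range fields from the question):
public double[] Translate(IChromosome chromosome)
{
    ulong val = ((BinaryChromosome)chromosome).Value;
    int length = ((BinaryChromosome)chromosome).Length;
    int componentLength = length / 4;
    ulong componentMask = 0xFFFFFFFFFFFFFFFF >> (64 - componentLength);

    var ranges = new[] { _rangeW, _rangeX, _rangeY, _rangeZ };
    double[] ret = new double[4];
    for (int pos = 0; pos < 4; pos++)
    {
        // shift the component down to the low bits, then mask it off
        double part = (val >> (componentLength * pos)) & componentMask;
        ret[pos] = part * ranges[pos].Length / componentMask + ranges[pos].Min;
    }
    return ret;
}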

Converting between NTP and C# DateTime

I use the following code to convert between NTP and a C# DateTime. I think the forward conversion is correct, but the backward one is wrong.
See the following code to convert 8 bytes into a DateTime:
Convert NTP to DateTime
public static ulong GetMilliSeconds(byte[] ntpTime)
{
    ulong intpart = 0, fractpart = 0;

    for (var i = 0; i <= 3; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    var milliseconds = intpart * 1000 + ((fractpart * 1000) / 0x100000000UL);
    Debug.WriteLine("intpart: " + intpart);
    Debug.WriteLine("fractpart: " + fractpart);
    Debug.WriteLine("milliseconds: " + milliseconds);
    return milliseconds;
}

public static DateTime ConvertToDateTime(byte[] ntpTime)
{
    var span = TimeSpan.FromMilliseconds(GetMilliSeconds(ntpTime));
    var time = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    time += span;
    return time;
}
Convert from DateTime to NTP
public static byte[] ConvertToNtp(ulong milliseconds)
{
    ulong intpart = 0, fractpart = 0;
    var ntpData = new byte[8];

    intpart = milliseconds / 1000;
    fractpart = ((milliseconds % 1000) * 0x100000000UL) / 1000;
    Debug.WriteLine("intpart: " + intpart);
    Debug.WriteLine("fractpart: " + fractpart);
    Debug.WriteLine("milliseconds: " + milliseconds);

    var temp = intpart;
    for (var i = 3; i >= 0; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    temp = fractpart;
    for (var i = 7; i >= 4; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    return ntpData;
}
The following input produces the output:
byte[] bytes = { 131, 170, 126, 128,
                 46, 197, 205, 234 };
var ms = GetMilliSeconds(bytes);
var ntp = ConvertToNtp(ms);
//GetMilliSeconds output
milliseconds: 2208988800182
intpart: 2208988800
fractpart: 784715242
//ConvertToNtp output
milliseconds: 2208988800182
intpart: 2208988800
fractpart: 781684047
Notice that the conversion from milliseconds to fractional part is wrong. Why?
Update:
As Jonathan S. points out, it's loss of fraction. So instead of converting back and forth, I want to manipulate the NTP timestamp directly; more specifically, add milliseconds to it. I would assume the following function would do just that, but I'm having a hard time validating it. I am very unsure about the fraction part.
public static void AddMilliSeconds(ref byte[] ntpTime, ulong millis)
{
    ulong intpart = 0, fractpart = 0;

    for (var i = 0; i < 4; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    intpart += millis / 1000;
    fractpart += millis % 1000;

    var newIntpart = BitConverter.GetBytes(SwapEndianness(intpart));
    var newFractpart = BitConverter.GetBytes(SwapEndianness(fractpart));

    for (var i = 0; i < 8; i++)
    {
        if (i < 4)
            ntpTime[i] = newIntpart[i];
        if (i >= 4)
            ntpTime[i] = newFractpart[i - 4];
    }
}
What you're running into here is loss of precision in the conversion from NTP timestamp to milliseconds. When you convert from NTP to milliseconds, you're dropping part of the fraction. When you then take that value and try to convert back, you get a value that's slightly different. You can see this more clearly if you change your ulong values to decimal values, as in this test:
public static decimal GetMilliSeconds(byte[] ntpTime)
{
    decimal intpart = 0, fractpart = 0;

    for (var i = 0; i <= 3; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    var milliseconds = intpart * 1000 + ((fractpart * 1000) / 0x100000000L);
    Console.WriteLine("milliseconds: " + milliseconds);
    Console.WriteLine("intpart: " + intpart);
    Console.WriteLine("fractpart: " + fractpart);
    return milliseconds;
}

public static byte[] ConvertToNtp(decimal milliseconds)
{
    decimal intpart = 0, fractpart = 0;
    var ntpData = new byte[8];

    intpart = milliseconds / 1000;
    fractpart = ((milliseconds % 1000) * 0x100000000L) / 1000m;
    Console.WriteLine("milliseconds: " + milliseconds);
    Console.WriteLine("intpart: " + intpart);
    Console.WriteLine("fractpart: " + fractpart);

    var temp = intpart;
    for (var i = 3; i >= 0; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    temp = fractpart;
    for (var i = 7; i >= 4; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    return ntpData;
}

public static void Main(string[] args)
{
    byte[] bytes = { 131, 170, 126, 128,
                     46, 197, 205, 234 };
    var ms = GetMilliSeconds(bytes);
    Console.WriteLine();
    var ntp = ConvertToNtp(ms);
}
This yields the following result:
milliseconds: 2208988800182.7057548798620701
intpart: 2208988800
fractpart: 784715242
milliseconds: 2208988800182.7057548798620701
intpart: 2208988800.1827057548798620701
fractpart: 784715242.0000000000703594496
It's the ~0.7 milliseconds that are screwing things up here.
Since the NTP timestamp includes a 32-bit fractional second ("a theoretical resolution of 2^-32 seconds or 233 picoseconds"), a conversion to integer milliseconds will result in a loss of precision.
Response to Update:
Adding milliseconds to the NTP timestamp wouldn't be quite as simple as adding the integer parts and the fraction parts. Think of adding the decimals 1.75 and 2.75. 0.75 + 0.75 = 1.5, and you'd need to carry the one over to the integer part. Also, the fraction part in the NTP timestamp is not base-10, so you can't just add the milliseconds. Some conversion is necessary, using a proportion like ms / 1000 = ntpfrac / 0x100000000.
This is entirely untested, but I'd think you'd want to replace your intpart += and fractpart += lines in AddMilliSeconds with something more like this:
intpart += millis / 1000;
ulong fractsum = fractpart + (millis % 1000) * 0x100000000UL / 1000;
intpart += fractsum / 0x100000000UL;
fractpart = fractsum % 0x100000000UL;
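For reference, a complete (still untested) version of AddMilliSeconds with that carry logic applied might look like this; it writes the bytes back big-endian directly rather than going through BitConverter and SwapEndianness:
public static void AddMilliSeconds(ref byte[] ntpTime, ulong millis)
{
    ulong intpart = 0, fractpart = 0;
    for (var i = 0; i < 4; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    intpart += millis / 1000;
    // convert the millisecond remainder to 1/2^32-second units, then carry
    ulong fractsum = fractpart + (millis % 1000) * 0x100000000UL / 1000;
    intpart += fractsum / 0x100000000UL;
    fractpart = fractsum % 0x100000000UL;

    for (var i = 3; i >= 0; i--) { ntpTime[i] = (byte)(intpart % 256); intpart /= 256; }
    for (var i = 7; i >= 4; i--) { ntpTime[i] = (byte)(fractpart % 256); fractpart /= 256; }
}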
Suggestion to Cameron's solution:
use
ntpEpoch = (new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).Ticks;
to make sure that you don't calculate from your local time.
Same as others but without division
return (ulong)(elapsedTime.Ticks * 1e-7 * 4294967296ul);
Or
return (ulong)(((long)(elapsedTime.Ticks * 0.0000001) << 32) + (elapsedTime.TotalMilliseconds % 1000 * 4294967296 * 0.001));
// 1e-7 = seconds per tick (1 tick = 100 ns)
// 4294967296 = uint.MaxValue + 1 (2^32 fraction units per second)
// 0.001 = seconds per millisecond
The full method would then be:
public static System.DateTime UtcEpoch2036 = new System.DateTime(2036, 2, 7, 6, 28, 16, System.DateTimeKind.Utc);
public static System.DateTime UtcEpoch1900 = new System.DateTime(1900, 1, 1, 0, 0, 0, System.DateTimeKind.Utc);

public static ulong DateTimeToNptTimestamp(ref System.DateTime value/*, bool randomize = false*/)
{
    System.DateTime baseDate = value >= UtcEpoch2036 ? UtcEpoch2036 : UtcEpoch1900;
    System.TimeSpan elapsedTime = value > baseDate
        ? value.ToUniversalTime() - baseDate.ToUniversalTime()
        : baseDate.ToUniversalTime() - value.ToUniversalTime();

    // 1 tick = 1e-7 seconds; 4294967296 = uint.MaxValue + 1 = fraction units per second
    // 1e-7 * 4294967296 = 429.4967296 fraction units per tick
    unchecked
    {
        // Alternatives tried:
        //return (ulong)((long)(elapsedTime.Ticks * 0.0000001m) << 32 | (long)((decimal)elapsedTime.TotalMilliseconds % 1000 * 4294967296m * 0.001m));
        //return (ulong)(((long)(elapsedTime.Ticks * 0.0000001m) << 32) + (elapsedTime.TotalMilliseconds % 1000 * 4294967296ul * 0.001));
        //return (ulong)(elapsedTime.Ticks * 1e-7 * 4294967296ul); // has a random diff, which may comply better (the RFC asks for the non-significant bits to be an unbiased random bit string)
        //return (ulong)(elapsedTime.Ticks * 429.4967296m); // decimal precision is better, but we still lose precision because of the magnitude (0.001 msec diff); 429.49672960000004m has a reliable 0.003 msec diff
        //return (ulong)((elapsedTime.Ticks + 1) * 429.4967296m); // has 0 diff but causes the fraction to differ from the examples (same for adding + 429ul)
        return (ulong)(elapsedTime.Ticks * 429.496729600000000000429m);
        // has 0 diff on .137 measures, otherwise 0.001 msec or 1 tick, and keeps the examples the same
        //var ticks = (ulong)(elapsedTime.Ticks * 429.496729600000000000429m);
        //if (randomize) ticks ^= (ulong)(Utility.Random.Next() & byte.MaxValue);
        //return ticks;
    }
}
The reverse would be:
public static System.DateTime NptTimestampToDateTime(ref uint seconds, ref uint fractions, System.DateTime? epoch = null)
{
    unchecked
    {
        // Convert to ticks: 1 tick = 100 ns, TimeSpan.TicksPerSecond = 10000000;
        // the fraction is scaled down from 1/2^32-second units by shifting right 32 bits.
        //ulong ticks = (ulong)((seconds * System.TimeSpan.TicksPerSecond) + ((fractions * System.TimeSpan.TicksPerSecond) / 0x100000000L)); //uint.MaxValue + 1
        long ticks = seconds * System.TimeSpan.TicksPerSecond + ((long)(fractions * Media.Common.Extensions.TimeSpan.TimeSpanExtensions.TenMicrosecondsPerPicosecond) >> Common.Binary.BitsPerInteger);

        // Return the result of adding the ticks to the epoch.
        // If the epoch was given then use that value, otherwise determine the epoch based on the highest bit.
        return epoch.HasValue ? epoch.Value.AddTicks(ticks) :
            (seconds & 0x80000000L) == 0 ?
                UtcEpoch2036.AddTicks(ticks) :
                UtcEpoch1900.AddTicks(ticks);
    }
}
DateTime ticks to NTP and back.
static long ntpEpoch = (new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).Ticks;

static public long Ntp2Ticks(UInt64 a)
{
    var b = (decimal)a * 1e7m / (1UL << 32);
    return (long)b + ntpEpoch;
}

static public UInt64 Ticks2Ntp(long a)
{
    decimal b = a - ntpEpoch;
    b = (decimal)b / 1e7m * (1UL << 32);
    return (UInt64)b;
}

How to convert a byte array (MD5 hash) into a string (36 chars)?

I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good; it gives me a hexadecimal string.
Now I would like to use something different than hexadecimal characters: I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about that?
Edit: the reason I want to do this is because I would like to have a smaller string than a hexadecimal one.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var length = number.Length;
    var result = string.Empty;
    var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
    int newlen;
    do {
        var value = 0;
        newlen = 0;
        for (var i = 0; i < length; ++i) {
            value = value * fromBase + nibbles[i];
            if (value >= toBase) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = value / toBase;
                value %= toBase;
            }
            else if (newlen > 0) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = 0;
            }
        }
        length = newlen;
        result = digits[value] + result;
    }
    while (newlen != 0);
    return result;
}
As it's coming from PHP it might not be too idiomatic C#, and there are no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
See it in action.
Earlier tonight I came across a codereview question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from a string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount = 8; // number of bits in a byte

// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits = "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor = Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36 = new BigInteger(36);

// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
    var bi = new BigInteger();
    for (int x = 0; x < chars.Length; x++)
    {
        int i = kBase36Digits.IndexOf(chars[x]);
        if (i < 0) return null; // invalid character
        bi *= kBigInt36;
        bi += i;
    }
    return bi.ToByteArray();
}

// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
    // Estimate the result's length so we don't waste time realloc'ing
    int result_length = (int)
        Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);
    // We use a List so we don't have to CopyTo a StringBuilder's characters
    // to a char[], only to then Array.Reverse it later
    var result = new System.Collections.Generic.List<char>(result_length);
    var dividend = new BigInteger(bytes);
    // IsZero's computation is less complex than evaluating "dividend > 0"
    // which invokes BigInteger.CompareTo(BigInteger)
    while (!dividend.IsZero)
    {
        BigInteger remainder;
        dividend = BigInteger.DivRem(dividend, kBigInt36, out remainder);
        int digit_index = Math.Abs((int)remainder);
        result.Add(kBase36Digits[digit_index]);
    }
    // orientate the characters in big-endian ordering
    result.Reverse();
    // ToArray will also trim the excess chars used in length prediction
    return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base64 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on codereview)
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provide the least overhead for you.
If you want a shorter string and can accept [a-zA-Z0-9] and + and / then look at Convert.ToBase64String
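For instance (a quick sketch; note the Base64 alphabet is [A-Za-z0-9+/] plus = padding, so it doesn't strictly meet the [a-z][0-9] requirement):
byte[] hash;
using (var md5 = System.Security.Cryptography.MD5.Create())
    hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes("hello"));
string s = Convert.ToBase64String(hash); // 24 chars for a 16-byte MD5 hash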
Using BigInteger (needs the System.Numerics reference)
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";
// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
if (bytes.Length == 0 || len == 0)
{
return String.Empty;
}
// BigInteger saves in the last byte the sign. > 7F negative,
// <= 7F positive.
// If we have a "negative" number, we will prepend a 0 byte.
byte[] bytes2;
if (littleEndian)
{
if (bytes[bytes.Length - 1] <= 0x7F)
{
bytes2 = bytes;
}
else
{
// Note that Array.Resize doesn't modify the original array,
// but creates a copy and sets the passed reference to the
// new array
bytes2 = bytes;
Array.Resize(ref bytes2, bytes.Length + 1);
}
}
else
{
bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];
// We copy and reverse the array
for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
{
bytes2[j] = bytes[i];
}
}
BigInteger bi = new BigInteger(bytes2);
// A little optimization. We will do many divisions based on
// chars.Length .
BigInteger length = chars.Length;
// We pre-calc the length of the string. We know the bits of
// "information" of a byte are 8. Using Log2 we calc the bits of
// information of our new base.
if (len == -1)
{
len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
}
// We will build our string on a char[]
var chs = new char[len];
int chsIndex = 0;
while (bi > 0)
{
BigInteger remainder;
bi = BigInteger.DivRem(bi, length, out remainder);
chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
chsIndex++;
if (chsIndex < 0)
{
if (bi > 0)
{
throw new OverflowException();
}
}
}
// We append the zeros that we skipped at the beginning
if (littleEndian)
{
while (chsIndex < len)
{
chs[chsIndex] = chars[0];
chsIndex++;
}
}
else
{
while (chsIndex < len)
{
chs[len - chsIndex - 1] = chars[0];
chsIndex++;
}
}
return new string(chs);
}
public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
    if (str.Length == 0 || len == 0)
    {
        return new byte[0];
    }

    // This should be the maximum length of the byte[] array. It's
    // the opposite of the one used in ToBaseN.
    // Note that it can be passed as a parameter
    if (len == -1)
    {
        len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
    }

    BigInteger bi = BigInteger.Zero;
    BigInteger length2 = chars.Length;
    BigInteger mult = BigInteger.One;

    for (int j = 0; j < str.Length; j++)
    {
        int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);

        // We didn't find the character
        if (ix == -1)
        {
            throw new ArgumentOutOfRangeException();
        }

        bi += ix * mult;
        mult *= length2;
    }

    var bytes = bi.ToByteArray();
    int len2 = bytes.Length;

    // BigInteger adds a 0 byte for positive numbers that have the
    // last byte > 0x7F
    if (len2 >= 2 && bytes[len2 - 1] == 0)
    {
        len2--;
    }

    int len3 = Math.Min(len, len2);

    byte[] bytes2;
    if (littleEndian)
    {
        if (len == bytes.Length)
        {
            bytes2 = bytes;
        }
        else
        {
            bytes2 = new byte[len];
            Array.Copy(bytes, bytes2, len3);
        }
    }
    else
    {
        bytes2 = new byte[len];
        for (int i = 0; i < len3; i++)
        {
            bytes2[len - i - 1] = bytes[i];
        }
    }

    for (int i = len3; i < len2; i++)
    {
        if (bytes[i] != 0)
        {
            throw new OverflowException();
        }
    }

    return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow! (2 minutes for 100k). To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (because the whole array needs to be divided by 36) - unless you want to work with blocks of 5 bytes and lose some bits. Each symbol of Base36 carries around 5.169925001 bits, so 8 of these symbols carry 41.35940001 bits: very nearly 40 bits, i.e. 5 bytes (see the sketch below).
Note that these methods can work both in little-endian mode and in big-endian mode. The endianness of the input and of the output is the same. Both methods accept a len parameter. You can use it to trim excess zeroes. Note that if you try to make the output too small to contain the input, an OverflowException will be thrown.
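As an illustration of that 5-byte-block idea (my own rough sketch, not taken from the answers above): each 40-bit block maps to exactly 8 base-36 symbols, since 36^8 (about 2.8e12) exceeds 2^40 (about 1.1e12). It assumes the input length is a multiple of 5; real code would have to handle the tail.
const string Digits = "0123456789abcdefghijklmnopqrstuvwxyz";
static string EncodeBlocks(byte[] data)
{
    var sb = new System.Text.StringBuilder(data.Length / 5 * 8);
    for (int i = 0; i < data.Length; i += 5)
    {
        ulong block = 0;
        for (int j = 0; j < 5; j++)
            block = (block << 8) | data[i + j]; // read 40 bits big-endian
        char[] chunk = new char[8];
        for (int k = 7; k >= 0; k--) // emit 8 base-36 digits, least significant last
        {
            chunk[k] = Digits[(int)(block % 36)];
            block /= 36;
        }
        sb.Append(chunk);
    }
    return sb.ToString();
}
This is O(n) overall, because each block is divided independently instead of dividing the whole number by 36 for every output symbol.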
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a char separator. To convert back you will need to Split the string, and convert the string[] to byte[] using the same approach with .Select().
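For example (a hypothetical round-trip, assuming a using System.Linq directive as the encoding line above already requires):
// "200a180a34" -> { 200, 180, 34 }
byte[] back = result.Split('a').Select(Byte.Parse).ToArray();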
Usually a power of 2 is used - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits. The only challenge in that case is how to deserialize variable-length strings.
For base 36 you could treat the data as a large number, and then:
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps.
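A minimal sketch of that loop, using BigInteger from System.Numerics (the appended 0x00 byte keeps the little-endian value non-negative):
static string ToBase36(byte[] bytes)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    byte[] padded = new byte[bytes.Length + 1]; // extra zero byte forces a positive value
    Array.Copy(bytes, padded, bytes.Length);
    var value = new System.Numerics.BigInteger(padded);
    var sb = new System.Text.StringBuilder();
    do
    {
        sb.Insert(0, digits[(int)(value % 36)]); // remainder becomes the next character
        value /= 36;
    } while (value > 0);
    return sb.ToString();
}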
You can use modulo.
This example encodes your byte array to a string of [0-9][a-z].
Change it if you want.
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length];
    for (i = 0; i < byteArr.Length; i++)
    {
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[i] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[i] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
If you don't want to lose data for decoding, you can use this example:
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length * 2];
    for (i = 0; i < byteArr.Length; i++)
    {
        charArr[2 * i] = (char)((int)byteArr[i] / 36 + 48);
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[2 * i + 1] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[2 * i + 1] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
And now you have a string of double length, where the odd chars (1st, 3rd, ...) hold the multiples of 36 and the even chars hold the remainders. For example: 200 = 36*5 + 20 => "5k".
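A matching decoder might look like this (my sketch, using the same digit/letter offsets as the encoder above):
public byte[] stringToByte(string s)
{
    byte[] bytes = new byte[s.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
    {
        int hi = s[2 * i] - 48;              // multiples of 36 (always '0'-'7')
        char c = s[2 * i + 1];
        int lo = c <= '9' ? c - 48 : c - 87; // remainder: digit or letter
        bytes[i] = (byte)(hi * 36 + lo);
    }
    return bytes;
}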

Converting an Int to a BCD byte array [duplicate]

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
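A quick usage check (matching the example in the question):
byte[] b = Year2Bcd(2010);
Console.WriteLine("0x{0:X2}, 0x{1:X2}", b[0], b[1]); // prints 0x20, 0x10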
Use this method.
public static byte[] ToBcd(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4];
    for (int i = 0; i < 4; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
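A sketch of that early-exit optimization (the VB.NET version further down does the same thing with Exit For):
public static byte[] ToBcdOptimized(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4]; // .NET zero-initializes the array
    for (int i = 0; i < 4 && value > 0; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}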
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");

    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);

    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
maybe a simple parse function containing this loop (with the surrounding declarations made explicit):
int i = 0, twodigits;
byte[] arr = new byte[4];
while (id > 0)
{
    twodigits = id % 100;                                  // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted left by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
Same version as Peter O. but in VB.NET
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value so if for instance the argument is "16", the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
