I have to convert values (double/float in C#) to bytes and need some help.
// Datatype long 4byte -99999999,99 to 99999999,99
// Datatype long 4byte -99999999,9 to 99999999,9
// Datatype short 2byte -999,99 to 999,99
// Datatype short 2byte -999,9 to 999,9
In my "world at home" i would just string it and ASCII.GetBytes().
But now, in this world, we have to make less possible space.
And indeed that '-99999999,99' takes 12 bytes instead of 4! if it's a 'long' datatype.
[EDIT]
Thanks to the help in the answers, I attach some results here:
long lng = -9999999999L;
byte[] test = Encoding.ASCII.GetBytes(lng.ToString()); // 11 byte
byte[] test2 = BitConverter.GetBytes(lng); // 8 byte
byte[] mybyt = BitConverter.GetBytes(lng); // 8 byte
byte[] bA = BitConverter.GetBytes(lng); // 8 byte
There is still one detail left to find out: the lng variable takes 8 bytes even if it holds a smaller value, e.g. 99951 (I won't include the ToString() sample).
If the value is even "shorter", i.e. in the range -999,99 to 999,99, it only takes 2 bytes of space.
[END EDIT]
Have you checked BitConverter?
long lng =-9999999999L;
byte[] mybyt = BitConverter.GetBytes(lng);
Hope this is what you are looking for.
Try to do it in this way:
long l = 4554334;
byte[] bA = BitConverter.GetBytes(l);
Be aware that in 2 bytes you can only have 4 full digits + sign, and in 4 bytes you can only have 9 digits + sign, so I had to scale your prereqs accordingly.
public static byte[] SerializeLong2Dec(double value)
{
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0)
{
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong2Dec(byte[] value)
{
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeLong1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0) {
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong1Dec(byte[] value) {
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 10.0;
}
public static byte[] SerializeShort2Dec(double value) {
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort2Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeShort1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort1Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 10.0;
}
So that it's clear, the range of a (signed) short (16 bits) is -32,768 to 32,767 so it's quite clear that you only have 4 full digits plus a little piece (the 0-3), the range of a (signed) int (32 bits) is −2,147,483,648 to 2,147,483,647 so it's quite clear that you only have 9 full digits plus a little piece (the 0-2). Going to a (signed) long (64 bits) you have -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 so 18 digits plus a (big) piece. Using floating points you lose in accuracy. A float (32 bits) has an accuracy of around 7 digits, while a double (64 bits) has an accuracy of around 15-16 digits.
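For example, a quick round-trip check of the helpers above might look like this (a sketch only; the values are illustrative and stay inside the scaled ranges the methods enforce):
byte[] packedShort = SerializeShort2Dec(-99.99); // 2 bytes instead of 6 ASCII characters
double backShort = DeserializeShort2Dec(packedShort); // -99.99 again
byte[] packedInt = SerializeLong2Dec(1234567.89); // 4 bytes for up to 9 scaled digits
double backInt = DeserializeLong2Dec(packedInt); // 1234567.89 again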
To everyone reading this question and the answers, please note that:
//convert to bytes
long number = 123;
byte[] bytes = BitConverter.GetBytes(number);
//convert back to int64
long newNumber = BitConverter.ToInt64(bytes, 0);
//this round-trips fine on the same machine, but the layout of 'bytes'
//depends on the endianness of the system that produced it
The caveat matters as soon as the byte array is exchanged with a system of different endianness; in that case you have to check which byte order you are dealing with:
newNumber = BitConverter.IsLittleEndian
? BitConverter.ToInt64(bytes, 0)
: BitConverter.ToInt64(bytes.Reverse().ToArray(), 0); // Reverse() needs System.Linq
long longValue = 9999999999L;
Console.WriteLine("Long value: " + longValue.ToString());
byte[] bytes = BitConverter.GetBytes(longValue);
Console.WriteLine("Byte array value:");
Console.WriteLine(BitConverter.ToString(bytes));
As the other answers have pointed out, you can use BitConverter to get the byte representation of primitive types.
You said that in the current world you inhabit, there is an onus on representing these values as concisely as possible, in which case you should consider variable length encoding (though that document may be a bit abstract).
If you decide this approach is applicable to your case I would suggest looking at how the Protocol Buffers project represents scalar types as some of these types are encoded using variable length encoding which produces shorter output if the data set favours smaller absolute values. (The project is open source under a New BSD license, so you will be able to learn the technique employed from the source repository or even use the source in your own project.)
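For a rough idea of the technique, here is a sketch of base-128 varint encoding with ZigZag applied first (illustrative only, in the spirit of Protocol Buffers' wire format, not the library's actual API; it needs System.Collections.Generic):
static byte[] EncodeVarint(long value)
{
ulong zigzag = unchecked((ulong)((value << 1) ^ (value >> 63))); // ZigZag: small magnitudes get small codes
var bytes = new List<byte>();
do
{
byte b = (byte)(zigzag & 0x7F); // low 7 bits
zigzag >>= 7;
if (zigzag != 0) b |= 0x80; // continuation bit while more bits remain
bytes.Add(b);
} while (zigzag != 0);
return bytes.ToArray(); // e.g. -2 encodes to 1 byte, 99999999 to 4 bytes
}
With such a scheme the worst case from the question still fits in a handful of bytes, while small values shrink to one or two.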
Related
I have a requirement to encode a byte array from a short integer value
The encoding rules are
The bits representing the integer are bits 0 - 13
bit 14 is set if the number is negative
bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter
byte[] roll = BitConverter.GetBytes(x);
But I can't find out how to meet my requirement
Anyone know how to do this?
You should use bitwise operators.
The solution is something like this:
Int16 x = 7;
if(x < 0)
{
Int16 mask14 = 16384; // 0b0100000000000000;
x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000;
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
You cannot rely on GetBytes alone for negative numbers, since it gives you the two's complement bit pattern, and that is not what you need here.
Instead you need to do bounds checking to make sure the number is representable, then use GetBytes on the absolute value of the given number.
The method's parameter is 'short', so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383;// 2^14-1
public static byte[] EncodeSigned14Bit(short x)
{
var absoluteX = Math.Abs(x);
if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
byte[] roll = BitConverter.GetBytes(absoluteX);
if (x < 0)
{
roll[1] |= 0b01000000; //x is negative, set 14th bit
}
roll[1] |= 0b10000000; // 15th bit is always set
return roll;
}
static void Main(string[] args)
{
// testing some values
var r1 = EncodeSigned14Bit(16383); // r1[0] = 255, r1[1] = 191
var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}
I need to create a byte array with hex and int values.
For example:
int value1 = 13;
int value2 = 31;
byte[] mixedbytes = new byte[] {0x09, (byte)value1, (byte)value2};
Problem: The 31 is converted to 0x1F. It should be 0x31. I've tried converting the int values to strings and back to bytes, but that didn't solve the problem. The integers never have more than two digits.
Try this:
int value1 = 0x13;
int value2 = 0x31;
byte[] mixedbytes = new byte[] { 0x09, (byte)value1, (byte)value2 };
Also, you don't seem to understand the conversion between decimal and hex: 31 in decimal is 1F in hex, so expecting it to be 31 in hex is a misunderstanding. For a better understanding of the conversion between decimal and hex, please have a look here: http://www.wikihow.com/Convert-from-Decimal-to-Hexadecimal
I think you can try this method:
string i = "10";
var b = Convert.ToByte(i, 16);
This way the string "10" is stored as the byte 0x10.
This format is commonly known as Binary Coded Decimal (BCD). The idea is that the nibbles in the byte each contain a single decimal digit.
In C#, you can do this conversion very easily:
var number = 31;
var bcd = (number / 10) * 16 + (number % 10);
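Applied to the example from the question, a sketch using that conversion (with the question's variable names) would be:
int value1 = 13;
int value2 = 31;
byte[] mixedbytes = new byte[]
{
0x09,
(byte)((value1 / 10) * 16 + (value1 % 10)), // 13 -> 0x13
(byte)((value2 / 10) * 16 + (value2 % 10)) // 31 -> 0x31
};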
I'm trying to generate a fixed length hash using the code below.
public int GetStableHash(string s)
{
string strKey = "myHashingKey";
UnicodeEncoding UE = new UnicodeEncoding();
byte[] key = UE.GetBytes(strKey);
byte[] contentBuffer = UE.GetBytes(s);
// Initialize the keyed hash object.
HMACSHA256 myhmacsha256 = new HMACSHA256(key);
byte[] hashValue = myhmacsha256.ComputeHash(contentBuffer);
return BitConverter.ToInt32(hashValue,0);
}
It gives me output like this.
-1635597425
I need a positive number of fixed length (8 digits). Can someone please tell me how to do that?
Thanks in advance.
You're trying to get an 8-digit number from a hash function output which can have up to
lg(2^256) ~ 78
decimal digits.
You should either consider changing the hash function, or take only part of the output: 26 bits would still fit in 8 digits (2^26 = 67108864, while 2^27 = 134217728 already has 9 digits), so round down to 3 whole bytes (24 bits) and build an Int32 from those 3 bytes.
public int GetStableHash(string s)
{
...
byte[] hashValue = myhmacsha256.ComputeHash(contentBuffer);
byte[] hashPart = new byte[4]; // fourth byte stays 0 so ToInt32 has enough bytes
Array.Copy(hashValue, 29, hashPart, 0, 3); // take the last 3 bytes (32 - 3 = 29)
return System.BitConverter.ToInt32(hashPart, 0);
}
unchecked
{
int num = BitConverter.ToInt32(hashValue,0);
if (num < 0)
{
num = -num;
}
num %= 100000000;
}
I'm using unchecked because otherwise negating int.MinValue would break (note, though, that programs are normally compiled with overflow checking off, so unchecked is the default behaviour anyway).
The code means:
unchecked
don't do overflow checks
if (num < 0)
{
num = -num;
}
make the number positive if negative
num %= 100000000;
take the remainder (which has at most 8 digits)
much shorter:
return unchecked((int)((uint)BitConverter.ToInt32(hashValue,0) % 100000000));
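Note that the remainder can have fewer than 8 digits, so if the output must always be exactly 8 characters you still have to pad it when formatting. A sketch:
int raw = BitConverter.ToInt32(hashValue, 0);
int positive = unchecked((int)((uint)raw % 100000000)); // always in 0..99999999
string fixedLength = positive.ToString("D8"); // e.g. "00427311", always 8 characters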
I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
if (year < 0 || year > 9999) throw new ArgumentException();
int bcd = 0;
for (int digit = 0; digit < 4; ++digit) {
int nibble = year % 10;
bcd |= nibble << (digit * 4);
year /= 10;
}
return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
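For example (a quick check of the method above):
byte[] packedYear = Year2Bcd(2010); // { 0x20, 0x10 }, big-endian as requested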
Use this method.
public static byte[] ToBcd(int value){
if(value<0 || value>99999999)
throw new ArgumentOutOfRangeException("value");
byte[] ret=new byte[4];
for(int i=0;i<4;i++){
ret[i]=(byte)(value%10);
value/=10;
ret[i]|=(byte)((value%10)<<4);
value/=10;
}
return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
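If you also need the reverse conversion, a sketch (hypothetical, assuming the same least-significant-pair-first layout as ToBcd above) would be:
public static int FromBcd(byte[] bcd)
{
int value = 0;
for (int i = bcd.Length - 1; i >= 0; i--)
{
value = value * 100 + (bcd[i] >> 4) * 10 + (bcd[i] & 0x0F); // high nibble = tens, low nibble = ones
}
return value;
}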
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's
static byte[] IntToBCD(int input)
{
if (input > 9999 || input < 0)
throw new ArgumentOutOfRangeException("input");
int thousands = input / 1000;
int hundreds = (input -= thousands * 1000) / 100;
int tens = (input -= hundreds * 100) / 10;
int ones = (input -= tens * 10);
byte[] bcd = new byte[] {
(byte)(thousands << 4 | hundreds),
(byte)(tens << 4 | ones)
};
return bcd;
}
Maybe a simple parse function containing this loop (assuming arr is a byte[] and id holds the decimal value):
int i = 0;
while (id > 0)
{
int twodigits = id % 100; // need 2 digits per byte
arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted up by 4 bits
id /= 100;
i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
Byte currentByte = 0;
Boolean odd = true;
while (value > 0)
{
if (odd)
currentByte = 0;
Decimal rest = value % 10;
value = (value-rest)/10;
currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
if(!odd)
yield return currentByte;
odd = !odd;
}
if(!odd)
yield return currentByte;
}
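Called like this (a small usage sketch; ToArray() needs System.Linq), it yields the packed bytes with the least significant pair first:
Byte[] bcdBytes = GetBytes(123456m).ToArray(); // { 0x56, 0x34, 0x12 }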
The same version as Peter O.'s, but in VB.NET:
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
For i As Integer = 0 To 3
ret(i) = CByte(pValue Mod 10)
pValue = Math.Floor(pValue / 10.0)
ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
pValue = Math.Floor(pValue / 10.0)
If pValue = 0 Then Exit For
Next
Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 in VB.NET will round the value: if, for instance, the argument is 16, the first nibble of the byte will be correct, but the result of the division will be 2 (as 1.6 is rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
// Note: this only splits the 16-bit value into two bytes; it yields valid BCD
// only if the input is already BCD-encoded (e.g. 0x2010 for the year 2010).
byte[] bcd = new byte[] {
(byte)(input >> 8),
(byte)(input & 0x00FF)
};
return bcd;
}
I need to convert bytes in two's complement format to positive integer bytes.
The range -128 to 127 mapped to 0 to 255.
Examples: -128 (10000000) -> 0 , 127 (01111111) -> 255, etc.
EDIT To clear up the confusion, the input byte is (of course) an unsigned integer in the range 0 to 255. BUT it represents a signed integer in the range -128 to 127 using two's complement format. For example, the input byte value of 128 (binary 10000000) actually represents -128.
EXTRA EDIT Alrighty, let's say we have the following byte stream 0,255,254,1,127. In two's complement format this represents 0, -1, -2, 1, 127. This I need clamping to the 0 to 255 range. For more info check out this hard-to-find article: Two's complement
From your sample input you simply want:
sbyte something = -128;
byte foo = (byte)( something + 128);
new = old + 128;
bingo :-)
Try
sbyte signed = (sbyte)input;
or, sign-extending by hand (only when the high bit is set):
int signed = (input & 0x80) != 0 ? input | ~0xFF : input;
public static int MakeHexSigned(byte value)
{
int result = value;
if (result > 255 / 2)
{
result = -1 * (255 + 1) + result; // subtract 256, so 128..255 maps to -128..-1
}
return result;
}
I believe that two's complement bytes would be best handled with the following. Maybe not elegant or short, but clear and obvious. I would put it as a static method in one of my util classes.
public static sbyte ConvertTo2Complement(byte b)
{
if(b < 128)
{
return Convert.ToSByte(b);
}
else
{
int x = Convert.ToInt32(b);
return Convert.ToSByte(x - 256);
}
}
If I understood correctly, your problem is how to convert the input, which is really a signed byte (sbyte) but is stored in an unsigned integer, and then also avoid negative values by converting them to zero.
To be clear, when you use a signed type (like sbyte) the framework is using two's complement behind the scenes, so just by casting to the right type you will be using two's complement.
Then, once you have that conversion done, you could clamp the negative values with a simple if or a conditional ternary operator (?:).
The functions presented below will return 0 for values from 128 to 255 (or from -128 to -1), and the same value for values from 0 to 127.
So, if you must use unsigned integers as input and output you could use something like this:
private static uint ConvertSByteToByte(uint input)
{
sbyte properDataType = (sbyte)input; //128..255 will be taken as -128..-1
if (properDataType < 0) { return 0; } //when negative just return 0
if (input > 255) { return 0; } //just in case as uint can be greater than 255
return input;
}
Or, IMHO, you could change your input and outputs to the data types best suited to your input and output (sbyte and byte):
private static byte ConvertSByteToByte(sbyte input)
{
return input < 0 ? (byte)0 : (byte)input;
}
int8_t indata; /* -128,-127,...-1,0,1,...127 */
uint8_t byte = indata ^ 0x80;
XOR the MSB, that's all.
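The same trick reads naturally in C# too, since flipping the most significant bit is exactly the +128 bias (a sketch):
byte input = 0x80; // represents -128 in two's complement
byte biased = (byte)(input ^ 0x80); // 0; likewise 0x7F (127) becomes 0xFF (255)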
Here is my solution for this problem, for numbers bigger than 8 bits. My example is for a 16-bit value. Note: you will have to check the first bit to see whether the number is negative or not.
Steps:
Convert the number to its complement by placing '~' before the variable (i.e. y = ~y).
Convert the numbers to binary strings.
Break the binary strings into character arrays.
Starting with the rightmost value, add 1, keeping track of carries. Store the result in a character array.
Convert the character array back to a string.
private string TwosComplimentMath(string value1, string value2)
{
char[] binary1 = value1.ToCharArray();
char[] binary2 = value2.ToCharArray();
bool carry = false;
char[] calcResult = new char[16];
for (int i = 15; i >= 0; i--)
{
if (binary1[i] == binary2[i])
{
if (binary1[i] == '1')
{
if (carry)
{
calcResult[i] = '1';
carry = true;
}
else
{
calcResult[i] = '0';
carry = true;
}
}
else
{
if (carry)
{
calcResult[i] = '1';
carry = false;
}
else
{
calcResult[i] = '0';
carry = false;
}
}
}
else
{
if (carry)
{
calcResult[i] = '0';
carry = true;
}
else
{
calcResult[i] = '1';
carry = false;
}
}
}
string result = new string(calcResult);
return result;
}
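Following the steps above, hypothetical driver code (not part of the original answer) might look roughly like this:
short y = 1234;
short inverted = (short)~y; // step 1: bitwise complement
string bits = Convert.ToString(inverted, 2).PadLeft(16, '0'); // steps 2-3: 16-bit binary string
string one = "1".PadLeft(16, '0'); // the "+1" as a 16-bit string
string twosComplement = TwosComplimentMath(bits, one); // "1111101100101110", i.e. -1234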
So the OP's problem isn't really two's complement conversion; he's adding a bias to a set of values to adjust the range from -128..127 to 0..255.
To actually do a two's complement conversion, you just typecast the signed value to the unsigned value, like this:
sbyte test1 = -1;
byte test2 = (byte)test1;
-1 becomes 255. -128 becomes 128. This doesn't sound like what the OP wants, though. He just wants to slide an array up so that the lowest signed value (-128) becomes the lowest unsigned value (0).
To add a bias, you just do integer addition:
newValue = signedValue+128;
You could be describing something as simple as adding a bias to your number (in this case, adding 128 to the signed number).
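Applied to the sample stream from the question, that bias looks like this (a sketch):
byte[] raw = { 0, 255, 254, 1, 127 }; // represents 0, -1, -2, 1, 127 in two's complement
byte[] biased = new byte[raw.Length];
for (int i = 0; i < raw.Length; i++)
{
biased[i] = (byte)(unchecked((sbyte)raw[i]) + 128); // -128..127 -> 0..255
}
// biased is { 128, 127, 126, 129, 255 }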