I have this code:
string result = "";
foreach (char item in texte)
{
    result += Convert.ToString(item, 2).PadLeft(8, '0');
}
So I have a string named result which is the binary representation of a word like 'bonjour'.
For texte = "bonjour" I get result = "01100010011011110110111001101010011011110111010101110010" (note this is a string, not an integer).
And when I do
Console.WriteLine(result[0])
I get 0, which is what I expected. But if I do
Console.WriteLine((int)result[0])
or
Console.WriteLine(Convert.ToInt32(result[0]))
I get 48!
I don't want 48; I want 0 or 1 as an integer.
Could you help me please?
You can just subtract 48 from it!
Console.WriteLine(result[0] - 48);
because the digit characters '0' to '9' are encoded as 48 to 57.
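Equivalently, you can subtract the character '0' instead of the magic number 48; a minimal sketch:

```csharp
using System;

class Demo
{
    static void Main()
    {
        string result = "0110";
        // '0' has character code 48, so subtracting it maps '0' -> 0 and '1' -> 1
        Console.WriteLine(result[0] - '0'); // prints 0
        Console.WriteLine(result[1] - '0'); // prints 1
    }
}
```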
If you want to access each bit by index, I suggest using a BitArray instead:
var bytes = Encoding.ASCII.GetBytes("someString"); // requires using System.Text
var bitArray = new BitArray(bytes);                // requires using System.Collections
// now you can access the first bit like so:
bool firstBit = bitArray.Get(0);             // returns a bool
int firstBitAsInt = bitArray.Get(0) ? 1 : 0; // gives you a 1 or 0
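One caveat: BitArray indexes bits starting from the least significant bit of each byte, which is the reverse of the PadLeft string above (whose index 0 is the most significant bit). A small sketch, assuming the input "b" (ASCII 0x62, binary 01100010):

```csharp
using System;
using System.Collections;
using System.Text;

class Demo
{
    static void Main()
    {
        var bytes = Encoding.ASCII.GetBytes("b"); // 0x62 == 0b01100010
        var bitArray = new BitArray(bytes);

        // Index 0 is the LEAST significant bit of the first byte:
        Console.WriteLine(bitArray.Get(1) ? 1 : 0); // 1 (bit 1 of 0x62 is set)
        // The string "01100010" has its most significant bit at index 0,
        // so string index i corresponds to BitArray index 7 - i:
        Console.WriteLine(bitArray.Get(7 - 1) ? 1 : 0); // 1 (matches "01100010"[1])
    }
}
```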
string a = "23jlfdsa890123kl21";
byte[] data = System.Text.Encoding.Default.GetBytes(a);
StringBuilder result = new StringBuilder(data.Length * 8);
foreach (byte b in data)
{
    result.Append(Convert.ToString(b, 2).PadLeft(8, '0'));
}
You can try this code; the StringBuilder avoids allocating a new string on every append.
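For completeness, here is a sketch of the reverse direction (not part of the answer above): parsing each 8-character chunk back into a byte with Convert.ToByte(..., 2):

```csharp
using System;
using System.Text;

class Demo
{
    static void Main()
    {
        string bits = "0110001001101111"; // "bo" as two 8-bit chunks
        byte[] data = new byte[bits.Length / 8];
        for (int i = 0; i < data.Length; i++)
        {
            // Convert.ToByte with base 2 parses one binary chunk
            data[i] = Convert.ToByte(bits.Substring(i * 8, 8), 2);
        }
        Console.WriteLine(Encoding.ASCII.GetString(data)); // prints "bo"
    }
}
```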
Just do this:
Console.WriteLine(Convert.ToInt32(Convert.ToString(result[0])));
You're expecting it to behave like Convert.ToInt32(string input), but you're actually invoking Convert.ToInt32(char input). If you check the docs, they explicitly state that it returns the character's Unicode code point (in this case the same as its ASCII value).
http://msdn.microsoft.com/en-us/library/ww9t2871(v=vs.110).aspx
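The difference between the two overloads is easy to see side by side:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // char overload: returns the character's Unicode code point
        Console.WriteLine(Convert.ToInt32('0')); // 48
        // string overload: parses the text as a number
        Console.WriteLine(Convert.ToInt32("0")); // 0
    }
}
```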
I have a requirement to encode a byte array from a short integer value.
The encoding rules are:
The bits representing the integer are bits 0 - 13
bit 14 is set if the number is negative
bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter
byte[] roll = BitConverter.GetBytes(x);
But I can't find how to meet my requirements.
Anyone know how to do this?
You should use bitwise operators.
The solution is something like this:
Int16 x = 7;
if (x < 0)
{
    Int16 mask14 = 16384; // 0b0100000000000000
    x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000 (as a signed Int16)
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
You cannot rely on GetBytes for negative numbers, since two's complement flips the bits and that is not what you need.
Instead you need to do bounds checking to make sure the number is representable, then call GetBytes on the absolute value of the given number.
The method's parameter is short, so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383; // 2^14 - 1

public static byte[] EncodeSigned14Bit(short x)
{
    var absoluteX = Math.Abs(x);
    if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
    byte[] roll = BitConverter.GetBytes(absoluteX);
    if (x < 0)
    {
        roll[1] |= 0b01000000; // x is negative, set bit 14
    }
    roll[1] |= 0b10000000; // bit 15 is always set
    return roll;
}

static void Main(string[] args)
{
    // testing some values
    var r1 = EncodeSigned14Bit(16383);  // r1[0] = 255, r1[1] = 191
    var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}
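A matching decoder is not part of the answer above, but under the same bit layout (bits 0-13 magnitude, bit 14 sign, bit 15 always set) it could look like this sketch:

```csharp
using System;

class Demo
{
    static short DecodeSigned14Bit(byte[] roll)
    {
        bool negative = (roll[1] & 0b01000000) != 0;             // bit 14: sign flag
        int magnitude = ((roll[1] & 0b00111111) << 8) | roll[0]; // bits 0-13
        return (short)(negative ? -magnitude : magnitude);
    }

    static void Main()
    {
        Console.WriteLine(DecodeSigned14Bit(new byte[] { 255, 191 })); // 16383
        Console.WriteLine(DecodeSigned14Bit(new byte[] { 255, 255 })); // -16383
    }
}
```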
The main problem is that I receive a binary number from a SerialPort with only 10 bits in use, so I use this to read the complete data:
byte[] buf = new byte[2];
serialPort.Read(buf, 0, buf.Length);
BitArray bits = new BitArray(buf);
The original idea to convert the binary to an int was this:
foreach (bool b in bits)
{
    if (b) {
        binary += "1";
    }
    else {
        binary += "0";
    }
}
value = Convert.ToInt32(binary, 2);
value = value >> 6;
binary is obviously a string, and value an int (decimal is a reserved keyword in C#, so it can't be used as a variable name). That works, but I need to know if another solution exists. Instead of the previous code I tried this:
value = BitConverter.ToInt16(buf, 0);
But this only reads the first 8 bits, and I need the 2 missing bits! If I change ToInt16 to ToInt32
value = BitConverter.ToInt32(buf, 0);
the program stops with a System.ArgumentException: Destination array was not long enough...
What can I do?
You can just shift the values in the bytes so that they line up, and OR them together. If I understood the bit layout correctly, that would be:
int value = (buf[0] << 2) | (buf[1] >> 6);
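For example, with a hypothetical pair of bytes where buf[0] carries the 8 high bits and the top two bits of buf[1] carry the remaining low bits:

```csharp
using System;

class Demo
{
    static void Main()
    {
        byte[] buf = { 0b10110011, 0b01000000 };
        // 8 high bits from buf[0], 2 low bits from the top of buf[1]
        int value = (buf[0] << 2) | (buf[1] >> 6);
        Console.WriteLine(value); // 0b1011001101 == 717
    }
}
```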
If I have int x = 24, how can I convert that into a 2-byte array where the first byte stores the character code for '2' (50) and the second byte stores the character code for '4' (52)?
System.Text.Encoding.ASCII.GetBytes(x.ToString());
The easiest way is to convert to a string first, then convert that to bytes.
byte[] bytes = System.Text.Encoding.ASCII.GetBytes(x.ToString());
You can use the division and modulo operators:
byte[] data = new byte[] { (byte)(48 + x / 10), (byte)(48 + x % 10) };
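For x = 24 this produces { 50, 52 }; 48 is the character code of '0', so 48 + digit gives the digit's ASCII code:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int x = 24;
        // 48 + digit value == ASCII code of the digit character
        byte[] data = { (byte)(48 + x / 10), (byte)(48 + x % 10) };
        Console.WriteLine(data[0]); // 50 ('2')
        Console.WriteLine(data[1]); // 52 ('4')
    }
}
```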
int x_int = 24;
string x_string = x_int.ToString();
var x_bytes = (from x in x_string select Convert.ToByte(x)).ToArray();
I have to convert values (double/float in C#) to bytes and need some help.
// Datatype long 4byte -99999999,99 to 99999999,99
// Datatype long 4byte -99999999,9 to 99999999,9
// Datatype short 2byte -999,99 to 999,99
// Datatype short 2byte -999,9 to 999,9
In my "world at home" I would just turn it into a string and use ASCII.GetBytes().
But now, in this world, we have to use as little space as possible.
And indeed '-99999999,99' takes 12 bytes as a string, instead of the 4 it would take as the 'long' datatype above!
[EDIT]
Due to some help and answer I attach some results here,
long lng = -9999999999L;
byte[] test = Encoding.ASCII.GetBytes(lng.ToString()); // 11 byte
byte[] test2 = BitConverter.GetBytes(lng); // 8 byte
byte[] mybyt = BitConverter.GetBytes(lng); // 8 byte
byte[] bA = BitConverter.GetBytes(lng); // 8 byte
There is still one detail left to find out. The lng variable takes 8 bytes even if it holds a lower value, e.g. 99951 (I won't include the ToString() sample).
If the value is even "shorter", i.e. in the range -999,99 to 999,99, it should only take 2 bytes of space.
[END EDIT]
Have you checked BitConverter?
long lng =-9999999999L;
byte[] mybyt = BitConverter.GetBytes(lng);
Hope this is what you are looking for.
Try to do it in this way:
long l = 4554334;
byte[] bA = BitConverter.GetBytes(l);
Be aware that in 2 bytes you can only have 4 full digits + sign, and in 4 bytes you can only have 9 digits + sign, so I had to scale your prereqs accordingly.
public static byte[] SerializeLong2Dec(double value)
{
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0)
{
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong2Dec(byte[] value)
{
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeLong1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0) {
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong1Dec(byte[] value) {
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 10.0;
}
public static byte[] SerializeShort2Dec(double value) {
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort2Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeShort1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort1Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 10.0;
}
So that it's clear, the range of a (signed) short (16 bits) is -32,768 to 32,767 so it's quite clear that you only have 4 full digits plus a little piece (the 0-3), the range of a (signed) int (32 bits) is −2,147,483,648 to 2,147,483,647 so it's quite clear that you only have 9 full digits plus a little piece (the 0-2). Going to a (signed) long (64 bits) you have -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 so 18 digits plus a (big) piece. Using floating points you lose in accuracy. A float (32 bits) has an accuracy of around 7 digits, while a double (64 bits) has an accuracy of around 15-16 digits.
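A quick round-trip to show the usage; the helpers below mirror the SerializeShort2Dec/DeserializeShort2Dec pair above (scale by 100, round away from zero, store as a 16-bit integer):

```csharp
using System;

class Demo
{
    // Mirrors SerializeShort2Dec above: scale by 100, round, store as Int16
    static byte[] Serialize(double value) =>
        BitConverter.GetBytes((short)Math.Round(value * 100, MidpointRounding.AwayFromZero));

    // Mirrors DeserializeShort2Dec above: read Int16, scale back down
    static double Deserialize(byte[] bytes) =>
        BitConverter.ToInt16(bytes, 0) / 100.0;

    static void Main()
    {
        byte[] bytes = Serialize(-123.45); // 2 bytes instead of 7 ASCII characters
        Console.WriteLine(bytes.Length);   // 2
        Console.WriteLine(Deserialize(bytes) == -123.45); // True
    }
}
```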
To everyone reading this question and the answers, please note that BitConverter uses the machine's native byte order. Round-tripping on the same machine is fine:
//convert to bytes
long number = 123;
byte[] bytes = BitConverter.GetBytes(number);
//convert back to int64; on the same machine this always matches
long newNumber = BitConverter.ToInt64(bytes, 0);
But if the bytes are exchanged with a system of different endianness, the decoded number will differ. The workaround is to check which byte order you run on and normalize, e.g.:
newNumber = BitConverter.IsLittleEndian
    ? BitConverter.ToInt64(bytes, 0)
    : BitConverter.ToInt64(bytes.Reverse().ToArray(), 0); // Reverse() requires using System.Linq
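If the byte order must be fixed regardless of platform, System.Buffers.Binary.BinaryPrimitives (available in .NET Core 2.1 and later) lets you pick it explicitly; a sketch:

```csharp
using System;
using System.Buffers.Binary;

class Demo
{
    static void Main()
    {
        long number = 123;
        Span<byte> bytes = stackalloc byte[8];

        // Always big-endian on the wire, no matter the host's endianness
        BinaryPrimitives.WriteInt64BigEndian(bytes, number);
        long back = BinaryPrimitives.ReadInt64BigEndian(bytes);

        Console.WriteLine(back); // 123
    }
}
```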
long longValue = 9999999999L;
Console.WriteLine("Long value: " + longValue.ToString());
byte[] bytes = BitConverter.GetBytes(longValue);
Console.WriteLine("Byte array value:");
Console.WriteLine(BitConverter.ToString(bytes));
As the other answers have pointed out, you can use BitConverter to get the byte representation of primitive types.
You said that in the current world you inhabit there is an onus on representing these values as concisely as possible, in which case you should consider variable-length encoding (though that document may be a bit abstract).
If you decide this approach is applicable to your case I would suggest looking at how the Protocol Buffers project represents scalar types as some of these types are encoded using variable length encoding which produces shorter output if the data set favours smaller absolute values. (The project is open source under a New BSD license, so you will be able to learn the technique employed from the source repository or even use the source in your own project.)
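To make the idea concrete, here is a minimal sketch of the base-128 varint scheme Protocol Buffers uses for unsigned values: 7 payload bits per byte, with the high bit set on every byte except the last, so small values take a single byte:

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static byte[] WriteVarint(ulong value)
    {
        var bytes = new List<byte>();
        while (value >= 0x80)
        {
            // emit the low 7 bits with the continuation bit set
            bytes.Add((byte)(value | 0x80));
            value >>= 7;
        }
        bytes.Add((byte)value); // last byte: continuation bit clear
        return bytes.ToArray();
    }

    static void Main()
    {
        Console.WriteLine(WriteVarint(127).Length); // 1
        Console.WriteLine(WriteVarint(300).Length); // 2
    }
}
```

For example 300 encodes as the two bytes 0xAC 0x02, the same on-the-wire form protobuf produces.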