Converting from bitstring to integer - C#

I need a function like
int GetIntegerFromBinaryString(string binary, int bitCount)
If binary = "01111111" and bitCount = 8, it should return 127.
If binary = "10000000" and bitCount = 8, it should return -128.
The numbers are stored in two's complement form. How can I do it? Are there any built-in functions that would help, so that I needn't calculate manually?

Pad the string on the left with 0s or 1s (copies of the sign bit) up to the full width of the target type, then do:
int number = Convert.ToInt16("11111111" + "10000000", 2);

Here you go:
static int GetIntegerFromBinaryString(string binary, int bitCount)
{
    if (binary.Length == bitCount && binary[0] == '1')
        return Convert.ToInt32(binary.PadLeft(32, '1'), 2);
    else
        return Convert.ToInt32(binary, 2);
}
Convert it to the two's complement version of a 32-bit number, then simply let the Convert.ToInt32 method do its magic.
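For example, with the inputs from the question (a quick check, assuming the method above is in scope):

Console.WriteLine(GetIntegerFromBinaryString("01111111", 8)); // 127
Console.WriteLine(GetIntegerFromBinaryString("10000000", 8)); // -128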

Related

Convert a variable size hex string to signed number (variable size bytes) in C#

C# provides Convert.ToUInt16("FFFF", 16) / Convert.ToInt16("FFFF", 16) to convert hex strings into unsigned and signed 16-bit integers. These methods work fine for 16/32-bit values, but not for 12-bit values.
I would like to convert a 3-character hex string to a signed integer. How could I do it? I would prefer a solution that takes the number of bits as a parameter to decide signedness:
Convert(string hexString, int fromBase, int size)
Convert("FFF", 16, 12) returns -1.
Convert("FFFF", 16, 16) returns -1.
Convert("FFF", 16, 16) returns 4095.
The easiest way I can think of converting 12 bit signed hex to a signed integer is as follows:
string value = "FFF";
int convertedValue = (Convert.ToInt32(value, 16) << 20) >> 20; // -1
The idea is to shift the value as far left as possible, so that its sign bit lines up with the int's sign bit, then shift right again to the original position. This works because a signed ("arithmetic") right shift replicates the sign bit.
You can generalize this into a method as follows:
int Convert(string value, int fromBase, int bits)
{
    int bitsToShift = 32 - bits;
    // System.Convert is spelled out because this method shadows the class name.
    return (System.Convert.ToInt32(value, fromBase) << bitsToShift) >> bitsToShift;
}
You can cast the result to a short if you want a 16-bit value when working with 12-bit hex strings. Performance will be the same as a dedicated 16-bit version, because the bit-shift operators on short promote to int anyway, and this way you keep the flexibility to specify more than 16 bits if needed without writing another method.
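For example, with the calls from the question (assuming the method above is in scope):

Console.WriteLine(Convert("FFF", 16, 12));  // -1
Console.WriteLine(Convert("FFFF", 16, 16)); // -1
Console.WriteLine(Convert("FFF", 16, 16));  // 4095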
Ah, you'd like to calculate the two's complement for a certain number of bits (12 in your case, but really it should work with anything).
Here's the code in C#, blatantly stolen from the Python example in the Wikipedia article:
int Convert(string hexString, int fromBase, int num_bits)
{
    var i = System.Convert.ToUInt16(hexString, fromBase);
    var mask = 1 << (num_bits - 1);
    return (-(i & mask) + (i & ~mask));
}
Convert("FFF", 16, 12) returns -1
Convert("4095", 10, 12) is also -1 as expected

Convert 24 bit two's complement to int?

I have a C#/Mono app that reads data from an ADC. My read function returns it as a ulong. The data is in two's complement and I need to translate it into an int. E.g.:
0x7FFFFF = + 8,388,607
0x7FFFFE = + 8,388,606
0x000000 = 0
0xFFFFFF = -1
0x800001 = - 8,388,607
0x800000 = - 8,388,608
How can I do this?
const int MODULO = 1 << 24;
const int MAX_VALUE = (1 << 23) - 1;

int transform(int value) {
    if (value > MAX_VALUE) {
        value -= MODULO;
    }
    return value;
}
Explanation: MAX_VALUE is the largest positive value that can be stored in a 24-bit signed int, so if the value is less than or equal to that, it should be returned as-is. Otherwise the value is the unsigned representation of a two's complement negative number. The way two's complement works is that a negative number is written modulo MODULO, which effectively means that MODULO is added; so to convert back to signed we subtract MODULO.
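Since the read function returns a ulong, usage would look something like this (a sketch using the values from the question):

ulong raw = 0xFFFFFF;             // as read from the ADC
int signed = transform((int)raw); // -1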

How do I properly loop through and print bits of an Int, Long, Float, or BigInteger?

I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits in a int) I can figure out what the backfill is, and perhaps some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;
    while (bits != 0)
    {
        bits >>= 1;
        isBitSet = // somehow do an | operation on the first bit.
                   // I'm unsure if it's possible to handle different data types here
                   // or if unsafe code and a PTR is needed
        if (isBitSet)
            sb.Append("1");
        else
            sb.Append("0");
    }
}
Convert.ToString(56, 2).PadLeft(8, '0') returns "00111000".
This is for a byte; it works for an int too, just increase the padding width.
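For example, for a full 32-bit int (same approach, wider padding; num is the value to print):

string bits = Convert.ToString(num, 2).PadLeft(32, '0');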
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after), otherwise you'll miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
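Putting that fix into the method from the question might look like this (a sketch; bits are prepended so the most significant bit ends up first, and the loop stops after the highest set bit):

static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;
    while (bits != 0)
    {
        bool isBitSet = (bits & 1) == 1;    // test the lowest bit first...
        bits >>= 1;                         // ...then shift
        sb.Insert(0, isBitSet ? "1" : "0"); // prepend: lower bits end up last
    }
    return sb.Length > 0 ? sb.ToString() : "0";
}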
But a better option would be to use the static methods of the BitConverter class to get the actual bytes that represent the number in memory as a byte array. The advantage (or disadvantage, depending on your needs) of this method is that it reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while (bitPos < 8 * bytes.Length)
{
    int byteIndex = bitPos / 8;
    int offset = bitPos % 8;
    bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
    // isSet == true if the bit at bitPos is set, false otherwise
    bitPos++;
}
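To turn that loop into a printable bit string (a sketch; bitPos 0 is the lowest bit of the first byte, which on a little-endian machine is the least significant bit of the number):

byte[] bytes = BitConverter.GetBytes(num);
var sb = new StringBuilder();
for (int bitPos = 0; bitPos < 8 * bytes.Length; bitPos++)
{
    bool isSet = (bytes[bitPos / 8] & (1 << (bitPos % 8))) != 0;
    sb.Insert(0, isSet ? "1" : "0"); // prepend so the highest bit prints first
}
Console.WriteLine(sb.ToString());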

Converting raw byte data to float[]

I have this code for converting a byte[] to float[].
public float[] ConvertByteToFloat(byte[] array)
{
    float[] floatArr = new float[array.Length / sizeof(float)];
    int index = 0;
    for (int i = 0; i < floatArr.Length; i++)
    {
        floatArr[i] = BitConverter.ToSingle(array, index);
        index += sizeof(float);
    }
    return floatArr;
}
Problem is, I usually get a NaN result! Why should this be? I checked if there is data in the byte[] and the data seems to be fine. If it helps, an example of the values are:
new byte[] { 231, 255, 235, 255 }
But this returns NaN (Not a Number) after conversion to float. What could be the problem? Are there other better ways of converting byte[] to float[]? I am sure that the values read into the buffer are correct since I compared it with my other program (which performs amplification for a .wav file).
If endianness is the problem, you should check the value of BitConverter.IsLittleEndian to determine if the bytes have to be reversed:
public static float[] ConvertByteToFloat(byte[] array) {
    float[] floatArr = new float[array.Length / 4];
    for (int i = 0; i < floatArr.Length; i++) {
        if (BitConverter.IsLittleEndian) {
            Array.Reverse(array, i * 4, 4);
        }
        floatArr[i] = BitConverter.ToSingle(array, i * 4);
    }
    return floatArr;
}
Isn't the problem that an all-ones (255) exponent field represents NaN (see the exponent section of the Wikipedia article on IEEE 754), so you should indeed get NaN? Try changing the last 255 to something else...
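You can verify this directly (a sketch, assuming a little-endian machine, so the last byte holds the sign and most of the exponent):

// { 231, 255, 235, 255 } read little-endian is the bit pattern 0xFFEBFFE7
int raw = BitConverter.ToInt32(new byte[] { 231, 255, 235, 255 }, 0);
int exponent = (raw >> 23) & 0xFF;
Console.WriteLine(exponent); // 255: all exponent bits set + non-zero mantissa = NaN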
If it's the endianness that is wrong (you are reading big-endian numbers), try these (be aware that they are unsafe, so you have to enable the "Allow unsafe code" option in your project properties):
public static unsafe int ToInt32(byte[] value, int startIndex)
{
    fixed (byte* numRef = &value[startIndex])
    {
        var num = (uint)((numRef[0] << 0x18) | (numRef[1] << 0x10) | (numRef[2] << 0x8) | numRef[3]);
        return (int)num;
    }
}

public static unsafe float ToSingle(byte[] value, int startIndex)
{
    int val = ToInt32(value, startIndex);
    return *(float*)&val;
}
I assume that your byte[] doesn't contain the binary representation of floats, but rather a sequence of Int8s or Int16s. The examples you posted don't look like float-based audio samples (neither NaN nor -2.41E+24 is in the range -1 to 1). While some newer audio formats might support floats outside that range, traditionally audio data consists of signed 8- or 16-bit integer samples.
Another thing you need to be aware of is that often the different channels are interleaved. For example it could contain the first sample for the left, then for the right channel, and then the second sample for both... So you need to separate the channels while parsing.
It's also possible, though uncommon, that the samples are unsigned, in which case you need to remove the offset in the conversion functions.
So you first need to parse the bytes at the current position in the array into an Int8/Int16, and then convert that integer to a float in the range -1 to 1.
If the format is little endian you can use BitConverter. Another possibility that works with both endiannesses is getting two bytes manually and combining them with a bit-shift. I don't remember if little or big endian is common. So you need to try that yourself.
This can be done with functions like the following (I didn't test them; note that C#'s signed 8-bit type is sbyte, and the scaling maps the full integer range onto -1 to 1):

float Int8ToFloat(sbyte i)
{
    return ((i - sbyte.MinValue) * (2f / 0xFF)) - 1f;
}

float Int16ToFloat(short i)
{
    return ((i - short.MinValue) * (2f / 0xFFFF)) - 1f;
}
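Hypothetical usage for 16-bit little-endian samples (assumes array and index are the buffer and current position, as in the question's code):

short sample = BitConverter.ToInt16(array, index); // parse two bytes at the current position
float f = Int16ToFloat(sample);                    // scale into the range -1 to 1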
Depends on how you want to convert the bytes to float. Start out by trying to pin-point what is actually stored in the byte array. Perhaps there is a file format specification? Other transformations in your code?
In the case each byte should be converted to a float between 0 and 255:
public float[] ConvertByteToFloat(byte[] array)
{
    return array.Select(b => (float)b).ToArray();
}
If the byte array contains the binary representation of floats, be aware that there are several representations; if the one stored in your file does not match the C# standard floating point representation (IEEE 754), weird things like this will happen.

easy and fast way to convert an int to binary?

What I am looking for is something like PHP's decbin function in C#. That function converts a decimal number to a string containing its binary representation.
For example, when using decbin(21) it returns 10101 as result.
I found this function which basically does what I want, but maybe there is a better / faster way?
var result = Convert.ToString(number, 2);
– Almost the only use for the (otherwise useless) Convert class.
Most ways will be better and faster than the function that you found; it's not a very good example of how to do the conversion.
The built-in method Convert.ToString(num, base) is an obvious choice, but you can easily write a replacement if you need it to work differently.
This is a simple method where you can specify the length of the binary number:
public static string ToBin(int value, int len) {
    return (len > 1 ? ToBin(value >> 1, len - 1) : null) + "01"[value & 1];
}
It uses recursion, the first part (before the +) calls itself to create the binary representation of the number except for the last digit, and the second part takes care of the last digit.
Example:
Console.WriteLine(ToBin(42, 8));
Output:
00101010
int toBase = 2;
string binary = Convert.ToString(21, toBase); // "10101"
To have the binary value in (at least) a specified number of digits, padded with zeroes:
string bin = Convert.ToString(1234, 2).PadLeft(16, '0');
The Convert.ToString does the conversion to a binary string.
The PadLeft adds zeroes to fill it up to 16 digits.
This is my answer:
static bool[] Dec2Bin(int value)
{
    if (value == 0) return new[] { false };
    var n = (int)(Math.Log(value) / Math.Log(2));
    var a = new bool[n + 1];
    for (var i = n; i >= 0; i--)
    {
        n = (int)Math.Pow(2, i);
        if (n > value) continue;
        a[i] = true;
        value -= n;
    }
    Array.Reverse(a);
    return a;
}
It uses Pow instead of modulo and divide, so I think it's a faster way.
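For example, to print the result as a bit string (assumes the method above; string.Concat plus LINQ's Select does the joining):

bool[] bits = Dec2Bin(21);
Console.WriteLine(string.Concat(bits.Select(b => b ? '1' : '0'))); // "10101"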
