Binary of a number - C#

Is there a simple way to convert decimal/ASCII numbers from 1 to 100 to a 6-bit binary representation?
To be more specific, I'm interested in 6-bit binary ASCII. So I made this to get an Int32.
For example, "u" is changed to 61 instead of 117 in standard decimal ASCII.
Then this 61 needs to become "111101" instead of the traditional "01110101", but after the 48 + 8 math that's not important, as it's now normal binary, just with only 6 bits used.
foreach (char c in partToDecode)
{
    var sum = c - 48;
    if (sum > 40)
    {
        sum = sum - 8;
    }
    // sum now holds the 6-bit code for c
}
Found this, but I don't have a clue how to transpose it to C#:
void binary(unsigned n) {
    unsigned i;
    // walk a single set bit from the most significant position down to the least
    for (i = 1u << 31; i > 0; i >>= 1)
        printf("%u", !!(n & i));
}
. . .
binary(65);
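For reference, that C function transposes to C# almost mechanically. Here is a rough sketch (my transposition, untested; PrintBinary is a placeholder name):
static void PrintBinary(uint n)
{
    // walk a single set bit from the most significant position down
    for (uint i = 1u << 31; i > 0; i >>= 1)
    {
        Console.Write((n & i) != 0 ? '1' : '0');
    }
    Console.WriteLine();
}
// PrintBinary(65); prints 00000000000000000000000001000001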

You can try Convert.ToString, e.g.
int source = 61;
// "111101"
string result = Convert.ToString(source, 2).PadLeft(6, '0');
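Combined with the 48/8 mapping from the question, a minimal sketch for a whole string might look like this (partToDecode and the mapping come from the question; StringBuilder needs using System.Text):
var bits = new StringBuilder();
foreach (char c in partToDecode)
{
    var sum = c - 48;
    if (sum > 40)
    {
        sum = sum - 8;
    }
    // e.g. 'u' -> 61 -> "111101"
    bits.Append(Convert.ToString(sum, 2).PadLeft(6, '0'));
}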

Related

Shift bits of an integer only if the number of bits in its binary representation is greater than a given value

I need to shift the bits of an integer to the right, but only if the number of bits is greater than a certain number. For the example, let's take 10.
If the integer is 818, then the binary representation of the integer is 1100110010; in that case I do nothing.
If the integer is 1842, the binary representation of the integer is 11100110010, which is longer than 10 bits by one, so I need to shift one bit to the right (or set the bit at index 10 to 0, which gives the same result as far as I know - maybe I'm wrong).
What I did until now is make an integer array of the ones and zeros representing the int, but I'm sure there is a more elegant way of doing this:
int y = 818;
string s = Convert.ToString(y, 2);
int[] bits = s.PadLeft(8, '0')
.Select(c => int.Parse(c.ToString()))
.ToArray();
if (bits.Length > 10)
{
    for (int i = 10; i < bits.Length; i++)
    {
        bits[i] = 0;
    }
}
I also tried to do this:
if (bits.Length > 10) { y = y >> (bits.Length - 10); }
but for some reason I got 945 (1110110001) when the input was 1891 (11101100011).
There's no need to do this with strings. 2 to the power of 10 has 11 binary digits, so
if (y >= Math.Pow(2, 10))
{
    y = y >> 1;
}
seems to do what you want.
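If the input can be more than one bit too long, a small loop generalizes this without strings or floating point (my sketch, not part of the original answer):
// shift right until the value fits in 10 bits
while (y >= (1 << 10))
{
    y >>= 1;
}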

How to convert a base-10 number to base 3 in .NET (special case)

I'm looking for a routine in C# that gives me the following output when putting in numbers:
0 - A
1 - B
2 - C
3 - AA
4 - AB
5 - AC
6 - BA
7 - BB
8 - BC
9 - CA
10 - CB
11 - CC
12 - AAA
13 - etc
I'm using letters so that it's not so confusing with zeros.
I've seen other routines, but they will give me BA for the value of 3, and not AA.
Note: the other routines I found were Quickest way to convert a base 10 number to any base in .NET? and http://www.drdobbs.com/architecture-and-design/convert-from-long-to-any-number-base-in/228701167, but as I said, they wouldn't give me exactly what I was looking for.
Converting between number systems is a basic programming task, and the logic doesn't differ from other bases (such as hexadecimal or binary). Please find the code below:
// here you choose which base should be used for the conversion; you wanted 3, so I assigned that value here
int systemNumber = 3;
// pick the number to convert (you can feed a text box value here)
int numberToParse = 5;
// Note below
numberToParse++;
string convertedNumber = "";
List<char> letters = new List<char> { 'A', 'B', 'C' };
// basic algorithm for converting numbers between systems
while (numberToParse > 0)
{
    // Note below
    numberToParse--;
    // prepend the corresponding letter to our "number"
    convertedNumber = letters[numberToParse % systemNumber] + convertedNumber;
    numberToParse = (int)Math.Floor((decimal)numberToParse / systemNumber);
}
// show the converted number
MessageBox.Show(convertedNumber);
NOTE: I didn't read carefully at first and got it wrong. I added two lines to the previous solution, marked with "Note below": an increment and a decrement of the parsed number. The decrement lets A (which is zero, and would otherwise be dropped from the front of a number) be treated as a relevant leading digit. But this shifts the numbers that can be converted so that they begin at 1, and to compensate we increment the number at the start. For example, for input 3 the incremented value 4 yields letters[3 % 3] = 'A' on the first pass (leaving 1) and letters[0 % 3] = 'A' on the second, giving "AA".
Additionally, if you want to use other bases like this, you have to expand the list with letters. Now we have A, B and C, because you wanted a system based on 3. In fact, you can always use the full alphabet:
List<char> letters = new List<char> {'A','B','C', 'D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'};
and only change systemNumber.
Based on code from https://stackoverflow.com/a/182924 the following should work:
private string GetWeirdBase3Value(int input)
{
    int dividend = input + 1;
    string output = String.Empty;
    int modulo;
    while (dividend > 0)
    {
        modulo = (dividend - 1) % 3;
        output = Convert.ToChar('A' + modulo).ToString() + output;
        dividend = (dividend - modulo) / 3;
    }
    return output;
}
The code should hopefully be pretty easy to read. It essentially iteratively calculates character by character until the dividend is reduced to 0.
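As a quick sanity check (my addition), looping over the first 13 inputs reproduces the table from the question:
for (int i = 0; i <= 12; i++)
{
    Console.WriteLine("{0} - {1}", i, GetWeirdBase3Value(i));
}
// 0 - A, 1 - B, 2 - C, 3 - AA, ... 11 - CC, 12 - AAA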

Interlacing two binary numbers together in Arduino C

So I've come across a strange need to 'merge' two numbers:
byte one;
byte two;
into an int three; with the first bit being the first bit of one, the second bit being the first bit of two, the third being the second bit of one and so on.
So with these two numbers:
01001000
00010001
would result in
0001001001000010
A more didactic illustration of the interlacing operation:
byte one = 0 1 0 0 1 0 0 0
byte two = 0 0 0 1 0 0 0 1
result = 00 01 00 10 01 00 00 10
UPDATE: Sorry, I misread your question completely at first.
The following code should do:
public static int InterlacedMerge(byte low, byte high)
{
    var result = 0;
    for (var offset = 0; offset < 8; offset++)
    {
        var mask = 1 << offset;
        result |= ((low & mask) | ((high & mask) << 1)) << offset;
    }
    return result;
}
I am not, by any means, very smart when it comes to bit twiddling, so there is probably a more efficient way to do this. That said, I think this will do the job, but I haven't tested it, so make sure you do.
P.S.: There are some unnecessary parentheses in the code, but I'm never sure about bitwise operator precedence, so I find it easier to read the way it's written.
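A quick check against the example from the question (my addition): 01001000 is 0x48, 00010001 is 0x11, and the expected interlaced result 0001001001000010 is 0x1242:
int three = InterlacedMerge(0x48, 0x11);
Console.WriteLine(Convert.ToString(three, 2).PadLeft(16, '0'));
// prints 0001001001000010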
UPDATE2: Here is the same code a little more verbose to make it easier to follow:
public static int InterlacedMerge(byte low, byte high)
{
    var result = 0;
    for (var offset = 0; offset < 8; offset++)
    {
        // Creates a mask with the current bit set to one: 00000001,
        // 00000010, 00000100, and so on...
        var mask = 1 << offset;
        // Creates a number with the current bit set to low's bit value.
        // All other bits are 0.
        var lowAndMask = low & mask;
        // Creates a number with the current bit set to high's bit value.
        // All other bits are 0.
        var highAndMask = high & mask;
        // Creates a merged pair where the lower bit is low's bit value
        // and the higher bit is high's bit value.
        var mergedPair = lowAndMask | (highAndMask << 1);
        // ORs the merged pair into the result, shifted left offset times.
        // Because we are merging two bits at a time, we need to
        // shift one additional time for each preceding bit.
        result |= mergedPair << offset;
    }
    return result;
}
@inbetween answered while I was writing this; similar solution, different phrasing.
You'll have to write a loop: test one bit in each of the two inputs, set the corresponding bits in the output, and shift. Maybe something like this (untested):
#define TOPBIT 32768
unsigned int out = 0;
for (int i = 0; i < 8; i++) { /* one bit from each input per pass */
    out >>= 1;
    if (value1 & 1) out |= TOPBIT;
    out >>= 1;
    if (value2 & 1) out |= TOPBIT;
    value1 >>= 1;
    value2 >>= 1;
}

How to convert int to char[] without generating garbage in C#

Doubtless this seems like a strange request, given the availability of ToString() and Convert.ToString(), but I need to convert an unsigned integer (i.e. UInt32) to its string representation, but I need to store the answer into a char[].
The reason is that I am working with character arrays for efficiency, and as the target char[] is initialised as a member to char[10] (to hold the string representation of UInt32.MaxValue) on object creation, it should be theoretically possible to do the conversion without generating any garbage (by which I mean without generating any temporary objects in the managed heap.)
Can anyone see a neat way to achieve this?
(I'm working in Framework 3.5SP1 in case that is any way relevant.)
Further to my comment above, I wondered if log10 was too slow, so I wrote a version that doesn't use it.
For four digit numbers this version is about 35% quicker, falling to about 16% quicker for ten digit numbers.
One disadvantage is that it requires space for the full ten digits in the buffer.
I don't swear it doesn't have any bugs!
public static int ToCharArray2(uint value, char[] buffer, int bufferIndex)
{
    const int maxLength = 10;
    if (value == 0)
    {
        buffer[bufferIndex] = '0';
        return 1;
    }
    int startIndex = bufferIndex + maxLength - 1;
    int index = startIndex;
    do
    {
        buffer[index] = (char)('0' + value % 10);
        value /= 10;
        --index;
    }
    while (value != 0);
    int length = startIndex - index;
    if (bufferIndex != index + 1)
    {
        while (index != startIndex)
        {
            ++index;
            buffer[bufferIndex] = buffer[index];
            ++bufferIndex;
        }
    }
    return length;
}
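Usage is the same for all the buffer-writing variants in this thread; a minimal sketch (names illustrative; building a string at the end is only here to show the result - it allocates):
char[] buffer = new char[10]; // large enough for uint.MaxValue
int length = ToCharArray2(123456u, buffer, 0);
Console.WriteLine(new string(buffer, 0, length)); // "123456"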
Update
I should add, I'm using a Pentium 4. More recent processors may calculate transcendental functions faster.
Conclusion
I realised yesterday that I'd made a schoolboy error and run the benchmarks on a debug build. So I ran them again but it didn't actually make much difference. The first column shows the number of digits in the number being converted. The remaining columns show the times in milliseconds to convert 500,000 numbers.
Results for uint:
luc1 arx henk1 luc3 henk2 luc2
1 715 217 966 242 837 244
2 877 420 1056 541 996 447
3 1059 608 1169 835 1040 610
4 1184 795 1282 1116 1162 801
5 1403 969 1405 1396 1279 978
6 1572 1149 1519 1674 1399 1170
7 1740 1335 1648 1952 1518 1352
8 1922 1675 1868 2233 1750 1545
9 2087 1791 2005 2511 1893 1720
10 2263 2103 2139 2797 2012 1985
Results for ulong:
luc1 arx henk1 luc3 henk2 luc2
1 802 280 998 390 856 317
2 912 516 1102 729 954 574
3 1066 746 1243 1060 1056 818
4 1300 1141 1362 1425 1170 1210
5 1557 1363 1503 1742 1306 1436
6 1801 1603 1612 2233 1413 1672
7 2269 1814 1723 2526 1530 1861
8 2208 2142 1920 2886 1634 2149
9 2360 2376 2063 3211 1775 2339
10 2615 2622 2213 3639 2011 2697
11 3048 2996 2513 4199 2244 3011
12 3413 3607 2507 4853 2326 3666
13 3848 3988 2663 5618 2478 4005
14 4298 4525 2748 6302 2558 4637
15 4813 5008 2974 7005 2712 5065
16 5161 5654 3350 7986 2994 5864
17 5997 6155 3241 8329 2999 5968
18 6490 6280 3296 8847 3127 6372
19 6440 6720 3557 9514 3386 6788
20 7045 6616 3790 10135 3703 7268
luc1: Lucero's first function
arx: my function
henk1: Henk's function
luc3: Lucero's third function
henk2: Henk's function without the copy to the char array; i.e. just test the performance of ToString().
luc2: Lucero's second function
The peculiar order is the order they were created in.
I also ran the test without henk1 and henk2 so there would be no garbage collection. The times for the other three functions were nearly identical. Once the benchmark had gone past three digits the memory use was stable: so GC was happening during Henk's functions and didn't have a detrimental effect on the other functions.
Conclusion: just call ToString()
The following code does it, with the following caveat: it does not respect the culture settings, but always outputs normal decimal digits.
public static int ToCharArray(uint value, char[] buffer, int bufferIndex) {
    if (value == 0) {
        buffer[bufferIndex] = '0';
        return 1;
    }
    // number of decimal digits is Floor(Log10(x)) + 1
    // (Ceiling(Log10(x)) alone is off by one for 1 and for exact powers of ten)
    int len = (int)Math.Floor(Math.Log10(value)) + 1;
    for (int i = len - 1; i >= 0; i--) {
        buffer[bufferIndex + i] = (char)('0' + (value % 10));
        value /= 10;
    }
    return len;
}
The returned value is how much of the char[] has been used.
Edit (for arx): the following version avoids the floating-point math and reverses the digits in the buffer in place:
public static int ToCharArray(uint value, char[] buffer, int bufferIndex) {
    if (value == 0) {
        buffer[bufferIndex] = '0';
        return 1;
    }
    int bufferEndIndex = bufferIndex;
    while (value > 0) {
        buffer[bufferEndIndex++] = (char)('0' + (value % 10));
        value /= 10;
    }
    int len = bufferEndIndex - bufferIndex;
    while (--bufferEndIndex > bufferIndex) {
        char ch = buffer[bufferEndIndex];
        buffer[bufferEndIndex] = buffer[bufferIndex];
        buffer[bufferIndex++] = ch;
    }
    return len;
}
And here is yet another variation, which computes the number of digits in a small loop:
public static int ToCharArray(uint value, char[] buffer, int bufferIndex) {
    if (value == 0) {
        buffer[bufferIndex] = '0';
        return 1;
    }
    int len = 1;
    for (uint rem = value / 10; rem > 0; rem /= 10) {
        len++;
    }
    for (int i = len - 1; i >= 0; i--) {
        buffer[bufferIndex + i] = (char)('0' + (value % 10));
        value /= 10;
    }
    return len;
}
I leave the benchmarking to whoever wants to do it... ;)
I'm coming a little late to the party, but I guess you probably cannot get faster and less memory-demanding results than by simply reinterpreting the memory:
[System.Security.SecuritySafeCritical]
public static unsafe char[] GetChars(int value, char[] chars)
{
    // TODO: if this needs to work across machines, it should also use
    // BitConverter.IsLittleEndian to detect endianness and order the bytes appropriately
    fixed (char* numPtr = chars)
        *(int*)numPtr = value;
    return chars;
}

[System.Security.SecuritySafeCritical]
public static unsafe int ToInt32(char[] value)
{
    // TODO: if this needs to work across machines, it should also use
    // BitConverter.IsLittleEndian to detect endianness and order the bytes appropriately
    fixed (char* numPtr = value)
        return *(int*)numPtr;
}
This is just a demonstration of the idea - you'd obviously need to add a check for the char array size and make sure you have the proper byte ordering. You can peek into the reflected helper methods of BitConverter for those checks.
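For illustration (my addition), the round trip looks like this; an int is 4 bytes, which fits exactly in two 2-byte chars:
char[] buf = new char[2];  // 4 bytes of int = 2 UTF-16 chars
GetChars(0x12345678, buf); // reinterpret the int's raw bytes as chars
int back = ToInt32(buf);   // back == 0x12345678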

How to loop through the digits of a binary number?

I have a binary number, 1011011; how can I loop through all of its binary digits one after the other?
I know how to do this for decimal integers by using modulo and division.
int n = 0x5b; // 1011011
Really you should just do this; hexadecimal is in general a much better representation:
printf("%x", n); // this prints "5b"
To get it in binary (with emphasis on easy understanding), try something like this:
// needs <stdio.h>, <stdbool.h> and <limits.h>
printf("%s", "0b"); // common prefix to denote that binary follows
bool leading = true; // we're still processing leading zeroes
// walk from the most significant bit down to the least
for (int i = sizeof(n) * CHAR_BIT - 1; i >= 0; --i) {
    int bit = (n >> i) & 1;
    leading &= !bit; // once we see a 1, we're past the leading zeroes
    if (!leading)
        printf("%d", bit);
}
if (leading) // all bits were zero, so just print 0
    printf("0");
// at this point, for n = 0x5b, we'll have printed 0b1011011
You can use modulo and division by 2 exactly like you would in base 10. You can also use binary operators, but if you already know how to do it in base 10, it's easier to just use division and modulo.
Expanding on Frédéric's and Gabi's answers, all you need to do is realise that the rules in base 2 are no different than in base 10 - you just do your division and modulus with a divisor of 2 instead of 10.
The next step is simply to use number >> 1 instead of number / 2 and number & 0x1 instead of number % 2 to improve performance. Mind you, with modern optimising compilers there's probably no difference...
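A sketch of what that looks like (my illustration; both forms emit the bits least significant first):
int n = 0x5b; // 1011011
while (n > 0)
{
    Console.Write(n % 2); // equivalently: n & 0x1
    n /= 2;               // equivalently: n >>= 1
}
// prints 1101101, i.e. 1011011 read from the right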
Use an AND with increasing powers of two...
In C, at least, you can do something like:
while (val != 0)
{
    printf("%d", val & 0x1); /* prints the bits least significant first */
    val = val >> 1;
}
To expand on @Marco's answer with an example:
uint value = 0x82fa9281;
for (int i = 0; i < 32; i++)
{
    bool set = (value & 0x1) != 0;
    value >>= 1;
    Console.WriteLine("Bit set: {0}", set);
}
What this does is test the lowest bit, and then shift everything right by one bit.
If you're already starting with a string, you could just iterate through each of the characters in the string:
var values = "1011011".Reverse().ToArray(); // least significant digit first
for (var index = 0; index < values.Length; index++) {
    var isSet = values[index] == '1'; // Boolean.Parse only works on "true"/"false", not 0/1
    // do whatever
}
byte input = Convert.ToByte("1011011", 2);
BitArray arr = new BitArray(new[] { input });
foreach (bool value in arr)
{
    // ... (BitArray enumerates the least significant bit first)
}
You can simply loop through every bit. The following C-like code checks the bit at position bitnumber on each pass (you might also want to google endianness):
for (int bitnumber = 0; bitnumber < 8; bitnumber++)
{
    printf("%d", (val & 1 << bitnumber) ? 1 : 0); /* bit 0 first */
}
The code basically writes 1 if the bit is set or 0 if not. We shift the value 1 (which in binary is 1 ;) ) left by bitnumber places and then AND it with val to see whether that bit is set. Simple as that!
So if bitnumber is 2 we simply do this:
00000100 (the value 1 shifted left 2 places)
AND
10110110 (whatever your value is)
=
00000100 = true! - both values have bit 2 set!
