Funny Division in C and Arduino - c#

TL;DR
Why does
printf("%d\n", 042/10 );
return 3 and not 4?
Hey, so I actually had this doubt while I was in the Arduino IDE, but then just to verify I tried it in another C compiler. The code in question is here:
Serial.println(42/10);
This works fine and displays 4. Here is the funny bit:
Serial.println(042/10);
This returns 3.
This seems very fundamental, but I couldn't find a suitable post answering it. Thanks in advance!

The leading zero means it’s octal. 042 is equal to 34 in decimal. And 34/10 is indeed 3 using int math.

The 042 from printf("%d\n", 042/10 ); is interpreted as an octal value:
042 (oct) = 34 (dec)
so your division is actually: (int)34/10 = 3

Serial.println(042/010); will output 4
This stems from the earliest days of C, when octal numbers were at least a bit more common than they are today. Once the syntax was defined, it could never be changed without breaking existing code.
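If you want to verify the arithmetic from the managed side, here is a minimal sketch in C# (C# has no octal literals, so the octal value is parsed from a string; Convert.ToInt32 with base 8 does the conversion):
int octal42 = Convert.ToInt32("42", 8); // "42" read as base 8 is 34 in decimal, the same value C's 042 denotes
Console.WriteLine(octal42);      // 34
Console.WriteLine(42 / 10);      // 4 - decimal literal, integer division
Console.WriteLine(octal42 / 10); // 3 - octal 042, integer division truncates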

Related

C# So how does this hexadecimal stuff work?

I'm doing some entry level programming challenges at codefights.com and I came across the following question. The link is to a blog that has the answer, but it includes the question in it as well. If only it had an explanation...
https://codefightssolver.wordpress.com/2016/10/19/swap-adjacent-bits/
My concern is with the line of code (it is the only line of code) below.
return (((n & 0x2AAAAAAA) >> 1) | ((n & 0x15555555) << 1)) ;
Specifically, I'm struggling to find some decent info on how the "0x2AAAAAAA" and "0x15555555" work, so I have a few dumb questions. I know they represent binary values of 10101010... and 01010101... respectively.
1. I've messed around some and found out that the number of 5s and As seems, as far as I can tell, to correspond loosely to the bit size, but how?
2. Why As? Why 5s?
3. Why the 2 and the 1 before the As and 5s?
4. Anything else I should know about this? Does anyone know a cool blog post or website that explains some of this in more detail?
0x2AAAAAAA is 00101010101010101010101010101010 in 32-bit binary,
0x15555555 is 00010101010101010101010101010101 in 32-bit binary.
Note that the problem specifies the constraint 0 ≤ n < 2^30. For this reason the highest two bits are always 00.
The two hex numbers have been "built" starting from their binary representation, that has a particular property (that we will see in the next paragraph).
Now... We can say that, given the constraint, x & 0x2AAAAAAA will return the even bits of x (if we count the bits as first, second, third... the second bit is even), while x & 0x15555555 will return the odd bits of x. By using << 1 and >> 1 you move them by one position. By using | (or) you re-merge them.
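To see the whole thing in one place, here is a rough C# sketch of the same one-liner (the method name SwapAdjacentBits is just illustrative):
// Swap each pair of adjacent bits: bit 0 with bit 1, bit 2 with bit 3, and so on.
static int SwapAdjacentBits(int n)
{
    int evenPositionBits = n & 0x2AAAAAAA; // keeps bits 1, 3, 5, ... (mask 0010 1010 ...)
    int oddPositionBits  = n & 0x15555555; // keeps bits 0, 2, 4, ... (mask 0001 0101 ...)
    return (evenPositionBits >> 1) | (oddPositionBits << 1);
}

Console.WriteLine(SwapAdjacentBits(13)); // 13 is 1101 in binary; swapping each pair gives 1110, i.e. 14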
0x2AAAAAAA is used to get 30 bits, which is the constraint.
Constraints:
0 ≤ n < 2^30.
0x15555555 also represents 30 bits, with the bits the opposite of the other number's.
I would start with the binary number (101010101010101010101010101010) in the programmer calculator and switch to hex to show the number in hexadecimal.
You can also use a binary literal like 0b101010101010101010101010101010, if you like, depending on the language.
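If you would rather check it in code than in the programmer calculator, something like this works (binary literals and digit separators need C# 7 or later):
const int evenMask = 0b10_1010_1010_1010_1010_1010_1010_1010; // the same 30-bit pattern as 0x2AAAAAAA

Console.WriteLine(evenMask == 0x2AAAAAAA);          // True
Console.WriteLine(Convert.ToString(0x2AAAAAAA, 2)); // 101010101010101010101010101010
Console.WriteLine(Convert.ToString(0x15555555, 2)); // 10101010101010101010101010101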

Math.Ceiling() on expression when dividing and multiplying by same number

I have a little trouble with Math.Ceiling() in C#.
I call it on the result of a division by a number followed by a multiplication by the same number, e.g. 20 000 / 184 * 184. I would expect that result to be 20 000 but it is 20 001. Are there any possible ways to avoid this behavior when trying to round up the value?
Thank you in advance
When running the code you supplied we have the following
twentyThousand/oneEightyFour * oneEightyFour
The answer is 20000.000000000000000000000001
Hence when you do the ceiling we have 20001.
Per the following article, I think the result is due to an inaccuracy introduced when performing the division, which yields 108.69565217391304347826086957, and as Jon stated:
As a very broad rule of thumb, if you end up seeing a very long string representation (ie most of the 28/29 digits are non-zero) then chances are you've got some inaccuracy along the way.
http://csharpindepth.com/Articles/General/Decimal.aspx
As light pointed out in the comments, you shouldn't be getting 20001 at all.
20000 / 184 would yield 108 (integer division), which would then give you 19872 when multiplied by 184.
Somewhere you are doing something other than what you posted. Where is Math.Ceiling() even called?
I will say, if the numbers are hard-coded, you can put a decimal point in the code and it will be treated as such. If you are using variables that represent numbers, be sure they are declared as a non-integer type (decimal, double, float), depending on the accuracy needed.
Console.WriteLine(20000 / 184 * 184); // 19872
Console.WriteLine((20000.0 / 184.0 * 184.0)); // 20000
Are there any possible ways to avoid this behavior when trying to round up the value?
In this particular case you can avoid the problem by multiplying first, then dividing:
result = (20000m * 184m) / 184m;
Since the precision is lost in the division, multiplying first prevents that imprecision from getting exaggerated when you multiply.
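A minimal sketch of both orderings, assuming the operands are decimal as in the question:
decimal n = 20000m;
decimal d = 184m;

decimal divideFirst   = n / d * d; // 20000.000000000000000000000001 - the rounded quotient gets scaled back up
decimal multiplyFirst = n * d / d; // 20000 - the multiplication is exact, so the final division is too

Console.WriteLine(Math.Ceiling(divideFirst));   // 20001
Console.WriteLine(Math.Ceiling(multiplyFirst)); // 20000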

Confused by calculation

I am porting an application from VB6 to C#. I have found one calculation in particular that is causing me an issue. It basically boils down to
operandA * .01 / operandB
My concrete example is:
1 * .01 / 12
In VB6 (and Windows Calculator) I get 8.3333333333e-4.
However, in C# (and every other Calculator) I get .00083333.
The second number makes sense to me, but I have to replicate the first result and I want to understand it, so why do VB6 and Windows Calculator produce an odd result?
8.3333333333e-4 is the same as 0.00083333. It equates to:
8.3333333333 * 10^-4
= 8.3333333333 times ( ten to the power of -4 )
= 8.3333333333 * 0.0001
= 0.00083333333
N.b. After rounding
The e stands for exponent and the relevant Wikipedia article is http://en.wikipedia.org/wiki/Exponentiation
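The two outputs really are the same double, just formatted differently; a quick way to see both forms in C# (the "E10" format string here is only for illustration):
double result = 1 * .01 / 12;

Console.WriteLine(result);                 // 0.000833333... (plain decimal form)
Console.WriteLine(result.ToString("E10")); // 8.3333333333E-004 (exponent form, like VB6 and Windows Calculator)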

How are byte values obtained (XOR Example)

I have been reading this page here from MSDN regarding the XOR operator and its usage.
Half way down the page I read the following code:
// Bitwise exclusive-OR of 10 (2) and 11 (3) returns 01 (1).
Console.WriteLine("Bitwise result: {0}", Convert.ToString(0x2 ^ 0x3, 2));
Now, I cannot figure out for the life of me how 10 equates to 2, or how 11 equates to 3. Would anyone mind explaining this in simple terms so that I can clearly understand the concept here?
Thank you,
Evan
The "10" and "11" in the text are simply binary representations of numbers. So "10" in binary equals "2" in decimal, and "11" in binary equals "3" in decimal.
It's not very clear though, I admit...
(If that doesn't help, please comment saying what else is confusing. I suspect this is enough though.)
10 in binary is a 2 in decimal,
11 in binary is a 3
(10)₂ = 1*2^1 + 0*2^0 = 2
(11)₂ = 1*2^1 + 1*2^0 = 3
10 XOR 11 = 01
  10
^ 11
----
  01
Exclusive means there has to be exactly one '1' to get a '1'; in all other cases, you get a 0.
The issue here is one of base conversion. In base 2 (or binary) we represent a number as a series of zeros and ones. Take a look at http://en.wikipedia.org/wiki/Binary_numeral_system
It's showing you in binary that hexadecimal (0x2) equals 00000010 and (0x3) equals 00000011.
Therefore in XOR that is
00000010
00000011
--------
00000001
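You can reproduce the MSDN line and print each operand in binary yourself; a small sketch (Convert.ToString with base 2 drops leading zeros):
int a = 0x2; // 10 in binary
int b = 0x3; // 11 in binary

Console.WriteLine(Convert.ToString(a, 2));     // 10
Console.WriteLine(Convert.ToString(b, 2));     // 11
Console.WriteLine(Convert.ToString(a ^ b, 2)); // 1 (that is, 01 - decimal 1)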

Increment forever and you get -2147483648?

For a clever and complicated reason that I don't really want to explain (because it involves making a timer in an extremely ugly and hacky way), I wrote some C# code sort of like this:
int i = 0;
while (i >= 0) i++; //Should increment forever
Console.Write(i);
I expected the program to hang forever or crash or something, but, to my surprise, after waiting for about 20 seconds or so, I get this output:
-2147483648
Well, programming has taught me many things, but I still cannot grasp why continually incrementing a number causes it to eventually be negative...what's going on here?
In C#, the built-in integers are represented by a sequence of bit values of a predefined length. For the basic int datatype that length is 32 bits. Since 32 bits can only represent 4,294,967,296 different possible values (since that is 2^32), clearly your code will not loop forever with continually increasing values.
Since int can hold both positive and negative numbers, the sign of the number must be encoded somehow. This is done with the first bit. If the first bit is 1, then the number is negative.
Here are the int values laid out on a number-line in hexadecimal and decimal:
Hexadecimal Decimal
----------- -----------
0x80000000 -2147483648
0x80000001 -2147483647
0x80000002 -2147483646
... ...
0xFFFFFFFE -2
0xFFFFFFFF -1
0x00000000 0
0x00000001 1
0x00000002 2
... ...
0x7FFFFFFE 2147483646
0x7FFFFFFF 2147483647
As you can see from this chart, the bits that represent the smallest possible value are what you would get by adding one to the largest possible value, while ignoring the interpretation of the sign bit. When a signed number is added in this way, it is called "integer overflow". Whether or not an integer overflow is allowed or treated as an error is configurable with the checked and unchecked statements in C#. The default is unchecked, which is why no error occurred, but you got that crazy small number in your program.
This representation is called 2's Complement.
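A minimal sketch of the difference the checked and unchecked contexts make (assuming the project uses the default, unchecked, setting):
int max = int.MaxValue; // 2147483647 (0x7FFFFFFF)

unchecked
{
    Console.WriteLine(max + 1); // -2147483648 (0x80000000): the value silently wraps around
}

try
{
    checked
    {
        Console.WriteLine(max + 1); // never printed
    }
}
catch (OverflowException)
{
    Console.WriteLine("checked arithmetic threw an OverflowException instead of wrapping");
}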
The value is overflowing the positive range of 32-bit integer storage: past 0x7FFFFFFF it wraps to 0x80000000, which is -2147483648 in decimal. This happens because only 31 bits are left for the magnitude once one bit is used for the sign.
It's been pointed out elsewhere that if you use an unsigned int you'll get different behaviour, as the 32nd bit isn't being used to store the sign of the number.
What you are experiencing is Integer Overflow.
In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is larger than can be represented within the available storage space. For instance, adding 1 to the largest value that can be represented constitutes an integer overflow. The most common result in these cases is for the least significant representable bits of the result to be stored (the result is said to wrap).
int is a signed integer. Once past the max value, it starts from the min value (large negative) and marches towards 0.
Try again with uint and see what is different.
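For reference, a sketch of what changes with uint; note that the original loop would never terminate with an unsigned type, because u >= 0 is always true:
uint u = uint.MaxValue; // 4294967295, all 32 bits set - there is no sign bit

unchecked
{
    u++; // wraps around to 0 rather than to a negative value
}

Console.WriteLine(u); // 0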
Try it like this:
int i = 0;
while (i >= 0)
    checked { i++; } // Should increment forever
Console.Write(i);
And explain the results
What the others have been saying. If you want something that can go on forever (and I won't remark on why you would need something of this sort), use the BigInteger class in the System.Numerics namespace (.NET 4+). You can do the comparison to an arbitrarily large number.
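A short sketch of that suggestion (BigInteger lives in System.Numerics and never overflows; it just keeps growing):
using System.Numerics;

BigInteger big = int.MaxValue;
big += 1; // no wrap-around

Console.WriteLine(big);                // 2147483648
Console.WriteLine(big > int.MaxValue); // True - the comparison works against arbitrarily large values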
It has a lot to do with how positive numbers and negative numbers are really stored in memory (at bit level).
If you're interested, check this video: Programming Paradigms at 12:25 and onwards. Pretty interesting and you will understand why your code behaves the way it does.
This happens because when the variable "i" reaches the maximum int limit, the next value will be a negative one.
I hope this does not sound like smart-ass advice, because it's well meant, and not meant to be snarky.
What you are asking is for us to describe what is pretty fundamental behaviour for integer datatypes.
There is a reason why datatypes are covered in the 1st year of any computer science course: it's really very fundamental to understanding how and where things can go wrong (you can probably already see how the behaviour above, if unexpected, causes unexpected behaviour, i.e. a bug in your application).
My advice is to get hold of the reading material for 1st year computer science + Knuth's seminal work "The Art of Computer Programming", and for ~ $500 you will have everything you need to become a great programmer, much cheaper than a whole Uni course ;-)
