There was a misunderstanding about int number = 010. What I am saying is that the first 0 is not part of the integer, because C# does not treat leading zeros specially, so 010 is just 10.
However, one Stack Overflow user says the first 0 in 010 is an integer.
So could anyone explain in detail why the first 0 in 010 is considered an integer even though it has no value and doesn't represent any mathematical quantity?
Thanks in advance.
When you are writing integer literals, leading zeros don't mean anything.
var a = 10;
var b = 010;
a == b // true
It's not that the "first 0 is not [an] integer", it's that the leading 0 is ignored, because it doesn't contribute any information to the value of the number.
The same goes for binary notation - leading zeros do not add any information to the number (except for maybe the storage size of the value, but that's meta-information).
If you're dealing with strings, that's a whole different ballgame, as "010" has a different character array than "10", even if they parse to the same integer value.
var c = "010";
var d = "10";
c == d // false
int.Parse(c) == int.Parse(d) // true
Ok, so obviously 0 is an integer, and you can do math on a 0; 0*1 = 0, for example. From the computer's standpoint, though, 010 and 10 have exactly the same binary representation, which raises the question: why is the leading 0 ignored? Remember that the literal is read one character at a time, from the first character onward, accumulating the value as it goes, and a leading 0 contributes nothing to that value. The only place the 0 makes a difference is in strings or other data types that actually preserve it.
Binary representation of 10:  1010
Binary representation of 010: 1010
Now, since the value is read one character at a time, consider what a leading 0 (whose binary representation is simply 0) contributes when math is done with it: nothing. 0*100 = 0 and 0/2 = 0; only addition or subtraction moves the value away from zero, e.g. 0+100 = 100. So the parse effectively starts from 0, the leading zero adds nothing, and you are left with the binary representation of whatever the remaining digits produce.
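If you want to see this for yourself, here is a quick sketch (just an illustration, using Convert.ToString with base 2) showing that both literals end up as the same bits:
int a = 10;
int b = 010;                                // leading zero, still decimal ten in C#
Console.WriteLine(Convert.ToString(a, 2));  // 1010
Console.WriteLine(Convert.ToString(b, 2));  // 1010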
I've thought of using an if statement: if the number has already been produced in my line, change the number it was going to produce into a different one. But I must have written it incorrectly, since it does not work.
Let's consider a sequence of random hex "digits" :
f 5 3 0 0 5 e 8 e 8 5 6 f
A naïve approach would try to check whether the previous number is equal to the next one in the sequence and discard it.
That will not work, since only a small subset of the duplicated numbers come in direct sequence.
What you need is to keep track of ALL the numbers already present.
If you store your final sequence in an array, you can check each new element against it, as long as the number of desired elements is low.
For example:
f (first element, nothing to check against, add it)
f 5 (check 5 against index 0-0: not present, add it)
f 5 3 (check 3 against index 0-1: not present, add it)
f 5 3 0 (check 0 against index 0-2: not present, add it)
f 5 3 0 0 (check 0 against index 0-3: found, skip it)
f 5 3 0 5 (check 5 against index 0-3: found, skip it)
f 5 3 0 e (check e against index 0-3: not present, add it)
f 5 3 0 e 8 (check 8 against index 0-4: not present, add it)
and so on. This is basically what a human eye would do, if you cannot keep all the numbers in your head.
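A minimal C# sketch of that scan (the digit count and variable names are just illustrative):
var random = new Random();
var result = new List<char>();
const string hexDigits = "0123456789abcdef";
while (result.Count < 8)                    // e.g. 8 unique hex digits
{
    char candidate = hexDigits[random.Next(hexDigits.Length)];
    if (!result.Contains(candidate))        // scan everything added so far
        result.Add(candidate);              // not present: add it
                                            // present: skip it and draw again
}
Console.WriteLine(string.Concat(result));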
If the sequence is large, scanning the output array will quickly become too inefficient and slow.
There are ways to optimize the check by using hash sets/maps (see Remove duplicate values from JS array and Selecting Unique Elements From a List in C#).
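For example, the same idea with a HashSet (again just a sketch), where the "already present?" check is effectively O(1) instead of a scan over the whole output:
var random = new Random();
var seen = new HashSet<char>();
var result = new List<char>();
const string hexDigits = "0123456789abcdef";
while (result.Count < 8)
{
    char candidate = hexDigits[random.Next(hexDigits.Length)];
    if (seen.Add(candidate))    // Add returns false if the digit is already there
        result.Add(candidate);
}
Console.WriteLine(string.Concat(result));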
If the number of possible elements is low, you might want a "shuffle" to get your output (see Random shuffling of an array).
Can you please explain what's going on in this code (how is it multiplying by 4, as the comment in the code says)?
public static int GetNextSize(int i)
{
    // multiply it by four and make sure it is positive
    return i > 0 ? i << 2 : ~(i << 2) + 1;
}
Is there any better or cleaner way to do this, or is this the optimal one?
Also, any practical situations where this (or this type of) code will be helpful?
Thanks.
The ? is the ternary operator, effectively a returnable if/else statement:
if (i > 0)
    return i multiplied by four (bit-shift to the left by two)
else
    return negative i multiplied by four
The ~x + 1 flips all the bits (one's complement) and adds one, which is the two's-complement negation, effectively negating the number. The x here happens to be i << 2.
It looks like some optimized C-like code to me.
For #2, are you referring to the logical OR operator?
a || b=c
Since a is evaluated first, the total expression will be true if a is true, so b = c is only evaluated if a is false. This effectively means: if not a, then b = c.
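A small C# illustration of that short-circuiting (assuming a, b and c are bools):
bool a = true, b = false, c = true;
// With ||, the right-hand side runs only when the left-hand side is false,
// so the assignment to b never executes while a is true.
bool result = a || (b = c);
Console.WriteLine(b);   // false: b was never assigned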
If i is positive:
it will shift the bits two places to the left, which is effectively the same as multiplying by 4.
If i is not positive (negative or zero), it will again multiply by 4, then invert all the bits (that's what ~ does) and add 1; that flip-and-add-one is two's-complement negation, which turns the (negative) shifted value into a positive one.
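To see that concretely, assuming the GetNextSize method from the question is in scope:
Console.WriteLine(GetNextSize(3));    // 3 << 2 = 12
Console.WriteLine(GetNextSize(-3));   // -3 << 2 = -12, then ~(-12) + 1 = 12
Console.WriteLine(GetNextSize(0));    // ~0 + 1 = 0
As for a cleaner way: arguably Math.Abs(i) * 4 expresses the same intent more readably, with the difference that Math.Abs throws for int.MinValue rather than overflowing silently.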
Problem:
I am trying to convert a byte[] to a single. I've tried using BitConverter.ToSingle() and it doesn't give the desired result.
The Content of the Array is:
0
0
0
100
The desired output is 100; I know an Int would work for this, but I just chose that number for easy debugging. I have also tried moving the 100 into every possible position in the array, with no luck.
My output always looks like 9.3345534545E
or something similar with different digits.
Any Ideas?
IEEE-754 types (Single and Double - float and double in C#) do not have a trivial binary representation, so 0x00 0x00 0x00 0x64 does not represent the value 0x64 (100 in decimal).
The actual raw, binary representation of IEEE-754 values is rather complicated, and building it yourself to perform the conversion from integer to IEEE-754 really isn't worth the effort (unless it's a learning exercise). It's best to let the library/platform or even the processor do it for you:
Because your value is an integer value, you need to convert it into Int32 first, and then use the Convert class (or a simple compiler cast which will perform the type conversion under-the-hood).
Int32 val = BitConverter.ToInt32( yourArray, 0 ); // assuming it's little-endian
Single s1 = (Single)val;
Single s2 = Convert.ToSingle( val );
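For the exact array from the question ({ 0, 0, 0, 100 }, expected value 100), a sketch might look like this; note that BitConverter follows the machine's byte order, so on a little-endian CPU the bytes need to be reversed first:
byte[] data = { 0, 0, 0, 100 };            // big-endian bytes of the value 100
if (BitConverter.IsLittleEndian)           // most desktop CPUs
    Array.Reverse(data);
int val = BitConverter.ToInt32(data, 0);   // 100
Single s = val;                            // implicit int -> float conversion
Console.WriteLine(s);                      // 100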
So a common question you see on SO is how to convert between type x and type z but I want to know how does the computer do this?
For example, how does it take an int out of a string?
My theory is that a string is a char array at its core, so it goes index by index and checks each character against the ASCII table. If it falls within the range of digits, it's added to the integer. Does it happen at an even lower level than this? Is there bitmasking taking place? How does this happen?
Disclaimer: not for school, just curious.
This question can only be answered when restricting the types to a somewhat manageable subset. To do so, let us consider the three interesting types: strings, integers and floats.
The only other truly different basic type is a pointer, which is not usually converted in any meaningful manner (even the NULL check is not actually a conversion, but a special built in semantic for the 0 literal).
int to float and vice versa
Converting integers to floats and vice versa is simple, since modern CPUs provide an instruction to deal with that case directly.
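In C#, for example, this is just a cast; the compiler emits a single IL conversion instruction, which the JIT maps to the corresponding native conversion instruction:
int i = 42;
float f = (float)i;   // IL: conv.r4 — integer to float
int back = (int)f;    // IL: conv.i4 — float to integer (truncates toward zero)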
string to integer type
Conversion from string to integer is fairly simple, because no precision issues are involved. Indeed, any string is just a sequence of code points (which may or may not be represented by char or wchar_t), and the common method to work through this goes along the lines of the following:
unsigned result = 0;
for (size_t i = 0; i < str.size(); ++i) {
    unsigned c = str[i] - static_cast<unsigned>('0');
    if (c > 9) {                    // not a decimal digit
        if (i) return result;       // ok: integer over
        else throw "no integer found";
    }
    if ((std::numeric_limits<unsigned>::max() - c) / 10 < result)  // needs <limits>
        throw "integer overflow";
    result = result * 10 + c;
}
If you wish to consider things like additional bases (e.g. strings like 0x123 as a hexadecimal representation) or negative values, it obviously requires a few more tests, but the basic algorithm stays the same.
int to string
As expected, this basically works in reverse: An implementation will always take the remainder of a division by 10 and then divide by 10. Since this will give the number in reverse, one can either print into a buffer from the back or reverse the result again.
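A small C# sketch of that divide-by-10 loop (non-negative values only; a real implementation also handles the sign and int.MinValue):
static string IntToString(int value)
{
    if (value == 0) return "0";
    var buffer = new char[10];                     // int.MaxValue has 10 decimal digits
    int pos = buffer.Length;
    while (value > 0)
    {
        buffer[--pos] = (char)('0' + value % 10);  // take the last digit...
        value /= 10;                               // ...then drop it
    }
    return new string(buffer, pos, buffer.Length - pos);
}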
string to floating point type
Parsing strings to a double (or float) is significantly more complex, since the conversion is supposed to happen with the highest possible accuracy. The basic idea here is to read the number as a string of digits while only remembering where the dot was and what the exponent is. Then, you would assemble the mantissa from this information (which basically is a 53 bit integer) and the exponent and assemble the actual bit pattern for the resulting number. This would then be copied into your target value.
While this approach works perfectly fine, there are literally dozens of different approaches in use, all varying in performance, correctness and robustness.
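As a very rough C# sketch of the basic idea only — this naive version accumulates in a double and completely ignores the careful rounding work that real implementations do:
static double NaiveParseDouble(string s)
{
    int i = 0;
    bool negative = s[i] == '-';
    if (s[i] == '-' || s[i] == '+') i++;
    double mantissa = 0;
    int exponent = 0;                     // decimal exponent collected along the way
    bool afterDot = false;
    for (; i < s.Length && (char.IsDigit(s[i]) || s[i] == '.'); i++)
    {
        if (s[i] == '.') { afterDot = true; continue; }
        mantissa = mantissa * 10 + (s[i] - '0');
        if (afterDot) exponent--;         // digits after the dot scale the value down
    }
    if (i < s.Length && (s[i] == 'e' || s[i] == 'E'))
        exponent += int.Parse(s.Substring(i + 1));   // explicit exponent, e.g. "1.5e3"
    double value = mantissa * Math.Pow(10, exponent);
    return negative ? -value : value;
}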
Actual implementations
Note that actual implementations may have to do one more important (and horribly ugly) thing, which is locale. For example, in the German locale the "," is the decimal point and not the thousands separator, so pi is roughly "3,1415926535".
Perl string to double
TCL string to double
David M. Gay AT&T Paper string to double, double to string and source code
Boost Spirit
I'm unable to think through this one. I think it's one of those moments where the answer is really simple, but I'm too close to the problem to see the solution.
I have a distance that's changeable and an object that has to traverse this distance in the same time regardless of length.
The start of the distance is valued as 0 and the end of the distance is valued as 1.
Obviously the incrementation will be smaller the larger the length to keep the times equal.
What formula could I use to calculate the 0-1 incrementation but keep the time taken equal?
I know it seems an overly complicated way to increment, but it's part of a third-party plugin I've been given.
I'm coding in C#.
Thanks.
[EDIT]
Sorry I wasn't very clear.
For incrementation the start point is always 0 and the end point is always 1.
So the object can move by += 0.5, for example.
So when the length increases from, say, 30 to 65, it should take longer to increment from 0 to 1.
So you are looking for a way to have a number x in the range [0,1] that maps to some y in some arbitrary range [min,max], and are looking for the increment value a such that if x -> y then x + a -> y + b for some constant b? If I have understood your question correctly, then your a value should be:
a = b / (max - min)
Note: make sure to format this correctly for C#; in particular, be sure to cast where necessary and that sort of thing.
This is basically saying that a should be the fraction of the range that b spans: if b is half the range from min to max, then a should be 0.5, and if b spans one fifth of the range, a should be 0.2.
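In C#, with purely illustrative names (b is how far the object moves per update, min/max describe the changeable distance), that would look something like:
float min = 0f, max = 65f;     // the changeable distance from the question
float b = 0.5f;                // units moved per update
float a = b / (max - min);     // normalised 0-1 increment per update
float t = 0f;                  // position along the path, 0 = start, 1 = end
while (t < 1f)
    t += a;                    // a longer distance gives a smaller step,
                               // so it takes more updates to reach 1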