Number representation in a common command? - c#

I have a line of code in a piece of C# I'm analyzing.
`random.Next(0xf4240, 0x98967f).ToString();`
I know the command line is generating a number between the specified ranges and returning it as a string. What's a little odd to me is the `0xf` and the `0x#####f`.
I looked up that 0xf is supposed to return a nibble, but I'd really like to get an idea of what the raw values would be. Any help would be great.
Thanks.

The prefix 0x is how you specify hexadecimal values in C# and a number of other languages. It's my belief that hexadecimal can only specify integer values, although I may be wrong.
In your case, 0xf4240 is the same as F4240 in hexadecimal, or 1,000,000 in decimal. 0x98967f is the same as 9,999,999 in decimal.
One more thing: this code was apparently obfuscated on purpose, which is bad. There seems to be no need to provide those values in hexadecimal.

The command line isn't generating anything - your C# application (which happens to output to the command line) is calculating a pseudo-random number between 1,000,000 (inclusive) and 9,999,999 (exclusive); you are passing in the hex representations of those bounds.

In C#, 0x is used as a prefix to represent hexadecimal integer literals. See the spec.
In your case, f4240 and 98967f are just two integers written in the hexadecimal system.
Update:
As @codesparkle has stated, they represent 1000000 and 9999999 respectively.
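For reference, a quick sketch (assuming random is a System.Random instance, as in the question) that prints the raw values:
using System;

var random = new Random();
Console.WriteLine(0xf4240);  // prints 1000000
Console.WriteLine(0x98967f); // prints 9999999
Console.WriteLine(random.Next(0xf4240, 0x98967f).ToString()); // a 7-digit value in [1000000, 9999999)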

Related

Why does C# return "?"?

So I just started experimenting with C#. I have one line of code and the output is a "?".
Code: Console.WriteLine(Math.Pow(Math.Exp(1200), 0.005d));
Output: ?
I am using Visual Studio and it also says exited with code 0.
It should output 403.4287934927351. I also tried it out with GeoGebra and it's correct as shown in the image, so it's not infinity.
The C# double type (which the Math.Exp function returns) is a fixed-size type (64 bits), so it cannot represent arbitrarily large numbers. The largest number it can represent is the double.MaxValue constant, which is on the order of 10^308 - less than what you are trying to compute (e^1200 is roughly 10^521).
When the result of a computation exceeds the maximum representable value, a special "number" representing infinity is returned. So
double x = Math.Exp(1200); // cannot represent this with double type
bool isInfinite = double.IsInfinity(x); // true
Once you have this "infinity", all further computations involving it just return infinity back; there is nothing else they can do. So the whole expression Math.Pow(Math.Exp(1200), 0.005d) returns "infinity".
When you try to write the result to the console, it gets converted to a string. The rules for converting this infinity to a string are as follows:
Regardless of the format string, if the value of a Single or Double
floating-point type is positive infinity, negative infinity, or not a
number (NaN), the formatted string is the value of the respective
PositiveInfinitySymbol, NegativeInfinitySymbol, or NaNSymbol property
that is specified by the currently applicable NumberFormatInfo object.
In your current culture, PositiveInfinitySymbol is likely "∞", but your console encoding likely cannot represent this symbol, so it outputs "?". You can change the console encoding to UTF-8 like this:
Console.OutputEncoding = Encoding.UTF8;
And then it will show "∞" correctly.
There is no framework-provided type for working with arbitrarily sized rational numbers, as far as I know. For integers there is the BigInteger type, though.
In this specific case you can do fine with just double, because (e^a)^b = e^(a*b), so you can compute the same thing with:
Console.WriteLine(Math.Exp(1200 * 0.005d));
// outputs 403.4287934927351
Now there are no intermediate results exceeding the capacity of double, so it works fine.
For cases where that isn't possible, there are third-party libraries that allow working with arbitrarily large rationals.
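Putting the pieces together, a minimal sketch; the commented outputs assume the UTF-8 fix above and a culture whose PositiveInfinitySymbol is "∞":
using System;
using System.Text;

double x = Math.Exp(1200);                  // exceeds double.MaxValue, so x is double.PositiveInfinity
Console.WriteLine(double.IsInfinity(x));    // True
Console.OutputEncoding = Encoding.UTF8;     // so the console can render "∞"
Console.WriteLine(Math.Pow(x, 0.005d));     // ∞ (infinity propagates through Math.Pow)
Console.WriteLine(Math.Exp(1200 * 0.005d)); // 403.4287934927351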

Why does formatting a Double using the "G" standard format string not return the full string?

I hope I have researched this enough that my premise is not totally off base. If so, then the mathematicians out there can set me straight.
My premise is that a Double value such as 12.5 should be rounded to 5 significant figures (NOT decimal places) as 12.500. Instead, using the following C# code, I get 12.5:
Double d = 12.5;
Console.WriteLine(d.ToString("G5"));
I came across this post from 2007 which seems to echo my problem. In fact, I am using those example numbers just to keep things consistent.
My goal here is to better understand the following:
Is my understanding of sig figs mathematically correct? I.e., is my expectation reasonable, or is the output "12.5" somehow correct?
Is this really a (very long-lived) bug in the framework? If so, can/will it be fixed?
Assuming it is a bug, what might I do about it now? Write a hack to determine how many sig figs you actually got back and then pad it? Roll my own code to do what the "G" format string was supposed to do? I have come across examples of this on SO already, so perhaps that is evidence that a clean option does not exist.
Additionally, I do realize that the storage issues with Double might negatively impact the rounding aspect of this problem, but for now, I am only concerned with the issue of more sig figs than original digits.
EDIT: I have tested this up to framework 4.5.
See the documentation on the "G" format specifier. It clearly states:
The result contains a decimal point if required, and trailing zeros after the decimal point are omitted.
By default (with no precision specifier), a Double value is rounded to 15 significant digits, not five.
Reference: The General ("G") format specifier
Rounding a number to any number of significant figures doesn't mean that the formatted string has to contain that number of digits. If the value is rounded to 12.5000000000000 then it will be formatted into "12.5" because that is the most compact way to represent the value.
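If the goal is a fixed number of digits after the decimal point rather than compact significant-figure output, a fixed-point ("F") or custom format string will keep the trailing zeros; a small sketch (outputs assume a culture with "." as the decimal separator):
double d = 12.5;
Console.WriteLine(d.ToString("G5"));    // 12.5 - trailing zeros omitted
Console.WriteLine(d.ToString("F3"));    // 12.500 - fixed three decimal places
Console.WriteLine(d.ToString("0.000")); // 12.500 - custom format string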

How can I convert a byte into a string of binary digits in C#?

I am trying to convert a byte into a string of binary digits - not encoded, just as it is, i.e. if the byte = 00110101 then the string would be "00110101".
I have searched high and low, and everything I find is either relating to getting the ASCII or UTF or whatever value of the byte, or converting a character into a byte, neither of which is what I want. Just doing ToString() gives me the int value.
Maybe I'm missing something obvious, and I understand this is a fairly rare case. It must be possible without some crazy loop which iterates through, surely?
(I'm sending the string over bluetoothLE to a rotating shop display cabinet to program it)
edit: here's some code:
DateTime updateTime = DateTime.Now;
byte dow = (byte)updateTime.DayOfWeek;
Debug.WriteLine(dow.ToString());
If I break and inspect 'dow', it shows as '3' (it's Wednesday), not 00000011 as I would have expected. I just tried BitConverter as suggested below, but that still returns '3'.
You want to use Convert.ToString(), but specify a base - in this case, because it's binary, base 2.
However, you'll also need to pad to the number of bits, because leading zeros are cut off, so 00000001 would otherwise end up as 1.
Try this:
Convert.ToString(theByte, 2).PadLeft(8, '0');
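Applied to the DayOfWeek example from the question, this would be roughly:
byte dow = (byte)DateTime.Now.DayOfWeek;
string bits = Convert.ToString(dow, 2).PadLeft(8, '0');
Console.WriteLine(bits); // "00000011" when DayOfWeek is Wednesday (3)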

Happy 3501.ToString("X") day!

In C#, is there a way to convert an int value to a hex value without using the .ToString("X") method?
Your question is plain wrong (no offense intended). A number has one single value; hex, decimal, binary, octal, etc. are just different representations of the same integral number. Int32 is agnostic when it comes to which representation you choose to write it in.
So when you ask:
is there a way to convert an int value to a hex value
you are asking something that doesn't make sense. A valid question would be: is there any way to write an integer in hexadecimal representation that doesn't involve using .ToString("X")?
The answer is: not really. One way or another (directly or not by you), .ToString("X") or some other flavor of ToString() will be called to correctly format the string representing the value.
And when you think of hexadecimal as a representation (a formatted string) of a given number, .ToString() does make sense.
Use Convert.ToString(intValue, 16);
It can convert to any of the bases it supports: 2 (binary), 8 (octal), 10 (decimal) and 16 (hexadecimal).
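A quick sketch contrasting the two approaches, using the value from the title:
int value = 3501;
Console.WriteLine(value.ToString("X"));         // DAD
Console.WriteLine(Convert.ToString(value, 16)); // dad (Convert.ToString produces lowercase digits)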

Float to String format specifier

I have some float values I want to convert to strings, and I want to keep the formatting the same when converting, i.e. 999.0000 (float) -> "999.0000" (string). My problem is that when the values contain an arbitrary number of zeroes after the decimal point, as in the previous example, they are stripped away when converting to a string, so the result I actually end up with is 999.
I looked at the format specifiers for the ToString() method on MSDN; the round-trip ("R") specifier looks like it will produce what I want, but it is only supported for Single, Double and BigInteger variables. Is there a format specifier like this for float variables? Or would it be easier to just convert the values to doubles?
UPDATE: Just for clarity, the reason why I want to keep the trailing zeroes is because I'm doing a comparison of decimal places, i.e. I'm comparing the number of digits after the decimal place between two values. So for example, 1.00 and 1.00000 have a different number of digits after the decimal point. I know it's a strange request, it's for work and the requirement is coming from on high.
UPDATE 2-3-11:
I was thinking about this too hard, I'm reading the numbers from a txt file and then parsing them as floats, I'm going to modify the program to check whether the string values are decimals or whole numbers. Sorry for wasting your time, although this was very insightful.
Use ToString() with this format:
12345.678901.ToString("0.0000"); // outputs 12345.6789
12345.0.ToString("0.0000"); // outputs 12345.0000
Put as many zeros as necessary at the end of the format.
Firstly, as Etienne says, float in C# is Single. It is just the C# keyword for that data type.
So you can definitely do this:
float f = 13.5f;
string s = f.ToString("R");
Secondly, you have referred a couple of times to the number's "format"; numbers don't have formats, they only have values. Strings have formats. Which makes me wonder: what is this thing you have that has a format but is not a string? The closest thing I can think of would be decimal, which does maintain its own precision; in that case, simply calling ToString on the decimal should have the effect you want.
How about including some example code so we can see exactly what you're doing, and why it isn't achieving what you want?
You can pass a format string to the ToString method, like so:
ToString("N4"); // 4 decimal points Number
If you want to see more modifiers, take a look at MSDN - Standard Numeric Format Strings
In C#, float is an alias for System.Single (a bit like int is an alias for System.Int32).
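A short sketch contrasting the options; note that "R" preserves the value rather than the formatting, since 999.0000f is exactly 999:
float f = 999.0000f;
Console.WriteLine(f.ToString("0.0000")); // 999.0000
Console.WriteLine(f.ToString("F4"));     // 999.0000
Console.WriteLine(f.ToString("R"));      // 999 - round-trip keeps the value, not the trailing zeros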