I'm a C# newbie learning how to work with arrays. I wrote a small console app that converts binary numbers to their decimal equivalents; however, the syntax I've used seems to be causing the app to, at some point, use the Unicode code of each digit character instead of the true value of the digit itself, so 1 becomes 49, and 0 becomes 48.
How can I write the app differently to avoid this? Thanks
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Sandbox
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Key in binary number and press Enter to calculate decimal equivalent");
            string inputString = Console.ReadLine();
            // This is supposed to change the user input into a character array - possible issue here
            char[] digitalState = inputString.ToArray();
            int exponent = 0;
            int numberBase = 2;
            int digitIndex = inputString.Length - 1;
            int decimalValue = 0;
            int intermediateValue = 0;
            // Calculates the decimal value of each binary digit by raising two to the power of the
            // position of the digit. The result is then multiplied by the binary digit (i.e. 1 or 0,
            // the "digitalState") to determine whether the result should be accumulated into the
            // final result for the binary number as a whole ("decimalValue").
            while (digitIndex > 0 || digitIndex == 0)
            {
                // The calculation here gives the wrong result, possibly because
                // of the unicode designation vs. true value issue
                intermediateValue = (int)Math.Pow(numberBase, exponent) * digitalState[digitIndex];
                decimalValue = decimalValue + intermediateValue;
                digitIndex--;
                exponent++;
            }
            Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, intermediateValue);
            Console.ReadLine();
        }
    }
}
Simply use the following code
for (int i = 0; i < digitalState.Length; i++)
{
    digitalState[i] = (char)(digitalState[i] - 48);
}
After
char[] digitalState = inputString.ToArray();
Note that the value of a character, for example '1' is different from what it represents. As you already noticed '1' is equal to ASCII code 49. When you subtract 48 from its value (49) it becomes 1.
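To see the difference for yourself, a quick check like this (purely illustrative) prints both the character code and the digit value:

char c = '1';
Console.WriteLine((int)c);   // 49, the UTF-16 code unit for '1'
Console.WriteLine(c - 48);   // 1, the digit value
Console.WriteLine(c - '0');  // 1, same thing, but self-documenting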
There were two errors: you missed the "-48", and you wrote the intermediate value instead of the result (last line). Not sure how to underline some parts in the code block ;)
intermediateValue = (int)Math.Pow(numberBase, exponent) * (digitalState[digitIndex] - 48);
//The calculation here gives the wrong result,
//possibly because of the unicode designation vs. true value issue
decimalValue += intermediateValue;
(.....)
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);
@CharlesMager says it all. However, I assume this is a homework assignment. So, as you said, multiplying by the ASCII value is wrong. Just subtract '0' (decimal value 48) from the ASCII value.
intermediateValue = (int)Math.Pow(numberBase, exponent)
    * ((int)digitalState[digitIndex] - 48);
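Putting that into context, a minimal sketch of the whole corrected loop, using the same variables as the question (I've also collapsed the two-part loop condition into >= 0):

while (digitIndex >= 0)
{
    intermediateValue = (int)Math.Pow(numberBase, exponent) * (digitalState[digitIndex] - 48);
    decimalValue = decimalValue + intermediateValue;
    digitIndex--;
    exponent++;
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);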
Your code is ugly; there is no need to go backwards through the string. Also, using Math.Pow is inefficient; shifting (<<) is equivalent for binary powers.
long v = 0;
foreach (var c in inputString)
{
    v = (v << 1) + ((int)c - 48);
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, v);
Console.ReadLine();
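As an aside, if you don't need to do the arithmetic yourself, the framework can parse from base 2 directly:

long value = Convert.ToInt64(inputString, 2); // throws FormatException for non-binary input
Console.WriteLine(value);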
Related
I need to get a number from the user and display the sum of that number's digits. For example, the sum of the digits in the number 12329 is 17.
Here's what I tried to do and it is giving me the ASCII code instead:
Console.WriteLine("please enter a number: ");
string num = Console.ReadLine();
int len = num.Length;
int[] nums = new int[len];
int sum = 0;
int count = 0;
while (count < len)
{
    nums[count] = Convert.ToInt32(num[count]);
    count++;
}
for (int i = 0; i < len; i++)
    sum += nums[i];
Console.WriteLine(sum);
This is a very common mistake. char is really just a number: the encoding value of the character represented by the char. When you call Convert.ToInt32 on it, it sees the char as a number and says "alright, let's just convert this number to 32 bits and return!" instead of trying to parse the character.
"Wait, where have I used a char in my code?" you might ask. Well, here:
Convert.ToInt32(num[count]) // 'num[count]' evaluates to 'char'
To fix this, you need to convert the char to a string:
nums[count] = Convert.ToInt32(num[count].ToString());
                              ^^^^^^^^^^^^^^^^^^^^^
Now you are calling a different overload of the ToInt32 method, which actually tries to parse the string.
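A quick demonstration of the two overloads side by side (illustrative only):

char c = '7';
Console.WriteLine(Convert.ToInt32(c));            // 55, the character's code
Console.WriteLine(Convert.ToInt32(c.ToString())); // 7, the parsed value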
When you access your string with an index (in your case num[count]) you get a char, and because of that you are getting ASCII values. You can convert a char to a string with .ToString(), in your case nums[count] = Convert.ToInt32(num[count].ToString());. I posted here another approach to your problem:
string number = Console.ReadLine();
int sum = 0;
foreach (var item in number)
{
    sum += Convert.ToInt32(item.ToString());
}
Console.WriteLine(sum);
As you noticed, Convert.ToInt32(num[count]) will only return the Unicode code of the char you want to convert, because when you use the [] operator on a string, you get read-only access to the individual characters of the string [1].
And so you are using Convert.ToInt32(Char), which
Converts the value of the specified Unicode character to the equivalent 32-bit signed integer.
One way to get the numeric value of a char is to use Char.GetNumericValue(), which
Converts a specified numeric Unicode character to a double-precision floating-point number.
With using System.Linq; you can cut your code down to just a few lines.
Console.WriteLine("please enter a number: ");
string num = Console.ReadLine(); // "12329"
int sum = (int)num.Select(n => Char.GetNumericValue(n)).Sum();
Console.WriteLine(sum); // 17
What does this line of code do?
The num.Select(n => Char.GetNumericValue(n)) will iterate over each char in your string, like your while loop, convert each value to a double, and return an IEnumerable<double>. The Sum() will then iterate over each value of that IEnumerable<double> and calculate the sum as a double. And since you want an integer as the result, the (int) casts the double to an integer value.
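If it helps, here is roughly the same pipeline de-sugared into explicit steps (an illustrative equivalent, assuming using System.Collections.Generic and using System.Linq):

IEnumerable<double> digitValues = num.Select(n => Char.GetNumericValue(n)); // one double per char
double total = digitValues.Sum();                                           // 17.0 for "12329"
int sum = (int)total;                                                       // truncate back to int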
Side-Note:
You could check your input, if it is really an integer.
For example:
int intValue;
if (Int32.TryParse(num, out intValue))
{
    // run the LINQ
}
else
{
    // re-prompt, exit, ...
}
If you use Char.GetNumericValue() on a letter it will return -1, so for example the sum of string num = "123a29"; will be 16.
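If you want to guard against that, one option (a sketch, same num variable as above) is to keep only the digit characters before summing:

int digitsOnlySum = (int)num.Where(char.IsDigit)
                            .Select(n => Char.GetNumericValue(n))
                            .Sum(); // 17 for "123a29", the 'a' is skipped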
I'm trying to convert a float to its string representation without the result appearing in scientific notation.
I first tried:
ToString("0." + new string('#', 7))
but this doesn't seem to work for large values. For example:
float largeNumber = 12345678f;
string str = largeNumber.ToString("0." + new string('#', 7));
results in "12345680"
I then tried ToString("R")
this works for the large number above, but if the numbers get too large, it displays them in scientific notation. For example 5000000000f results in "5E+09". And small numbers like 0.0005 result in 0.0004999999966
I've also tried mixing the two, but I still get scientific notation in some cases.
My test program is pasted below. I appreciate that there will be precision issues, but I'm wondering if I can do any better than what I have?
class Program
{
    static void Main(string[] args)
    {
        Write(0.123456789f);
        Write(0.12345678f);
        Write(0.1234567f);
        Write(0.123456f);
        Write(0.12345f);
        Write(0.1234f);
        Write(0.123f);
        Write(0.12f);
        Write(0.1f);
        Write(1);
        Write(12);
        Write(123);
        Write(1234);
        Write(12345);
        Write(123456);
        Write(1234567);
        Write(12345678);
        Write(123456789);
        Console.WriteLine();
        float f = 5000000000f;
        for (int i = 0; i < 17; ++i)
        {
            Write(f);
            f /= 10;
        }
        Console.WriteLine();
        f = 5000000000f;
        for (int i = 0; i < 17; ++i)
        {
            Write(f < 1 ? f + 1 : f);
            f /= 10;
        }
        Console.Read();
    }

    static void Write(float f)
    {
        //string str = f.ToString("0." + new string('#', 7));
        //string str = f.ToString("R");
        string str = Math.Abs(f) < 1 ? f.ToString("0." + new string('#', 7)) : f.ToString("R");
        Console.WriteLine(str);
    }
}
The problem is that float only supports 7 digits of precision. There's no way to represent 12345678f precisely in a float, so it gets converted to the nearest representable float value, which is 12345680f. It's not the size of the number, but the number of digits of precision that is the key issue.
Also, 0.0005 cannot be represented exactly in a binary floating-point number; the closest 32-bit binary floating-point value to 0.0005 is 0.0004999999966
decimal supports much greater precision and can represent base-10 numbers precisely, using the standard N format specifier.
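For example, a quick sketch with decimal literals:

decimal large = 12345678m;
Console.WriteLine(large.ToString("N0")); // "12,345,678" under an en-US culture
decimal small = 0.0005m;
Console.WriteLine(small.ToString());     // "0.0005", exact in base 10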
try this:
largeNumber.ToString("r")
You can find the list of available formats here:
http://msdn.microsoft.com/en-us/library/dwhawy9k%28v=vs.110%29.aspx
The 0 in the format string is not for zeroes; it represents "digit if non-zero, zero if zero", while '#' represents "digit if non-zero, nothing if zero".
You can use the format f.ToString("0." + new string('#', 7)) for numbers over zero
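A short illustration of the difference between the two placeholders (decimal separator shown for an en-US culture):

Console.WriteLine(42f.ToString("0.0"));        // "42.0", the 0 forces a digit
Console.WriteLine(42f.ToString("0.#"));        // "42", the # drops the trailing zero
Console.WriteLine(0.5f.ToString("0.0######")); // "0.5"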
I just tested it in PowerShell and it works fine for me, although it's probably using doubles or decimals:
PS C:\> $test = "{0:0.0######}"
PS C:\> $test -f 0.5
0,5
PS C:\> $test -f 0.4443423
0,4443423
PS C:\> $test -f 123.4443423
123,4443423
PS C:\> $test -f 45425123.4443423
45425123,4443423
Definitely, the problem seems to be float precision:
PS C:\> $test -f [float]45425123.4443423
45425120,0
I have this program that gets all the numbers from a double variable, removes decimal marks and minuses and adds every digit separately. Here it is:
static void Main(string[] args)
{
    double input = double.Parse(Console.ReadLine());
    char[] chrArr = input.ToString().ToCharArray();
    input = 0;
    foreach (var ch in chrArr)
    {
        string somestring = Convert.ToString(ch);
        int someint = 0;
        bool z = int.TryParse(somestring, out someint);
        if (z == true)
        {
            input += (ch - '0');
        }
    }
}
The problem is, for example, when I enter "9999999999999999999999999999999..." and so on, it gets represented as 1.0E+254, so my program just adds 1+0+2+5+4 and finishes. Is there an efficient way to make this work properly? I tried using string instead of double, but it was too slow.
You can't store "9999999999999999999999999999999..." as a double - a double only has 15 or 16 digits of precision. The parser gives you the closest double it can represent to what you're asking for, which is 1E254.
I'd look into why using string was slow, or use BigInteger.
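For example, a minimal sketch with System.Numerics.BigInteger, which has no fixed digit limit:

using System.Numerics; // requires a reference to System.Numerics

BigInteger big = BigInteger.Parse(Console.ReadLine());
int digitSum = 0;
foreach (char ch in BigInteger.Abs(big).ToString())
{
    digitSum += ch - '0'; // each char here is a decimal digit
}
Console.WriteLine(digitSum);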
As other answers indicate, what's stored will not be exactly the digits entered, but will be the closest double value that can be represented.
If you want to inspect all of its digits, though, use F0 as the format string.
char[] chrArr = input.ToString("F0").ToCharArray();
You can store a larger number in a decimal, as it is a 128-bit number compared to the 64-bit double.
But there is obviously still a limit.
In my code I need to convert string representation of integers to long and double values.
The string representation is a byte array (byte[]). For example, for the number 12345 the string representation is { 49, 50, 51, 52, 53 }.
Currently, I use the following obvious code for conversion to long (and almost the same code for conversion to double):
private long bytesToIntValue()
{
    string s = System.Text.Encoding.GetEncoding("Latin1").GetString(bytes);
    return long.Parse(s, CultureInfo.InvariantCulture);
}
This code works as expected, but I want something better, because currently I must convert the bytes to a string first.
In my case, bytesToIntValue() gets called about 12 million times and about 25% of all memory allocations are made in this method.
Sure, I want to optimize this part. I want to perform conversions without intermediate string (+ speed, - allocation).
What would you recommend? How can I perform conversions without intermediate strings? Is there a faster method to perform conversions?
EDIT:
The byte arrays I am dealing with always contain ASCII-encoded data. Numbers can be negative. For double values, exponential format is allowed. Hexadecimal integers are not allowed.
How can I perform conversions without intermediate strings?
Well you can easily convert each byte to a char. For example - untested:
private static long ConvertAsciiBytesToInt64(byte[] bytes)
{
    long value = 0;
    foreach (byte b in bytes)
    {
        value *= 10L;
        char c = (char)b; // Explicit conversion; effectively ISO-8859-1
        if (c < '0' || c > '9')
        {
            throw new ArgumentException("Bytes contains non-digit: " + c);
        }
        value += (c - '0');
    }
    return value;
}
Note that this really does assume it's ASCII (or compatible) - if your byte array is actually UTF-16 (for example) then it will definitely do the wrong thing.
Also note that this doesn't perform any sort of length validation or overflow checking... and it doesn't cope with negative numbers. You could add all of these if you want, but we don't know enough about your requirements to know if it's worth adding the complexity.
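For illustration, negative numbers could be handled with a variant along these lines (a hypothetical extension of the method above, equally untested and still without overflow checking):

private static long ConvertSignedAsciiBytes(byte[] bytes)
{
    // Hypothetical helper: accepts an optional leading '-' and
    // accumulates the remaining ASCII digits exactly as above.
    bool negative = bytes.Length > 0 && bytes[0] == (byte)'-';
    long value = 0;
    for (int i = negative ? 1 : 0; i < bytes.Length; i++)
    {
        char c = (char)bytes[i];
        if (c < '0' || c > '9')
        {
            throw new ArgumentException("Bytes contains non-digit: " + c);
        }
        value = value * 10 + (c - '0');
    }
    return negative ? -value : value;
}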
I'm not sure that there is an easy way to do that.
Please note that it won't work with other encodings. The test on my computer showed that this is only about 3 times faster (I don't think it's worth it).
The code + test :
using System;
using System.Diagnostics;
using System.Text;

class MainClass
{
    public static void Main(string[] args)
    {
        string str = "12341234";
        byte[] buffer = Encoding.ASCII.GetBytes(str);
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            long val = BufferToLong.GetValue(buffer);
        }
        Console.WriteLine(sw.ElapsedMilliseconds);
        sw.Restart();
        for (int i = 0; i < 1000000; i++)
        {
            string valStr = Encoding.ASCII.GetString(buffer);
            long val = long.Parse(valStr);
        }
        Console.WriteLine(sw.ElapsedMilliseconds);
    }
}

static class BufferToLong
{
    public static long GetValue(byte[] buffer)
    {
        long number = 0;
        foreach (byte currentByte in buffer)
        {
            char currentChar = (char)currentByte;
            int currentDigit = currentChar - '0';
            number *= 10;
            number += currentDigit;
        }
        return number;
    }
}
In the end, I created a C# version of the strtol function. This function ships with the CRT, and the CRT source code comes with Visual Studio.
The resulting method is almost the same as the code provided by @Jon Skeet in his answer, but also contains some checks for overflow.
In my case, all the changes proved to be very useful in terms of speed and memory.
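I can't speak for the exact code, but the overflow check can be as simple as wrapping the digit accumulation from the earlier answer in a checked block (one possible approach, not necessarily what the CRT's strtol does):

checked
{
    value = value * 10 + (c - '0'); // throws OverflowException instead of silently wrapping
}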
What I am looking for is something like PHP's decbin function in C#. That function converts a decimal number to its binary representation as a string.
For example, decbin(21) returns 10101 as the result.
I found this function which basically does what I want, but maybe there is a better / faster way?
var result = Convert.ToString(number, 2);
– Almost the only use for the (otherwise useless) Convert class.
Most ways will be better and faster than the function that you found. It's not a very good example of how to do the conversion.
The built in method Convert.ToString(num, base) is an obvious choice, but you can easily write a replacement if you need it to work differently.
This is a simple method where you can specify the length of the binary number:
public static string ToBin(int value, int len) {
    return (len > 1 ? ToBin(value >> 1, len - 1) : null) + "01"[value & 1];
}
It uses recursion: the first part (before the +) calls itself to create the binary representation of the number except for the last digit, and the second part takes care of the last digit.
Example:
Console.WriteLine(ToBin(42, 8));
Output:
00101010
int toBase = 2;
string binary = Convert.ToString(21, toBase); // "10101"
To have the binary value in (at least) a specified number of digits, padded with zeroes:
string bin = Convert.ToString(1234, 2).PadLeft(16, '0');
The Convert.ToString does the conversion to a binary string.
The PadLeft adds zeroes to fill it up to 16 digits.
This is my answer:
static bool[] Dec2Bin(int value)
{
    if (value == 0) return new[] { false };
    var n = (int)(Math.Log(value) / Math.Log(2));
    var a = new bool[n + 1];
    for (var i = n; i >= 0; i--)
    {
        n = (int)Math.Pow(2, i);
        if (n > value) continue;
        a[i] = true;
        value -= n;
    }
    Array.Reverse(a);
    return a;
}
I'm using Pow instead of modulo and divide, so I think it's a faster way.
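For comparison, here is what the modulo-and-divide version alluded to above might look like (a sketch, assuming using System.Collections.Generic; it produces the same MSB-first array):

static bool[] Dec2BinDivMod(int value)
{
    if (value == 0) return new[] { false };
    var bits = new List<bool>();
    while (value > 0)
    {
        bits.Add(value % 2 == 1); // remainder of division by 2 is the next bit
        value /= 2;               // integer division shifts the number right
    }
    bits.Reverse();               // most significant bit first
    return bits.ToArray();
}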