static void Main()
{
Console.Write("Please input a number: ");
Console.WriteLine("\n The number you selected was {0} \n", method());
}
static int method()
{
int var = int.Parse(Console.ReadLine());
return var;
}
The above code throws a FormatException. I tried storing the input in a string variable first and then parsing it, but I had the same problem. I also tried using the Convert class, with the same result. I would appreciate it if someone could show me where I am going wrong.
I am trying to convert 23.4, for example. (It works for whole numbers, but why not for 4345.5, for example?)
23.4 is not an integer, so you cannot use int.Parse (or int.TryParse). Instead you have to parse it as a floating-point type like decimal or double. You can use the TryParse methods like double.TryParse to prevent an exception if the input is not a valid number:
string input = Console.ReadLine();
double number;
if(double.TryParse(input.Trim(), out number))
{
// valid number
Console.WriteLine("\n The number you selected was {0} \n", number);
}
else
{
Console.WriteLine("Please enter a real number.");
}
Update: from your comments I can see that you want to display an integer.
You can use (int)number to get an integer where the decimal part is simply truncated. If you want it rounded to the nearest integer instead, use Math.Round first.
int integer = (int) number; // decimal part is truncated
integer = (int) Math.Round(number, MidpointRounding.AwayFromZero); // rounded to the nearest integer
If you just want to display a string without the decimal part you can also use format strings.
string numberString = number.ToString("N0");
You should be using double or float for non-integer numbers:
float num = float.Parse(Console.ReadLine());
You can return the integer part of your input string (3.6 becomes 3, for example) like this:
static int method()
{
float var = float.Parse(Console.ReadLine());
return (int)Math.Floor(var);
}
Int32.Parse is for parsing 32-bit integers. It cannot parse values that are not representable as a 32-bit integer, such as 23.4, and it will throw an exception when it cannot parse the input. Use a different numeric type if you wish to represent non-integer numbers.
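To illustrate (a minimal sketch; CultureInfo.InvariantCulture is used so the '.' decimal separator works regardless of locale):
using System.Globalization;
int a = int.Parse("23");                                       // 23 - integer syntax is fine
// int b = int.Parse("23.4");                                  // throws FormatException - not integer syntax
double c = double.Parse("23.4", CultureInfo.InvariantCulture); // 23.4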
Related
I need to get a number from the user and display the sum of that number's digits. For example, the sum of the digits in the number 12329 is 17.
Here's what I tried to do, but it gives me the ASCII codes instead:
Console.WriteLine("please enter a number: ");
string num = Console.ReadLine();
int len = num.Length;
int[] nums = new int[len];
int sum = 0;
int count = 0;
while (count < len)
{
nums[count] = Convert.ToInt32(num[count]);
count++;
}
for (int i = 0; i < len; i++)
sum += nums[i];
Console.WriteLine(sum);
This is a very common mistake. char is really just a number - the encoding value of the character represented by the char. When you do Convert.ToInt32 on it, it sees the char as a number and says "alright let's just convert this number to 32 bits and return!" instead of trying to parse the character.
"Wait, where have I used a char in my code?" you might ask. Well, here:
Convert.ToInt32(num[count]) // 'num[count]' evaluates to 'char'
To fix this, you need to convert the char to a string:
nums[count] = Convert.ToInt32(num[count].ToString());
^^^^^^^^^^^^^^^^^^^^^
Now you are calling a different overload of the ToInt32 method, which actually tries to parse the string.
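For illustration, here is how the two overloads behave side by side (values assume the character '7'):
Convert.ToInt32('7') // 55: Convert.ToInt32(char) returns the character's code point
Convert.ToInt32("7") // 7:  Convert.ToInt32(string) parses the text as a number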
When you access your string with an index (in your case num[count]) you get a char, and that is why you are getting ASCII values. You can convert a char to a string with .ToString(), in your case nums[count] = Convert.ToInt32(num[count].ToString());. Here is another approach to your problem:
string number = Console.ReadLine();
int sum = 0;
foreach (var item in number)
{
sum += Convert.ToInt32(item.ToString());
}
Console.WriteLine(sum);
As you noticed, Convert.ToInt32(num[count]) will only return the Unicode code of the char you want to convert, because when you use the [] operator on a string, you get read-only access to the individual characters of the string.
And so you are using Convert.ToInt32(Char), which
Converts the value of the specified Unicode character to the equivalent 32-bit signed integer.
One way to get the digit value of a char is Char.GetNumericValue(), which
Converts a specified numeric Unicode character to a double-precision floating-point number.
With using System.Linq; you can cut your code down to just a few lines.
Console.WriteLine("please enter a number: ");
string num = Console.ReadLine(); // "12329"
int sum = (int)num.Select(n => Char.GetNumericValue(n)).Sum();
Console.WriteLine(sum); // 17
What does this line of code do?
num.Select(n => Char.GetNumericValue(n)) iterates over each char in your string, like your while loop, converts each one to a double and returns an IEnumerable<double>. Sum() then iterates over that IEnumerable<double> and calculates the sum as a double. And since you want an integer as the result, the (int) casts the double into an integer value.
Side-Note:
You could check your input, if it is really an integer.
For example:
int intValue;
if(Int32.TryParse(num, out intValue))
{
// run the linq
}
else
{
// re-prompt, exit, ...
}
If you use Char.GetNumericValue() on a letter it will return -1, so for example the sum of string num = "123a29"; will be 16.
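If you would rather skip non-digit characters entirely, one option (my own variation, not part of the answer above) is to filter with char.IsDigit first:
int sum = (int)num.Where(char.IsDigit).Select(Char.GetNumericValue).Sum();
// "123a29" -> 1 + 2 + 3 + 2 + 9 = 17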
I'm a C# newbie learning how to work with arrays. I wrote a small console app that converts binary numbers to their decimal equivalents; however, the syntax I've used seems to be causing the app to use, at some point, the Unicode value of the digit characters instead of the true value of the digits themselves, so 1 becomes 49 and 0 becomes 48.
How can I write the app differently to avoid this? Thanks
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Sandbox
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Key in binary number and press Enter to calculate decimal equivalent");
string inputString = Console.ReadLine();
// This is supposed to change the user input into a character array - possible issue here
char[] digitalState = inputString.ToArray();
int exponent = 0;
int numberBase = 2;
int digitIndex = inputString.Length - 1;
int decimalValue = 0;
int intermediateValue = 0;
//Calculates the decimal value of each binary digit by raising two to the power of the position of the digit. The result is then multiplied by the binary digit (i.e. 1 or 0, the "digitalState") to determine whether the result should be accumulated into the final result for the binary number as a whole ("decimalValue").
while (digitIndex > 0 || digitIndex == 0)
{
intermediateValue = (int)Math.Pow(numberBase, exponent) * digitalState[digitIndex]; //The calculation here gives the wrong result, possibly because of the unicode designation vs. true value issue
decimalValue = decimalValue + intermediateValue;
digitIndex--;
exponent++;
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, intermediateValue);
Console.ReadLine();
}
}
}
Simply use the following code
for (int i = 0; i < digitalState.Length; i++)
{
digitalState[i] = (char)(digitalState[i] - 48);
}
after the line
char[] digitalState = inputString.ToArray();
Note that the numeric value of a character, for example '1', is different from the digit it represents. As you already noticed, '1' is equal to ASCII code 49. When you subtract 48 from its value (49), it becomes 1.
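A stylistic note (my own suggestion, not part of the answer): subtracting the character literal '0' instead of the magic number 48 expresses the same idea more readably:
digitalState[i] = (char)(digitalState[i] - '0'); // '1' (49) - '0' (48) == 1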
There were two errors: you missed the "-48", and you printed the intermediate value instead of the result in the last line. (Not sure how to underline parts of a code block ;))
intermediateValue = (int)Math.Pow(numberBase, exponent) * (digitalState[digitIndex] - 48);
//The calculation here gives the wrong result,
//possibly because of the unicode designation vs. true value issue
decimalValue += intermediateValue;
(.....)
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);
@CharlesMager says it all. However, I assume this is a homework assignment. So, as you said, multiplying by the ASCII value is wrong; just subtract '0' (decimal value 48) from the ASCII value.
intermediateValue = (int)Math.Pow(numberBase, exponent)
* ((int)digitalState[digitIndex] - 48);
Your code is ugly; there is no need to go backwards through the string. Also, Math.Pow is inefficient; shifting (<<) is equivalent for binary powers.
long v = 0;
foreach (var c in inputString)
{
v = (v << 1) + ((int)c - 48);
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, v);
Console.ReadLine();
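As an aside (not part of the answer above): the framework can already do this conversion. Convert.ToInt32 has an overload that takes the source base:
int decimalValue = Convert.ToInt32(inputString, 2); // parses "1010" as binary and yields 10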
I have this program that takes all the digits from a double variable, ignoring decimal marks and minus signs, and adds every digit separately. Here it is:
static void Main(string[] args)
{
double input = double.Parse(Console.ReadLine());
char[] chrArr = input.ToString().ToCharArray();
input = 0;
foreach (var ch in chrArr)
{
string somestring = Convert.ToString(ch);
int someint = 0;
bool z = int.TryParse(somestring, out someint);
if (z == true)
{
input += (ch - '0');
}
    }
}
The problem is that when I enter, for example, "9999999999999999999999999999999..." and so on, it gets represented as 1.0E+254, so my program just adds 1+0+2+5+4 and finishes. Is there an efficient way to make this work properly? I tried using string instead of double, but it was too slow.
You can't store "9999999999999999999999999999999..." as a double - a double only has 15 or 16 digits of precision. The compiler is giving you the closest double it can represent to what you're asking, which is 1E254.
I'd look into why using string was slow, or use BigInteger.
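A minimal sketch of the BigInteger route (it assumes the whole input is one plain integer, and System.Numerics must be referenced):
using System.Numerics;
BigInteger big = BigInteger.Parse(Console.ReadLine());
int sum = 0;
foreach (char c in BigInteger.Abs(big).ToString())
    sum += c - '0'; // each char here is a digit '0'..'9'
Console.WriteLine(sum);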
As other answers indicate, what's stored will not be exactly the digits entered, but will be the closest double value that can be represented.
If you want to inspect all of its digits, though, use "F0" as the format string.
char[] chrArr = input.ToString("F0").ToCharArray();
You can store a larger number in a decimal, as it is a 128-bit number compared to the 64 bits of a double.
But there is obviously still a limit.
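For reference, decimal gives you 28-29 significant digits, while a double gives roughly 15-17:
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (29 digits)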
How do I convert a negative decimal number to a hexadecimal one?
I know how to convert positive numbers from one base to another.
The Windows calculator returns a huge number, something like FFFFFFFFFFFFCFC7, in hex for -12345 in dec. The value that I need for further processing is CFC7, but I don't know how to get it using C#.
Not exactly sure if that is what you need:
int i = -12345;
string test = i.ToString("X"); // test will hold: "FFFFCFC7"
int HexI = Convert.ToInt32(test, 16); // HexI will hold: -12345
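If it is really the 16-bit pattern CFC7 you are after (an assumption based on the question), one option is to narrow the value to a 16-bit type first:
short s = (short)-12345;
string hex16 = s.ToString("X"); // "CFC7"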
Try this:
int decimalValue = -12345;
string hexVal = String.Format("{0:x2}", decimalValue);
We have an interesting problem were we need to determine the decimal precision of a users input (textbox). Essentially we need to know the number of decimal places entered and then return a precision number, this is best illustrated with examples:
4500 entered will yield a result 1
4500.1 entered will yield a result 0.1
4500.00 entered will yield a result 0.01
4500.450 entered will yield a result 0.001
We are thinking to work with the string, finding the decimal separator and then calculating the result. Just wondering if there is an easier solution to this.
I think you should just do what you suggested - use the position of the decimal point. Obvious drawback might be that you have to think about internationalization yourself.
var decimalSeparator = NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator;
var position = input.IndexOf(decimalSeparator);
var precision = (position == -1) ? 0 : input.Length - position - 1;
// This may be quite imprecise.
var result = Math.Pow(0.1, precision);
There is another thing you could try - the Decimal type stores an internal precision value. Therefore you could use Decimal.TryParse() and inspect the returned value. Maybe the parsing algorithm maintains the precision of the input.
Finally, I would suggest not trying anything based on floating-point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks. You might run into precision issues. And finding the precision from a floating-point number afterwards is not simple either: I see some ugly math, or a loop multiplying by ten every iteration until there is no longer any fractional part. And the loop comes with new precision issues...
UPDATE
Parsing into a decimal works. See Decimal.GetBits() for details.
var input = "123.4560";
var number = Decimal.Parse(input);
// Will be 4.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;
From here, using Math.Pow(0.1, precision) is straightforward.
UPDATE 2
Using decimal.GetBits() will allocate an int[] array. If you want to avoid that allocation, you can use the following helper method, which uses an explicit-layout struct to read the scale directly out of the decimal value:
static int GetScale(decimal d)
{
return new DecimalScale(d).Scale;
}
[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
public DecimalScale(decimal value)
{
this = default;
this.d = value;
}
[FieldOffset(0)]
decimal d;
[FieldOffset(0)]
int flags;
public int Scale => (flags >> 16) & 0xff;
}
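(Note: the struct needs using System.Runtime.InteropServices for the StructLayout and FieldOffset attributes.) Usage, with the value from the first update:
int scale = GetScale(123.4560m); // 4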
Just wondering if there is an easier
solution to this.
No.
Use string:
string[] res = inputstring.Split('.');
int precision = res[1].Length; // note: assumes the input actually contains a '.'
Since your last examples indicate that trailing zeroes are significant, I would rule out any numerical solution and go for the string operations.
No, there is no easier solution, you have to examine the string. If you convert "4500" and "4500.00" to numbers, they both become the value 4500 so you can't tell how many non-value digits there were behind the decimal separator.
As an interesting aside, the Decimal tries to maintain the precision entered by the user. For example,
Console.WriteLine(5.0m);
Console.WriteLine(5.00m);
Console.WriteLine(Decimal.Parse("5.0"));
Console.WriteLine(Decimal.Parse("5.00"));
Has output of:
5.0
5.00
5.0
5.00
If your motivation in tracking the precision of the input is purely for input and output reasons, this may be sufficient to address your problem.
Working with the string is easy enough.
If there is no "." in the string, return 1.
Else return "0.", followed by n-1 "0", followed by one "1", where n is the length of the string after the decimal point.
Here's a possible solution using strings;
static double GetPrecision(string s)
{
string[] splitNumber = s.Split('.');
if (splitNumber.Length > 1)
{
return 1 / Math.Pow(10, splitNumber[1].Length);
}
else
{
return 1;
}
}
There is a question here, Calculate System.Decimal Precision and Scale, which looks like it might be of interest if you wish to delve into this some more.