Example:
int a = 0000000;
int b = 17;
int c = a + b;
The value of c should be displayed as 0000017;
if b is 223, then c should be displayed as 0000223.
Format the output using .ToString("D7"); read more in the article How to: Pad a Number with Leading Zeros.
c.ToString("D7");
Leading zeroes only make sense in a string. You get one as follows:
result.ToString("D7")
Well, 0 and 0000000 are the same integer; if you want a different string representation, then use formatting:
int a = 0000000; // equals to 0
int b = 17;
int c = a + b;
Console.Write(c.ToString("D7")); // 7 digits, "0000000" format will do the same
You can just use this method:
using System;
public class Example
{
    public static void Main()
    {
        int a = 0000000;
        int b = 17;
        String s = String.Format("{0:0000000}", a + b); // 7-digit custom format, zero-padded
        Console.WriteLine(s);
    }
}
OUTPUT:
0000017
In an application, leading zeroes only make sense when they are displayed, which means when the numbers are actually strings.
In numeric form you don't see them, and therefore you don't (and shouldn't) care about them.
So, don't worry about leading zeroes in int variables. When you want to show them, or otherwise use the number as a string, use c.ToString("D7"). Of course, you can change that to any number of digits; 7 is only for your example.
Related
I'm a C# newbie learning how to work with arrays. I wrote a small console app that converts binary numbers to their decimal equivalents; however, the syntax I've used seems to be causing the app to, at some point, use the Unicode value of the digit characters instead of the true value of the digits themselves, so 1 becomes 49, and 0 becomes 48.
How can I write the app differently to avoid this? Thanks
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Sandbox
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Key in binary number and press Enter to calculate decimal equivalent");
            string inputString = Console.ReadLine();

            ////This is supposed to change the user input into a character array - possible issue here
            char[] digitalState = inputString.ToArray();

            int exponent = 0;
            int numberBase = 2;
            int digitIndex = inputString.Length - 1;
            int decimalValue = 0;
            int intermediateValue = 0;

            //Calculates the decimal value of each binary digit by raising two to the power of the
            //position of the digit. The result is then multiplied by the binary digit (i.e. 1 or 0,
            //the "digitalState") to determine whether the result should be accumulated into the
            //final result for the binary number as a whole ("decimalValue").
            while (digitIndex >= 0)
            {
                intermediateValue = (int)Math.Pow(numberBase, exponent) * digitalState[digitIndex]; //The calculation here gives the wrong result, possibly because of the unicode designation vs. true value issue
                decimalValue = decimalValue + intermediateValue;
                digitIndex--;
                exponent++;
            }

            Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, intermediateValue);
            Console.ReadLine();
        }
    }
}
Simply use the following code
for (int i = 0; i < digitalState.Length; i++)
{
digitalState[i] = (char)(digitalState[i] - 48);
}
After
char[] digitalState = inputString.ToArray();
Note that the value of a character, for example '1', is different from what it represents. As you already noticed, '1' is equal to ASCII code 49. When you subtract 48 from its value (49), it becomes 1.
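To see the difference concretely, here is a minimal sketch (the variable names are just for illustration):

char c = '1';        // the character '1'
int code = c;        // the implicit conversion gives the character code: 49
int digit = c - '0'; // subtracting '0' (48) gives the numeric value: 1
Console.WriteLine("{0} {1} {2}", c, code, digit); // prints: 1 49 1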
There were two errors: you missed the "-48", and you wrote the intermediate value instead of the result (last line). Not sure how to underline parts of the code block ;)
intermediateValue = (int)Math.Pow(numberBase, exponent) * (digitalState[digitIndex] - 48);
//The calculation here gives the wrong result,
//possibly because of the unicode designation vs. true value issue
decimalValue += intermediateValue;
(.....)
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);
@CharlesMager says it all. However, I assume this is a homework assignment. So, as you said, multiplying by the ASCII value is wrong. Just subtract '0' (decimal value 48) from the ASCII value:
intermediateValue = (int)Math.Pow(numberBase, exponent)
* ((int)digitalState[digitIndex] - 48);
Your code is ugly; there is no need to go backwards through the string. Also, using Math.Pow is inefficient; shifting (<<) is equivalent for binary powers.
long v = 0;
foreach (var c in inputString)
{
    v = (v << 1) + ((int)c - 48);
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, v);
Console.ReadLine();
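As an aside: if this weren't an exercise, the framework could do the whole conversion in one call. A minimal sketch, reusing inputString from the question:

int decimalValue = Convert.ToInt32(inputString, 2); // parses a base-2 string directly
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);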
I am having a hard time understanding what exactly is going on behind this algorithm. I have the following code, which I believe works for the Wikipedia example. I seem to be having problems matching up the correct hex values, though: while for the wiki example I get the correct hex value, in other cases it seems that my int finalValue; is not the correct value.
string fText, fileName, output;
Int32 a = 1, b = 0;
const int MOD_ADLER = 65521;
const int ADLER_CONST2 = 65536;

private void btnCalculate_Click(object sender, EventArgs e) {
    fileName = tbFilePath.Text;
    if (fileName != "" && File.Exists(fileName)) {
        fText = File.ReadAllText(fileName);
        foreach (char i in fText) {
            a = (a + Convert.ToInt32(i)) % MOD_ADLER;
            b = (b + a) % MOD_ADLER;
        }
        int finalValue = (b * ADLER_CONST2 + a);
        output = finalValue.ToString("X");
        lbValue.Text = output.ToString();
    }
    else {
        MessageBox.Show("This is not a valid filepath, or is a blank file.\n" +
            "Please enter a valid file path.");
    }
}
I understand that this is not an efficient way to go about this; I am just trying to understand what is really going on under the hood, so that I can then create a more efficient algorithm that varies from this.
From my understanding: in my code, a starts from its initial value of 1 and has the integer (32-bit) value of each character added to it. I take the remainder modulo the large prime number, and continue moving through the text file adding up the values until all of the characters have been processed.
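To check my understanding, here is a hand trace for a hypothetical two-character file containing "Hi" ('H' = 72, 'i' = 105):

start: a = 1, b = 0
'H':   a = (1 + 72)   % 65521 = 73     b = (0 + 73)   % 65521 = 73
'i':   a = (73 + 105) % 65521 = 178    b = (73 + 178) % 65521 = 251
final: (251 * 65536) + 178 = 16449714, which is 0xFB00B2 in hex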
Probably these two lines confuse you:
a = (a + Convert.ToInt32(i)) % MOD_ADLER;
b = (b + a) % MOD_ADLER;
Every char has an integer representation. You can check this article. You are changing the value of a to be the remainder of (the current value of a + the int representation of the char) divided by MOD_ADLER. You can read about the % operator.
What a remainder is: 5 % 2 = 1.
After that you do the same thing for b: b becomes the remainder of (the current value of b + a) divided by MOD_ADLER. After you do that multiple times (once for each char in the string), you have this:
int finalValue = (b * ADLER_CONST2 + a);
output = finalValue.ToString("X");
This converts the final integer value to HEX.
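Pulled out of the event handler, the whole algorithm fits in a small standalone method. A minimal sketch (the method name is mine; the expected value for "Wikipedia" is the worked example from the Wikipedia article):

static uint Adler32(string text)
{
    const uint MOD_ADLER = 65521;
    uint a = 1, b = 0;
    foreach (char c in text)
    {
        a = (a + c) % MOD_ADLER; // running sum of the character values
        b = (b + a) % MOD_ADLER; // running sum of the successive a values
    }
    return (b << 16) | a;        // same as b * 65536 + a
}

// Adler32("Wikipedia").ToString("X") should print 11E60398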
output = finalValue.ToString("X");
The "X" format says generate the hexadecimal represent of the number!
See MSDN Standard Numeric Format Strings
I need to traverse the string, which should be a string of digits, and perform some arithmetic operations on these digits:
for (int i = data.Length - 1; i >= 0; --i)
{
    uint curDigit;
    //Convert the current rightmost digit to uint; if the conversion fails,
    //return false (the valid data should be numeric)
    try
    {
        curDigit = Convert.ToUInt32(data[i]);
        //arithmetic ops...
    }
    catch
    {
        return false;
    }
}
I test it with the following input data string.
"4000080706200002"
For i = 15, corresponding to the rightmost digit 2, I get 50 as the output from
curDigit = Convert.ToUInt32(data[i]);
Can someone please explain to me what is wrong, and how to correct the issue?
50 is the ASCII code for '2'. What you need is '2' - '0' (50 - 48):
byte[] digits = "4000080706200002".Select(x => (byte)(x - '0')).ToArray();
http://www.asciitable.com/
What you are getting back is the ASCII value of the character '2'. You can call ToString on the character and then call Convert.ToUInt32. Consider the example:
char x = '2';
uint curDigit = Convert.ToUInt32(x.ToString());
This will give you back 2 as curDigit.
For your code you can just use:
curDigit = Convert.ToUInt32(data[i].ToString());
Another option is to use char.GetNumericValue like:
uint curDigit = (UInt32) char.GetNumericValue(data[i]);
char.GetNumericValue returns a double, and you can cast the result back to UInt32.
The problem is that data[i] returns a char, which is essentially an integer holding the ASCII code of the character. So '2' corresponds to 50.
There are two things you can do to overcome this behaviour:
curDigit = Convert.ToUInt32(data[i] - '0'); //Better: subtract the ASCII representation of '0' from the char
curDigit = Convert.ToUInt32(data.Substring(i, 1)); //Use Substring to return a string instead of a char. Note that this method is less efficient, as converting from a string essentially splits it into chars and subtracts '0' from each and every one of them.
You're getting the ASCII (or Unicode) values for those characters. The problem is that the code points for the characters '0' … '9' are not 0 … 9, but 48 … 57. To fix this, you need to adjust by that offset. For example:
curDigit = Convert.ToUInt32(data[i] - '0');
Or
curDigit = Convert.ToUInt32(data[i] - 48);
Rather than messing around with ASCII calculations, you could use UInt32.TryParse as an alternative solution. However, this method requires a string input, not a char, so you would have to modify your approach a little:
string input = "4000080706200002";
string[] digits = input.Select(x => x.ToString()).ToArray();

foreach (string digit in digits)
{
    uint curDigit = 0;
    if (UInt32.TryParse(digit, out curDigit))
    {
        //arithmetic ops...
    }
    //else failed to parse
}
I have this program that takes all the digits from a double variable, skipping decimal marks and minus signs, and adds every digit separately. Here it is:
static void Main(string[] args)
{
    double input = double.Parse(Console.ReadLine());
    char[] chrArr = input.ToString().ToCharArray();
    input = 0;
    foreach (var ch in chrArr)
    {
        string somestring = Convert.ToString(ch);
        int someint = 0;
        bool z = int.TryParse(somestring, out someint);
        if (z == true)
        {
            input += (ch - '0');
        }
    }
    Console.WriteLine(input); // print the accumulated digit sum
}
The problem is that when I enter, for example, "9999999999999999999999999999999..." and so on, it gets represented as 1.0E+254, so my program just adds 1 + 0 + 2 + 5 + 4 and finishes. Is there an efficient way to make this work properly? I tried using string instead of double, but it works too slow.
You can't store "9999999999999999999999999999999..." as a double - a double only has 15 or 16 digits of precision. The compiler is giving you the closest double it can represent to what you're asking, which is 1E254.
I'd look into why using string was slow, or use BigInteger
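For example, a minimal sketch with BigInteger (it needs a reference to System.Numerics, and it assumes the input is a whole number):

using System;
using System.Numerics;

class DigitSum
{
    static void Main()
    {
        BigInteger input = BigInteger.Parse(Console.ReadLine()); // exact, arbitrary length
        int sum = 0;
        foreach (char ch in BigInteger.Abs(input).ToString())
            sum += ch - '0'; // each digit character minus '0' gives its numeric value
        Console.WriteLine(sum);
    }
}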
As other answers indicate, what's stored will not be exactly the digits entered, but will be the closest double value that can be represented.
If you want to inspect all of its digits, though, use F0 as the format string:
char[] chrArr = input.ToString("F0").ToCharArray();
You can store a larger number in a decimal, as it is a 128-bit number compared with the 64-bit double.
But there is obviously still a limit.
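For instance, a decimal keeps up to 28-29 significant digits exactly:

decimal d = 9999999999999999999999999999m; // 28 nines; a decimal stores all of them exactly
Console.WriteLine(d);                      // prints the full 28-digit number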
I need to take an int and turn it into its byte form.
i.e. I need to take '1' and turn it into '00000001'
or '160' and turn it into '10100000'
Currently, I am using this
int x = 3;
string s = Convert.ToString(x, 2);
int b = int.Parse(s);
This is an awful way to do things, so I am looking for a better way.
Any suggestions?
EDIT
Basically, I need to get a list of every number up to 256 in base-2. I'm going to store all the numbers in a list, and keep them in a table on my db.
UPDATE
I decided to keep the base-2 number as a string instead of parsing it back. Thanks for the help and sorry for the confusion!
If you need the byte form you should take a look at the BitConverter.GetBytes() method. It does not return a string, but an array of bytes.
The int is already a binary number. What exactly are you looking to do with the new integer? What you are doing is setting a base-10 number to a base-2 value. That's kind of confusing, and I think you must be trying to do something that should happen a different way.
I don't know what you need at the end ... this may help:
Turn the int into a byte array:
byte[] bytes = BitConverter.GetBytes(x);
Turn the int into a bit array:
BitArray bitArray = new BitArray(new[] {x});
You can use BitArray.
The code looks a bit clumsy, but it could be improved.
int testValue = 160;
System.Collections.BitArray bitarray = new System.Collections.BitArray(new int[] { testValue });
var bitList = new List<bool>();
foreach (bool bit in bitarray)
bitList.Add(bit);
bitList.Reverse();
var base2 = 0;
foreach (bool bit in bitList)
{
    base2 *= 10; // Shift one step left
    if (bit)
        base2++; // Add 1 last
}
Console.WriteLine(base2);
I think you are confusing the data type Integer with its textual representation.
int x = 3;
is the number three regardless of the representation (binary, decimal, hexadecimal, etc.)
When you parse the binary textual representation of an integer back to an integer, you get a different number. The framework assumes you are parsing a number represented in base 10 and gives you the corresponding integer.
You can try
int x = 1600;
string s = Convert.ToString(x, 2);
int b = int.Parse(s);
and it will throw an exception, because the binary representation of 1600, interpreted as a base-10 number, is too big to fit in an integer.
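Given the edit (a list of every number up to 256 in base 2) and the update (keeping the base-2 form as a string), a minimal sketch might look like this:

var binaryStrings = new List<string>();
for (int i = 0; i < 256; i++)
{
    // Convert.ToString(i, 2) gives the base-2 digits; PadLeft supplies the leading zeros
    binaryStrings.Add(Convert.ToString(i, 2).PadLeft(8, '0'));
}
// binaryStrings[1] is "00000001" and binaryStrings[160] is "10100000"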