I am implementing GetHashCode(). I want to sum the values of the characters in a string property called Id, then divide by some constant and return the result. I am using GetNumericValue():
int sum = 0;
foreach (var ch in Id)
sum += char.GetNumericValue(ch);
But it seems that GetNumericValue returns double. Is it okay to convert it to an int?
I thought Unicode characters were represented by whole numbers, so why is a double returned?
And is it okay to ignore the fractional part?
Why is double returned?
While 0-9 are the standard digits, there are other Unicode characters that represent numbers, and some of them are non-integer values like ⅔ or ½. For example:
var ch = char.GetNumericValue('½');
Console.WriteLine(ch); // outputs 0.5
Yes, it will lose data for some values. This is the numeric value of Unicode characters that are themselves numbers, and Unicode includes characters that represent non-integer numbers:
var val = char.GetNumericValue('½'); // or ¼, or ꠲, or ⳽
System.Console.WriteLine(val);
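For completeness, here is a sketch of how the hash loop from the question could be written so that it compiles. The Id property and the divisor 31 are stand-ins for the question's setup ("some constant"); the cast to int is only done when GetNumericValue reports a whole number:
// A sketch only: "Id" and the divisor 31 are assumptions, not part of the original code.
public override int GetHashCode()
{
    int sum = 0;
    foreach (var ch in Id)
    {
        double value = char.GetNumericValue(ch);   // -1.0 for characters with no numeric value
        if (value >= 0 && value == Math.Floor(value))
            sum += (int)value;                      // safe to cast: the value is a whole number
    }
    return sum / 31;
}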
How can arithmetic addition and subtraction be carried out on large hexadecimal strings? For example, I have the following hexadecimal strings:
string a1="B91EFEBFBDBDBFEFF39ABEE";
string a2="000FFFFFFFFFFFFFFFFFEEE";
then I want to do arithmetic addition a1+a2 to get the sum, not string concatenation.
And then arithmetic subtraction e.g. sum-a2 to get back string a1.
I tried to do
Int64 parseda1 = Int64.Parse(a1);
Int64 parseda2 = Int64.Parse(a2);
Int64 xyz = parseda1 + parseda2;
MessageBox.Show(xyz.ToString("X")); // may be an error in this as well
It throws an exception: "Input string was not in a correct format."
If you want really large numbers, you can use the BigInteger struct, which represents an arbitrarily large signed integer. Try this:
string a1 = "B91EFEBFBDBDBFEFF39ABEE";
string a2 = "000FFFFFFFFFFFFFFFFFEEE";
BigInteger num1 = BigInteger.Parse(a1, NumberStyles.HexNumber);
BigInteger num2 = BigInteger.Parse(a2, NumberStyles.HexNumber);
BigInteger sum = num1 + num2;
Console.WriteLine(sum.ToString("X"));
Console.WriteLine((sum - num2).ToString("X")); //gets a1
Edit:
Looks like num1 gives us a negative number. That's probably not what you want. To fix that, read: MSDN: BigInteger.Parse Method
"If value is a hexadecimal string, the Parse(String, NumberStyles)
method interprets value as a negative number stored by using two's
complement representation if its first two hexadecimal digits are
greater than or equal to 0x80. In other words, the method interprets
the highest-order bit of the first byte in value as the sign bit. To
make sure that a hexadecimal string is correctly interpreted as a
positive number, the first digit in value must have a value of zero.
For example, the method interprets 0x80 as a negative value, but it
interprets either 0x080 or 0x0080 as a positive value."
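A sketch of the fix the quoted documentation describes, assuming the strings are meant to be unsigned magnitudes: prepend "0" before parsing so the highest-order bit is never read as a sign bit.
using System;
using System.Globalization;
using System.Numerics;

class Program
{
    static void Main()
    {
        string a1 = "B91EFEBFBDBDBFEFF39ABEE";
        string a2 = "000FFFFFFFFFFFFFFFFFEEE";

        // Prefixing "0" forces a positive interpretation, per the quoted documentation.
        BigInteger num1 = BigInteger.Parse("0" + a1, NumberStyles.HexNumber);
        BigInteger num2 = BigInteger.Parse("0" + a2, NumberStyles.HexNumber);

        BigInteger sum = num1 + num2;
        Console.WriteLine(sum.ToString("X"));          // arithmetic sum, printed as hex
        Console.WriteLine((sum - num2).ToString("X")); // recovers a1 (possibly with a leading 0)
    }
}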
I have a number
int number = 509; // integer
string bool_number = Convert.ToString(number, 2); // the same integer converted to a binary string
I want to bitwise OR this number with the hex values 0x01, 0x02, 0x04 and 0x08.
(e.g. something like this)
result = number | 0x01
How can I do it? Should I convert number to hex form first, or what's the right way?
You can use hexadecimal values as numeric literals...
int number = 509;
int hexNumber = 0x02;
int newNumber = number | hexNumber;
// whatever
string newNumberAsBinaryString = Convert.ToString(newNumber, 2);
Console.WriteLine(newNumber);
// etc.
If you need to input a hex string and convert it to a numeric type:
int num = Int32.Parse(hexString, System.Globalization.NumberStyles.HexNumber);
If you need to output a numeric type as hex:
Console.WriteLine(num.ToString("x"));
// or
Console.WriteLine("{0:x}", num);
See also MSDN's page on dealing with hex strings.
An int value isn't in any particular base. You can use bitwise operators on an int at any time - there's no need to convert it first. For example:
int a = 509;
int b = 0x1fd;
The variables a and b have exactly the same value here. I happen to have used a decimal literal to initialize a, and a hex literal to initialize b, but the effect is precisely the same.
So you can bitwise OR your ints at any time. Your example (adding a suitable declaration and semicolon to make it compile):
int result = number | 0x01;
will work just fine - you don't need to do anything to prepare number for this sort of usage. (Incidentally, this will do nothing, because the result of a bitwise OR of the numbers 509 and 1 is 509. If you write 509 in binary you get 111111101 - the bottom bit is already 1, so ORing in 1 won't change anything.)
You should avoid thinking in terms of things like "hex values", because there isn't really any such thing in C#. Numeric bases are only relevant for numbers represented as strings, which typically means either literals in source code, or conversions done at runtime. For example, if your program accepts a number as a command line argument, then that will arrive as a string, so you'll need to know its base to convert it correctly to an int. But once it's an int it's just an int - there's no such thing as a hex value or a decimal value for an int.
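To tie this back to the question, a small example that ORs in all four flag values from the question and prints the result in decimal, hex, and binary:
int number = 509;

// OR in each flag; 509 is 111111101 in binary, so only the 0x02 bit actually changes.
int result = number | 0x01 | 0x02 | 0x04 | 0x08;

Console.WriteLine(result);                      // 511
Console.WriteLine(result.ToString("x"));        // 1ff
Console.WriteLine(Convert.ToString(result, 2)); // 111111111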
I have this program that takes all the digits from a double variable, ignores decimal marks and minus signs, and adds every digit separately. Here it is:
static void Main(string[] args)
{
    double input = double.Parse(Console.ReadLine());
    char[] chrArr = input.ToString().ToCharArray();
    input = 0;
    foreach (var ch in chrArr)
    {
        string somestring = Convert.ToString(ch);
        int someint = 0;
        bool z = int.TryParse(somestring, out someint);
        if (z == true)
        {
            input += (ch - '0');
        }
    }
}
The problem is that when I enter, for example, "9999999999999999999999999999999..." and so on, it gets represented as 1.0E+254, so my program just adds 1+0+2+5+4 and finishes. Is there an efficient way to make this work properly? I tried using string instead of double, but it works too slowly.
You can't store "9999999999999999999999999999999..." as a double - a double only has 15 or 16 digits of precision. The compiler is giving you the closest double it can represent to what you're asking, which is 1E254.
I'd look into why using string was slow, or use BigInteger.
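For comparison, a sketch that sums the digits straight from the raw input string in a single pass (what to do with non-digit characters is an assumption here; this simply skips the minus sign, decimal separators, and anything else that is not 0-9):
string input = Console.ReadLine();

long digitSum = 0;
foreach (char ch in input)
{
    if (ch >= '0' && ch <= '9')   // skip '-', '.', ',' and anything else that is not a digit
        digitSum += ch - '0';
}

Console.WriteLine(digitSum);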
As other answers indicate, what's stored will not be exactly the digits entered, but will be the closest double value that can be represented.
If you want to inspect all of its digits, though, use F0 as the format string.
char[] chrArr = input.ToString("F0").ToCharArray();
You can store a larger number in a Decimal, as it is a 128-bit number compared to the 64-bit Double.
But there is obviously still a limit.
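For reference, the limit is easy to see:
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (28-29 significant digits)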
How do I replace the decimal part of a currency value with 0's?
Here's my currency: 166.7
This is to be formatted as 000000016670.
The length of this field is 12.
s.PadRight(12, '0'); I believe this will handle the second part.
The first part will involve replacing the digits after the decimal point with 0's.
Thanks
You can multiply by 100 then format the number.
var num = 166.7;
var numString = (num * 100).ToString("000000000000");
Multiplying by 100 turns 166.7 into 16670. Next you need to pad the left of the number with zeros, which is what the ToString format does: each 0 in the format string represents a digit position, and if the number has no digit at that position, a 0 is printed.
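If the value is money, a variant of the same idea using decimal avoids any binary floating-point rounding before formatting (a sketch; the 166.7 value is taken from the question):
decimal num = 166.7m;

// 166.7m * 100 is exactly 16670.0, and the custom format pads the result to 12 digits.
string numString = (num * 100).ToString("000000000000");

Console.WriteLine(numString); // 000000016670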
We have an interesting problem where we need to determine the decimal precision of a user's input (textbox). Essentially we need to know the number of decimal places entered and then return a precision number. This is best illustrated with examples:
4500 entered will yield a result 1
4500.1 entered will yield a result 0.1
4500.00 entered will yield a result 0.01
4500.450 entered will yield a result 0.001
We are thinking to work with the string, finding the decimal separator and then calculating the result. Just wondering if there is an easier solution to this.
I think you should just do what you suggested and use the position of the decimal point. An obvious drawback is that you have to handle internationalization yourself.
var decimalSeparator = NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator;
var position = input.IndexOf(decimalSeparator);
var precision = (position == -1) ? 0 : input.Length - position - 1;
// This may be quite imprecise.
var result = Math.Pow(0.1, precision);
There is another thing you could try - the Decimal type stores an internal precision value. Therefore you could use Decimal.TryParse() and inspect the returned value. Maybe the parsing algorithm maintains the precision of the input.
Finally, I would suggest not trying anything based on floating point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks. You might run into precision issues, and finding the precision from a floating point number is not simple either. I see some ugly math, or a loop multiplying by ten every iteration until there is no longer any fractional part - and the loop comes with new precision issues...
UPDATE
Parsing into a decimal works. See Decimal.GetBits() for details.
var input = "123.4560";
var number = Decimal.Parse(input);
// Will be 4.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;
From here using Math.Pow(0.1, precision) is straight forward.
UPDATE 2
Using decimal.GetBits() will allocate an int[] array. If you want to avoid the allocation, you can use the following helper method, which uses an explicit-layout struct to read the scale directly out of the decimal value:
// Requires: using System.Runtime.InteropServices;
static int GetScale(decimal d)
{
    return new DecimalScale(d).Scale;
}

[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
    public DecimalScale(decimal value)
    {
        this = default;
        this.d = value;
    }

    [FieldOffset(0)]
    decimal d;

    // Overlays decimal's internal flags field (the first field); the scale lives in bits 16-23.
    [FieldOffset(0)]
    int flags;

    public int Scale => (flags >> 16) & 0xff;
}
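A short usage example for the helper (the invariant culture is an assumption here; parse with whatever culture the input actually uses):
// Requires: using System.Globalization;
var d = decimal.Parse("123.4560", CultureInfo.InvariantCulture);
Console.WriteLine(GetScale(d)); // 4, i.e. four digits after the decimal separator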
Just wondering if there is an easier solution to this.
No.
Use string:
string[] res = inputstring.Split('.');
int precision = res[1].Length; // assumes the input actually contains a '.'
Since your last examples indicate that trailing zeroes are significant, I would rule out any numerical solution and go for the string operations.
No, there is no easier solution, you have to examine the string. If you convert "4500" and "4500.00" to numbers, they both become the value 4500 so you can't tell how many non-value digits there were behind the decimal separator.
As an interesting aside, the Decimal tries to maintain the precision entered by the user. For example,
Console.WriteLine(5.0m);
Console.WriteLine(5.00m);
Console.WriteLine(Decimal.Parse("5.0"));
Console.WriteLine(Decimal.Parse("5.00"));
Has output of:
5.0
5.00
5.0
5.00
If your motivation in tracking the precision of the input is purely for input and output reasons, this may be sufficient to address your problem.
Working with the string is easy enough.
If there is no "." in the string, return 1.
Else return "0.", followed by n-1 "0", followed by one "1", where n is the length of the string after the decimal point.
Here's a possible solution using strings:
static double GetPrecision(string s)
{
    string[] splitNumber = s.Split('.');
    if (splitNumber.Length > 1)
    {
        return 1 / Math.Pow(10, splitNumber[1].Length);
    }
    else
    {
        return 1;
    }
}
There is a related question, Calculate System.Decimal Precision and Scale, which looks like it might be of interest if you wish to delve into this some more.