How to convert Math.Ceiling result to int? - C#

Math.Ceiling returns double because a double can represent much larger values than an int.
However, if I'm sure that the result fits into an int, how should I convert it? Is it safe to cast, as in (int)Math.Ceiling(...)?

If you are sure that you do not exceed the capacity of int, it should be perfectly safe to do
int myInt = (int)Math.Ceiling(...);
If you are not sure about the bound, you could go with long instead of int.
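If the input might be out of range after all, a defensive version could look like this (a minimal sketch; someValue stands in for your expression):
double d = Math.Ceiling(someValue);
if (d < int.MinValue || d > int.MaxValue)
    throw new OverflowException("Ceiling result does not fit in an int.");
int myInt = (int)d;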

From C++ practice, I would use the following. It is guaranteed to give the correct result even if ceiling were to return something like 99.99999...8 or 100.000000...1:
var result = (int)(Math.Ceiling(value) + 0.5);
The code below should work too, if you trust Convert's rounding behavior (here value is the double returned by Math.Ceiling):
var result = Convert.ToInt32(value);

If it's all about speed, then Math.Ceiling is quite slow for int inputs and outputs. The fastest is an inline expression: 2.4 seconds vs. 33 ms in my benchmark below.
Warning: only valid for positive value and divisor values.
A) Modulus
Here's one I came up with, that has obviously also been found by C/C++ developers before:
var ceilingResult = (value / divisor) + (value % divisor == 0 ? 0 : 1);
From my own benchmark of 10M iterations, Math.Ceiling takes ~2.4 seconds, calling this expression inside a named function takes ~380 ms, and having it as a direct inline expression takes ~33 ms.
B) Simple arithmetic only
Also consider using the suggestion from @mafu:
var ceilingResult = (value + divisor - 1) / divisor;
See this 470x upvoted C++ answer for reference and validation. Also https://stackoverflow.com/a/4175152/887092.
C) DivRem
While looking at this answer, https://stackoverflow.com/a/14878734/887092, I noticed the comment that reminded me about DivRem CPU instructions. See https://learn.microsoft.com/en-us/dotnet/api/system.math.divrem?view=netframework-4.8. Math.DivRem should get resolved down to such a CPU instruction.
var quotient = Math.DivRem(value, divisor, out long remainder);
var ceilingResult = quotient + (remainder == 0 ? 0 : 1);
[I have not tested this.] See https://stackoverflow.com/a/924160/887092 for potential edge cases where negative int values are used.
Further optimisations might be possible, maybe with casting. In that same answer, https://stackoverflow.com/a/924160/887092, an if-conditional statement is used instead of the ternary - the two are about the same.
Performance of the 3:
Modulus: Has two operations that are added, but also a conditional branch.
Arithmetic: Has some additional mathematical operations in a sequence.
DivRem: Builds on the Modulus approach. If C# does resolve Math.DivRem to a single CPU instruction, this might be faster. Further optimisations might also be possible.
I'm not sure how these would perform on various architectures, but now you have options to explore - see the function versions sketched below.
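If you want to compare the three yourself, wrapping them as functions could look like this (my sketch, not benchmarked code; the same positive-inputs caveat applies):
static int CeilModulus(int value, int divisor) =>
    value / divisor + (value % divisor == 0 ? 0 : 1);

static int CeilArithmetic(int value, int divisor) =>
    (value + divisor - 1) / divisor;

static int CeilDivRem(int value, int divisor)
{
    int quotient = Math.DivRem(value, divisor, out int remainder);
    return quotient + (remainder == 0 ? 0 : 1);
}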
If you want the Math.Floor equivalent for int inputs and output, it's even easier:
var floorResult = (value / divisor);

I'd go with
int x = (int)Math.Ceiling(0.9); // 1

If you are uncertain, you can always add an if statement and check whether the number you get back is higher than int.MaxValue.

Since Math.Ceiling returns double and you want to convert the result to int, use the Convert class:
int oInt = Convert.ToInt32(Math.Ceiling(value));
Example:
double[] values = { Double.MinValue, -1.38e10, -1023.299, -12.98,
                    0, 9.113e-16, 103.919, 17834.191, Double.MaxValue };
int result;
foreach (double value in values)
{
    try
    {
        result = Convert.ToInt32(value);
        Console.WriteLine("Converted the {0} value '{1}' to the {2} value {3}.",
                          value.GetType().Name, value,
                          result.GetType().Name, result);
    }
    catch (OverflowException)
    {
        Console.WriteLine("{0} is outside the range of the Int32 type.", value);
    }
}
// -1.79769313486232E+308 is outside the range of the Int32 type.
// -13800000000 is outside the range of the Int32 type.
// Converted the Double value '-1023.299' to the Int32 value -1023.
// Converted the Double value '-12.98' to the Int32 value -13.
// Converted the Double value '0' to the Int32 value 0.
// Converted the Double value '9.113E-16' to the Int32 value 0.
// Converted the Double value '103.919' to the Int32 value 104.
// Converted the Double value '17834.191' to the Int32 value 17834.
// 1.79769313486232E+308 is outside the range of the Int32 type.

Related

Why does this checked calculation not throw OverflowException?

Can somebody please explain the following behavior:
static void Main(string[] args)
{
    checked
    {
        double d = -1d + long.MinValue; // this resolves at runtime to -9223372036854780000.00
        //long obviousOverflow = -9223372036854780000; // compile-time error: '-' cannot be applied to an operand of type 'ulong' -> this makes it obvious that -9223372036854780000 overflows a long.
        double one = 1;
        long lMax = (long)(one + long.MaxValue);  // THROWS
        long lMin = (long)(-one + long.MinValue); // THEN WHY DOES THIS NOT THROW?
    }
}
I don't understand why I'm not getting an OverflowException on the last line of code.
UPDATE: Updated the code to make it obvious that checked does throw when casting a double to long, except in the last case.
You are calculating with double values (-1d). Floating-point operations do not throw on .NET; checked does not influence them in any way.
But the conversion back to long is influenced by checked. one + long.MaxValue produces a double that does not fit into the range of long. -one + long.MinValue does fit into that range. The reason is that signed integers have one more negative number than positive numbers: long.MinValue has no positive counterpart. That's why the negative version of your code happens to fit and the positive version does not.
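A quick way to see this (a small illustration I'm adding; both lines print True):
Console.WriteLine((double)long.MinValue == -9223372036854775808d); // -2^63 is exactly representable as a double
Console.WriteLine((double)long.MaxValue == 9223372036854775808d);  // 2^63 - 1 rounds up to 2^63, one past long.MaxValue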
The addition operation does not change anything:
Debug.Assert((double)(1d + long.MaxValue) == (double)(0d + long.MaxValue));
Debug.Assert((double)(-1d + long.MinValue) == (double)(-0d + long.MinValue));
The numbers we are calculating with are outside the range where double is precise. double can represent integers up to 2^53 exactly. We have rounding errors here; adding one is the same as adding zero. Essentially, you are computing:
var min = (long)(double)(long.MinValue); //does not overflow
var max = (long)(double)(long.MaxValue); //overflows (compiler error)
The add operation is a red herring. It does not change anything.
Apparently there is some leeway in the conversion from double to long. If we run the following code:
checked
{
    double longMinValue = long.MinValue;
    var i = 0;
    while (true)
    {
        long test = (long)(longMinValue - i);
        Console.WriteLine("Works for " + i++.ToString() + " => " + test.ToString());
    }
}
It goes up to Works for 1024 => -9223372036854775808 before failing with an OverflowException, with the printed -9223372036854775808 value never changing as i grows.
If we run the code unchecked, no exception is thrown.
This behavior is not coherent with the documentation on explicit numeric conversions that says:
When you convert from a double or float value to an integral type, the
value is truncated. If the resulting integral value is outside the
range of the destination value, the result depends on the overflow
checking context. In a checked context, an OverflowException is
thrown, while in an unchecked context, the result is an unspecified
value of the destination type.
But as the example shows, the truncation doesn't occur immediately.
I assume this is the question:
I don't understand why I'm not getting an OverflowException in the last
line of code.
Have a look at this line (the 1d you are using is insignificant and can be removed; the only thing it provided was the conversion to double):
var max = (long)(double)long.MaxValue;
It throws because the closest double representation of Int64.MaxValue (I do not know the spec, so I won't go into what "closest" means here) is larger than the largest Int64, ergo it cannot be converted back.
var min = (long)(double)long.MinValue;
For this line, on the other hand, the closest double representation is between Int64.MinValue and 0, so it can be converted back to Int64.
What I just said does not hold for all combinations of JITter, hardware, etc., but I'm trying to explain what happens. Remember that in your case the exception is thrown because of the checked keyword; without it, the conversion would just silently swallow the overflow.
I would also recommend having a look at BitConverter.GetBytes() to experiment with what happens when you go from double to long and back with large numbers; comparing decimal and double is interesting too. (The byte representation is the only representation you can trust, by the way - don't rely on the debugger's display for precision when it comes to double.)
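A minimal version of that experiment (my sketch; the byte strings shown assume a little-endian machine):
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes((double)long.MaxValue)));
// 00-00-00-00-00-00-E0-43  (2^63: sign 0, biased exponent 0x43E, mantissa 0)
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes((double)long.MinValue)));
// 00-00-00-00-00-00-E0-C3  (-2^63: same magnitude, sign bit set)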

Put number in range c#

This has maybe been widely discussed, but I can't find a proper answer yet. Here is my problem: I want to map a number into a given range, but the number is random. I don't use
Random rand = new Random();
rand.Next(0,100);
the number comes from GetHashCode(), and I have to put it in the range [0, someArray.Length).
I tried:
int a = 12345;
int currentIndex = a.GetHashCode();
currentIndex % someArray.Length + someArray.Length
but it doesn't work. I will appreciate any help.
I'd go for (hash & 0x7FFFFFFF) % modulus. The masking ensures that the input is positive, and then the remainder operator % maps it into the target range.
Alternatives include:
result = hash % modulus;
if(result < 0)
result += modulus;
and
result = ((hash % modulus) + modulus) % modulus;
What unfortunately doesn't work is
result = Math.Abs(hash) % modulus
because Math.Abs(int.MinValue) throws an OverflowException; int.MinValue has no positive counterpart in Int32. To fix this approach one could cast to long first:
result = (int)(Math.Abs((long)hash) % modulus);
All of these methods introduce a minor bias for some input ranges and modulus values, since unless the number of input values is an integral multiple of the modulus they can't be mapped to each output value with the same probability. In some contexts this can be a problem, but it's fine for hashtables.
If you mainly care about performance then the masking solution is preferable since & is cheap compared to % or branching.
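Wrapped up as a helper, the masking approach might look like this (a sketch; assumes modulus > 0):
static int ToIndex(int hash, int modulus)
{
    return (hash & 0x7FFFFFFF) % modulus; // clear the sign bit, then reduce
}

int currentIndex = ToIndex(a.GetHashCode(), someArray.Length);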
The proper way to handle negative values is to use double-modulus.
int currentIndex = ((a.GetHashCode() % someArray.Length) + someArray.Length) % someArray.Length;
Introduce some variables into the mix:
int len = someArray.Length;
int currentIndex = ((a.GetHashCode() % len) + len) % len;
The first modulus puts the value in the range -(len-1) up to (len-1); adding len shifts that to the range 1 up to (2*len - 1); the second modulus then puts the value in the range 0 to (len-1), which is what you want.
This method will handle all valid values of a.GetHashCode(), no need to special-handle int.MinValue or int.MaxValue.
Note that this method will ensure that if you add one to the input (which is a.GetHashCode() in this case, so might not matter), you'll end up adding one to the output (which will wrap around to 0 when it reaches the end). Methods that uses Math.Abs or bitwise manipulation to ensure a positive value might not work like that for negative numbers. It depends on what you want.
You should be able to use:
int currentIndex = (a.GetHashCode() & 0x7FFFFFFF) % someArray.Length;
Note that, depending on the array length and the implementation of GetHashCode, this may not give a uniform distribution. This is especially true if you use an Int32 as in your sample code: Int32.GetHashCode just returns the integer itself, so there's no need to call GetHashCode at all.

C#: The result of casting a negative integer to a byte

I was looking at the source code of a project, and I noticed the following statement (both keyByte and codedByte are of type byte):
return (byte)(keyByte - codedByte);
I'm trying now to understand what would the result be in cases where keyByte is smaller than codedByte, which results in a negative integer.
After some experiments to understand the result of casting a negative integer which has a value in the range [-255 : -1], I got the following results:
byte result = (byte) (-6); // result = 250
byte result = (byte) (-50); // result = 206
byte result = (byte) (-17); // result = 239
byte result = (byte) (-20); // result = 236
So, provided that -256 < a < 0, I was able to determine the result with:
result = 256 + a;
My question is: should I always expect this to be the case?
Yes, that will always be the case (i.e. it is not simply dependent on your environment or compiler, but is defined as part of the C# language spec). See http://msdn.microsoft.com/en-us/library/aa691349(v=vs.71).aspx:
In an unchecked context, the result is truncated by discarding any high-order bits that do not fit in the destination type.
The next question is, if you take away the high-order bits of a negative int between -256 and -1, and read it as a byte, what do you get? This is what you've already discovered through experimentation: it is 256 + x.
Note that endianness does not matter because we're discarding the high-order (or most significant) bits, not the "first" 24 bits. So regardless of which end we took it from, we're left with the least significant byte that made up that int.
Yes. Remember, there's no such thing as "-" in the domain of a .NET Byte:
http://msdn.microsoft.com/en-us/library/e2ayt412.aspx
Because Byte is an unsigned type, it cannot represent a negative
number. If you use the unary minus (-) operator on an expression that
evaluates to type Byte, Visual Basic converts the expression to Short
first. (Note: substitute any CLR/.Net language for "Visual Basic")
ADDENDUM:
Here's a sample app:
using System;

namespace TestByte
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = -255; i < 256; i++)
            {
                byte b = (byte)i;
                System.Console.WriteLine("i={0}, b={1}", i, b);
            }
        }
    }
}
And here's the resulting output:
testbyte|more
i=-255, b=1
i=-254, b=2
i=-253, b=3
i=-252, b=4
i=-251, b=5
...
i=-2, b=254
i=-1, b=255
i=0, b=0
i=1, b=1
...
i=254, b=254
i=255, b=255
Here is an algorithm that performs the same logic as casting to byte, to help you understand it:
For positives:
byte bNum = (byte)(iNum % 256);
For negatives:
byte bNum = (byte)(256 + (iNum % 256));
It's like searching for the k which causes x + 256k to be in the range 0 ... 255. There can only be one such k, and the result it produces is the result of casting to byte.
Another way of looking at it is as if it "cycles around the byte value range":
Let's use iNum = -712 and define bNum = 0.
We shall do iNum++; bNum--; until iNum == 0:
iNum = -712;
bNum = 0;
iNum++; // -711
bNum--; // 255 (cycles to the maximum value)
iNum++; // -710
bNum--; // 254
... // And so on, as if the iNum value is being *consumed* within the byte value range cycle.
This is, of course, just an illustration to see how logically it works.
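To check the formulas above against an actual cast (a small verification sketch of my own; note that -712 % 256 is -200 in C#, because % keeps the dividend's sign):
int iNum = -712;
byte viaCast = unchecked((byte)iNum);         // 56
byte viaFormula = (byte)(256 + (iNum % 256)); // 256 + (-200) = 56
Console.WriteLine(viaCast == viaFormula);     // True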
This is what happens in an unchecked context. You could say that the runtime (or the compiler, if the Int32 you cast to Byte is known at compile time) adds or subtracts 256 as many times as needed until it finds a representable value.
In a checked context, an exception (or a compile-time error) results. See http://msdn.microsoft.com/en-us/library/khy08726.aspx
Yes - unless you get an exception.
.NET defines all arithmetic operations only on 4-byte and larger data types. So the only non-obvious point is how converting an int to a byte works.
For a conversion from an integral type to another integral type, the result of conversion depends on overflow checking context (says the ECMA 334 standard, Section 13.2.1).
So, in the following context
checked
{
    return (byte)(keyByte - codedByte);
}
you will see a System.OverflowException. Whereas in the following context:
unchecked
{
    return (byte)(keyByte - codedByte);
}
you are guaranteed to always see the results that you expect, regardless of whether you do or don't add a multiple of 256 to the difference; for example, (byte)(2 - 255) == 3.
This is true regardless of how the hardware represents signed values. The CLR standard (ECMA 335) specifies, in Section 12.1, that the Int32 type is a "32-bit two's-complement signed value". (Well, that also matches all platforms on which .NET or mono is currently available anyway, so one could almost guess that it would work anyway, but it is good to know that the practice is supported by the language standard and portable.)
Some teams do not want to specify overflow-checking contexts explicitly, because they have a policy of checking for overflows early in the development cycle but not in released code. In these cases you can do byte arithmetic safely in either context by masking first (note that % in C# keeps the sign of its dividend, so (keyByte - codedByte) % 256 can still be negative):
return (byte)((keyByte - codedByte) & 0xFF);

Fastest way to sum digits in a number

Given a large number, e.g. 9223372036854775807 (Int64.MaxValue), what is the quickest way to sum the digits?
Currently I am ToString-ing the number and re-parsing each char into an int:
num.ToString().Sum(c => int.Parse(new String(new char[] { c })));
which is surely horrifically inefficient. Any suggestions?
And finally, how would you make this work with BigInteger?
Thanks
Well, another option is:
int sum = 0;
while (value != 0)
{
    long remainder;
    value = Math.DivRem(value, 10, out remainder); // assuming value is a long, per the question
    sum += (int)remainder;
}
BigInteger has a DivRem method as well, so you could use the same approach.
Note that I've seen DivRem not be as fast as doing the same arithmetic "manually", so if you're really interested in speed, you might want to consider that.
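The "manual" variant would look something like this (my sketch of that alternative, assuming value is a long):
int sum = 0;
while (value != 0)
{
    long quotient = value / 10;
    sum += (int)(value - quotient * 10); // remainder without a separate % operation
    value = quotient;
}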
Also consider a lookup table with (say) 1000 elements precomputed with the sums:
int sum = 0;
while (value != 0)
{
    long remainder;
    value = Math.DivRem(value, 1000, out remainder);
    sum += lookupTable[remainder];
}
That would mean fewer iterations, but each iteration has an added array access...
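The table itself could be precomputed like this (my sketch; the answer doesn't spell it out):
int[] lookupTable = new int[1000];
for (int i = 0; i < 1000; i++)
    lookupTable[i] = i / 100 + (i / 10) % 10 + i % 10; // digit sum of 0..999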
Nobody has discussed the BigInteger version. For that I'd look at 10^1, 10^2, 10^4, 10^8 and so on until you find the last 10^(2^n) that is less than your value. Take your number div and mod 10^(2^n) to come up with 2 smaller values. Wash, rinse, and repeat recursively. (You should keep your iterated squares of 10 in an array, and in the recursive part pass along the information about the next power to use.)
With a BigInteger of k digits, dividing by 10 is O(k). Therefore finding the sum of the digits with the naive algorithm is O(k^2).
I don't know what C# uses internally, but the non-naive algorithms out there for multiplying or dividing a k-bit integer by a k-bit integer all work in time O(k^1.6) or better (most are much, much better, but have an overhead that makes them worse for "small big integers"). In that case preparing your initial list of powers and splitting once takes time O(k^1.6). This gives you 2 problems of size O((k/2)^1.6) = 2^(-0.6) * O(k^1.6). At the next level you have 4 problems of size O((k/4)^1.6) for another 2^(-1.2) * O(k^1.6) of work. Add up all of the terms and the powers of 2 turn into a geometric series converging to a constant, so the total work is O(k^1.6).
This is a definite win, and the win will be very, very evident if you're working with numbers in the many thousands of digits.
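A rough sketch of that divide-and-conquer scheme (my own rendering of the description above; the names and structure are assumptions, built on System.Numerics.BigInteger):
using System.Collections.Generic;
using System.Numerics;

static int DigitSum(BigInteger n)
{
    // Iterated squares of 10: 10^1, 10^2, 10^4, 10^8, ...
    var powers = new List<BigInteger> { 10 };
    while (powers[powers.Count - 1] * powers[powers.Count - 1] <= n)
        powers.Add(powers[powers.Count - 1] * powers[powers.Count - 1]);
    return DigitSum(n, powers, powers.Count - 1);
}

static int DigitSum(BigInteger n, List<BigInteger> powers, int i)
{
    if (i < 0) return (int)n; // a single digit remains
    // Split n around 10^(2^i) and recurse on both halves with the next smaller power.
    BigInteger hi = BigInteger.DivRem(n, powers[i], out BigInteger lo);
    return DigitSum(hi, powers, i - 1) + DigitSum(lo, powers, i - 1);
}
For example, DigitSum(BigInteger.Parse("9223372036854775807")) returns 88.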
Yes, it's probably somewhat inefficient. I'd probably just repeatedly divide by 10, adding together the remainders each time.
The first rule of performance optimization: don't divide when you can multiply instead. The following function takes four-digit numbers 0-9999 and does what you ask. The intermediate calculations are larger than 16 bits. We multiply the number by 10/10000 (i.e. 1/1000) and take the result as a Q16 fixed-point number. Digits are then extracted by multiplying by 10 and taking the integer part.
#define TEN_OVER_10000 ((1 << 25) / 1000 + 1)  // 0.001 in Q25

int sum_digits(unsigned int n)
{
    int c;
    int sum = 0;
    n = (n * TEN_OVER_10000) >> 9;  // n * 10/10000 in Q16
    for (c = 0; c < 4; c++)
    {
        printf("Digit: %d\n", n >> 16);
        sum += n >> 16;
        n = (n & 0xffff) * 10;  // next digit
    }
    return sum;
}
This can be extended to larger sizes, but it's tricky. You need to ensure that the rounding in the fixed-point calculation always works correctly. I also limited it to 4-digit numbers so the intermediate result of the fixed-point multiply would not overflow.
Int64 BigNumber = 9223372036854775807;
String BigNumberStr = BigNumber.ToString();
int Sum = 0;
foreach (Char c in BigNumberStr)
    Sum += (byte)c;

// 48 is the ASCII value of '0'; remove it in one step rather than in the loop
Sum -= 48 * BigNumberStr.Length;
Instead of int.Parse, why not subtract '0' from each digit character to get its actual value?
Remember, '9' - '0' == 9, so you can do this in O(k), where k is the length of the number. The subtraction is just one operation, so it should not slow things down.
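In code, the idea is just (a minimal sketch, reusing the question's num):
int sum = 0;
foreach (char c in num.ToString())
    sum += c - '0'; // '0'..'9' are consecutive code points, so this yields 0..9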

Determine the decimal precision of an input number

We have an interesting problem where we need to determine the decimal precision of a user's input (a textbox). Essentially, we need to know the number of decimal places entered and then return a precision number. This is best illustrated with examples:
4500 entered will yield a result 1
4500.1 entered will yield a result 0.1
4500.00 entered will yield a result 0.01
4500.450 entered will yield a result 0.001
We are thinking of working with the string, finding the decimal separator and then calculating the result from that. Just wondering if there is an easier solution to this.
I think you should just do what you suggested and use the position of the decimal point. An obvious drawback is that you have to think about internationalization yourself.
var decimalSeparator = NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator;
var position = input.IndexOf(decimalSeparator);
var precision = (position == -1) ? 0 : input.Length - position - 1;
// This may be quite imprecise.
var result = Math.Pow(0.1, precision);
There is another thing you could try: the Decimal type stores an internal precision value. Therefore you could use Decimal.TryParse() and inspect the returned value; maybe the parsing algorithm maintains the precision of the input.
Finally, I would suggest not trying anything based on floating-point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks. You might run into precision issues. And finding the precision from a floating-point number is not simple either: I see some ugly math, or a loop multiplying by ten every iteration until there is no longer any fractional part - and the loop comes with new precision issues of its own...
UPDATE
Parsing into a decimal works. See Decimal.GetBits() for details.
var input = "123.4560";
var number = Decimal.Parse(input);
// Will be 4.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;
From here, using Math.Pow(0.1, precision) is straightforward.
UPDATE 2
Using decimal.GetBits() will allocate an int[] array. If you want to avoid the allocation, you can use the following helper method, which uses an explicit-layout struct to get the scale directly out of the decimal value:
static int GetScale(decimal d)
{
    return new DecimalScale(d).Scale;
}

[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
    public DecimalScale(decimal value)
    {
        this = default;
        this.d = value;
    }

    [FieldOffset(0)]
    decimal d;

    [FieldOffset(0)]
    int flags;

    public int Scale => (flags >> 16) & 0xff;
}
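A quick usage check (the C# compiler preserves the scale of decimal literals, so the trailing zero counts):
Console.WriteLine(GetScale(123.4560m)); // 4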
Just wondering if there is an easier
solution to this.
No.
Use string:
string[] res = inputstring.Split('.');
int precision = res.Length > 1 ? res[1].Length : 0; // guard against input without a decimal point
Since your last examples indicate that trailing zeroes are significant, I would rule out any numerical solution and go for the string operations.
No, there is no easier solution; you have to examine the string. If you convert "4500" and "4500.00" to numbers, they both become the value 4500, so you can't tell how many non-value digits there were behind the decimal separator.
As an interesting aside, the Decimal tries to maintain the precision entered by the user. For example,
Console.WriteLine(5.0m);
Console.WriteLine(5.00m);
Console.WriteLine(Decimal.Parse("5.0"));
Console.WriteLine(Decimal.Parse("5.00"));
Has output of:
5.0
5.00
5.0
5.00
If your motivation in tracking the precision of the input is purely for input and output reasons, this may be sufficient to address your problem.
Working with the string is easy enough.
If there is no "." in the string, return 1.
Else return "0." followed by n-1 "0"s and a single "1", where n is the number of digits after the decimal point.
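A direct rendering of that description (my sketch; it returns the precision as a string so that trailing zeros keep their meaning, and it assumes '.' as the separator):
static string PrecisionString(string s)
{
    int dot = s.IndexOf('.');
    if (dot == -1) return "1";          // no decimal point, e.g. "4500"
    int n = s.Length - dot - 1;         // digits after the decimal point
    if (n == 0) return "1";             // trailing dot, e.g. "4500."
    return "0." + new string('0', n - 1) + "1"; // e.g. "4500.450" -> "0.001"
}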
Here's a possible solution using strings:
static double GetPrecision(string s)
{
    string[] splitNumber = s.Split('.');
    if (splitNumber.Length > 1)
    {
        return 1 / Math.Pow(10, splitNumber[1].Length);
    }
    else
    {
        return 1;
    }
}
There is a question here; Calculate System.Decimal Precision and Scale which looks like it might be of interest if you wish to delve into this some more.
