Fastest way to sum digits in a number - c#

Given a large number, e.g. 9223372036854775807 (Int64.MaxValue), what is the quickest way to sum the digits?
Currently I am ToStringing and reparsing each char into an int:
num.ToString().Sum(c => int.Parse(new String(new char[] { c })));
Which is surely horrifically inefficient. Any suggestions?
And finally, how would you make this work with BigInteger?
Thanks

Well, another option is:
int sum = 0;
while (value != 0)
{
    long remainder;
    value = Math.DivRem(value, 10, out remainder);
    sum += (int)remainder;
}
BigInteger has a DivRem method as well, so you could use the same approach.
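For illustration, a minimal BigInteger version of the same loop might look like this (a sketch; assumes a non-negative value and a reference to System.Numerics):

using System.Numerics;

static int DigitSum(BigInteger value)
{
    int sum = 0;
    while (!value.IsZero)
    {
        // DivRem returns the quotient and outputs the remainder.
        value = BigInteger.DivRem(value, 10, out BigInteger remainder);
        sum += (int)remainder;
    }
    return sum;
}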
Note that I've seen DivRem not be as fast as doing the same arithmetic "manually", so if you're really interested in speed, you might want to consider that.
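For comparison, the "manual" variant of the loop might look like this (a sketch, assuming value is a long):

int sum = 0;
while (value != 0)
{
    long quotient = value / 10;
    sum += (int)(value - quotient * 10); // remainder via multiply-subtract, no second division
    value = quotient;
}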
Also consider a lookup table with (say) 1000 elements precomputed with the sums:
int sum = 0;
while (value != 0)
{
    long remainder;
    value = Math.DivRem(value, 1000, out remainder);
    sum += lookupTable[remainder];
}
That would mean fewer iterations, but each iteration has an added array access...
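For illustration, the 1000-entry table could be built once along these lines (a sketch; lookupTable matches the name used above):

static readonly int[] lookupTable = BuildLookupTable();

static int[] BuildLookupTable()
{
    var table = new int[1000];
    for (int i = 0; i < 1000; i++)
    {
        // Digit sum of a three-digit chunk 000..999.
        table[i] = i / 100 + (i / 10) % 10 + i % 10;
    }
    return table;
}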

Nobody has discussed the BigInteger version. For that I'd look at 10^1, 10^2, 10^4, 10^8 and so on until you find the last 10^(2^n) that is less than your value. Take your number div and mod 10^(2^n) to come up with 2 smaller values. Wash, rinse, and repeat recursively. (You should keep your iterated squares of 10 in an array, and in the recursive part pass along the information about the next power to use.)
With a BigInteger with k digits, dividing by 10 is O(k). Therefore finding the sum of the digits with the naive algorithm is O(k^2).
I don't know what C# uses internally, but the non-naive algorithms out there for multiplying or dividing a k-bit integer by a k-bit integer all work in time O(k^1.6) or better (most are much, much better, but have an overhead that makes them worse for "small big integers"). In that case preparing your initial list of powers and splitting once takes time O(k^1.6). This gives you 2 problems of size O((k/2)^1.6) = 2^-0.6 * O(k^1.6). At the next level you have 4 problems of size O((k/4)^1.6) for another 2^-1.2 * O(k^1.6) of work. Add up all of the terms and the powers of 2 turn into a geometric series converging to a constant, so the total work is O(k^1.6).
This is a definite win, and the win will be very, very evident if you're working with numbers in the many thousands of digits.
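A rough sketch of that recursive split (my own code, not the poster's; assumes a non-negative BigInteger and System.Numerics):

using System.Collections.Generic;
using System.Numerics;

static int DigitSumRecursive(BigInteger value)
{
    // powers[i] holds 10^(2^i): 10, 100, 10000, 100000000, ...
    var powers = new List<BigInteger> { 10 };
    while (powers[powers.Count - 1] * powers[powers.Count - 1] <= value)
        powers.Add(powers[powers.Count - 1] * powers[powers.Count - 1]);
    return Split(value, powers, powers.Count - 1);
}

static int Split(BigInteger value, List<BigInteger> powers, int level)
{
    if (level < 0)
        return (int)value; // down to a single digit
    // Split into high and low halves around 10^(2^level).
    BigInteger high = BigInteger.DivRem(value, powers[level], out BigInteger low);
    return Split(high, powers, level - 1) + Split(low, powers, level - 1);
}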

Yes, it's probably somewhat inefficient. I'd probably just repeatedly divide by 10, adding together the remainders each time.

The first rule of performance optimization: don't divide when you can multiply instead. The following function will take four-digit numbers (0-9999) and do what you ask. The intermediate calculations are larger than 16 bits. We multiply the number by 1/10000 and take the result as a Q16 fixed-point number. Digits are then extracted by multiplying by 10 and taking the integer part.
#include <stdio.h>

#define TEN_OVER_10000 ((1<<25)/1000 + 1) // 0.001 in Q25

int sum_digits(unsigned int n)
{
    int c;
    int sum = 0;
    n = (n * TEN_OVER_10000) >> 9; // n*10/10000 in Q16
    for (c = 0; c < 4; c++)
    {
        printf("Digit: %d\n", n >> 16);
        sum += n >> 16;
        n = (n & 0xffff) * 10; // next digit
    }
    return sum;
}
This can be extended to larger sizes, but it's tricky. You need to ensure that the rounding in the fixed-point calculation always works correctly. I also limited it to 4-digit numbers so the intermediate result of the fixed-point multiply would not overflow.
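Since the question is about C#, here is my own rough port of the same idea (limited to 0-9999, as above):

const uint TenOver10000 = (1u << 25) / 1000 + 1; // 0.001 in Q25

static int SumDigits4(uint n)
{
    int sum = 0;
    n = (n * TenOver10000) >> 9; // n * 10 / 10000 as a Q16 value
    for (int c = 0; c < 4; c++)
    {
        sum += (int)(n >> 16);   // the integer part is the current digit
        n = (n & 0xffff) * 10;   // move the next digit into the integer part
    }
    return sum;
}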

Int64 BigNumber = 9223372036854775807;
String BigNumberStr = BigNumber.ToString();
int Sum = 0;
foreach (Char c in BigNumberStr)
    Sum += (byte)c;
// 48 is the ASCII value of '0';
// remove it in one step rather than in the loop
Sum -= 48 * BigNumberStr.Length;

Instead of int.Parse, why not subtract '0' from each digit to get the actual value?
Remember, '9' - '0' = 9, so you should be able to do this in O(k), where k is the length of the number. The subtraction is just one operation, so it should not slow things down.
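As a sketch, using the question's num variable:

int sum = 0;
foreach (char c in num.ToString())
    sum += c - '0'; // e.g. '9' - '0' == 9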

Related

Bit reverse numbers by N bits

I am trying to find a simple algorithm that reverses the bits of a number up to N number of bits. For example:
For N = 2:
01 -> 10
11 -> 11
For N = 3:
001 -> 100
011 -> 110
101 -> 101
The only thing I keep finding is how to bit-reverse a full byte, but that's only going to work for N = 8, and that's not always what I need.
Does anyone know an algorithm that can do this bitwise operation? I need to do many of them for an FFT, so I'm looking for something that can be very optimised too.
Here is a C# implementation of the bitwise reverse operation:
public uint Reverse(uint a, int length)
{
    uint b = 0b_0;
    for (int i = 0; i < length; i++)
    {
        b = (b << 1) | (a & 0b_1);
        a = a >> 1;
    }
    return b;
}
The code above shifts the output value to the left, adds the lowest bit of the input to the output, then shifts the input to the right, and repeats until all bits have been processed. Here are some samples:
uint a = 0b_1100;
uint b = Reverse(a, 4); //should be 0b_0011;
And
uint a = 0b_100;
uint b = Reverse(a, 3); //should be 0b_001;
This implementation's time complexity is O(N), where N is the length of the input.
Here's a small look-up table solution that's good for 2 <= N <= 32.
For N==8, I think everyone agrees that a 256-entry lookup table is the way to go. Similarly, for N from 2 to 7, you could create 4-, 8-, ..., 128-entry lookup byte arrays.
For N==16, you could flip each byte and then reorder the two bytes. Similarly, for N==24, you could flip each byte and then reorder things (which would leave the middle one flipped but in the same position). It should be obvious how N==32 would work.
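For instance, the N==16 case might be sketched like this (table construction and names are mine):

static readonly byte[] Rev8 = BuildRev8();

static byte[] BuildRev8()
{
    // 256-entry table: Rev8[i] is i with its 8 bits reversed.
    var table = new byte[256];
    for (int i = 0; i < 256; i++)
    {
        int b = i, r = 0;
        for (int k = 0; k < 8; k++) { r = (r << 1) | (b & 1); b >>= 1; }
        table[i] = (byte)r;
    }
    return table;
}

static uint Reverse16(uint a)
{
    // Flip each byte via the table, then swap the two bytes.
    return (uint)(Rev8[a & 0xff] << 8) | Rev8[(a >> 8) & 0xff];
}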
For N==9, think of it as three 3-bit numbers (flip each of them, reorder them, and then do some masking and shifting to get them in the right position). For N==10, it's two 5-bit numbers. For N==11, it's two 5-bit numbers on either side of a center bit that doesn't change. The same for N==13 (two 6-bit numbers around an unchanging center bit). For a prime like N==23, it would be a pair of 8-bit numbers around a center 7-bit number.
For the odd numbers between 24 and 32 it gets more complicated. You probably need to consider five separate numbers. Consider N==29: that could be four 7-bit numbers around an unchanging center bit. For N==31, it would be a center bit surrounded by a pair of 8-bit numbers and a pair of 7-bit numbers.
That said, that's a ton of complicated logic. It would be a bear to test. It might be faster than @MuhammadVakili's bit-shifting solution (it certainly would be for N<=8), but it might not. I suggest you go with his solution.
Using string manipulation?
static void Main(string[] args)
{
    uint number = 269;
    int numBits = 4;
    string strBinary = Convert.ToString(number, 2).PadLeft(32, '0');
    Console.WriteLine($"{number}");
    Console.WriteLine($"{strBinary}");
    string strBitsReversed = new string(strBinary.Substring(strBinary.Length - numBits, numBits).ToCharArray().Reverse().ToArray());
    string strBinaryModified = strBinary.Substring(0, strBinary.Length - numBits) + strBitsReversed;
    uint numberModified = Convert.ToUInt32(strBinaryModified, 2);
    Console.WriteLine($"{strBinaryModified}");
    Console.WriteLine($"{numberModified}");
    Console.Write("Press Enter to Quit.");
    Console.ReadLine();
}
Output:
269
00000000000000000000000100001101
00000000000000000000000100001011
267
Press Enter to Quit.

Selecting set of binary sequences to avoid similarity

I want to be able to programmatically generate a set of binary sequences of a given length whilst avoiding similarity between any two sequences.
I'll define 'similar' between two sequences thus:
If sequence A can be converted to sequence B (or B to A) by bit-shifting A (non-circularly) and padding with 0s, A and B are similar (note: bit-shifting is allowed on only one of the sequences otherwise both could always be shifted to a sequence of just 0s)
For example: A = 01010101 B = 10101010 C = 10010010
In this example, A and B are similar because a single left-shift of A results in B (A << 1 = B). A and C are not similar because no bit-shifting of one can result in the other.
A set of sequences is defined as dissimilar if no subset of size 2 is similar.
I believe there could be multiple sets for a given sequence length and presumably the size of the set will be significantly less than the total possibilities (total possibilities = 2 ^ sequence length).
I need a way to generate a set for a given sequence length. Does an algorithm exist that can achieve this? Selecting sequences one at a time and checking against all previously selected sequences is not acceptable for my use case (but may have to be if a better method doesn't exist!).
I've tried generating sets of integers based on prime numbers and also the golden ratio, then converting to binary. This seemed like it might be a viable method, but I have been unable to get it to work as expected.
Update: I have written a function in C# that uses a prime number modulo to generate the set, without success. I've also tried using the Fibonacci sequence, which finds a mostly dissimilar set, but of a size that is very small compared to the number of possibilities:
private List<string> GetSequencesFib(int sequenceLength)
{
    var sequences = new List<string>();
    long current = 21;
    long prev = 13;
    long prev2 = 8;
    long size = (long)Math.Pow(2, sequenceLength);
    while (current < size)
    {
        current = prev + prev2;
        sequences.Add(current.ToBitString(sequenceLength));
        prev2 = prev;
        prev = current;
    }
    return sequences;
}
This generates a set of sequences of size 41 that is roughly 60% dissimilar (sequenceLength = 32). It starts at 21 since lower values produce sequences of mostly 0s, which are similar to any other sequence.
By relaxing the conditions of similarity to only allowing a small number of successive bit-shifts, the proportion of dissimilar sequences approaches 100%. This may be acceptable in my use case.
Update 2:
I've implemented a function following DCHE's suggestion, by selecting all odd numbers greater than half the maximum value for a given sequence length:
private static List<string> GetSequencesOdd(int length)
{
    var sequences = new List<string>();
    long max = (long)(Math.Pow(2, length));
    long quarterMax = max / 4;
    for (long n = quarterMax * 2 + 1; n < max; n += 2)
    {
        sequences.Add(n.ToBitString(length));
    }
    return sequences;
}
This produces an entirely dissimilar set as per my requirements. I can see why this works mathematically as well: every member has its top bit set (it is greater than half the maximum) and its bottom bit set (it is odd), while any non-zero shift clears one of those two bits, so no shifted member can equal another member.
I can't prove it, but from my experimenting, I think that your set is the odd integers greater than half of the largest number in binary. E.g. for bit sets of length 3, max integer is 7, so the set is 5 and 7 (101, 111).
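Here is a brute-force check of that claim for small lengths (my own sketch, following the question's definition of similarity; assumes using System and System.Collections.Generic):

static bool AreSimilar(long a, long b, int length)
{
    long mask = (1L << length) - 1;
    for (int s = 1; s < length; s++)
    {
        // One sequence shifted left (zero-padded, within the width) or right equals the other?
        if (((a << s) & mask) == b || ((b << s) & mask) == a) return true;
        if ((a >> s) == b || (b >> s) == a) return true;
    }
    return false;
}

static void Verify(int length)
{
    long max = 1L << length;
    var set = new List<long>();
    for (long n = max / 2 + 1; n < max; n += 2)
        set.Add(n);
    for (int i = 0; i < set.Count; i++)
        for (int j = i + 1; j < set.Count; j++)
            if (AreSimilar(set[i], set[j], length))
                Console.WriteLine($"Similar pair: {set[i]}, {set[j]}");
}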

Put number in range c#

This is widely discussed, maybe, but I can't find a proper answer yet. Here is my problem: I want to put a number into a given range, but the number is random. I don't use
Random rand = new Random();
rand.Next(0,100);
the number comes from GetHashCode(), and I have to put it in the range [0, someArray.Length).
I tried:
int a = 12345;
int currentIndex = a.GetHashCode();
currentIndex % someArray.Length + someArrayLength
but it doesn't work. I would appreciate any help.
I'd go for (hash & 0x7FFFFFFF) % modulus. The masking ensures that the input is positive, and then the remainder operator % maps it into the target range.
Alternatives include:
result = hash % modulus;
if(result < 0)
result += modulus;
and
result = ((hash % modulus) + modulus) % modulus
What unfortunately doesn't work is
result = Math.Abs(hash) % modulus
because Math.Abs(int.MinValue) cannot produce a positive result: in C# it throws an OverflowException, since int.MinValue has no positive counterpart in Int32. To fix this approach one could cast to long:
result = (int)(Math.Abs((long)hash) % modulus);
All of these methods introduce a minor bias for some input ranges and modulus values, since unless the number of input values is an integral multiple of the modulus they can't be mapped to each output value with the same probability. In some contexts this can be a problem, but it's fine for hashtables.
If you mainly care about performance then the masking solution is preferable since & is cheap compared to % or branching.
The proper way to handle negative values is to use double-modulus.
int currentIndex = ((a.GetHashCode() % someArray.Length) + someArray.Length) % someArray.Length;
Introduce some variables into the mix:
int len = someArray.Length;
int currentIndex = ((a.GetHashCode() % len) + len) % len;
This will first make the value range from -(len - 1) up to (len - 1), so when you add len to it, it will range from 1 up to len*2 - 1, and then you use modulus again, which puts the value in the range 0 to len - 1, which is what you want.
This method will handle all valid values of a.GetHashCode(), no need to special-handle int.MinValue or int.MaxValue.
Note that this method ensures that if you add one to the input (which is a.GetHashCode() in this case, so it might not matter), you'll end up adding one to the output (which wraps around to 0 when it reaches the end). Methods that use Math.Abs or bitwise manipulation to ensure a positive value might not behave like that for negative numbers. It depends on what you want.
You should be able to use:
int currentIndex = (a.GetHashCode() & 0x7FFFFFFF) % someArray.Length;
Note that, depending on the array length and implementation of GetHashCode, this may not have a random distribution. This is especially true if you use an Int32 as in your sample code, as Int32.GetHashCode just returns the integer itself, so there's no need to call GetHashCode.

List<T> capacity increasing vs Dictionary<K,V> capacity increasing?

Why does List<T> increase its capacity by a factor of 2?
private void EnsureCapacity(int min)
{
    if (this._items.Length < min)
    {
        int num = (this._items.Length == 0) ? 4 : (this._items.Length * 2);
        if (num < min)
        {
            num = min;
        }
        this.Capacity = num;
    }
}
Why does Dictionary<K,V> use prime numbers as capacity?
private void Resize()
{
    int prime = HashHelpers.GetPrime(this.count * 2);
    int[] numArray = new int[prime];
    for (int i = 0; i < numArray.Length; i++)
    {
        numArray[i] = -1;
    }
    Entry<TKey, TValue>[] destinationArray = new Entry<TKey, TValue>[prime];
    Array.Copy(this.entries, 0, destinationArray, 0, this.count);
    for (int j = 0; j < this.count; j++)
    {
        int index = destinationArray[j].hashCode % prime;
        destinationArray[j].next = numArray[index];
        numArray[index] = j;
    }
    this.buckets = numArray;
    this.entries = destinationArray;
}
Why doesn't it also just multiply by 2? Both are dealing with finding contiguous memory locations... correct?
It's common to use prime numbers for hash table sizes because it reduces the probability of collisions.
Hash tables typically use the modulo operation to find the bucket where an entry belongs, as you can see in your code:
int index = destinationArray[j].hashCode % prime;
Suppose your hashCode function results in the following hashCodes among others {x, 2x, 3x, 4x, 5x, 6x, ...}; then all of these are going to be clustered in just m buckets, where m = table_length/GreatestCommonFactor(table_length, x). (It is trivial to verify/derive this.) Now you can do one of the following to avoid clustering:
Make sure that you don't generate too many hashCodes that are multiples of another hashCode, as in {x, 2x, 3x, 4x, 5x, 6x, ...}. But this may be difficult if your hashTable is supposed to have millions of entries.
Or simply make m equal to table_length by making GreatestCommonFactor(table_length, x) equal to 1, i.e. by making table_length coprime with x. And if x can be just about any number, then make sure that table_length is a prime number.
(from http://srinvis.blogspot.com/2006/07/hash-table-lengths-and-prime-numbers.html)
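To make the clustering effect concrete, here is a tiny illustrative demo (my own, not from the linked post; assumes the usual usings): with hash codes that are all multiples of 4, a table of length 12 uses only 12/gcd(12, 4) = 3 buckets, while a prime length of 13 uses all 13.

static void ClusteringDemo()
{
    foreach (int tableLength in new[] { 12, 13 })
    {
        var used = new HashSet<int>();
        for (int x = 4; x <= 400; x += 4) // hashCodes {4, 8, 12, ...}
            used.Add(x % tableLength);
        Console.WriteLine($"table length {tableLength}: {used.Count} buckets used");
    }
}
// Prints "table length 12: 3 buckets used" and "table length 13: 13 buckets used".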
HashHelpers.GetPrime(this.count * 2)
should return a prime number. Look at the definition of HashHelpers.GetPrime().
Dictionary puts all its objects into buckets depending on their GetHashCode value, i.e.
Bucket[object.GetHashCode() % DictionarySize] = object;
It uses a prime number for the size to reduce the chance of collisions. Presumably a size with many divisors would be bad for poorly designed hash codes.
From a question on SO:
Dictionary or hash table relies on hashing the key to get a smaller index to look up into the corresponding store (array). So the choice of hash function is very important. The typical choice is to get the hash code of a key (so that we get a good random distribution) and then divide the code by a prime number, using the remainder to index into a fixed number of buckets. This allows arbitrarily large hash codes to be converted into a bounded set of small numbers for which we can define an array to look up into. So it's important to have the array size be a prime number, and the best choice for the size then becomes the prime number that is larger than the required capacity. And that's exactly what the dictionary implementation does.
List<T> employs arrays to store data, and increasing the capacity of an array requires copying the array to a new memory location, which is time-consuming. I guess that, in order to lower the occurrence of copying arrays, the list doubles its capacity.
I'm not a computer scientist, but...
Most probably it's related to a hash table's load factor (the last link is just a math definition), and, to avoid creating more confusion for the non-math audience, it's important to define that:
loadFactor = UsedBuckets/AllBuckets
The load factor indicates how full the table is, and hence the probability of a collision in the hash map.
So by using a prime number, a number that
..is a natural number greater than 1 that has no positive divisors other than 1 and itself.
we decrease (but do not erase) the risk of collision in our hashmap.
If the loadFactor tends to 0, we have a safer hashmap, so we always have to keep it as low as possible. Per the MS blog, they found out that the optimal value of the loadFactor has to be around 0.72, so if it becomes bigger, we increase the capacity, following the nearest prime number.
EDIT
To be more clear on this: having a prime number ensures, as much as possible, a uniform distribution of the hashes in this concrete implementation of hashing used in the .NET dictionary. It's not about the efficiency of retrieval of the values, but about the efficiency of the memory used and collision risk reduction.
Hope this helps.
Dictionary needs some heuristic so that hash code distribution among buckets is more uniform.
.NET's Dictionary uses prime number of buckets to do that, and then calculates bucket index like this:
int num = this.comparer.GetHashCode(key) & 2147483647; // make hash code positive
// get the remainder from division - that's our bucket index
int num2 = this.buckets[num % ((int)this.buckets.Length)];
When it grows, it doubles the number of buckets and then adds some more to make the number prime again.
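Roughly, the "double, then bump to a prime" step could be sketched like this (illustrative only; the real HashHelpers also consults a precomputed table of primes):

static int NextPrime(int n)
{
    while (!IsPrime(n)) n++;
    return n;
}

static bool IsPrime(int n)
{
    if (n < 2) return false;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

// On grow: newBucketCount = NextPrime(oldBucketCount * 2);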
It's not the only heuristic possible. Java's HashMap, for example, takes another approach. The number of buckets there is always a power of 2 and on grow it just doubles the number of buckets:
resize(2 * table.length);
But when calculating bucket index it modifies hash:
static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

static int indexFor(int h, int length) {
    return h & (length-1);
}

// from put() method
int hash = hash(key.hashCode()); // get modified hash
int i = indexFor(hash, table.length); // trim the hash to the bucket count
List on the other hand doesn't need any heuristic, so they didn't bother.
Addition: Grow behavior doesn't influence Add's complexity at all. Dictionary, HashMap and List each have amortized Add complexity of O(1).
The grow operation takes O(N) but occurs only every N-th call, so to cause a grow operation we need to call Add N times. For N=8, the time it takes to do N Adds is
O(1)+O(1)+O(1)+O(1)+O(1)+O(1)+O(1)+O(N) = O(N)+O(N) = O(2N) = O(N)
So N Adds take O(N), meaning a single Add is amortized O(1).
Increasing the capacity by a constant factor (instead of, for example, increasing it by an additive constant) when resizing is required to guarantee certain amortized running times. For example, adding to or removing from the end of an array-based list requires O(1) time, except when you have to increase or decrease the capacity, which requires copying the list content and therefore O(n) time. Changing the capacity by a constant factor guarantees that the amortized runtime is still O(1). The optimal value of the factor depends on the expected usage. Some more information on Wikipedia.
Choosing the capacity of a hash table to be prime is done to improve the distribution of the items: bucket[hash % capacity] will yield a more uniform distribution when hash is not uniformly distributed, if capacity is prime. (I cannot give the math behind that, but I am looking for a good reference.) The combination of this with the first point is exactly what the implementation does: increasing the capacity by a factor of (at least) 2 and also ensuring that the capacity is prime.

"Substring" a Numeric Value

In C#, what is the best way to "substring" (for lack of a better word) a long value?
I need to calculate a sum of account numbers for a trailer record but only need the 16 least significant digits.
I am able to do this by converting the value to a string but wondered if there is a better way in which it can be done.
long number = 1234567890123456789L;
const long _MAX_LENGTH = 9999999999999999L;
if (number > _MAX_LENGTH)
{
    string strNumber = number.ToString();
    number = Convert.ToInt64(strNumber.Substring(strNumber.Length - 16));
}
This will return the value 4567890123456789.
You could do:
long number = 1234567890123456789L;
long countSignificant = 16;
long leastSignificant = number % (long) Math.Pow(10, countSignificant);
How does this work? Well, if you divide by 10, you drop off the last digit, right? And the remainder will be that last digit. The same goes for 100, 1000, and generally Math.Pow(10, n).
Let's just look at the least significant digit, because we can do this in our head:
1234 divided by 10 is 123 remainder 4
In c#, that would be:
1234 / 10 == 123;
1234 % 10 == 4;
So, the next step is to figure out how to get the last n significant digits. It turns out that that is the same as taking the remainder of division by 10 to the power of n. Since C# doesn't have an exponent operator (like ** in Python), we use a library function:
Math.Pow(10, 4) == 10000.0; // oops: a double!
We need to cast that back to a long:
(long) Math.Pow(10, 4) == 10000;
I think now you have all the pieces to create a nice function of your own ;)
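For completeness, assembled into one function it might look like this (a sketch; the name LastDigits is mine):

static long LastDigits(long number, int count)
{
    // 10^count as a long; exact for count up to 18.
    long divisor = (long)Math.Pow(10, count);
    return number % divisor;
}

// LastDigits(1234567890123456789L, 16) == 4567890123456789L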
You could use modulo (the % operator in C#). For example:
123456 % 100000 = 23456
or
123456 % 1000 = 456
As a quick guide I keep remembering that you get as many digits as there are zeros in the divisor. Or, vice versa, the divisor needs as many zeros as you want to keep digits.
So in your case you'd need:
long number = 1234567890123456789L;
long divisor = 10000000000000000L;
long result = number % divisor;
Complete code, using the modulo operator:
long number = 1234567890123456789L;
const long _MAX_LENGTH = 9999999999999999L;
number = number % (_MAX_LENGTH + 1);
Console.WriteLine (number);
Live test: http://ideone.com/pKB6w
Until you are comfortable with the modulo approach, you can opt for this one in the meantime:
long number = 1234567890123456789L;
const long _MAX_LENGTH = 9999999999999999L;
if (number > _MAX_LENGTH) {
    long minus = number / (_MAX_LENGTH + 1) * (_MAX_LENGTH + 1);
    number = number - minus;
}
Console.WriteLine(number);
Live test: http://ideone.com/oAkcy
Note:
The modulo approach is strongly recommended; don't use subtraction. The modulo approach is the best: it has no corner cases, i.e. there is no need for an if statement.
