Quick and Simple Hash Code Combinations - C#

Can people recommend quick and simple ways to combine the hash codes of two objects? I am not too worried about collisions, since I have a hash table which will handle them efficiently; I just want something that generates a code as quickly as possible.
Reading around SO and the web there seem to be a few main candidates:
XORing
XORing with Prime Multiplication
Simple numeric operations like multiplication/division (with overflow checking or wrapping around)
Building a string and then using the String class's GetHashCode method
What would people recommend and why?

I would personally avoid XOR - it means that any two equal values will result in 0 - so hash(1, 1) == hash(2, 2) == hash(3, 3) etc. Also hash(5, 0) == hash(0, 5) etc which may come up occasionally. I have deliberately used it for set hashing - if you want to hash a sequence of items and you don't care about the ordering, it's nice.
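For that unordered case, a minimal sketch might look like this (the helper name is illustrative):
public static int HashUnorderedSet<T>(IEnumerable<T> items)
{
    // XOR is commutative and associative, so the result is
    // independent of the order in which items are enumerated.
    int hash = 0;
    foreach (var item in items)
        hash ^= item?.GetHashCode() ?? 0;
    return hash;
}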
I usually use:
public override int GetHashCode()
{
    unchecked // allow arithmetic overflow to wrap
    {
        int hash = 17;
        hash = hash * 31 + firstField.GetHashCode();
        hash = hash * 31 + secondField.GetHashCode();
        return hash;
    }
}
That's the form that Josh Bloch suggests in Effective Java. Last time I answered a similar question I managed to find an article where this was discussed in detail - IIRC, no-one really knows why it works well, but it does. It's also easy to remember, easy to implement, and easy to extend to any number of fields.

If you are using .NET Core 2.1 or later or .NET Framework 4.6.1 or later, consider using the System.HashCode struct to help with producing composite hash codes. It has two modes of operation: Add and Combine.
An example using Combine, which is usually simpler and works for up to eight items:
public override int GetHashCode()
{
    return HashCode.Combine(object1, object2);
}
An example of using Add:
public override int GetHashCode()
{
    var hash = new HashCode();
    hash.Add(this.object1);
    hash.Add(this.object2);
    return hash.ToHashCode();
}
Pros:
Part of .NET itself, as of .NET Core 2.1/.NET Standard 2.1 (though, see con below)
For .NET Framework 4.6.1 and later, the Microsoft.Bcl.HashCode NuGet package can be used to backport this type.
Looks to have good performance and mixing characteristics, based on the work the author and the reviewers did before merging this into the corefx repo
Handles nulls automatically
Overloads that take IEqualityComparer instances (an example follows the cons list below)
Cons:
Not available on .NET Framework before .NET 4.6.1. HashCode is part of .NET Standard 2.1. As of September 2019, the .NET team has no plans to support .NET Standard 2.1 on the .NET Framework, as .NET Core/.NET 5 is the future of .NET.
General purpose, so it won't handle super-specific cases as well as hand-crafted code
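As an example of the comparer overloads mentioned above, hashing a string member case-insensitively might look like this (a sketch; the field names are illustrative, and the comparer must match the one used in Equals):
public override int GetHashCode()
{
    var hash = new HashCode();
    hash.Add(this.name, StringComparer.OrdinalIgnoreCase);
    hash.Add(this.id);
    return hash.ToHashCode();
}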

While the template outlined in Jon Skeet's answer works well in general as a hash function family, the choice of the constants is important and the seed of 17 and factor of 31 as noted in the answer do not work well at all for common use cases. In most use cases, the hashed values are much closer to zero than int.MaxValue, and the number of items being jointly hashed are a few dozen or less.
For hashing an integer tuple {x, y} where -1000 <= x <= 1000 and -1000 <= y <= 1000, it has an abysmal collision rate of almost 98.5%. For example, {1, 0} collides with {0, 31}, {1, 1} collides with {0, 32}, etc. If we expand the coverage to also include n-tuples where 3 <= n <= 25, it does less terribly, with a collision rate of about 38%. But we can do much better.
public static int CustomHash(int seed, int factor, params int[] vals)
{
    int hash = seed;
    foreach (int i in vals)
    {
        hash = (hash * factor) + i; // unchecked by default, so overflow just wraps
    }
    return hash;
}
I wrote a Monte Carlo sampling search loop that tested the method above with various values for seed and factor over various random n-tuples of random integers i. Allowed ranges were 2 <= n <= 25 (where n was random but biased toward the lower end of the range) and -1000 <= i <= 1000. At least 12 million unique collision tests were performed for each seed and factor pair.
After about 7 hours running, the best pair found (where the seed and factor were both limited to 4 digits or less) was: seed = 1009, factor = 9176, with a collision rate of 0.1131%. In the 5- and 6-digit areas, even better options exist. But I selected the top 4-digit performer for brevity, and it performs quite well in all common int and char hashing scenarios. It also seems to work fine with integers of much greater magnitudes.
It is worth noting that "being prime" did not seem to be a general prerequisite for good performance as a seed and/or factor, although it likely helps. 1009 noted above is in fact prime, but 9176 is not. I explicitly tested variations on this where I changed factor to various primes near 9176 (while leaving seed = 1009), and they all performed worse than the above solution.
Lastly, I also compared against the generic ReSharper recommendation function family of hash = (hash * factor) ^ i; and the original CustomHash() as noted above seriously outperforms it. The ReSharper XOR style seems to have collision rates in the 20-30% range for common use case assumptions and should not be used in my opinion.
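For completeness, a typical use of CustomHash with the constants found above might look like this (a sketch; the int fields x and y are illustrative):
public override int GetHashCode()
{
    // seed = 1009 and factor = 9176, per the search results above
    return CustomHash(1009, 9176, this.x, this.y);
}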

Use the combination logic built into tuples. This example uses C# 7 value tuples.
(field1, field2).GetHashCode();
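In a type's GetHashCode override, that typically looks like this (a sketch with illustrative field names):
public override int GetHashCode()
{
    return (this.field1, this.field2).GetHashCode();
}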

I presume that the .NET Framework team did a decent job testing their System.String.GetHashCode() implementation, so I would use it:
// System.String.GetHashCode(): http://referencesource.microsoft.com/#mscorlib/system/string.cs,0a17bbac4851d0d4
// System.Web.Util.StringUtil.GetStringHashCode(System.String): http://referencesource.microsoft.com/#System.Web/Util/StringUtil.cs,c97063570b4e791a
public static int CombineHashCodes(IEnumerable<int> hashCodes)
{
    int hash1 = (5381 << 16) + 5381;
    int hash2 = hash1;
    int i = 0;
    foreach (var hashCode in hashCodes)
    {
        if (i % 2 == 0)
            hash1 = ((hash1 << 5) + hash1 + (hash1 >> 27)) ^ hashCode;
        else
            hash2 = ((hash2 << 5) + hash2 + (hash2 >> 27)) ^ hashCode;
        ++i;
    }
    return hash1 + (hash2 * 1566083941);
}
Another implementation is from System.Web.Util.HashCodeCombiner.CombineHashCodes(System.Int32, System.Int32) and System.Array.CombineHashCodes(System.Int32, System.Int32) methods. This one is simpler, but probably doesn't have such a good distribution as the method above:
// System.Web.Util.HashCodeCombiner.CombineHashCodes(System.Int32, System.Int32): http://referencesource.microsoft.com/#System.Web/Util/HashCodeCombiner.cs,21fb74ad8bb43f6b
// System.Array.CombineHashCodes(System.Int32, System.Int32): http://referencesource.microsoft.com/#mscorlib/system/array.cs,87d117c8cc772cca
public static int CombineHashCodes(IEnumerable<int> hashCodes)
{
    int hash = 5381;
    foreach (var hashCode in hashCodes)
        hash = ((hash << 5) + hash) ^ hashCode;
    return hash;
}

This is a repackaging of Special Sauce's brilliantly researched solution.
It makes use of Value Tuples (ITuple).
This allows defaults for the parameters seed and factor.
using System.Runtime.CompilerServices; // for ITuple

public static int CombineHashes(this ITuple tupled, int seed = 1009, int factor = 9176)
{
    var hash = seed;
    for (var i = 0; i < tupled.Length; i++)
    {
        unchecked
        {
            hash = hash * factor + (tupled[i]?.GetHashCode() ?? 0); // guard against null items
        }
    }
    return hash;
}
Usage:
var hash1 = ("Foo", "Bar", 42).CombineHashes();
var hash2 = ("Jon", "Skeet", "Constants").CombineHashes(seed=17, factor=31);

If your input hashes are the same size, evenly distributed, and not related to each other, then an XOR should be OK. Plus it's fast.
The situation I'm suggesting this for is where you want to do
H = hash(A) ^ hash(B); // A and B are different types, so there's no way A == B.
Of course, if A and B can be expected to hash to the same value with a reasonable (non-negligible) probability, then you should not use XOR in this way.

If you're looking for speed and don't have too many collisions, then XOR is fastest. To prevent a clustering around zero, you could do something like this:
finalHash = hash1 ^ hash2;
return finalHash != 0 ? finalHash : hash1;
Of course, some prototyping ought to give you an idea of performance and clustering.

Assuming you have a relevant ToString() override (in which your different fields appear), I would just return its hash code:
this.ToString().GetHashCode();
This is not very fast, but it should avoid collisions quite well.

I would recommend using the built-in hash functions in System.Security.Cryptography rather than rolling your own.

Related

Generate unique code from primary key

I have a table of orders and I want to give users a unique code for an order whilst hiding the incrementing identity integer primary key because I don't want to give away how many orders have been made.
One easy way of making sure the codes are unique is to use the primary key to determine the code.
So how can I transform an integer into a friendly, say, eight-character alphanumeric code such that every code is unique?
The easiest way (if you want an alphanumeric code) is to convert the integer primary key to hex, like below. You can use PadLeft() to make sure the string has 8 characters. But when the number of orders grows, 8 characters will not be enough.
var uniqueCode = intPrimaryKey.ToString("X").PadLeft(8, '0');
Or you can add an offset to your primary key before converting it to hex, like below:
var uniqueCode = (intPrimaryKey + 999).ToString("X").PadLeft(8, '0');
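To recover the primary key from such a code, you can parse the hex and subtract the same offset (a sketch, using the offset of 999 from above):
var intPrimaryKey = Convert.ToInt32(uniqueCode, 16) - 999;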
Assuming the total number of orders being created isn't going to get anywhere near the total number of identifiers in your pool, a reasonably effective technique is simply to generate a random identifier and see if it is already used; continue generating new identifiers until you find one not previously used.
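A minimal sketch of that retry loop (IsCodeTaken is a hypothetical lookup against the orders table):
string code;
var rng = new Random();
do
{
    code = rng.Next().ToString("X8"); // 8 hex characters, zero-padded
} while (IsCodeTaken(code));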
A quick and easy way to do this is to have a guid column that has a default value of
left(newid(),8)
This solution will generally give you a unique value for each row. But if you have an extremely large number of orders this will not be unique, and you should use the full newid() value to generate the guid.
I would just use MD5 for this. MD5 offers enough "uniqueness" for a small subset of integers that represent your customer orders.
For an example see this answer. You will need to adjust the input parameter from string to int (or alternatively just call ToString on your number and use the code as-is).
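A minimal sketch of that idea, assuming the code is simply the first 8 hex characters of the MD5 hash of the key's decimal string:
using System;
using System.Security.Cryptography;
using System.Text;

static string ToOrderCode(int primaryKey)
{
    using (var md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(primaryKey.ToString()));
        // Keep only the first 4 bytes (8 hex characters).
        return BitConverter.ToString(hash, 0, 4).Replace("-", "");
    }
}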
If you would like something that would be difficult to trace and you don't mind it being 16 characters, you could use something like this, which includes some random numbers and mixes the byte positions of the original input with them (edited to make it a bit harder to trace, by XOR-ing with the generated random numbers):
using System;
using System.IO;
using System.Linq;

public static class OrderIdRandomizer
{
    private static readonly Random _rnd = new Random();

    public static string GenerateFor(int orderId)
    {
        var rndBytes = new byte[4];
        _rnd.NextBytes(rndBytes);
        // Interleave random bytes with the order id's bytes, XOR-ed so the
        // id can be recovered later.
        var bytes = new byte[]
        {
            rndBytes[0],
            (byte)(((byte)(orderId >> 8)) ^ rndBytes[0]),
            (byte)(((byte)(orderId >> 24)) ^ rndBytes[1]),
            rndBytes[1],
            (byte)(((byte)(orderId >> 16)) ^ rndBytes[2]),
            rndBytes[2],
            (byte)(((byte)orderId) ^ rndBytes[3]),
            rndBytes[3],
        };
        return string.Concat(bytes.Select(b => b.ToString("X2")));
    }

    public static int ReconstructFrom(string generatedId)
    {
        if (generatedId == null || generatedId.Length != 16)
            throw new InvalidDataException("Invalid generated order id");
        var bytes = new byte[8];
        for (int i = 0; i < 8; i++)
            bytes[i] = byte.Parse(generatedId.Substring(i * 2, 2), System.Globalization.NumberStyles.HexNumber);
        // XOR-ing each pair cancels the random byte, leaving the original id byte.
        return (int)(
            ((bytes[2] ^ bytes[3]) << 24) |
            ((bytes[4] ^ bytes[5]) << 16) |
            ((bytes[1] ^ bytes[0]) << 8) |
            ((bytes[6] ^ bytes[7])));
    }
}
Usage:
var obfuscatedId = OrderIdRandomizer.GenerateFor(123456);
Console.WriteLine(obfuscatedId);
Console.WriteLine(OrderIdRandomizer.ReconstructFrom(obfuscatedId));
Disadvantage: if the algorithm is known, it is obviously easy to break.
Advantage: it is completely custom, i.e. not an established algorithm like MD5, which might be easier to guess/crack because an attacker can try well-known algorithms first.

Dictionary<,> Size, GetHashCode and Prime Numbers?

I've been reading quite a lot about this interesting topic (IMO), but I don't fully understand one thing:
When a Dictionary reallocates, it increases its capacity (roughly doubling) to a prime number, because:
int index = hashCode % [Dictionary Capacity];
So we can see why prime numbers are used for [Dictionary Capacity]: their greatest common factor with most hash codes is 1, and this helps to avoid collisions.
In addition
I've seen many samples implementing GetHashCode().
Here is a sample from Jon Skeet:
public override int GetHashCode()
{
    unchecked
    {
        int hash = 17;
        // Suitable nullity checks etc, of course :)
        hash = hash * 23 + field1.GetHashCode();
        hash = hash * 23 + field2.GetHashCode();
        hash = hash * 23 + field3.GetHashCode();
        return hash;
    }
}
What I don't understand:
Question
Are prime numbers used both for the Dictionary capacity
and in the implementation of GetHashCode()?
Because in the code above, there is a good chance that the return value will not be a prime number [please correct me if I'm wrong], due to:
the multiplication by 23
the addition of the GetHashCode() value of each field
For example (11, 17 and 173 are prime numbers):
int hash = 17;
hash = hash * 23 + 11;  //402
hash = hash * 23 + 17;  //9263
hash = hash * 23 + 173; //213222
return hash;
213222 is not a prime.
Also, there is no mathematical rule which states:
(not a prime number) + (prime number) = (prime number)
nor
(not a prime number) * (prime number) = (prime number)
nor
(not a prime number) * (not a prime number) = (prime number)
So what am I missing ?
It does not matter what the result of GetHashCode is (it does not have to be prime at all), as long as the result is the same for two objects that are considered to be equal. However, it is nice (but not required) to have GetHashCode return a different value for two objects that are considered to be different (but still not necessarily prime).
Given two numbers a and b, when you multiply them you get c = a * b. There are usually multiple different pairs of a and b that give the same result c. For example 6 * 2 = 12 and 4 * 3 = 12. However, when a is a prime number, there are far fewer pairs that give the same result. This is convenient for the property that the hash code should be different for different objects.
In the dictionary the same principle applies: the objects are put in buckets depending on their hash. Since most integers do not divide nicely by a prime number, you get a nice spreading of your objects in the buckets. Ideally you'd want only one item in each bucket for optimal dictionary performance.
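A quick way to see this effect (a sketch: hash codes that are all multiples of 4 land in only 2 of 8 buckets, but spread across all 7 of 7):
foreach (int capacity in new[] { 7, 8 })
{
    var buckets = new HashSet<int>();
    for (int hash = 0; hash < 100; hash += 4)
        buckets.Add(hash % capacity); // same bucket-selection rule as the Dictionary
    Console.WriteLine("capacity {0}: {1} distinct buckets used", capacity, buckets.Count);
}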
Slightly off-topic: cicadas (insects) use prime numbers to determine after how many years they go and mate again. Since this mating cycle is a prime number of years, the chances of the mating cycle continuously coinciding with the life cycles of any of its enemies are slim.

How can I best generate a static array of random number on demand?

An application I'm working on requires a matrix of random numbers. The matrix can grow in any direction at any time, and isn't always full. (I'll probably end up re-implementing it with a quad tree or something else, rather than a matrix with a lot of null objects.)
I need a way to generate the same matrix, given the same seed, no matter in which order I calculate the matrix.
LazyRandomMatrix rndMtx1 = new LazyRandomMatrix(1234) // Seed new object
float X = rndMtx1[0,0] // Lazily generate random numbers on demand
float Y = rndMtx1[3,16]
float Z = rndMtx1[23,-5]
Debug.Assert(X == rndMtx1[0,0])
Debug.Assert(Y == rndMtx1[3,16])
Debug.Assert(Z == rndMtx1[23,-5])
LazyRandomMatrix rndMtx2 = new LazyRandomMatrix(1234) // Seed second object
Debug.Assert(Y == rndMtx2[3,16]) // Lazily generate the same random numbers
Debug.Assert(Z == rndMtx2[23,-5]) // on demand in a different order
Debug.Assert(X == rndMtx2[0,0])
Yes, if I knew the dimensions of the array, the best way would be to generate the entire array, and just return values, but they need to be generated independently and on demand.
My first idea was to initialize a new random number generator for each call to a new coordinate, seeding it with some hash of the overall matrix's seed and the coordinates used in calling, but this seems like a terrible hack, as it would require creating a ton of new Random objects.
What you're talking about is typically called "Perlin Noise", here's a link for you: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
The most important thing in that article is the noise function in 2D:
function Noise1(integer x, integer y)
    n = x + y * 57
    n = (n << 13) ^ n
    return (1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0)
end function
It returns a number between -1.0 and +1.0 based on the x and y coordinates alone (and a hard-coded seed that you can change randomly at the start of your app, or just leave as it is).
The rest of the article is about interpolating these numbers, but depending on how random you want them, you can just leave them as they are. Note that these numbers will be utterly random. If you instead apply a cosine interpolator and use the generated noise every 5-6 indexes, interpolating in between, you get heightmap data (which is what I used it for). Skip the interpolation for totally random data.
A standard random generator is usually a sequence generator, where each element is built from the previous one. So to generate rndMtx1[3,16] you would have to generate all the previous elements in the sequence.
Actually you need something different from a random generator, because you need only a single value, not a sequence. You have to build your own "generator" which uses the seed and the indexes as input to a formula producing a single random value; a sketch follows below. You can invent many ways to do so. One of the simplest is to take the value as hash(seed + index) (I guess the idea of hashes used in passwords and signing is to produce some stable "random" value out of input data).
P.S. You can improve your approach of independent generators (Random(seed + index)) by generating lazy blocks of the matrix.
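A minimal sketch of the hash(seed + index) idea, reusing the mixing constants from the noise function quoted earlier (purely illustrative):
static double ValueAt(int seed, int x, int y)
{
    // One value per (seed, x, y); no sequence and no shared state.
    int n = seed + x + y * 57;
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}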
I think your first idea of instantiating a new Random object seeded by some deterministic hash of (x-coordinate, y-coordinate, LazyRandomMatrix seed) is probably reasonable for most situations. In general, creating lots of small objects on the managed heap is something the CLR is very good at handling efficiently. And I don't think Random.ctor() is terribly expensive. You can easily measure the performance if it's a concern.
A very similar solution which may be easier than creating a good deterministic hash is to use two Random objects each time. Something like:
public int this[int x, int y]
{
    get
    {
        Random r1 = new Random(_seed * x);
        Random r2 = new Random(y);
        return (r1.Next() ^ r2.Next());
    }
}
Here is a solution based on a SHA1 hash. Basically this packs the bytes for the X, Y and Seed values into a byte array, computes a hash of the array, and uses the first 4 bytes of the hash to generate an int. This should be pretty random.
using System;
using System.Security.Cryptography;

public class LazyRandomMatrix
{
    private int _seed;
    private SHA1 _hashProvider = new SHA1CryptoServiceProvider();

    public LazyRandomMatrix(int seed)
    {
        _seed = seed;
    }

    public int this[int x, int y]
    {
        get
        {
            byte[] data = new byte[12];
            Buffer.BlockCopy(BitConverter.GetBytes(x), 0, data, 0, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(y), 0, data, 4, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(_seed), 0, data, 8, 4);
            byte[] hash = _hashProvider.ComputeHash(data);
            return BitConverter.ToInt32(hash, 0);
        }
    }
}
PRNGs can be built out of hash functions.
This is what, e.g., MS Research did when parallelizing random number generation with MD5, and what others did with TEA on a GPU.
(In fact, PRNGs can be thought of as a hash function from (seed, state) => nextnumber.)
Generating massive amounts of random numbers on a GPU brings up similar problems.
(E.g., to make it parallel, there should not be a single shared state.)
Although it is more common in the crypto world to use a crypto hash, I have taken the liberty of using MurmurHash 2.0 for the sake of speed and simplicity. It has very good statistical properties as a hash function. A related, but not identical, test shows that it gives good results as a PRNG (unless I have messed something up in the C# code, that is :). Feel free to use any other suitable hash function; crypto ones (MD5, TEA, SHA) work as well, though crypto hashes tend to be much slower.
public class LazyRandomMatrix
{
    private uint seed;

    public LazyRandomMatrix(int seed)
    {
        this.seed = (uint)seed;
    }

    public int this[int x, int y]
    {
        get
        {
            return MurmurHash2((uint)x, (uint)y, seed);
        }
    }

    static int MurmurHash2(uint key1, uint key2, uint seed)
    {
        // 'm' and 'r' are mixing constants generated offline.
        // They're not really 'magic', they just happen to work well.
        const uint m = 0x5bd1e995;
        const int r = 24;

        // Initialize the hash to a 'random' value
        uint h = seed ^ 8;

        // Mix 4 bytes at a time into the hash
        key1 *= m;
        key1 ^= key1 >> r;
        key1 *= m;
        h *= m;
        h ^= key1;

        key2 *= m;
        key2 ^= key2 >> r;
        key2 *= m;
        h *= m;
        h ^= key2;

        // Do a few final mixes of the hash to ensure the last few
        // bytes are well-incorporated.
        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return (int)h;
    }
}
A pseudo-random number generator is essentially a function that deterministically calculates a successor for a given value.
You could invent a simple algorithm that calculates a value from its neighbours. If a neighbour doesn't have a value yet, calculate its value from its respective neighbours first.
Something like this:
value(0,0) = seed
value(x+1,0) = successor(value(x,0))
value(x,y+1) = successor(value(x,y))
Example with successor(n) = n+1 to calculate value(2,4):
\ x    0    1    2
y  +----------------
0  |  627  628  629
1  |            630
2  |            631
3  |            632
4  |            633
This example algorithm is obviously not very good, but you get the idea.
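A direct sketch of this scheme with successor(n) = n + 1 (recursive for clarity, not speed):
static long Value(long seed, int x, int y)
{
    if (x == 0 && y == 0) return seed;
    // Walk along row 0 first, then down the column, as in the table above.
    return y == 0 ? Value(seed, x - 1, 0) + 1
                  : Value(seed, x, y - 1) + 1;
}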
You want a random number generator with random access to the elements, instead of sequential access. (Note that you can reduce your two coordinates into a single index i.e. by computing i = x + (y << 16).)
A cool example of such a generator is Blum Blum Shub, which is a cryptographically secure PRNG with easy random-access. Unfortunately, it is very slow.
A more practical example is the well-known linear congruential generator. You can easily modify one to allow random access. Consider the definition:
X(0) = S
X(n) = B + X(n-1)*A (mod M)
Evaluating this directly would take O(n) time (that's pseudo linear, not linear), but you can convert to a non-recursive form:
//Expand a few times to see the pattern:
X(n) = B + X(n-1)*A (mod M)
X(n) = B + (B + X(n-2)*A)*A (mod M)
X(n) = B + (B + (B + X(n-3)*A)*A)*A (mod M)
//Aha! I see it now, and I can reduce it to a closed form:
X(n) = B + B*A + B*A*A + ... + B*A^(n-1) + S*A^n (mod M)
X(n) = S*A^n + B*SUM[i:0..n-1](A^i) (mod M)
X(n) = S*A^n + B*(A^n - 1)/(A - 1) (mod M)
That last equation can be computed relatively quickly, although the second part of it is a bit tricky to get right (because division doesn't distribute over mod the same way addition and multiplication do).
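One way to sidestep the division entirely is to treat a single LCG step as the affine map x -> A*x + B (mod M) and compose that map with itself by squaring, giving O(log n) random access. A sketch (assuming M is small enough that intermediate products fit in a long):
static long LcgAt(long seed, long A, long B, long M, long n)
{
    long a = 1, b = 0;           // accumulated map: x -> a*x + b
    long sa = A % M, sb = B % M; // the single-step map, repeatedly squared
    while (n > 0)
    {
        if ((n & 1) == 1)
        {
            b = (sa * b + sb) % M; // apply the step map after the accumulated map
            a = (sa * a) % M;
        }
        sb = (sa * sb + sb) % M;   // square the step map
        sa = (sa * sa) % M;
        n >>= 1;
    }
    return (a * (seed % M) + b) % M; // X(n)
}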
As far as I see, there are 2 basic algorithms possible here:
Generate a new random number based on func(seed, coord) for each coord
Generate a single random number from seed, and then transform it for the coord (something like rotate(x) + translate(y) or whatever)
In the first case, you have the problem of always generating new random numbers, although this may not be as expensive as you fear.
In the second case, the problem is that you may lose randomness during your transformation operations. However, in either case the result is reproducible.

Simple Pseudo-Random Algorithm

I need a pseudo-random generator which takes a number as input and returns another number which is reproducible and seems to be random.
Each input number should match to exactly one output number and vice versa
same input numbers always result in same output numbers
sequential input numbers that are close together (eg. 1 and 2) should produce completely different output numbers (eg. 1 => 9783526, 2 => 283)
It must not be perfect, it's just to create random but reproducible test data.
I use C#.
I wrote this funny piece of code some time ago which produced something random.
public static long Scramble(long number, long max)
{
// some random values
long[] scramblers = { 3, 5, 7, 31, 343, 2348, 89897 };
number += (max / 7) + 6;
number %= max;
// shuffle according to divisibility
foreach (long scrambler in scramblers)
{
if (scrambler >= max / 3) break;
number = ((number * scrambler) % max)
+ ((number * scrambler) / max);
}
return number % max;
}
I would like to have something better, more reliable, working with any size of number (no max argument).
Could this perhaps be solved using a CRC algorithm? Or some bit-shuffling scheme?
I removed the Microsoft code from this answer; the GNU code file is a lot longer, but basically it contains this, from http://cs.uccs.edu/~cs591/bufferOverflow/glibc-2.2.4/stdlib/random_r.c :
int32_t val = state[0];
val = ((state[0] * 1103515245) + 12345) & 0x7fffffff;
state[0] = val;
*result = val;
For your purpose, the seed is state[0], so it would look more like:
int getRand(int val)
{
    return ((val * 1103515245) + 12345) & 0x7fffffff;
}
You (maybe) can do this easily in C# using the Random class:
public int GetPseudoRandomNumber(int input)
{
    Random random = new Random(input);
    return random.Next();
}
Since you're explicitly seeding Random with the input, you will get the same output every time given the same input value.
A tausworthe generator is simple to implement and pretty fast. The following pseudocode implementation has full cycle (2**31 - 1, because zero is a fixed point):
def tausworthe(seed)
    seed ^= seed >> 13
    seed ^= seed << 18
    return seed & 0x7fffffff
I don't know C#, but I'm assuming it has XOR (^) and bit shift (<<, >>) operators as in C.
Set an initial seed value, and invoke with seed = tausworthe(seed).
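Since the above is pseudocode, a direct C# translation might look like this (a sketch; the seed stays non-negative because of the final mask):
static int Tausworthe(int seed)
{
    unchecked
    {
        seed ^= seed >> 13;
        seed ^= seed << 18; // overflow simply wraps
        return seed & 0x7fffffff;
    }
}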
The first two rules suggest a fixed or input-seeded permutation of the input, but the third rule requires a further transform.
Is there any further restriction on what the outputs should be, to guide that transform? E.g. is there a known set of output values to choose from?
If the only guide is "no max", I'd use the following...
Apply a hash algorithm to the whole input to get the first output item. A CRC might work, but for more "random" results, use a crypto hash algorithm such as MD5.
Use a next permutation algorithm (plenty of links on Google) on the input.
Repeat the hash-then-next-permutation until all required outputs are found.
The next permutation may be overkill though, you could probably just increment the first input (and maybe, on overflow, increment the second and so on) before redoing the hash.
For crypto-style hashing, you'll need a key - just derive something from the input before you start.

C data structure to mimic C#'s List<List<int>>?

I am looking to refactor a C# method into a C function in an attempt to gain some speed, and then call the C DLL from C# to allow my program to use that functionality.
Currently the C# method takes a list of integers and returns a list of lists of integers. The method calculates the power set of the integers, so an input of 3 ints would produce the following output (at this stage the values of the ints are not important, as they are used as internal weighting values):
1
2
3
1,2
1,3
2,3
1,2,3
Where each line represents a list of integers. The output indicates the index (with an offset of 1) into the input list, not the value. So 1,2 indicates that the elements at index 0 and 1 are elements of that subset of the power set.
I am unfamiliar with c, so what are my best options for data structures that will allow the c# to access the returned data?
Thanks in advance
Update
Thank you all for your comments so far. Here is a bit of a background to the nature of the problem.
The iterative method for calculating the power set of a set is fairly straightforward: two loops and a bit of bit manipulation is all there is to it, really. It just gets called... a lot (in fact billions of times, if the size of the set is big enough).
My thoughts around using C (or C++, as people have pointed out) are that it gives more scope for performance tuning. A direct port may not offer any increase, but it opens the way for more involved methods to get a bit more speed out of it. Even a small increase per iteration would equate to a measurable improvement.
My idea was to port a direct version and then work to increase it. And then refactor it over time (with help from everyone here at SO).
Update 2
Another fair point from jalf: I don't have to use List or an equivalent. If there is a better way, I am open to suggestions. The only reason for the list was that each set of results is not the same size.
The code so far...
public List<List<int>> powerset(List<int> currentGroupList)
{
    List<List<int>> outputList = new List<List<int>>();
    //Count the objects in the group
    int count = currentGroupList.Count;
    int max = (int)Math.Pow(2, count);
    //outer loop
    for (int i = 0; i < max; i++)
    {
        List<int> currentSetList = new List<int>();
        //inner loop
        for (int j = 0; j < count; j++)
        {
            if ((i & (1 << j)) == 0)
            {
                currentSetList.Add(currentGroupList.ElementAt(j));
            }
        }
        outputList.Add(currentSetList);
    }
    return outputList;
}
As you can see, not a lot to it. It just goes round and round a lot!
I accept that creating and building lists may not be the most efficient way, but I need some way of providing the results back in a manageable form.
Update 3
Thanks for all the input and implementation work. Just to clarify a couple of points raised: I don't need the output to be in 'natural order', and I am also not that interested in the empty set being returned.
hughdbrown's implementation is interesting, but I think that I will need to store the results (or at least a subset of them) at some point. It sounds like memory limitations will apply long before running time becomes a real issue.
Partly because of this, I think I can get away with using bytes instead of integers, giving more potential storage.
The question really then is: have we reached the maximum speed for this calculation in C#? Does the option of unmanaged code provide any more scope? I know in many respects the answer is futile; even if we halved the running time, it would only allow a couple of extra values in the original set.
Also, be sure that moving to C/C++ is really what you need to do for speed to begin with. Instrument the original C# method (standalone, executed via unit tests), instrument the new C/C++ method (again, standalone via unit tests) and see what the real world difference is.
The reason I bring this up is that I fear it may be a Pyrrhic victory -- using Smokey Bacon's advice, you get your list class, you're in "faster" C++, but there's still a cost to calling that DLL: bouncing out of the runtime with P/Invoke or COM interop carries a fairly substantial performance cost.
Be sure you're getting your "money's worth" out of that jump before you do it.
Update based on the OP's Update
If you're calling this loop repeatedly, you need to absolutely make sure that the entire loop logic is encapsulated in a single interop call -- otherwise the overhead of marshalling (as others here have mentioned) will definitely kill you.
I do think, given the description of the problem, that the issue isn't that C#/.NET is "slower" than C, but more likely that the code needs to be optimized. As another poster here mentioned, you can use pointers in C# to seriously boost performance in this kind of loop, without the need for marshalling. I'd look into that first, before jumping into a complex interop world, for this scenario.
If you are looking to use C for a performance gain, most likely you are planning to do so through the use of pointers. C# does allow for use of pointers, using the unsafe keyword. Have you considered that?
Also how will you be calling this code.. will it be called often (e.g. in a loop?) If so, marshalling the data back and forth may more than offset any performance gains.
Follow Up
Take a look at Native code without sacrificing .NET performance for some interop options. There are ways to interop without too much of a performance loss, but those interops can only happen with the simplest of data types.
Though I still think that you should investigate speeding up your code using straight .NET.
Follow Up 2
Also, may I suggest that if you have your heart set on mixing native code and managed code, you create your library using C++/CLI. Below is a simple example. Note that I am not a C++/CLI guy, and this code doesn't do anything useful; it's just meant to show how easily you can mix native and managed code.
#include "stdafx.h"
using namespace System;
System::Collections::Generic::List<int> ^MyAlgorithm(System::Collections::Generic::List<int> ^sourceList);
int main(array<System::String ^> ^args)
{
System::Collections::Generic::List<int> ^intList = gcnew System::Collections::Generic::List<int>();
intList->Add(1);
intList->Add(2);
intList->Add(3);
intList->Add(4);
intList->Add(5);
Console::WriteLine("Before Call");
for each(int i in intList)
{
Console::WriteLine(i);
}
System::Collections::Generic::List<int> ^modifiedList = MyAlgorithm(intList);
Console::WriteLine("After Call");
for each(int i in modifiedList)
{
Console::WriteLine(i);
}
}
System::Collections::Generic::List<int> ^MyAlgorithm(System::Collections::Generic::List<int> ^sourceList)
{
int* nativeInts = new int[sourceList->Count];
int nativeIntArraySize = sourceList->Count;
//Managed to Native
for(int i=0; i<sourceList->Count; i++)
{
nativeInts[i] = sourceList[i];
}
//Do Something to native ints
for(int i=0; i<nativeIntArraySize; i++)
{
nativeInts[i]++;
}
//Native to Managed
System::Collections::Generic::List<int> ^returnList = gcnew System::Collections::Generic::List<int>();
for(int i=0; i<nativeIntArraySize; i++)
{
returnList->Add(nativeInts[i]);
}
return returnList;
}
What makes you think you'll gain speed by calling into C code? C isn't magically faster than C#. It can be, of course, but it can also easily be slower (and buggier). Especially when you factor in the p/invoke calls into native code, it's far from certain that this approach will speed up anything.
In any case, C doesn't have anything like List. It has raw arrays and pointers (and you could argue that int** is more or less equivalent), but you're probably better off using C++, which does have equivalent data structures. In particular, std::vector.
There are no simple ways to expose this data to C#, however, since it will be scattered pretty much randomly (each list is a pointer to some dynamically allocated memory somewhere).
However, I suspect the biggest performance improvement comes from improving the algorithm in C#.
Edit:
I can see several things in your algorithm that seem suboptimal. Constructing a list of lists isn't free. Perhaps you can create a single list and use different offsets to represent each sublist. Or perhaps using 'yield return' and IEnumerable instead of explicitly constructing lists might be faster.
Have you profiled your code, found out where the time is being spent?
This returns one set of a powerset at a time. It is based on Python code here. It works for powersets of over 32 elements. If you need fewer than 32, you can change long to int. It is pretty fast -- faster than my previous algorithm, and faster than P Daddy's code (as modified by me to use yield return).
using System.Collections.Generic;
using System.Linq;

static class PowerSet4<T>
{
    static public IEnumerable<IList<T>> powerset(T[] currentGroupList)
    {
        int count = currentGroupList.Length;
        Dictionary<long, T> powerToIndex = new Dictionary<long, T>();
        long mask = 1L;
        for (int i = 0; i < count; i++)
        {
            powerToIndex[mask] = currentGroupList[i];
            mask <<= 1;
        }
        Dictionary<long, T> result = new Dictionary<long, T>();
        yield return result.Values.ToArray();
        long max = 1L << count;
        for (long i = 1L; i < max; i++)
        {
            long key = i & -i; // lowest set bit: the element to toggle this step
            if (result.ContainsKey(key))
                result.Remove(key);
            else
                result[key] = powerToIndex[key];
            yield return result.Values.ToArray();
        }
    }
}
You can download all the fastest versions I have tested here.
I really think that using yield return is the change that makes calculating large powersets possible. Allocating large amounts of memory upfront increases runtime dramatically and causes algorithms to fail for lack of memory very early on. Original Poster should figure out how many sets of a powerset he needs at once. Holding all of them is not really an option with >24 elements.
I'm also going to put in a vote for tuning up your C#, particularly by going to unsafe code and losing what might be a lot of bounds-checking overhead.
Even though it's 'unsafe', it's no less 'safe' than C/C++, and it's dramatically easier to get right.
Below is a C# algorithm that should be much faster (and use less memory) than the algorithm you posted. It doesn't use the neat binary trick yours uses, and as a result, the code is a good bit longer. It has a few more for loops than yours, and might take a time or two stepping through it with the debugger to fully grok it. But it's actually a simpler approach, once you understand what it's doing.
As a bonus, the returned sets are in a more "natural" order. It would return subsets of the set {1 2 3} in the same order you listed them in your question. That wasn't a focus, but is a side effect of the algorithm used.
In my tests, I found this algorithm to be approximately 4 times faster than the algorithm you posted for a large set of 22 items (which was as large as I could go on my machine without excessive disk-thrashing skewing the results too much). One run of yours took about 15.5 seconds, and mine took about 3.6 seconds.
For smaller lists, the difference is less pronounced. For a set of only 10 items, yours ran 10,000 times in about 7.8 seconds, and mine took about 3.2 seconds. For sets with 5 or fewer items, they run close to the same time. With many iterations, yours runs a little faster.
Anyway, here's the code. Sorry it's so long; I tried to make sure I commented it well.
/*
 * Made it static, because it shouldn't really use or modify state data.
 * Making it static also saves a tiny bit of call time, because it doesn't
 * have to receive an extra "this" pointer. Also, accessing a local
 * parameter is a tiny bit faster than accessing a class member, because
 * dereferencing the "this" pointer is not free.
 *
 * Made it generic so that the same code can handle sets of any type.
 */
static IList<IList<T>> PowerSet<T>(IList<T> set){
    if(set == null)
        throw new ArgumentNullException("set");
    /*
     * Caveat:
     * If set.Count > 30, this function pukes all over itself without so
     * much as wiping up afterwards. Even for 30 elements, though, the
     * result set is about 68 GB (if "set" is comprised of ints). 24 or
     * 25 elements is a practical limit for current hardware.
     */
    int setSize = set.Count;
    int subsetCount = 1 << setSize; // MUCH faster than (int)Math.Pow(2, setSize)
    T[][] rtn = new T[subsetCount][];
    /*
     * We don't really need dynamic list allocation. We can calculate
     * in advance the number of subsets ("subsetCount" above), and
     * the size of each subset (0 through setSize). The performance
     * of List<> is pretty horrible when the initial size is not
     * guessed well.
     */
    int subsetIndex = 0;
    for(int subsetSize = 0; subsetSize <= setSize; subsetSize++){
        /*
         * The "indices" array below is part of how we implement the
         * "natural" ordering of the subsets. For a subset of size 3,
         * for example, we initialize the indices array with {0, 1, 2};
         * Later, we'll increment each index until we reach setSize,
         * then carry over to the next index. So, assuming a set size
         * of 5, the second iteration will have indices {0, 1, 3}, the
         * third will have {0, 1, 4}, and the fifth will involve a carry,
         * so we'll have {0, 2, 3}.
         */
        int[] indices = new int[subsetSize];
        for(int i = 1; i < subsetSize; i++)
            indices[i] = i;
        /*
         * Now we'll iterate over all the subsets we need to make for the
         * current subset size. The number of subsets of a given size
         * is easily determined with combination (nCr). In other words,
         * if I have 5 items in my set and I want all subsets of size 3,
         * I need 5-pick-3, or 5C3 = 5! / 3!(5 - 3)! = 10.
         */
        for(int i = Combination(setSize, subsetSize); i > 0; i--){
            /*
             * Copy the items from the input set according to the
             * indices we've already set up. Alternatively, if you
             * just wanted the indices in your output, you could
             * just dup the index array here (but make sure you dup!
             * Otherwise the setup step at the bottom of this for
             * loop will mess up your output list! You'll also want
             * to change the function's return type to
             * IList<IList<int>> in that case.
             */
            T[] subset = new T[subsetSize];
            for(int j = 0; j < subsetSize; j++)
                subset[j] = set[indices[j]];

            /* Add the subset to the return */
            rtn[subsetIndex++] = subset;

            /*
             * Set up indices for next subset. This looks a lot
             * messier than it is. It simply increments the
             * right-most index until it overflows, then carries
             * over left as far as it needs to. I've made the
             * logic as fast as I could, which is why it's hairy-
             * looking. Note that the inner for loop won't
             * actually run as long as a carry isn't required,
             * and will run at most once in any case. The outer
             * loop will go through as few iterations as required.
             *
             * You may notice that this logic doesn't check the
             * end case (when the left-most digit overflows). It
             * doesn't need to, since the loop up above won't
             * execute again in that case, anyway. There's no
             * reason to waste time checking that here.
             */
            for(int j = subsetSize - 1; j >= 0; j--)
                if(++indices[j] <= setSize - subsetSize + j){
                    for(int k = j + 1; k < subsetSize; k++)
                        indices[k] = indices[k - 1] + 1;
                    break;
                }
        }
    }
    return rtn;
}

static int Combination(int n, int r){
    if(r == 0 || r == n)
        return 1;
    /*
     * The formula for combination is:
     *
     *       n!
     *   ----------
     *   r!(n - r)!
     *
     * We'll actually use a slightly modified version here. The above
     * formula forces us to calculate (n - r)! twice. Instead, we only
     * multiply for the numerator the factors of n! that aren't canceled
     * out by (n - r)! in the denominator.
     */

    /*
     * nCr == nC(n - r)
     * We can use this fact to reduce the number of multiplications we
     * perform, as well as the incidence of overflow, where r > n / 2
     */
    if(r > n / 2) /* We DO want integer truncation here (7 / 2 = 3) */
        r = n - r;

    /*
     * I originally used all integer math below, with some complicated
     * logic and another function to handle cases where the intermediate
     * results overflowed a 32-bit int. It was pretty ugly. In later
     * testing, I found that the more generalized double-precision
     * floating-point approach was actually *faster*, so there was no
     * need for the ugly code. But if you want to see a giant WTF, look
     * at the edit history for this post!
     */
    double denominator = Factorial(r);
    double numerator = n;
    while(--r > 0)
        numerator *= --n;
    return (int)(numerator / denominator + 0.1 /* Deal with rounding errors. */);
}

/*
 * The archetypical factorial implementation is recursive, and is perhaps
 * the most often used demonstration of recursion in text books and other
 * materials. It's unfortunate, however, that few texts point out that
 * it's nearly as simple to write an iterative factorial function that
 * will perform better (although tail-end recursion, if implemented by
 * the compiler, will help to close the gap).
 */
static double Factorial(int x){
    /*
     * An all-purpose factorial function would handle negative numbers
     * correctly - the result should be Sign(x) * Factorial(Abs(x)) -
     * but since we don't need that functionality, we're better off
     * saving the few extra clock cycles it would take.
     */

    /*
     * I originally used all integer math below, but found that the
     * double-precision floating-point version is not only more
     * general, but also *faster*!
     */
    if(x < 2)
        return 1;

    double rtn = x;
    while(--x > 1)
        rtn *= x;
    return rtn;
}
Your list of results does not match the results your code would produce. In particular, you do not show generating the empty set.
If I were producing powersets that could have a few billion subsets, then generating each subset separately rather than all at once might cut down on your memory requirements, improving your code's speed. How about this:
static class PowerSet<T>
{
    static long[] mask = { 1L << 0,  1L << 1,  1L << 2,  1L << 3,
                           1L << 4,  1L << 5,  1L << 6,  1L << 7,
                           1L << 8,  1L << 9,  1L << 10, 1L << 11,
                           1L << 12, 1L << 13, 1L << 14, 1L << 15,
                           1L << 16, 1L << 17, 1L << 18, 1L << 19,
                           1L << 20, 1L << 21, 1L << 22, 1L << 23,
                           1L << 24, 1L << 25, 1L << 26, 1L << 27,
                           1L << 28, 1L << 29, 1L << 30, 1L << 31 };

    static public IEnumerable<IList<T>> powerset(T[] currentGroupList)
    {
        int count = currentGroupList.Length;
        long max = 1L << count;
        for (long iter = 0; iter < max; iter++)
        {
            // Size the subset array to the number of set bits in iter,
            // so no default-valued slots trail the real elements.
            int size = 0;
            for (long bits = iter; bits != 0; bits &= bits - 1)
                size++;
            T[] list = new T[size];
            int k = 0, m = -1;
            for (long i = iter; i != 0; i &= (i - 1))
            {
                while ((mask[++m] & i) == 0)
                    ;
                list[k++] = currentGroupList[m];
            }
            yield return list;
        }
    }
}
Then your client code looks like this:
static void Main(string[] args)
{
    int[] intList = { 1, 2, 3, 4 };
    foreach (IList<int> set in PowerSet<int>.powerset(intList))
    {
        foreach (int i in set)
            Console.Write("{0} ", i);
        Console.WriteLine();
    }
}
I'll even throw in a bit-twiddling algorithm with generic arguments for free. For added speed, you can wrap the powerset() inner loop in an unsafe block. It doesn't make much difference.
On my machine, this code is slightly slower than the OP's code until the sets are 16 or larger. However, all times to 16 elements are less than 0.15 seconds. At 23 elements, it runs in 64% of the time. The original algorithm does not run on my machine for 24 or more elements -- it runs out of memory.
This code takes 12 seconds to generate the power set for the numbers 1 to 24, omitting screen I/O time. That's 16 million-ish in 12 seconds, or about 1400K per second. For a billion (which is what you quoted earlier), that would be about 760 seconds. How long do you think this should take?
Does it have to be C, or is C++ an option too? If C++, you can just use the STL's own list type. Otherwise, you'll have to implement your own list -- look up linked lists or dynamically sized arrays for pointers on how to do this.
I concur with the "optimize .NET first" opinion. It's the most painless. I imagine that if you wrote some unmanaged .NET code using C# pointers, it'd be identical to C execution, except for the VM overhead.
P Daddy:
You could change your Combination() code to this:
static long Combination(long n, long r)
{
    r = (r > n - r) ? (n - r) : r;
    if (r == 0)
        return 1;

    long result = 1;
    long k = 1;
    while (r-- > 0)
    {
        result *= n--;
        result /= k++;
    }
    return result;
}
This will reduce the multiplications and the chance of overflow to a minimum.
