Say I have an object that stores a byte array, and I want to be able to efficiently generate a hash code for it. I've used cryptographic hash functions for this in the past because they are easy to implement, but they do a lot more work than necessary to be cryptographically one-way, and I don't care about that (I'm just using the hash code as a key into a hashtable).
Here's what I have today:
struct SomeData : IEquatable<SomeData>
{
    private readonly byte[] data;

    public SomeData(byte[] data)
    {
        if (null == data || data.Length <= 0)
        {
            throw new ArgumentException("data");
        }
        this.data = new byte[data.Length];
        Array.Copy(data, this.data, data.Length);
    }

    public override bool Equals(object obj)
    {
        return obj is SomeData && Equals((SomeData)obj);
    }

    public bool Equals(SomeData other)
    {
        if (other.data.Length != data.Length)
        {
            return false;
        }
        for (int i = 0; i < data.Length; ++i)
        {
            if (data[i] != other.data[i])
            {
                return false;
            }
        }
        return true;
    }

    public override int GetHashCode()
    {
        return BitConverter.ToInt32(new MD5CryptoServiceProvider().ComputeHash(data), 0);
    }
}
Any thoughts?
dp: You are right that I missed a check in Equals; I have updated it. Using the existing hashcode from the byte array will result in reference equality (or at least the same concept translated to hashcodes).
for example:
byte[] b1 = new byte[] { 1 };
byte[] b2 = new byte[] { 1 };
int h1 = b1.GetHashCode();
int h2 = b2.GetHashCode();
With that code, despite the two byte arrays having the same values within them, they refer to different parts of memory and will result in (probably) different hash codes. I need the hash codes for two byte arrays with the same contents to be equal.
The hash code of an object does not need to be unique.
The checking rule is:
Are the hash codes equal? Then call the full (slow) Equals method.
Are the hash codes not equal? Then the two items are definitely not equal.
All you want is a GetHashCode algorithm that splits your collection into roughly even groups. The hash doesn't have to act as the key itself; the Hashtable or Dictionary<> uses the hash internally to optimise retrieval.
How long do you expect the data to be? How random? If lengths vary greatly (say for files) then just return the length. If lengths are likely to be similar look at a subset of the bytes that varies.
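For instance, here is a minimal sketch of the "length plus a sample of bytes" idea, reusing the data field from the question's struct (an assumption) and assuming lengths are similar enough that a few interior bytes provide the variation:

public override int GetHashCode()
{
    unchecked
    {
        int hash = data.Length;
        // sample a handful of bytes spread across the array
        int step = Math.Max(1, data.Length / 4);
        for (int i = 0; i < data.Length; i += step)
            hash = (hash * 31) ^ data[i];
        return hash;
    }
}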
GetHashCode should be a lot quicker than Equals, but doesn't need to be unique.
Two identical things must never have different hash codes. Two different objects should not have the same hash code, but some collisions are to be expected (after all, there are more permutations than possible 32 bit integers).
Don't use cryptographic hashes for a hashtable; that's overkill.
Here ya go... Modified FNV Hash in C#
http://bretm.home.comcast.net/hash/6.html
public static int ComputeHash(params byte[] data)
{
    unchecked
    {
        const int p = 16777619;
        int hash = (int)2166136261;

        for (int i = 0; i < data.Length; i++)
            hash = (hash ^ data[i]) * p;

        hash += hash << 13;
        hash ^= hash >> 7;
        hash += hash << 3;
        hash ^= hash >> 17;
        hash += hash << 5;
        return hash;
    }
}
Borrowing from the code generated by JetBrains software, I have settled on this function:
public override int GetHashCode()
{
    unchecked
    {
        var result = 0;
        foreach (byte b in _key)
            result = (result * 31) ^ b;
        return result;
    }
}
The problem with just XOring the bytes is that 3/4 (3 bytes) of the returned value has only 2 possible values (all on or all off). This spreads the bits around a little more.
Setting a breakpoint in Equals was a good suggestion. Adding about 200,000 entries of my data to a Dictionary, sees about 10 Equals calls (or 1/20,000).
Have you compared with the SHA1CryptoServiceProvider.ComputeHash method? It takes a byte array and returns a SHA1 hash, and I believe it's pretty well optimized. I used it in an Identicon Handler that performed pretty well under load.
I found interesting results:
I have the class:
public class MyHash : IEquatable<MyHash>
{
    public byte[] Val { get; private set; }

    public MyHash(byte[] val)
    {
        Val = val;
    }

    /// <summary>
    /// Tests whether this instance is equal to another instance
    /// </summary>
    /// <param name="other"></param>
    /// <returns></returns>
    public bool Equals(MyHash other)
    {
        if (other.Val.Length == this.Val.Length)
        {
            for (var i = 0; i < this.Val.Length; i++)
            {
                if (other.Val[i] != this.Val[i])
                {
                    return false;
                }
            }
            return true;
        }
        else
        {
            return false;
        }
    }

    public override int GetHashCode()
    {
        var str = Convert.ToBase64String(Val);
        return str.GetHashCode();
    }
}
Then I created a dictionary with keys of type MyHash in order to test how fast I can insert, and also to see how many collisions there are. I did the following:
// dictionary we use to check for collisions
Dictionary<MyHash, bool> checkForDuplicatesDic = new Dictionary<MyHash, bool>();
// used to generate random arrays
Random rand = new Random();

var now = DateTime.Now;
for (var j = 0; j < 100; j++)
{
    for (var i = 0; i < 5000; i++)
    {
        // create new array and populate it with random bytes
        byte[] randBytes = new byte[byte.MaxValue];
        rand.NextBytes(randBytes);

        MyHash h = new MyHash(randBytes);
        if (checkForDuplicatesDic.ContainsKey(h))
        {
            Console.WriteLine("Duplicate");
        }
        else
        {
            checkForDuplicatesDic[h] = true;
        }
    }
    Console.WriteLine(j);
    checkForDuplicatesDic.Clear(); // clear the dictionary after each batch of 5000
}
var elapsed = DateTime.Now - now;
Console.WriteLine(elapsed); // report total time
Console.Read();
Every time I insert a new item into the dictionary, it calculates the hash of that object, so you can tell which method is most efficient by placing the various answers found here in the public override int GetHashCode() method. The method that was by far the fastest and had the least number of collisions was:
public override int GetHashCode()
{
    var str = Convert.ToBase64String(Val);
    return str.GetHashCode();
}
that took 2 seconds to execute. The method
public override int GetHashCode()
{
    // 7.1 seconds
    unchecked
    {
        const int p = 16777619;
        int hash = (int)2166136261;

        for (int i = 0; i < Val.Length; i++)
            hash = (hash ^ Val[i]) * p;

        hash += hash << 13;
        hash ^= hash >> 7;
        hash += hash << 3;
        hash ^= hash >> 17;
        hash += hash << 5;
        return hash;
    }
}
also had no collisions, but it took 7 seconds to execute!
If you are looking for performance, I tested a few hash keys, and I recommend Bob Jenkins's hash function. It is both crazy fast to compute and will give as few collisions as the cryptographic hash you have used until now.
I don't know C# at all, and I don't know if it can link with C, but here is its implementation in C.
Is using the existing hashcode from the byte array field not good enough? Also note that in the Equals method you should check that the arrays are the same size before doing the compare.
Generating a good hash is easier said than done. Remember, you're basically representing n bytes of data with m bits of information. The larger your data set and the smaller m is, the more likely you'll get a collision ... two pieces of data resolving to the same hash.
The simplest hash I ever learned was simply XORing all the bytes together. It's easy, faster than most complicated hash algorithms, and a halfway decent general-purpose hash algorithm for small data sets. It's the Bubble Sort of hash algorithms, really. Since the simple implementation would leave you with 8 bits, that's only 256 hashes ... not so hot. You could XOR chunks instead of individual bytes, but then the algorithm gets more complicated, as in the sketch below.
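For example, a sketch of the chunked variant, assuming the array length need not be a multiple of 4:

static int XorChunks(byte[] data)
{
    int hash = 0;
    int i = 0;
    // XOR whole 4-byte chunks into the result
    for (; i + 4 <= data.Length; i += 4)
        hash ^= BitConverter.ToInt32(data, i);
    // fold any trailing bytes into their positions within a final chunk
    for (; i < data.Length; i++)
        hash ^= data[i] << (8 * (i % 4));
    return hash;
}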
So certainly, the cryptographic algorithms are maybe doing some stuff you don't need ... but they're also a huge step up in general-purpose hash quality. The MD5 hash you're using has 128 bits, with billions and billions of possible hashes. The only way you're likely to get something better is to take some representative samples of the data you expect to be going through your application and try various algorithms on it to see how many collisions you get.
So until I see some reason to not use a canned hash algorithm (performance, perhaps?), I'm going to have to recommend you stick with what you've got.
Whether you want a perfect hash function (a different value for each object that evaluates as equal) or just a pretty good one is always a performance tradeoff; it normally takes time to compute a good hash function, and if your dataset is smallish you're better off with a fast function. The most important thing (as your second post points out) is correctness, and to achieve that all you need is to return the Length of the array. Depending on your dataset, that might even be OK. If it isn't (say, all your arrays are equally long), you can go with something cheap like looking at the first and last values and XORing them, then add more complexity as you see fit for your data.
A quick way to see how your hash function performs on your data is to add all the data to a hashtable and count the number of times the Equals method gets called; if it is called too often, you have more work to do on the function. If you do this, keep in mind that the hashtable's size needs to be set bigger than your dataset when you start; otherwise you will rehash the data, which triggers reinserts and more Equals evaluations (though possibly more realistic?).
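A minimal sketch of that instrumentation; the CountingKey type and its EqualsCallCount field are hypothetical diagnostics, not part of the question's code:

class CountingKey : IEquatable<CountingKey>
{
    public static int EqualsCallCount; // read this after populating the table

    private readonly byte[] data;
    public CountingKey(byte[] data) { this.data = data; }

    public bool Equals(CountingKey other)
    {
        EqualsCallCount++; // every call means two keys landed in the same bucket (or a lookup hit)
        if (other == null || other.data.Length != data.Length) return false;
        for (int i = 0; i < data.Length; i++)
            if (data[i] != other.data[i]) return false;
        return true;
    }

    public override bool Equals(object obj) { return Equals(obj as CountingKey); }
    public override int GetHashCode() { return data.Length; } // the hash function under test
}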
For some objects (not this one) a quick hash code can be generated by ToString().GetHashCode(). It's certainly not optimal, but it's useful, as people tend to return something close to the identity of the object from ToString(), and that is exactly what GetHashCode is looking for.
Trivia: the worst performance I have ever seen was when someone by mistake returned a constant from GetHashCode. It's easy to spot with a debugger, though, especially if you do lots of lookups in your hashtable.
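That degenerate case looks like this; every key lands in the same bucket and lookups decay into a linear scan:

public override int GetHashCode()
{
    return 42; // constant hash: still correct, but catastrophic for performance
}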
RuntimeHelpers.GetHashCode might help:
From MSDN: "Serves as a hash function for a particular type, suitable for use in hashing algorithms and data structures such as a hash table."
private int? hashCode;

public override int GetHashCode()
{
    if (!hashCode.HasValue)
    {
        var hash = 0;
        for (var i = 0; i < bytes.Length; i++)
        {
            hash = (hash << 4) + bytes[i];
        }
        hashCode = hash;
    }
    return hashCode.Value;
}
Related
I would like to generate a Guid from a list of other Guids. The generated Guid must have the property that for the same input list of guids the resulting Guid will be the same, no matter how many times I apply the transformation.
Also, it should have the lowest collision rate possible, so that different guids at the input generate different guids at the output.
Can someone help me with this? What should be the best way to go here? It's basically a hash function but over Guids.
You could do some arithmetic on the individual bytes of a Guid - the code below basically adds them up (modulo 256 because of the overflow):
byte[] totalBytes = new byte[16];
foreach (var guid in guids) {
    var bytes = guid.ToByteArray();
    for (int i = 0; i < 16; i++) {
        totalBytes[i] += bytes[i];
    }
}
var totalGuid = new Guid(totalBytes);
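If collision resistance matters more than speed, an alternative sketch is to hash the concatenated Guid bytes with MD5; the digest is exactly 16 bytes, so it maps directly onto a Guid. Unlike the byte-addition above, this variant is order-sensitive (it assumes the usual System.Linq and System.Security.Cryptography usings):

static Guid CombineGuids(IEnumerable<Guid> guids)
{
    // concatenate all the Guid bytes, then hash them
    byte[] all = guids.SelectMany(g => g.ToByteArray()).ToArray();
    using (var md5 = MD5.Create())
    {
        return new Guid(md5.ComputeHash(all)); // an MD5 digest is exactly 16 bytes
    }
}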
I have a requirement to hash input strings and produce 14 digit decimal numbers as output.
The math I am using tells me I can have, at maximum, a 46 bit unsigned integer.
I am aware that a 46 bit uint means less collision resistance for any potential hash function. However, the number of hashes I am creating keeps the collision probability in an acceptable range.
I would be most grateful if the community could help me verify that my method for truncating a hash to 46 bits is solid. I have a gut feeling that there are optimizations and/or easier ways to do this. My function is as follows (where bitLength is 46 when this function is called):
public static UInt64 GetTruncatedMd5Hash(string input, int bitLength)
{
    var md5Hash = MD5.Create();
    byte[] fullHashBytes = md5Hash.ComputeHash(Encoding.UTF8.GetBytes(input));
    var fullHashBits = new BitArray(fullHashBytes);

    // BitArray stores the LSB of each byte at the lowest indexes, so reversing...
    ReverseBitArray(fullHashBits);

    // truncate by copying only the number of bits specified by the bitLength param
    var truncatedHashBits = new BitArray(bitLength);
    for (int i = 0; i < bitLength; i++) // note: "i < bitLength - 1" would drop a bit
    {
        truncatedHashBits[i] = fullHashBits[i];
    }

    byte[] truncatedHashBytes = new byte[8];
    truncatedHashBits.CopyTo(truncatedHashBytes, 0);
    return BitConverter.ToUInt64(truncatedHashBytes, 0);
}
Thanks for taking a look at this question. I appreciate any feedback!
With the help of the comments above, I crafted the following solution:
public static UInt64 GetTruncatedMd5Hash(string input, int bitLength)
{
    if (string.IsNullOrWhiteSpace(input)) throw new ArgumentException("input must not be null or whitespace");
    if (bitLength > 64) throw new ArgumentException("bitLength must be <= 64");

    var md5Hash = MD5.Create();
    byte[] fullHashBytes = md5Hash.ComputeHash(Encoding.UTF8.GetBytes(input));

    if (bitLength == 64)
        return BitConverter.ToUInt64(fullHashBytes, 0);

    var bitMask = (1UL << bitLength) - 1UL;
    return BitConverter.ToUInt64(fullHashBytes, 0) & bitMask;
}
It's much tighter (and faster) than what I was trying to do before.
I have a table of orders and I want to give users a unique code for an order whilst hiding the incrementing identity integer primary key because I don't want to give away how many orders have been made.
One easy way of making sure the codes are unique is to use the primary key to determine the code.
So how can I transform an integer into a friendly, say, eight alpha numeric code such that every code is unique?
The easiest way (if you want an alphanumeric code) is to convert the integer primary key to hex, like below. You can use PadLeft() to make sure the string has 8 characters. But when the number of orders grows, 8 characters will not be enough.
var uniqueCode = intPrimaryKey.ToString("X").PadLeft(8, '0');
Or, you can add an offset to your primary key before converting it to hex, like below:
var uniqueCode = (intPrimaryKey + 999).ToString("X").PadLeft(8, '0');
Assuming the total number of orders being created isn't going to get anywhere near the total number of identifiers in your pool, a reasonably effective technique is to simply generate a random identifier and see if it is used already; continue generating new identifiers until you find one not previously used.
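A sketch of that generate-and-check loop, with an in-memory HashSet standing in for the "already used" lookup (in practice that check would be a unique-indexed database column):

static readonly Random rng = new Random();
static readonly HashSet<string> used = new HashSet<string>();
const string Alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // avoids ambiguous characters

static string NextCode()
{
    while (true)
    {
        var chars = new char[8];
        for (int i = 0; i < 8; i++)
            chars[i] = Alphabet[rng.Next(Alphabet.Length)];
        var code = new string(chars);
        if (used.Add(code)) // Add returns false if the code was already taken
            return code;
    }
}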
A quick and easy way to do this is to have a guid column that has a default value of
left(newid(),8)
This solution will generally give you a unique value for each row. But if you have extremely large numbers of orders, it will not stay unique, and you should use the full newid() value to generate the guid.
I would just use MD5 for this. MD5 offers enough "uniqueness" for a small subset of integers that represent your customer orders.
For an example, see this answer. You will need to adjust the input parameter from string to int (or alternatively just call ToString on your number and use the code as-is).
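For instance, one possible adaptation reusing the GetTruncatedMd5Hash function from the answer above, truncated to 32 bits and rendered as 8 hex characters (a sketch, not the linked answer verbatim):

// note: at 32 bits, collisions become likely past roughly 65,000 orders (birthday bound)
string code = GetTruncatedMd5Hash(orderId.ToString(), 32).ToString("X8");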
If you would like something that is difficult to trace and you don't mind it being 16 characters, you could use something like the following, which includes some random numbers and mixes the byte positions of the original input with them (EDITED to make it a bit more untraceable by XOR-ing with the generated random numbers):
public static class OrderIdRandomizer
{
    private static readonly Random _rnd = new Random();

    public static string GenerateFor(int orderId)
    {
        var rndBytes = new byte[4];
        _rnd.NextBytes(rndBytes);
        var bytes = new byte[]
        {
            rndBytes[0],
            (byte)(((byte)(orderId >> 8)) ^ rndBytes[0]),
            (byte)(((byte)(orderId >> 24)) ^ rndBytes[1]),
            rndBytes[1],
            (byte)(((byte)(orderId >> 16)) ^ rndBytes[2]),
            rndBytes[2],
            (byte)(((byte)orderId) ^ rndBytes[3]),
            rndBytes[3],
        };
        return string.Concat(bytes.Select(b => b.ToString("X2")));
    }

    public static int ReconstructFrom(string generatedId)
    {
        if (generatedId == null || generatedId.Length != 16)
            throw new InvalidDataException("Invalid generated order id");

        var bytes = new byte[8];
        for (int i = 0; i < 8; i++)
            bytes[i] = byte.Parse(generatedId.Substring(i * 2, 2), System.Globalization.NumberStyles.HexNumber);

        return (int)(
            ((bytes[2] ^ bytes[3]) << 24) |
            ((bytes[4] ^ bytes[5]) << 16) |
            ((bytes[1] ^ bytes[0]) << 8) |
            ((bytes[6] ^ bytes[7])));
    }
}
Usage:
var obfuscatedId = OrderIdRandomizer.GenerateFor(123456);
Console.WriteLine(obfuscatedId);
Console.WriteLine(OrderIdRandomizer.ReconstructFrom(obfuscatedId));
Disadvantage: If the algorithm is known, it is obviously easy to break.
Advantage: It is completely custom, i.e. not an established algorithm like MD5 that might be easier to guess/crack if you do not know what algorithm is being used.
An application I'm working on requires a matrix of random numbers. The matrix can grow in any direction at any time, and isn't always full. (I'll probably end up re-implementing it with a quad tree or something else, rather than a matrix with a lot of null objects.)
I need a way to generate the same matrix, given the same seed, no matter in which order I calculate the matrix.
LazyRandomMatrix rndMtx1 = new LazyRandomMatrix(1234); // Seed new object
float X = rndMtx1[0, 0];  // Lazily generate random numbers on demand
float Y = rndMtx1[3, 16];
float Z = rndMtx1[23, -5];
Debug.Assert(X == rndMtx1[0, 0]);
Debug.Assert(Y == rndMtx1[3, 16]);
Debug.Assert(Z == rndMtx1[23, -5]);

LazyRandomMatrix rndMtx2 = new LazyRandomMatrix(1234); // Seed second object
Debug.Assert(Y == rndMtx2[3, 16]); // Lazily generate the same random numbers
Debug.Assert(Z == rndMtx2[23, -5]); // on demand in a different order
Debug.Assert(X == rndMtx2[0, 0]);
Yes, if I knew the dimensions of the array, the best way would be to generate the entire array, and just return values, but they need to be generated independently and on demand.
My first idea was to initialize a new random number generator for each call to a new coordinate, seeding it with some hash of the overall matrix's seed and the coordinates used in calling, but this seems like a terrible hack, as it would require creating a ton of new Random objects.
What you're talking about is typically called "Perlin Noise", here's a link for you: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
The most important thing in that article is the noise function in 2D:
function Noise1(integer x, integer y)
    n = x + y * 57
    n = (n << 13) ^ n
    return (1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0)
end function
It returns a number between -1.0 and +1.0 based on the x and y coordinates alone (plus a hard-coded seed that you can change randomly at the start of your app or just leave as it is).
The rest of the article is about interpolating these numbers, but depending on how random you want them, you can just leave them as they are. Note that these numbers will be utterly random. If you instead apply a cosine interpolator and use the generated noise every 5-6 indexes, interpolating in between, you get heightmap data (which is what I used it for). Skip that for totally random data.
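For reference, here is a direct C# translation of the Noise1 function above (a sketch; the magic constants come straight from the article, and unchecked is needed because the multiplications deliberately overflow):

static double Noise1(int x, int y)
{
    unchecked
    {
        int n = x + y * 57;
        n = (n << 13) ^ n;
        return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
    }
}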
A standard random generator is usually a generator of a sequence, where each next element is built from the previous one. So to generate rndMtx1[3,16] you would have to generate all the previous elements in the sequence.
Actually you need something different from a random generator, because you need only one value, not the sequence. You have to build your own "generator" that uses the seed and the indexes as input to a formula producing a single random value. You can invent many ways to do so. One of the simplest is to take the random value as hash(seed + index) (I guess the idea of hashes used in passwords and signing is to produce some stable "random" value out of the input data).
P.S. You can improve your approach with independent generators (Random(seed + index)) by making lazy blocks of the matrix.
I think your first idea of instantiating a new Random object seeded by some deterministic hash of (x-coordinate, y-coordinate, LazyRandomMatrix seed) is probably reasonable for most situations. In general, creating lots of small objects on the managed heap is something the CLR is very good at handling efficiently. And I don't think Random.ctor() is terribly expensive. You can easily measure the performance if it's a concern.
A very similar solution which may be easier than creating a good deterministic hash is to use two Random objects each time. Something like:
public int this[int x, int y]
{
    get
    {
        Random r1 = new Random(_seed * x);
        Random r2 = new Random(y);
        return r1.Next() ^ r2.Next();
    }
}
Here is a solution based on a SHA1 hash. Basically it packs the bytes for the X, Y and seed values into a byte array, computes a hash of that array, and uses the first 4 bytes of the hash to generate an int. This should be pretty random.
public class LazyRandomMatrix
{
    private int _seed;
    private SHA1 _hashProvider = new SHA1CryptoServiceProvider();

    public LazyRandomMatrix(int seed)
    {
        _seed = seed;
    }

    public int this[int x, int y]
    {
        get
        {
            byte[] data = new byte[12];
            Buffer.BlockCopy(BitConverter.GetBytes(x), 0, data, 0, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(y), 0, data, 4, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(_seed), 0, data, 8, 4);
            byte[] hash = _hashProvider.ComputeHash(data);
            return BitConverter.ToInt32(hash, 0);
        }
    }
}
PRNGs can be built out of hash functions. This is what e.g. MS Research did with parallelizing random number generation with MD5, or others with TEA on a GPU. (In fact, PRNGs can be thought of as a hash function from (seed, state) => nextnumber.) Generating massive amounts of random numbers on a GPU brings up similar problems. (E.g., to make it parallel, there should not be a single shared state.)
Although it is more common in the crypto world to use a crypto hash, I have taken the liberty of using MurmurHash 2.0 for the sake of speed and simplicity. It has very good statistical properties as a hash function. A related, but not identical, test shows that it gives good results as a PRNG (unless I have fsc#kd up something in the C# code, that is :). Feel free to use any other suitable hash function; crypto ones (MD5, TEA, SHA) work as well, though crypto hashes tend to be much slower.
public class LazyRandomMatrix
{
    private uint seed;

    public LazyRandomMatrix(int seed)
    {
        this.seed = (uint)seed;
    }

    public int this[int x, int y]
    {
        get
        {
            return MurmurHash2((uint)x, (uint)y, seed);
        }
    }

    static int MurmurHash2(uint key1, uint key2, uint seed)
    {
        // 'm' and 'r' are mixing constants generated offline.
        // They're not really 'magic', they just happen to work well.
        const uint m = 0x5bd1e995;
        const int r = 24;

        // Initialize the hash to a 'random' value
        uint h = seed ^ 8;

        // Mix 4 bytes at a time into the hash
        key1 *= m;
        key1 ^= key1 >> r;
        key1 *= m;
        h *= m;
        h ^= key1;

        key2 *= m;
        key2 ^= key2 >> r;
        key2 *= m;
        h *= m;
        h ^= key2;

        // Do a few final mixes of the hash to ensure the last few
        // bytes are well-incorporated.
        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return (int)h;
    }
}
A pseudo-random number generator is essentially a function that deterministically calculates a successor for a given value.
You could invent a simple algorithm that calculates a value from its neighbours. If a neighbour doesn't have a value yet, calculate its value from its respective neighbours first.
Something like this:
value(0,0) = seed
value(x+1,0) = successor(value(x,0))
value(x,y+1) = successor(value(x,y))
Example with successor(n) = n+1 to calculate value(2,4):
\ x    0    1    2
 y  +----------------
 0  |  627  628  629
 1  |            630
 2  |            631
 3  |            632
 4  |            633
This example algorithm is obviously not very good, but you get the idea.
You want a random number generator with random access to the elements, instead of sequential access. (Note that you can reduce your two coordinates into a single index, e.g. by computing i = x + (y << 16).)
A cool example of such a generator is Blum Blum Shub, which is a cryptographically secure PRNG with easy random-access. Unfortunately, it is very slow.
A more practical example is the well-known linear congruential generator. You can easily modify one to allow random access. Consider the definition:
X(0) = S
X(n) = B + X(n-1)*A (mod M)
Evaluating this directly would take O(n) time (that's pseudo linear, not linear), but you can convert to a non-recursive form:
//Expand a few times to see the pattern:
X(n) = B + X(n-1)*A (mod M)
X(n) = B + (B + X(n-2)*A)*A (mod M)
X(n) = B + (B + (B + X(n-3)*A)*A)*A (mod M)
//Aha! I see it now, and I can reduce it to a closed form:
X(n) = B + B*A + B*A*A + ... + B*A^(N-1) + S*A^N (mod M)
X(n) = S*A^N + B*SUM[i:0..n-1](A^i) (mod M)
X(n) = S*A^N + B*(A^N-1)/(A-1) (mod M)
That last equation can be computed relatively quickly, although the second part of it is a bit tricky to get right (because division doesn't distribute over mod the same way addition and multiplication do).
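A sketch of that random-access evaluation, assuming non-negative parameters: BigInteger.ModPow handles A^N mod M, and the geometric series is computed without division via a divide-and-conquer identity, which sidesteps the modular-division issue mentioned above:

using System.Numerics;

static class RandomAccessLcg
{
    // S(n) = 1 + A + A^2 + ... + A^(n-1) (mod M), in O(log^2 n) multiplications:
    // even n: S(n) = (1 + A^(n/2)) * S(n/2); odd n: S(n) = 1 + A * S(n-1)
    static BigInteger GeometricSum(BigInteger a, BigInteger n, BigInteger m)
    {
        if (n == 0) return 0;
        if (n % 2 == 0)
            return (1 + BigInteger.ModPow(a, n / 2, m)) * GeometricSum(a, n / 2, m) % m;
        return (1 + a * GeometricSum(a, n - 1, m)) % m;
    }

    // X(n) = S*A^n + B*SUM[i:0..n-1](A^i) (mod M)
    public static BigInteger Nth(BigInteger s, BigInteger a, BigInteger b, BigInteger m, long n)
    {
        return (s * BigInteger.ModPow(a, n, m) + b * GeometricSum(a, n, m)) % m;
    }
}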
As far as I see, there are 2 basic algorithms possible here:
Generate a new random number based on func(seed, coord) for each coord
Generate a single random number from seed, and then transform it for the coord (something like rotate(x) + translate(y) or whatever)
In the first case, you have the problem of always generating new random numbers, although this may not be as expensive as you fear.
In the second case, the problem is that you may lose randomness during your transformation operations. However, in either case the result is reproducible.
Can people recommend quick and simple ways to combine the hash codes of two objects? I am not too worried about collisions, since I have a hash table that will handle them efficiently; I just want something that generates a code as quickly as possible.
Reading around SO and the web there seem to be a few main candidates:
XORing
XORing with Prime Multiplication
Simple numeric operations like multiplication/division (with overflow checking or wrapping around)
Building a String and then using the String classes Hash Code method
What would people recommend and why?
I would personally avoid XOR - it means that any two equal values will result in 0 - so hash(1, 1) == hash(2, 2) == hash(3, 3) etc. Also hash(5, 0) == hash(0, 5) etc which may come up occasionally. I have deliberately used it for set hashing - if you want to hash a sequence of items and you don't care about the ordering, it's nice.
I usually use:
unchecked
{
    int hash = 17;
    hash = hash * 31 + firstField.GetHashCode();
    hash = hash * 31 + secondField.GetHashCode();
    return hash;
}
That's the form that Josh Bloch suggests in Effective Java. Last time I answered a similar question I managed to find an article where this was discussed in detail - IIRC, no-one really knows why it works well, but it does. It's also easy to remember, easy to implement, and easy to extend to any number of fields.
If you are using .NET Core 2.1 or later or .NET Framework 4.6.1 or later, consider using the System.HashCode struct to help with producing composite hash codes. It has two modes of operation: Add and Combine.
An example using Combine, which is usually simpler and works for up to eight items:
public override int GetHashCode()
{
    return HashCode.Combine(object1, object2);
}
An example of using Add:
public override int GetHashCode()
{
    var hash = new HashCode();
    hash.Add(this.object1);
    hash.Add(this.object2);
    return hash.ToHashCode();
}
Pros:
Part of .NET itself, as of .NET Core 2.1/.NET Standard 2.1 (though, see con below)
For .NET Framework 4.6.1 and later, the Microsoft.Bcl.HashCode NuGet package can be used to backport this type.
Looks to have good performance and mixing characteristics, based on the work the author and the reviewers did before merging this into the corefx repo
Handles nulls automatically
Overloads that take IEqualityComparer instances
Cons:
Not available on .NET Framework before .NET 4.6.1. HashCode is part of .NET Standard 2.1. As of September 2019, the .NET team has no plans to support .NET Standard 2.1 on the .NET Framework, as .NET Core/.NET 5 is the future of .NET.
General purpose, so it won't handle super-specific cases as well as hand-crafted code
While the template outlined in Jon Skeet's answer works well in general as a hash function family, the choice of the constants is important and the seed of 17 and factor of 31 as noted in the answer do not work well at all for common use cases. In most use cases, the hashed values are much closer to zero than int.MaxValue, and the number of items being jointly hashed are a few dozen or less.
For hashing an integer tuple {x, y} where -1000 <= x <= 1000 and -1000 <= y <= 1000, it has an abysmal collision rate of almost 98.5%. For example, {1, 0} collides with {0, 31}, {1, 1} collides with {0, 32}, and so on. If we expand the coverage to also include n-tuples where 3 <= n <= 25, it does less terribly, with a collision rate of about 38%. But we can do much better.
public static int CustomHash(int seed, int factor, params int[] vals)
{
    int hash = seed;
    foreach (int i in vals)
    {
        hash = (hash * factor) + i;
    }
    return hash;
}
I wrote a Monte Carlo sampling search loop that tested the method above with various values for seed and factor over various random n-tuples of random integers i. Allowed ranges were 2 <= n <= 25 (where n was random but biased toward the lower end of the range) and -1000 <= i <= 1000. At least 12 million unique collision tests were performed for each seed and factor pair.
After about 7 hours running, the best pair found (where the seed and factor were both limited to 4 digits or less) was: seed = 1009, factor = 9176, with a collision rate of 0.1131%. In the 5- and 6-digit areas, even better options exist. But I selected the top 4-digit performer for brevity, and it performs quite well in all common int and char hashing scenarios. It also seems to work fine with integers of much greater magnitudes.
It is worth noting that "being prime" did not seem to be a general prerequisite for good performance as a seed and/or factor although it likely helps. 1009 noted above is in fact prime, but 9176 is not. I explicitly tested variations on this where I changed factor to various primes near 9176 (while leaving seed = 1009) and they all performed worse than the above solution.
Lastly, I also compared against the generic ReSharper recommendation function family of hash = (hash * factor) ^ i; and the original CustomHash() as noted above seriously outperforms it. The ReSharper XOR style seems to have collision rates in the 20-30% range for common use case assumptions and should not be used in my opinion.
Use the combination logic built into tuples. The example uses C# 7 tuples.
(field1, field2).GetHashCode();
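In context, that would sit in a GetHashCode override like the following (field1 and field2 stand in for whatever fields participate in equality):

public override int GetHashCode()
{
    return (field1, field2).GetHashCode();
}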
I presume that the .NET Framework team did a decent job in testing their System.String.GetHashCode() implementation, so I would use it:
// System.String.GetHashCode(): http://referencesource.microsoft.com/#mscorlib/system/string.cs,0a17bbac4851d0d4
// System.Web.Util.StringUtil.GetStringHashCode(System.String): http://referencesource.microsoft.com/#System.Web/Util/StringUtil.cs,c97063570b4e791a
public static int CombineHashCodes(IEnumerable<int> hashCodes)
{
    int hash1 = (5381 << 16) + 5381;
    int hash2 = hash1;

    int i = 0;
    foreach (var hashCode in hashCodes)
    {
        if (i % 2 == 0)
            hash1 = ((hash1 << 5) + hash1 + (hash1 >> 27)) ^ hashCode;
        else
            hash2 = ((hash2 << 5) + hash2 + (hash2 >> 27)) ^ hashCode;
        ++i;
    }

    return hash1 + (hash2 * 1566083941);
}
Another implementation is from System.Web.Util.HashCodeCombiner.CombineHashCodes(System.Int32, System.Int32) and System.Array.CombineHashCodes(System.Int32, System.Int32) methods. This one is simpler, but probably doesn't have such a good distribution as the method above:
// System.Web.Util.HashCodeCombiner.CombineHashCodes(System.Int32, System.Int32): http://referencesource.microsoft.com/#System.Web/Util/HashCodeCombiner.cs,21fb74ad8bb43f6b
// System.Array.CombineHashCodes(System.Int32, System.Int32): http://referencesource.microsoft.com/#mscorlib/system/array.cs,87d117c8cc772cca
public static int CombineHashCodes(IEnumerable<int> hashCodes)
{
    int hash = 5381;
    foreach (var hashCode in hashCodes)
        hash = ((hash << 5) + hash) ^ hashCode;
    return hash;
}
This is a repackaging of Special Sauce's brilliantly researched solution.
It makes use of Value Tuples (ITuple).
This allows defaults for the parameters seed and factor.
public static int CombineHashes(this ITuple tupled, int seed = 1009, int factor = 9176)
{
    var hash = seed;
    for (var i = 0; i < tupled.Length; i++)
    {
        unchecked
        {
            hash = hash * factor + tupled[i].GetHashCode();
        }
    }
    return hash;
}
Usage:
var hash1 = ("Foo", "Bar", 42).CombineHashes();
var hash2 = ("Jon", "Skeet", "Constants").CombineHashes(seed: 17, factor: 31);
If your input hashes are the same size, evenly distributed, and not related to each other, then an XOR should be OK. Plus it's fast.
The situation I'm suggesting this for is where you want to do
H = hash(A) ^ hash(B); // A and B are different types, so there's no way A == B.
Of course, if A and B can be expected to hash to the same value with a reasonable (non-negligible) probability, then you should not use XOR in this way.
If you're looking for speed and don't have too many collisions, then XOR is fastest. To prevent a clustering around zero, you could do something like this:
finalHash = hash1 ^ hash2;
return finalHash != 0 ? finalHash : hash1;
Of course, some prototyping ought to give you an idea of performance and clustering.
Assuming you have a relevant toString() function (one in which your different fields appear), I would just return its hash code:
this.toString().hashCode();
This is not very fast, but it should avoid collisions quite well.
I would recommend using the built-in hash functions in System.Security.Cryptography rather than rolling your own.