How to improve hashing for short strings to avoid collisions? - c#

I am having a problem with hash collisions using short strings in .NET4.
EDIT: I am using the built-in string hashing function in .NET.
I'm implementing a cache using objects that store the direction of a conversion, like this:
public class MyClass
{
    private string _from;
    private string _to;

    // More code here....

    public MyClass(string from, string to)
    {
        this._from = from;
        this._to = to;
    }

    public override int GetHashCode()
    {
        return string.Concat(this._from, this._to).GetHashCode();
    }

    public bool Equals(MyClass other)
    {
        return this.To == other.To && this.From == other.From;
    }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;
        if (this.GetType() != obj.GetType()) return false;
        return Equals(obj as MyClass);
    }
}
The conversion is direction dependent, and the from and to values are short strings like "AAB" and "ABA".
I am getting occasional hash collisions with these small strings. I have tried something simple like adding a salt (did not work).
The problem is that too many of my small strings like "AABABA" collide with the reverse string "ABAAAB". (Note that these are not real examples; I have no idea if "AAB" and "ABA" actually cause collisions!)
I have also gone heavy duty and implemented MD5, which works but is MUCH slower.
I have also implemented the suggestion from Jon Skeet here:
Should I use a concatenation of my string fields as a hash code?
This works but I don't know how dependable it is with my various 3-character strings.
How can I improve and stabilize the hashing of small strings without adding too much overhead like MD5?
EDIT: In response to a few of the answers posted... the cache is implemented using concurrent dictionaries keyed by MyClass as stubbed out above. If I replace the GetHashCode in the code above with something simple like @JonSkeet's code from the link I posted:
int hash = 17;
hash = hash * 23 + this._from.GetHashCode();
hash = hash * 23 + this._to.GetHashCode();
return hash;
Everything functions as expected.
It's also worth noting that in this particular use-case the cache is not used in a multi-threaded environment so there is no race condition.
EDIT: I should also note that this misbehavior is platform dependent. It works as intended on my fully updated Win7 x64 machine but does not behave properly on a non-updated Win7 x64 machine. I don't know the extent of the missing updates, but I know it doesn't have Win7 SP1... so I would assume there may also be a framework service pack or update missing as well.
EDIT: As suggested, my issue was not caused by a problem with the hashing function. I had an elusive race condition, which is why it worked on some computers but not others, and also why a "slower" hashing method made things work properly. The answer I selected was the most useful in understanding why my problem was not hash collisions in the dictionary.

Are you sure that collisions are what is causing your problems? When you say
I finally discovered what was causing this bug
do you mean some slowness of your code, or something else? If not, I'm curious what kind of problem it is, because any hash function (except a "perfect" hash function on a limited domain) will cause collisions.
I put together a quick piece of code to check for collisions among 3-letter words, and it doesn't report a single collision. You see what I mean? The built-in hash algorithm is not so bad.
Dictionary<int, bool> set = new Dictionary<int, bool>();
char[] buffer = new char[3];
int count = 0;
for (int c1 = (int)'A'; c1 <= (int)'z'; c1++)
{
    buffer[0] = (char)c1;
    for (int c2 = (int)'A'; c2 <= (int)'z'; c2++)
    {
        buffer[1] = (char)c2;
        for (int c3 = (int)'A'; c3 <= (int)'z'; c3++)
        {
            buffer[2] = (char)c3;
            string str = new string(buffer);
            count++;
            int hash = str.GetHashCode();
            if (set.ContainsKey(hash))
            {
                Console.WriteLine("Collision for {0}", str);
            }
            set[hash] = false;
        }
    }
}
Console.WriteLine("Generated {0} of {1} hashes", set.Count, count);
While you could pick almost any well-known hash function (as David mentioned), or even choose a "perfect" hash since your domain looks limited (such as a minimal perfect hash)... it would be great to understand whether the source of your problems really is collisions.
Update
What I want to say is that the .NET built-in hash function for strings is not so bad. It doesn't produce so many collisions that you would need to write your own algorithm in regular scenarios, and this doesn't depend on the length of the strings. Having a lot of 6-symbol strings doesn't mean your chances of seeing a collision are higher than with 1000-symbol strings; that is one of the basic properties of hash functions.
And again, another question is what kind of problems you experience because of collisions. All built-in hashtables and dictionaries support collision resolution, so all you would see is... probably some slowness. Is this your problem?
As for your code
return string.Concat(this._from, this._to).GetHashCode();
This can cause problems, because on every hash code calculation you create a new string. Maybe this is what causes your issues?
int hash = 17;
hash = hash * 23 + this._from.GetHashCode();
hash = hash * 23 + this._to.GetHashCode();
return hash;
This is a much better approach, simply because you don't create new objects on the heap. That is actually one of the main points of this approach: getting a good hash code for an object with a composite "key" without creating new objects. So if you don't have a single-value key, this should work for you. BTW, this is not a new hash function; it's just a way to combine existing hash values without compromising the main properties of hash functions.
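For completeness, here is a minimal sketch of how that could look inside MyClass (the unchecked block and the null handling are my additions, not part of the original suggestion):
public override int GetHashCode()
{
    unchecked // overflow during the multiply-add is expected and harmless
    {
        int hash = 17;
        hash = hash * 23 + (this._from == null ? 0 : this._from.GetHashCode());
        hash = hash * 23 + (this._to == null ? 0 : this._to.GetHashCode());
        return hash;
    }
}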

Any common hash function should be suitable for this purpose. If you're getting collisions on short strings like that, I'd say you're using an unusually bad hash function. You can use Jenkins or Knuth's with no issues.
Here's a very simple hash function that should be adequate. (Implemented in C, but should easily port to any similar language.)
unsigned int hash(const char *it)
{
    unsigned hval = 0;
    while (*it != 0)
    {
        hval += *it++;
        hval += (hval << 10);
        hval ^= (hval >> 6);
        hval += (hval << 3);
        hval ^= (hval >> 11);
        hval += (hval << 15);
    }
    return hval;
}
Note that if you want to trim the bits of the output of this function, you must use the least significant bits. You can also use mod to reduce the output range. The last character of the string tends to only affect the low-order bits. If you need a more even distribution, change return hval; to return hval * 2654435761U;.
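In C#, reducing such a 32-bit hash to a table index might look like this (illustrative only; the 0xDEADBEEF value stands in for a result of a function like the one above, and the table sizes are arbitrary):
uint hval = 0xDEADBEEF;                    // stand-in for a computed hash value
int indexPow2 = (int)(hval & (1024 - 1));  // power-of-two table size: keep the low bits
int indexMod = (int)(hval % 1000);         // arbitrary table size: reduce with mod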
Update:
public override int GetHashCode()
{
return string.Concat(this._from, this._to).GetHashCode();
}
This is broken. It treats from="foot",to="ar" as the same as from="foo",to="tar". Since your Equals function doesn't consider those equal, your hash function should not. Possible fixes include:
1) Form the string from + "XXX" + to and hash that. (This assumes the string "XXX" almost never appears in your input strings.)
2) Combine the hash of 'from' with the hash of 'to'. You'll have to use a clever combining function. For example, XOR or sum will cause from="foo",to="bar" to hash the same as from="bar",to="foo". Unfortunately, choosing the right combining function is not easy without knowing the internals of the hashing function. You can try:
int hc1=from.GetHashCode();
int hc2=to.GetHashCode();
return (hc1<<7)^(hc2>>25)^(hc1>>21)^(hc2<<11);
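As a sketch of fix 1 in C# (the separator choice is arbitrary and assumes it never occurs in the inputs):
public override int GetHashCode()
{
    // "XXX" acts as a separator so ("foot", "ar") and ("foo", "tar")
    // no longer concatenate to the same string
    return string.Concat(this._from, "XXX", this._to).GetHashCode();
}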

Related

Near perfect hash for guid tostring as dictionary key

What I'm trying to solve: using a GUID string as a key for a Dictionary<string, someObject>, and I want perfect hashing on the key...
Not sure if I'm missing something... When I run the following test with the dictionary constructor only passing in a size allocation, I get around 10 collisions each run. When I pass in the IEqualityComparer that just calls GetHashCode on the string, the test passes, even over multiple runs with x = 10 iterations and y up to a million! I thought the dictionary adjusted the hashing function, especially when dealing with strings? I don't have Reflector on my machine :( so I can't check tonight... If you comment out the alternating dictionary initialisations you'll see. The test runs relatively quickly on my i7.
[TestMethod]
public void NearPerfectHashingForGuidStrings()
{
    int y = 100000;
    int collisions = 0;
    //Dictionary<string, string> list = new Dictionary<string, string>(y, new GuidStringHashing());
    Dictionary<string, string> list = new Dictionary<string, string>(y);
    for (int x = 0; x < 5; x++)
    {
        Enumerable.Range(1, y).ToList().ForEach((h) =>
        {
            list[Guid.NewGuid().ToString()] = h.ToString();
        });
        var hashDuplicates = list.Keys.GroupBy(h => h.GetHashCode())
            .Where(group => group.Count() > 1)
            .Select(group => group.Key).ToList();
        hashDuplicates.ToList().ForEach(v => Debug.WriteLine(x + "--- " + v));
        collisions += hashDuplicates.Count();
        list.Clear();
    }
    Assert.AreEqual(0, collisions);
}
public class GuidStringHashing : IEqualityComparer<string>
{
    public bool Equals(string x, string y)
    {
        return GetHashCode(x) == GetHashCode(y);
    }

    public int GetHashCode(string obj)
    {
        return obj.GetHashCode();
    }
}
Your test is broken.
Because your equality comparer incorrectly reports that two different GUIDs that happen to have the same hash are equal, your dictionary never stores the collisions in the first place.
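A corrected comparer would compare the strings themselves and treat the hash only as a hint (my sketch, not part of the original answer):
public class GuidStringHashing : IEqualityComparer<string>
{
    public bool Equals(string x, string y)
    {
        // equality must compare values; two different strings
        // may legitimately share a hash code
        return string.Equals(x, y, StringComparison.Ordinal);
    }

    public int GetHashCode(string obj)
    {
        return obj.GetHashCode();
    }
}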
Due to the pigeonhole principle, it is fundamentally impossible to create a 32-bit perfect hash for more than 2^32 items.
It's impossible. You want a perfect hash function for an unknown set of keys. You can create perfect hash functions for specific set of keys. You can't create one perfect hash function that will work on all sets of keys.
The reason is the "Two Jesus Principle", as so nicely put by Mark Knopfler: "Two men say they're Jesus, one of them must be wrong." (It's more widely known as the "pigeonhole principle".)
What do you mean by a perfect hash code?
Your code is somewhat confusing, especially because you post a class GuidStringHashing that is not used by your test method.
But your code demonstrates that when you make 100,000 GUIDs, convert them all to strings, and then take the hash code of the strings, then it happens quite often that not all hash codes are distinct. That might be surprising when there are more than 4 billion integers to choose between, and you only generate 100,000 strings.
You're using the GetHashCode() for general strings, but your strings are not too general, they're all something like
"2315c2a7-7d29-42b1-9696-fe6a9dd72ffd"
so maybe your hash code is not optimal. It's better to parse the strings h back to a GUID and use the hash code of that, as in (new Guid(h)).GetHashCode().
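Concretely, the grouping in the test would become something like this (a sketch based on the test code above):
var hashDuplicates = list.Keys
    .GroupBy(h => new Guid(h).GetHashCode())  // hash the Guid itself, not its string form
    .Where(group => group.Count() > 1)
    .Select(group => group.Key)
    .ToList();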
However this still gives collisions with 100,000 GUIDs. I think what you're seeing is just the birthday paradox.
Try this more simple code. Here I use GetHashCode() on the GUIDs, so we expect that the integers are quite random:
var set = new HashSet<int>();
for (int i = 1; true; ++i)
{
if (!set.Add(Guid.NewGuid().GetHashCode()))
Console.WriteLine("Collision, i is: " + i);
}
We see (by running the above code many times) that a collision almost always happens before 100,000 hash codes have been calculated.
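That matches the birthday math: with n random 32-bit values, the expected number of colliding pairs is roughly n(n-1)/2 divided by 2^32. A quick check (my own sketch):
double n = 100000;
double expectedPairs = n * (n - 1) / 2.0 / 4294967296.0;
Console.WriteLine(expectedPairs); // about 1.16 colliding pairs expected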

32 bit fast uniform hash function. Use MD5 / SHA1 and cut off 4 bytes? [duplicate]

Possible Duplicate:
What is the best 32bit hash function for short strings (tag names)?
I need to hash many strings to 32bit (uint).
Can I just use MD5 or SHA1 and take 4 bytes from it? Or are there better alternatives?
There is no need for security or to care if one is cracked and so on.
I just need to hash fast and uniform to 32 bit. MD5 and SHA1 should be uniform.
But are there better (faster) built-in alternatives I could use? If not, which of the two would you use?
Here someone asked which one is better, but not about alternatives, and with security as a concern (which I don't care about):
How to Use SHA1 or MD5 in C#?(Which One is Better in Performance and Security for Authentication)
Do you need a cryptographic-strength hash? If all you need is 32 bits I bet not.
Try the Fowler-Noll-Vo hash. It's fast, has good distribution and avalanche effect, and is generally acceptable for hashtables, checksums etc:
public static uint To32BitFnv1aHash(this string toHash,
    bool separateUpperByte = false)
{
    IEnumerable<byte> bytesToHash;
    if (separateUpperByte)
        bytesToHash = toHash.ToCharArray()
            .Select(c => new[] { (byte)((c - (byte)c) >> 8), (byte)c })
            .SelectMany(c => c);
    else
        bytesToHash = toHash.ToCharArray()
            .Select(Convert.ToByte); // note: throws for chars above 255
    // this is the actual hash function; very simple
    uint hash = FnvConstants.FnvOffset32;
    foreach (var chunk in bytesToHash)
    {
        hash ^= chunk;
        hash *= FnvConstants.FnvPrime32;
    }
    return hash;
}
public static class FnvConstants
{
    public static readonly uint FnvPrime32 = 16777619;
    public static readonly ulong FnvPrime64 = 1099511628211;
    public static readonly uint FnvOffset32 = 2166136261;
    public static readonly ulong FnvOffset64 = 14695981039346656037;
}
This is really useful for creating semantically equatable hashes for GetHashCode, based on a string digest of each object (a custom ToString() or otherwise). You can overload this to take any IEnumerable<byte>, making it suitable for checksumming stream data etc. If you ever need a 64-bit hash (ulong), just copy the function and replace the constants with the 64-bit constants. Oh, one more thing: this hash (like most) relies on unchecked integer overflow; never run it in a "checked" block, or it is virtually guaranteed to throw exceptions.
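If the project might be compiled with overflow checking enabled, the mixing loop can be wrapped explicitly (a minimal sketch reusing the names from the code above):
uint hash = FnvConstants.FnvOffset32;
foreach (var chunk in bytesToHash)
{
    unchecked // FNV relies on wrap-around multiplication
    {
        hash ^= chunk;
        hash *= FnvConstants.FnvPrime32;
    }
}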
If security does not play a role, generating a hash with a cryptographic hash function (such as MD5 or SHA1) and taking 4 bytes from it works. But they are slower than various non-cryptographic hash functions, as these functions are primarily designed for security, not speed.
Have a look at non-cryptographic hash functions such as FNV or Murmur.
Non-Cryptographic Hash Function Zoo
Performance Graphs
MurMurHash3, an ultra fast hash algorithm for C# / .NET
Edit: The floodyberry.com domain is now registered by a domain parking service - removed dead links
The easiest algorithm for strings that is still reasonably good is as follows:
int Hash(string str)
{
    int res = 0;
    for (int i = 0; i < str.Length; i++)
    {
        res += (i * str[i]) % int.MaxValue;
    }
    return res;
}
Obviously this is absolutely not a secure hash algorithm, but it is fast (really fast), returns 32 bits, and as far as I know is reasonably uniform (I've used it for many algorithmic challenges with good results).
It is not for hashing passwords or any other sensitive data.

How can I generate a unique hashcode for a string

Is there any function, that gives me the same hashcode for the same string?
I'm having trouble: when I create 2 different strings (but with the same content), their hash codes are different, and therefore the strings are not correctly used in a Dictionary.
I would like to know what GetHashCode() function the Dictionary uses when the key is a string.
I'm building mine like this:
public override int GetHashCode()
{
    String str = "Equip" + Equipment.ToString() + "Destiny" + Destiny.ToString();
    return str.GetHashCode();
}
But it's producing different results for every instance that uses this code, despite the content of the string being the same.
Your title asks for one thing (unique hash codes) your body asks for something different (consistent hash codes).
You claim:
I'm having trouble when creating 2 different strings (but with the same content), their hashcode is different and therefore is not correctly used in a Dictionary.
If the strings genuinely have the same content, that simply won't occur. Your diagnostics are wrong somehow. Check for non-printable characters in your strings, e.g. trailing Unicode "null" characters:
string text1 = "Hello";
string text2 = "Hello\0";
Here text1 and text2 may print the same way in some contexts, but I'd hope they'd have different hash codes.
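One quick way to spot such hidden characters (my sketch, not part of the original answer):
static void DumpChars(string s)
{
    // print each UTF-16 code unit so "Hello" and "Hello\0" look different
    foreach (char c in s)
        Console.Write((int)c + " ");
    Console.WriteLine("(length " + s.Length + ")");
}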
Note that hash codes are not guaranteed to be unique and can't be... there are only 2^32 possible hash codes returned from GetHashCode, but more than 2^32 possible different strings.
Also note that the same content is not guaranteed to produce the same hash code on different runs, even of the same executable - you should not be persisting a hash code anywhere. For example, I believe the 32-bit .NET 4 and 64-bit .NET 4 CLRs produce different hash codes for strings. However, your claim that the values aren't being stored correctly in a Dictionary suggests that this is within a single process - where everything should be consistent.
As noted in comments, it's entirely possible that you're overriding Equals incorrectly. I'd also suggest that your approach to building a hash code isn't great. We don't know what the types of Equipment and Destiny are, but I'd suggest you should use something like:
public override int GetHashCode()
{
    int hash = 23;
    hash = hash * 31 + Equipment.GetHashCode();
    hash = hash * 31 + Destiny.GetHashCode();
    return hash;
}
That's the approach I usually use for hash codes. Equals would then look something like:
public override bool Equals(object other)
{
    // Reference equality check
    if (this == other)
    {
        return true;
    }
    if (other == null)
    {
        return false;
    }
    // Details of this might change depending on your situation; we'd
    // need more information
    if (other.GetType() != GetType())
    {
        return false;
    }
    // Adjust for your type...
    Foo otherFoo = (Foo) other;
    // You may want to change the equality used here based on the
    // types of Equipment and Destiny
    return this.Destiny == otherFoo.Destiny &&
        this.Equipment == otherFoo.Equipment;
}

Cache key construction based on the method name and argument values

I've decided to implement a caching facade in one of our applications - the purpose is to eventually reduce the network overhead and limit the amount of db hits. We are using Castle.Windsor as our IoC Container and we have decided to go with Interceptors to add the caching functionality on top of our services layer using the System.Runtime.Caching namespace.
At the moment I can't quite figure out the best approach for constructing the cache key. The goal is to make a distinction between different methods and also include passed argument values, meaning that these two method calls should be cached under two different keys:
IEnumerable<MyObject> GetMyObjectByParam(56); // key1
IEnumerable<MyObject> GetMyObjectByParam(23); // key2
For now I can see two possible implementations:
Option 1:
assembly | class | method return type | method name | argument types | argument hash codes
"MyAssembly.MyClass IEnumerable<MyObject> GetMyObjectByParam(long) { 56 }";
Option 2:
MD5 or SHA-256 computed hash based on the method's fully-qualified name and passed argument values
byte[] hashBytes = new SHA256Managed().ComputeHash(Encoding.UTF8.GetBytes(name + args));
string key = Convert.ToBase64String(hashBytes);
I'm leaning towards the first option, as the second one requires more processing time; on the other hand, the second option yields keys of exactly the same length.
Is it safe to assume that the first option will generate a unique key for methods using complex argument types? Or maybe there is a completely different way of doing this?
Help and opinions will be highly appreciated!
Based on some very useful links that I've found here and here I've decided to implement it more-or-less like this:
public sealed class CacheKey : IEquatable<CacheKey>
{
    private readonly Type reflectedType;
    private readonly Type returnType;
    private readonly string name;
    private readonly Type[] parameterTypes;
    private readonly object[] arguments;

    public CacheKey(Type reflectedType, Type returnType, string name,
        Type[] parameterTypes, object[] arguments)
    {
        // check for null, incorrect values etc.
        this.reflectedType = reflectedType;
        this.returnType = returnType;
        this.name = name;
        this.parameterTypes = parameterTypes;
        this.arguments = arguments;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as CacheKey);
    }

    public bool Equals(CacheKey other)
    {
        if (other == null)
        {
            return false;
        }
        if (parameterTypes.Length != other.parameterTypes.Length ||
            arguments.Length != other.arguments.Length)
        {
            return false;
        }
        for (int i = 0; i < parameterTypes.Length; i++)
        {
            if (!parameterTypes[i].Equals(other.parameterTypes[i]))
            {
                return false;
            }
        }
        for (int i = 0; i < arguments.Length; i++)
        {
            if (!arguments[i].Equals(other.arguments[i]))
            {
                return false;
            }
        }
        return reflectedType.Equals(other.reflectedType) &&
            returnType.Equals(other.returnType) &&
            name.Equals(other.name);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + reflectedType.GetHashCode();
            hash = hash * 31 + returnType.GetHashCode();
            hash = hash * 31 + name.GetHashCode();
            for (int i = 0; i < parameterTypes.Length; i++)
            {
                hash = hash * 31 + parameterTypes[i].GetHashCode();
            }
            for (int i = 0; i < arguments.Length; i++)
            {
                hash = hash * 31 + arguments[i].GetHashCode();
            }
            return hash;
        }
    }
}
Basically it's just a general idea - the above code can be easily rewritten to a more generic version with one collection of Fields - the same rules would have to be applied on each element of the collection. I can share the full code.
An option you seem to have skipped is using the built-in .NET GetHashCode() function on the string. I'm fairly certain this is what goes on behind the scenes in a C# dictionary with a string as the TKey (I mention that because you've tagged the question with dictionary). I'm not sure how the .NET Dictionary class relates to your Castle.Windsor or the System.Runtime.Caching interface you mention.
The reason you wouldn't want to persist GetHashCode as a hash key is that Microsoft explicitly documents that its behavior may change between framework versions without warning (e.g. to provide a more unique or faster-executing function). If this cache lives strictly in memory, that is not a concern, because upgrading the .NET Framework would necessitate a restart of your application, wiping the cache.
To clarify, just using the concatenated string (Option 1) should be sufficiently unique. It looks like you've added everything possible to uniquely qualify your methods.
If you end up feeding the string form of an MD5 or SHA256 hash into a dictionary key, the program will probably rehash the string behind the scenes anyway. It's been a while since I read about the inner workings of the Dictionary class, but if you leave it as a Dictionary<String, IEnumerable<MyObject>> (as opposed to calling GetHashCode() on the strings yourself and using the int return value as the key), the dictionary will handle hash code collisions itself.
Also note that (at least according to a benchmark program run on my machine), MD5 is around 10% faster than SHA1 and twice as fast as SHA256, and String.GetHashCode() is around 20 times faster than MD5 (though it's not cryptographically secure). Timings were for computing the hashes of the same 100,000 randomly generated strings of length between 32 and 1024 characters. Regardless of the exact numbers, using a cryptographically secure hash function as a key will only slow down your program.
I can post the source code for my comparisons if you like.
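A timing loop of the kind described might look like this (my own sketch, not the original benchmark; it assumes System.Diagnostics, System.Linq, System.Security.Cryptography, and System.Text are imported, and actual timings will vary by machine):
var strings = Enumerable.Range(0, 100000)
    .Select(_ => Guid.NewGuid().ToString())
    .ToArray();
var sw = Stopwatch.StartNew();
using (var md5 = MD5.Create())
{
    foreach (var s in strings)
        md5.ComputeHash(Encoding.UTF8.GetBytes(s)); // cryptographic hash per string
}
Console.WriteLine("MD5: " + sw.Elapsed);
sw.Restart();
foreach (var s in strings)
    s.GetHashCode();                                // built-in non-cryptographic hash
Console.WriteLine("GetHashCode: " + sw.Elapsed);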

Is it possible to combine hash codes for private members to generate a new hash code?

I have an object for which I want to generate a unique hash (override GetHashCode()) but I want to avoid overflows or something unpredictable.
The code should be the result of combining the hash codes of a small collection of strings.
The hash codes will be part of generating a cache key, so ideally they should be unique; however, the number of possible values being hashed is small, so I THINK probability is in my favour here.
Would something like this be sufficient AND is there a better way of doing this?
int hash = 0;
foreach (string item in collection)
{
    hash += (item.GetHashCode() / collection.Count);
}
return hash;
EDIT: Thanks for answers so far.
@Jon Skeet: No, order is not important.
I guess this is almost another question, but since I am using the result to generate a cache key (a string), would it make sense to use a cryptographic hash function like MD5, or just use the string representation of this int?
The fundamentals pointed out by Marc and Jon are not bad, but they are far from optimal in terms of the evenness of the distribution of the results. Sadly, the 'multiply by primes' approach copied by so many people from Knuth is not the best choice; in many cases better distribution can be achieved by cheaper-to-calculate functions (though the difference is very slight on modern hardware). In fact, throwing primes into many aspects of hashing is no panacea.
If this data is used for significantly sized hash tables, I recommend reading Bret Mulvey's excellent study and explanation of various modern (and not so modern) hashing techniques, handily done in C#.
Note that the behaviour of various hash functions on strings is heavily biased by whether the strings are short (roughly speaking, how many characters are hashed before the bits begin to overflow) or long.
One of the simplest and easiest to implement is also one of the best: the Jenkins one-at-a-time hash.
private static unsafe void Hash(byte* d, int len, ref uint h)
{
    for (int i = 0; i < len; i++)
    {
        h += d[i];
        h += (h << 10);
        h ^= (h >> 6);
    }
}

public unsafe static void Hash(ref uint h, string s)
{
    fixed (char* c = s)
    {
        byte* b = (byte*)(void*)c;
        Hash(b, s.Length * 2, ref h);
    }
}

public unsafe static int Avalanche(uint h)
{
    h += (h << 3);
    h ^= (h >> 11);
    h += (h << 15);
    return *((int*)(void*)&h);
}
you can then use this like so:
uint h = 0;
foreach (string item in collection)
{
    Hash(ref h, item);
}
return Avalanche(h);
you can merge multiple different types like so:
public unsafe static void Hash(ref uint h, int data)
{
    byte* d = (byte*)(void*)&data;
    Hash(d, sizeof(int), ref h);
}

public unsafe static void Hash(ref uint h, long data)
{
    byte* d = (byte*)(void*)&data;
    Hash(d, sizeof(long), ref h);
}
If you only have access to the field as an object with no knowledge of the internals you can simply call GetHashCode() on each one and combine that value like so:
uint h = 0;
foreach (var item in collection)
{
    Hash(ref h, item.GetHashCode());
}
return Avalanche(h);
Sadly you can't do sizeof(T) so you must do each struct individually.
If you wish to use reflection you can construct on a per type basis a function which does structural identity and hashing on all fields.
If you wish to avoid unsafe code then you can use bit masking techniques to pull out individual bits from ints (and chars if dealing with strings) with not too much extra hassle.
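For instance, a safe variant for a single int might look like this (my sketch of the masking idea, not the answerer's code):
// feed each of the int's four bytes into the one-at-a-time mix
// using shifts and masks instead of unsafe pointer casts
public static uint Hash(uint h, int data)
{
    for (int i = 0; i < 4; i++)
    {
        h += (uint)((data >> (8 * i)) & 0xFF); // extract byte i
        h += (h << 10);
        h ^= (h >> 6);
    }
    return h;
}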
Hashes aren't meant to be unique; they're just meant to be well distributed in most situations, and to be consistent. Note that overflows shouldn't be a problem.
Just adding isn't generally a good idea, and dividing certainly isn't. Here's the approach I usually use:
int result = 17;
foreach (string item in collection)
{
    result = result * 31 + item.GetHashCode();
}
return result;
If you're otherwise in a checked context, you might want to deliberately make it unchecked.
Note that this assumes that order is important, i.e. that { "a", "b" } should be different from { "b", "a" }. Please let us know if that's not the case.
There is nothing wrong with this approach as long as the members whose hashcodes you are combining follow the rules of hash codes. In short ...
The hash code of the private members should not change for the lifetime of the object
The container must not change the object the private members point to lest it in turn change the hash code of the container
If the order of the items is not important (i.e. {"a","b"} is the same as {"b","a"}) then you can use exclusive or to combine the hash codes:
hash ^= item.GetHashCode();
[Edit: As Mark pointed out in a comment to a different answer, this has the drawback of also giving collections like {"a"} and {"a","b","b"} the same hash code.]
If the order is important, you can instead multiply by a prime number and add:
hash *= 11;
hash += item.GetHashCode();
(When you multiply you will sometimes get an overflow that is ignored, but by multiplying by a prime number you lose a minimum of information. If you instead multiplied by a number like 16, you would lose four bits of information each time, so after eight items the hash code from the first item would be completely gone.)
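To see this concretely (my own illustration, not part of the answer):
unchecked
{
    int first = "a".GetHashCode();
    int h16 = first;
    for (int i = 0; i < 8; i++) h16 *= 16; // 16^8 == 2^32, which wraps to zero
    Console.WriteLine(h16);                // always 0: the first item is erased
    int h11 = first;
    for (int i = 0; i < 8; i++) h11 *= 11; // 11^8 is odd, so the multiply is
    Console.WriteLine(h11);                // invertible mod 2^32; info survives
}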
