The history behind this is long, but the problem is simple.
Given 3 strings, I need to cache the matching value.
To have a fast cache I use the following code:
public int keygen(string a, string b, string c)
{
    var x = a + "##" + b + "##" + c;
    var hash = x.GetHashCode();
    return hash;
}
(Note that strings a, b and c never contain the sequence "##".)
The cache itself is just a Dictionary<int, object>.
I know there is a risk that the hash key might be non-unique, but aside from that:
Does anyone know a faster way to make an int key? (in C#)
This operation takes ~15% of the total CPU time, and this is a long-running app.
I have tried a couple of implementations but failed to find anything faster.
You should use a Dictionary<Tuple<string,string,string>, object>. Then you don't have to worry about non-uniqueness, since the Dictionary will take care of it for you.
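For illustration, a minimal sketch of such a cache (the GetOrAdd wrapper and factory delegate are my own illustrative names, not from the question):
using System;
using System.Collections.Generic;

class TupleKeyCache
{
    // Tuple<,,> implements structural Equals/GetHashCode, so the
    // dictionary distinguishes keys even when hash codes collide.
    private readonly Dictionary<Tuple<string, string, string>, object> _cache =
        new Dictionary<Tuple<string, string, string>, object>();

    public object GetOrAdd(string a, string b, string c, Func<object> factory)
    {
        var key = Tuple.Create(a, b, c);
        object value;
        if (!_cache.TryGetValue(key, out value))
        {
            value = factory();
            _cache[key] = value;
        }
        return value;
    }
}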
Instead of concatenating strings (which allocates new strings) you can XOR the individual hash codes, or better, combine them with simple maths (credit to Jon Skeet):
public int keygen(string a, string b, string c)
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = 17;
        // The parentheses are required: without them, "hash * 23 + a" would
        // be evaluated first (as string concatenation) and compared to null.
        hash = hash * 23 + (a == null ? 0 : a.GetHashCode());
        hash = hash * 23 + (b == null ? 0 : b.GetHashCode());
        hash = hash * 23 + (c == null ? 0 : c.GetHashCode());
        return hash;
    }
}
In general it's not necessary to produce unique hashes, but you should minimize collisions.
Another (less efficient) way is to use an anonymous type, which has built-in support for GetHashCode:
public int keygen(string a, string b, string c)
{
    return new { a, b, c }.GetHashCode();
}
Note that the names, types and order of the properties matter for the calculation of an anonymous type's hash code.
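On newer C# versions (7.0+), a value tuple gives the same structural hashing without the anonymous type; a sketch of that alternative (my suggestion, not part of the answer above):
public int keygen(string a, string b, string c)
{
    // ValueTuple combines the component hash codes itself and,
    // unlike the anonymous type, avoids a heap allocation.
    return (a, b, c).GetHashCode();
}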
A faster approach would be to compute the hash of each string separately, then combine them with a hash function. This eliminates the string concatenation, which is likely where the time is going.
e.g.
public int KeyGen(string a, string b, string c)
{
    var aHash = a.GetHashCode();
    var bHash = b.GetHashCode();
    var cHash = c.GetHashCode();
    var hash = 36469;
    unchecked
    {
        hash = hash * 17 + aHash;
        hash = hash * 17 + bHash;
        hash = hash * 17 + cHash;
    }
    return hash;
}
I know there is a risk that the hash key might be non-unique
Hash keys don't have to be unique - they just work better if collisions are minimized.
That said, 15% of your time spent computing a string's hash code seems VERY high. Even switching to string.Concat() (which the compiler may do for you anyway) or StringBuilder shouldn't make that much difference. I'd suggest triple-checking your measurements.
I'd guess most of the time in this function is spent building the concatenated string, only to call GetHashCode on it. I would try something like:
public int keygen(string a, string b, string c)
{
    return a.GetHashCode() ^ b.GetHashCode() ^ c.GetHashCode();
}
Or possibly use something more complicated than a simple XOR. However, be aware that GetHashCode is not a cryptographic hash function! It is a hash function for hash tables, not for cryptography, and you should definitely not use it for anything security-related like keys (as the name keygen hints).
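One concrete weakness of the plain XOR version: XOR is commutative, so argument order is lost. A quick sketch of the resulting collision:
// With keygen as above, swapping two arguments yields the same key:
int k1 = keygen("foo", "bar", "baz");
int k2 = keygen("bar", "foo", "baz");
// k1 == k2 by construction, even though the inputs differ.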
Related
Is there any function that gives me the same hash code for the same string?
I'm having trouble: when I create 2 different strings (but with the same content), their hash codes are different, and therefore they are not correctly used in a Dictionary.
I would like to know what GetHashCode() function the Dictionary uses when the key is a string.
I'm building mine like this:
public override int GetHashCode()
{
    String str = "Equip" + Equipment.ToString() + "Destiny" + Destiny.ToString();
    return str.GetHashCode();
}
But it's producing different results for every instance that uses this code, despite the content of the string being the same.
Your title asks for one thing (unique hash codes); your body asks for something different (consistent hash codes).
You claim:
I'm having trouble: when I create 2 different strings (but with the same content), their hash codes are different, and therefore they are not correctly used in a Dictionary.
If the strings genuinely have the same content, that simply won't occur. Your diagnostics are wrong somehow. Check for non-printable characters in your strings, e.g. trailing Unicode "null" characters:
string text1 = "Hello";
string text2 = "Hello\0";
Here text1 and text2 may print the same way in some contexts, but I'd hope they'd have different hash codes.
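A quick way to surface such hidden characters is to print the lengths next to the hash codes; a small sketch:
string text1 = "Hello";
string text2 = "Hello\0";
Console.WriteLine(text1.Length);  // 5
Console.WriteLine(text2.Length);  // 6 - the trailing '\0' is revealed
// Equal content would give equal hash codes within one process;
// these two differ (barring an unlucky collision).
Console.WriteLine(text1.GetHashCode() == text2.GetHashCode());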
Note that hash codes are not guaranteed to be unique and can't be... there are only 2^32 possible hash codes returned from GetHashCode, but more than 2^32 possible different strings.
Also note that the same content is not guaranteed to produce the same hash code on different runs, even of the same executable - you should not be persisting a hash code anywhere. For example, I believe the 32-bit .NET 4 and 64-bit .NET 4 CLRs produce different hash codes for strings. However, your claim that the values aren't being stored correctly in a Dictionary suggests that this is within a single process - where everything should be consistent.
As noted in comments, it's entirely possible that you're overriding Equals incorrectly. I'd also suggest that your approach to building a hash code isn't great. We don't know what the types of Equipment and Destiny are, but I'd suggest you should use something like:
public override int GetHashCode()
{
    int hash = 23;
    hash = hash * 31 + Equipment.GetHashCode();
    hash = hash * 31 + Destiny.GetHashCode();
    return hash;
}
That's the approach I usually use for hash codes. Equals would then look something like:
public override bool Equals(object other)
{
    // Reference equality check
    if (this == other)
    {
        return true;
    }
    if (other == null)
    {
        return false;
    }
    // Details of this might change depending on your situation; we'd
    // need more information
    if (other.GetType() != GetType())
    {
        return false;
    }
    // Adjust for your type...
    Foo otherFoo = (Foo) other;
    // You may want to change the equality used here based on the
    // types of Equipment and Destiny
    return this.Destiny == otherFoo.Destiny &&
           this.Equipment == otherFoo.Equipment;
}
I am having a problem with hash collisions using short strings in .NET4.
EDIT: I am using the built-in string hashing function in .NET.
I'm implementing a cache using objects that store the direction of a conversion like this
public class MyClass
{
    private string _from;
    private string _to;

    // More code here....

    public MyClass(string from, string to)
    {
        this._from = from;
        this._to = to;
    }

    public override int GetHashCode()
    {
        return string.Concat(this._from, this._to).GetHashCode();
    }

    public bool Equals(MyClass other)
    {
        return this.To == other.To && this.From == other.From;
    }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;
        if (this.GetType() != obj.GetType()) return false;
        return Equals(obj as MyClass);
    }
}
This is direction dependent and the from and to are represented by short strings like "AAB" and "ABA".
I am getting occasional hash collisions with these small strings. I have tried something simple like adding a salt (which did not work).
The problem is that too many of my small strings like "AABABA" collide with the hashes of their reverses like "ABAAAB" (note that these are not real examples; I have no idea if "AABABA" and "ABAAAB" actually collide!),
and I have gone heavy-duty and implemented MD5 (which works, but is MUCH slower).
I have also implemented the suggestion from Jon Skeet here:
Should I use a concatenation of my string fields as a hash code?
This works but I don't know how dependable it is with my various 3-character strings.
How can I improve and stabilize the hashing of small strings without adding too much overhead like MD5?
EDIT: In response to a few of the answers posted... the cache is implemented using concurrent dictionaries keyed on MyClass as stubbed out above. If I replace the GetHashCode in the code above with something simple like @JonSkeet's code from the link I posted:
int hash = 17;
hash = hash * 23 + this._from.GetHashCode();
hash = hash * 23 + this._to.GetHashCode();
return hash;
Everything functions as expected.
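For reference, dropped into MyClass above, the replacement override would read (the unchecked block is my addition; it is the standard wrapper for this pattern):
public override int GetHashCode()
{
    unchecked // overflow just wraps, which is fine for hash codes
    {
        int hash = 17;
        hash = hash * 23 + this._from.GetHashCode();
        hash = hash * 23 + this._to.GetHashCode();
        return hash;
    }
}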
It's also worth noting that in this particular use-case the cache is not used in a multi-threaded environment so there is no race condition.
EDIT: I should also note that this misbehavior is platform dependent. It works as intended on my fully updated Win7 x64 machine but does not behave properly on a non-updated Win7 x64 machine. I don't know the full extent of the missing updates, but I know it doesn't have Win7 SP1... so I would assume there may be a missing framework service pack or update as well.
EDIT: As suggested, my issue was not caused by a problem with the hashing function. I had an elusive race condition, which is why it worked on some computers but not others, and also why a "slower" hashing method made things work properly. The answer I selected was the most useful in understanding why my problem was not hash collisions in the dictionary.
Are you sure that collisions are what causes your problems? When you say
I finally discovered what was causing this bug
do you mean some slowness in your code, or something else? If not, I'm curious what kind of problem it is, because any hash function (except "perfect" hash functions on limited domains) will cause collisions.
I put together a quick piece of code to check for collisions among 3-letter words, and it doesn't report a single collision. You see what I mean? The built-in hash algorithm is not so bad.
Dictionary<int, bool> set = new Dictionary<int, bool>();
char[] buffer = new char[3];
int count = 0;
for (int c1 = (int)'A'; c1 <= (int)'z'; c1++)
{
    buffer[0] = (char)c1;
    for (int c2 = (int)'A'; c2 <= (int)'z'; c2++)
    {
        buffer[1] = (char)c2;
        for (int c3 = (int)'A'; c3 <= (int)'z'; c3++)
        {
            buffer[2] = (char)c3;
            string str = new string(buffer);
            count++;
            int hash = str.GetHashCode();
            if (set.ContainsKey(hash))
            {
                Console.WriteLine("Collision for {0}", str);
            }
            set[hash] = false;
        }
    }
}
Console.WriteLine("Generated {0} of {1} hashes", set.Count, count);
While you could pick almost any well-known hash function (as David mentioned), or even choose a "perfect" hash since your domain looks limited (like a minimal perfect hash)... it would be great to understand whether the source of the problems really is collisions.
Update
What I want to say is that the built-in .NET hash function for strings is not so bad. It doesn't produce so many collisions that you would need to write your own algorithm in regular scenarios. And this doesn't depend on the length of the strings: if you have a lot of 6-symbol strings, that doesn't imply that your chances of seeing a collision are higher than with 1000-symbol strings. This is one of the basic properties of hash functions.
And again, another question is what kind of problems you experience because of collisions. All built-in hash tables and dictionaries support collision resolution, so all you would see is... probably some slowness. Is this your problem?
As for your code
return string.Concat(this._from, this._to).GetHashCode();
This can cause problems, because every hash code calculation creates a new string. Maybe this is what causes your issues?
int hash = 17;
hash = hash * 23 + this._from.GetHashCode();
hash = hash * 23 + this._to.GetHashCode();
return hash;
This is a much better approach, simply because you don't create new objects on the heap. That is actually one of the main points of this approach: getting a good hash code for an object with a complex "key" without creating new objects. So if you don't have a single-value key, this should work for you. BTW, this is not a new hash function, just a way to combine existing hash values without compromising the main properties of hash functions.
Any common hash function should be suitable for this purpose. If you're getting collisions on short strings like that, I'd say you're using an unusually bad hash function. You can use Jenkins or Knuth's with no issues.
Here's a very simple hash function that should be adequate. (Implemented in C, but should easily port to any similar language.)
unsigned int hash(const char *it)
{
    unsigned hval = 0;
    while (*it != 0)
    {
        hval += *it++;
        hval += (hval << 10);
        hval ^= (hval >> 6);
        hval += (hval << 3);
        hval ^= (hval >> 11);
        hval += (hval << 15);
    }
    return hval;
}
Note that if you want to trim the bits of the output of this function, you must use the least significant bits. You can also use mod to reduce the output range. The last character of the string tends to only affect the low-order bits. If you need a more even distribution, change return hval; to return hval * 2654435761U;.
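For instance, reducing the 32-bit result to a table index might look like this (a sketch in C# to match the rest of the thread; ComputeHash is a hypothetical port of the C function above):
uint h = ComputeHash(key);           // hypothetical 32-bit hash of the key
int index = (int)(h & (1024u - 1));  // mask keeps the 10 least significant bits
int index2 = (int)(h % 1000u);       // mod for a non-power-of-two table size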
Update:
public override int GetHashCode()
{
    return string.Concat(this._from, this._to).GetHashCode();
}
This is broken. It treats from="foot",to="ar" as the same as from="foo",to="tar". Since your Equals function doesn't consider those equal, your hash function should not. Possible fixes include:
1) Form the string from + "XXX" + to and hash that. (This assumes the string "XXX" almost never appears in your input strings.)
2) Combine the hash of 'from' with the hash of 'to'. You'll have to use a clever combining function. For example, XOR or sum will cause from="foo",to="bar" to hash the same as from="bar",to="foo". Unfortunately, choosing the right combining function is not easy without knowing the internals of the hashing function. You can try:
int hc1 = from.GetHashCode();
int hc2 = to.GetHashCode();
return (hc1 << 7) ^ (hc2 >> 25) ^ (hc1 >> 21) ^ (hc2 << 11);
For C++ I've always used Boost.Functional/Hash to create good hash values without having to deal with bit shifts, XORs and prime numbers. Are there any libraries that produce good (I'm not asking for optimal) hash values for C#/.NET? I would use this utility to implement GetHashCode(), not cryptographic hashes.
To clarify why I think this is useful, here's the implementation of boost::hash_combine, which combines two hash values (of course a very common operation when implementing GetHashCode()):
seed ^= hash_value(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
Clearly, this sort of code doesn't belong in the implementation of GetHashCode() and should therefore be implemented elsewhere.
I wouldn't use a separate library just for that. As mentioned before, it is essential for the GetHashCode method to be fast and stable. Usually I prefer to write an inline implementation, but it might actually be a good idea to use a helper class:
internal static class HashHelper
{
    private static int InitialHash = 17; // Prime number
    private static int Multiplier = 23;  // Different prime number

    public static Int32 GetHashCode(params object[] values)
    {
        unchecked // overflow is fine
        {
            int hash = InitialHash;
            if (values != null)
            {
                for (int i = 0; i < values.Length; i++)
                {
                    object currentValue = values[i];
                    hash = hash * Multiplier
                        + (currentValue != null ? currentValue.GetHashCode() : 0);
                }
            }
            return hash;
        }
    }
}
This way common hash-calculation logic can be used:
public override int GetHashCode()
{
    return HashHelper.GetHashCode(field1, field2);
}
The answers to this question contain some examples of helper classes that resemble Boost.Functional/Hash. None looks quite as elegant, though.
I am not aware of any real .NET library that provides the equivalent.
Unless you have very specific requirements, you don't need to calculate your type's hash code from first principles. Rather, combine the hash codes of the fields/properties you use for equality determination in one of the simple ways, something like:
int hash = field1.GetHashCode();
hash = (hash * 37) + field2.GetHashCode();
(Combination function taken from §3.3.2 C# in Depth, 2nd Ed, Jon Skeet).
To avoid the boxing issue, chain your calls using a generic extension method on Int32:
public static class HashHelper
{
    public static int InitialHash = 17;  // Prime number
    private static int Multiplier = 23;  // Different prime number

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static Int32 GetHashCode<T>(this Int32 source, T next)
    {
        // Comparing value types to null is OK. See
        // http://stackoverflow.com/questions/1972262/c-sharp-okay-with-comparing-value-types-to-null
        if (next == null)
        {
            return source;
        }
        unchecked
        {
            // Multiply before adding, so that field order affects the result.
            return source * Multiplier + next.GetHashCode();
        }
    }
}
then you can do
HashHelper
    .InitialHash
    .GetHashCode(field0)
    .GetHashCode(field1)
    .GetHashCode(field2);
Have a look at this link; it describes MD5 hashing. Otherwise use GetHashCode().
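If you did go the MD5 route, a minimal sketch might look like this (note it is much slower than GetHashCode and only worth considering if you need a hash that is stable across processes):
using System;
using System.Security.Cryptography;
using System.Text;

public static int Md5HashCode(string s)
{
    using (var md5 = MD5.Create())
    {
        // Hash the UTF-8 bytes and fold the first four bytes into an int.
        byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(s));
        return BitConverter.ToInt32(digest, 0);
    }
}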
I need to generate a fast hash code in GetHashCode for a BitArray. I have a Dictionary where the keys are BitArrays, and all the BitArrays are of the same length.
Does anyone know of a fast way to generate a good hash from a variable number of bits, as in this scenario?
UPDATE:
The approach I originally took was to access the internal array of ints directly through reflection (speed is more important than encapsulation in this case), then XOR those values. The XOR approach seems to work well i.e. my 'Equals' method isn't called excessively when searching in the Dictionary:
public int GetHashCode(BitArray array)
{
    int hash = 0;
    foreach (int value in array.GetInternalValues())
    {
        hash ^= value;
    }
    return hash;
}
However, the approach suggested by Mark Byers and seen elsewhere on StackOverflow was slightly better (16570 Equals calls vs 16608 for the XOR for my test data). Note that this approach fixes a bug in the previous one where bits beyond the end of the bit array could affect the hash value. This could happen if the bit array was reduced in length.
public int GetHashCode(BitArray array)
{
    UInt32 hash = 17;
    int bitsRemaining = array.Length;
    foreach (int value in array.GetInternalValues())
    {
        UInt32 cleanValue = (UInt32)value;
        if (bitsRemaining < 32)
        {
            // Clear any bits that are beyond the end of the array.
            int bitsToWipe = 32 - bitsRemaining;
            cleanValue <<= bitsToWipe;
            cleanValue >>= bitsToWipe;
        }
        hash = hash * 23 + cleanValue;
        bitsRemaining -= 32;
    }
    return (int)hash;
}
The GetInternalValues extension method is implemented like this:
public static class BitArrayExtensions
{
    static FieldInfo _internalArrayGetter = GetInternalArrayGetter();

    static FieldInfo GetInternalArrayGetter()
    {
        return typeof(BitArray).GetField("m_array", BindingFlags.NonPublic | BindingFlags.Instance);
    }

    static int[] GetInternalArray(BitArray array)
    {
        return (int[])_internalArrayGetter.GetValue(array);
    }

    public static IEnumerable<int> GetInternalValues(this BitArray array)
    {
        return GetInternalArray(array);
    }

    // ... more extension methods
}
Any suggestions for improvement are welcome!
BitArray is a terrible class to act as a key in a Dictionary. The only reasonable way to implement GetHashCode() is to use its CopyTo() method to copy the bits into a byte[]. That's not great; it creates a ton of garbage.
Beg, steal or borrow to use a BitVector32 instead. It has a good implementation of GetHashCode(). If you've got more than 32 bits, consider spinning your own class so you can get at the underlying array without having to copy.
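A minimal sketch of the BitVector32 route (the mask setup here is illustrative):
using System.Collections.Generic;
using System.Collections.Specialized;

// BitVector32 is a struct wrapping a single int, so it hashes and
// compares cheaply, with no per-lookup allocation.
var vector = new BitVector32(0);
int bit0 = BitVector32.CreateMask();      // mask for the first bit
int bit1 = BitVector32.CreateMask(bit0);  // mask for the next bit
vector[bit0] = true;
vector[bit1] = false;

var cache = new Dictionary<BitVector32, object>();
cache[vector] = "cached value";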
If the bit arrays are 32 bits or shorter then you just need to convert them to 32 bit integers (padding with zero bits if necessary).
If they can be longer then you can either convert them to a series of 32-bit integers and XOR them, or better: use the algorithm described in Effective Java.
public int GetHashCode()
{
    int hash = 17;
    hash = hash * 23 + field1.GetHashCode();
    hash = hash * 23 + field2.GetHashCode();
    hash = hash * 23 + field3.GetHashCode();
    return hash;
}
Taken from here. field1, field2, etc. correspond to the first 32 bits, the second 32 bits, and so on.
I have an object for which I want to generate a unique hash (overriding GetHashCode()), but I want to avoid overflows or anything unpredictable.
The code should be the result of combining the hash codes of a small collection of strings.
The hash codes will be part of generating a cache key, so ideally they should be unique; however, the number of possible values being hashed is small, so I THINK probability is in my favour here.
Would something like this be sufficient, AND is there a better way of doing this?
int hash = 0;
foreach (string item in collection)
{
    hash += (item.GetHashCode() / collection.Count);
}
return hash;
EDIT: Thanks for answers so far.
@Jon Skeet: No, order is not important.
I guess this is almost another question, but since I am using the result to generate a cache key (a string), would it make sense to use a cryptographic hash function like MD5, or just use the string representation of this int?
The fundamentals pointed out by Marc and Jon are not bad, but they are far from optimal in terms of the evenness of distribution of the results. Sadly, the 'multiply by primes' approach copied by so many people from Knuth is not the best choice; in many cases better distribution can be achieved by cheaper-to-calculate functions (though the difference is very slight on modern hardware). In fact, throwing primes into many aspects of hashing is no panacea.
If this data is used for significantly sized hash tables, I recommend reading Bret Mulvey's excellent study and explanation of various modern (and not so modern) hashing techniques, handily done in C#.
Note that the behaviour of various hash functions with strings is heavily biased by whether the strings are short (roughly speaking, how many characters are hashed before the bits begin to overflow) or long.
One of the simplest and easiest to implement is also one of the best: the Jenkins one-at-a-time hash.
private static unsafe void Hash(byte* d, int len, ref uint h)
{
    for (int i = 0; i < len; i++)
    {
        h += d[i];
        h += (h << 10);
        h ^= (h >> 6);
    }
}

public unsafe static void Hash(ref uint h, string s)
{
    fixed (char* c = s)
    {
        byte* b = (byte*)(void*)c;
        Hash(b, s.Length * 2, ref h);
    }
}

public unsafe static int Avalanche(uint h)
{
    h += (h << 3);
    h ^= (h >> 11);
    h += (h << 15);
    return *((int*)(void*)&h);
}
you can then use this like so:
uint h = 0;
foreach (string item in collection)
{
    Hash(ref h, item);
}
return Avalanche(h);
you can merge multiple different types like so:
public unsafe static void Hash(ref uint h, int data)
{
    byte* d = (byte*)(void*)&data;
    Hash(d, sizeof(int), ref h);
}

public unsafe static void Hash(ref uint h, long data)
{
    byte* d = (byte*)(void*)&data;
    Hash(d, sizeof(long), ref h);
}
If you only have access to the field as an object, with no knowledge of its internals, you can simply call GetHashCode() on each one and combine the values like so:
uint h = 0;
foreach (var item in collection)
{
    Hash(ref h, item.GetHashCode());
}
return Avalanche(h);
Sadly you can't do sizeof(T) so you must do each struct individually.
If you wish to use reflection you can construct on a per type basis a function which does structural identity and hashing on all fields.
If you wish to avoid unsafe code, you can use bit-masking techniques to pull the individual bits out of ints (and chars, if dealing with strings) with not too much extra hassle.
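For example, a safe variant of the string overload above can pull the two bytes out of each char with shifts and masks; a sketch:
// Safe (no unsafe/fixed) version of the byte-wise loop above:
// each char contributes its low byte, then its high byte,
// matching the little-endian order the unsafe version sees.
public static void Hash(ref uint h, string s)
{
    foreach (char c in s)
    {
        h += (uint)(c & 0xFF);  // low byte
        h += (h << 10);
        h ^= (h >> 6);
        h += (uint)(c >> 8);    // high byte
        h += (h << 10);
        h ^= (h >> 6);
    }
}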
Hashes aren't meant to be unique - they're meant to be well distributed in most situations, and they must be consistent. Note that overflows shouldn't be a problem.
Just adding isn't generally a good idea, and dividing certainly isn't. Here's the approach I usually use:
int result = 17;
foreach (string item in collection)
{
    result = result * 31 + item.GetHashCode();
}
return result;
If you're otherwise in a checked context, you might want to deliberately make it unchecked.
Note that this assumes that order is important, i.e. that { "a", "b" } should be different from { "b", "a" }. Please let us know if that's not the case.
There is nothing wrong with this approach as long as the members whose hashcodes you are combining follow the rules of hash codes. In short ...
The hash code of the private members should not change for the lifetime of the object
The container must not change the object the private members point to lest it in turn change the hash code of the container
If the order of the items is not important (i.e. {"a","b"} is the same as {"b","a"}) then you can use exclusive or to combine the hash codes:
hash ^= item.GetHashCode();
[Edit: As Mark pointed out in a comment on a different answer, this has the drawback of also giving collections like {"a"} and {"a","b","b"} the same hash code.]
If the order is important, you can instead multiply by a prime number and add:
hash *= 11;
hash += item.GetHashCode();
(When you multiply you will sometimes get an overflow that is ignored, but by multiplying by a prime number you lose a minimum of information. If you instead multiplied by a number like 16, you would lose four bits of information each time, so after eight items the hash code from the first item would be completely gone.)
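Putting the two variants side by side (a sketch; collection is any IEnumerable<string>):
// Order-insensitive: XOR. {"a","b"} hashes like {"b","a"},
// but also like {"a","b","b"} - see the caveat above.
int unordered = 0;
foreach (string item in collection)
{
    unordered ^= item.GetHashCode();
}

// Order-sensitive: multiply by a prime, then add.
int ordered = 0;
unchecked
{
    foreach (string item in collection)
    {
        ordered = ordered * 11 + item.GetHashCode();
    }
}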