Should the hash code of null always be zero, in .NET - c#

Given that collections like System.Collections.Generic.HashSet<> accept null as a set member, one can ask what the hash code of null should be. It looks like the framework uses 0:
// nullable struct type
int? i = null;
i.GetHashCode(); // gives 0
EqualityComparer<int?>.Default.GetHashCode(i); // gives 0
// class type
CultureInfo c = null;
EqualityComparer<CultureInfo>.Default.GetHashCode(c); // gives 0
This can be (a little) problematic with nullable enums. If we define
enum Season
{
Spring,
Summer,
Autumn,
Winter,
}
then the Nullable<Season> (also called Season?) can take just five values, but two of them, namely null and Season.Spring, have the same hash code.
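To see the collision directly (a minimal demonstration; the values shown are what the default comparer produces):
Season? spring = Season.Spring;
Season? none = null;
Console.WriteLine(spring.GetHashCode()); // 0 - hash of the underlying Season.Spring
Console.WriteLine(none.GetHashCode());   // 0 - Nullable<T> without a value
Console.WriteLine(EqualityComparer<Season?>.Default.GetHashCode(spring)
    == EqualityComparer<Season?>.Default.GetHashCode(none)); // True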
It is tempting to write a "better" equality comparer like this:
class NewNullEnumEqComp<T> : EqualityComparer<T?> where T : struct
{
    public override bool Equals(T? x, T? y)
    {
        return Default.Equals(x, y);
    }

    public override int GetHashCode(T? x)
    {
        return x.HasValue ? Default.GetHashCode(x) : -1;
    }
}
But is there any reason why the hash code of null should be 0?
EDIT/ADDITION:
Some people seem to think this is about overriding Object.GetHashCode(). It really is not, actually. (The authors of .NET did make an override of GetHashCode() in the Nullable<> struct which is relevant, though.) A user-written implementation of the parameterless GetHashCode() can never handle the situation where the object whose hash code we seek is null.
This is about implementing the abstract method EqualityComparer<T>.GetHashCode(T) or otherwise implementing the interface method IEqualityComparer<T>.GetHashCode(T). Now, while creating these links to MSDN, I see that it says there that these methods throw an ArgumentNullException if their sole argument is null. This must certainly be a mistake on MSDN? None of .NET's own implementations throw exceptions. Throwing in that case would effectively break any attempt to add null to a HashSet<>. Unless HashSet<> does something extraordinary when dealing with a null item (I will have to test that).
NEW EDIT/ADDITION:
Now I tried debugging. With HashSet<>, I can confirm that with the default equality comparer, the values Season.Spring and null will end in the same bucket. This can be determined by very carefully inspecting the private array members m_buckets and m_slots. Note that the indices are always, by design, offset by one.
The code I gave above does not, however, fix this. As it turns out, HashSet<> will never even ask the equality comparer when the value is null. This is from the source code of HashSet<>:
// Workaround Comparers that throw ArgumentNullException for GetHashCode(null).
private int InternalGetHashCode(T item) {
    if (item == null) {
        return 0;
    }
    return m_comparer.GetHashCode(item) & Lower31BitMask;
}
This means that, at least for HashSet<>, it is not even possible to change the hash of null. Instead, a solution is to change the hash of all the other values, like this:
class NewerNullEnumEqComp<T> : EqualityComparer<T?> where T : struct
{
    public override bool Equals(T? x, T? y)
    {
        return Default.Equals(x, y);
    }

    public override int GetHashCode(T? x)
    {
        return x.HasValue ? 1 + Default.GetHashCode(x) : /* not seen by HashSet: */ 0;
    }
}
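A quick check of this workaround (a sketch; remember that HashSet<> still never calls the comparer for null itself):
var comparer = new NewerNullEnumEqComp<Season>();
Console.WriteLine(comparer.GetHashCode(Season.Spring)); // 1 - shifted away from 0
Console.WriteLine(comparer.GetHashCode(null));          // 0 - matches what HashSet uses internally for null

var set = new HashSet<Season?>(comparer);
set.Add(null);
set.Add(Season.Spring); // now lands in a different bucket than null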

So long as the hash code returned for nulls is consistent for the type, you should be fine. The only requirement for a hash code is that two objects that are considered equal share the same hash code.
Returning 0 or -1 for null, so long as you choose one and return it consistently, will work. Obviously, hash codes for non-null values should avoid whatever value you use for null.
Similar questions:
GetHashCode on null fields?
What should GetHashCode return when object's identifier is null?
The "Remarks" section of this MSDN entry goes into more detail about the hash code. Notably, the documentation does not cover or discuss null values at all - not even in the community content.
To address your issue with the enum, either re-implement the hash code to return non-zero, add a default "unknown" enum entry equivalent to null, or simply don't use nullable enums.
Interesting find, by the way.
Another problem I see with this generally is that the hash code cannot represent a 4 byte or larger type that is nullable without at least one collision (more as the type size increases). For example, the hash code of an int is just the int, so it uses the full int range. What value in that range do you choose for null? Whatever one you pick will collide with the value's hash code itself.
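To make that pigeonhole argument concrete (Int32.GetHashCode simply returns the value itself, so every one of the 2^32 possible hash codes is already taken by some int, and whatever code you choose for null must collide with one of them):
int five = 5;
int minusSeven = -7;
Console.WriteLine(five.GetHashCode());       // 5
Console.WriteLine(minusSeven.GetHashCode()); // -7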
Collisions in and of themselves are not necessarily a problem, but you need to know they are there. Hash codes are only used in some circumstances. As stated in the docs on MSDN, hash codes are not guaranteed to return different values for different objects so shouldn't be expected to.

It doesn't have to be zero -- you could make it 42 if you wanted to.
All that matters is consistency during the execution of the program.
It's just the most obvious representation, because null is often represented as a zero internally. Which means, while debugging, if you see a hash code of zero, it might prompt you to think, "Hmm.. was this a null reference issue?"
Note that if you use a number like 0xDEADBEEF, then someone could say you're using a magic number... and you kind of would be. (You could say zero is a magic number too, and you'd be kind of right... except that it's so widely used as to be somewhat of an exception to the rule.)

Bear in mind that the hash code is used only as a first step in determining equality, and is never (and should never be) used as the de facto determination of whether two objects are equal.
If two objects' hash codes are not equal, then they are treated as not equal (because we assume that the underlying implementation is correct - i.e. we don't second-guess it). If they have the same hash code, they are then checked for actual equality, which, in your case, null and the enum value will fail.
As a result - using zero is as good as any other value in the general case.
Sure, there will be situations, like your enum, where this zero is shared with a real value's hash code. The question is whether, for you, the minuscule overhead of an additional comparison causes problems.
If so, then define your own comparer for the nullable of your particular type, and ensure that a null value always yields the same hash code (of course!) - one that cannot be yielded by the underlying type's own hash code algorithm. For your own types, this is do-able. For others - good luck :)
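For example, for Season from the question, the underlying hash codes are only 0 through 3, so any value outside that range works for null. A sketch (the choice of int.MinValue is arbitrary, and note the question's edit: HashSet<> bypasses the comparer for null anyway):
class SeasonNullComp : EqualityComparer<Season?>
{
    public override bool Equals(Season? x, Season? y)
    {
        return Default.Equals(x, y);
    }

    public override int GetHashCode(Season? x)
    {
        // Season's own hash codes are 0..3, so int.MinValue can never collide with them.
        return x.HasValue ? Default.GetHashCode(x) : int.MinValue;
    }
}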

Good question.
I just tried to code this:
enum Season
{
Spring,
Summer,
Autumn,
Winter,
}
and execute this like this:
Season? v = null;
Console.WriteLine(v);
it prints an empty line (the value is null).
If I instead do the normal:
Season? v = Season.Spring;
Console.WriteLine((int)v);
it prints 0, as expected, or simply Spring if we omit the cast to int.
So, if you do the following:
Season? v = Season.Spring;
Season? vnull = null;

if (vnull == v) // never true
    Console.WriteLine("equal");
EDIT
From MSDN
If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values.
In other words: if two objects have the same hash code, that doesn't mean they are equal, because real equality is determined by Equals.
From MSDN again:
The GetHashCode method for an object must consistently return the same
hash code as long as there is no modification to the object state that
determines the return value of the object's Equals method. Note that
this is true only for the current execution of an application, and
that a different hash code can be returned if the application is run
again.

But is there any reason why the hash code of null should be 0?
It could have been anything at all. I tend to agree that 0 wasn't necessarily the best choice, but it's the one that probably leads to the fewest bugs.
A hash function absolutely must return the same hash for the same value. Once any component hashes null to 0, that effectively becomes the only valid value for the hash of null anywhere it might be compared. If there were a constant for this, like, hm, object.HashOfNull, then someone implementing an IEqualityComparer would have to know to use that value. If they don't think about it, the chance they'll use 0 is slightly higher than every other value, I reckon.
at least for HashSet<>, it is not even possible to change the hash of null
As mentioned above, I think it's completely impossible full stop, just because there exist types which already follow the convention that hash of null is 0.

It is 0 for the sake of simplicity. There is no such hard requirement. You only need to ensure the general requirements of hash coding.
For example, you need to make sure that if two objects are equal, their hashcodes must always be equal too. Therefore, different hashcodes must always represent different objects (but it's not necessarily true vice versa: two different objects may have the same hashcode, although if that happens often the hash function is a poor one - it doesn't have good collision resistance).
Of course, I restricted my answer to requirements of mathematical nature. There are .NET-specific, technical conditions as well, which you can read here. 0 for a null value is not among them.

This could be avoided by using an Unknown enum value (although it seems a bit weird for a Season to be unknown). Something like this would negate the issue:
public enum Season
{
    Unknown = 0,
    Spring,
    Summer,
    Autumn,
    Winter
}

Season some_season = Season.Unknown;
int code = some_season.GetHashCode(); // 0

some_season = Season.Autumn;
code = some_season.GetHashCode(); // 3
Then you would have unique hash code values for each season.

Personally, I find using nullable values a bit awkward and try to avoid them whenever I can. Your issue is just another reason to. Sometimes they are very handy, but my rule of thumb is not to mix value types with null if possible, simply because these come from two different worlds. The .NET Framework seems to do the same - many value types provide a TryParse method, which is a way of separating values from 'no value' (null).
In your particular case it is easy to get rid of the problem because you handle your own Season type.
(Season?)null to me means 'season is not specified', as when you have a web form where some fields are not required. In my opinion, it is better to specify that special 'value' in the enum itself rather than use the somewhat clunky Nullable<T>. It will be faster (no boxing), easier to read (Season.NotSpecified vs null), and it will solve your problem with hash codes.
Of course, for other types, like int, you can't expand the value domain, and designating one of the values as special is not always possible. But with int? a hash code collision is a much smaller problem, if one at all.

Incidentally, Tuple follows the same convention - a null component hashes to 0, colliding with the hash of 0 itself:
Tuple.Create( (object) null! ).GetHashCode() // 0
Tuple.Create( 0 ).GetHashCode() // 0
Tuple.Create( 1 ).GetHashCode() // 1
Tuple.Create( 2 ).GetHashCode() // 2

Related

What to return when overriding Object.GetHashCode() in classes with no immutable fields?

Ok, before you get all mad because there are hundreds of similar sounding questions posted on the internet, I can assure you that I have just spent the last few hours reading all of them and have not found the answer to my question.
Background:
Basically, one of my large scale applications had been suffering from a situation where some Bindings on the ListBox.SelectedItem property would stop working or the program would crash after an edit had been made to the currently selected item. I initially asked the 'An item with the same key has already been added' Exception on selecting a ListBoxItem from code question here, but got no answers.
I hadn't had time to address that problem until this week, when I was given a number of days to sort it out. Now to cut a long story short, I found out the reason for the problem. It was because my data type classes had overridden the Equals method and therefore the GetHashCode method as well.
Now, for those of you who are unaware of this issue: I discovered that you can only safely implement the GetHashCode method using immutable fields/properties. Using an excerpt from Harvey Kwok's answer to the Overriding GetHashCode() post to explain this:
The problem is that GetHashCode is being used by Dictionary and HashSet collections to place each item in a bucket. If hashcode is calculated based on some mutable fields and the fields are really changed after the object is placed into the HashSet or Dictionary, the object can no longer be found from the HashSet or Dictionary.
So the actual problem was caused because I had used mutable properties in the GetHashCode methods. When users changed these property values in the UI, the associated hash code values of the objects changed and then items could no longer be found in their collections.
Question:
So, my question is what is the best way of handling the situation where I need to implement the GetHashCode method in classes with no immutable fields? Sorry, let me be more specific, as that question has been asked before.
The answers in the Overriding GetHashCode() post suggest that in these situations, it is better to simply return a constant value... some suggest returning the value 1, while others suggest returning a prime number. Personally, I can't see any difference between these suggestions, because I would have thought only one bucket would end up being used for either of them.
Furthermore, the Guidelines and rules for GetHashCode article in Eric Lippert's Blog has a section titled Guideline: the distribution of hash codes must be "random" which highlights the pitfalls of using an algorithm that results in not enough buckets being used. He warns of algorithms that decrease the number of buckets used and cause a performance problem when the bucket gets really big. Surely, returning a constant falls into this category.
I had an idea of adding an extra Guid field to all of my data type classes (just in C#, not the database) specifically to be used in and only in the GetHashCode method. So I suppose at the end of this long intro, my actual question is which implementation is better? To summarise:
Summary:
When overriding Object.GetHashCode() in classes with no immutable fields, is it better to return a constant from the GetHashCode method, or to create an additional readonly field for each class, solely to be used in the GetHashCode method? If I should add a new field, what type should it be and shouldn't I then include it in the Equals method?
While I am happy to receive answers from anyone, I am really hoping to receive answers from advanced developers with a sound knowledge on this subject.
Go back to basics. You read my article; read it again. The two ironclad rules that are relevant to your situation are:
if x equals y then the hash code of x must equal the hash code of y. Equivalently: if the hash code of x does not equal the hash code of y then x and y must be unequal.
the hash code of x must remain stable while x is in a hash table.
Those are requirements for correctness. If you can't guarantee those two simple things then your program will not be correct.
You propose two solutions.
Your first solution is that you always return a constant. That meets the requirements of both rules, but you are then reduced to linear searches in your hash table. You might as well use a list.
The other solution you propose is to somehow produce a hash code for each object and store it in the object. That is perfectly legal provided that equal items have equal hash codes. If you do that then you are restricted such that x equals y must be false if the hash codes differ. This seems to make value equality basically impossible. Since you wouldn't be overriding Equals in the first place if you wanted reference equality, this seems like a really bad idea, but it is legal provided that equals is consistent.
I propose a third solution, which is: never put your object in a hash table, because a hash table is the wrong data structure in the first place. The point of a hash table is to quickly answer the question "is this given value in this set of immutable values?" and you don't have a set of immutable values, so don't use a hash table. Use the right tool for the job. Use a list, and live with the pain of doing linear searches.
A fourth solution is: hash on the mutable fields used for equality, remove the object from all hash tables it is in just before every time you mutate it, and put it back in afterwards. This meets both requirements: the hash code agrees with equality, and hashes of objects in hash tables are stable, and you still get fast lookups.
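A sketch of that fourth solution (Person is a hypothetical type whose Name participates in equality):
var set = new HashSet<Person>();
var p = new Person { Name = "Ada" };
set.Add(p);

set.Remove(p);    // take it out while its hash still matches its bucket
p.Name = "Grace"; // mutate the equality-bearing field
set.Add(p);       // re-add under the new hash

Console.WriteLine(set.Contains(p)); // True - lookup still works

class Person
{
    public string Name { get; set; }
    public override bool Equals(object obj) { return obj is Person other && other.Name == Name; }
    public override int GetHashCode() { return Name == null ? 0 : Name.GetHashCode(); }
}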
I would either create an additional readonly field or else throw NotSupportedException. In my view the other option is meaningless. Let's see why.
Distinct (fixed) hash codes
Providing distinct hash codes is easy, e.g.:
class Sample
{
    private static int counter;
    private readonly int hashCode;

    public Sample() { this.hashCode = counter++; }

    public override int GetHashCode()
    {
        return this.hashCode;
    }

    public override bool Equals(object other)
    {
        return object.ReferenceEquals(this, other);
    }
}
Technically you have to look out for creating too many objects and overflowing the counter here, but in practice I think that's not going to be an issue for anyone.
The problem with this approach is that instances will never compare equal. However, that's perfectly fine if you only want to use instances of Sample as indexes into a collection of some other type.
Constant hash codes
If there is any scenario in which distinct instances should compare equal then at first glance you have no other choice than returning a constant. But where does that leave you?
Locating an instance inside a container will always degenerate to the equivalent of a linear search. So in effect, by returning a constant you allow the user to make a keyed container for your class, but that container will exhibit the performance characteristics of a LinkedList<T>. This might be obvious to someone familiar with your class, but personally I see it as letting people shoot themselves in the foot. If you know beforehand that a Dictionary won't behave as one might expect, then why let the user create one? In my view, better to throw NotSupportedException.
But throwing is what you must not do!
Some people will disagree with the above, and when those people are smarter than oneself then one should pay attention. First of all, this code analysis warning states that GetHashCode should not throw. That's something to think about, but let's not be dogmatic. Sometimes you have to break the rules for a reason.
However, that is not all. In his blog post on the subject, Eric Lippert says that if you throw from inside GetHashCode then
your object cannot be a result in many LINQ-to-objects queries that use hash tables
internally for performance reasons.
Losing LINQ is certainly a bummer, but fortunately the road does not end here. Many (all?) LINQ methods that use hash tables have overloads that accept an IEqualityComparer<T> to be used when hashing. So you can in fact use LINQ, but it's going to be less convenient.
In the end you will have to weigh the options yourself. My opinion is that it's better to operate with a whitelist strategy (provide an IEqualityComparer<T> whenever needed) as long as it is technically feasible because that makes the code explicit: if someone tries to use the class naively they get an exception that helpfully tells them what's going on and the equality comparer is visible in the code wherever it is used, making the extraordinary behavior of the class immediately clear.
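A sketch of that whitelist strategy (Widget and WidgetIdComparer are hypothetical names; the IEqualityComparer<T> overload of Distinct is the hook):
class Widget
{
    public int Id { get; set; }
    // GetHashCode throws in this design; hashing is only available via the comparer below.
    public override int GetHashCode() { throw new NotSupportedException(); }
}

class WidgetIdComparer : IEqualityComparer<Widget>
{
    public bool Equals(Widget x, Widget y) { return x?.Id == y?.Id; }
    public int GetHashCode(Widget w) { return w.Id; }
}

// var unique = widgets.Distinct(new WidgetIdComparer()).ToList(); // works despite the throwing GetHashCode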
Where I want to override Equals, but there is no sensible immutable "key" for an object (and for whatever reason it doesn't make sense to make the whole object immutable), in my opinion there is only one "correct" choice:
Implement GetHashCode to hash the same fields as Equals uses. (This might be all the fields.)
Document that these fields must not be altered while in a dictionary.
Trust that users either don't put these objects in dictionaries, or obey the second rule.
(Returning a constant value compromises dictionary performance. Throwing an exception disallows too many useful cases where objects are cached but not modified. Any other implementation for GetHashCode would be wrong.)
Where this runs the user into trouble anyway, it's probably their fault. (Specifically: using a dictionary where they shouldn't, or using a model type in a context where they should be using a view-model type that uses reference equality instead.)
Or perhaps I shouldn't be overriding Equals in the first place.
If the classes truly contain nothing constant on which a hash value can be calculated then I would use something simpler than a GUID. Just use a random number persisted in the class (or in a wrapper class).
A simple approach is to store the hashCode in a private member and generate it on the first use. If your entity doesn't change often, and you're not going to be using two different objects that are Equal (where your Equals method returns true) as keys in your dictionary, then this should be fine:
private int? _hashCode;

public override int GetHashCode() {
    if (!_hashCode.HasValue)
        _hashCode = Property1.GetHashCode() ^ Property2.GetHashCode(); // ...combining whatever fields your Equals method uses
    return _hashCode.Value;
}
However, suppose you have objects a and b where a.Equals(b) == true, and you store an entry in your dictionary using a as the key (dictionary[a] = value). If a does not change, then dictionary[b] will return value; however, if you change a after storing the entry, then dictionary[b] will most likely fail.
The only workaround to this is to rehash the dictionary when any of the keys change.
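A minimal sketch of that workaround (Entity stands in for your class; note that any cached _hashCode would also need to be reset when the key is mutated):
var dictionary = new Dictionary<Entity, string>();
var a = new Entity { Property1 = "old" };
dictionary[a] = "some value";

dictionary.Remove(a);         // remove while the key can still be found under its old hash
a.Property1 = "new";          // mutate (reset any cached hash here)
dictionary[a] = "some value"; // re-add so the entry lands in the right bucket

class Entity
{
    public string Property1 { get; set; }
    public override bool Equals(object obj) { return obj is Entity e && e.Property1 == Property1; }
    public override int GetHashCode() { return Property1 == null ? 0 : Property1.GetHashCode(); }
}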

How to write a good GetHashCode() implementation for a class that is compared by value?

Let's say we have such a class:
class MyClass
{
    public string SomeValue { get; set; }
    // ...
}
Now, let's say two MyClass instances are equal when their SomeValue property is equal. Thus, I override the Object.Equals() and Object.GetHashCode() methods to reflect that: Object.GetHashCode() returns SomeValue.GetHashCode(). But at the same time I need to follow these rules:
If two instances of an object are equal, they should return the same hash code.
The hash code should not change throughout the runtime.
But apparently, SomeValue can change, and the hash code we got before may then turn out to be invalid.
I can only think of making the class immutable, but I'd like to know what others do in this case.
What do you do in such cases? Is having such a class represents a subtler problem in the design decisions?
The general contract says that if A.equals(B) is true, then their hash codes must be the same. If SomeValue changes in A in such a way that A.equals(B) is no longer true, then A.GetHashCode() can return a different value than before. Mutable objects cannot cache GetHashCode(); it must be calculated every time the method is called.
This article has detailed guidelines for GetHashCode and mutability:
http://ericlippert.com/2011/02/28/guidelines-and-rules-for-gethashcode/
If your GetHashCode() depends on some mutable value you have to change your hash whenever your value changes. Otherwise you break the equals law.
The requirement that a hash should never change once somebody has asked for it only applies if you put your object into a HashSet or use it as a key within a Dictionary. In those cases you have to ensure that the hash code won't change as long as the object is stored in such a container. This can be ensured manually, by simply taking care of the issue as you program, or you could provide a Freeze() method on your object: once it has been called, any subsequent attempt to set a property leads to some kind of exception (you should then also provide a Defrost() method). Additionally, you can put a call to Freeze() into your GetHashCode() implementation, and then you can be quite sure that nobody alters a frozen object by mistake.
And just one last tip: if you need to alter an object within such a container, simply remove it, alter it (don't forget to defrost it first), and re-add it.
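A sketch of that freeze pattern (Freeze/Defrost are the names suggested above; the rest is illustrative):
class FreezableThing
{
    private bool frozen;
    private string someValue;

    public string SomeValue
    {
        get { return someValue; }
        set
        {
            if (frozen)
                throw new InvalidOperationException("Object is frozen; call Defrost() first.");
            someValue = value;
        }
    }

    public void Freeze() { frozen = true; }
    public void Defrost() { frozen = false; }

    public override bool Equals(object obj)
    {
        return obj is FreezableThing other && other.SomeValue == SomeValue;
    }

    public override int GetHashCode()
    {
        Freeze(); // once the hash has been observed, further mutation becomes an error
        return someValue == null ? 0 : someValue.GetHashCode();
    }
}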
You sort of need to choose between mutability and GetHashCode returning the same value for 'equal' objects. Often when you think you want to implement 'equal' for mutable objects, you end up later deciding that you have "shades of equal" and really didn't mean Object.Equals equality.
Having a mutable object as the 'key' in any sort of data structure is a big red flag to me. For example:
MyObj a = new MyObj("alpha");
MyObj b = new MyObj("beta");
HashSet<MyObj> objs = new HashSet<MyObj>();
objs.Add(a);
objs.Add(b);
// objs.Count == 2
b.SomeValue = "alpha";
// objs.Distinct().Count() == 1, objs.Count == 2
We've badly violated the contract of HashSet<T>. This is an obvious example, there are subtle ones.

Why does C# not implement GetHashCode for Collections?

I am porting something from Java to C#. In Java the hashcode of an ArrayList depends on the items in it. In C# I always get the same hashcode from a List...
Why is this?
For some of my objects the hashcode needs to be different because the objects in their list property make the objects non-equal. I would expect that a hashcode is always unique for the object's state and only equals another hashcode when the object is equal. Am I wrong?
In order to work correctly, hashcodes must be immutable – an object's hash code must never change.
If an object's hashcode does change, any dictionaries containing the object will stop working.
Since collections are not immutable, they cannot implement GetHashCode.
Instead, they inherit the default GetHashCode, which returns a (hopefully) unique value for each instance of an object. (Typically based on a memory address)
Hashcodes must depend upon the definition of equality being used so that if A == B then A.GetHashCode() == B.GetHashCode() (but not necessarily the inverse; A.GetHashCode() == B.GetHashCode() does not entail A == B).
By default, the equality definition of a value type is based on its value, and that of a reference type is based on its identity (that is, by default an instance of a reference type is only equal to itself). Hence the default hashcode for a value type is such that it depends on the values of the fields it contains*, and for reference types it depends on the identity. Indeed, since we ideally want the hashcodes for non-equal objects to be different, particularly in the low-order bits (most likely to affect the value of a re-hashing), we generally want two equivalent but non-equal objects to have different hashes.
Since an object will remain equal to itself, it should also be clear that this default implementation of GetHashCode() will continue to have the same value, even when the object is mutated (identity does not mutate even for a mutable object).
Now, in some cases reference types (or value types) re-define equality. An example of this is string, where for example "ABC" == "AB" + "C". Though there are two different instances of string compared, they are considered equal. In this case GetHashCode() must be overridden so that the value relates to the state upon which equality is defined (in this case, the sequence of characters contained).
While it is more common to do this with types that also are immutable, for a variety of reasons, GetHashCode() does not depend upon immutability. Rather, GetHashCode() must remain consistent in the face of mutability - change a value that we use in determining the hash, and the hash must change accordingly. Note though, that this is a problem if we are using this mutable object as a key into a structure using the hash, as mutating the object changes the position in which it should be stored, without moving it to that position (it's also true of any other case where the position of an object within a collection depends on its value - e.g. if we sort a list and then mutate one of the items in the list, the list is no longer sorted). However, this doesn't mean that we must only use immutable objects in dictionaries and hashsets. Rather it means that we must not mutate an object that is in such a structure, and making it immutable is a clear way to guarantee this.
Indeed, there are quite a few cases where storing mutable objects in such structures is desirable, and as long as we don't mutate them during this time, this is fine. Since we don't have the guarantee immutability brings, we then want to provide it another way (spending a short time in the collection and being accessible from only one thread, for example).
Hence mutating key values is one of those cases where something is possible, but generally a bad idea. The person defining the hashcode algorithm, though, can't assume any such case will always be a bad idea (they don't even know whether the mutation happened while the object was stored in such a structure); it's for them to implement a hashcode defined on the current state of the object, whether calling it at a given point is good or not. Hence, for example, a hashcode should not be memoised on a mutable object unless the memoisation is cleared on every mutate. (It's generally a waste to memoise hashes anyway, as structures that hit the same object's hashcode repeatedly will have their own memoisation of it.)
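A sketch of memoisation done in line with that caveat (the cache is cleared on every mutate):
class Memoised
{
    private string name;
    private int? cachedHash;

    public string Name
    {
        get { return name; }
        set { name = value; cachedHash = null; } // clear the memo on every mutate
    }

    public override bool Equals(object obj)
    {
        return obj is Memoised other && other.Name == Name;
    }

    public override int GetHashCode()
    {
        if (cachedHash == null)
            cachedHash = name == null ? 0 : name.GetHashCode();
        return cachedHash.Value;
    }
}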
Now, in the case in hand, ArrayList operates on the default case of equality being based on identity, e.g.:
ArrayList a = new ArrayList();
ArrayList b = new ArrayList();
for(int i = 0; i != 10; ++i)
{
    a.Add(i);
    b.Add(i);
}
return a == b; // returns false
Now, this is actually a good thing. Why? Well, how do you know in the above that we want to consider a as equal to b? We might, but there are plenty of good reasons for not doing so in other cases too.
What's more, it's much easier to redefine equality from identity-based to value-based than from value-based to identity-based. Finally, there is more than one value-based definition of equality for many objects (the classic case being the different views on what makes a string equal), so there isn't even a one-and-only definition that works. For example:
ArrayList c = new ArrayList();
for(short i = 0; i != 10; ++i)
{
    c.Add(i);
}
If we considered a == b above, should we consider a == c also? The answer depends on just what we care about in the definition of equality we are using, so the framework couldn't know the right answer for all cases, since all cases don't agree.
Now, if we do care about value-based equality in a given case we have two very easy options. The first is to subclass and over-ride equality:
public class ValueEqualList : ArrayList, IEquatable<ValueEqualList>
{
    /*.. most methods left out ..*/
    public bool Equals(ValueEqualList other) // optional but a good idea almost always when we redefine equality
    {
        if(other == null)
            return false;
        if(ReferenceEquals(this, other)) // identity still entails equality, so this is a good shortcut
            return true;
        if(Count != other.Count)
            return false;
        for(int i = 0; i != Count; ++i)
            if(!Equals(this[i], other[i])) // object.Equals, not reference comparison, so boxed values compare by value
                return false;
        return true;
    }
    public override bool Equals(object other)
    {
        return Equals(other as ValueEqualList);
    }
    public override int GetHashCode()
    {
        unchecked // overflow is expected and harmless here
        {
            int res = 0x2D2816FE;
            foreach(var item in this)
            {
                res = res * 31 + (item == null ? 0 : item.GetHashCode());
            }
            return res;
        }
    }
}
This assumes that we will always want to treat such lists this way. We can also implement an IEqualityComparer for a given case:
public class ArrayListEqComp : IEqualityComparer<ArrayList>
{ // we might also implement the non-generic IEqualityComparer, omitted for brevity
    public bool Equals(ArrayList x, ArrayList y)
    {
        if(ReferenceEquals(x, y))
            return true;
        if(x == null || y == null || x.Count != y.Count)
            return false;
        for(int i = 0; i != x.Count; ++i)
            if(!object.Equals(x[i], y[i])) // compare by value, not by reference
                return false;
        return true;
    }
    public int GetHashCode(ArrayList obj)
    {
        unchecked
        {
            int res = 0x2D2816FE;
            foreach(var item in obj)
            {
                res = res * 31 + (item == null ? 0 : item.GetHashCode());
            }
            return res;
        }
    }
}
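Usage might look like this (a sketch; the comparer is passed to the Dictionary constructor):
var lookup = new Dictionary<ArrayList, string>(new ArrayListEqComp());

ArrayList key1 = new ArrayList { 1, 2, 3 };
ArrayList key2 = new ArrayList { 1, 2, 3 };

lookup[key1] = "found";
Console.WriteLine(lookup[key2]); // "found" - equal by value despite being distinct instances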
In summary:
The default equality definition of a reference type is dependent upon identity alone.
Most of the time, we want that.
When the person defining the class decides that this isn't what is wanted, they can override this behaviour.
When the person using the class wants a different definition of equality again, they can use IEqualityComparer<T> and IEqualityComparer so that dictionaries, hashmaps, hashsets, etc. use their concept of equality.
It's disastrous to mutate an object while it is the key to a hash-based structure. Immutability can be used to ensure this doesn't happen, but is not compulsory, nor always desirable.
All in all, the framework gives us nice defaults and detailed override possibilities.
*There is a bug in the case of a decimal within a struct: a short-cut is used with structs in some cases when it is safe and not at other times, but a struct containing a decimal is one case where the short-cut is not safe, yet it is incorrectly identified as a case where it is safe.
Yes, you are wrong. In both Java and C#, being equal implies having the same hash-code, but the converse is not (necessarily) true.
See GetHashCode for more information.
It is not possible for a hashcode to be unique across all variations of most non-trivial classes. In C# the concept of List equality is not the same as in Java (see here), so the hash code implementation is also not the same - it mirrors the C# List equality.
You're only partly wrong. You're definitely wrong when you think that equal hashcodes means equal objects, but equal objects must have equal hashcodes, which means that if the hashcodes differ, so do the objects.
The core reasons are performance and human nature - people tend to think about hashes as something fast, but computing one normally requires traversing all elements of an object at least once.
Example: if you use a string as a key in a hash table, every query has complexity O(|s|) - use 2x longer strings and it will cost you at least twice as much. Imagine that it was a full-blown tree (just a list of lists) - oops :-)
If full, deep hash calculation were a standard operation on a collection, an enormous percentage of programmers would just use it unwittingly and then blame the framework and the virtual machine for being slow. For something as expensive as a full traversal, it is crucial that the programmer be aware of the complexity. The only way to achieve that is to make sure you have to write your own. It's a good deterrent as well :-)
Another reason is updating tactics. Calculating and updating a hash on the fly vs. doing the full calculation every time requires a judgement call depending on the concrete case in hand.
Immutability is just an academic cop-out - people use hashes as a way of detecting change faster (file hashes, for example) and also use hashes for complex structures which change all the time. Hashes have many more uses beyond the 101 basics. The key again is that what to use for the hash of a complex object has to be a judgement call on a case-by-case basis.
Using an object's address (actually a handle, so it doesn't change after GC) as a hash is actually the case where the hash value remains the same for an arbitrarily mutable object :-) The reason C# does it is that it's cheap and again nudges people to calculate their own.
The "why" is too philosophical. Create a helper method (maybe an extension method) and calculate the hashcode as you like - for example, by XORing the elements' hashcodes.

General advice and guidelines on how to properly override object.GetHashCode()

According to MSDN, a hash function must have the following properties:
If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two object do not have to return different values.
The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again.
For the best performance, a hash function must generate a random distribution for all input.
I keep finding myself in the following scenario: I have created a class, implemented IEquatable<T> and overridden object.Equals(object). MSDN states that:
Types that override Equals must also override GetHashCode ; otherwise, Hashtable might not work correctly.
And then it usually grinds to a halt for me. Because, how do you properly override object.GetHashCode()? I never really know where to start, and there seem to be a lot of pitfalls.
Here at StackOverflow, there are quite a few questions related to GetHashCode overriding, but most of them seem to be about quite particular cases and specific issues. So I would like to get a good compilation here: an overview with general advice and guidelines. What to do, what not to do, common pitfalls, where to start, etc.
I would like it to be especially directed at C#, but I would think it will work kind of the same way for other .NET languages as well(?).
I think maybe the best way is to create one answer per topic, with a quick and short answer first (as close to a one-liner as possible), then maybe some more information, ending with related questions, discussions, blog posts, etc., if there are any. I can then create one post as the accepted answer (to get it on top) with just a "table of contents". Try to keep it short and concise. And don't just link to other questions and blog posts; try to take the essence of them and then link to the source (especially since the source could disappear). Also, please try to edit and improve answers instead of creating lots of very similar ones.
I am not a very good technical writer, but I will at least try to format answers so they look alike, create the table of contents, etc. I will also try to search up some of the related questions here at SO that answer parts of these, and maybe pull out the essence of the ones I can manage. But since I am not very solid on this topic, I will try to stay away for the most part :p
Table of contents
When do I override object.GetHashCode?
Why do I have to override object.GetHashCode()?
What are those magic numbers seen in GetHashCode implementations?
Things that I would like to be covered, but haven't been yet:
How to create the integer (How to "convert" an object into an int wasn't very obvious to me anyways).
What fields to base the hash code upon.
If it should only be on immutable fields, what if there are only mutable ones?
How to generate a good random distribution. (MSDN Property #3)
Part of this seems to be choosing a good magic prime number (I have seen 17, 23 and 397 used), but how do you choose it, and what is it for exactly?
How to make sure the hash code stays the same all through the object lifetime. (MSDN Property #2)
Especially when the equality is based upon mutable fields. (MSDN Property #1)
How to deal with fields that are complex types (not among the built-in C# types).
Complex objects and structs, arrays, collections, lists, dictionaries, generic types, etc.
For example, even though the list or dictionary might be readonly, that doesn't mean the contents of it are.
How to deal with inherited classes.
Should you somehow incorporate base.GetHashCode() into your hash code?
Could you technically just be lazy and return 0? Would heavily break MSDN guideline number #3, but would at least make sure #1 and #2 were always true :P
Common pitfalls and gotchas.
What are those magic numbers often seen in GetHashCode implementations?
They are prime numbers. Prime numbers are used for creating hash codes because they maximize the usage of the hash code space.
Specifically, start with the small prime number 3, and consider only the low-order nybbles of the results:
3 * 1 = 3 = 3 (mod 8) = 0011
3 * 2 = 6 = 6 (mod 8) = 0110
3 * 3 = 9 = 1 (mod 8) = 0001
3 * 4 = 12 = 4 (mod 8) = 0100
3 * 5 = 15 = 7 (mod 8) = 0111
3 * 6 = 18 = 2 (mod 8) = 0010
3 * 7 = 21 = 5 (mod 8) = 0101
3 * 8 = 24 = 0 (mod 8) = 0000
3 * 9 = 27 = 3 (mod 8) = 0011
And we start over. But you'll notice that successive multiples of our prime generate every possible nybble value before starting to repeat. We can get the same effect with any prime number and any number of bits, which makes prime numbers optimal for generating near-random hash codes. The reason we usually see larger primes instead of small primes like 3 in the example above is that, for greater numbers of bits in our hash code, the results obtained from using a small prime are not even pseudo-random - they're simply an increasing sequence until an overflow is encountered. For optimal randomness, a prime number that results in overflow for fairly small coefficients should be used, unless you can guarantee that your coefficients will not be small.
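A quick loop to reproduce the table above (any multiplier coprime to the modulus visits every residue, which is why odd primes behave so well with power-of-two table sizes):
for (int i = 1; i <= 9; i++)
{
    int product = 3 * i;
    int residue = product % 8;
    Console.WriteLine($"3 * {i} = {product,2} = {residue} (mod 8) = {Convert.ToString(residue, 2).PadLeft(4, '0')}");
}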
Related links:
Why is ‘397’ used for ReSharper GetHashCode override?
Why do I have to override object.GetHashCode()?
Overriding this method is important because the following property must always remain true:
If two objects compare as equal, the GetHashCode method for each object must return the same value.
The reason, as stated by JaredPar in a blog post on implementing equality, is that
Many classes use the hash code to classify an object. In particular hash tables and dictionaries tend to place objects in buckets based on their hash code. When checking if an object is already in the hash table it will first look for it in a bucket. If two objects are equal but have different hash codes they may be put into different buckets and the dictionary would fail to lookup the object.
Related links:
Do I HAVE to override GetHashCode and Equals in new Classes?
Do I need to override GetHashCode() on reference types?
Override Equals and GetHashCode Question
Why is it important to override GetHashCode when Equals method is overriden in C#?
Properly Implementing Equality in VB
Check out Guidelines and rules for GetHashCode by Eric Lippert
You should override it whenever you have a meaningful measure of equality for objects of that type (i.e. you override Equals). If you knew the object wasn't going to be hashed for any reason you could leave it, but it's unlikely you could know this in advance.
The hash should be based only on the properties of the object that are used to define equality since two objects that are considered equal should have the same hash code. In general you would usually do something like:
public override int GetHashCode()
{
    int mc = 23; // magic constant, usually some prime
    return mc * prop1.GetHashCode() * prop2.GetHashCode(); // * ... * propN.GetHashCode()
}
I usually assume multiplying the values together will produce a fairly uniform distribution, assuming each property's hashcode function does the same, although this may well be wrong. Using this method, if the objects equality-defining properties change, then the hash code is also likely to change, which is acceptable given definition #2 in your question. It also deals with all types in a uniform way.
You could return the same value for all instances, although this will make any algorithms that use hashing (such as dictionaries) very slow - essentially all instances will hash to the same bucket, and lookup becomes O(n) instead of the expected O(1). This of course negates any benefit of using such structures for lookup.
A) You must override both Equals and GetHashCode if you want to employ value equality instead of the default reference equality. With the latter, two object references compare as equal if they both refer to the same object instance. With the former, they compare as equal if their values are the same even if they refer to different objects. For example, you probably want to employ value equality for Date, Money, and Point objects.
B) In order to implement value equality you must override Equals and GetHashCode. Both should depend on the fields of the object that encapsulate the value. For example, Date.Year, Date.Month and Date.Day; or Money.Currency and Money.Amount; or Point.X, Point.Y and Point.Z. You should also consider overriding operator ==, operator !=, operator <, and operator >.
C) The hashcode doesn't have to stay constant all through the object lifetime. However, it must remain unchanged while it participates as the key in a hash. From the MSDN documentation for Dictionary: "As long as an object is used as a key in the Dictionary<TKey, TValue>, it must not change in any way that affects its hash value." If you must change the value of a key, remove the entry from the dictionary, change the key value, and replace the entry.
D) IMO, you will simplify your life if your value objects are themselves immutable.
When do I override object.GetHashCode()?
As MSDN states:
Types that override Equals must also override GetHashCode ; otherwise, Hashtable might not work correctly.
Related links:
When to override GetHashCode()?
How to generate GetHashCode() and Equals()
Visual Studio 2017
https://learn.microsoft.com/en-us/visualstudio/ide/reference/generate-equals-gethashcode-methods?view=vs-2017
ReSharper
https://www.jetbrains.com/help/resharper/Code_Generation__Equality_Members.html
What fields to base the hash code upon? If it should only be on immutable fields, what if there are only mutable ones?
It doesn't need to be based only on immutable fields. I would base it on the fields that determine the outcome of the equals method.
How to make sure the hash code stays the same all through the object lifetime. (MSDN Property #2) Especially when the equality is based upon mutable fields. (MSDN Property #1)
You seem to misunderstand Property #2. The hashcode doesn't need to stay the same throughout the object's lifetime. It just needs to stay the same as long as the values that determine the outcome of the equals method are unchanged. So, logically, you base the hashcode on those values only. Then there shouldn't be a problem.
public override int GetHashCode()
{
    return IntProp1 ^ IntProp2 ^ StrProp3.GetHashCode() ^ StrProp4.GetHashCode() ^ CustomClassProp.GetHashCode();
}
Do the same in the custom class's GetHashCode method. Works like a charm.

C# How to select a Hashcode for a class that violates the Equals contract?

I've got multiple classes that, for certain reasons, do not follow the official Equals contract. In the overridden GetHashCode() these classes simply return 0 so they can be used in a hash map.
Some of these classes implement the same interface, and there are hash maps using this interface as the key type. So I figured that every class should at least return a different (but still constant) value from GetHashCode().
The question is how to select this value. Should I simply let the first class return 1, the next class 2 and so on? Or should I try something like
class SomeClass : SomeInterface {
    public override int GetHashCode() {
        return "SomeClass".GetHashCode();
    }
}
so the hash is distributed more evenly? (Do I have to cache the returned value myself or is Microsoft's compiler able to optimize this?)
Update: It is not possible to return an individual hashcode for each object, because Equals violates the contract. Specifically, I'm referring to this problem.
If it "violates the Equals contract", then I'm not sure you should be using it as a key.
If something is using that as a key, you really need to get the hashing right... it is very unclear what the Equals logic is, but two values that are considered equal must have the same hash-code. It is not required that two values with the same hash-code are equal.
Using a constant string won't really help much - you'll get the values split evenly over the types, but that is about it...
I'm curious what the reasoning would be for overriding GetHashCode() and returning a constant value. Why violate the idea of a hash, rather than just violating the "contract" by not overriding GetHashCode() at all and leaving the default implementation from Object?
Edit
If you've done this so that your objects match based on their contents rather than their references, then what you propose - having different classes simply use different constants - can WORK, but it is highly inefficient. What you want to do is come up with a hashing algorithm that can take the contents of your class and produce a value that balances speed with even distribution (that's hashing 101).
I guess I'm not sure what you're looking for...there isn't a "good" scheme for choosing constant numbers for this paradigm. One is not any better than the other. Try to improve your objects so that you're creating a real hash.
I ran into this exact problem when writing a vector class. I wanted to compare vectors for equality, but float operations give rounding errors, so I wanted approximate equality. Long story short, overriding equals is a bad idea unless your implementation is symmetric, reflexive, and transitive.
Other classes are going to assume equals has those properties, and so will classes using those classes, and so you can end up in weird cases. For example a list might enforce uniqueness, but end up with two elements which evaluate as equal to some element B.
A hash table is the perfect example of unpredictable behavior when you break equality. For example:
// Assume a == b, b == c, but a != c
var T = new Dictionary<YourType, int>();
T[a] = 0;
T[c] = 1;
return T[b]; // 0 or 1? who knows!
Another example would be a Set:
// Assume a == b, b == c, but a != c
var T = new HashSet<YourType>();
T.Add(a);
T.Add(c);
if (T.Contains(b)) T.Remove(b);
// surely T can't contain b anymore! I sure hope no one breaks the properties of equality!
if (T.Contains(b)) throw new Exception();
I suggest using another method, with a name like ApproxEquals. You might also consider overriding the == operator, because it isn't virtual and therefore won't be used accidentally by other classes like Equals could be.
If you really can't use reference equality for the hash table, don't ruin the performance of cases where you can. Add an IApproxEquals interface, implement it in your class, and add an extension method GetApprox to Dictionary which enumerates the keys looking for an approximately equal one, and returns the associated value. You could also write a custom dictionary especially for 3-dimensional vectors, or whatever you need.
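A sketch of that idea (IApproxEquals and GetApprox are the names proposed above; the generic shape is illustrative):
interface IApproxEquals<T>
{
    bool ApproxEquals(T other);
}

static class DictionaryExtensions
{
    // Linear scan: unavoidable, because approximate equality cannot be hashed consistently.
    public static TValue GetApprox<TKey, TValue>(this Dictionary<TKey, TValue> dict, TKey key)
        where TKey : IApproxEquals<TKey>
    {
        foreach (var pair in dict)
        {
            if (pair.Key.ApproxEquals(key))
                return pair.Value;
        }
        throw new KeyNotFoundException();
    }
}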
When hash collisions occur, the HashTable/Dictionary calls Equals to find the key you're looking for. Using a constant hash code removes the speed advantages of using a hash in the first place - it becomes a linear search.
You're saying the Equals method hasn't been implemented according to the contract. What exactly do you mean by this? Depending on the kind of violation, the HashTable or Dictionary will merely be slow (linear search) or not work at all.
