I'm probably missing something about HashSet and GetHashCode, but I don't know why this doesn't work the way I expected. I have an object whose GetHashCode is overridden. I added the object to a HashSet, and when I later change a property (one that is used to calculate the hash code) I can't remove the object.
public class Test
{
    public string Code { get; set; }

    public override int GetHashCode()
    {
        return (Code == null) ? 0 : Code.GetHashCode();
    }
}
public void TestFunction()
{
    var test = new Test();
    var hashSet = new System.Collections.Generic.HashSet<Test>();
    hashSet.Add(test);
    test.Code = "Change";
    hashSet.Remove(test); // this doesn't remove it from the hash set
}
Firstly, you're overriding GetHashCode but not overriding Equals. Don't do that. They should always be overridden at the same time, in a consistent manner.
Next, you're right that changing an object's hash code will affect finding it in any hash-based data structures. After all, the first part of checking for the presence of a key in a hash-based structure is to find candidate equal values by quickly finding all existing entries with the same hash code. There's no way that the data structure can "know" that the hash code has been changed and update its own representation.
The documentation makes this clear:
You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
You can compute the hash code from fields that are not mutable; or
You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.
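For example, a minimal consistent pair for the Test class above (the Equals here is my own sketch; the original only overrides GetHashCode, and the mutability caveat from the quote still applies):

public class Test
{
    public string Code { get; set; }

    public override bool Equals(object obj)
    {
        // Consistent with GetHashCode below: equality is defined by Code.
        var other = obj as Test;
        return other != null && string.Equals(Code, other.Code);
    }

    public override int GetHashCode()
    {
        return (Code == null) ? 0 : Code.GetHashCode();
    }
}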
public void TestFunction()
{
    var test = new Test();
    var hashSet = new System.Collections.Generic.HashSet<Test>();
    test.Code = "Change";
    hashSet.Add(test);
    hashSet.Remove(test); // now this does remove it from the hash set
}
First set the value on the Test object, then add it to the HashSet.
The hash code is used to find the object in the HashSet/Dictionary. If the hash code changes, the object can no longer be found in the HashSet: when looked up by the new hash code, the bucket for items with that hash code will (most likely) not contain the object, which sits in a bucket keyed by the old hash code.
Note that after the initial search by hash code, Equals is used to perform the final match, but that does not apply to your particular case: you have the same object, and the default Equals compares references.
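If you genuinely need to mutate a key field, a sketch of the only safe pattern: remove the object while the set can still find it, then mutate, then re-add:

var hashSet = new HashSet<Test>();
var test = new Test { Code = "Original" };
hashSet.Add(test);

// Remove first: the set can still find the object under its old hash code.
hashSet.Remove(test);
test.Code = "Change";
hashSet.Add(test); // re-filed under the new hash code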
Detailed explanation/guidelines - Guidelines and rules for GetHashCode by Eric Lippert.
Given an instance of an object in C#, how can I determine if that object has value semantics? In other words, I want to guarantee that an object used in my API is suitable to be used as a dictionary key. I was thinking about something like this:
var type = instance.GetType();
var d1 = FormatterServices.GetUninitializedObject(type);
var d2 = FormatterServices.GetUninitializedObject(type);
Assert.AreEqual(d1.GetHashCode(), d2.GetHashCode());
What do you think of that approach?
You can test for implementation of Equals() and GetHashCode() with this:
s.GetType().GetMethod("GetHashCode").DeclaringType == s.GetType()
or rather per #hvd's suggestion:
s.GetType().GetMethod("GetHashCode").DeclaringType != typeof(object)
For some object s, if GetHashCode() is not implemented by its type, this will be false; otherwise true.
One thing to be careful of is that this will not protect against a poor implementation of Equals() or GetHashCode() - it would evaluate to true even if the implementation were public override int GetHashCode() { return 0; }.
Given the drawbacks, I would tend towards documenting your types ("this type should / should not be used for a dictionary key..."), because this isn't something you could ultimately depend upon. If the implementation of Equals() or GetHashCode() was flawed instead of missing, it would pass this test but still have a run-time error.
FormatterServices.GetUninitializedObject can put the object in an invalid state; It breaks the guaranteed assignment of readonly fields etc. Any code which assumes that fields will not be null will break. I wouldn't use that.
You can check whether GetHashCode and Equals are overridden via reflection, but that's not enough. An override could simply call the base class method, and that doesn't count as value semantics.
Btw, equal hash codes alone don't mean value semantics - that could just be a collision. Value semantics means that two objects with equal properties return the same hash code and their Equals method evaluates to true.
I suggest you create an instance, assign some properties, and clone it; now both hash codes should be equal, and calling object.Equals(original, clone) should evaluate to true.
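For instance (a sketch; MyType and its Name property are hypothetical stand-ins for your own type):

// Hypothetical type under test; two instances constructed with identical state.
var original = new MyType { Name = "x" };
var clone = new MyType { Name = "x" };

// Value semantics: equal state must imply Equals == true and equal hash codes.
bool valueSemantics =
    object.Equals(original, clone) &&
    original.GetHashCode() == clone.GetHashCode();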
You can see if an object defines its own Equals and GetHashCode using the DeclaringType property on the corresponding MethodInfo:
bool definesEquality =
    type.GetMethod("Equals", new[] { typeof(object) }).DeclaringType == type &&
    type.GetMethod("GetHashCode", Type.EmptyTypes).DeclaringType == type;
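Wrapped in a hypothetical helper for reuse (the class and method names are mine, not from the framework):

static class EqualityChecks
{
    // True when the type itself, rather than object or a base class,
    // declares both Equals(object) and GetHashCode().
    public static bool DefinesOwnEquality(Type type)
    {
        return type.GetMethod("Equals", new[] { typeof(object) }).DeclaringType == type
            && type.GetMethod("GetHashCode", Type.EmptyTypes).DeclaringType == type;
    }
}

// usage:
bool ok = EqualityChecks.DefinesOwnEquality(typeof(string)); // true: string overrides both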
Aloha,
Here's a simple class that overrides GetHashCode:
class OverridesGetHashCode
{
    public string Text { get; set; }

    public override int GetHashCode()
    {
        return (Text != null ? Text.GetHashCode() : 0);
    }

    // overriding Equals() doesn't change anything, so I'll leave it out for brevity
}
When I create an instance of that class, add it to a HashSet and then change its Text property, like this:
var hashset = new HashSet<OverridesGetHashCode>();
var oghc = new OverridesGetHashCode { Text = "1" };
hashset.Add(oghc);
oghc.Text = "2";
then this doesn't work:
var removedCount = hashset.RemoveWhere(c => ReferenceEquals(c, oghc));
// fails, nothing is removed
Assert.IsTrue(removedCount == 1);
and neither does this:
// this line works, i.e. it does find a single item matching the predicate
var existing = hashset.Single(c => ReferenceEquals(c, oghc));
// but this fails; nothing is removed again
var removed = hashset.Remove(existing);
Assert.IsTrue(removed);
I guess the hash it internally uses is generated when the item is inserted and, if that's true, it's understandable that hashset.Contains(oghc) doesn't work.
I also guess it looks up the item by its hash code and only checks the predicate if it finds a match, which might be why the first test fails (again, I'm just guessing here).
But why does the last test fail, I just got that object out of the hashset? Am I missing something, is this a wrong way to remove something from a HashSet?
Thank you for taking the time to read this.
UPDATE: To avoid confusion, here's the Equals():
protected bool Equals(OverridesGetHashCode other)
{
    return string.Equals(Text, other.Text);
}

public override bool Equals(object obj)
{
    if (ReferenceEquals(null, obj)) return false;
    if (ReferenceEquals(this, obj)) return true;
    if (obj.GetType() != this.GetType()) return false;
    return Equals((OverridesGetHashCode)obj);
}
Changing the hash code of your object while that object is in a HashSet is a violation of the HashSet's contract.
Being unable to remove the object is not the problem here. You are not allowed to change the hash code in the first place.
Let me quote from MSDN:
The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again.
They tell the story a little differently, but the essence is the same: they say the hash code can never change. In practice, you can change it as long as you make sure no one uses the old hash code anymore. Not that this is good practice, but it works.
It's important that any items added to a hash based table (HashSet, Dictionary, etc.) not be modified once they are inserted into the structure (at least not until they are removed).
To find an object in the data structure, it computes its hash code and then finds a location based on that hash code. If you mutate that object, then the hash code it returns no longer reflects its current location in that data structure (unless you're very, very lucky and it just happens to be a hash collision).
On the MSDN page for Dictionary it says:
As long as an object is used as a key in the Dictionary<TKey, TValue>, it must not change in any way that affects its hash value.
This same assertion applies to HashSet as well, as they both are implemented using hash tables.
There are good answers here; I just wanted to add this. If you look at the decompiled HashSet<T> code, you'll see that Add(value) does the following:
1. Calls IEqualityComparer<T>.GetHashCode() to get the hash code for value. For the default comparer this boils down to GetHashCode().
2. Uses that hash code to calculate which "bucket" and "slot" the (reference to) value should be stored in.
3. Stores the reference.
When you call Remove(value), it performs steps 1 and 2 again to find where the reference is. Then it calls IEqualityComparer<T>.Equals() to make sure it has indeed found the right value. However, since you've changed what GetHashCode() returns, it calculates a different bucket/slot location, which is invalid. Thus, it cannot find the object.
So, note that Equals() doesn't really come into play here, because it will never even get to the right bucket/slot location if the hash code changes.
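As a much-simplified sketch of that bucket arithmetic (hypothetical code, not the real implementation; the real HashSet<T> also chains collisions, resizes, and caches hash codes, but the failure mode is the same):

using System.Collections.Generic;

static class HashBucketSketch
{
    // A simplified model of step 2: map a value's hash code to a bucket index.
    // Add stores the entry under this index; Remove recomputes it the same way.
    // If GetHashCode() has changed in between, Remove searches the wrong bucket.
    public static int FindBucket<T>(T value, IEqualityComparer<T> comparer, int bucketCount)
    {
        int hash = comparer.GetHashCode(value) & 0x7FFFFFFF; // drop the sign bit
        return hash % bucketCount;
    }
}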
I am trying to implement a simple algorithm using C#'s Dictionary:
My 'outer' dictionary looks like this: Dictionary<paramID, Dictionary<string, object>> [where paramID is simply an identifier which holds two strings].
If key 'x' is already in the dictionary, add the specific entry to that record's inner dictionary; if it doesn't exist, add an entry to the outer Dictionary and then add the entry to the inner dictionary.
Somehow, when I use TryGetValue it always returns false, so it always creates new entries in the outer Dictionary - which produces duplicates.
My code looks more or less like this:
Dictionary<string, object> tempDict = new Dictionary<string, object>();
if (outerDict.TryGetValue(new paramID(xKey, xValue), out tempDict))
{
    tempDict.Add(newKey, newValue);
}
The block inside the if is never executed, even when this specific entry is in the outer Dictionary.
Am I missing something? (If you want, I can post screenshots from the debugger, or anything else you need.)
If you haven't overridden Equals and GetHashCode on your paramID type, and it's a class rather than a struct, then the default meaning of equality will be in effect, and each paramID will only be equal to itself.
You likely want something like:
public class ParamID : IEquatable<ParamID> // IEquatable makes this faster
{
    private readonly string _first; // not necessary, but immutability of keys prevents other possible bugs
    private readonly string _second;

    public ParamID(string first, string second)
    {
        _first = first;
        _second = second;
    }

    public bool Equals(ParamID other)
    {
        // change for case-insensitive, culture-aware, etc.
        return other != null && _first == other._first && _second == other._second;
    }

    public override bool Equals(object other)
    {
        return Equals(other as ParamID);
    }

    public override int GetHashCode()
    {
        // change for case-insensitive, culture-aware, etc.
        int fHash = _first.GetHashCode();
        return ((fHash << 16) | (fHash >> 16)) ^ _second.GetHashCode();
    }
}
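With equality defined like that, a lookup by a freshly constructed key finds the existing entry; for example:

var outerDict = new Dictionary<ParamID, Dictionary<string, object>>();
outerDict.Add(new ParamID("a", "b"), new Dictionary<string, object>());

// A different instance with the same strings now matches the stored key.
Dictionary<string, object> tempDict;
if (outerDict.TryGetValue(new ParamID("a", "b"), out tempDict))
{
    tempDict.Add("newKey", "newValue"); // the if-block executes this time
}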
For the requested explanation, I'm going to do a different version of ParamID where the string comparison is case-insensitive and ordinal rather than culture-based (a form appropriate for some computer-readable codes, e.g. matching keywords in a case-insensitive computer language or case-insensitive identifiers like language tags, but not for anything human-readable: it will not, for example, realise that "SS" is a case-insensitive match for "ß"). This version also considers {"A", "B"} to match {"B", "A"} - that is, it doesn't care which way around the strings are. By doing a different version with different rules, it should be possible to touch on a few of the design considerations that come into play.
Let's start with our class containing just the two fields that are its state:
public class ParamID
{
    private readonly string _first; // not necessary, but immutability of keys prevents other possible bugs
    private readonly string _second;

    public ParamID(string first, string second)
    {
        _first = first;
        _second = second;
    }
}
At this point if we do the following:
ParamID x = new ParamID("a", "b");
ParamID y = new ParamID("a", "b");
ParamID z = x;
bool a = x == y; // a is false
bool b = z == x; // b is true
That's because by default a reference type is only equal to itself. Why? Well, firstly, sometimes that's just what we want; and secondly, it isn't always clear what else we might want without the programmer defining how equality works.
Note also that if ParamID were a struct, it would have equality defined much like what you wanted. However, the implementation would be rather inefficient, and also buggy if it contained a decimal, so either way it's always a good idea to implement equality explicitly.
The first thing we are going to do to give this a different concept of equality is to implement IEquatable<ParamID>. This is not strictly necessary (and didn't exist until .NET 2.0), but:
It will be more efficient in a lot of use cases, including when key to a Dictionary<TKey, TValue>.
It's easy to do the next step with this as a starting point.
Now, there are four rules we must follow when we implement an equality concept:
An object must still be always equal to itself.
If X == Y and X != Z, then later if the state of none of those objects has changed, X == Y and X != Z still.
If X == Y and Y == Z, then X == Z.
If X == Y and Y != Z then X != Z.
Most of the time, you'll end up following all these rules without even thinking about it, you just have to check them if you're being particularly strange and clever in your implementation. Rule 1 is also something that we can take advantage of to give us a performance boost in some cases:
public class ParamID : IEquatable<ParamID>
{
    private readonly string _first; // not necessary, but immutability of keys prevents other possible bugs
    private readonly string _second;

    public ParamID(string first, string second)
    {
        _first = first;
        _second = second;
    }

    public bool Equals(ParamID other)
    {
        if (other == null)
            return false;
        if (ReferenceEquals(this, other))
            return true;
        // ordinal, to match both the stated intent and the GetHashCode below
        if (string.Compare(_first, other._first, StringComparison.OrdinalIgnoreCase) == 0
            && string.Compare(_second, other._second, StringComparison.OrdinalIgnoreCase) == 0)
            return true;
        return string.Compare(_first, other._second, StringComparison.OrdinalIgnoreCase) == 0
            && string.Compare(_second, other._first, StringComparison.OrdinalIgnoreCase) == 0;
    }
}
The first thing we've done is see if we're being compared with equality to null. We almost always want to return false in such cases (not always, but the exceptions are very, very rare and if you don't know for sure you're dealing with such an exception, you almost certainly are not), and certainly we don't want to throw a NullReferenceException.
The next thing we do is to see if the object is being compared with itself. This is purely an optimisation. In this case, it's probably a waste of time, but it can be very useful with more complicated equality tests, so it's worth pointing out this trick here. This takes advantage of the rule that identity entails equality, that is, any object is equal to itself (Ayn Rand seemed to think this was somehow profound).
Finally, having dealt with these two special cases, we get to the actual rule for equality. As I said above, my example considers two objects equal if they have the same two strings, in either order, for case-insensitive ordinal comparisons, so I've a bit of code to work that out.
(Note that the order in which we compare component parts can have a performance impact. Not in this case, but with a class that contains both an int and a string we would compare the ints first, because that is faster and we may hence find an answer of false before we even look at the strings.)
Now at this point we've a good basis for overriding the Equals method defined in object:
public override bool Equals(object other)
{
    return Equals(other as ParamID);
}
Since as will return a ParamID reference if other is a ParamID and null for anything else (including if null was what we were passed in the first place), and since we already handle comparison with null, we're all set.
Try to compile at this point and you will get a warning that you have overridden Equals but not GetHashCode (the same is true if you'd done it the other way around).
GetHashCode is used by the dictionary (and other hash-based collections like Hashtable and HashSet) to decide where to place the key internally. It will take the hash code, re-hash it down to a smaller value in a way that is its own business, and use it to place the object in its internal store.
Because of this, it's clear why the following is a bad idea were ParamID not readonly on all fields:
ParamID x = new ParamID("a", "b");
dict.Add(x, 33);
x.First = "c"; // x will now likely never be found in dict because its hashcode doesn't match its position!
This means the following rules apply to hash-codes:
Two objects considered equal, must have the same hashcode. (This is a hard rule, you will have bugs if you break it).
While we can't guarantee uniqueness, the more spread out the returned results, the better. (Soft rule, you will have better performance the better you do at it).
(Well, 2½.) While not a strict rule, if we take such a complicated approach to point 2 above that it takes forever to return a result, the net effect will be worse than if we had a poorer-quality hash. So we want to try to be reasonably quick too, if we can.
Despite the last point, it's rarely worth memoising the results. Hash-based collections will normally memoise the value themselves, so it's a waste to do so in the object.
For the first implementation, because our approach to equality depended upon the default approach to equality of the strings, we could use the strings' default hashcodes. For my different version I'll use another approach that we'll explore more later:
public override int GetHashCode()
{
    return StringComparer.OrdinalIgnoreCase.GetHashCode(_first)
        ^ StringComparer.OrdinalIgnoreCase.GetHashCode(_second);
}
Let's compare this to the first version. In both cases we get hashcodes of the component parts. If the values were integers, chars or bytes we would have worked with the values themselves, but here we build on the work done in implementing the same logic for those parts. In the first version we use the GetHashCode of string itself, but since "a" has a different hashcode to "A" that won't work here, so we use a class that produces a hashcode ignoring that difference.
The other big difference between the two is that in the first case we mix the bits up more with ((fHash << 16) | (fHash >> 16)). The reason for this is to avoid duplicate hashes. We can't produce a perfect hashcode where every different object has a different value, because there are only 4294967296 possible hashcode values, but many more possible values for ParamID (including null, which is treated as having a hashcode of 0). (There are cases where perfect hashes are possible, but they bring in different concerns than here.) Because of this imperfection, we have to think not only about what values are possible, but which are likely. Generally, shifting bits like we've done in the first version avoids common values having the same hash. We don't want {"A", "B"} to hash the same as {"B", "A"}.
It's an interesting experiment to produce a deliberately poor GetHashCode that always returns 0. It'll work, but instead of being close to O(1), dictionaries will be O(n) - and poor as O(n) goes, at that!
The second version doesn't do that, because it has different rules: we actually want values that are the same but for being switched around to be considered equal, and hence to have the same hashcode.
The other big difference is the use of StringComparer.OrdinalIgnoreCase. This is an instance of StringComparer which, among other interfaces, implements IEqualityComparer<string> and IEqualityComparer. There are two interesting things about the IEqualityComparer<T> and IEqualityComparer interfaces.
The first is that hash-based collections (such as the dictionary) all use them; it's just that unless they're passed an instance of one in their constructor, they will use EqualityComparer<T>.Default, which calls into the Equals and GetHashCode methods we've described above (a sketch of passing one in follows the list below).
The other is that it allows us to ignore the Equals and GetHashCode mentioned above, and provide them from another class. There are three advantages to this:
We can use them in cases (string is a classic case) where there is more than one likely definition of "equals".
We can ignore the one provided by the class's author, and provide our own.
We can use them to avoid a particular attack. This attack is based on being in a situation where input you provide will be hashed by the code you are attacking. You pick input so as to deliberately provide objects that are different, but hash the same. This means that the poor performance we talked about avoiding earlier is hit, and it can be so bad that it becomes a denial of service attack. By providing different IEqualityComparer implementations with random elements to the hash code (but the same for every instance of the comparer), we can vary the algorithm enough each time as to thwart the attack. The use for this is rare (it has to be something that will hash based purely on outside input that is large enough for the poor performance to really hurt), but vital when it comes up.
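Here is the promised sketch of the first point: a dictionary whose keys are compared case-insensitively, with string itself untouched:

var dict = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
dict["Alpha"] = 1;

// True: the comparer passed to the constructor, not string's own
// Equals/GetHashCode, defines key equality for this dictionary.
bool found = dict.ContainsKey("ALPHA");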
Finally, if we override Equals we may or may not want to overload == and != too. It can be useful to keep them referring to identity only (there are times when that is what we care most about), but it can also be useful to have them follow the new semantics ("abc" == "ab" + "c" is an example of such an overload).
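A sketch of such an overload pair for ParamID, delegating to the Equals we already wrote:

public static bool operator ==(ParamID x, ParamID y)
{
    // ReferenceEquals avoids recursing back into this operator.
    if (ReferenceEquals(x, y)) return true;
    if (ReferenceEquals(x, null)) return false;
    return x.Equals(y);
}

public static bool operator !=(ParamID x, ParamID y)
{
    return !(x == y);
}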
In summary:
The default equality of reference objects is identity (equal only to itself).
The default equality of value types is a simple comparison of all fields (but poor in performance).
We can change the concept of equality for our classes in either case, but this MUST involve both Equals and GetHashCode*
We can also override this from outside the class and provide another concept of equality (e.g. with an IEqualityComparer<T>).
Dictionary, HashSet, ConcurrentDictionary, etc. all depend on this.
Hashcodes represent a mapping from all values of an object to a 32-bit number.
Hashcodes must be the same for objects we consider equal.
Hashcodes must be spread well.
*Incidentally, anonymous classes have a simple comparison like that of value types, but better performance, which matches almost any case in which we might care about the hash code of an anonymous type.
Most likely, paramID does not implement equality comparison correctly.
It should be implementing IEquatable<paramID> and that means especially that the GetHashCode implementation must adhere to the requirements (see "Notes to implementers").
As for keys in dictionaries, MSDN says:
As long as an object is used as a key in the Dictionary(Of TKey, TValue), it must not change in any way that affects its hash value. Every key in a Dictionary(Of TKey, TValue) must be unique according to the dictionary's equality comparer. A key cannot be Nothing, but a value can be, if the value type TValue is a reference type.

Dictionary(Of TKey, TValue) requires an equality implementation to determine whether keys are equal. You can specify an implementation of the IEqualityComparer(Of T) generic interface by using a constructor that accepts a comparer parameter; if you do not specify an implementation, the default generic equality comparer EqualityComparer(Of T).Default is used. If type TKey implements the System.IEquatable(Of T) generic interface, the default equality comparer uses that implementation.
Since you don't show the paramID type I cannot go into more detail.
As an aside: that's a lot of keys and values getting tangled in there. There's a dictionary inside a dictionary, and the keys of the outer dictionary aggregate some kind of value as well. Perhaps this arrangement can be advantageously simplified? What exactly are you trying to achieve?
Use the Dictionary.ContainsKey method.
And so:
Dictionary<string, object> tempDict = new Dictionary<string, object>();
paramID searchKey = new paramID(xKey, xValue);

if (outerDict.ContainsKey(searchKey))
{
    outerDict.TryGetValue(searchKey, out tempDict);
    tempDict.Add(newKey, newValue);
}
Also don't forget to override the Equals and GetHashCode methods in order to correctly compare two paramIDs:
class paramID
{
    // rest of things

    public override bool Equals(object obj)
    {
        paramID p = obj as paramID; // 'as' avoids an InvalidCastException for other types
        if (p == null) return false;

        // how do you determine if two paramIDs are the same?
        return p.key == this.key;
    }

    public override int GetHashCode()
    {
        return this.key.GetHashCode();
    }
}
I have a HashSet that contains multiple lists of integers - i.e. HashSet<List<int>>
In order to maintain uniqueness I am currently having to do two things:
1. Manually loop though existing lists, looking for duplicates using SequenceEquals.
2. Sorting the individual lists so that SequenceEquals works correctly.
Is there a better way to do this? Is there an existing IEqualityComparer that I can provide to the HashSet so that HashSet.Add() can automatically handle uniqueness?
var hashSet = new HashSet<List<int>>();
for (/* some condition */)
{
    List<int> list = new List<int>();
    ...

    /* for eliminating duplicate lists */
    list.Sort();
    bool validPartition = true;
    foreach (var set in hashSet)
    {
        if (list.SequenceEqual(set))
        {
            validPartition = false;
            break;
        }
    }
    if (validPartition)
        hashSet.Add(list);
}
Here is a possible comparer that compares an IEnumerable<T> by its elements. You still need to sort manually before adding.
One could build the sorting into the comparer, but I don't think that's a wise choice. Adding a canonical form of the list seems wiser.
This code will only work in .NET 4, since it takes advantage of generic variance. If you need earlier versions, you need to either replace IEnumerable with List, or add a second generic parameter for the collection type.
class SequenceComparer<T> : IEqualityComparer<IEnumerable<T>>
{
    public bool Equals(IEnumerable<T> seq1, IEnumerable<T> seq2)
    {
        return seq1.SequenceEqual(seq2);
    }

    public int GetHashCode(IEnumerable<T> seq)
    {
        int hash = 1234567;
        foreach (T elem in seq)
            hash = unchecked(hash * 37 + elem.GetHashCode());
        return hash;
    }
}
void Main()
{
    var hashSet = new HashSet<List<int>>(new SequenceComparer<int>());

    List<int> test = new int[] { 1, 3, 2 }.ToList();
    test.Sort();
    hashSet.Add(test);

    List<int> test2 = new int[] { 3, 2, 1 }.ToList();
    test2.Sort();
    hashSet.Contains(test2).Dump(); // prints True (Dump is a LINQPad extension)
}
This starts off wrong, it has to be a HashSet<ReadOnlyCollection<>> because you cannot allow the lists to change and invalidate the set predicate. This then allows you to calculate a hash code in O(n) when you add the collection to the set. And an O(n) test to check if it is already in the set with a very uncommon O(n^2) worst case if all the hashes turn out to be equal. Store the computed hash with the collection.
Is there a reason you aren't just using an array? int[] will perform better. Also I assume the lists contain duplicates, otherwise you'd just be using sets and not have a problem.
It appears that their contents won't change (much) once they've been added to the HashSet. At the end of the day, you are going to have to use a comparer that falls back on SequenceEqual. But you don't have to do it every single time. Instead of doing an ever-growing number of sequence compares (as the hashset grows, a SequenceEqual against each existing member), if you create a good hashcode up front you may have to do very few such compares. While the overhead of generating a good hashcode is probably about the same as doing a SequenceEqual, you're only doing it a single time for each list.
So, the first time you operate on a particular List<int>, you should generate a hash based on the ordered sequence of numbers and cache it. Then the next time the list is compared, the cached value can be used. I'm not sure off the top of my head how you might do this with a comparer (maybe a static dictionary?), but you could implement a List wrapper that does this easily.
Here's a basic idea. You'd need to be careful to ensure that it isn't brittle (e.g. make sure you invalidate any cached hash code when members change), but it doesn't look like that's going to be a typical situation for the way you're using this.
public class FasterComparingList<T> : IList<T>, IList // ... whatever you need to implement
{
    // Implement your interfaces against InnerList.
    // Any methods that change members of the list need to
    // set _LongHash = null to force it to be regenerated.
    public List<T> InnerList { /* ... lazy-load a List */ }

    public override int GetHashCode()
    {
        if (_LongHash == null)
        {
            _LongHash = GetLongHash();
        }
        return (int)_LongHash;
    }
    private int? _LongHash = null;

    public bool Equals(FasterComparingList<T> list)
    {
        // different lengths can never be equal
        if (InnerList.Count != list.Count)
        {
            return false;
        }
        // you could also cache the sorted state and skip this if a list hasn't
        // changed since the last sort (not sure if the native List does)
        list.Sort();
        InnerList.Sort();
        return InnerList.SequenceEqual(list);
    }

    protected int GetLongHash()
    {
        return .....
        // something to create a reasonably good hash code -- which depends on the
        // data. Adding all the numbers is probably fine; even if it fails a couple
        // percent of the time, you're still orders of magnitude ahead of a sequence
        // compare each time
    }
}
If the lists won't change once added, this should be very fast. Even in situations where the lists could change frequently, the time to create a new hash code is not likely very different from (if at all greater than) doing a sequence compare.
If you don't specify an IEqualityComparer, the type's default will be used, so I think what you'll need to do is create your own implementation of IEqualityComparer and pass that to the constructor of your HashSet. Here is a good example.
When comparing lists in hash sets, one option you always have is, instead of comparing each element, to sort the lists, join them using a comma, and compare the generated strings.
So in this case, when you create the custom comparer, instead of iterating over elements and calculating a custom hash function, you can apply this logic.
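A hedged sketch of that idea (the class name is mine; for int elements the comma separator is unambiguous, but for arbitrary element types you'd need a separator that can't appear in the string form):

class JoinedListComparer : IEqualityComparer<List<int>>
{
    // Canonical form: sorted elements joined with commas, e.g. "1,2,3".
    private static string Canonical(List<int> list)
    {
        var copy = new List<int>(list); // don't mutate the caller's list
        copy.Sort();
        return string.Join(",", copy);
    }

    public bool Equals(List<int> x, List<int> y)
    {
        return Canonical(x) == Canonical(y);
    }

    public int GetHashCode(List<int> list)
    {
        return Canonical(list).GetHashCode();
    }
}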
In a program I need to evaluate lots of objects. The result of evaluation is a double.
For example:
Object myObject = new Object(x,y,z);
double a = eval(myObject);
After this, lots of other objects need to be evaluated.
I want to avoid re-evaluating the same objects, so I need to add evaluated objects and their evaluation results to a hash structure.
For example, something like this after the first evaluation (this is pseudocode):
myHash.add(myObject, a);
Object anotherObject = new Object(x,y,z);
if (myHash.find(anotherObject))
double evaluationForAnotherObject = myHash.get(anotherObject);
Any help would be highly welcome.
A Dictionary<TKey,TValue> can be used for such lookups:
Dictionary<object, double> dict = new Dictionary<object, double>();
if (dict.ContainsKey(obj))
    x = dict[obj];
It's important to use the correct equality comparer. For example, object uses reference equality by default. If your TKey type doesn't use the desired equality comparison, you can supply an IEqualityComparer<TKey> to the constructor of the dictionary.
As an alternative you can pass your function into a memoizer. It returns a new function which caches the result of earlier computations. AFAIK the MiscUtil library contains one.
Func<object,double> memoizingEval=Memoizer.Memoize(eval);
and then use memoizingEval(obj)
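If you'd rather not take a dependency, a minimal memoizer is easy to write yourself (a sketch, not MiscUtil's actual implementation; it assumes the function is pure and the key type has suitable equality, and needs System and System.Collections.Generic):

static Func<TKey, TResult> Memoize<TKey, TResult>(Func<TKey, TResult> func)
{
    var cache = new Dictionary<TKey, TResult>();
    return key =>
    {
        TResult result;
        if (!cache.TryGetValue(key, out result))
        {
            result = func(key);  // compute once...
            cache[key] = result; // ...and remember it
        }
        return result;
    };
}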