What is MemoryCache.AddOrGetExisting for? - c#

The behaviour of MemoryCache.AddOrGetExisting is described as:
Adds a cache entry into the cache using the specified key and a value
and an absolute expiration value.
And that it returns:
If a cache entry with the same key exists, the existing cache entry; otherwise, null.
What is the purpose of a method with these semantics? What is an example of this?

There are often situations where you only want to create a cache entry if a matching entry doesn't already exist (that is, you don't want to overwrite an existing value).
AddOrGetExisting allows you to do this atomically. Without AddOrGetExisting it would be impossible to perform the get-test-set in an atomic, thread-safe manner. For example:
Thread 1                                      Thread 2
--------                                      --------
// check whether there's an existing
// entry for "foo"
// the call returns null because
// there's no match
Get("foo")
                                              // check whether there's an existing
                                              // entry for "foo"
                                              // the call returns null because
                                              // there's no match
                                              Get("foo")
// set value for key "foo"
// assumes, rightly, that there's
// no existing entry
Set("foo", "first thread rulez")
                                              // set value for key "foo"
                                              // assumes, wrongly, that there's
                                              // no existing entry
                                              // overwrites the value just set
                                              // by thread 1
                                              Set("foo", "second thread rulez")
(See also the Interlocked.CompareExchange method, which enables a more sophisticated equivalent at the variable level, and also the wikipedia entries on test-and-set and compare-and-swap.)
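Here's a minimal sketch of the pattern, assuming the default MemoryCache instance; the key, value, and expiration are illustrative:
var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) };

// Either adds the new value and returns null, or leaves the existing entry
// untouched and returns it - a single thread-safe get-test-set.
object existing = MemoryCache.Default.AddOrGetExisting("foo", "first thread rulez", policy);
string value = (string)(existing ?? "first thread rulez");

Whichever thread wins the race, both threads end up observing the same cached value.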

LukeH's answer is correct. Because the other answers indicate that the method's semantics could be interpreted differently, I think it's worth pointing out that AddOrGetExisting in fact will not update existing cache entries.
So this code
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "one", new CacheItemPolicy()) ?? "(null)");
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "two", new CacheItemPolicy()));
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "three", new CacheItemPolicy()));
will print
(null)
one
one
Another thing to be aware of: when AddOrGetExisting finds an existing cache entry, it will not dispose of the CacheItemPolicy passed to the call. This may be problematic if you use custom change monitors that set up expensive resource tracking mechanisms. Normally, when a cache entry is evicted, the cache system calls Dispose() on its ChangeMonitors, which gives you the opportunity to unregister events and the like. When AddOrGetExisting returns an existing entry, however, you have to take care of that yourself.
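For example, a hedged sketch of that cleanup (CreateExpensiveMonitor is a hypothetical factory for a custom ChangeMonitor):
var policy = new CacheItemPolicy();
ChangeMonitor monitor = CreateExpensiveMonitor(); // hypothetical custom monitor
policy.ChangeMonitors.Add(monitor);

object existing = MemoryCache.Default.AddOrGetExisting("key", "new value", policy);
if (existing != null)
{
    // The cache kept its old entry, so this policy (and its monitor) was never
    // registered; dispose of the monitor ourselves to release its resources.
    monitor.Dispose();
}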

I haven't actually used this, but I guess one possible use case is if you want to unconditionally update the cache with a new entry for a particular key and you want to explicitly dispose of the old entry being returned.


Inquiry about Kentico9 Caching

I'm using Kentico 9 and trying to test caching. I would like to ask about how to replace the existing cache if a new value is entered.
Recently I was trying to cache with this code:
CacheHelper.Cache(cs => getCachingValue(cs, cacheValue), new CacheSettings(10, "cacheValue"));

public string getCachingValue(CacheSettings cs, string result)
{
    string cacheValue = result;
    if (cs.Cached)
    {
        cs.CacheDependency = CacheHelper.GetCacheDependency("cacheValue");
    }
    return cacheValue;
}
When caching data you need to set up correct cache dependencies. For example, this is the cache dependency for all users:
if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}
This will drop the cache whenever a user is updated or created. So the next time you call the method it will get the data from the database and cache it again, until the cache expires or someone adds/updates a user.
So you don't need to take care of replacing/updating cached data - the appropriate mechanism is already there.
See cache dependencies in documentation.
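A minimal sketch of the pattern, assuming Kentico 9's CacheHelper API (LoadUserSummary is a hypothetical method that queries the database):
public static string GetUserSummary()
{
    return CacheHelper.Cache(cs =>
    {
        if (cs.Cached)
        {
            // The cache entry is dropped whenever any user is created or updated,
            // so the next call reloads fresh data and caches it again.
            cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
        }

        return LoadUserSummary(); // expensive call, e.g. a database query
    },
    new CacheSettings(10, "usersummary"));
}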
Since your cache dependency is called "cacheValue", you need to "touch" that particular cache key, to force the cache to clear.
When the value you are caching changes (the value you provide to the string result parameter of the getCachingValue method), call the CacheHelper.TouchKey method to force the cache to clear:
CacheHelper.TouchKey("cacheValue");
(You should also consider changing the name of the cache key, to prevent confusion)
Keep in mind that if your cache key is "cacheValue", then any call made to this will always be the same 'hit'. The CacheSettings key is its 'unique identifier', you could say, and the cache dependency is how it automatically resets.
So for example, say you cache a function that adds two values (wouldn't really need to cache this, but for an example where the input changes)
If you have a cache value for your "AddTwoValues(int a, int b)" of
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, "cacheValue"));
The first call will cache the value of the call (say you pass it 1 and 2), so it caches "3" for the key "cacheValue".
On the second call, if you pass it 3 and 5, the cache key is still "cacheValue", so it will assume it's the same call as the first and return 3, not even trying to add 3+5.
I usually append any parameters to the cache key.
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, string.Format("AddTwoValues|{0}|{1}", a, b)));
This way, if I call it with 1 and 2 twice, the first time it will process and cache "3" for the key "AddTwoValues|1|2", and when called again the key will match, so it will just return the cached value.
If you call with different parameters, then the cache key will be different.
Make sense?
The other answers of course talk about the cache dependency in the helper function:
if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}
This identifies how the cache automatically clears (if you use "cms.user|all" as the dependency, then whenever a user is changed, this cache automatically clears itself).

Modify the original object in Service Fabric Reliable Collections

I read this article about working with Reliable Collections and it is mentioned there that you MUST NOT modify an object once you have given it to a reliable collection, and that the correct way to update a value in a reliable collection is to get a copy (clone) of the value, change the cloned value, and then update the cloned value in the RC.
Bad use:
using (ITransaction tx = StateManager.CreateTransaction())
{
    // Use the user's name to look up their data
    ConditionalValue<User> user = await m_dic.TryGetValueAsync(tx, name);

    // The user exists in the dictionary, update one of their properties.
    if (user.HasValue)
    {
        // The line below updates the property's value in memory only; the
        // new value is NOT serialized, logged, & sent to secondary replicas.
        user.Value.LastLogin = DateTime.UtcNow; // Corruption!
        await tx.CommitAsync();
    }
}
My question is: why can't I modify the object once I've given it to the RC? Why do I have to clone the object before I change something in it? Why can't I do something like this (update the object in the same transaction):
using (ITransaction tx = StateManager.CreateTransaction())
{
    // Use the user's name to look up their data
    ConditionalValue<User> user = await m_dic.TryGetValueAsync(tx, name);

    // The user exists in the dictionary, update one of their properties.
    if (user.HasValue)
    {
        // Update the property in memory...
        user.Value.LastLogin = DateTime.UtcNow;
        // ...then write the same object back through the dictionary.
        await m_dic.SetAsync(tx, name, user.Value);
        await tx.CommitAsync();
    }
}
Thanks!
Reliable Dictionary is a replicated object store. If you update the objects inside Reliable Dictionary without going through Reliable Dictionary (e.g. TryUpdateAsync), then you can corrupt the state.
For example, if you change the object inside Reliable Dictionary using your reference, then the change will not be replicated to the secondary replicas.
This is because Reliable Dictionary does not know that you changed one of the TValues. Hence, the change will be lost if the replica ever fails over.
The above is the simplest example. Modifying objects directly can cause other serious problems, like breaking ACID in multiple ways.
Technically you can do what you want. But don't forget about lock modes and isolation levels.
Here we can read: “Any Repeatable Read operation by default takes Shared locks. However, for any read operation that supports Repeatable Read, the user can ask for an Update lock instead of the Shared lock”.
That means that TryGetValueAsync takes only a Shared lock, and an attempt to update this value later could cause a deadlock.
The next statement is: “An Update lock is an asymmetric lock used to prevent a common form of deadlock that occurs when multiple transactions lock resources for potential updates at a later time.”
So, the correct code would be
await m_dic.TryGetValueAsync(tx, name, LockMode.Update)
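Putting it together, a minimal sketch of the safe update (assuming User has a copy constructor; any deep-copy mechanism works):
using (ITransaction tx = StateManager.CreateTransaction())
{
    // Take an Update lock up front to avoid the shared-lock deadlock described above.
    ConditionalValue<User> result = await m_dic.TryGetValueAsync(tx, name, LockMode.Update);

    if (result.HasValue)
    {
        // Never mutate result.Value in place; work on a copy instead.
        User updatedUser = new User(result.Value) { LastLogin = DateTime.UtcNow };

        // Write the copy back through the dictionary so the change is
        // serialized, logged, and replicated to the secondary replicas.
        await m_dic.SetAsync(tx, name, updatedUser);
        await tx.CommitAsync();
    }
}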

CacheItemPolicy - SlidingExpiration "Accessed" rules?

From http://msdn.microsoft.com/en-us/library/system.runtime.caching.cacheitempolicy.slidingexpiration(v=vs.110).aspx ...
"A span of time within which a cache entry must be accessed before the cache entry is evicted from the cache. The default is NoSlidingExpiration, meaning that the item should not be expired based on a time span."
What exactly is 'accessed' ? Does that mean if I hit the cached item like:
var object = cache["cachekeyname"];
It's considered to be 'accessed' ?
Or would it only be considered accessed if I actually modify the cached item?
It does mean that the cache entry is considered accessed when the following code is called:
var object = cache["cachekeyname"];
Therefore, if the piece of code containing the above snippet is not called within X time of the object being put into the cache (or last accessed), the entry will be removed from the cache.
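A minimal sketch of how this plays out, assuming the default MemoryCache; the key and window are illustrative:
var cache = MemoryCache.Default;
var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(5) };
cache.Set("cachekeyname", "some value", policy);

// This read counts as an access and pushes eviction out another five minutes.
var value = cache["cachekeyname"];

If nothing reads (or writes) the entry for five minutes, it is evicted.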

Converting Object.GetHashCode() to Guid

I need to assign a guid to objects for managing state at app startup & shutdown
It looks like I can store the lookup values in a dictionary using
dictionary<int,Guid>.Add(instance.GetHashCode(), myGUID());
Are there any potential issues to be aware of here?
NOTE
This does NOT need to persist between execution runs, only the guid like so
create the object
gethashcode(), associate with new or old guid
before app terminate, gethashcode() and lookup guid to update() or insert() into persistence engine USING GUID
only assumption is that the gethashcode() remains consistent while the process is running
also gethashcode() is called on the same object type (derived from window)
Update 2 - here is the bigger picture
create a state machine to store info about WPF user controls (later ref as UC) between runs
the types of user controls can change over time (added / removed)
in the very 1st run, there is no prior state; the user interacts with a subset of UC and modifies their state, which needs to be recreated when the app restarts
this state snapshot is taken when the app has a normal shutdown
also there can be multiple instances of a UC type
at shutdown, each instance is assigned a guid and saved along with the type info and the state info
all these guids are also stored in a collection
at restart, for each guid, create object, store ref/guid, restore state per instance so the app looks exactly as before
the user may add or remove UC instances/types and otherwise interact with the system
at shutdown, the state is saved again
choices at this time are to remove / delete all prior state and insert new state info to the persistence layer (sql db)
with observation/analysis over time, it turns out that a lot of instances remain consistent/static and do not change - so their state need not be deleted/inserted again as the state info is now quite large and stored over a non local db
so only the change delta is persisted
to compute the delta, need to track reference lifetimes
currently stored as List<WeakReference> at startup
on shutdown, iterate through this list and actual UC present on screen, add / update / delete keys accordingly
send delta over to persistence
Hope the above makes it clear.
So now the question is - why not just store the HashCode (of usercontrol only)
instead of WeakReference and eliminate the test for null reference while
iterating thru the list
update 3 - thanks all, going to use weakreference finally
Use GetHashCode to balance a hash table. That's what it's for. Do not use it for some other purpose that it was not designed for; that's very dangerous.
You appear to be assuming that a hash code will be unique. Hash codes don't work like that. See Eric Lippert's blog post on Guidelines and rules for GetHashCode for more details, but basically you should only ever make the assumptions which are guaranteed for well-behaving types - namely that if two objects have different hash codes, they're definitely unequal. If they have the same hash code, they may be equal, but may not be.
EDIT: As noted, you also shouldn't persist hash codes between execution runs. There's no guarantee they'll be stable in the face of restarts. It's not really clear exactly what you're doing, but it doesn't sound like a good idea.
EDIT: Okay, you've now noted that it won't be persistent, so that's a good start - but you still haven't dealt with the possibility of hash code collisions. Why do you want to call GetHashCode() at all? Why not just add the reference to the dictionary?
The quick and easy fix seems to be
var dict = new Dictionary<InstanceType, Guid>();
dict.Add(instance, myGUID());
Of course you need to implement InstanceType.Equals correctly if it isn't yet. (Or implement IEquatable<InstanceType>.)
Possible issues I can think of:
Hash code collisions could give you duplicate dictionary keys
Different objects' hash algorithms could give you the same hash code for two functionally different objects; you wouldn't know which object you're working with
This implementation is prone to ambiguity (as described above); you may need to store more information about your objects than just their hash codes.
Note - Jon said this more elegantly (see above)
Since this is for WPF controls, why not just add the Guid as an attached dependency property? You seem to already be iterating through the user controls in order to get their hash codes, so this would probably be a simpler method.
If you want to capture that a control was removed and which Guid it had, some manager object that subscribes to closing/removed events and just store the Guid and a few other details would be a good idea. Then you would also have an easier time to capture more details for analysis if you need.
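A minimal sketch of the attached-property idea, assuming WPF; the class and property names are hypothetical:
public static class PersistenceId
{
    public static readonly DependencyProperty IdProperty =
        DependencyProperty.RegisterAttached(
            "Id", typeof(Guid), typeof(PersistenceId),
            new PropertyMetadata(Guid.Empty));

    public static void SetId(DependencyObject element, Guid value) => element.SetValue(IdProperty, value);

    public static Guid GetId(DependencyObject element) => (Guid)element.GetValue(IdProperty);
}

// Usage: tag each user control instance once, then read it back at shutdown.
// PersistenceId.SetId(userControl, Guid.NewGuid());
// Guid id = PersistenceId.GetId(userControl);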

Should I check whether particular key is present in Dictionary before accessing it?

Should I check whether a particular key is present in the Dictionary if I am sure it will have been added by the time I reach the code that accesses it?
There are two ways I can access the value in the dictionary:
1. Check with the ContainsKey method; if it returns true, access the value using the dictionary object's indexer [key].
or
2. Use TryGetValue, which returns true or false and also returns the value through an out parameter.
(The 2nd will perform better than the 1st if I want to get the value. Benchmark.)
However, if I am sure that the function accessing the global dictionary will always find the key, should I still check using TryGetValue, or should I use the indexer [] without checking?
Or should I never assume that and always check?
Use the indexer if the key is meant to be present - if it's not present, it will throw an appropriate exception, which is the right behaviour if the absence of the key indicates a bug.
If it's valid for the key not to be present, use TryGetValue instead and react accordingly.
(Also apply Marc's advice about accessing a shared dictionary safely.)
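A minimal sketch of the two idioms, assuming a Dictionary<string, Foo> named data:
// Key is expected to be present: let the indexer throw KeyNotFoundException on a bug.
Foo expected = data["known-key"];

// Key may legitimately be absent: branch on TryGetValue instead.
if (data.TryGetValue("maybe-key", out Foo maybe))
{
    // use maybe...
}
else
{
    // handle the missing key...
}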
If the dictionary is global (static/shared), you should be synchronizing access to it (this is important; otherwise you can corrupt it).
Even if your thread is only reading data, it needs to respect the locks of other threads that might be editing it.
However, if you are sure that the item is there, the indexer should be fine:
Foo foo;
lock(syncLock) {
    foo = data[key];
}
// use foo...
Otherwise, a useful pattern is to check and add in the same lock:
Foo foo;
lock(syncLock) {
    if(!data.TryGetValue(key, out foo)) {
        foo = new Foo(key);
        data.Add(key, foo);
    }
}
// use foo...
Here we only add the item if it wasn't there... but inside the same lock.
Always check. Never say never. I assume your application is not so performance-critical that you have to save the time the check takes.
TIP: If you decide not to check, at least use Debug.Assert(dict.ContainsKey(key)); This will only be compiled in Debug mode; your release build will not contain it. That way you at least have the check when debugging.
Still: if possible, just check it :-)
EDIT: There have been some misconceptions here. By "always check" I did not only mean using an if somewhere. Handling an exception properly was also included in this. So, to be more precise: never take anything for granted, expect the unexpected. Check by ContainsKey or handle the potential exception, but do SOMETHING in case the element is not contained.
Personally, I'd check the key is there regardless of whether or not you are SURE it is. Some may say this check is superfluous and that the dictionary will throw an exception which you can catch, but IMHO you should not rely on that exception: you should check yourself and then either throw your own exception that means something, or return a result object with a success flag and reason inside... the failure mechanism is really implementation-dependent.
Surely the answer is "it all depends on the situation". You need to balance the risk that the key will be missing from the dictionary (low for small systems where there is limited access to the data, where you can rely on the order things are done, larger for larger systems, multiple programmers accessing the same data, especially with read/write/delete access, where threads are involved and order cannot be guaranteed or where data originates externally and reading can fail) with the impact of the risk (safety-critical systems, commercial releases or systems that a business will rely on compared with something made for fun, for a one-off job and/or for your use only) and with any requirements for speed, size and laziness.
If I were making a system to control railway signalling I would want to be safe against all possible and impossible errors, and safe from errors in the error-handling and so on (Murphy's 2nd law: "what can't go wrong will go wrong".) If I'm chucking stuff together for fun, even if size and speed are not an issue I will be MUCH more relaxed about stuff like this - I will want to get to the fun stuff.
Of course, sometimes this is the fun stuff in itself.
TryGetValue runs the same lookup code as indexing by key, except that when the key is missing the former returns false (and sets the out parameter to its default value) where the latter throws an exception. Use TryGetValue and you'll get consistent checks with absolutely no performance loss.
Edit: As Jon said, if you know it will always have the key, then you can index it and let it throw the appropriate exception. However, if you can provide better context information by throwing it yourself with a detailed message, that would be preferable.
There are 2 trains of thought on this from a performance point of view.
1) Avoid exceptions where possible, as exceptions are expensive - i.e. check whether a specific key exists before you try to retrieve it from the dictionary. This is the better approach in my opinion if there's a fair chance it may not exist, as it prevents fairly common exceptions.
2) If you're confident the item will exist in there 99% of the time, then don't check for its existence before accessing it. For the 1% of times when it doesn't exist, an exception will be thrown, but you've saved time for the other 99% by not checking.
What I'm saying is, optimise for the majority if there is a clear one. If there is any real degree of uncertainty about an item existing, then check before retrieving.
If you know that the dictionary normally contains the key, you don't have to check for it before accessing it.
If something would be wrong and the dictionary doesn't contain the items that it should, you can let the dictionary throw the exception. The only reason for checking for the key first would be if you want to take care of this problem situation yourself without getting the exception. Letting the dictionary throw the exception and catch that is however a perfectly valid way of handling the situation.
I think Marc and Jon have it (as usual) pretty well sewn up. Since you also mention performance in your question, it might be worth considering how you lock the dictionary.
The straightforward lock serialises all read access, which may not be desirable if reads are massively frequent and writes are relatively few. In that case, using a ReaderWriterLockSlim might be better. The downside is that the code is a little more complex and writes are slightly slower.
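A minimal sketch of the ReaderWriterLockSlim variant of Marc's pattern, assuming reads vastly outnumber writes (the Foo type and names are illustrative):
private static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private static readonly Dictionary<string, Foo> data = new Dictionary<string, Foo>();

public static Foo GetOrCreate(string key)
{
    rwLock.EnterReadLock();
    try
    {
        if (data.TryGetValue(key, out Foo existing))
            return existing;
    }
    finally
    {
        rwLock.ExitReadLock();
    }

    rwLock.EnterWriteLock();
    try
    {
        // Re-check: another thread may have added the key between the two locks.
        if (!data.TryGetValue(key, out Foo foo))
        {
            foo = new Foo(key);
            data.Add(key, foo);
        }
        return foo;
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}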
