Inquiry about Kentico 9 caching - C#

I'm using Kentico 9 and trying to test caching. I would like to ask how to replace an existing cache entry when a new value is entered.
Recently I was trying to cache with this code:
CacheHelper.Cache(cs => getCachingValue(cs, cacheValue), new CacheSettings(10, "cacheValue"));

public string getCachingValue(CacheSettings cs, string result)
{
    string cacheValue = result;
    if (cs.Cached)
    {
        cs.CacheDependency = CacheHelper.GetCacheDependency("cacheValue");
    }
    return cacheValue;
}

When caching data you need to set up the correct cache dependencies. For example, this is the cache dependency for all users:

if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}

This will drop the cache whenever a user is updated or created. The next time you call the method it will fetch the data from the database and cache it again, until the cache expires or someone adds or updates a user.
So you don't need to take care of replacing or updating the cached data - the appropriate mechanism is already there.
See cache dependencies in the documentation.

Since your cache dependency is called "cacheValue", you need to "touch" that particular cache key to force the cache to clear.
When the value you are caching changes (the value you pass to the string result parameter of getCachingValue), call the CacheHelper.TouchKey method to force the cache to clear:
CacheHelper.TouchKey("cacheValue");
(You should also consider renaming the cache key, to prevent confusion.)

Keep in mind that if your cache key is "cacheValue", then any call made with it will always be the same 'hit'. The CacheSettings key is its 'unique identifier', you could say, and the cache dependency is how it automatically resets.
So for example, say you cache a function that adds two values (you wouldn't really need to cache this, but it's an example where the input changes).
If you have a cache call for your AddTwoValues(int a, int b) of
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, "cacheValue"));
the first call will cache the value of the call (say you pass it 1 and 2), so it caches "3" for the key "cacheValue".
On the second call, if you pass it 3 and 5, the cache key is still "cacheValue", so it will assume it's the same call as the first and return 3, without even trying to add 3 + 5.
I usually append any parameters to the cache key:
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, string.Format("AddTwoValues|{0}|{1}", a, b)));
This way if I call it with 1 and 2 twice, the first call processes and caches "3" for the key "AddTwoValues|1|2", and when called again the key matches, so it just returns the cached value.
If you call it with different parameters, the cache key will be different.
Make sense?
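To make the parameter-keyed idea concrete outside of Kentico, here is a minimal sketch using a plain ConcurrentDictionary as the store; the AddTwoValues name and key format are just the illustrative ones from above, and in the real code CacheHelper plays the role of the store:

```csharp
using System;
using System.Collections.Concurrent;

public static class CacheKeyDemo
{
    // Stand-in for the cache store; in Kentico, CacheHelper plays this role.
    static readonly ConcurrentDictionary<string, int> Cache =
        new ConcurrentDictionary<string, int>();

    public static int AddTwoValues(int a, int b)
    {
        // The key includes the parameters, so different inputs get different entries.
        string key = string.Format("AddTwoValues|{0}|{1}", a, b);
        return Cache.GetOrAdd(key, _ => a + b);
    }

    public static void Main()
    {
        Console.WriteLine(AddTwoValues(1, 2)); // computed and cached under "AddTwoValues|1|2"
        Console.WriteLine(AddTwoValues(1, 2)); // same key, served from the cache
        Console.WriteLine(AddTwoValues(3, 5)); // different key, computed separately
    }
}
```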
The other answers of course cover the cache dependency in the helper function:

if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}

which identifies how it automatically clears (with "cms.user|all" as the dependency, whenever a user is changed this cache automatically clears itself).


IMemoryCache, refresh cache before eviction

I am trying to migrate my .NET Framework application to .NET Core, and in this process I want to move my in-memory caching from System.Runtime.Caching/MemoryCache to Microsoft.Extensions.Caching.Memory/IMemoryCache. But I have one problem with IMemoryCache: I could not find a way to refresh the cache before it is removed/evicted.
In System.Runtime.Caching/MemoryCache, there is an UpdateCallback property on CacheItemPolicy to which I can assign a callback delegate; this function is called on a separate thread just before the eviction of the cached object. Even if the callback takes a long time to fetch fresh data, MemoryCache continues to serve the old data beyond its expiry deadline, which ensures my code need not wait for data while the cache is being refreshed.
But I don't see such functionality in Microsoft.Extensions.Caching.Memory/IMemoryCache. There is a RegisterPostEvictionCallback extension method and a PostEvictionCallbacks property on MemoryCacheEntryOptions, but both of these fire after the cache entry has been evicted. So if the callback takes a long time, all requests for this data have to wait.
Is there any solution?
That's because there is no eviction, and, I would argue, that makes IMemoryCache not a cache:
"The ASP.NET Core runtime doesn't trim the cache when system memory is low."
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/memory?view=aspnetcore-5.0#use-setsize-size-and-sizelimit-to-limit-cache-size
"If SizeLimit isn't set, the cache grows without bound."
"The cache size limit does not have a defined unit of measure because the cache has no mechanism to measure the size of entries."
"An entry will not be cached if the sum of the cached entry sizes exceeds the value specified by SizeLimit."
So, not only does IMemoryCache fail to do the most basic thing you'd expect from a cache - respond to memory pressure by evicting the oldest entries - it also doesn't have the insert logic you expect. Adding a fresh item to a full "cache" doesn't evict an older entry; it refuses to insert the new item.
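A small sketch of that refusal behavior; this assumes the Microsoft.Extensions.Caching.Memory package, and the key names are arbitrary:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

class SizeLimitDemo
{
    static void Main()
    {
        // A cache with room for "2 units"; a unit means whatever you declare per entry.
        var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 2 });

        cache.Set("a", 1, new MemoryCacheEntryOptions { Size = 1 });
        cache.Set("b", 2, new MemoryCacheEntryOptions { Size = 1 });

        // The cache is now "full": this entry is simply not inserted -
        // nothing is evicted up front to make room for it.
        cache.Set("c", 3, new MemoryCacheEntryOptions { Size = 1 });

        Console.WriteLine(cache.TryGetValue("c", out _)); // the new entry is missing
    }
}
```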
I argue this is just an unfortunate Dictionary, and not a cache at all. The cake/class is a lie.
To get this to actually work like a cache, you'd need to write a wrapper class that measures memory size, plus system code that interacts with the wrapper and periodically evicts (via .Remove()) in response to memory pressure and expiration. You know - most of the work of implementing a cache.
So the reason you couldn't find a way to update before eviction is that by default there isn't any eviction, and if you've implemented your own eviction scheme, you've already written so much of an actual cache that what's a bit more?
You can use a trick here: re-add the old value to the cache in a RegisterPostEvictionCallback before looking up the new value. This way, if the lookup takes a long time, the old value is still available in the cache.
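A rough sketch of that trick (assuming the Microsoft.Extensions.Caching.Memory package; SetWithStaleFallback is a made-up helper name). Note that IMemoryCache evicts expired entries lazily, so the callback fires when the expired entry is actually evicted, not at the exact expiry instant:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public static class StaleFallbackDemo
{
    public static readonly MemoryCache Cache = new MemoryCache(new MemoryCacheOptions());

    public static void SetWithStaleFallback(string key, object value, TimeSpan ttl)
    {
        var options = new MemoryCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = ttl
        };
        options.RegisterPostEvictionCallback((k, v, reason, state) =>
        {
            // The trick: when the expired entry is evicted, put the old value
            // straight back (without an expiration) so readers never see a miss
            // while a background refresh fetches the fresh value.
            if (reason == EvictionReason.Expired)
                Cache.Set(k, v);
        });
        Cache.Set(key, value, options);
    }
}
```

The background refresh itself (fetch new data, then Set the entry again with a fresh TTL) is left out here; this only shows how the stale value stays readable in the meantime.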
I had this need and I wrote this class:
public abstract class AutoRefreshCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> _entries = new ConcurrentDictionary<TKey, TValue>();

    protected AutoRefreshCache(TimeSpan interval)
    {
        var timer = new System.Timers.Timer();
        timer.Interval = interval.TotalMilliseconds;
        timer.AutoReset = true;
        timer.Elapsed += (o, e) =>
        {
            // Pause the timer while refreshing so refreshes don't overlap.
            ((System.Timers.Timer)o).Stop();
            RefreshAll();
            ((System.Timers.Timer)o).Start();
        };
        timer.Start();
    }

    public TValue Get(TKey key)
    {
        return _entries.GetOrAdd(key, k => Load(k));
    }

    public void RefreshAll()
    {
        var keys = _entries.Keys;
        foreach (var key in keys)
        {
            _entries.AddOrUpdate(key, k => Load(key), (k, v) => Load(key));
        }
    }

    protected abstract TValue Load(TKey key);
}
Values aren't evicted, just refreshed. Only the first Get waits to load the value; during a refresh, Get returns the previous value (no wait).
Example of use:
class Program
{
    static void Main(string[] args)
    {
        var cache = new MyCache();
        while (true)
        {
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            Console.WriteLine(cache.Get("Key1") ?? "<null>");
        }
    }
}

public class MyCache : AutoRefreshCache<string, string>
{
    public MyCache()
        : base(TimeSpan.FromSeconds(5))
    { }

    readonly Random random = new Random();

    protected override string Load(string key)
    {
        Console.WriteLine($"Load {key} begin");
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(3));
        Console.WriteLine($"Load {key} end");
        return "Value " + random.Next();
    }
}
Result:
Load Key1 begin
Load Key1 end
Value 1648258406
Load Key1 begin
Value 1648258406
Value 1648258406
Value 1648258406
Load Key1 end
Value 1970225921
Value 1970225921
Value 1970225921
Value 1970225921
Value 1970225921
Load Key1 begin
Value 1970225921
Value 1970225921
Value 1970225921
Load Key1 end
Value 363174357
Value 363174357
You may want to take a look at FusionCache ⚡🦥, a library I recently released.
Features to use
The first interesting thing is that it provides an optimization for concurrent factory calls, so that only one call per key is executed, relieving the load on your data source: all concurrent callers for the same cache key at the same time are blocked and only one factory is executed.
Then you can specify timeouts for the factory so that it does not take too long: background factory completion is enabled by default, so even if the factory times out, it can keep running in the background and update the cache with the new value as soon as it finishes.
Then simply enable fail-safe to re-use the expired value in case of timeouts, or any problem really (the database is down, there are temporary network errors, etc.).
A practical example
You can cache something for, let's say, 2 minutes, after which a factory is called to refresh the data; but in case of problems (exceptions, timeouts, etc.) the expired value is used again until the factory is able to complete in the background, after which it updates the cache right away.
One more thing
Another interesting feature is support for an optional, distributed 2nd level cache, automatically managed and kept in sync with the local one for you without doing anything.
If you give it a chance, please let me know what you think.
/shameless-plug
It looks like you need to set your own ChangeToken for each cache entry by calling AddExpirationToken. Then in your implementation of IChangeToken.HasChanged you can have a simple timeout expiration, and right before it triggers, you can asynchronously fetch the new data and add it to the cache.
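A minimal sketch of that approach, using the built-in CancellationChangeToken (from Microsoft.Extensions.Primitives) instead of a hand-written IChangeToken; the idea is the same: the entry stays valid until you signal the token, so you can put fresh data in place first and invalidate afterwards. The "config" key is just illustrative:

```csharp
using System;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

class ChangeTokenDemo
{
    static void Main()
    {
        var cache = new MemoryCache(new MemoryCacheOptions());
        var cts = new CancellationTokenSource();

        // The entry's lifetime is tied to a token we control.
        cache.Set("config", "v1", new MemoryCacheEntryOptions()
            .AddExpirationToken(new CancellationChangeToken(cts.Token)));

        // ... later: load the fresh value, re-set the entry under a new token,
        // then cancel the old token so anything still bound to it expires.
        cache.Set("config", "v2", new MemoryCacheEntryOptions()
            .AddExpirationToken(new CancellationChangeToken(new CancellationTokenSource().Token)));
        cts.Cancel();

        Console.WriteLine(cache.Get("config")); // the refreshed value
    }
}
```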
I suggest using the CacheItemPriority.NeverRemove priority for cache items, and handling cache size and the update procedure with methods like MemoryCache.Compact, if that does not change your current design significantly.
You may find the page "Cache in-memory in ASP.NET Core" useful. Please see the sections:
- MemoryCache.Compact
- Additional notes (second item)
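A short sketch of that combination (Microsoft.Extensions.Caching.Memory assumed): NeverRemove entries survive a compaction, so Compact can clear everything else while your pinned items stay put. The key names are arbitrary:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

class CompactDemo
{
    static void Main()
    {
        var cache = new MemoryCache(new MemoryCacheOptions());

        cache.Set("keep", 1, new MemoryCacheEntryOptions
        {
            Priority = CacheItemPriority.NeverRemove
        });
        cache.Set("drop", 2);

        // Ask the cache to shed 100% of its entries; NeverRemove items survive.
        cache.Compact(1.0);

        Console.WriteLine(cache.TryGetValue("keep", out _)); // still present
        Console.WriteLine(cache.TryGetValue("drop", out _)); // removed
    }
}
```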

What is MemoryCache.AddOrGetExisting for?

The behaviour of MemoryCache.AddOrGetExisting is described as:
Adds a cache entry into the cache using the specified key and a value
and an absolute expiration value.
And that it returns:
If a cache entry with the same key exists, the existing cache entry; otherwise, null.
What is the purpose of a method with these semantics? What is an example of this?
There are often situations where you only want to create a cache entry if a matching entry doesn't already exist (that is, you don't want to overwrite an existing value).
AddOrGetExisting allows you to do this atomically. Without AddOrGetExisting it would be impossible to perform the get-test-set in an atomic, thread-safe manner. For example:
Thread 1                                          Thread 2
--------                                          --------
// check whether there's an existing
// entry for "foo"
// the call returns null - no match
Get("foo")
                                                  // check whether there's an existing
                                                  // entry for "foo"
                                                  // the call returns null - no match
                                                  Get("foo")
// set value for key "foo"
// assumes, rightly, that there's
// no existing entry
Set("foo", "first thread rulez")
                                                  // set value for key "foo"
                                                  // assumes, wrongly, that there's
                                                  // no existing entry
                                                  // overwrites the value just set by thread 1
                                                  Set("foo", "second thread rulez")
(See also the Interlocked.CompareExchange method, which enables a more sophisticated equivalent at the variable level, and also the wikipedia entries on test-and-set and compare-and-swap.)
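The same atomic get-test-set shows up elsewhere in the BCL; for a self-contained illustration, ConcurrentDictionary.GetOrAdd has equivalent semantics: the first value to arrive wins, and every caller observes that winner.

```csharp
using System;
using System.Collections.Concurrent;

public static class AtomicDemo
{
    public static void Main()
    {
        var store = new ConcurrentDictionary<string, string>();

        // Both "threads" race to initialize "foo"; only the first value is
        // stored, and both callers see the same winner - no lost update.
        string first = store.GetOrAdd("foo", "first thread rulez");
        string second = store.GetOrAdd("foo", "second thread rulez");

        Console.WriteLine(first);  // first thread rulez
        Console.WriteLine(second); // first thread rulez
    }
}
```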
LukeH's answer is correct. Because the other answers indicate that the method's semantics could be interpreted differently, I think it's worth pointing out that AddOrGetExisting in fact will not update existing cache entries.
So this code
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "one", new CacheItemPolicy()) ?? "(null)");
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "two", new CacheItemPolicy()));
Console.WriteLine(MemoryCache.Default.AddOrGetExisting("test", "three", new CacheItemPolicy()));
will print
(null)
one
one
Another thing to be aware of: when AddOrGetExisting finds an existing cache entry, it will not dispose of the CacheItemPolicy passed to the call. This may be problematic if you use custom change monitors that set up expensive resource-tracking mechanisms. Normally, when a cache entry is evicted, the cache system calls Dispose() on its ChangeMonitors, which gives you the opportunity to unregister events and the like. When AddOrGetExisting returns an existing entry, however, you have to take care of that yourself.
I haven't actually used this, but I guess one possible use case is if you want to unconditionally update the cache with a new entry for a particular key and you want to explicitly dispose of the old entry being returned.

How do I clear a System.Runtime.Caching.MemoryCache

I use a System.Runtime.Caching.MemoryCache to hold items which never expire. However, at times I need the ability to clear the entire cache. How do I do that?
I asked a similar question here concerning whether I could enumerate the cache, but that is a bad idea as it needs to be synchronised during enumeration.
I've tried using .Trim(100) but that doesn't work at all.
I've tried getting a list of all the keys via Linq, but then I'm back where I started because evicting items one-by-one can easily lead to race conditions.
I thought to store all the keys, and then issue a .Remove(key) for each one, but there is an implied race condition there too, so I'd need to lock access to the list of keys, and things get messy again.
I then thought that I should be able to call .Dispose() on the entire cache, but I'm not sure if this is the best approach, due to the way it's implemented.
Using ChangeMonitors is not an option for my design, and is unnecessarily complex for such a trivial requirement.
So, how do I completely clear the cache?
I was struggling with this at first. MemoryCache.Default.Trim(100) does not work (as discussed). Trim is a best-effort attempt: if there are 100 items in the cache and you call Trim(100), it removes the least-recently-used ones.
Trim returns the count of items removed, and most people expect it to remove all items.
This code removes all items from MemoryCache for me in my xUnit tests with MemoryCache.Default (the default region):
foreach (var element in MemoryCache.Default)
{
    MemoryCache.Default.Remove(element.Key);
}
You should not call Dispose on the Default member of the MemoryCache if you want to be able to use it afterwards:
The state of the cache is set to indicate that the cache is disposed.
Any attempt to call public caching methods that change the state of
the cache, such as methods that add, remove, or retrieve cache
entries, might cause unexpected behavior. For example, if you call the
Set method after the cache is disposed, a no-op error occurs. If you
attempt to retrieve items from the cache, the Get method will always
return Nothing.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.dispose.aspx
About the Trim, it's supposed to work:
The Trim property first removes entries that have exceeded either an absolute or sliding expiration. Any callbacks that are registered
for items that are removed will be passed a removed reason of Expired.
If removing expired entries is insufficient to reach the specified trim percentage, additional entries will be removed from the cache
based on a least-recently used (LRU) algorithm until the requested
trim percentage is reached.
But two other users reported that it doesn't work, on the same page, so I guess you are stuck with Remove(). http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.trim.aspx
Update
However, I see no mention of it being a singleton or otherwise unsafe to have multiple instances, so you should be able to overwrite your reference.
But if you need to free the memory from the Default instance, you will have to clear it manually or destroy it permanently via Dispose (rendering it unusable).
Based on your question, you could make your own singleton-imposing class that returns a MemoryCache you may internally dispose at will - being the nature of a cache :-)
Here's what I had made for something I was working on...
public void Flush()
{
    List<string> cacheKeys = MemoryCache.Default.Select(kvp => kvp.Key).ToList();
    foreach (string cacheKey in cacheKeys)
    {
        MemoryCache.Default.Remove(cacheKey);
    }
}
I know this is an old question, but the best option I've come across is to dispose the existing MemoryCache and create a new MemoryCache object.
https://stackoverflow.com/a/4183319/880642
That answer doesn't really provide the code to do this in a thread-safe way, but it can be achieved using Interlocked.Exchange:
var oldCache = Interlocked.Exchange(ref _existingCache, new MemoryCache("newCacheName"));
oldCache.Dispose();
This will swap the existing cache with a new one and allow you to safely call Dispose on the original cache. This avoids needing to enumerate the items in the cache and race conditions caused by disposing a cache while it is in use.
Edit
Here's how I use it in practice, accounting for DI:
public class CustomCacheProvider : ICustomCacheProvider
{
    private IMemoryCache _internalCache;
    private readonly ICacheFactory _cacheFactory;

    public CustomCacheProvider(ICacheFactory cacheFactory)
    {
        _cacheFactory = cacheFactory;
        _internalCache = _cacheFactory.CreateInstance();
    }

    public void Set(string key, object item, MemoryCacheEntryOptions policy)
    {
        _internalCache.Set(key, item, policy);
    }

    public object Get(string key)
    {
        return _internalCache.Get(key);
    }

    // other methods omitted for brevity

    public void Dispose()
    {
        _internalCache?.Dispose();
    }

    public void EmptyCache()
    {
        var oldCache = Interlocked.Exchange(ref _internalCache, _cacheFactory.CreateInstance());
        oldCache.Dispose();
    }
}
The key is controlling access to the internal cache using another singleton which has the ability to create new cache instances using a factory (or manually if you prefer).
The details in #stefan's answer explain the principle; here's how I'd do it.
One should synchronise access to the cache whilst recreating it, to avoid the race condition of client code accessing the cache after it is disposed, but before it is recreated.
To avoid this synchronisation, do this in your adapter class (which wraps the MemoryCache):
public void ClearCache()
{
    var oldCache = TheCache;
    TheCache = new MemoryCache("NewCacheName", ...);
    oldCache.Dispose();
    GC.Collect();
}
This way, TheCache is always in a non-disposed state, and no synchronisation is needed.
I ran into this problem too. .Dispose() did something quite different from what I expected.
Instead, I added a static field to my controller class. I did not use the default cache, to get around this behavior, but created a private one (if you want to call it that). So my implementation looked a bit like this:
public class MyController : Controller
{
    static MemoryCache s_cache = new MemoryCache("myCache");

    public ActionResult Index()
    {
        if (conditionThatInvalidatesCache)
        {
            s_cache = new MemoryCache("myCache");
        }
        String s = s_cache["key"] as String;
        if (s == null)
        {
            // do work
            // add to s_cache["key"]
        }
        // do whatever next
    }
}
Check out this post, and specifically, the answer that Thomas F. Abraham posted.
It has a solution that enables you to clear the entire cache or a named subset.
The key thing here is:
// Cache objects are obligated to remove entry upon change notification.
base.OnChanged(null);
I've implemented this myself, and everything seems to work just fine.

Static Dictionary<T,T> sharing information accross sessions. How to stop this?

In my web project I am using a static List. So suppose I have two users (A and B) logged in to my website at the same time; the List will store some information about A as well as B. But when I process B's List records, A's records get processed instead of B's.
Can somebody point out the problem, please? Also, please suggest some possible solutions to avoid it.
I am using ASP.NET C# 3.5.
Thank you in advance :)
Edit:
Now I have changed the data type from Dictionary to List, but I still have the same problem...
A static variable is one that is the same for all instances of a particular class. So this means your website uses the exact same dictionary for users A, B, C, D, etc. Essentially, whichever user last writes to the dictionary is the one whose content you will see, regardless of which user is looking.
As others have suggested, you can use Session variables. These are stored in the server's memory and are tied to a specific browser session (i.e. user).
Personally, I prefer to use the ViewState, as this is stored in the response to the browser and returned to you whenever the page makes a postback. This means less memory usage on the server, and the ViewState is not volatile across sessions (or subject to application resets like Session). However, it also means that whatever you are storing is sent across the wire, so you would never want to store things like credit card numbers, SSNs, etc.
It also means that if you're storing a lot of very large objects, you're going to have a very large response and postback (possibly slowing the cycle), so you have to be more careful about how and what you store.
So that's a few different options for you, you should do the research and decide which is best for your requirements.
Storing values in session is like:
Session["mykey"] = value;
And reading values from session is like:
Object value = Session["mykey"];
The session will time out after a couple of minutes and the value would then be null.
To avoid this consider using:
Viewstate["mykey"] = value;
Viewstate is used exactly like session except that the value has to be serializable.
The ViewState is sent to the client and back again, so consider the amount of data that you want to store this way. The ViewState is stored in the "__VIEWSTATE" hidden field and encoded in base64.
Don't use a static dictionary. Use Session variables to store information about a user's session.
To give you better guidance, you will have to provide us with a bit more information: what records? In what format? Etc.
When a variable is declared static, that means there is exactly 1 instance of that variable per class (this also includes classes in web applications).
Thus, a static variable is not tied to a user. If you need to store data specific to a user, consider using a Session variable.
You could store the dictionary in the Session, that way each user would have their own and the code wouldn't have access to others.
Or if you sometimes want to be able to access other users' info, you could declare it as
static Dictionary<String, Dictionary<T, T>>
where the string key is the unique user id.
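A minimal sketch of that outer-dictionary shape (the user ids and keys here are made up; in a real app the id would come from the authenticated session):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class PerUserStore
{
    // One inner dictionary per user; the outer key is the unique user id.
    // (Session state does this partitioning for you - this is the manual version.)
    static readonly ConcurrentDictionary<string, Dictionary<string, string>> Store =
        new ConcurrentDictionary<string, Dictionary<string, string>>();

    public static Dictionary<string, string> For(string userId)
    {
        return Store.GetOrAdd(userId, _ => new Dictionary<string, string>());
    }

    public static void Main()
    {
        For("userA")["cart"] = "book";
        For("userB")["cart"] = "pen";
        Console.WriteLine(For("userA")["cart"]); // each user sees only their own data
    }
}
```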

C# Collection whose items expire

I am writing a Console Application in C# in which I want to cache certain items for a predefined time (let's say 1 hour). I want items that have been added into this cache to be automatically removed after they expire. Is there a built-in data structure that I can use? Remember this is a Console App not a web app.
Do you actually need them removed from the cache at that time? Or just that future requests to the cache for that item should return null after a given time?
To do the former, you would need some sort of background thread periodically purging the cache. This would only be needed if you were worried about memory consumption or something. If you just want the data to expire, that is easy to do.
It is trivial to create such a class.
class CachedObject<TValue>
{
    public DateTime Date { get; set; }
    public TimeSpan Duration { get; set; }
    public TValue Cached { get; set; }
}

class Cache<TKey, TValue> : Dictionary<TKey, CachedObject<TValue>>
{
    public new TValue this[TKey key]
    {
        get
        {
            if (ContainsKey(key))
            {
                var val = base[key];
                if (DateTime.UtcNow - val.Date > val.Duration)
                {
                    // expired: remove from cache, return the default
                    Remove(key);
                    return default(TValue);
                }
                // else return the cached item
                return val.Cached;
            }
            return default(TValue);
        }
        set
        {
            // create a new CachedObject, set date and timespan, add to dictionary
            base[key] = new CachedObject<TValue>
            {
                Date = DateTime.UtcNow,
                Duration = TimeSpan.FromHours(1),
                Cached = value
            };
        }
    }
}
It's already in the BCL; it's just not where you expect to find it: you can use System.Web.Caching from other kinds of applications too, not only in ASP.NET.
This search on Google links to several resources about this.
I don't know of any objects in the BCL which do this, but I have written similar things before.
You can do this fairly easily by including a System.Threading.Timer inside your caching class (no web/winforms dependencies) and storing an expiration (or last-used) time on your objects. Just have the timer check every few minutes and remove the objects you want to expire.
However, be watchful of events on your objects. I had a system like this and was not careful to unsubscribe from events on objects in the cache, which caused a subtle but nasty memory leak over time. This can be very tricky to debug.
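A sketch of that timer-based approach; all names here are illustrative, not from any library. Reads treat expired entries as misses immediately, and the timer sweep is what actually reclaims the memory:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class ExpiringCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Tuple<TValue, DateTime>> _items =
        new ConcurrentDictionary<TKey, Tuple<TValue, DateTime>>();
    private readonly TimeSpan _ttl;
    private readonly Timer _timer;

    public ExpiringCache(TimeSpan ttl, TimeSpan sweepInterval)
    {
        _ttl = ttl;
        // Periodic sweep removes expired entries so memory is actually reclaimed.
        _timer = new Timer(_ => Sweep(), null, sweepInterval, sweepInterval);
    }

    public void Set(TKey key, TValue value)
    {
        _items[key] = Tuple.Create(value, DateTime.UtcNow + _ttl);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        Tuple<TValue, DateTime> entry;
        // Expired entries count as misses even before the sweep runs.
        if (_items.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
        {
            value = entry.Item1;
            return true;
        }
        value = default(TValue);
        return false;
    }

    private void Sweep()
    {
        foreach (var kvp in _items)
        {
            Tuple<TValue, DateTime> removed;
            if (kvp.Value.Item2 <= DateTime.UtcNow)
                _items.TryRemove(kvp.Key, out removed);
        }
    }
}
```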
Include an ExpirationDate property in the object that you will be caching (probably a wrapper around your real object) and set it to expire in an hour in its constructor. Instead of removing items from the collection, access the collection through a method that filters out the expired items. Or create a custom collection that does this automatically. If you need to actually remove items from the cache, your custom collection could instead purge expired items on every call to one of its members.
