I am trying to migrate my .NET Framework application to .NET Core, and as part of this process I want to move my in-memory caching from System.Runtime.Caching/MemoryCache to Microsoft.Extensions.Caching.Memory/IMemoryCache. But I have one problem with IMemoryCache: I could not find a way to refresh a cache entry before it is removed/evicted.
With System.Runtime.Caching/MemoryCache, there is an UpdateCallback property on CacheItemPolicy to which I can assign a callback delegate, and this function is called on a separate thread just before the cached object is evicted. Even if the callback takes a long time to fetch fresh data, MemoryCache continues to serve the old data beyond its expiry deadline, which ensures my code need not wait for data while the cache is being refreshed.
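For reference, this is the kind of setup I mean (a minimal sketch; LoadFromSource stands in for my real, slow data fetch):

using System;
using System.Runtime.Caching;

CacheItemPolicy MakePolicy()
{
    return new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10),
        // Runs on a separate thread just before eviction; the stale value
        // keeps being served while this executes.
        UpdateCallback = args =>
        {
            var fresh = LoadFromSource(args.Key);
            args.UpdatedCacheItem = new CacheItem(args.Key, fresh);
            args.UpdatedCacheItemPolicy = MakePolicy(); // re-arm for the next cycle
        }
    };
}

MemoryCache.Default.Set("my-data", LoadFromSource("my-data"), MakePolicy());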
But I don't see such functionality in Microsoft.Extensions.Caching.Memory/IMemoryCache. There is the PostEvictionCallbacks property and the RegisterPostEvictionCallback extension method on MemoryCacheEntryOptions, but both of these fire only after the cache entry has been evicted from the cache. So if the callback takes a long time, all requests for this data have to wait.
Is there any solution?
That's because there is no eviction, and, I would argue, that makes IMemoryCache not a cache:
"The ASP.NET Core runtime doesn't trim the cache when system memory is low."
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/memory?view=aspnetcore-5.0#use-setsize-size-and-sizelimit-to-limit-cache-size
"If SizeLimit isn't set, the cache grows without bound."
"The cache size limit does not have a defined unit of measure because the cache has no mechanism to measure the size of entries."
"An entry will not be cached if the sum of the cached entry sizes exceeds the value specified by SizeLimit."
So, not only does IMemoryCache fail to do the most basic thing you'd expect from a cache - respond to memory pressure by evicting the oldest entries - it also lacks the insert logic you'd expect. Adding a fresh item to a full "cache" doesn't evict an older entry; it refuses to insert the new item.
I argue this is just an unfortunate Dictionary, and not a cache at all. The cake/class is a lie.
To get this to actually work like a cache, you'd need to write a wrapper class that does measure memory size, plus system code that interacts with the wrapper and periodically evicts entries (via .Remove()) in response to memory pressure and expiration. You know - most of the work of implementing a cache.
So, the reason you couldn't find a way to update before eviction is that, by default, there isn't any eviction; and if you've implemented your own eviction scheme, you've already written so much of an actual cache - what's writing a bit more?
You can use a trick here: in RegisterPostEvictionCallback, add the old value back into the cache before looking up the new value. That way, if the lookup takes a long time, the old value is still available in the cache.
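Here is a rough sketch of that trick (LoadFreshValue is a hypothetical loader; the options are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

void SetWithStaleFallback(IMemoryCache cache, string key, object value, TimeSpan ttl)
{
    var options = new MemoryCacheEntryOptions { AbsoluteExpirationRelativeToNow = ttl };
    options.RegisterPostEvictionCallback((k, v, reason, state) =>
    {
        if (reason != EvictionReason.Expired)
            return;
        // Put the stale value straight back so readers don't see a miss...
        SetWithStaleFallback(cache, (string)k, v, ttl);
        // ...then refresh in the background and overwrite when done.
        Task.Run(() =>
        {
            var fresh = LoadFreshValue((string)k); // hypothetical loader
            SetWithStaleFallback(cache, (string)k, fresh, ttl);
        });
    });
    cache.Set(key, value, options);
}

One caveat: these eviction callbacks fire lazily (typically when later cache activity notices the expired entry), so there can still be a brief window where the entry is missing.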
I had the same need, so I wrote this class:
using System;
using System.Collections.Concurrent;

public abstract class AutoRefreshCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> _entries = new ConcurrentDictionary<TKey, TValue>();
    private readonly System.Timers.Timer _timer; // field keeps the timer alive (a local could be garbage collected)

    protected AutoRefreshCache(TimeSpan interval)
    {
        _timer = new System.Timers.Timer();
        _timer.Interval = interval.TotalMilliseconds;
        _timer.AutoReset = true;
        _timer.Elapsed += (o, e) =>
        {
            // Pause the timer while refreshing so refreshes never overlap
            ((System.Timers.Timer)o).Stop();
            RefreshAll();
            ((System.Timers.Timer)o).Start();
        };
        _timer.Start();
    }

    public TValue Get(TKey key)
    {
        // Only the very first reader of a key blocks to load it
        return _entries.GetOrAdd(key, k => Load(k));
    }

    public void RefreshAll()
    {
        var keys = _entries.Keys;
        foreach (var key in keys)
        {
            // Replace each value in place; readers keep getting the old value until Load completes
            _entries.AddOrUpdate(key, k => Load(k), (k, v) => Load(k));
        }
    }

    protected abstract TValue Load(TKey key);
}
Values aren't evicted, just refreshed. Only the first Get for a key waits to load the value; during a refresh, Get returns the previous value without waiting.
Example of use:
class Program
{
    static void Main(string[] args)
    {
        var cache = new MyCache();
        while (true)
        {
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            Console.WriteLine(cache.Get("Key1") ?? "<null>");
        }
    }
}

public class MyCache : AutoRefreshCache<string, string>
{
    public MyCache()
        : base(TimeSpan.FromSeconds(5))
    { }

    readonly Random random = new Random();

    protected override string Load(string key)
    {
        Console.WriteLine($"Load {key} begin");
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(3));
        Console.WriteLine($"Load {key} end");
        return "Value " + random.Next();
    }
}

Result:
Load Key1 begin
Load Key1 end
Value 1648258406
Load Key1 begin
Value 1648258406
Value 1648258406
Value 1648258406
Load Key1 end
Value 1970225921
Value 1970225921
Value 1970225921
Value 1970225921
Value 1970225921
Load Key1 begin
Value 1970225921
Value 1970225921
Value 1970225921
Load Key1 end
Value 363174357
Value 363174357
You may want to take a look at FusionCache ⚡🦥, a library I recently released.
Features to use
The first interesting thing is that it provides an optimization for concurrent factory calls, so that only one factory call per key will be executed, relieving the load on your data source: basically, all concurrent callers for the same cache key at the same time will be blocked, and only one factory will be executed.
Then you can specify timeouts for the factory, so that it does not take too much time: background factory completion is enabled by default, so that even if the factory times out it can keep running in the background and update the cache with the new value as soon as it finishes.
Then simply enable fail-safe to re-use the expired value in case of timeouts, or any problem really (the database is down, there are temporary network errors, etc).
A practical example
You can cache something for, let's say, 2 minutes, after which a factory will be called to refresh the data; but in case of problems (exceptions, timeouts, etc.) that expired value will be used again until the factory is able to complete in the background, after which it will update the cache right away.
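In code, that scenario looks roughly like this (a sketch based on the library's GetOrSet API; Product and GetProductFromDb are placeholders, so check the docs for the exact signatures):

using System;
using ZiggyCreatures.Caching.Fusion;

var cache = new FusionCache(new FusionCacheOptions());

var product = cache.GetOrSet<Product>(
    "product:42",
    _ => GetProductFromDb(42), // executed once per key, even with concurrent callers
    options => options
        .SetDuration(TimeSpan.FromMinutes(2))        // normal cache duration
        .SetFailSafe(true)                           // re-use the expired value on errors
        .SetFactoryTimeouts(TimeSpan.FromSeconds(1)) // stop waiting after 1s; factory finishes in background
);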
One more thing
Another interesting feature is support for an optional, distributed 2nd level cache, automatically managed and kept in sync with the local one, without you having to do anything.
If you give it a chance, please let me know what you think.
/shameless-plug
It looks like you need to set your own ChangeToken for each cache entry by calling AddExpirationToken. Then, in your implementation of IChangeToken.HasChanged, you can have a simple timeout expiration, and right before it triggers you can asynchronously fetch new data and add it to the cache.
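A minimal sketch of such a token (the type name is mine, and the background refresh wiring is left out):

using System;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

class TimedExpirationToken : IChangeToken
{
    private readonly DateTime _expiresAt;

    public TimedExpirationToken(TimeSpan lifetime) =>
        _expiresAt = DateTime.UtcNow + lifetime;

    // Polled by the cache; flips to true once the timeout elapses
    public bool HasChanged => DateTime.UtcNow >= _expiresAt;

    // We rely on polling instead of pushing change notifications
    public bool ActiveChangeCallbacks => false;

    public IDisposable RegisterChangeCallback(Action<object> callback, object state) =>
        EmptyDisposable.Instance;

    private sealed class EmptyDisposable : IDisposable
    {
        public static readonly EmptyDisposable Instance = new EmptyDisposable();
        public void Dispose() { }
    }
}

// Attach it to an entry:
// var options = new MemoryCacheEntryOptions();
// options.AddExpirationToken(new TimedExpirationToken(TimeSpan.FromMinutes(5)));
// cache.Set("key", value, options);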
I suggest using the "NeverRemove" priority for cache items and handling cache size and updates yourself via methods like MemoryCache.Compact, if that does not change your current design significantly.
You may find the page "Cache in-memory in ASP.NET Core" useful; in particular, see the sections:
- MemoryCache.Compact
- Additional notes: second item
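A sketch of what that combination might look like (LoadCriticalData is a placeholder; note that Compact is defined on the concrete MemoryCache class, not on IMemoryCache):

using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 1024 });

cache.Set("critical-key", LoadCriticalData(), new MemoryCacheEntryOptions
{
    Priority = CacheItemPriority.NeverRemove, // skipped by Compact and size-based eviction
    Size = 1                                  // every entry needs a Size once SizeLimit is set
});

// Call this yourself (e.g. from a timer) to remove 25% of the removable entries:
cache.Compact(0.25);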
I have this static class
static class LocationMemoryCache
{
    public static readonly ConcurrentDictionary<int, LocationCityContract> LocationCities = new();
}
My process
Api starts and initializes an empty dictionary
A background job starts and runs once every day to reload the dictionary from the database
Requests come in to read from the dictionary or update a specific city in the dictionary
My problem
If a request comes in to update the city
I update the database
If the update was successful, update the city object in the dictionary
At the same time, the background job started and queried all cities before I updated the specific city
The request finishes and the dictionary city now has the old values because the background job finished last
My solution I thought about first
Is there a way to lock/reserve the concurrent dictionary from reads/writes and then release it when I am done?
This way when the background job starts, it can lock/reserve the dictionary only for itself and when it's done it will release it for other requests to be used.
Then a request might have been waiting for the dictionary to be released and update it with the latest values.
Any ideas on other possible solutions?
Edit
What is the purpose of the background job?
If I manually update/delete something in the database I want those changes to show up after the background job runs again. This could take a day for the changes to show up and I am okay with that.
What happens when the API wants to access the cache but it's not loaded?
When the API starts, I block requests to this particular "Location" project until the background job marks IsReady as true. The cache I implemented is thread-safe until I add the background job.
How much time does it take to reload the cache?
I would say less than 10 seconds for a total of 310,000+ records in the "Location" project.
Why I chose the answer
I chose Xerillio's answer because it solves the background-job problem by keeping track of date-times, similar to an "object version" approach. I won't be taking this path, as I have decided that if I do a manual update in the database, I might as well create an API route that does it for me, so that I can update the DB and cache at the same time. So I might remove the background job after all, or just run it once a week. Thank you for all the answers. I am OK with a possible data inconsistency in the way I am updating the objects, because if one route updates 2 specific values and another route updates 2 different specific values, the possibility of having a problem is very minimal.
Edit 2
Let's imagine I have this cache now and 10,000 active users
static class LocationMemoryCache
{
    public static readonly ConcurrentDictionary<int, LocationCityUserLogContract> LocationCityUserLogs = new();
}
Things I took into consideration
An update will only happen to objects that the user owns, and the rate at which a user might update those objects is most likely once every minute. So that greatly reduces the possibility of a problem for this specific example.
Most of my cache objects are related only to a specific user, so this ties in with point 1.
The application owns the data, I don't. So I should never manually update the database unless it's critical.
Memory might be a problem, but 1,000,000 normal-ish objects take somewhere between 80 MB and 150 MB. I can keep a lot of objects in memory to gain performance and reduce the load on the database.
Having a lot of objects in memory will put pressure on garbage collection, and that is not good, but I don't think it's bad at all in my case, because garbage collection only runs when memory gets low, and all I have to do is plan ahead to make sure there is enough memory. Yes, it will run because of day-to-day operations, but it won't have a big impact.
All of these considerations, just so that I can have an in-memory cache right at my fingertips.
I would suggest adding an UpdatedAt/CreatedAt property to your LocationCityContract or creating a wrapper object (CacheItem<LocationCityContract>) with such a property. That way you can check whether the item you're about to add/update with is newer than the existing object, like so:
public class CacheItem<T>
{
    public T Item { get; }
    public DateTime CreatedAt { get; }

    // In case of system clock synchronization issues, consider making CreatedAt
    // a long and using Environment.TickCount64. See comment from #Theodor.
    public CacheItem(T item, DateTime? createdAt = null)
    {
        Item = item;
        CreatedAt = createdAt ?? DateTime.UtcNow;
    }
}

// Use it like...
static class LocationMemoryCache
{
    public static readonly
        ConcurrentDictionary<int, CacheItem<LocationCityContract>> LocationCities = new();
}

// From some request...
var newItem = new CacheItem<LocationCityContract>(newLocation);
// or the background job...
var newItem = new CacheItem<LocationCityContract>(newLocation, updateStart);

LocationMemoryCache.LocationCities
    .AddOrUpdate(
        newLocation.Id,
        newItem,
        (_, existingItem) =>
            newItem.CreatedAt > existingItem.CreatedAt
                ? newItem
                : existingItem);
When a request wants to update the cache entry they do as above with the timestamp of whenever they finished adding the item to the database (see notes below).
The background job should, as soon as it starts, save a timestamp (let's call it updateStart). It then reads everything from the database and adds the items to the cache like above, where CreatedAt for the newLocation is set to updateStart. This way, the background job only updates the cache items that haven't been updated since it started. Perhaps you're not reading all items from DB as the first thing in the background job, but instead you read them one at a time and update the cache accordingly. In that case updateStart should instead be set right before reading each value (we could call it itemReadStart instead).
Since the way of updating the item in the cache is a little more cumbersome and you might be doing it from a lot of places, you could make a helper method to make the call to LocationCities.AddOrUpdate a little easier.
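Such a helper might look like this (a sketch; the method name is mine):

public static void AddOrUpdateIfNewer(int id, CacheItem<LocationCityContract> newItem)
{
    LocationMemoryCache.LocationCities.AddOrUpdate(
        id,
        newItem,
        (_, existingItem) =>
            newItem.CreatedAt > existingItem.CreatedAt ? newItem : existingItem);
}

// Callers then just do:
// AddOrUpdateIfNewer(newLocation.Id, new CacheItem<LocationCityContract>(newLocation));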
Note:
Since this approach does not synchronize (lock) updates to the database, there's a race condition that means you might end up with a slightly out-of-date item in the cache. This can happen if two requests want to update the same item simultaneously. You can't know for sure which one updated the DB last, so even if you set CreatedAt to the timestamp after updating each, it might not truly reflect which one was updated last. Since you're OK with a 24-hour delay from manually updating the DB until the background job updates the cache, perhaps this race condition is not a problem for you, as the background job will fix it when run.
As #Theodor mentioned in the comments, you should avoid updating the object from the cache directly. Either use the C# 9 record type (as opposed to a class type) or clone the object if you want to cache new updates. That means, don't use LocationMemoryCache.LocationCities[locationId].Item.CityName = updatedName. Instead you should e.g. clone it like:

// You need to implement a constructor or similar to clone the object,
// depending on how complex it is
var newLoc = new LocationCityContract(LocationMemoryCache.LocationCities[locationId].Item);
newLoc.CityName = updatedName;
var newItem = new CacheItem<LocationCityContract>(newLoc);
LocationMemoryCache.LocationCities
    .AddOrUpdate(...); /* <- like above */
By not locking the whole dictionary, you avoid requests being blocked by each other because they're trying to update the cache at the same time. If the first point is not acceptable, you can also introduce locking based on the location ID (or whatever you call it) when updating the database, so that DB and cache are updated atomically. This avoids blocking requests that are trying to update other locations, so you minimize the risk of requests affecting each other.
No, there is no way to lock a ConcurrentDictionary on demand from reads/writes, and then release it when you are done. This class does not offer this functionality. You could manually use a lock every time you are accessing the ConcurrentDictionary, but by doing so you would lose all the advantages that this specialized class has to offer (low contention under heavy usage), while keeping all its disadvantages (awkward API, overhead, allocations).
My suggestion is to use a normal Dictionary protected with a lock. This is a pessimistic approach that will occasionally result in some threads being unnecessarily blocked, but it is also very simple, and it is easy to reason about its correctness. Essentially all access to the dictionary and the database will be serialized (a sketch follows the list below):
Every time a thread wants to read an object stored in the dictionary, it will first have to take the lock, and keep the lock until it's done reading the object.
Every time a thread wants to update the database and then the corresponding object, it will first have to take the lock (before even updating the database), and keep the lock until all the properties of the object have been updated.
Every time the background job wants to replace the current dictionary with a new dictionary, it will first have to take the lock (before even querying the database), and keep the lock until the new dictionary has taken the place of the old one.
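A minimal sketch of this scheme, reusing the names from the question (the database calls are hypothetical placeholders):

using System.Collections.Generic;

static class LocationMemoryCache
{
    private static readonly object Locker = new object();
    private static Dictionary<int, LocationCityContract> _cities =
        new Dictionary<int, LocationCityContract>();

    public static LocationCityContract GetCity(int id)
    {
        lock (Locker)
        {
            return _cities.TryGetValue(id, out var city) ? city : null;
        }
    }

    public static void UpdateCity(LocationCityContract city)
    {
        lock (Locker)
        {
            UpdateCityInDatabase(city); // placeholder: the DB update happens under the lock
            _cities[city.Id] = city;
        }
    }

    public static void ReloadAll() // called by the background job
    {
        lock (Locker)
        {
            _cities = ReadAllCitiesFromDatabase(); // placeholder: replaces the dictionary wholesale
        }
    }
}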
In case the performance of this simple approach proves to be unacceptable, you should look at more sophisticated solutions. But the complexity gap between this solution and the next simplest solution (that also offers guaranteed correctness) is likely to be quite significant, so you'd better have good reasons before going that route.
I'm using Kentico 9 and trying to test caching. I would like to ask how to replace an existing cached value when a new value is entered.
Recently I was trying to cache with this code:
CacheHelper.Cache(cs => getCachingValue(cs, cacheValue), new CacheSettings(10, "cacheValue"));

public string getCachingValue(CacheSettings cs, string result)
{
    string cacheValue = result;
    if (cs.Cached)
    {
        cs.CacheDependency = CacheHelper.GetCacheDependency("cacheValue");
    }
    return cacheValue;
}
When caching data you need to set up correct cache dependencies. For example, this is a cache dependency for all users:

if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}

This will drop the cache whenever a user has been updated or created. So the next time you call the method, it will get the data from the database and cache it again, until the cache expires or someone adds/updates a user.
So you don't need to take care of replacing/updating cached data - the appropriate mechanism is already there.
See cache dependencies in the documentation.
Since your cache dependency is called "cacheValue", you need to "touch" that particular cache key, to force the cache to clear.
When the value you are caching changes (the value you provide to the string result parameter of the getCachingValue method), call the CacheHelper.TouchKey method to force the cache to clear:
CacheHelper.TouchKey("cacheValue");
(You should also consider changing the name of the cache key, to prevent confusion)
Keep in mind that if your cache key is "cacheValue", then any call made to this will always be the same 'hit'. The CacheSettings key is its 'unique identifier', you could say, and the cache dependency is how it automatically resets.
So for example, say you cache a function that adds two values (you wouldn't really need to cache this, but it makes a good example where the input changes).
If you cache your "AddTwoValues(int a, int b)" call like this:
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, "cacheValue"));
The first call will cache the value of the call (say you pass it 1 and 2), so it caches "3" for the key "cacheValue".
On the second call, if you pass it 3 and 5, the cache key is still "cacheValue", so it will assume it's the same call as the first and return 3, without even trying to add 3 + 5.
I usually append any parameters to the cache key.
CacheHelper.Cache(cs => AddTwoValuesHelper(cs, a, b), new CacheSettings(10, string.Format("AddTwoValues|{0}|{1}", a, b)));
This way, if I call it with 1 and 2 twice, the first time it will process and cache "3" for the key "AddTwoValues|1|2", and when called again the key will match, so it will just return the cached value.
If you call with different parameters, then the cache key will be different.
Make sense?
The other answers, of course, talk about the cache dependency in the helper function:

if (cs.Cached)
{
    cs.CacheDependency = CacheHelper.GetCacheDependency("cms.user|all");
}

which identifies how it automatically clears (if you use cms.user|all as the dependency, then whenever a user is changed this cache automatically clears itself).
I have a web method inside a web service that calls another web service to get data, fills a generic list, and then returns it. What I want to do is save the list in memory, so that the next time the web method is invoked it does not hit the other web service but just returns the list. I have tried, but when I invoke the web method for the second time the list count shows as 0; it looks like garbage collection is cleaning everything up. Any suggestions?
Store it in the ASP.NET cache. Setting an absolute expiration of midnight should assure that you only get it once per day (unless it gets tossed from the cache due to space issues).
[WebMethod]
public List<Foo> GetFoos()
{
    var foos = Cache["FooList"] as List<Foo>;
    if (foos == null)
    {
        // ... get foos from remote web service ...

        // Expire at 7am; if it's already past 7am today, expire at 7am tomorrow
        var expiration = DateTime.Today.AddHours(7);
        if (DateTime.Now >= expiration)
        {
            expiration = expiration.AddDays(1);
        }
        Cache.Insert("FooList", foos, null, expiration, Cache.NoSlidingExpiration);
    }
    return foos;
}
Note: you could also use output caching, but you're limited to a sliding expiration; that is, it will be cached for a duration based on when the request occurs. It's not clear that's what you want. For example, if the first request occurs at 11pm with a 24-hour duration, you wouldn't check again until 11pm the next day. If you have data changing on a daily basis, you're better off using the ASP.NET cache in conjunction with output caching on a shorter duration, to ensure that you get the latest daily data in a timely fashion.
Updated example based on comments.
It sounds to me like your list might either not be static, or it might constantly be new'd within a non-static constructor. There are three possible fixes for this:
Make sure that your generic list is a static property which only gets initialised within a static constructor (see the sketch after this list).
Seeing your time requirements, I would also suggest looking into MemoryCache or Cache.
Use the WebMethod attribute and set a CacheDuration (i.e: [WebMethod(CacheDuration=86400)])
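For the first fix, a minimal sketch (Foo and the class name are placeholders):

using System.Collections.Generic;

public static class FooListCache
{
    // Initialized exactly once, when the type is first used,
    // and kept for the lifetime of the application domain
    public static List<Foo> Foos { get; private set; }

    static FooListCache()
    {
        Foos = new List<Foo>();
    }
}

Bear in mind the list still disappears whenever the application domain recycles, which may well be why your count drops to 0.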
I have not tried this on a web service, but I think output caching would work.
[WebMethod(CacheDuration = 86400)]
public string FunctionName(string Name)
{
    // ...code...
    return (sb.ToString());
}
Read: How to perform output caching with Web services in Visual C# .NET
I use a System.Runtime.Caching.MemoryCache to hold items which never expire. However, at times I need the ability to clear the entire cache. How do I do that?
I asked a similar question here concerning whether I could enumerate the cache, but that is a bad idea as it needs to be synchronised during enumeration.
I've tried using .Trim(100) but that doesn't work at all.
I've tried getting a list of all the keys via Linq, but then I'm back where I started because evicting items one-by-one can easily lead to race conditions.
I thought to store all the keys, and then issue a .Remove(key) for each one, but there is an implied race condition there too, so I'd need to lock access to the list of keys, and things get messy again.
I then thought that I should be able to call .Dispose() on the entire cache, but I'm not sure if this is the best approach, due to the way it's implemented.
Using ChangeMonitors is not an option for my design, and is unnecessarily complex for such a trivial requirement.
So, how do I completely clear the cache?
I was struggling with this at first. MemoryCache.Default.Trim(100) does not work (as discussed). Trim is a best-effort attempt, so if there are 100 items in the cache and you call Trim(100), it removes the least-recently-used ones.
Trim returns the count of items removed, and most people expect it to remove all items.
This code removes all items from MemoryCache for me in my xUnit tests with MemoryCache.Default. MemoryCache.Default is the default Region.
foreach (var element in MemoryCache.Default)
{
    MemoryCache.Default.Remove(element.Key);
}
You should not call dispose on the Default member of the MemoryCache if you want to be able to use it anymore:
The state of the cache is set to indicate that the cache is disposed. Any attempt to call public caching methods that change the state of the cache, such as methods that add, remove, or retrieve cache entries, might cause unexpected behavior. For example, if you call the Set method after the cache is disposed, a no-op error occurs. If you attempt to retrieve items from the cache, the Get method will always return Nothing.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.dispose.aspx
About the Trim, it's supposed to work:
The Trim property first removes entries that have exceeded either an absolute or sliding expiration. Any callbacks that are registered for items that are removed will be passed a removed reason of Expired. If removing expired entries is insufficient to reach the specified trim percentage, additional entries will be removed from the cache based on a least-recently used (LRU) algorithm until the requested trim percentage is reached.
But two other users reported on the same page that it doesn't work, so I guess you are stuck with Remove(): http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.trim.aspx
Update
However, I see no mention of it being a singleton or otherwise unsafe to have multiple instances, so you should be able to overwrite your reference.
But if you need to free the memory from the Default instance, you will have to clear it manually or destroy it permanently via Dispose (rendering it unusable).
Based on your question, you could make your own singleton-imposing class that returns a MemoryCache you may internally dispose at will... being the nature of a cache :-)
Here's what I made for something I was working on...

public void Flush()
{
    // Snapshot the keys first, then remove them one by one
    List<string> cacheKeys = MemoryCache.Default.Select(kvp => kvp.Key).ToList();
    foreach (string cacheKey in cacheKeys)
    {
        MemoryCache.Default.Remove(cacheKey);
    }
}
I know this is an old question, but the best option I've come across is to dispose the existing MemoryCache and create a new MemoryCache object.
https://stackoverflow.com/a/4183319/880642
The answer doesn't really provide the code to do this in a thread-safe way, but it can be achieved using Interlocked.Exchange:
var oldCache = Interlocked.Exchange(ref _existingCache, new MemoryCache("newCacheName"));
oldCache.Dispose();
This will swap the existing cache with a new one and allow you to safely call Dispose on the original cache. This avoids needing to enumerate the items in the cache and race conditions caused by disposing a cache while it is in use.
Edit
Here's how I use it in practice accounting for DI
public class CustomCacheProvider : ICustomCacheProvider
{
    private IMemoryCache _internalCache;
    private readonly ICacheFactory _cacheFactory;

    public CustomCacheProvider(ICacheFactory cacheFactory)
    {
        _cacheFactory = cacheFactory;
        _internalCache = _cacheFactory.CreateInstance();
    }

    public void Set(string key, object item, MemoryCacheEntryOptions policy)
    {
        _internalCache.Set(key, item, policy);
    }

    public object Get(string key)
    {
        return _internalCache.Get(key);
    }

    // other methods omitted for brevity

    public void Dispose()
    {
        _internalCache?.Dispose();
    }

    public void EmptyCache()
    {
        // Atomically swap in a fresh cache, then dispose the old one
        var oldCache = Interlocked.Exchange(ref _internalCache, _cacheFactory.CreateInstance());
        oldCache.Dispose();
    }
}
The key is controlling access to the internal cache using another singleton which has the ability to create new cache instances using a factory (or manually if you prefer).
#stefan's answer details the principle; here's how I'd do it.
Normally one would have to synchronise access to the cache while recreating it, to avoid the race condition of client code accessing the cache after it is disposed but before it is recreated.
To avoid this synchronisation, do this in your adapter class (which wraps the MemoryCache):
public void ClearCache()
{
    var oldCache = TheCache;
    TheCache = new MemoryCache("NewCacheName", ...);
    oldCache.Dispose();
    GC.Collect();
}
This way, TheCache is always in a non-disposed state, and no synchronisation is needed.
I ran into this problem too. .Dispose() did something quite different than what I expected.
Instead, I added a static field to my controller class. I did not use the default cache, to get around this behavior, but created a private one (if you want to call it that). So my implementation looked a bit like this:
public class MyController : Controller
{
    static MemoryCache s_cache = new MemoryCache("myCache");

    public ActionResult Index()
    {
        if (conditionThatInvalidatesCache)
        {
            // Replace the whole cache rather than removing entries one by one
            s_cache = new MemoryCache("myCache");
        }
        String s = s_cache["key"] as String;
        if (s == null)
        {
            // do work
            // add to s_cache["key"]
        }
        // do whatever next
    }
}
Check out this post, and specifically, the answer that Thomas F. Abraham posted.
It has a solution that enables you to clear the entire cache or a named subset.
The key thing here is:
// Cache objects are obligated to remove entry upon change notification.
base.OnChanged(null);
I've implemented this myself, and everything seems to work just fine.
I am writing a console application in C# in which I want to cache certain items for a predefined time (let's say 1 hour). I want items that have been added to this cache to be automatically removed after they expire. Is there a built-in data structure that I can use? Remember, this is a console app, not a web app.
Do you actually need them removed from the cache at that time? Or just that future requests to the cache for that item should return null after a given time?
To do the former, you would need some sort of background thread that was periodically purging the cache. This would only be needed if you were worried about memory consumption or something. If you just want the data to expire, that would be easy to do.
It is trivial to create such a class.
using System;
using System.Collections.Generic;

class CachedObject<TValue>
{
    public DateTime Date { get; set; }
    public TimeSpan Duration { get; set; }
    public TValue Cached { get; set; }
}

class Cache<TKey, TValue> : Dictionary<TKey, CachedObject<TValue>> where TValue : class
{
    private readonly TimeSpan _duration;

    public Cache(TimeSpan duration) => _duration = duration;

    public new TValue this[TKey key]
    {
        get
        {
            if (TryGetValue(key, out var entry))
            {
                // Compare dates: if not expired, return the cached item
                if (DateTime.UtcNow - entry.Date < entry.Duration)
                    return entry.Cached;
                Remove(key); // expired: drop it and fall through to return null
            }
            return null;
        }
        set
        {
            // Create a new CachedObject, stamp date and duration, add to dictionary
            base[key] = new CachedObject<TValue> { Date = DateTime.UtcNow, Duration = _duration, Cached = value };
        }
    }
}
It's already in the BCL; it's just not where you expect to find it: you can use System.Web.Caching from other kinds of applications too, not only ASP.NET.
This search on google links to several resources about this.
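For example, on the .NET Framework a console application can use it via HttpRuntime.Cache after referencing System.Web.dll - a minimal sketch:

using System;
using System.Web;
using System.Web.Caching;

class Program
{
    static void Main()
    {
        Cache cache = HttpRuntime.Cache; // works outside ASP.NET too

        cache.Insert("key", "some value", null,
            DateTime.UtcNow.AddHours(1), // absolute expiration: one hour from now
            Cache.NoSlidingExpiration);

        Console.WriteLine(cache["key"]); // "some value" until it expires
    }
}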
I don't know of any objects in the BCL which do this, but I have written similar things before.
You can do this fairly easily by just including a System.Threading.Timer inside of your caching class (no web/winforms dependencies), and storing an expiration (or last used) time on your objects. Just have the timer check every few minutes, and remove the objects you want to expire.
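A minimal sketch of that approach (all names are illustrative):

using System;
using System.Collections.Concurrent;
using System.Threading;

class ExpiringCache<TKey, TValue> : IDisposable
{
    private readonly ConcurrentDictionary<TKey, (TValue Value, DateTime Expires)> _items =
        new ConcurrentDictionary<TKey, (TValue Value, DateTime Expires)>();
    private readonly TimeSpan _ttl;
    private readonly Timer _timer;

    public ExpiringCache(TimeSpan ttl, TimeSpan sweepInterval)
    {
        _ttl = ttl;
        // Periodically sweep and evict anything past its expiration time
        _timer = new Timer(_ => Sweep(), null, sweepInterval, sweepInterval);
    }

    public void Set(TKey key, TValue value) =>
        _items[key] = (value, DateTime.UtcNow + _ttl);

    public bool TryGet(TKey key, out TValue value)
    {
        if (_items.TryGetValue(key, out var entry) && DateTime.UtcNow < entry.Expires)
        {
            value = entry.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    private void Sweep()
    {
        foreach (var pair in _items)
        {
            if (DateTime.UtcNow >= pair.Value.Expires)
                _items.TryRemove(pair.Key, out _);
        }
    }

    public void Dispose() => _timer.Dispose();
}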
However, be watchful of events on your objects. I had a system like this and was not being careful to unsubscribe from events on my objects in the cache, which caused a subtle but nasty memory leak over time. This can be very tricky to debug.
Include an ExpirationDate property in the object that you will be caching (probably a wrapper around your real object) and set it to expire in an hour in its constructor. Instead of removing items from the collection, access the collection through a method that filters out the expired items. Or create a custom collection that does this automatically. If you need to actually remove items from the cache, your custom collection could instead purge expired items on every call to one of its members.
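A sketch of what that wrapper and self-filtering collection might look like (names are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

class Expiring<T>
{
    public T Value { get; }
    public DateTime ExpirationDate { get; }

    public Expiring(T value)
    {
        Value = value;
        ExpirationDate = DateTime.UtcNow.AddHours(1); // expires one hour after creation
    }
}

class ExpiringCollection<T>
{
    private readonly List<Expiring<T>> _items = new List<Expiring<T>>();

    public void Add(T item) => _items.Add(new Expiring<T>(item));

    // Purge expired entries, then hand back only the live values
    public IEnumerable<T> GetLive()
    {
        _items.RemoveAll(i => i.ExpirationDate <= DateTime.UtcNow);
        return _items.Select(i => i.Value).ToList();
    }
}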