I need to add cache functionality and found a shiny new class called MemoryCache. However, I find MemoryCache a little crippled as it is (I'm in need of regions functionality). Among other things, I need to add something like ClearAll(region). The authors seem to have made a deliberate effort to keep this class free of region support; code like:
if (regionName != null)
{
throw new NotSupportedException(R.RegionName_not_supported);
}
appears in almost every method.
I don't see an easy way to override this behaviour. The only way to add region support that I can think of is to add a new class as a wrapper around MemoryCache rather than as a class that inherits from it. Then, in this new class, create a Dictionary and let each method "buffer" region calls. Sounds nasty and wrong, but eventually...
Do you know of better ways to add regions to MemoryCache?
I know it is a long time since you asked this question, so this is not really an answer to you, but rather an addition for future readers.
I was also surprised to find that the standard implementation of MemoryCache does NOT support regions. It would have been so easy to provide right away. I therefore decided to wrap the MemoryCache in my own simple class to provide the functionality I often need.
I enclose my code here to save time for others with the same need!
/// <summary>
/// =================================================================================================================
/// This is a static encapsulation of the Framework provided MemoryCache to make it easier to use.
/// - Keys can be of any type, not just strings.
/// - A typed Get method is provided for the common case where type of retrieved item actually is known.
/// - Exists method is provided.
/// - In addition to the Set method with a custom policy, some specific Set methods are provided for convenience.
/// - One SetAbsolute method with remove callback is provided as an example.
/// The Set method can also be used for custom remove/update monitoring.
/// - Domain (or "region") functionality missing in default MemoryCache is provided.
/// This is very useful when adding items with identical keys but belonging to different domains.
/// Example: "Customer" with Id=1, and "Product" with Id=1
/// =================================================================================================================
/// </summary>
public static class MyCache
{
private const string KeySeparator = "_";
private const string DefaultDomain = "DefaultDomain";
private static MemoryCache Cache
{
get { return MemoryCache.Default; }
}
// -----------------------------------------------------------------------------------------------------------------------------
// The default instance of the MemoryCache is used.
// Memory usage can be configured in standard config file.
// -----------------------------------------------------------------------------------------------------------------------------
// cacheMemoryLimitMegabytes: The amount of maximum memory size to be used. Specified in megabytes.
// The default is zero, which indicates that the MemoryCache instance manages its own memory
// based on the amount of memory that is installed on the computer.
// physicalMemoryPercentage: The percentage of physical memory that the cache can use. It is specified as an integer value from 1 to 100.
// The default is zero, which indicates that the MemoryCache instance manages its own memory
// based on the amount of memory that is installed on the computer.
// pollingInterval: The time interval after which the cache implementation compares the current memory load with the
// absolute and percentage-based memory limits that are set for the cache instance.
// The default is two minutes.
// -----------------------------------------------------------------------------------------------------------------------------
// <configuration>
// <system.runtime.caching>
// <memoryCache>
// <namedCaches>
// <add name="default" cacheMemoryLimitMegabytes="0" physicalMemoryPercentage="0" pollingInterval="00:02:00" />
// </namedCaches>
// </memoryCache>
// </system.runtime.caching>
// </configuration>
// -----------------------------------------------------------------------------------------------------------------------------
/// <summary>
/// Store an object and let it stay in cache until manually removed.
/// </summary>
public static void SetPermanent(string key, object data, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from write.
/// </summary>
public static void SetAbsolute(string key, object data, double minutes, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTime.Now + TimeSpan.FromMinutes(minutes) };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from write.
/// callback is a method to be triggered when item is removed
/// </summary>
public static void SetAbsolute(string key, object data, double minutes, CacheEntryRemovedCallback callback, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTime.Now + TimeSpan.FromMinutes(minutes), RemovedCallback = callback };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from last write or read.
/// </summary>
public static void SetSliding(object key, object data, double minutes, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(minutes) };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an item and let it stay in cache according to specified policy.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="data">Object to store</param>
/// <param name="policy">CacheItemPolicy</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static void Set(object key, object data, CacheItemPolicy policy, string domain = null)
{
Cache.Set(CombinedKey(key, domain), data, policy); // Set overwrites an existing entry; Add would silently keep the old value
}
/// <summary>
/// Get typed item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static T Get<T>(object key, string domain = null)
{
return (T)Get(key, domain);
}
/// <summary>
/// Get item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static object Get(object key, string domain = null)
{
return Cache.Get(CombinedKey(key, domain));
}
/// <summary>
/// Check if item exists in cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static bool Exists(object key, string domain = null)
{
return Cache[CombinedKey(key, domain)] != null;
}
/// <summary>
/// Remove item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static void Remove(object key, string domain = null)
{
Cache.Remove(CombinedKey(key, domain));
}
#region Support Methods
/// <summary>
/// Parse domain from combinedKey.
/// This method is exposed publicly because it can be useful in callback methods.
/// The key property of the callback argument will in our case be the combinedKey.
/// To be interpreted, it needs to be split into domain and key with these parse methods.
/// </summary>
public static string ParseDomain(string combinedKey)
{
return combinedKey.Substring(0, combinedKey.IndexOf(KeySeparator));
}
/// <summary>
/// Parse key from combinedKey.
/// This method is exposed publicly because it can be useful in callback methods.
/// The key property of the callback argument will in our case be the combinedKey.
/// To be interpreted, it needs to be split into domain and key with these parse methods.
/// </summary>
public static string ParseKey(string combinedKey)
{
return combinedKey.Substring(combinedKey.IndexOf(KeySeparator) + KeySeparator.Length);
}
/// <summary>
/// Create a combined key from given values.
/// The combined key is used when storing and retrieving from the inner MemoryCache instance.
/// Example: Product_76
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
private static string CombinedKey(object key, string domain)
{
return string.Format("{0}{1}{2}", string.IsNullOrEmpty(domain) ? DefaultDomain : domain, KeySeparator, key);
}
#endregion
}
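A short usage sketch (the keys and values here are made up for illustration):
// Two entries sharing the numeric key 1, but living in different domains
MyCache.SetSliding(1, new { Name = "Alice" }, 30, "Customer");
MyCache.SetSliding(1, new { Name = "Widget" }, 10, "Product");
object customer = MyCache.Get(1, "Customer");    // untyped get
bool haveProduct = MyCache.Exists(1, "Product"); // true until the entry expires
MyCache.Remove(1, "Customer");                   // removes only the "Customer_1" entry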
You can create more than one MemoryCache instance, one for each partition of your data.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx :
you can create multiple instances of the MemoryCache class for use in the same application and in the same AppDomain instance
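If you go down that route, a rough sketch could give each region its own MemoryCache instance, so ClearAll(region) becomes a matter of disposing and dropping that instance. The RegionCache class below is purely illustrative, not an existing API (note that MemoryCache reserves the name "default", and a heavily concurrent app would need to guard against using an instance that has just been disposed):
using System.Collections.Concurrent;
using System.Runtime.Caching;
public static class RegionCache
{
    // One MemoryCache per region; the region name doubles as the cache name.
    private static readonly ConcurrentDictionary<string, MemoryCache> Regions =
        new ConcurrentDictionary<string, MemoryCache>();
    private static MemoryCache For(string region)
    {
        return Regions.GetOrAdd(region ?? "DefaultRegion", name => new MemoryCache(name));
    }
    public static void Set(string region, string key, object value, CacheItemPolicy policy)
    {
        For(region).Set(key, value, policy);
    }
    public static object Get(string region, string key)
    {
        return For(region).Get(key);
    }
    public static void ClearAll(string region)
    {
        MemoryCache removed;
        if (Regions.TryRemove(region ?? "DefaultRegion", out removed))
        {
            removed.Dispose(); // dropping the instance clears every entry in that region
        }
    }
}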
I just recently came across this problem. I know this is an old question, but maybe this will be useful for some folks. Here is my iteration of the solution by Thomas F. Abraham.
namespace CLRTest
{
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Globalization;
using System.Linq;
using System.Runtime.Caching;
class Program
{
static void Main(string[] args)
{
CacheTester.TestCache();
}
}
public class SignaledChangeEventArgs : EventArgs
{
public string Name { get; private set; }
public SignaledChangeEventArgs(string name = null) { this.Name = name; }
}
/// <summary>
/// Cache change monitor that allows an app to fire a change notification
/// to all associated cache items.
/// </summary>
public class SignaledChangeMonitor : ChangeMonitor
{
// Shared across all SignaledChangeMonitors in the AppDomain
private static ConcurrentDictionary<string, EventHandler<SignaledChangeEventArgs>> ListenerLookup =
new ConcurrentDictionary<string, EventHandler<SignaledChangeEventArgs>>();
private string _name;
private string _key;
private string _uniqueId = Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture);
public override string UniqueId
{
get { return _uniqueId; }
}
public SignaledChangeMonitor(string key, string name)
{
_key = key;
_name = name;
// Register instance with the shared event
ListenerLookup[_uniqueId] = OnSignalRaised;
base.InitializationComplete();
}
public static void Signal(string name = null)
{
// Raise shared event to notify all subscribers
foreach (var subscriber in ListenerLookup.ToList())
{
subscriber.Value?.Invoke(null, new SignaledChangeEventArgs(name));
}
}
protected override void Dispose(bool disposing)
{
// Set delegate to null so it can't be accidentally called in Signal() while being disposed
ListenerLookup[_uniqueId] = null;
EventHandler<SignaledChangeEventArgs> outValue = null;
ListenerLookup.TryRemove(_uniqueId, out outValue);
}
private void OnSignalRaised(object sender, SignaledChangeEventArgs e)
{
if (string.IsNullOrWhiteSpace(e.Name) || string.Compare(e.Name, _name, true) == 0)
{
// Cache objects are obligated to remove entry upon change notification.
base.OnChanged(null);
}
}
}
public static class CacheTester
{
private static Stopwatch _timer = new Stopwatch();
public static void TestCache()
{
MemoryCache cache = MemoryCache.Default;
int size = (int)1e6;
Start();
for (int idx = 0; idx < size; idx++)
{
cache.Add(idx.ToString(), "Value" + idx.ToString(), GetPolicy(idx, cache));
}
long prevCnt = cache.GetCount();
Stop($"Added {prevCnt} items");
Start();
SignaledChangeMonitor.Signal("NamedData");
Stop($"Removed {prevCnt - cache.GetCount()} entries");
prevCnt = cache.GetCount();
Start();
SignaledChangeMonitor.Signal();
Stop($"Removed {prevCnt - cache.GetCount()} entries");
}
private static CacheItemPolicy GetPolicy(int idx, MemoryCache cache)
{
string name = (idx % 10 == 0) ? "NamedData" : null;
CacheItemPolicy cip = new CacheItemPolicy();
cip.AbsoluteExpiration = System.DateTimeOffset.UtcNow.AddHours(1);
var monitor = new SignaledChangeMonitor(idx.ToString(), name);
cip.ChangeMonitors.Add(monitor);
return cip;
}
private static void Start()
{
_timer.Start();
}
private static void Stop(string msg = null)
{
_timer.Stop();
Console.WriteLine($"{msg} | {_timer.Elapsed.TotalSeconds} sec");
_timer.Reset();
}
}
}
His solution involved using an event to keep track of ChangeMonitors. But the Dispose method was slow when the number of entries was more than 10k. My guess is that the code SignaledChangeMonitor.Signaled -= OnSignalRaised removes a delegate from the invocation list by doing a linear search, so removing a lot of entries becomes slow. I decided to use a ConcurrentDictionary instead of an event, in the hope that Dispose becomes faster. I ran some basic performance tests and here are the results:
Added 10000 items | 0.027697 sec
Removed 1000 entries | 0.0040669 sec
Removed 9000 entries | 0.0105687 sec
Added 100000 items | 0.5065736 sec
Removed 10000 entries | 0.0338991 sec
Removed 90000 entries | 0.1418357 sec
Added 1000000 items | 6.5994546 sec
Removed 100000 entries | 0.4176233 sec
Removed 900000 entries | 1.2514225 sec
I am not sure whether my code has some critical flaws; I would like to know if it does.
Another approach is to implement a wrapper around MemoryCache that implements regions by composing the key and region name e.g.
public interface ICache
{
...
object Get(string key, string regionName = null);
...
}
public class MyCache : ICache
{
private readonly MemoryCache cache;
public MyCache(MemoryCache cache)
{
this.cache = cache;
}
...
public object Get(string key, string regionName = null)
{
var regionKey = RegionKey(key, regionName);
return cache.Get(regionKey);
}
private string RegionKey(string key, string regionName)
{
// NB Implements region as a suffix, for prefix, swap order in the format
return string.IsNullOrEmpty(regionName) ? key : string.Format("{0}{1}{2}", key, "::", regionName);
}
...
}
It's not perfect but it works for most use cases.
I've implemented this and it's available as a NuGet package: Meerkat.Caching
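If you also need something like ClearAll(region) from the original question, the composed-key scheme makes that possible too, because MemoryCache is enumerable as key/value pairs. A rough sketch of such a method on the wrapper above (not part of the package; it assumes a using System.Linq; directive and the "key::region" suffix convention from RegionKey, and note that enumerating the cache takes a snapshot and is comparatively expensive, so use it sparingly):
public void RemoveRegion(string regionName)
{
    string suffix = "::" + regionName;
    // Snapshot the matching keys first, then remove them one by one.
    var keys = cache.Where(kvp => kvp.Key.EndsWith(suffix, StringComparison.Ordinal))
                    .Select(kvp => kvp.Key)
                    .ToList();
    foreach (var key in keys)
    {
        cache.Remove(key);
    }
}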
First of all, I have been looking online for a Settings Design Pattern, but I haven't been able to find a solution that works in my case; I've only found cryptic answers like "use Dependency Injection".
The Problem
I have a large ASP.NET solution which is deployed to multiple production environments. Many decisions in code are taken based on settings implemented in the Web.config file (a few hundred of them). At the moment they are all accessed using the NameValueCollection ConfigurationManager.AppSettings. I want to move these settings into the database so that I can create a user interface and modify them more easily afterwards.
I would also prefer for the settings to remain accessible from the Web.config such that if something happens with the database, the application won't break.
Solution Architecture
Basically, the solution consists of the following projects:
UI (this is were the Web.config file resides)
BLL
DAL
Integration Project 1 (the application interacts with other applications. This project uses configuration settings like web addresses, authentication information etc., which at the moment are passed as parameters in the methods - which I personally find very ugly)
Integration Project 2
...
The Code
The Setting class (the database table is practically the same). SettingType is an enum I defined which describes the type of the Setting (Int, String, Bool) and is used for the UI.
public class Setting
{
/// <summary>
/// Gets or sets the identifier.
/// </summary>
/// <value>
/// The identifier.
/// </value>
public int Id { get; set; }
/// <summary>
/// Gets or sets the name of the setting.
/// </summary>
/// <value>
/// The name of the setting.
/// </value>
public string Name { get; set; }
/// <summary>
/// Gets or sets the type of the setting.
/// </summary>
/// <value>
/// The type of the setting.
/// </value>
public SettingType Type { get; set; }
/// <summary>
/// Gets or sets the setting value.
/// </summary>
/// <value>
/// The setting value.
/// </value>
public object Value { get; set; }
/// <summary>
/// Gets or sets the setting description.
/// </summary>
/// <value>
/// The setting description.
/// </value>
public string Description { get; set; }
/// <summary>
/// Gets or sets the setting key.
/// </summary>
/// <value>
/// The setting key.
/// </value>
public string Key { get; set; }
/// <summary>
/// Gets or sets the default value
/// </summary>
/// <value>
/// The default value
/// </value>
public string Default { get; set; }
}
The SettingHelper class (which does everything at the moment, will split functionality in the future):
public static class SettingHelper
{
// Settings collection
private static List<Setting> _settings;
/// <summary>
/// Reloads the settings.
/// </summary>
/// <exception cref="System.Exception"></exception>
public static void LoadSettings()
{
_settings = new List<Setting>();
// Code which loads the settings from the database
}
private static Setting GetSetting(string key)
{
try
{
// if settings are not loaded, we reload them from the database
if (_settings == null || _settings.Count == 0)
LoadSettings();
var value = from Setting setting in _settings
where setting != null && setting.Key == key
select setting;
return value.FirstOrDefault();
}
catch (Exception)
{
return null;
}
}
public static void SetSetting(string key, object newValue, int userID)
{
var currentSetting = GetSetting(key);
if (currentSetting != null && !Convert.ToString(currentSetting.Value).ToUpper().Equals(Convert.ToString(newValue).ToUpper()))
{
// Code which updates the setting value
}
}
// For the UI
public static IEnumerable<Setting> GetAllSettings()
{
if (_settings == null)
LoadSettings();
return _settings;
}
// To change back swiftly to the Web.config in case there are errors in production - will remove in the future
public static bool SettingsFromDataBase
{
get
{
if (HttpContext.Current.Application["SettingsFromDataBase"] == null)
HttpContext.Current.Application["SettingsFromDataBase"] = ConfigurationManager.AppSettings["SettingsFromDataBase"];
bool settingsFromDataBase;
if (bool.TryParse(HttpContext.Current.Application["SettingsFromDataBase"] as string, out settingsFromDataBase))
return settingsFromDataBase;
else return false;
}
}
public static T ObjectToGenericType<T>(object obj)
{
if (obj is T)
{
return (T)obj;
}
else
{
try
{
return (T)Convert.ChangeType(obj, typeof(T));
}
catch (InvalidCastException)
{
return default(T);
}
}
}
public static T GetSetting<T>(string key)
{
if (SettingsFromDataBase)
{
var setting = GetSetting(key);
return setting != null
? ObjectToGenericType<T>(setting.Value)
: ObjectToGenericType<T>(ConfigurationManager.AppSettings[key]);
}
if (HttpContext.Current.Application[key] == null)
HttpContext.Current.Application[key] = ConfigurationManager.AppSettings[key];
return ObjectToGenericType<T>(HttpContext.Current.Application[key]);
}
// The actual settings which will be used in the other projects
public static string StringSetting
{
get { return GetSetting<string>("StringSetting"); }
}
public static bool BoolSetting
{
get { return GetSetting<bool>("BoolSetting"); }
}
}
Note: Likely there are improvements to this class using Reflection, but I would prefer to avoid it (apart from that conversion).
Questions
At the moment the SettingHelper resides in the UI project (in order to be able to access the settings from the Web.config). How can I transmit these settings to the BLL or Integration projects?
Also, do you think this implementation, with all the settings stored in a static list, is the way to go? It feels suboptimal.
Any help or discussion on improvements is appreciated. Thank you.
I have a solution with several projects, including an ASP.NET MVC project and a WPF application. In the DB, I have some general settings which I want to use in both applications. To do that, I've created a class library Foo which loads the settings into a dictionary and provides a Get(string key) method for accessing specific settings out of the dictionary.
Since the settings can be overridden by the user, I've added a property containing the UserId. The Get() method automatically takes care of checking and using the UserId property. This way, I don't need to pass the UserId as a param each time I call the Get() method.
For the WPF application, this works just fine, since there is just one instance running. However for the web project, I'd like to have the dictionary filled only once (in Application_Start()) and be accessible to all users visiting the site. This works fine if I make the class instance static. However, that does not allow me to have different UserIds, as this would be overridden for everyone with every user that accesses the site. What's the best way to solve this?
Here's what I tried so far (very simplified):
Class Library:
public class Foo
{
private Dictionary<string, string> Res;
private int UserId;
public Foo ()
{
Res = DoSomeMagicAndGetMyDbValues();
}
public void SetUser (int userId)
{
UserId = userId;
}
public string Get(string key)
{
var res = Res[key];
// do some magic stuff with the UserId
return res;
}
}
Global.asax:
public static Foo MyFoo;
protected void Application_Start()
{
MyFoo = new Foo();
}
UserController.cs:
public ActionResult Login(int userId)
{
MvcApplication.MyFoo.SetUser(userId); // <-- this sets the same UserId for all instances
}
What about storing the settings in a Dictionary<int, Dictionary<string, string>>, where the key of the outer dictionary is the UserId, with key 0 reserved for the default settings? Of course this means you'd have to pass the user id to the Get and Set methods...
Then, you could possibly do something like this:
public static class Foo
{
private static Dictionary<int, Dictionary<string, string>> settings = new Dictionary<int, Dictionary<string, string>>();
/// <summary>
/// Populates settings[0] with the default settings for the application
/// </summary>
public static void LoadDefaultSettings()
{
if (!settings.ContainsKey(0))
{
settings.Add(0, new Dictionary<string, string>());
}
// Some magic that loads the default settings into settings[0]
settings[0] = GetDefaultSettings();
}
/// <summary>
/// Adds a user-defined key or overrides a default key value with a User-specified value
/// </summary>
/// <param name="key">The key to add or override</param>
/// <param name="value">The key's value</param>
public static void Set(string key, string value, int userId)
{
if (!settings.ContainsKey(userId))
{
settings.Add(userId, new Dictionary<string, string>());
}
settings[userId][key] = value;
}
/// <summary>
/// Gets the User-defined value for the specified key if it exists,
/// otherwise the default value is returned.
/// </summary>
/// <param name="key">The key to search for</param>
/// <returns>The value of the specified key, or empty string if it doesn't exist</returns>
public static string Get(string key, int userId)
{
if (settings.ContainsKey(userId) && settings[userId].ContainsKey(key))
{
return settings[userId][key];
}
return settings[0].ContainsKey(key) ? settings[0][key] : string.Empty;
}
}
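Usage could then look roughly like this (the key names, values and user id are illustrative):
// At application start-up (e.g. in Application_Start)
Foo.LoadDefaultSettings();
// A user-specific override
Foo.Set("Theme", "Dark", 42);
// Reads fall back to the defaults in settings[0] when the user has no override
string theme = Foo.Get("Theme", 42);    // "Dark"
string locale = Foo.Get("Locale", 42);  // default value, or "" if the key doesn't exist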
Are there any multithreaded caching mechanisms that will work in a SQL CLR function without requiring the assembly to be registered as "unsafe"?
As also described in this post, simply using a lock statement will throw an exception on a safe assembly:
System.Security.HostProtectionException:
Attempted to perform an operation that was forbidden by the CLR host.
The protected resources (only available with full trust) were: All
The demanded resources were: Synchronization, ExternalThreading
I want any calls to my functions to all use the same internal cache, in a thread-safe manner so that many operations can do cache reads and writes simultaneously. Essentially - I need a ConcurrentDictionary that will work in a SQLCLR "safe" assembly. Unfortunately, using ConcurrentDictionary itself gives the same exception as above.
Is there something built-in to SQLCLR or SQL Server to handle this? Or am I misunderstanding the threading model of SQLCLR?
I have read as much as I can find about the security restrictions of SQLCLR. In particular, the following articles may be useful to understand what I am talking about:
SQL Server CLR Integration Part 1: Security
Deploy/Use assemblies which require Unsafe/External Access with CLR and T-SQL
This code will ultimately be part of a library that is distributed to others, so I really don't want to be required to run it as "unsafe".
One option that I am considering (brought up in comments below by Spender) is to reach out to tempdb from within the SQLCLR code and use that as a cache instead. But I'm not quite sure exactly how to do that. I'm also not sure if it will be anywhere near as performant as an in-memory cache. See update below.
I am interested in any other alternatives that might be available. Thanks.
Example
The code below uses a static concurrent dictionary as a cache and accesses that cache via SQL CLR user-defined functions. All calls to the functions will work with the same cache. But this will not work unless the assembly is registered as "unsafe".
public class UserDefinedFunctions
{
private static readonly ConcurrentDictionary<string,string> Cache =
new ConcurrentDictionary<string, string>();
[SqlFunction]
public static SqlString GetFromCache(string key)
{
string value;
if (Cache.TryGetValue(key, out value))
return new SqlString(value);
return SqlString.Null;
}
[SqlProcedure]
public static void AddToCache(string key, string value)
{
Cache.TryAdd(key, value);
}
}
These are in an assembly called SqlClrTest, and use the following SQL wrappers:
CREATE FUNCTION [dbo].[GetFromCache](@key nvarchar(4000))
RETURNS nvarchar(4000) WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SqlClrTest].[SqlClrTest.UserDefinedFunctions].[GetFromCache]
GO
CREATE PROCEDURE [dbo].[AddToCache](@key nvarchar(4000), @value nvarchar(4000))
WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SqlClrTest].[SqlClrTest.UserDefinedFunctions].[AddToCache]
GO
Then they are used in the database like this:
EXEC dbo.AddToCache 'foo', 'bar'
SELECT dbo.GetFromCache('foo')
UPDATE
I figured out how to access the database from SQLCLR using the Context Connection. The code in this Gist shows both the ConcurrentDictionary approach, and the tempdb approach. I then ran some tests, with the following results measured from client statistics (average of 10 trials):
Concurrent Dictionary Cache
10,000 Writes: 363ms
10,000 Reads : 81ms
TempDB Cache
10,000 Writes: 3546ms
10,000 Reads : 1199ms
So that throws out the idea of using a tempdb table. Is there really nothing else I can try?
I've added a comment that says something similar, but I'm going to put it here as an answer instead, because I think it might need some background.
ConcurrentDictionary, as you've correctly pointed out, ultimately requires UNSAFE because it uses thread synchronisation primitives beyond even lock - this explicitly requires access to lower-level OS resources, and therefore requires the code to reach outside of the SQL hosting environment.
So the only way you can get a solution that doesn't require UNSAFE is to use one which doesn't use any locks or other thread synchronisation primitives. However, if the underlying structure is a .NET Dictionary, then the only truly safe way to share it across multiple threads is to use lock or an Interlocked.CompareExchange (see here) with a spin wait. I can't seem to find any information on whether the latter is allowed under the SAFE permission set, but my guess is that it's not.
I'd also be questioning the validity of applying a CLR-based solution to this problem inside a database engine, whose indexing-and-lookup capability is likely to be far in excess of any hosted CLR solution.
The accepted answer is not correct. Interlocked.CompareExchange is not an option since it requires a shared resource to update, and there is no way to create said static variable, in a SAFE Assembly, that can be updated.
There is (for the most part) no way to cache data across calls in a SAFE Assembly (nor should there be). The reason is that there is a single instance of the class (well, within the App Domain which is per-database per-owner) that is shared across all sessions. That behavior is, more often than not, highly undesirable.
However, I did say "for the most part" it was not possible. There is a way, though I am not sure if it is a bug or intended to be this way. I would err on the side of it being a bug since again, sharing a variable across sessions is a very precarious activity. Nonetheless, you can (do so at your own risk, AND this is not specifically thread safe, but might still work) modify static readonly collections. Yup. As in:
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
using System.Collections;
public class CachingStuff
{
private static readonly Hashtable _KeyValuePairs = new Hashtable();
[SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true)]
public static SqlString GetKVP(SqlString KeyToGet)
{
if (_KeyValuePairs.ContainsKey(KeyToGet.Value))
{
return _KeyValuePairs[KeyToGet.Value].ToString();
}
return SqlString.Null;
}
[SqlProcedure]
public static void SetKVP(SqlString KeyToSet, SqlString ValueToSet)
{
if (!_KeyValuePairs.ContainsKey(KeyToSet.Value))
{
_KeyValuePairs.Add(KeyToSet.Value, ValueToSet.Value);
}
return;
}
[SqlProcedure]
public static void UnsetKVP(SqlString KeyToUnset)
{
_KeyValuePairs.Remove(KeyToUnset.Value);
return;
}
}
And running the above, with the database set as TRUSTWORTHY OFF and the assembly set to SAFE, we get:
EXEC dbo.SetKVP 'f', 'sdfdg';
SELECT dbo.GetKVP('f'); -- sdfdg
SELECT dbo.GetKVP('g'); -- NULL
EXEC dbo.UnsetKVP 'f';
SELECT dbo.GetKVP('f'); -- NULL
That all being said, there is probably a better way that is not SAFE but also not UNSAFE. Since the desire is to use memory for caching of repeatedly used values, why not set up a memcached or redis server and create SQLCLR functions to communicate with it? That would only require setting the assembly to EXTERNAL_ACCESS.
This way you don't have to worry about several issues:
consuming a bunch of memory which could/should be used for queries.
there is no automatic expiration of the data held in static variables. It exists until you remove it or the App Domain gets unloaded, which might not happen for a long time. But memcached and redis do allow for setting an expiration time.
this is not explicitly thread safe. But cache servers are.
SQL Server's application-lock procedures sp_getapplock and sp_releaseapplock can be used in a SAFE context. Employ them to protect an ordinary Dictionary and you have yourself a cache!
The price of locking this way is much worse than ordinary lock, but that may not be an issue if you are accessing your cache in a relatively coarsely-grained way.
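As a minimal sketch of that idea (the AppLockCache class, the resource name and the Get method are illustrative; sp_getapplock/sp_releaseapplock are the real SQL Server procedures, invoked here over the context connection with a session-owned lock; a SqlFunction calling this would need DataAccess = DataAccessKind.Read, and production code should check sp_getapplock's return value):
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
public static class AppLockCache
{
    // Shared per AppDomain; protected by a SQL Server application lock instead of a CLR lock.
    private static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();
    private static void AppLock(SqlConnection conn, string procName, string lockMode)
    {
        using (SqlCommand cmd = new SqlCommand(procName, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", "MyClrCache");
            if (lockMode != null)
            {
                cmd.Parameters.AddWithValue("@LockMode", lockMode);
            }
            cmd.Parameters.AddWithValue("@LockOwner", "Session");
            cmd.ExecuteNonQuery();
        }
    }
    public static string Get(string key)
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            AppLock(conn, "sys.sp_getapplock", "Exclusive");
            try
            {
                string value;
                return Cache.TryGetValue(key, out value) ? value : null;
            }
            finally
            {
                AppLock(conn, "sys.sp_releaseapplock", null);
            }
        }
    }
}
A Set method would follow the same acquire/mutate/release pattern.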
--- UPDATE ---
The Interlocked.CompareExchange can be used on a field contained in a static instance. The static reference can be made readonly, but a field in the referenced object can still be mutable, and therefore usable by Interlocked.CompareExchange.
Both Interlocked.CompareExchange and static readonly are allowed in SAFE context. Performance is much better than sp_getapplock.
Based on Andras' answer, here is my implementation of a "SharedCache" to read and write to a dictionary under the SAFE permission set.
EvalManager (Static)
using System;
using System.Collections.Generic;
using Z.Expressions.SqlServer.Eval;
namespace Z.Expressions
{
/// <summary>Manager class for eval.</summary>
public static class EvalManager
{
/// <summary>The cache for EvalDelegate.</summary>
public static readonly SharedCache<string, EvalDelegate> CacheDelegate = new SharedCache<string, EvalDelegate>();
/// <summary>The cache for SQLNETItem.</summary>
public static readonly SharedCache<string, SQLNETItem> CacheItem = new SharedCache<string, SQLNETItem>();
/// <summary>The shared lock.</summary>
public static readonly SharedLock SharedLock;
static EvalManager()
{
// ENSURE to create lock first
SharedLock = new SharedLock();
}
}
}
SharedLock
using System.Threading;
namespace Z.Expressions.SqlServer.Eval
{
/// <summary>A shared lock.</summary>
public class SharedLock
{
/// <summary>Acquires the lock on the specified lockValue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
public static void AcquireLock(ref int lockValue)
{
do
{
// TODO: it's possible to wait 10 ticks? Thread.Sleep doesn't really support it.
} while (0 != Interlocked.CompareExchange(ref lockValue, 1, 0));
}
/// <summary>Releases the lock on the specified lockValue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
public static void ReleaseLock(ref int lockValue)
{
Interlocked.CompareExchange(ref lockValue, 0, 1);
}
/// <summary>Attempts to acquire lock on the specified lockvalue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public static bool TryAcquireLock(ref int lockValue)
{
return 0 == Interlocked.CompareExchange(ref lockValue, 1, 0);
}
}
}
SharedCache
using System;
using System.Collections.Generic;
namespace Z.Expressions.SqlServer.Eval
{
/// <summary>A shared cache.</summary>
/// <typeparam name="TKey">Type of key.</typeparam>
/// <typeparam name="TValue">Type of value.</typeparam>
public class SharedCache<TKey, TValue>
{
/// <summary>The lock value.</summary>
public int LockValue;
/// <summary>Default constructor.</summary>
public SharedCache()
{
InnerDictionary = new Dictionary<TKey, TValue>();
}
/// <summary>Gets the number of items cached.</summary>
/// <value>The number of items cached.</value>
public int Count
{
get { return InnerDictionary.Count; }
}
/// <summary>Gets or sets the inner dictionary used to cache items.</summary>
/// <value>The inner dictionary used to cache items.</value>
public Dictionary<TKey, TValue> InnerDictionary { get; set; }
/// <summary>Acquires the lock on the shared cache.</summary>
public void AcquireLock()
{
SharedLock.AcquireLock(ref LockValue);
}
/// <summary>Adds or updates a cache value for the specified key.</summary>
/// <param name="key">The cache key.</param>
/// <param name="value">The cache value used to add.</param>
/// <param name="updateValueFactory">The cache value factory used to update.</param>
/// <returns>The value added or updated in the cache for the specified key.</returns>
public TValue AddOrUpdate(TKey key, TValue value, Func<TKey, TValue, TValue> updateValueFactory)
{
try
{
AcquireLock();
TValue oldValue;
if (InnerDictionary.TryGetValue(key, out oldValue))
{
value = updateValueFactory(key, oldValue);
InnerDictionary[key] = value;
}
else
{
InnerDictionary.Add(key, value);
}
return value;
}
finally
{
ReleaseLock();
}
}
/// <summary>Adds or update a cache value for the specified key.</summary>
/// <param name="key">The cache key.</param>
/// <param name="addValueFactory">The cache value factory used to add.</param>
/// <param name="updateValueFactory">The cache value factory used to update.</param>
/// <returns>The value added or updated in the cache for the specified key.</returns>
public TValue AddOrUpdate(TKey key, Func<TKey, TValue> addValueFactory, Func<TKey, TValue, TValue> updateValueFactory)
{
try
{
AcquireLock();
TValue value;
TValue oldValue;
if (InnerDictionary.TryGetValue(key, out oldValue))
{
value = updateValueFactory(key, oldValue);
InnerDictionary[key] = value;
}
else
{
value = addValueFactory(key);
InnerDictionary.Add(key, value);
}
return value;
}
finally
{
ReleaseLock();
}
}
/// <summary>Clears all cached items.</summary>
public void Clear()
{
try
{
AcquireLock();
InnerDictionary.Clear();
}
finally
{
ReleaseLock();
}
}
/// <summary>Releases the lock on the shared cache.</summary>
public void ReleaseLock()
{
SharedLock.ReleaseLock(ref LockValue);
}
/// <summary>Attempts to add a value in the shared cache for the specified key.</summary>
/// <param name="key">The key.</param>
/// <param name="value">The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryAdd(TKey key, TValue value)
{
try
{
AcquireLock();
if (!InnerDictionary.ContainsKey(key))
{
InnerDictionary.Add(key, value);
}
return true;
}
finally
{
ReleaseLock();
}
}
/// <summary>Attempts to remove a key from the shared cache.</summary>
/// <param name="key">The key.</param>
/// <param name="value">[out] The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryRemove(TKey key, out TValue value)
{
try
{
AcquireLock();
var isRemoved = InnerDictionary.TryGetValue(key, out value);
if (isRemoved)
{
InnerDictionary.Remove(key);
}
return isRemoved;
}
finally
{
ReleaseLock();
}
}
/// <summary>Attempts to get value from the shared cache for the specified key.</summary>
/// <param name="key">The key.</param>
/// <param name="value">[out] The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryGetValue(TKey key, out TValue value)
{
try
{
return InnerDictionary.TryGetValue(key, out value);
}
catch (Exception)
{
value = default(TValue);
return false;
}
}
}
}
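A short usage sketch of the SharedCache class on its own; in a SQLCLR assembly the instance would live in a static readonly field, as in EvalManager above (keys and values are illustrative):
private static readonly SharedCache<string, string> Cache = new SharedCache<string, string>();
// ...
Cache.TryAdd("f", "sdfdg");
string value;
if (Cache.TryGetValue("f", out value))
{
    // value == "sdfdg"
}
Cache.AddOrUpdate("f", "new", (key, old) => old + "_updated");
Cache.TryRemove("f", out value);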
Source Files:
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/EvalManager/EvalManager.cs
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/Shared/SharedLock.cs
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/Shared/SharedCache.cs
Would your needs be satisfied with a table variable? They're kept in memory, as long as possible anyway, so performance should be excellent. Not so useful if you need to maintain your cache between app calls, of course.
Created as a type, you can also pass such a table into a sproc or UDF.
How do I work with a queue in C#?
I want one thread that will enqueue data to the queue and another thread that will dequeue data from the queue. Those threads should run simultaneously.
Is it possible?
If you need thread safety use ConcurrentQueue<T>.
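For the one-producer/one-consumer scenario in the question, a minimal sketch with ConcurrentQueue<T> might look like this (names and the item count are illustrative):
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
class Program
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();
        const int count = 100;
        // One thread enqueues...
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < count; i++)
            {
                queue.Enqueue(i);
            }
        });
        // ...while another dequeues at the same time.
        var consumer = Task.Run(() =>
        {
            int received = 0;
            int item;
            while (received < count)
            {
                if (queue.TryDequeue(out item))
                {
                    Console.WriteLine(item);
                    received++;
                }
                else
                {
                    Thread.Sleep(1); // queue momentarily empty
                }
            }
        });
        Task.WaitAll(producer, consumer);
    }
}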
If you use System.Collections.Queue thread-safety is guaranteed in this way:
var queue = new Queue();
Queue.Synchronized(queue).Enqueue(new WorkItem());
Queue.Synchronized(queue).Enqueue(new WorkItem());
Queue.Synchronized(queue).Clear();
If you want to use System.Collections.Generic.Queue<T>, then create your own wrapper class. I did this already with System.Collections.Generic.Stack<T>:
using System;
using System.Collections.Generic;
[Serializable]
public class SomeStack
{
private readonly object stackLock = new object();
private readonly Stack<WorkItem> stack;
public SomeStack()
{
this.stack = new Stack<WorkItem>();
}
public WorkItem Push(WorkItem context)
{
lock (this.stackLock)
{
this.stack.Push(context);
}
return context;
}
public WorkItem Pop()
{
lock (this.stackLock)
{
return this.stack.Pop();
}
}
}
One possible implementation is to use a ring buffer with separate read and write pointers. On each read/write operation you copy the opposite pointer (must be thread safe) into your local context and then perform batched reads or writes.
On each read or write you update the pointer and pulse an event.
If the read or write thread reaches a point where it has no more work to do, it waits on the other thread's event before rereading the appropriate pointer.
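A minimal single-producer/single-consumer ring buffer along those lines might look like the sketch below (fixed capacity, safe only with exactly one reader and one writer; the batching and event signalling described above are left out, and the class name is illustrative):
public class SpscRingBuffer<T>
{
    private readonly T[] _buffer;
    private volatile int _readPos;   // only advanced by the consumer
    private volatile int _writePos;  // only advanced by the producer
    public SpscRingBuffer(int capacity)
    {
        _buffer = new T[capacity + 1]; // one slot stays empty to tell "full" from "empty"
    }
    public bool TryWrite(T item)
    {
        int next = (_writePos + 1) % _buffer.Length;
        if (next == _readPos) return false;    // full
        _buffer[_writePos] = item;
        _writePos = next;                       // publish only after the slot is written
        return true;
    }
    public bool TryRead(out T item)
    {
        if (_readPos == _writePos) { item = default(T); return false; } // empty
        item = _buffer[_readPos];
        _readPos = (_readPos + 1) % _buffer.Length;
        return true;
    }
}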
You can implement a thread-safe queue using atomic operations. I once wrote the following class for a multi-player game. It allows multiple threads to safely write to the queue, and a single other thread to safely read from the queue:
/// <summary>
/// The WaitFreeQueue class implements the Queue abstract data type through a linked list. The WaitFreeQueue
/// allows thread-safe addition and removal of elements using atomic operations. Multiple threads can add
/// elements simultaneously, and another thread can remove elements from the queue at the same time. Only one
/// thread can remove elements from the queue at any given time.
/// </summary>
/// <typeparam name="T">The type parameter</typeparam>
public class WaitFreeQueue<T>
{
// Private fields
// ==============
#region Private fields
private Node<T> _tail; // The tail of the queue.
private Node<T> _head; // The head of the queue.
#endregion
// Public methods
// ==============
#region Public methods
/// <summary>
/// Removes the first item from the queue. This method returns a value to indicate if an item was
/// available, and passes the item back through an argument.
/// This method is not thread-safe in itself (only one thread can safely access this method at any
/// given time) but it is safe to call this method while other threads are enqueueing items.
///
/// If no item was available at the time of calling this method, the returned value is initialised
/// to the default value that matches this instance's type parameter. For reference types, this is
/// a Null reference.
/// </summary>
/// <param name="value">The value.</param>
/// <returns>A boolean value indicating if an element was available (true) or not.</returns>
public bool Dequeue(ref T value)
{
bool succeeded = false;
value = default(T);
// If there is an element on the queue then we get it.
if (null != _head)
{
// Set the head to the next element in the list, and retrieve the old head.
Node<T> head = System.Threading.Interlocked.Exchange<Node<T>>(ref _head, _head.Next);
// Sever the element we just pulled off the queue.
head.Next = null;
// We have succeeded.
value = head.Value;
succeeded = true;
}
return succeeded;
}
/// <summary>
/// Adds another item to the end of the queue. This operation is thread-safe, and multiple threads
/// can enqueue items while a single other thread dequeues items.
/// </summary>
/// <param name="value">The value to add.</param>
public void Enqueue(T value)
{
// We create a new node for the specified value, and point it to itself.
Node<T> newNode = new Node<T>(value);
// In one atomic operation, set the tail of the list to the new node, and remember the old tail.
Node<T> previousTail = System.Threading.Interlocked.Exchange<Node<T>>(ref _tail, newNode);
// Link the previous tail to the new tail.
if (null != previousTail)
previousTail.Next = newNode;
// If this is the first node in the list, we save it as the head of the queue.
System.Threading.Interlocked.CompareExchange<Node<T>>(ref _head, newNode, null);
} // Enqueue()
#endregion
// Public constructor
// ==================
#region Public constructor
/// <summary>
/// Constructs a new WaitFreeQueue instance.
/// </summary>
public WaitFreeQueue() { }
/// <summary>
/// Constructs a new WaitFreeQueue instance based on the specified list of items.
/// The items will be enqueued. The list can be a Null reference.
/// </summary>
/// <param name="items">The items</param>
public WaitFreeQueue(IEnumerable<T> items)
{
if(null!=items)
foreach(T item in items)
this.Enqueue(item);
}
#endregion
// Private types
// =============
#region Private types
/// <summary>
/// The Node class represents a single node in the linked list of a WaitFreeQueue.
/// It contains the queued-up value and a reference to the next node in the list.
/// </summary>
/// <typeparam name="U">The type parameter.</typeparam>
private class Node<U>
{
// Public fields
// =============
#region Public fields
public Node<U> Next;
public U Value;
#endregion
// Public constructors
// ===================
#region Public constructors
/// <summary>
/// Constructs a new node with the specified value.
/// </summary>
/// <param name="value">The value</param>
public Node(U value)
{
this.Value = value;
}
#endregion
} // Node generic class
#endregion
} // WaitFreeQueue class
If the restriction of having only a single thread de-queueing while multiple threads can en-queue is OK with you then you could use that. It was great for the game because it meant no thread synchronisation was required.
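A hedged usage sketch of the WaitFreeQueue above, with a polling consumer (names are illustrative):
var queue = new WaitFreeQueue<string>();
// Any number of producer threads can do this concurrently:
queue.Enqueue("message");
// A single consumer thread drains it:
string msg = null;
while (queue.Dequeue(ref msg))
{
    Console.WriteLine(msg);
}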
Example simple usage would be
using System.Collections.Generic;
using System.Threading;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
ExampleQueue eq = new ExampleQueue();
eq.Run();
// Wait...
System.Threading.Thread.Sleep(100000);
}
}
class ExampleQueue
{
private Queue<int> _myQueue = new Queue<int>();
public void Run()
{
ThreadPool.QueueUserWorkItem(new WaitCallback(PushToQueue), null);
ThreadPool.QueueUserWorkItem(new WaitCallback(PopFromQueue), null);
}
private void PushToQueue(object Dummy)
{
for (int i = 0; i <= 1000; i++)
{
lock (_myQueue)
{
_myQueue.Enqueue(i);
}
}
System.Console.WriteLine("END PushToQueue");
}
private void PopFromQueue(object Dummy)
{
int dataElementFromQueue = -1;
while (dataElementFromQueue < 1000)
{
lock (_myQueue)
{
if (_myQueue.Count > 0)
{
dataElementFromQueue = _myQueue.Dequeue();
// Do something with dataElementFromQueue...
System.Console.WriteLine("Dequeued " + dataElementFromQueue);
}
}
}
System.Console.WriteLine("END PopFromQueue");
}
}
}
You might want to use a blocking queue, in which the thread that is popping from the queue will wait until some data is available.
See: Creating a blocking Queue<T> in .NET?
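Since .NET 4, BlockingCollection<T> gives you that behaviour out of the box; a minimal sketch (names and counts are illustrative):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
class Program
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                queue.Add(i);
            }
            queue.CompleteAdding(); // signals the consumer that no more items will arrive
        });
        var consumer = Task.Run(() =>
        {
            // GetConsumingEnumerable blocks until an item is available and ends
            // once CompleteAdding has been called and the queue is drained.
            foreach (int item in queue.GetConsumingEnumerable())
            {
                Console.WriteLine(item);
            }
        });
        Task.WaitAll(producer, consumer);
    }
}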
Does anyone have any advice on which method is better when caching data in a C# ASP.NET application?
I am currently using a combination of two approaches, with some data (lists, dictionaries, the usual domain-specific information) being put directly into the cache and boxed when needed, and some data being kept inside a GlobalData class and retrieved through that class (i.e. the GlobalData class is cached, and its properties are the actual data).
Is either approach preferable?
I get the feeling that caching each item separately would be more sensible from a concurrency point of view; however, it creates a lot more work in the long run, with more functions in a utility class that purely deal with getting data out of a cache location.
Suggestions would be appreciated.
Generally the cache's performance is so much better than the underlying source (e.g. a DB) that the performance of the cache is not a problem. The main goal is rather to get as high cache-hit ratio as possible (unless you are developing at really large scale because then it pays off to optimize the cache as well).
To achieve this I usually try to make it as straightforward as possible for the developer to use the cache (so that we don't miss any chances of cache hits just because the developer is too lazy to use the cache). In some projects we've used a modified version of the CacheHandler available in Microsoft's Enterprise Library.
With CacheHandler (which uses Policy Injection) you can easily make a method "cacheable" by just adding an attribute to it. For instance this:
[CacheHandler(0, 30, 0)]
public Object GetData(Object input)
{
}
would make all calls to that method cached for 30 minutes. All invocations get a unique cache key based on the input data and method name, so if you call the method twice with different input the result isn't shared, but if you call it more than once within the timeout interval with the same input then the method only gets executed once.
Our modified version looks like this:
using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;
using System.Runtime.Remoting.Contexts;
using System.Text;
using System.Web;
using System.Web.Caching;
using System.Web.UI;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.Unity.InterceptionExtension;
namespace Middleware.Cache
{
/// <summary>
/// An <see cref="ICallHandler"/> that implements caching of the return values of
/// methods. This handler stores the return value in the ASP.NET cache or the Items object of the current request.
/// </summary>
[ConfigurationElementType(typeof (CacheHandler)), Synchronization]
public class CacheHandler : ICallHandler
{
/// <summary>
/// The default expiration time for the cached entries: 5 minutes
/// </summary>
public static readonly TimeSpan DefaultExpirationTime = new TimeSpan(0, 5, 0);
private readonly object cachedData;
private readonly DefaultCacheKeyGenerator keyGenerator;
private readonly bool storeOnlyForThisRequest = true;
private TimeSpan expirationTime;
private GetNextHandlerDelegate getNext;
private IMethodInvocation input;
public CacheHandler(TimeSpan expirationTime, bool storeOnlyForThisRequest)
{
keyGenerator = new DefaultCacheKeyGenerator();
this.expirationTime = expirationTime;
this.storeOnlyForThisRequest = storeOnlyForThisRequest;
}
/// <summary>
/// This constructor is used when we wrap cached data in a CacheHandler so that
/// we can reload the object after it has been removed from the cache.
/// </summary>
/// <param name="expirationTime"></param>
/// <param name="storeOnlyForThisRequest"></param>
/// <param name="input"></param>
/// <param name="getNext"></param>
/// <param name="cachedData"></param>
public CacheHandler(TimeSpan expirationTime, bool storeOnlyForThisRequest,
IMethodInvocation input, GetNextHandlerDelegate getNext,
object cachedData)
: this(expirationTime, storeOnlyForThisRequest)
{
this.input = input;
this.getNext = getNext;
this.cachedData = cachedData;
}
/// <summary>
/// Gets or sets the expiration time for cache data.
/// </summary>
/// <value>The expiration time.</value>
public TimeSpan ExpirationTime
{
get { return expirationTime; }
set { expirationTime = value; }
}
#region ICallHandler Members
/// <summary>
/// Implements the caching behavior of this handler.
/// </summary>
/// <param name="input"><see cref="IMethodInvocation"/> object describing the current call.</param>
/// <param name="getNext">delegate used to get the next handler in the current pipeline.</param>
/// <returns>Return value from target method, or cached result if previous inputs have been seen.</returns>
public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
{
lock (input.MethodBase)
{
this.input = input;
this.getNext = getNext;
return loadUsingCache();
}
}
public int Order
{
get { return 0; }
set { }
}
#endregion
private IMethodReturn loadUsingCache()
{
//We need to synchronize calls to the CacheHandler on method level
//to prevent duplicate calls to methods that could be cached.
lock (input.MethodBase)
{
if (TargetMethodReturnsVoid(input) || HttpContext.Current == null)
{
return getNext()(input, getNext);
}
var inputs = new object[input.Inputs.Count];
for (int i = 0; i < inputs.Length; ++i)
{
inputs[i] = input.Inputs[i];
}
string cacheKey = keyGenerator.CreateCacheKey(input.MethodBase, inputs);
object cachedResult = getCachedResult(cacheKey);
if (cachedResult == null)
{
var stopWatch = Stopwatch.StartNew();
var realReturn = getNext()(input, getNext);
stopWatch.Stop();
if (realReturn.Exception == null && realReturn.ReturnValue != null)
{
AddToCache(cacheKey, realReturn.ReturnValue);
}
return realReturn;
}
var cachedReturn = input.CreateMethodReturn(cachedResult, input.Arguments);
return cachedReturn;
}
}
private object getCachedResult(string cacheKey)
{
//When the method uses input that is not serializable
//we cannot create a cache key and can therefore not
//cache the data.
if (cacheKey == null)
{
return null;
}
object cachedValue = !storeOnlyForThisRequest ? HttpRuntime.Cache.Get(cacheKey) : HttpContext.Current.Items[cacheKey];
var cachedValueCast = cachedValue as CacheHandler;
if (cachedValueCast != null)
{
//This is an object that is reloaded when it is being removed.
//It is therefore wrapped in a CacheHandler-object and we must
//unwrap it before returning it.
return cachedValueCast.cachedData;
}
return cachedValue;
}
private static bool TargetMethodReturnsVoid(IMethodInvocation input)
{
var targetMethod = input.MethodBase as MethodInfo;
return targetMethod != null && targetMethod.ReturnType == typeof (void);
}
private void AddToCache(string key, object valueToCache)
{
if (key == null)
{
//When the method uses input that is not serializable
//we cannot create a cache key and can therefore not
//cache the data.
return;
}
if (!storeOnlyForThisRequest)
{
HttpRuntime.Cache.Insert(
key,
valueToCache,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
expirationTime,
CacheItemPriority.Normal, null);
}
else
{
HttpContext.Current.Items[key] = valueToCache;
}
}
}
/// <summary>
/// This interface describes classes that can be used to generate cache key strings
/// for the <see cref="CacheHandler"/>.
/// </summary>
public interface ICacheKeyGenerator
{
/// <summary>
/// Creates a cache key for the given method and set of input arguments.
/// </summary>
/// <param name="method">Method being called.</param>
/// <param name="inputs">Input arguments.</param>
/// <returns>A (hopefully) unique string to be used as a cache key.</returns>
string CreateCacheKey(MethodBase method, object[] inputs);
}
/// <summary>
/// The default <see cref="ICacheKeyGenerator"/> used by the <see cref="CacheHandler"/>.
/// </summary>
public class DefaultCacheKeyGenerator : ICacheKeyGenerator
{
private readonly LosFormatter serializer = new LosFormatter(false, "");
#region ICacheKeyGenerator Members
/// <summary>
/// Create a cache key for the given method and set of input arguments.
/// </summary>
/// <param name="method">Method being called.</param>
/// <param name="inputs">Input arguments.</param>
/// <returns>A (hopefully) unique string to be used as a cache key.</returns>
public string CreateCacheKey(MethodBase method, params object[] inputs)
{
try
{
var sb = new StringBuilder();
if (method.DeclaringType != null)
{
sb.Append(method.DeclaringType.FullName);
}
sb.Append(':');
sb.Append(method.Name);
TextWriter writer = new StringWriter(sb);
if (inputs != null)
{
foreach (var input in inputs)
{
sb.Append(':');
if (input != null)
{
//Different instances of DateTime which represent the same value
//sometimes serialize differently due to some internal variables which are different.
//We therefore serialize it using Ticks instead.
var inputDateTime = input as DateTime?;
if (inputDateTime.HasValue)
{
sb.Append(inputDateTime.Value.Ticks);
}
else
{
//Serialize the input and write it to the key StringBuilder.
serializer.Serialize(writer, input);
}
}
}
}
return sb.ToString();
}
catch
{
//Something went wrong when generating the key (probably an input value was not serializable).
//Return a null key.
return null;
}
}
#endregion
}
}
Microsoft deserves most credit for this code. We've only added stuff like caching at request level instead of across requests (more useful than you might think) and fixed some bugs (e.g. equal DateTime-objects serializing to different values).
Under what conditions do you need to invalidate your cache? Objects should be stored so that when they are invalidated repopulating the cache only requires re-caching the items that were invalidated.
For example, say you have cached a Customer object that contains the delivery details for an order along with the shopping basket. Invalidating the shopping basket because the customer added or removed an item would also require repopulating the delivery details unnecessarily.
(NOTE: This is an obtuse example and I'm not advocating it; I'm just trying to demonstrate the principle, and my imagination is a bit off today.)
Ed, I assume those lists and dictionaries contain almost static data with low chances of expiration. Then there's data that gets frequent hits but also changes more frequently, so you're caching it using the HttpRuntime cache.
Now, you should think of all that data and all of the dependencies between different types. If you logically find that the HttpRuntime cached data depends somehow on your GlobalData items, you should move that into the cache and set up the appropriate dependencies in there so you'll benefit from the "cascading expiration".
Even if you do use your custom caching mechanism, you'd still have to provide all the synchronization, so you won't save on that by avoiding the other.
If you need (pre-ordered) lists of items with a really low change frequency, you can still do that by using the HttpRuntime cache. So you could just cache a dictionary and either use it to list your items or to index and access them by your custom key.
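A small sketch of that kind of key-based dependency with the HttpRuntime cache (the key names and the LoadGlobalData/LoadCustomers helpers are made up for illustration); removing or replacing the parent entry automatically evicts the dependent one:
// Parent entry that other cached items depend on (LoadGlobalData is hypothetical).
HttpRuntime.Cache.Insert("GlobalData", LoadGlobalData());
// Dependent entry: the second argument of CacheDependency is a list of cache keys.
HttpRuntime.Cache.Insert(
    "CustomerList",
    LoadCustomers(),
    new System.Web.Caching.CacheDependency(null, new[] { "GlobalData" }));
// Invalidating the parent cascades to "CustomerList" as well.
HttpRuntime.Cache.Remove("GlobalData");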
How about the best (worst?) of both worlds?
Have the globaldata class manage all the cache access internally. The rest of your code can then just use globaldata, meaning that it doesn't need to be cache-aware at all.
You could change the cache implementation as/when you like just by updating globaldata, and the rest of your code won't know or care what's going on inside.
There's much more than that to consider when architecting your caching strategy. Think of your cache store as if it were your in-memory db. So carefully handle dependencies and expiration policy for each and every type stored in there. It really doesn't matter what you use for caching (system.web, other commercial solution, rolling your own...).
I'd try to centralize it though and also use some sort of a pluggable architecture. Make your data consumers access it through a common API (an abstract cache that exposes it) and plug your caching layer in at runtime (let's say the ASP.NET cache).
You should really take a top-down approach when caching data to avoid any kind of data integrity problems (proper dependencies, like I said) and then take care of providing synchronization.
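One way to sketch that kind of pluggable abstraction (the interface and class names here are illustrative, not an existing library):
using System;
using System.Web;
public interface ICacheProvider
{
    T Get<T>(string key);
    void Set(string key, object value, TimeSpan slidingExpiration);
    void Remove(string key);
}
// ASP.NET-backed implementation plugged in at runtime; a different provider could
// wrap a distributed cache without touching the consuming code.
public class AspNetCacheProvider : ICacheProvider
{
    public T Get<T>(string key)
    {
        object value = HttpRuntime.Cache.Get(key);
        return value == null ? default(T) : (T)value;
    }
    public void Set(string key, object value, TimeSpan slidingExpiration)
    {
        HttpRuntime.Cache.Insert(key, value, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration, slidingExpiration);
    }
    public void Remove(string key)
    {
        HttpRuntime.Cache.Remove(key);
    }
}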