Currently I'm writing a program in C# to access multiple types of SQL servers, such as Microsoft SQL Server (T-SQL) or MySQL. I created a base class DBConnector that has an abstract method:
public abstract class DBConnector
{
//some other code...
protected abstract int ExecuteNonQueryOrScalar(DbConnection con, string statement, bool executeAsScalar, int timeout = 20);
//some other code...
}
Now I have two derived classes TSqlConnector and MySqlConnector which implement this abstract method:
For MySql it looks like this:
protected override int ExecuteNonQueryOrScalar(DbConnection con, string statement, bool executeAsScalar, int timeout = 20)
{
int result = -1;
using (MySqlCommand cmd = new MySqlCommand(statement, (MySqlConnection)con))
{
cmd.CommandTimeout = timeout;
cmd.Connection.Open();
if (executeAsScalar)
{
object resultObject = cmd.ExecuteScalar();
if (resultObject is int tmpResult)
result = tmpResult;
}
else
{
result = cmd.ExecuteNonQuery();
}
}
return result;
}
For TSql it looks like this:
protected override int ExecuteNonQueryOrScalar(DbConnection con, string statement, bool executeAsScalar, int timeout = 20)
{
int result = -1;
using (SqlCommand cmd = new SqlCommand(statement, (SqlConnection)con))
{
cmd.CommandTimeout = timeout;
cmd.Connection.Open();
if (executeAsScalar)
{
object resultObject = cmd.ExecuteScalar();
if (resultObject is int tmpResult)
result = tmpResult;
}
else
{
result = cmd.ExecuteNonQuery();
}
}
return result;
}
By the way, these methods also contain some error handling and some other stuff, but I simplified my methods for this post.
This method gets called in my other custom methods like this:
Code in my calling Insert-Method (obj is an instance of a model class which has the same properties as the database table, including ID):
obj.ID = ExecuteNonQueryOrScalar(GetConnection(), sql, true); //true for scalar-execution
Code in my calling Update-Method:
affectedLines = ExecuteNonQueryOrScalar(GetConnection(), sql, false); //false for nonQuery-execution
Code in my calling Delete-Method:
affectedLines = ExecuteNonQueryOrScalar(GetConnection(), sql, false); //false for nonQuery-execution
GetConnection() returns a DbConnection object which is either a SqlConnection or a MySqlConnection at runtime. sql is my SQL string. The boolean parameter decides whether to call ExecuteNonQuery or ExecuteScalar in ExecuteNonQueryOrScalar(), as you can see in my code above.
Now to my questions:
As you can see, the code for the two implementations is almost identical. The only differences are the types of the connection and the command. I have heard that you should follow the principle "Don't repeat yourself", but I'm repeating myself here. Do you have an idea what I could do here, or should I just stick with what I currently have? I had the idea to merge the two methods into a single method in my base class DBConnector and work with generic parameters, constrained with where T : IDBEntity and where K : DbConnection, but I get compile-time errors when I do that, and I couldn't really find a solution to prevent them.
Do you have any suggestions how to implement this differently? What would you change?
Let me know if I need to share more of my code.
Thank you for taking the time to read my question.
Edit: Solution to my question:
After I read the responses I realized how to improve my code. I moved my method ExecuteNonQueryOrScalar into my base class DBConnector and added an IDbCommand parameter that can be any IDbCommand object at runtime. The method looks like this (simplified version):
protected int ExecuteNonQueryOrScalar(IDbCommand cmd, bool executeAsScalar, int timeout = 20)
{
int result = -1;
try
{
cmd.CommandTimeout = timeout;
if (cmd.Connection.State != ConnectionState.Open)
cmd.Connection.Open();
if (executeAsScalar)
{
string resultObject = cmd.ExecuteScalar()?.ToString(); //null-conditional guards against a null scalar result
if (Int32.TryParse(resultObject, out int tmpResult)) //if (resultObject is int tmpResult)
result = tmpResult;
}
else
{
result = cmd.ExecuteNonQuery();
}
}
//Some error handling...
finally
{
if (cmd.Connection.State != ConnectionState.Closed)
cmd.Connection.Close();
}
return result;
}
Here's an example of how I call that ExecuteNonQueryOrScalar method in my Update method (simplified) in the same DBConnector class:
protected int Update<T, K>(T obj, List<DBCondition> filterValues = null) where T : IDBEntity, new() where K : IDbCommand, new()
{
int affectedLines = -1;
if (obj != null)
{
using (IDbCommand cmd = new K())
{
cmd.Connection = GetConnection();
cmd.CommandText = obj.GetSqlUpdateStatement(DBMapping.GetDBMapping<T>(), _sqlSpecificSymbols, filterValues);
affectedLines = ExecuteNonQueryOrScalar(cmd, false);
}
}
return affectedLines;
}
And finally, here are two examples of how I call this Update method in my MySqlConnector and TSqlConnector:
MySqlConnector:
public override int Update<T>(T obj, List<DBCondition> filterValues = null)
{
return base.Update<T, MySqlCommand>(obj, filterValues); //Call DbConnector.Update<T, K>() from the example above
}
TSqlConnector:
public override int Update<T>(T obj, List<DBCondition> filterValues = null)
{
return base.Update<T, SqlCommand>(obj, filterValues); //Call DbConnector.Update<T, K>() from the example above
}
Please don't hesitate to ask if you want to see more of my code! I could also upload my non-simplified original code to a site such as GitHub if some of you want to go through everything I did.
Your idea is a good one. But someone on the .NET team has saved you some time by creating interfaces that work this way, like IDbConnection and IDbCommand.
Then there are different concrete implementations of these interfaces that you can use when needed. For example IDbConnection con = new MySqlConnection(...) for MySQL, or IDbConnection con = new SqlConnection(...) for SQL Server.
These interfaces expose common methods, like ExecuteNonQuery. So what you're working with in your code is just a bunch of interfaces. Your code always passes around IDbConnections and IDbCommands; you just choose whichever implementation you need at construction time. This is dependency inversion.
Of course, when populating the actual text of the command, you will have to use the right SQL dialect.
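For illustration, here is a hedged sketch of what that can look like in practice (useMySql and connectionString are placeholders of mine, and it needs System.Data, System.Data.SqlClient and MySql.Data.MySqlClient):
// Pick the concrete provider once, at construction time; everything
// after this point only sees the ADO.NET interfaces.
IDbConnection con = useMySql
    ? (IDbConnection)new MySqlConnection(connectionString)
    : new SqlConnection(connectionString);
using (con)
using (IDbCommand cmd = con.CreateCommand())
{
    cmd.CommandText = "DELETE FROM SomeTable WHERE ID = 1"; //placeholder SQL
    cmd.CommandTimeout = 20;
    con.Open();
    int affectedLines = cmd.ExecuteNonQuery();
}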
Does this do what you were trying to do?
I was in the middle of implementing a database audit trail whereby CRUD operations performed through my controllers in my Web API project would serialize the old and new POCOs and store their values for later retrieval (historical, rollback, etc.).
When I got it all working, I did not like how it made my controllers look during a POST because I ended up having to call SaveChanges() twice, once to get the ID for the inserted entity and then again to commit the audit record which needed to know that ID.
I set out to convert the project (still in its infancy) to use sequences instead of identity columns. This has the added bonus of further abstracting me from SQL Server (though that is not really an issue). It also lets me reduce the number of commits, and lets me pull that logic out of the controller and into my service layer, which abstracts my controllers from the repositories and is where I can do work like this auditing in a "shim" layer.
Once the Sequence object was created and a stored procedure to expose it, I created the following class:
public class SequentialIdProvider : ISequentialIdProvider
{
private readonly IService<SequenceValue> _sequenceValueService;
public SequentialIdProvider(IService<SequenceValue> sequenceValueService)
{
_sequenceValueService = sequenceValueService;
}
public int GetNextId()
{
var value = _sequenceValueService.SelectQuery("GetSequenceIds @numberOfIds", new SqlParameter("numberOfIds", SqlDbType.Int) { Value = 1 }).ToList();
if (value.First() == null)
{
throw new Exception("Unable to retrieve the next id's from the sequence.");
}
return value.First().FirstValue;
}
public IList<int> GetNextIds(int numberOfIds)
{
var values = _sequenceValueService.SelectQuery("GetSequenceIds @numberOfIds", new SqlParameter("numberOfIds", SqlDbType.Int) { Value = numberOfIds }).ToList();
if (values.First() == null)
{
throw new Exception("Unable to retrieve the next id's from the sequence.");
}
var list = new List<int>();
for (var i = values.First().FirstValue; i <= values.First().LastValue; i++)
{
list.Add(i);
}
return list;
}
}
This simply provides two ways to get IDs: a single one, or a range.
This all worked great during the first set of unit tests but as soon as I started testing it in a real world scenario, I quickly realized that a single call to GetNextId() would return the same value for the life of that context, until SaveChanges() is called, thus negating any real benefit.
I am not sure if there is a way around this short of creating a second context (not an option) or going old school ADO.NET, making direct SQL calls and using AutoMapper to get to the same net result. Neither of these appeals to me, so I am hoping someone else has an idea.
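For what it's worth, the "old school ADO.NET" option might look something like the sketch below; the connection string and the result-set shape (a FirstValue column) are assumptions based on the code above, so treat this as illustrative only:
// Call the GetSequenceIds stored procedure directly so the value is fetched
// immediately, rather than being deferred by the EF context until SaveChanges().
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("GetSequenceIds", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@numberOfIds", SqlDbType.Int) { Value = 1 });
    con.Open();
    using (var reader = cmd.ExecuteReader())
    {
        if (!reader.Read())
            throw new Exception("Unable to retrieve the next IDs from the sequence.");
        int nextId = reader.GetInt32(reader.GetOrdinal("FirstValue"));
    }
}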
Don't know if this might help you, but this is how I did my audit log trail using code first.
The following is coded into a class inheriting from DbContext.
In my constructor I have the following:
IObjectContextAdapter objectContextAdapter = (this as IObjectContextAdapter);
objectContextAdapter.ObjectContext.SavingChanges += SavingChanges;
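Pulled together, this wiring sits in the DbContext subclass's constructor; a sketch for context (the class name AuditingContext is my assumption, not the original code):
public class AuditingContext : DbContext
{
    public AuditingContext()
    {
        // Hook the underlying ObjectContext's SavingChanges event so the
        // audit entries are built just before EF persists any changes.
        IObjectContextAdapter objectContextAdapter = (this as IObjectContextAdapter);
        objectContextAdapter.ObjectContext.SavingChanges += SavingChanges;
    }
    // The SavingChanges handler and factory methods follow below.
}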
This is my SavingChanges method, wired up above:
void SavingChanges(object sender, EventArgs e) {
Debug.Assert(sender != null, "Sender can't be null");
Debug.Assert(sender is ObjectContext, "Sender not instance of ObjectContext");
ObjectContext context = (sender as ObjectContext);
IEnumerable<ObjectStateEntry> modifiedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);
IEnumerable<ObjectStateEntry> addedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
addedEntities.ToList().ForEach(a => {
//Assign ids to objects that don't have
if (a.Entity is IIdentity && (a.Entity as IIdentity).Id == Guid.Empty)
(a.Entity as IIdentity).Id = Guid.NewGuid();
this.Set<AuditLogEntry>().Add(AuditLogEntryFactory(a, _AddedEntry));
});
modifiedEntities.ToList().ForEach(m => {
this.Set<AuditLogEntry>().Add(AuditLogEntryFactory(m, _ModifiedEntry));
});
}
And these are the methods used previously to build up the audit log details:
private AuditLogEntry AuditLogEntryFactory(ObjectStateEntry entry, string entryType) {
AuditLogEntry auditLogEntry = new AuditLogEntry() {
EntryDate = DateTime.Now,
EntryType = entryType,
Id = Guid.NewGuid(),
NewValues = AuditLogEntryNewValues(entry),
Table = entry.EntitySet.Name,
UserId = _UserId
};
if (entryType == _ModifiedEntry) auditLogEntry.OriginalValues = AuditLogEntryOriginalValues(entry);
return auditLogEntry;
}
/// <summary>
/// Creates a string of all modified properties for an entity.
/// </summary>
private string AuditLogEntryOriginalValues(ObjectStateEntry entry) {
StringBuilder stringBuilder = new StringBuilder();
entry.GetModifiedProperties().ToList().ForEach(m => {
stringBuilder.Append(String.Format("{0} = {1},", m, entry.OriginalValues[m]));
});
return stringBuilder.ToString();
}
/// <summary>
/// Creates a string of all modified properties' new values for an entity.
/// </summary>
private string AuditLogEntryNewValues(ObjectStateEntry entry) {
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < entry.CurrentValues.FieldCount; i++) {
stringBuilder.Append(String.Format("{0} = {1},",
entry.CurrentValues.GetName(i), entry.CurrentValues.GetValue(i)));
}
return stringBuilder.ToString();
}
Hopefully this might point you into a direction that might help you solve your problem.
I assume this code has concurrency issues:
const string CacheKey = "CacheKey";
static string GetCachedData()
{
string expensiveString = null;
if (MemoryCache.Default.Contains(CacheKey))
{
expensiveString = MemoryCache.Default[CacheKey] as string;
}
else
{
CacheItemPolicy cip = new CacheItemPolicy()
{
AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
};
expensiveString = SomeHeavyAndExpensiveCalculation();
MemoryCache.Default.Set(CacheKey, expensiveString, cip);
}
return expensiveString;
}
The reason for the concurrency issue is that multiple threads can see a null result for the key and then all attempt to insert data into the cache.
What would be the shortest and cleanest way to make this code concurrency-proof? I like to follow a good pattern across my cache-related code. A link to an online article would be a great help.
UPDATE:
I came up with this code based on @Scott Chamberlain's answer. Can anyone find any performance or concurrency issue with this?
If this works, it would save many lines of code and errors.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.Caching;
namespace CachePoc
{
class Program
{
static object everoneUseThisLockObject4CacheXYZ = new object();
const string CacheXYZ = "CacheXYZ";
static object everoneUseThisLockObject4CacheABC = new object();
const string CacheABC = "CacheABC";
static void Main(string[] args)
{
string xyzData = MemoryCacheHelper.GetCachedData<string>(CacheXYZ, everoneUseThisLockObject4CacheXYZ, 20, SomeHeavyAndExpensiveXYZCalculation);
string abcData = MemoryCacheHelper.GetCachedData<string>(CacheABC, everoneUseThisLockObject4CacheABC, 20, SomeHeavyAndExpensiveABCCalculation);
}
private static string SomeHeavyAndExpensiveXYZCalculation() {return "Expensive";}
private static string SomeHeavyAndExpensiveABCCalculation() {return "Expensive";}
public static class MemoryCacheHelper
{
public static T GetCachedData<T>(string cacheKey, object cacheLock, int cacheTimePolicyMinutes, Func<T> GetData)
where T : class
{
//Returns null if the string does not exist, which prevents a race condition where the cache invalidates between the contains check and the retrieval.
T cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
if (cachedData != null)
{
return cachedData;
}
lock (cacheLock)
{
//Check to see if anyone wrote to the cache while we were waiting for our turn to write the new value.
cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
if (cachedData != null)
{
return cachedData;
}
//The value still did not exist so we now write it in to the cache.
CacheItemPolicy cip = new CacheItemPolicy()
{
AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(cacheTimePolicyMinutes))
};
cachedData = GetData();
MemoryCache.Default.Set(cacheKey, cachedData, cip);
return cachedData;
}
}
}
}
}
This is my second iteration of the code. Because MemoryCache is thread-safe, you don't need to lock on the initial read: just read, and if the cache returns null, then do the lock check to see if you need to create the string. It greatly simplifies the code.
const string CacheKey = "CacheKey";
static readonly object cacheLock = new object();
private static string GetCachedData()
{
//Returns null if the string does not exist, which prevents a race condition where the cache invalidates between the contains check and the retrieval.
var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
if (cachedString != null)
{
return cachedString;
}
lock (cacheLock)
{
//Check to see if anyone wrote to the cache while we were waiting for our turn to write the new value.
cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
if (cachedString != null)
{
return cachedString;
}
//The value still did not exist so we now write it in to the cache.
var expensiveString = SomeHeavyAndExpensiveCalculation();
CacheItemPolicy cip = new CacheItemPolicy()
{
AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
};
MemoryCache.Default.Set(CacheKey, expensiveString, cip);
return expensiveString;
}
}
EDIT: The code below is unnecessary, but I wanted to leave it to show the original method. It may be useful to future visitors who are using a different collection that has thread-safe reads but non-thread-safe writes (almost all of the classes under the System.Collections namespace are like that).
Here is how I would do it using ReaderWriterLockSlim to protect access. You need to do a kind of "double-checked locking" to see if anyone else created the cached item while we were waiting to take the lock.
const string CacheKey = "CacheKey";
static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
static string GetCachedData()
{
//First we do a read lock to see if it already exists, this allows multiple readers at the same time.
cacheLock.EnterReadLock();
try
{
//Returns null if the string does not exist, which prevents a race condition where the cache invalidates between the contains check and the retrieval.
var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
if (cachedString != null)
{
return cachedString;
}
}
finally
{
cacheLock.ExitReadLock();
}
//Only one UpgradeableReadLock can exist at one time, but it can co-exist with many ReadLocks
cacheLock.EnterUpgradeableReadLock();
try
{
//We need to check again to see if the string was created while we were waiting to enter the EnterUpgradeableReadLock
var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
if (cachedString != null)
{
return cachedString;
}
//The entry still does not exist so we need to create it and enter the write lock
var expensiveString = SomeHeavyAndExpensiveCalculation();
cacheLock.EnterWriteLock(); //This will block till all the Readers flush.
try
{
CacheItemPolicy cip = new CacheItemPolicy()
{
AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
};
MemoryCache.Default.Set(CacheKey, expensiveString, cip);
return expensiveString;
}
finally
{
cacheLock.ExitWriteLock();
}
}
finally
{
cacheLock.ExitUpgradeableReadLock();
}
}
There is an open source library [disclaimer: that I wrote]: LazyCache that IMO covers your requirement with two lines of code:
IAppCache cache = new CachingService();
var cachedResults = cache.GetOrAdd("CacheKey",
() => SomeHeavyAndExpensiveCalculation());
It has built-in locking by default, so the cacheable method will only execute once per cache miss, and it uses a lambda so you can do "get or add" in one go. It defaults to a 20-minute sliding expiration.
There's even a NuGet package ;)
I've solved this issue by making use of the AddOrGetExisting method on the MemoryCache and the use of Lazy initialization.
Essentially, my code looks something like this:
static string GetCachedData(string key, DateTimeOffset offset)
{
Lazy<String> lazyObject = new Lazy<String>(() => SomeHeavyAndExpensiveCalculationThatReturnsAString());
var returnedLazyObject = MemoryCache.Default.AddOrGetExisting(key, lazyObject, offset);
if (returnedLazyObject == null)
return lazyObject.Value;
return ((Lazy<String>) returnedLazyObject).Value;
}
Worst case scenario here is that you create the same Lazy object twice. But that is pretty trivial. The use of AddOrGetExisting guarantees that you'll only ever get one instance of the Lazy object, and so you're also guaranteed to only call the expensive initialization method once.
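One hedged caveat to this approach: with the default Lazy<T> mode (ExecutionAndPublication), an exception thrown by the factory is cached by that Lazy instance and re-thrown on every access for as long as the entry lives. If that matters for your case, LazyThreadSafetyMode.PublicationOnly avoids caching the exception, at the cost of possibly running the factory on more than one thread:
// Variant: a faulted factory is retried on the next access instead of having
// its exception cached (requires System.Threading).
Lazy<String> lazyObject = new Lazy<String>(
    () => SomeHeavyAndExpensiveCalculationThatReturnsAString(),
    LazyThreadSafetyMode.PublicationOnly);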
I assume this code has concurrency issues:
Actually, it's quite possibly fine, though with a possible improvement.
Now, in general, the pattern of having multiple threads set a shared value on first use without locking on the value being obtained and set can be:
Disastrous - other code will assume only one instance exists.
Disastrous - the code that obtains the instance can only tolerate one (or perhaps a certain small number of) concurrent operations.
Disastrous - the means of storage is not thread-safe (e.g. have two threads adding to a dictionary and you can get all sorts of nasty errors).
Sub-optimal - the overall performance is worse than if locking had ensured only one thread did the work of obtaining the value.
Optimal - the cost of having multiple threads do redundant work is less than the cost of preventing it, especially since that can only happen during a relatively brief period.
However, considering here that MemoryCache may evict entries, then:
If it's disastrous to have more than one instance then MemoryCache is the wrong approach.
If you must prevent simultaneous creation, you should do so at the point of creation.
MemoryCache is thread-safe in terms of access to that object, so that is not a concern here.
Both of these possibilities have to be thought about of course, though the only time having two instances of the same string existing can be a problem is if you're doing very particular optimisations that don't apply here*.
So, we're left with the possibilities:
It is cheaper to avoid the cost of duplicate calls to SomeHeavyAndExpensiveCalculation().
It is cheaper not to avoid the cost of duplicate calls to SomeHeavyAndExpensiveCalculation().
And working that out can be difficult (indeed, the sort of thing where it's worth profiling rather than assuming you can work it out). It's worth considering here though that most obvious ways of locking on insert will prevent all additions to the cache, including those that are unrelated.
This means that if we had 50 threads trying to set 50 different values, then we'll have to make all 50 threads wait on each other, even though they weren't even going to do the same calculation.
As such, you're probably better off with the code you have, than with code that avoids the race-condition, and if the race-condition is a problem, you quite likely either need to handle that somewhere else, or need a different caching strategy than one that expels old entries†.
The one thing I would change is I'd replace the call to Set() with one to AddOrGetExisting(). From the above it should be clear that it probably isn't necessary, but it would allow the newly obtained item to be collected, reducing overall memory use and allowing a higher ratio of low generation to high generation collections.
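Concretely, that swap might look like the following sketch. AddOrGetExisting returns the entry that was already cached, or null if our value was the one inserted:
//Instead of: MemoryCache.Default.Set(CacheKey, expensiveString, cip);
var existing = MemoryCache.Default.AddOrGetExisting(CacheKey, expensiveString, cip) as string;
//If another thread won the race, hand back its value so our redundant
//newly created string is free to be garbage collected.
return existing ?? expensiveString;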
So yeah, you could use double-locking to prevent the concurrency, but either the concurrency isn't actually a problem, or you're storing the values in the wrong way, or double-locking on the store would not be the best way to solve it.
*If you know only one each of a set of strings exists, you can optimise equality comparisons, which is about the only time having two copies of a string can be incorrect rather than just sub-optimal, but you'd want to be doing very different types of caching for that to make sense. E.g. the sort XmlReader does internally.
†Quite likely either one that stores indefinitely, or one that makes use of weak references so it will only expel entries if there are no existing uses.
Somewhat dated question, but maybe still useful: you may take a look at FusionCache ⚡🦥, which I recently released.
The feature you are looking for is described here, and you can use it like this:
const string CacheKey = "CacheKey";
static string GetCachedData()
{
return fusionCache.GetOrSet(
CacheKey,
_ => SomeHeavyAndExpensiveCalculation(),
TimeSpan.FromMinutes(20)
);
}
You may also find some of the other features interesting like fail-safe, advanced timeouts with background factory completion and support for an optional, distributed 2nd level cache.
If you will give it a chance please let me know what you think.
/shameless-plug
It is difficult to choose which one is better: lock or ReaderWriterLockSlim. You need real-world statistics of read and write numbers, ratios, etc.
But if you believe using "lock" is the correct way, then here is a different solution for different needs. I also include Allan Xu's solution in the code, because both can be needed for different needs.
Here are the requirements driving me to this solution:
You don't want to or cannot supply the 'GetData' function for some reason. Perhaps the 'GetData' function is located in some other class with a heavy constructor and you do not want to even create an instance until you are sure it is unavoidable.
You need to access the same cached data from different locations/tiers of the application, and those different locations don't have access to the same locker object.
You don't have a constant cache key. For example; need of caching some data with the sessionId cache key.
Code:
using System;
using System.Runtime.Caching;
using System.Collections.Concurrent;
using System.Collections.Generic;
namespace CachePoc
{
class Program
{
static object everoneUseThisLockObject4CacheXYZ = new object();
const string CacheXYZ = "CacheXYZ";
static object everoneUseThisLockObject4CacheABC = new object();
const string CacheABC = "CacheABC";
static void Main(string[] args)
{
//Allan Xu's usage
string xyzData = MemoryCacheHelper.GetCachedDataOrAdd<string>(CacheXYZ, everoneUseThisLockObject4CacheXYZ, 20, SomeHeavyAndExpensiveXYZCalculation);
string abcData = MemoryCacheHelper.GetCachedDataOrAdd<string>(CacheABC, everoneUseThisLockObject4CacheABC, 20, SomeHeavyAndExpensiveABCCalculation);
//My usage
string sessionId = System.Web.HttpContext.Current.Session["CurrentUser.SessionId"].ToString();
string yvz = MemoryCacheHelper.GetCachedData<string>(sessionId);
if (string.IsNullOrWhiteSpace(yvz))
{
object locker = MemoryCacheHelper.GetLocker(sessionId);
lock (locker)
{
yvz = MemoryCacheHelper.GetCachedData<string>(sessionId);
if (string.IsNullOrWhiteSpace(yvz))
{
DatabaseRepositoryWithHeavyConstructorOverHead dbRepo = new DatabaseRepositoryWithHeavyConstructorOverHead();
yvz = dbRepo.GetDataExpensiveDataForSession(sessionId);
MemoryCacheHelper.AddDataToCache(sessionId, yvz, 5);
}
}
}
}
private static string SomeHeavyAndExpensiveXYZCalculation() { return "Expensive"; }
private static string SomeHeavyAndExpensiveABCCalculation() { return "Expensive"; }
public static class MemoryCacheHelper
{
//Allan Xu's solution
public static T GetCachedDataOrAdd<T>(string cacheKey, object cacheLock, int minutesToExpire, Func<T> GetData) where T : class
{
//Returns null if the string does not exist, which prevents a race condition where the cache invalidates between the contains check and the retrieval.
T cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
if (cachedData != null)
return cachedData;
lock (cacheLock)
{
//Check to see if anyone wrote to the cache while we were waiting for our turn to write the new value.
cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
if (cachedData != null)
return cachedData;
cachedData = GetData();
MemoryCache.Default.Set(cacheKey, cachedData, DateTime.Now.AddMinutes(minutesToExpire));
return cachedData;
}
}
#region "My Solution"
readonly static ConcurrentDictionary<string, object> Lockers = new ConcurrentDictionary<string, object>();
public static object GetLocker(string cacheKey)
{
CleanupLockers();
return Lockers.GetOrAdd(cacheKey, key => new object());
}
public static T GetCachedData<T>(string cacheKey) where T : class
{
CleanupLockers();
T cachedData = MemoryCache.Default.Get(cacheKey) as T;
return cachedData;
}
public static void AddDataToCache(string cacheKey, object value, int cacheTimePolicyMinutes)
{
CleanupLockers();
MemoryCache.Default.Add(cacheKey, value, DateTimeOffset.Now.AddMinutes(cacheTimePolicyMinutes));
}
static DateTimeOffset lastCleanUpTime = DateTimeOffset.MinValue;
static void CleanupLockers()
{
if (DateTimeOffset.Now.Subtract(lastCleanUpTime).TotalMinutes > 1)
{
lock (Lockers)//maybe a better locker is needed?
{
try//bypass exceptions
{
List<string> lockersToRemove = new List<string>();
foreach (var locker in Lockers)
{
if (!MemoryCache.Default.Contains(locker.Key))
lockersToRemove.Add(locker.Key);
}
object dummy;
foreach (string lockerKey in lockersToRemove)
Lockers.TryRemove(lockerKey, out dummy);
lastCleanUpTime = DateTimeOffset.Now;
}
catch (Exception)
{ }
}
}
}
#endregion
}
}
class DatabaseRepositoryWithHeavyConstructorOverHead
{
internal string GetDataExpensiveDataForSession(string sessionId)
{
return "Expensive data from database";
}
}
}
To avoid the global lock, you can use SingletonCache to implement one lock per key, without exploding memory usage (the lock objects are removed when no longer referenced, and acquire/release is thread safe guaranteeing that only 1 instance is ever in use via compare and swap).
Using it looks like this:
static readonly SingletonCache<string, object> keyLocks = new SingletonCache<string, object>(); //static so the static method below can use it
const string CacheKey = "CacheKey";
static string GetCachedData()
{
string expensiveString = null;
if (MemoryCache.Default.Contains(CacheKey))
{
return MemoryCache.Default[CacheKey] as string;
}
// double checked lock
using (var lifetime = keyLocks.Acquire(CacheKey))
{
lock (lifetime.Value)
{
if (MemoryCache.Default.Contains(CacheKey))
{
return MemoryCache.Default[CacheKey] as string;
}
CacheItemPolicy cip = new CacheItemPolicy()
{
AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
};
expensiveString = SomeHeavyAndExpensiveCalculation();
MemoryCache.Default.Set(CacheKey, expensiveString, cip);
return expensiveString;
}
}
}
Code is here on GitHub: https://github.com/bitfaster/BitFaster.Caching
Install-Package BitFaster.Caching
There is also an LRU implementation that is lighter weight than MemoryCache, and has several advantages - faster concurrent reads and writes, bounded size, no background thread, internal perf counters etc. (disclaimer, I wrote it).
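A usage sketch for that LRU, mirroring the GetOrAdd style of the cache code above (the capacity value here is arbitrary):
// Bounded by item count rather than estimated memory; evicts the least
// recently used entry once capacity is reached.
var lru = new ConcurrentLru<string, string>(1024);
string value = lru.GetOrAdd(CacheKey, key => SomeHeavyAndExpensiveCalculation());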
Console example of MemoryCache, "How to save/get simple class objects"
Output after launching and pressing any key except Esc:
Saving to cache!
Getting from cache!
some1
some2
class Some
{
public String text { get; set; }
public Some(String text)
{
this.text = text;
}
public override string ToString()
{
return text;
}
}
public static MemoryCache cache = new MemoryCache("cache");
public static string cache_name = "mycache";
static void Main(string[] args)
{
Some some1 = new Some("some1");
Some some2 = new Some("some2");
List<Some> list = new List<Some>();
list.Add(some1);
list.Add(some2);
do {
if (cache.Contains(cache_name))
{
Console.WriteLine("Getting from cache!");
List<Some> list_c = cache.Get(cache_name) as List<Some>;
foreach (Some s in list_c) Console.WriteLine(s);
}
else
{
Console.WriteLine("Saving to cache!");
cache.Set(cache_name, list, DateTime.Now.AddMinutes(10));
}
} while (Console.ReadKey(true).Key != ConsoleKey.Escape);
}
public interface ILazyCacheProvider : IAppCache
{
/// <summary>
/// Get data loaded - afterwards always return the cached result (even when the data is older than needed) but very fast!
/// </summary>
/// <param name="key"></param>
/// <param name="getData"></param>
/// <param name="slidingExpiration"></param>
/// <typeparam name="T"></typeparam>
/// <returns></returns>
T GetOrAddPermanent<T>(string key, Func<T> getData, TimeSpan slidingExpiration);
}
/// <summary>
/// Initialize LazyCache in runtime
/// </summary>
public class LazyCacheProvider : CachingService, ILazyCacheProvider
{
private readonly Logger _logger = LogManager.GetLogger("MemCashe");
private readonly Hashtable _hash = new Hashtable();
private readonly List<string> _reloader = new List<string>();
private readonly ConcurrentDictionary<string, DateTime> _lastLoad = new ConcurrentDictionary<string, DateTime>();
T ILazyCacheProvider.GetOrAddPermanent<T>(string dataKey, Func<T> getData, TimeSpan slidingExpiration)
{
var currentPrincipal = Thread.CurrentPrincipal;
if (!ObjectCache.Contains(dataKey) && !_hash.Contains(dataKey))
{
_hash[dataKey] = null;
_logger.Debug($"{dataKey} - first start");
_lastLoad[dataKey] = DateTime.Now;
_hash[dataKey] = ((object)GetOrAdd(dataKey, getData, slidingExpiration)).CloneObject();
_lastLoad[dataKey] = DateTime.Now;
_logger.Debug($"{dataKey} - first");
}
else
{
if ((!ObjectCache.Contains(dataKey) || _lastLoad[dataKey].AddMinutes(slidingExpiration.Minutes) < DateTime.Now) && _hash[dataKey] != null)
Task.Run(() =>
{
if (_reloader.Contains(dataKey)) return;
lock (_reloader)
{
if (ObjectCache.Contains(dataKey))
{
if(_lastLoad[dataKey].AddMinutes(slidingExpiration.Minutes) > DateTime.Now)
return;
_lastLoad[dataKey] = DateTime.Now;
Remove(dataKey);
}
_reloader.Add(dataKey);
Thread.CurrentPrincipal = currentPrincipal;
_logger.Debug($"{dataKey} - reload start");
_hash[dataKey] = ((object)GetOrAdd(dataKey, getData, slidingExpiration)).CloneObject();
_logger.Debug($"{dataKey} - reload");
_reloader.Remove(dataKey);
}
});
}
if (_hash[dataKey] != null) return (T) (_hash[dataKey]);
_logger.Debug($"{dataKey} - dummy start");
var data = GetOrAdd(dataKey, getData, slidingExpiration);
_logger.Debug($"{dataKey} - dummy");
return (T)((object)data).CloneObject();
}
}
It's a bit late, however...
Full implementation:
[HttpGet]
public async Task<HttpResponseMessage> GetPageFromUriOrBody(RequestQuery requestQuery)
{
log(nameof(GetPageFromUriOrBody), nameof(requestQuery));
var responseResult = await _requestQueryCache.GetOrCreate(
nameof(GetPageFromUriOrBody)
, requestQuery
, (x) => getPageContent(x).Result);
return Request.CreateResponse(System.Net.HttpStatusCode.Accepted, responseResult);
}
static MemoryCacheWithPolicy<RequestQuery, string> _requestQueryCache = new MemoryCacheWithPolicy<RequestQuery, string>();
Here is the getPageContent signature:
async Task<string> getPageContent(RequestQuery requestQuery);
And here is the MemoryCacheWithPolicy implementation:
public class MemoryCacheWithPolicy<TParameter, TResult>
{
static ILogger _nlogger = new AppLogger().Logger;
private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions()
{
//SizeLimit: a unitless cap on the sum of all entry sizes (set via SetSize below), not an actual memory/byte limit!
SizeLimit = 1024
});
/// <summary>
/// Gets or creates a memory cache record for the main data,
/// along with the parameter data that is associated with it.
/// </summary>
/// <param name="key">Main data cache memory key.</param>
/// <param name="param">Parameter model that assocciated to main model (request result).</param>
/// <param name="createCacheData">A delegate to create a new main data to cache.</param>
/// <returns></returns>
public async Task<TResult> GetOrCreate(object key, TParameter param, Func<TParameter, TResult> createCacheData)
{
// this key is used for param cache memory.
var paramKey = key + nameof(param);
if (!_cache.TryGetValue(key, out TResult cacheEntry))
{
// key is not in the cache, create data through the delegate.
cacheEntry = createCacheData(param);
createMemoryCache(key, cacheEntry, paramKey, param);
_nlogger.Warn(" cache is created.");
}
else
{
// data is cached; check whether the param model is the same (or has changed)
if(!_cache.TryGetValue(paramKey, out TParameter cacheParam))
{
//exception: this case should not happen!
}
if (!cacheParam.Equals(param))
{
// request param is changed, create data through the delegate.
cacheEntry = createCacheData(param);
createMemoryCache(key, cacheEntry, paramKey, param);
_nlogger.Warn(" cache is re-created (param model has been changed).");
}
else
{
_nlogger.Trace(" cache is used.");
}
}
return await Task.FromResult<TResult>(cacheEntry);
}
MemoryCacheEntryOptions createMemoryCacheEntryOptions(TimeSpan slidingOffset, TimeSpan relativeOffset)
{
// Cache data within [slidingOffset] seconds,
// request new result after [relativeOffset] seconds.
return new MemoryCacheEntryOptions()
// Size: this entry counts as one unit toward the cache's
// SizeLimit (a unitless measure, not an actual memory size).
.SetSize(1)
// Priority on removing when reaching size limit (memory pressure)
.SetPriority(CacheItemPriority.High)
// Keep in cache for this amount of time, reset it if accessed.
.SetSlidingExpiration(slidingOffset)
// Remove from cache after this time, regardless of sliding expiration
.SetAbsoluteExpiration(relativeOffset);
//
}
void createMemoryCache(object key, TResult cacheEntry, object paramKey, TParameter param)
{
// Cache data within 2 seconds,
// request new result after 5 seconds.
var cacheEntryOptions = createMemoryCacheEntryOptions(
TimeSpan.FromSeconds(2)
, TimeSpan.FromSeconds(5));
// Save data in cache.
_cache.Set(key, cacheEntry, cacheEntryOptions);
// Save param in cache.
_cache.Set(paramKey, param, cacheEntryOptions);
}
void checkCacheEntry<T>(object key, string name)
{
_cache.TryGetValue(key, out T value);
_nlogger.Fatal("Key: {0}, Name: {1}, Value: {2}", key, name, value);
}
}
_nlogger is just an NLog object used to trace the MemoryCacheWithPolicy behavior.
I re-create the cache entry if the request object (RequestQuery requestQuery) has changed, via the delegate (Func<TParameter, TResult> createCacheData), and also when the sliding or absolute expiration reaches its limit. Note that everything is async too ;)
Seeking some advice, best practice etc...
Technology: C# .NET4.0, Winforms, 32 bit
I am seeking some advice on how I can best tackle large data processing in my C# Winforms application which experiences high memory usage (working set) and the occasional OutOfMemory exception.
The problem is that we perform a large amount of data processing "in-memory" when a "shopping-basket" is opened. In simplistic terms, when a "shopping-basket" is loaded we perform the following calculations:
For each item in the "shopping-basket", retrieve its historical price going all the way back to the date the item first appeared in stock (could be two months, two years or two decades of data). Historical price data is retrieved from text files, over the internet, or any format supported by a price plugin.
For each item, for each day since it first appeared in stock, calculate various metrics which build a historical profile for each item in the shopping-basket.
The result is that we can potentially perform hundreds, thousands and/or millions of calculations depending upon the number of items in the "shopping-basket". If the basket contains too many items, we run the risk of hitting an "OutOfMemory" exception.
A couple of caveats:
This data needs to be calculated for each item in the "shopping-basket" and the data is kept until the "shopping-basket" is closed.
Even though we perform steps 1 and 2 in a background thread, speed is important, as the number of items in the "shopping-basket" can greatly affect the overall calculation speed.
Memory is salvaged by the .NET garbage collector when a "shopping-basket" is closed. We have profiled our application and ensure that all references are correctly disposed and closed when a basket is closed.
After all the calculations are completed, the resultant data is stored in an IDictionary. CalculatedData is a class whose properties are the individual metrics calculated by the above process.
Some ideas I've thought about:
Obviously my main concern is to reduce the amount of memory being used by the calculations; however, the volume of memory used can only be reduced if I
1) reduce the number of metrics being calculated for each day or
2) reduce the number of days used for the calculation.
Neither of these options is viable if we wish to fulfill our business requirements.
Memory Mapped Files
One idea has been to use memory mapped files which will store the data dictionary. Would this be possible/feasible and how can we put this into place?
Use a temporary database
The idea is to use a separate (not in-memory) database which can be created for the life-cycle of the application. As "shopping-baskets" are opened we can persist the calculated data to the database for repeated use, alleviating the requirement to recalculate for the same "shopping-basket".
Are there any other alternatives that we should consider? What is best practice when it comes to calculations on large data and performing them outside of RAM?
Any advice is appreciated....
The easiest solution is a database, perhaps SQLite. Memory-mapped files don't automatically become dictionaries; you would have to code all the memory management yourself, and thereby fight with the .NET GC system itself for ownership of the data.
If you're interested in trying the memory-mapped file approach, you can try it now. I wrote a small native .NET package called MemMapCache that in essence creates a key/val database backed by MemoryMappedFiles. It's a bit of a hacky concept, but the program MemMapCache.exe keeps all references to the memory-mapped files, so that if your application crashes, you don't have to worry about losing the state of your cache.
It's very simple to use and you should be able to drop it in your code without too many modifications. Here is an example using it: https://github.com/jprichardson/MemMapCache/blob/master/TestMemMapCache/MemMapCacheTest.cs
Maybe it'd be of some use to you to at least further figure out what you need to do for an actual solution.
Please let me know if you do end up using it. I'd be interested in your results.
However, long-term, I'd recommend Redis.
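For a rough idea of what the Redis route could look like, here is a hedged sketch using the StackExchange.Redis client (my choice of client and key naming, not something from the original setup):
// Connect once and reuse the multiplexer; cache a serialized basket profile
// with a one-hour expiry, then read it back.
var redis = ConnectionMultiplexer.Connect("localhost");
IDatabase db = redis.GetDatabase();
db.StringSet("basket:123:metrics", serializedMetrics, TimeSpan.FromHours(1));
string cached = db.StringGet("basket:123:metrics");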
As an update for those stumbling upon this thread...
We ended up using SQLite as our caching solution. The SQLite database we employ exists separately from the main data store used by the application. We persist calculated data to the SQLite database (diskCache) as it's required, and have code controlling cache invalidation etc. This was a suitable solution for us, as we were able to achieve write speeds of around 100,000 records per second.
For those interested, this is the code that controls inserts into the diskCache. Full credit for this code goes to JP Richardson (shown answering a question here) for his excellent blog post.
internal class SQLiteBulkInsert
{
#region Class Declarations
private SQLiteCommand m_cmd;
private SQLiteTransaction m_trans;
private readonly SQLiteConnection m_dbCon;
private readonly Dictionary<string, SQLiteParameter> m_parameters = new Dictionary<string, SQLiteParameter>();
private uint m_counter;
private readonly string m_beginInsertText;
#endregion
#region Constructor
public SQLiteBulkInsert(SQLiteConnection dbConnection, string tableName)
{
m_dbCon = dbConnection;
m_tableName = tableName;
var query = new StringBuilder(255);
query.Append("INSERT INTO ["); query.Append(tableName); query.Append("] (");
m_beginInsertText = query.ToString();
}
#endregion
#region Allow Bulk Insert
private bool m_allowBulkInsert = true;
public bool AllowBulkInsert { get { return m_allowBulkInsert; } set { m_allowBulkInsert = value; } }
#endregion
#region CommandText
public string CommandText
{
get
{
if(m_parameters.Count < 1) throw new SQLiteException("You must add at least one parameter.");
var sb = new StringBuilder(255);
sb.Append(m_beginInsertText);
foreach(var param in m_parameters.Keys)
{
sb.Append('[');
sb.Append(param);
sb.Append(']');
sb.Append(", ");
}
sb.Remove(sb.Length - 2, 2);
sb.Append(") VALUES (");
foreach(var param in m_parameters.Keys)
{
sb.Append(m_paramDelim);
sb.Append(param);
sb.Append(", ");
}
sb.Remove(sb.Length - 2, 2);
sb.Append(")");
return sb.ToString();
}
}
#endregion
#region Commit Max
private uint m_commitMax = 25000;
public uint CommitMax { get { return m_commitMax; } set { m_commitMax = value; } }
#endregion
#region Table Name
private readonly string m_tableName;
public string TableName { get { return m_tableName; } }
#endregion
#region Parameter Delimiter
private const string m_paramDelim = ":";
public string ParamDelimiter { get { return m_paramDelim; } }
#endregion
#region AddParameter
public void AddParameter(string name, DbType dbType)
{
var param = new SQLiteParameter(m_paramDelim + name, dbType);
m_parameters.Add(name, param);
}
#endregion
#region Flush
public void Flush()
{
try
{
if (m_trans != null) m_trans.Commit();
}
catch (Exception ex)
{
throw new Exception("Could not commit transaction. See InnerException for more details", ex);
}
finally
{
if (m_trans != null) m_trans.Dispose();
m_trans = null;
m_counter = 0;
}
}
#endregion
#region Insert
public void Insert(object[] paramValues)
{
if (paramValues.Length != m_parameters.Count)
throw new Exception("The values array count must be equal to the count of the number of parameters.");
m_counter++;
if (m_counter == 1)
{
if (m_allowBulkInsert) m_trans = m_dbCon.BeginTransaction();
m_cmd = m_dbCon.CreateCommand();
foreach (var par in m_parameters.Values)
m_cmd.Parameters.Add(par);
m_cmd.CommandText = CommandText;
}
var i = 0;
foreach (var par in m_parameters.Values)
{
par.Value = paramValues[i];
i++;
}
m_cmd.ExecuteNonQuery();
if (m_counter == m_commitMax)
{
try
{
if (m_trans != null) m_trans.Commit();
}
catch (Exception)
{ }
finally
{
if (m_trans != null)
{
m_trans.Dispose();
m_trans = null;
}
m_counter = 0;
}
}
}
#endregion
}
I wanted to ask you: what is the best approach to implementing a cache in C#? Is there a way to do it using existing .NET classes or something similar? Perhaps something like a dictionary that will remove some entries if it gets too large, but whose entries won't be removed by the garbage collector?
If you are using .NET 4 or later, you can use the MemoryCache class.
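A minimal usage sketch (the key, the expiration and LoadValue are placeholders of mine):
using System;
using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default;
string value = cache["myKey"] as string;
if (value == null)
{
    value = LoadValue(); //placeholder for whatever produces the value
    cache.Set("myKey", value, DateTimeOffset.Now.AddMinutes(10));
}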
If you're using ASP.NET, you could use the Cache class (System.Web.Caching).
Here is a good helper class: c-cache-helper-class
If you mean caching in a windows form app, it depends on what you're trying to do, and where you're trying to cache the data.
We've implemented a cache behind a web service for certain methods (using the System.Web.Caching Cache object).
However, you might also want to look at the Caching Application Block (see here), which is part of the Enterprise Library for .NET Framework 2.0.
MemoryCache in the framework is a good place to start, but you might also like to consider the open source library LazyCache, because it has a simpler API than MemoryCache and has built-in locking as well as some other developer-friendly features. It is also available on NuGet.
To give you an example:
// Create our cache service using the defaults (Dependency injection ready).
// Uses MemoryCache.Default as default so cache is shared between instances
IAppCache cache = new CachingService();
// Declare (but don't execute) a func/delegate whose result we want to cache
Func<ComplexObject> complexObjectFactory = () => methodThatTakesTimeOrResources();
// Get our ComplexObjects from the cache, or build them in the factory func
// and cache the results for next time under the given key
ComplexObject cachedResults = cache.GetOrAdd("uniqueKey", complexObjectFactory);
I recently wrote this article about getting started with caching in dot net that you may find useful.
(Disclaimer: I am the author of LazyCache)
The cache classes supplied with .NET are handy, but have a major problem: they cannot store large volumes of data (tens of millions of objects or more) for a long time without killing your GC. They work great if you cache a few thousand objects, but the moment you move into millions and keep them around until they propagate into Gen 2, the GC pauses will eventually start to be noticeable when your system reaches a low-memory threshold and the GC needs to sweep all generations.
The practicality is this: if you need to store a few hundred thousand instances, use the MS cache. It does not matter whether your objects are 2-field or 25-field; it's about the number of references.
On the other hand, there are cases when the large amounts of RAM that are common these days need to be utilized, e.g. 64 GB.
For that we have created a 100% managed memory manager and cache that sits on top of it.
Our solution can easily store 300,000,000 objects in-memory in-process without taxing the GC at all - this is because we store data in large (250 MB) byte[] segments.
Here is the code: NFX Pile (Apache 2.0)
And video:
NFX Pile Cache - Youtube
You can use the ObjectCache.
See http://msdn.microsoft.com/en-us/library/system.runtime.caching.objectcache.aspx
For Local Stores
.NET MemoryCache
NCache Express
AppFabric Caching
...
As mentioned in other answers, the default choice using the .NET Framework is MemoryCache and the various related implementations in Microsoft NuGet packages (e.g. Microsoft.Extensions.Caching.Memory). All of these caches bound size in terms of memory used, and attempt to estimate memory used by tracking how total physical memory is increasing relative to the number of cached objects. A background thread then periodically 'trims' entries.
MemoryCache etc. share some limitations:
Keys are strings, so if the key type is not natively string, you will be forced to constantly allocate strings on the heap. This can really add up in a server application when items are 'hot'.
Has poor 'scan resistance' - e.g. if some automated process is rapidly looping through all the items that exist, the cache size can grow too fast for the background thread to keep up. This can result in memory pressure, page faults, induced GC or, when running under IIS, recycling the process due to exceeding the private bytes limit.
Does not scale well with concurrent writes.
Contains perf counters that cannot be disabled (which incur overhead).
Your workload will determine the degree to which these things are problematic. An alternative approach to caching is to bound the number of objects in the cache (rather than estimating memory used). A cache replacement policy then determines which object to discard when the cache is full.
Below is the source code for a simple cache with least recently used eviction policy:
public sealed class ClassicLru<K, V>
{
private readonly int capacity;
private readonly ConcurrentDictionary<K, LinkedListNode<LruItem>> dictionary;
private readonly LinkedList<LruItem> linkedList = new LinkedList<LruItem>();
private long requestHitCount;
private long requestTotalCount;
public ClassicLru(int capacity)
: this(Defaults.ConcurrencyLevel, capacity, EqualityComparer<K>.Default)
{
}
public ClassicLru(int concurrencyLevel, int capacity, IEqualityComparer<K> comparer)
{
if (capacity < 3)
{
throw new ArgumentOutOfRangeException(nameof(capacity), "Capacity must be greater than or equal to 3.");
}
if (comparer == null)
{
throw new ArgumentNullException(nameof(comparer));
}
this.capacity = capacity;
this.dictionary = new ConcurrentDictionary<K, LinkedListNode<LruItem>>(concurrencyLevel, this.capacity + 1, comparer);
}
public int Count => this.linkedList.Count;
public double HitRatio => (double)requestHitCount / (double)requestTotalCount;
///<inheritdoc/>
public bool TryGet(K key, out V value)
{
Interlocked.Increment(ref requestTotalCount);
LinkedListNode<LruItem> node;
if (dictionary.TryGetValue(key, out node))
{
LockAndMoveToEnd(node);
Interlocked.Increment(ref requestHitCount);
value = node.Value.Value;
return true;
}
value = default(V);
return false;
}
public V GetOrAdd(K key, Func<K, V> valueFactory)
{
if (this.TryGet(key, out var value))
{
return value;
}
var node = new LinkedListNode<LruItem>(new LruItem(key, valueFactory(key)));
if (this.dictionary.TryAdd(key, node))
{
LinkedListNode<LruItem> first = null;
lock (this.linkedList)
{
if (linkedList.Count >= capacity)
{
first = linkedList.First;
linkedList.RemoveFirst();
}
linkedList.AddLast(node);
}
// Remove from the dictionary outside the lock. This means that the dictionary at this moment
// contains an item that is not in the linked list. If another thread fetches this item,
// LockAndMoveToEnd will ignore it, since it is detached. This means we potentially 'lose' an
// item just as it was about to move to the back of the LRU list and be preserved. The next request
// for the same key will be a miss. Dictionary and list are eventually consistent.
// However, all operations inside the lock are extremely fast, so contention is minimized.
if (first != null)
{
dictionary.TryRemove(first.Value.Key, out var removed);
if (removed.Value.Value is IDisposable d)
{
d.Dispose();
}
}
return node.Value.Value;
}
return this.GetOrAdd(key, valueFactory);
}
public bool TryRemove(K key)
{
if (dictionary.TryRemove(key, out var node))
{
// If the node has already been removed from the list, ignore.
// E.g. thread A reads x from the dictionary. Thread B adds a new item, removes x from
// the List & Dictionary. Now thread A will try to move x to the end of the list.
if (node.List != null)
{
lock (this.linkedList)
{
if (node.List != null)
{
linkedList.Remove(node);
}
}
}
if (node.Value.Value is IDisposable d)
{
d.Dispose();
}
return true;
}
return false;
}
// Thread A reads x from the dictionary. Thread B adds a new item. Thread A moves x to the end. Thread B now removes the new first Node (removal is atomic on both data structures).
private void LockAndMoveToEnd(LinkedListNode<LruItem> node)
{
// If the node has already been removed from the list, ignore.
// E.g. thread A reads x from the dictionary. Thread B adds a new item, removes x from
// the List & Dictionary. Now thread A will try to move x to the end of the list.
if (node.List == null)
{
return;
}
lock (this.linkedList)
{
if (node.List == null)
{
return;
}
linkedList.Remove(node);
linkedList.AddLast(node);
}
}
private class LruItem
{
public LruItem(K k, V v)
{
Key = k;
Value = v;
}
public K Key { get; }
public V Value { get; }
}
}
This is just to illustrate a thread safe cache - it probably has bugs and can be a bottleneck under heavy concurrent workloads (e.g. in a web server).
A thoroughly tested, production-ready, scalable concurrent implementation is a bit beyond a Stack Overflow post. To solve this in my projects, I implemented a thread-safe pseudo LRU (think concurrent dictionary, but with constrained size). Performance is very close to a raw ConcurrentDictionary: ~10x faster than MemoryCache, with ~10x better concurrent throughput than the ClassicLru above, and a better hit rate. A detailed performance analysis is provided in the GitHub link below.
Usage looks like this:
int capacity = 666;
var lru = new ConcurrentLru<int, SomeItem>(capacity);
var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
GitHub: https://github.com/bitfaster/BitFaster.Caching
Install-Package BitFaster.Caching
Your question needs more clarification. C# is a language, not a framework. You have to specify which framework you want to implement the caching in. Even if we assume you want to implement it in ASP.NET, it still depends entirely on what you want from the cache. You can decide between an in-process cache (which will keep the data inside the heap of your application) and an out-of-process cache (in this case you can store the data in memory other than the heap, like an Amazon ElastiCache server). And there is another decision to make: between client-side and server-side caching. Usually you have to develop different solutions for caching different data, because based on four factors (accessibility, persistence, size, cost) you have to decide which solution you need.
I wrote this some time ago and it seems to work well. It allows you to differentiate different cache stores by using different Types: ApplicationCaching<MyCacheType1>, ApplicationCaching<MyCacheType2>....
You can decide to allow some stores to persist after execution and others to expire.
You will need a reference to the Newtonsoft.Json serializer (or use an alternative one), and of course all objects or value types to be cached must be serializable.
Use MaxItemCount to set a limit to the number of items in any one store.
A separate Zipper class (see code below) uses System.IO.Compression. This minimises the size of the store and helps speed up loading times.
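The Zipper class itself is not shown in this excerpt; a minimal sketch consistent with how it is called below (Zip: string to byte[], Unzip: byte[] to string) could look like this, using GZipStream from System.IO.Compression:
public static class Zipper
{
    public static byte[] Zip(string text)
    {
        byte[] raw = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray();
        }
    }
    public static string Unzip(byte[] zipped)
    {
        using (var input = new MemoryStream(zipped))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
}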
public static class ApplicationCaching<K>
{
//====================================================================================================================
public static event EventHandler InitialAccess = (s, e) => { };
//=============================================================================================
static Dictionary<string, byte[]> _StoredValues;
static Dictionary<string, DateTime> _ExpirationTimes = new Dictionary<string, DateTime>();
//=============================================================================================
public static int MaxItemCount { get; set; } = 0;
private static void OnInitialAccess()
{
//-----------------------------------------------------------------------------------------
_StoredValues = new Dictionary<string, byte[]>();
//-----------------------------------------------------------------------------------------
InitialAccess?.Invoke(null, EventArgs.Empty);
//-----------------------------------------------------------------------------------------
}
public static void AddToCache<T>(string key, T value, DateTime expirationTime)
{
try
{
//-----------------------------------------------------------------------------------------
if (_StoredValues is null) OnInitialAccess();
//-----------------------------------------------------------------------------------------
string strValue = JsonConvert.SerializeObject(value);
byte[] zippedValue = Zipper.Zip(strValue);
//-----------------------------------------------------------------------------------------
_StoredValues.Remove(key);
_StoredValues.Add(key, zippedValue);
//-----------------------------------------------------------------------------------------
_ExpirationTimes.Remove(key);
_ExpirationTimes.Add(key, expirationTime);
//-----------------------------------------------------------------------------------------
}
catch (Exception)
{
throw; //rethrow without resetting the stack trace
}
}
//=============================================================================================
public static T GetFromCache<T>(string key, T defaultValue = default)
{
try
{
//-----------------------------------------------------------------------------------------
if (_StoredValues is null) OnInitialAccess();
//-----------------------------------------------------------------------------------------
if (_StoredValues.ContainsKey(key))
{
//------------------------------------------------------------------------------------------
if (_ExpirationTimes[key] <= DateTime.Now)
{
//------------------------------------------------------------------------------------------
_StoredValues.Remove(key);
_ExpirationTimes.Remove(key);
//------------------------------------------------------------------------------------------
return defaultValue;
//------------------------------------------------------------------------------------------
}
//------------------------------------------------------------------------------------------
byte[] zippedValue = _StoredValues[key];
//------------------------------------------------------------------------------------------
string strValue = Zipper.Unzip(zippedValue);
T value = JsonConvert.DeserializeObject<T>(strValue);
//------------------------------------------------------------------------------------------
return value;
//------------------------------------------------------------------------------------------
}
else
{
return defaultValue;
}
//---------------------------------------------------------------------------------------------
}
catch (Exception)
{
throw; // rethrow without resetting the stack trace
}
}
//=============================================================================================
public static string ConvertCacheToString()
{
//-----------------------------------------------------------------------------------------
if (_StoredValues is null || _ExpirationTimes is null) return "";
//-----------------------------------------------------------------------------------------
List<string> storage = new List<string>();
//-----------------------------------------------------------------------------------------
string strStoredObject = JsonConvert.SerializeObject(_StoredValues);
string strExpirationTimes = JsonConvert.SerializeObject(_ExpirationTimes);
//-----------------------------------------------------------------------------------------
storage.AddRange(new string[] { strStoredObject, strExpirationTimes});
//-----------------------------------------------------------------------------------------
string strStorage = JsonConvert.SerializeObject(storage);
//-----------------------------------------------------------------------------------------
return strStorage;
//-----------------------------------------------------------------------------------------
}
//=============================================================================================
public static void InitializeCacheFromString(string strCache)
{
try
{
//-----------------------------------------------------------------------------------------
List<string> storage = JsonConvert.DeserializeObject<List<string>>(strCache);
//-----------------------------------------------------------------------------------------
if (storage != null && storage.Count == 2)
{
//-----------------------------------------------------------------------------------------
_StoredValues = JsonConvert.DeserializeObject<Dictionary<string, byte[]>>(storage.First());
_ExpirationTimes = JsonConvert.DeserializeObject<Dictionary<string, DateTime>>(storage.Last());
//-----------------------------------------------------------------------------------------
if (_ExpirationTimes != null && _StoredValues != null)
{
//-----------------------------------------------------------------------------------------
foreach (string key in _ExpirationTimes.Keys.ToList()) // snapshot the keys: ClearItem mutates the dictionary
{
//-----------------------------------------------------------------------------------------
if (_ExpirationTimes[key] < DateTime.Now)
{
ClearItem(key);
}
//-----------------------------------------------------------------------------------------
}
//-----------------------------------------------------------------------------------------
if (MaxItemCount > 0 && _StoredValues.Count > MaxItemCount)
{
List<KeyValuePair<string, DateTime>> countedOutItems = _ExpirationTimes.OrderByDescending(o => o.Value).Skip(MaxItemCount).ToList(); // materialise first: ClearItem mutates _ExpirationTimes during enumeration
foreach (KeyValuePair<string, DateTime> item in countedOutItems)
{
ClearItem(item.Key);
}
}
//-----------------------------------------------------------------------------------------
return;
//-----------------------------------------------------------------------------------------
}
//-----------------------------------------------------------------------------------------
}
//-----------------------------------------------------------------------------------------
_StoredValues = new Dictionary<string, byte[]>();
_ExpirationTimes = new Dictionary<string, DateTime>();
//-----------------------------------------------------------------------------------------
}
catch (Exception)
{
throw;
}
}
//=============================================================================================
public static void ClearItem(string key)
{
//-----------------------------------------------------------------------------------------
if (_StoredValues.ContainsKey(key))
{
_StoredValues.Remove(key);
}
//-----------------------------------------------------------------------------------------
if (_ExpirationTimes.ContainsKey(key))
_ExpirationTimes.Remove(key);
//-----------------------------------------------------------------------------------------
}
//=============================================================================================
}
You can easily start using the cache on the fly with something like...
//------------------------------------------------------------------------------------------------------------------------------
string key = "MyUniqueKeyForThisItem";
//------------------------------------------------------------------------------------------------------------------------------
MyType obj = ApplicationCaching<MyCacheType>.GetFromCache<MyType>(key);
//------------------------------------------------------------------------------------------------------------------------------
if (obj == default)
{
obj = new MyType(...);
ApplicationCaching<MyCacheType>.AddToCache(key, obj, DateTime.Now.AddHours(1));
}
Note the actual types stored in the cache can be the same or different from the cache type. The cache type is ONLY used to differentiate different cache stores.
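For example, the same key can live independently in two stores; MyCacheTypeA and MyCacheTypeB here are just empty marker classes you define yourself:
public sealed class MyCacheTypeA { }
public sealed class MyCacheTypeB { }

// Each closed generic type gets its own static dictionaries, so these do not collide:
ApplicationCaching<MyCacheTypeA>.AddToCache("user:42", "Alice", DateTime.Now.AddHours(1));
ApplicationCaching<MyCacheTypeB>.AddToCache("user:42", 12345, DateTime.Now.AddHours(1));

string name = ApplicationCaching<MyCacheTypeA>.GetFromCache<string>("user:42"); // "Alice"
int score = ApplicationCaching<MyCacheTypeB>.GetFromCache<int>("user:42");      // 12345
This works because the CLR creates a separate set of static fields for every closed generic type.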
You can then decide to let the cache persist after execution terminates by saving it to the application's Default Settings:
string bulkCache = ApplicationCaching<MyType>.ConvertCacheToString();
//--------------------------------------------------------------------------------------------------------
if (bulkCache != "")
{
Properties.Settings.Default.*MyType*DataCachingStore = bulkCache;
}
//--------------------------------------------------------------------------------------------------------
try
{
Properties.Settings.Default.Save();
}
catch (IsolatedStorageException)
{
//handle Isolated Storage exceptions here
}
Handle the InitialAccess event to reinitialize the cache when you restart the app:
private static void ApplicationCaching_InitialAccess(object sender, EventArgs e)
{
//-----------------------------------------------------------------------------------------
string storedCache = Properties.Settings.Default.*MyType*DataCachingStore;
ApplicationCaching<MyCacheType>.InitializeCacheFromString(storedCache);
//-----------------------------------------------------------------------------------------
}
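For that handler to run, it has to be subscribed before the first access to the store, e.g. in your startup code:
// Subscribe once at startup; OnInitialAccess raises the event on first use of the store.
ApplicationCaching<MyCacheType>.InitialAccess += ApplicationCaching_InitialAccess;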
Finally here is the Zipper class...
public class Zipper
{
public static void CopyTo(Stream src, Stream dest)
{
byte[] bytes = new byte[4096];
int cnt;
while ((cnt = src.Read(bytes, 0, bytes.Length)) != 0)
{
dest.Write(bytes, 0, cnt);
}
}
public static byte[] Zip(string str)
{
var bytes = Encoding.UTF8.GetBytes(str);
using (var msi = new MemoryStream(bytes))
using (var mso = new MemoryStream())
{
using (var gs = new GZipStream(mso, CompressionMode.Compress))
{
CopyTo(msi, gs);
}
return mso.ToArray();
}
}
public static string Unzip(byte[] bytes)
{
using (var msi = new MemoryStream(bytes))
using (var mso = new MemoryStream())
{
using (var gs = new GZipStream(msi, CompressionMode.Decompress))
{
CopyTo(gs, mso);
}
return Encoding.UTF8.GetString(mso.ToArray());
}
}
}
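A quick round trip shows the intent (and on .NET 4 and later the manual CopyTo helper could be replaced by the built-in Stream.CopyTo):
byte[] packed = Zipper.Zip("some serialized json");
string unpacked = Zipper.Unzip(packed); // back to "some serialized json"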
If you are looking to cache something in ASP.NET, I would look at the Cache class. For example:
Hashtable menuTable = new Hashtable();
menuTable.Add("Home", "default.aspx");
Cache["menu"] = menuTable;
Then to retrieve it again
Hashtable menuTable = (Hashtable)Cache["menu"];
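The Cache class (System.Web.Caching) also supports expiration through Insert; a short sketch:
// Same store, but with a one-hour absolute expiration:
Cache.Insert("menu", menuTable, null,
    DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);

// On retrieval, always check for null in case the entry has expired:
Hashtable menuTable = (Hashtable)Cache["menu"];
if (menuTable == null)
{
    // rebuild the table and re-cache it here
}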
Memory cache implementation for .NET Core:
public class CachePocRepository : ICachedEmployeeRepository
{
private readonly IEmployeeRepository _employeeRepository;
private readonly IMemoryCache _memoryCache;
public CachePocRepository(
IEmployeeRepository employeeRepository,
IMemoryCache memoryCache)
{
_employeeRepository = employeeRepository;
_memoryCache = memoryCache;
}
public async Task<Employee> GetEmployeeDetailsId(string employeeId)
{
_memoryCache.TryGetValue(employeeId, out Employee employee);
if (employee != null)
{
return employee;
}
employee = await _employeeRepository.GetEmployeeDetailsId(employeeId);
_memoryCache.Set(employeeId,
employee,
new MemoryCacheEntryOptions()
{
AbsoluteExpiration = DateTimeOffset.UtcNow.AddDays(7),
});
return employee;
}
}
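For the IMemoryCache to be injectable it must be registered at startup. A minimal sketch for ASP.NET Core (assuming the IEmployeeRepository registration lives elsewhere):
// In Startup.ConfigureServices / Program.cs:
services.AddMemoryCache(); // registers IMemoryCache (Microsoft.Extensions.Caching.Memory)
services.AddScoped<ICachedEmployeeRepository, CachePocRepository>();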
You could use a Hashtable.
It has very fast lookups, keys are guaranteed unique, and its entries will not be garbage collected for as long as the table itself is referenced.
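If you go that way, here is a minimal read-through sketch (GetOrAdd and the load delegate are hypothetical names; note that Hashtable only supports multiple readers with at most one writer, so writes are locked):
using System;
using System.Collections;

public static class HashtableCache
{
    static readonly Hashtable _cache = new Hashtable();
    static readonly object _writeLock = new object();

    public static object GetOrAdd(string key, Func<object> load)
    {
        object value = _cache[key]; // reads are safe without a lock
        if (value == null)
        {
            lock (_writeLock) // Hashtable allows only one writer at a time
            {
                value = _cache[key]; // re-check under the lock
                if (value == null)
                {
                    value = load();
                    _cache[key] = value;
                }
            }
        }
        return value;
    }
}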