Threading and Incrementing - C#

So I think this might be a fundamental flaw in my approach to threading and incrementing a global counter, but here is my problem. I have a collection of file names from a DB that I iterate through, and for each file name I search for it within a top-level folder. Each iteration I thread the search and increment a counter when it completes so I can determine when it finishes. The problem is the counter never gets as high as the total file count; it comes very close sometimes, but never reaches what I would expect.
public class FindRogeRecords
{
    private delegate void FindFileCaller(string folder, int uploadedID, string filename);

    private Dictionary<int, string> _files;
    private List<int> _uploadedIDs;
    private int _filesProcessedCounter;
    private bool _completed;

    public void Run()
    {
        _files = GetFilesFromDB(); // returns a dictionary of id's and filenames
        FindRogueRecords();
    }

    private void FindRogueRecords()
    {
        _uploadedIDs = new List<int>();
        foreach (KeyValuePair<int, string> pair in _files)
        {
            var caller = new FindFileCaller(FindFile);
            var result = caller.BeginInvoke(txtSource.Text, pair.Key, pair.Value, new AsyncCallback(FindFile_Completed), null);
        }
    }

    private void FindFile(string documentsFolder, int uploadedID, string filename)
    {
        var docFolders = AppSettings.DocumentFolders;
        foreach (string folder in docFolders)
        {
            string path = Path.Combine(documentsFolder, folder);
            var directory = new DirectoryInfo(path);
            var files = directory.GetFiles(filename, SearchOption.AllDirectories);
            if (files != null && files.Length > 0) return;
        }
        lock (_uploadedIDs) _uploadedIDs.Add(uploadedID);
    }

    private void FindFile_Completed(System.IAsyncResult ar)
    {
        var result = (AsyncResult)ar;
        var caller = (FindFileCaller)result.AsyncDelegate;
        _filesProcessedCounter++;
        _completed = (_files.Count == _filesProcessedCounter); // this never evaluates to true
    }
}

You are accessing the _filesProcessedCounter variable from multiple threads without any synchronization (not even a simple lock()), which causes a race condition in your code.
To increment an integer variable you can use Interlocked.Increment(), which is thread-safe, but considering that the following line requires synchronization as well:
_completed = (_files.Count == _filesProcessedCounter);
I would suggest using a lock object to cover both lines and keep the code clearer:
// Add this field to the class fields list
private static readonly object counterLock = new object();

// wrap access to the shared variables in a lock as shown below
lock (counterLock)
{
    _filesProcessedCounter++;
    _completed = (_files.Count == _filesProcessedCounter);
}

This is because you have a race condition in your program. The ++ operator is equivalent to the following code:
c = c + 1; // c++;
As you can see, it is not atomic. A thread that increments the value reads c into a register, increments it by 1 and then writes it back. When the thread gets pushed aside because another thread gets the CPU, the execution of c = c + 1 may not be finished. The second thread performs the same action, reads the old c and increments its value by one. When the first thread gains the CPU again, it overwrites the data written by the second thread.
You can use locks to make sure that only one thread can access the variable at a time, or use atomic functions like
Interlocked.Increment(ref c);
to increase c in a thread-safe manner.
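A quick way to see the lost updates described above (a self-contained sketch of my own, not from the original post):
using System;
using System.Threading;
using System.Threading.Tasks;

class RaceDemo
{
    static void Main()
    {
        int unsafeCounter = 0, safeCounter = 0;
        Parallel.For(0, 1000000, i =>
        {
            unsafeCounter++;                        // racy read-modify-write
            Interlocked.Increment(ref safeCounter); // atomic increment
        });
        // unsafeCounter usually prints less than 1000000; safeCounter is always exact.
        Console.WriteLine("{0} vs {1}", unsafeCounter, safeCounter);
    }
}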

Use Interlocked.Increment(ref _filesProcessedCounter).

I think it should be:
if (files != null && files.Length > 0) continue;
and not return (otherwise you will not increment your counter...)

Convert
_filesProcessedCounter++;
_completed = (_files.Count == _filesProcessedCounter);
to
_completed = (_files.Count == Interlocked.Increment(ref _filesProcessedCounter));
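Putting that together, the completion callback could look like this (a sketch; note the added EndInvoke call, which BeginInvoke-based code should make to release the resources of the async call - the original code omits it):
private void FindFile_Completed(System.IAsyncResult ar)
{
    var result = (AsyncResult)ar;
    var caller = (FindFileCaller)result.AsyncDelegate;
    caller.EndInvoke(ar); // every BeginInvoke should be paired with an EndInvoke

    // Interlocked.Increment returns the incremented value atomically,
    // so no two callbacks can observe the same count.
    _completed = (_files.Count == Interlocked.Increment(ref _filesProcessedCounter));
}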

Related

Write records to a file when 500 Records reached or 2 mins of Time Passed from the previous file

I have a scenario where I have a worker method (DoWork()) which is called continuously by the web service.
This worker method writes data to a file using WriteToFile(), and I need to write the file under 2 conditions:
a) when the number of records reaches 500, OR
b) 2 minutes have passed since the previous file was written.
My sample code is as follows:
public override void DoWork()
{
    //-- This worker method is called continuously by the webservice
    List<string> list1 = new List<string>();
    List<string> list2 = new List<string>();
    int count = 0;
    //--- Some code that writes the RawData
    list1.Add(rawData);
    if (list1.Count <= 500)
    {
        list2 = list1;
        count = WriteToFile(list2);
    }
}

public static int WriteToFile(List<string> list)
{
    //-- The writing of the file should happen when 500 records are reached OR 2 mins have passed.
    // ---- Logic for writing the list file using the StreamWriter is working fine
}
I need a logic check: if 500 records have been reached in the list OR 2 minutes have passed since the previous file was generated, only then should the file writing happen.
Thanks
:)
To make it a little more readable, I'd use a little helper, here...
Time check:
class TimeCheck
{
    private TimeSpan timeoutDuration;
    private DateTime nextTimeout;

    // Default = 2 minutes.
    public TimeCheck() : this(TimeSpan.FromMinutes(2))
    {}

    public TimeCheck(TimeSpan timeout)
    {
        this.timeoutDuration = timeout;
        this.nextTimeout = DateTime.Now + timeoutDuration;
    }

    public bool IsTimeoutReached => DateTime.Now >= nextTimeout;

    public void Reset()
    {
        nextTimeout = DateTime.Now + timeoutDuration;
    }
}
Usage:
// on class level
const int MAXITEMS = 500;
private Lazy<TimeCheck> timecheck = new Lazy<TimeCheck>(() => new TimeCheck());
private List<string> list1 = new List<string>();
private readonly object Locker = new object();

public override void DoWork()
{
    lock (Locker)
    {
        // ... add items to list1

        // Write if a) list1 has 500+ items or b) 2 minutes since last write have passed
        if (list1.Count >= MAXITEMS || timecheck.Value.IsTimeoutReached)
        {
            WriteToFile(list1);
            list1.Clear(); // assuming _all_ items are written.
            timecheck.Value.Reset();
        }
    }
}
Attention:
If this code is called by multiple threads, you need to make sure it's thread-safe. I used lock, which will create a bottleneck. You may want to figure out a more sophisticated approach, but this answer concentrates on the condition requirements.
The above snippet assumes list1 is not accessed elsewhere.
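One more caveat, not covered above: the two-minute condition is only evaluated when DoWork runs, so if the service stops calling it, a partially filled list1 never gets flushed. Here is a sketch of a background flush using System.Threading.Timer, assuming WriteToFile is safe to call from the timer thread and sharing the same Locker, list1 and timecheck as above (StartFlushTimer is a hypothetical name):
// Hypothetical periodic flusher; shares Locker, list1 and timecheck with DoWork.
private Timer flushTimer;

public void StartFlushTimer()
{
    // Check every 30 seconds whether the 2-minute timeout has expired.
    flushTimer = new Timer(_ =>
    {
        lock (Locker)
        {
            if (list1.Count > 0 && timecheck.Value.IsTimeoutReached)
            {
                WriteToFile(list1);
                list1.Clear();
                timecheck.Value.Reset();
            }
        }
    }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
}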

Should I use a ConcurrentQueue this way or individual threads

I'm doing what amounts to a glorified mail merge and then file conversion to PDF... Based on .NET 4.5, I see a couple of ways I can do the threading. The one using a thread-safe queue seems interesting (Plan A), but I can see a potential problem. What do you think? I'll try to keep it short, but include what is needed.
This works on the assumption that it will take far more time to do the database processing than the PDF conversion.
In both cases, the database processing for each file is done in its own thread/task, but the PDF conversion could be done in many single threads/tasks (Plan B) or in a single long-running thread (Plan A). It is that PDF conversion I am wondering about. It is all in a try/catch statement, but that thread must not fail or everything fails (Plan A). Do you think that is a good idea? Any suggestions would be appreciated.
/* A class to process a file: */
public class c_FileToConvert
{
    public string InFileName { get; set; }
    public int FileProcessingState { get; set; }
    public string ErrorMessage { get; set; }
    public List<string> listData = null;

    c_FileToConvert(string inFileName)
    {
        InFileName = inFileName;
        FileProcessingState = 0;
        ErrorMessage = ""; // yah, yah, yah - String.Empty
        listData = new List<string>();
    }

    public void doDbProcessing()
    {
        // get the data from database and put strings in this.listData
        DAL.getDataForFile(this.InFileName, this.ErrorMessage); // static function
        if (this.ErrorMessage != "")
            this.FileProcessingState = -1; // fatal error
        else // Open file and append strings to it
        {
            foreach (string s in this.listData)
                ...
            FileProcessingState = 1; // enum DB_WORK_COMPLETE ...
        }
    }

    public void doPDFProcessing()
    {
        PDFConverter cPDFConverter = new PDFConverter();
        cPDFConverter.convertToPDF(InFileName, InFileName + ".PDF");
        FileProcessingState = 2; // enum PDF_WORK_COMPLETE ...
    }
}
/*** These only for Plan A ***/
public ConcurrentQueue<c_FileToConvert> ConncurrentQueueFiles = new ConcurrentQueue<c_FileToConvert>();
public bool bProcessPDFs;

public void doProcessing() // This is the main thread of the Windows Service
{
    List<c_FileToConvert> listcFileToConvert = new List<c_FileToConvert>();

    /*** Only for Plan A ***/
    bProcessPDFs = true;
    Task task1 = new Task(new Action(startProcessingPDFs)); // Start it and forget it
    task1.Start();

    while (1 == 1)
    {
        List<string> listFileNamesToProcess = new List<string>();
        DAL.getFileNamesToProcessFromDb(listFileNamesToProcess);
        foreach (string s in listFileNamesToProcess)
        {
            c_FileToConvert cFileToConvert = new c_FileToConvert(s);
            listcFileToConvert.Add(cFileToConvert);
        }

        foreach (c_FileToConvert c in listcFileToConvert)
            if (c.FileProcessingState == 0)
                new Thread(c.doDbProcessing).Start();

        /** This is Plan A - throw it on single long running PDF processing thread **/
        foreach (c_FileToConvert c in listcFileToConvert)
            if (c.FileProcessingState == 1)
                ConncurrentQueueFiles.Enqueue(c);

        /*** This is Plan B - traditional thread for each file conversion ***/
        foreach (c_FileToConvert c in listcFileToConvert)
            if (c.FileProcessingState == 1)
                new Thread(c.doPDFProcessing).Start();

        // Iterate backwards so RemoveAt does not skip the next element
        for (int iCount = listcFileToConvert.Count - 1; iCount >= 0; iCount--)
        {
            c_FileToConvert c = listcFileToConvert[iCount];
            if ((c.FileProcessingState == -1) || (c.FileProcessingState == 2))
            {
                DAL.updateProcessingState(c.FileProcessingState);
                listcFileToConvert.RemoveAt(iCount);
            }
        }
        Thread.Sleep(1000);
    }
}
public void startProcessingPDFs() /*** Only for Plan A ***/
{
    while (bProcessPDFs == true)
    {
        if (ConncurrentQueueFiles.IsEmpty == false)
        {
            c_FileToConvert cFileToConvert = null; // declared outside try so the catch can see it
            try
            {
                if (ConncurrentQueueFiles.TryDequeue(out cFileToConvert) == true)
                    cFileToConvert.doPDFProcessing();
            }
            catch (Exception e)
            {
                cFileToConvert.FileProcessingState = -1;
                cFileToConvert.ErrorMessage = e.Message;
            }
        }
    }
}
Plan A seems like a nice solution, but what if the Task fails somehow? Yes, the PDF conversion can be done with individual threads, but I want to reserve them for the database processing.
This was written in a text editor as the simplest code I could, so there may be something, but I think I got the idea across.
How many files are you working with? 10? 100,000? If the number is very large, using 1 thread to run the DB queries for each file is not a good idea.
Threads are a very low-level control flow construct, and I advise you try to avoid a lot of messy and detailed thread spawning, joining, synchronizing, etc. etc. in your application code. Keep it stupidly simple if you can.
How about this: put the data you need for each file in a thread-safe queue. Create another thread-safe queue for results. Spawn some number of threads which repeatedly pull items from the input queue, run the queries, convert to PDF, then push the output into the output queue. The threads should share absolutely nothing but the input and output queues.
You can pick any number of worker threads which you like, or experiment to see what a good number is. Don't create 1 thread for each file -- just pick a number which allows for good CPU and disk utilization.
OR, if your language/libraries have a parallel map operator, use that. It will save you a lot of messing around.
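To make the suggested worker pattern concrete, here is a minimal sketch of my own using BlockingCollection<T> from System.Collections.Concurrent (available in the .NET 4.5 the question targets); it reuses the c_FileToConvert type from the question, and the worker count is an assumption to tune:
// Shared input/output queues; the workers share nothing else.
var input = new BlockingCollection<c_FileToConvert>();
var output = new BlockingCollection<c_FileToConvert>();
int workerCount = 4; // experiment for good CPU and disk utilization

var workers = new List<Task>();
for (int i = 0; i < workerCount; i++)
{
    workers.Add(Task.Run(() =>
    {
        // GetConsumingEnumerable blocks until an item is available and
        // finishes when CompleteAdding() is called on the input queue.
        foreach (var file in input.GetConsumingEnumerable())
        {
            try
            {
                file.doDbProcessing();  // run the queries
                file.doPDFProcessing(); // convert to PDF
            }
            catch (Exception e)
            {
                file.FileProcessingState = -1;
                file.ErrorMessage = e.Message;
            }
            output.Add(file); // push the result for the main loop to record
        }
    }));
}
A failing conversion then only marks that one item as errored; no single long-running thread can take the whole pipeline down with it.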

Locking pattern for proper use of .NET MemoryCache

I assume this code has concurrency issues:
const string CacheKey = "CacheKey";
static string GetCachedData()
{
    string expensiveString = null;
    if (MemoryCache.Default.Contains(CacheKey))
    {
        expensiveString = MemoryCache.Default[CacheKey] as string;
    }
    else
    {
        CacheItemPolicy cip = new CacheItemPolicy()
        {
            AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
        };
        expensiveString = SomeHeavyAndExpensiveCalculation();
        MemoryCache.Default.Set(CacheKey, expensiveString, cip);
    }
    return expensiveString;
}
The reason for the concurrency issue is that multiple threads can find the key missing and then all attempt to insert data into the cache.
What would be the shortest and cleanest way to make this code concurrency-proof? I like to follow a good pattern across my cache-related code. A link to an online article would be a great help.
UPDATE:
I came up with this code based on @Scott Chamberlain's answer. Can anyone find any performance or concurrency issue with this?
If this works, it would save many lines of code and errors.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.Caching;

namespace CachePoc
{
    class Program
    {
        static object everoneUseThisLockObject4CacheXYZ = new object();
        const string CacheXYZ = "CacheXYZ";
        static object everoneUseThisLockObject4CacheABC = new object();
        const string CacheABC = "CacheABC";

        static void Main(string[] args)
        {
            string xyzData = MemoryCacheHelper.GetCachedData<string>(CacheXYZ, everoneUseThisLockObject4CacheXYZ, 20, SomeHeavyAndExpensiveXYZCalculation);
            string abcData = MemoryCacheHelper.GetCachedData<string>(CacheABC, everoneUseThisLockObject4CacheABC, 20, SomeHeavyAndExpensiveABCCalculation);
        }

        private static string SomeHeavyAndExpensiveXYZCalculation() { return "Expensive"; }
        private static string SomeHeavyAndExpensiveABCCalculation() { return "Expensive"; }

        public static class MemoryCacheHelper
        {
            public static T GetCachedData<T>(string cacheKey, object cacheLock, int cacheTimePolicyMinutes, Func<T> GetData)
                where T : class
            {
                //Returns null if the string does not exist, prevents a race condition where the cache invalidates between the contains check and the retrieval.
                T cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
                if (cachedData != null)
                {
                    return cachedData;
                }

                lock (cacheLock)
                {
                    //Check to see if anyone wrote to the cache while we were waiting our turn to write the new value.
                    cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
                    if (cachedData != null)
                    {
                        return cachedData;
                    }

                    //The value still did not exist so we now write it in to the cache.
                    CacheItemPolicy cip = new CacheItemPolicy()
                    {
                        AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(cacheTimePolicyMinutes))
                    };
                    cachedData = GetData();
                    MemoryCache.Default.Set(cacheKey, cachedData, cip);
                    return cachedData;
                }
            }
        }
    }
}
This is my 2nd iteration of the code. Because MemoryCache is thread-safe, you don't need to lock on the initial read; you can just read, and if the cache returns null, do the lock check to see if you need to create the string. It greatly simplifies the code.
const string CacheKey = "CacheKey";
static readonly object cacheLock = new object();

private static string GetCachedData()
{
    //Returns null if the string does not exist, prevents a race condition where the cache invalidates between the contains check and the retrieval.
    var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
    if (cachedString != null)
    {
        return cachedString;
    }

    lock (cacheLock)
    {
        //Check to see if anyone wrote to the cache while we were waiting our turn to write the new value.
        cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
        if (cachedString != null)
        {
            return cachedString;
        }

        //The value still did not exist so we now write it in to the cache.
        var expensiveString = SomeHeavyAndExpensiveCalculation();
        CacheItemPolicy cip = new CacheItemPolicy()
        {
            AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
        };
        MemoryCache.Default.Set(CacheKey, expensiveString, cip);
        return expensiveString;
    }
}
EDIT: The below code is unnecessary, but I wanted to leave it to show the original method. It may be useful to future visitors who are using a different collection that has thread-safe reads but non-thread-safe writes (almost all of the classes under the System.Collections namespace are like that).
Here is how I would do it using ReaderWriterLockSlim to protect access. You need to do a kind of "Double Checked Locking" to see if anyone else created the cached item while we were waiting to take the lock.
const string CacheKey = "CacheKey";
static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();

static string GetCachedData()
{
    //First we do a read lock to see if it already exists, this allows multiple readers at the same time.
    cacheLock.EnterReadLock();
    try
    {
        //Returns null if the string does not exist, prevents a race condition where the cache invalidates between the contains check and the retrieval.
        var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
        if (cachedString != null)
        {
            return cachedString;
        }
    }
    finally
    {
        cacheLock.ExitReadLock();
    }

    //Only one UpgradeableReadLock can exist at one time, but it can co-exist with many ReadLocks
    cacheLock.EnterUpgradeableReadLock();
    try
    {
        //We need to check again to see if the string was created while we were waiting to enter the EnterUpgradeableReadLock
        var cachedString = MemoryCache.Default.Get(CacheKey, null) as string;
        if (cachedString != null)
        {
            return cachedString;
        }

        //The entry still does not exist so we need to create it and enter the write lock
        var expensiveString = SomeHeavyAndExpensiveCalculation();
        cacheLock.EnterWriteLock(); //This will block till all the Readers flush.
        try
        {
            CacheItemPolicy cip = new CacheItemPolicy()
            {
                AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
            };
            MemoryCache.Default.Set(CacheKey, expensiveString, cip);
            return expensiveString;
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
    finally
    {
        cacheLock.ExitUpgradeableReadLock();
    }
}
There is an open source library [disclaimer: that I wrote]: LazyCache that IMO covers your requirement with two lines of code:
IAppCache cache = new CachingService();
var cachedResults = cache.GetOrAdd("CacheKey",
    () => SomeHeavyAndExpensiveCalculation());
It has built in locking by default so the cacheable method will only execute once per cache miss, and it uses a lambda so you can do "get or add" in one go. It defaults to 20 minutes sliding expiration.
There's even a NuGet package ;)
I've solved this issue by making use of the AddOrGetExisting method on the MemoryCache and the use of Lazy initialization.
Essentially, my code looks something like this:
static string GetCachedData(string key, DateTimeOffset offset)
{
    Lazy<String> lazyObject = new Lazy<String>(() => SomeHeavyAndExpensiveCalculationThatReturnsAString());
    var returnedLazyObject = MemoryCache.Default.AddOrGetExisting(key, lazyObject, offset);
    if (returnedLazyObject == null)
        return lazyObject.Value;
    return ((Lazy<String>)returnedLazyObject).Value;
}
Worst case scenario here is that you create the same Lazy object twice. But that is pretty trivial. The use of AddOrGetExisting guarantees that you'll only ever get one instance of the Lazy object, and so you're also guaranteed to only call the expensive initialization method once.
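One caveat worth adding (my note, not the original answer's): with the default LazyThreadSafetyMode.ExecutionAndPublication, an exception thrown by the factory is cached by the Lazy instance and rethrown on every later access, so a transient failure sticks around for the lifetime of the cache entry. LazyThreadSafetyMode.PublicationOnly avoids exception caching, at the cost of possibly running the factory more than once:
// Assumption: trading "factory runs at most once" for "exceptions are not cached".
Lazy<String> lazyObject = new Lazy<String>(
    () => SomeHeavyAndExpensiveCalculationThatReturnsAString(),
    LazyThreadSafetyMode.PublicationOnly);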
I assume this code has concurrency issues:
Actually, it's quite possibly fine, though with a possible improvement.
Now, in general, the pattern where multiple threads set a shared value on first use, without locking on the value being obtained and set, can be:
Disastrous - other code will assume only one instance exists.
Disastrous - the code that obtains the instance can only tolerate one (or perhaps a certain small number of) concurrent operations.
Disastrous - the means of storage is not thread-safe (e.g. have two threads adding to a dictionary and you can get all sorts of nasty errors).
Sub-optimal - the overall performance is worse than if locking had ensured only one thread did the work of obtaining the value.
Optimal - the cost of having multiple threads do redundant work is less than the cost of preventing it, especially since that can only happen during a relatively brief period.
However, considering here that MemoryCache may evict entries then:
If it's disastrous to have more than one instance then MemoryCache is the wrong approach.
If you must prevent simultaneous creation, you should do so at the point of creation.
MemoryCache is thread-safe in terms of access to that object, so that is not a concern here.
Both of these possibilities have to be thought about of course, though the only time having two instances of the same string existing can be a problem is if you're doing very particular optimisations that don't apply here*.
So, we're left with the possibilities:
It is cheaper to avoid the cost of duplicate calls to SomeHeavyAndExpensiveCalculation().
It is cheaper not to avoid the cost of duplicate calls to SomeHeavyAndExpensiveCalculation().
And working that out can be difficult (indeed, the sort of thing where it's worth profiling rather than assuming you can work it out). It's worth considering here though that most obvious ways of locking on insert will prevent all additions to the cache, including those that are unrelated.
This means that if we had 50 threads trying to set 50 different values, then we'll have to make all 50 threads wait on each other, even though they weren't even going to do the same calculation.
As such, you're probably better off with the code you have, than with code that avoids the race-condition, and if the race-condition is a problem, you quite likely either need to handle that somewhere else, or need a different caching strategy than one that expels old entries†.
The one thing I would change is I'd replace the call to Set() with one to AddOrGetExisting(). From the above it should be clear that it probably isn't necessary, but it would allow the newly obtained item to be collected, reducing overall memory use and allowing a higher ratio of low generation to high generation collections.
So yeah, you could use double-locking to prevent concurrency, but either the concurrency isn't actually a problem, or you're storing the values in the wrong way, or double-locking on the store would not be the best way to solve it.
*If you know only one each of a set of strings exists, you can optimise equality comparisons, which is about the only time having two copies of a string can be incorrect rather than just sub-optimal, but you'd want to be doing very different types of caching for that to make sense. E.g. the sort XmlReader does internally.
†Quite likely either one that stores indefinitely, or one that makes use of weak references so it will only expel entries if there are no existing uses.
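For reference, the Set()-to-AddOrGetExisting() swap suggested above could look like this sketch, reusing the names from the question's code (AddOrGetExisting returns the already-cached entry, or null when nothing was there and the new value was added):
static string GetCachedData()
{
    var cached = MemoryCache.Default.Get(CacheKey) as string;
    if (cached != null)
        return cached;

    var cip = new CacheItemPolicy()
    {
        AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
    };
    var expensiveString = SomeHeavyAndExpensiveCalculation();
    // If another thread won the race, prefer its entry and let ours be collected.
    var existing = MemoryCache.Default.AddOrGetExisting(CacheKey, expensiveString, cip) as string;
    return existing ?? expensiveString;
}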
Somewhat dated question, but maybe still useful: you may take a look at FusionCache ⚡🦥, which I recently released.
The feature you are looking for is described here, and you can use it like this:
const string CacheKey = "CacheKey";
static string GetCachedData()
{
    return fusionCache.GetOrSet(
        CacheKey,
        _ => SomeHeavyAndExpensiveCalculation(),
        TimeSpan.FromMinutes(20)
    );
}
You may also find some of the other features interesting like fail-safe, advanced timeouts with background factory completion and support for an optional, distributed 2nd level cache.
If you will give it a chance please let me know what you think.
/shameless-plug
It is difficult to choose which one is better: lock or ReaderWriterLockSlim. You need real-world statistics of read and write numbers, ratios, etc.
But if you believe using "lock" is the correct way, then here is a different solution for different needs. I also include Allan Xu's solution in the code, because both can be needed for different needs.
Here are the requirements driving me to this solution:
You don't want to or cannot supply the 'GetData' function for some reason. Perhaps the 'GetData' function is located in some other class with a heavy constructor and you do not want to even create an instance until you are sure it is unavoidable.
You need to access the same cached data from different locations/tiers of the application, and those different locations don't have access to the same locker object.
You don't have a constant cache key. For example: the need to cache some data keyed by the sessionId.
Code:
using System;
using System.Runtime.Caching;
using System.Collections.Concurrent;
using System.Collections.Generic;

namespace CachePoc
{
    class Program
    {
        static object everoneUseThisLockObject4CacheXYZ = new object();
        const string CacheXYZ = "CacheXYZ";
        static object everoneUseThisLockObject4CacheABC = new object();
        const string CacheABC = "CacheABC";

        static void Main(string[] args)
        {
            //Allan Xu's usage
            string xyzData = MemoryCacheHelper.GetCachedDataOrAdd<string>(CacheXYZ, everoneUseThisLockObject4CacheXYZ, 20, SomeHeavyAndExpensiveXYZCalculation);
            string abcData = MemoryCacheHelper.GetCachedDataOrAdd<string>(CacheABC, everoneUseThisLockObject4CacheABC, 20, SomeHeavyAndExpensiveABCCalculation);

            //My usage
            string sessionId = System.Web.HttpContext.Current.Session["CurrentUser.SessionId"].ToString();
            string yvz = MemoryCacheHelper.GetCachedData<string>(sessionId);
            if (string.IsNullOrWhiteSpace(yvz))
            {
                object locker = MemoryCacheHelper.GetLocker(sessionId);
                lock (locker)
                {
                    yvz = MemoryCacheHelper.GetCachedData<string>(sessionId);
                    if (string.IsNullOrWhiteSpace(yvz))
                    {
                        DatabaseRepositoryWithHeavyConstructorOverHead dbRepo = new DatabaseRepositoryWithHeavyConstructorOverHead();
                        yvz = dbRepo.GetDataExpensiveDataForSession(sessionId);
                        MemoryCacheHelper.AddDataToCache(sessionId, yvz, 5);
                    }
                }
            }
        }

        private static string SomeHeavyAndExpensiveXYZCalculation() { return "Expensive"; }
        private static string SomeHeavyAndExpensiveABCCalculation() { return "Expensive"; }

        public static class MemoryCacheHelper
        {
            //Allan Xu's solution
            public static T GetCachedDataOrAdd<T>(string cacheKey, object cacheLock, int minutesToExpire, Func<T> GetData) where T : class
            {
                //Returns null if the string does not exist, prevents a race condition where the cache invalidates between the contains check and the retrieval.
                T cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
                if (cachedData != null)
                    return cachedData;

                lock (cacheLock)
                {
                    //Check to see if anyone wrote to the cache while we were waiting our turn to write the new value.
                    cachedData = MemoryCache.Default.Get(cacheKey, null) as T;
                    if (cachedData != null)
                        return cachedData;

                    cachedData = GetData();
                    MemoryCache.Default.Set(cacheKey, cachedData, DateTime.Now.AddMinutes(minutesToExpire));
                    return cachedData;
                }
            }

            #region "My Solution"
            readonly static ConcurrentDictionary<string, object> Lockers = new ConcurrentDictionary<string, object>();

            public static object GetLocker(string cacheKey)
            {
                CleanupLockers();
                return Lockers.GetOrAdd(cacheKey, key => new object());
            }

            public static T GetCachedData<T>(string cacheKey) where T : class
            {
                CleanupLockers();
                T cachedData = MemoryCache.Default.Get(cacheKey) as T;
                return cachedData;
            }

            public static void AddDataToCache(string cacheKey, object value, int cacheTimePolicyMinutes)
            {
                CleanupLockers();
                MemoryCache.Default.Add(cacheKey, value, DateTimeOffset.Now.AddMinutes(cacheTimePolicyMinutes));
            }

            static DateTimeOffset lastCleanUpTime = DateTimeOffset.MinValue;

            static void CleanupLockers()
            {
                if (DateTimeOffset.Now.Subtract(lastCleanUpTime).TotalMinutes > 1)
                {
                    lock (Lockers) //maybe a better locker is needed?
                    {
                        try //bypass exceptions
                        {
                            List<string> lockersToRemove = new List<string>();
                            foreach (var locker in Lockers)
                            {
                                if (!MemoryCache.Default.Contains(locker.Key))
                                    lockersToRemove.Add(locker.Key);
                            }
                            object dummy;
                            foreach (string lockerKey in lockersToRemove)
                                Lockers.TryRemove(lockerKey, out dummy);
                            lastCleanUpTime = DateTimeOffset.Now;
                        }
                        catch (Exception)
                        { }
                    }
                }
            }
            #endregion
        }
    }

    class DatabaseRepositoryWithHeavyConstructorOverHead
    {
        internal string GetDataExpensiveDataForSession(string sessionId)
        {
            return "Expensive data from database";
        }
    }
}
To avoid the global lock, you can use SingletonCache to implement one lock per key, without exploding memory usage (the lock objects are removed when no longer referenced, and acquire/release is thread-safe, guaranteeing that only one instance is ever in use via compare-and-swap).
Using it looks like this:
private static readonly SingletonCache<string, object> keyLocks = new SingletonCache<string, object>();
const string CacheKey = "CacheKey";

static string GetCachedData()
{
    string expensiveString = null;
    if (MemoryCache.Default.Contains(CacheKey))
    {
        return MemoryCache.Default[CacheKey] as string;
    }

    // double checked lock
    using (var lifetime = keyLocks.Acquire(CacheKey))
    {
        lock (lifetime.Value)
        {
            if (MemoryCache.Default.Contains(CacheKey))
            {
                return MemoryCache.Default[CacheKey] as string;
            }

            CacheItemPolicy cip = new CacheItemPolicy()
            {
                AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(20))
            };
            expensiveString = SomeHeavyAndExpensiveCalculation();
            MemoryCache.Default.Set(CacheKey, expensiveString, cip);
            return expensiveString;
        }
    }
}
Code is here on GitHub: https://github.com/bitfaster/BitFaster.Caching
Install-Package BitFaster.Caching
There is also an LRU implementation that is lighter weight than MemoryCache, and has several advantages - faster concurrent reads and writes, bounded size, no background thread, internal perf counters etc. (disclaimer, I wrote it).
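For completeness, a small sketch of what using that LRU looks like (based on the BitFaster.Caching README; treat the exact API shape as an assumption to verify against the current package):
// ConcurrentLru evicts least-recently-used entries once capacity is reached.
var lru = new ConcurrentLru<string, string>(capacity: 128);

// GetOrAdd runs the value factory only on a miss; hits are fast concurrent reads.
string value = lru.GetOrAdd("CacheKey", key => SomeHeavyAndExpensiveCalculation());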
Console example of MemoryCache, "how to save/get simple class objects".
Output after launching and pressing any key except Esc:
Saving to cache!
Getting from cache!
some1
some2
class Some
{
    public String text { get; set; }

    public Some(String text)
    {
        this.text = text;
    }

    public override string ToString()
    {
        return text;
    }
}

public static MemoryCache cache = new MemoryCache("cache");
public static string cache_name = "mycache";

static void Main(string[] args)
{
    Some some1 = new Some("some1");
    Some some2 = new Some("some2");
    List<Some> list = new List<Some>();
    list.Add(some1);
    list.Add(some2);

    do
    {
        if (cache.Contains(cache_name))
        {
            Console.WriteLine("Getting from cache!");
            List<Some> list_c = cache.Get(cache_name) as List<Some>;
            foreach (Some s in list_c) Console.WriteLine(s);
        }
        else
        {
            Console.WriteLine("Saving to cache!");
            cache.Set(cache_name, list, DateTime.Now.AddMinutes(10));
        }
    } while (Console.ReadKey(true).Key != ConsoleKey.Escape);
}
public interface ILazyCacheProvider : IAppCache
{
    /// <summary>
    /// Get data loaded - after the first load this always returns the cached result
    /// (even when the data is older than needed), but very fast!
    /// </summary>
    /// <param name="key"></param>
    /// <param name="getData"></param>
    /// <param name="slidingExpiration"></param>
    /// <typeparam name="T"></typeparam>
    /// <returns></returns>
    T GetOrAddPermanent<T>(string key, Func<T> getData, TimeSpan slidingExpiration);
}

/// <summary>
/// Initialize LazyCache in runtime
/// </summary>
public class LazzyCacheProvider : CachingService, ILazyCacheProvider
{
    private readonly Logger _logger = LogManager.GetLogger("MemCashe");
    private readonly Hashtable _hash = new Hashtable();
    private readonly List<string> _reloader = new List<string>();
    private readonly ConcurrentDictionary<string, DateTime> _lastLoad = new ConcurrentDictionary<string, DateTime>();

    T ILazyCacheProvider.GetOrAddPermanent<T>(string dataKey, Func<T> getData, TimeSpan slidingExpiration)
    {
        var currentPrincipal = Thread.CurrentPrincipal;
        if (!ObjectCache.Contains(dataKey) && !_hash.Contains(dataKey))
        {
            _hash[dataKey] = null;
            _logger.Debug($"{dataKey} - first start");
            _lastLoad[dataKey] = DateTime.Now;
            _hash[dataKey] = ((object)GetOrAdd(dataKey, getData, slidingExpiration)).CloneObject();
            _lastLoad[dataKey] = DateTime.Now;
            _logger.Debug($"{dataKey} - first");
        }
        else
        {
            if ((!ObjectCache.Contains(dataKey) || _lastLoad[dataKey].AddMinutes(slidingExpiration.Minutes) < DateTime.Now) && _hash[dataKey] != null)
                Task.Run(() =>
                {
                    if (_reloader.Contains(dataKey)) return;
                    lock (_reloader)
                    {
                        if (ObjectCache.Contains(dataKey))
                        {
                            if (_lastLoad[dataKey].AddMinutes(slidingExpiration.Minutes) > DateTime.Now)
                                return;
                            _lastLoad[dataKey] = DateTime.Now;
                            Remove(dataKey);
                        }
                        _reloader.Add(dataKey);
                        Thread.CurrentPrincipal = currentPrincipal;
                        _logger.Debug($"{dataKey} - reload start");
                        _hash[dataKey] = ((object)GetOrAdd(dataKey, getData, slidingExpiration)).CloneObject();
                        _logger.Debug($"{dataKey} - reload");
                        _reloader.Remove(dataKey);
                    }
                });
        }

        if (_hash[dataKey] != null) return (T)(_hash[dataKey]);

        _logger.Debug($"{dataKey} - dummy start");
        var data = GetOrAdd(dataKey, getData, slidingExpiration);
        _logger.Debug($"{dataKey} - dummy");
        return (T)((object)data).CloneObject();
    }
}
It's a bit late, however...
Full implementation:
[HttpGet]
public async Task<HttpResponseMessage> GetPageFromUriOrBody(RequestQuery requestQuery)
{
    log(nameof(GetPageFromUriOrBody), nameof(requestQuery));
    var responseResult = await _requestQueryCache.GetOrCreate(
        nameof(GetPageFromUriOrBody)
        , requestQuery
        , (x) => getPageContent(x).Result);
    return Request.CreateResponse(System.Net.HttpStatusCode.Accepted, responseResult);
}
static MemoryCacheWithPolicy<RequestQuery, string> _requestQueryCache = new MemoryCacheWithPolicy<RequestQuery, string>();
Here is getPageContent signature:
async Task<string> getPageContent(RequestQuery requestQuery);
And here is the MemoryCacheWithPolicy implementation:
public class MemoryCacheWithPolicy<TParameter, TResult>
{
    static ILogger _nlogger = new AppLogger().Logger;

    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions()
    {
        //Size limit amount: this is actually a memory size limit value!
        SizeLimit = 1024
    });

    /// <summary>
    /// Gets or creates a new memory cache record for a main data
    /// along with parameter data that is associated with the main data.
    /// </summary>
    /// <param name="key">Main data cache memory key.</param>
    /// <param name="param">Parameter model that is associated with the main model (request result).</param>
    /// <param name="createCacheData">A delegate to create a new main data to cache.</param>
    /// <returns></returns>
    public async Task<TResult> GetOrCreate(object key, TParameter param, Func<TParameter, TResult> createCacheData)
    {
        // this key is used for the param cache memory.
        var paramKey = key + nameof(param);

        if (!_cache.TryGetValue(key, out TResult cacheEntry))
        {
            // key is not in the cache, create data through the delegate.
            cacheEntry = createCacheData(param);
            createMemoryCache(key, cacheEntry, paramKey, param);
            _nlogger.Warn(" cache is created.");
        }
        else
        {
            // data is cached so far..., check if the param model is the same (or changed)?
            if (!_cache.TryGetValue(paramKey, out TParameter cacheParam))
            {
                //exception: this case should not happen!
            }

            if (!cacheParam.Equals(param))
            {
                // request param has changed, create data through the delegate.
                cacheEntry = createCacheData(param);
                createMemoryCache(key, cacheEntry, paramKey, param);
                _nlogger.Warn(" cache is re-created (param model has been changed).");
            }
            else
            {
                _nlogger.Trace(" cache is used.");
            }
        }
        return await Task.FromResult<TResult>(cacheEntry);
    }

    MemoryCacheEntryOptions createMemoryCacheEntryOptions(TimeSpan slidingOffset, TimeSpan relativeOffset)
    {
        // Cache data within [slidingOffset] seconds,
        // request new result after [relativeOffset] seconds.
        return new MemoryCacheEntryOptions()
            // Size amount: this is actually an entry count per
            // key limit value! not an actual memory size value!
            .SetSize(1)
            // Priority on removing when reaching size limit (memory pressure)
            .SetPriority(CacheItemPriority.High)
            // Keep in cache for this amount of time, reset it if accessed.
            .SetSlidingExpiration(slidingOffset)
            // Remove from cache after this time, regardless of sliding expiration
            .SetAbsoluteExpiration(relativeOffset);
    }

    void createMemoryCache(object key, TResult cacheEntry, object paramKey, TParameter param)
    {
        // Cache data within 2 seconds,
        // request new result after 5 seconds.
        var cacheEntryOptions = createMemoryCacheEntryOptions(
            TimeSpan.FromSeconds(2)
            , TimeSpan.FromSeconds(5));

        // Save data in cache.
        _cache.Set(key, cacheEntry, cacheEntryOptions);
        // Save param in cache.
        _cache.Set(paramKey, param, cacheEntryOptions);
    }

    void checkCacheEntry<T>(object key, string name)
    {
        _cache.TryGetValue(key, out T value);
        _nlogger.Fatal("Key: {0}, Name: {1}, Value: {2}", key, name, value);
    }
}
nlogger is just an NLog object used to trace the MemoryCacheWithPolicy behavior.
I re-create the memory cache entry if the request object (RequestQuery requestQuery) has changed, creating the data through the delegate (Func<TParameter, TResult> createCacheData), or when the sliding or absolute time reached its limit. Note that everything is async too ;)

Lock an object that creates ID

Would the following method ensure that only one thread can read an ID at a time? I have a parallel process which uses the following method and I need it to return unique IDs. Unfortunately I cannot change the way the ID is structured.
private static int Seq = 0;
private static long dtDiff = 0;
private static object thisLock = new object();

private static object BuildClientID(string Code)
{
    lock (thisLock)
    {
        object sReturn = "";
        Seq++;
        dtDiff++;
        if (Seq == 1000)
        {
            Seq = 0;
            dtDiff = DateAndTime.DateDiff(DateInterval.Second, DateTime.Parse("1970-01-01"), DateTime.Now);
        }
        sReturn = dtDiff.ToString() + Code + Seq.ToString("000");
        return sReturn;
    }
}
I don't see any reason it wouldn't. Both the lock object and the method are static. The only thing you need to determine is whether you need a more sophisticated form of locking like a Mutex, SpinLock, ReaderWriterLock, or Semaphore.
You'll need to study those; here is a good link to get started.
Yes, it will work fine, as both threads will use the same static object as the lock object and will have to wait for each other.
edit
Based on Dan's comments: consider making Seq and dtDiff properties and putting all access to them inside the same lock.
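A sketch of that suggestion (the property names are hypothetical; the point is that any code reading the counters takes the same lock the generator uses):
private static int SequenceValue
{
    get { lock (thisLock) { return Seq; } }
}

private static long DtDiffValue
{
    get { lock (thisLock) { return dtDiff; } }
}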

Is this thread safe? Breakpoints get hit multiple times

I have the following code:
public class EmailJobQueue
{
    private EmailJobQueue()
    {
    }

    private static readonly object JobsLocker = new object();
    private static readonly Queue<EmailJob> Jobs = new Queue<EmailJob>();

    private static readonly object ErroredIdsLocker = new object();
    private static readonly List<long> ErroredIds = new List<long>();

    public static EmailJob GetNextJob()
    {
        lock (JobsLocker)
        {
            lock (ErroredIdsLocker)
            {
                // If there are no jobs or they have all errored then get some new ones - if jobs have previously been skipped then this will re get them
                if (!Jobs.Any() || Jobs.All(j => ErroredIds.Contains(j.Id)))
                {
                    var db = new DBDataContext();
                    foreach (var emailJob in db.Emailing_SelectSend(1))
                    {
                        // Dont re add jobs that exist
                        if (Jobs.All(j => j.Id != emailJob.Id) && !ErroredIds.Contains(emailJob.Id))
                        {
                            Jobs.Enqueue(new EmailJob(emailJob));
                        }
                    }
                }

                while (Jobs.Any())
                {
                    var curJob = Jobs.Dequeue();
                    // Check the job has not previously errored - if they all have then eventually we will exit the loop
                    if (!ErroredIds.Contains(curJob.Id))
                        return curJob;
                }
                return null;
            }
        }
    }

    public static void ReInsertErrored(long id)
    {
        lock (ErroredIdsLocker)
        {
            ErroredIds.Add(id);
        }
    }
}
I then start 10 threads which do this:
var email = EmailJobQueue.GetNextJob();
if (email != null)
{
    // Breakpoint here
}
The thing is that if I put a breakpoint where the comment is and add one item to the queue, the breakpoint gets hit multiple times. Is this an issue with my code or a peculiarity of the VS debugger?
Thanks,
Joe
It appears as if you are getting your jobs from the database:
foreach (var emailJob in db.Emailing_SelectSend(1))
Is that database call marking the records as unavailable for selection in future queries? If not, I believe that's why you're hitting the breakpoint multiple times.
For example, if I replace that call to the database with the following, I see your behavior.
// MockDB is a static queue configured as `MockDB.Enqueue(new EmailJob{Id = 1})`
private static IEnumerable<EmailJob> GetJobFromDB()
{
    return new List<EmailJob> { MockDB.Peek() };
}
However, if I actually Dequeue from the mock db, it only hits the breakpoint once.
private static IEnumerable<EmailJob> GetJobFromDB()
{
    var list = new List<EmailJob>();
    if (MockDB.Any())
        list.Add(MockDB.Dequeue());
    return list;
}
This is a side effect of debugging a multi-threaded piece of your application.
You are seeing the breakpoint being hit on each thread. Debugging a multi-threaded piece of the application is tricky because you're actually debugging all threads at the same time. In fact, at times, it will jump between classes while you're stepping through because it's doing different things on all of those threads, depending on your application.
Now, to address whether or not it's thread-safe. That really depends on how you're using the resources on those threads. If you're just reading, it's likely that it's thread-safe. But if you're writing, you'll need to leverage at least the lock operation on shared objects:
lock (someLockObject)
{
    // perform the write operation
}
