Ensuring thread safety with the ASP.NET cache - a tale of two strategies - C#

I have code that is not currently thread safe:
public byte[] GetImageByteArray(string filepath, string contentType, RImgOptions options)
{
    //Our unique cache keys will be composed of both the image's filepath and the requested width
    var cacheKey = filepath + options.Width.ToString();
    var image = HttpContext.Current.Cache[cacheKey];

    //If there is nothing in the cache, we need to generate the image, insert it into the cache, and return it
    if (image == null)
    {
        RImgGenerator generator = new RImgGenerator();
        byte[] bytes = generator.GenerateImage(filepath, contentType, options);
        CacheItem(cacheKey, bytes);
        return bytes;
    }
    //Image already exists in cache, serve it up!
    else
    {
        return (byte[])image;
    }
}
My CacheItem() method checks to see if its max cache size has been reached, and if it has, it will start removing cached items:
//If the cache exceeds its max allotment, we will remove items until it falls below the max
while ((int)cache[CACHE_SIZE] > RImgConfig.Settings.Profile.CacheSize * 1000 * 1000)
{
    var entries = (Dictionary<string, DateTime>)cache[CACHE_ENTRIES];
    var earliestCacheItem = entries.SingleOrDefault(kvp => kvp.Value == entries.Min(d => d.Value));
    int length = ((byte[])cache[earliestCacheItem.Key]).Length;
    cache.Remove(earliestCacheItem.Key);
    cache[CACHE_SIZE] = (int)cache[CACHE_SIZE] - length;
}
Since one thread could remove an item from the cache as another thread is referencing it, I can think of two options:
Option 1: A lock
lock (myLockObject)
{
    if (image == null) { **SNIP** }
}
Option 2: Assign a shallow copy to a local variable
var image = HttpContext.Current.Cache[cacheKey] != null ? HttpContext.Current.Cache[cacheKey].MemberwiseClone() : null;
Both of these options have overhead. The first forces threads to enter that code block one at a time. The second necessitates creating a new object in memory which could be of non-trivial size.
Are there any other strategies I could employ here?

To guarantee full consistency of your cache you have to lock the resource, which slows the application down.
In general, you should choose a caching strategy based on your application logic.
Check sliding-window (sliding expiration) caching: an item older than some time span is evicted. This reduces lock contention between threads and works well when you have a large spread of different cached items that will not necessarily be requested again (a minimal sketch follows below).
Consider a least-frequently-used strategy: the item that is used least is removed once the maximum cache size is reached. This serves best when multiple clients frequently hit the same part of the cached content.
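For instance, here is a minimal sketch of the sliding-expiration idea using the same System.Web cache the question already relies on. The 10-minute window, the extra filepath parameter, and the method name are assumptions for illustration, not the poster's actual CacheItem signature:

// Hedged sketch: cache the generated bytes with a sliding expiration, plus a
// file dependency so a changed image on disk invalidates the entry.
// The 10-minute window and the filepath parameter are illustrative only.
private void CacheItemSliding(string cacheKey, string filepath, byte[] bytes)
{
    HttpContext.Current.Cache.Insert(
        cacheKey,
        bytes,
        new System.Web.Caching.CacheDependency(filepath),   // drop the entry if the file changes
        System.Web.Caching.Cache.NoAbsoluteExpiration,
        TimeSpan.FromMinutes(10));                          // evicted 10 minutes after last access
}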
Check which one better suits your type of business logic and use it. It will not remove the locking issue entirely, but the right choice will significantly reduce race conditions.
To reduce contention on the shared resource, take read/write locks on each item rather than on the entire collection. This will boost your performance as well (see the per-key lock sketch below).
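For example, here is a rough per-key locking sketch built on the question's own GetImageByteArray. The _keyLocks dictionary is an invented name and requires using System.Collections.Concurrent; this is a sketch, not the poster's method:

// Sketch only: one lock object per cache key, so threads requesting different
// images never block each other. Only same-key requests serialize.
private static readonly ConcurrentDictionary<string, object> _keyLocks =
    new ConcurrentDictionary<string, object>();

public byte[] GetImageByteArray(string filepath, string contentType, RImgOptions options)
{
    var cacheKey = filepath + options.Width.ToString();
    var image = HttpContext.Current.Cache[cacheKey] as byte[];
    if (image != null)
        return image;

    lock (_keyLocks.GetOrAdd(cacheKey, _ => new object()))
    {
        // Re-check inside the lock: another thread may have generated it meanwhile.
        image = HttpContext.Current.Cache[cacheKey] as byte[];
        if (image == null)
        {
            image = new RImgGenerator().GenerateImage(filepath, contentType, options);
            CacheItem(cacheKey, image);
        }
        return image;
    }
}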
Another point to keep in mind: if the image at the same path is physically changed on disk (a different image saved under the same name) while a copy is already cached, there is a chance of serving stale data.
Hope it helped.

Related

C# .NET reading all of a process's memory troubles

I'm messing around with a scanning engine I'm working on and I'm trying to read the memory of a process. My code is below (it's a little messy) but for some reason if I read the memory of an application in different states, or after it has a lot of things loaded into memory, I get the same memory size no matter what. Are my entry point addresses and length incorrect?
If I use a memory editor I don't get the same results I do with this.
Process process = Process.GetProcessesByName(processName)[0];
List<byte[]> moduleMemory = new List<byte[]>();
byte[] temp;

// Iterate over the process's loaded modules ('pm' below)
foreach (ProcessModule pm in process.Modules)
{
    //MessageBox.Show(pm.FileName);
    temp = new byte[pm.ModuleMemorySize];
    int read;
    if (ReadProcessMemory(process.Handle, pm.BaseAddress, temp, temp.Length, out read))
    {
        moduleMemory.Add(temp);
    }
}
//string d = Encoding.Default.GetString(moduleMemory[0]);
MessageBox.Show("Size: " + moduleMemory[0].Length);
Your problem is probably caused by the fact that the Process class caches values:
The process component obtains information about a group of properties all at once. After the Process component has obtained information about one member of any group, it will cache the values for the other properties in that group and not obtain new information about the other members of the group until you call the Refresh method. Therefore, a property value is not guaranteed to be any newer than the last call to the Refresh method. The group breakdowns are operating-system dependent.
Therefore, after the target process loads additional modules, the process instance will still return the old values. Calling process.Refresh() should update all cached values and fix the issue.
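A minimal sketch of that fix, reusing the variables from the question:

Process process = Process.GetProcessesByName(processName)[0];

// ... later, once the target process may have loaded more modules ...
process.Refresh();   // throw away the cached property values

foreach (ProcessModule pm in process.Modules)
{
    // pm.BaseAddress and pm.ModuleMemorySize now reflect the current state
}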
As far as I can see, this code does nothing more than read the memory layout of the executable module (the .exe file) the process was created from, so it is no wonder you get the same size every time.
I assume you actually want to read the "operational" memory of the process. If so, you should have a look at this discussion.

Clearing Regex cache in C#

I am using the Regex method Regex.Replace for large strings. Since these strings are cached, it is consuming a lot of memory.
I want to clear this Regex cache once a particular operation is completed, so that the strings are garbage collected.
I can set the Regex cache size using Regex.CacheSize property, but how can I keep the cache size and clear the cache? Setting the cache size to zero will impact performance since I'm using this method multiple times for the same strings.
If I set the cache size to zero and reset it back to the old value, will the cached objects be discarded and garbage collected?
Code:
// languageDetails is an XML string holding XML comments, namespaces, etc.
// Need to remove the comments.
string pattern = "(<!--.*?-->)";
languageDetails = Regex.Replace(
    languageDetails,
    pattern,
    string.Empty,
    RegexOptions.Singleline);
... for large strings. Since these strings are cached, it is consuming a lot of memory.
The input/output strings are not cached in any way.
The cache stores compiled regular expressions. So unless you have very many very large patterns, your memory problem is not caused by the cache.
Look at the source code:
public static int CacheSize
{
    [__DynamicallyInvokable] get
    {
        return Regex.cacheSize;
    }
    [__DynamicallyInvokable] set
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof (value));
        Regex.cacheSize = value;
        if (Regex.livecode.Count <= Regex.cacheSize)
            return;
        lock (Regex.livecode)
        {
            while (Regex.livecode.Count > Regex.cacheSize)
                Regex.livecode.RemoveLast();
        }
    }
}
As you can see, setting the value to 0 makes the setter call Regex.livecode.RemoveLast() in a loop until the list is empty, so it will clear the livecode list of cached compiled expressions.
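So while there is no public Clear() method, a sketch of the "shrink and restore" trick would look like this (it only evicts the cached compiled patterns; it does not free your input/output strings):

int oldSize = Regex.CacheSize;   // remember the configured limit
Regex.CacheSize = 0;             // setter trims the internal livecode list to 0 entries
Regex.CacheSize = oldSize;       // restore; the cache refills as patterns are reused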

ASP.NET Caching, limiting the number of entries in cache

I'm trying to implement data caching for a web app in ASP.NET, this is for class and I've been asked to limit the number of entries in the ObjectCache, not by memory size but by the number of entries itself. This is quite easy since I can call ObjectCache.Count, but when the cache grows beyond the established limit (5, just for testing) I can't figure out how to remove the oldest element stored since it's alphabetically sorted.
This is being implemented in a Service, at the Data Access layer so I can't use any additional structure like a Queue to keep track of the insertions in the cache.
What can I do? Is there a way to filter or get the older element in the cache?
Here's the method code
public List<EventSummary> FindEvents(String keywords, long categoryId, int start, int count)
{
    string queryKey = "FindEvent-" + start + ":" + count + "-" + keywords.Trim() + "-" + categoryId;
    ObjectCache cache = MemoryCache.Default;
    List<EventSummary> val = (List<EventSummary>)cache.Get(queryKey);
    if (val != null)
        return val;

    Category evnCategory = CategoryDao.Find(categoryId);
    List<Event> fullResult = EventDao.FindByEventCategoryAndKeyword(evnCategory, keywords, start, count);
    List<EventSummary> summaryResult = new List<EventSummary>();
    foreach (Event evento in fullResult)
    {
        summaryResult.Add(new EventSummary(evento.evnId, evento.evnName, evento.Category, evento.evnDate));
    }

    if (cache.Count() >= maxCacheSize)
    {
        //WHAT SHOULD I DO HERE?
    }
    cache.Add(queryKey, summaryResult, DateTime.Now.AddDays(cacheDays));
    return summaryResult;
}
As mentioned in the comments, the Trim method from MemoryCache has a LRU (Least Recently Used) policy, which is the behavior you are looking for here. Unfortunately, the method is not based on an absolute number of objects to remove from the cache, but rather on a percentage, which is an int parameter. This just means that, if you try to hack your way around it and pass 1 / cache.Count() as the percentage, you have no control over how many objects have truly been removed from the cache, which is not an ideal scenario.
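For reference, a call would look roughly like this; the percentage is an assumed value, and Trim reports how many entries were actually removed:

// Ask MemoryCache to evict roughly 20% of its entries, least recently used first.
long removed = MemoryCache.Default.Trim(20);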
Another way to do it would be to go with a DIY approach and simply not use the .NET caching utilities since, in our case, they do not seem to natively fit your exact needs. I'm thinking of something along the lines of a SortedDictionary with the timecode of your cache objects as the key and a list of cache objects inserted at the given timecode as your values. It would be a good and, IMO, not too daring exercise to try to reproduce the .NET cache behavior you are already using, with the additional benefit of directly controlling the removal policy yourself (see the sketch below).
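A hedged sketch of that DIY idea, with an invented CountLimitedCache name, a single coarse lock for simplicity, and no expiration handling (it needs System.Collections.Generic and System.Linq):

public class CountLimitedCache<TValue>
{
    private readonly int _maxEntries;
    private readonly Dictionary<string, TValue> _values = new Dictionary<string, TValue>();
    // Insertion time -> keys inserted at that time, kept sorted so the oldest bucket is first.
    private readonly SortedDictionary<DateTime, List<string>> _byInsertTime =
        new SortedDictionary<DateTime, List<string>>();
    private readonly object _sync = new object();

    public CountLimitedCache(int maxEntries) { _maxEntries = maxEntries; }

    public void Add(string key, TValue value)
    {
        lock (_sync)
        {
            if (_values.Count >= _maxEntries)
            {
                // Evict one key from the oldest timestamp bucket.
                var oldest = _byInsertTime.First();
                string victim = oldest.Value[0];
                oldest.Value.RemoveAt(0);
                if (oldest.Value.Count == 0)
                    _byInsertTime.Remove(oldest.Key);
                _values.Remove(victim);
            }

            DateTime now = DateTime.UtcNow;
            List<string> keys;
            if (!_byInsertTime.TryGetValue(now, out keys))
                _byInsertTime[now] = keys = new List<string>();
            keys.Add(key);
            _values[key] = value;
        }
    }

    public bool TryGet(string key, out TValue value)
    {
        lock (_sync)
        {
            return _values.TryGetValue(key, out value);
        }
    }
}

In FindEvents this would replace the MemoryCache calls with something like cache.TryGet(queryKey, out val) and cache.Add(queryKey, summaryResult), at the cost of losing the time-based expiration the question currently gets from cacheDays.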
As a side comment, not directly related to your question: the biggest problem with caches in managed-memory models is the GC.
The moment you start storing more than a few million entries, you are asking for eventual GC pauses, even with the most advanced non-blocking collectors.
It is hard to cache over 16 GB without pausing every now and then for 5-6 seconds (that is, stop-all).
I have previously described here (https://stackoverflow.com/a/30584575/1932601) why caching objects as-is is eventually a bad choice if you need to store very many expiring entries (say, 100 million chat messages).
Take a look at what we did to store hundreds of millions of objects for a long time without killing the GC.
https://www.youtube.com/watch?v=Dz_7hukyejQ

C# Parallel.foreach - Making variables thread safe

I have been rewriting some process intensive looping to use TPL to increase speed. This is the first time I have tried threading, so want to check what I am doing is the correct way to do it.
The results are good - processing the data from 1000 Rows in a DataTable has reduced processing time from 34 minutes to 9 minutes when moving from a standard foreach loop into a Parallel.ForEach loop. For this test, I removed non thread safe operations, such as writing data to a log file and incrementing a counter.
I still need to write back into a log file and increment a counter, so I tried implementing a lock that encloses the StreamWriter/increment code block.
FileStream filestream = new FileStream("path_to_file.txt", FileMode.Create);
StreamWriter streamwriter = new StreamWriter(filestream);
streamwriter.AutoFlush = true;

try
{
    object locker = new object();

    // Lets assume we have a DataTable containing 1000 rows of data.
    DataTable datatable_results;

    if (datatable_results.Rows.Count > 0)
    {
        int row_counter = 0;

        Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
        {
            // Process data_row as normal.
            // When ready to write to log, do so.
            lock (locker)
            {
                row_counter++;
                streamwriter.WriteLine("Processing row: {0}", row_counter);
                // Write any data we want to log.
            }
        });
    }
}
catch (Exception e)
{
    // Catch the exception.
}

streamwriter.Close();
The above seems to work as expected, with minimal performance costs (still 9 minutes execution time). Granted, the actions contained in the lock are hardly significant themselves - I assume that as the time taken to process code within the lock increases, the longer the thread is locked for, the more it affects processing time.
My question: is the above an efficient way of doing this or is there a different way of achieving the above that is either faster or safer?
Also, let's say our original DataTable actually contains 30,000 rows. Is there anything to be gained by splitting this DataTable into chunks of 1,000 rows each and then processing them in the Parallel.ForEach, instead of processing all 30,000 rows in one go?
Writing to the file is expensive, and you're holding an exclusive lock while writing to it, which is bad: it's going to introduce contention.
You could add the output to a buffer instead, then write it to the file all at once. That should remove the contention and provide a way to scale.
if (datatable_results.Rows.Count > 0)
{
    ConcurrentQueue<string> buffer = new ConcurrentQueue<string>();

    Parallel.ForEach(datatable_results.AsEnumerable(), (data_row, state, index) =>
    {
        // Process data_row as normal.
        // When ready to write to log, do so.
        buffer.Enqueue(string.Format("Processing row: {0}", index));
    });

    streamwriter.AutoFlush = false;
    string line;
    while (buffer.TryDequeue(out line))
    {
        streamwriter.WriteLine(line);
    }
    streamwriter.Flush(); // Flush once when needed
}
Note that you don't need to maintain a loop counter: Parallel.ForEach provides you one. The difference is that it is not a counter but the item's index. If that changes the expected behavior, you can still add the counter back and use Interlocked.Increment to increment it.
I see that you're using streamwriter.AutoFlush = true; that will hurt performance. You can set it to false and flush once you're done writing all the data.
If possible, wrap the StreamWriter in a using statement so that you don't even need to flush the stream (you get it for free); a sketch follows.
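A sketch of that shape, using the same file name as in the question:

// 'using' disposes the writer when the block exits, which also flushes and
// closes the underlying stream, even if an exception is thrown.
using (var filestream = new FileStream("path_to_file.txt", FileMode.Create))
using (var streamwriter = new StreamWriter(filestream))
{
    // ... Parallel.ForEach, buffering and writing as above ...
}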
Alternatively, you could look at logging frameworks, which do this job pretty well. Examples: NLog, log4net, etc.
You may try to improve this if you avoid logging altogether, or log only into a thread-specific log file (not sure if that makes sense for you).
TPL starts roughly as many threads as you have cores (see: Does Parallel.ForEach limit the number of active threads?).
So what you can do is:
1) Get the number of cores on the target machine
2) Create a list of counters, with as many elements as there are cores
3) Have each core update its own counter
4) Sum them all up after the parallel execution terminates
So, in practice:
// KEY: THREAD ID, VALUE: THREAD-LOCAL COUNTER
Dictionary<int, int> counters = new Dictionary<int, int>(NUMBER_OF_CORES);
....
Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
{
    // Process data_row as normal.
    // When ready to write to log, do so.

    //lock (locker) // NO NEED FOR A LOCK, EVERY THREAD UPDATES ITS _OWN_ COUNTER
    //{
    //    row_counter++;
    counters[Thread.CurrentThread.ManagedThreadId] += 1;

    // NO WRITING, OR WRITE TO A THREAD-SPECIFIC FILE ONLY
    //streamwriter.WriteLine("Processing row: {0}", row_counter);
    //}
});
....
// AFTER EXECUTION OF THE PARALLEL LOOP, SUM ALL COUNTERS TO GET THE TOTAL ACROSS THREADS.
The benefit of this is that no locking is involved at all, which will dramatically improve performance. When you use the .NET concurrent collections, they always use some kind of locking inside.
This is naturally a basic idea and may not work as expected if you copy-paste it. We are talking about multithreading, which is always a hard topic. But, hopefully, it gives you some ideas to rely on (a thread-local-state sketch follows).
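A related lock-free sketch uses the thread-local overload of Parallel.ForEach, which avoids both the shared dictionary and any per-iteration locking. Variable names are taken from the question; the rest is illustrative:

int totalRows = 0;

Parallel.ForEach(
    datatable_results.AsEnumerable(),
    () => 0,                                    // localInit: each worker thread starts its own counter
    (data_row, state, localCount) =>
    {
        // Process data_row as normal.
        return localCount + 1;                  // no lock: only this thread touches localCount
    },
    localCount => Interlocked.Add(ref totalRows, localCount));  // localFinally: merge once per thread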
First of all, it takes about 2 seconds to process a row in your table and perhaps a few milliseconds to increment the counter and write to the log file. With the actual processing being 1000x more than the part you need to serialize, the method doesn't matter too much.
Furthermore, the way you have implemented it is perfectly solid. There are ways to optimize it, but none that are worth implementing in your situation.
One useful way to avoid locking on the increment is to use Interlocked.Increment. It is a bit slower than x++ but much faster than lock {x++;}. In your case, though, it doesn't matter.
As for the file output, remember that the output is going to be serialized anyway, so at best you can minimize the amount of time spent in the lock. You can do this by buffering all of your output before entering the lock, then just perform the write operation inside the lock. You probably want to do async writes to avoid unnecessary blocking on I/O.
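A small sketch of that buffering idea, assuming the row_counter and locker variables from the question and using Interlocked.Increment for the counter (System.Text is needed for StringBuilder):

// Build the complete log line outside the lock...
int myRow = Interlocked.Increment(ref row_counter);
var line = new StringBuilder();
line.AppendFormat("Processing row: {0}", myRow);
// ... append any other per-row data ...

// ...then hold the lock only for the actual write.
lock (locker)
{
    streamwriter.WriteLine(line.ToString());
}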
You can move the parallel code into a new method. For example:
// Class scope
private string GetLogRecord(int rowCounter, DataRow row)
{
    return string.Format("Processing row: {0}", rowCounter); // Write any data we want to log.
}

//....

Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
{
    // Process data_row as normal.
    // When ready to write to log, do so.
    lock (locker)
        row_counter++;

    var logRecord = GetLogRecord(row_counter, data_row);

    lock (locker)
        streamwriter.WriteLine(logRecord);
});
This is my code that uses a parallel for. The concept is similar, and perhaps easier for you to implement. FYI, for debugging, I keep a regular for loop in the code and conditionally compile the parallel code. Hope this helps. The value of i in this scenario isn't the same as the number of records processed, however. You could create a counter, use a lock, and add to it. For my other code where I do have a counter, I didn't use a lock and just allowed the value to be potentially off, to avoid the slower code. I have a status mechanism to indicate the number of records processed. For my implementation, the slight chance that the count is off is not an issue - at the end of the loop I put out a message saying all the records have been processed.
#if DEBUG
    for (int i = 0; i < stend.PBBIBuckets.Count; i++)
    {
        //int serverIndex = 0;
#else
    ParallelOptions options = new ParallelOptions();
    options.MaxDegreeOfParallelism = m_maxThreads;

    Parallel.For(0, stend.PBBIBuckets.Count, options, (i) =>
    {
#endif
        g1client.Message request;
        DataTable requestTable;

        request = new g1client.Message();
        requestTable = request.GetDataTable();

        requestTable.Columns.AddRange(
            Locations.Columns.Cast<DataColumn>().Select(x => new DataColumn(x.ColumnName, x.DataType)).ToArray());

        FillPBBIRequestTables(requestTable, request, stend.PBBIBuckets[i], stend.BucketLen[i], stend.Hierarchies);
#if DEBUG
    }
#else
    });
#endif

C# byte[] substring? (design)

I'm downloading some files asynchronously into a large byte array, and I have a callback that fires off periodically whenever some data is added to that array. If I want to give developers the ability to use the last chunk of data that was added to array, then... well how would I do that? In C++ I could give them a pointer to somewhere in the middle, and then perhaps tell them the number of bytes that were added in the last operation so they at least know the chunk they should be looking at... I don't really want to give them a 2nd copy of that data, that's just wasteful.
I'm just thinking if people want to process this data before the file has completed downloading. Would anyone actually want to do that? Or is it a useless feature anyway? I already have a callback for when the buffer (entire byte array) is full, and then they can dump the whole thing without worrying about start and end points...
.NET has a struct that does exactly what you want:
System.ArraySegment.
In any case, it's easy to implement it yourself too - just make a constructor that takes a base array, an offset, and a length. Then implement an indexer that offsets indexes behind the scenes, so your ArraySegment can be seamlessly used in the place of an array.
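For example, a minimal sketch of handing the callback the newest region of the existing buffer without copying; downloadBuffer, writeOffset, and bytesJustAdded are invented names:

// Wrap the freshly-written region of the existing array; no bytes are copied.
var chunk = new ArraySegment<byte>(downloadBuffer, writeOffset, bytesJustAdded);

// Consumers read through Array/Offset/Count rather than assuming index 0.
for (int i = chunk.Offset; i < chunk.Offset + chunk.Count; i++)
{
    byte b = chunk.Array[i];
    // ... process b ...
}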
You can't give them a pointer into the array, but you could give them the array and start index and length of the new data.
But I have to wonder what someone would use this for. Is this a known need, or are you just guessing that someone might want it someday? And if so, is there any reason why you couldn't wait to add the capability once someone actually needs it?
Whether this is needed or not depends on whether you can afford to accumulate all the data from a file before processing it, or whether you need to provide a streaming mode where you process each chunk as it arrives. This depends on two things: how much data there is (you probably would not want to accumulate a multi-gigabyte file), and how long it takes the file to completely arrive (if you are getting the data over a slow link you might not want your client to wait till it had all arrived).
So it is a reasonable feature to add, depending on how the library is to be used. Streaming mode is usually a desirable attribute, so I would vote for implementing the feature.
However, the idea of putting the data into an array seems wrong, because it fundamentally implies a non-streaming design, and because it requires an additional copy. What you could do instead is to keep each chunk of arriving data as a discrete piece. These could be stored in a container for which adding at the end and removing from the front is efficient.
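A rough sketch of that "discrete chunks" idea, using a plain Queue<byte[]>; the method names are invented, and a real producer/consumer scenario would need synchronization or a ConcurrentQueue:

// Each arriving chunk stays its own array; nothing is copied into one big buffer.
Queue<byte[]> pendingChunks = new Queue<byte[]>();

void OnChunkArrived(byte[] chunk)          // called by the download callback
{
    pendingChunks.Enqueue(chunk);          // cheap: adds at the end
}

void ProcessPendingChunks()
{
    while (pendingChunks.Count > 0)
    {
        byte[] chunk = pendingChunks.Dequeue();   // cheap: removes from the front
        // ... parse/consume this chunk, then let it be garbage collected ...
    }
}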
Copying a chunk of a byte array may seem "wasteful," but then again, object-oriented languages like C# tend to be a little more wasteful than procedural languages anyway. A few extra CPU cycles and a little extra memory consumption can greatly reduce complexity and increase flexibility in the development process. In fact, copying bytes to a new location in memory to me sounds like good design, as opposed to the pointer approach which will give other classes access to private data.
But if you do want to use pointers, C# does support them. Here is a decent-looking tutorial. The author is correct when he states, "...pointers are only really needed in C# where execution speed is highly important."
I agree with the OP: sometimes you just plain need to pay some attention to efficiency. I don't think the example of providing an API is the best, because that certainly calls for leaning toward safety and simplicity over efficiency.
However, a simple example is when processing large numbers of huge binary files that have zillions of records in them, such as when writing a parser. Without using a mechanism such as System.ArraySegment, the parser becomes a big memory hog, and is greatly slowed down by creating a zillion new data elements, copying all the memory over, and fragmenting the heck out of the heap. It's a very real performance issue. I write these kinds of parsers all the time for telecommunications stuff which generate millions of records per day in each of several categories from each of many switches with variable length binary structures that need to be parsed into databases.
Using the System.ArraySegment mechanism versus creating new structure copies for each record tremendously speeds up the parsing, and greatly reduces the peak memory consumption of the parser. These are very real advantages because the servers run multiple parsers, run them frequently, and speed and memory conservation = very real cost savings in not having to have so many processors dedicated to the parsing.
System.Array segment is very easy to use. Here's a simple example of providing a base way to track the individual records in a typical big binary file full of records with a fixed length header and a variable length record size (obvious exception control deleted):
public struct MyRecord
{
    public ArraySegment<byte> header;
    public ArraySegment<byte> data;
}

public class Parser
{
    const int HEADER_SIZE = 10;
    const int HDR_OFS_REC_TYPE = 0;
    const int HDR_OFS_REC_LEN = 4;

    byte[] m_fileData;
    List<MyRecord> records = new List<MyRecord>();

    bool Parse(FileStream fs)
    {
        // Read the whole file into one array; the segments below just point into it.
        int fileLen = (int)fs.Length;
        m_fileData = new byte[fileLen];
        fs.Read(m_fileData, 0, fileLen);
        fs.Close();
        fs.Dispose();

        int offset = 0;
        while (offset + HEADER_SIZE < fileLen)
        {
            int recType = (int)m_fileData[offset];
            switch (recType) { /*puke if not a recognized type*/ }

            int varDataLen = ((int)m_fileData[offset + HDR_OFS_REC_LEN]) * 256
                           + (int)m_fileData[offset + HDR_OFS_REC_LEN + 1];
            if (offset + HEADER_SIZE + varDataLen > fileLen) { /*puke as file has odd bytes at end*/ }

            MyRecord rec = new MyRecord();
            rec.header = new ArraySegment<byte>(m_fileData, offset, HEADER_SIZE);
            rec.data = new ArraySegment<byte>(m_fileData, offset + HEADER_SIZE, varDataLen);
            records.Add(rec);

            offset += HEADER_SIZE + varDataLen;
        }
        return true;
    }
}
The above example gives you a list with ArraySegments for each record in the file while leaving all the actual data in place in one big array per file. The only overhead are the two array segments in the MyRecord struct per record. When processing the records, you have the MyRecord.header.Array and MyRecord.data.Array properties which allow you to operate on the elements in each record as if they were their own byte[] copies.
I think you shouldn't bother.
Why on earth would anyone want to use it?
That sounds like you want an event.
public class ArrayChangedEventArgs : EventArgs
{
    public ArrayChangedEventArgs(byte[] array, int start, int length)
    {
        Array = array;
        Start = start;
        Length = length;
    }

    public byte[] Array { get; private set; }
    public int Start { get; private set; }
    public int Length { get; private set; }
}
// ...
// and in your class:
public event EventHandler<ArrayChangedEventArgs> ArrayChanged;

protected virtual void OnArrayChanged(ArrayChangedEventArgs e)
{
    // Using a temporary variable avoids a common potential multithreading issue
    // where the multicast delegate changes midstream.
    // Best practice is to grab a copy first, then test for null.
    EventHandler<ArrayChangedEventArgs> handler = ArrayChanged;
    if (handler != null)
    {
        handler(this, e);
    }
}

// Finally, your code that downloads a chunk just needs to call OnArrayChanged()
// with the appropriate args.
Clients hook into the event and get called when things change. This is what most client code in .NET expects to have in an API ("call me when something happens"). They can hook into the code with something as simple as:
yourDownloader.ArrayChanged += (sender, e) =>
    Console.WriteLine(String.Format("Just downloaded {0} byte{1} at position {2}.",
        e.Length, e.Length == 1 ? "" : "s", e.Start));
