How do I implement asynchronous caching? - c#

We're using the following pattern to handle caching of universal objects for our asp.net application.
private object SystemConfigurationCacheLock = new object();

public SystemConfiguration SystemConfiguration
{
    get
    {
        if (HttpContext.Current.Cache["SystemConfiguration"] == null)
            lock (SystemConfigurationCacheLock)
            {
                if (HttpContext.Current.Cache["SystemConfiguration"] == null)
                    HttpContext.Current.Cache.Insert(
                        "SystemConfiguration",
                        GetSystemConfiguration(),
                        null,
                        DateTime.Now.AddMinutes(1),
                        Cache.NoSlidingExpiration,
                        new CacheItemUpdateCallback(SystemConfigurationCacheItemUpdateCallback));
            }
        return HttpContext.Current.Cache["SystemConfiguration"] as SystemConfiguration;
    }
}

private void SystemConfigurationCacheItemUpdateCallback(string key, CacheItemUpdateReason reason, out object expensiveObject, out CacheDependency dependency, out DateTime absoluteExpiration, out TimeSpan slidingExpiration)
{
    dependency = null;
    absoluteExpiration = DateTime.Now.AddMinutes(1);
    slidingExpiration = Cache.NoSlidingExpiration;
    expensiveObject = GetSystemConfiguration();
}

private SystemConfiguration GetSystemConfiguration()
{
    // Load system configuration
}
The problem is that when under load (~100,000 users) we see a huge jump in TTFB as the CacheItemUpdateCallback blocks all the other threads from executing until it has finished refreshing the cache from the database.
So what I figured we need is a solution where, when the first thread attempts to access the cache after it has expired, an asynchronous thread is fired off to update the cache, while all other executing threads are still allowed to read from the old cache until it has been successfully updated.
Is there anything built into the .NET framework that can natively handle what I'm asking, or will I have to write it from scratch? Your thoughts please...
A couple of things...
The use of the HttpContext.Current.Cache is incidental and not necessarily essential as we've got no problem using private members on a singleton to hold the cached data.
Please don't comment on the cache times, SPROC efficiency, why we're caching in the first place, etc., as it's not relevant. Thanks!
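Purely to illustrate the behaviour I'm describing (not production code), a minimal sketch using private members on a singleton instead of HttpContext.Current.Cache might look like the following. The class name ConfigurationCache, the one-minute refresh interval and the flag handling are all made up for the example, and it is deliberately simplified (e.g. the very first request just gets null until the initial load completes):

public sealed class ConfigurationCache
{
    private static readonly ConfigurationCache instance = new ConfigurationCache();
    public static ConfigurationCache Instance { get { return instance; } }

    private volatile SystemConfiguration current;        // last successfully loaded value
    private DateTime expiresAtUtc = DateTime.MinValue;    // when 'current' goes stale
    private int refreshing;                                // 0 = idle, 1 = refresh in flight

    public SystemConfiguration SystemConfiguration
    {
        get
        {
            if (DateTime.UtcNow >= expiresAtUtc &&
                Interlocked.CompareExchange(ref refreshing, 1, 0) == 0)
            {
                // Only the first thread past expiry gets in here; every other thread
                // skips straight to the return and keeps reading the stale value.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        current = GetSystemConfiguration(); // the expensive DB call
                        expiresAtUtc = DateTime.UtcNow.AddMinutes(1);
                    }
                    finally
                    {
                        Interlocked.Exchange(ref refreshing, 0);
                    }
                });
            }
            return current; // stale until the refresh completes; null on the very first hit
        }
    }

    private SystemConfiguration GetSystemConfiguration()
    {
        // Load system configuration (same as in the code above)
        throw new NotImplementedException();
    }
}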

AppFabric might be a good fit for what you're looking for.
http://msdn.microsoft.com/en-us/windowsserver/ee695849
http://msdn.microsoft.com/en-us/library/ff383731.aspx

So it turns out, after several hours of investigation, that the problem is not the CacheItemUpdateCallback blocking other threads as I originally thought; in fact it did exactly what I wanted, asynchronously. It was the garbage collector stopping everything to clean up the LOH (large object heap).
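For anyone chasing the same symptom, a quick way to see which collector mode the worker process is actually running in (workstation vs. server GC, and the latency mode) is a one-off log like the sketch below. GCSettings.IsServerGC requires .NET 4.5 or later; this is only a diagnostic aid, not part of the fix described above:

using System;
using System.Runtime;

public static class GcDiagnostics
{
    // Call once, e.g. from Application_Start, and write the result to your log.
    public static string Describe()
    {
        return string.Format(
            "ServerGC: {0}, LatencyMode: {1}, MaxGeneration: {2}",
            GCSettings.IsServerGC,   // server GC usually reduces long blocking pauses
            GCSettings.LatencyMode,  // e.g. Interactive (concurrent) vs Batch
            GC.MaxGeneration);
    }
}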

Related

Garbage collection of global Application object in ASP.NET MVC C#

I have run into a problem where a long-running singleton added to HttpApplicationState, which does some data masking (GDPR), stops masking data after running in the background for some time.
It's hard to debug because it only happens in our UAT environment, and it usually happens overnight.
The problem is that the data masking library is third party and is still a work in progress (or at the end of that work in progress).
But I'd appreciate it if anyone with better GC knowledge could look at the init code below and confirm this is outside the GC's domain.
Translator.GetInstance() is a lazy loader of the GDPR masking/translation singleton, so it's initialized the first time the user masks/unmasks the data.
protected void Application_Start()
{
    // Note: the original condition was `if (Translator)`, which is not valid C#;
    // presumably a flag or null check decides whether the third-party translator is used.
    if (Translator.GetInstance() != null)
    {
        Application["MaskDataUtility"] = new MaskDataUtility(Translator.GetInstance());
    }
    else
    {
        Application["MaskDataUtility"] = new MaskDataUtility(new CustomTranslator());
    }
}
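For reference, the Translator itself is third party and not shown, but a typical Lazy<T>-based GetInstance() of the kind described would look roughly like this (the constructor body is a placeholder):

using System;
using System.Threading;

public sealed class Translator
{
    // ExecutionAndPublication ensures the instance is created exactly once,
    // on first use, even under concurrent access.
    private static readonly Lazy<Translator> instance =
        new Lazy<Translator>(() => new Translator(),
            LazyThreadSafetyMode.ExecutionAndPublication);

    private Translator() { /* expensive init, e.g. loading masking rules */ }

    public static Translator GetInstance()
    {
        return instance.Value;
    }
}

Anything reachable from a static field like this is rooted for the lifetime of the app domain, so the GC will not collect it; an app domain recycle, however, discards it along with everything in HttpApplicationState.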

Cache expires although explicitly set not to expire

I have plenty of available RAM (about 25 GB of free memory) and I don't want the cache to expire; I just remove and re-cache items when there is a change. As my website is in the testing process, it only has 1 or 2 KB of cached items, but when I check the cache after some time (like half an hour), I see that they have expired. I use this code for inserting into the cache:
Cache.Insert(ckey, Results, null, Cache.NoAbsoluteExpiration, TimeSpan.Zero);
This is my first time using the cache. Does anybody know what is wrong with the code or the cache?
Chances are that if you are leaving it for some time, your app domain is shutting down due to lack of use, and if that goes, so does its in-memory cache.
ASP.NET Data Cache - preserve contents after app domain restart discusses this issue and some possible solutions to it.
Try this:
Cache.Insert(
    ckey, Results,
    null,                       /* CacheDependency */
    Cache.NoAbsoluteExpiration, /* absoluteExpiration */
    Cache.NoSlidingExpiration,  /* slidingExpiration */
    CacheItemPriority.Normal,   /* priority */
    null                        /* onRemoveCallback */
);
View this article for further info; it may already be answered there:
Default duration of Cache.Insert in ASP.NET
I have stumbled upon a similar issue. It looks like HttpRuntime.Cache takes the liberty of removing items from the cache when it "feels" that there is not enough memory. This happens even if the CacheItemPriority.NotRemovable priority is provided along with no absolute/no sliding expiration, and under normal application domain operation (no shutdown).
How to catch actual expiration
HttpRuntime.Cache provides a remove callback to be used when an item is removed. Of course, in order to filter out normal evictions on application pool shutdown, System.Web.Hosting.HostingEnvironment.ShutdownReason should be checked.
public class ApplicationPoolService : IApplicationPoolService
{
    public bool IsShuttingDown()
    {
        return System.Web.Hosting.HostingEnvironment.ShutdownReason != ApplicationShutdownReason.None;
    }
}

private void ReportRemovedCallback(string key, object value, CacheItemRemovedReason reason)
{
    if (!ApplicationPoolService.IsShuttingDown())
    {
        var str = $"Removed cached item with key {key} and count {(value as IDictionary)?.Count}, reason {reason}";
        LoggingService.Log(LogLevel.Info, str);
    }
}

HttpRuntime.Cache.Insert(CacheDictKey, dict,
    dependencies: null,
    absoluteExpiration: DateTime.Now.AddMinutes(absoluteExpiration),
    slidingExpiration: slidingExpiration <= 0 ? Cache.NoSlidingExpiration : TimeSpan.FromMinutes(slidingExpiration),
    priority: CacheItemPriority.NotRemovable,
    onRemoveCallback: ReportRemovedCallback);
Alternative
MemoryCache can be used as a good replacement for HttpRuntime.Cache. It provides very similar functionality. A full analysis can be read here.
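For example, a rough MemoryCache equivalent of the HttpRuntime.Cache.Insert call above might look like this (the key name and expiration are illustrative, and dict is the same dictionary as above):

using System;
using System.Runtime.Caching;

// MemoryCache.Default is an in-process cache independent of ASP.NET's HttpRuntime.Cache.
var cache = MemoryCache.Default;

var policy = new CacheItemPolicy
{
    // Roughly equivalent to NotRemovable: the item is only evicted on expiry or explicit removal.
    Priority = CacheItemPriority.NotRemovable,
    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30),
    RemovedCallback = args =>
    {
        // args.RemovedReason says whether the item expired, was evicted, or was removed explicitly.
        Console.WriteLine("Removed {0}: {1}", args.CacheItem.Key, args.RemovedReason);
    }
};

cache.Set("my-dictionary", dict, policy);
var cached = cache.Get("my-dictionary");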

Locking across different threads in an ASP.NET WebAPI

I've got a scenario where I need to temporarily cache information from a Web API when it is first called. With the same parameters, this API can be called a few times a second.
Due to performance restrictions, I don't want each call fetching the data and putting it into the memory cache, so I've implemented a system with semaphores to try to allow one thread to initialize the cache and then allow the rest to just query that cache.
I've stripped the code down to show an example of what I'm doing currently.
private static MemoryCacher memCacher = new MemoryCacher();
private static ConcurrentDictionary<string, Semaphore> dictionary = new ConcurrentDictionary<string, Semaphore>();

private async Task<int[]> DoAThing(string requestHash)
{
    // check for an existing cached result before hitting the dictionary
    var cacheValue = memCacher.GetValue(requestHash);
    if (cacheValue != null)
    {
        return ((CachedResult)cacheValue).CheeseBurgers;
    }

    Semaphore semi = dictionary.GetOrAdd(requestHash, new Semaphore(1, 1, requestHash));
    semi.WaitOne();

    // It's possible a previous thread has now filled up the cache. Have a squiz.
    cacheValue = memCacher.GetValue(requestHash);
    if (cacheValue != null)
    {
        dictionary.TryRemove(requestHash, out _);
        semi.Release();
        return ((CachedResult)cacheValue).CheeseBurgers;
    }

    // fetch the latest data from the relevant web api
    var response = await httpClient.PostAsync(url, content);

    // add the result to the cache
    memCacher.Add(requestHash, new CachedResult() { CheeseBurgers = response.returnArray }, DateTime.Now.AddSeconds(30));

    // We have added everything to the cacher so we don't need this semaphore in the dictionary anymore:
    dictionary.TryRemove(requestHash, out _);

    // Open the floodgates
    semi.Release();
    return response.returnArray;
}
Unfortunately there are many weird issues where more than one thread at a time manages to get through the WaitOne() call, and then, when released, breaks due to the count restriction on the semaphore (which is there to make sure only one thread does the work at a time).
I've tried using Mutexes and Monitors, but since IIS doesn't guarantee that an API call will always run on the same thread, this regularly fails when the mutex is released on a different thread from the one that acquired it.
Any suggestions on other ways to implement this would be welcome as well!
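Not from the original post, but one commonly suggested shape for this kind of problem is SemaphoreSlim with WaitAsync: SemaphoreSlim has no thread affinity, so it does not matter which thread ends up calling Release() after the await. A sketch reusing the placeholder types from the question (memCacher, httpClient, CachedResult, etc.):

private static readonly ConcurrentDictionary<string, SemaphoreSlim> locks =
    new ConcurrentDictionary<string, SemaphoreSlim>();

private async Task<int[]> DoAThingAsync(string requestHash)
{
    var cacheValue = memCacher.GetValue(requestHash);
    if (cacheValue != null)
        return ((CachedResult)cacheValue).CheeseBurgers;

    // One SemaphoreSlim per request hash; unlike Semaphore/Mutex it has no
    // thread affinity, so awaiting and releasing on different threads is fine.
    var gate = locks.GetOrAdd(requestHash, _ => new SemaphoreSlim(1, 1));

    await gate.WaitAsync();
    try
    {
        // Re-check: another caller may have populated the cache while we waited.
        cacheValue = memCacher.GetValue(requestHash);
        if (cacheValue != null)
            return ((CachedResult)cacheValue).CheeseBurgers;

        var response = await httpClient.PostAsync(url, content);
        memCacher.Add(requestHash,
            new CachedResult { CheeseBurgers = response.returnArray },
            DateTime.Now.AddSeconds(30));
        return response.returnArray;
    }
    finally
    {
        gate.Release();
    }
}

Leaving the SemaphoreSlim in the dictionary (rather than removing and re-adding it) side-steps the race around TryRemove, at the cost of keeping one small object per distinct request hash.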

Preventing duplicate user transactions with user-specific locks?

We have a legacy ASP.NET 2.0 environment where each page execution is authenticated to a specific user, and therefore I have an integer representing the logged-in user's ID.
On one of the pages I need to run some code where I want to prevent the user from performing a duplicate action. I'm finding it difficult to guarantee this can't happen, even though we're doing basic dupe-prevention checking.
Obviously I could create a static object and do a lock(myObject) { ... } around the entire piece of code to try and prevent some of these race conditions. But I don't want to create a bottleneck for everyone... I just want to stop the same logged-in user from running the code simultaneously or nearly simultaneously.
So I am thinking of creating an object instance for each user and storing it in a cache keyed by their user id. Then I look up that object, and if it is found, I lock on it. If not found, I first create/cache it, then lock on it.
Does this make sense? Is there a better way to accomplish what I'm after?
Something like this is what I'm thinking:
public class MyClass
{
    private static object lockObject = new object(); // global locking object

    public void Page_Load()
    {
        string cachekey = "preventdupes:" + UserId.ToString();
        object userSpecificLock = null;

        // This part would synchronize among all requests, but should be quick
        // as it is simply trying to find out if a user-specific lock object
        // exists, and if so, it gets it. Otherwise, it creates and stores it.
        lock (lockObject)
        {
            userSpecificLock = HttpRuntime.Cache.Get(cachekey);
            if (userSpecificLock == null)
            {
                userSpecificLock = new object();
                // Cache the locking object on a sliding 30 minute window
                HttpRuntime.Cache.Add(cachekey, userSpecificLock, null,
                    System.Web.Caching.Cache.NoAbsoluteExpiration,
                    new TimeSpan(0, 30, 0),
                    System.Web.Caching.CacheItemPriority.AboveNormal, null);
            }
        }

        // Now we have obtained an instance of an object specific to the user,
        // and we'll lock the next block of code specifically to them.
        lock (userSpecificLock)
        {
            try
            {
                // Perform some operations to check our database to see if the
                // transaction already occurred for this user, and if not,
                // perform the transaction, and then record it into our db.
            }
            catch (Exception)
            {
                // Rollback anything our code has done up until this exception,
                // so that if the user tries again, it will work.
            }
        }
    }
}
The solution is to use a mutex.
Mutexes can be named, so you can include the user id in the name, and they work machine-wide, so they still work if you have many worker processes under the same pool (web garden).
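As a rough illustration of the named-mutex idea (the name, the Global\ prefix and the timeout are illustrative, not prescriptive):

// One mutex per logged-in user; the name makes it visible across worker processes.
string mutexName = @"Global\MyApp_DupePrevention_" + UserId;

using (var mutex = new Mutex(false, mutexName))
{
    bool acquired = false;
    try
    {
        // Wait a bounded amount of time rather than forever.
        acquired = mutex.WaitOne(TimeSpan.FromSeconds(10));
        if (!acquired)
        {
            // Another request for the same user is still running; treat this one as a duplicate.
            return;
        }

        // Check the database for an existing transaction, perform it if absent, record it.
    }
    catch (AbandonedMutexException)
    {
        // A previous holder died without releasing; we now own the mutex and can proceed.
        acquired = true;
    }
    finally
    {
        if (acquired)
            mutex.ReleaseMutex();
    }
}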
More to read:
http://en.wikipedia.org/wiki/Mutual_exclusion
Asp.Net. Synchronization access(mutex)
http://www.dotnetperls.com/mutex
MSDN Mutex with example
Some points
The lock: lock only works across threads inside the same process, and you can only use it to synchronize access to static (in-process) state. HttpRuntime.Cache is also per-process static memory, which means that if you have many processes under the same pool (web garden), you have many different Cache instances.
The page is also automatically synchronized by the session. So if you have disabled the session for this page, the mutex has a point; if not, the session already locks Page_Load (with a mutex), and the mutex you are going to add has no effect.
Some references on this:
ASP.NET Server does not process pages asynchronously
Is Session variable thread-safe within a Parallel.For loop in ASP.Net page
HttpContext.Current is null when in method called from PageAsyncTask

Ninject.Web.MVC + MVC3 throws StackOverflowException

I've got a simple web application using ASP.NET MVC3 and Ninject.Web.MVC (the MVC3 version).
The whole thing is working fine, except when the application ends. Whenever it ends, the kernel is disposed, as seen in Application_End() in NinjectHttpApplication:
Reflector tells me this:
public void Application_End()
{
    lock (this)
    {
        if (kernel != null)
        {
            kernel.Dispose();
            kernel = null;
        }
        this.OnApplicationStopped();
    }
}
What happens is that my webserver goes down with a StackOverflowException (I tried both IIS7 and the built-in webserver in VS2010). I can only assume this is where it's going wrong, as I haven't written any code myself on application end.
I figured out that the Kernel knows how to resolve IKernel (which returns the Kernel itself); might this be something that could cause the stack overflow? I could imagine something like this happening:
1. Kernel.Dispose()
2. Dispose all instances in the kernel
3. Hey, look at this: the kernel is also in the kernel. Return to step 1.
In other words, the kernel gets disposed, disposes all references it holds (which includes a self-reference), which causes it to dispose itself.
Does this make any sense?
Edit:
It seems the problem is in NinjectHttpApplication. Take a look at this activation code:
public void Application_Start()
{
    lock (this)
    {
        kernel = this.CreateKernel();
        ...
        kernel.Bind<IResolutionRoot>().ToConstant(kernel).InSingletonScope();
        ...
    }
}
It seems OK, but what's happening is that whenever an IResolutionRoot is resolved, the kernel is cached within itself. When the kernel is disposed, the cache is emptied, which disposes all cached objects, which leads back to the kernel disposing itself (the circular reference).
A simple solution for NinjectHttpApplication would be to change the binding: replace the constant binding with a method binding.
kernel.Bind<IResolutionRoot>().ToConstant(kernel).InSingletonScope();
becomes
kernel.Bind<IResolutionRoot>().ToMethod(x => this.Kernel);
This solves the problem, but I am not sure whether the whole circular dispose caching issue is a bug in Ninject.
I encountered the same issue.
I ended up copying the code for NinjectHttpApplication and removing Kernel.Dispose() in the Application_End function.
public void Application_End()
{
    lock (this)
    {
        if (kernel != null)
        {
            //kernel.Dispose();
            kernel = null;
        }
        this.OnApplicationStopped();
    }
}
That should fix the error. Not sure if there is a planned fix for it though.
There was a bug in MVC3. It's fixed in the latest revision and will be part of the RC2 coming next week. In the meantime, take the build from the build server: http://teamcity.codebetter.com
