MemoryCache over Threads - C#

I'm currently investigating some code which has a cache layer that, at the bottom level, uses the MemoryCache class. This is a C# Windows Service app, so not web/IIS. There is a section of the code, a proof of concept, that spawns a number of threads, each of which performs some calculations and stores the results in the above-mentioned cache layer. What is being seen is that cached values appear to be stored per thread and not at application level. I thought that MemoryCache was a singleton that would sit outside the individual threads.
Can anybody confirm this behaviour would be expected?
Many thanks for any comments.

A MemoryCache is thread-safe, but there is no reason to assume it's a singleton. If you want different threads to access the same MemoryCache instance, you need to give them all a reference to the same instance, either as a singleton (really bad), a static (still bad), or through argument passing / dependency injection (good).
The simple way to do it (which does use global state) is to access the default memory cache:
var cache = MemoryCache.Default; // not really a good idea and harder to test, works
You can find the specific docs here. Make sure to configure it in your app/web.config file.
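To share the cache through argument passing instead, something along these lines works (the class and key names are invented for illustration): every worker gets a reference to the same MemoryCache, so the values end up in one application-level cache rather than per thread.

```csharp
// Illustrative sketch: every worker thread receives the SAME MemoryCache
// reference, so values cached on one thread are visible to all the others.
// CalculationWorker and the cache/key names are invented for this example.
using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

public class CalculationWorker
{
    private readonly MemoryCache _cache;

    public CalculationWorker(MemoryCache cache)
    {
        _cache = cache; // a shared reference, not a per-thread copy
    }

    public void Run(string key)
    {
        // MemoryCache is thread-safe, so concurrent Set/Get calls are fine.
        _cache.Set(key, DateTime.UtcNow, DateTimeOffset.UtcNow.AddMinutes(10));
    }
}

public static class Program
{
    public static void Main()
    {
        var cache = new MemoryCache("calculations");

        // Each parallel task is handed the same cache instance.
        Parallel.For(0, 4, i => new CalculationWorker(cache).Run("result-" + i));

        Console.WriteLine(cache.GetCount()); // 4 entries in one application-level cache
    }
}
```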

Related

ReaderWriterLockSlim as a Singleton?

I have a Blazor Server app that runs its frontend as a scoped process and uses a memory cache to handle objects across requests. Within the scoped process it is obviously easy enough to use a ReaderWriterLockSlim object to prevent multi-threading issues; however, I need to do so across numerous instances.
I have read through several posts focused on this issue, but none appear to work as I require, and I wondered if I could just create a class, expose a ReaderWriterLockSlim as a public readonly field, and then register that class as a singleton.
Shouldn't this have the same effect no matter where it is called from? I feel that I must be missing something that will cause massive issues.
Thanks!
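For what it's worth, a minimal sketch of the wrapper described above might look like this, assuming standard ASP.NET Core dependency injection in the Blazor Server app; LockProvider and CachedObjectStore are invented names.

```csharp
// Illustrative sketch: a wrapper exposing one ReaderWriterLockSlim, registered
// as a singleton so every scoped circuit/request shares the same lock.
// LockProvider and CachedObjectStore are invented names.
using System;
using System.Threading;

public sealed class LockProvider
{
    public readonly ReaderWriterLockSlim CacheLock =
        new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
}

// Registration (e.g. in Program.cs):
//   builder.Services.AddSingleton<LockProvider>();
//   builder.Services.AddScoped<CachedObjectStore>();

public class CachedObjectStore
{
    private readonly LockProvider _locks;

    public CachedObjectStore(LockProvider locks)
    {
        _locks = locks; // the same LockProvider instance in every scope
    }

    public void Update(Action write)
    {
        _locks.CacheLock.EnterWriteLock();
        try { write(); }
        finally { _locks.CacheLock.ExitWriteLock(); }
    }
}
```

Because the wrapper is registered as a singleton, every scope resolves the same ReaderWriterLockSlim instance, which is what gives the cross-instance locking effect described above.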

Azure Functions Performance and Dependency Injection

I have been looking at Azure Functions with VS 2017 from a performance-improvement point of view. Dependency injection is something that's not currently supported by Azure Functions, but if we use a workaround (something like this: https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/ ) and perform dependency injection from the static constructor of a function, what impact does that have on performance? Especially with the two hosting plans:
1) Consumption Plan: If I understand correctly, it is possible that every request is separate and will create a new host in this plan. Does this mean the static constructor will be called every time, and all objects will be instantiated again? In that case, should dependency injection be avoided on the Consumption Plan?
2) App Service Plan: This will have a dedicated VM on which the function will run, and provided "Always On" is enabled, the function will only be initialized once. In this case, does dependency injection make more sense? Or will the function still exit its context once the trigger completes, so that new instances are created every time?
I couldn't find a proper explanation about this possibility (if it is a possibility at all). Does anyone have an idea?
The Consumption Plan doesn't mean that you get a new host on every request. Existing hosts are reused for subsequent requests unless a) they are too busy, scale-out kicks in and you get a new host, or b) there are no requests for several minutes and your only host gets recycled.
Overall, I don't see such dependency injection being a bottleneck in most scenarios.
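For reference, the workaround in question boils down to initializing a container once in a static member, so it is rebuilt only when a host instance starts, not on every invocation. A hedged sketch follows (IMyService, MyService and the queue name are invented, and this is not the linked blog's exact code).

```csharp
// Hedged sketch of the static-initialization pattern, not the linked blog's
// exact code. The container is built once per host instance (when the static
// field is initialized), so on the Consumption plan it is only rebuilt when a
// new or recycled host starts, not on every invocation. IMyService, MyService
// and the queue name are invented for this example.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.DependencyInjection;

public static class ProcessQueueFunction
{
    // Runs once per host instance, like a static constructor would.
    private static readonly ServiceProvider Services = BuildServices();

    private static ServiceProvider BuildServices()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IMyService, MyService>();
        return services.BuildServiceProvider();
    }

    [FunctionName("ProcessQueueFunction")]
    public static Task Run([QueueTrigger("incoming")] string message)
    {
        // Resolving is cheap; the expensive container build already happened.
        var service = Services.GetRequiredService<IMyService>();
        return service.ProcessAsync(message);
    }
}

public interface IMyService
{
    Task ProcessAsync(string message);
}

public class MyService : IMyService
{
    public Task ProcessAsync(string message) => Task.CompletedTask;
}
```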

How to have one Unity DbContext per WF instance?

Objective: To have one DbContext instance per Workflow instance, for isolation of changes to prevent concurrency failures and to allow Transactions.
Environment:
I use Unity in an N-tier (Services, Repository, EF5) application and re-use all the same interfaces/classes across the website and WF (Unity.WF, with a custom Factory to add the Unity RegisterTypes). The website's EF object runs under the PerRequestLifetimeManager, which works well; I'm trying to do the same for my WF instances.
I have one server running WF services (WF 4.5 with a WCF endpoint); soon this will need to be two (if that makes any difference to anyone's answers). I have multiple WF definitions (xamlx workflows), one of which will soon be called thousands of times a day to process files uploaded by clients. The processing time can be anywhere from under a minute to an hour depending on the amount of data uploaded, so I have set the incoming requests to persist immediately and then resume when the server is free.
Problem:
The documented LifetimeManager for EF with Unity.WF is the HierarchicalLifetimeManager, but this seems to use the same EF instance across all running WF instances of the same WF definition.
I have tried several WCF Unity lifetime managers, but they all rely on OperationContext.Current, which is only available if the WF does not persist; that is not going to work in my situation.
So I tried the Microsoft WF Security Pack, which claims to be able to create/resurrect the OperationContext with an activity, but this also does not work.
Surely this is a common problem that others have faced?
My next move would be to create my own LifetimeManager that somehow knows the workflow instance id and returns the correct EF object from a Dictionary, but how would I get that id when the code is trying to resolve a dependency two levels deep in a repository constructor?
Thanks
Update:
Firstly, I could use a NativeActivity container and set the DataContext in its properties, as in http://blogs.msdn.com/b/tilovell/archive/2009/12/20/workflow-scopes-and-execution-properties.aspx. This does let my child Activities share the same DataContext.
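For illustration, a rough sketch of that execution-property approach: a scope activity creates one disposable context per workflow instance, exposes it to its children, and disposes it when the scope completes. This is a generic, hypothetical stand-in (the real code would use the EF context type), and it does not deal with persistence while the scope is active.

```csharp
// Rough, hypothetical sketch of the execution-property approach: one
// disposable context per workflow instance, exposed to child activities and
// disposed when the scope completes. Persistence while the scope is active
// is deliberately not handled here.
using System;
using System.Activities;

public sealed class DisposableScope<T> : NativeActivity where T : class, IDisposable, new()
{
    public static readonly string PropertyName = "DisposableScope." + typeof(T).Name;

    public Activity Body { get; set; }

    protected override void Execute(NativeActivityContext context)
    {
        // Child NativeActivities can fetch it via context.Properties.Find(PropertyName).
        context.Properties.Add(PropertyName, new T());
        context.ScheduleActivity(Body, OnBodyCompleted);
    }

    private void OnBodyCompleted(NativeActivityContext context, ActivityInstance completedInstance)
    {
        // Dispose the per-workflow-instance context when the scope finishes.
        ((T)context.Properties.Find(PropertyName)).Dispose();
    }
}

// Usage (illustrative, assuming an EF context with a parameterless constructor):
//   new DisposableScope<MyDbContext> { Body = /* child activities */ }
```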
My problem did not go away, though: it always failed in the same Activity, and I suspect that is because of lazy loading. So I finally used a lock in the Execute function to ensure that only one WF can run this activity at a time, with a volatile static bool to check against so that the other WFs can Delay rather than block the maxConcurrentInstances.
I hope this helps someone else.

Exactly how long is `InSingletonScope` for a webapp?

I'm just getting my feet wet with the Ninject.Mvc3 NuGet package, and I'm wondering about how long the created objects last.
InRequestScope is pretty easy to understand: each object created in this scope lives as long as the webserver is handling a particular web request. (To be pedantic, the objects live as long as the HttpContext.Current object does.)
But how long do the InSingletonScope objects last? The documentation says as long as the Ninject kernel itself does, which is wrapped up in the NinjectWebCommon static class. The best guess I've made so far is that the kernel lives as long as the server is running the webapp: as long as the server is up, until the app is manually restarted in IIS or updated, the objects are in scope.
I'm curious because I'm tempted to have some Data Accessors containing read-only data dictionaries as Singleton Scope, and I'm wondering if this is a good idea, or a memory leak in planning.
It would last as long as your ASP.NET application pool lasts.
When will your application pool recycle? There are many settings which govern this: have a read of Configuring Recycling Settings for an Application Pool (IIS 7).
Basically, though, it ain't gonna be forever: if you want to store read-only data in there, just make sure you load it all up in Application_Start() so it's ready when requests come in, and you should be good to go.
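To make that concrete, here is a small sketch of a singleton-scoped read-only lookup (ICountryLookup/CountryLookup are invented names); binding it this way and resolving it once in Application_Start forces the load up front.

```csharp
// Illustrative sketch: ICountryLookup/CountryLookup are invented names.
// The singleton lives as long as the Ninject kernel, i.e. the app pool.
using System.Collections.Generic;
using Ninject;

public static class Bindings
{
    // Called from RegisterServices(IKernel) in NinjectWebCommon.
    public static void Register(IKernel kernel)
    {
        kernel.Bind<ICountryLookup>().To<CountryLookup>().InSingletonScope();
    }
}

public interface ICountryLookup
{
    string NameFor(string isoCode);
}

public class CountryLookup : ICountryLookup
{
    private readonly Dictionary<string, string> _data;

    public CountryLookup()
    {
        // Loaded once for the lifetime of the kernel. Resolving the singleton
        // in Application_Start makes this happen eagerly rather than on first use.
        _data = LoadData();
    }

    public string NameFor(string isoCode)
    {
        return _data[isoCode];
    }

    private static Dictionary<string, string> LoadData()
    {
        // Stand-in for the real read-only data load (database, file, etc.).
        return new Dictionary<string, string> { { "GB", "United Kingdom" } };
    }
}
```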
You are correct. As long as the app pool is running, your singletons will live. See also: why you might want to turn off application pool recycling.
For most of my websites I cache settings in static classes (or as singletons using Ninject or StructureMap), and data in thread-safe dictionaries. This of course consumes memory, but it is not a memory leak; it is working as designed.

C# / ASP.NET - Web Application locking

I'm working on a C#/ASP.NET web application, and I have a number of situations where I need to do locking. Ideally, I want the locks to act independently, since they have nothing to do with each other. I've been considering [MethodImpl(MethodImplOptions.Synchronized)] and a few ways of using lock(), but I have a few questions/concerns.
It seems like MethodImplOptions.Synchronized will essentially do `lock(this)`. If that's the case, it seems like a thread entering any synchronized method would block all other threads from entering any synchronized method. Is that right? If so, this isn't granular enough. At that point, it seems like I may as well use Application.Lock. (But please correct me if I'm wrong.)
Concerning lock(), I'm trying to figure out what I should pass in. Should I create a set of objects solely for this purpose, and use each one for a different lock? Is there a better way?
Thanks in advance!
My preference is to create an object specifically for the lock.
private object lockForSomeResource = new object();
in the class that is managing the contentious resource.
Jeff Richter posted an article I read some time ago that recommended this.
You need to think carefully about designing these as a hierarchy if there is any code within a lock that needs another lock. Make sure you always request them in the same order.
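To make the ordering point concrete, here is a small sketch with two independent lock objects and a fixed acquisition order (the class and field names are invented for the example).

```csharp
// Illustrative sketch: two unrelated resources get two separate lock objects,
// so contention on one never blocks the other. When both are needed they are
// always taken in the same order to avoid deadlocks. Names are invented.
using System.Collections.Generic;

public class ReportManager
{
    private readonly object _settingsLock = new object();
    private readonly object _reportCacheLock = new object();

    private readonly Dictionary<string, string> _settings = new Dictionary<string, string>();
    private readonly Dictionary<string, byte[]> _reportCache = new Dictionary<string, byte[]>();

    public void UpdateSetting(string key, string value)
    {
        lock (_settingsLock) // only blocks other settings access
        {
            _settings[key] = value;
        }
    }

    public void CacheReport(string key, byte[] report)
    {
        lock (_reportCacheLock) // independent of the settings lock
        {
            _reportCache[key] = report;
        }
    }

    public void RebuildAll()
    {
        // Needs both locks: always settings first, then the report cache.
        lock (_settingsLock)
        {
            lock (_reportCacheLock)
            {
                _reportCache.Clear();
            }
        }
    }
}
```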
I have posted a similar question on this forum that may help you. Here is the link:
Issue writing to single file in Web service in .NET
You can expose some static reference or a singleton, and lock() that.
Perhaps you could explain why you need such locking and what you will use it for?
Creating discrete object instances at static/application level is the best way to do plain exclusive locking.
You should also consider whether reader/writer lock instances at application level could help improve your application's concurrency, e.g. for reading and updating lists, hashes, etc.
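As a hedged sketch of that last suggestion, an application-level ReaderWriterLockSlim guarding a shared list might look like this (names invented).

```csharp
// Hedged sketch: a static ReaderWriterLockSlim guarding a shared list,
// allowing many concurrent readers but exclusive writers. Names invented.
using System.Collections.Generic;
using System.Threading;

public static class SharedPriceList
{
    private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
    private static readonly List<decimal> Prices = new List<decimal>();

    public static decimal[] Snapshot()
    {
        Lock.EnterReadLock(); // many readers may hold this at the same time
        try { return Prices.ToArray(); }
        finally { Lock.ExitReadLock(); }
    }

    public static void Add(decimal price)
    {
        Lock.EnterWriteLock(); // writers get exclusive access
        try { Prices.Add(price); }
        finally { Lock.ExitWriteLock(); }
    }
}
```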
