HttpContext.Current.Session null after making method async - c#

I had a method like this:
book.Bindbook();
I made it async as follows:
new Task(book.Bindbook).Start();
Now this method uses HttpContext.Current.Session, which is now returning null.
Here is the code that returns null:
public static Bookmanager CartManager
{
    //Gets the value from the session variable.
    get
    {
        try
        {
            if (HttpContext.Current.Session["BookData"] == null)
            {
                Bookmanager bookmgr = new Bookmanager();
                Book book = new Book(SessionManager.CurrentUser);
                bookmgr.SetCurrentCart(book);
                HttpContext.Current.Session["BookData"] = bookmgr;
            }
            else if (((Bookmanager)HttpContext.Current.Session["BookData"]).GetCurrentCart() == null)
            {
                Book book = new Book(SessionManager.CurrentUser);
                ((Bookmanager)HttpContext.Current.Session["BookData"]).SetCurrentCart(book);
            }
        }
        catch (Exception ex)
        {
            //throw ex;
        }
        return ((Bookmanager)HttpContext.Current.Session["BookData"]);
    }
    //Sets the value of the session variable.
    set
    {
        HttpContext.Current.Session["BookData"] = value;
    }
}

There are a lot of potential problems with your solution that have led to this problem. I'll try to break it down into pieces to explain what's going on.
new Task(book.Bindbook).Start() doesn't always run where you think it does
This method of creating an asynchronous operation is subtly dangerous, as it's not easy to know how the task will be executed. When you call this constructor, the Task captures TaskScheduler.Current as the mechanism it will use to schedule its own execution. This means that your task's execution is invisibly tied to the context it's created in.
Typically, you want to use Task.Run(Action) instead of creating a new Task instance and then calling Start, as this always runs on the value of TaskScheduler.Default, which is usually the .NET thread pool and is generally what you want to do when running a background task.
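As an illustrative sketch (reusing book from the question):

// Avoid: scheduling depends on whatever TaskScheduler.Current happens to be here.
new Task(book.Bindbook).Start();

// Prefer: always schedules on TaskScheduler.Default, i.e. the thread pool.
Task.Run(() => book.Bindbook());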
HttpContext is not thread-safe
The HttpContext class was never intended to be called from multiple threads safely. Its Current value is tied to the thread which is processing the request and is not available on other threads. You should not pass it to other threads. Generally speaking, you should reduce the surface area of HttpContext in your applications to a bare minimum. It's nearly impossible to mock for testing purposes and has several subtle limitations (such as the one you're hitting) which make it challenging to work with.
Instead, surface the Current value as early as possible in your code and keep a reference to the objects you actually need to work with (like the session).
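As a sketch of that advice applied to the question's code (the two-argument Bindbook overload is a hypothetical refactoring that accepts the state it needs instead of reading HttpContext itself):

// Still on the request thread: capture everything the background work needs.
var cartManager = (Bookmanager)HttpContext.Current.Session["BookData"];
var currentUser = SessionManager.CurrentUser;

// The background work now depends only on plain objects, never on HttpContext.
Task.Run(() => book.Bindbook(cartManager, currentUser));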
Static properties are usually harmful
Having a static property on a type either means that there is exactly one of these things for the entirety of the AppDomain (such as TaskScheduler.Default), representing some cross-cutting concern that can be configured, or that there is some hidden context manipulating the value behind the scenes. The former case is rare but can be acceptable; the second is pretty harmful. HttpContext.Current is an example of a value that should not be static (and future versions of ASP.NET do away with it entirely). It makes code hard to reason about, nearly impossible to test, and introduces subtle bugs (like this one) which can't easily be dealt with.
Fundamentally, this is the biggest problem here and the root cause of your pain. If this property were exposed as an instance property, and the instance were scoped to the request context, you would have none of these issues. Once you're working with an object whose lifetime is the same as your request's, all your critical state becomes local and easy to reason about.

Use ConfigureAwait(true) to allow the continuation to run on the original context:
await Task.Run(() => book.Bindbook()).ConfigureAwait(true);
Note that ConfigureAwait only affects the code after the await; the task body itself still runs on a thread-pool thread, where HttpContext.Current is null.

HttpContext is bound to the thread handling the request; that's why it is null.
I think a better solution would be to pass all the needed data through parameters to the other thread instead of sharing HttpContext.

Related

What happens when you read and write an object from multiple threads without protection?

I have learned that accessing the same object from different threads is not thread-safe and should be protected, be it through locking, Interlocked.Exchange, immutables, or any other means.
This means that the following code is potentially NOT thread-safe, because it does not protect access to the shared test object.
My Questions are:
Is the following code Safe or Not?
If Not, what is the worst that could happen?
If Not, is there any exception that is thrown on dirty reads or writes that I could catch to prevent the worst?
class Test
{
    public Test()
    {
        Foo = new Random().Next(10000);
    }

    public int Foo;
}

internal class Program
{
    public static async Task Main(string[] args)
    {
        var test = new Test();
        var exitToken = new CancellationTokenSource(TimeSpan.FromSeconds(120)).Token;
        var readerTask = Task.Run(async () =>
        {
            while (!exitToken.IsCancellationRequested)
            {
                Console.WriteLine("Random Foo: " + test.Foo);
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        });
        var writerTask = Task.Run(async () =>
        {
            while (!exitToken.IsCancellationRequested)
            {
                test = new Test();
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        });
        await Task.WhenAll(readerTask, writerTask);
    }
}
Is the following code Safe or Not?
When talking about "safe" it is important to specify what it is "safe" for. Usually when designing a class we have some requirements we want the class to fulfill (sometimes called "invariants"). If these requirements are fulfilled even if the object is used from multiple threads we call it "thread safe". Take a trivial class:
public class Incrementer
{
    public int Value { get; private set; }
    public void Increment() => Value++;
}
We would probably have a requirement that the value should exactly equal the number of times the Increment method was called. This requirement would not be fulfilled if it is called concurrently, but that does not mean it will throw an exception, blow up, or crash, just that the value does not match our requirement.
If we change the requirement to be "value > 0 if Increment is called at least one time", then the read-modify-write issue is not relevant, since we only care if the value has been written once.
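For illustration, a minimal sketch of one way to make Increment meet the exact-count requirement (a lock would work just as well) is an atomic increment:

public class Incrementer
{
    private int _value;

    public int Value => Volatile.Read(ref _value);

    // Interlocked.Increment is an atomic read-modify-write, so the final
    // value exactly equals the number of calls even under concurrency.
    public void Increment() => Interlocked.Increment(ref _value);
}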
In your specific example, the only variable that is written to and read concurrently is the test variable. Since writes of references are atomic, we know it will always point to some object. There could potentially be issues with reordering when the object is constructed: the reference could be updated before the field has been set, and the value would then be observed as zero. I do not think this would occur on common x86 hardware and software, but the safe version would be to either use a lock, or to create the object, issue a memory barrier, and then update the reference.
The other potential risk is that the reader thread does not re-read the reference from memory, loading it once and reusing the same value in each iteration. I do not think this could actually occur in this case, since the loop also calls Task.Delay and Console.WriteLine, and I would expect both of these to issue a memory barrier or something else that ensures a read actually occurs and that no reordering is done. But I would still recommend using a lock or marking the variable as volatile; it is usually a good idea to err on the side of caution.
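As a sketch of that belt-and-braces advice (Holder, Replace and Current are hypothetical names; the question's code could instead mark the test variable volatile):

class Holder
{
    private Test _test = new Test();

    // Volatile.Write guarantees the object is fully constructed before the
    // new reference becomes visible to other threads.
    public void Replace() => Volatile.Write(ref _test, new Test());

    // Volatile.Read forces a fresh read of the reference on every call.
    public Test Current => Volatile.Read(ref _test);
}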
If Not, what is the worst that could happen?
In this case, that the same value would always be printed. But as I mentioned above, this will most likely not occur.
If Not, is there any exception that is thrown on dirty reads or writes that I could catch to prevent the worst?
In general, no. Some types may throw exceptions if they are used concurrently or from the wrong thread, a typical example would be any UI classes. But this is not something that should be relied upon.
This is also one of the reasons why multithreading bugs are so devious. If you are unlucky, the only effect is that the value is wrong, and it might not even be obviously wrong; it might just be off a little bit. And if you are even more unlucky, it only occurs in special circumstances, like when running under full load on a 20-core server, so you might never be able to reproduce it in a development environment.

C# lock based on class property

I've seen many examples of the lock usage, and it's usually something like this:
private static readonly object obj = new object();
lock (obj)
{
    // code here
}
Is it possible to lock based on a property of a class? I don't want to lock globally for every call to the method containing the lock statement; I'd like to lock only if the object passed as an argument has the same property value as another object that was being processed prior to it.
Is that possible? Does that make sense at all?
This is what I had in mind:
public class GmailController : Controller
{
    private static readonly ConcurrentQueue<PushRequest> queue = new ConcurrentQueue<PushRequest>();

    [HttpPost]
    public IActionResult ProcessPushNotification(PushRequest push)
    {
        var existingPush = queue.FirstOrDefault(q => q.Matches(push));
        if (existingPush == null)
        {
            queue.Enqueue(push);
            existingPush = push;
        }
        try
        {
            // lock if there is an existing push in the
            // queue that matches the requested one
            lock (existingPush)
            {
                // process the push notification
            }
            return Ok();
        }
        finally
        {
            queue.TryDequeue(out existingPush);
        }
    }
}
Background: I have an API where I receive push notifications from Gmail's API when our users send/receive emails. However, if someone sends a message to two users at the same time, I get two push notifications. My first idea was querying the database before inserting (based on subject, sender, etc). In some rare cases, the query of the second call is made before the SaveChanges of the previous call, so I end up having duplicates.
I know that if I ever wanted to scale out, lock would become useless. I also know I could just create a job to check recent entries and eliminate duplicates, but I was trying something different. Any suggestions are welcome.
Let me first make sure I understand the proposal. The problem given is that we have some resource shared by multiple threads, call it database, and it admits two operations: Read(Context) and Write(Context). The proposal is to have lock granularity based on a property of the context. That is:
void MyRead(Context c)
{
    lock (c.P) { database.Read(c); }
}
void MyWrite(Context c)
{
    lock (c.P) { database.Write(c); }
}
So now if we have a call to MyRead where the context property has value X, and a call to MyWrite where the context property has value Y, and the two calls are racing on two different threads, they are not serialized. However, if we have, say, two calls to MyWrite and a call to MyRead, and in all of them the context property has value Z, those calls are serialized.
Is this possible? Yes. That doesn't make it a good idea. As implemented above, this is a bad idea and you shouldn't do it.
It is instructive to learn why it is a bad idea.
First, this simply fails if the property is a value type, like an integer. You might think, well, my context is an ID number, that's an integer, and I want to serialize all accesses to the database using ID number 123, and serialize all accesses using ID number 345, but not serialize those accesses with respect to each other. Locks only work on reference types, and boxing a value type always gives you a freshly allocated box, so the lock would never be contested even if the ids were the same. It would be completely broken.
Second, it fails badly if the property is a string. Locks are logically "compared" by reference, not by value. With boxed integers, you always get different references. With strings, you sometimes get different references! (Because of interning being applied inconsistently.) You could be in a situation where you are locking on "ABC" and sometimes another lock on "ABC" waits, and sometimes it does not!
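A quick sketch of both failure modes (the ReferenceEquals calls make the distinct lock identities visible):

int id = 123;
object boxA = id; // each boxing conversion allocates a fresh object
object boxB = id;
Console.WriteLine(ReferenceEquals(boxA, boxB)); // False: locks on boxA and boxB never contend

string s1 = "ABC";
string s2 = string.Concat("AB", "C"); // built at runtime, not automatically interned
Console.WriteLine(ReferenceEquals(s1, s2)); // False: "equal" strings can be different lock objects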
But the fundamental rule that is broken is: you must never lock on an object unless that object has been specifically designed to be a lock object, and the same code which controls access to the locked resource controls access to the lock object.
The problem here is not "local" to the lock but rather global. Suppose your property is a Frob where Frob is a reference type. You don't know if any other code in your process is also locking on that same Frob, and therefore you don't know what lock ordering constraints are necessary to prevent deadlocks. Whether a program deadlocks or not is a global property of a program. Just like you can build a hollow house out of solid bricks, you can build a deadlocking program out of a collection of locks that are individually correct. By ensuring that every lock is only taken out on a private object that you control, you ensure that no one else is ever locking on one of your objects, and therefore the analysis of whether your program contains a deadlock becomes simpler.
Note that I said "simpler" and not "simple". It reduces it to almost impossible to get correct, from literally impossible to get correct.
So if you were hell bent on doing this, what would be the right way to do it?
The right way would be to implement a new service: a lock object provider. LockProvider<T> needs to be able to hash and compare for equality two Ts. The service it provides is: you tell it that you want a lock object for a particular value of T, and it gives you back the canonical lock object for that T. When you're done, you say you're done. The provider keeps a reference count of how many times it has handed out a lock object and how many times it got it back, and deletes it from its dictionary when the count goes to zero, so that we don't have a memory leak.
Obviously the lock provider needs to be threadsafe and needs to be extremely low contention, because it is a mechanism designed to prevent contention, so it had better not cause any! If this is the road you intend to go down, you need to get an expert on C# threading to design and implement this object. It is very easy to get this wrong. As I have noted in comments to your post, you are attempting to use a concurrent queue as a sort of poor lock provider and it is a mass of race condition bugs.
This is some of the hardest code to get correct in all of .NET programming. I have been a .NET programmer for almost 20 years and implemented parts of the compiler and I do not consider myself competent to get this stuff right. Seek the help of an actual expert.
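For illustration only, here is a minimal sketch of the shape such a LockProvider<T> might take; it uses one coarse lock around a refcounted dictionary, so it is simple rather than low-contention, and it is emphatically not the expert-vetted implementation the answer calls for:

public sealed class LockProvider<T>
{
    private sealed class Entry
    {
        public readonly object LockObject = new object();
        public int RefCount;
    }

    private readonly Dictionary<T, Entry> entries = new Dictionary<T, Entry>();

    // Hands out the canonical lock object for this key and counts the handout.
    public object Acquire(T key)
    {
        lock (entries)
        {
            if (!entries.TryGetValue(key, out var entry))
            {
                entry = new Entry();
                entries.Add(key, entry);
            }
            entry.RefCount++;
            return entry.LockObject;
        }
    }

    // Every Acquire must be paired with exactly one Release.
    public void Release(T key)
    {
        lock (entries)
        {
            var entry = entries[key];
            if (--entry.RefCount == 0)
                entries.Remove(key); // drop the lock object once unused, so no leak
        }
    }
}

Usage would pair the two calls around the guarded region:

var lockObject = provider.Acquire(push.UserId);
try
{
    lock (lockObject)
    {
        // work serialized per UserId
    }
}
finally
{
    provider.Release(push.UserId);
}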
Although I find Eric Lippert's answer fantastic and marked it as the correct one (and I won't change that), his thoughts made me think, and I wanted to share an alternative solution I found to this problem (any feedback is appreciated), even though I'm not going to use it: I ended up using Azure Functions with my code (so this wouldn't make sense there) and a cron job to detect and eliminate possible duplicates.
public class UserScopeLocker : IDisposable
{
    private static readonly object _obj = new object();
    private static ICollection<string> UserQueue = new HashSet<string>();
    private readonly string _userId;

    protected UserScopeLocker(string userId)
    {
        this._userId = userId;
    }

    public static UserScopeLocker Acquire(string userId)
    {
        while (true)
        {
            lock (_obj)
            {
                if (UserQueue.Contains(userId))
                {
                    continue; // busy-wait until the current holder releases this user id
                }
                UserQueue.Add(userId);
                return new UserScopeLocker(userId);
            }
        }
    }

    public void Dispose()
    {
        lock (_obj)
        {
            UserQueue.Remove(this._userId);
        }
    }
}
...then you would use it like this:
[HttpPost]
public IActionResult ProcessPushNotification(PushRequest push)
{
    using (var scope = UserScopeLocker.Acquire(push.UserId))
    {
        // process the push notification
        // two threads can't enter here for the same UserId
        // the second one will be blocked until the first disposes
        return Ok();
    }
}
The idea is:
UserScopeLocker has a protected constructor, ensuring you call Acquire.
_obj is private static readonly, only the UserScopeLocker can lock this object.
_userId is a private readonly field, ensuring even its own class can't change its value.
lock is done when checking, adding and removing, so two threads can't compete on these actions.
Possible flaws I detected:
Since UserScopeLocker relies on IDisposable to release a given UserId, I can't guarantee the caller will properly use a using statement (or manually dispose the scope object).
I can't guarantee the scope won't be used in a recursive function (thus possibly causing a deadlock).
I can't guarantee the code inside the using statement won't call another function which also tries to acquire a scope to the user (this would also cause a deadlock).

How to check that AsyncLocal<T> is accessed within same "async context"

TL;DR ThreadLocal<T>.Value points to the same location as long as Thread.CurrentThread stays the same. Is there anything similar for AsyncLocal<T>.Value (e.g. would SynchronizationContext.Current or ExecutionContext.Capture() suffice for all scenarios)?
Imagine we have created a snapshot of a data structure kept in thread-local storage (e.g. a ThreadLocal<T> instance) and passed it to an auxiliary class for later use. This auxiliary class is used to restore the data structure to the snapshot state. We don't want to restore this snapshot onto a different thread, so we can check on which thread the auxiliary class was created. For example:
class Storage<T>
{
    private ThreadLocal<ImmutableStack<T>> stackHolder =
        new ThreadLocal<ImmutableStack<T>>(() => ImmutableStack<T>.Empty);

    public IDisposable Push(T item)
    {
        var bookmark = new StorageBookmark<T>(this);
        stackHolder.Value = stackHolder.Value.Push(item);
        return bookmark;
    }

    private class StorageBookmark<TInner> : IDisposable
    {
        private Storage<TInner> owner;
        private ImmutableStack<TInner> snapshot;
        private Thread boundThread;

        public StorageBookmark(Storage<TInner> owner)
        {
            this.owner = owner;
            this.snapshot = owner.stackHolder.Value;
            this.boundThread = Thread.CurrentThread;
        }

        public void Dispose()
        {
            if (Thread.CurrentThread != boundThread)
                throw new InvalidOperationException("Bookmark crossed thread boundary");
            owner.stackHolder.Value = snapshot;
        }
    }
}
With this, we essentially bind StorageBookmark to a specific thread and, therefore, to a specific version of the data structure in ThreadLocal storage. And we do that by ensuring we don't cross a "thread context", with the help of Thread.CurrentThread.
Now, to the question at hand. How can we achieve the same behavior with AsyncLocal<T> instead of ThreadLocal<T>? To be precise, is there anything similar to Thread.CurrentThread which can be checked at construction and usage time to verify that the "async context" has not been crossed (that is, that AsyncLocal<T>.Value would point to the same object as when the bookmark was constructed)?
It seems either SynchronizationContext.Current or ExecutionContext.Capture() may suffice, but I'm not sure which is better, or whether there is a catch (or even whether either would work in all possible situations).
What you're hoping to do is fundamentally contrary to the nature of asynchronous execution context; nothing requires that all Tasks created within your asynchronous context be awaited immediately, in the same order they were created, or ever at all (and therefore you can't guarantee it), but their creation within the scope of the calling context makes them part of the same asynchronous context, period.
It may be challenging to think of asynchronous execution context as different from thread contexts, but asynchrony is not synonymous with parallelism, which is specifically what logical threads support. Objects stored in thread-local storage that aren't intended to be shared/copied across threads can generally be mutable, because execution within a logical thread will always be able to guarantee relatively constrained sequential logic (though some special treatment may be necessary to ensure compile-time optimizations don't mess with you; this is rare and only needed in very specific scenarios). For that reason the ThreadLocal in your example doesn't really need to hold an ImmutableStack; it could just hold a Stack (which has much better performance), since you don't need to worry about copy-on-write or concurrent access. If the stack were publicly accessible it would be more concerning that someone could pass it to other threads which could push/pop items, but since it's a private implementation detail here, the ImmutableStack could actually be seen as unnecessary complexity.
Anyway, execution context, which is not a concept unique to .NET (implementations on other platforms may differ in some ways, though in my experience never by much), is very much like (and directly related to) the call stack, but in a way that treats new asynchronous tasks as new calls on the stack which may need both to share the caller's state as it was at the time the operation was executed, and to diverge from it, since the caller may continue to create more tasks and create/update state in ways that would not make logical sense when read as a sequential set of instructions. It is generally recommended that anything placed in the ExecutionContext be immutable, though in some cases all copies of the context still pointing to the same instance reference should necessarily share mutable data. HttpContext, for instance, is stored on the default implementation of IHttpContextAccessor using AsyncLocal, so all tasks created in the scope of a single request have access to the same response state.
Allowing multiple concurrent contexts to make mutations to the same reference instance necessarily introduces the possibility of issues, both from concurrency and from logical order of execution. For instance, multiple tasks trying to set different results on an HTTP response will either result in an exception or in unexpected behavior. You can try, to some extent, to help the consumer here, but at the end of the day it is the consumer's responsibility to understand the complexity of the nuanced implementation details they depend on (which is generally a code smell, but sometimes a necessary evil in real-world situations).
That scenario aside, as said, for the sake of ensuring all nested contexts function predictably and safely, it's generally recommended to store only immutable types and to always restore the context to its previous value (as you're doing with your disposable stack mechanism). The easiest way to think of the copy-on-write behavior is as though every single new task, new thread-pool work item, and new thread gets its own clone of the context; but if they point to the same reference type (i.e. all have copies of the same reference pointer), they all see the same instance. The copy-on-write is simply an optimization that avoids copying when unnecessary; it can essentially be ignored, and you can think of every logical task as having its own copy of the context (very much like that ImmutableStack, or a string). If the only way to update anything about the current value is to reassign it to a new modified instance, then you never have to worry about cross-context pollution (just like with the ImmutableStack you're using).
Your example doesn't show anything about how the data is accessed or what types are passed in for T, so there's no way to see what issue you might face, but if what you're concerned about is nested tasks disposing the "Current" context, or the IDisposable value being assigned to a field somewhere and accessed from a different thread, there are a few things you can try and some points worth considering:
The closest equivalent to your current check would be to capture the stack as it stands after your own push and verify that it is still the current value at dispose time (pushedSnapshot here is a hypothetical field the bookmark would capture for this purpose):
if (owner.stackHolder.Value != this.pushedSnapshot)
    throw new InvalidOperationException("Bookmark disposed out of order or in wrong context");
A simple ObjectDisposedException check will throw from at least one context if two contexts try to dispose the same bookmark.
Though this generally isn't recommended, if you want to be absolutely certain the object was disposed at least once you could throw an exception in the finalizer of the IDisposable implementation (being sure to call GC.SuppressFinalize(this) in the Dispose method).
By combining the previous two, while it won't guarantee that the object was disposed in the exact same task/method block that created it, you can at least guarantee that it is disposed once and only once.
Due to the fundamental importance of the way ExecutionContext is supposed to be flowed and controlled, it is the responsibility of the execution engine (typically the runtime, task scheduler, etc., but also any third party using Tasks/Threads in novel ways) to ensure ExecutionContext flow is captured and suppressed where appropriate. If a thread, scheduler, or synchronization migration occurs in the root context, the ExecutionContext should not be flowed into the next logical task's ExecutionContext the thread/scheduler processes in the context where the task formerly executed. For example, if a task continuation starts on a ThreadPool thread and an await then causes the next logical operations to continue on a different ThreadPool thread than the one it originally started on (or on some other I/O completion thread), then when the original thread is returned to the ThreadPool it should not continue to reference or flow the ExecutionContext of the task which is no longer logically executing within it. Assuming no additional tasks are created in parallel and left astray, once execution resumes in the root awaiter it will be the only execution context that continues to hold a reference to the context. When a Task completes, so does its execution context (or rather, its copy of it).
Even if unobserved background tasks are started and never awaited, if the data stored in the AsyncLocal is immutable, the copy-on-write behavior combined with your immutable stack will ensure that parallel clones of execution contexts can never pollute each other.
With the first check in place and immutable types in use, you really don't need to worry about cloned parallel execution contexts, unless you're worried about them gaining access to sensitive data from previous contexts; when they Dispose the current item, only the stack of the current execution context (i.e. the nested parallel context, specifically) reverts to the previous value; all cloned contexts (including the parent) are not modified.
If you are worried about nested contexts accessing parent data by disposing things they shouldn't, there are relatively simple patterns you can use to separate the IDisposable from the ambient value, as well as suppression patterns like those used in TransactionScope to, say, temporarily set the current value to null, etc.
Just to reiterate in a practical way: let's say you store an ImmutableList in one of your bookmarks. If the items stored in the ImmutableList are mutable, then context pollution is possible.
var someImmutableListOfMutableItems = unsafeAsyncLocal.Value;
// sets Name for all contexts pointing to the reference type.
someImmutableListOfMutableItems[0].Name = "Jon"; // assigns property setter on shared reference of Person
// notice how unsafeAsyncLocal.Value never had to be reassigned?
Whereas an immutable collection of immutable items will never pollute another context unless something is super fundamentally wrong about how execution context is being flowed (contact a vendor, file a bug report, raise the alarm, etc.)
var someImmutableListOfImmutableItems = safeAsyncLocal.Value;
someImmutableListOfImmutableItems = someImmutableListOfImmutableItems.SetItem(0,
    someImmutableListOfImmutableItems[0].SetName("Jon") // SetName returns a new immutable Person instance
); // SetItem returns a new immutable list instance
// notice both the item and the collection need to be reassigned. No other context will be polluted here
safeAsyncLocal.Value = someImmutableListOfImmutableItems;
EDIT: Some articles for people who want to read something perhaps more coherent than my ramblings here :)
https://devblogs.microsoft.com/pfxteam/executioncontext-vs-synchronizationcontext/
https://weblogs.asp.net/dixin/understanding-c-sharp-async-await-3-runtime-context
And for some comparison, here's an article about how context is managed in JavaScript, which is single threaded but supports an asynchronous programming model (which I figure might help to illustrate how they relate/differ):
https://blog.bitsrc.io/understanding-execution-context-and-execution-stack-in-javascript-1c9ea8642dd0
The logical call context has the same flow semantics as the execution context, and therefore as AsyncLocal. Knowing that, you can store a value in the logical call context to detect when you cross "async context" boundaries:
class Storage<T>
{
    // CallContext lives in System.Runtime.Remoting.Messaging (.NET Framework only).
    private AsyncLocal<ImmutableStack<T>> stackHolder = new AsyncLocal<ImmutableStack<T>>();

    public IDisposable Push(T item)
    {
        var bookmark = new StorageBookmark<T>(this);
        stackHolder.Value = (stackHolder.Value ?? ImmutableStack<T>.Empty).Push(item);
        return bookmark;
    }

    private class StorageBookmark<TInner> : IDisposable
    {
        private Storage<TInner> owner;
        private ImmutableStack<TInner> snapshot;
        private readonly object id;

        public StorageBookmark(Storage<TInner> owner)
        {
            id = new object();
            this.owner = owner;
            this.snapshot = owner.stackHolder.Value;
            CallContext.LogicalSetData("AsyncStorage", id);
        }

        public void Dispose()
        {
            if (CallContext.LogicalGetData("AsyncStorage") != id)
                throw new InvalidOperationException("Bookmark crossed async context boundary");
            owner.stackHolder.Value = snapshot;
        }
    }
}
public class Program
{
    static void Main()
    {
        DoesNotThrow().Wait();
        Throws().Wait();
    }

    static async Task DoesNotThrow()
    {
        var storage = new Storage<string>();
        using (storage.Push("hello"))
        {
            await Task.Yield();
        }
    }

    static async Task Throws()
    {
        var storage = new Storage<string>();
        var disposable = storage.Push("hello");
        using (ExecutionContext.SuppressFlow())
        {
            Task.Run(() => { disposable.Dispose(); }).Wait();
        }
    }
}

Lock-free, awaitable, exclusive access methods

I have a thread safe class which uses a particular resource that needs to be accessed exclusively. In my assessment it does not make sense to have the callers of various methods block on a Monitor.Enter or await a SemaphoreSlim in order to access this resource.
For instance, I have some "expensive" asynchronous initialization. Since it does not make sense to initialize more than once, whether from multiple threads or a single one, multiple calls should return immediately (or even throw an exception). Instead, one should create, init, and then distribute the instance to multiple threads, as sketched below.
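As a sketch of that create, init, distribute flow (StartWorkers is a hypothetical consumer that fans the instance out to multiple threads):

var myClass = new MyClass();
// Initialize exactly once, before anyone else can touch the instance.
await myClass.InitBeforeDistribute();
// Only now hand the fully initialized instance to multiple threads.
StartWorkers(myClass);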
UPDATE 1:
MyClass uses two NamedPipes in either direction. The InitBeforeDistribute method is not really initialization, but rather properly setting up a connection in both directions. It does not make sense to make the pipe available to N threads before the connection is set up. Once it is set up, multiple threads can post work, but only one can actually read/write to the stream. My apologies for obfuscating this with poor naming in the examples.
UPDATE 2:
If InitBeforeDistribute implemented a SemaphoreSlim(1, 1) with proper await logic (instead of the interlocked operation throwing an exception), would the AddSquare/DoSquare pattern be OK practice? It does not throw a redundant exception (as InitBeforeDistribute does) while remaining lock-free.
The following would be a good bad example:
class MyClass
{
    private int m_isIniting = 0;               // exclusive access "lock"
    private volatile bool vm_isInited = false; // volatile because other methods will read it

    public async Task InitBeforeDistribute()
    {
        if (Interlocked.Exchange(ref this.m_isIniting, -1) != 0)
            throw new InvalidOperationException(
                "Cannot init concurrently! Did you distribute before init was finished?");

        try
        {
            if (this.vm_isInited)
                return;

            await Task.Delay(5000) // init asynchronously
                .ConfigureAwait(false);

            this.vm_isInited = true;
        }
        finally
        {
            Interlocked.Exchange(ref this.m_isIniting, 0);
        }
    }
}
Some points:
1. If there is a case where blocking/awaiting access to a lock makes perfect sense, then this example does not (make sense, that is).
2. Since I need to await in the method, I must use something like a SemaphoreSlim if I were to use a "proper" lock. Forgoing the semaphore for the example above allows me to not worry about disposing the class once I'm done with it. (I always disliked the idea of disposing an item used by multiple threads. This is a minor positive, for sure.)
3. If the method is called often there might be some performance benefits, which of course should be measured.
The above example does not make sense with reference to (3.), so here is another example:
class MyClass
{
    private volatile bool vm_isInited = false; // see above example
    private int m_isWorking = 0;               // exclusive access "lock"
    private readonly ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>> m_squareWork =
        new ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>>();

    public Task<int> AddSquare(int number)
    {
        if (!this.vm_isInited) // see above example
            throw new InvalidOperationException(
                "You forgot to init! Did you already distribute?");

        var work = new Tuple<int, TaskCompletionSource<int>>(number, new TaskCompletionSource<int>());
        this.m_squareWork.Enqueue(work);

        Task ignored = DoSquare(); // fire-and-forget; an idle instance drains the queue
        return work.Item2.Task;
    }

    private async Task DoSquare()
    {
        if (Interlocked.Exchange(ref this.m_isWorking, -1) != 0)
            return; // let someone else do the work for you

        do
        {
            try
            {
                Tuple<int, TaskCompletionSource<int>> work;
                while (this.m_squareWork.TryDequeue(out work))
                {
                    await Task.Delay(5000)      // Limiting resource that can only be
                        .ConfigureAwait(false); // used by one thread at a time.
                    work.Item2.TrySetResult(work.Item1 * work.Item1);
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.m_isWorking, 0);
            }
        } while (this.m_squareWork.Count != 0 &&
                 Interlocked.Exchange(ref this.m_isWorking, -1) == 0);
    }
}
Are there specific negative aspects of this "lock-free" example that I should pay attention to?
Most questions relating to "lock-free" code on SO generally advise against it, stating that it is for the "experts". Rarely (I could be wrong on this one) do I see suggestions for books/blogs/etc. that one can delve into, should one be so inclined. If there are any such resources I should look into, please share. Any suggestions will be highly appreciated!
Update: a great related article:
Creating High-Performance Locks and Lock-free Code (for .NET)
The main point about lock-free algorithms is not that they are for experts.
The main point is: do you really need a lock-free algorithm here? I can't understand your logic here:
Since it does not make sense to initialize more than once, whether it be from multiple threads or a single one, multiple calls should return immediately (or even throw an exception).
Why can't your users simply wait for the result of initialization and use your resource after that? If you can, simply use the Lazy<T> class or even asynchronous lazy initialization, as sketched below.
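As a sketch of that suggestion (ExpensiveInitAsync stands in for the real setup work), Lazy<Task> gives you asynchronous lazy initialization with no custom interlocking at all:

class MyClass
{
    // Lazy<T> runs the factory at most once; every caller awaits the same Task.
    private readonly Lazy<Task> m_init;

    public MyClass()
    {
        m_init = new Lazy<Task>(ExpensiveInitAsync);
    }

    public Task InitBeforeDistribute() => m_init.Value;

    private async Task ExpensiveInitAsync()
    {
        await Task.Delay(5000).ConfigureAwait(false); // stand-in for the real setup
    }
}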
You really should read about consensus numbers and CAS operations, and why they matter when implementing your own synchronization primitives.
In your code you are using the Interlocked.Exchange method, which isn't a true CAS, as it always exchanges the value, and it has a consensus number of 2. This means that a primitive built on this construction will work correctly only for 2 threads (not your exact situation, but still, 2).
I tried to determine whether your code works correctly for 3 threads, or whether there are circumstances which could lead your application into a damaged state, but I stopped after 30 minutes. And any of your team members will stop just as I did after spending some time trying to understand your code. This is a waste of time: not only yours, but your team's. Don't reinvent the wheel until you really have to.
My favorite book in this area is Writing High-Performance .NET Code by Ben Watson, and my favorite blog is Stephen Cleary's. If you can be more specific about what kind of book you are interested in, I can add some more references.
Having no locks in the program doesn't make your application lock-free. In a .NET application you really should not use exceptions for your internal program flow. Consider the case where the initializing thread isn't scheduled for a while by the OS (for various reasons, no matter what exactly they are).
In this case all other threads in your app will die, one by one, trying to access your shared resource. I can't say that this is lock-free code. Yes, there are no locks in it, but it doesn't guarantee the correctness of the program, and thus it isn't lock-free by definition.
The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit is a great resource for lock-free and wait-free programming. Lock-free is a progress guarantee, not a mode of programming, so to argue that an algorithm is lock-free, one has to validate or show proof of the progress guarantee. Lock-free, in simple terms, means that the blocking or halting of one thread does not block the progress of other threads, or that if a thread is blocked infinitely often, then there is some other thread that makes progress infinitely often.

HttpContext Class and its Thread Safety

I have a singleton object in my application that has the following property:
private AllocationActionsCollection AllocationActions
{
    get
    {
        return HttpContext.Current.Session["AllocationOptions.AllocationActions"] as AllocationActionsCollection;
    }
    set
    {
        HttpContext.Current.Session["AllocationOptions.AllocationActions"] = value;
    }
}
I'm dealing with an error where HttpContext.Current.Session["AllocationOptions.AllocationActions"] is null even though it is supposed to always be set to a valid instance. I just read on MSDN that HttpContext instance members are not guaranteed to be thread safe! I wonder if that could be the issue. There could be a resource race somewhere in the application, and the moment where HttpContext.Current.Session["AllocationOptions.AllocationActions"] is null would be the moment where the AllocationActions setter is running this statement:
AllocationActions = new AllocationActionsCollection(Instance.CacheId);
My questions are:
a) I'm shocked that HttpContext.Current.Session is not thread safe. How can that property be used safely, then?
b) do you have any ideas why that session variable can be null (even though I'm pretty sure I set it before it's used for the first time)?
Thanks, Pawel
EDIT 1:
a) the line that initializes the session variable is executed every 2 minutes with the following statement (in Page_Load)
AllocationActions = new AllocationActionsCollection(Instance.CacheId);
b) the code that calls the getter is called in event handlers (like Button_Click)
c) there is no custom threading in the application, only a common HTTP handler
A singleton object is realized by restricting the instantiation of a class to one object.
HttpContext.Current.Session is an area dedicated to a single user; any object stored in Session will be available only to the user/session that created it.
Any object stored in Application will be available to every user/session.
Any static object, likewise, is shared across every user/session. Suggested singleton implementations always use static objects... why didn't you?
HttpContext.Current returns a separate HttpContext instance for each request. There can be multiple threads processing requests, but each request will get its own HttpContext instance. So any given instance is not being used by multiple threads, and thread safety is not an issue.
So unless you're manually spinning up multiple threads of your own for a single request, you are threadsafe.
The thread-safety guarantees of the HttpContext class are pretty standard for .NET. The basic rule of thumb (unless explicitly specified) is that static members are thread safe and instance members aren't.
In any case it is hard to tell why your session variable is null without looking more into the code that sets/resets it. Or perhaps you are calling your get_AllocationActions method from a different session than the one you set it in. Again, more code would help.
To access the session property safely you would just wrap the access in a lock statement and use the SyncRoot object of the session class.
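A minimal sketch of that, assuming classic ASP.NET's HttpSessionState (whose SyncRoot property exists for this purpose) and the types from the question:

var session = HttpContext.Current.Session;
lock (session.SyncRoot)
{
    // Check-then-set as one atomic step.
    var actions = session["AllocationOptions.AllocationActions"] as AllocationActionsCollection;
    if (actions == null)
    {
        actions = new AllocationActionsCollection(Instance.CacheId);
        session["AllocationOptions.AllocationActions"] = actions;
    }
}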
