I have a library with an API much like this:
public class Library : IDisposable
{
public Library(Action callback);
public string Query();
public void Dispose();
}
After I have instantiated Library, at any time and on any thread it might invoke the callback I have passed to it. That callback needs to call Query to do useful work. The library will only stop calling my callback once disposed, but if a callback attempts to call Query after the main thread has called Dispose, bad stuff will happen.
I do want to allow the callbacks to run on multiple threads simultaneously. That's okay. But we need to be sure that no callbacks can be running when we call Dispose. I thought a ReaderWriterLockSlim might be appropriate - you need the write-lock to call Dispose, and the callbacks need read-locks to call Query. The problem here is that ReaderWriterLockSlim is IDisposable, and I don't think it will ever be safe to dispose it - I can never know that there is not a callback in flight that simply hasn't gotten to the point of acquiring the read-lock yet.
What should I do? It looks like ReaderWriterLock isn't IDisposable, but its own documentation says you should use ReaderWriterLockSlim instead. I could try to do something equivalent with just the "lock" keyword, but that sounds wasteful and easy to screw up.
PS - Feel free to say that the library API is not good if you think that's the case. I would personally prefer that it guaranteed that Dispose would block until all callbacks had completed.
This sounds like something you can wrap with your own API which makes the guarantee from the final paragraph.
Essentially, each callback should atomically register that it's running, and check whether it's still okay to run - and then either quit immediately (equivalent to never being called) or do its stuff and deregister that it's running.
Your Dispose method just needs to block until it finds a time when nothing's running, atomically checking whether anything's running and invalidating if not.
I can imagine this being done reasonably simply using a plain lock with Monitor.Wait/Pulse. Your API wrapper would wrap any callback it's given inside another callback which does all this, so you only need to put the logic in one place.
Do you see what I mean? I don't have time to write and test a full implementation right now, but the rough sketch below shows the idea, and I can elaborate on it if you like.
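A minimal sketch of that wrapper, using lock with Monitor.Wait/PulseAll as suggested above (all names here are illustrative, not a definitive implementation):

using System;
using System.Threading;

public sealed class CallbackGate
{
    private readonly object sync = new object();
    private int running;   // number of wrapped callbacks currently executing
    private bool stopped;  // set once shutdown has begun

    // Wraps a callback so it atomically registers itself before running
    // and deregisters itself afterwards.
    public Action Wrap(Action callback)
    {
        return () =>
        {
            lock (sync)
            {
                if (stopped) return; // equivalent to never being called
                running++;
            }
            try
            {
                callback();
            }
            finally
            {
                lock (sync)
                {
                    running--;
                    Monitor.PulseAll(sync); // wake a waiting shutdown
                }
            }
        };
    }

    // Blocks until no wrapped callback is running, then invalidates the gate.
    public void StopAndWait()
    {
        lock (sync)
        {
            stopped = true;
            while (running > 0)
            {
                Monitor.Wait(sync);
            }
        }
    }
}

You would pass gate.Wrap(yourCallback) to the Library constructor, and call gate.StopAndWait() immediately before Library.Dispose().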
This is a rather difficult problem to solve if you have to attempt it on your own. The pattern I am going to describe here uses the CountdownEvent class as the fundamental synchronization mechanism. It is available in .NET 4.0, or as part of the Reactive Extensions download for .NET 3.5. This class is an ideal candidate for problems of this kind because:
It can maintain a count.
It can wait for that count to reach zero.
Let me describe the pattern. I have created a CallbackInvoker class which contains only two operations.
It can invoke the callback synchronously using the Invoke operation.
It can receive a stop signal and wait for an acknowledgement using the FinishAndWait operation.
The Library class creates and uses an instance of CallbackInvoker. Any time Library needs to invoke the callback, it should do so by calling the Invoke method. When it is time to dispose of the class, just call FinishAndWait from the Dispose method. This works because the moment the CountdownEvent is signaled from FinishAndWait, it locks out TryAddCount in an atomic fashion. That is why the wait handle is initialized to a count of 1.
public class Library : IDisposable
{
    private CallbackInvoker m_CallbackInvoker;

    public Library(Action callback)
    {
        m_CallbackInvoker = new CallbackInvoker(callback);
    }

    public void Dispose()
    {
        m_CallbackInvoker.FinishAndWait();
    }

    private void DoSomethingThatInvokesCallback()
    {
        m_CallbackInvoker.Invoke();
    }

    private class CallbackInvoker
    {
        private Action m_Callback;

        // Initialized to 1 so FinishAndWait can atomically lock out TryAddCount.
        private CountdownEvent m_Pending = new CountdownEvent(1);

        public CallbackInvoker(Action callback)
        {
            m_Callback = callback;
        }

        public bool Invoke()
        {
            bool acquired = false;
            try
            {
                // Register this invocation; fails once FinishAndWait has
                // removed the initial count and the event has been set.
                acquired = m_Pending.TryAddCount();
                if (acquired)
                {
                    if (m_Callback != null)
                    {
                        m_Callback();
                    }
                }
            }
            finally
            {
                // Deregister so a pending FinishAndWait can complete.
                if (acquired) m_Pending.Signal();
            }
            return acquired;
        }

        public void FinishAndWait()
        {
            // Remove the initial count; from now on TryAddCount returns false.
            m_Pending.Signal();
            // Block until every in-flight callback has signaled.
            m_Pending.Wait();
        }
    }
}
It is a little bit confusing. In C#, for managing multithreading, we have Mutex and we have lock, and in addition I found this RAII lock implementation:
public class ReaderWriterLockSlim_ScopedLockRead : IDisposable
{
ReaderWriterLockSlim m_myLock;
public ReaderWriterLockSlim_ScopedLockRead(ReaderWriterLockSlim myLock)
{
m_myLock = myLock;
m_myLock.EnterReadLock();
}
public void Dispose()
{
m_myLock.ExitReadLock();
}
}
public class ReaderWriterLockSlim_ScopedLockWrite : IDisposable
{
ReaderWriterLockSlim m_myLock;
public ReaderWriterLockSlim_ScopedLockWrite(ReaderWriterLockSlim myLock)
{
m_myLock = myLock;
m_myLock.EnterWriteLock();
}
public void Dispose()
{
m_myLock.ExitWriteLock();
}
}
I would like to understand the difference between them. As I see it, a mutex is the most basic multithreading primitive: you call mutex.lock() and then must not forget to call mutex.release(). Calling mutex.release() reliably is usually not so convenient, because you can get an error in the middle of execution; that is why we have lock(obj){}. As far as I can see, it is a kind of RAII object with the same behavior, but if you get an error in the middle, under the hood it will still call mutex.release(), and all is nice.
But what about the last custom implementation that I posted? It looks the same as lock(obj){}, just with the difference that we have read and write behavior - like, in the write state it is possible for a few threads to get access to the method, and in the read state just one by one...
Am I right here?
So for locking, it's important that every lock that is acquired is also released (no matter whether the code it was protecting threw any exceptions). So normally, no matter which lock you use, it'll look something like this:
myLock.MyAcquireMethod();
try
{
//...
}
finally
{
myLock.MyReleaseMethod();
}
Now, for the Monitor locking mechanism in C#, there is a keyword to make it easier: lock, which basically wraps the acquiring and releasing in one code block.
So this:
lock(myObj)
{
//...
}
Is just a more convenient way of writing this:
Monitor.Enter(myObj);
try
{
//...
}
finally
{
Monitor.Exit(myObj);
}
Sadly, for the other locks (and because Monitor has its limitations, we don't always want to use it) we don't have such a handy short way of doing the whole thing. To solve that, the ReaderWriterLockSlim_ScopedLockRead wrapper implements IDisposable, which gives you this try/finally mechanism (using also guarantees that Dispose() is called on the IDisposable, whether the code ran to completion or an exception occurred).
So instead of:
m_myLock.EnterWriteLock();
try
{
//...
}
finally
{
m_myLock.ExitWriteLock();
}
You're now able to do this:
using(new ReaderWriterLockSlim_ScopedLockWrite(m_myLock))
{
//...
}
Hope this answers your question!
As a bonus, a warning about the Monitor class in C#. This locking mechanism is re-entrant at the thread level, meaning the thread holding the lock is allowed to acquire it multiple times (though it also has to release it multiple times). This allows you to do something like this:
private readonly object _myLock = new object();
public void MyLockedMethod1()
{
lock(_myLock)
{
MyLockedMethod2();
}
}
public void MyLockedMethod2()
{
lock(_myLock)
{
//...
}
}
So no matter whether MyLockedMethod2 is called directly or through MyLockedMethod1 (which might need the lock for other stuff as well), you have thread-safety.
However, these days a lot of people use async/await, where a method can be continued on a different thread. That would break the Monitor, since the thread that acquired the lock would not be the thread releasing it - which is why the C# compiler refuses to compile an await inside a lock block. So don't attempt something like this:
public async Task MyLockedMethod()
{
lock(_myLock)
{
await MyAsyncMethod();
}
}
Anyway there is a lot of documentation about this if you would like to learn more.
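For the async case, a commonly used alternative is SemaphoreSlim, which does not care which thread releases it (a sketch with illustrative names, not part of the documentation referenced above):

private readonly SemaphoreSlim _myAsyncLock = new SemaphoreSlim(1, 1);

public async Task MyLockedMethodAsync()
{
    await _myAsyncLock.WaitAsync();
    try
    {
        // Safe: the continuation may resume on a different thread,
        // and Release does not have to run on the acquiring thread.
        await MyAsyncMethod();
    }
    finally
    {
        _myAsyncLock.Release();
    }
}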
This is not it at all - you have read and write backwards. A reader-writer lock is an implementation used in a specific context:
Anyone can read at any time, without readers blocking one another, but readers block anyone who wants to write.
No one can read or write while even one thread is writing at that time.
It is exactly as Wikipedia describes it here, and it is not specific to C# or any other language. This is just the C# flavor of a reader-writer lock:
is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access.
Check Microsoft docs here for more information
I'm designing a base class that, when inherited, will provide business functionality against a context in a multithreaded environment. Each instance may have long-running initialization operations, so I want to make the objects reusable. In order to do so, I need to be able to:
Assign a context to one of these objects to allow it to do its work
Prevent an object from being assigned a new context while it already has one
Prevent certain members from being accessed while the object doesn't have a context
Also, each context object can be shared by many worker objects.
Is there a correct synchronization primitive that fits what I'm trying to do? This is the pattern I've come up with that best fits what I need:
private Context currentContext;
internal void BeginProcess(Context currentContext)
{
// attempt to acquire a lock; throw if the lock is already acquired,
// otherwise store the current context in the instance field
}
internal void EndProcess()
{
// release the lock and set the instance field to null
}
private void ThrowIfNotProcessing()
{
// throw if this method is called while there is no lock acquired
}
Using the above, I can protect base class properties and methods that shouldn't be accessed unless the object is currently in the processing state.
protected Context CurrentContext
{
get
{
this.ThrowIfNotProcessing();
return this.context;
}
}
protected void SomeAction()
{
this.ThrowIfNotProcessing();
// do something important
}
My initial thought was to use Monitor.Enter and related functions, but that doesn't prevent same-thread reentrancy (multiple calls to BeginProcess on the original thread).
There is one synchronization object in .NET that isn't re-entrant: you are looking for a Semaphore.
Before you commit to this, do get your ducks in a row and ask yourself how it can be possible that BeginProcess() can be called again on the same thread. That is very, very unusual; your code has to be re-entrant for that to happen. This can normally only happen on a thread that has a dispatcher loop; the UI thread of a GUI app is a common example. If this is truly possible and you actually use a semaphore, then you'll get to deal with the consequence as well: your code will deadlock, since it recursed into BeginProcess() and now stalls on the semaphore, never completing and never able to call EndProcess(). There's a good reason why Monitor and Mutex are re-entrant :)
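To make that concrete, here is a minimal sketch of the question's stubs using SemaphoreSlim with a zero timeout, so a second BeginProcess throws instead of deadlocking (the choice of SemaphoreSlim, the field names, and the exception messages are my own, not from the question):

private readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);
private Context currentContext;

internal void BeginProcess(Context context)
{
    // Wait(0) never blocks: it returns false immediately if the single
    // slot is taken - even when the caller is the thread that took it.
    if (!gate.Wait(0))
        throw new InvalidOperationException("A context is already assigned.");
    currentContext = context;
}

internal void EndProcess()
{
    currentContext = null;
    gate.Release();
}

private void ThrowIfNotProcessing()
{
    // CurrentCount == 1 means the slot is free, i.e. no context is assigned.
    if (gate.CurrentCount != 0)
        throw new InvalidOperationException("No context is assigned.");
}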
You can use the Semaphore class, which shipped with .NET Framework 2.0.
A good use of semaphores is to synchronize access to a limited number of resources. In your case it seems you have resources, like Context, which you want to share between consumers.
You can create a semaphore to manage the resources like:
var resourceManager = new Semaphore(10, 10); // 10 slots, all free initially
And then wait for a resource to be available in the BeginProcess method using:
resourceManager.WaitOne();
And finally free the resource in the EndProcess method using:
resourceManager.Release();
Here's a good blog about using Semaphores in a situation like yours:
https://web.archive.org/web/20121207180440/http://www.dijksterhuis.org/using-semaphores-in-c/
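Putting those fragments together, a rough sketch (the class shape and names are mine, assuming the Context type from the question):

public class WorkerBase
{
    // Up to 10 consumers may hold a context at the same time.
    private static readonly Semaphore resourceManager = new Semaphore(10, 10);

    private Context currentContext;

    internal void BeginProcess(Context context)
    {
        resourceManager.WaitOne(); // blocks until one of the 10 slots is free
        currentContext = context;
    }

    internal void EndProcess()
    {
        currentContext = null;
        resourceManager.Release(); // hand the slot back
    }
}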
The Interlocked class can be used for a thread-safe solution that exits the method, instead of blocking, when a re-entrant call is made. Like Vlad Gonchar's solution below, but thread-safe.
private int refreshCount = 0;
private void Refresh()
{
if (Interlocked.Increment(ref refreshCount) != 1) return;
try
{
// do something here
}
finally
{
Interlocked.Decrement(ref refreshCount);
}
}
There is a very simple way to prevent re-entrancy (on one thread):
private bool bRefresh = false;
private void Refresh()
{
if (bRefresh) return;
bRefresh = true;
try
{
// do something here
}
finally
{
bRefresh = false;
}
}
I have a question about locking and whether I'm doing it right.
In a class, I have a static lock-object which is used in several methods, assume access modifiers are set appropriately, I won't list them to keep it concise.
class Foo
{
static readonly object MyLock = new object();
void MethodOne()
{
lock(MyLock) {
// Dostuff
}
}
void MethodTwo()
{
lock(MyLock) {
// Dostuff
}
}
}
Now, the way I understand it, a lock guarantees only one thread at a time will be able to grab it and get into the DoStuff() part of one method.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time? Meaning that it uses the lock it has acquired for both methods?
My intended functionality is that every method in this class can only be called by a single thread while no other method in this class is currently executing.
The underlying usage is a database class for which I only want a single entry and exit point. It uses SQL Compact, so if I attempt to read protected data I get all sorts of memory errors.
Let me just add that every once in a while a memory exception on the database occurs and I don't know where it's coming from. I thought it was because of one thread doing multiple things with the database before completing them, but this code seems to work like it should.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time?
No. The same thread can't call both methods at the same time, whether a lock is used or not.
lock(MyLock)
It can be understood as follows:
The MyLock object has a key to enter itself. The thread (say t1) that accesses it first gets the key. Other threads will have to wait until t1 releases it. But t1 itself can call another method that locks on MyLock and will pass this line, as it has already acquired the lock.
But calling both methods at the same time is simply not possible for a single thread.
the way I understand it, a lock guarantees only one thread at a time will be able to grab it and get into the DoStuff() part of one method.
Your understanding is correct, but remember that threads are used for parallel execution, while execution within a single thread is always sequential.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time?
It is not possible for a single thread to call anything at the same time.
In a multithreaded application, this can happen - the methods can be called simultaneously, but the // Dostuff sections can only be accessed sequentially.
My intended functionality is that every method in this class can only be called by a single thread while no other method in this class is currently executing.
Then don't use additional threads in your application - just stick with the main one.
The only way for a thread inside the DoStuff of a running MethodOne to also enter MethodTwo is for that DoStuff to call MethodTwo itself. If this is not happening (i.e. methods in your "mutually locked" group do not call each other), you are safe.
There are a few things that can be answered here.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time? Meaning that it uses the lock it has acquired for both methods?
No. A thread has a single program counter; it's either in MethodOne() or in MethodTwo(). If, however, you have something as follows:
public void MethodThree()
{
lock (MyLock)
{
MethodOne();
MethodTwo();
}
}
That will also work: a thread can acquire the same lock multiple times. Just watch what you're doing, as you can easily get into a deadlock as the code becomes more complex.
My intended functionality is that every method in this class can only be called by a single thread while no other method in this class is currently executing.
The underlying usage is a database class for which I only want a single entry and exit point. It uses SQL Compact, so if I attempt to read protected data I get all sorts of memory errors.
I don't really understand why, but if you think you need to do this because you're using SQL Compact, you're wrong. You should be using transactions, which are supported on SQL CE.
E.g.
using (var connection = new SqlCeConnection(/* connection string */))
using (var command = new SqlCeCommand(/* query */, connection))
{
    connection.Open(); // BeginTransaction requires an open connection
    using (var transaction = connection.BeginTransaction())
    {
        command.Transaction = transaction;
        command.ExecuteNonQuery();
        transaction.Commit();
    }
}
I have a C# app that needs to do a hot swap of a data input stream to a new handler class without breaking the data stream.
To do this, I have to perform multiple steps in a single thread without letting any other thread (above all the data receiving thread) run in between them due to CPU switching.
This is a simplified version of the situation but it should illustrate the problem.
void SwapInputHandler(Foo oldHandler, Foo newHandler)
{
UnhookProtocol(oldHandler);
HookProtocol(newHandler);
}
These two lines (unhook and hook) must execute in the same CPU slice to prevent any packets from getting through in case another thread executes in between them.
How can I make sure that these two commands run sequentially using C# threading methods?
edit
There seems to be some confusion, so I will try to be more specific. I didn't mean concurrently as in executing at the same time, just in the same CPU time slice, so that no other thread executes before these two complete. A lock is not what I'm looking for, because that will only prevent THIS CODE from being executed again before the two commands run; I need to prevent ANY THREAD from running before these commands are done. Also, again, this is a simplified version of my problem, so don't try to solve my example - please answer the question.
Performing the operation in a single time slice will not help at all - the operation could just execute on another core or processor in parallel and access the stream while you perform the swap. You will have to use locking to prevent everybody from accessing the stream while it is in an inconsistent state.
Your data receiving thread needs to lock around accessing the handler pointer and you need to lock around changing the handler pointer.
Alternatively if your handler is a single variable you could use Interlocked.Exchange() to swap the value atomically.
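A sketch of that approach (the field name, the use of Volatile.Read, and the Handle method are illustrative assumptions, not from the question):

private Foo currentHandler;

void SwapInputHandler(Foo newHandler)
{
    // Atomically publish the new handler and get the old one back;
    // no receiving thread can ever observe a half-swapped state.
    Foo oldHandler = Interlocked.Exchange(ref currentHandler, newHandler);
    UnhookProtocol(oldHandler);
    HookProtocol(newHandler);
}

void OnDataReceived(byte[] packet)
{
    // The receiving thread takes a consistent snapshot of the handler.
    Foo handler = Volatile.Read(ref currentHandler);
    handler.Handle(packet);
}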
Why not go at this from another direction, and let the thread in question handle the swap. Presumably, something wakes up when there's data to be handled, and passes it off to the current Foo. Could you post a notification to that thread that it needs to swap in a new handler the next time it wakes up? That would be much less fraught, I'd think.
Okay - to answer your specific question.
You can enumerate through all the threads in your process and call Thread.Suspend() on each one (except the active one), make the change, and then call Thread.Resume(). Note, though, that Thread.Suspend and Thread.Resume have long been deprecated, precisely because suspending arbitrary threads (which may hold arbitrary locks) risks deadlocking the process.
Assuming your handlers are thread safe, my recommendation is to write a public wrapper over your handlers that does all the locking it needs using a private lock so you can safely change the handlers behind the scenes.
If you do this you can also use a ReaderWriterLockSlim for accessing the wrapped handlers, which allows concurrent read access.
Or you could architect your wrapper class and handler classes in such a way that no locking is required and the handler swapping can be done using a simple interlocked write or compare-exchange.
Here's an example:
public interface IHandler
{
    void Foo();
    void Bar();
}

public class ThreadSafeHandler : IHandler
{
    ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    IHandler wrappedHandler;

    public ThreadSafeHandler(IHandler handler)
    {
        wrappedHandler = handler;
    }

    public void Foo()
    {
        try
        {
            rwLock.EnterReadLock();
            wrappedHandler.Foo();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void Bar()
    {
        try
        {
            rwLock.EnterReadLock();
            wrappedHandler.Bar();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void SwapHandler(IHandler newHandler)
    {
        try
        {
            rwLock.EnterWriteLock();
            UnhookProtocol(wrappedHandler);
            HookProtocol(newHandler);
            // Publish the new handler; readers see it once the write lock is released.
            wrappedHandler = newHandler;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}
Take note that this is still not thread-safe if atomic operations are required across several of the handler's methods; then you would need higher-level locking between threads, or methods on your wrapper class to support thread-safe atomic operations (something like BeginThreadSafeBlock() followed by EndThreadSafeBlock(), which lock the wrapped handler for writing for a series of operations).
You can't, and it's logical that you can't. The best you can do is keep any other thread from disrupting the state between those two actions (as has already been said).
Here is why you can't:
Imagine there were a block that told the operating system never to switch threads while you're inside that block. That would be technically possible, but it would lead to starvation everywhere.
You might think your threads are the only ones being used, but that's an unwise assumption. There's the garbage collector, there are async operations that work with thread-pool threads, and an external reference such as a COM object could spawn its own thread (in your memory space), so nothing could make progress while you're at it.
Imagine you perform a very long operation in your HookProtocol method. It involves a lot of allocations - no leaks, but since the garbage collector can't take over to free your resources, you end up with no memory left. Or imagine you call a COM object that uses multithreading to handle your request... but it can't start its new threads (well, it can start them, but they never get to run) and then joins them, waiting for them to finish before coming back... and therefore you join on yourself, never returning!
As other posters have already said, you can't enforce a system-wide critical section from user-mode code. However, you don't need one to implement the hot swapping.
Here is how.
Implement a proxy with the same interface as your hot-swappable Foo object. The proxy shall call HookProtocol and never unhook (until your app is stopped). It shall contain a reference to the current Foo handler, which you can replace with a new instance when needed. The proxy shall direct the data it receives from hooked functions to the current handler. Also, it shall provide a method for atomic replacement of the current Foo handler instance (there are a number of ways to implement it, from a simple mutex to lock-free).
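A rough sketch of such a proxy, assuming Foo is a non-sealed class exposing a virtual Handle method (all names here are illustrative):

public class FooProxy : Foo
{
    private Foo current;

    public FooProxy(Foo initial)
    {
        current = initial;
        HookProtocol(this); // hooked once; never unhooked until shutdown
    }

    // Invoked by the protocol with incoming data; forwards to the
    // current handler. Calls already in flight finish on the old handler.
    public override void Handle(byte[] packet)
    {
        Volatile.Read(ref current).Handle(packet);
    }

    // Lock-free atomic replacement of the current handler.
    public void SwapHandler(Foo newHandler)
    {
        Interlocked.Exchange(ref current, newHandler);
    }
}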
I have a method which should be executed in an exclusive fashion. Basically, it's a multithreaded application where the method is invoked periodically by a timer, but it could also be manually triggered by a user action.
Let's take an example :
The timer elapses, so the method is called. The task could take a few seconds.
Right after, the user clicks on some button, which should trigger the same task: BAM. It does nothing since the method is already running.
I used the following solution :
public void DoRecurentJob()
{
if(!Monitor.TryEnter(this.lockObject))
{
return;
}
try
{
// Do work
}
finally
{
Monitor.Exit(this.lockObject);
}
}
Where lockObject is declared like this:
private readonly object lockObject = new object();
Edit: There will be only one instance of the object which holds this method, so I updated the lock object to be non-static.
Is there a better way to do that ? Or maybe this one is just wrong for any reason ?
This looks reasonable if you are just interested in not having the method run in parallel. Note that there's nothing to stop it from running twice in immediate succession - say, if you pushed the button half a microsecond after the timer executed Monitor.Exit().
And having the lock object as readonly static also makes sense.
You could also use Mutex or Semaphore if you want it to work cross-process (with a slight performance penalty), or if you need to allow some number of simultaneous threads other than one to run your piece of code.
There are other signalling constructs that would work, but your example looks like it does the trick, and in a simple and straightforward manner.
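For the cross-process case, a sketch with a named Mutex (the mutex name and method name are made up; WaitOne(0) returns immediately instead of blocking):

private static readonly Mutex CrossProcessLock =
    new Mutex(false, @"Global\MyApp.RecurrentJob");

public void DoRecurrentJob()
{
    // false means another thread or process is already running the job.
    if (!CrossProcessLock.WaitOne(0))
        return;
    try
    {
        // Do work
    }
    finally
    {
        CrossProcessLock.ReleaseMutex();
    }
}

Keep in mind that Mutex, unlike Semaphore, is re-entrant on the same thread, so this guards against other threads and processes but not against same-thread re-entry.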
Minor nit: if the lockObject variable is static, then "this.lockObject" shouldn't compile. It also feels slightly odd (and should at least be heavily documented) that although this is an instance method, it has distinctly type-wide behaviour as well. Possibly make it a static method which takes an instance as the parameter?
Does it actually use the instance data? If not, make it static. If it does, you should at least return a boolean to say whether or not you did the work with the instance - I find it hard to imagine a situation where I want some work done with a particular piece of data, but I don't care if that work isn't performed because some similar work was being performed with a different piece of data.
I think it should work, but it does feel a little odd. I'm not generally a fan of using manual locking, just because it's so easy to get wrong - but this does look okay. (You need to consider asynchronous exceptions between the "if" and the "try" but I suspect they won't be a problem - I can't remember the exact guarantees made by the CLR.)
I think Microsoft recommends using the lock statement, instead of using the Monitor class directly. It gives a cleaner layout and ensures the lock is released in all circumstances.
public class MyClass
{
// Used as a lock context
private readonly object myLock = new object();
public void DoSomeWork()
{
lock (myLock)
{
// Critical code section
}
}
}
If your application requires the lock to span all instances of MyClass you can define the lock context as a static field:
private static readonly object myLock = new object();
The code is fine, but I would agree with changing the method to be static, as it conveys the intention better. It feels odd that all instances of a class share a method that runs synchronized between them, yet that method isn't static.
Remember you can always make the static synchronized method protected or private, leaving it visible only to instances of the class.
public class MyClass
{
public void AccessResource()
{
OneAtATime(this);
}
private static void OneAtATime(MyClass instance)
{
if( !Monitor.TryEnter(lockObject) )
// ...
This is a good solution, although I'm not really happy with the static lock. Right now you're not waiting for the lock, so you won't get into trouble with deadlocks. But making locks too visible can easily get you into trouble the next time you have to edit this code. Also, this isn't a very scalable solution.
I usually try to make all the resources I want to protect from multithreaded access private instance variables of a class, and then keep the lock as a private instance variable too. That way you can instantiate multiple objects when you need to scale.
A more declarative way of doing this is using the MethodImplOptions.Synchronized specifier on the method to which you wish to synchronize access:
[MethodImpl(MethodImplOptions.Synchronized)]
public void OneAtATime() { }
However, this method is discouraged for several reasons, most of which can be found here and here. I'm posting this so you won't feel tempted to use it. In Java, synchronized is a keyword, so it may come up when reviewing threading patterns.
We have a similar requirement, with the additional twist that if the long-running process is requested again while it is running, it should queue up one more cycle to run after the current cycle completes. It's similar to this:
https://codereview.stackexchange.com/questions/16150/singleton-task-running-using-tasks-await-peer-review-challenge
private bool queued = false;
private bool running = false;
private readonly object thislock = new object();

void Enqueue()
{
    queued = true;
    // Keep cycling while another request arrived during the previous cycle.
    while (Dequeue())
    {
        try
        {
            // do work
        }
        finally
        {
            running = false;
        }
    }
}

bool Dequeue()
{
    lock (thislock)
    {
        // Only start a cycle if none is running and one has been requested.
        if (running || !queued)
        {
            return false;
        }
        else
        {
            queued = false;
            running = true;
            return true;
        }
    }
}
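Both triggers then call the same method (a hypothetical wiring, assuming a System.Timers.Timer and a WinForms button):

// Overlapping requests coalesce into at most one queued extra cycle.
timer.Elapsed += (sender, e) => Enqueue();
refreshButton.Click += (sender, e) => Enqueue();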