Interlocked.Exchange as a lock - C#

I'm trying to use Interlocked.Exchange to create a thread-safe lock for some object initialization functions. Consider the code below. I want to be sure that the if behaves the same as when the while is substituted for it. The reason I ask is that if the code is run over and over, there are times when you get an exit message before the set message. I'd just like to confirm that this is only an output-ordering thing, since the state on exit always seems to be correct.
using System;
using System.Threading;

class Program
{
    private static void Main(string[] args)
    {
        Thread thread1 = new Thread(new ThreadStart(() => InterlockedCheck("1")));
        Thread thread2 = new Thread(new ThreadStart(() => InterlockedCheck("2")));
        Thread thread3 = new Thread(new ThreadStart(() => InterlockedCheck("3")));
        Thread thread4 = new Thread(new ThreadStart(() => InterlockedCheck("4")));
        thread4.Start();
        thread1.Start();
        thread2.Start();
        thread3.Start();
        Console.ReadKey();
    }

    const int NOTCALLED = 0;
    const int CALLED = 1;
    static int _state = NOTCALLED;
    //...
    static void InterlockedCheck(string thread)
    {
        Console.WriteLine("Enter thread [{0}], state [{1}]", thread, _state);
        //while (Interlocked.Exchange(ref _state, CALLED) == NOTCALLED)
        if (Interlocked.Exchange(ref _state, CALLED) == NOTCALLED)
        {
            Console.WriteLine("Setting state on T[{0}], state[{1}]", thread, _state);
        }
        Console.WriteLine("Exit from thread [{0}] state[{1}]", thread, _state);
    }
}

I wouldn't call that a lock since it can be used only once, but you are correct if you assume that the statements inside the if scope would be executed exactly once even if InterlockedCheck is called from multiple threads concurrently.
That's because you're starting with NOTCALLED and only setting CALLED using the atomic Interlocked.Exchange. Only the first call will get NOTCALLED back, while all subsequent calls will get back CALLED.
A better (and simpler) solution would be to use .NET's Lazy<T> class, which is perfect for initialization:
static Lazy<ExpensiveInstance> _lazy = new Lazy<ExpensiveInstance>(Initialization, LazyThreadSafetyMode.ExecutionAndPublication);
You can retrieve the result with _lazy.Value and query whether it has been created with _lazy.IsValueCreated. Initialization won't run until it's needed, and it will run no more than once.
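For illustration, here is a minimal sketch of how that might look in use (ExpensiveInstance, Initialization and UseIt are hypothetical placeholders, not from the question):
static Lazy<ExpensiveInstance> _lazy =
    new Lazy<ExpensiveInstance>(Initialization, LazyThreadSafetyMode.ExecutionAndPublication);

// Hypothetical factory method standing in for the expensive set-up work.
static ExpensiveInstance Initialization()
{
    return new ExpensiveInstance();
}

static void UseIt()
{
    // The first thread to touch .Value runs Initialization exactly once; any other
    // thread that arrives while it runs blocks, then sees the same instance.
    ExpensiveInstance instance = _lazy.Value;
    Console.WriteLine(_lazy.IsValueCreated); // true from this point on
}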

I don't see why you are using Interlocked here instead of the much more readable and easily-understood lock statement. Personally, I would advise the latter.
Either way, the reason you are seeing one or more "exit" messages before a "set" message is due to thread scheduling. Even though the first thread to hit the Interlocked will always perform the "set" operation, that thread may be pre-empted before it gets a chance to do the operation, allowing some other thread to emit its "exit" message first.
Note that depending on your exact needs here, while using an if is the same as using a while loop, it's probable that neither accomplishes what you want. That is, the first thread to hit the if is going to set the value to CALLED, so any other thread will just keep on going. If you're really trying to initialize something here, you probably want the other threads to wait for the thread that is actually executing the initialization code, so that all threads can proceed knowing the initialized state is valid.
One way or the other, I do think it would be a good idea to synchronize the entire method: it's less confusing to the user and, more importantly, if the real code is more complicated than what you've shown, it's more likely to produce correct results.
Using Interlocked to accomplish that is much more complicated than the code you have now. It would involve a loop and at least one additional state value. But with a lock statement, it's simple and easy to read. It would look more like this:
const int NOTCALLED = 0;
const int CALLED = 1;
static int _state = NOTCALLED;
static readonly object _lock = new object();
//...
static void InterlockedCheck(string thread)
{
    lock (_lock)
    {
        Console.WriteLine("Enter thread [{0}], state [{1}]", thread, _state);
        if (_state == NOTCALLED)
        {
            Console.WriteLine("Setting state on T[{0}], state[{1}]", thread, _state);
            _state = CALLED;
        }
        Console.WriteLine("Exit from thread [{0}] state[{1}]", thread, _state);
    }
}
That way, the first thread to acquire the lock gets to execute all of its code before any other thread does, and in particular it ensures that the "set" operation in that first thread happens before the "exit" operation in any other thread.
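For completeness, here is a rough sketch of the Interlocked-only variant alluded to above. It needs a third state and a wait loop so that late arrivals wait for the initializer to finish; the INPROGRESS name and the SpinWait/Volatile calls (.NET 4/4.5) are my additions for illustration, not something from the question:
const int NOTCALLED = 0;
const int INPROGRESS = 1; // extra state: one thread is currently initializing
const int CALLED = 2;
static int _state = NOTCALLED;

static void InterlockedCheck(string thread)
{
    // Atomically claim the initialization; only one thread can win this race.
    if (Interlocked.CompareExchange(ref _state, INPROGRESS, NOTCALLED) == NOTCALLED)
    {
        Console.WriteLine("Setting state on T[{0}]", thread);
        // ... do the actual one-time initialization here ...
        Interlocked.Exchange(ref _state, CALLED); // publish completion
    }
    else
    {
        // Everyone else waits until the winner has published CALLED.
        SpinWait.SpinUntil(() => Volatile.Read(ref _state) == CALLED);
    }
    Console.WriteLine("Exit from thread [{0}] state[{1}]", thread, _state);
}
With lock, all of this collapses to the version above, which is why the lock statement is usually the better choice here.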

Better approach to concurrently "do or wait and skip"

I wonder whether there is a better solution for this task. I have a function which is called concurrently by a number of threads, but if some thread is already executing the code, the other threads should skip that part of the code and wait until that thread finishes the execution. Here is what I have for now:
int _flag = 0;
readonly ManualResetEventSlim Mre = new ManualResetEventSlim();

void Foo()
{
    if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
    {
        Mre.Reset();
        try
        {
            // do stuff
        }
        finally
        {
            Mre.Set();
            Interlocked.Exchange(ref _flag, 0);
        }
    }
    else
    {
        Mre.Wait();
    }
}
What I want to achieve is faster execution, lower overhead and prettier look.
You could use a combination of an AutoResetEvent and a Barrier to do this.
You can use the AutoResetEvent to ensure that only one thread enters a "work" method.
The Barrier is used to ensure that all the threads wait until the one that entered the "work" method has returned from it.
Here's some sample code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        const int TASK_COUNT = 3;

        static readonly Barrier barrier = new Barrier(TASK_COUNT);
        static readonly AutoResetEvent gate = new AutoResetEvent(true);

        static void Main()
        {
            Parallel.Invoke(task, task, task);
        }

        static void task()
        {
            while (true)
            {
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the gate.");

                // This bool is just for test purposes to prevent the same thread from doing the
                // work every time!
                bool didWork = false;

                if (gate.WaitOne(0))
                {
                    work();
                    didWork = true;
                    gate.Set();
                }

                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the barrier.");
                barrier.SignalAndWait();

                if (didWork)
                    Thread.Sleep(10); // Give a different thread a chance to get past the gate!
            }
        }

        static void work()
        {
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is entering work()");
            Thread.Sleep(3000);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is leaving work()");
        }
    }
}
However, the Task Parallel Library may well have a better, higher-level solution. It's worth reading up on it a bit.
First of all, the waiting threads don't do anything: they only wait, and after they get the signal from the event they simply leave the method, so you should add a while loop. After that, you can use an AutoResetEvent instead of the manual one, as @MatthewWatson suggested. You might also consider a SpinWait inside the loop, which is a lightweight solution.
Second, why use an int when the flag field is clearly boolean in nature?
Third, why not use simple locking, as @grrrrrrrrrrrrr suggested? That is exactly what you are doing here: forcing other threads to wait for one. If your state should be written by only one thread at a time but can be read by multiple threads, you can use a ReaderWriterLockSlim for that kind of synchronization.
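As a rough illustration of the ReaderWriterLockSlim idea (the cache field and the two methods are made-up examples, not part of the question):
static readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
static readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

static string ReadValue(string key)
{
    _rwLock.EnterReadLock(); // many readers may hold the lock at the same time
    try
    {
        string value;
        return _cache.TryGetValue(key, out value) ? value : null;
    }
    finally
    {
        _rwLock.ExitReadLock();
    }
}

static void WriteValue(string key, string value)
{
    _rwLock.EnterWriteLock(); // exclusive: blocks both readers and other writers
    try
    {
        _cache[key] = value;
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}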
What I want to achieve is faster execution, lower overhead and prettier look.
faster execution
unless your "Do Stuff" is extremely fast this code shouldn't have any major overhead.
lower overhead
Again, Interlocked Exchange,/CompareExchange are very low overhead, as is manual reset event.
If your "Do Stuff" is really fast, e.g. moving a linked list head, then you can spin:
prettier look
Correct multi-threaded C# code rarely looks pretty when compared to correct single threaded C# code. The language idioms are just not there yet.
That said: If you have a really fast operation ("a few tens of cycles"), then you can spin: (although without knowing exactly what your code is doing, I can't say if this is correct).
if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
{
    try
    {
        // do stuff that is very quick.
    }
    finally
    {
        Interlocked.Exchange(ref _flag, 0);
    }
}
else
{
    SpinWait.SpinUntil(() => _flag == 0);
}
The first thing that springs to mind is to change it to use a lock. This won't skip the code, but it will cause each thread that reaches it to pause while the first thread executes its work. This way the lock also gets released automatically if an exception is thrown.
object syncer = new object();

void Foo()
{
    lock (syncer)
    {
        //Do stuff
    }
}

How to properly lock an object

I am just reading a great tutorial about threads and have a problem with locks. I need some tip/advice that will point me in the right direction. I'd like to understand why the output isn't ordered as I expect. The code below shows my simple example.
class Program {
    class A {
        public object obj = new object();
        public int i;
    }

    class B {
        public object obj = new object();
        public int j;
    }

    static void Main() {
        Console.Write("Thread1: ");
        A a = new A();
        for (a.i = 0; a.i < 9; a.i++) {
            lock (a) {
                new Thread(() => { Console.Write(a.i); }).Start();
            }
        }

        Thread.Sleep(500);

        Console.Write("\nThread2: ");
        B b = new B();
        for (b.j = 0; b.j < 9; b.j++) {
            new Thread(() => { lock (b) { Console.Write(b.j); } }).Start();
        }

        Console.ReadLine();
    }
}
Example output:
Thread1: 222456799
Thread2: 233357889
Link to the tutorial:
http://www.albahari.com/threading/
You are only locking while you create the thread, or (in the second case), access the value. Locks must be used by all threads, otherwise they do nothing. It is the act of trying to acquire the lock that blocks. Even if you did lock in both threads, that wouldn't help you marry each thread to the value of a.i (etc) at a particular point in time (that no longer exists).
Equally, threads work at their own pace; you cannot guarantee order unless you have a single worker and queue; or you implement your own re-ordering.
Since each thread runs at its own pace and you are capturing the variable a, it is entirely likely that the field a.i will have changed by the time the thread gets as far as Console.Write. Instead, you should capture the value by making a copy:
A a = new A();
for (a.i = 0; a.i < 9; a.i++) {
    var tmp = a.i;
    new Thread(() => { Console.Write(tmp); }).Start();
}
(or probably remove a completely)
for (int i = 0; i < 9; i++) {
    var tmp = i;
    new Thread(() => { Console.Write(tmp); }).Start();
}
There are several issues here:
First, you are locking on a when you create a thread, so the thread is created, but your original main thread then releases the lock and keeps on trucking through the loop, while the created threads run concurrently.
You want to move the first lock into the thread that uses a, i.e. into the Thread delegate, like this:
for (a.i = 0; a.i < 9; a.i++)
{
    int id = a.i;
    new Thread(() => { lock (a) { Console.Out.WriteLine("Thread {0} sees {1}", id, a.i); } }).Start(); // lots of smileys here :)
}
If you look closely, you will notice that the threads are not locked the same way for A and B, which tells you that threads live their own lives and Thread creation != Thread life.
Even with locking your thread runners, you can and will end up in situations where thread 1 runs AFTER thread 2... but they will never run at the same time, thanks to your lock.
You also reference a shared member in all your threads: a.i. This member is initialized in the main thread, which doesn't lock anything, so your behaviour is not predictable. This is why I added the captured variable id, which grabs the value of a.i when the thread is created and is used in the thread delegate in a safe way.
Also, always lock on a non-public instance. If you lock on a, make sure no one else can see a and get the opportunity to lock on it.
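In other words, something like this minimal sketch (the Increment method is just a made-up example), where the lock object is private to the class:
class A {
    // Private lock object: code outside this class can never lock on it,
    // so nothing external can interfere with our synchronization.
    private readonly object _sync = new object();
    public int i;

    public void Increment() {
        lock (_sync) {
            i++;
        }
    }
}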
Because the lock is always held by the main thread: you are starting the threads after acquiring the lock, and once you have acquired it there is no contention. The threads are then free to run however they want, and the threads started by the main thread aren't synchronized by any lock. Something that comes closer to your expectation (in ordering only; the exact count again depends on how fast your machine is and how many cores you've got) is the following. Note that b.j++ is now inside the lock.
for (b.j = 0; b.j < 9; )
{
    new Thread(() => { lock (b) { Console.Write(b.j); b.j++; } }).Start();
}
The basic idea behind locking, or a critical section, is to allow only one thing to happen at a time, not to enforce an order. In the modification above I've locked the increment operation, which guarantees that the current thread has to finish running all the code under its acquired lock, and release the lock, before the next thread can start running the code under the lock.

Making the main thread wait until a task is processed by a worker thread

Let's say we have the following program, where the Start method delegates some long-running task to another thread. When the Stop method is called, I need to make sure that the worker thread completes the current task and does not leave it in the middle. If it has already completed the task and is in a sleep state, then it can stop immediately.
Please guide me on how I should do this.
static int itemsProcessed = 0;
static Thread worker;

static void Start()
{
    var ts = new ThreadStart(Run);
    worker = new Thread(ts);
    worker.Start();
}

static void Stop()
{
    //wait until the 'worker' completes processing the current item.
    Console.WriteLine("{0} Items Processed", itemsProcessed);
}

static void Run()
{
    while (true)
    {
        ALongRunningTask();
        itemsProcessed++;
        Thread.Sleep(1000);
    }
}
One way to do this is to use a volatile variable to communicate between the two threads. To do this, create a "volatile bool isRunning;" field. In Start, set it to true; in Run, change your while loop to "while (isRunning)". In Stop, set isRunning to false and then call worker.Join(). This will cause your Run method to exit once it finishes processing the current item, and Join will wait until the thread exits.
The last thing you need to do is access itemsProcessed in a thread-safe way. In the current code there is no way to know whether Stop sees the most up-to-date value of itemsProcessed, since it is changed from another thread. One option would be to create a lock for itemsProcessed, hold the lock inside Run, and acquire it before the WriteLine statement in Stop.
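As a rough sketch of that last point (the _countLock field and the helper methods are hypothetical, just to show where the lock goes):
static readonly object _countLock = new object();
static int itemsProcessed = 0;

// Called from Run after each item, instead of a bare itemsProcessed++.
static void IncrementProcessed()
{
    lock (_countLock)
    {
        itemsProcessed++;
    }
}

// Called from Stop just before the WriteLine.
static int GetProcessedCount()
{
    lock (_countLock)
    {
        return itemsProcessed;
    }
}
Alternatively, Interlocked.Increment(ref itemsProcessed) in Run gives the same thread safety for the counter without a separate lock object.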
Assuming you always want the long-running task thread as a whole to finish, you could just wait until the thread is done by calling Thread.Join().
If you want to finish the work in your thread gracefully, you must do some sort of message passing; in the simplest case it can be just a boolean. In your case there seems to be just one thread, so it can be a static variable (simplifying here as much as possible):
static volatile bool processing = true;

static void Stop()
{
    processing = false;
    //wait until the 'worker' completes processing the current item.
    worker.Join();
    Console.WriteLine("{0} Items Processed", itemsProcessed);
}

static void Run()
{
    while (processing)
    {
        ALongRunningTask();
        itemsProcessed++;
    }
}

What's a useful pattern for waiting for all threads to finish?

I have a scenario where I will have to kick off a ton of threads (possibly up to 100), then wait for them to finish, then perform a task (on yet another thread).
What is an accepted pattern for doing this type of work? Is it simply .Join? Or is there a higher level of abstraction nowadays?
Using .NET 2.0 with VS2008.
In .NET 3.5 SP1 or .NET 4, the TPL would make this much easier. However, I'll tailor this to .NET 2 features only.
There are a couple of options. Using Thread.Join is perfectly acceptable, especially if the threads are all ones you are creating manually. This is very easy, reliable, and simple to implement. It would probably be my choice.
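For example, a minimal sketch of the Thread.Join approach (DoWork is a stand-in for whatever each worker actually does):
// Start all the workers, keeping a reference to each, then join them one by one.
List<Thread> threads = new List<Thread>();
for (int i = 0; i < 100; i++)
{
    Thread t = new Thread(new ThreadStart(DoWork));
    threads.Add(t);
    t.Start();
}
foreach (Thread t in threads)
{
    t.Join(); // blocks until that particular thread has finished
}
// All workers are done at this point; safe to kick off the follow-up task.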
However, the other option would be to create a counter for the total amount of work, and to use a reset event when the counter reaches zero. For example:
class MyClass {
    int workToComplete;   // Total number of elements
    ManualResetEvent mre; // For waiting

    void StartThreads()
    {
        this.workToComplete = 100;
        mre = new ManualResetEvent(false);

        int total = workToComplete;
        for (int i = 0; i < total; ++i)
        {
            Thread thread = new Thread(new ThreadStart(this.ThreadFunction));
            thread.Start(); // Kick off the thread
        }

        mre.WaitOne(); // Will block until all work is done
    }

    void ThreadFunction()
    {
        // Do your work

        if (Interlocked.Decrement(ref this.workToComplete) == 0)
            this.mre.Set(); // Allow the main thread to continue here...
    }
}
Did you look at ThreadPool? It looks like in this ThreadPool tutorial the author solves the same task you're asking about.
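The general shape of a ThreadPool-based version is roughly this (a sketch only, with made-up names; the tutorial itself isn't reproduced here):
static int pending = 100;
static readonly ManualResetEvent allDone = new ManualResetEvent(false);

static void RunAll()
{
    for (int i = 0; i < 100; i++)
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            // ... do the work for one item ...
            if (Interlocked.Decrement(ref pending) == 0)
                allDone.Set(); // the last work item to finish signals completion
        });
    }
    allDone.WaitOne(); // wait here until all queued work items have finished
}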
What's worked well for me is to store each thread's ManagedThreadId in a dictionary as I launch it, and then have each thread pass its id back through a callback method when it completes. The callback method deletes the id from the dictionary and checks the dictionary's Count property; when it's zero you're done. Be sure to lock around the dictionary both for adding to and deleting from it.
I am not sure that any kind of standard thread locking or synchronization mechanisms will really work with so many threads. However, this might be a scenario where some basic messaging might be an ideal solution to the problem.
Rather than using Thread.Join, which will block (and could be very difficult to manage with so many threads), you might try setting up one more thread that aggregates completion messages from your worker threads. When the aggregator has received all expected messages, it completes. You could then use a single WaitHandle between the aggregator and your main application thread to signal that all of your worker threads are done.
public class WorkerAggregator
{
    public WorkerAggregator(EventWaitHandle completionEvent)
    {
        m_completionEvent = completionEvent;
        m_workers = new Dictionary<int, Thread>();
    }

    private readonly EventWaitHandle m_completionEvent;
    private readonly Dictionary<int, Thread> m_workers;

    public void StartWorker(Action worker)
    {
        Thread thread = null;
        thread = new Thread(d =>
        {
            worker();
            notifyComplete(thread.ManagedThreadId);
        });

        lock (m_workers)
        {
            m_workers.Add(thread.ManagedThreadId, thread);
        }

        thread.Start();
    }

    private void notifyComplete(int threadID)
    {
        bool done = false;

        lock (m_workers)
        {
            m_workers.Remove(threadID);
            done = m_workers.Count == 0;
        }

        if (done) m_completionEvent.Set();
    }
}
Note, I have not tested the code above, so it might not be 100% correct. However I hope it illustrates the concept enough to be useful.

Is this a bug in the .NET Monitor/lock statement, or does MessageBox.Show behave differently?

Imagine you have two buttons on a WinForms form. What do you think the behavior should be when the user presses button 1 with the code below?
Should it display all 5 message boxes in one go, or one by one, given that the MessageBox.Show statement is inside a lock statement?
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private static readonly object lockobject = new object();

    private void button1_Click(object sender, EventArgs e)
    {
        var action = new Action(function);
        for (int i = 0; i < 5; i++)
        {
            action.BeginInvoke(null, null);
        }
    }

    private void function()
    {
        if (button2.InvokeRequired)
        {
            var func = new Action(function);
            button2.Invoke(func);
        }
        else
        {
            lock (lockobject)
            {
                MessageBox.Show("Testing");
            }
        }
    }
}
Now if we replace MessageBox.Show with any other statement, it executes the statement only one at a time; the other threads wait their turn.
Since your lock statement is executed when InvokeRequired is false, the locks will all run on the same (main) thread. Therefore the locks will not block.
If you want the MessageBox to block, use ShowDialog instead.
1. lock only blocks if another thread owns the lock; locking on the same object from the same thread multiple times is allowed - otherwise it would be an instant deadlock, since it would be blocking the current thread while waiting for the current thread (see the small sketch after this list).
2. Control.BeginInvoke doesn't execute code on a different thread; it always executes the code on the thread that pumps messages for the control. It does so by posting a message to the control's input queue and then executing the code when the message arrives.
3. Because of 2, your code isn't multi-threaded at all; everything executes on the same thread - and this brings us back to 1: when you don't have multiple threads, lock does nothing.
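The re-entrancy described in point 1 is easy to see with a tiny standalone sketch (a hypothetical example, not from the question); nothing here deadlocks, because the same thread acquires the lock both times:
static readonly object gate = new object();

static void Outer()
{
    lock (gate)
    {
        Console.WriteLine("Outer holds the lock");
        Inner(); // re-acquiring the same lock on the same thread is allowed
    }
}

static void Inner()
{
    lock (gate) // does not block: this thread already owns the lock
    {
        Console.WriteLine("Inner re-entered the lock");
    }
}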
I suspect the UI thread is pumping messages during the MessageBox life-cycle. Because locks are re-entrant (and the UI thread is running the code each time), this causes the above. Perhaps try passing the owner (this) into the message-box? (I'll try in a sec...).
You could block it more forcefully, but that will block painting ("not responding" etc).
I agree with Nir. After you change your function to the one below, you can test that you are running on the same thread (not surprisingly):
private void function()
{
    if (button2.InvokeRequired)
    {
        var func = new Action(function);
        button2.Invoke(func);
    }
    else
    {
        lock (lockobject)
        {
            int threadId = Thread.CurrentThread.ManagedThreadId;
            MessageBox.Show("Testing. Running on thread " + threadId);
        }
    }
}
So here, because your UI thread owns the lock, it doesn't get blocked. The bottom line is that STA threads are not compatible with proper multithreaded programming.
