What happens to locks on objects passed to other threads? - c#

I'm not quite sure how to word this, so I'll just paste my code and ask the question:
private void remoteAction_JobStatusUpdated(JobStatus status) {
    lock (status) {
        status.LastUpdatedTime = DateTime.Now;
        doForEachClient(c => c.OnJobStatusUpdated(status));
        OnJobStatusUpdated(status);
    }
}

private void doForEachClient(Action<IRemoteClient> task) {
    lock (clients) {
        foreach (KeyValuePair<RemoteClientId, IRemoteClient> entry in clients) {
            IRemoteClient clientProxy = entry.Value;
            RemoteClientId clientId = entry.Key;
            ThreadPool.QueueUserWorkItem(delegate {
                try {
                    task(clientProxy);
#pragma warning disable 168
                } catch (CommunicationException ex) {
#pragma warning restore 168
                    RemoveClient(clientId);
                }
            });
        }
    }
}
Assume that any other code which modifies the status object will acquire a lock on it first.
Since the status object is passed all the way through to multiple ThreadPool threads, and the call to ThreadPool.QueueUserWorkItem will complete before the actual tasks complete, am I ensuring that the same status object gets sent to all clients?
Put another way, when does the lock (status) statement "expire" or cause its lock to be released?

Locks don't expire. When a thread reaches a lock statement it can only proceed if no other thread is currently inside a lock block taken on that same object instance.
In your case there is a main thread executing. It locks both the status and the clients instances before it spins off new tasks that are executed on separate threads. If any code in the new threads wants to acquire a lock on either status or clients, it will have to wait until the main thread has released both locks by leaving both lock blocks. That happens when remoteAction_JobStatusUpdated returns.
You pass the status object to each worker thread and they are all free to do whatever they want with that object. The statement lock (status) by itself in no way protects the status instance. However, if any of the threads tries to execute lock (status), it will block until the main thread releases the lock.
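For example, if a worker needed the same protection, it would have to take the same lock itself. A minimal sketch, assuming the JobStatus type from the question:

ThreadPool.QueueUserWorkItem(delegate
{
    lock (status)   // blocks while any other thread holds lock (status)
    {
        // only code that is inside a lock (status) block somewhere in the
        // program is mutually excluded with this one
        var lastUpdated = status.LastUpdatedTime;
    }
});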
Using two separate object instances to lock can lead to deadlock. Assume one thread executes the following code:
lock (status) {
    ...
    lock (clients) {
        ...
    }
}
Another thread executes the following code where the locks are acquired in the reverse sequence:
lock (clients) {
    ...
    lock (status) {
        ...
    }
}
If the first thread manages to get the status lock first and the second thread gets the clients lock first, they are deadlocked and neither thread can make any further progress.
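One common way to avoid this is to make every thread acquire the locks in the same order, so a circular wait cannot form. A minimal sketch:

// Every thread takes status first, then clients - never the reverse.
lock (status)
{
    lock (clients)
    {
        // work that needs both objects
    }
}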
In general I would advise you to encapsulate your shared state in a separate class and make access to it thread safe:
class State {
    readonly Object locker = new Object();

    public void ModifyState() {
        lock (this.locker) {
            ...
        }
    }

    public String AccessState() {
        lock (this.locker) {
            ...
            return ...
        }
    }
}
You can also mark your methods with the [MethodImpl(MethodImplOptions.Synchronized)] attribute, but it has its pitfalls, as it surrounds the method with a lock (this), which in general isn't recommended.
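For illustration, a minimal sketch of that attribute (the Counter class is made up):

using System.Runtime.CompilerServices;

class Counter
{
    int value;

    // Roughly equivalent to wrapping the body in lock (this) for an instance
    // method (or lock (typeof(Counter)) for a static one), which is why it is
    // generally discouraged.
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Increment()
    {
        value++;
    }
}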
If you want to better understand what is going on behind the scenes of the lock statement you can read the Safe Thread Synchronization article in MSDN Magazine.

The locks certainly don't "expire" on their own; the lock is held until execution leaves the closing brace of the lock (...) { } block.

Related

Multithreading issue with semaphore

I need a piece of code that may only be executed by one thread at a time, based on a parameter key:
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    var semaphore = Semaphores.GetOrAdd(valueKey, s => new SemaphoreSlim(1, 1));
    try
    {
        await semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        semaphore.Release(); // Exception here - System.ObjectDisposedException
        if (semaphore.CurrentCount > 0 && Semaphores.TryRemove(valueKey, out semaphore))
        {
            semaphore?.Dispose();
        }
    }
}
From time to time I get this error:
The semaphore has been disposed. : System.ObjectDisposedException: The semaphore has been disposed.
at System.Threading.SemaphoreSlim.CheckDispose()
at System.Threading.SemaphoreSlim.Release(Int32 releaseCount)
at Project.GetValueWithBlockAsync[TModel](String valueKey, Func`1 valueAction)
Every case I can imagine here looks thread-safe. Please help - what case did I miss?
You have a thread race here, where another task is trying to acquire the same semaphore, and acquires it when you Release - i.e. another thread is awaiting the semaphore.WaitAsync(). The check against CurrentCount is a race condition, and it could go either way depending on timing. The check for TryRemove is irrelevant, as the competing thread already got the semaphore out - it was, after all, awaiting the WaitAsync().
As discussed in the comments, you have a couple of race conditions here.
1. Thread 1 holds the lock and Thread 2 is waiting on WaitAsync(). Thread 1 releases the lock, and then checks semaphore.CurrentCount, before Thread 2 is able to acquire it.
2. Thread 1 holds the lock, releases it, and checks semaphore.CurrentCount, which passes. Thread 2 enters GetValueWithBlockAsync, calls Semaphores.GetOrAdd and fetches the semaphore. Thread 1 then calls Semaphores.TryRemove and disposes the semaphore.
You really need locking around the decision to remove an entry from Semaphores -- there's no way around this. You also don't have a way of tracking whether any threads have fetched a semaphore from Semaphores (and are either currently waiting on it, or haven't yet got to that point).
One way is to do something like this: have a lock which is shared between everyone, but which is only needed when fetching/creating a semaphore, and deciding whether to dispose it. We manually keep track of how many threads currently have an interest in a particular semaphore. When a thread has released the semaphore, it then acquires the shared lock to check whether anyone else currently has an interest in that semaphore, and disposes it only if noone else does.
private static readonly object semaphoresLock = new();
private static readonly Dictionary<string, State> semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    State state;
    lock (semaphoresLock)
    {
        if (!semaphores.TryGetValue(valueKey, out state))
        {
            state = new();
            semaphores[valueKey] = state;
        }
        state.Count++;
    }

    try
    {
        await state.Semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        state.Semaphore.Release();
        lock (semaphoresLock)
        {
            state.Count--;
            if (state.Count == 0)
            {
                semaphores.Remove(valueKey);
                state.Semaphore.Dispose();
            }
        }
    }
}

private class State
{
    public int Count { get; set; }
    public SemaphoreSlim Semaphore { get; } = new(1, 1);
}
The other option, of course, is to let Semaphores grow. Maybe you have a periodic operation to go through and clear out anything which isn't being used, but this will of course need to be protected to ensure that a thread doesn't suddenly become interested in a semaphore which is being cleared up.

AutoResetEvent Reset Method

Super simple question, but I just wanted some clarification. I want to be able to restart a thread using an AutoResetEvent, so I call the following sequence of methods on my AutoResetEvent.
setupEvent.Reset();
setupEvent.Set();
I know it's really obvious, but MSDN doesn't state in their documentation that the Reset method restarts the thread, just that it sets the state of the event to non-signaled.
UPDATE:
Yes, the other thread is waiting at WaitOne(). I'm assuming that when it gets signaled it will resume at the exact point where it left off, which is what I don't want - I want it to restart from the beginning. The following example from this valuable resource illustrates this:
static void Main()
{
    new Thread (Work).Start();

    _ready.WaitOne();                 // First wait until worker is ready
    lock (_locker) _message = "ooo";
    _go.Set();                        // Tell worker to go

    _ready.WaitOne();
    lock (_locker) _message = "ahhh"; // Give the worker another message
    _go.Set();

    _ready.WaitOne();
    lock (_locker) _message = null;   // Signal the worker to exit
    _go.Set();
}

static void Work()
{
    while (true)
    {
        _ready.Set();                 // Indicate that we're ready
        _go.WaitOne();                // Wait to be kicked off...
        lock (_locker)
        {
            if (_message == null) return; // Gracefully exit
            Console.WriteLine (_message);
        }
    }
}
If I understand this example correctly, notice how the Main thread will resume where it left off when the Work thread signals it, but in my case, I would want the Main thread to restart from the beginning.
UPDATE 2:
@Jaroslav Jandek - It's quite involved, but basically I have a CopyDetection thread that runs a FileSystemWatcher to monitor a folder for any new files that are moved or copied into it. My second thread is responsible for replicating the structure of that particular folder into another folder. So my CopyDetection thread has to block that thread from working while a copy/move operation is in progress. When the operation completes, the CopyDetection thread restarts the second thread so it can re-duplicate the folder structure with the newly added files.
UPDATE 3:
@SwDevMan81 - I actually didn't think about that, and that would work save for one caveat. In my program, the source folder that is being duplicated is emptied once the duplication process is complete. That's why I have to block and restart the second thread when new items are added to the source folder, so it has a chance to re-parse the folder's new structure properly.
To address this, I'm thinking of maybe adding a flag that signals that it is safe to delete the source folder's contents. I guess I could put the delete operation on its own Cleanup thread.
@Jaroslav Jandek - My apologies, I thought it would be a simple matter to restart a thread on a whim. To answer your questions: I'm not deleting the source folder, only its contents; it's a requirement from my employer that unfortunately I cannot change. Files in the source folder are getting moved, but not all of them - only files that are properly validated by another process; the rest must be purged, i.e. the source folder is emptied. Also, the reason for replicating the source folder structure is that some of the files are contained within a very strict sub-folder hierarchy that must be preserved in the destination directory. Again, sorry for making it complicated. All of these mechanisms are in place, have been tested and are working, which is why I didn't feel the need to elaborate on them. I only need to detect when new files are added so I can properly halt the other processes while the copy/move operation is in progress; then I can safely replicate the source folder structure and resume processing.
So thread 1 monitors and thread 2 replicates while other processes modify the monitored files.
Concurrent file access aside, you can't continue replicating after a change. So a successful replication only occurs when there is a long enough delay between modifications. Replication cannot be stopped immediately, since you replicate in chunks.
So the result of monitoring should be a command (file copy, file delete, file move, etc.).
The result of a successful replication should be an execution of a command.
Considering multiple operations can occur, you need a queue of commands (or a queued dictionary, so that only one command is performed per file).
// T1:
somethingChanged(string path, CT commandType)
{
    commandQueue.AddCommand(path, commandType);
}

// T2:
while (whatever)
{
    var command = commandQueue.Peek();
    if (command.Execute()) commandQueue.Remove();
    else // operation failed, do what you like.
}
Now you may ask how to create a thread-safe queue, but that probably belongs in another question (there are many implementations on the web).
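For illustration, a minimal sketch of such a thread-safe "queued dictionary" (the class name and TryTake shape are made up; CT is the command-type enum used in the pseudocode above, and this does not match the Peek/Remove API exactly):

using System.Collections.Generic;
using System.Linq;

class CommandQueue
{
    private readonly object gate = new object();
    private readonly Dictionary<string, CT> pending = new Dictionary<string, CT>();

    public void AddCommand(string path, CT commandType)
    {
        // A newer command replaces an older one for the same file.
        lock (gate) { pending[path] = commandType; }
    }

    public bool TryTake(out string path, out CT commandType)
    {
        lock (gate)
        {
            if (pending.Count == 0)
            {
                path = null;
                commandType = default(CT);
                return false;
            }
            path = pending.Keys.First();
            commandType = pending[path];
            pending.Remove(path);
            return true;
        }
    }
}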
EDIT (queue-less version with whole-directory replication - can be combined with the queue):
If you do not need multiple operations (e.g. you always replicate the whole directory) and expect the replication to always finish or fail and cancel, you can do:
private volatile bool shouldStop = true;

// T1:
directoryChanged()
{
    // StopReplicating
    shouldStop = true;
    workerReady.WaitOne();   // Wait for the worker to stop replicating.

    // StartReplicating
    shouldStop = false;
    replicationStarter.Set();
}

// T2:
while (whatever)
{
    replicationStarter.WaitOne();

    ... // prepare, throw some shouldStops so worker does not have to work too much.

    if (!shouldStop)
    {
        foreach (var file in files)
        {
            if (shouldStop) break;
            // Copy the file or whatever.
        }
    }

    workerReady.Set();
}
I think this example clarifies (to me anyway) how reset events work:
var resetEvent = new ManualResetEvent(false);
var myclass = new MyAsyncClass();
myclass.MethodFinished += delegate
{
    resetEvent.Set();
};
myclass.StartAsyncMethod();
resetEvent.WaitOne(); // We want to wait until the event fires to go on
Assume that MyAsyncClass runs the method on another thread and fires the event when complete.
This basically turns the asynchronous "StartAsyncMethod" into a synchronous one. Many times I find a real-life example more useful.
The main difference between AutoResetEvent and ManualResetEvent is that AutoResetEvent doesn't require you to call Reset(): it automatically returns the state to non-signaled after releasing a waiting thread. The next call to WaitOne() blocks whenever the state is non-signaled, i.e. after an automatic reset or an explicit Reset().
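A minimal sketch of that difference (standalone, not taken from the question's code):

var auto = new AutoResetEvent(false);
var manual = new ManualResetEvent(false);

auto.Set();
auto.WaitOne();      // passes, and the event automatically goes back to non-signaled
// auto.WaitOne();   // would block here until the next Set()

manual.Set();
manual.WaitOne();    // passes
manual.WaitOne();    // passes again - it stays signaled until Reset() is called
manual.Reset();      // now WaitOne() would block again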
You just need to make it loop like the other Thread does. Is this what you are looking for?
class Program
{
    static AutoResetEvent _ready = new AutoResetEvent(false);
    static AutoResetEvent _go = new AutoResetEvent(false);
    static Object _locker = new Object();
    static string _message = "Start";
    static AutoResetEvent _exitClient = new AutoResetEvent(false);
    static AutoResetEvent _exitWork = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(Work).Start();
        new Thread(Client).Start();
        Thread.Sleep(3000); // Run for 3 seconds then finish up
        _exitClient.Set();
        _exitWork.Set();
        _ready.Set(); // Make sure we're not still blocking
        _go.Set();
    }

    static void Client()
    {
        List<string> messages = new List<string>() { "ooo", "ahhh", null };
        int i = 0;
        while (!_exitClient.WaitOne(0)) // Gracefully exit if triggered
        {
            _ready.WaitOne();  // First wait until worker is ready
            lock (_locker) _message = messages[i++];
            _go.Set();         // Tell worker to go
            if (i == 3) { i = 0; }
        }
    }

    static void Work()
    {
        while (!_exitWork.WaitOne(0)) // Gracefully exit if triggered
        {
            _ready.Set();   // Indicate that we're ready
            _go.WaitOne();  // Wait to be kicked off...
            lock (_locker)
            {
                if (_message != null)
                {
                    Console.WriteLine(_message);
                }
            }
        }
    }
}

Monitor vs lock

When is it appropriate to use either the Monitor class or the lock keyword for thread safety in C#?
EDIT:
It seems from the answers so far that lock is shorthand for a series of calls to the Monitor class. What exactly is the lock call shorthand for? Or, more explicitly:
class LockVsMonitor
{
    private readonly object LockObject = new object();

    public void DoThreadSafeSomethingWithLock(Action action)
    {
        lock (LockObject)
        {
            action.Invoke();
        }
    }

    public void DoThreadSafeSomethingWithMonitor(Action action)
    {
        // What goes here ?
    }
}
Update
Thank you all for your help: I have posted another question as a follow-up to some of the information you all provided. Since you seem to be well versed in this area, here is the link: What is wrong with this solution to locking and managing locked exceptions?
Eric Lippert talks about this in his blog:
Locks and exceptions do not mix
The equivalent code differs between C# 4.0 and earlier versions.
In C# 4.0 it is:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    { body }
}
finally
{
    if (lockWasTaken) Monitor.Exit(temp);
}
It relies on Monitor.Enter atomically setting the flag when the lock is taken.
And earlier it was:
var temp = obj;
Monitor.Enter(temp);
try
{
    body
}
finally
{
    Monitor.Exit(temp);
}
This relies on no exception being thrown between Monitor.Enter and the try. I think in debug builds this condition was violated because the compiler inserted a NOP between them, which made a thread abort between those two points possible.
lock is just a shortcut for Monitor.Enter with try/finally and Monitor.Exit. Use the lock statement whenever it is enough; if you need something like TryEnter, you will have to use Monitor.
A lock statement is equivalent to:
Monitor.Enter(object);
try
{
    // Your code here...
}
finally
{
    Monitor.Exit(object);
}
However, keep in mind that Monitor can also Wait() and Pulse(), which are often useful in complex multithreading situations.
Update
However, in C# 4 it's implemented differently:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    // your code
}
finally
{
    if (lockWasTaken)
        Monitor.Exit(temp);
}
Thanks to CodeInChaos for the comments and links.
Monitor is more flexible. My favorite use case for Monitor is when you don't want to wait for your turn and just want to skip:
// already executing? forget it, lets move on
if (Monitor.TryEnter(_lockObject))
{
    try
    {
        // do stuff;
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
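The same pattern also works if you're willing to wait a bounded amount of time rather than skipping immediately. A sketch reusing the same _lockObject field (the timeout value is arbitrary):

if (Monitor.TryEnter(_lockObject, TimeSpan.FromMilliseconds(500)))
{
    try
    {
        // do stuff;
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
else
{
    // still busy after 500 ms - skip or log
}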
As others have said, lock is "equivalent" to
Monitor.Enter(object);
try
{
    // Your code here...
}
finally
{
    Monitor.Exit(object);
}
But just out of curiosity, lock will preserve the first reference you pass to it and will not throw if you change it. I know it's not recommended to change the locked object and you don't want to do it.
But again, for the science, this works fine:
var lockObject = "";
var tasks = new List<Task>();
for (var i = 0; i < 10; i++)
    tasks.Add(Task.Run(() =>
    {
        Thread.Sleep(250);
        lock (lockObject)
        {
            lockObject += "x";
        }
    }));
Task.WaitAll(tasks.ToArray());
...And this does not:
var lockObject = "";
var tasks = new List<Task>();
for (var i = 0; i < 10; i++)
    tasks.Add(Task.Run(() =>
    {
        Thread.Sleep(250);
        Monitor.Enter(lockObject);
        try
        {
            lockObject += "x";
        }
        finally
        {
            Monitor.Exit(lockObject);
        }
    }));
Task.WaitAll(tasks.ToArray());
Error:
An exception of type 'System.Threading.SynchronizationLockException' occurred in 70783sTUDIES.exe but was not handled in user code
Additional information: Object synchronization method was called from an unsynchronized block of code.
This is because Monitor.Exit(lockObject); acts on the new string that lockObject now references (strings are immutable, so += replaces the reference), an object this thread never locked - so you end up calling it from an unsynchronized block of code. Anyway, this is just a fun fact.
Both are essentially the same thing: lock is a C# keyword that uses the Monitor class under the hood.
http://msdn.microsoft.com/en-us/library/ms173179(v=vs.80).aspx
The lock statement and the basic behavior of Monitor (Enter + Exit) are more or less the same, but Monitor has more options that give you more synchronization possibilities.
The lock statement is a shortcut, and it's the option for basic usage.
If you need more control, Monitor is the better option. You can use Wait, TryEnter and Pulse for advanced usages (like barriers, semaphores and so on).
Lock
The lock keyword ensures that only one thread executes a piece of code at a time.
lock (lockObject)
{
    // Body
}
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock.
If another thread tries to enter the locked code, it will wait (block) until the object is released.
Monitor
The Monitor is a static class and belongs to the System.Threading namespace.
It provides exclusive lock on the object so that only one thread can enter into the critical section at any given point of time.
Difference between Monitor and lock in C#
The lock is the shortcut for Monitor.Enter with try and finally.
lock handles the try and finally blocks internally.
Lock = Monitor + try finally.
If you want more control to implement advanced multithreading solutions using the TryEnter(), Wait(), Pulse(), and PulseAll() methods, then the Monitor class is your option.
Monitor.Wait(): a thread waits until another thread notifies it.
Monitor.Pulse(): a thread notifies one waiting thread.
Monitor.PulseAll(): a thread notifies all threads waiting on that object.
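For example, a minimal Wait/Pulse sketch (the queue and lock object below are illustrative, not from the question):

static readonly object gate = new object();
static readonly Queue<string> items = new Queue<string>();

static void Produce(string item)
{
    lock (gate)
    {
        items.Enqueue(item);
        Monitor.Pulse(gate);        // wake one thread waiting on gate
    }
}

static string Consume()
{
    lock (gate)
    {
        while (items.Count == 0)
            Monitor.Wait(gate);     // releases the lock and blocks until pulsed
        return items.Dequeue();
    }
}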
In addition to all above explanations, lock is a C# statement whereas Monitor is a class of .NET located in System.Threading namespace.

C# Multithreading

Okay. I want to have two threads running. Current code:
public void foo()
{
    lock (this)
    {
        while (stopThreads == false)
        {
            foreach (var acc in myList)
            {
                // process some stuff
            }
        }
    }
}

public void bar()
{
    lock (this)
    {
        while (stopThreads == false)
        {
            foreach (var acc in myList)
            {
                // process some stuff
            }
        }
    }
}
Both are accessing the same List; the problem is that the first thread, foo, is not releasing the lock, I guess, because bar only starts when foo is done. Thanks
Yes, that's how lock is designed to work.
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock.
Mutual-exclusion means that there can be at most one thread that holds the lock at any time.
Locking on this is a bad idea and is discouraged. You should create a private object and lock on that instead. To solve your problem you could lock on two different objects.
private object lockObject1 = new object();
private object lockObject2 = new object();

public void foo()
{
    lock (lockObject1)
    {
        // ...
    }
}

public void bar()
{
    lock (lockObject2)
    {
        // ...
    }
}
Alternatively you could reuse the same lock but move it inside the loop so that each loop has a chance to proceed:
while (stopThreads == false)
{
    foreach (var acc in myList)
    {
        lock (lockObject)
        {
            // process some stuff
        }
    }
}
However I would suggest that you spend some time to understand what is going on rather than reordering the lines of code until it appears to work on your machine. Writing correct multithreaded code is difficult.
For stopping a thread I would recommend this article:
Shutting Down Worker Threads Gracefully
Since you are not really asking a question, I suggest you should read a tutorial on how threading works. A .Net specific guide can be found here. It features the topics "Getting Started", "Basic Synchronization", "Using Threads", "Advanced Threading" and "Parallel Programming".
Also, you are locking on this. MSDN says:
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem because any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
The problem you have is that you work with a very coarse lock. Foo and Bar basically cannot run concurrently, because whichever starts first stops the other for the complete work cycle.
It should, though, only lock while it takes items out of the list. foreach does not work here, by definition. You have to set up a second list and have each thread remove the top item (while locking), then work on it, as in the sketch below.
Basically:
foreach does not work, as both threads would run through the complete list.
Second, locks must be granular, in that they only lock while needed.
In your case, the lock in foo will only be released when foo is finished.
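A rough sketch of that idea, assuming an Account placeholder type for whatever myList contains:

private readonly object queueLock = new object();
private Queue<Account> workQueue;   // filled from myList before the threads start

private void Worker()               // run this same method on both threads
{
    while (!stopThreads)
    {
        Account acc;
        lock (queueLock)
        {
            if (workQueue.Count == 0) break;
            acc = workQueue.Dequeue();   // hold the lock only while taking the next item
        }
        // process acc outside the lock
    }
}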

Threading only block the first thread

I have a scenario where a command comes in over a socket that requires a fair amount of work. Only one thread can process the data at a time. The commands will come in faster than I can process them. Over time there will be quite a backlog.
The good part is that I can discard waiting threads and really only have to process the last one that is waiting (or process the first one in and discard all the others). I was thinking about using a semaphore to control the critical section of code and using a boolean to see if there are any threads blocking. If there is a blocking thread I would just discard it.
My mind is drawing a blank on how to implement it nicely. I would like to implement it without using an integer or boolean to check whether there is a thread waiting already.
I am coding this in C#.
You can use Monitor.TryEnter to see whether a lock is already taken on an object:
void ProcessConnection(TcpClient client)
{
    bool lockTaken = false;
    Monitor.TryEnter(lockObject, ref lockTaken);
    if (!lockTaken)
    {
        client.Close();
        return;
    }
    try
    {
        // long-running process here
    }
    finally
    {
        Monitor.Exit(lockObject);
        client.Close();
    }
}
Note that for this to work you'll still have to invoke the method in a thread, for example:
client = listener.AcceptTcpClient();
ThreadPool.QueueUserWorkItem(notused => ProcessConnection(client));
FYI, the lock statement is just sugar for:
Monitor.Enter(lockObject);
try
{
    // code within lock { }
}
finally
{
    Monitor.Exit(lockObject);
}
I believe you are looking for the lock statement.
private readonly object _lock = new object();

private void ProccessCommand(Command command)
{
    lock (_lock)
    {
        // ...
    }
}
It sounds like you just need to use the lock statement. Code inside a lock statement will allow only one thread to work inside the code block at once.
More info: lock Statement
From the sounds of what you've posted here, you might be able to avoid so many waiting threads. You could queue up the next command to execute, and rather than keep the threads waiting, just replace the command to execute next after the current command finishes. Lock when replacing and removing the "waiting" command.
Something like this:
class CommandHandler
{
Action nextCommand;
ManualResetEvent manualResetEvent = new ManualResetEvent(false);
object lockObject = new object();
public CommandHandler()
{
new Thread(ProcessCommands).Start();
}
public void AddCommand(Action nextCommandToProcess)
{
lock (lockObject)
{
nextCommand = nextCommandToProcess;
}
manualResetEvent.Set();
}
private void ProcessCommands()
{
while (true)
{
Action action = null;
lock (lockObject)
{
action = nextCommand;
nextCommand = null;
}
if (action != null)
{
action();
}
lock (lockObject)
{
if(nextCommand != null)
continue;
manualResetEvent.Reset();
}
manualResetEvent.WaitOne();
}
}
}
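For illustration, a hypothetical caller would just keep posting commands; only the most recent one still waiting gets run once the current command finishes:

var handler = new CommandHandler();
for (int i = 0; i < 100; i++)
{
    int n = i;   // capture a copy for the closure
    handler.AddCommand(() => Console.WriteLine("processed command " + n));
}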
check out: ManualResetEvent
It's a useful threading class.
