A simple exercise in threading here. Say I have a static lock, a web request, and a thread pool work item. Will the following cause a problem (ignoring the quality of the code itself)?
static object locker = new object();
static MyObject obj = new MyObject();
public static void Update(){
lock(locker){
obj.Foo = "biz";
DoStuff();
}
}
public static void DoStuff(){
ThreadPool.QueueUserWorkItem(args => {
lock(locker){
obj.Foo = "bar";
}
});
}
The example is contrived, but the concept holds :).
This will not cause a problem. If this is called a single time, the work item queued by DoStuff() will not be able to acquire the lock until Update()'s code has exited the lock. However, ThreadPool.QueueUserWorkItem is an asynchronous call, so Update() is free to release the lock, which in turn allows the queued work item to proceed.
It shouldn't. The only gotcha specific to thread pool threads is that the thread pool grows relatively slowly, so if you block a lot of threads waiting for locks you can cause performance issues.
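To see the ordering concretely, here is a minimal, self-contained sketch (names hypothetical) built on the same shape as the question's code; the queued work item only enters the lock after Update() has left it:
using System;
using System.Threading;

static class Demo
{
    static readonly object locker = new object();

    static void Update()
    {
        lock (locker)
        {
            Console.WriteLine("Update: inside lock");
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("Work item: waiting for lock");
                lock (locker)
                {
                    // Printed only after Update() has exited its lock
                    Console.WriteLine("Work item: inside lock");
                }
            });
            Thread.Sleep(1000); // hold the lock for a while to make the ordering obvious
        }
        Console.WriteLine("Update: lock released");
    }

    static void Main()
    {
        Update();
        Thread.Sleep(500); // give the work item time to run before the process exits
    }
}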
Related
I need a piece of code that is allowed to execute by only one thread at a time, based on a parameter key:
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();
private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
var semaphore = Semaphores.GetOrAdd(valueKey, s => new SemaphoreSlim(1, 1));
try
{
await semaphore.WaitAsync();
return await valueAction();
}
finally
{
semaphore.Release(); // Exception here - System.ObjectDisposedException
if (semaphore.CurrentCount > 0 && Semaphores.TryRemove(valueKey, out semaphore))
{
semaphore?.Dispose();
}
}
}
From time to time I get this error:
The semaphore has been disposed. : System.ObjectDisposedException: The semaphore has been disposed.
at System.Threading.SemaphoreSlim.CheckDispose()
at System.Threading.SemaphoreSlim.Release(Int32 releaseCount)
at Project.GetValueWithBlockAsync[TModel](String valueKey, Func`1 valueAction)
Every case I can imagine here looks thread safe. Please help: what case did I miss?
You have a thread race here: another task is trying to acquire the same semaphore and acquires it the moment you Release, i.e. another thread is already awaiting semaphore.WaitAsync(). The check against CurrentCount is a race condition and could go either way depending on timing. The TryRemove check doesn't help either, as the competing thread has already fetched the semaphore out of the dictionary; it was, after all, awaiting WaitAsync().
As discussed in the comments, you have a couple of race conditions here.
Thread 1 holds the lock and Thread 2 is waiting on WaitAsync(). Thread 1 releases the lock, and then checks semaphore.CurrentCount, before Thread 2 is able to acquire it.
Thread 1 holds the lock, releases it, and checks semaphore.CurrentCount, which passes. Thread 2 enters GetValueWithBlockAsync, calls Semaphores.GetOrAdd and fetches the semaphore. Thread 1 then calls Semaphores.TryRemove and disposes the semaphore.
You really need locking around the decision to remove an entry from Semaphores -- there's no way around this. You also don't have a way of tracking whether any threads have fetched a semaphore from Semaphores (and are either currently waiting on it, or haven't yet got to that point).
One way is to do something like this: have a lock which is shared between everyone, but which is only needed when fetching/creating a semaphore and when deciding whether to dispose it. We manually keep track of how many threads currently have an interest in a particular semaphore. When a thread has released the semaphore, it acquires the shared lock to check whether anyone else currently has an interest in that semaphore, and disposes it only if no one does.
private static readonly object semaphoresLock = new();
private static readonly Dictionary<string, State> semaphores = new();
private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
State state;
lock (semaphoresLock)
{
if (!semaphores.TryGetValue(valueKey, out state))
{
state = new();
semaphores[valueKey] = state;
}
state.Count++;
}
try
{
await state.Semaphore.WaitAsync();
return await valueAction();
}
finally
{
state.Semaphore.Release();
lock (semaphoresLock)
{
state.Count--;
if (state.Count == 0)
{
semaphores.Remove(valueKey);
state.Semaphore.Dispose();
}
}
}
}
private class State
{
public int Count { get; set; }
public SemaphoreSlim Semaphore { get; } = new(1, 1);
}
The other option, of course, is to let Semaphores grow. Maybe you have a periodic operation to go through and clear out anything which isn't being used, but this will of course need to be protected to ensure that a thread doesn't suddenly become interested in a semaphore which is being cleared up.
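For the second option, a rough sketch of what that periodic cleanup could look like, assuming you keep the State/Count bookkeeping from above but drop the eager removal in the finally block (the timer interval here is arbitrary):
// Sketch only: requires System.Linq and System.Threading.
// Assumes the semaphoresLock and Dictionary<string, State> fields from above.
private static readonly Timer cleanupTimer =
    new Timer(_ => Cleanup(), null, TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));

private static void Cleanup()
{
    lock (semaphoresLock)
    {
        // Only entries no thread is currently interested in are safe to dispose.
        var unused = semaphores.Where(kvp => kvp.Value.Count == 0)
                               .Select(kvp => kvp.Key)
                               .ToList();
        foreach (var key in unused)
        {
            semaphores[key].Semaphore.Dispose();
            semaphores.Remove(key);
        }
    }
}
Because fetching/creating a semaphore also happens under semaphoresLock, the cleanup cannot race with a thread that is about to become interested in an entry.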
My code seems to be allowing more than one thread to get into a specific method "protected" by a mutex.
private static Mutex mut = new Mutex();
public DadoMySql PegaPrimeiroFila(int identificacao)
{
DadoMySql dadoMySql = null;
mut.WaitOne();
dadoMySql = PegaPrimeiroFila_Processa();
mut.ReleaseMutex();
return dadoMySql;
}
I have 10 threads, and I keep getting 2 random ones of them getting the same "dadoMySql" every time.
If I add logs inside the mutex wait, everything works fine. Maybe the extra time it takes to write the log makes it work? :/
Mutex is overkill here, unless you are synchronizing across multiple processes.
A simple lock should work since you want mutual exclusion:
private static readonly object lockObject = new object();
public DadoMySql PegaPrimeiroFila(int identificacao)
{
DadoMySql dadoMySql = null;
lock (lockObject)
{
dadoMySql = PegaPrimeiroFila_Processa();
}
return dadoMySql;
}
Using the lock keyword also guarantees that Monitor.Exit gets called even if an exception is thrown inside the lock scope, because the compiler wraps the body in a try/finally.
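Roughly speaking, the lock statement above expands to something like this sketch (a simplification of what the compiler emits; newer compilers use the Monitor.Enter(object, ref bool) overload shown here):
// Simplified sketch of what lock (lockObject) { ... } compiles down to.
bool lockTaken = false;
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    dadoMySql = PegaPrimeiroFila_Processa();
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(lockObject); // runs even if the body throws
    }
}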
I have a Thread (STAThread) in a Windows Service, which performs a big amount of work. When the windows service is restarted I want to stop this thread gracefully.
I know of a couple of ways
A volatile boolean
ManualResetEvent
CancellationToken
As far as I have found out, Thread.Abort is a no-go...
What is the best practice?
The work is performed in another class than the one where the thread is started, so it is necessary to either introduce a CancellationToken parameter in a constructor or, for example, have a volatile variable. But I just can't figure out which is smartest.
Update
Just to clarify a little, I have wrapped up a very simple example of what I'm talking about. As said earlier, this is being done in a Windows service. Right now I'm thinking of a volatile boolean that is checked in the loop, or a CancellationToken...
I cannot wait for the loop to finish; as stated below, it can take several minutes, making the system administrators of the server believe that something is wrong with the service when they need to restart it. I can without problems just drop all the work within the loop; however, I cannot do this with Thread.Abort, which is "evil", and furthermore a COM interface is called, so a small clean-up is needed.
class Scheduler {
private Thread apartmentThread;
private Worker worker;
void Scheduling(){
worker = new Worker();
apartmentThread = new Thread(Run);
apartmentThread.SetApartmentState(ApartmentState.STA);
apartmentThread.Start();
}
private void Run() {
while (!token.IsCancellationRequested) {
Thread.Sleep(pollInterval * MillisecondsToSeconds);
if (!token.IsCancellationRequested) {
worker.DoWork();
}
}
}
}
class Worker {
//This will take several minutes....
public void DoWork(){
for(int i = 0; i < 50000; i++){
//Do some work including communication with a COM interface
//Communication with COM interface doesn't take long
}
}
}
UPDATE
Just examined the performance: using a CancellationToken whose IsCancellationRequested state is checked in the code is much faster than using WaitOne on a ManualResetEventSlim. Some quick figures: an if on the CancellationToken iterating 100,000,000 times in a for loop costs me approx. 500 ms, whereas the WaitOne costs approx. 3 seconds. So in this scenario it is faster to use the CancellationToken.
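For reference, a rough sketch of the kind of comparison loop described above; this is an assumption about how it was measured, and absolute numbers will vary by machine:
// Rough micro-benchmark sketch (not the original measurement code).
using System;
using System.Diagnostics;
using System.Threading;

class TokenVsEventBenchmark
{
    static void Main()
    {
        const int iterations = 100000000;
        var cts = new CancellationTokenSource();
        var token = cts.Token;
        var mres = new ManualResetEventSlim(false);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (token.IsCancellationRequested) break;
        }
        Console.WriteLine($"IsCancellationRequested: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            if (mres.Wait(0)) break;   // zero-timeout wait, i.e. a poll
        }
        Console.WriteLine($"Wait(0): {sw.ElapsedMilliseconds} ms");
    }
}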
You haven't posted enough of your implementation, but I would highly recommend a CancellationToken if that is available to you. It's simple enough to use and understand from a maintainability standpoint. You can set up cooperative cancellation as well if you decide to have more than one worker thread.
If you find yourself in a situation where this thread may block for long periods of time, it's best to setup your architecture so that this doesn't occur. You shouldn't be starting threads that won't play nice when you tell them to stop. If they don't stop when you ask them, the only real way is to tear down the process and let the OS kill them.
Eric Lippert posted a fantastic answer to a somewhat-related question here.
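To make the recommendation concrete, here is a minimal sketch of cooperative cancellation applied to the Scheduler/Worker shape from the question. The assumption is that the service owns the CancellationTokenSource and calls Stop() from its OnStop handler; names beyond those in the question are hypothetical.
// Minimal sketch. Requires: using System.Threading;
// The token is created by the scheduler and handed to the worker through its
// constructor, so both loops can observe cancellation.
class Scheduler
{
    private readonly CancellationTokenSource cts = new CancellationTokenSource();
    private Thread apartmentThread;
    private Worker worker;

    public void Scheduling()
    {
        worker = new Worker(cts.Token);
        apartmentThread = new Thread(Run);
        apartmentThread.SetApartmentState(ApartmentState.STA);
        apartmentThread.Start();
    }

    public void Stop()                    // call this from the service's OnStop
    {
        cts.Cancel();                     // request cancellation...
        apartmentThread.Join();           // ...and wait for the thread to finish cleanly
    }

    private void Run()
    {
        while (!cts.Token.IsCancellationRequested)
        {
            worker.DoWork();
        }
    }
}

class Worker
{
    private readonly CancellationToken token;

    public Worker(CancellationToken token)
    {
        this.token = token;
    }

    public void DoWork()
    {
        for (int i = 0; i < 50000 && !token.IsCancellationRequested; i++)
        {
            // Work including the COM calls goes here; do any small COM
            // clean-up before returning when cancellation is requested.
        }
    }
}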
I tend to use a bool flag, a lock object and a Terminate() method, such as:
object locker = new object();
bool do_term = false;
Thread thread = new Thread(new ThreadStart(ThreadProc));
thread.Start();
void ThreadProc()
{
while (true) {
lock (locker) {
if (do_term) break;
}
... do work...
}
}
void Terminate()
{
lock (locker) {
do_term = true;
}
}
Aside from Terminate(), all the other fields and methods are private to the "worker" class.
Use a WaitHandle, most preferably a ManualResetEvent. Your best bet is to let whatever is in your loop finish. This is the safest way to accomplish your goal.
ManualResetEvent _stopSignal = new ManualResetEvent(false); // Your "stopper"
ManualResetEvent _exitedSignal = new ManualResetEvent(false);
void DoProcessing() {
try {
while (!_stopSignal.WaitOne(0)) {
DoSomething();
}
}
finally {
_exitedSignal.Set();
}
}
void DoSomething() {
//Some work goes here
}
public void Terminate() {
_stopSignal.Set();
_exitedSignal.WaitOne();
}
Then to use it:
Thread thread = new Thread(() => { thing.DoProcessing(); });
thread.Start();
//Some time later...
thing.Terminate();
If you have a particularly long-running process in your "DoSomething" implementation, you may want to call that asynchronously, and provide it with state information. That can get pretty complicated, though -- better to just wait until your process is finished, then exit, if you are able.
There are two situations in which you may find your thread:
Processing.
Blocking.
In the case where your thread is processing something, you must wait for your thread to finish processing in order for it to safely exit. If it's part of a work loop, then you can use a boolean flag to terminate the loop.
In the case where your thread is blocking, you need to wake it up and get it processing again. A thread may be blocking on a ManualResetEvent, a Sleep, a Join or some other managed wait. To wake it up, you can call the Thread.Interrupt() method, which raises a ThreadInterruptedException in the blocked thread. Note that Interrupt only affects a thread blocked in a managed wait (the WaitSleepJoin state); a thread blocked inside unmanaged code, for example a database or socket call, will not be interrupted until it next enters a managed wait.
It may look something like this:
private object sync = new object();
private bool running = false;
private void Run()
{
running = true;
while(true)
{
try
{
lock(sync)
{
if(!running)
{
break;
}
}
BlockingFunction();
}
catch(ThreadInterruptedException)
{
break;
}
}
}
public void Stop()
{
lock(sync)
{
running = false;
}
}
And here is how you can use it:
MyRunner r = new MyRunner();
Thread t = new Thread(()=>
{
r.Run();
});
t.IsBackground = true;
t.Start();
// To stop the thread
r.Stop();
// Interrupt the thread if it's in a blocking state
t.Interrupt();
// Wait for the thread to exit
t.Join();
Okay. I want to have two threads running. Current code:
public void foo()
{
lock(this)
{
while (stopThreads == false)
{
foreach (var acc in myList)
{
// process some stuff
}
}
}
}
public void bar()
{
lock(this)
{
while (stopThreads == false)
{
foreach (var acc in myList)
{
// process some stuff
}
}
}
}
Both are accessing the same list. The problem is that the first thread ("foo") is not releasing the lock, I guess, because "bar" only starts when "foo" is done. Thanks
Yes, that's how lock is designed to work.
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock.
Mutual-exclusion means that there can be at most one thread that holds the lock at any time.
Locking on this is a bad idea and is discouraged. You should create a private object and lock on that instead. To solve your problem you could lock on two different objects.
private object lockObject1 = new object();
private object lockObject2 = new object();
public void foo()
{
lock (lockObject1)
{
// ...
}
}
public void bar()
{
lock (lockObject2)
{
// ...
}
}
Alternatively, you could reuse the same lock but move it inside the loop so that each iteration has a chance to proceed:
while (stopThreads == false)
{
foreach (var acc in myList)
{
lock (lockObject)
{
// process some stuff
}
}
}
However I would suggest that you spend some time to understand what is going on rather than reordering the lines of code until it appears to work on your machine. Writing correct multithreaded code is difficult.
For stopping a thread I would recommend this article:
Shutting Down Worker Threads Gracefully
Since you are not really asking a question, I suggest you read a tutorial on how threading works. A .NET-specific guide can be found here. It covers "Getting Started", "Basic Synchronization", "Using Threads", "Advanced Threading" and "Parallel Programming".
Also, you are locking on "this". MSDN says:
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem because any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
The problem you have is that you work with a very coarse lock. foo and bar basically do not work concurrently, because whichever starts first stops the other one for the complete work cycle.
It should, though, only lock while it takes an item out of the list. foreach does not work here, by definition; you have to put up a second collection (a work queue) and have each thread remove the top item (while locking), then work on it. A sketch follows below.
Basically:
First, foreach does not work, as both threads will run through the complete list.
Second, locks must be granular in that they only lock while needed.
In your case, the lock in foo will only be released when foo is finished.
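A rough sketch of that shape, under the assumption that the items can be queued up front (the Account type and the class/field names beyond those in the question are hypothetical; a ConcurrentQueue<T> would serve equally well):
// Sketch only. Requires: using System.Collections.Generic;
// Each thread removes one item at a time under the lock and processes it
// outside the lock, so both threads can make progress.
private readonly object queueLock = new object();
private readonly Queue<Account> workQueue;

public MyWorker(IEnumerable<Account> myList)
{
    workQueue = new Queue<Account>(myList);   // copy the items into a work queue
}

public void ProcessQueue()   // run this on both threads instead of foo and bar
{
    while (!stopThreads)
    {
        Account acc;
        lock (queueLock)
        {
            if (workQueue.Count == 0) break;
            acc = workQueue.Dequeue();        // only the dequeue happens under the lock
        }
        // process acc here, outside the lock
    }
}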
I'm working with the new Parallel.For, which creates multiple threads to perform the same operation.
If one of the threads fails, it means that I'm working "too fast" and I need to put all the threads to rest for a few seconds.
Is there a way to do something like Thread.Sleep, only on all threads at once?
This is a direct answer to the question, except for the Parallel.For bit.
It really is a horrible pattern; you should probably be using a proper synchronization mechanism, and get the worker threads to, without preemption, occasionally check if they need to 'back off.'
In addition, this uses Thread.Suspend and Thread.Resume which are both deprecated, and with good reason (from Thread.Suspend):
"Do not use the Suspend and Resume methods to synchronize the activities of threads. You have no way of knowing what code a thread is executing when you suspend it. If you suspend a thread while it holds locks during a security permission evaluation, other threads in the AppDomain might be blocked. If you suspend a thread while it is executing a class constructor, other threads in the AppDomain that attempt to use that class are blocked. Deadlocks can occur very easily."
(Untested)
public class Worker
{
private readonly Thread[] _threads;
private readonly object _locker = new object();
private readonly TimeSpan _tooFastSuspensionSpan;
private DateTime _lastSuspensionTime;
public Worker(int numThreads, TimeSpan tooFastSuspensionSpan)
{
_tooFastSuspensionSpan = tooFastSuspensionSpan;
_threads = Enumerable.Repeat(new ThreadStart(DoWork), numThreads)
.Select(ts => new Thread(ts))
.ToArray();
}
public void Run()
{
foreach (var thread in _threads)
{
thread.Start();
}
}
private void DoWork()
{
while (!IsWorkComplete())
{
try
{
// Do work here
}
catch (TooFastException)
{
SuspendAll();
}
}
}
private void SuspendAll()
{
lock (_locker)
{
// We don't want N near-simultaneous failures causing a sleep-duration of N * _tooFastSuspensionSpan
// 1 second is arbitrary. We can't be deterministic about it since we are forcefully suspending threads
var now = DateTime.Now;
if (now.Subtract(_lastSuspensionTime) < _tooFastSuspensionSpan + TimeSpan.FromSeconds(1))
return;
_lastSuspensionTime = now;
var otherThreads = _threads.Where(t => t.ManagedThreadId != Thread.CurrentThread.ManagedThreadId).ToArray();
foreach (var otherThread in otherThreads)
otherThread.Suspend();
Thread.Sleep(_tooFastSuspensionSpan);
foreach (var otherThread in otherThreads)
otherThread.Resume();
}
}
}
You need to create an inventory of your worker threads, and then perhaps you can use the Thread.Suspend and Resume methods. Mind you, using Suspend can be dangerous (for example, the thread may have acquired a lock before being suspended), and Suspend/Resume have been marked obsolete due to such issues.
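A cooperative alternative, sketched only: instead of suspending threads from the outside, share a gate that every worker checks at a safe point, so the worker that hits the "too fast" condition closes the gate and everyone, including itself, rests until it is reopened. TooFastException and IsWorkComplete() are taken from the answer above; the back-off duration is arbitrary.
// Cooperative back-off sketch: workers pause themselves at the gate instead of
// being forcefully suspended. Requires: using System.Threading;
private readonly ManualResetEventSlim gate = new ManualResetEventSlim(true);
private readonly TimeSpan backOff = TimeSpan.FromSeconds(5);

private void DoWork()
{
    while (!IsWorkComplete())
    {
        gate.Wait();                 // blocks here whenever the gate is closed
        try
        {
            // Do work here
        }
        catch (TooFastException)
        {
            gate.Reset();            // close the gate: other workers pause at gate.Wait()
            Thread.Sleep(backOff);   // this worker rests too
            gate.Set();              // reopen the gate for everyone
        }
    }
}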