My code seems to be allowing more than one thread into a specific method "protected" by a mutex.
private static Mutex mut = new Mutex();

public DadoMySql PegaPrimeiroFila(int identificacao)
{
    DadoMySql dadoMySql = null;
    mut.WaitOne();
    dadoMySql = PegaPrimeiroFila_Processa();
    mut.ReleaseMutex();
    return dadoMySql;
}
I have 10 threads, and 2 random ones of them keep getting the same "dadoMySql" every time.
If I add logs inside the mutex wait, everything works fine. Maybe the extra time it takes to write the log makes it work? :/
Mutex is overkill here, unless you are synchronizing across multiple processes.
A simple lock should work since you want mutual exclusion:
private static readonly object lockObject = new object();

public DadoMySql PegaPrimeiroFila(int identificacao)
{
    DadoMySql dadoMySql = null;
    lock (lockObject)
    {
        dadoMySql = PegaPrimeiroFila_Processa();
    }
    return dadoMySql;
}
Using the lock keyword also gives you a stronger guarantee that Monitor.Exit gets called, even when an exception is thrown inside the lock scope.
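For reference, the lock statement is syntactic sugar over Monitor. A minimal sketch of roughly what the compiler emits (C# 4 and later use the Monitor.Enter overload with a ref bool):

```csharp
using System.Threading;

object lockObject = new object();

// Roughly what `lock (lockObject) { ... }` expands to:
bool lockTaken = false;
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    // ... protected code ...
}
finally
{
    // Runs even if the protected code throws, so the monitor is reliably released.
    if (lockTaken) Monitor.Exit(lockObject);
}
```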
I need to have a piece of code which is allowed to execute by only 1 thread at a time, based on a parameter key:
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    var semaphore = Semaphores.GetOrAdd(valueKey, s => new SemaphoreSlim(1, 1));
    try
    {
        await semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        semaphore.Release(); // Exception here - System.ObjectDisposedException
        if (semaphore.CurrentCount > 0 && Semaphores.TryRemove(valueKey, out semaphore))
        {
            semaphore?.Dispose();
        }
    }
}
From time to time I get this error:
The semaphore has been disposed. : System.ObjectDisposedException: The semaphore has been disposed.
at System.Threading.SemaphoreSlim.CheckDispose()
at System.Threading.SemaphoreSlim.Release(Int32 releaseCount)
at Project.GetValueWithBlockAsync[TModel](String valueKey, Func`1 valueAction)
All the cases I can imagine here look thread-safe to me. Please help, what case have I missed?
You have a thread race here, where another task is trying to acquire the same semaphore, and acquires it when you Release - i.e. another thread is awaiting the semaphore.WaitAsync(). The check against CurrentCount is a race condition, and it could go either way depending on timing. The check for TryRemove is irrelevant, as the competing thread already got the semaphore out - it was, after all, awaiting the WaitAsync().
As discussed in the comments, you have a couple of race conditions here.
Thread 1 holds the lock and Thread 2 is waiting on WaitAsync(). Thread 1 releases the lock, and then checks semaphore.CurrentCount, before Thread 2 is able to acquire it.
Thread 1 holds the lock, releases it, and checks semaphore.CurrentCount, which passes. Thread 2 enters GetValueWithBlockAsync, calls Semaphores.GetOrAdd and fetches the semaphore. Thread 1 then calls Semaphores.TryRemove and disposes the semaphore.
You really need locking around the decision to remove an entry from Semaphores -- there's no way around this. You also don't have a way of tracking whether any threads have fetched a semaphore from Semaphores (and are either currently waiting on it, or haven't yet got to that point).
One way is to do something like this: have a lock which is shared between everyone, but which is only needed when fetching/creating a semaphore, and when deciding whether to dispose it. We manually keep track of how many threads currently have an interest in a particular semaphore. When a thread has released the semaphore, it acquires the shared lock to check whether anyone else currently has an interest in that semaphore, and disposes it only if no one else does.
private static readonly object semaphoresLock = new();
private static readonly Dictionary<string, State> semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    State state;
    lock (semaphoresLock)
    {
        if (!semaphores.TryGetValue(valueKey, out state))
        {
            state = new();
            semaphores[valueKey] = state;
        }
        state.Count++;
    }
    try
    {
        await state.Semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        state.Semaphore.Release();
        lock (semaphoresLock)
        {
            state.Count--;
            if (state.Count == 0)
            {
                semaphores.Remove(valueKey);
                state.Semaphore.Dispose();
            }
        }
    }
}

private class State
{
    public int Count { get; set; }
    public SemaphoreSlim Semaphore { get; } = new(1, 1);
}
The other option, of course, is to let Semaphores grow. Maybe you have a periodic operation to go through and clear out anything which isn't being used, but this will of course need to be protected to ensure that a thread doesn't suddenly become interested in a semaphore which is being cleared up.
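A minimal sketch of that "let it grow" option, reusing the names from the question: each per-key semaphore is created once and never removed, so Release() can no longer race with Dispose(). (The periodic cleanup job is omitted here.)

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

static class KeyedLock
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();

    public static async Task<TModel> GetValueWithBlockAsync<TModel>(
        string valueKey, Func<Task<TModel>> valueAction)
    {
        // Created once per key and never removed, so Release() below
        // can never hit a disposed semaphore.
        var semaphore = Semaphores.GetOrAdd(valueKey, _ => new SemaphoreSlim(1, 1));
        await semaphore.WaitAsync();
        try
        {
            return await valueAction();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
```

The trade-off is unbounded growth of the dictionary if the key space is unbounded, which is exactly why a protected cleanup pass would eventually be needed.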
I'm trying to understand thread-safe access to fields. To that end, I implemented a test sample:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        bool temp;
        new Thread(() => { test.Loop = false; }).Start();
        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    public bool Loop = true;
}
As expected, sometimes it doesn't terminate. I know this issue can be solved either with the volatile keyword or with lock. Assume I'm not the author of class Foo, so I can't make the field volatile. I tried using lock:
public static void Main()
{
    Foo test = new Foo();
    object locker = new Object();
    bool temp;
    new Thread(() => { test.Loop = false; }).Start();
    do
    {
        lock (locker)
        {
            temp = test.Loop;
        }
    }
    while (temp == true);
}
This seems to solve the issue. Just to be sure, I moved the loop inside the lock block:
lock (locker)
{
    do
    {
        temp = test.Loop;
    }
    while (temp == true);
}
and... the program does not terminate anymore.
This totally confuses me. Doesn't lock provide thread-safe access? If not, how do I access non-volatile fields safely? I could use Thread.VolatileRead(), but it is not suitable for every case, such as non-primitive types or properties. I assumed that Monitor.Enter does the job; am I right? I don't understand how it could work.
This piece of code:
do
{
    lock (locker)
    {
        temp = test.Loop;
    }
}
while (temp == true);
works because of a side-effect of lock: it causes a 'memory-fence'. The actual locking is irrelevant here. Equivalent code:
do
{
    Thread.MemoryBarrier();
    temp = test.Loop;
}
while (temp == true);
And the issue you're trying to solve here is not exactly thread-safety, it is about caching of the variable (stale data).
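If you cannot change Foo, another option for the stale-data problem is Volatile.Read, which takes a ref to the field and therefore works even though the field itself is not declared volatile. A sketch against the question's Foo:

```csharp
using System.Threading;

class Foo
{
    public bool Loop = true;
}

class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        new Thread(() => { test.Loop = false; }).Start();
        bool temp;
        do
        {
            // Volatile.Read takes a ref to the public field, so Foo
            // does not need to change.
            temp = Volatile.Read(ref test.Loop);
        }
        while (temp);
    }
}
```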
It does not terminate anymore because you are accessing the variable outside of the lock as well.
In
new Thread(() => { test.Loop = false; }).Start();
you write to the variable outside the lock. This write is not guaranteed to be visible.
Two concurrent accesses to the same location of which at least one is a write is a data race. Don't do that.
lock provides thread safety for 2 or more code blocks on different threads that use the same lock.
Your Loop assignment inside the new thread delegate is not enclosed in a lock.
That means there is no thread safety there.
In general, no, lock is not something that will magically make all code inside it thread-safe.
The simple rule is: If you have some data that's shared by multiple threads, but you always access it only inside a lock (using the same lock object), then that access is thread-safe.
Once you leave that “simple” code and start asking questions like “How could I use volatile/VolatileRead() safely here?” or “Why does this code that doesn't use lock properly seem to work?”, things get complicated quickly. You should probably avoid that, unless you're prepared to spend a lot of time learning about the C# memory model. And even then, bugs that manifest only once in a million runs, or only on certain CPUs (ARM), are very easy to make.
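Applied to the question's example, the simple rule means the writing thread and the reading loop must go through the same lock object; a minimal sketch:

```csharp
using System.Threading;

class Foo
{
    public bool Loop = true;
}

class Program
{
    static readonly object locker = new object();

    public static void Main()
    {
        Foo test = new Foo();
        // The write is inside the lock...
        new Thread(() => { lock (locker) { test.Loop = false; } }).Start();
        bool temp;
        do
        {
            // ...and so is every read, using the same lock object.
            lock (locker) { temp = test.Loop; }
        }
        while (temp);
    }
}
```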
Locking only works when all access to the field is controlled by a lock. In your example only the reading is locked, but since the writing is not, there is no thread-safety.
However, it is also crucial that the locking takes place on a shared object; otherwise there is no way for another thread to know that someone is trying to access the field. So in your case, where the lock object is scoped inside the Main method, code on another thread that doesn't share that object has no way to take the same lock.
If you have no way to change Foo, the only way to obtain thread-safety is to have ALL calls actually lock on the same Foo instance. This would generally not be recommended though, since all methods on the object would be locked.
The volatile keyword is not a guarantee of thread-safety in itself. It indicates that the value of a field can be changed from different threads, so any thread reading that field should not cache it, since the value could change.
To achieve thread-safety, Foo should probably look something along these lines:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        test.Run();
        new Thread(() => { test.Loop = false; }).Start();
        bool temp;
        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    private volatile bool _loop = true;
    private object _syncRoot = new object();

    public bool Loop
    {
        // All access to the Loop value is controlled by a lock on an instance-scoped
        // object, i.e. when one thread accesses the value, all other threads are blocked.
        get { lock (_syncRoot) return _loop; }
        set { lock (_syncRoot) _loop = value; }
    }

    public void Run()
    {
        Task.Run(() =>
        {
            while (_loop) // _loop is volatile, so the value is not cached
            {
                // Do something
            }
        });
    }
}
A simple exercise in threading here. Say I have a static lock, a web request, and a thread-pool work item. Will the following cause a problem (ignoring the quality of the code itself)?
static object locker = new object();
static MyObject obj = new MyObject();

public static void Update()
{
    lock (locker)
    {
        obj.Foo = "biz";
        DoStuff();
    }
}

public static void DoStuff()
{
    ThreadPool.QueueUserWorkItem(args =>
    {
        lock (locker)
        {
            obj.Foo = "bar";
        }
    });
}
The example is contrived, but the concept holds :).
This will not cause a problem. If this is called a single time, the queued work item will not be able to acquire the lock until Update()'s code has exited the lock. However, ThreadPool.QueueUserWorkItem is an asynchronous call, so Update() returns and releases the lock, which in turn allows the queued work in DoStuff() to proceed.
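To see why the asynchronous hand-off matters, here is a hypothetical variant that waits for the queued work item while still holding the lock. The work item runs on a different thread, so it blocks on lock (locker), while the outer thread blocks on the wait: a deadlock (the timed wait below returns false):

```csharp
using System;
using System.Threading;

static class DeadlockDemo
{
    static readonly object locker = new object();

    public static bool TryUpdate()
    {
        lock (locker)
        {
            var done = new ManualResetEvent(false);
            ThreadPool.QueueUserWorkItem(_ =>
            {
                // Runs on another thread, so this blocks until the outer
                // thread releases `locker` -- which it won't, because it is
                // waiting on `done` below.
                lock (locker) { /* obj.Foo = "bar"; */ }
                done.Set();
            });
            // Waiting inside the lock creates the deadlock; time out to escape.
            return done.WaitOne(TimeSpan.FromSeconds(1)); // false: deadlocked
        }
    }
}
```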
It shouldn't. The only gotcha specific to thread-pool threads is that the thread pool grows relatively slowly, so if you block a lot waiting for locks you can cause performance issues.
I noticed the following code from our foreign programmers:
private Client[] clients = new Client[0];

public void CreateClients(int count)
{
    lock (clients)
    {
        clients = new Client[count];
        for (int i = 0; i < count; i++)
        {
            clients[i] = new Client(); // Stripped
        }
    }
}
It's not exactly proper code but I was wondering what exactly this will do. Will this lock on a new object each time this method is called?
To answer your question of "I was wondering what exactly this will do", consider what happens if two threads try to do this.
Thread 1: locks on the clients reference, which is `new Client[0]`.
Thread 1 has entered the critical block.
Thread 1: makes a new array and assigns it to the clients reference.
Thread 2: locks on the clients reference, which is the array just made by thread 1.
Thread 2 has entered the critical block.
You now have two threads in the critical block at the same time. That's bad.
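The scenario can be made concrete: after the reassignment, the field points at a fresh object, so a second thread locks a different monitor than the one currently held and enters immediately. A small hypothetical demo:

```csharp
using System;
using System.Threading;

static class Demo
{
    static object clients = new object();

    public static bool SecondCallerCanEnter()
    {
        bool entered = false;
        lock (clients)
        {
            // Reassign the lock target, like `clients = new Client[count]`.
            clients = new object();
            // A second thread now locks on the *new* object and is not blocked,
            // even though this thread is still inside its "critical" section.
            var t = new Thread(() =>
            {
                if (Monitor.TryEnter(clients, TimeSpan.FromSeconds(1)))
                {
                    entered = true; // both critical sections active at once
                    Monitor.Exit(clients);
                }
            });
            t.Start();
            t.Join();
        }
        return entered;
    }
}
```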
This lock really does nothing. It locks an instance of an object which is immediately changed such that other threads entering this method will lock on a different object. The result is 2 threads executing in the middle of the lock which is probably not what was intended.
A much better approach here is to use a different, non-changing object to lock on
private readonly object clientsLock = new object();
private Client[] clients = new Client[0];

public void CreateClients(int count)
{
    lock (clientsLock)
    {
        clients = new Client[count];
        ...
    }
}
This code is wrong - it will lock on a different instance every time it's called.
It should look like that:
private static readonly object clientsLock = new object();
private static Client[] clients = null;

public void CreateClients(int count)
{
    if (clients == null)
    {
        lock (clientsLock)
        {
            if (clients == null)
            {
                clients = new Client[count];
                for (int i = 0; i < count; i++)
                {
                    clients[i] = new Client(); // Stripped
                }
            }
        }
    }
}
There's no point in locking every time the method is called - hence the surrounding if clause (the double-checked locking pattern).
Use:
private readonly object lockObj = new object();

lock (lockObj)
{
    // your code
}
I think you're correct to doubt this code!
This code will lock on the previous instance each time - this might be the desired effect, but I doubt it. It won't stop multiple threads from creating multiple arrays.
I'm working with Parallel.For, which creates multiple threads to perform the same operation.
If one of the threads fails, it means I'm working "too fast" and all the threads need to rest for a few seconds.
Is there a way to perform something like Thread.Sleep - only to do the same on all threads at once?
This is a direct answer to the question, except for the Parallel.For bit.
It really is a horrible pattern; you should probably be using a proper synchronization mechanism, and get the worker threads to, without preemption, occasionally check if they need to 'back off.'
In addition, this uses Thread.Suspend and Thread.Resume which are both deprecated, and with good reason (from Thread.Suspend):
"Do not use the Suspend and Resume methods to synchronize the activities of threads. You have no way of knowing what code a thread is executing when you suspend it. If you suspend a thread while it holds locks during a security permission evaluation, other threads in the AppDomain might be blocked. If you suspend a thread while it is executing a class constructor, other threads in the AppDomain that attempt to use that class are blocked. Deadlocks can occur very easily."
(Untested)
public class Worker
{
    private readonly Thread[] _threads;
    private readonly object _locker = new object();
    private readonly TimeSpan _tooFastSuspensionSpan;
    private DateTime _lastSuspensionTime;

    public Worker(int numThreads, TimeSpan tooFastSuspensionSpan)
    {
        _tooFastSuspensionSpan = tooFastSuspensionSpan;
        _threads = Enumerable.Repeat(new ThreadStart(DoWork), numThreads)
                             .Select(ts => new Thread(ts))
                             .ToArray();
    }

    public void Run()
    {
        foreach (var thread in _threads)
        {
            thread.Start();
        }
    }

    private void DoWork()
    {
        while (!IsWorkComplete())
        {
            try
            {
                // Do work here
            }
            catch (TooFastException)
            {
                SuspendAll();
            }
        }
    }

    private void SuspendAll()
    {
        lock (_locker)
        {
            // We don't want N near-simultaneous failures causing a sleep-duration of N * _tooFastSuspensionSpan.
            // 1 second is arbitrary. We can't be deterministic about it since we are forcefully suspending threads.
            var now = DateTime.Now;
            if (now.Subtract(_lastSuspensionTime) < _tooFastSuspensionSpan + TimeSpan.FromSeconds(1))
                return;

            _lastSuspensionTime = now;

            var otherThreads = _threads.Where(t => t.ManagedThreadId != Thread.CurrentThread.ManagedThreadId).ToArray();

            foreach (var otherThread in otherThreads)
                otherThread.Suspend();

            Thread.Sleep(_tooFastSuspensionSpan);

            foreach (var otherThread in otherThreads)
                otherThread.Resume();
        }
    }
}
You need to create an inventory of your worker threads, and then perhaps you can use the Thread.Suspend and Resume methods. Mind you, using Suspend can be dangerous (for example, a thread may have acquired a lock before being suspended). Suspend/Resume have been marked obsolete due to such issues.
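A cooperative sketch of the "check if they need to back off" idea from the first answer, using a shared gate instead of Suspend/Resume (names are hypothetical): workers pause at a well-defined point between units of work, so no thread is ever stopped while holding a lock.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThrottledWorkers
{
    // Gate starts open: workers may proceed.
    private readonly ManualResetEventSlim _gate = new ManualResetEventSlim(true);

    public bool IsBackingOff => !_gate.IsSet;

    public void BackOff(TimeSpan duration)
    {
        _gate.Reset(); // close the gate: workers pause at their next check
        Task.Delay(duration).ContinueWith(_ => _gate.Set()); // reopen later
    }

    // Each worker calls this in its loop for every unit of work.
    public void DoOneUnit(Action work)
    {
        _gate.Wait(); // blocks only while backing off; nothing is preempted
        try
        {
            work();
        }
        catch (Exception) // e.g. the "working too fast" failure
        {
            BackOff(TimeSpan.FromSeconds(5));
        }
    }
}
```

Because the pause happens only at the _gate.Wait() checkpoint, none of the Suspend-related hazards (suspending a thread mid-lock or mid-class-constructor) can occur.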