C# Does an empty lock block get "optimized" away?

I couldn't find any information on whether the C# compiler or JIT will remove a lock statement with no code inside. Will this always generate and execute the Monitor.Enter and Monitor.Exit calls?
lock(lockObj) { }
A (drastically simplified) version of what I'm trying to do (yes, I know calling a callback inside a lock is bad):
public class ExecutionSource
{
    private List<Action<object>> _callbacks = new List<Action<object>>();
    private object _value;

    public void AddListener(Action<object> listener)
    {
        object temp = _value;
        if (temp != null)
        {
            listener(temp);
            return;
        }
        lock (_callbacks)
        {
            temp = _value;
            if (temp != null)
            {
                listener(temp);
            }
            else
            {
                _callbacks.Add(listener);
            }
        }
    }

    public void Execute(object value)
    {
        if (value == null) throw new InvalidOperationException("value must be non-null");
        if (Interlocked.CompareExchange(ref _value, value, null) != null)
        {
            throw new InvalidOperationException("Can only execute once.");
        }

        lock (_callbacks) { } // Wait for a listener that is currently being added on another thread. No need to lock the entire loop.

        foreach (var callback in _callbacks)
        {
            callback(value);
        }
        _callbacks.Clear();
    }
}

No, they do not. The CLR cannot assume there is no other thread already locking on the object.
The object header or sync block table will still get marked with the thread ID and a non-zero recursion count on Monitor.Enter/Exit. If any other thread (or your current code) tries to lock on an object with a non-zero thread ID, it will go into a spin wait or be promoted to a kernel-based event if needed.
For what it's worth, since you don't care about reordering here, and depending on your use case, there are likely other synchronization primitives that would be a better fit, such as a reset event.
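For example, a minimal sketch of that idea, assuming a blocking-wait API is acceptable instead of callbacks (the type and member names here are illustrative, not from the question):

// A one-shot publish: Execute stores the value once and wakes every waiter;
// WaitForValue blocks until that happens. Requires System.Threading.
public class OneShotValue
{
    private readonly ManualResetEventSlim _ready = new ManualResetEventSlim(false);
    private object _value;

    public void Execute(object value)
    {
        if (value == null) throw new ArgumentNullException("value");
        if (Interlocked.CompareExchange(ref _value, value, null) != null)
            throw new InvalidOperationException("Can only execute once.");
        _ready.Set(); // releases all current and future waiters
    }

    public object WaitForValue()
    {
        _ready.Wait(); // blocks until Execute has published the value
        return _value;
    }
}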

As per SharpLab, we can see that the compiler does not remove an empty lock block. Moreover, I can't imagine how it could be optimized out at compile time in a real multithreaded environment while being sure nothing breaks.
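For reference, this is roughly what the compiler emits for an empty lock block (the C# 4.0+ lowering pattern), which is why the Enter/Exit pair cannot simply disappear:

// Approximate lowering of: lock (lockObj) { }
bool lockTaken = false;
try
{
    Monitor.Enter(lockObj, ref lockTaken);
}
finally
{
    if (lockTaken) Monitor.Exit(lockObj);
}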

Related

Is it bad to overwrite a lock object if it is the last statement in the lock?

I've seen this a couple times now but I'm not sure if it is actually incorrect.
Consider the following example class:
class Foo
{
    List<string> lockedList = new List<string>();

    public void ReplaceList(IEnumerable<string> items)
    {
        var newList = new List<string>(items);
        lock (lockedList)
        {
            lockedList = newList;
        }
    }

    public void Add(string newItem)
    {
        lock (lockedList)
        {
            lockedList.Add(newItem);
        }
    }

    public void Contains(string item)
    {
        lock (lockedList)
        {
            lockedList.Contains(item);
        }
    }
}
ReplaceList overwrites lockedList while it has a lock on it. As soon as it does, all subsequent callers will actually be locking on the new value; they can enter their lock before ReplaceList exits its lock.
Despite raising flags by replacing a lock object, this code might actually work correctly: as long as the assignment is the last statement of the lock, there is no more synchronized code to run.
Besides the increased maintenance cost of ensuring the assignment remains at the end of the lock block, is there another reason to avoid this?
So, to start with: no, the specific solution you have provided is not safe, due to the specifics of how you're accessing the field.
Getting a working solution is easy enough. Just don't create a new list; instead, clear it out and add the new items:
class Foo
{
    private List<string> lockedList = new List<string>();

    public void ReplaceList(IEnumerable<string> items)
    {
        lock (lockedList)
        {
            lockedList.Clear();
            lockedList.AddRange(items);
        }
    }

    public void Add(string newItem)
    {
        lock (lockedList)
        {
            lockedList.Add(newItem);
        }
    }

    public void Contains(string item)
    {
        lock (lockedList)
        {
            lockedList.Contains(item);
        }
    }
}
Now your field isn't actually changing, and you don't need to worry about all of the problems that can cause.
As for how the code in the question can break: all it takes is a call to Add or Contains that reads the field, gets the list, and locks it, while another thread replaces the field. When you then read the field a second time, after already getting the value to lock on, the value may have changed, so you'll end up mutating or reading from a list that another caller won't be restricted from accessing.
All that said, while mutating the lockedList field is a really bad idea and you should unquestionably avoid it (as shown above), you can also ensure that you only read the field once, rather than reading it repeatedly, and still ensure that each list is only ever accessed from a single thread at any one time:
class Foo
{
    private volatile List<string> lockedList = new List<string>();

    public void ReplaceList(IEnumerable<string> items)
    {
        lockedList = new List<string>(items);
    }

    public void Add(string newItem)
    {
        var localList = lockedList;
        lock (localList)
        {
            localList.Add(newItem);
        }
    }

    public void Contains(string item)
    {
        var localList = lockedList;
        lock (localList)
        {
            localList.Contains(item);
        }
    }
}
Notice here that what this fixes isn't mutating the field that the lock object was fetched from (that isn't inherently the problem, although it is very bad practice), but rather constantly re-reading the field inside the lock statements and expecting that value never to change, when it can.
This is going to be much harder to maintain, is very fragile, and is much more difficult to understand or verify, though, so again: don't do things like this.
I realized that reading the lock object's field is done before we can lock on it, so this will never be correct. If another thread attempts to enter the lock before the value is changed, it will end up entering its lock block with a lock on the old value while the field holds the new value. It will then be using the new value without having a lock on it.
Example:
Thread A calls ReplaceList and enters the lock.
Thread B calls Add. It reaches the lock and is blocked.
Thread A replaces lockedList with a new value.
Thread C calls Contains; it gets the new value of lockedList and takes out the lock.
Thread A exits its lock, allowing thread B to resume.
Thread B enters the lock using the old value of lockedList and adds an item to the new list without having a lock on the new list.
Thread C throws an exception because the list was modified while it was enumerating it.
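For completeness, the conventional way to avoid this entire class of bugs (a sketch, not taken from the answer above) is to lock on a dedicated readonly object that is never reassigned:

class Foo
{
    // the lock object never changes, so every caller always locks on the same thing
    private readonly object _sync = new object();
    private List<string> lockedList = new List<string>();

    public void ReplaceList(IEnumerable<string> items)
    {
        var newList = new List<string>(items);
        lock (_sync)
        {
            lockedList = newList;
        }
    }

    public void Add(string newItem)
    {
        lock (_sync)
        {
            lockedList.Add(newItem);
        }
    }

    public bool Contains(string item)
    {
        lock (_sync)
        {
            return lockedList.Contains(item);
        }
    }
}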

Correct way to implement a resource pool

I'm trying to implement something that manages a pool of resources such that the calling code can request an object and will be given one from the pool if it's available, or else it will be made to wait. I'm having trouble getting the synchronization to work correctly, however. What I have in my pool class is something like this (where autoEvent is an AutoResetEvent that is initially signaled):
public Foo GetFooFromPool()
{
    autoEvent.WaitOne();

    var foo = Pool.FirstOrDefault(p => !p.InUse);
    if (foo != null)
    {
        foo.InUse = true;
        autoEvent.Set();
        return foo;
    }
    else if (Pool.Count < Capacity)
    {
        System.Diagnostics.Debug.WriteLine("count {0}\t capacity {1}", Pool.Count, Capacity);
        foo = new Foo() { InUse = true };
        Pool.Add(foo);
        autoEvent.Set();
        return foo;
    }
    else
    {
        return GetFooFromPool();
    }
}

public void ReleaseFoo(Foo p)
{
    p.InUse = false;
    autoEvent.Set();
}
The idea is that when you call GetFooFromPool, you wait until signaled, then try to find an existing Foo that is not in use. If you find one, we set it to InUse and then fire a signal so other threads can proceed. If we don't find one, we check to see if the pool is full. If not, we create a new Foo, add it to the pool, and signal again. If neither of those conditions is satisfied, we are made to wait again by calling GetFooFromPool again.
Now in ReleaseFoo we just set InUse back to false, and signal the next thread waiting in GetFooFromPool (if any) to try and get a Foo.
The problem seems to be in my managing the size of the pool. With a capacity of 5, I'm ending up with 6 Foos. I can see in my debug line count 0 appear a couple of times and count 1 might appear a couple of times also. So clearly I have multiple threads getting into the block when, as far as I can see, they shouldn't be able to.
What am I doing wrong here?
Edit: A double-check lock like this:
else if (Pool.Count < Capacity)
{
    lock (locker)
    {
        if (Pool.Count < Capacity)
        {
            System.Diagnostics.Debug.WriteLine("count {0}\t capacity {1}", Pool.Count, Capacity);
            foo = new Foo() { InUse = true };
            Pool.Add(foo);
            autoEvent.Set();
            return foo;
        }
    }
}
Does seem to fix the problem, but I'm not sure it's the most elegant way to do it.
As was already mentioned in the comments, a counting semaphore is your friend.
Combine this with a concurrent stack and you have a nice, simple, thread-safe implementation where you can still lazily allocate your pool items.
The bare-bones implementation below provides an example of this approach. Note that another advantage here is that you do not need to "contaminate" your pool items with an InUse member as a flag to track state.
Note that, as a micro-optimization, a stack is preferred over a queue in this case, because it will provide the most recently returned instance from the pool, which may still be in e.g. the L1 cache.
public class GenericConcurrentPool<T> : IDisposable where T : class
{
    private readonly SemaphoreSlim _sem;
    private readonly ConcurrentStack<T> _itemsStack;
    private readonly Action<T> _onDisposeItem;
    private readonly Func<T> _factory;

    public GenericConcurrentPool(int capacity, Func<T> factory, Action<T> onDisposeItem = null)
    {
        _itemsStack = new ConcurrentStack<T>(new T[capacity]);
        _factory = factory;
        _onDisposeItem = onDisposeItem;
        _sem = new SemaphoreSlim(capacity);
    }

    public async Task<T> CheckOutAsync()
    {
        await _sem.WaitAsync();
        return Pop();
    }

    public T CheckOut()
    {
        _sem.Wait();
        return Pop();
    }

    public void CheckIn(T item)
    {
        Push(item);
        _sem.Release();
    }

    public void Dispose()
    {
        _sem.Dispose();
        if (_onDisposeItem != null)
        {
            T item;
            while (_itemsStack.TryPop(out item))
            {
                if (item != null)
                    _onDisposeItem(item);
            }
        }
    }

    private T Pop()
    {
        T item;
        var result = _itemsStack.TryPop(out item);
        Debug.Assert(result);
        return item ?? _factory();
    }

    private void Push(T item)
    {
        Debug.Assert(item != null);
        _itemsStack.Push(item);
    }
}
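Hypothetical usage of the pool above (Foo and its parameterless constructor are assumptions for illustration):

var pool = new GenericConcurrentPool<Foo>(5, () => new Foo());

var foo = pool.CheckOut(); // blocks if all 5 items are currently checked out
try
{
    // use foo ...
}
finally
{
    pool.CheckIn(foo); // return it so other callers can proceed
}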
There are a few problems with what you're doing, but your specific race condition is likely caused by a situation like the following. Imagine you have a capacity of one.
1) There is one unused item in the pool.
2) Thread #1 grabs it and signals the event.
3) Thread #2 finds no available item and gets inside the capacity block. It does not add its item yet.
4) Thread #1 returns the item to the pool and signals the event.
5) Repeat steps 1, 2, and 3 using two other threads (e.g. #3, #4).
6) Thread #2 adds an item to the pool.
7) Thread #4 adds an item to the pool.
There are now two items in a pool with a capacity of one.
Your implementation has other potential issues, however.
Depending on how your Pool.Count and Add() are synchronized, you might not see an up-to-date value.
You could potentially have multiple threads grab the same unused item.
Controlling access with an AutoResetEvent opens you up to difficult-to-find issues (like this one), because you are trying to use a lockless solution instead of just taking a lock and using Monitor.Wait() and Monitor.Pulse() for this purpose. A sketch of that lock-based approach follows.
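A bare-bones sketch of that lock-based approach, reusing the Pool, Capacity, and Foo names from the question (the _sync field is an assumption):

private readonly object _sync = new object();

public Foo GetFooFromPool()
{
    lock (_sync)
    {
        while (true)
        {
            var foo = Pool.FirstOrDefault(p => !p.InUse);
            if (foo != null)
            {
                foo.InUse = true;
                return foo;
            }
            if (Pool.Count < Capacity)
            {
                foo = new Foo() { InUse = true };
                Pool.Add(foo);
                return foo;
            }
            Monitor.Wait(_sync); // releases the lock while waiting for a Pulse
        }
    }
}

public void ReleaseFoo(Foo p)
{
    lock (_sync)
    {
        p.InUse = false;
        Monitor.Pulse(_sync); // wake one waiter to re-check the pool
    }
}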

How to lock a part of a method from other threads?

How can I lock a part of a method in C# from other threads?
I mean, if one of the threads was already here, then exit...
For example:
if (threads[0].WasHere)
{
    return;
}
An effective way is with an interlocked exchange: by setting some token field to a non-default value during the work, the other threads can check this and exit. For example:
private int hazWorker; // = 0 - put this at the scope you want to protect
then:
// means: atomically set hazWorker to 1, but only if the old value was 0, and
// tell me what the old value was (and compare that result to 0)
if (Interlocked.CompareExchange(ref hazWorker, 1, 0) != 0) {
    return; // someone else has the conch
}
try {
    // your work here
} finally {
    Interlocked.Exchange(ref hazWorker, 0); // set it back to default
}
You can use Monitor.TryEnter for this purpose.
if (!Monitor.TryEnter(someLock))
{
    return;
}
try
{
    // Critical region
}
finally
{
    Monitor.Exit(someLock);
}
Or, as a more reliable way to deal with rude thread aborts (suggested by Marc in the comments):
bool lockTaken = false;
try
{
    Monitor.TryEnter(someLock, ref lockTaken);
    if (lockTaken)
    {
        // Critical region
    }
}
finally
{
    if (lockTaken) Monitor.Exit(someLock);
}
Note that this doesn't check whether threads[0] is still working; rather, it checks whether any other thread is in the critical region. If so, it exits the method.
You can use a bool value - assign it false by default, and then have the first of the threads set it to true. The piece of code could then look like this:
if (!alreadyExecuted)
{
    // ...
    alreadyExecuted = true;
}
I would also put the code in a lock to make sure only one thread executes it at a time (to deal with any possible race conditions), like below.
The lockVariable is a locker variable; it can be of any reference type, e.g. object lockVariable = new object();
lock (lockVariable)
{
    if (!alreadyExecuted)
    {
        // ...
        alreadyExecuted = true;
    }
}
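Put together, a minimal run-once sketch combining the flag and the lock (the class and method names are illustrative):

class RunOnceExample
{
    private readonly object lockVariable = new object();
    private bool alreadyExecuted; // false by default

    public void RunOnce()
    {
        lock (lockVariable)
        {
            if (!alreadyExecuted)
            {
                // ... the work that must only ever run once goes here ...
                alreadyExecuted = true;
            }
        }
    }
}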

C# "ref" not Doing what I Think it Should

I have a class that talks to an external .exe. The class has a bunch of similar methods; they call a function of the .exe, wait for a response, and then return true or false.
The response comes in the form of events that change the values of fields of this class.
Simplified code:
class Manager
{
    private static bool connected = false;

    public static bool Connect()
    {
        runtime.Connect();

        int secondsWaited = 0;
        while (!connected)
        {
            Thread.Sleep(1000);
            if (secondsWaited++ == 10)
            {
                return false;
            }
        }
        return true;
    }
}
The other methods use the same call-wait-loop-return structure.
My goal is to make a single method to do this waiting for me, like so:
private static bool WaitReferenceEqualsValue<T>(ref T reference, T value)
{
    int secondsWaited = 0;
    while (!reference.Equals(value))
    {
        Thread.Sleep(1000);
        if (secondsWaited++ == 10)
        {
            return false;
        }
    }
    return true;
}
Then each method would do:
runtime.DoSomething();
return WaitReferenceEqualsValue<someType>(ref someField, someSuccessfulValue);
However, when I replace the wait-loop with this method call, the field "connected", even though passed in as a reference, always stays the same.
Any idea what's going on here, and how to get the desired functionality?
Thanks in advance.
EDIT:
public static bool Connect()
{
    ...
    runtime.Connect();

    // this code works
    /*int secondsWaited = 0;
    while (connected != true)
    {
        Thread.Sleep(1000);
        if (secondsWaited++ == 10)
        {
            return false;
        }
    }*/

    // this somehow blocks OnConnect from firing, so connected never gets set to true
    lock (typeof(SkypeKitManager))
    {
        WaitReferenceEqualsValue<bool>(ref connected, true);
    }
    ...
}
OnConnect:
private static void OnConnect(object sender, Events.OnConnectArgs e)
{
    if (e != null && e.success)
    {
        lock (typeof(Manager))
        {
            connected = true;
        }
    }
}
You're not doing any synchronization on that field, although you access it from multiple threads and one of them is writing. This is a race (no exceptions: it is a race even if it looks safe. It isn't safe.).
Probably the JIT enregistered it, which is a common optimization. The field just never gets read from memory, always from a register. Add synchronization (for example a lock, or the Interlocked or Volatile methods).
Your usage of ref is correct.
The problem with your code is essentially compiler optimization. For optimization purposes, compilers (or JITs) necessarily take a pretty much single-threaded view. The compiler/JIT will notice that you don't touch reference in your loop at all, so it can move the comparison outside the loop. It is free to do so, since you have basically created a race condition (no synchronization/atomic accesses).
Fixing it could involve either using synchronization mechanisms or adding the volatile specifier to the field, thus telling the compiler/JIT that the variable can be changed from outside the method.
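Note that C# does not allow volatile on a ref parameter, so one possible fix (a sketch, assuming the flag is a bool and .NET 4.5+ for Volatile.Read) is to force a fresh read of the field on every iteration:

private static bool WaitFlagEqualsValue(ref bool flag, bool value)
{
    int secondsWaited = 0;
    // Volatile.Read defeats the enregistering optimization described above
    while (Volatile.Read(ref flag) != value)
    {
        Thread.Sleep(1000);
        if (secondsWaited++ == 10)
        {
            return false;
        }
    }
    return true;
}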

How to obtain a lock in two places but release it in one place?

I'm a newbie in C#. I need to obtain a lock in two methods, but release it in one method. Will that work?
public void obtainLock() {
    Monitor.Enter(lockObj);
}

public void obtainReleaseLock() {
    lock (lockObj) {
        doStuff
    }
}
In particular, can I call obtainLock and then obtainReleaseLock? Is "doubleLock" allowed in C#? These two methods are always called from the same thread; however, lockObj is used in another thread for synchronization.
Update: after all the comments, what do you think about this code? Is it ideal?
public void obtainLock() {
    if (needCallMonitorExit == false) {
        Monitor.Enter(lockObj);
        needCallMonitorExit = true;
    }
    // doStuff
}

public void obtainReleaseLock() {
    try {
        lock (lockObj) {
            // doAnotherStuff
        }
    } finally {
        if (needCallMonitorExit == true) {
            needCallMonitorExit = false;
            Monitor.Exit(lockObj);
        }
    }
}
Yes, locks are "re-entrant", so a call can "double-lock" (your phrase) the lockObj. However, note that it needs to be released exactly as many times as it is taken; you will need to ensure that there is a corresponding "releaseLock" to match "obtainLock".
I do, however, suggest it is easier to let the caller lock(...) on some property you expose:
public object SyncLock { get { return lockObj; } }
now the caller can (instead of obtainLock()):
lock (something.SyncLock) {
    //...
}
much easier to get right. Because this is the same underlying lockObj that is used internally, this synchronizes against either usage, even if obtainReleaseLock (etc) is used inside code that locked against SyncLock.
With the context clearer (comments), it seems that maybe Wait and Pulse are the way to do this:
void SomeMethodThatMightNeedToWait() {
    lock (lockObj) {
        if (needSomethingSpecialToHappen) {
            Monitor.Wait(lockObj);
            // ^^^ this ***releases*** the lock (however many times needed), and
            // enters the pending-queue; when *another* thread "pulses", it
            // enters the ready-queue; when the lock is *available*, it
            // reacquires the lock (back to as many times as it held it
            // previously) and resumes work
        }
        // do some work, happy that something special happened, and
        // we have the lock
    }
}

void SomeMethodThatMightSignalSomethingSpecial() {
    lock (lockObj) {
        // do stuff
        Monitor.PulseAll(lockObj);
        // ^^^ this moves **all** items from the pending-queue to the ready-queue
        // note there is also Pulse(...) which moves a *single* item
    }
}
Note that when using Wait you might want to use the overload that accepts a timeout, to avoid waiting forever; note also it is quite common to have to loop and re-validate, for example:
lock (lockObj) {
    while (needSomethingSpecialToHappen) {
        Monitor.Wait(lockObj);
        // at this point, we know we were pulsed, but maybe another waiting
        // thread beat us to it! re-check the condition, and continue; this might
        // also be a good place to check for some "abort" condition (and
        // remember to do a PulseAll() when aborting)
    }
    // do some work, happy that something special happened, and we have the lock
}
You would have to use the Monitor for this functionality. Note that you open yourself up to deadlocks and race conditions if you aren't careful with your locks; having them taken and released in separate areas of code can be risky:
Monitor.Exit(lockObj);
Only one owner can hold the lock at a given time; it is exclusive. While the locking can be chained, the more important part is making sure you obtain and release the proper number of times, avoiding difficult-to-diagnose threading issues.
When you wrap your code via lock { ... }, you are essentially calling Monitor.Enter and Monitor.Exit as scope is entered and departed.
When you explicitly call Monitor.Enter you are obtaining the lock and at that point you would need to call Monitor.Exit to release the lock.
This doesn't work.
The code
lock(lockObj)
{
// do stuff
}
is translated to something like
Monitor.Enter(lockObj);
try
{
    // do stuff
}
finally
{
    Monitor.Exit(lockObj);
}
That means that your code enters the lock twice but releases it only once. According to the documentation, the lock is only really released by the thread if Exit was called as often as Enter, which is not the case in your code.
Summary: Your code will not deadlock on the call to obtainReleaseLock, but the lock on lockObj will never be released by the thread. You would need an explicit call to Monitor.Exit(lockObj), so that the number of calls to Monitor.Enter matches the number of calls to Monitor.Exit.
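For example, a matching release method (a sketch; the method name is an assumption) would restore the balance:

public void releaseLock() {
    // pairs with the Monitor.Enter in obtainLock
    Monitor.Exit(lockObj);
}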
