Why is it bad practice to use lock as in the following code? I'm assuming this is bad practice based on the answers in this SO question here.
private void DoSomethingUseLess()
{
List<IProduct> otherProductList = new List<IProduct>();
Parallel.ForEach(myOriginalProductList, product =>
{
//Some code here removed for brevity
//Some more code here :)
lock (otherProductList)
{
otherProductList.Add((IProduct)product.Clone());
}
});
}
The answers over there mention that it is bad practice, but they don't say why.
Note: please ignore the usefulness of the code; this is just for example purposes, and I know it is not at all useful.
From the C# language reference here:
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock("myLock") is a problem because any other code in the process
using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private
static object variable to protect data common to all instances.
In your case, I would read the above guidance as suggesting that locking on the collection you are modifying is bad practice. For example, if you wrote this code:
lock (otherProductList)
{
otherProductList = new List<IProduct>();
}
...then your lock is worthless: other threads still lock on the old list instance while the variable now points at a new one, so nothing protects the new list. For this reason it's recommended to use a dedicated object variable for locking.
Note that this doesn't mean your application will break if you use the code you posted. "Best practices" are usually defined to provide easily-repeated patterns that are more technically resilient. That is, if you follow best practice and have a dedicated "lock object," you are highly unlikely to ever write broken lock-based code; if you don't follow best practice then, maybe one time in a hundred, you'll get bitten by an easily-avoided problem.
Additionally (and more generally), code written using best practices is typically more easily modified, because you can be less wary of unexpected side-effects.
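Applied to the question's loop, the guidance looks like this (a sketch; the `IProduct.Clone` signature is an assumption, since the original interface isn't shown):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Assumed shape of the interface; the question only shows the cast to IProduct.
public interface IProduct
{
    IProduct Clone();
}

public class MyClass
{
    // Dedicated, private lock object: no code outside this class can ever lock on it.
    private readonly object _syncRoot = new object();

    public List<IProduct> CopyAll(List<IProduct> myOriginalProductList)
    {
        var otherProductList = new List<IProduct>();
        Parallel.ForEach(myOriginalProductList, product =>
        {
            lock (_syncRoot)
            {
                otherProductList.Add(product.Clone());
            }
        });
        return otherProductList;
    }
}
```

Because `_syncRoot` is never reassigned and never visible to callers, the two failure modes above (locking on a reassigned variable, or on an object someone else can lock) are both ruled out by construction.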
It might indeed not be a good idea, because if someone else uses the same object reference for a lock, you could have a deadlock. If there is any chance your locked object is accessible outside your own code, then someone else can break your code.
Imagine the following example based on your code:
namespace ClassLibrary1
{
public class Foo : IProduct
{
}
public interface IProduct
{
}
public class MyClass
{
public List<IProduct> myOriginalProductList = new List<IProduct> { new Foo(), new Foo() };
public void Test(Action<IEnumerable<IProduct>> handler)
{
List<IProduct> otherProductList = new List<IProduct> { new Foo(), new Foo() };
Parallel.ForEach(myOriginalProductList, product =>
{
lock (otherProductList)
{
if (handler != null)
{
handler(otherProductList);
}
otherProductList.Add(product);
}
});
}
}
}
Now you compile your library, send it to a customer, and this customer writes in his code:
public class Program
{
private static void Main(string[] args)
{
new MyClass().Test(z => SomeMethod(z));
}
private static void SomeMethod(IEnumerable<IProduct> myReference)
{
Parallel.ForEach(myReference, item =>
{
lock (myReference)
{
// Some stuff here
}
});
}
}
Then there could be a nice hard-to-debug deadlock for your customer, with each of the two threads waiting for the lock on the otherProductList instance to be released.
I agree that this scenario is unlikely, but it illustrates the point: if the reference you lock on is visible, by any means, in code you do not own, then there is a possibility for the final code to be broken.
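One way to harden the example is to lock on a private object and hand external code a snapshot rather than the locked list itself (a sketch based on the classes above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public interface IProduct { }
public class Foo : IProduct { }

public class MyClass
{
    // Private lock object: callers can never lock on it, so they cannot deadlock us.
    private readonly object _syncRoot = new object();

    public List<IProduct> myOriginalProductList =
        new List<IProduct> { new Foo(), new Foo() };

    public void Test(Action<IEnumerable<IProduct>> handler)
    {
        var otherProductList = new List<IProduct>();
        Parallel.ForEach(myOriginalProductList, product =>
        {
            IProduct[] snapshot;
            lock (_syncRoot)
            {
                otherProductList.Add(product);
                snapshot = otherProductList.ToArray();
            }
            // The callback runs outside the lock, on a copy: the customer's
            // SomeMethod can lock whatever it likes without blocking our lock.
            handler?.Invoke(snapshot);
        });
    }
}
```

The customer's `lock (myReference)` now locks an array the library never touches again, so the deadlock scenario disappears.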
Related
Is there any way to detect that a certain method in my code is called without using any lock in any of the methods below in the call stack?
The goal is to debug a faulty application and find out if certain pieces of code aren't thread safe.
This seems like a decent use case for AOP (aspect-oriented programming). A very basic summary of AOP is that it's a technique for handling cross-cutting concerns to keep code DRY and modular. The idea is that if you're doing something on every method call of an object (e.g. logging each call), then instead of adding a log line at the start and end of each method, you inherit from the object and do it outside the class, so as not to muddy its purpose.
This can be done a few ways, and I'll give you two examples. The first is manual (this isn't great, but can be done very easily for small cases).
Assume you have a class Doer with two methods, Do and Other. You can inherit from it and write:
public class Doer
{
public virtual void Do()
{
//do stuff.
}
public virtual void Other()
{
//do stuff.
}
}
public class AspectDoer : Doer
{
public override void Do()
{
LogCall("Do");
base.Do();
}
public override void Other()
{
LogCall("Other");
base.Other();
}
private void LogCall(string method)
{
//Record call
}
}
This is great if you only care about one class, but it quickly becomes infeasible if you have to do it for many classes. For those cases I'd recommend a library like Castle DynamicProxy, which dynamically creates a proxy to wrap any class you want. In combination with an IoC container, you can easily wrap every service in your application.
Here's a quick example of using it; the main points are to call ProxyGenerator.CreateClassProxy and pass in IInterceptor implementations to run code around method calls:
[Test]
public void TestProxy()
{
var generator = new ProxyGenerator();
var proxy = generator.CreateClassProxy<Doer>(new LogInterceptor());
proxy.Do();
Assert.True(_wasCalled);
}
private static bool _wasCalled = false;
public class LogInterceptor : IInterceptor
{
public void Intercept(IInvocation invocation)
{
Log(invocation.Method.Name);
invocation.Proceed();
}
private void Log(string name)
{
_wasCalled = true;
}
}
Now, the logging portion. I'm not sure you really need this to be lock-free; short locks might be enough, but let's proceed assuming you do.
I don't know of many tools in C# that support lock-free operations, but the simplest version of this I can see is using Interlocked to increment a counter of how many threads are in the method at any given time. It would look something like this:
[Test]
public void TestProxy()
{
var generator = new ProxyGenerator();
var proxy = generator.CreateClassProxy<Doer>(new LogInterceptor());
proxy.Do();
Assert.AreEqual(1, _totalDoCount);
}
private static int _currentDoCount = 0;
private static int _totalDoCount = 0;
public class LogInterceptor : IInterceptor
{
public void Intercept(IInvocation invocation)
{
    if (invocation.Method.Name == "Do")
    {
        var result = Interlocked.Increment(ref _currentDoCount);
        Interlocked.Increment(ref _totalDoCount);
        if (result > 1) throw new Exception("thread safety violation");
        try
        {
            invocation.Proceed();
        }
        finally
        {
            // Decrement only the counter we incremented, even if Proceed throws
            Interlocked.Decrement(ref _currentDoCount);
        }
    }
    else
    {
        invocation.Proceed();
    }
}
}
Interlocked uses atomic CPU instructions (compare-and-swap) to perform thread-safe operations. If you need more context than just "it happened", you can use ConcurrentStack or ConcurrentQueue, which are lock-free (they use Interlocked as well: https://msdn.microsoft.com/en-us/library/dd997305.aspx/). I would include a timestamp on the entries, though, since I haven't used them enough to know whether they promise to return elements in the order they occurred.
Like I said above, you might not need lock-free operations, but this should get you started. I don't know if any of this is a perfect fit for you, since I don't know your exact problem, but it should give you some tools to tackle it.
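If per-event context is what you're after, a timestamped log on top of ConcurrentQueue could look like this (a sketch; EventRecord and CallLog are invented names):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// EventRecord is a hypothetical record type for this sketch.
public class EventRecord
{
    public string Method;
    public int ThreadId;
    public DateTime TimestampUtc;
}

public class CallLog
{
    private readonly ConcurrentQueue<EventRecord> _events =
        new ConcurrentQueue<EventRecord>();

    // Safe to call from any thread; ConcurrentQueue needs no external lock.
    public void Record(string method)
    {
        _events.Enqueue(new EventRecord
        {
            Method = method,
            ThreadId = Environment.CurrentManagedThreadId,
            TimestampUtc = DateTime.UtcNow
        });
    }

    // The timestamp lets you reconstruct ordering afterwards regardless of
    // the order the queue happens to yield items under contention.
    public EventRecord[] Snapshot()
    {
        return _events.ToArray();
    }
}
```

An interceptor like the one above would call `Record(invocation.Method.Name)` instead of (or in addition to) bumping the counters.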
You could host the CLR yourself, and track the locks taken using the IHostSyncManager::CreateMonitorEvent method. You'd then need to expose your own mechanism from your host to your method called say "IsLockTaken()". You could then call that from your method in your actual code.
I think it is possible, but it would be quite a lot of work and almost certainly a complete distraction from the problem you're trying to solve, but no doubt a lot of fun!
Here's an interesting read on Deadlock detection https://blogs.msdn.microsoft.com/sqlclr/2006/07/25/deadlock-detection-in-sql-clr/
I have two classes, A and B. Each calls the other, and each has its own lock. I am getting a deadlock in one particular scenario. Here is sample code:
class A : Interface1, Interface2
{
private B _bInstance = new B();
private object _aSync = new object();
private static A Instance;
private A(){}
public static A GetInstance()
{
if (Instance == null) Instance = new A();
return Instance;
}
void Method1()
{
lock(_aSync)
{
_bInstance.Method1();
}
}
void WriteData()
{
lock (_aSync)
{
WriteToFile();
}
}
}
class B
{
private object _bSync = new object();
void Method1()
{
lock (_bSync)
{
// Have some code here which need to protect my
// member variables.
A.GetInstance().WriteData();
}
}
void OneSecondTimerEvent()
{
lock (_bSync)
{
// Have some code here which need to protect my
// member variables.
A.GetInstance().WriteData();
}
}
}
How do I synchronize OneSecondTimerEvent() if the one-second timer fires while A.Method1() is being executed?
Yes, your code shows a canonical example of deadlock: two resources waiting for each other to continue.
To resolve you can:
manually order lock statements (i.e. B never takes additional locks if A already holds one),
scope locks to only the internal state of each class and never nest locks; in this case you'd sometimes need to copy state before calling external methods,
use other synchronization primitives/constructs that allow such nesting (e.g. reader-writer locks).
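The second option can be sketched like this, using simplified versions of A and B from the question (the _state field and the WriteData parameter are invented for illustration):

```csharp
public class A
{
    private static readonly A Instance = new A();
    private readonly object _aSync = new object();

    public static A GetInstance() => Instance;

    public void WriteData(int value)
    {
        lock (_aSync)
        {
            // write the value to a file, etc.
        }
    }
}

public class B
{
    private readonly object _bSync = new object();
    private int _state;

    public int State { get { lock (_bSync) return _state; } }

    public void OneSecondTimerEvent()
    {
        int stateCopy;
        lock (_bSync)
        {
            // Touch only B's own members while holding B's lock...
            _state++;
            stateCopy = _state;
        }
        // ...and call into A only after releasing it, passing a copy.
        // No thread ever holds both locks at once, so the cycle is broken.
        A.GetInstance().WriteData(stateCopy);
    }
}
```

The cost is the copy (and the fact that A sees a snapshot, not live state), but the deadlock becomes structurally impossible.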
Rather than trying to solve this particular deadlock (which, by the way, is a classic result of acquiring locks in an inconsistent order), I would strongly advise designing a better relationship between A and B. The fact that you had to use a static instance to achieve a circular dependency should be a big clue that something is wrong. Perhaps A and B should reference a third class C, which is solely responsible for locking and writing the data? (Although it's difficult to say without more context.)
In an application that I am developing, I will be using two threads to do various operations. (I will not go into detail here.) These threads work in loops: checking whether there is work to be done, doing the work, calculating how long they need to wait, and then waiting. (See below.)
public class Global : System.Web.HttpApplication
{
private static Thread StartingDateThread;
private static Thread DeadlineDateThread;
private static object o1;
private static object o2;
public static Thread GetStartingDateThreadInstance
{
get
{
if(StartingDateThread==null)
{
StartingDateThread=new Thread(new ThreadStart(MonitorStartingDates));
}
return StartingDateThread;
}
}
public static Thread GetDeadlineThreadInstance
{
get
{
if(DeadlineDateThread==null)
{
DeadlineDateThread=new Thread(new ThreadStart(MonitorDeadlines));
}
return DeadlineDateThread;
}
}
public static object GetFirstObjectInstance
{
get
{
if(o1==null)
{
o1=new object();
}
return o1;
}
}
public static object GetSecondObjectInstance
{
get
{
if(o2==null)
{
o2=new object();
}
return o2;
}
}
protected void Application_Start(object sender, EventArgs e)
{
GetStartingDateThreadInstance.Start();
GetDeadlineThreadInstance.Start();
//////////////////////
////Do other stuff.
}
public void MonitorStartingDates()
{
while(true)
{
//Check if there is stuff to do.
//Do stuff if available.
//Check if there will be stuff to do in the future and if there is, check
//the time to wake up.
//If there is nothing to do, sleep for a pre-determined 12 hours.
if(StuffToDoInFuture)
{
Monitor.Enter(GetFirstObjectInstance);
Monitor.Wait(GetFirstObjectInstance, WaitingTime);
Monitor.Exit(GetFirstObjectInstance);
}
else
{
Monitor.Enter(GetFirstObjectInstance);
Monitor.Wait(GetFirstObjectInstance, new TimeSpan(12, 0, 0));
Monitor.Exit(GetFirstObjectInstance);
}
}
}
public void MonitorDeadlines()
{
while(true)
{
//Check if there is stuff to do.
//Do stuff if available.
//Check if there will be stuff to do in the future and if there is, check
//the time to wake up.
//If there is nothing to do, sleep for a pre-determined 3 days and 12 hours.
if(StuffToDoInFuture)
{
Monitor.Enter(GetSecondObjectInstance);
Monitor.Wait(GetSecondObjectInstance, WaitingTime);
Monitor.Exit(GetSecondObjectInstance);
}
else
{
Monitor.Enter(GetSecondObjectInstance);
Monitor.Wait(GetSecondObjectInstance, new TimeSpan(3, 12, 0, 0));
Monitor.Exit(GetSecondObjectInstance);
}
}
}
As you can see, these two threads are started in the Application_Start method in the asax file. They operate when there is work available, then calculate the period they need to wait, and wait. However, as users of the web application perform operations, new records will be inserted into the database, and there will be circumstances where either of the two threads has to resume sooner than planned. So, say I have a method in my DataAccess class which inserts new data into the database. (See below.)
public class DataAccess
{
///////////////
//
public void InsertNewAuction()
{
///Insert new row calculate the time
Monitor.Pulse(Global.GetFirstObjectInstance);
Monitor.Pulse(Global.GetSecondObjectInstance);
///
}
}
It seems this is an invalid operation: at the point where Monitor.Pulse is called from the InsertNewAuction method, I get an exception, something like "Object synchronization method was called from an unsynchronized block of code." Is there any way of doing this? Thanks for your help.
As to the specific error you're seeing, this is because Monitor.Pulse must be called inside the Monitor lock, like this (I've used lock rather than Enter/Exit, as it's safer for making sure the lock is always released, since it uses a proper try/finally block):
lock (Global.GetFirstObjectInstance)
{
Monitor.Pulse(Global.GetFirstObjectInstance);
}
In regard to the more general design question here, it's often dangerous to expose lock objects as public (or even worse, global) fields. In particular, it can be a recipe for deadlocks when multiple global locks are exposed and acquired in differing orders or when you have cases like blocking dispatches to the UI thread while holding a lock. Consider looking into alternate ways to accomplish what you're after.
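One way to apply both points — Pulse called under the lock, and the lock object itself kept private — is to wrap the wait/pulse pair in a small class (a sketch; WorkSignal and its members are invented names):

```csharp
using System;
using System.Threading;

public class WorkSignal
{
    private readonly object _sync = new object();   // never exposed
    private bool _workAvailable;

    // Called by the data-access code after inserting a new row.
    public void NotifyNewWork()
    {
        lock (_sync)                 // Pulse must run while holding the lock
        {
            _workAvailable = true;
            Monitor.Pulse(_sync);
        }
    }

    // Called by a monitoring thread; returns true if work arrived,
    // false if the timeout elapsed first.
    public bool WaitForWork(TimeSpan timeout)
    {
        lock (_sync)
        {
            if (!_workAvailable)
                Monitor.Wait(_sync, timeout);  // releases the lock while waiting
            bool had = _workAvailable;
            _workAvailable = false;
            return had;
        }
    }
}
```

The monitoring loops would each own a WorkSignal instance and call `WaitForWork(new TimeSpan(12, 0, 0))` in place of the raw Enter/Wait/Exit sequence; DataAccess calls NotifyNewWork instead of pulsing a global object.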
As noted in the other answer, you have to acquire the lock before you can call Monitor.Pulse() on the monitor object.
That said, your code has at least one other serious bug: you are not initializing the synchronization object in a thread-safe way, which could easily lead to two different threads using two different object instances, resulting in no synchronization between those threads:
public static object GetFirstObjectInstance
{
get
{
if(o1==null)
{
o1=new object();
}
return o1;
}
}
If two threads call this getter simultaneously, they each may see o1 as null and try to initialize it. Then each might return a different value for the object instance.
You should simply initialize the object in a initializer:
private static readonly object o1 = new object();
And then return it from the getter:
public static object GetFirstObjectInstance { get { return o1; } }
That addresses the thread-safety issue. But you still have other issues with the code. First, you should encapsulate synchronization inside an object rather than exposing the synchronization object itself. Second, if you are going to expose the synchronization object anyway, keep the backing field private and expose it only through the property; a public field alongside a property serves no purpose.
It would also be better if the property followed normal .NET naming conventions. A method that returned the object would have "Get" in the name, but a property would not. Just name it "FirstObjectInstance".
Also as noted by Dan, use lock everywhere you want to acquire the lock.
There may be other issues in the code as well...I didn't do a thorough review. But the above you need to fix for sure.
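If deferred creation is genuinely needed rather than a plain field initializer, Lazy&lt;T&gt; is the framework's thread-safe replacement for the hand-rolled null check (a sketch; the class name is invented):

```csharp
using System;

public static class Globals
{
    // Lazy<T> is thread-safe by default (ExecutionAndPublication mode):
    // exactly one thread runs the factory, and all threads see the same value.
    private static readonly Lazy<object> _firstLock =
        new Lazy<object>(() => new object());

    public static object FirstObjectInstance => _firstLock.Value;
}
```

This follows the naming advice above as well: no "Get" prefix on the property.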
A project that I work on was analyzed by a commercial analysis tool. It flagged our implementations of ReaderWriterLockSlim as potential sources of memory leaks because we didn't call the Dispose() method.
I've never seen this method called on this lock: either in code I've worked on or code examples I learned from. Should Dispose() be called? What if it's disposed while a thread still needs it? Is this possible?
Here's a sample of how we currently use it - no Dispose():
public class Test
{
private ReaderWriterLockSlim _lookupLock = new ReaderWriterLockSlim();
public IDictionary<int, SomeObject> GetAll()
{
_lookupLock.EnterWriteLock();
try
{
if (X == null || X.Count == 0)
{
Do Something...;
}
}
finally
{
_lookupLock.ExitWriteLock();
}
return Something...;
}
}
It does need to be disposed.
Mostly, a ReaderWriterLockSlim is used to protect a static resource, so it will be a static instance that doesn't need to be disposed.
But in your case (one ReaderWriterLockSlim per instance), you would need to make your class IDisposable, and dispose the ReaderWriterLockSlim.
Or maybe a better alternative is to use an ordinary lock (i.e. Monitor) to protect instance resources rather than a ReaderWriterLockSlim. There's probably not much performance difference, it makes your code simpler, and it avoids you needing to make your class IDisposable.
Framework classes like ConcurrentDictionary use ordinary locks.
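If you do keep one ReaderWriterLockSlim per instance, the IDisposable route is a small amount of code (a minimal sketch; WriteCount is added only so the work is observable):

```csharp
using System;
using System.Threading;

public class Test : IDisposable
{
    private readonly ReaderWriterLockSlim _lookupLock = new ReaderWriterLockSlim();
    public int WriteCount; // stands in for the real guarded state

    public void DoWork()
    {
        _lookupLock.EnterWriteLock();
        try
        {
            WriteCount++; // the real guarded work goes here
        }
        finally
        {
            _lookupLock.ExitWriteLock();
        }
    }

    // Forward disposal to the lock. After this, further Enter* calls throw
    // ObjectDisposedException, so dispose only once the instance is done.
    public void Dispose()
    {
        _lookupLock.Dispose();
    }
}
```

Callers then wrap the instance in a `using` block, or the owning object's own Dispose chain.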
In your class, implement IDisposable. Look at the class declaration below as an example.
Change your declaration to this and, of course, add the rest of your existing code, including a Dispose() method that disposes the ReaderWriterLockSlim.
public class Test : IDisposable
{
}
Below is an example of how I would declare it.
public class WriteLock : IDisposable
{
ReaderWriterLockSlim _rwlock;
public WriteLock(ReaderWriterLockSlim rwlock )
{
_rwlock = rwlock;
_rwlock.EnterWriteLock();
}
public void Dispose()
{
_rwlock.ExitWriteLock();
}
}
Consider the following extensions:
public static class ReaderWriteExt
{
public static void ExecWriteAction(this ReaderWriterLockSlim rwlock, Action action)
{
rwlock.EnterWriteLock();
try
{
action();
}
finally
{
rwlock.ExitWriteLock();
}
}
public static void ExecUpgradeableReadAction(this ReaderWriterLockSlim rwlock, Action action)
{
rwlock.EnterUpgradeableReadLock();
try
{
action();
}
finally
{
rwlock.ExitUpgradeableReadLock();
}
}
}
Also consider the following sample usage (stripped of some supporting code):
private static ReaderWriterLockSlim _rwlock = new ReaderWriterLockSlim();
private static ... _cacheEntries = ....;
public static void RemoveEntry(string name)
{
WeakReference outValue = null;
_rwlock.ExecUpgradeableReadAction(() =>
{
if (_cacheEntries.TryGetValue(name, out outValue))
{
if (!outValue.IsAlive)
{
_rwlock.ExecWriteAction(() => _cacheEntries.Remove(name));
}
}
});
}
I'm new to C# coding and was unable to find enough information about these topics to guide me. To my question: I am considering using this pattern in our production code. Is it a bad idea? What can go wrong?
That seems fine to me, except that the code looks very cumbersome.
I would probably implement IDisposable as:
public class WriteLock : IDisposable
{
ReaderWriterLockSlim _rwlock;
public WriteLock(ReaderWriterLockSlim rwlock )
{
_rwlock = rwlock;
_rwlock.EnterWriteLock();
}
public void Dispose()
{
_rwlock.ExitWriteLock();
}
}
Usage:
private ReaderWriterLockSlim _rwlock = new ReaderWriterLockSlim();
//...
using (new WriteLock(_rwlock)) //<-- here the constructor calls EnterWriteLock
{
_cacheEntries.Remove(name);
} //<---here Dispose method gets called automatically which calls ExitWriteLock
Similarly, you can implement UpgradeableReadLock class implementing IDisposable interface.
The idea is that you create an instance of the disposable class in a using construct, which guarantees that the constructor enters the write lock by calling EnterWriteLock(); when the using block exits, Dispose() is called automatically, which calls ExitWriteLock().
Note that this does not dispose the ReaderWriterLockSlim object; it disposes the WriteLock object, which is just a wrapper. The ReaderWriterLockSlim itself remains untouched in the user class.
I am not seeing any reason why that would not work safely. However, I suspect it will actually be slower than using a plain old lock, because ReaderWriterLockSlim has about 2x the overhead compared to a lock.1 So you would need the code in the critical section to consume enough CPU cycles to overcome that added overhead just to reach the break-even point. If all you are doing is accessing a Dictionary (or whatever data structure _cacheEntries happens to be), I doubt RWLS is right for the situation. Reader-writer locks tend to work better when readers significantly outnumber writers and when the guarded section of code is long and drawn out. Obviously you should run your own benchmarks, because mileage varies considerably with many other factors; the degree of parallelism and the number of cores could give you more throughput with RWLS even when a simple break-even analysis does not initially favor it.
1 Based on my own tests. ReaderWriterLock had about 5x the overhead.
Yes, there is a technical reason why this is not a good idea: if the Action throws an exception while updating the data, the data is left in an unknown and very likely corrupt state. Your code unconditionally releases the writer lock in the finally block, so other threads will then access the partially updated shared state. This is a recipe for disaster.