Skipping locked section already in use - C#

I have a critical section which is to be executed only once but is invoked by many scenarios. How can I execute this thread proc and skip all the rest of the calls?
Thanks in advance.

// Assuming this is called from multiple threads, I used volatile.
volatile bool executed = false;
object lockExecuted = new object();

void DoSomething()
{
    lock (lockExecuted)
    {
        if (executed) return;
        executed = true;
    }
    // do something
}

public static bool hasExecuted = false;
static readonly object lockObject = new object();

static void Method()
{
    lock (lockObject)
    {
        if (!hasExecuted)
        {
            // run code
            hasExecuted = true;
        }
    }
}
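As an aside (not from the original answers), the same run-once guard can be sketched lock-free with Interlocked.CompareExchange on an int flag; the field and method names here are made up for illustration:

int started = 0; // field: 0 = not yet executed, 1 = already executed

void DoSomethingOnce()
{
    // Atomically flip started from 0 to 1; only the thread that observes the old value 0 proceeds.
    if (Interlocked.CompareExchange(ref started, 1, 0) != 0) return;
    // do something (runs exactly once)
}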

lock is the preferred method, unless you have more detailed requirements.
EDIT: the addition of "only once" means lock is insufficient.

Use lock to allow only one thread to access the critical section and when it's complete set a flag saying it's done.

Depends on your scenario. If this section returns a value, you can use the new .NET 4.0 feature Lazy<T>.
Of course, you can always just set a flag within your critical section and check its value before executing (which is sometimes done when using the singleton pattern).
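A minimal sketch of the Lazy<T> approach, assuming the critical section produces a value (the names expensiveResult and ComputeOnce are made up for illustration):

// With ExecutionAndPublication, the factory runs exactly once even under contention;
// every caller after the first just gets the cached value.
static readonly Lazy<int> expensiveResult =
    new Lazy<int>(ComputeOnce, LazyThreadSafetyMode.ExecutionAndPublication);

static int ComputeOnce()
{
    // critical section body goes here
    return 42;
}

// Callers simply read expensiveResult.Value; only the first access executes ComputeOnce().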

Related

Mutex allowing more than one thread to pass

My code seems to be allowing more than one thread to get into a specific method "protected" by a mutex.
private static Mutex mut = new Mutex();

public DadoMySql PegaPrimeiroFila(int identificacao)
{
    DadoMySql dadoMySql = null;
    mut.WaitOne();
    dadoMySql = PegaPrimeiroFila_Processa();
    mut.ReleaseMutex();
    return dadoMySql;
}
I have 10 threads and I keep getting 2 random ones of them getting the same "dadoMySql" every time.
If I add logs inside the mutex wait, everything works fine. The extra time it takes to write the log makes it work :/, maybe?
Mutex is overkill here, unless you are synchronizing across multiple processes.
A simple lock should work since you want mutual exclusion:
private static readonly object lockObject = new object();

public DadoMySql PegaPrimeiroFila(int identificacao)
{
    DadoMySql dadoMySql = null;
    lock (lockObject)
    {
        dadoMySql = PegaPrimeiroFila_Processa();
    }
    return dadoMySql;
}
Using the lock keyword also gives you a stronger guarantee that Monitor.Exit gets called, even when an exception is thrown inside the lock scope.
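To make that concrete, here is a small sketch built on the question's own method; the "Seguro" variant name is just illustrative:

public DadoMySql PegaPrimeiroFilaSeguro()
{
    lock (lockObject)
    {
        // If PegaPrimeiroFila_Processa() throws, the hidden try/finally that the compiler
        // generates for lock still calls Monitor.Exit, so the lock is released.
        return PegaPrimeiroFila_Processa();
    }
}
// With mut.WaitOne(); ...; mut.ReleaseMutex(); the same exception would skip ReleaseMutex(),
// so the mutex would stay held by that thread.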

Correct way to use the Interlocked class for multithreading in .NET

I have a counter, which counts the currently processed large reports
private int processedLargeReports;
and I'm generating and starting five threads, where each thread accesses this method:
public bool GenerateReport(EstimatedReportSize reportSize)
{
    var currentDateTime = DateTimeFactory.Instance.DateTimeNow;
    bool allowLargeReports = (this.processedLargeReports < Settings.Default.LargeReportLimit);
    var reportOrderNextInQueue = this.ReportOrderLogic.GetNextReportOrderAndLock(
        currentDateTime.AddHours(this.timeoutValueInHoursBeforeReleaseLock),
        reportSize,
        CorrelationIdForPickingReport,
        allowLargeReports);
    if (reportOrderNextInQueue.IsProcessing)
    {
        Interlocked.Increment(ref this.processedLargeReports);
    }
    var currentReport = this.GetReportToBeWorked(reportOrderNextInQueue);
    var works = this.WorkTheReport(reportOrderNextInQueue, currentReport, currentDateTime);
    if (reportOrderNextInQueue.IsProcessing)
    {
        Interlocked.Decrement(ref this.processedLargeReports);
    }
    return works;
}
The reportOrderNextInQueue variable gets a report order from the database and checks whether the report order is "Normal" or "Large" (this is reflected in the bool IsProcessing property of reportOrderNextInQueue). In the case of a large report, the system then Interlocked-increments the processedLargeReports int and processes the large report. Once the large report is processed, the system Interlocked-decrements the value.
The whole idea is that I'll only allow a single large report to be processed at a time, so once a thread is processing a large report, the other threads should not be able to pick up another large report from the database. The bool allowLargeReports variable records whether the processedLargeReports int is still below the limit.
I'm curious whether this is the proper implementation, since I cannot test it before Monday. I'm not sure whether I have to use the Interlocked class or just define the processedLargeReports variable as a volatile member.
Say you have 5 threads starting to run the code above, and LargeReportLimit is 1. They will all read processedLargeReports as 0, allowLargeReports will be true for all of them, and they will start processing 5 items simultaneously, even though your limit is 1. So I don't really see how this code achieves your goal, if I understand it correctly.
To expand on it a bit: you read processedLargeReports and then act on it (use it to check whether you should allow a report to be processed). You act as if this variable cannot change between the read and the act, but that is not true. Any number of threads can do anything with processedLargeReports between your read and your act, because you have no locking. Interlocked in this case will only ensure that processedLargeReports gets back to 0 after all threads have finished all tasks, but that is all.
If you need to limit concurrent access to some resource, just use the appropriate tool for this: the Semaphore or SemaphoreSlim classes. Create a semaphore which allows LargeReportLimit threads in. Before processing a report, Wait on your semaphore. This will block if the number of concurrent threads processing reports has been reached. When processing is done, release your semaphore to let waiting threads in. No need to use the Interlocked class here.
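A minimal sketch of that suggestion, assuming a gate field initialized from Settings.Default.LargeReportLimit (the surrounding method is abbreviated and the names are illustrative):

private static readonly SemaphoreSlim largeReportGate =
    new SemaphoreSlim(Settings.Default.LargeReportLimit);

public bool GenerateLargeReport()
{
    largeReportGate.Wait();   // blocks once LargeReportLimit large reports are in flight
    try
    {
        // fetch and work the large report here
        return true;
    }
    finally
    {
        largeReportGate.Release();  // always free the slot, even on exceptions
    }
}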
volatile does not provide thread safety. As usual with multithreading, you need some synchronization - it could be based on Interlocked, lock or any other synchronization primitive, and it depends on your needs. You have chosen Interlocked - fine, but you have a race condition. You read the processedLargeReports field outside of any synchronization block and make a decision based on that value. But it could change immediately after you read it, so the whole logic will not work. The correct way would be to always do Interlocked.Increment and base your logic on the returned value. Something like this:
First, let's use a better name for the field:
private int processingLargeReports;
and then
public bool GenerateReport(EstimatedReportSize reportSize)
{
    var currentDateTime = DateTimeFactory.Instance.DateTimeNow;
    bool allowLargeReports =
        (Interlocked.Increment(ref this.processingLargeReports) <= Settings.Default.LargeReportLimit);
    if (!allowLargeReports)
        Interlocked.Decrement(ref this.processingLargeReports);
    var reportOrderNextInQueue = this.ReportOrderLogic.GetNextReportOrderAndLock(
        currentDateTime.AddHours(this.timeoutValueInHoursBeforeReleaseLock),
        reportSize,
        CorrelationIdForPickingReport,
        allowLargeReports);
    if (allowLargeReports && !reportOrderNextInQueue.IsProcessing)
        Interlocked.Decrement(ref this.processingLargeReports);
    var currentReport = this.GetReportToBeWorked(reportOrderNextInQueue);
    var works = this.WorkTheReport(reportOrderNextInQueue, currentReport, currentDateTime);
    if (allowLargeReports && reportOrderNextInQueue.IsProcessing)
        Interlocked.Decrement(ref this.processingLargeReports);
    return works;
}
Note that this also contains race conditions, but holds your LargeReportLimit constraint.
EDIT: Now that I think about it, since your processing depends on both allowLargeReports and IsProcessing, Interlocked is not a good choice here; better to use a Monitor-based approach like:
private int processingLargeReports;
private object processingLargeReportsLock = new object();

private void AcquireProcessingLargeReportsLock(ref bool lockTaken)
{
    Monitor.Enter(this.processingLargeReportsLock, ref lockTaken);
}

private void ReleaseProcessingLargeReportsLock(ref bool lockTaken)
{
    if (!lockTaken) return;
    Monitor.Exit(this.processingLargeReportsLock);
    lockTaken = false;
}

public bool GenerateReport(EstimatedReportSize reportSize)
{
    bool lockTaken = false;
    try
    {
        this.AcquireProcessingLargeReportsLock(ref lockTaken);
        bool allowLargeReports = (this.processingLargeReports < Settings.Default.LargeReportLimit);
        if (!allowLargeReports)
        {
            this.ReleaseProcessingLargeReportsLock(ref lockTaken);
        }
        var currentDateTime = DateTimeFactory.Instance.DateTimeNow;
        var reportOrderNextInQueue = this.ReportOrderLogic.GetNextReportOrderAndLock(
            currentDateTime.AddHours(this.timeoutValueInHoursBeforeReleaseLock),
            reportSize,
            CorrelationIdForPickingReport,
            allowLargeReports);
        if (reportOrderNextInQueue.IsProcessing)
        {
            this.processingLargeReports++;
            this.ReleaseProcessingLargeReportsLock(ref lockTaken);
        }
        var currentReport = this.GetReportToBeWorked(reportOrderNextInQueue);
        var works = this.WorkTheReport(reportOrderNextInQueue, currentReport, currentDateTime);
        if (reportOrderNextInQueue.IsProcessing)
        {
            this.AcquireProcessingLargeReportsLock(ref lockTaken);
            this.processingLargeReports--;
        }
        return works;
    }
    finally
    {
        this.ReleaseProcessingLargeReportsLock(ref lockTaken);
    }
}

Does lock section always guarantee thread safety?

I'm trying to understand thread-safe access to fields. To do that, I implemented a small test sample:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        bool temp;
        new Thread(() => { test.Loop = false; }).Start();
        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    public bool Loop = true;
}
As expected, sometimes it doesn't terminate. I know that this issue can be solved either with the volatile keyword or with a lock. Assume I'm not the author of class Foo, so I can't make the field volatile. I tried using lock:
public static void Main()
{
    Foo test = new Foo();
    object locker = new Object();
    bool temp;
    new Thread(() => { test.Loop = false; }).Start();
    do
    {
        lock (locker)
        {
            temp = test.Loop;
        }
    }
    while (temp == true);
}
this seems to solve the issue. Just to be sure, I moved the loop inside the lock block:
lock (locker)
{
    do
    {
        temp = test.Loop;
    }
    while (temp == true);
}
and... the program does not terminate anymore.
This totally confuses me. Doesn't lock provide thread-safe access? If not, how do I access non-volatile fields safely? I could use VolatileRead(), but it is not suitable for every case, such as non-primitive types or properties. I assumed that Monitor.Enter does the job; am I right? I don't understand how this could work.
This piece of code:
do
{
    lock (locker)
    {
        temp = test.Loop;
    }
}
while (temp == true);
works because of a side-effect of lock: it causes a 'memory-fence'. The actual locking is irrelevant here. Equivalent code:
do
{
    Thread.MemoryBarrier();
    temp = test.Loop;
}
while (temp == true);
And the issue you're trying to solve here is not exactly thread-safety, it is about caching of the variable (stale data).
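On .NET 4.5 and later you can make that fence explicit per read instead of leaning on lock; a small sketch, assuming Foo.Loop stays a public field as in the question:

do
{
    // Volatile.Read inserts the acquire fence for you, so the JIT cannot hoist
    // test.Loop out of the loop and spin on a stale cached value.
    temp = Volatile.Read(ref test.Loop);
}
while (temp == true);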
It does not terminate anymore because you are accessing the variable outside of the lock as well.
In
new Thread(() => { test.Loop = false; }).Start();
you write to the variable outside the lock. This write is not guaranteed to be visible.
Two concurrent accesses to the same location of which at least one is a write is a data race. Don't do that.
Lock provides thread safety for two or more code blocks on different threads that use the same lock.
Your Loop assignment inside the new thread delegate is not enclosed in a lock.
That means there is no thread safety there.
In general, no, lock is not something that will magically make all code inside it thread-safe.
The simple rule is: If you have some data that's shared by multiple threads, but you always access it only inside a lock (using the same lock object), then that access is thread-safe.
Once you leave that "simple" code and start asking questions like "How could I use volatile/VolatileRead() safely here?" or "Why does this code that doesn't use lock properly seem to work?", things get complicated quickly. And you should probably avoid that, unless you're prepared to spend a lot of time learning about the C# memory model. And even then, bugs that manifest only once in a million runs or only on certain CPUs (ARM) are very easy to make.
Locking only works when all access to the field is controlled by a lock. In your example only the reading is locked, but since the writing is not, there is no thread-safety.
However, it is also crucial that the locking takes place on a shared object, otherwise there is no way for another thread to know that someone is trying to access the field. So in your case, when locking on an object which is only scoped inside the Main method, any other call on another thread would not be able to block.
If you have no way to change Foo, the only way to obtain thread-safety is to have ALL calls actually lock on the same Foo instance. This would generally not be recommended though, since all methods on the object would be locked.
The volatile keyword is not a guarantee of thread-safety in itself. It is meant to indicate that the value of a field can be changed from different threads, so any thread reading that field should not cache it, since the value could change.
To achieve thread-safety, Foo should probably look something along these lines:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        test.Run();
        bool temp;
        new Thread(() => { test.Loop = false; }).Start();
        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    private volatile bool _loop = true;
    private object _syncRoot = new object();

    public bool Loop
    {
        // All access to the Loop value is controlled by a lock on an instance-scoped object,
        // i.e. when one thread accesses the value, all other threads are blocked.
        get { lock (_syncRoot) return _loop; }
        set { lock (_syncRoot) _loop = value; }
    }

    public void Run()
    {
        Task.Run(() =>
        {
            while (_loop) // _loop is volatile, so the value is not cached
            {
                // Do something
            }
        });
    }
}

Object synchronization method was called from an unsynchronized block of code. Exception on Mutex.Release()

I have found different articles about this exception but none of them was my case.
Here is the source code:
class Program
{
    private static Mutex mutex;
    private static bool mutexIsLocked = false;

    static void Main(string[] args)
    {
        ICrmService crmService =
            new ArmenianSoftware.Crm.Common.CrmServiceWrapper(GetCrmService("Armsoft", "crmserver"));
        //Lock mutex for concurrent access to workflow
        mutex = new Mutex(true, "ArmenianSoftware.Crm.Common.FilterCtiCallLogActivity");
        mutexIsLocked = true;
        //Create object for updating filtered cti call log
        ArmenianSoftware.Crm.Common.FilterCtiCallLog filterCtiCallLog =
            new ArmenianSoftware.Crm.Common.FilterCtiCallLog(crmService);
        //Bind events
        filterCtiCallLog.CtiCallsRetrieved += new EventHandler<ArmenianSoftware.Crm.Common.CtiCallsRetrievedEventArgs>(filterCtiCallLog_CtiCallsRetrieved);
        //Execute filter
        try
        {
            filterCtiCallLog.CreateFilteredCtiCallLogSync();
        }
        catch (Exception ex)
        {
            throw ex;
        }
        finally
        {
            if (mutexIsLocked)
            {
                mutexIsLocked = false;
                mutex.ReleaseMutex();
            }
        }
    }

    static void filterCtiCallLog_CtiCallsRetrieved(object sender,
        ArmenianSoftware.Crm.Common.CtiCallsRetrievedEventArgs e)
    {
        try
        {
            if (mutexIsLocked)
            {
                mutexIsLocked = false;
                mutex.ReleaseMutex();
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}
The filterCtiCallLog.CreateFilteredCtiCallLogSync() call executes requests against the server and raises some events, one of which is the CtiCallsRetrieved event. I need to release the mutex when this event is fired. But when mutex.ReleaseMutex() is called, the exception is thrown. CreateFilteredCtiCallLogSync works synchronously. What is the problem?
Keeping a bool around that indicates that the mutex is owned is a grave mistake. You are not making the bool thread-safe. You got into this pickle because you are using the wrong synchronization object. A mutex has thread-affinity, the owner of a mutex is a thread. The thread that acquired it must also be the one that calls ReleaseMutex(). Which is why your code bombs.
You in all likelihood need an event here, use AutoResetEvent. Create it in the main thread, call Set() in the worker, WaitOne() in the main thread to wait for the worker to complete its job. And dispose it afterwards. Also note that using a thread to perform a job and having your main thread wait for its completion is not productive. You might as well have the main thread do the job.
If you are actually doing this to protect access to an object that's not thread-safe (it isn't clear) then use the lock statement.
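A bare-bones sketch of the AutoResetEvent hand-off described above; the names are illustrative and not taken from the original code:

private static readonly AutoResetEvent workDone = new AutoResetEvent(false);

static void Main()
{
    var worker = new Thread(() =>
    {
        // ... do the filtering work ...
        workDone.Set();           // signal the main thread that the work is finished
    });
    worker.Start();

    workDone.WaitOne();           // main thread blocks here until Set() is called
    workDone.Dispose();
}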
Another reason why this exception may occur:
if (Monitor.TryEnter(_lock))
{
    try
    {
        ... await MyMethodAsync(); ...
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}
I get this exception on Monitor.Exit when, after the await, another thread continues the execution.
Edit:
Use SemaphoreSlim instead, because it doesn't require the releasing thread to be the same one that acquired the lock.
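A short sketch of that SemaphoreSlim pattern around an await; MyMethodAsync is the hypothetical method from the snippet above and _gate is an assumed field:

private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

async Task DoWorkAsync()
{
    await _gate.WaitAsync();      // asynchronous acquire, no thread affinity
    try
    {
        await MyMethodAsync();    // may resume on a different thread
    }
    finally
    {
        _gate.Release();          // safe even though the continuation thread changed
    }
}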
You will also run into this exception if you do the following:
mutex.WaitOne();
// ... some work ...
await someTask;
mutex.ReleaseMutex();
That's because the code after the await can execute on a different thread from the line just before it. Basically, it seems that if you use async code (as of early 2020), Mutexes simply don't work. Use events or something.
I have found the problem. First, several things about the FilterCtiCallLog class. I designed it to work both asynchronously and synchronously. At first I wrote the code for asynchronous execution. I needed a way to raise events from the child worker thread on the parent thread, to report the working state. For this I used the AsyncOperation class and its Post method. Here is the part of the code that raises the CtiCallsRetrieved event.
public class FilterCtiCallLog
{
    private int RequestCount = 0;
    private AsyncOperation createCallsAsync = null;
    private SendOrPostCallback ctiCallsRetrievedPost;

    public void CreateFilteredCtiCallLogSync()
    {
        createCallsAsync = AsyncOperationManager.CreateOperation(null);
        ctiCallsRetrievedPost = new SendOrPostCallback(CtiCallsRetrievedPost);
        CreateFilteredCtiCallLog();
    }

    private void CreateFilteredCtiCallLog()
    {
        int count = 0;
        //do the job
        //...........
        //Raise the event
        createCallsAsync.Post(CtiCallsRetrievedPost, new CtiCallsRetrievedEventArgs(count));
        //...........
    }

    public event EventHandler<CtiCallsRetrievedEventArgs> CtiCallsRetrieved;

    private void CtiCallsRetrievedPost(object state)
    {
        CtiCallsRetrievedEventArgs args = state as CtiCallsRetrievedEventArgs;
        if (CtiCallsRetrieved != null)
            CtiCallsRetrieved(this, args);
    }
}
As you can see, the code executes synchronously. The problem here is in the AsyncOperation.Post() method. I presumed that if it is called on the main thread it would simply raise the event rather than post it to the parent thread. However, that wasn't the case. I don't know how it works, but I changed the code to check whether CreateFilteredCtiCallLog is being called synchronously or asynchronously. If it is an async call, I use the AsyncOperation.Post method; if not, I simply invoke the EventHandler if it is not null. Here is the corrected code:
public class FilterCtiCallLog
{
    private int RequestCount = 0;
    private AsyncOperation createCallsAsync = null;
    private SendOrPostCallback ctiCallsRetrievedPost;

    public void CreateFilteredCtiCallLogSync()
    {
        createCallsAsync = AsyncOperationManager.CreateOperation(null);
        ctiCallsRetrievedPost = new SendOrPostCallback(CtiCallsRetrievedPost);
        CreateFilteredCtiCallLog(false);
    }

    private void CreateFilteredCtiCallLog(bool isAsync)
    {
        int count = 0;
        //do the job
        //...........
        //Raise the event
        RaiseEvent(CtiCallsRetrievedPost, new CtiCallsRetrievedEventArgs(count), isAsync);
        //...........
    }

    public event EventHandler<CtiCallsRetrievedEventArgs> CtiCallsRetrieved;

    private void RaiseEvent(SendOrPostCallback callback, object state, bool isAsync)
    {
        if (isAsync)
            createCallsAsync.Post(callback, state);
        else
            callback(state);
    }

    private void CtiCallsRetrievedPost(object state)
    {
        CtiCallsRetrievedEventArgs args = state as CtiCallsRetrievedEventArgs;
        if (CtiCallsRetrieved != null)
            CtiCallsRetrieved(this, args);
    }
}
Thanks everybody for the answers!
I have seen this happen when you lock code using a Monitor and then call async code: when using lock(object) you get a compiler error, but between Monitor.Enter(object) and Monitor.Exit(object) the compiler does not complain... unfortunately.
Using a flag to attempt to monitor a kernel synchro object state will just not work - the point of using those synchro calls is that they work correctly without any explicit checking. Setting flags will just cause intermittent problems because the flag may be changed inappropriately due to interrupts between checking the flag and acting on it.
A mutex can only be released by the thread that acquired it. If your callback is called by a different thread (one internal to CreateFilteredCtiCallLogSync() or a kernel thread pool), the release will fail.
It's not clear exactly what you are attempting to do. Presumably, you want to serialize access to CreateFilteredCtiCallLogSync(), and the callback flags that the instance is available for re-use? If so, you could use a semaphore instead: init it to one unit, wait for it at the start and release it in the callback.
Is there some issue where sometimes the callback is not called, hence the try/finally/release? If so, this way out seems a bit dodgy if the callback is asynchronous and may be called by another thread after the setup thread has left the function.
I only had this one once or twice, and in every case it came about by trying to release a mutex I didn't own.
Are you sure the events are raised on the same thread the mutex was acquired on?
Although you mention that filterCtiCallLog.CreateFilteredCtiCallLogSync() is a blocking call, perhaps it spawns worker threads that raise the event?
Maybe not the most meaningful error message, but I've seen this happen in some third-party code, as below:
object obj = new object();
lock (obj)
{
    //do something
    Monitor.Exit(obj); //obj released
} //exception happens here, when trying to release obj
I have read the thread and got some ideas, but did not know exactly what needed to be done to solve the issue. I faced the same error when uploading images to S3 in a nopCommerce solution, and the code below works for me.
using var mutex = new Mutex(false, thumbFileName);
mutex.WaitOne();
try
{
    if (pictureBinary != null)
    {
        try
        {
            using var image = SKBitmap.Decode(pictureBinary);
            var format = GetImageFormatByMimeType(picture.MimeType);
            pictureBinary = ImageResize(image, format, targetSize);
        }
        catch
        {
        }
    }
    if (s3Enabled)
        //await S3UploadImageOnThumbsAsync(thumbFileName, pictureBinary, picture.MimeType, picture, targetSize);
        // The awaited line above was causing the issue, because the continuation can resume on a different thread.
        // So I replaced it with the line below and the error disappeared; this is also roughly how nopCommerce does it.
        // The calling thread needs to wait.
        S3UploadImageOnThumbsAsync(thumbFileName, pictureBinary, picture.MimeType, picture, targetSize).Wait();
    else
        File.WriteAllBytes(thumbFilePath, pictureBinary);
}
finally
{
    mutex.ReleaseMutex();
}

Monitor vs lock

When is it appropriate to use either the Monitor class or the lock keyword for thread safety in C#?
EDIT:
It seems from the answers so far that lock is shorthand for a series of calls to the Monitor class. What exactly is the lock call shorthand for? Or, more explicitly,
class LockVsMonitor
{
    private readonly object LockObject = new object();

    public void DoThreadSafeSomethingWithLock(Action action)
    {
        lock (LockObject)
        {
            action.Invoke();
        }
    }

    public void DoThreadSafeSomethingWithMonitor(Action action)
    {
        // What goes here?
    }
}
Update
Thank you all for your help: I have posted another question as a follow-up to some of the information you all provided. Since you seem to be well versed in this area, here is the link: What is wrong with this solution to locking and managing locked exceptions?
Eric Lippert talks about this in his blog:
Locks and exceptions do not mix
The equivalent code differs between C# 4.0 and earlier versions.
In C# 4.0 it is:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    { body }
}
finally
{
    if (lockWasTaken) Monitor.Exit(temp);
}
It relies on Monitor.Enter atomically setting the flag when the lock is taken.
And earlier it was:
var temp = obj;
Monitor.Enter(temp);
try
{
    body
}
finally
{
    Monitor.Exit(temp);
}
This relies on no exception being thrown between Monitor.Enter and the try. I think in debug code this condition was violated because the compiler inserted a NOP between them and thus made thread abortion between those possible.
lock is just a shortcut for Monitor.Enter with try + finally and Monitor.Exit. Use the lock statement whenever it is enough - if you need something like TryEnter, you will have to use Monitor.
A lock statement is equivalent to:
Monitor.Enter(object);
try
{
    // Your code here...
}
finally
{
    Monitor.Exit(object);
}
However, keep in mind that Monitor can also Wait() and Pulse(), which are often useful in complex multithreading situations.
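For example, Wait() and Pulse() allow a simple blocking hand-off between threads; a minimal producer/consumer sketch (the queue and field names are made up):

private static readonly object gate = new object();
private static readonly Queue<int> items = new Queue<int>();

static void Producer(int item)
{
    lock (gate)
    {
        items.Enqueue(item);
        Monitor.Pulse(gate);          // wake one waiting consumer
    }
}

static int Consumer()
{
    lock (gate)
    {
        while (items.Count == 0)
            Monitor.Wait(gate);       // releases the lock and blocks until pulsed
        return items.Dequeue();
    }
}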
Update
However, in C# 4 it's implemented differently:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    // your code
}
finally
{
    if (lockWasTaken)
        Monitor.Exit(temp);
}
Thanks to CodeInChaos for the comments and links.
Monitor is more flexible. My favorite use case for Monitor is:
When you don't want to wait for your turn and just want to skip:
// already executing? forget it, let's move on
if (Monitor.TryEnter(_lockObject))
{
    try
    {
        // do stuff;
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
As others have said, lock is "equivalent" to
Monitor.Enter(object);
try
{
    // Your code here...
}
finally
{
    Monitor.Exit(object);
}
But just out of curiosity, lock will preserve the first reference you pass to it and will not throw if you change it. I know it's not recommended to change the locked object and you don't want to do it.
But again, for the science, this works fine:
var lockObject = "";
var tasks = new List<Task>();
for (var i = 0; i < 10; i++)
    tasks.Add(Task.Run(() =>
    {
        Thread.Sleep(250);
        lock (lockObject)
        {
            lockObject += "x";
        }
    }));
Task.WaitAll(tasks.ToArray());
...And this does not:
var lockObject = "";
var tasks = new List<Task>();
for (var i = 0; i < 10; i++)
    tasks.Add(Task.Run(() =>
    {
        Thread.Sleep(250);
        Monitor.Enter(lockObject);
        try
        {
            lockObject += "x";
        }
        finally
        {
            Monitor.Exit(lockObject);
        }
    }));
Task.WaitAll(tasks.ToArray());
Error:
An exception of type 'System.Threading.SynchronizationLockException'
occurred in 70783sTUDIES.exe but was not handled in user code
Additional information: Object synchronization method was called from
an unsynchronized block of code.
This is because Monitor.Exit(lockObject); acts on lockObject, which has changed (strings are immutable, so += produces a new object), so you end up calling it from an unsynchronized block of code... but anyway, this is just a fun fact.
Both are the same thing. lock is a C# keyword and uses the Monitor class under the hood.
http://msdn.microsoft.com/en-us/library/ms173179(v=vs.80).aspx
The lock and the basic behavior of the Monitor (Enter + Exit) are more or less the same, but Monitor has more options that give you more synchronization possibilities.
The lock is a shortcut, and it's the option for the basic usage.
If you need more control, Monitor is the better option. You can use Wait, TryEnter and Pulse for advanced usages (like barriers, semaphores and so on).
Lock
Lock keyword ensures that one thread is executing a piece of code at one time.
lock (lockObject)
{
    // Body
}
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement and then releasing the lock
If another thread tries to enter a locked code, it will wait, block, until the object is released.
Monitor
The Monitor is a static class and belongs to the System.Threading namespace.
It provides exclusive lock on the object so that only one thread can enter into the critical section at any given point of time.
Difference between Monitor and lock in C#
The lock is the shortcut for Monitor.Enter with try and finally.
Lock handles try and finally block internally
Lock = Monitor + try finally.
If you want more control to implement advanced multithreading solutions using TryEnter() Wait(), Pulse(), and PulseAll() methods, then the Monitor class is your option.
Monitor.Wait(): a thread waits for other threads to notify it.
Monitor.Pulse(): a thread notifies another thread.
Monitor.PulseAll(): a thread notifies all other threads within a process.
In addition to all the above explanations, lock is a C# statement whereas Monitor is a .NET class located in the System.Threading namespace.
