why "lock" keyword a object argument [duplicate] - c#

This question already has answers here:
C# "lock" keyword: Why is an object necessary for the syntax?
(3 answers)
Closed 4 years ago.
I do not speak English, so I am using a translator.
This came up while I was studying thread synchronization.
class MainApp
{
    static public int count = 0;
    static private object tLock = new object();

    static void plus()
    {
        for (int i = 0; i < 100; i++)
        {
            lock (tLock)
            {
                count++;
                Console.WriteLine("plus " + count);
                Thread.Sleep(1);
            }
        }
    }

    static void minus()
    {
        for (int i = 0; i < 100; i++)
        {
            lock (tLock)
            {
                count--;
                Console.WriteLine("minus " + count);
                Thread.Sleep(1);
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(plus));
        Thread t2 = new Thread(new ThreadStart(minus));
        t1.Start();
        t2.Start();
    }
}
Just a simple threading study example.
static private object tLock = new object();
lock (tLock) << this takes an argument; why does it have to be an object??

Why have an object argument on lock?
Well, because it's convenient.
First of all, it's obvious from your code example that you need some shared state between the calls to lock in order to declare that two different sections of code are mutually exclusive. If the syntax were just lock { } without a parameter, like this:
public void DoSomestuff()
{
    lock
    {
        // Section A
    }
}

public void DoOtherStuff()
{
    lock
    {
        // Section B
    }
}
Then either all locks would be mutually exclusive, or each lock would affect only its own portion of code (so two threads could execute sections A and B concurrently, but only one thread at a time could execute A). Either way, this would greatly reduce the usefulness of the keyword.
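The parameter is what lets you pick the granularity yourself. A small sketch (the field names are made up for illustration): two sections that share a lock object exclude each other, while a section with its own lock object can run concurrently with them:

private readonly object _lockAB = new object(); // shared by sections A and B
private readonly object _lockC = new object();  // used only by section C

public void DoSomeStuff()
{
    lock (_lockAB)
    {
        // Section A: mutually exclusive with Section B
    }
}

public void DoOtherStuff()
{
    lock (_lockAB)
    {
        // Section B: mutually exclusive with Section A
    }
}

public void DoIndependentStuff()
{
    lock (_lockC)
    {
        // Section C: can run at the same time as A or B
    }
}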
Now that we've established that we need some shared state, what should this state be? We could have used a string:
lock ("My Section")
{
// Section A
}
It would work, but it has a few drawbacks:
1. You expose yourself to potential collisions between section names in different libraries.
2. The runtime has to keep some kind of table to associate each string with a lock. Nothing too difficult, but that's some overhead.
Instead, the .NET authors went with an object argument. This solves problem 1, since another library won't have a reference to your object unless you willingly give it one. It also solves problem 2, because it allows the runtime to store the lock state in the object's header. That's a pretty neat optimization.
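To see why drawback 1 is real: the runtime interns string literals, so two unrelated pieces of code that happen to pick the same literal end up contending for the same lock. A minimal sketch (the class names are invented for illustration):

class LibraryOne
{
    public void Work()
    {
        lock ("My Section") // interned literal, shared process-wide
        {
            // ...
        }
    }
}

class LibraryTwo
{
    public void Work()
    {
        lock ("My Section") // the same interned instance: contends with LibraryOne
        {
            // ...
        }
    }
}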

Consider the following (without lock):
for (int i = 0; i < 1000; i++)
{
    count++;
    Console.WriteLine("plus " + count);
    Thread.Sleep(1);
}
If two threads run simultaneously:
The first thread adds one to count, which is now 1.
Now the second thread takes over and adds one to count, which is now 2.
The second thread continues, prints plus 2, loops, and again adds one to count, which is now 3.
Now the first thread takes over and prints plus 3, which was not intended, since count was 1 when its WriteLine was about to be called.
By adding a locking mechanism (lock), the developer makes sure that a part of the code behaves atomically, i.e. runs as a sequence without interruption from other threads taking the same lock.
for (int i = 0; i < 1000; i++)
{
    lock (tLock)
    {
        count++;
        Console.WriteLine("plus " + count);
        Thread.Sleep(1);
    }
}
If you follow the same pattern here:
The first thread takes the lock and adds one to count, which is now 1.
The second thread tries to take over but has to wait until the lock is released by the first thread.
The first thread prints plus 1 and releases the lock.
Now the second thread can take over and add one to count, which is now 2.
The first thread tries to take over again but has to wait until the second thread releases the lock.
The second thread prints plus 2 and releases the lock.
As you can see, the increment and the WriteLine now run as one synchronized operation.
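If you want to see the end result, one way (a small addition of mine, not part of the original example) is to have Main wait for both threads with Join and then print the shared counter; with equal loop counts in plus and minus it comes back to 0:

static void Main()
{
    Thread t1 = new Thread(new ThreadStart(plus));
    Thread t2 = new Thread(new ThreadStart(minus));
    t1.Start();
    t2.Start();

    // Wait for both threads to finish before reading the shared counter
    t1.Join();
    t2.Join();

    Console.WriteLine("final count = " + count); // 0 when plus and minus loop the same number of times
}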
Edit
After you changed the question:
The lock keyword requires an expression of a reference type. It doesn't have to be of type object; it can also be a class, an interface, a delegate, dynamic or a string. (Locking on strings is discouraged in practice, though, for exactly the collision reason described in the answer above: interned string literals can be shared across the whole application.)
public static string a = string.Empty;

public static void Main()
{
    lock (a)
    {
        Console.WriteLine("Hello World");
    }
}
See the documentation for more information.

Related

C# Thread and lock

I tested this simple code:
static Thread _readThread = null;
static private Object thisLock = new Object();
static int a = 1;

private static void ReadComPort()
{
    lock (thisLock)
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine(Thread.CurrentThread.Name + " " + a++.ToString());
            Thread.Sleep(1000);
        }
    }
}

static void Main(string[] args)
{
    for (int i = 0; i < 3; i++)
    {
        _readThread = new Thread(new ThreadStart(ReadComPort));
        _readThread.IsBackground = true;
        _readThread.Name = i.ToString();
        _readThread.Start();
        //Thread.Sleep(50);
    }
    Console.WriteLine("End");
    Console.ReadKey();
}
but why is the order in which the threads launch and execute so chaotic:
0, 2, 1. Why?
Console output:
0 1
End
0 2
0 3
2 4
2 5
2 6
1 7
1 8
1 9
Because you can't expect threads to start or run in a specific order. The OS schedules threads the way it wants to. Sometimes it puts a thread on hold, executes another one, before coming back to the original one.
Here you see that the threads start almost at the same time. Obviously (from the output) thread 0 wins the race to the first lock. Then, by pure chance, thread 2 gets past the lock earlier than thread 1. This could have gone entirely differently, since the threads are created shortly after each other. As said: there is no guarantee.
lock does not guarantee any ordering; see: Does lock() guarantee acquired in order requested?
Also, in your code you should wait for your threads to finish at the end of your for loop, so that "End" isn't printed at the beginning. If you press a key, you will exit while your threads are still running, and you may get unexpected behaviour.
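A minimal way to do that with the code above (keeping the threads in a list is my addition, and it assumes a using System.Collections.Generic; for List<Thread>):

static void Main(string[] args)
{
    var threads = new List<Thread>();
    for (int i = 0; i < 3; i++)
    {
        Thread readThread = new Thread(new ThreadStart(ReadComPort));
        readThread.IsBackground = true;
        readThread.Name = i.ToString();
        readThread.Start();
        threads.Add(readThread);
    }

    // Wait for every worker to finish before announcing the end
    foreach (Thread t in threads)
        t.Join();

    Console.WriteLine("End");
    Console.ReadKey();
}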
Read the C# reference carefully.
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/lock-statement
You will not find anything there about the order in which threads enter the lock block.

Exit a loop if another thread enter in the

I have a multi-threading issue.
I have a method that is called to refresh several items.
In this method, I iterate over a list of items and refresh one of their properties.
The list has a lot of elements and we have to do some math to compute that property.
The current code for this operation looks like this:
public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items)
{
    _control.Invoke(() =>
    {
        AddItems(items);
        for (int i = 0; i < _guiItems.Count; i++)
        {
            // The goal is to have a condition here to "break" the loop and let the next call to RefreshLayout proceed
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
    });
}
The problem is that I may have 4 calls, which currently just block on the Invoke.
I have to finish the AddItems part, but everything inside the for loop can be aborted without any issue if I know it will be executed again just afterwards.
But how do I do this in a thread-safe way?
If I add a private bool _isNewRefreshHere; that is set to true before entering the Invoke and then checked inside the Invoke, I have no guarantee that two calls haven't already reached the Invoke BEFORE I check the flag in the for loop.
So how can I break out of my loop when a new call is made to my method?
Solution
Based on Andrej Mohar's answer, I did the following:
private long m_refreshQueryCount;

public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items)
{
    Interlocked.Increment(ref m_refreshQueryCount);
    _control.Invoke(() =>
    {
        Interlocked.Decrement(ref m_refreshQueryCount);
        AddItems(items);
        for (int i = 0; i < _guiItems.Count; i++)
        {
            if (Interlocked.Read(ref m_refreshQueryCount) > 0)
            {
                break;
            }
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
    });
}
Which seems to work very nicely
If I were you, I'd try to make a thread-safe waiting counter. You can use Interlocked methods like Increment and Decrement. What these basically do is they increment the value as an atomic operation, which is considered to be thread-safe. So you increase the variable before the Invoke call. This will allow you to know how many threads are in the waiting queue. You decrement the variable after the for loop finishes and before the ending of the Invoke block. You can then check inside the for statement for the number of waiting threads and break the for if the number is greater than 1. This way you should know exactly how many threads are in the execution chain.
I would do it in the following way:
private readonly object _refresherLock = new object();
private bool _isNewRefreshHere = false;
private AutoResetEvent _refresher = new AutoResetEvent(true);

public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items)
{
    lock (_refresherLock)
    {
        if (_isNewRefreshHere)
        {
            return;
        }
        _isNewRefreshHere = true;
    }
    _refresher.WaitOne();
    _isNewRefreshHere = false;
    _control.Invoke(() =>
    {
        AddItems(items);
        for (int i = 0; i < _guiItems.Count && !_isNewRefreshHere; i++)
        {
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
        _refresher.Set();
    });
}
That is:
You can always cancel the current update with a new one.
You cannot queue up more than one update at a time.
You are guaranteed to have no cross-threading conflicts.
You should test that code since I did not. :)

Understanding multithreading

I'm trying to get my head around multithreading.
For simple tasks the easiest way I have found is to do this:
new Thread(delegate ()
{
    Console.WriteLine("doing stuff here");
}).Start();

new Thread(delegate ()
{
    Console.WriteLine("doing other stuff here");
}).Start();
What I'm wondering is: if I call a method from within my two threads, can this cause a conflict?
new Thread(delegate ()
{
    dostuff();
}).Start();

new Thread(delegate ()
{
    dostuff();
}).Start();

private void dostuff()
{
    Console.WriteLine("Do Stuff Here");
}
It can only cause a conflict if you are sharing state, such as a class static or a global variable, between those threads inside the dostuff method.
Variables local to that method are safe; it's the ones you may be sharing that you have to protect with a lock against data races.
Also, the console is a shared resource, so writes to it would need coordination if you want the output to be ordered properly.
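If the ordering of the output matters, here is a sketch of what that coordination could look like (the lock field and the two-line output are mine, just for illustration): group the related writes under one shared lock so another thread cannot interleave its own lines between them:

private static readonly object _consoleLock = new object();

private void dostuff()
{
    lock (_consoleLock)
    {
        // Both lines always appear together in the output
        Console.WriteLine("Do Stuff Here - step 1");
        Console.WriteLine("Do Stuff Here - step 2");
    }
}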
As with everything, the answer is: it depends...
It depends on what is happening inside your dostuff() method. Are the things you are interacting with thread-safe? This is required reading.
You can block threads in your code like this:
var myLock = new object();
Then make use of this in threading scenarios like:
lock (myLock)
{
    // do things in here
}
Atomic operations
This particular case won't cause a conflict, as it's basically doing exactly the same thing as the first case (and because Console.WriteLine is thread-safe). Conflicts only arise when threads deal with shared state; an example of this is:
private int number = 0;

public void RunThreads()
{
    new Thread(delegate ()
    {
        Increment();
    }).Start();

    new Thread(delegate ()
    {
        Increment();
    }).Start();
}

private void Increment()
{
    number += 1;
}
At the end of this, number could be either 1 or 2 depending on the order in which the threads execute. This is because reading number and setting number are each atomic operations, but the combined read-and-set is not, so the threads could be interleaved like so:
Thread1: Read number as 0
Thread2: Read number as 0
Thread1: Set number as 0+1
Thread2: Set number as 0+1
Resulting in number == 1 after both threads finish.
Locking a critical section
To fix a case like this you can create a 'lock object' and use the lock keyword to allow only one thread into that critical section at a time.
private object mylock = new Object();

public void RunThreads()
{
    // ...
}

private void Increment()
{
    lock (mylock)
    {
        number += 1;
    }
}
Locking like this obviously slows down execution, however, as only one thread is allowed into the critical section at a time and the other(s) are blocked.
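For a counter this simple there is also a lock-free option (the same Interlocked class used in an earlier answer on this page); this is only a sketch and it assumes a using System.Threading;:

private int number = 0;

private void Increment()
{
    // Atomically adds 1 to number without taking a lock
    Interlocked.Increment(ref number);
}

This only works because the whole critical section is a single increment; as soon as several operations have to stay consistent with each other (like the count++ plus the WriteLine earlier), you are back to lock.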

How do I achieve mutual exclusion like in the lock statement, but the block would be skipped if it is locked?

Using the lock statement, one can "ensure that one thread does not enter a critical section of code while another thread is in the critical section. If another thread tries to enter a locked code, it will wait, block, until the object is released."
What if the behaviour I want is that when another thread tries to enter the locked code, it just skips the whole block (instead of waiting for the lock to be released)? An idea that comes to my mind is using a flag, something like
if (flag) return;
flag = true;
// do stuff here
flag = false;
But I know this is not safe, because two threads could pass the first line before either one sets the flag to true, and the flag might never be set back to false if an exception occurs. Can you suggest an improvement or an alternative?
Use this overload of Monitor.TryEnter, which lets you specify a timeout.
Attempts, for the specified amount of time, to acquire an exclusive lock on the specified object.
Return Value Type: System.Boolean. true if the current thread acquires the lock without blocking; otherwise, false.
In your case, you probably want to use a timeout of close to TimeSpan.Zero.
If you don't want the thread attempting to take the lock to wait at all, you can just use the overload of Monitor.TryEnter that does not accept a TimeSpan argument. That method returns immediately without waiting, which is very close to the spirit of the flag technique you are trying to use.
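A sketch of that pattern (the field name is mine), which skips the section when the lock is already held and always releases it in a finally; it assumes a using System.Threading;:

private static readonly object _gate = new object();

public void DoStuffIfFree()
{
    // Returns false immediately if another thread already holds the lock
    if (!Monitor.TryEnter(_gate))
        return; // skip the whole block instead of waiting

    try
    {
        // do stuff here
    }
    finally
    {
        Monitor.Exit(_gate);
    }
}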
You need a Semaphore with a limit of 1 and a timeout period of 0 milliseconds.
By using a Semaphore you can say that only a limited number of threads can access a piece of code at a time.
See this sample for how to use it.
You need to use this method for waiting:
bool WaitOne(int millisecondsTimeout)
Specify a timeout period of 0; that way your waiting threads wait 0 seconds, which means they simply skip the code.
Example
class SemaphoreExample
{
    // One slot: at most one thread inside the protected region at a time
    public static Semaphore Pool = new Semaphore(1, 1);

    public static void Main(string[] args)
    {
        // Create and start 20 threads
        for (int i = 0; i < 20; i++)
        {
            Thread t = new Thread(new ThreadStart(DoWork));
            t.Start();
        }
        Console.ReadLine();
    }

    private static void DoWork()
    {
        // Wait 0 milliseconds: returns false right away if the slot is taken
        if (!SemaphoreExample.Pool.WaitOne(0))
            return; // skip the protected code entirely

        try
        {
            #region Area Protected By Semaphore
            Console.WriteLine("Acquired slot...");
            for (int i = 0; i < 10; i++)
            {
                Console.WriteLine(i + 1);
            }
            Console.WriteLine("Released slot...");
            #endregion
        }
        finally
        {
            // Release the semaphore slot
            SemaphoreExample.Pool.Release();
        }
    }
}
You can use Monitor.TryEnter:
http://msdn.microsoft.com/en-us/library/dd289679.aspx

How to properly lock an object

I am just reading a great tutorial about threads and have a problem with locks. I need a tip or some advice that will point me in the right direction. I'd like to understand why the output isn't ordered as I expect. The code below shows my simple example.
class Program {
    class A {
        public object obj = new object();
        public int i;
    }
    class B {
        public object obj = new object();
        public int j;
    }
    static void Main() {
        Console.Write("Thread1: ");
        A a = new A();
        for (a.i = 0; a.i < 9; a.i++) {
            lock (a) {
                new Thread(() => { Console.Write(a.i); }).Start();
            }
        }
        Thread.Sleep(500);
        Console.Write("\nThread2: ");
        B b = new B();
        for (b.j = 0; b.j < 9; b.j++) {
            new Thread(() => { lock (b) { Console.Write(b.j); } }).Start();
        }
        Console.ReadLine();
    }
}
Example output:
Thread1: 222456799
Thread2: 233357889
Link to the tutorial:
http://www.albahari.com/threading/
You are only locking while you create the thread, or (in the second case), access the value. Locks must be used by all threads, otherwise they do nothing. It is the act of trying to acquire the lock that blocks. Even if you did lock in both threads, that wouldn't help you marry each thread to the value of a.i (etc) at a particular point in time (that no longer exists).
Equally, threads work at their own pace; you cannot guarantee order unless you have a single worker and queue; or you implement your own re-ordering.
Each thread will run at its own pace, and since you are capturing the variable a, it is entirely likely that the field a.i has changed by the time the thread gets as far as Console.Write. Instead, you should capture the value by making a copy:
A a = new A();
for (a.i = 0; a.i < 9; a.i++) {
    var tmp = a.i;
    new Thread(() => { Console.Write(tmp); }).Start();
}
(or probably remove a completely)
for (int i = 0; i < 9; i++) {
    var tmp = i;
    new Thread(() => { Console.Write(tmp); }).Start();
}
There are several issues here:
First, you are locking on a while you create the thread, so the thread is created, but your original main thread then releases the lock and keeps on trucking through the loop while the created threads run concurrently.
You want to move the lock on a into the thread delegate that uses it, like this:
for (a.i = 0; a.i < 9; a.i++)
{
    int id = a.i;
    new Thread(() => { lock (a) { Console.Out.WriteLine("Thread{0} sees{1}", id, a.i); } }).Start(); // lots of smileys here :)
}
If you look closely, you will notice that the threads are not locked the same way for A and B, which tells you that threads live their own lives: thread creation != thread lifetime.
Even with locking in your thread runners, you can and will end up in situations where thread 1 runs AFTER thread 2... but they will never run at the same time, thanks to your lock.
You also reference a shared member in all your threads: a.i. This member is modified in the main thread, which doesn't lock anything, so your behaviour is not predictable. This is why I added the captured variable id, which grabs the value of a.i when the thread is created and is used inside the thread delegate in a safe way.
Also, always lock on a non-public instance. If you lock on a, make sure no one else can see a and get the opportunity to lock on it.
Because the lock is always taken by the main thread: you are starting the threads after acquiring the lock, and since only the main thread ever acquires it, there is no contention. The threads are then free to run however they want; the threads started by the main thread aren't synchronized by any lock. Something that comes closer to your expectations (in terms of order only) is the following; the count again depends on how fast your machine is and how many cores you've got. Note that b.j++ is now inside the lock.
for (b.j = 0; b.j < 9; )
{
    new Thread(() => { lock (b) { Console.Write(b.j); b.j++; } }).Start();
}
The basic idea behind locking a critical section is to allow only one thing to happen at a time, not to enforce an order. In the modification above I've locked the increment operation, which guarantees that before the next thread starts running the code under the lock, the current thread has to finish running all the code under its acquired lock and release it.
