I am writing a program that will run a variety of tasks. I have set up what I call a "Task Queue" in which I continually grab the next task to process (if there is one) and start a new thread to handle that task. However, I want to limit the number of threads that can spawn at one time, for obvious reasons. I created one variable to track the maximum number of threads to spawn and one for the current thread count, and I was thinking of using a lock to keep the current thread count accurate. Here is my general idea.
using System;
using System.Threading;

public class Program {
    private static int mintThreadCount;
    private static int mintMaxThreadCount = 10;
    private static object mobjLock = new object();

    static void Main(string[] args) {
        mintThreadCount = 0;
        int i = 100;
        while (i > 0) {
            StartNewThread();
            i--;
        }
        Console.Read();
    }

    private static void StartNewThread() {
        lock (mobjLock) {
            if (mintThreadCount < mintMaxThreadCount) {
                Thread newThread = new Thread(StartTask);
                newThread.Start(mintThreadCount);
                mintThreadCount++;
            }
            else {
                Console.WriteLine("Max Thread Count Reached.");
            }
        }
    }

    private static void StartTask(object iCurrentThreadCount) {
        int id = new Random().Next(0, 1000000);
        Console.WriteLine("New Thread with id of: " + id.ToString() + " Started. Current Thread count: " + ((int)iCurrentThreadCount).ToString());
        Thread.Sleep(new Random().Next(0, 3000));
        lock (mobjLock) {
            Console.WriteLine("Ending thread with id of: " + id.ToString() + " now.");
            mintThreadCount--;
            Console.WriteLine("Thread space released by id of: " + id.ToString() + ". Thread count now at: " + mintThreadCount);
        }
    }
}
Since I am locking in two places to access the same variable (incrementing when starting a new thread and decrementing when ending it), is there a chance that a thread waiting on the lock to decrement could get hung up and never end, thereby reaching the max thread count and never being able to start another one? Any alternate suggestions to my method?
Easiest question first… :)
…is there a chance that the thread waiting on the lock to decrement could get hung up and never end?
No, not in the code you posted. None of the code holds a lock while waiting for the count to change, or anything like that. You only ever take the lock, then either modify the count or emit a message, and immediately release the lock. So no thread will hold the lock indefinitely, nor are there nested locks (which could lead to deadlock if done incorrectly).
Now, that said: from the code you posted and your question, it's not entirely clear what the intent here is. The code as written will indeed limit the number of threads created. But once that limit is reached (and it will do so quickly), the main loop will just spin, reporting "Max Thread Count Reached.".
Indeed, with a total loop count of 100, it's possible that the entire loop could finish before the first thread even gets to run, depending on what else is tying up CPU cores on your system. If some threads do get to run and happen to be assigned very short sleep durations, you might sneak in a few more new threads later. But most iterations of the loop will see the thread count at the maximum, report that the limit has been reached, and continue with the next iteration.
You write in the comments (something you should really put in the question itself, if you think it's relevant) that "the main thread should never be blocked". Of course, the question there is, what is the main thread doing when not blocked? How will the main thread know if and when to try to schedule a new thread?
These are important details, if you want a really useful answer.
Note that you've been offered the suggestion of using a semaphore (specifically, SemaphoreSlim). This could be a good idea, but note that that class is typically used to coordinate multiple threads all competing for the same resource. For it to be useful, you'd actually have more than 10 threads, with the semaphore ensuring that only 10 get to run at a given time.
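As a rough sketch of that usage (the field and method names here are mine, not from the question), all the threads would be created up front, and the semaphore would gate how many run the work at once:

// Hypothetical sketch: more than 10 threads exist, but at most 10 are
// inside the Wait/Release window at any moment.
private static SemaphoreSlim msemThrottle = new SemaphoreSlim(10, 10);

private static void StartTaskThrottled() {
    msemThrottle.Wait();          // blocks while 10 threads are already working
    try {
        Thread.Sleep(new Random().Next(0, 3000));  // stand-in for the real work
    }
    finally {
        msemThrottle.Release();   // frees a slot for the next waiting thread
    }
}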
In your case, it seems to me that you are actually asking how to avoid creating the extra thread in the first place. I.e. you want the main loop to check the count and just not create a thread at all if the maximum count is reached. In that case, one possible solution might be to just use the Monitor class directly:
private static void StartNewThread() {
    lock (mobjLock) {
        while (mintThreadCount >= mintMaxThreadCount) {
            Console.WriteLine("Max Thread Count Reached.");
            Monitor.Wait(mobjLock);
        }
        Thread newThread = new Thread(StartTask);
        newThread.Start(mintThreadCount);
        mintThreadCount++;
    }
}
The above will cause the StartNewThread() method to wait until the count is below the maximum, and then will always create a new thread.
Of course, each thread needs to signal that it's updated the count, so that the above loop can be released from the wait and check the count:
private static readonly Random _rnd = new Random();

private static void StartTask(object iCurrentThreadCount) {
    // note: Random is not thread-safe; concurrent calls to _rnd should
    // strictly speaking be synchronized as well (see the note below)
    int id = _rnd.Next(0, 1000000);
    Console.WriteLine("New Thread with id of: " + id.ToString() + " Started. Current Thread count: " + ((int)iCurrentThreadCount).ToString());
    Thread.Sleep(_rnd.Next(0, 3000));
    lock (mobjLock) {
        Console.WriteLine("Ending thread with id of: " + id.ToString() + " now.");
        mintThreadCount--;
        Console.WriteLine("Thread space released by id of: " + id.ToString() + ". Thread count now at: " + mintThreadCount);
        Monitor.Pulse(mobjLock);
    }
}
The problem with the above is that it will block the main loop. Which if I understood correctly, you don't want.
(Note: you have a common-but-serious bug in your code, in that you create a new Random object each time you want a random number. To use the Random class correctly, you must create just one instance and reuse it as you need new random numbers. I've adjusted the code example above to fix that problem. Note also that Random is not thread-safe, so if several threads share one instance, access to it should strictly speaking be synchronized as well.)
One of the other problems, both with the above and with your original version, is that each new task is assigned a brand-new thread. Threads are expensive to create and even to simply exist, which is why thread pools exist. Depending on what your actual scenario is, it's possible that you should just be using e.g. the Parallel class, ParallelEnumerable, or Task to manage your tasks.
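For example, a hedged sketch of what the built-in route could look like (the range and the sleep stand in for your real task data; this requires System.Linq and System.Threading.Tasks):

// Sketch only: 100 work items, at most 10 running at once on pool threads.
// Note that Parallel.ForEach blocks the calling thread until it finishes.
Parallel.ForEach(
    Enumerable.Range(0, 100),
    new ParallelOptions { MaxDegreeOfParallelism = 10 },
    item => Thread.Sleep(100));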
But if you really want to do this all explicitly, one option is to simply start up ten threads, and have them retrieve data to operate on from a BlockingCollection<T>. Since you start exactly ten threads, you know you'll never have more than that running. When there is enough work for all ten threads to be busy, they will be. Otherwise, the queue will be empty and some or all will be waiting for new data to show in the queue. Idle, but not using any CPU resources.
For example:
// BlockingCollection<T> lives in System.Collections.Concurrent
private static readonly BlockingCollection<int> _queue = new BlockingCollection<int>();

private static void StartThreads() {
    for (int i = 0; i < mintMaxThreadCount; i++) {
        new Thread(StartTask).Start();
    }
}

private static void StartTask() {
    // NOTE: a random number can't be a reliable "identification", as two or
    // more threads could theoretically get the same "id".
    int id = _rnd.Next(0, 1000000);
    Console.WriteLine("New Thread with id of: " + id.ToString() + " Started.");
    foreach (int i in _queue) {
        Thread.Sleep(i);
    }
    Console.WriteLine("Ending thread with id of: " + id.ToString() + " now.");
}
You'd call StartThreads() just once somewhere, rather than calling your other StartNewThread() method multiple times. Presumably, before the while (true) loop you mentioned.
Then, as the need to process some task arises, you just add data to the queue, e.g.:
_queue.Add(_rnd.Next(0, 3000));
When you want the threads to all exit (e.g. after your main loop exits, however that happens):
_queue.CompleteAdding();
That will cause each of the foreach loops in progress to end, letting each thread exit.
Of course, the T type parameter for BlockingCollection<T> can be anything. Presumably, it will be whatever in your case actually represents a "task". I used int, only because that was effectively your "task" in your example (i.e. the number of milliseconds the thread should sleep).
Then your main thread can just do whatever it normally does, calling the Add() method to dispatch new work to your consumer threads as needed.
Again, without more details I can't really comment on whether this approach would be better than using one of the built-in task-running mechanisms in .NET. But it should work well, given what you've explained so far.
I have a program that starts 2 threads and uses Join. My understanding is that Join blocks the calling operation till the thread is finished executing. So the below program should give 2 million as the answer, since both threads block till execution is completed, but I always get a different value. This might be because the first thread has completed but the second thread has not run completely.
Can someone please explain the output?
Reference: Multithreading: When would I use a Join?
using System;
using System.Threading;

namespace ThreadSample
{
    class Program
    {
        static int Total = 0;

        public static void Main()
        {
            Thread thread1 = new Thread(Program.AddOneMillion);
            Thread thread2 = new Thread(Program.AddOneMillion);
            thread1.Start();
            thread2.Start();
            thread1.Join();
            thread2.Join();
            Console.WriteLine("Total = " + Total);
            Console.ReadLine();
        }

        public static void AddOneMillion()
        {
            for (int i = 1; i <= 1000000; i++)
            {
                Total++;
            }
        }
    }
}
When you call the Start method of a thread, it starts immediately. Hence, by the time you call Join on thread1, thread2 will also have started. As a result, the variable Total is accessed by both threads simultaneously, so you will not get the correct result: one thread's operation overwrites the value of Total, causing updates to be lost.
public static void Main()
{
    Thread thread1 = new Thread(Program.AddOneMillion);
    Thread thread2 = new Thread(Program.AddOneMillion);

    thread1.Start(); // starts immediately
    thread2.Start(); // starts immediately

    thread1.Join(); // by the time this line executes, both threads have been
                    // accessing the Total variable concurrently, losing updates
    thread2.Join();

    Console.WriteLine("Total = " + Total);
    Console.ReadLine();
}
In order to get correct results, you can either lock the increment of Total, as follows:
static object _l = new object();

public static void AddOneMillion()
{
    for (int i = 0; i < 1000000; i++)
    {
        lock (_l)
        {
            Total++;
        }
    }
}
Or you can use Interlocked.Increment, which updates the variable atomically.
Please refer to the link posted by @Emanuel Vintilă in the comments for more insight.
public static void AddOneMillion()
{
    for (int i = 0; i < 1000000; i++)
    {
        Interlocked.Increment(ref Total);
    }
}
It's because the increment operation is not done atomically. That means each thread may read Total, increment its private copy of the value, and write the result back, overwriting another thread's update. To avoid that you can use a lock, or Interlocked.Increment, which is specific to incrementing a variable.
Clarification:
thread 1: read copy of Total
thread 2: read copy of Total
thread 1: increment and store Total
thread 2: increment and store Total (overwriting previous value)
I leave it to you to enumerate all the possible scenarios where things could go wrong.
I would suggest avoiding explicit threading when possible and using map-reduce style operations, which are less error prone.
You should read about multi-threaded programming and the functional programming constructs available in mainstream languages. Most languages now have libraries to leverage the multicore capabilities of modern CPUs.
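For illustration, a minimal PLINQ sketch of the same computation with no shared mutable state (my example, not from the original answer; it requires System.Linq):

// Each of the 2,000,000 "increments" becomes a value to reduce; PLINQ
// partitions the work across cores and combines the partial sums safely.
int total = ParallelEnumerable.Range(0, 2000000).Sum(_ => 1);
Console.WriteLine("Total = " + total); // always 2000000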
I want to release all locked threads together after one of them passes and completes some task. Let me post some sample code about what I want to do. The important thing is that they must all pass together after the first thread has completed its job. The rest (the other 99 threads) must proceed as if they had never been locked, not pass one by one.
Monitor.Enter(_lock); // imagine 100 threads hit this lock at the same time
// only 1 thread passes here
if (data == null)
{
    data = GetData();
}
Monitor.Exit(_lock); // one more thread is allowed after this line, and they all
                     // come one by one, in order. At this point I want to release them all together.
I have tried lots of threading classes like Monitor, Mutex, Semaphore, ReaderWriterLock, ManualResetEvent, etc., but I didn't manage to do this; the threads always pass one by one. Have you ever done this? Or have you got any idea about it? I don't want to spend more time on it.
This might not be the most efficient way but it will work:
static SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
static CancellationTokenSource cts = new CancellationTokenSource();

static void CriticalSection()
{
    if (!cts.Token.IsCancellationRequested)
    {
        try
        {
            semaphore.Wait(cts.Token);
        }
        catch (OperationCanceledException) { }
    }

    /*
    Critical section here
    */

    if (!cts.Token.IsCancellationRequested)
        cts.Cancel();
}
The SemaphoreSlim will only let one thread run the critical section. After the first thread is done with the section, it cancels the token. This leads to an OperationCanceledException in every waiting thread, as described here. All the threads that were waiting will throw the exception, which is caught in the try/catch, and then execute the critical section. The first if statement checks the state of the token to avoid the wait-and-throw pattern if the token has already been cancelled in the past.
The performance hit comes the first time your threads are "released" from the wait, since they all throw an exception. After that, the only cost is the cancellation-token check and the general maintainability of your code.
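An alternative sketch of the same "release everyone at once" idea, using a ManualResetEventSlim gate plus an Interlocked flag (the names here are mine, not from the answer), which avoids the exception on first release:

static ManualResetEventSlim gate = new ManualResetEventSlim(false);
static int won = 0;

static void CriticalSection()
{
    // Exactly one thread wins the CompareExchange and does the one-time
    // work; everyone else blocks on the gate until Set() opens it for all
    // waiting threads simultaneously.
    if (Interlocked.CompareExchange(ref won, 1, 0) == 0)
    {
        // one-time work here, e.g. data = GetData();
        gate.Set();
    }
    else
    {
        gate.Wait();
    }
}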
static SemaphoreSlim semaphore = new SemaphoreSlim(1);

static void Main(string[] args)
{
    for (int i = 0; i < 10; i++)
    {
        Thread t = new Thread(LoadDataPart);
        t.Name = (i + 1).ToString();
        t.Start();
    }
    Console.Read();
}

static void LoadDataPart()
{
    Console.WriteLine("Before Wait {0}", Thread.CurrentThread.Name);
    semaphore.Wait();
    Console.WriteLine("After Wait {0}", Thread.CurrentThread.Name);
    Thread.Sleep(3000);
    Console.WriteLine("Done {0}", Thread.CurrentThread.Name);
    semaphore.Release(10); // this line must be changed; it allows too many
                           // threads in, because it is called 10 times!
}
I can manage what I want to do like this. In this code sample, 10 threads hit Wait; 9 of them wait and one keeps going. When the one thread is done with its job, the other 9 go together, not one by one. To check that, I put in a thread sleep, and all threads completed in 6 seconds, not 30. Now I can customize my code.
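One hedged way to fix the over-release the comment above warns about is to let only the first finishing thread open the gate (the releasedAll flag is my addition):

static int releasedAll = 0;

static void LoadDataPart()
{
    Console.WriteLine("Before Wait {0}", Thread.CurrentThread.Name);
    semaphore.Wait();
    Console.WriteLine("After Wait {0}", Thread.CurrentThread.Name);
    Thread.Sleep(3000);
    Console.WriteLine("Done {0}", Thread.CurrentThread.Name);

    // Only the first thread to finish releases the other 9; later threads
    // skip the Release so the semaphore count can't grow without bound.
    if (Interlocked.CompareExchange(ref releasedAll, 1, 0) == 0)
        semaphore.Release(9);
}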
This is an example of Thread Local Storage (TLS) from an Apress parallel programming book. I know that on a computer with 4 cores, 4 threads can run in parallel at the same time. In this example we create 10 tasks and we suppose we have a 4-core computer. Each thread local storage slot lives on one thread, so when the 10 tasks start in parallel, only 4 threads perform the work. We have 4 TLS slots, and 10 tasks try to change the 4 thread local storage objects. I want to ask how TLS prevents data race problems when the thread count is less than the task count.
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Listing_04
{
    class BankAccount
    {
        public int Balance { get; set; }
    }

    class Listing_04
    {
        static void Main(string[] args)
        {
            // create the bank account instance
            BankAccount account = new BankAccount();

            // create an array of tasks
            Task<int>[] tasks = new Task<int>[10];

            // create the thread local storage
            ThreadLocal<int> tls = new ThreadLocal<int>();

            for (int i = 0; i < 10; i++)
            {
                // create a new task
                tasks[i] = new Task<int>((stateObject) =>
                {
                    // get the state object and use it
                    // to set the TLS data
                    tls.Value = (int)stateObject;

                    // enter a loop for 1000 balance updates
                    for (int j = 0; j < 1000; j++)
                    {
                        // update the TLS balance
                        tls.Value++;
                    }

                    // return the updated balance
                    return tls.Value;
                }, account.Balance);

                // start the new task
                tasks[i].Start();
            }

            // get the result from each task and add it to
            // the balance
            for (int i = 0; i < 10; i++)
            {
                account.Balance += tasks[i].Result;
            }

            // write out the counter value
            Console.WriteLine("Expected value {0}, Balance: {1}",
                10000, account.Balance);

            // wait for input before exiting
            Console.WriteLine("Press enter to finish");
            Console.ReadLine();
        }
    }
}
We have 4 TLS slots, and 10 tasks try to change the 4 thread local storage objects
In your example, you could have anywhere between 1 and 10 TLS slots. This is because a) you are not managing your threads explicitly and so the tasks are executed using the thread pool, and b) the thread pool creates and destroys threads over time according to demand.
A loop of only 1000 iterations will complete almost instantaneously. So it's likely all ten of your tasks will get through the thread pool before the pool decides a work item has been waiting long enough to justify adding any new threads. But there is no guarantee of this.
Some important parts of the documentation include these statements:
By default, the minimum number of threads is set to the number of processors on a system
and
When demand is low, the actual number of thread pool threads can fall below the minimum values.
In other words, on your four-core system, the default minimum number of threads is four, but the actual number of threads active in the thread pool could in fact be less than that. And if the tasks take long enough to execute, the number of active threads could rise above that.
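You can check these values directly; a small sketch (my illustration, not part of the original answer):

// Prints the thread pool's minimum worker/IO-completion thread counts
// alongside the machine's logical core count.
ThreadPool.GetMinThreads(out int workerMin, out int ioMin);
Console.WriteLine($"min worker: {workerMin}, min IO: {ioMin}, cores: {Environment.ProcessorCount}");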
The biggest thing to keep in mind here is that using TLS in the context of a thread pool is almost certainly the wrong thing to do.
You use TLS when you have control over the threads, and you want a thread to be able to maintain some data private or unique to that thread. That's the opposite of what happens when you are using the thread pool. Even in the simplest case, multiple tasks can use the same thread, and so would wind up sharing TLS. And in more complicated scenarios, such as when using await, a single task could wind up executed in different threads, and so that one task could wind up using different TLS values depending on what thread is assigned to that task at that moment.
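To illustrate the intended use, here is a sketch I've made up with dedicated threads: each thread keeps a private counter, and nothing is shared.

// Each explicitly created thread has its own ThreadLocal slot for the
// counter, so the 4 threads never share state and no race is possible.
ThreadLocal<int> perThreadCounter = new ThreadLocal<int>(() => 0);

for (int t = 0; t < 4; t++)
{
    new Thread(() =>
    {
        for (int j = 0; j < 1000; j++)
            perThreadCounter.Value++;
        Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId}: {perThreadCounter.Value}");
    }).Start();
}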
how TLS prevents data race problems when the thread count is less than the task count
That depends on what "data race problem" you're talking about.
The fact is, the code you posted is filled with problems that are at the very least odd, if not outright wrong. For example, you are passing account.Balance as the initial value for each task. But why? This value is evaluated when you create the task, before it could ever be modified later, so what's the point of passing it?
And if you thought you were passing whatever the current value is when the task starts, that seems like that would be wrong too. Why would it be valid to make the starting value for a given task vary according to how many tasks had already completed and been accounted for in your later loop? (To be clear: that's not what's happening…but even if it were, it'd be a strange thing to do.)
Beyond all that, it's not clear what you thought using TLS here would accomplish anyway. When each task starts, you reinitialize the TLS value to 0 (i.e. the value of account.Balance that you've passed to the Task<int> constructor). So no thread involved ever sees a value other than 0 during the context of executing any given task. A local variable would accomplish exactly the same thing, without the overhead of TLS and without confusing anyone who reads the code and tries to figure out why TLS was used when it adds no value to the code.
So, does TLS solve some sort of "data race problem"? Not in this example, it doesn't appear to. So asking how it does that is impossible to answer. It doesn't do that, so there is no "how".
For what it's worth, I modified your example slightly so that it would report the individual threads that were assigned to the tasks. I found that on my machine, the number of threads used varied between two and eight. This is consistent with my eight-core machine, with the variation due to how much the first thread in the pool can get done before the pool has initialized additional threads and assigned tasks to them. Most commonly, I would see the first thread completing between three and five of the tasks, with the remaining tasks handled by remaining individual threads.
In each case, the thread pool created eight threads as soon as the tasks were started. But most of the time, at least one of those threads wound up unused, because the other threads were able to complete the tasks before the pool was saturated. That is, there is overhead in the thread pool just managing the tasks, and in your example the tasks are so inexpensive that this overhead allows one or more thread pool threads to finish one task before the thread pool needs that thread for another.
I've copied that version below. Note that I also added a delay between trial iterations, to allow the thread pool to terminate the threads it created (on my machine, this took 20 seconds, hence the delay time hard-coded…you can see the threads being terminated in the debugger output).
static void Main(string[] args)
{
    while (_PromptContinue())
    {
        // create the bank account instance
        BankAccount account = new BankAccount();

        // create an array of tasks
        Task<int>[] tasks = new Task<int>[10];

        // create the thread local storage
        ThreadLocal<int> tlsBalance = new ThreadLocal<int>();
        ThreadLocal<(int Id, int Count)> tlsIds = new ThreadLocal<(int, int)>(
            () => (Thread.CurrentThread.ManagedThreadId, 0), true);

        for (int i = 0; i < 10; i++)
        {
            int k = i;

            // create a new task
            tasks[i] = new Task<int>((stateObject) =>
            {
                // get the state object and use it
                // to set the TLS data
                tlsBalance.Value = (int)stateObject;
                (int id, int count) = tlsIds.Value;
                tlsIds.Value = (id, count + 1);
                Console.WriteLine($"task {k}: thread {id}, initial value {tlsBalance.Value}");

                // enter a loop for 1000 balance updates
                for (int j = 0; j < 1000; j++)
                {
                    // update the TLS balance
                    tlsBalance.Value++;
                }

                // return the updated balance
                return tlsBalance.Value;
            }, account.Balance);

            // start the new task
            tasks[i].Start();
        }

        // Make sure this thread isn't busy at all while the thread pool threads are working
        Task.WaitAll(tasks);

        // get the result from each task and add it to
        // the balance
        for (int i = 0; i < 10; i++)
        {
            account.Balance += tasks[i].Result;
        }

        // write out the counter value
        Console.WriteLine("Expected value {0}, Balance: {1}", 10000, account.Balance);
        Console.WriteLine("{0} thread ids used: {1}",
            tlsIds.Values.Count,
            string.Join(", ", tlsIds.Values.Select(t => $"{t.Id} ({t.Count})")));
        System.Diagnostics.Debug.WriteLine("done!");

        _Countdown(TimeSpan.FromSeconds(20));
    }
}

private static void _Countdown(TimeSpan delay)
{
    System.Diagnostics.Stopwatch sw = System.Diagnostics.Stopwatch.StartNew();
    TimeSpan remaining = delay - sw.Elapsed,
        sleepMax = TimeSpan.FromMilliseconds(250);
    int cchMax = $"{delay.TotalSeconds,2:0}".Length;
    string format = $"\r{{0,{cchMax}:0}}", previousText = null;

    while (remaining > TimeSpan.Zero)
    {
        string nextText = string.Format(format, remaining.TotalSeconds);
        if (previousText != nextText)
        {
            Console.Write(format, remaining.TotalSeconds);
            previousText = nextText;
        }
        Thread.Sleep(remaining > sleepMax ? sleepMax : remaining);
        remaining = delay - sw.Elapsed;
    }
    Console.Write(new string(' ', cchMax));
    Console.Write('\r');
}

private static bool _PromptContinue()
{
    Console.Write("Press Esc to exit, any other key to proceed: ");
    try
    {
        return Console.ReadKey(true).Key != ConsoleKey.Escape;
    }
    finally
    {
        Console.WriteLine();
    }
}
So, I have a method which contains an async call to the server.
That code is called from a 3rd party tool, which somehow sometimes calls the same method several times in a row from different threads, so I can't affect that.
What I want to ensure is that my method is called once, and any other calls while it runs are ignored.
At first, I tried lock(locker) together with a bool isBusy flag, but that did not satisfy me: the async request was still executed several times, because a second thread could get past the check before isBusy had been set to true.
Then, I tried Monitor:
object obj = new object();
Monitor.TryEnter(obj);
try
{
    var res = await _dataService.RequestServerAsync(SelectedIndex, e.StartIndex, e.Count);
    ****
}
finally
{
    Monitor.Exit(obj);
}
However, on Exit() I'm getting an exception:
A first chance exception of type
'System.Threading.SynchronizationLockException'
Is there any other way to guarantee that the code executes only one time?
Put in the class:
private int entered = 0;
and in the method:
if (Interlocked.Increment(ref entered) != 1)
{
return;
}
Only the first call to the method will be able to change entered from 0 to 1. The others will make it 2, 3, 4, 5...
Clearly you'll need something to reset entered if you want your method to be callable again later...
Interlocked.Exchange(ref entered, 0);
at the end of a successful call to the method.
Ah... and it isn't possible to use lock/Monitor.* around an await, because the thread executing the method can change, while nearly all the synchronization primitives expect that the thread that enters a lock is the same thread that exits it.
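If you do need mutual exclusion around an await (rather than the "first caller wins" behavior above), the usual tool is SemaphoreSlim.WaitAsync, which has no thread affinity. A sketch, reusing the call from the question (everything outside that call is my naming):

private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

private async Task RequestExclusivelyAsync()
{
    await _gate.WaitAsync();   // safe across await: the semaphore has no owning thread
    try
    {
        var res = await _dataService.RequestServerAsync(SelectedIndex, e.StartIndex, e.Count);
        // ...
    }
    finally
    {
        _gate.Release();       // any thread may release a SemaphoreSlim
    }
}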
You can even use the Interlocked.CompareExchange()...
if (Interlocked.CompareExchange(ref entered, 1, 0) != 0)
{
return;
}
The first thread to enter will be able to exchange the value of entered from 0 to 1, and it will receive the old value, 0 (so it fails the if and continues with the remaining code). The other threads will fail the CompareExchange, see the "current" value of 1, enter the if, and exit the method.
If you do want to restrict multiple threads from using the same method concurrently, then I would use the Semaphore class to enforce the required thread limit; here's how...
A semaphore is like a mean night club bouncer: it has been given a club capacity and is not allowed to exceed this limit. Once the club is full, no one else can enter and a queue builds up outside. Then, as one person leaves, another can enter (analogy thanks to J. Albahari).
A Semaphore with a value of one is equivalent to a Mutex or Lock except that the Semaphore has no owner so that it is thread ignorant. Any thread can call Release on a Semaphore whereas with a Mutex/Lock only the thread that obtained the Mutex/Lock can release it.
Now, for your case we are able to use Semaphores to limit concurrency and prevent too many threads from executing a particular piece of code at once. In the following example five threads try to enter a night club that only allows entry to three...
class BadAssClub
{
    static SemaphoreSlim sem = new SemaphoreSlim(3);

    static void Main()
    {
        for (int i = 1; i <= 5; i++)
            new Thread(Enter).Start(i);
    }

    // Enforce only three threads running this method at once.
    // (The parameter must be object to match ParameterizedThreadStart.)
    static void Enter(object i)
    {
        try
        {
            Console.WriteLine(i + " wants to enter.");
            sem.Wait();
            Console.WriteLine(i + " is in!");
            Thread.Sleep(1000 * (int)i);
            Console.WriteLine(i + " is leaving...");
        }
        finally
        {
            sem.Release();
        }
    }
}
Note that SemaphoreSlim is a lighter-weight version of the Semaphore class and incurs about a quarter of the overhead. It is sufficient for what you require.
I hope this helps.
I have a queue, a list with producer threads and a list with consumer threads.
My code looks like this
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class Runner
{
    List<Thread> Producers;
    List<Thread> Consumers;
    Queue<int> queue;
    Random random;

    public Runner()
    {
        Producers = new List<Thread>();
        Consumers = new List<Thread>();

        for (int i = 0; i < 2; i++)
        {
            Thread thread = new Thread(Produce);
            Producers.Add(thread);
        }
        for (int i = 0; i < 2; i++)
        {
            Thread thread = new Thread(Consume);
            Consumers.Add(thread);
        }

        queue = new Queue<int>();
        random = new Random();

        Producers.ForEach((thread) => { thread.Start(); });
        Consumers.ForEach((thread) => { thread.Start(); });
    }

    protected void Produce()
    {
        while (true)
        {
            int number = random.Next(0, 99);
            queue.Enqueue(number);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Produce: " + number);
        }
    }

    protected void Consume()
    {
        while (true)
        {
            if (queue.Any())
            {
                int number = queue.Dequeue();
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
            }
            else
            {
                Console.WriteLine("No items to consume");
            }
        }
    }
}
Shouldn't this fail miserably because of the missing use of the lock keyword?
It failed once because it tried to dequeue when the queue was empty; using the lock keyword will fix that, right?
If the lock keyword is not needed for the above code, when is it needed then?
Thank you in advance! =)
Locking is done to eliminate aberrant behavior in an application, most specifically in multithreading. The most common goal is the elimination of a "race condition", which causes non-deterministic program behavior.
This is the behavior you saw: in one run you get an error because the queue has no items, in another run you have no issues. This is a race condition. Proper use of locking will eliminate this scenario.
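A minimal sketch of the locking fix (the lock object name is mine; the producers' Enqueue calls must take the same lock):

private readonly object _sync = new object();

protected void Consume()
{
    while (true)
    {
        int number;
        lock (_sync)
        {
            // Check and dequeue under the same lock; otherwise another
            // consumer can empty the queue between the two calls.
            if (queue.Count == 0)
                continue; // the lock is released on continue
            number = queue.Dequeue();
        }
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
    }
}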
Using Queue without locks is indeed not thread safe. But rather than using locks, you may try ConcurrentQueue. Google for "C# ConcurrentQueue" and you will find quite a lot of examples, e.g. this one, which compares the use and performance of Queue with a lock and ConcurrentQueue.
To clarify the existing answers, if you have a multithreading problem (such as a race condition) then it isn't guaranteed to always fail - it may fail, in a very unpredictable manner.
The reason is that two (or more) threads that are accessing a resource may try to access it at different times - precisely when each of them tries to access it will depend on many factors (how fast your CPU is, how many processor cores it has available, what other programs are running at the time, whether you are running a release or debug build, or running under a debugger, etc). You could run it many times without the failure showing up, and then have it suddenly and "inexplicably" fail - this can make these errors extremely hard to track down because they don't often show up while you're writing the faulty code, but more often when you are writing a different unrelated piece of code.
If you are going to use multithreading, it is vital that you read up on the subject and gain an understanding of what can go wrong, when, and how to handle it properly. Bad use of locking can be just as dangerous as not using locks at all, if not more so (locking can cause deadlocks, where your program simply "locks up"). This area of programming must be approached carefully!
Yes this code will fail. The queue needs to support multi-threading. Use a ConcurrentQueue. See http://msdn.microsoft.com/en-us/library/dd267265.aspx
By running your code I received an InvalidOperationException: "Collection was modified after the enumerator was instantiated." It means that you modified the data while using several threads.
You can use a lock every time you Enqueue or Dequeue, because you modify the queue from several threads. A far better option is to use ConcurrentQueue, as it is a thread-safe, lock-free concurrent collection. It also provides better performance.
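A short sketch of the drop-in change (my illustration; ConcurrentQueue lives in System.Collections.Concurrent):

// ConcurrentQueue is safe to share between producers and consumers.
ConcurrentQueue<int> queue = new ConcurrentQueue<int>();

queue.Enqueue(42);                      // producer side, no lock needed
if (queue.TryDequeue(out int number))   // consumer side: returns false if empty
    Console.WriteLine("Consume: " + number);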
Yep, you would definitely need to synchronize access to the Queue to make it thread-safe. But you have another problem: there is no mechanism that keeps the consumers from spinning wildly around the loop. Synchronizing access to the Queue or using ConcurrentQueue will not fix that problem.
The simplest way to implement the producer-consumer pattern is to use a blocking queue. Fortunately, .NET 4.0 provides the BlockingCollection which is, despite the name, an implementation of a blocking queue.
using System;
using System.Collections.Concurrent;
using System.Threading;

public class Runner
{
    private BlockingCollection<int> queue = new BlockingCollection<int>();
    private Random random = new Random();

    public Runner()
    {
        for (int i = 0; i < 2; i++)
        {
            var thread = new Thread(Produce);
            thread.Start();
        }
        for (int i = 0; i < 2; i++)
        {
            var thread = new Thread(Consume);
            thread.Start();
        }
    }

    protected void Produce()
    {
        while (true)
        {
            int number = random.Next(0, 99);
            queue.Add(number);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Produce: " + number);
        }
    }

    protected void Consume()
    {
        while (true)
        {
            // Take blocks until an item is available; no spinning.
            int number = queue.Take();
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
        }
    }
}