I have a queue, a list with producer threads and a list with consumer threads.
My code looks like this:
public class Runner
{
    List<Thread> Producers;
    List<Thread> Consumers;
    Queue<int> queue;
    Random random;

    public Runner()
    {
        Producers = new List<Thread>();
        Consumers = new List<Thread>();
        for (int i = 0; i < 2; i++)
        {
            Thread thread = new Thread(Produce);
            Producers.Add(thread);
        }
        for (int i = 0; i < 2; i++)
        {
            Thread thread = new Thread(Consume);
            Consumers.Add(thread);
        }
        queue = new Queue<int>();
        random = new Random();
        Producers.ForEach((thread) => { thread.Start(); });
        Consumers.ForEach((thread) => { thread.Start(); });
    }

    protected void Produce()
    {
        while (true)
        {
            int number = random.Next(0, 99);
            queue.Enqueue(number);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Produce: " + number);
        }
    }

    protected void Consume()
    {
        while (true)
        {
            if (queue.Any())
            {
                int number = queue.Dequeue();
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
            }
            else
            {
                Console.WriteLine("No items to consume");
            }
        }
    }
}
Shouldn't this fail miserably because of the missing use of the lock keyword?
It failed once because it tried to dequeue when the queue was empty; using the lock keyword will fix that, right?
If the lock keyword is not needed for the above code, when is it needed?
Thank you in advance! =)
Locking is done to eliminate aberrant behavior in an application, most specifically in multithreading. The most common goal is the elimination of a "race condition", which causes non-deterministic program behavior.
This is the behavior you saw. In one run you get an error for the queue having no items, in another run you have no issues. This is a race condition. Proper usage of locking will eliminate this scenario.
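For illustration, a minimal sketch (not from the original post) of what proper locking could look like here, assuming a shared lock object is added to the Runner class. The key point is that the emptiness check and the Dequeue must happen under the same lock, otherwise another consumer can empty the queue between the two calls:

private readonly object queueLock = new object();

protected void Produce()
{
    while (true)
    {
        int number = random.Next(0, 99);
        lock (queueLock)
        {
            queue.Enqueue(number);
        }
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Produce: " + number);
    }
}

protected void Consume()
{
    while (true)
    {
        int number;
        lock (queueLock)
        {
            // Check and dequeue under one lock so no other consumer can
            // empty the queue between the check and the Dequeue call.
            if (queue.Count == 0)
                continue;
            number = queue.Dequeue();
        }
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
    }
}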
Indeed, using Queue without locks is not thread safe. But rather than using locks, you may try ConcurrentQueue. Google for "C# ConcurrentQueue" and you will find quite a lot of examples, e.g. this one compares the use and performance of Queue with a lock and ConcurrentQueue.
To clarify the existing answers, if you have a multithreading problem (such as a race condition) then it isn't guaranteed to always fail - it may fail, in a very unpredictable manner.
The reason is that two (or more) threads that are accessing a resource may try to access it at different times - precisely when each of them tries to access it will depend on many factors (how fast your CPU is, how many processor cores it has available, what other programs are running at the time, whether you are running a release or debug build, or running under a debugger, etc). You could run it many times without the failure showing up, and then have it suddenly and "inexplicably" fail - this can make these errors extremely hard to track down because they don't often show up while you're writing the faulty code, but more often when you are writing a different unrelated piece of code.
If you are going to use multithreading, it is vital that you read up on the subject and gain an understanding of what can go wrong, when, and how to handle it properly - bad use of locking can be just as dangerous (if not more so) than not using locks at all (locking can cause deadlocks, where your program simply "locks up"). This area of programming must be approached carefully!
Yes this code will fail. The queue needs to support multi-threading. Use a ConcurrentQueue. See http://msdn.microsoft.com/en-us/library/dd267265.aspx
By running your code I received an InvalidOperationException - "Collection was modified after the enumerator was instantiated." It means that you modified the data while using several threads.
You can use the lock every time you Enqueue or Dequeue - because you modify the queue from several threads. A far better option is to use ConcurrentQueue, as it is a thread-safe and lock-free concurrent collection. It also provides better performance.
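For example, a minimal sketch of the ConcurrentQueue approach (using System.Collections.Concurrent): TryDequeue atomically combines the "is there an item?" check and the dequeue, which removes the race present in the original consume loop.

var queue = new ConcurrentQueue<int>();

// Producer side: Enqueue is thread safe as-is.
queue.Enqueue(42);

// Consumer side: check and dequeue happen as one atomic operation.
int number;
if (queue.TryDequeue(out number))
{
    Console.WriteLine("Consume: " + number);
}
else
{
    Console.WriteLine("No items to consume");
}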
Yep, you would definitely need to synchronize access to the Queue to make it thread-safe. But you have another problem: there is no mechanism which keeps the consumers from spinning wildly around the loop. Synchronizing access to the Queue or using ConcurrentQueue will not fix that problem.
The simplest way to implement the producer-consumer pattern is to use a blocking queue. Fortunately, .NET 4.0 provides the BlockingCollection which is, despite the name, an implementation of a blocking queue.
public class Runner
{
    private BlockingCollection<int> queue = new BlockingCollection<int>();
    private Random random = new Random();

    public Runner()
    {
        for (int i = 0; i < 2; i++)
        {
            var thread = new Thread(Produce);
            thread.Start();
        }
        for (int i = 0; i < 2; i++)
        {
            var thread = new Thread(Consume);
            thread.Start();
        }
    }

    protected void Produce()
    {
        while (true)
        {
            int number = random.Next(0, 99);
            queue.Add(number);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Produce: " + number);
        }
    }

    protected void Consume()
    {
        while (true)
        {
            int number = queue.Take();
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Consume: " + number);
        }
    }
}
I have a program I am writing that will run a variety of tasks. I have set up what I have called a "task queue" in which I will continually grab the next task to process (if there is one) and start a new thread to handle that task. However, I want to limit the number of threads that can spawn at one time, for obvious reasons. I created a variable to keep track of the maximum number of threads to spawn and one for the current thread count. I was thinking of using a lock to accurately keep track of the current thread count. Here is my general idea.
public class Program {
    private static int mintThreadCount;
    private static int mintMaxThreadCount = 10;
    private static object mobjLock = new object();

    static void Main(string[] args) {
        mintThreadCount = 0;
        int i = 100;
        while(i > 0) {
            StartNewThread();
            i--;
        }
        Console.Read();
    }

    private static void StartNewThread() {
        lock(mobjLock) {
            if(mintThreadCount < mintMaxThreadCount) {
                Thread newThread = new Thread(StartTask);
                newThread.Start(mintThreadCount);
                mintThreadCount++;
            }
            else {
                Console.WriteLine("Max Thread Count Reached.");
            }
        }
    }

    private static void StartTask(object iCurrentThreadCount) {
        int id = new Random().Next(0, 1000000);
        Console.WriteLine("New Thread with id of: " + id.ToString() + " Started. Current Thread count: " + ((int)iCurrentThreadCount).ToString());
        Thread.Sleep(new Random().Next(0, 3000));
        lock(mobjLock) {
            Console.WriteLine("Ending thread with id of: " + id.ToString() + " now.");
            mintThreadCount--;
            Console.WriteLine("Thread space release by id of: " + id.ToString() + " . Thread count now at: " + mintThreadCount);
        }
    }
}
Since I am locking in two places to access the same variable (increment when starting the new thread and decrement when ending it) is there a chance that the thread waiting on the lock to decrement could get hung up and never end? Thereby reaching max thread count and never being able to start another one? Any alternate suggestions to my method?
Easiest question first… :)
…is there a chance that the thread waiting on the lock to decrement could get hung up and never end?
No, not in the code you posted. None of the code holds a lock while waiting for the count to change, or anything like that. You only ever take the lock, then either modify the count or emit a message, and immediately release the lock. So no thread will hold the lock indefinitely, nor are there nested locks (which could lead to deadlock if done incorrectly).
Now, that said: from the code you posted and your question, it's not entirely clear what the intent here is. The code as written will indeed limit the number of threads created. But once that limit is reached (and it will do so quickly), the main loop will just spin, reporting "Max Thread Count Reached.".
Indeed, with a total loop count of 100, I think it's possible that the entire loop could finish before the first thread even gets to run, depending on what else is tying up CPU cores on your system. If some threads do get to run and it happens that some of them get very low durations to sleep, there's a chance that you might sneak in a few more threads later. But most of the iterations of the loop will see the thread count at the maximum, report the limit has been reached and continue with the next iteration of the loop.
You write in the comments (something you should really put in the question itself, if you think it's relevant) that "the main thread should never be blocked". Of course, the question there is, what is the main thread doing when not blocked? How will the main thread know if and when to try to schedule a new thread?
These are important details, if you want a really useful answer.
Note that you've been offered the suggestion of using a semaphore (specifically, SemaphoreSlim). This could be a good idea, but note that that class is typically used to coordinate multiple threads all competing for the same resource. For it to be useful, you'd actually have more than 10 threads, with the semaphore ensuring that only 10 get to run at a given time.
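For reference, a minimal sketch of that approach (names assumed, not from the original answer): each worker acquires a slot from a SemaphoreSlim before doing its work, so at most ten run concurrently no matter how many threads exist.

private static readonly SemaphoreSlim _slots = new SemaphoreSlim(10, 10);

private static void StartTask(object state) {
    _slots.Wait();          // blocks while 10 workers already hold a slot
    try {
        // ... do the actual work ...
    }
    finally {
        _slots.Release();   // free the slot even if the work throws
    }
}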
In your case, it seems to me that you are actually asking how to avoid creating the extra thread in the first place. I.e. you want the main loop to check the count and just not create a thread at all if the maximum count is reached. In that case, one possible solution might be to just use the Monitor class directly:
private static void StartNewThread() {
    lock(mobjLock) {
        while (mintThreadCount >= mintMaxThreadCount) {
            Console.WriteLine("Max Thread Count Reached.");
            Monitor.Wait(mobjLock);
        }
        Thread newThread = new Thread(StartTask);
        newThread.Start(mintThreadCount);
        mintThreadCount++;
    }
}
The above will cause the StartNewThread() method to wait until the count is below the maximum, and then will always create a new thread.
Of course, each thread needs to signal that it's updated the count, so that the above loop can be released from the wait and check the count:
private static readonly Random _rnd = new Random();

private static void StartTask(object iCurrentThreadCount) {
    int id = _rnd.Next(0, 1000000);
    Console.WriteLine("New Thread with id of: " + id.ToString() + " Started. Current Thread count: " + ((int)iCurrentThreadCount).ToString());
    Thread.Sleep(_rnd.Next(0, 3000));
    lock(mobjLock) {
        Console.WriteLine("Ending thread with id of: " + id.ToString() + " now.");
        mintThreadCount--;
        Console.WriteLine("Thread space release by id of: " + id.ToString() + " . Thread count now at: " + mintThreadCount);
        Monitor.Pulse(mobjLock);
    }
}
The problem with the above is that it will block the main loop, which, if I understood correctly, you don't want.
(Note: you have a common-but-serious bug in your code, in that you create a new Random object each time you want a random number. To use the Random class correctly, you must create just one instance and reuse it as you want new random numbers. I've adjusted the code example above to fix that problem).
One of the other problems, both with the above and with your original version, is that each new task is assigned a brand new thread. Threads are expensive to create and even to simply have around, which is why thread pools exist. Depending on what your actual scenario is, it's possible that you should just be using e.g. Parallel, ParallelEnumerable, or Task to manage your tasks.
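For instance, a hedged sketch of letting the TPL enforce the limit instead of counting threads by hand (workItems stands in for whatever your real task source is): MaxDegreeOfParallelism caps how many items are processed at once.

var options = new ParallelOptions { MaxDegreeOfParallelism = 10 };
Parallel.ForEach(workItems, options, item => {
    // ... process one task; at most 10 run concurrently ...
});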
But if you really want to do this all explicitly, one option is to simply start up ten threads, and have them retrieve data to operate on from a BlockingCollection<T>. Since you start exactly ten threads, you know you'll never have more than that running. When there is enough work for all ten threads to be busy, they will be. Otherwise, the queue will be empty and some or all will be waiting for new data to show in the queue. Idle, but not using any CPU resources.
For example:
private static BlockingCollection<int> _queue = new BlockingCollection<int>();

private static void StartThreads() {
    for (int i = 0; i < mintMaxThreadCount; i++) {
        new Thread(StartTask).Start();
    }
}

private static void StartTask() {
    // NOTE: a random number can't be a reliable "identification", as two or
    // more threads could theoretically get the same "id".
    int id = _rnd.Next(0, 1000000);
    Console.WriteLine("New Thread with id of: " + id.ToString() + " Started.");

    // GetConsumingEnumerable() blocks while the queue is empty and ends
    // once CompleteAdding() has been called and the queue is drained.
    foreach (int i in _queue.GetConsumingEnumerable()) {
        Thread.Sleep(i);
    }
}
You'd call StartThreads() just once somewhere, rather than calling your other StartNewThread() method multiple times. Presumably, before the while (true) loop you mentioned.
Then as the need to process some task, you just add data to the queue, e.g.:
_queue.Add(_rnd.Next(0, 3000));
When you want the threads to all exit (e.g. after your main loop exits, however that happens):
_queue.CompleteAdding();
That will cause each of the foreach loops in progress to end, letting each thread exit.
Of course, the T type parameter for BlockingCollection<T> can be anything. Presumably, it will be whatever in your case actually represents a "task". I used int, only because that was effectively your "task" in your example (i.e. the number of milliseconds the thread should sleep).
Then your main thread can just do whatever it normally does, calling the Add() method to dispatch new work to your consumer threads as needed.
Again, without more details I can't really comment on whether this approach would be better than using one of the built-in task-running mechanisms in .NET. But it should work well, given what you've explained so far.
I have an integration service which runs a calculation-heavy, data-bound process. I want to make sure that there are never more than, say, n = 5 (but n will be configurable and changeable at runtime) of these processes running at the same time. The idea is to throttle the load on the server to a safe level. The amount of data processed by the method is limited by batching, so I don't need to worry about one process representing a much bigger load than another.
The processing method is called by another process, where requests to run payroll are held on a queue, and I can insert some logic at that point to determine whether to process this request now, or leave it on the queue.
So I want a separate method on the same service as the processing method, which can tell me whether the server can accept another call to the processing method. It's going to ask, "how many payroll runs are going on? Is that less than n?" What's a neat way of achieving this?
-----------edit------------
I think I need to make it clear: the process that decides whether to take the request off the queue is separated from the service that processes the payroll data by a WCF boundary. Stopping a thread in the payroll processing process isn't going to prevent more requests coming in.
You can use a Semaphore to do this.
public class Foo
{
    private Semaphore semaphore;

    public Foo(int numConcurrentCalls)
    {
        semaphore = new Semaphore(numConcurrentCalls, numConcurrentCalls);
    }

    public bool IsReady()
    {
        return semaphore.WaitOne(0);
    }

    public void Bar()
    {
        try
        {
            semaphore.WaitOne(); // it will only get past this line if there are fewer than
                                 // "numConcurrentCalls" threads in this method currently.
            //do stuff
        }
        finally
        {
            semaphore.Release();
        }
    }
}
Review the Object Pool pattern. This is what you're describing. While not strictly required by the pattern, you can expose the number of objects currently in the pool, the maximum (configured) number, the high-watermark, etc.
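A minimal sketch of that idea (all names assumed, not a full Object Pool implementation): a fixed number of slots that are acquired and released, with the current usage observable from outside.

public class ProcessSlotPool
{
    private readonly int _max;
    private readonly object _sync = new object();
    private int _inUse;

    public ProcessSlotPool(int max) { _max = max; }

    // How many payroll runs are going on right now.
    public int InUse { get { lock (_sync) return _inUse; } }

    // Returns false when the pool is exhausted, i.e. n runs are active.
    public bool TryAcquire()
    {
        lock (_sync)
        {
            if (_inUse >= _max) return false;
            _inUse++;
            return true;
        }
    }

    public void Release()
    {
        lock (_sync) { _inUse--; }
    }
}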
I think that you might want a BlockingCollection, where each item in the collection represents one of the concurrent calls.
Also see IProducerConsumerCollection.
If you were just using threads, I'd suggest you look at the methods for limiting thread concurrency (e.g. the TaskScheduler.MaximumConcurrencyLevel property).
Also see ParallelEnumerable.WithDegreeOfParallelism
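For example, a brief PLINQ sketch (batches and ProcessBatch are assumed names): WithDegreeOfParallelism caps the number of concurrent workers, so no more than five batches are processed at once.

batches.AsParallel()
       .WithDegreeOfParallelism(5)
       .ForAll(batch => ProcessBatch(batch));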
void ThreadTest()
{
    ConcurrentQueue<int> q = new ConcurrentQueue<int>();
    int MaxCount = 5;
    Random r = new Random();

    for (int i = 0; i <= 10000; i++)
    {
        q.Enqueue(r.Next(100000, 200000));
    }

    ThreadStart proc = null;
    proc = () =>
    {
        int read = 0;
        if (q.TryDequeue(out read))
        {
            Console.WriteLine(String.Format("[{1:HH:mm:ss}.{1:fff}] starting: {0}... #Thread {2}", read, DateTime.Now, Thread.CurrentThread.ManagedThreadId));
            Thread.Sleep(r.Next(100, 1000));
            Console.WriteLine(String.Format("[{1:HH:mm:ss}.{1:fff}] {0} ended! #Thread {2}", read, DateTime.Now, Thread.CurrentThread.ManagedThreadId));
            proc(); // each thread keeps taking items until the queue is empty
        }
    };

    for (int i = 0; i < MaxCount; i++)
    {
        new Thread(proc).Start();
    }
}
I have a queue; it receives data over time.
I used multiple threads to dequeue and save to a database.
I create an array of Threads to do this job.
for (int i = 0; i < thr.Length; i++)
{
    thr[i] = new Thread(new ThreadStart(SaveData));
    thr[i].Start();
}
SaveData
Note: eQ and eiQ are two global queues. I used a while loop to keep the threads alive.
public void SaveData()
{
    var imgDAO = new imageDAO();
    try
    {
        while (eQ.Count > 0 && eiQ.Count > 0)
        {
            var newRecord = eQ.Dequeue();
            var newRecordImage = eiQ.Dequeue();
            imgDAO.SaveEvent(newRecord, newRecordImage);
            var storepath = Properties.Settings.Default.StorePath;
            save.WriteFile(storepath, newRecord, newRecordImage);
        }
    }
    catch (Exception e)
    {
        Global._logger.Info(e.Message + e.Source);
    }
}
It did create multiple threads, but when I debug, only one thread is alive; the rest are dead.
I don't know why. Does anyone have an idea? Thanks.
You are using WriteFile in that thread function.
Is it possible that you are trying to write a file that may be locked by another thread (same filename or something)?
One more thing: I don't like saving data to disk from multiple threads.
I think you should create some buffer instead of many threads, and write it out every few records/entries.
As mentioned in the comments, your threads will only live as long as there are elements in the queues; as soon as both are emptied, the threads will terminate. This could explain why you see only one living thread while debugging.
A potential answer to your question would be to use a BlockingCollection from the System.Collections.Concurrent namespace instead of a Queue. It has the capability of doing a blocking dequeue, which will stop the thread(s) until more elements are available for processing.
Another problem is the nice race condition between eQ and eiQ -- consider using a single queue with a Tuple or custom data type so you can dequeue both newRecord and newRecordImage in a single operation.
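For example, a minimal sketch of that combined approach (Record, RecordImage, imgDAO, record and recordImage are assumed names): each record travels together with its image in one BlockingCollection, so the pair can never get out of step, and GetConsumingEnumerable blocks until more work arrives.

var queue = new BlockingCollection<Tuple<Record, RecordImage>>();

// Producer side: enqueue the record and its image as one item.
queue.Add(Tuple.Create(record, recordImage));

// Consumer thread: blocks while the queue is empty, ends after
// queue.CompleteAdding() is called.
foreach (var pair in queue.GetConsumingEnumerable())
{
    imgDAO.SaveEvent(pair.Item1, pair.Item2);
}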
I have a very simple program counting the characters in a string. An integer threadnum sets the number of threads and divides the data by threadnum accordingly into chunks for each thread to process.
Each thread increments the values contained in a shared dictionary, building a character histogram.
private Dictionary<UInt32, int> dict = new Dictionary<UInt32, int>();
In order to wait for all threads to finish and continue with the main process, I invoke Thread.Join.
Initially I had a local dictionary for each thread, which got merged afterwards, but a shared dictionary worked fine, without locking.
No references are locked in the method BuildDictionary, though locking the dictionary did not significantly impact thread-execution time.
Each thread is timed, and the resulting dictionary compared.
The dictionary content is the same regardless of a single or multiple threads - as it should be.
Each thread takes a fraction determined by threadnum to complete - as it should be.
Problem:
The total time is roughly a multiple of threadnum; that is to say, the execution time increases with the number of threads?
(Unfortunately I cannot run a C# profiler at the moment. Additionally, I would prefer C# 3 code compatibility.)
Others are likely struggling with this as well. Could it be that the VS 2010 Express Edition vshost process stacks and schedules threads to run sequentially?
Another MT-performance issue was posted here recently as "Visual Studio C# 2010 Express Debug running Faster than Release":
Code:
public int threadnum = 8;
Thread[] threads = new Thread[threadnum];
Stopwatch stpwtch = new Stopwatch();
stpwtch.Start();
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
    threads[threadidx] = new Thread(BuildDictionary);
    threads[threadidx].Start(threadidx);
    threads[threadidx].Join(); //Blocks the calling thread, till thread completion
}
Console.WriteLine("Total - time: {0} msec", stpwtch.ElapsedMilliseconds);
Can you help please?
Update:
It appears that the strange behavior of an almost linear slowdown with increasing thread-number is an artifact due to the numerous hooks of the IDE's Debugger.
Running the process outside the developer environment, I actually do get a 30% speed increase on a 2 logical/physical core machine. During debugging I am already at the high end of CPU utilization, and hence I suspect it is wise to have some leeway during development through additional idle cores.
As originally planned, I now let each thread compute on its own local data chunk, which is locked and written back to a shared list and aggregated after all threads have finished.
Conclusion:
Be heedful of the environment the process is running in.
We can put the dictionary synchronization issues Tony the Lion mentions in his answer aside for the moment, because in your current implementation you are in fact not running anything in parallel!
Let's take a look at what you are currently doing in your loop:
Start a thread.
Wait for the thread to complete.
Start the next thread.
In other words, you should not be calling Join inside the loop.
Instead, you should start all threads as you are doing, but use a signaling construct such as an AutoResetEvent to determine when all threads have completed.
See example program:
class Program
{
    static EventWaitHandle _waitHandle = new AutoResetEvent(false);

    static void Main(string[] args)
    {
        int numThreads = 5;
        for (int i = 0; i < numThreads; i++)
        {
            new Thread(DoWork).Start(i);
        }
        for (int i = 0; i < numThreads; i++)
        {
            _waitHandle.WaitOne();
        }
        Console.WriteLine("All threads finished");
    }

    static void DoWork(object id)
    {
        Thread.Sleep(1000);
        Console.WriteLine(String.Format("Thread {0} completed", (int)id));
        _waitHandle.Set();
    }
}
Alternatively you could just as well be calling Join in the second loop if you have references to the threads available.
After you have done this you can and should worry about the dictionary synchronization problems.
A Dictionary can support multiple readers concurrently, as long as the collection is not modified (per MSDN).
You say:
but a shared dictionary worked fine, without locking.
Each thread increments the values contained in a shared dictionary
Your program is by definition broken: if you alter the data in the dictionary without proper locking, you will end up with bugs. Nothing more needs to be said.
I wouldn't use a shared static Dictionary; if each thread worked on a local copy, you could amalgamate your results once all threads had signalled completion.
WaitHandle.WaitAll avoids any deadlocking on an AutoResetEvent.
class Program
{
    static void Main()
    {
        char[] text = "Some String".ToCharArray();
        int numThreads = 5;

        // I leave the implementation of the next line to the OP.
        Partition[] partitions = PartitionWork(text, numThreads);

        var completions = new ManualResetEvent[numThreads];
        var results = new IDictionary<char, int>[numThreads];
        for (int i = 0; i < numThreads; i++)
        {
            results[i] = new Dictionary<char, int>();
            completions[i] = new ManualResetEvent(false);
            int idx = i; // capture a copy of the loop variable for the closure
            new Thread(() => BuildDictionary(
                text,
                partitions[idx].Start,
                partitions[idx].End,
                results[idx],
                completions[idx])).Start();
        }

        if (WaitHandle.WaitAll(completions, new TimeSpan(366, 0, 0, 0)))
        {
            Console.WriteLine("All threads finished");
        }
        else
        {
            Console.WriteLine("Timed out after a year and a day");
        }

        // Merge the results
        IDictionary<char, int> result = results[0];
        for (int i = 1; i < numThreads; i++)
        {
            foreach (KeyValuePair<char, int> item in results[i])
            {
                if (result.ContainsKey(item.Key))
                {
                    result[item.Key] += item.Value;
                }
                else
                {
                    result.Add(item.Key, item.Value);
                }
            }
        }
    }

    static void BuildDictionary(
        char[] text,
        int start,
        int finish,
        IDictionary<char, int> result,
        ManualResetEvent completed)
    {
        for (int i = start; i <= finish; i++)
        {
            if (result.ContainsKey(text[i]))
            {
                result[text[i]]++;
            }
            else
            {
                result.Add(text[i], 1);
            }
        }
        completed.Set();
    }
}
With this implementation, the only variable that is ever shared is the char[] of the text, and that is always read-only.
You do have the burden of merging the dictionaries at the end, but that is a small price for avoiding any concurrency issues. In a later version of the framework I would have used TPL and ConcurrentDictionary, and possibly Partitioner<TSource>.
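For reference, a hedged sketch of that later-framework variant (.NET 4, using System.Collections.Concurrent and System.Threading.Tasks): AddOrUpdate performs the check-then-increment atomically, so the histogram needs no explicit lock.

var histogram = new ConcurrentDictionary<char, int>();
Parallel.ForEach(text, c =>
    histogram.AddOrUpdate(c, 1, (key, count) => count + 1));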
I totally agree with TonyTheLion and others, and even once you fix the actual problem with Join'ing at the wrong place, there will still be a problem with the (missing) locks and updating the shared dictionary. I wanted to drop you a quick workaround: just wrap your integer value in an object:
instead of:
Dictionary<uint, int> dict = new Dictionary<uint, int>();
use:
class Entry { public int value; }
Dictionary<uint, Entry> dict = new Dictionary<uint, Entry>();
and now increment Entry.value instead. That way, the Dictionary will not notice any changes and it will be safe without locking the dictionary.
Note: this will only work if each thread is guaranteed to use its own single Entry. I've just noticed this is not true here, as you said "histogram of characters". You will have to lock on each Entry during the increment, or some increments may be lost. Still, locking at the Entry level will speed things up significantly compared to locking the whole dictionary.
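One concrete way to make the per-Entry increment safe without a lock (my suggestion, not from the original answer) is Interlocked.Increment on the wrapped field; this assumes the dictionary itself is fully populated before the threads start, so only the values change.

Entry entry;
if (dict.TryGetValue(key, out entry))
{
    Interlocked.Increment(ref entry.value);
}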
Rotem saw it.
Your main thread should Join the X other threads after having started all of them.
Otherwise it waits for the 1st thread to finish before starting and waiting for the 2nd one.
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
    threads[threadidx] = new Thread(BuildDictionary);
    threads[threadidx].Start(threadidx);
}
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
    threads[threadidx].Join(); //Blocks the calling thread, till thread completion
}
As Rotem points out, by joining in the loop you are waiting for each thread to complete before going continuing.
The hint for why this happens can be found in the Thread.Join documentation on MSDN:
Blocks the calling thread until a thread terminates
So your loop will not continue until that one thread has completed its work. To start all the threads and then wait for them to complete, join them outside the loop:
public int threadnum = 8;
Thread[] threads = new Thread[threadnum];
Stopwatch stpwtch = new Stopwatch();
stpwtch.Start();

// Start all the threads doing their work
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
    threads[threadidx] = new Thread(BuildDictionary);
    threads[threadidx].Start(threadidx);
}

// Join to all the threads to wait for them to complete
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
    threads[threadidx].Join();
}

System.Diagnostics.Debug.WriteLine("Total - time: {0} msec", stpwtch.ElapsedMilliseconds);
You will really need to post your BuildDictionary function. It is very likely that the operation will be no faster with multiple threads and the threading overhead will actually increase execution time.
I'm just getting to know some of the new .NET concurrent collections like ConcurrentDictionary and ConcurrentQueue, and I was running some tests to see what happens when I write to the Queue in parallel.
So I ran this:
private static void ParallelWriteToQueue(Queue<int> queue)
{
    Stopwatch sw = Stopwatch.StartNew();
    Parallel.For(1, 1000001, (i) => queue.Enqueue(i));
    sw.Stop();
    Console.WriteLine("Regular int Queue - " + queue.Count + " time" + sw.ElapsedMilliseconds);
}
And, as I thought, I got the following exception:
Source array was not long enough. Check srcIndex and length, and the array's lower bounds.
So this Queue can't handle concurrent en-queues as predicted.
But when I changed the type of the queue to string, there was no exception, and the result was something like
Regular string Queue - 663209 time117
This means that only about 663k items were en-queued.
Why was there no exception?
What happened to all of the items that were not en-queued?
This is the same function with Queue<string>:
private static void ParallelWriteToQueue(Queue<string> queue)
{
    Stopwatch sw = Stopwatch.StartNew();
    Parallel.For(1, 100001, (i) => queue.Enqueue(i.ToString()));
    sw.Stop();
    Console.WriteLine("Regular string Queue - " + queue.Count + " time" + sw.ElapsedMilliseconds);
}
Queue<T> as opposed to ConcurrentQueue<T> is not thread safe, as per MSDN. The rest of the behavior you describe happens by chance from collisions arising from concurrent (multi-threaded) write access, purely based on the fact Queue<T> isn't thread safe.
Whether or not you get an exception has nothing to do with the type you put in the queue. It is non-deterministic: I can reproduce the exception for both types, and I can also reproduce the case without an exception for both types - without changes to the code.
Running the following snippet shows this:
int exceptions = 0;
int noExceptions = 0;
for (int x = 0; x < 100; ++x)
{
    Queue<int> q = new Queue<int>();
    try
    {
        Parallel.For(1, 1000001, (i) => q.Enqueue(i));
        ++noExceptions;
    }
    catch
    {
        ++exceptions;
    }
}
Console.WriteLine("Runs with exception: {0}. Runs without: {1}", exceptions, noExceptions);
The output is something like Runs with exception: 96. Runs without: 4
The reason is - as others have already mentioned - that Queue is not thread safe. What happens here is called a "race condition".
The standard collection implementations are not thread-safe, as your test shows. The fact that an exception is thrown with ints but not strings is likely just chance, and you may get different results if you try the test again.
As far as the "lost" items, it's not possible to determine - the queue's internal state is likely corrupt as a result of the multithreaded access, so the count could itself be wrong, or the items may simply not have been enqueued.
Since you are using Parallel.For(), the collection must be thread-safe to work correctly.
So, consider using ConcurrentQueue<T> class.
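For completeness, a minimal sketch of the fixed test (using System.Collections.Concurrent): ConcurrentQueue's Enqueue is thread safe, so all 1,000,000 items arrive and no exception is thrown.

private static void ParallelWriteToConcurrentQueue(ConcurrentQueue<int> queue)
{
    Stopwatch sw = Stopwatch.StartNew();
    Parallel.For(1, 1000001, (i) => queue.Enqueue(i));
    sw.Stop();
    Console.WriteLine("ConcurrentQueue - " + queue.Count + " time " + sw.ElapsedMilliseconds);
}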