Modified Producer/Consumer example, any problems with it? - c#

I've modified the Producer/Consumer example from http://msdn.microsoft.com/en-us/library/yy12yx1f(v=vs.80).aspx. I don't want the Consumer to process the queue "on event". Instead, I'm using an infinite loop (the same as the one used in the Producer) and trying to process all elements as soon as possible. Are there any problems with this approach? Why do we need "events" between Consumer and Producer if we can use an infinite loop?
// Consumer.ThreadRun
public void ThreadRun()
{
    int count = 0;
    while (!_syncEvents.ExitThreadEvent.WaitOne(0, false))
    {
        lock (((ICollection)_queue).SyncRoot)
        {
            while (_queue.Count > 0)
            {
                int item = _queue.Dequeue();
                count++;
            }
        }
    }
    Console.WriteLine("Consumer Thread: consumed {0} items", count);
}

I see two potential problems with what you have:
1. When the queue is empty, your version will sit in a busy loop burning precious CPU; using an event puts the thread to sleep until there is actual work to be done.
2. By locking the queue and processing all the elements in a single loop like you are doing, you negate the potential benefit of having multiple consumer threads processing the queue. Because you only increment a count in your example this might not seem like a big deal, but if you start doing real work with the items you dequeue, you could benefit from having multiple threads handling that work.
If you are using .NET 4 you might want to take a look at the BlockingCollection<T> class, which would give an even cleaner solution to all of this, with less locking to boot.
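As a rough illustration, here is a minimal sketch of what the consumer could look like with BlockingCollection<T>; the queue variable, item type, and item count are assumptions for the example, not the poster's actual code:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();

        var consumer = Task.Factory.StartNew(() =>
        {
            int count = 0;
            // GetConsumingEnumerable blocks while the queue is empty and
            // completes once CompleteAdding has been called.
            foreach (int item in queue.GetConsumingEnumerable())
            {
                count++;
            }
            Console.WriteLine("Consumer Thread: consumed {0} items", count);
        });

        for (int i = 0; i < 1000; i++)
        {
            queue.Add(i); // producer side
        }
        queue.CompleteAdding(); // plays the role of the ExitThreadEvent

        consumer.Wait();
    }
}
Note there is no explicit lock and no busy loop: the blocking behaviour lives inside the collection.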

A potential problem could occur if the setting of the ExitThreadEvent races with the consumer loop (since you don't show that part of the code, it's hard to tell whether that could happen).

If you are able to use .NET 4.0, you can use the built-in BlockingCollection class to solve this problem simply and efficiently.

Related

Is it safe to put TryDequeue in a while loop?

I have not used ConcurrentQueue before.
Is it OK to use TryDequeue as below, in a while loop? Could this not get stuck forever?
var cq = new ConcurrentQueue<string>();
cq.Enqueue("test");
string retValue;
while (!cq.TryDequeue(out retValue))
{
    // Maybe sleep?
}
// Do rest of code
It's safe in the sense that the loop won't end until there is an item to pull out, and that it will eventually end if the queue has an item to be taken. If the queue is emptied by another thread and no more items are added, then of course the loop will not end.
Beyond all of that, what you have is a busy loop. This should virtually always be avoided. Either you end up constantly polling the queue for more items, wasting CPU time and effort in the process, or you end up sleeping and therefore not actually using the item in the queue as soon as it is added (and even then, still wasting some time/effort on context switches just to poll the queue).
What you should be doing instead, if you find yourself wanting to "wait until there is an item for me to take", is use a BlockingCollection. It is specifically designed to wrap various types of concurrent collections and block until there is an item available to take. It allows you to change your code to queue.Take() and have it be easier to write, semantically state what you're doing, be clearly correct, noticeably more efficient, and completely safe.
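For instance, a minimal sketch of that suggestion, reusing the shape of the question's snippet (the element type is carried over from it):
using System.Collections.Concurrent;

class TakeExample
{
    static void Main()
    {
        var cq = new BlockingCollection<string>();
        cq.Add("test");

        // Take blocks until an item is available; no polling, no sleeping.
        string retValue = cq.Take();
        // Do rest of code
    }
}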
Yes, it is safe as per the documentation, but it is not a recommended design.
It might get stuck forever if the queue was empty at the first call to TryDequeue and no other thread pushes data into the queue after that point (you could break out of the while after N attempts or after a timeout, though).
ConcurrentQueue offers an IsEmpty property to check whether there are items in the queue. It is much more efficient to check it than to loop over a TryDequeue call (particularly if the queue is generally empty).
What you might want to do is:
while (cq.IsEmpty) // IsEmpty is a property, not a method
{
    // Maybe sleep / wait / ...
}
if (cq.TryDequeue(out retValue))
{
    ...
}
EDIT:
If this last call returns false, another of your threads dequeued the item. If you don't have other threads this is safe; if you do, you should use while (TryDequeue), as in the sketch below.
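A minimal sketch of that multi-consumer-safe variant, continuing the cq from the snippet above (the sleep interval is an arbitrary assumption; Thread.Sleep needs using System.Threading):
string retValue = null;
bool got = false;
while (!got)
{
    while (cq.IsEmpty)
    {
        Thread.Sleep(10); // cheap check while the queue stays empty
    }
    // Another consumer may still win the race, so retry on false.
    got = cq.TryDequeue(out retValue);
}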

De-queue Items with worker threads

I have been trying to figure out how to solve a requirement I have, but for the life of me I just can't come up with a solution.
I have a database of items which stores them in a kind of queue.
(The database has already been implemented and other processes will be adding items to this queue.)
The items require a lot of work/time to "process", so I need to be able to:
Constantly de-queue items from the database.
For each item, run a new thread to process it and then return true/false depending on whether it was successfully processed (this will be used to decide whether to re-add it to the database queue).
But only do this while the current number of active threads (one per item being processed) is less than a maximum-number-of-threads parameter.
Once the maximum number of threads has been reached, I need to stop de-queuing items from the database until the current number of threads is less than the maximum.
At which point it needs to continue de-queuing items.
It feels like this should be something I can come up with, but it is just not coming to me.
To clarify: I only need to implement the threading. The database has already been implemented.
One really easy way to do this is with a Semaphore. You have one thread that dequeues items and creates threads to process them. For example:
const int MaxThreads = 4;
Semaphore sem = new Semaphore(MaxThreads, MaxThreads);

while (Queue.HasItems())
{
    sem.WaitOne();
    var item = Queue.Dequeue();
    ThreadPool.QueueUserWorkItem(ProcessItem, item); // see below
}

// When the queue is empty, you have to wait for all processing
// threads to complete.
// If you can acquire the semaphore MaxThreads times, all workers are done.
int count = 0;
while (count < MaxThreads)
{
    sem.WaitOne();
    ++count;
}

// the code to process an item
void ProcessItem(object item)
{
    // cast the item to whatever type you need,
    // and process it.
    // when done processing, release the semaphore
    sem.Release();
}
The above technique works quite well. It's simple to code, easy to understand, and very effective.
One change is that you might want to use the Task API rather than ThreadPool.QueueUserWorkItem. Task gives you more control over the asynchronous processing, including cancellation. I used QueueUserWorkItem in my example because I'm more familiar with it. I would use Task in a production program.
Although this does use N+1 threads (where N is the number of items you want processed concurrently), that extra thread isn't often doing anything. The only time it's running is when it's assigning work to worker threads. Otherwise, it's doing a non-busy wait on the semaphore.
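To make that concrete, here is a hedged sketch of the same throttling idea using the Task API; Queue.HasItems, Queue.Dequeue, and ProcessItem are the same hypothetical members as in the code above:
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

const int MaxThreads = 4;
var throttle = new SemaphoreSlim(MaxThreads, MaxThreads);
var tasks = new List<Task>();

while (Queue.HasItems())
{
    throttle.Wait(); // block until a worker slot frees up
    var item = Queue.Dequeue();
    tasks.Add(Task.Factory.StartNew(() =>
    {
        try { ProcessItem(item); }
        finally { throttle.Release(); } // free the slot even if processing throws
    }));
}

Task.WaitAll(tasks.ToArray()); // wait for all in-flight work to finish
The try/finally guarantees the semaphore slot is returned even when an item's processing throws, which is worth keeping in mind with the plain Semaphore version as well.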
Do you just not know where to start?
Consider a thread pool with a max number of threads. http://msdn.microsoft.com/en-us/library/y5htx827.aspx
Consider spinning up your max number of threads immediately and monitoring the DB. http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx is convenient.
Remember that you can't guarantee your process will end safely... crashes happen. Consider logging the processing state.
Remember that your select and remove-from-queue operations should be atomic.
Ok, so the architecture of the solution is going to depend on one thing: does the processing time per queue item vary according to the item's data?
If not then you can have something that merely round-robins between the processing threads. This will be fairly simple to implement.
If the processing time does vary, then you're going to need something with more of a 'next available' feel to it, so that whichever of your threads happens to be free first is given the job of processing the data item.
Having worked that out, you're then going to have the usual run-around of how to synchronise between a queue reader and the processing threads. The difference between 'next-available' and 'round-robin' lies in how you do that synchronisation.
I'm not overly familiar with C#, but I've heard tell of a beast called a background worker. That is likely to be an acceptable means of bringing this about.
For round robin, just start up a background worker per queue item, storing the workers' references in an array. Limit yourself to, say, 16 in progress background workers. The idea is that having started 16 you would then wait for the first to complete before starting the 17th, and so on. I believe that background workers actually run as jobs on the thread pool, so that will automatically limit the number of threads that are actually running at any one time to something appropriate for the underlying hardware. To wait for a background worker see this. Having waited for a background worker to complete you'd then handle its result and start another up.
For the next-available approach it's not so different. Instead of waiting for the first to complete, you would use WaitAny() to wait for any of the workers to complete. You handle the return from whichever one completed, then start another one up and go back to WaitAny(), as in the sketch below.
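A rough sketch of that next-available pattern, using Task rather than BackgroundWorker; HasMoreItems, FetchNextItem, and ProcessItem are hypothetical placeholders for the poster's database calls:
using System.Collections.Generic;
using System.Threading.Tasks;

const int MaxWorkers = 16;
var inFlight = new List<Task<bool>>();

while (HasMoreItems()) // hypothetical: any items left in the DB queue?
{
    if (inFlight.Count >= MaxWorkers)
    {
        int done = Task.WaitAny(inFlight.ToArray()); // index of first finished worker
        bool succeeded = inFlight[done].Result;      // the true/false processing outcome
        // if (!succeeded) re-add the item to the database queue here
        inFlight.RemoveAt(done);
    }
    var item = FetchNextItem(); // hypothetical atomic dequeue from the DB
    inFlight.Add(Task.Factory.StartNew(() => ProcessItem(item)));
}
Task.WaitAll(inFlight.ToArray()); // drain the remaining workers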
The general philosophy of both approaches is to keep a number of threads on the boil all the time. A feature of the next-available approach is that the order in which you emit the results is not necessarily the same as the order of the input items. If that matters, then the round-robin approach with more background workers than CPU cores will be reasonably efficient (the thread pool will simply queue up commissioned but not-yet-running workers anyway). However, the latency will vary with the processing time.
BTW 16 is an arbitrary number chosen on the basis of how many cores you think will be on the PC running the software. More cores, bigger number.
Of course, in the seemingly restless and ever changing world of .NET there may now be a better way of doing this.
Good luck!

How do you pause a consumer until objects have been produced in a lock-free manner?

I have two threads, a producer and a consumer.
The producer might not always be producing something. The consumer, however, needs to consume it as soon as it becomes available.
The producer thread works in a loop and puts results into a ConcurrentQueue. The consumer thread is in a while (!disposing) loop that calls AutoResetEvent.WaitOne when the system becomes disabled. I've considered calling AutoResetEvent.WaitOne also in the case when the ConcurrentQueue.TryDequeue method returns false; this should only ever happen when there are no items left in the queue.
However, if I were to do this, a deadlock could occur with the following interleaving:
Enqueue
TryDequeue returns true
TryDequeue returns false
Enqueue
WaitOne
This is a possibility in this snippet:
while (this.isDisposing == 0)
{
    if (this.isEnabled == 0)
    {
        this.signal.WaitOne();
    }
    object item;
    if (!this.queue.TryDequeue(out item))
    {
        this.signal.WaitOne();
        continue;
    }
    this.HandleItem(item);
}
What is the proper way to do this without using locks?
I think BlockingCollection would be good to use here. It will wait efficiently until there is data in the queue, and you can combine it with a ConcurrentQueue as the backing store. See http://msdn.microsoft.com/en-us/library/dd267312.aspx
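A minimal sketch of that combination; replacing the disposing flag with a CancellationToken is my assumption, not something the answer specifies:
using System;
using System.Collections.Concurrent;
using System.Threading;

var queue = new BlockingCollection<object>(new ConcurrentQueue<object>());
var cts = new CancellationTokenSource();

// Consumer loop: Take blocks until an item arrives, so no AutoResetEvent is needed.
try
{
    while (true)
    {
        object item = queue.Take(cts.Token); // throws OperationCanceledException when cts is cancelled
        HandleItem(item); // the poster's existing handler
    }
}
catch (OperationCanceledException)
{
    // shutting down
}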
The problem here is that thread pausing is, in almost all operating systems, a kernel-level event. Windows, I think, permits user-level pausing/unpausing with fibers, but that's all I know of.
So you locklessly whizz along with your queue, but how do you signal that there is something in the queue in the first place?
Signalling implies sleeping, and that's the problem. You can do lock-free signalling, but for waiting you have to call the OS equivalent of WaitForEvent(), and that's a problem, because you don't WANT to be using these slow, OS-provided mechanisms.
Basically, there's no, or very little, OS support for this yet.

C# .net For() Step?

I have a function that processes a list of 6100 items. The code used to work when the list was just 300 items, but it instantly crashes with 6100. Is there a way I can loop through these 6100 items, say, 30 at a time, and execute a new thread per item?
for (var i = 0; i < ListProxies.Items.Count; i++)
{
    var s = ListProxies.Items[i] as string;
    var thread = new ParameterizedThreadStart(ProxyTest.IsAlive);
    var doIt = new Thread(thread) { Name = "CheckProxy# " + i };
    doIt.Start(s);
}
Any help would be greatly appreciated.
Do you really need to spawn a new thread for each work item? Unless there is a genuine need for this (if so, please tell us why), I would strongly recommend you use the Managed Thread Pool instead. This will give you the concurrency benefits you require, but without the resource requirements (as well as the creation, destruction and massive context-switching costs) of running thousands of threads. If you are on .NET 4.0, you might also want to consider using the Task Parallel Library.
For example:
for (var i = 0; i < ListProxies.Items.Count; i++)
{
    var s = ListProxies.Items[i] as string;
    ThreadPool.QueueUserWorkItem(ProxyTest.IsAlive, s);
}
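If you are on .NET 4, a hedged sketch of the same loop with the Task Parallel Library, which also lets you cap concurrency at the 30 the question mentions (the Cast<string>() call assumes the list items really are strings):
using System.Linq;
using System.Threading.Tasks;

var options = new ParallelOptions { MaxDegreeOfParallelism = 30 };
Parallel.ForEach(ListProxies.Items.Cast<string>(), options, s =>
{
    ProxyTest.IsAlive(s); // same per-item work as above
});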
On another note, I would seriously consider renaming the IsAlive method (which looks like a boolean property or method) since:
It clearly has a void IsAlive(object) signature.
It has observable side-effects (from your comment that it "increment a progress bar and add a 'working' proxy to a new list").
There is a limit on the number of threads you can spawn, and 6100 threads does seem quite a bit excessive.
I agree with Ani; you should look into a ThreadPool or even a Producer/Consumer process, depending on what you are trying to accomplish.
There are quite a few patterns for handling multi-threaded applications, but without knowing what you are doing at the start there really is no way to recommend any approach other than a ThreadPool or a Producer/Consumer process (queues with sync events).
At any rate, you really should try to keep the number of threads to a minimum; otherwise you run the risk of spin locks, wait locks, deadlocks, race conditions, who knows what, etc...
If you want good information on threading with C#, check out the book Concurrent Programming on Windows by Joe Duffy; it is really helpful.

Comparison of Join and WaitAll

For waiting on multiple threads, can anyone compare the pros and cons of using WaitHandle.WaitAll and Thread.Join?
WaitHandle.WaitAll has a 64-handle limit, so that is obviously a huge limitation. On the other hand, it is a convenient way to wait for many signals in a single call. Thread.Join does not require creating any additional WaitHandle instances, and since it can be called individually on each thread, the 64-handle limit does not apply.
Personally, I have never used WaitHandle.WaitAll. I prefer a more scalable pattern when I want to wait on multiple signals. You can create a counting mechanism that counts up or down, and once a specific value is reached you signal a single shared event. The CountdownEvent class conveniently packages all of this into a single class.
var finished = new CountdownEvent(1); // initialized with one count for the main thread
for (int i = 0; i < NUM_WORK_ITEMS; i++)
{
    finished.AddCount(); // one pending count per work item
    SpawnAsynchronousOperation(
        () =>
        {
            try
            {
                // Place logic to run in parallel here.
            }
            finally
            {
                finished.Signal();
            }
        });
}
finished.Signal(); // remove the main thread's initial count
finished.Wait();
Update:
The reason why you want to signal the event from the main thread is subtle. Basically, you want to treat the main thread as if it were just another work item. After all, it, along with the other real work items, is running concurrently as well.
Consider for a moment what might happen if we did not treat the main thread as a work item. It will go through one iteration of the for loop and add a count to our event (via AddCount) indicating that we have one pending work item, right? Let's say the SpawnAsynchronousOperation completes and gets the work item queued on another thread. Now, imagine if the main thread gets preempted before swinging around to the next iteration of the loop. The thread executing the work item gets its fair share of the CPU, starts humming along, and actually completes the work item. The Signal call in the work item runs and decrements our pending work item count to zero, which will change the state of the CountdownEvent to signalled. In the meantime the main thread wakes up, goes through all iterations of the loop, and hits the Wait call, but since the event got prematurely signalled it passes on by even though there are still pending work items.
Again, avoiding this subtle race condition is easy when you treat the main thread as a work item. That is why the CountdownEvent is initialized with one count and the Signal method is called before the Wait.
I like @Brian's answer as a comparison of the two mechanisms.
If you are on .NET 4, it would be worthwhile exploring the Task Parallel Library to achieve task parallelism via System.Threading.Tasks, which allows you to manage tasks across multiple threads at a higher level of abstraction. The signalling you asked about in this question to manage thread interactions is hidden or much simplified, and you can concentrate on properly defining what each Task consists of and how to coordinate them.
This may seem off-topic, but as Microsoft themselves say in the MSDN docs:
"In the .NET Framework 4, tasks are the preferred API for writing multi-threaded, asynchronous, and parallel code."
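For instance, a minimal sketch of the Task-based equivalent (DoWork is a hypothetical stand-in for the parallel logic):
using System.Linq;
using System.Threading.Tasks;

var tasks = Enumerable.Range(0, 100)
    .Select(i => Task.Factory.StartNew(() => DoWork(i))) // DoWork is hypothetical
    .ToArray();

Task.WaitAll(tasks); // no 64-handle limit here, unlike WaitHandle.WaitAll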
The WaitAll mechanism involves kernel-mode objects. I don't think the same is true for the Join mechanism. I would prefer Join, given the opportunity.
Technically, though, the two are not equivalent: IIRC, Join can only operate on one thread at a time, while WaitAll can wait for the signalling of multiple kernel objects in a single call.
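A brief sketch of the difference in shape; StartWorkers and StartSignalledWork are hypothetical helpers:
using System.Threading;

// Join: one call per thread, no extra handles, no 64-item limit.
Thread[] threads = StartWorkers();
foreach (Thread t in threads)
{
    t.Join();
}

// WaitAll: a single call for everything, but capped at 64 handles.
ManualResetEvent[] handles = StartSignalledWork();
WaitHandle.WaitAll(handles);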
