I have not used a concurrent queue before.
Is it OK to use TryDequeue as below, in a while loop? Could this not get stuck forever?
var cq = new ConcurrentQueue<string>();
cq.Enqueue("test");
string retValue;
while (!cq.TryDequeue(out retValue))
{
    // Maybe sleep?
}
//Do rest of code
It's safe in the sense that the loop won't end until it has pulled out an item, and it will eventually end if the queue has an item to take. If the queue is emptied by another thread and no more items are added, then of course the loop will not end.
Beyond all of that, what you have is a busy loop. This should virtually always be avoided. Either you end up constantly polling the queue for more items, wasting CPU time and effort in the process, or you end up sleeping and therefore not actually using the item in the queue as soon as it is added (and even then, still wasting some time/effort on context switches just to poll the queue).
What you should be doing instead, if you find yourself wanting to "wait until there is an item for me to take", is use a BlockingCollection. It is specifically designed to wrap the various concurrent collections and block until there is an item available to take. It lets you change your code to queue.Take() and have it be easier to write, semantically clear about what you're doing, clearly correct, noticeably more efficient, and completely safe.
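For example, a minimal sketch of what the loop at the top might become (BlockingCollection wraps a ConcurrentQueue by default):

using System.Collections.Concurrent;

var bc = new BlockingCollection<string>();

// Producer (possibly on another thread):
bc.Add("test");

// Consumer: Take blocks until an item is available, with no polling.
string retValue = bc.Take();
// Do rest of code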
Yes, it is safe as per the documentation, but it is not a recommended design.
It might get "stuck forever" if the queue was empty at the first call to TryDequeue and no other thread pushes data into the queue after that point (you could break out of the while loop after N attempts or after a timeout, though).
ConcurrentQueue offers an IsEmpty property to check whether there are items in the queue. It is much more efficient to check it than to loop over a TryDequeue call (particularly if the queue is generally empty).
What you might want to do is :
while (cq.IsEmpty)
{
    // Maybe sleep / wait / ...
}
if (cq.TryDequeue(out retValue))
{
    ...
}
EDIT:
If this last call returns false, another of your threads dequeued the item. If you don't have other threads this is safe; if you do, you should use while (TryDequeue).
I understand Thread.Abort() is evil from the multitude of articles I've read on the topic, so I'm currently in the process of ripping out all of my aborts in order to replace them with a cleaner way. After comparing user strategies from people here on Stack Overflow and reading "How to: Create and Terminate Threads (C# Programming Guide)" from MSDN, both of which describe much the same approach -- a volatile bool checking strategy -- which is nice, but I still have a few questions...
What stands out to me immediately is: what if you do not have a simple worker process that just runs a loop of crunching code? For instance, my process is a background file uploader. I do in fact loop through each file, so that's something, and sure I could add my while (!_shouldStop) at the top, which covers me every loop iteration, but many more business processes occur before it hits its next loop iteration. I want this cancel procedure to be snappy; don't tell me I need to sprinkle these checks every 4-5 lines down throughout my entire worker function?!
I really hope there is a better way. Could somebody please advise me on whether this is, in fact, the correct [and only?] approach, or share strategies they have used in the past to achieve what I am after?
Thanks gang.
Further reading: All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me. What if it is a linear, but timely background operation?
Unfortunately there may not be a better option. It really depends on your specific scenario. The idea is to stop the thread gracefully at safe points. That is the crux of the reason why Thread.Abort is not good; because it is not guaranteed to occur at safe points. By sprinkling the code with a stopping mechanism you are effectively manually defining the safe points. This is called cooperative cancellation. There are basically 4 broad mechanisms for doing this. You can choose the one that best fits your situation.
Poll a stopping flag
You have already mentioned this method. This is a pretty common one. Make periodic checks of the flag at safe points in your algorithm and bail out when it gets signalled. The standard approach is to mark the variable volatile. If that is not possible or inconvenient then you can use a lock. Remember, you cannot mark a local variable as volatile, so if a lambda expression captures it through a closure, for example, then you would have to resort to a different method for creating the memory barrier that is required. There is not a whole lot else that needs to be said for this method.
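A minimal sketch of the pattern (the Worker class and member names here are just for illustration):

public class Worker
{
    // volatile guarantees the worker sees the write from the requesting thread.
    private volatile bool _shouldStop;

    public void RequestStop()
    {
        _shouldStop = true;
    }

    public void DoWork()
    {
        while (!_shouldStop)
        {
            // One unit of work per iteration; each check is a safe point.
        }
        // Clean up here before the thread exits.
    }
}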
Use the new cancellation mechanisms in the TPL
This is similar to polling a stopping flag except that it uses the new cancellation data structures in the TPL. It is still based on cooperative cancellation patterns. You get a CancellationToken and periodically check IsCancellationRequested. To request cancellation you call Cancel on the CancellationTokenSource that originally provided the token. There is a lot you can do with the new cancellation mechanisms. You can read more about them here.
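A rough sketch of how the pieces fit together (the work itself is elided):

using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();

Task worker = Task.Factory.StartNew(() =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        // Do one unit of work per iteration.
        // cts.Token.ThrowIfCancellationRequested() is an alternative
        // that throws OperationCanceledException at the safe point.
    }
}, cts.Token);

// From another thread, request cancellation:
cts.Cancel();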
Use wait handles
This method can be useful if your worker thread waits on a specific interval or for a signal during its normal operation. You can Set a ManualResetEvent, for example, to let the thread know it is time to stop. You can test the event with the WaitOne function, which returns a bool indicating whether the event was signalled. An overload of WaitOne takes a parameter that specifies how long to wait for the call to return if the event is not signalled in that time. You can use this technique in place of Thread.Sleep and get the stopping indication at the same time. It is also useful if there are other WaitHandle instances the thread may have to wait on. You can call WaitHandle.WaitAny to wait on any event (including the stop event) all in one call. Using an event can be better than calling Thread.Interrupt since you have more control over the flow of the program (Thread.Interrupt throws an exception, so you would have to strategically place try-catch blocks to perform any necessary cleanup).
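A sketch of the idea, using WaitOne's timeout overload in place of a Thread.Sleep(1000):

using System.Threading;

ManualResetEvent stopRequest = new ManualResetEvent(false); // initially unsignalled

void WorkerLoop()
{
    // WaitOne(1000) stands in for Thread.Sleep(1000): it returns false
    // when the interval elapses and true as soon as Set() is called,
    // so a stop request is noticed immediately.
    while (!stopRequest.WaitOne(1000))
    {
        // Do one unit of periodic work.
    }
    // Stop was requested; clean up and return.
}

// From the controlling thread:
stopRequest.Set();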
Specialized scenarios
There are several one-off scenarios that have very specialized stopping mechanisms. It is definitely outside the scope of this answer to enumerate them all (never mind that it would be nearly impossible). A good example of what I mean here is the Socket class. If the thread is blocked on a call to Send or Receive then calling Close will interrupt the socket on whatever blocking call it was in, effectively unblocking it. I am sure there are several other areas in the BCL where similar techniques can be used to unblock a thread.
Interrupt the thread via Thread.Interrupt
The advantage here is that it is simple and you do not have to sprinkle your code with anything, really. The disadvantage is that you have little control over where the safe points are in your algorithm. The reason is that Thread.Interrupt works by injecting an exception inside one of the canned BCL blocking calls. These include Thread.Sleep, WaitHandle.WaitOne, Thread.Join, etc. So you have to be wise about where you place them. However, most of the time the algorithm dictates where they go and that is usually fine anyway, especially if your algorithm spends most of its time in one of these blocking calls. If your algorithm does not use one of the blocking calls in the BCL then this method will not work for you. The theory here is that the ThreadInterruptedException is only generated from a .NET waiting call, so it is likely at a safe point. At the very least you know that the thread cannot be in unmanaged code or bail out of a critical section leaving a dangling lock in an acquired state. Despite this being less invasive than Thread.Abort I still discourage its use because it is not obvious which calls respond to it and many developers will be unfamiliar with its nuances.
Well, unfortunately in multithreading you often have to compromise "snappiness" for cleanliness... you can exit a thread immediately if you Interrupt it, but it won't be very clean. So no, you don't have to sprinkle the _shouldStop checks every 4-5 lines, but if you do interrupt your thread then you should handle the exception and exit out of the loop in a clean manner.
Update
Even if it's not a looping thread (i.e. perhaps it's a thread that performs some long-running asynchronous operation or some type of blocking input operation), you can Interrupt it, but you should still catch the ThreadInterruptedException and exit the thread cleanly. I think the examples you've been reading are very appropriate.
Update 2.0
Yes I have an example... I'll just show you an example based on the link you referenced:
public class InterruptExample
{
    private Thread t;
    private volatile bool alive;

    public InterruptExample()
    {
        alive = false;
        t = new Thread(() =>
        {
            try
            {
                while (alive)
                {
                    /* Do work. */
                }
            }
            catch (ThreadInterruptedException exception)
            {
                /* Clean up. */
            }
        });
        t.IsBackground = true;
    }

    public void Start()
    {
        alive = true;
        t.Start();
    }

    public void Kill(int timeout = 0)
    {
        // somebody tells you to stop the thread
        t.Interrupt();
        // Optionally you can block the caller
        // by making them wait until the thread exits.
        // If they leave the default timeout,
        // then they will not wait at all.
        t.Join(timeout);
    }
}
If cancellation is a requirement of the thing you're building, then it should be treated with as much respect as the rest of your code--it may be something you have to design for.
Let's assume that your thread is doing one of two things at all times:
Something CPU bound
Waiting for the kernel
If you're CPU bound in the thread in question, you probably have a good spot to insert the bail-out check. If you're calling into someone else's code to do some long-running CPU-bound task, then you might need to fix the external code, move it out of process (aborting threads is evil, but aborting processes is well-defined and safe), etc.
If you're waiting for the kernel, then there's probably a handle (or fd, or mach port, ...) involved in the wait. Usually if you destroy the relevant handle, the kernel will return with some failure code immediately. If you're in .net/java/etc. you'll likely end up with an exception. In C, whatever code you already have in place to handle system call failures will propagate the error up to a meaningful part of your app. Either way, you break out of the low-level place fairly cleanly and in a very timely manner without needing new code sprinkled everywhere.
A tactic I often use with this kind of code is to keep track of a list of handles that need to be closed and then have my abort function set a "cancelled" flag and then close them. When the function fails it can check the flag and report failure due to cancellation rather than due to whatever the specific exception/errno was.
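A rough sketch of that tactic in C#; every name here is invented for illustration:

using System;
using System.Collections.Generic;

class CancellableIo
{
    private readonly object _sync = new object();
    private readonly List<IDisposable> _openHandles = new List<IDisposable>();
    private volatile bool _cancelled;

    public void Register(IDisposable handle)
    {
        lock (_sync) { _openHandles.Add(handle); }
    }

    // Set the flag first, then close the handles so blocked calls fail fast.
    public void Cancel()
    {
        _cancelled = true;
        lock (_sync)
        {
            foreach (var h in _openHandles) h.Dispose();
            _openHandles.Clear();
        }
    }

    // Workers check this when an I/O call fails, to report "cancelled"
    // rather than the specific exception/errno they got.
    public bool WasCancelled
    {
        get { return _cancelled; }
    }
}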
You seem to be implying that an acceptable granularity for cancellation is at the level of a service call. This is probably not good thinking -- you are much better off cancelling the background work synchronously and joining the old background thread from the foreground thread. It's way cleaner because:
It avoids a class of race conditions when old bgwork threads come back to life after unexpected delays.
It avoids potential hidden thread/memory leaks caused by hanging background processes, by making it impossible for the effects of a hanging background thread to hide.
There are two reasons to be scared of this approach:
You don't think you can abort your own code in a timely fashion. If cancellation is a requirement of your app, the decision you really need to make is a resource/business decision: do a hack, or fix your problem cleanly.
You don't trust some code you're calling because it's out of your control. If you really don't trust it, consider moving it out-of-process. You get much better isolation from many kinds of risks, including this one, that way.
The best answer largely depends on what you're doing in the thread.
Like you said, most answers revolve around polling a shared boolean every couple of lines. Even though you may not like it, this is often the simplest scheme. If you want to make your life easier, you can write a method like ThrowIfCancelled(), which throws some kind of exception if you're done. The purists will say this is (gasp) using exceptions for control flow, but then again cancelling is exceptional imo.
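A sketch of what that helper might look like, assuming a simple volatile flag (all names are placeholders):

using System;

class CancellableWork
{
    private volatile bool _cancelled;

    public void Cancel()
    {
        _cancelled = true;
    }

    private void ThrowIfCancelled()
    {
        if (_cancelled)
            throw new OperationCanceledException();
    }

    public void DoWork()
    {
        StepOne();
        ThrowIfCancelled(); // one line per checkpoint instead of if/return
        StepTwo();
        ThrowIfCancelled();
        // A try/catch around the thread's entry point handles the exit.
    }

    private void StepOne() { /* ... */ }
    private void StepTwo() { /* ... */ }
}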
If you're doing IO operations (like network stuff), you may want to consider doing everything using async operations.
If you're doing a sequence of steps, you could use the IEnumerable trick to make a state machine. Example:
abstract class StateMachine : IDisposable
{
    public abstract IEnumerable<object> Main();

    public virtual void Dispose()
    {
        // ... override with free-ing code ...
    }

    volatile bool wasCancelled;

    public void Cancel()
    {
        // ... or set wasCancelled using locking scheme of choice ...
        wasCancelled = true;
    }

    public Thread Run()
    {
        var thread = new Thread(() =>
        {
            try
            {
                if (wasCancelled) return;
                foreach (var x in Main())
                {
                    if (wasCancelled) return;
                }
            }
            finally { Dispose(); }
        });
        thread.Start();
        return thread;
    }
}

class MyStateMachine : StateMachine
{
    public override IEnumerable<object> Main()
    {
        DoSomething();
        yield return null;
        DoSomethingElse();
        yield return null;
    }
}

// then call new MyStateMachine().Run() to run.
Overengineering? It depends on how many state machines you use. If you just have 1, yes. If you have 100, then maybe not. Too tricky? Well, it depends. Another bonus of this approach is that it lets you (with minor modifications) move your operation into a Timer.Tick callback and avoid threading altogether if it makes sense.
and do everything that blucz says too.
Perhaps part of the problem is that you have such a long method / while loop. Whether or not you are having threading issues, you should break it down into smaller processing steps. Let's suppose those steps are Alpha(), Bravo(), Charlie() and Delta().
You could then do something like this:
public void MyBigBackgroundTask()
{
    Action[] tasks = new Action[] { Alpha, Bravo, Charlie, Delta };
    int workStepSize = 0;
    while (!_shouldStop)
    {
        tasks[workStepSize++]();
        workStepSize %= tasks.Length;
    }
}
So yes, it loops endlessly, but it checks whether it is time to stop between each business step.
You don't have to sprinkle while loops everywhere. The outer while loop just checks if it's been told to stop and if so doesn't make another iteration...
If you have a straight "go do something and close out" thread (no loops in it) then you just check the _shouldStop boolean either before or after each major spot inside the thread. That way you know whether it should continue on or bail out.
for example:
public void DoWork() {
    RunSomeBigMethod();
    if (_shouldStop) { return; }
    RunSomeOtherBigMethod();
    if (_shouldStop) { return; }
    //....
}
Instead of adding a while loop where a loop doesn't otherwise belong, add something like if (_shouldStop) CleanupAndExit(); wherever it makes sense to do so. There's no need to check after every single operation or sprinkle the code all over with them. Instead, think of each check as a chance to exit the thread at that point and add them strategically with this in mind.
All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me
There are not a lot of ways to make code take a long time. Looping is a pretty essential programming construct. Making code take a long time without looping takes a huge number of statements. Hundreds of thousands.
Or calling some other code that is doing the looping for you. Yes, hard to make that code stop on demand. That just doesn't work.
I have been trying to figure out how to solve a requirement I have, but for the life of me I just can't come up with a solution.
I have a database of items which stores them in a kind of queue.
(The database has already been implemented and other processes will be adding items to this queue.)
The items require a lot of work/time to "process" so I need to be able to:
Constantly de-queue items from the database.
For each item, run a new thread and process the item, then return true/false according to whether it was successfully processed. (This will be used to decide whether to re-add it to the database queue.)
But only do this while the current number of active threads (one per item being processed) is less than a maximum-number-of-threads parameter.
Once the maximum number of threads has been reached, I need to stop de-queuing items from the database until the current number of threads is less than the maximum.
At which point it needs to continue de-queuing items.
It feels like this should be something I can come up with but it is just not coming to me.
To clarify: I only need to implement the threading. The database has already been implemented.
One really easy way to do this is with a Semaphore. You have one thread that dequeues items and creates threads to process them. For example:
const int MaxThreads = 4;
Semaphore sem = new Semaphore(MaxThreads, MaxThreads);

while (Queue.HasItems())
{
    sem.WaitOne();
    var item = Queue.Dequeue();
    ThreadPool.QueueUserWorkItem(ProcessItem, item); // see below
}

// When the queue is empty, you have to wait for all processing
// threads to complete.
// If you can acquire the semaphore MaxThreads times, all workers are done.
int count = 0;
while (count < MaxThreads)
{
    sem.WaitOne();
    ++count;
}

// the code to process an item
void ProcessItem(object item)
{
    // cast the item to whatever type you need,
    // and process it.
    // when done processing, release the semaphore
    sem.Release();
}
The above technique works quite well. It's simple to code, easy to understand, and very effective.
One change is that you might want to use the Task API rather than ThreadPool.QueueUserWorkItem. Task gives you more control over the asynchronous processing, including cancellation. I used QueueUserWorkItem in my example because I'm more familiar with it. I would use Task in a production program.
Although this does use N+1 threads (where N is the number of items you want processed concurrently), that extra thread isn't often doing anything. The only time it's running is when it's assigning work to worker threads. Otherwise, it's doing a non-busy wait on the semaphore.
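For what it's worth, here is a sketch of the same structure using Task and SemaphoreSlim; Queue.HasItems() and Queue.Dequeue() are the same hypothetical calls as in the example above:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

const int MaxThreads = 4;
var sem = new SemaphoreSlim(MaxThreads, MaxThreads);
var tasks = new List<Task>();

while (Queue.HasItems())
{
    sem.Wait();
    var item = Queue.Dequeue();
    tasks.Add(Task.Factory.StartNew(() =>
    {
        try
        {
            // process item
        }
        finally
        {
            sem.Release();
        }
    }));
}

// Waiting on the tasks replaces acquiring the semaphore MaxThreads times.
Task.WaitAll(tasks.ToArray());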
Do you just not know where to start?
Consider a thread pool with a max number of threads. http://msdn.microsoft.com/en-us/library/y5htx827.aspx
Consider spinning up your max number of threads immediately and monitoring the DB. http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx is convenient.
Remember that you can't guarantee your process will be ended safely...crashes happen. Consider logging of processing state.
Remember that your select and remove-from-queue operations should be atomic.
Ok, so the architecture of the solution is going to depend on one thing: does the processing time per queue item vary according to the item's data?
If not then you can have something that merely round-robins between the processing threads. This will be fairly simple to implement.
If the processing time does vary then you're going to need something with more of a 'next available' feel to it, so that whichever of your threads happens to be free first gets the job of processing the data item.
Having worked that out you're then going to have the usual run around with how to synchronise between a queue reader and the processing threads. The difference between 'next-available' and 'round-robin' is how you do that synchronisation.
I'm not overly familiar with C#, but I've heard tell of a beast called a background worker. That is likely to be an acceptable means of bringing this about.
For round robin, just start up a background worker per queue item, storing the workers' references in an array. Limit yourself to, say, 16 in progress background workers. The idea is that having started 16 you would then wait for the first to complete before starting the 17th, and so on. I believe that background workers actually run as jobs on the thread pool, so that will automatically limit the number of threads that are actually running at any one time to something appropriate for the underlying hardware. To wait for a background worker see this. Having waited for a background worker to complete you'd then handle its result and start another up.
For the next-available approach it's not so different. Instead of waiting for the first to complete, you would use WaitAny() to wait for any of the workers to complete. You handle the return from whichever one completed, then start another one up and go back to WaitAny().
The general philosophy of both approaches is to keep a number of threads on the boil all the time. A feature of the next-available approach is that the order in which you emit the results is not necessarily the same as the order of the input items. If that matters, then the round-robin approach with more background workers than CPU cores will be reasonably efficient (the thread pool will just start commissioned but not yet running workers anyway). However, the latency will vary with the processing time.
BTW 16 is an arbitrary number chosen on the basis of how many cores you think will be on the PC running the software. More cores, bigger number.
Of course, in the seemingly restless and ever changing world of .NET there may now be a better way of doing this.
Good luck!
I have a task that should wait until a value has been added to a ConcurrentDictionary. If the value is added, it should stop waiting and continue its work. It should stop waiting as well if a timeout occurred (5 seconds, for example).
The value should be added to the ConcurrentDictionary by another thread/task, however, due to the nature of the code, I do not want the threads to communicate with each other.
How should I go about implementing this (in C#)?
Btw, I am using Tasks, not Threads, for most of this, and thus I am not sure if Thread.Sleep or other methods on the current thread would be a good decision, because it might sleep other tasks which use the same thread and cause random problems.
I think "waiting" for an element to be added to a collection is generally a bad thing. Doing this, in general, means that a thread is blocked for some period of time. If it's even possible, you then have to deal with timeouts. If it's not possible to detect timeouts then you have to deal with aborting the thread to abort the wait (you never want to get into a situation where a thread is blocked indefinitely) e.g. cancellation.
ConcurrentDictionary is thread-safe in and of itself, but this doesn't make all code that uses ConcurrentDictionary thread-safe. Application-specific thread-safety requirements still need to be taken into account when using a ConcurrentDictionary object, and ConcurrentDictionary cannot possibly implement them for you. Notification when a value is added falls into the same category: the overhead of waiting or notifying external code upon every addition would make all usages of ConcurrentDictionary slower, even the usages that don't need to wait or be notified, so things like that are not implemented. The case where an application adds a value and needs to be notified that a value was added is probably pretty rare (from the dictionary's point of view, why would it tell you that you just added a value?).
So that sort of application-specific thing is normally done at the application level; the fact that a concurrent dictionary is involved is a coincidence, and your application notifies other threads that another thread has done something they need to know about. This could mean wrapping adds to the dictionary instance with calls to ManualResetEventSlim.Reset/Set, while the waiting side uses a ManualResetEventSlim.Wait overload. Or it could simply be a matter of raising an event whenever a value is added to the dictionary.
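As a sketch of the wrapping idea (the wrapper type and its members are invented for illustration; a fuller version would also Reset the event between additions, as described above):

using System.Collections.Concurrent;
using System.Threading;

class NotifyingDictionary<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> inner =
        new ConcurrentDictionary<TKey, TValue>();
    private readonly ManualResetEventSlim added = new ManualResetEventSlim(false);

    public void Add(TKey key, TValue value)
    {
        inner.TryAdd(key, value);
        added.Set(); // wake anyone waiting on an addition
    }

    // Returns false if the timeout elapses before anything is added.
    public bool WaitForAdd(int millisecondsTimeout)
    {
        return added.Wait(millisecondsTimeout);
    }
}

The waiting task would then call WaitForAdd(5000) to get the 5-second timeout behaviour the question asks for.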
I don't believe ConcurrentDictionary provides any hooks for things like this.
You could potentially either wrap or derive from ConcurrentDictionary to provide this, but I'd be nervous of doing so.
The low-tech alternative is simply to poll: loop round, checking and then sleeping, until you either find the right value or time out. It's far from great from an efficiency point of view, but it's probably the simplest approach if it doesn't give you any other problems.
This is how the sleep polling could work. Quick and dirty. :)
System.Collections.Concurrent.ConcurrentDictionary<string, string> dic =
    new System.Collections.Concurrent.ConcurrentDictionary<string, string>();

int timeoutCount = 0;
bool hasTimedOut = false;
while (!dic.ContainsKey("KeyYouAreLookingFor"))
{
    // 5 minutes have expired (10 polls x 30 seconds)
    if (timeoutCount >= 10)
    {
        hasTimedOut = true;
        break;
    }
    // 30-second sleep, or whatever you want your poll time to be
    System.Threading.Thread.Sleep(30000);
    timeoutCount++;
}

if (hasTimedOut)
{
    //TODO: timeout code
}
else
{
    //TODO: Key has been added
}
I have two threads, a producer and a consumer.
The producer might not always be producing something. The consumer however, needs to consume it as soon as it becomes available.
The producer thread works in a loop and puts results into a ConcurrentQueue. The consumer thread is in a while (!disposing) loop that calls AutoResetEvent.WaitOne when the system becomes disabled. I've considered calling AutoResetEvent.WaitOne also in the case when the ConcurrentQueue.TryDequeue method returns false; this should only ever happen when there are no items left in the queue.
However, if I were to do this, a deadlock could occur when the following execution is done:
Enqueue
TryDequeue returns true
TryDequeue returns false
Enqueue
WaitOne
This is a possibility in this snippet:
while (this.isDisposing == 0)
{
    if (this.isEnabled == 0)
    {
        this.signal.WaitOne();
    }

    object item;
    if (!this.queue.TryDequeue(out item))
    {
        this.signal.WaitOne();
        continue;
    }

    this.HandleItem(item);
}
What is the proper way to do this without using locks?
I think a BlockingCollection would be good to use here. It will wait efficiently until there is data in the queue. You can combine it with a ConcurrentQueue, I think. See http://msdn.microsoft.com/en-us/library/dd267312.aspx
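A sketch of how the consumer side from the question might look, reusing the question's HandleItem (this is an outline, not a drop-in replacement):

using System.Collections.Concurrent;

var queue = new BlockingCollection<object>(new ConcurrentQueue<object>());

// Producer thread, for each produced result:
queue.Add(result);

// Consumer thread: blocks until items arrive; the loop ends once
// CompleteAdding() has been called and the queue drains, so the
// enqueue-then-wait race in the original snippet cannot occur.
foreach (object item in queue.GetConsumingEnumerable())
{
    this.HandleItem(item);
}

// On dispose, instead of setting a flag:
queue.CompleteAdding();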
The problem here is that thread pausing is, in almost all operating systems, a kernel-level event. Windows, I think, permits user-level pausing/unpausing with Fibers, but that's all I know of.
So you locklessly whizz along with your queue, but how do you signal that there is something in the queue in the first place?
Signalling implies sleeping -- and that's the problem. You can do lock-free signalling, but waiting, well, you've got to call the OS equivalent of WaitForEvent(), and that's a problem, because you don't WANT to be using these slow, OS-provided mechanisms.
Basically, there's no or very little OS support for this yet.
For waiting on multiple threads, can anyone compare the pros and cons of using WaitHandle.WaitAll and Thread.Join?
WaitHandle.WaitAll has a 64 handle limit so that is obviously a huge limitation. On the other hand, it is a convenient way to wait for many signals in only a single call. Thread.Join does not require creating any additional WaitHandle instances. And since it could be called individually on each thread the 64 handle limit does not apply.
Personally, I have never used WaitHandle.WaitAll. I prefer a more scalable pattern when I want to wait on multiple signals. You can create a counting mechanism that counts up or down, and once a specific value is reached you signal a single shared event. The CountdownEvent class conveniently packages all of this into a single class.
var finished = new CountdownEvent(1);
for (int i = 0; i < NUM_WORK_ITEMS; i++)
{
    finished.AddCount();
    SpawnAsynchronousOperation(
        () =>
        {
            try
            {
                // Place logic to run in parallel here.
            }
            finally
            {
                finished.Signal();
            }
        });
}
finished.Signal();
finished.Wait();
Update:
The reason why you want to signal the event from the main thread is subtle. Basically, you want to treat the main thread as if it were just another work item. After all, it, along with the other real work items, is running concurrently as well.
Consider for a moment what might happen if we did not treat the main thread as a work item. It will go through one iteration of the for loop and add a count to our event (via AddCount), indicating that we have one pending work item, right? Let's say the SpawnAsynchronousOperation completes and gets the work item queued on another thread. Now, imagine the main thread gets preempted before swinging around to the next iteration of the loop. The thread executing the work item gets its fair share of the CPU, starts humming along, and actually completes the work item. The Signal call in the work item runs and decrements our pending work item count to zero, which changes the state of the CountdownEvent to signalled. In the meantime the main thread wakes up, goes through all iterations of the loop, and hits the Wait call, but since the event got prematurely signalled it will pass on by, even though there are still pending work items.
Again, avoiding this subtle race condition is easy when you treat the main thread as a work item. That is why the CountdownEvent is initialized with one count and the Signal method is called before the Wait.
I like #Brian's answer as a comparison of the two mechanisms.
If you are on .NET 4, it would be worthwhile exploring the Task Parallel Library to achieve task parallelism via System.Threading.Tasks, which allows you to manage tasks across multiple threads at a higher level of abstraction. The signalling you asked about in this question to manage thread interactions is hidden or much simplified, and you can concentrate on properly defining what each Task consists of and how to coordinate them.
This may seem off-topic, but as Microsoft themselves say in the MSDN docs:
"in the .NET Framework 4, tasks are the preferred API for writing multi-threaded, asynchronous, and parallel code."
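To make that concrete, here is a sketch of Brian's CountdownEvent example redone with tasks (NUM_WORK_ITEMS as in that example):

using System.Threading.Tasks;

Task[] workers = new Task[NUM_WORK_ITEMS];
for (int i = 0; i < NUM_WORK_ITEMS; i++)
{
    workers[i] = Task.Factory.StartNew(() =>
    {
        // Place logic to run in parallel here.
    });
}

// No WaitHandles, Joins, or manual counting; the library does the signalling.
Task.WaitAll(workers);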
The WaitAll mechanism involves kernel-mode objects. I don't think the same is true for the Join mechanism. I would prefer Join, given the opportunity.
Technically, though, the two are not equivalent. IIRC Join can only operate on one thread at a time. WaitAll can wait on the signalling of multiple kernel objects.