EventWaitHandle behavior for pthread_cond_t - c#

I've recently seen the light of EventWaitHandle's powerful behavior in C# and decided to move some functionality in a sister application to do the same. The only problem is that the sister app is written in C.
No big deal, I'm using pthreads, which have a pthread_cond_t datatype that allows for signalling. My only question is, is it possible for a cond to be 'signalled' before something is waiting on it?
Right now my testing says no. That is, if ThreadA fires a signal before ThreadB is waiting, ThreadB will wait indefinitely. Is there another pthread type I can use that behaves more like the EventWaitHandle in C#? That is, an object can be signalled in advance, so the first thread to wait on it passes through immediately and resets it to unsignalled.
Wrapping pthread_cond_t in another data structure to achieve this wouldn't be too hard. But again, is this functionality already available in the pthreads library?

If you're using condition variables correctly, this won't matter.
The basic flow of your code should be (in pseudocode):
lock(lockobj);
while (!signalled) {
    wait(condvar);
}
signalled = false;
unlock(lockobj);
on the waiting side, and:
lock(lockobj);
signalled = true;
notify(condvar);
unlock(lockobj);
on the signalling side. (Of course, the lock object and condition variable used have to be the same on both sides.) Hope this helps!
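For reference, here is a minimal C sketch of the same pattern with pthreads; the names (lockobj, condvar, signalled, event_wait, event_set) are just illustrative:
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lockobj = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  condvar = PTHREAD_COND_INITIALIZER;
static bool signalled = false;

/* Waiting side: blocks until the event is signalled, then resets it (auto-reset). */
void event_wait(void)
{
    pthread_mutex_lock(&lockobj);
    while (!signalled)
        pthread_cond_wait(&condvar, &lockobj);  /* atomically releases the mutex while waiting */
    signalled = false;
    pthread_mutex_unlock(&lockobj);
}

/* Signalling side: the flag is what lets a signal "stick" even if nobody is waiting yet. */
void event_set(void)
{
    pthread_mutex_lock(&lockobj);
    signalled = true;
    pthread_cond_signal(&condvar);
    pthread_mutex_unlock(&lockobj);
}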

Alternative answer (also in pseudocode) if you want multiple signallings (i.e., if signalled twice, then two threads can wait before the state is unsignalled again).
Waiting side:
lock(lockobj);
while (signalled == 0) {
    wait(condvar);
}
--signalled;
unlock(lockobj);
Signalling side:
lock(lockobj);
++signalled;
notify(condvar);
unlock(lockobj);
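The counting variant maps onto pthreads the same way. A sketch, reusing the mutex and condition variable from the snippet above and keeping the count in a plain int:
static int signalled_count = 0;  /* number of outstanding signals */

void event_wait_counted(void)
{
    pthread_mutex_lock(&lockobj);
    while (signalled_count == 0)
        pthread_cond_wait(&condvar, &lockobj);
    --signalled_count;            /* consume one signal */
    pthread_mutex_unlock(&lockobj);
}

void event_set_counted(void)
{
    pthread_mutex_lock(&lockobj);
    ++signalled_count;
    pthread_cond_signal(&condvar);
    pthread_mutex_unlock(&lockobj);
}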

I ended up just wrapping a condition variable in a new structure and created some simple functions to behave much like the EventWaitHandle from C#. I needed two mutexes to achieve properly serialized access.
The cond_mutex is used for waiting on the condition variable, while the data_mutex is used when setting the state from signalled to not signalled.
The reset mode mirrors C#: AUTO or MANUAL. This allows the event_wait_t to reset itself automatically after a wait, or lets the programmer do it manually with a call to event_wait_reset(event_wait_t *ewh);
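The post doesn't show the actual structure, so the following is only a rough guess at what such a wrapper could look like. For simplicity it uses a single mutex paired with the condition variable (the standard pairing) rather than the two-mutex split described above, and all names other than event_wait_t and event_wait_reset are invented:
#include <pthread.h>

typedef enum { EVENT_RESET_AUTO, EVENT_RESET_MANUAL } event_reset_mode_t;

typedef struct {
    pthread_mutex_t     mutex;      /* protects signalled and pairs with cond */
    pthread_cond_t      cond;
    int                 signalled;
    event_reset_mode_t  mode;
} event_wait_t;

void event_wait_set(event_wait_t *ewh)
{
    pthread_mutex_lock(&ewh->mutex);
    ewh->signalled = 1;
    pthread_cond_broadcast(&ewh->cond);
    pthread_mutex_unlock(&ewh->mutex);
}

void event_wait_wait(event_wait_t *ewh)
{
    pthread_mutex_lock(&ewh->mutex);
    while (!ewh->signalled)
        pthread_cond_wait(&ewh->cond, &ewh->mutex);
    if (ewh->mode == EVENT_RESET_AUTO)
        ewh->signalled = 0;         /* auto-reset after a successful wait */
    pthread_mutex_unlock(&ewh->mutex);
}

void event_wait_reset(event_wait_t *ewh)
{
    pthread_mutex_lock(&ewh->mutex);
    ewh->signalled = 0;             /* manual reset */
    pthread_mutex_unlock(&ewh->mutex);
}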

Related

How can I make an interrupt in C#? [duplicate]

I understand Thread.Abort() is evil from the multitude of articles I've read on the topic, so I'm currently ripping out all of my aborts and replacing them with a cleaner approach. After comparing user strategies from people here on Stack Overflow and reading "How to: Create and Terminate Threads (C# Programming Guide)" on MSDN, both of which describe much the same approach -- checking a volatile bool flag -- I still have a few questions.
What immediately stands out to me is: what if you don't have a simple worker process that just runs a loop of number-crunching code? For instance, my process is a background file uploader. I do loop through each file, so I could add while (!_shouldStop) at the top to cover each loop iteration, but many more business processes happen before it hits the next iteration, and I want this cancel procedure to be snappy. Don't tell me I need to sprinkle these checks every 4-5 lines throughout my entire worker function?!
I really hope there is a better way. Could somebody please advise me on whether this is in fact the correct [and only?] approach, or share strategies they have used in the past to achieve what I am after?
Thanks gang.
Further reading: All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me. What if it is a linear, but time-consuming, background operation?
Unfortunately there may not be a better option. It really depends on your specific scenario. The idea is to stop the thread gracefully at safe points. That is the crux of why Thread.Abort is not good: it is not guaranteed to occur at a safe point. By sprinkling the code with a stopping mechanism you are effectively defining the safe points manually. This is called cooperative cancellation. There are a handful of broad mechanisms for doing this; choose the one that best fits your situation.
Poll a stopping flag
You have already mentioned this method. This is a pretty common one. Make periodic checks of the flag at safe points in your algorithm and bail out when it gets signalled. The standard approach is to mark the variable volatile. If that is not possible or inconvenient, you can use a lock instead. Remember, you cannot mark a local variable as volatile, so if a lambda expression captures it through a closure, for example, you would have to resort to a different method for creating the required memory barrier. There is not a whole lot else that needs to be said for this method.
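A minimal sketch of this approach; the class, field, and method names are only illustrative:
using System.Collections.Generic;

class Worker
{
    // volatile makes the latest value written by the requesting thread visible here.
    private volatile bool _shouldStop;

    public void RequestStop()
    {
        _shouldStop = true;
    }

    public void DoWork()
    {
        foreach (var file in GetFilesToUpload())
        {
            if (_shouldStop) return;   // safe point: bail out between files
            Upload(file);
        }
    }

    // Placeholders standing in for the real business logic.
    private IEnumerable<string> GetFilesToUpload() { return new string[0]; }
    private void Upload(string file) { }
}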
Use the new cancellation mechanisms in the TPL
This is similar to polling a stopping flag except that it uses the new cancellation data structures in the TPL. It is still based on cooperative cancellation patterns. You need to get a CancellationToken and then periodically check IsCancellationRequested. To request cancellation you call Cancel on the CancellationTokenSource that originally provided the token. There is a lot you can do with the new cancellation mechanisms; the documentation covers them in detail.
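A small sketch of the token pattern; the Sleep just stands in for a unit of real work:
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationExample
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        var worker = Task.Factory.StartNew(() =>
        {
            while (!cts.Token.IsCancellationRequested)   // safe point between units of work
            {
                Thread.Sleep(100);                       // stands in for one unit of real work
            }
        });

        cts.Cancel();        // request cooperative cancellation (e.g. from the UI thread)
        worker.Wait();       // the loop observes the token and exits cleanly
        Console.WriteLine("Worker stopped.");
    }
}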
Use wait handles
This method can be useful if your worker thread has to wait for a specific interval or for a signal during its normal operation. You can Set a ManualResetEvent, for example, to let the thread know it is time to stop. You can test the event using the WaitOne function, which returns a bool indicating whether the event was signalled. WaitOne also takes a parameter that specifies how long to wait before returning if the event was not signalled in that time. You can use this technique in place of Thread.Sleep and get the stopping indication at the same time. It is also useful if there are other WaitHandle instances the thread may have to wait on; you can call WaitHandle.WaitAny to wait on any event (including the stop event) in one call. Using an event can be better than calling Thread.Interrupt since you have more control over the flow of the program (Thread.Interrupt throws an exception, so you would have to place try-catch blocks strategically to perform any necessary cleanup).
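A sketch of this approach, replacing what would otherwise be a Thread.Sleep with a timed wait on a stop event (names are illustrative):
using System.Threading;

class WaitHandleWorker
{
    private readonly ManualResetEvent _stopEvent = new ManualResetEvent(false);

    public void RequestStop()
    {
        _stopEvent.Set();
    }

    public void DoWork()
    {
        while (true)
        {
            DoOneUnitOfWork();

            // Waits up to 500 ms, but wakes immediately if the stop event is
            // signalled; WaitOne returns true when it is time to stop.
            if (_stopEvent.WaitOne(500))
                break;
        }
    }

    private void DoOneUnitOfWork() { /* placeholder for real work */ }
}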
Specialized scenarios
There are several one-off scenarios that have very specialized stopping mechanisms. It is definitely outside the scope of this answer to enumerate them all (never mind that it would be nearly impossible). A good example of what I mean here is the Socket class. If the thread is blocked on a call to Send or Receive, calling Close will interrupt the socket on whatever blocking call it was in, effectively unblocking it. I am sure there are several other areas in the BCL where similar techniques can be used to unblock a thread.
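A rough sketch of the Socket example; the structure here (a stopping flag plus closing the socket from another thread) is my own illustration, not code from the answer:
using System.Net.Sockets;

class SocketReceiver
{
    private Socket _socket;            // assumed to be a connected socket
    private volatile bool _stopping;

    public void ReceiveLoop(Socket connected)
    {
        _socket = connected;
        var buffer = new byte[4096];
        try
        {
            while (true)
            {
                int read = _socket.Receive(buffer);   // blocks here
                if (read == 0) break;                 // remote side closed the connection
                // ... process buffer ...
            }
        }
        catch (SocketException)
        {
            if (!_stopping) throw;     // a real network error, not a requested stop
        }
        catch (ObjectDisposedException)
        {
            if (!_stopping) throw;     // socket was closed for some other reason
        }
    }

    public void Stop()
    {
        _stopping = true;
        var s = _socket;
        if (s != null) s.Close();      // forces the blocked Receive to return or throw
    }
}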
Interrupt the thread via Thread.Interrupt
The advantage here is that it is simple and you do not have to focus on sprinkling your code with anything. The disadvantage is that you have little control over where the safe points are in your algorithm. The reason is that Thread.Interrupt works by injecting an exception inside one of the canned BCL blocking calls. These include Thread.Sleep, WaitHandle.WaitOne, Thread.Join, etc. So you have to be wise about where you place them. However, most of the time the algorithm dictates where they go, and that is usually fine anyway, especially if your algorithm spends most of its time in one of these blocking calls. If your algorithm does not use one of the blocking calls in the BCL, this method will not work for you. The theory here is that the ThreadInterruptedException is only generated from a .NET waiting call, so it is likely at a safe point. At the very least you know that the thread cannot be in unmanaged code or bail out of a critical section leaving a dangling lock in an acquired state. Despite this being less invasive than Thread.Abort, I still discourage its use because it is not obvious which calls respond to it and many developers will be unfamiliar with its nuances.
Well, unfortunately in multithreading you often have to compromise "snappiness" for cleanliness... you can exit a thread immediately if you Interrupt it, but it won't be very clean. So no, you don't have to sprinkle the _shouldStop checks every 4-5 lines, but if you do interrupt your thread then you should handle the exception and exit out of the loop in a clean manner.
Update
Even if it's not a looping thread (i.e. perhaps it's a thread that performs some long-running asynchronous operation or some type of blocking input operation), you can Interrupt it, but you should still catch the ThreadInterruptedException and exit the thread cleanly. I think the examples you've been reading are very appropriate.
Update 2.0
Yes I have an example... I'll just show you an example based on the link you referenced:
public class InterruptExample
{
    private Thread t;
    private volatile bool alive;

    public InterruptExample()
    {
        alive = false;
        t = new Thread(() =>
        {
            try
            {
                while (alive)
                {
                    /* Do work. */
                }
            }
            catch (ThreadInterruptedException exception)
            {
                /* Clean up. */
            }
        });
        t.IsBackground = true;
    }

    public void Start()
    {
        alive = true;
        t.Start();
    }

    public void Kill(int timeout = 0)
    {
        // Somebody tells you to stop the thread.
        t.Interrupt();
        // Optionally you can block the caller
        // by making them wait until the thread exits.
        // If they leave the default timeout,
        // then they will not wait at all.
        t.Join(timeout);
    }
}
If cancellation is a requirement of the thing you're building, then it should be treated with as much respect as the rest of your code--it may be something you have to design for.
Let's assume that your thread is doing one of two things at all times:
Something CPU bound
Waiting for the kernel
If you're CPU bound in the thread in question, you probably have a good spot to insert the bail-out check. If you're calling into someone else's code to do some long-running CPU-bound task, then you might need to fix the external code, move it out of process (aborting threads is evil, but aborting processes is well-defined and safe), etc.
If you're waiting for the kernel, then there's probably a handle (or fd, or mach port, ...) involved in the wait. Usually if you destroy the relevant handle, the kernel will return with some failure code immediately. If you're in .net/java/etc. you'll likely end up with an exception. In C, whatever code you already have in place to handle system call failures will propagate the error up to a meaningful part of your app. Either way, you break out of the low-level place fairly cleanly and in a very timely manner without needing new code sprinkled everywhere.
A tactic I often use with this kind of code is to keep track of a list of handles that need to be closed and then have my abort function set a "cancelled" flag and then close them. When the function fails it can check the flag and report failure due to cancellation rather than due to whatever the specific exception/errno was.
You seem to be implying that an acceptable granularity for cancellation is at the level of a service call. This is probably not good thinking--you are much better off cancelling the background work synchronously and joining the old background thread from the foreground thread. It's way cleaner because:
It avoids a class of race conditions when old background-work threads come back to life after unexpected delays.
It avoids the hidden thread/memory leaks that hanging background work would otherwise cause, because a hanging background thread can no longer have its effects hidden.
There are two reasons to be scared of this approach:
You don't think you can abort your own code in a timely fashion. If cancellation is a requirement of your app, the decision you really need to make is a resource/business decision: do a hack, or fix your problem cleanly.
You don't trust some code you're calling because it's out of your control. If you really don't trust it, consider moving it out-of-process. You get much better isolation from many kinds of risks, including this one, that way.
The best answer largely depends on what you're doing in the thread.
Like you said, most answers revolve around polling a shared boolean every couple of lines. Even though you may not like it, this is often the simplest scheme. If you want to make your life easier, you can write a method like ThrowIfCancelled(), which throws some kind of exception if you're done. The purists will say this is (gasp) using exceptions for control flow, but then again cancelling is exceptional, imo.
If you're doing IO operations (like network stuff), you may want to consider doing everything using async operations.
If you're doing a sequence of steps, you could use the IEnumerable trick to make a state machine. Example:
using System;
using System.Collections.Generic;
using System.Threading;

abstract class StateMachine : IDisposable
{
    public abstract IEnumerable<object> Main();

    public virtual void Dispose()
    {
        // ... override with freeing code ...
    }

    volatile bool wasCancelled;

    public void Cancel()
    {
        // ... or set wasCancelled using the locking scheme of your choice ...
        wasCancelled = true;
    }

    public Thread Run()
    {
        var thread = new Thread(() =>
        {
            try
            {
                if (wasCancelled) return;
                foreach (var x in Main())
                {
                    if (wasCancelled) return;
                }
            }
            finally { Dispose(); }
        });
        thread.Start();
        return thread;
    }
}

class MyStateMachine : StateMachine
{
    public override IEnumerable<object> Main()
    {
        DoSomething();
        yield return null;
        DoSomethingElse();
        yield return null;
    }

    void DoSomething() { }
    void DoSomethingElse() { }
}

// then call new MyStateMachine().Run() to run.
Overengineering? It depends how many state machines you use. If you just have 1, yes. If you have 100, then maybe not. Too tricky? Well, it depends. Another bonus of this approach is that it lets you (with minor modifications) move your operation into a Timer.Tick callback and avoid threading altogether if it makes sense.
and do everything that blucz says too.
Perhaps a piece of the problem is that you have such a long method / while loop. Whether or not you are having threading issues, you should break it down into smaller processing steps. Let's suppose those steps are Alpha(), Bravo(), Charlie() and Delta().
You could then do something like this:
public void MyBigBackgroundTask()
{
    Action[] tasks = new Action[] { Alpha, Bravo, Charlie, Delta };
    int workStepSize = 0;
    while (!_shouldStop)
    {
        tasks[workStepSize++]();
        workStepSize %= tasks.Length;
    }
}
So yes it loops endlessly, but checks if it is time to stop between each business step.
You don't have to sprinkle while loops everywhere. The outer while loop just checks if it's been told to stop and if so doesn't make another iteration...
If you have a straight "go do something and close out" thread (no loops in it) then you just check the _shouldStop boolean either before or after each major spot inside the thread. That way you know whether it should continue on or bail out.
for example:
public void DoWork()
{
    RunSomeBigMethod();
    if (_shouldStop) { return; }
    RunSomeOtherBigMethod();
    if (_shouldStop) { return; }
    // ...
}
Instead of adding a while loop where a loop doesn't otherwise belong, add something like if (_shouldStop) CleanupAndExit(); wherever it makes sense to do so. There's no need to check after every single operation or sprinkle the code all over with them. Instead, think of each check as a chance to exit the thread at that point and add them strategically with this in mind.
All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me
There are not a lot of ways to make code take a long time. Looping is a pretty essential programming construct. Making code take a long time without looping takes a huge amount of statements. Hundreds of thousands.
Or calling some other code that is doing the looping for you. Yes, hard to make that code stop on demand. That just doesn't work.

Is it safe to read a local variable in multithreaded programming in C#?

In Java, we can't share a local variable between threads unless the final keyword is added.
But in C#, it's allowed to write like this:
void DoSomeJob()
{
    bool isDone = false;
    new Thread(() =>
    {
        // Do some background job
        isDone = true;
    }).Start();

    while (isDone == false)
    {
        // Do some foreground job
    }
}
Actually, this worked in my simple tests:
- As you can see, there is no Thread.Sleep() call or anything similar that would force the value to be reloaded.
- It ran in both debug and release mode.
If I had to share a variable between threads (without any locks) like this, I would define it as a static field with the volatile keyword to avoid running into an infinite loop.
So I wonder whether simply reading a local variable like the above will always work.
Btw, this is just a question out of curiosity, not about writing better multithreaded code.
This example is probably fine; HOWEVER, you never know for sure.
The fact that it is a local variable does not matter. Always protect access to a variable that is used from more than one thread, e.g. with a lock, even if one thread only reads the value.
Not sure if it's useful here, but if what you actually want is to test whether a thread is complete, there are better ways to do that, e.g. wait handles, Thread.Join, or tasks and Wait.
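For example, a small sketch of the Join and Task alternatives (the Sleep just stands in for the background job):
using System;
using System.Threading;
using System.Threading.Tasks;

class CompletionExample
{
    static void Main()
    {
        // Thread + Join: block until the background work finishes.
        var worker = new Thread(() => Thread.Sleep(500) /* background job */);
        worker.Start();
        worker.Join();

        // Task + Wait: the same idea with the TPL, no shared flag required.
        var task = Task.Factory.StartNew(() => Thread.Sleep(500) /* background job */);
        task.Wait();

        Console.WriteLine("Both background jobs are done.");
    }
}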
This example has nothing to do with accessing UI controls from another thread, which, while a real concern, is a separate topic altogether and more of a design constraint due to the way Windows works.
You should always keep one thing in mind when doing multithreading: as long as a piece of code is not atomic, it may be interrupted by another thread.
Assigning a bool value is atomic, but together with the calculations before it, it is not.
Taking your code as an example: when the work in the new thread is done but the isDone indicator has not been updated yet, the main thread may check isDone and find it false, even though the work in the new thread has actually finished. The main thread then does the wrong thing because it read the isDone flag too early.
Your code will be fine if the main thread's loop works correctly regardless of whether isDone is set or not.

Busy waiting in C#

How do you implement busy waiting in a way that is not totally inefficient? I am facing the issue that I can only load my model's data in a pull manner, which means I have to invoke its getXYZ() methods continuously.
This does not have to happen fast enough for user interaction, but it must be fast enough that when a state in the GUI changes, the model notices and the new state is picked up via the getXYZ() methods.
My approach would simply be:
while (!c.hasChanged()) {
    Thread.Sleep(500);
}
updateData();
Are there better mechanisms?
Your problem seems to be solvable with Threading.
In WPF you can do:
Thread t = new Thread((ThreadStart)delegate()
{
    while (true)
    {
        Thread.Sleep(500);
        if (c.hasChanged())
            Dispatcher.Invoke((Action)delegate() { updateData(); });
    }
});
t.Start();
In WinForms
Thread t = new Thread((ThreadStart)delegate()
{
    while (true)
    {
        Thread.Sleep(500);
        // "this" must derive from Control
        if (c.hasChanged())
            this.Invoke((Action)delegate() { updateData(); });
    }
});
t.Start();
There may be missing parameters to Invoke (which is needed to execute the code on the UI thread), but I'm writing this from memory with no IntelliSense at my disposal :D
In .NET 4 you can use TaskFactory.StartNew instead of spawning a thread by yourself.
In .NET <= 4 you could use the ThreadPool for the thread.
However, I recall you need this to run right away, because you expect it to start checking as soon as possible, and the thread pool won't guarantee that (it could already be full, though that's not very likely :-)).
Just don't do silly things like spawning more of them in a loop!
And inside the thread you should put a check like
while (!Closing)
so that the thread can finish when you need it without having to resort to bad things like t.Abort();
And when exiting, set Closing to true and do a t.Join() to shut down the checker thread.
EDIT:
I forgot to say that Closing should be a bool property or a volatile bool field, not a plain bool, because otherwise you are not guaranteed that the thread will ever see the change and finish (well, it would in case you are closing the application, but it is good practice to make threads finish deliberately). The volatile keyword prevents the (pseudo)compiler from applying optimizations that assume the variable's value cannot change.
It's not clear from your post exactly what you are trying to do, but it sounds like you should put your model/service calls on a separate thread (via Background worker or async delegate) and use a callback from the model/service call to notify the UI when it's done. Your UI thread can then do busy things, like show a progress bar, but not become unresponsive.
If you are polling from a GUI, use a (WinForms) Timer.
If this is some kind of background process, your Sleep() may be the lesser evil.
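For the GUI-polling case, a small sketch with a WinForms Timer, assuming the c and updateData() from the question; the Tick handler runs on the UI thread, so it can touch controls directly:
var timer = new System.Windows.Forms.Timer();
timer.Interval = 500;   // milliseconds
timer.Tick += (sender, e) =>
{
    if (c.hasChanged())
        updateData();
};
timer.Start();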
Explicit busy waiting is evil and must be avoided whenever possible.
If you cannot avoid it, then build your application using the Observer design pattern and register the interested objects to an object which performs the polling, backed by a thread.
That way you have a clean design, confining the ugly stuff in just one place.
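A rough sketch of that idea: a single polling thread wrapped in an object, with interested parties subscribing to a change event (all names are illustrative):
using System;
using System.Threading;

class ModelPoller
{
    private readonly Thread _thread;
    private volatile bool _closing;

    public event Action ModelChanged;   // observers register here

    public ModelPoller(Func<bool> hasChanged)
    {
        _thread = new Thread(() =>
        {
            while (!_closing)
            {
                if (hasChanged())
                {
                    var handler = ModelChanged;
                    if (handler != null) handler();   // notify all observers
                }
                Thread.Sleep(500);
            }
        });
        _thread.IsBackground = true;
    }

    public void Start()
    {
        _thread.Start();
    }

    public void Stop()
    {
        _closing = true;
        _thread.Join();
    }
}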

stopping dll loop

I have a multi-thread C# application that uses some recursive functions in a dll. The problem that I have is how to cleanly stop the recursive functions.
The recursive functions are used to traverse our SCADA system's hierarchical 'SCADA Object' data. Traversing the data takes a long time (10s of minutes) depending on the size of our system and what we need to do with the data.
When I start the work I create a background thread so the GUI stays responsive. Then the background worker handles the calling of the recursive function in the dll.
I can send a cancel request to the background worker using CancelAsync, but the background worker can't check the CancellationPending flag because it is blocked waiting for the dll's recursive function to finish.
Typically there is only 1 recursive function active at a time but there are dozens of recursive functions that are used at various times by different background workers.
As a quick (and really shameful) hack I added a global 'CodeEnabled' flag to the dll, so when the GUI calls CancelAsync it also sets 'CodeEnabled' to false. (I know I need some of those bad code offsets.) Then the dll's recursive loop checks the 'CodeEnabled' flag and returns to the background worker, which is finally able to stop.
I don't want to move the recursive logic to the background worker thread because I need it in other places (e.g. other background workers).
What other approach should be used for this type of problem?
It depends on the design, really. Much recursion can be replaced with (for example) a local stack (Stack<>) or queue (Queue<>), in which case a cancel flag can be held locally without too much pain. Another option is to use some kind of progress event that allows subscribers to set a cancel flag. A third option is to pass some kind of context class into the function(s), with a (volatile or synchronized) flag that can be set.
In any of these cases you should have relatively easy access to a cancel flag to exit the recursion.
FooContext ctx = new FooContext();
BeginSomeRecursiveFunction(ctx);
...
ctx.Cancel = true; // or ctx.Cancel(), whatever
with (in your function that accepts the context):
if (ctx.Cancel) return; // or maybe throw something
                        // like an OperationCanceledException
blah...
CallMyself(ctx); // and further down the rabbit hole we go...
Another interesting option is to use iterator blocks for your long function rather than regular code; then your calling code can simply stop iterating when it has had enough.
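A hedged sketch of the iterator-block idea combined with an explicit stack: the traversal yields one node at a time, so the calling background worker simply stops iterating to cancel. The ScadaNode type, its Children property, and Process() are hypothetical stand-ins for the real SCADA object model:
using System.Collections.Generic;

class ScadaNode
{
    public string Name;
    public List<ScadaNode> Children = new List<ScadaNode>();
}

static class ScadaTraversal
{
    // Depth-first traversal using an explicit Stack<> and an iterator block.
    public static IEnumerable<ScadaNode> Walk(ScadaNode root)
    {
        var stack = new Stack<ScadaNode>();
        stack.Push(root);
        while (stack.Count > 0)
        {
            var node = stack.Pop();
            yield return node;                 // hand control back to the caller
            foreach (var child in node.Children)
                stack.Push(child);
        }
    }
}

// In the background worker, cancellation is just breaking out of the loop:
// foreach (var node in ScadaTraversal.Walk(root))
// {
//     if (worker.CancellationPending) break;
//     Process(node);
// }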
Well, it seems to me that you need to propagate the "stop now" state down the recursive calls. You could have some sort of cancellation token which you pass down the recursive calls, and also keep hold of in the UI thread. Something as simple as this:
public class CancellationToken
{
    private volatile bool cancelled;

    public bool IsCancelled { get { return cancelled; } }
    public void Cancel() { cancelled = true; }
}
(I'm getting increasingly wary of volatility and lock-free coding; I would be tempted to use a lock here instead of a volatile variable, but I've kept it here for the sake of simplicity.)
So you'd create the cancellation token, pass it in, and then at the start of each recursive method call you'd have:
if (token.IsCancelled)
{
    return null; // Or some other dummy value, or throw an exception
}
Then you'd just call Cancel() in the UI thread. Basically it's a just a way of sharing the state of "should this task continue".
The choice of whether to propagate a dummy return value back or throw an exception is an interesting one. In some ways this isn't exceptional - you must be partially expecting it, or you wouldn't pass the cancellation token in the first place - but at the same time exceptions have the behaviour you want in terms of unwinding the stack to somewhere that can recognise the cancellation easily.
I like the previous answers, but here's another.
I think you're asking how to have a different cancel flag for different threads.
Assuming that each thread you might want to cancel has some kind of ThreadId, then instead of having a single global 'CodeEnabled' flag, you could have a global thread-safe dictionary of flags, where the ThreadId values are used as the dictionary's keys.
A thread would then query the dictionary to see whether its flag has been set.
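A sketch of that per-thread flag dictionary; ConcurrentDictionary provides the thread safety, and the API shape here is my own illustration:
using System.Collections.Concurrent;
using System.Threading;

static class PerThreadCancellation
{
    // Keyed by managed thread id; true means "this thread should stop".
    private static readonly ConcurrentDictionary<int, bool> Flags =
        new ConcurrentDictionary<int, bool>();

    // Called from the GUI/controlling thread.
    public static void RequestStop(int threadId)
    {
        Flags[threadId] = true;
    }

    // Called from inside the recursive function on the worker thread.
    public static bool ShouldStop()
    {
        bool stop;
        return Flags.TryGetValue(Thread.CurrentThread.ManagedThreadId, out stop) && stop;
    }
}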
