How do I abort CCR threads/tasks? - C#

I want to implement a timeout on the execution of tasks in a project that uses the CCR. Basically, when I post an item to a Port or enqueue a Task to a DispatcherQueue, I want to be able to abort the task, or the thread that it's running on, if it takes longer than some configured time. How can I do this?

Can you confirm what you are asking? Are you running a long-lived task in the Dispatcher? Killing the thread would break the CCR model, so you need to be able to signal the thread to finish its work and yield. Assuming it's a loop that is not finishing quickly enough, you might choose to enqueue a timer:
var resultTimeoutPort = new Port<DateTime>();
dispatcherQueue.EnqueueTimer(TimeSpan.FromSeconds(RESULT_TIMEOUT),
                             resultTimeoutPort);
and ensure the blocking thread has a reference to resultTimeoutPort available. In the blocking loop, one of the exit conditions might be:
do
{
    // foomungus amount of work
} while (resultTimeoutPort.Test() == null &&
         someOtherCondition);
Please post more info if I'm barking up the wrong tree.

You could register the thread (Thread.CurrentThread) at the beginning of your CCR "Receive" handler (or in a method that calls your method via a delegate). Then you can do your periodic check and abort if necessary basically the same way you would have done it if you created the thread manually. The catch is that if you use your own Microsoft.Ccr.Core.Dispatcher with a fixed number of threads, I don't think there is a way to get those threads back once you abort them (based on my testing). So, if your dispatcher has 5 threads, you'll only be able to abort 5 times before posting will no longer work regardless of what tasks have been registered. However, if you construct a DispatcherQueue using the CLR thread pool, any CCR threads you abort will be replaced automatically and you won't have that problem. From what I've seen, although the CCR dispatcher is recommended, I think using the CLR thread pool is the way to go in this situation.
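For illustration, a hedged sketch of that registration idea (HandlerWatchdog and its members are invented names, not CCR APIs, and it assumes a CLR-thread-pool DispatcherQueue so that aborted threads get replaced):

using System;
using System.Collections.Concurrent;
using System.Threading;

class HandlerRegistration
{
    public Thread Thread;
    public DateTime Started;
}

static class HandlerWatchdog
{
    static readonly ConcurrentDictionary<int, HandlerRegistration> Running =
        new ConcurrentDictionary<int, HandlerRegistration>();

    // Call at the top of the CCR Receive handler.
    public static void Register()
    {
        Running[Thread.CurrentThread.ManagedThreadId] = new HandlerRegistration
        {
            Thread = Thread.CurrentThread,
            Started = DateTime.UtcNow
        };
    }

    // Call in a finally block when the handler completes normally.
    public static void Unregister()
    {
        HandlerRegistration ignored;
        Running.TryRemove(Thread.CurrentThread.ManagedThreadId, out ignored);
    }

    // Call periodically (e.g. from a timer port) to abort overdue handlers.
    public static void AbortOverdue(TimeSpan limit)
    {
        foreach (var entry in Running.ToArray())
        {
            if (DateTime.UtcNow - entry.Value.Started > limit)
                entry.Value.Thread.Abort();   // aborted pool threads are replaced automatically
        }
    }
}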

Related

How can I make a thread wait until another thread is waiting (C#)

I have a consumer thread that creates some worker threads. These threads must switch between active and waiting states. When all worker threads are in the waiting state, it means that the current job is done. How can I make the consumer thread wait for all the worker threads to be in the waiting state? I want a behavior very similar to Thread.Join() on all worker threads; however, I want the threads to keep running for the next job. I cannot create new threads because the jobs are in a tight loop and creating new threads is costly.
As far as I am aware, there is no built-in mechanism to do what you wish (Thread.Join would do it, but since you can't block, that is not an option).
From the info you provided, it sounds like you're really building a state machine, just spread across multiple threads.
I would create a Singleton and have it act as the state machine. Threads could signal their status to the Singleton.
It sounds like you have an indeterminate number of threads, so you would need to put the status of each in a collection. I would look at Thread Safe Collections to find the right fit for how you wish to store your state information.
Hope this helps.
Apologies for the brief answer (may expand later), but you probably want the WaitHandle.WaitAll method, combined with a ManualResetEvent. You would pass your ManualResetEvent objects into each worker thread when they're created, signal them when the workers become idle, and pass the entire set of handles into the WaitHandle.WaitAll method to wake the observing thread when they're complete. You can also use the timeout feature of this method if you want to periodically run some kind of task while waiting, or perform some kind of operation if the task is taking too long.
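A rough sketch of that pattern, with the worker body and counts invented for illustration (remember WaitHandle.WaitAll supports at most 64 handles):

using System;
using System.Threading;

class IdleWait
{
    // Hypothetical per-worker job; substitute the real work.
    static void DoOneJob(int worker) { Thread.Sleep(500); }

    static void Main()
    {
        const int workerCount = 4;
        var idleEvents = new ManualResetEvent[workerCount];

        for (int i = 0; i < workerCount; i++)
        {
            idleEvents[i] = new ManualResetEvent(false);
            int index = i;
            new Thread(() =>
            {
                DoOneJob(index);
                idleEvents[index].Set();   // signal "I'm now waiting/idle"
                // a long-lived worker would Reset() before starting its next job
            }) { IsBackground = true }.Start();
        }

        // Consumer: wake when every worker has signalled, or give up after 30 seconds.
        bool allIdle = WaitHandle.WaitAll(idleEvents, TimeSpan.FromSeconds(30));
        Console.WriteLine(allIdle ? "All workers idle." : "Timed out waiting.");
    }
}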
Note that if your worker threads are intended to terminate when the operation is complete (wasn't totally clear if this is the case), it might be more appropriate to spawn them as tasks and use Task.WaitAll instead.
Edit: On a quick re-read, it sounds like you do want to be using tasks rather than trying to re-use full worker threads. Tasks use threads which have been allocated from the thread pool, eliminating that thread creation overhead you were worried about, because the threads will (generally) be ready and waiting for work. You can simply spawn each task and wait for them all to be finished.

.NET Task Performance with 1000s of blocked Tasks

I have some .NET4 code that needs to know if/when a network request times out.
Is the following code going to cause a new Thread to be added to the .NET ThreadPool each time a task runs, and then release it when it exits?
var wait = new Task(() =>
{
    using (var pauseEvent = new ManualResetEvent(false))
        pauseEvent.WaitOne(TimeSpan.FromMilliseconds(delay));
}).ContinueWith(action);
wait.Start();
https://stackoverflow.com/a/15096427/464603 suggests this approach would work, but have performance implications for the general system.
If so, how would you recommend handling a high number of request timeouts per second - probably 1,000 timeouts/s when bursting?
In Python I have previously used something like a Tornado IOLoop to make sure this isn't heavy on the kernel / ThreadPool.
I have some .NET4 code that needs to know if/when a network request times out.
The easiest way to do this is to use a timeout right at the API level, e.g., WebRequest.Timeout or CancellationTokenSource.CancelAfter. That way the operation itself will actually stop with an error when the timeout occurs. This is the proper way to do a timeout.
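For example, with a plain WebRequest (the URL and numbers are placeholders), the timeout is set on the request itself so the call fails when it expires:

using System;
using System.Net;

static void FetchWithTimeout()
{
    var request = WebRequest.Create("http://example.com/");   // placeholder URL
    request.Timeout = 5000;   // milliseconds; GetResponse throws when this expires

    try
    {
        using (var response = request.GetResponse())
        {
            // consume the response here
        }
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.Timeout)
            Console.WriteLine("The request itself timed out.");
        else
            throw;
    }
}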
Doing a timed wait is quite different. (Your code does a timed wait). With a timed wait, it's only the wait that times out; the operation is still going, consuming system resources, and has no idea that it's supposed to stop.
If you must do a timed wait on a WaitHandle like ManualResetEvent, then you can use ThreadPool.RegisterWaitForSingleObject, which allows a thread pool thread to wait for 31 objects at a time instead of just one. However, I would consider this a last-ditch extreme solution, only acceptable if the code simply cannot be modified to use proper timeouts.
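A hedged sketch of what that registration looks like (the callback body and timeout are illustrative):

using System;
using System.Threading;

var signal = new ManualResetEvent(false);

// Nothing blocks here; the callback runs on a pool thread when the handle is
// signalled or when the timeout elapses (timedOut tells you which happened).
RegisteredWaitHandle registration = ThreadPool.RegisterWaitForSingleObject(
    signal,
    (state, timedOut) =>
    {
        if (timedOut)
            Console.WriteLine("Timed out waiting for the event.");
        else
            Console.WriteLine("Event was signalled.");
    },
    null,                       // state
    TimeSpan.FromSeconds(30),   // timeout
    true);                      // executeOnlyOnce

// Later, when the wait is no longer needed:
// registration.Unregister(null);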
P.S. Microsoft.Bcl.Async adds async/await support for .NET 4.
P.P.S. Don't ever use StartNew or ContinueWith without explicitly specifying a scheduler. As I describe on my blog, it's dangerous.
First of all, adding Tasks to the Thread Pool doesn't necessarily cause a new Thread to be added to the Thread Pool. When you add a new Task to the Thread Pool, it is added to an internal queue. Existing Threads from the Thread Pool take Tasks from this queue one by one and execute them. The Thread Pool will start new Threads or stop them as it deems appropriate.
Adding a Task with blocking logic inside will cause Threads from the Thread Pool to block. This means that they won't be able to execute other Tasks from the queue, which will lead to performance issues.
One way to add a delay to some action is to use the Task.Delay method, which internally uses timers.
Task.Delay(delay).ContinueWith(action);
This will not block any Threads from the Thread Pool. After the specified delay, the action will be added to the Thread Pool queue and executed.
You may also use timers directly.
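For instance, a one-shot System.Threading.Timer gives the same "run this later" behaviour without tying up a pool thread in a wait (the action delegate is just a stand-in):

using System;
using System.Threading;

Action action = () => Console.WriteLine("Delayed work running.");

// Fire once after 3 seconds; a period of -1 ms means "do not repeat".
// Keep a reference to the timer (and Dispose it later) so it isn't collected early.
Timer timer = new Timer(_ => action(), null,
    TimeSpan.FromMilliseconds(3000),
    TimeSpan.FromMilliseconds(-1));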
As someone suggested in a comment, you may also use async methods. I believe the following code would be the equivalent of your sample.
public async Task ExecuteActionAfterDelay()
{
    await Task.Delay(3000);
    action();
}
You might also want to look at this question Asynchronously wait for Task<T> to complete with timeout.

How to get the reference of TPL task's thread in C#?

When I create a task as
Task task = Task.Factory.StartNew(() => someMethod(args));
in C# 4.0+, how can I get the reference of the thread(s) of this task?
Is it possible that the task is executed on the same thread that created it, or can it spawn more than one thread?
Update:
The reasons are:
I'd like to identify the task's thread in debugger (and attribute a name for it), etc.
Is created task executed always in separate thread from the one in which a task was created?
Is it one, zero or more than one thread?
Is it executed on a single and the same core?
It is important to know since, for example, I might put the main thread to sleep thinking that I am freezing the background worker.
Update:
Useful answer:
Specifying a Thread's Name when using Task.StartNew
Is created task executed always in separate thread from the one in which a task was created?
No, there are certain situations in which the TPL is able to determine that the task can be executed on the same thread that created it, either because the relevant task creation option (or task scheduler) was supplied, or as an optimization because the calling thread would otherwise not have anything to do. You don't really need to worry about this, though; it's not like you're going to end up blocking the UI thread because the TPL chose to execute its code in that context. That won't happen unless you specifically indicate that it should. For all intents and purposes you can assume that this never happens (unless you force it to happen), but behind the scenes, without you ever needing to realize it, yes, it can happen.
Is it one, zero or more than one thread?
By default, tasks are executed in the thread pool. The thread pool will vary in the number of threads it contains based on the workload it's given. It will start out at one, but grow if there is sufficient need, and shrink if that need disappears. If you specify the LongRunning option, a new thread will be created just for that Task. If you specify a custom TaskScheduler, you can have it do whatever you want it to.
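For example, a minimal sketch of those two options (DoWork is a placeholder):

using System;
using System.Threading;
using System.Threading.Tasks;

static void DoWork() { /* placeholder for the real work */ }

static void StartTasks()
{
    // Default: the task is queued to the ThreadPool.
    Task pooled = Task.Factory.StartNew(DoWork);

    // LongRunning hints the default scheduler to use a dedicated (non-pool) thread.
    Task dedicated = Task.Factory.StartNew(
        DoWork,
        CancellationToken.None,
        TaskCreationOptions.LongRunning,
        TaskScheduler.Default);
}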
Is it executed on a single and the same core?
Potentially, but not assuredly.
It is important to know since, for example, I might put the main thread to sleep thinking that I am freezing the background worker.
Putting the main thread to sleep will not prevent background workers from working. That's the whole point of creating the background workers, the two tasks don't stop each other from doing work. Note that if the background workers ever try to access the UI either to report progress or display results, and the UI is blocked, then they will be waiting for the UI thread to be free at that point.
You can use:
System.Threading.Thread.CurrentThread
But as said in the comments, you use the TPL to abstract threading away, so going back to this "low level" is a likely indicator of poor design.
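That said, if the goal is just to label the task's thread in the debugger (as in the linked question), a hedged sketch is to set the name from inside the task, guarding against a thread that already has one:

using System.Threading;
using System.Threading.Tasks;

Task task = Task.Factory.StartNew(() =>
{
    // A thread's Name can only be assigned once, so only set it if it is still null.
    // The name sticks to the pool thread after the task ends, so treat this as a
    // debugging aid only.
    if (Thread.CurrentThread.Name == null)
        Thread.CurrentThread.Name = "someMethod worker";

    someMethod(args);
});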
Task.Factory.StartNew() queues the task for execution (see here). The actual thread that executes the task and when it gets executed is up to the TaskScheduler specified (the current TaskScheduler is used if none is specified).
In .Net 4 the default TaskScheduler uses the ThreadPool to execute tasks (see here) so if a ThreadPool Thread queued the task the same thread can possibly execute it later on.
The number of threads is dictated by the ThreadPool.
You shouldn't really care about which core your tasks are executed on.
Queuing a Task for execution will most likely schedule it to be executed on a ThreadPool Thread, so you won't be at risk of accidentally putting the main thread to sleep.

Thread: How to re-start thread once completed?

I have a method void DoWork(object input) that takes roughly 5 seconds to complete. I have read that Thread is better suited than ThreadPool for these longer operations but I have encountered a problem.
I click a button which calls threadRun.Start(input) which runs and completes fine. I click the button again and receive the following exception:
Thread is running or terminated; it cannot restart.
Can you not "reuse" a Thread? Should I use ThreadPool? Why is Thread "better suited for longer operations" compared to ThreadPool? If you can't reuse a thread, why use it at all (i.e. what advantages does it offer)?
Can you not "reuse" a Thread?
You can. But you have to code the thread not to terminate but to instead wait for more work. That's what a thread pool does.
Should I use ThreadPool?
If you want to re-use a thread, yes.
Why is Thread "better suited for longer operations" compared to ThreadPool?
Imagine a thread pool that is serving a large number of quick operations. You don't want to have too many threads, because the computer can only do so many things at a time. Each long operation you make the thread pool do ties up a thread from the pool. So the pool either has to have lots of extra threads or may run short of threads. Neither leads to an efficient thread pool design.
For longer operations, the overhead of creating and destroying a thread is very small in comparison to the cost of the operation. So the normal downside of using a thread just for the operation doesn't apply.
If you can't reuse a thread, why use it at all (i.e. what advantages does it offer)?
I'm assuming you mean using a thread dedicated to a job that then terminates over using a thread pool. The advantage is that the number of threads will always equal the number of jobs this way. This means you have to create a thread every time you start a job and destroy a thread every time you finish one, but you never have extra threads nor do you ever run short on threads. (This can be a good thing with I/O bound threads but can be a bad thing if most threads are CPU bound most of the time.)
Thread.Start documentation says:
Once the thread terminates, it cannot be restarted with another call
to Start.
Threads are not reusable. I have already faced this problem a while ago, the solution was to create a new Thread instance whenever needed.
It looks like this is by design.
I encountered the same problem and the only solution I could find was to recreate the thread. In my case I wasn't restarting the thread very often so I didn't look any further.
A search now has turned up this thread on social.msdn where the accepted answer states:
a stopped or aborted thread cannot be started again.
MSDN repeats this as well:
trying to restart an aborted thread by calling Start on a thread that has terminated throws a ThreadStateException.
As the message states, you cannot restart the thread. You can simply create a new thread for your next operation. Or, you might consider a design where the background thread keeps working until it completes all of your tasks, rather than launch a new thread for each one.
for(;;){} or while(true){} are useful constructs to 'reuse' a thread. Typically, the thread waits on some synchronization object at the top of these loops. In your example, you could wait on an event or semaphore and signal it from your button OnClick() handler.
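A minimal sketch of that shape (the names are illustrative):

using System;
using System.Threading;

class ReusableWorker
{
    readonly AutoResetEvent _workReady = new AutoResetEvent(false);
    volatile object _input;

    public ReusableWorker()
    {
        var thread = new Thread(Loop) { IsBackground = true };
        thread.Start();
    }

    // Called from the button's OnClick handler.
    public void QueueWork(object input)
    {
        _input = input;
        _workReady.Set();          // wake the loop below
    }

    void Loop()
    {
        while (true)               // the thread never terminates, so it never needs restarting
        {
            _workReady.WaitOne();  // sleep until QueueWork signals
            DoWork(_input);        // the roughly 5 second operation from the question
        }
    }

    void DoWork(object input) { /* ... */ }
}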
It sounds like you need to use the ThreadPool, because re-starting and re-creating Thread objects are very expensive operations. If you have a long-running job that may last longer than your main process, then consider the use of a Windows Service - that is essentially your job running in background mode.

Delegate.BeginInvoke Delay

Sometimes when Delegate.BeginInvoke is invoked, it takes more than one second to execute the delegate method.
What could be the reasons for the delay? I get this issue 1 or 2 times a day in an application which runs continuously.
Please help me.
Thanks!
The thread pool manager makes sure that only as many threads are allowed to execute as you have CPU cores. As soon as one completes, another one that's waiting in the queue is allowed to execute.
Twice a second, it re-evaluates what's going on with the running threads. If they don't complete, it assumes they are blocked and allows another waiting thread to run. On the typical two-core CPU, you'll get two threads running right away, the 3rd thread starts after one second, the 4th thread after 1.5 second, etcetera.
Well, there's your second. The quick-and-dirty fix is to use ThreadPool.SetMinThreads(), but that's the sledgehammer solution. The real issue is that your program is using thread pool threads for long-running tasks, either because they execute a lot of code or because they block on some kind of I/O request; the latter is the more common case.
The way to solve it is to not use a thread pool thread for such a blocking operation, but to use the Thread class instead. Don't do this if the threads are actually burning CPU cycles; you'll slow everything down. It's easy to tell: you'll see 100% CPU load in Taskmgr.exe.
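A hedged illustration of both options (the numbers and BlockingOperation are arbitrary placeholders):

using System.Threading;

// Sledgehammer: raise the minimum so blocked threads don't stall the queue while
// the pool slowly injects new ones.
int workers, io;
ThreadPool.GetMinThreads(out workers, out io);
ThreadPool.SetMinThreads(workers + 20, io);

// Preferred for blocking work: give the long/blocking job its own thread.
var worker = new Thread(() => BlockingOperation()) { IsBackground = true };
worker.Start();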
Since you're using Delegate.BeginInvoke then you're, indirectly, using the ThreadPool. The ThreadPool recycles completed threads and allows them to be reused without going through the expense of constructing new threads and tearing completed threads down.
So... when you use Delegate.BeginInvoke you're adding the method to be invoked to a queue, as soon as the ThreadPool thinks it has an available thread for your task it will execute. However, if the ThreadPool is out of available threads then you'll be left waiting.
System.Threading.ThreadPool has several properties and methods to show how many threads are available, maximums, etc. I would try monitoring those counts to see if it looks like the ThreadPool is being spread thin.
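For example, a simple way to log those counts periodically (a sketch, not a complete diagnostic):

using System;
using System.Threading;

static void LogThreadPoolUsage()
{
    int availWorkers, availIo, maxWorkers, maxIo;
    ThreadPool.GetAvailableThreads(out availWorkers, out availIo);
    ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);

    Console.WriteLine("Worker threads in use: {0}/{1}, IO threads in use: {2}/{3}",
        maxWorkers - availWorkers, maxWorkers,
        maxIo - availIo, maxIo);
}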
If that's the case then the best resolution is to ensure that the ThreadPool is only being used for short-lived (small) tasks. If it's being used for long-running tasks then those tasks should be modified to use their own dedicated thread rather than occupying the ThreadPool.
Can you set the priority of the BeginInvoke?
http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcherpriority.aspx
Do you have other BeginInvoke calls waiting?
"If multiple BeginInvoke calls are made at the same DispatcherPriority, they will be executed in the order the calls were made."
http://msdn.microsoft.com/en-us/library/ms591206.aspx
