I have a program that uses threads to perform time-consuming processes sequentially. I want to be able to monitor the progress of each thread, similar to the way the BackgroundWorker.ReportProgress/ProgressChanged model does. I can't use ThreadPool or BackgroundWorker due to other constraints I'm under. What is the best way to allow/expose this functionality? Overload the Thread class and add a property/event? Another more-elegant solution?
Overload the Thread class and add a property/event?
If by "overload" you actually mean inherit then no. The Thread is sealed so it cannot be inherited which means you will not be able to add any properties or events to it.
Another more-elegant solution?
Create a class that encapsulates the logic that will be executed by the thread. Add a property or event (or both) which can be used to obtain progress information from it.
public class Worker
{
    private readonly Thread m_Thread;

    public event EventHandler<ProgressEventArgs> Progress;

    public Worker()
    {
        // Initialize here; a field initializer cannot reference the instance method Run.
        m_Thread = new Thread(Run);
    }

    public void Start()
    {
        m_Thread.Start();
    }

    private void Run()
    {
        while (true)
        {
            // Do some work.
            OnProgress(new ProgressEventArgs(...));
            // Do some work.
        }
    }

    private void OnProgress(ProgressEventArgs args)
    {
        // Get a copy of the multicast delegate so that we can do the
        // null check and invocation safely. This works because delegates are
        // immutable. Remember to create a memory barrier so that a fresh read
        // of the delegate occurs every time. This is done via a simple lock below.
        EventHandler<ProgressEventArgs> local;
        lock (this)
        {
            local = Progress;
        }
        if (local != null)
        {
            local(this, args);
        }
    }
}
Update:
Let me be a little more clear on why a memory barrier is necessary in this situation. The barrier prevents the read from being moved before other instructions. The most likely optimization is not from the CPU, but from the JIT compiler "lifting" the read of Progress outside of the while loop. This movement gives the impression of "stale" reads. Here is a semi-realistic demonstration of the problem.
class Program
{
    static event EventHandler Progress;

    static void Main(string[] args)
    {
        var thread = new Thread(
            () =>
            {
                var local = GetEvent();
                while (local == null)
                {
                    local = GetEvent();
                }
            });
        thread.Start();
        Thread.Sleep(1000);
        Progress += (s, a) => { Console.WriteLine("Progress"); };
        thread.Join();
        Console.WriteLine("Stopped");
        Console.ReadLine();
    }

    static EventHandler GetEvent()
    {
        //Thread.MemoryBarrier();
        var local = Progress;
        return local;
    }
}
It is imperative that a Release build is run without the vshost process; either one will disable the optimization that manifests the bug (I believe this is also not reproducible in framework versions 1.0 and 1.1 due to their more primitive optimizations). The bug is that "Stopped" is never displayed even though it clearly should be. Now uncomment the call to Thread.MemoryBarrier and notice the change in behavior. Also keep in mind that even the most subtle changes to the structure of this code currently inhibit the compiler's ability to make the optimization in question. One such change would be to actually invoke the delegate. In other words, you cannot currently reproduce the stale read problem using the null-check-followed-by-invocation pattern, but there is nothing in the CLI specification (that I am aware of, anyway) that prohibits a future hypothetical JIT compiler from reapplying that "lifting" optimization.
I tried this some time ago and it worked for me.
Create a List-like class with locks.
Have your threads add data to an instance of the class you created.
Place a timer in your Form or wherever you want to record the log/progress.
Write code in the Timer.Tick event to read the messages the threads output.
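A minimal sketch of that idea, assuming a WinForms UI; the ProgressLog class and the control names are made up for illustration:
using System.Collections.Generic;

// Hypothetical thread-safe message store that the worker threads write into.
public class ProgressLog
{
    private readonly object _sync = new object();
    private readonly Queue<string> _messages = new Queue<string>();

    // Called from any worker thread.
    public void Add(string message)
    {
        lock (_sync)
        {
            _messages.Enqueue(message);
        }
    }

    // Called from the UI thread (e.g. a Timer.Tick handler); drains what is there.
    public List<string> TakeAll()
    {
        lock (_sync)
        {
            var result = new List<string>(_messages);
            _messages.Clear();
            return result;
        }
    }
}

// In the form, a System.Windows.Forms.Timer polls the log:
// private void timer1_Tick(object sender, EventArgs e)
// {
//     foreach (var line in _progressLog.TakeAll())
//         txtStatus.AppendText(line + Environment.NewLine);
// }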
You might also want to check out the Event-based Asynchronous Pattern.
Provide each thread with a callback that returns a status object. You can use the thread's ManagedThreadId to keep track of separate threads, such as using it as a key to a Dictionary<int, object>. You can invoke the callback from numerous places in the thread's processing loop or call it from a timer fired from within the thread.
You can also use the return argument on a callback to signal the thread to pause or halt.
I've used callbacks with great success.
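A rough sketch of that callback scheme, with made-up type and method names; the dictionary is keyed by ManagedThreadId as described:
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical status object returned to the worker; it can carry pause/halt signals.
public class ThreadStatus
{
    public int PercentComplete;
    public bool ShouldPause;
}

// Callback each worker calls from its processing loop.
public delegate ThreadStatus ProgressCallback(int managedThreadId, int percentComplete);

public class Coordinator
{
    private readonly object _sync = new object();
    private readonly Dictionary<int, int> _progressByThread = new Dictionary<int, int>();

    // Passed to each worker thread as a ProgressCallback.
    public ThreadStatus Report(int managedThreadId, int percentComplete)
    {
        lock (_sync)
        {
            _progressByThread[managedThreadId] = percentComplete;
        }
        return new ThreadStatus { PercentComplete = percentComplete, ShouldPause = false };
    }
}

// Inside the worker's loop:
// var status = callback(Thread.CurrentThread.ManagedThreadId, percentDone);
// if (status.ShouldPause) { /* wait or halt as requested */ }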
EDIT:
Please see the question history for the unchanged question, in order not to invalidate the comments.
I am clicking a button that executes certain code and creates a thread (System.Threading.Thread). When I click the button again to start the process, it hangs and freezes the UI. What could be the reason?
public partial class ucLoader : UserControl
{
    // lock object for the whole instance of the ucLoader class
    private object lockUcLoader = new object();

    // brings info from the UI
    private void btnBringInfo_Click(object sender, EventArgs e)
    {
        lock (lockUcLoader)
        {
            btnBringInfo_PerformClick(false);
        }
    }

    // this method is used because it could be called even when the button is not visible
    internal void btnBringInfo_PerformClick(bool calledFromBandInit)
    {
        lock (lockUcLoader) // HANGS HERE when called multiple times, and the UI freezes as well
        // by the way, I am using a (repetitive) lock because this method is also called independently of btnBringInfo_Click
        {
            //...
            this.btnLoad_PerformClick();
        }
    }

    // another button's PerformClick that could be triggered elsewhere, even when the button is not visible
    private void btnLoad_PerformClick()
    {
        lock (lockUcLoader) // I am using a (repetitive) lock because this method is also called independently of btnBringInfo_PerformClick
        {
            //...
            Run();
        }
    }

    // method that creates the thread (System.Threading.Thread)
    private void Run()
    {
        lock (lockUcLoader) // Maybe this lock is NOT REQUIRED, as it is only called by btnLoad_PerformClick(); could you please confirm?
        {
            // some code so the thread can be killed when needed; you can ignore these two lines as they are irrelevant to the subject, I think
            Source = new CancellationTokenSource();
            Token = Source.Token;
            var shell = new WindowsShell();
            Thread = new Thread((object o) =>
            {
                //...
                var tokenInThread = (CancellationToken)o;
                exitCode = TaskExtractBatchFiles(cls, shell, exitCode);
                using (var logEnt = new logEntities())
                {
                    // Do some db operation
                    //...
                    this.Invoke((MethodInvoker)delegate
                    {
                        // do some ui update operation
                        //...
                    });
                }
            });
            Thread.Start(Token);
        }
    }

    public void Progress(string message)
    {
        Invoke((MethodInvoker)delegate // ATTENTION HERE: the wait occurs here
        {
            if (message != null && message.Trim() != string.Empty)
            {
                this.txtStatus.AppendText(message + Environment.NewLine);
            }
        });
    }
}
In order to avoid getting the question closed, what I am asking is how I can prevent the method below from being accessed without a lock from both the background thread and the UI thread:
public void Progress(string message)
{
    Invoke((MethodInvoker)delegate // ATTENTION HERE: the wait occurs here
    {
        if (message != null && message.Trim() != string.Empty)
        {
            this.txtStatus.AppendText(message + Environment.NewLine);
        }
    });
}
Invoke((MethodInvoker)delegate ...
Whenever you use the lock statement in your code, you always run the risk of inducing deadlock, one of the classic threading bugs. You generally need at least two locks to get there, acquiring them in the wrong order. And yes, there are two in your program: one you declared yourself, and one you cannot see because it is buried inside the plumbing that makes Control.Invoke() work. Not being able to see a lock is what makes deadlock a difficult problem to debug.
You can reason it out: the lock inside Control.Invoke is necessary to ensure that the worker thread is blocked until the UI thread has executed the delegate target. That probably also helps to reason out why the program deadlocked. You started the worker thread; it acquired the lockUcLoader lock and started doing its job, calling Control.Invoke while doing so. Now you click the button before the worker is done, so it necessarily blocks. But that makes the UI thread go catatonic, no longer capable of executing the Control.Invoke code. So the worker thread hangs on the Invoke call and won't release the lock, and the UI thread hangs forever on the lock since the worker can't complete. Deadlock city.
Control.Invoke dates from .NET 1.0, a version of the framework that has several serious design mistakes in code related to threading. While meant to be helpful, they just set death-traps for programmers to blunder into. What is unique about Control.Invoke is that it is never correct to use it.
Distinguish Control.Invoke from Control.BeginInvoke. You only ever need Invoke when you need its return value. Note how you don't here; using BeginInvoke instead is good enough and instantly solves the deadlock. You'd consider Invoke to obtain a value from the UI so you can use it in the worker thread. But that introduces another major threading issue, a race bug: the worker has no idea what state the UI is in. Say, the user might be busy interacting with it, typing a new value. You can't know what value you obtain; it may easily be the stale old value, inevitably producing a mismatch between the UI and the work being done. The only way to avoid that mishap is to prevent the user from typing a new value, easily done with Enabled = false. But then it no longer makes sense to use Invoke; you might as well pass the value when you start the thread.
So using BeginInvoke is already good enough to solve the problem. But that is not where you should stop. There is no point to those locks in the Click event handlers; all they do is make the UI unresponsive, greatly confusing the user. What you must do instead is set the Enabled properties of those buttons to false, and set them back to true when the worker is done. Now it can't go wrong anymore, you don't need the locks, and the user gets good feedback.
There is another serious problem you haven't run into yet but you must address. A UserControl has no control over its lifetime, it gets disposed when the user closes the form on which it is hosted. But that is completely out of sync with the worker thread execution, it keeps calling BeginInvoke even though the control is dead as a doornail. That will make your program bomb, hopefully on an ObjectDisposedException. A threading race bug that a lock cannot solve. The form has to help, it must actively prevent the user from closing it. Some notes about this bug in this Q+A.
For completeness I should mention the third most common threading bug that code like this is likely to suffer from. It doesn't have an official name, I call it a "firehose bug". It occurs when the worker thread calls BeginInvoke too often, giving the UI thread too much work to do. Happens easily, calling it more than about thousand times per second tends to be enough. The UI thread starts burning 100% core, trying to keep up with the invoke requests and never being able to catch up. Easy to see, it stops painting itself and responding to input, duties that are performed with a lower priority. That needs to be fixed the logical way, updating UI more than 25 times per second just produces a blur that the human eye can't observe and is therefore pointless.
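As a rough illustration of the BeginInvoke and Enabled advice above (not a drop-in fix; the control and method names are assumed from the question, and the worker start/finish wiring is omitted):
// Sketch only, under the assumptions stated above.
public partial class ucLoader : UserControl
{
    private void btnBringInfo_Click(object sender, EventArgs e)
    {
        // No lock needed: just stop the user from starting a second run.
        btnBringInfo.Enabled = false;
        btnLoad.Enabled = false;
        Run();
    }

    public void Progress(string message)
    {
        // BeginInvoke queues the update and returns immediately, so the worker
        // never blocks waiting on a busy UI thread and the deadlock disappears.
        BeginInvoke((MethodInvoker)delegate
        {
            if (!string.IsNullOrWhiteSpace(message))
                txtStatus.AppendText(message + Environment.NewLine);
        });
    }

    // Called (via BeginInvoke) from the end of the worker thread.
    private void OnWorkerFinished()
    {
        btnBringInfo.Enabled = true;
        btnLoad.Enabled = true;
    }
}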
I want to call a task at a specified interval, and avoid starting a new task unless the last one has already completed.
private async void OnTimerTick(object sender, object e)
{
    if (_criticalSection.IsEntered()) return; // only allow 1 at any given time, ignore the rest
    using (var section = await _criticalSection.EnterAsync())
    {
        await update();
    }
}
How do I achieve this? Any suggestions for a better pattern?
A critical section (like a Windows mutex) is for mutual exclusion: only allowing a single thread into a code path.
But that's not what you are trying to do: you need something that will tell you whether something is happening.
A better approach would be a manual reset event: set it (also known as signalled) at the start of the task and reset it at the end. Then you can check whether it is signalled by waiting on it with a timeout of zero for a normal Windows event, or with the applicable member for other types of event.
As this appears to be all in a single process, a good starting point is System.Threading.ManualResetEventSlim. Use it something like this:
// One-off initialisation somewhere at class scope
private static ManualResetEventSlim taskRunning = new ManualResetEventSlim();
private static object taskLock = new Object();

// Code called from the timer; do it in a lock to avoid race conditions when two
// or more threads call this.
lock (taskLock) {
    if (!taskRunning.IsSet) {
        StartTheTask(); // assuming this does not return until the task is running.
    }
}

// At the outermost scope of the code in the task:
try {
    Debug.Assert(!taskRunning.IsSet); // Paranoia is good when doing threading
    taskRunning.Set();
    // Task implementation
} finally {
    Debug.Assert(taskRunning.IsSet); // Paranoia is good when doing threading
    taskRunning.Reset();
}
Another approach would be to always start the task, but have it check the event and, if it is set, exit immediately. This would still need the lock to avoid races between the IsSet and Set() calls across threads. This second approach keeps the checking code together, at the cost of briefly having another task running (unless that turns out to be common, I would likely take this approach for the code locality).
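A possible sketch of that second variant, folding the check into the timer handler itself; the names mirror the snippet above, and update() comes from the question:
// Sketch: always run the handler, but bail out if the previous update is still going.
private static readonly ManualResetEventSlim taskRunning = new ManualResetEventSlim();
private static readonly object taskLock = new object();

private async void OnTimerTick(object sender, object e)
{
    lock (taskLock)
    {
        if (taskRunning.IsSet) return;   // previous update() still in flight, skip this tick
        taskRunning.Set();
    }
    try
    {
        await update();                  // update() is the method from the question
    }
    finally
    {
        taskRunning.Reset();
    }
}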
Let's say I am designing a simple logging class (yes - I know there are those already out there in the wild!) and I want the class to be static so the rest of my code can call it without having to instantiate it first. Maybe something like this:
internal static class Log
{
    private static string _logFile = "";

    internal static void InitializeLogFile(string path)
    {
        ...
    }

    internal static void WriteHeader()
    {
        ...
    }

    internal static void WriteLine(params string[] items)
    {
        ...
    }
}
Now, I want the internals to spin up their own thread and execute asynchronously, possibly using BackgroundWorker to help simplify things. Should I just create a new BackgroundWorker in each method, create a static BackgroundWorker as a private property of the static class, or is there something I am overlooking altogether?
You definitely do not want to spin up a new thread or BackgroundWorker on each invocation of the methods. I would use the producer-consumer pattern here. As it turns out, this is such a common pattern that Microsoft provided us with the BlockingCollection class, which simplifies the implementation greatly. The nice things about this approach are that:
there is only one extra thread required
the Log methods will have asynchronous semantics
the temporal ordering of the log messages is preserved
Here is some code to get you started.
internal static class Log
{
    private static BlockingCollection<string> s_Queue = new BlockingCollection<string>();

    static Log()
    {
        var thread = new Thread(Run);
        thread.IsBackground = true;
        thread.Start();
    }

    private static void Run()
    {
        while (true)
        {
            string line = s_Queue.Take();
            // Add code to append the line to the log here.
        }
    }

    internal static void WriteLine(params string[] items)
    {
        foreach (string item in items)
        {
            s_Queue.Add(item);
        }
    }
}
You only want to have 1 thread per log file/db. Otherwise, the order of items in the log is unreliable. Have a background thread that pulls from a thread-safe queue and does the writing.
Good call.
You definitely want the logging operations to occur in a separate thread from the code that is doing the logging. For instance, the accessor methods (such as logEvent(myEvent)) should not block on file I/O operations while the logger writes the event to a file.
Make a queue so that the accessors simply push items onto the queue. This way your code shouldn't block while it is trying to log an event.
Start-up a second thread to empty the internal queue of events. This thread can run on a static private method of your logger class.
The performance drawback comes when you try to ensure thread safety of the underlying event queue. You will need to acquire a lock on the queue every time before a pop or push onto the queue.
Hope this helps.
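A minimal sketch of that scheme, assuming a plain Queue<string> guarded by a lock and drained by one background thread; all names are illustrative, and the Monitor.Wait/Pulse signalling is an addition to avoid busy-waiting:
using System.Collections.Generic;
using System.Threading;

public static class QueuedLogger
{
    private static readonly Queue<string> _queue = new Queue<string>();
    private static readonly object _sync = new object();

    static QueuedLogger()
    {
        var worker = new Thread(Drain) { IsBackground = true };
        worker.Start();
    }

    // Accessor: just pushes onto the queue, so callers never block on file I/O.
    public static void LogEvent(string message)
    {
        lock (_sync)
        {
            _queue.Enqueue(message);
            Monitor.Pulse(_sync);   // wake the writer thread
        }
    }

    private static void Drain()
    {
        while (true)
        {
            string line;
            lock (_sync)
            {
                while (_queue.Count == 0)
                    Monitor.Wait(_sync);
                line = _queue.Dequeue();
            }
            // Write 'line' to the log file here, outside the lock.
        }
    }
}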
I think that my recommendation is not exactly what you expect, but I hope it is useful anyway:
Don't use a static class. Instead, use a regular class and hold a single instance of it (the singleton pattern); using a dependency injection engine helps a lot with this (I use MS Unity and it works fine). If you define an interface for your logging class as well, your code will be much more testable.
As for the threading stuff, if I understand correctly you want the logging work to be performed in separate threads. Are you sure that you really need this? A logger should be light enough that you can simply call the "Write" methods and expect that your application performance will not suffer.
A last note: you mention the BackgroundWorker class, but if I am not wrong, that class is intended for use with desktop applications, not with ASP.NET. In an ASP.NET environment you should probably use something like the ThreadPool class instead.
Just my 2 euro cents...
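To make the first suggestion concrete, here is a small sketch of the interface-plus-singleton idea; the Unity registration call is written from memory, so treat it as approximate:
public interface ILogger
{
    void Write(params string[] items);
}

public class FileLogger : ILogger
{
    public void Write(params string[] items)
    {
        // append to the log file...
    }
}

// Registration (Microsoft Unity), so one shared instance is handed out everywhere:
// var container = new UnityContainer();
// container.RegisterType<ILogger, FileLogger>(new ContainerControlledLifetimeManager());
//
// Consumers then take an ILogger in their constructor, which also makes them
// easy to test with a fake logger.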
I created a thread safe logging class myself a while back. I used it something like this.
Logging obj = new Logging(filename);
Action<string> log = obj.RequestLog();
RequestLog would return an anonymous method that wrote to its own Queue. Because a Queue is thread safe for one reader and one writer, I didn't need to use any locks when calling log().
The actual Logging object would create a new thread that ran in the background and would periodically check all of the queues. If a queue had a string in it, the thread would write it to a buffered file stream.
I added a little extra code to the reading thread so that for each pass it made over the queues, if nothing had been written, it would sleep an extra 10 ms, up to a maximum of 100 ms. This way the thread didn't spin too much, but if there was heavy writing going on, it would poll the queues every 10 ms.
Here's a snippet of the code that returns the requested queue. The "this.blNewData = true" was so I didn't need to hit up every queue to see if any new data was written. No lock is involved because a false positive still did no work, since all the queues would be empty anyway.
OutputQueue was the list of Queues that I looped through to see if anything was written. The code to loop through the list was in a lock in case NewQueueLog() was called and caused the list to get resized.
public Action<String> NewQueueLog()
{
    Queue<String> tmpQueue = new Queue<String>(32);
    lock (OutputQueue)
    {
        OutputQueue.Add(tmpQueue);
    }
    return (String Output) =>
    {
        tmpQueue.Enqueue(Output);
        this.blNewData = true;
    };
}
In the end, writing to the log was lock free, which helped when lots of threads were writing.
With reference to this quote from MSDN about the System.Timers.Timer:
The Timer.Elapsed event is raised on a ThreadPool thread, so the event-handling method might run on one thread at the same time that a call to the Timer.Stop method runs on another thread. This might result in the Elapsed event being raised after the Stop method is called. This race condition cannot be prevented simply by comparing the SignalTime property with the time when the Stop method is called, because the event-handling method might already be executing when the Stop method is called, or might begin executing between the moment when the Stop method is called and the moment when the stop time is saved. If it is critical to prevent the thread that calls the Stop method from proceeding while the event-handling method is still executing, use a more robust synchronization mechanism such as the Monitor class or the CompareExchange method. Code that uses the CompareExchange method can be found in the example for the Timer.Stop method.
Can anyone give an example of a "robust synchronization mechanism such as the Monitor class" to explain what this means exactly?
I am thinking it means use a lock somehow, but I am unsure how you would implement that.
Stopping a System.Timers.Timer reliably is indeed a major effort. The most serious problem is that the threadpool threads that it uses to call the Elapsed event can back up due to the threadpool scheduler algorithm. Having a couple of backed-up calls isn't unusual, having hundreds is technically possible.
You'll need two synchronizations, one to ensure you stop the timer only when no Elapsed event handler is running, another to ensure that these backed-up TP threads don't do any harm. Like this:
System.Timers.Timer timer = new System.Timers.Timer();
object locker = new object();
ManualResetEvent timerDead = new ManualResetEvent(false);

private void Timer_Elapsed(object sender, ElapsedEventArgs e) {
    lock (locker) {
        if (timerDead.WaitOne(0)) return;
        // etc...
    }
}

private void StopTimer() {
    lock (locker) {
        timerDead.Set();
        timer.Stop();
    }
}
Consider setting the AutoReset property to false. That's brittle in another way: the Elapsed event gets called from an internal .NET method that catches Exception. Very nasty, your timer code stops running without any diagnostic at all. I don't know the history, but there must have been another team at MSFT that huffed and puffed at this mess and wrote System.Threading.Timer. Highly recommended.
That is what it is suggesting.
Monitor is the class that's used by the C# compiler for a lock statement.
That being said, the above is only a problem if it is an issue in your situation. The entire statement basically translates to "You could get a timer event that happens right after you call Stop(). If this is a problem, you'll need to deal with it." Depending on what your timer is doing, it may be an issue, or it may not.
If it's a problem, the Timer.Stop page shows a robust way (using Interlocked.CompareExchange) to handle this. Just copy the code from the sample and modify as necessary.
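For reference, the CompareExchange pattern in that sample has roughly this shape (reconstructed from memory, so check the official example for the exact details):
using System.Threading;
using System.Timers;

// 0 = free, 1 = handler running, -1 = timer stopped.
private static int syncPoint = 0;

private static void Timer_Elapsed(object sender, ElapsedEventArgs e)
{
    // Only proceed if no other handler is running and Stop hasn't won the race.
    if (Interlocked.CompareExchange(ref syncPoint, 1, 0) != 0)
        return;

    // ... do the timer work here ...

    Interlocked.Exchange(ref syncPoint, 0);
}

private static void StopTimer(System.Timers.Timer timer)
{
    timer.Stop();
    // Keep trying to mark the timer as stopped until no handler is mid-flight.
    while (Interlocked.CompareExchange(ref syncPoint, -1, 0) != 0)
        Thread.Sleep(10);
}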
Try:
lock (timer) {
    timer.Stop();
}
Here is a very simple way to prevent this race condition from occurring:
private object _lock = new object();
private Timer _timer; // init somewhere else

public void StopTheTimer()
{
    lock (_lock)
    {
        _timer.Stop();
    }
}

void elapsed(...)
{
    lock (_lock)
    {
        if (_timer.Enabled) // prevent event after Stop() is called
        {
            // do whatever you do in the timer event
        }
    }
}
It seems the timer is not thread safe; you must keep all calls to it in sync via locking. lock (object) { } is actually just shorthand for a simple Monitor call.
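For what it's worth, a lock block is roughly equivalent to the following Monitor calls (the exact expansion depends on the compiler version):
using System.Threading;

static void StopSafely(System.Timers.Timer timer)
{
    // The C# 4+ compiler expands lock (timer) { timer.Stop(); } to roughly this:
    bool lockTaken = false;
    try
    {
        Monitor.Enter(timer, ref lockTaken);
        timer.Stop();
    }
    finally
    {
        if (lockTaken) Monitor.Exit(timer);
    }
}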
I've been working on a web crawling .NET app in my free time, and one of the features of this app that I wanted to included was a pause button to pause a specific thread.
I'm relatively new to multi-threading and I haven't been able to figure out a currently supported way to pause a thread indefinitely. I can't remember the exact class/method, but I know there is a way to do this that has been flagged as obsolete by the .NET framework.
Is there any good general-purpose way to indefinitely pause a worker thread in C# .NET?
I haven't had a lot of time lately to work on this app, and the last time I touched it was in the .NET 2.0 framework. I'm open to any new features (if any) that exist in the .NET 3.5 framework, but I'd like to know of a solution that also works in the 2.0 framework, since that's what I use at work and it would be good to know just in case.
Never, ever use Thread.Suspend. The major problem with it is that 99% of the time you can't know what that thread is doing when you suspend it. If that thread holds a lock, you make it easier to get into a deadlock situation, etc. Keep in mind that code you are calling may be acquiring/releasing locks behind the scenes. Win32 has a similar API: SuspendThread and ResumeThread. The following docs for SuspendThread give a nice summary of the dangers of the API:
http://msdn.microsoft.com/en-us/library/ms686345(VS.85).aspx
This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.
The proper way to suspend a thread indefinitely is to use a ManualResetEvent. The thread is most likely looping, performing some work. The easiest way to suspend the thread is to have the thread "check" the event each iteration, like so:
while (true)
{
    _suspendEvent.WaitOne(Timeout.Infinite);

    // Do some work...
}
You specify an infinite timeout so when the event is not signaled, the thread will block indefinitely, until the event is signaled at which point the thread will resume where it left off.
You would create the event like so:
ManualResetEvent _suspendEvent = new ManualResetEvent(true);
The true parameter tells the event to start out in the signaled state.
When you want to pause the thread, you do the following:
_suspendEvent.Reset();
And to resume the thread:
_suspendEvent.Set();
You can use a similar mechanism to signal the thread to exit and wait on both events, detecting which event was signaled.
Just for fun I'll provide a complete example:
public class Worker
{
    ManualResetEvent _shutdownEvent = new ManualResetEvent(false);
    ManualResetEvent _pauseEvent = new ManualResetEvent(true);
    Thread _thread;

    public Worker() { }

    public void Start()
    {
        _thread = new Thread(DoWork);
        _thread.Start();
    }

    public void Pause()
    {
        _pauseEvent.Reset();
    }

    public void Resume()
    {
        _pauseEvent.Set();
    }

    public void Stop()
    {
        // Signal the shutdown event
        _shutdownEvent.Set();

        // Make sure to resume any paused threads
        _pauseEvent.Set();

        // Wait for the thread to exit
        _thread.Join();
    }

    public void DoWork()
    {
        while (true)
        {
            _pauseEvent.WaitOne(Timeout.Infinite);

            if (_shutdownEvent.WaitOne(0))
                break;

            // Do the work here..
        }
    }
}
The Threading in C# ebook summarises Thread.Suspend and Thread.Resume thusly:
The deprecated Suspend and Resume methods have two modes – dangerous and useless!
The book recommends using a synchronization construct such as an AutoResetEvent or Monitor.Wait to perform thread suspending and resuming.
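A minimal sketch of the Monitor.Wait/Pulse variant, assuming the worker checks a pause flag once per loop iteration:
private readonly object _pauseLock = new object();
private bool _paused;

private void WorkLoop()
{
    while (true)
    {
        lock (_pauseLock)
        {
            while (_paused)
                Monitor.Wait(_pauseLock);   // releases the lock while waiting
        }
        // Do one unit of work...
    }
}

public void Pause()
{
    lock (_pauseLock) { _paused = true; }
}

public void Resume()
{
    lock (_pauseLock)
    {
        _paused = false;
        Monitor.Pulse(_pauseLock);   // wake the worker if it is waiting
    }
}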
If there are no synchronization requirements:
Thread.Sleep(Timeout.Infinite);
I just implemented a LoopingThread class which loops an action passed to the constructor. It is based on Brannon's post. I've put some other stuff into it, like WaitForPause(), WaitForStop(), and a PauseBetween property that indicates the time that should be waited before the next loop iteration.
I also decided to change the while loop to a do-while loop. This gives us deterministic behavior for a successive Start() and Pause(). By deterministic I mean that the action is executed at least once after a Start() command. In Brannon's implementation this might not be the case.
I omitted some things to get to the root of the matter, such as checking whether the thread was already started, and the IDisposable pattern.
public class LoopingThread
{
    private readonly Action _loopedAction;
    private readonly AutoResetEvent _pauseEvent;
    private readonly AutoResetEvent _resumeEvent;
    private readonly AutoResetEvent _stopEvent;
    private readonly AutoResetEvent _waitEvent;
    private readonly Thread _thread;

    public LoopingThread (Action loopedAction)
    {
        _loopedAction = loopedAction;
        _thread = new Thread (Loop);
        _pauseEvent = new AutoResetEvent (false);
        _resumeEvent = new AutoResetEvent (false);
        _stopEvent = new AutoResetEvent (false);
        _waitEvent = new AutoResetEvent (false);
    }

    public void Start ()
    {
        _thread.Start();
    }

    public void Pause (int timeout = 0)
    {
        _pauseEvent.Set();
        _waitEvent.WaitOne (timeout);
    }

    public void Resume ()
    {
        _resumeEvent.Set ();
    }

    public void Stop (int timeout = 0)
    {
        _stopEvent.Set();
        _resumeEvent.Set();
        _thread.Join (timeout);
    }

    public void WaitForPause ()
    {
        Pause (Timeout.Infinite);
    }

    public void WaitForStop ()
    {
        Stop (Timeout.Infinite);
    }

    public int PauseBetween { get; set; }

    private void Loop ()
    {
        do
        {
            _loopedAction ();
            if (_pauseEvent.WaitOne (PauseBetween))
            {
                _waitEvent.Set ();
                _resumeEvent.WaitOne (Timeout.Infinite);
            }
        } while (!_stopEvent.WaitOne (0));
    }
}
Besides the suggestions above, I'd like to add one tip. In some cases, using a BackgroundWorker can simplify your code (especially when you use anonymous methods to define DoWork and its other events).
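For example (a small sketch; progressBar1 and the work inside DoWork are placeholders):
private void StartWorker()
{
    var worker = new BackgroundWorker { WorkerReportsProgress = true };

    worker.DoWork += (s, e) =>
    {
        for (int i = 0; i <= 100; i += 10)
        {
            // Do a slice of the real work here...
            worker.ReportProgress(i);
        }
    };

    // Both of these handlers run on the UI thread.
    worker.ProgressChanged += (s, e) => progressBar1.Value = e.ProgressPercentage;
    worker.RunWorkerCompleted += (s, e) => MessageBox.Show("Done");

    worker.RunWorkerAsync();
}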
In line with what the others said: don't do it. What you really want to do is to "pause work" and let your threads roam free. Can you give us some more details about the thread(s) you want to suspend? If you didn't start the thread, you definitely shouldn't even consider suspending it; it's not yours. If it is your thread, then I suggest that instead of suspending it, you just have it sit waiting for more work to do. Brannon has some excellent suggestions for this option in his response. Alternatively, just let it end, and spin up a new one when you need it.
The Suspend() and Resume() methods may be deprecated, however they are in no way useless.
If, for example, you have a thread doing lengthy work that alters data, and the user wishes to stop it, they click a button. Of course, you need to ask for verification, but at the same time you do not want that thread to continue altering data if the user decides that they really do want to abort.
Suspending the thread while waiting for the user to click the Yes or No button in the confirmation dialog is the only way to prevent it from altering the data before you signal the designated abort event that will allow it to stop.
Events may be nice for simple threads that have one loop, but complicated threads with complex processing are another issue.
Certainly, Suspend() must never be used for synchronizing, since that is not its purpose.
Just my opinion.