I was playing with a project of mine today and found an interesting little snippet: given the following pattern, you can safely clean up a thread even if it's forced to close early. My project is a network server that spawns a new thread for each client. I've found this useful for early termination from the remote side, but also from the local side (I can just call .Abort() from inside my processing code).
Are there any problems you can see with this, or any suggestions you'd make to anyone looking at a similar approach?
Test case follows:
using System;
using System.Threading;

class Program
{
    static Thread t1 = new Thread(thread1);
    static Thread t2 = new Thread(thread2);

    public static void Main(string[] args)
    {
        t1.Start();
        t2.Start();
        t1.Join();
    }

    public static void thread1() {
        try {
            // Do our work here; for this test, just look busy.
            while (true) {
                Thread.Sleep(100);
            }
        } finally {
            Console.WriteLine("We're exiting thread1 cleanly.\n");
            // Do any cleanup that might be needed here.
        }
    }

    public static void thread2() {
        Thread.Sleep(500);
        t1.Abort();
    }
}
For reference, without the try/finally block, the thread just dies as one would expect.
Aborting another thread at all is just a bad idea unless the whole application is coming down. It's too easy to leave your program in an unknown state. Aborting your own thread is occasionally useful - ASP.NET throws a ThreadAbortException if you want to prematurely end the response, for example - but it's not a terribly nice design.
Safe clean-up of a thread should be mutual - there should be some shared flag requesting that the thread shuts down. The thread should check that flag periodically and quit appropriately.
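For illustration, here is a minimal sketch of that flag-based pattern applied to a per-client handler like the one in the question (the class and member names are just placeholders):
using System;
using System.Threading;

class ClientHandler
{
    // volatile so the worker thread always sees the latest value
    // written by the thread requesting shutdown
    private volatile bool stopRequested;
    private readonly Thread worker;

    public ClientHandler()
    {
        worker = new Thread(Run);
    }

    public void Start()
    {
        worker.Start();
    }

    public void Stop()
    {
        stopRequested = true; // ask the thread to finish
        worker.Join();        // wait for it to exit cleanly
    }

    private void Run()
    {
        try
        {
            while (!stopRequested)
            {
                // Do one unit of work, then re-check the flag.
                Thread.Sleep(100);
            }
        }
        finally
        {
            Console.WriteLine("Exiting thread cleanly.");
            // Any cleanup runs on the normal exit path; no Abort needed.
        }
    }
}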
Whether or not this will "safely" cleanup a thread cannot be discerned from a general code sample unfortunately. It's highly dependent upon the actual code that is executed within the thread. There are multiple issues you must consider. Each represents a potential bug in the code.
If the thread is currently executing native code, it will not immediately respect the Thread.Abort call. It will do all of the work it wants to do in native code and will not throw until execution returns to managed code. Until this happens, thread2 will hang.
Any native resources that are not freed in a finally block will be leaked in this scenario. All native resources should be freed in a finally block, but not all code does this, and it's an issue to consider.
Any locks that are not released in a finally block will remain held and can lead to future deadlocks.
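As a rough sketch, manual Monitor usage is only tolerant of an abort when the release sits in a finally block (the C# lock statement compiles down to roughly this shape); DoWork here is just a placeholder:
private static readonly object gate = new object();

void DoGuardedWork()
{
    Monitor.Enter(gate);
    try
    {
        // If a ThreadAbortException is thrown in here, the finally
        // block still runs and the lock is released.
        DoWork();
    }
    finally
    {
        Monitor.Exit(gate);
    }
}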
There are other issues which are slipping my mind at the moment. But hopefully this will give you some guidance with your application.
It is generally not a good idea to abort threads. What you can do is poll for a stopRequested flag which can be set from other threads. Below is a sample WorkerThread class for your reference. For more information on how to use it, please refer to http://devpinoy.org/blogs/jakelite/archive/2008/12/20/threading-patterns-the-worker-thread-pattern.aspx
public abstract class WorkerThreadBase : IDisposable
{
private Thread _workerThread;
protected internal ManualResetEvent _stopping;
protected internal ManualResetEvent _stopped;
private bool _disposed;
private bool _disposing;
private string _name;
protected WorkerThreadBase()
: this(null, ThreadPriority.Normal)
{
}
protected WorkerThreadBase(string name)
: this(name, ThreadPriority.Normal)
{
}
protected WorkerThreadBase(string name,
ThreadPriority priority)
: this(name, priority, false)
{
}
protected WorkerThreadBase(string name,
ThreadPriority priority,
bool isBackground)
{
_disposing = false;
_disposed = false;
_stopping = new ManualResetEvent(false);
_stopped = new ManualResetEvent(false);
_name = name == null ? GetType().Name : name;
_workerThread = new Thread(threadProc);
_workerThread.Name = _name;
_workerThread.Priority = priority;
_workerThread.IsBackground = isBackground;
}
protected bool StopRequested
{
get { return _stopping.WaitOne(1, true); }
}
protected bool Disposing
{
get { return _disposing; }
}
protected bool Disposed
{
get { return _disposed; }
}
public string Name
{
get { return _name; }
}
public void Start()
{
ThrowIfDisposedOrDisposing();
_workerThread.Start();
}
public void Stop()
{
ThrowIfDisposedOrDisposing();
_stopping.Set();
_stopped.WaitOne();
}
public void WaitForExit()
{
ThrowIfDisposedOrDisposing();
_stopped.WaitOne();
}
#region IDisposable Members
public void Dispose()
{
dispose(true);
}
#endregion
public static void WaitAll(params WorkerThreadBase[] threads)
{
WaitHandle.WaitAll(
Array.ConvertAll<WorkerThreadBase, WaitHandle>(
threads,
delegate(WorkerThreadBase workerThread)
{ return workerThread._stopped; }));
}
public static void WaitAny(params WorkerThreadBase[] threads)
{
WaitHandle.WaitAny(
Array.ConvertAll<WorkerThreadBase, WaitHandle>(
threads,
delegate(WorkerThreadBase workerThread)
{ return workerThread._stopped; }));
}
protected virtual void Dispose(bool disposing)
{
//stop the thread;
Stop();
//make sure the thread joins the main thread
_workerThread.Join(1000);
//dispose of the waithandles
DisposeWaitHandle(_stopping);
DisposeWaitHandle(_stopped);
}
protected void ThrowIfDisposedOrDisposing()
{
if (_disposing)
{
throw new InvalidOperationException(
Properties.Resources.ERROR_OBJECT_DISPOSING);
}
if (_disposed)
{
throw new ObjectDisposedException(
GetType().Name,
Properties.Resources.ERROR_OBJECT_DISPOSED);
}
}
protected void DisposeWaitHandle(WaitHandle waitHandle)
{
if (waitHandle != null)
{
waitHandle.Close();
waitHandle = null;
}
}
protected abstract void Work();
private void dispose(bool disposing)
{
//do nothing if disposed more than once
if (_disposed)
{
return;
}
if (disposing)
{
_disposing = disposing;
Dispose(disposing);
_disposing = false;
//mark as disposed
_disposed = true;
}
}
private void threadProc()
{
Work();
_stopped.Set();
}
}
Is it possible to lock a method for one thread and force another to go further rather than waiting until the first thread finishes? Can this problem be resolved with a static thread, or with some proper pattern using one instance of the service mentioned below?
For presentation purposes, it can be done with a static boolean like below.
public class SomeService
{
private readonly IRepository _repo;
public SomeService(IRepository repo)
{
_repo = repo;
}
private Thread threadOne;
public static bool isLocked { get; set; }
public void StartSomeMethod()
{
if(!isLocked)
{
threadOne = new Thread(SomeMethod);
isLocked = true;
}
}
public void SomeMethod()
{
while(true)
{
// long-running work here
}
// ...
isLocked = false;
}
}
I want to avoid the situation where a user accidentally clicks start twice and the second thread starts immediately after the first one finishes.
You can use lock :)
object locker = new object();
void MethodToLockForAThread()
{
lock(locker)
{
//put method body here
}
}
Now the result will be that when this method is called by a thread (any thread), it puts something like a flag at the beginning of the lock: "STOP! You are not allowed to go any further, you must wait!" Like a red light at a crossroads.
When the thread that called this method first leaves the scope, the "red light" at the beginning of the scope changes to green.
If you want the method not to be executed at all when it is already being run by another thread, you can do that with a bool value. For example:
object locker = new object();
bool canAccess = true;
void MethodToLockForAThread()
{
if(!canAccess)
return;
lock(locker)
{
if(!canAccess)
return;
canAccess = false;
//put method body here
canAccess = true;
}
}
The second check of canAccess inside the lock scope is there because of what has been discussed in the comments. Now it's really thread safe. This is the kind of protection that is advisable in a thread-safe singleton.
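For reference, the same double-check idea as it is commonly applied to a thread-safe singleton (a sketch, not the only way to write one):
public sealed class Singleton
{
    private static volatile Singleton instance;
    private static readonly object locker = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)              // first check, without the lock
            {
                lock (locker)
                {
                    if (instance == null)      // second check, inside the lock
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}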
EDIT
After some discussion with mjwills I have to change my mind and lean more towards Monitor.TryEnter. You can use it like this:
object locker = new object();
void ThreadMethod()
{
    if (Monitor.TryEnter(locker, TimeSpan.FromMilliseconds(1)))
    {
        try
        {
            // do the thread code
        }
        finally
        {
            Monitor.Exit(locker);
        }
    }
    else
    {
        return; // means that the lock has not been acquired
    }
}
Now, the lock might not be acquired because of some exception or because some other thread has already acquired it. In the second parameter you can pass the time that a thread will wait to acquire the lock. I gave a short time here because you don't want the other thread to do the job while the first one is doing it.
So this solution seems the best.
When the other thread cannot acquire the lock, it will go further instead of waiting (well, it will wait for 1 millisecond).
Since lock is a language-specific wrapper around the Monitor class, you need Monitor.TryEnter:
public class SomeService
{
private readonly object lockObject = new object();
public void StartSomeMethod()
{
if (Monitor.TryEnter(lockObject))
{
// start new thread
}
}
public void SomeMethod()
{
try
{
// ...
}
finally
{
Monitor.Exit(lockObject);
}
}
}
You can use an AutoResetEvent instead of your isLocked flag.
AutoResetEvent autoResetEvent = new AutoResetEvent(true);
public void StartSomeMethod()
{
if(autoResetEvent.WaitOne(0))
{
//start thread
}
}
public void SomeMethod()
{
try
{
//Do your work
}
finally
{
autoResetEvent.Set();
}
}
I've been building out a service that processes files using a Queue<string> object to manage the items.
public partial class BasicQueueService : ServiceBase
{
private readonly EventWaitHandle completeHandle =
new EventWaitHandle(false, EventResetMode.ManualReset, "ThreadCompleters");
public BasicQueueService()
{
QueueManager = new Queue<string>();
}
public bool Stopping { get; set; }
private Queue<string> QueueManager { get; }
protected override void OnStart(string[] args)
{
Stopping = false;
ProcessFiles();
}
protected override void OnStop()
{
Stopping = true;
}
private void ProcessFiles()
{
while (!Stopping)
{
var count = QueueManager.Count;
for (var i = 0; i < count; i++)
{
//Check the Stopping Variable again.
if (Stopping) break;
var fileName = QueueManager.Dequeue();
if (string.IsNullOrWhiteSpace(fileName) || !File.Exists(fileName))
continue;
Console.WriteLine($"Processing {fileName}");
Task.Run(() =>
{
DoWork(fileName);
})
.ContinueWith(ThreadComplete);
}
if (Stopping) continue;
Console.WriteLine("Waiting for thread to finish, or 1 minute.");
completeHandle.WaitOne(new TimeSpan(0, 0, 15));
completeHandle.Reset();
}
}
partial void DoWork(string fileName);
private void ThreadComplete(Task task)
{
completeHandle.Set();
}
public void AddToQueue(string file)
{
//Called by FileWatcher/Manual classes, not included for brevity.
lock (QueueManager)
{
if (QueueManager.Contains(file)) return;
QueueManager.Enqueue(file);
}
}
}
Whilst researching how to limit the number of threads on this (I've tried a manual class with an incrementing int, but there's an issue where it doesn't decrement properly in my code), I came across TPL DataFlow, which seems like it's a better fit for what I'm trying to achieve - specifically, it allows me to let the framework handle threading/queueing, etc.
This is now my service:
public partial class BasicDataFlowService : ServiceBase
{
private readonly ActionBlock<string> workerBlock;
public BasicDataFlowService()
{
workerBlock = new ActionBlock<string>(file => DoWork(file), new ExecutionDataflowBlockOptions()
{
MaxDegreeOfParallelism = 32
});
}
public bool Stopping { get; set; }
protected override void OnStart(string[] args)
{
Stopping = false;
}
protected override void OnStop()
{
Stopping = true;
}
partial void DoWork(string fileName);
private void AddToDataFlow(string file)
{
workerBlock.Post(file);
}
}
This works well. However, I want to ensure that a file is only ever added to the TPL DataFlow once. With the Queue, I can check that using .Contains(). Is there a mechanism that I can use for TPL DataFlow?
Your solution with Queue works only if the file comes into your service twice within a short period of time. If it came again after, say, a few hours, the queue would no longer contain it, as you Dequeue it from there.
If this behavior is expected, then you may use a MemoryCache to store the file paths that have already been handled, like this:
using System.Runtime.Caching;
private static object _lock = new object();
private void AddToDataFlow(string file)
{
lock (_lock)
{
if (MemoryCache.Default.Contains(file))
{
return;
}
// no matter what to put into the cache
MemoryCache.Default[file] = true;
// we can now exit the lock
}
workerBlock.Post(file);
}
However, if your application must run for a long time (which a service is intended to do), you'll eventually run out of memory. In that case you probably need to store your file paths in a database or something similar, so that even after restarting the service your code can restore the state.
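One way to bound the cache's memory use, if a database feels like overkill, is to give the entries an expiration policy so that old paths eventually fall out; the 24-hour sliding window below is an arbitrary example:
var policy = new CacheItemPolicy
{
    // Entries not touched for 24 hours are evicted; pick a window that matches
    // how long a duplicate file can realistically take to reappear.
    SlidingExpiration = TimeSpan.FromHours(24)
};
MemoryCache.Default.Add(file, true, policy);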
You can check it inside DoWork.
Keep the already-processed items in a hash set and check that the current file name doesn't already exist in it.
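A rough sketch of that idea, using a ConcurrentDictionary (from System.Collections.Concurrent) as the hash; the field name and doing the check at the top of DoWork are assumptions:
private readonly ConcurrentDictionary<string, bool> processedFiles =
    new ConcurrentDictionary<string, bool>();

private void DoWork(string fileName)
{
    // TryAdd returns false if this file name has been seen before, so skip it.
    if (!processedFiles.TryAdd(fileName, true))
    {
        return;
    }
    // ... actual processing ...
}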
I have a simple Windows service written; here is its skeleton:
internal class ServiceModel {
private Thread workerThread;
private AutoResetEvent finishedEvent;
private Int32 timeout = 60000*15;
public void Start() {
this.workerThread = new Thread(this.Process);
this.finishedEvent = new AutoResetEvent(false);
this.workerThread.Start();
}
public void Stop() {
this.finishedEvent.Set();
this.workerThread.Join(30000);
}
public void Process() {
while(!this.finishedEvent.WaitOne(timeout)) {
// run things here
}
}
}
the first thing
The first thing that I can't understand is that the service waits one timeout before running. Would rewriting new AutoResetEvent(false) as new AutoResetEvent(true) cause the service to start without waiting?
the second thing
Due to some internal reasons (requesting data from external server/service, exception handling) sometimes it is not enough to wait that fixed 15..30-minutes timeout.
How do I rewrite it to work without a fixed timeout?
Do I need to remove that AutoResetEvent instance altogether and run the Process body inside an infinite loop?
public void Process() {
while(true) {
// run things here
}
}
edit. try-catch/lock
In Process method there is a global try-catch block:
public void Process() {
do {
try {
// processing goes here
}
catch(Exception ex) {
Logger.Log.Warn(ex); // or Log.Fatal(ex)...
}
}
while(true);
}
If I use a synchronization object, where do I put the lock statement so that I'm able to call break when isStopped is true?
You don't have to deal with the low-level thread and synchronization primitives API. Consider using the Task Parallel Library (TPL). It's easy to implement OnStop using the TPL cancellation framework:
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;
namespace WindowsService1
{
public partial class Service1 : ServiceBase
{
CancellationTokenSource _mainCts;
Task _mainTask;
public Service1()
{
InitializeComponent();
}
async Task MainTaskAsync(CancellationToken token)
{
while (true)
{
token.ThrowIfCancellationRequested();
// ...
await DoPollingAsync(token);
// ...
}
}
protected override void OnStart(string[] args)
{
_mainCts = new CancellationTokenSource();
_mainTask = MainTaskAsync(_mainCts.Token);
}
protected override void OnStop()
{
_mainCts.Cancel();
try
{
_mainTask.Wait();
}
catch
{
if (!_mainTask.IsCanceled)
throw;
}
}
}
}
Inside MainTaskAsync you can use Task.Run for any CPU-bound work items.
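For example (ProcessBatch is a placeholder for whatever CPU-bound step the loop needs):
async Task MainTaskAsync(CancellationToken token)
{
    while (true)
    {
        token.ThrowIfCancellationRequested();
        await DoPollingAsync(token);
        // Offload CPU-bound work to the thread pool so the async loop
        // itself never blocks a thread for long.
        await Task.Run(() => ProcessBatch(token), token);
    }
}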
Using threads, you can achieve your requirement with the following code:
internal class ServiceModel {
private Thread workerThread;
private object syncLock = new object();
private bool stop = false;
public void Start() {
this.workerThread = new Thread(this.Process);
this.workerThread.Start();
}
public void Stop() {
lock(syncLock) stop = true;
this.workerThread.Join(30000);
}
public void Process() {
while(true){
//your stuff here.
lock(syncLock)
{
if(stop)
break;
}
Thread.Sleep(30000);
}
}
}
I have a small background thread which runs for the application's lifetime - however, when the application is shut down, the thread should exit gracefully.
The problem is that the thread runs some code at an interval of 15 minutes - which means it sleeps a lot.
Now, in order to get it out of sleep, I toss an interrupt at it - my question, however, is whether there's a better approach to this, since interrupts generate a ThreadInterruptedException.
Here's the gist of my code (somewhat pseudo):
public class BackgroundUpdater : IDisposable
{
private Thread myThread;
private const int intervalTime = 900000; // 15 minutes
public void Dispose()
{
myThread.Interrupt();
}
public void Start()
{
myThread = new Thread(ThreadedWork);
myThread.IsBackground = true; // To ensure against app waiting for thread to exit
myThread.Priority = ThreadPriority.BelowNormal;
myThread.Start();
}
private void ThreadedWork()
{
try
{
while (true)
{
Thread.Sleep(900000); // 15 minutes
DoWork();
}
}
catch (ThreadInterruptedException)
{
}
}
}
There's absolutely a better way - either use Monitor.Wait/Pulse instead of Sleep/Interrupt, or use an Auto/ManualResetEvent. (You'd probably want a ManualResetEvent in this case.)
Personally I'm a Wait/Pulse fan, probably due to it being like Java's wait()/notify() mechanism. However, there are definitely times where reset events are more useful.
Your code would look something like this:
private readonly object padlock = new object();
private volatile bool stopping = false;
public void Stop() // Could make this Dispose if you want
{
stopping = true;
lock (padlock)
{
Monitor.Pulse(padlock);
}
}
private void ThreadedWork()
{
while (!stopping)
{
DoWork();
lock (padlock)
{
Monitor.Wait(padlock, TimeSpan.FromMinutes(15));
}
}
}
For more details, see my threading tutorial, in particular the pages on deadlocks, waiting and pulsing, and the page on wait handles. Joe Albahari also has a tutorial which covers the same topics and compares them.
I haven't looked in detail yet, but I wouldn't be surprised if Parallel Extensions also had some functionality to make this easier.
You could use an event to check whether the process should end, like this:
var eventX = new AutoResetEvent(false);
while (true)
{
if(eventX.WaitOne(900000, false))
{
break;
}
DoWork();
}
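On shutdown, whichever thread requests the stop simply signals the event:
// called from the thread requesting shutdown
eventX.Set();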
There is the CancellationTokenSource class in .NET 4 and later, which simplifies this task a bit.
private readonly CancellationTokenSource cancellationTokenSource =
new CancellationTokenSource();
private void Run()
{
while (!cancellationTokenSource.IsCancellationRequested)
{
DoWork();
cancellationTokenSource.Token.WaitHandle.WaitOne(
TimeSpan.FromMinutes(15));
}
}
public void Stop()
{
cancellationTokenSource.Cancel();
}
Don't forget that CancellationTokenSource is disposable, so make sure you dispose it properly.
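For example, Stop could be extended to cancel, wait for the loop to finish, and only then dispose; the workerThread field below is an assumption about how Run is hosted:
public void Stop()
{
    cancellationTokenSource.Cancel();
    workerThread.Join();               // wait until Run() has returned
    cancellationTokenSource.Dispose(); // safe once nothing uses the token any more
}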
One method might be to add a cancel event or delegate that the thread will subscribe to. When the cancel event is invoked, the thread can stop itself.
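A rough sketch of that idea, with illustrative names (Host raises the event on shutdown; the worker subscribes and flips its own flag):
public class Host
{
    // Raised by the application when it is shutting down.
    public event EventHandler Cancelled;

    public void RaiseCancelled()
    {
        EventHandler handler = Cancelled;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

public class Worker
{
    private volatile bool stop;

    public Worker(Host host)
    {
        // The worker subscribes to the cancel event and stops itself.
        host.Cancelled += delegate { stop = true; };
    }

    public void ThreadedWork()
    {
        while (!stop)
        {
            DoWork(); // placeholder for the real work
            Thread.Sleep(1000);
        }
    }

    private void DoWork() { }
}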
I absolutely like Jon Skeet's answer. However, this might be a bit easier to understand and should also work:
public class BackgroundTask : IDisposable
{
private readonly CancellationTokenSource cancellationTokenSource;
private bool stop;
public BackgroundTask()
{
this.cancellationTokenSource = new CancellationTokenSource();
this.stop = false;
}
public void Stop()
{
this.stop = true;
this.cancellationTokenSource.Cancel();
}
public void Dispose()
{
this.cancellationTokenSource.Dispose();
}
private void ThreadedWork(object state)
{
using (var syncHandle = new ManualResetEventSlim())
{
while (!this.stop)
{
syncHandle.Wait(TimeSpan.FromMinutes(15), this.cancellationTokenSource.Token);
if (!this.cancellationTokenSource.IsCancellationRequested)
{
// DoWork();
}
}
}
}
}
Or, including waiting for the background task to actually have stopped (in this case, Dispose must be invoked by a thread other than the one the background work is running on, and of course this is not perfect code - it requires the worker thread to actually have started):
using System;
using System.Threading;
public class BackgroundTask : IDisposable
{
private readonly ManualResetEventSlim threadedWorkEndSyncHandle;
private readonly CancellationTokenSource cancellationTokenSource;
private bool stop;
public BackgroundTask()
{
this.threadedWorkEndSyncHandle = new ManualResetEventSlim();
this.cancellationTokenSource = new CancellationTokenSource();
this.stop = false;
}
public void Dispose()
{
this.stop = true;
this.cancellationTokenSource.Cancel();
this.threadedWorkEndSyncHandle.Wait();
this.cancellationTokenSource.Dispose();
this.threadedWorkEndSyncHandle.Dispose();
}
private void ThreadedWork(object state)
{
try
{
using (var syncHandle = new ManualResetEventSlim())
{
while (!this.stop)
{
syncHandle.Wait(TimeSpan.FromMinutes(15), this.cancellationTokenSource.Token);
if (!this.cancellationTokenSource.IsCancellationRequested)
{
// DoWork();
}
}
}
}
finally
{
this.threadedWorkEndSyncHandle.Set();
}
}
}
If you see any flaws or disadvantages compared to Jon Skeet's solution, I'd like to hear them, as I always enjoy learning ;-)
I guess this is slower and uses more memory, and should thus not be used at large scale or over short timeframes. Any others?
I have an object in C# on which I need to execute a method on a regular basis. I would like this method to be executed only while other people are using my object; as soon as people stop using my object, I would like this background operation to stop.
So here is a simple example (which is broken):
class Fish
{
public Fish()
{
Thread t = new Thread(new ThreadStart(BackgroundWork));
t.IsBackground = true;
t.Start();
}
public void BackgroundWork()
{
while(true)
{
this.Swim();
Thread.Sleep(1000);
}
}
public void Swim()
{
Console.WriteLine("The fish is Swimming");
}
}
The problem is that if I new up a Fish object anywhere, it never gets garbage collected, because there is a background thread referencing it. Here is an illustrated version of the broken code:
public void DoStuff()
{
Fish f = new Fish();
}
// after exiting from this method my Fish object keeps on swimming.
I know that the Fish object should be disposable and I should clean up the thread on Dispose, but I have no control over my callers and cannot ensure Dispose is called.
How do I work around this problem and ensure the background threads are automatically disposed even if Dispose is not called explicitly?
Here is my proposed solution to this problem:
class Fish : IDisposable
{
class Swimmer
{
Thread t;
WeakReference fishRef;
public ManualResetEvent terminate = new ManualResetEvent(false);
public Swimmer(Fish fish)
{
this.fishRef = new WeakReference(fish);
t = new Thread(new ThreadStart(BackgroundWork));
t.IsBackground = true;
t.Start();
}
public void BackgroundWork()
{
bool done = false;
while(!done)
{
done = Swim();
if (!done)
{
done = terminate.WaitOne(1000, false);
}
}
}
// this is pulled out into a helper method to ensure
// the Fish object is referenced for the minimal amount of time
private bool Swim()
{
bool done;
Fish fish = Fish;
if (fish != null)
{
fish.Swim();
done = false;
}
else
{
done = true;
}
return done;
}
public Fish Fish
{
get { return fishRef.Target as Fish; }
}
}
Swimmer swimmer;
public Fish()
{
swimmer = new Swimmer(this);
}
public void Swim()
{
Console.WriteLine("The third fish is Swimming");
}
volatile bool disposed = false;
public void Dispose()
{
if (!disposed)
{
swimmer.terminate.Set();
disposed = true;
GC.SuppressFinalize(this);
}
}
~Fish()
{
if(!disposed)
{
Dispose();
}
}
}
I think the IDisposable solution is the correct one.
If the users of your class don't follow the guidelines for using classes that implement IDisposable it's their fault - and you can make sure that the documentation explicitly mentions how the class should be used.
Another, much messier, option would be a "KeepAlive" DateTime field that each method called by your client would update. The worker thread then checks the field periodically and exits if it hasn't been updated for a certain amount of time. When a method sets the field, the thread will be restarted if it has exited.
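A rough sketch of that KeepAlive idea (the 10-second window and the Poke method are arbitrary placeholders):
class Fish
{
    private readonly object sync = new object();
    private DateTime lastKeepAlive;
    private Thread worker;

    public Fish()
    {
        KeepAlive();
    }

    // Each method the client calls should start with KeepAlive().
    public void Poke()
    {
        KeepAlive();
        // ... whatever the method does ...
    }

    private void KeepAlive()
    {
        lock (sync)
        {
            lastKeepAlive = DateTime.UtcNow;
            if (worker == null || !worker.IsAlive)  // restart the worker if it has exited
            {
                worker = new Thread(BackgroundWork);
                worker.IsBackground = true;
                worker.Start();
            }
        }
    }

    private void BackgroundWork()
    {
        while (true)
        {
            Swim();
            Thread.Sleep(1000);
            lock (sync)
            {
                // Exit if the client hasn't touched the object for 10 seconds.
                if (DateTime.UtcNow - lastKeepAlive > TimeSpan.FromSeconds(10))
                    return;
            }
        }
    }

    public void Swim()
    {
        Console.WriteLine("The fish is Swimming");
    }
}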
This is how I would do it:
class Fish3 : IDisposable
{
Thread t;
private ManualResetEvent terminate = new ManualResetEvent(false);
private volatile int disposed = 0;
public Fish3()
{
t = new Thread(new ThreadStart(BackgroundWork));
t.IsBackground = true;
t.Start();
}
public void BackgroundWork()
{
while(!terminate.WaitOne(1000, false))
{
Swim();
}
}
public void Swim()
{
Console.WriteLine("The third fish is Swimming");
}
public void Dispose()
{
if(Interlocked.Exchange(ref disposed, 1) == 0)
{
terminate.Set();
t.Join();
GC.SuppressFinalize(this);
}
}
~Fish3()
{
if(Interlocked.Exchange(ref disposed, 1) == 0)
{
Dispose();
}
}
}