Close all child threads WPF - c#

I have a Window in WPF and user can start very long operations on it. User must be able to cancel those operations.
All of my operations are in separate threads. So my question is:
Can I terminate, at any time, all threads that were started from that Window, without (obviously) killing the UI thread?
In the places where I need to do long operations, the threads were created and started like this:
Thread thread =
    new Thread(
        new ThreadStart(
            delegate
            { ... }));
thread.Start();
How would I pass that object to it? Is it even possible? If it matters at all, I do not care about closing the threads gracefully; they can simply be killed and it would still be a solution. Is the Window object aware of the threads it is the parent of?
Thank you in advance.

Typically you won't want to create and destroy threads yourself. There's much more overhead in creating a Thread every time you need one than there is in using thread pools and Tasks (this applies, as noted, when you create a significant number of Threads over the lifetime of your process).
The preferred approach (especially if you're using .NET 4.0, or even better 4.5) is to use Tasks.
There is tons of documentation on how to use Tasks and how to cancel them; #xxbbcc posted a link in a comment on your question.
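As a rough illustration only (the method and control names below are made up, not from your code), a Task-based long operation can poll a CancellationToken and the Window can trip it from a Cancel button at any time:
private CancellationTokenSource _cts;

private void StartLongOperation()
{
    _cts = new CancellationTokenSource();
    var token = _cts.Token;

    Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 1000; i++)
        {
            token.ThrowIfCancellationRequested(); // cooperative cancellation point
            DoOneChunkOfWork(i);                  // placeholder for the real work
        }
    }, token);
}

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (_cts != null)
        _cts.Cancel(); // every task observing this token stops at its next check
}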
However, if you still think that dealing with Threads is your best choice, you could keep track of all the threads. Then, whenever you (as the developer) or your user decides they should be killed, you can iterate through the threads and call Abort() on each one.
public class MyExampleClass
{
    private List<Thread> MyThreads { get; set; }

    public MyExampleClass()
    {
        MyThreads = new List<Thread>();
        InstanciateThreadsWithSomeSuperImportantOperations();
    }

    private void InstanciateThreadsWithSomeSuperImportantOperations()
    {
        var thread = new Thread(() =>
        {
            // some long-running work here
        });
        MyThreads.Add(thread);
        thread.Start();
    }

    public void KillAllThreads()
    {
        foreach (var t in MyThreads)
        {
            if (t.IsAlive)
                t.Abort(); // Note this isn't guaranteed to stop the thread.
        }
    }
}

Related

killing a long running thread that is blocking on another child process to end

So, a little background. I have a program that creates a child process that runs long term and does some processing that we don't really care about for this question. It exists, and it needs to keep existing. So after starting that child process I start a thread that watches it and blocks waiting for it to end via Process.WaitForExit(); if it ends, the thread restarts the child process and then waits again.

Now the problem is: how do I gracefully shut all of this down? If I kill the child process first, the thread waiting on it will spin it up again, so I know the watcher thread needs to be killed first. I have been doing this with Thread.Abort(), catching the ThreadAbortException and returning, which ends the watcher thread, and then I kill my child process.

But I have been told that Thread.Abort() should be avoided at all costs and is possibly no longer supported in .NET Core. So my question is: why is Thread.Abort() so dangerous if I am catching the ThreadAbortException? And what is the best practice for immediately killing that thread so it doesn't have a chance to spin the child process up again during shutdown?
What you are looking for is a way to communicate across threads. There are multiple ways to do this, but each comes with its own conditions.
For example, mutexes and semaphores are available across processes, while events and wait handles are specific to a given process, etc. Once you know the details of these, you can use them to send a signal from one thread to another.
A simple setup for your requirement could be the following (a sketch follows the list):
Create a reset event before spawning any of your threads.
Let the child thread begin. In your parent, wait on the reset event that you created.
Let the child thread set the event when it is done.
In your parent thread the wait then completes, and you can take further action, such as kicking off the thread again and waiting on it, or simply cleaning up and walking out of execution.
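A rough sketch of this idea, adapted to the watcher-thread scenario in the question (all names here - StartChildProcess, _shutdown, watcherThread - are illustrative, not from the original code):
static readonly ManualResetEvent _shutdown = new ManualResetEvent(false);

static void WatcherThread()
{
    while (!_shutdown.WaitOne(0))               // has the parent asked us to stop?
    {
        using (var child = StartChildProcess()) // placeholder for the real start-up code
        {
            child.WaitForExit();                // blocks until the child dies
        }
        // loop around and restart the child, unless shutdown was requested in the meantime
    }
}

// Parent thread, during shutdown:
//   _shutdown.Set();       // the watcher will not respawn the child any more
//   childProcess.Kill();   // WaitForExit() returns, the watcher sees the flag and exits
//   watcherThread.Join();  // wait for the watcher to finish cleanly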
Thread.Abort is an unclean way of finishing your processing. If you read the documentation here - https://learn.microsoft.com/en-us/dotnet/api/system.threading.thread.abort?view=net-6.0 - the remarks clearly tell you that you can't be sure what state your thread's execution was in. The thread may not get the opportunity to follow up with important clean-up tasks, such as releasing resources it no longer requires.
This can also lead to deadlock if you have more complicated constructs in place, for example when the thread being aborted is inside a protected region of code, such as a catch block or a finally block. If the thread that calls Abort holds a lock that the aborted thread is waiting on, a deadlock can occur.
The key thing to remember in multithreading is that it is your responsibility to give the logic a clean way of reaching completion and finishing the thread's execution.
Please note that the steps suggested above are only one way of doing it. Depending on your requirements they can be restructured or improved further. For example, if you are spawning another process, you will require kernel-level objects such as mutexes or semaphores; objects like events or flags can't work across processes.
Read here - https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives for more information.
As mentioned by others, Thread.Abort has major issues, and should be avoided if at all possible. It can raise the exception at any point in the code, in a possibly completely unexpected location, and possibly leave data in a highly corrupted state.
In this instance, it's entirely unnecessary.
You should change the waiting thread to use async instead. For example, you can do something like this.
static async Task RunProcessWithRestart()
{
    using (cancel = new CancellationTokenSource())
    {
        try
        {
            while (true)
            {
                using (var process = CreateMyProcessAndStart())
                {
                    await process.WaitForExitAsync(cancel.Token);
                }
            }
        }
        catch (OperationCanceledException)
        {
        }
    }
}

static CancellationTokenSource cancel;

public static void StartWaitForProcess()
{
    Task.Run(RunProcessWithRestart);
}

public static void ShutdownWaitForProcess()
{
    cancel.Cancel();
}
An alternative, which doesn't require calling Cancel() from a separate shutdown function, is to subscribe to the AppDomain.ProcessExit event.
static async Task RunProcessWithRestart()
{
    using var cancel = new CancellationTokenSource();
    AppDomain.CurrentDomain.ProcessExit += (s, e) => cancel.Cancel();
    try
    {
        while (true)
        {
            using (var process = CreateMyProcessAndStart())
            {
                await process.WaitForExitAsync(cancel.Token);
            }
        }
    }
    catch (OperationCanceledException)
    {
    }
}

public static void StartWaitForProcess()
{
    Task.Run(RunProcessWithRestart);
}

Two threads one core

I'm playing around with a simple console app that creates one thread and I do some inter thread communication between the main and the worker thread.
I'm posting objects from the main thread to a concurrent queue and the worker thread is dequeueing that and does some processing.
What strikes me as odd is that when I profile this app, even though I have two cores, one core is 100% free and the other core has done all the work, and I see that both threads have been running on that core.
Why is this?
Is it because I use a wait handle that sets when I post a message and releases when the processing is done?
This is my sample code, now using 2 worker threads.
It still behaves the same, main, worker1 and worker2 is running in the same core.
Ideas?
[EDIT]
It sort of works now; at least I get twice the performance compared to yesterday.
The trick was to slow down the consumer just enough to avoid signaling via the AutoResetEvent.
public class SingleThreadDispatcher
{
    public long Count;
    private readonly ConcurrentQueue<Action> _queue = new ConcurrentQueue<Action>();
    private volatile bool _hasMoreTasks;
    private volatile bool _running = true;
    private int _status;
    private readonly AutoResetEvent _signal = new AutoResetEvent(false);

    public SingleThreadDispatcher()
    {
        var thread = new Thread(Run)
        {
            IsBackground = true,
            Name = "worker" + Guid.NewGuid(),
        };
        thread.Start();
    }

    private void Run()
    {
        while (_running)
        {
            _signal.WaitOne();
            do
            {
                _hasMoreTasks = false;

                Action task;
                while (_queue.TryDequeue(out task) && _running)
                {
                    Count++;
                    task();
                }

                //wait a short while to let _hasMoreTasks to maybe be set to true
                //this avoids the roundtrip to the AutoResetEvent
                //that is, if there is intense pressure on the pool, we let some new
                //tasks have the chance to arrive and be processed w/o signaling
                if (!_hasMoreTasks)
                    Thread.Sleep(5);

                Interlocked.Exchange(ref _status, 0);
            } while (_hasMoreTasks);
        }
    }

    public void Schedule(Action task)
    {
        _hasMoreTasks = true;
        _queue.Enqueue(task);
        SetSignal();
    }

    private void SetSignal()
    {
        if (Interlocked.Exchange(ref _status, 1) == 0)
        {
            _signal.Set();
        }
    }
}
Is it because I use a wait handle that sets when I post a message and releases when the processing is done?
Without seeing your code it is hard to say for sure, but from your description it appears that the two threads you wrote act as co-routines: when the main thread is running, the worker thread has nothing to do, and vice versa. It looks like the .NET scheduler is smart enough not to load the second core when this happens.
You can change this behavior in several ways - for example
by doing some work on the main thread before waiting on the handle, or
by adding more worker threads that would compete for the tasks that your main thread posts, and could both get a task to work on.
OK, I've figured out what the problem is.
The producer and the consumer are pretty much equally fast in this case.
This results in the consumer finishing all its work quickly and then looping back to wait on the AutoResetEvent.
The next time the producer sends a task, it has to touch the AutoResetEvent and set it.
The solution was to add a very small delay in the consumer, making it slightly slower than the producer.
As a result, when the producer sends a task it notices that the consumer is already active, so it just has to post to the worker queue without touching the AutoResetEvent.
The original behavior resulted in a sort of ping-pong effect, which can be seen in the screenshot.
Dasblinkelight (probably) has the right answer.
Apart from that, it would also be the correct behaviour when one of your threads is I/O bound (that is, it's not stuck on the CPU) - in that case, you've got nothing to gain from using multiple cores, and .NET is smart enough to just change contexts on one core.
This is often the case for the UI thread - it has very little work to do, so there usually isn't much of a reason for it to occupy a whole core for itself. And yes, if your concurrent queue is not used properly, it could simply mean that the main thread waits for the worker thread - again, in that case there's no need to switch cores, since the original thread is waiting anyway.
You should use BlockingCollection rather than ConcurrentQueue. By default, BlockingCollection uses a ConcurrentQueue under the hood, but it has a much easier to use interface. In particular, it does non-busy waits. In addition, BlockingCollection supports cancellation, so your consumer becomes very simple. Here's an example:
public class SingleThreadDispatcher
{
    public long Count;
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly CancellationTokenSource _cancellation = new CancellationTokenSource();

    public SingleThreadDispatcher()
    {
        var thread = new Thread(Run)
        {
            IsBackground = true,
            Name = "worker" + Guid.NewGuid(),
        };
        thread.Start();
    }

    private void Run()
    {
        foreach (var task in _queue.GetConsumingEnumerable(_cancellation.Token))
        {
            Count++;
            task();
        }
    }

    public void Schedule(Action task)
    {
        _queue.Add(task);
    }
}
The loop with GetConsumingEnumerable will do a non-busy wait on the queue. There's no need to do it with a separate event. It will wait for an item to be added to the queue, or it will exit if you set the cancellation token.
To stop it normally, you just call _queue.CompleteAdding(). That tells the consumer that no more items will be added to the queue. The consumer will empty the queue and then exit.
If you want to quit early, then just call _cancellation.Cancel(). That will cause GetConsumingEnumerable to exit.
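If you want callers to be able to trigger either form of shutdown, you could expose both on the dispatcher; something along these lines (hypothetical additions, not part of the class above):
// Hypothetical additions to the SingleThreadDispatcher shown above.
public void CompleteAndStop()   // graceful: the worker drains the remaining items, then exits
{
    _queue.CompleteAdding();
}

public void StopNow()           // immediate: GetConsumingEnumerable throws OperationCanceledException,
{                               // so Run() should catch it if you use this
    _cancellation.Cancel();
}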
In general, you shouldn't ever have to use ConcurrentQueue directly. BlockingCollection is easier to use and provides equivalent performance.

Freeing resources when thread is not alive

I am using a BackgroundWorker, and inside it a foreach loop in which I create a new thread, wait for it to finish, then report progress and continue the loop. Here is what I am talking about:
private void DoWork(object sender, DoWorkEventArgs e) {
    var fileCounter = Convert.ToDecimal(fileNames.Count());
    decimal i = 0;
    foreach (var file in fileNames) {
        i++;
        var generator = new Generator(assembly);
        var thread = new Thread(new ThreadStart(
            delegate() {
                generator.Generate(file);
            }));
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        while (thread.IsAlive); // critical point
        int progress = Convert.ToInt32(Math.Round(i / fileCounter * 100));
        backgroundWorker.ReportProgress(progress);
    }
}
The problem is that memory is not being freed after the thread finishes (after the "critical point" line is passed). I thought that when a thread is no longer alive, all resources associated with it would be released. But apparently this is not true.
Can anyone explain to me why, and what I am doing wrong?
Thanks.
You managed to shut up the component telling you that you were doing something wrong. You however didn't actually fix the problem. An STA, Single Threaded Apartment, is required by components that do not support threading. So that all of its methods are called from the same thread, even if the call was made on another thread. COM takes care of marshaling the call from one thread to another. An STA thread makes that possible by pumping a message loop.
What you did however is create another thread and make calls on it, distinct from the thread on which the generator object was created. This doesn't solve the problem, it is still thread-unsafe. COM still marshals the call.
What matters a great deal is the thread on which you created the generator object. Since it is an apartment threaded object, it must be created on an STA thread. There normally is only one in a Windows app, the main thread of your program, otherwise commonly known as the UI thread. If you create it on a .NET worker thread that isn't STA, like you do here, then COM will step in and create an STA thread itself to give the component a hospitable home. This is nice but usually undesirable.
There's no free lunch here: you cannot magically make a chunk of code that explicitly says (via the ThreadingModel key in the registry) that it doesn't support threading behave as if it does. Your next best bet is to create an STA thread yourself and run all of the code on it, including the COM object creation. Beware that you typically have to pump a message loop with Application.Run(); many COM servers assume one is available, especially when they tell you that an STA thread is required. You'll notice that they do when they misbehave, deadlocking on a method call or not raising events.
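A rough sketch of that approach, assuming a WinForms-style message pump is acceptable (Generator and assembly are from the question; everything else here is illustrative):
// Sketch: one dedicated STA thread that owns the COM-backed object and pumps messages.
var staThread = new Thread(() =>
{
    var generator = new Generator(assembly);    // create the object on the thread it will live on
    // hand work to 'generator' through a queue of your own, then:
    System.Windows.Forms.Application.Run();     // pumps messages until Application.ExitThread() is called
});
staThread.SetApartmentState(ApartmentState.STA);
staThread.IsBackground = true;
staThread.Start();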
Regarding your original question, this is standard .NET behavior. The garbage collector runs when it needs to, not when you think it should. You can override it with GC.Collect() but that's very rarely necessary. Although it might be a quick fix in your case, COM creates a new thread for every single file. The STA thread to give the generator a home. Use Debug + Windows + Threads to see them. These threads won't stop until the COM object is destroyed. Which requires the finalizer thread to run. Your code will also consume all available memory and bomb with OOM when there are more than two thousand files, perhaps reason enough to look for a real fix.
This might be because garbage collection is not immediate. Try to collect after the thread goes out of scope:
Edit:
You also need a better way to wait for the thread to finish than the busy wait (while (thread.IsAlive);), to save CPU time; you can use an AutoResetEvent.
private void DoWork(object sender, DoWorkEventArgs e) {
    var fileCounter = Convert.ToDecimal(fileNames.Count());
    decimal i = 0;
    var Event = new AutoResetEvent(false);
    foreach (var file in fileNames) {
        i++;
        var generator = new Generator(assembly);
        {
            var thread = new Thread(new ThreadStart(
                delegate() {
                    generator.Generate(file);
                    Event.Set();
                }));
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
            //while (thread.IsAlive); // critical point
            Event.WaitOne();
        }
        GC.Collect();
        int progress = Convert.ToInt32(Math.Round(i / fileCounter * 100));
        backgroundWorker.ReportProgress(progress);
    }
}
Does the Generate method get data from UI controls?

Suggest an Improved approach for Multi-threaded application

I am building a multi-threaded application where I spawn three threads when the application starts, and these threads continue to run for the application's lifetime. All my threads are exclusive and do not interfere with each other in any way. Now a user can suspend the application, and at that point I want to suspend or, say, abort my threads.
I am currently spawning the threads as foreground threads, but I guess changing them to background threads wouldn't affect my application anyway (except that foreground threads would keep the application alive until they finish).
I would like people here to suggest an approach: suspend the threads via Thread.Suspend() or abort them via Thread.Abort()? I know Thread.Suspend is obsolete and risky, but is it harmful even in my application, where I am not using any kind of synchronization?
PS: My threads are saving and retrieving some data to and from an embedded database (SQLite) every minute.
Use the Blocking mechanisms like WaitHandles (ManualResetEvent, AutoResetEvent), Monitor, Semaphore etc...
Andrew
P.S. the question is quite broad so I would ultimately recommend reading up on proven practices and principles of Multi Threading which will include synchronization. Your requirements do not sound too complex so I am sure you will be able to research the best way which suits your needs.
You could create a mutex and let the threads wait for a signal on that mutex. This way your threads are not destroyed but they will sleep almost without consuming resources.
Mutex.WaitOne
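For example, a cooperative pause/resume could look roughly like this (a sketch, not the poster's code; ManualResetEventSlim is used here instead of a kernel mutex, and _running / DoOneUnitOfWork are placeholders):
// Each worker calls _pause.Wait() at a safe point in every iteration:
// Set() lets the workers run, Reset() suspends them at their next check.
private readonly ManualResetEventSlim _pause = new ManualResetEventSlim(true);
private volatile bool _running = true;

private void WorkerLoop()
{
    while (_running)
    {
        _pause.Wait();       // blocks here while the application is "suspended"
        DoOneUnitOfWork();   // e.g. the per-minute SQLite read/write from the question
    }
}

public void SuspendWorkers() { _pause.Reset(); }
public void ResumeWorkers()  { _pause.Set(); }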
I always use ManualResetEvent for this:
class Myclass
{
    ManualResetEvent _event = new ManualResetEvent(false);
    Thread _thread;

    public void Start()
    {
        _thread = new Thread(WorkerThread);
        _thread.IsBackground = true;
        _thread.Start();
    }

    public void Stop()
    {
        _event.Set();
        if (!_thread.Join(5000))
            _thread.Abort();
    }

    private void WorkerThread()
    {
        while (true)
        {
            // wait 5 seconds, change to whatever you like
            if (_event.WaitOne(5000))
                break; // signalled to stop

            //do something else here
        }
    }
}
Actually, this situation is the only one where Thread.Suspend does make sense. The reason it's obsolete is that people misuse it to fake synchronization, or use it on threads they do not own (e.g., ThreadPool threads).

Starting multiple threads and keeping track of them from my .NET application

I would like to start x number of threads from my .NET application, and I would like to keep track of them, as I will need to terminate them manually or when my application closes later on.
Example ==> Start Thread Alpha, Start Thread Beta .. then at any point in my application I should be able to say Terminate Thread Beta ..
What is the best way to keep track of opened threads in .NET, and what do I need to know (an id?) about a thread in order to terminate it?
You could save yourself the donkey work and use this Smart Thread Pool. It provides a unit of work system which allows you to query each thread's status at any point, and terminate them.
If that is too much bother, then, as mentioned, an IDictionary<string, Thread> is probably the simplest solution. Or even simpler: give each of your threads a name and use an IList<Thread>:
public class MyThreadPool
{
    private IList<Thread> _threads;
    private readonly int MAX_THREADS = 25;

    public MyThreadPool()
    {
        _threads = new List<Thread>();
    }

    public void LaunchThreads()
    {
        for (int i = 0; i < MAX_THREADS; i++)
        {
            Thread thread = new Thread(ThreadEntry);
            thread.IsBackground = true;
            thread.Name = string.Format("MyThread{0}", i);

            _threads.Add(thread);
            thread.Start();
        }
    }

    public void KillThread(int index)
    {
        string id = string.Format("MyThread{0}", index);
        foreach (Thread thread in _threads)
        {
            if (thread.Name == id)
                thread.Abort();
        }
    }

    void ThreadEntry()
    {
    }
}
You can of course get a lot more involved and complicated with it. If killing your threads isn't time sensitive (for example if you don't need to kill a thread in 3 seconds in a UI) then a Thread.Join() is a better practice.
And if you haven't already read it, then Jon Skeet has this good discussion and solution for the "don't use abort" advice that is common on SO.
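In its simplest form, that graceful approach looks something like this (a sketch; _stopRequested and the three-second timeout are just examples, not from the linked discussion):
// The worker polls a flag; the owner sets it and waits a bounded time before resorting to Abort.
private volatile bool _stopRequested;

void ThreadEntry()
{
    while (!_stopRequested)
    {
        // ... one unit of work ...
    }
}

public void StopThread(Thread thread)
{
    _stopRequested = true;
    if (!thread.Join(TimeSpan.FromSeconds(3)))   // give it a chance to finish on its own
        thread.Abort();                          // last resort only
}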
You can create a Dictionary of threads and assign them id's, like:
Dictionary<string, Thread> threads = new Dictionary<string, Thread>();

for (int i = 0; i < numOfThreads; i++)
{
    string id = "Thread" + i;   // any naming scheme you want
    Thread thread = new Thread(new ThreadStart(MethodToExe));
    thread.Name = id;
    thread.Start();             // if you wish to start them straight away and call MethodToExe
    threads.Add(id, thread);
}
If you don't want to store the threads against an id, you can use a list and later just enumerate it to kill the threads.
When you wish to terminate them, you can abort them. But it is better to have some condition in your MethodToExe that lets the method return, allowing the thread to terminate gracefully. Something like:
void MethodToExe()
{
    while (_isRunning)
    {
        // your code here
        if (!_isRunning)
        {
            break;
        }
        // your code here
    }
}
To abort you can enumerate the dictionary and call Thread.Abort(). Be ready to catch ThreadAbortException
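For example, sketched against the dictionary above:
// Abort every tracked thread; each worker should be prepared for ThreadAbortException.
foreach (var entry in threads)
{
    if (entry.Value.IsAlive)
        entry.Value.Abort();
}

// Inside MethodToExe, wrap the work so the abort can be observed:
try
{
    // ... work ...
}
catch (ThreadAbortException)
{
    // clean up here; the exception is re-thrown automatically at the end of this block
}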
I asked a similar questions and received a bunch of good answers: Shutting down a multithreaded application
Note: my question did not require a graceful exit, but people still recommended that I gracefully exit from the loop of each thread.
The main thing to remember is that if you want to avoid having your threads prevent your process from terminating you should set all your threads to background:
Thread thread = new Thread(new ThreadStart(testObject.RunLoop));
thread.IsBackground = true;
thread.Start();
The preferred way to start and manage threads is in a ThreadPool, but just about any container out there can be used to keep a reference to your threads. Your threads should always have a flag that will tell them to terminate and they should continually check it.
Furthermore, for better control you can supply your threads with a CountdownLatch: whenever a thread exits its loop, it signals on the CountdownLatch. Your main thread calls the CountdownLatch.Wait() method, which blocks until all the threads have signaled... this ensures that all your threads have shut down before you start cleaning up.
public class CountdownLatch
{
    private int m_remain;
    private EventWaitHandle m_event;

    public CountdownLatch(int count)
    {
        Reset(count);
    }

    public void Reset(int count)
    {
        if (count < 0)
            throw new ArgumentOutOfRangeException();

        m_remain = count;
        m_event = new ManualResetEvent(false);
        if (m_remain == 0)
        {
            m_event.Set();
        }
    }

    public void Signal()
    {
        // The last thread to signal also sets the event.
        if (Interlocked.Decrement(ref m_remain) == 0)
            m_event.Set();
    }

    public void Wait()
    {
        m_event.WaitOne();
    }
}
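Usage is symmetrical: size the latch to the number of workers, have each worker Signal() on its way out, and Wait() on the main thread. A sketch (workerThreads, stopRequested and the way the workers receive the latch are assumptions, not part of the class above):
// Sketch: coordinating shutdown of N worker threads with the latch above.
var latch = new CountdownLatch(workerThreads.Count);

foreach (var t in workerThreads)
    t.Start();                 // each worker was given 'latch' and calls latch.Signal() just before returning

// ... later, during shutdown ...
stopRequested = true;          // the termination flag every worker polls
latch.Wait();                  // blocks until every worker has signaled
// all threads are done; safe to clean up shared state now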
It's also worth mentioning that the Thread.Abort() method does some strange things:

When a thread calls Abort on itself, the effect is similar to throwing an exception; the ThreadAbortException happens immediately, and the result is predictable. However, if one thread calls Abort on another thread, the abort interrupts whatever code is running. There is also a chance that a static constructor could be aborted. In rare cases, this might prevent instances of that class from being created in that application domain. In the .NET Framework versions 1.0 and 1.1, there is a chance the thread could abort while a finally block is running, in which case the finally block is aborted.

The thread that calls Abort might block if the thread that is being aborted is in a protected region of code, such as a catch block, finally block, or constrained execution region. If the thread that calls Abort holds a lock that the aborted thread requires, a deadlock can occur.
After creating your thread, you can set its Name property. Assuming you store your threads in some collection, you can access them conveniently via LINQ in order to retrieve (and abort) one:
var myThread = (from thread in threads
                where thread.Name == "myThread"
                select thread).FirstOrDefault();
if (myThread != null)
    myThread.Abort();
Wow, there are so many answers..
You can simply use an array to hold the threads. This will only work if access to the array is sequential; if another thread accesses the array, you will need to synchronize access.
You can use the thread pool, but the thread pool is very limited and can only hold a fixed number of threads.
As mentioned above, you can create your own thread pool, which in .NET v4 becomes much easier with the introduction of safe collections.
You can also manage them by holding a list of mutex objects which determine when those threads should finish: the threads query the mutex each time they run, before doing anything else, and if it's set they terminate. You can manage the mutexes from anywhere, and since a mutex is by definition thread-safe, it's fairly easy.
I can think of another ten ways, but those seem to work. Let me know if they don't fit your needs.
Depends on how sophisticated you need it to be. You could implement your own type of ThreadPool with helper methods etc. However, I think it's as simple as just maintaining a list/array and adding/removing the threads to/from the collection accordingly.
You could also use a Dictionary collection and your own kind of key to retrieve them, i.e. Guids/strings.
As you start each thread, put its ManagedThreadId into a Dictionary as the key and the thread instance as the value. Use a callback from each thread to return its ManagedThreadId, which you can use to remove the thread from the Dictionary when it terminates. You can also walk the Dictionary to abort threads if needed. Make the threads background threads so that they terminate if your app terminates unexpectedly.
You can use a separate callback to signal threads to continue or halt, which reflects a flag set by your UI, for a graceful exit. You should also trap the ThreadAbortException in your threads so that you can do any cleanup if you have to abort threads instead.
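A sketch of that bookkeeping (the names are illustrative, not from a specific library):
// Track running threads by ManagedThreadId; each thread removes itself when it finishes.
private readonly Dictionary<int, Thread> _threads = new Dictionary<int, Thread>();
private readonly object _lock = new object();

public void StartWorker(Action work)
{
    var thread = new Thread(() =>
    {
        int id = Thread.CurrentThread.ManagedThreadId;
        lock (_lock) { _threads[id] = Thread.CurrentThread; }
        try
        {
            work();
        }
        catch (ThreadAbortException)
        {
            // optional cleanup when the thread is aborted from the dictionary walk below
        }
        finally
        {
            lock (_lock) { _threads.Remove(id); }
        }
    });
    thread.IsBackground = true;   // dies with the process if the app exits unexpectedly
    thread.Start();
}

public void AbortAll()
{
    lock (_lock)
    {
        foreach (var t in _threads.Values)
        {
            if (t.IsAlive)
                t.Abort();
        }
    }
}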
