C#: What is thread polling?

What does it mean when one says that no polling is allowed when implementing your thread solution, because it's wasteful, it has latency, and it's non-deterministic? Threads should not use polling to signal each other.
EDIT
Based on your answers so far, I believe my threading implementation (taken from: http://www.albahari.com/threading/part2.aspx#_AutoResetEvent) below is not using polling. Please correct me if I am wrong.
using System;
using System.Threading;
using System.Collections.Generic;
class ProducerConsumerQueue : IDisposable
{
    EventWaitHandle _wh = new AutoResetEvent(false);
    Thread _worker;
    readonly object _locker = new object();
    Queue<string> _tasks = new Queue<string>();

    public ProducerConsumerQueue()
    {
        _worker = new Thread(Work);
        _worker.Start();
    }

    public void EnqueueTask(string task)
    {
        lock (_locker) _tasks.Enqueue(task);
        _wh.Set();
    }

    public void Dispose()
    {
        EnqueueTask(null);    // Signal the consumer to exit.
        _worker.Join();       // Wait for the consumer's thread to finish.
        _wh.Close();          // Release any OS resources.
    }

    void Work()
    {
        while (true)
        {
            string task = null;
            lock (_locker)
                if (_tasks.Count > 0)
                {
                    task = _tasks.Dequeue();
                    if (task == null) return;
                }
            if (task != null)
            {
                Console.WriteLine("Performing task: " + task);
                Thread.Sleep(1000);   // simulate work...
            }
            else
                _wh.WaitOne();        // No more tasks - wait for a signal
        }
    }
}

Your question is very unclear, but typically "polling" refers to periodically checking for a condition, or sampling a value. For example:
while (true)
{
    Task task = GetNextTask();
    if (task != null)
    {
        task.Execute();
    }
    else
    {
        Thread.Sleep(5000); // Avoid tight-looping
    }
}
Just sleeping is a relatively inefficient way of doing this - it's better if there's some coordination so that the thread can wake up immediately when something interesting happens, e.g. via Monitor.Wait/Pulse or Manual/AutoResetEvent... but depending on the context, that's not always possible.
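For contrast, here is a minimal sketch of the same loop driven by an AutoResetEvent instead of a fixed sleep; GetNextTask and the Task type are the same hypothetical pieces as in the snippet above:
AutoResetEvent workAvailable = new AutoResetEvent(false);

// Consumer loop: instead of sleeping for a fixed interval, block until a
// producer calls workAvailable.Set(), so the thread wakes up immediately.
while (true)
{
    Task task = GetNextTask();      // hypothetical, as above
    if (task != null)
    {
        task.Execute();
    }
    else
    {
        workAvailable.WaitOne();    // no work - wait for a signal
    }
}

// Producer side, after making a new task available:
// workAvailable.Set();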
In some contexts you may not want the thread to actually sleep - you may want it to become available for other work. For example, you might use a Timer of one sort or other to periodically poll a mailbox to see whether there's any incoming mail - but you don't need the thread to actually be sleeping when it's not checking; it can be reused by another thread-pool task.
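A sketch of that timer-based approach, assuming a hypothetical CheckMailbox method; the callback borrows a thread-pool thread only while it is actually checking:
using System;
using System.Threading;

class MailPoller
{
    private Timer _timer;

    public void Start()
    {
        // Check immediately, then every 60 seconds; no thread is blocked
        // between callbacks - each check borrows a thread-pool thread.
        _timer = new Timer(state => CheckMailbox(), null,
                           TimeSpan.Zero, TimeSpan.FromSeconds(60));
    }

    public void Stop()
    {
        _timer.Dispose();
    }

    private void CheckMailbox()
    {
        // Hypothetical: look for incoming mail here.
    }
}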

Here you go: check out this website:
http://msdn.microsoft.com/en-us/library/dsw9f9ts%28VS.71%29.aspx
Synchronization Techniques
There are two approaches to synchronization, polling and using synchronization objects. Polling repeatedly checks the status of an asynchronous call from within a loop. Polling is the least efficient way to manage threads because it wastes resources by repeatedly checking the status of the various thread properties.
For example, the IsAlive property can be used when polling to see if a thread has exited. Use this property with caution because a thread that is alive is not necessarily running. You can use the thread's ThreadState property to get more detailed information about a thread's status. Because threads can be in more than one state at any given time, the value stored in ThreadState can be a combination of the values in the System.Threading.ThreadState enumeration. Consequently, you should carefully check all relevant thread states when polling. For example, if a thread's state indicates that it is not Running, it may be done. On the other hand, it may be suspended or sleeping.
Waiting for a Thread to Finish
The Thread.Join method is useful for determining if a thread has completed before starting another task. The Join method waits a specified amount of time for a thread to end. If the thread ends before the timeout, Join returns True; otherwise it returns False. For information on Join, see Thread.Join Method.
Polling sacrifices many of the advantages of multithreading in return for control over the order that threads run. Because it is so inefficient, polling is generally not recommended. A more efficient approach would use the Join method to control threads. Join causes a calling procedure to wait either until a thread is done or until the call times out if a timeout is specified. The name, join, is based on the idea that creating a new thread is a fork in the execution path. You use Join to merge separate execution paths into a single thread again.
One point should be clear: Join is a synchronous or blocking call. Once you call Join or a wait method of a wait handle, the calling procedure stops and waits for the thread to signal that it is done.
Sub JoinThreads()
    Dim Thread1 As New System.Threading.Thread(AddressOf SomeTask)
    Thread1.Start()
    Thread1.Join() ' Wait for the thread to finish.
    MsgBox("Thread is done")
End Sub
These simple ways of controlling threads, which are useful when you are managing a small number of threads, are difficult to use with large projects. The next section discusses some advanced techniques you can use to synchronize threads.
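For reference, a rough C# equivalent of the quoted VB sample, using the Join overload that takes a timeout; SomeTask is a placeholder for your own thread method:
// SomeTask is a placeholder for your own thread method.
Thread thread1 = new Thread(SomeTask);
thread1.Start();

// Wait up to five seconds; this Join overload returns true if the thread
// finished within the timeout and false otherwise.
if (thread1.Join(TimeSpan.FromSeconds(5)))
    Console.WriteLine("Thread is done");
else
    Console.WriteLine("Still running after 5 seconds");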
Hope this helps.
PK

Polling can be used in reference to the four asynchronous patterns .NET uses for delegate execution.
The 4 types (I've taken these descriptions from this well explained answer) are:
Polling: waiting in a loop for IAsyncResult.IsCompleted to be true
I'll call you
You call me
I don't care what happens (fire and forget)
So for an example of 1:
Action<IAsyncResult> myAction = (IAsyncResult ar) =>
{
    // Send Nigerian Prince emails
    Console.WriteLine("Starting task");
    Thread.Sleep(2000);
    // Finished
    Console.WriteLine("Finished task");
};

IAsyncResult result = myAction.BeginInvoke(null, null, null);
while (!result.IsCompleted)
{
    // Do something while you wait
    Console.WriteLine("I'm waiting...");
}
There are alternative ways of polling, but in general it means "Are we there yet?", "Are we there yet?", "Are we there yet?"
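For comparison, pattern 2 ("I'll call you") avoids the loop entirely by passing an AsyncCallback to BeginInvoke (a sketch; delegate BeginInvoke/EndInvoke is .NET Framework only):
Action work = () =>
{
    Console.WriteLine("Starting task");
    Thread.Sleep(2000);
    Console.WriteLine("Finished task");
};

// The callback runs on a thread-pool thread when the delegate completes,
// so nothing has to sit in a loop checking IsCompleted.
work.BeginInvoke(ar =>
{
    work.EndInvoke(ar);   // always pair BeginInvoke with EndInvoke
    Console.WriteLine("Callback: task completed");
}, null);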

What does it mean when one says no polling is allowed when implementing your thread solution since it's wasteful, it has latency and it's non-deterministic. Threads should not use polling to signal each other.
I would have to see the context in which this statement was made to express an opinion on it either way. However, taken as-is it is patently false. Polling is a very common and very accepted strategy for signaling threads.
Pretty much all lock-free thread signaling strategies use polling in some form or another. This is clearly evident in how these strategies typically spin around in a loop until a certain condition is met.
The most frequently used scenario is the case of signaling a worker thread that it is time to terminate. The worker thread will periodically poll a bool flag at safe points to see if a shutdown was requested.
private volatile bool shutdownRequested;

void WorkerThread()
{
    while (true)
    {
        // Do some work here.

        // This is a safe point, so see if a shutdown was requested.
        if (shutdownRequested) break;

        // Do some more work here.
    }
}
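The requesting side is then just an assignment; because the field is volatile, the worker will observe the new value at its next check:
public void RequestShutdown()
{
    shutdownRequested = true;
}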

Related

killing a long running thread that is blocking on another child process to end

So, a little background. I have a program that creates a child process that runs long term and does some processing that we don't really care about for this question. It exists, and it needs to keep existing. So after starting that child process I start a thread that watches that child process and blocks waiting for it to end via Process.WaitForExit(); if it ends, the thread restarts the child process and then waits again.
Now the problem is, how do I gracefully shut all of this down? If I kill the child process first, the thread waiting on it will spin it up again, so I know that the watcher thread needs to be killed first. I have been doing this with Thread.Abort(), then catching the ThreadAbortException and returning, which ends the watcher thread, and then I kill my child process. But I have been told that Thread.Abort() should be avoided at all costs and is possibly no longer supported in .NET Core?
So my question is: why is Thread.Abort() so dangerous if I am catching the ThreadAbortException? And what is the best practice for immediately killing that thread so it doesn't have a chance to spin up the child process again during shutdown?
What you are looking for is a way to communicate across threads. There are multiple ways to do this, but they all have specific conditions under which they apply.
For example, a mutex or a semaphore is available across processes, while events or wait handles are specific to a given process, etc. Once you know the details of these you can use them to send a signal from one thread to another.
A simple setup for your requirement could be:
Create a reset event before spawning any of your threads.
Let the child thread begin. In your parent, wait on the reset event that you have created.
Let the child thread set the event when it has finished.
Once the wait in your parent thread completes, you can take further action, such as kicking off the thread again and waiting on it, or simply cleaning up and exiting. (A minimal sketch of this handshake is shown after this list.)
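A minimal sketch of that handshake, assuming a ManualResetEventSlim and a simple volatile stop flag as the signal to the worker (both names are placeholders):
using System;
using System.Threading;

class Program
{
    // Signaled by the worker once it has left its loop and finished cleaning up.
    static readonly ManualResetEventSlim workerDone = new ManualResetEventSlim(false);
    static volatile bool stopRequested;

    static void Worker()
    {
        while (!stopRequested)
        {
            // Watch the child process / do the real work here.
            Thread.Sleep(100);
        }
        workerDone.Set();          // tell the parent we are done
    }

    static void Main()
    {
        new Thread(Worker) { IsBackground = true }.Start();

        // ... later, during shutdown:
        stopRequested = true;      // ask the worker to stop
        workerDone.Wait();         // block until the worker confirms
        // Now it is safe to stop the child process and exit.
    }
}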
Thread.Abort is an unclean way of finishing your processing. If you read the Microsoft documentation here - https://learn.microsoft.com/en-us/dotnet/api/system.threading.thread.abort?view=net-6.0 - the remarks clearly tell you that you can't be sure what state your thread's execution was in. Your thread may not get the opportunity to follow up with important clean-up tasks, such as releasing resources that it no longer requires.
This can also lead to deadlock if you have more complicated constructs in place, such as the thread being aborted from within a protected region of code, such as a catch block or a finally block. If the thread that calls Abort holds a lock that the aborted thread is waiting on, a deadlock can occur.
The key thing to remember in multithreading is that it is your responsibility to give the logic a clean way of reaching completion and finishing the thread's execution.
Please note that the steps suggested above are one way of doing it. Depending on your requirements it can be restructured/improved further. For example, if you are spawning another process, you will require kernel-level objects such as a mutex or semaphore. Objects like events or flags can't work across processes.
Read here - https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives for more information.
As mentioned by others, Thread.Abort has major issues, and should be avoided if at all possible. It can raise the exception at any point in the code, in a possibly completely unexpected location, and possibly leave data in a highly corrupted state.
In this instance, it's entirely unnecessary.
You should change the waiting thread to use async instead. For example, you can do something like this.
static CancellationTokenSource cancel;

static async Task RunProcessWithRestart()
{
    using (cancel = new CancellationTokenSource())
    {
        try
        {
            while (true)
            {
                using (var process = CreateMyProcessAndStart())
                {
                    await process.WaitForExitAsync(cancel.Token);
                }
            }
        }
        catch (OperationCanceledException)
        {
        }
    }
}

public static void StartWaitForProcess()
{
    Task.Run(RunProcessWithRestart);
}

public static void ShutdownWaitForProcess()
{
    cancel.Cancel();
}
An alternative, which doesn't require calling Cancel() from a separate shutdown function, is to subscribe to the AppDomain.ProcessExit event.
static async Task RunProcessWithRestart()
{
    using var cancel = new CancellationTokenSource();
    AppDomain.ProcessExit += (s, e) => cancel.Cancel();
    try
    {
        while (true)
        {
            using (var process = CreateMyProcessAndStart())
            {
                await process.WaitForExitAsync(cancel.Token);
            }
        }
    }
    catch (OperationCanceledException)
    {
    }
}

public static void StartWaitForProcess()
{
    Task.Run(RunProcessWithRestart);
}

How do I find the other threads that send back the signal when one thread calls WaitOne?

One of the things I'm having a hard time understanding in multi-threaded programming is the fact that when one thread reaches a line that calls WaitOne(), how do I know which other threads are involved? Where or how can I find (or understand) how the WaitHandle receives the signal? For example, I'm looking at this code right now:
private void RunSync(object state, ElapsedEventArgs elapsedEventArgs)
{
    _mutex.WaitOne();
    using (var sync = GWSSync.BuildSynchronizer(_log))
    {
        try
        {
            sync.Syncronize();
        }
        catch (Exception ex)
        {
            _log.Write(string.Format("Error during synchronization : {0}", ex));
        }
    }
    _mutex.ReleaseMutex();

    _syncTimer.Interval = TimeBeforeNextSync().TotalMilliseconds;
    _syncTimer.Start();
}
There are a few methods like this in the file (i.e. RunThis(), RunThat()). These methods run inside a Windows service and are called when a Timer elapses. Each of these methods is called using a different Timer and set up like this:
//Synchro
var timeBeforeFirstSync = TimeBeforeNextSync();
_syncTimer = new System.Timers.Timer(timeBeforeFirstSync.TotalMilliseconds);
_syncTimer.AutoReset = false;
_syncTimer.Elapsed += RunSync;
_syncTimer.Start();
I understand that when the Timer elapses, the RunSync method will run. But when it hits the WaitOne() line, the thread is blocked. But who is it waiting for? Which "other" thread will send the signal?
WaitHandle is an abstraction, as stated in the documentation:
Encapsulates operating system–specific objects that wait for exclusive access to shared resources.
You don't know which other threads are involved, but you do know which other code is involved by checking the usage of the handle (_mutex in your case). Every WaitHandle-derived class inherits WaitOne, but what happens after a successful wait, and how the handle gets signaled, is specific to the derived class. For instance, in your example _mutex is most probably a Mutex, so WaitOne acts like "wait until it's free and take ownership" while ReleaseMutex acts like "release ownership and signal". With that in mind, it should be obvious what all these methods do - ensuring that while RunThis is running you cannot RunThat, and vice versa.
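To make that concrete, here is a small sketch of two timer callbacks sharing one Mutex (the method bodies are placeholders); whichever one calls WaitOne second simply blocks until the first calls ReleaseMutex:
private static readonly Mutex _mutex = new Mutex();

static void RunThis(object sender, System.Timers.ElapsedEventArgs e)
{
    _mutex.WaitOne();               // blocks if the other callback owns the mutex
    try
    {
        Console.WriteLine("RunThis has exclusive access");
    }
    finally
    {
        _mutex.ReleaseMutex();      // releasing is what "signals" a waiting thread
    }
}

static void RunThat(object sender, System.Timers.ElapsedEventArgs e)
{
    _mutex.WaitOne();
    try
    {
        Console.WriteLine("RunThat has exclusive access");
    }
    finally
    {
        _mutex.ReleaseMutex();
    }
}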

Explanation about obtaining locks

I have been coding with C# for a good little while, but this locking sequence does not make any sense to me. My understanding of locking is that once a lock is obtained with lock(object), the code has to exit the lock scope to unlock the object.
This brings me to the question at hand. I cut out the code below, which happens to appear in an animation class in my code. The way the method works is that settings are passed to the method and modified, and then passed to another overloaded method. That other overloaded method will pass all the information to another thread to handle and actually animate the object in some way. When the animation completes, the other thread calls the OnComplete method. This actually all works perfectly, but I do not understand why!
The other thread is able to call OnComplete, obtain a lock on the object and signal to the original thread that it should continue. Should the code not freeze at this point since the object is held in a lock on another thread?
So this is not a need for help in fixing my code, it is a need for clarification on why it works. Any help in understanding is appreciated!
public void tween(string type, object to, JsDictionaryObject properties)
{
    // Settings class that has a delegate field OnComplete.
    Tween.Settings settings = new Tween.Settings();
    object wait_object = new object();

    settings.OnComplete = () =>
    {
        // Why are we able to obtain a lock when the wait_object already has a lock below?
        lock (wait_object)
        {
            // Let the waiting thread know it is ok to continue now.
            Monitor.Pulse(wait_object);
        }
    };

    // Send settings to other thread and start the animation.
    tween(type, null, to, settings);

    // Obtain a lock to ensure that the wait object is in synchronous code.
    lock (wait_object)
    {
        // Wait here if the script tells us to. Time out with total duration time + one second to ensure that we actually DO progress.
        Monitor.Wait(wait_object, settings.Duration + 1000);
    }
}
As documented, Monitor.Wait releases the monitor it's called with. So by the time you try to acquire the lock in OnComplete, there won't be another thread holding the lock.
When the monitor is pulsed (or the call times out) it reacquires it before returning.
From the docs:
Releases the lock on an object and blocks the current thread until it reacquires the lock.
I wrote an article about this: Wait and Pulse demystified
There's more going on than meets the eye!
Remember that:

lock (someObj)
{
    int uselessDemoCode = 3;
}

is equivalent to:

Monitor.Enter(someObj);
try
{
    int uselessDemoCode = 3;
}
finally
{
    Monitor.Exit(someObj);
}
Actually there are variants of this that vary from version to version.
Already, it should be clear that we could mess with this with:
lock (someObj)
{
    Monitor.Exit(someObj);
    // Don't have the lock here!
    Monitor.Enter(someObj);
    // Have the lock again!
}
You might wonder why someone would do this, and well, so would I, it's a silly way to make code less clear and less reliable, but it does come into play when you want to use Pulse and Wait, which the version with explicit Enter and Exit calls makes clearer. Personally, I prefer to use them over lock if I'm going to Pulse or Wait for that reason; I find that lock stops making code cleaner and starts making it opaque.
I tend to avoid this style, but, as Jon already said, Monitor.Wait releases the monitor it's called with, so there is no locking at that point.
But the example is slightly flawed IMHO. The problem is, generally, that if Monitor.Pulse gets called before Monitor.Wait, the waiting thread will never be signaled. Having that in mind, the author decided to "play safe" and used an overload which specified a timeout. So, putting aside the unnecessary acquiring and releasing of the lock, the code just doesn't feel right.
To explain this better, consider the following modification:
public static void tween()
{
    object wait_object = new object();
    Action OnComplete = () =>
    {
        lock (wait_object)
        {
            Monitor.Pulse(wait_object);
        }
    };

    // let's say that a background thread
    // finished really quickly here
    OnComplete();

    lock (wait_object)
    {
        // this will wait for a Pulse indefinitely
        Monitor.Wait(wait_object);
    }
}
If OnComplete gets called before the lock is acquired in the main thread, and there is no timeout, we will get a deadlock. In your case, Monitor.Wait will simply hang for a while and continue after a timeout, but you get the idea.
That is why I usually recommend a simpler approach:
public static void tween()
{
    using (AutoResetEvent evt = new AutoResetEvent(false))
    {
        Action OnComplete = () => evt.Set();

        // let's say that a background thread
        // finished really quickly here
        OnComplete();

        // event is properly set even in this case
        evt.WaitOne();
    }
}
To quote MSDN:
The Monitor class does not maintain state indicating that the Pulse method has been called. Thus, if you call Pulse when no threads are waiting, the next thread that calls Wait blocks as if Pulse had never been called. If two threads are using Pulse and Wait to interact, this could result in a deadlock.
Contrast this with the behavior of the AutoResetEvent class: If you signal an AutoResetEvent by calling its Set method, and there are no threads waiting, the AutoResetEvent remains in a signaled state until a thread calls WaitOne, WaitAny, or WaitAll. The AutoResetEvent releases that thread and returns to the unsignaled state.

How do I block until a thread is returned to the pool?

As part of a windows service
I'm accepting incoming socket connections using
myListener.BeginAcceptSocket(acceptAsync, null)
The acceptAsync function executes on a separate thread (just as expected).
When the service is requested to shutdown, I "signal" the threads that accepted and are currently working on the sockets, to finish up.
After signaling each thread to end, I need to block until they are all done. I have a list of threads, which I thought I could iterate through and Join each thread until they were all done.
However, it seems that these threads don't end but return to the pool, so the Join will wait forever.
How do I block until a thread is returned to the pool?
You shouldn't use Join in this case. Rather, you should use a series of WaitHandles (specifically, an AutoResetEvent or ManualResetEvent) which your threads will signal when they are done with their work.
You would then call the static WaitAll method on the WaitHandle class, passing all of the events to wait on.
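A minimal sketch of that idea, assuming each accept handler registers its own ManualResetEvent and sets it as its last action:
using System;
using System.Collections.Generic;
using System.Threading;

class ShutdownCoordinator
{
    private readonly List<ManualResetEvent> _doneEvents = new List<ManualResetEvent>();

    // Call once per worker before starting it; the worker keeps the returned handle.
    public ManualResetEvent RegisterWorker()
    {
        var done = new ManualResetEvent(false);
        lock (_doneEvents) _doneEvents.Add(done);
        return done;
    }

    // Each worker calls done.Set() as its very last action.

    // During shutdown, after signaling the workers to finish up:
    public void WaitForWorkers()
    {
        ManualResetEvent[] handles;
        lock (_doneEvents) handles = _doneEvents.ToArray();
        if (handles.Length > 0)
            WaitHandle.WaitAll(handles);   // note: WaitAll is limited to 64 handles per call
    }
}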
The canonical pattern for doing this is to use a CountdownEvent. The main thread will increment the event to indicate that it is participating and the worker threads will do the same once they start. After the worker threads have finished they will decrement the event. When the main thread is ready to wait for completion it should decrement the event and then wait on it. If you are not using .NET 4.0 then you can get an implementation of a countdown event from part 4 of Joe Albahari's threading ebook.
public class Example
{
    // The initial count of 1 represents the main thread's own participation.
    private CountdownEvent m_Finisher = new CountdownEvent(1);

    public void MainThread()
    {
        // Your stuff goes here.
        // myListener.BeginAcceptSocket(OnAcceptSocket, null);

        m_Finisher.Signal();   // The main thread is done; remove its count.
        m_Finisher.Wait();     // Block until every worker has signaled as well.
    }

    private void OnAcceptSocket(object state)
    {
        m_Finisher.AddCount();
        try
        {
            // Your stuff goes here.
        }
        finally
        {
            m_Finisher.Signal();
        }
    }
}
The best way would be to change acceptAsync so that it signals on a semaphore; your main thread can then wait on that semaphore.
You don't have a lot of access to or control over ThreadPool threads.
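To make the semaphore suggestion concrete, here is a sketch using SemaphoreSlim as a completion counter; the member names and the worker-counting scheme are assumptions for illustration:
using System.Threading;

class AcceptShutdown
{
    private readonly SemaphoreSlim _completions = new SemaphoreSlim(0);
    private int _activeWorkers;

    // Called by each accept callback as soon as it starts handling a socket.
    public void WorkerStarted()
    {
        Interlocked.Increment(ref _activeWorkers);
    }

    // Called by acceptAsync as its very last step.
    public void WorkerFinished()
    {
        _completions.Release();
    }

    // Called by the service shutdown code after asking the workers to finish up.
    public void WaitForAll()
    {
        int workers = Volatile.Read(ref _activeWorkers);
        for (int i = 0; i < workers; i++)
        {
            _completions.Wait();    // one Wait per outstanding worker
        }
    }
}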

Starting multiple threads and keeping track of them from my .NET application

I would like to start x number of threads from my .NET application, and I would like to keep track of them, as I will need to terminate them manually or when my application closes later on.
Example ==> Start Thread Alpha, Start Thread Beta .. then at any point in my application I should be able to say Terminate Thread Beta ..
What is the best way to keep track of opened threads in .NET, and what do I need to know (an id?) about a thread to terminate it?
You could save yourself the donkey work and use this Smart Thread Pool. It provides a unit of work system which allows you to query each thread's status at any point, and terminate them.
If that is too much bother, then as mentioned an IDictionary<string,Thread> is probably the simplest solution. Or even simpler: give each of your threads a name, and use an IList<Thread>:
public class MyThreadPool
{
    private IList<Thread> _threads;
    private readonly int MAX_THREADS = 25;

    public MyThreadPool()
    {
        _threads = new List<Thread>();
    }

    public void LaunchThreads()
    {
        for (int i = 0; i < MAX_THREADS; i++)
        {
            Thread thread = new Thread(ThreadEntry);
            thread.IsBackground = true;
            thread.Name = string.Format("MyThread{0}", i);

            _threads.Add(thread);
            thread.Start();
        }
    }

    public void KillThread(int index)
    {
        string id = string.Format("MyThread{0}", index);
        foreach (Thread thread in _threads)
        {
            if (thread.Name == id)
                thread.Abort();
        }
    }

    void ThreadEntry()
    {
    }
}
You can of course get a lot more involved and complicated with it. If killing your threads isn't time sensitive (for example if you don't need to kill a thread in 3 seconds in a UI) then a Thread.Join() is a better practice.
And if you haven't already read it, then Jon Skeet has this good discussion and solution for the "don't use abort" advice that is common on SO.
You can create a Dictionary of threads and assign them id's, like:
Dictionary<string, Thread> threads = new Dictionary<string, Thread>();

for (int i = 0; i < numOfThreads; i++)
{
    string id = "Thread" + i;                   // any name/id you want to assign
    Thread thread = new Thread(new ThreadStart(MethodToExe));
    thread.Name = id;
    thread.Start();                             // if you wish to start them straight away and call MethodToExe
    threads.Add(id, thread);
}
If you don't want to save threads against an Id you can use a list and later on just enumerate it to kill threads.
And when you wish to terminate them, you can abort them. Better to have some condition in your MethodToExe that allows the method to return, letting the thread terminate gracefully. Something like:
void MethodToExe()
{
    while (_isRunning)
    {
        // your code here //
        if (!_isRunning)
        {
            break;
        }
        // your code here //
    }
}
To abort, you can enumerate the dictionary and call Thread.Abort(). Be ready to catch ThreadAbortException.
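If you do abort, the worker can trap the exception like this (a sketch; _isRunning is the same hypothetical flag as above):
void MethodToExe()
{
    try
    {
        while (_isRunning)
        {
            // your code here //
        }
    }
    catch (ThreadAbortException)
    {
        // Do any last-minute cleanup here; the exception is re-thrown automatically
        // when this catch block ends unless Thread.ResetAbort() is called.
    }
}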
I asked a similar questions and received a bunch of good answers: Shutting down a multithreaded application
Note: my question did not require a graceful exit, but people still recommended that I gracefully exit from the loop of each thread.
The main thing to remember is that if you want to avoid having your threads prevent your process from terminating you should set all your threads to background:
Thread thread = new Thread(new ThreadStart(testObject.RunLoop));
thread.IsBackground = true;
thread.Start();
The preferred way to start and manage threads is in a ThreadPool, but just about any container out there can be used to keep a reference to your threads. Your threads should always have a flag that will tell them to terminate and they should continually check it.
Furthermore, for better control you can supply your threads with a CountdownLatch: whenever a thread is exiting its loop it will signal on a CountdownLatch. Your main thread will call the CountdownLatch.Wait() method and it will block until all the threads have signaled... this allows you to properly cleanup and ensures that all your threads have shutdown before you start cleaning up.
public class CountdownLatch
{
    private int m_remain;
    private EventWaitHandle m_event;

    public CountdownLatch(int count)
    {
        Reset(count);
    }

    public void Reset(int count)
    {
        if (count < 0)
            throw new ArgumentOutOfRangeException();

        m_remain = count;
        m_event = new ManualResetEvent(false);
        if (m_remain == 0)
        {
            m_event.Set();
        }
    }

    public void Signal()
    {
        // The last thread to signal also sets the event.
        if (Interlocked.Decrement(ref m_remain) == 0)
            m_event.Set();
    }

    public void Wait()
    {
        m_event.WaitOne();
    }
}
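A brief usage sketch for the latch above (the worker count and work body are placeholders):
static void RunAndWaitForWorkers()
{
    const int workerCount = 5;
    CountdownLatch latch = new CountdownLatch(workerCount);

    for (int i = 0; i < workerCount; i++)
    {
        Thread t = new Thread(() =>
        {
            // ... worker loop, checking its own termination flag ...
            latch.Signal();        // last thing each worker does
        });
        t.IsBackground = true;
        t.Start();
    }

    // Tell the workers to stop (set their flags), then:
    latch.Wait();                  // blocks until every worker has signaled
    // Safe to clean up shared resources now.
}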
It's also worth mentioning that the Thread.Abort() method does some strange things:
When a thread calls Abort on itself, the effect is similar to throwing an exception; the ThreadAbortException happens immediately, and the result is predictable. However, if one thread calls Abort on another thread, the abort interrupts whatever code is running. There is also a chance that a static constructor could be aborted. In rare cases, this might prevent instances of that class from being created in that application domain. In the .NET Framework versions 1.0 and 1.1, there is a chance the thread could abort while a finally block is running, in which case the finally block is aborted.
The thread that calls Abort might block if the thread that is being aborted is in a protected region of code, such as a catch block, finally block, or constrained execution region. If the thread that calls Abort holds a lock that the aborted thread requires, a deadlock can occur.
After creating your thread, you can set its Name property. Assuming you store it in some collection, you can access it conveniently via LINQ in order to retrieve (and abort) it:
var myThread = (from thread in threads
                where thread.Name == "myThread"
                select thread).FirstOrDefault();
if (myThread != null)
    myThread.Abort();
Wow, there are so many answers..
You can simply use an array to hold the threads; this will only work if access to the array is sequential, but if you have another thread accessing the array, you will need to synchronize access.
You can use the thread pool, but the thread pool is very limited and can only hold a fixed number of threads.
As mentioned above, you can create your own thread pool, which in .NET v4 becomes much easier with the introduction of safe collections.
You can manage them by holding a list of mutex objects which determine when those threads should finish; the threads query the mutex each time they run before doing anything else, and if it's set, they terminate. You can manage the mutexes from anywhere, and since mutexes are by definition thread-safe, it's fairly easy.
I can think of another 10 ways, but those seem to work. Let me know if they don't fit your needs.
Depends on how sophisticated you need it to be. You could implement your own type of ThreadPool with helper methods etc. However, I think it's as simple as just maintaining a list/array and adding/removing the threads to/from the collection accordingly.
You could also use a Dictionary collection and use your own particular type of key to retrieve them, i.e. Guids/strings.
As you start each thread, put its ManagedThreadId into a Dictionary as the key and the thread instance as the value. Use a callback from each thread to return its ManagedThreadId, which you can use to remove the thread from the Dictionary when it terminates. You can also walk the Dictionary to abort threads if needed. Make the threads background threads so that they terminate if your app terminates unexpectedly.
You can use a separate callback to signal threads to continue or halt, which reflects a flag set by your UI, for a graceful exit. You should also trap the ThreadAbortException in your threads so that you can do any cleanup if you have to abort threads instead.
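A rough sketch of that bookkeeping, with the work body and the collection name as placeholders:
using System;
using System.Collections.Generic;
using System.Threading;

class ThreadTracker
{
    private readonly Dictionary<int, Thread> _threads = new Dictionary<int, Thread>();
    private readonly object _lock = new object();

    public void StartWorker(Action work)
    {
        Thread thread = new Thread(() =>
        {
            try
            {
                work();
            }
            finally
            {
                // Callback-style cleanup: the thread removes itself when it finishes.
                lock (_lock) _threads.Remove(Thread.CurrentThread.ManagedThreadId);
            }
        });
        thread.IsBackground = true;   // dies with the process if the app exits unexpectedly
        lock (_lock) _threads.Add(thread.ManagedThreadId, thread);
        thread.Start();
    }

    public void AbortAll()
    {
        lock (_lock)
        {
            foreach (Thread t in _threads.Values)
                t.Abort();            // workers should catch ThreadAbortException to clean up
        }
    }
}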
