Unexpected behavior accessing objects from different threads in C#

I'm currently seeing some odd behavior while working on a multithreaded Windows service. The issue I'm having is that some objects appear to be resetting when accessed from different threads.
Let me demonstrate with some code (simplified to explain the problem).
First, I have a class that launches threads based on methods in another class (using Ninject to get the classes) and then later stops them:
public class ContainerService : ServiceBase
{
private IEnumerable<IRunnableBatch> _services;
public void start()
{
_services = ServiceContainer.SvcContainer.Kernel.GetAll<IRunnableBatch>();
foreach (IRunnableBatch s in _services)
{
s.run();
}
}
public void stop()
{
foreach (IRunnableBatch s in _services)
{
s.stop();
}
}
}
Now, within the run() method of an IRunnableBatch class I have something like this:
public class Batch : IRunnableBatch
{
//this class is used for starting and stopping threads as well as tracking
//threads to restart them should they stop
protected IWatchdog _watchdog;
... code omitted for brevity but the watchdog class is injected by Ninject
in the constructor ...
public void run()
{
_watchdog.startThreads(this);
}
public void stop()
{
_watchdog.stopThreads();
}
}
And here's the code for the Watchdog class:
public class Watchdog : IWatchdog
{
private ILog _logger;
private Dictionary<int, MethodInfo> _batches = new Dictionary<int, MethodInfo>();
private Dictionary<int, Thread> _threads = new Dictionary<int, Thread>();
private IRunnableBatch _service;
private Thread _watcher;
private Dictionary<int, ThreadFailure> _failureCounts = new Dictionary<int, ThreadFailure>();
private bool _runWatchdog = true;
#region IWatchdog Members
/**
* This function will scan an IRunnableBatch for the custom attribute
* "BatchAttribute" and use that to determine what methods to run when
* a batch needs to be launched
*/
public void startThreads(IRunnableBatch s)
{
_service = s;
//scan service for runnable methods
Type t = s.GetType();
MethodInfo[] methods = t.GetMethods();
foreach (MethodInfo m in methods)
{
object[] attrs = m.GetCustomAttributes(typeof(BatchAttribute), true);
if (attrs != null && attrs.Length >= 1)
{
BatchAttribute b = attrs[0] as BatchAttribute;
_batches.Add(b.Batch_Number, m);
}
}
//loop through and see if the batches need to run
foreach (KeyValuePair<int, MethodInfo> kvp in _batches)
{
startThread(kvp.Key, kvp.Value);
}
//check if the watcher thread is running. If not, start it
if (_watcher == null || !_watcher.IsAlive)
{
_watcher = new Thread(new ThreadStart(watch));
_watcher.Start();
_logger.Info("Watcher thread started.");
}
}
private void startThread(int key, MethodInfo method)
{
if (_service.shouldBatchRun(key))
{
Thread thread = new Thread(new ThreadStart(() => method.Invoke(_service, null)));
try
{
thread.Start();
_logger.Info("Batch " + key + " (" + method.Name + ") has been started.");
if (_threads.ContainsKey(key))
{
_threads[key] = thread;
}
else
{
_threads.Add(key, thread);
}
}
catch (Exception ex)
{
//mark this as the first problem starting the thread.
if (ex is System.Threading.ThreadStateException || ex is System.OutOfMemoryException)
{
_logger.Warn("Unable to start thread: " + method.Name, ex);
ThreadFailure tf = new ThreadFailure();
tf.Count = 1;
_failureCounts.Add(key, tf);
}
else { throw; }
}
}
}
public void stopThreads()
{
_logger.Info("stopThreads called");
//stop the watcher thread first
if (_watcher != null && _watcher.IsAlive)
{
_logger.Info("Stopping watcher thread.");
_runWatchdog = false;
_watcher.Join();
_logger.Info("Watcher thread stopped.");
}
int stoppedCount = 0;
_logger.Info("There are " + _threads.Count + " batches to stop.");
while (stoppedCount < _threads.Count)
{
ArrayList stopped = new ArrayList();
foreach (KeyValuePair<int, Thread> kvp in _threads)
{
if (kvp.Value.IsAlive)
{
_service.stopBatch(kvp.Key);
kvp.Value.Join(); //wait for thread to terminate
_logger.Info("Batch " + kvp.Key.ToString() + " stopped");
}
else
{
_logger.Info("Batch " + kvp.Key + " (" + _batches[kvp.Key].Name + ") has been stopped");
stoppedCount++;
stopped.Add(kvp.Key);
}
}
foreach (int n in stopped)
{
_threads.Remove(n);
}
}
}
public void watch()
{
int numIntervals = 15 * 12; //15 minutes in 5 second intervals
while (_runWatchdog)
{
//cycle through the batches and check the matched threads.
foreach (KeyValuePair<int, MethodInfo> kvp in _batches)
{
//if they are not running
if (!_threads[kvp.Key].IsAlive)
{
//mark the thread failure and then try again.
ThreadFailure tf;
if (_failureCounts.ContainsKey(kvp.Key))
{
tf = _failureCounts[kvp.Key];
}
else
{
tf = new ThreadFailure();
}
tf.Count++;
if (tf.Count >= 8)
{
//log an error as we've been trying to start this thread for 2 hours now
_logger.Error("Unable to start the thread: " + kvp.Value.Name + " ***** NOT TRYING AGAIN UNTIL SERVICE RESTART");
}
else
{
_logger.Warn("Thread (" + kvp.Value.Name + ") found stopped... RESTARTING");
startThread(kvp.Key, kvp.Value);
}
}
}
//sleep 15 minutes and repeat.
_logger.Info("*** Watcher sleeping for 15 minutes");
for (int i = 1; i <= numIntervals; i++)
{
if (!_runWatchdog)
break;
Thread.Sleep(5000); //sleep for 5 seconds
}
_logger.Info("*** Watcher woke up.");
}
_logger.Info("Watcher thread stopping.");
}
public void setLogger(ILog l)
{
_logger = l;
}
#endregion
}
So, the main program calls ContainerService.start(), which calls IRunnableBatch.run(), which calls IWatchdog.startThreads(). The startThreads() method locates and launches all of the batch threads it finds, then launches a thread to watch the others in case they die for some reason. The calls then return all the way back up to the main function.
Now, a service simply waits for the service manager to call OnStop(), but for testing purposes I have the main thread sleep for 1 minute and then call ContainerService.stop().
After all of that explanation, I now get to the issue.... whew!!
When the main thread calls stop(), and the stop() method calls IRunnableBatch.stop(), if I have a breakpoint there and examine the _watchdog variable, I see that all of its associated member variables are set back to their initial values (no threads, no watcher thread, no batches, nothing...).
Anyone have any ideas why?

I see the problem. Read https://github.com/ninject/ninject/wiki/Multi-injection, and you'll see that GetAll returns an enumerable that activates your objects as you iterate, not a list. Therefore, in ContainerService.start, your runnable batch objects are created, and in stop, a whole new set of objects are created.
Try adding a .ToList() after your call to GetAll, or change your Ninject config so that your runnables are not transient.
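For example, materializing the enumerable in start() is enough to make stop() see the same instances. A minimal sketch of the first suggestion (it assumes a using System.Linq directive for ToList()):
public void start()
{
    // ToList() forces Ninject to activate the batches once, right here, so
    // stop() later iterates the same instances instead of a fresh transient set.
    _services = ServiceContainer.SvcContainer.Kernel.GetAll<IRunnableBatch>().ToList();
    foreach (IRunnableBatch s in _services)
    {
        s.run();
    }
}
Alternatively, change the binding scope, e.g. Bind<IRunnableBatch>().To<Batch>().InSingletonScope(), so that repeated resolutions return the same objects.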

Related

What is the best way to cancel operation anywhere in a method?

Suppose I have a long operation inside a subroutine or function and I want to be able to cancel (exit subroutine or function) immediately after a "cancellation flag" is set to true. What is the best way to do it? One way is to check the flag after each line of code but that is not very elegant.
For example:
dim _CancelFlag as boolean = false
Sub LongOperation()
dim a as integer
a = 1
if _CancelFlag = True Then
Exit Sub
End If
a = 2
if _CancelFlag = True Then
Exit Sub
End If
'And so on...
End Sub
Of course a = 1 is only for illustration purposes. Say the operation is really long, running all the way to a = 100, and it is not possible to put the steps into a loop. How can I trigger the cancellation from outside the subroutine and stop it immediately?
I was thinking of putting the sub into a BackgroundWorker or Task, but then I still have to check a CancellationToken somewhere inside the sub. Do I really have to check after each line of code?
It depends on the granularity you want to achieve: within how many seconds do you expect your method to be canceled?
If the cancellation must take place "immediately", you have to check in as many places as you can. However, just checking before and after long sub-steps of your operation is enough in the general case.
Remember that if you have to wait on handles, you have to use the appropriate overload that specifies a timeout or a cancellation token.
Additionally, you should propagate the cancellation token (or your flag) deep down into your methods to allow early detection of cancellation requests.
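As a minimal C# sketch of that idea (the step methods are placeholders, not from the original code), the token is checked between the long sub-steps and also passed down into them:
private void LongOperation(CancellationToken token)
{
    DoStepOne();
    token.ThrowIfCancellationRequested();   // cheap check between long sub-steps

    DoStepTwo(token);                       // propagate the token so the sub-step
    token.ThrowIfCancellationRequested();   // can react early as well

    DoStepThree(token);
}
The caller creates a CancellationTokenSource, passes its Token into LongOperation, and calls Cancel() on the source to request cancellation.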
I found a more elegant way to do it, although it does use a loop in the end. Please let me know if anybody has a better solution. I will also update this when I find something else.
Sub LongOperation()
dim state as integer = 0
Do while state < 100
Select Case state
Case 0
a = 1
Case 1
a = 2
Case Else
Exit do
End Select
If _CancelFlag = True Then
Exit Sub
End If
state += 1
Loop
End Sub
This is a sample Windows Forms application I created to cancel or pause a long-running task.
public partial class Form1 : Form
{
updateUI _updateGUI;
CancellationToken _cancelToken;
PauseTokenSource _pauseTokeSource;
public Form1()
{
InitializeComponent();
}
delegate void updateUI(dynamic value);
private void btnStartAsync_Click(object sender, EventArgs e)
{
_pauseTokeSource = new PauseTokenSource();
_cancelToken = default(CancellationToken);
_pauseTokeSource.onPause -= _pauseTokeSource_onPause;
_pauseTokeSource.onPause += _pauseTokeSource_onPause;
Task t = new Task(() => { LongRunning(_pauseTokeSource); }, _cancelToken);
t.Start();
}
private void _pauseTokeSource_onPause(object sender, PauseEventArgs e)
{
var message = string.Format("Task {0} at {1}", e.Paused ? "Paused" : "Resumed", DateTime.Now.ToString());
this.Invoke(_updateGUI, message);
}
private async void LongRunning(PauseTokenSource pause)
{
_updateGUI = new updateUI(SetUI);
for (int i = 0; i < 20; i++)
{
await pause.WaitWhilePausedAsync();
Thread.Sleep(500);
this.Invoke(_updateGUI, i.ToString() + " => " + txtInput.Text);
//txtOutput.AppendText(Environment.NewLine + i.ToString());
if (_cancelToken.IsCancellationRequested)
{
this.Invoke(_updateGUI, "Task cancellation requested at " + DateTime.Now.ToString());
break;
}
}
_updateGUI = null;
}
private void SetUI(dynamic output)
{
//txtOutput.AppendText(Environment.NewLine + count.ToString() + " => " + txtInput.Text);
txtOutput.AppendText(Environment.NewLine + output.ToString());
}
private void btnCancelTask_Click(object sender, EventArgs e)
{
_cancelToken = new CancellationToken(true);
}
private void btnPause_Click(object sender, EventArgs e)
{
_pauseTokeSource.IsPaused = !_pauseTokeSource.IsPaused;
btnPause.Text = _pauseTokeSource.IsPaused ? "Resume" : "Pause";
}
}
public class PauseTokenSource
{
public delegate void TaskPauseEventHandler(object sender, PauseEventArgs e);
public event TaskPauseEventHandler onPause;
private TaskCompletionSource<bool> _paused;
internal static readonly Task s_completedTask = Task.FromResult(true);
public bool IsPaused
{
get { return _paused != null; }
set
{
if (value)
{
Interlocked.CompareExchange(ref _paused, new TaskCompletionSource<bool>(), null);
}
else
{
while (true)
{
var tcs = _paused;
if (tcs == null) return;
if (Interlocked.CompareExchange(ref _paused, null, tcs) == tcs)
{
tcs.SetResult(true);
onPause?.Invoke(this, new PauseEventArgs(false));
break;
}
}
}
}
}
public PauseToken Token
{
get
{
return new PauseToken(this);
}
}
internal Task WaitWhilePausedAsync()
{
var cur = _paused;
if (cur != null)
{
onPause?.Invoke(this, new PauseEventArgs(true));
return cur.Task;
}
return s_completedTask;
}
}
public struct PauseToken
{
private readonly PauseTokenSource m_source;
internal PauseToken(PauseTokenSource source) { m_source = source; }
public bool IsPaused { get { return m_source != null && m_source.IsPaused; } }
public Task WaitWhilePausedAsync()
{
return IsPaused ?
m_source.WaitWhilePausedAsync() :
PauseTokenSource.s_completedTask;
}
}
public class PauseEventArgs : EventArgs
{
public PauseEventArgs(bool paused)
{
Paused = paused;
}
public bool Paused { get; private set; }
}
If your LongOperation() can easily be split into short operations (I assume a=1, a=2, ..., a=100 are all reasonably short), then you could wrap all the short operations into tasks, put them into a queue and process that queue, checking between the tasks whether cancellation was requested.
If LongOperation() is difficult to split, you could run LongOperation() on a separate dedicated thread and abort that thread on cancellation. Some have commented that aborting a thread is dirty and not recommended. Actually it's not that bad if properly handled. Aborting a thread just raises a ThreadAbortException within the thread method. So if there is a try-catch-finally in LongOperation() that catches and handles the exception, and the finally block properly does cleanup, closes all handles, disposes, etc., this should be OK and nothing to be afraid of.
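A minimal C# sketch of the first approach, with each short step wrapped as a delegate and cancellation checked between steps (the field a mirrors the variable from the question; the rest is illustrative):
private int a;

private void LongOperation(CancellationToken token)
{
    var steps = new Queue<Action>(new Action[]
    {
        () => a = 1,
        () => a = 2,
        // ... and so on up to a = 100
    });

    while (steps.Count > 0)
    {
        if (token.IsCancellationRequested)
            return;                 // bail out between two short steps
        var step = steps.Dequeue();
        step();                     // run the next short step
    }
}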

C# Multi-Threaded Tree traversal

I am trying to write a C# system that will traverse a tree structure using multiple threads. Another way to look at this is that the consumer of the BlockingCollection is also the producer.
The problem I am having is telling when everything is finished.
The test I really need is to see if all the threads are on the TryTake.
If they are, then everything has finished, but I cannot find a way to test for this or to wrap it with anything that would help achieve this.
The code below is a very simple example of what I have so far, but there is a condition under which it can fail: if the first thread has just passed test.TryTake(out v, -1) (pulling the last item from the collection) but has not yet executed s.Release(), and the second thread has just evaluated if (s.CurrentCount == 0 && test.Count == 0), that check could return true and incorrectly start finishing things up.
But then the first thread would continue on and try and add more to the collection.
If I could make the lines:
if (!test.TryTake(out v, -1))
break;
s.Release();
atomic then I believe this code would work. (Which is obviously not possible.)
But I cannot figure out how to fix this flaw.
class Program
{
private static BlockingCollection<int> test;
static void Main(string[] args)
{
test = new BlockingCollection<int>();
WorkClass.s = new SemaphoreSlim(2);
WorkClass w0 = new WorkClass("A");
WorkClass w1 = new WorkClass("B");
Thread t0 = new Thread(w0.WorkFunction);
Thread t1 = new Thread(w1.WorkFunction);
test.Add(10);
t0.Start();
t1.Start();
t0.Join();
t1.Join();
Console.WriteLine("Done");
Console.ReadLine();
}
class WorkClass
{
public static SemaphoreSlim s;
private readonly string _name;
public WorkClass(string name)
{
_name = name;
}
public void WorkFunction(object t)
{
while (true)
{
int v;
s.Wait();
if (s.CurrentCount == 0 && test.Count == 0)
test.CompleteAdding();
if (!test.TryTake(out v, -1))
break;
s.Release();
Console.WriteLine(_name + " = " + v);
Thread.Sleep(5);
for (int i = 0; i < v; i++)
test.Add(i);
}
Console.WriteLine("Done " + _name);
}
}
}
This can be parallelized using task parallelism. Every node in the tree is considered to be a task which may spawn sub-tasks. See Dynamic Task Parallelism for a more detailed description.
For a binary tree with 5 levels that writes each node to the console and waits for 5 milliseconds, as in your example, the ParallelWalk method could look as follows:
class Program
{
internal class TreeNode
{
internal TreeNode(int level)
{
Level = level;
}
internal int Level { get; }
}
static void Main(string[] args)
{
ParallelWalk(new TreeNode(0));
Console.Read();
}
static void ParallelWalk(TreeNode node)
{
if (node == null) return;
Console.WriteLine(node.Level);
Thread.Sleep(5);
if(node.Level > 4) return;
int nextLevel = node.Level + 1;
var t1 = Task.Factory.StartNew(
() => ParallelWalk(new TreeNode(nextLevel)));
var t2 = Task.Factory.StartNew(
() => ParallelWalk(new TreeNode(nextLevel)));
Task.WaitAll(t1, t2);
}
}
The central lines are where the tasks t1 and t2 are spawned.
By this decomposition in tasks, the scheduling is done by the Task Parallel Library and you don't have to manage a shared set of nodes anymore.
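The same decomposition works for traversing an existing tree rather than generating nodes. A hedged sketch, assuming a node type with a Children list (not part of the original code):
class Node
{
    public int Value { get; set; }
    public List<Node> Children { get; } = new List<Node>();
}

static void ParallelWalk(Node node)
{
    if (node == null) return;
    Console.WriteLine(node.Value);          // per-node work
    var tasks = node.Children
        .Select(child => Task.Factory.StartNew(() => ParallelWalk(child)))
        .ToArray();
    Task.WaitAll(tasks);                    // wait for the whole subtree
}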

c# - Waiting for 1 of 2 threads to be finished

I have a place in my code where I need to wait either for a finger to be identified on a sensor, or for the user to press a key to abort the action and return to the main menu.
I tried using something like condition variables with the Monitor and lock concepts, but when I try to alert the main thread, nothing happens.
CODE:
private static object _syncFinger = new object(); // used for syncing
private static bool AttemptIdentify()
{
// waiting for either the user cancels or a finger is inserted
lock (_syncFinger)
{
Thread tEscape = new Thread(new ThreadStart(HandleIdentifyEscape));
Thread tIdentify = new Thread(new ThreadStart(HandleIdentify));
tEscape.IsBackground = false;
tIdentify.IsBackground = false;
tEscape.Start();
tIdentify.Start();
Monitor.Wait(_syncFinger); // -> Wait part
}
// Checking the change in the locked object
if (_syncFinger is FingerData) // checking for identity found
{
Console.WriteLine("Identity: {0}", ((FingerData)_syncFinger).Guid.ToString());
}
else if(!(_syncFinger is Char)) // char - pressed a key to return
{
return false; // returns with no error
}
return true;
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
_syncFinger = new Char();
LockNotify((object)_syncFinger);
}
private static void HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
_syncFinger = temp;
LockNotify(_syncFinger);
}
private static void LockNotify(object syncObject)
{
lock(syncObject)
{
Monitor.Pulse(syncObject);
}
}
When I try to alert the main thread, nothing happens.
That's because the main thread is waiting on the monitor for the object created here:
private static object _syncFinger = new object(); // used for syncing
But each of your threads replaces that object value, and then signals the monitor for the new object. The main thread has no knowledge of the new object, and so of course signaling the monitor for that new object will have no effect on the main thread.
First, any time you create an object for the purpose of using with lock, make it readonly:
private static readonly object _syncFinger = new object(); // used for syncing
It's always the right thing to do, and it will prevent you from ever making the mistake of changing the monitored object while a thread is waiting on it.
Next, create a separate field to hold the WinBioIdentity value, e.g.:
private static WinBioIdentity _syncIdentity;
And use that to relay the result back to the main thread:
private static bool AttemptIdentify()
{
// waiting for either the user cancels or a finger is inserted
lock (_syncFinger)
{
_syncIdentity = null;
Thread tEscape = new Thread(new ThreadStart(HandleIdentifyEscape));
Thread tIdentify = new Thread(new ThreadStart(HandleIdentify));
tEscape.IsBackground = false;
tIdentify.IsBackground = false;
tEscape.Start();
tIdentify.Start();
Monitor.Wait(_syncFinger); // -> Wait part
}
// Checking the change in the locked object
if (_syncIdentity != null) // checking for identity found
{
Console.WriteLine("Identity: {0}", ((FingerData)_syncIdentity).Guid.ToString());
return true;
}
return false; // returns with no error
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
LockNotify((object)_syncFinger);
}
private static void HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
_syncIdentity = temp;
LockNotify(_syncFinger);
}
All that said, you should prefer to use the modern async/await idiom for this:
private static bool AttemptIdentify()
{
Task<WinBioIdentity> fingerTask = Task.Run(HandleIdentify);
Task cancelTask = Task.Run(HandleIdentifyEscape);
if (Task.WaitAny(fingerTask, cancelTask) == 0)
{
Console.WriteLine("Identity: {0}", fingerTask.Result.Guid);
return true;
}
return false;
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
}
private static WinBioIdentity HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
return temp;
}
The above is a bare-minimum example. It would be better to make the AttemptIdentify() method async itself, and then use await Task.WhenAny() instead of Task.WaitAny(). It would also be better to include some mechanism to interrupt the tasks: once one has completed, you would want to interrupt the other so it isn't left running and continuing to attempt its work.
But those kinds of issues are not unique to the async/await version, and don't need to be solved to improve on the code you have now.
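A minimal sketch of the async variant (renamed AttemptIdentifyAsync here), reusing the HandleIdentify() and HandleIdentifyEscape() methods shown above:
private static async Task<bool> AttemptIdentifyAsync()
{
    Task<WinBioIdentity> fingerTask = Task.Run(HandleIdentify);
    Task cancelTask = Task.Run(HandleIdentifyEscape);
    // WhenAny completes as soon as either the finger is identified
    // or the user presses 'c' to cancel.
    Task completed = await Task.WhenAny(fingerTask, cancelTask);
    if (completed == fingerTask)
    {
        Console.WriteLine("Identity: {0}", fingerTask.Result.Guid);
        return true;
    }
    return false;
}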

C# Dual Threading, Thread.IsAlive is false even when this thread didn't finish yet

I wrote a short program that searches for empty directories and deletes them.
This work should run in the background, while a status message is written to the console every second so that the user knows the program is still running.
My problem is that the whole program stops after about 3 seconds, even though the processDirectory method hasn't finished yet.
My Main method, which calls a method (processDirectory()) that runs on a second thread:
static void Main(string[] args)
{
Thread delEmpty = new Thread(() => Thread2.processDirectory(@"C:\Users\Mani\Documents"));
delEmpty.Start();
printRunning(delEmpty);
File.WriteAllLines(@"C:\Users\Mani\Desktop\Unauthorized Folders.txt", Thread2.unauthorized);
File.WriteAllLines(@"C:\Users\Mani\Desktop\Empty Folders.txt", Thread2.emptyFolders);
Console.ReadKey();
}
My second class, which holds the processDirectory method that should run in the background:
public static List<string> unauthorized = new List<string>();
public static List<string> emptyFolders = new List<string>();
public static void processDirectory(string rootPath)
{
if (!Directory.Exists(rootPath)) return;
foreach (var dir in Directory.GetDirectories(rootPath))
{
try
{
processDirectory(dir);
if (Directory.GetFiles(dir).Length == 0 && Directory.GetDirectories(dir).Length == 0) Directory.Delete(dir, false);
}
catch (UnauthorizedAccessException uae) { unauthorized.Add(uae.Message); }
}
}
Code for printing something:
static async void printRunning(Thread delEmpty)
{
Console.CursorVisible = false;
for (int cnt = 1; delEmpty.IsAlive; cnt++)
{
switch (cnt)
{
case 1:
Console.Write("Running. ");
break;
case 2:
Console.Write("Running . ");
break;
case 3:
Console.Write("Running .");
cnt = 0;
break;
}
await Task.Delay(1000);
}
Console.Write("Finished!");
Console.CursorVisible = true;
}
I'm going to suggest that you avoid using threads and use an abstraction that deals with your threading issues for you. I suggest making use of Microsoft's Reactive Framework Team's Reactive Extensions (NuGet "System.Reactive") and Interactive Extensions (NuGet "System.Interactive").
Then you can do this:
static void Main(string[] args)
{
var rootPath = @"C:\Users\Mani\Documents";
using (Observable
.Interval(TimeSpan.FromSeconds(1.0))
.Subscribe(x => Console.WriteLine($"Running{"".PadLeft((int)x % 3)}.")))
{
Thread2.processDirectory(rootPath);
}
}
public static class Thread2
{
public static List<string> unauthorized = new List<string>();
public static List<string> emptyFolders = null;
public static void processDirectory(string rootPath)
{
if (!Directory.Exists(rootPath)) return;
emptyFolders =
EnumerableEx
.Expand(Directory.GetDirectories(rootPath), dir => Directory.GetDirectories(dir))
.Where(dir => Directory.GetFiles(dir).Length == 0 && Directory.GetDirectories(dir).Length == 0)
.ToList();
emptyFolders
.AsEnumerable()
.Reverse()
.ForEach(dir =>
{
try
{
Directory.Delete(dir, false);
}
catch (UnauthorizedAccessException uae) { unauthorized.Add(uae.Message); }
});
}
}
The key elements here are:
the Observable.Interval that sets up a timer to display the "Running" message every second.
the EnumerableEx.Expand which recursively builds the list of folders to be deleted.
the Reverse/ForEach which runs through the folders to be deleted (in reverse order) and deletes them.
It's important to note that the deleting happens on the main thread - it's just the "Running" message that comes out on the other thread. If needed, though, it would be fairly easy to push the deleting to another thread, but it isn't necessary.
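For instance, the body of the using block could become a background task while the main thread simply waits; a small sketch of that optional variation:
using (Observable
    .Interval(TimeSpan.FromSeconds(1.0))
    .Subscribe(x => Console.WriteLine($"Running{"".PadLeft((int)x % 3)}.")))
{
    // The directory processing now runs on a thread-pool task; the main
    // thread blocks here until it finishes, while the timer keeps printing.
    Task.Run(() => Thread2.processDirectory(rootPath)).Wait();
}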
To handle the case when GetDirectories throws an error, use this code:
Func<string, string[]> getDirectories = dir =>
{
try
{
return Directory.GetDirectories(dir);
}
catch (UnauthorizedAccessException uae)
{
unauthorized.Add(uae.Message);
return new string[] { };
}
};
emptyFolders =
EnumerableEx
.Expand(getDirectories(rootPath), dir => getDirectories(dir))
.Where(dir => Directory.GetFiles(dir).Length == 0 && getDirectories(dir).Length == 0)
.ToList();
You can solve the issue in one of two ways:
Make your method printRunning run synchronously
Add delEmpty.Join() so the main thread waits until the delEmpty thread finishes
delEmpty.Start();
printRunning(delEmpty);
delEmpty.Join();
For the first solution, replace the printRunning method with the following one:
static void printRunning(Thread delEmpty)
{
Console.CursorVisible = false;
for (int cnt = 0; delEmpty.IsAlive; cnt++)
{
switch (cnt % 3)
{
case 0:
Console.Write("Running.");
break;
case 1:
Console.Write("Running..");
break;
case 2:
Console.Write("Running...");
break;
}
Thread.Sleep(1000);
Console.SetCursorPosition(0, 0);
Console.Clear();
}
Console.Write("Finished!");
Console.CursorVisible = true;
}

A producer consumer queue with an additional thread for a periodic backup of data

I'm trying to implement a concurrent producer/consumer queue with multiple producers and one consumer: the producers add data to the queue, and the consumer dequeues it in order to update a collection. This collection must periodically be backed up to a new file. For this purpose I created a custom serializable collection; serialization can be performed using the DataContractSerializer.
The queue is only shared between the consumer and the producers, so access to this queue must be managed to avoid race conditions.
The custom collection is shared between the consumer and a backup thread.
The backup thread can be activated periodically using a System.Threading.Timer object: it is initially scheduled by the consumer and then rescheduled at the end of every backup procedure.
Finally, a shutdown method should stop the queuing by producers, then stop the consumer, perform the last backup and dispose the timer.
The dequeuing of an item at a time may not be efficient, so I thought of using two queues: when the first queue becomes full, the producers notify the consumer by invoking Monitor.Pulse. As soon as the consumer receives the notification, the queues are swapped, so while producers enqueue new items, the consumer can process the previous ones.
The sample that I wrote seems to work properly. I think it is also thread-safe, but I'm not sure about that. In the following code, for simplicity, I used a Queue<int>. I also used (again for simplicity) an ArrayList instead of the serializable collection.
public class QueueManager
{
private readonly int m_QueueMaxSize;
private readonly TimeSpan m_BackupPeriod;
private readonly object m_SyncRoot_1 = new object();
private Queue<int> m_InputQueue = new Queue<int>();
private bool m_Shutdown;
private bool m_Pulsed;
private readonly object m_SyncRoot_2 = new object();
private ArrayList m_CustomCollection = new ArrayList();
private Thread m_ConsumerThread;
private Timer m_BackupThread;
private WaitHandle m_Disposed;
public QueueManager()
{
m_ConsumerThread = new Thread(Work) { IsBackground = true };
m_QueueMaxSize = 7;
m_BackupPeriod = TimeSpan.FromSeconds(30);
}
public void Run()
{
m_Shutdown = m_Pulsed = false;
m_BackupThread = new Timer(DoBackup);
m_Disposed = new AutoResetEvent(false);
m_ConsumerThread.Start();
}
public void Shutdown()
{
lock (m_SyncRoot_1)
{
m_Shutdown = true;
Console.WriteLine("Worker shutdown...");
Monitor.Pulse(m_SyncRoot_1);
}
m_ConsumerThread.Join();
WaitHandle.WaitAll(new WaitHandle[] { m_Disposed });
if (m_InputQueue != null) { m_InputQueue.Clear(); }
if (m_CustomCollection != null) { m_CustomCollection.Clear(); }
Console.WriteLine("Worker stopped!");
}
public void Enqueue(int item)
{
lock (m_SyncRoot_1)
{
if (m_InputQueue.Count == m_QueueMaxSize)
{
if (!m_Pulsed)
{
Monitor.Pulse(m_SyncRoot_1); // it notifies the consumer...
m_Pulsed = true;
}
Monitor.Wait(m_SyncRoot_1); // ... and waits for Pulse
}
m_InputQueue.Enqueue(item);
Console.WriteLine("{0} \t {1} >", Thread.CurrentThread.Name, item.ToString("+000;-000;"));
}
}
private void Work()
{
m_BackupThread.Change(m_BackupPeriod, TimeSpan.FromMilliseconds(-1));
Queue<int> m_SwapQueueRef, m_WorkerQueue = new Queue<int>();
Console.WriteLine("Worker started!");
while (true)
{
lock (m_SyncRoot_1)
{
if (m_InputQueue.Count < m_QueueMaxSize && !m_Shutdown) Monitor.Wait(m_SyncRoot_1);
Console.WriteLine("\nswapping...");
m_SwapQueueRef = m_InputQueue;
m_InputQueue = m_WorkerQueue;
m_WorkerQueue = m_SwapQueueRef;
m_Pulsed = false;
Monitor.PulseAll(m_SyncRoot_1); // all producers are notified
}
Console.WriteLine("Worker\t < {0}", String.Join(",", m_WorkerQueue.ToArray()));
lock (m_SyncRoot_2)
{
Console.WriteLine("Updating custom dictionary...");
foreach (int item in m_WorkerQueue)
{
m_CustomCollection.Add(item);
}
Thread.Sleep(1000);
Console.WriteLine("Custom dictionary updated successfully!");
}
if (m_Shutdown)
{
// schedule last backup
m_BackupThread.Change(0, Timeout.Infinite);
return;
}
m_WorkerQueue.Clear();
}
}
private void DoBackup(object state)
{
try
{
lock (m_SyncRoot_2)
{
Console.WriteLine("Backup...");
Thread.Sleep(2000);
Console.WriteLine("Backup completed at {0}", DateTime.Now);
}
}
finally
{
if (m_Shutdown) { m_BackupThread.Dispose(m_Disposed); }
else { m_BackupThread.Change(m_BackupPeriod, TimeSpan.FromMilliseconds(-1)); }
}
}
}
Some objects are initialized in the Run method to allow you to restart this QueueManager after it is stopped, as shown in the code below.
public static void Main(string[] args)
{
QueueManager queue = new QueueManager();
var t1 = new Thread(() =>
{
for (int i = 0; i < 50; i++)
{
queue.Enqueue(i);
Thread.Sleep(1500);
}
}) { Name = "t1" };
var t2 = new Thread(() =>
{
for (int i = 0; i > -30; i--)
{
queue.Enqueue(i);
Thread.Sleep(3000);
}
}) { Name = "t2" };
t1.Start(); t2.Start(); queue.Run();
t1.Join(); t2.Join(); queue.Shutdown();
Console.ReadLine();
var t3 = new Thread(() =>
{
for (int i = 0; i < 50; i++)
{
queue.Enqueue(i);
Thread.Sleep(1000);
}
}) { Name = "t3" };
var t4 = new Thread(() =>
{
for (int i = 0; i > -30; i--)
{
queue.Enqueue(i);
Thread.Sleep(2000);
}
}) { Name = "t4" };
t3.Start(); t4.Start(); queue.Run();
t3.Join(); t4.Join(); queue.Shutdown();
Console.ReadLine();
}
I would suggest using the BlockingCollection for a producer/consumer queue. It was designed specifically for that purpose. The producers add items using Add and the consumers use Take. If there are no items to take then it will block until one is added. It is already designed to be used in a multithreaded environment, so if you're just using those methods there's no need to explicitly use any locks or other synchronization code.
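A minimal sketch of that approach (not a drop-in replacement for the QueueManager above, just the core producer/consumer wiring):
var queue = new BlockingCollection<int>(boundedCapacity: 7);

var producer1 = Task.Run(() => { for (int i = 0; i < 50; i++) queue.Add(i); });
var producer2 = Task.Run(() => { for (int i = 0; i > -30; i--) queue.Add(i); });

var consumer = Task.Run(() =>
{
    // Take blocks while the collection is empty; GetConsumingEnumerable
    // ends cleanly once CompleteAdding() has been called and the queue drains.
    foreach (int item in queue.GetConsumingEnumerable())
        Console.WriteLine(item);
});

Task.WaitAll(producer1, producer2);
queue.CompleteAdding();   // signal shutdown to the consumer
consumer.Wait();
Bounding the capacity (7 here, mirroring m_QueueMaxSize) also gives you the same back-pressure that the Monitor.Wait in Enqueue provides in the original code.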
