Background
A customer asked me to find out why their C# application (we'll call it XXX, delivered by a consultant who has fled the scene) is so flaky, and fix it. The application controls a measurement device over a serial connection. Sometimes the device delivers continuous readings (which are displayed on screen), and sometimes the app needs to stop continuous measurements and go into command-response mode.
How NOT to do it
For continuous measurements, XXX uses System.Timers.Timer for background processing of serial input. When the timer fires, C# runs the timer's ElapsedEventHandler using some thread from its pool. XXX's event handler uses a blocking commPort.ReadLine() with a several second timeout, then calls back to a delegate when a useful measurement arrives on the serial port. This portion works fine, however...
When it's time to stop realtime measurements and command the device to do something different, the application tries to suspend background processing from the GUI thread by setting the timer's Enabled = false. Of course, that just sets a flag preventing further events, and a background thread already waiting for serial input continues waiting. The GUI thread then sends a command to the device and tries to read the reply – but the reply is received by the background thread. Now the background thread becomes confused, as it's not the expected measurement. The GUI thread meanwhile becomes confused, as it didn't receive the expected command reply. Now we know why XXX is so flaky.
Possible Method 1
In another similar application, I used a System.ComponentModel.BackgroundWorker thread for free-running measurements. To suspend background processing I did two things in the GUI thread:
call the CancelAsync method on the thread, and
call commPort.DiscardInBuffer(), which causes a pending (blocked, waiting) comport read in the background thread to throw a System.IO.IOException "The I/O operation has been aborted because of either a thread exit or an application request.\r\n".
In the background thread I catch this exception and clean up promptly, and all works as intended. Unfortunately DiscardInBuffer provoking the exception in another thread's blocking read is not documented behavior anywhere I can find, and I hate relying on undocumented behavior. It works because internally DiscardInBuffer calls the Win32 API PurgeComm, which interrupts the blocking read (documented behavior).
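For reference, here's roughly what that pattern looks like - just a sketch, with commPort and OnMeasurement standing in for the real names, and it relies on the undocumented DiscardInBuffer behavior described above:
BackgroundWorker worker = new BackgroundWorker { WorkerSupportsCancellation = true };
worker.DoWork += (s, e) =>
{
    while (!worker.CancellationPending)
    {
        try
        {
            string line = commPort.ReadLine();   // blocking read, ReadTimeout set to a few seconds
            OnMeasurement(line);                 // placeholder callback to the GUI
        }
        catch (TimeoutException) { }             // nothing arrived; loop and re-check cancellation
        catch (System.IO.IOException) { break; } // provoked by DiscardInBuffer from the GUI thread
    }
};
worker.RunWorkerAsync();

// GUI thread, to suspend background processing:
worker.CancelAsync();
commPort.DiscardInBuffer(); // unblocks a pending ReadLine via PurgeComm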
Possible Method 2
Directly use the base stream's Stream.ReadAsync method with a cancellation token to monitor – a supported way of interrupting the background I/O.
Because the number of characters to be received is variable (terminated by a newline), and no ReadLineAsync method exists in the framework, I don't know if this is possible. I could process each character individually, but that would take a performance hit (it might not work on slow machines, unless of course the line-termination logic is already implemented within the framework).
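If it is workable, a hand-rolled ReadLineAsync over BaseStream might look something like this - purely a sketch that reads one byte at a time and assumes ASCII data and '\n' termination; whether the token actually interrupts an in-flight serial read is exactly the part I'm unsure about:
using System.IO;
using System.IO.Ports;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static async Task<string> ReadLineAsync(SerialPort port, CancellationToken token)
{
    var sb = new StringBuilder();
    var buffer = new byte[1];
    while (true)
    {
        int n = await port.BaseStream.ReadAsync(buffer, 0, 1, token); // cancellable overload
        if (n == 0) throw new IOException("Port closed");
        char c = (char)buffer[0]; // assumes single-byte characters
        if (c == '\n') return sb.ToString();
        sb.Append(c);
    }
}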
Possible Method 3
Create a lock meaning "I've got the serial port". Nobody reads, writes, or discards input from the port unless they hold the lock (including repeating the blocking read in the background thread). Chop the timeout values in the background thread to 1/4 second for acceptable GUI responsiveness without too much overhead.
Question
Does anybody have a proven solution to deal with this problem?
How can one cleanly stop background processing of the serial port?
I've googled and read dozens of articles bemoaning the C# SerialPort class, but haven't found a good solution.
Thanks in advance!
MSDN article for the SerialPort Class clearly states:
If a SerialPort object becomes blocked during a read operation, do not abort the thread. Instead, either close the base stream or dispose of the SerialPort object.
So the best approach, from my point of view, is the second one: async reading with step-by-step checking for the line-ending character. As you've stated, checking each char is a big performance loss, so I suggest you investigate the ReadLine implementation for ideas on how to perform this faster. Note that it uses the NewLine property of the SerialPort class.
I also want to note that there is no ReadLineAsync method by default; as MSDN states:
By default, the ReadLine method will block until a line is received. If this behavior is undesirable, set the ReadTimeout property to any non-zero value to force the ReadLine method to throw a TimeoutException if a line is not available on the port.
So maybe, in your wrapper, you can implement similar logic, so your Task will cancel if there is no line end within some given time. Also, you should note this:
Because the SerialPort class buffers data, and the stream contained in the BaseStream property does not, the two might conflict about how many bytes are available to read. The BytesToRead property can indicate that there are bytes to read, but these bytes might not be accessible to the stream contained in the BaseStream property because they have been buffered to the SerialPort class.
So, again, I suggest implementing some wrapper logic with an asynchronous read that checks after each read whether the line end has arrived; the read itself should be blocking, and you wrap it inside an async method that will cancel the Task after some time.
Hope this helps.
OK, here's what I did... Comments would be appreciated as C# is still somewhat new to me!
It's crazy to have multiple threads trying to access the serial port concurrently (or any resource, especially an asynchronous resource). To fix up this application without a complete rewrite, I introduced a lock, SerialPortLockObject, to guarantee exclusive serial port access as follows:
The GUI thread holds SerialPortLockObject except when it has a background operation running.
The SerialPort class is wrapped so that any read or write by a thread not holding SerialPortLockObject throws an exception (helped find several contention bugs).
The timer class is wrapped (class SerialOperationTimer) so that the background worker function is called with SerialPortLockObject held (acquired just before the call and released after).
SerialOperationTimer allows only one timer running at a time (this helped find several bugs where the GUI forgot to stop background processing before starting up a different timer). This could be improved by using a dedicated thread for timer work, with that thread holding the lock for the entire time the timer is active (but that would be still more work; as coded, System.Timers.Timer runs the worker function from the thread pool).
When a SerialOperationTimer is stopped, it disables the underlying timer and flushes the serial port buffers (provoking an exception from any blocked serial port operation, as explained in possible method 1 above). Then SerialPortLockObject is reacquired by the GUI thread.
Here's the wrapper for SerialPort:
/// <summary> CheckedSerialPort class checks that read and write operations are only performed by the thread owning the lock on the serial port </summary>
// Just check reads and writes (not basic properties, opening/closing, or buffer discards).
public class CheckedSerialPort : SafePort /* derived in turn from SerialPort */
{
private void checkOwnership()
{
try
{
if (Monitor.IsEntered(XXX_Conn.SerialPortLockObject)) return; // the thread running this code has the lock; all set!
// Ooops...
throw new Exception("Serial IO attempted without lock ownership");
}
catch (Exception ex)
{
StringBuilder sb = new StringBuilder("");
sb.AppendFormat("Message: {0}\n", ex.Message);
sb.AppendFormat("Exception Type: {0}\n", ex.GetType().FullName);
sb.AppendFormat("Source: {0}\n", ex.Source);
sb.AppendFormat("StackTrace: {0}\n", ex.StackTrace);
sb.AppendFormat("TargetSite: {0}", ex.TargetSite);
Console.Write(sb.ToString());
Debug.Assert(false); // lets have a look in the debugger NOW...
throw;
}
}
public new int ReadByte() { checkOwnership(); return base.ReadByte(); }
public new string ReadTo(string value) { checkOwnership(); return base.ReadTo(value); }
public new string ReadExisting() { checkOwnership(); return base.ReadExisting(); }
public new void Write(string text) { checkOwnership(); base.Write(text); }
public new void WriteLine(string text) { checkOwnership(); base.WriteLine(text); }
public new void Write(byte[] buffer, int offset, int count) { checkOwnership(); base.Write(buffer, offset, count); }
public new void Write(char[] buffer, int offset, int count) { checkOwnership(); base.Write(buffer, offset, count); }
}
And here's the wrapper for System.Timers.Timer:
/// <summary> Wrap System.Timers.Timer class to provide safer exclusive access to serial port </summary>
class SerialOperationTimer
{
private static SerialOperationTimer runningTimer = null; // there should only be one!
private string name; // for diagnostics
// Delegate TYPE for user's callback function (user callback function to make async measurements)
public delegate void SerialOperationTimerWorkerFunc_T(object source, System.Timers.ElapsedEventArgs e);
private SerialOperationTimerWorkerFunc_T workerFunc; // application function to call for this timer
private System.Timers.Timer timer;
private object workerEnteredLock = new object();
private bool workerAlreadyEntered = false;
public SerialOperationTimer(string _name, int msecDelay, SerialOperationTimerWorkerFunc_T func)
{
name = _name;
workerFunc = func;
timer = new System.Timers.Timer(msecDelay);
timer.Elapsed += new System.Timers.ElapsedEventHandler(SerialOperationTimer_Tick);
}
private void SerialOperationTimer_Tick(object source, System.Timers.ElapsedEventArgs eventArgs)
{
lock (workerEnteredLock)
{
if (workerAlreadyEntered) return; // don't launch multiple copies of worker if timer set too fast; just ignore this tick
workerAlreadyEntered = true;
}
bool lockTaken = false;
try
{
// Acquire the serial lock prior to calling the worker
Monitor.TryEnter(XXX_Conn.SerialPortLockObject, ref lockTaken);
if (!lockTaken)
throw new System.Exception("SerialOperationTimer " + name + ": Failed to get serial lock");
// Debug.WriteLine("SerialOperationTimer " + name + ": Got serial lock");
workerFunc(source, eventArgs);
}
finally
{
// release serial lock
if (lockTaken)
{
Monitor.Exit(XXX_Conn.SerialPortLockObject);
// Debug.WriteLine("SerialOperationTimer " + name + ": released serial lock");
}
workerAlreadyEntered = false;
}
}
public void Start()
{
Debug.Assert(Form1.GUIthreadHashcode == Thread.CurrentThread.GetHashCode()); // should ONLY be called from GUI thread
Debug.Assert(!timer.Enabled); // successive Start or Stop calls are BAD
Debug.WriteLine("SerialOperationTimer " + name + ": Start");
if (runningTimer != null)
{
Debug.Assert(false); // Lets have a look in the debugger NOW
throw new System.Exception("SerialOperationTimer " + name + ": Attempted 'Start' while " + runningTimer.name + " is still running");
}
// Start background processing
// Release GUI thread's lock on the serial port, so background thread can grab it
Monitor.Exit(XXX_Conn.SerialPortLockObject);
runningTimer = this;
timer.Enabled = true;
}
public void Stop()
{
Debug.Assert(Form1.GUIthreadHashcode == Thread.CurrentThread.GetHashCode()); // should ONLY be called from GUI thread
Debug.Assert(timer.Enabled); // successive Start or Stop calls are BAD
Debug.WriteLine("SerialOperationTimer " + name + ": Stop");
if (runningTimer != this)
{
Debug.Assert(false); // Lets have a look in the debugger NOW
throw new System.Exception("SerialOperationTimer " + name + ": Attempted 'Stop' while not running");
}
// Stop further background processing from being initiated,
timer.Enabled = false; // but, background processing may still be in progress from the last timer tick...
runningTimer = null;
// Purge serial input and output buffers. Clearing input buf causes any blocking read in progress in background thread to throw
// System.IO.IOException "The I/O operation has been aborted because of either a thread exit or an application request.\r\n"
if(Form1.xxConnection.PortIsOpen) Form1.xxConnection.CiCommDiscardBothBuffers();
bool lockTaken = false;
// Now, GUI thread needs the lock back.
// 3 sec REALLY should be enough time for background thread to cleanup and release the lock:
Monitor.TryEnter(XXX_Conn.SerialPortLockObject, 3000, ref lockTaken);
if (!lockTaken)
throw new Exception("Serial port lock not yet released by background timer thread "+name);
if (Form1.xxConnection.PortIsOpen)
{
// It's possible there's still stuff in transit from device (for example, background thread just completed
// sending an ACQ command as it was stopped). So, sync up with the device...
int r = Form1.xxConnection.CiSync();
Debug.Assert(r == XXX_Conn.CI_OK);
if (r != XXX_Conn.CI_OK)
throw new Exception("Cannot re-sync with device after disabling timer thread " + name);
}
}
/// <summary> SerialOperationTimer.StopAllBackgroundTimers() - Stop all background activity </summary>
public static void StopAllBackgroundTimers()
{
if (runningTimer != null) runningTimer.Stop();
}
public double Interval
{
get { return timer.Interval; }
set { timer.Interval = value; }
}
} // class SerialOperationTimer
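Typical GUI-thread usage then ends up looking something like this (MeasurementWorker and SendCommandAndReadReply are illustrative names, not the real application code):
// GUI thread holds SerialPortLockObject by default.
var measurementTimer = new SerialOperationTimer("Measurements", 250, MeasurementWorker);

measurementTimer.Start();            // releases the lock so the worker can grab it on each tick
// ... continuous measurements arrive via MeasurementWorker ...
measurementTimer.Stop();             // disables the timer, purges buffers, reacquires the lock

SendCommandAndReadReply("RANGE 2");  // safe: the GUI thread owns the port again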
Related
We could abort a Thread like this:
Thread thread = new Thread(SomeMethod);
// ...
thread.Abort();
But can I abort a Task (in .Net 4.0) in the same way, not by the cancellation mechanism? I want to kill the Task immediately.
The guidance on not using a thread abort is controversial. I think there is still a place for it, but in exceptional circumstances. However, you should always attempt to design around it and see it as a last resort.
Example;
You have a simple Windows Forms application that connects to a blocking, synchronous web service, within which it executes a function on the web service inside a Parallel loop.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
Thread.Sleep(120000); // pretend web service call
});
Say in this example, the blocking call takes 2 mins to complete. Now I set my MaxDegreeOfParallelism to say ProcessorCount. iListOfItems has 1000 items within it to process.
The user clicks the process button and the loop commences, we have 'up-to' 20 threads executing against 1000 items in the iListOfItems collection. Each iteration executes on its own thread. Each thread will utilise a foreground thread when created by Parallel.ForEach. This means regardless of the main application shutdown, the app domain will be kept alive until all threads have finished.
However the user needs to close the application for some reason, say they close the form.
These 20 threads will continue to execute until all 1000 items are processed. This is not ideal in this scenario, as the application will not exit as the user expects and will continue to run behind the scenes, as can be seen by taking a look in task manger.
Say the user tries to rebuild the app again (VS 2010): it reports the exe is locked, so they would have to go into Task Manager to kill it or just wait until all 1000 items are processed.
I would not blame you for saying "but of course! I should be cancelling these threads using the CancellationTokenSource object and calling Cancel" ... but there are some problems with this as of .NET 4.0. Firstly, this is still never going to result in a thread abort, which would offer up an abort exception followed by thread termination; the app domain will instead need to wait for the threads to finish normally, and this means waiting for the last blocking call, which would be the very last running iteration (thread) that ultimately gets to call po.CancellationToken.ThrowIfCancellationRequested.
In the example this would mean the app domain could still stay alive for up to 2 mins, even though the form has been closed and cancel called.
Note that calling Cancel on a CancellationTokenSource does not throw an exception on the processing thread(s), which would indeed act to interrupt the blocking call, similar to a thread abort, and stop the execution. An exception is cached, ready for when all the other threads (concurrent iterations) eventually finish and return; then the exception is thrown in the initiating thread (where the loop is declared).
I chose not to use the Cancel option on a CancellationTokenSource object. This is wasteful and arguably violates the well-known anti-pattern of controlling the flow of the code by exceptions.
Instead, it is arguably 'better' to implement a simple thread-safe property, i.e. bool stopExecuting. Then within the loop, check the value of stopExecuting; if the value is set to true by the external influence, we can take an alternate path to close down gracefully. Since we should not call Cancel, this precludes checking CancellationTokenSource.IsCancellationRequested, which would otherwise be another option.
Something like the following if condition would be appropriate within the loop;
if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting) {loopState.Stop(); return;}
The iteration will now exit in a 'controlled' manner as well as terminating further iterations, but as I said, this does little for our issue of having to wait on the long running and blocking call(s) that are made within each iteration (parallel loop thread), since these have to complete before each thread can get to the option of checking if it should stop.
In summary, as the user closes the form, the 20 threads will be signaled to stop via stopExecuting, but they will only stop when they have finished executing their long running function call.
We can't do anything about the fact that the application domain will always stay alive and only be released when all foreground threads have completed. And this means there will be a delay associated with waiting for any blocking calls made within the loop to complete.
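Putting those pieces together, a minimal sketch of the stopExecuting approach might look like this (the field, the item type and the web-service call are illustrative):
private volatile bool stopExecuting; // set to true from the UI, e.g. in the FormClosing handler

private void ProcessItems(IEnumerable<Item> iListOfItems)
{
    var po = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
    Parallel.ForEach(iListOfItems, po, (item, loopState) =>
    {
        if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting)
        {
            loopState.Stop();
            return;
        }
        CallBlockingWebService(item); // the long-running, blocking call
    });
}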
Only a true thread abort can interrupt the blocking call, and you must mitigate leaving the system in an unstable/undefined state as best you can in the aborted thread's exception handler, which goes without question. Whether that's appropriate is a matter for the programmer to decide, based on what resource handles they choose to maintain and how easy it is to close them in a thread's finally block. You could register with a token to terminate on cancel as a semi-workaround, i.e.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
using (cts.Token.Register(Thread.CurrentThread.Abort))
{
try
{
Thread.Sleep(120000); // pretend web service call
}
catch (ThreadAbortException ex)
{
// log etc.
}
finally
{
// clean up here
}
}
});
but this will still result in an exception in the declaring thread.
All things considered, interrupting blocking calls using the parallel loop constructs could have been a method on the options, avoiding the use of more obscure parts of the library. But why there is no option to cancel and avoid throwing an exception in the declaring method strikes me as a possible oversight.
But can I abort a Task (in .Net 4.0) in the same way, not by the cancellation mechanism? I want to kill the Task immediately.
Other answerers have told you not to do it. But yes, you can do it. You can supply Thread.Abort() as the delegate to be called by the Task's cancellation mechanism. Here is how you could configure this:
class HardAborter
{
public bool WasAborted { get; private set; }
private CancellationTokenSource Canceller { get; set; }
private Task<object> Worker { get; set; }
public void Start(Func<object> DoFunc)
{
WasAborted = false;
// start a task with a means to do a hard abort (unsafe!)
Canceller = new CancellationTokenSource();
Worker = Task.Factory.StartNew(() =>
{
try
{
// specify this thread's Abort() as the cancel delegate
using (Canceller.Token.Register(Thread.CurrentThread.Abort))
{
return DoFunc();
}
}
catch (ThreadAbortException)
{
WasAborted = true;
return false;
}
}, Canceller.Token);
}
public void Abort()
{
Canceller.Cancel();
}
}
disclaimer: don't do this.
Here is an example of what not to do:
var doNotDoThis = new HardAborter();
// start a thread writing to the console
doNotDoThis.Start(() =>
{
while (true)
{
Thread.Sleep(100);
Console.Write(".");
}
return null;
});
// wait a second to see some output and show the WasAborted value as false
Thread.Sleep(1000);
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted);
// wait another second, abort, and print the time
Thread.Sleep(1000);
doNotDoThis.Abort();
Console.WriteLine("Abort triggered at " + DateTime.Now);
// wait until the abort finishes and print the time
while (!doNotDoThis.WasAborted) { Thread.CurrentThread.Join(0); }
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted + " at " + DateTime.Now);
Console.ReadKey();
You shouldn't use Thread.Abort()
Tasks can be Cancelled but not aborted.
The Thread.Abort() method is (severely) deprecated.
Both Threads and Tasks should cooperate when being stopped, otherwise you run the risk of leaving the system in an unstable/undefined state.
If you do need to run a Process and kill it from the outside, the only safe option is to run it in a separate AppDomain.
This answer is about .net 3.5 and earlier.
Thread-abort handling has been improved since then, among other things by changing the way finally blocks work.
But Thread.Abort is still a suspect solution that you should always try to avoid.
And in .NET Core (.NET 5+), Thread.Abort() will now throw a PlatformNotSupportedException, kind of underscoring the 'deprecated' point.
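For completeness, a minimal sketch of the cooperative pattern these points recommend - it only works if the work can actually observe the token between reasonably short steps (DoOneUnitOfWork is a placeholder):
var cts = new CancellationTokenSource();

var task = Task.Run(() =>
{
    while (true)
    {
        cts.Token.ThrowIfCancellationRequested(); // cooperative exit point
        DoOneUnitOfWork();                        // placeholder for a short unit of work
    }
}, cts.Token);

cts.Cancel();                                     // request cancellation
try { task.Wait(); }
catch (AggregateException) { /* contains a TaskCanceledException */ }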
Everyone knows (hopefully) it's bad to terminate a thread. The problem is when you don't own a piece of code you're calling. If this code is running in some do/while infinite loop, itself calling some native functions, etc., you're basically stuck. When this happens in your own code during termination, a Stop or Dispose call, it's kinda OK to start shooting the bad guys (so you don't become a bad guy yourself).
So, for what it's worth, I've written those two blocking functions that use their own native thread, not a thread from the pool or some thread created by the CLR. They will stop the thread if a timeout occurs:
// returns true if the call went to completion successfully, false otherwise
public static bool RunWithAbort(this Action action, int milliseconds) => RunWithAbort(action, new TimeSpan(0, 0, 0, 0, milliseconds));
public static bool RunWithAbort(this Action action, TimeSpan delay)
{
if (action == null)
throw new ArgumentNullException(nameof(action));
var source = new CancellationTokenSource(delay);
var success = false;
var handle = IntPtr.Zero;
var fn = new Action(() =>
{
using (source.Token.Register(() => TerminateThread(handle, 0)))
{
action();
success = true;
}
});
handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
CloseHandle(handle);
return success;
}
// returns what the function should return if the call went to completion successfully, default(T) otherwise
public static T RunWithAbort<T>(this Func<T> func, int milliseconds) => RunWithAbort(func, new TimeSpan(0, 0, 0, 0, milliseconds));
public static T RunWithAbort<T>(this Func<T> func, TimeSpan delay)
{
if (func == null)
throw new ArgumentNullException(nameof(func));
var source = new CancellationTokenSource(delay);
var item = default(T);
var handle = IntPtr.Zero;
var fn = new Action(() =>
{
using (source.Token.Register(() => TerminateThread(handle, 0)))
{
item = func();
}
});
handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
CloseHandle(handle);
return item;
}
[DllImport("kernel32")]
private static extern bool TerminateThread(IntPtr hThread, int dwExitCode);
[DllImport("kernel32")]
private static extern IntPtr CreateThread(IntPtr lpThreadAttributes, IntPtr dwStackSize, Delegate lpStartAddress, IntPtr lpParameter, int dwCreationFlags, out int lpThreadId);
[DllImport("kernel32")]
private static extern bool CloseHandle(IntPtr hObject);
[DllImport("kernel32")]
private static extern int WaitForSingleObject(IntPtr hHandle, int dwMilliseconds);
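Usage is then along these lines (SomeBlockingCall and SomeBlockingQuery are placeholders for whatever code you don't control):
// false if the blocking call did not complete within 2 seconds
// (the underlying native thread has then been terminated).
bool completed = new Action(() => SomeBlockingCall()).RunWithAbort(2000);

// Func<T> variant: returns default(T) if the call was cut short.
string result = new Func<string>(() => SomeBlockingQuery()).RunWithAbort(2000);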
While it's possible to abort a thread, in practice it's almost always a very bad idea to do so. Aborting a thread means the thread is not given a chance to clean up after itself, leaving resources undeleted and things in unknown states.
In practice, if you abort a thread, you should only do so in conjunction with killing the process. Sadly, all too many people think ThreadAbort is a viable way of stopping something and continuing on; it's not.
Since Tasks run as threads, you can call ThreadAbort on them, but as with generic threads you almost never want to do this, except as a last resort.
I faced a similar problem with Excel's Application.Workbooks.
If the application is busy, the method hangs eternally. My approach was simply to try to get it in a task and wait; if it takes too long, I just leave the task be and go away (there is no harm "in this case": Excel will unfreeze the moment the user finishes whatever is busy).
In this case, it's impossible to use a cancellation token. The advantage is that I don't need excessive code, aborting threads, etc.
public static List<Workbook> GetAllOpenWorkbooks()
{
//gets all open Excel applications
List<Application> applications = GetAllOpenApplications();
//this is what we want to get from the third party library that may freeze
List<Workbook> books = null;
//as Excel may freeze here due to being busy, we try to get the workbooks asynchronously
Task task = Task.Run(() =>
{
try
{
books = applications
.SelectMany(app => app.Workbooks.OfType<Workbook>()).ToList();
}
catch { }
});
//wait for task completion
task.Wait(5000);
return books; //handle outside if books is null
}
This is my implementation of an idea presented by @Simon-Mourier, using a dotnet thread; short and simple code:
public static bool RunWithAbort(this Action action, int milliseconds)
{
if (action == null) throw new ArgumentNullException(nameof(action));
var success = false;
var thread = new Thread(() =>
{
action();
success = true;
});
thread.IsBackground = true;
thread.Start();
thread.Join(milliseconds);
thread.Abort();
return success;
}
You can "abort" a task by running it on a thread you control and aborting that thread. This causes the task to complete in a faulted state with a ThreadAbortException. You can control thread creation with a custom task scheduler, as described in this answer. Note that the caveat about aborting a thread applies.
(If you don't ensure the task is created on its own thread, aborting it would abort either a thread-pool thread or the thread initiating the task, neither of which you typically want to do.)
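A rough sketch of that idea without a full custom scheduler is to run the delegate on a dedicated thread and expose it as a Task via TaskCompletionSource - illustrative only, and all the usual Thread.Abort caveats apply:
class AbortableTaskRunner
{
    private readonly Thread _thread;
    public Task Task { get; private set; }

    public AbortableTaskRunner(Action work)
    {
        var tcs = new TaskCompletionSource<object>();
        Task = tcs.Task;
        _thread = new Thread(() =>
        {
            try { work(); tcs.TrySetResult(null); }
            catch (ThreadAbortException) { Thread.ResetAbort(); tcs.TrySetCanceled(); }
            catch (Exception ex) { tcs.TrySetException(ex); }
        });
        _thread.IsBackground = true;
        _thread.Start();
    }

    public void Abort() { _thread.Abort(); } // the task then completes in a Canceled state
}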
using System;
using System.Threading;
using System.Threading.Tasks;
...
var cts = new CancellationTokenSource();
var task = Task.Run(() => { while (true) { } });
Parallel.Invoke(() =>
{
task.Wait(cts.Token);
}, () =>
{
Thread.Sleep(1000);
cts.Cancel();
});
This is a simple snippet to abort a never-ending task with CancellationTokenSource.
I have a C# multithreaded application that has to interface with hardware using SerialPort.
The program is mostly command response sequence but the hardware can send an unsolicited "RESET" message due to an internal error at which time the software has to reinitialize it by sending a sequence of commands setting certain values.
More than one thread (from threadpool) can try to do a TakeSampleNow()
public class ALComm
{
private readonly AutoLoaderManager _manager;
private readonly AutoResetEvent dataArrived = new AutoResetEvent(false);
private SerialPort _alPort;
private string _alResponse;
.... Code to init _alPort and attach datareceived event etc
public void TakeSampleNow()
{
if (Monitor.TryEnter(_alPort, 1000)) //let's wait a second
{
_manager.MessageList.Enqueue("Try sampling");
try
{
Send("Command1");
string response = Receive();
switch(response)
{
case "X": blah blah..
case "Y": blah blah..
}
Send("Command2");
response = Receive();
while (response != "OK")
{
Send("Command3");
response = Receive();
Send("Command2");
response = Receive();
}
}
finally
{
Console.WriteLine("Releasing port");
//Thread.CurrentThread.Priority = ThreadPriority.Normal;
Monitor.Exit(_alPort);
}
}
else
{
_manager.MessageList.Enqueue("Port is busy!!!");
}
}
public string Receive()
{
string inString = null;
dataArrived.WaitOne(1000);
inString = _alResponse;
return inString;
}
private void AlPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
_alResponse = _alPort.ReadLine();
//if (_alResponse.ToUpper().Contains("RESET"))
//{
// _alState = AlState.Reset;
// TryInitialize();
//}
dataArrived.Set();
}
private void TryInitialize()
{
Monitor.Enter(_alPort); //lock so other threads do not access samplenow during initialization
try
{
string response;
Console.WriteLine("Initializing ... The AutoLoader");
_alPort.DiscardInBuffer();
Send("CommandX");
response = Receive();
--- blah blah
_alResponse = "";
}
finally
{
Monitor.Exit(_alPort);
}
}
I can check the response in the DataReceived event, and wait on the lock in TryInitialize() for other threads in TakeSampleNow to release it, but I would have to check on each response whether _alResponse contains "RESET" and, if it does, return from the method, which makes it more convoluted.
Any suggestions on how I can do this better? I believe it can be a state machine but am not able to conceptualize it.
You don't supply many details of your protocol - you don't say whether command/response pairs can overlap and, if so, how the responses are matched up with the commands.
You should be able to do this with a state engine. Run the state machine on its own thread that waits on a BlockingCollection for events. You will need a 'SerialRecv' thread as well to run your protocol and parse incoming bytes into messages.
I would use just one 'SerialEvent' class to carry events into the SM queue. The class should have an enum to describe the event and members for the rx buffer, txData, parsed data, data to assemble the tx string from, an exception/error-message field - everything needed for any event or forwarding purpose (e.g. the SM might forward a completed Request/Response to a display or logger).
Some events I can think of straight away: EsmNewRequestResponse, EsmRxData, EsmResetRx.
The event enum may, at some stages, have other values that are not used by the SM, e.g.: EsmError, EsmLog, EsmDisplay.
If you need timeouts, you can generate one by timing out the take() on the SM input queue.
Yes, there are things I left out.
If several threads issue SerialEvent instances 'at once', the SM will get new SerialEvents while it is still processing the first one. The SM will need another queue/deque to hold the SerialEvents awaiting handling. Due to the serializing of the SM by the BlockingCollection/thread, this 'pending' queue does not have to be thread-safe. The SM should check this pending queue after any request/response has completed to see if there is another one to process.
To handle request/response synchronously from several threads, the requesting threads must have something to wait on. An AutoResetEvent in the SerialEvent class would do. Submitting a SerialEvent to the system would queue up the SerialEvent instance and wait on the AutoResetEvent. When the processing of the instance is complete (i.e. response received, error or timeout), the SM would set the event and the originating thread would run on with its SerialEvent instance filled in with data.
Next - the SerialEvent class is getting towards the point where it may be better to pool the instances rather than continually create/GC them. That would need another BlockingCollection to act as the pool.
You don't want to have multiple threads trying to read your serial port. You should have a single thread that does nothing but read the port. When it gets some data, it puts a message in a queue or similar data structure that can be processed by the multiple sample threads. This way your single reader thread can reliably find and react to the RESET messages.
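A minimal sketch of that shape, with illustrative names (_running, _resetNeeded): one reader thread owns all reads on _alPort and hands complete lines to a BlockingCollection that the sampling code consumes:
private readonly BlockingCollection<string> _incoming = new BlockingCollection<string>(); // System.Collections.Concurrent
private volatile bool _running = true;

private void ReaderLoop() // the only thread that ever reads from _alPort
{
    while (_running)
    {
        try
        {
            string line = _alPort.ReadLine();   // ReadTimeout should be set on the port
            if (line.ToUpper().Contains("RESET"))
                _resetNeeded.Set();             // e.g. an event a supervisor thread waits on to re-initialize
            else
                _incoming.Add(line);            // normal responses go to whoever is waiting
        }
        catch (TimeoutException) { }            // nothing arrived; loop and re-check _running
    }
}

// A sampling thread then waits for its reply like this:
// string response;
// bool gotReply = _incoming.TryTake(out response, 1000);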
As title implies.
Yes, I know it's horribad to use .Abort(), but hear me out.
I'm using 2 threads: the main thread (of my app) and a socket listen thread (familiar sound, anyone?).
Instead of using asynchronous .AcceptAsync() calls (tbh, the main reason is that I haven't looked too much into them), I have the thread just hang on socket.Accept();
Of course, when I call thread.Abort(), the thread doesn't close because it's still waiting for a connection; once it passes Accept() it'll abort just fine.
Code:
void listenserver()
{
while (run)
{
fConsole.WriteLine("Waiting for connections..");
connectedsock = mainsock.Accept();
connected = true;
fConsole.WriteLine("Got connection from: " + connectedsock.RemoteEndPoint);
...
and elsewhere:
private void button_start_Click(object sender, EventArgs e)
{
if (!run)
{ //code omitted.
}
else
{
run = false;
listenthread.Join(3000);
if (listenthread.IsAlive)
{
fConsole.WriteLine("Force-closing rogue listen thread");
listenthread.Abort();
}
button_start.Text = "Start";
groupBox_settings.Enabled = true;
}
Is there any way of assuring the thread will end, short of stuffing the whole thing into a separate app and then ending that?
Note that I DO have thread.IsBackground set to true (as suggested in other forum threads); it doesn't make any difference though.
Since you're already using a thread, you might as well just use BeginAccept. Async code doesn't need to complicate your code, since you can use lambdas like this:
var socket = new Socket(...);
socket.BeginAccept(result =>
{
if (this.abort)
{
// We should probably use a signal, but either way, this is where we abort.
return;
}
socket.EndAccept(result);
// Do your sockety stuff
}, null);
Even with a separate method definition for the AsyncCallback, the code isn't complex.
By doing async IO you're also being a lot more efficient with the CPU time, since between the call to BeginAccept and EndAccept, the thread can be reused for other processing. Since you're not using it for anything purposeful while waiting for the connection, holding up a thread is pretty meaningless and inefficient.
super simple question, but I just wanted some clarification. I want to be able to restart a thread using AutoResetEvent, so I call the following sequence of methods to my AutoResetEvent.
setupEvent.Reset();
setupEvent.Set();
I know it's really obvious, but MSDN doesn't state in their documentation that the Reset method restarts the thread, just that it sets the state of the event to non-signaled.
UPDATE:
Yes, the other thread is waiting at WaitOne(); I'm assuming when it gets signaled it will resume at the exact point it left off, which is what I don't want - I want it to restart from the beginning. The following example from this valuable resource illustrates this:
static void Main()
{
new Thread (Work).Start();
_ready.WaitOne(); // First wait until worker is ready
lock (_locker) _message = "ooo";
_go.Set(); // Tell worker to go
_ready.WaitOne();
lock (_locker) _message = "ahhh"; // Give the worker another message
_go.Set();
_ready.WaitOne();
lock (_locker) _message = null; // Signal the worker to exit
_go.Set();
}
static void Work()
{
while (true)
{
_ready.Set(); // Indicate that we're ready
_go.WaitOne(); // Wait to be kicked off...
lock (_locker)
{
if (_message == null) return; // Gracefully exit
Console.WriteLine (_message);
}
}
}
If I understand this example correctly, notice how the Main thread will resume where it left off when the Work thread signals it, but in my case, I would want the Main thread to restart from the beginning.
UPDATE 2:
@Jaroslav Jandek - It's quite involved, but basically I have a CopyDetection thread that runs a FileSystemWatcher to monitor a folder for any new files that are moved or copied into it. My second thread is responsible for replicating the structure of that particular folder into another folder. So my CopyDetection thread has to block that thread from working while a copy/move operation is in progress. When the operation completes, the CopyDetection thread restarts the second thread so it can re-duplicate the folder structure with the newly added files.
UPDATE 3:
@SwDevMan81 - I actually didn't think about that, and it would work save for one caveat. In my program, the source folder that is being duplicated is emptied once the duplication process is complete. That's why I have to block and restart the second thread when new items are added to the source folder, so it can have a chance to re-parse the folder's new structure properly.
To address this, I'm thinking of maybe adding a flag that signals that it is safe to delete the source folder's contents. Guess I could put the delete operation on its own Cleanup thread.
@Jaroslav Jandek - My apologies, I thought it would be a simple matter to restart a thread on a whim. To answer your questions: I'm not deleting the source folder, only its contents; it's a requirement by my employer that unfortunately I cannot change. Files in the source folder are getting moved, but not all of them - only files that are properly validated by another process; the rest must be purged, i.e. the source folder is emptied. Also, the reason for replicating the source folder structure is that some of the files are contained within a very strict sub-folder hierarchy that must be preserved in the destination directory. Again, sorry for making it complicated. All of these mechanisms are in place, have been tested and are working, which is why I didn't feel the need to elaborate on them. I only need to detect when new files are added so I may properly halt the other processes while the copy/move operation is in progress; then I can safely replicate the source folder structure and resume processing.
So thread 1 monitors and thread 2 replicates while other processes modify the monitored files.
Concurrent file access aside, you can't continue replicating after a change. So a successful replication only occurs when there is long enough delay between modifications. Replication cannot be stopped immediately since you replicate in chunks.
So the result of monitoring should be a command (file copy, file delete, file move, etc.).
The result of a successful replication should be an execution of a command.
Considering multiple operations can occur, you need a queue (or queued dictionary - to only perform 1 command on a file) of commands.
// T1:
somethingChanged(string path, CT commandType)
{
commandQueue.AddCommand(path, commandType);
}
// T2:
while (whatever)
{
var command = commandQueue.Peek();
if (command.Execute()) commandQueue.Remove();
else { /* operation failed, do what you like. */ }
}
Now you may ask how to create a thread-safe queue, but that probably belongs to another question (there are many implementations on the web).
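For what it's worth, a sketch of such a queue using ConcurrentQueue (the Command type and its Execute method are hypothetical, matching the pseudocode above):
using System.Collections.Concurrent;

class CommandQueue
{
    private readonly ConcurrentQueue<Command> _queue = new ConcurrentQueue<Command>();

    // T1 (monitoring thread):
    public void AddCommand(string path, CT commandType)
    {
        _queue.Enqueue(new Command(path, commandType));
    }

    // T2 (replication thread, single consumer):
    public void ProcessPending()
    {
        Command command;
        while (_queue.TryPeek(out command))
        {
            if (command.Execute()) _queue.TryDequeue(out command); // remove only on success
            else break;                                            // operation failed, do what you like
        }
    }
}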
EDIT (queue-less version with whole dir replication - can be used with query):
If you do not need multiple operations (e.g. always replicating the whole directory) and expect the replication to always finish, or fail and cancel, you can do:
private volatile bool shouldStop = true;
// T1:
directoryChanged()
{
// StopReplicating
shouldStop = true;
workerReady.WaitOne(); // Wait for the worker to stop replicating.
// StartReplicating
shouldStop = false;
replicationStarter.Set();
}
// T2:
while (whatever)
{
replicationStarter.WaitOne();
... // prepare, throw some shouldStops so worker does not have to work too much.
if (!shouldStop)
{
foreach (var file in files)
{
if (shouldStop) break;
// Copy the file or whatever.
}
}
workerReady.Set();
}
I think this example clarifies (to me anyway) how reset events work:
var resetEvent = new ManualResetEvent(false);
var myclass = new MyAsyncClass();
myclass.MethodFinished += delegate
{
resetEvent.Set();
};
myclass.StartAsyncMethod();
resetEvent.WaitOne(); //We want to wait until the event fires to go on
Assume that MyAsyncClass runs the method on a another thread and fires the event when complete.
This basically turns the asynchronous "StartAsyncMethod" into a synchronous one. Many times I find a real-life example more useful.
The main difference between AutoResetEvent and ManualResetEvent is that AutoResetEvent doesn't require you to call Reset(): it automatically sets the state back to "false" after releasing a waiter. The next call to WaitOne() blocks whenever the state is "false", i.e. after an auto-reset or after Reset() has been called.
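A tiny illustration of that difference:
var auto = new AutoResetEvent(false);
auto.Set();       // signaled
auto.WaitOne();   // passes, and the event automatically flips back to non-signaled
// auto.WaitOne() would now block until someone calls Set() again

var manual = new ManualResetEvent(false);
manual.Set();     // signaled
manual.WaitOne(); // passes
manual.WaitOne(); // still passes - it stays signaled until Reset() is called
manual.Reset();   // now WaitOne() would block again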
You just need to make it loop like the other Thread does. Is this what you are looking for?
class Program
{
static AutoResetEvent _ready = new AutoResetEvent(false);
static AutoResetEvent _go = new AutoResetEvent(false);
static Object _locker = new Object();
static string _message = "Start";
static AutoResetEvent _exitClient = new AutoResetEvent(false);
static AutoResetEvent _exitWork = new AutoResetEvent(false);
static void Main()
{
new Thread(Work).Start();
new Thread(Client).Start();
Thread.Sleep(3000); // Run for 3 seconds then finish up
_exitClient.Set();
_exitWork.Set();
_ready.Set(); // Make sure were not blocking still
_go.Set();
}
static void Client()
{
List<string> messages = new List<string>() { "ooo", "ahhh", null };
int i = 0;
while (!_exitClient.WaitOne(0)) // Gracefully exit if triggered
{
_ready.WaitOne(); // First wait until worker is ready
lock (_locker) _message = messages[i++];
_go.Set(); // Tell worker to go
if (i == 3) { i = 0; }
}
}
static void Work()
{
while (!_exitWork.WaitOne(0)) // Gracefully exit if triggered
{
_ready.Set(); // Indicate that we're ready
_go.WaitOne(); // Wait to be kicked off...
lock (_locker)
{
if (_message != null)
{
Console.WriteLine(_message);
}
}
}
}
}
Here I am again with questions about multi-threading and an exercise of my Concurrent Programming class.
I have a multi-threaded server - implemented using .NET Asynchronous Programming Model - with GET (download) and PUT (upload) file services. This part is done and tested.
It happens that the statement of the problem says this server must have logging activity with the minimum impact on the server response time, and it should be supported by a low priority thread - logger thread - created for this effect. All logging messages shall be passed by the threads that produce them to this logger thread, using a communication mechanism that may not lock the thread that invokes it (besides the necessary locking to ensure mutual exclusion) and assuming that some logging messages may be ignored.
Here is my current solution, please help validating if this stands as a solution to the stated problem:
using System;
using System.IO;
using System.Threading;
// Multi-threaded Logger
public class Logger {
// textwriter to use as logging output
protected readonly TextWriter _output;
// logger thread
protected Thread _loggerThread;
// logger thread wait timeout
protected int _timeOut = 500; //500ms
// amount of log requests attended
protected volatile int reqNr = 0;
// logging queue
protected readonly object[] _queue;
protected struct LogObj {
public DateTime _start;
public string _msg;
public LogObj(string msg) {
_start = DateTime.Now;
_msg = msg;
}
public LogObj(DateTime start, string msg) {
_start = start;
_msg = msg;
}
public override string ToString() {
return String.Format("{0}: {1}", _start, _msg);
}
}
public Logger(int dimension,TextWriter output) {
/// initialize queue with parameterized dimension
this._queue = new object[dimension];
// initialize logging output
this._output = output;
// initialize logger thread
Start();
}
public Logger() {
// initialize queue with 10 positions
this._queue = new object[10];
// initialize logging output to use console output
this._output = Console.Out;
// initialize logger thread
Start();
}
public void Log(string msg) {
lock (this) {
for (int i = 0; i < _queue.Length; i++) {
// seek for the first available position on queue
if (_queue[i] == null) {
// insert pending log into queue position
_queue[i] = new LogObj(DateTime.Now, msg);
// notify logger thread for a pending log on the queue
Monitor.Pulse(this);
break;
}
// if there aren't any available positions on logging queue, this
// log is not considered and the thread returns
}
}
}
public void GetLog() {
lock (this) {
while(true) {
for (int i = 0; i < _queue.Length; i++) {
// seek all occupied positions on queue (those who have logs)
if (_queue[i] != null) {
// log
LogObj obj = (LogObj)_queue[i];
// makes this position available
_queue[i] = null;
// print log into output stream
_output.WriteLine(String.Format("[Thread #{0} | {1}ms] {2}",
Thread.CurrentThread.ManagedThreadId,
DateTime.Now.Subtract(obj._start).TotalMilliseconds,
obj.ToString()));
}
}
// after printing all pending log's (or if there aren't any pending log's),
// the thread waits until another log arrives
//Monitor.Wait(this, _timeOut);
Monitor.Wait(this);
}
}
}
// Starts logger thread activity
public void Start() {
// Create the thread object, passing in the Logger.Start method
// via a ThreadStart delegate. This does not start the thread.
_loggerThread = new Thread(this.GetLog);
_loggerThread.Priority = ThreadPriority.Lowest;
_loggerThread.Start();
}
// Stops logger thread activity
public void Stop() {
_loggerThread.Abort();
_loggerThread = null;
}
// Increments number of attended log requests
public void IncReq() { reqNr++; }
}
Basically, here are the main points of this code:
Start a low-priority thread that loops over the logging queue and prints pending logs to the output. After this, the thread is suspended till a new log arrives;
When a log arrives, the logger thread is awakened and does its work.
Is this solution thread-safe? I have been reading about the Producers-Consumers problem and its solution algorithms, but in this problem, although I have multiple producers, I only have one reader.
It seems it should be working. Producers-Consumers shouldn't change greatly in the case of a single consumer. Little nitpicks:
acquiring a lock may be an expensive operation (as @Vitaliy Lipchinsky says). I'd recommend benchmarking your logger against a naive 'write-through' logger and a logger using interlocked operations. Another alternative would be exchanging the existing queue with an empty one in GetLog and leaving the critical section immediately; that way none of the producers will be blocked by long operations in the consumer (see the sketch after this list).
make LogObj a reference type (class). There's no point in making it a struct since you are boxing it anyway. Or else make the _queue field of type LogObj[] (that's better anyway).
make your thread a background thread so that it won't prevent your program from closing if Stop isn't called.
Flush your TextWriter. Otherwise you risk losing even those records that managed to fit in the queue (10 items is a bit small, IMHO).
Implement IDisposable and/or a finalizer. Your logger owns a thread and a text writer, and those should be freed (and flushed - see above).
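A rough sketch of the queue-swap idea from the first point, using a Queue<LogObj> and hypothetical _sync/_pending fields in place of the array:
// Producers only hold the lock long enough to enqueue; the logger thread
// swaps the whole queue out and writes without holding the lock.
public void Log(string msg)
{
    lock (_sync)
    {
        _pending.Enqueue(new LogObj(DateTime.Now, msg));
        Monitor.Pulse(_sync);
    }
}

private void GetLog()
{
    while (true)
    {
        Queue<LogObj> batch;
        lock (_sync)
        {
            while (_pending.Count == 0) Monitor.Wait(_sync);
            batch = _pending;               // take the whole queue...
            _pending = new Queue<LogObj>(); // ...and leave an empty one behind
        }
        foreach (LogObj obj in batch) _output.WriteLine(obj.ToString());
        _output.Flush();
    }
}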
While it appears to be thread-safe, I don't believe it is particularly optimal. I would suggest a solution along these lines
NOTE: just read the other responses. What follows is a fairly optimal, optimistic-locking solution based on your own. The major differences are locking on an internal object, minimizing 'critical sections', and providing graceful thread termination. If you want to avoid locking altogether, then you can try some of that volatile "non-locking" linked list stuff that @Vitaliy Lipchinsky suggests.
using System.Collections.Generic;
using System.Linq;
using System.Threading;
...
public class Logger
{
// BEST PRACTICE: private synchronization object.
// lock on _syncRoot - you should have one for each critical
// section - to avoid locking on public 'this' instance
private readonly object _syncRoot = new object ();
// synchronization device for stopping our log thread.
// initialized to unsignaled state - when set to signaled
// we stop!
private readonly AutoResetEvent _isStopping =
new AutoResetEvent (false);
// use a Queue<>, cleaner and less error prone than
// manipulating an array. btw, check your indexing
// on your array queue, while starvation will not
// occur in your full pass, ordering is not preserved
private readonly Queue<LogObj> _queue = new Queue<LogObj>();
...
public void Log (string message)
{
// you want to lock ONLY when absolutely necessary
// which in this case is accessing the ONE resource
// of _queue.
lock (_syncRoot)
{
_queue.Enqueue (new LogObj (DateTime.Now, message));
}
}
public void GetLog ()
{
// while not stopping
//
// NOTE: _loggerThread is polling. to increase poll
// interval, increase wait period. for a more event
// driven approach, consider using another
// AutoResetEvent at end of loop, and signal it
// from Log() method above
for (; !_isStopping.WaitOne(1); )
{
List<LogObj> logs = null;
// again lock ONLY when you need to. because our log
// operations may be time-intensive, we do not want
// to block pessimistically. what we really want is
// to dequeue all available messages and release the
// shared resource.
lock (_syncRoot)
{
// copy messages for local scope processing!
//
// NOTE: .Net3.5 extension method. if not available
// logs = new List<LogObj> (_queue);
logs = _queue.ToList ();
// clear the queue for new messages
_queue.Clear ();
// release!
}
foreach (LogObj log in logs)
{
// do your thang
...
}
}
}
}
...
public void Stop ()
{
// graceful thread termination. give threads a chance!
_isStopping.Set ();
_loggerThread.Join (100);
if (_loggerThread.IsAlive)
{
_loggerThread.Abort ();
}
_loggerThread = null;
}
Actually, you ARE introducing locking here. You have locking while pushing a log entry to the queue (the Log method): if 10 threads simultaneously pushed 10 items into the queue and woke up the logger thread, then the 11th thread would have to wait until the logger thread logs all the items...
If you want something really scalable, implement a lock-free queue (an example idea is below). With a lock-free queue the synchronization mechanism will be really straightforward (you can even use a single wait handle for notifications).
If you can't manage to find a lock-free queue implementation on the web, here is an idea of how to do it:
Use a linked list for the implementation. Each node in the linked list contains a value and a volatile reference to the next node; therefore, for the enqueue and dequeue operations you can use the Interlocked.CompareExchange method. I hope the idea is clear. If not, let me know and I'll provide more details.
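For the single-consumer logger case a very small variation of that idea is enough: producers push nodes onto a singly linked list with Interlocked.CompareExchange, and the logger thread grabs the whole list in one Interlocked.Exchange. A sketch (the _logAvailable wait handle is hypothetical):
class Node { public LogObj Value; public Node Next; }

private Node _head; // most recently pushed entry, or null

// Producer side (any thread): lock-free push.
public void Log(LogObj entry)
{
    var node = new Node { Value = entry };
    Node oldHead;
    do
    {
        oldHead = _head;
        node.Next = oldHead;
    } while (Interlocked.CompareExchange(ref _head, node, oldHead) != oldHead);
    _logAvailable.Set(); // single wait handle the logger thread sleeps on
}

// Consumer side (logger thread only): take everything at once.
private void Drain()
{
    Node node = Interlocked.Exchange(ref _head, null);
    // The list comes out newest-first; reverse it first if ordering matters.
    for (; node != null; node = node.Next)
        _output.WriteLine(node.Value.ToString());
}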
I'm just doing a thought experiment here, since I don't have time to actually try code right now, but I think you can do this without locks at all if you're creative.
Have your logging class contain a method that allocates a queue and a semaphore each time it's called (and another that deallocates the queue and semaphore when the thread is done). The threads that want to do logging will call this method when they start. When they want to log, they push the message onto their own queue and set the semaphore. The logger thread has a big loop that runs through the queues and checks the associated semaphores. If the semaphore associated with the queue is greater than zero, then the queue gets popped off and the semaphore decremented.
Because you're not attempting to pop things off the queue until after the semaphore is set, and you're not setting the semaphore until after you've pushed things onto the queue, I think this will be safe. According to the MSDN documentation for the queue class, if you are enumerating the queue and another thread modifies the collection, an exception is thrown. Catch that exception and you should be good.