I'm having some issues with ManualResetEvent and a backlog of events. My application subscribes to messages and then runs a long task.
The issue is that I am receiving more messages than I can process: the task takes around 5 s, but I'm receiving a new message every 2-3 s.
Ideally, I want to ignore any new events until I've finished processing the current task, then start 'listening' again. At present I appear to be backlogging the events in the order they were received. As you can imagine, after a couple of hours the message being processed is very old.
I cannot run the long-running task on multiple threads.
Maybe I need some kind of queuing mechanism where I take only the latest message (last in, first out) and then clear the queue?
Any ideas?
I'm also calling ManualResetEvent.Set() at the end of the long-running process - from my research I understand this is correct? Should I Reset() at the beginning of the long-running task to cause the thread to block, then Set() at the end?
Create a circular buffer that you treat as a LIFO queue (a stack). So, say you want a maximum of 10 entries in the queue:
const int MaxItems = 10;
Item[] theQueue = new Item[MaxItems]; // slots start out null; null means "empty slot"
int insertPoint = 0;
object myLock = new object();

void Enqueue(Item t)
{
    lock (myLock)
    {
        // Overwrites the oldest slot once the buffer is full.
        theQueue[insertPoint] = t;
        insertPoint = (insertPoint + 1) % MaxItems;
    }
}

Item Dequeue()
{
    lock (myLock)
    {
        int takeFrom = insertPoint - 1;
        if (takeFrom < 0)
            takeFrom = MaxItems - 1;
        if (theQueue[takeFrom] != null)
        {
            var rslt = theQueue[takeFrom];
            theQueue[takeFrom] = null; // clear the slot so stale items aren't returned after wrap-around
            insertPoint = takeFrom;
            return rslt;
        }
        // Queue is empty. Either return null or throw an exception.
        return null;
    }
}
Of course you'll want to wrap that all up into a nice object. But that's the basic idea.
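A quick usage sketch of the behavior (msgA and msgB are assumed Item instances):

Enqueue(msgA);
Enqueue(msgB);
var newest = Dequeue(); // msgB: the most recent message comes out first (LIFO)
var older  = Dequeue(); // msgA
var none   = Dequeue(); // null: the buffer is drained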
How about this:
At any given time, you have one message being processed, and one message waiting to be processed.
When you receive a new message, you overwrite the message waiting to be processed.
Your processing thread waits until there is a message waiting to be processed, marks it as being processed, processes it, then starts again.
Add some syncing logic, and you get this code:
private object _sync = new object();
private Message _beingProcessed;
private Message _waitingToBeProcessed;

public void OnMessageReceived(Message message)
{
    lock (_sync)
    {
        // Overwrite whatever was waiting; only the newest message survives.
        _waitingToBeProcessed = message;
        Monitor.Pulse(_sync);
    }
}

public void DoWork()
{
    while (true)
    {
        lock (_sync)
        {
            while (_waitingToBeProcessed == null)
            {
                Monitor.Wait(_sync);
            }
            _beingProcessed = _waitingToBeProcessed;
            _waitingToBeProcessed = null;
        }
        Process(_beingProcessed); // Do the actual work
    }
}
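To wire this up, the worker just needs to be started once; a minimal sketch (the background-thread choice is an assumption):

// Start the worker at startup; it blocks in Monitor.Wait until the first message arrives.
var worker = new Thread(DoWork) { IsBackground = true };
worker.Start();
// Then register OnMessageReceived as your message callback.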
Related
I know that asynchronous programming has seen a lot of changes over the years. I'm somewhat embarrassed that I let myself get this rusty at just 34 years old, but I'm counting on StackOverflow to bring me up to speed.
What I am trying to do is manage a queue of "work" on a separate thread, but in such a way that only one item is processed at a time. I want to post work on this thread and it doesn't need to pass anything back to the caller. Of course I could simply spin up a new Thread object and have it loop over a shared Queue object, using sleeps, interrupts, wait handles, etc. But I know things have gotten better since then. We have BlockingCollection, Task, async/await, not to mention NuGet packages that probably abstract a lot of that.
I know that "What's the best..." questions are generally frowned upon so I'll rephrase it by saying "What is the currently recommended..." way to accomplish something like this using built-in .NET mechanisms preferably. But if a third party NuGet package simplifies things a bunch, it's just as well.
I considered a TaskScheduler instance with a fixed maximum concurrency of 1, but it seems there is probably a much less clunky way to do that by now.
Background
Specifically, what I am trying to do in this case is queue an IP geolocation task during a web request. The same IP might wind up getting queued for geolocation multiple times, but the task will know how to detect that and skip out early if it's already been resolved. The request handler is just going to throw these () => LocateAddress(context.Request.UserHostAddress) calls into a queue and let the LocateAddress method handle duplicate work detection. The geolocation API I am using doesn't like to be bombarded with requests, which is why I want to limit it to a single concurrent task at a time. However, it would be nice if the approach could easily scale to more concurrent tasks with a simple parameter change.
To create an asynchronous single-degree-of-parallelism queue of work, you can simply create a SemaphoreSlim initialized to one, and then have the enqueuing method await the acquisition of that semaphore before starting the requested work.
public class TaskQueue
{
    private readonly SemaphoreSlim semaphore;

    public TaskQueue()
    {
        // One slot: at most one task generator runs at a time.
        semaphore = new SemaphoreSlim(1);
    }

    public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
    {
        // Wait our turn; callers queue up on the semaphore.
        await semaphore.WaitAsync();
        try
        {
            return await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }

    public async Task Enqueue(Func<Task> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
Of course, to have a fixed degree of parallelism other than one, simply initialize the semaphore to some other number.
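For example, a usage sketch (assuming LocateAddressAsync is an async, Task<T>-returning version of the work from the question):

var taskQueue = new TaskQueue();
// Each call starts only after the previous one has finished.
var result = await taskQueue.Enqueue(() => LocateAddressAsync(address));
// For N-way parallelism, initialize with: semaphore = new SemaphoreSlim(n);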
Your best option as I see it is using TPL Dataflow's ActionBlock:
var actionBlock = new ActionBlock<string>(address =>
{
if (!IsDuplicate(address))
{
LocateAddress(address);
}
});
actionBlock.Post(context.Request.UserHostAddress);
TPL Dataflow is a robust, thread-safe, async-ready, and very configurable actor-based framework (available as a NuGet package).
Here's a simple example for a more complicated case. Let's assume you want to:
Enable concurrency (limited to the available cores).
Limit the queue size (so you won't run out of memory).
Have both LocateAddress and the queue insertion be async.
Cancel everything after an hour.
var actionBlock = new ActionBlock<string>(async address =>
{
if (!IsDuplicate(address))
{
await LocateAddressAsync(address);
}
}, new ExecutionDataflowBlockOptions
{
BoundedCapacity = 10000,
MaxDegreeOfParallelism = Environment.ProcessorCount,
CancellationToken = new CancellationTokenSource(TimeSpan.FromHours(1)).Token
});
await actionBlock.SendAsync(context.Request.UserHostAddress);
Actually you don't need to run the tasks on one thread; you need them to run serially (one after another) and FIFO. TPL doesn't have a class for that, but here is my very lightweight, non-blocking implementation with tests: https://github.com/Gentlee/SerialQueue
@Servy's implementation is also benchmarked there; the tests show it is twice as slow as mine, and it doesn't guarantee FIFO.
Example:
private readonly SerialQueue queue = new SerialQueue();
async Task SomeAsyncMethod()
{
var result = await queue.Enqueue(DoSomething);
}
Use BlockingCollection<Action> to create a producer/consumer pattern with one consumer (only one thing running at a time like you want) and one or many producers.
First define a shared queue somewhere:
BlockingCollection<Action> queue = new BlockingCollection<Action>();
In your consumer Thread or Task you take from it:
// This will block until there's an item available.
Action itemToRun = queue.Take();
itemToRun(); // run the work, then loop back to Take() for the next item
Then from any number of producers on other threads, simply add to the queue:
queue.Add(() => LocateAddress(context.Request.UserHostAddress));
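Putting the pieces together, a minimal end-to-end sketch (GetConsumingEnumerable is an equivalent way to write the consumer loop):

BlockingCollection<Action> queue = new BlockingCollection<Action>();

// Single consumer: executes queued actions one at a time, in FIFO order.
Task.Run(() =>
{
    foreach (Action action in queue.GetConsumingEnumerable())
    {
        action();
    }
});

// Any number of producers, from any thread:
queue.Add(() => LocateAddress(context.Request.UserHostAddress));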
I'm posting a different solution here. To be honest I'm not sure whether this is a good solution.
I'm used to using BlockingCollection to implement a producer/consumer pattern, with a dedicated thread consuming the items. That's fine when there is always data coming in and the consumer thread won't sit there doing nothing.
I encountered a scenario where one of the applications would like to send emails on a different thread, but the total number of emails is not that big.
My initial solution was to have a dedicated consumer thread (created by Task.Run()), but a lot of the time it just sits there and does nothing.
Old solution:
private readonly BlockingCollection<EmailData> _Emails =
new BlockingCollection<EmailData>(new ConcurrentQueue<EmailData>());
// producer can add data here
public void Add(EmailData emailData)
{
_Emails.Add(emailData);
}
public void Run()
{
// create a consumer thread
Task.Run(() =>
{
foreach (var emailData in _Emails.GetConsumingEnumerable())
{
SendEmail(emailData);
}
});
}
// sending email implementation
private void SendEmail(EmailData emailData)
{
throw new NotImplementedException();
}
As you can see, if there are not enough emails to be sent (which is my case), the consumer thread will spend most of its time sitting there doing nothing at all.
I changed my implementation to:
// Start with an already-completed task so the first continuation runs immediately.
private Task _SendEmailTask = Task.Run(() => {});
private readonly object _sendLock = new object();

// Caller dispatches the email to here. ContinueWith will use a thread-pool
// thread (different from the _SendEmailTask thread) to send this email.
// The lock prevents two concurrent callers from chaining onto the same
// antecedent and running in parallel.
private void Add(EmailData emailData)
{
    lock (_sendLock)
    {
        _SendEmailTask = _SendEmailTask.ContinueWith((t) =>
        {
            SendEmail(emailData);
        });
    }
}
// actual implementation
private void SendEmail(EmailData emailData)
{
throw new NotImplementedException();
}
It's no longer a producer/consumer pattern, but it won't have a thread sitting there doing nothing; instead, every time there is an email to send, it uses a thread-pool thread to do it.
My library can:
Run queued items in random order
Handle multiple queues
Run prioritized items first
Re-queue items
Raise an event when all queues have completed
Cancel a running item, or cancel an item waiting to run
Dispatch events to the UI thread
public interface IQueue
{
bool IsPrioritize { get; }
bool ReQueue { get; }
/// <summary>
/// Don't use async.
/// </summary>
/// <returns></returns>
Task DoWork();
bool CheckEquals(IQueue queue);
void Cancel();
}
public delegate void QueueComplete<T>(T queue) where T : IQueue;
public delegate void RunComplete();
public class TaskQueue<T> where T : IQueue
{
readonly List<T> Queues = new List<T>();
readonly List<T> Runnings = new List<T>();
[Browsable(false), DefaultValue((string)null)]
public Dispatcher Dispatcher { get; set; }
public event RunComplete OnRunComplete;
public event QueueComplete<T> OnQueueComplete;
int _MaxRun = 1;
public int MaxRun
{
get { return _MaxRun; }
set
{
bool flag = value > _MaxRun;
_MaxRun = value;
if (flag && Queues.Count != 0) RunNewQueue();
}
}
public int RunningCount
{
get { return Runnings.Count; }
}
public int QueueCount
{
get { return Queues.Count; }
}
public bool RunRandom { get; set; } = false;
//need lock Queues first
void StartQueue(T queue)
{
if (null != queue)
{
Queues.Remove(queue);
lock (Runnings) Runnings.Add(queue);
queue.DoWork().ContinueWith(ContinueTaskResult, queue);
}
}
void RunNewQueue()
{
lock (Queues)//Prioritize
{
foreach (var q in Queues.Where(x => x.IsPrioritize)) StartQueue(q);
}
if (Runnings.Count >= MaxRun) return;//other
else if (Queues.Count == 0)
{
if (Runnings.Count == 0 && OnRunComplete != null)
{
if (Dispatcher != null && !Dispatcher.CheckAccess()) Dispatcher.Invoke(OnRunComplete);
else OnRunComplete.Invoke();//on completed
}
else return;
}
else
{
lock (Queues)
{
T queue;
if (RunRandom) queue = Queues.OrderBy(x => Guid.NewGuid()).FirstOrDefault();
else queue = Queues.FirstOrDefault();
StartQueue(queue);
}
if (Queues.Count > 0 && Runnings.Count < MaxRun) RunNewQueue();
}
}
void ContinueTaskResult(Task Result, object queue_obj) => QueueCompleted((T)queue_obj);
void QueueCompleted(T queue)
{
lock (Runnings) Runnings.Remove(queue);
if (queue.ReQueue) lock (Queues) Queues.Add(queue);
if (OnQueueComplete != null)
{
if (Dispatcher != null && !Dispatcher.CheckAccess()) Dispatcher.Invoke(OnQueueComplete, queue);
else OnQueueComplete.Invoke(queue);
}
RunNewQueue();
}
public void Add(T queue)
{
if (null == queue) throw new ArgumentNullException(nameof(queue));
lock (Queues) Queues.Add(queue);
RunNewQueue();
}
public void Cancel(T queue)
{
if (null == queue) throw new ArgumentNullException(nameof(queue));
lock (Queues) Queues.RemoveAll(o => o.CheckEquals(queue));
lock (Runnings) Runnings.ForEach(o => { if (o.CheckEquals(queue)) o.Cancel(); });
}
public void Reset(T queue)
{
if (null == queue) throw new ArgumentNullException(nameof(queue));
Cancel(queue);
Add(queue);
}
public void ShutDown()
{
MaxRun = 0;
lock (Queues) Queues.Clear();
lock (Runnings) Runnings.ForEach(o => o.Cancel());
}
}
I know this thread is old, but it seems all the present solutions are extremely onerous. The simplest way I could find uses the LINQ Aggregate function to create a daisy-chained list of tasks.
var arr = new int[] { 1, 2, 3, 4, 5};
var queue = arr.Aggregate(Task.CompletedTask,
(prev, item) => prev.ContinueWith(antecedent => PerformWorkHere(item)));
The idea is to get your data into an IEnumerable (I'm using an int array) and then reduce that enumerable to a chain of tasks, starting with a default, completed task.
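If the per-item work is itself asynchronous, the same idea works with Unwrap, so each link completes only when its async work does (a sketch; PerformWorkHereAsync is an assumed async counterpart of the method above):

var queueAsync = arr.Aggregate(Task.CompletedTask,
    (prev, item) => prev.ContinueWith(_ => PerformWorkHereAsync(item)).Unwrap());
await queueAsync; // completes once the whole chain has run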
I have a MessagesManager thread to which different threads may send messages; this MessagesManager thread is then responsible for publishing those messages inside SendMessageToTcpIP() (the entry point of the MessagesManager thread).
class MessagesManager : IMessageNotifier
{
private readonly AutoResetEvent _waitTillMessageQueueEmptyARE = new AutoResetEvent(false);
private ConcurrentQueue<string> MessagesQueue = new ConcurrentQueue<string>();
public void PublishMessage(string Message)
{
MessagesQueue.Enqueue(Message);
_waitTillMessageQueueEmptyARE.Set();
}
public void SendMessageToTcpIP()
{
//keep waiting till a new message comes
while (MessagesQueue.Count() == 0)
{
_waitTillMessageQueueEmptyARE.WaitOne();
}
//Copy the concurrent queue into a local queue - keep dequeuing items and inserting them into the local queue
Queue<string> localMessagesQueue = new Queue<string>();
while (!MessagesQueue.IsEmpty)
{
string message;
bool isRemoved = MessagesQueue.TryDequeue(out message);
if (isRemoved)
localMessagesQueue.Enqueue(message);
}
//Use the Local Queue for further processing
while (localMessagesQueue.Count() != 0)
{
TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
Thread.Sleep(2000);
}
}
}
The different threads (3-4) send their messages by calling PublishMessage(string Message) (using the same MessagesManager object). Once a message comes in, I push it into the concurrent queue and notify SendMessageToTcpIP() by calling _waitTillMessageQueueEmptyARE.Set(). Inside SendMessageToTcpIP(), I copy the messages from the concurrent queue into a local queue and then publish them one by one.
QUESTIONS: Is it thread safe to do the enqueuing and dequeuing this way? Could there be some strange effects because of it?
While this is probably thread-safe, there are built-in classes in .NET to help with "many publishers one consumer" pattern, like BlockingCollection. You can rewrite your class like this:
class MessagesManager : IDisposable {
    // note that your ConcurrentQueue is still in play, passed to the constructor
    private readonly BlockingCollection<string> MessagesQueue = new BlockingCollection<string>(new ConcurrentQueue<string>());
    private readonly Thread _sendThread;

    public MessagesManager() {
        // start the consumer thread here
        _sendThread = new Thread(SendLoop) {
            IsBackground = true
        };
        _sendThread.Start();
    }

    public void PublishMessage(string Message) {
        // no need to notify here, that is done for you
        MessagesQueue.Add(Message);
    }

    private void SendLoop() {
        // this blocks until new items are available
        foreach (var item in MessagesQueue.GetConsumingEnumerable()) {
            // ensure that you handle exceptions here, or the whole thing will break on exception
            TcpIpMessageSenderClient.ConnectAndSendMessage(item.PadRight(80, ' '));
            Thread.Sleep(2000); // only if you are sure this is required
        }
    }

    public void Dispose() {
        // this "completes" GetConsumingEnumerable, so the consumer drains and exits
        MessagesQueue.CompleteAdding();
        _sendThread.Join(); // wait for the drain before disposing the collection
        MessagesQueue.Dispose();
    }
}
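A usage sketch:

var manager = new MessagesManager();
manager.PublishMessage("PING"); // safe to call from any number of threads

// On shutdown: completes the queue, waits for the consumer to drain, then disposes.
manager.Dispose();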
.NET already provides ActionBlock<T>, which allows posting messages to a buffer and processing them asynchronously. By default, only one message is processed at a time.
Your code could be rewritten as:
//In an initialization function
ActionBlock<string> _hmiAgent = new ActionBlock<string>(async msg =>
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg.PadRight(80, ' '));
    await Task.Delay(2000);
});
//In some other thread ...
foreach ( ....)
{
_hmiAgent.Post(someMessage);
}
// When the application closes
_hmiAgent.Complete();
await _hmiAgent.Completion;
ActionBlock offers many benefits - you can specify a limit to the number of items it can accept in a buffer and specify that multiple messages can be processed in parallel. You can also combine multiple blocks in a processing pipeline. In a desktop application, a message can be posted to a pipeline in response to an event, get processed by separate blocks and results posted to a final block that updates the UI.
Padding, for example, could be performed by an intermediary TransformBlock<TIn,TOut>. This transformation is trivial and the cost of using the block is greater than that of the method, but it's just an illustration:
//In an initialization function
TransformBlock<string, string> _hmiAgent = new TransformBlock<string, string>(
    msg => msg.PadRight(80, ' '));

ActionBlock<string> _tcpBlock = new ActionBlock<string>(async msg =>
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg); // msg is already padded by the TransformBlock
    await Task.Delay(2000);
});

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
_hmiAgent.LinkTo(_tcpBlock, linkOptions);
The posting code doesn't change at all
_hmiAgent.Post(someMessage);
When the application terminates, we need to wait for the _tcpBlock to complete:
_hmiAgent.Complete();
await _tcpBlock.Completion;
Visual Studio 2015+ itself uses TPL Dataflow for such scenarios
Bar Arnon provides a better example in TPL Dataflow Is The Best Library You're Not Using, that shows how both synchronous and asynchronous methods can be used in a block.
The code is thread safe, since both ConcurrentQueue and AutoResetEvent are thread safe, and your strings are only ever read, never written to.
However, you have to make sure you call SendMessageToTcpIP in some sort of loop.
Otherwise, you have a dangerous race condition - some messages may get lost:
while (!MessagesQueue.IsEmpty)
{
string message;
bool isRemoved = MessagesQueue.TryDequeue(out message);
if (isRemoved)
localMessagesQueue.Enqueue(message);
}
//<<--- what happens if another thread enqueues a message here?
while (localMessagesQueue.Count() != 0)
{
TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
Thread.Sleep(2000);
}
Other than that, AutoResetEvent is an extremely heavy object: it uses a kernel object to synchronize threads, and every call is a system call, which may be costly. Consider using a user-mode synchronization primitive instead (doesn't .NET provide some sort of condition variable?).
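It does: Monitor.Wait/Monitor.Pulse act as a condition variable over an ordinary lock object. A minimal sketch of the same publish/consume pair written that way (just an alternative; the refactored snippet below keeps AutoResetEvent):

private readonly object _gate = new object();
private readonly Queue<string> _messages = new Queue<string>();

public void PublishMessage(string message)
{
    lock (_gate)
    {
        _messages.Enqueue(message);
        Monitor.Pulse(_gate); // wake the consumer; no kernel event object needed
    }
}

public void SendMessageToTcpIP()
{
    while (true)
    {
        string message;
        lock (_gate)
        {
            while (_messages.Count == 0)
                Monitor.Wait(_gate); // releases the lock while waiting
            message = _messages.Dequeue();
        }
        TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
    }
}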
This is a refactored code snippet showing how I would implement this functionality:
class MessagesManager {
private readonly AutoResetEvent messagesAvailableSignal = new AutoResetEvent(false);
private readonly ConcurrentQueue<string> messageQueue = new ConcurrentQueue<string>();
public void PublishMessage(string Message) {
messageQueue.Enqueue(Message);
messagesAvailableSignal.Set();
}
public void SendMessageToTcpIP() {
while (true) {
messagesAvailableSignal.WaitOne();
while (!messageQueue.IsEmpty) {
string message;
if (messageQueue.TryDequeue(out message)) {
TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
}
}
}
}
}
Points to note here:
This drains the queue completely: if there is at least one message, it will process all of them
The 2000 ms Thread.Sleep is removed
I want to have a FIFO Queue with the following requirements:
If queue is empty, wait for one element to be added
Start processing as soon as one element is in the Q
If elements pending in the Q are more than X, drop them.
I used a BlockingCollection like this:
public LoggerReal()
{
main = (frmMain)Application.OpenForms[0];
LogQueue = new BlockingCollection<logEntry>(GlobalSettings.LogQueueSize);
Task.Run(() => {
foreach (logEntry LE in LogQueue.GetConsumingEnumerable()) {
try {
ProcessLogEntry(LE);
} catch (Exception E) {
functions.Logger.log("Error processing logEntry" + E.Message, "LOGPROCESSING", LOGLEVEL.ERROR);
functions.printException(E);
}
}
functions.Logger.log("Exiting Queue Task", "LOGPROCESSING", LOGLEVEL.ERROR);
});
}
However, I noticed that the log entries only seemed to show up once the queue was full.
The ProcessLogEntry function simply puts them into a ListBox.
I tried using a simple queue, with no luck.
As far as I can tell, ConcurrentQueue and the other queue types might not fulfill these requirements, or am I wrong? I start the queue processor in a Task so it can wait forever; that is not an issue, but it needs to start processing as soon as data is available.
If I understand your requirements correctly, you can use a regular Queue<T> with simple Monitor-based signaling, like this:
Members:
private readonly int maxSize;
private readonly Queue<logEntry> logQueue;
private bool stopRequest;
Constructor:
maxSize = GlobalSettings.LogQueueSize;
logQueue = new Queue<logEntry>(maxSize);
Producer method:
public void Add(logEntry logEntry)
{
lock (logQueue)
{
if (stopRequest) return;
logQueue.Enqueue(logEntry);
if (logQueue.Count == 1)
Monitor.Pulse(logQueue);
}
}
Method to stop the process worker:
public void Stop()
{
lock (logQueue)
{
if (stopRequest) return;
stopRequest = true;
Monitor.Pulse(logQueue);
}
}
Process worker (the method called with Task.Run):
private void ProcessWorker()
{
while (true)
{
logEntry LE;
lock (logQueue)
{
while (!stopRequest && logQueue.Count == 0)
Monitor.Wait(logQueue);
if (stopRequest) break;
if (logQueue.Count > maxSize)
{
logQueue.Clear();
continue;
}
LE = logQueue.Dequeue();
}
try
{
ProcessLogEntry(LE);
}
catch (Exception E)
{
functions.Logger.log("Error processing logEntry" + E.Message, "LOGPROCESSING", LOGLEVEL.ERROR);
functions.printException(E);
}
}
functions.Logger.log("Exiting Queue Task", "LOGPROCESSING", LOGLEVEL.ERROR);
}
This is just to get the idea, you can further tune the implementation to better suit your needs.
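Wiring it up might look like this (a sketch; the hosting details are assumptions):

// Start the single consumer:
Task.Run(() => ProcessWorker());

// Producers, from any thread:
Add(someLogEntry);

// On shutdown, wake the worker and let it exit:
Stop();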
Your title is somewhat confusing (FIFO is a queue, and a blocking collection waits/blocks by definition?). However, I'm going to guess at what you want here...
I'm going to assume you want two threads: one which adds to the queue (the writer), and another which is blocked/waiting to process items as soon as they're added (the reader).
Create a blocking collection:
var dataSink = new BlockingCollection<logEntry>(new ConcurrentQueue<logEntry>());
The 'writer' thread simply adds and continues on its way:
dataSink.Add(logEntryToAdd); // Add to collection and continue
The 'reader' thread blocks until an item is added to the queue
// Take() blocks until an item is available.
while (true)
{
    ProcessLogEntry(dataSink.Take());
}
I'm not sure about your overflow 'X', but perhaps during the 'add' operation you can check the count, and if it exceeds 'X', either don't add the new item or dequeue the oldest one (depends on what your logic flow entails).
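One way to express that (a sketch, not a definitive implementation; the count check and the Add are not atomic, so treat the bound as approximate):

void AddWithDrop(BlockingCollection<logEntry> sink, logEntry entry, int maxPending)
{
    // Drop the oldest pending entries until we're back under the bound.
    while (sink.Count >= maxPending)
    {
        logEntry dropped;
        sink.TryTake(out dropped);
    }
    sink.Add(entry);
}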
Obviously, be sure that the UI thread is NOT blocked (the UI thread should not be the 'reader'; create a third thread if needed which blocks/reads from the queue and then notifies the UI via an invoke to update the ListBox), otherwise your UI will become unresponsive...
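For the ListBox update itself, the reader thread can marshal each entry to the UI with an invoke; a WinForms sketch (main is the form reference from the question, logListBox is an assumed control name):

// Called from the reader thread for each dequeued entry.
void ShowLogEntry(logEntry LE)
{
    main.BeginInvoke((Action)(() =>
    {
        logListBox.Items.Add(LE.ToString()); // runs on the UI thread
    }));
}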
So in the end, I used BlockingCollection, but with a while loop and TryTake instead of GetConsumingEnumerable:
Task.Run(() => {
    while (!LogQueue.IsCompleted) {
        logEntry LE;
        if (!LogQueue.TryTake(out LE, Timeout.Infinite))
            continue; // TryTake only returns false here if the queue completes
        try {
            ProcessLogEntry(LE);
        } catch {
            // Do nothing, because if logging causes an issue, logging the exception is likely to do so as well...
        }
    }
    //functions.Logger.log("Exiting Queue Task", "LOGPROCESSING", LOGLEVEL.ERROR); // Will not run if the queue is never completed
});
I'm a Java programmer who has been asked to make some changes to C# applications. I've been working with C# for a week now, and I've finally hit a point where looking at the documentation isn't helping and I can't find solutions when I google.
In this case I have a Windows Service that processes messages that arrive in a MSMQ. When a message is received the currently listening thread picks it up and goes off to do an operation that takes a couple of seconds.
public void Start()
{
this.listen = true;
for (int i = 0; i < Constants.ThreadMaxCount; i++)
{
ThreadPool.QueueUserWorkItem(new WaitCallback(this.StartListening), i);
}
...
private void StartListening(Object threadContext)
{
int threadId = (int)threadContext;
threads[threadId] = Thread.CurrentThread;
PostRequest postReq;
while(this.listen)
{
System.Threading.Monitor.Enter(locker);
try
{
postReq = GettingAMessage();
}
finally
{
System.Threading.Monitor.Exit(locker);
}
}
...
}
GettingAMessage() has the following lines that listen for a message:
Task<Message> ts = Task.Factory.FromAsync<Message>
(queue.BeginReceive(), queue.EndReceive);
ts.Wait();
The problem is, when the Stop() method is called and there are no messages going into the MSMQ the threads all sit there waiting for a message. I have tried using timeouts, but that method doesn't seem elegant to me(and having switched over to the Task Factory, I'm not sure how to implement them currently). My solution to this was to add a reference of each thread to an array, so that I could cancel them. The following is called by each worker thread after being created.
threads[threadId] = Thread.CurrentThread;
and then they are supposed to be aborted by:
public void Stop()
{
try
{
this.listen = false;
foreach(Thread a in threads) {
a.Abort();
}
}
catch
{...}
}
Any advice on why this isn't shutting the threads down? (Or even better, can anyone tell me where I should look for how to cancel the ts.Wait() properly?)
Use the ManualResetEvent class to achieve a proper and graceful stop of your running threads.
In addition, don't use the ThreadPool for long-running threads; create your own threads instead. Otherwise, with lots of long-running tasks, you could end up with thread-pool starvation, possibly even leading to deadlock:
public class MsmqListener
{
    private readonly ManualResetEvent _stopRequested = new ManualResetEvent(false);
    private List<Thread> _listenerThreads;
    private readonly object _locker = new object();
    //-----------------------------------------------------------------------------------------------------
    public MsmqListener()
    {
        CreateListenerThreads();
    }
    //-----------------------------------------------------------------------------------------------------
    public void Start()
    {
        StartListenerThreads();
    }
    //-----------------------------------------------------------------------------------------------------
    public void Stop()
    {
        try
        {
            _stopRequested.Set();
            foreach (Thread thread in _listenerThreads)
            {
                thread.Join(); // Wait for all threads to complete gracefully
            }
        }
        catch (Exception ex)
        {...}
    }
    //-----------------------------------------------------------------------------------------------------
    private void StartListening()
    {
        // WaitOne(0) blocks for 0 ms, i.e. it just polls whether the stop signal has been set.
        while (!_stopRequested.WaitOne(0))
        {
            PostRequest postReq;
            lock (_locker)
            {
                postReq = GettingAMessage();
            }
            ...
        }
    }
    //-----------------------------------------------------------------------------------------------------
    private void CreateListenerThreads()
    {
        _listenerThreads = new List<Thread>();
        for (int i = 0; i < Constants.ThreadMaxCount; i++)
        {
            var listenerThread = new Thread(StartListening);
            _listenerThreads.Add(listenerThread);
        }
    }
    //-----------------------------------------------------------------------------------------------------
    private void StartListenerThreads()
    {
        foreach (var thread in _listenerThreads)
        {
            thread.Start();
        }
    }
}
UPDATE:
I changed AutoResetEvent to ManualResetEvent in order to support stopping multiple waiting threads (with ManualResetEvent, once you signal, all waiting threads are notified and free to proceed with their job - in your case, to stop polling for messages).
Using a volatile bool does not provide all the guarantees; it may still read stale data. It is better to use an underlying OS synchronization mechanism, as it provides much stronger guarantees. Source: stackoverflow.com/a/11953661/952310
Let's say I have a list and am streaming data from a named pipe to that list.
Hypothetical sample:
private void myStreamingThread()
{
while(mypipe.isconnected)
{
if (mypipe.hasdata)
myList.add(mypipe.data);
}
}
Then on another thread I need to read that list every 1000ms for example:
private void myListReadingThread()
{
while(isStarted)
{
if (myList.count > 0)
{
//do whatever I need to.
}
Thread.Sleep(1000);
}
}
My priority here is to be able to read the list every 1000 ms and do whatever I need with it, but at the same time it is very important not to lose any of the new data coming in from the pipe.
What is a good method for accomplishing this?
Forgot to mention: I am tied to .NET 3.5.
I would recommend using a Queue with a lock.
Queue<string> myQueue = new Queue<string>();
private void myStreamingThread()
{
while(mypipe.isconnected)
{
if (mypipe.hasdata)
{
lock (myQueue)
{
myQueue.Enqueue(mypipe.data);
}
}
}
}
If you want to empty the queue every 1000 ms, do not use Thread.Sleep. Use a timer instead.
System.Threading.Timer t = new Timer(myListReadingProc, null, 1000, 1000);
private void myListReadingProc(object s)
{
while (myQueue.Count > 0)
{
lock (myQueue)
{
string item = myQueue.Dequeue();
// do whatever
}
}
}
Note that the above assumes that the queue is only being read by one thread. If multiple threads are reading, then there's a race condition. But the above will work with a single reader and one or more writers.
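If multiple readers ever become possible, do the check and the dequeue under one lock; a sketch:

// Safe with any number of readers and writers: the count check and the
// Dequeue happen atomically with respect to each other.
private bool TryDequeue(out string item)
{
    lock (myQueue)
    {
        if (myQueue.Count > 0)
        {
            item = myQueue.Dequeue();
            return true;
        }
        item = null;
        return false;
    }
}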
I would suggest using a ConcurrentQueue (http://msdn.microsoft.com/en-us/library/dd267265.aspx). If you use a simple List<> you will encounter a lot of threading issues.
The other practice would be to use a synchronization event called outstandingWork and wait on it instead of Thread.Sleep(). Then, when you enqueue some work, you signal outstandingWork. This means you sleep when no work is available but start processing new work immediately, instead of sleeping for the entire second.
Edit
As @Prix pointed out, you are using .NET 3.5, so you cannot use ConcurrentQueue. Use the Queue class with the following:
Queue<Work> queue = new Queue<Work>();
AutoResetEvent outstandingWork = new AutoResetEvent(false);
void Enqueue(Work work)
{
lock (queue)
{
queue.Enqueue(work);
outstandingWork.Set();
}
}
Work DequeMaybe()
{
lock (queue)
{
if (queue.Count == 0) return null;
return queue.Dequeue();
}
}
void DoWork()
{
while (true)
{
Work work = DequeMaybe();
if (work == null)
{
outstandingWork.WaitOne();
continue;
}
// Do the work.
}
}