I'm using a subscriber/notifier pattern to raise and consume events from my .NET middle tier in C#. Some of the events are raised in "bursts", for instance, when data is persisted by a batch program importing a file. This executes a potentially long-running task, and I'd like to avoid firing the event several times a second by implementing a "quiet period", whereby the event system waits until the event stream slows down before processing the event.
How should I do this when the Publisher takes an active role in notifying subscribers? I don't want to wait until an event comes in to check to see if there are others waiting out the quiet period...
There is no host process to poll the subscription model at the moment. Should I abandon the publish/subscribe pattern or is there a better way?
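To make the "quiet period" idea concrete, here is a rough sketch of the behavior I'm after (all names here are invented for illustration): every incoming event pushes a timer back, and subscribers are only notified once nothing has arrived for the full quiet period.
using System;
using System.Collections.Generic;
using System.Threading;

class QuietPeriodPublisher<T>
{
    private readonly object _sync = new object();
    private readonly Timer _timer;
    private readonly TimeSpan _quietPeriod;
    private List<T> _pending = new List<T>();

    public event Action<IList<T>> QuietPeriodElapsed;

    public QuietPeriodPublisher(TimeSpan quietPeriod)
    {
        _quietPeriod = quietPeriod;
        _timer = new Timer(OnQuiet, null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Publish(T item)
    {
        lock (_sync)
        {
            _pending.Add(item);
            // each event pushes the deadline back by the full quiet period
            _timer.Change(_quietPeriod, TimeSpan.FromMilliseconds(-1));
        }
    }

    private void OnQuiet(object state)
    {
        List<T> batch;
        lock (_sync)
        {
            batch = _pending;
            _pending = new List<T>();
        }
        var handler = QuietPeriodElapsed;
        if (handler != null && batch.Count > 0)
            handler(batch);
    }
}
The obvious trade-off is that subscribers always see a burst one quiet period after it ends, and a steady stream of events can postpone the notification indefinitely unless the total delay is also capped.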
Here's a rough implementation that might point you in a direction. In my example, the task that involves notification is saving a data object. When an object is saved, the Saved event is raised. In addition to a simple Save method, I've implemented BeginSave and EndSave methods as well as an overload of Save that works with those two for batch saves. When EndSave is called, a single BatchSaved event is fired.
Obviously, you can alter this to suit your needs. In my example, I kept track of a list of all objects that were saved during a batch operation, but this may not be something that you'd need to do...you may only care about how many objects were saved or even simply that a batch save operation was completed. If you anticipate a large number of objects being saved, then storing them in a list as in my example may become a memory issue.
EDIT: I added a "threshold" concept to my example that attempts to prevent a large number of objects being held in memory. This causes the BatchSaved event to fire more frequently, though. I also added some locking to address potential thread safety, though I may have missed something there.
class DataConcierge<T>
{
// *************************
// Simple save functionality
// *************************
public void Save(T dataObject)
{
// perform save logic
this.OnSaved(dataObject);
}
public event DataObjectSaved<T> Saved;
protected void OnSaved(T dataObject)
{
var saved = this.Saved;
if (saved != null)
saved(this, new DataObjectEventArgs<T>(dataObject));
}
// ************************
// Batch save functionality
// ************************
Dictionary<BatchToken, List<T>> _BatchSavedDataObjects = new Dictionary<BatchToken, List<T>>();
System.Threading.ReaderWriterLockSlim _BatchSavedDataObjectsLock = new System.Threading.ReaderWriterLockSlim();
int _SavedObjectThreshold = 17; // if the number of objects being stored for a batch reaches this threshold, then those objects are to be cleared from the list.
public BatchToken BeginSave()
{
// create a batch token to represent this batch
BatchToken token = new BatchToken();
_BatchSavedDataObjectsLock.EnterWriteLock();
try
{
_BatchSavedDataObjects.Add(token, new List<T>());
}
finally
{
_BatchSavedDataObjectsLock.ExitWriteLock();
}
return token;
}
public void EndSave(BatchToken token)
{
List<T> batchSavedDataObjects;
_BatchSavedDataObjectsLock.EnterWriteLock();
try
{
if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
this.OnBatchSaved(batchSavedDataObjects); // this causes a single BatchSaved event to be fired
if (!_BatchSavedDataObjects.Remove(token))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
}
finally
{
_BatchSavedDataObjectsLock.ExitWriteLock();
}
}
public void Save(BatchToken token, T dataObject)
{
List<T> batchSavedDataObjects;
// the read lock prevents EndSave from executing before this Save method has a chance to finish executing
_BatchSavedDataObjectsLock.EnterReadLock();
try
{
if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
// perform save logic
this.OnBatchSaved(batchSavedDataObjects, dataObject);
}
finally
{
_BatchSavedDataObjectsLock.ExitReadLock();
}
}
public event BatchDataObjectSaved<T> BatchSaved;
protected void OnBatchSaved(List<T> batchSavedDataObjects)
{
lock (batchSavedDataObjects)
{
var batchSaved = this.BatchSaved;
if (batchSaved != null)
batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects));
}
}
protected void OnBatchSaved(List<T> batchSavedDataObjects, T savedDataObject)
{
// add the data object to the list storing the data objects that have been saved for this batch
lock (batchSavedDataObjects)
{
batchSavedDataObjects.Add(savedDataObject);
// if the threshold has been reached
if (_SavedObjectThreshold > 0 && batchSavedDataObjects.Count >= _SavedObjectThreshold)
{
// then raise the BatchSaved event with the data objects that we currently have
var batchSaved = this.BatchSaved;
if (batchSaved != null)
batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects.ToArray()));
// and clear the list to ensure that we are not holding on to the data objects unnecessarily
batchSavedDataObjects.Clear();
}
}
}
}
class BatchToken
{
static int _LastId = 0;
static object _IdLock = new object();
static int GetNextId()
{
lock (_IdLock)
{
return ++_LastId;
}
}
public BatchToken()
{
this.Id = GetNextId();
}
public int Id { get; private set; }
}
class DataObjectEventArgs<T> : EventArgs
{
public T DataObject { get; private set; }
public DataObjectEventArgs(T dataObject)
{
this.DataObject = dataObject;
}
}
delegate void DataObjectSaved<T>(object sender, DataObjectEventArgs<T> e);
class BatchDataObjectEventArgs<T> : EventArgs
{
public IEnumerable<T> DataObjects { get; private set; }
public BatchDataObjectEventArgs(IEnumerable<T> dataObjects)
{
this.DataObjects = dataObjects;
}
}
delegate void BatchDataObjectSaved<T>(object sender, BatchDataObjectEventArgs<T> e);
In my example, I chose to use a token concept in order to create separate batches. This allows smaller batch operations running on separate threads to complete and raise events without waiting for a larger batch operation to complete.
I made separate events: Saved and BatchSaved. However, these could just as easily be consolidated into a single event.
EDIT: fixed race conditions pointed out by Steven Sudit on accessing the event delegates.
EDIT: revised locking code in my example to use ReaderWriterLockSlim rather than Monitor (i.e. the "lock" statement). I think there were a couple of race conditions, such as between the Save and EndSave methods. It was possible for EndSave to execute, causing the list of data objects to be removed from the dictionary. If the Save method was executing at the same time on another thread, it would be possible for a data object to be added to that list, even though it had already been removed from the dictionary.
In my revised example, this situation can't happen and the Save method will throw an exception if it executes after EndSave. These race conditions were caused primarily by me trying to avoid what I thought was unnecessary locking. I realized that more code needed to be within a lock, but decided to use ReaderWriterLockSlim instead of Monitor because I only wanted to prevent Save and EndSave from executing at the same time; there wasn't a need to prevent multiple threads from executing Save at the same time. Note that Monitor is still used to synchronize access to the specific list of data objects retrieved from the dictionary.
EDIT: added usage example
Below is a usage example for the above sample code.
static void DataConcierge_Saved(object sender, DataObjectEventArgs<Program.Customer> e)
{
Console.WriteLine("DataConcierge<Customer>.Saved");
}
static void DataConcierge_BatchSaved(object sender, BatchDataObjectEventArgs<Program.Customer> e)
{
Console.WriteLine("DataConcierge<Customer>.BatchSaved: {0}", e.DataObjects.Count());
}
static void Main(string[] args)
{
DataConcierge<Customer> dc = new DataConcierge<Customer>();
dc.Saved += new DataObjectSaved<Customer>(DataConcierge_Saved);
dc.BatchSaved += new BatchDataObjectSaved<Customer>(DataConcierge_BatchSaved);
var token = dc.BeginSave();
try
{
for (int i = 0; i < 100; i++)
{
var c = new Customer();
// ...
dc.Save(token, c);
}
}
finally
{
dc.EndSave(token);
}
}
This resulted in the following output:
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 15
The threshold in my example is set to 17, so a batch of 100 items causes the BatchSaved event to fire 6 times.
I am not sure if I understood your question correctly, but I would try to fix the problem at the source: make sure the events are not raised in "bursts". You could consider implementing batch operations, which could be used from the file-importing program. The whole batch would then be treated as a single operation in your middle tier and raise a single event.
I think it will be very tricky to implement some reasonable solution if you can't make the change outlined above - you could try to wrap your publisher in a "caching" publisher, which would implement some heuristic to cache the events if they are coming in bursts. The easiest would be to cache an event if another one of the same type is being currently processed (so your batch would cause at least 2 events - one at the very beginning, and one at the end). You could wait for a short time and only raise an event when the next one hasn't come during that time, but you get a time lag even if there is a single event in the pipeline. You also need to make sure you will raise the event from time to time even if there is constant queue of events - otherwise the publishers will potentially get starved.
The second option is tricky to implement and will contain heuristics, which might go very wrong...
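For what it's worth, a rough sketch of that second option might look like this (all names are invented and the numbers arbitrary): debounce bursts with a quiet period, but force a flush once a maximum latency has passed, so a constant stream of events can't starve the subscribers.
using System;
using System.Threading;

class CachingPublisher
{
    private readonly object _sync = new object();
    private readonly Timer _timer;
    private readonly TimeSpan _quietPeriod = TimeSpan.FromMilliseconds(500);
    private readonly TimeSpan _maxLatency = TimeSpan.FromSeconds(5);
    private DateTime _firstPendingUtc;
    private int _pendingCount;

    public event EventHandler Flushed;

    public CachingPublisher()
    {
        _timer = new Timer(Flush, null, Timeout.Infinite, Timeout.Infinite);
    }

    // called for every raw event from the wrapped publisher
    public void OnSourceEvent()
    {
        lock (_sync)
        {
            if (_pendingCount++ == 0)
                _firstPendingUtc = DateTime.UtcNow;
            // wait for the quiet period, but never let the total delay
            // exceed _maxLatency, so a constant stream can't starve us
            TimeSpan budget = _maxLatency - (DateTime.UtcNow - _firstPendingUtc);
            TimeSpan wait = budget < _quietPeriod ? budget : _quietPeriod;
            if (wait < TimeSpan.Zero)
                wait = TimeSpan.Zero;
            _timer.Change(wait, TimeSpan.FromMilliseconds(-1));
        }
    }

    private void Flush(object state)
    {
        lock (_sync)
        {
            _pendingCount = 0;
        }
        var flushed = Flushed;
        if (flushed != null)
            flushed(this, EventArgs.Empty);
    }
}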
Here's one idea that's just fallen out of my head. I don't know how workable it is and can't see an obvious way to make it more generic, but it might be a start. All it does is provide a buffer for button click events (substitute with your event as necessary).
class ButtonClickBuffer
{
public event EventHandler BufferedClick;
public ButtonClickBuffer(Button button, int queueSize)
{
this.queueSize = queueSize;
button.Click += this.button_Click;
}
private int queueSize;
private List<EventArgs> queuedEvents = new List<EventArgs>();
private void button_Click(object sender, EventArgs e)
{
queuedEvents.Add(e);
if (queuedEvents.Count >= queueSize)
{
if (this.BufferedClick != null)
{
foreach (var args in this.queuedEvents)
{
this.BufferedClick(sender, args);
}
queuedEvents.Clear();
}
}
}
}
So your subscriber, instead of subscribing as:
this.button1.Click += this.button1_Click;
Would use a buffer, specifying how many events to wait for:
ButtonClickBuffer buffer = new ButtonClickBuffer(this.button1, 5);
buffer.BufferedClick += this.button1_Click;
It works in a simple test form I knocked up, but it's far from production-ready!
You said you didn't want to wait for an event to see if there is a queue waiting, which is exactly what this does. You could substitute the logic inside the buffer to spawn a new thread which monitors the queue and dispatches events as necessary. God knows what threading and locking issues might arise from that!
Related
I am currently trying to keep a counter in C# of new folders that are created in a local file directory.
I have two subdirectories, CD and LP, that I have to keep checking, with counters that make sure the number of folders created has not exceeded the count set by the user.
public static int LPmax { get; set; }
public static int CDmax { get; set; }
public static int LPcounter2 { get; set; }
public static int CDcounter2 { get; set; }
public static int LPCreated;
public static int CDCreated;
public static int oldLPCreated;
public static int oldCDCreated;
FileSystemWatcher CDdirWatcher = new FileSystemWatcher();
FileSystemWatcher LPdirWatcher = new FileSystemWatcher();
//watch method should run in the background as checker
public void watch()
{
CDdirWatcher.Path = @"C:\Data\LotData\CD";
CDdirWatcher.Filter = "EM*";
CDdirWatcher.NotifyFilter = NotifyFilters.DirectoryName | NotifyFilters.LastWrite;
CDdirWatcher.EnableRaisingEvents = true;
CDdirWatcher.Created += CDdirWatcher_Created;
LPdirWatcher.Path = @"C:\Data\LotData\LP";
LPdirWatcher.Filter = "EM*";
LPdirWatcher.NotifyFilter = NotifyFilters.DirectoryName;
LPdirWatcher.EnableRaisingEvents = true;
LPdirWatcher.Created += LPdirWatcher_Created;
Thread.Sleep(10);
}
private static void CDdirWatcher_Created(object sender, FileSystemEventArgs e)
{
CDCreated += 1;
}
private static void LPdirWatcher_Created(object sender, FileSystemEventArgs e)
{
LPCreated += 1;
}
The above method works fine, and the criterion is that the count has to stay below the one set by the user:
public void checker()
{
if(CDCreated>CDmax)
{
popupbx();
}
if(LPCreated>LPmax)
{
popupbx();
}
}
The problem is my main method where I have two threads which need to continuously check the two criteria to see if the counter has been exceeded.
public Form1()
{
InitializeComponent();
//Implementing Threads asynchronously
Thread oThreadone = new Thread(() =>
{
// do the work
watch();
});
Thread oThreadtwo = new Thread(() =>
{
// do the work
checker();
});
//Calling thread workers
oThreadone.Start();
oThreadone.IsBackground = true;
oThreadtwo.Start();
oThreadtwo.IsBackground = true;
}
Mkdir fires the counters in debug mode, but thread two doesn't re-check the counters after they fire.
The first major thing wrong with your code is that neither of the threads you create is needed, nor do they do what you want. Specifically, the FileSystemWatcher object is itself already asynchronous, so you can create it in the main thread. In fact, you should, because there you can set FileSystemWatcher.SynchronizingObject to the current form instance so that it raises its events in that object's synchronization context. I.e. your event handlers will be executed in the main thread, which is what you want.
So the first method, watch(), rather than being executed in a thread, just call it directly.
Which brings me to the second method, checker(). Your method for the thread doesn't loop, so it will execute the two tests, and then promptly exit. That's the end of that thread. It won't stay around long enough to monitor the counts as they are updated.
You could fix it by looping in the checker() method so that it never exits. But then you run into problems of excessive CPU usage, which you'd fix by adding sleep statements, which means you're then wasting a thread most of the time, which you could fix by using async/await (e.g. await Task.Delay()). Except that would just unnecessarily complicate the code.
Instead, you should just perform each check after each count is updated. In theory, you could just display the message immediately. But that will block the event handler subscribed to FileSystemWatcher, possibly delaying additional reports. So you may instead prefer to use Control.BeginInvoke() to defer display of the message until after the event handler has returned.
So, taking all that into account, your code might instead look more like this:
public Form1()
{
InitializeComponent();
watch();
}
private void CDdirWatcher_Created(object sender, FileSystemEventArgs e)
{
CDCreated += 1;
if (CDCreated > CDmax)
{
BeginInvoke((MethodInvoker)popupbx);
}
}
private void LPdirWatcher_Created(object sender, FileSystemEventArgs e)
{
LPCreated += 1;
if (LPCreated > LPmax)
{
BeginInvoke((MethodInvoker)popupbx);
}
}
You can remove the checker() method altogether. The watch() method can remain as it is, though I would change the order of subscribing to the Created event and the assignment of EnableRaisingEvents. I.e. don't enable raising events until you've already subscribed to the event. The FileSystemWatcher is unreliable enough as it is, without you giving it a chance to raise an event before you're ready to observe it. :)
This is based on your current implementation. Note though that if files keep getting created, the message will be displayed over and over. If they are created fast enough, new messages will be displayed before the user can dismiss the previously-displayed one(s). You may prefer to modify your code to prevent this. E.g. unsubscribe from the event after you've already exceeded the max count, or at least temporarily inhibit the display of the message while one such message is already being displayed. Exactly what to do is up to you, and beyond the scope of your question and thus the scope of this answer.
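The first option might look something like this (a sketch only, building on the handler above):
private void CDdirWatcher_Created(object sender, FileSystemEventArgs e)
{
    CDCreated += 1;
    if (CDCreated > CDmax)
    {
        // stop listening so the message is only shown once
        CDdirWatcher.Created -= CDdirWatcher_Created;
        BeginInvoke((MethodInvoker)popupbx);
    }
}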
Many times in UI development I handle events in such a way that when an event first comes in, I immediately start processing it. But if a processing operation is already in progress, I wait for it to complete before I process another event. If more than one event occurs before the operation completes, I only process the most recent one.
The way I typically do that is: my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put the current event arguments in another field that is basically a one-item-sized buffer. When the current processing pass completes, I check whether there is another event to process, and I loop until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it seems to otherwise work fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
private Func<Task> next = null;
private bool isRunning = false;
public async void Run(Func<Task> action)
{
if (isRunning)
next = action;
else
{
isRunning = true;
try
{
await action();
while (next != null)
{
var nextCopy = next;
next = null;
await nextCopy();
}
}
finally
{
isRunning = false;
}
}
}
private static Lazy<EventThrottler> defaultInstance =
new Lazy<EventThrottler>(() => new EventThrottler());
public static EventThrottler Default
{
get { return defaultInstance.Value; }
}
}
Because the class is, at least generally, going to be used exclusively from the UI thread, there will usually need to be only one instance, so I added a convenience property for a default instance. But since it may still make sense for a program to have more than one, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
public void SomeEventHandler(object sender, EventArgs args)
{
EventThrottler.Default.Run(async () =>
{
await Task.Delay(1000);
//do other stuff
});
}
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking all access to instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important, not just for performance (to ensure we're not blocking other callers from simply setting the next field), but also to avoid issues with that action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing work in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
private object key = new object();
private Func<Task> next = null;
private bool isRunning = false;
public void Run(Func<Task> action)
{
bool shouldStartRunning = false;
lock (key)
{
if (isRunning)
next = action;
else
{
isRunning = true;
shouldStartRunning = true;
}
}
Action<Task> continuation = null;
continuation = task =>
{
Func<Task> nextCopy = null;
lock (key)
{
if (next != null)
{
nextCopy = next;
next = null;
}
else
{
isRunning = false;
}
}
if (nextCopy != null)
nextCopy().ContinueWith(continuation);
};
if (shouldStartRunning)
action().ContinueWith(continuation);
}
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue results in the existing item being evicted.
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
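To make the collapsing-queue idea concrete, here is a minimal sketch of a keyed version (the names are invented); with a single key it degenerates into exactly the one-item buffer you described:
using System.Collections.Generic;

// Keeps at most one pending item per key; a newer item replaces an older one.
class CollapsingQueue<TKey, TItem>
{
    private readonly object sync = new object();
    private readonly Dictionary<TKey, TItem> latest = new Dictionary<TKey, TItem>();
    private readonly Queue<TKey> order = new Queue<TKey>();

    public void Enqueue(TKey key, TItem item)
    {
        lock (sync)
        {
            if (!latest.ContainsKey(key))
                order.Enqueue(key); // first pending item for this key
            latest[key] = item;     // a newer item evicts the older one
        }
    }

    public bool TryDequeue(out TItem item)
    {
        lock (sync)
        {
            if (order.Count == 0)
            {
                item = default(TItem);
                return false;
            }
            TKey key = order.Dequeue();
            item = latest[key];
            latest.Remove(key);
            return true;
        }
    }
}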
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
var broadcastBlock = new BroadcastBlock<T>(null);
var actionBlock = new ActionBlock<T>(
executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
broadcastBlock.LinkTo(actionBlock);
return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
i =>
{
Console.WriteLine(i);
Thread.Sleep(i);
});
processor(100);
processor(1);
processor(2);
Output:
100
2
(1 is never processed: it was overwritten by 2 in the BroadcastBlock while 100 was still being handled.)
I'm downloading two JSON files from the web, after which I want to allow loading two pages, but not before. However, the ManualResetEvent that is required to be set in order to load the page never "fires". Even though I know that it gets set, WaitOne never returns.
Method that launches the downloads:
private void Application_Launching(object sender, LaunchingEventArgs e)
{
PhoneApplicationService.Current.State["doneList"] = new List<int>();
PhoneApplicationService.Current.State["manualResetEvent"] = new ManualResetEvent(false);
Helpers.DownloadAndStoreJsonObject<ArticleList>("http://arkad.tlth.se/api/get_posts/", "articleList");
Helpers.DownloadAndStoreJsonObject<CompanyList>("http://arkad.tlth.se/api/get_posts/?postType=webbkatalog", "catalog");
}
The downloading method that sets the ManualResetEvent:
public static void DownloadAndStoreJsonObject<T>(string url, string objName)
{
var webClient = new WebClient();
webClient.DownloadStringCompleted += (sender, e) =>
{
if (!string.IsNullOrEmpty(e.Result))
{
var obj = ProcessJson<T>(e.Result);
PhoneApplicationService.Current.State[objName] = obj;
var doneList = PhoneApplicationService.Current.State["doneList"] as List<int>;
doneList.Add(0);
if (doneList.Count == 2) // Two items loaded
{
(PhoneApplicationService.Current.State["manualResetEvent"] as ManualResetEvent).Set(); // Signal that it's done
}
}
};
webClient.DownloadStringAsync(new Uri(url));
}
The waiting method (the constructor, in this case):
public SenastePage()
{
InitializeComponent();
if ((PhoneApplicationService.Current.State["doneList"] as List<int>).Count < 2)
{
(PhoneApplicationService.Current.State["manualResetEvent"] as ManualResetEvent).WaitOne();
}
SenasteArticleList.ItemsSource = (PhoneApplicationService.Current.State["articleList"] as ArticleList).posts;
}
If I wait before trying to access that constructor, it easily passes the if-statement and doesn't get caught in the WaitOne, but if I call it immediately, I get stuck, and it never returns...
Any ideas?
Blocking the UI thread must be avoided at all costs, especially when downloading data: don't forget that your application is executing on a phone, which has a very unstable network. If the data takes two minutes to load, the UI will be frozen for two minutes. That would be an awful user experience.
There are many ways to prevent that. For instance, you can keep the same logic but wait in a background thread instead of the UI thread:
public SenastePage()
{
// Write the XAML of your page to display the loading animation by default
InitializeComponent();
Task.Factory.StartNew(LoadData);
}
private void LoadData()
{
((ManualResetEvent)PhoneApplicationService.Current.State["manualResetEvent"]).WaitOne();
Dispatcher.BeginInvoke(() =>
{
SenasteArticleList.ItemsSource = ((ArticleList)PhoneApplicationService.Current.State["articleList"]).posts;
// Hide the loading animation
});
}
That's just a quick and dirty way to reach the result you want. You could also rewrite your code using tasks, and using Task.WhenAll to trigger an action when they're all finished.
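The Task.WhenAll variant could look roughly like this (a sketch; DownloadJsonAsync is a hypothetical helper you'd write yourself, e.g. by wrapping WebClient.DownloadStringCompleted in a TaskCompletionSource, and it assumes a platform where async/await and Task.WhenAll are available):
public SenastePage()
{
    InitializeComponent();
    LoadDataAsync();
}

private async void LoadDataAsync()
{
    // start both downloads in parallel
    Task<ArticleList> articlesTask =
        DownloadJsonAsync<ArticleList>("http://arkad.tlth.se/api/get_posts/");
    Task<CompanyList> catalogTask =
        DownloadJsonAsync<CompanyList>("http://arkad.tlth.se/api/get_posts/?postType=webbkatalog");

    await Task.WhenAll(articlesTask, catalogTask);

    // after the await we are back on the UI thread
    SenasteArticleList.ItemsSource = articlesTask.Result.posts;
}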
Perhaps there is a logic problem. In the SenastePage() constructor you are waiting for the set event only if the doneList count is less than two. However, you don't fire the set event until the doneList count is equal to two. You are listening for the set event before it can ever fire.
I have a List<Job> newJobs. Some threads add items to that list, and other threads remove items from it if it's not empty. I have a ManualResetEvent newJobEvent which is set when items are added to the list, and reset when items are removed from it:
Adding items to the list is performed in the following way:
lock(syncLock){
newJobs.Add(job);
}
newJobEvent.Set();
Jobs removal is performed in the following way:
if (newJobs.Count==0)
newJobEvent.WaitOne();
lock(syncLock){
job = newJobs.First();
newJobs.Remove(job);
/*do some processing*/
}
newJobEvent.Reset();
When the line
job = newJobs.First()
is executed I sometimes get an exception that the list is empty. I guess that the check:
if (newJobs.Count==0)
newJobEvent.WaitOne();
should also be in the lock statement but I'm afraid of deadlocks on the line newJobEvent.WaitOne();
How can I solve it?
Many thanks and sorry for the long post!
You are right. Calling WaitOne inside a lock could lead to a deadlock. And the check to see if the list is empty needs to be done inside the lock otherwise there could be a race with another thread trying to remove an item. Now, your code looks suspiciously like the producer-consumer pattern which is usually implemented with a blocking queue. If you are using .NET 4.0 then you can take advantage of the BlockingCollection class.
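With BlockingCollection the whole thing collapses to almost nothing (a sketch; JobQueue is an invented name and Job is the type from your code):
using System.Collections.Concurrent;

public class JobQueue
{
    private BlockingCollection<Job> m_Queue = new BlockingCollection<Job>();

    public void Add(Job item)
    {
        m_Queue.Add(item);
    }

    public Job Take()
    {
        // blocks until an item is available
        return m_Queue.Take();
    }
}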
However, let me go over a couple of ways you can do it yourself. The first uses a List and a ManualResetEvent to demonstrate how this could be done using the data structures in your question. Notice the use of a while loop in the Take method.
public class BlockingJobsCollection
{
private List<Job> m_List = new List<Job>();
private ManualResetEvent m_Signal = new ManualResetEvent(false);
public void Add(Job item)
{
lock (m_List)
{
m_List.Add(item);
m_Signal.Set();
}
}
public Job Take()
{
while (true)
{
lock (m_List)
{
if (m_List.Count > 0)
{
Job item = m_List.First();
m_List.Remove(item);
if (m_List.Count == 0)
{
m_Signal.Reset();
}
return item;
}
}
m_Signal.WaitOne();
}
}
}
But this is not how I would do it. I would go with the simpler solution below, which uses Monitor.Wait and Monitor.Pulse. Monitor.Wait is useful because it can be called inside a lock. In fact, it is supposed to be done that way.
public class BlockingJobsCollection
{
private Queue<Job> m_Queue = new Queue<Job>();
public void Add(Job item)
{
lock (m_Queue)
{
m_Queue.Enqueue(item);
Monitor.Pulse(m_Queue);
}
}
public Job Take()
{
lock (m_Queue)
{
while (m_Queue.Count == 0)
{
Monitor.Wait(m_Queue);
}
return m_Queue.Dequeue();
}
}
}
Not answering your question directly, but if you are using .NET Framework 4, you can use the new ConcurrentQueue<T>, which does all the locking for you.
Regarding your question:
One scenario that I can think of causing such a problem is the following:
The insertion thread enters the lock, calls newJobs.Add, leaves the lock.
Context switch to the removal thread. It checks for emptiness, sees an item, enters the locked area, removes the item, and resets the event - which hasn't even been set yet.
Context switch back to the insertion thread, the event is set.
Context switch back to the removal thread. It checks for emptiness, sees no items, waits for the event - which is already set - so it proceeds and tries to get the first item... Bang!
Set and reset the event inside the lock and you should be fine.
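In other words, something like this (a sketch keeping your structure; note the wait still happens outside the lock to avoid the deadlock you were worried about, and the loop re-checks the count in case another consumer got there first):
// adding:
lock (syncLock)
{
    newJobs.Add(job);
    newJobEvent.Set();
}

// removing:
while (true)
{
    lock (syncLock)
    {
        if (newJobs.Count > 0)
        {
            job = newJobs.First();
            newJobs.Remove(job);
            if (newJobs.Count == 0)
                newJobEvent.Reset();
            break;
        }
    }
    newJobEvent.WaitOne();
}
/* do some processing */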
I don't see why removal, when there are zero objects, should wait for one to be added and then immediately remove it. That seems to go against the logic.
Let's say I have an existing System.Threading.Timer instance and I'd like to call Change on it to push its firing time back:
var timer = new Timer(DelayCallback, null, 10000, Timeout.Infinite);
// ... (sometime later but before DelayCallback has executed)
timer.Change(20000, Timeout.Infinite);
I'm using this timer to perform an "idle callback" after a period of no activity. ("Idle" and "no activity" are application-defined conditions in this case...the specifics aren't terribly important.) Every time I perform an "action", I want to reset the timer so that it is always set to fire 10 seconds after that.
However, there is an inherent race condition because when I call Change, I can't tell if the Timer has already fired based on its old settings. (I can, of course, tell if my callback has happened but I can't tell if the CLR's internal timer thread has queued my callback to the threadpool and its execution is imminent.)
Now I know I can call Dispose on the timer instance and re-create it each time I need to "push it back", but this seems less efficient than just changing the existing timer. Of course it may not be...I'll run some micro-benchmarks in a bit and let you all know.
Alternatively, I can always keep track of the expected firing time (via DateTime.Now.AddSeconds(10)) and, if the original Timer fires, ignore it by checking DateTime.Now in the callback. (I have a nagging concern that this may not be 100% reliable on account of the Timer using TimeSpan and my check using DateTime...this may not be an issue but I'm not completely comfortable with it for some reason...)
My questions are:
Is there a good way for me to call Timer.Change and be able to know whether I managed to change it before the callback was queued to the threadpool? (I don't think so, but it doesn't hurt to ask...)
Has anyone else implemented (what I term) a "pushback timer" like this? If so, I'd love to hear how you tackled the problem.
This question is somewhat hypothetical in nature since I already have a couple of working solutions (based on Dispose and based on DateTime.Now)...I'm mainly interested in hearing performance-related suggestions (as I'll be "pushing back" the Timer VERY frequently).
Thanks!
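For reference, here's a sketch of the DateTime-based workaround I mentioned (all names invented); re-arming for any remainder in the callback also covers the clock-granularity worry, since a firing that observes a not-yet-reached deadline simply reschedules itself:
using System;
using System.Threading;

class PushbackTimer
{
    private readonly object sync = new object();
    private readonly Timer timer;
    private DateTime deadlineUtc = DateTime.MaxValue;

    public PushbackTimer()
    {
        timer = new Timer(OnTimer, null, Timeout.Infinite, Timeout.Infinite);
    }

    // called on every "action"; cheap: one lock, one Change
    public void Push(TimeSpan delay)
    {
        lock (sync)
        {
            deadlineUtc = DateTime.UtcNow + delay;
            timer.Change(delay, TimeSpan.FromMilliseconds(-1));
        }
    }

    private void OnTimer(object state)
    {
        lock (sync)
        {
            if (deadlineUtc == DateTime.MaxValue)
                return; // no deadline pending; spurious wake-up
            TimeSpan remaining = deadlineUtc - DateTime.UtcNow;
            if (remaining > TimeSpan.Zero)
            {
                // stale firing (the deadline was pushed back after this
                // callback was queued): re-arm for the remainder and bail
                timer.Change(remaining, TimeSpan.FromMilliseconds(-1));
                return;
            }
            deadlineUtc = DateTime.MaxValue;
        }
        // the "idle" work goes here
    }
}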
It sounds like what you really want is the application idle event:
System.Windows.Forms.Application.Idle
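For example:
// runs every time the message loop finishes processing and its queue is empty
Application.Idle += (sender, e) =>
{
    // do the idle work here
};
Note that Idle fires whenever the message queue empties, which in an interactive app can be many times a second; it is not a configurable quiet period, so on its own it won't give you "10 seconds after the last action".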
I'm interpreting your question as a request for an implementation of the IdleNotifier interface specified below. You also state that ActionOccured() needs to be fast.
public delegate void IdleCallback();
public interface IdleNotifier
{
// Called by threadpool when more than IdleTimeSpanBeforeCallback
// has passed since last call on ActionOccured.
IdleCallback Callback { set; }
TimeSpan IdleTimeSpanBeforeCallback { set; }
void ActionOccured();
}
I provide an implementation with System.Threading.Timer below.
Important points about the implementation:
We accept that the timer can wake up at any time and make sure this is ok.
Since we assume the timer wakes relatively seldom, we can afford to do expensive work at those times.
Since we can do all the logic in the timer callback, all we need to do to "push the timer back" is to remember when we last pushed it.
Implementation:
public class IdleNotifierTimerImplementation : IdleNotifier
{
private readonly object SyncRoot = new object();
private readonly Timer m_Timer;
private IdleCallback m_IdleCallback = null;
private TimeSpan m_IdleTimeSpanBeforeEvent = TimeSpan.Zero;
// Null means there has been no action since last idle notification.
private DateTime? m_LastActionTime = null;
public IdleNotifierTimerImplementation()
{
m_Timer = new Timer(OnTimer);
}
private void OnTimer(object unusedState)
{
IdleCallback callback;
lock (SyncRoot)
{
if (m_LastActionTime == null)
{
m_Timer.Change(m_IdleTimeSpanBeforeEvent, TimeSpan.Zero);
return;
}
TimeSpan timeSinceLastAction = DateTime.UtcNow - m_LastActionTime.Value;
TimeSpan remaining = m_IdleTimeSpanBeforeEvent - timeSinceLastAction;
if (remaining > TimeSpan.Zero)
{
// We are not idle yet; check again once the idle period could have elapsed.
m_Timer.Change(remaining, TimeSpan.Zero);
return;
}
m_LastActionTime = null;
m_Timer.Change(m_IdleTimeSpanBeforeEvent, TimeSpan.Zero);
// copy the callback reference inside the lock so we don't race with the setter
callback = m_IdleCallback;
}
if (callback != null)
{
callback();
}
}
// IdleNotifier implementation below
public void ActionOccured()
{
lock (SyncRoot)
{
m_LastActionTime = DateTime.UtcNow;
}
}
public IdleCallback Callback
{
set
{
lock (SyncRoot)
{
m_IdleCallback = value;
}
}
}
public TimeSpan IdleTimeSpanBeforeCallback
{
set
{
lock (SyncRoot)
{
m_IdleTimeSpanBeforeEvent = value;
// Run OnTimer immediately
m_Timer.Change(TimeSpan.Zero, TimeSpan.Zero);
}
}
}
}
There are many straightforward performance improvements to this code.
If anyone is interested in my first thoughts on this, just ask.
I've actually had to build my own "Timing" class for an MMORPG I made. It could keep track of over 100,000 "entities" that had timers for processing AI and other tasks. Based on different actions that could be taken, I would have to momentarily delay an event.
Now, my timing class was completely hand-written, so it won't be exactly what you're looking for. But something you could do that would be similar to the solution I came up with is a sort of:
while (sleepyTime > 0)
{
int temp = sleepyTime;
sleepyTime = 0;
Thread.Sleep(temp);
}
// here's where your actual code is.
Then, you can make a "Delay" method that basically just adds to sleepyTime.
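Wrapped up, the idea might look something like this (a sketch; PushbackDelay is an invented name, and the lock is there because Delay would presumably be called from other threads):
using System.Threading;

class PushbackDelay
{
    private readonly object sync = new object();
    private int sleepyTime;

    // other threads call this to push the wake-up further back
    public void Delay(int milliseconds)
    {
        lock (sync) { sleepyTime += milliseconds; }
    }

    public void Wait(int initialDelay)
    {
        Delay(initialDelay);
        while (true)
        {
            int temp;
            lock (sync)
            {
                temp = sleepyTime;
                sleepyTime = 0;
            }
            if (temp <= 0)
                break;
            Thread.Sleep(temp); // more Delay() calls may accumulate while sleeping
        }
        // here's where your actual code runs
    }
}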