Using Rx to track multiple items in an observable collection - C#

Here's a quick question. I have an ObservableCollection<IItem>, where IItem has a property called Id. Throughout the lifetime of the application, items are added to, removed from, and then re-added to this collection.
What I need is to track when items with certain Ids are present in this collection. When all required dependencies are present, I need to do some initialization; if at least one of the required items is removed, I need to do a cleanup. If that item is later re-added, I need to do the initialization again.
Any suggestions on which Rx operators to use to build this kind of query?

Keeping track of the state of the collection will probably be somewhat tedious. Unless your collection is very big, you can instead examine the collection on each change to determine whether your criteria for initialization are fulfilled. Then you can use DistinctUntilChanged to get an observable that fires when you need to perform initialization and cleanup.
Here is an example:
var collection = new ObservableCollection<Int32>();
var observable = Observable
    .FromEventPattern<NotifyCollectionChangedEventHandler, NotifyCollectionChangedEventArgs>(
        handler => collection.CollectionChanged += handler,
        handler => collection.CollectionChanged -= handler
    );
You then need a predicate that determines if initialization is required (the collection "is ready"). This predicate can get expensive if your collection is big because it will be called on each change to the collection, but my assumption is that this is not a problem.
Boolean IsReady(IEnumerable<Int32> items, IReadOnlyList<Int32> itemsRequiredToBeReady)
{
    return items.Intersect(itemsRequiredToBeReady).Count() == itemsRequiredToBeReady.Count;
}
Then you can use DistinctUntilChanged to get notifications when the IsReady predicate changes from true to false and vice versa:
var isReadyObservable = observable
    .Select(ep => IsReady((ObservableCollection<Int32>) ep.Sender, ItemsRequiredToBeReady))
    .DistinctUntilChanged();
To initialize and cleanup you need two subscriptions:
isReadyObservable.Where(isReady => isReady).Subscribe(_ => Initialize());
isReadyObservable.Where(isReady => !isReady).Subscribe(_ => Cleanup());
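Putting it together, here is a minimal sketch under the same assumptions as above (Int32 ids; ItemsRequiredToBeReady, Initialize and Cleanup are illustrative stand-ins for your own members; System.Reactive.Linq must be in scope):
// Sketch: wiring the pieces above together. ItemsRequiredToBeReady,
// Initialize and Cleanup are assumed names, not part of the original code.
var ItemsRequiredToBeReady = new List<Int32> { 1, 2, 3 };
Action Initialize = () => Console.WriteLine("Initialize");
Action Cleanup = () => Console.WriteLine("Cleanup");

var initSub = isReadyObservable.Where(isReady => isReady).Subscribe(_ => Initialize());
var cleanupSub = isReadyObservable.Where(isReady => !isReady).Subscribe(_ => Cleanup());

collection.Add(1);
collection.Add(2);
collection.Add(3);    // IsReady flips to true  -> Initialize
collection.Remove(2); // IsReady flips to false -> Cleanup
collection.Add(2);    // IsReady flips to true  -> Initialize again

initSub.Dispose();
cleanupSub.Dispose();
Note that each Subscribe call attaches its own CollectionChanged handler and keeps its own DistinctUntilChanged state; if that matters to you, you can share a single subscription with Publish().RefCount().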

ObservableCollection is not quite observable, as it turns out, so first you must consider what strategy you are going to employ here. If it is just about items being added and removed, then this code should suffice.
internal class Program
{
    private static ObservableCollection<IItem> oc = new ObservableCollection<IItem>();
    private static readonly long[] crossCheck = { 1, 2, 3 };

    private static void Main(string[] args)
    {
        oc.CollectionChanged += oc_CollectionChanged;
        oc.Add(new IItem { Id = 1, Amount = 100 });
        oc.Add(new IItem { Id = 2, Amount = 200 });
        oc.Add(new IItem { Id = 3, Amount = 300 });
        oc.RemoveAt(1);
    }

    private static void oc_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
    {
        Console.WriteLine("{0} {1}", e.Action, oc.Sum(s1 => s1.Amount));
        if (crossCheck.SequenceEqual(oc.Select(s1 => s1.Id).Intersect(crossCheck)))
            Console.WriteLine("I have all elements I wanted!");
        if (e.OldItems != null && e.Action.Equals(NotifyCollectionChangedAction.Remove) &&
            e.OldItems.Cast<IItem>().Any(a1 => a1.Id.Equals(2)))
            Console.WriteLine("I've lost item two");
    }
}

// Note: despite the "I" prefix, IItem is a concrete class here, not an interface.
internal class IItem
{
    public long Id { get; set; }
    public int Amount { get; set; }
}
Produces:
Add 100
Add 300
Add 600
I have all elements I wanted!
Remove 400
I've lost item two
Press any key to continue . . .
Of course, in your event handler you can process other conditions as needed; for example, you probably want to fire some of those data-dependent events just once, etc.
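For instance, a minimal sketch of the fire-once idea, reusing the oc and crossCheck fields from the sample above (the isInitialized flag is an illustrative assumption, not part of the original code):
// Hypothetical flag so the "ready"/"lost" messages fire only on transitions,
// not on every collection change.
private static bool isInitialized;

private static void oc_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
    bool hasAll = crossCheck.SequenceEqual(oc.Select(s1 => s1.Id).Intersect(crossCheck));
    if (hasAll && !isInitialized)
    {
        isInitialized = true;
        Console.WriteLine("Initializing: all required elements are present");
    }
    else if (!hasAll && isInitialized)
    {
        isInitialized = false;
        Console.WriteLine("Cleaning up: a required element was removed");
    }
}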

Related

System.Timers.Timer throwing an Exception

I am implementing code using List<>. Each list element is an instance of a class, and one of the members of this class is a System.Timers.Timer variable called "TimeOut". TimeOut.Interval is 10, and the TimeOut.Elapsed event should be triggered every TimeOut.Interval.
The event's job is to delete that item from the list. The handler is attached such that
TimeOut.Elapsed += (sender, e) => DeleteEntry(sender, e, i)
where i is the index of the item in the list.
The event method is:
public void DeleteEntry(object sender, System.Timers.ElapsedEventArgs e, int i)
{
    // Delete the item from the list
    listTable.RemoveAt(i);
}
My problem is that if two items start their timers together and fire after the same interval, the one added to the list first is deleted, and when the second handler runs immediately afterwards, it throws an exception telling me that the index is out of bounds. This happens because one of the items has been deleted, the list has shifted its indexes up, and the "i" passed to the DeleteEntry method is the old index, which no longer exists.
In other cases, when I have more than two items in the list, it deletes the wrong one, since the method uses a stale index.
Using arrays is not an option for me; it is too much work to defragment the array on every deletion of an element.
How can I solve this? Do I need to do something with the ThreadPool, set a flag, or something like that? Can the SynchronizingObject property help in any way?
Thank you.
Solution:
From what you're saying, it seems that your problem is that the list shifts and you don't know at what position the item now is. If that is the case, you will want to use the Count property of the list, which tells you the number of items in it. Be careful when working with it: Count is a one-based count, while indexes are zero-based.
Edit:
Example:
private static List<int> Numbers = new List<int>();
private static readonly object NumbersLock = new object();

static void Main(string[] args)
{
    Thread X = new Thread(() => { AddNumbers(); });
    Thread Y = new Thread(() => { AddNumbers(); });
    X.Start();
    Y.Start();
}

private static void AddNumbers()
{
    Random r = new Random(DateTime.UtcNow.Millisecond);
    while (true)
    {
        lock (NumbersLock)
        {
            Numbers.Add(r.Next(0, 100));
        }
    }
}
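Separately, a way to sidestep the stale-index problem described in the question is to capture the item itself rather than its index, and remove it by reference under a lock. A sketch (the Entry type, tableLock and listTable names are illustrative stand-ins for the asker's types):
// Sketch: remove by reference instead of by index, so concurrent deletions
// cannot invalidate a captured position. Entry is a hypothetical element type.
class Entry
{
    public System.Timers.Timer TimeOut = new System.Timers.Timer(10);
}

private static readonly object tableLock = new object();
private static List<Entry> listTable = new List<Entry>();

private static void AddEntry(Entry entry)
{
    lock (tableLock) { listTable.Add(entry); }
    entry.TimeOut.Elapsed += (sender, e) =>
    {
        lock (tableLock)
        {
            listTable.Remove(entry); // List<T>.Remove is a no-op if the item is already gone
        }
    };
    entry.TimeOut.Start();
}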

Batch updating UI component through observable collection with large amounts of data WinRT

I have a WinRT application that fires notifications every time it receives data from a device. I also have a UI control that is data-bound to an observable collection, to which I wish to add the new data.
While I have made it capable of updating the observable collection, it causes the UI to become very laggy, as the data is generated fast. It would therefore be better to batch the updates, maybe every few hundred milliseconds.
Below is a snippet of the code. First, I create the periodic timer:
TimerElapsedHandler f = new TimerElapsedHandler(batchUpdate);
ThreadPoolTimer.CreatePeriodicTimer(f, new TimeSpan(0, 0, 3));
Below is my event handler for when new data comes in, along with the temporary list that stores the information
List<FinancialStuff> lst = new List<FinancialStuff>();

async void myData_ValueChanged(GattCharacteristic sender, GattValueChangedEventArgs args)
{
    var data = new byte[args.CharacteristicValue.Length];
    DataReader.FromBuffer(args.CharacteristicValue).ReadBytes(data);
    lst.Add(new FinancialStuff() { Time = DateTime.UtcNow.ToString("mm:ss.ffffff"), Amount = data[0] });
}
Then my batch update, which is called periodically:
private void batchUpdate(ThreadPoolTimer source)
{
    AddItem<FinancialStuff>(financialStuffList, lst);
}
Then finally, for testing I want to clear the observable collection and items.
public async void AddItem<T>(ObservableCollection<T> oc, List<T> items)
{
    lock (items)
    {
        if (Dispatcher.HasThreadAccess)
        {
            foreach (T item in items)
                oc.Add(item);
        }
        else
        {
            Dispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
            {
                oc.Clear();
                for (int i = 0; i < items.Count; i++)
                {
                    items.Count());
                    oc.Add(items[i]);
                }
                lst.Clear();
            });
        }
    }
}
While this seems to work, after a few updates the UI locks up and it updates very slowly, if at all. For testing, it's only getting a few hundred items in the list by the time the timer fires.
Can anybody enlighten me as to why this is happening? I'm presuming my design is very poor.
Thanks
You're not locking your list in the event handler
// "lst" is never locked in your event handler
List<FinancialStuff> lst = new List<FinancialStuff>();
lst.Add(new FinancialStuff() { Time = "DateTime.UtcNow.ToString("mm:ss.ffffff")", Amount = data[0] });
Passing "lst" above to your async method
AddItem<FinancialStuff>(financialStuffList, lst);
You're locking "items" below, which is really "lst" above. However, you're adding to the list while your processing it. I assume the event handler has a higher priority so your processing is slower than your add. This can lead to "i < items.Count" being true forever.
public async void AddItem<T>(ObservableCollection<T> oc, List<T> items)
{
    // "lst" reference is locked here, but it wasn't locked in the event handler
    lock (items)
    {
        if (Dispatcher.HasThreadAccess)
        {
            foreach (T item in items)
                oc.Add(item);
        }
        else
        {
            Dispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
            {
                oc.Clear();
                // This may never exit the for loop
                for (int i = 0; i < items.Count; i++)
                {
                    items.Count());
                    oc.Add(items[i]);
                }
                lst.Clear();
            });
        }
    }
}
EDIT:
Do you need to view every piece of data? There is going to be some overhead when using a lock. If you're getting data quicker than you can render it, you'll eventually be backed up and/or have a very large collection to render, which might also cause some problems. I suggest you do some filtering to only draw the last x items (say 100). Also, I'm not sure why you need the if (Dispatcher.HasThreadAccess) condition either.
Try the following:
public async void AddItem<T>(ObservableCollection<T> oc, List<T> items)
{
    // "lst" reference is locked here, but it wasn't locked in the event handler
    lock (items)
    {
        // Change this to what you want
        const int maxSize = 100;
        // Make sure it doesn't index out of bounds
        int startIndex = Math.Max(0, items.Count - maxSize);
        int length = items.Count - startIndex;
        List<T> itemsToRender = items.GetRange(startIndex, length);
        // You can clear it here in your background thread. The references to the objects
        // are now in the itemsToRender list.
        lst.Clear();

        // Dispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
        // Please verify this is the correct syntax
        Dispatcher.Run(() =>
        {
            // At second look, this might need to be locked too
            // EDIT: This probably will just add overhead now that it's not running async.
            // You can probably remove this lock
            lock (oc)
            {
                oc.Clear();
                for (int i = 0; i < itemsToRender.Count; i++)
                {
                    // I didn't notice it before, but why are you checking the count again?
                    // items.Count());
                    oc.Add(itemsToRender[i]);
                }
            }
        });
    }
}
EDIT2:
Since your AddItem method is already on a background thread, I don't think you need Dispatcher.RunAsync. Instead, I think it might be desirable for it to block so you don't end up with multiple calls into that section of code. Try using Dispatcher.Run instead. I've updated the code example above to show the changes. You shouldn't need the lock on the oc anymore, since the lock on the items is good enough. Also, verify that the syntax for Dispatcher.Run is correct.
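Note that the WinRT CoreDispatcher does not actually expose a synchronous Run method, so the Dispatcher.Run call above is best treated as pseudocode. One way to get the blocking behavior described here (a sketch, assuming the AsTask() WinRT interop extension is available to the project) is to wait on the task returned by RunAsync:
// Sketch: block the background thread until the dispatched UI work completes.
// Relies on the System.WindowsRuntimeSystemExtensions.AsTask() extension method.
Dispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
{
    oc.Clear();
    foreach (T item in itemsToRender)
        oc.Add(item);
}).AsTask().Wait(); // returns only after the UI thread has run the lambda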

Removing items from a list when a condition is met

I'm writing a multi-threaded download manager where download info is managed by a class I wrote (called DownloadOperation). Downloads are held in a list (called download). I need to remove objects from the list when a function in the class (queryCompleted) returns true, but I found out that elements cannot be removed from a list from within a foreach loop. What is the best way to achieve the same effect? I'm relatively new to C#, so pardon my stupidity.
private void removeInactiveDownloads()
{
    foreach (DownloadOperation dl in download)
    {
        if (dl.queryComplete() == true)
        {
            // if download is no longer in progress it is removed from the list.
            download.Remove(dl);
        }
    }
}
List<T> has a method
public int RemoveAll(Predicate<T> match)
that removes all elements matching a predicate: http://msdn.microsoft.com/en-us/library/wdka673a.aspx
Therefore I suggest something like:
download.RemoveAll(x => x.queryComplete());
(note that == true is not needed since .queryComplete() already returns true or false!)
Iterate backwards in a for loop instead of a foreach loop:
for (int i = download.Count - 1; i >= 0; i--)
{
    if (download[i].queryComplete())
    {
        // if download is no longer in progress it is removed from the list.
        download.RemoveAt(i);
    }
}
Patashu's answer is the best solution in general, but based on your example code I would suggest taking another approach altogether.
Are you polling the download list periodically to find the completed ones? Event subscription would probably be a better solution. Since you're new to C#, in case you didn't know the language has built-in support for this pattern: Events
A download could raise a Completed event when it completes, which is subscribed to by the code which manages the list, something like:
private void AddDownload(DownloadOperation dl)
{
    download.Add(dl);
    dl.Completed += (s, e) => download.Remove(dl);
}
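For completeness, a minimal sketch of the publisher side of that pattern (the Completed event and OnCompleted helper are assumptions; the question does not show the real DownloadOperation class):
// Sketch of the event-raising side (assumed member names).
public class DownloadOperation
{
    public event EventHandler Completed;

    // Call this when the download transitions to "no longer in progress".
    protected virtual void OnCompleted()
    {
        var handler = Completed; // copy to avoid a race with unsubscription
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
Bear in mind that if Completed can be raised from a worker thread, access to the download list still needs synchronization (or marshaling to a single thread).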

Making a "modify-while-enumerating" collection thread-safe

I want to create a thread-safe collection that can be modified while being enumerated.
The sample ActionSet class stores Action handlers. It has the Add method that adds a new handler to the list and the Invoke method that enumerates and invokes all of the collected action handlers. The intended working scenarios include very frequent enumerations with occasional modifications while enumerating.
Normal collections throw an exception if you modify them with the Add method while an enumeration is still in progress.
There is an easy, but slow solution to the problem: Just clone the collection before enumeration:
class ThreadSafeSlowActionSet
{
    List<Action> _actions = new List<Action>();

    public void Add(Action action)
    {
        lock (_actions)
        {
            _actions.Add(action);
        }
    }

    public void Invoke()
    {
        // Clone under the lock, then enumerate the clone outside it.
        List<Action> actionsClone;
        lock (_actions)
        {
            actionsClone = _actions.ToList();
        }
        foreach (var action in actionsClone)
        {
            action();
        }
    }
}
The problem with this solution is the cloning overhead on every call, and I want enumeration to be very fast.
I've created a rather fast "recursion-safe" collection that allows adding new values even while enumerating. If you add new values while the main _actions collection is being enumerated, the values are added to the temporary _delta collection instead of the main one. After all enumerations are finished, the _delta values are added to the _actions collection. If you add some new values while the main _actions collection is being enumerated (creating the _delta collection) and then re-enter the Invoke method again we have to create a new merged collection (_actions + _delta) and replace _actions with it.
So, this collection looks "recursion-safe", but I want to make it thread-safe. I think that I need to use the Interlocked.* constructs, classes from System.Threading and other synchronization primitives to make this collection thread-safe, but I don't have a good idea on how to do that.
How to make this collection thread-safe?
class RecursionSafeFastActionSet
{
    List<Action> _actions = new List<Action>(); // The main store
    List<Action> _delta; // Temporary buffer for values added while the main store is being enumerated
    int _lock = 0; // The number of concurrent Invoke enumerations

    public void Add(Action action)
    {
        if (_lock == 0)
        { // _actions list is not being enumerated and can be modified
            _actions.Add(action);
        }
        else
        { // _actions list is being enumerated and cannot be modified
            if (_delta == null)
            {
                _delta = new List<Action>();
            }
            _delta.Add(action); // Storing the new values in the _delta buffer
        }
    }

    public void Invoke()
    {
        if (_delta != null)
        { // Re-entering Invoke after calling Add: Invoke->Add,Invoke
            Debug.Assert(_lock > 0);
            var newActions = new List<Action>(_actions); // Creating a new list for merging delta
            newActions.AddRange(_delta); // Merging the delta
            _delta = null;
            _actions = newActions; // Replacing the original list (which is still being iterated)
        }
        _lock++;
        foreach (var action in _actions)
        {
            action();
        }
        _lock--;
        if (_lock == 0 && _delta != null)
        {
            _actions.AddRange(_delta); // Merging the delta
            _delta = null;
        }
    }
}
Update: Added the ThreadSafeSlowActionSet variant.
A simpler approach (used, for example, by ConcurrentBag) is to have GetEnumerator() return an enumerator over a snapshot of the collection's contents. In your case this might look like:
private readonly object _sync = new object();

public IEnumerator<Action> GetEnumerator()
{
    lock (_sync)
    {
        return _actions.ToList().GetEnumerator();
    }
}
If you do this, you don't need a _delta field and the complexity it adds.
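Fleshed out, the whole class then collapses to something like this (a sketch following that suggestion; SnapshotActionSet is an illustrative name, and Invoke simply enumerates the snapshot):
// Sketch: snapshot-on-enumerate makes Add safe during Invoke without a _delta buffer.
class SnapshotActionSet : IEnumerable<Action>
{
    private readonly object _sync = new object();
    private readonly List<Action> _actions = new List<Action>();

    public void Add(Action action)
    {
        lock (_sync) { _actions.Add(action); }
    }

    public void Invoke()
    {
        // Enumerates a snapshot, so Add can be called freely from handlers or other threads.
        foreach (var action in this)
            action();
    }

    public IEnumerator<Action> GetEnumerator()
    {
        lock (_sync)
        {
            return _actions.ToList().GetEnumerator();
        }
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
The trade-off is the same per-call cloning cost as ThreadSafeSlowActionSet, paid in exchange for much simpler invariants.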
Here is your class modified for thread safety:
class SafeActionSet
{
    Object _sync = new Object();
    List<Action> _actions = new List<Action>(); // The main store
    List<Action> _delta = new List<Action>();   // Temporary buffer for values added while the main store is being enumerated
    int _lock = 0; // The number of concurrent Invoke enumerations

    public void Add(Action action)
    {
        lock (_sync)
        {
            if (0 == _lock)
            { // _actions list is not being enumerated and can be modified
                _actions.Add(action);
            }
            else
            { // _actions list is being enumerated and cannot be modified
                _delta.Add(action); // Storing the new values in the _delta buffer
            }
        }
    }

    public void Invoke()
    {
        lock (_sync)
        {
            if (0 < _delta.Count)
            { // Re-entering Invoke after calling Add: Invoke->Add,Invoke
                Debug.Assert(0 < _lock);
                var newActions = new List<Action>(_actions); // Creating a new list for merging delta
                newActions.AddRange(_delta); // Merging the delta
                _delta.Clear();
                _actions = newActions; // Replacing the original list (which is still being iterated)
            }
            ++_lock;
        }
        foreach (var action in _actions)
        {
            action();
        }
        lock (_sync)
        {
            --_lock;
            if ((0 == _lock) && (0 < _delta.Count))
            {
                _actions.AddRange(_delta); // Merging the delta
                _delta.Clear();
            }
        }
    }
}
I made a few other tweaks, for the following reasons:
- Reversed the IF expressions to have the constant value first, so that if I make a typo and put "=" instead of "==" or "!=" etc., the compiler will instantly tell me about it. (: a habit I got into because my brain and fingers are often out of sync :)
- Preallocated _delta and called .Clear() instead of setting it to null, because I find that easier to read.
- The various lock (_sync) { ... } blocks give you thread safety on all instance variable access, :( with the exception of the access to _actions in the enumeration itself. ):
Since I actually also needed to delete items from the collection, the implementation that I ultimately used was based on a rewritten LinkedList that locks adjacent nodes on deletion/insertion and doesn't complain if the collection was changed during enumeration.
I also added a Dictionary to make the element search fast.

How should I implement a "quiet period" when raising events?

I'm using a subscriber/notifier pattern to raise and consume events from my .Net middle-tier in C#. Some of the events are raised in "bursts", for instance, when data is persisted from a batch program importing a file. This executes a potentially long-running task, and I'd like to avoid firing the event several times a second by implementing a "quiet period", whereby the event system waits until the event stream slows down to process the event.
How should I do this when the Publisher takes an active role in notifying subscribers? I don't want to wait until an event comes in to check to see if there are others waiting out the quiet period...
There is no host process to poll the subscription model at the moment. Should I abandon the publish/subscribe pattern or is there a better way?
Here's a rough implementation that might point you in a direction. In my example, the task that involves notification is saving a data object. When an object is saved, the Saved event is raised. In addition to a simple Save method, I've implemented BeginSave and EndSave methods as well as an overload of Save that works with those two for batch saves. When EndSave is called, a single BatchSaved event is fired.
Obviously, you can alter this to suit your needs. In my example, I kept track of a list of all objects that were saved during a batch operation, but this may not be something that you'd need to do...you may only care about how many objects were saved or even simply that a batch save operation was completed. If you anticipate a large number of objects being saved, then storing them in a list as in my example may become a memory issue.
EDIT: I added a "threshold" concept to my example that attempts to prevent a large number of objects being held in memory. This causes the BatchSaved event to fire more frequently, though. I also added some locking to address potential thread safety, though I may have missed something there.
class DataConcierge<T>
{
    // *************************
    // Simple save functionality
    // *************************

    public void Save(T dataObject)
    {
        // perform save logic

        this.OnSaved(dataObject);
    }

    public event DataObjectSaved<T> Saved;

    protected void OnSaved(T dataObject)
    {
        var saved = this.Saved;
        if (saved != null)
            saved(this, new DataObjectEventArgs<T>(dataObject));
    }

    // ************************
    // Batch save functionality
    // ************************

    Dictionary<BatchToken, List<T>> _BatchSavedDataObjects = new Dictionary<BatchToken, List<T>>();
    System.Threading.ReaderWriterLockSlim _BatchSavedDataObjectsLock = new System.Threading.ReaderWriterLockSlim();
    int _SavedObjectThreshold = 17; // if the number of objects being stored for a batch reaches this threshold, then those objects are to be cleared from the list.

    public BatchToken BeginSave()
    {
        // create a batch token to represent this batch
        BatchToken token = new BatchToken();
        _BatchSavedDataObjectsLock.EnterWriteLock();
        try
        {
            _BatchSavedDataObjects.Add(token, new List<T>());
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitWriteLock();
        }
        return token;
    }

    public void EndSave(BatchToken token)
    {
        List<T> batchSavedDataObjects;
        _BatchSavedDataObjectsLock.EnterWriteLock();
        try
        {
            if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");

            this.OnBatchSaved(batchSavedDataObjects); // this causes a single BatchSaved event to be fired

            if (!_BatchSavedDataObjects.Remove(token))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitWriteLock();
        }
    }

    public void Save(BatchToken token, T dataObject)
    {
        List<T> batchSavedDataObjects;
        // the read lock prevents EndSave from executing before this Save method has a chance to finish executing
        _BatchSavedDataObjectsLock.EnterReadLock();
        try
        {
            if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");

            // perform save logic

            this.OnBatchSaved(batchSavedDataObjects, dataObject);
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitReadLock();
        }
    }

    public event BatchDataObjectSaved<T> BatchSaved;

    protected void OnBatchSaved(List<T> batchSavedDataObjects)
    {
        lock (batchSavedDataObjects)
        {
            var batchSaved = this.BatchSaved;
            if (batchSaved != null)
                batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects));
        }
    }

    protected void OnBatchSaved(List<T> batchSavedDataObjects, T savedDataObject)
    {
        // add the data object to the list storing the data objects that have been saved for this batch
        lock (batchSavedDataObjects)
        {
            batchSavedDataObjects.Add(savedDataObject);

            // if the threshold has been reached
            if (_SavedObjectThreshold > 0 && batchSavedDataObjects.Count >= _SavedObjectThreshold)
            {
                // then raise the BatchSaved event with the data objects that we currently have
                var batchSaved = this.BatchSaved;
                if (batchSaved != null)
                    batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects.ToArray()));

                // and clear the list to ensure that we are not holding on to the data objects unnecessarily
                batchSavedDataObjects.Clear();
            }
        }
    }
}

class BatchToken
{
    static int _LastId = 0;
    static object _IdLock = new object();

    static int GetNextId()
    {
        lock (_IdLock)
        {
            return ++_LastId;
        }
    }

    public BatchToken()
    {
        this.Id = GetNextId();
    }

    public int Id { get; private set; }
}

class DataObjectEventArgs<T> : EventArgs
{
    public T DataObject { get; private set; }

    public DataObjectEventArgs(T dataObject)
    {
        this.DataObject = dataObject;
    }
}

delegate void DataObjectSaved<T>(object sender, DataObjectEventArgs<T> e);

class BatchDataObjectEventArgs<T> : EventArgs
{
    public IEnumerable<T> DataObjects { get; private set; }

    public BatchDataObjectEventArgs(IEnumerable<T> dataObjects)
    {
        this.DataObjects = dataObjects;
    }
}

delegate void BatchDataObjectSaved<T>(object sender, BatchDataObjectEventArgs<T> e);
In my example, I chose to use a token concept in order to create separate batches. This allows smaller batch operations running on separate threads to complete and raise events without waiting for a larger batch operation to complete.
I made separate events, Saved and BatchSaved; however, these could just as easily be consolidated into a single event.
EDIT: fixed race conditions pointed out by Steven Sudit on accessing the event delegates.
EDIT: revised locking code in my example to use ReaderWriterLockSlim rather than Monitor (i.e. the "lock" statement). I think there were a couple of race conditions, such as between the Save and EndSave methods. It was possible for EndSave to execute, causing the list of data objects to be removed from the dictionary. If the Save method was executing at the same time on another thread, it would be possible for a data object to be added to that list, even though it had already been removed from the dictionary.
In my revised example, this situation can't happen and the Save method will throw an exception if it executes after EndSave. These race conditions were caused primarily by me trying to avoid what I thought was unnecessary locking. I realized that more code needed to be within a lock, but decided to use ReaderWriterLockSlim instead of Monitor because I only wanted to prevent Save and EndSave from executing at the same time; there wasn't a need to prevent multiple threads from executing Save at the same time. Note that Monitor is still used to synchronize access to the specific list of data objects retrieved from the dictionary.
EDIT: added usage example
Below is a usage example for the above sample code.
static void DataConcierge_Saved(object sender, DataObjectEventArgs<Program.Customer> e)
{
    Console.WriteLine("DataConcierge<Customer>.Saved");
}

static void DataConcierge_BatchSaved(object sender, BatchDataObjectEventArgs<Program.Customer> e)
{
    Console.WriteLine("DataConcierge<Customer>.BatchSaved: {0}", e.DataObjects.Count());
}

static void Main(string[] args)
{
    DataConcierge<Customer> dc = new DataConcierge<Customer>();
    dc.Saved += new DataObjectSaved<Customer>(DataConcierge_Saved);
    dc.BatchSaved += new BatchDataObjectSaved<Customer>(DataConcierge_BatchSaved);

    var token = dc.BeginSave();
    try
    {
        for (int i = 0; i < 100; i++)
        {
            var c = new Customer();
            // ...
            dc.Save(token, c);
        }
    }
    finally
    {
        dc.EndSave(token);
    }
}
This resulted in the following output:
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 15
The threshold in my example is set to 17, so a batch of 100 items causes the BatchSaved event to fire 6 times.
I am not sure if I understood your question correctly, but I would try to fix the problem at the source - make sure the events are not raised in "bursts". You could consider implementing batch operations, which could be used by the file-importing program. This would be treated as a single operation in your middle tier and raise a single event.
I think it will be very tricky to implement a reasonable solution if you can't make the change outlined above. You could try to wrap your publisher in a "caching" publisher, which would implement some heuristic to cache the events if they come in bursts. The easiest would be to cache an event if another one of the same type is currently being processed (so your batch would cause at least two events: one at the very beginning, and one at the end). You could wait for a short time and only raise an event when the next one hasn't arrived during that time, but then you get a time lag even if there is a single event in the pipeline. You also need to make sure you raise the event from time to time even if there is a constant queue of events - otherwise the subscribers will potentially get starved.
The second option is tricky to implement and will contain heuristics, which might go very wrong...
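For illustration, a minimal sketch of such a "caching" wrapper (the QuietPeriodNotifier name and the 500 ms / 5 s values are arbitrary assumptions): it re-arms a timer on every incoming event and only publishes once the stream has been quiet for the interval, with a latency cap so a constant stream still flushes periodically.
// Sketch of a quiet-period ("debounce") publisher wrapper.
// QuietEvent, quietPeriodMs and _maxLatency are illustrative assumptions.
class QuietPeriodNotifier
{
    private readonly object _sync = new object();
    private readonly System.Timers.Timer _timer;
    private readonly TimeSpan _maxLatency = TimeSpan.FromSeconds(5); // force a flush at least this often
    private DateTime _firstPendingUtc = DateTime.MinValue;
    private int _pendingCount;

    public event EventHandler QuietEvent; // raised once per burst

    public QuietPeriodNotifier(double quietPeriodMs = 500)
    {
        _timer = new System.Timers.Timer(quietPeriodMs) { AutoReset = false };
        _timer.Elapsed += (s, e) => Flush();
    }

    // Called by the publisher for every raw event.
    public void Notify()
    {
        lock (_sync)
        {
            if (_pendingCount == 0)
                _firstPendingUtc = DateTime.UtcNow;
            _pendingCount++;

            if (DateTime.UtcNow - _firstPendingUtc >= _maxLatency)
            {
                Flush(); // constant stream: don't starve subscribers
            }
            else
            {
                _timer.Stop();
                _timer.Start(); // re-arm: the quiet period restarts on each event
            }
        }
    }

    private void Flush()
    {
        EventHandler handler;
        lock (_sync)
        {
            if (_pendingCount == 0) return;
            _pendingCount = 0;
            _timer.Stop();
            handler = QuietEvent;
        }
        if (handler != null)
            handler(this, EventArgs.Empty); // raised outside the lock
    }
}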
Here's one idea that's just fallen out of my head. I don't know how workable it is and can't see an obvious way to make it more generic, but it might be a start. All it does is provide a buffer for button click events (substitute with your event as necessary).
class ButtonClickBuffer
{
    public event EventHandler BufferedClick;

    public ButtonClickBuffer(Button button, int queueSize)
    {
        this.queueSize = queueSize;
        button.Click += this.button_Click;
    }

    private int queueSize;
    private List<EventArgs> queuedEvents = new List<EventArgs>();

    private void button_Click(object sender, EventArgs e)
    {
        queuedEvents.Add(e);
        if (queuedEvents.Count >= queueSize)
        {
            if (this.BufferedClick != null)
            {
                foreach (var args in this.queuedEvents)
                {
                    this.BufferedClick(sender, args);
                }
                queuedEvents.Clear();
            }
        }
    }
}
So your subscriber, instead of subscribing as:
this.button1.Click += this.button1_Click;
Would use a buffer, specifying how many events to wait for:
ButtonClickBuffer buffer = new ButtonClickBuffer(this.button1, 5);
buffer.BufferedClick += this.button1_Click;
It works in a simple test form I knocked up, but it's far from production-ready!
You said you didn't want to wait for an event to see if there is a queue waiting, which is exactly what this does. You could substitute the logic inside the buffer to spawn a new thread which monitors the queue and dispatches events as necessary. God knows what threading and locking issues might arise from that!
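As a sketch of that substitution (the timer stands in for the monitoring thread; TimedClickBuffer and the flush interval are illustrative assumptions), the buffer can flush on a timer instead of on a count threshold, so a lone event still gets delivered:
// Sketch: time-based flushing variant of the buffer above.
// System.Windows.Forms.Timer ticks on the UI thread, so no extra locking is needed here.
class TimedClickBuffer
{
    public event EventHandler BufferedClick;

    private readonly List<EventArgs> queuedEvents = new List<EventArgs>();
    private readonly System.Windows.Forms.Timer timer;

    public TimedClickBuffer(Button button, int flushIntervalMs)
    {
        timer = new System.Windows.Forms.Timer { Interval = flushIntervalMs };
        timer.Tick += (sender, e) =>
        {
            timer.Stop();
            if (BufferedClick != null)
            {
                foreach (var args in queuedEvents)
                    BufferedClick(this, args);
            }
            queuedEvents.Clear();
        };
        button.Click += (sender, e) =>
        {
            queuedEvents.Add(e);
            timer.Start(); // no-op if already running; the flush fires on the next tick
        };
    }
}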
