Note: this is a follow-on question from a previous one I asked here.
To summarise quickly: my previous problem was how to databind a BlockingCollection to a control in WPF, which was solved by using a CollectionViewSource.
However, I have thought about my use case a bit more and realised that simply using a BlockingCollection isn't going to work for me. I want the following behaviour:
One source of "work items", submitted to a common pool
Multiple "processors" of these work items
Items which are still "pending" and those which are being "processed" should both show up in the same view for databinding.
For example:
8 work items are submitted simultaneously, and the max level of concurrency is 4. Four of the work items should be moved into the "Processing" state, while the other four remain in "Pending". As each item in the "Processing" state completes, another item from the "Pending" state is picked up for processing. Once an item is finished processing, it is removed from the pool of work items. This is all visible to the user in real time.
The problem I had with the previous approach was that as an item was picked up for processing, it would disappear from the view because it had been "consumed" by the call to GetConsumingEnumerable. What I really wanted was for items to be safely picked out of the "pending" pool for processing, but still remain in the view so that status updates (via INotifyPropertyChanged) could be visible in the UI.
I have addressed the problem of items disappearing from the view by using two concurrent collections instead, then wrapping them up as a single CompositeCollection (which I bind to instead of the ICollectionView).
I have implemented this behaviour as below:
this.currentWorkItems = new ObservableConcurrentCollection<WorkItem>();
this.pendingWorkItems = new ObservableConcurrentCollection<WorkItem>();
this.compositeCollection = new CompositeCollection
{
    new CollectionContainer { Collection = this.currentWorkItems },
    new CollectionContainer { Collection = this.pendingWorkItems },
};
for (int i = 0; i < workConcurrencyFactor; i++)
{
    Task.Factory.StartNew(this.ProcessWorkItems);
}
Then my Add method:
public void Add(WorkItem workItem)
{
    this.pendingWorkItems.TryAdd(workItem);
}
Finally, the ProcessWorkItems method:
private void ProcessWorkItems()
{
    while (true)
    {
        Thread.Sleep(100);
        WorkItem workItem;
        if (this.pendingWorkItems.TryTake(out workItem))
        {
            this.currentWorkItems.TryAdd(workItem);
            workItem.Status = "Simulating First Step";
            Thread.Sleep(1000);
            workItem.Status = "Simulating Second Step";
            Thread.Sleep(1000);
            // Finished processing
            this.currentWorkItems.TryTake(out workItem);
        }
    }
}
Note, I'm using ObservableConcurrentCollection from here.
This works OK, but I feel like I'm missing something, or that I'm incurring totally unnecessary overhead by having multiple tasks sleeping and waking constantly when nothing is really happening. I also feel like I'm abusing the second ObservableConcurrentCollection, essentially using it as a holding area for items that I'm working on but still want to be visible.
Is there a better approach to this problem? What is the standard pattern for concurrent consumers to process a collection "in place", whilst avoiding multiple consumers grabbing the same item?
As Patryk already suggested, this is a good example for TPL Dataflow - we do something similar (just with several steps in the pipeline, including batching and transforming) here:
Create your Dataflow block to process the tasks and a collection to hold all of them:
var actionBlock = new ActionBlock<WorkItem>(item => ProcessWorkItem(item),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = threadCount });
var allItems = new ConcurrentDictionary<int, WorkItem>(); // or whatever fits
Then in the Add method:
public void Add(WorkItem workItem)
{
    allItems.TryAdd(workItem.Id, workItem); // ConcurrentDictionary has no plain Add
    actionBlock.Post(workItem);
}
And at the end of ProcessWorkItem, do an allItems.TryRemove(workItem.Id, out removed) (ConcurrentDictionary exposes TryRemove rather than Remove).
P.S.: The dataflow blocks are pretty fast, too - we do several hundred Post calls per second here...
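Putting those pieces together, a minimal sketch might look like the following (the WorkItem shape and the ProcessWorkItem body are assumptions based on the question, and the dataflow types come from the System.Threading.Tasks.Dataflow package):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

public class WorkItemPool
{
    private readonly ActionBlock<WorkItem> actionBlock;
    private readonly ConcurrentDictionary<int, WorkItem> allItems =
        new ConcurrentDictionary<int, WorkItem>();

    public WorkItemPool(int threadCount)
    {
        // The block replaces the hand-rolled polling tasks: it pulls posted
        // items and runs at most threadCount of them concurrently.
        actionBlock = new ActionBlock<WorkItem>(
            item => ProcessWorkItem(item),
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = threadCount });
    }

    public void Add(WorkItem workItem)
    {
        allItems.TryAdd(workItem.Id, workItem); // keeps the item visible while queued
        actionBlock.Post(workItem);             // hands it to the block for processing
    }

    private void ProcessWorkItem(WorkItem workItem)
    {
        workItem.Status = "Processing"; // INotifyPropertyChanged keeps the view current
        // ... real work here ...
        WorkItem removed;
        allItems.TryRemove(workItem.Id, out removed); // drop it from the pool when done
    }
}
```

The UI binds to a view over allItems (or to a parallel observable collection maintained on the dispatcher thread); items stay visible both while waiting in the block's input queue and while being processed, which is the behaviour the question asks for.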
I have a WPF application that consists of two threads simulating an enterprise producing and selling items over 52 weeks (only one transaction is allowed per week). I need to use a BackgroundWorker as well so that I can display the data in a ListView. As of right now, my UI freezes when clicking Simulate, but I can see that the output is still working in the debugging terminal. I have tried everything I can think of and, to be honest, I have had the help of my teacher and even he couldn't find a working solution.
What is freezing my UI when I call Simulate()?
When my code is different and my UI isn't freezing, my ListView never updates because it seems that DataProgress() doesn't work - e.UserState is never iterating.
The Simulate button calls:
private void Simulate(object sender, RoutedEventArgs e)
{
    // Declare BackgroundWorker
    Data = new ObservableCollection<Operations>();
    worker = new BackgroundWorker();
    worker.WorkerReportsProgress = true;
    worker.WorkerSupportsCancellation = true;
    worker.RunWorkerAsync(52);
    worker.DoWork += ShowData;
    worker.ProgressChanged += DataProgress;
    worker.RunWorkerCompleted += DataToDB;
    Production = new Production(qtyProduction, timeExecProd);
    Sales = new Sales(qtySales, timeExecSales);
    Thread prod = new Thread(Production.Product);
    prod.Start();
    Thread.Sleep(100);
    Thread sales = new Thread(Sales.Sell);
    sales.Start();
}
DoWork: ShowData():
Console.WriteLine("Simulation started | Initial stock : 500");
Production = new Production(qtyProduction, timeExecProd);
Sales = new Sales(qtySales, timeExecSales);
while (Factory.Week < max) // max = 52
{
    if (worker.CancellationPending) // also this isn't reacting to worker.CancelAsync();
        e.Cancel = true;
    // My teacher tried to call my threads from here, but it breaks the purpose of having
    // two threads as he was just calling 52 times two functions back to back and therefore
    // wasn't "randomizing" the transactions.
    int progressPercentage = Convert.ToInt32(((double)Factory.Week / max) * 100);
    (sender as BackgroundWorker).ReportProgress(progressPercentage, Factory.Week);
}
ProgressChanged: DataProgress():
if (e.UserState != null) // While using the debugger, it looks like this is called over & over
{
    Data.Add(new Operations()
    {
        id = rnd.Next(1, 999),
        name = Factory.name,
        qtyStock = Factory.Stock,
        averageStock = Factory.AverageStock,
        week = Factory.Week
    });
    listview.ItemsSource = Data;
}
RunWorkerCompleted: DataToDB():
// Outputs "Work done" for now.
In case you want to know what happens when I call my threads, it looks like this:
Sell():
while (Factory.Week <= 52)
{
    lock (obj)
    {
        // some math function calls
        Factory.Week++;
    }
    Thread.Sleep(timeExecSales);
}
Should I use a third thread just for updating my ListView? I don't see how, as I need it to be synced with my static variables. This is my first project for learning multithreading... I'm kind of clueless and flabbergasted that even my teacher can't help.
On the one hand, there isn't enough context in the posted code to get a full picture and answer your questions accurately. We can, however, deduce what is going wrong just from the code you have posted.
First, let's try to answer your two questions. We can likely infer the following:
This code here:
if (e.UserState != null)
{
    Data.Add(new Operations()
    {
        id = rnd.Next(1, 999),
        name = Factory.name,
        qtyStock = Factory.Stock,
        averageStock = Factory.AverageStock,
        week = Factory.Week
    });
    listview.ItemsSource = Data;
}
You are using a BackgroundWorker (a component dating from Windows Forms) to try to update a WPF GUI object, which should only be done on the main GUI thread. That runs into the obvious no-no: never update GUI objects from non-UI threads. BackgroundWorker also has its own issues with threading (foreground/background), contexts and execution, as it relies on the Dispatcher and SynchronizationContexts to get the job done.
Then there is the curiosity of setting the binding over and over in this line:
listview.ItemsSource = Data;
Let's put a pin in that for a moment...
There is, as the other commenter pointed out already, no exit strategy in your while loop:
while (Factory.Week < max) // max = 52
{
    if (worker.CancellationPending) // also this isn't reacting to worker.CancelAsync();
        e.Cancel = true;
    // My teacher tried to call my threads from here, but it breaks the purpose of having
    // two threads as he was just calling 52 times two functions back to back and therefore
    // wasn't "randomizing" the transactions.
    int progressPercentage = Convert.ToInt32(((double)Factory.Week / max) * 100);
    (sender as BackgroundWorker).ReportProgress(progressPercentage, Factory.Week);
}
But that's not the bigger problem... in addition to the misuse/misunderstanding of when, how many, and how to use threads, there doesn't seem to be any thread synchronization at all. There is no way to predict or track thread execution or lifetime this way.
At this point the question is technically more or less answered, but I feel like this will just leave you more frustrated and no better off than when you started. So maybe a quick crash course in basic design might help straighten out this mess (something your teacher should have done).
Assuming you are pursuing software development, and since you have chosen WPF here as your "breadboard" so to speak, you will likely come across terms such as MVC (model-view-controller) and MVVM (model-view-view-model). You will also likely come across design principles such as SOLID, separation of concerns, and grouping things into services.
Your code here is a perfect example of why all of these frameworks and principles exist. Let's look at some of the problems you have encountered and how to fix them:
You have threading code (logic and services - the controller, loosely speaking) mixed in with presentation code (the ListView update - the view) and collection updates (your ObservableCollection - the model). That's one reason (of many) you are having such a difficult time coding, fixing and maintaining the problem at hand. To clean it up, separate it out (separation of concerns). You might even move each operation into its own class with an interface/API to that class (a service/micro-service).
Not everything needs to be solved with threads. But for now, let's learn to crawl, then walk, before we run. Before you start learning about async/await or the TPL (Task Parallel Library), go old school: get a good book (even something from 20 years ago is fine) and learn how to use the ThreadPool and kernel synchronization objects such as mutexes and events, and how to signal between threads. Once you master that, then learn about the TPL and async/await.
Don't cross the streams. Don't mix WinForms and WPF - and I even saw a Console.WriteLine.
Learn about data binding, and in particular how it works in WPF. ObservableCollection is your friend: bind your ItemsSource to it once, then update the ObservableCollection and leave the GUI object alone.
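To make that last point concrete, here is a rough sketch of what the ProgressChanged handler could look like once the binding is set up a single time elsewhere (the Operations fields are taken from the question; treat the exact values as placeholders):

```csharp
// Somewhere during initialisation, bind exactly once:
//   listview.ItemsSource = Data;   // or ItemsSource="{Binding Data}" in XAML

// ProgressChanged is raised on the UI thread by BackgroundWorker,
// so touching the ObservableCollection here is safe.
private void DataProgress(object sender, ProgressChangedEventArgs e)
{
    if (e.UserState != null)
    {
        Data.Add(new Operations
        {
            week = (int)e.UserState  // take the week from UserState, not a shared static
        });
        // No listview.ItemsSource assignment here: the existing binding
        // sees the collection change and updates the ListView itself.
    }
}
```

The point is that the handler only mutates the model; the view follows along through the binding, so no control is ever touched from a worker thread.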
Hopefully this will help you straighten out the code and get things running.
Good luck!
I already know that to update a control I can invoke an Action:
progressBar1.Invoke(new Action(() => { progressBar1.Maximum = 0; }));
But that only works with properties of a control that are single values. Now I have a thread that needs to change the ListView's collection of items: clear it, then update it with new items, new text and icon images. What is the difference between them - an integer or a bool compared to a list, an array or a collection of integers, bools or even components and controls? Can I simply add:
string[] newItemText = { "item1", "item2", ... };
listView1.Invoke(new Action(() =>
{
    int i = 0;
    foreach (ListViewItem item in listView1.Items)
    {
        item.Text = newItemText[i];
        i++;
    }
}));
I may need a solution for my ListView, but if you have an answer, please explain clearly why and how it works, because I need to learn this thoroughly. Also, please warn me about the risks of that practice.
EDIT: The question is not about how to update a control from another thread. It's about when and why to do this and not that. Moreover, I need to know the difference between updating a single value and updating a whole collection of data.
There is no difference between updating a value and adding more elements to a list.
What happens internally is that drawing in Windows (the same applies to Android) requires graphics resources such as Pens, Brushes and similar objects.
Those objects belong to the context of the thread in which they were originally instantiated. This is typically the main thread, but if you created the graphical object - in this case the ListView - on a thread other than the main one, that specific thread would be the "owner" of your control.
If you want to change something in the control that requires refreshing the UI (i.e. requires the use of Pens, Brushes, etc.), you have to do it from the thread that "owns" the control. That's why you need to use Invoke.
That's the main principle. Whether you update a scalar value such as an integer or add elements to a list makes no difference: both require the use of graphics resources, so both must be called from the thread that owns the control.
Let me know if it is clear.
Edit: in any case your code looks a bit strange, because you are filling the list by iterating over the list itself. Wouldn't it make more sense to do this?
string[] newItemText = { "item1", "item2", ... };
listView1.Invoke(new Action(() =>
{
    for (int i = 0; i < newItemText.Length; i++)
    {
        listView1.Items.Add(newItemText[i]);
    }
}));
Without knowing the rest of the context, that seems more logical to me.
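To make the "no difference" point concrete, a scalar update and a whole-collection update take exactly the same shape; only the body of the delegate grows (WinForms syntax, assuming progressBar1 and listView1 exist on the form):

```csharp
// Scalar property: one statement inside the delegate.
progressBar1.Invoke(new Action(() => progressBar1.Maximum = 100));

// Whole collection: same marshalling, just more work inside the delegate.
string[] newItemText = { "item1", "item2" };
listView1.Invoke(new Action(() =>
{
    listView1.Items.Clear();
    foreach (var text in newItemText)
    {
        listView1.Items.Add(text); // Items.Add(string) wraps it in a ListViewItem
    }
}));
```

In both cases the delegate runs on the thread that owns the control. The risk of skipping Invoke is also the same in both cases: a cross-thread InvalidOperationException, or corrupted drawing on frameworks that don't check.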
I have to create a WPF UI which subscribes to real-time FX rate (currency + rate) updates and displays them in a grid (roughly 1000 updates per second, which means each row in the grid could be updated up to 1000 times per second). The grid would have at least 50 rows at any point in time.
To that end, I have created a ViewModel which subscribes to the update events and stores those updates in a concurrent dictionary, with the symbol as key and a RateViewModel object as value. I then have an ObservableCollection holding all those RateViewModel objects, which I bind to the grid.
Code:
public class MyViewModel
{
    private readonly IRatesService ratesService;
    private readonly ConcurrentDictionary<string, RateViewModel> rateDictionary;
    private object _locker = new object();

    public MyViewModel(IRatesService ratesService)
    {
        this.ratesService = ratesService;
        this.ratesService.OnUpdate += OnUpdate;
        rateDictionary = new ConcurrentDictionary<string, RateViewModel>();
        RateViewModels = new ObservableCollection<RateViewModel>();
    }

    private void OnUpdate(object sender, RateUpdateEventArgs e)
    {
        RateViewModel existingRate;
        if (!rateDictionary.TryGetValue(e.Update.Currency, out existingRate))
        {
            existingRate = new RateViewModel(new Rate(e.Update.Currency, e.Update.Rate));
            rateDictionary.TryAdd(e.Update.Currency, existingRate);
            return;
        }

        lock (_locker)
        {
            existingRate.UpdateRate(e.Update.Rate);
        }
        Application.Current.Dispatcher.BeginInvoke(new Action(() => SearchAndUpdate(existingRate)));
    }

    public ObservableCollection<RateViewModel> RateViewModels { get; set; }

    private void SearchAndUpdate(RateViewModel rateViewModel)
    {
        // Equals is based on Currency
        if (!RateViewModels.Contains(rateViewModel))
        {
            RateViewModels.Add(rateViewModel);
            return;
        }

        var index = RateViewModels.IndexOf(rateViewModel);
        RateViewModels[index] = rateViewModel;
    }
}
I have 4 questions about this:
Is there a way I can eliminate the ObservableCollection, as it leads to two different data structures storing the same items - but still have my updates relayed to the UI?
I have used a ConcurrentDictionary, which leads to locking the whole update operation. Is there any cleverer way of handling this than locking the whole dictionary (or whatever data structure is used)?
My UpdateRate method also locks - all the properties on my RateViewModel are read-only except the price, as this is what gets updated. Is there a way to make this atomic? Note that the price comes in as a double.
Is there a way to optimize the SearchAndUpdate method? This is related to the 1st point. At the moment I believe it's an O(n) operation.
I'm using .NET 4.0 and have omitted INPC for brevity.
EDIT: Could you please help me rewrite this in a better manner, taking all 4 points into account? Pseudocode will do.
Thanks,
-Mike
1) I wouldn't worry about 50 extra refs floating around.
2) Yes, lock-free data structures are doable. Interlocked is your friend here, and they are pretty much all one-offs. ReaderWriterLock is another good option if you aren't changing which items are in your dictionary often.
3) Generally, if you are dealing with more data than the UI can handle, you will want to do the updates in the background, only fire INPC on the UI thread, and, more importantly, have a facility to drop UI updates (while still updating the backing field). The basic approach is going to be something like:
Do an Interlocked.Exchange on the backing field
Use Interlocked.CompareExchange to set a private field to 1; if this returns 1, exit, because there is still a pending UI update
If Interlocked.CompareExchange returned 0, invoke to the UI, fire your property changed event, and reset your throttling field to 0 (technically there is more you need to do if you care about non-x86)
4) SearchAndUpdate seems superfluous... UpdateRate should be bubbling to the UI, and you only need to invoke to the UI thread when you need to add or remove an item from the observable collection.
Update: here is a sample implementation... things are a little more complicated because you are using doubles, which don't get atomicity for free on 32-bit CPUs.
class MyViewModel : INotifyPropertyChanged
{
    private System.Windows.Threading.Dispatcher dispatcher;

    public MyViewModel(System.Windows.Threading.Dispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    int myPropertyUpdating; // needs to be marked volatile if you care about non-x86
    double myProperty;

    double MyProperty
    {
        get
        {
            // Hack for the missing Interlocked.Read for doubles:
            // CompareExchange(ref x, 0, 0) returns the current value atomically
            // without changing it. If you are compiled for 64 bit you should be
            // able to just do a read.
            var retv = Interlocked.CompareExchange(ref myProperty, 0.0, 0.0);
            return retv;
        }
        set
        {
            if (myProperty != value)
            {
                // if you are compiled for 64 bit you can just do an assignment here
                Interlocked.Exchange(ref myProperty, value);
                if (Interlocked.Exchange(ref myPropertyUpdating, 1) == 0)
                {
                    dispatcher.BeginInvoke(new Action(() =>
                    {
                        try
                        {
                            PropertyChanged(this, new PropertyChangedEventArgs("MyProperty"));
                        }
                        finally
                        {
                            myPropertyUpdating = 0;
                            Thread.MemoryBarrier(); // This flushes the store buffer, which is the technically correct thing to do... but I've never had problems without it
                        }
                    }));
                }
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged = delegate { };
}
Mike -
I would approach this a little differently. You really don't need an ObservableCollection unless new FX rows are being added; ObservableCollection, as you know, only gives you built-in change notification in that scenario. If you have a list of 50 rows (for example) and the FX object representing each individual row is updated 1000 times a second, then you can very well use INotifyPropertyChanged on the FX object's properties and let that mechanism update the UI as they change. My line of thought is that this is a simpler approach for UI updates than moving items from one collection to another.
Now, regarding your second point: 1000 updates a second to an existing FX object is technically unreadable from a UI perspective. The approach I have taken is freeze and thaw, which means you essentially intercept the INotifyPropertyChanged notifications (as they fire to the UI) and make them frequency-based, so that, for example, every second you refresh the UI with the current state of all FX objects. Within that second, whatever updates happen to the FX properties simply keep overwriting one another, and the latest value at the one-second mark is what gets shown. That way the data shown to the UI is always correct and relevant at the moment it is displayed.
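One possible sketch of that freeze/thaw idea is a DispatcherTimer that raises the change notifications once per interval while worker threads keep overwriting the latest values (RaisePriceChanged is an assumed helper that simply fires INotifyPropertyChanged for the price; it is not from the question):

```csharp
// Worker threads overwrite RateViewModel.Price as fast as updates arrive;
// the UI only hears about the changes once per tick.
private readonly DispatcherTimer refreshTimer = new DispatcherTimer
{
    Interval = TimeSpan.FromSeconds(1)
};

public void StartRefreshing()
{
    refreshTimer.Tick += (s, e) =>
    {
        // Tick runs on the UI thread, so raising INPC here is safe.
        foreach (var rate in RateViewModels)
        {
            rate.RaisePriceChanged(); // notifies with whatever the latest value is
        }
    };
    refreshTimer.Start();
}
```

The trade-off is staleness of up to one interval in exchange for a bounded, predictable UI workload regardless of the update rate.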
There are a couple of factors to take into account, especially if the number of displayed rates will change dynamically. I'm assuming the 1000 updates/sec are coming from a thread other than the UI thread.
The first is that you will need to marshal the updates to the UI thread - this is done for you for updates to an existing ViewModel, but not for new/deleted ViewModels. With 1000 updates a second you probably want to control the granularity of the marshalling to the UI thread and the context switching this entails. Ian Griffiths wrote a great blog series on this.
The second is that if you want your UI to remain responsive you probably want to avoid as many gen 2 garbage collections as possible which means minimising the pressure on the GC. This might be an issue in your case as you create a new Rate object update for each update.
Once you have a few screens that do the same thing, you'll want to abstract this updating behaviour into a common component. Otherwise you'll be sprinkling threading code through your ViewModels, which is error-prone.
I've created an open source project, ReactiveTables, which addresses these three concerns and adds a couple of other features such as being able to filter, sort, join your model collections. Also there are demos showing how to use it with virtual grids to get the best performance. Maybe this can help you out/inspire you.
I'm having problems with Futures in NHibernate 3 and can't work out what's wrong.
The following code (without Futures), works as expected:
SessionHandler.DoInTransaction(transaction =>
{
    var criteria = SessionHandler.Session.CreateCriteria<T>();
    var clonedCriteria = (ICriteria)criteria.Clone();
    var count = criteria
        .SetProjection(Projections.RowCount())
        .UniqueResult<Int32>();
    var result = clonedCriteria
        .SetMaxResults(PageSize)
        .SetFirstResult(page * PageSize)
        .List<T>();
    ItemList = result;
    TotalResults = count;
    RecalculatePageCount();
});
SessionHandler just stores a Session for this context, and DoInTransaction is a convenience method:
public void DoInTransaction(Action<ITransaction> action)
{
    using (var transaction = Session.BeginTransaction())
    {
        action(transaction);
        transaction.Commit();
    }
}
Now, the following code causes a GenericAdoException:
SessionHandler.DoInTransaction(transaction =>
{
    var criteria = SessionHandler.Session.CreateCriteria<T>();
    var clonedCriteria = (ICriteria)criteria.Clone();
    var count = criteria
        .SetProjection(Projections.RowCount())
        .FutureValue<Int32>();
    var result = clonedCriteria
        .SetMaxResults(PageSize)
        .SetFirstResult(page * PageSize)
        .Future<T>();
    ItemList = result;
    TotalResults = count.Value;
    RecalculatePageCount();
});
I'm using PostgreSQL 9.2, Npgsql 2.0.11.0 and NHibernate 3.3.1.4000. If that matters, I use Fluent NHibernate for my mappings.
Thank you for any advice.
EDIT:
After more research, I found that this error only occurs after I add an item. At startup I'm loading data into my form, and it works just fine. I get the exception when I reload the data in my form after adding an item. But it is pretty strange: the item is added correctly. The code for adding or updating items looks like this:
if (IsEditing)
{
    SessionHandler.DoInTransaction(tx => SessionHandler.Session.Update(CurrentItem));
}
else
{
    SessionHandler.DoInTransaction(tx => SessionHandler.Session.Save(CurrentItem));
}
What is strange is that I (sometimes, I think) get this exception when raising the PropertyChanged event. I noticed that the InnerException is sometimes different. It sounds like a threading problem, but it is strange that it works without Futures. I'm not using threads for loading the data, just for adding items (hmm, but since I notify when my items are added, and I load the items in answer to that message, the load might be executed on another thread).
EDIT 2:
The error seems pretty random. Sometimes I get it, sometimes not :S
I think I've found the issue.
This may sound stupid, but I think it makes sense. I had to swap these lines:
ItemList = result;
TotalResults = count.Value;
So they became:
TotalResults = count.Value;
ItemList = result;
The problem was, basically, multithreading (I don't think I mentioned it much in my question, but the randomness of the errors was a bit suspicious). First, some background so the solution is clearer:
When a new element is added to the database, a message is (globally) sent, so everyone 'interested' can update its elements to reflect the changes. As I'm using MVVM Light, I do it like this:
Messenger.Default.Send(new DataReloadSuggested<MyEntityType>(theUpdatedId));
I was using Tasks to add the elements, so when I clicked on the 'Add' button, something like this was executed:
Task.Factory.StartNew(CommitCurrentItem);
And, inside CommitCurrentItem, I added the item to the database and notified the program that the list was updated (sending the message as mentioned above).
My main form registered to that messages, like this:
Messenger.Default.Register<DataReloadSuggested<T>>(this, true, unused => LoadPage(CurrentPage));
Even though the LoadPage function was not creating a different thread manually (at that moment), it was executed on the same thread as CommitCurrentItem. A Task is not guaranteed to run on a different thread, but in this case it was. LoadPage just called the code from the question. My code is guaranteed to raise the PropertyChanged event (from the INotifyPropertyChanged interface) on the UI thread, so when I set the ItemList property (of type IEnumerable), the UI was notified and showed the new list. So the UI retrieved the new value of ItemList while, on the other thread, the line TotalResults = count.Value; was still executing.
In this case, I guess the query is not executed against the database until the first value is retrieved (the first item in the list, or the row count).
Remember that ISession is not thread-safe, so this situation was unreliable: the UI thread and the other thread were using the same session at the same time. In my code I don't share sessions between ViewModels, and each ViewModel uses its session from only one thread at a time, precisely to prevent this kind of situation.
So the final solution was to force execution of the query on the thread I was working on, by simply reading count.Value before setting ItemList to result.
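In code, the fix is just the ordering; the ToList() call below is my addition, not from the original, to make the point of evaluation explicit:

```csharp
// Reading count.Value executes the batched future queries right here,
// on the worker thread that owns the ISession...
TotalResults = count.Value;

// ...so by the time the UI thread enumerates ItemList, the results are
// already materialised and the session is no longer being touched.
ItemList = result.ToList();
RecalculatePageCount();
```

This keeps all session access on one thread, which is what ISession's thread-affinity rules require.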
I have an app that has a ConcurrentQueue of items, each of which has an ID property and its own ConcurrentQueue of work items. The queue items look like:
class QueueItem
{
    public int ID { get; set; }
    public ConcurrentQueue<WorkItem> workItemQueue { get; set; }
}
and the queue itself looks like:
ConcurrentQueue<QueueItem> itemQueue;
I have one thread doing a foreach over the itemQueue, dequeuing an item from each queue and doing work on it:
foreach (var queueItem in itemQueue)
{
    WorkItem workItem;
    if (queueItem.workItemQueue.TryDequeue(out workItem))
        doWork(workItem);
    else
        // no more workItems for this queueItem
}
I'm using ConcurrentQueues because I have a separate thread potentially adding queueItems to the itemQueue, and adding workItems to each workItemQueue.
My problem comes when there are no more workItems left in a queueItem - I'd like to remove that queueItem from the itemQueue, something like:
if (queueItem.workItemQueue.TryDequeue(out workItem))
    doWork(workItem);
else
    itemQueue.TryRemove(queueItem);
...but I can't find a way to do that easily. The way I've come up with is to dequeue each QueueItem and then enqueue it again if there are still WorkItems in its workItemQueue:
for (int i = 0; i < itemQueue.Count; i++)
{
    QueueItem queueItem;
    WorkItem workItem;
    itemQueue.TryDequeue(out queueItem);
    if (queueItem.workItemQueue.TryDequeue(out workItem))
    {
        itemQueue.Enqueue(queueItem);
        doWork(workItem);
    }
    else
        break;
}
Is there a better way to accomplish what I want using the PFX ConcurrentQueue? Is this a reasonable way to do it, should I use a custom concurrent queue/list implementation, or am I missing something?
In general, there is no efficient way to remove specific items from queues. They generally have O(1) enqueues and dequeues, but O(n) removes, which is what your implementation does.
One alternative structure is something called a LinkedHashMap. Have a look at the Java implementation if you are interested.
It is essentially a hash table plus a linked list, which together allow O(1) enqueue, dequeue and remove.
This isn't implemented in .NET yet, but there are a few implementations floating around the web.
Now, the question is, why is itemQueue a queue? From your code samples, you never enqueue or dequeue anything from it (except to navigate around the Remove problem). I have a suspicion that your problem could be simplified if a more suitable data structure is used. Could you give examples on what other pieces of code access itemQueue?
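For example, if itemQueue only exists so the worker can visit every parent item, a ConcurrentDictionary keyed by ID gives O(1) removal when a work-item queue runs dry. A sketch, reusing the question's names (note the race where a producer can enqueue a WorkItem between the failed TryDequeue and the TryRemove):

```csharp
// Parents keyed by ID instead of queued; removal is now O(1).
var items = new ConcurrentDictionary<int, ConcurrentQueue<WorkItem>>();

// ConcurrentDictionary enumeration is safe while other threads add/remove.
foreach (var pair in items)
{
    WorkItem workItem;
    if (pair.Value.TryDequeue(out workItem))
    {
        doWork(workItem);
    }
    else
    {
        // Caveat: a producer adding a WorkItem right here would be lost
        // unless producers re-add the parent entry after enqueueing.
        ConcurrentQueue<WorkItem> removed;
        items.TryRemove(pair.Key, out removed);
    }
}
```

Whether this fits depends on whether any other code relies on FIFO ordering of the parents, which is exactly the question above.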
This may not work for everyone, but the following is the solution I came up with for removing an item from a concurrent queue. Since this is the first Google result, I thought I would leave my solution behind.
What I did was temporarily replace the working queue with an empty one, convert the original to a list and remove the item(s), then create a new queue from the modified list and put it back.
In code (sorry, this is VB.NET rather than C#):
Dim found As Boolean = False

' Steal the queue for a second; wrap the rest in a Try-Finally block to make sure we give it back
Dim theCommandQueue = Interlocked.Exchange(_commandQueue, New ConcurrentQueue(Of Command))
Try
    Dim cmdList = theCommandQueue.ToList()
    ' Enumerate the queue (a snapshot), not the list we are removing from
    For Each item In theCommandQueue
        If item Is whateverYouAreLookingFor Then
            cmdList.Remove(item)
            found = True
        End If
    Next

    ' If we found the item(s) we were looking for, create a new queue from the modified list.
    If found Then
        theCommandQueue = New ConcurrentQueue(Of Command)(cmdList)
    End If
Finally
    ' Always put the queue back where we found it
    Interlocked.Exchange(_commandQueue, theCommandQueue)
End Try
Aside: this is my first answer, so feel free to offer editing advice and/or edit my answer.
Queues are meant for when you want to handle items in FIFO order, stacks for LIFO. There are also ConcurrentDictionary and ConcurrentBag. Make sure that a queue is actually what you want; I don't think I would ever do a foreach over a ConcurrentQueue.
What you likely want is a single queue for your work items (have them implement a common interface and make the queue on the interface; the interface can expose the inherited type, which can be recast later if needed). If the work items belong to a parent, then a property can hold a key to the parent (consider a GUID for the key), and the parent can be kept in a ConcurrentDictionary and referenced/removed as needed.
If you must do it the way you have it, consider adding a flag. You can then mark the item in the itemQueue as 'closed' or whatever, so that when it is dequeued, it will be ignored.
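A sketch of that flag idea (the Closed field is an assumed addition to the question's QueueItem, not part of the original):

```csharp
class QueueItem
{
    public int ID { get; set; }
    public ConcurrentQueue<WorkItem> workItemQueue { get; set; }
    public volatile bool Closed; // set by the producer when no more work will arrive
}

// Consumer side: tombstoned items are skipped and simply not re-enqueued.
QueueItem queueItem;
if (itemQueue.TryDequeue(out queueItem) && !queueItem.Closed)
{
    WorkItem workItem;
    if (queueItem.workItemQueue.TryDequeue(out workItem))
        doWork(workItem);
    itemQueue.Enqueue(queueItem); // still live, so put it back for the next pass
}
// Closed items drain out of the queue naturally as they reach the head.
```

This avoids any O(n) removal: "removing" an item costs one flag write, and the queue cleans itself up on the consumer's normal dequeue path.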