I have a scenario that I'm trying to turn into a more responsive UI by pre-fetching some sub-elements of the results before the user actually requires them, if possible. I'm unclear on how best to approach the threading, so I'm hoping someone can provide some advice.
Scenario
There is a search form (.NET rich client) that enables the user to select an account for a given customer. The user searches for given text to find a collection of customers, which are then displayed in a result grid. When the user selects a customer, the list of accounts for that customer is searched for and displayed in a second grid for user selection, in order to make up the final context (that is, an account) to open.
Existing System
I have this all running in a request/response manner using regular background threading to resolve customers and accounts for a customer respectively in direct response to the user selections. The UI is locked/disabled (but responsive) until the accounts are found.
Goal
What I want to achieve is to commence fetching of the accounts for the top N customers before the user has selected them... Where N is the number of items being displayed in the grid.
As the user scrolls the grid, the newly displayed items will be added to the "queue" to be fetched.
Questions
Is the thread pool an appropriate mechanism for managing the threads? If so, can you force just a single queued work item to jump up in priority? - e.g. if the user selects that customer before they have started/finished fetching.
If not, what else should I be doing?
Either way, are you aware of any good blog posts and/or open source projects that exhibit this functionality?
Yes, the ThreadPool is a good choice, maybe behind a BackgroundWorker or .NET 4's Task Parallel Library.
But you cannot (and should not) 'bump' the priority of a ThreadPool thread, and I don't think that would be useful here anyway.
What you should probably do is use a thread-safe queue (of the top N items) and 2+ threads to process the queue. When an item is selected and not yet being processed, you move it to the front or start a separate thread for it immediately.
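A minimal sketch of that queue, assuming a LinkedList guarded by a lock stands in for the thread-safe queue, and the 2+ worker threads call Take in a loop (all type and member names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class PrefetchQueue<T>
{
    private readonly LinkedList<T> _pending = new LinkedList<T>();
    private readonly object _sync = new object();

    public void Enqueue(T item)               // normal priority: back of the queue
    {
        lock (_sync) { _pending.AddLast(item); Monitor.Pulse(_sync); }
    }

    public void Bump(T item)                  // user selected it: move to the front
    {
        lock (_sync)
        {
            if (_pending.Remove(item))        // only if a worker hasn't already taken it
            {
                _pending.AddFirst(item);
                Monitor.Pulse(_sync);
            }
        }
    }

    public T Take()                           // called by the 2+ worker threads
    {
        lock (_sync)
        {
            while (_pending.Count == 0) Monitor.Wait(_sync);
            T item = _pending.First.Value;
            _pending.RemoveFirst();
            return item;
        }
    }
}
```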
By way of an update / solution... I followed Henk's solution (sort of) such that I keep a queue of work item objects but I still process them using the ThreadPool. Selected items are 'bumped' by putting them on the front of the queue (rather than the back) [note: needed a special collection for this].
The following might describe it in detail (in lieu of code)
Modified Customer to keep a property of IList<Account> named KnownValues and also an object used for locking named KnownValuesSyncObject (since I want the KnownValues to be null when they aren't yet known).
Search form maintains an instance variable Deque<CustomerAccountLoadingWorkItem> (from PowerCollections)
A CustomerAccountLoadingWorkItem maintains a reference to the Customer it's meant to process for, as well as the ManualResetEvent handle it was created with.
In order to only start loading for visible items (i.e. not ones scrolled off the screen), I used answers from my other post to use Data Virtualization as a mechanism for queuing CustomerAccountLoadingWorkItems as each 'page' was loaded. Immediately after adding to the work item queue, a task is added to the ThreadPool queue. The context/state for the ThreadPool.QueueUserWorkItem call was my _accountLoadingWorkItems queue.
The WaitCallback delegate launched by the ThreadPool took the work item queue passed to it, then (using locks) dequeued the top item (if there wasn't one, it returned immediately) and processed that item. The function used was a static function on CustomerAccountLoadingWorkItem so that it could access its private readonly state (that is, the Customer and ManualResetEvent).
At the end of processing the static processing function also sets the Customer.KnownValues (using the KnownValuesSyncObject for locking).
When the user selects a value in the Customers grid, in its already-separate thread (via BackgroundWorker), if the Customer.KnownValues isn't already filled (i.e. it's probably sitting in the queue somewhere), it adds a new CustomerAccountLoadingWorkItem to the work item queue (but at the top!!! [this was why the Deque was required]) and also adds a new processing task to the ThreadPool. Then, since it created the work item, it calls ManualResetEvent.WaitOne() to force waiting for the thread pool task to complete.
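In lieu of actual code, the shape of the work item and its static processing function might look something like this. Everything here is a stand-in: Customer is reduced to a stub, LinkedList stands in for PowerCollections' Deque, and the web-service account lookup is faked.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Customer
{
    public string Name;
    public IList<string> KnownValues;                 // null until the accounts are known
    public readonly object KnownValuesSyncObject = new object();
}

class CustomerAccountLoadingWorkItem
{
    public readonly Customer Customer;
    public readonly ManualResetEvent Done = new ManualResetEvent(false);
    public CustomerAccountLoadingWorkItem(Customer c) { Customer = c; }

    // WaitCallback given to ThreadPool.QueueUserWorkItem; state is the shared deque.
    public static void Process(object state)
    {
        var queue = (LinkedList<CustomerAccountLoadingWorkItem>)state;
        CustomerAccountLoadingWorkItem item;
        lock (queue)
        {
            if (queue.Count == 0) return;             // another pool thread already took it
            item = queue.First.Value;                 // selected items were added at the front
            queue.RemoveFirst();
        }
        // Stand-in for the web-service call that fetches the accounts.
        var accounts = new List<string> { item.Customer.Name + "-account" };
        lock (item.Customer.KnownValuesSyncObject)
            item.Customer.KnownValues = accounts;
        item.Done.Set();                              // wakes a WaitOne() in the selection handler
    }
}
```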
I hope that made sense...
Of note: since my solution still uses the ThreadPool, when the user selects an item I still have to wait for currently running thread pool threads to finish before my work item gets picked up. I figured that was OK, and possibly even desirable: seeing as the resources used to query for the accounts (a web service) will be semi-locked up anyway, it will probably come back about as quickly either way (due to some poor architecting and a shared web service proxy).
Overall, I certainly made what should have been an easy-ish job somewhat more difficult, and had I been able to use Framework 4 I would certainly have looked at going down the TPL route.
If .NET 4 is an option, might I recommend the new ConcurrentStack collection.
http://msdn.microsoft.com/en-us/library/dd267331(v=VS.100).aspx
You can add all the items you want to pre-fetch and if an item is selected by the user, you can push that selection onto the stack making it the next instance to be retrieved. This works great with the new PLINQ and TPL and makes use of the new ThreadPool improvements in .NET 4.
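A minimal sketch of that idea (the type and member names below are mine, not from any library): pending customers are pushed onto a ConcurrentStack, and a user selection is simply pushed again so it becomes the next item popped. Note a bumped item may still have its original entry further down the stack, so a consumer should skip customers whose accounts are already loaded.

```csharp
using System;
using System.Collections.Concurrent;

class Prefetcher
{
    private readonly ConcurrentStack<string> _pending = new ConcurrentStack<string>();

    public void QueueForPrefetch(string customerId)
    {
        _pending.Push(customerId);
    }

    public void UserSelected(string customerId)
    {
        _pending.Push(customerId);        // pushed again: it jumps to the top of the stack
    }

    public bool TryGetNext(out string customerId)
    {
        return _pending.TryPop(out customerId);   // lock-free pop, safe across threads
    }
}
```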
http://channel9.msdn.com/shows/Going+Deep/Erika-Parsons-and-Eric-Eilebrecht--CLR-4-Inside-the-new-Threadpool/
Related
I thought I had a pretty good handle on threading until I came across a scenario in which I wanted to benchmark updates to different grids.
I created a window with a grid control, bound an ObservableCollection to it and populated it with 5000 rows of some complex data type (which contains no locks).
Then I created a task using
Task.Factory.StartNew()
This went through a very tight loop (10 million iterations) updating a random property on a random item in my ObservableCollection, each of which raises an INotifyPropertyChanged event of course.
Now since the updates are all happening on the background thread I expected the UI to update, albeit hard-pressed to keep up with this background thread spinning in a tight loop.
Instead the UI froze for several seconds (but didn't go blank or produce the usual spinning cursor of doom) and then came back once the background thread finished.
My understanding was that the background thread would be taxing a core pretty heavily while producing tons of INPC's, each of which get marshalled automagically by the WPF runtime to the UI thread.
Now the UI thread is doing nothing so I expected it to consume all these INPC's and update the grid but it didn't; not a single update occurred. However, when I do this using a Timer (instead of a tight loop) it works fine.
Would someone please enlighten me as to what the heck the UI thread is doing? Thanks in advance!
If you clog up the message pump with a lot of dispatched updates like this, other messages won't get a chance to be processed, which causes the 'freeze' effect you observe.
One thing that can help here is to use Data Virtualization on your UI control so that only the visible rows are actually bound and listening to INPC updates. This is turned on by default for DataGrid, but if you're using a more custom approach to visualizing the data, this could be an issue.
That said, this won't help with frequent modifications to items that are currently visible, as truly rapid fire updates will still clog up the dispatcher. If you have a use case like this, you probably want to isolate your view model objects a bit and have a way to 'batch' your updates. One technique is to have a way to suppress notification while you do a bunch of updates, then call RaisePropertyChanged(null) (or whatever equivalent method on your INPC helper base class) on each instance to update all bindings to that instance.
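As an illustration of the 'batch' technique (the base class below is an assumed helper, not a framework type): notifications are suppressed while a bunch of updates run, then a single PropertyChanged(null) refreshes every binding on the instance at once.

```csharp
using System;
using System.ComponentModel;

// Assumed INPC helper base with a 'suppress' switch; not a framework type.
class BatchedViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    private bool _suppress;

    protected void RaisePropertyChanged(string name)
    {
        if (_suppress) return;
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }

    // Runs the updates with notifications off, then raises one
    // PropertyChanged(null) to refresh all bindings on this instance.
    public void BatchUpdate(Action updates)
    {
        _suppress = true;
        try { updates(); }
        finally
        {
            _suppress = false;
            RaisePropertyChanged(null);
        }
    }
}

// Illustrative view model built on the helper.
class TickerViewModel : BatchedViewModel
{
    private double _bid, _ask;
    public double Bid { get { return _bid; } set { _bid = value; RaisePropertyChanged("Bid"); } }
    public double Ask { get { return _ask; } set { _ask = value; RaisePropertyChanged("Ask"); } }
}
```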
Another mechanism is to make the data updates in some other layer (whatever model object(s) your view model instance is representing), then copy over those properties to the view model class at well-defined intervals. For rapidly updating background data, I often use a polling loop rather than triggering on events, simply because the events would occur more frequently than the UI cares about and it slows down the background processing to send all these unnecessary notifications constantly.
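A sketch of that polling approach, with the timer wiring left out: in WPF, Poll() would be called from a DispatcherTimer.Tick handler on the UI thread, so the UI pays for one update per tick rather than one per change. The model and view-model types are illustrative.

```csharp
using System;
using System.Threading;

// Fast-moving model, written by background threads at any rate.
class PriceModel
{
    private long _lastPriceBits;

    public void Update(double price)
    {
        // Interlocked on the bit pattern gives an atomic write of the double.
        Interlocked.Exchange(ref _lastPriceBits, BitConverter.DoubleToInt64Bits(price));
    }

    public double Read()
    {
        return BitConverter.Int64BitsToDouble(Interlocked.Read(ref _lastPriceBits));
    }
}

// View model, only touched on the UI thread; Poll() would be wired to a
// DispatcherTimer.Tick in the real WPF application.
class PriceViewModel
{
    private readonly PriceModel _model;
    public double DisplayedPrice { get; private set; }

    public PriceViewModel(PriceModel model) { _model = model; }

    public void Poll()
    {
        DisplayedPrice = _model.Read();   // one UI update per tick, not per change
    }
}
```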
Is there a way in C# to send a message to another thread based on the thread's thread id or name?
Basically for a project in school, my professor wants us to do a producer/consumer deal, but passing objects serialized to a string(such as xml) from producer to consumer. Once a string is pulled from a buffer in the consumer thread, each of those strings is decoded(including the threadid) and processed and the original producer is notified via callback. So how do I send an event to the original producer thread with just the thread id?
You can write a class which has a Dictionary<string, Thread> member containing all your threads. When you create a thread, add it to the dictionary so you can retrieve it by name (key) later from anywhere in the class. This way you can also share resources among your threads, but be sure to lock any shared resources to prevent concurrency issues.
Imagine you run a company, and you could hire as many employees as you liked, but each employee was really single-minded, so you could only give them one order ever. You couldn't get much done with them, right? So if you were a smart manager, what you'd do is say "Your order is 'wait by your inbox until you get a letter telling you what to do, do the work, and then repeat'". Then you could put work items into the worker's inboxes as you needed work done.
The problem then is what happens if you give an employee a long-running, low priority task (let's say, "drive to Topeka to pick up peanut butter for the company picnic"). The employee will happily go off and do that. But then the building catches fire, and you need to know that if you issue the order "grab the fire extinguisher and put the fire out!" someone is going to do that quickly. You can solve that problem by having multiple employees share a single inbox- that way, there is a higher probability that someone will be ready to execute the order to douse the flames, and not be off driving through Kansas.
Guess what? Threads are those difficult employees.
You don't "pass messages to a thread". What you can do is set up a thread or group of threads to observe a common, shared data structure such as a blocking queue (BlockingCollection in .NET, for example), and then put messages (like your strings) into that queue for processing by the consumer threads (which need to listen on the queue for work).
For bidirectional communication, you would need two queues (one for the message, and one for the response). The reason is that your "main" thread is also a bad employee- it only can process responses one at a time, and while it is processing a response from one worker, another worker might come back with another response. You'd want to build a request/response coordination protocol so that the original requestor knows which request a response is associated with- usually requests have an ID, and responses reference the request's ID so that the original requestor knows which request each response is for.
Finally you need proper thread synchronization (locking) on the queues if that isn't built in to the Producer/Consumer queue that you are working with. Imagine if you were putting a message into a worker's inbox, and that worker was so eager to read the message that he grabbed it from your hand and tore it in half. What you need is the ability to prevent more than one thread from accessing the queue at a time.
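Putting those pieces together, a sketch of the two-queue request/response scheme might look like this (the Request/Response types and the ToUpper 'work' are illustrative; BlockingCollection handles the locking and the blocking wait for you):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Request  { public int Id; public string Payload; }
class Response { public int RequestId; public string Result; }

class Exchange
{
    // One queue for requests, one for responses (bidirectional communication).
    public readonly BlockingCollection<Request> Requests = new BlockingCollection<Request>();
    public readonly BlockingCollection<Response> Responses = new BlockingCollection<Response>();

    public Task StartWorker()
    {
        return Task.Factory.StartNew(() =>
        {
            // Blocks until work arrives; exits when Requests.CompleteAdding() is called.
            foreach (var req in Requests.GetConsumingEnumerable())
            {
                // The response references the request's ID so the producer
                // knows which request this response is for.
                Responses.Add(new Response { RequestId = req.Id, Result = req.Payload.ToUpper() });
            }
        });
    }
}
```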
When using threads you do not try to send messages between them. Threads can use shared memory to synchronize themselves, via synchronization objects. In order to manage threads for a consumer/producer system you can use a queue (a data structure) rather than a message system. (See example here: C# producer/consumer.)
Another possible solution (which I would not recommend) is: you can use GetThreadId to return the ID of a given native thread; all you need to find is the thread handle to pass to that function. GetCurrentThreadId returns the ID of the current thread. From there you can access its Name property.
A message is simply a method call, and to make a method call you first need an instance object which exposes some methods to be called. Thus sending a message to a thread means finding an active object that lives in that thread and calling a specific method on it.
Finding each thread's main worker object could be handled through a thread coordinator: if an object in one thread wants to send a message to an object in another thread, it first sends its request to the coordinator, which forwards the message/request to its destination.
What I have now is a real-time API getting a bunch of messages from the network and feeding them into a pub/sub manager class. There might be up to 1000 msg/sec or more at times. There are 2 different threads, each connected to its own pub/sub. Subscribers are WPF windows. The manager keeps a list of windows and their DispatcherSynchronizationContext.
A thread calls the manager through an interface method.
The manager publishes through Post:
foreach (var sub in Subscribers[subName])
{
sub.Context.Post(sub.WpfWindow.MyDelegate, data);
}
Can this be optimised?
P.S. Please don't ask why I think it is slow and all that... I don't have hard limits; any solution is slow in some sense. I have to do my best to make it as fast as possible. I am asking for help to assess: can it be done faster? Thank you.
EDIT: found this: http://msdn.microsoft.com/en-us/library/aa969767.aspx
The argument for a queue stands. What I do is put stuff into the queue; the queue triggers a task that then invokes into the messaging thread and pulls X items of data (1000, or however many there are). The one thing that killed me was per-item invocation (which is slow), but doing it in batches works nicely. I can keep up with nearly zero CPU load on a very busy ES data feed in crazy times for time and sales.
I have a special set of components for that which I will open-source in one of the next weeks; it includes an ActionQueue (taking a delegate to call when items need processing). This is now a Task (it was a queued thread-pool work item before). I process up to 1000 messages per invocation - but if you do a price grid you may need more.
Note: use WPF hints to enable GPU caching of rendered bitmaps.
In addition:
Run every window on its own thread / message pump
HEAVILY use async queues. The publisher should never block; every window has its own async target queue.
You want processing as decoupled as possible. Brutally decoupled.
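A sketch of the batched, non-blocking hand-off described above (the names are mine; in the real WPF case the dispatch delegate would wrap Dispatcher.BeginInvoke so each batch of up to 1000 items costs one invocation instead of 1000):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class BatchingPublisher<T>
{
    private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
    private readonly Action<IList<T>> _dispatch;   // e.g. wraps Dispatcher.BeginInvoke
    private int _draining;                         // 0 = idle, 1 = drain in flight

    public BatchingPublisher(Action<IList<T>> dispatch) { _dispatch = dispatch; }

    public void Publish(T item)                    // never blocks the feed thread
    {
        _queue.Enqueue(item);
        if (Interlocked.CompareExchange(ref _draining, 1, 0) == 0)
            Task.Factory.StartNew(Drain);
    }

    private void Drain()
    {
        do
        {
            var batch = new List<T>();
            T item;
            while (batch.Count < 1000 && _queue.TryDequeue(out item))
                batch.Add(item);
            if (batch.Count > 0) _dispatch(batch); // one invocation per batch
            Interlocked.Exchange(ref _draining, 0);
            // Re-arm if items slipped in after the dequeue loop but before the reset.
        } while (!_queue.IsEmpty && Interlocked.CompareExchange(ref _draining, 1, 0) == 0);
    }
}
```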
Here is my suggestion for you:
I would use a ConcurrentQueue (from the System.Collections.Concurrent namespace). The background workers feed their messages into that queue. The UI thread uses a timer and draws (let's say every 500 msec) a bunch of messages out of that queue and shows them to the user. Another possible way is that the UI thread only does so on demand from the user. The ConcurrentQueue is designed to be used from different threads concurrently (as the name says ;-) )
I am programming a TAPI application which uses the state pattern for dealing with the different states a TK can be in. Incoming and outgoing calls are recorded via an ObservableCollection in a ListView (call journal). The call data gets compared with contacts stored in a SQL-Server database to determine possible matches. That information is then used to update the call journal. All this in real time of course and all governed by/in the different states of the FSM (finite state machine).
To distinguish calls, I do use a call ID (which is provided by TAPI). When the phone rings or I start calling out, a new record including its call ID are added to the call journal and the customer database is searched for the number and certain data in the journal is updated accordingly. When proceeding through the different call states the application dynamically updates the journal (i.e. changing an icon that visually shows the state of the specific call, etc).
Exactly those updates to the ObservableCollection are giving me headaches, as they need to happen in a certain order. For example, when receiving a call, the associated state will create a new entry in the ObservableCollection. When the state changes, the new state might try to update the collection even though it is not clear whether the entry to be changed has already been added. The states happen to switch really fast, apparently faster than updating the collection can happen.
Would some kind of message queue be a possible/good solution? If so, how could such a message queue be implemented - in the context of either a state machine or an ObservableCollection. I am not looking for complete solutions, but any information which I cannot easily find via google or stackoverflow would be appreciated.
Edit: greatly rephrased the question.
Edit: I added my own solution for the problem, but will wait and see if there is possibly someone with a better idea.
Have you checked whether the result of FirstOrDefault is null? This can happen if no element with given id exists in the collection.
For example:
var element = this.FirstOrDefault(p => p.ID == id);
if (element != null) {
// Do something with element.Number.
}
Or you could just call First and see if you get InvalidOperationException.
--- EDIT ---
I see from your comment that you seem to be accessing the same ObservableCollection from multiple threads concurrently. If that is the case, you need to protect the shared data structure through locking. It is entirely possible that one thread begins inserting a new element just at the moment the other one is searching for it, leading to all sorts of undefined behavior. According to the MSDN documentation for ObservableCollection:
"Any instance members are not guaranteed to be thread safe."
As for debugging, you can "freeze" other threads and so you can concentrate only on the thread of interest without excessive "jumping". See the Threads panel, right-click menu, Freeze and Thaw options.
Updating the ObservableCollection is a long-running process, at least compared to receiving and handling the TAPI events. This can lead to race conditions, where a call state that needs to edit a call entry cannot find it, because that state acquired the lock for writing/updating the collection before the call state that would actually add the entry. Also, not handling the TAPI events in the proper order would break the state machine.
I decided to implement a simplified Command pattern. The TAPI events, which used to trigger the performance-heavy state transitions directly, get added to a thread-safe, non-blocking and observable command queue. When a command gets enqueued, the queue class starts executing (and dequeuing) the commands in a new thread - that is, triggering the proper call states in the finite state machine - until there are no commands left in the queue. If a dequeuing thread is already running, no new thread is created (multi-threading would lead to race conditions again), and the queue class blocks reentrancy to make sure that only one command will ever be executed at any one time.
So basically: all TAPI-events (the invoker) are added to a queue (the client) in the order they are happening, as fast as possible. The queue then relays the TAPI information to the receiver, the finite state machine performing the business logic, taking its time but making sure the information gets updated in the proper order.
Edit: Starting from .NET 4.0 you can use the ConcurrentQueue(T) Class to achieve the same result.
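With that class, the command queue described above might be sketched like this (names are illustrative; the reentrancy block is an Interlocked flag, so commands always run one at a time and in arrival order):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class SerialCommandQueue
{
    private readonly ConcurrentQueue<Action> _commands = new ConcurrentQueue<Action>();
    private int _pumping;                      // 0 = idle, 1 = a pump is running

    public void Enqueue(Action command)        // called by the TAPI event handlers
    {
        _commands.Enqueue(command);
        if (Interlocked.CompareExchange(ref _pumping, 1, 0) == 0)
            ThreadPool.QueueUserWorkItem(_ => Pump());
    }

    private void Pump()
    {
        do
        {
            Action command;
            while (_commands.TryDequeue(out command))
                command();                     // state transitions run one at a time, in order
            Interlocked.Exchange(ref _pumping, 0);
            // Re-arm if a command arrived after the dequeue loop but before the reset.
        } while (!_commands.IsEmpty && Interlocked.CompareExchange(ref _pumping, 1, 0) == 0);
    }
}
```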
I need to write an application in c# that keeps track of multiple tasks, each being implemented as a class instance running on its own thread. A user interface will be used to display the status of each task instance depending on which task I select from tree view which will display the list of tasks.
An idea I have is to create another class, called PropertyClass, which holds an instance of the TaskClass plus some properties relating to that instance. Whenever the TaskClass instance changes its state, the related property on the PropertyClass instance is updated; the UI then reads these property values from the PropertyClass when the task is selected in the tree view list.
There will probably be hundreds of these tasks running which will be communicating with a service on a remote machine.
How else can I go about coding this solution in an efficient way?
Read this document from the MSDN on the Task Parallel Library first.
I have a few suggestions.
First, you need a way to make sure you don't end up with threads blocking your app from closing. One sure fire way to do this is to make sure all your threads are background threads. That can be a little problematic if you have to make sure a thread's work is done before it is joined or aborted.
Second, you could look at using the ThreadPool class which should make creating and using threads more efficient. The thread pool is there to help you manage your threads.
Third, you will need a method of synchronizing your data access from the GUI to data in the other threads. In WPF you use the Dispatcher and in WinForms you'll use Invoke.
Fourth, the BackgroundWorker class can help with all of these if it fits the model of your application.
Fifth, events and delegates can be BeginInvoke'd, which essentially puts them on another thread. It's a kind of implicit multi-threading and can be useful.
Sixth, and I've not yet had the chance to use this, .NET 4 has the Task Parallel Library that may be of use to you.
Seventh, safe shared data access and synchronization can be accomplished using lock and/or Monitor.
Hope this helps.
-Nate
If each TaskClass instance corresponds to a node on the tree view, you can store the TaskClass instance in the tree view item's Tag property. Or you could create a dictionary of TaskClasses, keyed by a unique identifier such as a GUID, and store the identifier in the Tag property.
In either case, use a callback method to signal that a TaskClass instance has an update.
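A sketch of the dictionary variant (all names here are illustrative): tasks are keyed by a Guid, which would also be stored in the tree item's Tag, and each raises a callback when its status changes. In the real UI the callback handler would marshal to the UI thread via Invoke (WinForms) or the Dispatcher (WPF).

```csharp
using System;
using System.Collections.Generic;

class TaskClass
{
    public readonly Guid Id = Guid.NewGuid();          // also stored in TreeNode.Tag
    public event Action<TaskClass> StatusChanged;      // UI subscribes and marshals via Invoke

    private string _status = "Pending";
    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;
            var handler = StatusChanged;
            if (handler != null) handler(this);        // signal the UI that this task changed
        }
    }
}

class TaskRegistry
{
    private readonly Dictionary<Guid, TaskClass> _tasks = new Dictionary<Guid, TaskClass>();

    public void Register(TaskClass t) { lock (_tasks) _tasks[t.Id] = t; }

    // Looks up the task for a selected tree node, using the Guid from its Tag.
    public TaskClass Lookup(Guid id) { lock (_tasks) return _tasks[id]; }
}
```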