Queuing ObservableCollection Updates - C#

I am programming a TAPI application which uses the state pattern for dealing with the different states a TK can be in. Incoming and outgoing calls are recorded via an ObservableCollection in a ListView (call journal). The call data gets compared with contacts stored in a SQL-Server database to determine possible matches. That information is then used to update the call journal. All this in real time of course and all governed by/in the different states of the FSM (finite state machine).
To distinguish calls, I use a call ID (which is provided by TAPI). When the phone rings or I start calling out, a new record including its call ID is added to the call journal, the customer database is searched for the number, and certain data in the journal is updated accordingly. When proceeding through the different call states, the application dynamically updates the journal (i.e. changing an icon that visually shows the state of the specific call, etc.).
Exactly those updates to the ObservableCollection are giving me headaches, as they need to happen in a certain order. For example, when receiving a call, the associated state will create a new entry in the ObservableCollection. When the state changes, the new state might try to update the collection even though it is not clear whether the entry that is to be changed has been added already. The states happen to switch really fast, apparently faster than updating the collection can happen.
Would some kind of message queue be a possible/good solution? If so, how could such a message queue be implemented - in the context of either a state machine or an ObservableCollection. I am not looking for complete solutions, but any information which I cannot easily find via google or stackoverflow would be appreciated.
Edit: greatly rephrased the question.
Edit: I added my own solution for the problem, but will wait and see if there is possibly someone with a better idea.

Have you checked whether the result of FirstOrDefault is null? This can happen if no element with the given ID exists in the collection.
For example:
var element = this.FirstOrDefault(p => p.ID == id);
if (element != null) {
    // Do something with element.Number.
}
Or you could just call First instead and catch the InvalidOperationException it throws when no match exists.
--- EDIT ---
I see from your comment that you seem to be accessing the same ObservableCollection from multiple threads concurrently. If that is the case, you need to protect the shared data structure through locking. It is entirely possible that one thread begins inserting a new element just at the moment the other one is searching for it, leading to all sorts of undefined behavior. According to the MSDN documentation for ObservableCollection:
"Any instance members are not guaranteed to be thread safe."
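As a minimal sketch of that locking (all names here, such as CallEntry and CallJournal, are invented for illustration, not taken from your code): every reader and writer takes the same lock object, so no state can search the collection while another state is mid-insert.

```csharp
using System.Collections.ObjectModel;
using System.Linq;

class CallEntry { public int ID; public string Number; }

class CallJournal
{
    private readonly object _sync = new object();
    private readonly ObservableCollection<CallEntry> _entries =
        new ObservableCollection<CallEntry>();

    public void Add(CallEntry entry)
    {
        lock (_sync) { _entries.Add(entry); }
    }

    public bool TryUpdateNumber(int id, string number)
    {
        lock (_sync)
        {
            // Safe: no insert can run while we search.
            var element = _entries.FirstOrDefault(p => p.ID == id);
            if (element == null) return false;
            element.Number = number;
            return true;
        }
    }
}
```

Note that locking only fixes the data race; if the collection is bound to a WPF/WinForms view, updates must additionally be marshaled to the UI thread.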
As for debugging, you can "freeze" other threads so you can concentrate only on the thread of interest without excessive "jumping". See the Threads panel's right-click menu, with its Freeze and Thaw options.

Updating the ObservableCollection is a long-running process, at least compared to receiving and handling the TAPI events. This can lead to race conditions: a call state that needs to edit a call entry might not find it, because it acquired the write lock on the collection before the call state that was supposed to add that entry. Also, not handling the TAPI events in the proper order would break the state machine.
I decided to implement a simplified Command pattern. The TAPI events, which used to trigger the performance-heavy state transitions directly, are now added to a thread-safe, non-blocking, observable command queue. When a command is enqueued, the queue class starts executing (and dequeuing) the commands on a new thread - that is, it triggers the proper call states in the finite state machine - until there are no commands left in the queue. If a dequeuing thread is already running, no new thread is created (multi-threading would lead to race conditions again), and the queue class blocks reentrancy to make sure that only one command is ever executed at any one time.
So basically: all TAPI-events (the invoker) are added to a queue (the client) in the order they are happening, as fast as possible. The queue then relays the TAPI information to the receiver, the finite state machine performing the business logic, taking its time but making sure the information gets updated in the proper order.
Edit: Starting from .NET 4.0 you can use the ConcurrentQueue<T> class to achieve the same result.
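A rough sketch of the described scheme on top of ConcurrentQueue<T> (class and member names are my own invention): event handlers enqueue commands, and a single pump thread drains them in order, with an Interlocked flag blocking a second pump from ever starting.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class CommandQueue
{
    private readonly ConcurrentQueue<Action> _commands = new ConcurrentQueue<Action>();
    private int _pumpRunning; // 0 = idle, 1 = a pump thread is draining

    public void Enqueue(Action command)
    {
        _commands.Enqueue(command);
        // Start a pump thread only if none is running (blocks reentrancy).
        if (Interlocked.CompareExchange(ref _pumpRunning, 1, 0) == 0)
            new Thread(Pump) { IsBackground = true }.Start();
    }

    private void Pump()
    {
        do
        {
            Action command;
            while (_commands.TryDequeue(out command))
                command(); // exactly one command at a time, in FIFO order
            Interlocked.Exchange(ref _pumpRunning, 0);
            // Re-check: a command may have arrived after the drain loop
            // ended but before the flag was cleared.
        } while (!_commands.IsEmpty
                 && Interlocked.CompareExchange(ref _pumpRunning, 1, 0) == 0);
    }
}
```

The re-check at the bottom matters: without it, a command enqueued in the brief window between draining and clearing the flag would sit in the queue until the next enqueue.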

Related

How do I wake up a specific thread?

I have a bunch of threads blocked waiting for a message. Each message has an ID which points to a specific thread. I have the following implementations:
1) All threads are waiting on the same lock object using Monitor.Wait. When a message comes in, I call Monitor.PulseAll and each thread checks its own ID against the message ID. If there is a match, the thread continues. Otherwise it waits again on the same object. With this approach, every message arrival causes N-1 threads to wake up, mismatch the ID, and go back to sleep.
2) Each thread creates a ManualResetEvent and adds it to a dictionary. The dictionary maps each message ID to its event. When a message arrives, the dispatcher calls map[message.Id].Set(), which wakes up the specific thread.
3) This last implementation is very similar to #2, except it uses a lock object instead of a ManualResetEvent, on the hypothesis that ManualResetEvent is an expensive object. This approach is more complex than using ManualResetEvent.
What's the best approach here? Is there a better one?
The question description is fairly vague, so it's hard to know for sure what your best approach would be. That said…
I would not use #1 or #2 at all. #1 requires waking every thread up just so one thread can run, which is obviously inefficient, and #2 uses the unmanaged Windows-based synchronization objects, which is not as efficient as using a built-in .NET mechanism.
Your #3 option is on the face of it not unreasonable given the problem description. However, IMHO you should not be reimplementing this yourself. I.e. as near as I can tell, you (for some reason) have messages that need to be provided to specific threads, i.e. a given message must be processed only by one specific thread.
In this case, I think you should just create a separate message queue for each thread, and add the message to the appropriate queue. There are lots of ways to implement the queue, but the most obvious for this particular example seems to me to be to use BlockingCollection<T>. By default, this uses a queue as the underlying collection data structure. The other feature that's important here is the GetConsumingEnumerable() method, which allows you to write a foreach loop in the dependent thread to retrieve messages as they are queued. The loop will block when no message is available, waiting for one to be provided via some other thread.
You can use a dictionary to map message ID to the appropriate queue for each thread.
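A compressed sketch of that design (Message, Dispatcher, etc. are illustrative names): one BlockingCollection<T> per consumer, a dictionary routing by ID, and GetConsumingEnumerable() as the blocking receive loop.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class Message { public int Id; public string Body; }

class Dispatcher
{
    private readonly Dictionary<int, BlockingCollection<Message>> _queues =
        new Dictionary<int, BlockingCollection<Message>>();

    public void RegisterConsumer(int id)
    {
        var queue = new BlockingCollection<Message>();
        _queues[id] = queue;
        new Thread(() =>
        {
            // Blocks while the queue is empty; wakes when a message arrives.
            foreach (var message in queue.GetConsumingEnumerable())
                Console.WriteLine("Consumer {0} got: {1}", id, message.Body);
        }) { IsBackground = true }.Start();
    }

    public void Dispatch(Message message)
    {
        // Route to the one thread that owns this ID; only that thread wakes.
        _queues[message.Id].Add(message);
    }
}
```

Here the dictionary is populated before any dispatching begins; if consumers can register concurrently with dispatch, the dictionary itself would also need a lock (or a ConcurrentDictionary).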
Note that this not really IMHO a performance issue. It's more about using an appropriate data structure for the given problem. I.e. you seem to have a queue of messages, where you want to dispatch each message to a different thread depending on its ID. Instead, I think you should implement multiple queues of messages, one for each thread, and then use the existing .NET features to implement your logic so that you don't have to reinvent the wheel.
Note also that if you still must maintain a single input queue for the messages (e.g. because that's the interface presented to some other component in your program), you can and should still do the above. You'll just have some adapter code that dequeues a message from the main, single message queue and routes to the appropriate thread-specific queue.
To wake up a specific thread, or even to start one - use Thread.Start().
To check whether your thread has not aborted yet - use the Thread.IsAlive property.
To check what a thread is currently doing - use the Thread.ThreadState property.
You can use the above properties and methods to exercise the desired control over your threads and manage them at a fairly fine granularity.
When you initialize the threads, put them all in a Dictionary<ID, Thread>. Then, whenever you get a message, simply look up the thread with the required ID and wake it up.

Appropriate usage of C# event handlers

I'm currently building a C# application which will automatically authenticate a user against certain network resources when they connect to specific wireless networks.
At the moment, I'm using the Managed Wifi API to discover when a user connects / disconnects from a wireless network. I have an event handler, so that when any of these activities occurs, one of my methods is called to inspect the current state of the wireless connection.
To manage the state of the application, I have another class which is called the "conductor", which performs the operations required to change the state of the application. For instance, when the wireless card connects to the correct network, the conductor needs to change the system state from "Monitoring" to "Authenticating". If authentication succeeds, the conductor needs to change the state to "Connected". Disconnection results in the "Monitoring" state again, and an authentication error results in an "Error" state. These state changes (if the user requests) can result in TrayIcon notifications, so the user knows that they are being authenticated.
My current idea involves having the method used to inspect the current state of the wireless connection call the "authenticate" or "disconnect" methods within the state manager. However, I'm not sure if this is an appropriate use of the event handler -- should it instead be setting a flag or sending a message via some form of IPC to a separate thread which will begin the authentication / disconnection process?
In addition to the event handler being able to request connection / disconnection, a user can also perform it via the tray icon. As a result, I need to ensure these background operations are not blocking the tray's interactions with the user.
Only one component should be able to request a change of the system state at any time, so I would need to use a mutex to prevent concurrent state changes. However, how I should synchronize the rest of these components is a slight mystery to me.
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
EDIT: Most importantly, I want to verify that an event will be executed as a separate thread, so it cannot block the main UI. In addition, I want to verify that if I have an event handler subscribed to an event, it will handle events serially, not in parallel (so if the user connects and disconnects before the first connection event is processed, two state changes will not be occurring simultaneously).
Any advice or literature I should read would be appreciated. I have no formal training in C# language, so I apologize if I've misstated anything.
That explains a few things. :)
I would read up on threads, event handling, and creation of system tray icons/interfaces.
It is important to note the following:
Events are processed on the same thread they are raised from. If you do not want the processing of an event to lock the GUI, then you will need to have the handler move the work to a different thread.
When an event is fired, it passes the appropriate arguments to all the methods in its invocation list. This is pretty much the same as calling one method which in turn calls all the others (see the EventFired example below). The purpose of events is not simply to call methods - we can do that already - but to call methods which may not be known when the code is compiled (the click event on a button control is not known when the library containing the control is compiled, for example). In short, if you can call the method directly instead of using an event, then do so.
void EventFired(int arg1, object arg2)
{
    SubscribedMethod1(arg1, arg2);
    SubscribedMethod2(arg1, arg2);
    SubscribedMethod3(arg1, arg2);
    SubscribedMethod4(arg1, arg2);
    SubscribedMethod5(arg1, arg2);
    SubscribedMethod6(arg1, arg2);
    SubscribedMethod7(arg1, arg2);
}
If you want to prevent a user interface from locking up, do the work on another thread. Remember, though, that user interface elements (forms, buttons, grids, labels, etc.) can only be accessed from their host thread. Use the Control.Invoke method to call methods on their thread.
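A minimal WinForms sketch of that pattern (connectButton, statusLabel and Authenticate are invented stand-ins for your controls and logic): the slow work runs on a background thread, and only the final label update is marshaled back with Control.Invoke.

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private void connectButton_Click(object sender, EventArgs e)
    {
        new Thread(() =>
        {
            bool ok = Authenticate();          // slow network work, off the UI thread
            statusLabel.Invoke((Action)(() =>  // hop back onto the UI thread
            {
                statusLabel.Text = ok ? "Connected" : "Error";
            }));
        }) { IsBackground = true }.Start();
    }

    private bool Authenticate()
    {
        Thread.Sleep(1000); // stand-in for the real authentication round trip
        return true;
    }
}
```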
Removing an option from an interface is not a good way to prevent race conditions (the user starting a connect/disconnect while one is already running), as the user interface is on a different thread and could be out of sync (it takes time for separate threads to sync up). While there are many ways to resolve this problem, the easiest for someone new to threading is to take a lock on the value. That way .NET will make sure only one thread can change the setting at a time. You will still need to update the user interface so the user knows the update is occurring.
Your general design sounds fine. You could use 2-3 threads: one for the user interface (tray icon), one for checking for new network connections, and one (which could be merged with the connection check) for checking the internet connection.
Hope this helps, let us know if you need more (or accept an answer).
As an option, alternative...
If I were you, and since you're starting anew anyway, I would seriously consider the Rx Reactive Extensions.
They give a completely fresh look at events and event-based programming, and they help a lot with exactly the things you're dealing with (including synchronizing, dealing with threads, combining events, stopping, starting, etc.).
It might be a bit of a 'steep curve' to learn at start, but again, it might be worth it.
hope this helps,
To me it seems that you're about to overengineer the project.
You basically need to implement an event in the conductor and subscribe to it in the main application. That is all.
If only one component at a time may make a change, but more than one can request a change, then using some sync mechanism, like the Mutex you noted, is a perfectly valid choice.
Hope this helps.
If you want to have at most one state change pending at any time, it is probably best to have the event handlers of the external events you are listening to hold a lock during their execution. This ensures an easy programming model, because you are guaranteed that the state of your app does not change underneath you. A separate thread is not needed in this particular case.
You need to make a distinction between the current state of the application and the target state. The user dictates the target state ("connected", "disconnected"). The actual state might be different. Example: the user wants to be disconnected but the actual state is authenticating. Once the authentication step is completed the state machine must examine the target state:
targetState == connected => set current state to connected
targetState == disconnected => begin to disconnect and set state to disconnecting
Separating actual and target state allows the user to change their mind at any time, and the state machine to steer towards the desired state.
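The current/target split above can be sketched like this (ConnState, Conductor and the method names are illustrative, not your actual API): when a step finishes, the machine consults the target state, which the user may have changed in the meantime.

```csharp
enum ConnState { Disconnected, Authenticating, Connected, Disconnecting }

class Conductor
{
    private readonly object _gate = new object();
    public ConnState Current { get; private set; }
    public ConnState Target { get; set; }

    // Called when the authentication step completes.
    public void OnAuthenticationFinished(bool success)
    {
        lock (_gate)
        {
            if (!success) { Current = ConnState.Disconnected; return; }
            // Steer toward whatever the user wants *now*, which may have
            // changed while we were authenticating.
            if (Target == ConnState.Connected)
                Current = ConnState.Connected;
            else
                Current = ConnState.Disconnecting; // begin disconnecting
        }
    }
}
```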
It's hard to give a precise answer without seeing the whole (proposed) structure of your app. But in general, yes, it's OK to use an event hander for that sort of thing - though I'd probably move the actual implementation out to a separate method, so that you can more easily trigger it from other locations.
The comment about disabling the "Connect" button sounds right on to me, though it's quite conceivable you might need other forms of synchronization as well. If your app doesn't need to be multi-threaded, though, I'd steer away from introducing multiple threads just for the sake of it. If you do, look into the new Task API's that have been included as part of the Task Parallel Library. They abstract a lot of that stuff fairly well.
And the comment about not over-thinking the issue is also well-taken. If I were in your shoes, just beginning with a new language, I'd avoid trying to get the architecture just right at the start. Dive in, and develop it with the cognitive toolset you've already got. As you explore more, you'll figure out, "Oh, crap, this is a much better way to do that." And then go and do it that way. Refactoring is your friend.

Can One Thread Communicate with Another?

Is there a way in C# to send a message to another thread based on the thread's thread id or name?
Basically for a project in school, my professor wants us to do a producer/consumer deal, but passing objects serialized to a string(such as xml) from producer to consumer. Once a string is pulled from a buffer in the consumer thread, each of those strings is decoded(including the threadid) and processed and the original producer is notified via callback. So how do I send an event to the original producer thread with just the thread id?
You can write a class which has a Dictionary<string, Thread> member containing all your threads. When you create a thread, add it to the dictionary so you can retrieve it by name (key) later from anywhere in the class. This way you can also share resources among your threads, but be sure to lock any shared resources to prevent concurrency issues.
Imagine you run a company, and you could hire as many employees as you liked, but each employee was really single-minded, so you could only give them one order ever. You couldn't get much done with them, right? So if you were a smart manager, what you'd do is say "Your order is 'wait by your inbox until you get a letter telling you what to do, do the work, and then repeat'". Then you could put work items into the worker's inboxes as you needed work done.
The problem then is what happens if you give an employee a long-running, low priority task (let's say, "drive to Topeka to pick up peanut butter for the company picnic"). The employee will happily go off and do that. But then the building catches fire, and you need to know that if you issue the order "grab the fire extinguisher and put the fire out!" someone is going to do that quickly. You can solve that problem by having multiple employees share a single inbox- that way, there is a higher probability that someone will be ready to execute the order to douse the flames, and not be off driving through Kansas.
Guess what? Threads are those difficult employees.
You don't "pass messages to a thread". What you can do is set up a thread or group of threads to observe a common, shared data structure such as a blocking queue (BlockingCollection in .NET, for example), and then put messages (like your strings) into that queue for processing by the consumer threads (which need to listen on the queue for work).
For bidirectional communication, you would need two queues (one for the messages, and one for the responses). The reason is that your "main" thread is also a bad employee: it can only process responses one at a time, and while it is processing a response from one worker, another worker might come back with another response. You'd want to build a request/response coordination protocol so that the original requestor knows which request a response is associated with. Usually requests have an ID, and responses reference the request's ID, so that the original requestor knows which request each response is for.
Finally you need proper thread synchronization (locking) on the queues if that isn't built in to the Producer/Consumer queue that you are working with. Imagine if you were putting a message into a worker's inbox, and that worker was so eager to read the message that he grabbed it from your hand and tore it in half. What you need is the ability to prevent more than one thread from accessing the queue at a time.
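A sketch of the two-queue, ID-correlated arrangement described above (Request, Response and Worker are invented names; the "work" is a placeholder uppercase transform). BlockingCollection<T> already does the locking, so no explicit synchronization is needed on the queues themselves.

```csharp
using System;
using System.Collections.Concurrent;

class Request  { public int Id; public string Payload; }
class Response { public int RequestId; public string Result; }

static class Worker
{
    public static void Run(BlockingCollection<Request> requests,
                           BlockingCollection<Response> responses)
    {
        // Blocks while the request queue is empty.
        foreach (var req in requests.GetConsumingEnumerable())
        {
            responses.Add(new Response
            {
                RequestId = req.Id, // echo the ID so the producer can correlate
                Result = req.Payload.ToUpperInvariant()
            });
        }
    }
}
```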
When using threads you do not try to send messages between them directly. Threads can use shared memory to synchronize themselves - these are called synchronization objects. To manage threads for a consumer/producer system you can use a queue (a data structure) rather than a message system (see example here: C# producer/consumer).
Another possible solution (which I would not recommend) is: you can use GetThreadId to return the ID of a given native thread. All you then need to find is the thread handle to pass to that function. GetCurrentThreadId returns the ID of the current thread. From there you can access its Name property.
A message is simply a method call, and to make a method call you first need an instance of an object which exposes some methods to be called. Thus, sending a message to a thread means finding an active object which lives in that thread and calling its specific method.
Finding each thread's main worker object could be handled through a threads coordinator: if an object in a specific thread wants to send a message to an object in another thread, it first sends its request to the coordinator, and the coordinator relays the message/request to its destination.

How to create a "Spool" service for a class in C#

I am looking into C# programming and am fairly new (a scrub) to the language. I would like to think I have a good understanding of object-oriented programming in general, and of what running multiple threads means at a high level, but when it comes to actual implementation I am, as said, a scrub.
What I am looking to do is create a tool that has many threads running and interacting with each other independently; each will serve its own task and may call the others.
My strategy to ensure communication (without losing anything when multiple updates occur at the same time from different threads) is to give each class a spool-like task queue that can be called externally to add tasks to a given thread. I am not sure whether I should place this spool on the class itself, or keep it external and have the class call the spool for new tasks and keep track of it. In particular, I am considering how to signal the class when an empty spool receives a task: a listener approach (so tasks can subscribe to pools if they want to be woken when new work arrives), or a "check every X seconds whether we are out of tasks and no next task is scheduled" approach.
What would a good strategy be for creating this - should it live in the actual class, or externally? And what are the critical regions in the implementation? The busy-wait check only makes adding and removing jobs on the spool critical, while the signaling approach additionally makes the go-to-sleep step critical. That suddenly raises the question of what the spool should do if the critical region has already been entered, as this could result in blocks, causing other blocks, and possibly unforeseen deadlocks.
I use such a model often, on various systems. I define a class for the agents, say 'AgentClass', and one for the requests, say 'RequestClass'. The agent has two abstract methods, 'submit(RequestClass message)' and 'signal()'. Typically, a thread in the agent constructs a producer-consumer queue and waits on it for RequestClass instances, the submit() method queueing the passed RequestClass instances to the queue. The RequestClass usually contains a 'command' enumeration that tells the agent what needs doing, together with all the data required to perform the request and the 'sender' agent instance. When an agent gets a request, it switches on the enumeration to call the correct function to do the request. The agent acts only on the data in the RequestClass - results, error messages etc. are placed in data members of the RequestClass. When the agent has performed the request (or failed and generated error data), it can either submit() the request back to the sender (i.e. the request has been performed asynchronously), or call the sender's signal() function, which signals an event upon which the sender was waiting (i.e. the request was performed synchronously).
I usually construct a fixed number of RequestClass instances at startup and store them in a global 'pool' P-C queue. Any agent/thread/whatever that needs to send a request can dequeue a RequestClass instance, fill in the data, submit() it to the agent and then wait, asynchronously or synchronously, for the request to be performed. When done with, the RequestClass is returned to the pool. I do this to avoid continual malloc/free/new/dispose, to ease debugging (I dump the pool level to a status bar using a timer, so I always notice if a request leaks or gets double-freed), and to eliminate the need for explicit thread termination on app close (if multiple threads only ever read/write data areas that outlive the application forms etc., the app will close easily and the OS can deal with all the threads - there are hundreds of posts about 'cleanly shutting down threads upon app close' - I never bother!).
Such message-passing designs are quite resistant to deadlocks since the only locks, (if any), are in the P-C queues, though you can certainly achieve it if you try hard enough:)
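A compressed C# rendition of the agent/request design described above (class and member names are my own, and BlockingCollection<T> stands in for the producer-consumer queue): the agent's thread blocks on its inbox, switches on the command enumeration, writes results into the request, and replies by submitting it back to the sender.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

enum Command { DoWork, Shutdown }

class RequestMessage
{
    public Command Command;
    public object Data;
    public object Result;
    public Agent Sender; // filled in so the agent can reply asynchronously
}

class Agent
{
    private readonly BlockingCollection<RequestMessage> _inbox =
        new BlockingCollection<RequestMessage>();

    public Agent()
    {
        var worker = new Thread(() =>
        {
            foreach (var req in _inbox.GetConsumingEnumerable())
            {
                switch (req.Command) // dispatch on the command enumeration
                {
                    case Command.DoWork:
                        req.Result = Process(req.Data);
                        if (req.Sender != null)
                            req.Sender.Submit(req); // asynchronous reply
                        break;
                    case Command.Shutdown:
                        return;
                }
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }

    public void Submit(RequestMessage request) { _inbox.Add(request); }

    private object Process(object data) { return data; } // placeholder work
}
```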
Is this the sort of system that you seem to need, or have I got it wrong?
Rgds,
Martin

Is ThreadPool appropriate for this threading scenario?

I have a scenario that I'm trying to turn into a more responsive UI by pre-fetching some sub-elements of the results before they're actually required by the user if possible. I'm unclear on how best to approach the threading, so I'm hoping someone can provide some advice.
Scenario
There is search form (.NET rich client) that enable the user to select an account for a given customer. The user searches for given text to find a collection of customers which are then displayed in a result grid. Then when the user selects a customer, the list of accounts for that customer are searched for and displayed in a second grid for user selection in order to make up the final context (that is an account) to open.
Existing System
I have this all running in a request/response manner using regular background threading to resolve customers and accounts for a customer respectively in direct response to the user selections. The UI is locked/disabled (but responsive) until the accounts are found.
Goal
What I want to achieve is to commence fetching of the accounts for the top N customers before the user has selected them... Where N is the number of items being displayed in the grid.
As the user scrolls the grid, the newly displayed items will be added to the "queue" to be fetched.
Questions
Is the thread pool an appropriate mechanism for managing the threads? If so, can you force just a single queued work item to jump up in priority? - e.g. if the user selects that customer before they have started/finished fetching.
If not, what else should I be doing?
Either way, are you aware of any good blog posts and/or open source projects that exhibit this functionality?
Yes, the threadpool is a good choice, maybe behind a Backgroundworker or .NET4's TaskParallel library.
But you cannot (should not) 'bump' a ThreadPool thread, and I don't think that would be very useful here anyway.
What you should probably do is use a thread-safe queue (of the top N items) and 2+ threads to process the queue. When an item is selected and is not yet being processed, you move it to the front, or start a separate thread for it immediately.
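One way to sketch that "move it up" operation (WorkQueue is an invented name, and a lock-guarded LinkedList<T> stands in for a real deque): normal items go to the back, a user selection jumps to the front.

```csharp
using System.Collections.Generic;

class WorkQueue<T>
{
    private readonly LinkedList<T> _items = new LinkedList<T>();
    private readonly object _gate = new object();

    public void Enqueue(T item)      // normal priority: back of the queue
    {
        lock (_gate) _items.AddLast(item);
    }

    public void Bump(T item)         // user selected it: front of the queue
    {
        lock (_gate)
        {
            _items.Remove(item);     // no-op if the item is not queued
            _items.AddFirst(item);
        }
    }

    public bool TryDequeue(out T item)
    {
        lock (_gate)
        {
            if (_items.Count == 0) { item = default(T); return false; }
            item = _items.First.Value;
            _items.RemoveFirst();
            return true;
        }
    }
}
```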
By way of an update / solution... I followed Henk's solution (sort of) such that I keep a queue of work item objects but I still process them using the ThreadPool. Selected items are 'bumped' by putting them on the front of the queue (rather than the back) [note: needed a special collection for this].
The following might describe it in detail (in lieu of code)
Modified Customer to keep a property of IList<Account> named KnownValues, and also an object used for locking named KnownValuesSyncObject (since I want KnownValues to be null when the accounts aren't yet known).
Search form maintains an instance variable Deque<CustomerAccountLoadingWorkItem> (from PowerCollections)
A CustomerAccountLoadingWorkItem maintains a reference to the Customer it's meant to process, as well as the ManualResetEvent handle it was created with.
In order to only start loading for visible items (i.e. not scrolled off the screen) I used answers from my other post to use Data Virtualization as a mechanism for queuing CustomerAccountLoadingWorkItem's as each 'page' was loaded. Immediately after adding to the work items queue, a task is added to the ThreadPool queue. The context/state for the ThreadPool.QueueUserWorkItem call was my _accountLoadingWorkItems queue.
The WaitCallback delegate launched by the ThreadPool took the work item queue passed to it, then (using locks) dequeued the top item (if there wasn't one, it returned immediately) and processed it. The function used was a static function on CustomerAccountLoadingWorkItem so that it could access the item's private readonly state (that is, the Customer and ManualResetEvent).
At the end of processing the static processing function also sets the Customer.KnownValues (using the KnownValuesSyncObject for locking).
When the user selects a value in the Customers grid, in its already-separate thread (via BackgroundWorker), if Customer.KnownValues isn't already filled (i.e. it's probably sitting in the queue somewhere), it adds a new CustomerAccountLoadingWorkItem to the work item queue (but at the top!!! [this was why the Deque was required]) and also adds a new processing task to the ThreadPool. Then, since it created the work item, it calls ManualResetEvent.WaitOne() to force waiting for the thread pool task to complete.
I hope that made sense...
Of note: since my solution still uses the ThreadPool, when the user selects an item I still have to wait for currently running thread pool threads to finish before my work item gets picked up. I figured that was OK, and possibly even desirable, seeing as the resource used to query for the accounts (a web service) will be semi-locked-up anyway, so it will probably come up equivalently quickly (due to some poor architecting and a shared web service proxy).
Overall, I certainly made what should have been an easy-ish job somewhat more difficult and should I have been able to use Framework 4 I would have looked at going down the TPL route for sure.
If .NET 4 is an option, might I recommend the new ConcurrentStack<T> collection.
http://msdn.microsoft.com/en-us/library/dd267331(v=VS.100).aspx
You can add all the items you want to pre-fetch and if an item is selected by the user, you can push that selection onto the stack making it the next instance to be retrieved. This works great with the new PLINQ and TPL and makes use of the new ThreadPool improvements in .NET 4.
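A tiny sketch of that idea (the customer keys are invented placeholders): because ConcurrentStack<T> is LIFO, re-pushing a user-selected item makes it the very next one a fetcher thread pops.

```csharp
using System;
using System.Collections.Concurrent;

class PrefetchDemo
{
    static void Main()
    {
        var pending = new ConcurrentStack<string>();
        // Queue up the visible customers for pre-fetching.
        pending.PushRange(new[] { "cust1", "cust2", "cust3" });

        // The user selects "cust2" before it was fetched:
        // pushing it again puts it on top, so it is popped next.
        pending.Push("cust2");

        string next;
        if (pending.TryPop(out next))
            Console.WriteLine(next); // prints "cust2"
    }
}
```

One wrinkle of this approach: the stale "cust2" entry deeper in the stack will surface again later, so the fetcher should skip items whose accounts are already loaded.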
http://channel9.msdn.com/shows/Going+Deep/Erika-Parsons-and-Eric-Eilebrecht--CLR-4-Inside-the-new-Threadpool/