Is there a way in C# to send a message to another thread based on the thread's thread id or name?
Basically, for a project in school my professor wants us to do a producer/consumer deal, but passing objects serialized to a string (such as XML) from producer to consumer. Once a string is pulled from a buffer in the consumer thread, each of those strings is decoded (including the thread ID) and processed, and the original producer is notified via a callback. So how do I send an event to the original producer thread with just the thread ID?
You can write a class which has a Dictionary&lt;string, Thread&gt; member containing all your threads. When you create a thread, add it to the dictionary so you can look it up by name (key) later from anywhere in the class. This way you can also share resources among your threads, but be sure to lock any shared resources to prevent concurrency issues.
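A minimal sketch of such a registry class (the `ThreadRegistry` name and its methods are illustrative assumptions, not a standard API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ThreadRegistry
{
    private readonly Dictionary<string, Thread> _threads = new Dictionary<string, Thread>();
    private readonly object _lock = new object();

    public Thread StartThread(string name, ThreadStart work)
    {
        var t = new Thread(work) { Name = name, IsBackground = true };
        lock (_lock) { _threads[name] = t; }   // guard the shared dictionary
        t.Start();
        return t;
    }

    public Thread GetByName(string name)
    {
        lock (_lock)
        {
            Thread t;
            return _threads.TryGetValue(name, out t) ? t : null;
        }
    }
}

class Program
{
    static void Main()
    {
        var registry = new ThreadRegistry();
        registry.StartThread("worker-1", () => Console.WriteLine("worker-1 running"));

        var found = registry.GetByName("worker-1");
        found.Join();                          // wait for the named thread to finish
        Console.WriteLine(found.Name);
    }
}
```

Note that holding a `Thread` reference by itself still doesn't let you "send" the thread a message; you additionally need a shared queue or synchronization object, as the other answers describe.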
Imagine you run a company, and you could hire as many employees as you liked, but each employee was really single-minded, so you could only give them one order ever. You couldn't get much done with them, right? So if you were a smart manager, what you'd do is say "Your order is 'wait by your inbox until you get a letter telling you what to do, do the work, and then repeat'". Then you could put work items into the worker's inboxes as you needed work done.
The problem then is what happens if you give an employee a long-running, low-priority task (let's say, "drive to Topeka to pick up peanut butter for the company picnic"). The employee will happily go off and do that. But then the building catches fire, and you need to know that if you issue the order "grab the fire extinguisher and put the fire out!" someone is going to do it quickly. You can solve that problem by having multiple employees share a single inbox - that way, there is a higher probability that someone will be ready to execute the order to douse the flames, and not be off driving through Kansas.
Guess what? Threads are those difficult employees.
You don't "pass messages to a thread". What you can do is set up a thread or group of threads to observe a common, shared data structure such as a blocking queue (BlockingCollection in .NET, for example), and then put messages (like your strings) into that queue for processing by the consumer threads (which need to listen on the queue for work).
For bidirectional communication, you need two queues (one for requests and one for responses). The reason is that your "main" thread is also a bad employee - it can only process responses one at a time, and while it is processing a response from one worker, another worker might come back with another response. You'd want to build a request/response coordination protocol so that the original requestor knows which request a response is associated with - usually requests have an ID, and responses reference the request's ID so that the original requestor knows which request each response is for.
Finally you need proper thread synchronization (locking) on the queues if that isn't built in to the Producer/Consumer queue that you are working with. Imagine if you were putting a message into a worker's inbox, and that worker was so eager to read the message that he grabbed it from your hand and tore it in half. What you need is the ability to prevent more than one thread from accessing the queue at a time.
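The whole scheme above - a shared inbox, several workers, and ID-based response correlation - can be sketched with `BlockingCollection<T>` (which handles the locking internally; the `Request`/`Response` types here are assumptions for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Request  { public int Id; public string Payload; }
class Response { public int RequestId; public string Result; }

class Program
{
    static void Main()
    {
        var requests  = new BlockingCollection<Request>();
        var responses = new BlockingCollection<Response>();

        // A pool of "employees" sharing one inbox (the requests queue).
        for (int i = 0; i < 3; i++)
        {
            Task.Run(() =>
            {
                foreach (var req in requests.GetConsumingEnumerable())
                    responses.Add(new Response
                    {
                        RequestId = req.Id,              // correlate response to request
                        Result = req.Payload.ToUpper()
                    });
            });
        }

        requests.Add(new Request { Id = 1, Payload = "hello" });
        requests.Add(new Request { Id = 2, Payload = "world" });
        requests.CompleteAdding();

        // The requestor matches each response to its request by RequestId.
        for (int i = 0; i < 2; i++)
        {
            var resp = responses.Take();
            Console.WriteLine(resp.RequestId + ": " + resp.Result);
        }
    }
}
```

Responses may arrive in either order, which is exactly why the `RequestId` correlation matters.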
When using threads you do not try to send messages between them. Threads can use shared memory to synchronize with each other - objects used this way are called synchronization objects. To manage threads in a consumer/producer system you can use a queue (the data structure), not a message system (see example here: C# producer/consumer).
Another possible solution (which I would not recommend) is: you can use the Win32 GetThreadId function to return the ID of a given native thread; all you need to find is the thread handle to pass to it. GetCurrentThreadId returns the ID of the current thread. From the managed Thread object you can access its Name property.
A message is simply a method call, and to make a method call you first need an instance object which exposes some methods to be called. Thus sending a message to a thread means finding an active object which lives in that thread and calling one of its methods.
Finding each thread's main worker object could be handled through a threads coordinator: if an object in one thread wants to send a message to an object in another thread, it first sends its request to the coordinator, and the coordinator forwards the message/request to its destination.
Related
I am still learning C#, so please be easy on me. I am thinking about the application I am working on and I can't seem to figure out the best approach. This is not a forms application but rather a console one. I am listening to a UDP port. I get UDP messages as fast as 10 times per second. I then look for a trigger in the UDP message. I am using an event handler that is raised each time I get a new UDP packet, which then calls methods to parse the packet and look for my trigger. So, I have these questions.
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems: if any of the methods takes a bit longer than it should (for example, writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
There are no "permanent" threads as such. However, there should be a thread responsible for receiving: once you start it, let it run until you no longer need to receive any messages.
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
That depends on how often you would send out messages. If your situation is more like consumer/producer, then a dedicated sending thread is a good idea. But if you send out a message only rarely, you can use the thread pool. I can't define how often "rarely" means in this case; you should watch your app and decide.
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
You can rely on them; your message is more likely to be delayed by slow message processing or a slow network than by the thread pool.
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems: if any of the methods takes a bit longer than it should (for example, writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
A queue is a perfect solution. You can have several queues if some messages are independent of others and their execution won't collide, and then process them in parallel.
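A minimal sketch of that queue-based approach, assuming the receive loop hands packets to a BlockingCollection so slow processing (e.g. a DB write) never blocks reception (the real receive loop would wrap `UdpClient.Receive`; here three packets are faked):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var packets = new BlockingCollection<byte[]>();

        // Worker: drains the queue, so the receive loop never waits on parsing or DB work.
        var worker = Task.Run(() =>
        {
            foreach (var packet in packets.GetConsumingEnumerable())
                Console.WriteLine("processed " + packet.Length + " bytes");
        });

        // Stand-in for the UDP receive loop: enqueue packets as fast as they arrive.
        for (int i = 1; i <= 3; i++)
            packets.Add(new byte[i]);

        packets.CompleteAdding();
        worker.Wait();
    }
}
```

If processing falls behind, the queue grows instead of the event queue, which you can monitor and bound (BlockingCollection accepts a capacity in its constructor).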
I'll address your points:
Your listening thread must be a 'permanent' thread that receives messages and distributes them.
(2+3) - Look at the TPL library; you should use it instead of working with threads and thread pools directly (unless you need some fine control over the operations which, from your question, it seems you don't) - as MSDN states:
The Task Parallel Library (TPL) is based on the concept of a task, which represents an asynchronous operation. In some ways, a task resembles a thread or ThreadPool work item, but at a higher level of abstraction
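A small illustration of the difference in practice - with the TPL you start work with `Task.Run` and chain follow-ups with `ContinueWith`, instead of managing `Thread` objects yourself:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Fire off work on the thread pool via the TPL instead of creating threads by hand.
        Task<int> parse = Task.Run(() => int.Parse("42"));

        // ContinueWith chains follow-up work that runs once the first task finishes.
        Task done = parse.ContinueWith(t => Console.WriteLine("parsed " + t.Result));
        done.Wait();
    }
}
```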
Look into using message queues, since what you need is a place to receive messages, store them for some time (in memory, in your case) and handle them at your own pace.
You could implement this yourself, but you'll find it gets complicated quickly.
I recommend looking into NetMQ - it's easy to use, especially for what you describe, and it's in C#.
I have a bunch of threads blocked waiting for a message. Each message has an ID which points to a specific thread. I have the following implementations:
1) All threads are waiting on the same lock object using Monitor.Wait. When a message comes in, I call Monitor.PulseAll and each thread checks its own ID against the message ID. If there is a match, the thread continues; otherwise it waits again on the same object. With this approach, every arriving message causes N-1 threads to wake up, fail the ID check, and go back to sleep.
2) Each thread creates a ManualResetEvent and adds it to a dictionary. The dictionary maps each message ID to its event. When a message arrives, the dispatcher calls map[message.Id].Set(), which wakes up the specific thread.
3) This last implementation is very similar to #2, except it uses a lock object instead of ManualResetEvent. The hypothesis is that ManualResetEvent is an expensive object. This approach is more complex if compared to ManualResetEvent.
What's the best approach here? Is there a better one?
The question description is fairly vague, so it's hard to know for sure what your best approach would be. That said…
I would not use #1 or #2 at all. #1 requires waking every thread up just so one thread can run, which is obviously inefficient, and #2 uses the unmanaged Windows-based synchronization objects, which is not as efficient as using a built-in .NET mechanism.
Your #3 option is on the face of it not unreasonable given the problem description. However, IMHO you should not be reimplementing this yourself. I.e. as near as I can tell, you (for some reason) have messages that need to be provided to specific threads, i.e. a given message must be processed only by one specific thread.
In this case, I think you should just create a separate message queue for each thread, and add the message to the appropriate queue. There are lots of ways to implement the queue, but the most obvious for this particular example seems to me to be to use BlockingCollection<T>. By default, this uses a queue as the underlying collection data structure. The other feature that's important here is the GetConsumingEnumerable() method, which allows you to write a foreach loop in the dependent thread to retrieve messages as they are queued. The loop will block when no message is available, waiting for one to be provided via some other thread.
You can use a dictionary to map message ID to the appropriate queue for each thread.
Note that this is not really, IMHO, a performance issue. It's more about using an appropriate data structure for the given problem. I.e. you seem to have a queue of messages, where you want to dispatch each message to a different thread depending on its ID. Instead, I think you should implement multiple queues of messages, one for each thread, and then use the existing .NET features to implement your logic so that you don't have to reinvent the wheel.
Note also that if you still must maintain a single input queue for the messages (e.g. because that's the interface presented to some other component in your program), you can and should still do the above. You'll just have some adapter code that dequeues a message from the main, single message queue and routes to the appropriate thread-specific queue.
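A sketch of the per-thread-queue idea described above, using a dictionary from ID to `BlockingCollection<T>` and `GetConsumingEnumerable()` (the `Message` type is an assumption for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class Message { public int TargetId; public string Body; }

class Program
{
    static void Main()
    {
        // One queue per consumer thread, keyed by the ID the messages carry.
        var queues = new Dictionary<int, BlockingCollection<Message>>();
        var consumers = new List<Task>();

        for (int id = 0; id < 2; id++)
        {
            var queue = new BlockingCollection<Message>();
            queues[id] = queue;
            int myId = id;   // capture a copy for the closure
            consumers.Add(Task.Run(() =>
            {
                // Blocks while empty; only this thread's messages ever arrive here.
                foreach (var msg in queue.GetConsumingEnumerable())
                    Console.WriteLine("thread " + myId + " got: " + msg.Body);
            }));
        }

        // The adapter/dispatcher routes by ID instead of waking every thread.
        var incoming = new[]
        {
            new Message { TargetId = 0, Body = "a" },
            new Message { TargetId = 1, Body = "b" }
        };
        foreach (var msg in incoming)
            queues[msg.TargetId].Add(msg);

        foreach (var q in queues.Values) q.CompleteAdding();
        Task.WaitAll(consumers.ToArray());
    }
}
```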
To start a thread - Thread.Start().
To check if your thread hasn't aborted yet - the Thread.IsAlive property.
To check if a thread is running - the Thread.ThreadState property.
You can use the above three props and methods to have desired control over the threads and manage them at a very fine granularity.
When you are initializing the threads, put them all in a Dictionary&lt;ID, Thread&gt;. Now, whenever you get a message, simply get the thread with the required ID and wake it up.
I have a queue of tasks or work items that needs to be executed in sequence, in the background. These tasks will be of the "fire and forget" type, meaning that once they are started, I do not really care if they complete or not, no need for cancellation or status update. If they do not complete, the user will be able to retry or diagnose manually.
The goal is to be able to keep a reference to the queue and only have to do
myQueue.Add( () => DoMyStuff() );
in order to add something to the queue.
The System.Threading.Tasks.Task class only seems to be able to queue tasks one after the other by continuation, not by referencing a common queue. I do not want to manage the complexity of getting the latest task and attaching to it.
Threadpools do not guarantee sequencing and will execute work items in parallel. (Which is great, but not what I need)
Is there any built-in class that can handle that that I did not think of?
Edit:
We need to be able to add tasks to the queue at a later time. The scenario is that we want to send commands to a device (think switching a light bulb on or off) when the user clicks on a button. The commands take 5 seconds to process and we want the user to be able to click more than once and queue the requests. We do not know upfront how many tasks will be queued nor what will the tasks be.
Create a BlockingCollection, by default it will use a ConcurrentQueue as its internal data structure. Ensure that any task with prerequisites has its prerequisites added to the collection first. The BlockingCollection can be a collection of Tasks, some custom item representing the parameters for a method to be called, a Func<> or Action<> to execute, or whatever. Personally I'd go with either Task or Action.
Then you just need to have a single thread that goes through each item in the collection and executes them synchronously.
You can add new items to the queue while it's working and it won't have any problems.
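A minimal sketch of such a wrapper, assuming a hypothetical `WorkQueue` class with the `Add` signature the question asks for (a `BlockingCollection<Action>` drained by one worker task, so items run strictly in FIFO order):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical wrapper type; the name WorkQueue is an assumption.
class WorkQueue
{
    private readonly BlockingCollection<Action> _items = new BlockingCollection<Action>();
    private readonly Task _worker;

    public WorkQueue()
    {
        // Single consumer thread => strict FIFO, one task at a time.
        _worker = Task.Run(() =>
        {
            foreach (var action in _items.GetConsumingEnumerable())
            {
                try { action(); }
                catch { /* fire-and-forget: swallow, let the user retry manually */ }
            }
        });
    }

    public void Add(Action work) { _items.Add(work); }

    public void Shutdown() { _items.CompleteAdding(); _worker.Wait(); }
}

class Program
{
    static void Main()
    {
        var myQueue = new WorkQueue();
        myQueue.Add(() => Console.WriteLine("first"));
        myQueue.Add(() => Console.WriteLine("second"));
        myQueue.Shutdown();
    }
}
```

New items can be added at any time, matching the button-click scenario from the edit: each click just calls `Add` and the commands run one after another.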
You can create a queue object as a wrapper around System.Threading.Tasks.Task. If you limit the number of concurrently executing tasks to just one in the underlying thread pool, I think your problem is solved.
Limiting the number of executing tasks: System.Threading.Tasks - Limit the number of concurrent Tasks
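As an alternative to a hand-rolled limit, the TPL's `ConcurrentExclusiveSchedulerPair` (available from .NET 4.5) provides an exclusive scheduler that runs at most one task at a time - a sketch:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // ExclusiveScheduler guarantees at most one task executes at any moment.
        var pair = new ConcurrentExclusiveSchedulerPair();
        var factory = new TaskFactory(pair.ExclusiveScheduler);

        Task a = factory.StartNew(() => Console.WriteLine("one"));
        Task b = factory.StartNew(() => Console.WriteLine("two"));
        Task.WaitAll(a, b);
    }
}
```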
How about starting all the threads at the same time and making them listen for a job-completion event?
Say your threads have IDs according to the sequence in which they should run; all the threads can start at the same time, but each will sleep until it gets the job complete/timeout event of the previous job.
The job complete/timeout event will also help your monitoring thread keep track of the worker threads.
What I have now is a real-time API that gets a bunch of messages from the network and feeds them into a pub/sub manager class. There may be up to 1000 msg/sec or more at times. There are two different threads, each connected to its own pub/sub. Subscribers are WPF windows. The manager keeps a list of windows and their DispatcherSynchronizationContext.
A thread calls the manager through interface method.
Manager publishes through Post:
foreach (var sub in Subscribers[subName])
{
sub.Context.Post(sub.WpfWindow.MyDelegate, data);
}
Can this be optimised?
P.S. Please don't ask why I think it is slow and so on. I don't have hard limits; any solution is slow in some sense. I have to do my best to make it as fast as possible. I am asking for help to assess: can it be done faster? Thank you.
EDIT: found this: http://msdn.microsoft.com/en-us/library/aa969767.aspx
The argument for a queue stands. What I do is put stuff into the queue; the queue triggers a task that then invokes into the messaging thread and pulls X items of data (1000, or however many there are). The one thing that killed me was permanent single-item invocation (which is slow), but doing it in batches works nicely. I can keep up with nearly zero CPU load on a very busy ES data feed, even in crazy times for time and sales.
I have a special set of components for that which I will open-source in one of the next weeks, including an ActionQueue (taking a delegate to call when items need processing). This is now a Task (it was a queued thread-pool work item before). I took care to process up to 1000 messages per invocation - but if you drive a price grid you may need more.
Note: use WPF hints to enable GPU caching of rendered bitmaps.
In addition:
Run every window on its own thread / message pump.
HEAVILY use async queues. The publisher should never block; every window has its own async target queue.
You want processing as decoupled as possible. Brutally decoupled.
Here is my suggestion for you.
I would use a ConcurrentQueue (comes with the namespace System.Collections.Concurrent;) The background workers feed their messages in that queue. The UI Thread takes a timer and draws (let's say every 500 msec) a bunch of messages out of that queue and shows them to the user. Another possible way is, that the UI thread only will do that on demand of the user. The ConcurrentQueue is designed to be used from different thread and concurrently (as the name says ;-) )
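A sketch of the batch-drain step such a timer tick would run (the DispatcherTimer and WPF wiring are omitted so the core idea stands alone; draining up to N items per tick keeps the UI responsive under bursts):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

class Program
{
    // The work a UI timer tick would do: pull up to `max` items in one pass.
    static List<string> DrainBatch(ConcurrentQueue<string> queue, int max)
    {
        var batch = new List<string>();
        string item;
        while (batch.Count < max && queue.TryDequeue(out item))
            batch.Add(item);
        return batch;
    }

    static void Main()
    {
        var queue = new ConcurrentQueue<string>();

        // Background publishers call Enqueue; it never blocks them.
        queue.Enqueue("tick1");
        queue.Enqueue("tick2");
        queue.Enqueue("tick3");

        // One simulated "timer tick": render the whole batch at once.
        foreach (var msg in DrainBatch(queue, 1000))
            Console.WriteLine(msg);
    }
}
```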
I am getting into C# programming and am fairly new ("scrub") to the language. I would like to think I have a good understanding of object-oriented programming in general, and of what running multiple threads means at a high level, but when it comes to actual implementation I am, as said, a scrub.
What I am looking to do is create a tool that will have many threads running and interacting with each other independently; each will serve its own task and may call the others.
My strategy to ensure communication (without losing anything when multiple updates occur at the same time from different threads) is to give each class a spool-like task queue that can be called externally to add tasks for a given thread, or a spool service for these. I am not sure whether I should place this on the class itself or externally, with the class calling the spool for new tasks and keeping track of it. In particular I am considering how to signal the class when an empty spool receives a task: a listener approach (tasks subscribe to pools if they want to be woken when new work arrives), or a "check every X seconds whether we are out of tasks and the next task is not scheduled" approach.
What would be a good strategy here - should I build this into the actual class, or keep it external? And what are the critical regions in the implementation? The busy-wait check only requires adding and removing jobs on the spool to be critical, while the signalling approach requires both adding/removing jobs and the go-to-sleep-on-signal step to be critical. That suddenly places a high requirement on the spool: what happens if the critical region has already been entered? This could result in blocks, causing other blocks, and possibly unforeseen deadlocks.
I use such a model often, on various systems. I define a class for the agents, say 'AgentClass', and one for the requests, say 'RequestClass'. The agent has two abstract methods, 'submit(RequestClass *message)' and 'signal()'. Typically, a thread in the agent constructs a producer-consumer queue and waits on it for RequestClass instances, the submit() method queueing the passed RequestClass instances to the queue. The RequestClass usually contains a 'command' enumeration that tells the agent what needs doing, together with all the data required to perform the request and the 'sender' agent instance. When an agent gets a request, it switches on the enumeration to call the correct function to do the request. The agent acts only on the data in the RequestClass - results, error messages etc. are placed in data members of the RequestClass. When the agent has performed the request (or failed and generated error data), it can either submit() the request back to the sender (i.e. the request has been performed asynchronously), or call the sender's signal() function, which signals an event upon which the sender was waiting (i.e. the request was performed synchronously).
I usually construct a fixed number of RequestClass instances at startup and store them in a global 'pool' P-C queue. Any agent/thread/whatever that needs to send a request can dequeue a RequestClass instance, fill in the data, submit() it to the agent and then wait asynchronously or synchronously for the request to be performed. When done with, the RequestClass is returned to the pool. I do this to avoid continual malloc/free/new/dispose, to ease debugging (I dump the pool level to a status bar using a timer, so I always notice if a request leaks or gets double-freed), and to eliminate the need for explicit thread termination on app close (if multiple threads only ever read/write data areas that outlive the application forms etc., the app will close easily and the OS can deal with all the threads - there are hundreds of posts about 'cleanly shutting down threads upon app close' - I never bother!).
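A compressed C# sketch of the AgentClass/RequestClass pattern (the command set and the synchronous signalling via ManualResetEventSlim are illustrative assumptions; the original design pools RequestClass instances rather than allocating one per call):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

enum Command { ToUpper, Reverse }

class RequestClass
{
    public Command Cmd;
    public string Data;
    public string Result;                 // the agent writes results back here
    public ManualResetEventSlim Signal;   // stands in for the sender's signal()
}

class AgentClass
{
    private readonly BlockingCollection<RequestClass> _inbox =
        new BlockingCollection<RequestClass>();

    public AgentClass()
    {
        // The agent's thread waits on its producer-consumer queue.
        Task.Run(() =>
        {
            foreach (var req in _inbox.GetConsumingEnumerable())
            {
                switch (req.Cmd)          // dispatch on the command enumeration
                {
                    case Command.ToUpper:
                        req.Result = req.Data.ToUpper();
                        break;
                    case Command.Reverse:
                        var chars = req.Data.ToCharArray();
                        Array.Reverse(chars);
                        req.Result = new string(chars);
                        break;
                }
                req.Signal.Set();         // synchronous completion: wake the sender
            }
        });
    }

    public void Submit(RequestClass req) { _inbox.Add(req); }
}

class Program
{
    static void Main()
    {
        var agent = new AgentClass();
        var done = new ManualResetEventSlim();
        var req = new RequestClass { Cmd = Command.ToUpper, Data = "abc", Signal = done };
        agent.Submit(req);
        done.Wait();                      // wait for the agent to perform the request
        Console.WriteLine(req.Result);
    }
}
```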
Such message-passing designs are quite resistant to deadlocks since the only locks, (if any), are in the P-C queues, though you can certainly achieve it if you try hard enough:)
Is this the sort of system that you need, or have I got it wrong?
Rgds,
Martin