How to create a singleton which is always running in a separate thread? - C#

I would like to rephrase my previous question, How to create Singleton with async method?
Imagine a messaging application (like ICQ) - something that should always be connected to a server and be able to post messages.
I need to implement a Connection class. It should be a singleton, because it contains a "socket" inside, and that socket should persist for the entire application lifetime.
Then I want to implement an async method, Connection.postMessage.
Because postMessage can take a significant amount of time:
postMessage should be async
postMessage should queue messages if necessary
Note that my application posts dozens of messages per second, so it is not appropriate to create a new Thread for each postMessage call.
I definitely need to create exactly one extra thread for posting messages, but I don't know where and how.
upd: good example http://msdn.microsoft.com/en-us/library/yy12yx1f(v=vs.80).aspx

No, PostMessage (itself) should not be async.
It should
be Thread-safe
ensure the Processing thread is running
queue the message (ConcurrentQueue)
return
And the Processing Thread should
Wait on the Queue
Process the messages
maybe Terminate itself when idle for xx milliseconds
What you have is a classic Producer/Consumer situation with 1 Consumer and multiple Producers.
PostMessage is the entry-point for all producers.
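A minimal sketch of that shape (the string message type and the SendOverSocket method are placeholders, and a BlockingCollection<T> could replace the ConcurrentQueue/SemaphoreSlim pair):

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class Connection
{
    private static readonly Lazy<Connection> instance =
        new Lazy<Connection>(() => new Connection());

    public static Connection Instance { get { return instance.Value; } }

    private readonly ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
    private readonly SemaphoreSlim signal = new SemaphoreSlim(0);

    private Connection()
    {
        // Exactly one long-lived consumer thread for the whole application lifetime.
        var worker = new Thread(ProcessLoop) { IsBackground = true };
        worker.Start();
    }

    // Entry point for all producers: thread-safe, just enqueue and return.
    public void PostMessage(string message)
    {
        queue.Enqueue(message);
        signal.Release(); // wake the processing thread
    }

    private void ProcessLoop()
    {
        while (true)
        {
            signal.Wait(); // blocks until at least one message has been queued
            string message;
            if (queue.TryDequeue(out message))
                SendOverSocket(message); // placeholder for the real socket write
        }
    }

    private void SendOverSocket(string message)
    {
        // actual socket I/O goes here
    }
}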

jp,
You're looking at a classic producer/consumer problem here... During initialisation the Connection should create a MessageQueue and start a Sender in its own background thread.
Then the connection just posts messages to the queue, for the Sender to pick up and forward when ready.
The tricky bit is managing the maximum queue size... If the producer consistently outruns the consumer, the queue can grow to an unmanageable size. The simplest approach is to block the producer thread until the queue is no longer full. This can be done with a back-off-ARQ, i.e.: while(queue.isFull) sleep(100, "milliseconds"); queue.add(message); If you don't require 100% transmission (in a chat app, for instance) then you can simply throw a MessageQueueFullException, and the poor client will just have to get over it... just always allow them to resubmit later, letting the user manage the retries for you.
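A hedged sketch of those two back-pressure options, using a bounded BlockingCollection<T> (from System.Collections.Concurrent) instead of a hand-rolled sleep loop; the capacity, message type and exception are illustrative only:

var queue = new BlockingCollection<string>(boundedCapacity: 1000);
string message = "hello";

// Option 1: block the producer until the consumer has made room.
queue.Add(message);

// Option 2: fail fast and let the caller resubmit later
// (a stand-in for the MessageQueueFullException idea above).
if (!queue.TryAdd(message, millisecondsTimeout: 0))
    throw new InvalidOperationException("message queue is full");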
That's how I'd tackle it, anyway. I'll be interested to see what other suggestions are mooted.
Hope things work out for you. Cheers. Keith.

Related

Event handler timing and threading

I am still learning C#, so please go easy on me. I am thinking about the application I am working on and I can't seem to figure out the best approach. This is not a forms application but rather a console one. I am listening on a UDP port. I get UDP messages as fast as 10 times per second. I then look for a trigger in the UDP message. I am using an event handler that is raised each time I get a new UDP packet, which then calls methods to parse the packet and look for my trigger. So, I have these questions.
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
On thread pools, I am reading that they are not very high priority, is that true? If the message I need to send out is critical, can I rely on thread pools?
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems, because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
There are no permanent threads. However there should be a thread that is responsible for receiving. Once you start it, let it run until you no longer need to receive any messages.
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
That depends on how often you send out messages. If your situation is more like producer/consumer, then a separate thread for sending is a good idea. But if you send out a message only rarely, you can use the thread pool. I can't define what "rarely" means in this case; you should watch your app and decide.
On thread pools, I am reading that they are not very high priority, is that true? If the message I need to send out is critical, can I rely on thread pools?
You can; it's more likely that your message will be delayed by slow message processing or a slow network than by the thread pool.
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems, because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
A queue is a perfect solution. You can have more queues if some messages are independent of others and their execution won't collide, and then process them in parallel.
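A rough sketch of that receive-thread + queue arrangement, assuming a UdpClient bound to an example port; HandlePacket stands in for the parsing/trigger logic:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class UdpPipeline
{
    private static readonly BlockingCollection<byte[]> packets = new BlockingCollection<byte[]>();

    static void Main()
    {
        // Receiving thread: only receives and enqueues, so it never falls
        // behind because of slow downstream processing.
        var receiver = new Thread(() =>
        {
            var udp = new UdpClient(11000); // port is an example
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
                packets.Add(udp.Receive(ref remote));
        }) { IsBackground = true };
        receiver.Start();

        // Processing loop: handles packets in arrival order, one at a time.
        foreach (var packet in packets.GetConsumingEnumerable())
            HandlePacket(packet); // parse, look for the trigger, write to the DB, etc.
    }

    static void HandlePacket(byte[] packet)
    {
        // placeholder for the parsing / trigger logic
    }
}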
I'll address your points:
Your listening thread must be a 'permanent' thread that gets messages and distributes them.
(2+3) - Look at the TPL library; you should use it instead of working with threads and thread pools directly (unless you need some fine control over the operations, which from your question it seems you don't need) - as MSDN states:
The Task Parallel Library (TPL) is based on the concept of a task, which represents an asynchronous operation. In some ways, a task resembles a thread or ThreadPool work item, but at a higher level of abstraction
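For instance, a one-off send can be handed to the TPL instead of a hand-created thread (this assumes System.Threading.Tasks is in scope; SendMessage and trigger are placeholders for your own send logic and data):

// Queue the send on the thread pool via the TPL; observe failures through a
// continuation instead of letting them go unnoticed.
Task.Run(() => SendMessage(trigger))
    .ContinueWith(t => Console.Error.WriteLine(t.Exception),
                  TaskContinuationOptions.OnlyOnFaulted);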
Look into using message queues, since what you need is a place to receive messages, store them for some time (in memory, in your case) and handle them at your own pace.
You could implement this yourself, but you'll find it gets complicated quickly.
I recommend looking into NetMQ - it's easy to use, especially for what you describe, and it's in C#.

RabbitMQ best practice for creating many consumers

We are just starting to use RabbitMQ with C#. My current plan is to configure in the database the number and kind of consumers to run on a given server. We have an existing Windows service, and when that starts I want to spawn all of the RabbitMQ consumers. My question is: what is the best way to spawn these from a Windows service?
My current plan is to read the configuration out of the database and spawn a long running task for each consumer.
var t = new Task(() =>
{
    var instance = LoadConsumerClass(consumerEnum, consumerName);
    instance.StartConsuming(); // blocking call
}, TaskCreationOptions.LongRunning);
t.Start();
Is this better or worse than creating a thread for each consumer?
var messageConsumer = LoadConsumerClass(consumerEnum, consumerName);
var thread = new Thread(messageConsumer.StartConsuming);
thread.Start(); // the thread does nothing until it is started
I'm hoping that more than a few others have already tried what I'm doing and can provide me with some ideas for what worked well and what didn't.
In EasyNetQ we have a single dispatcher thread for all consumers on a single connection. We also provide a facility to return a Task from the message handler, so it's easy to do async IO if you want to make a database call, go to the file system, or make a web service request.
Having said that, it's perfectly legitimate to have each consumer consuming on a different thread. I guess it depends on your message throughput, how many consumers you have, and the nature of your message handlers.
I'd stick with Tasks as they give you more features and generally allow for less boilerplate code.
And, if I understand your code correctly, you'd be sharing a channel (IModel) in the second case. This might cause trouble, as the default IModel implementation is not thread-safe (or at least used not to be). There are more subtle nuances regarding thread safety you'd have to watch out for.
But it depends on your usage patterns. If you don't expect many messages per second on each consumer, or if your app can handle messages quickly, then perhaps a single thread for all consumers will be your best option.
Task is great, but you're not really going to use all the stuff it can do. The only thing you need is to do work in parallel.
I faced the same question a couple of months ago. What I ended up with is a thread per computation type (per queue), which blocks on message arrival and doesn't consume CPU while waiting for messages.
Open a new channel for each one of the threads.
As for connections - if your application is meant to deal with a high load of messages, I suggest opening a connection for every X workers (figure out your X), since only one channel at a time can send messages through a connection; so if one worker is consuming a large message, the others are blocked at the connection level, waiting for it to become free.
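A rough sketch of that thread-per-queue layout, using the older blocking QueueingBasicConsumer from RabbitMQ.Client (the queue names, ack handling and Process call are placeholders):

var factory = new ConnectionFactory { HostName = "localhost" };
var connection = factory.CreateConnection(); // shared by the workers below

foreach (var queueName in new[] { "orders", "emails" }) // example queues
{
    var worker = new Thread(() =>
    {
        // One channel per thread - IModel instances should not be shared.
        var channel = connection.CreateModel();
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(queueName, false, consumer);

        while (true)
        {
            // Dequeue blocks until a delivery arrives, so the thread sleeps
            // instead of spinning while its queue is empty.
            var delivery = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            Process(delivery.Body); // placeholder for the real handler
            channel.BasicAck(delivery.DeliveryTag, false);
        }
    }) { IsBackground = true };
    worker.Start();
}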

C# Sockets: Do I really need so many separate threads

I'm writing a server in C#. The asynchronous example on msdn.microsoft.com suggests the following.
BeginAccept to listen for a client (& start a new thread when a client calls).
BeginReceive to receive the data from the client (& start a new thread to do it on).
BeginSend to send a reply to the client (& start yet another thread to do it on).
At this point there seems to be 4 separate threads, when from my (possibly naive) point of view there really only needs to be 2. 1 for the server to keep listening on and 1 for the conversation with the client. Why does my conversation with the client need 3 threads since I have to wait for a reply before I send and I won't be doing anything else while waiting to receive data from the client?
Cheers
BeginAccept does not start a new thread. It is attaching a handler to an OS level hook. No thread is going to be doing the meat of the work for this operation. The same is true for BeginReceive and BeginSend. None of these are starting new threads.
When the events that they are adding handlers for actually fire, then a thread pool thread is created to respond to the action happening. The CPU bound work done here should generally be quite low. What you'll see here is a lot of thread pool threads requested, but very little work being done by them, so they are sent back to the pool very quickly.
The thread pool is designed for this type of use. Rather than creating full threads for each event response (which would be expensive) you can create just 1-2 threads and continually re-use them to respond to all of these events in turn. The pool will only create as many threads as it needs to keep up with a sufficiently small backlog.
While your main thread will be marshalled around these operations, you should not have to "start a thread" yourself.
BeginAccept is a non blocking method - .NET will return immediately from it but invoke your callback on the thread pool when it fulfils its purpose asynchronously.
The threadpool is optimised outside of your control.
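A rough sketch of that callback chain (port number and buffer size are arbitrary); none of the Begin* calls below create a thread, and AcceptCallback/ReceiveCallback run on thread-pool threads when the OS completes each operation:

using System;
using System.Net;
using System.Net.Sockets;

sealed class AsyncServer
{
    private Socket listener;

    private sealed class ClientState
    {
        public Socket Socket;
        public readonly byte[] Buffer = new byte[8192];
    }

    public void Start()
    {
        listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 11000));
        listener.Listen(100);
        listener.BeginAccept(AcceptCallback, null); // returns immediately
    }

    private void AcceptCallback(IAsyncResult ar)
    {
        var state = new ClientState { Socket = listener.EndAccept(ar) };
        listener.BeginAccept(AcceptCallback, null); // keep accepting other clients
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, ReceiveCallback, state);
    }

    private void ReceiveCallback(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) { state.Socket.Close(); return; } // client disconnected

        // ... handle state.Buffer[0..read), e.g. parse a request and BeginSend a reply ...

        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, ReceiveCallback, state);
    }
}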

Raise event in one thread to invoke methods in a second thread

I'm working on a program which reacts to events coming from an internet socket, and possibly from timers as well. It seems natural to use two threads:
One for the main program
A second one which listens to the socket, parses the input, and raises an appropriate event.
Additional requirements:
The application should not rely on a UI thread (it may be run as a console application).
The main program should process messages synchronously, i.e. in the order in which they arrived.
The main thread must not block on waiting for timers (I guess this means I have to run timers on different threads).
And now for some questions :-):
I'm guessing requirement #1 means that I don't have a built-in message pump, so I can't use Invoke() from the socket listener / timer threads. Is this correct?
How can I safely raise events on one thread (e.g. the listener), and have the subscribers run synchronously on another (the main thread)?
It is very likely that new events will be raised before the previous handler is done. What will happen in this case? Will the event be buffered somewhere by the CLR, or will it be ignored?
And last but not least: I guess I'm aiming for a parallel of the message Producer/Consumer paradigm, but instead of messages, I want to use events. Do you think there is a better approach?
Thanks,
Boaz
EDIT
I want to explain my motivation for using events in the first place. The application is an automated trading engine which has to respond to events that happen in the market (e.g. a change in the price of a stock). When this happens, there may be multiple subscribers on the main thread which should be invoked, which is a classical scenario to use events.
I guess I can always use the Producer/Consumer with some message queue, and have the consumer raise events on the main thread, but I figured there might be a more direct way.
I think using messages will be the simplest way. If you are using C# 4 this is very easy thanks to BlockingCollection<T>.
So have a shared BlockingCollection<Message>, where Message is your message class.
Then in your worker thread you do this:
var msgEnum = blockingCollection.GetConsumingEnumerable();

// Per thread
foreach (Message message in msgEnum)
{
    // Process messages here
}
That is it.
The GetConsumingEnumerable() will block until there is a message to process. It will then remove the message from the queue and your loop will process it.
What is nice about this is that you can add more threads and in each one you just have the foreach loop.
When you are done, call blockingCollection.CompleteAdding();
BTW the queue handles concurrency and will queue messages sent at the same time etc.
Hope this helps
Andre
You could implement a shared queue between your threads. Whenever an event is raised, you push it onto the queue. The main thread runs an endless loop that checks for new events, removes them from the queue, handles the event, and, when there are no more events, sleeps for some time.
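A hedged sketch that combines the BlockingCollection idea above with the events from the question; MarketPump, PriceChanged and the decimal payload are invented names:

using System;
using System.Collections.Concurrent;

sealed class MarketPump
{
    private readonly BlockingCollection<Action> work = new BlockingCollection<Action>();

    public event Action<decimal> PriceChanged; // illustrative market event

    // Called from the socket/timer threads: queue the raise instead of
    // invoking the handlers directly.
    public void OnPriceFromSocket(decimal price)
    {
        work.Add(() =>
        {
            var handler = PriceChanged;
            if (handler != null) handler(price);
        });
    }

    // Called once on the main thread: this is the "message pump" the console
    // app lacks; subscribers run here, synchronously and in arrival order.
    public void Run()
    {
        foreach (var action in work.GetConsumingEnumerable())
            action();
    }
}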

Processing packets in the async loop or not?

In C#, when receiving network data with the BeginReceive/EndReceive methods, is there any reason you shouldn't process the packets as soon as you receive them? Some of the tasks can be decently CPU intensive. I ask because I've seen some implementations that push the packets off into a processing queue and then handle them there. To me this seems redundant because, as far as I know, the async methods also operate on a thread pool.
Generally, you need to receive 'enough' packets to have a data item that is 'processable'.
IMO, It's better to have one thread whose job is receiving data, and another to actually process it.
As Mitch points out, you need to be able to receive enough packets to have a complete message/frame. But there's no reason why you shouldn't start processing that frame immediately and issue another BeginReceive. In fact, if you believe your processing could take some time, you're better off handing it off to the worker thread-pool rather than blocking a thread from the i/o pool (which is where your callback will fire).
In addition, unless you're expecting a low number of connections, spawning a thread to handle each connection is not a very scalable approach, although it does have the benefit of some simplicity.
I recently wrote an article on pipelining data-processing off a network socket, which you can find here.
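A hedged sketch of that hand-off: re-issue BeginReceive as soon as a frame is complete and push the CPU-heavy work onto the worker pool. The buffer field, ExtractFrame and ProcessFrame are placeholders for the app-specific framing and processing:

private void ReceiveCallback(IAsyncResult ar)
{
    var socket = (Socket)ar.AsyncState;
    int read = socket.EndReceive(ar);

    if (read > 0)
    {
        byte[] frame = ExtractFrame(buffer, read);               // app-specific framing
        ThreadPool.QueueUserWorkItem(_ => ProcessFrame(frame));  // CPU work off the i/o callback

        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
                            ReceiveCallback, socket);            // keep the pipe full
    }
}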
