I need to be able to asynchronously receive messages from the server at any time. However, I also want to use Synchronous sockets; when I send a message I want to block until I get a response. Do synchronous and asynchronous play well with each other, or will it cause problems?
In other words, I use BeginReceive() to asynchronously listen. If I call Receive() (synchronous version as I understand), will the next incoming message be picked up by the Receive callback, the BeginReceive callback, both, neither, or worse?
This would be happening in the client, the server can stay 100% asynchronous.
Do synchronous and asynchronous play well with each other?
No. Generally speaking, they don't play well together.
That's not to say that it can never be done, but it's sometimes impossible, and usually confusing and hard to work with. Unless you have a compelling reason to do otherwise, I'd suggest sticking with just one or the other.
From MSDN:
BeginReceive: "Begins to asynchronously receive data from a connected Socket."
So I would say that even though BeginReceive is a method of the Socket class, it is meant to receive data asynchronously, whereas the Receive method is used to retrieve data synchronously from a bound socket.
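Roughly, the difference in usage looks like this sketch (socket and buffer are assumed to be an already-connected Socket and a pre-allocated byte[]; the method names are just for illustration):
using System;
using System.Net.Sockets;

static class ReceiveContrast
{
    // Synchronous: blocks the calling thread until data arrives.
    static int ReadBlocking(Socket socket, byte[] buffer)
    {
        return socket.Receive(buffer);
    }

    // Asynchronous: returns immediately; the callback runs later, when data arrives.
    static void ReadWithCallback(Socket socket, byte[] buffer)
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar =>
        {
            int count = socket.EndReceive(ar);
            // process 'count' bytes from 'buffer' here
        }, null);
    }
}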
Yes, it is completely possible, the trick is to hide the asynchronous behaviour in a wrapper which 'appears' to act in a synchronous manner. There is an article on doing exactly that here for the network library NetworkComms.Net.
Disclaimer: I'm a developer for this library.
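The general idea, independent of any particular library, is something like the sketch below: the receive side always runs asynchronously, and a send call blocks on an event until a reply arrives. All names here are made up, and a real implementation would also need to tell unsolicited server messages apart from replies to a request:
using System;
using System.Net.Sockets;
using System.Threading;

// Hypothetical wrapper: BeginReceive keeps running in the background, while
// SendAndWait blocks the caller until the next message arrives (or a timeout expires).
class SyncOverAsyncClient
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[8192];
    private readonly ManualResetEventSlim responseReady = new ManualResetEventSlim(false);
    private byte[] lastResponse;

    public SyncOverAsyncClient(Socket connectedSocket)
    {
        socket = connectedSocket;
        BeginNextReceive();
    }

    private void BeginNextReceive()
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int count = socket.EndReceive(ar);
        lastResponse = new byte[count];
        Array.Copy(buffer, lastResponse, count);
        responseReady.Set();   // wake up any blocked sender
        BeginNextReceive();    // keep listening asynchronously
    }

    // Appears synchronous to the caller even though receiving is asynchronous.
    public byte[] SendAndWait(byte[] request, int timeoutMs = 5000)
    {
        responseReady.Reset();
        socket.Send(request);
        if (!responseReady.Wait(timeoutMs))
            throw new TimeoutException("No response from server.");
        return lastResponse;
    }
}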
Related
I am developing a client library for a network application protocol.
Client code calls the library to init it and to connect to server.
The client can of course send requests to the server, but the server can also send requests (Commands, called Cmd below) to the client.
The transport protocol is TCP/IP, so basically the client library connects to the server and calls an async method to retrieve the next request or response from the server, in order to avoid blocking on I/O while waiting for responses/requests from the server.
That being said, I see two possible solutions (using only C# constructs and no specific third-party framework) in the library to allow the client to receive requests from the server:
Either offer an event in the library such as
public event EventHandler<ReceivedCmdEventArgs> ReceivedCmd;
that the client would subscribe to, in order to get notified of requests incoming from the server.
Of course for this mechanism I will have to make an async loop in the client library to receive requests from the server and raise the event on Cmd reception.
Or the other solution would be to make such a method in the client library
public async Task<Cmd> GetNextCmdAsync()
that the client code would call in an async loop to receive the cmds.
Are these solutions essentially the same? Is it better to fully use the async/await constructs of C# 5 and not rely on events anymore? What are the differences? Any recommendations or remarks?
Thanks!
I think that the event-driven approach is better in your case.
In fact, you're talking about an observable/observer pattern. An unknown number of listeners/observers are waiting to do something if some command is received.
The async/await pattern wouldn't work as well as the event-driven approach, because it expresses "I expect one result", as opposed to "do whatever you need to whenever you tell me that you received a command".
Conceptually talking, I prefer the event-driven approach because it fits better with the goal of your architecture.
The async/await pattern in C# 5 isn't designed for your case; it's for when some code starts an asynchronous task and the following lines of code should execute only after that task has produced a result.
Task represents a single asynchronous action, such as receiving a single command. As such, it is not directly suitable for streams of events.
The ultimate library for streams of events is Reactive Extensions (Rx), but it unfortunately has a rather steep learning curve.
A newer option is the lesser-known TPL Dataflow, which allows building up async-friendly dataflow meshes. In fact, I'm writing an async-friendly TCP/IP socket wrapper, and I'm exposing ISourceBlock<ArraySegment<byte>> as the reading stream. The end-user can then either receive from that block directly (ReceiveAsync), or they can just "link" it into a dataflow of their own (e.g., that can do message framing, parsing, and even handling).
Dataflow is slightly less efficient than Rx, but I think the lower learning curve is worth it.
I would not recommend a bare event - you either end up with a free-threaded event (think about how you would handle socket closure - could an event happen after disposal?) or the Event-based Asynchronous Pattern (which has its own similar problems syncing to the provided SynchronizationContext). Both Rx and Dataflow provide better solutions for synchronization and disposal/unsubscription.
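To make the Dataflow option a bit more concrete, here's a rough sketch of how a consumer could use such a block. The ISourceBlock<ArraySegment<byte>> comes from the description above; the surrounding method names are just for illustration, and a real consumer would also handle completion of the block:
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow package

class DataflowConsumerSketch
{
    // Option 1: pull messages one at a time, awaiting each.
    static async Task ConsumeDirectlyAsync(ISourceBlock<ArraySegment<byte>> incoming)
    {
        while (true)
        {
            ArraySegment<byte> segment = await incoming.ReceiveAsync();
            Console.WriteLine("Received {0} bytes", segment.Count);
        }
    }

    // Option 2: link the source into your own dataflow mesh;
    // the ActionBlock could instead do framing/parsing before handling.
    static void ConsumeViaLink(ISourceBlock<ArraySegment<byte>> incoming)
    {
        var handler = new ActionBlock<ArraySegment<byte>>(segment =>
            Console.WriteLine("Received {0} bytes", segment.Count));
        incoming.LinkTo(handler);
    }
}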
Since you are making a library, events seem better suited.
Events allow you to build the library without enforcing that a call back must be specified.
Consumers of your library decide what they are interested in and listen to those events.
Async tasks, on the other hand, are meant for cases where you know there will be delays (I/O, network, etc.). Async tasks allow you to free resources while these delays take place, resulting in better utilization of resources.
Async tasks are not a replacement for events that you raise.
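Putting the two together, the library can still use async internally while exposing an event externally. A rough sketch, reusing the names from the question (ReceiveCmdFromServerAsync is a hypothetical placeholder for whatever low-level read/parse routine the library already has):
using System;
using System.Threading.Tasks;

public class Cmd { }

public class ReceivedCmdEventArgs : EventArgs
{
    public ReceivedCmdEventArgs(Cmd cmd) { Cmd = cmd; }
    public Cmd Cmd { get; private set; }
}

public class ClientLibrary
{
    // Consumers subscribe only if they care; the library never forces a callback to be supplied.
    public event EventHandler<ReceivedCmdEventArgs> ReceivedCmd;

    // Internal loop the library starts after connecting; it awaits I/O and raises the event per command.
    private async Task ReceiveLoopAsync()
    {
        while (true)
        {
            Cmd cmd = await ReceiveCmdFromServerAsync();
            var handler = ReceivedCmd;
            if (handler != null)
                handler(this, new ReceivedCmdEventArgs(cmd));
        }
    }

    private Task<Cmd> ReceiveCmdFromServerAsync()
    {
        // placeholder: read from the socket and parse a Cmd here
        throw new NotImplementedException();
    }
}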
I thought C# was an event-driven programming language.
This type of thing seems rather messy and inefficient to me:
this.tcpListener.Start();
while (true)
{
    TcpClient client = this.tcpListener.AcceptTcpClient();
    Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientCommunication));
    clientThread.Start(client);
}
I also have to do the same kind of thing when waiting for new messages to arrive. Obviously these functions are enclosed within a thread.
Is there not a better way to do this that doesn't involve infinite loops and wasted CPU cycles? Events? Notifications? Something? If not, is it bad practice to do a Thread.Sleep so it isn't processing as nearly as often?
There is absolutely nothing wrong with the method you posted. There are also no wasted CPU cycles like you mentioned. TcpListener.AcceptTcpClient() blocks the thread until a client connects, which means it does not take up any CPU cycles. So the only time the loop actually loops is when a client connects.
Of course you may want to use something other than while(true) if you want a way to exit the loop and stop listening for connections, but that's another topic. In short, this is a good way to accept connections and I don't see any purpose of having a Thread.Sleep anywhere in here.
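For example, here's a sketch of one way to make the loop stoppable (the class name and port are made up, and error handling is kept minimal):
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Listener
{
    private readonly TcpListener tcpListener = new TcpListener(IPAddress.Any, 9000); // example port
    private volatile bool running;

    public void Run()
    {
        tcpListener.Start();
        running = true;
        while (running)
        {
            TcpClient client;
            try
            {
                // Still blocks without burning CPU; Stop() below makes this throw so the loop can exit.
                client = tcpListener.AcceptTcpClient();
            }
            catch (SocketException)
            {
                break; // listener was stopped
            }
            var clientThread = new Thread(HandleClientCommunication);
            clientThread.Start(client);
        }
    }

    public void Shutdown()
    {
        running = false;
        tcpListener.Stop(); // unblocks AcceptTcpClient with a SocketException
    }

    private void HandleClientCommunication(object state)
    {
        var client = (TcpClient)state;
        // read/write client.GetStream() here
    }
}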
There are actually three ways to handle IO operations for sockets. The first one is to use the blocking functions just as you do. They are usually used to handle a client socket, since the client expects an answer directly most of the time (and therefore can use blocking reads).
For any other socket handlers I would recommend to use one of the two asynchronous (non-blocking) models.
The first model is the easiest one to use. And it's recognized by the Begin/End method names and the IAsyncResult return value from the Begin method. You pass a callback (function pointer) to the Begin method which will be invoked when something has happened. As an example take a look at BeginReceive.
The second asynchronous model is more like the Windows IO model (IO completion ports). It's also the newest model in .NET and should give you the best performance. A SocketAsyncEventArgs object is used to control the behavior (like which method to invoke when an operation completes). You also need to be aware that an operation can complete directly, in which case the callback method will not be invoked. Read more about ReceiveAsync.
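A minimal sketch of that second model, including the "completed directly" case just mentioned (buffer size and class name are arbitrary):
using System;
using System.Net.Sockets;

class SaeaReceiver
{
    private readonly Socket socket;
    private readonly SocketAsyncEventArgs args = new SocketAsyncEventArgs();

    public SaeaReceiver(Socket connectedSocket)
    {
        socket = connectedSocket;
        args.SetBuffer(new byte[8192], 0, 8192);
        args.Completed += OnCompleted; // fires only when the call did NOT complete synchronously
    }

    public void Start()
    {
        // ReceiveAsync returns false when the operation completed synchronously;
        // in that case the Completed event will not fire, so handle the result here.
        if (!socket.ReceiveAsync(args))
            ProcessReceive(args);
    }

    private void OnCompleted(object sender, SocketAsyncEventArgs e)
    {
        ProcessReceive(e);
    }

    private void ProcessReceive(SocketAsyncEventArgs e)
    {
        if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
        {
            // handle e.BytesTransferred bytes from e.Buffer here
            Start(); // post the next receive
        }
    }
}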
Can an instance of System.Net.Sockets.Socket be shared by two threads, so that one uses the Send() method and the other its Receive() method?
Is it safe?
Well, I need it to be not only thread-safe, but also for the Send/Receive methods to be non-synchronized, so that each thread can call them concurrently.
Do I have another way of doing it ?
Thanks for helping; I am experienced in Java but am having a hard time with this one.
It should be safe, yes. The Socket class is quoted by MSDN to be fully thread-safe.
I don't know if it's a good idea however. You might be making it difficult for yourself by using two threads. You probably want to look at BeginSend and BeginReceive for asynchronous versions, in which case you shouldn't need multiple threads.
A little off topic, but using the sync methods is only useful if you have a limited number of clients. I found that async sockets are slower to respond, but async sockets are much better at handling many clients.
So:
sync is way faster.
async is more scalable
Yes, it is perfectly safe to call Send and Receive from two different threads at the same time.
If you want your application to scale to hundreds of active sockets then you'll want to use the BeginReceive/BeginSend methods as opposed to creating threads manually. This will do magic behind the scenes so that you don't spawn hundreds of threads to process the sockets. What exactly it does is platform dependent. On Windows you'll use the 'high performance' IO completion ports. Under Linux (Mono) you'll use epoll, I believe. Either way, you'll end up using a lot fewer threads than active sockets, which is always a good thing :)
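As a rough sketch of that pattern, each socket keeps one pending BeginReceive and the callback posts the next one, so you never need a dedicated thread per socket (buffer size and class name are arbitrary):
using System;
using System.Net.Sockets;

class ScalableReader
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[4096];

    public ScalableReader(Socket connectedSocket)
    {
        socket = connectedSocket;
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    // Runs on an IO/ThreadPool thread when data arrives.
    private void OnReceive(IAsyncResult ar)
    {
        int count = socket.EndReceive(ar);
        if (count == 0) return; // remote side closed the connection
        // process 'count' bytes from 'buffer' here
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }
}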
I'm writing a server in C# which creates a (long, possibly even infinite) IEnumerable<Result> in response to a client request, and then streams those results back to the client.
Can I set it up so that, if the client is reading slowly (or possibly not at all for a couple of seconds at a time), the server won't need a thread stalled waiting for buffer space to clear up so that it can pull the next couple of Results, serialize them, and stuff them onto the network?
Is this how NetworkStream.BeginWrite works? The documentation is unclear (to me) about when the callback method will be called. Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen? Does it happen when some sort of lower-level buffer in the sockets API underflows? Does it happen when the data has been actually written to the network? Does it happen when it has been acknowledged?
I'm confused, so there's a possibility that this whole question is off-base. If so, could you turn me around and point me in the right direction to solve what I'd expect is a fairly common problem?
I'll answer the third part of your question in a bit more detail.
The MSDN documentation states that:
When your application calls BeginWrite, the system uses a separate thread to execute the specified callback method, and blocks on EndWrite until the NetworkStream sends the number of bytes requested or throws an exception.
As far as my understanding goes, whether or not the callback method is called immediately after calling BeginSend depends upon the underlying implementation and platform. For example, if IO completion ports are available on Windows, it won't be. A thread from the thread pool will block before calling it.
In fact, the NetworkStream's BeginWrite method simply calls the underlying socket's BeginSend method on my .Net implementation. Mine uses the underlying WSASend Winsock function with completion ports, where available. This makes it far more efficient than simply creating your own thread for each send/write operation, even if you were to use a thread pool.
The Socket.BeginSend method then calls the OverlappedAsyncResult.CheckAsyncCallOverlappedResult method if the result of WSASend was IOPending, which in turn calls the native RegisterWaitForSingleObject Win32 function. This will cause one of the threads in the thread pool to block until the WSASend method signals that it has completed, after which the callback method is called.
The Socket.EndSend method, called by NetworkStream.EndWrite, will wait for the send operation to complete. The reason it has to do this is that, if IO completion ports are not available, the callback method will be called straight away.
I must stress again that these details are specific to my implementation of .Net and my platform, but that should hopefully give you some insight.
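For completeness, typical usage looks something like this sketch (error handling trimmed down to the essentials):
using System;
using System.IO;
using System.Net.Sockets;

static class BeginWriteSketch
{
    // Queue a write without blocking the caller; EndWrite completes (or surfaces errors) in the callback.
    static void WriteAsync(NetworkStream stream, byte[] data)
    {
        stream.BeginWrite(data, 0, data.Length, ar =>
        {
            try
            {
                stream.EndWrite(ar); // the send has been handed off by this point
            }
            catch (IOException ex)
            {
                Console.WriteLine("Write failed: " + ex.Message);
            }
        }, null);
    }
}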
First, the only way your main thread can keep executing while other work is being done is through the use of another thread. A thread can't do two things at once.
However, I think what you're trying to avoid is messing with the Thread object, and yes, that is possible through the use of BeginWrite. As for your questions:
The documentation is unclear (to me) about when the callback method will be called.
The call is made after the network driver reads the data into its buffers.
Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen?
Nope; it only waits until the data is in the buffers handled by the network driver.
Does it happen when some sort of lower-level buffer in the sockets API underflows?
If by underflow you mean it has room for the data, then yes.
Does it happen when the data has been actually written to the network?
No.
Does it happen when it has been acknowledged?
No.
EDIT
Personally I would try using a Thread. There's a lot of stuff that BeginWrite is doing behind the scenes that you should probably recognize... plus I'm weird and I like controlling my threads.
I'm just trying to make some socket programming, using non-blocking sockets in c#.
The various samples that I've found, such as this one, seem to use a while(true) loop, but this approach causes the CPU to spike to 100%.
Is there a way to use non-blocking sockets with an event-driven programming style?
Thanks
See the MSDN example here. The example shows how to receive data asynchronously. You can also use the Socket BeginSend/EndSend methods to send data asynchronously.
You should note that the callback delegate executes in the context of a ThreadPool thread. This is important if the data received inside the callback needs to be shared with another thread, e.g., the main UI thread that displays the data in a Windows form. If so, you will need to synchronize access to the data, using the lock keyword, for example.
As you've noticed, with nonblocking sockets and a while loop, the processor is pegged at 100%. The asynchronous model will only invoke the callback delegate when there is data to send or receive.
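As a sketch of that synchronization point, the receive callback (on a ThreadPool thread) and the UI thread share a queue guarded by lock; the class and member names here are just for illustration:
using System;
using System.Collections.Generic;
using System.Net.Sockets;

class ReceiverWithSharedState
{
    private readonly object sync = new object();
    private readonly Queue<byte[]> received = new Queue<byte[]>();
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[4096];

    public ReceiverWithSharedState(Socket connectedSocket)
    {
        socket = connectedSocket;
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    // Runs on a ThreadPool thread.
    private void OnReceive(IAsyncResult ar)
    {
        int count = socket.EndReceive(ar);
        var message = new byte[count];
        Array.Copy(buffer, message, count);
        lock (sync) // guard the queue shared with the UI thread
        {
            received.Enqueue(message);
        }
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    // Called from the UI thread (e.g. on a timer) to display pending data.
    public byte[] TryDequeue()
    {
        lock (sync)
        {
            return received.Count > 0 ? received.Dequeue() : null;
        }
    }
}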
Talking generally about blocking/non-blocking IO, in a way that applies beyond sockets:
The key thing is that in real life your program does other things whilst not doing IO. The examples are all contrived in this way.
In blocking IO, your thread 'blocks' while waiting for IO. The OS goes and does other things, e.g. allows other threads to run. So your application can do many things (conceptually) in parallel by using many threads.
In non-blocking IO, your thread queries to see if IO is possible, and otherwise goes and does something else. So you do many things in parallel by explicitly - at an application level - swapping between them.
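A minimal sketch of that "query, then do something else" style using Socket.Poll (the names are illustrative; socket is assumed to be connected):
using System.Net.Sockets;

static class NonBlockingRead
{
    // Ask whether a Receive would block; if it would, the caller goes off and does other work.
    static bool TryRead(Socket socket, byte[] buffer, out int bytesRead)
    {
        // Poll with a zero timeout returns true if data is readable right now.
        // (Poll returning true while Available == 0 would mean the peer closed the connection.)
        if (socket.Poll(0, SelectMode.SelectRead) && socket.Available > 0)
        {
            bytesRead = socket.Receive(buffer);
            return true;
        }
        bytesRead = 0;
        return false; // nothing to read yet
    }
}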
To avoid pegging the CPU in a tight while loop, call Thread.Sleep(100) (or less) when no data has been received. That gives other threads and processes a chance to do their work.