When talking sockets programming in C# what does the term blocking mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take say 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (send/receive) does not return ('blocks') until the underlying socket operation has completed.
For read that means until some data has been received or the socket has been closed.
For write it means that all data in the buffer has been sent out.
To deal with multiple clients, start a new thread for each client, or hand the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it has to be one socket per client anyway.
This means you can't use the socket for anything else on the currently executing thread while the call is blocking.
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
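To make the thread-per-client (or thread-pool) suggestion above concrete, here is a minimal sketch; the port number and the HandleClient method are placeholders, not part of the original question:

using System.Net;
using System.Net.Sockets;
using System.Threading;

class BlockingServerSketch
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 5000);
        listener.Start();

        while (true)
        {
            // Blocks here until a client connects; no CPU is burned while waiting.
            TcpClient client = listener.AcceptTcpClient();

            // Hand the connection to a thread-pool worker so the accept loop
            // is immediately free to serve the next caller.
            ThreadPool.QueueUserWorkItem(HandleClient, client);
        }
    }

    static void HandleClient(object state)
    {
        using (TcpClient client = (TcpClient)state)
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[4096];
            int read = stream.Read(buffer, 0, buffer.Length); // blocking read
            // ... several seconds of processing can happen here without affecting other clients ...
            stream.Write(buffer, 0, read);                    // placeholder reply: echo the request
        }
    }
}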
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read up to 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until at least one byte has been received or the ReadTimeout expires; note that Read may return fewer than the 10 bytes requested.
There are Async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The Async variants save you from having to dedicate a thread (which would be blocked) to each client, which improves your scalability.
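As a rough sketch of the APM variant (assuming stream is an open NetworkStream obtained elsewhere):

byte[] buf = new byte[10];

stream.BeginRead(buf, 0, buf.Length, delegate(IAsyncResult ar)
{
    // Runs on a thread-pool thread once data has arrived; the thread that
    // called BeginRead was never blocked.
    int bytesRead = stream.EndRead(ar);
    // ... process bytesRead bytes from buf ...
}, null);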
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays inside a function or block of code (such as ReadFile() in C++) until that call returns, and does not move on to the code that follows it.
This can happen in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can simply use asynchronous methods, for example BeginInvoke() and EndInvoke() in the sockets context, which will not block your calls. This is the asynchronous programming model.
You can call BeginInvoke() and EndInvoke() either on a delegate or on a control, depending on which asynchronous approach you choose.
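For what it's worth, a minimal sketch of the delegate flavour on the classic .NET Framework (delegate BeginInvoke/EndInvoke is not supported on .NET Core and later); ProcessRequest is a hypothetical stand-in for the slow processing step:

using System;

class DelegateAsyncSketch
{
    // Hypothetical worker standing in for the slow request processing.
    static string ProcessRequest(string request)
    {
        System.Threading.Thread.Sleep(1000); // simulate 1 second of work
        return "reply to " + request;
    }

    static void Main()
    {
        Func<string, string> worker = ProcessRequest;

        // BeginInvoke queues the delegate onto a thread-pool thread and
        // returns immediately, so the caller is not blocked.
        IAsyncResult ar = worker.BeginInvoke("payload", null, null);

        // ... the current thread is free to do other work here ...

        // EndInvoke blocks only if the worker has not finished yet.
        string reply = worker.EndInvoke(ar);
        Console.WriteLine(reply);
    }
}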
You can use the function Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple sockets for readability or writability. The advantage is that this is simple: it can be done from a single thread, and you can specify how long you want to wait, either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e. you keep them blocking).
It also works for listening sockets: a listening socket is reported as readable when there is a connection waiting to be accepted. From experimenting, I can say that it also reports readability for graceful disconnects.
It's probably not as fast as asynchronous sockets. It's also not ideal for detecting errors; I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
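A minimal sketch of how Select can drive everything from one thread; the listener and clientSockets variables are assumptions for illustration:

// Assumes: 'listener' is a listening Socket and 'clientSockets' is a
// List<Socket> of connected, blocking client sockets (both hypothetical).
void PollOnce(Socket listener, List<Socket> clientSockets)
{
    // Select trims each list down to the sockets that are actually ready.
    List<Socket> checkRead = new List<Socket>(clientSockets);
    checkRead.Add(listener); // a listening socket is "readable" when Accept would succeed

    // Wait up to one second (1,000,000 microseconds); -1 would wait forever.
    Socket.Select(checkRead, null, null, 1000000);

    foreach (Socket s in checkRead)
    {
        if (s == listener)
        {
            clientSockets.Add(listener.Accept()); // won't block: a connection is pending
        }
        else
        {
            byte[] buffer = new byte[4096];
            int read = s.Receive(buffer);         // won't block: Select said it's readable
            if (read == 0)                        // readable but 0 bytes = graceful disconnect
            {
                clientSockets.Remove(s);
                s.Close();
            }
            // else: process 'read' bytes from 'buffer'
        }
    }
}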
You should use one socket per thread. Blocking (synchronous) sockets wait for a response before returning. Non-blocking (asynchronous) sockets can peek to see whether any data has been received and return if nothing is there yet.
Related
I have a WebSocket Server that uses the System.IO.Stream class to communicate (1 Stream per connection). The server needs to send and receive, (C# .NET 2.0) and the Stream object is created from the generated TcpClient when I accept a connection.
The desired setup is I have Stream.Read on one thread handling all the incoming messages. It's a loop where Stream.Read() is expected to block as messages come in.
On another thread, I need to occasionally send messages back to the client using Stream.Write().
My question is, would there ever be a race condition? Is it possible when I fire off a Stream.Write() while Stream.Read() is waiting/reading that I could muddle up the incoming read data? or is Stream smart enough to lock up the resources for me? Is there any case where having these two sitting on Read() and Write() at the same time could break things?
After some more research, it turns out it's a NetworkStream object, which does indeed handle simultaneous synchronous reads and writes without a race condition:
https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream(v=vs.110).aspx
"Read and write operations can be performed simultaneously on an instance of the NetworkStream class without the need for synchronization. As long as there is one unique thread for the write operations and one unique thread for the read operations, there will be no cross-interference between read and write threads and no synchronization is required."
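A minimal sketch of that one-reader/one-writer arrangement, assuming stream is the NetworkStream obtained from the accepted TcpClient:

// 'stream' is assumed to be the NetworkStream from the accepted TcpClient.
Thread readThread = new Thread(delegate()
{
    byte[] buffer = new byte[4096];
    int read;
    // The dedicated read thread loops on the blocking Read call.
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // ... handle 'read' incoming bytes ...
    }
    // Read returning 0 means the client closed the connection.
});
readThread.Start();

// Any other single thread may write at the same time without extra locking.
byte[] message = System.Text.Encoding.UTF8.GetBytes("hello");
stream.Write(message, 0, message.Length);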
I am implementing a TCP client in my Unity3D game and I am wondering if it's actually safe or not to call the NetworkStream.BeginWrite without waiting until the previous call finishes writing.
From what I understood while reading the documentation, it's safe as long as I am not performing concurrent BeginWrite calls from different threads (and Unity has only one thread for the game main loop).
For reading, I call BeginRead right after making the connection, with an asynchronous callback in which I read the incoming data from TcpClient.GetStream(), put it into a separate MemoryStream under lock(readMemoryStream), and call BeginRead again. Besides that, in my Update() function (on the main game thread) I check readMemoryStream for new data, check for a complete message and unpack it (using the same lock(readMemoryStream), of course), and act on the game objects based on the message from the server.
Will this approach work fine? Won't BeginRead interfere with BeginWrite?
Again, I am using the callback thread to read the data and the main thread to write.
As long as no two threads are calling BeginWrite() concurrently, all is well. The same thread, or even other threads, can call BeginWrite() consecutively before earlier calls have completed.
Do note that the completion callbacks might be executed out of order; if you do implement it this way and the order of the execution of the completion callbacks matters, it is up to you to keep track of which asynchronous operation is which. Of course, for writing to the socket, this often doesn't matter, as you may not have anything to do in the completion callback other than to call EndWrite().
Reading from and writing to a socket are completely independent operations. The socket is full-duplex and can safely handle concurrently pending read and write operations on the same socket.
You didn't ask, but like BeginWrite(), you can also call BeginRead() multiple times without earlier operations completing. And again, as with BeginWrite(), it's up to you to keep track of the correct order of the operations so that when your completion callback is executed for each one, you know which order the received data should be in.
Note that since the order of the completions is critical for read operations (something often not the case for write operations), it is common for all but the largest-scale implementations to never overlap read operations on a given socket. The code is much simpler when for a given socket, only one read operation is in progress at a time.
One last caveat: do note that your buffers are pinned for the duration of the I/O operation. Too many outstanding I/O operations can interfere with the efficient management of the heap, due to fragmentation. This is unlikely to be an issue in a client implementation, but a large-scale server implementation should take this into account (e.g. by allocating large buffers so that they come from the LOH, where things are always pinned anyway).
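To tie this back to the setup described in the question, here is a rough sketch with a single outstanding BeginRead feeding a lock-protected MemoryStream, and writes issued from the main thread; the field names are taken from the question or invented for illustration:

// Hypothetical fields on the client class: 'stream' is the TcpClient's
// NetworkStream, 'readBuffer' a reusable byte[], and 'readMemoryStream'
// the lock-protected MemoryStream drained by the main/game thread.
void StartReading()
{
    stream.BeginRead(readBuffer, 0, readBuffer.Length, OnRead, null);
}

void OnRead(IAsyncResult ar)
{
    int read = stream.EndRead(ar);
    if (read <= 0)
        return; // the server closed the connection

    lock (readMemoryStream)
    {
        readMemoryStream.Write(readBuffer, 0, read);
    }

    // Only one read is ever outstanding: the next BeginRead is issued
    // after the previous one has completed.
    StartReading();
}

// Called from the main thread; it may overlap the pending read safely,
// as long as no two threads call BeginWrite at the same time.
void Send(byte[] payload)
{
    stream.BeginWrite(payload, 0, payload.Length, delegate(IAsyncResult ar) { stream.EndWrite(ar); }, null);
}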
I thought C# was an event-driven programming language.
This type of thing seems rather messy and inefficient to me:
tcpListener.Start();
while (true)
{
    TcpClient client = this.tcpListener.AcceptTcpClient();
    Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientCommunication));
    clientThread.Start(client);
}
I also have to do the same kind of thing when waiting for new messages to arrive. Obviously these functions are enclosed within a thread.
Is there not a better way to do this that doesn't involve infinite loops and wasted CPU cycles? Events? Notifications? Something? If not, is it bad practice to do a Thread.Sleep so it isn't processing nearly as often?
There is absolutely nothing wrong with the method you posted. There are also no wasted CPU cycles like you mentioned. TcpListener.AcceptTcpClient() blocks the thread until a client connects, which means it does not take up any CPU cycles. So the only time the loop actually does any work is when a client connects.
Of course you may want to use something other than while(true) if you want a way to exit the loop and stop listening for connections, but that's another topic. In short, this is a good way to accept connections and I don't see any purpose of having a Thread.Sleep anywhere in here.
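If you do want a clean way out of the loop, one common sketch (the running flag is an assumption, not part of the original code) is to flip a flag and call Stop() on the listener from another thread; on the desktop framework the interrupted Accept then surfaces as a SocketException:

// Field on the listening class; another thread sets it to false and then
// calls tcpListener.Stop() to shut the server down.
volatile bool running = true;

void Listen()
{
    tcpListener.Start();
    try
    {
        while (running)
        {
            // Still blocks without burning CPU; Stop() from another thread
            // makes the blocked Accept throw so the loop can exit.
            TcpClient client = this.tcpListener.AcceptTcpClient();
            Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientCommunication));
            clientThread.Start(client);
        }
    }
    catch (SocketException)
    {
        // AcceptTcpClient was interrupted by Stop(); fall through and clean up.
    }
}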
There are actually three ways to handle IO operations for sockets. The first one is to use the blocking functions, just as you do. They are usually used to handle a client socket, since the client usually expects an answer right away (and can therefore use blocking reads).
For any other socket handling I would recommend using one of the two asynchronous (non-blocking) models.
The first model is the easiest one to use. And it's recognized by the Begin/End method names and the IAsyncResult return value from the Begin method. You pass a callback (function pointer) to the Begin method which will be invoked when something has happened. As an example take a look at BeginReceive.
The second asynchronous model is more like the Windows IO model (IO completion ports). It's also the newest model in .NET and should give you the best performance. A SocketAsyncEventArgs object is used to control the behaviour (for example, which method to invoke when an operation completes). You also need to be aware that an operation can complete synchronously, in which case the callback method will not be invoked. Read more about ReceiveAsync.
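A rough sketch of that event-based model, assuming socket is a connected Socket held by the surrounding class; note how the synchronous-completion case is handled explicitly:

SocketAsyncEventArgs receiveArgs;

void StartReceiving()
{
    receiveArgs = new SocketAsyncEventArgs();
    receiveArgs.SetBuffer(new byte[4096], 0, 4096);
    receiveArgs.Completed += OnReceiveCompleted;
    IssueReceive();
}

void IssueReceive()
{
    // ReceiveAsync returns false when the operation completed synchronously;
    // the Completed event is NOT raised in that case, so handle it inline.
    if (!socket.ReceiveAsync(receiveArgs))
    {
        OnReceiveCompleted(socket, receiveArgs);
    }
}

void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
    {
        // ... process e.BytesTransferred bytes from e.Buffer ...
        IssueReceive(); // queue the next receive
    }
}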
At first glance, when using asynchronous methods at the socket level, there shouldn't be any problems with sending and receiving data through the associated network stream, or 'directly' on the socket. But, as you probably already know, there are.
The problem is particularly highlighted on the Windows Mobile platform and the Compact Framework.
I use the asynchronous methods: BeginReceive, and a callback function in which I end the pending asynchronous read for the received data (EndReceive) from the async result.
As I need to constantly receive data from the socket, there is a loop which waits for the data.
The problems begin when I want to send data. For that purpose, before sending data through the socket, I'm "forcing" the pending asynchronous read to end with EndReceive. There is a great delay in that (sometimes you just can't wait for this timeout). The timeout is too long, and I need to send the data immediately. How? I don't want to close the socket and reconnect.
I use the synchronous Send method for sending data (although the results are the same with the asynchronous BeginSend/EndSend). When sending is finished, I start receiving data again.
Resources that I know about:
stackoverflow.com...properly-handling-network-timeouts-on-windows-ce - comment about timeouts,
developerfusion.com...socket-programming-in-c-part-2/ - solution for simple client/server using asynchronous methods for receiving and synchronous method Send for sending data.
P.S.: I tried to send the data without ending the asynchronous receive, but then I got a SocketException: A blocking operation is currently executing (ErrorCode: 10036).
Thanks in advance! I hope that I'm not alone in this 'one little problem'.. : )
Have you considered using the Poll method (or the static Select method for multiple sockets) instead of BeginReceive to check whether there is data to read? In my opinion, the pending asynchronous receive is what is causing you the trouble.
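A minimal sketch of what I mean, assuming socket is your connected Socket; because nothing is pending, the same thread can call Send whenever it likes:

// 'socket' is assumed to be a connected Socket.
if (socket.Poll(0, SelectMode.SelectRead))       // 0 microseconds = return immediately
{
    byte[] buffer = new byte[4096];
    int read = socket.Receive(buffer);           // ready, so this won't block
    if (read == 0)
    {
        // "Readable" with 0 bytes means the remote side closed the connection.
    }
    // else: process 'read' bytes from 'buffer'
}

// No receive is pending, so the same thread is free to call Send()
// whenever it needs to, without forcing an EndReceive first.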
I'm writing a server in C# which creates a (long, possibly even infinite) IEnumerable<Result> in response to a client request, and then streams those results back to the client.
Can I set it up so that, if the client is reading slowly (or possibly not at all for a couple of seconds at a time), the server won't need a thread stalled waiting for buffer space to clear up so that it can pull the next couple of Results, serialize them, and stuff them onto the network?
Is this how NetworkStream.BeginWrite works? The documentation is unclear (to me) about when the callback method will be called. Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen? Does it happen when some sort of lower-level buffer in the sockets API underflows? Does it happen when the data has been actually written to the network? Does it happen when it has been acknowledged?
I'm confused, so there's a possibility that this whole question is off-base. If so, could you turn me around and point me in the right direction to solve what I'd expect is a fairly common problem?
I'll answer the third part of your question in a bit more detail.
The MSDN documentation states that:
When your application calls BeginWrite, the system uses a separate thread to execute the specified callback method, and blocks on EndWrite until the NetworkStream sends the number of bytes requested or throws an exception.
As far as my understanding goes, whether or not the callback method is called immediately after calling BeginSend depends upon the underlying implementation and platform. For example, if IO completion ports are available on Windows, it won't be. A thread from the thread pool will block before calling it.
In fact, on my .NET implementation, NetworkStream's BeginWrite method simply calls the underlying socket's BeginSend method, which in turn uses the WSASend Winsock function with completion ports where available. This makes it far more efficient than simply creating your own thread for each send/write operation, even if you were to use a thread pool.
The Socket.BeginSend method then calls the OverlappedAsyncResult.CheckAsyncCallOverlappedResult method if the result of WSASend was IOPending, which in turn calls the native RegisterWaitForSingleObject Win32 function. This will cause one of the threads in the thread pool to block until the WSASend method signals that it has completed, after which the callback method is called.
The Socket.EndSend method, called by NetworkStream.EndWrite, will wait for the send operation to complete. The reason it has to do this is that if IO completion ports are not available, the callback method will be called straight away.
I must stress again that these details are specific to my implementation of .Net and my platform, but that should hopefully give you some insight.
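To connect this back to the original streaming question, here is a rough sketch of pumping results through chained BeginWrite calls so no thread is parked waiting for a slow client; stream, results, and Serialize are assumptions for illustration:

// Hypothetical members: 'stream' is the NetworkStream, 'results' the
// (possibly endless) IEnumerable<Result>, and Serialize() a helper that
// turns one Result into a byte[].
IEnumerator<Result> cursor;

void StartStreaming()
{
    cursor = results.GetEnumerator();
    WriteNext();
}

void WriteNext()
{
    if (!cursor.MoveNext())
        return; // sequence finished

    byte[] payload = Serialize(cursor.Current);
    stream.BeginWrite(payload, 0, payload.Length, OnWritten, null);
    // Returns immediately: no thread sits here waiting for a slow client
    // to drain its receive window.
}

void OnWritten(IAsyncResult ar)
{
    stream.EndWrite(ar); // rethrows any send error
    WriteNext();         // only now pull and serialize the next Result
}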
First, the only way your main thread can keep executing while other work is being done is through the use of another thread. A thread can't do two things at once.
However, I think what you're trying to avoid is messing with the Thread object, and yes, that is possible through the use of BeginWrite. As for your questions:
The documentation is unclear (to me) about when the callback method will be called.
The call is made after the network driver reads the data into its buffers.
Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen?
Nope; it's called once the data is in the buffers handled by the network driver.
Does it happen when some sort of lower-level buffer in the sockets API underflows?
If by underflow you mean it has room for the data, then yes.
Does it happen when the data has been actually written to the network?
No.
Does it happen when it has been acknowledged?
No.
EDIT
Personally I would try using a Thread. There's a lot of stuff that BeginWrite is doing behind the scenes that you should probably recognize... plus I'm weird and I like controlling my threads.