I have a WebSocket server that uses the System.IO.Stream class to communicate (one Stream per connection). The server needs to send and receive (C# .NET 2.0), and the Stream object is created from the TcpClient generated when I accept a connection.
The desired setup is: Stream.Read() on one thread handles all the incoming messages, in a loop where Stream.Read() is expected to block as messages come in.
On another thread, I need to occasionally send messages back to the client using Stream.Write().
My question is: would there ever be a race condition? Is it possible that firing off a Stream.Write() while Stream.Read() is waiting/reading could muddle up the incoming read data, or is Stream smart enough to lock the resources for me? Is there any case where having these two threads sitting on Read() and Write() at the same time could break things?
After some more research, it turns out it's a NetworkStream object, which does indeed handle simultaneous reads and writes without a race condition:
https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream(v=vs.110).aspx
Read and write operations can be performed simultaneously on an instance of the NetworkStream class without the need for synchronization. As long as there is one unique thread for the write operations and one unique thread for the read operations, there will be no cross-interference between read and write threads and no synchronization is required.
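To illustrate the pattern that quote permits, here is a minimal sketch: one dedicated reader thread and one writer thread sharing a single NetworkStream. It assumes a connected TcpClient named client; HandleIncoming is a hypothetical message handler. (Requires System.Net.Sockets, System.Threading and System.Text.)

NetworkStream stream = client.GetStream();

// Dedicated reader thread: Read() blocks until data arrives, returns 0 on close.
Thread reader = new Thread(delegate()
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        HandleIncoming(buffer, read); // hypothetical handler
    }
});
reader.IsBackground = true;
reader.Start();

// Meanwhile, a single writer thread may call Write() at any time:
byte[] message = Encoding.UTF8.GetBytes("hello");
stream.Write(message, 0, message.Length);

Note that the guarantee is one reader thread and one writer thread; two concurrent writers would still need a lock.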
Related
I'm creating a network protocol based on TCP, and I am using Berkeley sockets via C#.
Will the socket buffer get mixed up when two threads try to send data via the Socket.Send method at the same time?
Should I use a lock so that only one thread accesses the socket at a time?
According to MSDN, the Socket class is thread safe. So that means you don't need to lock the Send method, and you can safely send and receive on different threads. But be aware that the order between threads is not guaranteed: the data won't be mixed as long as you send each message in a single call to the Send method rather than in chunked calls.
In addition to that, I would recommend locking (and flushing) in any case where you don't want the server interleaving responses across multiple concurrent requests. But that doesn't seem to be this case.
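A minimal sketch of that advice, serializing whole messages with a lock so that chunked writes from two threads can't interleave (the framing and the names sendLock/SendMessage are assumptions, not part of the original answer):

private readonly object sendLock = new object();

// Each logical message goes out inside one guarded region, so even if the
// payload has to be sent in several pieces, no other thread can interleave.
void SendMessage(Socket socket, byte[] payload)
{
    lock (sendLock)
    {
        int sent = 0;
        while (sent < payload.Length) // Send may accept fewer bytes than offered
        {
            sent += socket.Send(payload, sent, payload.Length - sent, SocketFlags.None);
        }
    }
}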
I'm playing with SocketAsyncEventArgs and IO Completion Ports.
I've been looking but I can't seem to find how .NET handles race conditions.
Need clarification on this Stack Overflow answer:
https://stackoverflow.com/a/28690948/855421
As a side note, don't forget that your request might have completed synchronously. Perhaps you're reading from a TCP stream in a while loop, 512 bytes at a time. If the socket buffer has enough data in it, multiple ReadAsyncs can return immediately without doing any thread switching at all. [emphasis mine]
For the sake of simplicity, let's assume one client and one server. The server is using an IOCP. If the client is a fast writer but the server is a slow reader, does IOCP mean the kernel/underlying process can signal multiple threads?
1. The socket reads 512 bytes; the kernel signals an IOCP thread.
2. The server processes the new bytes.
3. The socket receives another X bytes while the server is still processing the previous buffer.
Does the kernel spin up another thread? SocketAsyncEventArgs has a Buffer property, which by definition "Gets the data buffer to use with an asynchronous socket method." So the buffer should not change over the lifetime of the SocketAsyncEventArgs, if I understand that correctly.
What's preventing SocketAsyncEventArgs.Buffer from getting corrupted by IOCP thread 2?
Or does the .NET framework synchronize IOCP threads? If so, what's the point of spinning up a new thread if IOCP thread 1 is blocked on the previous read?
I've been looking but I can't seem to find how .NET handles race conditions.
For the most part, it doesn't. It's up to you to do that. But, it's not clear from your question that you really have a race condition problem.
You are asking about this text, in the other answer:
If the socket buffer has enough data in it, multiple ReadAsyncs can return immediately without doing any thread switching at all
First, to be clear: the method's name is ReceiveAsync(), not ReadAsync(). Other classes, like StreamReader and NetworkStream have ReadAsync() methods, and these methods have very little to do with what your question is about. Now, that clarified…
That quote is about the opposite of a race condition. The author of that text is warning you that, should you happen to call ReceiveAsync() on a socket that already has data ready to be read, the data will be read synchronously and the SocketAsyncEventArgs.Completed event will not be raised later. It will be the responsibility of the thread that called ReceiveAsync() to also process the data that was read.
All of this would happen in a single thread. There wouldn't be any race condition in that scenario.
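That single-threaded path is visible in code: ReceiveAsync() returns false when the operation completed synchronously, in which case Completed will not be raised and the caller must process the result itself. A sketch of the usual pattern (method names other than ReceiveAsync are illustrative):

void StartReceive(Socket socket, SocketAsyncEventArgs args)
{
    // ReceiveAsync returns false if the operation completed synchronously;
    // Completed will NOT fire, so handle the data on this thread.
    if (!socket.ReceiveAsync(args))
    {
        ProcessReceive(socket, args); // same logic the Completed handler runs
    }
}

void OnCompleted(object sender, SocketAsyncEventArgs args)
{
    ProcessReceive((Socket)sender, args);
}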
Now, let's consider your "fast writer, slow reader" scenario. The worst that can happen there is that the first read, which could take place in any thread, does not complete immediately, but by the time the Completed event is raised, the writer has overrun the reader's pace. In this case, since part of handling the Completed event is likely to be calling ReceiveAsync() again, which now will return synchronously, an IOCP thread pool thread will get tied up looping on the calls to ReceiveAsync(). No new thread is needed, because the current IOCP thread is doing all the work synchronously. But it does prevent that thread from handling other IOCP events.
All that will mean though, is that if you have some other socket the server is handling and which also needs to call ReceiveAsync(), the framework will have to ensure there's another thread in the IOCP thread pool available to handle that I/O. But, that's a completely different socket and you would necessarily be using a completely different buffer for that socket anyway.
Again, no race condition.
Now, all that said, if you want to get really confused, it is possible to use asynchronous I/O in the .NET Socket API (whether with BeginReceive() or ReceiveAsync() or even wrapping the socket in a NetworkStream and using ReadAsync()) in such a way that you do have a race condition for a particular socket.
I hesitate to even mention it, because there's no evidence in your question this pertains to you at all, nor that you're even really interested in having this level of detail. Adding this explanation could just confuse things. But, for the sake of completeness…
It is possible to have issued more than one read operation on a socket at any given time. This would be somewhat akin to double- or triple-buffered video display (if you're familiar with that concept). The idea being that you might still be handling a read operation while new data comes in, and it would be more performant to have a new read operation already in progress to handle that data before you're done handling the current read operation.
This sounds great, but in practice because of the way Windows schedules threads, and in particular does not guarantee a particular ordering of thread scheduling, if you try to implement your code that way, you create the possibility that your code will see read operations completed out of order. That is, if you for example call ReceiveAsync() twice in a row (with two different SocketAsyncEventArgs objects and two different buffers, of course), your Completed event handler might get called with the second buffer first.
This isn't because the read operations themselves complete out of order. They don't. Hence the emphasis on "your" above. The problem is that while the IOCP threads handling the IO completions become runnable in the correct order (because the buffers are filled in the order you provided them by calling ReceiveAsync() multiple times), the second IOCP thread to become runnable could wind up being the first thread to actually be scheduled to run by Windows.
This is not hard to deal with. You just have to make sure that you track the buffer sequence as you issue the read operations, so that you can reassemble the buffers in the correct order later. All of the async options available provide a mechanism for you to include additional user state data (e.g. SocketAsyncEventArgs.UserToken), so you can use this to track the order of your buffers.
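A sketch of that bookkeeping, stamping each operation with a sequence number in UserToken; the reassembly stage is left abstract, and all names besides the API calls are assumptions:

int nextSequence = 0; // assumes reads are issued from a single thread

void IssueRead(Socket socket, SocketAsyncEventArgs args)
{
    // Stamp each operation with its issue order before posting it.
    args.UserToken = nextSequence++;
    if (!socket.ReceiveAsync(args))
        OnCompleted(socket, args); // completed synchronously
}

void OnCompleted(object sender, SocketAsyncEventArgs args)
{
    int sequence = (int)args.UserToken;
    // Completions may be *observed* out of order; hand the buffer to a
    // reassembly stage that releases buffers strictly by sequence number.
    reassembler.Add(sequence, args.Buffer, args.BytesTransferred); // hypothetical
}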
Again, this is not common. For most scenarios, a completely orderly implementation, where you only issue a new read operation after you're completely done with the current read operation, is completely sufficient. If you're worried at all about getting a multi-buffer read implementation correct, just don't bother. Stick with the simple approach.
When talking sockets programming in C# what does the term blocking mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take, say, 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (send/ receive) does not return ('blocks') until the underlying socket operation has completed.
For read that means until some data has been received or the socket has been closed.
For a write, it means that all the data has been accepted into the socket's send buffer (not necessarily that it has already reached the peer).
For dealing with multiple clients, start a new thread for each client, or give the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it must be one socket per client anyway.
While a blocking call is in progress, you can't use the socket for anything else on the currently executing thread.
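A sketch of the thread-per-client approach (the port number and HandleClient are placeholders):

TcpListener listener = new TcpListener(IPAddress.Any, 9000);
listener.Start();
while (true)
{
    // AcceptTcpClient blocks until a client connects; each client then
    // gets its own thread, so a slow request never blocks the next caller.
    TcpClient client = listener.AcceptTcpClient();
    Thread worker = new Thread(delegate() { HandleClient(client); }); // hypothetical handler
    worker.IsBackground = true;
    worker.Start();
}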
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until some data has been read or the ReadTimeout has expired (note that Read may return fewer than 10 bytes; it returns as soon as at least one byte is available).
There are Async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The Async variants save you from having to dedicate a thread (which would be blocked) to each client, which increases your scalability.
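A sketch of the APM variant of the same read, assuming the same stream and buffer as above:

byte[] buf = new byte[10];
stream.BeginRead(buf, 0, buf.Length, delegate(IAsyncResult ar)
{
    // Runs on a thread-pool thread once data is available; the calling
    // thread was never blocked.
    int bytesRead = stream.EndRead(ar);
    // ... process bytesRead bytes from buf ...
}, null);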
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays within a function or block of code (such as ReadFile() in C++) until it returns, and does not move on to the code following that block.
This can happen in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can simply use asynchronous methods, for example BeginReceive() and EndReceive() in the sockets context, which will not block your calls. This is called the asynchronous programming method.
Outside of sockets, you can call BeginInvoke() and EndInvoke() either on a delegate or on a control, depending on which asynchronous mechanism you follow to achieve this.
You can use the function Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple sockets for readability or writability. The advantage is that this is simple: it can be done from a single thread, and you can specify how long you want to wait, either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e. you can keep them blocking).
It also works for listening sockets: it will report readability when there is a connection to accept. From experimenting, I can say that it also reports readability on graceful disconnects.
It's probably not as fast as asynchronous sockets, and it's not ideal for detecting errors. I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
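A sketch of that single-threaded pattern (the socket variables and handlers are assumptions):

List<Socket> checkRead = new List<Socket>();
checkRead.Add(listenerSocket);  // readable when a connection is pending
checkRead.Add(clientSocket);    // readable when data (or a graceful close) arrives

// Blocks for up to one second; pass -1 to wait indefinitely.
Socket.Select(checkRead, null, null, 1000000);

// After the call, checkRead contains only the sockets that are ready.
foreach (Socket s in checkRead)
{
    if (s == listenerSocket)
        HandleAccept(s.Accept());  // hypothetical
    else
        HandleRead(s);             // hypothetical; Receive() returning 0 means graceful close
}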
You should use one socket per thread. Blocking (synchronous) sockets wait for a response before returning. Non-blocking (asynchronous) sockets can peek to see whether any data has been received, and return if no data is there yet.
We are writing a TCP server and client program. How much space is there in the TcpClient buffer? That is, at what point will it begin to throw away data? We are trying to determine whether the TcpClient can be blocking, or whether it should go into its own background thread (so that the buffer cannot get full).
You can get the buffer sizes from TcpClient.ReceiveBufferSize and TcpClient.SendBufferSize.
The available buffer space will vary as data is received and acknowledged (or not) at the TCP level. TcpClient is blocking by default.
No data will be thrown away as a result of full buffers, because TCP's flow control pushes back on the sender; data could be thrown away only under error conditions (such as the peer disappearing/crashing/exiting, etc.).
The MSDN documentation says the default size of the send and receive buffers for TcpClient is 8192 bytes, or 8K. The documentation does not specify a limit as to how big these buffers can be.
As I'm sure you're aware, you send and receive data via the TcpClient using its underlying NetworkStream object. You are in control over whether these are synchronous or asynchronous operations. If you want synchronous behavior, use the Read and Write methods of NetworkStream. If you want asynchronous behavior, use the BeginRead/EndRead and BeginWrite/EndWrite operations.
If you are receiving data as part of some front-end application, I would highly recommend doing this in a secondary thread, whether you do this using the asynchronous methods or synchronously in a separate thread. This will allow your UI to be responsive to the user while still handling the sending and receiving of data in the background.
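A sketch combining both points, inspecting the buffer sizes and then receiving on a secondary thread (the endpoint and OnDataReceived are placeholders):

TcpClient client = new TcpClient("example.com", 9000); // placeholder endpoint
Console.WriteLine("Receive buffer: " + client.ReceiveBufferSize); // 8192 by default
Console.WriteLine("Send buffer: " + client.SendBufferSize);

Thread receiver = new Thread(delegate()
{
    NetworkStream stream = client.GetStream();
    byte[] buffer = new byte[client.ReceiveBufferSize];
    int read;
    // Read blocks on this background thread, so the UI thread stays responsive.
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        OnDataReceived(buffer, read); // hypothetical hand-off to the UI layer
    }
});
receiver.IsBackground = true;
receiver.Start();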
At first look, when using asynchronous methods at the socket level, there shouldn't be problems with sending and receiving data from the associated network stream, or 'directly' on the socket. But, as you probably already know, there are.
The problem is particularly pronounced on the Windows Mobile platform and the Compact Framework.
I use the asynchronous methods: BeginReceive, plus the callback function which ends the pending asynchronous read of the received data (EndReceive) from the async result.
As I need to constantly receive data from the socket, there is a loop which waits for the data.
The problems begin when I want to send data. For that purpose, before sending some data through the socket, I'm "forcing" the end of the asynchronous read with EndReceive. There is a great delay on that (sometimes you just can't wait for this timeout). The timeout is too long, and I need to send the data immediately. How? I don't want to close the socket and reconnect.
I use the synchronous Send method for sending data (although the results are the same with the async methods BeginSend/EndSend). When sending is finished, I start receiving data again.
Resources that I know about:
stackoverflow.com...properly-handling-network-timeouts-on-windows-ce - comment about timeouts,
developerfusion.com...socket-programming-in-c-part-2/ - solution for simple client/server using asynchronous methods for receiving and synchronous method Send for sending data.
P.S.: I tried to send the data without ending the asynchronous receive, but then I got a SocketException: "A blocking operation is currently executing" (ErrorCode: 10036).
Thanks in advance! I hope that I'm not alone with this 'one little problem'. :)
Have you considered using Poll (or the static Select method, for multiple sockets) instead of BeginReceive to check whether there is data to read? In my opinion, this is what is causing you the trouble.
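A sketch of that suggestion: a receive loop driven by Poll, so no asynchronous read is ever left pending and a Send can be issued at any point (the timeout, the running flag and the handler names are assumptions):

byte[] buffer = new byte[4096];
while (running)
{
    // Poll returns true if data is readable (or the connection closed)
    // within the 100 ms timeout. No BeginReceive is pending, so sending
    // elsewhere in the loop can't collide with an outstanding read.
    if (socket.Poll(100 * 1000, SelectMode.SelectRead))
    {
        int read = socket.Receive(buffer);
        if (read == 0) break;       // graceful close
        HandleData(buffer, read);   // hypothetical
    }

    // Safe to send here, outside any pending receive.
    SendPendingOutgoing(socket);    // hypothetical
}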