WSAEWOULDBLOCK handling - c#

I have written a server socket in C++/CLI that uses Winsock. The sockets use async methods for sending, receiving and accepting connections. After deploying my socket in the production environment, the send function stops working and gives me the error WSAEWOULDBLOCK. From my research, this means the socket's send buffer is full or the network is too busy to perform my operation at the moment. However, I have not seen any specific solution which addresses this problem. My temporary solution was to create a do-while loop around the WSASend function, making the thread sleep for X milliseconds and then try again. This resulted in far higher latency than the previous socket (the .NET Socket class) and large lag spikes.
My code for sending data is as following:
void Connectivity::ConnectionInformation::SendData(unsigned char data[], const int length)
{
    if (isClosed || sendError)
        return;

    Monitor::Enter(this->syncRoot);
    try
    {
        sendInfo->buf = (char*)data;
        sendInfo->len = length;
        do
        {
            state = 0;
            if (WSASend(connection, sendInfo, 1, bytesSent, 0, NULL, NULL) == SOCKET_ERROR)
            {
                state = WSAGetLastError();
                if (state == WSAEWOULDBLOCK)
                {
                    Thread::Sleep(SleepTime);
                    //Means the network is busy and we need to wait a bit for data to be sent
                    //Might want to decrease the value since this could potentially lead to lag
                }
                else if (state != WSA_IO_PENDING)
                {
                    this->sendError = true;
                    //The send error flag makes sure that the close function doesn't get called
                    //during packet processing, which could cause a lot of null reference exceptions.
                }
            }
        }
        while (state == WSAEWOULDBLOCK);
    }
    finally
    {
        Monitor::Exit(this->syncRoot);
    }
}
Is there a way to use, for example, the WSAEventSelect function to get a callback when I am able to send data again? From the MSDN documentation, the wait-for-data approach could also run into this error. Does anyone have a solution for getting around this?

The error code WSAEWOULDBLOCK means that you attempted to operate on a non-blocking socket but the operation could not be completed immediately. This is not a real error - it means that you can retry later or schedule an asynchronous IO (which wouldn't fail). But this is not what you want in the first place. Let me explain:
You are supposed to use sockets in one of two ways:
Synchronous, blocking.
Asynchronous, non-blocking, callback-based.
You are mixing the two, which gets you the worst of both: you created a non-blocking socket and are using it in a potentially blocking way.
Alas, I'm not fully qualified to give best practices for native-code sockets. I suggest you read all of the docs for WSASend because they seem to explain all of this.
Now, why would this strange error code even exist? It is a performance optimization: you can speculatively try to send synchronously (which is very fast), and only if that fails are you supposed to schedule an asynchronous IO. If you don't need that optimization (and you don't), don't do it.

As @usr says, I need to have either the LPWSAOVERLAPPED or the LPWSAOVERLAPPED_COMPLETION_ROUTINE parameter set in order to make the operation non-blocking. However, after testing, I found out that I need to pass an LPWSAOVERLAPPED object in order for the completion routine to be called. The MSDN documentation for WSASend also mentions that if both the overlapped object and the completion routine are NULL, the socket behaves as a blocking socket.
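For anyone tackling the same thing from the managed side (the .NET Socket class mentioned above), the "call me back when the send completes" pattern is SendAsync with a SocketAsyncEventArgs. This is only a rough sketch of that pattern, not my production code; the connected socket and the payload are assumed to already exist:

using System;
using System.Net.Sockets;

static class AsyncSendSketch
{
    // Queues a send on an already-connected Socket and gets a callback on completion,
    // so there is no need to poll or sleep on would-block style conditions.
    public static void SendWithCallback(Socket connection, byte[] payload)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(payload, 0, payload.Length);
        args.Completed += OnSendCompleted;

        // SendAsync returns false when the operation completed synchronously,
        // in which case the Completed event is NOT raised and we handle it inline.
        if (!connection.SendAsync(args))
            OnSendCompleted(connection, args);
    }

    private static void OnSendCompleted(object sender, SocketAsyncEventArgs args)
    {
        if (args.SocketError != SocketError.Success)
        {
            // Handle the failed send here (close the connection, set an error flag, ...).
        }
        args.Dispose();
    }
}

Because the operation is overlapped, a full send buffer simply leaves it pending until there is room, rather than failing with a would-block error.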
Thanks, and merry xmas everyone! :)

Related

No synchronized non-blocking read method in basic Stream/StreamReader class

Recently I've been trying some secured networking over System.Net.Sockets using the BouncyCastle library.
The TlsStream class in BouncyCastle inherits from the plain Stream (not NetworkStream), and StreamReader/StreamWriter seem to be a convenient way to read/write.
I tend to use one thread per end (server or client) to handle both read and write:
void CommunicationLoop() // Loops in Thread A
{
    while (true)
    {
        ReadFromStream();  // If data available. It always hangs/blocks here (if there's no data to be read).
        WriteToStream();   // If user input something.
    }
}

void ReadFromStream()
{
    String line;
    while (StreamReader.Peek() > -1)
    // Or ((line = StreamReader.ReadLine()) != null) / (Stream.Read(buff, 0, buff.Length) > 0)
    // or any synchronized Readxxx() methods.
    // It always hangs/blocks here (if there's no data to be read).
    {
        line = StreamReader.ReadLine();
        Console.WriteLine($"Received: {line}");
    }
}

void WriteToStream()
{
    //...
}
I did a lot of research, and everyone suggests using async methods to solve the problem.
I would like to know: is there really no official method/function to check whether there is data to be read in StreamReader/Stream, so that I can skip reading when there is no data (instead of hanging there waiting for input), like NetworkStream.DataAvailable?
Also, if the communication per connection is not heavy, isn't using one thread to deal with both read and write on the server side (there might be multiple connections from MANY CLIENTS to ONE SERVER) more efficient (saves resources)?
Thanks.
I would like to know: is there really no official method/function to check whether there is data to be read in StreamReader/Stream?
Check the documentation for StreamReader. As far as I can see, there is no way to check for waiting data without using the async methods.
Also, if the communication per connection is not heavy, isn't using one thread to deal with both read and write on the server side (there might be multiple connections from MANY CLIENTS to ONE SERVER) more efficient (saves resources)?
This should not be more efficient than using the async methods. Consider the case where all clients are idle: your approach would use one thread per client, while the async methods would use no threads at all (assuming they use non-blocking IO under the hood). It is possible the sync methods have slightly lower overhead since they can do the synchronization in the kernel rather than in .NET, but I think this would need benchmarking to verify.
Is there some specific reason you do not want to use the async methods?
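If it helps, here is a minimal sketch of what the async route could look like. It assumes a connected Stream (for example the BouncyCastle TlsStream) and is only an illustration; I haven't run it against BouncyCastle:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

class AsyncReadSketch
{
    // Reads lines as they arrive without dedicating a blocked thread to the connection.
    public static async Task ReadLoopAsync(Stream stream)
    {
        using (var reader = new StreamReader(stream, Encoding.UTF8, false, 1024, leaveOpen: true))
        {
            string line;
            // ReadLineAsync completes when a line arrives, or returns null when the stream closes.
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine($"Received: {line}");
            }
        }
    }
}

Writing can then happen from any other place or task without sharing this loop, since the read no longer holds a thread hostage.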

What is the best way to cancel a socket receive request?

In C# you have 3 ways to try and receive TCP data on a Socket:
Socket.Receive is a synchronous, blocking method. It doesn't return until it succeeds, barring failure or [optionally] timeout.
Socket.BeginReceive is asynchronous; a supplied callback/delegate is called when there is data to receive, using the now-antiquated Begin/End pattern.
Socket.ReceiveAsync begins an asynchronous request to receive data.
However, my understanding is that none of these actually let you cancel the receive operation. The docs suggest EndReceive is for completing a read, not something one could call to terminate the request.
You often see code like
while(socket.Available==0 && !cancel)Sleep(50); if(!cancel)socket.Receive(...);
But that's pretty terrible.
If I want to sit waiting for data but at some point cancel the receive (say the user hits the "stop" button), how can this be done neatly so I don't get an unexpected callback triggered later on?
I had wondered about closing the socket, which would cause the Receive operation to fail, but it seems somewhat ugly. Am I thinking along the right lines or do the various API methods listed above allow a direct cancellation so I don't risk dangling async operations?
There is no known way to cancel it (AFAIK).
One thing you can do is set Socket.Blocking = false. The receive will then return immediately when there is no data, so it will not hang.
You should check the Socket.Blocking property.
I advise you to use the BeginReceive(IList<ArraySegment<Byte>>, SocketFlags, SocketError, AsyncCallback, Object) overload to prevent it from throwing exceptions.
Check the SocketError for "WouldBlock", meaning "there is no data", so you can try again.
I haven't tested it, but:
A nice idea is to use the non-async Receive to receive 0 bytes (use a static byte[] emptyBuffer = new byte[0]); if the SocketError comes back as 'WouldBlock', you can wait a short delay and retry. When it doesn't return a SocketError, there is probably data, so you can start the async version.
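A rough sketch of that zero-byte probe idea (untested, as noted above); the helper name and the polling delay are just illustrative choices:

using System;
using System.Net.Sockets;
using System.Threading;

static class ReceiveProbeSketch
{
    static readonly byte[] emptyBuffer = new byte[0];

    // Polls with a zero-byte receive until data appears or cancellation is requested.
    // Returns true if data is likely available, false if the wait was cancelled or failed.
    public static bool WaitForData(Socket socket, Func<bool> cancelled, int pollDelayMs = 50)
    {
        socket.Blocking = false;
        while (!cancelled())
        {
            socket.Receive(emptyBuffer, 0, 0, SocketFlags.None, out SocketError error);
            if (error == SocketError.Success)
                return true;            // Data is probably waiting; start the real receive now.
            if (error != SocketError.WouldBlock)
                return false;           // A real error occurred; let the caller handle it.
            Thread.Sleep(pollDelayMs);  // WouldBlock: no data yet, wait a bit and try again.
        }
        return false;
    }
}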
What you could do is get a NetworkStream from the socket being read and use its ReadTimeout property, for example:
// Get stream from socket:
using NetworkStream stream = new NetworkStream(socket);
// Set timeout:
stream.ReadTimeout = 10 * 1000; // 10 sec
var buffer = new List<byte>();
try
{
    do
    {
        buffer.Add((byte)stream.ReadByte());
    }
    while (stream.DataAvailable);
}
catch (IOException)
{
    // Timeout
}
return buffer.ToArray();

Transactional receive with timeout

I have a method that reads a list of messages from a message queue. The signature is:
IList<TMsg> Read<TMsg>(MessageQueue queue, int timeout, MessageQueueTransaction tx);
The basic functionality is that it will read as many messages as it can from the queue, within the timeout, using the given transaction. The problem I'm having is deciding how best to enforce the timeout. I have two working versions at the moment. These are:
Using BeginPeek with a timeout. If it succeeds, the message is removed with a transactional Receive call. The timeout for BeginPeek is recalculated after each call, based on the time the Read began, and the current time.
Using Receive with the timeout value, and catching the exception when the timeout expires.
The problem with the first approach is that it requires the queue to be read in DenySharedReceive mode; otherwise you can't guarantee the message will still be there between the Peek and the Receive. The problem with the second method is that an exception needs to be thrown and handled (albeit internally and transparently), which is probably not a great design, since each call will always end in an exception; that goes against the idea of throwing exceptions only in exceptional circumstances.
Does anyone have any other suggestions how I might achieve this, or comments on these two techniques and my concerns?
You could use the Reactive Extensions to create a producer of messages, use Observable.Buffer to manage the timeout, and then subscribe to this producer:
public IEnumerable<Message> GetMessage()
{
    // do the peek and receive a single message
    yield return message;
}

// and then something like
var producer = GetMessage().ToObservable();

// this is where your timeout goes
var bufferedMessages = producer.Buffer(TimeSpan.FromSeconds(3));

var disp = bufferedMessages.Subscribe(messages =>
{
    Console.WriteLine("You've got {0} new messages", messages.Count());
    foreach (var message in messages)
        Console.WriteLine("> {0}", message); // process messages here
});

disp.Dispose(); // when you no longer want to subscribe to the messages
For more reactive examples look here
After a bit of investigation, hatchet's comment is the closest to the 'answer', at least as far as .NET is concerned. The wrapped native methods provide a return value (rather than error value) for 'TIMEOUT', but this is considered an exception by .NET and re-wrapping the native code is just not worth it. I tried. :p
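For completeness, a minimal sketch of the Receive-with-timeout approach (the second option in the question), assuming System.Messaging and simplifying the signature to return raw Message objects rather than TMsg:

using System;
using System.Collections.Generic;
using System.Messaging;

static class TransactionalReadSketch
{
    // Reads as many messages as arrive before the deadline, all inside the given transaction.
    public static IList<Message> Read(MessageQueue queue, TimeSpan timeout, MessageQueueTransaction tx)
    {
        var messages = new List<Message>();
        DateTime deadline = DateTime.UtcNow + timeout;
        try
        {
            while (true)
            {
                TimeSpan remaining = deadline - DateTime.UtcNow;
                if (remaining <= TimeSpan.Zero)
                    break;
                // Throws MessageQueueException with IOTimeout when no message arrives in time.
                messages.Add(queue.Receive(remaining, tx));
            }
        }
        catch (MessageQueueException ex)
            when (ex.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
        {
            // Timeout reached: return whatever we managed to read.
        }
        return messages;
    }
}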

C# "using" SerialPort transmit with data loss

I'm new to this forum, and I have a question that has been bothering me for a while.
My setup is a serial-enabled character display connected to my PC through a USB/UART converter. I'm transmitting bytes to the display via the SerialPort class from a separate write-buffer thread, in a C++-ish style:
private void transmitThread(){
    while(threadAlive){
        if(q.Count > 0){ // Queue not empty
            byte[] b = q.Dequeue();
            s.Write(b,0,b.Length);
            System.Threading.Thread.Sleep(100);
        }
        else{ // Queue empty
            System.Threading.Thread.Sleep(10);
        }
    }
}
Assuming the serial port is already open, this works perfectly and transmits all the data to the display. There is, however, no exception handling at all in this snippet. Therefore I was looking into using a typical C# feature, the 'using' statement, and only opening the port when needed, like so:
private void transmitThread(){
    while(threadAlive){
        if(q.Count > 0){ // Queue not empty
            byte[] b = q.Dequeue();
            using(s){ // using the serialPort
                s.Open();
                s.Write(b,0,b.Length);
                s.Close();
            }
            System.Threading.Thread.Sleep(100);
        }
        else{ // Queue empty
            System.Threading.Thread.Sleep(10);
        }
    }
}
The problem with this function is that it only transmits a random amount of the data, typically about one third of an 80-byte array. I have tried different thread priority settings, but nothing changes.
Am I missing something important, or do I simply close the port too fast after a transmit request?
I hope you can help me. Thanks :)
No, that was a Really Bad Idea. The things that go wrong, roughly in the order you'll encounter them:
the serial port driver discards any bytes left in the transmit buffer that were not yet transmitted when you close the port. Which is what you are seeing now.
the MSDN article for SerialPort.Close() warns that you must "wait a while" before opening the port again. There's an internal worker thread that needs to shut down. The amount of time you have to wait is not specified and is variable, depending on machine load.
closing a port allows another program to grab it and open it. Serial ports cannot be shared, so your program will fail when you try to open it again.
Serial ports were simply not designed to be opened and closed on the fly. Open the port once at the start of your program and close it when the program ends. Not calling Close() at all is quite acceptable and avoids a deadlock scenario.
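A minimal sketch of that advice applied to the transmit loop from the question: open the port once for the program's lifetime and wrap only the Write in the exception handling you wanted. The port name, baud rate, and class shape are assumptions, not the asker's actual code:

using System;
using System.Collections.Generic;
using System.IO.Ports;
using System.Threading;

class DisplayTransmitter
{
    private readonly SerialPort s = new SerialPort("COM3", 9600); // port/baud are assumptions
    private readonly Queue<byte[]> q = new Queue<byte[]>();       // as in the question, not synchronized
    private volatile bool threadAlive = true;

    public void Start()
    {
        s.Open(); // open once, for the lifetime of the program
        new Thread(TransmitThread) { IsBackground = true }.Start();
    }

    private void TransmitThread()
    {
        while (threadAlive)
        {
            if (q.Count > 0) // Queue not empty
            {
                byte[] b = q.Dequeue();
                try
                {
                    s.Write(b, 0, b.Length);
                }
                catch (Exception ex) when (ex is InvalidOperationException || ex is TimeoutException)
                {
                    // Handle a closed port or a write timeout here, without closing/reopening the port.
                }
                Thread.Sleep(100);
            }
            else // Queue empty
            {
                Thread.Sleep(10);
            }
        }
    }
}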
I think you're missing the point of the using block. A typical using block will look like this:
using (var resource = new SomeResource())
{
    resource.DoSomething();
}
The opening happens at the very beginning, typically as part of the constructor, but sometimes on the first line of the using block.
But the big red flag I see is the explicit close: with a using block the closing happens automatically, so you don't need the .Close() call.
If the successful operation of your serial device depends on the calls to Thread.Sleep, then perhaps the thread is being interrupted at some point, enough to make the data transmission fall out of sync with the device. There would most likely be ways to solve this, but the first thing I would do is try to use the .NET SerialPort class directly. The Write method is very similar to what you want to do, and there are C++ code examples in those articles.

Stack overflow when using the System.Net.Sockets.Socket.AcceptAsync model

With respect to C# and .NET's System.Net.Sockets.Socket.AcceptAsync method, one is required to handle a return value of "false" in order to process the immediately available SocketAsyncEventArgs state from a synchronously completed connection. Microsoft provides examples (found on the System.Net.Sockets.SocketAsyncEventArgs class page) which will cause a stack overflow if there are a large number of pending connections, which can be exploited on any system that implements their handling model.
One idea for getting around this is to loop on the handler method while Socket.AcceptAsync returns false, and to break out of the loop (to allow deferred processing) when it returns true, indicating that the operation completed asynchronously. However, this can still lead to a stack overflow, because the callback associated with the SocketAsyncEventArgs passed to Socket.AcceptAsync ends with another call to Socket.AcceptAsync, which in turn loops over immediately available, synchronously accepted connections.
As you can see, this is a pretty solid problem, and I've yet to find a good solution that does not involve System.Threading.ThreadPool and creating tons of other methods and scheduling processing. As far as I can see, the asynchronous socket model relating to Socket.AcceptAsync requires more than what is demonstrated in the examples on MSDN.
Does anyone have a clean and efficient solution to handling immediately pending connections that are accepted synchronously from Socket.AcceptAsync without going into creating separate threads to handle the connections and without utilizing recursion?
I wouldn't use AcceptAsync, but rather BeginAccept/EndAccept, and implement the common async pattern correctly, that is, checking CompletedSynchronously to avoid invoking the callback on the callback thread for operations which completed synchronously.
See also AsyncCallBack CompletedSynchronously
Edit regarding the requirement to use AcceptAsync:
The MSDN documentation explicitly says that the callback will NOT be invoked for operations which completed synchronously. This is different from the common async pattern, where the callback is always invoked.
Returns true if the I/O operation is pending. The SocketAsyncEventArgs.Completed event on the e parameter will be raised upon completion of the operation. Returns false if the I/O operation completed synchronously. The SocketAsyncEventArgs.Completed event on the e parameter will not be raised and the e object passed as a parameter may be examined immediately after the method call returns to retrieve the result of the operation.
I currently don't see why a loop would not solve the stack overflow issue. Maybe you can be more specific about the code that causes the problem?
Edit 2: I'm thinking of code like this (only in regard to AcceptAsync, the rest was just to get a working app to try it out with):
static void Main(string[] args) {
    Socket listenSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    listenSocket.Bind(new IPEndPoint(IPAddress.Loopback, 4444));
    listenSocket.Listen(100);

    SocketAsyncEventArgs e = new SocketAsyncEventArgs();
    e.Completed += AcceptCallback;

    if (!listenSocket.AcceptAsync(e)) {
        AcceptCallback(listenSocket, e);
    }

    Console.ReadKey(true);
}

private static void AcceptCallback(object sender, SocketAsyncEventArgs e) {
    Socket listenSocket = (Socket)sender;
    do {
        try {
            Socket newSocket = e.AcceptSocket;
            Debug.Assert(newSocket != null);

            // do your magic here with the new socket
            newSocket.Send(Encoding.ASCII.GetBytes("Hello socket!"));
            newSocket.Disconnect(false);
            newSocket.Close();
        } catch {
            // handle any exceptions here;
        } finally {
            e.AcceptSocket = null; // to enable reuse
        }
    } while (!listenSocket.AcceptAsync(e));
}
I have resolved this problem by simply changing the placement of the loop. Instead of recursively calling the accept handler from within itself, wrapping the code in a do-while loop with the condition being "!Socket.AcceptAsync(args)" prevents a stack overflow.
The reasoning behind this is that you utilize the callback thread for processing the connections which are immediately available, before bothering to asynchronously wait for other connections to come across. It's re-using a pooled thread, effectively.
I appreciate the responses, but for some reason none of them clicked with me or really resolved the issue. However, something in them seems to have triggered the idea. It avoids manually working with the ThreadPool class and doesn't use recursion.
Of course, if someone has a better solution or even an alternative, I'd be happy to hear it.
I haven't looked carefully, but it smells like this might be helpful (see the section called "stack dive"):
http://blogs.msdn.com/b/mjm/archive/2005/05/04/414793.aspx
newSocket.Send(Encoding.ASCII.GetBytes("Hello socket!"));
newSocket.Disconnect(false);
newSocket.Close();
The problem with the snippet above is that it will block your next accept operation.
A better way is something like this:
while (true)
{
    if (e.SocketError == SocketError.Success)
    {
        // ReadEventArg object user token
        SocketAsyncEventArgs readEventArgs = m_readWritePool.Pop();
        Socket socket = ((AsyncUserToken)readEventArgs.UserToken).Socket = e.AcceptSocket;
        if (!socket.ReceiveAsync(readEventArgs))
            ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessReceiveEx), readEventArgs);
    }
    else
    {
        HandleBadAccept(e);
    }

    e.AcceptSocket = null;
    m_maxNumberAcceptedClients.WaitOne();

    if (listenSocket.AcceptAsync(e))
        break;
}
The SocketTaskExtensions class contains useful extension methods for the Socket class. Rather than using the AsyncCallback pattern, the AcceptAsync extension method can simply be awaited; it follows the task-based asynchronous programming (TAP) model.
There are two basic operations to consider:
Start listening: As usual, the socket needs to Bind to a specific IP address and port. Then place the socket in the listening state (Listen method). After that it is ready to handle incoming communication.
Stop listening: It stops accepting incoming requests.
bool _isListening = false;

public Task<bool> StartListening()
{
    Socket listeningSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
    listeningSocket.Bind(new IPEndPoint(IPAddress.Any, 0));
    listeningSocket.Listen(10);
    return HandleRequests(listeningSocket);
}

public void StopListening()
{
    _isListening = false;
}
In order to handle incoming requests, the listening socket accepts (AcceptAsync) the incoming client connection, and we then Send or Receive messages on the accepted socket. It keeps accepting incoming connections until StopListening is called.
internal async Task<bool> HandleRequests(Socket listeningSocket)
{
    try
    {
        _isListening = true;
        while (_isListening)
        {
            byte[] message = Encoding.UTF8.GetBytes("Message");
            byte[] receivedMessage = new byte[1024];

            using (Socket acceptedSocket = await listeningSocket.AcceptAsync())
            {
                // Send messages
                acceptedSocket.Send(message);

                // Receive messages
                acceptedSocket.Receive(receivedMessage);
            }
        }
    }
    catch (SocketException)
    {
        // Handle error during communication.
        return false;
    }
    return true;
}
Note:
Messages could exceed the buffer size. In that case, keep receiving until the end of the data. Stephen Cleary's message framing blog post is a good starting point.
Sending and receiving could also be asynchronous. A NetworkStream can be created from the accepted socket, and then we can await the ReadAsync and WriteAsync operations.
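A minimal sketch of that asynchronous variant, reusing the accepted socket from the loop above; the fixed buffer size and the lack of message framing are simplifications:

using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static class AsyncIoSketch
{
    // Sends a greeting and reads one chunk of the reply asynchronously.
    public static async Task<int> ExchangeAsync(Socket acceptedSocket)
    {
        byte[] message = Encoding.UTF8.GetBytes("Message");
        byte[] receivedMessage = new byte[1024];

        using (var stream = new NetworkStream(acceptedSocket, ownsSocket: false))
        {
            // Send asynchronously.
            await stream.WriteAsync(message, 0, message.Length);

            // Receive asynchronously; returns the number of bytes actually read.
            return await stream.ReadAsync(receivedMessage, 0, receivedMessage.Length);
        }
    }
}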
