Socket buffers the data it receives - C#

I have a client .NET application and a server .NET application, connected through sockets.
The client sends a string of 20 or so characters every 500 milliseconds.
On my local development machine this works perfectly, but once the client and the server are on two different servers, the server is not receiving the string immediately when it's sent. The client still sends perfectly; I've confirmed this with Wireshark. I have also confirmed that the server does receive the strings every 500 milliseconds.
The problem is that my server application that is waiting for the message only actually receives the message every 20 seconds or so - and then it receives all the content from those 20 seconds.
I use asynchronous sockets and for some reason the callback is just not invoked more than once every 20 seconds.
In AcceptCallback it establishes the connection and calls BeginReceive:
handler.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(ReadCallback), state);
This works on my local machine, but on my production server the ReadCallback doesn't happen immediately.
The BufferSize is set to 1024. I also tried setting it to 10. It makes a difference in how much data it will read from the socket at one time once the ReadCallback is invoked, but that's not really the problem here. Once it invokes ReadCallback, the rest works fine.
I'm using Microsoft's Asynchronous Server Socket Example, so you can see there what my ReadCallback method looks like.
How can I get the BeginReceive callback immediately when data arrives at the server?
--
UPDATE
This has been solved. It was because the server had a single processor and a single core. After adding another core, the problem was instantly solved. ReadCallback is now called immediately when the call goes through to the server.
Thank you all for your suggestions!

One approach might be to adjust the SO_SNDBUF option on the send side. Since you are not running into this problem when both server and client are on the same box, it is possible that a small buffer is throttling the send side due to a (possibly) slower sending rate between the servers. If the sender cannot send fast enough, the send-side buffer might fill up sooner.
Update: we did some debugging, and it turns out that the issue is with the application being slower.
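For reference, a minimal sketch of what enlarging the send buffer looks like on a System.Net.Sockets.Socket; the 64 KB value is illustrative, not a tuned recommendation:

    using System.Net.Sockets;

    var sender = new Socket(AddressFamily.InterNetwork,
                            SocketType.Stream, ProtocolType.Tcp);

    // SendBufferSize is the managed wrapper around SO_SNDBUF.
    sender.SendBufferSize = 64 * 1024;

    // Equivalent low-level form:
    sender.SetSocketOption(SocketOptionLevel.Socket,
                           SocketOptionName.SendBuffer, 64 * 1024);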

It might be that the Nagle algorithm is waiting on the sender side for more packets. If you are sending small chunks of data, they will be merged into one packet so you don't pay a huge TCP header overhead for small amounts of data.
You can disable it using: StreamSocketControl.NoDelay
See: http://msdn.microsoft.com/en-us/library/windows/apps/windows.networking.sockets.streamsocketcontrol.nodelay
The Nagle algorithm might be disabled for loopback and this is a possible explanation of why it works when you have both the sender and the receiver on the same machine.
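The question uses the classic System.Net.Sockets API rather than StreamSocket, so the equivalent there would be the NoDelay property. A minimal sketch, with 'handler' being the accepted socket from the question:

    // Disable Nagle on the accepted socket before calling BeginReceive.
    handler.NoDelay = true;

    // Equivalent low-level form:
    handler.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);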

At the request of the OP, I'm duplicating my comment/answer here.
My guess was, the problem appeared because of thread scheduling on a single-core machine. This is an old problem, almost extinct in the modern age of hyper-threading/multi-core processors. When a thread is spawned in the course of execution of the program, it needs scheduled time to run.
On a single-core machine, if one thread continues to execute without explicitly passing control to the OS scheduler (by waiting on a mutex/signal or by calling Sleep), the execution of any other thread (in the same process and with lower priority) may be postponed indefinitely by the scheduler. Hence, in the case described, the asynchronous network thread was (most likely) simply starved of execution time, getting only occasional slices of it.
Adding a second CPU/core obviously fixed that by providing a parallel scheduling environment.
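To illustrate the failure mode (not the OP's actual code): a compute-bound loop that never blocks can keep thread-pool callbacks such as ReadCallback off a single core indefinitely, while an occasional yield gives the scheduler a chance to run them. DoUnitOfWork and the 'working' flag are hypothetical placeholders.

    // Illustrative only: busy work that never blocks can monopolize a single core.
    while (working)
    {
        DoUnitOfWork();   // hypothetical compute-bound step

        // Yielding (or any blocking wait) lets other runnable threads,
        // including async socket callbacks, get scheduled.
        Thread.Yield();   // or Thread.Sleep(0)
    }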

Related

Use increased priority for socket communication in .NET?

I have a C# application that uses System.Net.Sockets.Socket to communicate with some devices on a local network. Each time a message is sent from the application to a device, an acknowledgement from the device is expected, usually within 200 milliseconds; if the acknowledgement is not received within a given timeout period, an exception is thrown.
There is one socket per device, and reception is done via the socket's ReceiveAsync method.
Some users report seeing the acknowledgement timeout exception even though I have increased the timeout period to one second. My worry is that the users may be running another application that is CPU-intensive and thus interfering with the reception of packets from the devices.
Should I consider raising the priority of my application, or does .NET already assign an increased priority to socket events, or is the system's time slice for each thread short enough that I do not need to worry about this?
Thank you.
Network performance can take a "burp" and suffer a hit at any time, regardless of process priority.
With that in mind, 1 second is a really short timeout interval. You didn't say whether this is TCP or UDP. For TCP, I'd wait longer - much longer. For UDP, add retry logic.
Users don't want to see exceptions. They want to see the application working.
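For the UDP case, the retry logic could be as simple as the sketch below. SendRequest, TryReceiveAck and ReportDeviceUnreachable are hypothetical stand-ins for the application's own send, blocking-receive-with-timeout and error-reporting calls.

    const int MaxAttempts = 3;
    bool acked = false;

    for (int attempt = 1; attempt <= MaxAttempts && !acked; attempt++)
    {
        SendRequest(socket, message);                    // hypothetical helper
        acked = TryReceiveAck(socket, timeoutMs: 1000);  // waits up to 1 s
    }

    if (!acked)
        ReportDeviceUnreachable();  // surface a status instead of an unhandled exception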

WCF NetNamedPipesBinding replying slowly under heavy load

I have a WCF server using NetNamedPipesBinding.
I can see that when the server is loaded with requests, the reply is very slow (1-7 seconds).
The application code runs very fast, but the time between the service sending the reply and the caller receiving it is long.
Is this because there are lots of messages at the pipe and they are processed sequentially? Is there a way to improve that?
There are only 2 processes involved (caller and service) and the calls are two-way; the caller process uses different threads to call.
Thanks.
If you are creating a separate Thread for each request, you could be starving your system. Since both client and server are on the same machine, it may be the client's fault the server is slow.
There are lots of ways to do multithreading in .NET, and a new Thread may be the worst. At a minimum, you should move your calls to the thread pool (http://msdn.microsoft.com/en-us/library/3dasc8as.aspx)
or you may want to use the async methods of the proxy (http://msdn.microsoft.com/en-us/library/ms730059.aspx).
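A minimal sketch of the thread-pool suggestion; 'proxy' and 'request' are placeholders for the caller's own objects:

    // Instead of spawning a dedicated thread per call:
    //   new Thread(() => proxy.DoWork(request)).Start();
    // queue the call on the thread pool, which reuses a bounded set of threads:
    ThreadPool.QueueUserWorkItem(_ => proxy.DoWork(request));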

Communication WCF constant connection

When using WCF for 2 computers to communicate over the network, I am executing a method on the remote server. The time the operation can take is not known; it can range from 1 second to a day or more. So I want to set the ((IClientChannel)pipeProxy).OperationTimeout property to a high value, but is this the way to go, or is it a dirty way of programming, since a connection is active the whole time? (It is all on a relatively stable LAN network.)
I wouldn't do it like that. Such a long timeout is likely to cause issues.
I would split the operation into two: One call from client to server which starts the operation, and then a callback from the server to the client to say that it's finished. The callback would of course include any result information (success, failure etc).
For something which takes such a long time, you might also want to introduce a "keep alive" mechanism where the client periodically calls the server to check that it is still responding.
If you have a very long timeout, it makes it hard to know if something has actually gone wrong. And once you split the operation into two, the client has no outstanding call at all, so it is impossible to know that something has gone wrong unless you poll occasionally with a keep-alive (or more accurately, "are you alive?") style message.
Alternatively, you could have the server call back occasionally with a progress message, but that's a bit harder to manage than having the client polling the server occasionally (because the client would have to track the last time the server called it back to determine if the server had stopped responding).
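A hedged sketch of what the split contract could look like; all names here are illustrative, not from the question:

    using System;
    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IWorkCallback))]
    public interface IWorkService
    {
        [OperationContract(IsOneWay = true)]
        void StartWork(Guid jobId);   // returns immediately; work continues server-side

        [OperationContract]
        bool IsAlive();               // the client polls this as the "are you alive?" check
    }

    public interface IWorkCallback
    {
        [OperationContract(IsOneWay = true)]
        void WorkCompleted(Guid jobId, bool success, string message);
    }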

Total upload speed is slower when all connections are accepted by one TcpListener

I've recently encountered a strange situation in C# .NET Framework 4.0:
In a simple program, I create a TcpListener, specify its local port, start it and use an async accept function to receive incoming connection requests.
Once there are pending inbound connections, the server accepts the TcpClient from the async callback function and records it in a container (to be more specific, a List<TcpClient>).
And I wrote another simple client program which just connects to the server once it starts and then calls an async receive function.
After all clients are connected, the server starts a group of parallel tasks using System.Threading.Tasks.Parallel.ForEach().
In each task, I use the TcpClient stored in that list to send data to the corresponding client. All TcpClients are sending data at the same time (I checked the client side and they are all receiving data). The data is just a byte[8192] filled with random data generated when the server program starts. I make the server send it repeatedly.
The client's receive callback is simple. Once data arrives, the client just ignores the data and starts another async receive.
The test environment is a 1Gbps LAN, one server and several clients.
The result: no matter how many clients (from 3 to 8) are connected to the server, the server's total upload speed never exceeds 13 MByte/s.
Then I tried another way:
I create a TcpListener on the client side as well. Once the client connects to the server, the server connects back to the client's listening port and stores this outgoing connection in the list instead of the incoming one.
This time, the test result changed a lot: with 3 clients receiving data from the server, the total upload speed of the server is nearly 30 MByte/s; with 5 clients, the total upload speed goes up to nearly 50 MByte/s.
Though this 10 MByte/s-per-client limit may be due to hardware or network configuration, it is still far better than the case above.
Anyone know why?
I don't know the cause of this behavior, but as a workaround I suggest sending much bigger buffers, like 1 MB (or at least 64 KB). On a 1 Gbps LAN your app is likely to be more efficient if it sends bigger chunks (and fewer packets). Also, enable jumbo frames.
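A minimal sketch of the bigger-chunk idea, assuming the send loop has a TcpClient from the List<TcpClient>; SendCallback is a placeholder for the usual EndSend handler:

    Socket socket = client.Client;         // underlying socket of the TcpClient
    socket.SendBufferSize = 1024 * 1024;   // optionally enlarge SO_SNDBUF as well

    byte[] chunk = new byte[1024 * 1024];  // 1 MB payload instead of 8 KB
    // ... fill 'chunk' with data ...
    socket.BeginSend(chunk, 0, chunk.Length, SocketFlags.None, SendCallback, socket);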
Don't use threads or Tasks for the processing. It will hurt your performance.
I've made a framework which will help you develop performant networking applications without having to care about the actual IO processing.
http://blog.gauffin.org/2012/05/griffin-networking-a-somewhat-performant-networking-library-for-net/

Why doesn't the server receive all UDP packets in a local transfer using sockets in C#?

I have a server and a client application where the client sends a bunch of packets to the server. The protocol used is UDP. The client application spawns a new thread to send the packets in a loop. The server application also spawns a new thread to wait for packets in a loop.
Both of these applications need to keep the UI updated with the transfer progress. How to properly keep the UI updated has been solved with this question. Basically, both the server and client applications will raise an event (code below) for each loop iteration and both will keep the UI updated with the progress. Something like this:
private void EVENTHANDLER_UpdateTransferProgress(long transferredBytes)
{
    receivedBytesCount += transferredBytes;
    packetCount++;
}
A timer in each application will keep the UI updated with the latest info from receivedBytesCount and packetCount.
The client application has no problems at all, everything seems to be working as expected and the UI is updated properly every time a packet is sent. The server is the problematic one...
When the transfer is complete, receivedBytesCount and packetCount will match neither the total size in bytes sent nor the number of packets the client sent. Each packet is 512 bytes in size, by the way. The server application counts a packet as received right after the call to Socket.ReceiveFrom() returns, and it seems that for some reason it is not receiving all the packets it should.
I know that I'm using UDP, which doesn't guarantee the packets will actually arrive at the destination, and no retransmission will be performed, so there might be some packet loss. But my question is: since I'm actually testing this locally, with both the server and client on the same machine, why exactly is this happening?
If I put a Thread.Sleep(1) (which seems to translate to a 15 ms pause) in the client sending loop, the server will receive all the packets. Since I'm doing this locally, the client is sending packets so fast (without the Sleep() call) that the server can't keep up. Is this the problem, or does it lie somewhere else?
'If I put a Thread.Sleep(1) (which seems to translate to a 15 ms pause) in the client sending loop, the server will receive all the packets'
The socket buffers are getting full and the stack is discarding messages. UDP has no flow-control and so, if you try send a huge number of datagrams in a tight loop, some will be discarded.
Use your sleep() loop (ugh!), implement some form of flow-control on top of UDP, implement some form of non-network flow-control (e.g. using async calls, buffer pools and inter-thread comms), or use a different protocol with flow-control built in.
If you shovel stuff at the network stack faster than it can digest it, you should not be surprised if it throws up occasionally.
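One low-effort mitigation, sketched below, is to enlarge the server's receive buffer so short bursts are less likely to overflow it. This is not flow-control; a fast enough sender will still outrun the receiver. The 4 MB value is illustrative.

    // SO_RCVBUF via the managed wrapper; set this before the receive loop starts.
    serverSocket.ReceiveBufferSize = 4 * 1024 * 1024;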
