.NET Async Server receiving data for no apparent reason - c#

I am totally confused right now.
Edit: Okay, nevermind. The Python socket as well is starting to do it now.
Edit 2: Well, not quite sure if this is causing high CPU usage, but something randomly is. Is there an efficient way to figure out what is causing spikes in the usage? This project is a bit large and has various threads.
I have an asynchronous server that listens and waits for incoming connections, then keeps them alive and waits for the socket to flush and give the server data. It is only closed when the user wants the socket to be closed.
However, whenever I let a socket & stream stay connected, it starts to go haywire and sends empty data in an endless loop... it may take anywhere from 15 seconds to over a minute before it starts acting up. If I let it go for a really long time, it causes really high CPU usage.
Aside from the high CPU usage, oddly enough, everything works as it should; messages are sent & received fine.
This is my read callback function:
protected void ReadCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket handler = state.SocketHandle;
    try
    {
        int bytesRead = (state.BytesRead += handler.EndReceive(ar)), offset = 0;
        string line = m_Encoder.GetString(state.Buffer, 0, bytesRead);
        if (state.Buddy != null)
            Console.WriteLine(state.Buddy.Address);
        if (bytesRead > 0)
        {
            Console.WriteLine("!!!");
            /* A complete request? */
            if (line.EndsWith("\n") || line.EndsWith("\x00"))
            {
                string[] lines = line.Split('\n'); // ... *facepalm*
                foreach (string ln in lines)
                    this.MessageReceieved(ln, state);
                state.Buffer = new byte[StateObject.BUFFER_SIZE];
                state.BytesRead = 0; // reset
            }
            /* Incomplete; resize the array to accommodate more data... */
            else
            {
                offset = bytesRead;
                Array.Resize<byte>(ref state.Buffer, bytesRead + StateObject.BUFFER_SIZE);
            }
        }
        if (handler != null && handler.Connected)
            handler.BeginReceive(state.Buffer, offset, state.Buffer.Length - offset, SocketFlags.None, new AsyncCallback(ReadCallback), state);
    }
    catch (SocketException)
    {
        if (state.Buddy != null)
            state.Buddy.Kill();
        else
            handler.Close();
    }
}
I know this is somehow caused by calling BeginReceive, but I don't know how else to keep the connection alive.

There is nothing in that code that can make it go haywire.
I do see some problems though.
Connection detection
No need to check if the socket is connected. You can detect disconnections in two ways in the receive callback:
EndReceive returns zero bytes.
An exception is thrown.
I would recommend that the first thing you do after EndReceive is to check the return value and handle disconnect accordingly. It makes the code clearer.
Your code will currently do nothing if 0 bytes are received. The handler will just stop receiving and still think that the connection is open.
Buffer handling
Your buffer handling is very inefficient. Do not resize the buffer every time; it will slow your server down a lot. Allocate a large buffer from the start.
String handling
Don't build a string every time you receive something. Check inside the byte buffer for a newline or null instead, THEN build a string, and only make it as large as needed. You might receive more bytes than just one message (for instance, one and a half messages).
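To illustrate all three points, here is a rough sketch of how the callback could look. It assumes the same StateObject members as in your code (Buffer, BytesRead, SocketHandle), your m_Encoder and MessageReceieved, and that a complete message always fits in the fixed-size buffer; your Buddy handling in the catch block is left out for brevity.
protected void ReadCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket handler = state.SocketHandle;
    try
    {
        int bytesRead = handler.EndReceive(ar);

        // 1) Zero bytes means the remote side closed the connection gracefully.
        if (bytesRead == 0)
        {
            handler.Close();
            return;
        }

        state.BytesRead += bytesRead;

        // 2) Look at the raw bytes for the terminator; only build a string
        //    once a complete message has arrived.
        byte last = state.Buffer[state.BytesRead - 1];
        if (last == (byte)'\n' || last == 0)
        {
            string line = m_Encoder.GetString(state.Buffer, 0, state.BytesRead);
            foreach (string ln in line.Split('\n'))
                this.MessageReceieved(ln, state);
            state.BytesRead = 0; // reuse the same buffer instead of reallocating
        }

        // 3) Keep the buffer at a fixed size and receive into the unused tail.
        handler.BeginReceive(state.Buffer, state.BytesRead,
            state.Buffer.Length - state.BytesRead, SocketFlags.None,
            new AsyncCallback(ReadCallback), state);
    }
    catch (SocketException)
    {
        handler.Close();
    }
}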

Related

Socket.EndReceive of IAsyncResult is collecting more TCP messages than one

I have a situation here.
I'm using System.Net.Sockets.Socket to read and send TCP messages.
Using the recursion for receiving the data and reading again new data.
void rcvTCP(IAsyncResult ar)
{
    var socket = (Socket)ar.AsyncState;
    try
    {
        var bytesRead = socket.EndReceive(ar);
        if (bytesRead > 0)
        {
            var data = new byte[bytesRead];
            Array.Copy(this.mBuffer, data, data.Length);
            dataReceived(this, mMaster.SlaveAddress, data);
        }
        socket.BeginReceive(mBuffer, 0, mBuffer.Length, SocketFlags.None, new AsyncCallback(rcvTCP), socket);
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
I'm sending messages from a device to my app, one every 10 ms, 8 bytes per message. I'm receiving the messages fine, so "bytesRead" is 8, until a certain point when the app freezes randomly and "bytesRead" is 768. When I look inside the data that came, I see that I have 96 messages in one.
I've read on the internet that the messages might be coming synchronously instead of asynchronously, but checking "CompletedSynchronously" gives true for previous messages and for this message as well...
if (ar.CompletedSynchronously)
{
    Debugger.Break();
}
I tried TcpClient + NetworkStream instead of Socket (I know TcpClient is pretty much the same as Socket), and I get the same result.
Please help me. I want all the messages separated instead of collected, but I can't find any solution to this...
Any idea how to do this?
PS: I am already putting the flag - NoDelay to true.
FIX: For me the fix was splitting the collected messages after the receive, and running the splitting in a separate thread so the UI does not freeze. Keep in mind that if you have a property used from different threads, don't forget to use "lock".
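Since the device sends fixed 8-byte messages, the splitting can be as simple as walking the received block in 8-byte steps. A sketch based on the callback above (the MessageSize constant is mine; a trailing partial message would need to be carried over to the next receive, which is omitted here):
const int MessageSize = 8; // the device sends fixed 8-byte messages

void rcvTCP(IAsyncResult ar)
{
    var socket = (Socket)ar.AsyncState;
    try
    {
        var bytesRead = socket.EndReceive(ar);
        if (bytesRead > 0)
        {
            // The stream may hand us many coalesced messages at once,
            // so split the block into fixed-size chunks before raising events.
            for (int offset = 0; offset + MessageSize <= bytesRead; offset += MessageSize)
            {
                var data = new byte[MessageSize];
                Array.Copy(this.mBuffer, offset, data, 0, MessageSize);
                dataReceived(this, mMaster.SlaveAddress, data);
            }
        }
        socket.BeginReceive(mBuffer, 0, mBuffer.Length, SocketFlags.None,
            new AsyncCallback(rcvTCP), socket);
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}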

Can i know who disconnect from my TCP Server? C# [duplicate]

I'm working on a client/server relationship that is meant to push data back and forth for an indeterminate amount of time.
The problem I'm attempting to overcome is on the client side, being that I cannot manage to find a way to detect a disconnect.
I've taken a couple of passes at other people's solutions, ranging from just catching IOExceptions to polling the socket on all three SelectModes. I've also tried using a combination of a poll with a check on the 'Available' field of the socket.
// Something like this
Boolean IsConnected()
{
    try
    {
        bool part1 = this.Connection.Client.Poll(1000, SelectMode.SelectRead);
        bool part2 = (this.Connection.Client.Available == 0);
        if (part1 & part2)
        {
            // Never Occurs
            // connection is closed
            return false;
        }
        return true;
    }
    catch (IOException e)
    {
        // Never Occurs Either
    }
}
On the server side, an attempt to write an 'empty' character ( \0 ) to the client forces an IO Exception and the server can detect that the client has disconnected ( pretty easy gig ).
On the client side, the same operation yields no exception.
// Something like this
Boolean IsConnected()
{
    try
    {
        this.WriteHandle.WriteLine("\0");
        this.WriteHandle.Flush();
        return true;
    }
    catch (IOException e)
    {
        // Never occurs
        this.OnClosed("Yo socket sux");
        return false;
    }
}
A problem I believe I'm having with detecting a disconnect via a poll is that I can fairly easily get a false result on SelectRead if my server hasn't written anything back to the client since the last check. I'm not sure what to do here; I've chased down every option for making this detection that I can find, and nothing has been 100% reliable for me. Ultimately my goal is to detect a server (or connection) failure, inform the client, wait to reconnect, etc., so I'm sure you can imagine that this is an integral piece.
Appreciate anyone's suggestions.
Thanks ahead of time.
EDIT: Anyone viewing this question should note the answer below, and my FINAL Comments on it. I've elaborated on how I overcame this problem, but have yet to make a 'Q&A' style post.
One option is to use TCP keep alive packets. You turn them on with a call to Socket.IOControl(). Only annoying bit is that it takes a byte array as input, so you have to convert your data to an array of bytes to pass in. Here's an example using a 10000ms keep alive with a 1000ms retry:
Socket socket; //Make a good socket before calling the rest of the code.
int size = sizeof(UInt32);
UInt32 on = 1;
UInt32 keepAliveInterval = 10000; //Send a packet once every 10 seconds.
UInt32 retryInterval = 1000; //If no response, resend every second.
byte[] inArray = new byte[size * 3];
Array.Copy(BitConverter.GetBytes(on), 0, inArray, 0, size);
Array.Copy(BitConverter.GetBytes(keepAliveInterval), 0, inArray, size, size);
Array.Copy(BitConverter.GetBytes(retryInterval), 0, inArray, size * 2, size);
socket.IOControl(IOControlCode.KeepAliveValues, inArray, null);
Keep alive packets are sent only when you aren't sending other data, so every time you send data, the 10000ms timer is reset.

Retrieve multiple objects from a buffer in C#

First things first, let me explain my situation: I'm working on a client and a server in C# which use socket for communication.
For practical reason, I use the asynchronous part of both socket to transmit binary serialized objects from the client to the server and vice-versa.
My problem is that when I send too many objects at once, they get "stacked" into the buffer on the receiving side, and when I try to deserialize the buffer content, it gives me only one object.
My question is : How can I separate each object from a buffer ?
Here is my ReceiveCallback function :
private void ReceiveMessageCallback(IAsyncResult asyncResult)
{
    Socket socket = (Socket)asyncResult.AsyncState;
    try
    {
        int read = socket.EndReceive(asyncResult);
        if (read > 0)
        {
            Log("Reception of " + read + " Bytes");
            // Jumper is an object that I use to transport every message
            Jumper pod = Common.Serializer.DeSerialize<Jumper>(this.readbuf);
            Buffer.SetByte(this.readbuf, 0, 0);
            socket.BeginReceive(this.readbuf, 0, this.readbuf.Length, SocketFlags.None, new AsyncCallback(ReceiveMessageCallback), socket);
            // We fire an event to externalise the analyse process
            Receiver(pod, socket);
        }
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode == System.Net.Sockets.SocketError.ConnectionReset)
        {
            socket.Close();
            Log("Distant socket closed");
        }
        else
            Log(ex.Message);
    }
    catch (Exception ex)
    {
        Log(ex.Message);
    }
}
Deserialization will consume your buffer for one object and will leave the "read pointer" of the buffer at the start of the next object.
My suggestion would be to either:
move the data from the socket stream to some other (memory) buffer and deserialize from there, so you control the whole "buffering" by hand,
or
when you get a callback, call deserialize several times in a row, until you get an exception.
The second approach will most likely fail because nobody can guarantee that you will always get WHOLE objects in one go, since the stream attached to a socket is a sequence of bytes, and it can be broken at any point.
Hope that makes some sense.
EDIT:
In fact, the proper thing to do would be to encapsulate each "message" (a serialized object on sending) into a packet, in which, for example, you'll have a small header telling you the LENGTH of the packet; upon reception, you read the socket stream until you have the complete packet data, and only then deserialize. And then the next packet, and so on. (A sketch of this follows below.)
EVEN MORE:
Say you send data into the socket in chunks of 1000 bytes on the sender side. On the receiving side, you'll probably NEVER get it 1000 bytes at a time out of the socket; in fact you can expect that only under two conditions:
the delay between each send is very large, and a read occurs at the receiver between sends
pure accident
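A minimal sketch of the length-prefix idea, shown with synchronous NetworkStream calls for brevity (the 4-byte header and the ReadExactly helper are illustrative, not from any particular library; the payload would be your serialized Jumper bytes, and the same framing applies to the async callbacks):
// Sender: prefix each serialized object with its length.
static void SendMessage(NetworkStream stream, byte[] payload)
{
    byte[] header = BitConverter.GetBytes(payload.Length); // 4-byte length prefix
    stream.Write(header, 0, header.Length);
    stream.Write(payload, 0, payload.Length);
}

// Receiver: read the header, then keep reading until the whole payload is in.
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(header, 0);
    return ReadExactly(stream, length); // deserialize this byte[] into one object
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0)
            throw new IOException("Connection closed before the full message arrived.");
        read += n;
    }
    return buffer;
}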

How to send files over tcp with TcpListener/Client? SocketException problem

I'm developing a simple application to send files over TCP using the TCPListener and TCPClient classes. Here's the code that sends the file.
stop is a volatile boolean that helps stop the process at any time, and WRITE_BUFFER_SIZE might be changed at runtime (another volatile).
while (remaining > 0 && !stop)
{
    DateTime current = DateTime.Now;
    int bufferSize = WRITE_BUFFER_SIZE;
    buffer = new byte[bufferSize];
    int readed = fileStream.Read(buffer, 0, bufferSize);
    stream.Write(buffer, 0, readed);
    stream.Flush();
    remaining -= readed;
    // Wait in order to guarantee send speed
    TimeSpan difference = DateTime.Now.Subtract(current);
    double seconds = (bufferSize / Speed);
    int wait = (int)Math.Floor(seconds * 1000);
    wait -= difference.Milliseconds;
    if (wait > 10)
        Thread.Sleep(wait);
}
stream.Close();
and this is the code that handles the receiver side:
do
{
    readed = stream.Read(buffer, 0, READ_BUFFER_SIZE);
    // write to .part file and flush to disk
    outputStream.Write(buffer, 0, readed);
    outputStream.Flush();
    offset += readed;
} while (!stop && readed > 0);
Now, when the speed is low (about 5KBps) everything works fine, but as I increase the speed, the receiver side becomes more prone to raising a SocketException when reading from the stream. I'm guessing it has to do with the remote socket being closed before all data can be read, but what's the correct way to do this? When should I close the sending client?
I haven't found any good examples of file transmission on google, and the ones that I've found have a similar implementation of what I'm doing so I guess I'm missing something.
Edit: I get this error "Unable to read data from the transport connection". This is an IOException whose inner exception is a SocketException.
I've added this to the sender function, but I still get the same error; the code never reaches stream.Close() and of course the TcpClient never really gets closed... so I'm completely lost now.
buffer = new byte[1];
client.Client.Receive(buffer);
stream.Close();
Typically you want to set the LINGER option on the socket. Under C++ this would be SO_LINGER, but under windows this doesn't actually work as expected. You really want to do this:
Finish sending data.
Call shutdown() with the how parameter set to 1.
Loop on recv() until it returns 0.
Call closesocket().
Taken from: http://tangentsoft.net/wskfaq/newbie.html#howclose
C# may have corrected this in its libraries, but I doubt it since they are built on top of the winsock API.
Edit:
Looking at your code in more detail, I see that you are sending no header across at all, so on the receiving side you have no idea how many bytes you are actually supposed to read. Knowing the number of bytes to read off the socket makes this a much easier problem to debug. Keep in mind that shutting down the socket can still snip off the last bit of data if you don't close it properly.
Additionally, having your buffer size be volatile is not thread safe and really doesn't buy you anything. Using stop as a volatile is safe, but don't expect it to be instant. In other words, the loop could run several more times before it sees the updated value of stop. This is especially true on multiprocessor machines.
Edit_02:
For the TcpClient class you want to do the following (as far as I can tell without having access to C# at the moment).
// write all the bytes
// Then do the following
client.Client.Shutdown(SocketShutdown.Send); // assuming you can get at the underlying Socket
while (stream.Read(buffer, 0, READ_BUFFER_SIZE) != 0) ;
client.Close();
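And on the missing header mentioned in the first edit, a rough sketch of sending the file length up front so the receiver knows exactly how many bytes to expect (the 8-byte long prefix is an assumption; fileStream, stream, outputStream and READ_BUFFER_SIZE are the names from the question):
// Sender: announce the file size before the data, then run the existing send loop.
byte[] lengthHeader = BitConverter.GetBytes(fileStream.Length); // 8-byte long
stream.Write(lengthHeader, 0, lengthHeader.Length);

// Receiver: read the header first, then stop after exactly that many bytes.
byte[] header = new byte[8];
int headerRead = 0;
while (headerRead < header.Length)
{
    int n = stream.Read(header, headerRead, header.Length - headerRead);
    if (n == 0)
        throw new IOException("Connection closed before the header arrived.");
    headerRead += n;
}
long remaining = BitConverter.ToInt64(header, 0);

byte[] buffer = new byte[READ_BUFFER_SIZE];
while (remaining > 0)
{
    int readed = stream.Read(buffer, 0, (int)Math.Min((long)buffer.Length, remaining));
    if (readed == 0)
        throw new IOException("Connection closed before the whole file arrived.");
    outputStream.Write(buffer, 0, readed);
    outputStream.Flush();
    remaining -= readed;
}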

C# Begin/EndReceive - how do I read large data?

When reading data in chunks of say, 1024, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way?
edit:
I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is:
public class StateObject
{
    public Socket workSocket = null;
    public const int BUFFER_SIZE = 1024;
    public byte[] buffer = new byte[BUFFER_SIZE];
    public StringBuilder sb = new StringBuilder();
}
public static void Read_Callback(IAsyncResult ar)
{
    StateObject so = (StateObject)ar.AsyncState;
    Socket s = so.workSocket;
    int read = s.EndReceive(ar);
    if (read > 0)
    {
        so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
        if (read == StateObject.BUFFER_SIZE)
        {
            s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
                new AsyncCallback(Async_Send_Receive.Read_Callback), so);
            return;
        }
    }
    if (so.sb.Length > 0)
    {
        // All of the data has been read, so display it to the console
        string strContent;
        strContent = so.sb.ToString();
        Console.WriteLine(String.Format("Read {0} byte from socket" +
            "data = {1} ", strContent.Length, strContent));
    }
    s.Close();
}
Now this corrected version works fine most of the time, but it fails when the packet's size is a multiple of the buffer. The reason is that if the buffer gets filled on a read, it is assumed there is more data; but the same problem happens as before. A 2-byte buffer, for example, gets filled twice by a 4-byte packet and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know where the end of the packet is.
This got me thinking to two possible solutions: I could either have an end-of-packet delimiter or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested).
There's problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kinda sloppy to me.
The length header sounds ok, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc..
What should I do?
No - call BeginReceive again from the callback handler, until EndReceive returns 0. Basically, you should keep on receiving asynchronously, assuming you want the fullest benefit of asynchronous IO.
If you look at the MSDN page for Socket.BeginReceive you'll see an example of this. (Admittedly it's not as easy to follow as it might be.)
Dang. I'm hesitant to even reply to this given the dignitaries that have already weighed in, but here goes. Be gentle, O Great Ones!
Without having the benefit of reading Marc's blog (it's blocked here due the corporate internet policy), I'm going to offer "another way."
The trick, in my mind, is to separate the receipt of the data from the processing of that data.
I use a StateObject class defined like this. It differs from the MSDN StateObject implementation in that it does not include the StringBuilder object, the BUFFER_SIZE constant is private, and it includes a constructor for convenience.
public class StateObject
{
    private const int BUFFER_SIZE = 65535;
    public byte[] Buffer = new byte[BUFFER_SIZE];
    public readonly Socket WorkSocket = null;

    public StateObject(Socket workSocket)
    {
        WorkSocket = workSocket;
    }
}
I also have a Packet class that is simply a wrapper around a buffer and a timestamp.
public class Packet
{
    public readonly byte[] Buffer;
    public readonly DateTime Timestamp;

    public Packet(DateTime timestamp, byte[] buffer, int size)
    {
        Timestamp = timestamp;
        Buffer = new byte[size];
        System.Buffer.BlockCopy(buffer, 0, Buffer, 0, size);
    }
}
My ReceiveCallback() function looks like this.
public static ManualResetEvent PacketReceived = new ManualResetEvent(false);
public static List<Packet> PacketList = new List<Packet>();
public static object SyncRoot = new object();

public static void ReceiveCallback(IAsyncResult ar)
{
    try {
        StateObject so = (StateObject)ar.AsyncState;
        int read = so.WorkSocket.EndReceive(ar);
        if (read > 0) {
            Packet packet = new Packet(DateTime.Now, so.Buffer, read);
            lock (SyncRoot) {
                PacketList.Add(packet);
            }
            PacketReceived.Set();
        }
        so.WorkSocket.BeginReceive(so.Buffer, 0, so.Buffer.Length, 0, ReceiveCallback, so);
    } catch (ObjectDisposedException) {
        // Handle the socket being closed with an async receive pending
    } catch (Exception e) {
        // Handle all other exceptions
    }
}
Notice that this implementation does absolutely no processing of the received data, nor does it have any expectations as to how many bytes are supposed to have been received. It simply receives whatever data happens to be on the socket (up to 65535 bytes), stores that data in the packet list, and then immediately queues up another asynchronous receive.
Since processing no longer occurs in the thread that handles each asynchronous receive, the data will obviously be processed by a different thread, which is why the Add() operation is synchronized via the lock statement. In addition, the processing thread (whether it is the main thread or some other dedicated thread) needs to know when there is data to process. To do this, I usually use a ManualResetEvent, which is what I've shown above.
Here is how the processing works.
static void Main(string[] args)
{
    Thread t = new Thread(
        delegate() {
            List<Packet> packets;
            while (true) {
                PacketReceived.WaitOne();
                PacketReceived.Reset();
                lock (SyncRoot) {
                    packets = PacketList;
                    PacketList = new List<Packet>();
                }
                foreach (Packet packet in packets) {
                    // Process the packet
                }
            }
        }
    );
    t.IsBackground = true;
    t.Name = "Data Processing Thread";
    t.Start();
}
That's the basic infrastructure I use for all of my socket communication. It provides a nice separation between the receipt of the data and the processing of that data.
As to the other question you had, it is important to remember with this approach that each Packet instance does not necessarily represent a complete message within the context of your application. A Packet instance might contain a partial message, a single message, or multiple messages, and your messages might span multiple Packet instances. I've addressed how to know when you've received a full message in the related question you posted here.
You would read the length prefix first. Once you have that, you would just keep reading the bytes in blocks (and you can do this async, as you surmised) until you have exhausted the number of bytes you know are coming in off the wire.
Note that at some point, when reading the last block you won't want to read the full 1024 bytes, depending on what the length-prefix says the total is, and how many bytes you have read.
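As a rough sketch of that with Begin/EndReceive, once the length prefix has been read and a buffer of exactly that size has been allocated, the body callback only ever asks for the bytes still missing (the state class and its field names here are illustrative):
class LengthPrefixedState
{
    public Socket Socket;       // the connected socket
    public byte[] Buffer;       // sized from the length prefix
    public int BytesReceived;   // how much of the body has arrived so far
}

static void BodyCallback(IAsyncResult ar)
{
    var state = (LengthPrefixedState)ar.AsyncState;
    int read = state.Socket.EndReceive(ar);
    if (read == 0)
        return; // connection closed before the full message arrived

    state.BytesReceived += read;
    if (state.BytesReceived < state.Buffer.Length)
    {
        // Not everything is here yet: ask only for the remaining bytes.
        state.Socket.BeginReceive(state.Buffer, state.BytesReceived,
            state.Buffer.Length - state.BytesReceived, SocketFlags.None,
            BodyCallback, state);
    }
    else
    {
        // state.Buffer now holds exactly one complete message; process it,
        // then start over by reading the next length prefix.
    }
}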
I also ran into the same problem.
When I tested several times, I found that sometimes multiple BeginReceive/EndReceive calls caused packet loss (the loop ended improperly).
In my case, I used two solutions.
First, I defined a packet size large enough that only one BeginReceive()/EndReceive() pass was needed.
Second, when receiving a large amount of data, I used NetworkStream.Read() instead of BeginReceive()/EndReceive().
Asynchronous sockets are not easy to use, and they require a good understanding of how sockets work.
For info (general Begin/End usage), you might want to see this blog post; this approach is working OK for me, and saving much pain...
There seems to be a lot of confusion surrounding this. The examples on MSDN's site for async socket communication using TCP are misleading and not well explained. The EndReceive call will indeed block if the message size is an exact multiple of the receive buffer. This will cause you to never get your message and the application to hang.
Just to clear things up - You MUST provide your own delimiter for data if you are using TCP. Read the following (this is from a VERY reliable source).
The Need For Application Data Delimiting
The other impact of TCP treating incoming data as a stream is that data received by an application using TCP is unstructured. For transmission, a stream of data goes into TCP on one device, and on reception, a stream of data goes back to the application on the receiving device. Even though the stream is broken into segments for transmission by TCP, these segments are TCP-level details that are hidden from the application. So, when a device wants to send multiple pieces of data, TCP provides no mechanism for indicating where the “dividing line” is between the pieces, since TCP doesn't examine the meaning of the data at all. The application must provide a means for doing this.
Consider for example an application that is sending database records. It needs to transmit record #579 from the Employees database table, followed by record #581 and record #611. It sends these records to TCP, which treats them all collectively as a stream of bytes. TCP will package these bytes into segments, but in a manner the application cannot predict. It is possible that each will end up in a different segment, but more likely they will all be in one segment, or part of each will end up in different segments, depending on their length. The records themselves must have some sort of explicit markers so the receiving device can tell where one record ends and the next starts.
Source: http://www.tcpipguide.com/free/t_TCPDataHandlingandProcessingStreamsSegmentsandSequ-3.htm
Most examples I see online for using EndReceive are wrong or misleading. It usually causes no problems in the examples because only one predefined message is sent and then the connection is closed.
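In practice that means scanning whatever arrives for your delimiter and carrying any unterminated tail over to the next receive. A rough sketch (the '\n' delimiter, the pending buffer and the OnBytesReceived name are all illustrative; you would call it from your receive callback with the bytes just read):
static readonly MemoryStream pending = new MemoryStream();

static void OnBytesReceived(byte[] buffer, int count)
{
    // Append the new bytes to whatever was left over from earlier receives.
    pending.Write(buffer, 0, count);

    byte[] all = pending.ToArray();
    int start = 0;
    for (int i = 0; i < all.Length; i++)
    {
        if (all[i] == (byte)'\n')
        {
            // One complete, delimited message.
            string message = Encoding.ASCII.GetString(all, start, i - start);
            Console.WriteLine(message);
            start = i + 1;
        }
    }

    // Keep only the unterminated tail for the next receive.
    pending.SetLength(0);
    pending.Write(all, start, all.Length - start);
}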
This is a very old topic, but I got here looking for something else and found this:
Now this corrected version works fine most of the time, but it fails when the packet's size is a multiple of the buffer. The reason is that if the buffer gets filled on a read, it is assumed there is more data; but the same problem happens as before. A 2-byte buffer, for example, gets filled twice by a 4-byte packet and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know where the end of the packet is.
I had this same problem, and since none of the replies seemed to solve it, the way I did it was using Socket.Available:
public static void Read_Callback(IAsyncResult ar)
{
    StateObject so = (StateObject)ar.AsyncState;
    Socket s = so.workSocket;
    int read = s.EndReceive(ar);
    if (read > 0)
    {
        so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
        if (s.Available == 0)
        {
            // All data received, process it as you wish
        }
    }
    // Listen for more data
    s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
        new AsyncCallback(Async_Send_Receive.Read_Callback), so);
}
Hope this helps others; SO has helped me many times, thank you all!
