I am using UDP raw sockets.
I wish to read only the first, for example, 64 bytes of every packet.
ipaddr = IPAddress.Parse( "10.1.2.3" );
sock = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
sock.Bind(new IPEndPoint(ipaddr, 0));
sock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);
sock.IOControl(IOControlCode.ReceiveAll, BitConverter.GetBytes(RCVALL_IPLEVEL), null);
sock.ReceiveBufferSize = 32768;
byte[] buffer = new byte[64]; // max IP header, plus tcp/udp ports
while (!bTheEnd )
{
int ret = sock.Receive(buffer, buffer.Length, SocketFlags.None);
...
}
I receive the packets, but every IP header's "total length" field is <= 64.
If I use a bigger buffer (byte[] buffer = new byte[32768]), I get the right "total length" (now its value is <= 32768).
The goal is to capture all the packets, reading only the IP header, but with the correct packet length; my routine must not cause packet fragmentation in the TCP/IP stack.
SocketFlags.Peek means the data returned will be left intact for a subsequent read - that's why you get the same data after reading again. To read subsequent packets you don't want to use Peek; just perform a regular read with no special flags.
According to documentation:
If the datagram you receive is larger than the size of the buffer
parameter, buffer gets filled with the first part of the message, the
excess data is lost and a SocketException is thrown.
Is that the behavior you're after?
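If it is, a minimal sketch of leaning on that documented behavior might look like this (Process is just a placeholder for your own handling, and I'm assuming the raw-socket semantics match the quote above):
byte[] buffer = new byte[64];
while (!bTheEnd)
{
    try
    {
        // the whole datagram fit within 64 bytes
        int ret = sock.Receive(buffer, buffer.Length, SocketFlags.None);
        Process(buffer, ret);
    }
    catch (SocketException ex) when (ex.SocketErrorCode == SocketError.MessageSize)
    {
        // per the docs, buffer now holds the first 64 bytes of the datagram
        // and the excess was discarded
        Process(buffer, buffer.Length);
    }
}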
I am very new to socket programming.
I am using the following code to receive incoming data from a pathology machine.
byte[] buffer = new byte[2048];
IPAddress ipAddress = IPAddress.Parse(SERVER_IP);
IPEndPoint localEndpoint = new IPEndPoint(ipAddress, PORT_NO);
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
try
{
sock.Connect(localEndpoint);
}
catch (Exception ex)
{
throw ex;
}
int recv = 0;
string Printed = string.Empty;
StringBuilder sb = new StringBuilder();
while ((recv = sock.Receive(buffer)) > 0)
{
if (sock.Receive(buffer).ToString().Length > 1) // I used this line because it's receiving some garbage value all the time.
{
sb.Append(Encoding.UTF8.GetString(buffer));
}
else
{
if (sb.Length > 50 && Printed == string.Empty)
{
Console.WriteLine(sb);
Printed = "Y";
}
}
}
Issues I am facing
My program is not receiving complete data. Maybe it is because of this line: if (sock.Receive(buffer).ToString().Length > 1). But I used this line because it's always receiving something.
My program goes into an endless loop. I want the program to pause for some time after receiving the data and then start listening again for new incoming data.
There are a few things here:
you need to store the read count, and use only that many bytes, i.e. var bytes = sock.Receive(buffer); (and use bytes for both the EOF test, and for how many bytes to process)
we can't use ToString().Length > 1 here, because Receive returns an integer, and every integer, as a string, has a non-zero length; instead, simply: if (bytes > 0) (minutiae: there is a scenario where an open socket can return zero without meaning EOF, but... it doesn't apply here)
even for a text protocol, you can't necessarily simply use Encoding.UTF8.GetString(buffer, 0, bytes), because UTF-8 is a multi-byte encoding, meaning you might have partial characters; additionally, you don't yet know whether that is one message, half a message, or 14 and a bit messages; you need to read about the protocol's "framing" - which might simply mean "buffer bytes until you see a newline ('\n') character, decode those buffered bytes via the encoding, process that message, and repeat" - see the sketch below
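For example, a rough sketch of that newline framing (assuming the device sends UTF-8 text delimited by '\n'; MemoryStream is just one convenient back-buffer):
var frame = new MemoryStream(); // back-buffer for the current, possibly partial, message
var buffer = new byte[2048];
int bytes;
while ((bytes = sock.Receive(buffer)) > 0)
{
    for (int i = 0; i < bytes; i++)
    {
        if (buffer[i] == (byte)'\n')
        {
            // a complete frame: decode the buffered bytes, process, reset
            Console.WriteLine(Encoding.UTF8.GetString(frame.ToArray()));
            frame.SetLength(0);
        }
        else
        {
            frame.WriteByte(buffer[i]);
        }
    }
}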
I have a synchronous client socket talking to an LRS (Long Range Systems) transmitter; it takes XML input over a TCP/IP connection. I am able to create a connection with the device and receive a response once connected, but when I try to send some text and call Receive again, I get no reply and it eventually times out. Can you please explain why?
My sample code:
Socket tcpSocket = new Socket (AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
System.Net.IPAddress[] IPs = System.Net.Dns.GetHostAddresses("valid_IP_address");
tcpSocket.Connect(IPs[0], PORT_NUMBER);
int nBytes = 0;
byte[] RcvBytes = new byte[BUF_SIZE];
if (tcpSocket.Connected)
{
tcpSocket.ReceiveTimeout = 60000; //1 minute timeout
//connected is true and below Receive call returns some bytes
//RcvBytes contains a valid response, ie, <LRSN services="blah, blah" ... />
nBytes = tcpSocket.Receive(RcvBytes, 0, tcpSocket.Available, SocketFlags.None);
}
//below Send returns 8 bytes, the length of "SomeText"
nBytes = tcpSocket.Send(Encoding.ASCII.GetBytes("SomeText"));
//*** FAILS, below Receive call never returns, eventually time out
nBytes = tcpSocket.Receive(RcvBytes, 0, tcpSocket.Available, SocketFlags.None);
I think you've probably run into this issue:
LRSN Message Transport: All messages are XML based. With some XML
parsers, it is difficult to process a continuous XML stream. To ease
parsing incoming messages, the following message framing scheme is
used: • Newline characters ("\n") are used to delimit the end of a
message. The data between two newlines should form a parsable XML
document (i.e. all tags balanced).
So try sending some newlines through.
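For example (just a sketch - "SomeText" stands in for a well-formed XML message as described by the LRSN docs):
byte[] request = Encoding.ASCII.GetBytes("SomeText\n"); // note the trailing '\n' delimiter
nBytes = tcpSocket.Send(request);
// read the newline-delimited reply
nBytes = tcpSocket.Receive(RcvBytes, 0, RcvBytes.Length, SocketFlags.None);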
I have a hardware device that connects to a laptop (the server) over an ad hoc network.
When the server sends data alone, it works correctly, and when the client sends data alone, it works correctly too.
But when the server and client send data together, a timeout occurs after a while - after 35, sometimes 33, packets.
I changed the transfer rate of the hardware, but it disconnects all the same, although the hardware supports full duplex.
After the timeout, I ping the hardware and it is no longer reachable on the port, while the port on the server is still open.
What can I do?
byte[] bytes = new byte[512];
//try
//{
IPHostEntry ipHost = Dns.GetHostEntry("");
// Gets first IP address associated with a localhost
IPAddress add = ipHost.AddressList[3];
TcpListener tcpListener = new TcpListener(add, 6000);
tcpListener.Start();
TcpClient tcpClient = tcpListener.AcceptTcpClient();
NetworkStream stream = tcpClient.GetStream();
String data = null;
while (true)
{
int j = 0;
int i;
while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
{
j = j + 1;
// Translate data bytes to a ASCII string.
data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
AddItem("j="+j+" Received:"+ data);
// Process the data sent by the client.
//data = data.ToUpper();
byte[] msg = System.Text.Encoding.ASCII.GetBytes("thanks");
// Send back a response.
stream.Write(msg, 0, msg.Length);
AddItem("Sent:"+"thanks");
}
// Shutdown and end connection
tcpClient.Close();
}
The standard socket calls are all blocking, so if both participants are sending to each other, they each wait for their opposite to receive the message they send, causing a deadlock.
In .NET, there are three typical solutions:
Microsoft has a parallel API for asynchronous socket activity. It requires more overhead code than your example, but handles just about everything in a Windows-like manner.
You can handle the asynchronous activity yourself by testing for readable data before you write with Socket.Select(). This is a typical polling approach, but you're doing everything yourself and need to make sure there's no starvation or other bias.
Put your Read and Write code in different threads, so that blocking one doesn't block the entire program (sketched below).
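A minimal sketch of that last option, reusing the stream and AddItem names from your code (NetworkStream supports one concurrent reader plus one concurrent writer):
// reader thread: a blocked Read no longer stalls the writes below
var readerThread = new System.Threading.Thread(() =>
{
    byte[] bytes = new byte[512];
    int i;
    while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
    {
        AddItem("Received: " + System.Text.Encoding.ASCII.GetString(bytes, 0, i));
    }
});
readerThread.Start();

// meanwhile, this thread can keep sending independently
byte[] msg = System.Text.Encoding.ASCII.GetBytes("thanks");
stream.Write(msg, 0, msg.Length);
AddItem("Sent: thanks");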
I have socket code communicating over TCP/IP. The machine I am communicating with has data in its buffer, and at present I am trying to get that data using this code.
byte[] data = new byte[1024];
int recv = sock.Receive(data);
stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas more data is in the machine's buffer. Is this because I have used int recv = sock.Receive(data); and data is 1024 bytes?
If yes, how do I get the total buffer size and retrieve it all into a string?
If you think you are missing some data, then you need to check recv and almost certainly: loop. Fortunately, ASCII is always single byte - in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while((recv = sock.Receive(data)) > 0)
{
// process recv-many bytes
// ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
Keep in mind that there is no guarantee that stringData will be any particular entire unit of work; what you send is not always what you receive, and that could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process.
Note, however, Receive always tries to return something (at least one byte), unless the inbound stream has closed - and will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do synchronous versus asynchronous receive (i.e. read synchronously while data is available, otherwise request an asynchronous read).
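As a rough sketch of that last point (OnReceive is an illustrative name, not a fixed API; error handling omitted):
// drain synchronously while data is already buffered locally...
while (sock.Available > 0)
{
    int recv = sock.Receive(data);
    // process recv-many bytes
}
// ...then request an asynchronous read for whatever arrives next
sock.BeginReceive(data, 0, data.Length, SocketFlags.None, OnReceive, sock);

void OnReceive(IAsyncResult ar)
{
    var s = (Socket)ar.AsyncState;
    int recv = s.EndReceive(ar);
    // process recv-many bytes, then resume the synchronous drain / async cycle
}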
Try something along these lines:
StringBuilder sbContent=new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data))>0)
{
sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
For the actual data you can use a condition like this:
byte[] data = new byte[tcpSocket.ReceiveBufferSize];
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
    // process receiveBytes-many bytes from data
}
When using a blocking TCP socket, I don't have to specify a buffer size. For example:
using (var client = new TcpClient())
{
client.Connect(ServerIp, ServerPort);
using (reader = new BinaryReader(client.GetStream()))
using (writer = new BinaryWriter(client.GetStream()))
{
var byteCount = reader.ReadInt32();
reader.ReadBytes(byteCount);
}
}
Notice how the remote host could have sent any number of bytes.
However, when using async TCP sockets, I need to create a buffer and thus hardcode a maximum size:
var buffer = new byte[BufferSize];
socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, callback, null);
I could simply set the buffer size to, say, 1024 bytes. That'll work if I only need to receive small chunks of data. But what if I need to receive a 10 MB serialized object? I could set the buffer size to 10*1024*1024... but that would waste a constant 10 MB of RAM for as long as the application is running. This is silly.
So, my question is: How can I efficiently receive big chunks of data using async TCP sockets?
The two examples are not equivalent - your blocking code assumes the remote end sends the 32-bit length of the data to follow. If the same protocol is valid for the async version, just read that length (blocking or not), then allocate the buffer and initiate the asynchronous IO.
Edit 0:
Let me also add that allocating buffers of user-entered, and especially of network-input, size is a recipe for disaster. An obvious problem is a denial-of-service attack where a client requests a huge buffer and holds on to it - say, by sending data very slowly - preventing other allocations and/or slowing the whole system.
Common wisdom here is accepting a fixed amount of data at a time and parsing as you go. That of course affects your application-level protocol design. A sketch of the capped length-prefix approach follows.
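A sketch combining both points - read the 32-bit length prefix, sanity-cap it, then read exactly that many bytes (ReceiveExactly and MaxMessageSize are names I'm inventing for illustration):
const int MaxMessageSize = 10 * 1024 * 1024; // reject absurd length prefixes outright

static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0) throw new EndOfStreamException("connection closed mid-message");
        offset += read;
    }
    return buffer;
}

// length prefix first, then a buffer of exactly the advertised size
int byteCount = BitConverter.ToInt32(ReceiveExactly(socket, 4), 0);
if (byteCount < 0 || byteCount > MaxMessageSize)
    throw new InvalidOperationException("suspicious message length: " + byteCount);
byte[] payload = ReceiveExactly(socket, byteCount);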
EDITED
The best approach to this problem that I found, after a long analysis, was the following:
First, you need to set the buffer size in order to receive data from the server/client.
Second, you need to find the upload/download speed for that connection.
Third, you need to calculate how many seconds should the connection timeout last in accordance with the size of package to be sent or received.
Set the buffer size
The buffer size can be set in two ways, arbitrarily or objectively. If the information to be received is text based, is not large, and does not require character comparison, then an arbitrary pre-set buffer size is optimal. If the information to be received needs to be processed character by character, and/or is large, an objective buffer size is the optimal choice.
// In this example I used a Socket wrapped inside a NetworkStream for simplicity,
// stability, and asynchronous operability purposes.
// This can be done like this:
//
// For the server:
//
// Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// listener.ReceiveBufferSize = 18000;
// IPEndPoint iPEndPoint = new IPEndPoint(IPAddress.Any, port);
// listener.Bind(iPEndPoint);
// listener.Listen(3000);
// Socket server = listener.Accept();
//
// NetworkStream ns = new NetworkStream(server);
//
// For the client:
//
// Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// client.Connect("127.0.0.1", 80);
//
// NetworkStream ns = new NetworkStream(client);
// To set an objective buffer size based on a file's size - so that you neither
// receive extra null characters because the buffer is bigger than the file,
// nor a corrupted file because the buffer is smaller than the file - do the
// following:
// The TCP handshake follows the SYN, SYN-ACK, ACK paradigm,
// so within a TCP connection if the client or server began the
// connection by sending a message, the next message within its
// connection must be read, and if the client or server began
// the connection by receiving a message, the next message must
// be sent.
// [SENDER]
byte[] file = new byte[18032];
byte[] file_length = Encoding.UTF8.GetBytes(file.Length.ToString());
await Sender.WriteAsync(file_length, 0, file_length.Length);
byte[] receiver_response = new byte[1800];
await Sender.ReadAsync(receiver_response, 0, receiver_response.Length);
await Sender.WriteAsync(file, 0, file.Length);
// [SENDER]
// [RECEIVER]
byte[] file_length = new byte[1800];
await Receiver.ReadAsync(file_length, 0, file_length.Length);
byte[] encoded_response = Encoding.UTF8.GetBytes("OK");
await Receiver.WriteAsync(encoded_response, 0, encoded_response.Length);
byte[] file = new byte[Convert.ToInt32(Encoding.UTF8.GetString(file_length).TrimEnd('\0'))];
await Receiver.ReadAsync(file, 0, file.Length);
// [RECEIVER]
The buffers used to exchange the payload length have an arbitrary, pre-agreed size. The length of the payload to be sent is converted to a string, and the string is converted to a UTF-8 encoded byte array. The received length is then converted back into a string and then to an integer, which sets the length of the buffer that will receive the payload. Because the length information is sent in a buffer larger than the information itself, the receiver must trim the padding null characters when converting the byte[] content back to a string; after trimming, the information remains intact.
Get the upload/download speed of the connection and calculate the Socket receive and send buffer size
First, make a class that is responsible for calculating the buffer size for each connection.
class Internet_Speed_Checker
{
public async Task Optimum_Buffer_Size(System.Net.Sockets.NetworkStream socket, int payload_length)
{
System.Diagnostics.Stopwatch latency_counter = new System.Diagnostics.Stopwatch();
byte[] test_payload = new byte[2048];
// The TCP handshake follows the SYN, SYN-ACK, ACK paradigm,
// so within a TCP connection if the client or server began the
// connection by sending a message, the next message within its
// connection must be read, and if the client or server began
// the connection by receiving a message, the next message must
// be sent.
//
// In order to test the connection, the client and server must
// send and receive a package of the same size. If the client
// or server began the connection by sending a message, the
// client or server must do this connection test by
// initiating a write-read sequence, else it must do this
// connection test initiating a read-write sequence.
latency_counter.Start();
await socket.ReadAsync(test_payload, 0, test_payload.Length);
await socket.WriteAsync(test_payload, 0, test_payload.Length);
latency_counter.Stop();
int bytes_per_second = (int)(test_payload.Length * (1000 / latency_counter.Elapsed.TotalMilliseconds));
int optimal_connection_timeout = (payload_length / bytes_per_second) * 1000 + 1000;
double optimal_buffer_size_double = (((double)bytes_per_second / 125000) * (latency_counter.Elapsed.TotalMilliseconds / 1000)) * 1048576;
int optimal_buffer_size = (int)optimal_buffer_size_double + 1024;
// If you want to upload data to the client/server --> client.SendBufferSize = optimal_buffer_size;
// client.SendTimeout = optimal_connection_timeout;
// If you want to download data from the client/server --> client.ReceiveBufferSize = optimal_buffer_size;
// client.ReceiveTimeout = optimal_connection_timeout;
}
}
The aforementioned method ensures that the data transmitted between the client and server buffers uses an appropriate socket buffer size and socket connection timeout, in order to avoid data corruption and fragmentation. When data is sent through a socket with an async Read/Write operation, the information is segmented into packets. The packet size has a default value, but it does not account for the fact that the upload/download speed of the connection varies. To avoid data corruption and to get an optimal upload/download speed, the packet size must be set in accordance with the speed of the connection. In the example above I also showed how to calculate the timeout in relation to the connection speed. The packet size for upload/download can be set via socket.SendBufferSize = ... and socket.ReceiveBufferSize = ... respectively.
For more information related to the equations and principles used check:
https://www.baeldung.com/cs/calculate-internet-speed-ping
https://docs.oracle.com/cd/E36784_01/html/E37476/gnkor.html