Transmitting strings between a C# client and a Node server

I have a Node TCP server up and waiting for data, and for every socket I have
socket.on("data", function () {
});
Now, as far as I understand, this will get invoked whenever there's any data received. That means that if I send a large string, it will get segmented into multiple packets and each of those will invoke the event separately. Therefore I could concatenate the data until the "end" event is invoked. According to the Node documentation this happens when the FIN packet is sent.
I have to admit I don't know much about networking, but this FIN packet, do I have to send it manually when sending data from my C# app, or will this code
var stream = client.GetStream();
using (var writer = new StreamWriter(stream)) writer.Write(request);
send it automatically when it manages to send the whole request string?
Secondly, how does it work from the other end? How do I send a "batch" of data from Node to my C# client so that it knows that the whole "batch" should be considered one thing, despite it being in multiple packets?
Also, is there an equivalent of the "end" event in .NET? Currently I'm blocking until the stream's DataAvailable is true, but that will trigger on the first packet, right? It won't wait for the whole thing.
I'd appreciate if someone could shed some light on this for me.

The TCP FIN packet will be sent when you call writer.Close() in C#, which will trigger the end event in Node as you said.
Without seeing how your C# reading code looks I can't give specifics, but C# will not fire an event when Node closes the connection. The usual signal is that a blocking stream.Read call returns 0 once the remote side has closed gracefully; if the connection is reset instead, the read throws an exception.
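For what it's worth, the closest .NET equivalent of Node's "end" event is a read loop that runs until Read returns 0. A minimal sketch, assuming stream came from client.GetStream():
using System.IO;

static byte[] ReadToEnd(Stream stream)
{
    using (var buffer = new MemoryStream())
    {
        var chunk = new byte[4096];
        int read;
        // Read returns 0 only once the remote side has closed the connection (FIN received).
        while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
        {
            buffer.Write(chunk, 0, read);
        }
        return buffer.ToArray();
    }
}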
TCP provides a stream of bytes, and nothing more. If you are planning to send several messages back and forth between Node and C#, it is up to you to send your messages in such a way that they can be separated. For instance, you could prefix each message with its length, so that you read the length first and then read that many bytes after it for the message. If your messages are always text, you could encode them as JSON and separate messages with newlines.
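For illustration, here is a minimal length-prefix sketch in C# (widened to a 4-byte prefix so a message can exceed 255 bytes; the names are mine, not from the question):
using System;
using System.IO;
using System.Text;

static class Framing
{
    // Write the message preceded by a 4-byte big-endian length prefix.
    public static void WriteMessage(Stream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        if (BitConverter.IsLittleEndian) Array.Reverse(prefix); // big-endian on the wire
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read exactly one framed message, looping because Read may return fewer bytes than asked.
    public static string ReadMessage(Stream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        if (BitConverter.IsLittleEndian) Array.Reverse(prefix); // back to host order
        int length = BitConverter.ToInt32(prefix, 0);
        return Encoding.UTF8.GetString(ReadExactly(stream, length));
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
The same framing is easy to implement on the Node side by reading the prefix with Buffer.readUInt32BE from the accumulated chunks.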

Related

C# Socket.Send( ): does it send all data or not?

I was reading about sockets from a book called "C# Network Programming" by Richard Blum. The following excerpt states that the Send() method is not guaranteed to send all the data passed to it.
byte[] data = new byte[1024];
int sent = socket.Send(data);
On the basis of this code, you might be tempted to presume that the
entire 1024-byte data buffer was sent to the remote device... but this
might be a bad assumption. Depending on the size of the internal TCP
buffer and how much data is being transferred, it is possible that not
all the data supplied to the Send() method was actually sent.
However, when I went and looked at the Microsoft documentation https://msdn.microsoft.com/en-us/library/w93yy28a(v=vs.110).aspx it says:
If you are using a connection-oriented protocol, Send will block until
all of the bytes in the buffer are sent, unless a time-out was set
So which is it? The book was published in 2004, so has it changed since then?
I'm planning to use asynchronous sockets, so my next question is, would BeginSend() send all data?
All you had to do was read the rest of the exact same paragraph you quoted. There's even an exception to your quote given in the very same sentence.
If you are using a connection-oriented protocol, Send will block until all of the bytes in the buffer are sent, unless a time-out was set by using Socket.SendTimeout. If the time-out value was exceeded, the Send call will throw a SocketException. In nonblocking mode, Send may complete successfully even if it sends less than the number of bytes in the buffer. It is your application's responsibility to keep track of the number of bytes sent and to retry the operation until the application sends the bytes in the buffer.
For BeginSend, the behavior is also described:
Your callback method should invoke the EndSend method. When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception.
That's not a very nice design and defeats the whole point of a callback! Consider using SendAsync instead (and then you still need to check the BytesTransferred property).
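For the nonblocking case the docs describe, the bookkeeping is just a loop. A minimal sketch, assuming socket is a connected Socket:
using System.Net.Sockets;

static void SendAll(Socket socket, byte[] data)
{
    int sent = 0;
    while (sent < data.Length)
    {
        // Send returns how many bytes were actually accepted; retry with the remainder.
        sent += socket.Send(data, sent, data.Length - sent, SocketFlags.None);
    }
}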
Both of the resources you quoted are correct. I think the wording could have been better though.
In the MSDN docs it is also written that
There is also no guarantee that the data you send will appear on the
network immediately. To increase network efficiency, the underlying
system may delay transmission until a significant amount of outgoing
data is collected.
So the Send method blocks only until the underlying system has room to buffer your data for a network send:
A successful completion of the Send method means that the underlying
system has had room to buffer your data for a network send.

TcpClient wait for CRLF

I'm writing a class library to communicate with a PLC over TCP. The communication is based on sending a data string terminated by a CRLF and then waiting for an acknowledge string (also terminated by a CRLF) to confirm the data was received (yes, I know acknowledgement is also built into TCP/IP itself, but that's another discussion).
Currently I'm facing two major problems:
I'm setting the TcpClient.SendTimeout property; however, it looks like when the data is sent (by TcpClient.Client.Send), the sender does not wait for the receiver to read the data. Why?
Because the sender is not waiting, an acknowledge string and then immediately the next data string can be sent. So the receiver gets two packages in one read. Is there a way to read the buffer only up to the first CRLF (the acknowledge) and leave the next data string in the buffer for the next TcpClient.Client.Read command?
Thanks in advance,
Mark
TCP is a streaming protocol. There are no packets that you can program against. The receiver must be able to decode the data no matter what chunks it arrives in. Assume one-byte chunks, for example.
Here, it seems the receiver can just read until it finds a CRLF. StreamReader can do that.
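A minimal sketch of that approach, where the PLC address, port, and payload are placeholders:
using System.IO;
using System.Net.Sockets;
using System.Text;

using (var client = new TcpClient("192.168.0.10", 9100)) // placeholder address/port
using (var stream = client.GetStream())
using (var writer = new StreamWriter(stream, Encoding.ASCII) { NewLine = "\r\n", AutoFlush = true })
using (var reader = new StreamReader(stream, Encoding.ASCII))
{
    writer.WriteLine("SOME DATA");  // sends "SOME DATA\r\n"
    string ack = reader.ReadLine(); // blocks until the first CRLF arrives
}
Note that StreamReader buffers ahead internally, so any bytes after the first CRLF are not lost; they are returned by the next ReadLine call. Just don't mix ReadLine with raw Read calls on the underlying stream.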
the sender does not wait for the receiver to read the data
TCP is asynchronous. When your Send completes, the receiver hasn't necessarily processed the data. This is impossible to ensure at the TCP stack level. The receiving app might have called Receive and gotten the data, but it might not have processed it. The TCP stack can't know.
You must design your protocol so that this information is not needed.
I just read one byte till
That can work but it is very CPU intensive and inefficient.

tcpclient - how to guarantee response is for the correct async command?

Forgive me if this is covered somewhere, but my Google skills have failed me and there doesn't appear to be anything that covers this specific problem. I've come to use TcpClient for the first time (TcpClient and NetworkStream with StreamReader and StreamWriter) and there are a few intricacies I'm trying to understand.
Background
I'm communicating with a printer. I open up a network stream reader that loops infinitely and parses incoming data. Outgoing commands are sent asynchronously from user inputs on the UI. All good so far.
A lot of examples show you sending data and then waiting on the response, which is fine in most circumstances. My issue is that the printer can randomly send me data that I have to respond to (out of ink / faults etc.), and I send commands to it asynchronously. I'm also paranoid that it may not always process commands in order.
I know the size of the expected responses and I can always split the commands out from the stream reliably (they always use start and end characters). The issue is that a lot of the responses have the same size and format.
My initial thought was to create a queue of outgoing commands; when a response comes back I can check it against the first command in the queue to see if it matches the expected return format, and against the others if it doesn't.
If it doesn't match anything in the queue, treat it as a new, unsolicited response and try to figure out what the printer might have sent me.
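Something like this is what I have in mind; all names are illustrative, and it assumes a single read loop delivers complete, already-framed messages:
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PendingCommand
{
    public string Command; // what was sent
    public TaskCompletionSource<string> Reply = new TaskCompletionSource<string>();
}

class CommandCorrelator
{
    readonly ConcurrentQueue<PendingCommand> _pending = new ConcurrentQueue<PendingCommand>();

    // Called when the UI sends a command; await the returned task for the reply.
    public Task<string> Track(string command)
    {
        var pending = new PendingCommand { Command = command };
        _pending.Enqueue(pending);
        return pending.Reply.Task;
    }

    // Called by the single read loop for every complete message.
    public void OnMessage(string message)
    {
        if (_pending.TryPeek(out var head) && LooksLikeReplyTo(head.Command, message))
        {
            _pending.TryDequeue(out _);
            head.Reply.SetResult(message);
        }
        else
        {
            HandleUnsolicited(message); // out of ink, faults, etc.
        }
    }

    bool LooksLikeReplyTo(string command, string message) => true; // format/size check goes here
    void HandleUnsolicited(string message) { /* raise an event, log, ... */ }
}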
Question
I guess simply put, are my assumptions correct? (I haven't experienced these problems yet, but I don't want to be surprised in production) and are there any commonly accepted ways of dealing with this type of scenario?
Thanks

Socket.Send has a delay?

I am building an application that sends data back to the other side.
In my application, I have:
System.Net.Sockets.Socket.Send(byte[]) function.
My customer told me that there is a 530 ms delay before this packet is received. However, I have logged everything up to the System.Net.Sockets.Socket.Send(byte[]) call.
I measured that it takes about 15 ms to hand the array to Socket.Send. My customer advised me to check:
Flushing after sending, but I don't see a flush function on Socket?
Filling up the data buffer before sending out; otherwise, if the data is short, I have to force the transmission.
Is either piece of advice correct? I also see that the Send method has another parameter, SocketFlags. Would using SocketFlags help here?
There's a common problem with the Nagle algorithm, which tries to 'glue' data sent close together into a single packet. It is possible that your code suffers from it as well.
Try disabling it by setting the Socket.NoDelay property, or the SocketOptionName.NoDelay option with the SetSocketOption method.
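A minimal sketch; the two statements below are equivalent ways of turning Nagle off:
using System.Net.Sockets;

static void DisableNagle(Socket socket)
{
    socket.NoDelay = true; // disables the Nagle algorithm on this socket
    // or, equivalently:
    // socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
}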

How is .NET's NetworkStream delimiting multiple messages in the same packet?

So I've been tasked with creating a tool for our QA department that can read packets off the wire and reassemble the messages correctly (they don't trust our logs... long story).
The application whose communication I'm attempting to listen in on is using .NET's TcpListener and TcpClient classes to communicate. Intercepting the packets isn't a problem (I'm using SharpPcap). However, correctly reassembling the packets into application level messages is proving slightly difficult.
Some packets have the end of one message and the beginning of the next message in them, and I can't figure out how the NetworkStream object in .NET is able to tell where one application-level message ends and the next begins.
I have been able to figure out that any packet containing the end of an application-level message will have the TCP "PSH" (push) flag turned on. But I can't figure out how .NET knows where exactly the end of the message is inside that packet.
The data of one packet might look like:
/></Message><Message><Header fromSystem=http://blah
How does the stream know to only send up to the end of </Message> to the application and store the rest until the rest of the message is complete?
There are no IP level flags set for fragmentation, and the .NET sockets have no knowledge of the application level protocol. So I find this incredibly vexing. Any insight would be appreciated.
The stream doesn't know the end of anything - message boundaries have to be part of the application protocol.
NetworkStream doesn't have anything built into it to convert the data into an object. What makes you think it does? What does the code that reads from the NetworkStream look like? Are you perhaps doing some form of XML deserialization, where the reading code automatically stops when it reaches the closing tag?
Basically:
If your protocol doesn't have any message delimiter or length prefix, it probably should have one
NetworkStream itself is highly unlikely to be doing anything clever - but if you could tell us what you're observing, we can maybe work out what's going on.
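To make that concrete, here is a rough sketch of the reassembly the receiving application is presumably doing itself, buffering chunks and splitting on the protocol's own delimiter ("</Message>", judging by the capture above):
using System;
using System.Collections.Generic;

class MessageAssembler
{
    const string Delimiter = "</Message>";
    string _buffer = "";

    // Feed each chunk as it arrives; yields every complete message found so far.
    public IEnumerable<string> Feed(string chunk)
    {
        _buffer += chunk;
        int index;
        while ((index = _buffer.IndexOf(Delimiter, StringComparison.Ordinal)) >= 0)
        {
            int end = index + Delimiter.Length;
            yield return _buffer.Substring(0, end);
            _buffer = _buffer.Substring(end);
        }
    }
}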
