TCP sockets: sending multiple objects on a single NetworkStream in C#

I am trying to bind sound and image sequence data together in an ArrayList so that they stay synchronized, and I am serializing it with BinaryFormatter to be sent over a NetworkStream.
The server end threw an exception:
"The stream does not support seek operations."
What do I have to do to keep the objects synchronized when sending them over a single NetworkStream instance?

TCP is stream based and not message based (as UDP is). That means that there is no telling when a message starts or ends. TCP only guarantees that all bytes are received and in the correct order. It does not guarantee that everything sent with one Send() will be received with one Receive().
Hence you need to specify some kind of message identification mechanism. In this case, a header is the way to go as Jon suggested.
However, you need to understand that the entire header might not be received at once. And that two messages might arrive at once. So you need to parse the received buffer before sending anything to the BinaryFormatter for deserialization.

I would split each object you want to send out into a separate "message" where a message consists of (say) 4 bytes indicating the body length, and then the body itself.
When you want to send a serialized object, you serialize to a byte array, write out the length, then write out the data.
At the server side, you read the length, read that much data into a byte array, then deserialize from that message. The incoming stream is only used to read messages, not objects.
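
A minimal sketch of that framing, kept close to the question's BinaryFormatter setup (note that BinaryFormatter is considered obsolete and unsafe for untrusted input in current .NET; the same length-prefix framing works with any serializer that can write to a MemoryStream):

using System;
using System.IO;
using System.Net.Sockets;
using System.Runtime.Serialization.Formatters.Binary;

static class MessageFraming
{
    public static void SendObject(NetworkStream stream, object obj)
    {
        // Serialize to a buffer first so the body length is known up front.
        var formatter = new BinaryFormatter();
        using (var body = new MemoryStream())
        {
            formatter.Serialize(body, obj);
            byte[] payload = body.ToArray();

            // 4-byte length prefix, then the body.
            stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
            stream.Write(payload, 0, payload.Length);
        }
    }

    public static object ReceiveObject(NetworkStream stream)
    {
        // Read exactly 4 bytes for the length, then exactly that many bytes for the body.
        int bodyLength = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        byte[] payload = ReadExactly(stream, bodyLength);

        // Deserialize from the in-memory copy, never directly from the NetworkStream.
        var formatter = new BinaryFormatter();
        using (var body = new MemoryStream(payload))
        {
            return formatter.Deserialize(body);
        }
    }

    // Read() may return fewer bytes than requested, so loop until the buffer is full.
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}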

Related

gRPC how to stream a huge string

I have a C# gRPC service that returns a setting in the form of a single JSON string, which can be very large, and I don't want to send it all at once. However, the string cannot be processed until it has arrived completely.
I know there's an option to use the stream keyword in the protobuf and read the string as it arrives, instead of sending it as a whole.
I have only found examples of receiving several items from a stream (with asyncServerStreamingCall.ResponseStream.MoveNext()), but I don't know how to stream a single string.
My questions are:
Is streaming the correct approach for this situation, where you need to stream only a single object (a string in this case) and can't process it before it arrives completely?
How do I stream and receive a string like that (from both server and client sides)?
Streaming in gRPC is, as you mentioned, about sending multiple discrete operations on a single open channel. Whether unary or streaming, individual messages in gRPC must be received and decoded in their entirety; there isn't a concept of an open byte-stream. Instead, you would need to send multiple messages that each contain some fragment of the payload, presumably just in a bytes field, and send the large payload split over some number of such messages. The receiver would need to combine them, perhaps appending to a file or similar, if it will be inconveniently large for in-memory storage.
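
A rough sketch of that approach. The service, message names, and the LoadSettingJson helper are all made up for illustration, roughly corresponding to a proto like: rpc GetSetting (SettingRequest) returns (stream SettingChunk); where SettingChunk has a single bytes field named piece:

using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Google.Protobuf;
using Grpc.Core;

// Server side (an override inside the generated service base class):
// split the encoded JSON into fixed-size pieces and write each as one message.
public override async Task GetSetting(SettingRequest request,
    IServerStreamWriter<SettingChunk> responseStream, ServerCallContext context)
{
    byte[] json = Encoding.UTF8.GetBytes(LoadSettingJson(request));   // hypothetical helper
    const int chunkSize = 64 * 1024;                                  // arbitrary chunk size
    for (int offset = 0; offset < json.Length; offset += chunkSize)
    {
        int count = Math.Min(chunkSize, json.Length - offset);
        await responseStream.WriteAsync(new SettingChunk
        {
            Piece = ByteString.CopyFrom(json, offset, count)
        });
    }
}

// Client side: collect the pieces and only decode/parse once the stream has completed.
var collected = new MemoryStream();
using (var call = client.GetSetting(new SettingRequest()))
{
    while (await call.ResponseStream.MoveNext(CancellationToken.None))
    {
        call.ResponseStream.Current.Piece.WriteTo(collected);
    }
}
string fullJson = Encoding.UTF8.GetString(collected.ToArray());       // complete, safe to process now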

C# sending huge and small data buffers by socket and TCP

I want to send huge buffers (from 100 MB to 1 GB) of data over TCP. I solved it by dividing the buffer into smaller ones (approximately 1 MB each) and sending them with socket.Send(). Each call to socket.Send() sends part of the data (one smaller buffer) packed in a specific structure: [start byte (1 B), timestamp (4 B), command (4 B), length of data (4 B), data to send (? B), CRC (1 B), end byte (1 B)]. Everything works fine when only one huge buffer is sent over a given port. But when I try to send another buffer (a very small one, e.g. 20 bytes) over the same TCP port at the same time, the data in the buffers gets mixed and it is no longer possible to decode them. The 'start byte' and 'end byte' in the buffer are not enough to find the start and end of a buffer, because those byte values can just as well appear in the data.
EDIT: The issue does not affect the order or IDs of the packages, but the bytes inside them. At the beginning everything works fine and each buffer is decoded properly. After a while it is no longer possible to decode a buffer, because it contains incorrect data; it looks as if bytes in the buffer were moved or changed. The fields at the beginning of the buffer (timestamp, command, length) contain impossible values, so when I read the length of the sent data I get, e.g., -1534501133 instead of 1048556 (1048556 is the correct maximum size of the data sent in one package). It happens at random moments, but it is always connected with the moment a smaller independent buffer is sent. The smaller buffers are sent repeatedly using timers, and the problem occurs at random times. Sometimes it is even possible to send the whole data set (e.g. 300 MB) without a problem, but that happens very rarely.
I hope I described it clearly enough.
Do you have any suggestions how to avoid this problem?
Tag your data with a unique id so you know what data relates to what message. Also, separate the packet header from the packet payload.
So, your first request would be [ID][PACKETTYPE][TIME][COMMAND][LENGTH][CRC]
Second would be [ID][PACKETTYPE][DATA]
You can then match up the IDs with the packet types. The packet type would be 'HEADER' or 'PAYLOAD'; the header contains the metadata for the payload, allowing you to make sure that it doesn't get mixed up with other data.
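
A rough sketch of the sending side. The 4-byte ID, the exact field offsets, and the lock that serializes concurrent writes are assumptions added for illustration, not part of the original protocol:

using System;
using System.Net.Sockets;

static class PacketSender
{
    const byte PacketTypeHeader = 1;
    const byte PacketTypePayload = 2;
    static readonly object sendLock = new object();   // assumed: several timers share one socket

    public static void SendMessage(Socket socket, int id, int timestamp, int command, byte[] data, byte crc)
    {
        // Header packet: [ID(4)][PACKETTYPE(1)][TIME(4)][COMMAND(4)][LENGTH(4)][CRC(1)]
        byte[] header = new byte[18];
        BitConverter.GetBytes(id).CopyTo(header, 0);
        header[4] = PacketTypeHeader;
        BitConverter.GetBytes(timestamp).CopyTo(header, 5);
        BitConverter.GetBytes(command).CopyTo(header, 9);
        BitConverter.GetBytes(data.Length).CopyTo(header, 13);
        header[17] = crc;

        // Payload packet: [ID(4)][PACKETTYPE(1)][DATA]
        byte[] payload = new byte[5 + data.Length];
        BitConverter.GetBytes(id).CopyTo(payload, 0);
        payload[4] = PacketTypePayload;
        data.CopyTo(payload, 5);

        // Serializing the two sends keeps each packet's bytes contiguous on the wire
        // when several timers write to the same socket at once.
        lock (sendLock)
        {
            socket.Send(header);
            socket.Send(payload);
        }
    }
}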

Network stream reading before all data has been written

My Server will send a message using networkStream.Write(messageBytes);
My client will receive the message using networkStream.Read(). The client reads byte by byte looking for a sequence; when it has found the sequence, it reads the rest of the header.
The header contains a payload length. Once I have this I read the stream using the payload length to get the payload.
My problem is when I write the data, my client will try to read straight away and all the data may not have been written by this point.
So I end up with a message with the end effectively chopped off:
"blablablablabl\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
Is there any way to wait for all the information to be received?
Thanks.
This doesn't really make sense. Socket read functions tell you how much data is available / has been read into the buffer, so if you're processing a bunch of NULs in the buffer because the rest of the data hasn't been received, that's on you for not checking how much data has been received. If you're expecting more data, keep receiving until you get as much as you're expecting (avoid overwriting the previously-received chunks by advancing the pointer into the receive buffer or copying to a new buffer, etc).
System network buffers are not infinite; if you don't read data out of the Windows socket into your own memory, the socket will stop receiving.
In that case, you might want to create memory storage for your data:
byte[] data = new byte[dataLength];
And keep pushing data into the array until it has all been read.
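
For example, continuing from the array above (dataLength is assumed to come from the header you have already parsed, and networkStream is the client's stream):

int offset = 0;
while (offset < dataLength)
{
    // Read() returns how many bytes actually arrived this time; it will not always fill the buffer.
    int read = networkStream.Read(data, offset, dataLength - offset);
    if (read == 0)
        throw new IOException("Connection closed before the full payload arrived."); // System.IO
    offset += read;
}
// "data" now holds exactly dataLength bytes and can be decoded safely.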

Transmitting strings between a C# client and a Node server

I have a node TCP server working and waiting for data and for every socket I have
socket.on("data", function () {
});
Now, as far as I understand, this will get invoked whenever there's any data received. That means that if I send a large string, it will get segmented into multiple packets and each of those will invoke the event separately. Therefore I could concatenate the data until the "end" event is invoked. According to the Node documentation this happens when the FIN packet is sent.
I have to admit I don't know much about networking, but about this FIN packet: do I have to send it manually when sending data from my C# app, or will this code
var stream = client.GetStream();
using (var writer = new StreamWriter(stream)) writer.Write(request);
send it automatically when it manages to send the whole request string?
Secondly, how does it work from the other end? How do I send a "batch" of data from Node to my C# client so that it knows that the whole "batch" should be considered one thing, despite it being in multiple packets?
Also, is there an equivalent of the "end" event in .NET? Currently, I'm blocking until the stream's DataAvailable is true, but that will trigger on the first packet, right? It won't wait for the whole thing.
I'd appreciate if someone could shed some light on this for me.
The TCP FIN packet will be sent when you call writer.Close() in C#, which will trigger the end event in Node as you said.
Without seeing how your C# reading code looks I can't give specifics, but C# will not fire an event when Node closes the connection. It will no longer be stream.CanRead, and if you had a current stream.Read call blocking, it will throw an exception.
TCP provides a stream of bytes, and nothing more. If you are planning to send several messages back and forth from Node and C#, it is up to you to send your messages in such a way that they can be separated. For instance, you could prefix each message with the length, so that you read one byte, and then read that many bytes after it for the message. If your messages are always text, you could encode it as JSON and separate messages with newlines.
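
For example, a newline-delimited JSON framing on the C# side could look like the sketch below (System.Text.Json is just one serializer choice; on the Node side you would buffer the "data" chunks and split on '\n'):

using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Text.Json;

static class LineProtocol
{
    // Sending side: one JSON document per line. Serialized JSON never contains a raw
    // newline (newlines inside strings are escaped), so '\n' is a safe delimiter.
    public static void SendMessage(NetworkStream stream, object message)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(message) + "\n");
        stream.Write(bytes, 0, bytes.Length);
    }

    // Receiving side: StreamReader buffers partial packets and hands back complete lines,
    // so each line is one whole message no matter how many TCP segments it arrived in.
    public static void ReadMessages(NetworkStream stream)
    {
        using (var reader = new StreamReader(stream, Encoding.UTF8))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                using (var document = JsonDocument.Parse(line))
                {
                    // ... handle one complete message here ...
                }
            }
        }
    }
}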

C# ==> Asyncsocket reading buffer?

I have a C# server that accepts multiple clients, and multiple messages from each client.
1- In order to start reading from each client I need to pass a buffer (byte array), but the problem is that I don't know how much data the client is going to send. So is there a way to know how much data a client is going to send, so that I can start reading for the correct amount of data?
2- Is it OK if I use only one byte array to read from all clients, or do I need to create a byte array for reading from each client?
Unless your protocol dictates how much data will be sent, no. Typically you read one buffer's worth and then potentially read more. It really depends on the protocol though. If the client can only send one message on each connection, you'll typically keep reading until the next call returns 0 bytes. Otherwise the messages either have delimiters or a length prefix.
Absolutely not - assuming you're going to be reading from multiple clients concurrently (why else would you use asynchronous communications?) you'd end up with the different clients' data all being written over each other. Create a new byte array for each client. Depending on exactly what you do with the data you may be able to reuse the same byte array for the next read for the same client - and you could reuse the byte array for later clients, if you really wanted... but don't read from multiple clients at the same time into the same buffer.
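
One common shape for that, sketched here with made-up names, is a small per-client state object that owns its own buffer and its partially assembled data:

using System;
using System.IO;
using System.Net.Sockets;

// One of these per connected client, so no two connections ever share a buffer.
class ClientState
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
    public MemoryStream Pending = new MemoryStream();   // bytes received but not yet parsed
}

class AsyncServer
{
    // Called once per client, right after Accept (accept code not shown).
    void StartReceive(ClientState state)
    {
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, OnReceive, state);
    }

    void OnReceive(IAsyncResult result)
    {
        var state = (ClientState)result.AsyncState;
        int read = state.Socket.EndReceive(result);
        if (read == 0)
        {
            state.Socket.Close();   // client disconnected
            return;
        }

        // Append what arrived to this client's own pending data, then let your framing
        // code (length prefix or delimiter, as described above) pull out complete messages.
        state.Pending.Write(state.Buffer, 0, read);

        StartReceive(state);        // keep reading from this client only
    }
}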
So is there a way to know how much data a client is going to send so that i can start reading for the correct amount of data?
Any protocol ought to have some mechanism for a client to indicate when it is done sending data, either as a "length" value that is sent before the actual data, or as a special terminating sequence that is sent after the data.
Is it oK if i use only 1 byte array to read from all clients? or do i need to create a byte array for reading from each client?
Depends on how your program works. If you'll have multiple simultaneous clients, obviously you can't have just a single buffer because they'll end up overwriting each other. If it's one client after the other, but only one at a time, there's no problem in having just one buffer.
