C# TCP data transfer with BinaryFormatter

I am attempting to connect my laptop to my standalone PC using the C# TcpClient class.
The laptop runs a simple console application and plays the role of the server.
The PC runs a Unity application (2018.1.6f1 with .NET 4.x Mono).
The code for sending is
public void SendData() {
    Debug.Log("Sending data");
    NetworkStream ns = client.GetStream();
    BinaryFormatter bf = new BinaryFormatter();
    TCPData data = new TCPData(true);
    using (MemoryStream ms = new MemoryStream()) {
        bf.Serialize(ms, data);
        byte[] bytes = ms.ToArray();
        ns.Write(bytes, 0, bytes.Length);
    }
}
The same code is used in the Laptop's project, except Debug.Log() is replaced by Console.WriteLine()
For data reception I use
public TCPData ReceiveData() {
    Debug.Log("Waiting for Data");
    using (MemoryStream ms = new MemoryStream()) {
        byte[] buffer = new byte[2048];
        int i = stream.Read(buffer, 0, buffer.Length);
        stream.Flush();
        ms.Write(buffer, 0, buffer.Length);
        ms.Seek(0, SeekOrigin.Begin);
        BinaryFormatter bf = new BinaryFormatter();
        bf.Binder = new CustomBinder();
        TCPData receivedData = (TCPData)bf.Deserialize(ms);
        Debug.Log("Got the data");
        foreach (string s in receivedData.stuff) {
            Debug.Log(s);
        }
        return receivedData;
    }
}
Again, the same code is used on both sides.
The data I am trying to transfer looks like this:
[Serializable, StructLayout(LayoutKind.Sequential)]
public struct TCPData {
    public TCPData(bool predefined) {
        stuff = new string[2] { "Hello", "World" };
        ints = new List<int>() { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    }

    public string[] stuff;
    public List<int> ints;
}
The custom binder is from here.
Without it I get an assembly error; with it I get:
Binary stream '0' does not contain a valid BinaryHeader. Possible causes are invalid stream or object version change between serialization and deserialization.
Now the problem:
Sending this from the PC to the laptop: 100% success rate.
Sending this from the laptop to the PC: 20% success rate (the other 80% give the exception above).
How is it even possible that it "sometimes" works?
Shouldn't it be 100% or 0%?
How do I get it to work?
Thanks
E1: OK, thanks to all the suggestions I managed to increase the chance of success, but it still occasionally fails.
I now send a data-size "packet" first, which is received correctly about 80% of the time, but in some cases the number I read from the byte[] is 3096224743817216 (insanely big) compared to the ~500 that was sent.
I am using the Int64 data type.
E2: In E1 I was sending the data-length packet separately; now I have the length and the data merged, which does interpret the length properly, but now I am unable to deserialize the data... every time I get The input stream is not a valid binary format. The starting contents (in bytes) are: 00-00-00-00-00-00-04-07-54-43-50-44-61-74-61-02-00 ...
I read the first 8 bytes from the stream and the remaining 'x' bytes are the data; deserializing it on the server works, deserializing the same data on the receiving side throws.
E3: Fixed it by rewriting the stream handling code, I made a mistake somewhere in there ;)

NetworkStream.Read() doesn't block until it reads the requested number of bytes:
"This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and return zero bytes."
You must:
1) Know how many bytes you are expecting, and
2) Loop on Read() until you have received the expected bytes.
If you use a higher-level protocol like HTTP or Web Sockets they will handle this "message framing" for you. If you code on TCP/IP directly, then that's your responsibility.
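For point 2, here is a minimal sketch of what such a read loop can look like, assuming the sender writes a 4-byte length prefix (e.g. BitConverter.GetBytes(bytes.Length)) before the serialized payload; the ReadExactly helper name is just for illustration:
// Reads exactly 'count' bytes, looping because a single Read() may return fewer.
static byte[] ReadExactly(NetworkStream stream, int count) {
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count) {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed before the full message arrived.");
        offset += read;
    }
    return buffer;
}

// Usage on the receiving side: length prefix first, then the payload.
byte[] lengthPrefix = ReadExactly(stream, 4);
int length = BitConverter.ToInt32(lengthPrefix, 0);
byte[] payload = ReadExactly(stream, length);
using (MemoryStream ms = new MemoryStream(payload)) {
    BinaryFormatter bf = new BinaryFormatter();
    TCPData receivedData = (TCPData)bf.Deserialize(ms);
}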

Related

C# TCP stream, how to receive data when you don't know how much you will get

I'm coding a game with Unity and I was wondering how I can receive data from my server without knowing how much I will get.
Until now I was using this to communicate with the server:
public string buffer;

// client is a class where I store the TCPclient and the NetworkStream
// (it should be called server but I didn't change it)
private void sendToClient(string message)
{
    byte[] response = new byte[256];
    byte[] data = System.Text.Encoding.ASCII.GetBytes(message);
    buffer = string.Empty;
    client.stream.Write(data, 0, data.Length);
    client.stream.Read(response, 0, response.Length);
    buffer = System.Text.Encoding.ASCII.GetString(response).ToString();
}
The problem is that the server can send me information about enemies on the map, and the amount depends on how many enemies there are. If there is only one, I receive its position, but if there are two, I receive the positions twice, until the amount of information overflows the response variable.
So I was wondering how I can do this properly.
Edit:
I was able to fix my problem by using StreamWriter and StreamReader from the System.IO namespace.
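For completeness, a minimal sketch of what that fix can look like, assuming the messages themselves never contain newlines; client.stream is the NetworkStream from the code above and the message text is made up. StreamWriter.WriteLine and StreamReader.ReadLine give you newline-delimited framing, so each ReadLine returns one complete message no matter how the bytes were split across reads:
// Sender: wrap the stream once; AutoFlush pushes each line out immediately.
StreamWriter writer = new StreamWriter(client.stream, Encoding.ASCII) { AutoFlush = true };
writer.WriteLine("ENEMY 12.5 3.4");   // hypothetical message format

// Receiver: ReadLine blocks until a full newline-terminated message has arrived.
StreamReader reader = new StreamReader(client.stream, Encoding.ASCII);
string message = reader.ReadLine();   // null means the connection was closed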

How to ensure the whole packet is received

public void read(byte[] bytess)
{
    int davar = this.clientSocket.Receive(bytess);
    MemoryStream m = new MemoryStream(bytess);
    BinaryFormatter b = new BinaryFormatter();
    m.Position = 0;
    SPacket information = b.Deserialize(m) as SPacket;
    Image imageScreenShot = information.ScreenShot;
    if (information.Premissionize)
        Premitted = true;
    if (information.Text != "")
    {
        cE.GetMessageFromServer(information.Text);
    }
    if (imageScreenShot == null)
        return;
    Bitmap screenShot = new Bitmap(imageScreenShot);
    cE.UpdatePhoto(screenShot);
    //screenShot.Dispose();
    //Form1.t.Text = forText;
}
I have this read function in the client, and when I run it online between two LAN computers a deserialization exception is thrown.
I guess that something is delaying the packet and only part of it has arrived. It says that the binary header is not valid.
How can I make sure in C# that I got the whole packet?
By the way, this is TCP.
The Receive function reads at least one byte and at most as many bytes as have been sent. Right now you assume that a single read will read everything, which is not the case.
Deserialize from a new NetworkStream(socket). This allows BinaryFormatter to draw bytes from the socket.
What you wrote there about packets being delayed and such is not accurate. TCP shields you from that.
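A minimal sketch of that suggestion, assuming clientSocket is the connected Socket from the question and SPacket is marked [Serializable]:
// Let BinaryFormatter read from the connection itself; it pulls more bytes
// from the socket as it needs them instead of relying on one Receive() call.
using (NetworkStream ns = new NetworkStream(this.clientSocket, ownsSocket: false))
{
    BinaryFormatter b = new BinaryFormatter();
    SPacket information = (SPacket)b.Deserialize(ns);
    // ...handle 'information' exactly as in read() above
}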

C# TCP NetworkStream missing a few bytes of data

I'm having two problems, and after trying a few techniques I've read on Stack Overflow, the problem persists. I'm trying to send a file from the server to the client with the code below, but the file always arrives a few bytes short, causing file corruption. The second problem is that the stream doesn't close, despite sending a zero-length packet at the end to indicate the transfer is finished without closing the connection.
Server code snippet:
/*
 * Received request from client for file, sending file to client.
 */

// open file to send to client
FileStream fs = new FileStream(fileLocation, FileMode.Open, FileAccess.Read);
byte[] data = new byte[1024];
long fileSize = fs.Length;
long sent = 0;
int count = 0;
while (sent < fileSize)
{
    count = fs.Read(data, 0, data.Length);
    netStream.Write(data, 0, count);
    sent += count;
}
netStream.Write(new byte[1024], 0, 0); // send zero length byte stream to indicate end of file.
fs.Flush();
netStream.Flush();
Client code snippet:
TcpClient client;
NetworkStream serverStream;

/*
 * [...] client connect
 */

// send request to server for file
byte[] dataToSend = SerializeObject(obj);
serverStream.Write(dataToSend, 0, dataToSend.Length);

// create filestream to save file
FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write);

// handle response from server
byte[] response = new byte[client.ReceiveBufferSize];
byte[] bufferSize = new byte[1024];
int bytesRead;
while ((bytesRead = serverStream.Read(bufferSize, 0, bufferSize.Length)) > 0 && client.ReceiveBufferSize > 0)
{
    Debug.WriteLine("Bytes read: " + bytesRead);
    fs.Write(response, 0, bytesRead);
}
fs.Close();
With UDP you can transmit an effectively empty packet, but TCP won't allow you to do that. At the application layer the TCP protocol is a stream of bytes, with all of the packet-level stuff abstracted away. Sending zero bytes will not result in anything happening at the stream level on the client side.
Signalling the end of a file transfer can be as simple as having the server close the connection after sending the last block of data. The client will receive the final data packet then note that the socket has been closed, which indicates that the data has been completely delivered. The flaw in this method is that the TCP connection can be closed for other reasons, leaving a client in a state where it believes that it has all the data even though the connection was dropped for another reason.
So even if you are going to use the 'close on complete' method to signal end of transfer, you need to have a mechanism that allows the client to identify that the file is actually complete.
The most common form of this is to send a header block at the start of the transfer that tells you something about the data being transferred. This might be as simple as a 4-byte length value, or it could be a variable-length descriptor structure that includes various metadata about the file such as its length, name, create/modify times and a checksum or hash that you can use to verify the received content. The client reads the header first, then processes the rest of the data in the stream as content.
Let's take the simplest case, sending a 4-byte length indicator at the start of the stream.
Server Code:
public void SendStream(Socket client, Stream data)
{
    // Send length of stream as first 4 bytes
    byte[] lenBytes = BitConverter.GetBytes((int)data.Length);
    client.Send(lenBytes);

    // Send stream data
    byte[] buffer = new byte[1024];
    int rc;
    data.Position = 0;
    while ((rc = data.Read(buffer, 0, 1024)) > 0)
        client.Send(buffer, rc, SocketFlags.None);
}
Client Code:
public bool ReceiveStream(Socket server, Stream outdata)
{
    // Get length of data in stream from first 4 bytes
    byte[] lenBytes = new byte[4];
    if (server.Receive(lenBytes) < 4)
        return false;
    long len = (long)BitConverter.ToInt32(lenBytes, 0);

    // Receive remainder of stream data
    byte[] buffer = new byte[1024];
    int rc;
    while ((rc = server.Receive(buffer)) > 0)
        outdata.Write(buffer, 0, rc);

    // Check that we received the expected amount of data
    return len == outdata.Position;
}
Not much in the way of error checking and so on, and blocking code in all directions, but you get the idea.
There is no such thing as sending "zero bytes" in a stream. As soon as the stream sees you're trying to send zero bytes it can just return immediately and will have done exactly what you asked.
Since you're using TCP, it is up to you to use an agreed-upon protocol between the client and server. For example:
The server could close the connection after sending all its data. The client would see this as a "Read" that completes with zero bytes returned.
The server could send a header of a fixed size (maybe 4 bytes) that includes the length of the upcoming data. The client could then read those 4 bytes and would then know how many more bytes to wait for.
Finally, you might need a "netStream.Flush()" in your server code above (if you intended to keep the connection open).

Under what conditions does a NetworkStream not read in all the data at once?

In the callback for NetworkStream.BeginRead I seem to notice that all bytes are always read. I see many tutorials check to see if the BytesRead is less than the total bytes and if so, read again, but this never seems to be the case.
The condition if (bytesRead < totalBytes) never fires, even if a lot of data is sent at once (thousands of characters) and even if the buffer size is set to a very small value (16 or so).
I have not tested this with the 'old-fashioned way' as I am using Task.Factory.FromAsync instead of calling NetworkStream.BeginRead and providing a callback where I call EndRead. Perhaps Tasks automatically include this functionality of not returning until all data is read? I'm not sure.
Either way, I am still curious as to when all data would not be read at once. Is it even required to check if not all data was read, and if so, read again? I cannot seem to get the conditional to ever run.
Thanks.
Try sending megabytes of data over a slow link. Why would the stream want to wait until it was all there before giving the caller any of it? What if the other side hadn't closed the connection - there is no concept of "all the data" at that point.
Suppose you open a connection to another server and call BeginRead (or Read) with a large buffer, but it only sends 100 bytes, then waits for your reply - what would you expect NetworkStream to do? Never give you the data, because you gave it too big a buffer? That would be highly counterproductive.
You should absolutely not assume that any stream (with the arguable exception of MemoryStream) will fill the buffer you give it. It's possible that FileStream always will for local files, but I'd expect it not to for shared files.
EDIT: Sample code which shows the buffer not being filled - making an HTTP 1.1 request (fairly badly :)
// Please note: this isn't nice code, and it's not meant to be. It's just quick
// and dirty to demonstrate the point.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class Test
{
    static byte[] buffer;

    static void Main(string[] arg)
    {
        TcpClient client = new TcpClient("www.yoda.arachsys.com", 80);
        NetworkStream stream = client.GetStream();
        string text = "GET / HTTP/1.1\r\nHost: yoda.arachsys.com:80\r\n" +
            "Content-Length: 0\r\n\r\n";
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();

        buffer = new byte[1024 * 1024];
        stream.BeginRead(buffer, 0, buffer.Length, ReadCallback, stream);
        Console.ReadLine();
    }

    static void ReadCallback(IAsyncResult ar)
    {
        Stream stream = (Stream) ar.AsyncState;
        int bytesRead = stream.EndRead(ar);
        Console.WriteLine(bytesRead);
        Console.WriteLine("Asynchronous read:");
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, bytesRead));

        string text = "Bad request\r\n";
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();

        Console.WriteLine();
        Console.WriteLine("Synchronous:");
        StreamReader reader = new StreamReader(stream);
        Console.WriteLine(reader.ReadToEnd());
    }
}
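To answer the "is it even required" part directly: yes, any single read may return fewer bytes than you asked for, so if you need a fixed amount you have to loop. A minimal async sketch of such a loop, using NetworkStream.ReadAsync (available on .NET 4.5 or later) rather than Task.Factory.FromAsync, purely for brevity:
// Keeps reading until 'count' bytes have arrived or the connection is closed.
static async Task<byte[]> ReadExactlyAsync(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed before all data arrived.");
        offset += read;
    }
    return buffer;
}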

Send server multiple messages? C#

I have a quick and dirty question. As it stands, I have two clients and a server running. I can communicate messages from the clients to the server without any problem. My problem appears when I want to read two messages from the client, rather than just one.
The error which i receive is: IOException was unhandled. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Here is my code on the server side:
private static void HandleClientComm(object client)
{
    /** creating a list which contains DatabaseFile objects **/
    List<DatabaseFile> theDatabase = new List<DatabaseFile>();
    TcpClient tcpClient = (TcpClient)client;
    NetworkStream clientStream = tcpClient.GetStream();
    byte[] message = new byte[4096];
    int bytesRead;
    do
    {
        bytesRead = 0;
        try
        {
            // Blocks until a client sends a message
            bytesRead = clientStream.Read(message, 0, 4096);
        }
        catch (Exception)
        {
            // A socket error has occurred
            break;
        }
        if (bytesRead == 0)
        {
            // The client has disconnected from the server
            break;
        }
        // Message has successfully been received
        ASCIIEncoding encoder = new ASCIIEncoding();
        Console.WriteLine("To: " + tcpClient.Client.LocalEndPoint);
        Console.WriteLine("From: " + tcpClient.Client.RemoteEndPoint);
        Console.WriteLine(encoder.GetString(message, 0, bytesRead));
        if (encoder.GetString(message, 0, bytesRead) == "OptionOneInsert")
        {
            byte[] message2 = new byte[4096];
            int bytesRead2 = 0;
            bytesRead2 = clientStream.Read(message, 0, 4096); // ERROR occurs here!
            Console.WriteLine("Attempting to go inside insert)");
            Menu.Insert(theDatabase, bytesRead2);
        }
Here is my client code:
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] buffer = encoder.GetBytes("OptionOneInsert");
Console.ReadLine();
clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
NetworkStream clientStream2 = client.GetStream();
String text = System.IO.File.ReadAllText("FirstNames.txt");
clientStream2.Write(buffer, 0, buffer.Length);
clientStream2.Flush();
ASCIIEncoding encoder2 = new ASCIIEncoding();
byte[] buffer2 = encoder2.GetBytes(text);
Console.WriteLine("buffer is filled with content");
Console.ReadLine();
When the client sends the message "OptionOneInsert" it is received by the server just fine. It's only when I attempt to send the string called "text" that the issue appears!
Any help would be greatly appreciated - I'm not all that familiar with sockets, hence I've been struggling with trying to understand this for some time now.
You've got a big problem here - there's nothing to specify the end of one message and the start of another. It's quite possible that the server will receive two messages in one go, or half a message and then the other half.
The simplest way of avoiding that is to prefix each message with the number of bytes in it, e.g. as a fixed four-byte format. So to send a message you would:
Encode it from a string to bytes (ideally using UTF-8 instead of ASCII unless you're sure you'll never need any non-ASCII text)
Write out the length of the byte array as a four-byte value
Write out the content
On the server:
Read four bytes (looping if necessary - there's no guarantee you'd even read those four bytes together, although you almost certainly will)
Convert the four bytes into an integer
Allocate a byte array of that size
Loop round, reading from "the current position" to the end of the buffer until you've filled the buffer
Convert the buffer into a string
Another alternative is simply to use BinaryReader and BinaryWriter - ReadString and Write(string) use length-prefixing, admittedly in a slightly different form.
Another alternative you could use is to have a delimiter (e.g. carriage-return) but that means you'll need to add escaping in if you ever need to include the delimiter in the text to transmit.
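A minimal sketch of the BinaryReader/BinaryWriter alternative, assuming clientStream is the NetworkStream on each side; BinaryWriter.Write(string) length-prefixes every string, so each ReadString() call returns exactly one complete message:
// Client side: every Write(string) carries its own length prefix.
BinaryWriter writer = new BinaryWriter(clientStream, Encoding.UTF8);
writer.Write("OptionOneInsert");
writer.Write(System.IO.File.ReadAllText("FirstNames.txt"));
writer.Flush();

// Server side: each ReadString() blocks until one whole length-prefixed
// string has arrived, so messages never run together or arrive half-read.
BinaryReader reader = new BinaryReader(clientStream, Encoding.UTF8);
string command = reader.ReadString();       // "OptionOneInsert"
string fileContents = reader.ReadString();  // contents of FirstNames.txt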
