TCP client to server corrupts data - c#

This is very weird.
When I send data from a TCP client to a TCP server, the data gets corrupted, for some extremely odd and quite annoying reason.
Here is the server code:
TcpListener lis = new TcpListener(IPAddress.Any, 9380); // it needs to be 9380, crucial
lis.Start();
Socket sk = lis.AcceptSocket();
byte[] pd = new byte[sk.ReceiveBufferSize];
sk.Receive(pd);
// cd is the current directory, filename is the file, ex picture.png,
// that was previously sent to the server with UDP.
File.WriteAllBytes(Path.Combine(cd, filename), pd);
Here is the client code:
// disregard "filename" var, it was previously assigned in earlier code
byte[] p = File.ReadAllBytes(filename);
TcpClient client = new TcpClient();
client.Connect(IPAddress.Parse(getIP()), 9380);
Stream st = client.GetStream();
st.Write(p, 0, p.Length);
What happens is extreme data loss. Sometimes I upload a 5 KB file, but when the server receives and writes the file I sent, it turns out to be something crazy like 2 bytes. Or it ends up 8 KB instead! Applications sent through this won't even run, pictures show errors, etc.
I would like to note, however, that client -> server fails while server -> client works. Strange.
By the way, in case you are interested, this is for sending files. I'm using .NET 4.5, and my network is extremely reliable.

Hi, I think you have some misconceptions about TCP.
I can see you are setting up a server with a receive buffer of x bytes.
Firstly, have you checked how many bytes that actually is? I suspect it is quite small, something like 1024 bytes.
When data is written over TCP it is split into segments. Each time you call Receive you will get some of the data that was sent, and the return value tells you how many bytes were actually received. Since, in your use case, the receiver does not know the size of the incoming data, you will have to build a small protocol between your client and server to communicate it. The simplest such protocol is to write a 4-byte integer specifying the size of the data that follows.
The communication would go like this (a C# sketch follows below):
Client:
Write 4 bytes (file size)
Write data bytes
Server:
Read 4 bytes (file size)
While fewer than file-size bytes have been received, read from the stream and push the bytes into a memory/file stream
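A minimal sketch of that length-prefixed exchange in C# (the helper names SendFile, ReceiveFile and ReadExactly are made up for illustration, and BitConverter assumes both ends use the same byte order):

using System;
using System.IO;
using System.Net.Sockets;

static class LengthPrefixedTransfer
{
    // Client side: write a 4-byte length header, then the payload.
    public static void SendFile(NetworkStream stream, string path)
    {
        byte[] payload = File.ReadAllBytes(path);
        byte[] header = BitConverter.GetBytes(payload.Length); // 4 bytes
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Server side: read the 4-byte header, then keep reading until that many bytes arrive.
    public static byte[] ReceiveFile(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    // A single Read is not guaranteed to return everything, so loop until 'count' bytes are in.
    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new IOException("Connection closed before all data arrived.");
            offset += read;
        }
        return buffer;
    }
}

The key point is the loop in ReadExactly: one Receive/Read call returns whatever happens to be buffered at that moment, not one whole "message".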

Related

TCP/IP significant data loss. Am I sending too much? Beginner TCP/IP programmer

I am a beginner when it comes to TCP/IP applications. I'm coding a game and recently added network multiplayer. I used the TcpListener and TcpClient objects from System.Net.Sockets to implement networking. The game works great when I test on localhost or over LAN. Later, I tested it over a greater distance: between my PC and my Azure VM. The results are shocking. The client received only about 7% of the messages sent by the server. The server received 84% of the messages. I know that TCP/IP doesn't have a concept of a "message" because it sends data as a stream. This is what I consider a message:
NetworkStream networkStream = ClientSocket.GetStream();
networkStream.Write(_bytes, 0, _bytes.Length); //_bytes is array of bytes
networkStream.Flush();
My server sends about 20-40 messages per second, but 99% of them are 10-15 bytes long. The client sends ~4 messages per second. My machine has a fast and reliable internet connection, and I assume the Windows Azure data center has a good connection as well. How can I improve the network performance of my application?
EDIT: How client is receiving messages:
NetworkStream serverStream = ClientSocket.GetStream();
byte[] inStream = new byte[10025];
serverStream.Read(inStream, 0, inStream.Length);
I just realized that it might be an interpretation error, meaning that the data is received but somehow misinterpreted. For instance, I also send inside each message a number that represents the total count of sent messages. This number is interpreted fine in the 7% of messages received by the client. However, messages received by the server have some strange numbers in them. For example, I received messages 31, 32, 33, then 570425344, then 35, and then 0. So I guess the bytes might be offset. I don't know how or why that would happen.
You might not get all data in one chunk.
Use the code
var actualReceivedLength = serverStream.Read(inStream, 0, inStream.Length);
and check actualReceivedLength to see how much you actually received; call Read again until you have read as much as you expect.
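For example, a minimal sketch, assuming you know how many bytes the current message should be (in practice that means adding a length prefix or similar framing, which the current protocol doesn't have):

using System.Net.Sockets;

// Sketch: accumulate reads until 'expected' bytes of one message have arrived.
static byte[] ReadMessage(NetworkStream serverStream, int expected)
{
    byte[] inStream = new byte[expected];
    int total = 0;
    while (total < expected)
    {
        int actualReceivedLength = serverStream.Read(inStream, total, expected - total);
        if (actualReceivedLength == 0)
            break; // remote side closed the connection
        total += actualReceivedLength;
    }
    return inStream;
}

You would call it as ReadMessage(ClientSocket.GetStream(), expectedSize), where expectedSize comes from whatever framing you add.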

Lost data in TCP stream

I know TCP can't lose packets because it is a stream, but still. I'm trying to send (with Node.js) a stream containing five "52" packets. The format should be (1,52,1,52,1,52,1,52,1,52), where 1 is the length of the packet.
I'm receiving the same stream both in a C# console app on the same PC the server runs on,
and on an Android device with a Java app on the local network.
C# output is:
"1,52,1,52,1,52,1,52,1,52"
But the java output looks like:
"1,52,1,52,52,1,52,1,52"
Nodejs code:
b = new Buffer(1);
b.writeInt8(1,0);
this.sock.write(b);
this.sock.write(String.fromCharCode(event)); //event == 52
Java code:
while (true)
{
    int a = in.read(); // in is an instance of InputStream
    if (a != -1) Log.v(getTag(), "" + a);
}
Does anyone have an idea what the problem is?
Thanks in advance
/UPDATE:
socket.bytesWritten returns 10, so it's not on the server side.
OK, that was my bad. Inside the Java application I had a lost-connection handler which read one byte to check whether the connection was still alive, and it was taking that byte from the stream.

UDP Client receives only 1 message

I have a server/client application that I am currently working on. The server is receiving data fine over a WAN, and the client seems to receive data too, but the client only ever receives one communication. Is there anything about a WAN that would make a client always receive only the first return UDP communication and none of the subsequent ones? Thanks for the help.
Client UDP Listening code
private void receiveUDP()
{
    System.Net.IPEndPoint test = new System.Net.IPEndPoint(System.Net.IPAddress.Any, UDP_PORT_NUMBER);
    System.Net.EndPoint serverIP = (System.Net.EndPoint)test;
    server.Bind(serverIP);
    //server.Ttl = 50;
    EndPoint RemoteServ = (EndPoint)listenUDP;
    do
    {
        byte[] content = new byte[1024];
        int data = server.ReceiveFrom(content, ref RemoteServ);
        string message = Encoding.ASCII.GetString(content);
        ProcessCommands(message);
    } while (true);
}
This is a bit of a stab in the dark (since you don't provide enough code to really say what's going on definitively), but there's one major reason why you might consistently see some UDP datagrams not be delivered over a WAN while others always arrive successfully. That reason is the MTU: the Maximum Transmission Unit that can be sent in a single UDP datagram. This can easily produce behaviour like what you're seeing if, for example, your first datagram is a short "I accept your connection" message and you then follow it with datagrams containing large files; the first (small) datagram fits under the MTU and is delivered, while the following (large) datagrams exceed the MTU and are discarded en route.
For UDP over a WAN, the MTU will not be higher than about 1500 bytes, and in many situations may be as low as 1200 bytes. Any packets larger than that are liable to be silently dropped somewhere between the endpoints. To send large blocks of data via UDP, you need to chop them up into pieces smaller than the MTU for the network segment across which you're transmitting them.
On a LAN, you can usually get away with sending datagrams of any size. But as soon as they're being sent over the Internet or otherwise through heterogeneous networks, they're likely to be silently discarded.
If you do need to send large files, you might choose to transmit them via TCP instead; TCP automatically manages chopping data up to fit within the MTU, and ensures that its packets are all received, and received in order - guarantees that you do not get with datagrams sent via UDP.
As I mentioned above, this is a complete stab in the dark and may not actually be related to your actual troubles. But it's the elephant in the room, when all we have to go on is that the first packet always arrives successfully, and later packets never do.
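If you do decide to stay on UDP and send something larger, the splitting itself is straightforward. A rough sketch (the 1200-byte chunk size is a conservative guess, and real code would also need sequence numbers and retransmission, since UDP guarantees neither delivery nor ordering):

using System;
using System.Net;
using System.Net.Sockets;

// Sketch: send a payload as a series of sub-MTU datagrams.
static void SendInChunks(Socket socket, EndPoint remote, byte[] payload)
{
    const int chunkSize = 1200; // stay comfortably below a typical WAN MTU
    for (int offset = 0; offset < payload.Length; offset += chunkSize)
    {
        int size = Math.Min(chunkSize, payload.Length - offset);
        socket.SendTo(payload, offset, size, SocketFlags.None, remote);
    }
}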

C# socket image transfer: file sometimes partially transferred

Problem
I have a PHP client which sends an image file to a C# socket server. My problem is that about 30% of the time the file is only partially transferred and the transfer stops.
PHP END ->
$file = file_get_contents('a.bmp');
socket_write($socket,$file);
C# END ->
int l= Socket.Receive(buffer, buffer.Length, SocketFlags.None);
//create the file using a file stream
How can I always transfer the full file without intermediate states? And why does it happen?
From the documentation for Socket.Receive:
If you are using a connection-oriented Socket, the Receive method will read as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
This means you may get less than the total amount. This is just the way sockets work.
So if you get a partial read, you should call Socket.Receive again. You can use the overload of Socket.Receive that takes an offset to continue reading into the same buffer.
Here is an article that shows how to "keep reading" until you get what you want:
Socket Send and Receive
If you don't know how big the data is, you must keep reading until Socket.Receive returns zero.
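A sketch of that receive loop, assuming the sending side closes its socket once the whole file has been written (which is what makes "Receive returned zero" mean end of file):

using System.IO;
using System.Net.Sockets;

// Sketch: keep receiving until the peer closes the connection,
// appending each chunk to the output file as it arrives.
static void ReceiveFile(Socket socket, string outputPath)
{
    byte[] buffer = new byte[8192];
    using (FileStream file = File.Create(outputPath))
    {
        int received;
        while ((received = socket.Receive(buffer, 0, buffer.Length, SocketFlags.None)) > 0)
        {
            file.Write(buffer, 0, received);
        }
    }
}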

How does NetworkStream work in two directions?

I've read an example of a Tcp Echo Server and some things are unclear to me.
TcpClient client = null;
NetworkStream netStream = null;
try {
    client = listener.AcceptTcpClient();
    netStream = client.GetStream();
    int totalBytesEchoed = 0;
    while ((bytesRcvd = netStream.Read(rcvBuffer, 0, rcvBuffer.Length)) > 0) {
        netStream.Write(rcvBuffer, 0, bytesRcvd);
        totalBytesEchoed += bytesRcvd;
    }
    netStream.Close();
    client.Close();
} catch {
    netStream.Close();
}
When the server receives a packet (in the while loop), it reads the data into rcvBuffer and writes it back to the stream.
What confuses me is the chronological order of messages in the communication. Is the data written with netStream.Write() sent immediately to the client (who may even still be sending), or only after all the data the client has already written to the stream has been processed?
The following question may clarify the previous one: if a client sends some data by writing to the stream, is that data moved to a queue on the server side waiting to be read, so the stream itself is actually "empty"? That would explain why the server can immediately write to the stream - because the data coming from the stream is actually buffered elsewhere...?
A TCP connection is, in principle, full duplex. So you are dealing with 2 separate channels and yes, both sides could be writing at the same time.
Hint: The method call NetworkStream.Read is blocking in that example.
The book is absolutely correct -- raw access to TCP streams does not imply any sort of extra "chunking" and, in this example for instance, a single byte could easily be processed at a time. However, performing the reading and writing in batches (normally with exposed buffers) can allow for more efficient processing (often as a result of fewer system calls). The network layer and network hardware also employ their own forms of buffers.
There is actually no guarantee that data written with Write() will actually be sent before more Reads() successfully complete: even if data is flushed in one layer, that does not imply it is flushed in another, and there is absolutely no guarantee that the data has made its way back over to the client. This is where higher-level protocols come into play.
With this echo example the data is simply shoved through as fast as it can be. Both the Write and the Read will block based upon the underlying network stack (the send and receive buffers in particular), each with their own series of buffers.
[This simplifies things a bit of course -- one could always look at the TCP [protocol] itself which does impose transmission characteristics on the actual packet flow.]
You are right that technically, when performing a Read() operation, you are not reading bits off the wire. You are basically reading buffered data (chunks received by TCP and arranged in the correct order). When sending, you can call Flush(), which in theory should send the data immediately, but modern TCP stacks have a bit of logic governing how to gather data into appropriately sized packets and burst them onto the wire.
As Henk Holterman explained, TCP is a full-duplex protocol (if supported by all the underlying infrastructure), so sending and receiving data is more a question of when your server/client reads and writes data. It's not the case that when your server sends data, the client will read it immediately. The client can be sending its own data and only then perform a Read(); in that case the data will stay in the network buffer longer, and can be discarded after some time if no one reads it. At least that's what I've experienced when dealing with my supa dupa server/client library (-.
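To make the full-duplex point concrete, here is a small sketch (not from the book being discussed) in which one thread reads from a NetworkStream while the main thread writes to it; neither direction has to wait for the other:

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

// Sketch: read and write on the same TCP connection concurrently.
static void RunFullDuplexClient(string host, int port)
{
    using (TcpClient client = new TcpClient(host, port))
    using (NetworkStream stream = client.GetStream())
    {
        Thread reader = new Thread(() =>
        {
            byte[] buffer = new byte[1024];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                Console.WriteLine("Received: " + Encoding.ASCII.GetString(buffer, 0, read));
        });
        reader.Start();

        // This write does not wait for anything to be read first.
        byte[] message = Encoding.ASCII.GetBytes("hello");
        stream.Write(message, 0, message.Length);

        reader.Join(); // runs until the server closes its side of the connection
    }
}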
