PHP to C# - Socket - File corrupts randomly

I have a TCP connection in which the client is PHP and the server is C#.
This socket connection transfers an image to the socket server, but
sometimes the transfer randomly gets corrupted [the image hash is different].
PHP Client
$file = file_get_contents('img.bmp');
socket_write($socket, $file . $cordinates); // sends the image + some other data
$recv = socket_read($socket, 500, PHP_BINARY_READ); // read the server response
This stream always transfers a Bitmap image + some other data.
C#
this.DataSocket = this.Listner.Accept();
int filelength = this.DataSocket.Receive(this.buffer, this.buffer.Length, SocketFlags.None);
I found that with a fresh browser [newly opened] this never fails, but when I use the service several times in quick succession from the same browser, it tends to fail.
When I check with a different browser or a new instance of the browser, it never fails in the first few attempts.
I thought it was a caching problem, but I disabled caching using headers and the same problem persists.

You can't simply expect to write an entire file to the socket at once, nor can you expect to read the file from the socket in one operation. The read and write calls in just about any network programming API, from BSD sockets to WinSock to the .NET network classes, transmit or receive data up to the desired byte count, not necessarily all of it.
If you look at the documentation for PHP's socket_write, for example:
Returns the number of bytes successfully written to the socket or FALSE on failure. The error code can be retrieved with socket_last_error(). This code may be passed to socket_strerror() to get a textual explanation of the error.
Note:
It is perfectly valid for socket_write() to return zero which means no bytes have been written. Be sure to use the == operator to check for FALSE in case of an error.
You will typically want to choose a block size like 4096 or 16384 bytes and loop, transmitting or receiving that block size until the desired number of bytes has been transferred. Your code has to check the return value of the send or receive function you're calling and adjust your file pointer accordingly. If transmit returns 0, that could just mean the send buffer is full (not fatal), so continue sending (you might want a Sleep(0) delay). If receive returns 0, that usually means the other side has cleanly closed the connection.
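For example, a send loop on the C# side might look like the sketch below (the SendAll helper name is mine, purely for illustration); it keeps calling Send until every byte has been handed to the socket:

using System.Net.Sockets;

// Sketch: keep calling Send until the whole buffer has been handed off.
// Send may accept fewer bytes than requested, so advance by what was actually sent.
static void SendAll(Socket socket, byte[] data)
{
    int sent = 0;
    while (sent < data.Length)
    {
        sent += socket.Send(data, sent, data.Length - sent, SocketFlags.None);
    }
}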
One of the most critical flaws in your network code is that you're not sending the size of the file before you send the file data, so there's no way for the receiver to know how much to read before sending its response. For a simple operation like this, I'd suggest just sending a binary 32-bit integer (4 bytes) as part of the schema for your operation. The receiver first reads 4 bytes, and from that knows how many more bytes need to be read (one buffer size at a time). The receiver keeps reading until it has that many bytes.
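Here is a minimal sketch of that receiving side in C# (ReceiveExact is a hypothetical helper name; it also assumes the PHP client writes the length as a 4-byte little-endian integer, e.g. with pack('V', ...)):

using System;
using System.Net.Sockets;

// Sketch: loop until exactly 'count' bytes have arrived, or the peer closes.
static bool ReceiveExact(Socket socket, byte[] buffer, int count)
{
    int received = 0;
    while (received < count)
    {
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0) return false; // 0 means the other side closed the connection
        received += n;
    }
    return true;
}

// Usage: read the 4-byte length prefix first, then exactly that many payload bytes.
byte[] header = new byte[4];
ReceiveExact(socket, header, 4);
int length = BitConverter.ToInt32(header, 0); // assumes little-endian on both ends
byte[] payload = new byte[length];
ReceiveExact(socket, payload, length);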
I hope this helps. It would be great if socket code were as simple as the usage you attempted, but unfortunately it isn't. You have to select a buffer size, and then keep reading or writing buffers of that size until you get what you want, and you have to convey to the other side how much data you plan on sending.

That you think caching has anything to do with the problem implies that either there is a lot of functionality outside of the code you've published which is affecting the result or that you are a very long way from understanding the problem.
Without knowing the structure of bmp files, my first concern would be how you separate the file from the additional info sent. A few things you could try...
If '$cordinates' (sic) is a fixed size, then put it at the front of the message, not the back.
Log the size sent from PHP and the size received.
base64-encode the binary file before sending it, and decode it at the receiving end (see the sketch below).
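On the C# side, the base64 idea could look like this sketch. It assumes (my assumption, not part of the original code) that the PHP client sends base64_encode($file) followed by a "\n" terminator:

using System;
using System.IO;
using System.Net.Sockets;

// Sketch: base64 output is pure ASCII, so a '\n' terminator can never occur
// inside the encoded payload, which makes the message boundary unambiguous.
using (var stream = new NetworkStream(socket, ownsSocket: false))
using (var reader = new StreamReader(stream))
{
    string base64Text = reader.ReadLine();                    // reads up to the '\n' the client appends
    byte[] fileBytes = Convert.FromBase64String(base64Text);  // back to the original image bytes
}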

None of the above solutions worked for me, but I found that creating a new instance after each request solves the problem. I don't think that's a reliable approach, though.
I also tried the client using ASP.NET, with the same results. I don't think the problem is with the PHP client; it's surely a problem with the socket server.

Related

C# Socket.Send( ): does it send all data or not?

I was reading about sockets in the book "C# Network Programming" by Richard Blum. The following excerpt states that the Send() method is not guaranteed to send all the data passed to it.
byte[] data = new byte[1024];
int sent = socket.Send(data);
On the basis of this code, you might be tempted to presume that the entire 1024-byte data buffer was sent to the remote device... but this might be a bad assumption. Depending on the size of the internal TCP buffer and how much data is being transferred, it is possible that not all the data supplied to the Send() method was actually sent.
However, when I looked at the Microsoft documentation (https://msdn.microsoft.com/en-us/library/w93yy28a(v=vs.110).aspx), it says:
If you are using a connection-oriented protocol, Send will block until
all of the bytes in the buffer are sent, unless a time-out was set
So which is it? The book was published in 2004, so has it changed since then?
I'm planning to use asynchronous sockets, so my next question is, would BeginSend() send all data?
All you had to do was read the rest of the exact same paragraph you quoted. There's even an exception to your quote given in the very same sentence.
If you are using a connection-oriented protocol, Send will block until all of the bytes in the buffer are sent, unless a time-out was set by using Socket.SendTimeout. If the time-out value was exceeded, the Send call will throw a SocketException. In nonblocking mode, Send may complete successfully even if it sends less than the number of bytes in the buffer. It is your application's responsibility to keep track of the number of bytes sent and to retry the operation until the application sends the bytes in the buffer.
For BeginSend, the behavior is also described:
Your callback method should invoke the EndSend method. When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception.
That's not a very nice design and defeats the whole point of a callback! Consider using SendAsync instead (and then you still need to check the BytesTransferred property).
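A hedged sketch of that SendAsync route, checking BytesTransferred as suggested (the continuation logic is illustrative, not a complete implementation):

using System.Net.Sockets;

// Sketch: send via SocketAsyncEventArgs and verify BytesTransferred afterwards.
static void SendWithCheck(Socket socket, byte[] data)
{
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(data, 0, data.Length);
    args.UserToken = socket;
    args.Completed += OnSendCompleted;
    if (!socket.SendAsync(args))
        OnSendCompleted(socket, args); // false = completed synchronously; Completed won't fire
}

static void OnSendCompleted(object sender, SocketAsyncEventArgs e)
{
    var socket = (Socket)e.UserToken;
    if (e.SocketError != SocketError.Success) return; // real code would handle the error
    if (e.BytesTransferred < e.Count)
    {
        // Not everything went out: move the window forward and send the remainder.
        e.SetBuffer(e.Offset + e.BytesTransferred, e.Count - e.BytesTransferred);
        if (!socket.SendAsync(e))
            OnSendCompleted(socket, e);
    }
}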
Both of the resources you quoted are correct. I think the wording could have been better though.
In the MSDN docs it is also written that:
There is also no guarantee that the data you send will appear on the
network immediately. To increase network efficiency, the underlying
system may delay transmission until a significant amount of outgoing
data is collected.
So the Send method blocks until the underlying system has had room to buffer your data for a network send.
A successful completion of the Send method means that the underlying
system has had room to buffer your data for a network send.

C# sending huge and small data buffers by socket and TCP

I want to send huge buffers (from 100 MB to 1 GB) of data over TCP. I solved it by dividing the buffer into smaller ones (approximately 1 MB each) and sending them with socket.Send(). Each call to socket.Send() sends part of the data (a smaller buffer) packed in a specific structure: [start byte (1B), timestamp (4B), command (4B), length of data (4B), data to send (?B), CRC (1B), end byte (1B)]. Everything works fine when only one huge buffer is sent over a given port. But when I try to send another, very small buffer (e.g. 20 bytes) at the same time using the same TCP port, the data in the buffers gets mixed and it's no longer possible to decode them. The 'start byte' and 'end byte' in the buffer are not useful for finding the start and end of a message, because those byte values will probably also appear in the data.
EDIT: The issue does not affect the order of, or the IDs between, packages, but the bytes inside the packages. At the beginning everything works fine and each buffer is decoded properly. After a while it becomes impossible to decode a buffer because it contains incorrect data; it looks as if bytes in the buffer were moved or changed. Fields at the beginning of the buffer (timestamp, command, length) contain impossible values, so when I read the length of the sent data I get a value like -1534501133 instead of 1048556 (1048556 is the correct maximum size of the data sent in one package). It happens randomly, but it is always connected with the moment a smaller, independent buffer is sent. The smaller buffers are sent repeatedly using timers, and the problem happens at random moments. Sometimes it is even possible to send all the data (e.g. 300 MB) without a problem, but that happens very rarely.
I hope, I described it clearly enough.
Do you have any suggestions how to avoid this problem?
Tag your data with a unique ID so you know which data relates to which message. Also, separate the packet header from the packet payload.
So your first packet would be [ID][PACKETTYPE][TIME][COMMAND][LENGTH][CRC]
The second would be [ID][PACKETTYPE][DATA]
You can then match up the IDs with the types of packet. The packet type would be 'HEADER' or 'PAYLOAD'; the header contains the metadata for the payload, allowing you to make sure that it doesn't get mixed up with other data.
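A sketch of that scheme in C# (the field widths and the SendFramed name are my assumptions; it reuses a send-all loop like the one sketched in the first answer above, and BinaryWriter writes little-endian):

using System.IO;
using System.Net.Sockets;

// Sketch: one header packet followed by one payload packet, tied together by ID.
static void SendFramed(Socket socket, int id, int timestamp, int command, byte crc, byte[] data)
{
    using (var ms = new MemoryStream())
    using (var w = new BinaryWriter(ms))
    {
        w.Write(id);            // ID
        w.Write((byte)0);       // PACKETTYPE: 0 = HEADER
        w.Write(timestamp);     // TIME
        w.Write(command);       // COMMAND
        w.Write(data.Length);   // LENGTH of the payload that follows
        w.Write(crc);           // CRC
        SendAll(socket, ms.ToArray()); // send-all loop as sketched earlier
    }
    using (var ms = new MemoryStream())
    using (var w = new BinaryWriter(ms))
    {
        w.Write(id);            // same ID so the receiver can pair it with its header
        w.Write((byte)1);       // PACKETTYPE: 1 = PAYLOAD
        w.Write(data);          // DATA
        SendAll(socket, ms.ToArray());
    }
}

Note that if several threads (e.g. the timers mentioned in the question) write to the same socket, the two Send calls for one message still have to be serialized, for instance under a lock; otherwise another message can slip in between the header and its payload.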

Socket.Send has a delay?

I am building an application that sends data back to the other side. In my application, I call the
System.Net.Sockets.Socket.Send(byte[]) function.
My customer told me there is a 530 ms delay before this packet is received. However, I have logged everything up to the call to System.Net.Sockets.Socket.Send(byte[]), and I measured that it takes only about 15 ms to hand the array to Send. My customer advised me to:
Flush after sending, but I don't see a flush function on Socket.
Fill up the data buffer before sending; otherwise, if the data is short, I have to force the transmission.
Is either piece of advice correct? I also see there is another parameter of the Send method, SocketFlags. Is SocketFlags of any help here?
There's a common problem with the Nagle algorithm, which tries to 'glue' data sent together into a single packet. It is possible that your code suffers from it as well.
Try to disable it as shown here, or by setting the SocketOptionName.NoDelay option with the SetSocketOption method.
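In C# that is a one-liner (a sketch; socket here stands for your connected Socket instance):

// Disable Nagle's algorithm so small sends are not held back for coalescing
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
// equivalent shorthand: socket.NoDelay = true;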

Minimum guaranteed bytes sent in one go via NetworkStream

I am using the NetworkStream of a TcpClient to send bytes from a .NET server to a Silverlight client that receives them via a Socket. Sending from the client is done via Socket.SendAsync.
My question is: what is the minimum number of bytes I can expect to receive "in one go" on either side, without needing to send the message length too and reassemble multi-part messages myself?
Thanks,
Andrej
You absolutely should send the message length. Network streams are precisely that - streams of information. Basically there are three ways of recognising the end of a message:
Prefix the data with the length (don't forget that it's at least theoretically possible that you won't even get all of the length data in one packet)
Use a message delimiter/terminator (see the sketch after this list)
Only send one message each way per connection, and close the connection afterwards
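For option 2, a hedged C# sketch (the ReadUntil helper name is mine; byte-at-a-time reads are slow, and a real implementation would buffer, but this shows the idea):

using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;

// Sketch: read byte-by-byte until a delimiter marks the end of one message.
static byte[] ReadUntil(Socket socket, byte delimiter)
{
    var message = new List<byte>();
    var one = new byte[1];
    while (true)
    {
        if (socket.Receive(one, 0, 1, SocketFlags.None) == 0)
            throw new IOException("connection closed before the delimiter arrived");
        if (one[0] == delimiter) return message.ToArray();
        message.Add(one[0]);
    }
}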
It depends on your network settings, but the default MTU for network links is 1500 bytes, and most modems take something off that, so most home connections end up with a packet size of about 1460 bytes.
The optimum size for your situation can be calculated.
But people can always have their own settings, so there's no guarantee that you get the optimal packet size for all clients.

C# ==> Asyncsocket reading buffer?

I have a C# server that accepts multiple clients, and multiple messages from each client.
1- In order to start reading from each client I need to pass a buffer (byte array), but the problem is that I don't know how much data the client is going to send. Is there a way to know how much data a client is going to send, so that I can read the correct amount of data?
2- Is it OK if I use only one byte array to read from all clients, or do I need to create a byte array for each client?
Unless your protocol dictates how much data will be sent, no. Typically you read one buffer's worth and then potentially read more. It really depends on the protocol, though. If the client can only send one message per connection, you'll typically keep reading until a call returns 0 bytes. Otherwise, the messages either have delimiters or a length prefix.
Absolutely not - assuming you're going to be reading from multiple clients concurrently (why else would you use asynchronous communications?) you'd end up with the different clients' data all being written over each other. Create a new byte array for each client. Depending on exactly what you do with the data you may be able to reuse the same byte array for the next read for the same client - and you could reuse the byte array for later clients, if you really wanted... but don't read from multiple clients at the same time into the same buffer.
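A hedged sketch of the one-buffer-per-client pattern with BeginReceive (the ClientState name and 4096-byte buffer size are mine):

using System;
using System.Net.Sockets;

// Sketch: each accepted client carries its own buffer in the async state object.
class ClientState
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
}

static void OnAccept(IAsyncResult ar)
{
    var listener = (Socket)ar.AsyncState;
    var state = new ClientState { Socket = listener.EndAccept(ar) };
    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
    listener.BeginAccept(OnAccept, listener); // keep accepting further clients
}

static void OnReceive(IAsyncResult ar)
{
    var state = (ClientState)ar.AsyncState;
    int read = state.Socket.EndReceive(ar);
    if (read == 0) { state.Socket.Close(); return; } // client disconnected
    // process state.Buffer[0..read) here, then queue the next read for this client only
    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
}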
So is there a way to know how much data a client is going to send so that i can start reading for the correct amount of data?
Any protocol ought to have some mechanism for a client to indicate when it is done sending data, either as a "length" value that is sent before the actual data, or as a special terminating sequence that is sent after the data.
Is it oK if i use only 1 byte array to read from all clients? or do i need to create a byte array for reading from each client?
That depends on how your program works. If you'll have multiple simultaneous clients, you obviously can't have just a single buffer, because they'll end up overwriting each other. If it's one client after another, only one at a time, there's no problem with having just one buffer.
