I have run into a strange issue with System.Net.Sockets on Windows Phone 8 (WP8).
The communication uses the following scheme: the first 4 bytes hold the package length, the next 4 bytes hold the package number, and the rest is the data. So [4][4][{any}] is one TCP package.
Incoming data is read in these steps (a minimal framing sketch follows the list):
1. Read the first 8 bytes.
2. Get the package length from the first 4 bytes to determine the size of the incoming data.
3. Resize the buffer to the proper size.
4. Read the incoming data into the buffer at an offset of 8 bytes.
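For reference, here is a minimal sketch of reading one such [length][number][data] frame from a Stream, looping because a single TCP read may return fewer bytes than requested. The helper names are illustrative, and it assumes (as the code later in this question does) that the length field counts the 8-byte header as part of the package:

// Hypothetical helper: fills the requested range completely or throws if the stream ends early.
static void ReadExactly(Stream stream, byte[] buffer, int offset, int count)
{
    while (count > 0)
    {
        int n = stream.Read(buffer, offset, count);
        if (n == 0) throw new EndOfStreamException();
        offset += n;
        count -= n;
    }
}

// Reads one frame: [4-byte length][4-byte number][payload].
static byte[] ReadFrame(Stream stream, out int packageNumber)
{
    byte[] header = new byte[8];
    ReadExactly(stream, header, 0, 8);

    int length = BitConverter.ToInt32(header, 0);   // total package length, including the header
    packageNumber = BitConverter.ToInt32(header, 4);

    byte[] payload = new byte[length - 8];
    ReadExactly(stream, payload, 0, payload.Length);
    return payload;
}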
I am sending a lot of packages to the server.
Sometimes the server's responses in the incoming buffer are valid and can be read one by one.
But sometimes it seems the first 8 bytes of the incoming data are skipped, and with steps 1-4 I end up reading the first 8 bytes of the package's data instead of the header.
The infinite receive loop:
while (_channel.Opened)
{
    Debug.WriteLine("Wait for incoming... ");
    Stream responseStream = await _channel.Receive();
    HandleIncomingData(responseStream);
}
Here is the socket code:
public async Task<Stream> Receive()
{
    byte[] buff = new byte[8];
    ManualResetEventSlim mre = new ManualResetEventSlim();
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(buff, 0, buff.Length);
    EventHandler<SocketAsyncEventArgs> completed = (sender, eventArgs) => mre.Set();
    EventHandler<SocketAsyncEventArgs> removeSubscription = (sender, eventArgs) => args.Completed -= completed;
    args.Completed += completed;
    args.Completed += removeSubscription;

    _connectionSocket.ReceiveAsync(args);
    mre.Wait();
    args.Completed -= removeSubscription;

    int len = BitConverter.ToInt32(buff, 0);
    int num = BitConverter.ToInt32(buff, 4);

    if (Math.Abs(_packageNumber - num) < 3)
    {
        Array.Resize(ref buff, len);
        args.SetBuffer(buff, 8, buff.Length - 8);
        args.Completed += completed;
        args.Completed += removeSubscription;
        mre.Reset();
        _connectionSocket.ReceiveAsync(args);
        mre.Wait();
    }

    Debug.WriteLine("Recv TCP package: {0}", args.Buffer.ToDebugString());

    if (args.BytesTransferred == 0)
        throw new SocketException();

    byte[] result = new byte[buff.Length - 8];
    Array.ConstrainedCopy(buff, 8, result, 0, result.Length);
    MemoryStream stream = new MemoryStream(result, false);
    return await Task.FromResult(stream);
}
I am sure this is a cross-threading issue; I did the same thing in a console app.
If, in this part:

mre.Reset();
_connectionSocket.ReceiveAsync(args);
mre.Wait();

I put Thread.Sleep(n), where n > 1, before mre.Wait(), everything works fine. But that is a very crude workaround.
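As an aside (this is only a guess about what the Thread.Sleep timing may be masking): Socket.ReceiveAsync returns false when the operation completes synchronously, and in that case the Completed event is not raised, so the wait handle is never set by the handler. A sketch of guarding for that:

mre.Reset();
// ReceiveAsync returns false if the receive completed synchronously;
// Completed is not raised in that case, so signal the event ourselves.
if (!_connectionSocket.ReceiveAsync(args))
{
    mre.Set();
}
mre.Wait();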
Related
I am working on serial port communication. I am writing to and reading from the port using BaseStream.
port.BaseStream.Write(dataItems, 0, dataItems.Length);
int receivedBytes = port.BaseStream.Read(buffer, 0, (int)buffer.Length);
Thread.Sleep(100);
var receiveData = BitConverter.ToString(buffer, 0, receivedBytes);
After the write, I sleep the thread so that I receive all of the bytes. Is there another way to wait until all the bytes are available?
Note
The last byte should be 22. Also, the above code runs inside a task declared as public async Task PortHitmethod(Iterations iterations).
This is certainly the wrong way to use Stream.Read. The correct pattern is shown in the documentation:
Stream s = new MemoryStream();
...
// Now read s into a byte buffer with a little padding.
byte[] bytes = new byte[s.Length + 10];
int numBytesToRead = (int)s.Length;
int numBytesRead = 0;
do
{
    // Read may return anything from 0 to 10.
    int n = s.Read(bytes, numBytesRead, 10);
    numBytesRead += n;
    numBytesToRead -= n;
} while (numBytesToRead > 0);
s.Close();
P.S. If you think you need Thread.Sleep, you can be sure you are doing something wrong.
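Applied to the serial-port question above, a minimal sketch might look like the following (assuming, as the question notes, that each response ends with the terminator byte 22 and that the terminator only appears at the end; port, dataItems and buffer are the question's variables):

port.BaseStream.Write(dataItems, 0, dataItems.Length);

// Keep reading until the terminator byte (22) arrives or the stream ends,
// instead of sleeping and hoping the full response has been buffered.
int total = 0;
while (total < buffer.Length)
{
    int n = port.BaseStream.Read(buffer, total, buffer.Length - total);
    if (n == 0) break;                  // stream closed
    total += n;
    if (buffer[total - 1] == 22) break; // terminator reached
}
var receiveData = BitConverter.ToString(buffer, 0, total);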
I am writing simple TCP communication programs using TcpListener and TcpClient on .NET 4.7.1.
I designed my own protocol as follows: for each "data unit", the first 4 bytes are an int indicating the length of the data body. Once a complete "data unit" is received, the data body (as a byte[]) is passed to the upper level.
My read and write functions are:
public static byte[] ReadBytes(Stream SocketStream)
{
    int numBytesToRead = 4, numBytesRead = 0, n;
    byte[] Length = new byte[4];
    do
    {
        n = SocketStream.Read(Length, numBytesRead, numBytesToRead);
        numBytesRead += n;
        numBytesToRead -= n;
    } while (numBytesToRead > 0 && n != 0);
    if (n == 0) return null; //network error

    if (!BitConverter.IsLittleEndian) Array.Reverse(Length);
    numBytesToRead = BitConverter.ToInt32(Length, 0); //get the data body length
    numBytesRead = 0;
    byte[] Data = new byte[numBytesToRead];
    do
    {
        n = SocketStream.Read(Data, numBytesRead, numBytesToRead);
        numBytesRead += n;
        numBytesToRead -= n;
    } while (numBytesToRead > 0 && n != 0);
    if (n == 0) return null; //network error

    return Data;
}
public static void SendBytes(Stream SocketStream, byte[] Data)
{
    byte[] Length = BitConverter.GetBytes(Data.Length);
    if (!BitConverter.IsLittleEndian) Array.Reverse(Length);
    SocketStream.Write(Length, 0, Length.Length);
    SocketStream.Write(Data, 0, Data.Length);
    SocketStream.Flush();
}
And I made a simple echo program to test the RTT:
private void EchoServer()
{
    var Listener = new TcpListener(System.Net.IPAddress.Any, 23456);
    Listener.Start();
    var ClientSocket = Listener.AcceptTcpClient();
    var SW = new System.Diagnostics.Stopwatch();
    var S = ClientSocket.GetStream();
    var Data = new byte[1];
    Data[0] = 0x01;
    Thread.Sleep(2000);
    SW.Restart();
    SendBytes(S, Data); //send the PING signal
    ReadBytes(S); //this method blocks until signal received from client
    //System.Diagnostics.Debug.WriteLine("Ping: " + SW.ElapsedMilliseconds);
    Text = "Ping: " + SW.ElapsedMilliseconds;
    SW.Stop();
}
private void EchoClient()
{
    var ClientSocket = new TcpClient();
    ClientSocket.Connect("serverIP.com", 23456);
    var S = ClientSocket.GetStream();
    var R = ReadBytes(S); //wait for PING signal from server
    SendBytes(S, R); //respond immediately
}
In "ReadBytes", I have to read 4 bytes from the NetworkStream first in order to know how many bytes I have to read next. So in total I have to call NetworkStream.Read twice, as shown in above codes.
The problem is: I discovered that calling it twice resulted in around 110ms RTT. While calling it once(regardless of data completeness) is only around 2~10ms(put a "return Length;" immediately after the first do-while loop, or comment out the first do-while loop and hard-code the data length, or read as much as it can in one call to "Read").
If I go for the "read as much as it can in one call" method, it may result in "over-read" of data and I have to write more lines to handle the over-read data to assemble next "data unit" correctly.
Anyone knows what's the cause of the almost 50 times overhead?
As I read from Microsoft:
Microsoft improved the performance of all streams in the .NET Framework by including a built-in buffer.
So even if I call .Read twice, it is only reading from memory, am I correct?
(If you want to test the code, please do it against a real server, e.g. connecting from your home PC; testing on localhost always returns 0 ms.)
Thanks to Jeroen Mostert's comment for the reminder. I added:
ClientSocket.NoDelay = true;
ClientSocket.Client.NoDelay = true;
to both the client and the server side, and the annoying delay is gone; the RTT is back to what I expected.
TcpClient.NoDelay Property
Further tests showed that without setting the NoDelay option, each side (client and server) contributed around 50 ms of delay, for a total of around 100 ms of RTT "overhead".
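For reference, a hedged sketch of where the option can be set in the echo snippets above (variable names are illustrative; only the NoDelay lines are the addition):

// Server side: disable Nagle's algorithm on the accepted connection.
var ClientSocket = Listener.AcceptTcpClient();
ClientSocket.NoDelay = true;

// Client side: disable it as well, before exchanging data.
var Client = new TcpClient();
Client.NoDelay = true;
Client.Connect("serverIP.com", 23456);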
I'm building an application where I need to read 15 bytes from a serial device (a ScaleXtric C7042 powerbase). The bytes need to come in the right order, and the last one is a CRC.
Using this code in a BackgroundWorker, I get the bytes:
byte[] data = new byte[_APB.ReadBufferSize];
_APB.Read(data, 0, data.Length);
The problem is that I don't get the first bytes first. It is as if some of the bytes stay in the buffer, so the next time the DataReceived event fires I get the last x bytes from the previous message and only the first 15-x bytes of the new one. I write the bytes to a text box and they are all over the place, so some bytes are going missing somewhere.
I have tried to clear the buffer after each read, but no luck.
_APB = new SerialPort(comboBoxCommAPB.SelectedItem.ToString());
_APB.BaudRate = 19200;
_APB.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandlerDataFromAPB);
_APB.Open();
_APB.DiscardInBuffer();
I hope someone can help me here.
Use this method to read a fixed amount of bytes from the serial port; in your case, toRead = 15:
public byte[] ReadFromSerialPort(SerialPort serialPort, int toRead)
{
    byte[] buffer = new byte[toRead];
    int offset = 0;
    int read;
    while (toRead > 0 && (read = serialPort.Read(buffer, offset, toRead)) > 0)
    {
        offset += read;
        toRead -= read;
    }
    if (toRead > 0) throw new EndOfStreamException();
    return buffer;
}
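For the question above, a hedged usage sketch (reading one 15-byte frame from inside the DataReceived handler named in the question; CRC checking is left as a placeholder):

private void DataReceivedHandlerDataFromAPB(object sender, SerialDataReceivedEventArgs e)
{
    // Blocks until exactly 15 bytes have arrived; the last byte is the CRC.
    byte[] frame = ReadFromSerialPort(_APB, 15);

    // TODO: verify frame[14] (the CRC) before using the data.
}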
I'm trying to send an object (in my case an image) over a NetworkStream; however, I'm not getting the full image.
This is the client code:
private void Form1_Load(object sender, EventArgs e)
{
    TcpClient c = new TcpClient();
    c.Connect("10.0.0.4", 10);
    NetworkStream ns = c.GetStream();
    Bitmap f = GetDesktopImage();
    byte[] buffer = imageToByteArray(f);
    byte[] len = BitConverter.GetBytes(buffer.Length);
    MemoryStream stream = new MemoryStream();
    stream.Write(len, 0, len.Length);
    stream.Write(buffer, 0, buffer.Length);
    stream.Position = 0;
    stream.CopyTo(ns);
}
As you can see, I write the entire content to a regular MemoryStream first (because I don't want to use the NetworkStream twice); only then do I copy the MemoryStream content into the NetworkStream.
The server code:
private void Form1_Load(object sender, EventArgs e)
{
    TcpListener tl = new TcpListener(IPAddress.Any, 10);
    tl.Start();
    TcpClient c = tl.AcceptTcpClient();
    network = new NetworkStream(c.Client);
    byte[] buff = new byte[4];
    network.Read(buff, 0, 4);
    int len = BitConverter.ToInt32(buff, 0);
    buff = new byte[len];
    network.Read(buff, 0, buff.Length);
    pictureBox1.Image = byteArrayToImage(buff);
    Thread th = new Thread(method);
}
Now when I run both applications I get only the top part of the captured image. It's even more odd because writing both buffers directly to the NetworkStream works perfectly. For example:
ns.Write(len, 0, len.Length);
ns.Write(buffer, 0, buffer.Length);
This works fine and I get the full image on the other side, but I don't want to write to the stream twice (this is just an example; in my real project I would be doing it a lot, so I would like to reduce the network usage as much as possible and not trigger a send for every single piece of data).
Why isn't it working when simply using the CopyTo method?
I would appreciate any help!
Thanks
In your server code you are ignoring the result of NetworkStream.Read, which is the actual number of bytes read. This can be less than the number of bytes you requested. You need to keep calling Read until you have received all the bytes you need.
The difference you are seeing between one write and multiple writes has to do with buffering/timing in the network stack. I suspect you are just getting lucky receiving the image when doing a single write; this will not always be the case! You need to handle the result of Read as explained above.
Code Example:
void ReadBuffer(byte[] buffer, int offset, int count)
{
    int num;
    int num2 = 0;
    do
    {
        num2 += num = _stream.Read(buffer, offset + num2, count - num2);
    }
    while ((num > 0) && (num2 < count));
    if (num2 != count)
        throw new EndOfStreamException(
            String.Format("End of stream reached with {0} bytes left to read", count - num2));
}
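A hedged sketch of applying ReadBuffer to the server snippet above (this assumes _stream refers to the server's NetworkStream, i.e. the variable called network in the question, and reuses the question's helper names):

// Read the 4-byte length prefix completely, then the image body completely.
byte[] lenBuff = new byte[4];
ReadBuffer(lenBuff, 0, 4);
int len = BitConverter.ToInt32(lenBuff, 0);

byte[] imageBuff = new byte[len];
ReadBuffer(imageBuff, 0, len);
pictureBox1.Image = byteArrayToImage(imageBuff);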
I am trying to send some text over the network using sockets and memory streams. The full data length in my example is 20480 bytes; the buffer size is 8192.
When expecting the last 4096 bytes, the socket receives only 3088 bytes and the whole thread exits without throwing an exception, just before receiving the last chunk of data.
// Send
while (sentBytes < ms.Length)
{
    if (streamSize < Convert.ToInt64(buffer.Length))
    {
        ms.Read(buffer, 0, Convert.ToInt32(streamSize));
        count = socket.Send(buffer, 0, Convert.ToInt32(streamSize), SocketFlags.None);
        sentBytes += Convert.ToInt64(count);
        streamSize -= Convert.ToInt64(count);
    }
    else
    {
        ms.Read(buffer, 0, buffer.Length);
        count = socket.Send(buffer, 0, buffer.Length, SocketFlags.None);
        sentBytes += Convert.ToInt64(count);
        streamSize -= Convert.ToInt64(count);
    }
}
// Receive
while (readBytes < size)
{
    if (streamSize < Convert.ToInt64(buffer.Length))
    // exits after this, before receiving the last 1008 bytes
    {
        count = socket.Receive(buffer, 0, Convert.ToInt32(streamSize), SocketFlags.None);
        if (count > 0)
        {
            ms.Write(buffer, 0, count);
            readBytes += Convert.ToInt64(count);
            streamSize -= Convert.ToInt64(count);
        }
    }
    else
    {
        count = socket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
        if (count > 0)
        {
            ms.Write(buffer, 0, count);
            readBytes += Convert.ToInt64(count);
            streamSize -= Convert.ToInt64(count);
        }
    }
}
I use the exact same algorithm to send/receive larger files (over 1 GB) and the transfer works perfectly; no files are corrupted (I use file streams for that).
Interestingly, this code works in the debugger if I add a breakpoint on the sender side.
It also works with this modification:
if (streamSize < Convert.ToInt64(buffer.Length))
{
    if (count > 0)
    {
        ms.Write(buffer, 0, Convert.ToInt32(streamSize));
        readBytes += streamSize;
        streamSize -= streamSize;
    }
}
but this does no checking of how much data was actually received, and it also doesn't work for transferring files.
Could anybody point out what is going on here?
The thread is started like this:
public ClientConnection(Socket clientSocket, Server mainForm)
{
    this.clientSocket = clientSocket;
    clientThread = new Thread(ReceiveData);
    clientConnected = true;
    this.mainForm = mainForm;
    clientThread.Start(clientSocket);
}
Added from a comment by the OP:
// text is 10240 characters long
MemoryStream ms = new MemoryStream(UnicodeEncoding.Unicode.GetBytes(text));
// streamsize is 20480, which is sent prior to text in a header to the receiver
long streamSize = ms.Length;
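// Note: UnicodeEncoding is UTF-16, i.e. 2 bytes per character, so 10240 characters -> 20480 bytes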
Update:
I tested with more files, and now the file transfer fails as well. The problem is with the last 1008 bytes in all cases.
I found it. Where I expected to receive the header, I hadn't prepared the software to receive exactly header-sized data.
//byte[] buffer = new byte[1024];
byte[] buffer = new byte[16];
readBytes = socket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
This somehow caused a rogue 16 bytes of data written to the socket every time I was receiving the last chunk of the payload; the socket disconnected and the thread exited without throwing any exceptions whatsoever. I hope this answer will one day help someone else running into the same issue. All data transfers work properly now.
Please consider a simplified implementation that uses the NetworkStream class instead of the Socket class to perform the I/O operations; NetworkStream raises the level of abstraction slightly. An instance of the NetworkStream class can be created from an instance of the Socket class.
Sender
The implementation of the Sender is pretty straightforward using the Stream.CopyTo Method:
private static void CustomSend(Stream inputStream, Socket socket)
{
    using (var networkStream = new NetworkStream(socket))
    {
        inputStream.CopyTo(networkStream, BufferSize);
    }
}
Receiver
Let's introduce the following extension methods for the Stream class, which copy an exact number of bytes from one Stream instance to another using the specified buffer:
using System;
using System.IO;

public static class StreamExtensions
{
    public static bool TryCopyToExact(this Stream inputStream, Stream outputStream, byte[] buffer, int bytesToCopy)
    {
        if (inputStream == null)
        {
            throw new ArgumentNullException("inputStream");
        }
        if (outputStream == null)
        {
            throw new ArgumentNullException("outputStream");
        }
        if (buffer.Length <= 0)
        {
            throw new ArgumentException("Invalid buffer specified", "buffer");
        }
        if (bytesToCopy <= 0)
        {
            throw new ArgumentException("Bytes to copy must be positive", "bytesToCopy");
        }

        int bytesRead;
        while (bytesToCopy > 0 && (bytesRead = inputStream.Read(buffer, 0, Math.Min(buffer.Length, bytesToCopy))) > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
            bytesToCopy -= bytesRead;
        }
        return bytesToCopy == 0;
    }

    public static void CopyToExact(this Stream inputStream, Stream outputStream, byte[] buffer, int bytesToCopy)
    {
        if (!TryCopyToExact(inputStream, outputStream, buffer, bytesToCopy))
        {
            throw new IOException("Failed to copy the specified number of bytes");
        }
    }
}
So, the Receiver can be implemented as follows:
private static void CustomReceive(Socket socket)
{
    // It seems your receiver implementation "knows" the "size to receive".
    const int SizeToReceive = 20480;
    var buffer = new byte[BufferSize];
    var outputStream = new MemoryStream(new byte[SizeToReceive], true);
    using (var networkStream = new NetworkStream(socket))
    {
        networkStream.CopyToExact(outputStream, buffer, SizeToReceive);
    }
    // Use the outputStream instance...
}
Important note
Please do not forget to call the Dispose() method on the Socket instances (for both the Sender and the Receiver). Forgetting this call can be a root cause of such problems.
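A minimal sketch of one way to guarantee the call on the sender side (the endpoint and the stream contents are illustrative only):

// Wrapping both the input stream and the socket in using blocks guarantees
// Dispose() runs even if CustomSend throws.
using (var inputStream = new MemoryStream(new byte[20480]))
using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
{
    socket.Connect("serverIP.com", 23456);   // illustrative endpoint
    CustomSend(inputStream, socket);
}   // the socket connection is closed and released here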