I have an RFID card reader connected to my PC on a serial port. It uses RS485, so I need to switch between the send and receive states. The communication frames contain a header and a CRC (CRC16-CCITT, XModem variant).
After every write to the port I wait for the answer, then compute the CRC; if the check fails, I request the frame again. If everything is correct, I process it.
This works fine with the "simple" commands (request firmware version, enable/disable antenna, etc.).
With the important commands (logging into the reader's interface, configuring it, etc.) I see the following: occasionally the answer arrives correctly, with a delay of at most 5 seconds, but in most cases I get nothing in the buffer at all. I can wait for minutes and nothing arrives.
Conclusion: if I get an answer, it arrives within the first few seconds; if I don't, no amount of waiting will produce one.
My question is: could this be the hardware's fault, or am I missing something in my software?
Here is the send & receive part of my code:
int size;
bool msg_ok = false;
do
{
    int maxAttempts = 50;

    // Switch the RS485 transceiver to transmit, send, then switch back to receive.
    port.DtrEnable = true;
    port.RtsEnable = false;
    port.Write(fullMsg, 0, fullMsg.Length);
    port.DtrEnable = false;
    port.RtsEnable = true;

    // Poll the input buffer until something longer than a bare header shows up.
    do
    {
        Thread.Sleep(200);
        size = port.BytesToRead;
    } while (size <= 3 && maxAttempts-- > 0);

    if (size > 3)
    {
        answer = new byte[size];
        port.Read(answer, 0, size);

        int end = answer.Length - 1;   // Trim 0s after the end
        while (answer[end] == 0)
            --end;
        int start = 0;                 // Trim 0s before the header
        while (answer[start] == 0)
            ++start;
        trimmed = new byte[(end - start) + 1];
        Array.Copy(answer, start, trimmed, 0, (end - start) + 1);

        checkSum = crc.ComputeChecksumBytes(trimmed, trimmed.Length); // Calculate CRC
        if (checkSum[0] == trimmed[trimmed.Length - 1] && checkSum[1] == trimmed[trimmed.Length - 2])
        {
            msg_ok = true; // If it's still false at the end, restart this whole block and request again;
                           // if it's true, I can send the answer on for processing.
        }
    }
    else
    {
        Console.WriteLine("Timed out.");
    }
} while (!msg_ok);
When data is sent over a serial port, the operating system buffers the data as it arrives. If you query the data when only some of it has arrived, you will get a partial packet. You need to keep reading until you receive the full packet before you start trying to decode it. Otherwise your decode will fail on the first half of the packet, fail on the second half, and then sit waiting for another message that will never come.
The best approach when using a serial port is to subscribe to the DataReceived event, because then the port calls you if and when data arrives. This avoids sleeping to work around timing issues. You will still sometimes need to stitch several chunks of received data together to form a valid packet, however, so write your code to keep reading and appending into a receive buffer until it recognises a valid, complete packet.
You also shouldn't need to flip the handshaking bits unless the device on the other end of the serial line is very unusual: just send your data and wait for the reply. By changing the low-level states on the port manually you are likely to introduce transmission problems into the system.
Try starting with the example code on the DataReceived event page (above) and you should have more reliable results.
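For illustration, here is a minimal sketch of that pattern. The fixed FrameLength and the COM1/9600 settings are assumptions purely for demonstration; a real handler for the reader's protocol would hunt for the header and verify the CRC16 before accepting a frame:

using System;
using System.Collections.Generic;
using System.IO.Ports;

class Program
{
    const int FrameLength = 8; // hypothetical; substitute your header/length/CRC rules
    static readonly List<byte> rxBuffer = new List<byte>();

    static void Main()
    {
        using (var port = new SerialPort("COM1", 9600)) // assumed port settings
        {
            port.DataReceived += OnDataReceived;
            port.Open();
            Console.ReadLine(); // keep the process alive while events arrive
        }
    }

    static void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var port = (SerialPort)sender;

        // Drain whatever has arrived so far and append it to the receive buffer.
        var chunk = new byte[port.BytesToRead];
        int read = port.Read(chunk, 0, chunk.Length);

        lock (rxBuffer)
        {
            for (int i = 0; i < read; i++)
                rxBuffer.Add(chunk[i]);

            // Extract as many complete frames as the buffer now holds;
            // partial frames stay buffered until the next event.
            while (rxBuffer.Count >= FrameLength)
            {
                byte[] frame = rxBuffer.GetRange(0, FrameLength).ToArray();
                rxBuffer.RemoveRange(0, FrameLength);
                Console.WriteLine(BitConverter.ToString(frame)); // validate the CRC here
            }
        }
    }
}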
Please bear with me on this confusing question; it is as hard to describe as it has been involved and tiresome to debug. Read on and you'll see why.
I've been chasing this issue for over a month now without much progress. I'm using an STM32 (an STM32F103C8 mounted on a Blue Pill board) to communicate with a C# app through an FT232R serial-to-USB converter. The complete communication protocol is fairly complex, so what follows is a simplified version of the code that still reproduces my problem accurately.
The STM32 does the following.
In the initial setup, it:
Calls Serial.begin at 2000000 baud (yes, it's very high, but I've analyzed it with an oscilloscope and the signal is very healthy; the impedance matching and clock jitter are very accurate).
Waits for a command from the C# end to enter the loop.
In the loop, it does the following:
Transmits a byte buffer of length N on the serial port. The packet structure is: 0xAA, N bytes, 1 checksum byte.
Repeats the loop.
And on the C# side (pseudocode):
new Thread(() => { while (true) { IOTick(); Thread.Sleep(30); } }).Start();
IOTick() is defined as:
{
    while (SerialPortObject.BytesToRead > 1)
    {
        header = read();
        if (header != 0xAA) continue;
        byte[] buffer = new byte[N + 1];
        receivedBytes = readBytes(buffer, N + 1, Timeout = 500ms); // receivedBytes is never less than N + 1 for a timeout greater than ~120 ms
        // Use the N = 16 bytes; check the Nth byte against the checksum. Doesn't take much CPU time.
        // Raise a "packet received" software event.
    }
}
readBytes is defined as:
int readBytes(byte[] buffer, int count, int timeout)
{
    var st = DateTime.Now;
    for (int i = 0; i < count; i++)
    {
        // Give each byte only the time remaining from the overall budget.
        int remaining = timeout - (int)(DateTime.Now - st).TotalMilliseconds;
        var b_ = read(remaining);
        if (b_ == -1)
            return i;
        buffer[i] = (byte)b_;
    }
    return count;
}
int buffer2ReadIndex = 0;
byte[] buffer2 = new byte[0];

int read(int timeout)
{
    DateTime start = DateTime.Now;
    if (buffer2.Length == 0)
    {
        // Nothing cached: poll until the port reports data or the timeout expires.
        while (SerialPortObject.BytesToRead <= 0)
        {
            if ((DateTime.Now - start).TotalMilliseconds > timeout)
                return -1;
            System.Threading.Thread.Sleep(30);
        }
        // Drain everything currently available into the local cache.
        buffer2 = new byte[SerialPortObject.BytesToRead];
        SerialPortObject.Read(buffer2, 0, buffer2.Length);
    }
    if (buffer2.Length > 0)
    {
        // Hand out the cached bytes one at a time.
        var b = buffer2[buffer2ReadIndex];
        buffer2ReadIndex++;
        if (buffer2ReadIndex >= buffer2.Length)
        {
            buffer2ReadIndex = 0;
            buffer2 = new byte[0];
        }
        return b;
    }
    return -1;
}
Now, everything works as expected: the packet-received software event fires no later than every ~30 ms (the Windows tick time). The problem starts when I have to wait between packet transmissions on the STM32 side. First I suspected that the I2C work I do for some tasks between packet transmissions was causing a hardware or software conflict that corrupted the serial data. But then I noticed the same thing happens if I merely introduce a 1 millisecond delay, using the Arduino delay(), between packet transmissions. Almost 1K packets should now be received every second, yet roughly 1 packet in 10 that follows a successful header match is either not delivered completely or arrives with a corrupted checksum, causing the C# app to lose the packet header. Re-acquiring the header obviously requires flushing some bytes, losing some packets in the process. Even that wouldn't sound too bad for an app that can afford 5% packet loss; strangely, though, when this anomaly occurs, the packet-received software event stalls for more than 1 second after every couple of hundred consecutive events.
I'm completely blind here. I even tried a 115200 baud rate: it shows the same loss, with a slightly lower loss ratio. It should be noted that at 9600 baud the issue doesn't happen at all. That is the only hint I've got right now.
It looks like I've found an answer.
After digging deep into the SerialPort and SerialPort.BaseStream classes, and after some document reading and benchmarking, here is what I've observed:
SerialPort.BytesToRead updates are not uniform, and the DataReceived event seems to follow it. When bytes are coming in at ~200 kHz (baud = 2 Mbps), it is updated almost instantaneously (or within 30 ms, worst case). When they are coming in at ~20 kHz or slower (evenly spaced in time by a microcontroller), SerialPort.BytesToRead can take up to 400 ms to update. This happens only after a dozen or so 30 ms updates.
Observing this, I can say that SerialPort.BytesToRead is updated on one of two conditions: some amount of time has passed since the data arrived (and this time is not constrained to 30 ms), or the data is coming in too fast.
This is strange behavior. No data is lost while this anomaly is occurring. Not surprisingly, about 0.06% of bytes are lost when working at full bandwidth (200 kB/s at a baud rate of 2 Mbps).
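For anyone who wants to reproduce the measurement, a rough probe along these lines (the port name and baud rate are placeholders) timestamps every change of BytesToRead so the update intervals can be inspected:

using System;
using System.Diagnostics;
using System.IO.Ports;

class BytesToReadProbe
{
    static void Main()
    {
        using (var port = new SerialPort("COM3", 2000000)) // placeholder settings
        {
            port.Open();
            var sw = Stopwatch.StartNew();
            int last = 0;

            // Busy-poll BytesToRead and log the moment its value changes.
            while (true)
            {
                int now = port.BytesToRead;
                if (now != last)
                {
                    Console.WriteLine($"{sw.ElapsedMilliseconds} ms: BytesToRead = {now}");
                    last = now;
                }
            }
        }
    }
}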
I'm working on a client/server pair that is meant to push data back and forth for an indeterminate amount of time.
The problem is on the client side: I cannot find a way to detect a disconnect.
I've taken a couple of passes at other people's solutions, ranging from just catching IOExceptions to polling the socket with all three SelectModes. I've also tried combining a poll with a check of the socket's Available property.
// Something like this
Boolean IsConnected()
{
    try
    {
        bool part1 = this.Connection.Client.Poll(1000, SelectMode.SelectRead);
        bool part2 = (this.Connection.Client.Available == 0);
        if (part1 && part2)
        {
            // Never occurs
            // connection is closed
            return false;
        }
        return true;
    }
    catch (IOException e)
    {
        // Never occurs either
        return false;
    }
}
On the server side, an attempt to write an 'empty' character (\0) to the client forces an IOException, and the server can detect that the client has disconnected (pretty easy gig).
On the client side, the same operation yields no exception.
// Something like this
Boolean IsConnected()
{
    try
    {
        this.WriteHandle.WriteLine("\0");
        this.WriteHandle.Flush();
        return true;
    }
    catch (IOException e)
    {
        // Never occurs
        this.OnClosed("Yo socket sux");
        return false;
    }
}
A problem I believe I'm having with detecting a disconnect via a poll is that I can fairly easily get a false result on SelectRead if my server hasn't written anything back to the client since the last check. I'm not sure what to do here; I've chased down every option for making this detection that I can find, and nothing has been 100% reliable for me. Ultimately my goal is to detect a server (or connection) failure, inform the client, wait to reconnect, and so on, so I'm sure you can imagine that this is an integral piece.
Appreciate anyone's suggestions.
Thanks ahead of time.
EDIT: Anyone viewing this question should note the answer below, and my FINAL Comments on it. I've elaborated on how I overcame this problem, but have yet to make a 'Q&A' style post.
One option is to use TCP keep-alive packets. You turn them on with a call to Socket.IOControl(). The only annoying bit is that it takes a byte array as input, so you have to convert your values to an array of bytes to pass in. Here's an example using a 10000 ms keep-alive with a 1000 ms retry:
Socket socket; //Make a good socket before calling the rest of the code.
int size = sizeof(UInt32);
UInt32 on = 1;
UInt32 keepAliveInterval = 10000; //Send a packet once every 10 seconds.
UInt32 retryInterval = 1000; //If no response, resend every second.
byte[] inArray = new byte[size * 3];
Array.Copy(BitConverter.GetBytes(on), 0, inArray, 0, size);
Array.Copy(BitConverter.GetBytes(keepAliveInterval), 0, inArray, size, size);
Array.Copy(BitConverter.GetBytes(retryInterval), 0, inArray, size * 2, size);
socket.IOControl(IOControlCode.KeepAliveValues, inArray, null);
Keep-alive packets are sent only when you aren't sending other data, so every time you send data the 10000 ms timer is reset.
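Once keep-alives are enabled, a dead connection surfaces through the normal socket calls rather than through a separate notification. As a sketch, continuing with the socket from above: a blocking Receive will throw a SocketException once the keep-alive retries are exhausted and the connection is reset:

byte[] buffer = new byte[4096];
try
{
    // Blocks until data arrives; throws once keep-alive probes go unanswered.
    int n = socket.Receive(buffer);
    if (n == 0)
    {
        // The remote end closed the connection gracefully.
    }
}
catch (SocketException)
{
    // Keep-alive (or send/receive) failure: treat the connection as dead,
    // inform the user, and schedule a reconnect.
}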
I have written some code to fetch a webpage through a proxy using sockets. In essence it works, but reading the response has some strange behavior that is really tripping me up.
When I go to read the response after sending the GET command, it is 0 bytes. It takes a few ticks before there is data to read. I don't want to hard-code a delay here, as I'm trying to write performant, reliable code, so I've coded a while loop that keeps reading until the response is more than 0 bytes.
This works for the first chunk, but reading subsequent chunks is a problem. If I immediately try to read the response it will be 0 bytes, so I need to apply the same greater-than-0 check to the subsequent reads too.
So, to read the whole response, I tried checking whether the number of bytes read equals the size of the buffer; if it does, I carry on and try to read another chunk. This has a few issues as well. Sometimes a read returns less than the size of the buffer even though there is still more to come; I guess I'm reading faster than they're sending, because if I add a Thread.Sleep() the buffer is always full. But I don't think it's good practice to hard-code that, since I don't know how fast they'll be sending. This code will be used for multiple things and will be running on hundreds of threads, so performance is everything.
Also, if the last chunk happens to be exactly the size of my buffer, I think the loop will lock up. This whole approach is horrible, but I can't see how I should be reading the response instead. I've seen the asynchronous examples, but I think they would add to the overall complexity of my code, as I just have one fixed process that I'll run on many threads.
How do I efficiently read the response when I can't guarantee the next chunk will have data, or be full, even if there is more data to come?
Sorry for the long text, but I wanted to explain my thinking. Here is my code:
// Data buffer for incoming data.
byte[] bytes = new byte[1024];
// Connect to a remote device.
try
{
var proxyIpAddress = IPAddress.Parse("123.123.123.123"); //omitted
IPEndPoint remoteEP = new IPEndPoint(proxyIpAddress, 60099);
// Create a TCP/IP socket.
Socket sender = new Socket(proxyIpAddress.AddressFamily,
SocketType.Stream, ProtocolType.Tcp);
// Connect the socket to the remote endpoint. Catch any errors.
try
{
sender.Connect(remoteEP);
Console.WriteLine("Socket connected to {0}",
sender.RemoteEndPoint.ToString());
sender.Send(Encoding.ASCII.GetBytes($"CONNECT google.com:80 HTTP/1.0\r\n\r\n"));
int bytesRec = 0;
while (bytesRec == 0)
{
// Receive the response from the remote device.
bytesRec = sender.Receive(bytes);
Console.WriteLine("{0}",
Encoding.ASCII.GetString(bytes, 0, bytesRec));
}
//clear buffer
bytes = new byte[1024];
bytesRec = 0;
sender.Send(Encoding.ASCII.GetBytes("GET / HTTP/1.0\r\n\r\n"));
//wait for response
while (bytesRec == 0) // if I don't add this it returns before it actually gets data
{
// Receive the response from the remote device.
bytesRec = sender.Receive(bytes);
Console.WriteLine("{0}",
Encoding.ASCII.GetString(bytes, 0, bytesRec));
}
if (bytes.Length == bytesRec) // full buffer, so likely more, but maybe not if the final packet is exactly 1024?
{
while (bytes.Length == bytesRec) // again, if I omit this it returns too early
{
int subsequentBytes = 0;
while (subsequentBytes == 0) // this can get stuck if the last packet is exactly the size of the buffer, I think
{
subsequentBytes = sender.Receive(bytes);
Console.WriteLine("{0}",
Encoding.ASCII.GetString(bytes, 0, subsequentBytes));
// this doesn't work: even when there are subsequent bytes, sometimes it reads less
// than the size of the buffer, so it exits prematurely. If I add a Thread.Sleep() here
// then it works, but I don't want to hardcode the delay. How do I read this buffer properly?
Thread.Sleep(1000);
if (subsequentBytes > 0) bytesRec = subsequentBytes;
}
}
}
// Release the socket.
sender.Shutdown(SocketShutdown.Both);
sender.Close();
}
catch (Exception e)
{
Console.WriteLine("Unexpected exception : {0}", e.ToString());
}
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
I understand this is difficult to follow and a lot of writing, so anyone who perseveres with it has my gratitude. The only option I can see is hard-coded pauses, which will hurt performance and may still have issues.
EDIT
I have done some experimenting with different servers. If I ping the server and then set a Thread.Sleep(pingValue), it works fine, but if I set the sleep lower than the ping I get the same issue.
Is there a good way, with the .NET libraries, to account for this latency so I am not under- or overestimating it?
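For what it's worth, one detail changes the picture here: a blocking Socket.Receive never returns 0 just because data hasn't arrived yet; it blocks until at least one byte is available and returns 0 only when the peer has closed the connection. Since the request above is HTTP/1.0 (the server closes the connection after the response), a read loop along these lines avoids guessing at delays entirely. A sketch, reusing the sender socket and bytes buffer from the code above:

// Read until the server closes the connection (HTTP/1.0 semantics).
var response = new StringBuilder();
int received;
while ((received = sender.Receive(bytes)) > 0)
{
    // Receive blocked until at least one byte arrived; 0 means the
    // remote end has finished sending and closed its side.
    response.Append(Encoding.ASCII.GetString(bytes, 0, received));
}
Console.WriteLine(response.ToString());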
I've been working with Windows app store programming in C# recently, and I've come across a problem with sockets.
I need to be able to read data of unknown length from a DataReader.
It sounds simple enough, but I've not been able to find a solution after a few days of searching.
Here's my current receiving code (a little sloppy; I need to clean it up after I solve this problem, and yes, a bit of it is from the Microsoft example):
DataReader reader = new DataReader(args.Socket.InputStream);
try
{
while (true)
{
// Read first 4 bytes (length of the subsequent string).
uint sizeFieldCount = await reader.LoadAsync(sizeof(uint));
if (sizeFieldCount != sizeof(uint))
{
// The underlying socket was closed before we were able to read the whole data.
return;
}
// Read the string.
uint stringLength = reader.ReadUInt32();
uint actualStringLength = await reader.LoadAsync(stringLength);
if (stringLength != actualStringLength)
{
// The underlying socket was closed before we were able to read the whole data.
return;
}
// Display the string on the screen. The event is invoked on a non-UI thread, so we need to marshal
// the text back to the UI thread.
//MessageBox.Show("Received data: " + reader.ReadString(actualStringLength));
MessageBox.updateList(reader.ReadString(actualStringLength));
}
}
catch (Exception exception)
{
// If this is an unknown status it means that the error is fatal and retry will likely fail.
if (SocketError.GetStatus(exception.HResult) == SocketErrorStatus.Unknown)
{
throw;
}
MessageBox.Show("Read stream failed with error: " + exception.Message);
}
You are going along the right lines: read the first int to find out how many bytes are to be sent.
Franky Boyle is correct: without a signalling mechanism it is impossible to ever know the length of a stream. That's why it is called a stream!
No socket implementation (including WinSock) will ever be clever enough to know when a client has finished sending data. The client could be having a cup of tea halfway through sending it!
Your server and its sockets will never know! What are they going to do? Wait forever? I suppose they could wait until the client has 'closed' the connection, but your client could have blue-screened, and the server will never get that TCP close packet; it will just sit there thinking more data is coming one day.
I have never used a DataReader; I had never even heard of that class! Use NetworkStream instead.
From memory, I have written code like this in the past. I am just typing it out; no checking of syntax.
using (MemoryStream receivedData = new MemoryStream())
using (NetworkStream networkStream = new NetworkStream(connectedSocket))
{
    // This is your mechanism to find out how many bytes
    // the client wants to send: a single length-prefix byte.
    int totalBytesToRead = networkStream.ReadByte();

    byte[] readBuffer = new byte[1024]; // Up to you the length!
    int totalBytesRead = 0;
    int bytesReadInThisTcpWindow = 0;

    // One Read call usually returns at most one TCP window's worth of data.
    // For example, if the client's TCP window were 777 bytes, a single
    //   int bytesRead = networkStream.Read(readBuffer, 0, readBuffer.Length);
    // could return 777 bytes, and a large file would have to be stitched
    // together from many such 777-byte reads. A small file under 777 bytes
    // would arrive in one read of, say, 500 bytes.
    while (totalBytesRead < totalBytesToRead &&
           (bytesReadInThisTcpWindow = networkStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
    {
        // If Read returns 0 the client has disconnected or failed to send
        // the promised number of bytes within the OS-dictated timeout
        // (important to bail out here to stop lots of waiting threads
        // killing your server).
        receivedData.Write(readBuffer, 0, bytesReadInThisTcpWindow);
        totalBytesRead = totalBytesRead + bytesReadInThisTcpWindow;
    }

    if (totalBytesRead == totalBytesToRead)
    {
        // We have our data!
    }
}
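For completeness, the sending side of this one-byte length prefix might look like the following sketch (connectedSocket is assumed to be an already-connected Socket, System.Text is assumed to be in scope, and the payload must fit in 255 bytes because the prefix is a single byte):

using (NetworkStream networkStream = new NetworkStream(connectedSocket))
{
    byte[] payload = Encoding.ASCII.GetBytes("hello server");
    if (payload.Length > byte.MaxValue)
        throw new ArgumentException("A one-byte prefix limits messages to 255 bytes.");

    networkStream.WriteByte((byte)payload.Length); // the length prefix
    networkStream.Write(payload, 0, payload.Length);
}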
I'm working on a messenger program in C#, and I have some issues.
The server and client have three connections (one each for chatting, file transfer, and card games).
The first and second connections work just fine.
But a problem occurs on the third one, which handles fewer packet types than the first two sockets.
It's not about failing to receive a packet or failing to get a connection; it's about getting (or sending) more than the one packet I intend to send at a time. The server log keeps saying that on one click, the server receives about 3~20 identical packets and sends them all to the targeted client.
Before my partial code for the third connection, I'll explain how this is supposed to work.
The only difference between connections 1 and 2 and connection 3 (the one causing this issue) is when I make the connection. Connections 1 and 2 are made in the main form's form_load function and work fine. Connection 3 is made when I load the gaming form (not the main form). Also, the first two sockets' listening threads are on the main form, while the third has its listening thread on its own form. That's the only difference I can find; the connections and listening threads are otherwise the very same. Here is my code for the gaming form.
public void GPACKET() //A Thread function for receiving packets from the server
{
int read = 0;
while (isGameTcpClientOnline)
{
try
{
read = 0;
read = gameNetStream.Read(greceiveBuffer, 0, 1024 * 4);
if (read == 0)
{
isGameTcpClientOnline = false;
break;
}
}
catch
{
isGameTcpClientOnline = false;
gameNetStream = null;
}
Packet.Packet packet = (Packet.Packet)Packet.Packet.Desirialize(greceiveBuffer);
switch ((int)packet.Type)
{
case (int)PacketType.gameInit:
{
gameinit = (GameInit)Packet.Packet.Desirialize(greceiveBuffer);
//codes for handling the datas from the packet...
break;
}
case (int)PacketType.gamepacket:
{
gp = (GamePacket)Packet.Packet.Desirialize(greceiveBuffer);
//codes for handling the datas from the packet...
break;
}
}
}
}
public void setPacket(bool turn) // makes the packet and sends it to the server
{
if (turn)
turnSetting(false);
else
turnSetting(true);
gps = new GamePacket();
gps.Type = (int)PacketType.gamepacket;
gps.isFirstPacket = false;
gps.sym = symbol;
gps.ccapacity = cardCapacity;
gps.currentList = current_list[0].Tag.ToString();
gps.isturn = turn;
gps.opname = opid;
List<string> tempList = new List<string>();
foreach (PictureBox pb in my_list)
{
tempList.Add(pb.Image.Tag.ToString());
}
gps.img_list = tempList;
Packet.Packet.Serialize(gps).CopyTo(this.gsendBuffer, 0);
this.Send();
label5.Text = symbol + ", " + current_list[0].Tag.ToString();
}
public void Send() //actually this part sends the Packet through the netstream.
{
gameNetStream.Write(this.gsendBuffer, 0, this.gsendBuffer.Length);
gameNetStream.Flush();
for (int j = 0; j < 1024 * 4; j++)
{
this.gsendBuffer[j] = 0;
}
}
I really don't know why I'm having this problem.
Is it about the connection point, the receiving point, or the sending point? Should I establish this connection in the same place as connections 1 and 2 (on the main form; if I do that, I'd have to run the GPACKET function on the main form as well)?
This looks like a classic "assume we read an entire packet" mistake, where by "packet" here I mean your logical message, not the underlying transport packet. For example:
read = gameNetStream.Read(greceiveBuffer, 0, 1024 * 4);
...
Packet.Packet packet = (Packet.Packet)Packet.Packet.Desirialize(greceiveBuffer);
Firstly, it strikes me as very odd that read isn't needed by Desirialize, but: what makes you think we read an entire packet? We could have read:
one entire packet (only)
half of one packet
one byte
three packets
the last 2 bytes of one packet, 1 entire packet, and the first 5 bytes of a third packet
TCP is just a stream; all that Read is guaranteed to give you is "at least 1 byte and at most {count} bytes, or an EOF". It is very unusual for calls to Write to map one-to-one onto calls to Read. It is your job to understand the protocol and decide how much data to buffer, and then how much of that buffer to treat as one packet versus holding back for the next packet(s).
See also: How many ways can you mess up IO?, in particular "Network packets: what you send is not (usually) what you get".
To fill a 4096-byte buffer exactly:
int expected = 4096, offset = 0, read;
while (expected != 0 &&
       (read = gameNetStream.Read(greceiveBuffer, offset, expected)) > 0)
{
    // Each Read may return fewer bytes than requested; keep asking
    // for the remainder until the buffer is full.
    offset += read;
    expected -= read;
}
if (expected != 0) throw new EndOfStreamException();
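Wrapped up as a helper, the same loop makes the framing explicit. A sketch (the 4-byte little-endian length prefix is an assumption for illustration; your packets need whatever framing Packet.Packet actually uses, and System.IO is assumed to be in scope):

// Reads exactly count bytes from the stream, or throws if it ends early.
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read <= 0) throw new EndOfStreamException();
        offset += read;
    }
    return buffer;
}

// Usage: read a fixed-size header, then exactly the body length it announces.
byte[] header = ReadExactly(gameNetStream, 4);
int bodyLength = BitConverter.ToInt32(header, 0);
byte[] body = ReadExactly(gameNetStream, bodyLength);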