xamarin bluetooth data receive delay - c#

I have my Xamarin app running on Android. It connects to a custom device over Bluetooth using SPP. The app issues commands and the device responds with about 260 bytes.
My problem is that there appears to be a large delay between data being sent by the device and that data being available to my app through the socket. This results in the throughput of the connection being very low.
Scope image here: https://imgur.com/a/gBPaWHJ
In the image, the yellow trace is the data being sent to the device and the blue trace is the response. As you can see, the device responds immediately after the command is sent. I have measured the period from the start of a command to the end of the response to be 12ms.
In the code, I measured the time from the app receiving the last byte of a response to the sending of the next command. The time was always 0 or 1ms. This is not what the scope is telling me; there is a clear 92ms gap between the end of a response and the sending of the next command.
I also measured the time between the line of code that sends data, and the first byte of the response being received, it always takes 50 to 80ms. This here is the problem.
I have been through my code and there are no delays or timers that prevent a command being sent. Once it has received a full response, it sends the next request for data straight away.
I have a System.Threading.Thread which loops around handling the sending and receiving of data. I have timed this loop and it always takes less than 3ms to complete (mostly it is 0ms), which shows there is no delay in my loop causing this. I wouldn't expect any delay as we're only talking about 260 bytes of data to read and process.
Is there something in Xamarin Android that might cause a delay between data arriving at the tablet over Bluetooth and that data being available to my app? Perhaps something is only updating the BluetoothSocket every 100ms? I want those empty gaps on my scope to be gone.

In general, the factors that affect Bluetooth (BLE) transmission throughput are: the Connection Interval, the number of frames sent per Connection Event, the length of each frame of data, and the operation type (not considered here).
Using the best values the Android stack supports, the Connection Interval can be set to 7.5ms and each frame carries 20 bytes of payload.
If you need to send 260 bytes of data, that works out to 260 / 20 = 13 connection intervals, i.e. 13 × 7.5ms = 97.5ms. Add some fluctuation in the stability of the Bluetooth connection and you land at roughly the 100ms you are seeing.
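To make that arithmetic explicit, here is a back-of-the-envelope estimate in C# (a sketch only; real timing also depends on how many frames the stack actually packs into a single connection event):

// Rough estimate, assuming the defaults described above:
// 7.5ms connection interval, one 20-byte frame per connection event.
const double connectionIntervalMs = 7.5;
const int bytesPerEvent = 20;
int payloadBytes = 260;

int events = (payloadBytes + bytesPerEvent - 1) / bytesPerEvent; // 13 events
double estimatedMs = events * connectionIntervalMs;              // 97.5ms
Console.WriteLine($"~{estimatedMs}ms to move {payloadBytes} bytes");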
why is it limited to 20 bytes?
The core spec defines the default ATT MTU to be 23 bytes. After removing the 1-byte ATT opcode and the 2-byte ATT handle, the remaining 20 bytes are left for the GATT payload.
Because some Bluetooth Smart devices have little memory and processing power to spare, the core spec only requires each device to support an MTU of 23.
At the start of a connection the two devices know nothing about each other's capabilities, so they strictly follow this lowest common denominator and send at most 20 bytes at a time, which is the safest option.
How to break through 20?
Since the maximum ATT payload length is 512 bytes, it is enough to negotiate a larger ATT MTU. On Android (API 21+), the interface for changing the ATT MTU is:
public boolean requestMtu (int mtu)
Added in API level 21.
Request an MTU size used for a given connection.
When performing a write request operation (write without response), the data sent is truncated to the MTU size. This function may be used to request a larger MTU size to be able to send more data at once.
An onMtuChanged(BluetoothGatt, int, int) callback will indicate whether this operation was successful.
Requires BLUETOOTH permission.
Returns true if the new MTU value has been requested successfully.
If your peripheral application changes the MTU and it succeeds, this callback will also be called:
@Override
public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
    super.onMtuChanged(gatt, mtu, status);
    if (status == BluetoothGatt.GATT_SUCCESS) {
        this.supportedMTU = mtu; // local field recording the negotiated MTU size
    }
}
After that, you can happily send up to supportedMTU bytes of data at a time.
So this is not actually related to Xamarin; it is just a limitation imposed by the Android Bluetooth stack.
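If the device does end up being driven over BLE/GATT from Xamarin, the same calls are exposed through the C# bindings. A minimal sketch (the callback class and property names here are illustrative, not from the question):

using Android.Bluetooth;
using Android.Runtime;

// Hypothetical callback class; only the MTU-related override is shown.
class MyGattCallback : BluetoothGattCallback
{
    public int SupportedMtu { get; private set; } = 23; // spec default

    public override void OnMtuChanged(BluetoothGatt gatt, int mtu, [GeneratedEnum] GattStatus status)
    {
        base.OnMtuChanged(gatt, mtu, status);
        if (status == GattStatus.Success)
            SupportedMtu = mtu; // record the negotiated MTU
    }
}

// Somewhere after connecting:
// bool requested = gatt.RequestMtu(512); // or whatever your peripheral supports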

Related

How to reduce delays caused by a Server TCP Spurious retransmission and subsequent Client TCP retransmission?

I have a .NET application (running on a Windows PC) which communicates with a Linux box via OPC UA. The use case here is to make ~40 read requests to the server serially. Once these 40 read calls are complete, the next cycle of 40 read calls begins. Each read call returns a response from the server carrying a payload of ~16KB, which is fragmented and delivered to the client. For most requests the server finishes delivering the complete response within 5ms; however, for some requests it takes ~300ms to complete.
In scenarios where this delay exists, I can see the following pattern of retransmissions:
[71612] A new Read request is sent to the server.
[71613-71630] The response is delivered to the client.
[71631] A new Read request is sent to the server.
[71632] A TCP Spurious Retransmission occurs from the server for packet [71844] with Seq No. 61624844
[71633] Client sends a DUP ACK for the packet.
[71634] Client does a TCP Retransmission for the read request in [71846] after 288ms
This delay adds up and causes some 5-6 seconds of delay for a complete cycle of 40 requests. I want to figure out what is causing these retransmissions (and hence the delays) and what can possibly be done to:
Reduce the frequency of retransmissions.
Reduce the 300ms delay from the client side to quickly retransmit the obstructed read request.
I have tried disabling the Nagle algorithm on the server to possibly improve performance, but it did not have any effect. Also, when reducing the response size by half (8KB), the retransmissions are rare and hence the delay is minimal as well. But reducing the response size is not a valid solution in our use case.
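(For reference, "disabling the Nagle algorithm" from the .NET side is normally just setting NoDelay; a minimal sketch assuming a TcpClient, while on the Linux server the equivalent is the TCP_NODELAY socket option:)

using System.Net.Sockets;

var client = new TcpClient();
client.NoDelay = true; // disable Nagle's algorithm on this connection
// equivalently, on a raw Socket:
// socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);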
The connection to the Linux box is through a switch; however, when connecting to it directly point-to-point, there is only a marginal reduction in the delay.
I can share relevant code, but I think this issue lies with the TCP stack (or at least some configuration that should be enabled?), hence it would make little difference.

C# Receiving Packet in System.Sockets

In my client/server application I was wondering how to build a packet and send it to the server from the client.
Then, on the server, I recognize which packet it is and send the proper reply.
But I came across this topic and it made me worry that I might run into this problem.
The Problem
One of the most common beginner mistakes for people designing
protocols for TCP/IP is that they assume that message boundaries are
preserved. For example, they assume a single "Send" will result in a
single "Receive".
Some TCP/IP documentation is partially to blame. Many people read
about how TCP/IP preserves packets - splitting them up when necessary
and re-ordering and re-assembling them on the receiving side. This is
perfectly true; however, a single "Send" does not send a single
packet.
Local machine (loopback) testing confirms this misunderstanding,
because usually when client and server are on the same machine they
communicate quickly enough that single "sends" do in fact correspond
to single "receives". Unfortunately, this is only a coincidence.
This problem usually manifests itself when attempting to deploy a
solution to the Internet (increasing latency between client and
server) or when trying to send larger amounts of data (requiring
fragmentation). Unfortunately, at this point, the project is usually
in its final stages, and sometimes the application protocol has even
been published!
True story: I once worked for a company that developed custom client/server software.
The original communications code had made this
common mistake. However, they were all on dedicated networks with
high-end hardware, so the underlying problem only happened very
rarely. When it did, the operators would just chalk it up to "that
buggy Windows OS" or "another network glitch" and reboot. One of my
tasks at this company was to change the communication to include a lot
more information; of course, this caused the problem to manifest
regularly, and the entire application protocol had to be changed to
fix it. The truly amazing thing is that this software had been used in
countless 24x7 automation systems for 20 years; it was fundamentally
broken and no one noticed.
So how could I send something like an AUTH_CALC,VALUE1=10,VALUE2=12 packet and receive it on the server in a safe way?
And if you want an example of what I am doing, here it is below:
[CLIENT]
Send(Encoding.ASCII.GetBytes("1001:UN=user123&PW=123456")) //1001 is the ID
[SERVER]
private void OnReceivePacket(byte[] arg1, Wrapper Client)
{
    try
    {
        int ID;
        string V = Encoding.ASCII.GetString(arg1).Split(':')[0];
        int.TryParse(V, out ID);
        switch (ID)
        {
            case 1001: // Login packet
                AppendToRichEditControl("LOGIN PACKET RECEIVED");
                break;
            case 1002:
                // other IDs
                break;
            default:
                break;
        }
    }
    catch { }
}
So is this a good way to structure a message and handle it on the server?
Also, which encoding is better to use, ASCII or UTF8?
The best approach is to use a length indicator. Suppose you are sending a file of 10,000 bytes: first send the length of the file and wait for the acknowledgement (e.g. an "OK" string) from the other side, then keep sending the 10,000 bytes chunk by chunk (you could use 4096-byte chunks, say). Send 4096 bytes each time for two chunks and the remaining ~2,000 bytes in the last chunk. On the receiver side there is no guarantee that one receive will give you the whole 4096 bytes, so you need to keep reading until you have the full 4096 bytes and only then proceed to the next chunk.
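A minimal sketch of length-prefixed framing over a TCP NetworkStream (assuming a 4-byte length header; the class and method names are illustrative):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class Framing
{
    // Write a 4-byte length prefix followed by the payload.
    public static void SendMessage(NetworkStream stream, string text)
    {
        byte[] payload = Encoding.UTF8.GetBytes(text);
        byte[] header = BitConverter.GetBytes(payload.Length); // little-endian on common platforms
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read exactly one framed message, looping until all bytes have arrived.
    public static string ReceiveMessage(NetworkStream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload);
    }

    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new IOException("Connection closed mid-message");
            offset += read;
        }
        return buffer;
    }
}

As for the encoding question: UTF-8 is the safer default, since it encodes plain ASCII identically and also survives any non-ASCII values you add later.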

Occasional WCF 5 seconds response spikes

We are developing a client-server system where the client connects to a service and fetches an image from a buffer. The request runs at 25 hertz (25 requests per second) over a NetTcpBinding. The image data which is sent contains the image buffer (byte[]) and some metadata about the image.
What we are experiencing is that occasionally, the server does not respond for 5 seconds (5020 to 5050 ms), and we can't figure out why.
Running svc logging on the client we see the following
Activity Boundary Suspend 10:00:00:000
Activity Boundary Resume 10:00:00:003
Received a message over a channel Information 10:00:05:017
This occurs both when running the server as a managed WCF service and as an unmanaged WWS service.
It can happen once every 100,000 requests, once per night, or several times per minute, at seemingly random intervals.
Does anyone know what might cause this issue?
We found the solution buried in the Microsoft customer support database.
The 5 second delay is due to the firing of the SWS (Silly Window
Syndrome) avoidance timer. The SWS timer is scheduled to send the
remaining data which is less than 1 MSS (Maximum Segment Size, 1460
bytes) and the receiver is supposed to send an ACK advertising the
increased receive window and indicating that the remaining data bytes
can be sent. However, if the receiver sends an ACK when it can be
ready for sufficient buffer within 5 seconds, the SWS timer cannot
recover the 5 seconds delay status due to a race condition.
http://support.microsoft.com/kb/2020447
This issue only occurs when using localhost or 127.0.0.1. The delays do not occur when running the service and client on different machines.

UDP Client receives only 1 message

I have a server/client application that I am currently working on. The server is receiving data fine over a WAN, and the client seems to receive data, but it only ever receives one communication. Is there anything about a WAN that would make a client always receive only the first return UDP communication and none of the subsequent ones? Thanks for the help.
Client UDP Listening code
private void receiveUDP()
{
    System.Net.IPEndPoint test = new System.Net.IPEndPoint(System.Net.IPAddress.Any, UDP_PORT_NUMBER);
    System.Net.EndPoint serverIP = (System.Net.EndPoint)test;
    server.Bind(serverIP);
    //server.Ttl = 50;
    EndPoint RemoteServ = (EndPoint)listenUDP;
    do
    {
        byte[] content = new byte[1024];
        int data = server.ReceiveFrom(content, ref RemoteServ);
        // Decode only the bytes actually received, not the whole 1024-byte buffer.
        string message = Encoding.ASCII.GetString(content, 0, data);
        ProcessCommands(message);
    } while (true);
}
This is a bit of a stab in the dark (since you don't provide enough code to really say what's going on definitively), but there's one major reason why you might consistently see some UDP datagrams fail to be delivered over a WAN while others always arrive successfully. That reason is MTU: the Maximum Transmission Unit which can be sent in a single UDP datagram. This can easily produce behaviour such as what you're seeing if, for example, your first datagram is a short "I accept your connection" message and you then follow it with datagrams containing large files; the first (small) datagram is smaller than the MTU and is delivered, while the following (large) datagrams are larger than the MTU and are discarded en route.
For UDP over a WAN, the MTU will not be higher than about 1500 bytes, and in many situations may be as low as 1200 bytes. Any packets larger than that will be silently dropped somewhere between endpoints. To send large blocks of data via UDP, you need to chop them up into pieces smaller than the MTU for the network segment across which you're transmitting them.
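For illustration, a minimal sketch of chopping a buffer into sub-MTU datagrams with UdpClient (the 1200-byte chunk size and the lack of any sequencing or acknowledgement are assumptions; a real protocol needs both):

using System;
using System.Net.Sockets;

// Send 'data' in chunks small enough to fit under a conservative MTU.
// Sketch only: no sequence numbers, no retransmission, no reassembly.
static void SendChunked(UdpClient client, byte[] data, int chunkSize = 1200)
{
    for (int offset = 0; offset < data.Length; offset += chunkSize)
    {
        int count = Math.Min(chunkSize, data.Length - offset);
        byte[] chunk = new byte[count];
        Buffer.BlockCopy(data, offset, chunk, 0, count);
        client.Send(chunk, chunk.Length); // assumes the client is already Connect()ed
    }
}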
On a LAN, you can usually get away with sending datagrams of any size. But as soon as they're being sent over the Internet or otherwise through heterogenous networks, they're likely to be silently discarded.
If you do need to send large files, you might choose to transmit them via TCP instead; TCP automatically manages chopping data up to fit within the MTU, and ensures that its packets are all received and received in order; guarantees that you do not get with datagrams sent via UDP.
As I mentioned above, this is a complete stab in the dark and may not actually be related to your actual troubles. But it's the elephant in the room, when all we have to go on is that the first packet always arrives successfully, and later packets never do.

.Net SendAsync always sends all data?

Will Socket.SendAsync always send all data in the byte[] buffer that the SocketAsyncEventArgs has been assigned? I've tested some code, but only on a local network, and there it seems to behave that way.
Edit:
OK, but does it always send all data before raising the Completed event?
Socket.BeginSend did not, if I remember right.
It will attempt to send all data; however, from the docs on MSDN:
"For message-oriented sockets, do not exceed the maximum message size of the underlying Windows sockets service provider. If the data is too long to pass atomically through the underlying service provider, no data is transmitted and the SendAsync method throws a SocketException with the SocketAsyncEventArgs.SocketError set to the native Winsock WSAEMSGSIZE error code (10040)."
There are times when a buffer that is too large should be split up. It depends on the underlying socket implementation.
No, it will not. There are a lot of factors to consider here, including buffering, timeouts, etc.
The simplest to consider, though, is the limit on packets at the IPv4 level. IPv4 packets have a strict size limit that cannot be exceeded (65,535 bytes). It's therefore not possible for SendAsync to push data larger than an IPv4 packet into a single packet.
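If you want to be defensive about partial sends regardless, a common pattern is to check BytesTransferred in the Completed callback and re-issue SendAsync for whatever remains (a minimal sketch; the buffer bookkeeping here is illustrative):

using System.Net.Sockets;

// Called from the SocketAsyncEventArgs.Completed handler (and again after a
// synchronous completion, when SendAsync returns false).
static void OnSendCompleted(Socket socket, SocketAsyncEventArgs e)
{
    if (e.SocketError != SocketError.Success)
        return; // handle/log the error in real code

    int sent = e.BytesTransferred;
    if (sent < e.Count)
    {
        // Not everything went out: point the args at the remaining bytes and send again.
        e.SetBuffer(e.Offset + sent, e.Count - sent);
        if (!socket.SendAsync(e))
            OnSendCompleted(socket, e); // completed synchronously
    }
}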
