I am working on a WebRTC implementation for UWP using the following SDK: https://github.com/webrtc-uwp/webrtc-uwp-sdk/tree/releases/m71
The implementation generally works fine, though I am having some call quality issues when packet loss occurs. When a packet or a few packets are lost, the framerate drops dramatically: think from 30 FPS down to less than 1 FPS. The audio is fine, however. This would not be a problem if my client recovered once the connection improved again. However, at the moment, once the problem occurs the video stays bad for the rest of the call.
I do not know where the issue might lie. As I understand it, WebRTC is supposed to compensate for packet loss. I was thinking the issue might be in the SDK I am using. When I get a video track from the other peer I just connect it to a MediaElement in UWP, so I do not handle the incoming frames myself. As a side note, I have tried simply pausing the debugger during a call, and this also results in a bad framerate for both the remote and the local track. Worth noting, though: even when my local video gets a bad framerate, it still looks good on the other client. This would indicate that something is not working when playing video from a video track locally.
I include my local SDP just in case there is something wrong with it:
v=0
o=- 3875426963439162405 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE 0 1
a=msid-semantic: WMS
m=video 52241 UDP/TLS/RTP/SAVPF 96 98
c=IN IP4 ...MY IP ADDRESS...
b=AS:1264
a=rtcp:9 IN IP4 0.0.0.0
... SOME ICE CANDIDATES ...
a=ice-ufrag:6ZNW
a=ice-pwd:1JMvi96Ju3YZCX9S+ChJNH2C
a=fingerprint:sha-256 7B:F5:B5:49:E7:76:54:5F:55:D6:D3:2E:97:38:E0:63:63:5F:2E:53:49:BC:BD:B9:1D:40:45:4B:EC:1E:EE:D4
a=setup:actpass
a=mid:0
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=extmap:4 urn:3gpp:video-orientation
a=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
a=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay
a=extmap:7 http://www.webrtc.org/experiments/rtp-hdrext/video-content-type
a=extmap:8 http://www.webrtc.org/experiments/rtp-hdrext/video-timing
a=extmap:10 http://tools.ietf.org/html/draft-ietf-avtext-framemarking-07
a=extmap:9 urn:ietf:params:rtp-hdrext:sdes:mid
a=sendrecv
a=msid:- SELF_VIDEO
a=rtcp-mux
a=rtcp-rsize
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 goog-remb
a=rtcp-fb:96 transport-cc
a=rtcp-fb:96 ccm fir
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli
a=rtpmap:98 VP9/90000
a=rtcp-fb:98 goog-remb
a=rtcp-fb:98 transport-cc
a=rtcp-fb:98 ccm fir
a=rtcp-fb:98 nack
a=rtcp-fb:98 nack pli
a=fmtp:98 x-google-profile-id=0
a=ssrc-group:FID 2190372283 111930078
a=ssrc:2190372283 cname:C+phDL3HvwhlyByD
a=ssrc:2190372283 msid: SELF_VIDEO
a=ssrc:2190372283 mslabel:
a=ssrc:2190372283 label:SELF_VIDEO
a=ssrc:111930078 cname:C+phDL3HvwhlyByD
a=ssrc:111930078 msid: SELF_VIDEO
a=ssrc:111930078 mslabel:
a=ssrc:111930078 label:SELF_VIDEO
m=audio 52242 UDP/TLS/RTP/SAVPF 111 103 104 9 102 0 8 106 105 13 110 112 113 126
c=IN IP4 ...MY IP ADDRESS...
a=rtcp:9 IN IP4 0.0.0.0
... SOME ICE CANDIDATES ...
a=ice-ufrag:6ZNW
a=ice-pwd:1JMvi96Ju3YZCX9S+ChJNH2C
a=fingerprint:sha-256 7B:F5:B5:49:E7:76:54:5F:55:D6:D3:2E:97:38:E0:63:63:5F:2E:53:49:BC:BD:B9:1D:40:45:4B:EC:1E:EE:D4
a=setup:actpass
a=mid:1
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=extmap:9 urn:ietf:params:rtp-hdrext:sdes:mid
a=sendrecv
a=msid:- SELF_AUDIO
a=rtcp-mux
a=rtpmap:111 opus/48000/2
a=rtcp-fb:111 transport-cc
a=fmtp:111 minptime=10;useinbandfec=1
a=rtpmap:103 ISAC/16000
a=rtpmap:104 ISAC/32000
a=rtpmap:9 G722/8000
a=rtpmap:102 ILBC/8000
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:106 CN/32000
a=rtpmap:105 CN/16000
a=rtpmap:13 CN/8000
a=rtpmap:110 telephone-event/48000
a=rtpmap:112 telephone-event/32000
a=rtpmap:113 telephone-event/16000
a=rtpmap:126 telephone-event/8000
a=ssrc:2339515350 cname:C+phDL3HvwhlyByD
a=ssrc:2339515350 msid: SELF_AUDIO
a=ssrc:2339515350 mslabel:
a=ssrc:2339515350 label:SELF_AUDIO
Any help or suggestions as to what might be wrong would be greatly appreciated!
Are you sure the framerate decreases to 1 FPS? That is extremely low. I would advise you to check the exact framerate under different conditions; this can give you more detailed information about the issue. You can do it for free in Loadero. Here are some resources that will help you create free tests to get insights:
A blog post with a guide to testing WebRTC apps https://blog.loadero.com/2020/08/10/how-to-set-up-an-automated-webrtc-test-with-loadero/
Info about setting network conditions, packet loss, jitter and more https://wiki.loadero.com/test-creation/participant-configuration/network-conditions
How to use post-run assertions, which will allow you to get the exact FPS https://blog.loadero.com/2020/01/29/how-to-get-the-best-out-of-post-run-assertions/
Sign up here to run those tests and get detailed data https://loadero.com/home
Running tests with different network conditions will help you get detailed info about which network issues cause the FPS decrease and how much of a decrease that is. Hope this will allow you to find the issue in your application.
I used to filter packets in Wireshark with the simple dtls argument as the filter (DTLS: Datagram Transport Layer Security, essentially TLS over UDP).
Now I want to do the same using C# and the PcapDotNet wrapper, which uses WinPcap filters.
Sadly, I can't find the equivalent anywhere: dtls is not recognised in the C# app, so it doesn't capture any packets anymore (it simply crashes the interpreter, since the string is not recognised).
using (PacketCommunicator communicator = selectedDevice.Open(65536, PacketDeviceOpenAttributes.None, 1000))
{
    using (BerkeleyPacketFilter filter = communicator.CreateFilter("dtls"))
    {
        communicator.SetFilter(filter);
        communicator.ReceivePackets(1, packetHandler);
    }
}
Is there an equivalent, please?
EDIT: It looks like dtls is only a DISPLAY filter, not a CAPTURE one. I could only capture-filter using udp port xx (xx being the port), but since the ports used are always random, I can't. So I would be glad to find another filtering workaround if you have one! I would rather capture only the desired packets than capture everything and then filter the data...
There are only two packets I would like to capture: the one containing the Server Hello Done message, or the one containing the handshake message (the one with the Record Layer):
[Wireshark screenshot: DTLS]
EDIT 2: OK, I am close to finding what I need, but I need your help.
This answer from here must be the solution. tcp[((tcp[12] & 0xf0) >> 2)] = 0x16 looks for handshake type 22 (0x16); the computed index is needed because ((tcp[12] & 0xf0) >> 2) is the variable TCP header length in bytes, so the expression addresses the first payload byte. But DTLS runs over UDP, not TCP, so the 12 offset must be different. Can anyone help me figure out the correct formula to adapt it for DTLS instead of TCP TLS?
I tried to use this in Wireshark, but the filter is invalid and I don't really know why. If you could at least make it work in Wireshark, I could experiment with different values myself and come back with a final answer. udp[((udp[12] & 0xf0) >> 2)] = 0x16 is not a valid filter in Wireshark.
So, I gave up on the dynamic way of finding the correct position of the data.
But this is what I ended up with:
using (PacketCommunicator communicator = selectedDevice.Open(65536, PacketDeviceOpenAttributes.None, 1000))
{
    using (BerkeleyPacketFilter filter = communicator.CreateFilter("udp && ((ether[62:4] = 0x16fefd00) || (ether[42:4] = 0x16fefd00))"))
    {
        communicator.SetFilter(filter);
        communicator.ReceivePackets(1, packetHandler);
    }
}
ether[62:4] is the position of 16fefd00 for IPv6 packets (42 for IPv4).
The 16 is the content type (handshake protocol, 22) and the following fefd is the DTLS 1.2 version field. The trailing two zeros (one extra byte) are there only because byte slices can be 1, 2, or 4 bytes long, not 3, so I had to take an extra byte into consideration.
This is absolutely not perfect, I know, but it works for now, since I couldn't find any other workaround yet.
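One more note: since the UDP header is a fixed 8 bytes (unlike TCP's variable-length header), a simpler capture filter might also work, at least for IPv4 (the BPF index operators only apply to IPv4 packets); udp[8] should address the first payload byte, which is the DTLS record content type. I have not verified this on my setup, so treat it as a sketch:

using (BerkeleyPacketFilter filter = communicator.CreateFilter("udp[8] = 0x16"))
{
    // 0x16 = handshake (22), the first byte of a DTLS record
    communicator.SetFilter(filter);
    communicator.ReceivePackets(1, packetHandler);
}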
I've done some playing around with SerialPort (frustratingly so) and have finally hit a point where I have absolutely no idea why this isn't working. There's a USB CDC device that I'm trying to send hex commands to, and the way I'm doing this is over the COM port interface it exposes. I can handshake with the device: when I say HI it replies with HI back. But then I send another command, which must be followed by a zero-byte packet or else the device stops responding altogether. Keep in mind, this zero-byte packet has ABSOLUTELY nothing in it, meaning it doesn't contain a \0 or a 0x00 or 0 or even a null (SerialPort throws an exception on null).
Now, one way I was able to circumvent this was to use libusbdotnet. I accessed the CDC device directly instead of through the COM interface, set the endpoints correctly, and sent hex commands that way. I'm able to successfully send "0 byte" packets using this method with the following C# code:
string zlpstring = "";
byte[] zlpbyte = Encoding.Default.GetBytes(zlpstring);
// ...snip...
ecWrite = writer.SubmitAsyncTransfer(zlpbyte, 0, zlpbyte.Length, 100, out usbWriteTransfer);
zlpbyte is the buffer, 0 is the offset, zlpbyte.Length is the packet length in bytes, 100 is the timeout, and out usbWriteTransfer is the transfer context.
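For completeness, here is a self-contained sketch of the libusbdotnet approach that works for me. The VID/PID and the endpoint are placeholders for my device's actual values, and I am using the simpler synchronous Write here instead of SubmitAsyncTransfer:

using LibUsbDotNet;
using LibUsbDotNet.Main;

// Hypothetical VID/PID - replace with the device's real IDs.
UsbDeviceFinder finder = new UsbDeviceFinder(0x1234, 0x5678);
UsbDevice device = UsbDevice.OpenUsbDevice(finder);

// "Whole" devices need their configuration set and interface claimed.
IUsbDevice wholeUsbDevice = device as IUsbDevice;
if (!ReferenceEquals(wholeUsbDevice, null))
{
    wholeUsbDevice.SetConfiguration(1);
    wholeUsbDevice.ClaimInterface(0);
}

// Ep01 is an assumption - use the CDC data OUT endpoint of your device.
UsbEndpointWriter writer = device.OpenEndpointWriter(WriteEndpointID.Ep01);

int bytesWritten;
ErrorCode ec = writer.Write(new byte[0], 100, out bytesWritten); // the ZLP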
When I use this same method on the COM port:
string zlpstring = "";
byte[] zlpbyte = Encoding.Default.GetBytes(zlpstring);
_serialPort.Write(zlpbyte, 0, zlpbyte.Length);
the USB logger reports that absolutely nothing was sent. It's as if the COM port is ignoring the zero-byte transfer. Before anyone says "you cannot do this": there are various programs out there that can send a zero-byte packet to this exact device's COM port without doing ANY driver manipulation. That is what I'm going for, which is why I'm trying to ditch libusbdotnet and go straight to the COM port.
EDIT:
After some more toying around with a different USB logger, I don't see zero bytes being sent, but rather this:
IRP_MJ_DEVICE_CONTROL (IOCTL_SERIAL_WAIT_ON_MASK)
I think this may be the issue. If a zero byte were being sent, I assume it would show up as:
IRP_MJ_WRITE > UP > STATUS_SUCCESS > (blank) > (blank)
My program is getting back a response of 01 00 00 00; however, while logging another, successful program, I see it SETTING the wait mask:
IRP_MJ_DEVICE_CONTROL (IOCTL_SERIAL_SET_WAIT_MASK) DOWN STATUS_SUCCESS 01 00 00 00
If my assumptions are right, this question might've just turned into: how do I set a serial/COM port's wait mask? There's absolutely nothing about this in the C# SerialPort class, which is why I can now see why so many articles called it "lacking". I also took a look at the C++ side (https://msdn.microsoft.com/en-us/library/aa363214(v=vs.85).aspx), which also does not seem to cover the wait mask. Using the USB filter libusb is starting to look a lot more pleasing by the minute... (although I'm going to question myself forever why sending a zero byte works there but not over SerialPort).
SECOND EDIT:
I'm a moron. It was definitely a setting that the manufacturer probably figured no one would ever touch or know how to set:
#define EV_RXFLAG 0x0001
SetCommMask(hSerial, EV_RXFLAG);
I then saw this over the USB logs:
IRP_MJ_DEVICE_CONTROL (IOCTL_SERIAL_SET_WAIT_MASK) DOWN STATUS_SUCCESS 01 00 00 00
Bingo. The RXFLAG was originally set to 0x0002. I couldn't find a way to change this in C# yet, so I had to make do with some C++ code for now. It totally works and sends the "zero byte" like it's supposed to, without me actually sending it from the code. I assume this setting was the "handshake" method between my device and whatever else it interacts with in Flash mode. Hope this helps someone else out there whose COM/serial device is rejecting/discarding zero-byte packets yet requiring ZLPs at the same time... how goofy?!
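In case someone wants to stay in C#: here is an untested sketch of making the same SetCommMask call via P/Invoke. It digs SerialPort's internal SafeFileHandle out via reflection; the field name _handle is an internal detail of the .NET Framework's SerialStream, so treat that as an assumption that may break between framework versions:

using System;
using System.IO.Ports;
using System.Reflection;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class CommMask
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetCommMask(SafeFileHandle hFile, uint dwEvtMask);

    // Must be called after port.Open(), since that is when the handle exists.
    public static void Set(SerialPort port, uint mask)
    {
        object stream = port.BaseStream; // internal SerialStream
        FieldInfo f = stream.GetType().GetField("_handle",
            BindingFlags.NonPublic | BindingFlags.Instance);
        if (f == null)
            throw new NotSupportedException("_handle field not found");
        var handle = (SafeFileHandle)f.GetValue(stream);
        if (!SetCommMask(handle, mask))
            throw new System.ComponentModel.Win32Exception(
                Marshal.GetLastWin32Error());
    }
}

// Usage, mirroring the C++ above:
// CommMask.Set(_serialPort, 0x0001);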
Have you tried concatenating an extra newline or carriage return, or both, to the end of the data?
I would say add a 0x0A (newline), or a 0x0D (carriage return), or both, to the end of your byte array and see if you get something:
byte[] zlpbyte = new byte[1] {0};
_serialPort.Write(zlpbyte, 0, 1);
[Update]
Based on our discussion, it appears that you are trying to gain control over the serial port's control signals. I have not tried it before, but from the source it looks like it is possible to set the control signals into certain states.
Try setting the Handshake property:
public enum Handshake
{
    None,
    XOnXOff,
    RequestToSend,
    RequestToSendXOnXOff,
}
I am not sure exactly how it maps onto the IOCTL settings, but I believe it should affect them somehow.
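For example (which value is right depends on the device, so this is purely illustrative):

_serialPort.Handshake = Handshake.RequestToSend; // hardware (RTS/CTS) flow control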
I am using DirectShowLib.Net in C# and I am trying to change my ALLOCATOR_PROPERTIES settings, as I am using a live video source and changes are not "immediately" visible on screen.
When constructing a filter graph in GraphStudioNext, the ALLOCATOR_PROPERTIES are shown for both the upstream and the downstream pin, though only after the pins are connected.
I'd like to set the ALLOCATOR_PROPERTIES using IAMBufferNegotiation, but when trying to get the interface from my capture filter (AV/C Tape Recorder/Player) I get an E_UNEXPECTED (0x8000ffff) error. Here is the relevant C# code:
DS.IAMBufferNegotiation iamb = (DS.IAMBufferNegotiation)capturePin;
DS.AllocatorProperties allocatorProperties = new DS.AllocatorProperties();
hr = iamb.GetAllocatorProperties(allocatorProperties);
DS.DsError.ThrowExceptionForHR(hr);
When I use the downstream video decoder's input pin instead, I get a System.InvalidCastException, as the interface is not supported.
How can I change the cBuffers value of ALLOCATOR_PROPERTIES?
Changing the number of buffers is not going to help you here. The number of buffers is negotiated between filters and is, basically, not to be changed externally. In your case, however, there is no real buffering in the pipeline: once a video frame is available on the first output pin, it immediately goes through and reaches the video renderer. If you see a delay, it means that either the DV player has internal latency, or it is time stamping frames "late" and the video renderer has to wait before presentation. You can troubleshoot the latter case by inserting a Smart Tee Filter in between and connecting its Preview output pin downstream to the video renderer (see the sketch below); if this helps, then the issue is frame time stamping on the source. The number of buffers does not cause any presentation lag here.
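If it helps, here is a rough sketch of wiring in the Smart Tee for this test. Names such as graphBuilder, sourceOutPin and rendererInPin are placeholders for your existing objects, and I am using the same DS alias as your code:

// The Preview output of the Smart Tee delivers samples without time
// stamps, so the renderer presents frames as soon as they arrive. If
// the lag disappears, the source is stamping frames "late".
Guid CLSID_SmartTee = new Guid("CC58E280-8AA1-11D1-B3F1-00AA003761C5");
var smartTee = (DS.IBaseFilter)Activator.CreateInstance(
    Type.GetTypeFromCLSID(CLSID_SmartTee));
int hr = graphBuilder.AddFilter(smartTee, "Smart Tee");
DS.DsError.ThrowExceptionForHR(hr);

DS.IPin teeInput = DS.DsFindPin.ByDirection(smartTee, DS.PinDirection.Input, 0);
DS.IPin teePreview = DS.DsFindPin.ByName(smartTee, "Preview");
hr = graphBuilder.Connect(sourceOutPin, teeInput);
DS.DsError.ThrowExceptionForHR(hr);
hr = graphBuilder.Connect(teePreview, rendererInPin);
DS.DsError.ThrowExceptionForHR(hr);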
I have an Arduino Uno with a BlueSMiRF Silver module connected. My Arduino has a temperature sensor which records the temperature regularly. The Arduino listens for any string sent to it over Bluetooth and responds with the latest data.
I have written a C# application to fetch this data, but I am seeing some strange behaviour. I am using the following code to connect, send a string, and read the returned data:
mPort = new SerialPort(mPortName, 115200, Parity.None, 8, StopBits.One);
mPort.Open();
mPort.Write("download");
Thread.Sleep(1000);
while (mPort.BytesToRead > 0)
{
    String data = mPort.ReadExisting();
    this.BeginInvoke(new Action<String>(AddMessage), data);
}
The data I get back looks like this:
Line added locally within C# application:
Send: download
Lines added based on data received from Arduino:
Read: d???+?
GotData
------
Total Readings, 1069
Num Readings, 360
Lost Readings, 709
Reading Interval, 240000
------
350,19.34
351,19.34
352,19.34
353,20.31
....
All the text looks fine apart from the echoed-back copy of the string I sent to the Arduino. Have I done something wrong in the way I sent the data?
FYI - The datasheet for the bluetooth module is here: http://www.sparkfun.com/datasheets/Wireless/Bluetooth/rn-bluetooth-um.pdf
@Jeff - This is the code I use on my Arduino to receive data: https://github.com/mchr3k/arduino/blob/master/tempsensor/StringReader.cpp
@Jeff - stringDataLen defines the length, and I call the overall function from this file: https://github.com/mchr3k/arduino/blob/master/tempsensor/tempsensor.ino
EDIT: Here is the complete source code
Arduino - https://github.com/mchr3k/arduino/tree/master/tempsensor
C# application - https://github.com/mchr3k/arduino/tree/master/serialdownload
The C# code is definitely getting the flow control wrong for some reason. I have switched to the following C# code, which gets a string through without corruption.
private void write(SerialPort mPort, string str)
{
    foreach (char c in str)
    {
        mPort.Write(new char[] { c }, 0, 1);
        Thread.Sleep(10);
    }
}
An incorrect encoding perhaps?
mPort = new SerialPort(mPortName, 115200, Parity.None, 8, StopBits.One);
mPort.Encoding = System.Text.Encoding.ASCII; // Or System.Text.Encoding.UTF8
mPort.Open();
mPort.Write("download");
Read byte by byte and check each byte one by one to debug lower-level problems. ReadExisting() converts bytes to a String based on the Encoding property.
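For example, a minimal way to do that with the names from your code, hex-dumping whatever arrives so the encoding is taken out of the equation:

// Dump each received byte as hex instead of letting ReadExisting()
// run the bytes through an encoding.
while (mPort.BytesToRead > 0)
{
    int b = mPort.ReadByte();
    this.BeginInvoke(new Action<String>(AddMessage), b.ToString("X2") + " ");
}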
My issue was caused by using the SoftwareSerial class to communicate with my Bluetooth module on pins 2 and 3. I was using a baud rate of 115200, which is claimed to be supported on this page: http://arduino.cc/en/Reference/SoftwareSerial
However, this page (http://arduino.cc/en/Reference/SoftwareSerialBegin) states that the maximum supported baud rate is actually 9600. I'm not sure whether this is accurate, but reducing my baud rate to 9600 fixed my issue.
I suggest you decrease the communication speed; there is no reason to use 115200 bps (unless your module demands that speed, in which case it's fine). Also, you are sending the string "download", which is not ideal; rather, use a short marker such as "#D" which your Arduino device internally interprets as "send data to the computer". That way you send only two bytes instead of eight, you decrease the probability of error, and the Arduino code will be cleaner.
Now, let's try to fix the problem. First, try something like this when you are reading data from the Arduino device:
ArrayList dataReceived = new ArrayList();
while (serialPort.BytesToRead > 0 && serialPort.IsOpen)
{
    dataReceived.Add(serialPort.ReadByte());
}
So I suggest you read byte by byte, in this or a similar way. Also, you should be careful if you are sending numbers from the Arduino device. If you are, then use something like this:
Serial.print(temperatureValue, BYTE); // on Arduino 1.0 and later, use Serial.write(temperatureValue); instead
With this code you explicitly say that the data you are sending is one byte long. If this does not help, please let me know, so we can try something else.
I'd call Ingenico's tech support, but I don't have a month to wait for their callback.
Our app uses the 6550, and it displays all the forms just fine, except on one machine it's not showing the signature box on the signature capture form. It shows the buttons and text just fine.
I've tried using our app, and I've tried the Ingenico test app. Everything seems to check out fine. The only thing I get in the log is this:
2/17/2011 8:43:33 AM (31813 ms) EC0000 Device name [Ing6XXX] - UPOS-Interface-App error code=0xFD
It's followed by these lines after I dismiss the form:
2/17/2011 8:43:33 AM (31860 ms) EC0000 Device name [Ing6XXX] - Last platform error code from device=0x2, desc=SingleButtonEntry: ssaSecFuncKe
2/17/2011 8:43:33 AM (31860 ms) EC0111 Device name [Ing6XXX] - SIG - Direct IO - Command 12 - Invalid command, or function code missing. Length 5 [Package {00 05 95 FD 6D}] [Translation {iDataLength 0}{ucFunctionCode 95}{ucResponseCode FD}{ucResultCode 6D}{sData }]
2/17/2011 8:43:33 AM (31860 ms) EC0111 Device name [Ing6XXX] - SO APP - Direct IO - Command 12 - Invalid command, or function code missing. Length 5 [Package {00 05 95 FD 6D}] [Translation {iDataLength 0}{ucFunctionCode 95}{ucResponseCode FD}{ucResultCode 6D}{sData }]
I'm not sure if that's related. Does anyone have experience with these devices? Any idea what might cause the failure to display the signature box?
The problem turned out to be a missing registry setting for the form location. Not sure how we missed that.