Get timestamp from RTP packet - C#

I'm using PacketDotNet to retrieve data from the RTP header, but there are times when the timestamp comes back as a negative value.
private long GetTimeStamp(UdpPacket packetUdp)
{
    byte[] packet = packetUdp.PayloadData;
    long timestamp = GetRTPHeaderValue(packet, 32, 63);
    return timestamp;
}
private static int GetRTPHeaderValue(byte[] packet, int startBit, int endBit)
{
    int result = 0;
    // Number of bits in the value
    int length = endBit - startBit + 1;
    // Values in the RTP header are big endian, so need to do these conversions
    for (int i = startBit; i <= endBit; i++)
    {
        int byteIndex = i / 8;
        int bitShift = 7 - (i % 8);
        result += ((packet[byteIndex] >> bitShift) & 1) *
                  (int)Math.Pow(2, length - i + startBit - 1);
    }
    return result;
}

It could be caused by RTCP packets. If the RTP data is coming from a phone, the phone sends periodic RTCP reports; they seem to show up about every 200th packet. Their format is different, but your code is probably parsing them as if they were RTP, so you will need to detect and handle the RTCP packets.
The packet format is described here: http://www.cl.cam.ac.uk/~jac22/books/mm/book/node162.html
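One hedged way to do that, assuming RTP and RTCP arrive on the same socket: the RTCP report types (SR, RR, SDES, BYE, APP) carry a packet type of 200-204 in the second header byte, whereas an RTP packet has the marker bit plus a payload type of 0-127 there. A minimal sketch (IsRtcpPacket is a hypothetical helper, not from the code above):

// Skip RTCP packets before reading RTP header fields.
private static bool IsRtcpPacket(byte[] packet)
{
    if (packet == null || packet.Length < 2)
        return false;
    int packetType = packet[1];                      // RTCP uses the whole second byte as its type
    return packetType >= 200 && packetType <= 204;   // SR, RR, SDES, BYE, APP
}

// Usage inside GetTimeStamp, before reading the timestamp bits:
// if (IsRtcpPacket(packet)) return -1;              // or simply skip this packet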

Related

WebSocket - How does the mask work client-side?

I read the article at https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_server and found it very interesting, as I have not done anything with WebSockets before. I have already managed to get my client to connect and complete the handshake (that was the easy part).
However, I can't manage to set the mask on my client so that the message arrives correctly at the server. Somehow I don't quite understand this yet.
At the server it looks like this:
bool fin = (bytes[0] & 0b10000000) != 0,
     mask = (bytes[1] & 0b10000000) != 0; // must be true, "All messages from the client to the server have this bit set"
int opcode = bytes[0] & 0b00001111,       // expecting 1 - text message
    offset = 2;
ulong msglen = (ulong)(bytes[1] & 0b01111111);
if (msglen == 126)
{
    // bytes are reversed because websocket will print them in Big-Endian, whereas
    // BitConverter will want them arranged in little-endian on windows
    msglen = BitConverter.ToUInt16(new byte[] { bytes[3], bytes[2] }, 0);
    offset = 4;
}
else if (msglen == 127)
{
    // To test the below code, we need to manually buffer larger messages, since the NIC's autobuffering
    // may be too latency-friendly for this code to run (that is, we may have only some of the bytes in this
    // websocket frame available through client.Available).
    msglen = BitConverter.ToUInt64(new byte[] { bytes[9], bytes[8], bytes[7], bytes[6], bytes[5], bytes[4], bytes[3], bytes[2] }, 0);
    offset = 10;
}
if (msglen == 0)
{
    Console.WriteLine("msglen == 0");
}
else if (mask)
{
    byte[] decoded = new byte[msglen];
    byte[] masks = new byte[4] { bytes[offset], bytes[offset + 1], bytes[offset + 2], bytes[offset + 3] };
    offset += 4;
    for (ulong i = 0; i < msglen; ++i)
        decoded[i] = (byte)(bytes[offset + (int)i] ^ masks[i % 4]);
    string text = Encoding.Default.GetString(decoded);
}
I have written the client in C# using TcpClient, but I just can't get it to encode the message so that it is decoded correctly at the server.
Can someone help me? What is the inverse of what the server does above, so that I can encode the message for it to decode?
I only found articles in JavaScript, or C# examples without masking.
Thank you very much!
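For context, masking in the WebSocket protocol is symmetric: the client picks a random 4-byte key, sends it in the frame, and XORs every payload byte with key[i % 4]; the server undoes it with the exact same XOR shown above, so there is no separate inverse function. A minimal client-side framing sketch (illustrative only, not taken from the question; needs System.Text and System.Security.Cryptography, and only handles payloads up to 125 bytes):

static byte[] BuildMaskedTextFrame(string message)
{
    byte[] payload = Encoding.UTF8.GetBytes(message);
    if (payload.Length > 125)
        throw new NotSupportedException("Extended payload lengths are not handled in this sketch.");

    byte[] maskKey = new byte[4];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(maskKey);                          // 4 random masking-key bytes

    byte[] frame = new byte[2 + 4 + payload.Length];
    frame[0] = 0b10000001;                              // FIN = 1, opcode = 1 (text)
    frame[1] = (byte)(0b10000000 | payload.Length);     // MASK bit set + payload length
    Buffer.BlockCopy(maskKey, 0, frame, 2, 4);

    for (int i = 0; i < payload.Length; i++)
        frame[6 + i] = (byte)(payload[i] ^ maskKey[i % 4]);   // same XOR the server applies

    return frame;
}

Write the returned array to the TcpClient's stream; frames from server to client are not masked, so reading the reply needs no key.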

How to convert a hex value back to an ASCII string?

I'm trying to read from an accelerometer over BLE from my Arduino device. The only problem is that I'm not sure how to convert the value I receive back into a readable string.
My Arduino sketch (part of it) looks like this:
if ((x - lastX) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else if ((y - lastY) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else if ((z - lastZ) > threshold) {
  STATUS = "MOVING";
  toggleIsMoving();
} else {
  STATUS = "STOPPED";
  toggleIsStopped();
}
lastX = x;
lastY = y;
lastZ = z;
Serial.print(STATUS);
The hex value that I receive in my Xamarin Android application (through BLE) is this:
A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C
My current implementation is:
public static string FromHex(string hex) {
    hex = hex.Replace("-", "");
    byte[] raw = new byte[hex.Length / 2];
    for (int i = 0; i < raw.Length; i++)
        raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return Encoding.ASCII.GetString(raw);
}
The result is:
?������MOVING?<
Why is this happening and how can I convert this back to a readable string in C#?
Update 1
I did some research in the bean-sdk source code and found this:
/**
 * Represents a LightBlue Serial Transport Message
 *
 * Defined as:
 *
 * [1 byte] - Length (Message ID + Payload)
 * [1 byte] - Reserved
 * [2 byte] BE - Message ID
 * [0-64 bytes] LE - Payload
 * [2 bytes] LE - CRC (Everything before CRC)
 *
 * @param messageId
 * @param definition
 */
Update 2
Finally got it working by implementing the protocol mentioned in Update 1. The actual payload is located inside the frame, so it needs to be extracted from the hex string.
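For reference, a hedged sketch of that kind of extraction, assuming the frame starts with one transport header byte (the 0xA0 above) ahead of the structure quoted in Update 1, so the layout is [transport][length][reserved][message ID hi][message ID lo][payload ...][CRC lo][CRC hi]; ExtractPayload is a hypothetical helper, not the final code:

public static string ExtractPayload(string hex)
{
    string[] parts = hex.Split('-');
    byte[] raw = new byte[parts.Length];
    for (int i = 0; i < parts.Length; i++)
        raw[i] = Convert.ToByte(parts[i], 16);

    int length = raw[1];                  // Length = message ID + payload, per Update 1
    int payloadLength = length - 2;       // subtract the 2-byte message ID
    // payload starts after the transport byte, length, reserved and 2-byte message ID
    return Encoding.ASCII.GetString(raw, 5, payloadLength);
}

For the sample frame, ExtractPayload("A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C") returns "MOVING", with CE-3C left over as the CRC.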
Anything over 0x7F is NOT ASCII. The first character is 0xA0, which in both Unicode and ISO Latin-1 is a no-break space (NBSP). The sequence A0-08-00-00-00 is probably meaningful, but not as plain text. You should have a look at the specs of your accelerometer and see what it is spewing out so you can interpret it correctly.
EDIT
Also, since in your code you set the variable STATUS to "MOVING", it could be that the characters before and after MOVING are spurious.
I have used the following:
private static string hexToASCII(string hexValue)
{
    StringBuilder output = new StringBuilder("");
    for (int i = 0; i < hexValue.Length; i += 2)
    {
        string str = hexValue.Substring(i, 2);
        output.Append((char)Convert.ToInt32(str, 16));
    }
    return output.ToString();
}
Reference: converted from a Java function at Convert Hex to ASCII and ASCII to Hex

C# and Arduino Master/Slave reading 3 bytes from AD

I'm currently working on a master/slave setup where the master is a C# program and the slave is an Arduino Uno. The Arduino is reading several values and working as expected, but I'm having some trouble on the C# side. I'm reading 3 bytes from an A/D converter (AD7680), which returns 3 bytes of data structured in the following way:
0000 | 16 bit number | 0000
My C# program reads the returned value into a double, which is the expected value, BUT I haven't figured out how to get rid of the last four 0's and obtain the 2-byte number I need.
What would be the best approach to get the right value without losing data? I tried BitConverter, but it's not giving me what I expect, and I have no clue how to proceed. Unfortunately I can't attach the code at the moment, but I can reference anything in it if needed.
Thanks for reading!
EDIT: This is the function on the C# side:
public double result(byte[] command)
{
    try
    {
        byte[] buffer = command;
        arduinoBoard.Open();
        arduinoBoard.Write(buffer, 0, 3);
        int intReturnASCII = 0;
        char charReturnValue = (Char)intReturnASCII;
        Thread.Sleep(200);
        int count = arduinoBoard.BytesToRead;
        double returnResult = 0;
        string returnMessage = "";
        while (count > 0)
        {
            intReturnASCII = arduinoBoard.ReadByte();
            //string str = char.ConvertFromUtf32(intReturnASCII);
            returnMessage = returnMessage + Convert.ToChar(intReturnASCII);
            count--;
        }
        returnResult = double.Parse(returnMessage, System.Globalization.CultureInfo.InvariantCulture);
        arduinoBoard.Close();
        return returnResult;
    }
    catch (Exception e)
    {
        return 0;
    }
}
And the Arduino function that communicates with it is this one:
unsigned long ReturnPressure() {
  long lBuffer = 0;
  byte rtnVal[3];
  digitalWrite(SLAVESELECT, LOW);
  delayMicroseconds(1);
  rtnVal[0] = SPI.transfer(0x00);
  delayMicroseconds(1);
  rtnVal[1] = SPI.transfer(0x00);
  delayMicroseconds(1);
  rtnVal[2] = SPI.transfer(0x00);
  delayMicroseconds(1);
  digitalWrite(SLAVESELECT, HIGH);
  // assemble into long type
  lBuffer = lBuffer | rtnVal[0];
  lBuffer = lBuffer << 8;
  lBuffer = lBuffer | rtnVal[1];
  lBuffer = lBuffer << 8;
  lBuffer = lBuffer | rtnVal[2];
  return lBuffer;
}
Okay, you have to do a few steps:
Firstly, it's much easier to save the bytes in an array, like this:
byte[] Received = new byte[3];
for (int i = 0; i < 3; i++)
{
    Received[i] = (byte)arduinoBoard.ReadByte();
}
After receiving the three bytes, shift them together (check that the three bytes are in the right order: the most significant byte is at index 0 here):
UInt64 Shifted = (UInt64)(Received[0] << 16) | (UInt64)(Received[1] << 8) | (UInt64)Received[2];
Now shift out the four trailing zeros:
Shifted = Shifted >> 4;
To find out what your voltage is, you have to know the scale of your converter. The data sheet says "The LSB size is VDD/65536", so you could define a constant:
const double VDD = 5; // for example, 5 V
After that you can calculate the double you need with:
return Shifted * (VDD / 65536); // your voltage
Hope this helps.
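Pulling the steps above together, a minimal sketch (an illustration, not the asker's exact code: it assumes arduinoBoard is the already-opened SerialPort from result(), that the three bytes arrive most significant byte first, and that VDD is the AD7680's reference voltage):

public double ReadVoltage(byte[] command)
{
    const double VDD = 5.0;                         // reference voltage, assumed 5 V

    arduinoBoard.Write(command, 0, 3);
    Thread.Sleep(200);

    byte[] received = new byte[3];
    for (int i = 0; i < 3; i++)
        received[i] = (byte)arduinoBoard.ReadByte();

    // 24 bits come back as: 4 leading zeros | 16-bit sample | 4 trailing zeros
    int raw = (received[0] << 16) | (received[1] << 8) | received[2];
    int sample = (raw >> 4) & 0xFFFF;               // drop the 4 trailing zeros

    return sample * (VDD / 65536);                  // LSB size is VDD/65536 per the data sheet
}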

Playing an RTP stream to PC speakers

I'm using UdpClient to get an RTP stream from phone calls through the Avaya DMCC SDK. I would like to play this stream through the computer's speakers. After a lot of searching, I've only been able to find solutions that require saving to a file and then playing the file, but I need to play the stream through the speakers without saving it to a file. I'd like to send audio to the speakers as I receive it.
public void StartClient()
{
    // Create new UDP client. The IP end point tells us which IP is sending the data
    client = new UdpClient(port);
    endPoint = new IPEndPoint(System.Net.IPAddress.Any, port);
    selectedCodec = new MuLawChatCodec();
    waveOut = new WaveOut();
    waveProvider = new BufferedWaveProvider(selectedCodec.RecordFormat);
    waveOut.Init(waveProvider);
    waveOut.Play();
    listening = true;
    listenerThread = new Thread(ReceiveCallback);
    listenerThread.Start();
}
private void ReceiveCallback()
{
    // Begin looking for the next packet
    while (listening)
    {
        // Receive packet
        byte[] packet = client.Receive(ref endPoint);
        // Packet header
        int version = GetRTPValue(packet, 0, 1);
        int padding = GetRTPValue(packet, 2, 2);
        int extension = GetRTPValue(packet, 3, 3);
        int csrcCount = GetRTPValue(packet, 4, 7);
        int marker = GetRTPValue(packet, 8, 8);
        int payloadType = GetRTPValue(packet, 9, 15);
        int sequenceNum = GetRTPValue(packet, 16, 31);
        int timestamp = GetRTPValue(packet, 32, 63);
        int ssrcId = GetRTPValue(packet, 64, 95);
        int csrcid = (csrcCount == 0) ? -1 : GetRTPValue(packet, 96, 95 + 32 * (csrcCount));
        int extHeader = (csrcCount == 0) ? -1 : GetRTPValue(packet, 128 + (32 * csrcCount), 127 + (32 * csrcCount));
        int payloadIndex = csrcCount == 0 ? 96 : 128 + 32 * csrcCount;
        int payload = GetRTPValue(packet, payloadIndex, packet.Length);
        byte[] Payload = new byte[packet.Length - payloadIndex];
        Buffer.BlockCopy(packet, payloadIndex, Payload, 0, packet.Length - payloadIndex);
        byte[] decoded = selectedCodec.Decode(Payload, 0, Payload.Length);
    }
}
private int GetRTPValue(byte[] packet, int startBit, int endBit)
{
    int result = 0;
    // Number of bits in the value
    int length = endBit - startBit + 1;
    // Values in the RTP header are big endian, so need to do these conversions
    for (int i = startBit; i <= endBit; i++)
    {
        int byteIndex = i / 8;
        int bitShift = 7 - (i % 8);
        result += ((packet[byteIndex] >> bitShift) & 1) * (int)Math.Pow(2, length - i + startBit - 1);
    }
    return result;
}
I now successfully have audio from the call playing over the speakers; the fix was adding a byte[] containing just the payload to NAudio's BufferedWaveProvider.
There's a demo of how to play audio received over the network included with the NAudio source code (see the Network Chat demo in the NAudioDemo project). Basically, use an AcmStream to decode the audio and then put it into a BufferedWaveProvider that the sound card is playing from.
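For completeness, the missing piece in the receive loop above boils down to queuing the decoded bytes on the BufferedWaveProvider that waveOut is already playing from; a minimal sketch, assuming the codec's output matches waveProvider's WaveFormat (MuLawChatCodec's Decode produces its RecordFormat):

byte[] decoded = selectedCodec.Decode(Payload, 0, Payload.Length);
waveProvider.AddSamples(decoded, 0, decoded.Length);   // NAudio buffers and plays these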

C# - Converting a Sequence of Numbers into Bytes

I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
    byte[] nByte = BitConverter.GetBytes((byte)i);
    foreach (byte b in nByte)
    {
        byteList.Add(b);
    }
}
for (int g = 256; g <= 1000; g++)
{
    UInt16 st = Convert.ToUInt16(g);
    byte[] xByte = BitConverter.GetBytes(st);
    foreach (byte c in xByte)
    {
        byteList.Add(c);
    }
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described, without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
    uint num = (uint)value;
    while (num >= 0x80)
    {
        this.Write((byte)(num | 0x80));
        num = num >> 7;
    }
    this.Write((byte)num);
}
(taken from System.IO.BinaryWriter.Write(String)).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
    byte num3;
    int num = 0;
    int num2 = 0;
    do
    {
        if (num2 == 0x23)
        {
            throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
        }
        num3 = this.ReadByte();
        num |= (num3 & 0x7f) << num2;
        num2 += 7;
    }
    while ((num3 & 0x80) != 0);
    return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;

namespace EncodedNumbers
{
    class Program
    {
        protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
        {
            uint num = (uint)value;
            while (num >= 0x80)
            {
                bin.Write((byte)(num | 0x80));
                num = num >> 7;
            }
            bin.Write((byte)num);
        }

        static void Main(string[] args)
        {
            MemoryStream ms = new MemoryStream();
            BinaryWriter bin = new BinaryWriter(ms);
            for (int i = 1; i < 1000; i++)
            {
                Write7BitEncodedInt(bin, i);
            }
            byte[] data = ms.ToArray();
            int size = data.Length;
            Console.WriteLine("Total # of Bytes = " + size);
            Console.ReadLine();
        }
    }
}
The total size I get is 1871 bytes for numbers 1-1000.
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
    byte[] bytes = BitConverter.GetBytes(value);
    int skip = bytes.Length - 1;
    while (bytes[skip] == 0)
    {
        skip--;
    }
    for (int i = 0; i <= skip; i++)
    {
        bin.Write(bytes[i]);
    }
}
This ignores any bytes that are zero (from MSB to LSB). So for 0-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1743 bytes (as opposed to 1871 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store numbers above 255 in one byte. The easiest way would be to use a short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values).
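A minimal sketch of that packing idea (Pack10Bit is a hypothetical helper; it assumes every value fits in 10 bits and packs them MSB-first). For 1-1000 this works out to 1000 * 10 bits = 1250 bytes:

static byte[] Pack10Bit(int[] values)
{
    byte[] packed = new byte[(values.Length * 10 + 7) / 8];
    int bitPos = 0;
    foreach (int v in values)
    {
        // emit the 10 bits of each value, most significant bit first
        for (int b = 9; b >= 0; b--, bitPos++)
        {
            if (((v >> b) & 1) != 0)
                packed[bitPos / 8] |= (byte)(1 << (7 - (bitPos % 8)));
        }
    }
    return packed;
}

The receiver has to unpack with the same fixed 10-bit width, so unlike the variable-length schemes above there is no ambiguity.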
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
    byte[] nByte = BitConverter.GetBytes(i);
    foreach (byte b in nByte)
        bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
This will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd have to add length encoding or sentinel bits or something of the like.
I'd say don't. Just compress the stream, either using a built-in class, or gin up a Huffman encoding implementation using an agreed-upon set of frequencies.
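A minimal sketch of the "just compress it" route, assuming plain 4-byte little-endian ints on the wire and System.IO.Compression's GZipStream; the receiver wraps the datagram in a GZipStream with CompressionMode.Decompress and reads the numbers back with BinaryReader.ReadInt32:

using (var ms = new MemoryStream())
{
    using (var gzip = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
    using (var writer = new BinaryWriter(gzip))
    {
        for (int i = 1; i <= 1000; i++)
            writer.Write(i);               // 4 bytes per number before compression
    }
    byte[] datagram = ms.ToArray();        // hand this to UdpClient.Send
}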
