C# - SerialPort.Read() speed issue

My code is designed to get data from a serial device and print its contents in a Windows Forms application. The IDE I use is Visual Studio 2019 Community.
The device sends variable-size packets. First I have to decode the packet "header" to get the information that is crucial for further processing: the first channel of the packet and the packet length.
Since the packet contains neither a line ending nor a fixed terminating character, SerialPort.ReadTo() and SerialPort.ReadLine() are not useful. Therefore only SerialPort.Read(buf, offset, count) can be used.
Since sending rather large packets (512 bytes) takes time, I've implemented a function that calculates a desired wait time, defined as (1000 ms / baud rate * (8 * byte count)) + 100 ms.
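As an illustration, here is a minimal sketch of what such a wait-time helper might compute; the question's ComPort.getWaitTime is not shown, so the helper below is only an assumption based on the formula above:
// Hypothetical sketch of the wait-time formula described above:
// transmission time for N bytes at 8 bits per byte, plus a 100 ms safety margin.
static int GetWaitTimeMs(int byteCount, int baudRate)
{
double transferMs = 1000.0 / baudRate * (8 * byteCount); // ms per bit * number of bits
return (int)Math.Ceiling(transferMs) + 100;
}
// Example: 512 bytes at 115200 baud -> about 35.6 ms + 100 ms, roughly 136 ms.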
While testing I experienced delays much larger than the desired wait times, so I implemented timing measurements for the different parts of the function.
In regular cases (with the desired wait times) I expect a console log like this:
Load Header(+122ms) Load Data (+326ms) Transform (+3ms)
But it only looks like this for a varying number of records, usually around 10; after that, the execution times are much worse:
Load Header(+972ms) Load Data (+990ms) Transform (+2ms)
Here you can see the complete function:
private void decodeWriteResponse(int identifier, object sender, SerialDataReceivedEventArgs e)
{
/* MEASURE TIME NOW */
long start = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
var serialPort = (SerialPort)sender; //new serial port object
int delay = ComPort.getWaitTime(7); //Returns the wait time (1s/baudrate * bytecount *8) +100ms Additional
Task.Delay(delay).Wait(); //wait until the device has send all its data!
byte[] databytes = new byte[6]; //new buffer
try
{
serialPort.Read(databytes, 0, 6); //read the data
}
catch (Exception) { };
/* MEASURE TIME NOW */
long between_header = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
/* Read the Data from Port */
int rec_len = databytes[1] | databytes[2] << 8; //Extract number of channels
int start_chnl = databytes[3] | databytes[4] << 8; //Extract the first channel
delay = ComPort.getWaitTime(rec_len+7); //get wait time
Task.Delay(delay).Wait(); //wait until the device has send all its data!
byte[] buf = new byte[rec_len-3]; //new buffer
try
{
serialPort.Read(buf, 0, rec_len-3); //read the data
}
catch (Exception) {}
/* MEASURE TIME NOW */
long after_load = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
/* Now perform spectrum analysis */
decodeSpectrumData(buf, start_chnl, rec_len-4);
/*MEASURE TIME NOW */
long end = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
Form.rtxtDataArea.AppendText("Load Header(+" + (between_header - start).ToString() + "ms) Load Data (+" + (after_load - between_header).ToString() + "ms) Transform (+" + (end - after_load) + "ms)\n");
/*Update the Write handler */
loadSpectrumHandler(1);
}
What could cause this issue?
I have already tested this as "Debug" in Visual Studio and as a standalone "Release" build, but there is no difference.

Instead of trying to figure out how long a message will take to arrive at the port, why not just read the data in a loop until you have it all? For example, read the header and calculate the msg size. Then read that number of bytes. Ex:
// See if there are at least enough bytes for a header
if (serialPort.BytesToRead >= 6) {
byte[] databytes = new byte[6];
serialPort.Read(databytes, 0, 6);
// Parse the header - you have to create this function
int calculatedMsgSize = ValidateHeader(databytes);
byte [] msg = new byte[calculatedMsgSize];
int bytesRead = 0;
while (bytesRead < calculatedMsgSize) {
if (serialPort.BytesToRead > 0) {
bytesRead += serialPort.Read(msg, bytesRead,
Math.Min(calculatedMsgSize - bytesRead, serialPort.BytesToRead));
}
}
// You should now have a complete message
HandleMsg(msg);
}

Related

COM: Device only sends data when things are changing. How to deal with that?

So I want to connect via serial to a device (a measuring device) that only sends data when something changes in its settings.
I use C# and .NET's SerialPort.
I am able to read data and it looks kind of good, but there are a few problems I encountered.
I implemented my reading method (ReadExistingData()) so that it is constantly invoked via the SerialDataReceivedEventHandler whenever there is new data.
Unfortunately, when I read it like that (probably because of varying packet sizes), the reads are very chaotic, so I need to "catch" the first initiating byte (here it's 0xA5).
That means I always check whether the byte I just received is a 0xA5, and if it is, I read the rest.
But I feel like this way I am missing some of the commands the device sends me, and that's why I can't display the data consistently, only from time to time.
On a side note: the device sends the device time and a value. The value is delayed and somewhat inaccurate, but the time is always spot on. The other parameters it sends always come out garbled and don't seem to be recognized and thus updated at all.
To display the data I use the console for testing purposes, and the whole construct seems to be very sensitive to console output.
Here's a little code snippet:
class Device
{
private int stdMessageLengthInBytes = 5;
public DeviceData processedData;
byte[] dataBuffer;
int receivedMessages = 0;
public Device()
{
// Initialize BaseClass (SerialPort derivative)
this.port = new System.IO.Ports.SerialPort();
// Initialize Device
this.data = new byte[stdMessageLengthInBytes];
this.processedData = new P8005_Data();
dataBuffer = new byte[stdMessageLengthInBytes];
}
// Is supposed to read the data from serial port
protected override void ReadExistingData()
{
// Check whether buffer is empty -> Try to catch a 0xA5
if (dataBuffer[0] == 0x00)
{
port.Read(dataBuffer, 0, 1);
}
// If no 0xA5 was caught, restart
if (dataBuffer[0] != 0xA5)
{
dataBuffer = new byte[stdMessageLengthInBytes]; // Reset buffer
return;
}
// Read next byte -> Command byte
port.Read(dataBuffer, 1, 1);
// If command was empty, restart
if (dataBuffer[1] == 0x00)
{
dataBuffer = new byte[stdMessageLengthInBytes]; // Reset buffer
return;
}
// If its time that is communicated: Read 3 bytes
if (dataBuffer[1] == 0x06)
{
// A short delay (a few ms) seems to be needed, otherwise it won't function correctly somehow
System.Threading.Thread.Sleep(5);
port.Read(dataBuffer, 2, 3);
// Write buffer to actual raw data byte array
this.data = dataBuffer;
dataBuffer = new byte[stdMessageLengthInBytes]; // Reset buffer
}
// Otherwise: Just read 2 bytes
System.Threading.Thread.Sleep(5); // Needed delay
port.Read(dataBuffer, 2, 2);
// Write buffer to actual raw data byte array
this.data = dataBuffer;
dataBuffer = new byte[stdMessageLengthInBytes]; // Reset buffer
}
// Method called by SerialDataReceivedEventHandler
protected override void DataReceivedOnComPort(object sender, SerialDataReceivedEventArgs e)
{
bool valid = false;
ReadExistingData(); // Read data from COM- Port
lock (processedData)
{
switch (data[1]) // Check command byte
{
// Time (3 bytes)
case (0x06):
processedData.currentTime = String.Format("{0:D2}:{1:D2}:{2:D2}", DecodeBcd(data[2]), DecodeBcd(data[3]), DecodeBcd(data[4]));
valid = true;
receivedMessages++;
break;
// Value (2 bytes)
case (0x0D):
double val = 0;
val += DecodeBcd(data[2]) * 100;
val += DecodeBcd(data[3]);
val /= 10;
processedData.currentValue = val;
valid = true;
receivedMessages++;
break;
// ... here are various other hex codes that represent commands from the device (2 bytes)
default:
valid = false;
break;
}
}
// only to check when
if (valid)
{
Console.WriteLine("Received Valid Messages: {0}", receivedMessages);
ConsoleOutput();
}
}
}
On a note: the initialization of the port happens in another method of the base class and works fine.
Is there anything I am missing? How do I deal with something like that? Are there any improvements that would help my performance? I thought about threading with locks, but I don't think that is the solution somehow... Or maybe everything is just a console problem?
EDIT:
I have now changed my code (as @jdweng suggested) so that I put everything into a buffer (basically a List<byte> mainBuffer). Then I take all bytes from the buffer whenever possible and work with them, skimming for 0xA5. When one is found, I read the command and determine how long the "message" has to be based on it (Time -> +3 bytes, Data -> +2 bytes, Other -> +1 byte). Then I can work off those messages (I put them into a List<byte[]>) and determine the output to my screen.
However, even after outsourcing the chopping into messages and the processing of those messages, I still seem to either miss some messages, since some actions are just not registered or have a big delay, or my processing is wrong. All I can think of is that, because I lock my mainBuffer, maybe some data isn't written to it.
Is this really so time-critical? There is software that comes with the device, and it doesn't seem to have such big problems with delays or slightly wrong values...
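A minimal sketch of the buffering approach described above, purely for illustration; the field names, the lock, and the per-command payload sizes are taken from the description and are assumptions, not the actual code:
// Append incoming bytes to mainBuffer, then slice complete messages out of it.
// Assumed layout: 0xA5, command byte, then 3 (time), 2 (value) or 1 payload byte.
private readonly List<byte> mainBuffer = new List<byte>();
private readonly List<byte[]> messages = new List<byte[]>();
private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
{
var port = (SerialPort)sender;
var incoming = new byte[port.BytesToRead];
port.Read(incoming, 0, incoming.Length);
lock (mainBuffer)
{
mainBuffer.AddRange(incoming);
ExtractMessages();
}
}
private void ExtractMessages()
{
int i = 0;
while (i < mainBuffer.Count)
{
if (mainBuffer[i] != 0xA5) { i++; continue; } // skim for the start byte
if (i + 1 >= mainBuffer.Count) break; // command byte not received yet
int payload = mainBuffer[i + 1] == 0x06 ? 3 // time
: mainBuffer[i + 1] == 0x0D ? 2 // value
: 1; // other commands
if (i + 2 + payload > mainBuffer.Count) break; // message still incomplete
messages.Add(mainBuffer.GetRange(i, 2 + payload).ToArray());
i += 2 + payload;
}
mainBuffer.RemoveRange(0, i); // drop consumed and skipped bytes
}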
Since you don't have the exact specs and/or have an unreliable connection (which has to be expected with serial data), you need to sync to the 0xA5 at every message. I would just run every single byte you receive through a parser while keeping the state of the currently received message.
Make sure you validate your input, since there are a bunch of things that can go wrong if you get corrupted serial data. For example, if an 0xA5 appears inside one of the other message types, you might miss your next message. To prevent that, I strongly recommend either getting hold of the specs if possible, or coding more logic based on observations of the data.
private const int MESSAGE_LENGTH = 5;
private const int VALUE_COMMAND = 0x0D;
private const int VALUE_SIZE = 4;
private const int TIME_COMMAND = 0x06;
private const int TIME_SIZE = 5;
private byte[] _message = new byte[MESSAGE_LENGTH];
private int _messagePos = 0;
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
var data = new byte[_serialPort.BytesToRead];
_serialPort.Read(data, 0, data.Length);
foreach (var b in data)
{
if (_messagePos >= MESSAGE_LENGTH) // guard: drop an unfinished message instead of overflowing _message
_messagePos = 0;
_message[_messagePos] = b;
if (_messagePos == 0 && b != 0xa5)
continue;
++_messagePos;
if (_messagePos > 2) // if command byte present, process command of any size
ProcessCommand(_message[1]);
}
}
private void ProcessCommand(byte command)
{
if (_messagePos == VALUE_SIZE && command == VALUE_COMMAND)
{
// parse value...
_messagePos = 0;
}
else if (_messagePos == TIME_SIZE && _message[1] == TIME_COMMAND)
{
// parse time...
_messagePos = 0;
}
}

Read unknown length by DataReader

I've been working with Windows Store app programming in C# recently, and I've come across a problem with sockets.
I need to be able to read data with an unknown length from a DataReader().
It sounds simple enough, but I've not been able to find a solution after a few days of searching.
Here's my current receiving code (a little sloppy; I need to clean it up once I find a solution to this problem, and yes, a bit of this is from the Microsoft example):
DataReader reader = new DataReader(args.Socket.InputStream);
try
{
while (true)
{
// Read first 4 bytes (length of the subsequent string).
uint sizeFieldCount = await reader.LoadAsync(sizeof(uint));
if (sizeFieldCount != sizeof(uint))
{
// The underlying socket was closed before we were able to read the whole data.
return;
}
// Read the string.
uint stringLength = reader.ReadUInt32();
uint actualStringLength = await reader.LoadAsync(stringLength);
if (stringLength != actualStringLength)
{
// The underlying socket was closed before we were able to read the whole data.
return;
}
// Display the string on the screen. The event is invoked on a non-UI thread, so we need to marshal
// the text back to the UI thread.
//MessageBox.Show("Received data: " + reader.ReadString(actualStringLength));
MessageBox.updateList(reader.ReadString(actualStringLength));
}
}
catch (Exception exception)
{
// If this is an unknown status it means that the error is fatal and retry will likely fail.
if (SocketError.GetStatus(exception.HResult) == SocketErrorStatus.Unknown)
{
throw;
}
MessageBox.Show("Read stream failed with error: " + exception.Message);
}
You are going along the right lines - read the first int to find out how many bytes are to be sent.
Franky Boyle is correct - without a signalling mechanism it is impossible to ever know the length of a stream. That's why it is called a stream!
No socket implementation (including WinSock) will ever be clever enough to know when a client has finished sending data. The client could be having a cup of tea halfway through sending the data!
Your server and its sockets will never know! What are they going to do? Wait forever? I suppose they could wait until the client has 'closed' the connection? But your client could have had a blue screen, and the server will never get that TCP close packet; it will just sit there thinking it might get more data one day.
I have never used a DataReader - I have never even heard of that class! Use NetworkStream instead.
From my memory I have written code like this in the past. I am just typing, no checking of syntax.
using(MemoryStream receivedData = new MemoryStream())
{
using(NetworkStream networkStream = new NetworkStream(connectedSocket))
{
int totalBytesToRead = networkStream.ReadByte();
// This is your mechanism to find out how many bytes
// the client wants to send.
byte[] readBuffer = new byte[1024]; // Up to you the length!
int totalBytesRead = 0;
int bytesReadInThisTcpWindow = 0;
// The length of the TCP window of the client is usually
// the number of bytes that will be pushed through
// to your server in one SOCKET.READ method call.
// For example, if the clients TCP window was 777 bytes, a:
// int bytesRead =
// networkStream.Read(readBuffer, 0, int.MaxValue);
// bytesRead would be 777.
// If they were sending a large file, you would have to make
// it up from the many 777s.
// If it were a small file under 777 bytes, your bytesRead
// would be the total small length of say 500.
while
(
(
bytesReadInThisTcpWindow =
networkStream.Read(readBuffer, 0, readBuffer.Length)
) > 0
)
// If the bytesReadInThisTcpWindow = 0 then the client
// has disconnected or failed to send the promised number
// of bytes in your Windows server internals dictated timeout
// (important to kill it here to stop lots of waiting
// threads killing your server)
{
receivedData.Write(readBuffer, 0, bytesReadInThisTcpWindow);
totalBytesRead = totalBytesRead + bytesReadInThisTcpWindow;
}
if(totalBytesRead == totalBytesToRead)
{
// We have our data!
}
}
}

Send PC information over TCP

I am trying to send various bits of PC information, such as free HDD space and total RAM, to a Windows service over TCP. I have the following code, which basically creates a string of information separated by |, ready for processing by the Windows service's TCP server and insertion into a SQL table.
Is it best to do this as I have done or is there a better way?
public static void Main(string[] args)
{
Program stc = new Program(clientType.TCP);
stc.tcpClient(serverAddress, Environment.MachineName.ToString() + "|" + FormatBytes(GetTotalFreeSpace("C:\\")).ToString());
Console.WriteLine("The TCP server is disconnected.");
}
public void tcpClient(String serverName, String whatEver)
{
try
{
//Create an instance of TcpClient.
TcpClient tcpClient = new TcpClient(serverName, tcpPort);
//Create a NetworkStream for this tcpClient instance.
//This is only required for TCP stream.
NetworkStream tcpStream = tcpClient.GetStream();
if (tcpStream.CanWrite)
{
Byte[] inputToBeSent = System.Text.Encoding.ASCII.GetBytes(whatEver.ToCharArray());
tcpStream.Write(inputToBeSent, 0, inputToBeSent.Length);
tcpStream.Flush();
}
while (tcpStream.CanRead && !DONE)
{
//We need the DONE condition here because there is possibility that
//the stream is ready to be read while there is nothing to be read.
if (tcpStream.DataAvailable)
{
Byte[] received = new Byte[512];
int nBytesReceived = tcpStream.Read(received, 0, received.Length);
String dataReceived = System.Text.Encoding.ASCII.GetString(received);
Console.WriteLine(dataReceived);
DONE = true;
}
}
}
catch (Exception e)
{
Console.WriteLine("An Exception has occurred.");
Console.WriteLine(e.ToString());
}
}
Thanks
Because TCP is stream-based, it is important to have some indicator in the message to signal the other end when it has read the complete message. There are two traditional ways of doing this. First, you could have some special byte pattern at the end of each message. When the other end reads the data, it knows that it has read a full message when that special byte pattern is seen. Using this mechanism requires a byte pattern that is not likely to be included in the actual message. The other way is to include the length of the data at the beginning of the message. This is the way I do it. All my TCP messages include a short header structured like this:
class MsgHeader
{
short syncPattern; // e.g., 0xFDFD
short msgType; // useful if you have different messages
int msgLength; // length of the message minus header
}
When the other side starts receiving data, it reads the first 8 bytes, verifies the sync pattern (for the sake of sanity), and then uses the message length to read the actual message. Once the message has been read, it processes the message based on the message type.
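For reference, a rough sketch of that receive loop over a NetworkStream; the 0xFDFD sync value and the field order follow the MsgHeader above, but the exact wire layout is an assumption:
// Loop until exactly 'count' bytes have arrived, or the peer closes the connection.
static bool ReadFully(NetworkStream stream, byte[] buffer, int count)
{
int read = 0;
while (read < count)
{
int n = stream.Read(buffer, read, count - read);
if (n == 0) return false; // connection closed mid-message
read += n;
}
return true;
}
static void ReceiveMessage(NetworkStream stream)
{
var header = new byte[8]; // 2 + 2 + 4 bytes, matching MsgHeader
if (!ReadFully(stream, header, 8)) return;
short syncPattern = BitConverter.ToInt16(header, 0);
short msgType = BitConverter.ToInt16(header, 2);
int msgLength = BitConverter.ToInt32(header, 4);
if (syncPattern != unchecked((short)0xFDFD)) return; // sanity check failed
var body = new byte[msgLength];
if (!ReadFully(stream, body, msgLength)) return;
// dispatch on msgType, e.g. hand the body to a decoder such as SystemInfo.Decode below
}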
I'd suggest creating a class that gathers the system information you're interested in and is capable of encoding/decoding it, something like:
using System;
using System.Text;
class SystemInfo
{
private string machineName;
private int freeSpace;
private int processorCount;
// Private so no one can create it directly.
private SystemInfo()
{
}
// This is a static method now. Call SystemInfo.Encode() to use it.
public static byte[] Encode()
{
// Convert the machine name to an ASCII-based byte array.
var machineNameAsByteArray = Encoding.ASCII.GetBytes(Environment.MachineName);
// *THIS IS IMPORTANT* The easiest way to encode a string value so that it
// can be easily decoded is to prepend the length of the string. Otherwise,
// you're left guessing on the decode side about how long the string is.
// Calculate the message length. This does *NOT* include the size of
// the message length itself.
// NOTE: As new fields are added to the message, account for their
// respective size here and encode them below.
var messageLength = sizeof(int) + // length of machine name string
machineNameAsByteArray.Length + // the machine name value
sizeof(int) + // free space
sizeof(int); // processor count
// Calculate the required size of the byte array. This *DOES* include
// the size of the message length.
var byteArraySize = messageLength + // message itself
sizeof(int); // 4-byte message length field
// Allocate the byte array.
var bytes = new byte[byteArraySize];
// The offset is used to keep track of where the next field should be
// placed in the byte array.
var offset = 0;
// Encode the message length (a very simple header).
Buffer.BlockCopy(BitConverter.GetBytes(messageLength), 0, bytes, offset, sizeof(int));
// Increment offset by the number of bytes added to the byte array.
// Note that the increment is equal to the value of the last parameter
// in the preceding BlockCopy call.
offset += sizeof(int);
// Encode the length of machine name to make it easier to decode.
Buffer.BlockCopy(BitConverter.GetBytes(machineNameAsByteArray.Length), 0, bytes, offset, sizeof(int));
// Increment the offset by the number of bytes added.
offset += sizeof(int);
// Encode the machine name as an ASCII-based byte array.
Buffer.BlockCopy(machineNameAsByteArray, 0, bytes, offset, machineNameAsByteArray.Length);
// Increment the offset. See the pattern?
offset += machineNameAsByteArray.Length;
// Encode the free space.
Buffer.BlockCopy(BitConverter.GetBytes(GetTotalFreeSpace("C:\\")), 0, bytes, offset, sizeof(int));
// Increment the offset.
offset += sizeof(int);
// Encode the processor count.
Buffer.BlockCopy(BitConverter.GetBytes(Environment.ProcessorCount), 0, bytes, offset, sizeof(int));
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return bytes;
}
// Static method. Call is as SystemInfo.Decode(myReceivedByteArray);
public static SystemInfo Decode(byte[] message)
{
// When decoding, the presumption is that your socket code read the first
// four bytes from the socket to determine the length of the message. It
// then allocated a byte array of that size and read the message into that
// byte array. So the byte array passed into this function does *NOT* have
// the 4-byte message length field at the front of it. It makes no sense
// in this class anyway.
// Create the SystemInfo object to be populated and returned.
var si = new SystemInfo();
// Use the offset to navigate through the byte array.
var offset = 0;
// Extract the length of the machine name string since that is the first
// field encoded in the message.
var machineNameLength = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the machine name now that we know its length.
si.machineName = Encoding.ASCII.GetString(message, offset, machineNameLength);
// Increment the offset.
offset += machineNameLength;
// Extract the free space.
si.freeSpace = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the processor count.
si.processorCount = BitConverter.ToInt32(message, offset);
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return si;
}
}
To encode the data, call the Encode method like this:
byte[] msg = SystemInfo.Encode();
To decode the data once it's been read from the socket, call the Decode method like this:
SystemInfo si = SystemInfo.Decode(msg);
As to your actual code, I'm not sure why you're reading from the socket after writing to it unless you're expecting a return value.
A few things to consider. Hope this helps.
EDIT
First of all, use the MsgHeader if you feel you need it. The example above simply uses the message length as the header, i.e., it does not include a sync pattern or a message type. Whether you need to use this additional information is up to you.
For every new field you add to the SystemInfo class, the overall size of the message will increase, obviously. Thus, the messageLength value needs to be adjusted accordingly. For example, if you add an int to include the number of processors, messageLength will increase by sizeof(int). Then, to add it to the byte array, simply use the same System.Buffer.BlockCopy call. I've adjusted the example to show this with a little more detail, including making the method static.

Problems with getting repeated packets which I do not intend to send

I'm working on a messenger program using C#, and have some issues.
The server and client have three connections (one each for chatting, file transfer and card games).
The first and second connections work just fine.
But the problem occurs on the third one, which handles fewer packet types than the first two sockets.
It's not about not receiving a packet or not getting a connection; it's that more than one packet (where I intend to send one) is received (or sent) at a time. The server log keeps saying that on a single click the server receives about 3~20 identical packets and sends them to the targeted client.
Before my partial code for the third connection, I'll explain how this is supposed to work.
The only difference between connections 1 and 2 and connection 3 (which is causing this issue) is when I make the connection. Connections 1 and 2 are made in the main form's form_load function and work fine. Connection 3 is made when I load the gaming form (not the main form). Also, the first two sockets' listening threads are on the main form, while the third has its listening thread on its own form. That's the only difference I can find; the connections and listening threads are otherwise the very same. Here is my code for the gaming form.
public void GPACKET() //A Thread function for receiving packets from the server
{
int read = 0;
while (isGameTcpClientOnline)
{
try
{
read = 0;
read = gameNetStream.Read(greceiveBuffer, 0, 1024 * 4);
if (read == 0)
{
isGameTcpClientOnline = false;
break;
}
}
catch
{
isGameTcpClientOnline = false;
gameNetStream = null;
}
Packet.Packet packet = (Packet.Packet)Packet.Packet.Desirialize(greceiveBuffer);
switch ((int)packet.Type)
{
case (int)PacketType.gameInit:
{
gameinit = (GameInit)Packet.Packet.Desirialize(greceiveBuffer);
//codes for handling the datas from the packet...
break;
}
case (int)PacketType.gamepacket:
{
gp = (GamePacket)Packet.Packet.Desirialize(greceiveBuffer);
//codes for handling the datas from the packet...
break;
}
}
}
}
public void setPacket(bool turn) //makes the packet, and sends it to the server..
{
if (turn)
turnSetting(false);
else
turnSetting(true);
gps = new GamePacket();
gps.Type = (int)PacketType.gamepacket;
gps.isFirstPacket = false;
gps.sym = symbol;
gps.ccapacity = cardCapacity;
gps.currentList = current_list[0].Tag.ToString();
gps.isturn = turn;
gps.opname = opid;
List<string> tempList = new List<string>();
foreach (PictureBox pb in my_list)
{
tempList.Add(pb.Image.Tag.ToString());
}
gps.img_list = tempList;
Packet.Packet.Serialize(gps).CopyTo(this.gsendBuffer, 0);
this.Send();
label5.Text = symbol + ", " + current_list[0].Tag.ToString();
}
public void Send() //actually this part sends the Packet through the netstream.
{
gameNetStream.Write(this.gsendBuffer, 0, this.gsendBuffer.Length);
gameNetStream.Flush();
for (int j = 0; j < 1024 * 4; j++)
{
this.gsendBuffer[j] = 0;
}
}
I really don't know why I'm having this problem.
Is it about where I make the connection? Or about the receiving side? Or the sending side? Should I establish this connection in the same place as connections 1 and 2 (i.e. on the main form; if I do this, I would have to run the "GPACKET" function on the main form as well)?
This looks like a classic "assume we read an entire packet", where by "packet" here I mean your logical message, not the underlying transport packet. For example:
read = gameNetStream.Read(greceiveBuffer, 0, 1024 * 4);
...
Packet.Packet packet = (Packet.Packet)Packet.Packet.Desirialize(greceiveBuffer);
Firstly, it strikes me as very odd that read isn't needed by Desirialize, but: what makes you think we read an entire packet? We could have read:
one entire packet (only)
half of one packet
one byte
three packets
the last 2 bytes of one packet, 1 entire packet, and the first 5 bytes of a third packet
TCP is just a stream; all that Read is guaranteed to give you is "at least 1 byte and at most {count} bytes, or an EOF". It is very unusual for calls to Write to map one-to-one onto calls to Read. It is your job to understand the protocol, decide how much data to buffer, and then decide how much of that buffer to treat as one packet versus holding the rest back for the next packet(s).
See also: How many ways can you mess up IO?, in particular "Network packets: what you send is not (usually) what you get".
To fill exactly a 4096 byte buffer:
int expected = 4096, offset = 0, read;
while(expected != 0 &&
(read = gameNetStream.Read(greceiveBuffer, offset, expected)) > 0)
{
offset += read;
expected -= read;
}
if(expected != 0) throw new EndOfStreamException();
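If the protocol were extended to carry an explicit 4-byte length prefix (an assumption; the Packet format in the question does not currently provide one), the same fill-exactly pattern extends naturally:
// Helper: read exactly 'count' bytes or throw if the stream ends early.
static void ReadExactly(Stream stream, byte[] buffer, int count)
{
int offset = 0, read;
while (count > 0 && (read = stream.Read(buffer, offset, count)) > 0)
{
offset += read;
count -= read;
}
if (count != 0) throw new EndOfStreamException();
}
// Usage sketch: 4-byte little-endian length prefix, then the serialized packet.
byte[] header = new byte[4];
ReadExactly(gameNetStream, header, 4);
byte[] payload = new byte[BitConverter.ToInt32(header, 0)];
ReadExactly(gameNetStream, payload, payload.Length);
Packet.Packet packet = (Packet.Packet)Packet.Packet.Desirialize(payload);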

How to avoid quiet ticking noises when streaming sound from microphone to speakers using DirectSound and C#?

I am trying to stream sound samples from my microphone to my speakers using DirectSound and C#. It should be similar to 'listening to the microphone', but later I want to use this for something else. While testing my approach I've noticed quiet ticking, crackling noises in the background. I would guess this has something to do with the delay between writing and playing the buffer, which must be greater than the latency of writing the chunks.
If I set the delay between recording and playback to less than 50 ms, it mostly works, but sometimes I get really loud crackling noises. So I've decided on a delay of at least 50 ms. This works okay for me, but the delay of the system's "listen to this device" seems to be much shorter. I would guess it is about 15-30 ms, and it is barely noticeable. With 50 ms I get at least a slight reverb effect.
In the following I'll show you my microphone code (partially):
The initialisation is done like this:
capture = new Capture(device);
// Creating the buffer
// Determining the buffer size
bufferSize = format.AverageBytesPerSecond * bufferLength / 1000;
while (bufferSize % format.BlockAlign != 0) bufferSize += 1;
chunkSize = Math.Max(bufferSize, 256);
bufferSize = chunkSize * BUFFER_CHUNKS;
this.bufferLength = chunkSize * 1000 / format.AverageBytesPerSecond; // Redetermining the buffer Length that will be used.
captureBufferDescription = new CaptureBufferDescription();
captureBufferDescription.BufferBytes = bufferSize;
captureBufferDescription.Format = format;
captureBuffer = new CaptureBuffer(captureBufferDescription, capture);
// Creating Buffer control
bufferARE = new AutoResetEvent(false);
// Adding notifier to buffer.
bufferNotify = new Notify(captureBuffer);
BufferPositionNotify[] bpns = new BufferPositionNotify[BUFFER_CHUNKS];
for(int i = 0 ; i < BUFFER_CHUNKS ; i ++) bpns[i] = new BufferPositionNotify() { Offset = chunkSize * (i+1) - 1, EventNotifyHandle = bufferARE.SafeWaitHandle.DangerousGetHandle() };
bufferNotify.SetNotificationPositions(bpns);
The capturing will run like this in an extra thread:
// Initializing
MemoryStream tempBuffer = new MemoryStream();
// Capturing
while (isCapturing && captureBuffer.Capturing)
{
bufferARE.WaitOne();
if (isCapturing && captureBuffer.Capturing)
{
captureBuffer.Read(currentBufferPart * chunkSize, tempBuffer, chunkSize, LockFlag.None);
ReportChunk(applyVolume(tempBuffer.GetBuffer()));
currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS;
tempBuffer.Dispose();
tempBuffer = new MemoryStream(); // Reset Buffer;
}
}
// Finalizing
isCapturing = false;
tempBuffer.Dispose();
captureBuffer.Stop();
if (bufferARE.WaitOne(bufferLength + 1)) currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS; // That on next start the correct bufferpart will be read.
stateControlARE.Set();
While capturing ReportChunk takes the data to the speaker as an event that could be subscribed. The speaker part is initialized like this:
// Creating the dxdevice.
dxdevice = new Device(device);
dxdevice.SetCooperativeLevel(hWnd, CooperativeLevel.Normal);
// Creating the buffer
bufferDescription = new BufferDescription();
bufferDescription.BufferBytes = bufferSize;
bufferDescription.Format = input.Format;
bufferDescription.ControlVolume = true;
bufferDescription.GlobalFocus = true; // That sound doesn't stop if the hWnd looses focus.
bufferDescription.StickyFocus = true; // - " -
buffer = new SecondaryBuffer(bufferDescription, dxdevice);
chunkQueue = new Queue<byte[]>();
// Creating buffer control
bufferARE = new AutoResetEvent(false);
// Register at input device
input.ChunkCaptured += new AInput.ReportBuffer(input_ChunkCaptured);
The data is put by the event method into the queue, simply by:
chunkQueue.Enqueue(buffer);
bufferARE.Set();
Filling the playbackbuffer and starting/stopping the playback buffer is done by another thread:
// Initializing
int wp = 0;
bufferARE.WaitOne(); // wait for first chunk
// Playing / writing data to play buffer.
while (isPlaying)
{
Thread.Sleep(1);
bufferARE.WaitOne(BufferLength * 3); // If a chunk is played and there is no new chunk we try to continue and may stop playing, else may the buffer runs out.
// Note that this may fails if the sender was interrupted within one chunk
if (isPlaying)
{
if (chunkQueue.Count > 0)
{
while (chunkQueue.Count > 0) wp = writeToBuffer(chunkQueue.Dequeue(), wp);
if (buffer.PlayPosition > wp - chunkSize * 3 / 2) buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize));
if (!buffer.Status.Playing)
{
buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize)); // We have 2 chunks buffered so we step back 2 chunks and play them while getting new chunks.
buffer.Play(0, BufferPlayFlags.Looping);
}
}
else
{
buffer.Stop();
bufferARE.WaitOne(); // wait for a filling chunk
}
}
}
// Finalizing
isPlaying = false;
buffer.Stop();
stateControlARE.Set();
writeToBuffer simply writes the enqueued chunk to the buffer by this.buffer.Write(wp, data, LockFlag.None); and caring about bufferSize and chunkSize and wp, which represents the last writing position. I think this is everything that is important about my code. Maybe the definitions are missing and at least there is another method that starts/stops=controls the threads.
I've posted this code in case I've made a mistake in filling the buffer or my initialisation is wrong. But I would guess that this problem occurs because the execution of C# bytecode is too slow or something like that. But in the end my question is still open: My question is how to reduce the latency and how to avoid noises that shouldn't be there?
I know the reason for your problem and the way you can solve it, but I can't implement it in C# and .NET, so I will explain it in the hope that you can find your way.
Audio is recorded by your mic at a specified frequency (for example 44100 Hz) and then played on the sound card at the same sample rate (again 44100 Hz). The problem is that the crystal that keeps time in the input device (the mic, for example) is not the same as the crystal that drives playback in the sound card.
The difference is very small, but they are never identical (there are no two exactly identical crystals in the entire world), so after a while a gap builds up in your playback routine.
The solution is to resample the data to match the sample rate of the output, but I don't know how to do that in C# and .NET.
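To illustrate the idea only, here is a rough sketch of linear-interpolation resampling for 16-bit mono PCM; a real implementation would use a proper low-pass/polyphase filter, and the rates and buffer handling below are assumptions, not part of the original answer:
// Resample 16-bit mono PCM samples from inRate to outRate by linear interpolation.
static short[] Resample(short[] input, int inRate, int outRate)
{
int outLength = (int)((long)input.Length * outRate / inRate);
var output = new short[outLength];
for (int i = 0; i < outLength; i++)
{
double srcPos = (double)i * inRate / outRate; // fractional position in the source
int i0 = (int)srcPos;
int i1 = Math.Min(i0 + 1, input.Length - 1);
double frac = srcPos - i0;
output[i] = (short)(input[i0] * (1 - frac) + input[i1] * frac);
}
return output;
}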
A long time ago I figured out that this problem was caused by the Thread.Sleep(1) in combination with high CPU usage. Because the Windows timer resolution is 15.6 ms by default, this sleep doesn't mean "sleep for 1 ms" but "sleep until the next clock interrupt is reached". (For more, read this paper.) Combined with high CPU usage, it may stack up to the length of a chunk or even more.
For example: if my chunk size is 40 ms, the sleep could amount to about 46.8 ms (3 * 15.6 ms), and this causes the ticking. One solution is to set the timer resolution down to 1 ms. That can be done this way:
[DllImport("winmm.dll", EntryPoint="timeBeginPeriod", SetLastError=true)]
private static extern uint timeBeginPeriod(uint uiPeriod);
[DllImport("winmm.dll", EntryPoint="timeEndPeriod", SetLastError=true)]
private static extern uint timeEndPeriod(uint uiPeriod);
void routine()
{
Thread.Sleep(1); // May take about 15.6 ms or even longer.
timeBeginPeriod(1); // Should be set at the startup of the application.
Thread.Sleep(1); // May take about 1, 2 or 3 ms depending on the CPU usage.
// ... time depending routines goes here ...
timeEndPeriod(1); // Should end at application shutdown.
}
As far as I know this should already be done by DirectX, but since it is a global setting, other parts of the application or other applications may change it. This shouldn't happen if an application sets and revokes the setting only once, but somehow it seems to happen anyway, caused by some badly programmed component or another running application.
One more thing that needs to be watched is whether you're still using the correct position in the DirectX buffer if you skip a chunk for any reason. In that case a resynchronization is required.
