As a feature in the application I'm developing, I need to show the total estimated time left to upload/download a file to/from the server.
How is it possible to get the download/upload speed between the client machine and the server?
I think that if I can get the speed, I can calculate the time like this:
For example, for a 200 MB file: 200 × 1024 KB = 204800 KB, and
204800 KB / speed (in KB/s) = "x" seconds
The upload/download speed is not a static property of a server; it depends on your specific connection and may also vary over time. Most applications I've seen make an estimate over a short time window: they start downloading/uploading and measure the amount of data transferred over, let's say, 10 seconds. This is taken as the current transfer speed and used to calculate the remaining time (e.g. 2500 KB / 10 s -> 250 KB/s). The window is then moved along and the estimate recalculated continuously to keep it accurate to the current speed.
Although this is a fairly basic approach, it serves well in most cases.
Try something like this:
int chunkSize = 1024;
long sent = 0;
long total = reader.Length;
DateTime started = DateTime.Now;
while (reader.Position < reader.Length)
{
    byte[] buffer = new byte[
        (int)Math.Min(chunkSize, reader.Length - reader.Position)];
    int readBytes = reader.Read(buffer, 0, buffer.Length);
    // send data packet
    sent += readBytes;
    TimeSpan elapsedTime = DateTime.Now - started;
    // remaining bytes divided by the average speed so far (bytes per second)
    TimeSpan estimatedTime =
        TimeSpan.FromSeconds(
            (total - sent) /
            ((double)sent / elapsedTime.TotalSeconds));
}
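The snippet above averages over the whole transfer. The windowed variant described at the top could look like this (a minimal sketch; the names and the 10-second window are my own choices, not from the answer above):
// Sliding-window estimate: average only the most recent samples so the
// ETA follows the current speed instead of the whole-transfer average.
Queue<(DateTime Time, long Bytes)> window = new Queue<(DateTime, long)>();

void ReportProgress(long sent, long total)
{
    DateTime now = DateTime.Now;
    window.Enqueue((now, sent));
    // Drop samples that have fallen out of the 10-second window.
    while (window.Count > 1 && (now - window.Peek().Time).TotalSeconds > 10)
        window.Dequeue();
    var oldest = window.Peek();
    double seconds = (now - oldest.Time).TotalSeconds;
    if (seconds > 0 && sent > oldest.Bytes)
    {
        double bytesPerSecond = (sent - oldest.Bytes) / seconds;
        TimeSpan estimated = TimeSpan.FromSeconds((total - sent) / bytesPerSecond);
    }
}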
This is only tangentially related, but I assume if you're trying to calculate total time remaining, you're probably also going to be showing it as some kind of progress bar. If so, you should read this paper by Chris Harrison about perceptual differences. Here's the conclusion straight from his paper (emphasis mine).
Different progress bar behaviors appear to have a significant effect on user perception of process duration. By minimizing negative behaviors and incorporating positive behaviors, one can effectively make progress bars and their associated processes appear faster. Additionally, if elements of a multistage operation can be rearranged, it may be possible to reorder the stages in a more pleasing and seemingly faster sequence.
http://www.chrisharrison.net/projects/progressbars/ProgBarHarrison.pdf
I don't know why you need this, but I would go the simplest way possible and ask the user what connection type he has. Then take the file size in bytes, multiply it by 8 to convert to bits, and divide by the connection speed in bits per second to get the number of seconds.
The point is that you won't need processing power to calculate speeds. Microsoft, on their website, uses a function that estimates the transfer time for the most common connection types based on the file size, which you can get while uploading the file or enter manually.
Again, maybe you have other needs and must calculate the upload speed on the fly...
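For illustration, a minimal sketch of that arithmetic (the method name and the example rate are mine, purely illustrative):
// ETA from a user-selected connection type. Advertised connection speeds
// are in bits per second, so the byte count is multiplied by 8 first.
static double EstimateSeconds(long fileSizeBytes, double speedBitsPerSecond)
{
    return fileSizeBytes * 8.0 / speedBitsPerSecond;
}

// Example: a 200 MB file on a nominal 10 Mbit/s connection:
// EstimateSeconds(200L * 1024 * 1024, 10_000_000) ≈ 168 seconds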
The following code computes the remaining time in minutes.
long totalReceived = 0;
DateTime lastProgressChange = DateTime.Now;
// Keep only the last 5 samples; a Queue makes it easy to drop the oldest.
Queue<double> timeQueue = new Queue<double>(5);
Queue<long> byteQueue = new Queue<long>(5);
using (WebClient c = new WebClient())
{
    c.DownloadProgressChanged += delegate(object s, DownloadProgressChangedEventArgs args)
    {
        long bytes;
        if (totalReceived == 0)
        {
            totalReceived = args.BytesReceived;
            bytes = args.BytesReceived;
        }
        else
        {
            bytes = args.BytesReceived - totalReceived;
        }
        timeQueue.Enqueue(DateTime.Now.Subtract(lastProgressChange).TotalSeconds);
        byteQueue.Enqueue(bytes);
        // Discard samples beyond the last 5 so the averages follow the current speed.
        if (timeQueue.Count > 5) { timeQueue.Dequeue(); byteQueue.Dequeue(); }
        double r = timeQueue.Average() * ((args.TotalBytesToReceive - args.BytesReceived) / byteQueue.Average());
        this.textBox1.Text = (r / 60).ToString();
        totalReceived = args.BytesReceived;
        lastProgressChange = DateTime.Now;
    };
    c.DownloadFileAsync(new Uri("http://www.visualsvn.com/files/VisualSVN-1.7.6.msi"), @"C:\SVN.msi");
}
I think I've got the estimated time to download:
double timeToDownload = ((((totalFileSize/1024)-((fileStream.Length)/1024)) / Math.Round(currentSpeed, 2))/60);
this.Invoke(new UpdateProgessCallback(this.UpdateProgress), new object[] {
Math.Round(currentSpeed, 2), Math.Round(timeToDownload,2) });
where
private void UpdateProgress(double currentSpeed, double timeToDownload)
{
lblTimeUpdate.Text = string.Empty;
lblTimeUpdate.Text = " At Speed of " + currentSpeed + " it takes " + timeToDownload +" minute to complete download";
}
and the current speed is calculated like this:
TimeSpan dElapsed = DateTime.Now - dStart;
if (dElapsed.TotalSeconds > 0)
{
    currentSpeed = (fileStream.Length / 1024) / dElapsed.TotalSeconds; // KB/s
}
I have been going round in circles over how to synchronize the movement of 20 objects through 60 chosen points each. I had several options, the main ones being:
Using sync vars to synchronize their progress between points (1 to 0), but that is open to lag and fairly intensive.
Using Unity's built-in NetworkTransform, but that seems costly in terms of networking, as the other device only needs to know the points and when to start.
First sending the list of points, which I have successfully done, and then making the devices start the movement at a certain time at a fixed rate.
So this is how I came to the conclusion that I need a networked universal time, accurate to around 0.1 seconds, so that the users see the same movement. If you see this as the wrong approach to this networking problem, please let me know, as I am very much a beginner at networking.
So far I have tried three methods to get this synced time.
I used System.DateTime, believing it was an accurate and universal time, but found variations of more than a second between devices.
I tried to calculate the latency between the two devices so I could compute and remove the per-device time offset, by:
(A) Using the built-in method GetAveragePing(NetworkPlayer player); but this is old and deprecated, as the NetworkPlayer class no longer seems functional or compatible.
(B) Using another built-in method, GetCurrentRtt(int hostId, int connectionId, out byte error); which returned 0 and which I believe may also be deprecated.
(C) Sending a message from the server to the client and back, then dividing the time taken by two. This seemed inaccurate, because the latency from server to client is not the same as from client to server, so it is not exactly half.
I tried to access a synced network time from a server, by:
(A) Using code from here to get a network time, then working out the System.DateTime difference from the network date-time at a certain point so I could synchronize them. The scripts I used to do this are below.
(B) Getting network timestamps from GetNetworkTimestamp(); which I thought might be what I wanted.
All of these methods have failed, so today I am asking you:
Is there another method for syncing time?
Should one of these methods have worked? If so, where have I gone wrong? I can give further details on my issues and the code used.
Is my approach to networking totally wrong, or at least mostly wrong? How do you suggest I achieve my goal?
Thank you very much for reading this. I hope it is detailed yet still clear, and if I can clarify my question further, I will be very happy to assist. Thanks for any answers / enlightening comments.
Code for Networked Time
Script A (Gets Network Time)
using UnityEngine;
using System.Collections;
using System.Net;
using System.Net.Sockets;
using UnityEngine.UI;
public class GetNetworkTime : MonoBehaviour {
public static System.DateTime NetworkTime()
{
//default Windows time server
const string ntpServer = "time.windows.com";
// NTP message size is 48 bytes (RFC 2030)
var ntpData = new byte[48];
//Setting the Leap Indicator, Version Number and Mode values
ntpData[0] = 0x1B; //LI = 0 (no warning), VN = 3 (NTP version 3), Mode = 3 (client mode)
var addresses = Dns.GetHostEntry(ntpServer).AddressList;
//The UDP port number assigned to NTP is 123
var ipEndPoint = new IPEndPoint(addresses[0], 123);
//NTP uses UDP
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
socket.Connect(ipEndPoint);
//Stops code hang if NTP is blocked
socket.ReceiveTimeout = 3000;
socket.Send(ntpData);
socket.Receive(ntpData);
socket.Close();
//Offset to get to the "Transmit Timestamp" field (time at which the reply
//departed the server for the client, in 64-bit timestamp format)
const byte serverReplyTime = 40;
//Get the seconds part
ulong intPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime);
//Get the seconds fraction
ulong fractPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime + 4);
//Convert From big-endian to little-endian
intPart = SwapEndianness(intPart);
fractPart = SwapEndianness(fractPart);
var milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);
//**UTC** time
var networkDateTime = (new System.DateTime(1900, 1, 1, 0, 0, 0, System.DateTimeKind.Utc)).AddMilliseconds((long)milliseconds);
//return networkDateTime.ToLocalTime();
return networkDateTime;
}
// stackoverflow.com/a/3294698/162671
static uint SwapEndianness(ulong x)
{
return (uint)(((x & 0x000000ff) << 24) +
((x & 0x0000ff00) << 8) +
((x & 0x00ff0000) >> 8) +
((x & 0xff000000) >> 24));
}
}
Script B (Difference Calculator and Time Logger)
using UnityEngine;
using System.Collections;
using UnityEngine.Networking;
using UnityEngine.UI;
public class SyncTime2 : NetworkBehaviour {
public float serverTimeDif;
public float clientTimeDif;
GetNetworkTime GetNetworkTime;
GameObject time;
Text TimeLog;
// Use this for initialization
void Start () {
GetNetworkTime = GetComponent<GetNetworkTime>();
time = GameObject.Find("time");
TimeLog = time.GetComponent<Text>();
if (isServer)
{
serverTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float) GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
StartCoroutine("DisplayTime", serverTimeDif);
}
else
{
clientTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
StartCoroutine("DisplayTime", clientTimeDif);
}
}
IEnumerator DisplayTime (float TimeDif)
{
for (;;)
{
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
// Log the time so I can see whether it differs between devices
yield return new WaitForSeconds(0.01f);
}
}
}
Hmmmff... it seems that simply adding the time difference rather than subtracting it works and synchronizes the time. So rather than
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - TimeDif);
I should use
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
If anyone could confirm that this is/isn't the correct method of networking, I would deem that an answer.
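For what it's worth, a minimal sketch of the offset arithmetic (my own reasoning, using the names from Script B): if the offset is computed as local minus NTP, it must be subtracted; if only adding lines the devices up, the offset was effectively computed the other way round.
// Both values sampled at (nearly) the same moment:
float localUtcSeconds = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds;
float ntpSeconds = (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;

float timeDif = localUtcSeconds - ntpSeconds; // offset as computed in Script B
float synced = localUtcSeconds - timeDif;     // == ntpSeconds, so subtracting is correct

// If ADDING is what synchronizes the devices, then timeDif was effectively
// computed as (ntpSeconds - localUtcSeconds) somewhere along the way.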
I have made a data logging application in C#. It connects to 4 USB sensors through the SerialPort class, with the DataReceived event threshold set to trigger on every byte. When data is received, the program checks whether that byte is a line end. If it isn't, the input is added to a buffer. If it is, the program adds a timestamp and writes the data and timestamp to a file (each input source gets a dedicated file).
Issues arise when using more than one COM port inputs. What I see in the output files is:
Any of the 4 Files:
...
timestamp1 value1
timestamp2 value2
timestamp3 value3
value4
value5
value6
timestamp7 value7
...
So, what it looks like is the computer isn't fast enough to get to all 4 interrupts before the next values arrive. I have good reason to believe that this is the culprit because sometimes I'll see output like this:
...
timestamp value
timestamp value
value
val
timestamp ue
timestamp value
...
It might be due to the fact that I changed the processor affinity to run only on Core 2. I did this because the timestamps I'm using are counted with processor cycles, so I can't have multiple time references depending on which core is running. I've put some of the code snippets below; any suggestions that might help with the dropped timestamps would be greatly appreciated!
public mainLoggerIO()
{
//bind to one cpu
Process proc = Process.GetCurrentProcess();
long AffinityMask = (long)proc.ProcessorAffinity;
AffinityMask &= 0x0002; //use only the 2nd processor
proc.ProcessorAffinity = (IntPtr)AffinityMask;
//prevent any other threads from using core 2
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
Thread.CurrentThread.Priority = ThreadPriority.Highest;
long frequency = Stopwatch.Frequency;
Console.WriteLine(" Timer frequency in ticks per second = {0}",
frequency);
long nanosecPerTick = (1000L * 1000L * 1000L) / frequency;
Console.WriteLine(" Timer is accurate within {0} nanoseconds",
nanosecPerTick);
if (Stopwatch.IsHighResolution)
MessageBox.Show("High Resolution Timer Available");
else
MessageBox.Show("No High Resolution Timer Available on this Machine");
InitializeComponent();
}
And so on. Each data return interrupt looks like this:
private void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
//serialPort1.DataReceived = serialPort1_DataReceived;
rawPort1Data = "";
rawPort1Data = serialPort1.ReadExisting();
this.Invoke((MethodInvoker)delegate { returnTextPort1(); });
}
The method returnTextPort#() is:
private void returnTextPort1()
{
if (string.Compare(saveFileName, "") == 0)//no savefile specified
return;
bufferedString += rawPort1Data;
if(bufferedString.Contains('\r')){
long timeStamp = DateTime.Now.Ticks;
textBox2.AppendText(nicknames[0] + " " + timeStamp / 10000 + ", " + rawPort1Data);
//write to file
using (System.IO.StreamWriter file = new System.IO.StreamWriter(@saveFileName, true))
{
file.WriteLine(nicknames[0] + " " + timeStamp / 10000 + ", " + rawPort1Data);//in Ms
}
bufferedString = "";
}
}
A cleaner approach would be to use a ConcurrentQueue<T> between the data received event handler and a separate Thread that deals with the resulting data. That way the event handler can return immediately, and instead of modifying rawPort1Data in a totally non-thread-safe manner you can move to a thread-safe solution.
Create a Thread that reads from the concurrent queue, writes to the file and Invokes the UI changes. Note that writing to the file should NOT be done on the UI thread.
Your ConcurrentQueue<T> can capture, in the class T that you will implement, the port number, the data received and the timestamp at which it was received.
Note also that DateTime.Now is rarely the right answer; for most locations it jumps by an hour twice every year when daylight saving time starts or ends, so prefer DateTime.UtcNow. Note, however, that neither has the accuracy you seem to be trying to obtain with your Stopwatch code.
You should not need to manipulate process or thread priorities to do this: the serial port has a buffer, and you will not miss data provided you handle it efficiently. A sketch of the pattern follows.
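A minimal sketch of that producer/consumer shape (the class and member names are mine, not from the question):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.Ports;
using System.Threading;

sealed class Reading
{
    public int Port;        // which COM port produced the data
    public string Data;     // raw text read from the port
    public DateTime Utc;    // timestamp taken in the event handler
}

static class Logger
{
    static readonly ConcurrentQueue<Reading> queue = new ConcurrentQueue<Reading>();

    // Producer: keep the event handler tiny; just capture and enqueue.
    static void Port1_DataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var port = (SerialPort)sender;
        queue.Enqueue(new Reading { Port = 1, Data = port.ReadExisting(), Utc = DateTime.UtcNow });
    }

    // Consumer: one dedicated thread drains the queue and does the slow file
    // I/O, so no write ever blocks the serial port events or the UI thread.
    static void WriterLoop(string fileName)
    {
        using (var file = new StreamWriter(fileName, true))
        {
            while (true)
            {
                if (queue.TryDequeue(out Reading r))
                    file.WriteLine(r.Utc.Ticks / 10000 + ", " + r.Data.TrimEnd());
                else
                    Thread.Sleep(1); // or use BlockingCollection<T> to avoid polling
            }
        }
    }
}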
I have been working on a private project where I wanted to learn how to program on a Windows Phone, and at some point I started to fiddle with sockets and the camera, and a great idea came to mind: a video feed (dumb me to even attempt it).
But now I'm here. I have something that works like a charm, but a Lumia 800 cannot chug through the for-loop fast enough. It sends one frame per, let's say, 7-8 seconds, which I find strange, since the phone should be powerful enough. It feels and looks like watching porn on a 56k modem, without the porn.
I also realized that a frame is about 317000 pixels, which sums to roughly 1 MB per frame. I'm also sending x/y coordinates, so mine takes up 2.3 MB per frame; I'm still working on a different way to solve this and keep it down. So I'm guessing I need to do some magic to shrink both the position and pixel values to an acceptable size, because at the moment, getting it up to an acceptable speed of something like 30 fps would require at least 60 MB/s. But that's a problem for another day.
//How many pixels to send per burst (1000 seems to be the best)
const int PixelPerSend = 1000;
int bSize = 7 * PixelPerSend;
//Comunication thread UDP feed
private void EthernetComUDP() //Runs in own thread
{
//Connect to Server
clientUDP = new SocketClientUDP();
int[] ImageContent = new int[(int)cam.PreviewResolution.Height * (int)cam.PreviewResolution.Width];
byte[] PacketContent = new byte[bSize];
string Pixel,l;
while (SendingData)
{
cam.GetPreviewBufferArgb32(ImageContent);
int x = 1, y = 1, SenderCount = 0;
//In dire need of a speedup
for (int a = 0; a < ImageContent.Length; a++) //this loop
{
Pixel = Convert.ToString(ImageContent[a], 2).PadLeft(32, '0');
//A - removed to conserve bandwidth
//PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(0, 8), 2);//0
//R
PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(8, 8), 2);//8
//G
PacketContent[SenderCount + 1] = Convert.ToByte(Pixel.Substring(16, 8), 2);//16
//B
PacketContent[SenderCount + 2] = Convert.ToByte(Pixel.Substring(24, 8), 2);//24
//Coordinates
//X
l = Convert.ToString(x, 2).PadLeft(16, '0');
//X bit(1-8)
PacketContent[SenderCount + 3] = Convert.ToByte(l.Substring(0, 8), 2);
//X bit(9-16)
PacketContent[SenderCount + 4] = Convert.ToByte(l.Substring(8, 8), 2);
//Y
l = Convert.ToString(y, 2).PadLeft(16, '0');
//Y bit(1-8)
PacketContent[SenderCount + 5] = Convert.ToByte(l.Substring(0, 8), 2);
//Y bit(9-16)
PacketContent[SenderCount + 6] = Convert.ToByte(l.Substring(8, 8), 2);
x++;
if (x == cam.PreviewResolution.Width)
{
y++;
x = 1;
}
SenderCount += 7;
if (SenderCount == bSize)
{
clientUDP.Send(ConnectToIP, PORT + 1, PacketContent);
SenderCount = 0;
}
}
}
//Close on finish
clientUDP.Close();
}
For simplicity I have tried just sending the pixels individually using
BitConverter.GetBytes(ImageContent[a]);
instead of the string-parsing mess I created (to be fixed; I just wanted a proof of concept), but the simple BitConverter did not speed it up much.
So now I'm on my last idea: the UDP sender socket, which is roughly identical to the one in MSDN's library.
public string Send(string serverName, int portNumber, byte[] payload)
{
string response = "Operation Timeout";
// We are re-using the _socket object that was initialized in the Connect method
if (_socket != null)
{
// Create SocketAsyncEventArgs context object
SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
// Set properties on context object
socketEventArg.RemoteEndPoint = new DnsEndPoint(serverName, portNumber);
// Inline event handler for the Completed event.
// Note: This event handler was implemented inline in order to make this method self-contained.
socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
{
response = e.SocketError.ToString();
// Unblock the UI thread
_clientDone.Set();
});
socketEventArg.SetBuffer(payload, 0, payload.Length);
// Sets the state of the event to nonsignaled, causing threads to block
_clientDone.Reset();
// Make an asynchronous Send request over the socket
_socket.SendToAsync(socketEventArg);
// Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
// If no response comes back within this time then proceed
_clientDone.WaitOne(TIMEOUT_MILLISECONDS);
}
else
{
response = "Socket is not initialized";
}
return response;
}
All in all I have ended up with 3 solutions:
Accept defeat (but that won't happen, so let's look at 2)
Work down the amount of data sent (destroys quality; 640x480 is small enough, I think)
Find the obvious problem (Google and friends ran out of good ideas; that's why I'm here)
The problem is almost certainly the messing about with the data. Converting a megabyte of binary data into several megabytes of text and then extracting and sending individual characters adds a massive overhead per byte of source data. Looping through individual pixels to build a send buffer takes (relatively speaking) geological timescales.
The fastest way to do this is likely to be to grab a buffer of binary data from the camera, and send it with one UDP write. Only process or break up the data on the phone if you have to, and be careful to access the original binary data directly - don't convert it all to strings and back to binary. Every extra method call you add into this process will just add overhead. If you have to use a loop, try to pre-calculate as much as you can outside the loop to minimise the work that is done on each iteration.
A couple of things come to mind: #1 Break up the raw image array into pieces to be sent over the wire. Not sure if LINQ is available on Windows Phone, but something like this. #2 Converting from int to string to byte will be very inefficient because of the processing time and memory usage. A better approach is to bulk-copy chunks of the int array to a byte array directly, as in the sketch below. Example
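A minimal sketch of that bulk copy (the buffer sizes and names are illustrative, not from the answer):
// Copy raw ARGB pixels straight into a byte buffer: no per-pixel string
// conversion, just one block copy per chunk. Buffer.BlockCopy offsets
// and counts are in BYTES, not array elements.
int[] imageContent = new int[640 * 480];   // filled by cam.GetPreviewBufferArgb32
byte[] packet = new byte[1024 * 4];        // 1024 pixels, 4 bytes each

for (int offset = 0; offset < imageContent.Length; offset += 1024)
{
    int pixels = Math.Min(1024, imageContent.Length - offset);
    Buffer.BlockCopy(imageContent, offset * 4, packet, 0, pixels * 4);
    // clientUDP.Send(ConnectToIP, PORT + 1, packet); // send the raw chunk
}
Since the receiver knows the frame width and the chunk index, it can reconstruct the x/y coordinates itself, which also removes the 4 extra coordinate bytes currently sent per pixel.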
I want to get an accurate download/upload speed through a network interface using C# .NET.
I know that it can be calculated using GetIPv4Statistics().BytesReceived and putting the thread to sleep for some time, but it's not giving the output I see in my browser.
Here is a quick snippet of code from LINQPad. It uses a very simple moving average. It shows "accurate" speeds when checked against Speedtest.net. Things to keep in mind: Kbps is in bits, and HTTP data is often compressed, so the "downloaded bytes" will be significantly smaller for highly compressible data. Also, don't forget that any process might be doing anything on the internet these days (without stricter firewall settings)...
I like flindenberg's answer (don't change the accept), and I noticed that some polling periods would return "0", which aligns with his/her conclusions.
Use at your own peril.
void Main()
{
var nics = System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces();
// Select desired NIC
var nic = nics.Single(n => n.Name == "Local Area Connection");
var reads = Enumerable.Empty<double>();
var sw = new Stopwatch();
var lastBr = nic.GetIPv4Statistics().BytesReceived;
for (var i = 0; i < 1000; i++) {
sw.Restart();
Thread.Sleep(100);
var elapsed = sw.Elapsed.TotalSeconds;
var br = nic.GetIPv4Statistics().BytesReceived;
var local = (br - lastBr) / elapsed;
lastBr = br;
// Keep last 20, ~2 seconds
reads = new [] { local }.Concat(reads).Take(20);
if (i % 10 == 0) { // ~1 second
var bSec = reads.Sum() / reads.Count();
var kbs = (bSec * 8) / 1024;
Console.WriteLine("Kb/s ~ " + kbs);
}
}
}
Please try this to check the internet connection speed.
public double CheckInternetSpeed()
{
    // Create an object of WebClient
    System.Net.WebClient wc = new System.Net.WebClient();
    // DateTime variable to store the download start time.
    DateTime dt1 = DateTime.UtcNow;
    // The downloaded bytes are stored in 'data'.
    byte[] data = wc.DownloadData("http://google.com");
    // DateTime variable to store the download end time.
    DateTime dt2 = DateTime.UtcNow;
    // Speed in KB/s: divide the byte count by 1024.0 (avoiding integer
    // division) and then by the elapsed seconds.
    return Math.Round((data.Length / 1024.0) / (dt2 - dt1).TotalSeconds, 2);
}
It gives you the speed in KB/s; share your results.
Looking at another answer to a question you posted, NetworkInterface.GetIPv4Statistics().BytesReceived - What does it return?, I believe the issue might be that you are using too-small intervals. I believe the counter only counts whole packets, and if, for example, you are downloading a file, the packets might be as big as 64 KB (65,535 bytes, the maximum IPv4 packet size), which is quite a lot if your maximum download throughput is 60 KB/s and you are measuring 200 ms intervals.
Given that your speed is 60 KB/s, I would set the running time to 10 seconds to get at least 9 packets per average. If you are writing it for all kinds of connections, I would recommend making the solution dynamic: if the speed is high you can easily decrease the averaging interval, but for slow connections you must increase it.
Either do as @pst recommends and keep a moving average, or simply increase the sleep up to maybe 1 second.
And be sure to divide by the actual time taken rather than the time passed to Thread.Sleep().
Additional thought on intervals
My process would be something like this: measure for 5 seconds and gather the data, i.e. the bytes received as well as the number of packets.
var timePerPacket = 5000 / nrOfPackets; // Time per package in ms
var intervalTime = Math.Max(d, Math.Pow(2,(Math.Log10(timePerPacket)))*100);
This will cause the interval to increase slowly from about several tens of ms up to the time per packet, with d acting as a lower bound (e.g. 100 packets in 5 seconds gives timePerPacket = 50 ms and an interval of roughly 2^log10(50) * 100 ≈ 325 ms). That way we always get at least (on average) one packet per interval, and we will not go nuts if we are on a 10 Gbps connection. The important part is that the measuring time should not be linear in the amount of data received.
The SSL handshake takes some time, so I modified @sandeep's answer: I first create the request and then measure the time needed to download the content. I believe this is a little more accurate, but still not 100%; it is an approximation.
public async Task<int> GetInternetSpeedAsync(CancellationToken ct = default)
{
const double kb = 1024;
// do not use compression
using var client = new HttpClient();
int numberOfBytesRead = 0;
var buffer = new byte[10240].AsMemory();
// create request
var stream = await client.GetStreamAsync("https://www.google.com", ct);
// start timer
DateTime dt1 = DateTime.UtcNow;
// download stuff
while (true)
{
var i = await stream.ReadAsync(buffer, ct);
if (i < 1)
break;
numberOfBytesRead += i;
}
// end timer
DateTime dt2 = DateTime.UtcNow;
double kilobytes = numberOfBytesRead / kb;
double time = (dt2 - dt1).TotalSeconds;
// speed in Kb per Second.
return (int)(kilobytes / time);
}
I am trying to stream sound samples from my microphone to my speakers using DirectSound and C#. It should be similar to 'listening to the microphone', but later I want to use this for something else. While testing my approach, I noticed quiet ticking, crackling noises in the background. I would guess this has something to do with the delay between writing and playing the buffer, which must be greater than the time needed to write the chunks.
If I set the delay between recording and playback to less than 50 ms, it mostly works, but sometimes I get really loud cracking noises, so I settled on a delay of at least 50 ms. This works okay for me, but the delay of the system's own 'listen to this device' feature seems to be much shorter. I would guess it is about 15-30 ms, and barely noticeable. At 50 ms I get at least a little reverb effect.
In the following I'll show you my microphone code (partially):
The initialisation is done like this:
capture = new Capture(device);
// Creating the buffer
// Determining the buffer size
bufferSize = format.AverageBytesPerSecond * bufferLength / 1000;
while (bufferSize % format.BlockAlign != 0) bufferSize += 1;
chunkSize = Math.Max(bufferSize, 256);
bufferSize = chunkSize * BUFFER_CHUNKS;
this.bufferLength = chunkSize * 1000 / format.AverageBytesPerSecond; // Redetermining the buffer Length that will be used.
captureBufferDescription = new CaptureBufferDescription();
captureBufferDescription.BufferBytes = bufferSize;
captureBufferDescription.Format = format;
captureBuffer = new CaptureBuffer(captureBufferDescription, capture);
// Creating Buffer control
bufferARE = new AutoResetEvent(false);
// Adding notifier to buffer.
bufferNotify = new Notify(captureBuffer);
BufferPositionNotify[] bpns = new BufferPositionNotify[BUFFER_CHUNKS];
for(int i = 0 ; i < BUFFER_CHUNKS ; i ++) bpns[i] = new BufferPositionNotify() { Offset = chunkSize * (i+1) - 1, EventNotifyHandle = bufferARE.SafeWaitHandle.DangerousGetHandle() };
bufferNotify.SetNotificationPositions(bpns);
Capturing runs like this in a separate thread:
// Initializing
MemoryStream tempBuffer = new MemoryStream();
// Capturing
while (isCapturing && captureBuffer.Capturing)
{
bufferARE.WaitOne();
if (isCapturing && captureBuffer.Capturing)
{
captureBuffer.Read(currentBufferPart * chunkSize, tempBuffer, chunkSize, LockFlag.None);
ReportChunk(applyVolume(tempBuffer.GetBuffer()));
currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS;
tempBuffer.Dispose();
tempBuffer = new MemoryStream(); // Reset Buffer;
}
}
// Finalizing
isCapturing = false;
tempBuffer.Dispose();
captureBuffer.Stop();
if (bufferARE.WaitOne(bufferLength + 1)) currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS; // That on next start the correct bufferpart will be read.
stateControlARE.Set();
While capturing, ReportChunk passes the data to the speaker part via an event that can be subscribed to. The speaker part is initialized like this:
// Creating the dxdevice.
dxdevice = new Device(device);
dxdevice.SetCooperativeLevel(hWnd, CooperativeLevel.Normal);
// Creating the buffer
bufferDescription = new BufferDescription();
bufferDescription.BufferBytes = bufferSize;
bufferDescription.Format = input.Format;
bufferDescription.ControlVolume = true;
bufferDescription.GlobalFocus = true; // That sound doesn't stop if the hWnd looses focus.
bufferDescription.StickyFocus = true; // - " -
buffer = new SecondaryBuffer(bufferDescription, dxdevice);
chunkQueue = new Queue<byte[]>();
// Creating buffer control
bufferARE = new AutoResetEvent(false);
// Register at input device
input.ChunkCaptured += new AInput.ReportBuffer(input_ChunkCaptured);
The event handler puts the data into the queue simply by:
chunkQueue.Enqueue(buffer);
bufferARE.Set();
Filling the playback buffer and starting/stopping it is done by another thread:
// Initializing
int wp = 0;
bufferARE.WaitOne(); // wait for first chunk
// Playing / writing data to play buffer.
while (isPlaying)
{
Thread.Sleep(1);
bufferARE.WaitOne(BufferLength * 3); // If a chunk is played and there is no new chunk we try to continue and may stop playing, else may the buffer runs out.
// Note that this may fails if the sender was interrupted within one chunk
if (isPlaying)
{
if (chunkQueue.Count > 0)
{
while (chunkQueue.Count > 0) wp = writeToBuffer(chunkQueue.Dequeue(), wp);
if (buffer.PlayPosition > wp - chunkSize * 3 / 2) buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize));
if (!buffer.Status.Playing)
{
buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize)); // We have 2 chunks buffered so we step back 2 chunks and play them while getting new chunks.
buffer.Play(0, BufferPlayFlags.Looping);
}
}
else
{
buffer.Stop();
bufferARE.WaitOne(); // wait for a filling chunk
}
}
}
// Finalizing
isPlaying = false;
buffer.Stop();
stateControlARE.Set();
writeToBuffer simply writes the enqueued chunk to the buffer with this.buffer.Write(wp, data, LockFlag.None);, taking care of bufferSize, chunkSize and wp, which represents the last write position. I think this is everything that is important about my code. Maybe some definitions are missing, and there is at least one more method that starts/stops and controls the threads.
I've posted this code in case I've made a mistake in filling the buffer or my initialisation is wrong. But I would guess the problem occurs because the execution of the C# code is too slow, or something like that. In the end my question is still open: how do I reduce the latency, and how do I avoid noises that shouldn't be there?
I know the reason for your problem and the way to solve it, but I can't implement it in C# and .NET, so I will explain it in the hope that you can find your way.
Audio is recorded by your mic at a specified sample rate (for example 44100 Hz) and then played on the sound card at the same sample rate (again 44100). The problem is that the crystal that drives the clock in the input device (the mic) is not the same as the crystal that drives playback on the sound card.
Although the difference is tiny, no two crystals are exactly alike (there are no two identical crystals in the entire world), so after a while a gap builds up in your playback routine.
Now, the solution is to resample the data to match the sample rate of the output, but I don't know how to do that in C# and .NET.
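For illustration, a minimal linear-interpolation resampler sketch in C# (16-bit mono samples assumed; this is my own sketch, not code from either post):
// Resample 16-bit mono PCM from srcRate to dstRate by linear interpolation.
// Good enough to absorb tiny clock drift; real resamplers use proper filters.
static short[] Resample(short[] src, int srcRate, int dstRate)
{
    int dstLength = (int)((long)src.Length * dstRate / srcRate);
    short[] dst = new short[dstLength];
    double step = (double)srcRate / dstRate; // source samples per output sample
    for (int i = 0; i < dstLength; i++)
    {
        double pos = i * step;
        int i0 = (int)pos;
        int i1 = Math.Min(i0 + 1, src.Length - 1);
        double frac = pos - i0;
        dst[i] = (short)(src[i0] + (src[i1] - src[i0]) * frac);
    }
    return dst;
}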
A long time ago I figured out that this problem was caused by the Thread.Sleep(1); in combination with high CPU usage. Because the Windows timer resolution is 15.6 ms by default, this sleep doesn't mean 'sleep for 1 ms' but 'sleep until the next clock interrupt'. (For more, read this paper.) Combined with high CPU usage, the delay may stack up to the length of a chunk or even more.
For example: if my chunk size is 40 ms, the sleep could take about 46.8 ms (3 * 15.6 ms), and this causes the ticking. One solution is to set the timer resolution down to 1 ms. That can be done this way:
[DllImport("winmm.dll", EntryPoint="timeBeginPeriod", SetLastError=true)]
private static extern uint timeBeginPeriod(uint uiPeriod);
[DllImport("winmm.dll", EntryPoint="timeEndPeriod", SetLastError=true)]
private static extern uint timeEndPeriod(uint uiPeriod);
void routine()
{
Thead.Sleep(1); // May takes about 15,6ms or even longer.
timeBeginPeriod(1); // Should be set at the startup of the application.
Thead.Sleep(1); // May takes about 1, 2 or 3 ms depending on the CPU usage.
// ... time depending routines goes here ...
timeEndPeriod(1); // Should end at application shutdown.
}
As far as I know, this should already be done by DirectX. But since this is a global setting, other parts of the application or other applications may change it. That shouldn't happen if an application sets and revokes the setting exactly once, but somehow it seems to happen anyway, caused by some sloppily programmed component or another running application.
One more thing to watch is whether you're still using the correct position in the DirectSound buffer if you skip a chunk for any reason. In that case, a resynchronization is required.