I have been working on a private project where I wanted to learn how to program on a Windows Phone, and at some point I started to fiddle with sockets and the camera, and a great idea came to mind: a video feed (dumb of me to even attempt it).
So now I'm here. I have something that works like a charm, except that a Lumia 800 cannot chug through the for-loop fast enough. It sends one frame every 7-8 seconds or so, which I find strange, since the phone should be powerful enough. It feels and looks like streaming video over a 56k modem.
I also realized that a frame is about 317,000 pixels, which sums to roughly 1 MB per frame. Since I'm also sending x/y coordinates, mine takes up 2.3 MB per frame; I'm still working on a different way to solve this and keep it down, so I'm guessing I will need some magic to get both the positions and the pixel values to an acceptable size. Even if I got the loop up to an acceptable speed, something like 30 fps would require at least 60 MB/s, but that's a problem for another day.
// How many pixels to send per burst (1000 seems to be the best)
const int PixelPerSend = 1000;
int bSize = 7 * PixelPerSend;

// Communication thread, UDP feed
private void EthernetComUDP() // Runs in its own thread
{
    // Connect to server
    clientUDP = new SocketClientUDP();
    int[] ImageContent = new int[(int)cam.PreviewResolution.Height * (int)cam.PreviewResolution.Width];
    byte[] PacketContent = new byte[bSize];
    string Pixel, l;
    while (SendingData)
    {
        cam.GetPreviewBufferArgb32(ImageContent);
        int x = 1, y = 1, SenderCount = 0;
        // In dire need of a speedup
        for (int a = 0; a < ImageContent.Length; a++) // this loop
        {
            Pixel = Convert.ToString(ImageContent[a], 2).PadLeft(32, '0');
            // A - removed to conserve bandwidth
            //PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(0, 8), 2);//0
            // R
            PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(8, 8), 2);//8
            // G
            PacketContent[SenderCount + 1] = Convert.ToByte(Pixel.Substring(16, 8), 2);//16
            // B
            PacketContent[SenderCount + 2] = Convert.ToByte(Pixel.Substring(24, 8), 2);//24
            // Coordinates
            // X
            l = Convert.ToString(x, 2).PadLeft(16, '0');
            // X bits 1-8
            PacketContent[SenderCount + 3] = Convert.ToByte(l.Substring(0, 8), 2);
            // X bits 9-16
            PacketContent[SenderCount + 4] = Convert.ToByte(l.Substring(8, 8), 2);
            // Y
            l = Convert.ToString(y, 2).PadLeft(16, '0');
            // Y bits 1-8
            PacketContent[SenderCount + 5] = Convert.ToByte(l.Substring(0, 8), 2);
            // Y bits 9-16
            PacketContent[SenderCount + 6] = Convert.ToByte(l.Substring(8, 8), 2);
            x++;
            if (x == cam.PreviewResolution.Width)
            {
                y++;
                x = 1;
            }
            SenderCount += 7;
            if (SenderCount == bSize)
            {
                clientUDP.Send(ConnectToIP, PORT + 1, PacketContent);
                SenderCount = 0;
            }
        }
    }
    // Close on finish
    clientUDP.Close();
}
For simplicity I have tried to just send the pixels individually using
BitConverter.GetBytes(ImageContent[a]);
instead of the string-parsing mess I have created (to be fixed; I just wanted a proof of concept), but the simple BitConverter did not speed it up much.
So now I'm down to my last suspect, the UDP sender socket, which is roughly identical to the one in MSDN's library.
public string Send(string serverName, int portNumber, byte[] payload)
{
    string response = "Operation Timeout";

    // We are re-using the _socket object that was initialized in the Connect method
    if (_socket != null)
    {
        // Create SocketAsyncEventArgs context object
        SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();

        // Set properties on context object
        socketEventArg.RemoteEndPoint = new DnsEndPoint(serverName, portNumber);

        // Inline event handler for the Completed event.
        // Note: This event handler was implemented inline in order to make this method self-contained.
        socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
        {
            response = e.SocketError.ToString();

            // Unblock the UI thread
            _clientDone.Set();
        });

        socketEventArg.SetBuffer(payload, 0, payload.Length);

        // Sets the state of the event to nonsignaled, causing threads to block
        _clientDone.Reset();

        // Make an asynchronous Send request over the socket
        _socket.SendToAsync(socketEventArg);

        // Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
        // If no response comes back within this time then proceed
        _clientDone.WaitOne(TIMEOUT_MILLISECONDS);
    }
    else
    {
        response = "Socket is not initialized";
    }

    return response;
}
All in all I have ended up with three options:
Accept defeat (but that won't happen, so let's look at 2)
Cut down the amount of data sent (that destroys quality, and 640x480 is already small enough, I think)
Find the obvious problem (Google and friends ran out of good ideas, which is why I'm here)
The problem is almost certainly the messing about with the data. Converting a megabyte of binary data into several megabytes of text and then extracting and sending individual characters will add a massive overhead per byte of source data. Looping through individual pixels to build a send buffer will take (relatively speaking) geological timescales.
The fastest way to do this is likely to be to grab a buffer of binary data from the camera, and send it with one UDP write. Only process or break up the data on the phone if you have to, and be careful to access the original binary data directly - don't convert it all to strings and back to binary. Every extra method call you add into this process will just add overhead. If you have to use a loop, try to pre-calculate as much as you can outside the loop to minimise the work that is done on each iteration.
A couple of things come to mind: #1 Break up the raw image array into pieces to be sent over the wire. I'm not sure whether LINQ is available on Windows Phone, but something along those lines would work for the splitting. #2 Converting from int to string to byte will be very inefficient because of the processing time and memory usage. A better approach is to bulk-copy chunks of the int array to a byte array directly, for example as in the sketch below.
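Since the linked example is gone, here is a minimal sketch of the bulk-copy idea. It reuses the question's clientUDP, ConnectToIP and PORT names; width and height are placeholders for the preview resolution. The receiver can reconstruct x/y from the offset of each slice, so no per-pixel coordinates are needed.
// Copy the whole ARGB frame into a byte[] in one call, then send it in MTU-sized slices.
int[] imageContent = new int[width * height];               // filled by cam.GetPreviewBufferArgb32(imageContent)
byte[] frame = new byte[imageContent.Length * sizeof(int)];
Buffer.BlockCopy(imageContent, 0, frame, 0, frame.Length);

const int sliceSize = 1400;                                 // keep each datagram below a typical MTU
for (int offset = 0; offset < frame.Length; offset += sliceSize)
{
    int count = Math.Min(sliceSize, frame.Length - offset);
    byte[] slice = new byte[count];
    Buffer.BlockCopy(frame, offset, slice, 0, count);
    clientUDP.Send(ConnectToIP, PORT + 1, slice);           // a real protocol would prepend the frame number / offset here
}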
Related
Kindly bear with me for this confusing question. I'm finding it as hard to describe as it is involving and tiresome. Read it and you'll know why.
I've been chasing this issue for over a month now without much progress. I'm using an STM32 (an STM32F103C8 mounted on a BluePill board) to communicate with a C# app through an FT232R serial-USB converter. The complete communication protocol is a bit complex, so I'm writing a simplified version of the code here that illustrates my problem quite accurately.
STM32 does the following.
In the initial setup,
Serial.begin at 2000000 (yes, it's very high, but I've analyzed it with an oscilloscope and the signal is very healthy; impedance matching is good and clock jitter is minimal).
Waits for a command from the C# end to enter the loop
In the loop, it does the following.
TX a byte buffer of length N on the serial port. Packet structure is 0xAA, N bytes, 1 byte checksum.
repeat the loop
And on the C# side (Pseudo code),
new Thread(() => { while (true) { IOTick(); Thread.Sleep(30); } }).Start();
IOTick() is defined as:
{
    while (SerialPortObject.BytesToRead > 1)
    {
        header = read();
        if (header != 0xAA) continue;
        byte[] buffer = new byte[N + 1];
        receivedBytes = readBytes(buffer, N + 1, Timeout = 500ms); // receivedBytes is never less than N + 1 for timeout greater than 120
        // Use the N = 16 bytes; check the Nth byte against the checksum. Doesn't take much CPU time.
        // Raise a "packet received" software event.
    }
}
readBytes is defined as
int readBytes(byte[] buffer, int count, int timeout)
{
    var st = DateTime.Now;
    for (int i = 0; i < count; i++)
    {
        var b_ = read(timeout);
        if (b_ == -1)
            return i;
        buffer[i] = (byte)b_;
        timeout -= (int)(DateTime.Now - st).TotalMilliseconds;
    }
    return count;
}
int buffer2ReadIndex = 0;
byte[] buffer2 = new byte[0];

int read(int timeout)
{
    DateTime start = DateTime.Now;
    if (buffer2.Length == 0)
    {
        while (SerialPortObject.BytesToRead <= 0)
        {
            if ((DateTime.Now - start).TotalMilliseconds > timeout)
                return -1;
            System.Threading.Thread.Sleep(30);
        }
        buffer2 = new byte[SerialPortObject.BytesToRead];
        SerialPortObject.Read(buffer2, 0, buffer2.Length);
    }
    if (buffer2.Length > 0)
    {
        var b = buffer2[buffer2ReadIndex];
        buffer2ReadIndex++;
        if (buffer2ReadIndex >= buffer2.Length)
        {
            buffer2ReadIndex = 0;
            buffer2 = new byte[0];
        }
        return b;
    }
    return -1;
}
Now, everything works as expected: the packet-received software event is triggered no later than every ~30 ms (the Windows tick time). The problem starts if I have to wait between each packet TX on the STM side. First I suspected that the I2C I was using for some tasks between packet transmissions was causing a hardware or software conflict that corrupted the serial data. But then I noticed the same thing happens if I merely introduce a delay of 1 millisecond with Arduino delay() between packet transmissions. Almost 1,000 packets should be received every second now, yet roughly 1 out of 10 packets following a successful header detection is either not delivered completely or arrives with a corrupted checksum, causing the C# app to lose the packet header. Re-finding the header obviously requires flushing some bytes, losing some packets along the way. Even that wouldn't be too bad for an app that can afford 5% packet loss; strangely, though, when this anomaly occurs, the packet-received software event waits for more than a second after every couple of hundred consecutive events.
I'm completely blind here. I even tried a 115200 baud rate; it shows the same loss, with a slightly lower loss ratio. It should be noted that at 9600 baud the issue doesn't happen. This is the only hint I have right now.
It looks like I've found an answer.
After digging deep into the SerialPort and SerialPort.BaseStream classes, and after some document reading and benchmarking, here is what I've observed:
SerialPort.BytesToRead updates are not uniform, and the DataReceived event seems to follow it. When bytes are arriving at ~200 kB/s (baud = 2 Mbps), it is updated almost instantaneously (or within 30 ms in the worst case). When they are arriving at ~20 kB/s or slower (evenly spaced in time by the microcontroller), SerialPort.BytesToRead can take up to 400 ms to update. This happens only after a dozen or so 30 ms updates.
So, observing this, I can say that SerialPort.BytesToRead is updated under one of two conditions: either some amount of time has passed since the data arrived (and this time is not constrained to 30 ms), or the data is coming in too fast.
This is strange behavior. No data is lost while the anomaly is occurring. Unsurprisingly, about 0.06% of bytes are lost when working at full bandwidth (200 kB/s at a baud rate of 2 Mbps).
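For anyone who wants to reproduce the observation, a rough sketch of the measurement (not the original benchmark code; SerialPortObject is the same port object as above): poll BytesToRead in a tight loop and log how long it stays unchanged.
var sw = System.Diagnostics.Stopwatch.StartNew();
int lastCount = 0;
long lastChange = 0;
while (true)
{
    int count = SerialPortObject.BytesToRead;
    if (count != lastCount)
    {
        // How long BytesToRead stayed at the previous value before this update.
        Console.WriteLine(lastCount + " -> " + count + " after " + (sw.ElapsedMilliseconds - lastChange) + " ms");
        lastChange = sw.ElapsedMilliseconds;
        lastCount = count;
    }
}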
I have been going around in circles over how to synchronize the movement of 20 objects through 60 chosen points each. I had several options; the main ones were:
Using SyncVars to synchronize their progress between points (1 to 0), but that is open to lag and fairly intensive
Using Unity's built-in NetworkTransform, but that seems costly in terms of networking, as the other device only needs to know the points and when to start
Sending the list of points first (which I have successfully done) and then making the devices start the movement at a certain time and at a fixed rate
So this is how I came to the conclusion that I need a networked universal time, accurate to around 0.1 seconds, so that the users see the same movement. If you think this is the wrong approach to the networking, please let me know, as I am very much a beginner at networking.
So far I have tried three methods to get this synced time.
I used System.DateTime, believing it to be an accurate and universal time, but found variations of more than a second between devices
I tried to calculate the latency between the two devices so I could work out and remove the per-device time variation, by
(A) Using the built-in method GetAveragePing(NetworkPlayer player); but this is old and deprecated, as the NetworkPlayer class no longer seems to be functional or compatible
(B) Using another built-in method, GetCurrentRtt(int hostId, int connectionId, out byte error), which returned 0 and which I believe may also be deprecated
(C) Sending a message from the server to the client and back, then dividing the time taken by two; but this seemed inaccurate, because the latency from server to client is not the same as from client to server, so it is not exactly half (a sketch of this round-trip estimate follows after this list)
I tried to access a form of synced network time from a server, by
(A) Using code from here to get a network time, then working out the System.DateTime difference from the network date-time at a certain point, so that I could synchronize them. The scripts I used to do this are below.
(B) Getting network timestamps from GetNetworkTimestamp(), which I thought might be what I wanted
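As a reference for 2(C), this is the kind of round-trip calculation I mean (all names here are mine, not from any Unity API): the client notes when it sent the ping and when the reply arrived, and the server puts its own clock into the reply.
// t0 = local time when the ping was sent, t1 = local time when the reply arrived,
// serverTime = the timestamp the server wrote into the reply (all in seconds).
static double EstimateClockOffset(double t0, double t1, double serverTime)
{
    double rtt = t1 - t0;
    // Assumes both directions take the same time, which is exactly the weakness described in (C).
    double estimatedServerNow = serverTime + rtt / 2.0;
    return estimatedServerNow - t1; // add this offset to local time to approximate server time
}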
So all of these methods have failed, and today I am asking you:
Is there another method for syncing time?
Should one of these methods work? If so, where have I gone wrong? I can give further details on my issues and the code used.
Is my approach to the networking totally wrong, or at least mostly wrong, and if so how do you suggest I achieve my goal?
Thank you very much for reading this; I hope it is detailed and still clear. If I can help you understand my question better, I will be happy to assist. Thanks for any answers and enlightening comments.
Code for Networked Time
Script A (Gets Network Time)
using UnityEngine;
using System.Collections;
using System.Net;
using System.Net.Sockets;
using UnityEngine.UI;

public class GetNetworkTime : MonoBehaviour {

    public static System.DateTime NetworkTime()
    {
        // default Windows time server
        const string ntpServer = "time.windows.com";

        // NTP packet is 48 bytes (RFC 2030)
        var ntpData = new byte[48];

        // Setting the Leap Indicator, Version Number and Mode values
        ntpData[0] = 0x1B; // LI = 0 (no warning), VN = 3 (NTP version 3), Mode = 3 (client)

        var addresses = Dns.GetHostEntry(ntpServer).AddressList;

        // The UDP port number assigned to NTP is 123
        var ipEndPoint = new IPEndPoint(addresses[0], 123);
        // NTP uses UDP
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

        socket.Connect(ipEndPoint);

        // Stops code hanging if NTP is blocked
        socket.ReceiveTimeout = 3000;

        socket.Send(ntpData);
        socket.Receive(ntpData);
        socket.Close();

        // Offset to get to the "Transmit Timestamp" field (time at which the reply
        // departed the server for the client, in 64-bit timestamp format)
        const byte serverReplyTime = 40;

        // Get the seconds part
        ulong intPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime);

        // Get the seconds fraction
        ulong fractPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime + 4);

        // Convert from big-endian to little-endian
        intPart = SwapEndianness(intPart);
        fractPart = SwapEndianness(fractPart);

        var milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);

        // **UTC** time
        var networkDateTime = (new System.DateTime(1900, 1, 1, 0, 0, 0, System.DateTimeKind.Utc)).AddMilliseconds((long)milliseconds);

        //return networkDateTime.ToLocalTime();
        return networkDateTime;
    }

    // stackoverflow.com/a/3294698/162671
    static uint SwapEndianness(ulong x)
    {
        return (uint)(((x & 0x000000ff) << 24) +
                      ((x & 0x0000ff00) << 8) +
                      ((x & 0x00ff0000) >> 8) +
                      ((x & 0xff000000) >> 24));
    }
}
Script B (Difference Calculator and Time Logger)
using UnityEngine;
using System.Collections;
using UnityEngine.Networking;
using UnityEngine.UI;

public class SyncTime2 : NetworkBehaviour {

    public float serverTimeDif;
    public float clientTimeDif;
    GetNetworkTime GetNetworkTime;
    GameObject time;
    Text TimeLog;

    // Use this for initialization
    void Start () {
        GetNetworkTime = GetComponent<GetNetworkTime>();
        time = GameObject.Find("time");
        TimeLog = time.GetComponent<Text>();

        if (isServer)
        {
            serverTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
            StartCoroutine("DisplayTime", serverTimeDif);
        }
        else
        {
            clientTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
            StartCoroutine("DisplayTime", clientTimeDif);
        }
    }

    IEnumerator DisplayTime (float TimeDif)
    {
        for (;;)
        {
            TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
            // Log time so i can see if it is different on different devices
            yield return new WaitForSeconds(0.01f);
        }
    }
}
Hmmmff... It seems that simply adding the time difference rather than subtracting it works and synchronizes the time, so rather than
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - TimeDif);
I should use
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
If anyone could confirm that this is or isn't the correct approach to the networking, I would deem that an answer.
I have never used pointers before, and now I need to; even though I've read up on them, I still have no clue how to use them!
I am using a third-party product (Mitov Signal Lab), and one of its controls has an event that lets me intercept the data. When I enter the event I have access to "Args", and one of its methods returns a pointer. This points to a buffer of 4096 double values (alternating real and imaginary numbers). So this is the code:
private void genericComplex1_ProcessData(object Sender, Mitov.SignalLab.ProcessComplexNotifyArgs Args)
{
    unsafe
    {
        IntPtr pointer = Args.InBuffer.Read();
        // now what???? I want to get each double value, do some calculations on them
        // and return them to the buffer.
    }
    Args.SendOutputData = true;
}
I'm assuming that I need to assign the pointer to Args.InBuffer.Read().
But I've tried some examples I've seen and I just get errors. I'm using Visual Studio 2010 and I'm compiling with /unsafe.
Here's an update. There are additional methods such as Args.InBuffer.Write() and Args.InBuffer.Modify(), but I still can't figure out how to access the individual elements of the buffer.
OK, here are more details. I used Marshal.Copy. Here is the code:
private void genericComplex1_ProcessData(object Sender, Mitov.SignalLab.ProcessComplexNotifyArgs Args)
{
    // used to generate a phase corrected Q signal
    // equations:
    //   real = real
    //   imaginary = (1 + amplitudeImbalance) * (real*cos(phaseImbalance) + imaginary*sin(phaseImbalance))
    // but amplitude imbalance is already corrected, therefore
    //   imaginary = real*cos(phaseCorrection) + imaginary*sin(phaseCorrection)

    double real;
    double imaginary;
    double newImaginary;

    var bufferSize = Convert.ToInt32(Args.InBuffer.GetSize());

    Mitov.SignalLab.ComplexBuffer ComplexDataBuffer = new Mitov.SignalLab.ComplexBuffer(bufferSize);
    ComplexDataBuffer.Zero();

    IntPtr pointer = Args.InBuffer.Read();
    IntPtr pointer2 = ComplexDataBuffer.Write();

    var data = new double[8192];

    Marshal.Copy(pointer, data, 0, 8192);

    for (int i = 0; i < 8192; i += 2)
    {
        data[i] = data[i];
        data[i + 1] = data[i] * Math.Cos(phaseCorrection) + data[i + 1] * Math.Sin(phaseCorrection);
    }

    Marshal.Copy(data, 0, pointer2, 8192);

    Args.SendOutputData = false;

    genericComplex1.SendData(ComplexDataBuffer);
}
Now, this does not work. The reason is that it takes too long. I am sampling audio from a sound card at 192,000 samples a second. The event above fires each time a 4096-sample buffer is available, so a new buffer arrives roughly every 20 milliseconds. Each buffer is then sent to a fast Fourier transform routine that produces a graph of signal strength vs. frequency. The imaginary data is actually the same as the real data but phase-shifted by 90 degrees. That is how it would be in a perfect world; in practice the phase is never exactly 90 degrees, due to imbalances in the signal, different electronics paths, different code and so on. So a correction factor is needed, and that is what the equations do. Now, if I replace
data[i] = data[i];
data[i + 1] = data[i] * Math.Cos(phaseCorrection) + data[i+1] * Math.Sin(phaseCorrection);
with
data[i] = data[i];
data[i+1] = data[i+1];
it works ok.
So I'm going to need pointers. But will this be substantially faster, or will the sin and cosine functions just take too long? Can I send the data directly to the processor (now that's getting really unsafe!)?
You can't access unmanaged data directly in a safe context, so you need to copy the data into a managed container:
using System.Runtime.InteropServices;
// ...
var data = new double[4096];
Marshal.Copy(pointer, data, 0, 4096);
// process data here
And as far as I can see, an unsafe context is not required for this part of the code.
Writing the data back is actually much the same, but note the parameter order:
// finished processing, copying data back
Marshal.Copy(data, 0, pointer, 4096);
As for your last notes on performance: as I already mentioned, move var data = new double[8192]; to a class member and initialize it in the class constructor, so it isn't allocated each time (allocation is pretty slow). You can compute Math.Cos(phaseCorrection) outside of the loop, since phaseCorrection doesn't change, and the same goes for Math.Sin(phaseCorrection), of course. Something like this:
Marshal.Copy(pointer, this.data, 0, 8192);

var cosCache = Math.Cos(phaseCorrection);
var sinCache = Math.Sin(phaseCorrection);

for (int i = 0; i < 8192; i += 2)
{
    data[i] = data[i];
    data[i + 1] = data[i] * cosCache + data[i + 1] * sinCache;
}

Marshal.Copy(this.data, 0, pointer2, 8192);
All this should speed up things tremendously.
If you have an IntPtr, you can get a pointer from it. Using your example:
unsafe
{
    IntPtr pointer = Args.InBuffer.Read();
    // create a pointer to a double
    double* pd = (double*)pointer.ToPointer();
    // Now you can access *pd
    double firstDouble = *pd;
    pd++; // moves to the next double in the memory block
    double secondDouble = *pd;
    // etc.
}
Now, if you want to put data back into the array, it's much the same:
double* pd = (double*)pointer.ToPointer();
*pd = 3.14;
++pd;
*pd = 42.424242;
You need to read and understand Unsafe Code and Pointers. Be very careful with this; you're programming without a safety net here.
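Putting that together with the question's phase-correction loop, the pointer version might look roughly like this. This is only a sketch: it reuses the question's Args, ComplexDataBuffer and phaseCorrection names and assumes 8192 doubles per buffer, and it also hoists the trig calls out of the loop as the other answer suggests.
unsafe
{
    double* src = (double*)Args.InBuffer.Read().ToPointer();
    double* dst = (double*)ComplexDataBuffer.Write().ToPointer();

    // Hoist the trig out of the loop; phaseCorrection does not change per sample.
    double cosCache = Math.Cos(phaseCorrection);
    double sinCache = Math.Sin(phaseCorrection);

    for (int i = 0; i < 8192; i += 2)
    {
        double re = src[i];
        double im = src[i + 1];
        dst[i] = re;                                // real part is unchanged
        dst[i + 1] = re * cosCache + im * sinCache; // phase-corrected imaginary part
    }
}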
I am trying to stream sound samples from my microphone to my speakers using DirectSound and C#. It should be similar to "listening to the microphone", but later I want to use this for something else. While testing my approach I've noticed quiet ticking and crackling noises in the background. I would guess this has something to do with the delay between writing and playing the buffer, which must be greater than the latency of writing the chunks.
If I set the delay between recording and playback to less than 50 ms, it mostly works, but sometimes I get really loud crackling noises. So I've decided on a delay of at least 50 ms. This works okay for me, but the delay of the system's "listen to this device" feature seems to be much shorter; I would guess it is about 15-30 ms and barely noticeable. With 50 ms I get at least a slight reverb effect.
In the following I'll show you my microphone code (partially):
The initialisation is done like this:
capture = new Capture(device);
// Creating the buffer
// Determining the buffer size
bufferSize = format.AverageBytesPerSecond * bufferLength / 1000;
while (bufferSize % format.BlockAlign != 0) bufferSize += 1;
chunkSize = Math.Max(bufferSize, 256);
bufferSize = chunkSize * BUFFER_CHUNKS;
this.bufferLength = chunkSize * 1000 / format.AverageBytesPerSecond; // Redetermining the buffer Length that will be used.
captureBufferDescription = new CaptureBufferDescription();
captureBufferDescription.BufferBytes = bufferSize;
captureBufferDescription.Format = format;
captureBuffer = new CaptureBuffer(captureBufferDescription, capture);
// Creating Buffer control
bufferARE = new AutoResetEvent(false);
// Adding notifier to buffer.
bufferNotify = new Notify(captureBuffer);
BufferPositionNotify[] bpns = new BufferPositionNotify[BUFFER_CHUNKS];
for(int i = 0 ; i < BUFFER_CHUNKS ; i ++) bpns[i] = new BufferPositionNotify() { Offset = chunkSize * (i+1) - 1, EventNotifyHandle = bufferARE.SafeWaitHandle.DangerousGetHandle() };
bufferNotify.SetNotificationPositions(bpns);
The capturing will run like this in an extra thread:
// Initializing
MemoryStream tempBuffer = new MemoryStream();

// Capturing
while (isCapturing && captureBuffer.Capturing)
{
    bufferARE.WaitOne();
    if (isCapturing && captureBuffer.Capturing)
    {
        captureBuffer.Read(currentBufferPart * chunkSize, tempBuffer, chunkSize, LockFlag.None);
        ReportChunk(applyVolume(tempBuffer.GetBuffer()));
        currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS;
        tempBuffer.Dispose();
        tempBuffer = new MemoryStream(); // Reset Buffer;
    }
}

// Finalizing
isCapturing = false;
tempBuffer.Dispose();
captureBuffer.Stop();
if (bufferARE.WaitOne(bufferLength + 1)) currentBufferPart = (currentBufferPart + 1) % BUFFER_CHUNKS; // So that on next start the correct buffer part will be read.
stateControlARE.Set();
While capturing, ReportChunk passes the data to the speaker part via an event that can be subscribed to. The speaker part is initialized like this:
// Creating the dxdevice.
dxdevice = new Device(device);
dxdevice.SetCooperativeLevel(hWnd, CooperativeLevel.Normal);
// Creating the buffer
bufferDescription = new BufferDescription();
bufferDescription.BufferBytes = bufferSize;
bufferDescription.Format = input.Format;
bufferDescription.ControlVolume = true;
bufferDescription.GlobalFocus = true; // That sound doesn't stop if the hWnd looses focus.
bufferDescription.StickyFocus = true; // - " -
buffer = new SecondaryBuffer(bufferDescription, dxdevice);
chunkQueue = new Queue<byte[]>();
// Creating buffer control
bufferARE = new AutoResetEvent(false);
// Register at input device
input.ChunkCaptured += new AInput.ReportBuffer(input_ChunkCaptured);
The data is put by the event method into the queue, simply by:
chunkQueue.Enqueue(buffer);
bufferARE.Set();
Filling the playback buffer and starting/stopping playback is done by another thread:
// Initializing
int wp = 0;
bufferARE.WaitOne(); // wait for the first chunk

// Playing / writing data to the play buffer.
while (isPlaying)
{
    Thread.Sleep(1);
    bufferARE.WaitOne(BufferLength * 3); // If a chunk is played and there is no new chunk we try to continue and may stop playing, else the buffer may run out.
                                         // Note that this may fail if the sender was interrupted within one chunk.
    if (isPlaying)
    {
        if (chunkQueue.Count > 0)
        {
            while (chunkQueue.Count > 0) wp = writeToBuffer(chunkQueue.Dequeue(), wp);
            if (buffer.PlayPosition > wp - chunkSize * 3 / 2) buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize));
            if (!buffer.Status.Playing)
            {
                buffer.SetCurrentPosition(((wp - chunkSize * 2 + bufferSize) % bufferSize)); // We have 2 chunks buffered, so we step back 2 chunks and play them while getting new chunks.
                buffer.Play(0, BufferPlayFlags.Looping);
            }
        }
        else
        {
            buffer.Stop();
            bufferARE.WaitOne(); // wait for a filling chunk
        }
    }
}

// Finalizing
isPlaying = false;
buffer.Stop();
stateControlARE.Set();
writeToBuffer simply writes the enqueued chunk to the buffer with this.buffer.Write(wp, data, LockFlag.None); taking care of bufferSize, chunkSize and wp, which represents the last write position. I think this is everything that is important about my code. Some definitions may be missing, and there is one more method that starts/stops (i.e. controls) the threads.
I've posted this code in case I've made a mistake in filling the buffer or my initialization is wrong. But I would guess the problem occurs because the execution of C# bytecode is too slow or something like that. In the end my question is still open: how can I reduce the latency, and how do I avoid the noises that shouldn't be there?
I know the reason for your problem and the way you can solve it, but I can't implement it in C# and .NET, so I will explain it in the hope that you can find your own way.
Audio is recorded by your mic at a specified sample rate (for example 44100 Hz) and then played on the sound card at the same rate (again 44100). The problem is that the crystal that clocks the input device (the mic) is not the same as the crystal that clocks playback on the sound card.
Even though the difference is tiny, the two are never exactly the same (no two crystals in the entire world are identical), so after a while a gap builds up in your playback routine.
The solution is to resample the data to match the sample rate of the output, but I don't know how to do that in C# and .NET.
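To make that concrete, here is a rough sketch of what a linear-interpolation resampler could look like in C#. This is my own sketch, not library code; it assumes mono samples already converted to double, and you still have to measure the actual clock drift to pick the ratio.
// ratio = inputRate / outputRate, e.g. slightly above or below 1.0 to compensate clock drift.
static double[] Resample(double[] input, double ratio)
{
    int outLength = (int)(input.Length / ratio);
    double[] output = new double[outLength];
    for (int i = 0; i < outLength; i++)
    {
        double srcPos = i * ratio;               // fractional position in the input
        int i0 = (int)srcPos;
        int i1 = Math.Min(i0 + 1, input.Length - 1);
        double frac = srcPos - i0;
        output[i] = input[i0] * (1.0 - frac) + input[i1] * frac; // linear interpolation
    }
    return output;
}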
A long time ago I figured out that this problem was caused by the Thread.Sleep(1); in combination with high CPU usage. Because the Windows timer resolution is 15.6 ms by default, this sleep doesn't mean "sleep for 1 ms" but "sleep until the next clock interrupt is reached". (For more, read this paper.) Combined with high CPU usage, it may stack up to the length of a chunk or even more.
For example: if my chunk size is 40 ms, the sleep could take about 46.8 ms (3 × 15.6 ms), and this causes the ticking. One solution is to set the timer resolution down to 1 ms. That can be done this way:
using System.Runtime.InteropServices; // for DllImport

[DllImport("winmm.dll", EntryPoint="timeBeginPeriod", SetLastError=true)]
private static extern uint timeBeginPeriod(uint uiPeriod);

[DllImport("winmm.dll", EntryPoint="timeEndPeriod", SetLastError=true)]
private static extern uint timeEndPeriod(uint uiPeriod);

void routine()
{
    Thread.Sleep(1); // May take about 15.6 ms or even longer.

    timeBeginPeriod(1); // Should be set once at application startup.
    Thread.Sleep(1);    // May take about 1, 2 or 3 ms depending on the CPU usage.

    // ... time-dependent routines go here ...

    timeEndPeriod(1); // Should be called once at application shutdown.
}
As far as I know, this should already be done by DirectX. But since this is a global setting, other parts of the application or other applications may change it. That shouldn't happen if an application sets and revokes the setting exactly once, but somehow it seems to happen anyway, caused by some sloppily programmed component or another running application.
One more thing to watch is whether you're still using the correct position in the DirectX buffer if you skip a chunk for any reason. In that case a resynchronization is required.
As a feature of the application I'm developing, I need to show the total estimated time left to upload/download a file to/from a server.
How can I get the download/upload speed between the client machine and the server?
I think that if I can get the speed, I can calculate the time as follows:
for example, for a 200 MB file: 200 × 1024 KB = 204,800 KB, and
204,800 KB / speed in KB/s = "x" seconds
The upload/download speed is not a static property of a server; it depends on your specific connection and may also vary over time. Most applications I've seen estimate it over a short time window: they start downloading/uploading and measure the amount of data transferred over, let's say, 10 seconds. This is taken as the current transfer speed and used to calculate the remaining time (e.g. 2500 kB / 10 s -> 250 kB/s). The window is then moved along and the estimate recalculated continuously so it tracks the current speed.
Although this is a quite basic approach, it will serve well in most cases.
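A minimal sketch of that sliding-window idea (the names and the 10-second window are mine; call ReportProgress from whatever transfer loop or progress event you already have):
// Keep (timestamp, totalBytes) samples from the last 10 seconds and compute the rate over that window only.
Queue<KeyValuePair<DateTime, long>> samples = new Queue<KeyValuePair<DateTime, long>>();

void ReportProgress(long bytesSoFar, long bytesTotal)
{
    DateTime now = DateTime.UtcNow;
    samples.Enqueue(new KeyValuePair<DateTime, long>(now, bytesSoFar));
    while (samples.Count > 1 && (now - samples.Peek().Key).TotalSeconds > 10)
        samples.Dequeue();                       // drop samples older than the window

    KeyValuePair<DateTime, long> oldest = samples.Peek();
    double seconds = (now - oldest.Key).TotalSeconds;
    if (seconds <= 0) return;                    // not enough data yet

    double bytesPerSecond = (bytesSoFar - oldest.Value) / seconds;
    if (bytesPerSecond <= 0) return;

    double secondsLeft = (bytesTotal - bytesSoFar) / bytesPerSecond;
    // Update the UI with bytesPerSecond and secondsLeft here.
}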
Try something like this:
int chunkSize = 1024;
int sent = 0;
int total = reader.Length;
DateTime started = DateTime.Now;

while (reader.Position < reader.Length)
{
    byte[] buffer = new byte[Math.Min(chunkSize, reader.Length - reader.Position)];
    int readBytes = reader.Read(buffer, 0, buffer.Length);

    // send data packet

    sent += readBytes;

    TimeSpan elapsedTime = DateTime.Now - started;
    TimeSpan estimatedTime =
        TimeSpan.FromSeconds(
            (total - sent) /
            ((double)sent / elapsedTime.TotalSeconds));
}
This is only tangentially related, but I assume if you're trying to calculate total time remaining, you're probably also going to be showing it as some kind of progress bar. If so, you should read this paper by Chris Harrison about perceptual differences. Here's the conclusion straight from his paper (emphasis mine).
Different progress bar behaviors appear to have a significant effect on user perception of process duration. By minimizing negative behaviors and incorporating positive behaviors, one can effectively make progress bars and their associated processes appear faster. Additionally, if elements of a multistage operation can be rearranged, it may be possible to reorder the stages in a more pleasing and seemingly faster sequence.
http://www.chrisharrison.net/projects/progressbars/ProgBarHarrison.pdf
I don't know why you need this, but I would go the simplest way possible and ask the user what connection type he has. Then take the file size, divide it by the speed, and then by 8, to get the number of seconds.
The point is that you won't need processing power to calculate speeds. Microsoft, on their website, uses a function that estimates the speed for the most common connection types based on the file size, which you can get while uploading the file or enter manually.
Then again, maybe you have other needs and must calculate the upload speed on the fly...
The following code computes the remaining time in minutes.
long totalReceived = 0;
DateTime lastProgressChange = DateTime.Now;
Stack<int> timeStack = new Stack<int>(5);
Stack<long> byteStack = new Stack<long>(5);

using (WebClient c = new WebClient())
{
    c.DownloadProgressChanged += delegate(object s, DownloadProgressChangedEventArgs args)
    {
        long bytes;
        if (totalReceived == 0)
        {
            totalReceived = args.BytesReceived;
            bytes = args.BytesReceived;
        }
        else
        {
            bytes = args.BytesReceived - totalReceived;
        }

        timeStack.Push(DateTime.Now.Subtract(lastProgressChange).Seconds);
        byteStack.Push(bytes);

        double r = timeStack.Average() * ((args.TotalBytesToReceive - args.BytesReceived) / byteStack.Average());
        this.textBox1.Text = (r / 60).ToString();

        totalReceived = args.BytesReceived;
        lastProgressChange = DateTime.Now;
    };
    c.DownloadFileAsync(new Uri("http://www.visualsvn.com/files/VisualSVN-1.7.6.msi"), @"C:\SVN.msi");
}
I think I've got the estimated time to download:
double timeToDownload = ((((totalFileSize/1024)-((fileStream.Length)/1024)) / Math.Round(currentSpeed, 2))/60);
this.Invoke(new UpdateProgessCallback(this.UpdateProgress), new object[] {
Math.Round(currentSpeed, 2), Math.Round(timeToDownload,2) });
where
private void UpdateProgress(double currentSpeed, double timeToDownload)
{
lblTimeUpdate.Text = string.Empty;
lblTimeUpdate.Text = " At Speed of " + currentSpeed + " it takes " + timeToDownload +" minute to complete download";
}
and the current speed is calculated like this:
TimeSpan dElapsed = DateTime.Now - dStart;
if (dElapsed.Seconds > 0)
{
    currentSpeed = (fileStream.Length / 1024) / dElapsed.Seconds;
}