I have been going round in circles trying to work out how to synchronize the movement of 20 objects through 60 chosen points each. I had several options; the main ones were:
Using SyncVars to synchronize their progress between points (1 to 0), but that is vulnerable to lag and fairly intensive
Using Unity's built-in NetworkTransform, but that seems costly in terms of bandwidth, since the other device only needs to know the points and when to start
Sending the list of points first, which I have successfully done, and then making the devices start moving at a certain time at a fixed rate
This is how I came to the conclusion that I need a networked universal time accurate to around 0.1 seconds, so that all users see the same movement. If you see this as the wrong approach to this networking problem, please let me know, as I am very much a beginner at networking.
So far I have tried three methods to get this synced time.
I used System.DateTime, believing it was an accurate, universal time, but found variation of more than a second between devices
I tried to calculate the latency between the two devices, so I could calculate and remove the per-device time variation, by
(A) Using the built-in method GetAveragePing(NetworkPlayer player); but this is old and deprecated, as the NetworkPlayer class no longer seems to be functional or compatible
(B) Using another built-in method, GetCurrentRtt(int hostId, int connectionId, out byte error); which returned 0 and which I believe may also be deprecated
(C) Sending a message from the server to the client and back, then dividing the time taken by two; but this seemed inaccurate, as the latency from server to client is not the same as from client to server, so it is not exactly half (see the sketch below)
I tried to access a form of synced network time from a server, by
(A) Using code from here to get a network time, then working out the System.DateTime difference from the network time at a certain point so that I could synchronize them. The scripts I used to do this are below.
(B) Getting network timestamps from GetNetworkTimestamp(); which I thought might be what I wanted
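On method (C) above: plain RTT/2 assumes both directions are equally slow. The usual refinement, borrowed from NTP, is to timestamp the exchange on both ends so the asymmetry partly cancels. A minimal sketch; the four timestamps are names introduced here for illustration, not an existing Unity API:
// Sketch: NTP-style offset estimation over your own network messages.
// t0 = client send, t1 = server receive, t2 = server reply send,
// t3 = client receive (each timestamp read from that machine's own clock).
double EstimateServerOffset(double t0, double t1, double t2, double t3)
{
    // Cancels one-way asymmetry to first order, unlike plain RTT/2.
    return ((t1 - t0) + (t2 - t3)) / 2.0; // server clock minus client clock
}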
So far all of these methods have failed, so today I am asking:
Is there another method for syncing time?
Should one of these methods work? If so, where have I gone wrong? I can give further details on my issues and the code used.
Is my approach to networking totally wrong, or at least mostly wrong, and how do you suggest I achieve my goal?
Thank you very much for reading this. I hope it is detailed and still clear, and if I can help you understand my question better I will be very happy to assist. Thanks for any answers / enlightening comments.
Code for Networked Time
Script A (Gets Network Time)
using UnityEngine;
using System.Collections;
using System.Net;
using System.Net.Sockets;
using UnityEngine.UI;

public class GetNetworkTime : MonoBehaviour {

    public static System.DateTime NetworkTime()
    {
        // Default Windows time server
        const string ntpServer = "time.windows.com";

        // The NTP message size is 48 bytes (RFC 2030)
        var ntpData = new byte[48];

        // Setting the Leap Indicator, Version Number and Mode values
        ntpData[0] = 0x1B; // LI = 0 (no warning), VN = 3 (version 3), Mode = 3 (client)

        var addresses = Dns.GetHostEntry(ntpServer).AddressList;

        // The UDP port number assigned to NTP is 123
        var ipEndPoint = new IPEndPoint(addresses[0], 123);

        // NTP uses UDP
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        socket.Connect(ipEndPoint);

        // Stops the code hanging if NTP is blocked
        socket.ReceiveTimeout = 3000;

        socket.Send(ntpData);
        socket.Receive(ntpData);
        socket.Close();

        // Offset to get to the "Transmit Timestamp" field (the time at which the
        // reply departed the server for the client, in 64-bit timestamp format)
        const byte serverReplyTime = 40;

        // Get the seconds part
        ulong intPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime);

        // Get the seconds fraction
        ulong fractPart = System.BitConverter.ToUInt32(ntpData, serverReplyTime + 4);

        // Convert from big-endian to little-endian
        intPart = SwapEndianness(intPart);
        fractPart = SwapEndianness(fractPart);

        var milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);

        // **UTC** time
        var networkDateTime = (new System.DateTime(1900, 1, 1, 0, 0, 0, System.DateTimeKind.Utc)).AddMilliseconds((long)milliseconds);

        //return networkDateTime.ToLocalTime();
        return networkDateTime;
    }

    // stackoverflow.com/a/3294698/162671
    static uint SwapEndianness(ulong x)
    {
        return (uint)(((x & 0x000000ff) << 24) +
                      ((x & 0x0000ff00) << 8) +
                      ((x & 0x00ff0000) >> 8) +
                      ((x & 0xff000000) >> 24));
    }
}
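Note that NetworkTime() blocks on DNS and UDP I/O, so calling it from Unity's main thread can stall a frame for up to the 3-second timeout. A minimal sketch of fetching it in the background instead, assuming a hypothetical ntpTime field to store the result:
// Sketch: fetch the NTP time off the main thread.
System.Threading.Tasks.Task.Run(() =>
{
    try { ntpTime = GetNetworkTime.NetworkTime(); }               // hypothetical field
    catch (SocketException) { ntpTime = System.DateTime.UtcNow; } // unsynced fallback
});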
Script B (Difference Calculator and Time Logger)
using UnityEngine;
using System.Collections;
using UnityEngine.Networking;
using UnityEngine.UI;

public class SyncTime2 : NetworkBehaviour {

    public float serverTimeDif;
    public float clientTimeDif;
    GetNetworkTime GetNetworkTime;
    GameObject time;
    Text TimeLog;

    // Use this for initialization
    void Start () {
        GetNetworkTime = GetComponent<GetNetworkTime>();
        time = GameObject.Find("time");
        TimeLog = time.GetComponent<Text>();
        if (isServer)
        {
            serverTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
            StartCoroutine("DisplayTime", serverTimeDif);
        }
        else
        {
            clientTimeDif = (float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - (float)GetNetworkTime.NetworkTime().TimeOfDay.TotalSeconds;
            StartCoroutine("DisplayTime", clientTimeDif);
        }
    }

    IEnumerator DisplayTime (float TimeDif)
    {
        for (;;)
        {
            // Log the time so I can see whether it differs between devices
            TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
            yield return new WaitForSeconds(0.01f);
        }
    }
}
Hmmmff... It seems that simply adding the time difference rather than subtracting it synchronizes the time. So rather than
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds - TimeDif);
I should use
TimeLog.text = "" + ((float)System.DateTime.UtcNow.TimeOfDay.TotalSeconds + TimeDif);
If anyone could confirm that this is or isn't the correct way to approach this networking problem, I would deem that an answer.
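Assuming the synced clock above works, the agreed-start plan might be sketched like this. StartAtNetworkTime and the way the start time is delivered (e.g. an RPC carrying it alongside the point list) are assumptions, not tested code:
// Sketch: the server picks a shared start time a little in the future and
// sends it to everyone; each device waits on its own corrected clock.
IEnumerator StartAtNetworkTime(double syncedStartSeconds, float timeDif)
{
    // Script B displays local + TimeDif as the synced clock, so a shared
    // start time converts back to local time as syncedStart - TimeDif.
    double localStart = syncedStartSeconds - timeDif;
    while (System.DateTime.UtcNow.TimeOfDay.TotalSeconds < localStart)
        yield return null;
    // Begin moving all 20 objects through their points at the fixed rate here.
}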
Kindly bear with me for this confusing question. I'm finding it as hard to describe as it is involved and tiresome. Read it and you'll know why.
I've been chasing this issue for over a month now without much progress. I'm using an STM32 (an STM32F103C8 mounted on a BluePill board) to communicate with a C# app through an FT232R serial-USB converter. The complete communication protocol is a bit complex, so what follows is a simplified version of the code that reproduces my problem quite accurately.
The STM32 does the following.
In the initial setup, it:
Calls Serial.begin at 2000000 baud (yes, it's very high, but I've analyzed it using an oscilloscope and the signal is very healthy; impedance matching is good and clock jitter is minimal)
Waits for a command from the C# end to enter the loop
In the loop, it does the following:
TX a byte buffer of length N on the serial port. The packet structure is 0xAA, N bytes, 1 checksum byte.
Repeat the loop
And on the C# side (Pseudo code),
new Thread(() => { while (true) { IOTick(); Thread.Sleep(30); } }).Start();
IOTick() is defined as:
{
    while (SerialPortObject.BytesToRead > 1)
    {
        header = read();
        if (header != 0xAA) continue;
        byte[] buffer = new byte[N + 1];
        receivedBytes = readBytes(buffer, N + 1, Timeout = 500ms); // receivedBytes is never less than N + 1 for timeouts greater than 120
        use the N = 16 bytes; check the Nth byte against the checksum (doesn't take much CPU time);
        send a packet-received software event.
    }
}
readBytes is defined as
int readBytes(byte[] buffer, int count, int timeout)
{
    var st = DateTime.Now;
    for (int i = 0; i < count; i++)
    {
        var b_ = read(timeout);
        if (b_ == -1)
            return i;
        buffer[i] = (byte)b_;
        // Subtract only this iteration's elapsed time, then restart the clock
        timeout -= (int)(DateTime.Now - st).TotalMilliseconds;
        st = DateTime.Now;
    }
    return count;
}
int buffer2ReadIndex = 0;
byte[] buffer2 = new byte[0];

int read(int timeout)
{
    DateTime start = DateTime.Now;
    if (buffer2.Length == 0)
    {
        while (SerialPortObject.BytesToRead <= 0)
        {
            if ((DateTime.Now - start).TotalMilliseconds > timeout)
                return -1;
            System.Threading.Thread.Sleep(30);
        }
        buffer2 = new byte[SerialPortObject.BytesToRead];
        SerialPortObject.Read(buffer2, 0, buffer2.Length);
    }
    if (buffer2.Length > 0)
    {
        var b = buffer2[buffer2ReadIndex];
        buffer2ReadIndex++;
        if (buffer2ReadIndex >= buffer2.Length)
        {
            buffer2ReadIndex = 0;
            buffer2 = new byte[0];
        }
        return b;
    }
    return -1;
}
Now, everything works as expected. The packet-received software event is triggered no later than every ~30 ms (the Windows tick time). The problem starts if I have to wait between each packet TX on the STM side. At first I suspected that the I2C I was using for some tasks between packet TXs was causing some hardware or software conflict that corrupted the serial data. But then I noticed that the same thing happens if I merely introduce a delay of 1 millisecond, using Arduino's delay(), between packet TXs. Almost 1K packets should be received every second now, yet roughly 1 out of 10 packets following a successful header read is either not delivered completely or delivered with a corrupted checksum, causing the C# app to lose the packet header. Re-acquiring the header obviously requires flushing some bytes, losing some packets in the communication. Even this wouldn't sound too bad for an app that can afford 5% packet loss; strangely, though, when this anomaly occurs, the packet-received software event then waits for more than 1 second after every couple hundred consecutive events.
I'm completely blind here. I even tried 115200 baud; it shows the same loss, with a slightly lower loss ratio. It should be noted that at 9600 baud the issue doesn't happen. This is the only hint I've got right now.
It looks like I've found an answer.
After digging deep into the SerialPort and SerialPort.BaseStream classes, and after some documentation reading and benchmarking, here is what I've observed:
SerialPort.BytesToRead updates are not uniform, and the DataReceived event seems to follow it. When bytes are coming in at ~200 kHz (baud = 2 Mbps), it is updated almost instantaneously (or within 30 ms, worst case). When they are coming in at ~20 kHz or slower (evenly spaced in time by the microcontroller), SerialPort.BytesToRead can take up to 400 ms to update. This happens only after a dozen 30 ms updates.
So, observing this, I can say that SerialPort.BytesToRead is updated on one of two conditions: a certain amount of time has passed since the data arrived (and this time is not constrained to 30 ms), or the data is coming in too fast.
This is strange behavior. No data is lost while the anomaly is occurring. Not too surprisingly, 0.06% of bytes are lost when working at full bandwidth (200 KB/s at a baud of 2 Mbps).
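Given that observation, one workaround worth trying (an assumption on my part, not something benchmarked here) is to skip BytesToRead polling entirely and read from SerialPort.BaseStream, whose async reads complete as soon as the driver has data. A minimal sketch; ProcessBytes is a hypothetical hook for the header/checksum parser above:
using System.IO.Ports;
using System.Threading;
using System.Threading.Tasks;

async Task ReadLoopAsync(SerialPort SerialPortObject, CancellationToken ct)
{
    var buffer = new byte[4096];
    var stream = SerialPortObject.BaseStream;
    while (!ct.IsCancellationRequested)
    {
        // Completes as soon as at least one byte is available.
        int n = await stream.ReadAsync(buffer, 0, buffer.Length, ct);
        if (n > 0)
            ProcessBytes(buffer, n); // hypothetical: feed the packet parser
    }
}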
I have been working on a private project where I wanted to learn how to program for Windows Phone, and at some point I started to fiddle with sockets and the camera, and a great idea came to mind: a video feed (dumb of me to even attempt it).
But now I'm here. I have something that works like a charm, except that the Lumia 800 cannot chug through the for loop fast enough: it sends a frame every 7-8 seconds, which I find strange, since it should be powerful enough. It feels and looks like watching porn on a 56k modem, without the porn.
I also realized that a frame is 317,000 pixels, which would sum to roughly 1 MB per frame; since I'm also sending x/y coordinates, mine takes up 2.3 MB per frame (I'm still working on a different way to keep this down, so I'm guessing I'll need some magic to get both positions and pixel values to an acceptable size). At the moment, even if I got it up to an acceptable speed, it would require at least 60 MB/s to reach something like 30 fps, but that's a problem for another day.
//How many pixels to send per burst (1000 seems to be the best)
const int PixelPerSend = 1000;
int bSize = 7 * PixelPerSend;

//Communication thread, UDP feed
private void EthernetComUDP() //Runs in its own thread
{
    //Connect to server
    clientUDP = new SocketClientUDP();
    int[] ImageContent = new int[(int)cam.PreviewResolution.Height * (int)cam.PreviewResolution.Width];
    byte[] PacketContent = new byte[bSize];
    string Pixel, l;
    while (SendingData)
    {
        cam.GetPreviewBufferArgb32(ImageContent);
        int x = 1, y = 1, SenderCount = 0;
        //In dire need of a speedup
        for (int a = 0; a < ImageContent.Length; a++) //this loop
        {
            Pixel = Convert.ToString(ImageContent[a], 2).PadLeft(32, '0');
            //A - removed to conserve bandwidth
            //PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(0, 8), 2);//0
            //R
            PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(8, 8), 2);//8
            //G
            PacketContent[SenderCount + 1] = Convert.ToByte(Pixel.Substring(16, 8), 2);//16
            //B
            PacketContent[SenderCount + 2] = Convert.ToByte(Pixel.Substring(24, 8), 2);//24
            //Coordinates
            //X
            l = Convert.ToString(x, 2).PadLeft(16, '0');
            //X bits 1-8
            PacketContent[SenderCount + 3] = Convert.ToByte(l.Substring(0, 8), 2);
            //X bits 9-16
            PacketContent[SenderCount + 4] = Convert.ToByte(l.Substring(8, 8), 2);
            //Y
            l = Convert.ToString(y, 2).PadLeft(16, '0');
            //Y bits 1-8
            PacketContent[SenderCount + 5] = Convert.ToByte(l.Substring(0, 8), 2);
            //Y bits 9-16
            PacketContent[SenderCount + 6] = Convert.ToByte(l.Substring(8, 8), 2);
            x++;
            if (x == cam.PreviewResolution.Width)
            {
                y++;
                x = 1;
            }
            SenderCount += 7;
            if (SenderCount == bSize)
            {
                clientUDP.Send(ConnectToIP, PORT + 1, PacketContent);
                SenderCount = 0;
            }
        }
    }
    //Close on finish
    clientUDP.Close();
}
For simplicity I have tried just sending the pixels individually using
BitConverter.GetBytes(ImageContent[a]);
instead of the string-parsing mess I created (to be fixed; I just wanted a proof of concept), but the simple BitConverter did not speed things up much.
So now I'm down to my last suspect, the UDP sender socket, which is roughly identical to the one in MSDN's library.
public string Send(string serverName, int portNumber, byte[] payload)
{
    string response = "Operation Timeout";

    // We are re-using the _socket object that was initialized in the Connect method
    if (_socket != null)
    {
        // Create a SocketAsyncEventArgs context object
        SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();

        // Set properties on the context object
        socketEventArg.RemoteEndPoint = new DnsEndPoint(serverName, portNumber);

        // Inline event handler for the Completed event.
        // Note: this event handler is implemented inline to keep the method self-contained.
        socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
        {
            response = e.SocketError.ToString();

            // Unblock the UI thread
            _clientDone.Set();
        });

        socketEventArg.SetBuffer(payload, 0, payload.Length);

        // Sets the state of the event to nonsignaled, causing threads to block
        _clientDone.Reset();

        // Make an asynchronous Send request over the socket
        _socket.SendToAsync(socketEventArg);

        // Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
        // If no response comes back within this time then proceed
        _clientDone.WaitOne(TIMEOUT_MILLISECONDS);
    }
    else
    {
        response = "Socket is not initialized";
    }
    return response;
}
All in all I have ended up with three options:
Accept defeat (but that won't happen, so let's look at 2)
Work down the amount of data sent (destroys quality; 640x480 is small enough, I think)
Find the obvious problem (Google and friends ran out of good ideas; that's why I'm here)
The problem is almost certainly the messing about with the data. Converting a megabyte of binary data into several megabytes of text and then extracting and sending individual characters will add a massive overhead per byte of source data. Looping through individual pixels to build a send buffer will take (relatively speaking) geological timescales.
The fastest way to do this is likely to be to grab a buffer of binary data from the camera, and send it with one UDP write. Only process or break up the data on the phone if you have to, and be careful to access the original binary data directly - don't convert it all to strings and back to binary. Every extra method call you add into this process will just add overhead. If you have to use a loop, try to pre-calculate as much as you can outside the loop to minimise the work that is done on each iteration.
A couple of things come to mind. #1: break up the raw image array into pieces to be sent over the wire; I'm not sure whether LINQ is available on Windows Phone, but a chunking helper along those lines would do. #2: converting from int to string to byte will be very inefficient because of the processing time and memory usage; a better approach is to bulk-copy chunks of the int array to a byte array directly, as in the example below.
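A minimal sketch of the bulk-copy idea in #2, assuming the ARGB buffer from GetPreviewBufferArgb32; the sizes and names are illustrative:
// Sketch: copy the ARGB int[] straight into a byte[], no string round-trips.
int[] ImageContent = new int[640 * 480];          // filled by GetPreviewBufferArgb32
byte[] frame = new byte[ImageContent.Length * 4]; // 4 bytes per ARGB pixel
Buffer.BlockCopy(ImageContent, 0, frame, 0, frame.Length);
// frame can now be sliced into UDP-sized chunks and sent as-is; explicit x/y
// coordinates become redundant if each chunk carries its byte offset instead.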
I want to get an accurate download/upload speed through a network interface using C# .NET.
I know it can be calculated using GetIPv4Statistics().BytesReceived and putting the thread to sleep for some time, but it doesn't give the numbers I see in my browser.
Here is a quick snippet of code from LINQPad. It uses a very simple moving average. It shows "accurate" speeds compared against Speedtest.net. Things to keep in mind: Kbps is in bits, and HTTP data is often compressed, so the "downloaded bytes" will be significantly smaller for highly compressible data. Also, don't forget that any old process might be doing any old thing on the internet these days (without stricter firewall settings).
I like flindenberg's answer (don't change the accepted one), and I noticed that some polling periods would return "0", which aligns with his/her conclusions.
Use at your own peril.
void Main()
{
    var nics = System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces();
    // Select the desired NIC
    var nic = nics.Single(n => n.Name == "Local Area Connection");
    var reads = Enumerable.Empty<double>();
    var sw = new Stopwatch();
    var lastBr = nic.GetIPv4Statistics().BytesReceived;
    for (var i = 0; i < 1000; i++) {
        sw.Restart();
        Thread.Sleep(100);
        var elapsed = sw.Elapsed.TotalSeconds;
        var br = nic.GetIPv4Statistics().BytesReceived;
        var local = (br - lastBr) / elapsed;
        lastBr = br;
        // Keep the last 20 reads, ~2 seconds. (Materialize with ToArray so the
        // lazy Concat chain doesn't grow without bound.)
        reads = new [] { local }.Concat(reads).Take(20).ToArray();
        if (i % 10 == 0) { // ~1 second
            var bSec = reads.Sum() / reads.Count();
            var kbs = (bSec * 8) / 1024;
            Console.WriteLine("Kb/s ~ " + kbs);
        }
    }
}
Please try this to check internet connection speed.
public double CheckInternetSpeed()
{
    // Create an object of WebClient
    System.Net.WebClient wc = new System.Net.WebClient();
    // DateTime variable to store the download start time
    DateTime dt1 = DateTime.UtcNow;
    // The downloaded bytes are stored in 'data'
    byte[] data = wc.DownloadData("http://google.com");
    // DateTime variable to store the download end time
    DateTime dt2 = DateTime.UtcNow;
    // To calculate speed in KB/s, divide the byte count by 1024 and then by the
    // elapsed seconds (end time minus start time)
    return Math.Round((data.Length / 1024.0) / (dt2 - dt1).TotalSeconds, 2);
}
It gives you the speed in KB/sec; share the result.
Looking at another answer to a question you posted, NetworkInterface.GetIPv4Statistics().BytesReceived - What does it return?, I believe the issue might be that you are using too-small intervals. I believe the counter only counts whole packets, and if you are, for example, downloading a file, the packets might be as big as 64 KB (65,535 bytes, the IPv4 maximum packet size), which is quite a lot if your maximum download throughput is 60 KB/s and you are measuring 200 ms intervals.
Given that your speed is 60 KB/s, I would set the running time to 10 seconds to get at least 9 packets per average. If you are writing this for all kinds of connections, I recommend making the solution dynamic: if the speed is high you can easily decrease the averaging interval, but for slow connections you must increase it.
Either do as @pst recommends by keeping a moving average, or simply increase the sleep, up to maybe 1 second.
And be sure to divide by the actual time taken rather than the time passed to Thread.Sleep().
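For example (reusing the nic variable from the LINQPad snippet above):
// Sketch: Stopwatch measures the interval that actually elapsed;
// Thread.Sleep(1000) is allowed to overshoot.
long oldBytes = nic.GetIPv4Statistics().BytesReceived;
var sw = System.Diagnostics.Stopwatch.StartNew();
System.Threading.Thread.Sleep(1000);
long newBytes = nic.GetIPv4Statistics().BytesReceived;
double bytesPerSecond = (newBytes - oldBytes) / sw.Elapsed.TotalSeconds;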
Additional thoughts on intervals
My process would be something like this: measure for 5 seconds and gather the data, i.e. bytes received as well as the number of packets.
var timePerPacket = 5000 / nrOfPackets; // Time per packet in ms
// d is a lower bound on the interval, e.g. a few tens of ms
var intervalTime = Math.Max(d, Math.Pow(2, Math.Log10(timePerPacket)) * 100);
This will cause the interval to increase slowly from about several tens of ms up to around the time per packet. That way we always get at least (on average) one packet per interval, and we will not go nuts if we are on a 10 Gbps connection. The important part is that the measuring time should not be linear in the amount of data received.
The SSL handshake takes some time, so I modified @sandeep's answer as a result. I first create the request and then measure the time taken to download the content. I believe this is a little more accurate, but still not 100%; it is an approximation.
public async Task<int> GetInternetSpeedAsync(CancellationToken ct = default)
{
    const double kb = 1024;
    // do not use compression
    using var client = new HttpClient();
    int numberOfBytesRead = 0;
    var buffer = new byte[10240].AsMemory();
    // create the request
    var stream = await client.GetStreamAsync("https://www.google.com", ct);
    // start the timer
    DateTime dt1 = DateTime.UtcNow;
    // download stuff
    while (true)
    {
        var i = await stream.ReadAsync(buffer, ct);
        if (i < 1)
            break;
        numberOfBytesRead += i;
    }
    // stop the timer
    DateTime dt2 = DateTime.UtcNow;
    double kilobytes = numberOfBytesRead / kb;
    double time = (dt2 - dt1).TotalSeconds;
    // speed in KB per second
    return (int)(kilobytes / time);
}
My friend asked me to write a program to limit his internet usage to 40 MB per day. If the 40 MB daily quota is reached, no other program on the system must be able to access the internet.
You need to monitor the network activity. One can use the IPGlobalProperties class for that; a sketch using the closely related NetworkInterface statistics follows below.
Keep in mind that the statistics are reset each time the connection is lost, so you'll have to store them somewhere.
You also need to disable the internet connection; see Code to enable/disable internet connectivity.
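A minimal sketch of the monitoring half, using per-interface statistics from NetworkInterface; persisting the baseline across counter resets is left out:
using System.Net.NetworkInformation;

// Sketch: total bytes moved through all interfaces since the counters last
// reset. Store a baseline elsewhere, since the counters reset when the
// connection drops.
static long TotalBytesNow()
{
    long total = 0;
    foreach (var nic in NetworkInterface.GetAllNetworkInterfaces())
    {
        var stats = nic.GetIPv4Statistics();
        total += stats.BytesReceived + stats.BytesSent;
    }
    return total;
}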
Tell him you wrote a program, but instead hire a guy to watch his internet usage, and pull out the plug when the limit is hit.
EDIT: Apparently my sense of humor is off.
Anyway, I think this would be quite an undertaking; I could not find any code for it. But I did find this: Net Limiter.
Yes.
More of a proof of concept than a working solution.
using System;
using System.Linq;
using System.Threading;
using System.Net.NetworkInformation;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            const double ShutdownValue = 40960000D; // 40 MB in bytes
            const string NetEnable = "interface set interface \u0022{0}\u0022 ENABLED";
            const string NetDisable = "interface set interface \u0022{0}\u0022 DISABLED";
            double Incoming = 0;
            double Outgoing = 0;
            double TotalInterface;
            string SelectedInterface = "Local Area Connection";
            NetworkInterface netInt = NetworkInterface.GetAllNetworkInterfaces().Single(n => n.Name.Equals(SelectedInterface));
            for (; ; )
            {
                IPv4InterfaceStatistics ip4Stat = netInt.GetIPv4Statistics();
                Incoming = ip4Stat.BytesReceived;
                Outgoing = ip4Stat.BytesSent;
                TotalInterface = Incoming + Outgoing;
                string Shutdown = ((TotalInterface > ShutdownValue) ? "YES" : "NO");
                if (Shutdown == "YES")
                {
                    // Disable the interface once the quota is exceeded
                    System.Diagnostics.Process.Start("netsh", string.Format(NetDisable, SelectedInterface));
                }
                string output = string.Format("Shutdown: {0} | {1} bytes total", Shutdown, TotalInterface.ToString());
                Console.WriteLine(output);
                Thread.Sleep(3000);
            }
        }
    }
}
This should be doable in a router, or with cFosSpeed, which also provides quotas; however, I do not know of any freeware/open-source applications that already do this.
For writing it yourself, you'd have to somehow keep count of the amount of data sent so far, and if multiple computers are on the same network it's going to be even harder to keep track.
As a feature of the application I'm developing, I need to show the total estimated time left to upload/download a file to/from the server.
How would it be possible to get the download/upload speed between the client machine and the server?
I think that if I can get the speed, I can calculate the time. For example, for a 200 MB file: 200 × 1024 KB = 204,800 KB, and 204,800 KB ÷ speed in KB/s = "x" seconds.
The upload/download speed is not a static property of a server; it depends on your specific connection and may also vary over time. Most applications I've seen do an estimation over a short time window. They start downloading/uploading and measure the amount of data over, let's say, 10 seconds. This is then taken as the current transfer speed and used to calculate the remaining time (e.g. 2500 KB / 10 s -> 250 KB/s). The time window is moved along and recalculated continuously to keep the calculation accurate to the current speed.
Although this is a quite basic approach, it will serve well in most cases.
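A minimal sketch of that moving window; the 10-second width and all names are assumptions, not a fixed API:
// Sketch: sample (time, bytes transferred) pairs, keep ~10 s of history,
// and derive the current speed and remaining time from the window.
var window = new Queue<(DateTime at, long bytes)>();

TimeSpan EstimateRemaining(long transferred, long total)
{
    window.Enqueue((DateTime.UtcNow, transferred));
    while (window.Count > 1 &&
           (DateTime.UtcNow - window.Peek().at).TotalSeconds > 10)
        window.Dequeue();

    var oldest = window.Peek();
    double seconds = (DateTime.UtcNow - oldest.at).TotalSeconds;
    if (seconds <= 0 || transferred <= oldest.bytes)
        return TimeSpan.MaxValue; // no progress measured yet
    double bytesPerSecond = (transferred - oldest.bytes) / seconds;
    return TimeSpan.FromSeconds((total - transferred) / bytesPerSecond);
}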
Try something like this:
int chunkSize = 1024;
long sent = 0;
long total = reader.Length;
DateTime started = DateTime.Now;
while (reader.Position < reader.Length)
{
    byte[] buffer = new byte[
        (int)Math.Min(chunkSize, reader.Length - reader.Position)];
    int readBytes = reader.Read(buffer, 0, buffer.Length);
    // send data packet
    sent += readBytes;
    TimeSpan elapsedTime = DateTime.Now - started;
    TimeSpan estimatedTime =
        TimeSpan.FromSeconds(
            (total - sent) /
            ((double)sent / elapsedTime.TotalSeconds));
}
This is only tangentially related, but I assume that if you're trying to calculate the total time remaining, you're probably also going to show it as some kind of progress bar. If so, you should read this paper by Chris Harrison about perceptual differences. Here's the conclusion straight from his paper (emphasis mine).
Different progress bar behaviors appear to have a significant effect on user perception of process duration. By minimizing negative behaviors and incorporating positive behaviors, one can effectively make progress bars and their associated processes appear faster. Additionally, if elements of a multistage operation can be rearranged, it may be possible to reorder the stages in a more pleasing and seemingly faster sequence.
http://www.chrisharrison.net/projects/progressbars/ProgBarHarrison.pdf
I don't know why you need this, but I would go the simplest way possible and ask the user what connection type they have. Then take the file size, divide it by the speed, and account for the factor of 8 between bits and bytes, to get the number of seconds.
The point is that you won't need processing power to calculate speeds. Microsoft, on their website, uses a function that calculates a speed for the most common connection types based on file size, which you can get while uploading the file or have the user enter manually.
Then again, maybe you have other needs and must calculate the upload on the fly...
The following code computes the remaining time in minute.
long totalRecieved = 0;
DateTime lastProgressChange = DateTime.Now;
Stack<int> timeSatck = new Stack<int>(5);
Stack<long> byteSatck = new Stack<long>(5);
using (WebClient c = new WebClient())
{
c.DownloadProgressChanged += delegate(object s, DownloadProgressChangedEventArgs args)
{
long bytes;
if (totalRecieved == 0)
{
totalRecieved = args.BytesReceived;
bytes = args.BytesReceived;
}
else
{
bytes = args.BytesReceived - totalRecieved;
}
timeSatck.Push(DateTime.Now.Subtract(lastProgressChange).Seconds);
byteSatck.Push(bytes);
double r = timeSatck.Average() * ((args.TotalBytesToReceive - args.BytesReceived) / byteSatck.Average());
this.textBox1.Text = (r / 60).ToString();
totalRecieved = args.BytesReceived;
lastProgressChange = DateTime.Now;
};
c.DownloadFileAsync(new Uri("http://www.visualsvn.com/files/VisualSVN-1.7.6.msi"), #"C:\SVN.msi");
}
I think I've got the estimated time to download:
double timeToDownload = ((((totalFileSize / 1024) - (fileStream.Length / 1024)) / Math.Round(currentSpeed, 2)) / 60);
this.Invoke(new UpdateProgressCallback(this.UpdateProgress), new object[] {
    Math.Round(currentSpeed, 2), Math.Round(timeToDownload, 2) });
where
private void UpdateProgress(double currentSpeed, double timeToDownload)
{
    lblTimeUpdate.Text = string.Empty;
    lblTimeUpdate.Text = " At a speed of " + currentSpeed + " KB/s it takes " + timeToDownload + " minutes to complete the download";
}
and the current speed is calculated like this:
TimeSpan dElapsed = DateTime.Now - dStart;
if (dElapsed.TotalSeconds > 0)
{
    currentSpeed = (fileStream.Length / 1024) / dElapsed.TotalSeconds;
}