Saving images via producer-consumer pattern using BlockingCollection - c#

I'm facing a producer-consumer problem: I have a camera that sends images very quickly, and I have to save them to disk. The images arrive as ushort[]. The camera always overwrites the same ushort[] variable, so between one acquisition and the next I have to copy the array and, when possible, save it in order to free that image's memory. The important thing is not to lose any images from the camera, even at the cost of using more memory: it is entirely acceptable for the consumer (saving the images and freeing their memory) to be slower than the producer; what is not acceptable is failing to copy the image into memory in time.
I've written sample code that should simulate the problem:
immage_ushort: the image generated by the camera, which must be copied into the BlockingCollection before the next image arrives.
producerTask: runs a loop that simulates the arrival of an image every time_wait milliseconds; within that time the producer must copy the image into the BlockingCollection.
consumerTask: works through the BlockingCollection, saving the images to disk and thereby freeing the memory; it doesn't matter if the consumer works more slowly than the producer.
I set time_wait to 1 millisecond to test the performance (in reality the camera will not reach that speed). The timing is respected (with a maximum delay of 1-2 ms, which is acceptable) if there is no saving to disk in the code (commenting out image1.ImWrite(file_name)). With saving to disk enabled, however, I get delays that sometimes exceed 100 ms.
This is my code:
private void Execute_test_producer_consumer1()
{
    // Images are stored as ushort arrays, so we create a BlockingCollection<ushort[]>
    // to hold images as they arrive from the camera
    BlockingCollection<ushort[]> imglist = new BlockingCollection<ushort[]>();
    string lod_date = "";

    /* producerTask simulates a camera that returns an image every time_wait
       milliseconds. The image is copied and inserted in the BlockingCollection
       to be saved to disk in the consumerTask */
    Task producerTask = Task.Factory.StartNew(() =>
    {
        // Number of images to process
        int num_img = 3000;
        // Time between one image and the next
        long time_wait = 1;
        // Time log variables
        var watch1 = System.Diagnostics.Stopwatch.StartNew();
        long watch_log = 0;
        long delta_time = 0;
        long timer1 = 0;
        List<long> timer_delta_log = new List<long>();
        List<long> timer_delta_log_time = new List<long>();
        int ii = 0;
        Console.WriteLine("-----START producer");
        watch1.Restart();
        // Here I expect that every time_wait (or a little more) an image is inserted
        // into imglist
        while (ii < num_img)
        {
            timer1 = watch1.ElapsedMilliseconds;
            delta_time = timer1 - watch_log;
            if (delta_time >= time_wait || ii == 0)
            {
                // Add image
                imglist.Add((ushort[])immage_ushort.Clone());
                // Insert data for the time log
                timer_delta_log.Add(delta_time);
                timer_delta_log_time.Add(timer1);
                watch_log = timer1;
                ii++;
            }
        }
        imglist.CompleteAdding();
        watch1.Stop();
        lod_date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
        Console.WriteLine("-----END producer: " + lod_date);
        // Only print images that were not inserted on schedule
        int gg = 0;
        foreach (long timer_delta_log_t in timer_delta_log)
        {
            if (timer_delta_log_t > time_wait)
            {
                Console.WriteLine("-- Image " + (gg + 1) + ", delta: "
                    + timer_delta_log_t + ", time: " + timer_delta_log_time[gg]);
            }
            gg++;
        }
    });

    Task consumerTask = Task.Factory.StartNew(() =>
    {
        string file_name = "";
        int yy = 0;
        // Save images to disk and remove them from the collection
        foreach (ushort[] imm in imglist.GetConsumingEnumerable())
        {
            file_name = @"output/" + yy + ".png";
            Mat image1 = new Mat(row, col, MatType.CV_16UC1, imm);
            // Commenting out this line makes the producer's timing hold
            image1.ImWrite(file_name);
            image1.Dispose();
            yy++;
        }
        imglist.Dispose();
        lod_date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
        Console.WriteLine("-----END consumer: " + lod_date);
    });
}
I also thought that the BlockingCollection might remain locked for the entire duration of the foreach, and therefore while the image is being saved to disk, so I tried replacing the foreach with this:
while (!imglist.IsCompleted)
{
    ushort[] elem = imglist.Take();
    file_name = @"output/" + yy + ".png";
    Mat image1 = new Mat(row, col, MatType.CV_16UC1, elem);
    // Commenting out this line makes the producer's timing hold
    image1.ImWrite(file_name);
    image1.Dispose();
    yy++;
}
But the result doesn't change.
What am I doing wrong?

You might want to start your tasks with the LongRunning option:
LongRunning
Specifies that a task will be a long-running, coarse-grained operation involving fewer, larger components than fine-grained systems. It provides a hint to the TaskScheduler that oversubscription may be warranted. Oversubscription lets you create more threads than the available number of hardware threads. It also provides a hint to the task scheduler that an additional thread might be required for the task so that it does not block the forward progress of other threads or work items on the local thread-pool queue.
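For example, a minimal sketch applying that option to the two tasks from the question (the loop bodies are elided):
// Hint to the scheduler that these are long-running, dedicated loops,
// so they are not scheduled as ordinary thread-pool work items.
Task producerTask = Task.Factory.StartNew(() =>
{
    // ... producer loop from the question ...
}, TaskCreationOptions.LongRunning);

Task consumerTask = Task.Factory.StartNew(() =>
{
    // ... consumer loop from the question ...
}, TaskCreationOptions.LongRunning);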

Related

memory management in C# for high speed applications

I am building an application that processes an incoming image from a file or buffer and outputs the results as an array of doubles of fixed size. The application needs to be relatively quick. I ran into a problem with cycle time: I started recording the cycle time while processing one image, and it started at a minimum of 65 ms and gradually increased all the way to 500 ms, which is far too slow. Sure enough, I checked the memory usage and it was steadily increasing as well.
I'm running GC after every cycle and dropping unused variables as often as possible. I don't create new objects within the processing loop. Image processing is done on its own thread so that all the resources get released. It seems the majority of the cycle-time increase happens when I'm pulling the image from the file. What could I be missing?
Here's the rough code; the full thing is pretty large. Main function:
private void button4_Click(object sender, EventArgs e)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    cogDisplay1.Image = null;
    ImageFromFile.Operator.Open(Properties.Settings.Default.ImageFile, CogImageFileModeConstants.Read);
    ImageFromFile.Run();
    cogDisplay1.Image = ImageFromFile.OutputImage;
    cogDisplay1.AutoFit = true;
    Thread t = new Thread(Vision);
    t.Start();
    textBox3.Clear();
    sw.Stop();
    textBox3.AppendText(sw.ElapsedMilliseconds.ToString() + Environment.NewLine + "TS: " + t.ThreadState.ToString());
    textBox3.AppendText("GC" + GC.GetTotalMemory(true).ToString());
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, false);
}
Image Processing
public void Vision()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    try
    {
        AlignmentParams.ApproximateNumberToFind = 1;
        AlignmentParams.SearchRegionMode = 0;
        AlignmentResult = AlignmentPat.Execute(cogDisplay1.Image as CogImage8Grey, null, AlignmentParams);
        Fixture.InputImage = cogDisplay1.Image;
        Fixture.RunParams.UnfixturedFromFixturedTransform = AlignmentResult[0].GetPose();
        Fixture.Run();
        AlignmentResult = null;
        #region FindLineTools
        #endregion
        #region FindEllipse
        #endregion
        sw.Stop();
        SetText("V" + sw.ElapsedMilliseconds.ToString() + Environment.NewLine);
    }
    catch (Exception err)
    {
        SetText(Environment.NewLine + "***********Error***********" + Environment.NewLine);
        SetText(err.ToString() + Environment.NewLine);
        SetText("***************************" + Environment.NewLine);
    }
}
First, I would recommend posting cleaned-up code for better readability (remove all commented-out stuff). Second, focus on the essential part, namely: your problem is memory overuse/leakage due to heavy image processing (correct me if I'm wrong). Therefore, in your Vision thread, de-reference the image objects and set them to null immediately after processing completes (as mentioned above, GC is not a big help in your case). The concept is briefly demonstrated by the following code snippet:
public void Vision()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    try
    {
        AlignmentParams.ApproximateNumberToFind = 1;
        AlignmentParams.SearchRegionMode = 0;
        AlignmentResult = AlignmentPat.Execute(cogDisplay1.Image as CogImage8Grey, null, AlignmentParams);
        Fixture.InputImage = cogDisplay1.Image;
        Fixture.RunParams.UnfixturedFromFixturedTransform = AlignmentResult[0].GetPose();
        Fixture.Run();
        AlignmentResult = null;
        // your coding stuff
        sw.Stop();
        SetText("V" + sw.ElapsedMilliseconds.ToString() + Environment.NewLine);
    }
    catch (Exception err)
    {
        SetText(Environment.NewLine + "***********Error***********" + Environment.NewLine);
        SetText(err.ToString() + Environment.NewLine);
        SetText("***************************" + Environment.NewLine);
    }
    finally
    {
        Fixture.InputImage = null;
    }
}
Don't call GC.Collect. Let the VM decide when it is out of memory and must do a collection. You are typically not in a good position to decide the best time for a GC.Collect unless you're running some other heartbeat or idle-watching thread.
Secondly, ensure that whatever resources you receive from the method calls are being disposed. Setting variables to NULL does NOT do this; you should be explicitly calling Dispose or wrapping the resource in a using block:
using (SomeResource resource = Class.GiveMeResource("image.png"))
{
    int width = resource.Width;
    int height = resource.Height;
    Console.Write("that image has {0} pixels", width * height);
} // implicitly calls IDisposable.Dispose() here
You also need to do some memory and call profiling to detect where, if any, leaks exist.

Multiple SerialPort returns in real-time

I have made a data-logging application in C#. It connects to 4 USB sensors via the SerialPort class. The DataReceived event threshold is set to trigger on every byte. When data is received, the program checks whether that byte is a line end. If it isn't, the input is added to a buffer. If it is, the program adds a timestamp and writes the data and timestamp to a file (each input source gets a dedicated file).
Issues arise when using more than one COM port input. What I see in the output files is:
Any of the 4 Files:
...
timestamp1 value1
timestamp2 value2
timestamp3 value3
value4
value5
value6
timestamp7 value7
...
So, what it looks like is the computer isn't fast enough to get to all 4 interrupts before the next values arrive. I have good reason to believe that this is the culprit because sometimes I'll see output like this:
...
timestamp value
timestamp value
value
val
timestamp ue
timestamp value
...
It might be due to the fact that I changed the processor affinity to run only on Core 2. I did this because the timestamps I'm using are counted with processor cycles, so I can't have multiple time references depending on which core is running. I've put some of the code snippets below; any suggestions that might help with the dropped timestamps would be greatly appreciated!
public mainLoggerIO()
{
    // bind to one CPU
    Process proc = Process.GetCurrentProcess();
    long AffinityMask = (long)proc.ProcessorAffinity;
    AffinityMask &= 0x0002; // use only the 2nd processor
    proc.ProcessorAffinity = (IntPtr)AffinityMask;

    // prevent any other threads from using core 2
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    Thread.CurrentThread.Priority = ThreadPriority.Highest;

    long frequency = Stopwatch.Frequency;
    Console.WriteLine(" Timer frequency in ticks per second = {0}", frequency);
    long nanosecPerTick = (1000L * 1000L * 1000L) / frequency;
    Console.WriteLine(" Timer is accurate within {0} nanoseconds", nanosecPerTick);

    if (Stopwatch.IsHighResolution)
        MessageBox.Show("High Resolution Timer Available");
    else
        MessageBox.Show("No High Resolution Timer Available on this Machine");

    InitializeComponent();
}
And so on. Each data return interrupt looks like this:
private void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    //serialPort1.DataReceived = serialPort1_DataReceived;
    rawPort1Data = "";
    rawPort1Data = serialPort1.ReadExisting();
    this.Invoke((MethodInvoker)delegate { returnTextPort1(); });
}
The method returnTextPort#() is:
private void returnTextPort1()
{
    if (string.Compare(saveFileName, "") == 0) // no savefile specified
        return;
    bufferedString += rawPort1Data;
    if (bufferedString.Contains('\r'))
    {
        long timeStamp = DateTime.Now.Ticks;
        textBox2.AppendText(nicknames[0] + " " + timeStamp / 10000 + ", " + rawPort1Data);
        // write to file
        using (System.IO.StreamWriter file = new System.IO.StreamWriter(saveFileName, true))
        {
            file.WriteLine(nicknames[0] + " " + timeStamp / 10000 + ", " + rawPort1Data); // in ms
        }
        bufferedString = "";
    }
}
A cleaner approach would be to use a ConcurrentQueue<T> between the data-received event handler and a separate Thread that deals with the resulting data. That way the event handler can return immediately, and instead of modifying rawPort1Data in a totally non-thread-safe manner you move to a thread-safe solution.
Create a Thread that reads from the concurrent queue, writes to the file and Invokes the UI changes. Note that writing to the file should NOT be on the UI thread.
Your ConcurrentQueue<T> can capture in the class T that you will implement: the port number, the data received and the timestamp at which it was received.
Note also that DateTime.Now is rarely the right answer; for most locations it jumps by an hour twice a year when daylight saving time starts or ends, so use DateTime.UtcNow instead. Note, however, that neither has the accuracy you seem to be trying to obtain with your Stopwatch code.
You should not need to manipulate processes or thread priorities to do this: the serial port has a buffer, you'll not miss data provided you handle it efficiently.
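A rough sketch of that structure, reusing saveFileName, nicknames, and textBox2 from the question's code; the Reading class, the running flag, and ConsumerLoop are illustrative names, not an existing API (requires System.Collections.Concurrent, System.IO, System.IO.Ports, System.Threading, System.Windows.Forms):
public sealed class Reading
{
    public int Port;            // which COM port the data came from
    public string Data;         // the text received so far
    public long TimestampTicks; // when it was received (UTC ticks)
}

private readonly ConcurrentQueue<Reading> queue = new ConcurrentQueue<Reading>();
private volatile bool running = true;

// In each DataReceived handler: capture the data and timestamp, enqueue, return immediately.
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    queue.Enqueue(new Reading
    {
        Port = 1,
        Data = serialPort1.ReadExisting(),
        TimestampTicks = DateTime.UtcNow.Ticks
    });
}

// A dedicated thread drains the queue, writes to the files, and marshals UI updates.
private void ConsumerLoop()
{
    while (running)
    {
        Reading r;
        while (queue.TryDequeue(out r))
        {
            string line = nicknames[r.Port - 1] + " " + r.TimestampTicks / 10000 + ", " + r.Data;
            File.AppendAllText(saveFileName, line);
            this.Invoke((MethodInvoker)delegate { textBox2.AppendText(line); });
        }
        Thread.Sleep(1); // avoid spinning while the queue is empty
    }
}
Start ConsumerLoop on its own thread (for example new Thread(ConsumerLoop) { IsBackground = true }.Start()) so the file I/O never runs on the UI thread.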

BookSleeve - Poor Performance When Setting Hashes

I'm in the process of updating my web service to use the latest BookSleeve library, 1.3.38. Previously I was using 1.1.0.7
While doing some benchmarking, I noticed that setting hashes in Redis using the new version of BookSleeve is many times slower than the old version. Please consider the following C# benchmarking code:
public void TestRedisHashes()
{
    int numItems = 1000;  // number of hash items to set in redis
    int numFields = 30;   // number of fields in each redis hash

    RedisConnection redis = new RedisConnection("10.0.0.01", 6379);
    redis.Open();
    // wait until the connection is open
    while (!redis.State.Equals(BookSleeve.RedisConnectionBase.ConnectionState.Open)) { }

    Stopwatch timer = new Stopwatch();
    timer.Start();
    for (int i = 0; i < numItems; i++)
    {
        string key = "test_" + i.ToString();
        for (int j = 0; j < numFields; j++)
        {
            // set a value for each field in the hash
            redis.Hashes.Set(0, key, "field_" + j.ToString(), "testdata");
        }
        redis.Keys.Expire(0, key, 30); // 30 second ttl
    }
    timer.Stop();
    Console.WriteLine("Elapsed time for hash writes: {0} ms", timer.ElapsedMilliseconds);
}
BookSleeve 1.1.0.7 takes about 20 ms to set 1000 hashes against Redis 2.6, while 1.3.38 takes around 400 ms. That's 20x slower! Every other part of BookSleeve 1.3.38 that I've tested is either as fast as or faster than the old version. I've also tried the same test against Redis 2.4, as well as wrapping everything in a transaction. In both cases I got similar performance.
Has anyone else noticed anything like this? I must be doing something wrong... am I setting hashes correctly using the new version of BookSleeve? Is this the right way to do fire-and-forget commands? I've looked through the unit tests as an example of how to use hashes, but haven't been able to find what I'm doing differently. Is it possible that the newest version is just slower in this case?
To actually test the overall speed you would need to add code that waits for the last of the messages to be processed, for example:
Task last = null;
for (int i = 0; i < numItems; i++)
{
    string key = "test_" + i.ToString();
    for (int j = 0; j < numFields; j++)
    {
        // set a value for each field in the hash
        redis.Hashes.Set(0, key, "field_" + j.ToString(), "testdata");
    }
    last = redis.Keys.Expire(0, key, 30); // 30 second ttl
}
redis.Wait(last);
Otherwise all you are timing is how fast the call to Set/Expire is. And in this case, that could matter. You see, in 1.1.0.7, all messages are immediately placed onto a queue, and a separate dedicated writer thread then picks up that message and writes it to the stream. In 1.3.38, the dedicated writer thread is gone (for various reasons). So if the socket is available, the calling thread writes to the underlying stream (if the socket is in use, there is a mechanism to handle that). More importantly, it is possible that in your original test against 1.1.0.7, no useful work has actually happened yet - there is no guarantee that work is anywhere near the socket, etc.
In most scenarios, this does not cause any overhead (and is less overhead when amortized), however: it is possible that in your case you are being impacted by effectively buffer under-run - in 1.1.0.7 you would have filled the buffer really quickly, and the worker thread would have probably always found more waiting messages - so it would not flush the stream until the end; in 1.3.38, it is probably flushing between messages. So: let's fix that:
Task last = null;
redis.SuspendFlush();
try
{
    for (int i = 0; i < numItems; i++)
    {
        string key = "test_" + i.ToString();
        for (int j = 0; j < numFields; j++)
        {
            // set a value for each field in the hash
            redis.Hashes.Set(0, key, "field_" + j.ToString(), "testdata");
        }
        last = redis.Keys.Expire(0, key, 30); // 30 second ttl
    }
}
finally
{
    redis.ResumeFlush();
}
redis.Wait(last);
The SuspendFlush() / ResumeFlush() pair is ideal when calling a large batch of operations on a single thread to avoid any additional flushing. To copy the intellisense notes:
//
// Summary:
// Temporarily suspends eager-flushing (flushing if the write-queue becomes
// empty briefly). Buffer-based flushing will still occur when the data is full.
// This is useful if you are performing a large number of operations in close
// duration, and want to avoid packet fragmentation. Note that you MUST call
// ResumeFlush at the end of the operation - preferably using Try/Finally so
// that flushing is resumed even upon error. This method is thread-safe; any
// number of callers can suspend/resume flushing concurrently - eager flushing
// will resume fully when all callers have called ResumeFlush.
//
// Remarks:
// Note that some operations (transaction conditions, etc) require flushing
// - this will still occur even if the buffer is only part full.
Note that in most high throughput scenarios there are multiple operations coming in from multiple threads: in those scenarios, any work from concurrent threads will automatically be queued in a way that minimises the number of threads.

How to get accurate download/upload speed in C#.NET?

I want to get accurate download/upload speed through a Network Interface using C# .NET
I know that it can be calculated using GetIPv4Statistics().BytesReceived and putting the thread to sleep for some time, but the result doesn't match what I see in my browser.
Here is a quick snippet of code from LINQPad. It uses a very simple moving average and shows "accurate speeds" when tested against Speedtest.net. Things to keep in mind: Kbps is in bits, and HTTP data is often compressed, so the "downloaded bytes" will be significantly smaller for highly compressible data. Also, don't forget that any old process might be doing any old thing on the internet these days (without stricter firewall settings).
I like flindenberg's answer (don't change the accepted answer), and I noticed that some polling periods would return "0", which aligns with his/her conclusions.
Use at your own peril.
void Main()
{
    var nics = System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces();
    // Select desired NIC
    var nic = nics.Single(n => n.Name == "Local Area Connection");
    var reads = Enumerable.Empty<double>();
    var sw = new Stopwatch();
    var lastBr = nic.GetIPv4Statistics().BytesReceived;
    for (var i = 0; i < 1000; i++)
    {
        sw.Restart();
        Thread.Sleep(100);
        var elapsed = sw.Elapsed.TotalSeconds;
        var br = nic.GetIPv4Statistics().BytesReceived;
        var local = (br - lastBr) / elapsed;
        lastBr = br;
        // Keep last 20, ~2 seconds
        reads = new[] { local }.Concat(reads).Take(20);
        if (i % 10 == 0) // ~1 second
        {
            var bSec = reads.Sum() / reads.Count();
            var kbs = (bSec * 8) / 1024;
            Console.WriteLine("Kb/s ~ " + kbs);
        }
    }
}
Please try this to check the internet connection speed:
public double CheckInternetSpeed()
{
    // Create an object of WebClient
    System.Net.WebClient wc = new System.Net.WebClient();
    // DateTime variable to store the download start time
    DateTime dt1 = DateTime.UtcNow;
    // The downloaded bytes are stored in 'data'
    byte[] data = wc.DownloadData("http://google.com");
    // DateTime variable to store the download end time
    DateTime dt2 = DateTime.UtcNow;
    // To calculate the speed in KB/s, divide the number of bytes by 1024,
    // then divide by the elapsed time (end time minus start time)
    return Math.Round((data.Length / 1024) / (dt2 - dt1).TotalSeconds, 2);
}
It gives you the speed in KB/sec; please share the result.
By looking at another answer to a question you posted, NetworkInterface.GetIPv4Statistics().BytesReceived - What does it return?, I believe the issue might be that you are using too-small intervals. I believe the counter only counts whole packets, and if you are, for example, downloading a file, the packets might be as big as 64 KB (65,535 bytes, the IPv4 maximum packet size), which is quite a lot if your maximum download throughput is 60 KB/s and you are measuring 200 ms intervals.
Given that your speed is 60 KB/s, I would set the measuring time to 10 seconds to get at least 9 packets per average. If you are writing it for all kinds of connections, I would recommend making the solution dynamic: if the speed is high you can easily decrease the averaging interval, but for slow connections you must increase it.
Either do as @pst recommends by having a moving average, or simply increase the sleep up to maybe 1 second.
And be sure to divide by the actual time taken rather than the time passed to Thread.Sleep().
Additional thought on intervals
My process would be something like this: measure for 5 seconds and gather data, i.e. bytes received as well as the number of packets.
var timePerPacket = 5000 / nrOfPackets; // time per packet in ms over the 5-second window
var intervalTime = Math.Max(d, Math.Pow(2, Math.Log10(timePerPacket)) * 100); // d is the minimum allowed interval
This will cause the interval to increase slowly from about several tens of ms up to the time per packet. That way we always get at least (on average) one packet per interval, and we will not go nuts if we are on a 10 Gbps connection. The important part is that the measuring time should not be linear in the amount of data received.
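A small sketch of how that 5-second measurement pass might look, using the interface statistics to count packets; the chosen NIC, the 50 ms floor for 'd', and the variable names are assumptions, not part of the answer above:
// C# top-level statements
using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading;

// Pick the first interface that is up (an assumption; choose your real NIC here).
var nic = NetworkInterface.GetAllNetworkInterfaces()
    .First(n => n.OperationalStatus == OperationalStatus.Up);

long packetsBefore = nic.GetIPv4Statistics().UnicastPacketsReceived;
long bytesBefore = nic.GetIPv4Statistics().BytesReceived;
Thread.Sleep(5000); // measure for 5 seconds
long nrOfPackets = Math.Max(1, nic.GetIPv4Statistics().UnicastPacketsReceived - packetsBefore);
long bytesReceived = nic.GetIPv4Statistics().BytesReceived - bytesBefore;

double timePerPacket = 5000.0 / nrOfPackets; // average ms per packet
double d = 50;                               // assumed minimum polling interval in ms
double intervalTime = Math.Max(d, Math.Pow(2, Math.Log10(timePerPacket)) * 100);

Console.WriteLine($"~{bytesReceived / 5.0:F0} B/s; polling every {intervalTime:F0} ms");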
The SSL handshake takes some time, so I modified @sandeep's answer: I first create the request and then measure the time to download the content. I believe this is a little more accurate, but it is still not 100%; it is an approximation.
public async Task<int> GetInternetSpeedAsync(CancellationToken ct = default)
{
    const double kb = 1024;
    // do not use compression
    using var client = new HttpClient();
    int numberOfBytesRead = 0;
    var buffer = new byte[10240].AsMemory();
    // create request
    var stream = await client.GetStreamAsync("https://www.google.com", ct);
    // start timer
    DateTime dt1 = DateTime.UtcNow;
    // download stuff
    while (true)
    {
        var i = await stream.ReadAsync(buffer, ct);
        if (i < 1)
            break;
        numberOfBytesRead += i;
    }
    // end timer
    DateTime dt2 = DateTime.UtcNow;
    double kilobytes = numberOfBytesRead / kb;
    double time = (dt2 - dt1).TotalSeconds;
    // speed in KB per second
    return (int)(kilobytes / time);
}

C# - calculate download/upload time using network bandwidth

As a feature in the application which I'm developing, I need to show the total estimated time left to upload/download a file to/from the server.
How would it be possible to get the download/upload speed to the server from the client machine?
I think that if I'm able to get the speed, then I can calculate the time. For example, for a 200 MB file: 200 × 1024 KB = 204,800 KB, and 204,800 KB divided by the speed in KB/s gives "x" seconds.
The upload/download speed is not a static property of a server; it depends on your specific connection and may also vary over time. Most applications I've seen do an estimation over a short time window: they start downloading/uploading and measure the amount of data over, let's say, 10 seconds. This is then taken as the current transfer speed and used to calculate the remaining time (e.g. 2500 KB / 10 s -> 250 KB/s). The time window is moved along and recalculated continuously to keep the estimate close to the current speed.
Although this is a quite basic approach, it will serve well in most cases.
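A minimal sketch of that moving-window estimate, assuming you can poll the cumulative bytes transferred from a progress event; the class and member names are made up for illustration:
using System;
using System.Collections.Generic;

class TransferEstimator
{
    // samples of (time, cumulative bytes transferred) covering roughly the last 10 seconds
    private readonly Queue<(DateTime time, long bytes)> window = new Queue<(DateTime, long)>();

    public TimeSpan EstimateRemaining(long bytesTransferred, long totalBytes)
    {
        DateTime now = DateTime.UtcNow;
        window.Enqueue((now, bytesTransferred));

        // drop samples older than the 10-second window
        while (window.Count > 1 && (now - window.Peek().time).TotalSeconds > 10)
            window.Dequeue();

        var oldest = window.Peek();
        double seconds = (now - oldest.time).TotalSeconds;
        if (seconds <= 0)
            return TimeSpan.Zero;

        double bytesPerSecond = (bytesTransferred - oldest.bytes) / seconds;
        if (bytesPerSecond <= 0)
            return TimeSpan.Zero;

        return TimeSpan.FromSeconds((totalBytes - bytesTransferred) / bytesPerSecond);
    }
}
Call EstimateRemaining from each progress callback and display the returned TimeSpan; as the window slides, the estimate tracks the current speed rather than the whole-transfer average.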
Try something like this:
int chunkSize = 1024;
int sent = 0
int total = reader.Length;
DateTime started = DateTime.Now;
while (reader.Position < reader.Length)
{
byte[] buffer = new byte[
Math.Min(chunkSize, reader.Length - reader.Position)];
readBytes = reader.Read(buffer, 0, buffer.Length);
// send data packet
sent += readBytes;
TimeSpan elapsedTime = DateTime.Now - started;
TimeSpan estimatedTime =
TimeSpan.FromSeconds(
(total - sent) /
((double)sent / elapsedTime.TotalSeconds));
}
This is only tangentially related, but I assume if you're trying to calculate total time remaining, you're probably also going to be showing it as some kind of progress bar. If so, you should read this paper by Chris Harrison about perceptual differences. Here's the conclusion straight from his paper (emphasis mine).
Different progress bar behaviors appear to have a significant effect on user perception of process duration. By minimizing negative behaviors and incorporating positive behaviors, one can effectively make progress bars and their associated processes appear faster. Additionally, if elements of a multistage operation can be rearranged, it may be possible to reorder the stages in a more pleasing and seemingly faster sequence.
http://www.chrisharrison.net/projects/progressbars/ProgBarHarrison.pdf
I don't know why you need this, but I would go the simplest way possible and ask the user what connection type they have, then take the file size and divide it by the speed (converting between bits and bytes as needed) to get the number of seconds; see the sketch after this answer.
The point is that you won't need processing power to calculate speeds. Microsoft, on their website, uses a function that calculates a speed for the most common default connections based on the file size, which you can get while uploading the file or enter manually.
Again, maybe you have other needs and you must calculate the upload on the fly...
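A minimal sketch of that static estimate, assuming the file size is given in bytes and the user-selected connection speed in megabits per second (so the bytes are converted to bits before dividing); EstimateSeconds is a hypothetical helper, not an existing API:
// Estimate transfer time from file size and a nominal, user-selected link speed.
static double EstimateSeconds(long fileSizeBytes, double connectionMbps)
{
    double bits = fileSizeBytes * 8.0;                 // bytes -> bits
    double bitsPerSecond = connectionMbps * 1000000.0; // Mbps -> bits per second
    return bits / bitsPerSecond;
}

// Example: a 200 MB file over a nominal 10 Mbps link
// EstimateSeconds(200L * 1024 * 1024, 10.0) ≈ 168 seconds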
The following code computes the remaining time in minutes.
long totalReceived = 0;
DateTime lastProgressChange = DateTime.Now;
Stack<int> timeStack = new Stack<int>(5);
Stack<long> byteStack = new Stack<long>(5);
using (WebClient c = new WebClient())
{
    c.DownloadProgressChanged += delegate(object s, DownloadProgressChangedEventArgs args)
    {
        long bytes;
        if (totalReceived == 0)
        {
            totalReceived = args.BytesReceived;
            bytes = args.BytesReceived;
        }
        else
        {
            bytes = args.BytesReceived - totalReceived;
        }
        timeStack.Push(DateTime.Now.Subtract(lastProgressChange).Seconds);
        byteStack.Push(bytes);
        double r = timeStack.Average() * ((args.TotalBytesToReceive - args.BytesReceived) / byteStack.Average());
        this.textBox1.Text = (r / 60).ToString();
        totalReceived = args.BytesReceived;
        lastProgressChange = DateTime.Now;
    };
    c.DownloadFileAsync(new Uri("http://www.visualsvn.com/files/VisualSVN-1.7.6.msi"), @"C:\SVN.msi");
}
I think I've got the estimated time to download:
double timeToDownload = ((((totalFileSize / 1024) - (fileStream.Length / 1024)) / Math.Round(currentSpeed, 2)) / 60);
this.Invoke(new UpdateProgessCallback(this.UpdateProgress), new object[] {
    Math.Round(currentSpeed, 2), Math.Round(timeToDownload, 2) });
where
private void UpdateProgress(double currentSpeed, double timeToDownload)
{
    lblTimeUpdate.Text = string.Empty;
    lblTimeUpdate.Text = " At a speed of " + currentSpeed + " it takes " + timeToDownload + " minutes to complete the download";
}
and the current speed is calculated like this:
TimeSpan dElapsed = DateTime.Now - dStart;
if (dElapsed.Seconds > 0)
{
    currentSpeed = (fileStream.Length / 1024) / dElapsed.Seconds;
}
