I am experimenting with / learning the new Task library, and I have written a very simple HTML downloader using WebClient and Task.Run. However, I can never get above about 5% network usage. I would like to understand why, and how I can improve my code to reach 100% network usage / throughput (probably not possible, but it has to be a lot more than 5%).
I would also like to be able to limit the number of threads; however, it seems that's not as easy as I thought (i.e. a custom task scheduler). Is there a way to just do something like this to set the max thread count: something.SetMaxThread(2)?
internal static class Program
{
    private static void Main()
    {
        for (var i = 0; i < 1000000; i++)
        {
            Go(i, Thread.CurrentThread.ManagedThreadId);
        }
        Console.Read();
    }

    private static readonly Action<int, int> Go = (counter, threadId) => Task.Run(() =>
    {
        var stopwatch = new Stopwatch();
        stopwatch.Start();
        var webClient = new WebClient();
        webClient.DownloadString(new Uri("http://stackoverflow.com"));
        stopwatch.Stop();
        Console.Write("{0} == {1} | ", threadId.ToString("D3"), Thread.CurrentThread.ManagedThreadId.ToString("D3"));
        Console.WriteLine("{0}: {1}ms ", counter.ToString("D3"), stopwatch.ElapsedMilliseconds.ToString("D4"));
    });
}
This is the async version according to @spender below. However, my understanding is that await will "remember" the point in time, hand the download off to the OS level, skip the two Console.Write calls, return to Main immediately, and continue scheduling the remaining Go calls in the for loop. Am I understanding it correctly? So there's no blocking on the UI.
private static async void Go(int counter, int threadId)
{
    using (var webClient = new WebClient())
    {
        var stopWatch = new Stopwatch();
        stopWatch.Start();
        await webClient.DownloadStringTaskAsync(new Uri("http://ftp.iinet.net.au/test500MB.dat"));
        stopWatch.Stop();
        Console.Write("{0} == {1} | ", threadId.ToString("D3"), Thread.CurrentThread.ManagedThreadId.ToString("D3"));
        Console.WriteLine("{0}: {1}ms ", counter.ToString("D3"), stopWatch.ElapsedMilliseconds.ToString("D4"));
    }
}
What I noticed was that when downloading large files there isn't that much difference in download speed / network usage. The threading version and the async version both peaked at about 12.5% network usage and about 12 MByte/sec download. I also tried running multiple instances (multiple .exe processes) and again there was no huge difference between the two. And when I try to download large files from 2 URLs concurrently (20 instances) I get similar network usage (12.5%) and download speed (10-12 MByte/sec). I guess I am reaching the peak?
As it stands, your code is suboptimal because, although you are using Task.Run to create asynchronous code that runs in the ThreadPool, the code that is being run in the ThreadPool is still blocking on the line:
webClient.DownloadString(...
This amounts to an abuse of the ThreadPool, because it is not designed to run blocking tasks and is slow to spin up additional threads to deal with peaks in workload. This in turn will seriously degrade the smooth running of any API that uses the ThreadPool (timers, async callbacks, they're everywhere), because they'll schedule work that goes to the back of the (saturated) ThreadPool queue, while the pool is tied up reluctantly and slowly spinning up hundreds of threads that will spend 99.9% of their time doing nothing.
Stop blocking the ThreadPool and switch to proper async methods that do not block.
So now you can literally break your router and seriously upset the SO site admins with the following simple mod:
private static void Main()
{
    for (var i = 0; i < 1000000; i++)
    {
        Go(i, Thread.CurrentThread.ManagedThreadId);
    }
    Console.Read();
}

private static async Task Go(int counter, int threadId)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    using (var webClient = new WebClient())
    {
        await webClient.DownloadStringTaskAsync(
            new Uri("http://stackoverflow.com"));
    }
    //...
}
HttpWebRequest (and therefore WebClient) are also constrained by a number of limits.
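One such limit worth calling out is the per-host outbound connection cap. A minimal sketch of raising it (the value 100 is only illustrative), using ServicePointManager from System.Net, set once before any requests are made:

// WebClient/HttpWebRequest default to a small number of concurrent
// connections per host (2 for a console app on .NET Framework), which
// alone can keep network usage low no matter how many tasks you start.
ServicePointManager.DefaultConnectionLimit = 100;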
Related
My console application opens 100 threads which all do exactly the same thing: send some data to a host on the internal network. The host is very responsive; I have checked that it can handle a much bigger number of requests per second. The console application is also quite primitive and responsive (it doesn't use a database or anything), it only sends requests to the host. Increasing the number of threads doesn't improve the speed. It seems something is throttling the communication between the app and the host. Moreover, I have run three instances of the same console application at the same time, and together they did 3x the work, so it seems the limitation is at the level of a single application.
I have already increased DefaultConnectionLimit but with no effect.
class Program
{
    static void Main(string[] args)
    {
        System.Net.ServicePointManager.DefaultConnectionLimit = 200;
        for (var i = 1; i <= 100; i++)
        {
            int threadId = i;
            Thread thread = new Thread(() =>
            {
                Testing(threadId);
            });
            thread.Start();
        }
    }

    private static void Testing(int threadId)
    {
        // just communicate with host
    }
}
The thing is that creating more threads than you have processor cores is pointless.
For example, you have 4 cores and create 100 threads: where do you expect the other 96 threads to run? They have to wait, and the decrease in performance is due to creating and managing the unnecessary threads.
You should use the ThreadPool, which will optimize the number of threads created and scheduled to work.
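For example, a minimal sketch of queuing the same work to the ThreadPool directly (assuming the Testing(int) method from the question):

for (var i = 1; i <= 100; i++)
{
    int threadId = i;
    // QueueUserWorkItem runs the callback on a pool thread,
    // so no dedicated Thread objects are created.
    ThreadPool.QueueUserWorkItem(_ => Testing(threadId));
}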
Creating a new Thread every time is very expensive. You shouldn't create threads explicitly. Use the Task API instead to run this on the ThreadPool:
var tasks = new Task[100];
for (var i = 0; i < 100; i++)
{
    int threadId = i;
    tasks[i] = Task.Run(() => Testing(threadId));
}
Task.WhenAll(tasks).GetAwaiter().GetResult();
I have a method called asyncStartList, which sends a list of emails provided to it, and I'm trying to figure out how to use multiple threads to speed up the process in cases where there are a lot of emails:
public async Task asyncStartList()
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();
    for (int i = 0; i < listLength; i++)
    {
        currentMailAddress = emailingList[i];
        await Task.Run(() => MailingFunction());
        currentMailAddress = "";
        Console.WriteLine("Your mail to {0} was successfully sent!", emailingList[i]);
    }
    stopWatch.Stop();
    TimeSpan ts = stopWatch.Elapsed;
    string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
        ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10);
    Console.WriteLine("Time for completion " + elapsedTime);
    Console.ReadLine();
}
The MailingFunction() just uses a simple SmtpClient and MailMessage.
Your solution doesn't actually run in parallel, because you await each individual send operation. You can use the Parallel for/foreach constructs; otherwise, start all the send operations first and wait for them all to complete afterwards.
public async Task asyncStartList()
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();

    // option 1
    Task[] tasks = emailingList.Select(s => Task.Run(() => { SendEmail(s); })).ToArray();
    Task.WaitAll(tasks);
    // option 1 end

    // option 2
    Parallel.ForEach(emailingList, email =>
    {
        SendEmail(email);
    });
    // option 2 end

    // option 3
    Parallel.For(0, emailingList.Length, i =>
    {
        SendEmail(emailingList[i]);
    });
    // option 3 end

    stopWatch.Stop();
    TimeSpan ts = stopWatch.Elapsed;
    string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10);
    Console.WriteLine("Time for completion " + elapsedTime);
    Console.ReadLine();
}

private void SendEmail(string emailAddress)
{
    // Do send operation
}
Use Parallel.ForEach from the System.Threading.Tasks namespace. So instead of for (int i = 0; ...) use Parallel.ForEach(emailingList, address => {...}).
See https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/how-to-write-a-simple-parallel-foreach-loop for an example
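A minimal sketch of that replacement (assuming, hypothetically, that MailingFunction is reworked to take the address as a parameter, since sharing the currentMailAddress field across parallel iterations would not be thread-safe):

Parallel.ForEach(emailingList, address =>
{
    // Each iteration may run on a different thread pool thread.
    MailingFunction(address);
    Console.WriteLine("Your mail to {0} was successfully sent!", address);
});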
If your solution's performance is CPU-bound, that is when you want to use parallel threads. If your solution is bound by something else-- e.g. the ability of the email server to handle requests-- what you actually should use is async, which is much simpler and much safer.
There are many ways to use async in this scenario, but here is a short and simple pattern that would work:
await Task.WhenAll
(
    emailingList.Select(address => MailingFunctionAsync(address))
);
Yes, that is all there is to it. This assumes that your email client not only has a MailingFunction() method but also a MailingFunctionAsync() method (e.g. using Outlook's SendAsync() method or something similar).
Here is a sample MailingFunctionAsync() stolen from this question:
public async Task MailingFunctionAsync(string toEmailAddress)
{
    var message = new MailMessage();
    message.To.Add(toEmailAddress);
    message.Subject = SOME_SUBJECT;
    message.Body = SOME_BODY;
    using (var smtpClient = new SmtpClient())
    {
        await smtpClient.SendMailAsync(message);
    }
}
The common answer here is to use Parallel.ForEach (well, apart from John Wu's answer, which you should really consider). While at the outset Parallel.ForEach seems like an easy and good idea, it's actually not the most optimal approach.
Here is the problem:
Parallel.ForEach uses the thread pool. Moreover, IO bound operations will block those threads waiting for a device to respond and tie up resources.
If you have CPU-bound code, parallelism is appropriate;
though if you have IO-bound code, asynchrony is appropriate.
In this case, sending mail is clearly I/O, so the ideal consuming code would be asynchronous.
Furthermore, to use the asynchronous and parallel features of .NET properly, you should also understand the concept of I/O threads.
Not everything in a program consumes CPU time. When a thread tries to read data from a file on disk or sends a TCP/IP packet through network, the only thing it does is delegate the actual work to a device; disk or network adapter; and wait for results.
It's very expensive to spend a thread's time waiting. Even though threads sleep and don't consume CPU time while waiting for results, it doesn't really pay off because it's a waste of system resources.
To be simplistic, every thread holds memory for stack variables, local storage and so on. Also, the more threads you have, the more time it takes to switch among them.
Though, the nice thing about Parallel.ForEach is that it's easy to implement, and you can also set options like the max degree of parallelism.
So what can you do...
You are best off using the async/await pattern and/or some type of limit on concurrent tasks; another neat solution is the ActionBlock<TInput> class in the TPL Dataflow library.
Dataflow example
var block = new ActionBlock<MySomething>(
    mySomething => MyMethodAsync(mySomething),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 });

foreach (var something in ListOfSomethings)
{
    block.Post(something);
}

block.Complete();
await block.Completion;
This approach gives you asynchrony, it gives you MaxDegreeOfParallelism, it doesn't waste resources, and it lets IO be IO without chewing up unnecessary threads.
Disclaimer: Dataflow may not be where you want to be; however, I just thought I'd give you some more information on the different approaches on offer.
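For completeness, the "limit on concurrent tasks" mentioned above can also be built without Dataflow by using a SemaphoreSlim as a throttle. A minimal sketch, assuming it runs inside an async method and uses a MailingFunctionAsync like the one shown earlier:

var throttle = new SemaphoreSlim(50);   // at most 50 sends in flight

var tasks = emailingList.Select(async address =>
{
    await throttle.WaitAsync();         // take a slot
    try
    {
        await MailingFunctionAsync(address);
    }
    finally
    {
        throttle.Release();             // free the slot for the next send
    }
});

await Task.WhenAll(tasks);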
I've got an ASP.NET site that is running a modest amount of requests (about 500rpm split across 3 servers), and usually the requests take about 15ms. However, I've found that there are frequently requests that take much longer (1s or more). I've narrowed the latency down to a call to Task.WhenAll. Here's an example of the offending code:
var taskA = dbA.GetA(id);
var taskB = dbB.GetB(id);
var taskC = dbC.GetC(id);
var taskD = dbD.GetD(id);
await Task.WhenAll(taskA, taskB, taskC, taskD);
Each individual task is measured and takes less than 10ms to complete. I've pinpointed the delay down to the Task.WhenAll call, and it seems to have something to do with how the task is scheduled. As far as I can tell, there's not a lot of pressure on the TPL task pool, so I'm at a loss for why the performance is so sporadic.
Async operations involve context switches, which are time consuming, and unfortunately not always in a deterministic way. To speed things up in your case, try appending ConfigureAwait(false) to your Task.WhenAll call, as follows:
await Task.WhenAll(taskA, taskB, taskC, taskD).ConfigureAwait(false);
This eliminates an additional context switch back to the captured context, which is actually the recommended approach for server-side applications.
Creating threads takes overhead. Depending on what you're doing, you can also try a Parallel.ForEach.
public static void yourMethod(int id)
{
    var tasks = new List<IMyCustomType> { dbA.GetA(id), dbB.GetB(id), dbC.GetC(id), dbD.GetD(id) };

    // Your simple stopwatch for timing
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();

    // For each 'tasks' list item, call 'executeTasks' (max 10 concurrent occurrences)
    // - Processing for all tasks will be complete before
    //   continuing processing on the main thread
    Parallel.ForEach(tasks, new ParallelOptions { MaxDegreeOfParallelism = 10 }, executeTasks);

    stopWatch.Stop();
    Console.WriteLine("Completed execution in: " + stopWatch.Elapsed.TotalSeconds);
}

private static void executeTasks(IMyCustomType obj)
{
    // Your task's work here.
}
I am creating a console program which can test read / write to a cache by simulating multiple clients, and have written the following code. Please help me understand:
Is this the correct way to achieve the multi-client simulation?
What more can I do to make it a genuine load test?
void Main()
{
    List<Task<long>> taskList = new List<Task<long>>();
    for (int i = 0; i < 500; i++)
    {
        taskList.Add(TestAsync());
    }
    Task.WaitAll(taskList.ToArray());
    double averageTime = taskList.Average(t => t.Result);
}

public static async Task<long> TestAsync()
{
    // Returns the total time taken, measured with a Stopwatch in the same module
    return await Task.Factory.StartNew(() =>
    {
        // Call Cache Read / Write here and return the elapsed time (placeholder)
        return 0L;
    });
}
I adjusted your code slightly to see how many threads we have at a particular point in time.
static volatile int currentExecutionCount = 0;

static void Main(string[] args)
{
    List<Task<long>> taskList = new List<Task<long>>();
    var timer = new Timer(Print, null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    for (int i = 0; i < 1000; i++)
    {
        taskList.Add(DoMagic());
    }

    Task.WaitAll(taskList.ToArray());
    timer.Change(Timeout.Infinite, Timeout.Infinite);
    timer = null;

    // to check that we have all the threads executed
    Console.WriteLine("Done " + taskList.Sum(t => t.Result));
    Console.ReadLine();
}

static void Print(object state)
{
    Console.WriteLine(currentExecutionCount);
}

static async Task<long> DoMagic()
{
    return await Task.Factory.StartNew(() =>
    {
        Interlocked.Increment(ref currentExecutionCount);
        // place your code here
        Thread.Sleep(TimeSpan.FromMilliseconds(1000));
        Interlocked.Decrement(ref currentExecutionCount);
        return 4;
    }
    // this hint tells the scheduler to use new threads rather than pool threads
    , TaskCreationOptions.LongRunning
    );
}
The result is: inside a virtual machine I have from 2 to 10 threads running simultaneously if I don't use the hint. With the hint, up to 100. And on a real machine I can see 1000 threads at once; Process Explorer confirms this. The LongRunning hint tells the default scheduler to run the task on its own dedicated thread instead of queuing it to the thread pool, which is why the count climbs so much higher.
If it is very busy, then apparently your clients have to wait a while before their requests are serviced. Your program does not measure this, because your stopwatch starts running when the service request starts.
If you also want to measure what happens with the average time before a request is finished, you should start your stopwatch when the request is made, not when the request is serviced.
Your program only takes threads from the thread pool. If you start more tasks than there are threads, some tasks will have to wait before TestAsync starts running. This wait time would be measured if you remembered the time at which Task.Run was called.
Besides the flaw in the time measurements, how many service requests do you expect simultaneously? Are there enough free threads in your thread pool to simulate this? If you expect about 50 service requests at the same time, and the size of your thread pool is only 20 threads, then you'll never run 50 service requests at the same time. Vice versa: if your thread pool is way bigger than the number of expected simultaneous service requests, then you'll measure longer times than is actually the case.
Consider changing the number of threads in your thread pool, and make sure no one else uses any threads of the pool.
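A minimal sketch of pinning the pool size for such an experiment (the numbers are purely illustrative):

// Let the pool supply the simulated concurrency level immediately...
ThreadPool.SetMinThreads(workerThreads: 50, completionPortThreads: 50);
// ...and cap it so no more than 50 simulated clients run at once.
ThreadPool.SetMaxThreads(workerThreads: 50, completionPortThreads: 50);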
Sorry for the bad title. I am currently learning TPL and reading this blog article, which states:
The ability to invoke a synchronous method asynchronously does nothing for scalability, because you’re typically still consuming the same amount of resources you would have if you’d invoked it synchronously (in fact, you’re using a bit more, since there’s overhead incurred to scheduling something).
So I thought, let's give it a try, and I created a demo application that uses WebClient's DownloadStringTaskAsync and (synchronous) DownloadString methods.
My demo application has two methods:
DownloadHtmlNotAsyncInAsyncWay
This provides an asynchronous wrapper around the synchronous DownloadString method, which should not scale well.
DownloadHTMLCSAsync
This calls the async method DownloadStringTaskAsync.
I created 100 tasks with each method and compared the time consumed, and found that option 1 consumed less time than the second. Why?
Here is my code.
using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        const int repeattime = 100;
        var s = new Sample();
        var sw = new Stopwatch();
        var tasks = new Task<string>[repeattime];
        sw.Start();
        for (var i = 0; i < repeattime; i++)
        {
            tasks[i] = s.DownloadHtmlNotAsyncInAsyncWay();
        }
        Task.WhenAll(tasks);
        Console.WriteLine("==========Time elapsed(non natural async): " + sw.Elapsed + "==========");
        sw.Reset();

        sw.Start();
        for (var i = 0; i < repeattime; i++)
        {
            tasks[i] = s.DownloadHTMLCSAsync();
        }
        Task.WhenAll(tasks);
        Console.WriteLine("==========Time elapsed(natural async) : " + sw.Elapsed + "==========");
        sw.Reset();
    }
}

public class Sample
{
    private const string Url = "https://www.google.co.in";

    public async Task<string> DownloadHtmlNotAsyncInAsyncWay()
    {
        return await Task.Run(() => DownloadHTML());
    }

    public async Task<string> DownloadHTMLCSAsync()
    {
        using (var w = new WebClient())
        {
            var content = await w.DownloadStringTaskAsync(new Uri(Url));
            return GetWebTitle(content);
        }
    }

    private string DownloadHTML()
    {
        using (var w = new WebClient())
        {
            var content = w.DownloadString(new Uri(Url));
            return GetWebTitle(content);
        }
    }

    private static string GetWebTitle(string content)
    {
        int titleStart = content.IndexOf("<title>", StringComparison.InvariantCultureIgnoreCase);
        if (titleStart < 0)
        {
            return null;
        }
        int titleBodyStart = titleStart + "<title>".Length;
        int titleBodyEnd = content.IndexOf("</title>", titleBodyStart, StringComparison.InvariantCultureIgnoreCase);
        return content.Substring(titleBodyStart, titleBodyEnd - titleBodyStart);
    }
}
Here is the dotnetfiddle link.
Why did the first option complete in less time than the second?
You aren't actually measuring anything.
Task.WhenAll(tasks); returns a Task of the completion of all of those tasks.
You don't do anything with that task, so you aren't waiting for anything to finish.
Therefore, you're just measuring the synchronous initialization of each alternative. Task.Run() just queues a delegate to the thread pool; it does less work than setting up an HTTP request.
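So to measure anything meaningful, the benchmark has to wait for the returned task before reading the stopwatch. A minimal sketch of the fix for the first loop (blocking, since the sample's Main is synchronous):

sw.Start();
for (var i = 0; i < repeattime; i++)
{
    tasks[i] = s.DownloadHtmlNotAsyncInAsyncWay();
}
// Block until every download has actually finished before reading the elapsed time.
Task.WhenAll(tasks).GetAwaiter().GetResult();
Console.WriteLine("==========Time elapsed(non natural async): " + sw.Elapsed + "==========");
sw.Reset();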
in fact, you’re using a bit more, since there’s overhead incurred to scheduling something
Even if you were correctly awaiting the tasks, as SLaks suggested, it would be near impossible to accurately measure this overhead.
Your test is downloading a webpage, which requires network access.
The overhead you're trying to measure is soooo much smaller than the variance in the network latency, that it would be lost in the noise.