Async calls to several WCF web services - C#

I have a console application that runs a loop calling 4 WCF Web Services in a specific order.
Each iteration of the loop is not dependent on the previous or the next iteration.
Each iteration of the loop takes about 5 seconds.
I would like to use some parallel or async work to get this time down.
Basically, what I am thinking is that while I am waiting on one of the services, another iteration could be calling one of the other services.
However, in my attempts, this has not worked.
I have tried to do something like this:
dataRows.ToList().ForEach(async patientRow =>
{
    // Calls made here. Example calls below:
    var personData = await personOperations.SavePersonAsync(personRow);
    var saveResponse = await orderingBrokerOperations
        .SaveAsync(orderedTestContracts, personData);
    printOperations.Print(saveResponse);
});
But this ends up front-loading tons of the calls. (Meaning that the first web service gets blasted with many, many requests.) Eventually one of the calls times out because the service is getting too many requests.
So then I tried this:
Parallel.ForEach(dataRows,
    new ParallelOptions { MaxDegreeOfParallelism = 10 }, (patientRow) =>
    {
        // WCF calls are made synchronously
    });
This kept my calls from going overboard, but did not really increase the speed much.
I also tried doing this
Parallel.ForEach(dataRows,
    new ParallelOptions { MaxDegreeOfParallelism = 10 }, async (patientRow) =>
    {
        // Do the calls here. Each call is made using the await keyword.
    });
but it had the same problems as the first option.
Out of frustration I then tried this:
long inProgress = 0;
dataRows.ToList().ForEach(async patientRow =>
{
    inProgress++;
    if (inProgress > 10)
        Thread.Sleep(100); // Thread.Sleep requires a duration argument
    // Do the calls here. Each call is made using the await keyword.
    inProgress--;
});
This did not really speed things up either.
My dev servers are not particularly strong, but they should be able to handle at least 10 calls at the same time. I am not sure what is going on.
Is there a good way to spread out the calls to my services without running synchronously?

Have a look at this implementation of ForEachAsync by Stephen Toub. It should enable you to do what you want and it allows control of how many tasks you want to execute concurrently.
I would also recommend making your WCF calls asynchronous if possible. Have a look at this post for more detail.
On a final note, if you want to execute many async operations concurrently, you do not await them as you kick them off. You store each Task in a list as it is started, and then use Task.WhenAll to await their completion.
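For illustration, here is a minimal sketch of that pattern combined with a concurrency cap (a `SemaphoreSlim` limited to 10, matching the 10 concurrent calls mentioned in the question). `ProcessRowAsync` is a hypothetical stand-in for the chain of WCF calls, not the asker's actual method:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThrottledCalls
{
    // Caps the number of in-flight operations at 10.
    static readonly SemaphoreSlim _gate = new SemaphoreSlim(10, 10);

    static async Task<int> ProcessRowAsync(int row)
    {
        await _gate.WaitAsync();
        try
        {
            // Stand-in for the real WCF call chain:
            // SavePersonAsync -> SaveAsync -> Print.
            await Task.Delay(50);
            return row * 2;
        }
        finally
        {
            _gate.Release();
        }
    }

    public static async Task<int[]> RunAsync(IEnumerable<int> rows)
    {
        // Kick everything off without awaiting each call...
        var tasks = rows.Select(ProcessRowAsync).ToList();
        // ...then await them all at once; at most 10 run concurrently.
        return await Task.WhenAll(tasks);
    }
}
```

The semaphore prevents the "front-loading" problem: the 11th call waits until one of the first 10 releases the gate, while the awaited calls still overlap their network waits.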


EventHub ForEach Parallel Async

Always managing to confuse myself working with async, I'm after a bit of validation/confirmation here that I'm doing what I think I'm doing in the following scenarios.
given the following trivial example:
// pretend / assume these are json msgs or something ;)
var strEvents = new List<string> { "event1", "event2", "event3" };
I can post each event to an event hub simply as follows:
foreach (var e in strEvents)
{
    // Do some things
    outEventHub.Add(e); // ICollector
}
The foreach will run on a single thread and execute each thing inside sequentially. The posting to the event hub will also remain on the same thread too, I guess?
Changing ICollector to IAsyncCollector, I can achieve the following:
foreach (var e in strEvents)
{
    // Do some things
    await outEventHub.AddAsync(e);
}
I think I am right here in saying that the foreach will run on a single thread, and the actual sending to the event hub will be pushed off elsewhere? Or at least not block that same thread.
Changing to Parallel.ForEach, as these events will be arriving 100+ or so at a time:
Parallel.ForEach(events, async (e) =>
{
    // Do some things
    await outEventHub.AddAsync(e);
});
Starting to get a bit hazy now, as I am not sure what really is going on... AFAIK each event has its own thread (within the bounds of the hardware) and steps within that thread do not block it.
Finally, I thought I could turn them all into Tasks:
private static async Task DoThingAsync(string e, IAsyncCollector<string> outEventHub)
{
    await outEventHub.AddAsync(e);
}

var t = new List<Task>();
foreach (var e in strEvents)
{
    t.Add(DoThingAsync(e, outEventHub));
}
await Task.WhenAll(t);
Now I am really hazy. I think this is prepping everything on a single thread, and then running everything at the same time, on any thread available?
I appreciate that benchmarking is required to determine which is right for the job at hand, but an explanation of what the framework is doing in each situation would be super helpful for me right now.
Parallel != async
This is the main idea here. Both of them have their uses, and they can be used together, but they are very different. You are mostly right with your assumptions, but let me clarify:
Simple foreach
This is non-parallel and non-async. Nothing to talk about.
Await inside foreach
This is async code that is non-parallel.
foreach (var e in strEvents)
{
    // Do some things
    await outEventHub.AddAsync(e);
}
This will all take place on a single thread. It takes an event, starts adding it to your event hub, and while that completes (I'm guessing it does some sort of network IO) it hands the thread back to the thread pool (or the UI if it was called on a UI thread) so it can do other work while waiting on AddAsync to return. But as you said, it is not parallel at all.
Parallel Foreach (async)
This one is a trap! In short, Parallel.Foreach is designed for synchronous workloads. We'll get back to this but first let's assume you used it with the non-async code.
Parallel foreach (sync)
A.k.a. Parallel but not async.
Parallel.ForEach(events, (e) =>
{
    // Do some things
    outEventHub.Add(e);
});
Each item will get its own "task", but these won't each spawn a thread. Creating threads is expensive, and in the optimal case there is no point in having more threads than CPU cores. Instead, these tasks run on the ThreadPool, which keeps roughly as many threads as is optimal. Each thread takes a task, works on it, then takes another one, and so on.
You can think of it as - on a 4 core machine - having 4 workers around a pile of tasks, so 4 of them are being run at a time. You can imagine that this is not ideal in case of IO bound workloads (which this most likely is). If your network is slow, you can have all 4 threads blocked on trying to send the event out, while they could be doing useful work. This leads us to...
Tasks
Async and potentially parallel (depends on the usage).
Your description is correct here too, except for the ThreadPool part: the tasks are all kicked off at once (on the main thread) and then run on the pool's threads. While they are running, the main thread is released and can do other work, as needed. Up to this point it is the same as the Parallel.ForEach case. But:
What happens is that a ThreadPool thread picks up a task, does the necessary preprocessing, then sends out the network request asynchronously. This means that the task will not block while waiting for the network; rather, it releases the ThreadPool thread to pick up another work item. When the network request completes, the task's continuation (the remaining code lines after the network request) is scheduled back onto the list of tasks.
You can see that theoretically this is the most efficient process, so fast that you have to be careful not to flood your network.
Back to Parallel.Foreach and async
At this point you should be able to spot the problem. All your async lambda async (e) => { await outEventHub.AddAsync(e); } does is kick off the work; it returns right after it hits the await. (Remember that async/await releases threads while waiting.) Parallel.ForEach returns right after it has started all of them. But nothing is awaiting these tasks! They become fire-and-forget, which is usually bad practice. It is as if you deleted the await Task.WhenAll call from your task example.
I hope this cleared most things for you, if not, let me know what to improve on.
Why don't you send those events asynchronously in parallel, like this:
var tasks = new List<Task>();
foreach (var e in strEvents)
{
    tasks.Add(outEventHub.AddAsync(e));
}
await Task.WhenAll(tasks);
await outEventHub.FlushAsync();
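If the hub throttles you when hundreds of events arrive at once, one hedged variant is to send in bounded batches rather than all at once. The batch size below is illustrative, and `SendInBatchesAsync` is a generic helper written for this sketch, not an Event Hubs API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class BatchSender
{
    // Awaits each batch before starting the next, so at most
    // batchSize sends are in flight at any moment.
    public static async Task SendInBatchesAsync<T>(
        IReadOnlyList<T> items, Func<T, Task> send, int batchSize)
    {
        for (int i = 0; i < items.Count; i += batchSize)
        {
            var batch = items.Skip(i).Take(batchSize).Select(send);
            await Task.WhenAll(batch);
        }
    }
}
```

Usage against the question's names would look something like `await BatchSender.SendInBatchesAsync(strEvents, e => outEventHub.AddAsync(e), 50);`.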

Why is an additional async operation making my code faster than when the operation is not taking place at all?

I'm working on an SMS-based game (value-added service), in which a question must be sent to each subscriber on a daily basis. There are over 500,000 subscribers, so performance is a key factor. Since each subscriber can be in a different state of the competition with different variables, the database must be queried separately for each subscriber before sending a text message. To achieve the best performance I'm using the .NET Task Parallel Library (TPL) to spawn parallel thread pool threads and do as many async operations as possible in each thread, to finally send the texts as fast as possible.
Before describing the actual problem there are some more information necessary to give about the code.
At first there were no async operations in the code. I just scheduled some 500,000 tasks with the default task scheduler onto the thread pool, and each task would work through the routines, blocking on all EF (Entity Framework) queries and sequentially finishing its job. It was good, but not fast enough. Then I changed all EF queries to async. The outcome was superb in speed, but there were so many deadlocks and timeouts in SQL Server that about a third of the subscribers never received a text! After trying different solutions, I decided not to do too many async database operations while I have over 500,000 tasks running on a 24-core server (with at least 24 concurrent thread pool threads)!
I rolled back all the changes (the async ones) except for one web service call in each task, which remained async.
Now the weird case:
In my code, I have a boolean variable named "isCrossSellActive". When the variable is set, some more DB operations take place and an async web service call happens, on which the thread awaits. When this variable is false, none of these operations happen, including the async web service call. Oddly, when the variable is set the code runs much faster than when it's not! It seems as if, for some reason, the awaited async code (the cooperative yield) is making the code faster.
Here is the code:
public async Task AutoSendMessages(...)
{
    // Get list of subscriptions plus some initialization
    LimitedConcurrencyLevelTaskScheduler lcts = new LimitedConcurrencyLevelTaskScheduler(numberOfThreads);
    TaskFactory taskFactory = new TaskFactory(lcts);
    List<Task> tasks = new List<Task>();
    //....
    foreach (var sub in subscriptions)
    {
        AutoSendData data = new AutoSendData
        {
            ServiceId = serviceId,
            MSISDN = sub.subscriber,
            IsCrossSellActive = bolCrossSellHeader
        };
        tasks.Add(await taskFactory.StartNew(async (x) =>
        {
            await SendQuestion(x);
        }, data));
    }
    GC.Collect();
    try
    {
        Task.WaitAll(tasks.ToArray());
    }
    catch (AggregateException ae)
    {
        ae.Handle((ex) =>
        {
            _logRepo.LogException(1, "", ex);
            return true;
        });
    }
    await _autoSendRepo.SetAutoSendingStatusEnd(statusId);
}
public async Task SendQuestion(object data)
{
    // extract variables from input parameter
    try
    {
        if (isCrossSellActive)
        {
            int pieceCount = subscriptionRepo.GetSubscriberCarPieces(curSubscription.service,
                curSubscription.subscriber).Count(c => c.isConfirmed);
            foreach (var rule in csRules)
            {
                if (rule.Applies)
                {
                    if (await HttpClientHelper.GetJsonAsync<bool>(url, rule.TargetServiceBaseAddress))
                    {
                        int noOfAddedPieces = SomeCalculations();
                        if (noOfAddedPieces > 0)
                        {
                            crossSellRepo.SetPromissedPieces(curSubscription.subscriber, curSubscription.service,
                                rule.TargetShortCode, noOfAddedPieces, 0, rule.ExpirationLimitDays);
                        }
                    }
                }
            }
        }
        // The rest of the code. (Some db CRUD)
        await SmsClient.SendSoapMessage(subscriber, smsBody);
    }
    catch (Exception ex) { /*...*/ }
}
OK, thanks to @usr and the clue he gave me, the problem is finally solved!
His comment drew my attention to the awaited taskFactory.StartNew(...) line, which sequentially adds new tasks to the "tasks" list that is then waited on by Task.WaitAll(tasks).
At first I removed the await keyword before taskFactory.StartNew(), and it led the code into a horrible state of malfunction! I then put the await keyword back, debugged the code with breakpoints, and amazingly saw that the tasks ran one after another, sequentially, until the first task reached the first await inside the SendQuestion routine. When the "isCrossSellActive" flag was set, despite the extra work a task had to do, the first await keyword was reached earlier, enabling the next scheduled task to run. But when it was not set, the only await keyword was on the last line of the routine, so it was most likely to run sequentially to the end.
usr's suggestion to remove the await keyword in the for loop turned out to be correct, but the problem was that the Task.WaitAll() line would then wait on the wrong list: a list of Task<Task> instead of Task. I finally used Task.Run instead of TaskFactory.StartNew and everything changed. Now the service is working well. The final code inside the for loop is:
tasks.Add(Task.Run(async () =>
{
    await SendQuestion(data);
}));
and the problem was solved.
Thank you all.
P.S. Read this article on Task.Run and why TaskFactory.StartNew is dangerous: http://blog.stephencleary.com/2013/08/startnew-is-dangerous.html
It's extremely hard to tell unless you add some profiling that tells you which code is taking longer now.
Without seeing more numbers, my best guess would be that the SMS service doesn't like it when you send too many requests in a short time and chokes. When you add the extra DB calls, the extra delay makes the SMS service work better.
A few other small details:
await Task.WhenAll is usually a bit better than Task.WaitAll. WaitAll means the thread will sit around waiting, making a deadlock slightly more likely.
Instead of:
tasks.Add(await taskFactory.StartNew(async (x) =>
{
    await SendQuestion(x);
}, data));
You should be able to do
tasks.Add(SendQuestion(data));
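As a rough sketch of what the whole loop then looks like once the wrapper task and the blocking wait are gone. `SendQuestion` here is a stub that only simulates the async work, since the real method depends on the question's repositories:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class AutoSendSketch
{
    static int _sent;

    // Stub for the real SendQuestion; it only simulates async work.
    static async Task SendQuestion(int subscriber)
    {
        await Task.Delay(10);
        Interlocked.Increment(ref _sent);
    }

    public static async Task<int> SendAllAsync(IEnumerable<int> subscribers)
    {
        var tasks = new List<Task>();
        foreach (var sub in subscribers)
            tasks.Add(SendQuestion(sub)); // a Task, not a Task<Task>

        await Task.WhenAll(tasks); // asynchronous wait, unlike Task.WaitAll
        return _sent;
    }
}
```

Because nothing is awaited inside the loop, all sends start immediately and overlap, and the single `await Task.WhenAll` releases the calling thread while they complete.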

ContinueWith in fire and forget?

I have multiple async methods that all make HttpClient calls to other servers. I want to run these at the end of a Web API call and return immediately. The calls each need to record the time they take, and when ALL complete, I need to log those times to a file. I spent a while tinkering until I got it to work, but I don't know why this way works and the other ways don't.
Could you shed some light on this for me?
Here's the basic way that didn't work. That is, the calls were made, but LogTimeTaken was not called (or at least it didn't write the log file).
// inside webapi action
var tasks = new List<Task>
{
    MakeCall1Async(data, timeTaken),
    MakeCall2Async(data, timeTaken),
    MakeCall3Async(data, timeTaken)
};
Task.Run(async () =>
{
    await Task.WhenAll(tasks);
    LogTimeTaken(timeTaken);
});
// finish webapi action and return
Here's the way that did work:
// inside webapi action
Task.Run(async () =>
{
    var tasks = new List<Task>
    {
        MakeCall1Async(data, timeTaken),
        MakeCall2Async(data, timeTaken),
        MakeCall3Async(data, timeTaken)
    };
    await Task.WhenAll(tasks).ContinueWith(t => LogTimeTaken(timeTaken));
});
// finish webapi action and return
Why?
Also, I am aware of the risks of using fire and forget inside of webapi, and that it won't always run to completion (like when an app pool recycles). 95%+ is good enough in this case.
EDIT
I understand everyone's concerns regarding the technology choice. I may change to a pub/sub architecture or use QueueBackgroundWorkItem. Given that I only need successful completion 95% of the time, I think running it as I am is fine, however. The real answer I am after is why the first way fails and the second way succeeds in writing to the log.
You should look at background job schedulers like Hangfire or Quartz.NET.
Below is a nice blog entry on these solutions.
How to run Background Tasks in ASP.Net

Multithreading a web scraper?

I've been thinking about making my web scraper multithreaded, not with normal threads (e.g. Thread scrape = new Thread(Function);) but something like a thread pool where there can be a very large number of threads.
My scraper works by using a for loop to scrape pages.
for (int i = (int)pagesMin.Value; i <= (int)pagesMax.Value; i++)
So how could I multithread the function (that contains the loop) with something like a threadpool? I've never used threadpools before and the examples I've seen have been quite confusing or obscure to me.
I've modified my loop into this:
int min = (int)pagesMin.Value;
int max = (int)pagesMax.Value;

ParallelOptions pOptions = new ParallelOptions();
pOptions.MaxDegreeOfParallelism = Properties.Settings.Default.Threads;

Parallel.For(min, max, pOptions, i =>
{
    // Scraping
});
Would that work or have I got something wrong?
The problem with using pool threads is that they spend most of their time waiting for a response from the Web site. And the problem with using Parallel.ForEach is that it limits your parallelism.
I got the best performance by using asynchronous Web requests. I used a Semaphore to limit the number of concurrent requests, and the callback function did the scraping.
The main thread creates the Semaphore, like this:
Semaphore _requestsSemaphore = new Semaphore(20, 20);
The 20 was derived by trial and error. It turns out that the limiting factor is DNS resolution and, on average, it takes about 50 ms. At least, it did in my environment. 20 concurrent requests was the absolute maximum. 15 is probably more reasonable.
The main thread essentially loops, like this:
while (true)
{
    _requestsSemaphore.WaitOne();
    string urlToCrawl = DequeueUrl(); // however you do that
    var request = (HttpWebRequest)WebRequest.Create(urlToCrawl);
    // set request properties as appropriate
    // and then do an asynchronous request
    request.BeginGetResponse(ResponseCallback, request);
}
The ResponseCallback method, which will be called on a pool thread, does the processing, disposes of the response, and then releases the semaphore so that another request can be made.
void ResponseCallback(IAsyncResult ir)
{
    try
    {
        var request = (HttpWebRequest)ir.AsyncState;
        // you'll want exception handling here
        using (var response = (HttpWebResponse)request.EndGetResponse(ir))
        {
            // process the response here.
        }
    }
    finally
    {
        // release the semaphore so that another request can be made
        _requestsSemaphore.Release();
    }
}
The limiting factor, as I said, is DNS resolution. It turns out that DNS resolution is done on the calling thread (the main thread in this case). See Is this really asynchronous? for more information.
This is simple to implement and works quite well. It's possible to get even more than 20 concurrent requests, but doing so takes quite a bit of effort, in my experience. I had to do a lot of DNS caching and ... well, it was difficult.
You can probably simplify the above by using Task and the new async stuff in C# 5.0 (.NET 4.5). I'm not familiar enough with those to say how, though.
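As a sketch of that simplification using async/await (.NET 4.5+), with a `SemaphoreSlim` playing the role of the `Semaphore` above. The fetch delegate is injected here only to keep the example self-contained and testable; in real use it would be something like `HttpClient.GetStringAsync`:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class AsyncScraper
{
    readonly Func<string, Task<string>> _fetch;
    readonly SemaphoreSlim _gate; // replaces the Semaphore + callback plumbing

    public AsyncScraper(Func<string, Task<string>> fetch, int maxConcurrent)
    {
        _fetch = fetch;
        _gate = new SemaphoreSlim(maxConcurrent, maxConcurrent);
    }

    async Task<string> FetchThrottledAsync(string url)
    {
        await _gate.WaitAsync();
        try { return await _fetch(url); }
        finally { _gate.Release(); }
    }

    // Starts all requests, with at most maxConcurrent in flight at once.
    public Task<string[]> ScrapeAsync(IEnumerable<string> urls) =>
        Task.WhenAll(urls.Select(FetchThrottledAsync));
}
```

The `BeginGetResponse`/`EndGetResponse` pair and the manual callback disappear; the semaphore still enforces the 20-concurrent-requests cap found by trial and error above.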
It's better to go with the TPL, namely Parallel.ForEach using an overload with a Partitioner. It manages workload automatically.
FYI: more threads doesn't mean faster. I'd advise you to run some tests comparing an unparameterized Parallel.ForEach with a user-defined one.
Update
public void ParallelScraper(int fromInclusive, int toExclusive,
    Action<int> scrape, int desiredThreadsCount)
{
    int chunkSize = (toExclusive - fromInclusive +
        desiredThreadsCount - 1) / desiredThreadsCount;
    ParallelOptions pOptions = new ParallelOptions
    {
        MaxDegreeOfParallelism = desiredThreadsCount
    };

    Parallel.ForEach(Partitioner.Create(fromInclusive, toExclusive, chunkSize),
        rng =>
        {
            for (int i = rng.Item1; i < rng.Item2; i++)
                scrape(i);
        });
}
Note: you might be better off with async in your situation.
Since your web scraper uses a for loop, you could have a look at Parallel.ForEach(), which works like a foreach loop but iterates over enumerable data and uses multiple threads to invoke the loop body.
For more details, see Parallel loops
Update:
Parallel.For() is very similar to Parallel.ForEach(); which one you use depends on the context, just like choosing between a for and a foreach loop.
This is a perfect scenario for TPL Dataflow's ActionBlock. You can easily configure it to limit concurrency. Here is one of the examples from the documentation:
var downloader = new ActionBlock<string>(async url =>
{
    byte[] imageData = await DownloadAsync(url);
    Process(imageData);
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 5 });

downloader.Post("http://msdn.com/concurrency");
downloader.Post("http://blogs.msdn.com/pfxteam");
You can read about ActionBlock (including the referenced example) by downloading Introduction to TPL Dataflow.
During the tests for our "Crawler-Lib Framework" I found that parallel, TPL, or threading attempts won't get you the throughput you want. You get stuck at 300-500 requests per second on a local machine. If you want to execute thousands of requests in parallel, you must execute them with the async pattern and process the results in parallel. Our Crawler-Lib Engine (a workflow-enabled request processor) does this at about 10,000-20,000 requests per second on a local machine. If you want a fast scraper, don't try to use the TPL. Instead, use the async pattern (Begin.../End...) and start all your requests in one thread.
If many of your requests tend to time out, say after 30 seconds, the situation is even worse. In this case the TPL-based solutions will get an ugly throughput of 1-5 requests per second, while the async pattern gives you at least 100-300 requests per second. The Crawler-Lib Engine handles this well and gets the maximum possible requests. Say your TCP/IP stack is configured to allow 60,000 outbound connections (65,535 is the maximum, because every connection needs an outbound port); then you will get a throughput of 60,000 connections / 30 seconds timeout = 2,000 requests per second.

Tracking progress of a multi-step Task

I am working on a simple server that exposes webservices to clients. Some of the requests may take a long time to complete, and are logically broken into multiple steps. For such requests, it is required to report progress during execution. In addition, a new request may be initiated before a previous one completes, and it is required that both execute concurrently (barring some system-specific limitations).
I was thinking of having the server return a TaskId to its clients, and having the clients track the progress of the requests using the TaskId. I think this is a good approach, and I am left with the issue of how tasks are managed.
Never having used the TPL, I was thinking it would be a good way to approach this problem. Indeed, it allows me to run multiple tasks concurrently without having to manually manage threads. I can even create multi-step tasks relatively easily using ContinueWith.
I can't come up with a good way of tracking a task's progress, though. I realize that when my requests consist of a single "step", then the step has to cooperatively report its state. This is something I would prefer to avoid at this point. However, when a request consists of multiple steps, I would like to know which step is currently executing and report progress accordingly. The only way I could come up with is extremely tiresome:
Task<int> firstTask = new Task<int>(() => { DoFirstStep(); return 42; });
firstTask.
    ContinueWith<int>(task => { UpdateProgress("50%"); return task.Result; }).
    ContinueWith<string>(task => { DoSecondStep(task.Result); return "blah"; }).
    ContinueWith<string>(task => { UpdateProgress("100%"); return task.Result; });
And even this is not perfect since I would like the Task to store its own progress, instead of having UpdateProgress update some known location. Plus it has the obvious downside of having to change a lot of places when adding a new step (since now the progress is 33%, 66%, 100% instead of 50%, 100%).
Does anyone have a good solution?
Thanks!
This isn't really a scenario that the Task Parallel Library fully supports.
You might consider an approach where you feed progress updates to a queue and read them on another Task:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

static void Main(string[] args)
{
    Example();
}

static BlockingCollection<Tuple<int, int, string>> _progressMessages =
    new BlockingCollection<Tuple<int, int, string>>();

public static void Example()
{
    List<Task<int>> tasks = new List<Task<int>>();
    for (int i = 0; i < 10; i++)
        tasks.Add(Task.Factory.StartNew((object state) =>
        {
            int id = (int)state;
            DoFirstStep(id);
            _progressMessages.Add(new Tuple<int, int, string>(
                id, 1, "10.0%"));
            DoSecondStep(id);
            _progressMessages.Add(new Tuple<int, int, string>(
                id, 2, "50.0%"));
            // ...
            return 1;
        },
        (object)i));

    Task logger = Task.Factory.StartNew(() =>
    {
        foreach (var m in _progressMessages.GetConsumingEnumerable())
            Console.WriteLine("Task {0}: Step {1}, progress {2}.",
                m.Item1, m.Item2, m.Item3);
    });

    Task.WaitAll(tasks.ToArray());
    _progressMessages.CompleteAdding(); // lets the logger's consuming loop end
    logger.Wait();

    Console.ReadLine();
}
private static void DoFirstStep(int id)
{
    Console.WriteLine("{0}: First step", id);
}

private static void DoSecondStep(int id)
{
    Console.WriteLine("{0}: Second step", id);
}
This sample doesn't show cancellation or error handling, and doesn't account for your requirement that your tasks may be long-running. Long-running tasks place special requirements on the scheduler. More discussion of this can be found at http://parallelpatterns.codeplex.com/; download the book draft and look at Chapter 3.
This is simply an approach for using the Task Parallel Library in a scenario like this. The TPL may well not be the best approach here.
If your web services are running inside ASP.NET (or a similar web application server) then you should also consider the likely impact of using threads from the thread pool to execute tasks, rather than service web requests:
How does Task Parallel Library scale on a terminal server or in a web application?
I don't think the solution you are looking for will involve the Task API. Or at least, not directly. It doesn't support the notion of percentage complete, and the Task/ContinueWith functions would need to participate in that logic because the data is only available at that level (only the final invocation of ContinueWith is in any position to know the percentage complete, and even then, computing it algorithmically would be a guess at best because it certainly doesn't know whether one task is going to take a lot longer than another). I suggest you create your own API to do this, possibly leveraging the Task API to do the actual work.
This might help: http://blog.stephencleary.com/2010/06/reporting-progress-from-tasks.html. In addition to reporting progress, this solution also enables updating form controls without getting the Cross-thread operation not valid exception.
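For completeness, on .NET 4.5+ the built-in `IProgress<T>` abstraction covers this pattern: the task owns its own progress and reports it through the interface. A minimal sketch, with placeholder step bodies; `SynchronousProgress` is a small helper written here for deterministic console/server use, while UI apps would normally use the built-in `Progress<T>`, which posts callbacks to the synchronization context it was created on:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal synchronous IProgress<T>; the built-in Progress<T> is the
// usual choice when callbacks should run on a captured UI context.
class SynchronousProgress<T> : IProgress<T>
{
    readonly Action<T> _handler;
    public SynchronousProgress(Action<T> handler) { _handler = handler; }
    public void Report(T value) { _handler(value); }
}

class MultiStepJob
{
    // Each step reports its fraction complete, so adding a step only
    // means adjusting the reported fractions, not a chain of
    // ContinueWith calls.
    public static async Task RunAsync(IProgress<double> progress)
    {
        await Task.Run(() => DoFirstStep());
        progress.Report(0.5);

        await Task.Run(() => DoSecondStep());
        progress.Report(1.0);
    }

    static void DoFirstStep() { /* placeholder work */ }
    static void DoSecondStep() { /* placeholder work */ }
}
```

Wiring it to the question's hypothetical `UpdateProgress` would look like `await MultiStepJob.RunAsync(new SynchronousProgress<double>(p => UpdateProgress($"{p:P0}")));`.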
