Multiple parallel awaitable tasks - c#

I'm experimenting with ways to improve performance in our ASP.NET applications. One of the things I'm looking at is using parallelism and making operations async to try to reduce processing time and improve throughput. I started by mocking up something we do fairly frequently: issuing multiple database retrievals to render a page.
public ActionResult Index()
{
    var dal = new Dal();
    var cases = new List<Case>();
    cases.AddRange( dal.GetAssignedCases() );
    cases.AddRange( dal.GetNewCases() );
    return View( "Cases", cases );
}
The two Dal methods use Thread.Sleep(2000) to simulate a query and just return a collection of hard-coded objects. I run this with Apache Bench using ab -c 1 -n 1 and it takes about four seconds. My first attempt to try to improve it was:
public ActionResult Index()
{
    var dal = new Dal();
    var assignedCases = Task.Factory.StartNew( () => dal.GetAssignedCases() );
    var newCases = Task.Factory.StartNew( () => dal.GetNewCases() );
    IEnumerable<Case>[] allCases = Task.WhenAll( assignedCases, newCases ).Result;
    return View( "Cases", allCases.SelectMany( c => c ) );
}
When I run this using the same ab command it shows about two seconds, which makes sense because I'm running two tasks each of which takes two seconds but they're running in parallel.
When I change the benchmark to 10 concurrent requests (i.e. ab -n 10 -c 10) I get the following.
Fulfilled   Original (ms)   Parallel (ms)
50%         4014            2038
66%         4015            2039
75%         4017            4011
The rest of the numbers up to 100% are similar in both columns.
I'm assuming that what I'm running into here is thread pool contention. About 2/3 of the requests are fulfilled quickly and after that stuff is sitting around waiting for threads to service the requests. So I think maybe if I added async to the mix I could get even more requests being served more quickly. That's where I start having problems and I don't know if the problem is the way I'm simulating long-running queries or the way I'm using the language feature or if I'm just completely on the wrong track and the light at the end of the tunnel is an on-coming train. :-)
The first thing I did was create a DalAsync class. In DalAsync I replaced the Thread.Sleep(2000) with await Task.Delay(2000), marked each method with the async keyword, and changed the return type from IEnumerable<Case> to Task<IEnumerable<Case>>. I then wrote a new controller method pieced together from information I've read in half a dozen blog posts and MSDN articles.
public async Task<ActionResult> Index()
{
    var dal = new DalAsync();
    var assignedCases = dal.GetAssignedCasesAsync();
    var newCases = dal.GetNewCasesAsync();
    var allCases = await Task.WhenAll( assignedCases, newCases );
    return View( "Cases", allCases.SelectMany( c => c ) );
}
When I run this using ab it never finishes, even with one request it ends up timing out. I also tried the following variation, which works but returns numbers almost identical to the original version (which sort of makes sense, because it seems I'm serializing the queries again).
var assignedCases = await dal.GetAssignedCasesAsync();
var newCases = await dal.GetNewCasesAsync();
var allCases = new List<Case>( assignedCases );
allCases.AddRange( newCases );
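For reference, a DalAsync method matching that description would look roughly like this (a sketch based on the description above rather than the exact code; GetNewCasesAsync is analogous):
public class DalAsync
{
    public async Task<IEnumerable<Case>> GetAssignedCasesAsync()
    {
        // Simulate a 2-second query without blocking the request thread.
        await Task.Delay(2000);
        return new List<Case> { /* hard-coded Case objects */ };
    }
    // GetNewCasesAsync() follows the same pattern.
}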
What I'd like to have happen is:
Run the two queries in parallel
When the controller is waiting for the Dal methods to respond, it frees up the thread and lets other requests execute.

Your first code sample should work; however, to my eyes it looks a bit strange. Task.WhenAll was introduced as a non-blocking operation, i.e. you would use await Task.WhenAll(myTasks). By using .Result you are turning it into a blocking operation, and it does not feel entirely natural to use it that way.
I think what you are really after is Task.WaitAll(params Task[]), which is designed to be a blocking operation.
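For example, a rough sketch using the tasks from your first sample (Task.WaitAll blocks the calling thread until both tasks have completed):
var assignedCases = Task.Factory.StartNew( () => dal.GetAssignedCases() );
var newCases = Task.Factory.StartNew( () => dal.GetNewCases() );
// Blocks the request thread until both tasks have finished.
Task.WaitAll( assignedCases, newCases );
return View( "Cases", assignedCases.Result.Concat( newCases.Result ) );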
Your second code sample, however, looks near perfect and is exactly what I would go for. Implementing asynchronous code throughout the code base always makes for a much cleaner implementation.

I was unable to reproduce your case (mine runs OK), but it seems to me like you are running into a deadlock. Try forcing your async tasks to return their results on a different synchronization context, like this:
public async Task<ActionResult> Index()
{
    var dal = new DalAsync();
    var assignedCases = Task.Run(async () => await dal.GetAssignedCasesAsync());
    var newCases = Task.Run(async () => await dal.GetNewCasesAsync());
    var allCases = await Task.WhenAll( assignedCases, newCases ).ConfigureAwait(false);
    return View( "Cases", allCases.SelectMany( c => c ) );
}

Related

C# Add to a List Asynchronously in API

I have an API which needs to be run in a loop for mass processing.
The current single API is:
public async Task<ActionResult<CombinedAddressResponse>> GetCombinedAddress(AddressRequestDto request)
We are not allowed to touch/modify the original single API. However, it can be run in bulk using a foreach statement. What is the best way to run this asynchronously without locks?
The current solution below just builds up a list; would this be it?
public async Task<ActionResult<List<CombinedAddressResponse>>> GetCombinedAddress(List<AddressRequestDto> requests)
{
    var combinedAddressResponses = new List<CombinedAddressResponse>();
    foreach (AddressRequestDto request in requests)
    {
        var newCombinedAddress = (await GetCombinedAddress(request)).Value;
        combinedAddressResponses.Add(newCombinedAddress);
    }
    return combinedAddressResponses;
}
Update:
In the debugger I have to go to combinedAddressResponse.Result.Value; combinedAddressResponse.Value is null.
Also, strangely, writing combinedAddressResponse.Result.Value gives the error below: "ActionResult does not contain a definition for 'Value' and no accessible extension method".
I'm writing this code off the top of my head without an IDE or sleep, so please comment if I'm missing something or there's a better way.
But effectively I think you want to run all your requests at once (not sequentially) doing something like this:
public async Task<ActionResult<List<CombinedAddressResponse>>> GetCombinedAddress(List<AddressRequestDto> requests)
{
    var combinedAddressResponses = new List<CombinedAddressResponse>(requests.Count);
    var tasks = new List<Task<ActionResult<CombinedAddressResponse>>>(requests.Count);
    foreach (var request in requests)
    {
        tasks.Add(Task.Run(async () => await GetCombinedAddress(request)));
    }
    // This waits for all the tasks to complete
    await Task.WhenAll(tasks);
    combinedAddressResponses.AddRange(tasks.Select(x => x.Result.Value));
    return combinedAddressResponses;
}
Looking for a way to speed things up and run them in parallel, thanks.
What you need is "asynchronous concurrency". I use the term "concurrency" to mean "doing more than one thing at a time", and "parallel" to mean "doing more than one thing at a time using threads". Since you're on ASP.NET, you don't want to use additional threads; you'd want to use a form of concurrency that works asynchronously (which uses fewer threads). So Parallel and Task.Run should not be part of your solution.
The way to do asynchronous concurrency is to build a collection of tasks, and then use await Task.WhenAll. E.g.:
public async Task<ActionResult<IReadOnlyList<CombinedAddressResponse>>> GetCombinedAddress(List<AddressRequestDto> requests)
{
    // Build the collection of tasks by doing an asynchronous operation for each request.
    var tasks = requests.Select(async request =>
    {
        var combinedAddressResponse = await GetCombinedAddress(request);
        return combinedAddressResponse.Value;
    }).ToList();
    // Wait for all the tasks to complete and get the results.
    var results = await Task.WhenAll(tasks);
    return results;
}

Why doesn't async execution work in `select`?

I am calling this action (ASP.Net Core 2.0) over AJAX:
[HttpGet]
public async Task<IActionResult> GetPostsOfUser(Guid userId, Guid? categoryId)
{
    var posts = await postService.GetPostsOfUserAsync(userId, categoryId);
    var postVMs = await Task.WhenAll(
        posts.Select(async p => new PostViewModel
        {
            PostId = p.Id,
            PostContent = p.Content,
            PostTitle = p.Title,
            WriterAvatarUri = fileService.GetFileUri(p.Writer.Profile.AvatarId, Url),
            WriterFullName = p.Writer.Profile.FullName,
            WriterId = p.WriterId,
            Liked = await postService.IsPostLikedByUserAsync(p.Id, UserId), // TODO this takes too long!!!!
        }));
    return Json(postVMs);
}
But it takes too long to respond (20 seconds!) when I have many post objects in the posts collection (e.g. 30 posts).
That is caused by this line: await postService.IsPostLikedByUserAsync.
Digging into the source code of this function:
public async Task<bool> IsPostLikedByUserAsync(Guid postId, Guid userId)
{
    logger.LogDebug("Place 0 passed!");
    var user = await dbContext.Users
        .SingleOrDefaultAsync(u => u.Id == userId);
    logger.LogDebug("Place 1 passed!");
    var post = await dbContext.Posts
        .SingleOrDefaultAsync(u => u.Id == postId);
    logger.LogDebug("Place 2 passed!");
    if (user == null || post == null)
        return false;
    return post.PostLikes.SingleOrDefault(pl => pl.UserId == userId) != null;
}
My investigation showed that, after some seconds, ALL the "Place 1 passed!" log lines are written together, one for every post object. In other words, it seems that every post waits until the previous post finishes executing this part:
var user = await dbContext.Users
    .Include(u => u.PostLikes)
    .SingleOrDefaultAsync(u => u.Id == userId);
And then, when every post finishes that part, log place 1 is written for all post objects.
The same happens for log place 2: every single post seems to wait for the previous post to finish executing var post = await dbContext.Pos..., and only then does the function go on to execute log place 2 (a few seconds after log 1, ALL the log 2 lines appear together).
That means I have no asynchronous execution here. Could someone help me understand and solve this problem?
UPDATE:
Changing the code a bit to look like this:
/// <summary>
/// Returns all posts of a user in a specific category.
/// If the category is null, then all of that user's posts will be returned from all categories.
/// </summary>
/// <param name="userId"></param>
/// <param name="categoryId"></param>
/// <returns></returns>
[Authorize]
[HttpGet]
public async Task<IActionResult> GetPostsOfUser(Guid userId, Guid? categoryId)
{
    var posts = await postService.GetPostsOfUserAsync(userId, categoryId);
    var i = 0;
    var j = 0;
    var postVMs = await Task.WhenAll(
        posts.Select(async p =>
        {
            logger.LogDebug("DEBUG NUMBER HERE BEFORE RETURN: {0}", i++);
            var isLiked = await postService.IsPostLikedByUserAsync(p.Id, UserId); // TODO this takes too long!!!!
            logger.LogDebug("DEBUG NUMBER HERE AFTER RETURN: {0}", j++);
            return new PostViewModel
            {
                PostId = p.Id,
                PostContent = p.Content,
                PostTitle = p.Title,
                WriterAvatarUri = fileService.GetFileUri(p.Writer.Profile.AvatarId, Url),
                WriterFullName = p.Writer.Profile.FullName,
                WriterId = p.WriterId,
                Liked = isLiked,
            };
        }));
    return Json(postVMs);
}
That shows that the "DEBUG NUMBER HERE AFTER RETURN" line is printed for ALL the select lambdas together, which means that ALL the select lambdas wait for each other before going further. How can I prevent that?
UPDATE 2
Substituting the previous IsPostLikedByUserAsync method with the following one:
public async Task<bool> IsPostLikedByUserAsync(Guid postId, Guid userId)
{
    await Task.Delay(1000);
    return false; // dummy value so the stub compiles
}
This showed no problem with asynchronous execution; I had to wait only 1 second, not 1 x 30.
That means it is something specific to EF.
Why does the problem happen ONLY with Entity Framework (with the original function)? I notice the problem even with only 3 post objects! Any new ideas?
The deductions you've made are not necessarily true.
If these methods were firing in a non-asynchronous fashion, you would see all of the logs from one method invocation reach the console before the next method invocation's console logs. You would see the pattern 123123123 instead of 111222333. What you are seeing is that the three awaits seem to synchronize after some asynchronous batching occurs. Thus it appears that the operations are made in stages. But why?
There are a couple reasons this might happen. Firstly, the scheduler may be scheduling all of your tasks to the same thread, causing each task to be queued and then processed when the previous execution flow is complete. Since Task.WhenAll is awaited outside of the Select loop, all synchronous portions of your async methods are executed before any one Task is awaited, therefore causing all of the "first" log invocations to be called immediately following the invocation of that method.
So then what's the deal with the others syncing up later? The same thing is happening. Once all of your methods hit their first await, the execution flow is yielded to whatever code invoked that method. In this case, that is your Select statement. Behind the scenes, however, all of those async operations are processing. This creates a race condition.
Shouldn't there be some chance of the third log of some methods being called before the second log of another method, due to varying request/response times? Most of the time, yes. Except you've introduced a sort of "delay" into the equation, making the race condition more predictable. Console logging is actually quite slow, and is also synchronous. This causes all of your methods to block at the logging line until the previous logs have completed. But blocking, by itself, may not be enough to make all of those log calls sync up in pretty little batches. There may be another factor at play.
It would appear that you are querying a database. Since this is an IO operation, it takes considerably longer to complete than other operations (including console logging, probably). This means that, although the queries aren't synchronous, they will in all likelihood receive a response after all of the queries/requests have already been sent, and therefore after the second log line from each method has already executed. The remaining log lines are processed eventually, and therefore fall into the last batch.
Your code is being processed asynchronously. It just doesn't look quite how you might expect. Async doesn't mean random order. It just means some code flow is paused until a later condition is met, allowing other code to be processed in the mean time. If the conditions happen to sync up, then so does your code flow.
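To see this batching effect in isolation, here is a minimal standalone sketch (hypothetical console code, not the original service): everything before the first await in each lambda runs immediately when the sequence is enumerated, and the continuations complete together later.
var tasks = Enumerable.Range(0, 3).Select(async i =>
{
    Console.WriteLine($"before {i}"); // all three print immediately, as one batch
    await Task.Delay(1000);           // the three delays overlap
    Console.WriteLine($"after {i}");  // these print together about a second later
});
await Task.WhenAll(tasks);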
Actually, async execution does work, but it doesn't work the way you expect. The Select statement starts tasks for all the posts and they all run concurrently, which is what leads to your performance problems.
The best approach to achieve the expected behavior is to reduce the degree of parallelism. There are no built-in tools to do that, so I can offer two workarounds:
Use the TPL Dataflow library. It is developed by Microsoft but not very popular. You can easily find enough examples, though (see the sketch after the SemaphoreSlim example below).
Manage the parallel tasks yourself with SemaphoreSlim. It would look like this:
var semaphore = new SemaphoreSlim(degreeOfParallelism);
var cts = new CancellationTokenSource();
var postVMs = await Task.WhenAll(
    posts.Select(async p =>
    {
        await semaphore.WaitAsync(cts.Token).ConfigureAwait(false);
        try
        {
            cts.Token.ThrowIfCancellationRequested();
            return new PostViewModel
            {
                PostId = p.Id,
                PostContent = p.Content,
                PostTitle = p.Title,
                WriterAvatarUri = fileService.GetFileUri(p.Writer.Profile.AvatarId, Url),
                WriterFullName = p.Writer.Profile.FullName,
                WriterId = p.WriterId,
                Liked = await postService.IsPostLikedByUserAsync(p.Id, UserId), // TODO this takes too long!!!!
            };
        }
        finally
        {
            semaphore.Release();
        }
    }));
And don't forget to use .ConfigureAwait(false) whenever it's possible.
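For the first workaround, a rough sketch of the TPL Dataflow approach might look like this (it assumes the System.Threading.Tasks.Dataflow package and a Post entity type, and the view-model initializer is abbreviated; treat it as a sketch, not a drop-in implementation):
// A TransformBlock runs the async delegate with a bounded degree of parallelism.
var block = new TransformBlock<Post, PostViewModel>(
    async p => new PostViewModel
    {
        PostId = p.Id,
        Liked = await postService.IsPostLikedByUserAsync(p.Id, UserId),
        // ...remaining properties as in the original code...
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = degreeOfParallelism });
// Collect the results as they come out of the block.
var postVMs = new List<PostViewModel>();
var collector = new ActionBlock<PostViewModel>(vm => postVMs.Add(vm));
block.LinkTo(collector, new DataflowLinkOptions { PropagateCompletion = true });
foreach (var p in posts)
    block.Post(p);
block.Complete();
await collector.Completion;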

Threading in c# while making put calls

I am new to the threading world of C#. I read that there are different ways to do threading, such as sequential.
My scenario is below. Which one would be more suitable for it?
I have a list of complex objects. I will be making calls to a PUT endpoint for each object (the body of the PUT) separately. There can be 1000 or more objects in the list, and I cannot pass all the objects at once, so I have to pass each object in a separate call to the PUT endpoint. That means I have to make 1000 separate calls if there are 1000 objects.
Each PUT call is independent of the others, but I have to store properties from the response of each call.
I was thinking of applying a threading concept to the above but am not sure which one or how to do it.
Any suggestions would be greatly appreciated.
Thanks in advance.
As per the comments below, I am putting the method signatures here and adding more details.
I have an IEnumerable<CamelList>. For each camel, I have to make a PUT request and update the table from the response of each call. I will write a new method that accepts this list and uses the two methods below to make the call and update the table. I have to ensure I am making no more than 100 calls at the same time, and the API I am calling can be called by the same user 100 times per minute.
We have a method:
public Camel SendRequest(handler, uri, route, Camel); // basically takes all the parameters and provides you the Camel
We have a method:
public void updateInTable(Entity Camel); // updates the table
HTTP calls are typically made using the HttpClient class, whose HTTP methods are already asynchronous. You don't need to create your own threads or tasks.
All asynchronous methods return a Task or Task<T> value. You need to use the await keyword to wait for the operation to complete asynchronously; that means the thread is released until the operation completes. When that happens, execution resumes after the await.
You can see how to write a PUT request here. The example uses the PutAsJsonAsync method to reduce the boilerplate code needed to serialize a Product class into a string and create a StringContent class with the correct content type, e.g.:
var response = await client.PutAsJsonAsync($"api/products/{product.Id}", product);
response.EnsureSuccessStatusCode();
If you want to PUT 1000 products, all you need is an array or list with the products. You can use LINQ to make multiple calls and await the tasks they return at the end:
var callTasks = myProducts.Select(product => client.PutAsJsonAsync($"api/products/{product.Id}", product));
var responses = await Task.WhenAll(callTasks);
This means that you have to wait for all requests to finish before you can check if any one succeeded. You can change the body of Select to await the response itself:
var callTasks = myProducts.Select(async product =>
{
    var response = await client.PutAsJsonAsync($"api/products/{product.Id}", product);
    if (!response.IsSuccessStatusCode)
    {
        //Log the error
    }
    return response.StatusCode;
});
var responses = await Task.WhenAll(callTasks);
It's better to convert the lambda into a separate method though, e.g. PutProductAsync:
async Task<HttpStatusCode> PutProductAsync(Product product, HttpClient client)
{
    var response = await client.PutAsJsonAsync($"api/products/{product.Id}", product);
    if (!response.IsSuccessStatusCode)
    {
        //Log the error
    }
    return response.StatusCode;
}
var callTasks = myProducts.Select(product => PutProductAsync(product, client));
var responses = await Task.WhenAll(callTasks);
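The question also mentions a limit of 100 calls at a time. One way to bound the fan-out in this pattern is a SemaphoreSlim; the following is only a sketch building on PutProductAsync above (it caps in-flight requests but does not enforce the per-minute quota):
var throttle = new SemaphoreSlim(100); // at most 100 PUTs in flight at once
var callTasks = myProducts.Select(async product =>
{
    await throttle.WaitAsync();
    try
    {
        return await PutProductAsync(product, client);
    }
    finally
    {
        throttle.Release();
    }
});
var responses = await Task.WhenAll(callTasks);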
I'm going to suggest using Microsoft's Reactive Framework for this. You need to NuGet "System.Reactive" to get the bits.
Then you can do this:
var urls = new string[1000]; //somehow populated;
Func<string, HttpContent, IObservable<string>> putCall = (u, c) =>
    Observable
        .Using(
            () => new HttpClient(),
            hc =>
                from resp in Observable.FromAsync(() => hc.PutAsync(u, c))
                from body in Observable.FromAsync(() => resp.Content.ReadAsStringAsync())
                select body);
var callsPerTimeSpanAllowed = 100;
var timeSpanAllowed = TimeSpan.FromMinutes(1.0);
IObservable<IList<string>> bufferedIntervaledUrls =
    Observable.Zip(
        Observable.Interval(timeSpanAllowed),
        urls.ToObservable().Buffer(callsPerTimeSpanAllowed),
        (_, buffered_urls) => buffered_urls);
var query =
    from bufferedUrls in bufferedIntervaledUrls
    from url in bufferedUrls
    from result in putCall(url, new StringContent("YOURCONTENTHERE"))
    select new { url, result };
IDisposable subscription =
    query
        .Subscribe(
            x => { /* do something with each `x.url` & `x.result` */ },
            () => { /* do something when it is all finished */ });
This code is breaking the URLs into blocks (or buffers) of 100 and putting them on a timeline (or interval) of 1 minute apart. It then calls the putCall for each URL and returns the result.
It's probably a little advanced for you now, but I thought this answer might be useful just to see how clean this can be.

Correct way to link Tasks together when return values are needed at different times #2

I asked a question yesterday and, unfortunately, even with the answers provided, I'm still hitting stumbling blocks about how to do things correctly... My issue is that my code actually works, but I'm a complete novice at concurrency programming and it feels like I'm not programming the correct way and, most importantly, I'm afraid of developing bad habits.
To make up a simplistic example to elaborate on yesterday's question, suppose I had the following methods:
static Task<IEnumerable<MyClass>> Task1(CancellationToken ct)
static Task<IEnumerable<int>> Task2(CancellationToken ct, List<string> StringList)
static Task<IEnumerable<String>> Task3(CancellationToken ct)
static Task<IEnumerable<Double>> Task4(CancellationToken ct)
static Task Task5(CancellationToken ct, IEnumerable<int> Task2Info, IEnumerable<string> Task3Info, IEnumerable<double> Task4Info)
static Task Task6(CancellationToken ct, IEnumerable<int> Task2Info, IEnumerable<MyClass> Task1Info)
And the code I've written that utilizes them looks as follows:
static Task execute(CancellationToken ct)
{
    IEnumerable<MyClass> Task1Info = null;
    List<string> StringList = null;
    IEnumerable<int> Task2Info = null;
    IEnumerable<string> Task3Info = null;
    IEnumerable<double> Task4Info = null;
    var TaskN = Task.Run(() =>
        {
            Task1Info = Task1(ct).Result;
        }, ct)
        .ContinueWith(res =>
        {
            StringList = Task1Info.Select(k => k.StringVal).ToList();
            Task2Info = Task2(ct, StringList).Result;
        }, ct);
    return Task.Run(() =>
    {
        return Task.WhenAll
        (
            TaskN,
            Task.Run(() => { Task3Info = Task3(ct).Result; }, ct),
            Task.Run(() => { Task4Info = Task4(ct).Result; }, ct)
        )
        .ContinueWith(res =>
        {
            Task5(ct, Task2Info, Task3Info, Task4Info).Wait();
        }, ct)
        .ContinueWith(res =>
        {
            Task6(ct, Task2Info, Task1Info).Wait();
        }, ct);
    });
}
In other words:
I need the results of Task1 to calculate StringList and to run Task2
Task2, Task3 and Task4 can all run concurrently
I need the return values from all of the above for later method calls
Once these are run, I use their results to run Task5
Once Task5 is run, I use all the results in running Task6
As a simple explanation, imagine the first portion is data gathering, the second is data cleansing and the third data reporting
Like I said, my challenge is that this actually runs, but I simply feel that it's more of a "hack" than the right way to program. Concurrency programming is very new to me and I definitely want to learn the best way this should be done...
I would feel better about my answer if I could see how your TaskN methods were implemented. Specifically, I would want to validate the need for your TaskN method calls to be wrapped inside calls to Task.Run() if they are already returning a Task return value.
But personally, I would just use the async-await style of programming. I find it fairly easy to read/write. Documentation can be found here.
I would then envision your code looking something like this:
static async Task execute(CancellationToken ct)
{
    // execute and wait for task1 to complete.
    IEnumerable<MyClass> Task1Info = await Task1(ct);
    List<string> StringList = Task1Info.Select(k => k.StringVal).ToList();
    // start tasks 2 through 4
    Task<IEnumerable<int>> t2 = Task2(ct, StringList);
    Task<IEnumerable<string>> t3 = Task3(ct);
    Task<IEnumerable<Double>> t4 = Task4(ct);
    // now that tasks 2 to 4 have been started,
    // wait for all 3 of them to complete before continuing.
    IEnumerable<int> Task2Info = await t2;
    IEnumerable<string> Task3Info = await t3;
    IEnumerable<Double> Task4Info = await t4;
    // execute and wait for task 5 to complete
    await Task5(ct, Task2Info, Task3Info, Task4Info);
    // finally, execute and wait for task 6 to complete
    await Task6(ct, Task2Info, Task1Info);
}
I don't fully understand the code in question but it is very easy to link tasks by dependency. Example:
var t1 = ...;
var t2 = ...;
var t3 = Task.WhenAll(t1, t2).ContinueWith(_ => RunT3(t1.Result, t2.Result));
You can express an arbitrary dependency DAG like that. This also makes it unnecessary to store into local variables, although you can do that if you need.
That way you also can get rid of the awkward Task.Run(() => { Task3Info = Task3(ct).Result; }, ct) pattern. This is equivalent to Task3(ct) except for the store to the local variable which probably should not exist in the first place.
.ContinueWith(res =>
{
Task5(ct, Task2Info, Task3Info, Task4Info).Wait();
}
This probably should be
.ContinueWith(_ => Task5(ct, Task2Info, Task3Info, Task4Info)).Unwrap()
Does that help? Leave a comment.
it feels like I'm not programming the correct way and, most importantly, I'm afraid of developing bad habits.
I definitely want to learn the best ways this should be done...
First, distinguish between asynchronous and parallel, which are two different forms of concurrency. An operation is a good match for "asynchronous" if it doesn't need a thread to do its work - e.g., I/O and other logical operations where you wait for something like timers. "Parallelism" is about splitting up CPU-bound work across multiple cores - you need parallelism if your code is maxing out one CPU at 100% and you want it to run faster by using other CPUs.
Next, follow a few guidelines for which APIs are used in which situations. Task.Run is for pushing CPU-bound work to other threads. If your work isn't CPU-bound, you shouldn't be using Task.Run. Task<T>.Result and Task.Wait are even more on the parallel side; with few exceptions, they should really only be used for dynamic task parallelism. Dynamic task parallelism is extremely powerful (as #usr pointed out, you can represent any DAG), but it's also low-level and awkward to work with. There are almost always better approaches. ContinueWith is another example of an API that is for dynamic task parallelism.
Since your method signatures return Task, I'm going to assume that they're naturally asynchronous (meaning: they don't use Task.Run, and are probably implemented with I/O). In that case, Task.Run is the incorrect tool to use. The proper tools for asynchronous work are the async and await keywords.
So, taking your desired behavior one at a time:
// I need the results of Task1 to calculate StringList and to run Task2
var task1Result = await Task1(ct);
var stringList = CalculateStringList(task1Result);
// Task2, Task3 and Task4 can all run concurrently
var task2 = Task2(ct, stringList);
var task3 = Task3(ct);
var task4 = Task4(ct);
await Task.WhenAll(task2, task3, task4);
// I need the return values from all of the above for later method calls
var task2Result = await task2;
var task3Result = await task3;
var task4Result = await task4;
// Once these are run, I use their results to run Task5
await Task5(ct, task2Result, task3Result, task4Result);
// Once Task5 is run, I use all the results in running Task6
await Task6(ct, task2Result, task1Result);
For more about what APIs are appropriate in which situations, I have a Tour of Task on my blog, and I cover a lot of concurrency best practices in my book.

Alternative to Task.Run

I have an ASP.NET MVC 4 program and I wrote the following code where I wanted to make the Results() method async:
public async Task<ActionResult> Results()
{
    var result1 = SomeMethodAsync(1);
    var result2 = SomeMethodAsync(2);
    var result3 = SomeMethodAsync(3);
    await Task.WhenAll(result1, result2, result3);
    ViewBag.Result1 = result1.Result;
    ViewBag.Result2 = result2.Result;
    ViewBag.Result3 = result3.Result;
    return View();
}
public async Task<int> SomeMethodAsync(int i)
{
    //do some logic
    //make db call
    return await Task.Run( () => DbCall(i));
}
public int DbCall(int i)
{
    //make db call
    return valueFromDb;
}
Since I am not using Entity Framework 6 I cannot make DbCall() async. I was reading that it's not a good idea to use Task.Run in ASP.NET projects, since Task.Run will borrow a thread from the ASP.NET thread pool and can therefore cause queuing issues with a large number of requests (as there will be fewer threads available for processing incoming requests).
1) How can I make my method async without using Task.Run?
2) Is it better to make the method synchronous than using Task.Run ?
2) Is it better to make the method synchronous than using Task.Run ?
Yes.
In this case, since you don't have naturally-asynchronous APIs available from EF6, the next best solution is to just be synchronous.
Note that even if you did have asynchronous APIs, a single DbContext can only handle one operation at a time anyway. You'd have to create multiple contexts in order to run multiple simultaneous queries, and that can get complex.
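For illustration, a synchronous version of the code in the question might look like this (a sketch; it simply drops the async wrapper and the Task.Run call):
public ActionResult Results()
{
    // Each call blocks the request thread until the database returns;
    // no extra thread-pool thread is borrowed via Task.Run.
    ViewBag.Result1 = DbCall(1);
    ViewBag.Result2 = DbCall(2);
    ViewBag.Result3 = DbCall(3);
    return View();
}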
