We have a database with around 400k elements we need to compute. Below is a sample of an orchestrator function.
[FunctionName("Crawl")]
public static async Task<List<string>> RunOrchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context)
{
if (!context.IsReplaying)
{
}
WriteLine("In orchistration");
var outputs = new List<string>();
var tasks = new Task<string>[3];
var retryOptions = new RetryOptions(
firstRetryInterval: TimeSpan.FromSeconds(60),
maxNumberOfAttempts: 3);
// Replace "hello" with the name of your Durable Activity Function.
tasks[0] = context.CallActivityWithRetryAsync<string>("Crawl_Hello", retryOptions, "Tokyo");
tasks[1] = context.CallActivityWithRetryAsync<string>("Crawl_Hello", retryOptions, "Seattle");
tasks[2] = context.CallActivityWithRetryAsync<string>("Crawl_Hello", retryOptions, "London");
outputs.AddRange(await Task.WhenAll(tasks));
return outputs;
}
Every time an activity is called, the orchestration function is called. But I don't want to get 400k items from the database each time an activity is called. Would I just add all the activity code inside the if statement, or what is the right approach here? I can't see that working with the WhenAll function.
Looks like you've figured out the approach for this as you've mentioned in your other query but elaborating the same here for the benefit of others.
Ideally, you should have an activity function that first fetches all the data you need, batch it up, and call another activity function that processes that data.
Since you have a large number of elements to compute, it's best to split the compute into separate sub-orchestrators, because the fan-in operation is performed on a single instance.
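For illustration, a rough sketch of that shape (the activity name FetchAllItems, the sub-orchestrator name ProcessBatch, and the batch size are placeholders, not anything from your code):
[FunctionName("Crawl")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // Fetch the ~400k items once, inside an activity. The result is
    // checkpointed, so the database is hit only once even across replays.
    var allItems = await context.CallActivityAsync<List<string>>("FetchAllItems", null);

    // Batch the items and hand each batch to a sub-orchestrator so that
    // no single instance has to fan-in hundreds of thousands of results.
    const int batchSize = 1000;
    var subOrchestrations = new List<Task>();
    for (int i = 0; i < allItems.Count; i += batchSize)
    {
        var batch = allItems.Skip(i).Take(batchSize).ToList();
        subOrchestrations.Add(context.CallSubOrchestratorAsync("ProcessBatch", batch));
    }

    await Task.WhenAll(subOrchestrations);
}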
For further reading, there are some documented performance targets that could help when deploying durable functions.
Related
I've been creating a service using C# in Azure Functions. I've read guides on the best usage of async/await but don't understand its value in the context of Azure Functions.
In my Azure function, I have 3 calls being made to an external API. I tried to use async/await to kick off my API calls. The idea is that the first two tasks return a list each, which will be concatenated, and then compared against the third list.
After this comparison is complete, all the items are then sent to a storage container queue, for processing by another function that uses a queue trigger.
My async implementation is below:
var firstListTask = GetResourceListAsync(1);
var secondListTask = GetResourceListAsync(2);
var thirdListTask = GetOtherResourceListAsync(1);
var completed = await Task.WhenAll(firstListTask, secondListTask);
var resultList = completed[0].Concat(completed[1]);
var compareList = await thirdListTask;
// LINQ here to filter out resultList based on compareList
Running the above, I get an execution time of roughly 38 seconds.
My understanding of the async implementation is that I kick off all 3 async calls to get my lists.
The first two tasks are awaited with 'await Task.WhenAll...' - at this point the thread exits the async method and 'does something else' until the API returns the payload
API payload is received, the method is then resumed and continues executing the next instruction (concatenating the two lists)
The third task is then awaited with 'await thirdListTask', which exits the async method and 'does something else' until the API returns the payload
API payload is received, the method is then resumed and continues executing the next instruction (filtering lists)
Now if I run the same code synchronously, I get an execution time of about 40 seconds:
var firstList = GetResourceList(1);
var secondList = GetResourceList(2);
var resultList = firstList.Concat(secondList);
var compareList = GetOtherResourceList(1);
var finalList = // LINQ query to filter out resultList based on compareList
I can see that the async function runs 2 seconds faster than the sync function. I'm assuming this is because thirdListTask is kicked off at the same time as firstListTask and secondListTask?
My problem with the async implementation is that I don't understand what 'does something else' entails in the context of Azure Functions. From my understanding there is nothing else to do other than the operations on the next line, but it can't progress there until the payload has returned from the API.
Moreover, is the following code sample doing the same thing as my first async implementation? I'm extremely confused seeing examples of Azure Functions that use await for each async call, just to await another call in the next line.
var firstList = await GetResourceListAsync(1);
var secondList = await GetResourceListAsync(2);
var resultList = firstList.Concat(secondList);
var compareList = await GetOtherResourceListAsync(1);
// LINQ here to filter out resultList based on compareList
I've tried reading MS best practice for Azure Functions and similar questions around async/await on stackoverflow, but I can't seem to wrap my head around the above. Can anyone help simplify this?
var firstListTask = GetResourceListAsync(1);
var secondListTask = GetResourceListAsync(2);
var thirdListTask = GetOtherResourceListAsync(1);
This starts all 3 tasks. All 3 API calls are now running.
var completed = await Task.WhenAll(firstListTask, secondListTask);
This asynchronously waits until both tasks finish. It frees up the thread to go "do something else". What is this something else? Whatever the framework needs it to be. It's a resource being freed, so it could be used to run another async operation's continuation, etc.
var compareList = await thirdListTask;
At this point, your API call has most likely completed already, as it was started together with the other 2. If it completed, the await will pull out the value, or throw an exception if the task was faulted. If it is still ongoing, it will asynchronously wait for it to complete, freeing up the thread to "go do something else".
var firstList = await GetResourceListAsync(1);
var secondList = await GetResourceListAsync(2);
var resultList = firstList.Concat(secondList);
var compareList = await GetOtherResourceListAsync(1);
This is different from your first example. If, e.g., all your API calls take 5 seconds to complete, the total running time will be 15 seconds, as you sequentially start each call and wait for it to complete. In your first example, the total running time will be roughly 5 seconds, so 3 times quicker.
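If it helps to see that difference in isolation, here is a minimal console sketch (Task.Delay stands in for the API calls; nothing here is specific to Azure Functions):
static async Task<List<int>> FakeApiCallAsync(int id)
{
    await Task.Delay(TimeSpan.FromSeconds(5)); // stand-in for the HTTP call
    return new List<int> { id };
}

static async Task Main()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();

    // Concurrent: all three calls are in flight at once => ~5 seconds total.
    var t1 = FakeApiCallAsync(1);
    var t2 = FakeApiCallAsync(2);
    var t3 = FakeApiCallAsync(3);
    await Task.WhenAll(t1, t2, t3);
    Console.WriteLine($"Concurrent: {sw.Elapsed}");

    sw.Restart();

    // Sequential: each call starts only after the previous one finishes => ~15 seconds.
    await FakeApiCallAsync(1);
    await FakeApiCallAsync(2);
    await FakeApiCallAsync(3);
    Console.WriteLine($"Sequential: {sw.Elapsed}");
}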
I have an existing Function App with 2 Functions and a storage queue. F1 is triggered by a message in a service bus topic. For each message received, F1 calculates some sub-tasks (T1, T2, ...) which have to be executed with varying amounts of delay, e.g. T1 to be fired after 3 mins, T2 after 5 mins, etc. F1 posts messages to a storage queue with appropriate visibility timeouts (to simulate the delay) and F2 is triggered whenever a message becomes visible in the queue. All works fine.
I now want to migrate this app to use 'Durable Functions'. F1 now only starts the orchestrator. The orchestrator code is something as follows -
public static async Task Orchestrator([OrchestrationTrigger] DurableOrchestrationContext context, TraceWriter log)
{
var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
List<Task> tasks = new List<Task>();
foreach (var value in results)
{
var pnTask = context.CallActivityAsync("PerformSubTask", value);
tasks.Add(pnTask);
}
// don't await as we want to fire and forget. No fan-in!
//await Task.WhenAll(tasks);
}
[FunctionName("PerformSubTask")]
public async static Task Run([ActivityTrigger]TaskInfo info, TraceWriter log)
{
TimeSpan timeDifference = DateTime.UtcNow - info.Origin.ToUniversalTime();
TimeSpan delay = TimeSpan.FromSeconds(info.DelayInSeconds);
var actualDelay = timeDifference > delay ? TimeSpan.Zero : delay - timeDifference;
//will still keep the activity function running and incur costs??
await Task.Delay(actualDelay);
//perform subtask work after delay!
}
I would only like to fan out (no fan-in to collect the results) and start the subtasks. The orchestrator starts all the tasks and avoids calling 'await Task.WhenAll'. The activity function calls 'Task.Delay' to wait for the specified amount of time and then does its work.
My questions
Does it make sense to use Durable Functions for this workflow?
Is this the right approach to orchestrate 'Fan-out' workflow?
I do not like the fact that the activity function runs for the specified amount of time (3 or 5 mins) doing nothing. It will incur costs, won't it?
Also, if a delay of more than 10 minutes is required, there is no way for an activity function to succeed with this approach!
My earlier attempt to avoid this was to use 'CreateTimer' in the orchestrator and then add the activity as a continuation, but I see only timer entries in the 'History' table. The continuation does not fire! Am I violating the constraint for orchestrator code - 'Orchestrator code must never initiate any async operation'?
foreach (var value in results)
{
//calculate time to start
var timeToStart = ;
var pnTask = context.CreateTimer(timeToStart, CancellationToken.None).ContinueWith(t => context.CallActivityAsync("PerformSubTask", value));
tasks.Add(pnTask);
}
UPDATE: using the approach suggested by Chris
Activity that calculates subtasks and delays
[FunctionName("CalculateTasks")]
public static List<TaskInfo> CalculateTasks([ActivityTrigger]string input,TraceWriter log)
{
//in reality time is obtained by calling an endpoint
DateTime currentTime = DateTime.UtcNow;
return new List<TaskInfo> {
new TaskInfo{ DelayInSeconds = 10, Origin = currentTime },
new TaskInfo{ DelayInSeconds = 20, Origin = currentTime },
new TaskInfo{ DelayInSeconds = 30, Origin = currentTime },
};
}
public static async Task Orchestrator([OrchestrationTrigger] DurableOrchestrationContext context, TraceWriter log)
{
var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
var currentTime = context.CurrentUtcDateTime;
List<Task> tasks = new List<Task>();
foreach (var value in results)
{
TimeSpan timeDifference = currentTime - value.Origin;
TimeSpan delay = TimeSpan.FromSeconds(value.DelayInSeconds);
var actualDelay = timeDifference > delay ? TimeSpan.Zero : delay - timeDifference;
var timeToStart = currentTime.Add(actualDelay);
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(t => context.CallActivityAsync("PerformSubtask", value));
tasks.Add(delayedActivityCall);
}
await Task.WhenAll(tasks);
}
Simply scheduling tasks from within the orchestrator seems to work. In my case I am calculating the tasks and the delays in another activity (CalculateTasks) before the loop. I want the delays to be calculated using the 'current time' at which the activity ran, so I am using DateTime.UtcNow in the activity. This somehow does not play well when used in the orchestrator: the activities specified by 'ContinueWith' just don't run and the orchestrator stays in the 'Running' state.
Can I not use the time recorded by an activity from within the orchestrator?
UPDATE 2
So the workaround suggested by Chris works!
Since I do not want to collect the results of the activities I avoid calling 'await Task.WhenAll(tasks)' after scheduling all the activities. I do this in order to reduce contention on the control queue, i.e. to be able to start another orchestration if required. Nevertheless the status of the orchestrator is still 'Running' until all the activities finish running. I guess it moves to 'Completed' only after the last activity posts a 'done' message to the control queue.
Am I right? Is there any way to free the orchestrator earlier, i.e. right after scheduling all the activities?
The ContinueWith approach worked fine for me. I was able to simulate a version of your scenario using the following orchestrator code:
[FunctionName("Orchestrator")]
public static async Task Orchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context,
TraceWriter log)
{
var tasks = new List<Task>(10);
for (int i = 0; i < 10; i++)
{
int j = i;
DateTime timeToStart = context.CurrentUtcDateTime.AddSeconds(10 * j);
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(t => context.CallActivityAsync("PerformSubtask", j));
tasks.Add(delayedActivityCall);
}
await Task.WhenAll(tasks);
}
And for what it's worth, here is the activity function code.
[FunctionName("PerformSubtask")]
public static void Activity([ActivityTrigger] int j, TraceWriter log)
{
log.Warning($"{DateTime.Now:o}: {j:00}");
}
From the log output, I saw that all activity invocations ran 10 seconds apart from each other.
Another approach would be to fan out to multiple sub-orchestrations (like #jeffhollan suggested), each of which is simply a short sequence of a durable timer delay and your activity call.
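A sketch of that sub-orchestration shape, reusing your TaskInfo type (the orchestrator name here is illustrative):
[FunctionName("DelayedSubtaskOrchestrator")]
public static async Task DelayedSubtaskOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    var info = context.GetInput<TaskInfo>();

    // A plain two-step sequence: durable timer first, then the activity.
    // No ContinueWith is needed, so none of the thread-affinity issues apply.
    DateTime timeToStart = context.CurrentUtcDateTime.AddSeconds(info.DelayInSeconds);
    await context.CreateTimer(timeToStart, CancellationToken.None);
    await context.CallActivityAsync("PerformSubtask", info);
}

// In the parent orchestrator, instead of the timer + ContinueWith pair:
// tasks.Add(context.CallSubOrchestratorAsync("DelayedSubtaskOrchestrator", value));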
UPDATE
I tried using your updated sample and was able to reproduce your problem! If you run locally in Visual Studio and configure the exception settings to always break on exceptions, then you should see the following:
System.InvalidOperationException: 'Multithreaded execution was detected. This can happen if the orchestrator function code awaits on a task that was not created by a DurableOrchestrationContext method. More details can be found in this article https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-checkpointing-and-replay#orchestrator-code-constraints.'
This means the thread which called context.CallActivityAsync("PerformSubtask", j) was not the same as the thread which called the orchestrator function. I don't know why my initial example didn't hit this, or why your version did. It has something to do with how the TPL decides which thread to use to run your ContinueWith delegate - something I need to look more into.
The good news is that there is a simple workaround, which is to specify TaskContinuationOptions.ExecuteSynchronously, like this:
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(
t => context.CallActivityAsync("PerformSubtask", j),
TaskContinuationOptions.ExecuteSynchronously);
Please try that and let me know if that fixes the issue you're observing.
Ideally you wouldn't need to do this workaround when using Task.ContinueWith. I've opened an issue in GitHub to track this: https://github.com/Azure/azure-functions-durable-extension/issues/317
Since I do not want to collect the results of the activities I avoid calling await Task.WhenAll(tasks) after scheduling all the activities. I do this in order to reduce contention on the control queue, i.e. to be able to start another orchestration if required. Nevertheless the status of the orchestrator is still 'Running' until all the activities finish running. I guess it moves to 'Completed' only after the last activity posts a 'done' message to the control queue.
This is expected. Orchestrator functions never actually complete until all outstanding durable tasks have completed. There isn't any way to work around this. Note that you can still start other orchestrator instances, there just might be some contention if they happen to land on the same partition (there are 4 partitions by default).
await Task.Delay is definitely not the best option: you will pay for this time while your function does nothing useful. The max delay is also capped at 10 minutes on the Consumption plan.
In my opinion, raw Queue messages are the best option for fire-and-forget scenarios. Set the proper visibility timeouts, and your scenario will work reliably and efficiently.
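For reference, a sketch of posting such a delayed message with the WindowsAzure.Storage SDK (assuming that's the SDK your F1 already uses; taskInfo stands in for your payload). The initialVisibilityDelay parameter is what simulates the delay:
// CloudQueue queue = ... (from CloudStorageAccount -> CloudQueueClient)
var message = new CloudQueueMessage(JsonConvert.SerializeObject(taskInfo));

// The message stays invisible for the given delay; once it becomes
// visible, F2's queue trigger fires.
await queue.AddMessageAsync(
    message,
    timeToLive: null,
    initialVisibilityDelay: TimeSpan.FromMinutes(3),
    options: null,
    operationContext: null);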
The killer feature of Durable Functions is awaits, which do their magic of pausing and resuming while keeping the scope. Thus, it's a great way to implement fan-in, but you don't need that.
I think durable definitely makes sense for this workflow. I do think the best option would be to leverage the delay / timer feature as you said, but based on the synchronous nature of execution I don't think I would add everything to a task list, which really expects a .WhenAll() or .WhenAny() that you aren't aiming for. I think I personally would just do a sequential foreach loop with a timer delay for each task. So pseudocode of:
for (int x = 0; x < results.Count; x++)
{
    await context.CreateTimer(context.CurrentUtcDateTime.AddMinutes(1), CancellationToken.None);
    await context.CallActivityAsync("PerformTaskAsync", results[x]);
}
You need those awaits in there regardless, so just avoiding the await Task.WhenAll(...) is likely causing some issues in the code sample above. Hope that helps.
You should be able to use the IDurableOrchestrationContext.StartNewOrchestration() method, added in 2019, to support this scenario. See https://github.com/Azure/azure-functions-durable-extension/issues/715 for context.
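A minimal sketch of how that could look in Durable Functions 2.x (the sub-orchestrator name is illustrative):
[FunctionName("Orchestrator")]
public static async Task Orchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
    foreach (var value in results)
    {
        // Fire-and-forget: each call starts an independent orchestration
        // instance, so this orchestrator can complete without awaiting them.
        context.StartNewOrchestration("DelayedSubtaskOrchestrator", value);
    }
}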
I'm working on an SMS-based game (Value Added Service), in which a question must be sent to each subscriber on a daily basis. There are over 500,000 subscribers and therefore performance is a key factor. Since each subscriber can be in a different state of the competition with different variables, the database must be queried separately for each subscriber before sending a text message. To achieve the best performance I'm using the .NET Task Parallel Library (TPL) to spawn parallel threadpool threads and do as many async operations as possible in each thread to finally send texts ASAP.
Before describing the actual problem there are some more information necessary to give about the code.
At first there were no async operations in the code. I just scheduled some 500,000 tasks with the default task scheduler into the threadpool, and each task would work through the routines, blocking on all EF (Entity Framework) queries and sequentially finishing its job. It was good, but not fast enough. Then I changed all EF queries to async; the outcome was superb in speed, but there were so many deadlocks and timeouts in SQL Server that about a third of the subscribers never received a text! After trying different solutions, I decided not to do too many async database operations while I have over 500,000 tasks running on a 24-core server (with at least 24 concurrent threadpool threads)!
I rolled back all the changes (the async ones) except for one web service call in each task, which remained async.
Now the weird case:
In my code, I have a boolean variable named "isCrossSellActive". When the variable is set, some more DB operations take place and an async webservice call happens, on which the thread awaits. When this variable is false, none of these operations happen, including the async webservice call. Awkwardly, when the variable is set the code runs much faster than when it's not! It seems like for some reason the awaited async code (the cooperative thread) is making the code faster.
Here is the code:
public async Task AutoSendMessages(...)
{
//Get list of subscriptions plus some initialization
LimitedConcurrencyLevelTaskScheduler lcts = new LimitedConcurrencyLevelTaskScheduler(numberOfThreads);
TaskFactory taskFactory = new TaskFactory(lcts);
List<Task> tasks = new List<Task>();
//....
foreach (var sub in subscriptions)
{
AutoSendData data = new AutoSendData
{
ServiceId = serviceId,
MSISDN = sub.subscriber,
IsCrossSellActive = bolCrossSellHeader
};
tasks.Add(await taskFactory.StartNew(async (x) =>
{
await SendQuestion(x);
}, data));
}
GC.Collect();
try
{
Task.WaitAll(tasks.ToArray());
}
catch (AggregateException ae)
{
ae.Handle((ex) =>
{
_logRepo.LogException(1, "", ex);
return true;
});
}
await _autoSendRepo.SetAutoSendingStatusEnd(statusId);
}
public async Task SendQuestion(object data)
{
//extract variables from input parameter
try
{
if (isCrossSellActive)
{
int pieceCount = subscriptionRepo.GetSubscriberCarPieces(curSubscription.service, curSubscription.subscriber).Count(c => c.isConfirmed);
foreach (var rule in csRules)
{
if (rule.Applies)
{
if (await HttpClientHelper.GetJsonAsync<bool>(url, rule.TargetServiceBaseAddress))
{
int noOfAddedPieces = SomeCalculations();
if (noOfAddedPieces > 0)
{
crossSellRepo.SetPromissedPieces(curSubscription.subscriber, curSubscription.service,
rule.TargetShortCode, noOfAddedPieces, 0, rule.ExpirationLimitDays);
}
}
}
}
}
// The rest of the code. (Some db CRUD)
await SmsClient.SendSoapMessage(subscriber, smsBody);
}
catch (Exception ex){//...}
}
Ok, thanks to #usr and the clue he gave me, the problem is finally solved!
His comment drew my attention to the awaited taskFactory.StartNew(...) line which sequentially adds new tasks to the "tasks" list which is then awaited on by Task.WaitAll(tasks);
At first I removed the await keyword before taskFactory.StartNew() and it led the code into a horrible state of malfunction! I then put the await keyword back before taskFactory.StartNew() and debugged the code using breakpoints, and amazingly saw that the threads run one after another, sequentially, until each one reaches the first await inside the "SendQuestion" routine. When the "isCrossSellActive" flag is set, despite the extra work a thread has to do, the first await keyword is reached earlier, enabling the next scheduled task to run. But when it's not set, the only await keyword is on the last line of the routine, so each task most likely runs sequentially to the end.
usr's suggestion to remove the await keyword in the loop seemed to be correct, but the problem was that the Task.WaitAll() line would then wait on the wrong list - a list of Task<Task> instead of Task. I finally used Task.Run instead of TaskFactory.StartNew and everything changed. Now the service is working well. The final code inside the loop is:
tasks.Add(Task.Run(async () =>
{
await SendQuestion(data);
}));
and the problem was solved.
Thank you all.
P.S. Read this article on Task.Run and why TaskFactory.StartNew is dangerous: http://blog.stephencleary.com/2013/08/startnew-is-dangerous.html
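For anyone landing here later, a small sketch of the nesting issue (same SendQuestion/data names as above):
// 1. StartNew with a delegate that returns a Task gives you a nested Task<Task>.
//    The outer task completes as soon as SendQuestion returns its task
//    (i.e. at its first await), not when the work actually finishes.
Task<Task> wrapped = taskFactory.StartNew(() => SendQuestion(data));

// 2. Unwrap flattens Task<Task> into a Task that tracks the inner work,
//    making it safe to hand to Task.WaitAll / Task.WhenAll.
Task unwrapped = taskFactory.StartNew(() => SendQuestion(data)).Unwrap();

// 3. Task.Run unwraps automatically, which is why switching to it fixed the service.
Task simple = Task.Run(() => SendQuestion(data));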
It's extremely hard to tell unless you add some profiling that tells you which code is taking longer now.
Without seeing more numbers, my best guess is that the SMS service doesn't like it when you send too many requests in a short time and chokes. When you add the extra DB calls, the extra delay makes the SMS service work better.
A few other small details:
await Task.WhenAll is usually a bit better than Task.WaitAll. WaitAll means the thread will sit around waiting, making a deadlock slightly more likely.
Instead of:
tasks.Add(await taskFactory.StartNew(async (x) =>
{
await SendQuestion(x);
}, data));
You should be able to do
tasks.Add(SendQuestion(data));
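Putting both points together, a sketch of the loop and the wait (same variables as your code):
foreach (var sub in subscriptions)
{
    AutoSendData data = new AutoSendData
    {
        ServiceId = serviceId,
        MSISDN = sub.subscriber,
        IsCrossSellActive = bolCrossSellHeader
    };
    tasks.Add(SendQuestion(data)); // the async method already returns a Task
}
await Task.WhenAll(tasks); // asynchronous wait; no blocked thread
Note that this drops the LimitedConcurrencyLevelTaskScheduler entirely; if you still need to cap concurrency, an async-friendly throttle such as a SemaphoreSlim inside SendQuestion would be the way to go.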
I am reading Azure tables' data - around 5k tables - collecting different metrics and saving them back to some other Azure tables, everything in an asynchronous way. The problem I am facing is that when there is a huge amount of data, which can happen occasionally, the application starts hanging. The same code works fine with less data. The steps I am doing (all of them asynchronous, using Rx, async and await) are
Read all the table names from Azure
Read all the tables previous metric data (1 & 2 are in parallel - Task.WhenAll)
Get data from each table, process and save (Task.WhenAll)
What I want is to use asynchrony as long as it doesn't make my application hang. If there is more data than can be handled, it should not read any more tables' data and instead focus on completing the processing of the data it already has.
Does Parallel.ForEach take care of that?
The code, edited as per Stephen Cleary's suggestions, is below. It is still not working for all the tables, whereas it works for 500 tables.
I think it is the amount of data that brings the app (a console app) to a standstill rather than the number of threads. (One thread may end up retrieving a million rows, in batches of a thousand; each thousand is passed to a method and only its count is added to a dictionary, so the rows can be garbage collected when more memory is needed.) Or is it the way I have implemented SemaphoreSlim that is wrong?
public async Task CalculateMetricsForAllTablesAsync()
{
var allWizardTableNamesTask = GetAllWizardTableNamesAsync();
var allTablesNamesWithLastRunTimeTask = GetAllTableNamesWithLastRunTimeAsync();
await Task.WhenAll(allWizardTableNamesTask, allTablesNamesWithLastRunTimeTask).ConfigureAwait(false);
var allWizardTableNames = allWizardTableNamesTask.Result;
var allTablesNamesWithLastRunTime = allTablesNamesWithLastRunTimeTask.Result;
var throttler = new SemaphoreSlim(10);
var concurrentTableProcessingTasks = new ConcurrentStack<Task>();
foreach (var tname in allWizardTableNames)
{
await throttler.WaitAsync();
try
{
concurrentTableProcessingTasks.Push(ProcessTableDataAsync(tname, getTableNameWithLastRunTime(tname)));
}
finally
{
throttler.Release();
}
}
await Task.WhenAll(concurrentTableProcessingTasks).ConfigureAwait(false);
}
private async Task ProcessTableDataAsync(string tableName, Tuple<string, string> matchingTable)
{
var tableDataRetrieved = new TaskCompletionSource<bool>();
var metricCountsForEachDay = new ConcurrentDictionary<string, Tuple<int, int>>();
_fromATS.GetTableDataAsync<DynamicTableEntity>(tableName, GetFilter(matchingTable))
.Subscribe(entities => ProcessWizardDataChunk(metricCountsForEachDay, entities), () => tableDataRetrieved.TrySetResult(true));
await tableDataRetrieved.Task;
await SaveMetricDataAsync(tableName, metricCountsForEachDay).ConfigureAwait(false);
}
Since your async code is wrapping Rx, I'd recommend throttling at the async level. You can do this by defining a SemaphoreSlim and wrapping your method logic within a WaitAsync/Release pair.
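For example, a sketch against your method names (note the Release only happens after processing completes; in your loop above the semaphore is released as soon as the task has been started, so it doesn't actually throttle anything):
var throttler = new SemaphoreSlim(10);
var tasks = allWizardTableNames.Select(async tname =>
{
    await throttler.WaitAsync();
    try
    {
        // At most 10 tables are being processed at any one time.
        await ProcessTableDataAsync(tname, getTableNameWithLastRunTime(tname));
    }
    finally
    {
        throttler.Release();
    }
}).ToList();
await Task.WhenAll(tasks);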
Alternatively, consider TPL Dataflow. Dataflow has built-in options for throttling (MaxDegreeOfParallelism), and also interoperates naturally with async and Rx.
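A Dataflow version might look like this (a sketch using ActionBlock from the System.Threading.Tasks.Dataflow package; MaxDegreeOfParallelism is the built-in throttle):
var block = new ActionBlock<string>(
    tname => ProcessTableDataAsync(tname, getTableNameWithLastRunTime(tname)),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10 });

foreach (var tname in allWizardTableNames)
    block.Post(tname);

block.Complete();
await block.Completion; // completes once every posted table has been processed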
I'm using the .NET API available from parse.com,
https://parse.com/docs/dotnet_guide#objects-saving
A snippet of my code looks like this;
public async void UploadCurrentXML()
{
...
var query = ParseObject.GetQuery("RANDOM_TABLE").WhereEqualTo("some_field", "string");
var count = await query.CountAsync();
ParseObject temp_A;
temp_A = await query.FirstAsync();
...
// do lots of stuff
...
await temp_A.SaveAsync();
}
To summarize; A query is made to a remote database. From the result a specific object (or its reference) is obtained from the database. Multiple operations are performed on the object and in the end, its saved back into the database.
All the database operations happen via await ParseObject.randomfunction(). Is it possible to call these functions in a synchronous manner, or at least wait till the operation returns before moving on? The application is designed for maintenance purposes and time of operation is NOT an issue.
I'm asking this because as things stand, I get an error which states
The number of count operations in progress has reached its limit.
I've tried,
var count = await query.CountAsync().ConfigureAwait(false);
in all the await calls, but it doesn't help - the code is still running asynchronously.
var count = query.CountAsync().Result;
causes the application to get stuck - fairly certain that I've hit a deadlock.
A bit of searching led me to this question,
How would I run an async Task<T> method synchronously?
But I don't understand how it could apply to my case, since I do not have access to the source of ParseObject. Help? (Am using .NET 4.5)
I recommend that you use asynchronous programming throughout. If you're running into some kind of resource issue (e.g., multiple concurrent queries on a single DB not being allowed), then you should structure your code so that cannot happen (e.g., by disabling UI buttons while operations are in flight). Or, if you must, you can use SemaphoreSlim to throttle your async code:
private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1);
public async Task UploadCurrentXMLAsync()
{
await _mutex.WaitAsync();
try
{
...
var query = ParseObject.GetQuery("RANDOM_TABLE").WhereEqualTo("some_field", "string");
var count = await query.CountAsync();
ParseObject temp_A;
temp_A = await query.FirstAsync();
...
// do lots of stuff
...
await temp_A.SaveAsync();
}
finally
{
_mutex.Release();
}
}
But if you really, really want to synchronously block, you can do it like this:
// given: public async Task UploadCurrentXMLAsync()
Task.Run(() => UploadCurrentXMLAsync()).Wait();
Again, I just can't recommend this last "solution", which is more of a hack than a proper solution.
If the API method returns a task, you can get the awaiter and get the result synchronously:
api.DoWorkAsync().GetAwaiter().GetResult();