Placing a global HTTP requests-per-second limit - C#

Due to server limitations, I cannot make more than one request every 3 seconds, so I am using Thread.Sleep() to limit the number of requests I make. Is there a better way to do this without having to pause the thread? Thanks.
static void Main(string[] args)
{
    // get the ids
    List<string> requestIds = GetMyRequestIds();
    foreach (string requestId in requestIds)
    {
        Thread.Sleep(3000); // crude rate limit: one request every 3 seconds
        // one request for each id
        Dictionary<string, object> result = FetchStatus(requestId);
    }
}

public Dictionary<string, object> FetchStatus(string requestId)
{
    // build http request and query the server
    // ... requestId... http... etc... read to end
    return results;
}

If your limitation is just one request per 3 seconds, you could set up a timer that fires a callback every 3 seconds. Since the callback executes on a separate thread, be aware that two long-running requests could end up executing simultaneously.
System.Threading.Timer
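For illustration, here is a minimal sketch of that approach (GetMyRequestIds and FetchStatus are carried over from the question; the overlap guard is only hinted at in a comment):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class TimerPacedRequests
{
    static void Main()
    {
        var pending = new ConcurrentQueue<string>(GetMyRequestIds());

        // Fire one request every 3 seconds. The callback runs on a
        // thread-pool thread, so no thread is paused with Thread.Sleep.
        using (var timer = new Timer(_ =>
        {
            if (pending.TryDequeue(out string requestId))
                FetchStatus(requestId); // if this can take longer than 3 s, guard against overlap
        }, null, dueTime: 0, period: 3000))
        {
            Console.ReadLine(); // keep the process alive while the timer runs
        }
    }

    static List<string> GetMyRequestIds() => new List<string> { "a", "b", "c" };
    static void FetchStatus(string requestId) { /* build http request and query the server */ }
}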

Related

HttpClient.SendAsync processes two requests at a time when the limit is higher

I have a Windows service that reads data from the database and processes this data using multiple REST API calls.
Originally, this service ran on a timer: it would read unprocessed data from the database and process it on multiple threads, limited by a SemaphoreSlim. This worked well, except that the database read had to wait for all processing to finish before reading again.
ServicePointManager.DefaultConnectionLimit = 10;
Original that works:
// Runs every 5 seconds on a timer
private void ProcessTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    var hasLock = false;
    try
    {
        Monitor.TryEnter(timerLock, ref hasLock);
        if (hasLock)
        {
            ProcessNewData();
        }
        else
        {
            log.Info("Failed to acquire lock for timer."); // This happens all of the time
        }
    }
    finally
    {
        if (hasLock)
        {
            Monitor.Exit(timerLock);
        }
    }
}
public void ProcessNewData()
{
    var unprocessedItems = GetDatabaseItems();
    if (unprocessedItems.Count > 0)
    {
        var downloadTasks = new Task[unprocessedItems.Count];
        var maxThreads = new SemaphoreSlim(semaphoreSlimMinMax, semaphoreSlimMinMax); // semaphoreSlimMinMax = 10 is max threads
        for (var i = 0; i < unprocessedItems.Count; i++)
        {
            maxThreads.Wait();
            var iClosure = i;
            downloadTasks[i] = Task.Run(async () =>
            {
                try
                {
                    await ProcessItemsAsync(unprocessedItems[iClosure]);
                }
                catch (Exception ex)
                {
                    // handle exception
                }
                finally
                {
                    maxThreads.Release();
                }
            });
        }
        Task.WaitAll(downloadTasks);
    }
}
To improve efficiency, I rewrote the service to run GetDatabaseItems in a separate thread from the rest, so that there is a ConcurrentDictionary of unprocessed items between them, which GetDatabaseItems fills and ProcessNewData empties.
The problem is that while 10 unprocessed items are sent to ProcessItemsAsync, they are processed two at a time instead of all 10.
The code inside ProcessItemsAsync calls var response = await client.SendAsync(request); which is where the delay occurs. All 10 threads make it to this code but come out of it two at a time. None of this code changed between the old version and the new.
Here is the code in the new version that did change:
public void Start()
{
    ServicePointManager.DefaultConnectionLimit = maxSimultaneousThreads; // 10

    // Start getting unprocessed data
    getUnprocessedDataTimer.Interval = getUnprocessedDataInterval; // 5 seconds
    getUnprocessedDataTimer.Elapsed += GetUnprocessedData; // writes data into a ConcurrentDictionary
    getUnprocessedDataTimer.Start();

    cancellationTokenSource = new CancellationTokenSource();

    // Create a new thread to process data
    Task.Factory.StartNew(() =>
    {
        try
        {
            ProcessNewData(cancellationTokenSource.Token);
        }
        catch (Exception ex)
        {
            // error handling
        }
    }, TaskCreationOptions.LongRunning);
}
private void ProcessNewData(CancellationToken token)
{
    // Check if the task has been canceled
    while (!token.IsCancellationRequested)
    {
        if (unprocessedDictionary.Count > 0)
        {
            try
            {
                var throttler = new SemaphoreSlim(maxSimultaneousThreads, maxSimultaneousThreads); // maxSimultaneousThreads = 10
                var tasks = unprocessedDictionary.Select(async entry =>
                {
                    await throttler.WaitAsync(token);
                    try
                    {
                        if (unprocessedDictionary.TryRemove(entry.Key, out var item))
                        {
                            await ProcessItemsAsync(item);
                        }
                    }
                    catch (Exception ex)
                    {
                        // handle error
                    }
                    finally
                    {
                        throttler.Release();
                    }
                });
                Task.WhenAll(tasks).GetAwaiter().GetResult();
            }
            catch (OperationCanceledException)
            {
                break;
            }
        }
        Thread.Sleep(1000);
    }
}
Environment
.NET Framework 4.7.1
Windows Server 2016
Visual Studio 2019
Attempts to fix:
I tried the following with the same bad result (two await client.SendAsync(request) completing at a time):
Set Max threads and ServicePointManager.DefaultConnectionLimit to 30
Manually create threads using Thread.Start()
Replace async/await pattern with sync HttpClient calls
Call the data processing via Task.Run(async () => ...) and Task.WaitAll(downloadTasks)
Replace the new long-running thread for ProcessNewData with a timer
What I want is to run GetUnprocessedData and ProcessNewData concurrently with an HttpClient connection limit of 10 (set in config) so that 10 requests are processed at the same time.
Note: the issue is similar to HttpClient.GetAsync executes only 2 requests at a time?, but here the DefaultConnectionLimit is increased and the service runs on a Windows Server. The original code also creates more than 2 connections when it runs.
Update
I went back to the original project to make sure it still worked; it did. I added a new timer to perform some unrelated operations, and the HttpClient issue came back. I removed the timer, and everything worked. I added a new thread to do parallel processing, and the problem came back.
This is not a direct answer to your question, but a suggestion for simplifying your service that could make debugging any problem easier. My suggestion is to implement the producer-consumer pattern, using an iterator for producing the unprocessed items and a parallel loop for consuming them. Ideally the parallel loop would have async delegates, but since you are targeting the .NET Framework you don't have access to the .NET 6 method Parallel.ForEachAsync. So I will suggest the slightly wasteful approach of using a synchronous parallel loop that blocks threads. You could use either the Parallel.ForEach method or PLINQ, as in the example below:
private IEnumerable<Item> Iterator(CancellationToken token)
{
    while (true)
    {
        Task delayTask = Task.Delay(5000, token);
        foreach (Item item in GetDatabaseItems()) yield return item;
        delayTask.GetAwaiter().GetResult();
    }
}
public void Start()
{
    //...
    ThreadPool.SetMinThreads(degreeOfParallelism, Environment.ProcessorCount);
    new Thread(() =>
    {
        try
        {
            Partitioner
                .Create(Iterator(token), EnumerablePartitionerOptions.NoBuffering)
                .AsParallel()
                .WithDegreeOfParallelism(degreeOfParallelism)
                .WithCancellation(token)
                .ForAll(item => ProcessItemAsync(item).GetAwaiter().GetResult());
        }
        catch (OperationCanceledException) { } // Ignore
    }).Start();
}
Online demo.
The Iterator fetches unprocessed items from the database in batches, and yields them one by one. The database won't be hit more frequently than once every 5 seconds.
The PLINQ query is going to fetch a new item from the Iterator each time it has a worker available, according to the WithDegreeOfParallelism policy. The setting EnumerablePartitionerOptions.NoBuffering ensures that it won't try to fetch more items in advance.
The ThreadPool.SetMinThreads is used in order to boost the availability of ThreadPool threads, since the PLINQ is going to use lots of them. Without it the ThreadPool will not be able to satisfy the demand immediately, although it will gradually inject more threads and eventually will catch up. But since you already know how many threads you'll need, you can configure the ThreadPool from the start.
In case you dislike the idea of blocking threads, you can find a simple substitute of the Parallel.ForEachAsync here, based on the TPL Dataflow library. It requires installing a NuGet package.
The issue turned out to be the place where ServicePointManager.DefaultConnectionLimit is set.
In the version where HttpClient was only doing two requests at a time, ServicePointManager.DefaultConnectionLimit was being set before the threads were being created but after the HttpClient was initialized.
Once I moved it into the constructor before the HttpClient is initialized, everything started working.
Thank you very much to @Theodor Zoulias for the help.
TL;DR: Set ServicePointManager.DefaultConnectionLimit before initializing the HttpClient.
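In other words (a minimal sketch; the class and field names here are illustrative, not from the original service):

using System.Net;
using System.Net.Http;

class DataProcessor
{
    private readonly HttpClient _client;

    public DataProcessor()
    {
        // Set the limit before the HttpClient is created and before any request
        // is sent: a ServicePoint snapshots DefaultConnectionLimit when it is
        // first created, so changes made afterwards may never take effect.
        ServicePointManager.DefaultConnectionLimit = 10;
        _client = new HttpClient();
    }
}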

Implementation of HttpClient request limiter and buffer

In our project, we have a few services that make requests to a 3rd party API, using a key.
This API has a shared rate limit across all endpoints (meaning a request to one endpoint requires a 2-second cooldown before we can use a different endpoint).
We've handled this using timed background jobs, making requests to only one of the endpoints at any given time.
After some architectural redesign, we've come to a spot where we don't rely as much on the timed background jobs, and now the HTTP requests can no longer be moderated, since multiple service instances are making requests to the API.
So, in our current example:
We have a few HttpClients set up to all needed API endpoints, i.e.:
services.AddHttpClient<Endpoint1Service>(client =>
{
    client.BaseAddress = new Uri(configOptions.Services.Endpoint1.Url);
});
services.AddHttpClient<Endpoint2Service>(client =>
{
    client.BaseAddress = new Uri(configOptions.Services.Endpoint2.Url);
});
Endpoint1Service and Endpoint2Service were previously accessed by background job services:
public async Task DoJob()
{
    var items = await _repository.GetItems();
    foreach (var item in items)
    {
        var processedResult = await _endpoint1Service.DoRequest(item);
        await Task.Delay(2000);
        //...
    }
    // save all results
}
But now these "endpoint" services are accessed concurrently, and a new instance is created every time, so there is no way to moderate the request rate.
One possible solution would be to create some sort of singleton request buffer that is injected into all services that use this API and moderates the requests so they go out at a given rate. The problem I see with this is that it seems dangerous to store requests in an in-memory buffer, in case something goes wrong.
Is this a direction I should be looking towards, or is there anything else I can try?
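For concreteness, the smallest version of the buffer idea described above is a singleton gate that serializes requests and enforces the cooldown. This is only a sketch, using the question's 2-second cooldown; the class and method names are illustrative:

using System;
using System.Threading;
using System.Threading.Tasks;

// Register as a singleton and inject it into every endpoint service.
public sealed class RequestGate
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly TimeSpan _cooldown = TimeSpan.FromSeconds(2);
    private DateTime _lastRequestUtc = DateTime.MinValue;

    public async Task<T> RunAsync<T>(Func<Task<T>> request)
    {
        await _gate.WaitAsync();
        try
        {
            // Wait out whatever remains of the cooldown since the last request
            var wait = _lastRequestUtc + _cooldown - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait);
            return await request();
        }
        finally
        {
            _lastRequestUtc = DateTime.UtcNow;
            _gate.Release();
        }
    }
}

Because nothing is stored beyond the callers already awaiting their turn, there is no separate in-memory buffer to lose if something goes wrong.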
Hope this helps:
I created the following for similar scenarios. Its objective is concurrency-throttled multithreading, but it also gives you a convenient wrapper over your request-processing pipeline, and it can additionally cap the number of concurrent requests per client (if you want to use that).
Create one instance per endpoint service. Set its number of threads to 1 if you want a throttle of one concurrent request, or to 4 if you want it at 4 concurrent requests to the given endpoint.
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Threading/APIProcessor/AsyncThreadedWorkItemProcessor.cs
or
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Threading/APIProcessor/ThreadedWorkItemProcessor.cs
The two implementations are interchangeable. If you are using this in a web server context, the former is probably better, as it offloads work to the background thread pool instead of using foreground threads.
Example Usage
In your case, set _maxWorkerThreads to 1 if you want to rate limit at 1 concurrent request, or to 4 if you want to rate limit at 4 concurrent requests.
//Example usage for a WebAPI controller
class Example
{
    private static ThreadedWorkItemProcessor<DummyRequest, DummyResponse, int, WorkItemPriority> ThreadedProcessorExample
        = new ThreadedWorkItemProcessor<DummyRequest, DummyResponse, int, WorkItemPriority>(
            _maxWorkItemLimitPerClient: 100 // Maximum number of concurrent requests in the processing queue per client. Set to int.MaxValue to disable concurrent request caps
            , _maxWorkerThreads: 16 // Maximum number of threads to scale up to
            , _threadStartupPerWorkItems: 4 // Consider starting a new processing thread every X requests
            , _threadStartupMinQueueSize: 4 // Do NOT start a new processing thread if the work item queue is below this size
            , _idleWorkerThreadExitSeconds: 10 // Idle threads will exit after X seconds
            , _abandonedResponseExpirySeconds: 60 // Expire processed work items after X seconds (maybe the client terminated or the web request thread died)
            , _processRequestMethod: ProcessRequestMethod // Your do-work method for processing the request
            , _logErrorMethod: Handler_LogError
            , _logMessageMethod: Handler_LogMessage
        );

    public async Task<DummyResponse> GetResponse([FromBody] DummyRequest _request)
    {
        int clientID = 1; // Replace with the client ID from your authentication mechanism if using per-client request caps. Otherwise just hardcode to 0 or whatever
        WorkItemPriority _priority;
        _priority = WorkItemPriority.Medium; // Assign the priority based on whatever prioritization rules
        int RequestID = ThreadedProcessorExample.ScheduleWorkItem(_priority, _request, clientID);
        if (RequestID < 0)
        {
            // Client has exceeded the maximum number of concurrent requests, or the application pool is shutting down
            // Return a suitable error message here
            return new DummyResponse() { ErrorMessage = @"Maximum number of concurrent requests exceeded or service is restarting. Please retry request later." };
        }

        // If you need the result (like in a WebAPI controller) then do this.
        // Otherwise, if it is say a backend processing sink where no client is waiting for a response, we are done here; just return.
        KeyValuePair<bool, ThreadWorkItem<DummyRequest, DummyResponse, int>> workItemResult;
        workItemResult = await ThreadedProcessorExample.TryGetProcessedWorkItemAsync(RequestID,
            _timeoutMS: 1000, // Timeout of 1 second
            _taskWaitType: ThreadProcessorAsyncTaskWaitType.Delay_Specific,
            _delayMS: 10);
        if (!workItemResult.Key)
        {
            // Processing timeout or the application pool is shutting down
            // Return a suitable error message here
            return new DummyResponse() { ErrorMessage = @"Internal system timeout or service is restarting. Please retry request later." };
        }
        return workItemResult.Value.Response;
    }

    public static DummyResponse ProcessRequestMethod(DummyRequest request)
    {
        // Process the request and return the response
        return new DummyResponse() { orderID = request.orderID };
    }

    public static void Handler_LogError(Exception ex)
    {
        // Log unhandled exceptions here
    }

    public static void Handler_LogMessage(string Message)
    {
        // Log messages here
    }
}

Stop a line execution after a stipulated amount of time in C#

I have a Windows service, developed in C#, which does some calculations on data at equal intervals of time, say 30 minutes. It fetches the data from a database and calls a method CalData() which does some business-logic calculations.
class Program
{
    static void Main(string[] args)
    {
        try
        {
            AutoCalExecution ae = new AutoCalExecution();
            ae.FetchData();
            ae.CalData();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        Console.ReadLine();
    }
}

class AutoCalExecution
{
    public void FetchData()
    {
        // fetch data from db
    }

    public void CalData()
    {
        line1;
        line2;
        line3;
        line4; // this line has some expression which actually does the calculation
        line5;
    }
}
The code above is a template of what I'm using. In CalData(), line4 is where the calculation happens; it typically takes 10 minutes to complete, so line4 executes for about 10 minutes.
There are some scenarios where the calculation might take more than 10 minutes. In that case I want to cancel the execution and move on to line5 after a certain amount of time, say 15 minutes.
To summarize: I want to set a timeout of 15 minutes for line4 (configurable based on requirements). If it doesn't finish within 15 minutes, it has to stop and execution should continue at line5.
public void CalData()
{
    line1;
    line2;
    line3;
    if (line4 completes within 15 minutes) // timeout should be configurable
    {
        line4; // this line has some expression which actually does the calculation
    }
    else
    {
        Log("line4 execution did not complete in the stipulated time");
    }
    line5;
}
How do I set that condition in C#?
Update
This is something I tried:
var task = Task.Run(() => CalData()); // assumes CalData() is changed to return a bool indicating success
if (task.Wait(TimeSpan.FromMinutes(Convert.ToDouble(timeout))))
{
    if (task.Result)
    {
        log.Info("Completed");
    }
    else
    {
        log.Error("Not successful");
    }
}
But the problem here is that I want line5 to execute in the same method even if line4 doesn't finish. Is there a way to write similar code for a single piece of code/snippet instead of the whole method?
Make line4 into a task.
Make the task cancellable by a cancellation token.
Use a cancellation token which cancels itself after the configured time (15 minutes in your case); see the sketch after the links below.
https://learn.microsoft.com/en-us/dotnet/api/system.threading.cancellationtokensource.cancelafter?view=netframework-4.8
https://binary-studio.com/2015/10/23/task-cancellation-in-c-and-things-you-should-know-about-it/
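Putting those three steps together (a sketch only; DoLine4Calculation, workChunks, Process, and Log stand in for the real line4 work and logging):

var timeout = TimeSpan.FromMinutes(15); // configurable
using (var cts = new CancellationTokenSource(timeout)) // equivalent to calling CancelAfter(timeout)
{
    try
    {
        // line4, wrapped in a task that observes the token
        Task.Run(() => DoLine4Calculation(cts.Token), cts.Token).Wait();
    }
    catch (AggregateException ex) when (ex.InnerException is OperationCanceledException)
    {
        Log("line4 execution did not complete in the stipulated time");
    }
}
// line5 executes here either way

void DoLine4Calculation(CancellationToken token)
{
    // The calculation must poll the token to actually stop early
    foreach (var chunk in workChunks)
    {
        token.ThrowIfCancellationRequested();
        Process(chunk);
    }
}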
I think you want something like this:
var work = Task.Run(() => line4);
if (work.Wait(TimeSpan.FromMinutes(15))) // the configured timeout
{
    // Work completed within the timeout
}
else
{
    // Work did not complete within the timeout; fall through to line5
}
Note that this will not actually stop the line4 code from running; it will continue on a worker thread until it is done. Also note that blocking with Wait risks deadlocks if used improperly; see Don't Block on Async Code.
If you actually want to cancel the processing, you should give the work a CancellationToken and make the processing abort when the token is cancelled, as in the sketch above. There are also ways to abort a running thread, but that is not recommended.

Client-side request rate-limiting

I'm designing a .NET client application for an external API. It's going to have two main responsibilities:
Synchronization - periodically making a batch of requests to the API and saving the responses to my database.
Client - a pass-through for requests to the API from users of my client.
The service's documentation specifies the following rules on the maximum number of requests that can be issued in a given period of time:
During a day:
Maximum of 6000 requests per hour (~1.67 per second)
Maximum of 120 requests per minute (2 per second)
Maximum of 3 requests per second
At night:
Maximum of 8000 requests per hour (~2.22 per second)
Maximum of 150 requests per minute (2.5 per second)
Maximum of 3 requests per second
Exceeding these limits won't result in an immediate lockdown - no exception will be thrown. But the provider can get annoyed, contact us, and then ban us from using the service. So I need a request-delaying mechanism in place to prevent that. Here's how I see it:
public async Task MyMethod(Request request)
{
    await _rateLimiter.WaitForNextRequest(); // awaitable Task with calculated delay
    await _api.DoAsync(request);
    _rateLimiter.AppendRequestCounters();
}
The safest and simplest option would be to respect only the lowest rate limit, i.e. a maximum of 3 requests per 2 seconds. But because of the "Synchronization" responsibility, there is a need to use as much of these limits as possible.
So the next option would be to add a delay based on the current request count. I've tried to do something on my own, and I've also used RateLimiter by David Desmaisons, which would've been fine, but here's the problem:
Assuming my client sends 3 requests per second to the API during the day, we're going to see:
A 20 second delay every 120th request
A ~15 minute delay every 6000th request
This would've been acceptable if my application were only about "Synchronization", but "Client" requests can't wait that long.
I've searched the Web, and I've read about token/leaky-bucket and sliding-window algorithms, but I couldn't translate them to my case and .NET, since they mainly cover rejecting requests that exceed a limit. I've found this repo and that repo, but they are both server-side-only solutions.
QoS-like splitting of rates, so that "Synchronization" would get the slower rate and "Client" the faster one, is not an option.
Assuming that current request rates will be measured, how to calculate the delay for next request so that it could be adaptive to current situation, respect all maximum rates and wouldn't be longer than 5 seconds? Something like gradually slowing down when approaching a limit.
This is achievable by using the library you linked on GitHub. We need a composed TimeLimiter made out of 3 CountByIntervalAwaitableConstraint instances, like so:
var hourConstraint = new CountByIntervalAwaitableConstraint(6000, TimeSpan.FromHours(1));
var minuteConstraint = new CountByIntervalAwaitableConstraint(120, TimeSpan.FromMinutes(1));
var secondConstraint = new CountByIntervalAwaitableConstraint(3, TimeSpan.FromSeconds(1));
var timeLimiter = TimeLimiter.Compose(hourConstraint, minuteConstraint, secondConstraint);
We can test to see if this works by doing this:
for (int i = 0; i < 1000; i++)
{
    await timeLimiter;
    Console.WriteLine($"Iteration {i} at {DateTime.Now:T}");
}
This will run 3 times every second until we reach 120 iterations (iteration 119), then wait until the minute is over, and then continue running 3 times every second. We can also (again using the library) easily use the TimeLimiter with an HttpClient via the provided AsDelegatingHandler() extension method, like so:
var handler = TimeLimiter.Compose(hourConstraint, minuteConstraint, secondConstraint).AsDelegatingHandler();
var client = new HttpClient(handler);
We can also use CancellationTokens, but as far as I can tell not at the same time as using it as the handler for the HttpClient. Here is how you can use it with an HttpClient anyway:
var timeLimiter = TimeLimiter.Compose(hourConstraint, minuteConstraint, secondConstraint);
var client = new HttpClient();
for (int i = 0; i < 100; i++)
{
    await timeLimiter.Enqueue(async () =>
    {
        var response = await client.GetAsync("https://hacker-news.firebaseio.com/v0/item/8863.json?print=pretty");
        if (response.IsSuccessStatusCode)
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        else
            Console.WriteLine($"Error code {response.StatusCode} reason: {response.ReasonPhrase}");
    }, new CancellationTokenSource(TimeSpan.FromSeconds(10)).Token);
}
Edit to address the OP's question more directly:
If you want to make sure a user can send a request without having to wait for the limit to reset, we need to dedicate a certain number of requests per second/minute/hour to our user. So we need a new TimeLimiter for this, and we also need to adjust our API TimeLimiter. Here are the two new ones:
var apiHourConstraint = new CountByIntervalAwaitableConstraint(5500, TimeSpan.FromHours(1));
var apiMinuteConstraint = new CountByIntervalAwaitableConstraint(100, TimeSpan.FromMinutes(1));
var apiSecondConstraint = new CountByIntervalAwaitableConstraint(2, TimeSpan.FromSeconds(1));
// TimeLimiter for calls automatically to the API
var apiTimeLimiter = TimeLimiter.Compose(apiHourConstraint, apiMinuteConstraint, apiSecondConstraint);
var userHourConstraint = new CountByIntervalAwaitableConstraint(500, TimeSpan.FromHours(1));
var userMinuteConstraint = new CountByIntervalAwaitableConstraint(20, TimeSpan.FromMinutes(1));
var userSecondConstraint = new CountByIntervalAwaitableConstraint(1, TimeSpan.FromSeconds(1));
// TimeLimiter for calls made manually by a user to the API
var userTimeLimiter = TimeLimiter.Compose(userHourConstraint, userMinuteConstraint, userSecondConstraint);
You can play around with the numbers to suit your need.
Now to use it:
I saw you're using a central method to execute your requests, which makes this easier. I'll just add an optional boolean parameter that determines whether it's an automatically executed request or one made by a user. (You could replace this parameter with an enum if you want more than just automatic and manual requests.)
public static async Task DoRequest(Request request, bool manual = false)
{
    TimeLimiter limiter;
    if (manual)
        limiter = TimeLimiterManager.UserLimiter;
    else
        limiter = TimeLimiterManager.ApiLimiter;

    await limiter;
    await _api.DoAsync(request);
}
static class TimeLimiterManager
{
    public static TimeLimiter ApiLimiter { get; }
    public static TimeLimiter UserLimiter { get; }

    static TimeLimiterManager()
    {
        var apiHourConstraint = new CountByIntervalAwaitableConstraint(5500, TimeSpan.FromHours(1));
        var apiMinuteConstraint = new CountByIntervalAwaitableConstraint(100, TimeSpan.FromMinutes(1));
        var apiSecondConstraint = new CountByIntervalAwaitableConstraint(2, TimeSpan.FromSeconds(1));
        // TimeLimiter to control access to the API for automatically executed requests
        ApiLimiter = TimeLimiter.Compose(apiHourConstraint, apiMinuteConstraint, apiSecondConstraint);

        var userHourConstraint = new CountByIntervalAwaitableConstraint(500, TimeSpan.FromHours(1));
        var userMinuteConstraint = new CountByIntervalAwaitableConstraint(20, TimeSpan.FromMinutes(1));
        var userSecondConstraint = new CountByIntervalAwaitableConstraint(1, TimeSpan.FromSeconds(1));
        // TimeLimiter to control access to the API for manually executed requests
        UserLimiter = TimeLimiter.Compose(userHourConstraint, userMinuteConstraint, userSecondConstraint);
    }
}
This isn't perfect: when the user doesn't use their 20 API calls in a given minute but your automated system needs more than its 100, the automated system still has to wait.
And regarding day/night differences: you can use two backing fields for ApiLimiter/UserLimiter and return the appropriate one from the property's get accessor, as sketched below.
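A sketch of that day/night switch (the 8:00-22:00 day window and the night figures of 7500/130, chosen to stay under the 8000/150 night limits, are assumptions for illustration):

static class TimeLimiterManager
{
    private static readonly TimeLimiter dayApiLimiter = TimeLimiter.Compose(
        new CountByIntervalAwaitableConstraint(5500, TimeSpan.FromHours(1)),
        new CountByIntervalAwaitableConstraint(100, TimeSpan.FromMinutes(1)),
        new CountByIntervalAwaitableConstraint(2, TimeSpan.FromSeconds(1)));

    private static readonly TimeLimiter nightApiLimiter = TimeLimiter.Compose(
        new CountByIntervalAwaitableConstraint(7500, TimeSpan.FromHours(1)),
        new CountByIntervalAwaitableConstraint(130, TimeSpan.FromMinutes(1)),
        new CountByIntervalAwaitableConstraint(2, TimeSpan.FromSeconds(1)));

    public static TimeLimiter ApiLimiter
    {
        get
        {
            // Pick the limiter that matches the current time of day
            int hour = DateTime.Now.Hour;
            return (hour >= 8 && hour < 22) ? dayApiLimiter : nightApiLimiter;
        }
    }
}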

Sending many parallel WebRequests

I'm trying to simulate many concurrent users (>2000) to test a web service. Every user performs actions at a specific pre-defined time, for example:
User A: 09:10:02, 09:10:03, 09:10:08
User B: 09:10:03, 09:10:05, 09:10:07
User C: 09:10:03, 09:10:09, 09:10:15, 09:10:20
I now want to send a web request in real time at each of those times. I can tolerate a delay of at most ~2 seconds. Here is what I already tried, without success:
a) Aggregate all times into a single list, sort it by time, then iterate over it:
foreach (DateTime sendTime in times)
{
    while (DateTime.Now < sendTime)
        Thread.Sleep(1);
    SendRequest();
}
b) Create a thread for each user, with each thread checking the same condition as above but using a longer sleep time.
Both approaches kind of work, but the delay between the time that the request was supposed to be sent and the time that it has actually been sent is way too high. Is there any way to send the requests with higher precision?
Edit: The suggested approaches work really well. However, the delay is still extremely high for many requests. Apparently, the reason for this is my SendRequest() method:
private static async Task SendRequest()
{
    // Log time difference
    string url = "http://www.request.url/newaction";
    WebRequest webRequest = WebRequest.Create(url);
    try
    {
        WebResponse webResponse = await webRequest.GetResponseAsync();
    }
    catch (Exception e) { }
}
Note that my web service does not return any response; maybe this is the reason for the slowdown? Can I send the request without waiting for the response?
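One possible adjustment (a sketch based on the loop from approach (a); it assumes the response is genuinely never needed and that SendRequest keeps swallowing its own exceptions, so the returned task can safely be left un-awaited):

foreach (DateTime sendTime in times)
{
    TimeSpan delay = sendTime - DateTime.Now;
    if (delay > TimeSpan.Zero)
        await Task.Delay(delay);
    _ = SendRequest(); // fire and forget: don't block on the response
}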
Why are you doing this with multiple threads? Threading requires slow sleep/wake context switching. You could do all of this with timers/async calls:
List<DateTime> scheduledTimes = ...;
List<Task> requests = scheduledTimes
    .Select(t => t - DateTime.Now)
    .Select(async delay =>
    {
        await Task.Delay(delay);
        await SendRequest();
    })
    .ToList();
await Task.WhenAll(requests);
The above code schedules all the requests onto the SynchronizationContext from a single thread and runs them.
Simples.
I would suggest using a timer object to trigger the requests:
// In Form_Load or another init method
Timer tRequest = new Timer();
tRequest.Interval = 500;
tRequest.Tick += TRequest_Tick;
tRequest.Start();

private void TRequest_Tick(object sender, EventArgs e)
{
    var sendTimes = times.Where(t => t.AddMilliseconds(-500) < DateTime.Now
                                  && t.AddMilliseconds(500) > DateTime.Now);
    foreach (DateTime sendTime in sendTimes)
    {
        SendRequest();
    }
}
