I'm running Service Bus on Azure, pumping about 10-100 messages per second.
Recently I've switched to .NET 4.5 and, all excited, refactored all the code to have 'async' and 'await' at least twice in each line to make sure it's done 'properly' :)
Now I'm wondering whether it's actually for the better or for the worse. If you could have a look at the code snippets and let me know what your thoughts are. I'm especially worried that the thread context switching is giving me more grief than benefit from all the asynchrony... (looking at !dumpheap, it's definitely a factor)
Just a bit of description: I will be posting two methods, one that runs a while loop on a ConcurrentQueue waiting for new messages, and another that sends one message at a time. I'm also using the Transient Fault Handling block exactly as Dr. Azure prescribed.
Sending loop (started at the beginning, waiting for new messages):
private async void SendingLoop()
{
    try
    {
        await this.RecreateMessageFactory();
        this.loopSemaphore.Reset();
        Buffer<SendMessage> message = null;
        while (true)
        {
            if (this.cancel.Token.IsCancellationRequested)
            {
                break;
            }
            this.semaphore.WaitOne();
            if (this.cancel.Token.IsCancellationRequested)
            {
                break;
            }
            while (this.queue.TryDequeue(out message))
            {
                try
                {
                    using (message)
                    {
                        //only send the latest message
                        if (!this.queue.IsEmpty)
                        {
                            this.Log.Debug("Skipping queued message, Topic: " + message.Value.Topic);
                            continue;
                        }
                        else
                        {
                            if (this.Topic == null || this.Topic.Path != message.Value.Topic)
                                await this.EnsureTopicExists(message.Value.Topic, this.cancel.Token);
                            if (this.cancel.Token.IsCancellationRequested)
                                break;
                            await this.SendMessage(message, this.cancel.Token);
                        }
                    }
                }
                catch (OperationCanceledException)
                {
                    break;
                }
                catch (Exception ex)
                {
                    ex.LogError();
                }
            }
        }
    }
    catch (OperationCanceledException)
    { }
    catch (Exception ex)
    {
        ex.LogError();
    }
    finally
    {
        if (this.loopSemaphore != null)
            this.loopSemaphore.Set();
    }
}
Sending a message:
private async Task SendMessage(Buffer<SendMessage> message, CancellationToken cancellationToken)
{
    //this.Log.Debug("MessageBroadcaster.SendMessage to " + this.GetTopic());
    bool entityNotFound = false;
    if (this.MessageSender.IsClosed)
    {
        //this.Log.Debug("MessageBroadcaster.SendMessage MessageSender closed, recreating " + this.GetTopic());
        await this.EnsureMessageSender(cancellationToken);
    }
    try
    {
        await this.sendMessageRetryPolicy.ExecuteAsync(async () =>
        {
            message.Value.Body.Seek(0, SeekOrigin.Begin);
            using (var msg = new BrokeredMessage(message.Value.Body, false))
            {
                await Task.Factory.FromAsync(this.MessageSender.BeginSend, this.MessageSender.EndSend, msg, null);
            }
        }, cancellationToken);
    }
    catch (MessagingEntityNotFoundException)
    {
        entityNotFound = true;
    }
    catch (OperationCanceledException)
    { }
    catch (ObjectDisposedException)
    { }
    catch (Exception ex)
    {
        ex.LogError();
    }
    if (entityNotFound)
    {
        if (!cancellationToken.IsCancellationRequested)
        {
            await this.EnsureTopicExists(message.Value.Topic, cancellationToken);
        }
    }
}
The code above is from a 'Sender' class that sends 1 message/second. I have about 50-100 instances running at any given time, so it could be quite a number of threads.
Btw, do not worry about EnsureMessageSender, RecreateMessageFactory and EnsureTopicExists too much; they are not called that often.
Would I not be better off just having one background thread working through the message queue and sending messages synchronously, provided all I need is to send one message at a time? I could skip the async stuff and avoid the overheads that come with it.
Note that it usually takes a matter of milliseconds to send one message to Azure Service Bus; it's not really expensive. (Except at times when it's slow, times out, or there is a problem with the Service Bus backend; then it can hang for a while trying to send.)
Thanks and sorry for the long post,
Stevo
Proposed Solution
Would this example be a solution to my situation?
static void Main(string[] args)
{
    var broadcaster = new BufferBlock<int>(); //queue
    var cancel = new CancellationTokenSource();
    var run = Task.Run(async () =>
    {
        try
        {
            while (true)
            {
                //check if we are not finished
                if (cancel.IsCancellationRequested)
                    break;
                //async wait until a value is available
                var val = await broadcaster.ReceiveAsync(cancel.Token).ConfigureAwait(false);
                int next = 0;
                //greedy - eat up and ignore all the values but the last
                while (broadcaster.TryReceive(out next))
                {
                    Console.WriteLine("Skipping " + val);
                    val = next;
                }
                //check if we are not finished
                if (cancel.IsCancellationRequested)
                    break;
                Console.WriteLine("Sending " + val);
                //simulate sending delay
                await Task.Delay(1000).ConfigureAwait(false);
                Console.WriteLine("Value sent " + val);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }, cancel.Token);
    //simulate sending messages, one every 200 ms
    for (int i = 0; i < 20; i++)
    {
        Console.WriteLine("Broadcasting " + i);
        broadcaster.Post(i);
        Thread.Sleep(200);
    }
    cancel.Cancel();
    run.Wait();
}
You say:
The code above is from a 'Sender' class that sends 1 message/second. I
have about 50-100 instances running at any given time, so it could be
quite a number of threads.
This is a good case for async. You save lots of threads here. Async reduces context switching because it is not thread-based. It does not context-switch when something requires a wait; instead, the next work item is processed on the same thread (if there is one).
For that reason your async solution will definitely scale better than a synchronous one. Whether it actually uses less CPU at 50-100 instances of your workflow needs to be measured. The more instances there are, the higher the probability that async comes out faster.
Now, there is one problem with the implementation: You're using a ConcurrentQueue which is not async-ready. So you actually do use 50-100 threads even in your async version. They will either block (which you wanted to avoid) or busy-wait burning 100% CPU (which seems to be the case in your implementation!). You need to get rid of this problem and make the queuing async, too. Maybe a SemaphoreSlim is of help here as it can be waited on asynchronously.
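For illustration, a minimal sketch of such an async-ready queue (my naming, not a library type) pairs the existing ConcurrentQueue with a SemaphoreSlim whose WaitAsync replaces the blocking semaphore.WaitOne:
private sealed class AsyncQueue<T>
{
    // The semaphore's count mirrors the number of queued items,
    // so WaitAsync completes exactly once per enqueued item.
    private readonly ConcurrentQueue<T> queue = new ConcurrentQueue<T>();
    private readonly SemaphoreSlim signal = new SemaphoreSlim(0);

    public void Enqueue(T item)
    {
        this.queue.Enqueue(item);
        this.signal.Release();
    }

    public async Task<T> DequeueAsync(CancellationToken token)
    {
        await this.signal.WaitAsync(token); // asynchronous wait, no blocked thread
        T item;
        this.queue.TryDequeue(out item); // always succeeds: one permit per item
        return item;
    }
}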
First, keep in mind that Task != Thread. Tasks (and async method continuations) are scheduled to the thread pool, where Microsoft has put in tons of optimizations that work wonders as long as your tasks are fairly short.
Reviewing your code, one line raises a flag: semaphore.WaitOne. I assume you're using this as a kind of signal that there is data available in the queue. This is bad because it's a blocking wait inside an async method. By using a blocking wait, the code changes from a lightweight continuation into a much heavier thread pool thread.
So, I would follow #usr's recommendation and replace the queue (and the semaphore) with an async-ready queue. TPL Dataflow's BufferBlock<T> is an async-ready producer/consumer queue available via NuGet. I recommend this one first because it sounds like your project could benefit from using dataflow more extensively than just as a queue (but the queue is a fine place to start).
Other async-ready data structures exist; my AsyncEx library has a couple of them. It's also not hard to build a simple one yourself; I have a blog post on the subject. But I recommend TPL Dataflow in your situation.
Related
I have a Windows service that reads data from the database and processes this data using multiple REST API calls.
Originally, this service ran on a timer where it would read unprocessed data from the database and process it using multiple threads limited using SemaphoreSlim. This worked well except that the database read had to wait for all processing to finish before reading again.
ServicePointManager.DefaultConnectionLimit = 10;
Original that works:
// Runs every 5 seconds on a timer
private void ProcessTimer_Elapsed(object sender, ElapsedEventArgs e)
{
var hasLock = false;
try
{
Monitor.TryEnter(timerLock, ref hasLock);
if (hasLock)
{
ProcessNewData();
}
else
{
log.Info("Failed to acquire lock for timer."); // This happens all of the time
}
}
finally
{
if (hasLock)
{
Monitor.Exit(timerLock);
}
}
}
public void ProcessNewData()
{
    var unprocessedItems = GetDatabaseItems();
    if (unprocessedItems.Count > 0)
    {
        var downloadTasks = new Task[unprocessedItems.Count];
        var maxThreads = new SemaphoreSlim(semaphoreSlimMinMax, semaphoreSlimMinMax); // semaphoreSlimMinMax = 10 is max threads
        for (var i = 0; i < unprocessedItems.Count; i++)
        {
            maxThreads.Wait();
            var iClosure = i;
            downloadTasks[i] =
                Task.Run(async () =>
                {
                    try
                    {
                        await ProcessItemsAsync(unprocessedItems[iClosure]);
                    }
                    catch (Exception ex)
                    {
                        // handle exception
                    }
                    finally
                    {
                        maxThreads.Release();
                    }
                });
        }
        Task.WaitAll(downloadTasks);
    }
}
To improve efficiency, I rewrote the service to run GetDatabaseItems in a separate thread from the rest, so that there is a ConcurrentDictionary of unprocessed items between them, which GetDatabaseItems fills and ProcessNewData empties.
The problem is that while 10 unprocessed items are sent to ProcessItemsAsync, they are processed two at a time instead of all 10.
The code inside ProcessItemsAsync calls var response = await client.SendAsync(request); where the delay occurs. All 10 threads make it to this code but come out of it two at a time. None of this code changed between the old version and the new.
Here is the code in the new version that did change:
public void Start()
{
    ServicePointManager.DefaultConnectionLimit = maxSimultaneousThreads; // 10

    // Start getting unprocessed data
    getUnprocessedDataTimer.Interval = getUnprocessedDataInterval; // 5 seconds
    getUnprocessedDataTimer.Elapsed += GetUnprocessedData; // writes data into a ConcurrentDictionary
    getUnprocessedDataTimer.Start();

    cancellationTokenSource = new CancellationTokenSource();

    // Create a new thread to process data
    Task.Factory.StartNew(() =>
    {
        try
        {
            ProcessNewData(cancellationTokenSource.Token);
        }
        catch (Exception ex)
        {
            // error handling
        }
    }, TaskCreationOptions.LongRunning);
}
private void ProcessNewData(CancellationToken token)
{
    // Check if task has been canceled.
    while (!token.IsCancellationRequested)
    {
        if (unprocessedDictionary.Count > 0)
        {
            try
            {
                var throttler = new SemaphoreSlim(maxSimultaneousThreads, maxSimultaneousThreads); // maxSimultaneousThreads = 10
                var tasks = unprocessedDictionary.Select(async item =>
                {
                    await throttler.WaitAsync(token);
                    try
                    {
                        if (unprocessedDictionary.TryRemove(item.Key, out var value))
                        {
                            await ProcessItemsAsync(value);
                        }
                    }
                    catch (Exception ex)
                    {
                        // handle error
                    }
                    finally
                    {
                        throttler.Release();
                    }
                });
                Task.WhenAll(tasks);
            }
            catch (OperationCanceledException)
            {
                break;
            }
        }
        Thread.Sleep(1000);
    }
}
Environment
.NET Framework 4.7.1
Windows Server 2016
Visual Studio 2019
Attempts to fix:
I tried the following with the same bad result (only two await client.SendAsync(request) calls completing at a time):
Set Max threads and ServicePointManager.DefaultConnectionLimit to 30
Manually create threads using Thread.Start()
Replace async/await pattern with sync HttpClient calls
Call data processing using Task.Run(async () => and Task.WaitAll(downloadTasks);
Replace the new long-running thread for ProcessNewData with a timer
What I want is to run GetUnprocessedData and ProcessNewData concurrently with an HttpClient connection limit of 10 (set in config) so that 10 requests are processed at the same time.
Note: the issue is similar to HttpClient.GetAsync executes only 2 requests at a time? but the DefaultConnectionLimit is increased and the service runs on a Windows Server. It also creates more than 2 connections when original code runs.
Update
I went back to the original project to make sure it still worked; it did. I added a new timer to perform some unrelated operations, and the HttpClient issue came back. I removed the timer, and everything worked. I added a new thread to do parallel processing, and the problem came back.
This is not a direct answer to your question, but a suggestion for simplifying your service that could make the debugging of any problem easier. My suggestion is to implement the producer-consumer pattern using an iterator for producing the unprocessed items, and a parallel loop for consuming them. Ideally the parallel loop would have async delegates, but since you are targeting the .NET Framework you don't have access to the .NET 6 method Parallel.ForEachAsync. So I will suggest the slightly wasteful approach of using a synchronous parallel loop that blocks threads. You could use either the Parallel.ForEach method or PLINQ, as in the example below:
private IEnumerable<Item> Iterator(CancellationToken token)
{
    while (true)
    {
        Task delayTask = Task.Delay(5000, token);
        foreach (Item item in GetDatabaseItems()) yield return item;
        delayTask.GetAwaiter().GetResult();
    }
}
public void Start()
{
    //...
    ThreadPool.SetMinThreads(degreeOfParallelism, Environment.ProcessorCount);
    new Thread(() =>
    {
        try
        {
            Partitioner
                .Create(Iterator(token), EnumerablePartitionerOptions.NoBuffering)
                .AsParallel()
                .WithDegreeOfParallelism(degreeOfParallelism)
                .WithCancellation(token)
                .ForAll(item => ProcessItemAsync(item).GetAwaiter().GetResult());
        }
        catch (OperationCanceledException) { } // Ignore
    }).Start();
}
Online demo.
The Iterator fetches unprocessed items from the database in batches, and yields them one by one. The database won't be hit more frequently than once every 5 seconds.
The PLINQ query is going to fetch a new item from the Iterator each time it has a worker available, according to the WithDegreeOfParallelism policy. The setting EnumerablePartitionerOptions.NoBuffering ensures that it won't try to fetch more items in advance.
The ThreadPool.SetMinThreads is used in order to boost the availability of ThreadPool threads, since PLINQ is going to use lots of them. Without it the ThreadPool will not be able to satisfy the demand immediately, although it will gradually inject more threads and eventually catch up. But since you already know how many threads you'll need, you can configure the ThreadPool from the start.
In case you dislike the idea of blocking threads, you can find a simple substitute of the Parallel.ForEachAsync here, based on the TPL Dataflow library. It requires installing a NuGet package.
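For reference, a rough sketch of what such a Dataflow-based substitute could look like in this service (an ActionBlock with bounded parallelism; ProcessItemAsync and Iterator as above):
// The ActionBlock runs the async delegate for at most 10 items concurrently,
// without blocking ThreadPool threads while the HTTP calls are in flight.
var block = new ActionBlock<Item>(item => ProcessItemAsync(item),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 10,
        BoundedCapacity = 10, // don't pull items from the iterator in advance
        CancellationToken = token,
    });

foreach (Item item in Iterator(token))
{
    // SendAsync honors the BoundedCapacity; false means the block completed or faulted.
    if (!block.SendAsync(item).GetAwaiter().GetResult()) break;
}

block.Complete();
block.Completion.GetAwaiter().GetResult();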
The issue turned out to be the place where ServicePointManager.DefaultConnectionLimit is set.
In the version where HttpClient was only doing two requests at a time, ServicePointManager.DefaultConnectionLimit was being set before the threads were being created but after the HttpClient was initialized.
Once I moved it into the constructor before the HttpClient is initialized, everything started working.
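Roughly, the ordering that matters looks like this (a sketch; MyService is a hypothetical name for our service class):
public MyService()
{
    // Set the limit BEFORE the HttpClient is created; requests issued
    // earlier latch the old default of 2 connections per endpoint.
    ServicePointManager.DefaultConnectionLimit = 10;
    this.httpClient = new HttpClient();
}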
Thank you very much to #Theodor Zoulias for the help.
TLDR; Set ServicePointManager.DefaultConnectionLimit before initializing the HttpClient.
I'm currently working on an ASP.NET Core web app, which consists of a web server and two long-running services: a TCP Server (for managing my own clients) and a TCP Client (integration with an external platform).
Both services run alongside the web server. I achieved that by making them inherit from BackgroundService and injecting them into DI this way:
services.AddHostedService(provider => provider.GetService<TcpClientService>());
services.AddHostedService(provider => provider.GetService<TcpServerService>());
Unfortunately, during development I ran into a weird issue (which doesn't let me sleep at night, so at this point I beg for your help). For some reason, async code in TcpClientService blocks the execution of the other services (the web server and the TCP server).
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace ClientService.AsyncPoblem
{
    public class TcpClientService : BackgroundService
    {
        private readonly ILogger<TcpClientService> _logger;

        private bool Connected { get; set; }
        private TcpClient TcpClient { get; set; }

        public TcpClientService(ILogger<TcpClientService> logger)
        {
            _logger = logger;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                try
                {
                    if (Connected)
                    {
                        await Task.Delay(100, stoppingToken); // check every 100ms if still connected
                    }
                    else
                    {
                        TcpClient = new TcpClient("localhost", 1234);
                        HandleClient(TcpClient); // <-- Call causing the issue
                        _logger.Log(LogLevel.Debug, "After call");
                    }
                }
                catch (Exception e)
                {
                    // log the exception, wait for 3s and try again
                    _logger.Log(LogLevel.Critical, "An error occurred while trying to connect with server.");
                    _logger.Log(LogLevel.Critical, e.ToString());
                    await Task.Delay(3000, stoppingToken);
                }
            }
        }

        private async Task HandleClient(TcpClient client)
        {
            Connected = true;
            await using var ns = client.GetStream();
            using var streamReader = new StreamReader(ns);
            var msgBuilder = new StringBuilder();
            bool reading = false;
            var buffer = new char[1024];
            while (!streamReader.EndOfStream)
            {
                var res = await streamReader.ReadAsync(buffer, 0, 1024);
                foreach (var value in buffer)
                {
                    if (value == '\x02')
                    {
                        msgBuilder.Clear();
                        reading = true;
                    }
                    else if (value == '\x03')
                    {
                        reading = false;
                        if (msgBuilder.Length > 0)
                        {
                            Console.WriteLine(msgBuilder);
                            msgBuilder.Clear();
                        }
                    }
                    else if (value == '\x00')
                    {
                        break;
                    }
                    else if (reading)
                    {
                        msgBuilder.Append(value);
                    }
                }
                Array.Clear(buffer, 0, buffer.Length);
            }
            Connected = false;
        }
    }
}
Call causing the issue is located in else statement of ExecuteAsync method
else
{
    TcpClient = new TcpClient("localhost", 1234);
    HandleClient(TcpClient); // <-- Call causing the issue
    _logger.Log(LogLevel.Debug, "After call");
}
The code reads properly from the socket, but it blocks the initialization of the web server and the TCP server. Actually, even the log method is not reached. No matter if I put await in front of HandleClient() or not, the code behaves the same.
I've done some tests, and I figured out that this piece of code is not blocking anymore ("After call" log shows up):
else
{
    TcpClient = new TcpClient("localhost", 1234);
    await Task.Delay(1);
    HandleClient(TcpClient); // <- moving Task.Delay into HandleClient also works
    _logger.Log(LogLevel.Debug, "After call");
}
This also works like a charm (if I try to await Task.Run(), it will block the "After call" log, but the rest of the app will start with no problem):
else
{
    tcpClient = new TcpClient("localhost", 6969);
    Connected = true;
    Task.Run(() => ReceiveAsync(tcpClient));
    _logger.Log(LogLevel.Debug, "After call");
}
There are a couple more combinations that make it work, but my question is: why do the other methods work (especially the 1 ms delay, this one completely shuts down my brain) while firing HandleClient() without await doesn't? I know that fire-and-forget may not be the most elegant solution, but it should work and do its job, shouldn't it? I've searched for almost a month and still haven't found a single explanation. At this point I have a hard time falling asleep at night, because I have no one to ask and can't stop thinking about it...
Update
(Sorry for disappearing for over a day without any answers)
After many, many hours of investigation, I started debugging once again. Every time I hit the while loop in HandleClient(), I lost control over the debugger; the program seemed to continue to work, but it would never reach await streamReader.ReadAsync(). At some point I decided to change the condition in the while loop to true (I have no idea why I didn't think of trying it before), and everything began to work as expected. Messages were read from the TCP socket, and the other services fired up without any issues.
Here is piece of code causing issue
while (!streamReader.EndOfStream) <----- issue
{
    var res = await streamReader.ReadAsync(buffer, 0, 1024);
    // ...
After that observation, I decided to print out the result of EndOfStream before reaching the loop, to see what happens
Console.WriteLine(streamReader.EndOfStream);
while (!streamReader.EndOfStream)
{
    var res = await streamReader.ReadAsync(buffer, 0, 1024);
    // ...
Now the exact same thing was happening, but before even reaching the loop!
Explanation
Note:
I'm not a senior programmer, especially when it comes to dealing with asynchronous TCP communication, so I might be wrong here, but I will try to do my best.
streamReader.EndOfStream is not a regular field; it is a property, and it has logic inside its getter.
This is how it looks from the inside:
public bool EndOfStream
{
    get
    {
        ThrowIfDisposed();
        CheckAsyncTaskInProgress();
        if (_charPos < _charLen)
        {
            return false;
        }
        // This may block on pipes!
        int numRead = ReadBuffer();
        return numRead == 0;
    }
}
The EndOfStream getter is a synchronous method. To detect whether the stream has ended or not, it calls ReadBuffer(). Since there is no data in the buffer yet and the stream hasn't ended, the method hangs until there is some data to read. Unfortunately it cannot be used in an asynchronous context; it will always block (unfortunate, because it seems to be the only way to instantly detect an interrupted connection, a broken cable, or the end of the stream).
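The fix will be to rely on the return value of ReadAsync instead of EndOfStream (0 characters read means the stream has ended). A rough sketch of the loop I have in mind, which also avoids parsing stale characters beyond the count actually read:
var buffer = new char[1024];
int read;
while ((read = await streamReader.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    for (int i = 0; i < read; i++) // only the characters actually read
    {
        var value = buffer[i];
        // ... same '\x02' / '\x03' message framing as above ...
    }
}
Connected = false; // ReadAsync returned 0: the remote side closed the stream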
I don't have the finished piece of code yet; I need to rewrite it and add some broken-connection detection. I will post my solution as soon as I finish.
I would like to thank everyone for trying to help me, and especially #RoarS., who took the biggest part in the discussion and spent some of his own time to take a closer look at my issue.
This is poorly documented behaviour of the BackgroundService class. All registered IHostedService instances are started sequentially, in the order they were registered: the application will not start until each IHostedService has returned from StartAsync. A BackgroundService is an IHostedService that starts your ExecuteAsync task before returning from StartAsync. And an async method only returns to its caller when it first awaits a task that is incomplete.
TLDR; If you don't await anything in your ExecuteAsync method, the server will never start.
Since you aren't awaiting that async method, your code boils down to:
while (true)
    HandleClient(...);
(Do you really want to spawn an infinite number of TcpClient instances as fast as the CPU will go?) There's a really easy fix:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    await Task.Yield();
    // ...
}
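Putting the two fixes together, ExecuteAsync could yield first and then actually await HandleClient, so failures are observed and only one connection runs at a time (a sketch, keeping your field names):
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    await Task.Yield(); // return control to the host; the loop continues on the thread pool
    while (!stoppingToken.IsCancellationRequested)
    {
        try
        {
            TcpClient = new TcpClient("localhost", 1234);
            await HandleClient(TcpClient); // awaited: exceptions surface here
        }
        catch (Exception e)
        {
            _logger.Log(LogLevel.Critical, e.ToString());
            await Task.Delay(3000, stoppingToken); // back off before reconnecting
        }
    }
}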
I'm developing an application using some multithreading.
I have a thread that produces some data every 200 ms (vibration data from an acquisition device), and each time I receive the data I start several tasks to do stuff. On my development PC there is no error, but when I deploy the project on a less powerful tablet I have the following message appearing several times:
thread is dead. priority cannot be accessed.
Here is my code:
private void myCallback1Axis(IAsyncResult ar)
{
    GC.Collect();
    try
    {
        if (runningTask == ar.AsyncState)
        {
            data = reader.EndReadWaveform(ar); // GET THE DATA

            // LAUNCH FFTs' THREADS
            CancellationToken ct = cts.Token;
            task1 = System.Threading.Tasks.Task.Factory.StartNew(() =>
            {
                try
                {
                    if (ct.IsCancellationRequested)
                    {
                        Console.WriteLine("Task {0}: Cancelling", task1.Id);
                        return;
                    }
                    Console.WriteLine("Task {0}: {1}/2 In progress", task1.Id, 1);
                    ConsumeToFFT(new FFT_Parameters(data.GetRawData(), overlap1, (int)Fmax1,
                        NbLines1, HP, avg1, window1, switchFreqUnit1.Value, swVelo.Value));
                }
                catch (OperationCanceledException)
                {
                    // Any clean up code goes here.
                    Console.WriteLine("Task {0}: Cancelling", task1.Id);
                    //throw; // To ensure that the calling code knows the task was cancelled.
                }
                catch (Exception)
                {
                    // Clean up other stuff
                    //throw; // If the calling code also needs to know.
                }
            }, ct);
        }
    }
    catch (DaqException ex)
    {
        MessageBox.Show(ex.Message);
        runningTask = null;
        myTask.Dispose();
    }
}
I read that I have to:
Put threads into the background before starting (here)
But nothing more. I'm stuck with those messages and I cannot trace them. Any ideas how I can fix this?
Thank you very much.
(I'm using C#, Visual Studio, .NET 4.0)
Actually I was not able to trace the error, and by some miracle it's not there any more. The only thing that I've changed is here:
BlockingCollection<double[]> flowData = new BlockingCollection<double[]>(1);
Declaring my data as a BlockingCollection and then using it like this in my consumer:
double[] dataToProcess = flowData.Take();
and like this in my producer:
flowData.TryAdd(data.GetRawData());
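Put together, the handoff looks roughly like this (a sketch with the surrounding details elided):
// Bounded to a single slot: the producer drops data when the consumer
// is still busy, and the consumer blocks until data arrives.
var flowData = new BlockingCollection<double[]>(1);

// Producer (acquisition callback); drops the sample if the slot is full:
flowData.TryAdd(data.GetRawData());

// Consumer (FFT task); blocks until a sample is available:
double[] dataToProcess = flowData.Take();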
I think the fact that my data was not protected against race conditions generated the error. Even though I am not using any thread priority in my code, better protection of my shared data was the solution that got rid of the error.
Sorry guys, it's not the greatest explanation, but at least I don't have the error anymore. Thank you for your help.
I am making some REST requests using Mono.Mac (3.2.3) to communicate with a server, and as a retry mechanism I am quietly attempting to give the HTTP actions multiple tries if they fail or time out.
I have the following:
var tries = 0;
while (tries <= ALLOWED_TRIES)
{
    try
    {
        postTask.Start();
        tries++;
        if (!postTask.Wait(Timeout))
        {
            throw new TimeoutException("Operation timed out");
        }
        break;
    }
    catch (Exception e)
    {
        if (tries > ALLOWED_TRIES)
        {
            throw new Exception("Failed to access Resource.", e);
        }
    }
}
Where the task uses parameters of the parent method like so:
var postTask = new Task<HttpWebResponse>(() => { return someStuff(foo, bar); },
    Task.Factory.CancellationToken,
    Task.Factory.CreationOptions);
The problem seems to be that the task does not want to be run again with postTask.Start() after its first completion (and subsequent failure). Is there a simple way of doing this, or am I misusing tasks this way? Is there some sort of method that resets the task to its initial state, or am I better off using a factory of some sort?
You're indeed misusing the Task here, for a few reasons:
You cannot run the same task more than once. When it's done, it's done.
It is not recommended to construct a Task object manually; there's Task.Run and Task.Factory.StartNew for that.
You should not use Task.Run/Task.Factory.StartNew for a task which does IO-bound work. They are intended for CPU-bound work, as they "borrow" a thread from the ThreadPool to execute the task action. Instead, use pure async Task-based APIs for this, which do not need a dedicated thread to complete.
For example, below you can call GetResponseWithRetryAsync from the UI thread and still keep the UI responsive:
async Task<HttpWebResponse> GetResponseWithRetryAsync(string url, int retries)
{
    if (retries < 0)
        throw new ArgumentOutOfRangeException();
    while (true)
    {
        try
        {
            // Create a fresh request for each attempt;
            // a WebRequest instance cannot be reused after it has failed.
            var request = WebRequest.Create(url);
            var result = await request.GetResponseAsync();
            return (HttpWebResponse)result;
        }
        catch (Exception ex)
        {
            if (--retries <= 0)
                throw; // rethrow last error
            // otherwise, log the error and retry
            Debug.Print("Retrying after error: " + ex.Message);
        }
    }
}
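A call site could look like this (hypothetical URL):
// From any async method; the UI thread stays responsive while retrying.
var response = await GetResponseWithRetryAsync("http://example.com/api/items", 3);
Debug.Print("Status: " + response.StatusCode);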
More reading:
"Task.Factory.StartNew" vs "new Task(...).Start".
Task.Run vs Task.Factory.StartNew.
I would recommend doing something like this:
private int retryCount = 3;
...
public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;
    for (;;)
    {
        try
        {
            // Calling external service.
            await TransientOperationAsync();
            // Return or break.
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");
            currentRetry++;
            // Check if the exception thrown was a transient exception
            // based on the logic in the error detection strategy.
            // Determine whether to retry the operation, as well as how
            // long to wait, based on the retry strategy.
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                // If this is not a transient error
                // or we should not retry, re-throw the exception.
                throw;
            }
        }
        // Wait to retry the operation.
        // Consider calculating an exponential delay here and
        // using a strategy best suited for the operation and fault.
        await Task.Delay(1000);
    }
}

// Async method that wraps a call to a remote service (details not shown).
private async Task TransientOperationAsync()
{
    ...
}
This code is from the Retry Pattern Design from Microsoft. You can check it out here: https://msdn.microsoft.com/en-us/library/dn589788.aspx
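The IsTransient helper is not shown in the article; a minimal sketch of what it might look like for HTTP work (the exact set of retryable errors depends on your service):
// Treat timeouts and typical transient network failures as retryable.
private bool IsTransient(Exception ex)
{
    if (ex is TimeoutException) return true;
    var webEx = ex as WebException;
    if (webEx != null)
    {
        return webEx.Status == WebExceptionStatus.Timeout
            || webEx.Status == WebExceptionStatus.ConnectFailure
            || webEx.Status == WebExceptionStatus.ConnectionClosed;
    }
    return false;
}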
cross-posted to http://social.msdn.microsoft.com/Forums/en-US/tpldataflow/thread/89b3f71d-3777-4fad-9c11-50d8dc81a4a9
I know... I'm not really using TplDataflow to its maximum potential. ATM I'm simply using BufferBlock as a safe queue for message passing, where producer and consumer run at different rates. I'm seeing some strange behaviour that leaves me stumped as to how to proceed.
private BufferBlock<object> messageQueue = new BufferBlock<object>();

public void Send(object message)
{
    var accepted = messageQueue.Post(message);
    logger.Info("Send message was called qlen = {0} accepted={1}",
        messageQueue.Count, accepted);
}

public async Task<object> GetMessageAsync()
{
    try
    {
        var m = await messageQueue.ReceiveAsync(TimeSpan.FromSeconds(30));
        // despite messageQueue.Count > 0, the next line
        // occasionally does not execute
        logger.Info("message received");
        //.......
    }
    catch (TimeoutException)
    {
        //do something
    }
}
In the code above (which is part of a 2000-line distributed solution), Send is called periodically, every 100ms or so. This means an item is posted to messageQueue about 10 times a second. This is verified. However, occasionally it appears that ReceiveAsync does not complete within the timeout (i.e. the Post is not causing ReceiveAsync to complete) and a TimeoutException is raised after 30s. At this point, messageQueue.Count is in the hundreds. This is unexpected. This problem has been observed at slower rates of posting too (1 post/second) and usually happens before 1000 items have passed through the BufferBlock.
So, to work around this issue, I am using the following code, which works but occasionally causes 1s of latency when receiving (due to the bug above occurring):
public async Task<object> GetMessageAsync()
{
    try
    {
        object m;
        var attempts = 0;
        for (;;)
        {
            try
            {
                m = await messageQueue.ReceiveAsync(TimeSpan.FromSeconds(1));
            }
            catch (TimeoutException)
            {
                attempts++;
                if (attempts >= 30) throw;
                continue;
            }
            break;
        }
        logger.Info("message received");
        //.......
    }
    catch (TimeoutException)
    {
        //do something
    }
}
This looks like a race condition in TDF to me, but I can't get to the bottom of why it doesn't occur in the other places where I use BufferBlock in a similar fashion. Experimentally changing from ReceiveAsync to Receive doesn't help. I haven't checked, but I imagine that in isolation the code above works perfectly. It's a pattern I've seen documented in "Introduction to TPL Dataflow" (tpldataflow.docx).
What can I do to get to the bottom of this? Are there any metrics that might help infer what's happening? If I can't create a reliable test case, what more information can I offer?
Help!
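For what it's worth, here is the minimal isolated version of the pattern I would expect to work (one producer posting every 100ms, one consumer with the 30s timeout), in case anyone can spot a difference from my real code:
var messageQueue = new BufferBlock<object>();

var consumer = Task.Run(async () =>
{
    while (true)
    {
        try
        {
            var m = await messageQueue.ReceiveAsync(TimeSpan.FromSeconds(30));
            Console.WriteLine("received, qlen = {0}", messageQueue.Count);
        }
        catch (TimeoutException)
        {
            Console.WriteLine("timed out, qlen = {0}", messageQueue.Count);
        }
    }
});

for (var i = 0; i < 1000; i++)
{
    messageQueue.Post(new object());
    Thread.Sleep(100);
}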
Stephen seems to think the following is the solution:
var m = await messageQueue.ReceiveAsync();
instead of:
var m = await messageQueue.ReceiveAsync(TimeSpan.FromSeconds(30));
Can you confirm or deny this?