What I'm trying to do is create a 'Listener' which listens to several different Tcp ports at once, and pipes the messages to any Observers.
Pseudo-ish code:
private bool _Listen = false;
public void Start()
{
_Listen = true;
Task.Factory.StartNew(() => Listen(1));
Task.Factory.StartNew(() => Listen(2));
}
public void Stop()
{
_Listen = false;
}
private async void Listen(int port)
{
var tcp = new TcpClient();
while(_Listen)
{
await tcp.ConnectAsync(ip, port);
using (/*networkStream, BinaryReader, etc*/)
{
while(_Listen)
{
//Read from binary reader and OnNext to IObservable
}
}
}
}
(For brevity, I've omitted the try/catch inside the two whiles, both of which also check the flag)
My question is: should I be locking the flag, and if so, how does that tie-in with the async/await bits?
First of all, you should change your return type to Task, not void. async void methods are essentially fire-and-forget and can't be awaited or cancelled. They exist primarily to allow the creation of asynchronous event handlers or event-like code. They should never be used for normal asynchronous operations.
The TPL way to cooperatively cancel/abort/stop an asynchronous operation is to use a CancellationToken. You can check the token's IsCancellationRequested property to see if you need to cancel your operation and stop.
Even better, most asynchronous methods provided by the framework accept a CancellationToken so you can stop them immediately without waiting for them to return. You can use NetworkStream's ReadAsync(Byte[], Int32, Int32, CancellationToken) to read data and cancel immediately when someone calls your Stop method.
You could change your code to something like this:
CancellationTokenSource _source;
public void Start()
{
_source = new CancellationTokenSource();
Task.Factory.StartNew(() => Listen(1, _source.Token), _source.Token);
Task.Factory.StartNew(() => Listen(2, _source.Token), _source.Token);
}
public void Stop()
{
_source.Cancel();
}
private async Task Listen(int port, CancellationToken token)
{
var tcp = new TcpClient();
while(!token.IsCancellationRequested)
{
await tcp.ConnectAsync(ip, port);
using (var stream = tcp.GetStream())
{
...
try
{
await stream.ReadAsync(buffer, offset, count, token);
}
catch (OperationCanceledException ex)
{
//Handle Cancellation
}
...
}
}
}
You can read a lot more about cancellation in Cancellation in Managed Threads, including advice on how to poll, register a callback for cancellation, listen to multiple tokens etc.
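For example, registering a cancellation callback and linking several tokens together might look roughly like this (a minimal sketch, not specific to the listener code above):
using (var userSource = new CancellationTokenSource())
using (var timeoutSource = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
using (var linked = CancellationTokenSource.CreateLinkedTokenSource(
    userSource.Token, timeoutSource.Token))
{
    // Runs as soon as cancellation is requested on either source.
    linked.Token.Register(() => Console.WriteLine("Cancellation requested"));

    // Pass linked.Token to ConnectAsync/ReadAsync and to StartNew.
}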
The try/catch block exists because await throws an Exception if a Task is cancelled. You can avoid this by calling ContinueWith on the Task returned by ReadAsync and checking the IsCanceled flag:
private async Task Listen(int port, CancellationToken token)
{
var tcp = new TcpClient();
while(!token.IsCancellationRequested)
{
await tcp.ConnectAsync(ip, port);
using (var stream = tcp.GetStream())
{
///...
await stream.ReadAsync(buffer, offset, count, token)
.ContinueWith(t =>
{
if (t.IsCanceled)
{
//Do some cleanup?
}
else
{
//Process the buffer and send notifications
}
});
///...
}
}
}
await now awaits a simple Task that completes when the continuation finishes.
You would probably be better off sticking with Rx all the way through instead of using Task. Here is some code I wrote for connecting to UDP sockets with Rx.
public IObservable<UdpReceiveResult> StreamObserver
(int localPort, TimeSpan? timeout = null)
{
return Linq.Observable.Create<UdpReceiveResult>(observer =>
{
UdpClient client = new UdpClient(localPort);
var o = Linq.Observable.Defer(() => client.ReceiveAsync().ToObservable());
IDisposable subscription = null;
if ((timeout != null)) {
subscription = Linq.Observable.Timeout(o.Repeat(), timeout.Value).Subscribe(observer);
} else {
subscription = o.Repeat().Subscribe(observer);
}
return Disposable.Create(() =>
{
client.Close();
subscription.Dispose();
// Seems to take some time to close a socket so
// when we resubscribe there is an error. I
// really do NOT like this hack. TODO see if
// this can be improved
Thread.Sleep(TimeSpan.FromMilliseconds(200));
});
});
}
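Consuming it could then look something like this (a hypothetical usage sketch; the port number and handlers are made up):
var subscription = StreamObserver(5000, TimeSpan.FromSeconds(30))
    .Subscribe(
        result => Console.WriteLine($"{result.RemoteEndPoint}: {result.Buffer.Length} bytes"),
        ex => Console.WriteLine($"Stream faulted: {ex.Message}"));

// Disposing the subscription closes the underlying UdpClient.
subscription.Dispose();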
should I be locking the flag, and if so, how does that tie-in with the async/await bits?
You need to synchronize access to the flag somehow. If you don't, the compiler is allowed to make the following optimization:
bool compilerGeneratedLocal = _Listen;
while (compilerGeneratedLocal)
{
// body of the loop
}
Which would make your code wrong.
Some options how you can fix that:
Mark the bool flag volatile. This will ensure that the current value of the flag is always read (a minimal sketch of this option appears after the code below).
Use CancellationToken (as suggested by Panagiotis Kanavos). This will make sure that the underlying flag is accessed in a thread-safe manner for you. It has also the advantage that many async methods support CancellationToken, so you can cancel them too.
Some form of Event (such as ManualResetEventSlim) would be a more obvious choice when you're potentially dealing with multiple threads.
private ManualResetEventSlim _Listen;
public void Start()
{
_Listen = new ManualResetEventSlim(true);
Task.Factory.StartNew(() => Listen(1));
Task.Factory.StartNew(() => Listen(2));
}
public void Stop()
{
_Listen.Reset();
}
private async void Listen(int port)
{
var tcp = new TcpClient();
while (_Listen.IsSet)
{
// connect, read and OnNext as before
}
}
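For completeness, the volatile option from the list above would look something like this (a minimal sketch of the original flag-based code):
private volatile bool _Listen = false;

public void Start()
{
    _Listen = true;
    Task.Factory.StartNew(() => Listen(1));
    Task.Factory.StartNew(() => Listen(2));
}

public void Stop()
{
    // The volatile read inside the loop will observe this write.
    _Listen = false;
}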
I need an Rx.NET observable that accumulates items while there are no active subscribers, and emits the whole accumulated sequence (and any future items) to new subscribers, as soon as there are any.
It is different from ReplaySubject in that it doesn't keep items that have already been replayed to a subscriber. Thus, once a queued item has been observed by the current subscribers, it is removed from the queue and won't be seen by any future subscribers.
Can something like that be composed using standard Rx.NET operators?
I need it to tackle race conditions in the following scenario. There is a looping async workflow RunWorkflowAsync which needs to perform a ResetAsync task when it observes a specific ResetRequestedEvent message.
Here's the whole thing as a .NET 6 console app:
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading.Channels;
try
{
await TestAsync();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
async Task TestAsync()
{
var resetRequestsSubject = new Subject<ResetRequestedEvent>();
using var cts = new CancellationTokenSource(20000);
await Task.WhenAll(
SimulateResetRequests(cts.Token),
RunWorkflowAsync(resetRequestsSubject, cts.Token));
// simulate emitting reset requests
async Task SimulateResetRequests(CancellationToken cancelToken)
{
async Task Raise(int n, int delay)
{
var ev = new ResetRequestedEvent(n);
Console.WriteLine($"{ev} issued");
resetRequestsSubject!.OnNext(ev);
await Task.Delay(delay, cancelToken);
}
await Raise(1, 50);
await Raise(2, 50);
await Raise(3, 50);
await Raise(4, 1000);
await Raise(5, 5000);
await Raise(6, 4000);
await Raise(7, 3000);
resetRequestsSubject.OnCompleted();
}
// simulate the reset task
async Task ResetAsync(CancellationToken cancelToken)
{
await Task.Delay(1000, cancelToken);
Console.WriteLine("Reset done");
}
// simulate the work task
async Task DoWorkAsync(CancellationToken cancelToken)
{
await Task.Delay(2000, cancelToken);
Console.WriteLine("Work done");
}
// do reset, then work in a loop until cancelled
async Task RunWorkflowAsync(IObservable<ResetRequestedEvent> resetRequests, CancellationToken externalCancelToken)
{
// from this point, make sure reset requests don't go unobserved
var accumulatedResetRequests = resetRequests.Accumulate(externalCancelToken);
using var auto1 = accumulatedResetRequests.Connect();
while (true)
{
externalCancelToken.ThrowIfCancellationRequested(); // stops the whole workflow
using var internalCts = CancellationTokenSource.CreateLinkedTokenSource(externalCancelToken);
var internalCancelToken = internalCts.Token;
// signal cancellation upon the most recent reset request
using var auto2 = accumulatedResetRequests
.Do(ev => Console.WriteLine($"{ev} seen"))
.Throttle(TimeSpan.FromMilliseconds(100))
.Do(ev => Console.WriteLine($"{ev} acted upon"))
.Subscribe(_ => internalCts.Cancel());
try
{
// start with a reset
await ResetAsync(internalCancelToken);
// do work until another reset is requested
while (true)
{
await DoWorkAsync(internalCancelToken);
}
}
catch (OperationCanceledException)
{
}
}
}
}
record ResetRequestedEvent(int Number);
public static class RxExt
{
class CumulativeObservable<T> : IConnectableObservable<T>
{
readonly IObservable<T> _source;
readonly Channel<T> _channel;
readonly CancellationToken _cancelToken;
public CumulativeObservable(IObservable<T> source, CancellationToken cancellationToken)
{
_source = source;
_channel = Channel.CreateUnbounded<T>();
_cancelToken = cancellationToken;
}
public IDisposable Connect() =>
_source.Subscribe(
onNext: item => _channel.Writer.TryWrite(item),
onError: ex => _channel.Writer.Complete(ex),
onCompleted: () => _channel.Writer.Complete());
public IDisposable Subscribe(IObserver<T> observer) =>
_channel.Reader.ReadAllAsync(_cancelToken).ToObservable().Subscribe(observer);
}
public static IConnectableObservable<T> Accumulate<T>(
this IObservable<T> @this,
CancellationToken cancellationToken) =>
new CumulativeObservable<T>(@this, cancellationToken);
}
The idea is to stop all pending tasks inside RunWorkflowAsync and perform ResetAsync when a ResetRequestedEvent message comes along.
I realize there's more than one way to cook an egg (and implement RunWorkflowAsync), but I like this approach as I don't need to think about thread safety when I use and recycle the internalCts cancellation token source (to stop all pending tasks before another iteration).
Above, CumulativeObservable does what I want, but it's a very naive implementation which only supports one concurrent subscriber (unlike, say, ReplaySubject) and lacks any safety checks.
I'd prefer a composition that can be built using standard operators.
I have this (working) bare-bones implementation of an asynchronous polling callback loop:
public void Start(ICallback callback)
{
if (Callback != null)
Stop();
Console.WriteLine("STARTING");
Callback = callback;
cancellation = new CancellationTokenSource();
this.task = Task.Run(() => TaskLoop(), cancellation.Token);
Console.WriteLine("STARTED");
}
public void Stop()
{
if (Callback == null)
{
Console.WriteLine("ALREADY stopped");
return;
}
Console.WriteLine("STOPPING");
cancellation.Cancel();
try
{
task.Wait();
}
catch (Exception e)
{
Console.WriteLine($"{e.Message}");
}
finally
{
cancellation.Dispose();
cancellation = null;
Callback = null;
task = null;
Console.WriteLine("STOPPED");
}
}
private void TaskLoop()
{
int i = 0;
while (!cancellation.IsCancellationRequested)
{
Thread.Sleep(1000);
Console.WriteLine("Starting iteration... {0}", i);
Task.Run(() =>
{
//just for testing
Callback.SendMessage($"Iteration {i} at {System.DateTime.Now}");
}).Wait();
Console.WriteLine("...Ending iteration {0}", i++);
}
Console.WriteLine("CANCELLED");
}
It's actually called from unmanaged C++ via COM, so this is a library project (and the callback is a COM-marshalled object) hence wanting to test the design first.
I'm switching to using the async paradigm and wonder if it should be as simple as sprinkling some async dust on my method declarations and swapping Wait() calls for await? Obviously Thread.Sleep would be changed for Task.Delay.
I am fairly sure COM will dedicate a thread to this object for marshaling purposes and unmanaged C++ doesn't know about the .Net async model, so are there any gotchas/pitfalls to watch out for?
This is an updated version I'm testing, but, like resource management, multithreading is an area where your code can seem to work perfectly while actually being quite badly broken, so I'd appreciate thoughts:
public void Start(ICallback callback)
{
if (Callback != null)
Stop();
Console.WriteLine("STARTING");
Callback = callback;
cancellation = new CancellationTokenSource();
this.task = TaskLoopAsync();
Console.WriteLine("STARTED");
}
public async void Stop()
{
if (Callback == null)
{
Console.WriteLine("ALREADY stopped");
return;
}
Console.WriteLine("STOPPING");
cancellation.Cancel();
try
{
await task;
}
catch (Exception e)
{
Console.WriteLine($"{e.Message}");
}
finally
{
cancellation.Dispose();
cancellation = null;
Callback = null;
task = null;
Console.WriteLine("STOPPED");
}
}
private async void TaskLoopAsync()
{
int i = 0;
while (!cancellation.IsCancellationRequested)
{
await Task.Delay(1000);
Console.WriteLine("Starting iteration... {0}", i);
Callback.SendMessage($"Iteration {i} at {System.DateTime.Now}");
Console.WriteLine("...Ending iteration {0}", i++);
}
Console.WriteLine("CANCELLED");
}
unmanaged C++ doesn't know about the .Net async model, so are there any gotchas/pitfalls to watch out for?
Just that one. It's fine to apply async/await to your internal code (e.g., TaskLoop), but you can't let it extend out to the COM boundary. So Start and Stop cannot be made async.
Switching to async introduced a bug in your code. The problem is with the async void Stop method. It is called from inside the Start method, and since there is no way to await an async void method, both methods are executing concurrently for a while. So it is a matter of luck which of the two commands below will be executed first:
this.task = TaskLoopAsync(); // in Start method
task = null; // in Stop method
Other points of interest:
The CancellationTokenSource is not used the way it is intended. This class is more than a glorified volatile bool. It also allows cancelling asynchronous operations at any time by registering callbacks. For example, you can cancel the asynchronous Task.Delay immediately by passing the token as its second, optional argument:
await Task.Delay(1000, cancellation.Token);
Then you must be prepared to handle the OperationCanceledException that will be thrown when the token is canceled. This is the standard way of communicating cancellation in .NET: throwing this exception from one end and catching it at the other.
You can achieve more consistent intervals between calls to the Callback.SendMessage method by creating the Task.Delay task before calling the method and awaiting it afterwards.
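Putting those points together, TaskLoopAsync might look something like this (a sketch only, reusing the cancellation and Callback fields from the question):
private async Task TaskLoopAsync()
{
    int i = 0;
    try
    {
        while (!cancellation.IsCancellationRequested)
        {
            // Create the delay before the callback so the interval between
            // SendMessage calls stays close to one second.
            var delay = Task.Delay(1000, cancellation.Token);
            Console.WriteLine("Starting iteration... {0}", i);
            Callback.SendMessage($"Iteration {i} at {DateTime.Now}");
            Console.WriteLine("...Ending iteration {0}", i++);
            await delay;
        }
    }
    catch (OperationCanceledException)
    {
        // Cancellation was requested while awaiting the delay.
    }
    Console.WriteLine("CANCELLED");
}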
I have a few methods that report some data to a database. We want to invoke all calls to the data service asynchronously. These calls to the data service are all over the place, so we want to make sure that the DS calls are executed one after another, in order, at any given time. Initially, I was using async/await on each of these methods and each of the calls was executed asynchronously, but we found out that if they are out of sequence there is room for errors.
So I thought we should queue all these asynchronous tasks and send them on a separate thread, but I want to know what options we have. I came across SemaphoreSlim. Will this be appropriate for my use case?
Or what other options would suit my use case? Please guide me.
So, this is what I currently have in my code:
public static SemaphoreSlim mutex = new SemaphoreSlim(1);
//first DS call
public async Task SendModuleDataToDSAsync(Module parameters)
{
var tasks1 = new List<Task>();
var tasks2 = new List<Task>();
//await mutex.WaitAsync(); **//is this correct way to use SemaphoreSlim ?**
foreach (var setting in Module.param)
{
Task job1 = SaveModule(setting);
tasks1.Add(job1);
Task job2= SaveModule(GetAdvancedData(setting));
tasks2.Add(job2);
}
await Task.WhenAll(tasks1);
await Task.WhenAll(tasks2);
//mutex.Release(); // **is this correct?**
}
private async Task SaveModule(Module setting)
{
await Task.Run(() =>
{
// Invokes Calls to DS
...
});
}
//somewhere down the main thread, invoking second call to DS
//Second DS Call
private async Task SendInstrumentSettingsToDS(<param1>, <param2>)
{
//await mutex.WaitAsync();// **is this correct?**
await Task.Run(() =>
{
//TrackInstrumentInfoToDS
//mutex.Release();// **is this correct?**
});
if(param2)
{
await Task.Run(() =>
{
//TrackParam2InstrumentInfoToDS
});
}
}
Initially, I was using async/await on each of these methods and each of the calls was executed asynchronously, but we found out that if they are out of sequence there is room for errors.
So I thought we should queue all these asynchronous tasks and send them on a separate thread, but I want to know what options we have. I came across SemaphoreSlim.
SemaphoreSlim does restrict asynchronous code to running one at a time, and is a valid form of mutual exclusion. However, since "out of sequence" calls can cause errors, SemaphoreSlim is not an appropriate solution because it does not guarantee FIFO.
In a more general sense, no synchronization primitive guarantees FIFO because that can cause problems due to side effects like lock convoys. On the other hand, it is natural for data structures to be strictly FIFO.
So, you'll need to use your own FIFO queue, rather than having an implicit execution queue. Channels is a nice, performant, async-compatible queue, but since you're on an older version of C#/.NET, BlockingCollection<T> would work:
public sealed class ExecutionQueue
{
private readonly BlockingCollection<Func<Task>> _queue = new BlockingCollection<Func<Task>>();
public ExecutionQueue() => Completion = Task.Run(() => ProcessQueueAsync());
public Task Completion { get; }
public void Complete() => _queue.CompleteAdding();
private async Task ProcessQueueAsync()
{
foreach (var value in _queue.GetConsumingEnumerable())
await value();
}
}
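If Channels were available, the same idea could be sketched like this (an alternative variant, not the original answer's code, using System.Threading.Channels and C# 8 await foreach):
public sealed class ChannelExecutionQueue
{
    private readonly Channel<Func<Task>> _queue = Channel.CreateUnbounded<Func<Task>>();

    public ChannelExecutionQueue() => Completion = Task.Run(() => ProcessQueueAsync());

    public Task Completion { get; }

    public void Complete() => _queue.Writer.Complete();

    public void Add(Func<Task> lambda) => _queue.Writer.TryWrite(lambda);

    private async Task ProcessQueueAsync()
    {
        // ReadAllAsync yields queued items one at a time until Complete() is called.
        await foreach (var value in _queue.Reader.ReadAllAsync())
            await value();
    }
}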
The only tricky part with this setup is how to queue work. From the perspective of the code queueing the work, they want to know when the lambda is executed, not when the lambda is queued. From the perspective of the queue method (which I'm calling Run), the method needs to complete its returned task only after the lambda is executed. So, you can write the queue method something like this:
public Task Run(Func<Task> lambda)
{
var tcs = new TaskCompletionSource<object>();
_queue.Add(async () =>
{
// Execute the lambda and propagate the results to the Task returned from Run
try
{
await lambda();
tcs.TrySetResult(null);
}
catch (OperationCanceledException ex)
{
tcs.TrySetCanceled(ex.CancellationToken);
}
catch (Exception ex)
{
tcs.TrySetException(ex);
}
});
return tcs.Task;
}
This queueing method isn't as perfect as it could be. If a task completes with more than one exception (this is normal for parallel code), only the first one is retained (this is normal for async code). There's also an edge case around OperationCanceledException handling. But this code is good enough for most cases.
Now you can use it like this:
public static ExecutionQueue _queue = new ExecutionQueue();
public async Task SendModuleDataToDSAsync(Module parameters)
{
var tasks1 = new List<Task>();
var tasks2 = new List<Task>();
foreach (var setting in Module.param)
{
Task job1 = _queue.Run(() => SaveModule(setting));
tasks1.Add(job1);
Task job2 = _queue.Run(() => SaveModule(GetAdvancedData(setting)));
tasks2.Add(job2);
}
await Task.WhenAll(tasks1);
await Task.WhenAll(tasks2);
}
Here's a compact solution with the fewest moving parts that still guarantees FIFO ordering (unlike some of the suggested SemaphoreSlim solutions). There are two overloads of Enqueue, so you can enqueue tasks with and without return values.
using System;
using System.Threading;
using System.Threading.Tasks;
public class TaskQueue
{
private Task _previousTask = Task.CompletedTask;
public Task Enqueue(Func<Task> asyncAction)
{
return Enqueue(async () => {
await asyncAction().ConfigureAwait(false);
return true;
});
}
public async Task<T> Enqueue<T>(Func<Task<T>> asyncFunction)
{
var tcs = new TaskCompletionSource(TaskCreationOptions.RunContinuationsAsynchronously);
// get predecessor and wait until it's done. Also atomically swap in our own completion task.
await Interlocked.Exchange(ref _previousTask, tcs.Task).ConfigureAwait(false);
try
{
return await asyncFunction().ConfigureAwait(false);
}
finally
{
tcs.SetResult();
}
}
}
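Usage might look something like this (a sketch reusing SaveModule and GetAdvancedData from the question):
var queue = new TaskQueue();

// Items start strictly in the order they are enqueued; each one
// begins only after the previous item's task has completed.
Task job1 = queue.Enqueue(() => SaveModule(setting));
Task job2 = queue.Enqueue(() => SaveModule(GetAdvancedData(setting)));

await Task.WhenAll(job1, job2);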
Please keep in mind that your first solution, queueing all tasks into lists, doesn't ensure that the tasks are executed one after another. They all run in parallel because they aren't awaited before the next task is started.
So yes, you have to use a SemaphoreSlim to get async locking with await. A simple implementation might be:
private readonly SemaphoreSlim _syncRoot = new SemaphoreSlim(1);
public async Task SendModuleDataToDSAsync(Module parameters)
{
await this._syncRoot.WaitAsync();
try
{
foreach (var setting in Module.param)
{
await SaveModule(setting);
await SaveModule(GetAdvancedData(setting));
}
}
finally
{
this._syncRoot.Release();
}
}
If you can use Nito.AsyncEx (with _syncRoot declared as an AsyncLock instead of a SemaphoreSlim), the code can be simplified to:
public async Task SendModuleDataToDSAsync(Module parameters)
{
using var lockHandle = await this._syncRoot.LockAsync();
foreach (var setting in Module.param)
{
await SaveModule(setting);
await SaveModule(GetAdvancedData(setting));
}
}
One option is to queue operations that will create tasks instead of queuing already running tasks as the code in the question does.
Pseudocode without locking:
Queue<Func<Task>> tasksQueue = new Queue<Func<Task>>();
async Task RunAllTasks()
{
while (tasksQueue.Count > 0)
{
var taskCreator = tasksQueue.Dequeue(); // get creator
var task = taskCreator(); // starting one task at a time here
await task; // wait till task completes
}
}
// note that declaring createSaveModuleTask does not
// start SaveModule task - it will only happen after this func is invoked
// inside RunAllTasks
Func<Task> createSaveModuleTask = () => SaveModule(setting);
tasksQueue.Enqueue(createSaveModuleTask);
tasksQueue.Enqueue(() => SaveModule(GetAdvancedData(setting)));
// no DB operations started at this point
// this will start tasks from the queue one by one.
await RunAllTasks();
Using ConcurrentQueue would likely be the right thing in actual code. You would also need to know the total number of expected operations in order to stop once all of them have been started and awaited one after another.
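A ConcurrentQueue-based version of the loop above might look roughly like this (still a sketch; error handling omitted):
ConcurrentQueue<Func<Task>> tasksQueue = new ConcurrentQueue<Func<Task>>();

async Task RunAllTasks()
{
    // TryDequeue is safe even when producers enqueue from other threads.
    while (tasksQueue.TryDequeue(out Func<Task> taskCreator))
    {
        var task = taskCreator(); // start one task at a time
        await task;               // wait till it completes
    }
}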
Building on your comment under Alexei's answer, your approach with the SemaphoreSlim is correct.
Assuming that the methods SendInstrumentSettingsToDS and SendModuleDataToDSAsync are members of the same class, you simply need an instance field for a SemaphoreSlim (called _semaphore below, since lock is a reserved word in C#). At the start of each method that needs synchronization call await _semaphore.WaitAsync(), and call _semaphore.Release() in the finally block.
public async Task SendModuleDataToDSAsync(Module parameters)
{
await _semaphore.WaitAsync();
try
{
...
}
finally
{
_semaphore.Release();
}
}
private async Task SendInstrumentSettingsToDS(<param1>, <param2>)
{
await _semaphore.WaitAsync();
try
{
...
}
finally
{
_semaphore.Release();
}
}
It is important that the call to _semaphore.Release() is in the finally block, so that the semaphore is released even if an exception is thrown somewhere in the try block.
I've written a class that asynchronously pings a subnet. It works; however, the number of hosts returned sometimes changes between runs. Some questions:
Am I doing something wrong in the code below?
What can I do to make it work better?
The ScanIPAddressesAsync() method is called like this:
NetworkDiscovery nd = new NetworkDiscovery("192.168.50.");
nd.RaiseIPScanCompleteEvent += HandleScanComplete;
nd.ScanIPAddressesAsync();
namespace BPSTestTool
{
public class IPScanCompleteEvent : EventArgs
{
public List<String> IPList { get; set; }
public IPScanCompleteEvent(List<String> _list)
{
IPList = _list;
}
}
public class NetworkDiscovery
{
private static object m_lockObj = new object();
private List<String> m_ipsFound = new List<string>();
private String m_ipBase = null;
public List<String> IPList
{
get { return m_ipsFound; }
}
public EventHandler<IPScanCompleteEvent> RaiseIPScanCompleteEvent;
public NetworkDiscovery(string ipBase)
{
this.m_ipBase = ipBase;
}
public async void ScanIPAddressesAsync()
{
var tasks = new List<Task>();
m_ipsFound.Clear();
await Task.Run(() => AsyncScan());
return;
}
private async void AsyncScan()
{
List<Task> tasks = new List<Task>();
for (int i = 2; i < 255; i++)
{
String ip = m_ipBase + i.ToString();
if (m_ipsFound.Contains(ip) == false)
{
for (int x = 0; x < 2; x++)
{
Ping p = new Ping();
var task = HandlePingReplyAsync(p, ip);
tasks.Add(task);
}
}
}
await Task.WhenAll(tasks).ContinueWith(t =>
{
OnRaiseIPScanCompleteEvent(new IPScanCompleteEvent(m_ipsFound));
});
}
protected virtual void OnRaiseIPScanCompleteEvent(IPScanCompleteEvent args)
{
RaiseIPScanCompleteEvent?.Invoke(this, args);
}
private async Task HandlePingReplyAsync(Ping ping, String ip)
{
PingReply reply = await ping.SendPingAsync(ip, 1500);
if ( reply != null && reply.Status == System.Net.NetworkInformation.IPStatus.Success)
{
lock (m_lockObj)
{
if (m_ipsFound.Contains(ip) == false)
{
m_ipsFound.Add(ip);
}
}
}
}
}
}
One problem I see is async void. The only reason async void is even allowed is for event handlers. If it's not an event handler, it's a red flag.
Asynchronous methods always start running synchronously until the first await that acts on an incomplete Task. In your code, that is at await Task.WhenAll(tasks). At that point, AsyncScan returns - before all the tasks have completed. Usually, it would return a Task that will let you know when it's done, but since the method signature is void, it cannot.
So now look at this:
await Task.Run(() => AsyncScan());
When AsyncScan() returns, the Task returned from Task.Run completes and your code moves on, before all of the pings have finished.
So when you report your results, the number of results will be random, depending on how many happened to finish before you displayed the results.
If you want make sure that all of the pings are done before continuing, then change AsyncScan() to return a Task:
private async Task AsyncScan()
And change the Task.Run to await it:
await Task.Run(async () => await AsyncScan());
However, you could also just get rid of the Task.Run and just have this:
await AsyncScan();
Task.Run runs the code in a separate thread. The only reason to do that is in a UI app where you want to move CPU-heavy computations off of the UI thread. When you're just doing network requests like this, that's not necessary.
On top of that, you're also using async void here:
public async void ScanIPAddressesAsync()
Which means that wherever you call ScanIPAddressesAsync() is unable to wait until everything is done. Change that to async Task and await it too.
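The corrected signature and call site might look like this (a sketch based on the code in the question):
public async Task ScanIPAddressesAsync()
{
    m_ipsFound.Clear();
    await AsyncScan();
}

// At the call site (assuming it is itself inside an async method):
NetworkDiscovery nd = new NetworkDiscovery("192.168.50.");
nd.RaiseIPScanCompleteEvent += HandleScanComplete;
await nd.ScanIPAddressesAsync();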
This code needs a lot of refactoring and bugs like this in concurrency are hard to pinpoint. My bet is on await Task.Run(() => AsyncScan()); which is wrong because AsyncScan() is async and Task.Run(...) will return before it is complete.
My second guess is m_ipsFound, which is shared state. This means there might be many threads simultaneously reading from and writing to it, and List<T> is not a thread-safe data type for that.
Also, as a side point, a bare return on the last line of a method does not add to readability, and async void is a prohibited practice. Always use async Task even if you return nothing. You can read more in this very good answer.
I'm running a CPU-bound task in an ASP.NET MVC application. Some users can "subscribe" to this task and they should be notified of completion. But when the task has no subscribers it must be cancelled. The task starts via an ajax request and is cancelled when the .abort() method is called. In the controller I have a CancellationToken parameter which determines cancellation.
The problem is that when one of the subscribers calls abort (unsubscribes), the linked token cancels the task even though other users are waiting for the result. How can I cancel the CancellationToken only after checking some condition? I can't check the IsCancellationRequested property after every loop iteration because I'm wrapping a non-async method.
Users are notified with SignalR after task completion. I've tried using a ConcurrentDictionary to check, before cancelling, whether the task has subscribers or not.
private async Task<Diff> CompareAsync(Model modelName, CancellationToken ct)
{
try
{
return await Task.Factory.StartNew(() =>
{
ct.ThrowIfCancellationRequested();
return _someServiceName.CompareLines(modelName.LinesA, modelName.LinesB, ct);
}, ct, TaskCreationOptions.LongRunning, TaskScheduler.Default).ConfigureAwait(false);
}
catch (OperationCanceledException)
{
//do some things
}
}
I need something like this, but can't come up with any (not ugly) ideas:
private async Task<Diff> CompareAsync(Model modelName, CancellationToken ct)
{
try
{
return await Task.Factory.StartNew(() =>
{
using (var source = new CancellationTokenSource())
{
if (ct.IsCancellationRequested && CompareService.SharedComparison.TryGetValue(modelName.Hash, out var usersCount) && usersCount < 2)
{
source.Cancel();
}
return _someServiceName.CompareLines(modelName.LinesA, modelName.LinesB, source.Token);
}
}, ct, TaskCreationOptions.LongRunning, TaskScheduler.Default).ConfigureAwait(false);
}
catch (OperationCanceledException)
{
//do some things
}
}
You will have to maintain a thread-safe subscriber count (most likely using lock), and only cancel the token when it's 0.
private int _subscribers;
private object _sync = new object();
private void AddSubscribers()
{
lock (_sync)
{
// do what ever you need to do
_subscribers++;
}
}
private void RemoveSubscribers()
{
lock (_sync)
{
// do what ever you need to do
_subscribers--;
if(_subscribers <= 0)
{
// cancel token
}
}
}
Note: Obviously this is not a complete solution and leaves a lot to the imagination.