I've got a MessageReceiver which is pumping messages from a queue:
var factory = MessagingFactory.CreateFromConnectionString(connectionString);
var receiver = factory.CreateMessageReceiver(queuePath);
receiver.OnMessageAsync(HandleBrokeredMessageAsync);
HandleBrokeredMessageAsync is my delegate which the receiver pumps messages into.
When I call Close() on the receiver, it will stop pumping further messages from the queue. In order to avoid potential race conditions, I want to be sure all pending processing has completed before returning control.
I have considered tracking each call to HandleBrokeredMessageAsync in a ConcurrentBag<T> and removing each entry when it completes. I'd use a BlockingCollection<T> to block until the drain-down has finished, but it's not clear when to call CompleteAdding(): I would call it after calling Close(), but can a message still be delivered to the handler after Close() has been called?
receiver.Close();
pendingMessages.CompleteAdding();
// Can additional messages be pumped after this?
Take a look at the sample linked below, as it uses a ManualResetEvent to co-ordinate the shutdown of the Run() method with the Close operation. A similar approach may work here: set a flag inside the message handler so that no further messages are accepted for processing, and only call Close() once all of the in-flight handlers have finished.
https://stackoverflow.com/a/16739691/1078351
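A minimal sketch of that idea, assuming the handler and the shutdown logic live on the same class; names such as ProcessAsync, _shutdownRequested and Shutdown are illustrative and not part of the Service Bus API:
private int _inFlight;
private volatile bool _shutdownRequested;
private readonly ManualResetEventSlim _drained = new ManualResetEventSlim(false);

private async Task HandleBrokeredMessageAsync(BrokeredMessage message)
{
    Interlocked.Increment(ref _inFlight);
    try
    {
        if (_shutdownRequested)
        {
            // Hand the message back instead of processing it (adjust to your OnMessageOptions).
            await message.AbandonAsync();
            return;
        }
        await ProcessAsync(message); // your actual processing
    }
    finally
    {
        if (Interlocked.Decrement(ref _inFlight) == 0 && _shutdownRequested)
        {
            _drained.Set();
        }
    }
}

public void Shutdown(MessageReceiver receiver)
{
    _shutdownRequested = true;  // refuse new work before stopping the pump
    receiver.Close();
    if (Interlocked.CompareExchange(ref _inFlight, 0, 0) == 0)
    {
        _drained.Set();         // nothing was in flight, don't block
    }
    _drained.Wait();            // returns once all in-flight handlers have finished
}
Because the flag is set before Close() and every handler increments the counter before checking it, a message delivered in the gap after Close() is refused rather than processed, and Shutdown only returns once the counter has drained to zero.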
Related
I have an observable object that creates a UDP socket. The object has methods to send packets from that socket and a thread that listens for received packets and raises the PacketReceived event when one arrives. My question is how to handle the case where the object's Close method is called while the listener thread is busy invoking the PacketReceived event. I can think of two solutions.
The Close method returns immediately and the listener thread ends once it has finished invoking the PacketReceived event. With this solution the listener thread could still be alive after Close returns, so if I then close another object that is used by a handler subscribed to PacketReceived, there is a chance the UDP listener thread will try to access it after it has been closed.
The thread that calls Close waits for the listener thread to finish its work and then closes the object. After Close returns it is guaranteed that no further listener events will be invoked, so the closing thread can safely close other objects that the UDP listener thread might use. The problem is that if the thread calling Close holds a lock which the UDP listener thread also tries to acquire while invoking the event, there will be a deadlock.
What is the preferred solution to this problem?
The second option is the better one, and you can use semaphores for it. As @Fildor stated, we have no code to go on, so this will be a "sketch" rather than a direct solution.
It sounds like a simple SemaphoreSlim is enough to control this:
var semaphore = new SemaphoreSlim(1, 1);

await semaphore.WaitAsync();
try
{
    // Only one thread at a time can access this.
    ...
}
finally
{
    semaphore.Release();
}
Obviously you need cross-class safety here, so making a class with a semaphore that is accessible from both places should be enough.
Depending on your use case and the latency required, you could also use a ConcurrentDictionary<string, SemaphoreSlim>, i.e. a concurrent dictionary of semaphores. Here the key would be some unique identifier that both the closing thread and the listener thread have access to. Then you can do something like:
private readonly ConcurrentDictionary<string, SemaphoreSlim> _semaphoreDictionary =
    new ConcurrentDictionary<string, SemaphoreSlim>();

...

var semaphore = _semaphoreDictionary.GetOrAdd(someUniqueKeyForTheThreadPair, _ => new SemaphoreSlim(1, 1));
await semaphore.WaitAsync();
try
{
    // Only one thread at a time can access this.
    ...
}
finally
{
    semaphore.Release();
    _semaphoreDictionary.TryRemove(someUniqueKeyForTheThreadPair, out _);
}
Without seeing any of your code, that is the best I can offer.
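To make the shared-semaphore idea a bit more concrete, here is a rough sketch of both sides using the same gate; the class and member names (UdpSocketWrapper, RaisePacketReceived, _closed) are assumptions about your code, not taken from it:
public sealed class UdpSocketWrapper
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private volatile bool _closed;

    public event EventHandler<byte[]> PacketReceived;

    // Runs on the listener thread for every received datagram.
    private void RaisePacketReceived(byte[] payload)
    {
        _gate.Wait();
        try
        {
            if (_closed) return; // never raise the event after Close has completed
            PacketReceived?.Invoke(this, payload);
        }
        finally
        {
            _gate.Release();
        }
    }

    // Returns only when no handler invocation is in progress.
    public void Close()
    {
        _gate.Wait();
        try
        {
            _closed = true;
            // dispose the underlying socket here
        }
        finally
        {
            _gate.Release();
        }
    }
}
As long as the thread calling Close does not already hold the gate (or another lock the event handlers take), this gives you option two without the deadlock scenario from the question.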
I am using durable functions to wait for external events with timeouts. Even though one of the events is received before the timeout, a TimerFired event is recognised in dfMon when none should be.
The orchestration's logic is as follows:
the orchestration puts a message on a queue which is monitored by an external system
this remote system notifies my orchestration via the cmdReceived external event once the queue message has been received
then it triggers a long-running local process
afterwards it again notifies my orchestration, via the cmdExecuted external event, that this process has completed
I.e. ->
PUT a message on the queue
timer1: wait 10mins for external event cmdReceived, continue if received, else throw
timer2: wait 60mins for external event cmdExecuted, continue if received, else throw
Now the cmdReceived event is received rather quickly, after 2 minutes or so. Then the next timer should wait at most 60 minutes for cmdExecuted to be received, which usually takes about 12 minutes. By then the 10 minutes from timer1 have elapsed and a TimerFired event is logged, yet no exception is thrown and the orchestration keeps running. Still, I ask myself whether the existence of the TimerFired event (from timer1) has an impact I am not immediately aware of.
This is the relevant line of code:
await ctx.WaitForExternalEvent("cmdReceived", TimeSpan.FromMinutes(10));
var success = await ctx.WaitForExternalEvent<bool>("cmdExecuted", TimeSpan.FromMinutes(60));
My first thought was that maybe I need to explicitly cancel the timer myself; I've read about this here. So then I tried:
var cts = new CancellationTokenSource();
await ctx.WaitForExternalEvent("cmdReceived", TimeSpan.FromMinutes(10), cts.Token);
cts.Cancel();
var cts2 = new CancellationTokenSource();
var success = await ctx.WaitForExternalEvent<bool>("cmdExecuted", TimeSpan.FromMinutes(60), cts2.Token);
cts2.Cancel();
However, this didn't really help. The first timer still fires (though it doesn't throw) after 10 minutes, even though the event it was configured for has already been received.
See below for a screenshot from dfMon.
Is there anything wrong about this? I find this extra TimerFired event confusing. Does it matter as long as it is (correctly) not throwing an exception?
Cheers
Short answer: no it does not matter. The event is ignored by Durable Task.
The way that durable timers work with the default Storage durability provider is through scheduled queue messages.
So when you call WaitForExternalEvent, it uses CreateTimer under the hood.
When the replay finishes and Durable Task sees that there is a timer that needs to be started, it sends a scheduled message to one of the control queues.
This message will become visible at the time you specified.
Now when your orchestration receives the external event, that causes a replay, and it'll replay over the WaitForExternalEvent call.
During that replay it sees that the external event it was waiting for has been raised.
This resolves the Task it is waiting on and your orchestration continues.
The next time it replays, the timer event has arrived as well.
But the timer is ignored because the external event was processed first.
(Internally it uses TaskCompletionSource.TrySetException(), which does nothing because SetResult has already been called.)
It won't throw the timeout exception.
Even though it had no effect, the timer event did occur, so it is recorded in the history.
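For illustration, here is roughly the equivalent of the timeout overload written with the explicit timer API (assuming a Durable Functions orchestration context named ctx); it shows why cancelling the timer helps with cleanup but does not remove the already-scheduled TimerFired entry from the history:
using (var cts = new CancellationTokenSource())
{
    DateTime deadline = ctx.CurrentUtcDateTime.AddMinutes(10);
    Task timerTask = ctx.CreateTimer(deadline, cts.Token);
    Task eventTask = ctx.WaitForExternalEvent("cmdReceived");

    Task winner = await Task.WhenAny(eventTask, timerTask);
    if (winner == eventTask)
    {
        // The event won the race. Cancelling abandons the timer task, but the
        // scheduled queue message was already sent, so a TimerFired event still
        // appears in the history later and is simply ignored.
        cts.Cancel();
    }
    else
    {
        throw new TimeoutException("cmdReceived was not raised within 10 minutes.");
    }
}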
I have an async Method named "RequestMessage()". Within this method, I'm going to send a message to a message broker. Since I don't know when to expect the result, I'm using "TaskCompletionSource". I want the async method to terminate, when the reply message arrived (I'll receive an event from the broker).
This works fine so far. Now, my issue is that this message might never be answered, or only far too late.
I'm looking for a way to implement my own timeout. To do so, I tried a Timer as well as an Observable from Reactive Extensions. The issue is always the same: I can't get my main thread and the timer thread synchronized, because I'm using .NET Core 2 and there is no SynchronizationContext.
So, in my code there is an observable subscription:
Observable
.Interval(TimeSpan.FromSeconds(timeOutInSeconds))
.Subscribe(
x =>
{
timeoutCallback();
});
If the time expires, a callback should be invoked. In my calling method, I handle the callback this way:
TimeoutDelegate timeoutHandler = () => throw new WorkcenterRepositoryCommunicationException("Broker communication timed out.", null);
As you have already realized, this exception will never be caught, as it is not thrown on the main thread.
How can I sync threads here?
Thanks in advance!
The best way to "fail upon some problem" IMHO would be to throw the appropriate exception, but you can definitely just use return; if you prefer to avoid exceptions.
This will create a completed/faulted task that was completed synchronously, so the caller using await will get a finished task and continue on using the same thread.
CancellationToken allows for the caller to cancel the operation, which isn't the case you are describing.
Task.Yield doesn't terminate any operation, it just enables other tasks to run for some time and reschedules itself for later.
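One way to get the timeout onto the awaiting code path, instead of throwing it on the timer or Rx thread, is to fault the TaskCompletionSource itself; whoever awaits its Task then observes the exception without any SynchronizationContext. The sketch below assumes hypothetical broker wiring (Message, broker.ReplyReceived, SendRequestToBroker) standing in for your RequestMessage method:
public Task<Message> RequestMessageAsync(TimeSpan timeout)
{
    var tcs = new TaskCompletionSource<Message>(TaskCreationOptions.RunContinuationsAsynchronously);

    // Reply path: the broker event completes the task.
    broker.ReplyReceived += (sender, reply) => tcs.TrySetResult(reply);

    // Timeout path: fault the same task; the awaiting caller gets the exception.
    var timer = new System.Threading.Timer(
        _ => tcs.TrySetException(
            new WorkcenterRepositoryCommunicationException("Broker communication timed out.", null)),
        null, timeout, Timeout.InfiniteTimeSpan);

    // Whichever path wins, clean up the timer once the task has finished.
    tcs.Task.ContinueWith(_ => timer.Dispose(), TaskScheduler.Default);

    SendRequestToBroker();
    return tcs.Task;
}
TrySetResult and TrySetException ensure that only the first outcome wins, so a reply arriving after the timeout is simply ignored.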
So I'm calling Dispatcher.BeginInvoke() to perform some UI actions from a Timer.Elapsed event. The timer ticks fast, and several new BeginInvoke() calls may stack up before a previous one has been fully processed. Once the current call has finished, I'm only interested in the latest queued BeginInvoke() call, i.e. any older unprocessed calls in the message queue should be discarded.
What is the correct way of emptying the Dispatcher's BeginInvoke queue to achieve this?
To demonstrate an example, consider that I'm reading value from a sensor in Timer.Elapsed event several times a second and then updating a complex UI to show the read values. This UI update action takes some time and during this time, one or more new instances of the read values stack up on the dispatcher queue to get rendered. Obviously, when I have got more recent values from the sensor, I'd want to discard all instances in the waiting line and just keep the current one, to be sent for rendering once the processor is free.
There is no way to dequeue the pending callbacks yourself, since you are not managing the UI thread.
But you could use a CancellationTokenSource:
Pass the CancellationTokenSource's token (CancellationTokenSource.Token) to the dispatcher and to your callback.
Listen for cancellation by repeatedly invoking CancellationToken.ThrowIfCancellationRequested() inside your callback, and catch the OperationCanceledException that is thrown once CancellationTokenSource.Cancel() has been called.
Use the catch block for OperationCanceledException to clean up, reverting the state to what it was before the callback executed.
Before invoking the dispatcher with a new action, cancel all previous callbacks by calling CancellationTokenSource.Cancel(). This makes ThrowIfCancellationRequested() actually throw an OperationCanceledException inside the callback.
Invoke the dispatcher with the new callback and a new CancellationToken from a fresh CancellationTokenSource instance, and dispose all cancelled CancellationTokenSource instances.
This way you can cancel a dispatcher action, e.g. if it is long running, or prevent it from executing at all while it is still pending. Otherwise you would have to let the next dispatcher action run and simply overwrite the changes of the previous one.
Dispatcher.InvokeAsync(...) is equivalent to Dispatcher.BeginInvoke(...), but in addition it lets you pass a cancellation token to the dispatcher; a condensed sketch of the whole approach follows.
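A condensed sketch, assuming a WPF dispatcher; _latestCts, ReadSensor, SensorReading and UpdateUi are placeholders for your own fields and UI logic:
private CancellationTokenSource _latestCts;

private void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    SensorReading reading = ReadSensor(); // placeholder for your sensor read

    // Cancel whatever update is still pending or running, then queue the new one.
    var fresh = new CancellationTokenSource();
    var previous = Interlocked.Exchange(ref _latestCts, fresh);
    previous?.Cancel();
    previous?.Dispose();

    CancellationToken token = fresh.Token;
    Application.Current.Dispatcher.InvokeAsync(() =>
    {
        try
        {
            token.ThrowIfCancellationRequested(); // a newer reading exists, skip this one
            UpdateUi(reading);                    // the long-running UI update
        }
        catch (OperationCanceledException)
        {
            // Superseded by a newer reading; clean up or revert here if needed.
        }
    }, DispatcherPriority.Background, token);
}
Passing the token to InvokeAsync lets the dispatcher drop the operation if it is cancelled before it starts; the ThrowIfCancellationRequested call covers the window where it has only just started running.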
I have a thread that grabs messages from a concurrent queue and writes them to a network stream. The loop inside the thread looks like this:
while (!cancel.IsCancellationRequested)
{
messageBus.outboundPending.WaitOne();
var message = messageBus.GetFrom(Direction.Outbound);
if (message == null) { continue; }
MessageWriter.WriteMessage(networkStream, message, cancel, OnStreamClose).Wait(cancel);
}
The requirement is that the thread stops if the cancellation token is set. However, since it waits for pending messages in the queue, the thread will stay blocked. How could I "combine" both the cancellation token and the outbound event so that if either of them are set, the thread unblocks?
The only convoluted way that I can think of to make it work is to replace the outboundPending event with a new third event, and start two new threads: one that waits for the outbound event, and another that waits for the cancel event, and have both of them set the third event when they unblock. But that feels really bad.
Try WaitHandle.WaitAny and include the CancellationToken.WaitHandle.
A discussion of a cancellable WaitAll can be found here
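Applied to the loop from the question, that looks roughly like this (reusing the names from your snippet; WaitAny returns the index of the handle that was signalled):
var handles = new WaitHandle[] { messageBus.outboundPending, cancel.WaitHandle };

while (!cancel.IsCancellationRequested)
{
    int signalled = WaitHandle.WaitAny(handles);
    if (signalled == 1) { break; } // index 1 is the cancellation token's handle

    var message = messageBus.GetFrom(Direction.Outbound);
    if (message == null) { continue; }
    MessageWriter.WriteMessage(networkStream, message, cancel, OnStreamClose).Wait(cancel);
}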
Use the WaitOne(TimeSpan) overload. It returns true if the handle was signalled and false if the timeout elapsed.
E.g., if you pass TimeSpan.FromSeconds(1) and a second goes by without a signal, the method returns false and execution continues; if the signal was given, it returns true.
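In the loop from the question that would look like the sketch below; it trades up to a second of shutdown latency for simplicity:
while (!cancel.IsCancellationRequested)
{
    // Wake up at least once per second to re-check the cancellation token.
    if (!messageBus.outboundPending.WaitOne(TimeSpan.FromSeconds(1)))
    {
        continue; // timed out without a signal
    }

    var message = messageBus.GetFrom(Direction.Outbound);
    if (message == null) { continue; }
    MessageWriter.WriteMessage(networkStream, message, cancel, OnStreamClose).Wait(cancel);
}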