I'm using a library that listens to messages by calling a method that returns a task that completes only when the connection to the server is closed.
// Returns a task that completes when the connection ends
Task Listen();
The way I use Listen is fire-and-forget, since I don't want to block on just receiving messages.
The library also provides a subscription to execute a task when it is done sending messages (basically at the end of Listen).
// Exception is null if no problem occurred while processing messages
void SubscribeToOnEnd(Func<Exception, Task> OnEnd);
In my code, I want to be able to dispose the library object once Listen has ended or we have received the OnEnd callback.
Right now I can think of 3 options:
Dispose it from within the OnEnd task; I'm not sure what behaviour this could cause, since OnEnd is called from the object that I'm trying to dispose.
Dispose the object in a continuation of the Listen() method; I'm not sure of the consequences of this option either, it may have the same side effects as option 1 (a sketch of this option follows the code sample below).
Fire a task from OnEnd using Task.Run to dispose the object, which I don't really think is good practice, and it may cause some synchronization issues.
What is the best practice for handling such a case?
For simplicity, imagine my code as:
public class Class
{
    private Library library;

    public async Task MyMethod(List<string> messages)
    {
        // in reality this is a synchronized check
        if (this.library == null)
        {
            this.library = new Library();
            this.library.SubscribeToReceive((m) =>
            {
                Console.WriteLine(m);
            });
            this.library.SubscribeToOnEnd(this.OnEnd);
            this.library.Listen();
        }
        foreach (var s in messages)
        {
            await this.library.SendAsync(s);
        }
    }

    private Task OnEnd(Exception exception)
    {
        // do some stuff
        return Task.CompletedTask;
    }
}
The library object is valid for use as long as the connection is alive, and if for any reason the connection is closed/dropped, I want to be able to dispose it at that point, so that when MyMethod is called again I can create another object and start listening again.
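For illustration, here is a minimal sketch of option 2 against the sample above, assuming Library implements IDisposable (if it exposes some other cleanup method, substitute that):
// Sketch only: dispose in a continuation of Listen() instead of calling it fire-and-forget.
this.library.Listen().ContinueWith(_ =>
{
    this.library.Dispose();
    this.library = null; // lets MyMethod create a fresh instance and start listening again next time
}, TaskScheduler.Default);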
We have a third-party method Foo which sometimes runs into a deadlock for unknown reasons.
We are running a single-threaded TCP server and call this method every 30 seconds to check that the external system is available.
To mitigate the problem with the deadlock in the third-party code, we put the ping call in a Task.Run so that the server does not deadlock.
Like this:
async Task<bool> WrappedFoo()
{
    var timeout = 10000;
    var task = Task.Run(() => ThirdPartyCode.Foo());
    var delay = Task.Delay(timeout);
    if (delay == await Task.WhenAny(delay, task))
    {
        return false;
    }
    else
    {
        return await task;
    }
}
But this (in our opinion) has the potential to starve the application of free threads, since if one call to ThirdPartyCode.Foo deadlocks, the thread will never recover, and if this happens often enough we might run out of resources.
Is there a general approach how one should handle deadlocking third-party code?
A CancellationToken won't work because the third-party API does not provide any cancellation options.
Update:
The method at hand is from the SAPNCO.dll provided by SAP to establish and test RFC connections to an SAP system; therefore, the method is not a simple network ping. I renamed the method in the question to avoid further misunderstandings.
Is there a general approach how one should handle deadlocking third-party code?
Yes, but it's not easy or simple.
The problem with misbehaving code is that it can not only leak resources (e.g., threads), but it can also indefinitely hold onto important resources (e.g., some internal "handle" or "lock").
The only way to forcefully reclaim threads and other resources is to end the process. The OS is used to cleaning up misbehaving processes and is very good at it. So, the solution here is to start a child process to do the API call. Your main application can communicate with its child process by redirected stdin/stdout, and if the child process ever times out, the main application can terminate it and restart it.
This is, unfortunately, the only reliable way to cancel uncancelable code.
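A minimal sketch of that approach, assuming a hypothetical helper executable (ping-worker.exe) that calls ThirdPartyCode.Foo() itself and writes "OK" to stdout when the call succeeds:
// Sketch only: run the fragile call in a separate worker process and kill it on timeout.
async Task<bool> WrappedFooViaChildProcess(int timeoutMs = 10000)
{
    var psi = new System.Diagnostics.ProcessStartInfo("ping-worker.exe")
    {
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        UseShellExecute = false
    };

    using (var process = System.Diagnostics.Process.Start(psi))
    {
        process.StandardInput.WriteLine("ping"); // ask the child to call Foo()
        process.StandardInput.Flush();

        var resultTask = process.StandardOutput.ReadLineAsync();
        if (await Task.WhenAny(resultTask, Task.Delay(timeoutMs)) != resultTask)
        {
            process.Kill(); // forcefully reclaim the stuck call; the OS cleans up the rest
            return false;
        }
        return resultTask.Result == "OK";
    }
}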
Cancelling a task is a cooperative operation: you pass a CancellationToken to the desired method, and externally you call CancellationTokenSource.Cancel:
public void Caller()
{
    try
    {
        CancellationTokenSource cts = new CancellationTokenSource();
        Task longRunning = Task.Run(() => CancellableThirdParty(cts.Token), cts.Token);
        Thread.Sleep(3000); // or some condition / signal
        cts.Cancel();
        // note: to observe the OperationCanceledException in the catch block below,
        // the task would also have to be awaited (e.g. in an async Caller)
    }
    catch (OperationCanceledException ex)
    {
        // handle the cancellation somehow
    }
}

public void CancellableThirdParty(CancellationToken token)
{
    while (true)
    {
        // token.ThrowIfCancellationRequested() -- if you don't handle the cancellation here
        if (token.IsCancellationRequested)
        {
            // code to handle the cancellation signal
            // throw new OperationCanceledException("[Reason]");
        }
    }
}
As you can see in the code above, in order to cancel an ongoing task, the method running inside it must be structured around the CancellationToken.IsCancellationRequested flag (or simply the CancellationToken.ThrowIfCancellationRequested method), so that the caller just issues CancellationTokenSource.Cancel.
Unfortunately, if the third-party code is not designed around a CancellationToken (it does not accept a CancellationToken parameter), then there is not much you can do.
Your code isn't cancelling the blocked operation. Use a CancellationTokenSource and pass a cancellation token to Task.Run instead:
var cts = new CancellationTokenSource(timeout);
try
{
    await Task.Run(() => ThirdPartyCode.Ping(), cts.Token);
    return true;
}
catch (TaskCanceledException)
{
    return false;
}
It's quite possible that blocking is caused by networking or DNS issues, not an actual deadlock.
That still wastes a thread waiting for a network operation to complete. You could use .NET's own Ping.SendPingAsync to ping asynchronously and specify a timeout:
var ping = new Ping();
var reply = await ping.SendPingAsync(ip, timeout);
return reply.Status == IPStatus.Success;
The PingReply class contains far more detailed information than a simple success/failure. The Status property alone differentiates between routing problems, unreachable destinations, timeouts, etc.
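For example, continuing from the snippet above, a quick sketch that distinguishes a few of those statuses:
// Inspect the status instead of reducing the reply to success/failure.
switch (reply.Status)
{
    case IPStatus.Success:
        Console.WriteLine($"Reply from {reply.Address} in {reply.RoundtripTime} ms");
        break;
    case IPStatus.TimedOut:
        Console.WriteLine("Request timed out");
        break;
    case IPStatus.DestinationHostUnreachable:
    case IPStatus.DestinationNetUnreachable:
        Console.WriteLine("Destination unreachable");
        break;
    default:
        Console.WriteLine($"Ping failed: {reply.Status}");
        break;
}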
I am trying to make a function that, when called, returns information from a server back to the caller. What I want in this function is that it creates a thread that issues the command to the server and then suspends itself until the server responds with the answer.
public AccountState GetAccount(string key)
{
    AccountState state = null;
    Thread t = new Thread(() =>
    {
        _connection.SomeCommandSentToServer(key);
        accountRequests.TryAdd(key, (Thread.CurrentThread, null));
        // Suspend current thread until ServerResponseHere is called
        Thread.CurrentThread.Suspend();
        // We have been resumed, value should be in accountRequests now
        accountRequests.TryRemove(key, out var item);
        state = item.AccountState;
    });
    t.Start();
    return state;
}

public ConcurrentDictionary<string, (Thread Thread, AccountState AccountState)> accountRequests = new ConcurrentDictionary<string, (Thread Thread, AccountState AccountState)>();

/// Once the server is done processing the command, this function is called
public void ServerResponseHere(string key, AccountState state)
{
    accountRequests.TryGetValue(key, out var item);
    accountRequests.TryUpdate(key, (item.Thread, state), item);
    item.Thread.Resume();
}
My idea is that, when the server responds, a different function calls the ServerResponseHere function shown above.
However, C# says that Suspend/Resume are deprecated. What is a better way to do this?
UPDATE
Clarification about "SomeCommandSentToServer" -- This just sends a command to the server via TCP sockets.
In that call, all that is really happening is a transmission to the server. I'm using a library that uses the WinSock2.h Send() call -- yes, I know it is a deprecated library... but the library I'm using requires it.
I have a separate thread that polls input from the server, so I have no way to await this SomeCommandSentToServer -- I would need to await some sort of callback function (i.e. the resume function I was mentioning) to make this work.
I am unsure how to do that.
With all the information available from the question, here is what you should aim for when using the async / await pattern:
public async Task<AccountState> GetAccountAsync(string key)
{
// The method SomeCommandSentToServerAsync must be changed to support async.
AccountState state = await _connection.SomeCommandSentToServerAsync(key);
return state;
}
It is highly unlikely that you need anything else. By that, I mean you will not have to manipulate threads directly, put them in a concurrent dictionary and manually suspend or resume them, because that looks horrible from a maintenance perspective ;)
.NET will take care of the threading part, meaning the magic of the async infrastructure will most likely release the current thread (assuming a call is actually made to the server) until the server returns a response.
Then the infrastructure will either use the existing synchronization context -if you are on a UI thread for instance- or grab a thread from the thread pool -if not- to run the rest of the method.
You could even reduce the size of the method a bit more by simply returning a Task with a result of type AccountState:
public Task<AccountState> GetAccountAsync(string key)
{
// The method SomeCommandSentToServerAsync must be changed to support async.
return _connection.SomeCommandSentToServerAsync(key);
}
In both examples, you will have to make the callers async as well:
public async Task TheCallerAsync()
{
// Grab the key from somewhere.
string key = ...;
var accountState = await <inst>.GetAccountAsync(key);
// Do something with the state.
...
}
Turning a legacy method into an async method
Now, regarding the legacy SomeCommandSentToServer method: there is a way to await it. Yes, you can turn that method into an asynchronous method that can be used with async / await.
Of course, I do not have all the details of your implementation but I hope you will get the idea of what needs to be done. The magical class to do that is called TaskCompletionSource.
What it does is give you access to a Task that you control: you create an instance of TaskCompletionSource, keep it somewhere, send the command, and immediately return the Task property of that new instance.
Once you get the result from your polling thread, you grab the instance of TaskCompletionSource, get the AccountState and call SetResult with the account state. This will mark the task as completed and do the resume part you were asking for :)
Here is the idea:
public Task<AccountState> SomeCommandSentToServerAsync(string key)
{
var taskCompletionSource = new TaskCompletionSource<AccountState>();
// Find a way to keep the task in some state somewhere
// so that you can get to it from the polling thread.
// Do the legacy WinSock Send() command.
return taskCompletionSource.Task;
}
// This would be, I guess, your polling thread.
// Again, I am sure it is not 100% accurate but
// it will hopefully give you an idea of where the key pieces must be.
private void PollingThread()
{
while(must_still_poll)
{
// Waits for some data to be available.
// Grabs the data.
if(this_is_THE_response)
{
// Get the response and build the account state somehow...
AccountState accountState = ...
// Key piece #1
// Grab the TaskCompletionSource instance somewhere.
// Key piece #2
// This is the magic line:
taskCompletionSource.SetResult(accountState);
// You can also do the following if something goes wrong:
// taskCompletionSource.SetException(new Exception());
}
}
}
I am just learning about Threads in C# and a question arose.
I have a TCP-Server-Class which accepts connections and passes them to a TCP-Client-Class.
The code roughly looks like this: (dummy code)
class TcpServer
{
    public static void Main(string[] args)
    {
        while (true)
        {
            // I create a new instance of my "TCP-Client-Class" and pass the accepted connection to the constructor
            ConnectionHandler client = new ConnectionHandler(TCPListner.acceptconnections());
            // create a new Thread to handle that connection and start handling it
            Thread client1 = new Thread(client.handleConnections);
            client1.Start();
            // Do some other stuff for protocolling
            doSomeOtherStuff();
            // and then wait for a new connection
        }
    }
    // Some other methods etc.
}

class ConnectionHandler
{
    // Constructor to which the accepted TCPclient connection has to be passed
    public ConnectionHandler(TCPclient client)
    {
        // Do stuff
    }

    // Method to handle the connection
    public void handleConnections()
    {
        // Open streams
        // .
        // .
        // .
        // close streams
        // close connections
    }
}
Now to my questions:
a) Is it obligatory to close that thread again after it has reached the "close connections" part?
b) To close a thread, do I just have to call the .Join method in my main class, or is there anything else I have to take care of?
c) In case of an error, can I simply leave the handleConnections() method and close that thread (of course with appropriate error handling)?
d) Is it important to drop the "client" reference or the "client1" reference, or are they just collected by the garbage collector?
Well, it's entirely fine to let the thread just complete normally. If the top-level call of a thread throws an exception, it may take down the process, depending on how the CLR is configured. It's usually better to have a top-level error handler to log the error and move on.
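For example, a minimal sketch of such a top-level handler inside handleConnections:
public void handleConnections()
{
    try
    {
        // open streams, talk to the client, close streams and connection
    }
    catch (Exception ex)
    {
        // log and swallow so an unhandled exception doesn't tear down the process
        Console.Error.WriteLine(ex);
    }
}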
However, you should consider what you want to happen on shutdown:
Your while (true) loop should be changed to allow some mechanism for shutting down
If you keep track of all the threads currently handling existing requests, when you know you're trying to shut down, you can Join on them (probably with a timeout) to allow them to complete before the server finishes. However, you want to remove a thread from that collection when it completes. This sort of thing gets fiddly fairly quickly, but is definitely doable.
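A rough sketch of that bookkeeping (StartHandler and Shutdown are illustrative names, not from the question), assuming a simple list guarded by a lock:
// Track handler threads so the server can Join them on shutdown.
private static readonly List<Thread> activeThreads = new List<Thread>();
private static readonly object threadsLock = new object();

static void StartHandler(ConnectionHandler client)
{
    Thread t = new Thread(() =>
    {
        try { client.handleConnections(); }
        finally
        {
            // remove ourselves from the collection when the handler completes
            lock (threadsLock) { activeThreads.Remove(Thread.CurrentThread); }
        }
    });
    lock (threadsLock) { activeThreads.Add(t); }
    t.Start();
}

static void Shutdown()
{
    List<Thread> snapshot;
    lock (threadsLock) { snapshot = new List<Thread>(activeThreads); }
    foreach (var t in snapshot)
        t.Join(TimeSpan.FromSeconds(5)); // wait a bounded time for each handler to finish
}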
As an aside, it's more common to use a thread-pool for this sort of thing, rather than creating a brand new thread for each request.
I have been struggling a bit with some async await stuff. I am using RabbitMQ for sending/receiving messages between some programs.
As a bit of background, the RabbitMQ client uses 3 or so threads that I can see: a connection thread and two heartbeat threads. Whenever a message is received via TCP, the connection thread handles it and calls a callback which I have supplied via an interface. The documentation says that it is best to avoid doing lots of work during this call, since it's done on the same thread as the connection and things need to continue on. They supply a QueueingBasicConsumer which has a blocking 'Dequeue' method which is used to wait for a message to be received.
I wanted my consumers to be able to actually release their thread context during this waiting time so somebody else could do some work, so I decided to use async/await tasks. I wrote an AwaitableBasicConsumer class which uses TaskCompletionSources in the following fashion:
I have an awaitable Dequeue method:
public Task<RabbitMQ.Client.Events.BasicDeliverEventArgs> DequeueAsync(CancellationToken cancellationToken)
{
    //we are enqueueing a TCS. This is a "read"
    rwLock.EnterReadLock();
    try
    {
        TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs> tcs = new TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs>();
        //if we are cancelled before we finish, this will cause the tcs to become cancelled
        cancellationToken.Register(() =>
        {
            tcs.TrySetCanceled();
        });
        //if there is something in the undelivered queue, the task will be immediately completed
        //otherwise, we queue the task into deliveryTCS
        if (!TryDeliverUndelivered(tcs))
            deliveryTCS.Enqueue(tcs);
        return tcs.Task;
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}
The callback which the RabbitMQ client calls fulfills the tasks. This is called from the context of the AMQP connection thread:
public void HandleBasicDeliver(string consumerTag, ulong deliveryTag, bool redelivered, string exchange, string routingKey, RabbitMQ.Client.IBasicProperties properties, byte[] body)
{
//we want nothing added while we remove. We also block until everybody is done.
rwLock.EnterWriteLock();
try
{
RabbitMQ.Client.Events.BasicDeliverEventArgs e = new RabbitMQ.Client.Events.BasicDeliverEventArgs(consumerTag, deliveryTag, redelivered, exchange, routingKey, properties, body);
bool sent = false;
TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs> tcs;
while (deliveryTCS.TryDequeue(out tcs))
{
//once we manage to actually set somebody's result, we are done with handling this
if (tcs.TrySetResult(e))
{
sent = true;
break;
}
}
//if nothing was sent, we queue up what we got so that somebody can get it later.
/**
* Without the rwlock, this logic would cause concurrency problems in the case where after the while block completes without sending, somebody enqueues themselves. They would get the
* next message and the person who enqueues after them would get the message received now. Locking prevents that from happening since nobody can add to the queue while we are
* doing our thing here.
*/
if (!sent)
{
undelivered.Enqueue(e);
}
}
finally
{
rwLock.ExitWriteLock();
}
}
rwLock is a ReaderWriterLockSlim. The two queues (deliveryTCS and undelivered) are ConcurrentQueues.
The problem:
Every once in a while, the method that awaits the dequeue method throws an exception. This would not normally be an issue since that method is also async and so it enters the "Exception" completion state that tasks enter. The problem comes in the situation where the task that calls DequeueAsync is resumed after the await on the AMQP Connection thread that the RabbitMQ client creates. Normally I have seen tasks resume onto the main thread or one of the worker threads floating around. However, when it resumes onto the AMQP thread and an exception is thrown, everything stalls. The task does not enter its "Exception state" and the AMQP Connection thread is left saying that it is executing the method that had the exception occur.
My main confusion here is why this doesn't work:
var task = c.RunAsync(); //<-- This method awaits the DequeueAsync and throws an exception afterwards
ConsumerTaskState state = new ConsumerTaskState()
{
Connection = connection,
CancellationToken = cancellationToken
};
//if there is a problem, we execute our faulted method
//PROBLEM: If task fails when its resumed onto the AMQP thread, this method is never called
task.ContinueWith(this.OnFaulted, state, TaskContinuationOptions.OnlyOnFaulted);
Here is the RunAsync method, set up for the test:
public async Task RunAsync()
{
using (var channel = this.Connection.CreateModel())
{
...
AwaitableBasicConsumer consumer = new AwaitableBasicConsumer(channel);
var result = consumer.DequeueAsync(this.CancellationToken);
//wait until we find something to eat
await result;
throw new NotImplementedException(); //<-- the test exception. Normally this causes OnFaulted to be called, but sometimes, it stalls
...
} //<-- This is where the debugger says the thread is sitting at when I find it in the stalled state
}
Reading what I have written, I see that I may not have explained my problem very well. If clarification is needed, just ask.
My solutions that I have come up with are as follows:
Remove all Async/Await code and just use straight up threads and block. Performance will be decreased, but at least it won't stall sometimes
Somehow exempt the AMQP threads from being used for resuming tasks. I assume that they were sleeping or something and then the default TaskScheduler decided to use them. If I could find a way to tell the task scheduler that those threads are off limits, that would be great.
Does anyone have an explanation for why this is happening or any suggestions to solving this? Right now I am removing the async code just so that the program is reliable, but I really want to understand what is going on here.
I first recommend that you read my async intro, which explains in precise terms how await will capture a context and use that to resume execution. In short, it will capture the current SynchronizationContext (or the current TaskScheduler if SynchronizationContext.Current is null).
The other important detail is that async continuations are scheduled with TaskContinuationOptions.ExecuteSynchronously (as @svick pointed out in a comment). I have a blog post about this but AFAIK it is not officially documented anywhere. This detail does make writing an async producer/consumer queue difficult.
The reason await isn't "switching back to the original context" is (probably) because the RabbitMQ threads don't have a SynchronizationContext or TaskScheduler - thus, the continuation is executed directly when you call TrySetResult because those threads look just like regular thread pool threads.
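For illustration, here is a minimal console sketch of that behaviour, assuming no SynchronizationContext or custom TaskScheduler is present: the code after the await resumes inline on whichever thread calls TrySetResult.
// Sketch: shows an await continuation executing on the thread that completes the TCS.
static async Task Main()
{
    var tcs = new TaskCompletionSource<int>();

    var awaiter = Awaiter(tcs); // runs synchronously up to the await, then returns

    var completer = new Thread(() =>
    {
        Console.WriteLine($"Completing on thread {Thread.CurrentThread.ManagedThreadId}");
        tcs.TrySetResult(42); // the continuation in Awaiter runs inline on this thread
    });
    completer.Start();

    await awaiter;
}

static async Task Awaiter(TaskCompletionSource<int> tcs)
{
    Console.WriteLine($"Awaiting on thread {Thread.CurrentThread.ManagedThreadId}");
    await tcs.Task;
    Console.WriteLine($"Resumed on thread {Thread.CurrentThread.ManagedThreadId}");
}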
BTW, reading through your code, I suspect your use of a reader/writer lock and concurrent queues are incorrect. I can't be sure without seeing the whole code, but that's my impression.
I strongly recommend you use an existing async queue and build a consumer around that (in other words, let someone else do the hard part :). The BufferBlock<T> type in TPL Dataflow can act as an async queue; that would be my first recommendation if you have Dataflow available on your platform. Otherwise, I have an AsyncProducerConsumerQueue type in my AsyncEx library, or you could write your own (as I describe on my blog).
Here's an example using BufferBlock<T>:
private readonly BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs> _queue = new BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs>();
public void HandleBasicDeliver(string consumerTag, ulong deliveryTag, bool redelivered, string exchange, string routingKey, RabbitMQ.Client.IBasicProperties properties, byte[] body)
{
RabbitMQ.Client.Events.BasicDeliverEventArgs e = new RabbitMQ.Client.Events.BasicDeliverEventArgs(consumerTag, deliveryTag, redelivered, exchange, routingKey, properties, body);
_queue.Post(e);
}
public Task<RabbitMQ.Client.Events.BasicDeliverEventArgs> DequeueAsync(CancellationToken cancellationToken)
{
return _queue.ReceiveAsync(cancellationToken);
}
In this example, I'm keeping your DequeueAsync API. However, once you start using TPL Dataflow, consider using it elsewhere as well. When you need a queue like this, it's common to find other parts of your code that would also benefit from a dataflow approach. E.g., instead of having a bunch of methods calling DequeueAsync, you could link your BufferBlock to an ActionBlock.
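For example, a rough sketch of that idea; HandleMessageAsync here is a hypothetical handler, not part of the original code:
// Sketch: push deliveries straight from the buffer into a processing block.
private readonly BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs> _queue =
    new BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs>();

private ActionBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs> _processor;

public void StartProcessing()
{
    _processor = new ActionBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs>(async e =>
    {
        // handle the message here (await any async work)
        await HandleMessageAsync(e);
    });

    // completion flows from the buffer to the processor once the buffer is completed
    _queue.LinkTo(_processor, new DataflowLinkOptions { PropagateCompletion = true });
}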
What is a correct way to queue complex tasks in WP8?
The tasks consist of the following:
Showing a ProgressIndicator through updating a model variable
Fetching or storing data to a WCF service (UploadStringAsync)
Updating potentially data bound model with the result from UploadStringCompleted.
Hiding the ProgressIndicator through updating a model variable
Currently I've been working with a class owning a queue of command objects, running a single thread that is started when an item is added if it's not already running.
However, I have problems with waiting for tasks or subtasks, where the code simply stops running.
Previously I've used async await, but a few levels down the behaviour was becoming more and more unpredictable.
What I want is the main thread being able to create and queue command objects.
The command objects should run one at a time, not starting a new one until the previous one is completely finished.
The command objects should be able to use the dispatcher to access the main thread if necessary.
If you use async/await, there's no need for another thread (since you have no CPU-bound processing).
In your case, it sounds like you just need a queue of asynchronous delegates. The natural type of an asynchronous delegate is Func<Task> (without a return value) or Func<Task<T>> (with a return value). This little tip is unfortunately not well-known at this point.
So, declare a queue of asynchronous delegates:
private readonly Queue<Func<Task>> queue = new Queue<Func<Task>>();
Then you can have a single "top-level" task that just (asynchronously) processes the queue:
private Task queueProcessor;
The queueProcessor can be null whenever there are no more items. Whenever it's not null, it'll represent this method:
private async Task ProcessQueue()
{
try
{
while (queue.Count != 0)
{
Func<Task> command = queue.Dequeue();
try
{
await command();
}
catch (Exception ex)
{
// Exceptions from your queued tasks will end up here.
throw;
}
}
}
finally
{
queueProcessor = null;
}
}
Your Enqueue method would then look like this:
private void Enqueue(Func<Task> command)
{
queue.Enqueue(command);
if (queueProcessor == null)
queueProcessor = ProcessQueue();
}
Right now, I have the exception handling set up like this: any queued command that throws an exception will cause the queue processor to stop processing (with the same exception). This may not be the best behavior for your application.
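If you would rather log a failed command and keep processing the rest of the queue, a variant of ProcessQueue could look like this (a sketch only, using the same fields as above):
private async Task ProcessQueue()
{
    try
    {
        while (queue.Count != 0)
        {
            Func<Task> command = queue.Dequeue();
            try
            {
                await command();
            }
            catch (Exception ex)
            {
                // Log and continue so one failed command doesn't stop the whole queue.
                System.Diagnostics.Debug.WriteLine(ex);
            }
        }
    }
    finally
    {
        queueProcessor = null;
    }
}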
You can use it like this (with either a lambda or an actual method, of course):
Enqueue(async () =>
{
ShowProgressIndicator = true;
ModelData = await myProxy.DownloadStringTaskAsync();
ShowProgressIndicator = false;
});
Note the use of DownloadStringTaskAsync. If you write TAP wrappers for your EAP members, your async code will be more "natural-looking" (i.e., simpler).
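For example, here is a rough sketch of such a TAP wrapper over WebClient's EAP pair UploadStringAsync / UploadStringCompleted; adapt it to whatever proxy you actually use:
// Sketch: wrap the UploadStringAsync / UploadStringCompleted EAP pair into a Task.
// Place this extension method in a static class.
public static Task<string> UploadStringTaskAsync(this WebClient client, Uri address, string data)
{
    var tcs = new TaskCompletionSource<string>();
    UploadStringCompletedEventHandler handler = null;
    handler = (sender, e) =>
    {
        client.UploadStringCompleted -= handler; // unsubscribe so the wrapper can be reused
        if (e.Error != null)
            tcs.TrySetException(e.Error);
        else if (e.Cancelled)
            tcs.TrySetCanceled();
        else
            tcs.TrySetResult(e.Result);
    };
    client.UploadStringCompleted += handler;
    client.UploadStringAsync(address, data);
    return tcs.Task;
}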
This is sufficiently complex that I'd recommend putting it into a separate class, but you'd want to decide how to handle (and surface) errors first.