I am developing a WP8 app. I use an HttpClient to perform PostAsync and GetAsync operations, and I am setting the timeout to 1 second:
private HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMilliseconds(1000);
I have a try/catch block around my Get and Post operations to catch the TimeoutException, like this:
try
{
var response = await client.PostAsync(param1,param2);
}
catch (TimeoutException e)
{
//do something
}
Nevertheless, my catch block is not capturing the exception. When I debug the app I can see that the thrown exception is a TaskCanceledException. How can I catch the right exception, and why is the TimeoutException replaced?
Finally, to avoid confusion: my real timeout will be 10 seconds. I am using 1 second just to test, and I need to show a message to the user if the timeout is exceeded.
On HttpClient's PostAsync, the timeout is not surfaced as a TimeoutException. It is thrown as a TaskCanceledException.
It is not 100% clear from the documentation I have seen, but the behavior you are getting is the correct behavior: when the timeout is reached, a TaskCanceledException is thrown.
This makes a little bit of sense if you look here | HttpClient.Timeout Property
You may also set different timeouts for individual requests using a CancellationTokenSource on a task.
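As a minimal sketch of both approaches (the requestUri and content names are placeholders, and the TimeSpan-based CancellationTokenSource constructor is assumed to be available on your platform), you can treat TaskCanceledException as the timeout signal, or give an individual request its own token:
// Option 1: catch the cancellation that HttpClient raises when client.Timeout elapses.
try
{
    var response = await client.PostAsync(requestUri, content);
}
catch (TaskCanceledException)
{
    // Reached when client.Timeout elapses (or the request is cancelled); show the timeout message here.
}

// Option 2: give this single request its own 10-second timeout.
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
try
{
    var response10s = await client.PostAsync(requestUri, content, cts.Token);
}
catch (TaskCanceledException)
{
    // The per-request token was signalled after 10 seconds.
}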
Related
I am trying to understand exception handling in TPL.
The following code seems to swallow exceptions:
var processor = new ActionBlock<int>((id) => SomeAction(id), new ExecutionDataflowBlockOptions { ... });
async Task SomeAction(int merchantID)
{
//Exception producing code
...
}
And listening to TaskScheduler.UnobservedTaskException events does not receive anything either.
So, does this mean the action block does a try-catch in itself when running the actions?
Is there any official documentation of this somewhere?
Update
The exception handling behavior of DataFlow blocks is explained in Exception Handling in TPL DataFlow Networks
Original
This code doesn't swallow exceptions. If you await the block to complete with await processor.Completion you'll get the exception. If you use a loop to pump messages to the block before calling Complete() you need a way to stop the loop too. One way to do it is to use a CancellationTokenSource and signal it in case of exception:
void SomeAction(int i, CancellationTokenSource cts)
{
try
{
...
}
catch(Exception exc)
{
//Log the error then
cts.Cancel();
//Optionally throw
}
}
The posting code doesn't have to change all that much; it only needs to check whether the token has been signaled before posting more messages:
var cts = new CancellationTokenSource();
var token = cts.Token;
var dopOptions = new ExecutionDataflowBlockOptions {
    MaxDegreeOfParallelism = 10,
    CancellationToken = token
};
var block = new ActionBlock<int>(i => SomeAction(i, cts), dopOptions);
while(!token.IsCancellationRequested && someCondition)
{
block.Post(...);
}
block.Complete();
await block.Completion;
When the action throws, the token is signaled and the block ends. If the exception is rethrown by the action, it will be rethrown by await block.Completion as well.
If that seems convoluted, it's because that's somewhat of an edge case for blocks. DataFlow is used to create pipelines or networks of blocks.
The general case
The name Dataflow is significant.
Instead of building a program by using methods that call each other, you have processing blocks that pass messages to each other. There's no parent method to receive results and exceptions. The pipeline of blocks remains active to receive and process messages indefinitely, until some external controller tells it to stop, eg by calling Complete on the head block, or signaling the CancellationToken passed to each block.
A block shouldn't allow unhandled exceptions to occur, even if it's a standalone ActionBlock. As you saw, unless you've already called Complete() and await Completion, you won't get the exception.
When an unhandled exception occurs inside a block, the block enters the faulted state. That state propagates to all downstream blocks that are linked with the PropagateCompletion option. Upstream blocks aren't affected, which means they may keep working, storing messages in their output buffers until the process runs out of memory, or deadlocks because it receives no responses from the blocks.
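To make the propagation concrete, here is a small illustrative two-block pipeline (the block names and the failing parser are made up for this sketch): the downstream block faults together with the upstream one because they are linked with PropagateCompletion:
using System.Threading.Tasks.Dataflow;

var parser  = new TransformBlock<string, int>(s => int.Parse(s)); // throws on bad input
var printer = new ActionBlock<int>(n => Console.WriteLine(n));

// Completion, including a faulted completion, flows from parser to printer.
parser.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

parser.Post("1");
parser.Post("oops");   // int.Parse throws, parser faults, and the fault propagates to printer
parser.Complete();

try
{
    await printer.Completion; // rethrows the parser's exception here
}
catch (Exception ex)
{
    Console.WriteLine($"Pipeline faulted: {ex.Message}");
}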
Proper Failure Handling
The block should catch exceptions and decide what to do, based on the application's logic:
Log it and keep processing. That's not that different from how web applications work - an exception during a request doesn't bring down the server.
Send an error message to another block, explicitly. This works but this type of hard-coding isn't very dataflow-ish.
Use message types with some kind of error indicator. Perhaps a Success flag, perhaps an Envelope<TMessage> object that contains either a message or an error.
Gracefully cancel the entire pipeline, by signaling the CancellationTokenSource that produced the CancellationTokens passed to all blocks. That's the equivalent of a throw in a conventional program.
#3 is the most versatile option. Downstream blocks can inspect the Envelope and ignore or propagate failed messages without processing. Essentially, failed messages bypass downstream blocks.
Another option is to use the predicate parameter in LinkTo and send failed messages to a logger block and successful messages to the next downstream block. In complex scenarios, this could be used to eg retry some operations and send the result downstream.
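As a rough sketch of option #3 combined with predicate-based linking (the Envelope type and block names below are made up for illustration, not part of any library):
// A message wrapper that carries either a value or an error (hypothetical type).
public class Envelope<T>
{
    public T Value { get; set; }
    public Exception Error { get; set; }
    public bool Success { get { return Error == null; } }
}

var worker = new TransformBlock<Envelope<string>, Envelope<string>>(env =>
{
    try
    {
        // ... do the real work on env.Value ...
        return env;
    }
    catch (Exception ex)
    {
        // Turn the exception into data instead of faulting the block.
        return new Envelope<string> { Value = env.Value, Error = ex };
    }
});

var logger = new ActionBlock<Envelope<string>>(env => Console.WriteLine(env.Error));
var next   = new ActionBlock<Envelope<string>>(env => Console.WriteLine(env.Value));

// LinkTo predicates route failed messages to the logger and successful ones downstream.
worker.LinkTo(logger, env => !env.Success);
worker.LinkTo(next, env => env.Success);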
These concepts come from Scott Wlaschin's Railway Oriented Programming.
The TaskScheduler.UnobservedTaskException event is not a reliable/deterministic way to handle exceptions of faulted tasks, because it's delayed until the faulted task is cleaned up by the garbage collector. And this may happen long after the error occurred.
The only type of exception that is swallowed by the dataflow blocks is the OperationCanceledException (AFAIK for non-documented reasons). All other exceptions result in the block transitioning to a faulted state. A faulted block has its Completion property (which is a Task) faulted as well (processor.Completion.IsFaulted == true). You can attach a continuation to the Completion property, to receive a notification when a block fails. For example you could ensure that an exception will not pass unnoticed, by simply crashing the process:
processor.Completion.ContinueWith(t =>
{
ThreadPool.QueueUserWorkItem(_ => throw t.Exception);
}, default, TaskContinuationOptions.OnlyOnFaulted, TaskScheduler.Default);
This works because throwing an unhandled exception on the ThreadPool causes the application to terminate (after raising the AppDomain.CurrentDomain.UnhandledException event).
If your application has a GUI (WinForms/WPF etc), then you could throw the exception on the UI thread, that allows more graceful error handling:
var uiContext = SynchronizationContext.Current;
processor.Completion.ContinueWith(t =>
{
uiContext.Post(_ => throw t.Exception, null);
}, default, TaskContinuationOptions.OnlyOnFaulted, TaskScheduler.Default);
This will raise the Application.ThreadException event in WinForms.
I've written some code which does a post to a remote web service. I've tried using both HttpClient.PostAsync and HttpClient.SendAsync; with the former I just provide everything up front, and with the latter I build an HttpRequestMessage with the appropriate values. In my testing today I'm getting some type of exception here, the same exception with both implementations. Presumably a timeout, or some other error on the remote side or in transmission. When I get this exception, it comes with the generic message of 'A task was cancelled'.
The code looks something like this:
using (MultipartFormDataContent formData = new MultipartFormDataContent())
{
//formData.Add(clientIDContent, "client_id");
//formData.Add(clientSecretContent, "client_secret");
formData.Add(quoteIDContent, "CPQuoteID");
formData.Add(poNumContent, "PONumber");
formData.Add(dealerNumContent, "DealerNumber");
formData.Add(orderUserContent, "OrderingUser");
formData.Add(fileContent, "order-file");
try
{
var response = await client.PostAsync(actionURL, formData);
int I = 0;
}
catch (Exception ex)
{
int x = 0;
}
}
The exception I get back is of type TaskCanceledException. The inner exception also carries no meaningful information. There must be a way to get better information back from this. It's frustrating to debug issues when our logs contain only generic messages like this. How do I get this info?
Just a bit more info: I'm operating under the assumption here that the HttpClient does this operation with a Task. At some point the task times out. I don't know if this is because the HTTP timeout is greater than the task timeout in some way, or if the HttpClient object catches the timeout exception, consumes it, and then cancels the task, throwing the cancellation exception. Either way, it would seem that meaningful information about the underlying issue is lost; info I'd like to log.
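One common workaround (a sketch only; behavior varies across framework versions) is to pass your own CancellationToken and inspect it in the catch block: if your token never fired, the cancellation almost certainly came from HttpClient.Timeout expiring rather than an explicit cancel:
var cts = new CancellationTokenSource();
try
{
    var response = await client.PostAsync(actionURL, formData, cts.Token);
}
catch (TaskCanceledException ex)
{
    if (!cts.IsCancellationRequested)
    {
        // Our token never fired, so this was HttpClient.Timeout expiring; log it as a timeout.
    }
    else
    {
        // The request was cancelled explicitly through cts.
    }
    // ex.InnerException (when present) may still carry transport-level details worth logging.
}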
My question might be silly, but I need an answer. As far as I know, whenever the "The operation has timed out" exception occurs in the HttpWebRequest.GetResponse() method, the connection is closed and released. If that is not true, then how does it work? I tried to google this but couldn't find the answer.
EDIT: In this case it was a POST request; the connection was established and the URL being called was processing the request at the server end, but the HttpWebRequest object was waiting on the response, and after some time the exception occurred.
My understanding is that you must call the Close method to close the stream and release the connection. Failure to do so may cause your application to run out of connections. If you are uncertain, you can always put a try/catch block around the Close method or the HttpWebRequest.GetResponse().
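A short sketch of that in practice (the url variable is a placeholder): wrapping the response in using blocks guarantees that Close/Dispose runs and the connection is released even if reading throws.
var request = (HttpWebRequest)WebRequest.Create(url);
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var stream = response.GetResponseStream())
    using (var reader = new StreamReader(stream))
    {
        string body = reader.ReadToEnd();
    } // Dispose (which calls Close) runs here, releasing the connection back to the pool
}
catch (WebException ex)
{
    // A timeout surfaces as WebExceptionStatus.Timeout; nothing is left open to clean up.
    Console.WriteLine(ex.Status);
}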
Well, I am not entirely sure, but it looks like the operation-timed-out exception probably faults the underlying connection channel, because all the requests after that end up with the same exception.
Per MSDN Documentation
You must call the Close method to close the stream and release the connection. Failure to do so may cause your application to run out of connections.
I did a small trial to see:
private static void MakeRequest()
{
WebRequest req = null;
try
{
req = WebRequest.Create("http://www.wg.net.pl");
req.Timeout = 10;
req.GetResponse();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
req.Timeout = 10000;
req.GetResponse(); // This as well results in TimeOut exception
}
}
Suppose I create a HTTPWebRequest, call its GetResponse() and start reading from the response stream. If the connection is interrupted while reading from the stream, do I have to wait for it to time out, or can I know right away that something's gone wrong? No exception is thrown when I interrupt the connection (e.g. I disconnect my computer from the network).
It depends on the situation.
In general you'll need to be prepared for both situations (immediate and late interruption).
If, for example, the server disconnects you, you'll know relatively quickly.
See http://msdn.microsoft.com/en-us/library/system.net.webexceptionstatus for the kinds of errors that can occur (the WebRequest classes throw WebExceptions on errors)
You have a variety of options:
Use the async methods (BeginGet... and EndGet...) and model your application around this. Basically you'll be notified "at some point" if there was a success or error. Do something else in the meantime
If you want absolute control you can specify a ReadTimeout on the acquired stream (see the comment on the other answer; set Timeout on the request as well, and see the sketch after this list). Re-try or whatever.
Just wait
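Here is a rough sketch of the ReadTimeout option (the url is a placeholder): the request's Timeout guards GetResponse() itself, while ReadTimeout guards each individual Read on the stream.
var request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 10000;                 // covers connecting and getting the response headers

using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
{
    stream.ReadTimeout = 5000;           // each Read() now fails after 5 s instead of hanging
    var buffer = new byte[4096];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0..read)
    }
}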
You don't have to worry about whether the request is interrupted or not.
You can specify an explicit timeout as follows.
If it's interrupted you will get an exception.
try
{
var request = HttpWebRequest.Create(url);
request.Timeout = 3000;
var response = request.GetResponse() as HttpWebResponse;
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
//do stuff
}
}
catch (Exception exception)
{
exception.ToLog();
}
Most probably you have to wait for the timeout
There are several questions on StackOverflow regarding closing WCF connections, however the highest ranking answers refers to this blog:
http://marcgravell.blogspot.com/2008/11/dontdontuse-using.html
I have a problem with this technique when I set a breakpoint at the server and let the client hang for more than one minute. (I'm intentionally creating a timeout exception)
The issue is that the client appears to "hang" until the server is done processing. My guess is that everything is being cleaned up post-exception.
In regard to the TimeoutException, it appears that the retry() logic of the client will continue to resubmit the query to the server over and over again, where I can see the server-side debugger queue up the requests and then execute each queued request concurrently. My code wasn't expecting WCF to act this way, and this may be the cause of the data corruption issues I'm seeing.
Something doesn't totally add up with this solution.
What is the all-encompassing modern way of dealing with faults and exceptions in a WCF proxy?
Update
Admittedly, this is a bit of mundane code to write. I currently prefer this linked answer, and don't see any "hacks" in that code that may cause issues down the road.
This is Microsoft's recommended way to handle WCF client calls:
For more detail see: Expected Exceptions
try
{
...
double result = client.Add(value1, value2);
...
client.Close();
}
catch (TimeoutException exception)
{
Console.WriteLine("Got {0}", exception.GetType());
client.Abort();
}
catch (CommunicationException exception)
{
Console.WriteLine("Got {0}", exception.GetType());
client.Abort();
}
Additional information
So many people seem to be asking this question on WCF that Microsoft even created a dedicated sample to demonstrate how to handle exceptions:
c:\WF_WCF_Samples\WCF\Basic\Client\ExpectedExceptions\CS\client
Download the sample:
C# or VB
Considering that there are so many issues involving the using statement, and the (heated?) internal discussions and threads on this issue, I'm not going to waste my time trying to become a code cowboy and find a cleaner way. I'll just suck it up and implement WCF clients this verbose (yet trusted) way for my server applications.
Optional Additional Failures to catch
Many exceptions derive from CommunicationException, and I don't think most of them should be retried. I trudged through each exception on MSDN and found a short list of retry-able exceptions (in addition to the TimeoutException above). Do let me know if I missed an exception that should be retried.
Exception mostRecentEx = null;
for (int i = 0; i < 5; i++) // Attempt a maximum of 5 times
{
    try
    {
        ...
        double result = client.Add(value1, value2);
        ...
        client.Close();
        mostRecentEx = null;
        break; // Success, so stop retrying
    }
    // The following is typically thrown on the client when a channel is terminated due to the server closing the connection.
    catch (ChannelTerminatedException cte)
    {
        mostRecentEx = cte;
        client.Abort();
        // delay (backoff) and retry
        Thread.Sleep(1000 * (i + 1));
    }
    // The following is thrown when a remote endpoint could not be found or reached. The endpoint may not be found or
    // reachable because the remote endpoint is down, the remote endpoint is unreachable, or because the remote network is unreachable.
    catch (EndpointNotFoundException enfe)
    {
        mostRecentEx = enfe;
        client.Abort();
        // delay (backoff) and retry
        Thread.Sleep(1000 * (i + 1));
    }
    // The following exception is thrown when a server is too busy to accept a message.
    catch (ServerTooBusyException stbe)
    {
        mostRecentEx = stbe;
        client.Abort();
        // delay (backoff) and retry
        Thread.Sleep(1000 * (i + 1));
    }
    catch (Exception)
    {
        client.Abort();
        throw; // rethrow any other exception not handled here, preserving the stack trace
    }
}
if (mostRecentEx != null)
{
    throw new Exception("WCF call failed after 5 retries.", mostRecentEx);
}
Closing and Disposing a WCF Service
As that post alludes to, you Close when there were no exceptions and you Abort when there are errors. Dispose (and thus using) shouldn't be used with WCF clients.
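If you'd rather not repeat the Close/Abort dance at every call site, one common pattern is to wrap it in a small extension method. This is a sketch only; CloseOrAbort and CalculatorClient are made-up names.
using System.ServiceModel;

public static class CommunicationObjectExtensions
{
    // Close on success, Abort on failure; never Dispose()/using for WCF clients.
    public static void CloseOrAbort(this ICommunicationObject client)
    {
        try
        {
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
        catch
        {
            client.Abort();
            throw;
        }
    }
}

// Usage (CalculatorClient is hypothetical):
// var client = new CalculatorClient();
// try { double result = client.Add(1, 2); }
// finally { client.CloseOrAbort(); }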