I have a Console Application which consumes a BizTalk Web Service. The Problem is that when I send the BizTalk Service object data in bulk, my console application throws the exception:
Application has either timed out or is Timing out.
My application actually needs to wait for the BizTalk service to finish processing its job. Increasing the obj.Timeout value did not help. Is there any option other than the Thread.Sleep method (which I want to avoid)?
Below is the relevant code snippet from my application:
pumpSyncService.Timeout = 750000;
outputRecords = pumpSyncService.PumpSynchronization(pumpRecords);
The pump records contain an array of objects. When the count is around 30, I get a correct response, but when the count increases to around 150 I get the exception.
Try sending smaller chunks in a loop. Instead of sending 150 all at once, send 30 records 5 times. The timeout might be happening because it takes too long to send 150 records.
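A minimal sketch of that chunking loop, using the `pumpSyncService` and `pumpRecords` names from the snippet in the question (the batch size of 30 is just the value that was observed to work; tune it for your service):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class BatchHelper
{
    // Split a source list into consecutive batches of at most batchSize items.
    public static IEnumerable<T[]> InBatchesOf<T>(this IReadOnlyList<T> source, int batchSize)
    {
        if (batchSize <= 0) throw new ArgumentOutOfRangeException(nameof(batchSize));
        for (int i = 0; i < source.Count; i += batchSize)
        {
            yield return source.Skip(i).Take(batchSize).ToArray();
        }
    }
}

// Usage (sketch) -- instead of one 150-record call, make five 30-record calls:
// foreach (var batch in pumpRecords.InBatchesOf(30))
// {
//     var results = pumpSyncService.PumpSynchronization(batch);
//     // accumulate results...
// }
```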
You should be able to send all 30 at once, if the service allows you to. I am assuming you have verified that the event kicking this off is not firing 5 times. Try it asynchronously and process your results when they come back.
My C# client (running on .NET Framework 4.5.1 or later) calls a WSDL-defined SOAP web service method that returns a byte[] (with length typically about 100,000). We make hundreds of calls to this web service just fine -- they normally take just a few seconds to return. But very intermittently, the call sits there for exactly 5 minutes and then throws an InvalidOperationException indicating that "There is an error in XML document (1, 678)", with an InnerException that is a WebException "The operation has timed out." We've wrapped a try-catch around this call, look for those particular Exceptions, and then ask the user if they'd like us to retry, and usually it works just fine on the next try.
Looking at the logging on the server, the logs for the good calls and the intermittent bad calls look exactly the same. In particular, in both cases we get the log statement at the very end of the web service, right before the "return byteArray;"... and it is doing that in the typical 3-15 seconds from the start of the call. So, it seems the web service returns the byte array successfully, but the client that called the web service just never receives it.
However, the client does NOT get the typical SoapException or WebException... for example, if we pause the web service in the debugger right before that return, then after 60 seconds the client will get a WebException "The operation has timed out." But we don't get that in this case... instead we are stuck there for a full 5 minutes before we finally get the InvalidOperationException mentioned above. So, it is as if it started receiving the reply, so it doesn't consider it timed out the normal way, but it never gets the rest of the reply, and the parsing/deserializing of the XML containing the reply eventually times out.
Question #1: Any suggestions on what's happening here? Or what we might be doing wrong in our web service that would result in a byte[] reply getting stuck mid-return intermittently? I'd obviously love to fix the root problem.
Question #2: What controls the length of that 5 minute timeout?? Our exception handling for this would be okay except for the ridiculous 5 minute timeout. After about 10 seconds, the user knows it is stuck because it normally returns in 10 seconds or less. But they have to sit there and wait for 5 minutes before they can do anything. We have set every timeout setting we could find to just 60 seconds, but none seem to control this. We have set:
In the server Web.config: <httpRuntime executionTimeout="60" />
In the server Global.asax.cs: HttpContext.Current.Server.ScriptTimeout = 60;
In both server and client: ServicePointManager.MaxServicePointIdleTime = 60000;
In the client, right after we new up the WSDL-defined class derived from SoapHttpClientProtocol with all the web service calls, we call: service.Timeout = 60000;
We previously had those at their defaults or set to 100 / 100000 ... we lowered them all to 60 / 60000 to see if the 5 minute wait would come down at all (just in case one or more of them were being added into that 5 minutes). But no, no matter what we changed any of those timeouts to, the timeout in this case remains exactly 5 minutes, every time it gets stuck.
Does anybody know where the length of the timeout is set for when it generates an InvalidOperationException on the XML document containing the returned byte array due to an InnerException WebException with the timeout?? (please!)
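One timeout not in the list above is `HttpWebRequest.ReadWriteTimeout`, which governs how long a read of the response stream may stall and defaults to 300,000 ms, i.e. exactly 5 minutes; it is not affected by `service.Timeout`. Whether it is the culprit here is a guess, but it is worth ruling out. A sketch (the override shown in the comment would go in a partial class of the WSDL-generated proxy, whose name is hypothetical):

```csharp
using System;
using System.Net;

// In a partial class of the WSDL-generated proxy (which derives from
// SoapHttpClientProtocol) you would override GetWebRequest like this:
//
//     protected override WebRequest GetWebRequest(Uri uri)
//     {
//         var request = (HttpWebRequest)base.GetWebRequest(uri);
//         request.ReadWriteTimeout = 60000; // default is 300000 ms = 5 min
//         return request;
//     }
//
// The default can be confirmed without any network I/O:
static class ReadWriteTimeoutDemo
{
    public static int DefaultReadWriteTimeoutMs()
    {
        // WebRequest.Create does not connect; we only inspect the default.
        var request = (HttpWebRequest)WebRequest.Create("http://example.invalid/");
        return request.ReadWriteTimeout;
    }
}
```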
I am using a LabVIEW application to simulate a test run, which posts a JSON string to my ASP.NET application. Within the ASP.NET application I format the data with the proper partition and row keys, then send it to Azure Table Storage.
The problem that I am having is that after what seems like a random amount of time (i.e. 5 minutes, 2 hours, 5 hours), the data fails to be saved into Azure. I am trying to catch any exceptions within the ASP.NET application and send the error message back to the LabVIEW app, and the LabVIEW app is also catching any exceptions it may encounter, so I can troubleshoot where the issue is occurring.
The only error that I am able to catch is a Timeout Error 56 in the LabVIEW program. My question is: does anyone have an idea of where I should look for the root cause of this? I do not know where to begin.
EDIT:
I am using a table storage writer that I found here to do batch operations with retries.
The constructor for exponential retry policy is below:
public ExponentialRetry(TimeSpan deltaBackoff, int maxAttempts)
When you (or, to be exact, the library you use) instantiate this as RetryPolicy = new ExponentialRetry(TimeSpan.FromMilliseconds(2), 100), you are setting the maximum attempts to 100, which means each of your individual batch requests may wait up to around 2^100 milliseconds (there is more math behind this, but simplifying) to fail on the client side before the SDK gives up retrying.
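As a simplified model of why that is so bad (the SDK's actual formula adds randomization and a maximum backoff cap, so this is illustrative only), the delay before retry n grows like deltaBackoff * 2^n:

```csharp
using System;

static class BackoffModel
{
    // Simplified exponential backoff: the delay before attempt n (0-based)
    // is deltaBackoffMs * 2^n. The real SDK adds jitter and caps the
    // backoff, but the growth rate is the point here.
    public static double DelayMs(double deltaBackoffMs, int attempt)
    {
        return deltaBackoffMs * Math.Pow(2, attempt);
    }
}

// With deltaBackoff = 2 ms, attempt 10 already waits ~2 seconds and
// attempt 30 over 2 million seconds -- 100 attempts is effectively forever.
```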
The other issue with that code is that it executes batch requests sequentially and synchronously. That has several bad effects: first, every subsequent batch request is blocked by the current one; second, your cores sit blocked waiting on I/O; third, there is no exception handling, so if one batch operation throws, the method bails out and does not process the remaining batch requests.
My recommendation: do not use that library; batch operations are fairly straightforward. If you do not explicitly define a retry policy, the default is the exponential retry policy with sensible parameters (3 retries) anyway, so you do not even need to define your own retry object. For best scalability and throughput, run your batch operations asynchronously (and concurrently).
As to why things fail: when you write your own API, catch the StorageException and check the HTTP status code on the exception itself. You could be getting throttled by Azure, as one possibility, but it is hard to say without further debugging or without you providing the HTTP status code of the failed batch operations.
You need to check whether an exception is transient or not. As Peter said in his comment, the Azure Storage client already implements a retry policy. You can also wrap your code with another retry layer (e.g. using Polly), or change the default policy associated with the Azure Storage client.
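A hedged sketch of that transient check (the set of status codes below is a common choice, not an official list): catch `StorageException`, read `RequestInformation.HttpStatusCode`, and only retry codes that indicate throttling or a transient server-side failure.

```csharp
using System;

static class TransientErrors
{
    // 408 Request Timeout, 429 Too Many Requests (throttling), and
    // 500/503/504 transient server-side failures are worth retrying.
    // Codes such as 400/404/409 are permanent; retrying them wastes time.
    public static bool IsTransient(int httpStatusCode)
    {
        switch (httpStatusCode)
        {
            case 408:
            case 429:
            case 500:
            case 503:
            case 504:
                return true;
            default:
                return false;
        }
    }
}

// Usage (sketch, assuming the Azure Storage SDK's StorageException):
// try { await table.ExecuteBatchAsync(batch); }
// catch (StorageException ex) when (
//     TransientErrors.IsTransient(ex.RequestInformation.HttpStatusCode))
// { /* back off and retry */ }
```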
I have a project that uses a WCF service to do some database queries, builds an "Environment" object (which consists of different database class objects) and returns it inside a "Workspace" object to the client. It's been running fine.
I added another "Database" type to the service with all the correct contract and method updates. Now when I call the method, the client times out after 1 minute. In debugging, it takes about 3-5 seconds to hit the end of the service method. Then nothing happens for the rest of the minute until on the client side we see a timeout problem. There are no errors/exceptions thrown.
Please see below:
Calling from client:
490 m_ScanWorkspace = m_Connection.ScanProxy.CreateEnvironments
End of service method:
477 return tWorkspace;
478 }
It takes 3-5 seconds to get to line 478 in the service. F10 shows it's complete.
Nothing happens until 1 minute later, when line 490 in the client shows a timeout error. While debugging I can see a valid object in tWorkspace.
Firstly, set up WCF tracing using the System.Diagnostics namespace. Just use the first example in that tutorial and WCF will dump out a log of all activity, which you can open in the trace viewer (SvcTraceViewer.exe). It will tell you exactly where the call is failing, which will help you pinpoint the problem.
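For reference, the usual web.config/app.config fragment for WCF tracing looks like this (the path and switch levels are examples; trim the level down once the problem is found):

```xml
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```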
WCF is great, but the error messages it gives are cryptic and often close to useless. A timeout after 1 minute doesn't necessarily mean what a timeout would normally mean - i.e. couldn't find the server. It could be other issues.
More than likely a threshold is being exceeded, which causes the response object to be incomplete. This could be array length, string content length, message size, and so on. Some of these are detailed here: https://stackoverflow.com/a/480191/146077
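If a quota is the cause, raising the limits on the client binding usually makes the real error surface. A sketch (shown for basicHttpBinding; the question doesn't say which binding is in use, and the values are examples -- size them to your actual payload rather than blindly maxing them out):

```xml
<basicHttpBinding>
  <binding name="largeMessageBinding"
           maxReceivedMessageSize="10485760"
           maxBufferSize="10485760">
    <readerQuotas maxArrayLength="10485760"
                  maxStringContentLength="10485760" />
  </binding>
</basicHttpBinding>
```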
Good luck!
I have a .NET webservice. It has a method that performs a task taking 60 seconds and then returns the string result to the client.
On some networks, especially low-bandwidth ones, I get a timeout error after 40 seconds, before the webservice method finishes its 60-second task.
Now I want to implement an async webservice call to support low-bandwidth networks.
In an async webservice call, one approach is using a thread that runs the webservice method and returns the webservice result to the main thread, as shown in the following picture.
But this approach will not solve my problem, because that thread still uses one connection.
I need another approach: a thread in my client calls the webservice method, the method starts its operation, and when the task is done, either 1) the webservice sends a message that the response is ready, or 2) the client checks whether the webservice response is ready (a polling mechanism, I think), like the following picture.
How can I implement the second approach in .net? Is it possible?
Thanks.
Create a table on your database to store the state of the process.
UniqClientId, ProcessId, StartTime, EndTime and any other state if required.
The client sends a request to the server, passing its unique id.
The server logs the process in the above table and initiates the process.
The client then polls the server at an interval (every 2-3 sec or 15-20 sec, depending on your application) to check whether the process has completed.
If the client gets a response that the process has completed, it requests the server to send the response.
In between, the server does the following job.
When the process completes, it stores the EndTime in the above table.
It provides a method that reports the process state by checking the above table.
It provides a method that sends the response.
I'm not sure what exactly your service is doing, but if the operation of your process just modifies some table in the database, then it is not difficult to implement this.
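The steps above can be sketched as a minimal in-memory tracker (a real implementation would back `_processes` with the database table described above; all names here are hypothetical):

```csharp
using System;
using System.Collections.Concurrent;

// Server-side sketch: start a job, let the client poll for completion,
// then fetch the result. _processes stands in for the state table.
class ProcessTracker
{
    private class ProcessState
    {
        public DateTime StartTime;
        public DateTime? EndTime;   // null until the job finishes
        public string Result;
    }

    private readonly ConcurrentDictionary<Guid, ProcessState> _processes =
        new ConcurrentDictionary<Guid, ProcessState>();

    // Client step 1: request starts the long-running work, returns an id.
    public Guid Start()
    {
        var id = Guid.NewGuid();
        _processes[id] = new ProcessState { StartTime = DateTime.UtcNow };
        return id;
    }

    // Worker: called when the 60-second task finishes.
    public void Complete(Guid id, string result)
    {
        var state = _processes[id];
        state.Result = result;
        state.EndTime = DateTime.UtcNow;
    }

    // Client step 2: poll this every few seconds until it returns true.
    public bool IsComplete(Guid id) => _processes[id].EndTime.HasValue;

    // Client step 3: fetch the response once IsComplete is true.
    public string GetResult(Guid id) => _processes[id].Result;
}
```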
We are developing a client-server system where the client connects to a service and fetches an image from a buffer. The request runs at 25 hertz (25 requests per second) over a NetTcpBinding. The image data which is sent contains the image buffer (byte[]) and some meta data about the image.
What we are experiencing is that occasionally, the server does not respond for 5 seconds (5020 to 5050 ms), and we can't figure out why.
Running svc logging on the client we see the following
Activity Boundary Suspend 10:00:00:000
Activity Boundary Resume 10:00:00:003
Received a message over a channel Information 10:00:05:017
This occurs both when running the server as a managed WCF service and as an unmanaged WWS service.
It can happen once every 100,000 requests, once per night, or several times per minute, at seemingly random intervals.
Does anyone know what might cause this issue?
We found the solution buried in the Microsoft customer support database.
The 5 second delay is due to the firing of the SWS (Silly Window Syndrome) avoidance timer. The SWS timer is scheduled to send the remaining data which is less than 1 MSS (Maximum Segment Size, 1460 bytes), and the receiver is supposed to send an ACK advertising the increased receive window and indicating that the remaining data bytes can be sent. However, if the receiver sends an ACK when it can be ready for sufficient buffer within 5 seconds, the SWS timer cannot recover the 5 seconds delay status due to a race condition.
http://support.microsoft.com/kb/2020447
This issue only occurs when using localhost or 127.0.0.1. The delays do not occur when running the service and client on different machines.