We are attempting to submit a large number of simultaneous web requests using HttpWebRequest for purposes of stress testing a server.
For this example, the web service method being tested simply pauses for 30 seconds and then returns, allowing for queue-depth and simultaneous-request testing.
The client log shows 200 calls being queued up successfully in parallel within ~1 second using multiple calls to:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);
request.BeginGetResponse(ResponseCallback, request); // BeginGetResponse requires an AsyncCallback and a state object
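For context, here is a minimal sketch of what the surrounding loop and completion callback might look like; the 200-iteration loop and the ResponseCallback name are assumptions for illustration, not the original harness code:

for (int i = 0; i < 200; i++)
{
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);
    request.BeginGetResponse(ResponseCallback, request);
}

static void ResponseCallback(IAsyncResult ar)
{
    // EndGetResponse must be called to complete (and eventually release) each request.
    var request = (HttpWebRequest)ar.AsyncState;
    using (var response = (HttpWebResponse)request.EndGetResponse(ar))
    {
        // Log the completion time/status of each call here.
    }
}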
Fiddler shows the first ~80 being sent to the server simultaneously; then, before any initial responses have come back, Fiddler shows the remaining ~120 requests being added at about 1/second.
Following the suggestions from here has allowed an increase in the effective limit from about 10 to about 80. I want to remove the next barrier without resorting to multiple machines/processes. Any ideas?
// Please excuse the blind setting of these values to find the other problem.
// These were set much higher but did not help
ThreadPool.SetMinThreads(30, 15);
ThreadPool.SetMaxThreads(30, 15);
System.Net.ServicePointManager.DefaultConnectionLimit = 1000;
and in App.Config
<system.net>
<connectionManagement>
<clear/>
</connectionManagement>
</system.net>
Another point: when a large number of requests are performed and completed before the simple delay test, the delay test is able to achieve 80 or so simultaneous requests. Starting "cold" usually limits it to about 10 or so.
I know this has been asked a few times but I'm trying to track down what my exact issue could be.
I've got a C# app which queues up messages to be sent (using Azure Storage Queues), and these are processed by an Azure WebJob. We're using the twilio-csharp NuGet package to send the messages.
The code to send a message is pretty simple:
MessageResource.Create(
body: message.Message,
from: new Twilio.Types.PhoneNumber(TwilioFromNumber),
to: new Twilio.Types.PhoneNumber(message.SendToPhoneNumber));
By default, the WebJob will process up to 16 messages at a time, but to combat this issue we've set:
context.BatchSize = 2;
context.NewBatchThreshold = 0;
So, at any given point, we're not making more than 2 requests at a time.
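For reference, those settings typically live on the queue configuration before the host starts; a minimal sketch assuming the WebJobs SDK 2.x JobHostConfiguration API (the question's context object may differ):

var config = new JobHostConfiguration();
config.Queues.BatchSize = 2;          // process at most 2 queue messages concurrently
config.Queues.NewBatchThreshold = 0;  // wait for the current batch to finish before fetching more
new JobHost(config).RunAndBlock();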
Even with this low threshold, we still see these errors in the log periodically:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: TextMessageFunctions.SendTextMessage ---> Twilio.Exceptions.ApiException: Too Many Requests
at Twilio.Clients.TwilioRestClient.ProcessResponse(Response response)
Some other thoughts:
The answer on this question, from a Twilio Developer Evangelist, suggests the REST API's concurrency limit is 100 by default. Is this still true, or is there a way for me to check this on my account? There's no way we're close to 100. We never queue up more than 20-30 messages at a time, and that is on the extreme end of things.
We're using a Toll-Free US number to send from. According to Twilio, we should be able to queue up 43,200 messages on their end.
That same article says:
Notice: You can send messages to Twilio at a rapid rate, as long as the requests do not max out Twilio's REST API concurrency limit.
This makes me think I'm doing something wrong, because surely "a rapid rate" could be more than 2 requests at a time (and I still wonder about the rate of 100 mentioned above). Can we truly not call the Twilio API with 2 concurrent requests without getting this error?
Twilio developer evangelist here.
There has been a bit of a change in the concurrency limits recently that has affected you here. New accounts are now receiving a much lower concurrency allowance for POST requests, as low as 1 concurrent request. This was to combat a recent rise in fraudulent activity.
I am sure your activity isn't fraudulent, so here's what you should do:
For now reduce your batch size to 1 so that you only make 1 request at a time to the Twilio API.
Add code to catch errors and, if they are 429 responses, re-queue the job to run later (with exponential back-off if possible; see the sketch after this list)
Get in touch with Twilio Sales to talk to them about your use case and request an increased concurrency limit
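As a rough illustration, here is a minimal retry sketch; the SendSmsWithBackoffAsync wrapper, the attempt count, and the assumption that ApiException.Status carries the HTTP status code are mine, not part of the original answer:

static async Task SendSmsWithBackoffAsync(string body, string to, int maxAttempts = 5)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        try
        {
            MessageResource.Create(
                body: body,
                from: new Twilio.Types.PhoneNumber(TwilioFromNumber),
                to: new Twilio.Types.PhoneNumber(to));
            return; // sent successfully
        }
        catch (Twilio.Exceptions.ApiException ex) when (ex.Status == 429)
        {
            // Exponential back-off: wait 1s, 2s, 4s, 8s, ... before retrying.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
    // Still throttled after maxAttempts; rethrow so the message goes back onto the queue.
    throw new Exception("Twilio kept returning 429; re-queue this message for later.");
}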
I am sure this limit is not going to be the long term solution to the issues we were facing and I am sorry that you are experiencing problems with this.
I have a WCF service that, in functionA, makes an HttpWebRequest call to functionX in an external service. Originally the timeout on this HttpWebRequest was set to 5 minutes.
Recently, the external service has been taking longer than 5 minutes to respond (which I am OK with), so I bumped the HttpWebRequest.Timeout up to 10 minutes.
Meanwhile the wcf service should be able to process other incoming requests (to functionB, functionC, etc). What I'm experiencing now is that if functionX takes longer than ~5 minutes to respond (and thus functionA takes longer than 5 minutes to complete), subsequent requests to functionB in my wcf service are queued / do not process until functionA completes.
In the end everything completes properly, but I don't see why functionB is affected by the waiting that is happening over in functionA.
Forgive me if that is hard to follow. It is a strange one, and I'm having trouble wrapping my head around how these pieces are related.
You must decorate your WCF service class with the following attribute:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)] // The service instance is multi-threaded.
public class Service1
{
// ...
}
I assume your ConcurrencyMode is currently set to Single, which Microsoft defines as follows:
"The service instance is single-threaded and does not accept reentrant calls.
If the System.ServiceModel.ServiceBehaviorAttribute.InstanceContextMode property is System.ServiceModel.InstanceContextMode.Single, and additional messages arrive while the instance services a call, these messages must wait until the service is available or until the messages time out."
I had the same problem. I hosted my service in IIS, and after a little searching I found out it was because of the maxconnection limit in the web config. I added this to my web.config and the problem was solved:
<system.net>
<connectionManagement>
<add address="*" maxconnection="1000"/>
</connectionManagement>
</system.net>
By default the maxconnection value is 2.
But this is only one of many possible causes; you should monitor your server's requests in order to find out the exact reason.
We've had some unexpected behavior in our WCF service(s) in production. This is the (simplified) situation we're able to simulate:
We have a WCF service that retrieves some stuff from the database.
Say there is a problem with the database, causing the SQL operation to time out.
Extra info: the InstanceContextMode of the service is set to PerCall and the ConcurrencyMode is Single.
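For clarity, a minimal sketch of a service declared with those settings (the class and method names are placeholders, not our actual service):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class LookupService
{
    public string GetStuff(int id)
    {
        // SQL query here that can run into the default command timeout (~30 seconds).
        return null;
    }
}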
If client 1 performs a request to the service and client 2 performs a request, we see (i.e. in the worker process requests in IIS) that these are handled concurrently and each takes approx. 30 seconds (because of the default SQL timeout).
However, because the timeouts between client and server were not correct, we could actually perform a second request from client 1 before its initial request was handled. So then we could have:
Request 1 from client 1
Request 2 from client 1 (approx 5 seconds after first request)
Request 1 from client 2 (approx same time as request 2 from client 1)
Now request 1 from client 1 takes approx. 30 seconds, like before. Request 2 from client 1 takes a bit less than 60 seconds; that is something that can be expected. But what is unexpected for us is that the request from client 2 also takes more or less the same time as request 2 from client 1, where we expected it to take about 30 seconds. It seems to us that processing of the call from client 2 only starts when processing of the second request from client 1 starts.
Is there someone that can explain this behavior? And what we could do to avoid/improve this?
Note: we know that we have to make sure our timeouts are 'in sync' between the client and server but our actual architecture is a lot more complex than sketched above. We will perform some optimizations, but even then we could have this situation...
I've run into the following issue related to web service request processing:
Preamble
I have
A Web API service hosted on IIS 7.0 on my local machine
A test harness console application on the same machine
and I'm trying to simulate web service load by hitting it with requests generated via the test harness app.
Test harness core code:
static int HitsCount = 40;
static async void PerformHitting()
{
    await Task.WhenAll(ParallelEnumerable.Range(0, HitsCount)
        .Select(_ => HitAsync())
        .WithDegreeOfParallelism(HitsCount));
}

static async Task HitAsync()
{
    // some logging skipped here
    ...
    await new HttpClient().GetAsync(TargetUrl, HttpCompletionOption.ResponseHeadersRead);
}
Expectation
Logging shows that all HitAsync() calls are made simultaneously: each hit via HttpClient had started in the [0s; 0.1s] time frame (timings are roughly rounded here and below). Hence, I'm expecting to catch all these requests in approximately the same time frame on the web service side.
Reality
But logging on the service side shows that requests arrive grouped in bunches of 8-12 requests each, and the service catches these bunches at ~1 second intervals. I mean:
[0s, 0.3s] <- requests #0-#10
[1.2s, 1.6s] <- requests #10-#20
...
[4.1s, 4.5s] <- requests #30-#40
And I'm getting really long execution times for any significant HitsCount values.
Question
I suspect some kind of built-in service throttling mechanism or a framework-level limit on concurrent connections. The only thing I found related to this guess is that question, but I didn't have any success trying the solutions from there.
Any ideas what is the issue?
Thanks.
By default, HTTP requests on ASP.NET are limited to 12 times the number of cores. I recommend setting ServicePointManager.DefaultConnectionLimit to int.MaxValue.
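For example, this can be done once at start-up of the test harness, before any requests are issued (placing it in Main is just an illustration):

static void Main(string[] args)
{
    // Lift the client-side limit on concurrent connections per endpoint.
    System.Net.ServicePointManager.DefaultConnectionLimit = int.MaxValue;

    PerformHitting();
    Console.ReadLine(); // keep the process alive while the asynchronous hits run
}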
Well, the root of the problem lies in the IIS + Windows 7 concurrent request handling limit (some info about such limits here). Moving the service to a machine running Windows Server eliminated the problem.
I have a web application which processes requests on data that the client selects. When the client selects more than 20 objects and clicks Proceed, the client receives this error because the server takes a long time to process the request. However, if fewer records are selected and a timely response is received, no such error occurs. Can someone help me with this?
I have increased the session timeout as well as set the
Try adjusting the executionTimeout in your web.config... this only applies if debug is set to false, however.
<httpRuntime executionTimeout="some number" />
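For instance, a minimal sketch of how that might look next to the compilation element (the 600-second value is just an illustration):

<system.web>
  <compilation debug="false" />
  <httpRuntime executionTimeout="600" />
</system.web>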
If this alone does not solve your issue, check out this blog post which goes into a bit more depth on how to structure your timeouts. Note the IIS reference towards the bottom...