HTTP 503 responses to BeginGetResponse() create massive CPU usage - c#

I'm sending roughly 3K HTTP requests per second to some HTTP servers to use an API that my service relies on.
The requests are made asynchronously by calling BeginGetResponse().
99% of the responses are 404s, which is expected, and my code handles those responses without issue. My CPU usage stays pretty low (~5-10%).
However, sometimes the servers on the API's end start experiencing problems, and every request I make comes back with a 503 error. These are handled exactly the same way as the 404s: they're ignored.
Yet with the 503s the CPU usage is huge: it stays at ~60% the entire time the 503s come in. The result is that when the API servers I'm speaking with have issues, my entire server goes down as well.
Why do 503 responses from the HTTP server cause such high CPU usage compared to 404s? Do the 503s reset the TCP connections in the connection pool?
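For reference, here is a minimal sketch of the request pattern described above (an assumption about the shape of the code, not the asker's actual code). With HttpWebRequest, any non-success status such as 404 or 503 surfaces as a WebException when EndGetResponse is called, so both end up in the same catch block:

using System.Net;

class Poller
{
    public static void Send(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.BeginGetResponse(ar =>
        {
            try
            {
                using (var response = (HttpWebResponse)request.EndGetResponse(ar))
                {
                    // 2xx responses land here; read or ignore the body as needed.
                }
            }
            catch (WebException ex)
            {
                // Both 404 and 503 surface here as a WebException (ProtocolError).
                var status = (ex.Response as HttpWebResponse)?.StatusCode;
                if (status == HttpStatusCode.NotFound || status == HttpStatusCode.ServiceUnavailable)
                {
                    // ignored, as described in the question
                }
                ex.Response?.Close();
            }
        }, null);
    }
}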

Related

Is it really possible to send multiple http requests at the same time with these conditions

Let's imagine we have the following setup (in the original diagram, the blue arrow is the flow direction of HTTP requests and the green one is the flow of HTTP responses):
The "Desktop App" sends multiple HTTP requests concurrently using threads and waits for the responses. My concern is that I've heard the router sends the IP packets one after the other, whereas my goal is to send the HTTP requests at the same time.
Question:
Is it technically possible to send multiple requests at the same time with a single NIC and one Router connected to the internet?
The router might handle one packet after another, but that does not mean the router also waits for the response to the current packet before sending the next one. So it definitely is capable of sending multiple requests and handling their responses "at the same time" - at least from your application's perspective.
Some components that take part in transmitting data over a network might impose limits on how many requests can be sent in parallel. For the most part, your application will not notice this. Of course, if you have lots of simultaneous requests, you might experience longer response times because requests are queued, but no errors as long as you don't have a very large number of parallel requests.
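As an illustration of the point above (not part of the original question; the URLs are placeholders), a sketch of several requests in flight at once from a single machine:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ConcurrentRequests
{
    static async Task Main()
    {
        var urls = new[] { "https://example.com/a", "https://example.com/b", "https://example.com/c" };
        using (var client = new HttpClient())
        {
            // All requests are started before any response is awaited, so they
            // overlap on the wire even though packets leave the NIC one at a time.
            var tasks = urls.Select(u => client.GetAsync(u));
            var responses = await Task.WhenAll(tasks);
            foreach (var r in responses)
                Console.WriteLine(r.StatusCode);
        }
    }
}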

Performance Issue in Windows.Web.Http.HttpClient

I have been using the Windows.Web.Http.HttpClient for my API requests. My HttpClient is a singleton. I analyzed the resource timing for my API calls with the Network Profiler in Visual Studio. In the Timings split-up, I see that the Waiting (TTFB) part takes the most time (about 275ms; sometimes it goes as high as 800ms).
As per this doc, waiting time is the "Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response."
When trying the same API call on other platforms (Mac with NSUrlSession, or Android), the waiting time is significantly lower on the same network. My question is whether this waiting-time delay depends on the HttpClient implementation. If not, is there anything that needs to be changed in my NetworkAdapter code?
The TTFB is a function of the round-trip time (RTT) and the server’s response time (SRT), both of which are mostly outside of the client OS’ control. Just as a basic sanity check, I would recommend measuring the TTFB using Scenario 1 of the HttpClient SDK Sample app. One possible explanation would be that the Windows device doesn’t have the same network setup as the Mac/Android devices (e.g., are they all connected via WiFi? If so, are they all using the same band (2.4 GHz or 5 GHz)?). However, the most likely explanation is that the HTTP request being sent out by HttpClient differs from the one sent out by NSUrlSession (e.g., in terms of the headers), resulting in a different server-side processing time.
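As a rough client-side sanity check of the TTFB, you could stop a stopwatch as soon as the response headers arrive. This sketch uses System.Net.Http rather than Windows.Web.Http and a placeholder URL, so treat it as an approximation:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class TtfbCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var sw = Stopwatch.StartNew();
            // ResponseHeadersRead returns as soon as the headers are in,
            // so the elapsed time excludes the body download.
            using (var response = await client.GetAsync(
                "https://example.com/api", HttpCompletionOption.ResponseHeadersRead))
            {
                sw.Stop();
                Console.WriteLine($"Status {response.StatusCode}, ~TTFB {sw.ElapsedMilliseconds} ms");
            }
        }
    }
}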
The TTFB is very site-dependent. Here's what I’m seeing on my end using the VS2017 Network Profiler with the HttpClient SDK sample app:
(Profiler timing screenshots for Bing.com, Amazon.com, and Microsoft.com.)

Windows kernel queuing outbound network connections

We have an application (a meta-search engine) that must frequently make 50-250 outbound HTTP connections in response to a user action.
The way we do this is by creating a bunch of HttpWebRequests and running them asynchronously using Action.BeginInvoke. This obviously uses the ThreadPool to launch the web requests, which run synchronously on their own thread. Note that it is currently done this way because this was originally a .NET 2.0 app and there was no TPL to speak of.
What we see using ETW (our event sources combined with the .NET Framework and kernel ones) and NetMon is that while the thread pool can start 200 threads running our code in about 300ms (so, no thread-pool exhaustion issues here), it takes a variable amount of time, sometimes up to 10-15 seconds, for the Windows kernel to make all the TCP connections that have been queued up.
This is very obvious in NetMon - you see around 60-100 TCP connections open (SYN) immediately (the number varies, but it's never more than around 120), then the rest trickle in over a period of time. It's as if the connections are being queued somewhere, but I don't know where, and I don't know how to tune this so we can perform more concurrent outgoing connections. Perfmon's Outbound Connection Queue stays at 0, but in the Connections Established counter you can see an initial spike of connections and then a gradual increase as the rest filter through.
It does appear that latency to the endpoints we are connecting to plays a part, as running the code close to those endpoints doesn't show the problem as significantly.
I've taken comprehensive ETW traces, but there is no decent documentation on many of the Microsoft providers, which would be a help I'm sure.
Any advice on working around this, or on tuning Windows for a large number of outgoing connections, would be great. The platform is Win7 (dev) and Win2k8R2 (prod).
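For clarity, a minimal sketch of the launch pattern described above (assumed from the description, not the actual application code): each request runs synchronously inside a delegate dispatched to the ThreadPool via Action.BeginInvoke.

using System;
using System.Net;

class Dispatcher
{
    public static void Launch(string url)
    {
        Action work = () =>
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // read and process the response synchronously on the pool thread
            }
        };
        // Queues the delegate onto the ThreadPool. EndInvoke is omitted for brevity,
        // though real code should call it (or use the TPL) to observe exceptions.
        work.BeginInvoke(null, null);
    }
}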
It looks like slow DNS queries are the culprit here. Looking at the ETW provider "Microsoft-Windows-Networking-Correlation", I can trace the network call from inception to connection and note that many connections are taking > 1 second at the DNS resolver (Microsoft-Windows-RPC).
It appears our local DNS server is slow or can't handle the load we are throwing at it, and it isn't caching aggressively. Production wasn't showing symptoms as severe because the prod DNS servers do everything right.
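One way to confirm a diagnosis like this from code is to time DNS resolution separately from the HTTP call. An illustrative sketch with placeholder hostnames:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

class DnsTiming
{
    static async Task Main()
    {
        var hosts = new[] { "api1.example.com", "api2.example.com" };
        foreach (var host in hosts)
        {
            var sw = Stopwatch.StartNew();
            var addresses = await Dns.GetHostAddressesAsync(host);
            sw.Stop();
            Console.WriteLine($"{host}: {addresses.Length} address(es) in {sw.ElapsedMilliseconds} ms");
        }
    }
}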

ASP.NET - Requests Queued and Requests Rejected

I am running an ASP.NET service. The service starts returning "Service Unavailable - 503" under high load.
Previously it was able to cope with these loads; I am still investigating why this is happening now.
I see a high requests-rejected rate (via the ASP.NET perf counter); however, the requests-queued count (via the ASP.NET perf counter) varies from deployment to deployment, from 1 to 150. For some deployments that show a high requests-rejected rate, I can correlate that with a high requests-queued rate. However, for some deployments the requests queued is low (1-5) while the requests-rejected rate is high.
Am I missing something here? Any pointers on how to investigate this issue further?
I'd take a peek with a profiler to see if you're getting load in areas that you weren't before, such as synchronous DB and network calls.
Look at New Relic (simple to use) to identify the bottlenecks; simple code changes may help you get out of your immediate hole.
Moving forward, look into making the code base more async (if it isn't already).
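An illustrative sketch of that kind of change (the controller and backend URL are made up): moving a blocking downstream call to async releases the ASP.NET worker thread while the service waits.

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    // Before: string data = new WebClient().DownloadString(url);  // blocks a worker thread
    public async Task<ActionResult> Index()
    {
        // After: the worker thread returns to the pool during the await.
        string data = await Client.GetStringAsync("https://backend.example.com/report");
        return Content(data);
    }
}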

HttpWebRequest takes a long time to send when there are a bunch at once from client

I am attempting to load-test a Comet-ish server using a C# load testing client that creates many HttpWebRequests (1000+). I am finding that after a few minutes, randomly, the server takes a long time to receive some of the requests. The client thinks it sent the request successfully, but it actually takes 40s to arrive at the server, at which point it is too late. (Since this is a Comet-type server, the server ends up dropping the client session, which is bad). I tried switching from asynchronous calls to synchronous calls but it didn't make a difference.
The problem must be at the client end. I did some tracing with Wireshark and it turns out that the request actually does take 40 or so seconds to make it to the network pipe from the client software! The server services the request right away when it receives it on its pipe.
Maybe C# sees that the request looks exactly the same as a request I made earlier and is caching it for some weird reason? I am including "Cache-Control: no-cache" in my responses to avoid caching altogether.
I ran into a similar issue when I was first building my web crawler, which makes upwards of 2,000 requests every minute. The problem turned out to be that I wasn't always disposing of the HttpWebResponse objects in a timely fashion. The garbage collection / finalization mechanism will not keep up when you're making requests at that rate.
Whether you're doing synchronous or asynchronous requests doesn't really matter. Just make sure you always call response.Close().
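A small sketch of that advice (the URL is a placeholder): wrapping the response in a using block releases the underlying connection as soon as you're done with it, instead of leaving it to the finalizer.

using System.IO;
using System.Net;

class Fetcher
{
    public static string Fetch(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response (and its connection) is closed as soon as we leave the block.
            return reader.ReadToEnd();
        }
    }
}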
You may be hitting the default client connection limit. By default, it's two connections per host, and any more queue up behind them.
To get around this, add this to your app.config file:
<system.net>
  <connectionManagement>
    <remove address="*"/>
    <add address="*" maxconnection="10" />
  </connectionManagement>
</system.net>
Experiment with maxconnection to see where your effective upper limit is.
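If you'd prefer not to touch app.config, the same limit can also be raised in code at startup; this is just a sketch, and 10 mirrors the value above as a starting point to experiment with.

using System.Net;

static class ConnectionSetup
{
    public static void Configure()
    {
        // Equivalent to maxconnection="10" for all addresses in the config above.
        ServicePointManager.DefaultConnectionLimit = 10;
    }
}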
