Unexplained CPU use in C#/WCF application

I have a troublesome problem which I'm at a loss to explain. To put it simply, the CPU use is inexplicably high on the web servers in my web farm.
I have a large number of users hitting two front-end web servers. 99% of the page loads are Ajax requests and serve a simple JSON-serialized object which the web servers retrieve from a backend using WCF. In the typical case (again, probably 99% of the requests), all the ASPX page is doing is making a WCF call to get this data, serializing it into a JSON string and returning it.
The object is pretty small-- a guid, a couple short strings, a few ints.
The non-typical case is the initial page load, which does the same thing (WCF request) but injects the response into different parts of the page using asp:literals.
All three machines (2 web servers, one backend) have the same hardware specs. I would expect the backend to do the majority of the work in this situation, since it's managing all the data, doing the lookups, etc. BUT: the load on the backend is much less than the load on the front ends. The backend sits at a nice, level 10-20% CPU load. The front ends run an average of 30%, but they're all over the map, sometimes hitting spikes of 100% for 10 seconds and taking 600ms to serve these very simple pages.
When I run the front-end in profiler (ANTS), it flags the WCF communication as taking 80% of the CPU time. That's the whole call on the .NET-generated WCF proxy.
WCF Setup: the service is fully parallel. I have instancing set to "single" and concurrency set to "multiple". I opened up the maxConnections and listenBacklog on the service to 256. Under heavy strain (500 requests/s) I see about 75 connections open between both front-end servers and the service, so it's not hitting that wall. I have security set to 'none' all around. Bandwidth use is about 1/20th of the potential (4Mb/s on a 100Mb/s network).
On the client (the web servers), I create a static ChannelFactory for the service. Code to call the service looks like:
service = MyChannelFactory.CreateChannel();
try {
    service.Call();
    service.Close();
} catch {
    service.Abort();
}
(simplified, but you get the basic picture)
What I don't understand is where all this load on the front end is coming from. What's strange about it is that it's never in the 30%-90% range. It's either in panic mode (100%) or doing OK (30% or less). Given the load on the backend, though, I'd expect both of these machines to be 10% or less. Memory use, handles, etc., all seem reasonable.
To add one more wrinkle: when I log how long it takes to service these calls on the backend, I get times consistently less than 15ms (maybe one or two spikes to 30ms every minute). On the front end, these calls can take up to 1s to return. I guess that could be because of the CPU problems, but it seems off to me.
So... does anyone have any ideas on where to look on this kind of thing? I'm running short on things to explore.
Clarification: The WCF service is hosted in a Windows service, and is using a netTcp binding. Also, I have the maxConnections on the client set to 128, FWIW.
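For reference, a fuller sketch of the static-factory/per-call-channel pattern described above, using the netTcp binding from the clarification. The contract, data shape, and endpoint address are illustrative placeholders, not the actual code:
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    MyData GetData(Guid id);
}

[DataContract]
public class MyData
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public int Count { get; set; }
}

public static class BackendClient
{
    // The factory is expensive to build, so it is created once and reused;
    // channels are cheap and created per call.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>(
            new NetTcpBinding(SecurityMode.None),
            new EndpointAddress("net.tcp://backend:8000/MyService"));

    public static MyData GetData(Guid id)
    {
        IMyService channel = Factory.CreateChannel();
        try
        {
            MyData result = channel.GetData(id);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();   // never Close a faulted channel
            throw;
        }
    }
}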

It's hard to say what might be going on, but a wild guess would be that something is hitting a contention point and it's spinning (instead of doing a wait).
By any chance, have you increased the number of allowed HTTP connections to the back-end server in the front-end server? You can do it through the config file. One common issue I see with WCF clients is that the limit is left to the default value of 2, which severely limits concurrency at the client proxy level.
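For illustration, the same change can also be made in code at application startup instead of in the config file. The value 96 is an arbitrary placeholder, and note that this limit only affects HTTP-based bindings, so it would not apply to the netTcp setup clarified above:
using System.Net;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Raise the per-host outgoing HTTP connection limit from the default of 2.
        // 96 is a placeholder; size it to the concurrency you actually measure.
        ServicePointManager.DefaultConnectionLimit = 96;
    }
}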

Have you considered and tested for the possibility of external factors?
Process recycles?
Is Dynamic compression enabled?

Related

What is the highest safe number for ServicePointManager.DefaultConnectionLimit in a .net core app?

I have a .NET Core API that must make around 150,000 calls to collect data from external services. I am running these requests in parallel using Parallel.ForEach, and that seems to be working great; however, I get an error from the HTTP client for around 100,000 of my requests:
The Operation was canceled
Looking back at this, I wish I had also logged the exception type, but I believe this is due to not having a high enough outgoing connection limit.
Through debugging I have found that this returns 2:
ServicePointManager.DefaultConnectionLimit
On the face of it, if this really is the maximum number of open connections allowed to an external domain/server, I want to increase it as high as possible, ideally to 150,000, to ensure parallel processing doesn't cause an issue.
The problem is I can't find any information on what a safe limit is, or how much load this will put on my machine, if it is even a lot. Since this issue still causes a real request to be made, my data provider counts it toward my charges, but obviously I get nothing from it since the .NET Core framework is just throwing my result away.
Since this problem is also intermittent it can be difficult to debug and I would just like to set this value as high as is safe to do so on my local machine.
I believe this question is relevant to Stack Overflow since it deals directly with the technical issue above, whereas other questions I could find only ask about what this setting is.
As far as I understand, you are trying to make 150,000 simultaneous requests to external services. I presume that your services are RESTful web services. If that is the case, when you set DefaultConnectionLimit to an arbitrarily high number, every single request opens a port for requesting data. This definitely clogs your network and exhausts your ports (the port range is 0 to 65535).
Besides, making 150,000 requests without throttling uncontrollably consumes your OS resources.
DefaultConnectionLimit is there because it protects you from the aforementioned problems.
You may consider using SemaphoreSlim for throttling, as sketched below.
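A minimal sketch of that idea, assuming one HTTP GET per item: a single shared HttpClient and a SemaphoreSlim capped at 50 concurrent requests. The cap, the FetchAsync helper, and the example URLs are assumptions for illustration, not values from the question:
using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ThrottledFetcher
{
    // One shared HttpClient; creating a new one per request exhausts sockets.
    static readonly HttpClient Client = new HttpClient();
    // Allow at most 50 requests in flight at once (tune to taste).
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(50);

    static async Task<string> FetchAsync(string url)
    {
        await Gate.WaitAsync();                 // wait for a free slot
        try
        {
            return await Client.GetStringAsync(url);
        }
        finally
        {
            Gate.Release();                     // free the slot for the next request
        }
    }

    static async Task Main()
    {
        var urls = Enumerable.Range(0, 150000)
                             .Select(i => "https://example.com/data/" + i);
        string[] results = await Task.WhenAll(urls.Select(FetchAsync));
        Console.WriteLine("Fetched " + results.Length + " responses");
    }
}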

WCF Service calling another WCF Service is slow

I have a design whereby we have a WCF service that accesses a datastore that is represented as another WCF service. The idea behind this is to adhere to SOA and have the potential to load balance the actual service and the data access layer independently, as well as enable the datastore to change massively with no impact on the initial service.
Problem is these are running on IIS6 and encryption must be enabled.
With both services enabled we are getting averages of approximately
Average Number of requests per second: 4.75469280423686 over 400 calls.
But if I remove the call to the second service and replace it with a direct reference, this nearly doubles to
Average Number of requests per second: 8.52248037501811 over 400 calls.
Does anyone have any clues as to how/what I can do to optimise this?
I should add these are not concurrent calls.
Are both web services running on the same machine and the same app pool? I've had that exact issue before; we eventually cut that architecture completely, but I believe it could have been helped by putting them in different app pools.
Also, since you mentioned IIS6, .Net may be holding back on you: Check out http://msdn.microsoft.com/en-us/library/ff647787.aspx (Chapter 6: Improving ASP.NET Performance) - especially the "Threading Explained" section. (IIS6 by default doesn't have the appropriate number of .Net threads for your processor - IIS7+ does.)
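The fix that chapter describes is machine.config tuning of the ASP.NET thread settings, not application code. Purely as an illustration of the same underlying idea, raising the thread pool minimums so a burst of requests does not queue behind the pool's slow thread-injection rate, here is a rough sketch (this is not what the article prescribes, and the numbers are arbitrary):
using System;
using System.Threading;

class ThreadPoolWarmup
{
    static void Main()
    {
        int worker, io;
        ThreadPool.GetMinThreads(out worker, out io);
        Console.WriteLine("Current minimums: worker=" + worker + ", IO=" + io);

        // Raise the minimums; 100/100 is illustrative only.
        ThreadPool.SetMinThreads(100, 100);
    }
}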
Good luck!

Pushing OR Polling

I have a Silverlight client and a WCF service. The client polls the WCF service every 4 seconds, and I have almost 100 clients at a time.
The web server is an entry level server with 512 MB RAM.
I want to know whether polling depends on the server configuration: if I increase the server's specs, will polling work better for the clients?
And second, would pushing (duplex) be better than polling? I have gotten some mixed responses from the blogs I have been reading.
Moreover, what are the best practices for optimizing polling for a quicker response at the client? My application needs real-time data.
Thanks
My guess would be that you have some kind of race condition that is showing up only with a larger number of clients. What concurrency and instancing modes are you using for your WCF service? (See MSDN: WCF Sessions, Instancing, and Concurrency at http://msdn.microsoft.com/en-us/library/ms731193.aspx)
If you're "losing" responses the first thing I would do is start logging or tracing what's happening at the server. For instance, when a client "doesn't see" a response, is the server ever getting a request? (If so, what happens to it, etc etc.)
I would also keep an eye on memory usage -- you don't say what OS you're using, but 512 MB is awfully skinny these days. If you ever get into a swap-to-disk situation, it's clearly not going to be a good thing.
Lastly, assuming that your service is CPU-bound (i.e. no heavy database & filesystem calls), the best way to raise your throughput is probably to reduce the message payload (wire size), use the most performant bindings (i.e. if client is .NET and you control it, NetTcp binding is much faster than HTTP), and, of course, multithread your service. IMHO, with the info you've provided -- and all other things equal -- polling is probably fine and pushing might just make things more complex. If it's important, you really want to bring a true engineering approach to the problem and identify/measure your bottlenecks.
Hope this helps!
"Push" notifications generally have a lower network overhead, since no traffic is sent when there's nothing to communicate. But "pull" notifications often have a lower application overhead, since you don't have to maintain state when the client is just idling waiting for a notification.
Push notifications also tend to be "faster", since clients are notified immediately when the event happens rather than waiting for the next polling interval. But pull notifications are more flexible -- you can use just about any server or protocol you want, and you can double your client capacity just by doubling your polling wait interval.
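For the WCF "push" option specifically, the usual shape is a duplex contract with a callback interface. A minimal sketch with illustrative names (a Silverlight client would typically consume something like this over the polling-duplex binding):
using System.Collections.Generic;
using System.ServiceModel;

public interface IDataCallback
{
    [OperationContract(IsOneWay = true)]
    void OnDataChanged(string payload);          // pushed to the client
}

[ServiceContract(CallbackContract = typeof(IDataCallback))]
public interface IDataService
{
    [OperationContract]
    void Subscribe();                            // client registers for pushes
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class DataService : IDataService
{
    private readonly List<IDataCallback> _clients = new List<IDataCallback>();

    public void Subscribe()
    {
        // Capture the caller's callback channel so the service can push to it later.
        IDataCallback callback =
            OperationContext.Current.GetCallbackChannel<IDataCallback>();
        lock (_clients) _clients.Add(callback);
    }

    // Called by whatever produces new data on the server side.
    public void Publish(string payload)
    {
        lock (_clients)
            foreach (IDataCallback client in _clients)
                client.OnDataChanged(payload);   // push instead of waiting for a poll
    }
}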

Proper way to handle thousands of calls to external service from asp.net (mvc)

I'm tasked with creating a web application. I'm currently using C# and ASP.NET MVC (though I doubt MVC is relevant to the question). I am a rookie developer and somewhat new to .NET.
Part of the logic in the application I'm building is to make requests to an external SMS gateway by hitting a particular URL, either as part of a user-initiated action in the web app (could be a couple of messages sent) or as part of a scheduled task run daily (could and will be several thousand messages sent).
In relation to the daily task, I am afraid that looping, say, 10,000 times in one thread (especially if I'm also to take action depending on the response of the request, like writing to a DB) is not the best strategy, and that I could gain some performance/time savings from parallelization.
Ultimately I'm more afraid that thousands of users at the same time (very likely) will perform the action that triggers a request. With a naive implementation that spawns some kind of background thread (whatever it's called) for each request, I fear a scenario with hundreds/thousands of requests at once.
So if my assumptions are correct, how do I deal with this? Do I have to manually spawn some appropriate number of new Thread()s and coordinate their work with a producer/consumer-style queue, or is there some easier way?
Cheers
If you have to make 10,000 requests to a service then it means that the service's API is anemic - probably CRUD-based, designed as a thin wrapper over a database instead of an actual service.
A single "request" to a well-designed service should convey all of the information required to perform a single "unit of work" - in other words, those 10,000 requests could very likely be consolidated into one request, or at least a small handful of requests. This is especially important if requests are going to a remote server or may take a long time to complete (and 2-3 seconds is an extremely long time in computing).
If you do not have control over the service, if you do not have the ability to change the specification or the API - then I think you're going to find this very difficult. A single machine simply can't handle 10,000 outgoing connections at once; it will struggle with even a few hundred. You can try to parallelize this, but even if you achieve a tenfold increase in throughput, it's still going to take half an hour to complete, which is the kind of task you probably don't want running on a public-facing web site (but then, maybe you do, I don't know the specifics).
Perhaps you could be more specific about the environment, the architecture, and what it is you're trying to do?
In response to your update (possibly having thousands of users all performing an action at the same time that requires you to send one or two SMS messages for each):
This sounds like exactly the kind of scenario where you should be using Message Queuing. It's actually not too difficult to set up a solution using WCF. Some of the main reasons why one uses a message queue are:
There are a large number of messages to send;
The sending application cannot afford to send them synchronously or wait for any kind of response;
The messages must eventually be delivered.
And your requirements fit this like a glove. Since you're already on the Microsoft stack, I'd definitely recommend an asynchronous WCF service backed by MSMQ.
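As a rough sketch of what that looks like in WCF (contract and operation names are illustrative): a one-way contract exposed over a netMsmqBinding endpoint, so the web app just enqueues work and a back-end host drains the queue at its own pace.
using System.ServiceModel;

[ServiceContract]
public interface ISmsQueueService
{
    // Operations on a queued (netMsmqBinding) endpoint must be one-way:
    // the web app drops the message on the queue and returns immediately.
    [OperationContract(IsOneWay = true)]
    void SendSms(string phoneNumber, string message);
}

public class SmsQueueService : ISmsQueueService
{
    public void SendSms(string phoneNumber, string message)
    {
        // Runs later in the queue-processing host, not in the web request:
        // call the external SMS gateway here and record the outcome.
    }
}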
If you are working with SOAP, or some other type of XML request, you may not have an issue dealing with that level of requests in a loop.
I set up something similar using a SOAP server with 4-5K requests with no problem...
A SOAP request to a web service (assuming .NET 2.0 or later) looks something like this:
WebServiceProxyClient myclient = new WebServiceProxyClient();
myclient.SomeOperation(parameter1, parameter2);
myclient.Close();
I'm assuming that this code will be embedded in your business logic, which you will trigger as part of the user-initiated action or as part of the scheduled task.
You don't need to do anything special in your code to cope with a high volume of users. This will actually be a matter of scaling your platform.
When you say 10,000 requests, what do you mean? 10,000 requests per second/minute/hour? Is this your page hits per day, etc.?
I'd also look into using an AsyncController, so that your site doesn't quickly become completely unusable.
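On MVC 4 or later the same idea can be written with async/await (on MVC 3 you would use AsyncController with Async/Completed method pairs). A hedged sketch; the controller, action, and gateway URL are illustrative:
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class SmsController : Controller
{
    private static readonly HttpClient Gateway = new HttpClient();

    [HttpPost]
    public async Task<ActionResult> Send(string phoneNumber, string message)
    {
        // The request thread is released during the await instead of blocking
        // while the external SMS gateway responds.
        string url = "https://sms.example.com/send?to=" + Uri.EscapeDataString(phoneNumber)
                   + "&text=" + Uri.EscapeDataString(message);
        HttpResponseMessage response = await Gateway.GetAsync(url);

        return Json(new { ok = response.IsSuccessStatusCode });
    }
}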

Whose responsibility is it to throttle web requests?

I am working on a class library that retrieves information from a third-party web site. The web site being accessed will stop responding if too many requests are made within a set time period (~0.5 seconds).
The public methods of my library directly relate to a resource or file on the web server. In other words, each time a method is called, an HttpWebRequest is created and sent to the server. If all goes well, an XML file is returned to the caller. However, if this is the second web request in less than 0.5s, the request will time out.
My dilemma lies in how I should handle request throttling (if at all). Obviously, I don't want the caller to sit around waiting for a response, especially if I'm completely certain that their request will time out.
Would it make more sense for my library to queue and throttle the web requests I create, or should my library simply throw an exception if a client does not wait long enough between API calls?
The concept of a library is to give its client code as little to worry about as possible. Therefore I would make it the library's job to queue requests and return results in a timely manner. In an ideal world you would use a callback or delegate model so that the client code can operate asynchronously, not blocking the UI. You could also offer the option of skipping the queue (and failing if it is called too soon), and possibly even offer priorities within the queue model.
I also believe it is the responsibility of the library author to default to being a good citizen, and for the library's default operation to be to comply to the conditions of the data provider.
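A minimal sketch of the queue-and-throttle option, assuming the provider's ~0.5s spacing rule: a small gate that serializes calls and spaces them at least 500 ms apart. Class and method names are illustrative, and a production version would hand results back via the callback/delegate model described above rather than blocking:
using System;
using System.Threading;

public sealed class RequestThrottle
{
    private readonly object _lock = new object();
    private readonly TimeSpan _minInterval;
    private DateTime _lastRequestUtc = DateTime.MinValue;

    public RequestThrottle(TimeSpan minInterval)
    {
        _minInterval = minInterval;
    }

    // Blocks just long enough to keep requests at least _minInterval apart.
    public T Run<T>(Func<T> makeRequest)
    {
        lock (_lock)
        {
            TimeSpan wait = _lastRequestUtc + _minInterval - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);
            _lastRequestUtc = DateTime.UtcNow;
            return makeRequest();
        }
    }
}

// Usage inside the library (DownloadXml is a placeholder for the HttpWebRequest call):
//   private static readonly RequestThrottle Throttle =
//       new RequestThrottle(TimeSpan.FromMilliseconds(500));
//   ...
//   string xml = Throttle.Run(() => DownloadXml(resourceUri));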
I'd say both - you're dealing with two independent systems and both should take measures to defend themselves from excessive load. The web server should refuse incoming connections, and the client library should take steps to reduce the requests it makes to a slow or unresponsive external service. A common pattern for dealing with this on the client is 'circuit breaker' which wraps calls to an external service, and fails fast for a certain period after failure.
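A bare-bones sketch of that circuit-breaker idea: after a few consecutive failures the breaker "opens" and fails fast for a cooldown period instead of hammering the unresponsive service. Thresholds and names are illustrative:
using System;

public sealed class CircuitBreaker
{
    private readonly object _lock = new object();
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failures;
    private DateTime _openUntilUtc = DateTime.MinValue;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public T Execute<T>(Func<T> call)
    {
        lock (_lock)
        {
            if (DateTime.UtcNow < _openUntilUtc)
                throw new InvalidOperationException("Circuit open: failing fast.");
        }

        try
        {
            T result = call();
            lock (_lock) { _failures = 0; }          // success resets the breaker
            return result;
        }
        catch
        {
            lock (_lock)
            {
                _failures++;
                if (_failures >= _failureThreshold)
                    _openUntilUtc = DateTime.UtcNow + _openDuration;
            }
            throw;
        }
    }
}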
That's the web server's responsibility, IMO. Because the critical load depends on hardware, network bandwidth, and a lot of other things that are outside of your application's control, your application should not concern itself with trying to deal with it. IIS can throttle traffic based on various configuration options.
What kind of client is it? Is this an interactive client, for example a GUI-based app?
In that case, you can equate it to a web-browser scenario and let the timeout surface to the caller. Also, if you know for sure that this web server is throttling requests, you can tell the client that it has to wait for a given time period before retrying. That way, the client will not keep re-issuing requests, and it will know from the first timeout that it is futile to issue requests too fast.
