I have a design in which a WCF service accesses a datastore that is itself exposed as another WCF service. The idea behind this is to adhere to SOA principles, gain the option of load balancing the actual service and the data access layer independently, and allow the datastore to change drastically with no impact on the initial service.
The problem is that both are running on IIS 6 and encryption must be enabled.
With both services enabled we are getting averages of approximately
Average Number of requests per second: 4.75469280423686 over 400 calls.
But if I remove the call to the second service and replace it with an absolute reference, this nearly doubles to
Average Number of requests per second: 8.52248037501811 over 400 calls.
Does anyone have any ideas about how I can optimise this?
I should add that these are not concurrent calls.
Are both web services running on the same machine and in the same app pool? I've had that exact issue before; we eventually cut that architecture completely, but I believe it could have been helped by putting them in different app pools.
Also, since you mentioned IIS 6, .NET may be holding you back: check out http://msdn.microsoft.com/en-us/library/ff647787.aspx (Chapter 6: Improving ASP.NET Performance), especially the "Threading Explained" section. (By default, IIS 6 doesn't configure the appropriate number of .NET threads for your processor; IIS 7+ does.)
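As a rough programmatic stand-in for the machine.config processModel tuning that chapter describes, the thread-pool floor can also be raised at application start; the multiplier below is illustrative, not a recommendation:

// Raise the minimum worker / IO-completion threads so bursts don't wait on
// thread-pool growth. Tune the numbers per the MSDN guidance above.
using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        int perCore = 100; // illustrative multiplier
        ThreadPool.SetMinThreads(perCore * Environment.ProcessorCount,   // worker threads
                                 perCore * Environment.ProcessorCount);  // IO completion threads
    }
}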
Good luck!
I have a .NET Core API that must make around 150,000 calls to collect data from external services. I am running these requests in parallel using Parallel.ForEach, and that seems to be working great; however, I get an error from the HTTP client for around 100,000 of my requests:
The Operation was canceled
Looking back at this, I wish I had also logged the exception type, but I believe this is due to the outgoing connection limit being too low.
Through debugging, I have found that this returns 2:
ServicePointManager.DefaultConnectionLimit
On the face of it, if this really is the maximum number of open connections allowed to an external domain/server, I want to increase it as high as possible, ideally to 150,000, to ensure parallel processing doesn't cause an issue.
The problem is I can't find any information on what a safe limit is, or how much load this will put on my machine, if it is even a lot. Since this issue causes a real request to be made, my data provider counts it toward my charges, but obviously I get nothing from it since the .NET Core framework is just throwing the result away.
Since this problem is also intermittent, it can be difficult to debug, and I would just like to set this value as high as is safe on my local machine.
I believe this question is relevant to Stack Overflow since it deals directly with the technical issue above, whereas other questions I could find only ask what this setting is.
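For what it's worth, raising the value itself is trivial; the question is what number is actually safe:

ServicePointManager.DefaultConnectionLimit = 1000; // illustrative value only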
As far as I understand, you are trying to make 150,000 simultaneous requests to external services. I presume that your services are RESTful web services. If that is the case, when you set DefaultConnectionLimit to an arbitrarily high number, every single request opens a port for requesting data. This definitely clogs your network and your ports (the port range is 0 to 65535).
Besides, making 150,000 requests without throttling consumes your OS resources uncontrollably.
DefaultConnectionLimit is there because it protects you from the aforementioned problems.
You may consider using SemaphoreSlim for throttling.
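A minimal sketch of that approach, assuming HttpClient and a hypothetical list of URLs; the degree of parallelism and the connection cap are illustrative:

// Throttle the 150,000 outgoing calls with SemaphoreSlim instead of raising
// the connection limit to an extreme value. Names and numbers are illustrative.
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class ThrottledFetcher
{
    // Keep the per-server connection pool in line with the throttle.
    private static readonly HttpClient Client =
        new HttpClient(new HttpClientHandler { MaxConnectionsPerServer = 50 });

    public static async Task<string[]> FetchAllAsync(IEnumerable<string> urls, int maxConcurrency = 50)
    {
        using (var gate = new SemaphoreSlim(maxConcurrency))
        {
            var tasks = urls.Select(async url =>
            {
                await gate.WaitAsync();   // wait for a free slot
                try
                {
                    return await Client.GetStringAsync(url);
                }
                finally
                {
                    gate.Release();       // free the slot for the next request
                }
            }).ToList();

            return await Task.WhenAll(tasks);
        }
    }
}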
I need help finding a strategy to analyze a problem.
Suddenly, my application has started to behave strangely.
Summarizing, my application
1. (.NET 4.0) uses a web service
2. (.svc, .NET 3.5) that executes some procedures. I measured the time of the procedures, and the total is under one second (call this the measured time).
Most of the time the wait is a few milliseconds: fair enough.
Sometimes, though (and it unfortunately seems to be random), the wait can go up to a couple of minutes and then ends in a timeout (correctly); if I check the measured time, it is still under one second.
Where am I losing this time?
How can I figure out what is happening?
Do you have any tools, hints, or other suggestions to help me understand what is going on?
Thanks
In a similar situation, I would start with the following:
Configure WCF Tracing on both the client and service
... (http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx)
Configure Fiddler to view the web service communications.
... The Fiddler and Monitoring Web Service Traffic SO post provides good reference links.
Configure performance monitor with WCF counters
... Windows Communication Foundation (WCF) includes a large set of performance counters, scoped to three levels (Service, Endpoint, and Operation), which will help you monitor application performance; a small sketch for reading them from code follows this list. The MSDN article provides a detailed explanation:
... http://msdn.microsoft.com/en-us/library/ms735098(v=vs.110).aspx
Conduct a network trace to watch the actual network traffic
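As a complement to perfmon, the WCF counters can also be read from code. A small sketch (the counters must be enabled in the service's diagnostics configuration, and the category/counter names vary by framework version, so verify them in perfmon first):

using System;
using System.Diagnostics;

class WcfCounterDump
{
    static void Main()
    {
        // Service-level WCF counter category for .NET 4.x; check perfmon for the exact name.
        const string category = "ServiceModelService 4.0.0.0";
        if (!PerformanceCounterCategory.Exists(category))
        {
            Console.WriteLine("WCF service counters not found - are they enabled in config?");
            return;
        }

        foreach (var instance in new PerformanceCounterCategory(category).GetInstanceNames())
        {
            using (var callsPerSec = new PerformanceCounter(category, "Calls Per Second", instance, true))
            using (var outstanding = new PerformanceCounter(category, "Calls Outstanding", instance, true))
            {
                // Rate counters need two samples; the first NextValue() is only a baseline.
                Console.WriteLine("{0}: {1} calls/s, {2} outstanding",
                    instance, callsPerSec.NextValue(), outstanding.NextValue());
            }
        }
    }
}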
Note: The following article provides a really good overview of WCF performance optimization:
http://weblogs.asp.net/sweinstein/archive/2009/01/03/creating-high-performance-wcf-services.aspx
Good luck.
I am in the process of creating an application that will communicate with a single server where one or more WCF web services will be installed. I am a little new to this process and was wondering which of these two options would be better in the long run to handle the load of a significant number of users:
1- Create and install a single Web Service on a multi-core server for all of the client applications to communicate with.
2- Create and install multiple Web Services on a multi-core server, each to communicate with different modules inside of the client application.
All in all, I'm just trying to figure out whether, in terms of processing time with a large number of users, there is a significant difference between options 1 and 2, or whether option 2 would just create an unnecessary programming headache.
Thanks,
Patrick
The advantage of having multiple web services would be that each can have its own application pool (i.e., worker process) in IIS, so you can recycle the application pool for one web service without affecting the others.
The advantage of having a single web service would be potentially easier maintenance, since the code is in one file, etc. Of course, if it's a lot of code, this can make maintenance harder too.
So the question is, what's the right level of granularity?
You can split the web services up per business function, and I've found that this is a good approach. For example, if you have some business methods that deal with invoicing, you could put those into an Invoicing web service.
If you have other business methods that deal with shipping orders, you could put those into a Shipping web service.
This creates a nice split, in my opinion, and also lets you leverage the application pool advantages discussed earlier.
Example
You can see a real world example of this type of split with FedEx. Note how they split their web services up by shipping, tracking and visibility, etc.
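To make the split concrete, here is a minimal sketch with one contract per business area; the interface and operation names are hypothetical:

using System.ServiceModel;

// Each contract can be hosted as its own web service, and therefore in its
// own application pool, as discussed above.
[ServiceContract]
public interface IInvoicingService
{
    [OperationContract]
    decimal GetInvoiceTotal(string invoiceNumber);
}

[ServiceContract]
public interface IShippingService
{
    [OperationContract]
    string GetShipmentStatus(string trackingNumber);
}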
I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved:
A web service needs to be accessible from multiple clients simultaneously, and the service needs to return results from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a number of web servers for validation purposes, and we're talking about a couple of thousand or more queries per minute.
My initial draft approach was to use a Windows service as the WCF host, with the service contract implemented by a class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) that holds a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data -> multiple clients. What I do not like about this is that the data and communication layers are merged into one, and performance-wise it doesn't feel "right".
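Roughly, the draft looks like this (simplified; the contract and member names are illustrative):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IIpListService
{
    [OperationContract]
    bool IsListed(string ip);

    [OperationContract]
    void Add(string ip);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class IpListService : IIpListService
{
    private readonly HashSet<string> _addresses = new HashSet<string>();
    private readonly object _sync = new object();

    public bool IsListed(string ip)
    {
        lock (_sync) { return _addresses.Contains(ip); }
    }

    public void Add(string ip)
    {
        lock (_sync) { _addresses.Add(ip); }
    }
}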
What I really want is a Windows service running an instance of a container class that holds the IP list, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely with minimal blocking. Using another WCF channel would not really take me far from the initial draft implementation, or would it?
What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question.
All ideas are appreciated. Thanks!
UPDATE: The data set will change dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers data cleanup every 10-15 minutes according to some rules.
UPDATE 2: A separate benchmark project will be started that uses MySQL as the data backend (instead of an in-memory list).
It depends on how far it has to scale. If a single server will suffice, then fine; keep it conveniently in memory (as long as you can recreate the data if the server gets restarted). If the data volume is low, then simple blocking (lock) should work fine to synchronize the data, or for higher throughput a ReaderWriterLockSlim. I would probably not store it directly in the WCF class instance, though.
I would avoid anything involving sessions (if/when this ties into the WCF life-cycle); this is rarely helpful to simple services.
For distributed load (over multiple servers) I would give consideration to a separate dedicated backend. A database or memcached / AppFabric / etc would be worth consideration.
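For the single-server case, a minimal sketch of keeping the data outside the WCF service class, guarded by ReaderWriterLockSlim (names are illustrative):

using System.Collections.Generic;
using System.Threading;

// The WCF service just delegates to this store, so the service instance
// itself holds no state.
public static class IpStore
{
    private static readonly HashSet<string> Addresses = new HashSet<string>();
    private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();

    public static bool Contains(string ip)
    {
        Lock.EnterReadLock();
        try { return Addresses.Contains(ip); }
        finally { Lock.ExitReadLock(); }
    }

    public static void Add(string ip)
    {
        Lock.EnterWriteLock();
        try { Addresses.Add(ip); }
        finally { Lock.ExitWriteLock(); }
    }
}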
I have a troublesome problem which I'm at a loss to explain. To put it simply, the CPU use is inexplicably high on the web servers in my web farm.
I have a large number of users hitting two front-end web servers. 99% of the page loads are Ajax requests and serve a simple JSON-serialized object which the web servers retrieve from a backend using WCF. In the typical case (again, probably 99% of the requests), all the ASPX page is doing is making a WCF call to get this data, serializing it into a JSON string and returning it.
The object is pretty small: a GUID, a couple of short strings, a few ints.
The non-typical case is the initial page load, which does the same thing (WCF request) but injects the response into different parts of the page using asp:literals.
All three machines (two web servers, one backend) have the same hardware specs. I would expect the backend to do the majority of the work in this situation, since it's managing all the data, doing the lookups, etc. BUT: the load on the backend is much less than the load on the front ends. The backend sits at a nice, level 10-20% CPU load. The front ends run an average of 30%, but they're all over the map, sometimes hitting spikes of 100% for 10 seconds and taking 600ms to serve these very simple pages.
When I run the front-end in profiler (ANTS), it flags the WCF communication as taking 80% of the CPU time. That's the whole call on the .NET-generated WCF proxy.
WCF Setup: the service is fully parallel. I have instancing set to "single" and concurrency set to "multiple". I opened up the maxConnections and listenBacklog on the service to 256. Under heavy strain (500 requests/s) I see about 75 connections open between both front-end servers and the service, so it's not hitting that wall. I have security set to 'none' all around. Bandwidth use is about 1/20th of the potential (4Mb/s on a 100Mb/s network).
On the client (the web servers), I create a static ChannelFactory for the service. Code to call the service looks like:
service = MyChannelFactory.CreateChannel();    // proxy from the static ChannelFactory
try {
    service.Call();                            // the actual WCF operation
    ((IClientChannel)service).Close();         // close the channel cleanly on success
} catch {
    ((IClientChannel)service).Abort();         // abort the (possibly faulted) channel
}
(simplified, but you get the basic picture)
What I don't understand is where all this load on the front end is coming from. What's strange about it is that it's never in the 30%-90% range. It's either in panic mode (100%) or doing OK (30% or less). Given the load on the backend, though, I'd expect both of these machines to be 10% or less. Memory use, handles, etc., all seem reasonable.
To add one more wrinkle: when I log how long it takes to service these calls on the backend, I get times consistently less than 15ms (maybe one or two spikes to 30ms every minute). On the front end, these calls can take up to 1s to return. I guess that could be because of the CPU problems, but it seems off to me.
So... does anyone have any ideas on where to look on this kind of thing? I'm running short on things to explore.
Clarification: The WCF service is hosted in a Windows service, and is using a netTcp binding. Also, I have the maxConnections on the client set to 128, FWIW.
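Roughly, the client-side binding setup looks like this (simplified; the contract name and address are placeholders):

// Uses System.ServiceModel; values mirror the settings described above, and the
// factory is created once and kept static.
var binding = new NetTcpBinding(SecurityMode.None)
{
    MaxConnections = 128   // client-side connection pool size mentioned above
};
var factory = new ChannelFactory<IMyBackendService>(binding,
    new EndpointAddress("net.tcp://backend-host:808/MyService"));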
It's hard to say what might be going on, but a wild guess would be that something is hitting a contention point and it's spinning (instead of doing a wait).
By any chance, have you increased the number of allowed HTTP connections to the back-end server on the front-end servers? You can do it through the config file. One common issue I see with WCF clients is that the limit is left at the default value of 2, which severely limits concurrency at the client proxy level.
Have you considered and tested for the possibility of external factors?
Process recycles?
Is Dynamic compression enabled?