I have a simple Web API that returns the list of contacts:
public class ContactsApi : ApiController
{
    public List<Contact> GetContacts()
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        // Doing some business to get contacts
        watch.Stop();
        // The operation only takes less than 500 milliseconds
        // Returning the list of contacts
    }
}
As I've used a Stopwatch to test the data-retrieval performance, it's apparent that it takes less than a second. However, when I issue a request to the GetContacts action via the Chrome browser, it takes 4 to 5 seconds to return the data.
Apparently that delay has nothing to do with my data-retrieval code. It seems to me that Web API itself is running slowly, but I have no idea how to debug and trace that.
Is there any utility that logs timings for the ASP.NET HTTP request-processing pipeline? I mean something like Navigation Timing, showing when each pipeline event occurred.
How big is your response? Maybe the cost is in serialization and transfer. Either way, there are plenty of ways to profile it; I would start with one of the tools on the market, like ANTS Performance Profiler or dotTrace.
Are you running it with the debugger attached? Do some tests without the debugger.
I had similar problems with a Web API project I am currently developing, and for us turning off the debugger made the tests take milliseconds instead of seconds.
There also seems to be some startup cost when calling an API for the first time; subsequent requests are always faster.
Try using Fiddler (http://fiddler2.com/), a free web debugging tool. It has most of the features that you are looking for.
4.5 seconds is pretty huge. If you use EF, you could use MiniProfiler.EF.
In the past I experienced slowdowns from incorrectly using Entity Framework's IQueryable (converting it to lists too early, expanding, and so on).
If you are using EF, keep the query as an IQueryable as long as possible (.ToList() executes the query).
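A hedged illustration (AppDbContext, Contacts, and IsActive are assumed names, not from the question; assumes System.Linq and an EF DbContext): composing on IQueryable lets EF translate the whole query to SQL, while an early .ToList() pulls the entire table into memory and filters it there.

using (var db = new AppDbContext())
{
    // Good: the filter is translated to SQL; only matching rows are loaded.
    var active = db.Contacts
        .Where(c => c.IsActive)
        .ToList(); // the query executes here, exactly once

    // Bad: ToList() runs "SELECT * FROM Contacts" first, then filters in memory.
    var activeSlow = db.Contacts
        .ToList()
        .Where(c => c.IsActive)
        .ToList();
}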
Depending on your needs, use debugging tools like MiniProfiler and MiniProfiler.EF; the tools others suggested are probably good too (although I haven't used them in the past).
The cost of serialization could also be important (if you are using DTOs): AutoMapper (and probably other tools) seems slow on large lists. I'd suggest manually mapping them in an extension method if you really want performance on big lists.
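A sketch of that manual-mapping suggestion; the Contact/ContactDto shapes here are illustrative, not taken from the question:

using System.Collections.Generic;
using System.Linq;

public class Contact { public int Id; public string Name; }
public class ContactDto { public int Id; public string Name; }

public static class ContactMappings
{
    // Hand-written projection: no reflection, no configuration scanning.
    public static ContactDto ToDto(this Contact contact)
    {
        return new ContactDto { Id = contact.Id, Name = contact.Name };
    }

    public static List<ContactDto> ToDtos(this IEnumerable<Contact> contacts)
    {
        return contacts.Select(c => c.ToDto()).ToList();
    }
}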
Related
I have an MVC application with an API, running on IIS 6.0 (7.0 on the production servers). For the API, I use an IHttpHandler implementation in an API.ashx file.
I have many different API calls being made to my API.ashx file, but I'll describe one that makes no DB calls, so it's definitely NOT a database issue.
At the very beginning of the ProcessRequest method I've added a System.Diagnostics.Stopwatch to track performance, stopping it at the method's last line.
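In code, the instrumentation looks roughly like this (a sketch; the handler name and the logging call are illustrative, not my actual API.ashx code):

using System.Diagnostics;
using System.Web;

public class ApiHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var watch = Stopwatch.StartNew();

        // ... the actual API work happens here ...

        watch.Stop();
        // Consistently ~5 ms, while the observed TTFB varies wildly.
        context.Response.AppendToLog("Elapsed: " + watch.ElapsedMilliseconds + " ms");
    }
}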
The stopwatch's output is always stable (±2 ms) and averages 5 ms(!!!).
But on my site I see completely unstable and varying Time To First Byte values: TTFB may start at 15 ms and grow up to 1 SECOND, averaging about 300 ms, while the logs still show my stable 5 ms from the stopwatch.
This happens on every server I use, even locally (so it is not a network-related problem) and in production. BTW, all static resources load really fast (<10 ms).
Can anyone suggest a solution to this?
This sounds like a difficult one to diagnose without a little more detail. Could you edit your question and add a waterfall chart showing the slow API call in question? A really good tool to produce waterfall charts is http://webpagetest.org
I also recommend reading this article about diagnosing slow TTFBs.
http://www.websiteoptimization.com/speed/tweak/time-to-first-byte/
It goes into great detail about some of the reasons behind a slow response.
Here are some server performance issues that may be slowing down your server.
Memory leaks
Too many processes / connections
External resource delays
Inefficient SQL Queries
Slow database calls
Insufficient server resources
Overloaded Shared Servers
Inconsistent website response times
Hope that helps!
Recently the web requests on my Web API 2 (with Entity Framework 6.1) server have slowed drastically, adding ~5000 ms to all requests that query the database. I've spent the last three days ripping my hair out trying to figure it out.
Setup:
Web Api 2.2
Entity Framework 6.1.1
Autofac for IoC; the DbContext is InstancePerLifetimeScope(), along with everything else (see the sketch after this list).
One custom HttpParameterBinding for getting entity IDs from an access token. This does query the DB.
Only one DelegatingHandler, for logging requests
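A minimal sketch of that IoC setup (AppDbContext is an illustrative name; under the Autofac Web API integration, InstancePerLifetimeScope yields one instance per HTTP request):

using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

public static class IocConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
        // One DbContext per request lifetime scope.
        builder.RegisterType<AppDbContext>().InstancePerLifetimeScope();
        var container = builder.Build();
        config.DependencyResolver = new AutofacWebApiDependencyResolver(container);
    }
}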
What I've done:
Pre-generated views: slight improvement
Reduced properties in entities we query, no improvement
Turned off AutoTrackChanges, no improvement
Tried AsNoTracking() on a number of requests, no improvement
Profiling with ANTS Performance Profiler, nothing useful
Profiling database with SQL Management Studio, the queries are all fast
Why do I say there's a delay between the handler and the controller? I timed it with DateTime.Now at the beginning and end of the controller action: 1745 ms. The logging handler takes a timestamp before and after the await base.SendAsync(request, cancellationToken): 6234 ms. I timed the binding as well: only 2 ms.
That's 4489 ms of unaccounted-for time. Other requests show similar timings. It happens after the logging handler gets the request and reports it, but before the binding starts. What happens in there? Where is it coming from? We don't have any async void methods that spin off, and we don't have any per-request actions that should take that long. Totally stumped.
Edit: Repeating the same request does not improve the performance. I don't believe first-hit cost is the issue; it's consistently poor.
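For reference, the logging handler's timing code looks roughly like this (a sketch; names are illustrative):

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class LoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var watch = Stopwatch.StartNew();
        var response = await base.SendAsync(request, cancellationToken);
        watch.Stop();
        // Reports ~6234 ms while the controller action itself takes ~1745 ms.
        Trace.WriteLine(string.Format("{0} {1}: {2} ms",
            request.Method, request.RequestUri, watch.ElapsedMilliseconds));
        return response;
    }
}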
I appreciate the help; I did end up finding the answer.
We had services that were being injected into the controllers, and their constructors were making potentially async calls that preloaded some data. Changing them to use AsyncLazy was the solution.
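A minimal sketch of the pattern (this AsyncLazy is a common hand-rolled variant; Stephen Cleary's Nito.AsyncEx ships a more complete one, and the service shown is illustrative, not our actual code):

using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Lazy<Task<T>>-based AsyncLazy: awaiting it runs the factory exactly once.
public sealed class AsyncLazy<T> : Lazy<Task<T>>
{
    public AsyncLazy(Func<Task<T>> factory)
        : base(() => Task.Run(factory)) { }

    public TaskAwaiter<T> GetAwaiter()
    {
        return Value.GetAwaiter();
    }
}

// Before the fix the constructor blocked on the preload; now construction
// is cheap and the first awaiter pays for the load instead.
public class LookupService
{
    private readonly AsyncLazy<string[]> _codes;

    public LookupService()
    {
        _codes = new AsyncLazy<string[]>(LoadCodesAsync);
    }

    public async Task<string[]> GetCodesAsync()
    {
        return await _codes;
    }

    private static async Task<string[]> LoadCodesAsync()
    {
        await Task.Delay(100); // stand-in for the slow preload (e.g. a DB call)
        return new[] { "A", "B" };
    }
}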
Some potentially helpful steps for those in similar situations are enumerated below.
Ever played the board game Guess Who? Debugging is strikingly similar. You want to ask questions that knock out half of the possibilities at once. Don't start with "is it this specific method that I felt dirty about"; instead start with:
What works and what doesn't work? Find the differences. That's your problem set.
Narrow down the problem set with generic questions. Find the shared similarities and get rid of them. Is it async calls? (Thanks, commenter.) Is it deadlock-like stuff? (Thanks again.) Is it a first-hit or initial-loading cost?
Once you've removed the shared similarities, it's time to start commenting out code. I narrowed it down to a constructor that was being injected with three objects and not doing any work itself. When it wasn't the first two objects, I knew where my problem was!
I have a piece of code like this:
foreach (var e in foobar)
{
    var myObj = new MyObj();
    GenericResult res = soapClient.doSomething(e);
    if (res.success)
    {
        myObj.a = e.a;
        myObj.b = e.b;
    }
}
Every SOAP request takes about 500 milliseconds, and foobar sometimes contains 1000+ elements, so I'm wasting a lot of time waiting for the soapClient response. I've tried using Parallel.ForEach, but it doesn't work because the SOAP provider accepts only serialized requests. The provider suggests using async calls like soapClient.doSomethingAsync; the problem is that I have nothing else to do until I get the soapClient response.
The only solution I can think of is using a Parallel.ForEach with a lock around the SOAP call.
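For what it's worth, that idea would look roughly like the sketch below (it reuses the types from the snippet above). Note that the lock still forces the SOAP calls to run one at a time, so it only overlaps the small amount of non-SOAP work per element:

// Requires System.Collections.Concurrent and System.Threading.Tasks.
var results = new ConcurrentBag<MyObj>();
var gate = new object();

Parallel.ForEach(foobar, e =>
{
    GenericResult res;
    lock (gate)
    {
        res = soapClient.doSomething(e); // still serialized by the lock
    }
    if (res.success)
    {
        results.Add(new MyObj { a = e.a, b = e.b });
    }
});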
Just a few things you could try out.
What type of authentication is applied to the service call? If the service authenticates against an AD, you should ensure that only the first call is authenticated and the rest of the calls just piggyback on it. An AD authentication can take a substantial amount of time (0.3 - 1.0 s).
Try installing Fiddler and using it as a WCF proxy. Fiddler will give you the means to break down the time being spent in the various parts of the service call's execution.
Have you tried to ping the server you target? Are the ping timings acceptable?
How much time is spent on the first invocation compared to the following calls? The first call is always going to take a significant amount of time, as the CLR runtime has to generate a boatload of dynamic XML serialization code. Just generating the XmlSerializer JIT code is costly, as it dynamically generates C# code, kicks off the CSC compiler, and loads the generated DLL. You might have a look at the SGEN tool, which makes it possible to generate the XmlSerializer DLLs at compile time instead of at runtime (note this will only help the first execution's timings).
I can't see how much time is actually being spent on the server side, inside the execution of doSomething(), so it's difficult to tell how much time is actually being spent on the network. Faulty network hardware, cables, and switches, as well as firewalls, routing tables, SLAs, etc., might have a negative impact on the performance you can get out of this.
Of course, as already mentioned, having as chatty an interface as the one you have to use is far from optimal, and the service owner might run into all sorts of server-side performance problems if this is their normal way of exposing interfaces - but that is another story :)
I am performing WCF service testing here using a small C# .NET test client.
How do I record the time taken for the following items:
Transmission time for the response.
Time taken for deserialization of the data.
How do I get these times using a WCF service proxy in .NET?
Any other way (I can't use SoapUI or Fiddler due to admin policy issues here) would be highly appreciated.
I would recommend you take a look at
http://miniprofiler.com/
I have used it on a production website and it does wonders for me.
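For example, against the classic MiniProfiler API (the step names here are mine, and the API surface may differ in newer versions; check the site above):

using StackExchange.Profiling;

var profiler = MiniProfiler.Start();

using (profiler.Step("Call service"))
{
    // proxy call here
}

using (profiler.Step("Deserialize"))
{
    // deserialization here
}

MiniProfiler.Stop();
// profiler.RenderPlainText() dumps the recorded timings.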
The ultimate fallback is Stopwatch; this will at least work for the deserialization.
var stopWatch = new Stopwatch();
stopWatch.Start();
// your call goes here
stopWatch.Stop();
// ElapsedTicks is the raw high-resolution count; ElapsedMilliseconds is often easier to read.
var time = stopWatch.ElapsedTicks;
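If you need to split the wire time from the deserialization time, one option (a sketch, not production-hardened) is a WCF client message inspector: it times the span between sending the request and receiving the raw reply, and subtracting that from a Stopwatch around the whole proxy call approximates the deserialization cost.

using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class TimingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // The returned object is handed back to AfterReceiveReply as correlationState.
        return Stopwatch.StartNew();
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var watch = (Stopwatch)correlationState;
        watch.Stop();
        Console.WriteLine("Wire + server time: {0} ms", watch.ElapsedMilliseconds);
    }
}

public class TimingBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new TimingInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Attach it to a generated proxy:
// client.Endpoint.Behaviors.Add(new TimingBehavior());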
You can try the new benchmarking framework Benchmarque from Chris Patterson:
What is Benchmarque?
Benchmarque (pronounced bench-mar-key) allows you to create comparative benchmarks using .NET. An example of a comparative benchmark would be evaluating two or more approaches to performing an operation, such as whether for(), foreach(), or LINQ is faster at enumerating an array of items. While this example often falls into the over-optimization category, there are many related algorithms that may warrant comparison when cycles matter.
I have a WCF REST web service (.NET 4) designed on a multitier architecture (i.e. presentation, logic, data access, etc.). At runtime, and without instrumentation, I would like to measure how much time a single request spends in each layer (e.g. Presentation = 2 ms, Logic = 50 ms, Data = 250 ms).
Considering that I cannot change the method signatures to pass in a Stopwatch (or anything similar), how would you accomplish such a thing?
Thanks!
If you cannot add code to your solution, you will end up using profilers to examine what the code is doing, but I would only suggest that in an environment other than production unless the performance issues appear only there. There are plenty of ways to set up another environment and profile under load.
Profilers hook into your code and examine how long each method takes. There is no "this is the business layer's performance" magic, as the profiler sees physical boundaries (class, method) rather than logical ones, but you can examine the output and work out the speed of each layer.
If you can touch the code, there are other options you can add that can be turned on and off via configuration. Since you have stated you cannot change the code, I imagine this is a non-option.
There's a Tracer class in the MS Enterprise Library. Syntactically, you wrap your code in a using statement, and the duration of everything that occurs inside the using block is logged into the standard Ent Lib logging ecosystem.
using (new Tracer())
{
    // Your code here.
}
More basic info is on MSDN here, and see here for its constructors; there are different constructors that allow you to pass in different identifiers to help you keep track of what's being recorded.
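For instance (a hedged sketch; check the constructor list for your EntLib version), passing an operation name tags the recorded timings so you can tell the layers apart in the log:

using (new Tracer("DataAccess"))
{
    // Everything in here is timed and logged under the "DataAccess" operation.
}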
I've recently been working on performance enhancements and looking at .NET performance tools. dotTrace seems to be the best candidate so far.
In particular, its Performance Profiler can produce tracing profiles. These profiles detail how long each method takes:
Tracing profiling is a very accurate way of profiling that involves getting notifications from the CLR whenever a function is entered or left. The time between these two notifications is taken as the execution time of the function. Tracing profiling helps you learn precise timing information and the number of calls on the function level.
This screenshot illustrates the useful stats that the tool can produce:
You should be able to see exactly how much time a single request takes in each layer by inspecting the profiling data.