Recently, web requests on my Web API 2 with Entity Framework 6.1 server have slowed drastically, adding ~5000ms to every request that queries the database. I've spent the last three days ripping my hair out trying to figure it out.
Setup:
Web Api 2.2
Entity Framework 6.1.1
Autofac for IoC, DbContext is InstancePerLifetimeScope() along with everything else.
One custom HttpParameterBinding for getting entity IDs from an access token. This does query the database.
Only one DelegatingHandler, for logging requests
What I've done:
Pre-generated views, slight improvement
Reduced properties in the entities we query, no improvement
Turned off automatic change detection (AutoDetectChangesEnabled), no improvement
Tried AsNoTracking() on a number of requests, no improvement
Profiled with ANTS Performance Profiler, nothing useful
Profiled the database with SQL Management Studio, the queries are all fast
Why do I say there's a delay between the handler and the controller? I timed the controller action with DateTime.Now at its beginning and end: 1745ms. The logging handler takes a time before and after the await base.SendAsync(request, cancellationToken): 6234ms. I timed the binding as well: only 2ms.
That's 4489ms of time that's unaccounted for. Other requests have similar timings. It happens after the logging handler gets the request and reports it, but before the binding starts. What happens in there? Where is it coming from? We don't have any async void methods that spin off, and we don't have any per-request actions that should take that long. Totally stumped.
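For illustration, this is roughly what the logging handler's timing looks like (a sketch of the setup described, not the actual handler code):

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class TimingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var watch = Stopwatch.StartNew();
        // Everything downstream runs in here: routing, parameter binding,
        // filters, and the controller action itself.
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        watch.Stop();
        // watch.ElapsedMilliseconds is where the ~6234ms shows up,
        // even though the action only accounts for ~1745ms of it.
        return response;
    }
}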
Edit: Repeating the same request does not improve the performance. I don't believe first-hit performance is the issue; it's consistently poor.
I appreciate the help; I did end up finding the answer.
We had services injected into the controllers whose constructors were blocking on potentially async calls that preloaded some data. Changing them to use AsyncLazy was the solution.
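For anyone unfamiliar with the pattern, here's a minimal sketch (the service and method names are illustrative, not from our code base) of moving a blocking constructor preload behind an AsyncLazy:

using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public sealed class AsyncLazy<T> : Lazy<Task<T>>
{
    public AsyncLazy(Func<Task<T>> taskFactory)
        : base(() => Task.Run(taskFactory)) { }

    // Makes the lazy itself awaitable: await myAsyncLazy;
    public TaskAwaiter<T> GetAwaiter() { return Value.GetAwaiter(); }
}

public class PreloadingService
{
    private readonly AsyncLazy<string[]> _preloaded;

    public PreloadingService()
    {
        // Nothing blocks here any more; the load is deferred.
        _preloaded = new AsyncLazy<string[]>(() => LoadStuffAsync());
    }

    public async Task<int> CountAsync()
    {
        string[] stuff = await _preloaded; // first awaiter triggers the load
        return stuff.Length;
    }

    private static Task<string[]> LoadStuffAsync()
    {
        // Stand-in for the real async preload (e.g. a DB query).
        return Task.FromResult(new[] { "a", "b" });
    }
}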
Some potentially helpful steps for those in similar situations, enumerated below.
Ever played the board game Guess Who? It's strikingly similar to debugging. You want to ask questions that knock out half of the remaining possibilities. Don't start with "is it this specific method that I felt dirty about?"; instead start with:
What works and what doesn't work? Find the differences. That's your problem set.
Narrow down the problem set with generic questions. Find the shared similarities and rule them out. Is it async calls? (Thanks, commenter.) Is it deadlock-like behavior? (Thanks again.) Is it first-hit or initial loading?
Once you've removed the shared similarities, it's time to start commenting out code. I narrowed it down to a constructor that was injected with three objects and wasn't doing any work itself. When it wasn't the first two objects, I knew where my problem was!
I am using StackExchange.MiniProfiler with the MVC and EntityFramework add-ons to try to track down a long TTFB that reliably occurs for one type of request on our web site. The duration MiniProfiler reports for this request is 504.3ms. I believe this corresponds to the time between the call to MiniProfiler.Start in BeginRequest and the call to MiniProfiler.End in EndRequest (minus the time of the child steps). Using browser tools, I can see that the TTFB for this request matches the data from MiniProfiler, so I believe MiniProfiler is accurate. I have been adding profiler steps around more and more code and think everything is wrapped now, yet the steps don't add up to anything near 504ms.
This request is an Ajax request that happens on a page with a few other requests going on at the same time. If I take the URL and hit it from the same browser in isolation, the duration and TTFB are only ~100ms. This would seem to imply that something from one of the other requests is blocking this one, but I don't think we have anything that should block at all, and certainly not for that long; none of the other requests take that long.
The site is running as a mid-level Azure App Service. Could this be some sort of limitation there? How could I confirm or rule that out? Are there any MiniProfiler tricks that might expose more data here?
The issue was related to this SO question here:
I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it
Session state gets locked, so if you have a bunch of simultaneous requests from one browser/session, they can all end up waiting on each other. Marking the relevant controllers as read-only with an attribute made the problem go away: [SessionState(SessionStateBehavior.ReadOnly)]
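For reference, a minimal sketch of what that looks like on an MVC controller (the controller name is illustrative):

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class ReportsController : Controller
{
    // Actions here only read session data, so requests from the same
    // session no longer queue behind the exclusive session lock.
    public ActionResult Chart()
    {
        return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
    }
}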
I have my MVC application, with an API used in it, running on IIS 6.0 (7.0 on the production servers). For the API, I use an IHttpHandler implementation in an API.ashx file.
Many different API calls are made to my API.ashx file, but I'll describe one that makes no DB calls at all, so it's definitely NOT a database issue.
At the very beginning of the ProcessRequest method I start a Diagnostics.Stopwatch to track performance, and I stop it on the method's last line.
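Roughly like this (a sketch of the setup described, not the actual handler code):

using System.Diagnostics;
using System.Web;

public class Api : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        Stopwatch watch = Stopwatch.StartNew();

        // ... handle the API call (no DB work in this particular call) ...

        watch.Stop();
        // watch.ElapsedMilliseconds is logged here: a stable ~5ms.
    }

    public bool IsReusable { get { return true; } }
}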
The stopwatch output is always stable (±2ms) and shows 5ms (!!!) on average.
But on my site I see a wildly unstable Time To First Byte: it can start at 15ms and grow up to 1 SECOND, averaging around 300ms, while the logs still show my stable 5ms from the stopwatch.
This happens on every server I use, even locally (so it's not a network-related problem) and in production. BTW, all static resources load really fast (<10ms).
Can anyone suggest the solution to this?
This sounds like a difficult one to diagnose without a little more detail. Could you edit your question and add a waterfall chart showing the slow API call in question? A really good tool to produce waterfall charts is http://webpagetest.org
I also recommend reading this article about diagnosing slow TTFBs.
http://www.websiteoptimization.com/speed/tweak/time-to-first-byte/
It goes into great detail about some of the reasons behind a slow response.
Here are some server-side performance issues that may be slowing down your responses:
Memory leaks
Too many processes / connections
External resource delays
Inefficient SQL Queries
Slow database calls
Insufficient server resources
Overloaded Shared Servers
Inconsistent website response times
Hope that helps!
We are using the PetaPoco repository pattern (similar to this blog post). As the page loads, we open the repository, run the query, dispose of it, and then carry on processing. This is fine on light pages, but when this happens a few times in a page we see quite significant performance degradation.
I had, perhaps wrongly, assumed that connection pooling, which is enabled, would cope with this.
I ran a couple of tests.
The page it's on (an .aspx page) currently takes around 1.2 seconds to load. The page runs around 30 database queries... and, looking at the profiler, performs a login and logout per query (even with connection pooling).
If I persist the connection and don't close it until the page ends, this drops to around 70ms, which is quite a significant saving.
Perhaps we need to keep the Database object around for the duration of the request, but I didn't think PetaPoco had this much overhead, particularly with connection pooling.
I have created a test app to demonstrate it.
It demonstrates that loading a user 1000 times takes 230ms if the repository is reused, but 3.5 seconds if the repository is recreated every time.
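The test boils down to something like this (the repository and method names are illustrative, not the actual test code):

var watch = Stopwatch.StartNew();
using (var repo = new UserRepository())        // one repository, reused
{
    for (int i = 0; i < 1000; i++)
        repo.GetUser(1);                       // ~230ms in total
}
watch.Stop();

watch.Restart();
for (int i = 0; i < 1000; i++)
{
    using (var repo = new UserRepository())    // new repository (and Database) every time
        repo.GetUser(1);                       // ~3.5 seconds in total
}
watch.Stop();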
Your usage of connection pooling is breaking best practices.
Best practices nowhere say to get rid of the connection after every statement. I normally keep a connection/repository around while doing processing and only close it when my function is finished (with MVC).
Yes, even the connection pool has overhead, and you seem to be doing your best to make that overhead show.
What I always do is create a single instance of my repository per request. Because I develop almost exclusively using the MVC pattern, that means I create a private class-level variable in each of my controllers and use it to service any requests within my action methods; a sketch follows below. Translated to WebForms (ASPX), that means I would create one in whatever event fires just before PageLoad and pass it around as needed. I don't think keeping a class-level instance is a good idea for WebForms, though, but I can't remember enough to be sure.
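A sketch of that controller pattern (the names are mine, not from the question):

using System.Web.Mvc;

public class UsersController : Controller
{
    // MVC creates a new controller instance per request, so a class-level
    // field gives the repository exactly request scope.
    private readonly UserRepository _repo = new UserRepository();

    public ActionResult Index()
    {
        return View(_repo.GetAll());
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) _repo.Dispose();   // released once, at the end of the request
        base.Dispose(disposing);
    }
}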
The rule of thumb is to use one instance of your repo (or any other type of class, really) for the entirety of your request, which is usually a page load or an Ajax call, and for the reasons you've pointed out.
Remember: information on the internet is free, and you get what you pay for.
I have a simple Web API that returns the list of contacts:
public class ContactsApi : ApiController
{
    public List<Contact> GetContacts()
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        List<Contact> contacts = new List<Contact>();
        // Doing some business to get contacts;
        watch.Stop();
        // The operation only takes less than 500 milliseconds
        return contacts; // returning list of contacts
    }
}
As I've used a Stopwatch to test data-retrieval performance, it's apparent that it takes less than a second. However, when I issue a request to the GetContacts action via the Chrome browser, it takes 4 to 5 seconds to return the data.
Apparently that delay has nothing to do with my data-retrieval code. It seems to me that Web API itself is running slow, but I have no idea how to debug and trace it.
Is there any utility to log timings for the ASP.NET HTTP request-processing pipeline? I mean, something like Navigation Timing, showing at what time each pipeline event occurred?
How big is your response? Maybe it's the cost of serialization and transfer. In any case, there are plenty of ways to profile it; I would start with one of the profilers on the market, like ANTS Performance Profiler or dotTrace.
Are you running it with the debugger attached? Do some tests without the debugger. I had similar problems with a Web API project I am currently developing, and for us, turning off the debugger made the tests take milliseconds instead of seconds.
There also seems to be some startup cost when calling an API for the first time; subsequent requests are always faster.
Try using Fiddler (http://fiddler2.com/), a free web debugging tool. It has most of the features that you are looking for.
4.5 seconds is pretty huge. If you use EF, you could try MiniProfiler.EF.
In the past I experienced slowdowns from incorrectly using Entity Framework's IQueryable (converting it to lists, expanding it, ...).
If you are using EF, keep it as an IQueryable as long as possible (.ToList() executes the query).
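A quick illustration of the difference (db stands for a hypothetical EF DbContext with a Contacts set; the entity properties are assumed):

IQueryable<Contact> query = db.Contacts;     // nothing has executed yet
query = query.Where(c => c.IsActive);        // still just composing SQL
List<Contact> page = query
    .OrderBy(c => c.Name)
    .Take(25)
    .ToList();                               // ONE query runs here, filtered in SQL

// Calling .ToList() first would instead pull the whole table into memory
// and do the filtering there.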
Depending on your needs, use debugging tools like MiniProfiler and MiniProfiler.EF; the tools others suggested are probably good too (although I haven't used them).
The cost of serialization can also matter (if you are using DTOs); AutoMapper (and probably other tools) seems slow on large lists. I'd suggest mapping them manually in an extension method if you really want performance on big lists.
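For example, a hand-written mapper as an extension method might look like this (the Contact/ContactDto shapes are assumed):

using System.Collections.Generic;
using System.Linq;

public static class ContactMappingExtensions
{
    public static ContactDto ToDto(this Contact c)
    {
        // Explicit member-by-member mapping: no per-item reflection cost.
        return new ContactDto { Id = c.Id, Name = c.Name };
    }

    public static List<ContactDto> ToDtoList(this IEnumerable<Contact> contacts)
    {
        return contacts.Select(c => c.ToDto()).ToList();
    }
}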
We ran into strange SQL / LINQ behaviour today:
We used to use a web application to perform some intensive database actions on our system. Recently we moved to a WinForms interface for various reasons.
We found that performance had seriously decreased: an action that used to take about 15 minutes now takes as long as a whole hour. The strange thing is that it's the exact same method being called. The method performs quite a bit of reading and writing using Linq2SQL, and profiling on the client machine showed that the problematic section is the SQL action itself, in LINQ's "Save" method.
The only difference between the cases is that in one the method is called from a web application's code-behind (MVC in this case), and in the other from a Windows Form.
The one idea I could come up with is that SQL performance has something to do with the identity of the user accessing the db, but I could not find any support for that assumption.
Any ideas?
Did you run both tests from the same machine? If not, hardware differences could be the issue... or the network; one machine could be in a higher-speed section of your network, like in the same VLAN as the SQL server. Try running the client code on the same server the web app was running on.
Also, if your app is updating progress in a synchronous manner, it could be spending a long time waiting for the display to update, as opposed to working with a stream à la Response.Write.
If you are actually outputting progress as you go, you should make sure that the progress updates are raised as events and that displaying them happens on another thread, so that the processing isn't waiting on the display; see the sketch below. Actually, you should probably put the processing on its own thread and just have an event handler take care of the updates, but that is a whole different discussion. The point is that your app could be waiting on the display of progress updates.
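A minimal sketch of that idea in WinForms (.NET 4.5+; the control and method names are illustrative):

private async void runButton_Click(object sender, EventArgs e)
{
    // Progress<T> captures the UI thread's context at construction,
    // so the callback updates the display without blocking the worker.
    var progress = new Progress<int>(pct => progressBar.Value = pct);
    await Task.Run(() => DoIntensiveDatabaseWork(progress));
}

private void DoIntensiveDatabaseWork(IProgress<int> progress)
{
    for (int i = 0; i <= 100; i++)
    {
        // ... a slice of the real database work ...
        progress.Report(i);   // fire-and-forget; the worker never waits on the UI
    }
}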
It's a very old issue, but I happened to run into the question just now. So, for whom it may concern nowadays: the solution (and, before it, the problem) was frustratingly silly. Linq2SQL was configured on the dev machines to constantly write a log to the console.
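In code terms, the dev-machine configuration amounted to something like this (the DataContext name is illustrative; Log is the standard Linq2SQL hook):

var db = new MyDataContext(connectionString);
db.Log = Console.Out;   // every generated SQL statement gets written to the console

// The fix: leave the log detached outside of active debugging sessions.
db.Log = null;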
This was causing a huge delay simply from outputting a large amount of text to the console. On the web server the log was not being written, and therefore there was no performance drawback. There was a colossal face-palm once we figured this one out. Thanks to the helpers; I hope this answer will help someone solve it faster next time.
Unattended logging. That was the problem.