I am using StackExchange.MiniProfiler with the MVC and EntityFramework add-ons to try to track down a long TTFB that reliably occurs for one type of request on our web site. As you can see in the image at bottom, the duration indicated for this request is 504.3ms. I believe this corresponds to the time between the call to MiniProfiler.Start in BeginRequest and the call to MiniProfiler.End in EndRequest (minus the time of the child steps). Using browser tools I can see that the TTFB for this request matches the data from MiniProfiler, so I believe MiniProfiler is accurate. I have been adding profiler steps around more and more code and think everything is wrapped now, yet the steps don't add up to anything near 504ms.
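For reference, the profiling is wired up roughly like this (a sketch of the setup; the step name is a placeholder):

using StackExchange.Profiling;

protected void Application_BeginRequest()
{
    // start profiling every request (filtering omitted for brevity)
    MiniProfiler.Start();
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop();
}

// ...and in the code being measured:
using (MiniProfiler.Current.Step("LoadData"))  // placeholder step name
{
    // the code under suspicion
}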
This request is an ajax request that happens on a page with a few other requests going on at the same time. If I take the URL out and hit it from the same browser in isolation, the duration and TTFB are only ~100ms. This would seem to imply that something from one of the other requests is blocking this one, but I don't think we have anything that should block at all, and certainly not for that long; none of the other requests takes that long.
The site is running as a mid-level Azure App Service; could this be some sort of limitation there? How could I confirm or rule that out? Any MiniProfiler tricks that might expose more data here?
The issue was related to this SO question here:
I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it
Session state gets locked per request, so if you have a bunch of simultaneous requests from one browser/session, they can end up all waiting on each other. Marking some of our relevant controllers as read-only with an attribute made the bad behavior go away: [SessionState(SessionStateBehavior.ReadOnly)]
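For anyone hitting the same thing, a minimal sketch of the fix (the controller name is made up; the attribute is the important part):

using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state means this request never takes the session
// write lock, so parallel ajax requests from the same session no longer
// queue up behind each other.
[SessionState(SessionStateBehavior.ReadOnly)]
public class WidgetDataController : Controller  // hypothetical controller
{
    public ActionResult Index()
    {
        // Session can still be read here, but writes won't be persisted.
        return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
    }
}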
Background
When writing front-end tests, we often need to wait until the web application is done fetching data and updating the DOM before we want to interact with the page. With Selenium C#, this means a lot of explicit waits on the state of the page, tailored to the specific scenario (maybe waiting for a loading indicator or a specific element to appear). However, most of the time this visual indicator is just a proxy for an async task like an HTTP request. Other solutions such as Protractor and Cypress have easy solutions for waiting for HTTP requests (this is the default in Protractor).
Question
One of the frameworks I maintain is written in C#, and I'm trying to find a solution to easily wait for any outstanding HTTP requests, rather than writing custom explicit waits against the DOM. Is there a solution for this? I'm open to using an additional open-source solution if needed.
I assumed I might need to set up a proxy so that I can manipulate and hook into HTTP requests. I looked into BrowserUp (a continuation of the BrowserMobProxy project, which seems to no longer be maintained), but I can't tell from the docs whether this sort of use case is possible or intended.
I remember years ago trying to solve this in Ruby. We settled on a hybrid JavaScript and Ruby solution. Each time an Ajax request got sent, we set a global JavaScript variable to true. When all pending requests had finished, we set it to false. We still had flaky tests. They were brittle and inconsistent even with some JavaScript gymnastics going on behind the scenes.
Even though Ajax (or background HTTP requests) might be complete, JavaScript still needs additional processing time to do something with the response. It was mere milliseconds, but remember that Selenium and your browser run in different threads — everything is a race condition. We kept getting intermittent test failures because the HTTP requests were done, but the browser was still in the process of evaluating element.innerHTML = response.responseText when Selenium would attempt to interact with an element that was supposed to be on screen after the request was complete. We still had to use explicit waits.
Basically, you are stuck with explicit waits in order to achieve stable tests. I've jumped through a lot of hoops over the years to get things working any other way. The only saving grace I've found is the Page Object Model Pattern, which at least centralizes this ugly code in one place for any particular use case.
So, yeah. The code is ugly. You need to use explicit waits. It turns out test code needs to be just as purposefully architected as the application code it tests.
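To make that concrete, here is a minimal sketch of an explicit wait tucked inside a page object (the selector, timeout, and class names are placeholders; it assumes WebDriverWait from the Selenium.Support package):

using System;
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class SearchResultsPage  // hypothetical page object
{
    private readonly IWebDriver _driver;

    public SearchResultsPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public IWebElement FirstResult()
    {
        var wait = new WebDriverWait(_driver, TimeSpan.FromSeconds(10));
        // Wait until the element both exists and is displayed; by then the
        // response has been processed and the DOM has actually been updated.
        return wait.Until(d =>
        {
            var el = d.FindElements(By.CssSelector(".result")).FirstOrDefault();
            return (el != null && el.Displayed) ? el : null;
        });
    }
}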
We have noticed a huge performance impact on our asp.net mvc application since we moved to azure web apps. One thing we've noticed is that request times slow down significantly when an action is loaded for the first time in a couple of hours. Our app is not used a lot, so there's a lot of idle time. I'll give a few examples (please note these are just back-end request response times, and exclude dom, js, image, etc. load times):
user dashboard - first request will take around 20 seconds. Subsequent loads are around 1 second
loading some object - first request will be around 7 seconds. Refreshing will get you around 2 seconds. Loading other objects using the same action is also faster after that action has been hit the first time
I realize that some of the speed up might be due to query caching, but I am wondering if that's all. I am on a standard plan and do have "Always On" enabled, so I know that's not the issue. And this seems to be happening per action. So even if the user visited action1 already and now visits action2 for the first time, they'll still experience slowness.
What can I look at here to fix this? Are there any Azure-specific settings?
Please run your query as shown below:
-- your query goes here
go
select * from sys.dm_exec_session_wait_stats
where session_id = @@spid
order by wait_time_ms desc
The last SELECT statement will show us why it takes so much time to run the first time.
It could be poor IO performance on large queries running on lower tiers.
Hope this helps.
One thing we've noticed is that request times slow down significantly when an action is loaded the first time in a couple hours.
As you did, enabling the “Always On” setting on an Azure web app can improve application responsiveness, especially if the application is not accessed very frequently by users.
these are just back-end request response times, and exclude dom, js, image, etc. load times
Please check the code logic of those actions and make sure the code is efficient. Besides, you can specify custom initialization/warm-up actions for the pages that always respond slowly the first time a client browses them, for example with applicationInitialization in web.config as shown below. If possible, cache frequently used data instead of retrieving it from the database (or another source) every time a client browses the page (see the sketch after the config).
<system.webServer>
<applicationInitialization doAppInitAfterRestart="true">
<add initializationPage="/Home/Contact" hostName="appname.azurewebsites.net" />
</applicationInitialization>
</system.webServer>
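For the caching part, one low-effort option in MVC is to output-cache the slow action. A sketch, where the controller, duration, and data-loading method are all placeholders:

using System.Web.Mvc;

public class DashboardController : Controller  // hypothetical controller
{
    // Cache the rendered response for 5 minutes, so the expensive
    // queries behind it run at most once per cache window per server.
    [OutputCache(Duration = 300, VaryByParam = "none")]
    public ActionResult Index()
    {
        var model = LoadDashboard();  // placeholder for the slow data work
        return View(model);
    }

    private object LoadDashboard()
    {
        // ...expensive queries here...
        return new object();
    }
}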
Here is my problem:
I have just been brought onto a massive asp.net C# project and I've been charged with fixing some performance issues (not my area of expertise). More specifically, after 5-7 redirects/ajax calls the web server stops responding and the whole page (and eventually the browser) freezes.
I don't think this is a coding issue, as I've set up breakpoints in a few pages (Page_Load method) and after the 5 requests it does not even reach the breakpoints.
I don't believe this is related to this issue, as I've increased the browser's maximum connections per server parameter and got the same behavior. Furthermore, after these 5 requests in one browser (IE), the application stops working in FF as well.
This is not a resource issue as the w3wp.exe process never exceeds 500MB memory.
One thing I've noticed when using Fiddler and other tools to monitor the requests is that the server takes a very long time when loading image files (png, jpg). I don't know if this is relevant.
I've enabled failed request tracing on the server, and the only thing I've noticed is that some requests fail with a 401 error even though I've set Anonymous Authentication to enabled.
Here is the exact message
MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName ManagedPipelineHandler
Notification 128
HttpStatus 401
HttpReason Unauthorized
HttpSubStatus 0
ErrorCode 0
ConfigExceptionInfo
Notification EXECUTE_REQUEST_HANDLER
ErrorCode The operation completed successfully. (0x0)
This message is sometimes thrown with ModuleName: ScriptModule
I have already wasted 2 days on this thing and I'm running out of ideas so any suggestions would be appreciated.
Like any large generic problem, your best bet in diagnosing the issue is to figure out how to break it down into smaller parts, how to hypothesize the causes, and how to validate or invalidate your hypotheses. My first inclination would be to hypothesize that the server-side processes in this particular case are taking a long time, causing your client requests to block, making the whole thing seem frozen.
From there, I would attempt to replicate the long-running server-side processes by creating isolated client-side tests. Perhaps if the URLs are HTTP GETs, I would test the same URLs individually. If they were HTTP POSTs, I'd create an isolated test form, if feasible, to see what happens with each request. If a long-running server-side process is found, then you have a starting point.
If there are no long-running server-side processes, then it may be JavaScript / client-side coding issues that need to be looked into. But definitely, when you're working on a large, unfamiliar project, your best bet is to figure out how to break the issue down into smaller components that can then be tested.
I solved the issue finally. Here is what I did:
Experimented with IIS settings and App_Pool recycling and noticed that there is nothing wrong with the way it handles requests that actually reach it.
I focused on the Http.sys module and noticed that in the log files there were a lot of Timer_ConnectionIdle and Client_Reset errors.
After some more experimentation and a lot of Google searches, I accidentally found this answer and it solved my issue. As the answer suggests the problem was caused by the AVG antivirus installed and incorrectly configured on the server.
Thanks for all the help and suggestions.
If it's ajax calls that are causing your browser to freeze, make sure they are not synchronous (blocking) ajax calls.
Just appending to Shan's answer, which is a good one.
First off, there is obviously a code issue as this is by no means 'normal' behavior for IIS.
That said, you must isolate it as Shan indicated. For example, given that the server itself no longer accepts connections, we can pretty well eliminate javascript as the source of the problem and relegate it to being just a symptom.
Typically when a worker process spins into space like this it is due to either an infinite loop or an issue where multiple threads are trying to lock the same resource. I bet if you let it run long enough, IIS itself will time out, kill, and restart the process.
With that in mind you want to look for any type of multithreaded garbage (which I highly recommend you don't do in a web server) or for anything that indicates a tight infinite loop. A loop is going to become apparent if you execute the requests individually. A multi-threaded issue will only show up if you happen to get a collision.
Run various performance counters on the web server. Also, once it locks up, let it sit that way for a while. Once IIS performs its own reset on the worker process, go look for indicators in the event log.
How much traffic is heavy traffic? What are the best resources for learning about heavy-traffic web site development? What are the approaches?
There are a lot of principles that apply to any web site, regardless of the underlying stack:
use HTTP caching facilities. For one, there is the user agent cache. Second, the entire web backbone is full of proxies that can cache your requests, so use this to full advantage. A request that doesn't even land on your server adds 0 to your load; you can't optimize better than that :)
corollary to the point above: use CDNs (Content Delivery Networks, like CloudFront) for your static content. CSS, JPG, JS, static HTML and more can be served from a CDN, saving the web server from an HTTP request.
second corollary to the first point: add expiration caching hints to your dynamic content. Even a short cache lifetime like 10 seconds will save a lot of hits that will instead be served by all the proxies sitting between the client and the server (see the sketch after this list).
Minimize the number of HTTP requests. Seems basic, but it is probably the most overlooked optimization available. In fact, Yahoo's best practices put this as the topmost optimization; see Best Practices for Speeding Up Your Web Site. Here is their best practices list:
Minimize HTTP Requests
Use a Content Delivery Network
Add an Expires or a Cache-Control Header
Gzip Components
... (the list is quite long actually, just read the link above)
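As promised above, a sketch of the expiration-hint corollary for an ASP.NET page (the 10-second lifetime matches the example in the list; adjust to taste):

using System;
using System.Web;

// Inside a Page code-behind (or with Response from HttpContext in a handler):
protected void Page_Load(object sender, EventArgs e)
{
    // Let browsers and intermediate proxies cache this response briefly.
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(10));
    Response.Cache.SetMaxAge(TimeSpan.FromSeconds(10));
}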
Now, after you have eliminated as many of the superfluous hits as possible, you are still left with optimizing whatever requests actually hit your server. Once your ASP code starts to run, everything will pale in comparison with the database requests:
reduce the number of DB calls per page. The best optimization possible is, obviously, not to make the request to the DB at all to start with. Some say 4 reads and 1 write per page are the most a high-load server should handle, others say one DB call per page, still others say 10 calls per page is OK. The point is that fewer is always better than more, and writes are significantly more costly than reads. Review your UI design; perhaps that hit count in the corner of the page that nobody sees doesn't need to be that accurate...
Make sure every single DB request you send to the SQL server is optimized. Look at each and every query plan, make sure you have proper covering indexes in place, make sure you don't do any table scans, review your clustered index design strategy, review all your IO load, storage design, etc. Really, there is no shortcut you can take here; you have to analyze and optimize the heck out of your database. It will be your choking point.
eliminate contention. Don't have readers wait for writers. For your stack, SNAPSHOT ISOLATION is a must.
cache results. And usually this is where the cookie crumbles. Designing a good cache is actually quite hard to pull off. I would recommend you watch the Facebook SOCC keynote: Building Facebook: Performance at Massive Scale. Somewhere around slide 47 they show what a typical internal Facebook API looks like:
cache_get (
$ids,
'cache_function',
$cache_params,
'db_function',
$db_params);
Everything is requested from a cache, and if not found, requested from their MySQL back end. You probably won't start with 60000 servers though :)
On the SQL Server stack the best caching strategy is one based on Query Notifications. You can almost mix it with LINQ...
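A rough sketch of the Query Notifications approach (the connection string, query, and cache key are all placeholders; note the query must follow the notification rules, e.g. an explicit column list and two-part table names):

using System.Data.SqlClient;
using System.Runtime.Caching;

string connectionString = "...";  // placeholder

// Once per app domain, before any dependencies are created:
SqlDependency.Start(connectionString);

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Id, Name FROM dbo.Products", connection))  // hypothetical query
{
    // Create the dependency BEFORE executing the command.
    var dependency = new SqlDependency(command);
    dependency.OnChange += (sender, e) =>
    {
        // Data changed on the server: evict, so the next request re-reads.
        MemoryCache.Default.Remove("products");
    };

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // ...materialize the rows and cache them under "products"...
    }
}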
I will define heavy traffic as traffic which triggers resource intensive work. Meaning, if one web request triggers multiple sql calls, or they all calculate pi with a lot of decimals, then it is heavy.
If you are returning static html, then bandwidth is more of an issue than server capacity, given what a good server today can handle (more or less).
The principles are the same whether you use MVC or not when it comes to optimizing for speed.
Having a decoupled architecture makes it easier to scale by adding more servers etc.
Use a repository pattern for data retrieval (makes adding a cache easier; see the sketch after this list)
Cache data which is expensive to query
Data to be written could be written through a cache, so that the client doesn't have to wait for the actual database commit
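A minimal sketch of the repository-plus-cache idea (the interface, entity, and 5-minute policy are all illustrative placeholders):

using System;
using System.Runtime.Caching;

public class Product  // placeholder entity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product Get(int id);
}

// Decorates the real repository; callers never know a cache exists.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;
    private readonly MemoryCache _cache = MemoryCache.Default;

    public CachedProductRepository(IProductRepository inner)
    {
        _inner = inner;
    }

    public Product Get(int id)
    {
        string key = "product:" + id;
        var cached = (Product)_cache.Get(key);
        if (cached != null) return cached;

        var product = _inner.Get(id);  // the expensive DB call
        _cache.Set(key, product, DateTimeOffset.UtcNow.AddMinutes(5));
        return product;
    }
}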
There are probably more ground rules as well. Maybe you can say something about the architecture of your application, and how much load you need to plan for?
MSDN has some resources on this. This particular article is out of date, but is a start.
I would suggest also not limiting yourself to reading about the MVC stack: many principles are cross-platform.
I have an ASP.net page that creates some Excel sheets and sends them to the user. The problem is, sometimes I get HTTP timeouts, presumably because the request runs longer than executionTimeout (110 seconds by default).
I just wonder what my options are to prevent this, without generally increasing the executionTimeout in web.config?
In PHP, set_time_limit exists, which can be used inside a function to extend its time limit, but I did not see anything like that in C#/ASP.net.
How do you handle long-running functions in ASP.net?
If you want to increase the execution timeout for this one request you can set
HttpContext.Current.Server.ScriptTimeout
But you still may have the problem of the client timing out which you can't reliably solve directly from the server. To get around that you could implement a "processing" page (like Rob suggests) that posts back until the response is ready. Or you might want to look into AJAX to do something similar.
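A minimal sketch of the per-request approach (the value is in seconds; 300 is just an example, and note the timeout is not enforced when compilation debug="true"):

using System;
using System.Web;

// In the page or handler that builds the Excel sheets:
protected void Page_Load(object sender, EventArgs e)
{
    // Raise the execution timeout for this request only, leaving the
    // site-wide executionTimeout in web.config untouched.
    HttpContext.Current.Server.ScriptTimeout = 300;  // seconds

    // ...generate and stream the workbook...
}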
I've not really had to face this issue too much yet myself, so please keep that in mind.
Is there not any way you can run the process async and specify a callback method to fire once complete, and then keep the page in a "we are processing your request..." loop cycle? You could then open this up to add some nice UI enhancements as well.
Just kinda thinking out loud. That would probably be the sort of thing I would like to do :)
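Something like this sketch, perhaps (Task.Run, the job registry, and all the names are made up; also note that background work in IIS can be lost if the app pool recycles mid-job):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ExportJobs  // hypothetical job registry
{
    private static readonly ConcurrentDictionary<Guid, byte[]> Results =
        new ConcurrentDictionary<Guid, byte[]>();

    // Called by the request that kicks off the export; returns immediately.
    public static Guid Start()
    {
        var id = Guid.NewGuid();
        Task.Run(() =>
        {
            Results[id] = BuildExcelFile();  // the long-running work
        });
        return id;
    }

    // The "we are processing your request..." page polls this until it is
    // non-null, then redirects to a download of the finished file.
    public static byte[] TryGetResult(Guid id)
    {
        byte[] file;
        return Results.TryGetValue(id, out file) ? file : null;
    }

    private static byte[] BuildExcelFile()
    {
        return new byte[0];  // placeholder for the actual generation
    }
}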