I have an ASP.NET page that creates some Excel sheets and sends them to the user. The problem is that I sometimes get HTTP timeouts, presumably because the request runs longer than executionTimeout (110 seconds by default).
I just wonder what my options are to prevent this, without generally increasing the executionTimeout in web.config.
PHP has set_time_limit, which can be called inside a function to extend its time limit, but I haven't seen anything like that in C#/ASP.NET.
How do you handle long-running functions in ASP.NET?
If you want to increase the execution timeout for this one request, you can set HttpContext.Current.Server.ScriptTimeout.
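For example, at the top of the page that builds the Excel output; a minimal sketch, where the 300-second value and the GenerateAndSendExcel method are just placeholders:

```csharp
using System;
using System.Web;
using System.Web.UI;

public partial class ExportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Raise the timeout for this request only (value is in seconds).
        // Note: this setting is ignored while web.config has compilation debug="true".
        HttpContext.Current.Server.ScriptTimeout = 300;

        GenerateAndSendExcel();
    }

    // Placeholder for the long-running Excel generation from the question.
    private void GenerateAndSendExcel()
    {
        // ... build the workbook and write it to Response ...
    }
}
```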
But you may still have the problem of the client timing out, which you can't reliably solve from the server side. To get around that you could implement a "processing" page (like Rob suggests) that posts back until the response is ready, or you might want to look into AJAX to do something similar.
I've not really had to face this issue too much yet myself, so please keep that in mind.
Is there no way you can run the process asynchronously and specify a callback method to fire once it completes, and then keep the page in a "we are processing your request..." loop in the meantime? You could then open this up to add some nice UI enhancements as well.
Just kinda thinking out loud. That would probably be the sort of thing I would like to do :)
Background
When writing front-end tests, we often need to wait until the web application is done fetching data and updating the DOM before we interact with the page. With Selenium C#, this means a lot of explicit waits on the state of the page, tailored to the specific scenario (maybe waiting for a loading indicator or a specific element to appear). However, most of the time this visual indicator is just a proxy for an async task like an HTTP request. Other tools such as Protractor and Cypress have easy ways of waiting for HTTP requests (in Protractor this is the default).
Question
One of the frameworks I maintain is written in C#, and I'm trying to find a way to easily wait for any outstanding HTTP requests, rather than writing custom explicit waits against the DOM. Is there a solution for this? I'm open to using an additional open-source library if needed.
I assumed I might need to set up a proxy so that I can hook into and manipulate HTTP requests. I looked into BrowserUp (a continuation of the BrowserMobProxy project, which seems to no longer be maintained), but I can't tell from the docs whether this sort of use case is possible or intended.
I remember trying to solve this in Ruby years ago. We settled on a hybrid JavaScript and Ruby solution: each time an Ajax request was sent, we set a global JavaScript variable to true, and when all pending requests had finished, we set it back to false. We still had flaky tests; they were brittle and inconsistent even with some JavaScript gymnastics going on behind the scenes.
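For concreteness, the Selenium C# side of that kind of tracking looks roughly like this; a minimal sketch, where window.pendingRequests is a counter the application itself would have to maintain, and jQuery.active is a fallback that only exists when jQuery is loaded:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class AjaxWaits
{
    // Polls a page-side counter until no HTTP requests are pending.
    public static void WaitForPendingRequests(IWebDriver driver, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.Until(d =>
        {
            var pending = ((IJavaScriptExecutor)d).ExecuteScript(
                "return window.pendingRequests || (window.jQuery ? jQuery.active : 0);");
            return Convert.ToInt64(pending) == 0;
        });
    }
}
```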
Even though Ajax (or background HTTP requests) might be complete, JavaScript still needs additional processing time to do something with the response. It was mere milliseconds, but remember that Selenium and your browser run in different threads — everything is a race condition. We kept getting intermittent test failures because the HTTP requests were done, but the browser was still in the process of evaluating element.innerHTML = response.responseText when Selenium would attempt to interact with an element that was supposed to be on screen after the request was complete. We still had to use explicit waits.
Basically, you are stuck with explicit waits in order to achieve stable tests. I've jumped through a lot of hoops over the years to get things working any other way. The only saving grace I've found is the Page Object Model Pattern, which at least centralizes this ugly code in one place for any particular use case.
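For example, a page object can hide all of the waiting behind one method; a minimal sketch with made-up selectors:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class SearchResultsPage
{
    private readonly IWebDriver _driver;

    // Hypothetical selectors for this example.
    private static readonly By Spinner = By.CssSelector(".loading-spinner");
    private static readonly By Results = By.CssSelector(".result-row");

    public SearchResultsPage(IWebDriver driver)
    {
        _driver = driver;
    }

    // Centralizes the ugly explicit-wait code: spinner gone, then results present.
    public void WaitUntilLoaded(int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(_driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.Until(d => d.FindElements(Spinner).Count == 0);
        wait.Until(d => d.FindElements(Results).Count > 0);
    }
}
```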
So, yeah. The code is ugly. You need to use explicit waits. It turns out test code needs to be just as purposefully architected as the application code it tests.
I am using StackExchange.MiniProfiler with the MVC and EntityFramework add-ons to try to track down a long TTFB that reliably occurs for one type of request on our web site. As you can see in the image at the bottom, the duration indicated for this request is 504.3 ms. I believe this corresponds to the time between the call to MiniProfiler.Start in BeginRequest and the call to MiniProfiler.End in EndRequest (minus the time of the child steps). Using browser tools I can see that the TTFB for this request matches the data from MiniProfiler, so I believe MiniProfiler is accurate. I have been adding profiler steps around more and more code and think everything is wrapped now, yet the steps don't add up to anything near 504 ms.
This request is an AJAX request that happens on a page with a few other requests going on at the same time. If I take the URL and hit it from the same browser in isolation, the duration and TTFB are only ~100 ms. This would seem to imply that something in one of the other requests is blocking this one, but I don't think we have anything that should block at all, and certainly not for that long; none of the other requests takes that long.
The site is running as a mid-level Azure App Service; could this be some sort of limitation there? How could I confirm or rule that out? Any MiniProfiler tricks that might expose more data here?
The issue was related to this SO question here:
I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it
Session state gets locked per session, so if you have a bunch of simultaneous requests from one browser/session, they can end up all waiting on each other. Marking the relevant controllers as read-only with an attribute made the problem go away: [SessionState(SessionStateBehavior.ReadOnly)]
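For reference, this is what that looks like on a controller (the controller and action here are made up):

```csharp
using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state: the controller can still read Session, but it no
// longer takes the exclusive per-session lock, so concurrent AJAX requests
// from the same browser session are no longer serialized.
[SessionState(SessionStateBehavior.ReadOnly)]
public class DashboardController : Controller
{
    public ActionResult Widgets()
    {
        var userName = Session["UserName"] as string;  // reads are fine; writes are not
        return Json(new { user = userName }, JsonRequestBehavior.AllowGet);
    }
}
```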
I have a really long submit()-type function on one of my web pages that runs entirely on the server, and I'd like to display a progress bar to the client to show the, well, progress.
I'd be ok with updating it at intervals of like 20% so long as I can show them something.
Is this even possible? Maybe some kind of control with runat="server"? I'm kind of lost for ideas here.
It's possible, but it's quite a bit harder to do in a web based environment than in, for example, a desktop based environment.
What you'll have to do is submit a request to the server, have the server start the async task, and then send a response back to the client. The client will then need to periodically poll the server (likely/ideally using AJAX) for updates. Within the long-running task's body, the server will want to set a Session value (or use some other method of storing state) that can be accessed by the client's polling method.
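A minimal sketch of the server side of that pattern as an MVC controller; the endpoint names are made up, and a static ConcurrentDictionary stands in for Session (note that fire-and-forget work like this can be lost on an app pool recycle, which is part of why this approach is messy):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ProgressController : Controller
{
    // Progress per job id; a Session value would also work.
    private static readonly ConcurrentDictionary<Guid, int> Progress =
        new ConcurrentDictionary<Guid, int>();

    [HttpPost]
    public ActionResult Start()
    {
        var jobId = Guid.NewGuid();
        Progress[jobId] = 0;

        Task.Run(() =>
        {
            for (var step = 1; step <= 5; step++)   // stand-in for the real work
            {
                Thread.Sleep(2000);
                Progress[jobId] = step * 20;        // 20%, 40%, ... 100%
            }
        });

        return Json(new { jobId });
    }

    // The client polls this every second or two and updates the bar.
    [HttpGet]
    public ActionResult Status(Guid jobId)
    {
        int percent;
        Progress.TryGetValue(jobId, out percent);
        return Json(new { percent }, JsonRequestBehavior.AllowGet);
    }
}
```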
It's nasty, and messy, and inefficient, so you wouldn't want to do this if there are going to be lots of users executing this.
Here is an example implementation by Microsoft. Note that this example uses UpdatePanel objects, ASP.NET timers, etc., which make the code quite a bit simpler to write (and it's still not all that pretty), but these components are fairly "heavy". Using explicit AJAX calls, creating web methods rather than doing full postbacks, etc. will improve the performance quite a bit. As I said though, even in the best of cases, it's a performance nightmare. Don't do this if you have a lot of users or if this operation is performed very often. If it's just for occasional use by a small percentage of admin users then that may not be a concern, and it does add a lot from the user's perspective.
I would take a look at .NET 4.5's async and await.
Using Asynchronous Methods in ASP.NET MVC 4 -- (MVC example I know sorry)
Then check out this example using a progress bar
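For reference, an async MVC action has roughly this shape (the HTTP call below is a placeholder for the real long-running work); note that async/await frees the request thread while the work runs, but it does not by itself report progress, so for a progress bar you still combine it with some form of client polling:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    public async Task<ActionResult> Generate()
    {
        using (var client = new HttpClient())
        {
            // Placeholder for the real long-running call.
            var data = await client.GetStringAsync("https://example.com/slow-data");
            return Content(data);
        }
    }
}
```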
Here is my problem:
I have just been brought onto a massive ASP.NET C# project and I've been charged with fixing some performance issues (not my area of expertise). More specifically, after 5-7 redirects/AJAX calls the web server stops responding and the whole page (and eventually the browser) freezes.
I don't think this is a coding issue, as I've set up breakpoints in a few pages (the Page_Load method), and after the 5 requests the code does not even reach the breakpoints.
I don't believe this is related to the browser's connection limit, as I've increased the browser's maximum connections per server setting and got the same behavior. Furthermore, after these 5 requests in one browser (IE), the application stops working in Firefox as well.
This is not a resource issue, as the w3wp.exe process never exceeds 500 MB of memory.
One thing I've noticed when using Fiddler and other tools to monitor the requests is that the server takes a very long time loading image files (PNG, JPG). I don't know if this is relevant.
I've enabled failed request tracing on the server, and the only thing I've noticed is that some requests fail with a 401 error even though I've set Anonymous Authentication to enabled.
Here is the exact message:
MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName ManagedPipelineHandler
Notification 128
HttpStatus 401
HttpReason Unauthorized
HttpSubStatus 0
ErrorCode 0
ConfigExceptionInfo
Notification EXECUTE_REQUEST_HANDLER
ErrorCode The operation completed successfully. (0x0)
This message is sometimes thrown with ModuleName: ScriptModule.
I have already wasted 2 days on this and I'm running out of ideas, so any suggestions would be appreciated.
Like any large generic problem, your best bet in diagnosing the issue is to figure out how to break it down into smaller parts, how to hypothesize the causes, and how to validate or invalidate your hypotheses. My first inclination would be to hypothesize that the server-side processes in this particular application are taking a long time, causing your client requests to block and making the whole thing seem frozen.
From there, I would attempt to replicate the long-running server-side processes by creating isolated client-side tests. Perhaps if the URLs are HTTP GETs, I would test the same URLs individually; if they are HTTP POSTs, I'd create an isolated test form, if feasible, to see what happens with each request. If a long-running server-side process is found, then you have a starting point.
If there are no long-running server-side processes, then it may be JavaScript / client-side coding issues that need to be looked into. But definitely, when you're working on a large, unfamiliar project, your best bet is to figure out how to break the issue down into smaller components that can then be tested.
I finally solved the issue. Here is what I did:
I experimented with IIS settings and app pool recycling, and noticed that there is nothing wrong with the way IIS handles the requests that actually reach it.
I then focused on the Http.sys module and noticed that its log files contained a lot of Timer_ConnectionIdle and Client_Reset errors.
After some more experimentation and a lot of Google searches, I accidentally found this answer, and it solved my issue. As the answer suggests, the problem was caused by the AVG antivirus installed and incorrectly configured on the server.
Thanks for all the help and suggestions.
If it's AJAX calls that are causing your browser to freeze, make sure they are not synchronous (blocking) AJAX calls.
Just appending to Shan's answer, which is a good one.
First off, there is obviously a code issue, as this is by no means 'normal' behavior for IIS.
That said, you must isolate it as Shan indicated. For example, given that the server itself no longer accepts connections, we can pretty well eliminate JavaScript as the source of the problem and relegate it to being just a symptom.
Typically, when a worker process spins off into space like this, it is due to either an infinite loop or an issue where multiple threads are trying to lock the same resource. I bet if you let it run long enough, IIS itself will time out, kill, and restart the process.
With that in mind, you want to look for any type of multithreaded garbage (which I highly recommend you avoid on a web server) or for anything that indicates a tight infinite loop. A loop will become apparent if you execute the requests individually; a multithreaded issue will only show up if you happen to get a collision.
Run various performance counters on the web server. Also, once it locks up, let it sit that way for a while; once IIS performs its own reset on the worker process, go look for indicators in the event log.
I just posted the question how-to-determine-why-the-browser-keeps-trying-to-load-a-page and discovered that my problem is with Gravatar.
I also noticed that StackOverflow is suffering from the same outage.
Does anyone know of a graceful way to determine if Gravatar, or any third party website for that matter, is up or not, before trying to retrieve avatar icons from them?
This would eliminate the long page load and the never-ending busy cursor... I shouldn't say never-ending; it just takes a long time to go away, and it's very confusing to users as they sit there and wait... for nothing.
You can have a separate process that periodically checks the status of the site. Define a rule for what counts as down for you; for instance: "ping time > 1500 ms = down". Have this process leave a note in a database table or a config file. Then you check this value on each page render at almost no cost.
Depending on how critical this external site is, you can do the check more or less often.
This process could be a program outside the web stack, or a page only accessible through localhost that gets executed via Scheduled Tasks, or an ASP.NET facility like those mentioned in the comments.
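A minimal sketch of such a checker, meant to run from a scheduled task; the 1500 ms rule, the probe URL, and the status-file path are all illustrative:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net.Http;

public static class GravatarHealthCheck
{
    public static void Main()
    {
        bool isUp;
        var stopwatch = Stopwatch.StartNew();
        try
        {
            using (var client = new HttpClient { Timeout = TimeSpan.FromMilliseconds(1500) })
            {
                var response = client.GetAsync("https://www.gravatar.com/avatar/").Result;
                stopwatch.Stop();
                // Our rule: reachable and answering within 1500 ms counts as "up".
                isUp = response.IsSuccessStatusCode && stopwatch.ElapsedMilliseconds <= 1500;
            }
        }
        catch (Exception)
        {
            isUp = false;  // timeout or network error counts as "down"
        }

        // Pages read this flag at render time at almost no cost.
        File.WriteAllText(@"C:\config\gravatar.status", isUp ? "up" : "down");
    }
}
```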
For Gravatar, you can cache all these images instead of fetching them from their server every time. Of course, if a user changes their icon, it might not refresh as quickly as it would with direct access to the main server, but at least you don't have to hit the Gravatar server on every request.
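A minimal sketch of that caching, using MemoryCache with a made-up one-hour refresh window:

```csharp
using System;
using System.Net.Http;
using System.Runtime.Caching;

public static class AvatarCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private static readonly HttpClient Client =
        new HttpClient { Timeout = TimeSpan.FromSeconds(2) };

    // Returns cached avatar bytes, refetching at most once per hour.
    public static byte[] GetAvatar(string emailHash)
    {
        var key = "avatar:" + emailHash;
        var cached = Cache.Get(key) as byte[];
        if (cached != null)
            return cached;

        try
        {
            var bytes = Client.GetByteArrayAsync(
                "https://www.gravatar.com/avatar/" + emailHash).Result;
            Cache.Set(key, bytes, DateTimeOffset.UtcNow.AddHours(1));
            return bytes;
        }
        catch (Exception)
        {
            return null;  // caller falls back to a default avatar
        }
    }
}
```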