I have a performance issue with an ASP.NET Web API app hosted as an Azure Web App. After deploying, the first request to the web service is really slow (we are talking seconds here). Subsequent requests work just fine, with no extra delay.
"Always on" feature works fine keeping the app from unloading but this does not solve my issue. I do not want this first request to warm up the service (BTW - should it be warmed up?).
I've used the diagnostic and profiling tools in Azure without finding the root cause, and I've used Application Insights as well. It seems like one particular function of mine takes much longer to execute during this first request; debugging the app locally, I did not notice any performance issue with that function.
How can I fix this?
Thanks!
This bit me as well. "Always On" only makes automated calls to your service root - think of it as slapping the process every few minutes so it won't fall asleep. We don't use it in our PROD services; instead we have an Azure Availability Test invoking a Ping() endpoint every 5 minutes - two birds, one stone. Besides, Always On will generate 404 errors in App Insights if you don't have anything listening at the root.
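For reference, such a Ping() endpoint can be as trivial as this (a minimal sketch; the controller name and default /api/ping route are my assumptions, not from the original setup):

using System.Web.Http;

// Minimal Web API 2 controller for an Availability Test to hit.
public class PingController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        // Cheap, side-effect-free response: enough to keep the worker warm
        // and give the Availability Test something meaningful to assert on.
        return Ok("pong");
    }
}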
A totally different thing is warming up each of the endpoints so they get JIT-compiled and ready, and I have not found anything better than a warm-up script with the whole list of endpoints to call. It is not perfect, but it works: every time you do a deployment or a restart it runs automatically, and your first calls won't be hurt.
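A minimal sketch of such a script, assuming a console app and a hypothetical endpoint list (the URLs are placeholders; hook this into your deployment/restart pipeline):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Walks a hand-maintained list of endpoints so each code path gets
// JIT-compiled before real traffic arrives.
class WarmUp
{
    static async Task Main()
    {
        var endpoints = new[]
        {
            "https://myapp.azurewebsites.net/api/ping",
            "https://myapp.azurewebsites.net/api/orders",
            "https://myapp.azurewebsites.net/api/customers"
        };

        using (var client = new HttpClient())
        {
            foreach (var url in endpoints)
            {
                var response = await client.GetAsync(url);
                Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
            }
        }
    }
}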
Have a look at this article.
I hope this helps
I've been trying to diagnose an issue pertaining to thousands of hung/stuck EndRequest requests in IIS. This is becoming a large problem for us as we're hitting the concurrent connection cap after about a week or two and have to recycle the whole application pool to clear the request list.
Because this is a live application, I have limited troubleshooting options, so anything that would halt or bring down the application pool I am not allowed to do.
IIS Information
Concurrent connection cap is set to its maximum of 65535.
Configuration debug in the web.config is set to false, and we have a timeout set at 110 seconds.
Windows Server 2012 R2 Version 6.2 (Build 9200)
IIS Version 8.5.9600.16384
The long-running requests have 0 data transfer (checked with Wireshark).
I'm pretty much at a loss as to why these aren't timing out. I've set all the appropriate settings - the ones I could find on MSDN and other sources. We have a very, very hard time replicating this in our development environment, so it's been blind testing for the most part. I've found articles on other kinds of state hangs, but I cannot find anything on why a request in the EndRequest state will not time out.
Advanced Settings Page:
https://postimg.org/image/gxec32kmt/
Application Pool Requests Page:
https://postimg.org/image/qupcw57o5/
Web Config:
https://postimg.org/image/5xt4rh1xh/
Update 1
I did a bit of digging into our fallback that is supposed to close connections after an hour of no usage. We seem to currently have 10,153 sessions still active, with a last-active time of 3 days ago. I've stepped through this function quite a bit and it seems to be working as intended: it goes through the list of sessions, and any session with over an hour of inactivity has its WebSocketHandler.Close() method called. However, it seems some sessions refuse to close after the method is called. We have logging in place to tell us if any exceptions are thrown during the run, but it appears to be running as expected.
This was my mistake. I was running against an old pull of the session data. A current pull shows no sessions running longer than their specified time, which means WebSocketHandler.Close() was called on them and they were removed from our in-memory list.
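For context, the fallback is essentially this (a simplified sketch; Session, LastActive, and _sessions are stand-ins for our in-memory bookkeeping, and Close() is the WebSocketHandler method mentioned above):

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Web.WebSockets;

class SessionReaper
{
    // Stand-in for our real session record.
    class Session
    {
        public DateTime LastActive;
        public WebSocketHandler Handler;
    }

    readonly List<Session> _sessions = new List<Session>();

    // Runs periodically: close and forget anything idle for over an hour.
    public void CloseIdleSessions()
    {
        var cutoff = DateTime.UtcNow.AddHours(-1);
        foreach (var session in _sessions.Where(s => s.LastActive < cutoff).ToList())
        {
            session.Handler.Close();
            _sessions.Remove(session);
        }
    }
}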
Update 2
Output of netstat -s on pastebin: https://pastebin.com/embed_js/qBbZ4gJ1
Update 3
A correction to Update 1: can a connection close be called and fail? If so, then we're accidentally orphaning the reference to the connection in our server. I would still expect the IIS timeout to kick in, however; there must be some catch to how IIS collects these requests.
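If Close() can indeed throw, guarding it so the bookkeeping happens regardless would at least stop us orphaning references (a sketch reusing the placeholder names from the Update 1 sketch; the Log helper is hypothetical):

// If Close() throws (e.g. the socket is already aborted), we still want the
// session out of our in-memory list so we don't orphan the reference.
void SafeClose(Session session)
{
    try
    {
        session.Handler.Close();
    }
    catch (Exception ex)
    {
        Log("WebSocketHandler.Close() failed: " + ex.Message); // hypothetical logger
    }
    finally
    {
        _sessions.Remove(session);
    }
}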
I have two Azure Web Apps. Both of them have the "Always On" option set to true, but the Web Apps still go idle after a while.
If I don't query the Web App for a few minutes, the next request I make to it takes significantly longer - too long to be acceptable for production.
The WebApp runs a C# ASP.NET MVC 5 project which is my project's API.
Can I do anything to prevent this from keeping on happening?
It seems that turning off the HTTP-to-HTTPS rewrite solved the issue; the response is still slow, but much faster than before.
I have a few Windows services (all written in C#) that all show the same strange behaviour.
I have them set to delayed auto-start so that they get started after the boot (delayed because, well, they are not critical).
They all host WCF services as parts of Client-Server applications and were installed using WiX if that matters.
I noticed that sometimes they just don't start.
If you look at the Services window fast enough after the OS is ready, they have the status "Starting". If you then refresh the view, they are no longer starting, but they are not "Started" either.
You can then start them manually without any problem whatsoever.
This produces no error messages and no log entries. And to make it even better, it only occurs if the machine has been shut down and turned on again; a reboot works perfectly fine every time (tried about 20 times on two different machines).
If you set the failure actions to restart the service after a failure, it seems the service will eventually start successfully, but surely this cannot be the ideal solution.
The OSes are Windows 7 and Windows Server 2008 R2.
What am I missing here? Why do they fail to start automatically (the first time, at least)? And why does it make a difference whether the computer boots following a reboot or a shutdown?
EDIT:
I was wrong about the failure actions. They did not fix the problem.
EDIT 2:
I have added exception handling around everything to log possible exceptions. But so far no exceptions have been logged.
Might it be that the WCF services take a long time to start? AFAIK, a Windows service has to come up within a certain time (best practice is 30 seconds; I don't know the technical limit) or the start times out. That could explain why your service shows the status "Starting" but never actually starts.
Please see my answer on the duplicate. A Windows service typically shouldn't have access to the desktop for security reasons, but it certainly should have a good amount of logging in it. You probably have a race condition. The only thing you could do about this in WiX would be to express a dependency on another service, to get the Service Control Manager to wait a while before starting yours. But it really would be better if your code were more robust. For example, have the OnStart event fire up a background worker thread and then return success; the background thread can then keep attempting to host the WCF endpoint, doing a fair amount of logging in the process.
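A rough sketch of that pattern (the contract, service type, address, and log source are all placeholders; logging goes to the Event Log only as an example):

using System;
using System.ServiceModel;
using System.ServiceProcess;
using System.Threading;

[ServiceContract]
public interface IMyWcfContract
{
    [OperationContract]
    string Ping();
}

public class MyWcfService : IMyWcfContract
{
    public string Ping() { return "pong"; }
}

public class MyWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Report "started" to the Service Control Manager immediately;
        // the real work happens on a background thread, so the ~30-second
        // start timeout can never bite.
        new Thread(HostWithRetry) { IsBackground = true }.Start();
    }

    private void HostWithRetry()
    {
        while (true)
        {
            try
            {
                _host = new ServiceHost(typeof(MyWcfService),
                    new Uri("http://localhost:8080/mywcf")); // placeholder address
                _host.Open();
                Log("WCF endpoint opened.");
                return;
            }
            catch (Exception ex)
            {
                Log("Open failed, retrying in 10s: " + ex.Message);
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }

    protected override void OnStop()
    {
        if (_host != null) _host.Close();
    }

    private static void Log(string message)
    {
        // Example sink only; use whatever logging the service already has.
        System.Diagnostics.EventLog.WriteEntry("MyWindowsService", message);
    }
}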
(Sorry if this is a really long question, it said to be specific)
The company I work for has a number of sites, which have been running for some time with no problems. The applications are a mix of ASP.NET 2.0, 3.5, and 4.0, all using ADO.NET to connect to a SQL Server Standard instance (on the same web server), all hosted with IIS7.
The problem began when we moved to an upgraded web server. We made every effort to set up the server, DB instance, and IIS with the exact same settings (apart from the different machine name and the fact that we had upgraded from SQL Express to Standard), and as far as we could tell, we did. Both servers are running Windows Server 2008 R2 (all current updates applied) and received a default install.
The problem is very apparent when starting up one of these applications. When you reach the login page of our application, the page itself loads extremely fast. This is true even when you load the page from a new machine that could not possibly have the page cached, and with IIS caching disabled. The problem becomes visible when you enter your login information and click the login button. Because of the (not great) design of our databases, the login process must access a number of databases - theoretically up to 150 separate DBs, but in practice usually 2 - and the problem occurs even when only 2 databases (the minimum) are opened. Not a great design, but we have to live with it for now.
When trying to initially open a connection to the database, the entire process stalls for about 20 seconds, every time, regardless of whether you are connecting to 2 DBs or 40. I have run a .NET profiler (JetBrains dotTrace) against the process, and the only information I could take from it was that one or all of the calls to SqlConnection.Open() accounted for 90% of the time. This only happens on first use of the application, but the problem is compounded by the fact that IIS seems to disregard the recycling settings we have set for it and recycles the application after a few minutes of idle, causing the problem to occur again.
I also tried to use SQL Server Profiler to see which database operations were the cause of the slowdown, but because of all the other DB activity (and the fact that I had to do this on our production server, because the problem doesn't occur in our test environments), I couldn't pin down the exact operation that was causing the stoppage. I will try coming in late at night and shutting down the production sites to run the SQL profiler, but I might not be able to do this right away.
In the course of researching the problem, I have tried a couple of solutions:
Thinking it might be a name resolution problem, I tried modifying the hosts file on the web server, as well as giving the connection strings an IP address instead of the server name to resolve (see the example after this list), with no difference. I have heard of the LLMNR protocol causing problems like this, but I think trying to connect by IP or resolving with the hosts file should have eliminated that possibility, though I admit I never tried actually turning LLMNR off.
I have increased the idle timeouts, recycling intervals, etc. in IIS, but these don't even seem to be respected, much less solve the problem. This leads me to believe there is a setting overriding the IIS application settings on the machine.
Multiple other code fixes, none of which made any difference. Could a SQL Server setting be causing the problem?
Other stuff that I've forgotten by now.
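For illustration, the by-IP connection string from the first attempt looked roughly like this (address, port, and database are hypothetical):

Data Source=10.0.0.5,1433;Initial Catalog=MyDb;Integrated Security=True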
Any ideas, experience or whatevers would be greatly appreciated in helping me solve this problem!
I would advise using a non-TCP connection if you are still running the SQL instance on the local machine. SQL Server supports several protocols; TCP, named pipes, and shared memory are the more common ones.
Named Pipes
Data Source=np:computer\instance
Shared Memory
Data Source=lpc:computer\instance
Personally, I prefer shared memory. Remember that you need to enable these protocols in SQL Server Configuration Manager, and to avoid configuration mistakes I suggest you disable any you are not using.
see http://msdn.microsoft.com/en-us/library/ms187892.aspx
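A quick way to sanity-check the effect is a tiny console app that times the first Open() with each prefix (a sketch; the server, instance, and database names are placeholders):

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class ConnectTest
{
    static void Main()
    {
        // Swap the prefix between lpc: (shared memory) and np: (named pipes)
        // and compare the cold-connect times.
        var cs = @"Data Source=lpc:MYSERVER\SQLEXPRESS;Initial Catalog=master;Integrated Security=True";
        var sw = Stopwatch.StartNew();
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
        }
        Console.WriteLine("Open took {0} ms", sw.ElapsedMilliseconds);
    }
}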
IIS Reset
In IIS7 there are two ways to configure the idle timeout. Both begin by clicking on the "Application Pools" section and right-clicking the appropriate application pool. If you click the "Recycling..." option, there is one setting. The other is in "Advanced Settings...": under the "Process Model" section you will find "Idle Time-out (minutes)", which, when set to zero, disables the process timeout. This latter option is the one that works for us.
If I were you, I'd solve this problem first, as restarting the app domain and/or worker process is always painful, even if you don't have a 20-second lag.
Some ideas:
From the web server, can you ping the DB server and get a "normal" response, or are you seeing a similar delay?
If you're seeing a delay, run a tracert to see if you can nail down where the slowness is occurring.
Try using a tool like QueryExpress (http://www.albahari.com/queryexpress.aspx), which doesn't require an install to run. You can download the EXE and run it from your web server, then see if you can connect to your DB and run queries in a normal fashion.
Try something like SysInternals' TcpView (http://technet.microsoft.com/en-us/sysinternals/bb897437) to take a look at your open connections and see what activity is happening on your server and how much data is being sent to and received from your db server.
Just some initial thoughts on where I'd start to look based upon your problem description. I hope this helps. Good luck with things!
With IIS not respecting recycling settings: did restarting IIS/rebooting change the behavior?