Unity Not Being Disposed Causing Server Lockup? - c#

We have a server farm of about 40 servers that we roll code to every couple of weeks. One thing we noticed when we roll the code live: after we deploy the assemblies, perform an IIS reset, and put a server back in the BigIP (F5), once it starts receiving traffic the server locks up for about 10 minutes and clients just spin until an eventual timeout.
Looking at perfmon we can see a dramatic spike in the number of finallys and the number of pinned objects, which led me to investigate memory issues.
So one thing I started looking into is our Unity IoC configuration. In Global.asax.cs we register about 15 interfaces, most of them using the ContainerControlledLifetimeManager to manage lifetime. Normally there is never a problem with the code except in this ten-minute window, so my first thought was a memory or resource management issue.
Does anyone know if you have to explicitly Dispose() your Unity container, or is this handled by Unity automagically somehow? I noticed today that there was no Dispose wiring in place for Application_End, so my thought was that maybe when the servers are brought back up after the IIS reset there is a Unity or object resource issue until the GC comes around and frees the memory (the ten minutes it takes to come up).
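For reference, the wiring looks roughly like this; the Application_End handler at the bottom is the part that is currently missing (just a sketch, and IOrderService/OrderService are placeholder names):

```csharp
using System;
using System.Web;
using Microsoft.Practices.Unity;

public class Global : HttpApplication
{
    // Container held for the lifetime of the application.
    public static IUnityContainer Container { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        Container = new UnityContainer();

        // ContainerControlledLifetimeManager = one shared instance per container (singleton-style).
        // IOrderService/OrderService are placeholders; we have roughly 15 registrations like this.
        Container.RegisterType<IOrderService, OrderService>(new ContainerControlledLifetimeManager());
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Disposing the container disposes any container-controlled instances
        // that implement IDisposable. This is the wiring that is currently absent.
        if (Container != null)
            Container.Dispose();
    }
}
```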
Any help is appreciated!

Performing an iisreset will kill the currently running w3wp.exe process, so it's unlikely that failing to dispose of Unity objects in Application_End would cause performance issues on startup. It is possible that the old worker process doesn't properly release file system or other resources the new worker process depends upon, but I think you'd see file access or some other errors if that were the case.
Since you're performing an iisreset, I would look closely at the code that runs when the application starts for the first time. Maybe there are some components that take a lot of time to start up (e.g., a singleton-type class that downloads and caches a bunch of stuff from the database), which cause the slowdown, possibly only when combined with the stress of handling all of the waiting HTTP requests. Also, keep in mind that ASP.NET incurs a bunch of overhead as it compiles the application the first time it is used. Since your web application is behind a load balancer, you may want to come up with a way to "prime" the application on each individual web server before you add that server back to the load balancer; this could be as simple as loading a page locally on that web server. Priming lets the web app initialize itself without having to handle any outside requests, which should improve the startup time.
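A primer can be as simple as a small console app you run on the box before putting it back in the F5 pool (a sketch only; the URLs and the health page name are assumptions about your app):

```csharp
using System;
using System.Net;

class Warmup
{
    static void Main()
    {
        var urls = new[]
        {
            "http://localhost/",            // home page: forces compilation/JIT and app startup
            "http://localhost/health.aspx"  // hypothetical page that touches the DB and warms caches
        };

        foreach (var url in urls)
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(url);  // blocks until the app has fully spun up
                    Console.WriteLine("Primed " + url);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Warm-up failed for " + url + ": " + ex.Message);
            }
        }
    }
}
```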
Long story short, I would investigate startup issues and see what I could tune there before I focused on shutdown issues.

Related

CoreWCF service hangs up after some time

I used to have a .NET Framework WCF service. It worked like clockwork. Once .NET 5 and CoreWCF were released, I migrated the service to, well, .NET 5 and CoreWCF.
Now it hangs up after some time. If the load is light, it might work for a day or so (and then randomly die), but if the load becomes heavier it may die in just an hour or so. When it dies, I can see that it starts consuming a lot of processing power.
The clients work fine, and even if I restart the service they will pick the connection back up (after complaining for a while that the service is unavailable).
The service runs as a singleton.
Logging, and then monitoring when the logging died, seems to be the only way to figure out what's going on. Unfortunately, such logging produces an outrageous amount of data, and it seems to keep producing some data even after the "core" of the service no longer operates properly.
Switching to gRPC is possible. However, this would require rewriting all the clients.
Debugging is a no-go because the service dies somewhere between one hour and one day in, while it is handling multiple connections and timer events.
I wonder if anyone has any ideas. Thanks a lot!
Suggestion: Run the service in a debugger and then see where it dies and/or starts consuming CPU cycles.
If you can't use a debugger, then I think logging/monitoring is your best bet. Perhaps you can reduce the amount of data logged to focus only on the "core" of the service.
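One low-volume option is a periodic heartbeat instead of full request logging. Below is a minimal sketch (assumes .NET 5+; route the output to whatever logging the service already uses) that records a few process and thread-pool stats once a minute, so you can see roughly when the service stops making progress or starts burning CPU:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class Heartbeat
{
    public static Task Start(CancellationToken token)
    {
        return Task.Run(async () =>
        {
            while (!token.IsCancellationRequested)
            {
                // Available worker/IO threads shrink toward zero under thread-pool starvation.
                ThreadPool.GetAvailableThreads(out int worker, out int io);

                Console.WriteLine(
                    $"{DateTime.UtcNow:O} heartbeat: " +
                    $"threads={Process.GetCurrentProcess().Threads.Count}, " +
                    $"availableWorker={worker}, availableIo={io}, " +
                    $"gcHeapMB={GC.GetTotalMemory(false) / (1024 * 1024)}");

                try { await Task.Delay(TimeSpan.FromMinutes(1), token); }
                catch (TaskCanceledException) { break; }
            }
        });
    }
}
```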

ASP.NET app takes longer to load on one machine than another (in IIS)

I have an ASP.NET app running in IIS. The first time a call to the application is made, it can sometimes take extremely long (e.g. 80 seconds), whereas the second time it's very quick.
I know this has to do with the app first starting and possibly needing to gather resources etc. However, the problem is that I can run the same identical app on another machine and the load time for the first call is significantly less.
So I'm wondering what factors on the machine would affect this load time?
Thanks for any assistance
I agree with Steve's comment. FYI, this slow response on the initial request will also happen every time the app pool has been idle for a while. You can combat this by stopping the app pool from shutting down when idle. I think the default is 20 minutes; this is a setting in IIS.
Then you will only suffer the problem when the app pool recycles. You can stop that happening too, but I don't advise it; recycling the app pool every now and again protects you from memory leaks. There's an interesting article on this here: http://weblogs.asp.net/owscott/archive/2013/04/06/why-is-the-iis-default-app-pool-recycle-set-to-1740-minutes.aspx However, you can proactively spin the app pool back up by setting up a scheduled task that runs a batch file to make a request to the website when it detects an app pool recycle.
This ensures that your site is always spinning and good to go for every request.
ASP.NET only compiles when the page is requested for the first time. This means that on first load the page is being compiled and then displayed. This can be solved by following the precompile instructions from Microsoft.
http://msdn.microsoft.com/en-us/library/ms227972%28v=vs.90%29.aspx
EDIT: I realized that I didn't answer the question that you were really asking.
There are a few things that could affect the first load:
1) The browser you are using may not be as efficient at displaying the type of content on the page (assuming different browsers).
2) If the machines aren't running on the same internet connection (and even if they are, speeds vary between Wi-Fi and Ethernet), this could be affecting the overall speed.
3) The specs of the machines themselves can make a difference; browsers still take up resources to run, and as such a faster computer could display the page quicker (although it wouldn't make a giant difference).
4) You said the app was running on IIS, but you didn't specify whether it was a local (test) server or a deployed server. If it's local, the specs of the machine again come into play, and in a giant way. Booting an IIS server, deploying the app, and then displaying pages (what happens when you click Run in VS or similar) can take very different amounts of time depending on the machine.

EventProvider constructor throwing Win32Exception Not enough storage

After moving a WCF service from one production server to another, where the configuration is very similar, custom event logging via Event Tracing for Windows has stopped working, but just for one app.
The error is being thrown in the ctor of the EventProvider class and it is a Win32 "Not enough storage" error.
The WCF service uses ConcurrencyMode.Multiple and InstanceContextMode.PerCall, i.e. a thread per call. At the time of monitoring, 60 threads belonged to the process. The EventProvider ctor is invoked per call. It is IIS/WAS hosted with AppFabric.
Another app on the same server is working OK.
I have no idea how to diagnose this. If anyone can even suggest a starting point I'd be grateful.
OK, this turned out to be related to VMware configuration. The machine is a 12 GB server, but it was configured with 6 GB permanently reserved and 6 GB taken from a pool. With a lot of memory pressure and swapping at the physical level, random Win32 exceptions started getting thrown in the VM. The solution is to make more memory available.
UPDATE: The above was a coincidence; most likely it is not related to VMware.
The issue returned after a month. It seems that something on the server has changed which slows down garbage collection, and my per-call WCF service does not dispose EtwRegistration handles explicitly (i.e. I am not explicitly disposing the EventProvider). Experiments show that there is a limit of 1,000 EventProviders per process. The change in performance on the server resulted in a handle leak that hit that limit.
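In sketch form, the fix is simply to dispose the provider deterministically per call (the provider GUID below is a placeholder; sharing a single static provider across calls would also avoid both the handle churn and the limit):

```csharp
using System;
using System.Diagnostics.Eventing;

public class TracingService
{
    // Placeholder GUID: substitute the real ETW provider id.
    private static readonly Guid ProviderId = new Guid("00000000-0000-0000-0000-000000000000");

    public void DoWork()
    {
        // Wrapping the provider in a using block releases the EtwRegistration handle
        // deterministically instead of waiting for finalization, so the per-process
        // provider limit is never reached.
        using (var provider = new EventProvider(ProviderId))
        {
            provider.WriteMessageEvent("DoWork invoked");
        }
    }
}
```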
Further update: If anyone would like to increase the number of providers, instead of forcing cleanup for whatever reason, I think this might help http://support.microsoft.com/kb/2583244

Some questions coming from application programming (C#/Visual C++) to ASP.NET (C#)

At the new place I am working, I've been tasked with developing a web-application framework. I am new (6 months ish) to the ASP.NET framework and things seem pretty straightforward, but I have a few questions that I'd like to ask you ASP professionals. I'll note that I am no stranger to C#.
Long life objects/Caching
What is the preferred method to deal with objects that you don't want to re-initialize every time a page is hit? I noticed that there is a cache manager that can be used, but are there any caveats to using it? For example, I might want to cache various things, and I was thinking about writing a wrapper around the cache that prefixes cache names so that I could implement different caches using the same underlying .NET cache manager.
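Roughly something like this, just as a sketch (the names are placeholders, and it sits on top of HttpRuntime.Cache):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class PrefixedCache
{
    private readonly string _prefix;

    public PrefixedCache(string prefix)
    {
        _prefix = prefix + ":";
    }

    public void Set(string key, object value, TimeSpan ttl)
    {
        // Each logical cache gets its own prefix but shares the one ASP.NET cache.
        HttpRuntime.Cache.Insert(
            _prefix + key, value, null,
            DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
    }

    public T Get<T>(string key) where T : class
    {
        return HttpRuntime.Cache.Get(_prefix + key) as T;
    }
}
```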
1) Are there any design considerations I need to think about for the objects that I want to cache?
2) If I want to implement a manager of some kind that is around during the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, with any managers created uniquely in the constructor/init method of the page being served?
3) If I have a long-term object initialized at app start, is this likely to be affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to get a little clearer :)
Long Life Threads
I've done a bit of research on this, and this question is probably redundant. It seems it is not safe to start a worker thread in the ASP.NET environment, and that instead you should use a Windows service for long-running tasks. The latter isn't exactly a problem, since the target environments will have the facility to install services, but I just wanted to double check that this is absolutely necessary. I understand threads can throw exceptions and die, but I do not understand the reasoning behind prohibiting them. If .NET provided a thread framework that encompassed System.Thread but also provided notifications for when the application server was going to recycle the app pool, we could actually do something about it rather than just keel over and die at the point we were stopped.
Are there any solutions to threading in ASP.NET, or is the answer basically "use a service"?
I am sure I'll have more queries, but this is it for now.
EDIT: Thank you for all the responses!
So here's the main thing that you're going to want to keep in mind: IIS may get reset, or may reset itself (based on various criteria), while you're working. You can never know when that will happen unless it stops rendering your page while you're waiting on the response (in which case you'll eventually get a browser notice that the page stopped responding).
Threads
This is why you shouldn't use threads in ASP.NET apps. However, that's not to say you can't. Once again, you'll need to configure the IIS engine properly (I've had it hang when spawning a lot of threads, but that may have been machine-dependent). If you can trust that nobody will cause ASP.NET to recompile your code or restart your application (by saving the web.config, for instance), then you will have fewer issues than you might otherwise.
Instead of running a Windows service, you could use an ASMX or WCF service, which also runs on IIS/.NET. That's up to you, but with multiple service pools it allows you to keep everything "in the same environment" as far as installations and builds are concerned. They obviously don't share the same process pool/memory space.
"You're Wrong!"
I'm sure someone will read this far and go "but you can't thread in ASP.NET!!!", so here's a link from the venerable MSDN that shows you how to do it: http://msdn.microsoft.com/en-us/magazine/cc164128.aspx
Now onto Long life objects/Caching
Caching
So it depends on what you mean by caching. Is this per user, per system, per application, per database, or per page? Each is possible, but takes some contrivance and complexity, depending on needs.
The simplest way to do it per page is with static variables. This is also highly dangerous if you're using it for per-user stuff, because there's no indication to the end user that the variable is going to change if more than one user uses the page. Instead, if you need something to live with the user while they work with the page in particular, you could either stuff it into Session (server-side caching, stays with the user, and they can use it across multiple pages) or you could stick it into ViewState.
The cache manager you reference above would be good for application-style caching, where everyone using the web app can use the same data store. That might be good for intensive queries where you want to get the values back as quickly as possible, so long as they're not stale. That's up to you to decide. Also, things like application settings could be stored there, if you use a database layer for storage.
Long term cache objects
You could initialize it in the app_start with no problem, and the same goes for destroying it at the end if you felt the need, but yes, you do need to watch out for what I described at first about the system throwing all your code out and restarting.
Keel over and die
But you don't get notified when you (the app pool here) are going to be restarted (as far as I know), so you can pretty much keel over and die on anything. Always assume the app is going to go down on you before your request, and that every request is the first one.
Really though, that just leads back to web design in the first place. You don't know whether this is the first visitor or the fifty millionth (unless you're storing that information in memory, of course), so just as the app is stateless, you also need to plan your architecture to be as stateless as possible. That's where web apps are great.
If you need state on a regular basis, consider sticking with desktop apps. If you can live with statelessness, welcome to ASP.NET and web development.
1) The main thing about caching is understanding the lifetime of the cache, and the effects of caching (particularly large) objects. Consider caching a 1MB object in memory that is generated each time your default.aspx page is hit; after a year of production you're getting 10,000 hits an hour, and the object lifetime is 2 hours. If each hit caches its own copy, that's up to 20,000 live objects, or roughly 20 GB, in the cache at once. You can easily chew up TONS of memory, which can affect performance, and may also cause things to be prematurely expired from the cache, which in turn can cause other issues. As long as you understand the effects of all of this, you're fine.
2) Starting it up in Application_Start and shutting it down in Application_End is fine. You can also implement a custom HttpApplication with an http module.
3) Yes, when your app pool is recycled it calls Application_End and everything is shut down and destroyed.
4) (Threads) The issue with threads comes up in relation to scaling. If you hit that default.aspx page, and it fires up a thread, and that page gets hit 10,000 times in 2 minutes, you could potentially have a ton of threads running in your application pool. Again, as long as you understand the ramifications of firing up a thread, you can do it. The ThreadPool is another story: the ASP.NET runtime uses the ThreadPool to process requests, so if you tie up all the thread pool threads, your application can hang because there isn't a thread available to process the next request.
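If you do fire up your own long-running thread, one way to cooperate with recycles is the IRegisteredObject hook in System.Web.Hosting, which asks ASP.NET to tell you when the app domain is shutting down. A minimal sketch (the worker and its timings are made up for illustration):

```csharp
using System;
using System.Threading;
using System.Web.Hosting;

public class BackgroundWorker : IRegisteredObject
{
    private readonly ManualResetEvent _stop = new ManualResetEvent(false);
    private Thread _thread;

    public void Start()
    {
        // Ask ASP.NET to call Stop(...) before it tears down the app domain.
        HostingEnvironment.RegisterObject(this);
        _thread = new Thread(Run) { IsBackground = true };
        _thread.Start();
    }

    private void Run()
    {
        // Do periodic work until shutdown is signalled.
        while (!_stop.WaitOne(TimeSpan.FromSeconds(30)))
        {
            // ... periodic work ...
        }
    }

    public void Stop(bool immediate)
    {
        _stop.Set();  // tell the loop to exit
        if (!immediate && _thread != null)
            _thread.Join(TimeSpan.FromSeconds(10));
        HostingEnvironment.UnregisterObject(this);  // tell ASP.NET we're done
    }
}
```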
1) Are there any design considerations I need to think about for the objects that I want to cache?
2) If I want to implement a manager of some kind that is around during the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, with any managers created uniquely in the constructor/init method of the page being served?
There's a difference between data caching and output caching. I think you're looking for data caching which means caching some object for use in the application. This can be done via HttpContext.Current.Cache. You can also cache page output and differentiate that on conditions so the page logic doesn't have to run at all. This functionality is also built into ASP.NET. Something to keep in mind when doing data caching is that you need to be careful about the scope of the things you cache. For example, when using Entity Framework, you might be tempted to cache some object that's been retrieved from the DB. However, if your DB Context is scoped per request (a new one for every user visiting your site, probably the correct way) then your cached object will rely on this DB Context for lazy loading but the DB Context will be disposed of after the first request ends.
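As a rough sketch of that last point (ProductDto, Product, and the products query are placeholder names), cache a plain detached object rather than the entity itself, so nothing in the cache holds on to the per-request context:

```csharp
using System;
using System.Linq;
using System.Web;
using System.Web.Caching;

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductCache
{
    // 'products' would typically be dbContext.Products from the current request's context.
    public static ProductDto GetProduct(IQueryable<Product> products, int id)
    {
        string key = "product:" + id;

        var cached = HttpContext.Current.Cache[key] as ProductDto;
        if (cached != null)
            return cached;

        // Project to a plain DTO so the cached object keeps no reference back to
        // the DbContext, which will be disposed when the request ends.
        var dto = products
            .Where(p => p.Id == id)
            .Select(p => new ProductDto { Id = p.Id, Name = p.Name })
            .Single();

        HttpContext.Current.Cache.Insert(
            key, dto, null,
            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);

        return dto;
    }
}
```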
3) If I have a long-term object initialized at app start, is this likely to be affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to get a little clearer :)
Perhaps the biggest issue with threading in ASP.NET is that it runs in the same process as all your requests. Even if this weren't an issue in and of itself, IIS can be configured (and if you don't own the servers, almost certainly will be configured) to shut down the app if it's inactive (which you mentioned), and that can cause issues for these threads. I have seen solutions to that ranging from making sure IIS never recycles the app pool, to spawning a thread that hits the site to keep it alive even on hosted servers.

IIS hosted web service method call randomly dies

We have an IIS hosted web method which is randomly dying on us about 10% of the time. In trying to debug this we've added Log.Debug() messages in front of every real code line and it appears to be dying on random lines.
Has anyone seen this or have an idea on how to debug this?
[Additional Details]
We've spent a lot of time looking at it and have discovered the following...
We have a separate self-hosted WCF service that accesses the same database and lives on the same machine. When it is under heavy load, the web method croaks every time. If it's not under load then things usually work fine (but not 100% of the time).
High CPU doesn't seem to be part of the problem. We ran a small app that created a high CPU load and the web service did not die.
The web service dies when we either new up an XmlSerializer (without doing the sgen precompilation) OR have NHibernate create a SessionFactory. The only things these two have in common are that they 1) seem like things people commonly do, and 2) seem like they would be fairly intensive.
We've added a Global.asax to try to capture Application_End and Application_Error, but neither event gets fired. To me this implies that we're not dealing with a normal application pool recycle?
Sounds like it might be a threading issue. You are using informative debug messages -- you should try to reproduce the issue while running the debugger and breaking on all exceptions. Make sure you check all the windows logs for information on why the app pool crashed.
Per comment: it's hard to say, but many things can cause a thread to appear to "just die." Memory issues: are you doing any interop? Improper marshaling: are you touching data on another thread? But I will play the probabilities and ask whether you're sure you're handling, and logging, any exception that might be happening. Are you sure you are not gobbling up an exception and not reporting it, somewhere down low? Is this a permissions issue? Are you running partial trust or on a low-privilege user account?
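As a minimal sketch of that suggestion, an Application_Error handler in Global.asax keeps unhandled exceptions from disappearing silently (Trace is used here; substitute the Log.Debug wrapper already mentioned in the question):

```csharp
using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        if (ex == null) return;

        // Unwrap the HttpUnhandledException wrapper to get at the real cause.
        Trace.TraceError("Unhandled exception: {0}", ex.GetBaseException());
    }
}
```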
Figured it out... two problems, really.
We added Global.asax, but it didn't get copied over, which explains why we weren't seeing any messages. We fixed this and found out that...
Our WCF log was being written out to the bin directory of the IIS web service (and ASP.NET restarts the application domain when files under bin change, which would explain the random deaths). In retrospect this is kind of silly, since the WS is an old-school web service; the WCF stuff is in the same directory for reasons unknown to us, since the person who originally set things up is gone.
Lesson learned: somewhere there is a message that explains everything; you just have to find it.
