I used to have a .NET Framework WCF service. It worked like clockwork. Once .NET 5 and CoreWCF were released, I migrated the service to, well, .NET 5 and CoreWCF.
Now it hangs after some time. If the load is light, it might work for a day or so (and then randomly die), but under heavier load it may die within an hour. When it dies, I can see that it starts consuming a lot of processing power.
The clients work fine, and if I restart the service they will pick the connection back up (after complaining for a while that the service is unavailable).
The service runs as a singleton.
Logging and then monitoring when the logging stops seems to be the only way to figure out what's going on. Unfortunately, such logging produces an outrageous amount of data, and the service seems to keep producing some data even after its "core" no longer operates properly.
Switching to gRPC is possible. However, this will require rewriting all clients.
Debugging is a no-go because the service dies somewhere between one hour and one day after starting, while it is handling multiple connections and timer events.
I wonder if anyone has any ideas. Thanks a lot!
Suggestion: Run the service in a debugger and then see where it dies and/or starts consuming CPU cycles.
If you can't use a debugger, then I think logging/monitoring is your best bet. Perhaps you can reduce the amount of data logged to focus only on the "core" of the service.
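One way to keep the log volume down is a heartbeat: the core loop and handlers just stamp a timestamp, and a watchdog logs only when the stamp goes stale. A minimal sketch, assuming .NET 5 (where you call Beat() and what tolerance you pick are up to you):

using System;
using System.Threading;

public static class Watchdog
{
    private static long _lastBeat = DateTime.UtcNow.Ticks;

    // Call this from the service's core loop and request handlers.
    public static void Beat() =>
        Interlocked.Exchange(ref _lastBeat, DateTime.UtcNow.Ticks);

    // Logs only when the core has gone quiet for longer than 'tolerance'.
    public static Timer Start(TimeSpan tolerance) =>
        new Timer(_ =>
        {
            var last = new DateTime(Interlocked.Read(ref _lastBeat), DateTimeKind.Utc);
            if (DateTime.UtcNow - last > tolerance)
                Console.Error.WriteLine($"No heartbeat since {last:O}; the core may be hung.");
        }, null, tolerance, tolerance);
}

When the watchdog fires, capturing a dump (for example with dotnet-dump collect -p <pid>) should show which threads are burning the CPU.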
Related
I am developing an ASP.NET site that needs to hit a few social media sites daily for blanket friend/follower data. I have chosen Arvixe business class as my hosting. In the future, if we grow, I'd love to get onto a dedicated server and run a Windows service; however, since that is not in the cards at this point, I need another reliable way of running scheduled tasks. I am familiar with running a thread timer from App_Code (global.asax), but app pool recycling will cause some problems with the timer. I have never used task scheduling like Quartz but have read a lot about it on Stack Overflow.

I was looking for some advice as to how to approach my goal. One big problem with either method is that the crawler threads will need to sleep for up to an hour regularly due to API call limits. My first thought was to use the DB to record the start and end of each job. When the app pool recycles, I would clear out any parts not completed and only start parts that do not have a record of running that day. What do the experts here think? Any good links to sample architecture for this type of scheduling?
It doesn't really matter what method you use, whether you roll your own or use Quartz. You are at the mercy of ASP.NET/IIS because that's where you want to host it.
Do you have a spare computer laying around that can just run a scheduled task and upload data to a hosted database? To be honest, it's possibly safer (depending on your use case) to just do it that way than to try to run a scheduler in ASP.NET.
Somewhat along the lines of Bryan's post:
Find a spare computer.
Instead of allowing DB access, have it call a web service on your site. This service call should initiate the process you are trying to run. Don't try to put params into it; just something like StartProcess() should work fine.
As far as going to sleep and resuming later, take a look at Workflow Foundation. There are some nice built-in features to persist state.
Don't expose your DB to the outside world; instead, expose that page or web service and wrap some security around it. WCF has some nice built-in security features for that.
The best part is when you decide to move off, you can keep your web service and have it called from a Windows Service in the same manner.
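A rough sketch of that shape (all names and the URL are made up; depending on how you expose the endpoint you may need to enable the HttpPost protocol or use a SOAP client instead):

// Site side: a minimal ASMX-style entry point that just kicks off the job.
public class JobService : System.Web.Services.WebService
{
    [System.Web.Services.WebMethod]
    public void StartProcess()
    {
        JobRunner.RunPendingWork(); // hypothetical: your actual job logic
    }
}

// Spare-computer side: a tiny console app run by Windows Task Scheduler.
class Caller
{
    static void Main()
    {
        using (var client = new System.Net.WebClient())
        {
            client.UploadValues(
                "http://example.com/JobService.asmx/StartProcess",   // hypothetical URL
                new System.Collections.Specialized.NameValueCollection());
        }
    }
}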
As long as you use a persistent job store (like a database) and you write and schedule your jobs so that they can handle things like being killed halfway through, having IIS recycle your process is not that big a deal.
The bigger issue is that IIS shuts your site down if it doesn't have traffic. If you can keep your site up, set the misfire policy appropriately, and make sure your jobs store any state data needed to pick up where they left off, you should be able to pull it off.
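For reference, wiring Quartz.NET to a persistent store with a misfire policy is only a handful of lines. A sketch assuming Quartz.NET 2.x and a SQL Server job store (the connection string and all names are placeholders):

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

public class CrawlJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Read persisted progress, do one slice of work, save progress, so a
        // recycle halfway through only loses the current slice.
    }
}

public static class SchedulerSetup
{
    public static IScheduler Start()
    {
        var props = new NameValueCollection
        {
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.driverDelegateType", "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.dataSource.default.provider", "SqlServer-20" },
            { "quartz.dataSource.default.connectionString", "..." } // your DB
        };
        var scheduler = new StdSchedulerFactory(props).GetScheduler();

        var job = JobBuilder.Create<CrawlJob>().WithIdentity("crawl").Build();
        var trigger = TriggerBuilder.Create()
            .WithIdentity("crawl-hourly")
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInHours(1).RepeatForever()
                // After a recycle/misfire, run once and get back on schedule.
                .WithMisfireHandlingInstructionNextWithExistingCount())
            .Build();

        scheduler.ScheduleJob(job, trigger);
        scheduler.Start();
        return scheduler;
    }
}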
If you are language-agnostic and don't mind writing your "job-activation-script" in your favourite, Linux-supported language...
One solution that has worked very well for me is:
Getting relatively cheap, stable Linux hosting (from reputable companies),
Creating a WCF service on your .NET-hosted platform that will contain the logic you want to run regularly (RESTful or SOAP or XML-RPC... whichever suits you),
Handling the calls through your Linux-hosted cron jobs, written in your language of choice (I use PHP).
It has worked very well, like I said: no VPS expense, configurable, and externally activated. I have one central place where my jobs are activated, with 99 to 100% uptime (never had any failures).
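For completeness, the cron side of this can be a single crontab entry (the paths here are made up):

# Run hourly; the PHP script just calls the .NET-hosted service.
0 * * * * /usr/bin/php /home/me/jobs/start_process.php >> /var/log/jobs.log 2>&1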
We have a server farm of about 40 servers that we roll code to every couple of weeks. One thing we noticed when we roll the code live: after deploying the assemblies, performing an IIS reset, and putting the server back in the BigIP (F5), once it receives traffic the server will lock up for about 10 minutes and clients will just spin until they eventually time out.
Looking at perfmon, we can see a dramatic spike in the number of finallys and the number of pinned objects, which led me to investigate memory issues.
So one thing I started looking into is our Unity IoC configuration. In Global.asax.cs we register about 15 interfaces, most of which use ContainerControlledLifetimeManager to manage lifetime. Normally there is never a problem with the code except in this ten-minute window, so my first thought was a memory or resource management issue.
Does anyone know if you have to explicitly Dispose() your Unity container, or is this handled by Unity automagically somehow? I noticed today that there was no Dispose wiring in place for Application_End, so my thought was that maybe when the servers come back up after the IIS reset there is a Unity or object resource issue until the GC comes around and frees the memory (the ten minutes it takes to come up).
Any help is appreciated!
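For reference, Unity does not dispose a container automatically; if you do want that wiring, it is a couple of lines in Global.asax.cs (assuming the container lives in a static field, here called _container):

// Global.asax.cs sketch; _container is the app-wide UnityContainer created in Application_Start.
protected void Application_End(object sender, EventArgs e)
{
    // Disposes the container; ContainerControlledLifetimeManager instances
    // that implement IDisposable are disposed along with it.
    _container.Dispose();
}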
Performing an iisreset will kill the currently running w3wp.exe process, so it's unlikely that not properly disposing of Unity objects in Application_End would cause performance issues on startup. It is possible that the old web process doesn't properly release file system or other resources the new web process depends upon, but I think you'd see file access or some other errors if that were the case.
Since you're performing an iisreset, I would look closely at the code that runs when the application starts for the first time. Maybe there are some components that take a lot of time to start up (say, a singleton-type class that downloads and caches a bunch of stuff from the database) that are causing the slowdown, possibly only when combined with the stress of handling all of the waiting HTTP requests. Also, keep in mind that ASP.NET incurs a bunch of overhead as it compiles the application for first use. Since your web application is behind a load balancer, you may want to come up with a way to "prime" the application on each individual web server before you add that web server back to the load balancer, which could be accomplished by just loading a page locally on that web server. Priming the application would let the web app initialize itself without having to handle any outside requests, which should improve the startup time.
Long story short, I would investigate startup issues and see what I could tune there before I focused on shutdown issues.
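If you go the priming route, the warm-up can be a tiny console app that the deployment script runs on each box before re-adding it to the F5 (the URLs are illustrative):

using System;
using System.Net;

class Warmup
{
    static void Main()
    {
        // Hit a few representative pages locally so ASP.NET compilation and
        // expensive singletons are initialized before real traffic arrives.
        string[] urls = { "http://localhost/", "http://localhost/Default.aspx" };
        using (var client = new WebClient())
        {
            foreach (string url in urls)
            {
                try { client.DownloadString(url); }
                catch (WebException ex) { Console.WriteLine(url + ": " + ex.Status); }
            }
        }
    }
}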
My company has an application that keeps track of information related to web sites that are hosted on various machines. A central server runs a windows service that gets a list of sites to check, and then queries a service running on those target sites to get a response that can be used to update the local data.
My task has been to apply multithreading to this process to reduce the time it takes to run through all the sites (almost 3000 sites that take about 8 hours to run sequentially). The service runs through successfully when it's not multithreaded, but the moment I spread the work out to multiple threads (testing with 3 right now, plus a watcher thread) there's a bizarre crash that seems to originate from the call to the remote services that are supposed to provide the data. It's a SOAP/XML call.
When run on the test server, the service just gives up and doesn't complete its task, but doesn't stop running. When run through the debugger (Dev Studio 2010), the whole thing just stops. I'll run it, and seconds later it'll stop debugging, but not because it completed. It does not throw an exception or give me any kind of message. With breakpoints I can walk through to the point where it just stops. Event logging leads me to the same spot. It stops on the line of code that tries to get a response from the web service on the other sites. And again: it only does that when multithreaded.
I found some information that suggested there's a limit to the number of connections that defaults to 2. The proposed solution is to add some tags to the app.config, but that hasn't solved the problem...
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="20"/>
  </connectionManagement>
</system.net>
I still think it might be related to the number of allowed connections, but I haven't been able to find good information about it online. Is there something straightforward I'm missing? Any help would be much appreciated.
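One sanity check: a Windows service reads its own <exename>.exe.config, not web.config, so the snippet may simply not be getting picked up. The same limit can also be set in code, which takes the guesswork out:

// Programmatic equivalent of the connectionManagement element; set this
// once at service startup, before the first outbound request is made.
System.Net.ServicePointManager.DefaultConnectionLimit = 20;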
No crash, however bizarre, will escape a stack dump. Try going through that dump and see if it points to some obvious function.
Are you using a third-party tool or some other component for the actual service call? If yes, then check the documentation (or contact the person who wrote it) to confirm that the components are thread-safe. If they are not, you have a large task ahead. :) (I have worked on DBs which are not thread-safe, so trust me, it is not uncommon to find a few global static variables thrown around.)
Lastly, if you are 100% sure that this is due to multiple threads, put a lock in your worker thread. Initially, say it covers the entire main while-loop. Theoretically it should not crash, because even though it is multi-threaded, you have serialized the execution.
The next step is to reduce the scope of the lock. Say there are three functions in the main while-loop, f1(), f2(), and f3(); start by locking f2() and f3() while leaving f1() unlocked. If things work out, then the problem is somewhere in f2() or f3(). A sketch of the idea follows.
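A minimal sketch of that narrowing technique (f1/f2/f3 and the loop condition are placeholders for the real worker code):

private static readonly object _sync = new object();

void WorkerLoop()
{
    while (keepRunning)  // placeholder loop condition
    {
        f1();            // currently outside the lock
        lock (_sync)     // serialize the suspect section across all threads
        {
            f2();
            f3();
        }
    }
    // If the crash goes away, the race lives in f2()/f3(); keep shrinking
    // the locked region until the culprit function is isolated.
}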
I hope you get the idea of what I am suggesting.
I know this is like a blind man feeling out an elephant, but that is the best you can do if your code uses a lot of external components which are not adequately documented.
I have been working on many applications which run as Windows services or scheduled tasks.
Now, I want to make sure that these applications are fault-tolerant and reliable. For example, I have a service that runs every hour. If the service crashes while it's running, I'd like the application to run again for the same period (there are several things involved with this, including transactions in the data processing) to avoid data loss. Moreover, I'd like the program to report the error with details. My goal is to avoid data loss and to avoid falling behind in running the program.
I have built a class library that a user can import into a project. The library is supposed to keep information about the running instance of the program, i.e., the program reads and writes information such as the running interval, running status, etc. This data is stored in a database.
I was curious whether there are best practices for making scheduled tasks/Windows services fault-tolerant and reliable.
Edit: I am talking about independent tasks or services which run on different servers, and my goal is to make sure that the service will keep running, report any failures, and recover from them.
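As a concrete sketch of the kind of bookkeeping described above (the table, column, and helper names are all hypothetical):

using System;
using System.Data.SqlClient;

class JobBookkeeping
{
    // Assumed table: JobRun(JobName, PeriodStart, Status, ErrorDetail).
    public static void RunPeriod(string connectionString, DateTime periodStart)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Mark the start of this period's run; a restarted service can
            // query for 'Running' rows to find work that never completed.
            using (var cmd = new SqlCommand(
                "INSERT INTO JobRun (JobName, PeriodStart, Status) " +
                "VALUES (@job, @period, 'Running')", conn))
            {
                cmd.Parameters.AddWithValue("@job", "HourlyImport"); // hypothetical job name
                cmd.Parameters.AddWithValue("@period", periodStart);
                cmd.ExecuteNonQuery();
            }
            try
            {
                DoWorkFor(periodStart);                  // hypothetical: the real processing
                SetStatus(conn, periodStart, "Completed");
            }
            catch (Exception ex)
            {
                SetStatus(conn, periodStart, "Failed: " + ex.Message);
                throw;  // rethrow so the failure is also reported/logged
            }
        }
    }

    static void DoWorkFor(DateTime periodStart) { /* ... */ }

    static void SetStatus(SqlConnection conn, DateTime periodStart, string status)
    {
        using (var cmd = new SqlCommand(
            "UPDATE JobRun SET Status = @s " +
            "WHERE JobName = 'HourlyImport' AND PeriodStart = @period", conn))
        {
            cmd.Parameters.AddWithValue("@s", status);
            cmd.Parameters.AddWithValue("@period", periodStart);
            cmd.ExecuteNonQuery();
        }
    }
}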
I'm interested in what other people have to say, but I'll give you a few points that I've stumbled across:
Make an event handler for unhandled exceptions. This way you can clean up resources, write to a log file, email an administrator, or do anything else you need to instead of having it crash.
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(AppUnhandledExceptionEventHandler);
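The handler paired with that subscription might look like this (the event source name is illustrative; the process still terminates after the handler runs, so treat it as last-chance logging):

static void AppUnhandledExceptionEventHandler(object sender, UnhandledExceptionEventArgs e)
{
    var ex = e.ExceptionObject as Exception;
    // Last-chance logging; do cleanup here, but don't try to keep running.
    System.Diagnostics.EventLog.WriteEntry("MyService",   // illustrative source name
        "Unhandled exception: " + ex, System.Diagnostics.EventLogEntryType.Error);
}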
Override any ServiceBase event handlers you need in the main part of your application. OnStart and OnStop are pretty crucial, but there are many others you can use. http://msdn.microsoft.com/en-us/library/system.serviceprocess.servicebase%28v=VS.71%29.aspx
Beware of timers. Windows Forms timers won't work right in a service. Use System.Threading.Timer or System.Timers.Timer. Best Timer for using in a Windows service
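For instance, inside your ServiceBase subclass (the class name, interval, and work method are illustrative):

public partial class MyService : System.ServiceProcess.ServiceBase
{
    // System.Timers.Timer needs no message pump, so it behaves in a service.
    private System.Timers.Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new System.Timers.Timer(60 * 60 * 1000);   // hourly, in milliseconds
        _timer.Elapsed += (sender, e) => DoScheduledWork(); // hypothetical work method
        _timer.AutoReset = true;
        _timer.Start();
    }

    private void DoScheduledWork() { /* ... */ }
}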
If you are updating on a thread, make sure you use a lock() or monitor in key sections to make sure everything is thread-safe.
Be careful not to use anything user-specific, as a service runs without a specific user context. I noticed some of my SQL connection strings were no longer working for Windows authentication, etc. I have also heard of people having trouble with mapped drives.
Never make a service with a UI. In fact for Vista and 7 they make it nearly impossible to do anyway. It shouldn't require user interaction, the most you can do is send a message with a WIN32 function. MSDN claims making interactive services is bad practice. http://msdn.microsoft.com/en-us/library/ms683502%28VS.85%29.aspx
For debugging purposes, it is way cool to make a service run as a console application until you get it doing what you want it to. Awesome tutorial: http://mycomponent.blogspot.com/2009/04/create-debug-install-windows-service-in.html
Anyway, hope that helps a little, but those are just a couple of things I poked around to find on my own.
Something obvious: don't run all your tasks at the same time. Try to schedule them so only one task is using an expensive resource at any time (if possible). For example, if you need to send out newsletters and some specific notifications, schedule them at different times. If two tasks need to clean up something in the database, let one run after the other.
Also schedule tasks to run outside of normal business hours - at night obviously.
We have an IIS-hosted web method which is randomly dying on us about 10% of the time. In trying to debug this, we've added Log.Debug() messages in front of every real line of code, and it appears to be dying on random lines.
Has anyone seen this or have an idea on how to debug this?
[Additional Details]
We've spent a lot of time looking at it and have discovered the following...
We have a separate self-hosted WCF service that accesses the same database and lives on the same machine. When it is under heavy load, the web method croaks every time. If it's not under load, then things usually work fine (but not 100%).
High CPU doesn't seem to be part of the problem. We ran a small app that created a high CPU load, and the web service did not die.
The web service dies when we either new up an XmlSerializer (without doing the sgen precompilation) OR have NHibernate create a SessionFactory. The only things these two have in common are that they 1) seem like things people commonly do, and 2) seem like they would be fairly intensive.
We've added a Global.asax to try to capture Application_End and Application_Error, but neither event gets fired. To me this implies that we're not dealing with a normal application pool recycle?
Sounds like it might be a threading issue. You are using informative debug messages; you should try to reproduce the issue while running the debugger and breaking on all exceptions. Make sure you check all the Windows logs for information on why the app pool crashed.
Per comment: It's hard to say, but many things can cause a thread to appear to "just die." Memory issues: are you doing any interop? Improper marshaling: are you touching data on another thread? But I will play the probabilities and ask if you're sure you're handling any exception that might be happening and logging it. Are you sure you are not gobbling up an exception and not reporting it, somewhere down low? Is this a permissions issue? Are you running partial trust or on a low-privilege user account?
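One way to rule out swallowed exceptions is a last-chance hook in Global.asax.cs that writes the real error somewhere durable (the event source name is illustrative, and writing to the event log may need permissions):

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError(); // the actual exception, before ASP.NET unwinds it
    System.Diagnostics.EventLog.WriteEntry("MyWebApp",
        "Application_Error: " + ex, System.Diagnostics.EventLogEntryType.Error);
}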
Figured it out... two problems, really.
We added Global.asax, but it didn't get copied over, which explains why we weren't seeing any messages. We fixed this and found out that...
Our WCF log was being written out to the bin directory of the IIS web service. In retrospect this is kind of silly, since the WS is an old-school web service; the WCF stuff is in the same directory only for some reason unknown to us, since the person who initially set things up is gone. (Writing to the bin folder causes ASP.NET to detect a change and recycle the application domain, which explains the random deaths.)
Lesson learned.. Somewhere there is a message that explains everything.. you just have to find it.