Shutting down a Long-running process in a Windows Service - c#

I have a Windows Service that performs a long-running process. It is triggered by a timer, and the entire process can take a few minutes to complete. When the timer elapses, the service instantiates a management object that performs the various tasks, logs the results and then exits.
I have not implemented anything to handle those occasions when the server is shut down in the middle of the process. That could cause some problems. What is the best practice for handling this?

I can only give vague suggestions since I don't know what task you are actually doing.
If it has something to do with a database, there are transactions that can be rolled back if they are not committed.
If it involves some file manipulation, perhaps take a look at this article on Transactional NTFS. You can use it in combination with a TransactionScope object to ensure an atomic operation.
If you are dealing with web services, the service boundary will dictate when one transaction ends and the next begins; use a compensation model: if you break something on your side, you need to provide a way later on, after recovery, to notify the other end or run compensation scripts against it. (Think about ordering a book online and how backorders, cancellations, etc. are handled.)
For a tracking mechanism, log every step with a timestamp so you can troubleshoot if something like a shutdown occurs.
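Tying the first two suggestions together, here is a rough sketch of a transactional batch that gives up cleanly on shutdown. It assumes SQL Server; the table, the connection handling and the LoadWorkItemIds helper are placeholders, not anything from the question.

using System;
using System.Data.SqlClient;
using System.Transactions;

static class BatchProcessor
{
    // Sketch only: LoadWorkItemIds and the WorkItems table are hypothetical.
    public static void ProcessBatch(string connectionString, Func<bool> shutdownRequested)
    {
        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();   // opened inside the scope so it enlists in the transaction

            foreach (int id in LoadWorkItemIds(connection))
            {
                if (shutdownRequested())
                    return;      // leaving without Complete() rolls the whole batch back

                using (var cmd = new SqlCommand("UPDATE WorkItems SET Processed = 1 WHERE Id = @id", connection))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    cmd.ExecuteNonQuery();
                }
            }

            scope.Complete();    // commit only when the batch ran to completion
        }
    }
}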

If you're describing essentially a batch process, it's fine to have a timer that does work at an interval; much of the world works that way.
If it's long-running, try to keep your units of work, or batches, small enough that your process can at least check whether it has been signaled to stop. This allows the service to exit gracefully instead of essentially ignoring the service stop message.
Somewhere in your timer function you have a property, IsShutdownRequired or some such, that you're checking (assuming some loop processing). This property is set to true by the service stop control message, which allows your process to exit gracefully by either not starting more work or, as Jimmy suggested, rolling back that work if it is in a transaction.
Ideally, several smaller batches are better than one big one.
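As a sketch of that idea inside a ServiceBase-derived class; the flag name, GetPendingBatches and ProcessBatch are illustrative, not from the question:

// Illustrative only: check a stop flag between small batches.
private volatile bool _shutdownRequested;

protected override void OnStop()
{
    _shutdownRequested = true;                   // signal the worker; OnStop itself returns quickly
}

private void OnTimerElapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    foreach (var batch in GetPendingBatches())   // hypothetical helper returning small batches
    {
        if (_shutdownRequested)
            return;                              // exit gracefully instead of ignoring the stop message

        ProcessBatch(batch);                     // hypothetical helper; ideally transactional, as above
    }
}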

How do I ensure that any of the threads are not waiting for something indefinitely?

I'm in the process of writing a multi-threaded application. Here's my case.
I grab a thousand records from the database, divide them into 5 chunks of list objects, and create 5 threads to process them. I do the same thing every minute, as long as there are records remaining in the database:
Task.Factory.StartNew(() => ProcessRecords(listRecords))
Inside the ProcessRecords method there is a small database update and some mail sending. (I'm using System.Net.Mail for email and no ORM for the database operations.)
Now I am worried that a thread might not complete because of some unknown issue. What will happen in that situation? Let's say one thread (or even more) keeps waiting indefinitely on a database deadlock or something; what happens to my application? It will keep adding new threads with new sets of records while the old ones never finish. How can I implement something like a timeout in this situation?
I want to run the process but terminate it after 5 minutes if it has not been able to complete.
Check out CancellationToken and CancellationTokenSource. You can use them to cancel the task if you decide (by whatever means you prefer) that it's been running too long.
Alternatively, you could build that into the ProcessRecords() method itself: just have it commit seppuku if it runs too long by tracking its own start time and checking the elapsed time now and then; that could be simpler.
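A minimal sketch combining both suggestions, assuming ProcessRecords can be made cooperative; the five-minute budget comes from the question, while Record, UpdateDatabase and SendMail stand in for the poster's own code:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

static class Worker
{
    public static Task Start(List<Record> listRecords)
    {
        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5));   // auto-cancels after 5 minutes
        return Task.Factory.StartNew(() => ProcessRecords(listRecords, cts.Token), cts.Token);
    }

    static void ProcessRecords(List<Record> records, CancellationToken token)
    {
        foreach (var record in records)
        {
            token.ThrowIfCancellationRequested();   // checked between records so the timeout can take effect
            UpdateDatabase(record);                 // hypothetical helper
            SendMail(record);                       // hypothetical helper
        }
    }
}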
That said, if you haven't already given it a shot, you might check whether .AsParallel() or Parallel.ForEach will save you some headaches here. There are a lot of cases where you can leave your parallelization woes to the framework entirely.
Parallel.ForEach(db.Records, r => ProcessRecord(r));
Edit:
Parallel.ForEach(db.Records, ProcessRecord);
Yes. :)
Further edit:
For the OP: no, the TaskFactory doesn't offer anything like that out of the box. If you want to terminate a task from outside the task, you'll need to roll your own mechanism using some kind of watcher thread to keep track of which tasks you have running, how long they've been running, and their respective cancellation tokens (or maybe just a bool you check at the top of a while loop...).
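A hedged sketch of such a watcher, where every name and the five-minute threshold are illustrative; cancellation stays cooperative, so the work still has to observe its token:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

static class TaskWatchdog
{
    // Remember each task's start time and its cancellation source.
    static readonly ConcurrentDictionary<Task, (DateTime Started, CancellationTokenSource Cts)> Running =
        new ConcurrentDictionary<Task, (DateTime Started, CancellationTokenSource Cts)>();

    public static void StartTracked(Action<CancellationToken> work)
    {
        var cts = new CancellationTokenSource();
        var task = Task.Factory.StartNew(() => work(cts.Token), cts.Token);
        Running[task] = (DateTime.UtcNow, cts);
    }

    public static void Tick()   // call periodically, e.g. from a timer
    {
        foreach (var entry in Running)
        {
            if (entry.Key.IsCompleted)
                Running.TryRemove(entry.Key, out _);
            else if (DateTime.UtcNow - entry.Value.Started > TimeSpan.FromMinutes(5))
                entry.Value.Cts.Cancel();   // the work must notice the token for this to do anything
        }
    }
}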

Converting threaded app to service

I currently have an application which is basically a wrapper for ~10 "LongRunning" Tasks. Each thread should keep running indefinitely, but sometimes they lock up or crash, and sometimes the wrapper app spontaneously exits (I haven't been able to track that down yet). Additionally, the wrapper application can currently only be run by one user, and that user has to be the one to restart the threads or relaunch the whole app.
I currently have a monitor utility to let me know when the threads stop doing work so that they can be manually restarted, but I'd like to automatically restart them instead. I'd also like the wrapper to be available to everyone to check the status of the threads, and for the threads to be running even when the wrapper isn't.
Based on these goals, I think I want to separate the threads into a Windows Service, and convert the wrapper into something which can just connect to the service to check its status and manipulate it.
How would I go about doing this? Is this a reasonable architecture? Should I turn each thread into a separate service, or should I have a single multi-threaded service?
Edit: All the tasks log to the same set of output files (via a TextWriter.Synchronized(StreamWriter)), and I would want to maintain that behavior.
They also all currently share the same database connection, which means I need to get them all to agree to close the connection at the same time when it's necessary. However, if they were split up they could each use their own database connection, and I wouldn't need to worry about synchronizing that. I actually suspect that this step is one of the current failure points, so splitting it up would be a Good Thing.
I would suggest staying with one multithreaded service if possible. Just make sure the threads are handled correctly when a Service Stop is triggered. Put stop flags inside blocks of code that take a long time to execute and check them, so the service stays responsive to the Stop event. Log any exceptions, and make sure to wait for all threads to exit before the service finally stops. This prevents the same "task" from ending up running in multiple threads.
Maintaining one service is, in the end, easier than maintaining multiple services.
Splitting into multiple services would be reasonable if you have separate pieces of functionality that need to be able to run independently of each other.
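A rough sketch of that single multithreaded service, as a fragment of a ServiceBase subclass; WorkerLoop, the ten-worker count and the 30-second grace period are illustrative:

// Assumes using System, System.Linq, System.Threading and System.Threading.Tasks.
private readonly CancellationTokenSource _stopSignal = new CancellationTokenSource();
private Task[] _workers;

protected override void OnStart(string[] args)
{
    _workers = Enumerable.Range(0, 10)
        .Select(i => Task.Factory.StartNew(() => WorkerLoop(i, _stopSignal.Token), TaskCreationOptions.LongRunning))
        .ToArray();
}

protected override void OnStop()
{
    _stopSignal.Cancel();                             // the stop flag each worker checks in its long blocks
    Task.WaitAll(_workers, TimeSpan.FromSeconds(30)); // wait for the threads before the service fully stops
}

If a clean shutdown can take longer than the service manager's default timeout, ServiceBase.RequestAdditionalTime in OnStop is the usual escape hatch.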
I don't think moving the threads to a Windows Service removes any of the problems. The service will still crash randomly and the threads will still exit randomly.
I assume that your long-running tasks implement a kind of worker loop. Wrap the body of that loop in a try-catch and log all exceptions. Don't rethrow them, so that the task never exits. Then examine the logs to find the bugs.
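Something along these lines, as a sketch; DoOneUnitOfWork and Log stand in for whatever the tasks actually do:

// Keep the task alive: catch, log, and keep looping.
static void WorkerLoop(CancellationToken stopToken)
{
    while (!stopToken.IsCancellationRequested)
    {
        try
        {
            DoOneUnitOfWork();   // hypothetical body of one of the ~10 long-running tasks
        }
        catch (Exception ex)
        {
            Log(ex);             // hypothetical logger; examine these logs to find the bugs
        }
    }
}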

How do I cancel and roll back part of a workflow

I have a very long-running workflow that moves video files around between video processing devices and then reports the files' state to a database, which is used to drive a UI.
At times the users press a button on the UI to "Accept" a file into a video storage server. This involves copying a file from one server to another.
They have asked if this activity can be cancelled.
I've looked at the WF4 documentation and I can't see a way to roll back part of a workflow.
Is this possible, and what technique should I use?
There are two basic built-in activities for reverting work:
The TransactionScope activity, for ACID transactions.
The CompensableActivity, for long-running work.
With the CompensableActivity you add activities to the compensation handler to undo work previously done. The Compensate activity can be used to trigger compensation. If the work is never compensated, the confirmation handler runs instead, either automatically at the end of the workflow or when you use the Confirm activity.
See A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 by Matt Milner for more details.
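A small sketch of what that looks like when the workflow is built in code; the WriteLine activities and the userCancelled variable stand in for the real copy work, clean-up work and cancel signal:

using System.Activities;
using System.Activities.Statements;

var copyToken = new Variable<CompensationToken>("copyToken");
var userCancelled = new Variable<bool>("userCancelled", true);   // placeholder for the real cancel request

var workflow = new Sequence
{
    Variables = { copyToken, userCancelled },
    Activities =
    {
        new CompensableActivity
        {
            Result = copyToken,
            Body = new WriteLine { Text = "Copy the file to the storage server" },
            CompensationHandler = new WriteLine { Text = "Delete the partially copied file" }
        },
        // If the user cancels the "Accept", trigger the undo work recorded above:
        new If
        {
            Condition = userCancelled,
            Then = new Compensate { Target = copyToken }
        }
    }
};

WorkflowInvoker.Invoke(workflow);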
Okay, so let's first say that "rolling back" what was already uploaded will have to be done by hand, so wherever you're storing those chunks you'll need to clean them up by hand when the user cancels.
Now, on to the workflow itself. In my opinion you could set up your FlowChart roughly as described below.
Alright, so let's break down this workflow. The entire service should be correlated on some client key, so that you can start the service with Start once per client to keep the startup costs down.
Next, when said client wants to start a transfer, you call BeginTransfer, which moves into the transfer loop. The transfer loop is set up so that you can cancel between chunks if necessary by calling CancelTransfer.
That same branch, in this model, is also used to finish the transfer, because it gets out of the loop; so when you're done transferring chunks, just call CancelTransfer (if you don't like that, just set up a different branch that looks exactly the same).
Finally, when you're in the process loop you can SoftExit the entire workflow and shut it down, so you can kill it softly when maintenance is necessary; and when the client is finished with its connection, it needs to call SoftExit to dispose of it.
I'm not sure if I totally understand your scenario, but I think you would need to run your transfer process on an asynchronous thread that, from time to time, checks a "cancel" variable and performs a rollback when it is set. This variable can be set from the main thread by your UI.
Of course, this will only let you cancel between transfers, not in the middle of a single transfer.
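As a sketch of that shape, with the "cancel" variable expressed as a CancellationToken that the UI's button handler cancels; the chunk size and the delete-on-cancel cleanup are illustrative choices:

using System;
using System.IO;
using System.Threading;

// Copy in chunks on a background thread; cancellation is only observed between chunks.
static void CopyWithCancel(string source, string destination, CancellationToken cancel)
{
    try
    {
        using (var input = File.OpenRead(source))
        using (var output = File.Create(destination))
        {
            var buffer = new byte[1024 * 1024];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                cancel.ThrowIfCancellationRequested();   // checked between chunks, not mid-chunk
                output.Write(buffer, 0, read);
            }
        }
    }
    catch (OperationCanceledException)
    {
        File.Delete(destination);   // "roll back" by removing the partial copy
        throw;
    }
}

Kick it off with Task.Factory.StartNew and keep the CancellationTokenSource on the UI side, so the button handler only has to call Cancel().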

Should I have a thread that sleeps then calls a method?

I'm wondering whether this would work. I have a simple C# command-line application that sends out emails at a set time (through the Windows Task Scheduler).
I am wondering: if the SMTP server were to fail, would this be a good idea?
In the SmtpException handler I put the thread to sleep for, say, 15 minutes. When it wakes up it just calls the method again. Hopefully by then the SMTP server will be back up; if not, it keeps doing this until the SMTP server is back online.
Is there some downside to this that I am missing? I would, of course, do some logging to record that this is happening.
This is not a bad idea; in fact, what you are effectively implementing is a simple variation of the Circuit Breaker pattern.
The idea behind the pattern is that if an external resource is down, it will probably not come back up a few milliseconds later; it may need some time to recover. Typically the circuit breaker is used as a means to fail fast, so that the user gets an error sooner, or so that you don't consume more resources on the failing system. When you have work that can be queued and does not require instant delivery, as you do, it is perfectly reasonable to wait around for the resource to become available again.
Some things to note, though: you might want a maximum retry count before failing completely, and you might want to start off with a delay of less than 15 minutes.
Exponential back-off is the common choice here, I think: like the strategy TCP uses when trying to make a connection, double the timeout on each failed attempt. It prevents your program from flooding the event log with repeated failure notifications before somebody notices that something is wrong, which can take a while.
However, driving this with the Task Scheduler certainly doesn't help; you really ought to reprogram it so your program isn't consuming machine resources needlessly. Using the ITaskService interface from .NET isn't that easy, though. Check out this project.
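As a sketch of the exponential back-off idea above; the retry cap, the one-minute starting delay and the Log helper are arbitrary choices, not from the question:

using System;
using System.Net.Mail;
using System.Threading;

// Double the delay after each SmtpException; give up after a fixed number of attempts.
static void SendWithBackoff(SmtpClient client, MailMessage message, int maxAttempts = 6)
{
    var delay = TimeSpan.FromMinutes(1);
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            client.Send(message);
            return;
        }
        catch (SmtpException ex)
        {
            if (attempt >= maxAttempts)
                throw;                  // fail completely and let that be reported
            Log(ex, delay);             // hypothetical logger
            Thread.Sleep(delay);
            delay = delay + delay;      // exponential back-off
        }
    }
}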
I would strongly recommend using a Windows Service. Long-running processes that run in the background, wait for long periods of time and need a controlled, logged, 'monitorable' lifetime: it's what Windows Services do.
Thread.Sleep would do the job, but if you want the wait to be interruptible from another thread or by something else going on, I would recommend Monitor.Wait (MSDN ref). You can then run your process in a thread created and managed by the Service, and if you need to stop or interrupt it, you call Monitor.Pulse on the same sync object and the thread will come back to life.
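As a sketch of that, written as a fragment of the service class; SendPendingMail, the sync object and the 15-minute wait are placeholders:

private readonly object _sync = new object();
private bool _stopRequested;

// Worker thread created by the service: wait 15 minutes, but wake immediately if pulsed.
private void WorkerLoop()
{
    lock (_sync)
    {
        while (!_stopRequested)
        {
            SendPendingMail();                              // hypothetical helper
            Monitor.Wait(_sync, TimeSpan.FromMinutes(15));  // interruptible sleep
        }
    }
}

protected override void OnStop()
{
    lock (_sync)
    {
        _stopRequested = true;
        Monitor.Pulse(_sync);   // wake the thread so it can exit promptly
    }
}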
Also ref:
Best architecture for a 30 + hour query
Hope that helps!
Infinite loops are always a worry. You should set it up so that it will fail after N attempts, and you definitely should have some way to shut it down from the user console.
Failure is not such a bad thing when the failure isn't yours. Let it fail and report why it failed.
Your choices are limited. Assuming that it is just a temporary condition and that it has worked at some point, all you can really do is notify someone of the problem, get somebody to fix it, and then retry the operation later. The one thing you must do is safeguard the messages so that you do not lose any.
If you stick with what you've got, watch out for concurrency; perhaps use a named mutex to ensure only a single process is running at a time.
I send out notifications to all our developers in a similar fashion. Only, I store the message body and subject in the database. After a message has been successfully processed I set a success flag in the database. This way it's easy to track and report errors, and retries are a cakewalk.

Long running Windows Services

Folks,
I want to develop a long-running Windows Service (it should keep working without problems for months), and I wonder which is the better option here:
1. Use a while(true) loop in the OnStart method
2. Use a timer that ticks every n seconds and triggers my code
Any other options?
Thanks
Essam
I wouldn't do #1.
I'd either do #2, or I'd spin off a separate thread during OnStart that does the actual work.
Anything but #1
The services manager (or the user, if they're the one operating the controls) expects OnStart() and OnStop() to return in a timely fashion.
The way it's usually done is to start your own thread in OnStart() that keeps things running and, of course, listens for an event telling it to stop.
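A minimal sketch of that shape; the event, the 30-second interval and DoWork are illustrative:

private Thread _worker;
private readonly ManualResetEvent _stopRequested = new ManualResetEvent(false);

protected override void OnStart(string[] args)
{
    _worker = new Thread(WorkLoop) { IsBackground = true };
    _worker.Start();          // OnStart returns immediately, keeping the service manager happy
}

private void WorkLoop()
{
    // Do a unit of work, then wait n seconds or until stop is signalled, whichever comes first.
    while (!_stopRequested.WaitOne(TimeSpan.FromSeconds(30)))
    {
        DoWork();             // hypothetical method
    }
}

protected override void OnStop()
{
    _stopRequested.Set();
    _worker.Join(TimeSpan.FromSeconds(10));
}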
It might be worth considering a scheduled task with a short interval instead; it saves writing a lot of plumbing code and dealing with the peculiarities of Windows Service timers.
Don't mess with the service controller code. If the service wants to stop, you will only make matters worse by using #1. And by the way, the service can always crash, in which case your while(true) won't help you at all.
If you really want to have a "running windows service (it should be working without problems for months)", you'd better make sure your own code is properly and thoroughly tested with unit and integration tests before you run it as a service.
I would NOT recommend #1.
What I've done in the past for the exact same scenario is create a scheduled task that runs every N seconds and kicks off a small script that does just two things: (1) it checks an "IsAlreadyRunning" flag, which is read from the database; (2) if the flag is true, the script immediately stops and exits, and if the flag is false, it kicks off a separate process (exe) on a new thread, which uses a service to perform a task that can be very short or sometimes really long, depending on the number of records to process. That process, of course, sets and resets the IsAlreadyRunning flag to ensure overlapping runs don't kick off the same actions. I have a service that's been running for years now with this approach and I've never had any problems with it. My main process uses a web service and a bunch of other things to perform some heavy backup operations.
The System.Threading.Timer class would seem appropriate for this sort of usage.
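For instance, here is a sketch of the non-reentrant way it is often wired into a service: the timer fires once and the callback re-arms it when the work is done, so a slow run can never overlap the next tick (the 30-second interval and DoWork are placeholders):

private System.Threading.Timer _timer;

protected override void OnStart(string[] args)
{
    _timer = new System.Threading.Timer(_ => Tick(), null, TimeSpan.Zero, Timeout.InfiniteTimeSpan);
}

private void Tick()
{
    try { DoWork(); }                                                             // hypothetical method
    finally { _timer.Change(TimeSpan.FromSeconds(30), Timeout.InfiniteTimeSpan); }
}

protected override void OnStop()
{
    _timer.Dispose();
}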
Is it doing:
1. a clean-up task, or
2. waking up and looking to see if it needs to run a task?
If it is something like #2, then using MSMQ would be more appropriate; with MSMQ the task would get done almost immediately.
