Schedule batch file to run from windows service - c#

I have been running a batch file every 5 minutes through Windows Task Scheduler. Since I faced quite a number of issues with it, like the scheduler going into a hung state and not recovering, I have decided to use a Windows service instead, primarily because I can invoke recovery actions by monitoring the service through our monitoring infrastructure.
So, I have created a service to run the batch file.
The service was built and installed, but the moment I start it, the service invokes the batch file, which loops through a set of tasks and keeps looping forever.
The batch file is something like this:
@echo off
:begin
cd c:\work\scripts\matm\
cscript //E:jscript c:\work\Scripts\matm\matm.js >> C:\work\Scripts\matm\matm.log
cscript //E:vbscript c:\work\Scripts\matm\TruncateLog.vbs >> c:\work\Scripts\matm\TruncateLog.log
del C:\work\Scripts\matm\Logs\myserver\matm.csv
timeout 600
goto begin
The batch script works perfectly when run from the command prompt and that is what I am expecting the service to invoke.
My thought is that the service gets into the loop as soon as we start it and never comes out of that.
I have defined the call to the batch file in the OnStart method, as below:
protected override void OnStart(string[] args)
My questions are:
a) How can I ensure that the service doesn't start running the batch file as soon as it starts? If my assumption is wrong, how can I make the service run the batch file every 5 minutes?
b) How do I stop the service? Or, how can I stop the service if proc is a new instance of the Process class that I have created in the OnStart() method?
Appreciate your help and feedback.
Regards,
Sash

Write a custom wrapper console application in C# that contains your error recovery logic.
Use the Windows Task Scheduler to invoke your wrapper application on a regular interval. You can configure it to only start the job if it is not already running. You can also make it kill the existing job.
No need to use a Windows Service. They are complicated.
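For illustration, here is a minimal sketch of such a wrapper, assuming the :begin/goto begin loop and the timeout are removed from the batch file so that one run performs one pass of the work (the batch path and the 4-minute hang limit are assumptions):

using System;
using System.Diagnostics;

class MatmWrapper
{
    // Assumed path; one run of the batch now does one pass of the work.
    private const string BatchPath = @"c:\work\scripts\matm\matm.bat";

    static int Main()
    {
        try
        {
            using (var proc = Process.Start(new ProcessStartInfo
            {
                FileName = "cmd.exe",
                Arguments = "/c \"" + BatchPath + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            }))
            {
                // Recovery logic: if a pass hangs for more than 4 minutes, kill it
                // so the next scheduled run starts from a clean state.
                if (!proc.WaitForExit(4 * 60 * 1000))
                {
                    proc.Kill();
                    return 1;
                }
                return proc.ExitCode;
            }
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex);
            return 2;
        }
    }
}

Task Scheduler can then run the wrapper every 5 minutes, monitor its exit code, and be configured not to start a new run while the previous one is still going.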
Given that a service might help you solve the problem, here's what I'd do: in the OnStart method, start a timer that ticks every 10 minutes and starts your script. Or, start a thread that sleeps for 10 minutes between calls to the script.
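A minimal sketch of the timer variant, again assuming the batch file's own loop is removed because the service now does the scheduling (the class name, interval and path are illustrative):

using System.Diagnostics;
using System.ServiceProcess;
using System.Timers;

public class MatmService : ServiceBase
{
    private const string BatchPath = @"c:\work\scripts\matm\matm.bat"; // assumed path
    private Timer _timer;
    private Process _proc;

    protected override void OnStart(string[] args)
    {
        // OnStart only sets up the timer and returns; the work happens on each tick.
        _timer = new Timer(5 * 60 * 1000); // every 5 minutes
        _timer.Elapsed += (s, e) => RunBatchOnce();
        _timer.AutoReset = true;
        _timer.Start();
    }

    private void RunBatchOnce()
    {
        // Skip this tick if the previous run has not finished yet.
        if (_proc != null && !_proc.HasExited) return;
        _proc = Process.Start("cmd.exe", "/c \"" + BatchPath + "\"");
    }

    protected override void OnStop()
    {
        // Stopping the service stops the timer and kills any run still in progress.
        _timer?.Stop();
        if (_proc != null && !_proc.HasExited) _proc.Kill();
    }
}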

Related

How to "trick" Azure Function into running longer than 10 minutes

Azure Functions have a time limit of 10 minutes. Suppose I have a long-running task, such as downloading a file that takes an hour.
[FunctionName("PerformDownload")]
[return: Queue("download-path-queue")]
public static async Task<string> RunAsync([QueueTrigger("download-url-queue")] string url, TraceWriter log)
{
    string downloadPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
    log.Info($"Downloading file at url {url} to {downloadPath} ...");
    using (var client = new WebClient())
    {
        await client.DownloadFileTaskAsync(new Uri(url), downloadPath);
    }
    log.Info("Finished!");
    return downloadPath;
}
Is there any hacky way to make something like this start and then resume in another function before the time limit expires? Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
(On a slightly related note, is plain Azure Web Jobs obsolete? I can't find it under Resources.)
Adding for others who might come across this post: Workflows composed of several Azure Functions can be created in code using the Durable Functions extension, which can be used to create orchestration functions that schedule async tasks, shut down, and are reawakened when said async work is complete.
They're not a direct solution for long-running tasks that require an open TCP connection, such as downloading a file (for that, a function running on an App Service plan has no execution time limit), but they can be used to integrate such tasks into a larger workflow.
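A rough sketch of what that looks like with the Durable Functions 2.x API (the function names and queue handling here are illustrative, and on a Consumption plan the activity function is still subject to the normal timeout):

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class DownloadOrchestration
{
    [FunctionName("DownloadOrchestrator")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // The orchestrator schedules the activity, is unloaded while it runs,
        // and is replayed when the activity completes.
        string url = context.GetInput<string>();
        return await context.CallActivityAsync<string>("PerformDownload", url);
    }

    [FunctionName("PerformDownload")]
    public static async Task<string> PerformDownload([ActivityTrigger] string url)
    {
        // The actual download work lives in an activity function.
        string downloadPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
        using (var client = new HttpClient())
        using (var response = await client.GetAsync(url))
        using (var file = File.Create(downloadPath))
        {
            await response.Content.CopyToAsync(file);
        }
        return downloadPath;
    }
}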
Is there any hacky way to make something like this start and then resume in another function before the time limit expires?
If you are on a Consumption Plan you have no control over how long your Function App runs, and so it would not be reliable to use background threads that continue running after your Function entry point completes.
On an App Service plan you're running on VMs you pay for, so you can configure your Function App to run continuously. Also AFAIK you don't have to have a Function timeout on an App Service Plan, so your main Function entry point can run for as long as you want.
Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
Yes. Use Azure Data Factory to copy data into Blob Storage, and then process it. The Data Factory pipeline can call Functions both before and after the copy activity.
One additional option, depending on the details of your workload, is to take advantage of Azure Container Instances. You can have your Azure Function spin up a container, process your workload (download your file, do some processing, etc.), and then shut the container down for you. Spin-up time is typically a few seconds and you only pay for what you use (no need for a dedicated App Service plan or VM instance). More details on ACI here.
10 minutes (based on the timeout setting in the host.json file) after the last function of your function app has been triggered, the VM running your function app will stop.
To prevent this behavior, you can have an empty TimerTrigger function that runs every 5 minutes. It won't cost anything and will keep your app up and running.
I think the issue is related to the cold start state. You can find more details about it here:
https://markheath.net/post/avoiding-azure-functions-cold-starts
What you can do is create a timer-triggered Azure Function that pings your long-running function to keep it "warm":
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace NewProject
{
    public static class PingTimer
    {
        [FunctionName("PingTimer")]
        public static async Task Run([TimerTrigger("0 */4 * * * *")]TimerInfo myTimer, TraceWriter log)
        {
            // This CRON expression executes every 4 minutes
            log.Info($"PingTimer function executed at: {DateTime.Now}");
            var client = new HttpClient();
            string url = @"<Azure function URL>";
            var result = await client.GetAsync(new Uri(url));
            log.Info($"PingTimer function execution completed at: {DateTime.Now}");
        }
    }
}

Restarting Azure Worker role "WaWorkerHost.exe" manually

As I understand it, Azure Worker roles are run with the help of a host application called WaWorkerHost.exe, and there is another application called WaHostBootstrapper.exe which checks whether WaWorkerHost.exe is running and, if it is not, starts it.
How often does this 'worker role status check' occur?
How can I quickly restart the worker role myself? I can either reboot the machine the worker role is running on and wait a few minutes, or use the following traditional method:
taskkill /f /im WaWorkerHost.exe
and wait a few minutes for WaHostBootstrapper.exe to kick in, but this is very inefficient and slow.
Is there any (instant) method of restarting the worker role?
Can I run something like the following and expect results similar to WaHostBootstrapper.exe, or are there other considerations?
WaWorkerHost.exe {MyAzureWorkerRole.dll}
The bootstrapper checks the WaWorkerHost status every second. You can see it in the bootstrapper log (c:\resources\WaHostBootstrapper.txt) by looking at the interval of the trace:
"Getting status from client WaWorkerHost.exe"
You can use AzureTools, a utility used by the Azure support team.
One of its features is gracefully recycling the role instance.
Alternatively, you can restart the instance programmatically:
Upload a management certificate to your subscription.
Then use the following code, which uses the Microsoft Azure Compute Management library, to restart the instance:
X509Certificate2 cert = new X509Certificate2("");
var credentials = new CertificateCloudCredentials("your_subscription_id", cert);

using (var managementClient = new ComputeManagementClient(credentials))
{
    OperationStatusResponse response =
        await managementClient.Deployments.RebootRoleInstanceByDeploymentSlotAsync(
            "cloud_service_name",
            DeploymentSlot.Production, // or Staging
            "instance_name");
}
This is not recommended, for three reasons:
The bootstrapper checks every second, which should be enough for most cases.
It could lead to weird issues. For example: you kill the worker, the bootstrapper identifies that the worker is down, you manually start the worker, and the bootstrapper also tries to start the worker and fails (will it crash? enter a zombie state?). It can leave the bootstrapper unhealthy, meaning nothing takes care of the worker process.
It depends, of course, on what the bootstrapper does other than starting the worker. But even if it currently does nothing other than starting the role, you cannot know for sure whether the Azure team will add more responsibilities/actions to it tomorrow.
If the role itself is aware that it needs to restart, it can call RoleEnvironment.RequestRecycle to cause the role instance to be restarted.
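For reference, a minimal sketch of that call from inside the role code (assuming a reference to Microsoft.WindowsAzure.ServiceRuntime):

using Microsoft.WindowsAzure.ServiceRuntime;

// Somewhere in the worker role, when it decides it needs a restart:
if (RoleEnvironment.IsAvailable)
{
    // Asks the fabric to gracefully recycle (stop and restart) this role instance.
    RoleEnvironment.RequestRecycle();
}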

Best way to implement windows service as "worker"

I have an ASP .NET page which allows users to start programs. These programs and the parameter are stored in a database and a windows service then executes these programs.
The programs are DLLs that implement my IPlugin interface; they are loaded at runtime, so I can add them without recompiling or restarting the service.
I have created the ASP.NET page, more than 10 programs (plugins), and the Windows service. Everything is running fine, but I think the implementation of the Windows service is bad.
The windows service periodically queries the database and executes the needed program if it gets a new entry. The service can run multiple programs in parallel (at the moment 3 programs).
Currently my service method looks like this:
while (Alive)
{
    // get all running processes from the database
    Processes = Proc.GetRunningProcs();

    // if there are fewer than 3 processes running and
    // a process is in the queue
    if (ReadyToRun())
    {
        // get the next program from the queue, set its status to
        // running and update the entry in the database
        Proc.ProcData proc = GetNextProc();
        proc.Status = Proc.ProcStatus.Running;
        Proc.Update(proc);

        // create a new thread and execute the program
        Thread t = new Thread(new ParameterizedThreadStart(ExecuteProc));
        t.IsBackground = true;
        t.Start(proc);
    }

    Thread.Sleep(1000);
}
I have a method that queries the database for entries with status 'Canceling' (if a user cancels a program, the status will be set to 'Canceling') and does a Thread.Abort().
Is there a better practice, like using tasks with the cancellation mechanism? Or is the whole concept (storing the processes in a database with program name, parameters, status, etc. and querying this information periodically) wrong?
As an alternative, you can use an existing library for this purpose, like Quartz.NET (http://www.quartz-scheduler.net/). It takes care of job persistence, job scheduling and many other things. All you must do is create an adapter and host it in your Windows service.
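A rough sketch of what such an adapter could look like with the Quartz.NET 3.x API (the job and identity names are made up, and the database/plugin logic stays your own):

using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Runs one pass of the "check the queue and start a program" logic.
[DisallowConcurrentExecution]
public class RunQueuedProgramsJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // Query the database for queued programs and execute the matching plugins here.
        await Task.CompletedTask;
    }
}

public static class SchedulerBootstrap
{
    public static async Task StartAsync()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<RunQueuedProgramsJob>()
            .WithIdentity("runQueuedPrograms")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(1).RepeatForever())
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}

The Windows service then just starts the scheduler in OnStart and shuts it down in OnStop.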

Keep application running all the time

Basically I need my application to run from system start until system shutdown. I figured out the following approach:
create MyApp.exe and MyService.exe
MyApp should install MyService as a service
MyService is supposed to run at startup and periodically check if MyApp is running. If it's not, then start it.
That's the code I wrote for my service:
protected override void OnStart(string[] args)
{
    while (true)
    {
        int processesCount =
            Process.GetProcessesByName(Settings.Default.MyAppName).Count() +
            Process.GetProcessesByName(Settings.Default.MyAppName + ".vshost").Count() +
            Process.GetProcessesByName(Settings.Default.MyAppUpdaterName).Count();

        if (processesCount == 0)
        {
            // restore
            var p = new Process { StartInfo = { FileName = Settings.Default.MyAppName, Arguments = "" } };
            p.Start();
        }

        System.Threading.Thread.Sleep(3000);
    }
}
How can I install this process so that it starts on windows start?
I'm not sure if this infinite loop in OnStart method is a good idea. Is it?
Is the general idea ok?
What I've done is have a Windows service that runs the logic and main application code. Then, if you need a GUI for it, have the Windows service expose a web service via WCF and create a Windows app that calls the web service. On install, put your Windows app in the Windows startup.
This model will have the main application code running all the time, but the GUI is only up when a user is logged in.
Is the general idea ok?
As Hans points out in comments this is hostile to the user and fortunately won't work on Vista or later because services run in their own windows station. Put whatever logic you need to run all the time in the service and use an IPC mechanism such as WCF to communicate with an (optionally) running UI. If the user disables the service or exits the GUI respect their wishes...
How can I install this process so that it starts on windows start?
Add an entry to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run or HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run that points to your GUI application.
I'm not sure if this infinite loop in the OnStart method is a good idea. Is it?
No. You need to return from OnStart; if you need to do work after OnStart returns, create a Thread to do that work.
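A minimal sketch of that shape, keeping the existing check-and-restart logic but moving it off OnStart (the class name is illustrative):

using System.ServiceProcess;
using System.Threading;

public class WatchdogService : ServiceBase
{
    private Thread _worker;
    private volatile bool _running;

    protected override void OnStart(string[] args)
    {
        // Return quickly: the service control manager expects OnStart to finish within ~30 seconds.
        _running = true;
        _worker = new Thread(WorkLoop) { IsBackground = true };
        _worker.Start();
    }

    private void WorkLoop()
    {
        while (_running)
        {
            // ... the existing process-check / restart logic goes here ...
            Thread.Sleep(3000);
        }
    }

    protected override void OnStop()
    {
        _running = false;
        _worker?.Join(5000);
    }
}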

How do I wait until a console application is idle?

I have a console application that starts up, hosts a bunch of services (long-running startup), and then waits for clients to call into it. I have integration tests that start this console application and make "client" calls. How do I wait for the console application to complete its startup before making the client calls?
I want to avoid doing Thread.Sleep(int) because that's dependent on the startup time (which may change) and I waste time if the startup is faster.
Process.WaitForInputIdle works only on applications with a UI (and I confirmed that it does throw an exception in this case).
I'm open to awkward solutions like, have the console application write a temp file when it's ready.
One option would be to create a named EventWaitHandle. This creates a synchronization object that you can use across processes. Then you have your 'client' applications wait until the event is signalled before proceeding. Once the main console application has completed the startup it can signal the event.
http://msdn.microsoft.com/en-us/library/41acw8ct(VS.80).aspx
As an example, your "Server" console application might have the following. This is not compiled so it is just a starting point :)
using System.Threading;

static EventWaitHandle _startedEvent;

static void Main()
{
    _startedEvent = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ConServerStarted");

    DoLongRunningInitialization();

    // Signal the event so that all the waiting clients can proceed
    _startedEvent.Set();
}
The clients would then be doing something like this
using System.Threading;

static void Main()
{
    EventWaitHandle startedEvent = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ConServerStarted");

    // Wait for the event to be signaled; if it is already signaled then this will fall through immediately.
    startedEvent.WaitOne();

    // ... continue communicating with the server console app now ...
}
What about setting a mutex and removing it once startup is done? Have the client app wait until it can grab the mutex before it starts doing things.
Include an "is ready" check in the app's client interface, or have it return a "not ready" error if called before it's ready.
Create a WCF service that you can use for querying the status of the server process. Only start this service if a particular command is passed on the command line. The following traits will ensure a very fast startup of this service:
Host this service as the first operation of the client application
Use the net.tcp or net.pipe binding because they start very quickly
Keep this service as simple as possible to ensure that as long as the console application doesn't terminate, it will remain available
The test runner can attempt to connect to this service. Retry the attempt if it fails until the console application terminates or a reasonably short timeout period expires. As long as the console application doesn't terminate unexpectedly you can rely on this service to provide any additional information before starting your tests in a reasonably short period of time.
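A rough sketch of such a status service hosted over net.pipe (the contract, address and names here are illustrative):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IStartupStatus
{
    [OperationContract]
    bool IsReady();
}

public class StartupStatus : IStartupStatus
{
    public static volatile bool Ready;
    public bool IsReady() => Ready;
}

class Program
{
    static void Main(string[] args)
    {
        // Host the status service first so the test runner can poll it immediately.
        var host = new ServiceHost(typeof(StartupStatus),
            new Uri("net.pipe://localhost/ConsoleAppStatus"));
        host.AddServiceEndpoint(typeof(IStartupStatus), new NetNamedPipeBinding(), "status");
        host.Open();

        DoLongRunningInitialization();   // the existing slow startup
        StartupStatus.Ready = true;

        Console.ReadLine();              // keep serving until shut down
        host.Close();
    }

    static void DoLongRunningInitialization() { /* ... */ }
}

The test runner can then open a ChannelFactory<IStartupStatus> against net.pipe://localhost/ConsoleAppStatus/status and retry IsReady() until it returns true or the timeout expires.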
Since the two (the console application and the integration test app that makes the client calls, as I understand it) are separate applications, there should be a mechanism, a bridge, that acts as a mediator (a socket, an external file, the registry, etc.).
Another possibility could be that you come up with an average time the console takes to load the services and use that time in your test app; well, just thinking out loud!
