As I understand it, Azure worker roles run inside a host application called WaWorkerHost.exe, and there is another application called WaHostBootstrapper.exe which checks whether WaWorkerHost.exe is running and, if not, starts it.
How often does this 'worker role status check' occur?
How can I quickly restart the worker role myself? I can either reboot the machine the worker role is running on and wait a few minutes, or use the traditional method:
taskkill /f /im WaWorkerHost.exe
and wait a few minutes for WaHostBootstrapper.exe to kick in, but this is very inefficient and slow.
Is there any (instant) method of restarting the worker role?
Can I run something like the following and expect similar results to WaHostBootstrapper.exe, or are there other considerations?
WaWorkerHost.exe {MyAzureWorkerRole.dll}
The bootstrapper checks the WaWorkerHost status every second. You can see this in the bootstrapper log (c:\resources\WaHostBootstrapper.txt) by looking at the interval of the trace:
"Getting status from client WaWorkerHost.exe"
You can use AzureTools, a utility used by the Azure support team.
One of its features is gracefully recycling the role instance:
Alternatively, you can restart the instance programmatically:
Upload a management certificate to your subscription.
Use the following code to programmatically restart the instance:
Using the Microsoft Azure Compute Management library:
// Load the management certificate (path left empty in the original)
X509Certificate2 cert = new X509Certificate2("");
var credentials = new CertificateCloudCredentials("your_subscription_id", cert);

using (var managementClient = new ComputeManagementClient(credentials))
{
    // Reboot a single role instance in the given deployment slot
    OperationStatusResponse response =
        await managementClient.Deployments.RebootRoleInstanceByDeploymentSlotAsync(
            "cloud_service_name",
            DeploymentSlot.Production, // or Staging
            "instance_name");
}
This is not recommended, for three reasons:
The bootstrapper checks every second, which should be fast enough for most cases.
It could lead to weird issues. For example: you kill the worker, the bootstrapper detects that the worker is down, you manually start the worker, and the bootstrapper also tries to start the worker and fails (will it crash? will it enter a zombie state?). This can leave the bootstrapper in an unhealthy state, meaning nothing is taking care of the worker process.
It also depends on what the bootstrapper does besides starting the worker. But even if it currently does nothing other than starting the role, you cannot know for sure whether the Azure team will decide tomorrow to give it more responsibilities/actions.
If the role itself is aware that it needs to restart, it can call RoleEnvironment.RequestRecycle to cause the role instance to be restarted.
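For instance, a minimal sketch of a role that recycles itself when it detects a fatal condition (the health check is a hypothetical placeholder, not part of the original answer):
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // ... do the actual work ...
            Thread.Sleep(1000);

            if (SomethingIsFatallyWrong()) // hypothetical health check
            {
                // Ask the fabric controller to stop and restart this role instance.
                RoleEnvironment.RequestRecycle();
                return;
            }
        }
    }

    private static bool SomethingIsFatallyWrong() => false; // placeholder
}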
I want to create some functions in ASP.NET Web API which should be executed daily at a specific time and do specific tasks like updating statuses/records and generating emails and SMS.
Should I create a TaskService in code
using System;
using Microsoft.Win32.TaskScheduler;

class Program
{
    static void Main(string[] args)
    {
        // Get the service on the local machine
        using (TaskService ts = new TaskService())
        {
            // Create a new task definition and assign properties
            TaskDefinition td = ts.NewTask();
            td.RegistrationInfo.Description = "Does something";

            // Create a trigger that will fire the task at this time every other day
            td.Triggers.Add(new DailyTrigger { DaysInterval = 2 });

            // Create an action that will launch Notepad whenever the trigger fires
            td.Actions.Add(new ExecAction("notepad.exe", "c:\\test.log", null));

            // Register the task in the root folder
            ts.RootFolder.RegisterTaskDefinition(@"Test", td);

            // Remove the task we just created
            ts.RootFolder.DeleteTask("Test");
        }
    }
}
or should I create a .bat file and add a new task in Task Scheduler?
As you mentioned in the question, you need to do specific tasks like updating statuses/records and generating emails and SMS.
So database access comes into the scenario and, on the other hand, you will have to send emails and SMS, which may require third-party libraries or access to other configuration settings.
Thus, to do all of this it will be better to go with a code implementation, with which you can maintain your changes and requirements well enough.
As for the ".bat file and Windows scheduler" option, you would need considerable skill with the limited batch commands available to fulfill your requirement.
So my suggestion is: code, compiled to an .exe, run by a Windows Task Scheduler task.
Also, this should be a separate application; don't mix it up with the Web API code. You can always create a new project in the Web API solution alongside the Web API project and reuse whatever code is possible.
You should do this outside your web code. This is because your web app should have no access to the task system or web service. By default, IIS 7.5+ runs apps in their own limited user accounts (https://www.iis.net/learn/manage/configuring-security/application-pool-identities).
If you want reliable task scheduling where you can choose whatever time interval you like, I recommend Quartz.NET: https://www.quartz-scheduler.net/. Quartz lets you add/edit/delete scheduled tasks easily, is manageable, and has little CPU overhead.
Moreover, Quartz is an open-source job scheduling system that can be used in anything from the smallest apps to large-scale enterprise systems.
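For example, a minimal sketch using the Quartz.NET 3.x async API (the job class, identities, and cron expression are illustrative, not from the original answer):
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class DailyMaintenanceJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // update statuses/records, send emails and SMS here
        return Task.CompletedTask;
    }
}

public static class SchedulerBootstrap
{
    public static async Task StartAsync()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<DailyMaintenanceJob>()
            .WithIdentity("dailyMaintenance")
            .Build();

        // Fire every day at 06:00.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("dailyMaintenanceTrigger")
            .WithCronSchedule("0 0 6 * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}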
I recommend you try Hangfire. It's free, and you can use it in commercial apps. You can find the documentation here.
I have one Hangfire server running.
I created a background job as follows:
BackgroundJob.Enqueue(() => new MyJob().Execute(path));
This job should run just once, but in the processing-jobs part of the web portal I see it running multiple times at once. How do I prevent this and ensure that the job is only ever kicked off once?
Every time you make that call, it adds another job to the database. If that call happens in Startup and your app restarts frequently, that will cause the behavior you're reporting. You could use the DisableConcurrentExecution attribute on your method, or enter the job into the DB as a timed recurring job via Hangfire.RecurringJob.AddOrUpdate instead of BackgroundJob.Enqueue, as sketched below.
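A minimal sketch of both options, reusing the MyJob/Execute shape from the question (the job id, lock timeout, and schedule are illustrative):
using Hangfire;

public class MyJob
{
    // Option 1: prevent overlapping executions of this method (60-second lock timeout).
    [DisableConcurrentExecution(60)]
    public void Execute(string path)
    {
        // do the actual work here
    }
}

public static class JobRegistration
{
    // Option 2: register the job as a recurring job. AddOrUpdate is idempotent and
    // keyed by the job id, so repeated app restarts update the existing definition
    // instead of enqueuing another fire-and-forget job.
    public static void Register(string path)
    {
        RecurringJob.AddOrUpdate("my-job", () => new MyJob().Execute(path), Cron.Daily());
    }
}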
var options = new BackgroundJobServerOptions { WorkerCount = 1 };
app.UseHangfireServer(options);
I hope this will help you
I am developing a Windows Service in C# to centrally manage some application connectivity. It's a sleeper service in general, which performs some actions when awoken by an external executable. To this end I'm using named events, specifically the .NET EventWaitHandle. My code boils down to, at the service end:
EventWaitHandleSecurity sec = new EventWaitHandleSecurity();
sec.AddAccessRule(new EventWaitHandleAccessRule(
    new SecurityIdentifier(WellKnownSidType.WorldSid, null),
    EventWaitHandleRights.FullControl,
    AccessControlType.Allow));
evh = new EventWaitHandle(false, EventResetMode.AutoReset, EVENT_NAME,
    out created, sec);
Log(created ? "Event created" : "Event already existed?");
As it's an internal application on trusted servers, I don't mind that granting 'Full Control' to 'World' wouldn't normally be smart.
At the client end I have:
EventWaitHandle.TryOpenExisting(EVENT_NAME, EventWaitHandleRights.Modify, out evh)
The code above works perfectly when I run my service in console-based interactive mode. The event is found on both ends, the client can set it, and the service kicks into action. Everybody's happy.
When I install it as a service, however, it doesn't work. The logging still reports that the event was created anew, but the client cannot find the event. As I thought it was security-related, I added the World/Full Control/Allow access rule, but it didn't change anything. I changed the service to run as Local Admin, and even as my own user account, but nothing: the client cannot find the event, even though the logs show the service is happily polling away on it. If I change TryOpenExisting to OpenExisting, I get an explicit exception:
System.Threading.WaitHandleCannotBeOpenedException: No handle of the given name exists.
What am I missing?
Starting with Windows Vista, services are isolated and run in Session 0 (see Service Changes for Windows Vista). When calling CreateEvent (which EventWaitHandle does), the event object is created in the local namespace by default, also called session namespace. An event object created by a service in session 0 with a name in the session namespace is visible in session 0 only. It is invisible to applications running in an interactive user session.
To create an event object in a service (running in session 0) that can be discovered by application code (running in an interactive user session), you have to create it in the global namespace. This is done by prefixing the event name with "Global\", as documented under CreateEvent.
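For example, a minimal sketch of both sides with the prefixed name (the event name is illustrative, and the security rule from the original code is omitted for brevity):
using System.Threading;

static class WakeEvent
{
    // The "Global\" prefix puts the event into the global kernel namespace, so it is
    // visible both to the service in session 0 and to interactive user sessions.
    const string EVENT_NAME = @"Global\MyServiceWakeEvent"; // illustrative name

    // Service side: create the event exactly as before, just with the prefixed name.
    public static EventWaitHandle CreateForService(out bool created)
    {
        return new EventWaitHandle(false, EventResetMode.AutoReset, EVENT_NAME, out created);
    }

    // Client side: open the existing event and signal it to wake the service.
    public static bool Signal()
    {
        EventWaitHandle evh;
        if (!EventWaitHandle.TryOpenExisting(EVENT_NAME, out evh))
            return false;
        using (evh)
        {
            return evh.Set();
        }
    }
}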
A helpful tool for tracking down kernel object-related bugs is Sysinternals' WinObj.
I have been running a batch file every 5 minutes through Windows Task Scheduler, and since I faced quite a number of issues, like the Task Scheduler going into a hung state and not recovering, I have decided to use a Windows service instead, primarily because I can invoke a recovery action by monitoring the service through our monitoring infrastructure.
So, I have created a service to run that instead.
The service was built and installed, but the moment I start it, it invokes the batch file, which loops through its set of tasks and keeps looping forever.
The batch file is something like this:
@echo off
:begin
cd c:\work\scripts\matm\
cscript //E:jscript c:\work\Scripts\matm\matm.js >> C:\work\Scripts\matm\matm.log
cscript //E:vbscript c:\work\Scripts\matm\TruncateLog.vbs >> c:\work\Scripts\matm\TruncateLog.log
del C:\work\Scripts\matm\Logs\myserver\matm.csv
timeout 600
goto begin
The batch script works perfectly when run from the command prompt, and that is what I expect the service to invoke.
My thought is that the service gets into the loop as soon as we start it and never comes out of it.
I have defined the call to the batch file in the OnStart section, as below:
protected override void OnStart(string[] args)
My questions are:
a) How can I ensure that the service doesn't start running the batch file as soon as it starts? If my understanding is wrong, how can I make the service run the work every 5 minutes?
b) How do I stop the service? Or how can I stop the service if proc is a new instance of the Process class that I created in the OnStart() method?
Appreciate your help and feedback.
Regards,
Sash
Write a custom wrapper console application in C# that contains your error recovery logic.
Use the Windows Task Scheduler to invoke your wrapper application on a regular interval. You can configure it to only start the job if it is not already running. You can also make it kill the existing job.
No need to use a Windows Service. They are complicated.
Given that a service might help you solve the problem, here's what I'd do: in the OnStart method, start a timer that ticks every 10 minutes and starts your script. Or start a thread that sleeps for 10 minutes between calls to the script. A rough sketch of the timer approach is below.
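A minimal sketch of that approach, assuming the work lives in a batch file under c:\work\scripts\matm\ (the wrapper file name and class name are illustrative); note the batch file itself should then no longer loop:
using System.Diagnostics;
using System.ServiceProcess;
using System.Timers;

public class MatmService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Tick every 10 minutes; each tick runs the script once.
        _timer = new Timer(10 * 60 * 1000);
        _timer.Elapsed += (sender, e) => RunBatch();
        _timer.AutoReset = true;
        _timer.Start();
    }

    protected override void OnStop()
    {
        // Stopping the timer is enough to stop scheduling new runs.
        _timer?.Stop();
        _timer?.Dispose();
    }

    private static void RunBatch()
    {
        // Run the (non-looping) script once per tick and wait for it to finish.
        using (Process proc = Process.Start(@"c:\work\scripts\matm\run_matm.cmd"))
        {
            proc.WaitForExit();
        }
    }
}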
I have a console application that starts up, hosts a bunch of services (long-running startup), and then waits for clients to call into it. I have integration tests that start this console application and make "client" calls. How do I wait for the console application to complete its startup before making the client calls?
I want to avoid doing Thread.Sleep(int) because that's dependent on the startup time (which may change) and I waste time if the startup is faster.
Process.WaitForInputIdle works only on applications with a UI (and I confirmed that it does throw an exception in this case).
I'm open to awkward solutions, like having the console application write a temp file when it's ready.
One option would be to create a named EventWaitHandle. This creates a synchronization object that you can use across processes. Then you have your 'client' applications wait until the event is signalled before proceeding. Once the main console application has completed the startup it can signal the event.
http://msdn.microsoft.com/en-us/library/41acw8ct(VS.80).aspx
As an example, your "Server" console application might have the following. This is not compiled so it is just a starting point :)
using System.Threading;

static EventWaitHandle _startedEvent;

static void Main()
{
    _startedEvent = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ConServerStarted");

    DoLongRunningInitialization();

    // Signal the event so that all the waiting clients can proceed
    _startedEvent.Set();
}
The clients would then be doing something like this
using System.Threading;

static void Main()
{
    EventWaitHandle startedEvent = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ConServerStarted");

    // Wait for the event to be signalled; if it is already signalled, this returns immediately.
    startedEvent.WaitOne();

    // ... continue communicating with the server console app now ...
}
What about setting a mutex and releasing it once startup is done? Have the client app wait until it can grab the mutex before it starts doing things. A rough sketch of that idea is below.
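A rough sketch, under the assumption that the server creates the mutex before any client runs (the mutex name is illustrative):
using System.Threading;

static class StartupGate
{
    const string MUTEX_NAME = @"Global\ConServerStarting"; // illustrative name

    // Server: create and own the mutex before the long startup; call ReleaseMutex()
    // on the returned handle once startup is complete.
    public static Mutex Acquire()
    {
        return new Mutex(true /* initiallyOwned */, MUTEX_NAME);
    }

    // Client: block until the server releases the mutex (i.e. startup is done).
    // Throws WaitHandleCannotBeOpenedException if the server has not created it yet.
    public static void WaitForServer()
    {
        using (Mutex m = Mutex.OpenExisting(MUTEX_NAME))
        {
            m.WaitOne();
            m.ReleaseMutex(); // let other waiting clients through as well
        }
    }
}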
Include an "is ready" check in the app's client interface, or have it return a "not ready" error if called before it's ready.
Create a WCF service that you can use for querying the status of the server process. Only start this service if a particular command is passed on the command line. The following traits will ensure a very fast startup of this service:
Host this service as the first operation of the client application
Use the net.tcp or net.pipe binding because they start very quickly
Keep this service as simple as possible to ensure that as long as the console application doesn't terminate, it will remain available
The test runner can attempt to connect to this service. Retry the attempt if it fails until the console application terminates or a reasonably short timeout period expires. As long as the console application doesn't terminate unexpectedly you can rely on this service to provide any additional information before starting your tests in a reasonably short period of time.
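A rough sketch of such a status service over net.pipe (the contract, address, and the idea of flipping a Ready flag are illustrative, not part of the original answer):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStartupStatus
{
    [OperationContract]
    bool IsReady();
}

public class StartupStatus : IStartupStatus
{
    // Flipped to true by the console app once the long-running startup completes.
    public static volatile bool Ready;

    public bool IsReady() => Ready;
}

public static class StatusServiceHost
{
    // Call this first thing in Main, e.g. only when a flag such as "--expose-status"
    // is passed on the command line.
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(StartupStatus),
            new Uri("net.pipe://localhost/MyConsoleApp"));
        host.AddServiceEndpoint(typeof(IStartupStatus), new NetNamedPipeBinding(), "status");
        host.Open();
        return host;
    }
}
The test runner can then create a ChannelFactory<IStartupStatus> with the same NetNamedPipeBinding and poll IsReady() until it returns true or a short timeout expires.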
Since the two (the console application and the integration test app that makes client calls, as I understand it) are separate applications, there should be a mechanism, a bridge, that acts as a mediator (a socket, an external file, the registry, etc.).
Another possibility could be that you come up with an average time the console takes to load the services and use that time in your test app; well, just thinking out loud!