How to start and stop a scheduled WebJob using ASP.NET C#

I have created a triggered WebJob for a web application call, but I now have a requirement that the customer should be able to start or stop the triggered job on demand, and also schedule it.
I have done the scheduling part using the WebJobs REST API, but I am not able to complete the start and stop part, because a triggered job does not have any such option.
Is there a way to start and stop a triggered WebJob? I have tried the Kudu kill-process API, but once the triggered WebJob completes it no longer appears in Process Explorer.
var base64Auth = Convert.ToBase64String(Encoding.Default.GetBytes($"{azureUserName}:{azurePassword}"));
if (string.Equals(jobStatus, "STOP", StringComparison.InvariantCultureIgnoreCase))
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Authorization", "Basic " + base64Auth);
        //var baseUrl = new Uri($"https://{webAppName}.scm.azurewebsites.net/");

        // List the processes currently running on the SCM (Kudu) site.
        var requestURl = string.Format("{0}/api/processes", azureWebAppUrl);
        var response = client.GetAsync(requestURl).Result;
        if (response.IsSuccessStatusCode)
        {
            string res = response.Content.ReadAsStringAsync().Result;
            var processes = JsonConvert.DeserializeObject<List<ProcessData>>(res);

            // Kill the WebJob's process if it is still running.
            var webJobProcess = processes.FirstOrDefault(x => x.name.Contains(azureWebJobName));
            if (webJobProcess != null)
            {
                requestURl = string.Format("{0}/api/processes/{1}", azureWebAppUrl, webJobProcess.id);
                response = client.DeleteAsync(requestURl).Result;
            }
            returnStr = "success";
        }
        else
        {
            returnStr = response.Content.ReadAsStringAsync().Result;
        }
    }
}
Please help me understand a better way to stop and start the triggered WebJob process. I also found that app settings such as WEBJOBS_STOPPED and WEBJOBS_DISABLE_SCHEDULE can be added to the main application, but that means depending on, and updating, the main application's settings every time; I want to rely on the WebJob's own settings instead.
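For the on-demand start part specifically, the Kudu WebJobs REST API exposes a run endpoint for triggered jobs. A minimal sketch, reusing the same basic-auth header and variables as the snippet above (verify the endpoint against your app's SCM site):

// Start (run) a triggered WebJob on demand via the Kudu WebJobs API.
// Assumes base64Auth, azureWebAppUrl (the https://{app}.scm.azurewebsites.net base) and azureWebJobName from above.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", "Basic " + base64Auth);
    var runUrl = string.Format("{0}/api/triggeredwebjobs/{1}/run", azureWebAppUrl, azureWebJobName);
    var runResponse = client.PostAsync(runUrl, null).Result;
    returnStr = runResponse.IsSuccessStatusCode ? "success" : runResponse.StatusCode.ToString();
}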

I do not know if I understand your problem correctly or not.
I think you can use Quartz to start the job (or jobs) and schedule it with an application setting.
https://www.quartz-scheduler.net/
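If you go the Quartz route, a minimal sketch of scheduling, pausing and resuming a job could look like the following (MyTriggeredJob and the cron expression are placeholders; wire the pause/resume calls to whatever "stop"/"start" action the customer triggers):

using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class MyTriggeredJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // call your web application here
        return Task.CompletedTask;
    }
}

public static async Task ConfigureJobAsync()
{
    var scheduler = await new StdSchedulerFactory().GetScheduler();
    await scheduler.Start();

    var jobKey = new JobKey("MyTriggeredJob");
    var job = JobBuilder.Create<MyTriggeredJob>().WithIdentity(jobKey).Build();
    var trigger = TriggerBuilder.Create()
        .WithCronSchedule("0 0/15 * * * ?") // e.g. every 15 minutes, read this from settings
        .Build();
    await scheduler.ScheduleJob(job, trigger);

    // customer-driven "stop" and "start":
    await scheduler.PauseJob(jobKey);
    await scheduler.ResumeJob(jobKey);
}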

Related

Start Quartz Job dynamically in ASP.NET Core 3.1 WebAPI

I have an ASP.NET Core 3.1 Web API and have chosen Quartz.NET to execute recurring tasks.
I've implemented the QuartzHostedService as recommended, and I manage to start jobs successfully by declaring them at startup.
But I also need to add and execute some jobs dynamically by reading them from a database, and I don't want to deploy and restart my Web API each time I need to add a new recurring job (because some of my clients could add their own recurring tasks).
I managed to add jobs to the QuartzHostedService by implementing a method in it:
public async Task AddJob(JobSchedule job)
{
    Scheduler = await _schedulerFactory.GetScheduler();
    Scheduler.JobFactory = _jobFactory;
    var j = CreateJob(job);
    var trigger = CreateTrigger(job);
    await Scheduler.ScheduleJob(j, trigger);
    job.JobStatus = JobStatus.Scheduling;
    await Scheduler.Start();
    job.JobStatus = JobStatus.Running;
}
I have implemented an InitialJob whose goal is to ask the database for new jobs (this job is declared at startup).
In this InitialJob, my Execute method looks like this:
// Create a new scope
using (var scope = _provider.CreateScope())
{
    // Resolve the scoped service
    var _uow = scope.ServiceProvider.GetService<IUnitOfWork>();
    var stats = await _uow.Stat.GetActive();
    var test = await _quartzHostedService.GetAllJobs();
    foreach (var clientStat in stats.GroupBy(s => s.Clients))
    {
        foreach (var job in clientStat)
        {
            var key = new JobKey($"{job.Constant}.{clientStat.Key.FirstOrDefault().Id}");
            var jobExists = await _quartzHostedService.Scheduler.CheckExists(key);
            if (!jobExists)
            {
                var jobSchedule = new JobSchedule(jobType: typeof(GenericSimulationJob), cronExpression: job.Trigger.CronTask, job, clientStat.Key.FirstOrDefault().Id);
                await _quartzHostedService.AddJob(jobSchedule);
            }
        }
    }
    var test2 = await _quartzHostedService.GetAllJobs();
}
When I check the jobs before adding them (variable test) I only have 2 jobs in the QuartzHostedService; after adding them in the foreach (variable test2) I have 5. Perfect.
But it seems I'm not able to execute them. I only see 2 jobs in my QuartzHostedService.Scheduler.CurrentlyExecutedJob.
I also see only 2 jobs in my jobFactory variable.
I tried to sleep and restart my scheduler, but nothing works. Did I miss something? Is it even possible?
Thanks for your help.
I found my mistake: I forgot to register my IJob (which contains my scheduled jobs) at startup, even though no job is scheduled for it at start.
services.AddSingleton<GenericSimulationJob>();
Then my code works (without starting the scheduler instance again).
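For context (this is my addition, not part of the original answer): the DI registration matters because a typical IJobFactory resolves job instances from the container when a trigger fires, so a job type that was never registered simply can't be constructed. A minimal sketch of such a factory, assuming it is built with the application's IServiceProvider:

using System;
using Quartz;
using Quartz.Spi;

public class ServiceProviderJobFactory : IJobFactory
{
    private readonly IServiceProvider _provider;

    public ServiceProviderJobFactory(IServiceProvider provider) => _provider = provider;

    // Called by the scheduler whenever a trigger fires. If the job type
    // (e.g. GenericSimulationJob) was never registered, GetService returns null
    // and the scheduler cannot run the job - which matches the symptom above.
    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler) =>
        (IJob)_provider.GetService(bundle.JobDetail.JobType);

    public void ReturnJob(IJob job) => (job as IDisposable)?.Dispose();
}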

Azure Blob Storage DownloadToStreamAsync hangs during network change

I've been having an issue with the Microsoft.WindowsAzure.Storage v9.3.3 and Microsoft.Azure.Storage.Blob v11.1.0 NuGet libraries, specifically when downloading a large file. If you change your network during the DownloadToStreamAsync call, the call hangs. I've been seeing my code, which processes a lot of files, hang occasionally, and I've been trying to narrow it down. I think the network change might be a reliable way of triggering some failure in the Azure Blob Storage libraries.
More info about the issue:
When I unplug my network cable my computer switches to WiFi, but the request never resumes.
If I start the download on WiFi and then plug in my network cable, the same error occurs.
The ServerTimeout property never fails the request or acts as expected according to the documentation.
The MaximumExecutionTime property does fail the request, but we don't want to limit ourselves to a certain time period, especially because we're dealing with large files.
The following code fails 100% of the time if the network is changed during the call.
static void Main(string[] args)
{
    try
    {
        CloudStorageAccount.TryParse("<Connection String>", out var storageAccount);
        var cloudBlobClient = storageAccount.CreateCloudBlobClient();
        var container = cloudBlobClient.GetContainerReference("<Container Reference>");
        var blobRef = container.GetBlockBlobReference("Large Text.txt");
        Stream memoryStream = new MemoryStream();
        BlobRequestOptions optionsWithRetryPolicy = new BlobRequestOptions() { ServerTimeout = TimeSpan.FromSeconds(5), RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(20), 4) };
        blobRef.DownloadToStreamAsync(memoryStream, null, optionsWithRetryPolicy, null).GetAwaiter().GetResult();
        Console.WriteLine("Completed");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception: {ex.Message}");
    }
    finally
    {
        Console.WriteLine("Finished");
    }
}
I've found an open issue for this in the Azure Storage GitHub repository, but it seems inactive.
Is there any other approach I could take to reliably and efficiently download a blob, or something I'm missing when using this package?
Thanks to Mohit for the suggestion.
Create a task to check the stream length in the background.
If the stream hasn't increased in a set period of time, cancel the DownloadToStreamAsync call.
DISCLAIMER: I haven't written tests around this code or worked out how to make it performant, as you couldn't have a wait like this for every file you process. I might also need to cancel the monitoring task once the download completes; I don't know yet, I just wanted to get it working first. I don't deem it production ready.
// Create download cancellation token
var downloadCancellationTokenSource = new CancellationTokenSource();
var downloadCancellationToken = downloadCancellationTokenSource.Token;
var completedChecking = false;

// A background task to confirm the download is still progressing
Task.Run(() =>
{
    // Allow the download to start
    Task.Delay(TimeSpan.FromSeconds(2)).GetAwaiter().GetResult();
    long currentStreamLength = 0;
    var currentRetryCount = 0;
    var availableRetryCount = 5;

    // Keep the checking going during the duration of the download
    while (!completedChecking)
    {
        Console.WriteLine("Checking");
        if (currentRetryCount == availableRetryCount)
        {
            Console.WriteLine($"RETRY WAS {availableRetryCount} - FAILING TASK");
            downloadCancellationTokenSource.Cancel();
            completedChecking = true;
        }
        if (currentStreamLength == memoryStream.Length)
        {
            currentRetryCount++;
            Console.WriteLine($"Length has not increased. Incremented Count: {currentRetryCount}");
            Task.Delay(TimeSpan.FromSeconds(10)).GetAwaiter().GetResult();
        }
        else
        {
            currentStreamLength = memoryStream.Length;
            Console.WriteLine($"Download in progress: {currentStreamLength}");
            currentRetryCount = 0;
            Task.Delay(TimeSpan.FromSeconds(1)).GetAwaiter().GetResult();
        }
    }
});

Console.WriteLine("Starting Download");
blobRef.DownloadToStreamAsync(memoryStream, downloadCancellationToken).GetAwaiter().GetResult();
Console.WriteLine("Completed Download");
completedChecking = true;
Console.WriteLine("Completed");

Azure Functions running in sequence, parallel desired

I have an Azure Function that I'm calling in parallel using PostAsync.
I arrange all my tasks in a queue and then wait for the responses in parallel using Task.WhenAll.
I can confirm that there is a burst of HTTP activity out to Azure, and then HTTP activity stops on my local machine while I wait for responses from Azure.
When I monitor the function in the Azure Portal, it looks like the requests are arriving every three seconds or so, even though from my side there is no network traffic after the initial burst.
When I get my results back, they arrive in sequence, in the exact same order I sent them out, even though the Azure Portal monitor indicates that some function executions take 10 seconds and some take 3 seconds.
I am using Azure Functions version 1 with a Consumption service plan.
CentralUSPlan (Consumption: 0 Small)
My host.json file is empty ==> {}
Why is this happening? Is there some setting required to get Azure Functions to execute in parallel?
public async Task<List<MyAnalysisObject>> DoMyAnalysisObjectsHttpRequestsAsync(List<MyAnalysisObject> myAnalysisObjectList)
{
    List<MyAnalysisObject> evaluatedObjects = new List<MyAnalysisObject>();
    using (var client = new HttpClient())
    {
        var tasks = new List<Task<MyAnalysisObject>>();
        foreach (var myAnalysisObject in myAnalysisObjectList)
        {
            tasks.Add(DoMyAnalysisObjectHttpRequestAsync(client, myAnalysisObject));
        }
        var evaluatedObjectsArray = await Task.WhenAll(tasks);
        evaluatedObjects.AddRange(evaluatedObjectsArray);
    }
    return evaluatedObjects;
}

public async Task<MyAnalysisObject> DoMyAnalysisObjectHttpRequestAsync(HttpClient client, MyAnalysisObject myAnalysisObject)
{
    string requestJson = JsonConvert.SerializeObject(myAnalysisObject);
    Console.WriteLine("Doing post-async:" + myAnalysisObject.Identifier);
    var response = await client.PostAsync(
        "https://myfunctionapp.azurewebsites.net/api/BuildMyAnalysisObject?code=XXX",
        new StringContent(requestJson, Encoding.UTF8, "application/json")
    );
    Console.WriteLine("Finished post-async:" + myAnalysisObject.Identifier);
    var result = await response.Content.ReadAsStringAsync();
    Console.WriteLine("Got result:" + myAnalysisObject.Identifier);
    return JsonConvert.DeserializeObject<MyAnalysisObject>(result);
}
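One client-side factor worth ruling out (an assumption on my part; the question doesn't say what kind of app is making the calls): on the .NET Framework, outbound connections to a single host are capped by ServicePointManager.DefaultConnectionLimit, which defaults to 2 for console/desktop callers, so a burst of PostAsync calls can be sent only a couple at a time even though Task.WhenAll awaits them concurrently. Raising the limit before creating the HttpClient is a quick way to test this:

using System.Net;

// Raise the per-host outbound connection limit (defaults to 2 for non-ASP.NET apps)
// before the HttpClient is created, so queued requests can actually go out in parallel.
ServicePointManager.DefaultConnectionLimit = 50;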

Azure ML web service times out

I have created a simple experiment in Azure ML and trigger it with an HTTP client. In the Azure ML workspace, everything works fine when executed. However, the experiment times out and fails when I trigger it using an HTTP client. Setting a timeout value on the HTTP client does not seem to work.
Is there any way to set this timeout value so that the experiment does not fail?
Make sure you're setting the client timeout value correctly. If the server powering the web service times out, it will send back a response with the HTTP status code 504 BackendScoreTimeout (or possibly 409 GatewayTimeout). However, if you simply never receive a response, then your client isn't waiting long enough.
You can work out a reasonable value by running your experiment in ML Studio. Go to the experiment properties to find out how long it ran for, and then aim for about twice that amount of time as the timeout value.
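For example, if the caller is an HttpClient (an assumption; the question doesn't show the client code), the client-side timeout is just a property on the client:

using System;
using System.Net.Http;

// HttpClient defaults to a 100-second timeout; raise it to roughly twice the
// experiment's observed run time from ML Studio.
var client = new HttpClient { Timeout = TimeSpan.FromMinutes(5) };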
I've had similar problems with an Azure ML experiment published as a web service. Most of the time it ran OK, but sometimes it returned a timeout error. The problem is that the experiment itself has a 90-second running time limit, so most probably your experiment runs over this limit and returns a timeout error. HTH.
It looks like it isn't possible to set this timeout, based on a feature request that is still marked as "planned" as of 4/1/2018.
The recommendation from the MSDN forums (from 2017) is to use the Batch Execution Service, which starts the machine learning experiment and then asynchronously polls to ask whether it's done.
Here's a code snippet from the Azure ML Web Services Management Sample Code (all comments are from their sample code):
using (HttpClient client = new HttpClient())
{
    var request = new BatchExecutionRequest()
    {
        Outputs = new Dictionary<string, AzureBlobDataReference>()
        {
            {
                "output",
                new AzureBlobDataReference()
                {
                    ConnectionString = storageConnectionString,
                    RelativeLocation = string.Format("{0}/outputresults.file_extension", StorageContainerName) /*Replace this with the location you would like to use for your output file, and valid file extension (usually .csv for scoring results, or .ilearner for trained models)*/
                }
            },
        },
        GlobalParameters = new Dictionary<string, string>()
        {
        }
    };

    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

    // WARNING: The 'await' statement below can result in a deadlock
    // if you are calling this code from the UI thread of an ASP.Net application.
    // One way to address this would be to call ConfigureAwait(false)
    // so that the execution does not attempt to resume on the original context.
    // For instance, replace code such as:
    //     result = await DoSomeTask()
    // with the following:
    //     result = await DoSomeTask().ConfigureAwait(false)

    Console.WriteLine("Submitting the job...");

    // submit the job
    var response = await client.PostAsJsonAsync(BaseUrl + "?api-version=2.0", request);
    if (!response.IsSuccessStatusCode)
    {
        await WriteFailedResponse(response);
        return;
    }

    string jobId = await response.Content.ReadAsAsync<string>();
    Console.WriteLine(string.Format("Job ID: {0}", jobId));

    // start the job
    Console.WriteLine("Starting the job...");
    response = await client.PostAsync(BaseUrl + "/" + jobId + "/start?api-version=2.0", null);
    if (!response.IsSuccessStatusCode)
    {
        await WriteFailedResponse(response);
        return;
    }

    string jobLocation = BaseUrl + "/" + jobId + "?api-version=2.0";
    Stopwatch watch = Stopwatch.StartNew();
    bool done = false;
    while (!done)
    {
        Console.WriteLine("Checking the job status...");
        response = await client.GetAsync(jobLocation);
        if (!response.IsSuccessStatusCode)
        {
            await WriteFailedResponse(response);
            return;
        }

        BatchScoreStatus status = await response.Content.ReadAsAsync<BatchScoreStatus>();
        if (watch.ElapsedMilliseconds > TimeOutInMilliseconds)
        {
            done = true;
            Console.WriteLine(string.Format("Timed out. Deleting job {0} ...", jobId));
            await client.DeleteAsync(jobLocation);
        }

        switch (status.StatusCode)
        {
            case BatchScoreStatusCode.NotStarted:
                Console.WriteLine(string.Format("Job {0} not yet started...", jobId));
                break;
            case BatchScoreStatusCode.Running:
                Console.WriteLine(string.Format("Job {0} running...", jobId));
                break;
            case BatchScoreStatusCode.Failed:
                Console.WriteLine(string.Format("Job {0} failed!", jobId));
                Console.WriteLine(string.Format("Error details: {0}", status.Details));
                done = true;
                break;
            case BatchScoreStatusCode.Cancelled:
                Console.WriteLine(string.Format("Job {0} cancelled!", jobId));
                done = true;
                break;
            case BatchScoreStatusCode.Finished:
                done = true;
                Console.WriteLine(string.Format("Job {0} finished!", jobId));
                ProcessResults(status);
                break;
        }

        if (!done)
        {
            Thread.Sleep(1000); // Wait one second
        }
    }
}

Let code run for X seconds, after that stop

I'm working on a Windows Phone 7.1 app and want several lines of code to run for at most 10 seconds: if they succeed within 10 seconds, continue; if not, stop the code and display a message.
The thing is, my code is not a loop; the phone tries to fetch data from a server, and if the internet connection is slow, that might take too long.
if (DeviceNetworkInformation.IsNetworkAvailable)
{
    // start timer here for 10s
    WebClient webClient = new WebClient();
    webClient.DownloadStringCompleted += loginHandler;
    webClient.DownloadStringAsync(new Uri(string.Format(url + "?loginas=" + login + "&pass=" + pwd)));
    // if 10s passed, stop code above and display MessageBox
}
You can use something like the following:
HttpClient client = new HttpClient();
var cts = new CancellationTokenSource();
cts.CancelAfter(10000);
try
{
    var response = await client.GetAsync(new Uri(string.Format(url + "?loginas=" + login + "&pass=" + pwd)), cts.Token);
    var result = await response.Content.ReadAsStringAsync();
    // work with result
}
catch (TaskCanceledException)
{
    // error/timeout handling
}
You need the following NuGet packages:
HttpClient
async/await
Make that piece of code a method and run that method separately.
Launch a timer; when 10 seconds have elapsed, check the status of the first thread.
If it has fetched everything it was supposed to, make use of that; otherwise kill the thread, clean up whatever you have to clean, and return that error message.
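A sketch of that idea, but using WebClient's own CancelAsync from a timer instead of killing a thread (loginHandler, url, login and pwd are the names from the question; the completion handler sees e.Cancelled when the timer fires first):

// Start the request, then give up after 10 seconds if it has not completed.
var webClient = new WebClient();
webClient.DownloadStringCompleted += (s, e) =>
{
    if (e.Cancelled || e.Error != null)
    {
        MessageBox.Show("Could not reach the server within 10 seconds.");
        return;
    }
    loginHandler(s, e); // original completion logic
};
webClient.DownloadStringAsync(new Uri(url + "?loginas=" + login + "&pass=" + pwd));

var timeout = new System.Threading.Timer(_ =>
{
    if (webClient.IsBusy)
    {
        webClient.CancelAsync();
    }
}, null, 10000, System.Threading.Timeout.Infinite);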
