I am trying to develop a web service which calls a stored procedure. This stored procedure is quite long-running (around 1.5 hours), and it does numerous counts and inserts in a database.
To launch this procedure I use a C# Task; here is the code:
[HttpPost]
[Route("updateData/{date:datetime?}")]
public JsonResult UpdateData(DateTime? date)
{
    try
    {
        Task.Factory.StartNew(() => Data.UpdateData(date), TaskCreationOptions.LongRunning);
        return Json("UpdateData successfully started!");
    }
    catch (Exception e)
    {
        return Json("Error UpdateData: " + e);
    }
}
When I test in a local environment it works, but on Azure the process stops after roughly 30 minutes.
To launch the web service I use the Microsoft Azure Scheduler.
The problem does not seem to be the stored procedure itself; it seems to be in the use of the task (because without the task it works).
Is there something special I need to do?
What you're experiencing is an IIS timeout. Once IIS detects inactivity, it terminates the app pool:
Otherwise, when you have 20 minutes without any traffic, the app pool will terminate so that it can start up again on the next visit.
This happens because Task.Factory.StartNew doesn't register work with IIS, so IIS doesn't know that you currently have active work going on.
To avoid this, if you're using .NET 4.5.2, you can use HostingEnvironment.QueueBackgroundWorkItem to register and queue work on an ASP.NET thread-pool thread.
If you're on an earlier version, you can use BackgroundTaskManager by Stephen Cleary.
For more, read this post by the .NET Web Development team.
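As a minimal sketch, the question's controller action could be adapted like this (assuming the same Data.UpdateData method and MVC controller shape as in the question):

```csharp
using System;
using System.Web.Hosting;
using System.Web.Mvc;

public class DataController : Controller
{
    [HttpPost]
    [Route("updateData/{date:datetime?}")]
    public JsonResult UpdateData(DateTime? date)
    {
        // QueueBackgroundWorkItem registers the work with the ASP.NET
        // runtime, so it knows background work is in flight and will
        // try to delay app-domain shutdown until it completes.
        HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
            Data.UpdateData(date));

        return Json("UpdateData successfully started!");
    }
}
```

The work item receives a CancellationToken that is signaled when the app domain does begin shutting down, so a long job can use it to stop gracefully.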
Azure Functions have a time limit of 10 minutes. Suppose I have a long-running task, such as downloading a file that takes 1 hour.
[FunctionName("PerformDownload")]
[return: Queue("download-path-queue")]
public static async Task<string> RunAsync([QueueTrigger("download-url-queue")] string url, TraceWriter log)
{
    string downloadPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
    log.Info($"Downloading file at url {url} to {downloadPath} ...");
    using (var client = new WebClient())
    {
        // DownloadFileTaskAsync is the awaitable variant
        await client.DownloadFileTaskAsync(new Uri(url), downloadPath);
    }
    log.Info("Finished!");
    return downloadPath;
}
Is there any hacky way to make something like this start and then resume in another function before the time limit expires? Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
(On a slightly related note, is plain Azure Web Jobs obsolete? I can't find it under Resources.)
Adding for others who might come across this post: workflows composed of several Azure Functions can be created in code using the Durable Functions extension, which lets you write orchestrator functions that schedule async tasks, shut down, and are reawakened when that async work completes.
They're not a direct solution for long-running tasks that require an open TCP port, such as downloading a file (for that, a function running on an App Service plan has no execution time limit), but they can be used to integrate such tasks into a larger workflow.
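As a hedged illustration (assuming the Durable Functions v2 API; the orchestrator and activity names here are hypothetical), an orchestrator that hands the download off to an activity and resumes when it completes might look like:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class DownloadOrchestration
{
    [FunctionName("DownloadOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string url = context.GetInput<string>();

        // The orchestrator is checkpointed and replayed by the runtime,
        // so it consumes no execution time while an activity is running.
        string downloadPath = await context.CallActivityAsync<string>("StartDownload", url);
        await context.CallActivityAsync("ProcessFile", downloadPath);
    }
}
```

"StartDownload" and "ProcessFile" would be ordinary [ActivityTrigger] functions; each activity invocation is still subject to the normal function timeout, which is why the download itself is better placed on an App Service plan or outside Functions entirely.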
Is there any hacky way to make something like this start and then resume in another function before the time limit expires?
If you are on a Consumption plan you have no control over how long your Function App runs, so it would not be reliable to use background threads that continue running after your function entry point completes.
On an App Service plan you're running on VMs you pay for, so you can configure your Function App to run continuously. Also, AFAIK you don't have to set a function timeout on an App Service plan, so your main function entry point can run for as long as you want.
Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
Yes. Use Azure Data Factory to copy data into Blob Storage, and then process it. The Data Factory pipeline can call Functions both before and after the copy activity.
One additional option, depending on the details of your workload, is to take advantage of Azure Container Instances. You can have your Azure Function spin up a container, process your workload (download your file, do some processing, etc.), and then shut the container down for you. Spin-up time is typically a few seconds and you only pay for what you use (no need for a dedicated App Service plan or VM instance). More details on ACI here.
10 minutes (based on the timeout setting in the host.json file) after the last function of your function app has been triggered, the VM running your function app will stop.
To prevent this from happening, you can have an empty TimerTrigger function that runs every 5 minutes. It won't cost anything and will keep your app up and running.
I think the issue is related to the cold-start state. Here you can find more details about it:
https://markheath.net/post/avoiding-azure-functions-cold-starts
What you can do is create a timer-triggered Azure Function that pings your long-running function to keep it warm:
namespace NewProject
{
    public static class PingTimer
    {
        [FunctionName("PingTimer")]
        public static async Task Run([TimerTrigger("0 */4 * * * *")]TimerInfo myTimer, TraceWriter log)
        {
            // This CRON expression executes every 4 minutes
            log.Info($"PingTimer function executed at: {DateTime.Now}");
            var client = new HttpClient();
            string url = "<Azure function URL>";
            var result = await client.GetAsync(new Uri(url));
            log.Info($"PingTimer function completed at: {DateTime.Now}");
        }
    }
}
I am currently running my application using AWS Lambda on .NET Core 1.1.
I am finding that when I run the method below, the final log points (_logger.Info()) and any that happen afterwards in the calling code do not complete.
I have deliberately left the await off the log points because I do not want the application to wait for each log call to complete before moving on to the next statement (they are web calls to a third-party logging service), but I do want all the logging tasks to complete before the entire process finishes.
If I put await before every _logger.Info then all the logs run to completion, which suggests the code works when every method awaits the execution of the previous one.
It's as if AWS Lambda decides that the main thread is complete and therefore stops the entire process, even though the logging calls spawned to run asynchronously have not completed yet.
I have done some research and see that .NET Core 2.0 has transactions (https://learn.microsoft.com/en-us/dotnet/api/system.transactions.transaction?view=netcore-2.0), but this doesn't appear to be supported by .NET Core 1.1.
Is there any way to tell AWS Lambda to wait until all the spawned tasks have completed successfully before finishing? If so, could you provide me with an example?
Is something else going on here that I have misunderstood?
Here is the code:
private async Task LoadExhibitor(JObject input)
{
    // Retrieve Data
    _logger.Info("Retrieving Data");

    // Set some variables
    ....

    if (rawExhibitor.Error != null)
    {
        _logger.Warn($"No exhibitor returned from ...");
        // Some error handling
        ...
        return;
    }

    // Transform some information to another object
    _logger.Info("Transforming exhibitor to a friendly object");
    var exhibitor = await someTransformer.Transform(rawExhibitor);

    _logger.Info($"Saving to elastic search ...");

    // Save
    await repository.SaveAsync(exhibitor, eventEditionId, locale, sessionId);

    _logger.Info($"Saving to elastic search has completed");
}
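If _logger.Info returns a Task (as the question implies), one sketch is to collect the un-awaited tasks and await them all before the handler returns, so the calls still run concurrently but Lambda does not freeze the process while they are in flight:

```csharp
private async Task LoadExhibitor(JObject input)
{
    var pendingLogs = new List<Task>();

    // Start each log call without blocking on it...
    pendingLogs.Add(_logger.Info("Retrieving Data"));

    var exhibitor = await someTransformer.Transform(rawExhibitor);

    pendingLogs.Add(_logger.Info("Saving to elastic search ..."));
    await repository.SaveAsync(exhibitor, eventEditionId, locale, sessionId);
    pendingLogs.Add(_logger.Info("Saving to elastic search has completed"));

    // ...but ensure every pending log call has finished before
    // returning, otherwise Lambda may freeze the process mid-flight.
    await Task.WhenAll(pendingLogs);
}
```

Lambda freezes or ends the execution environment once the handler's Task completes, so any work not tied into that Task is not guaranteed to run; Task.WhenAll ties the fire-and-forget calls back into the handler's lifetime.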
I am new to Azure Web Apps. Is there any way to send the redirect response first and then execute the remaining code? I am stuck in a situation where I have to redirect my page first, then execute the remaining code. I have deployed my code to an Azure Web App, which has a request timeout of about 4 minutes (which is not configurable), but my code takes approximately 15 minutes to execute. I want to redirect to the main page and execute the remaining code in the background. I have tried threads and parallel programming with no luck; my web page gets a request timeout every time. Is there a way anyone can suggest?
Thanks for help!
/* functionA and functionB are not executed after redirecting. */
private static async Task<int> functionA(int para1, int para2)
{
    int temp1 = await functionB(para1, para2);
    return temp1;
}

private static async Task<int> functionB(int para1, int para2)
{
    return para1 + para2; // returns an int value
}

/* This method will execute first */
private async Task<string> functionC(int para1, int para2, string para3)
{
    Console.WriteLine("hello world");
    Response.Redirect("www.xyz.com");
    int temp = await functionA(para1, para2);
    return str; // return string-type value
}
If you've got heavy processing that will result in an HTTP timeout, I suggest offloading the processing to a WebJob or Azure Function. It would work as follows:
Your Azure Web App receives an HTTP request for a long-running operation. It gathers the necessary information, creates a Service Bus queue message, and fires the message off. Your web app then responds to the user by telling them that the processing has begun.
Provision a separate WebJob or Azure Function that monitors your Service Bus queue for messages. When a message is received, the WebJob/Function can perform the processing.
You will probably want to tell your user when the operation has completed and what the result is. You have a few options. The slickest would be to use SignalR to push a notification to your users when the operation completes. A less sophisticated option would be to have your WebJob/Function update a database record, then have your HTTP clients poll for the result.
I've personally used this pattern with Service Bus Queues/WebJobs/SignalR, and have been very pleased with the results.
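A rough sketch of the hand-off step (queue name and connection string are placeholders; this assumes the Microsoft.Azure.ServiceBus client library):

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public class LongRunningJobDispatcher
{
    // The WebJob/Function on the other end listens on this same queue.
    private readonly QueueClient _queueClient =
        new QueueClient("<service-bus-connection-string>", "long-running-jobs");

    public async Task EnqueueAsync(string jobPayload)
    {
        // Fire the message off and return immediately; the actual
        // processing happens in the queue-triggered WebJob/Function.
        await _queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes(jobPayload)));
    }
}
```

The web request then completes in milliseconds regardless of how long the downstream processing takes, which is the point of the pattern.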
Asynchronous operations using Azure Storage queues and WebJobs can help in situations like this.
I have referred to this:
https://dev.office.com/patterns-and-practices-detail/2254
My application is an ASP.NET web service. There is also a Windows service running on the same server that triggers the web service at specified intervals.
So on each web service call, inside the web server, it creates a new thread and starts the task that the web service is supposed to do.
Sometimes this thread cannot finish the assigned task before the next web service call comes. So the next web service call creates the same kind of new thread and assigns the new task to it.
So if I check the server in busy situations, sometimes there are 20+ parallel threads running.
Everything works fine, but the server (Windows 2003 SP2) sometimes appears unresponsive. So I checked the CPU performance in Task Manager and it shows 100% when the web service starts to work, even if there are only 1 or 2 threads.
I feel something is wrong. Am I doing something conceptually wrong? I'd appreciate some advice.
Edit
public class EmailPrinter : System.Web.Services.WebService
{
    public void webServiceMethod()
    {
        Thread email_thread = new Thread(new ThreadStart(this.downloadEmails));
        email_thread.Start();
    }

    // ThreadStart requires a parameterless method
    private void downloadEmails()
    {
        EmailService.init();
        EmailService.ReceiveEmail();
    }
}
I'm working in .NET 3.5 and I have a problem stopping a service using ServiceController. I searched the whole internet and found no solution to my problem. ;)
Here's how I do that:
using (ServiceController service = new ServiceController("Service"))
{
try
{
TimeSpan timeout = TimeSpan.FromSeconds(timeoutSeconds);
service.Stop();
service.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
I'm waiting until my service stops. Then I want to replace its libraries (update it). And this exception is thrown:
UnauthorizedAccessException
Exception Details: System.UnauthorizedAccessException: Access to the path 'blablabla' is denied.
I'm sure that I have access to this path, because I run my application as an Administrator.
What is interesting: when this piece of code executes, the service disappears from the list of current services (in Task Manager). So it actually stops it, but some thread must still be using it, so I cannot move any of the service's files. Even when I try to move the files myself (I go to the service directory and try to move its files using the mouse) I cannot do it. But when I stop the service manually (Task Manager and 'End Task') I can do whatever I want with its files. So what's the difference between stopping it from C# code (using ServiceController) and stopping it using Task Manager?
I don't know if it's important, but I run this application (and this service) on Windows Server 2008 in VirtualBox. (I needed to run it on a machine other than my computer.)
Any ideas how I can solve this problem? ;)
Thanks for any help.
Best wishes,
Pete.
OK, I solved my problem.
First I used an administrator command prompt and the Net Stop command to stop the service, and it worked, but I started to wonder why. The answer is: it took a lot of time!
ServiceController actually stopped it, but some processes were still using it.
service.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
also worked fine, because it immediately changed the status of the service (it's just one change in the Windows services list), but politely closing all the threads using a service takes some time. :)
So all you need to do is just wait, and then check again whether it's still running. If it is, kill it with the Process class.
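A sketch of that wait-then-kill approach (the service and process names here are placeholders):

```csharp
using System;
using System.Diagnostics;
using System.ServiceProcess;

static void StopServiceHard(string serviceName, string processName, TimeSpan timeout)
{
    using (var service = new ServiceController(serviceName))
    {
        if (service.Status != ServiceControllerStatus.Stopped)
        {
            service.Stop();
            service.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
        }
    }

    // WaitForStatus returns once the SCM reports "Stopped", but the
    // process may still be releasing file handles; kill any leftovers
    // before touching the service's files.
    foreach (var process in Process.GetProcessesByName(processName))
    {
        process.Kill();
        process.WaitForExit();
    }
}
```

Process.Kill is a last resort, since the process gets no chance to clean up, but it does guarantee the file locks are released.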