I have an Azure Function (~4) running on a Linux Consumption plan. It targets .NET 6 and uses a ServiceBusTrigger. The Service Bus namespace has two queues, qprocessing and qcomplete. The first queue, qprocessing, has several messages scheduled for delivery to this function. The ServiceBusTrigger is not firing, and the messages stay on the queue until I investigate why they didn't execute.
I use the explorer to peek at the messages. Then they fire. When the function executes, the message is moved to the second queue, qcomplete. The following examples show what I received in the complete queue.
"DeliveryDateTime":"2022-01-15T12:00:00","SendRequested":"2022-01-16T10:12:40.3301147Z"
"DeliveryDateTime":"2022-01-15T12:00:00","SendRequested":"2022-01-16T10:12:40.3285614Z"
DeliveryDateTime is EST. SendRequested is UTC, set by the function when it executes. These messages remained on the queue for 17 hours, and they didn't fire until I used the explorer to peek at them.
I've been noticing this issue of unreliable delivery when scheduling a message to be enqueued.
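For context, the scheduling is done roughly like this (a simplified sketch using Azure.Messaging.ServiceBus; the connection string and payload are placeholders, not my real code):

using System;
using Azure.Messaging.ServiceBus;

// Simplified sketch of scheduling a message for future delivery.
// Connection string, queue name, and payload are placeholders.
await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("qprocessing");

var message = new ServiceBusMessage("{\"DeliveryDateTime\":\"2022-01-15T12:00:00\"}");

// 12:00 EST is 17:00 UTC; the message stays invisible on qprocessing until then.
await sender.ScheduleMessageAsync(message, new DateTimeOffset(2022, 1, 15, 17, 0, 0, TimeSpan.Zero));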
I have Application Insights enabled, and I see no errors or exceptions when I run the following queries against the last three days.
traces
| where message contains '"state": "Error"'
traces
| where message contains "Exception while executing function"
The function executes, but I have to peek at the ServiceBus queue first.
Or I have to browse to the Azure function app's web site; just loading that page is enough to make the function fire.
For now, I have a monitor running every 15 minutes, which accesses the function app's web site. It's the page that says, "Your Functions 4.0 App is up and running."
UPDATED
The problem is the Scale Controller either not being aware of your trigger or having problems with it.
Add the SCALE_CONTROLLER_LOGGING_ENABLED setting to your configuration as per this doc: Configure scale controller logs
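For example, with the Azure CLI (the app and resource group names are placeholders; AppInsights:Verbose is one of the documented destination:verbosity combinations):

az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose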
This adds Scale Controller logging to the traces table, where you might see something like
"Function app does not contain any active triggers". That message indicates that when your app goes idle, the Scale Controller will not wake it up, because it isn't aware of any trigger.
After the function is deployed there must be a sync of triggers; sometimes it's automatic, sometimes it's manual, and sometimes it fails.
In my case the issue was altering the host.json file (like this), and also "leftovers" from previous deploys inside the storage account used by the function, both in the Blobs and in the File Shares. These gave different kinds of problems, but all of them invalidated my trigger.
In other cases it's a mixture of the deployment method not triggering the sync, by design or by failure.
Related
I have added an HTTP-triggered Azure Function and deployed it to a function app. The function app contains only this one on-demand HTTP-triggered function. The function app uses an App Service plan, not a Consumption plan.
Also, the function app version is ~1, so the timeout is unlimited.
In the Azure Function code, I read one file containing thousands of historical records and process those records. This task takes more than an hour. It is a one-time task.
When I invoke this Azure Function after deployment, it gets invoked, and after some time I notice that it is invoked again and processes already-processed records again.
Can anyone help me understand the invocation strategy of Azure Functions? If an Azure Function runs for a long time without returning any status, will it call itself back?
If yes, how do I stop it from calling back again until it completes its processing?
Functions are supposed to be short-lived; they shouldn't run for a long time. The strength of Functions is in short-lived executions with small or variable throughput.
Whenever possible, refactor large functions into smaller function sets that work together and return responses fast. For example, a webhook or HTTP trigger function might require an acknowledgment response within a certain time limit; it's common for webhooks to require an immediate response. You can pass the HTTP trigger payload into a queue to be processed by a queue trigger function. This approach lets you defer the actual work and return an immediate response.
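A minimal sketch of that pattern, assuming the in-process C# model (the function names and the work-items queue are placeholders):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class WorkFunctions
{
    // HTTP trigger: accepts the payload, drops it on a queue, and answers immediately.
    [FunctionName("EnqueueWork")]
    public static async Task<IActionResult> Enqueue(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("work-items")] IAsyncCollector<string> queue)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        await queue.AddAsync(body);      // defer the real work
        return new AcceptedResult();     // 202: caller gets an immediate response
    }

    // Queue trigger: does the actual (possibly slow) processing later.
    [FunctionName("ProcessWork")]
    public static void Process([QueueTrigger("work-items")] string payload, ILogger log)
    {
        log.LogInformation("Processing a work item of {Length} characters", payload.Length);
    }
}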
Have a look at this:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-best-practices#avoid-long-running-functions
With Durable Functions you can easily support long-running processes by applying the Async HTTP API pattern. When you are dealing with functions that need some time to process the payload or request, running them under an App Service plan, as a WebJob, or as Durable Functions is the right way.
As suggested by @Thiago Custodio, you also need to split the large file into smaller ones and pass them to activities in your Durable Functions workflow.
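A rough sketch of what that could look like with Durable Functions (the function names, batch size, and record type are illustrative assumptions, not the poster's actual code):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HistoricalImport
{
    // HTTP starter: kicks off the orchestration and returns 202 plus status-query URLs right away.
    [FunctionName("ImportStart")]
    public static async Task<IActionResult> Start(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client)
    {
        string instanceId = await client.StartNewAsync("ImportOrchestrator");
        return client.CreateCheckStatusResponse(req, instanceId);
    }

    // Orchestrator: reads the record list once, then fans the work out in small batches.
    [FunctionName("ImportOrchestrator")]
    public static async Task Orchestrate([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var records = await context.CallActivityAsync<List<string>>("ReadRecords", null);

        const int batchSize = 500;
        for (int i = 0; i < records.Count; i += batchSize)
        {
            var batch = records.Skip(i).Take(batchSize).ToList();
            await context.CallActivityAsync("ProcessBatch", batch);
        }
    }

    [FunctionName("ReadRecords")]
    public static List<string> ReadRecords([ActivityTrigger] object input)
    {
        // Read the historical file here (placeholder).
        return new List<string>();
    }

    [FunctionName("ProcessBatch")]
    public static void ProcessBatch([ActivityTrigger] List<string> batch)
    {
        // Process one batch of records here (placeholder).
    }
}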
I am using an Azure WebJob to pick up messages off the queue and process them as shown below:
using System.IO;
using Microsoft.Azure.WebJobs;

namespace TicketProcessor
{
    public class Functions
    {
        // Runs whenever a new message appears on the "ticketprocessorqueue" storage queue.
        public static void ProcessQueueMessage([QueueTrigger("ticketprocessorqueue")] string message, TextWriter log)
        {
            //DO STUFF HERE...
        }
    }
}
And this WebJob exists within an App Service in my Azure account. My volume has been steadily increasing, and I now need to "scale this out". I need this same WebJob to run on multiple instances simultaneously. There are two reasons for this:
More processing power
My service targets a web api that throttles according to IP
I see the option within my app service to "Scale Out". But I'm not sure of the internal workings of the message queue as it relates to the scale out. My particular concern is this: If I scale this app service out to 2 instances, and my web job runs on both (as shown above), will each message in the queue get processed twice (once on each instance)? Or does the Azure queue mechanism handle it in such a way that each message is processed only once by one of the two instances?
One more thing to consider: in reading up on this, I found that there are two types of queues in Azure (storage queues and service bus queues). One thing I found interesting in the docs (in the "Foundational Capabilities" section) is that the delivery guarantee for storage queues is "At-Least-Once", while the delivery guarantee for service bus queues is both "At-Least-Once" and "At-Most-Once". This seems to indicate that if my process runs off a service bus queue, it is guaranteed to run only once, but if it runs off a storage queue, it could possibly run more than once. In my case, I am using a storage queue. Any clarity on this would be helpful.
If I scale this app service out to 2 instances, and my web job runs on
both (as shown above), will each message in the queue get processed
twice (once on each instance)?
The answer to your question is no. Each message will be processed only once. From this link ("Multiple Instances" section):
If your web app runs on multiple instances, a continuous WebJob runs
on each machine, and each machine will wait for triggers and attempt
to run functions. The WebJobs SDK queue trigger automatically prevents
a function from processing a queue message multiple times; functions
do not have to be written to be idempotent. However, if you want to
ensure that only one instance of a function runs even when there are
multiple instances of the host web app, you can use the Singleton
attribute.
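If you ever do need the opposite behaviour (only one invocation at a time even when scaled out), a sketch of the Singleton approach on the same function looks like this:

using System.IO;
using Microsoft.Azure.WebJobs;

namespace TicketProcessor
{
    public class Functions
    {
        // Singleton takes a distributed lock (a blob lease) so that only one invocation
        // of this function runs at a time across all scaled-out instances.
        [Singleton]
        public static void ProcessQueueMessage([QueueTrigger("ticketprocessorqueue")] string message, TextWriter log)
        {
            // DO STUFF HERE...
        }
    }
}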
Just before I get to the question, I must confess I'm very new to Azure Functions, and thus I don't really understand the overall picture.
A bit about the environment: we have an "API" which inserts some data and then pushes a model to a Service Bus queue.
We then have an Azure Function which triggers when a Service Bus message is received. Admittedly this works perfectly unless it is left idle for 30-60 seconds, after which an error is thrown.
This is all done locally (VS17)... There is no logic, all I do is debug and view the contents of the message.
Ideally I'd like to know why I'm receiving this error to begin with; I assume that behind the scenes the Azure Function needs to keep an active connection.
I'd really appreciate some guidance, or advice on missing parameters.
Thanks.
Please check the hosting plan of your Azure Function. You would have chosen either a Consumption plan or an App Service plan at creation time, and this cannot be modified afterwards.
The hosting plan can be a potential reason for your function timing out.
The default timeout for functions on a Consumption plan is 5 minutes. The value can be increased for the Function App up to a maximum of 10 minutes by changing the property functionTimeout in the host.json project file.
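For example, a host.json along these lines raises the limit to the 10-minute maximum (the "version" property applies to the v2+ host schema; a v1 host.json just has the functionTimeout line):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}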
In the dedicated App Service plan, your function apps run on dedicated VMs on Basic, Standard, Premium, and Isolated SKUs, the same as other App Service apps. Dedicated VMs are allocated to your function app, which means the Functions host can always be running.
When a manager creates a task and sets the activation date in the future, it's supposed to be stored in the DB. No message is dispatched to the relevant workers until a day or two before it's due. When the time approaches, an email is sent out to the subordinates.
Previously I resolved this using a locally run Windows service that scheduled the messaging. However, as I'm implementing something similar in Azure, I'm not sure how to solve it (other than actually hosting my own Windows Server in the cloud, of course, but that kind of defeats the whole point).
Since my MVC application is strictly event driven, I've browsed around in the Azure portal to find a utility to schedule or postpone a method being invoked. No luck. So at the moment, all the emails are dispensed immediately and the scheduling is performed by keeping the message in the inbox until it's time (or manually setting up an appointment).
How should I approach the issue?
Another possible solution is to use a queueing mechanism. You can use Azure Storage queues or Service Bus queues.
The way it would work is: when a task is created and saved in the database, you write a message to a queue. This message contains details about the task (maybe a task id). However, the message is invisible by default and only becomes visible after a certain amount of time (you calculate this period based on when you need to send out the email). When the visibility timeout expires, the message becomes available to be consumed from the queue. You then have a WebJob with a queue trigger (i.e. the WebJob wakes up when there's a message in the queue). In your WebJob code, you fetch the task information from the database and send the notification to the person concerned.
If you're using Azure Storage Queue, the property you would be interested in is InitialVisibilityTimeout. Please see this thread for more details: Azure storage queue message (show at specific time).
If you're using Azure Service Bus Queue, the property you would be interested in is BrokeredMessage.ScheduledEnqueueTimeUtc. You can read more about this property here: https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.scheduledenqueuetimeutc.aspx.
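To illustrate the storage-queue option, here's a minimal sketch using the newer Azure.Storage.Queues SDK (an assumption on my part; the thread linked above uses the older InitialVisibilityTimeout API, but the idea is identical, and note that a storage queue's visibility timeout is capped at 7 days):

using System;
using Azure.Storage.Queues;

// Hide the task message until roughly two days before the activation date.
// Connection string, queue name, and payload are placeholders.
var queue = new QueueClient("<storage-connection-string>", "task-notifications");
queue.CreateIfNotExists();

DateTime activationDateUtc = DateTime.UtcNow.AddDays(5);          // example task activation date
TimeSpan delay = activationDateUtc.AddDays(-2) - DateTime.UtcNow; // must stay under the 7-day cap

queue.SendMessage("task-id:42", visibilityTimeout: delay);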
One solution for running background tasks is to use WebJobs. WebJobs can run on a schedule (say, once per day), manually, or triggered by a message in a queue.
You can use Azure WebJobs. Basically, create a WebJob and schedule it to regularly check the data in your database for upcoming tasks and then notify people.
To be clear, there are no errors for the hosted service itself, just a generic Windows service error.
The error message says:
Error 1053: The service did not respond to the start or control request in a timely fashion.
If I run NServiceBus.Host explicitly (where the Windows service is installed), I am presented with relevant messages indicating a successful "spinning up" of the endpoint; in fact, I can see the subscription message(s) are persisted into the relevant private MSMQ queue, and the exe then sits and waits, like a good server should, for something to happen upon it.
If I start the Windows service (hosting the endpoint), there are no exceptions or events in the event viewer, nor entries in the log file, to indicate any errors or give me reason to believe something bad is happening. If I look in the log file and the queue, I can see the subscription messages are marked as dispatched; in effect, the same behaviour as running it standalone, with the only difference being that the service won't start.
EDIT:
The Windows service is provided by the NServiceBus framework in the form of a generic host, so the implementation of the various required Windows service methods is not something I have control over, as you normally would if you were creating the Windows service yourself.
The most common reason that I've found for this is down to logging.
The user account running the service must have Performance Monitoring Access.
I add this through Server Manager > Users & Groups > Groups > Performance Log Users > Add.
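If you'd rather script that instead of using Server Manager, the equivalent from an elevated command prompt is along these lines (the account name is a placeholder for whatever identity runs your service):

net localgroup "Performance Log Users" MYDOMAIN\nservicebus-svc /add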