Before I get to the question, I must confess I'm very new to Azure Functions, so I don't yet have a full picture of how it all fits together.
A bit about the environment: we have an API which inserts some data and then pushes a model to a Service Bus queue.
We then have an Azure Function which triggers when a Service Bus message is received. Admittedly this works perfectly, unless it is left idle for 30-60 seconds, after which an error is thrown.
This is all done locally (VS 2017)... There is no logic in the function; all I do is debug and view the contents of the message.
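For reference, the function is essentially just the default Service Bus trigger shape, something like this (a sketch; the queue name and connection setting are placeholders):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessQueueMessage
{
    [FunctionName("ProcessQueueMessage")]
    public static void Run(
        // "myqueue" and "ServiceBusConnection" stand in for the real queue name
        // and connection string setting.
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // No real logic; just inspect the message contents while debugging.
        log.LogInformation($"Received message: {message}");
    }
}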
Ideally I'd like to know why I'm receiving this error in the first place; I assume that behind the scenes the Azure Function needs to maintain an active connection.
I'd really appreciate some guidance, or advice on missing parameters.
Thanks.
Please check the hosting plan of your Azure Function. You would have chosen either the Consumption plan or an App Service plan at creation time, and this cannot be changed afterwards.
The hosting plan is a potential reason for your function timing out.
The default timeout for functions on a Consumption plan is 5 minutes. The value can be increased for the Function App up to a maximum of 10 minutes by changing the property functionTimeout in the host.json project file.
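For example, in a v2+ Function App the setting is a top-level property in host.json (a sketch; the value is a timespan):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}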
In the dedicated App Service plan, your function apps run on dedicated VMs on Basic, Standard, Premium, and Isolated SKUs, which is the same as other App Service apps. Dedicated VMs are allocated to your function app, which means the functions host can be always running.
I have an Azure Function (~4) running in a Linux Consumption plan. It targets .NET 6. It uses a ServiceBusTrigger. The Service Bus has two queues, qprocessing and qcomplete. The first queue, qprocessing, has several messages which are scheduled for delivery to this function. The ServiceBusTrigger is not firing, and the messages stay on the queue until I investigate why they didn't execute.
I use the explorer to peek at the messages, and then they fire. When the function executes, the message is moved to the complete queue, qcomplete. The following examples show what I received in the complete queue.
"DeliveryDateTime":"2022-01-15T12:00:00","SendRequested":"2022-01-16T10:12:40.3301147Z"
"DeliveryDateTime":"2022-01-15T12:00:00","SendRequested":"2022-01-16T10:12:40.3285614Z"
DeliveryDateTime is EST. SendRequested is UTC as set by the function when it executes. These messages remained on the queue for 17 hours. And they didn't fire until I used the explorer to peek at them.
I've been noticing this issue of unreliable delivery when scheduling a message to be enqueued.
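For context, the sender schedules the messages roughly like this (a sketch using Azure.Messaging.ServiceBus; the connection string and payload are placeholders):

using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("qprocessing");

var message = new ServiceBusMessage("{\"DeliveryDateTime\":\"2022-01-15T12:00:00\"}");

// The message stays invisible on the queue until the scheduled UTC time,
// at which point it should become available to the trigger.
await sender.ScheduleMessageAsync(message, DateTimeOffset.UtcNow.AddHours(1));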
I have Application Insights enabled, and I see no errors or exceptions when I execute the following traces for the last three days.
traces
| where message contains '"state": "Error"'
traces
| where message contains "Exception while executing function"
The function executes, but I have to peek at the ServiceBus queue first.
Or I have to access the Azure Function app's web site; just loading the site is enough to get the messages processed.
For now, I have a monitor running every 15 minutes, which accesses the function app's web site. It's the page that says, "Your Functions 4.0 App is up and running."
UPDATED
The problem is the Scale Controller either not becoming aware of your trigger or having problems with it.
Add the SCALE_CONTROLLER_LOGGING_ENABLED setting to your configuration as per this doc: Configure scale controller logs
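For example, via the CLI (a sketch; the app and resource group names are placeholders, and AppInsights:Verbose is one of the documented destination:verbosity combinations):

az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose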
This will add Scale Controller logging to the traces table, and you might see something like this:
"Function app does not contain any active triggers", which indicates that once your app goes idle, the Scale Controller will not wake it up, because it is not aware of any trigger.
After the function is deployed there must be a sync of triggers: sometimes it is automatic, sometimes it is manual, and sometimes it fails.
In my case the issue was altering the host.json file (like this), and also "leftovers" from previous deploys inside the storage account used by the function, both in the blobs and in the file shares; they caused different kinds of problems, but all of them invalidated my trigger.
In other cases it is a mixture of the deployment method not triggering the sync, by design or by failure.
I have inherited an Azure Service Bus solution: C#, Web API, with a singleton service implementing the queue receiver. Running locally on my PC, I can publish a message to my dev queue and see that event consumed by my Service Bus receiver. No problem.
In our staging environment, however, my receiver is not firing, so my code never processes the messages. Purely by luck I found an instance where a different environment was pointing at the staging queue, which makes me think "what else is using this queue?". We have no application logging (useless, I know) of when events are published or consumed, so I wondered: is there a way from within Azure to see either
What is consuming the events published to the queue, or
What is currently connected to the queue, so I can validate each connection and make sure a dev in a far-flung office isn't running test programs against the queue.
Thanks
Create an Application Insights instance.
Connect your web app in Azure to the created AI instance.
After some time you will be able to see the requests to other systems sent by your app (in the application map you'll see a fancy diagram of the requests; in the logs you can query the requests to Service Bus, as sketched after this list).
Drop the AI instance if you don't need it anymore
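For the log query, something along these lines should surface the Service Bus calls (a sketch; the exact type string depends on the SDK emitting the telemetry):

dependencies
| where type contains "Azure Service Bus"
| project timestamp, name, target, operation_Name, success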
In brief, what is the best way to create Azure resources (VMs, resource groups, etc.) that are defined programmatically, without locking the web app's interface because of the long time that some of these operations take?
In more detail:
I have a .NET Core web application where customers are added manually. Once a customer is added, the app automatically creates some resources in Azure. However, I noticed that my interface is 'locked' during these operations. What is a relatively simple way of detaching these operations from the web application? I had in mind sending a trigger via a Service Bus or Azure Relay and triggering an Azure Function. However, it seems to me that all these resources return something back, and my web app waits for that. I need a 'send and forget' method: just send out the trigger to create these resources, don't bother with the return values for now, and continue with the app.
If a 'send and return' method also works within my web app, that is also fine.
Any suggestions are welcome!
You need to queue the work to run in the background and then return from the action immediately. The easiest method of doing this is to create a hosted service. There are a couple of different ways to do this:
Use a queued background service and actually queue the work to be done in your action (see the sketch after this list).
Just write the required info to a database table, redis store, etc. and use a timed service to perform the work on a schedule.
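A minimal sketch of the first option, assuming a Channel-based work queue that is injected into both the controller and a BackgroundService (all names here are illustrative):

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// The queue the controller action writes to and the hosted service reads from.
public class BackgroundWorkQueue
{
    private readonly Channel<Func<CancellationToken, Task>> _queue =
        Channel.CreateUnbounded<Func<CancellationToken, Task>>();

    public void Enqueue(Func<CancellationToken, Task> workItem) =>
        _queue.Writer.TryWrite(workItem);

    public ValueTask<Func<CancellationToken, Task>> DequeueAsync(CancellationToken ct) =>
        _queue.Reader.ReadAsync(ct);
}

// Drains the queue in the background, e.g. creating the Azure resources.
public class QueuedHostedService : BackgroundService
{
    private readonly BackgroundWorkQueue _queue;

    public QueuedHostedService(BackgroundWorkQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var workItem = await _queue.DequeueAsync(stoppingToken);
            await workItem(stoppingToken);
        }
    }
}

// Registration in Startup/Program:
//   services.AddSingleton<BackgroundWorkQueue>();
//   services.AddHostedService<QueuedHostedService>();

The controller action then just calls Enqueue(...) with the resource-creation work and returns immediately (e.g. a 202 Accepted).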
In either case, you may also consider splitting this off into a worker service (essentially, a separate app composed of just the hosted service, instead of running it in the same instance as your web app). This allows you to scale the service independently and also insulates your web app from problems that service might encounter.
Once you've set up your service and scheduled the work, you just need some way to let the user know when the work is complete. A typical approach is to use SignalR to allow the server to notify the client with progress updates or success notifications. However, you can also just do something simple like email the user when everything is ready.
I have a .NET Core console application that I need to deploy to Azure and schedule to run once a day. The application creates a TCP socket to get market data. I need to schedule it to run in the morning; the application will receive a close message from the market near the end of the day and close automatically. Approximate run time is estimated at 16 hours, 5 days a week.
Here are the options I've researched:
Cloud Service, which might be deprecated (I'm having a hard time validating the comments I've read to this effect)
Service Fabric - but this really looks like it's tailored for stateless applications that can spin up and down for scale. In my case, it should always be a single instance (I do like the self "healing", if my service does go down, it would be great if it is automatically restarted or a new one is spun up)
Azure Web Job and Azure Scheduler. It looks like I could set this to "always on" and add a settings file that has a cron configuration, but it seems like a waste of resources to have it "always on". This option also appears to be limited in its deployment options - I can't set up (that I see) git integration and auto-deploy. This does seem like the way to go.
I'm looking for the pro's and con's of these options above for my use case, or any other options that I might have missed.
There's one thing that seems to be overlooked here. This part:
The application is creating a TCP Socket to get market data.
Is that 80/TCP or 443/TCP? Does it talk HTTP over one of those ports?
Because if your application talks a custom protocol over an arbitrary TCP port, you can't use WebJobs. The App Service sandbox does not allow arbitrary port binding. This applies to ingress. For egress (outbound) there's no restriction; you can make raw TCP requests from the WebJob to any destination and port.
From https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#network-endpoint-listening:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
There's no need to involve the Azure Scheduler service. WebJobs have a built-in cron implementation which is completely free.
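For example, a settings.job file deployed next to the WebJob binaries with an NCRONTAB expression (the schedule here is only an illustration: 08:00 on weekdays):

{
  "schedule": "0 0 8 * * 1-5"
}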
Also, the Always On feature really means "hit this site with synthetic requests every couple of minutes so it serves a 200 OK", thus preventing the Application Pool from being unloaded from memory due to inactivity.
I would use the tank. Can't find anything wrong with the tank if you can pick your tank size. There's also very little maintenance with tanks.
Size (id)     Cores   RAM       Net Bandwidth   Total disk size
----------------------------------------------------------------
ExtraSmall    1       0.75 GB   Low             19 GB
Small         1       1.75 GB   Moderate        224 GB
...
ServiceDefinition.csdef if you need to listen on a socket:
<Endpoints>
<InputEndpoint name="aRawTCPEndpoint" protocol="tcp" port="54321" localPort="54321" />
</Endpoints>
Where does your application keep state? Memory/disk/off-the-box database? Cloud Service roles are stateless in nature and if one instance gets sick it's barbecued and a new one is spun up. It's crucial that state be kept off-the-box, in durable storage - Blob/Table storage, Azure SQL, DocumentDB, etc.
Imagine you built your house 6 years ago, and you used this material called ClassicBrick in the structure. It is a good material: strong, waterproof, scratch-resistant. But recently this newer and better material - let's call it Armritis (which, by the way, is designed to be used in BRIDGES, not houses, but I digress) - came out, which everybody tells you is better in every way. Do you tear down the house? Cloud Services are not deprecated, and until I see an official Microsoft roadmap saying the opposite I'm not going to entertain this in any way.
On the topic of Service Fabric, it CAN do stateful and it's actually one of its biggest selling points:
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-quick-start/:
Create a stateful service
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
Azure Functions is also worth a good look if you can speak HTTP over standard ports.
First, we need to compare the pricing of Cloud Services, Service Fabric, and WebJobs & Scheduler if you don't want to waste resources. Here is the pricing calculator. Because your console application job only needs to run on a specific schedule, it is better to save money while the job is not running. So a WebJob (if you have a web app in the meantime) plus the Scheduler will be a good choice to achieve your purpose.
Using a cloud service for such a tiny job is like using a tank to go to work
Service Fabric is mainly for building microservices-style applications, not console apps or jobs that run once a day.
WebJobs require a web app, so you are left with that option, which is the base for WebJobs.
You can create the schedule and make it run every day at a specific time, or execute it manually on demand.
I would go with one of these solutions, in this order of priority:
App Service: Web Apps (Web Job)
Virtual Machine
Cloud Service
App Service: Web Apps (Web Job) provides a free plan, and it has started to support Web Jobs on the free plan. You will be able to work with files, should you need to. As mentioned in other answers, just set up a schedule. If you have doubts and think it is not kosher to do it on a website, then think of it as getting a free website as a bonus (if you use a paid plan). Either way, everything runs on a machine, be it with a web server or without. Maybe you will even revive some long-forgotten web project of yours?
Cloud Service and Virtual Machine are both straightforward and simple. Gotta be honest, I haven't used Cloud Service, yet I think you can connect to it via Remote Desktop just like to an ordinary VM. You will have complete control. I would choose Virtual Machine over the Cloud Service though, just because it is cheaper.
Solutions that will NOT work:
Azure Scheduler does not fit you, because it only allows HTTP/S requests and posting messages to Azure Storage queues, Azure Service Bus queues, or Azure Service Bus topics.
Personally I would go with a WebJob without AlwaysOn, and use Azure Scheduler to fire the WebJob at the desired times using HTTPS. Then the WebJob can do the calls needed to get the data. This does not need AlwaysOn since the Scheduler call wakes it up anyway.
Azure Functions might also be worth a look, though they aren't meant for long-running tasks.
This would also probably be the cheapest option as the Web App can probably run on Free tier (depending on how long the job takes) and the Scheduler can also be Free tier.
We are building an ASP.NET web application which pushes all of its data to Salesforce and is a forms-authenticated website. To minimize the number of API calls to Salesforce and reduce the response time for the end user, when a user logs in we store all of the contact information in the session object. But the problem is: when someone changes information in Salesforce, how can I find out about it in the ASP.NET web application, so that the updated information is queried again and the session object is updated?
I know there is a Salesforce listener we can use to have notifications sent in the form of outbound messages. But I'm just wondering how I can manage to update the currently running session object for a contact in the ASP.NET web application.
Your inputs are valuable to me.
If you do have access to the listener and you can use it to push events, then I think an approach like this would likely minimize events/API calls tremendously.
The remote service - SalesForce
The local service - a WCF/SOAP Kind of service
The web application - the ASP.NET app that you are referring to
The local cache - a caching system (could be filesystem, could be more elaborate)
First of all, you should look into creating a very simple local service whose purpose is to receive API calls from SalesForce when the data that matters to you is changed. When such a call is received, you should update a local cache with the new values. The web application should always check first whether the requested item is in the local cache; only if it is not should it make an API call to the remote service to retrieve the data. Once the data is retrieved, update the local cache and display it. From that point forward, unless the data changes (which SalesForce should push to you and therefore to your local cache), you should never have to make an API call again.
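A rough sketch of that read path and the push update, assuming IMemoryCache as the local cache and a hypothetical ISalesForceClient wrapper for the remote API:

using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical types, for illustration only.
public record Contact(string Id, string Name, string Email);

public interface ISalesForceClient
{
    Task<Contact> GetContactAsync(string contactId);
}

public class ContactStore
{
    private readonly IMemoryCache _cache;
    private readonly ISalesForceClient _salesForce;

    public ContactStore(IMemoryCache cache, ISalesForceClient salesForce)
    {
        _cache = cache;
        _salesForce = salesForce;
    }

    // Read path for the web application: local cache first, remote API only on a miss.
    public async Task<Contact> GetContactAsync(string contactId)
    {
        if (_cache.TryGetValue(contactId, out Contact cached))
            return cached;

        var contact = await _salesForce.GetContactAsync(contactId);
        _cache.Set(contactId, contact);
        return contact;
    }

    // Called by the local service when SalesForce pushes a change (outbound message).
    public void ApplyPushedChange(Contact updated) => _cache.Set(updated.Id, updated);
}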
You could even evolve to pushing data when it is created in SalesForce and also doing a massive series of API calls to SalesForce when the new local service is in place and the remote service is properly configured. This will then give you a solution where the "internet could die" and you would still have access to the local cache and therefore data.
The only challenge here is that I don't know if SalesForce outgoing API calls can be retried easily if they fail (in case the local service does go down, or the internet does, or SalesForce is not available) in order to keep eventual consistency.
If the local cache is the Session object (which I don't recommend because it's volatile) just integrate the local service and the web application into the same umbrella (same app).
The challenges here are
Make sure changes (including creations and deletions) trigger the proper calls from the remote service to the local service
Make sure the local cache is up to date - eventual consistency should be fine as long as it only takes minutes to update it locally when changes occur - a good design should be within 30 seconds if all services are operating normally
Make sure that you can push any changes back to SalesForce (if you need to)
Don't trust the network - it will eventually fail - account for that possibility
Good luck, hope this helps
Store the values in the cache, and set the expiration time of the entry to be something low enough that when a change is made, the update will be noticed quickly enough. For you that may be a few hours, it may be less, or it could be days.
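With IMemoryCache, for example, that would look something like this (a sketch; the two-hour window is only an illustration):

using System;
using Microsoft.Extensions.Caching.Memory;

public static class ContactCache
{
    // Cache the value for a bounded time so a change made in Salesforce is
    // picked up on the first read after the entry expires.
    public static void Store(IMemoryCache cache, string contactId, object contact) =>
        cache.Set(contactId, contact, new MemoryCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(2)
        });
}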