I have a console application that I want to convert into an Azure Function timer-trigger app that runs every hour once some data processing and uploads are done. The data processing and uploads are done by classes that are injected in the Program.cs of the console application. Somewhere in those classes I have a Task.Delay of one hour, so that new data is queried after the first round of querying and uploading has finished. I copied the entire code of the console application, with its packages, into the Azure Function timer-trigger app. What I am trying to do is run the Program.cs of the console application first in the Azure Function app, so that it does its job (data processing, querying data, uploading data to Azure, and so on), and then start the timer trigger. Is that doable? What line of code can I add in the Run method of the Azure Function app to execute Program.cs first and then start the trigger? Here is the startup code of the Azure Function timer-trigger app:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace ExportServiceFunctionApp
{
    public static class ExportServiceFunctionApp
    {
        [FunctionName("ExportServiceFunctionApp")]
        public static void Run([TimerTrigger("0 0 */1 * * *")] TimerInfo myTimer, ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
        }
    }
}
There are a few solutions to achieve this.
Solution 1: Temporarily replace the timer trigger with an HTTP trigger
While debugging the app, comment out the timer-trigger signature of the original Run function and add an HTTP-trigger signature instead, like this:
public static async Task Run([HttpTrigger] Microsoft.AspNetCore.Http.HttpRequest req, ILogger log)
// public static async Task Run([TimerTrigger("0 0 * * * *")] TimerInfo myTimer, ILogger log)
{
// YOUR REGULAR CODE HERE
}
Then, when you run the app, the console will print an endpoint for the function (for local runs it looks like http://localhost:7071/api/ExportServiceFunctionApp).
Just open the endpoint in a browser (or Postman) and the function will get called.
Right before pushing the code to the repo, bring back the original Run signature and remove the HTTP-trigger one.
Solution 2: Add another HTTP trigger that calls the timer function
Add the following function to your app to expose an HTTP trigger.
[FunctionName("Test")]
public static async Task Test([HttpTrigger] Microsoft.AspNetCore.Http.HttpRequest req, ILogger log)
{
Run(null, log);
}
The function simply calls the Run function.
So when you run the app, you'll again get an endpoint from the console that can be used from the browser to trigger the function.
The URL will look like this:
http://localhost:7071/api/Test
Azure Functions is event-driven by nature: a function runs because its trigger fired, and the Run method is the entry point for that invocation; there is no hook that executes before the trigger. If you want any processing or code execution to happen first, you need to write one more function that performs those steps and then hands off to another function, whether a timer trigger or any other type.
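In practice that usually means moving the console app's processing into the timer function itself and letting the schedule replace the Task.Delay loop. A minimal sketch of that shape, assuming a hypothetical ExportService class that wraps the console app's existing query/process/upload logic:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace ExportServiceFunctionApp
{
    public static class ExportServiceFunctionApp
    {
        [FunctionName("ExportServiceFunctionApp")]
        public static async Task Run([TimerTrigger("0 0 */1 * * *")] TimerInfo myTimer, ILogger log)
        {
            log.LogInformation($"Export run starting at: {DateTime.Now}");

            // Hypothetical wrapper around the console app's existing
            // processing/upload classes; substitute your own types here.
            var exportService = new ExportService();
            await exportService.RunOnceAsync();

            // No Task.Delay needed: the timer trigger re-runs this
            // function every hour on the schedule above.
            log.LogInformation($"Export run finished at: {DateTime.Now}");
        }
    }
}
If you also want one run immediately when the app starts, instead of waiting for the first schedule boundary, the TimerTrigger attribute supports RunOnStartup = true; use it with care, because it fires on every host restart.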
Is there now a way to set the trigger properties (Name/Connection) using a value from Azure App Configuration?
I added a Startup class that reads the data from Azure App Configuration, but it seems the trigger sets its properties earlier than that and is therefore not able to bind the data that came from App Configuration.
I also found these threads about it, but I'm not sure whether there is a new update:
https://github.com/MicrosoftDocs/azure-docs/issues/63419
https://github.com/Azure/AppConfiguration/issues/203
You can do this. The following code gets the name of the queue to monitor from an app setting, and it gets the queue message creation time in the insertionTime parameter:
public static class BindingExpressionsExample
{
    [FunctionName("LogQueueMessage")]
    public static void Run(
        [QueueTrigger("%queueappsetting%")] string myQueueItem,
        DateTimeOffset insertionTime,
        ILogger log)
    {
        log.LogInformation($"Message content: {myQueueItem}");
        log.LogInformation($"Created at: {insertionTime}");
    }
}
Similarly, you can use this approach for other triggers.
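For the timer trigger specifically, the same %...% binding-expression syntax works for the schedule. A minimal sketch, assuming an app setting named ScheduleAppSetting that holds the CRON expression:
public static class TimerBindingExpressionExample
{
    [FunctionName("ScheduledFromAppSetting")]
    public static void Run(
        [TimerTrigger("%ScheduleAppSetting%")] TimerInfo myTimer,
        ILogger log)
    {
        log.LogInformation($"Timer fired at: {DateTime.Now}");
    }
}
Note that binding expressions are resolved from the function app's configuration (app settings/environment variables) at startup, so a value coming from Azure App Configuration has to be loaded into configuration before the host binds the trigger, which matches the behavior described in the linked issues.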
With a very simple Azure Function program, I want to test Azure Event Grid. My goal is that if a file is uploaded to a storage account, my Azure Function should be triggered and log a message like "Hello World". I have this block of code in my Azure Function:
namespace BlobTrigger
{
    public static class BlobEventGrid
    {
        [FunctionName("BlobEventGrid")]
        public static void Run([EventGridTrigger] JObject eventGridEvent,
            [Blob("{data.url}", Connection = "BlobConnection")] string file,
            TraceWriter log)
        {
            log.Info("Hello World");
        }
    }
}
I have set up my Event Grid subscription following this article.
If I upload more than 50 files to my container and then check the Live Metrics of my Azure Function, I can only see 4 incoming events.
Checking the metrics of the event subscription, I see:
Delivered Events:66
Matched Events: 51
Do you have any idea why only 4 events are tracked by my Azure Function?
I have a very simple function and have boiled it down even further to troubleshoot this issue. I read some previous questions on SO about similar problems, but I believe they don't apply to my case.
Function code
[FunctionName("XXXItems")]
public static async System.Threading.Tasks.Task RunAsync([TimerTrigger("0 45 6 * * 1-5")]TimerInfo myTimer, ILogger log)
{
log.LogInformation($"XXXItems --> Timer trigger function executed at: {DateTime.Now}");
}
I have configured the APPINSIGHTS_INSTRUMENTATIONKEY in the function settings.
But nothing gets logged, and I am a bit lost as to how to debug this.
I'm writing an ASP.NET Core 2.2 C# web application that uses SignalR to take calls from JavaScript in a web browser. On the server side, I initialize SignalR like this:
public static void ConfigureServices(IServiceCollection services)
{
...
// Use SignalR
services.AddSignalR(o =>
{
o.EnableDetailedErrors = true;
});
}
and
public static void Configure(IApplicationBuilder app, Microsoft.AspNetCore.Hosting.IHostingEnvironment env, ILoggerFactory loggerFactory)
{
...
// Route to SignalR hubs
app.UseSignalR(routes =>
{
routes.MapHub<ClientProxySignalR>("/clienthub");
});
...
}
My SignalR Hub class has a method like this:
public class ClientProxySignalR : Hub
{
...
public async Task<IEnumerable<TagDescriptor>> GetRealtimeTags(string project)
{
return await _requestRouter.GetRealtimeTags(project).ConfigureAwait(false);
}
...
}
and on the client side:
var connection = new signalR.HubConnectionBuilder()
    .withUrl("/clienthub")
    .configureLogging(signalR.LogLevel.Information)
    .build();

connection.start().then(function () {
    ...
    // Enable buttons & stuff so you can click
    ...
});

document.getElementById("tagGetter").addEventListener("click", function (event) {
    connection.invoke("GetRealtimeTags", "Project1").then(data => {
        ...
        // use data
        ...
    });
});
This all works as far as it goes, and it does work asynchronously. So if I click the "tagGetter" button, it invokes the "GetRealtimeTags" method on my Hub and the "then" portion is invoked when the data comes back. It is also true that if this takes a while to run, and I click the "tagGetter" button again in the meantime, it makes the .invoke("GetRealtimeTags") call again...at least in the JavaScript.
However...this is where the problem occurs. Although the second call is made in the JavaScript, it will not trigger the corresponding method in my SignalR Hub class until the first call finishes. This doesn't match my understanding of what is supposed to happen. I thought that each invocation of a SignalR hub method back to the server would cause the creation of a new instance of the hub class to handle the call. Instead, the first call seems to be blocking the second.
If I create two different connections in my JavaScript code, then I am able to make two simultaneous calls on them without one blocking the other. But I know that isn't the right way to make this work.
So my question is: what am I doing wrong in this case?
This is by design in WebSockets, to ensure messages are delivered in the exact order in which they were queued.
You can refer to this for more information: https://hpbn.co/websocket/
Quoted:
"The preceding example attempts to send application updates to the server, but only if the previous messages have been drained from the client's buffer. Why bother with such checks? All WebSocket messages are delivered in the exact order in which they are queued by the client. As a result, a large backlog of queued messages, or even a single large message, will delay delivery of messages queued behind it—head-of-line blocking!"
They also suggest a workaround solution:
"To work around this problem, the application can split large messages into smaller chunks, monitor the bufferedAmount value carefully to avoid head-of-line blocking, and even implement its own priority queue for pending messages instead of blindly queuing them all on the socket."
Interesting question.
I think the button should be disabled and show a loading icon the first time it is clicked, but maybe your UI allows more than one project to be loaded at once; just thought for a second that we might have an X-Y problem here.
Anyway, to answer your question:
One way you can easily deal with this is to decouple the process of "requesting" data from the process of "getting and sending" the data to the user when it is ready:
Don't await GetRealtimeTags; instead, start a background task, noting the connection id of the caller
Return nothing from GetRealtimeTags
Once the result is ready in the background task, call a new RealtimeTagsReady method that sends the results to the JavaScript client using the connection id kept earlier (see the sketch below)
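A minimal sketch of that shape, using the hub and router from the question (the type IRequestRouter and the method names RequestRealtimeTags and RealtimeTagsReady are made up for illustration; IHubContext is injected so the push can happen safely after the hub instance is disposed):
public class ClientProxySignalR : Hub
{
    private readonly IRequestRouter _requestRouter;
    private readonly IHubContext<ClientProxySignalR> _hubContext;

    public ClientProxySignalR(IRequestRouter requestRouter,
                              IHubContext<ClientProxySignalR> hubContext)
    {
        _requestRouter = requestRouter;
        _hubContext = hubContext;
    }

    // Returns immediately instead of awaiting the slow call, so a second
    // invocation on the same connection is not blocked behind the first.
    public Task RequestRealtimeTags(string project)
    {
        var connectionId = Context.ConnectionId;

        _ = Task.Run(async () =>
        {
            var tags = await _requestRouter.GetRealtimeTags(project).ConfigureAwait(false);

            // IHubContext outlives this hub instance, so it is safe to use here.
            await _hubContext.Clients.Client(connectionId)
                .SendAsync("RealtimeTagsReady", project, tags);
        });

        return Task.CompletedTask;
    }
}
On the JavaScript side you would then subscribe with connection.on("RealtimeTagsReady", ...) instead of relying on the value returned by invoke.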
Let me know if this helps.
I want to write code similar to the code at the bottom of this link (https://azure.microsoft.com/en-us/blog/automating-azure-analysis-services-processing-with-azure-functions/) in Visual Studio, building a DLL file. However, instead of using the connection string, I would like to use an existing linked service from my Azure portal.
The goal is to create a DLL that refreshes my cube while using an existing linked service which is already in my Azure portal.
Is this possible?
Thanks.
#r "Microsoft.AnalysisServices.Tabular.DLL"
#r "Microsoft.AnalysisServices.Core.DLL"
#r "System.Configuration"
using System;
using System.Configuration;
using Microsoft.AnalysisServices.Tabular;
public static void Run(TimerInfo myTimer, TraceWriter log)
{
log.Info($"C# Timer trigger function started at: {DateTime.Now}");
try
{
Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();
var connStr = ConfigurationManager.ConnectionStrings["AzureASConnString"].ConnectionString; // Change this to a Linked Service connection
asSrv.Connect(connStr);
Database db = asSrv.Databases["AWInternetSales2"];
Model m = db.Model;
db.Model.RequestRefresh(RefreshType.Full); // Mark the model for refresh
//m.RequestRefresh(RefreshType.Full); // Mark the model for refresh
m.Tables["Date"].RequestRefresh(RefreshType.Full); // Mark only one table for refresh
db.Model.SaveChanges(); //commit which will execute the refresh
asSrv.Disconnect();
}
catch (Exception e)
{
log.Info($"C# Timer trigger function exception: {e.ToString()}");
}
log.Info($"C# Timer trigger function finished at: {DateTime.Now}");
}
So I guess you're using Data Factory and you want to process your Analysis Services model from your pipeline. I don't see what your question actually has to do with the Data Lake store.
To trigger Azure Functions from the Data Factory (v2 only), you'll have to use a web activity. It is possible to pass a Linked Service as part of your payload, as shown in the documentation. It looks like this:
{
    "body": {
        "myMessage": "Sample",
        "linkedServices": [{
            "name": "MyService1",
            "properties": {
                ...
            }
        }]
    }
}
However, there is no Analysis Services linked service in Data Factory; at least, I haven't heard of such a thing. Passing in a connection string from the pipeline seems like a good idea, however. You could pass it as a pipeline parameter in the body of the web request:
Create a parameter in your pipeline
Add it to your web activity payload
{
    "body": {
        "AzureASConnString": "@pipeline().parameters.AzureASConnString"
    }
}
You can then retrieve this value inside your function, as described here.
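On the function side, a minimal sketch of reading that value, assuming the function is exposed as an HTTP trigger for the web activity to call and the usual HTTP-trigger usings (System.IO, Microsoft.AspNetCore.Http, Microsoft.AspNetCore.Mvc, Newtonsoft.Json.Linq); the function name and payload property are illustrative:
[FunctionName("ProcessModel")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    // Read the JSON body that the Data Factory web activity posted.
    string body = await new StreamReader(req.Body).ReadToEndAsync();
    JObject payload = JObject.Parse(body);
    string connStr = (string)payload["AzureASConnString"];

    var asSrv = new Microsoft.AnalysisServices.Tabular.Server();
    asSrv.Connect(connStr);
    // ... request the refresh and save changes as in the code above ...
    asSrv.Disconnect();

    return new OkResult();
}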