I have created a Service bus triggered Azure function and want to log custom events in application insights.
private static string key = TelemetryConfiguration.Active.InstrumentationKey =
    System.Environment.GetEnvironmentVariable(
        "APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);

private static TelemetryClient telemetryClient =
    new TelemetryClient() { InstrumentationKey = key };
[FunctionName("Function1")]
public static void Run(
    [ServiceBusTrigger("xxxxx", "xxxxx", AccessRights.Manage, Connection = "SBConnectionKey")] string mySbMsg,
    ILogger logger,
    TraceWriter log)
{
    log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
    telemetryClient.Context.Cloud.RoleName = "AIFunction";
    logger.LogMetric("test", 123);
    telemetryClient.TrackEvent("Ack123 Received");
    telemetryClient.TrackMetric("Test Metric", DateTime.Now.Millisecond);
}
Only the log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}"); call shows up in the traces; the custom events and metrics are not logged to Application Insights.
Any ideas what could be going on?
Answering your explicit question:
What is wrong with the telemetry that I send or where to find it in Application Insights Portal?
I created a function with almost the same code and tested it; you can analyse the repo. I deployed the function and got the following results:
Answering your implicit question:
How to use Application Insights?
The App Insights query language is tricky at the beginning; I found this succinct file helpful. Other elements to consider when working with this monitoring tool:
there is a lag between the moment telemetry is sent and the moment you see it in the Application Insights portal; real-time monitoring would be an expensive tool.
in the past I faced the same issue, and the problem was that the event/metric name was not in the name field of the telemetry but somewhere in the details. This issue might be referring to it. So what we decided to do was include more details and use this method together with the MetricTelemetry class (see the sketch below).
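As a rough illustration of that approach (a sketch only; the property name is an assumption, not the exact code we used), extra details can be attached via EventTelemetry, and the metric can be sent through MetricTelemetry:

using Microsoft.ApplicationInsights.DataContracts;

// Attach extra details to the event instead of sending only a name.
var ackEvent = new EventTelemetry("Ack123 Received");
ackEvent.Properties["message"] = mySbMsg;   // hypothetical property name, adjust as needed
telemetryClient.TrackEvent(ackEvent);

// Send the metric through MetricTelemetry rather than the simple TrackMetric(name, value) overload.
var metric = new MetricTelemetry
{
    Name = "Test Metric",
    Sum = DateTime.Now.Millisecond
};
telemetryClient.TrackMetric(metric);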
Although Application Insights might seem confusing at first, it is a powerful tool and worth investing time to learn.
Related
I'm trying to implement a basic pub/sub system with dynamic subscribers. I need to dynamically register a topic subscriber in my .NET APIs, but it seems like I can only do that manually from the Azure Portal. When my program starts, I want to be able to register a subscriber to a topic in the format of subscribername-{timestamp} because I want to be able to deploy as many staging/dev versions as I want without having to manually create these subscribers each time.
I feel like this is a fundamental feature that I'm just blindly missing. I can do this when working with queues, but if I try to do the same with a topic, I get continuous errors of that subscriber path not found. I have searched the internet to no end and while I have found SOME solutions, they are very old and often not compatible with .NET 5 or the package is deprecated. I'm feeling like I'm going against the grain and missing something with what I'm coming up with, so I'd like to get some input on what the proper practice is for this.
I'm using Azure.Messaging.ServiceBus for publishing and subscribing currently. Below is some code -
var processor = ServiceBusClient.CreateProcessor(TopicName, $"DynamicSubscriber-{DateTime.Now}");

try
{
    processor.ProcessErrorAsync += ErrorHandler;
    processor.ProcessMessageAsync += MessageHandler;

    await processor.StartProcessingAsync();
}
catch (Exception e)
{
    await processor.DisposeAsync();
    await ServiceBusClient.DisposeAsync();
}
finally
{
    Console.WriteLine("Press a key to exit.");
    Console.ReadLine();
}
Thank you @PeterBons! Yes, ServiceBusAdministrationClient is the client class to use when creating, updating, fetching, or deleting Service Bus entities.
Also, there are a few error details given in this article about using ServiceBusAdministrationClient with queues, and in this SO thread.
The ServiceBusTopicSubscription class is used to set up the Azure Service Bus subscription. The class uses the ServiceBusClient to set up the message handler, while the ServiceBusAdministrationClient is used to implement filters and add or remove these rules. The Azure.Messaging.ServiceBus NuGet package is used to connect to the subscription.
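For reference, a minimal sketch of creating the subscription dynamically at startup might look like the following; connectionString, TopicName, serviceBusClient and the subscription-name format are assumptions, not the exact code from the question:

using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

// Create the subscription (if it does not exist yet) before attaching a processor to it.
var adminClient = new ServiceBusAdministrationClient(connectionString);
string subscriptionName = $"DynamicSubscriber-{DateTimeOffset.UtcNow:yyyyMMddHHmmss}";

bool exists = (await adminClient.SubscriptionExistsAsync(TopicName, subscriptionName)).Value;
if (!exists)
{
    await adminClient.CreateSubscriptionAsync(TopicName, subscriptionName);
}

// Only now can the processor listen on the new subscription without "entity not found" errors.
var processor = serviceBusClient.CreateProcessor(TopicName, subscriptionName);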
I am adding Application Insights (AI) to my web API service by following this page: Application Insights Instructions. I managed to get my service to connect to AI, and I am able to see when my service performs a POST, GET, etc. I also placed log calls throughout my service, but none of them are being written to my AI's Traces log.
I made sure to set up my Startup.cs and appsettings.json files to contain the new code needed to run AI throughout my service, and configured the logging settings so that AI captures logs at Debug level and above.
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
}
appsettings.json
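The original appsettings.json contents are not shown here; a typical shape for routing Debug-level logs to Application Insights looks roughly like this (the key value is a placeholder):

{
  "Logging": {
    "LogLevel": {
      "Default": "Debug"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Debug"
      }
    }
  },
  "ApplicationInsights": {
    "InstrumentationKey": "<your instrumentation key>"
  }
}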
Logging Example
public async Task ProcessQueueAsync(dBData dbContext)
{
    // _logger is of type ILogger<[INSERT CLASS NAME]>
    _logger.LogDebug("This is a test log by Lotzi11.");
    await ProcessQueueAsyncSingle(dbContext, CancellationToken.None);
}
Can someone help me figure out why my logs are not being sent to AI?
Your code configuration and appsettings.json are correct; with your settings I can see these logs in AI on my side.
One thing you should know is that it may take a few minutes for the data to arrive at the AI server. Please wait a few minutes, say 5 minutes or more, then query the data again from the Azure portal -> Application Insights.
And here is a simple way to check whether the data is sent to AI: when running the project in Visual Studio, you can search for the logs in the Visual Studio output window. If you can find the logs there, they should have been sent to AI.
If you still cannot see these logs in AI, you should also check whether you have configured something like a filter or sampling in your code.
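If adaptive sampling turns out to be the culprit, one way to rule it out while troubleshooting (a sketch, assuming the standard ApplicationInsightsServiceOptions from the Microsoft.ApplicationInsights.AspNetCore package) is to disable it when registering telemetry:

using Microsoft.ApplicationInsights.AspNetCore.Extensions;

public void ConfigureServices(IServiceCollection services)
{
    // Disable adaptive sampling so no traces are dropped while investigating.
    services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
    {
        EnableAdaptiveSampling = false
    });
}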
While I was working on solving my problem, I found out that my company uses Serilog to handle logging, so I had to alter my project so that Serilog would also send logs to AI. I modified my code following this page: serilog-sinks-applicationinsights.
This led me to realize that even though I followed Microsoft's instructions on setting up AI, my ILogger class was not properly set up to send logs to AI. To fix that, I altered my Startup.cs constructor:
public Startup(IHostEnvironment environment, IConfiguration configuration)
{
    var env = new Environment(environment.EnvironmentName);
    _systemConfiguration = new SystemConfiguration(env, configuration);
    _systemConfiguration.Validate();

    Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .WriteTo.ApplicationInsights(
            _systemConfiguration.BaseConfiguration["APPINSIGHTS_INSTRUMENTATIONKEY"],
            TelemetryConverter.Traces)
        .CreateLogger();

    using var provider = new SerilogLoggerProvider(Log.Logger);
    _logger = provider.CreateLogger(nameof(Startup));
}
After adding AI to Log.Logger, my logs began showing up in my AI's page.
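An alternative wiring (a sketch only, assuming the Serilog.AspNetCore package is referenced) is to hook Serilog into the generic host in Program.cs, so every injected ILogger<T> flows through the Application Insights sink without building the logger manually in Startup:

using Microsoft.Extensions.Hosting;
using Serilog;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // Route all framework and application logging through Serilog.
        .UseSerilog((context, loggerConfiguration) => loggerConfiguration
            .Enrich.FromLogContext()
            .WriteTo.ApplicationInsights(
                context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"],
                TelemetryConverter.Traces))
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());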
I have a working Azure Function which puts a message on a Service Bus queue.
public static void Run(
    [TimerTrigger("0 * * * *")] TimerInfo myTimer,
    [ServiceBus("queueName", Connection = "ServiceBusConnection")] ICollector<Message> queue,
    TraceWriter log)
{
    // function logic here
}
The connection string is currently in plain text in the app settings. Is it possible to have it encrypted and still use the built-in integration between Azure Functions and Service Bus?
I have tried creating a ServiceBusAttribute at runtime but it doesn't look like you can pass it a connection string.
Any help is much appreciated
This is currently not possible. There is a feature request to retrieve secrets used in bindings from KeyVault: https://github.com/Azure/azure-webjobs-sdk/issues/746
The GitHub issue also describes a workaround to retrieve the secrets from KeyVault at build time within VSTS.
TL;DR: This example is not working for me in VS2017.
I have an Azure Cosmos DB and want to fire some logic when something adds or updates there. For that, CosmosDBTrigger should be great.
The tutorial demonstrates creating the trigger in the Azure Portal, and that works for me. However, doing exactly the same thing in Visual Studio (15.5.4, the latest as of writing) does not.
I use the default Azure Functions template, predefined Cosmos DB trigger and nearly default code:
[FunctionName("TestTrigger")]
public static void Run(
    [CosmosDBTrigger("Database", "Collection", ConnectionStringSetting = "myCosmosDB")]
    IReadOnlyList<Document> input,
    TraceWriter log)
{
    log.Info("Documents modified " + input.Count);
    log.Info("First document Id " + input[0].Id);
}
The app runs without errors, but nothing happens when I actually do things in the database, so I cannot debug and implement the required logic.
The connection string is specified in local.settings.json and is being picked up: if I deliberately break it, the trigger throws runtime errors.
It all looks as if the connection string points to the wrong database, but it is exactly the same string, copy-pasted, that I have in the trigger created via the Azure Portal.
Where could I go wrong? What else can I check?
Based on your comment, you were running both the portal and local Apps at the same time, for the same collection and the same lease collection.
That means both Apps were competing with each other for locks (leases) on collection processing. The portal App won in your case and took the lease, so the local App was sitting doing nothing.
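If you need to run both side by side, one option (a sketch, assuming your Cosmos DB extension version supports these attribute properties) is to give the local App its own lease identity so the two do not compete:

[FunctionName("TestTrigger")]
public static void Run(
    [CosmosDBTrigger("Database", "Collection",
        ConnectionStringSetting = "myCosmosDB",
        LeaseCollectionName = "leases",
        LeaseCollectionPrefix = "local",              // distinct prefix for the locally run App
        CreateLeaseCollectionIfNotExists = true)]
    IReadOnlyList<Document> input,
    TraceWriter log)
{
    log.Info("Documents modified " + input.Count);
}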
I have the following functions in the same WebJob console app, which uses the Azure WebJobs SDK and its extensions. The timer trigger queries an API endpoint for a file, does some additional work on it and then saves the file to the blob container named blahinput. The second method, ProcessBlobMessage, is supposed to identify the new blob file in blahinput and do something with it.
public static void ProcessBlobMessage(
    [BlobTrigger("blahinput/{name}")] TextReader input,
    string name,
    [Blob("foooutput/{name}")] out string output)
{
    // do something
}

public static void QueryAnAPIEndPointToGetFile([TimerTrigger("* */1 * * * *")] TimerInfo timerInfo)
{
    // download a file and save it to the blob container named blahinput
}
The problem here is:
When I deploy the above WebJob as continuous, only the timer-triggered events seem to get triggered, while the function that is supposed to identify the new file never fires. Is it not possible to have two such triggers in the same WebJob?
From this article: How to use Azure blob storage with the WebJobs SDK
The WebJobs SDK scans log files to watch for new or changed blobs. This process is not real-time; a function might not get triggered until several minutes or longer after the blob is created. In addition, storage logs are created on a "best efforts" basis; there is no guarantee that all events will be captured. Under some conditions, logs might be missed. If the speed and reliability limitations of blob triggers are not acceptable for your application, the recommended method is to create a queue message when you create the blob, and use the QueueTrigger attribute instead of the BlobTrigger attribute on the function that processes the blob.
Until the new blob trigger strategy is released, BlobTriggers are not reliable. The trigger is based on Azure Storage Analytics logs, which store logs on a best-effort basis.
There is an ongoing GitHub issue about this, and there is also a PR regarding a new blob scanning strategy.
That being said, check that you are using the latest WebJobs SDK version, 1.1.1, because there was an issue in prior versions that could lead to problems with BlobTriggers.
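As a sketch of the queue-based pattern recommended in the article (the queue name blahinput-queue and the function names are assumptions): the timer function enqueues the blob name after saving the blob, and a QueueTrigger function binds the blob through the queue message instead of relying on a BlobTrigger:

// Producer: after the timer function downloads the file and saves it to blahinput
// (e.g. via the storage SDK), it also writes the new blob's name to the queue.
public static void QueryAnAPIEndPointToGetFile(
    [TimerTrigger("* */1 * * * *")] TimerInfo timerInfo,
    [Queue("blahinput-queue")] out string queueMessage)
{
    // download the file, save it to the blahinput container, then enqueue the blob name
    queueMessage = "name-of-the-new-blob";
}

// Consumer: triggered by the queue message that carries the blob name.
// {queueTrigger} in the blob paths resolves to the text of the queue message.
public static void ProcessBlobFromQueue(
    [QueueTrigger("blahinput-queue")] string blobName,
    [Blob("blahinput/{queueTrigger}")] TextReader input,
    [Blob("foooutput/{queueTrigger}")] out string output)
{
    output = input.ReadToEnd();
}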