I am trying to log details of my function app into Application Insights.
My basic code:
public class AzureAppInsightsExplore
{
private readonly ILogger<AzureAppInsightsExplore> logger;
public AzureAppInsightsExplore(ILogger<AzureAppInsightsExplore> logger)
{
this.logger = logger;
}
[FunctionName("AzureAppInsightsExplore")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger log) // Can we use log directly?
{
// Unable to find this log in the traces table, but it is shown in Live Metrics.
logger.LogInformation("C# HTTP trigger function processed a request.");
int a = 0, b = 0;
//Unhandled exception
int c = a / b;
return new OkObjectResult(string.Empty);
}
}
Host.Json:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
}
}
Here are a few strange things I have noticed (not sure where I am going wrong).
The unhandled exception is logged twice in Application Insights.
Is there any specific reason to inject ILogger<ClassName> logger when the function's Run method already has an ILogger log parameter by default?
My main concern is that I am seeing a lot of unwanted logs in the traces table, where I was expecting only the information that I log in code with log.LogXXX(message).
Is there a way to stop loading this unwanted data into the traces table, since it would increase the cost?
I am not seeing the log messages that I write from code (I checked again after 10 minutes, as it might take some time for them to land in the traces table), but I can see them in Live Metrics.
Can someone advise on the above? It would be really helpful.
Kind regards.
Thanks @Hooman Bahreini. According to the SO thread, it says:
ILogger: is responsible to write a log message of a given Log Level.
ILoggerProvider: is responsible to create an instance of ILogger (you are not supposed to use ILoggerProvider directly to create a logger).
ILoggerFactory: you can register one or more ILoggerProviders with the factory, which in turn uses all of them to create an instance of ILogger. ILoggerFactory holds a collection of ILoggerProviders.
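As a small illustration of that relationship (a minimal sketch; it assumes the Microsoft.Extensions.Logging.Console package for AddConsole, which is just one example of a provider):
using Microsoft.Extensions.Logging;
// The factory holds a collection of providers...
using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddConsole(); // registers the ConsoleLoggerProvider with the factory
});
// ...and every ILogger it creates writes through all registered providers.
ILogger demoLogger = loggerFactory.CreateLogger("Demo");
demoLogger.LogInformation("Hello from every registered provider");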
To be able to see the logging messages, you need to give the correct format in quotations and include the column name too.
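For example (a minimal sketch reusing the injected logger from the question; the {Name} placeholder is purely illustrative):
// The text in quotations is the message template; the {Name} placeholder
// becomes a named column (custom dimension) you can query in the traces table.
string name = req.Query["name"];
logger.LogInformation("C# HTTP trigger processed a request for {Name}", name);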
Related
In the last few weeks I have started working on (or trying to build) a simple MVC app for notifications.
I can log information to the console, but how can I easily add a file logger for the notification function that handles HTTP requests?
I could implement it from scratch with a lot of work, but I found no supporting functions for file logging. I found ILogger for logging to the console. But is there an easy way to switch ILogger from console logging to file logging?
public class NotificationsController : ControllerBase
{
private readonly MyConfig config;
private readonly ILogger<NotificationsController> _logger;
public NotificationsController(MyConfig config, ILogger<NotificationsController> logger)
{
this.config = config;
_logger = logger;
}
[HttpGet]
public ActionResult<string> Get()
{
...
string Message = $"About page visited at {DateTime.UtcNow.ToLongTimeString()}";
_logger.LogInformation(Message);
...
The example I modified is from here:
https://learn.microsoft.com/de-de/learn/modules/msgraph-changenotifications-trackchanges/5-exercise-change-notification
https://github.com/microsoftgraph/msgraph-training-changenotifications/tree/live
Thank you for reaching out. ASP.NET Core doesn't include a logging provider for writing logs to files; see the documentation on logging in .NET Core and ASP.NET Core. To write logs to files, consider using a third-party logging provider.
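For example, one common third-party option is Serilog with its file sink. A minimal sketch, assuming the Serilog.AspNetCore, Serilog.Sinks.Console and Serilog.Sinks.File packages:
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        // Log to the console and to a daily rolling file.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .WriteTo.File("logs/notifications-.log", rollingInterval: RollingInterval.Day)
            .CreateLogger();

        Host.CreateDefaultBuilder(args)
            .UseSerilog() // routes all ILogger<T> calls to Serilog
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            .Build()
            .Run();
    }
}
The ILogger<NotificationsController> injected into your controller keeps working unchanged; its messages then also land in the file.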
Let me know whether this helps and if you have further questions.
I have a .NET Worker program that uses multiple NuGet packages that log information to ILogger. I am passing the same "master" logger object to each of the NuGet packages. I would now like to fetch each logged message and send it to our internal chat. I am trying to create an event that will execute a "CallMeWhenLog" method every time a new string is written to ILogger.
public class TestClass
{
private readonly ILogger<TestClass> _logger;
public TestClass(ILogger<TestClass> logger)
{
_logger = logger;
}
public async Task Process(Message message,
CancellationToken cancellationToken = default)
{
_logger.LogInformation("New message!");
// Exception-first overload logs the exception details with the message;
// ex would come from an enclosing catch block.
_logger.LogError(ex, "New message!");
}
public void CallMeWhenLog(string loggedMessage)
{
var chatHandler = new ChatHandler();
chatHandler.SendMessage(loggedMessage);
}
}
I think it is possible by creating a subscription to ILogger, but I have never used events before. It sounds straightforward, but I am a bit lost. For example, I would like to call the "CallMeWhenLog" method just after _logger.LogInformation or _logger.LogError is executed.
From your comment it seems like you're using (ASP).NET Core.
You may want to look into creating your own logger (see the docs) and registering it just like the other three built-in logging providers (e.g. logging.AddEventLog()). You can implement posting to the chat in that custom logger.
Or take a look at the 3rd party providers: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#third-party-logging-providers
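As a rough sketch of that approach (ChatHandler is taken from your snippet; everything else here is illustrative, not a specific library API):
using System;
using Microsoft.Extensions.Logging;

public class ChatLoggerProvider : ILoggerProvider
{
    public ILogger CreateLogger(string categoryName) => new ChatLogger();

    public void Dispose() { }

    private class ChatLogger : ILogger
    {
        public IDisposable BeginScope<TState>(TState state) => null;

        public bool IsEnabled(LogLevel logLevel) => logLevel >= LogLevel.Information;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            if (!IsEnabled(logLevel)) return;

            // Every message written through any ILogger<T> ends up here.
            new ChatHandler().SendMessage(formatter(state, exception));
        }
    }
}

// In the worker's host setup:
// Host.CreateDefaultBuilder(args)
//     .ConfigureLogging(logging => logging.AddProvider(new ChatLoggerProvider()))
Your CallMeWhenLog body would move into (or be called from) the Log method, so it fires right after every LogInformation and LogError call.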
About a month ago, I noticed that some of the monitoring functionality in the old Azure Functions portal interface stopped working. I wrote more details about the issues on the Azure Functions Host GitHub but my particular questions are as of yet unanswered.
Now it seems the Azure Functions portal interface defaults to the new "management experience" that looks more similar to the rest of Azure, and with that, it's even more apparent that something is wrong in the way we use logging and tracing.
My question is: Does anybody have any code samples as to how to set up Azure Function logging, live metrics, and app insights tracing so that it:
Works with dependency injection
Works with the new "management experience" interface
Currently, in order to see what a particular Azure Function is doing, I have to go to the old Azure interface and study the log stream. The Functions do work, and they spit out information in the log stream, but only in the old interface, and not much else in terms of monitoring seems to work. Using the old interface:
The invocation logs, the ones you get when you press the "Monitor" link under Functions (Read Only) > [function] > Monitor, show no invocations at all even though the functions are definitely being called according to the logs.
The Live app metrics link results in the default "Not available: your app is offline or using an older SDK" with some animated demo charts.
These worked fine a month ago. Now, not so much.
Using the new interface:
Monitoring > Log stream shows nothing except the word "Connected!", regardless of verbosity.
Monitoring > Log Stream > Open in Live Metrics again just yields the default "Not available: your app is offline or using an older SDK".
Going to a specific function in the new interface by using Functions > Functions > [click on a function]:
Developer > Code + Test > Test-button > Run, the Logs window pops up, just says "Connected!" and nothing else, again regardless of verbosity.
Monitor > Invocations, there are no invocation traces registered here, even though the function is obviously being called according to the old interface log stream.
Monitor > Logs, again, just says "Connected!", regardless of verbosity.
I don't understand why it suddenly stopped working a month back, and why so many things don't seem to work with the new interface. Our Functions' NuGet packages are all up to date.
In terms of logging, the logger is dependency injected so that we can use it in multiple classes and not just in the default Functions.cs class:
using Microsoft.Extensions.Logging;
public class EventForwarder
{
private readonly ILogger<EventForwarder> log;
And we log through the use of extension methods, nothing fancy really:
using Microsoft.Extensions.Logging;
public static class LoggerExtensions
{
public static void Info(this ILogger log, string msg) => log.LogInformation(msg);
The Application Insights tracer is also dependency injected using a workaround suggested here, i.e. our Startup.cs looks like this:
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
[assembly: FunctionsStartup(typeof(EventForwarder.Startup))]
namespace EventForwarder
{
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
// https://github.com/Azure/azure-functions-host/issues/5353
builder.Services.AddSingleton(sp =>
{
var key = Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY");
return string.IsNullOrWhiteSpace(key) ? new TelemetryConfiguration() : new TelemetryConfiguration(key);
});
We're performing traces of Http retries, among other things, like so:
public class HttpRetryPolicyService
{
private readonly ILogger<HttpRetryPolicyService> log;
private readonly TelemetryClient insights;
public HttpRetryPolicyService(ILogger<HttpRetryPolicyService> log,
TelemetryConfiguration insightsConfig)
{
this.log = log;
insights = new TelemetryClient(insightsConfig);
}
...
private void LogRetry(DelegateResult<HttpResponseMessage> message, TimeSpan delay, int attempt, Context context)
{
if (message.Exception != null)
{
log.Warn($"Exception details: {message.Exception}");
insights.Track(message.Exception);
And we're using extension methods to trace, like so:
using Microsoft.ApplicationInsights;
namespace EventForwarder.Static
{
public static class TelemetryExtensions
{
public static void Track(this TelemetryClient insights, string eventName)
{
insights.TrackEvent(eventName);
insights.Flush();
}
What am I missing?
Edit #1: Btw, adding Application Insights as a Service Dependency in the Publish dialog unfortunately does not solve these issues.
Edit #2: Also, preemptively, our Functions host.json files all look like this:
{
"version": "2.0",
"healthMonitor": {
"enabled": true,
"healthCheckInterval": "00:00:10",
"healthCheckWindow": "00:02:00",
"healthCheckThreshold": 6,
"counterThreshold": 0.80
},
"logging": {
"fileLoggingMode": "always",
"applicationInsights": {
"enableLiveMetrics": true,
"samplingSettings": {
"isEnabled": true
}
},
"logLevel": {
"EventForwarder": "Information"
}
}
}
This is what breaks your app; remove it and everything should work:
// https://github.com/Azure/azure-functions-host/issues/5353
builder.Services.AddSingleton(sp =>
{
var key = Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY");
return string.IsNullOrWhiteSpace(key) ? new TelemetryConfiguration() : new TelemetryConfiguration(key);
});
My guess would be that the workaround actually breaks the logging now that the bugfix has been rolled out.
I created a sample app where logging and the log stream work quite nicely, also with dependency injection. I tested it with both Windows and Linux consumption plans. The function app was created using the wizard in the Azure Portal, selecting .NET Core 3.1. Please be aware that TrackEvent does not show up in the function's log stream; it shows up in Application Insights Live Metrics. It can also take up to 30 seconds after "Connected!" shows up until actual logs are shown. The Live Metrics view works better, especially if you open it directly from Application Insights.
I was able to reproduce your issues by applying the "workaround" mentioned above. Without it, everything works fine.
Full sample: https://github.com/LXBdev/Functions-V3-sample
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
builder.Services.AddScoped<MyService>();
}
"logging": {
"applicationInsights": {
"samplingExcludedTypes": "Request",
"samplingSettings": {
"isEnabled": true
}
},
"logLevel": {
"Functions_V3_sample": "Information"
}
}
public class MyService
{
    private ILogger<MyService> Logger { get; }
    private TelemetryClient Telemetry { get; }

    public MyService(ILogger<MyService> logger, TelemetryClient telemetry)
    {
        Logger = logger ?? throw new ArgumentNullException(nameof(logger));
        Telemetry = telemetry ?? throw new ArgumentNullException(nameof(telemetry));
    }

    public void Foo()
    {
        Logger.LogInformation("Foo");
        Telemetry.TrackTrace("BarLog", Microsoft.ApplicationInsights.DataContracts.SeverityLevel.Information);
        Telemetry.TrackEvent("BarEvent");
    }
}
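For completeness, a sketch of how the registered service could be consumed from a function via constructor injection (the function name and trigger here are illustrative, following the linked sample's pattern):
public class MyFunction
{
    private readonly MyService _service;

    public MyFunction(MyService service)
    {
        _service = service;
    }

    [FunctionName("MyFunction")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req)
    {
        _service.Foo(); // logs through ILogger<MyService> and TelemetryClient
        return new OkResult();
    }
}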
Update: There was an issue with the host.json in my original answer and sample - logs weren't really persisted to AppInsights because of https://github.com/Azure/azure-functions-host/issues/4345. I updated the code accordingly.
Is there any way to track whether the endpoint is available for TCP sink logging?
For example, locally on my machine I do not have FileBeat set up, while it is working on the staging machine.
This is how I initialize the logger:
private readonly ILogger _tcpLogger;
public TcpClient(IOptions<ElasticSearchConfig> tcpClientConfig)
{
var ip = IPAddress.Parse(tcpClientConfig.Value.TcpClientConfig.IpAddress);
_tcpLogger = new LoggerConfiguration()
.WriteTo.TCPSink(ip, tcpClientConfig.Value.TcpClientConfig.Port, new TcpOutputFormatter())
.CreateLogger();
}
and a simple method just to submit a log:
public void SubmitLog(string json)
{
_tcpLogger.Information(json);
}
And in my case, when it submits the JSON string locally, it just goes nowhere, and I would like to get an exception/message back.
Ideally on JSON submit, but during initialization is OK too.
Writing to a Serilog logger is meant to be a safe operation and never throw exceptions, and that's by design. Thus any exceptions that happen when sending those messages would only appear in the SelfLog - if you enable it.
e.g.
// Write Serilog errors to the Console
Serilog.Debugging.SelfLog.Enable(msg => Console.WriteLine(msg));
The example above is, of course, just to illustrate the SelfLog feature... You'll choose where/how to display or store these error messages.
Now, if the operation you're logging is important enough that you want to guarantee it succeeds (or throws an exception if it doesn't), then you should use audit logging, i.e. use .AuditTo.TCPSink(...) instead of .WriteTo.TCPSink(...).
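A minimal sketch of that audit variant, reusing the setup from the question (TCPSink and TcpOutputFormatter come from the question's sink package, so treat the exact signature as an assumption):
public class TcpClient
{
    private readonly ILogger _tcpLogger;

    public TcpClient(IOptions<ElasticSearchConfig> tcpClientConfig)
    {
        var ip = IPAddress.Parse(tcpClientConfig.Value.TcpClientConfig.IpAddress);

        // AuditTo propagates sink failures as exceptions, unlike WriteTo,
        // which swallows them by design.
        _tcpLogger = new LoggerConfiguration()
            .AuditTo.TCPSink(ip, tcpClientConfig.Value.TcpClientConfig.Port, new TcpOutputFormatter())
            .CreateLogger();
    }

    public void SubmitLog(string json)
    {
        try
        {
            _tcpLogger.Information(json);
        }
        catch (Exception ex)
        {
            // With audit logging, an unreachable endpoint surfaces here.
            Console.Error.WriteLine($"TCP sink unavailable: {ex.Message}");
        }
    }
}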
Given the following middleware:
public class RequestDurationMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<RequestDurationMiddleware> _logger;
public RequestDurationMiddleware(RequestDelegate next, ILogger<RequestDurationMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task Invoke(HttpContext context)
{
var watch = Stopwatch.StartNew();
await _next.Invoke(context);
watch.Stop();
_logger.LogTrace("{duration}ms", watch.ElapsedMilliseconds);
}
}
Because of the pipeline order, the middleware finishes before the end of the pipeline and logs a different time:
WebApi.Middlewares.RequestDurationMiddleware 2018-01-10 15:00:16.372 -02:00 [Verbose] 382ms
Microsoft.AspNetCore.Server.Kestrel 2018-01-10 15:00:16.374 -02:00 [Debug] Connection id ""0HLAO9CRJUV0C"" completed keep alive response.
Microsoft.AspNetCore.Hosting.Internal.WebHost 2018-01-10 15:00:16.391 -02:00 [Information] "Request finished in 405.1196ms 400 application/json; charset=utf-8"
How can I capture the actual request execution time from WebHost (405.1196ms in the example) value in this case? I want to store this value in database or use it elsewhere.
I thought this question was really interesting, so I looked into this for a bit to figure out how the WebHost is actually measuring and displaying that request time. Bottom line is: There is neither a good nor an easy nor a pretty way to get this information, and everything feels like a hack. But follow along if you’re still interested.
When the application is started, the WebHostBuilder constructs the WebHost which in turn creates the HostingApplication. That’s basically the root component that is responsible to respond to incoming requests. It is the component that will invoke the middleware pipeline when a request comes in.
It is also the component that will create HostingApplicationDiagnostics which allows to collect diagnostics about the request handling. At the beginning of the request, the HostingApplication will call HostingApplicationDiagnostics.BeginRequest, and at the end of the request, it will call HostingApplicationDiagnostics.RequestEnd.
Not that surprisingly, HostingApplicationDiagnostics is the thing that will measure the request duration and also log that message for the WebHost that you have been seeing. So this is the class that we have to inspect more closely to figure out how to get the information.
There are two things the diagnostics object uses to report diagnostics information: A logger, and a DiagnosticListener.
Diagnostic listener
The DiagnosticListener is an interesting thing: It is basically a general event sink that you can just raise events on. And other objects can then subscribe to it to listen to these events. So this almost sounds perfect for our purpose!
The DiagnosticListener object that the HostingApplicationDiagnostics uses is passed on by the WebHost and it actually gets resolved from dependency injection. Since it is registered by the WebHostBuilder as a singleton, we can actually just resolve the listener from dependency injection and subscribe to its events. So let’s just do that in our Startup:
public void ConfigureServices(IServiceCollection services)
{
// …
// register our observer
services.AddSingleton<DiagnosticObserver>();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
// we inject both the DiagnosticListener and our DiagnosticObserver here
DiagnosticListener diagnosticListenerSource, DiagnosticObserver diagnosticObserver)
{
// subscribe to the listener
diagnosticListenerSource.Subscribe(diagnosticObserver);
// …
}
That’s already enough to get our DiagnosticObserver running. Our observer needs to implement IObserver<KeyValuePair<string, object>>. When an event occurs, we will get a key-value-pair where the key is an identifier for the event, and the value is a custom object that is passed by the HostingApplicationDiagnostics.
But before we implement our observer, we should actually look at what kind of events HostingApplicationDiagnostics actually raises.
Unfortunately, when the request ends, the event that is raised on the diagnostic listener just gets passed the end timestamp, so we would also need to listen to the event that is raised at the beginning of the request to read the start timestamp. But that would introduce state into our observer, which is something we want to avoid here. In addition, the actual event name constants are prefixed with Deprecated, which might be an indicator that we should avoid using them.
The preferred way is to use activities, which are also closely related to the diagnostic observer. Activities are apparently states that track, well, activities as they appear in the application. They are started and stopped at some point, and also record on their own how long they run. So we can just make our observer listen to the stop event of the activity to get notified when it's done:
public class DiagnosticObserver : IObserver<KeyValuePair<string, object>>
{
private readonly ILogger<DiagnosticObserver> _logger;
public DiagnosticObserver(ILogger<DiagnosticObserver> logger)
{
_logger = logger;
}
public void OnCompleted() { }
public void OnError(Exception error) { }
public void OnNext(KeyValuePair<string, object> value)
{
if (value.Key == "Microsoft.AspNetCore.Hosting.HttpRequestIn.Stop")
{
var httpContext = value.Value.GetType().GetProperty("HttpContext")?.GetValue(value.Value) as HttpContext;
var activity = Activity.Current;
_logger.LogWarning("Request ended for {RequestPath} in {Duration} ms",
httpContext.Request.Path, activity.Duration.TotalMilliseconds);
}
}
}
Unfortunately there is just no solution without downsides… I found this solution to be very inaccurate for parallel requests (e.g. when opening a page that has also images or scripts which are requested in parallel). This is likely due to the fact that we are using a static Activity.Current to get the activity. However there does not really seem to be a way to get just the activity for a single request, e.g. from the key value pair that was passed.
So I went back and tried my original idea again, using those deprecated events. The way I understood it is btw. that they are just deprecated because using activities is recommended, not because they will be removed soon (of course we are working with implementation details and an internal class here, so these things could change at any time). To avoid problems with concurrency, we need to make sure we store the state inside of the HTTP context (instead of a class field):
private const string StartTimestampKey = "DiagnosticObserver_StartTimestamp";
public void OnNext(KeyValuePair<string, object> value)
{
if (value.Key == "Microsoft.AspNetCore.Hosting.BeginRequest")
{
var httpContext = (HttpContext)value.Value.GetType().GetProperty("httpContext").GetValue(value.Value);
httpContext.Items[StartTimestampKey] = (long)value.Value.GetType().GetProperty("timestamp").GetValue(value.Value);
}
else if (value.Key == "Microsoft.AspNetCore.Hosting.EndRequest")
{
var httpContext = (HttpContext)value.Value.GetType().GetProperty("httpContext").GetValue(value.Value);
var endTimestamp = (long)value.Value.GetType().GetProperty("timestamp").GetValue(value.Value);
var startTimestamp = (long)httpContext.Items[StartTimestampKey];
var duration = new TimeSpan((long)((endTimestamp - startTimestamp) * TimeSpan.TicksPerSecond / (double)Stopwatch.Frequency));
_logger.LogWarning("Request ended for {RequestPath} in {Duration} ms",
httpContext.Request.Path, duration.TotalMilliseconds);
}
}
When running this, we do actually get accurate results and we also have access to the HttpContext which we can use to identify the request. Of course, the overhead that’s involved here is very apparent: Reflection to access property values, having to store information in HttpContext.Items, the whole observer thing in general… that’s probably not a very performant way to do this.
Further reading on diagnostic sources and activities: the DiagnosticSource User's Guide and the Activity User's Guide.
Logging
Somewhere above I mentioned that the HostingApplicationDiagnostics also reports the information to the logging facilities. Of course: This is what we are seeing in the console after all. And if we look at the implementation, we can see that this already calculates the proper duration here. And since this is structured logging, we could use this to grab that information.
So let’s attempt to write a custom logger that checks for that exact state object and see what we can do:
public class RequestDurationLogger : ILogger, ILoggerProvider
{
public ILogger CreateLogger(string categoryName) => this;
public void Dispose() { }
public IDisposable BeginScope<TState>(TState state) => NullDisposable.Instance;
public bool IsEnabled(LogLevel logLevel) => true;
public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
if (state.GetType().FullName == "Microsoft.AspNetCore.Hosting.Internal.HostingRequestFinishedLog" &&
state is IReadOnlyList<KeyValuePair<string, object>> values &&
values.FirstOrDefault(kv => kv.Key == "ElapsedMilliseconds").Value is double milliseconds)
{
Console.WriteLine($"Request took {milliseconds} ms");
}
}
private class NullDisposable : IDisposable
{
public static readonly NullDisposable Instance = new NullDisposable();
public void Dispose() { }
}
}
Unfortunately (you probably love this word by now, right?), the state class HostingRequestFinishedLog is internal, so we cannot use it directly. So we have to use reflection to identify it. But we just need its name, then we can extract the value from the read-only list.
Now all we need to do is register that logger (provider) with the web host:
WebHost.CreateDefaultBuilder(args)
.ConfigureLogging(logging =>
{
logging.AddProvider(new RequestDurationLogger());
})
.UseStartup<Startup>()
.Build();
And that’s actually all we need to be able to access the exact same information that the standard logging also has.
However, there are two problems: We don’t have a HttpContext here, so we cannot get information about which request this duration actually belongs to. And as you can see in the HostingApplicationDiagnostics, this logging call is actually only made when the log level is at least Information.
We could get the HttpContext by reading the private field _httpContext using reflection but there is just nothing we can do about the log level. And of course, the fact that we are creating a logger to grab information from one specific logging call is a super hack and probably not a good idea anyway.
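Purely as an illustration, that reflection step could look like the sketch below (it assumes the internal state object really keeps the context in a private field named _httpContext, an implementation detail that may change at any time):
private static HttpContext TryGetHttpContext(object state)
{
    // Requires System.Reflection and Microsoft.AspNetCore.Http.
    // _httpContext is a private field of the internal HostingRequestFinishedLog type.
    var field = state.GetType().GetField("_httpContext",
        BindingFlags.Instance | BindingFlags.NonPublic);
    return field?.GetValue(state) as HttpContext;
}
Inside Log<TState> you could then call TryGetHttpContext(state) next to the ElapsedMilliseconds lookup to attach the request path to the measured duration.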
Conclusion
So, this is all terrible. There simply is no clean way to retrieve this information from the HostingApplicationDiagnostics. And we also have to keep in mind that the diagnostics stuff actually only runs when it’s enabled. And performance critical applications will likely disable it at one point or another. In any way, using this information for anything outside of diagnostics would be a bad idea since it’s just too fragile in general.
So what is the better solution? A solution that works outside of a diagnostics context? A simple middleware that runs early, just like the one you have already used. Yes, this is likely not as accurate, as it will leave out a few paths from the outer request handling pipeline, but it will still be an accurate measurement for the actual application code. After all, if we wanted to measure framework performance, we would have to measure it from the outside anyway: as a client, making requests (just like benchmarks work).
And btw. this is also how Stack Overflow’s own MiniProfiler works. You just register the middleware early and that’s it.
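To make that concrete, a minimal sketch of registering the RequestDurationMiddleware from the question as the very first middleware, so it wraps as much of the pipeline as a middleware can:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Registered first, so the stopwatch covers everything that runs after it.
    app.UseMiddleware<RequestDurationMiddleware>();

    // ... the rest of the pipeline (static files, routing, MVC, ...)
    app.UseMvc();
}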