I am adding Application Insights (AI) to my web API service by following this page: Application Insights Instructions. I managed to get my service to connect to AI, and I am able to see when my service performs a POST, GET, etc. I also placed log calls throughout my service, but none of them are being written to AI's Traces log.
I made sure to set up my Startup.cs and appsettings.json files to contain the new code needed to run AI throughout my service, and to configure the logging so that AI captures logs at Debug level and above.
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry();
}
appsettings.json
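The relevant logging section looks something like this (a representative sketch; the actual key is elided):
{
  "ApplicationInsights": {
    "InstrumentationKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Debug"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Debug"
      }
    }
  }
}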
Logging Example
public async Task ProcessQueueAsync(dBData dbContext)
{
// _logger is of type ILogger<[INSERT CLASS NAME]>
_logger.LogDebug("This is a test log by Lotzi11.");
await ProcessQueueAsyncSingle(dbContext, CancellationToken.None);
}
Can someone help me figure out why my logs are not being sent to AI?
Your code configuration and appsettings.json are correct. I can see these logs in AI with your settings on my side.
One thing you should know is that it may take a few minutes for this data to arrive at the AI server. Please wait a few minutes, like 5 minutes or more, then query the data again from the Azure portal -> Application Insights.
And here is a simple way to check whether the data is being sent to AI: in Visual Studio, while running the project, search for the logs in the Visual Studio output window. If you can find the logs there, they should have been sent to AI.
If you still cannot see these logs in AI, you should also check whether you have set up something like a filter or sampling in your code.
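For example (a sketch, not something from your posted code), a provider-specific filter rule like the one below keeps anything under Warning out of AI; this is also the SDK's default for the ILogger provider, and would explain missing Debug traces:
// Assumes Microsoft.Extensions.Logging.ApplicationInsights.
// An empty category prefix applies the rule to all categories.
services.AddLogging(builder =>
    builder.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Warning));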
While I was working on solving my problem, I found out that my company uses Serilog to handle logging, so I had to alter my project so Serilog would also send logs to AI. I modified my code using the following page: serilog-sinks-applicationinsights.
This led me to realize that even though I followed Microsoft's instructions on setting up AI, my ILogger instance was not properly set up to send logs to AI. To fix that, I altered my Startup.cs constructor:
public Startup(IHostEnvironment environment, IConfiguration configuration)
{
    var env = new Environment(environment.EnvironmentName);
    _systemConfiguration = new SystemConfiguration(env, configuration);
    _systemConfiguration.Validate();

    // Route Serilog output to Application Insights using the traces converter.
    Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .WriteTo.ApplicationInsights(
            _systemConfiguration.BaseConfiguration["APPINSIGHTS_INSTRUMENTATIONKEY"],
            TelemetryConverter.Traces)
        .CreateLogger();

    using var provider = new SerilogLoggerProvider(Log.Logger);
    _logger = provider.CreateLogger(nameof(Startup));
}
After adding the AI sink to Log.Logger, my logs began showing up in the Application Insights portal.
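For reference, an alternative wiring (a sketch, assuming the Serilog.AspNetCore package; my SystemConfiguration plumbing is omitted) is to let UseSerilog own the ILogger registration from Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog((context, loggerConfiguration) => loggerConfiguration
            .Enrich.FromLogContext()
            .WriteTo.ApplicationInsights(
                context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"],
                TelemetryConverter.Traces))
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());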
Related
I currently have several .NET Windows services running on my server. Is there a way to attach a console app to a service to get all of the ILogger data? I have had issues where a service runs perfectly as a console app/worker service, but as soon as I run it as a Windows service, it just sits there and does nothing.
I did find an article about attaching the VS debugger to the process, but this will not work with our network security.
I am open to any other suggestions as well.
The technical answer is no, but as #Fildor mentioned, you would set up a log sink of some sort. A file logger is just one example; you can also have the logs send emails, post to a cloud logging service such as Splunk or CloudWatch, etc.
One issue you may run into is that you need to capture an error prior to ILogger being available and properly configured for you. Here is a guide I followed for capturing startup errors using NLog: https://alistairevans.co.uk/2019/10/04/asp-net-core-3-0-logging-in-the-startup-class-with-nlog/
Startup classes are no longer necessary in the latest .NET version, so I modified their example to be code you would have in Program.cs:
// NLog: set up the NLog config first, and grab a logger that can
// record exceptions thrown before the host's ILogger is available.
var logger = NLogBuilder.ConfigureNLog("nlog.config").GetCurrentClassLogger();
try
{
    var host = Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        // Use NLog to provide ILogger instances.
        .UseNLog()
        .Build();

    host.Run();
}
catch (Exception ex)
{
    // NLog is already configured, so startup failures are captured.
    logger.Error(ex, "Host terminated unexpectedly");
    throw;
}
finally
{
    // Flush pending entries and stop NLog's internal timers before exit.
    LogManager.Shutdown();
}
Here's the list of available log targets you can configure in that NLog configuration file: https://nlog-project.org/config/
The same thing can be accomplished with other log providers you may already be using, such as Serilog, log4net, etc.
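For instance, a rough Serilog version of the same startup-error pattern (a sketch, assuming the Serilog.AspNetCore package and a simple file sink):
// Serilog: configure a logger before the host is built so that
// exceptions thrown during startup still get recorded somewhere.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("startup.log")
    .CreateLogger();

try
{
    Host.CreateDefaultBuilder(args)
        .UseSerilog() // Provide ILogger instances via Serilog.
        .Build()
        .Run();
}
catch (Exception ex)
{
    Log.Fatal(ex, "Host terminated unexpectedly");
    throw;
}
finally
{
    Log.CloseAndFlush();
}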
I'm using Application Insights in an ASP.NET Core 3.1 application with the code below.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
ApplicationInsightsServiceOptions aiOptions = new ApplicationInsightsServiceOptions();
aiOptions.DeveloperMode = true;
services.AddApplicationInsightsTelemetry(aiOptions);
}
As you can see, I have enabled developer mode to ensure that the telemetry data is pushed immediately (instead of waiting 2-5 minutes). However, it doesn't seem to be working.
Any ideas on how to make it work?
DeveloperMode simply means the SDK channel will not buffer telemetry items in memory. The regular behavior is that telemetry is buffered in memory, and once every 30 seconds, or whenever the buffer holds 500 items, it gets pushed to the backend. Developer mode causes every item to be sent immediately, without buffering.
The telemetry will be visible in the Azure portal typically in 3-10 minutes (depending on backend/indexing/etc. delays, which are not controlled by the SDK). Enabling developer mode only removes the SDK-level buffering, for a maximum "gain" of 30 seconds; telemetry can still take several minutes to show up in the portal.
(The intention behind developer mode is to show data instantly locally, i.e. Visual Studio itself shows telemetry while debugging. For that, developer mode does not need to be explicitly enabled: attaching a debugger enables it automatically.)
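If you need buffered telemetry pushed out right away regardless of developer mode, for example before a short-lived process exits, one option (a sketch; how you obtain the TelemetryClient depends on your setup) is to flush the channel manually:
// Assumes a TelemetryClient injected via the constructor; the SDK
// registers one when AddApplicationInsightsTelemetry is called.
_telemetryClient.Flush();
// The in-memory channel flushes synchronously, but other channels
// flush asynchronously, so a short delay is commonly added.
Thread.Sleep(TimeSpan.FromSeconds(5));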
Did it work before you enabled developer mode?
When you register Application Insights into the DI container like this
services.AddApplicationInsightsTelemetry()
it assumes that your appsettings.json file contains a JSON object with the instrumentation key:
"ApplicationInsights": {
"InstrumentationKey": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
},
Likewise, when you deploy it as an Azure web app, Azure automatically creates a configuration variable (APPINSIGHTS_INSTRUMENTATIONKEY) for you.
I would suggest that you pass your instrumentation key into your ApplicationInsightsServiceOptions explicitly to make sure it is loaded properly.
ApplicationInsightsServiceOptions aiOptions = new ApplicationInsightsServiceOptions();
aiOptions.InstrumentationKey = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
aiOptions.DeveloperMode = true;
services.AddApplicationInsightsTelemetry(aiOptions);
I want to be able to send application logs to CloudWatch Logs, and I learned that there is a CloudWatch Agent service that runs in the background, reads logs from a log file, and sends only the delta (the new log lines) to CloudWatch Logs. All this makes sense to me. Then I learned about NLog, a C# logging framework, and wrote the POC below to send logs.
using System.Configuration;
using Amazon.Runtime;
using NLog;
using NLog.AWS.Logger;
using NLog.Config;

static void Main(string[] args)
{
    ConfigureNLog();
    var logger = NLog.LogManager.GetCurrentClassLogger();
    logger.Info("Hello World");
    logger.Log(LogLevel.Info, "Sample informational message");
}

static void ConfigureNLog()
{
    // Read the AWS credentials from App.config.
    var accessKey = ConfigurationManager.AppSettings.Get("AWSAccessKey");
    var secretKey = ConfigurationManager.AppSettings.Get("AWSSecretKey");

    var config = new LoggingConfiguration();
    var awsTarget = new AWSTarget()
    {
        LogGroup = "NLog.ProgrammaticConfigurationExample",
        Region = "us-east-1",
        Credentials = new BasicAWSCredentials(accessKey, secretKey)
    };
    config.AddTarget("aws", awsTarget);
    // Route everything at Debug level and above to the AWS target.
    config.LoggingRules.Add(new LoggingRule("*", LogLevel.Debug, awsTarget));
    LogManager.Configuration = config;
}
Now when I run the above code, I am able to send logs to CloudWatch. But I am confused: what is the significance of the CloudWatch Agent?
Since I am directly sending the log data, does that mean I don't need the CloudWatch Agent in my scenario?
If I want to use the CloudWatch Agent, do I need to use a FILE target for NLog and then tell the CloudWatch Agent to send that log file to CloudWatch Logs?
Is my understanding correct? Please help me understand the flow. Is the below flow correct?
NLog writes the log to a file -> CloudWatch Agent reads the log from there -> sends the log to CloudWatch
Question: How do I use the CloudWatch Agent in the above POC to send data via NLog?
The CloudWatch Agent runs on your server and can watch log files that are produced. These log files can be anything: IIS logs, time logs, the Event Log, etc. When a log file is updated, the CWA will grab the updates and send them to CloudWatch. This is the generic behavior of the CWA, and it is great for Event Logs and OS logging.
By modifying the AWS.EC2.Windows.CloudWatch.json CWA configuration file, you can configure it to watch log files in particular formats and send changes to CloudWatch beyond the standard/example ones it handles by default. You can update the JSON to match your NLog entry layout and have it watch for that specific format in the file. CloudWatch does have a delay when sending.
Now you have NLog, which writes log files. You can have NLog send the log entries to a file (see the sketch below) and have the CloudWatch Agent watch that file, pick up the changes, and send them; or you can have NLog send the entries directly to CloudWatch. Since you are writing directly to CloudWatch through an NLog target, you don't need the CloudWatch Agent for your NLog files. I suggest keeping the CWA for other log files like IIS or Event Logs.
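The file-based half of that flow would look something like this (a sketch reusing the LoggingConfiguration from the POC above; the path and layout are assumptions):
// Write NLog entries to a file for the CloudWatch Agent to watch.
var fileTarget = new FileTarget("file")
{
    FileName = "C:/logs/app.log",
    // The CWA json must be configured to match this layout.
    Layout = "${longdate}|${level:uppercase=true}|${logger}|${message}"
};
config.AddTarget(fileTarget);
config.LoggingRules.Add(new LoggingRule("*", LogLevel.Debug, fileTarget));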
I guess it is a matter of preference how you do it. I think NLog targets with layouts are easier than wrestling the CloudWatch JSON file into matching the log format. I only use the CWA to send log files I have no control over, and I use an NLog target to send my NLog entries.
I can post an example CWA json snippet for a 3rd party log file I monitor with CWA if you need an example.
When an application just has to write to a file, it lives a very simple life with few problems.
When an application suddenly has to handle network traffic (with timeouts, disconnects, retries, connectivity, latency, etc.), it will suddenly have issues with things queuing up, taking memory, using sockets, causing garbage collections, stalls, etc. (and losing all pending log events on a crash).
Depending on the lifetime and criticality of your application, it can be useful to give it a simple life, and let a friend like the CloudWatch Agent worry about the network stuff.
See also https://github.com/NLog/NLog.Extensions.Logging/wiki/NLog-cloud-logging-with-Azure-function-or-AWS-lambda
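If you do keep a network target such as the AWSTarget in-process, NLog can at least soften those problems with an async wrapper (a sketch; awsTarget refers to the POC above, and the limits are assumptions):
// Queue log events on a background thread with a bounded buffer;
// overflow is discarded instead of blocking or bloating the app.
var asyncWrapper = new AsyncTargetWrapper(awsTarget)
{
    QueueLimit = 10000,
    OverflowAction = AsyncTargetWrapperOverflowAction.Discard
};
config.AddTarget("asyncAws", asyncWrapper);
config.LoggingRules.Add(new LoggingRule("*", LogLevel.Debug, asyncWrapper));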
I have a Docker container running a .NET Core 2 app.
The logging is configured using this code in Program.cs:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.ConfigureLogging((hostingContext, logging) =>
{
logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
logging.AddConsole();
logging.AddDebug();
})
.UseStartup<Startup>();
and the appsettings.json file
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
Logging seems to be OK when running Kestrel directly; I can see the logs in the terminal. Same thing when containerized: the docker logs command shows what I want.
Troubles arise in production, when running as a container in an Azure Web App. I cannot find any consistent Docker logs.
I made attempts to view the log files via FTP, or via the URL https://[mysite].scm.azurewebsites.net/api/logs/docker, but the log files are almost empty. For example, in
https://[mysite].scm.azurewebsites.net/api/vfs/LogFiles/2018_09_09_RD0003FF74F63E_docker.log,
only the container starting lines are present.
I also see the same lines in the usual portal interface.
The question is: are Docker logs automatically written to the docker.log files in an Azure Web App? Is there something I am missing?
Firstly, you need to enable container logs:
[App Service] -> Monitoring -> App Service logs
Then you can see the container logs in [App Service] -> Monitoring -> Log stream.
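The same setting can be flipped from the CLI (the app and group names are placeholders):
az webapp log config --name myApp --resource-group myRg --docker-container-logging filesystem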
UPD
You can also find the logs in Kudu, under "Current Docker Logs".
I've been cursing this for a good while, and eventually found something that works for me.
First I enabled filesystem logging as per #dima_horror's answer.
Next I ran the command: az webapp log tail --name myApp --resource-group myRg
That now seems to give me useful output (it gave me nothing prior to enabling filesystem logging).
Have you checked the logs in container settings? I followed this guide to deploy a container to an Azure web app.
To add to the answers: if you're having trouble deploying the image to your App Service and need logs for that specifically, you can find the Docker logs for the provisioning phase of your container under Deployment Center.
I have the exact same issue; the App Service container logs are generic and vague. They are not the same logs that Docker shows us whenever we run a container:
17/02/2020 08:59:25.186 INFO - Site: tutorial-api - Start container succeeded. Container: f8bfa7e27680c0e9551c6157f9d1c8a73c9a3e739b4f15de8586ce52809798d3
17/02/2020 08:59:30.675 INFO - Site: tutorial-api - Application Logging (Filesystem): On
17/02/2020 08:59:44.106 INFO - Site: tutorial-api - Waiting for container to be ready
17/02/2020 08:59:49.116 INFO - Site: tutorial-api - Container has exited
17/02/2020 08:59:49.117 ERROR - Site: tutorial-api - Container could not be started
17/02/2020 08:59:49.120 INFO - Site: tutorial-api - Purging after container failed to start
17/02/2020 08:59:49.120 ERROR - Site: tutorial-api - Unable to start container. Error message: Container could not be started: tutorial-api_20
"Unable to start container, container couldn't be started"
Wow! Azure just told me every 60 seconds a minute has passed.
I understand that this is production environment, but you got to give us something!
Out of frustration, I decided to run the same image in an Azure Container Instance resource, and there it shows the same detailed logs that Docker provides.
Now that's what I'm talking about!
Using the error logs in Azure Container Instances, I found out that my App Service couldn't access the SQL Server resource (even though they are in the same resource group). I simply enabled the SQL Server resource to be accessed from within the same resource group.
I've been working to convert Microsoft's EWS Streaming Notification Example to a service
(MS source: http://www.microsoft.com/en-us/download/details.aspx?id=27154).
I tested it as a console app. I then used a generic service template and got it to the point where it would compile, install, and start. It stops after about 10 seconds with the ubiquitous "the service on local computer started and then stopped."
So I went back in, upgraded to Visual C# 2013 Express, and used NLog to put in a bunch of log trace commands so I could see where it was when it exited.
The last place I can find it is in the example code's SynchronizeChanges function:
public static void SynchronizeChanges(FolderId folderId)
{
logger.Trace("Entering SynchronizeChanges");
bool moreChangesAvailable;
do
{
logger.Trace("Synchronizing changes...");
//Console.WriteLine("Synchronizing changes...");
// Get all changes since the last call. The synchronization cookie is stored in the
// _SynchronizationState field.
// Only the ids are requested. Additional properties should be fetched via GetItem calls.
logger.Trace("Getting changes into var changes.");
var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
SyncFolderItemsScope.NormalItems,
_SynchronizationState);
// Update the synchronization cookie
logger.Trace("Updating _SynchronizationState");
The log file shows the trace message "Getting changes into var changes." but not the "Updating _SynchronizationState" message,
so it never gets past var changes = _ExchangeService.SyncFolderItems.
I cannot for the life of me figure out why it's just exiting. There are many examples of EWS streaming notifications; I have 3 that compile and run just fine, but as far as I can tell nobody has posted an example of it done as a service.
If you don't see the "Updating..." message, it's likely the sync threw an exception. Wrap it in a try/catch, as in the sketch below.
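Something along these lines (a sketch using your names and the NLog logger already in scope) surfaces the exception instead of letting it silently kill the service:
try
{
    var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
        SyncFolderItemsScope.NormalItems,
        _SynchronizationState);
    logger.Trace("Updating _SynchronizationState");
    _SynchronizationState = changes.SyncState;
}
catch (Exception ex)
{
    // Log and rethrow so the service control manager still sees the failure.
    logger.Error(ex, "SyncFolderItems threw an exception");
    throw;
}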
OK, so now that I see the error, this looks like your garden-variety permissions problem. When you ran this as a console app, you likely presented the default credentials to Exchange, which were for your login ID. For a Windows service, if you're running the service with one of the built-in accounts (e.g. Local System), your default credentials will not have access to Exchange.
To rectify, either (1) run the service under the account you ran the console app with, or (2) add those credentials to the ExchangeService object, as sketched below.
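Option (2) looks something like this (a sketch; the account values are placeholders):
// Present explicit credentials instead of the service account's defaults.
_ExchangeService.Credentials = new WebCredentials("user@contoso.com", "password");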