Using EventFlow to monitor ETW events on a local machine - C#

I am trying to set up a simple ETW and EventFlow example that allows specific ETW providers to be monitored; in this case, the Service Control Manager ETW provider, to monitor when service start and stop messages are issued.
I have the following input configuration for Tracing and ETW.
"inputs": [
{
"type": "Trace",
"traceLevel": "Warning"
},
{
"type": "ETW",
"providers": [
{
"providerName": "Service Control Manager"
}
]
}]
I have the following code which is starting up monitoring using EventFlow.
static void Main(string[] args)
{
using (var pipeline = DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
{
System.Diagnostics.Trace.TraceWarning("EventFlow is working!");
Console.ReadLine();
}
}
The trace event is appearing in the console, but when I start and stop a service no ETW events are appearing.
Is EventFlow designed for this scenario on a local machine? If so, what am I missing in my configuration or code?
The console process is running as administrator, and the account is a member of the Performance Log Users and Performance Monitor Users groups.

If you want to listen for ETW events from the Service Control Manager, you'll need to listen for the provider named Microsoft-Windows-Services.
Here is what I have in my eventFlowConfig.json
{
"inputs": [
{
"type": "ETW",
"providers": [
{ "providerName": "Microsoft-Windows-Services" }
]
}
],
"filters": [],
"outputs": [
{ "type": "StdOutput" }
],
"schemaVersion": "2016-08-11",
"extensions": []
}
To check that it worked, I stopped and started SQL Server services. The events were output in the console as expected.
As an additional sanity check, you can use the Visual Studio Diagnostic Events viewer to listen for ETW events. Launch the viewer, click the cog to configure, add the provider name in the list of ETW Providers, and apply. You should now be able to see the same events in both the viewer and your console application.
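To generate some Service Control Manager events while testing, you can also stop and start a service from code instead of using the Services console. Below is a minimal sketch using System.ServiceProcess.ServiceController; the "Spooler" service is just an arbitrary example, and the process must run elevated:
using System;
using System.ServiceProcess; // System.ServiceProcess.ServiceController package on .NET Core/.NET 5+

class ServiceBouncer
{
    static void Main()
    {
        // Stopping and starting any Windows service makes the SCM emit the events
        // that the EventFlow ETW input should pick up.
        using (var service = new ServiceController("Spooler"))
        {
            service.Stop();
            service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            service.Start();
            service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        }
        Console.WriteLine("Service bounced - check the EventFlow console output for the ETW events.");
    }
}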

Related

Sending logs to DataDog using a Serilog sink not working

Context:
Having looked over numerous answers and documentation, I am having to resort to asking others, and I am thankful for your help.
C# configuration: https://docs.datadoghq.com/logs/log_collection/csharp/?tab=serilog
Support EU endpoints (My DataDog is on an eu server): https://docs.datadoghq.com/logs/log_collection/?tab=host#supported-endpoints
Github documentation for Serilog: https://github.com/DataDog/serilog-sinks-datadog-logs
My set up:
I have a DataDog trial account
I have a DataDog agent installed locally, but I actually want to send logs to DataDog with an agentless approach
My logger writes logs into my Logs/log.json file, although it doesn't seem to update the file immediately and can take several minutes to finally place the information into the file (no idea why)
I have the required dependencies installed as per the Serilog documentation
I have a .NET 6 project for an Angular app.
I have an appsettings.json that looks like this:
{
"AllowedHosts": "*",
"ConnectionStrings": {
"SitePageToSitePageModernConnection": "Data Source=.\\SQLEXPRESS;Initial Catalog=Blah;Integrated Security=True;MultipleActiveResultSets=True;TrustServerCertificate=True"
},
"Serilog": {
"Using": [
"Serilog.Sinks.File",
"Serilog.Sinks.Console",
"Serilog.Sinks.Datadog.Logs"
],
"MinimumLevel": {
"Default": "Error",
"Override": {
"Microsoft": "Error",
"System": "Error",
"My.App.Namespace.Something": "Information"
}
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "Logs/log.json",
"rollingInterval": "Day", // When a new file is created
"flushToDiskInterval": "00:00:01", // Currently does nothing. seems to be overwritten by operating system's paging cache interval (whatever that is)
"retainedFileCountLimit": 7 // How many files should be retained over the days specified by rollingInterval
}
},
{
"Name": "DatadogLogs",
"Args": {
"apiKey": "d7...b07",
"source": "something",
"host": "noideawhatgoeshere",
"configuration": {
"Url": "http-intake.logs.datadoghq.eu"
}
}
}
],
"Enrich": [
"FromLogContext",
"WithMachineName",
"WithThreadId"
],
"Properties": {
"Application": "MyApplicationSample"
}
}
}
I am setting up Serilog in the following manner:
// WebApplicationBuilder builder...
builder.Services.AddLogging(loggingBuilder =>
{
ConfigurationManager configurationManager = builder.Configuration;
Logger logger = new LoggerConfiguration()
.ReadFrom
.Configuration(configurationManager)
.CreateLogger();
// Adds serilog to the logging providers
loggingBuilder.AddSerilog(logger);
});
In a controller, I am using ILogger<MySiteController> to log.
// constructor injects:
// ...ILogger<MySiteController> _logger...
// a call to the controller then runs some logs:
_logger.LogInformation("Test information");
_logger.LogError("Test error");
_logger.LogWarning("Test warning");
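For completeness, here is a minimal sketch of that controller wiring; the controller name and route are made up for illustration:
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/[controller]")]
public class MySiteController : ControllerBase
{
    private readonly ILogger<MySiteController> _logger;

    // ILogger<MySiteController> is resolved from the logging providers registered in Program.cs
    public MySiteController(ILogger<MySiteController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Test information");
        _logger.LogError("Test error");
        _logger.LogWarning("Test warning");
        return Ok();
    }
}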
My question/problem:
None of my logs end up in DataDog. Any ideas what I'm missing or misunderstanding?
The URL configured, in this case, was incorrect and should have been:
"configuration": {
"url": "https://http-intake.logs.datadoghq.eu",
"port": 443
}
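For anyone configuring the sink in code rather than through appsettings.json, the equivalent would look roughly like the sketch below. It is based on the Serilog.Sinks.Datadog.Logs README; treat the DatadogConfiguration constructor arguments as assumptions and check them against the sink's documentation for your version:
using Serilog;
using Serilog.Sinks.Datadog.Logs;

var log = new LoggerConfiguration()
    .WriteTo.DatadogLogs(
        "<YOUR_API_KEY>",
        source: "something",
        host: "my-host-name",
        // EU intake endpoint: note the explicit https:// scheme and the port
        configuration: new DatadogConfiguration(url: "https://http-intake.logs.datadoghq.eu", port: 443))
    .CreateLogger();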
For more information, I have laid out some comments to help others in the future:
{
// ...
//"AllowedHosts": "*",
// ...
// Serilog framework can look at this to configure itself
"Serilog": {
// This determines which of the types of logging you want to have
"Using": [
// This is for logging into a file in a static manner
"Serilog.Sinks.File",
// This is for logging into a file that can be configured to log for a day and then create a new file the next day (a rolling file, get it?)
"Serilog.Sinks.RollingFile",
// This will allow logs to go into your console. Console in this case (for an Angular-driven app) means the terminal that appears when you launch your app
"Serilog.Sinks.Console",
// This will allow logging to Datadog
"Serilog.Sinks.Datadog.Logs"
],
// These are the log levels you want to support
"MinimumLevel": {
// By default, you could say "I only want to see errors coming up"
"Default": "Error",
// If you want to see Information, Debug, Trace, etc. logs from other specific C# class namespaces, then you can configure them here
"Override": {
// Anything in a namespace that starts with `Microsoft` we only care about Errors being flagged (for example)
"Microsoft": "Error",
// Anything in a namespace that starts with `System` we only care about Errors being flagged (for example)
"System": "Error",
// If you had a tool whose C# classes live in the "My.Tooling.Test" namespace, you will probably want to see all the logs it raises, so we could set this to Information (for example)
"My.Tooling.Test": "Information"
}
},
// Given that we have defined which frameworks we want to make use of in the "Using" section, we can now use the framework functionality for Console, File, RollingFile, Datadog, etc.
"WriteTo": [
// Tells Serilog to log stuff to the console
{
"Name": "Console"
//... You can configure more here but you'll need to Google that yourself
},
// Tells Serilog to log stuff to a File
{
"Name": "File",
//... You can configure more here but you'll need to Google that yourself
"Args": {
"path": "App_Data\\logs\\log.txt",
"rollingInterval": 3,
"outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss,fff} [P{ProcessId}/D%APP_DOMAIN_ID%/T{ThreadId}] {Level:u4} {CorrelationId} {UserId} - {Message:lj}{Exception}{NewLine}"
}
},
// Tells Serilog to log stuff to a Rolling File
{
"Name": "RollingFile",
//... You can configure more here but you'll need to Google that yourself.
"Args": {
"path": "App_Data\\logs\\DiagnosticTraceLog.txt{RollingDate}",
"shared": true,
"outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss,fff} [P{ProcessId}/D%APP_DOMAIN_ID%/T{ThreadId}] {Level:u4} {CorrelationId} {UserId} - {Message:lj}{Exception}{NewLine}"
}
},
// Tells Serilog to log stuff to a DataDog instance
{
// This name is important! It must be precisely this to work!
"Name": "DatadogLogs",
// The configuration properties here must all be written precisely as the following are
"Args": {
// Mandatory | API Key - Found in Datadog Organization/Api Keys section.
"apiKey": "6c...91",
// Optional | An arbitrary "source" that enables you to filter your logs within the DataDog application
"source": "anything.you.want",
// Optional | An arbitrary "service" that enables you to filter your logs within the DataDog application
"service": "anything.you.want",
// Optional | An arbitrary "host" that enables you to filter your logs within the DataDog application
"host": "anything.you.want",
// Optional | Arbitrary "tags" that enable you to filter your logs within the DataDog application
"tags": [ "app.name:My.Tooling.Test" ],
// Optional | The default Datadog server is "intake.logs.datadoghq.com". If your DataDog instance is on the "app.datadoghq.com" URL, you do not need any of this configuration.
//
// | If you are not on the default Datadog server, you will need to find out what server you are on by visiting here and using the dropdown at the right https://docs.datadoghq.com/getting_started/site/
// | and then, once you know the server, you can visit here for the URL to use: https://docs.datadoghq.com/logs/log_collection/?tab=host#supported-endpoints
//
// | IMPORTANT: the listed endpoints are not quite the entire URL that's needed. Be aware that "https://" will need to be prepended. For example:
"configuration": {
"url": "https://http-intake.logs.datadoghq.eu",
"port": 443
}
}
}
],
// This ends up being stored in the final log message that is raised for you. You'll see it in DataDog under "properties" object where it mentions "Thread Id" or "SourceContext" for example.
"Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ],
// This ends up being stored in the final log message that is raised for you. You'll see it in DataDog under "properties" object where it mentions "Application".
"Properties": {
"Application": "My.Tooling.Test"
}
}
}
And specifically for C# apps, the following packages are required to be installed to use the above appsettings.json:
Mandatory stuff:
microsoft.extensions.configuration.json
serilog.aspnetcore
Sink types (The usings defined in appsettings.json):
serilog.sinks.datadog.logs
serilog.sinks.file
serilog.sinks.rollingfile
Enrichers that you've used:
serilog.enrichers.environment
serilog.enrichers.thread
serilog.aspnetcore.enrichers.correlationid
Plus serilog.settings.configuration (not an enricher, but it is what lets Serilog read its configuration from appsettings.json).
Which can all be configured in C# like so (You have 2 choices):
// This allows you to add a Logging Provider to the already filled out list of logging providers
// that are supplied by default to your application.
builder.Services.AddLogging(loggingBuilder =>
{
// Get the builder's configuration from your Program
// (It essentially just gets the appsettings.json file)
ConfigurationManager configurationManager = builder.Configuration;
// Create a new LoggerConfiguration so that you can tell
// it to get its configuration from your appsettings.json file
Logger logger = new LoggerConfiguration()
.ReadFrom
.Configuration(configurationManager)
// Finally create the logger which will be used
.CreateLogger();
// Adds serilog to the logging providers
loggingBuilder.AddSerilog(logger);
});
OR
// Forces all logs to go through your single serilog configuration
// Be aware that if you use this, you need to ensure you configure your Serilog to
// do Console logging, otherwise you won't see any logs in the console.
// Luckily, I've provided all the code you need to do that in the appsettings.json configuration
// (But you may need to change the log level to Information or Debug for it to show up)
builder.Host.UseSerilog((hostingContext, services, loggerConfiguration) =>
{
loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration);
});
And you can finally create logs by injecting the ILogger<YOURCLASSNAME> logger into the class where you want to use it and running the following commands:
logger.LogInformation("{message}", "Test");
logger.LogWarning("{message}", "Test");
logger.LogError("{message} {exception}", "Test", error);
logger.LogWarning(JsonConvert.SerializeObject(new Something() { Name = "Something", Description = "Something" }));
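When logs silently never reach Datadog, it also helps to know that Serilog sinks swallow their own errors by default. You can surface them with Serilog's built-in SelfLog before building the logger:
using System;
using Serilog.Debugging;

// Call once at startup, before CreateLogger().
// Internal sink errors (unreachable endpoint, rejected API key, etc.) are written to stderr,
// which is usually the quickest way to find out why nothing shows up in Datadog.
SelfLog.Enable(Console.Error);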

Serilog does not log to file after exception has occurred

I am using Serilog in my .NET Core 3.1 console project. It usually works perfectly fine, but if there is any exception, it logs the exception and then stops logging altogether for all subsequent runs of that app. I need to delete the log file and then it starts working again.
Not sure what I am doing wrong or missing. Please help.
Here is the serilog configuration.
"Serilog": {
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "C:\\Logs\\MyLogs.log",
"rollingInterval": "Day",
"fileSizeLimitBytes": 5000,
"rollOnFileSizeLimit": true
}
}
] }
And here is how I am using it in the generic host:
return Host.CreateDefaultBuilder()
.ConfigureServices(( hostContext,services ) =>
{
...
})
.ConfigureAppConfiguration(config =>
{
config.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("AppConfigs.json");
})
.UseSerilog(( context,builder ) =>
{
builder
.ReadFrom.Configuration(context.Configuration);
});
This was a stupid mistake on my part. The issue was
"fileSizeLimitBytes": 5000,
which is 5 KB. The exception text was much bigger, which caused the file to exceed that limit. And
"rollOnFileSizeLimit": true
was not working because I forgot to add the Serilog.Sinks.File package. Once I added that and increased the limit, it was all fine. Thanks for the replies.
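For reference, the same fix expressed as programmatic configuration looks like the sketch below; the 10 MB limit is just an illustrative value:
using Serilog;

// Requires the Serilog and Serilog.Sinks.File packages.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(
        path: @"C:\Logs\MyLogs.log",
        rollingInterval: RollingInterval.Day,
        fileSizeLimitBytes: 10_000_000,   // large enough that a single exception cannot fill the file
        rollOnFileSizeLimit: true)        // and roll to a new file if it ever does
    .CreateLogger();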

How to remove default serilog properties from log?

Using Serilog to log an object, e.g. Log.Information("{@log}", log), where log is a custom object.
The logs come out like
{
"#t": "2020-01-24T09:31:23.5064000Z",
"#mt": "{#log}",
"log": {
"TraceId": "e57afecc-8efe-4d48-8057-d46cce71c3d9",
"Timestamp": "01/24/2020 09:31:23",
"Service": "serviceType",
"Action": "actionType",
"$type": "BaseLog"
}
}
I'd like not to have the extra Serilog properties on there, and just have a flat structure of my log (this happens even when I'm using CompactJsonFormatter), e.g.
{
"TraceId": "e57afecc-8efe-4d48-8057-d46cce71c3d9",
"Timestamp": "01/24/2020 09:31:23",
"Service": "serviceType",
"Action": "actionType"
}
Is there an option/extension to serilog where I can remove these?
Do not put the burden of log transformation on your application; instead, use something like Fluentd or Logstash to do this for you. They are meant for exactly this kind of thing.
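That said, if you would rather keep it inside the app, Serilog lets you plug a custom ITextFormatter into a sink and emit only the destructured object. A minimal sketch, assuming every event carries a "log" property as in Log.Information("{@log}", log); the FlatLogFormatter name is made up for illustration:
using System.IO;
using Serilog.Events;
using Serilog.Formatting;
using Serilog.Formatting.Json;

public class FlatLogFormatter : ITextFormatter
{
    // Passing null for typeTagName is intended to omit the "$type" tag from the output.
    static readonly JsonValueFormatter ValueFormatter = new JsonValueFormatter(typeTagName: null);

    public void Format(LogEvent logEvent, TextWriter output)
    {
        // Emit only the object captured by {@log}, skipping @t, @mt and the other built-in fields.
        if (logEvent.Properties.TryGetValue("log", out LogEventPropertyValue value))
        {
            ValueFormatter.Format(value, output);
            output.WriteLine();
        }
    }
}

// Usage: new LoggerConfiguration().WriteTo.File(new FlatLogFormatter(), "log.json").CreateLogger();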

Viewing/Collecting Service Fabric ETW Events OnPrem

We are trying to view service ETW events in an on-prem cluster. The long-term plan is to install an Elasticsearch cluster to send the events to. I don't have time to build that out this week; instead, I need to understand why my app is blowing up.
We have installed Microsoft Message Analyzer on one of the node servers, and I can connect with a live session to view the cluster ETW events from the Service Fabric system provider.
I would like to be able to view the Application ETW Events. I've followed the instructions in the article here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostic-collect-logs-without-an-agent
This article seems to focus on 3 files: eventFlowConfig.json, Program.cs, and ServiceEventSource.cs.
eventFlowConfig.json has:
"inputs": [
{
"type": "EventSource",
"sources": [
{ "providerName": "Microsoft-ServiceFabric-Services" },
{ "providerName": "Microsoft-ServiceFabric-Actors" },
{ "providerName": "TMHP-CacheApp-CacheAPI" }
]
}
],
"filters": [
{
"type": "drop",
"include": "Level == Verbose"
}
],
"outputs": [
{
"type": "StdOutput"
}
]
In program.cs I have:
using (var diagnosticsPipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("CacheApp-CacheAPI-DiagnosticsPipeline"))
{
ServiceRuntime.RegisterServiceAsync("EndpointType",
context => new Endpoint(context)).GetAwaiter().GetResult();
ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(Endpoint).Name);
// Prevents this host process from terminating so services keeps running.
Thread.Sleep(Timeout.Infinite);
}
ServiceEventSource.cs has:
[EventSource(Name = "TMHP-CacheApp-CacheAPI")]
I package and deploy fine, but in Message Analyzer I don't know how to attach to the application provider. I think I would be adding a "Custom Provider", but it asks for a GUID. Is there some way to find this GUID? I'm assuming I want to add the custom provider for my specific Service Fabric application of type:
"TMHP-CacheApp-CacheAPI"
Thanks in advance,
Greg
I'd recommend that you use PerfView instead. Vance has an article on how to view EventSource-based events in PerfView: https://blogs.msdn.microsoft.com/vancem/2012/07/09/introduction-tutorial-logging-etw-events-in-c-system-diagnostics-tracing-eventsource/ The '*' in front of the EventSource name is important.
Here are some instructions for how to view actor events; your custom EventSource should follow the same pattern.
The Actor and Reliable Services frameworks are EventSource-based, so to view their events in PerfView you have to follow the instructions on Vance's blog. Don't forget the '*'!
Start PerfView with a command line like this: perfview /onlyproviders=*Microsoft-ServiceFabric-Actors
You can collect using Collect | Collect | Start Collection. Make sure that the Advanced Options | Additional Providers field contains *Microsoft-ServiceFabric-Actors.
When you finish collecting, the events are viewable under Events.
The GUID of a .NET EventSource-based event provider is actually derived from the name of the provider; it is generated using a hash algorithm. This blog post has a short description of it: https://blogs.msdn.microsoft.com/dcook/2015/09/08/etw-provider-names-and-guids/. You can use the supplied EtwGuid.exe to generate the GUID for your provider:
C:\code> .\EtwGuid.exe TMHP-CacheApp-CacheAPI
TRACELOGGING_DEFINE_PROVIDER(
g_hMyProvider,
"TMHP-CacheApp-CacheAPI",
// {9deef099-8d1a-568a-1618-08ffbb7146b3}
(0x9deef099,0x8d1a,0x568a,0x16,0x18,0x08,0xff,0xbb,0x71,0x46,0xb3));
So the GUID for TMHP-CacheApp-CacheAPI would be 9deef099-8d1a-568a-1618-08ffbb7146b3. This only works for .NET EventSources, by the way; other event providers may have other ways of setting the GUID for a provider.
You can then look for that provider in your Microsoft Message Analyzer, PerfView or any other tool for ETW viewing.
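If you would rather compute the GUID in code than run EtwGuid.exe, the TraceEvent library (the engine behind PerfView) exposes a helper for exactly this. A minimal sketch, assuming the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package:
using System;
using Microsoft.Diagnostics.Tracing.Session;

class Program
{
    static void Main()
    {
        // EventSource names are hashed into a deterministic, name-based GUID.
        Guid providerGuid = TraceEventProviders.GetEventSourceGuidFromName("TMHP-CacheApp-CacheAPI");
        Console.WriteLine(providerGuid); // expected: 9deef099-8d1a-568a-1618-08ffbb7146b3
    }
}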
As for the Microsoft-supplied event providers, there are three built-in ones you should look at for Service Fabric:
Microsoft-ServiceFabric - CBD93BC2-71E5-4566-B3A7-595D8EECA6E8 - These are the 'low-level' Service Fabric activities
Microsoft-ServiceFabric-Actors - (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-diagnostics)
Microsoft-ServiceFabric-Services - https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-diagnostics

Azure Functions how to add application settings to bindings

I'm trying to add a custom binding that uses my app settings for my Azure Function. I only need to receive a string from my settings.
I would like to get simpleValue from my settings.
{
"bindings": [
{
"name": "someValue",
"type": "stringSetting",
"connection": "simpleValue",
"direction": "in"
}
],
"disabled": false
}
and then get it in the Run method:
static void GetOrders(TraceWriter log, string someValue)
{
log.Info(someValue);
}
Is this even possible? Or maybe there is another way to do it?
I already found the solution. Just add:
using System.Configuration;
and add this line to the code, using the key ("simpleValue") to get the value:
ConfigurationManager.AppSettings["simpleValue"]
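Putting the two pieces together in the script-style function from the question (assuming "simpleValue" exists in the Function App's application settings):
using System.Configuration;

public static void GetOrders(TraceWriter log)
{
    // Reads the value directly from the app settings instead of binding it in function.json.
    string someValue = ConfigurationManager.AppSettings["simpleValue"];
    log.Info(someValue);
}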
App settings can be referenced in the binding JSON as %MY_CUSTOM_CONFIG%, i.e. enclosed within percent signs.
Note that the connection property of triggers and bindings is a special case and automatically resolves values as app settings, without percent signs.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings
