How To Log Effectively Without Flooding Log Files - C#

In an application with long-running tasks and dependency injection, log files can easily become flooded with useless data. This makes the logs harder to follow and drains storage space.
Examples:
A background service that polls a database every 10 seconds and logs both that it is checking for data and how much data was retrieved
A transient service (resolved via DI) that has some logging in a method called by the constructor
For example 1, the background service logging is useful for diagnostics when something goes wrong, but it can easily flood the log file.
For example 2, each time the transient service is constructed (which might be often), the logging in the method called by the constructor is written out.
Obviously, the logs can be split into different files, e.g. a debug-level log file and a general log file. This can make the general log file easier to follow, but it doesn't stop log files from taking up too much space, and it may separate information that together paints a clearer picture of what is happening.
Is there anything more that can be done apart from splitting up the log files and being more selective about what's logged? Are there any best practices for this, or any resources that provide good approaches to tackling this problem, or is it just a case of figuring out what's best to do in the specific scenario at hand?

You want to control the logging behaviour by using the correct LogLevel when logging messages.
You should have a look at the LogLevel enum, as it will clearly show you when to use which level.
In the appsettings.json of your application you can then set the minimum log level depending on the deployment environment.
You are referring to Trace or Information logging, which should only be used in a test or development environment in order to get as much information as possible when something goes wrong.
Usually only enabled when you are trying to reproduce a known error.
In a production environment you will only log Error or Critical messages. In your exception handling you could log some additional information about the parameters that were passed into the failing method, along with the stack trace. This should give you enough information to reproduce the error in dev or test, where you can debug the application or enable trace logs.
Consider using Structured Logging for those scenarios.
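For instance, a minimal appsettings.json sketch that keeps the default level at Information while quieting a chatty category in production (the category name MyApp.Services.PollingService is hypothetical):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "MyApp.Services.PollingService": "Warning"
    }
  }
}
```

With this in place, the polling service's routine "checking for data" messages are suppressed while warnings and errors from it still reach the log.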

Related

Verbose in LogLevel doesn't exist

I am trying to write my first demo using EF7.
I have installed Microsoft.Extensions.Logging.Console 1.0.0-rc2-final for logging.
But when I try to use the following code:
public static void LogToConsole(this DbContext context)
{
    var contextServices = ((IInfrastructure<IServiceProvider>)context).Instance;
    var loggerFactory = contextServices.GetRequiredService<ILoggerFactory>();
    loggerFactory.AddConsole(LogLevel.Verbose);
}
I couldn't find the Verbose enum value!
Instead I get the following:
Could someone explain what happened and which value I should use for logging?
Back in December, the original log levels were changed a bit to be more consistent with other logging systems. As part of this change, Verbose was renamed to Trace and moved in severity below Debug.
As for what log level you should use, it depends a lot on what you want to log and what you expect to see. See the recommendations in the documentation; to quote the first three bullet points:
Log using the correct LogLevel. This will allow you to consume and route logging output appropriately based on the importance of the messages.
Log information that will enable errors to be identified quickly. Avoid logging irrelevant or redundant information.
Keep log messages concise without sacrificing important information.
To choose the correct log level, you should first familiarize yourself with what they mean. Ordered from lowest severity to highest:
Trace – For the most detailed messages, containing possibly sensitive information. Should never be enabled in production.
Debug – For potentially interactive investigation during development; useful for debugging but of no real long-term value.
Information – For tracking the flow of the application.
Warning – For abnormal (but expected) events in the application, including errors and exceptions that are properly handled and do not impact the application’s execution (but could still be a sign of potential problems).
Error – For real failures which cause the current activity to fail, but leave the application in a recoverable state, so other activities will not be impacted.
Critical – For failures at the application level which leave the application in an unrecoverable state and impact further execution.
You can find similar explanations in the official documentation and in the project’s logging guidelines.
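To make the mapping concrete, here is a sketch using Microsoft.Extensions.Logging; the class, messages, and scenario are invented for illustration (real code would not hit every level in one method):

```csharp
using Microsoft.Extensions.Logging;

public class OrderProcessor
{
    private readonly ILogger<OrderProcessor> _logger;

    public OrderProcessor(ILogger<OrderProcessor> logger) => _logger = logger;

    public void Process(int orderId)
    {
        _logger.LogTrace("Raw payload for order {OrderId}: ...", orderId);      // most detailed, dev only
        _logger.LogDebug("Entering Process for order {OrderId}", orderId);      // debugging aid, no long-term value
        _logger.LogInformation("Processing order {OrderId}", orderId);          // application flow
        _logger.LogWarning("Order {OrderId} has no shipping address", orderId); // abnormal but handled
        _logger.LogError("Payment failed for order {OrderId}", orderId);        // current activity failed
        _logger.LogCritical("Payment store unreachable, shutting down");        // application-level failure
    }
}
```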
Use LogLevel.Debug. The levels got renamed and shuffled around in RC2. See the announcement for more details.

How can I turn a trace on and off in Enterprise Library without restarting the service?

So in our project, we use Enterprise Library as our logger. Obviously, in production, we don't want our debug trace logging out all of the time, but we would like to be able to turn on that debug trace without restarting the app pool, and I'm not quite sure how to go about it.
In the past, I'd have written something that would only log to the file if the file was present. So for instance, if I wanted to enable debug trace, I'd have the NOC drop a file in the log folder, then the next time the logger was invoked, it would see that the file is there and start dumping to it. Seems kind of cheesy, but it works. However, with a website, I'd have to write a full blown asynchronous logger that knew how to queue information to be written, and I don't really want to do that, and I don't know how to achieve this using Enterprise logger. Any ideas on this one would be great.
The other thought was to have a flag in the config file or registry that could be changed while the site is running which would enable that same trace.
I'm looking for ideas or solutions. The main concern is that we need to be able to enable and disable this logging on the fly.
So I wasn't able to find a built-in solution for doing this with Enterprise Library, so this is what I ended up doing, and it seems to work; in fact, I extended the idea a bit further by adding a priority override check.
I created a proxy class for writing to the EL LogWriter. What it does is check for the existence of a file (priority_override.txt in this case). If the file is present, it will read the file and search for the text PriorityLevel={n} where n is the override. If it is able to get this override number, it will then override the priority level that was provided in the code. I use this to basically force a high priority on all log statements so that they don't get blocked by the Filter.
My default filter prevents anything below priority 5 from making it to the listeners; this trick allows me to override that by temporarily increasing the priority of all logs. So since Debug is generally priority 1, it doesn't get logged. If I drop a file called priority_override.txt with the contents PriorityLevel=99 into the log directory, then ALL log statements will make it to the configured listeners and be handled accordingly. Everything else is just a normal matter of configuring EL with proper categories, priorities, etc. So for instance, if my highest priority is 5 and that triggers an email, then I would just override to 4 so that everything gets logged but emails do not get sent. Then when we're done troubleshooting in production, we just delete priority_override.txt and everything returns to normal.
Additionally, now we don't have to manage the changing of config files separately for our test environments. We can just leave the priority_override.txt in the log folder in each respective environment.
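A minimal sketch of such a proxy, assuming Enterprise Library's static Logger.Write(LogEntry) API; the file path and class name are illustrative, not from the original answer:

```csharp
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class LogProxy
{
    // Hypothetical location; point this at your actual log folder.
    private const string OverrideFile = @"C:\logs\priority_override.txt";

    public static void Write(string message, string category, int priority)
    {
        // If the override file exists and contains "PriorityLevel={n}",
        // replace the caller's priority with n so nothing is filtered out.
        if (File.Exists(OverrideFile))
        {
            var match = Regex.Match(File.ReadAllText(OverrideFile), @"PriorityLevel=(\d+)");
            if (match.Success)
                priority = int.Parse(match.Groups[1].Value);
        }

        var entry = new LogEntry { Message = message, Priority = priority };
        entry.Categories.Add(category);
        Logger.Write(entry);
    }
}
```

Deleting the file restores the original priorities on the next write, with no restart required.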

System.Diagnostics.Trace on a live environment

Problems have been reported to me regarding the performance of a live site. I can't seem to replicate any of these issues on any dev or staging environment, and the profilers I have run against dev have revealed nothing unusual.
This has led me to turn to a diagnostics trace for a simple timing trace, so I can at least try to isolate the cause and narrow it down.
I'm quite happy to add
System.Diagnostics.Trace.WriteLine("....");
wherever necessary and add a listener (via web.config entry) to write out to a log file, but could this massively impact the performance of the live environment itself?
Is there anything else I need to consider when, potentially, leaving this to run over the weekend? i.e. is it best that I specify how large the log file is to get before closing and opening a new one?
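For reference, the web.config listener entry mentioned above looks roughly like this (the listener name and log path are placeholders):

```xml
<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add name="timingLog"
           type="System.Diagnostics.TextWriterTraceListener"
           initializeData="C:\logs\timing.log" />
    </listeners>
  </trace>
</system.diagnostics>
```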
It depends how much data you are going to log, so turn on the logger and check whether your application behaves normally. If logging to a log file slows down your application, consider a faster TraceListener such as EventLogTraceListener (you may create a dedicated event log for this purpose with a maximum size and log rolling). If logging to a file is not a problem, get the Essential.Diagnostics RollingFileTraceListener. It has many options, including setting the maximum file size and the number of rolled files.
Use a logging framework like log4net and log like:
LogManager.GetCurrentClassLogger().Debug("...");
When you disable logging afterwards in the configuration, these calls return without doing any real work.
If you need to do string formatting for your messages, use DebugFormat(), which skips the formatting when the Debug level is not enabled.
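A small sketch of both patterns with log4net (the class and messages are invented for illustration):

```csharp
using log4net;

public class Worker
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Worker));

    public void DoWork(int itemCount)
    {
        // DebugFormat defers the string formatting; if Debug is disabled,
        // the format arguments are never combined into a message.
        Log.DebugFormat("Processing {0} items", itemCount);

        // For messages that are expensive to build, guard explicitly so the
        // expensive call is skipped entirely when Debug is off.
        if (Log.IsDebugEnabled)
            Log.Debug("State dump: " + BuildExpensiveStateDump());
    }

    private string BuildExpensiveStateDump() => "...";
}
```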

Why might log4net entries go "missing" in some listeners

This one really has me scratching my head....
I have been using log4net (currently version 1.2.10) in an application for some time. While adding a new option to the application, I noticed that even though the log4net Debug, Error, etc. methods were getting called, items from that log source were not being seen by the console appender.
Having checked the obvious (like making sure there was no filtering involved), I noticed something else that was strange. If I have more than one appender (e.g. a log file appender and a UDP appender) then the appenders will sometimes see different subsets of the log messages. Which subset they see appears to be random, but typically when the problem occurs they will fail to see all messages from a given log source.
Why might this be happening, and what can I do about it since lost messages mean the log file cannot be trusted to show an accurate picture of remote failures?
[Additional information below added Jan 19th, 2010]
I finally took a good look at the ILog object getting passed back in response to the call
LogManager.GetLogger(typeof (MyTypeHere));
On some occasions, I am getting an ILog object with Debug, Info, Warning, Error etc set to false. On other occasions the ILog object has them correctly set to true. Since my code does nothing to manipulate those flags, on the occasions when my code is passed the "disabled" ILog object messages from my code (understandably) do not get propagated at all.
I still cannot explain the apparent discrepancy between the two appenders.
We regularly use the logfile, console and smtp appenders together and we don't seem to have these issues. Under certain conditions, some appenders can lose messages because of their inherent nature. For example, the UDP appender, because of the transport mechanism, isn't guaranteed to transmit all messages. Same thing happens with the SMTP appender. If you are using a common log file but logging from several processes, sometimes the file is locked by another process (this usually throws an exception, but it might be getting caught somewhere in your code), so be sure to set the Minimal Lock property on it. Also, the appenders can be buffered, so if there is a process crash, log4net might not have a chance to flush out the buffered data.
In the end the most reliable appenders are those that log locally, such as the file and the event log appenders, but then you have to harvest all the logs. If you want to log centrally, you might want to consider logging to a database or a message queue.
Do I understand correctly that some messages which are normally logged successfully suddenly stop appearing (being logged) at some point? If that is the case, then I would suggest turning on the internal logging of log4net. Alternatively, debug the issue with the log4net code (with your problem I'd suggest breaking somewhere around the CallAppenders method in the Logger class; it will tell you which appenders will actually be called for a logging event).
If some messages are consistently not being logged, then I would look at the log4net configuration. Check for any levels/thresholds being set and, more importantly, if you are using named loggers, check their names and make sure that the prefix of whatever you put into the LogManager.GetLogger(...) call matches the logger names in your config.
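log4net's internal logging can be switched on via an appSettings entry in app.config/web.config; its diagnostic messages go to the console and trace output:

```xml
<configuration>
  <appSettings>
    <!-- Emits log4net's own diagnostic messages so you can see
         why appenders are skipping or dropping events. -->
    <add key="log4net.Internal.Debug" value="true" />
  </appSettings>
</configuration>
```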
I second what jvilalta said. I have been using log4net for years now with many types of appenders, and I have not seen a situation where a message would be missing from only some of the appenders but not all.
I know this is old, but I've had this happen in asp.net mvc apps recently and it was really frustrating to track down. It seems to happen on methods that use the ValidateInput(false) attribute.
My guess is that using this attribute skips initializing some data that log4net tries to access while logging. I found that adding the following to my web.config fixed the problem:
<httpRuntime requestValidationMode="2.0" />
Of course it has other side effects (not related to logging).

What should an Application Log ideally contain?

What kind of information should an Application Log ideally contain? How is it different from Error Log?
You are going to get a lot of different opinions for this question.....
Ultimately it should contain any information that you think is going to be relevant to your application. It should also contain information that will help you determine what is happening with the application. That is not to say it should contain errors, but it could if you wanted to use it that way.
At a minimum I would suggest that you include:
application start/stop time
application name
pass/fail information (if applicable)
Optional items would be:
call processing (if not too intensive)
errors if you decide to combine application and error logs
messaging (if not too intensive)
One thing you want to keep in mind is that you do not want to write so much information to your logs that you impact your application's performance. Also, you want to make sure you don't grow your log files so large that you run out of disk space.
A true error log should really contain:
The stack trace of where the error took place
The local variables present at the point of error.
A timestamp of when the error took place.
Detail of the exception thrown (if it is an exception).
A general application log file, for tracking events, etc, should contain less internal information, and perhaps be more user friendly.
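As a rough sketch of capturing those items (the class, method, and output destination are invented; local variables are represented by a caller-supplied context object, since capturing them automatically is not generally possible):

```csharp
using System;

public static class ErrorLogger
{
    public static void Log(Exception ex, object context)
    {
        // One entry containing the timestamp, exception detail, chosen
        // local state, and the stack trace of where the error took place.
        Console.Error.WriteLine(
            $"{DateTime.UtcNow:O} [{ex.GetType().Name}] {ex.Message}\n" +
            $"Context: {context}\n" +
            ex.StackTrace);
    }
}
```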
To be honest, the answer really depends on what software the log is for.
Ideally, it should contain exactly the information you need to diagnose an application problem, or analyze a particular aspect of its past behavior. The only thing that makes this hard to do is that you do not know in advance exactly what problems will occur or which aspects of the application behavior will interest you in the future. You can't log every single change in application state, but you have to log enough. How much is enough? That's hard to say and very application-dependent. I doubt a desktop calculator logs anything.
An error log would just log any errors that occur. Unexpected exceptions and other unexpected conditions.
An application log usually contains errors, warnings, events, and non-critical information, in contrast to an error log, which usually contains only errors and critical warnings.
The application log should contain all the information necessary for an audit. This may include such things as successful/unsuccessful log-ons and any specific actions. The error log can be a subset of the application log or a separate log containing only information related to errors in the application.