System.Diagnostics.Trace on a live environment - C#

Problems have been reported to me regarding the performance of a live site. I can't seem to replicate any of these issues on any dev or staging environment, and the profilers I have run against dev have revealed nothing unusual.
This has led me to turn to a diagnostics trace for a simple timing trace, so I can at least try to isolate the cause and narrow it down.
I'm quite happy to add
System.Diagnostics.Trace.WriteLine("....");
wherever necessary and add a listener (via a web.config entry) to write out to a log file, but could this massively impact the performance of the live environment itself?
Is there anything else I need to consider when, potentially, leaving this to run over the weekend? For example, is it best to specify how large the log file should get before closing it and opening a new one?
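For reference, a minimal sketch of the kind of web.config entry in question, assuming a hypothetical log path of C:\Logs\trace.log (the listener name and path are placeholders):

<configuration>
  <system.diagnostics>
    <!-- autoflush writes each entry to disk immediately, at some I/O cost -->
    <trace autoflush="true">
      <listeners>
        <add name="fileListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="C:\Logs\trace.log" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>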

It depends on how much data you are going to log, so turn on the logger and check whether your application still behaves normally. If logging to a file slows your application down, consider a faster TraceListener such as EventLogTraceListener (you can create a dedicated event log for this purpose, with a maximum size and log rolling). If logging to a file is not a problem, get the Essential.Diagnostics RollingFileTraceListener. It has many options, including setting the maximum file size and the number of rolled files.

Use a logging framework like log4net and make logging calls like:
LogManager.GetLogger(typeof(MyClass)).Debug("...");
If you later disable logging in the configuration, these calls return almost immediately without doing any work.
If you need string formatting for your messages, use DebugFormat(), which skips the formatting when the configured logging level does not require it.
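A short sketch of how that looks in log4net (the class name and the expensive helper are made up for illustration):

using log4net;

public class OrderProcessor
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId)
    {
        // DebugFormat defers the String.Format call until it knows
        // the Debug level is actually enabled.
        Log.DebugFormat("Processing order {0}", orderId);

        // For genuinely expensive message construction, guard explicitly.
        if (Log.IsDebugEnabled)
        {
            Log.Debug("State dump: " + BuildExpensiveStateDump());
        }
    }

    private string BuildExpensiveStateDump()
    {
        return "..."; // hypothetical expensive diagnostic helper
    }
}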

Related

How To Log Effectively Without Flooding Log Files

In an application with long-running tasks and dependency injection, log files can easily become flooded with useless data. This makes the log files harder to follow and drains storage space.
Examples:
A background service that polls a database every 10 seconds and logs that it is checking for data, along with how much data was retrieved
A transient service (DI) that does some logging in a method called by its constructor
In example 1, the background service's logging is useful for diagnostics when something goes wrong, but it can easily flood the log file.
In example 2, every time the transient service is constructed (which might be very often), the logging in the method called by the constructor fires again.
Obviously, the logs can be split into different files, e.g. a debug-level log file and a general log file. This can make the general log file easier to follow, but it doesn't deal with log files taking up too much space, and it may also separate information that, taken together, paints a clearer picture of what is happening.
Is there anything more that can be done, apart from splitting up the log files and being more selective about what's logged? Are there any best practices or resources that offer good approaches to this problem, or is it just a case of figuring out what's best in the specific scenario at hand?
You want to control the logging behaviour by using the correct LogLevel when logging messages.
You should have a look at the LogLevel Enum as it will clearly show you when to use which level.
In the appsettings.json of your application you can then set the minimum log level depending on the deployment environment.
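As a sketch, a production appsettings.json might raise the minimum level like this (the category names are illustrative):

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft.AspNetCore": "Error"
    }
  }
}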
You are referring to Trace or Information logging, which should only be used in a test or development environment in order to get as much information as possible when something is wrong.
It is usually only enabled when you are trying to reproduce a known error.
In a production environment you will only log Error or Critical messages. In your exception handling you could log some additional information about the parameters that were passed into the failing method, along with the stack trace. This should give you enough information to reproduce the error in dev or test, where you can debug the application or enable trace logs.
Consider using Structured Logging for those scenarios.
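As a minimal sketch of that idea with Microsoft.Extensions.Logging (the service, method, and parameter names are made up for illustration):

using System;
using Microsoft.Extensions.Logging;

public class PaymentService
{
    private readonly ILogger<PaymentService> _logger;

    public PaymentService(ILogger<PaymentService> logger)
    {
        _logger = logger;
    }

    public void Charge(string customerId, decimal amount)
    {
        try
        {
            // ... the actual work ...
        }
        catch (Exception ex)
        {
            // Named placeholders are captured as structured properties,
            // so the failing inputs can be queried later, not just read.
            _logger.LogError(ex, "Charge failed for {CustomerId} with amount {Amount}", customerId, amount);
            throw;
        }
    }
}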

Logging to a backup file using ETW

Is there any way I can use the Microsoft.Diagnostics.Tracing.EventSource package, or any other built-in .NET types, to implement backup logic for logging?
I have used an event session that directs all logs to an .etl file, but there are two problems with this:
The most important one is that, from what I understand, the logs are only actually written to the file when the session stops. But what happens if the machine shuts down or the process gets killed? From my tests, the logs are lost.
The second problem is less important: it would simply be more convenient for my solution to be able to read from the file as logs come in.
Basically, I would like a file buffer for my logs, and I was wondering if there is some built-in implementation for exactly this.
Yes, EventSource's WriteEvent methods are non-blocking, so they do not guarantee that logs have actually been saved. And yes, you cannot read ETL files while they are in use.
ETW is not the right option for you in this case. You could use a common TraceSource with two listeners: one configured to auto-flush to a file, and another acting as an ETW adapter, which you could listen to through a real-time ETW session. Alternatively, use log4net and choose an appender that works for your case.
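A rough sketch of that two-listener TraceSource configuration (the source name, file path, and provider GUID are placeholders; EventProviderTraceListener ships in System.Core on .NET 3.5 and later):

<system.diagnostics>
  <trace autoflush="true" />
  <sources>
    <source name="MyAppSource" switchValue="Verbose">
      <listeners>
        <!-- plain file listener, flushed on every write -->
        <add name="file"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="C:\Logs\myapp.log" />
        <!-- ETW adapter; a real-time ETW session can subscribe to this provider GUID -->
        <add name="etw"
             type="System.Diagnostics.Eventing.EventProviderTraceListener, System.Core"
             initializeData="{11111111-2222-3333-4444-555555555555}" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>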

How can I turn a trace on and off in Enterprise Library without restarting the service?

So in our project, we have Enterprise Library as our logger. Obviously, in production, we don't want our debug trace logging out all of the time, but we would like to be able to turn that debug trace on without restarting the app pool, and I'm not quite sure how to go about it.
In the past, I'd have written something that would only log to the file if the file was present. For instance, if I wanted to enable debug trace, I'd have the NOC drop a file in the log folder; the next time the logger was invoked, it would see that the file was there and start dumping to it. It seems kind of cheesy, but it works. However, with a website, I'd have to write a full-blown asynchronous logger that knew how to queue information to be written, and I don't really want to do that, nor do I know how to achieve this using the Enterprise Library logger. Any ideas on this one would be great.
The other thought was to have a flag in the config file or registry that could be changed while the site is running, which would enable that same trace.
I'm looking for ideas or solutions. The main concern is that we need to be able to enable and disable this logging on the fly.
I wasn't able to find a built-in way to do this with Enterprise Library, so this is what I ended up doing, and it seems to work; in fact, I extended the idea a bit further by adding a priority override check.
I created a proxy class for writing to the EL LogWriter. It checks for the existence of a file (priority_override.txt in this case). If the file is present, it reads it and searches for the text PriorityLevel={n}, where n is the override. If it finds an override number, it replaces the priority level that was provided in the code. I use this to force a high priority on all log statements so that they don't get blocked by the filter.
My default filter prevents anything below priority 5 from reaching the listeners; this trick lets me override that by temporarily increasing the priority of all logs. Since Debug is generally priority 1, it normally doesn't get logged. If I drop a file called priority_override.txt with the contents PriorityLevel=99 into the log directory, then ALL log statements reach the configured listeners and are handled accordingly. Everything else is just a normal matter of configuring EL with the proper categories, priorities, and so on. For instance, if my highest priority is 5 and that triggers an email, then I would just override to 4 so that everything gets logged but emails are not sent. When we're done troubleshooting in production, we simply delete priority_override.txt and everything returns to normal.
Additionally, we no longer have to manage config file changes separately for our test environments. We can just leave a priority_override.txt in the log folder of each respective environment.
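A condensed sketch of the proxy described above (the file location, regex, and entry point are illustrative, not the exact production code):

using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class LogWriterProxy
{
    // Assumed location of the override file; adjust to your log folder.
    private const string OverrideFile = @"C:\Logs\priority_override.txt";

    public static void Write(string message, string category, int priority)
    {
        int effectivePriority = priority;
        try
        {
            if (File.Exists(OverrideFile))
            {
                // Look for "PriorityLevel={n}" and use n as the override.
                var match = Regex.Match(File.ReadAllText(OverrideFile), @"PriorityLevel=(\d+)");
                if (match.Success)
                {
                    effectivePriority = int.Parse(match.Groups[1].Value);
                }
            }
        }
        catch (IOException)
        {
            // If the file is mid-edit or unreadable, fall back to the original priority.
        }

        Logger.Write(new LogEntry
        {
            Message = message,
            Priority = effectivePriority,
            Categories = { category }
        });
    }
}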

Viewing output of multiple .Net console apps in one place

I have a C# console app which I'm deploying around 20 times (with different config settings) and running. As you might imagine, it's hard to keep an eye on what's happening with 20 apps running (I'm eventually going to deploy these as Windows services), so is there anything that can show the output of all of them in one place easily?
I've thought about log files, but these could get big quite fast, and that is a lot of files to open and look at - I just want some output to check that things are still running as expected.
Edit:
I'm going to be writing errors and stop/start information to the database. What I'm talking about here is the general processing information, which isn't all that relevant to revisit, but is interesting to watch while it's running in the console app.
I have successfully used log4net and its configurable UdpAppender. You can then point all the UdpAppenders at a single machine, where you can receive the UDP messages with Log4View, for example.
Since it's configurable, you can use it when you install and debug in production and then increase the logging level to only output ERROR messages instead of DEBUG or INFO messages.
http://logging.apache.org/log4net/
http://www.log4view.com
http://logging.apache.org/log4net/release/config-examples.html
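A sketch of the relevant log4net configuration (the address and port are placeholders; the log4j XML layout is the format viewers such as Log4View typically expect):

<log4net>
  <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
    <remoteAddress value="192.168.1.100" />
    <remotePort value="8080" />
    <layout type="log4net.Layout.XmlLayoutSchemaLog4j" />
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="UdpAppender" />
  </root>
</log4net>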
Maybe it's because I come from a heavy DB background, but how about using SQL Server with a Log table to track activity across the different apps?
DBs are geared towards concurrency and will easily handle multiple applications inserting into the same Log table, and you also get the option of slicing and dicing through the data as much as you like, taking advantage of the aggregation functions already built into a DB environment.
If you go down that route, you will probably need to consider maintaining that table (Log retention period, etc.).
You could also potentially start using tools such as Splunk to collate all the log data, and start corresponding app failures to system or environment failures (if these are being tracked).
I'd second Mikael Östberg and recommend using a logger library (log4net or NLog). There are many options for sending messages to a database, to queues, and so on. Since you can turn logging on or off easily, you can even keep it in your services as a monitoring hook in case something weird happens.

Logging from multiple processes to same file using Enterprise Library 4.1

I have several processes running concurrently that I want to log to the same file.
We have been using Enterprise Library 4.1 Logging Application Block (with a RollingFlatFileTraceListener), and it works fine, apart from the fact that it prepends a GUID to the log file name when two processes try to write to the log file at the same time (a quirk of System.Diagnostics.TextWriterTraceListener I believe).
I've tried various things, including calling Logger.Writer.Dispose() after writing to the log file, but it's not ideal to make a blocking call each time a log entry is written.
The EntLib forums suggest using MSMQ with a Distributor Service, but that is not an option as MSMQ is not allowed at my company.
Is there another way I can quickly and easily log from multiple threads/processes to the same file?
Sorry to say, but the answer is no. The file-based TraceListeners lock the output file, so only one TraceListener can log to a given file.
You can try other Trace Listeners that are not file based (e.g. Database, Event Log).
Another option I can think of would be to write your own out-of-process logging service that accepts LogEntries and writes them to the file, and then create a custom trace listener that sends each message to that service.
It might not be a good idea, since it involves a bit of custom development, and it could impact performance since every write is an out-of-process call. Basically, you would be setting up your own simplified pseudo-distributor service.
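A minimal sketch of what such a custom trace listener could look like, assuming a hypothetical collector process listening on a named pipe called LogCollectorPipe (opening a pipe per write is deliberately simple and slow; a real version would keep the connection open or queue entries):

using System;
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;
using System.Text;

public class PipeTraceListener : TraceListener
{
    private const string PipeName = "LogCollectorPipe"; // assumed pipe name

    public override void Write(string message)
    {
        Send(message);
    }

    public override void WriteLine(string message)
    {
        Send(message + Environment.NewLine);
    }

    private static void Send(string message)
    {
        try
        {
            using (var pipe = new NamedPipeClientStream(".", PipeName, PipeDirection.Out))
            {
                pipe.Connect(100); // fail fast if the collector is down
                byte[] bytes = Encoding.UTF8.GetBytes(message);
                pipe.Write(bytes, 0, bytes.Length);
            }
        }
        catch (TimeoutException)
        {
            // Drop the entry rather than block the application.
        }
        catch (IOException)
        {
            // Collector went away mid-write; drop the entry.
        }
    }
}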
EntLib locks the log file when it writes to it. Therefore, 2 processes cannot write to the same log file.
When we have had this problem, that we needed to log from many difference places, to the same place, we have used database logging.
If you are 100% stuck logging to a text file, then you could log to individual log files, and then write a program to merge these files.
I know this is old, but if you are still curious: log4net supports this:
http://logging.apache.org/log4net/release/faq.html#How do I get multiple process to log to the same file?
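What that FAQ entry describes is the minimal locking model, which acquires the file lock only for the duration of each write so several processes can share one file. A configuration sketch (the file path is a placeholder, and minimal locking trades write performance for shareability):

<appender name="SharedFile" type="log4net.Appender.RollingFileAppender">
  <file value="C:\Logs\shared.log" />
  <appendToFile value="true" />
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %level %logger - %message%newline" />
  </layout>
</appender>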
The problem occurs when the app pool recycles and allows overlapping worker processes: the shutting-down process still has the file open, and the new process gets the error. Try disabling the overlapping recycling behaviour in IIS, or create your own version of the text writer.
