When tracing to a TraceSource, should the "trace level" be checked before issuing the trace itself?
var ts = new TraceSource("foo");
ts.Switch.Level = SourceLevels.Warning;
if (/* should there be a guard here? and if so, what? */) {
ts.TraceEvent(TraceEventType.Warning, 0, "bar");
}
While there is SourceSwitch.ShouldTrace(TraceEventType), the documentation indicates
Application code should not call this method; it is intended to be called only by methods in the TraceSource class.
It appears that the older, pre-TraceSource model employed the TraceSwitch (not SourceSwitch) class, which exposed various TraceXYZ members (presumably for this purpose?), but nothing equivalent seems to be needed, used, or mentioned with the TraceSource model.
(Having the guard outside the trace method affects evaluation of the expressions used in the call - of course, side effects or computationally expensive operations there are "bad" and "ill-advised", but I'd still like to focus on the primary question.)
Regarding expensive computation of trace parameters, I came up with the following:
internal sealed class LazyToString
{
private readonly Func<object> valueGetter;
public LazyToString(Func<object> valueGetter)
{
this.valueGetter = valueGetter;
}
public override string ToString()
{
return this.valueGetter().ToString();
}
}
The usage would be
traceSource.TraceEvent(TraceEventType.Verbose, 0, "output: {0}", new LazyToString(() =>
{
// code here would be executed only when needed by TraceSource
// so it can contain some expensive computations
return "1";
}));
Any better idea?
I know that in NLog you generally just trace at whatever level you want and the library takes care of deciding whether that level should actually be logged.
To me it looks like TraceSource works the same way.
So I would say "No" you probably shouldn't check.
Test it out by setting different trace levels and tracing messages at different levels and see what gets traced.
I think in terms of performance you are generally ok if you use the methods defined on the class:
Based on an example from: http://msdn.microsoft.com/en-us/library/sdzz33s6.aspx
This is good:
ts.TraceEvent(TraceEventType.Verbose, 3, "File {0} not found.", "test");
This would be bad:
string potentialErrorMessageToDisplay = string.Format( "File {0} not found.", "test" );
ts.TraceEvent(TraceEventType.Verbose, 3, potentialErrorMessageToDisplay );
In the first case the library probably avoids the call to string.Format if the error level won't be logged anyway. In the second case, string.Format is always called.
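Conceptually (this is my simplified sketch, not the actual framework source), the first form lets the level check happen inside the call, before any formatting work:
// Rough idea of what TraceEvent(type, id, format, args) does internally:
// consult the switch first, and only then apply string.Format.
if (ts.Switch.ShouldTrace(TraceEventType.Verbose))
{
    string message = string.Format("File {0} not found.", "test");
    // ... message is then handed to the configured listeners
}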
Are the strings you provide to the message argument expensive? A constant or literal is pretty cheap. If that is the case, don't worry about it; use the trace switch / trace listener filters, etc. to reduce the amount of trace processed (and the perf cost of trace). (BTW, the default trace listener is very expensive - always clear the trace listeners before adding the ones you want.)
System.Diagnostics doesn't have anything to make an inactive TraceSource invocation costless. Even if you use the listener filters, or set the trace switch to zero (turn it off), TraceEvent will still be invoked and the message string will be constructed.
Imagine that the trace string is expensive to calculate - for example, it iterates across all the rows in a dataset and dumps them to a string. That could take a non-trivial number of milliseconds.
To get around this you can wrap the string-building part in a method marked with a conditional attribute to turn it off in release builds, or use a wrapper method that takes a lambda expression or a Func that creates the string (and isn't executed when not needed).
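A rough sketch of both options (the TraceSourceExtensions/TraceVerbose names are mine, not an existing API):
using System;
using System.Diagnostics;

internal static class TraceSourceExtensions
{
    // Option 1: calls to this method (including evaluation of the argument)
    // are removed entirely from builds that don't define the TRACE symbol.
    [Conditional("TRACE")]
    public static void TraceVerbose(this TraceSource source, string message)
    {
        source.TraceEvent(TraceEventType.Verbose, 0, message);
    }

    // Option 2: the expensive string is only built when the switch allows it.
    public static void TraceVerbose(this TraceSource source, Func<string> messageFactory)
    {
        if (source.Switch.ShouldTrace(TraceEventType.Verbose))
        {
            source.TraceEvent(TraceEventType.Verbose, 0, messageFactory());
        }
    }
}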
Like #nexuzzz suggests, there could be situations where calculation of an event parameter is expensive. Here is what I could think of.
The suggestion to developers would be: "If you don't have a string argument readily available, use the lambda version of TraceInformation or TraceWarning."
public class TraceSourceLogger : ILogger
{
private TraceSource _traceSource;
public TraceSourceLogger(object that)
{
_traceSource = new TraceSource(that.GetType().Namespace);
}
public void TraceInformation(string message)
{
_traceSource.TraceInformation(message);
}
public void TraceWarning(string message)
{
_traceSource.TraceEvent(TraceEventType.Warning, 1, message);
}
public void TraceError(Exception ex)
{
_traceSource.TraceEvent(TraceEventType.Error, 2, ex.Message);
_traceSource.TraceData(TraceEventType.Error, 2, ex);
}
public void TraceInformation(Func<string> messageProvider)
{
if (_traceSource.Switch.ShouldTrace(TraceEventType.Information))
{
TraceInformation(messageProvider());
}
}
public void TraceWarning(Func<string> messageProvider)
{
if (_traceSource.Switch.ShouldTrace(TraceEventType.Warning))
{
TraceWarning(messageProvider());
}
}
}
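Hypothetical usage of the lambda overloads (BuildExpensiveDump is just a placeholder for whatever costly computation you would otherwise do up front):
ILogger logger = new TraceSourceLogger(this);

// The dump is only produced when the SourceSwitch actually allows Warning events.
logger.TraceWarning(() => string.Format("Slow query dump: {0}", BuildExpensiveDump()));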
I'm trying to create my own cache implementation for an API. It is the first time I've worked with ConcurrentDictionary and I do not know if I am using it correctly. In a test, something threw an error and so far I have not been able to reproduce it again. Maybe a concurrency / ConcurrentDictionary professional can look at the code and find what may be wrong. Thank you!
private static readonly ConcurrentDictionary<string, ThrottleInfo> CacheList = new ConcurrentDictionary<string, ThrottleInfo>();
public override void OnActionExecuting(HttpActionContext actionExecutingContext)
{
if (CacheList.TryGetValue(userIdentifier, out var throttleInfo))
{
if (DateTime.Now >= throttleInfo.ExpiresOn)
{
if (CacheList.TryRemove(userIdentifier, out _))
{
//TODO:
}
}
else
{
if (throttleInfo.RequestCount >= defaultMaxRequest)
{
actionExecutingContext.Response = ResponseMessageExtension.TooManyRequestHttpResponseMessage();
}
else
{
throttleInfo.Increment();
}
}
}
else
{
if (CacheList.TryAdd(userIdentifier, new ThrottleInfo(Seconds)))
{
//TODO:
}
}
}
public class ThrottleInfo
{
private int _requestCount;
public int RequestCount => _requestCount;
public ThrottleInfo(int addSeconds)
{
Interlocked.Increment(ref _requestCount);
ExpiresOn = ExpiresOn.AddSeconds(addSeconds);
}
public void Increment()
{
// this is about as thread safe as you can get.
// From MSDN: Increments a specified variable and stores the result, as an atomic operation.
Interlocked.Increment(ref _requestCount);
// You can return the result of Increment if you want the new value,
// but DO NOT assign the result back to the counter
// (i.e. counter = Interlocked.Increment(ref counter);) - that would break the atomicity.
}
public DateTime ExpiresOn { get; } = DateTime.Now;
}
If I understand what you are trying to do: if ExpiresOn has passed, remove the entry; otherwise update it, or add it if it does not exist.
You can certainly take advantage of the AddOrUpdate method to simplify some of your code.
Take a look here for some good examples: https://learn.microsoft.com/en-us/dotnet/standard/collections/thread-safe/how-to-add-and-remove-items
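For example, a rough sketch (my guess at how it could map onto your code, not tested) of the AddOrUpdate shape:
CacheList.AddOrUpdate(
    userIdentifier,
    key => new ThrottleInfo(Seconds),            // key not present: start a fresh window
    (key, existing) =>
    {
        if (DateTime.Now >= existing.ExpiresOn)
            return new ThrottleInfo(Seconds);    // window expired: replace the entry
        existing.Increment();                    // still inside the window: count the request
        return existing;
    });
Be aware that the update delegate can be invoked more than once under contention, so a side effect such as Increment() inside it is something to be careful with.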
Hope this helps.
The ConcurrentDictionary is sufficient as a thread-safe container only in cases where (1) the whole state that needs protection is its internal state (the keys and values it contains), and only if (2) this state can be mutated atomically using the specialized API it offers (GetOrAdd, AddOrUpdate). In your case the second requirement is not met, because you need to remove keys conditionally depending on the state of their value, and this scenario is not supported by the ConcurrentDictionary class.
So your current cache implementation is not thread safe. The fact that it throws exceptions sporadically is a coincidence. It would still be non-thread-safe if it were totally throw-proof, because it would not be error-proof, meaning that it could occasionally (or permanently) transition to a state incompatible with its specifications (returning expired values, for example).
Regarding the ThrottleInfo class, it suffers from a visibility bug that could remain unobserved if you tested the class extensively on one machine, and then suddenly emerge when you deploy your app on another machine with a different CPU architecture. The non-volatile private int _requestCount field is exposed through the public property RequestCount, so there is no guarantee (based on the C# specification) that all threads will see its most recent value. You can read this article by Igor Ostrovsky about the peculiarities of memory models, which may convince you (like me) that employing lock-free techniques (using the Interlocked class in this case) in multithreaded code is more trouble than it's worth. If you read it and like it, there is also a part 2 of that article.
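For completeness, a minimal sketch of the visibility fix being described (trimmed to just the counter; this is my illustration, and the point above about preferring simpler locking still stands):
using System.Threading;

public class ThrottleInfo
{
    private int _requestCount;

    // Volatile.Read pairs with the Interlocked.Increment writes so that every
    // thread observes the latest value (declaring the field volatile is the
    // simpler alternative).
    public int RequestCount => Volatile.Read(ref _requestCount);

    public void Increment() => Interlocked.Increment(ref _requestCount);
}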
We have a central STATIC method that gets called from many different locations of our ASP.NET application.
I need to add some conditional logic to the static method that needs to run only if the method was called from a specific class. One approach would be to add an additional parameter to the static method's signature -- some kind of enum that would represent which class called this static method, but I was hoping .NET offered a more elegant approach.
EDIT: See Sample Code below
I am trying to modify how exceptions are handled. Currently, if we are processing 1000 checks, and there is an exception inside the loop at check 500, checks 500 - 1000 will not be processed.
We have several screens on our website that call this central method. One of them is called Check Creation Wizard, another ACH Creation Wizard, etc. For the ACH Creation Wizard, we want to handle exceptions by simply skipping a failed check and moving on to the rest of the checks. However, for all other wizards, we want to continue failing the remaining batch of checks if one fails.
public static string GenerateChecks(List<CheckJob> checkJobs)
{
foreach (CheckJob check in checkJobs)
{
try
{
bool didGenerate = DoGenerate(check);
if(didGenerate)
{
Account acct = LoadAccount(check.GetParent());
ModifyAccount(acct);
SaveAcct(acct);
}
}
catch (Exception ex)
{
if (Transaction.IsInTransaction)
{
Transaction.Rollback();
}
throw;
}
}
}
This all smells from afar. You can do this in many ways, but detecting the calling class is the wrong way.
Either make a different static method for this specific other class, or have an additional argument.
If you insist on detecting the caller, this can be done in several ways:
Use the stack trace:
var stackFrame = new StackFrame(1);
var callerMethod = stackFrame.GetMethod();
var callingClass = callerMethod.DeclaringType; // <-- this should be your calling class
if(callingClass == typeof(myClass))
{
// do whatever
}
If you use .NET 4.5, you can use caller information attributes. They don't give you the class specifically, but you can get the caller's member name and source file path as known at compile time. Add a parameter with a default value decorated with [CallerMemberName] or [CallerFilePath], for example:
static MyMethod([CallerFilePath]string callerFile = "")
{
if(callerFile != "")
{
var callerFilename = Path.GetFileName(callerFile);
if(callerFilename == "myClass.cs")
{
// do whatever
}
}
}
Simply use an additional parameter with a default value (or any kind of different signature)
Note that option 1 is very slow, and option 2 is just awful... so better yet: use a different method if you need a different process.
Update
After looking at your code, it's even more clear that you want either two different methods or an argument... for example:
public static string GenerateChecks(List<CheckJob> checkJobs, bool throwOnError = true)
{
//...
catch (Exception ex)
{
if(throwOnError)
{
if (Transaction.IsInTransaction)
{
Transaction.Rollback();
}
throw;
}
}
}
And then pass false when you want to keep going.
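Call sites would then look something like this:
// ACH Creation Wizard: skip a failed check and keep going
GenerateChecks(checkJobs, throwOnError: false);

// Everywhere else: keep the existing fail-the-batch behaviour (the default)
GenerateChecks(checkJobs);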
You never make a decision on what to do based on who called you. You allow the caller to make that decision by providing a feature.
You want a single method to do two different things on error. So either (1) write two methods, and have the caller decide which one to call, or (2) make the method take a Boolean that changes its behaviour, and have the caller decide which Boolean to pass, true or false.
Adding a parameter is definitely more "elegant". Make the parameter optional (by providing a default value, e.g. bool and false) and only execute the special code if the parameter is explicitly set to true.
The alternative, though not as "elegant" as you can read from the comments, would be to search the StackTrace for the calling code.
I think you can use the StackTrace class, but this logic is not very good.
You can use StackTrace like this
static void Main(string[] args)
{
Do();
}
static void Do()
{
DosomethingElse();
}
private static void DosomethingElse()
{
StackTrace stackTrace = new StackTrace();
foreach (StackFrame Frame in stackTrace.GetFrames())
{
Console.WriteLine(Frame);
}
}
and this would be the output
{DosomethingElse at offset 77 in file:line:column <filename unknown>:0:0}
{Do at offset 37 in file:line:column <filename unknown>:0:0}
{Main at offset 40 in file:line:column <filename unknown>:0:0}
....
I'm building up tooling for a piece of software and would like to be able save out the boolean expression in the source code that exists in a Debug.Assert (or Trace.Assert) call.
For example, if the program crashes with:
var x = -1;
Debug.Assert(x >= 0, "X must be non-negative", "some detail message");
I'd like to be able to get out the string "x >= 0" as well as the message and detail message.
I've looked into using a TraceListener, but TraceListener#Fail(string) and TraceListener#Fail(string, string) can only capture the message and detail message fields (which, in case a developer does not include them, leaves me with no easy way to report what went wrong).
I suppose it's possible to create a stack trace and read the particular line that failed and report that (assuming the source code is available), but this seems relatively fragile.
Thanks for your time!
You can use expressions to accomplish something rough:
public static class DebugEx
{
[Conditional("DEBUG")]
public static void Assert(Expression<Func<bool>> assertion, string message)
{
Debug.Assert(assertion.Compile()(), message, assertion.Body.ToString());
}
}
and use it like so:
var i = -1;
DebugEx.Assert(() => i > 0, "Message");
There are some downsides to this. The first is that you have to use a lambda, which complicates the syntax a little bit. The second is that since we are dynamically compiling things, there is a performance hit. Since this will only happen in Debug mode (hence the conditional), the performance loss won't be seen in Release mode.
Lastly, the output isn't pretty. It'll look something like this:
(value(WindowsFormsApplication1.Form1+<>c__DisplayClass0).i > 0)
There isn't a whole lot you can do about this. The reason this happens is because of the closure around i. This is actually accurate since that is what it gets compiled down into.
I had already started typing an answer when #vcsjones posted his, so I abandoned mine, but I see there are some parts of it that are still relevant - primarily with regards to formatting the lambda expression into something readable - so I will merge his answer with that part of my intended answer.
It uses a number of regular expressions to format the assertion expression, so that in many cases it will look decent (i.e. close to what you typed).
For the example given in #vcsjones' answer it will now look like this:
Assertion '(i > 0)' failed.
public static class DebugEx
{
private static readonly Dictionary<Regex, string> _replacements;
static DebugEx()
{
_replacements = new Dictionary<Regex,string>()
{
{new Regex("value\\([^)]*\\)\\."), string.Empty},
{new Regex("\\(\\)\\."), string.Empty},
{new Regex("\\(\\)\\ =>"), string.Empty},
{new Regex("Not"), "!"}
};
}
[Conditional("DEBUG")]
public static void Assert(Expression<Func<bool>> assertion, string message)
{
if (!assertion.Compile()())
Debug.Assert(false, message, FormatFailure(assertion));
}
private static string FormatFailure(Expression assertion)
{
return string.Format("Assertion '{0}' failed.", Normalize(assertion.ToString()));
}
private static string Normalize(string expression)
{
string result = expression;
foreach (var pattern in _replacements)
{
result = pattern.Key.Replace(result, pattern.Value);
}
return result.Trim();
}
}
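Usage stays the same as in #vcsjones' version; with the example from the question, the detail message now comes out as the cleaned-up text shown above:
var i = -1;
// On failure, the assertion's detail message is "Assertion '(i > 0)' failed."
DebugEx.Assert(() => i > 0, "Message");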
When invoking UpdatePerformanceCounters: in this updater, all the counter names for the category and instance counters are the same - they are always derived from an enum. The updater is passed a "profile", typically with content such as:
{saTrilogy.Core.Instrumentation.PerformanceCounterProfile}
_disposed: false
CategoryDescription: "Timed function for a Data Access process"
CategoryName: "saTrilogy<Core> DataAccess Span"
Duration: 405414
EndTicks: 212442328815
InstanceName: "saTrilogy.Core.DataAccess.ParameterCatalogue..ctor::[dbo].[sp_KernelProcedures]"
LogFormattedEntry: "{\"CategoryName\":\"saTrilogy<Core> DataAccess ...
StartTicks: 212441923401
Note the "complexity" of the Instance name.
The toUpdate.AddRange() of the VerifyCounterExistence method always succeeds and produces the "expected" output so the UpdatePerformanceCounters method continues through to the "successful" incrementing of the counters.
Despite the "catch" this never "fails" - except, when viewing the Category in PerfMon, it shows no instances or, therefore, any "successful" update of an instance counter.
I suspect my problem may be that my instance name is being rejected, without exception, because of its "complexity" - when I run this through a console tester via PerfView it does not show any exception stack and the ETW events associated with counter updates are successfully recorded in an out-of-process sink. Also, there are no entries in the Windows Logs.
This is all being run "locally" via VS2012 on a Windows 2008R2 server with .NET 4.5.
Does anyone have any ideas of how else I may try this - or even test if the "update" is being accepted by PerfMon?
public sealed class Performance {
private enum ProcessCounterNames {
[Description("Total Process Invocation Count")]
TotalProcessInvocationCount,
[Description("Average Process Invocation Rate per second")]
AverageProcessInvocationRate,
[Description("Average Duration per Process Invocation")]
AverageProcessInvocationDuration,
[Description("Average Time per Process Invocation - Base")]
AverageProcessTimeBase
}
private readonly static CounterCreationDataCollection ProcessCounterCollection = new CounterCreationDataCollection{
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.TotalProcessInvocationCount),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.TotalProcessInvocationCount),
PerformanceCounterType.NumberOfItems32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationRate),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationRate),
PerformanceCounterType.RateOfCountsPerSecond32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationDuration),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationDuration),
PerformanceCounterType.AverageTimer32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessTimeBase),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessTimeBase),
PerformanceCounterType.AverageBase),
};
private static bool VerifyCounterExistence(PerformanceCounterProfile profile, out List<PerformanceCounter> toUpdate) {
toUpdate = new List<PerformanceCounter>();
bool willUpdate = true;
try {
if (!PerformanceCounterCategory.Exists(profile.CategoryName)) {
PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
}
toUpdate.AddRange(Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(profile.CategoryName, counterName, profile.InstanceName, false) { MachineName = "." }));
}
catch (Exception error) {
Kernel.Log.Trace(Reflector.ResolveCaller<Performance>(), EventSourceMethods.Kernel_Error, new PacketUpdater {
Message = StandardMessage.PerformanceCounterError,
Data = new Dictionary<string, object> { { "Instance", profile.LogFormattedEntry } },
Error = error
});
willUpdate = false;
}
return willUpdate;
}
public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
List<PerformanceCounter> toUpdate;
if (profile.Duration <= 0 || !VerifyCounterExistence(profile, out toUpdate)) {
return;
}
foreach (PerformanceCounter counter in toUpdate) {
if (Equals(PerformanceCounterType.RateOfCountsPerSecond32, counter.CounterType)) {
counter.IncrementBy(profile.Duration);
}
else {
counter.Increment();
}
}
}
}
From MSDN .Net 4.5 PerformanceCounter.InstanceName Property (http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecounter.instancename.aspx)...
Note: Instance names must be shorter than 128 characters in length.
Note: Do not use the characters "(", ")", "#", "\", or "/" in the instance name. If any of these characters are used, the Performance Console (see Runtime Profiling) may not correctly display the instance values.
The instance name of 79 characters that I use above satisfies these conditions so, unless ".", ":", "[" and "]" are also "reserved", the name would not appear to be the issue. I also tried a 64-character substring of the instance name - just in case - as well as a plain "test" string, all to no avail.
Changes...
Apart from the Enum and the ProcessCounterCollection I have replaced the class body with the following:
private static readonly Dictionary<string, List<PerformanceCounter>> definedInstanceCounters = new Dictionary<string, List<PerformanceCounter>>();
private static void UpdateDefinedInstanceCounterDictionary(string dictionaryKey, string categoryName, string instanceName = null) {
definedInstanceCounters.Add(
dictionaryKey,
!PerformanceCounterCategory.InstanceExists(instanceName ?? "Total", categoryName)
? Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(categoryName, counterName, instanceName ?? "Total", false) { RawValue = 0, MachineName = "." }).ToList()
: PerformanceCounterCategory.GetCategories().First(category => category.CategoryName == categoryName).GetCounters().Where(counter => counter.InstanceName == (instanceName ?? "Total")).ToList());
}
public static void InitialisationCategoryVerify(IReadOnlyCollection<PerformanceCounterProfile> etwProfiles){
foreach (PerformanceCounterProfile profile in etwProfiles){
if (!PerformanceCounterCategory.Exists(profile.CategoryName)){
PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
}
UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName);
}
}
public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
if (!definedInstanceCounters.ContainsKey(profile.DictionaryKey)) {
UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName, profile.InstanceName);
}
definedInstanceCounters[profile.DictionaryKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
definedInstanceCounters[profile.TotalInstanceKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
}
}
In the PerformanceCounter Profile I've added:
internal string DictionaryKey {
get {
return String.Concat(CategoryName, " - ", InstanceName ?? "Total");
}
}
internal string TotalInstanceKey {
get {
return String.Concat(CategoryName, " - Total");
}
}
The ETW EventSource now does the initialisation for the "pre-defined" performance categories whilst also creating an instance called "Total".
PerformanceCategoryProfile = Enum<EventSourceMethods>.GetValues().ToDictionary(esm => esm, esm => new PerformanceCounterProfile(String.Concat("saTrilogy<Core> ", Enum<EventSourceMethods>.GetName(esm).Replace("_", " ")), Enum<EventSourceMethods>.GetDescription(esm)));
Performance.InitialisationCategoryVerify(PerformanceCategoryProfile.Values.Where(v => !v.CategoryName.EndsWith("Trace")).ToArray());
This creates all of the categories, as expected, but in PerfMon I still cannot see any instances - not even the "Total" instance - and the update always, apparently, runs without error.
I don't know what else I can change - I'm probably "too close" to the problem - and would appreciate comments/corrections.
These are the conclusions and the "answer", insofar as it explains, to the best of my ability, what I believe is happening, posted by myself - given my recent helpful use of Stack Overflow this, I hope, will be of use to others...
Firstly, there is essentially nothing wrong with the code displayed, excepting one proviso mentioned later. Putting a Console.ReadKey() before program termination, after having done a PerformanceCounterCategory(categoryKey).ReadCategory(), makes it quite clear that not only are the registry entries correct (for this is where ReadCategory sources its results) but that the instance counters have all been incremented by the appropriate values. If one looks at PerfMon before the program terminates, the instance counters are there and they do contain the appropriate raw values.
This is the crux of my "problem" - or, rather, my incomplete understanding of the architecture: INSTANCE COUNTERS ARE TRANSIENT - INSTANCES ARE NOT PERSISTED BEYOND THE TERMINATION OF A PROGRAM/PROCESS. This, once it dawned on me, is "obvious" - for example, try using PerfMon to look at an instance counter of one of your IIS AppPools - then stop the AppPool and you will see, in PerfMon, that the Instance for the stopped AppPool is no longer visible.
Given this axiom about instance counters, the code above has another completely irrelevant section: in the UpdateDefinedInstanceCounterDictionary method, assigning the list from an existing counter set is pointless. Firstly, the "else" code shown will fail, since we are attempting to return a collection of (instance) counters for which this approach will not work, and, secondly, GetCategories() followed by GetCounters() and/or GetInstanceNames() is an extraordinarily expensive and time-consuming process - even if it were to work. The appropriate method to use is the one mentioned earlier - PerformanceCounterCategory(categoryKey).ReadCategory(). However, this returns an InstanceDataCollectionCollection which is effectively read-only, so as a provider (as opposed to a consumer) of counters it is pointless. In fact, it doesn't matter if you just use the enum-generated new PerformanceCounter list - it works regardless of whether the counters already exist or not.
Anyway, the InstanceDataCollectionCollection (this is essentially what is demonstrated by the Win32 SDK for .NET 3.5 "Usermode Counter Sample") uses a "Sample" counter which is populated and returned - as per the usage of the System.Diagnostics.PerformanceData namespace, which looks like part of the version 2.0 usage - and that usage is "incompatible" with the System.Diagnostics.PerformanceCounterCategory usage shown.
Admittedly, the fact of non-persistence may seem obvious and may well be stated in documentation but, if I were to read all the documentation about everything I need to use beforehand, I'd probably end up not actually writing any code! Furthermore, even if such pertinent documentation were easy to find (as opposed to experiences posted on, for example, Stack Overflow) I'm not sure I trust all of it. For example, I noted above that the instance name in the MSDN documentation has a 128 character limit - wrong; it is actually 127, since the underlying string must be null-terminated. Also, for example, for ETW, I wish it were made more obvious that keyword values must be powers of 2 and that opcodes with a value of less than 12 are used by the system - at least PerfView was able to show me this.
Ultimately this question has no "answer" other than a better understanding of instance counters - especially their persistence. Since my code is intended for use in a Windows Service based Web API, its persistence is not an issue (especially with daily use of LogMan etc.) - the confusing thing is that the damn things didn't appear until I paused the code and checked PerfMon, and I could have saved myself a lot of time and hassle if I had known this beforehand. In any event, my ETW event source logs all elapsed execution times and instances of what the performance counters "monitor" anyway.
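A rough sketch of the kind of check described above (my illustration, not code from the original post; the category name is the one from the profile shown earlier):
// Dump the raw values straight from the category data that ReadCategory returns,
// then pause so the instances are still alive for PerfMon to display.
var category = new PerformanceCounterCategory("saTrilogy<Core> DataAccess Span");
foreach (InstanceDataCollection counterData in category.ReadCategory().Values)
{
    foreach (InstanceData instance in counterData.Values)
    {
        Console.WriteLine("{0} [{1}] = {2}",
            counterData.CounterName, instance.InstanceName, instance.Sample.RawValue);
    }
}
Console.ReadKey(); // instance counters vanish once the creating process exits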
Currently I have a custom built static logging class in C# that can be called with the following code:
EventLogger.Log(EventLogger.EventType.Application, string.Format("AddData request from {0}", ipAddress));
When this is called it simply writes to a defined log file specified in a configuration file.
However, since I have to log many, many events, my code is starting to become hard to read because of all the logging messages.
Is there an established way to more or less separate logging code from objects and methods in a C# class so code doesn't become unruly?
Thank you all in advance for your help as this is something I have been struggling with lately.
I like the AOP features that PostSharp offers. In my opinion, logging is an aspect of any kind of software; logging isn't the main value an application should provide.
So in my case, PostSharp has always been fine. Spring.NET also has an AOP module which could be used to achieve this.
The most commonly used technique I have seen employs AOP in one form or another.
PostSharp is one product that does IL weaving as a form of AOP, though not the only way to do AOP in .NET.
A solution to this is to use Aspect-oriented programming in which you can separate these concerns. This is a pretty complex/invasive change though, so I'm not sure if it's feasible in your situation.
I used to have a custom-built logger but recently changed to TracerX. This provides a simple way to instrument the code with different levels of severity. Loggers can be created with names closely related to the class, etc., that you are working with.
It has a separate Viewer with a lot of filtering capabilities including logger, severity and so on.
http://tracerx.codeplex.com/
There is an article on it here: http://www.codeproject.com/KB/dotnet/TracerX.aspx
If your primary goal is to log function entry/exit points and occasional information in between, I've had good results with a disposable logging object where the constructor traces the function entry and Dispose() traces the exit. This allows calling code to simply wrap each method's code inside a single using statement. Methods are also provided for arbitrary logs in between. Here is a complete C# ETW event tracing class along with a function entry/exit wrapper:
using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;
using System.Reflection;
using System.Runtime.CompilerServices;
namespace MyExample
{
// This class traces function entry/exit
// Constructor is used to automatically log function entry.
// Dispose is used to automatically log function exit.
// use "using(FnTraceWrap x = new FnTraceWrap()){ function code }" pattern for function entry/exit tracing
public class FnTraceWrap : IDisposable
{
string methodName;
string className;
private bool _disposed = false;
public FnTraceWrap()
{
StackFrame frame;
MethodBase method;
frame = new StackFrame(1);
method = frame.GetMethod();
this.methodName = method.Name;
this.className = method.DeclaringType.Name;
MyEventSourceClass.Log.TraceEnter(this.className, this.methodName);
}
public void TraceMessage(string format, params object[] args)
{
string message = String.Format(format, args);
MyEventSourceClass.Log.TraceMessage(message);
}
public void Dispose()
{
if (!this._disposed)
{
this._disposed = true;
MyEventSourceClass.Log.TraceExit(this.className, this.methodName);
}
}
}
[EventSource(Name = "MyEventSource")]
sealed class MyEventSourceClass : EventSource
{
// Global singleton instance
public static MyEventSourceClass Log = new MyEventSourceClass();
private MyEventSourceClass()
{
}
[Event(1, Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
public void TraceMessage(string message)
{
WriteEvent(1, message);
}
[Event(2, Message = "{0}({1}) - {2}: {3}", Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
public void TraceCodeLine([CallerFilePath] string filePath = "",
[CallerLineNumber] int line = 0,
[CallerMemberName] string memberName = "", string message = "")
{
WriteEvent(2, filePath, line, memberName, message);
}
// Function-level entry and exit tracing
[Event(3, Message = "Entering {0}.{1}", Opcode = EventOpcode.Start, Level = EventLevel.Informational)]
public void TraceEnter(string className, string methodName)
{
WriteEvent(3, className, methodName);
}
[Event(4, Message = "Exiting {0}.{1}", Opcode = EventOpcode.Stop, Level = EventLevel.Informational)]
public void TraceExit(string className, string methodName)
{
WriteEvent(4, className, methodName);
}
}
}
Code that uses it will look something like this:
public void DoWork(string foo)
{
using (FnTraceWrap fnTrace = new FnTraceWrap())
{
fnTrace.TraceMessage("Doing work on {0}.", foo);
/*
code ...
*/
}
}
To make the code readable, only log what you really need to (info/warning/error). Log debug messages during development, but remove most when you are finished. For trace logging, use AOP to log simple things like method entry/exit (if you feel you need that kind of granularity).
Example:
public int SomeMethod(int arg)
{
Log.Trace("SomeClass.SomeMethod({0}), entering",arg); // A
if (arg < 0)
{
arg = -arg;
Log.Warn("Negative arg {0} was corrected", arg); // B
}
Log.Trace("SomeClass.SomeMethod({0}), returning.",arg); // C
return 2*arg;
}
In this example, the only necessary log statement is B. The log statements A and C are boilerplate logging that you can leave to PostSharp to insert for you instead.
Also: in your example you can see that there is some form of "Action X invoked by Y", which suggests that a lot of your code could in fact be moved up to a higher level (e.g. Command/Filter).
Your proliferation of logging statements could be telling you something: that some form of design pattern could be used, which could also centralize a lot of the logging.
void DoSomething(Command command, User user)
{
Log.Info("Command {0} invoked by {1}", command, user);
command.Process(user);
}
I think it is a good option to implement something similar to the filters in ASP.NET MVC. This is implemented with the help of attributes and reflection. You mark every method you want to log in a certain way and enjoy. I suppose there might be a better way to do it, maybe with the help of the Observer pattern or something, but as much as I thought about it I couldn't come up with anything better.
Basically such problems are called cross-cutting concerns and can be tackled with the help of AOP.
I also think that some interesting inheritance scheme could be applied, with log entities at the base, but I would go for filters.
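A rough sketch of that attribute idea (all names here are illustrative; the EventLogger.Log call just mirrors the one from the question):
using System;
using System.Reflection;

// Mark the methods whose invocations should be logged.
[AttributeUsage(AttributeTargets.Method)]
public sealed class LoggedAttribute : Attribute
{
    public EventLogger.EventType Type { get; set; }
}

public class DataService
{
    [Logged(Type = EventLogger.EventType.Application)]
    public void AddData(string ipAddress)
    {
        // business logic only - no logging noise here
    }
}

// Somewhere in a dispatch/interception layer, reflection decides whether to log:
public static class LoggingDispatcher
{
    public static void LogIfMarked(MethodInfo method, string details)
    {
        var marker = method.GetCustomAttribute<LoggedAttribute>();
        if (marker != null)
        {
            EventLogger.Log(marker.Type, string.Format("{0} invoked: {1}", method.Name, details));
        }
    }
}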