I am using NLog as my logging framework. I envisage I will have logs coming in from multiple sources (20-30+), and I want to be able to live-monitor them at will.
What Viewers (commercial or free) are the best to use?
I am currently rolling over my days and using C:\Logging as my "base" logging directory.
NLog FileName for trace is as follows:
C:\Logging\${appdomain:format={1\}}\${shortdate}\MyType.xml
I have Trace/Debug/Info/Warn/Error/Fatal all going into their own separate files (Debug.xml, Info.xml, Error.xml, etc.), all in the above file name format.
I also have a UDP target set up, and that is currently going to Sentinel. This works fine, and would be a great solution for me if Sentinel could set up multiple apps/tabs/receivers, but on the surface it seems I can only have one. The other problem is that I have millions of logs pumping through; last time I left it running for a while, it exhausted all the memory on my system.
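For reference, a rough programmatic equivalent of this setup, assuming NLog 4.x (my actual configuration is in NLog.config; the per-level ${level} file name, the viewer endpoint and the Log4J XML layout here are assumptions):

```csharp
// Rough sketch of the setup above, assuming NLog 4.x.
using NLog;
using NLog.Config;
using NLog.Layouts;
using NLog.Targets;

var config = new LoggingConfiguration();

// One file per level, rolled over by date via ${shortdate} in the path.
var fileTarget = new FileTarget("file")
{
    FileName = @"C:\Logging\${appdomain:format={1\}}\${shortdate}\${level}.xml",   // assumes ${level} names the per-level files
    Layout = new Log4JXmlEventLayout()
};
config.AddRule(LogLevel.Trace, LogLevel.Fatal, fileTarget);

// UDP target that Sentinel (or any log4j-XML viewer) can listen to.
var viewerTarget = new NLogViewerTarget("viewer")
{
    Address = "udp://127.0.0.1:9999"   // placeholder host/port
};
config.AddRule(LogLevel.Trace, LogLevel.Fatal, viewerTarget);

LogManager.Configuration = config;
```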
Ideally, what I would like is an application to which I simply add the "C:\Logging" folder as a "watch folder", and it keeps pumping out my logs, including detecting when a new file is created (for example, Fatal.xml), which would also handle date rollovers. Support for multiple receiver types (e.g. UDP) would be a bonus.
Not sure if Amazon is an option for you, but I ran across an AWS NLog Target that I was looking to implement. I am not capturing as many logs as you, but I do have logs coming from multiple servers. This would send the items written to the logs to an Amazon CloudWatch Logs target, where they are searchable in the console.
I am not sure about the bandwidth required to duplicate log items to AWS, but it would put them all in one place. CloudWatch retention has been increased, but if you do find issues, you could always go back to the text log files for more details beyond the retention period.
You could also set up CloudWatch Alarms to let you know if there are issues.
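For reference, the package I was looking at is AWS.Logger.NLog; if I remember it correctly, wiring the target up programmatically looks roughly like this (the log group name, region and level range are placeholders, and I haven't tested it at your volume):

```csharp
// Minimal sketch using the AWS.Logger.NLog package; log group and region are placeholders.
using NLog;
using NLog.Config;
using AWS.Logger.NLog;

var config = LogManager.Configuration ?? new LoggingConfiguration();

var awsTarget = new AWSTarget
{
    LogGroup = "MyCompany/MyApp",   // CloudWatch Logs log group (placeholder)
    Region = "us-east-1"
};

// Ship Info and above to CloudWatch; keep the full Trace/Debug detail in the local files.
config.AddRule(LogLevel.Info, LogLevel.Fatal, awsTarget);
LogManager.Configuration = config;
```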
I'm currently working on a chat API, and I receive multiple requests at the same time from different sessions, so it's almost impossible to track each conversation separately, because it mixes with all the logs from the other conversations.
So I want to dynamically create a separate file for each session (conversation), with the sessionId as the filename, but if I create multiple loggers, my application just freezes, because I can have more than 100 sessions simultaneously.
I have also tried changing the file path programmatically for each request to include its id, but that also freezes the application after 1-2 hours.
Is there any solution for this problem?
If these conversation files are so important, consider other options than logging. A database might be appropriate.
Another solution might be to parse the log files and split them into conversation files in a separate (logical?) process, perhaps later, after the session has ended. This way the program doesn't need to keep track of many files at the same time, and the parsing can be done faster/more efficiently.
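A very rough sketch of that splitting step, assuming each log line starts with the session id followed by a | separator (adjust the parsing to whatever layout you actually write):

```csharp
// Offline splitter: read the combined log and write one file per session.
using System;
using System.Collections.Generic;
using System.IO;

class LogSplitter
{
    static void Main()
    {
        Directory.CreateDirectory(@"C:\Logging\Sessions");   // placeholder output folder
        var writers = new Dictionary<string, StreamWriter>();

        foreach (var line in File.ReadLines(@"C:\Logging\chat.log"))   // placeholder combined log
        {
            int sep = line.IndexOf('|');
            if (sep <= 0) continue;                     // skip lines without a session id prefix
            string sessionId = line.Substring(0, sep);

            if (!writers.TryGetValue(sessionId, out var writer))
            {
                writer = new StreamWriter(Path.Combine(@"C:\Logging\Sessions", sessionId + ".log"), append: true);
                writers[sessionId] = writer;
            }
            writer.WriteLine(line);
        }

        foreach (var w in writers.Values) w.Dispose();
    }
}
```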
Is there any way I can use the Microsoft.Diagnostic.Tracing.EventSource package, or any other built-in .NET types, to implement backup logic for logging?
I have used an EventSession that directs all logs to an .ETL file but there are two problems with this:
The most important one is that, from what I understand, the logs are only actually written to the file when processing stops. But what happens if the machine shuts down or the process gets killed? From my tests, the logs are lost.
The second problem is less important and it's just that it would be more convenient for my solution to be able to read from the file as logs are coming in.
Basically I would like to have a file buffer for my logs and I was wondering if there's some built in implementation especially for this.
Yes, EventSource's WriteEvent methods are non-blocking, so they do not guarantee that logs have actually been saved. And yes, you cannot read ETL files while they are in use.
ETW is not the right option for you in this case. You could use a plain TraceSource with two listeners (one configured to auto-flush and another being an ETW adapter, which you could listen to through a real-time ETW session), or log4net (where you could choose an appender that works for your case).
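A minimal sketch of the two-listener TraceSource idea, assuming the full .NET Framework; the source name, provider GUID and file path are placeholders:

```csharp
// Two listeners on one TraceSource: a flushed text file plus an ETW adapter.
using System.Diagnostics;
using System.Diagnostics.Eventing;

class Program
{
    static void Main()
    {
        var source = new TraceSource("MyApp", SourceLevels.All);

        // Listener 1: plain text file; flushing after each write means entries survive a crash.
        source.Listeners.Add(new TextWriterTraceListener(@"C:\Logging\MyApp.trace.log"));

        // Listener 2: ETW adapter, consumable through a real-time ETW session.
        source.Listeners.Add(new EventProviderTraceListener("{D3C0A3A2-1111-2222-3333-444455556666}"));

        source.TraceEvent(TraceEventType.Information, 1, "Processing started");
        source.Flush();   // or set autoflush="true" under <system.diagnostics> in the .config
    }
}
```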
In our project, we have Enterprise Library as our logger. Obviously, in production, we don't want our debug trace logging out all of the time, but we would like to be able to turn on that debug trace without restarting the app pool, and I'm not quite sure how to go about it.
In the past, I'd have written something that would only log to the file if the file was present. So, for instance, if I wanted to enable debug trace, I'd have the NOC drop a file in the log folder; the next time the logger was invoked, it would see that the file was there and start dumping to it. Seems kind of cheesy, but it works. However, with a website, I'd have to write a full-blown asynchronous logger that knew how to queue information to be written, and I don't really want to do that, and I don't know how to achieve this using the Enterprise Library logger. Any ideas on this one would be great.
The other thought was to have a flag in the config file or registry that could be changed while the site is running which would enable that same trace.
I'm looking for ideas or solutions. The main concern is that we need to be able to enable and disable this logging on the fly.
So I wasn't able to find a built-in solution for doing this with EnterpriseLibrary, so this is what I ended up doing and it seems to work; in fact, I extended the idea a bit further by adding a priority override check.
I created a proxy class for writing to the EL LogWriter. What it does is check for the existence of a file (priority_override.txt in this case). If the file is present, it will read the file and search for the text PriorityLevel={n} where n is the override. If it is able to get this override number, it will then override the priority level that was provided in the code. I use this to basically force a high priority on all log statements so that they don't get blocked by the Filter.
My default filter prevents anything below priority 5 from making it to the listeners; this trick allows me to override that by temporarily increasing the priority of all logs. Since Debug is generally priority 1, it doesn't normally get logged. If I drop a file called priority_override.txt with the contents PriorityLevel=99 into the log directory, then ALL log statements will make it to the configured listeners and be handled accordingly. Everything else is just a normal matter of configuring EL with the proper categories, priorities, etc. So, for instance, if my highest priority is 5 and that triggers an email, I would just override to 4 so that everything gets logged but emails do not get sent. Then, when we're done troubleshooting in production, we just delete priority_override.txt and everything returns to normal.
Additionally, now we don't have to manage the changing of config files separately for our test environments. We can just leave the priority_override.txt in the log folder in each respective environment.
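A rough sketch of the proxy class described above, assuming the Enterprise Library static Logger facade (the exact shape of your own wrapper will differ; the file path is the one used in our setup):

```csharp
// Proxy over the EL logger: an on-disk priority_override.txt bumps the priority of every entry.
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class LogProxy
{
    private const string OverrideFile = @"C:\Logging\priority_override.txt";

    public static void Write(string message, string category, int priority)
    {
        // If the override file exists and contains "PriorityLevel={n}", use n instead
        // of the priority the caller passed in.
        if (File.Exists(OverrideFile))
        {
            var match = Regex.Match(File.ReadAllText(OverrideFile), @"PriorityLevel=(\d+)");
            if (match.Success)
                priority = int.Parse(match.Groups[1].Value);
        }

        var entry = new LogEntry { Message = message, Priority = priority };
        entry.Categories.Add(category);
        Logger.Write(entry);
    }
}
```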
I have a C# console app which I'm deploying around 20 times (with different config settings) and running. As you might imagine it's hard to keep an eye on what's happening with 20 apps running (I'm eventually going to deploy these as windows services), so is there anything that can show the output of these in one place easily?
I've thought about log files but these could get big quite fast, and it is a lot of files to open and look at - I just want to have some output to check things are still running as expected.
Edit:
I'm going to be writing errors and stop/start information to the database. What I'm talking about here is the general processing information, which isn't all that relevant to revisit, but is interesting to look at while it's running in the console app.
I have successfully used log4net and its configurable UdpAppender. You can then point all the UdpAppenders to a single machine where you receive the UDP messages with, for example, Log4View.
Since it's configurable, you can use it when you install and debug in production and then increase the logging level to only output ERROR messages instead of DEBUG or INFO messages.
http://logging.apache.org/log4net/
http://www.log4view.com
http://logging.apache.org/log4net/release/config-examples.html
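A minimal programmatic sketch of the UdpAppender setup (normally you'd do this in the XML config instead; the host, port and logger name here are placeholders):

```csharp
// Programmatic UdpAppender pointing at the machine running Log4View.
using System.Net;
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

class Program
{
    static void Main()
    {
        var layout = new XmlLayoutSchemaLog4j();   // Log4View understands the log4j XML schema
        layout.ActivateOptions();

        var udpAppender = new UdpAppender
        {
            RemoteAddress = IPAddress.Parse("192.168.0.10"),   // placeholder viewer machine
            RemotePort = 8080,                                  // placeholder port
            Layout = layout
        };
        udpAppender.ActivateOptions();

        BasicConfigurator.Configure(udpAppender);

        LogManager.GetLogger("Worker").Info("service started");
    }
}
```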
Maybe it's because I come from a heavy DB background, but how about using SQL Server with a Log table to track activity across the different apps?
DBs are geared towards concurrency and will easily handle multiple applications inserting data into the same Log table, and you also get the option of slicing and dicing through the data as much as you like, taking advantage of the aggregation functions already available in a DB environment.
If you go down that route, you will probably need to consider maintaining that table (Log retention period, etc.).
You could also potentially start using tools such as Splunk to collate all the log data, and start corresponding app failures to system or environment failures (if these are being tracked).
I'd second Mikael Östberg and recommend using a logger library (log4net or NLog). There are many options where you can send messages to a database or queues, etc. Since you can turn the logging on or off easily, you can even keep it in your services as a monitoring hook in case something weird happens.
We are using log4net with an AdoNetAppender to write critical logs into a database. Since the AdoNetAppender is a subclass of the BufferedAppender, there is a possibility to enable queuing of log events.
What I'd like to do is back up and restore the log buffer to a local file, so that no log entry can get lost if the database is down or the application crashes.
Does somebody know how to do this?
I don't think you can save the buffer without writing some code yourself. What I would rather suggest is sending the logs to both an AdoNetAppender and a RollingFileAppender. The first will ensure your regular logging to the database, while the second will ensure that the latest logs are also written to disk.
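If it helps, adding the file appender next to your existing database appender programmatically could look roughly like this (normally you'd just add a second appender in the XML config; the path, pattern and rolling settings are placeholders):

```csharp
// Add a RollingFileAppender alongside the AdoNetAppender that is already in the XML config.
using log4net;
using log4net.Appender;
using log4net.Layout;
using log4net.Repository.Hierarchy;

class LoggingSetup
{
    public static void AddLocalBackupAppender()
    {
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        var fileAppender = new RollingFileAppender
        {
            File = @"C:\Logging\critical-backup.log",          // placeholder path
            AppendToFile = true,
            RollingStyle = RollingFileAppender.RollingMode.Size,
            MaxSizeRollBackups = 10,
            MaximumFileSize = "10MB",
            Layout = layout
        };
        fileAppender.ActivateOptions();

        var hierarchy = (Hierarchy)LogManager.GetRepository();
        hierarchy.Root.AddAppender(fileAppender);              // database appender stays as configured
        hierarchy.Configured = true;
    }
}
```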
Update: in light of your later comments I can see how logging to two different sources (one database and one local store, either a file or local database) gets tough to consolidate.
Imo you should absolutely use log4net for what it is best at: a tried and true framework for collecting log data from the application and routing that data to receiving systems. Building a failover system on top of log4net, though, is not what it is designed for. For instance, there is no process model that can pick up the pieces after an application crash.
Instead, handle failover in the receiving system. Failover at the database level and the network level gets you a long way, but you are still not guaranteed 100% uptime. Logging to a local store and then having a process pick up the logs and ship them to the database would minimize the risk of log data being lost, and at the same time you avoid having to consolidate logs from two different stores. Even better, logging stays simple and fast and thus has a low impact on the application.
An alternative would be logging to a local database and having a database job pull the data into the master database. You could also use queuing. There is a sample MsmqAppender out there to get you started. If you're using MS SQL Server you could even use the Service Broker for its queuing abilities.
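A very rough sketch of the shipper process mentioned above, assuming plain ADO.NET and a placeholder table layout (how the local files are rolled and released is up to your appender configuration):

```csharp
// Shipper: push locally buffered log lines into the central database, then clear the local file.
using System;
using System.Data.SqlClient;
using System.IO;

class LogShipper
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=central;Database=Logs;Integrated Security=true"))   // placeholder
        {
            conn.Open();
            foreach (var file in Directory.GetFiles(@"C:\Logging\outbox", "*.log"))   // assumes rolled, closed files
            {
                foreach (var line in File.ReadLines(file))
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO LogEntry (LoggedAt, Message) VALUES (@at, @msg)", conn))   // placeholder table
                    {
                        cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
                        cmd.Parameters.AddWithValue("@msg", line);
                        cmd.ExecuteNonQuery();
                    }
                }
                File.Delete(file);   // shipped; remove from the outbox
            }
        }
    }
}
```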