A method for high-load web site logging to file? - c#

I need to build in click and conversion tracking (more specific and focused than IIS log files) to an existing web site. I am expecting pretty high load. I have investigated using log4net, specifically the FileAppender Class, but the docs explicitly state: "This type is not safe for multithreaded operations."
Can someone suggest a robust approach for this type of heavy logging? I really like the flexibility log4net would give me. Can I work around the lack of thread safety using lock? Would that introduce performance/contention concerns?

While FileAppender itself may not be safe for multithreaded operations, I'd certainly expect the normal access routes to it via log4net to be thread-safe.
From the FAQ:
log4net is thread-safe.
In other words, either the main log4net framework does enough locking, or it has a dedicated logging thread servicing a producer/consumer queue of log messages.
Any logging framework which wasn't thread-safe wouldn't survive for long.
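As an illustration, the usual pattern never touches the FileAppender directly; all writes go through the ILog interface, which log4net synchronizes internally. A minimal sketch (assuming log4net is referenced and a FileAppender is defined in the XML configuration):
// Sketch only: assumes log4net is referenced and app.config/web.config defines a FileAppender.
using log4net;
using log4net.Config;

public static class ClickTracker
{
    // One ILog per type is the usual pattern; LogManager handles the locking internally.
    private static readonly ILog Log = LogManager.GetLogger(typeof(ClickTracker));

    public static void Init()
    {
        // Reads appender and layout settings from the application's XML configuration.
        XmlConfigurator.Configure();
    }

    public static void TrackClick(string userId, string campaign)
    {
        // Safe to call concurrently from many request threads.
        Log.InfoFormat("click user={0} campaign={1}", userId, campaign);
    }
}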

You could check out the Logging Application Block available in the Microsoft Enterprise Library. It offers a whole host of different types of loggers, as well as a handy GUI configurator that you can point at your app.config\web.config in order to modify it, so there's no need to sift through the XML yourself.
Here's a link to a nice tutorial on how to get started with it:
http://elegantcode.com/2009/01/20/enterprise-library-logging-101/
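Once the block is configured, writing an entry from code is a one-liner. A rough sketch, assuming the Logging Application Block assemblies are referenced and a category named "Tracking" exists in the configuration:
// Sketch only: requires the Enterprise Library Logging Application Block and a matching
// category/listener defined in app.config or web.config ("Tracking" is a made-up name).
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class ConversionTracker
{
    public static void TrackConversion(string userId, decimal amount)
    {
        // Routing to file, database, event log, etc. is decided entirely by configuration.
        Logger.Write(string.Format("conversion user={0} amount={1}", userId, amount), "Tracking");
    }
}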

I'm also interested in the answer, but I'll tell you what I was told when I tried to find a solution.
An easy way around it would be to use something like a SQL database. If the data you want isn't well suited for that, you could have each page access write its own log file and then periodically merge the log files (a rough sketch follows below).
However, I'm sure there's a better solution.
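For what it's worth, here is a sketch of that idea (the directory and naming scheme are made up): each entry goes to its own file so no two threads ever share a handle, and a scheduled task can merge and delete the files off-peak.
// Sketch only: directory and naming scheme are hypothetical.
using System;
using System.IO;

public static class PerRequestLogger
{
    public static void Log(string message)
    {
        // One file per entry means no shared file handles between request threads.
        var path = Path.Combine(@"D:\logs\pending", Guid.NewGuid().ToString("N") + ".log");
        File.WriteAllText(path, DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
    }
}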

When using syslog, you won't have any threading issues. Syslog sends the log lines over UDP to a log daemon (which could be on the same machine).
It works especially well if you have multiple running processes/services, since all log lines are aggregated in one viewing tool.
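If you don't want to pull in a syslog library, sending a line over UDP yourself takes only a few calls. A minimal sketch (the host name and the priority value are assumptions):
// Sketch only: "loghost" and the priority value are assumptions; syslog daemons
// conventionally listen on UDP port 514.
using System.Net.Sockets;
using System.Text;

public static class SyslogSender
{
    public static void Send(string message)
    {
        // <134> = facility local0 (16 * 8) + severity informational (6), per RFC 3164.
        byte[] payload = Encoding.UTF8.GetBytes("<134>" + message);
        using (var udp = new UdpClient())
        {
            // Fire-and-forget: UDP never blocks waiting for the daemon.
            udp.Send(payload, payload.Length, "loghost", 514);
        }
    }
}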
If you expect really heavy loads, look at how the guys at Facebook do it: http://developers.facebook.com/scribe/ You can use their open-source log tool, Scribe. I don't think you'll hit their kind of load just yet, so you should be safe for some time to come!
R

Related

Confluent.Kafka - Topic Log Compaction

I'm currently building publisher and consumer assets using Confluent.Kafka, and I'm trying to understand whether there is anything different I need to do in code. I'm able to create the topic with log compaction enabled, but I don't fully understand how to work with it in C# .NET Core.
My main question is: after creating a topic with log compaction enabled, is there anything that must be done IN CODE to use it, or is it all handled under the hood?
If there are code-specific aspects to write, does anyone have an example they can point me to? I've been looking into it for a couple of days, and I find plenty of information on how to create a topic with log compaction enabled (which I've already achieved) but nothing on how it might affect code usage for the producer and consumer.
Any help would be much appreciated.
No, you don't need to make any changes to your code to use log compaction. To use log compaction, you only need to configure the topic.
The only difference in code is that you can delete events with a certain key by producing a tombstone value, which in C# is just a null value.
Make sure you really understand how log compaction works; you can read more about it here. To activate log compaction you must set cleanup.policy=compact when creating the topic. But you must also consider other topic configurations which affect how often the topic is compacted: delete.retention.ms, segment.ms, and min.cleanable.dirty.ratio.
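For completeness, here is a rough sketch of producing a tombstone with Confluent.Kafka (the broker address and topic name are placeholders); the consumer side needs no changes at all.
// Sketch only: broker and topic names are placeholders.
using System.Threading.Tasks;
using Confluent.Kafka;

public static class TombstoneExample
{
    public static async Task DeleteKeyAsync(string key)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        using (var producer = new ProducerBuilder<string, string>(config).Build())
        {
            // A null value is the tombstone: after compaction (and delete.retention.ms),
            // earlier records with this key are removed from the topic.
            await producer.ProduceAsync("my-compacted-topic",
                new Message<string, string> { Key = key, Value = null });
        }
    }
}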

Live log monitoring within the application generating the logs

I have an application in .NET that I need to log. I want to log certain events and exceptions. I saw online that log4net was being heavily recommended for this purpose. I set it up to quickly begin logging to a txt file.
But this is not good enough for my purposes. From within my application, I'd like to be able to pull up a monitor which has a live listing of all the logs being generated.
Is log4net the best approach for this? If not, what is?
I have no problem consuming the log events and finding my own way to display the data, I just don't know what the best way is to send the logging events to my monitor form.
You may want to look at log2console, which is an excellent logging monitor compatible with log4net. It can listen to the log4net remoting appender and present the data quite nicely.
If you need to implement your own monitor from within the program, I would suggest trying out the MemoryAppender. There's some helpful info here (the question is actually a very nice tutorial).
As you can see, he has set up two appenders - one which is logging to file and one which is logging to the memory appender. In your monitor, you can get a handle to the appender using the following code:
// Requires the log4net, log4net.Appender and log4net.Repository.Hierarchy namespaces.
Hierarchy hierarchy = LogManager.GetRepository() as Hierarchy;
MemoryAppender mappender = hierarchy.Root.GetAppender("MemoryAppender") as MemoryAppender;
You can then cyclically get the new events in a background thread with mappender.GetEvents() before clearing it with mappender.Clear(). Keep in mind that this is not thread-safe, so creating a thread-safe wrapper for your logging is probably a good idea.
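A rough sketch of such a polling loop (the appender name "MemoryAppender" and the interval are assumptions that must match your configuration):
// Sketch only: appender name and polling interval are assumptions.
using System.Threading;
using log4net;
using log4net.Appender;
using log4net.Core;
using log4net.Repository.Hierarchy;

public class LogMonitor
{
    public void Poll(CancellationToken token)
    {
        var hierarchy = (Hierarchy)LogManager.GetRepository();
        var appender = (MemoryAppender)hierarchy.Root.GetAppender("MemoryAppender");

        while (!token.IsCancellationRequested)
        {
            // Snapshot and clear; share a lock with the logging side if you need
            // stronger guarantees, since MemoryAppender is not thread-safe.
            LoggingEvent[] events = appender.GetEvents();
            appender.Clear();

            foreach (var e in events)
            {
                // Marshal to the UI thread here (e.g. Control.BeginInvoke) and display.
            }

            Thread.Sleep(1000);
        }
    }
}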

Good remote application logging/monitoring software

I'm sure this has already been done, but Google isn't helping me - I'm getting swamped with answers for similar but different problems:
My boss has asked me to find or build a system that will log uses of our kiosk installations. We build kiosks using java, native c++, c#, python and using things like Unity. We saw another company we worked with using a simple system where a POST call with data was logged on a remote site to be checked later. The system allowed the application programmer to decide the contents of the message, and was able to allocate it to either debug or release according to the programmer's wishes.
An example of the log output might be:
[Debug] 28-11-2011 10:10:20 Kiosk1: Pulse
[Debug] 28-11-2011 10:10:25 Kiosk1: Button pressed
[Debug] 28-11-2011 10:10:45 Kiosk1: Widget used
[Debug] 28-11-2011 10:11:20 Kiosk1: Pulse
I looked at log4net/log4j, but that doesn't seem to be compatible with native C++ or Python. I'm probably mistaken there :).
Does anyone know of a system that works like this, or that will otherwise be suitable for logging from such diverse languages? If not, I can write my own easily enough. I just don't want to have to support it :)
Regards,
Steve
I'm not sure, but I think what you're looking for is Splunk. It can parse almost any log format and display it in a unified manner. It can listen on ports, read log files via polling, and parse and index anything you throw at it, at any point in time.
You can use it to set up your own multi-language logging server/system. We've been using it and it works seamlessly in our distributed environment.
While writing a specialized logging backend that handles logging both locally and to the network is quite possible, I would advise against it. The reason is that network latency can be too long, so it either stalls your application, or log messages queue up if you use another process/thread to do the actual network pushing.
A much simpler solution is a little script, scheduled to run once or a few times per day, that copies the log file(s) to the remote location.
For C++ I highly recommend Poco logging. It allows you to specify the formatting and log level/output using e.g. a properties file.
The logging library included with Python is quite similar to log4net, so if you are used to one, the other will be quite easy to understand, but they do not share code (as far as I know).
Use log4j/log4net with a socket appender or log remotely via rsyslog.
You might be interested in something like web beacons. I know it's not exactly what you're asking for, but you ought to think about it for the same reason that web developers do: it's good to know what users are doing.

Using Performance Counters to track windows services

I'm working with a system that consists of several applications and services, almost all using a SQL database.
The Windows services do different things at different times, and I would like to track them. On some deployed systems we see the machine running high on CPU, and we see that the SQL process is busy, but we can't be sure which service is responsible for it.
I wonder if Performance Counters are good for this job.
Basically I would like to be able to see at a certain moment which service woke up and is processing something.
It seems to me that I would end up with a perf counter that only takes the value 0 or 1 for each service to show whether it is doing something, but this doesn't seem like normal usage for perf counters.
Are performance counters suitable?
Do you think I should track this in a different way?
If your monitoring framework/approach already centers around monitoring performance counters, this is a viable approach.
Personally I find more detailed instrumentation necessary to really understand what's happening in my services (though maybe that has to do with the nature of my services).
I use .NET Logging Framework because it's simple and can write to multiple targets including log files, the event log, and a TCP socket (I have a simple monitor that listens on the logging socket for each app server and shows me in real-time what's happening).
Performance Counters are attractive because they are really lightweight, but as you say, they only allow you to capture numeric values. Sure, there's a slew of different types of values you can record, such as averages, deltas and totals, but they have to be numbers.
If you need more information than that, you must resort to some other type of instrumentation. In your case, it sounds like your need goes more in that direction.
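That said, if a simple numeric "busy" flag per service is still useful, publishing one is cheap. A sketch (the category and counter names are made up; creating the category needs admin rights and is normally done at install time):
// Sketch only: category and counter names are hypothetical; creating the category
// requires admin rights and is usually done by the installer, not the service.
using System.Diagnostics;

public class ServiceActivityCounter
{
    private readonly PerformanceCounter _busy;

    public ServiceActivityCounter(string serviceName)
    {
        if (!PerformanceCounterCategory.Exists("My Services"))
        {
            PerformanceCounterCategory.Create(
                "My Services", "Per-service activity flags",
                PerformanceCounterCategoryType.MultiInstance,
                "Busy", "1 while the service instance is processing, 0 otherwise");
        }

        // One instance per service, so each service shows up separately in PerfMon.
        _busy = new PerformanceCounter("My Services", "Busy", serviceName, readOnly: false);
    }

    public void WorkStarted()  { _busy.RawValue = 1; }
    public void WorkFinished() { _busy.RawValue = 0; }
}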
If your services don't wake up and suspend themselves too often, it sounds like an informational message to a custom event log might be a good idea. Create a custom event log for the application if you expect a fair number of these, so as to not flood the regular Application event log.
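Writing to a custom event log from a service is straightforward. A sketch (the log and source names are made up; CreateEventSource needs admin rights, so register the source from the installer):
// Sketch only: log and source names are hypothetical; CreateEventSource requires
// admin rights, so register the source at install time rather than from the service.
using System.Diagnostics;

public static class ServiceEventLog
{
    private const string Source = "MyWorkerService";
    private const string LogName = "MyCompany Services";

    public static void RegisterSource()
    {
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, LogName);
    }

    public static void Info(string message)
    {
        EventLog.WriteEntry(Source, message, EventLogEntryType.Information);
    }
}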
The .NET Trace API is a better option if you expect the instrumentation to generate too much data for the normal event log. You can configure your application(s) to trace or not via app/web.config, although a change will require a restart of the app. This is a good option if you only wish to use the instrumentation for troubleshooting because it otherwise generates too much data, or if tracing itself degrades performance too much. Another good thing about the Tracing API is that you can trace at multiple levels, so even if you have written code to trace very verbosely, you will only see that verbose trace data if you enable verbose tracing. That gives you better control over just what is being traced.
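A sketch of level-based tracing with TraceSource (the source name is made up; in practice the switch level and listeners would be configured in app.config/web.config so they can be changed on site):
// Sketch only: the source name is hypothetical; set the switch level and listeners
// in configuration so verbosity can be changed without recompiling.
using System.Diagnostics;

public class BillingService
{
    private static readonly TraceSource Trace =
        new TraceSource("BillingService", SourceLevels.Information);

    public void ProcessBatch(int batchId)
    {
        Trace.TraceEvent(TraceEventType.Information, 0, "Batch {0} started", batchId);

        // Only emitted when the switch is set to Verbose.
        Trace.TraceEvent(TraceEventType.Verbose, 0, "Batch {0}: loading rows", batchId);

        Trace.TraceEvent(TraceEventType.Information, 0, "Batch {0} finished", batchId);
    }
}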
Eric J has a good point. I think if you really want to capture "timing" performance you'll have to use some other sort of logging and record start and stop times. I personally like log4net, though it can be a pain to configure the first time around.

A good broadcast mechanism for inhouse .net applications to announce their location and version?

I would like to provide a large number of inhouse .net applications with a lightweight way to announce that they are being used. My goal is to keep track of which users might benefit from support check-ins and/or reminders to upgrade.
This is on an inhouse network. There is definitely IP connectivity among all the machines, and probably UDP. (But probably not multicast.)
Writing to a known inhouse share or loading a known URL would be possibilities, but I would like to minimize the impact on the application itself as much as possible, even at the expense of reliability. So I would rather not risk a timeout (for example, if I'm accessing some centralized resource and it has disappeared), and ideally I would rather not launch a worker thread either.
It would also be nice to permit multiple listeners, which is another reason I am thinking about broadcasting rather than invoking a service.
Is there some kind of fire-and-forget broadcast mechanism I could use safely and effectively for this?
There are certainly many options for this, but one that is very easy to implement and meets your criteria is an Asynchronous Web Service call.
This does not require you to start a worker thread (the Framework will do that behind the scenes). Rather than use one of the options outlined in that link to fetch the result, simply ignore the result since it is meaningless to the calling app.
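If you don't have a generated web service proxy handy, a plain asynchronous HTTP request gives the same fire-and-forget effect. A sketch (the announcement URL is a placeholder); the callback just drains and discards the response:
// Sketch only: the announcement URL is a placeholder. Exceptions are swallowed on
// purpose; the calling application should never notice whether the listener is up.
using System;
using System.Net;

public static class UsageBeacon
{
    public static void Announce(string appName, string version)
    {
        try
        {
            var url = string.Format(
                "http://inhouse-tracker/announce?app={0}&version={1}",
                Uri.EscapeDataString(appName), Uri.EscapeDataString(version));

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.BeginGetResponse(ar =>
            {
                // Complete and discard the response so the connection is released.
                try { ((HttpWebRequest)ar.AsyncState).EndGetResponse().Close(); }
                catch { /* ignore: fire-and-forget */ }
            }, request);
        }
        catch { /* ignore: announcing must never break the app */ }
    }
}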
I did something similar, though not exactly a "broadcast".
I have an in-house tool that several non-techies in the company use. It checks a network share for a specific EXE (the same EXE you would download if you wanted to use it) and compares the version number of that file with the executing assembly. If the one on the network is newer, it alerts the user to download the new one.
A lot simpler than trying to set up an auto updater for something that will only be used within the same building as me.
If upgrading is not an issue (i.e. there are no cases where using the old version is better), you can do what I did with something similar:
The application that people actually launch is an updater program, it checks the file version and timestamp on a network share and if a newer version exists, copies it to the program directory. It then runs the program (whether it was updated or not).
// local = path to the copy in the program directory, remote = master copy on the network share
var current = new FileInfo(local);
var latest = new FileInfo(remote);

// First run: no local copy yet, so grab it from the share
if (!current.Exists)
    latest.CopyTo(local);

// Overwrite the local copy if the one on the share is newer or a different version
var currentVersion = FileVersionInfo.GetVersionInfo(local);
var latestVersion = FileVersionInfo.GetVersionInfo(remote);
if (latest.CreationTime > current.CreationTime || latestVersion.FileVersion != currentVersion.FileVersion)
    latest.CopyTo(local, true);

Process.Start(local);
I also have the program itself check whether the updater needs updating (as the updater can't update itself due to file locks).
After some experimentation, I have been getting good results using Win32 mailslots.
There is no official managed wrapper, but the functions are simple to use via PInvoke, as demonstrated in examples like this one.
Using a 'domain' mailslot provides a true broadcast mechanism, permitting multiple listeners and requiring no well-known server.
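For reference, a sketch of the sending side (the mailslot name and message format are made up); writing to the domain-wide \\*\mailslot\ name broadcasts the datagram to every listener that created a mailslot with that name:
// Sketch only: mailslot name and message format are hypothetical. Broadcast
// messages must stay small (424 bytes is the documented limit for domain mailslots).
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;

public static class MailslotAnnouncer
{
    private const uint GENERIC_WRITE = 0x40000000;
    private const uint FILE_SHARE_READ = 0x00000001;
    private const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    public static void Announce(string appName, string version)
    {
        // "\\*\" targets the whole domain/workgroup; listeners call CreateMailslot
        // with the same name ("appusage") to receive the datagram.
        using (SafeFileHandle handle = CreateFile(@"\\*\mailslot\appusage",
                   GENERIC_WRITE, FILE_SHARE_READ, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (handle.IsInvalid)
                return; // fire-and-forget: never let announcing break the app

            using (var stream = new FileStream(handle, FileAccess.Write))
            {
                byte[] payload = Encoding.UTF8.GetBytes(appName + "|" + version);
                stream.Write(payload, 0, payload.Length);
            }
        }
    }
}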
