This isn't really a coding problem, but I'm after some feedback from the community regarding some issues I'm having while developing a new logger implementation.
Background
Our ASP.NET application originally worked with log4net. While log4net is a great logging tool, it does not suit our needs, and in some cases the way it does its logging even causes problems for our application. We are currently implementing our own logging system that mimics some of log4net's behavior but is tailored to our needs. I'm not here to discuss the usage of log4net or how to configure it.
System
We currently have a system under development. The system has a logger class, which is a singleton (design flaw, I know...), and this class holds a collection of IReporter objects.
Every time the application calls Logger.Instance.Log(message), the logger directs the message to every IReporter in the collection, and each reporter is responsible for logging the message to its own destination/storage/whatever.
Currently each IReporter has its own background thread and message queue, so it can process messages at its own speed. The danger here is that if the app dies suddenly, we could lose some of the queued messages.
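For illustration, a minimal sketch of that per-reporter design (the QueuedReporter wrapper, IReporter.Write, and the shutdown handling are assumptions for the sketch, not our actual code):

// Minimal sketch: one queue and one background thread per reporter.
// Requires: using System.Collections.Concurrent; using System.Threading;
public class QueuedReporter
{
    private readonly IReporter _inner;
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Thread _worker;

    public QueuedReporter(IReporter inner)
    {
        _inner = inner;
        _worker = new Thread(Drain) { IsBackground = true };
        _worker.Start();
    }

    public void Enqueue(string message)
    {
        _queue.Add(message);
    }

    private void Drain()
    {
        // Blocks until a message arrives; ends when CompleteAdding is called.
        foreach (string message in _queue.GetConsumingEnumerable())
            _inner.Write(message);   // hypothetical IReporter method
    }

    public void Shutdown()
    {
        // Anything still queued when the process dies abruptly is lost,
        // which is exactly the risk described above.
        _queue.CompleteAdding();
        _worker.Join();
    }
}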
Another approach we had in mind was to have a thread pool on the logger and let those threads drain the queue, delegating the messages to the reporters.
What I'm concerned about is performance. We first implemented this using events in the logger, but the spawned threads quickly went haywire when they all accessed the same file. With the new approach we hope to limit concurrent access to the resources.
What I'm looking for is feedback from people who have faced similar situations and how they approached this issue.
Did I understand correctly that all those threads access the same set of files? On Windows?
You shouldn't do that: the OS will do some complex locking that takes time, or worse, depending on how you access the files, you can get a deadlock. It would be better to do all the logging on a single thread, running the IReporters sequentially.
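A rough sketch of what that could look like, again assuming a hypothetical IReporter.Write; one consumer thread drains a single queue and runs the reporters one after another:

// Single logging thread: one producer/consumer queue, reporters run sequentially.
// Requires: using System.Collections.Concurrent; using System.Collections.Generic;
private static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

public static void Log(string message)
{
    Queue.Add(message);   // called from anywhere in the app; cheap and non-blocking
}

private static void LoggingLoop(IEnumerable<IReporter> reporters)
{
    // Only this thread ever touches the files, so no OS-level lock contention.
    foreach (string message in Queue.GetConsumingEnumerable())
        foreach (IReporter reporter in reporters)
            reporter.Write(message);   // hypothetical IReporter method
}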
If you are concerned that your software may die during a log operation, put the logger in another process and communicate with it via IPC. But are you sure you want to reinvent syslogd?
Your design sounds an awful lot like Log4Net with a bunch of FileAppenders. You should really reconsider your decision unless there are requirements on you that you haven't shared. Log4Net has seen far more use in the field than your logger ever will, and lots of bugs and performance issues have already been shaken out of it.
Related
I have a web application that fires off many AJAX requests from different users to carry out actions. One of these requests triggers some database updates. While that update is in progress, I want other sessions' requests for the same action to simply be ignored. Is it safe to implement a static variable that I can lock so the action is ignored by other requests if one is already in progress, or would this just be bad design?
Update
After digging more I came across optimistic concurrency. I'm using EF6, so it sounds like all I need to do to handle this is set the relevant property's Concurrency Mode to Fixed?
A solution based on static variables may look attractive, because it is easy to implement. However, it quickly becomes a maintenance liability, particularly in a web application environment.
The problem with static variables in web environments, such as IIS, is that they are not shared globally across your entire application: if you configure your app pool to have several worker processes, each process would have its own copy of all static variables. Moreover, if you configure your infrastructure for load balancing, each process on each server would have its own copy, with no control on the part of your application. In your situation this would mean a possibility of multiple updates happening at the same time.
That is why I would avoid using a static variable in situations when it is absolutely critical that at most a single request be in progress at any given time.
In your situation, the persistence layer should be in charge of not corrupting the data no matter how many updates fire at the same time. The persistence layer needs to decide which requests to execute and which to throw away. One approach to solving this problem is optimistic locking. See this Q&A for general information on how it can be implemented.
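As a rough sketch of how that can look with EF6 (the entity and context names here are invented; the [Timestamp] annotation is the code-first equivalent of setting Concurrency Mode to Fixed in the designer): a rowversion column makes EF include the original value in the UPDATE's WHERE clause, so a losing request surfaces as an exception you can swallow.

// Requires: using System.ComponentModel.DataAnnotations;
//           using System.Data.Entity.Infrastructure;
public class ActionRecord
{
    public int Id { get; set; }
    public string Status { get; set; }

    [Timestamp]   // maps to a SQL Server rowversion column
    public byte[] RowVersion { get; set; }
}

// In the request handler:
try
{
    using (var db = new AppDbContext())   // hypothetical DbContext
    {
        var record = db.ActionRecords.Find(recordId);
        record.Status = "Processing";
        db.SaveChanges();   // throws if another request updated the row first
    }
}
catch (DbUpdateConcurrencyException)
{
    // Another request won the race; ignore this one, as the question intends.
}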
I have an application in .NET that I need to log. I want to log certain events and exceptions. I saw online that log4net was being heavily recommended for this purpose. I set it up to quickly begin logging to a txt file.
But this is not good enough for my purposes. From within my application, I'd like to be able to pull up a monitor which has a live listing of all the logs being generated.
Is log4net the best approach for this? If not, what is?
I have no problem consuming the log events and finding my own way to display the data, I just don't know what the best way is to send the logging events to my monitor form.
You may want to look at log2console, which is an excellent logging monitor compatible with log4net. It can listen to the log4net remoting appender and present the data quite nicely.
If you need to implement your own monitor from within the program, I would suggest trying out the MemoryAppender. There's some helpful info here (the question is actually a very nice tutorial).
As you can see, he has set up two appenders: one logging to a file and one logging to the MemoryAppender. In your monitor, you can get a handle to the appender using the following code:
// Requires: using log4net; using log4net.Appender; using log4net.Repository.Hierarchy;
Hierarchy hierarchy = LogManager.GetRepository() as Hierarchy;
MemoryAppender mappender = hierarchy.Root.GetAppender("MemoryAppender") as MemoryAppender;
You can then cyclically fetch the new events on a background thread with mappender.GetEvents() before clearing the appender with mappender.Clear(). Keep in mind that this is not thread-safe, so creating a thread-safe wrapper for your logging is probably a good idea.
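A rough sketch of that polling loop (monitorForm and its Append method are hypothetical, and the lock stands in for the thread-safe wrapper just mentioned):

// Requires: using System; using System.Threading; using log4net.Core;
// Keep a reference to the timer so it isn't garbage collected.
var poller = new Timer(_ =>
{
    LoggingEvent[] events;
    lock (mappender)   // crude thread-safe wrapper around GetEvents + Clear
    {
        events = mappender.GetEvents();
        mappender.Clear();
    }
    foreach (LoggingEvent e in events)
    {
        // Marshal to the UI thread before touching the monitor form.
        monitorForm.BeginInvoke(new Action(() => monitorForm.Append(e.RenderedMessage)));
    }
}, null, 0, 1000);   // poll once per second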
I have been working on many applications which run as Windows services or scheduled tasks.
Now I want to make sure that these applications are fault tolerant and reliable. For example, I have a service that runs every hour. If the service crashes while it is running, I'd like the application to run again for the same period (there are several things involved here, including transactions of data processing) to avoid data loss. Moreover, I'd like the program to report the error with details. My goal is to avoid data loss and to avoid falling behind on running the program.
I have built a class library that a user can import into a project. The library is supposed to keep information about the running instance of the program, i.e., the program reads and writes information such as the running interval, running status, etc. This data is stored in a database.
I was curious whether there are best practices for making scheduled tasks/Windows services fault tolerant and reliable.
Edit: I am talking about independent tasks or services which run on different servers, and my goal is to make sure that each service keeps running, reports any failures, and recovers from them.
I'm interested in what other people have to say, but I'll give you a few points that I've stumbled across:
Make an event handler for unhandled exceptions. This way you can clean up resources, write to a log file, email an administrator, or do anything else you need instead of having the service crash.
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(AppUnhandledExceptionEventHandler);
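A minimal sketch of the handler wired up above (the method name matches the registration; the event source "MyService" is an assumption):

// Requires: using System; using System.Diagnostics;
static void AppUnhandledExceptionEventHandler(object sender, UnhandledExceptionEventArgs e)
{
    var ex = e.ExceptionObject as Exception;
    // Last-chance logging; the process still terminates if e.IsTerminating is true.
    EventLog.WriteEntry("MyService", "Unhandled exception: " + ex, EventLogEntryType.Error);
}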
Override any ServiceBase event handlers you need in the main part of your application. OnStart and OnStop are pretty crucial, but there are many others you can use. http://msdn.microsoft.com/en-us/library/system.serviceprocess.servicebase%28v=VS.71%29.aspx
Beware of timers. Windows Forms timers won't work right in a service. Use System.Threading.Timer or System.Timers.Timer instead. See: Best Timer for using in a Windows service
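For example, a small sketch of a System.Threading.Timer inside a service (the hourly interval and the DoWork method are assumptions):

private System.Threading.Timer _timer;

protected override void OnStart(string[] args)
{
    // Fire DoWork immediately, then every hour.
    _timer = new System.Threading.Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromHours(1));
}

private void DoWork(object state)
{
    // ... hourly processing ...
}

protected override void OnStop()
{
    if (_timer != null)
        _timer.Dispose();
}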
If you are updating state on a thread, make sure you use lock() or a Monitor around the critical sections so everything stays thread-safe.
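A tiny sketch of that (the field names are made up):

private readonly object _sync = new object();
private int _processedCount;

private void OnItemProcessed()
{
    lock (_sync)   // only one thread mutates the shared state at a time
    {
        _processedCount++;
    }
}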
Be careful not to use anything user-specific, as a service runs without a specific user context. I noticed some of my SQL connection strings no longer worked with Windows authentication, etc. I've also heard of people having trouble with mapped drives.
Never make a service with a UI. In fact, Vista and 7 make it nearly impossible to do anyway. A service shouldn't require user interaction; the most you can do is send a message with a Win32 function. MSDN states that making interactive services is bad practice. http://msdn.microsoft.com/en-us/library/ms683502%28VS.85%29.aspx
For debugging purposes, it is way cool to make a service run as a console application until you get it doing what you want it to. Awesome tutorial: http://mycomponent.blogspot.com/2009/04/create-debug-install-windows-service-in.html
Anyway, hope that helps a little; these are just a couple of things I found poking around on my own.
Something obvious: don't run all your tasks at the same time. Try to schedule them so that only one task is using an expensive resource at any time (if possible). For example, if you need to send out newsletters and some specific notifications, schedule them at different times. If two tasks need to clean up something in the database, let one run after the other.
Also schedule tasks to run outside of normal business hours - at night obviously.
I need to build click and conversion tracking (more specific and focused than IIS log files) into an existing web site. I am expecting pretty high load. I have investigated using log4net, specifically the FileAppender class, but the docs explicitly state: "This type is not safe for multithreaded operations."
Can someone suggest a robust approach for this type of heavy logging? I really like the flexibility log4net would give me. Can I get around the lack of thread safety using lock? Would that introduce performance/contention concerns?
While FileAppender itself may not be safe for multithreaded operations, I'd certainly expect the normal access routes to it via log4net to be thread-safe.
From the FAQ:
log4net is thread-safe.
In other words, either the main log4net framework does enough locking, or it has a dedicated logging thread servicing a producer/consumer queue of log messages.
Any logging framework which wasn't thread-safe wouldn't survive for long.
You could check out the Logging Application Block available in the Microsoft Enterprise Library. It offers a whole host of different types of loggers, as well as a handy GUI configurator that you can point at your app.config/web.config in order to modify it, so there's no need to sift through the XML yourself.
Here's a link to a nice tutorial on how to get started with it:
http://elegantcode.com/2009/01/20/enterprise-library-logging-101/
I'm also interested in the answer, but I'll tell you what I was told when I tried to find a solution.
An easy way around it would be to use something like a SQL database. If the data you want isn't well suited for that, you could have each page write its own log file and then periodically merge the log files.
However, I'm sure there's a better solution.
When using syslog, you won't have any threading issues. Syslog sends the log lines over UDP to a log daemon (which could be on the same machine).
It works especially well if you have multiple running processes/services, since all log lines are aggregated in one viewing tool.
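A minimal sketch of sending a syslog line over UDP from .NET (the daemon host and the RFC 3164-style priority prefix are assumptions, not a full syslog client):

// Requires: using System.Net.Sockets; using System.Text;
public static void SyslogSend(string message)
{
    // <134> = facility local0 (16) * 8 + severity informational (6)
    string line = "<134>myapp: " + message;
    byte[] payload = Encoding.ASCII.GetBytes(line);
    using (var client = new UdpClient("syslog.example.local", 514))   // standard syslog port
    {
        client.Send(payload, payload.Length);
    }
}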
If you expect really heavy loads, look at how the guys at Facebook do it: http://developers.facebook.com/scribe/ You can use their open-source log tool. I don't think you'll hit their kind of load just yet, so you should be safe for some time to come!
I'm working with a system that consists of several applications and services, almost all using a SQL database.
The Windows services do different things at different times, and I would like to track them: on some deployed systems we see the machine running high on CPU and the SQL process consuming it, but we can't be sure which service is responsible.
I wonder if Performance Counters are good for this job.
Basically I would like to be able to see at a certain moment which service woke up and is processing something.
It seems to me that I could end up with a perf counter that only ever has the value 0 or 1 for each service, to show whether it is doing something, but this doesn't seem like normal usage for perf counters.
Are performance counters suitable?
Do you think I should track this in a different way?
If your monitoring framework/approach already centers around monitoring performance counters, this is a viable approach.
Personally I find more detailed instrumentation necessary to really understand what's happening in my services (though maybe that has to do with the nature of my services).
I use .NET Logging Framework because it's simple and can write to multiple targets including log files, the event log, and a TCP socket (I have a simple monitor that listens on the logging socket for each app server and shows me in real-time what's happening).
Performance Counters are attractive because they are really lightweight, but as you say, they only allow you to capture numeric values. Sure, there's a slew of different types of values you can record, such as averages, deltas and totals, but they have to be numbers.
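For what it's worth, a rough sketch of the 0/1 "busy flag" counter idea from the question (the category and counter names are made up, and creating a category requires administrative rights):

// Requires: using System.Diagnostics;
const string Category = "MyServices";
const string CounterName = "OrderService Busy";

if (!PerformanceCounterCategory.Exists(Category))
{
    var counters = new CounterCreationDataCollection();
    counters.Add(new CounterCreationData(CounterName, "1 while the service is processing",
                                         PerformanceCounterType.NumberOfItems32));
    PerformanceCounterCategory.Create(Category, "Per-service activity flags",
                                      PerformanceCounterCategoryType.SingleInstance, counters);
}

var busy = new PerformanceCounter(Category, CounterName, false);   // false = writable
busy.RawValue = 1;   // work started
// ... do the work ...
busy.RawValue = 0;   // work finished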
If you need more information than that, you must resort to some other type of instrumentation. In your case, it sounds like your need goes more in that direction.
If your services don't wake up and suspend themselves too often, it sounds like an informational message to a custom event log might be a good idea. Create a custom event log for the application if you expect a fair number of these, so as not to flood the regular Application event log.
The .NET Trace API will be a better option if you expect the instrumentation to generate too much data for the normal event log. You can configure your application(s) to trace or not via app/web.config, although a change will require a restart of the app. This is a good option if you only wish to use the instrumentation for troubleshooting but it otherwise generates too much data, or if tracing itself degrades performance too much. Another good thing about the Trace API is that you can trace at multiple levels, so even if you have written code to trace very verbosely, you will only see that verbose trace data if you enable verbose tracing. That gives you better control over just what is being traced.
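For instance, a short sketch with TraceSource (the source name is made up; in practice the switch level and listeners would come from app.config rather than being hard-coded):

// Requires: using System.Diagnostics;
var trace = new TraceSource("MyService", SourceLevels.Information);

trace.TraceEvent(TraceEventType.Information, 0, "Service woke up");
trace.TraceEvent(TraceEventType.Verbose, 0, "Queue depth: {0}", 17);   // only emitted if Verbose is enabled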
Eric J has a good point. I think if you really want to capture timing performance, you'll have to use some other sort of logging with start and stop time entries. I personally like log4net, though it can be a pain to configure the first time around.