In my program I'm using
logWriter = File.CreateText(logFileName);
to store logs.
Should I call logWriter.Close(), and where? Should it be in a finalizer or something?
The normal approach is to wrap File.CreateText in a using statement
using (var logWriter = File.CreateText(logFileName))
{
    // do stuff with logWriter
}
However, this is inconvenient if you want logWriter to live for the duration of your app, since you most likely won't want a using statement wrapped around your app's entire Main method.
In that case, you must make sure you call Dispose on logWriter before the app terminates, which is exactly what using does for you behind the scenes.
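For example, here is a minimal sketch (the RunApp method and the log file name are placeholders) of keeping the writer alive for the app's lifetime while still guaranteeing disposal via try/finally:

using System;
using System.IO;

class Program
{
    static StreamWriter logWriter;

    static void Main()
    {
        logWriter = File.CreateText("app.log"); // placeholder path

        try
        {
            RunApp(); // hypothetical: the rest of your application
        }
        finally
        {
            // Runs on normal exit and when an exception propagates out of
            // Main; Dispose flushes the buffer and closes the handle.
            logWriter.Dispose();
        }
    }

    static void RunApp()
    {
        logWriter.WriteLine("doing work...");
    }
}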
Yes, you should close your file when you're done with it. You can create a log class (or use an existing one like log4net), implement IDisposable, and release the resources inside the Dispose method.
You can wrap it with a using block, but I would rather have it in a separate class. That way you can handle more advanced logging in the future; for instance, what happens when your application runs on multiple threads and they try to write to the file at the same time?
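As a rough sketch of such a class (the name Logger and its members are invented for illustration), with a lock so several threads can safely share one instance:

using System;
using System.IO;

public sealed class Logger : IDisposable
{
    private readonly StreamWriter writer;
    private readonly object sync = new object();

    public Logger(string path)
    {
        writer = File.CreateText(path);
    }

    public void Log(string message)
    {
        lock (sync) // serialize writes from multiple threads
        {
            writer.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
        }
    }

    public void Dispose()
    {
        lock (sync)
        {
            writer.Dispose(); // flushes and releases the file handle
        }
    }
}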
log4net can be configured to use a text file or a database, and it's easy to change if the application grows.
If you have a log file which you wish to keep open, then the OS will release the file as part of shutting the process down when the application exits. You do not actually have to manage this explicitly.
One issue with letting the OS clean up your file handle is that the file writer buffers its output, so it may need flushing before it will write out the remainder of its buffer. If you do not call Close/Dispose on it, you may lose information. One way of forcing a flush is to hook the AppDomain unload event, which is raised when your .NET process shuts down, e.g.:
AppDomain.CurrentDomain.DomainUnload += delegate { logWriter.Dispose(); };
There is a time limit on what can occur in a domain unload event handler, but writing out the remains of a file writer's buffer is well within it. I am assuming you have a default setup, i.e. one default AppDomain; otherwise things get tricky all round, including logging.
If you are keeping your file open, consider opening it with share rights that allow other processes read access. This will let a program such as a file tailer or text editor read the file while your program is running.
So, I'd like to write a logger in C# for an app I'm working on. However, since I love efficiency, I don't want to be opening and closing a log file over and over again during execution.
I think I'd like to write all events to RAM and then write to the log file once when the app exits. Would this be a good practice? If so, how should I implement it?
If this is not a good practice, what would be?
(And I'm not using Windows' event log at this time.)
However, since I love efficiency, I don't want to be opening and closing a log file over and over again during execution
No. It's because you love premature optimization.
I think I'd like to write all events to RAM and then write to the log file once when the app exits. Would this be a good practice? If so, how should I implement it?
If you love efficiency, why do you want to waste a lot of memory for log entries?
If this is not a good practice, what would be?
It is if you want to lose all logs when your application crashes (since it cannot write the log to disk then). Why did you create the log in the first place?
You'll have to think of some issues you might encounter:
System being shut down while your application runs -> no log files
An application crash might not invoke your write method
If the log grows large (how long does your application run?), you might get memory problems
If the log grows large, and there is not enough space on the drive, not a single log line will be written
You could simply keep the file open while your application runs (with at least FileShare.Read so you can monitor it), or consider writing batches of log lines: invoke the write method after a group of entries has accumulated, or even use a timer.
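For the keep-it-open option, a sketch (the file name is a placeholder) that appends while still allowing other programs to read:

using System.IO;

FileStream stream = new FileStream("app.log", FileMode.Append,
    FileAccess.Write, FileShare.Read);
StreamWriter writer = new StreamWriter(stream);
writer.AutoFlush = true; // flush each write so readers see up-to-date content

writer.WriteLine("log entry");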
Well, if your app crashes, you lose all your logs. Better to flush the logs to disk at an appropriate moment. Lazy write:
Queue the log entries off to a separate logger thread (i.e. store them in some class and queue the class instances to a producer-consumer queue). In the logger thread, wait on the input queue with a timeout. If a log entry comes in, store it in a local cache queue.
If the timeout fires, or some high-water mark of stored logs is reached, write all the cached log entries to the file and flush the file buffers.
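A sketch of that idea (the class name, the two-second timeout, and the high-water mark of 100 are all arbitrary choices), using a BlockingCollection as the producer-consumer queue:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Threading;

public sealed class LazyLogger : IDisposable
{
    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
    private readonly Thread worker;
    private readonly string path;

    public LazyLogger(string path)
    {
        this.path = path;
        worker = new Thread(Run) { IsBackground = true };
        worker.Start();
    }

    public void Log(string message)
    {
        queue.Add(message); // producers return immediately
    }

    private void Run()
    {
        var cache = new List<string>();
        while (!queue.IsCompleted)
        {
            string entry;
            bool got = queue.TryTake(out entry, TimeSpan.FromSeconds(2));
            if (got) cache.Add(entry);

            // Write on timeout or when the high-water mark is reached.
            if (cache.Count > 0 && (!got || cache.Count >= 100))
                Flush(cache);
        }
        Flush(cache); // drain whatever is left at shutdown
    }

    private void Flush(List<string> cache)
    {
        File.AppendAllLines(path, cache); // opens, appends, flushes, closes
        cache.Clear();
    }

    public void Dispose()
    {
        queue.CompleteAdding(); // lets the worker drain the queue and exit
        worker.Join();
    }
}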
Rgds,
Martin
I have a text file and multiple threads/processes will write to it (it's a log file).
The file sometimes gets corrupted because of concurrent writes.
I want every thread and process to write to the file in a mode that is serialized at the file-system level itself.
I know it's possible to use locks (a mutex for multiple processes) to synchronize writing to this file, but I would prefer to open the file in the correct mode and leave the task to System.IO.
Is it possible? What's the best practice for this scenario?
Your best bet is just to use locks/mutexes. It's a simple approach, it works, and you can easily understand and reason about it.
When it comes to synchronization it often pays to start with the simplest solution that could work and only try to refine if you hit problems.
To my knowledge, Windows doesn't have what you're looking for. There is no file handle object that does automatic synchronization by blocking all other users while one is writing to the file.
If your logging involves the three steps (open file, write, close file), then you can have your threads try to open the file in exclusive mode (FileShare.None), catch the exception if the open fails, and then retry until it succeeds. I've found that tedious at best.
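For completeness, a sketch of that retry loop (the attempt count and delay are arbitrary):

using System;
using System.IO;
using System.Threading;

static void AppendLine(string path, string line)
{
    for (int attempt = 0; attempt < 50; attempt++)
    {
        try
        {
            // FileShare.None: the open fails if anyone else has the file.
            using (var stream = new FileStream(path, FileMode.Append,
                FileAccess.Write, FileShare.None))
            using (var writer = new StreamWriter(stream))
            {
                writer.WriteLine(line);
                return;
            }
        }
        catch (IOException)
        {
            Thread.Sleep(10); // someone else has it; back off and retry
        }
    }
    throw new IOException("Could not acquire the log file: " + path);
}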
In my programs that log from multiple threads, I created a TextWriter descendant that is essentially a queue. Threads call the Write or WriteLine methods on that object, which formats the output and places it into a queue (using a BlockingCollection). A separate logging thread services that queue, pulling things from it and writing them to the log file (a sketch of this follows the list below). This has a few benefits:
Threads don't have to wait on each other in order to log
Only one thread is writing to the file
It's trivial to rotate logs (i.e. start a new log file every hour, etc.)
There's zero chance of an error because I forgot to do the locking on some thread
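Here is a trimmed-down sketch of that descendant (QueueingLogWriter is a made-up name, and only WriteLine(string) is overridden for brevity):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public sealed class QueueingLogWriter : TextWriter
{
    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
    private readonly Task pump;

    public QueueingLogWriter(string path)
    {
        // A single consumer owns the file, so no locking around writes.
        pump = Task.Factory.StartNew(() =>
        {
            using (StreamWriter writer = File.AppendText(path))
                foreach (string line in queue.GetConsumingEnumerable())
                    writer.WriteLine(line);
        }, TaskCreationOptions.LongRunning);
    }

    public override Encoding Encoding
    {
        get { return Encoding.UTF8; }
    }

    public override void WriteLine(string value)
    {
        queue.Add(value); // callers never block on the disk
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            queue.CompleteAdding(); // drain the queue, then close the file
            pump.Wait();
        }
        base.Dispose(disposing);
    }
}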
Doing this across processes would be a lot more difficult. I've never even considered trying to share a log file across processes. Were I to need that, I would create a separate application (a logging service). That application would do the actual writes, with the other applications passing the strings to be written. Again, that ensures that I can't screw things up, and my code remains simple (i.e. no explicit locking code in the clients).
You might be able to use File.Open() with a FileShare value of None, and make each thread wait if it can't get access to the file.
I currently have a multithreaded application which runs in the following order:
Start up and change XML file
Do work
Change XML to default values
Step 3 is very important and I have to ensure that it always happens. But if the application crashes, I might end up with the wrong XML.
The scenario where I am using it is:
My application is a small utility which connects to a remote device. On the same machine there is a running service which is connected to that same remote device. The service exposes a restartService method, and during startup it will or will not connect to the remote device depending on the XML data. So in the end I have to ensure that, whatever happens to my application, the XML is reset to its default state.
I thought that having a watchdog running as a separate process, checking every n seconds whether the main process is alive and responding, would solve this issue. But I have found very few examples of multiprocess applications in C#. So if someone could show an example of how to create such a companion process, that would be great.
What if I create a separate project, a console application? It is compiled into a separate executable and launched from within the main application, and then uses an IpcChannel for the communication between the two processes. Or I could create a WCF application. Will one of these approaches work?
A Thread belongs to a Process, so if the process dies then so do all its threads. Each application is expected to be a single process, and while you can launch additional processes, it sounds like a complex solution to what might be a simple problem.
Rather than changing and reverting the file could you just read it into memory and leave the filesystem alone?
You can subscribe to an event called DispatcherUnhandledException, so that whenever an unhandled exception is thrown you can safely revert your XML settings.
public partial class App : Application
{
    public App()
    {
        this.DispatcherUnhandledException += new System.Windows.Threading.DispatcherUnhandledExceptionEventHandler(App_DispatcherUnhandledException);
    }

    void App_DispatcherUnhandledException(object sender, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs e)
    {
        // Whenever an unhandled exception is thrown,
        // you can change your XML files back to their default values here.
    }
}
// Runs on normal process exit. Note that a hard kill from Task Manager
// ("End process") terminates the process without raising this event.
AppDomain.CurrentDomain.ProcessExit += new EventHandler(CurrentDomain_ProcessExit);

void CurrentDomain_ProcessExit(object sender, EventArgs e)
{
    // Change your settings here.
}

// Raised when the user initiates a Windows shutdown or log-off.
this.SessionEnding += new SessionEndingCancelEventHandler(App_SessionEnding);

void App_SessionEnding(object sender, SessionEndingCancelEventArgs e)
{
    // XML changes
}
What you are talking about is usually called "supervision" in mainframe computing and other large-ish computing infrastructures. A supervised process is a process that runs under the scrutiny of a supervisor process, which restarts it or otherwise "fixes" the problem if the former crashes or is unable to finish its job.
You can see a glimpse of this in the way that Windows restarts services automatically if they stop working; that is a very simplistic version of a supervision subsystem.
As far as I understand, this is a complex area of computer engineering, and I don't think that Windows or .NET provide a programmatic interface to it. Depending on your specific needs, you might be able to develop a simple approach to it.
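That said, a very simple supervisor can get you a long way. Here is a rough sketch (Watchdog, the executable name, and the file names are all placeholders): a small console app that launches your utility, waits for it to exit, and restores the default XML if the exit was abnormal.

using System.Diagnostics;
using System.IO;

class Watchdog
{
    static void Main()
    {
        // Launch the real application (path is a placeholder).
        Process app = Process.Start("MyUtility.exe");
        app.WaitForExit(); // returns however the app ends, crash included

        if (app.ExitCode != 0)
        {
            // The app crashed or was killed: put the defaults back.
            // Hypothetical: copy a known-good default over the live config.
            File.Copy("default-config.xml", "config.xml", true);
        }
    }
}

If you cannot trust the exit code, the watchdog can simply restore the defaults unconditionally after WaitForExit returns.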
Consider setting a "dirty" flag in your config file and storing a backup of the default XML in another file. When your application starts it changes the XML and sets the flag. If it successfully completes then it resets the flag and restores the XML. Your service checks the flag to see if it needs to use the last XML written by your app or switch to the backup file.
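A sketch of that flag idea using a marker file (all file names here are placeholders):

using System.IO;

// Starting up: keep a copy of the defaults and raise the dirty flag.
File.Copy("config.xml", "config.backup.xml", true);
File.WriteAllText("config.dirty", "");

// ... write the modified settings and do the actual work here ...

// On success: restore the defaults and clear the flag.
File.Copy("config.backup.xml", "config.xml", true);
File.Delete("config.dirty");

// The service, for its part, checks File.Exists("config.dirty") to decide
// whether config.xml can be trusted or the backup should be used instead.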
I think that whether the application is multithreaded or multiprocess or whatever is not actually the problem you need to solve. The real problem is: how do I make this operation atomic?
When you say that you have to ensure that step 3 always happens, what you're really saying is that your program needs to perform an atomic transaction: either all of it happens, or none of it happens.
To accomplish this, your process should be designed the way that database transactions are designed. It should begin the transaction, do the work, and then either commit the transaction or roll it back. The process should be designed so that if, when it starts up, it detects that a transaction was begun and not committed or rolled back by an earlier run, it should start by rolling back that transaction.
Crucially, the commit method should have as little instability as possible. For instance, a typical way to design a transactional process is to use the file system: create a temporary file to indicate that the transaction has begun, write the output to temporary files, and then have the commit method rename the temporary files to their final names and delete the transaction-flag file. There's still a risk that the file system will go down in between the time you've renamed the files and the time you've deleted the flag file (and there are ways to mitigate that risk too), but it's a much smaller risk than the kind you've been describing.
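A sketch of that pattern (the flag and file names are illustrative):

using System.IO;

// On startup: roll back anything a crashed earlier run left behind.
if (File.Exists("job.inprogress"))
{
    if (File.Exists("output.tmp")) File.Delete("output.tmp");
    File.Delete("job.inprogress");
}

// Begin the transaction: the flag file marks it as open.
File.WriteAllText("job.inprogress", "");

// Do all the work against temporary files only.
File.WriteAllText("output.tmp", "the real results");

// Commit: the rename is the single near-atomic step, then clear the flag.
if (File.Exists("output.txt")) File.Delete("output.txt");
File.Move("output.tmp", "output.txt");
File.Delete("job.inprogress");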
If you design the process so that it implements the transactional model, whether it uses multiprocessing or multithreading or single-threading is just an implementation detail.
I've got a class that represents a document (GH_Document). GH_Document has an AutoSave method on it which is called prior to every potentially dangerous operation. This method creates (or overwrites) an AutoSave file next to the original file.
GH_Document also contains a method called DestroyAutoSaveFiles() which removes any and all files from the disk that have been created by the AutoSave function. I call this method on documents when the app closes down, and also when documents get unloaded. However, it appears I missed a few cases since AutoSave files are still present after some successful shutdowns.
So this got me thinking. What's the best way to handle situations like this? Should I track down all possible ways in which documents can disappear and add autosave cleanup logic everywhere? Or should I implement IDisposable and perform cleanup in GH_Document.Dispose()? Or should I do this in GH_Document.Finalize()?
The only time I want an autosave file to remain on disk is if the application crashes. Are Dispose and Finalize guaranteed to not be called in the event of a crash?
Are Dispose and Finalize guaranteed to not be called in the event of a crash?
In general, no, although it depends on how your application crashes. In C# it will usually be the result of an uncaught exception propagating up to the top level. In this case, any finally clauses will be executed on the way up the stack, and that includes Dispose calls from using statements.
If the application is terminated suddenly (for example by killing it in Task Manager) the finally clauses will not be called, but you shouldn't rely on this behaviour.
A simple solution is to put your auto save files in a hidden directory with a special naming convention and delete all autosave files in that directory on successful shutdown.
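A sketch of that convention-based cleanup (the directory name and extension are invented):

using System.IO;

string documentPath = @"C:\docs\model.gh"; // placeholder: the open document
string autoSaveDir = Path.Combine(Path.GetDirectoryName(documentPath), ".autosave");

DirectoryInfo dir = Directory.CreateDirectory(autoSaveDir);
dir.Attributes |= FileAttributes.Hidden; // keep it out of the user's way

// On a clean shutdown, sweep the whole directory by convention
// instead of tracking every file individually:
foreach (string file in Directory.GetFiles(autoSaveDir, "*.ghautosave"))
    File.Delete(file);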
-What is the most foolproof way of ensuring the folder or file I want to manipulate is accessible (not read-only)?
-I know I can use ACLs to add/set entries (make the file/folder non-read-only), but how would I know if I need to use security permissions to ensure file access? Or can I just add this in as an extra measure and handle the exception/negative scenario?
-How do I know when to close or just flush a stream? For example, should I try to use the streams once in a method and then flush/close/dispose at the end? If I use Dispose(), do I still need to call Flush() and Close() explicitly?
I ask this question because constantly ensuring a file is available is a core requirement but it is difficult to guarantee this, so some tips in the design of my code would be good.
Thanks
There is no way to guarantee access to a file. I know this isn't a popular response but it's 100% true. You can never guarantee access to a file even if you have an exclusive non-sharing open on a Win32 machine.
There are too many ways this can fail that you simply cannot control. The classic example is a file opened over the network. Open it any way you'd like with any account, I'll simply walk over and yank the network cable. This will kill your access to the file.
I'm not saying this to be mean or arrogant. I'm saying this to make sure that people understand that operating on the file system is a very dangerous operation. You must accept that the operation can and will fail. It's imperative that you have a fallback scenario for any operation that touches disk.
-What is the most foolproof way of ensuring the folder or file I want to manipulate is accessible (not read-only)?
Opening them in write-mode?
Try to write a new file into the folder and catch any exceptions. Along with that, do the normal sanity checks, like whether the folder/file exists, etc.
You should never change the folder security in code, as the environment could change drastically and cause major headaches. Rather, ensure that the security is well documented and configured beforehand. Alternatively, use impersonation in your own code to ensure you are always running the required code as a user with full permissions to the folder/file.
Never call Dispose() unless you have no other choice. You always flush before closing the file or when you want to commit the content of the stream to the file/disk. The choice of when to do it depends on the amount of data that needs to be written and the time involved in writing the data.
100% foolproof way to ensure a folder is writable - create a file, close it, verify it is there, then delete it. A little tedious, but you asked for foolproof =)
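Something along those lines (a sketch; the probe file name is arbitrary):

using System;
using System.IO;

static bool IsFolderWritable(string folder)
{
    string probe = Path.Combine(folder, Guid.NewGuid().ToString("N") + ".probe");
    try
    {
        using (File.Create(probe)) { }    // create and immediately close
        bool exists = File.Exists(probe); // verify it landed on disk
        File.Delete(probe);
        return exists;
    }
    catch (Exception) // UnauthorizedAccessException, IOException, ...
    {
        return false;
    }
}

Bear in mind the answer is stale the moment the method returns; as noted above, the very next write can still fail.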
Your better bet, which covers your question about ACL, is to handle the various exceptions if you cannot write to a file.
Also, I always call Close explicitly, unless I need to read from a file before I'm done writing it (in which case I call Flush, then Close).
Flush() - Synchronizes the in-memory buffer with the disk. Call when you want to write the buffer to the disk but keep the file open for further use.
Dispose(bool) - Releases the unmanaged resource (i.e. the OS file handle) and, if passed true, also releases the managed resources.
Close() - Calls Dispose(true) on the object.
Also, Dispose flushes the data before closing the handle, so there is no need to call Flush explicitly (although it might be a good idea to flush frequently anyway, depending on the amount and type of data you're handling).
If you're doing relatively atomic operations to files and don't need a long-running handle, the "using" paradigm is useful to ensure you are handling files properly, e.g.:
using (StreamReader reader = new StreamReader("filepath"))
{
    // Do some stuff
} // CLR automagically handles flushing and releasing resources