NLog - Allow other processes to read log file - c#

I'm starting to use NLog. The main process (a Windows service) writes to the log file every few seconds. I need to allow another process (a desktop app) to read this file at arbitrary times (the desktop app doesn't require write access).
The problem, however, is that NLog probably creates an exclusive lock when it opens the file for writing. So if the desktop process tries to read while the file is locked, an exception is thrown.
How can I configure NLog to allow other processes read-only access to the log file contents even while the main process has it open for writing? The desktop process will call File.ReadAllText(), which I hope is safe for concurrent operations.
(I read through the docs and found that NLog even allows concurrent writing to a log file from different processes, so read-only access should be easier in theory. I can't see any solutions, though.)

Instead of using File.ReadAllText() or File.ReadAllTextAsync(), which require an exclusive file lock and fail with:
System.IO.IOException: The process cannot access the file '...' because it is being used by another process.
I suggest using FileShare.ReadWrite to avoid failure while NLog is actively writing to the log file:
using (var f = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var s = new StreamReader(f))
{
    fileContent = s.ReadToEnd();
}
This also avoids problems for the application that uses NLog to write the log file: reading with an exclusive lock would make log writes fail, losing log events and hurting performance.

The problem, however, is that NLog probably creates an exclusive lock when it opens the file for writing
No, it doesn't lock the file by default. There are two important settings:
concurrentWrites, which defaults to true:
concurrentWrites - Enables support for optimized concurrent writes to same log file from multiple processes on the same machine-host, when using keepFileOpen = true. By using a special technique that lets it keep the files open from multiple processes. If only single process (and single AppDomain) application is logging, then it is faster to set to concurrentWrites = False. Boolean Default: True. Note: in UWP this setting should be false
There is also the keepFileOpen setting, which defaults to false:
keepFileOpen - Indicates whether to keep log file open instead of opening and closing it on each logging event. Changing this property to true will improve performance a lot, but will also keep the file handle locked. Consider setting openFileCacheTimeout = 30 when enabling this, as it will allow archive operations and react to log file being deleted. Boolean Default: False
See the docs; there are also more settings such as concurrentWriteAttemptDelay, concurrentWriteAttempts, etc.
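For illustration, the same settings can also be applied programmatically. A minimal sketch against the NLog 4.x FileTarget API; the target name, file name, and logging rule are illustrative:

using NLog;
using NLog.Config;
using NLog.Targets;

var config = new LoggingConfiguration();
var fileTarget = new FileTarget("logfile")
{
    FileName = "${basedir}/service.log",
    KeepFileOpen = true,       // faster, but keeps the file handle open between writes
    ConcurrentWrites = true,   // cooperate with other processes writing the same file
    OpenFileCacheTimeout = 30  // close idle handles so archiving/deletion can proceed
};
config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);
LogManager.Configuration = config;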
Last but not least, if you're locking the file for too long, maybe copy the file first and let your application read the copy, as sketched below?
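A minimal sketch of that copy-then-read idea (logPath and the snapshot path are illustrative); File.Copy only needs shared read access to the source, which NLog's defaults allow:

string snapshot = Path.Combine(Path.GetTempPath(), "service.log.copy");
File.Copy(logPath, snapshot, overwrite: true); // reads the source with shared access
string content = File.ReadAllText(snapshot);   // read the private copy at leisure
File.Delete(snapshot);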

Since the release of version 5.0, keepFileOpen is by default set to true. Reference

Related

Concurrent file usage in C#

I have one application that reads from a folder, waiting for a file to appear in it. When a file appears, the application shall read its content, execute a few functions against external systems with the data from the file, and then delete the file (and in turn wait for the next file).
Now I want to run this application on two different machines, both listening to the same folder. So it's the exact same application, but two instances. Let's call them instance A and instance B.
When a new file appears, both A and B will find it, and both will try to read it. This leads to a race condition between the two instances. I want that if A starts reading the file before B, B shall simply skip the file and let A process and delete it, and vice versa if B finds the file first.
Now, how can I implement this? Setting a lock on the file is not sufficient, I guess, because say A starts reading the file: it is then locked by A, but A must unlock it in order to delete it. During that window B might read the file, and then the file is processed twice, which is not acceptable.
To summarize: I have two instances of one program and one folder / network share. Whenever a file appears in the folder, I want EITHER instance A OR instance B to process it, NEVER both. Any ideas how I can implement such functionality in C#?
The correct way to do this is to open the file with a write lock (e.g., System.IO.FileAccess.Write) and a read share (e.g., System.IO.FileShare.Read). If one of the processes tries to open the file when the other process already has it open, the open call will throw an exception, which you need to catch and handle as you see fit (e.g., log and retry). By opening with a write lock, you guarantee that the opening and locking are atomic and therefore synchronised between the two processes, so there is no race condition.
So something like this:
try
{
    using (FileStream fileStream = new FileStream(FileName, FileMode.Open, FileAccess.Write, FileShare.Read))
    {
        // Read from or write to file.
    }
}
catch (IOException ex)
{
    // The file is locked by the other process.
    // Some options here:
    //  - Log exception.
    //  - Ignore exception and carry on.
    //  - Implement a retry mechanism to try opening the file again.
}
You can use FileShare.None if you do not want other processes to be able to access the file at all when your program has it open. I prefer FileShare.Read because it allows me to monitor what is happening in the file (e.g., open it in Notepad).
Deleting the file works on a similar principle: first rename/move the file, catching the IOException that occurs if the other process has already renamed/moved it, then open the renamed/moved file. The rename/move marks the file as already being processed, so the other process ignores it. E.g., rename it with a .pending file extension, or move it to a Pending directory.
try
{
    // This will throw an exception if the other process has already moved the file -
    // either FileName no longer exists, or it is locked.
    File.Move(FileName, PendingFileName);

    // If we get this far we know we have exclusive access to the pending file.
    using (FileStream fileStream = new FileStream(PendingFileName, FileMode.Open, FileAccess.Write, FileShare.Read))
    {
        // Read from or write to file.
    }

    File.Delete(PendingFileName);
}
catch (IOException ex)
{
    // The file is locked by the other process.
    // Some options here:
    //  - Log exception.
    //  - Ignore exception and carry on.
    //  - Implement a retry mechanism to try moving the file again.
}
As with opening files, File.Move is atomic and protected by locks, therefore it is guaranteed that if you have multiple concurrent threads/processes attempting to move the file, only one will succeed and the others will throw an exception. See here for a similar question: Atomicity of File.Move.
I can think of two quick solutions to this:
Distribute the load
Set up your two processes so that each only works on a subset of the files. The partitioning could be based on the file name or on the date/time: e.g., process 1 reads files whose timestamp ends in an odd number, and process 2 reads the ones ending in an even number. A name-based sketch follows below.
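For example, a name-based partitioning sketch (instanceIndex is illustrative: 0 on machine A, 1 on machine B); a hand-rolled hash is used because string.GetHashCode is randomized per process:

static bool IsMine(string fileName, int instanceIndex, int instanceCount = 2)
{
    int hash = 0;
    foreach (char c in fileName)
        hash = unchecked(hash * 31 + c); // stable across processes and machines
    return Math.Abs(hash % instanceCount) == instanceIndex;
}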
Database as lock
The other alternative is to use some kind of database as a lock.
Process 1 reads a file and inserts a row into a database table keyed on the file name (which must be unique). If the insert succeeds, it is responsible for the file and continues processing it; if the insert fails, the other process has already inserted the row, so that process is responsible, and process 1 ignores the file.
The database has to be accessible to both processes, and this incurs some overhead, but it might be the better option if you want to scale out to more processes.
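A sketch of that claim step, assuming a FileLocks table whose FileName column has a unique key; the table name is illustrative, and error numbers 2627/2601 are SQL Server's unique-key violations:

using System.Data.SqlClient;

static bool TryClaimFile(string fileName, string connectionString)
{
    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var cmd = new SqlCommand("INSERT INTO FileLocks (FileName) VALUES (@name)", conn);
    cmd.Parameters.AddWithValue("@name", fileName);
    try
    {
        cmd.ExecuteNonQuery();
        return true;  // insert succeeded: this instance owns the file
    }
    catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
    {
        return false; // unique-key violation: the other instance claimed it first
    }
}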
If you are going to apply a lock, you can use the file name itself as the lock object: try to rename the file in a special way (such as adding a dot in front of the name),
and the first service lucky enough to rename the file continues with it, while the second (slower) one gets an exception that the file does not exist.
You also have to add a check to your processing logic so that a service will not try to "lock" a file that is already "locked" (i.e., has a name starting with a dot).
UPD: it may be better to include a special set of characters (as a marker) plus a service identifier (machine name concatenated with the PID),
because I'm not sure how file renames behave in concurrent mode.
So, if you have file.txt in the shared folder:
- First, check whether the .lock marker is already present in the file name.
- If not, the service can try to rename the file to file.txt.lockDevhost345 (where .lock is the special marker, Devhost is the name of the current computer, and 345 is the PID).
- Then the service has to check whether the file file.txt.lockDevhost345 is available.
- If yes, it was locked by this service instance and can be used.
- If no, it was "stolen" by the concurrent service and should not be processed.
If you do not have write permission, you can use another network share and create an additional lock-marker file there: for file.txt, a service can try to create (and hold a write lock on) a new file such as file.txt.lock. The first service that creates the lock file takes care of the original file, and removes the lock only once the original file has been processed. A sketch of this variant follows.
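A sketch of that lock-file variant; FileMode.CreateNew is atomic, so only one instance can create the marker (dataFile and ProcessFile are illustrative names):

FileStream lockFile = null;
try
{
    lockFile = new FileStream(dataFile + ".lock", FileMode.CreateNew,
                              FileAccess.Write, FileShare.None);
    ProcessFile(dataFile);  // we won the race: this instance owns the file
}
catch (IOException)
{
    // Another instance created the marker first; skip this file.
}
finally
{
    if (lockFile != null)
    {
        lockFile.Dispose();
        File.Delete(dataFile + ".lock"); // release the lock once processing is done
    }
}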
Instead of digging deeper into file-access tuning, I would suggest a functionality-server approach. An additional argument for this approach is that the files are used from different computers, which otherwise leads deep into access and permission administration.
My suggestion is to have a single point of file access (a file repository) that implements the following functionality:
Get files list (gets a list of available files).
Checkout file (exclusively grab access to the file so that the owner of the checkout is authorized to modify it).
Modify file (update the file content or delete it).
Check in the changes to the repository.
There are a lot of ways to implement the approach (use the API of a file-versioning system; implement a service; use a database; ...).
An easy one (requires a database that supports transactions and triggers or stored procedures):
Get files list: a SQL SELECT from an "available files" table.
Checkout file: a SQL UPDATE or an update stored procedure; in the trigger or stored procedure, raise an error in case of a double checkout (see the sketch below).
Modify file: update the file content or delete it. Keep in mind it is still better to do this through the functionality "server"; that way you implement the security policy only once.
Check-in changes to the repository: release the "checked out" field of the particular file entry, implemented inside a transaction.
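A sketch of the checkout step as a single conditional UPDATE (table and column names are illustrative, and conn is an open SqlConnection as in the earlier sketch); exactly one caller can flip CheckedOutBy from NULL, so a double checkout simply reports failure:

using var cmd = new SqlCommand(
    @"UPDATE AvailableFiles
         SET CheckedOutBy = @owner
       WHERE FileName = @name AND CheckedOutBy IS NULL", conn);
cmd.Parameters.AddWithValue("@owner", ownerId);
cmd.Parameters.AddWithValue("@name", fileName);
bool checkedOut = cmd.ExecuteNonQuery() == 1; // 0 rows => someone else holds the checkout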

c# log to file that is used by another logger

I'm trying to write logs to a log file that is currently being used by a logger from another process (this second logger is written in C using the following:
::CreateFile(m_fileName.c_str(), GENERIC_WRITE, FILE_SHARE_WRITE|FILE_SHARE_READ, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL );
::WriteFile(handle(handle), s.data(), (DWORD)s.size(), &written, NULL)
::FlushFileBuffers(handle(handle))
and is never closed).
My custom logger also needs to write to the same log file. Is there any way I can achieve that in C# using synchronization techniques such as locks? Currently I'm opening the file using:
File.Open(filename, FileMode.Open, FileAccess.Write, FileShare.ReadWrite);
and I am able to write some information to the log, but it appears at a random place in the middle of the previous output, so the log becomes a mess.
As you have noticed, you can open and write to the same file from multiple processes. The output is, of course, interleaved randomly.
Single writes are atomic in the sense that the bytes of a single write appear contiguously, so it is enough to make sure that each log item is written in a single write operation. Note that FileStream does internal buffering; you need to disable that by setting the buffer size to 1 (the minimum).
If you can't or don't want to ensure that, you need to synchronize. Cross-process synchronization is easy to achieve using a named mutex, as sketched below.
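For example, a sketch of mutex-guarded appends (the mutex name is illustrative; note the C logger would also have to acquire the same named mutex for the synchronization to be complete):

using System.Text;
using System.Threading;

static void WriteLogLine(string path, string line)
{
    using var mutex = new Mutex(initiallyOwned: false, name: @"Global\MySharedLogMutex");
    mutex.WaitOne();
    try
    {
        using var fs = new FileStream(path, FileMode.Append, FileAccess.Write,
                                      FileShare.ReadWrite, bufferSize: 1); // no internal buffering
        byte[] bytes = Encoding.UTF8.GetBytes(line + Environment.NewLine);
        fs.Write(bytes, 0, bytes.Length); // one log item per write
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}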

Lock a file while retaining the ability to read/append/write/truncate in the same thread?

I have a file containing, roughly speaking, the state of the application.
I want to implement the following behaviour:
When the application is started, lock the file so that no other application (or the user themselves) will be able to modify it;
Read the previous application state from the file;
... do work ...
Update the file with a new state (which, given the format of the file, involves rewriting the entire file; the length of the file may decrease after the operation);
... do work ...
Update the file again
... do work ...
If the work fails (the application crashes), the lock is released, and the content of the file is left as it was after the previous unit of work completed.
It seems that, to rewrite the file, one should open it with the Truncate option; that means opening a new FileStream each time the file is to be rewritten. So it seems the behaviour I want can only be achieved in a rather dirty way:
When the application is started, read the file, then open a FileStream with FileShare.Read;
When some work is done, close the previously opened handle, open another FileStream with FileMode.Truncate and FileShare.Read, write the data, and flush the stream;
Repeat the previous step after each subsequent unit of work;
On Dispose, close the previously opened handle.
Such a way has some disadvantages: extra FileStreams are opened; file integrity is not guaranteed between one FileStream's close and the next one's open; and the code is much more complicated.
Is there any other way, lacking these disadvantages?
Don't close and reopen the file. Instead, use FileStream.SetLength(0) to truncate the file to zero length when you want to rewrite it.
You might (or might not) also need to set FileStream.Position to zero. The documentation doesn't make it clear whether SetLength moves the file pointer or not.
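A minimal sketch of the whole pattern under those assumptions (statePath and newState are illustrative; leaveOpen keeps the single FileStream alive across reads and writes):

using System.IO;
using System.Text;

using var fs = new FileStream(statePath, FileMode.OpenOrCreate,
                              FileAccess.ReadWrite, FileShare.Read);

// Read the previous state once at startup.
string previousState;
using (var reader = new StreamReader(fs, Encoding.UTF8,
       detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
{
    previousState = reader.ReadToEnd();
}

// Each time the state must be rewritten:
fs.SetLength(0);  // truncate in place; the handle (and thus the lock) stays open
fs.Position = 0;  // set explicitly, since the docs are unclear about the pointer
using (var writer = new StreamWriter(fs, Encoding.UTF8, bufferSize: 1024, leaveOpen: true))
{
    writer.Write(newState);
    writer.Flush();
}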
Why don't you take exclusive access to the file when the application starts, and create an in-memory cache of the file that can be shared across all threads in the process, while the actual file remains locked at the OS level? You can use lock(memoryStream) to avoid concurrency issues. When you are done updating the in-memory version, just write it back to the file on disk and release the lock.
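A rough sketch of that idea (path and the surrounding lifecycle are illustrative; the exclusive FileShare.None handle is held for the life of the process):

var stateFile = new FileStream(path, FileMode.OpenOrCreate,
                               FileAccess.ReadWrite, FileShare.None);
var cachedState = new MemoryStream();
stateFile.CopyTo(cachedState); // populate the in-memory cache once

// Any thread may mutate the cache under the lock ...
lock (cachedState)
{
    // ... read or rewrite cachedState here ...
}

// ... and flushing the cache back to disk reuses the same open handle.
lock (cachedState)
{
    stateFile.SetLength(0);
    stateFile.Position = 0;
    cachedState.Position = 0;
    cachedState.CopyTo(stateFile);
    stateFile.Flush();
}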

how to catch program shutdown to release resources?

In my program I'm using
logWriter = File.CreateText(logFileName);
to store logs.
Should I call logWriter.Close(), and where? Should it be in a finalizer or something?
The normal approach is to wrap File.CreateText in a using statement:
using (var logWriter = File.CreateText(logFileName))
{
    // Do stuff with logWriter.
}
However, this is inconvenient if you want logWriter to live for the duration of your app, since you most likely won't want a using statement wrapped around your app's Main method.
In that case you must make sure you call Dispose on logWriter before the app terminates, which is exactly what using does for you behind the scenes.
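For example, a sketch where the writer lives for the whole run but is still disposed on every exit path (RunApp stands in for the rest of the application):

static StreamWriter logWriter;

static void Main()
{
    logWriter = File.CreateText(logFileName);
    try
    {
        RunApp(); // the application's real work
    }
    finally
    {
        logWriter.Dispose(); // flushes the buffer and closes the file on any exit path
    }
}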
Yes, you should close your file when you're done with it. You can create a log class (or use an existing one like log4net), implement IDisposable, and release the resources inside the Dispose method.
You could wrap it in a using block, but I would rather keep the logging in a separate class (see the sketch below). That way you can handle more advanced logging in the future; for instance, what happens when your application runs on multiple threads and they try to write to the file at the same time?
log4net can be configured to use a text file or a database, and it's easy to change as the application grows.
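A sketch of such a class (names are illustrative, and log4net would replace all of this in practice), with a private lock object so multiple threads can log safely:

public sealed class FileLogger : IDisposable
{
    private readonly StreamWriter writer;
    private readonly object sync = new object();

    public FileLogger(string path) => writer = File.CreateText(path);

    public void Log(string message)
    {
        lock (sync) // serialize concurrent writers
        {
            writer.WriteLine($"{DateTime.UtcNow:O} {message}");
        }
    }

    public void Dispose()
    {
        lock (sync) { writer.Dispose(); } // flush and release the file
    }
}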
If you have a log file which you wish to keep open, the OS will release the file as part of shutting the process down when the application exits, so you do not actually have to manage this explicitly.
One issue with letting the OS clean up your file handle is that your file writer uses buffering, and the remains of its buffer may need flushing before they are written out. If you do not call Close/Dispose, you may lose information. One way of forcing a flush is to hook the AppDomain unload event, which gets called when your .NET process shuts down, e.g.:
AppDomain.CurrentDomain.DomainUnload += delegate { logWriter.Dispose(); };
There is a time limit on what can occur in a domain unload event handler, but writing out the remains of a file writer's buffer is well within it. I am assuming you have a default setup, i.e. one default AppDomain; otherwise things get tricky all round, including logging.
If you are keeping your file open, consider opening it with sharing rights that allow other processes read access. This enables a program such as a file tailer or text editor to read the file while your program is running.
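For example (a sketch; constructing the FileStream directly lets you state the share mode explicitly):

var logStream = new FileStream(logFileName, FileMode.Append,
                               FileAccess.Write, FileShare.Read); // readers welcome
var logWriter = new StreamWriter(logStream);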

How to avoid File Blocking

We are monitoring the progress of a customized app (whose source is not under our control) which writes to an XML manifest. At times, the application gets stuck because it is unable to write to the manifest file. We are covering our traces by explicitly closing the file handle using File.Close and by creating the file variables in using blocks, but somehow it keeps happening. (Our application is multithreaded, and at most three threads might be accessing the file.)
Another interesting thing is that their app updates this manifest on three different events (adding items, deleting items, completing items), but we only suffer on one event (completing items). My code is listed here:
using (var st = new FileStream(MenifestPath, FileMode.Open, FileAccess.Read))
{
    using (TextReader r = new StreamReader(st))
    {
        var xml = r.ReadToEnd();
        r.Close();
        st.Close();
        // ................ Rest of our operations
    }
}
If you are only reading from the file, then you should be able to pass a flag to specify the sharing mode. I don't know how you specify this in .NET, but in WinAPI you'd pass FILE_SHARE_READ | FILE_SHARE_WRITE to CreateFile().
I suggest you check your file API documentation to see where it mentions sharing modes.
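For reference, in .NET that flag pair corresponds to FileShare.ReadWrite on the FileStream constructor, e.g.:

var st = new FileStream(MenifestPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);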
Two things:
You should do the rest of your operations outside the scope of the using statements. That way you won't risk using the closed stream and reader. Also, you needn't call the Close methods: when you exit the scope of a using statement, Dispose is called, which is equivalent.
You should use the overload that takes the FileShare enumeration. Locking is paranoid in nature, so the file may be locked automatically to protect you from yourself. :)
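Applied to the code from the question, that might look like this (a sketch; the rest of the operations move outside the using scopes):

string xml;
using (var st = new FileStream(MenifestPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (TextReader r = new StreamReader(st))
{
    xml = r.ReadToEnd();
} // Dispose closes both reader and stream; no explicit Close calls needed

// ................ Rest of our operations, using xml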
The problem is different because that person has full control over file access for all processes, while, as I mentioned, ONE PROCESS IS THIRD PARTY WITH NO SOURCE ACCESS. And our applications are working fine; however, their application seems to get stuck if it can't get hold of the file. So I am looking for a method of file access that does not disturb their operation.
This could happen if one thread attempts to read from the file while another is writing. To avoid this type of situation, where you want multiple readers but only one writer at a time, use the ReaderWriterLock class or, from .NET 3.5 on, the ReaderWriterLockSlim class in the System.Threading namespace.
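For example (a sketch; this only coordinates the threads inside your own process, not the third-party app):

private static readonly ReaderWriterLockSlim manifestLock = new ReaderWriterLockSlim();

static string ReadManifest()
{
    manifestLock.EnterReadLock();   // many readers may enter at once
    try { return File.ReadAllText(MenifestPath); }
    finally { manifestLock.ExitReadLock(); }
}

static void WriteManifest(string xml)
{
    manifestLock.EnterWriteLock();  // exclusive: waits for readers to drain
    try { File.WriteAllText(MenifestPath, xml); }
    finally { manifestLock.ExitWriteLock(); }
}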
Also, if you're using .NET 2.0+, you can simplify your code to just:
string xmlText = File.ReadAllText(ManifestFile);
See also: File.ReadAllText on MSDN.
