Lock file exclusively, then delete/move it - C#

I'm implementing a class in C# that is supposed to monitor a directory, process files as they are dropped, and then delete (or move) each processed file as soon as processing is complete. Since multiple threads can run this code, the first one that picks up a file locks it exclusively, so that no other thread will read the same file and no external process or user can access it in any way. I would like to keep the lock until the file is deleted/moved, so there's no risk of another thread/process/user accessing it.
So far I have tried two implementation options, but neither works the way I want.
Option 1
FileStream fs = file.Open(FileMode.Open, FileAccess.Read, FileShare.Delete);
//Read and process
File.Delete(file.FullName); //Or File.Move, based on a flag
fs.Close();
Option 2
FileStream fs = file.Open(FileMode.Open, FileAccess.Read, FileShare.None);
//Read and process
fs.Close();
File.Delete(file.FullName); //Or File.Move, based on a flag
The issue with Option 1 is that other processes can access the file (they can delete, move, or rename it) while it should be fully locked.
The issue with Option 2 is that the file is unlocked before being deleted, so another thread/process can lock the file before the delete happens, causing the delete to fail.
I was looking for some API that can perform the delete using the file handle to which I already have exclusive access.
Edit
The directory being monitored resides in a public share, so other users and processes have access to it.
The issue is not managing the locks within my own process. The issue I'm trying to solve is how to lock a file exclusively, then move/delete it without releasing the lock.

Two solutions come to mind.
The first and simplest is to have the thread rename the file to something that the other threads won't touch. Something like "filename.dat.<unique number>", where <unique number> is something thread-specific. Then the thread can party on the file all it wants.
If two threads get the file at the same time, only one of them will be able to rename it. You'll have to handle the IOException that occurs in the other threads, but that shouldn't be a problem.
The other way is to have a single thread monitoring the directory and placing file names into a BlockingCollection. Worker threads take items from that queue and process them. Because only one thread can get that particular item from the queue, there is no contention.
The BlockingCollection solution is a little bit (but only a little bit) more complicated to set up, but should perform better than a solution that has multiple threads monitoring the same directory.
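A minimal sketch of that layout might look like the following; the share path, worker count, and processing step are placeholders, and in production you'd also want to handle duplicate watcher events and call CompleteAdding on shutdown:
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

var queue = new BlockingCollection<string>();

// a single watcher fills the queue as files arrive
var watcher = new FileSystemWatcher(@"\\server\share\inbox");
watcher.Created += (s, e) => queue.Add(e.FullPath);
watcher.EnableRaisingEvents = true;

// several workers drain it; GetConsumingEnumerable blocks until items arrive,
// and each path is handed to exactly one worker
for (int i = 0; i < 4; i++)
{
    Task.Run(() =>
    {
        foreach (var path in queue.GetConsumingEnumerable())
        {
            // process the file, then delete or move it
        }
    });
}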
Edit
Your edited question changes the problem quite a bit. If you have a file in a publicly accessible directory, it's at risk of being viewed, modified, or deleted at any point between the time it's placed there and the time your thread locks it.
Since you can't move or delete a file while you have it open (not that I'm aware of), your best bet is to have the thread move the file to a directory that's not publicly accessible. Ideally to a directory that's locked down so that only the user under which your application runs has access. So your code becomes:
File.Move(sourceFilename, destFilename);
// the file is now in a presumably safe place.
// Assuming that all of your threads obey the rules,
// you have exclusive access by agreement.
Edit #2
Another possibility would be to open the file exclusively and copy it using your own copy loop, leaving the file open when the copy is done. Then you can rewind the file and do your processing. Something like:
// srcPath and destPath stand in for your actual paths
var srcFile = File.Open(srcPath, FileMode.Open, FileAccess.Read, FileShare.None); // exclusive access
var destFile = File.OpenWrite(destPath);
// copy the file
var buffer = new byte[32768];
int bytesRead = 0;
while ((bytesRead = srcFile.Read(buffer, 0, buffer.Length)) != 0)
{
    destFile.Write(buffer, 0, bytesRead);
}
// close destination
destFile.Close();
// rewind source
srcFile.Seek(0, SeekOrigin.Begin);
// now read from source to do your processing.
// for example, to get a StreamReader, just pass the srcFile stream to the constructor.
You can sometimes process and then copy; it depends on whether the stream stays open when you're finished processing. Typically, code does something like:
using (var strm = new StreamReader(srcStream))
{
    // do stuff here
}
That ends up closing the reader and srcStream along with it. You'd have to write your code like this:
using (var srcStream = new FileStream(srcPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None)) // exclusive access
{
    var reader = new StreamReader(srcStream);
    // process the stream, leaving the reader open
    // rewind srcStream
    // copy srcStream to destination
    // close reader
}
Doable, but clumsy.
Oh, and if you want to eliminate the potential of somebody reading the file before you can delete it, just truncate the file to zero length before you close it. As in:
srcStream.Seek(0, SeekOrigin.Begin);
srcStream.SetLength(0);
That way if somebody does get to it before you get around to deleting it, there's nothing to modify, etc.

Here is the most robust way I know of that will even work correctly if you have multiple processes on multiple servers working with these files.
Instead of locking the files themselves, create a temporary file for locking. This way you can unlock/move/delete the original file without problems, but still be sure that any copies of your code running on any server/thread/process will not try to work with the file at the same time.
Pseudo code:
try
{
    // get an exclusive cross-server/process/thread lock by opening/creating a temp file with no sharing allowed
    var lockFilePath = $"{file}.lck";
    var lockFile = File.Open(lockFilePath, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    try
    {
        // open the file itself with no sharing allowed, in case some process that does not use our locking schema is trying to use it
        var fileHandle = File.Open(file, FileMode.Open, FileAccess.Read, FileShare.None);
        // TODO: add processing -- we have exclusive access to the file, and also the locking file
        fileHandle.Close();
        // at this point it is possible for some other process that does not use our locking schema to lock the file before we
        // move it, causing us to process this file again -- we would always have to handle issues where we failed to move
        // the file anyway (maybe we just lost power, or crashed?) so we had to design around this no matter what
        File.Move(file, archiveDestination);
    }
    finally
    {
        lockFile.Close();
        try
        {
            File.Delete(lockFilePath);
        }
        catch (Exception)
        {
            // another process opened the lock file after we closed it, before it was deleted -- safely ignore; the other process will delete the lock file
        }
    }
}
catch (Exception)
{
    // another process already has exclusive access to the lock file, so we don't need to do anything
    // or we failed while processing, in which case we did not move the file, so it will be tried again by this process or another
}
One nice thing about this pattern is it can also be used for times when locking is not supported by the file storage. For example, if you were trying to process files on an FTP/SFTP server, you could make your temporary locking files use a normal drive (or SMB share) -- since the locking files do not have to be in the same location as the files themselves.
I can't take credit for the idea, it's been around longer than the PC, and used by plenty of apps like Microsoft Word, Excel, Access, and most older database systems. Read: well tested.

The file system itself is volatile in nature, so it's very difficult to do what you want. This is a classic race condition in the file system. With Option 2, you could alternatively move the file to a "processing" or staging directory that you create, before doing your work. YMMV on performance, but you could at least benchmark it to see if it fits your needs.

You may need to implement some form of shared/synchronised list in the spawning thread. If the parent thread keeps track of files by periodically checking the directory, it can then hand them off to child threads, and that will eliminate the locking problem.

This solution, though not 100% watertight, may well get you what you need. (It did for us.)
Use two locks that together give you exclusive access to the file. When you are ready to delete the file, you release one of them, then delete the file. The remaining lock will still prevent most other processes from obtaining a lock.
FileInfo file = ...
// Get read access to the file and only allow other processes write or delete access.
// This keeps others from locking the file for reading.
var readStream = file.Open(FileMode.Open, FileAccess.Read, FileShare.Write | FileShare.Delete);
FileStream preventWriteAndDelete;
try
{
    // Now try to get a lock that only allows others to read the file. We can acquire both
    // locks because they each allow the other. Together, they give us exclusive access to the
    // file.
    preventWriteAndDelete = file.Open(FileMode.Open, FileAccess.Write, FileShare.Read);
}
catch
{
    // We couldn't get the second lock, so release the first.
    readStream.Dispose();
    throw;
}
Now you can read the file (with readStream). If you need to write to it, you'll have to do that with the other stream.
When you are ready to delete the file, you first release the lock that prevents writing and deletion while still holding the lock that prevents reading.
preventWriteAndDelete.Dispose(); // Release lock that prevents deletion.
file.Delete();
// This lock specifically allowed deletion, but with the file gone, we're done with it now.
readStream.Dispose();
The only opportunity for another process (or thread) to get a lock on the file is if it requests a shared write lock, one which gives it write-only access and also allows others to write to the file. This is not very common. Most processes attempt either a shared read lock (read access allowing others to read, but not write or delete) or an exclusive write lock (write or read/write access with no sharing). Both of these common scenarios will fail. A shared read/write lock (requesting read/write access and allowing others the same) will also fail.
In addition, the window of opportunity for a process to request and acquire a shared write lock is very small. If a process is hammering away trying to acquire such a lock, then it may succeed, but few applications do this. So unless you have such an application in your scenario, this strategy should meet your needs.
You can also use the same strategy to move the file.
preventWriteAndDelete.Dispose();
file.MoveTo(destination);
readStream.Dispose();

You could use the MoveFileEx API function to mark the file for deletion upon next reboot.
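For reference, a hedged P/Invoke sketch of that call (MOVEFILE_DELAY_UNTIL_REBOOT is the documented flag; the wrapper name is mine):
using System.ComponentModel;
using System.Runtime.InteropServices;

static class NativeMethods
{
    private const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags);

    // Passing null as the new name with MOVEFILE_DELAY_UNTIL_REBOOT
    // registers the file for deletion at the next reboot.
    public static void DeleteOnReboot(string path)
    {
        if (!MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}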

Related

C# - How to copy a file being written

I have a class implementing a Log file writer.
Logs must be written for the application to "work correctly", so it is of the utmost importance that the writes to disk succeed.
The log file is kept open for the whole life of the application, and write operations are accordingly very fast:
var logFile = new FileInfo(filepath);
_outputStream = logFile.Open(FileMode.Append, FileAccess.Write, FileShare.Read);
Now, I need to synchronize this file to a network path, during application lifetime.
This network copy can be slightly delayed without problems. The important bit is that I have to guarantee that it doesn't interfere with log writing.
Given that this network copy must be eventually consistent, I need to make sure that all file contents are written, instead of only the last message(s).
A previous implementation used heavy locking and a simple System.IO.File.Copy(filepath, networkPath, true), but I would like to lock as little as possible.
How could I approach this problem? I'm out of ideas.

Concurrent file usage in C#

I have one application that reads from a folder, waiting for a file to appear. When a file appears, the application reads its content, executes a few functions against external systems with the data from the file, and then deletes the file (and in turn waits for the next file).
Now, I want to run this application on two different machines, both listening to the same folder. So it's the exact same application, but two instances. Let's call them instance A and instance B.
So when a new file appears, both A and B will find it, and both will try to read it. This leads to a race condition between the two instances. If A started reading the file before B, I want B to simply skip the file and let A process and delete it. The same applies if B finds the file first: A shall do nothing.
Now, how can I implement this? Setting a lock on the file is not sufficient, I guess, because let's say A starts to read the file; it is then locked by A, but A will have to unlock it in order to delete it. During that time, B might try to read the file. In that case the file is processed twice, which is not acceptable.
So to summarize: I have two instances of one program and one folder/network share. Whenever a file appears in the folder, I want EITHER instance A OR instance B to process it, NEVER both. Any ideas how I can implement such functionality in C#?
The correct way to do this is to open the file with a write lock (e.g., System.IO.FileAccess.Write) and a read share (e.g., System.IO.FileShare.Read). If one of the processes tries to open the file when the other process already has it open, then the open command will throw an exception, which you need to catch and handle as you see fit (e.g., log and retry). By using a write lock for the file open, you guarantee that the opening and locking are atomic and therefore synchronised between the two processes, and there is no race condition.
So something like this:
try
{
    using (FileStream fileStream = new FileStream(FileName, FileMode.Open, FileAccess.Write, FileShare.Read))
    {
        // Read from or write to file.
    }
}
catch (IOException ex)
{
    // The file is locked by the other process.
    // Some options here:
    // - Log exception.
    // - Ignore exception and carry on.
    // - Implement a retry mechanism to try opening the file again.
}
You can use FileShare.None if you do not want other processes to be able to access the file at all when your program has it open. I prefer FileShare.Read because it allows me to monitor what is happening in the file (e.g., open it in Notepad).
Catering for deletion of the file follows a similar principle: first rename/move the file and catch the IOException that occurs if the other process has already renamed/moved it, then open the renamed/moved file. The rename/move indicates that the file is already being processed and should be ignored by the other process. E.g., rename it with a .pending file extension, or move it to a Pending directory.
try
{
    // This will throw an exception if the other process has already moved the file -
    // either FileName no longer exists, or it is locked.
    File.Move(FileName, PendingFileName);
    // If we get this far we know we have exclusive access to the pending file.
    using (FileStream fileStream = new FileStream(PendingFileName, FileMode.Open, FileAccess.Write, FileShare.Read))
    {
        // Read from or write to file.
    }
    File.Delete(PendingFileName);
}
catch (IOException ex)
{
    // The file is locked by the other process.
    // Some options here:
    // - Log exception.
    // - Ignore exception and carry on.
    // - Implement a retry mechanism to try moving the file again.
}
As with opening files, File.Move is atomic and protected by locks, therefore it is guaranteed that if you have multiple concurrent threads/processes attempting to move the file, only one will succeed and the others will throw an exception. See here for a similar question: Atomicity of File.Move.
I can think of two quick solutions to this:
Distribute the load
Set up your two processes so that each works on only a subset of the files. How you partition could be based on the file name or the date/time. E.g., process 1 reads files whose timestamp ends in an odd number, and process 2 reads the ones ending in an even number.
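A hypothetical sketch of name-based partitioning; instanceIndex and instanceCount would come from each instance's configuration:
using System;

static bool IsMine(string fileName, int instanceIndex, int instanceCount)
{
    // Use a stable hash: string.GetHashCode() is randomized per process
    // on modern .NET, so it cannot be shared between instances.
    int hash = 0;
    foreach (char c in fileName)
        hash = unchecked(hash * 31 + c);
    return (hash & 0x7FFFFFFF) % instanceCount == instanceIndex;
}
Instance 0 of 2 would then process only files where IsMine(name, 0, 2) is true, and skip the rest.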
Database as lock
The other alternative is to use some kind of database as a lock.
Process 1 reads a file and inserts a row into a database table keyed on the file name (which must be unique). If the insert succeeds, it is responsible for the file and continues processing it; if the insert fails, the other process has already inserted the row, so that process is responsible, and process 1 ignores the file.
The database has to be accessible to both processes, and this incurs some overhead. But it might be the better option if you want to scale this out to more processes.
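A minimal sketch of the insert-as-lock idea, assuming SQL Server via System.Data.SqlClient and a table like CREATE TABLE FileLocks (FileName NVARCHAR(260) PRIMARY KEY); all names here are placeholders:
using System.Data.SqlClient;

static bool TryClaimFile(string connectionString, string fileName)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var cmd = new SqlCommand(
            "INSERT INTO FileLocks (FileName) VALUES (@name)", conn))
        {
            cmd.Parameters.AddWithValue("@name", fileName);
            try
            {
                cmd.ExecuteNonQuery();
                return true;  // insert succeeded: this process owns the file
            }
            catch (SqlException)
            {
                return false; // unique-key violation: the other process owns it
            }
        }
    }
}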
So if you are going to apply a lock, you can try to use the file name itself as the lock object. You can try to rename the file in a special way (like by adding a dot in front of the file name),
and the first service that is lucky enough to rename the file continues with it. The second (slower) one will get an exception that the file does not exist.
You also have to add a check to your file-processing logic so that the service will not try to "lock" a file that is "locked" already (i.e., has a name starting with a dot).
UPD: it may be better to include a special set of characters (like a marker) and some service identifier (machine name concatenated with PID),
because I'm not sure how file renames behave under concurrency.
So if you have file.txt in the shared folder:
First, check whether the ".lock" marker is already in the file name.
If not, the service can try to rename it to file.txt.lockDevhost345 (where .lock is the special marker, Devhost is the name of the current computer, and 345 is the PID).
Then the service has to check whether the file file.txt.lockDevhost345 is available.
If yes, it was locked by the current service instance and can be used.
If no, it was "stolen" by a concurrent service, so it should not be processed.
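A rough sketch of that flow, using the marker format from the steps above (the method name and path handling are mine):
using System;
using System.Diagnostics;
using System.IO;

static void TryProcess(string path)
{
    // skip files already claimed by some instance
    if (Path.GetFileName(path).Contains(".lock"))
        return;

    string lockedPath = $"{path}.lock{Environment.MachineName}{Process.GetCurrentProcess().Id}";
    try
    {
        File.Move(path, lockedPath);
    }
    catch (IOException)
    {
        return; // another instance renamed it first ("stolen")
    }

    if (File.Exists(lockedPath))
    {
        // we won the race: process lockedPath, then delete or archive it
    }
}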
If you do not have write permission, you can use another network share and try to create an additional lock-marker file there: for example, for file.txt the service can try to create (and hold a write lock on) a new file like file.txt.lock. The first service that creates the lock file takes care of the original file and removes the lock only when the original file has been processed.
Instead of digging into file-access changes, I would suggest a functionality-server approach. An additional argument for this approach is file usage from different computers, which gets deep into access and permission administration.
My suggestion is to have a single point of file access (a files repository) that implements the following functionality:
Get files list (gets a list of available files)
Checkout file (grab exclusive access to the file, so that the owner of the checkout is authorized to modify it)
Modify file (update the file content or delete it)
Check-in the changes to the repository
There are a lot of ways to implement the approach. (Use the API of a file-versioning system; implement a service; use a database, ...)
An easy one (requires a database that supports transactions and triggers or stored procedures):
Get files list (SQL SELECT from an "available files" table)
Checkout file (SQL UPDATE or a stored procedure; in the trigger or the stored procedure, raise an error in case of a second checkout)
Modify file (update the file content or delete it; keep in mind that it is still better to do this through the functionality "server" -- that way you implement the security policy once)
Check-in the changes to the repository (release the "checked out" field of the particular file entry; implement the check-in in a transaction)

Lock a file while retaining the ability to read/append/write/truncate in the same thread?

I have a file containing, roughly speaking, the state of the application.
I want to implement the following behaviour:
When the application is started, lock the file so that no other applications (or user itself) will be able to modify it;
Read the previous application state from the file;
... do work ...
Update the file with a new state (which, given the format of the file, involves rewriting the entire file; the length of the file may decrease after the operation);
... do work ...
Update the file again
... do work ...
If the work failed (application crashed), the lock is taken off, and the content of the file is left as it was after the previous unit of work executed.
It seems that, to rewrite the file, one should open it with the Truncate option; that means one should open a new FileStream each time one wants to rewrite the file. So it seems the behaviour I want can only be achieved in such a dirty way:
When the application is started, read the file, then open a FileStream with FileShare.Read;
Each time some work is done, close the handle opened previously, open another FileStream with FileMode.Truncate and FileShare.Read, write the data and flush the FileStream;
On Dispose, close the handle opened previously.
Such a way has some disadvantages: extra FileStreams are opened; file integrity is not guaranteed between the FileStream close and the FileStream open; the code is much more complicated.
Is there any other way, lacking these disadvantages?
Don't close and reopen the file. Instead, use FileStream.SetLength(0) to truncate the file to zero length when you want to rewrite it.
You might (or might not) also need to set FileStream.Position to zero. The documentation doesn't make it clear whether SetLength moves the file pointer or not.
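A minimal sketch of that idea, assuming the state fits in a byte array; the path and the GetNewState helper are placeholders:
using System.IO;

using (var fs = new FileStream("state.dat", FileMode.OpenOrCreate,
                               FileAccess.ReadWrite, FileShare.Read))
{
    // ... read the previous state from fs ...

    byte[] newState = GetNewState(); // hypothetical helper
    fs.SetLength(0);                 // truncate in place, no reopen needed
    fs.Position = 0;                 // set the pointer explicitly, to be safe
    fs.Write(newState, 0, newState.Length);
    fs.Flush(true);                  // flush through OS buffers to disk
}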
Why don't you take exclusive access to the file when the application starts, and create an in-memory cache of the file that can be shared across all threads in the process, while the actual file remains locked at the OS level? You can use lock(memoryStream) to avoid concurrency issues. When you are done updating the local in-memory version of the file, just update the file on disk and release the lock on it.
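A rough sketch of that idea; the class shape and member names are mine, not a prescribed design:
using System;
using System.IO;

public sealed class CachedStateFile : IDisposable
{
    private readonly FileStream _file;   // held open: OS-level exclusive lock
    private readonly MemoryStream _cache = new MemoryStream();
    private readonly object _sync = new object();

    public CachedStateFile(string path)
    {
        _file = new FileStream(path, FileMode.OpenOrCreate,
                               FileAccess.ReadWrite, FileShare.None);
        _file.CopyTo(_cache); // load the state once at startup
    }

    public byte[] Read()
    {
        lock (_sync) return _cache.ToArray();
    }

    public void Update(byte[] newState)
    {
        lock (_sync)
        {
            _cache.SetLength(0);
            _cache.Write(newState, 0, newState.Length);
            // write through to disk while still holding the exclusive handle
            _file.SetLength(0);
            _file.Position = 0;
            _file.Write(newState, 0, newState.Length);
            _file.Flush(true);
        }
    }

    public void Dispose() => _file.Dispose();
}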

Multiple Threads reading from the same file

I have an XML file that needs to be read from many, many times. I am trying to use Parallel.ForEach to speed this process up, since the order in which the data is read does not matter. The data is just being used to populate objects. My problem is that even though I open the file in each thread as read-only, it complains that the file is open by another program. (I don't have it open in a text editor or anything. :))
How can I accomplish multiple reads of the same file?
EDIT: The file is ~18 KB, pretty small. It is read about 1,800 times.
If you want multiple threads to read from the same file, you need to specify FileShare.Read:
using (var stream = File.Open("theFile.xml", FileMode.Open, FileAccess.Read, FileShare.Read))
{
...
}
However, you will not achieve any speedup from this, for multiple reasons:
Your hard disk can only read one thing at a time. Although you have multiple threads running at the same time, these threads will all end up waiting for each other.
You cannot easily parse a part of an XML file. You will usually have to parse the entire XML file every time. Since you have multiple threads reading it all the time, it seems that you are not expecting the file to change. If that is the case, then why do you need to read it multiple times?
Depending on the size of the file and the type of reads you are doing, it might be faster to load the file into memory first and then provide your threads direct access to it.
You didn't provide any specifics on the file, the reads, etc., so I can't say for sure whether it would address your specific needs.
The general premise is to load the file once in a single thread, and then provide access to it to each of your threads, either directly (via the XML structure) or indirectly (via XmlNodes, etc.). I envision something similar to:
Load the file
For each XPath query, dispatch the matching nodes to your threads.
If the threads don't modify the XML directly, this might be a viable alternative.
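If the file really is static during processing (as it appears to be here), a load-once sketch along these lines may be all you need; "item" is a placeholder element name, and every thread must treat the document as read-only:
using System.Threading.Tasks;
using System.Xml.Linq;

// parse once on a single thread...
XDocument doc = XDocument.Load("theFile.xml");

// ...then fan the nodes out to workers
Parallel.ForEach(doc.Descendants("item"), node =>
{
    // populate your objects from node here; do not mutate doc
});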
When you open the file, you need to specify FileShare.Read:
using (var stream = new FileStream("theFile.xml", FileMode.Open, FileAccess.Read, FileShare.Read))
{
...
}
That way the file can be opened multiple times for reading.
While this is an old post, it seems to be a popular one, so I thought I would add a solution that I have used to good effect in multi-threaded environments that need read access to a file. The file must, however, be small enough to hold in memory, at least for the duration of your processing, and the file must only be read, not written to, during the period of shared access.
string FileName = "TextFile.txt";
string[] FileContents = File.ReadAllLines(FileName);
foreach (string strOneLine in FileContents)
{
// Do work on each line of the file here
}
So long as the file is only being read, multiple threads or programs can access and process it at the same time without treading on one another's toes.

Read from a growing file in C#?

In C#/.NET (on Windows), is there a way to read a "growing" file using a file stream? The length of the file will be very small when the FileStream is opened, but the file will be written to by another thread. If/when the FileStream "catches up" with the other thread (i.e., when Read() returns 0 bytes read), I want to pause to allow the file to buffer a bit, then continue reading.
I don't really want to use a FileSystemWatcher and keep creating new file streams (as was suggested for log files), since this isn't a log file (it's a video file being encoded on the fly) and performance is an issue.
You can do this, but you need to keep careful track of the file read and write positions using Stream.Seek and with appropriate synchronization between the threads. Typically you would use an EventWaitHandle or subclass thereof to do the synchronization for data, and you would also need to consider synchronization for the access to the FileStream object itself (probably via a lock statement).
Update: In answering this question I implemented something similar - a situation where a file was being downloaded in the background and also being uploaded at the same time. I used memory buffers, and posted a gist which has working code. (It's GPL but that might not matter for you - in any case you can use the principles to do your own thing.)
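As a hedged illustration of that kind of synchronization (not the gist itself), a single shared FileStream with explicit positions and an AutoResetEvent might look like this; the file name is a placeholder:
using System.IO;
using System.Threading;

class SharedFile
{
    private readonly FileStream _stream = new FileStream("video.dat",
        FileMode.Create, FileAccess.ReadWrite, FileShare.None);
    private readonly AutoResetEvent _dataAvailable = new AutoResetEvent(false);
    private readonly object _sync = new object();
    private long _writePos, _readPos;

    public void Append(byte[] chunk)
    {
        lock (_sync)
        {
            _stream.Seek(_writePos, SeekOrigin.Begin);
            _stream.Write(chunk, 0, chunk.Length);
            _writePos += chunk.Length;
        }
        _dataAvailable.Set(); // wake the reader
    }

    public int Read(byte[] buffer)
    {
        while (true)
        {
            lock (_sync)
            {
                if (_readPos < _writePos)
                {
                    _stream.Seek(_readPos, SeekOrigin.Begin);
                    int n = _stream.Read(buffer, 0, buffer.Length);
                    _readPos += n;
                    return n;
                }
            }
            _dataAvailable.WaitOne(); // block until the writer adds more
        }
    }
}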
This worked with a StreamReader around a file, with the following steps:
In the program that writes to the file, open it with read sharing, like this:
var output = new StreamWriter(File.Open("logFile.txt", FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read));
In the program that reads the file, open it with read-write sharing, like this:
using (FileStream fileStream = File.Open("logFile.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var file = new StreamReader(fileStream))
Before accessing the input stream, check whether the end has been reached, and if so, wait around a while.
while (file.EndOfStream)
{
    Thread.Sleep(5);
}
The way I solved this is using the FileSystemWatcher class: when it triggers on the file you want, you open a FileStream and read it to the end. When I'm done reading, I save the position of the reader, so the next time the FileSystemWatcher triggers I open a stream and set the position to where I was last time.
Calling FileStream.Length is actually very slow; I have had no performance issues with my solution (I was reading a "log" ranging from 10 MB to 50-ish).
To me the solution I describe is very simple and easy to maintain; I would try it and profile it. I don't think you are going to get any performance issues based on it. I do this while people are playing a multi-threaded game, taking their entire CPU, and nobody has complained that my parser is more demanding than the competing parsers.
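A hedged sketch of that pattern; the event wiring and the parsing step are placeholders:
using System.IO;

class LogTailer
{
    private long _lastPosition; // remembered between watcher events

    public void OnChanged(object sender, FileSystemEventArgs e)
    {
        // reopen on each event, letting the writer keep its own handle
        using (var fs = new FileStream(e.FullPath, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        {
            fs.Seek(_lastPosition, SeekOrigin.Begin);
            using (var reader = new StreamReader(fs))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // parse the newly appended lines here
                }
                _lastPosition = fs.Position; // end of file after the reads
            }
        }
    }
}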
One other thing that might be useful: the FileStream class has a property called ReadTimeout, which is defined as:
"Gets or sets a value, in milliseconds, that determines how long the stream will attempt to read before timing out." (inherited from Stream)
This could be useful in that, when your reads catch up to your writes, the thread performing the reads may pause while the write buffer gets flushed. It would certainly be worth writing a small test to see if this property would help your cause in any way.
Are the read and write operations happening on the same object? If so, you could write your own abstraction over the file and then write cross-thread communication code such that the thread performing the writes notifies the thread performing the reads when it is done, so that the reading thread knows when to stop reading when it reaches EOF.
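A small sketch of that signaling idea, using a ManualResetEventSlim as the "writer is done" flag (names are mine):
using System.Threading;

class EncodeSession
{
    // the writer calls WritingDone.Set() after the last byte is written
    public readonly ManualResetEventSlim WritingDone = new ManualResetEventSlim(false);

    // the reader calls this after every Read(); returns false only at true EOF
    public bool ShouldKeepReading(int bytesRead)
    {
        if (bytesRead > 0)
            return true;            // got data, keep going
        if (WritingDone.IsSet)
            return false;           // writer has finished: real end of file
        Thread.Sleep(50);           // file may still grow; back off and retry
        return true;
    }
}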
