C# - Lock directory until all files get copied into it

At a particular location, a directory with files inside it gets created. Another piece of software continuously watches that location for new directories and processes each one immediately.
Now, the problem is:
When I want to copy files to that location, I first create the directory and then copy all the files into it. But as soon as the directory gets created, the processing software processes it and deletes it, so my copy function throws an exception.
if (!Directory.Exists(DestinationPath))
    Directory.CreateDirectory(DestinationPath);
CopyFilesInDirectory(); // Throws an exception because the directory created above was already processed and deleted by the other software.
What I want is to stop the other software from processing the directory until my file copy has completed.

If you have a small window of opportunity, you can block the other software from deleting the directory by keeping a file open in it, like this:
if (!Directory.Exists(DestinationPath))
    Directory.CreateDirectory(DestinationPath);

// Holding an exclusive handle on a file inside the directory prevents the directory from being deleted.
using (var lockFile = File.Open(Path.Combine(DestinationPath, "_lock"),
                                FileMode.Create, FileAccess.ReadWrite, FileShare.None))
{
    CopyFilesInDirectory();
}
If the other software may delete the directory between creating it and opening the lock file, you will need to retry until the lock file is successfully opened.
The cleaner way would be to modify the other software, of course.
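A minimal sketch of such a retry loop, reusing DestinationPath and CopyFilesInDirectory from the question (the retry count and delay are arbitrary choices):

FileStream lockFile = null;
for (int attempt = 0; attempt < 10 && lockFile == null; attempt++)
{
    try
    {
        if (!Directory.Exists(DestinationPath))
            Directory.CreateDirectory(DestinationPath);

        // An exclusive handle inside the directory keeps the other software from deleting it.
        lockFile = File.Open(Path.Combine(DestinationPath, "_lock"),
                             FileMode.Create, FileAccess.ReadWrite, FileShare.None);
    }
    catch (IOException)    // the directory was deleted again before the lock file could be opened
    {
        Thread.Sleep(200); // brief back-off before trying again
    }
}

if (lockFile == null)
    throw new IOException("Could not create the lock file after several attempts.");

using (lockFile)
{
    CopyFilesInDirectory();
}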

Related

Create and open as read-only a temporary copy of an existing file and delete after use

I've scoured for information, but I just fear I may be getting in over my head here as I am not proficient in multi-threading. I have a desktop app that needs to create a read-only, temp copy of an existing file, open the file in its default application, and then delete the file once the user is done viewing it.
It must open read-only, as the user may try to save it thinking it's the original file.
To do this I have created a new thread which copies the file to a temp path, sets the file's attributes, attaches a Process handler to it, and then "waits" and deletes the file on exit. The advantage of this is that the thread will continue to run even after the program has exited (so it seems, anyway). This way the file will still be deleted even if the user keeps it open longer than the program is running.
Here is my code. The att object holds my file information.
new Thread(() =>
{
    // Create the temp file name
    string temp = System.IO.Path.GetTempPath() + att.FileNameWithExtension;
    // Determine if this file already exists (in case it didn't get deleted).
    // This is important, as a leftover read-only attribute would cause
    // User Access Control (UAC) issues when overwriting the file.
    if (File.Exists(temp)) { File.SetAttributes(temp, FileAttributes.Temporary); }
    // Copy the original file to the temp location, overwriting if it already
    // exists due to a previous deletion failure.
    File.Copy(att.FullFileName, temp, true);
    // Set temp file attributes
    File.SetAttributes(temp, FileAttributes.Temporary | FileAttributes.ReadOnly);
    // Start the process and monitor it
    var p = Process.Start(temp); // Open the attachment in its default program
    if (p != null) { p.WaitForExit(); }
    // After the process ends, remove the read-only attribute to allow deletion without UAC issues
    File.SetAttributes(temp, FileAttributes.Temporary);
    File.Delete(temp);
}).Start();
I've tested it and so far it seems to be doing the job, but it all feels so messy. I honestly feel like there should be an easier way to handle this that doesn't involve creating new threads. I've looked into copying the files into memory first, but I can't seem to figure out how to open them in their default application from a MemoryStream.
So my questions are:
Is there a better way to achieve opening a read-only, temp copy of a file that doesn't write to disk first?
If not, what implications could I face from taking the multithreaded approach?
Any info is appreciated.
Instead of removing the temporary file(s) on shutdown, remove the 'left over' files at startup.
This is often easier to implement than trying to ensure that such cleanup code runs at process termination and handles those 'forced' cases like power fail, 'kill -9', 'End process' etc.
I like to create a 'temp' folder for such files: all of my apps scan and delete any files in such a folder at startup and the code can just be added to any new project without change.
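A minimal sketch of that startup cleanup, assuming a dedicated per-application folder under the system temp path (the folder name is just an example):

// Dedicated folder for this application's temporary copies.
string tempDir = Path.Combine(Path.GetTempPath(), "MyAppTempCopies");
Directory.CreateDirectory(tempDir); // no-op if it already exists

// At startup, delete anything left over from earlier runs.
foreach (string file in Directory.GetFiles(tempDir))
{
    try
    {
        File.SetAttributes(file, FileAttributes.Normal); // clear read-only, if set
        File.Delete(file);
    }
    catch (IOException)
    {
        // Still open (e.g. by a viewer); it will be picked up on a later run.
    }
}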

Deleting temp files automatically (C#)

I have a scenario where I download files from a storage to the temp folder. Then I call a framework to process the file, and this framework needs the file during the lifetime of the application. When the application exits I close all files, but when the application crashes the files do not get deleted. There can be multiple instances of the application.
What is the best way to get these files deleted? I have 2 ideas:
It is okay to delete the files on the next run of the application. My idea is to use one main folder in the temp path and, inside it, one folder named after the process id of the current process. The next time I run the application I check all the folders and also check whether a process with that id is still running. If not, I delete the folder. The problem with this solution is that it needs admin permissions to run Process.GetProcessById.
I create one folder per process and use a lock file. I keep a stream open on it with DeleteOnClose set to true. On the next run of the application, I check all folders and their lock files. If there is no lock file, or I can delete it, I also delete the folder.
Do you have any other ideas?
EDIT: Implemented solution #2, works like a charm.
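For reference, a minimal sketch of what solution #2 could look like; the folder layout and file names are assumptions, not the poster's actual code:

// Per-process working folder plus a lock file held open for the process lifetime.
string baseDir = Path.Combine(Path.GetTempPath(), "MyAppWorkDirs");
string myDir = Path.Combine(baseDir, Process.GetCurrentProcess().Id.ToString());
Directory.CreateDirectory(myDir);

// The lock file disappears automatically when this handle closes, including on a crash,
// because the OS closes the handle when the process dies.
FileStream lockFile = new FileStream(
    Path.Combine(myDir, ".lock"),
    FileMode.Create, FileAccess.ReadWrite, FileShare.None,
    4096, FileOptions.DeleteOnClose);

// Clean up folders left behind by crashed instances.
foreach (string dir in Directory.GetDirectories(baseDir))
{
    if (dir == myDir) continue;
    string otherLock = Path.Combine(dir, ".lock");
    try
    {
        if (File.Exists(otherLock))
            File.Delete(otherLock);  // fails if the owning process is still alive
        Directory.Delete(dir, true);
    }
    catch (IOException)
    {
        // The lock file is still held by a live instance; leave the folder alone.
    }
}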
There is no inbuilt way to delete temp files automatically. But you can achieve this on reboot with a simple call to the WinAPI function MoveFileEx specifying the flag value MOVEFILE_DELAY_UNTIL_REBOOT - your temp file will be gone next time you boot (if it still exists). Here are two examples of doing this from C#: 1, and 2.
Calling this function has the effect of putting an entry into the HKLM\System\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations key in the registry (you could write that value directly, but calling the function is the preferred way to do it). Do this before starting your work with the temp file, and still delete the temp file yourself when you're finished with it. If your process crashes, all files being worked on will already have an entry in the registry; if a file is already gone by the next reboot, nothing happens (i.e. no error is raised).
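A minimal sketch of the P/Invoke declaration and call (the registration happens immediately; the actual deletion is deferred until the next reboot):

using System;
using System.Runtime.InteropServices;

static class PendingDelete
{
    private const uint MOVEFILE_DELAY_UNTIL_REBOOT = 0x00000004;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool MoveFileEx(string lpExistingFileName,
                                          string lpNewFileName,
                                          uint dwFlags);

    // Schedule a file for deletion on the next reboot; passing null as the
    // destination tells Windows to delete rather than move the file.
    public static void DeleteOnReboot(string path)
    {
        if (!MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT))
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
    }
}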

Error "The directory is not empty." when the directory is empy

I'm getting a very intermittent "directory not empty" error trying to delete a directory from C# code, but when I look, the directory seems to be empty.
The actual scenario is this: process A invokes process B using a synchronous .Net remoting call, process B deletes the files from the directory and then returns to process A which deletes the directory itself. The disk is a locally attached NTFS Disk (probably SATA).
I'm wondering if there's a possible race condition with NTFS when you have two processes cooperating in this way, where process B's delete calls have not been completely flushed to the file system?
Of course the more obvious answer is that the directory was genuinely not empty at the time and something else emptied it before I looked at it, but I don't see how this could happen in my current application because there's no other process which would delete the files.
Sure, deleting directories is a perilous adventure on a multi-tasking operating system. You always run the risk that another process has a file open. Failure to delete the directory in your scenario has two primary causes:
Particularly troublesome are the kinds of processes that open a file in a way that does not prevent you from deleting the file but still makes deleting the directory fail with this error. Search indexers and anti-malware scanners fit this category. They'll open the file with delete sharing, FileShare.Delete in a .NET program. Deleting the file works fine, but the file won't actually disappear until they close their handle, so you can't delete the directory until they do.
Very hard to diagnose is a process that has the directory selected as its current working directory, Environment.CurrentDirectory in a .NET program. Explorer tends to trigger this; just looking at the directory is enough to prevent it from getting deleted.
These mishaps occur totally outside of your control, so you will need to deal with them; catching the exception is required. There is little you can do beyond trying again later, and there is no upper limit on how long you may have to wait. Renaming the directory to a "trash" name and deleting it later is a good strategy. Note how the Recycle Bin in Windows essentially follows this scenario - good for more than just recycling :)
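A minimal sketch of that rename-then-delete strategy; the trash naming scheme and the decision to simply give up when the rename also fails are assumptions:

// Try to delete a directory; if something still holds it open, rename it out of
// the way so the original name is free again, and delete it later.
static void DeleteOrTrash(string directory)
{
    try
    {
        Directory.Delete(directory, true);
        return;
    }
    catch (IOException)
    {
        // Something still holds a handle; fall through to the rename.
    }

    try
    {
        string trashName = directory + ".trash-" + Guid.NewGuid().ToString("N");
        Directory.Move(directory, trashName);
    }
    catch (IOException)
    {
        // The rename can fail too; leave the directory for a later cleanup sweep.
    }
}

A periodic sweep, for example at application startup, can then retry deleting any leftover *.trash-* directories.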

Force folder rename while in use

I have a service running on a webserver that waits for a zip to be dropped in a folder, extracts it, and then moves it to a certain directory. Since we want to replace the directory in question, the service renames the existing folder (a very large folder that takes a couple of minutes to delete), moves the extracted files into its place, and then deletes the old folder. The problem is: when it tries to rename the existing folder, it gets 'Access to the path '<>' is denied.', I believe because the folder is in constant use by the webservice. Is there a way I can force the folder to rename, or take control and wait for it to no longer be in use? Or is there another way I can accomplish this goal?
You can't "force" a rename while any process holds an underlying operating system handle to the folder(it would be horrible if you were able to do that).
You can:
Implement pause/resume functionality for the webservice so it can be told to pause its work and release the handles, then resume after you are done.
or
Stop the webservice completely, do your work, then start the webservice again (see the sketch below)
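A minimal sketch of the second option, assuming the webservice runs as a Windows service that can be controlled through ServiceController (if it is hosted in IIS, stopping the site or application pool would be the equivalent step); the service name and paths are placeholders:

using System;
using System.IO;
using System.ServiceProcess;

// "MyWebService" and the folder paths below are hypothetical placeholders.
var service = new ServiceController("MyWebService");

service.Stop();
service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
try
{
    // With the service stopped, no handles are held on the folder and the swap can proceed.
    Directory.Move(@"C:\site\content", @"C:\site\content_old");
    Directory.Move(@"C:\site\content_new", @"C:\site\content");
}
finally
{
    service.Start();
}

// The old folder can be deleted in the background while the service is already serving again.
Directory.Delete(@"C:\site\content_old", true);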

Why doesn't OS X lock files like Windows does when copying to a Samba share?

I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big that it often takes several minutes for them to copy completely into the network directory, so when a file is dequeued, it may or may not have completely finished being copied to the network share. When the file is being copied from a Windows machine, I am able to work around it because trying to move a file that is still being copied throws an IOException. I simply catch the exception and retry every few seconds until it is done copying.
When a file is dropped into the Samba share from a computer running OS X however, that IOException is not thrown. Instead, a partial file is copied to the working directory which then fails to encode because it is not a valid video file.
So my question is: is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event (based on this question I think the answer to that is "no")? Alternatively, is there a way to get files copied from OS X to behave similarly to those copied from Windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the ones you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
By having this two-stage process, you can later add other acceptance rules beyond simple file existence (check the format, check the size, etc.), since all of those rules must pass before the file gets moved to its proper staging area.
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after failure or shut down, it can assume that any files in the second staging are either new or incomplete and take appropriate action based on its own internal state. When the processing is done it can choose to either delete the file, or move it to a "finished" area for archiving or what not.
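A minimal sketch of that stability check, with folder names and timings as assumptions:

// Poll the incoming share; move files whose size has been stable for a full
// poll interval into a staging folder that the encode queue reads from.
string incoming = @"\\server\share\incoming";   // hypothetical paths
string staging  = @"D:\encode\staging";
var lastSizes = new Dictionary<string, long>();

while (true)
{
    foreach (string file in Directory.GetFiles(incoming))
    {
        long size = new FileInfo(file).Length;

        if (lastSizes.TryGetValue(file, out long previous) && previous == size)
        {
            // Size unchanged since the last poll: treat the copy as complete.
            string target = Path.Combine(staging, Path.GetFileName(file));
            try
            {
                File.Move(file, target);   // only complete files reach the staging folder
                lastSizes.Remove(file);
            }
            catch (IOException)
            {
                // Still locked by the copier; leave the entry and try again next round.
            }
        }
        else
        {
            lastSizes[file] = size;   // remember the size for the next comparison
        }
    }
    Thread.Sleep(5000);   // poll interval; "sufficiently long" per the answer above
}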
