This is pertaining to a simple file copy operation code. My requirement is that only new files be copied from the source folder to the destination folder, so before I copy the file, I check that:
it exists in the source folder
it does not exist in the destination folder
After this I proceed with the copy operation.
However, I randomly get an IOException stating that "The file <filename> already exists."
Now, I have this code running (as part of a win service) on 2 servers so I'm willing to concede that maybe, just maybe, within that short interval where Server1 checked the conditions and proceeded to copy the file, Server2 copied it to destination, resulting in the IOException on Server1.
But I have several thousand files being copied, and I get this error thousands of times. How is this possible? What am I missing? Here's the code:
try
{
    if(File.Exists(String.Format("{0}\\{1}",pstrSourcePath,strFileName)) && !File.Exists(String.Format("{0}\\{1}",pstrDestPath,strFileName)))
        File.Copy(String.Format("{0}\\{1}",pstrSourcePath,strFileName),String.Format("{0}\\{1}",pstrDestPath,strFileName));
}
catch(IOException ioEx)
{
    txtDesc.Value=ioEx.Message;
}
I imagine it's a permissions issue. From the docs for File.Exists:
If the caller does not have sufficient permissions to read the specified file, no exception is thrown and the method returns false regardless of the existence of path.
Perhaps the file does exist, but your code doesn't have permission to check it?
Note that your code would be clearer if you used string.Format once for each file and saved the results to temporary variables. It would also be better to use Path.Combine instead of string.Format, like this:
string sourcePath = Path.Combine(pstrSourcePath, strFileName);
string targetPath = Path.Combine(pstrDestPath, strFileName);
if (File.Exists(sourcePath) && !File.Exists(targetPath))
{
File.Copy(sourcePath, targetPath);
}
(I'd also ditch the str and pstr prefixes, but hey...)
The two-server scenario is sufficient to explain the problem. Beware that the two machines will have a knack for synchronizing on each other's copy operations: whichever server is behind quickly catches up, because the file is already present in the target machine's file-system cache.
You have to give up on the File.Exists test; it just cannot work reliably on a multitasking operating system. The race condition is unsolvable, which is why neither Windows nor .NET has an IsFileLocked() method, for example. Just call File.Copy(). You'll of course get an IOException if the file already exists. Filter those exceptions by the underlying Win32 error code (e.g. via the exception's HResult) rather than by message text; the ERROR_FILE_EXISTS error code is 80.
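A sketch of that approach, assuming .NET 4.5+ (for the public Exception.HResult property) and C# 6 (for exception filters); pstrSourcePath, pstrDestPath, and strFileName are the variables from the question:

```csharp
const int ERROR_FILE_EXISTS = 80;

string source = Path.Combine(pstrSourcePath, strFileName);
string target = Path.Combine(pstrDestPath, strFileName);
try
{
    // No Exists() pre-check: just attempt the copy and let the OS arbitrate.
    File.Copy(source, target);
}
catch (IOException ioEx) when ((ioEx.HResult & 0xFFFF) == ERROR_FILE_EXISTS)
{
    // The other server won the race; the file is already there. Not an error.
}
```

Any other IOException (sharing violation, path not found, etc.) still propagates, which is what you want.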
The same thing is happening to me and I cannot figure it out. In my case I am always writing to a new location, and yet sometimes I receive the same error. When I look at that location, there is a zero-byte file with that name. I can guarantee the file did not exist beforehand and that no other process is also writing to that location. This is copying across to a network share; I'm not sure whether that is significant, but I thought I would mention it. It is almost as if the File.Copy operation writes the file, then errors because the file exists (but not always). I am logging my copy operations as I recurse the directory structure and see no duplicate copy operations that might overlap.
Related
I have a webservice that is writing files that are being read by a different program.
To keep the reader program from reading them before they're done writing, I'm writing them with a .tmp extension, then using File.Move to rename them to a .xml extension.
My problem is when we are running at volume - thousands of files in just a couple of minutes.
I've successfully written file "12345.tmp", but when I try to rename it, File.Move() throws an IOException:
File.Move("12345.tmp", "12345.xml");
Exception: The process cannot access the file because it is being used
by another process.
For my situation, I don't really care what the filenames are, so I retry:
File.Move("12345.tmp", "12346.xml");
Exception: Could not find file '12345.tmp'.
Is File.Move() deleting the source file, if it encounters an error in renaming the file?
Why?
Is there someway to ensure that the file either renames successfully or is left unchanged?
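For reference, the failing pattern can be reduced to something like the sketch below (the xml payload variable is a stand-in); disposing the writer before the Move at least rules out a handle held by our own process:

```csharp
string tmpPath = "12345.tmp";
string xmlPath = "12345.xml";
string xml = "<doc/>";  // stand-in for the real payload

// Fully dispose the writer before renaming, so the "in use by another
// process" error cannot come from our own still-open handle.
using (var writer = new StreamWriter(tmpPath))
{
    writer.Write(xml);
} // Dispose() flushes and closes the handle here

File.Move(tmpPath, xmlPath);
```

If the error persists with this structure, the handle is typically held by an external process (antivirus or indexing service scanning the freshly written .tmp file).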
The answer is that it depends largely on how the file system itself is implemented. Also, if the Move() is between two file systems (possibly even between two machines, if the paths are network shares), then it also depends on the O/S implementation of Move(). The guarantees therefore depend less on what System.IO.File does and more on the underlying mechanisms: the O/S code, the file-system drivers, the on-disk structures, and so on.
Generally, in the vast majority of cases Move() will behave the way you expect it to: either the file is moved or it remains as it was. This is because a Move within a single file system is an act of removing the file reference from one directory (an on-disk data structure), and adding it to another. If something bad happens, then the operation is rolled back: the removal from the source directory is undone by an opposite insert operation. Most modern file systems have a built-in journaling mechanism, which ensures that the move operation is either carried out completely or rolled back completely, even in case the machine loses power in the midst of the operation.
All that being said, it still depends, and not all file systems provide these guarantees. See this study
If you are running on Windows and the file system is local (not a network share), then you can use the Transactional NTFS (TxF) feature of Windows to ensure the atomicity of your move operation. (Note that Microsoft has since deprecated TxF and discourages its use in new code.)
This is a bit of a tricky one, and hopefully I can gain some insight into how the built-in C# Directory.Move function works (or should work). I've written a program that puts a list of folder names older than a specific date into a DirectoryInfo list, which it iterates over to Move each folder elsewhere.
foreach (DirectoryInfo temp in toBeDeleted)
{
filecheck.WriteLine(temp.Name);
Directory.Move(temp.FullName, @"T:\Transactiondeletions\" + counter + "\\" + temp.Name);
}
Where temp.FullName is something like T:\UK\DATA\386\trans\12345678.16
However when I run the program I hit an access denied error.
T: in this case is mapped to something like \\10.11.12.13\Data2$
I have another mapped drive, U:, which points at \\10.11.12.13\Data3$ on the same IP and has the exact same directory structure.
The kicker is that my program works just fine on the U drive but not on the T drive. I've tried both the drive letter in my code as the actual full path with IP, and it still works fine on the U drive but not on the T drive.
On the T drive whenever my programs tries to move a folder, it hits Access denied.
However it works fine when:
I move the folder manually by hand
I use a directory copy + Directory.Delete instead of Directory.Move
Any ideas? I can't figure out why it won't work here even though I can move the files manually, I've tried running the .exe manually and as admin and as a colleague as well but the result is the same.
I thought it might have been related to a StreamWriter (filecheck) still being open, but I've already tried moving this part of the code until after I close the StreamWriter; it hits the same errors, so I've excluded that possibility.
Any advice would be greatly appreciated and I'll be happy to provide any more required information if necessary.
I still have no solution for the Directory.Move operation not working. However, I've been able to work around the problem by going into the directory, using File.Move to move all the files elsewhere, and then using Directory.Delete to remove the original directory. For some reason it works this way. It will do!
There may be two reasons for this exception. First: the file is locked by a different process, e.g. Windows Explorer. That is a legitimate exception and you have to deal with it accordingly. Second: the file is locked by the same process, and by "the same process" I mean any thread of it. In my opinion it is a bug on Microsoft's part to throw the same exception as in the first case. Looking deeper, this case branches further: the same process may have another stream open on the file in a different thread, or the handle may be held by the very thread calling Move. In the first branch I would still want a more specific exception; in the second, the issue is rooted in the Windows kernel. Long story short: the OS sometimes does not seem to have enough time to release I/O locks, even those held by the same thread, following a previous file or folder operation.
To verify this claim, have a look at the System.IO.Directory.InternalMove method in the .NET source. Near the end of that method there is a call to Win32Native.MoveFile, which is the source of this exception. Right there you have the comment // This check was originally put in for Win9x.. That one shows how professional Microsoft developers are, and that there is no feasible solution to this issue.
I had a few workarounds for this:
1. Do not use Move; use Copy plus delete-source instead.
2. Wrap the Move call in an I/O utility method that puts a do/while loop around a try/catch block containing the Move call. Remember, we are only addressing the bug where we believe the same thread (or same process) holds the lock, so we need to specify a timeout exit condition after some number of Thread.Sleep(x) calls in case the file is actually held by another process.
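A sketch of the second workaround (the helper name MoveWithRetry and the timing constants are mine, not prescribed):

```csharp
// Retry Directory.Move a bounded number of times, sleeping between attempts
// in case a transient lock (same process, AV scanner, indexer) is released.
static void MoveWithRetry(string source, string target,
                          int maxAttempts = 10, int delayMs = 100)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            Directory.Move(source, target);
            return;
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            // After maxAttempts the exception propagates unchanged, which
            // covers the case where another process truly holds the file.
            Thread.Sleep(delayMs);
        }
    }
}
```

The exception filter keeps the original stack trace intact on the final failure.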
I am working on server software that periodically needs to save data to disk. I need to make sure that the old file is overwritten, and that the file cannot get corrupted (e.g. only partially overwritten) in case of unexpected circumstances.
I've adopted the following pattern:
string tempFileName = Path.GetTempFileName();
// ...write out the data to temporary file...
MoveOrReplaceFile(tempFileName, fileName);
...where MoveOrReplaceFile is:
public static void MoveOrReplaceFile( string source, string destination ) {
if (source == null) throw new ArgumentNullException("source");
if (destination == null) throw new ArgumentNullException("destination");
if (File.Exists(destination)) {
// File.Replace does not work across volumes
if (Path.GetPathRoot(Path.GetFullPath(source)) == Path.GetPathRoot(Path.GetFullPath(destination))) {
File.Replace(source, destination, null, true);
} else {
File.Copy(source, destination, true);
}
} else {
File.Move(source, destination);
}
}
This works well as long as the server has exclusive access to files. However, File.Replace appears to be very sensitive to external access to files. Any time my software runs on a system with an antivirus or a real-time backup system, random File.Replace errors start popping up:
System.IO.IOException: Unable to remove the file to be replaced.
Here are some possible causes that I've eliminated:
Unreleased file handles: using() ensures that all file handles are released as soon as possible.
Threading issues: lock() guards all access to each file.
Different disk volumes: File.Replace() fails when used across disk volumes. My method checks this already, and falls back to File.Copy().
And here are some suggestions that I've come across, and why I'd rather not use them:
Volume Shadow Copy Service: This only works as long as the problematic third-party software (backup and antivirus monitors, etc) also use VSS. Using VSS requires tons of P/Invoke, and has platform-specific issues.
Locking files: In C#, locking a file requires maintaining a FileStream open. It would keep third-party software out, but 1) I still won't be able to replace the file using File.Replace, and 2) Like I mentioned above, I'd rather write to a temporary file first, to avoid accidental corruption.
I'd appreciate any input on either getting File.Replace to work every time or, more generally, saving/overwriting files on disk reliably.
You really want to use the 3rd parameter, the backup file name. That allows Windows to simply rename the original file without having to delete it. Deleting will fail if any other process has the file open without delete sharing; renaming is never a problem. You can then delete the backup yourself after the Replace() call and ignore any error. Also delete it before the Replace() call, so the rename won't fail and you clean up failed earlier attempts. So, roughly:
string backup = destination + ".bak";
File.Delete(backup);
File.Replace(source, destination, backup, true);
try {
    File.Delete(backup);
}
catch {
    // optional:
    filesToDeleteLater.Add(backup);
}
There are several possible approaches; here are some of them:
1. Use a "lock" file: a temporary file created before the operation that indicates to other writers (or readers) that the file is being modified and thus exclusively locked. After the operation completes, remove the lock file. This method assumes that the file-creation command is atomic.
2. Use the NTFS transactional API (if appropriate).
3. Create a link to the file, write the changed file under a random name (for example Guid.NewGuid()), and then remap the link to the new file. All readers access the file through the link, whose name is known.
Of course, all three approaches have their own drawbacks and advantages.
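A minimal sketch of the first approach (the .lock naming convention and the dataPath variable are illustrative assumptions, not prescribed by the answer):

```csharp
string dataPath = "server-state.dat";   // the file being protected
string lockPath = dataPath + ".lock";
try
{
    // FileMode.CreateNew is the atomic step: it throws IOException if the
    // lock file already exists, so only one writer can win.
    using (File.Open(lockPath, FileMode.CreateNew)) { }
    try
    {
        // ... modify dataPath here, exclusively ...
    }
    finally
    {
        File.Delete(lockPath);          // release the lock
    }
}
catch (IOException)
{
    // Another process holds the lock: back off and retry later.
}
```

One known drawback: if the process crashes between creating and deleting the lock file, the stale lock must be detected and cleaned up by some other means (e.g. a timestamp check).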
If the software is writing to an NTFS partition, then try using Transactional NTFS. You can use AlphaFS as a .NET wrapper for the API. That is probably the most reliable way to write files and prevent corruption.
What is the difference between
copying a file and deleting it, using File.Copy() and File.Delete()
moving the file, using File.Move()
in terms of the permissions required for these operations? Is there any difference? Any help much appreciated.
The File.Move method can be used to move a file from one path to another. It works across disk volumes, and it does not throw an exception if the source and destination are the same.
You cannot use the Move method to overwrite an existing file: if you attempt to replace a file by moving a file of the same name into that directory, you get an IOException. To overcome this, you can use the combination of the Copy and Delete methods.
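The Copy + Delete combination can be sketched as a small helper (the name MoveOverwrite is mine; note also that .NET Core 3.0 added a File.Move overload with an overwrite parameter that does this natively):

```csharp
// Emulate "move with overwrite" using Copy + Delete, since the classic
// File.Move throws IOException when the destination already exists.
static void MoveOverwrite(string source, string destination)
{
    File.Copy(source, destination, overwrite: true);  // replace destination
    File.Delete(source);                              // then drop the source
}
```

Unlike a true move, this is two separate operations: a failure between them leaves the file in both places, so it is not atomic.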
Performance-wise, if on one and the same file system, moving a file is (in simplified terms) just adjusting some internal records of the file system itself (possibly adjusting some nodes in a red/black tree), without actually moving any data.
Imagine you have 180MiB to move, and you can write onto your disk at roughly 30MiB/s. Then with copy/delete, it takes approximately 6 seconds to finish. With a simple move [same file system], it goes so fast you might not even realise it.
(I once wrote some transactional file system helpers that would move or copy multiple files, all or none; in order to make the commit as fast as possible, I moved/copied all stuff into a temporary sub-folder first, and then the final commit would move existent data into another folder (to enable rollback), and the new data up to the target).
I don't think there is any difference permission-wise, but I would personally prefer File.Move(), since then both actions happen in the same "transaction". In other words, if anything in the move fails, the whole operation fails. However, if you break it up into two steps (copy + delete) and the copy worked but the delete failed, you would have to reverse the "transaction" (delete the copy) manually.
Permissions for a file transfer are checked at two points: the source and the destination. So if you don't have read permission on the source folder, or you don't have write permission on the destination, both methods throw an UnauthorizedAccessException. In other words, the permission checking is agnostic to the method in use.
We run several instances of our program (C#) on a single computer.
In each instance, our code tries to create "many" temporary files with the help of the Path.GetTempFileName() method.
And sometimes, our program fails with exception:
Exception: Access to the path is denied.
StackTrace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.Path.GetTempFileName()
I checked the temporary folder and didn't find anything strange: there is enough free disk space, the number of temporary files is not very big, etc.
I have only one explanation: one instance gets a temporary file and opens it, but at the same time another instance also gets the name of that temporary file and tries to open it.
Is this correct?
If yes, how do I solve the issue? If not, how do I work out what the problem is?
UPD:
failed on computer with Windows Server 2008 HPC
Thank you,
Igor.
MSDN states for the Path class:
Any public static (Shared in Visual Basic) members of this type are thread safe.
Furthermore there are two reasons given for IO exceptions:
The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files.
The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files.
I'd recommend checking for these conditions (since you explicitly state that you create many temp files).
see http://support.microsoft.com/kb/982613/en-us
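If the 65535-name limit (or contention between instances) is indeed the cause, one common alternative, offered here only as a sketch, is to generate the name yourself from a GUID rather than relying on GetTempFileName's four-hex-digit naming scheme:

```csharp
// GUID-based temporary file. FileMode.CreateNew guarantees we fail loudly
// (IOException) instead of silently colliding with a name another instance
// just created, and GUIDs are not limited to 65535 distinct values.
string tempPath = Path.Combine(Path.GetTempPath(),
                               Guid.NewGuid().ToString("N") + ".tmp");
using (var stream = new FileStream(tempPath, FileMode.CreateNew))
{
    // ... write the temporary data ...
}
```

Each instance then manages (and deletes) its own files, so the shared temp-folder bookkeeping that GetTempFileName depends on is no longer a bottleneck.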