Path.GetTempFileName in multi-processing - C#

We run several instances of our program (C#) on a single computer.
In each instance, our code tries to create "many" temporary files with the help of the method Path.GetTempFileName().
And sometimes our program fails with this exception:
Exception: Access to the path is denied.
StackTrace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.Path.GetTempFileName()
I checked the temporary folder and didn't find anything unusual: there is enough free disk space, the number of temporary files is not very large, etc.
I have only one explanation: one instance gets a temporary file and opens it, but at the same time another instance also gets the name of the temporary file and tries to open it.
Is this correct?
If yes, how can I solve the issue; if not, how can I work out what the problem is?
UPD:
it failed on a computer running Windows Server 2008 HPC
Thank you,
Igor.

MSDN states for the Path class:
Any public static (Shared in Visual Basic) members of this type are thread safe.
Furthermore, there are two reasons given for IO exceptions:
The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files.
The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files.
I'd recommend checking for these conditions (since you explicitly state that you create many temp files).

See http://support.microsoft.com/kb/982613/en-us

Related

Exception raised copying git repository folder

Sometimes, while my application is copying the git repository folder file by file, I get this kind of exception when another thread wants to perform some operation on the repository, for example reading the Head value.
The exception is System.AccessViolationException and the stack trace is the following:
at LibGit2Sharp.Core.NativeMethods.git_reference_lookup(git_reference*& reference, git_repository* repo, String name)
at LibGit2Sharp.Core.Proxy.git_reference_lookup(RepositoryHandle repo, String name, Boolean shouldThrowIfNotFound) in c:\projects\libgit2sharp\LibGit2Sharp\Core\Proxy.cs:line 1932
at LibGit2Sharp.ReferenceCollection.Resolve[T](String name) in c:\projects\libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 441
at LibGit2Sharp.Repository.get_Head() in c:\projects\libgit2sharp\LibGit2Sharp\Repository.cs:line 268
at GitManager.get_CurrentBranch() in C:\Repository\MyProject\GitRepositoryManagement\GitManager.cs:line 80
I don't know why this happens. Any help is really appreciated! Thanks.
This error means that another process is locking the file or directory. It can happen at any filesystem operation in Windows; I have seen it while listing directory contents, for example. Since you mention that you are using several threads, you should probably explicitly guard the repository access so that only one thread accesses it at a time, as in the sketch below. If the other process is not yours, you can look for the conflicting process with the procmon utility, for example.

C# Directory.Move access denied error

This is a bit of a tricky one, and hopefully I can gain some insight into how the built-in C# Directory.Move function works (or should work). I've written a program that puts a list of folder names older than a specific date into a DirectoryInfo list, which it iterates over to move each folder elsewhere.
foreach (DirectoryInfo temp in toBeDeleted)
{
    filecheck.WriteLine(temp.Name);
    Directory.Move(temp.FullName, @"T:\Transactiondeletions\" + counter + "\\" + temp.Name);
}
Where temp.FullName is something like T:\UK\DATA\386\trans\12345678.16.
However, when I run the program I hit an access denied error.
T: in this case is mapped to something like \\10.11.12.13\Data2$.
I have another mapped drive, U:, which points to \\10.11.12.13\Data3$ on the same IP and has the exact same directory structure.
The kicker is that my program works just fine on the U drive but not on the T drive. I've tried using both the drive letter and the actual full path with the IP in my code, and it still works fine on the U drive but not on the T drive.
On the T drive, whenever my program tries to move a folder, it hits Access denied.
However it works fine when:
I move the folder manually by hand
I use a directory copy + Directory.Delete instead of Directory.Move
Any ideas? I can't figure out why it won't work here even though I can move the files manually. I've tried running the .exe manually, as admin, and as a colleague as well, but the result is the same.
I thought it might have been related to a StreamWriter still being open (filecheck), but I've already tried moving this part of the code until after I close the StreamWriter; it hits the same errors, so I've 'excluded' that possibility.
Any advice would be greatly appreciated, and I'll be happy to provide more information if necessary.
I still have no solution for the Directory.Move operation not working. However, I've been able to work around the problem by going into the directory, using File.Move to move all files elsewhere, and then using Directory.Delete to delete the original directory. For some reason it works this way, as in the sketch below. But it will do!
There may be two reasons for this exception. First, the file is locked by a different process, e.g. Windows Explorer. That is a legitimate exception and you have to deal with it accordingly. Second, the file is locked by the same process, and by "the same process" here I mean any thread of it. This, in my opinion, is a Microsoft bug: it throws the same exception as in the first case. If you look deeper, this case can be branched further: the same process may have another stream etc. opened in a different thread, or the lock may be held by the current thread calling Move. In the first branch I would still want a more elaborate exception, and in the second the issue is rooted in the Windows kernel. Long story short: the OS seems not to have enough time to release IO locks, even those held by the same thread, following a previous file/folder operation.
In order to verify my claim, have a look at the System.IO.Directory.InternalMove method in the .NET source. Near the end of that method there is a call to Win32Native.MoveFile, which is the source of that exception. Right there you have this comment: // This check was originally put in for Win9x. That one shows how professional Microsoft developers are, and that there is no feasible solution to this issue.
I had a few workarounds for this:
1. Do not use Move; use Copy + Delete of the source instead.
2. Wrap the Move call in an IO utility method containing a do/while loop with a try/catch block around the Move call (see the sketch below). Remember, we are only addressing the bug where we believe the same thread (or the same process) holds the lock, so we need a timeout exit condition after some number of Thread.Sleep(x) calls in case the file is held by another process.

Is it possible to know which .NET code created a temporary file?

I'm reviewing a Windows Azure web role VM, and I see that the temporary folder of the process running the role payload contains several dozen zero-length temporary files created a long time ago. This is a potential problem for me, because if files are created and left over in an uncontrolled manner, the role gets trashed at some point.
I'm in full control of the payload code, and there's a good chance that those temporary files are created by the same process that runs the payload.
Is it possible to intercept temporary file creation from C# code running in the same process as the one creating the files?
Why don't you use a debugger (remote or otherwise) and live dumps to get this information?
You can always try:
to use FileSystemWatcher to throw an exception at the time a temp file is created (see the sketch after this list). Use the debugger (Ctrl+Alt+E in VS for the exceptions dialog) to trap on first-chance exceptions; you can then inspect the stacks of the other threads
to use system audit policies (this will only tell you what user and perhaps (?) process created a file; see also http://www.techrepublic.com/article/step-by-step-how-to-audit-file-and-folder-access-to-improve-windows-2000-pro-security/5034308)
to eliminate candidates by tagging your own file creations with extra information (e.g. write their names to a log file, create .tag files for each file written, etc.)

Error: The process cannot access the file '...' because it is being used by another process

I have a function that always creates a directory and puts some files (images) in it.
The first time the code runs, no problem. The second time (always), it gets an error when it has to delete the directory (because I want to recreate it to put the images in it). The error is "The process cannot access the file '...' because it is being used by another process". The only process that accesses these files is this function.
It's like the function "doesn't let go" of the files.
How can I resolve this cleanly?
Here a part of the code:
String strPath = Environment.CurrentDirectory + "\\sessionPDF"; // CurrentDirectory is already a string
if (Directory.Exists(strPath))
    Directory.Delete(strPath, true); //Here I get the error
Directory.CreateDirectory(strPath);
//Then I put the files in the directory
If your code or another process is serving up the images, they will be locked for an indefinite amount of time. If it's IIS, they're locked for a short time while being served. I'm not sure about this, but if Explorer is creating thumbnails for the images, it may lock the files while it does that. It may be for a split second, but if your code and that process collide, it's a race condition.
Be sure you release your locks when you're done. If the class implements IDisposable, wrap the object in a using statement if you're not doing extensive work on it:
using (var bitmap = new Bitmap(imagePath)) { ... } // same pattern for a FileStream, StreamReader, etc.
...which will dispose the object afterwards, and the file will not be locked.
Just going out on a limb here without seeing the code that dumps the files, but if you're using FileStream or Bitmap objects, I would double-check that you are properly disposing of all of those objects before running the second pass.
The only clean solution in this case is to keep track of what is holding access to the directory and fix the bug by releasing that access.
If the object/resource holding the access is third-party, or by any means impossible to change or reach, it's time to revise the architecture and handle IO access in a different way.
Hope this helps.
Sounds like you are not releasing the file handle when the file is created. Try doing all of your IO within a using statement; that way the file will be released automatically when you are finished with it.
http://msdn.microsoft.com/en-us/library/yh598w02%28v=vs.80%29.aspx
I have seen cases where a virus scanner will scan the new file and prevent it from being deleted, though that is highly unlikely.
Be sure to Dispose of all IDisposable objects, and make sure that nothing has changed your Environment.CurrentDirectory to the directory you want to delete.

IOException with File.Copy() despite prior File.Exists() check

This pertains to a simple file copy operation. My requirement is that only new files be copied from the source folder to the destination folder, so before I copy a file, I check that:
it exists in the source folder
it does not exist in the destination folder
After this I proceed with the copy operation.
However, I randomly get an IOException stating "The file <filename> already exists."
Now, I have this code running (as part of a Windows service) on 2 servers, so I'm willing to concede that maybe, just maybe, within that short interval where Server1 checked the conditions and proceeded to copy the file, Server2 copied it to the destination, resulting in the IOException on Server1.
But I have several thousand files being copied, and I get this error thousands of times. How is that possible? What am I missing? Here's the code:
try
{
    if (File.Exists(String.Format("{0}\\{1}", pstrSourcePath, strFileName)) && !File.Exists(String.Format("{0}\\{1}", pstrDestPath, strFileName)))
        File.Copy(String.Format("{0}\\{1}", pstrSourcePath, strFileName), String.Format("{0}\\{1}", pstrDestPath, strFileName));
}
catch (IOException ioEx)
{
    txtDesc.Value = ioEx.Message;
}
I imagine it's a permissions issue. From the docs for File.Exists:
If the caller does not have sufficient permissions to read the specified file, no exception is thrown and the method returns false regardless of the existence of path.
Perhaps the file does exist, but your code doesn't have permission to check it?
Note that your code would be clearer if you used string.Format once for each file and saved the results to temporary variables. It would also be better to use Path.Combine instead of string.Format, like this:
string sourcePath = Path.Combine(pstrSourcePath, strFileName);
string targetPath = Path.Combine(pstrDestPath, strFileName);
if (File.Exists(sourcePath) && !File.Exists(targetPath))
{
    File.Copy(sourcePath, targetPath);
}
(I'd also ditch the str and pstr prefixes, but hey...)
The two-server scenario is sufficient to explain the problem. Beware that the servers will have a knack for automatically synchronizing to each other's copy operations: whichever server is behind will quickly catch up, because the file is already present in the target machine's file system cache.
You have to give up on the File.Exists test; it just cannot work reliably on a multi-tasking operating system. The race condition is unsolvable, which is the reason that neither Windows nor .NET has an IsFileLocked() method, for example. Just call File.Copy(); you'll get an IOException, of course, if the file already exists. Filter out those exceptions by their underlying Win32 error code, as in the sketch below. The ERROR_FILE_EXISTS error code is 80.
The same thing is happening to me, and I cannot figure it out. In my case, I am always writing to a new location, and yet sometimes I receive the same error. When I look at that location, there is a zero-byte file with that name. I can guarantee that the file did not exist prior and that no other process is also writing to that location. This is copying across to a network share; I'm not sure if that is significant, but I thought I would mention it. It is almost as if the File.Copy operation writes the file, then errs because the file exists (but not always). I am logging my copy operations as I recurse the directory structure and see no duplicate copy operations that might overlap.
