Exception raised copying git repository folder - c#

Sometimes, while my application is copying the git repository folder file by file, I get an exception of this kind when another thread performs an operation on the repository, for example reading the Head value.
The exception is System.AccessViolationException and the stack trace is the following:
at LibGit2Sharp.Core.NativeMethods.git_reference_lookup(git_reference*& reference, git_repository* repo, String name)
at LibGit2Sharp.Core.Proxy.git_reference_lookup(RepositoryHandle repo, String name, Boolean shouldThrowIfNotFound) in c:\projects\libgit2sharp\LibGit2Sharp\Core\Proxy.cs:line 1932
at LibGit2Sharp.ReferenceCollection.Resolve[T](String name) in c:\projects\libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 441
at LibGit2Sharp.Repository.get_Head() in c:\projects\libgit2sharp\LibGit2Sharp\Repository.cs:line 268
at GitManager.get_CurrentBranch() in C:\Repository\MyProject\GitRepositoryManagement\GitManager.cs:line 80
I don't know why this happens. Any help is really appreciated! Thanks.

This error means that another process is locking the file or directory. It can happen on any filesystem operation in Windows; I have seen it while listing directory contents, for example. Since you mentioned that you are using several threads, you should probably guard the repository access explicitly so that only one thread accesses it at a time. If the other process is not yours, you could look for the conflicting process with the procmon utility, for example.
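That guarding could be sketched like this: a single shared lock object gates both the file-by-file copy and any repository query. The RepositoryGuard type and its method names are hypothetical, not part of LibGit2Sharp; the actual Head lookup is left to the caller.

```csharp
using System;
using System.IO;

static class RepositoryGuard
{
    // Single gate shared by every thread that touches the repository folder.
    private static readonly object RepoGate = new object();

    // The copying thread holds the gate for the whole file-by-file copy,
    // so no reader can observe the repository mid-copy.
    public static void CopyRepository(string sourceDir, string destDir)
    {
        lock (RepoGate)
        {
            Directory.CreateDirectory(destDir);
            foreach (string file in Directory.GetFiles(sourceDir))
                File.Copy(file, Path.Combine(destDir, Path.GetFileName(file)), true);
        }
    }

    // Every repository query (e.g. reading Head via LibGit2Sharp) takes the
    // same gate, so it can never run concurrently with the copy above.
    public static T QueryRepository<T>(Func<T> query)
    {
        lock (RepoGate)
        {
            return query();
        }
    }
}
```

A reader thread would then call e.g. `RepositoryGuard.QueryRepository(() => repo.Head.FriendlyName)` with its LibGit2Sharp Repository instance, while the copying thread calls `CopyRepository`; the two can no longer interleave.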

Related

C# Directory.Move access denied error

This is a bit of a tricky one and hopefully I can gain some insight into how C#'s built-in Directory.Move function works (or should work). I've written a program that puts a list of folder names older than a specific date into a DirectoryInfo list, which it iterates over to move each folder elsewhere.
foreach (DirectoryInfo temp in toBeDeleted)
{
filecheck.WriteLine(temp.Name);
Directory.Move(temp.FullName, @"T:\Transactiondeletions\" + counter + "\\" + temp.Name);
}
Where temp.FullName is something like T:\UK\DATA\386\trans\12345678.16
However when I run the program I hit an access denied error.
T: in this case is mapped to something like \\10.11.12.13\Data2$
I have another mapped drive, U:, which points at \\10.11.12.13\Data3$ on the same IP and has the exact same directory structure.
The kicker is that my program works just fine on the U drive but not on the T drive. I've tried using both the drive letter and the actual full path with the IP in my code, and it still works fine on the U drive but not on the T drive.
On the T drive whenever my programs tries to move a folder, it hits Access denied.
However it works fine when:
I move the folder manually by hand
I use a directory copy + Directory.Delete instead of Directory.Move
Any ideas? I can't figure out why it won't work here even though I can move the files manually. I've tried running the .exe manually, as admin, and as a colleague as well, but the result is the same.
I thought it might've been related to a StreamWriter still being open (filecheck), but I've already tried moving this part of the code to after I close the StreamWriter; it hits the same errors, so I've 'excluded' that possibility.
Any advice would be greatly appreciated and I'll be happy to provide any more required information if necessary.
I still have no solution for the Directory.Move operation not working. However, I've been able to work around the problem by going into the directory, using File.Move to move all files elsewhere, and then using Directory.Delete to remove the original directory. For some reason it works this way. But it will do!
There may be two reasons for this exception. First: the file is locked by a different process, e.g. Windows Explorer. That is a legitimate exception and you have to deal with it accordingly. Second: the file is locked by the same process, and by "the same process" I mean any thread of it. In my opinion it is a Microsoft bug to throw the same exception as in the first case. Looking deeper, this case branches further: the same process may have a stream open on the file in a different thread, or the handle may be held by the very thread calling Move. In the first branch I would still want a more precise exception; in the second, the issue is rooted in the Windows kernel. Long story short: the OS does not seem to have enough time to release IO locks, held even by the same thread, following a previous file/folder operation.
To verify this claim, have a look at the System.IO.Directory.InternalMove method in the .NET reference source. Near the end of that method there is a call to Win32Native.MoveFile, which is the source of that exception. Right there you find the comment // This check was originally put in for Win9x.. That one shows how professional Microsoft developers are, and that there is no feasible solution to this issue.
I had a few workarounds for this:
1. Do not use Move; use Copy + delete of the source.
2. Wrap the Move call in an IO utility method containing a do/while loop around a try/catch block containing the Move call. Remember, we are only addressing the bug where we believe the same thread (or same process) holds the lock, so we need to specify a timeout exit condition after some number of Thread.Sleep(x) calls in case the file is actually held by another process.
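Workaround 2 might look roughly like this; the IoUtility name, attempt count, and delay are illustrative choices, not from the original answer:

```csharp
using System;
using System.IO;
using System.Threading;

static class IoUtility
{
    // Retry Directory.Move a few times. This only helps when the lock is
    // transient (held briefly by our own process); if another process really
    // owns the file, we give up and rethrow after maxAttempts tries.
    public static void MoveWithRetry(string source, string target,
                                     int maxAttempts = 5, int delayMs = 200)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                Directory.Move(source, target);
                return;
            }
            catch (IOException)
            {
                if (attempt >= maxAttempts)
                    throw;               // not transient: surface the error
                Thread.Sleep(delayMs);   // give the OS time to release the lock
            }
        }
    }
}
```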

Best file mutex in .NET 3.5

I want to use a mutex of sorts on files, so that no process touches certain files before others have stopped using them. How can I do this in .NET 3.5? Here are some details:
I have a service which checks every so often whether there are any files/directories in a certain folder; if there are, the service does something with them.
My other process is responsible for moving files (and directories) into that folder, and everything works just fine.
But I'm worried there can be a situation where my copying process copies files into the folder and at the same time (in the same millisecond) my service checks for files and starts working on them, but not on all of them, because the check happened during the copy.
So my idea is to put some mutex in there (maybe an extra file can be used as a mutex?), so the service won't check anything until the copying is done.
How can I achieve something like that in possibly easy way?
Thanks for any help.
The canonical way to achieve this is via the filename:
Process A copies the files to e.g. "somefile.ext.noprocess" (this is non-atomic)
Process B ignores all files with the ".noprocess" suffix
After Process A has finished copying, it renames the file to "somefile.ext"
Next time Process B checks, it sees the file and starts processing.
If you have more than one file that must be processed together (or none at all), you need to extend this scheme with an additional transaction file containing the file names of the transaction: only if this file exists and has the correct name must Process B read it and process the files it mentions.
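For a single file, the suffix scheme might be sketched like this; the SuffixPublish type and its method names are mine, and the atomic-rename guarantee assumes a local NTFS volume:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class SuffixPublish
{
    const string Suffix = ".noprocess";

    // Writer (Process A): copy under the suffixed name, then rename.
    // The copy is non-atomic but invisible to the reader; the rename is atomic.
    public static void Publish(string sourceFile, string targetFolder)
    {
        string finalPath = Path.Combine(targetFolder, Path.GetFileName(sourceFile));
        File.Copy(sourceFile, finalPath + Suffix, true);
        File.Move(finalPath + Suffix, finalPath);
    }

    // Reader (Process B): skip anything still carrying the suffix.
    public static IEnumerable<string> ReadyFiles(string folder)
    {
        foreach (string f in Directory.GetFiles(folder))
            if (!f.EndsWith(Suffix, StringComparison.OrdinalIgnoreCase))
                yield return f;
    }
}
```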
Your problem really is not of mutual exclusion, but of atomicity. Copying multiple files is not an atomic operation, and so it is possible to observe the files in a half-copied state which you'd like to prevent.
To solve your problem, you could hinge your entire operation on a single atomic file system operation, for example renaming (or moving) a folder. That way no one can observe an intermediate state. You can do it as follows:
Copy the files to a folder outside the monitored folder, but on the same drive.
When the copying operation is complete, move the folder inside the monitored folder. To any outside process, all the files would appear at once, and it would have no chance to see only part of the files.
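The staging-folder variant could be sketched as follows, assuming the staging and monitored folders live on the same volume so Directory.Move is a rename rather than a copy (the AtomicDrop type and folder names are illustrative):

```csharp
using System;
using System.IO;

static class AtomicDrop
{
    // Copy into a staging folder next to the monitored one, then publish with
    // a single Directory.Move. monitoredDir must not end with a separator,
    // so Path.GetDirectoryName yields its parent.
    public static void Publish(string sourceDir, string monitoredDir)
    {
        string staging = Path.Combine(
            Path.GetDirectoryName(monitoredDir), "staging",
            Path.GetFileName(sourceDir));

        Directory.CreateDirectory(staging);
        foreach (string file in Directory.GetFiles(sourceDir))
            File.Copy(file, Path.Combine(staging, Path.GetFileName(file)));

        // Atomic on the same volume: the watcher sees all files at once or none.
        Directory.Move(staging, Path.Combine(monitoredDir, Path.GetFileName(sourceDir)));
    }
}
```

Any watcher scanning the monitored folder either sees the whole subfolder or nothing; there is no half-copied state to observe.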

Is it possible to know which of .NET code created a temporary file?

I'm reviewing a Windows Azure web role VM and I see that the temporary folder of the process running the role payload contains several dozen zero-length temporary files created quite some time ago. This is a potential problem for me: if temporary files are created and left behind in an uncontrolled manner, the role gets trashed at some point.
I'm in full control of the payload code and there's good chance that those temporary files are created by the same process that runs the payload.
Is it possible to intercept temporary files creation from C# code running in the same process as the process creating the files?
Why don't you use a debugger (remote or otherwise) and live dumps to get this information?
You can always try:
to use FileSystemWatcher to detect the moment a temp file is created; use the debugger (Ctrl+Alt+E in VS opens the exceptions dialog) to trap on first-chance exceptions, then inspect the stacks of the other threads
to use system audit policies (this will only tell you which user, and perhaps (?) which process, created a file; see also http://www.techrepublic.com/article/step-by-step-how-to-audit-file-and-folder-access-to-improve-windows-2000-pro-security/5034308)
to eliminate candidates by tagging your own file creations with extra information (e.g. write their names to a log file, create a .tag file for each file written, etc.)
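The FileSystemWatcher idea can be wired up along these lines; the TempFileTracer type and the callback shape are hypothetical, and the point is only to get a hook at creation time:

```csharp
using System;
using System.IO;

static class TempFileTracer
{
    // Watch a folder (e.g. Path.GetTempPath()) and report every file creation.
    // Note: the Created event fires on a watcher thread, so a stack trace
    // captured inside the callback shows the watcher, not the creator; break
    // into the debugger here and inspect the *other* threads' stacks instead.
    public static FileSystemWatcher Start(string folder, Action<string> onCreated)
    {
        var watcher = new FileSystemWatcher(folder)
        {
            NotifyFilter = NotifyFilters.FileName,
            EnableRaisingEvents = true
        };
        watcher.Created += (s, e) => onCreated(e.FullPath);
        return watcher;
    }
}
```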

IOException with File.Copy() inspite of prior File.Exists() check

This pertains to a simple file copy operation. My requirement is that only new files be copied from the source folder to the destination folder, so before I copy a file, I check that:
it exists in the source folder
it does not exist in the destination folder
After this I proceed with the copy operation.
However, I randomly get an IOException stating that "The file <filename> already exists."
Now, I have this code running (as part of a Windows service) on 2 servers, so I'm willing to concede that maybe, just maybe, within the short interval between Server1 checking the conditions and copying the file, Server2 copied it to the destination, resulting in the IOException on Server1.
But I have several thousand files being copied, and I get this error by the thousands. How is that possible? What am I missing? Here's the code:
try
{
    if (File.Exists(String.Format("{0}\\{1}", pstrSourcePath, strFileName)) &&
        !File.Exists(String.Format("{0}\\{1}", pstrDestPath, strFileName)))
    {
        File.Copy(String.Format("{0}\\{1}", pstrSourcePath, strFileName),
                  String.Format("{0}\\{1}", pstrDestPath, strFileName));
    }
}
catch (IOException ioEx)
{
    txtDesc.Value = ioEx.Message;
}
I imagine it's a permissions issue. From the docs for File.Exists:
If the caller does not have sufficient permissions to read the specified file, no exception is thrown and the method returns false regardless of the existence of path.
Perhaps the file does exist, but your code doesn't have permission to check it?
Note that your code would be clearer if you used string.Format once for each file and saved the results to temporary variables. It would also be better to use Path.Combine instead of string.Format, like this:
string sourcePath = Path.Combine(pstrSourcePath, strFileName);
string targetPath = Path.Combine(pstrDestPath, strFileName);
if (File.Exists(sourcePath) && !File.Exists(targetPath))
{
    File.Copy(sourcePath, targetPath);
}
(I'd also ditch the str and pstr prefixes, but hey...)
The two-server scenario is sufficient to explain the problem. Beware that they'll have a knack for automatically synchronizing to each other's copy operations. Whichever server is behind will quickly catch up because the file is already present in the target machine's file system cache.
You have to give up on the File.Exists test; it just cannot work reliably on a multitasking operating system. The race condition is unsolvable, which is also why neither Windows nor .NET has an IsFileLocked() method, for example. Just call File.Copy(). You'll get an IOException, of course, if the file already exists. Filter out that case by checking the underlying Win32 error code (e.g. via Marshal.GetLastWin32Error()); the ERROR_FILE_EXISTS error code is 80.
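A sketch of that "just copy and filter" approach. Note the HResult filter is an assumption on my part, swapped in for the Marshal.GetLastWin32Error() call the answer mentions: on .NET 4.5+, Exception.HResult is publicly readable and carries the Win32 error code in its low 16 bits.

```csharp
using System;
using System.IO;

static class SafeCopy
{
    private const int ErrorFileExists = 80;   // Win32 ERROR_FILE_EXISTS

    // Skip the File.Exists race entirely: attempt the copy and swallow only
    // the "already exists" failure; any other IOException still propagates.
    public static bool TryCopyNew(string source, string target)
    {
        try
        {
            File.Copy(source, target);        // overwrite is false by default
            return true;
        }
        catch (IOException ex) when ((ex.HResult & 0xFFFF) == ErrorFileExists)
        {
            return false;                     // the other server won the race
        }
    }
}
```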
The same thing is happening to me and I cannot figure it out. In my case, I am always writing to a new location, and yet sometimes I receive the same error. When I look at that location, there is a zero-byte file with that name. I can guarantee the file did not exist prior, and that no other process is also writing to that location. This is copying across to a network share; not sure if that is significant, but I thought I would mention it. It is almost as if the File.Copy operation writes the file, then errs because the file exists (but not always). I am logging my copy operations as I recurse the directory structure and see no duplicate copy operations that might overlap.

Path.GetTempFileName in MultiProcessing

We run several instances of our program (C#) on a single computer.
In each instance our code tries to create "many" temporary files with the help of the Path.GetTempFileName() method.
And sometimes, our program fails with exception:
Exception: Access to the path is denied.
StackTrace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.Path.GetTempFileName()
I checked the temporary folder and didn't find anything strange: there is enough free disk space, the number of temporary files is not very big, etc.
I have only one explanation: one instance gets a temporary file name and opens it, but at the same time another instance gets the same temporary file name and tries to open it.
Is that correct?
If yes, how do I solve the issue? If not, how do I work out what the problem is?
UPD:
failed on computer with Windows Server 2008 HPC
Thank you,
Igor.
MSDN states for the Path class:
Any public static (Shared in Visual Basic) members of this type are thread safe.
Furthermore there are two reasons given for IO exceptions:
The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files.
The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files.
I'd recommend checking for these conditions (since you explicitly state that you create many temp files).
See http://support.microsoft.com/kb/982613/en-us
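If the 65535-name ceiling (or its exhaustion) is the culprit, one workaround is to stop using Path.GetTempFileName altogether and build a GUID-based name yourself. The helper below is a sketch of that idea; the TempFiles name is mine:

```csharp
using System;
using System.IO;

static class TempFiles
{
    // Path.GetTempFileName draws from only 65535 names per temp folder; a
    // GUID-based name avoids that ceiling. FileMode.CreateNew makes the
    // creation itself the uniqueness check: if two instances somehow produced
    // the same name, the second would throw instead of reusing the file.
    public static string CreateUnique()
    {
        string path = Path.Combine(Path.GetTempPath(),
                                   Guid.NewGuid().ToString("N") + ".tmp");
        using (new FileStream(path, FileMode.CreateNew)) { }
        return path;
    }
}
```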
