I'm experiencing an issue with an FTP watcher service and the File.Move method.
The FTP server is a simple IIS 8.5 FTP site, and the FTP client is the FileZilla FTP client.
The Windows service polls a directory where the files are dropped.
The first task is to rename the file, using the static File.Move method.
The second is to copy the file to another directory using the static File.Copy method.
The issue is that while the file is being transferred, File.Copy will (correctly) throw an IOException if it is used, with the message "The file is being used by another process".
However, File.Move performs its task without throwing any exception while the file is still being transferred. Is this the correct behavior for this method? I've not been able to find any information on why this occurs. My impression was that File.Move would throw an exception if used on a file that's in use by another process (the FTP transfer), but it doesn't seem to.
Has anyone experienced this and/or have an explanation for the behavior of the File.Move method?
Copying a file requires opening it for read access. The FTP server currently has the file open such that you cannot open it for reading.
Moving a file does not require opening it for read access unless the file is on a different volume than the destination.
Since moving a file within the same volume requires only delete access and not read access, the FTP server must be locking the file for read and write, but not for delete.
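You can see the asymmetry with a minimal sketch (the paths are placeholders): holding the file open while sharing only Delete makes File.Copy fail, because the copy needs read access, while File.Move on the same volume succeeds through delete access alone.

var path = @"d:\public\temp\temp.txt";

// Hold the file open, sharing only Delete (no Read), roughly how the FTP server appears to lock it
using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.Delete))
{
    try
    {
        // Copy needs to open the source for reading -> IOException
        File.Copy(path, @"d:\public\temp\copy.txt");
    }
    catch (IOException ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Move on the same volume is a rename; delete sharing is enough, so this succeeds
    File.Move(path, @"d:\public\temp\moved.txt");
}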
This code shows that File.Move will indeed throw an exception if the file is in use when you try to move it, so I think your premise is incorrect.
var filePath = @"d:\public\temp\temp.txt";
var moveToPath = @"d:\public\temp\temp2.txt";

// Create a stream reader so the file is 'in use'
using (var fileStream = new StreamReader(filePath))
{
    // This will fail with an IOException
    File.Move(filePath, moveToPath);
}
Exception:
The process cannot access the file because it is being used by another process.
Moving a file is effectively implemented as a mere rename and only requires write permission on the source and target directories, plus delete access on the file itself. For a real copy you need read permission on the file. Since the lock on the source file blocks reading but evidently not deletion, the copy fails while the move succeeds.
Related
We have a C# WinForms application that runs on the client. The application downloads a file from an FTP server and saves it on a shared drive (hosted on the server); the server then runs some code to decrypt the file and posts the decrypted file back to the shared drive, in the same location as the encrypted file, under a different file name. When this completes, execution passes back to the client, which checks whether the decrypted file exists. It then throws an exception because the file cannot be found.
Client code:
protected void DecryptFile(string aEncryptedFilePath, string aDecryptedFilePath)
{
    AppController.Task.SystemTaskManager.ExecuteTask(new TaskRequest(TaskPgpFileDecrypt.TaskId, aEncryptedFilePath, aDecryptedFilePath));

    if (!File.Exists(aDecryptedFilePath))
        throw new FileNotFoundException("File does not exist"); // Exception thrown here
}
Server code:
public TaskResponse Execute(string aSourceFilePath, string aDestinationFilePath)
{
    // Decryption code

    if (!File.Exists(DestinationFilePath))
        throw new ApplicationException($"Could not {ActionDescription} file {SourceFilePath}. Expected output file {DestinationFilePath} is missing.");

    using (var fileStream = new FileStream(DestinationFilePath, FileMode.Open))
        fileStream.Flush(true);

    return new TaskResponse();
}
I've simplified it as best I can, but you can see that the client passes in aEncryptedFilePath and what it expects to be aDecryptedFilePath, and the server code decrypts the encrypted file and stores it at the path in aDestinationFilePath.
Now in the server code, you can also see that we check whether the file exists and, if it doesn't, the server throws an exception. But here's the kicker: the server's file-exists check returns true and execution continues. It's only when we get to the client-side code that the File.Exists check fails! We've tried flushing the buffer to ensure that the file is written to disk, but this doesn't help at all.
Another useful bit of information: I can verify that the file exists, because if I watch the folder where the file is created, I can see it appear. However, if I click the file immediately after it shows up, I get a file-in-use warning from Windows.
If I close the warning and wait a second or two, I am then able to open the file.
What could be causing this behaviour?
The task is executed asynchronously. Based on the code here, you are firing the decrypt command and then immediately checking whether the file exists, right after you have issued the command to decrypt it. Because the decryption takes some server cycles to complete, you need to wait for the task to finish before you access the file.
There is a really good MSDN doc that shows how to wait for a task to complete before continuing code execution: https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.task?view=netframework-4.8#WaitingForOne
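For example, a minimal sketch of the client method, assuming ExecuteTask returns a Task (its actual signature isn't shown in the question):

protected void DecryptFile(string aEncryptedFilePath, string aDecryptedFilePath)
{
    // Assumption: ExecuteTask returns a Task representing the server-side work
    var task = AppController.Task.SystemTaskManager.ExecuteTask(
        new TaskRequest(TaskPgpFileDecrypt.TaskId, aEncryptedFilePath, aDecryptedFilePath));

    task.Wait(); // block until the decrypt has actually completed

    if (!File.Exists(aDecryptedFilePath))
        throw new FileNotFoundException("File does not exist", aDecryptedFilePath);
}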
I have an FTP application that uses the FTP rename command. If a file already exists in the directory the file is renamed into, the error 'the file not avaliable' is caught. What can I do in C# to overwrite the file? There is a setting for this in IIS, and when I set it there is no problem. But can I do this from C#?
What happens when there is a name collision depends on the server; if you cannot configure a known behaviour on each server you connect to, you need to deal with it manually.
Either attempt the rename, catch the exception, delete the existing file, and rename again; or check for the file's existence first (by requesting its size, for example) and delete it if found.
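A minimal sketch of the first approach using System.Net.FtpWebRequest (the server URL and credentials are placeholders):

// Rename sourceName to targetName, deleting any existing target on failure
static void RenameWithOverwrite(string serverUrl, string sourceName, string targetName, NetworkCredential credentials)
{
    try
    {
        Rename(serverUrl, sourceName, targetName, credentials);
    }
    catch (WebException)
    {
        // The target likely already exists: delete it, then retry the rename
        var delete = (FtpWebRequest)WebRequest.Create(serverUrl + "/" + targetName);
        delete.Method = WebRequestMethods.Ftp.DeleteFile;
        delete.Credentials = credentials;
        using (delete.GetResponse()) { }

        Rename(serverUrl, sourceName, targetName, credentials);
    }
}

static void Rename(string serverUrl, string sourceName, string targetName, NetworkCredential credentials)
{
    var rename = (FtpWebRequest)WebRequest.Create(serverUrl + "/" + sourceName);
    rename.Method = WebRequestMethods.Ftp.Rename;
    rename.RenameTo = targetName;
    rename.Credentials = credentials;
    using (rename.GetResponse()) { }
}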
I am trying to write to a file in an ASP.NET form (.aspx). I can create the file fine using,
if (!File.Exists(Settings.Default.FileLocation))
{
    File.Create(Settings.Default.FileLocation);
}
But when I go to write to the file using this code:
File.WriteAllBytes(Settings.Default.FileLocation, someByteArray);
I get an exception:
System.IO.IOException: The process cannot access the file 'C:\inetpub\wwwroot\mysite.com\captured\captured.xml' because it is being used by another process.
(in this case 'C:\inetpub\wwwroot\mysite.com\captured\captured.xml' == Settings.Default.FileLocation)
I can't delete or copy the file in Windows Explorer during this time either. However, if I stop or restart the application pool the WebForm is running in, the error goes away. What is the cause of this and how do I prevent it?
This is running on Windows Server 2012 R2 with IIS 7.5.
Read the documentation on MSDN. You should be using using statements to open the file so that you can make sure any open handles to it are closed.
using (FileStream fs = File.Create(Settings.Default.FileLocation))
{
    // manipulate the file here
}
Also, File.WriteAllBytes will create the file if it doesn't exist, so there's no need to separately create it.
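A minimal corrected sketch (someByteArray stands in for your data): either dispose the stream that File.Create returns, or skip the creation step entirely.

// Option 1: if you do create the file separately, dispose the handle
if (!File.Exists(Settings.Default.FileLocation))
{
    using (FileStream fs = File.Create(Settings.Default.FileLocation)) { }
}

// Option 2 (simpler): WriteAllBytes creates or overwrites the file on its own
File.WriteAllBytes(Settings.Default.FileLocation, someByteArray);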
File.Create opens the file for read/write by default and needs to be closed before it can be used again. From the docs:
By default, full read/write access to new files is granted to all users. The file is opened with read/write access and must be closed before it can be opened by another application.
File.WriteAllBytes will create the file if it doesn't exist, so your existence check and separate File.Create call are probably overkill.
I'm using the .NET assembly of WinSCP to upload a file. OperationResultBase.Check() is throwing the following error:
WinSCP.SessionRemoteException: Transfer was successfully finished, but temporary transfer file 'testfile.zip.filepart' could not be renamed to target file name 'testfile.zip'. If the problem persists, you may want to turn off transfer resume support.
It seems that this happens with any zip file that I try to send. If it makes a difference, these are zip files that were created using the DotNetZip library.
Code that I'm using, taken pretty much directly from the example in the WinSCP documentation:
public void uploadFile(string filePath, string remotePath)
{
    TransferOptions transferOptions = new TransferOptions();
    transferOptions.TransferMode = TransferMode.Binary;

    TransferOperationResult transferResult;
    transferResult = currentSession.PutFiles(filePath, remotePath, false, transferOptions);
    transferResult.Check();

    foreach (TransferEventArgs transfer in transferResult.Transfers)
    {
        Console.WriteLine("Upload of {0} succeeded", transfer.FileName);
    }
}
Discussion over at the WinSCP forum indicates that the assembly doesn't yet allow programmatic control of transfer resume support. Is there a workaround for this?
It sounds as if the filesystem on the destination server does not allow your user to rename files. That would cause the rename at the end of the upload to fail even though the complete file was uploaded and written to the filesystem under the temporary file name used while the transfer was in progress. If you don't have administrative access to the destination server, you can test this by trying to rename a file that is already there. If that also fails, you will either need the permissions on the destination server changed, or you will have to follow the advice in the error message and turn off resume support, so the file is opened for writing with its final name instead of the temporary name (with the .filepart extension).
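A quick way to run that test from code, reusing the currentSession object from the question (the remote paths are placeholders; Session.MoveFile is the WinSCP .NET call for a remote rename):

// If this throws, the rename permission itself is the problem
currentSession.MoveFile("/remote/dir/existing.txt", "/remote/dir/existing-renamed.txt");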
Turn off resume support; in a WinSCP script that looks like:
put *.txt -nopreservetime -nopermissions -resumesupport=off
It would help if you included the full error message, including the root cause as returned by the server.
My guess is that there's an antivirus application (or similar) running on the server side. The antivirus application checks any file once its upload finishes. That conflicts with WinSCP's attempt to rename the file once the upload is finished. The problem may tend to occur more frequently for .zip archives, either because they tend to be larger or simply because they need to be extracted before the check (which takes time).
Anyway, you can disable the transfer to a temporary file name using TransferOptions.ResumeSupport.
See also the documentation for the error message "Transfer was successfully finished, but temporary transfer file ... could not be renamed to target file name ..."
All you have to do is disable resume support using the code below.
transferOptions.ResumeSupport = new TransferResumeSupport { State = TransferResumeSupportState.Off };
I have been trying to lock a file so that other cloned services cannot access it. I then read the file, and move it when finished. The move is allowed by using FileShare.Delete.
However, in later testing we found that this approach does not work against a network share. I appreciate my approach may not have been the best, but my specific question is:
Why does the below demo work against the local file, but not against the network file?
The more specific you can be the better, as I've found very little in my searches indicating that network shares behave differently from local disks.
string sourceFile = @"C:\TestFile.txt";
string localPath = @"C:\MyLocalFolder\TestFile.txt";
string networkPath = @"\\MyMachine\MyNetworkFolder\TestFile.txt";

File.WriteAllText(sourceFile, "Test data");

if (!File.Exists(localPath))
    File.Copy(sourceFile, localPath);

foreach (string path in new string[] { localPath, networkPath })
{
    using (FileStream fsLock = File.Open(path, FileMode.Open, FileAccess.ReadWrite, FileShare.Read | FileShare.Delete))
    {
        string target = path + ".out";
        File.Move(path, target); // This is the point of failure, when working with networkPath

        if (File.Exists(target))
            File.Delete(target);
    }

    if (!File.Exists(path))
        File.Copy(sourceFile, path);
}
EDIT: It's worth mentioning that if you wish to move the file from one network share to another network share while the lock is in place, this works. The problem only seems to occur when moving the file within the same share while it is locked.
I believe System.IO.File.Open() maps to the Win32 API function CreateFile(). Microsoft's documentation for this function ( http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx ) mentions the following:
Windows Server 2003 and Windows XP/2000: A sharing violation occurs if an attempt is made to open a file or directory for deletion on a remote computer when the value of the dwDesiredAccess parameter is the DELETE access flag (0x00010000) OR'ed with any other access flag, and the remote file or directory has not been opened with FILE_SHARE_DELETE. To avoid the sharing violation in this scenario, open the remote file or directory with the DELETE access right only, or call DeleteFile without first opening the file or directory for deletion.
According to this, you would have to pass DELETE as the desired-access parameter when opening the file. Unfortunately, the FileAccess enumeration used by IO.File.Open() has no DELETE value.
This problem only pertains to Windows Server 2003 and earlier. I have tested your code on Windows Server 2008 R2 SP1, and it works fine, so it is possible that it would also work on Windows Server 2008.
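If you do need the pre-2008 workaround, here is a hedged sketch of opening a file with only DELETE access via P/Invoke, since FileAccess cannot express it (constants taken from winnt.h; treat this as an illustration, not a drop-in fix):

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class NativeFile
{
    const uint DELETE = 0x00010000;          // desired access: delete only
    const uint FILE_SHARE_READ = 0x00000001;
    const uint FILE_SHARE_WRITE = 0x00000002;
    const uint FILE_SHARE_DELETE = 0x00000004;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Open the remote file with the DELETE access right only, as the
    // CreateFile documentation suggests for this scenario
    public static SafeFileHandle OpenForDelete(string path)
    {
        SafeFileHandle handle = CreateFile(
            path, DELETE,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);

        if (handle.IsInvalid)
            throw new Win32Exception(Marshal.GetLastWin32Error());

        return handle;
    }
}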