I have been trying to lock a file so that other cloned services cannot access it. I then read the file, and move it when finished. The move is allowed by opening the file with FileShare.Delete.
However, in later testing we found that this approach does not work when the file is on a network share. I appreciate my approach may not have been the best, but my specific question is:
Why does the below demo work against the local file, but not against the network file?
The more specific you can be the better, as I've found very little information in my searches indicating that network shares behave differently from local disks.
string sourceFile = @"C:\TestFile.txt";
string localPath = @"C:\MyLocalFolder\TestFile.txt";
string networkPath = @"\\MyMachine\MyNetworkFolder\TestFile.txt";

File.WriteAllText(sourceFile, "Test data");

if (!File.Exists(localPath))
    File.Copy(sourceFile, localPath);

foreach (string path in new string[] { localPath, networkPath })
{
    using (FileStream fsLock = File.Open(path, FileMode.Open, FileAccess.ReadWrite, FileShare.Read | FileShare.Delete))
    {
        string target = path + ".out";
        File.Move(path, target); // This is the point of failure when working with networkPath
        if (File.Exists(target))
            File.Delete(target);
    }
    if (!File.Exists(path))
        File.Copy(sourceFile, path);
}
EDIT: It's worth mentioning that if you wish to move the file from one network share to another network share while the lock is in place, this works. The problem only seems to occur when moving a file within the same file share while it is locked.
I believe System.IO.File.Open() maps to the Win32 API function CreateFile(). In Microsoft's documentation for this function [ http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx ], it mentions the following:
Windows Server 2003 and Windows XP/2000: A sharing violation occurs if an attempt is made to open a file or directory for deletion on a remote computer when the value of the dwDesiredAccess parameter is the DELETE access flag (0x00010000) OR'ed with any other access flag, and the remote file or directory has not been opened with FILE_SHARE_DELETE. To avoid the sharing violation in this scenario, open the remote file or directory with the DELETE access right only, or call DeleteFile without first opening the file or directory for deletion.
According to this, you would have to pass DELETE as the access parameter to System.IO.File.Open(). Unfortunately, the FileAccess enumeration does not include a DELETE value.
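Purely as an illustration (this is not part of the original answer), here is a rough sketch of what a DELETE-only open would look like via P/Invoke, which is exactly the access right FileAccess cannot express; the constants are the standard Win32 values, and holding such a handle is only the first half of the documented workaround:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class NativeMethods
{
    // Win32 constants; DELETE is the access right that FileAccess cannot express.
    const uint DELETE = 0x00010000;
    const uint FILE_SHARE_READ = 0x1;
    const uint FILE_SHARE_DELETE = 0x4;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Opens the remote file with the DELETE access right only, as the
    // documentation suggests, to avoid the sharing violation on older servers.
    public static SafeFileHandle OpenForDeleteOnly(string path)
    {
        SafeFileHandle handle = CreateFile(path, DELETE,
            FILE_SHARE_READ | FILE_SHARE_DELETE,
            IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid)
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        return handle;
    }
}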
This problem only pertains to Windows Server 2003 and earlier. I have tested your code on Windows Server 2008 R2 SP1 and it works fine, so it is possible that it would also work on Windows Server 2008.
Related
I'm experiencing an issue with an FTP watcher service and the File.Move method.
The FTP server is a simple IIS 8.5 FTP site, and the FTP client is FileZilla.
The Windows service polls a directory where the files are to be dropped.
The first task is to rename the file, using the static File.Move method.
The second, is to copy the file to another directory using the static File.Copy method.
The issue is that while the file is being transferred, File.Copy will (correctly) throw an IOException if it is used, with the message "The file is being used by another process".
However, File.Move performs its task without throwing any exception while the file is still being transferred. Is this the correct behavior for this method? I've not been able to find any information on why this occurs. My impression was that File.Move would throw an exception if it is used on a file that is being used by another process [the FTP transfer], but it doesn't seem to.
Has anyone experienced this and/or have an explanation for the behavior of the File.Move method?
Copying a file requires opening it for read access. The FTP server currently has the file open such that you cannot open it for reading.
Moving a file does not require opening it for read access unless the file is on a different volume than the destination.
Since moving a file within the same volume requires only delete access and not read access, the FTP server must be locking the file against read and write access, but not against delete access.
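That share-mode combination can be reproduced locally with a small sketch (the paths are placeholders): hold the file open sharing only Delete, and File.Copy fails with a sharing violation while File.Move on the same volume succeeds.

using System;
using System.IO;

string path = @"C:\Temp\inbound.txt";          // placeholder paths
string movedPath = @"C:\Temp\inbound.renamed.txt";
string copyPath = @"C:\Temp\inbound.copy.txt";

File.WriteAllText(path, "test");

// Roughly simulate a writer (like the FTP server) that shares Delete but not Read.
using (var writer = File.Open(path, FileMode.Open, FileAccess.ReadWrite, FileShare.Delete))
{
    try
    {
        File.Copy(path, copyPath);              // needs read access -> IOException
    }
    catch (IOException ex)
    {
        Console.WriteLine("Copy failed: " + ex.Message);
    }

    File.Move(path, movedPath);                 // same volume: needs only delete access -> succeeds
}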
This code shows that File.Move will indeed throw an exception if the file is in use when you try to move it, so I think your premise is incorrect.
var filePath = @"d:\public\temp\temp.txt";
var moveToPath = @"d:\public\temp\temp2.txt";

// Open a stream reader so the file is 'in use'
using (var fileStream = new StreamReader(filePath))
{
    // This will fail with an IOException
    File.Move(filePath, moveToPath);
}
Exception:
The process cannot access the file because it is being used by another process.
Moving a file is effectively implemented as a mere rename and only requires write permission on the source and target directories. For a real copy you need read permission on the file itself. As there is an exclusive lock on the source file, the copy will fail; the move, however, will succeed.
I've written an ASP.NET web app that writes a file to a location on our iSeries file share.
The path looks like this: \\IBMServerAddress\Filepath
This code executes perfectly on my local machine, but fails when it's deployed to my (Windows) web server.
I understand that I may need to do some sort of impersonation to authenticate access to the IFS, but I'm unsure of how to proceed.
Here's the code I'm working with:
string filepath = "\\\\IBMServerAddress\\uploads\\";

public int SaveToDisk(string data, string plant)
{
    // Write the data to a text file on the IFS share.
    using (StreamWriter stream = File.CreateText(filepath + plant + ".txt"))
    {
        stream.Write(data + "\r\n");
    }
    return 0;
}
Again, this code executes perfectly on my local machine but does not work when deployed to my Windows web server: access to filepath is denied.
Thanks for your help.
EDIT: I've tried adding a network account with the same credentials as the IFS user and created a UNC path (iseries) on IIS7 to map the network drive (using the same credentials), but I receive this error:
Access to the path 'iseries\' is denied.
My understanding of Windows in general is that normally services don't have access to standard network shares like a program being run by a user does.
So the first thing would be to see if you can successfully write to a windows file share from the web server.
Assuming that works, you'll need one of two things in order to write to the IBM i share:
1) An IBM i user ID and password that matches the user ID and password the process is being run under
2) A "guest account" configured on IBM i Netserver
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahl/rzahlsetnetguestprof.htm
You might have better luck using the Linux/UNIX-based Network File System (NFS), which is supported on both Windows and IBM i.
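As for the impersonation the question mentions, it is not covered by the answer above, but a minimal sketch (assuming .NET Framework 4.6 or later for WindowsIdentity.RunImpersonated, with placeholder user, domain, and password values supplied by the caller) would look something like this:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;
using Microsoft.Win32.SafeHandles;

static class ShareWriter
{
    // LOGON32_LOGON_NEW_CREDENTIALS applies the credentials to outbound network access only.
    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
    const int LOGON32_PROVIDER_WINNT50 = 3;

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out SafeAccessTokenHandle token);

    // Writes 'contents' to 'uncPath' while impersonating the supplied account.
    public static void WriteAs(string user, string domain, string password,
                               string uncPath, string contents)
    {
        SafeAccessTokenHandle token;
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

        using (token)
        {
            WindowsIdentity.RunImpersonated(token, () => File.WriteAllText(uncPath, contents));
        }
    }
}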
I have a job that needs to connect to two fileshares and copy some files for a data feed.
The source server is on our domain's network, and that works fine. The remote server, however, chokes on me and throws a "Could not find part of the path" error. I should add that the destination server lives in a different domain from my source server.
The source and destination paths are read out of my app.config file.
I thought persistently mapping a drive would work, but since this is a scheduled task, that doesn't seem to work. I thought about using NET USE, but that doesn't seem to like taking a username and password.
The really weird thing: if I double-click the job while I'm logged into the machine, it runs successfully.
Sample code:
DirectoryInfo di = new DirectoryInfo(srcPath);
try
{
    FileInfo[] files = di.GetFiles();
    foreach (FileInfo fi in files)
    {
        if (!fi.Name.Contains("_desc"))
        {
            Console.WriteLine(fi.Name + System.Environment.NewLine);
            File.Copy(fi.FullName, destPath + fi.Name, true);
            fi.Delete();
        }
    }
}
catch (IOException ex)
{
    // The question does not show its catch block; a simple log is used here so the snippet compiles.
    Console.WriteLine(ex.Message);
}
Apparently this isn't as simple as copying the files over. Any suggestions on mapping a drive with credentials in C# 4.0?
EDIT
I'm trying to use a batch file called from the console application that maps the drive while the program is running. I'll know for sure whether that works in the morning.
I'd suggest looking into a proper file transfer protocol, like FTP.
Assuming that's out of the question, try using a UNC path like \\servername\path\file.txt. You will still need credentials, but assuming that the account running the application has those permissions you should be fine. Given that you mention a web.config file, I am guessing that would be an ASP.NET application, and therefore I mean the account that runs the Application Pool in IIS. See http://learn.iis.net/page.aspx/624/application-pool-identities/
What I finally wound up doing was mapping the drive in a batch file called by my program. I just launch a NET USE command and pause for a few seconds for the mapping to complete.
It looks like there is no context for mapped drives while the user is logged out.
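The same thing can be done without a separate batch file by shelling out to NET USE from the program and waiting for it to exit instead of pausing; a sketch, with placeholder server, share, and credentials:

using System;
using System.Diagnostics;

// Map the remote share before copying; server, share, and credentials are placeholders.
var netUse = Process.Start(new ProcessStartInfo
{
    FileName = "net.exe",
    Arguments = @"use \\remoteserver\share P@ssw0rd /user:OTHERDOMAIN\feeduser",
    UseShellExecute = false,
    CreateNoWindow = true
});
netUse.WaitForExit();   // wait for the mapping to complete rather than sleeping

if (netUse.ExitCode != 0)
    throw new InvalidOperationException("net use failed with exit code " + netUse.ExitCode);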
I'm using the WinSCP .NET assembly to upload a file. OperationResultBase.Check() is throwing the following error:
WinSCP.SessionRemoteException: Transfer was successfully finished, but temporary transfer file 'testfile.zip.filepart' could not be renamed to target file name 'testfile.zip'. If the problem persists, you may want to turn off transfer resume support.
It seems that this happens with any zip file that I try to send. If it makes a difference, these are zip files that were created using the DotNetZip library.
Code that I'm using, taken pretty much directly from the example in the WinSCP documentation:
public void uploadFile(string filePath, string remotePath)
{
    TransferOptions transferOptions = new TransferOptions();
    transferOptions.TransferMode = TransferMode.Binary;

    TransferOperationResult transferResult;
    transferResult = currentSession.PutFiles(filePath, remotePath, false, transferOptions);
    transferResult.Check();

    foreach (TransferEventArgs transfer in transferResult.Transfers)
    {
        Console.WriteLine("Upload of {0} succeeded", transfer.FileName);
    }
}
Discussion over at the WinSCP forum indicates that the assembly doesn't yet allow programmatic control of transfer resume support. Is there a workaround for this?
It sounds as if the file system on the destination server does not allow the uploaded file to be renamed once it has been written. That would cause the rename at the end of the upload to fail even though the complete file was uploaded and written to the file system under the temporary name used while the transfer was in progress. If you don't have administrative access to the destination server, you can test this by trying to rename a file that is already on the destination server. If that fails as well, you will either need to have the proper permissions on the destination server changed, or follow the advice in the error message and turn off resume support so the file is opened for writing with its final name instead of the temporary name (with the .filepart extension).
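That rename test can be scripted against the session the question already has open, using the WinSCP .NET assembly's Session.MoveFile (the remote paths here are placeholders):

// If this throws, remote renames are being blocked, which would also break
// the .filepart -> final-name step at the end of an upload.
currentSession.MoveFile("/uploads/existing.zip", "/uploads/existing-renamed.zip");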
Turn off resume support:
put *.txt -nopreservetime -nopermissions -resumesupport=off
It would help if you included the full error message, including the root cause as returned by the server.
My guess is that there's an antivirus application (or similar) running on the server side. The antivirus application checks each file once its upload finishes. That conflicts with WinSCP's attempt to rename the file once the upload is finished. The problem may tend to occur more frequently for .ZIP archives, either because they tend to be larger or simply because they need to be extracted before the check (which takes time).
Anyway, you can disable the transfer to a temporary file name using TransferOptions.ResumeSupport.
See also the documentation for the error message "Transfer was successfully finished, but temporary transfer file ... could not be renamed to target file name ..."
All you have to do is disable resume support using the code below.
transferOptions.ResumeSupport = new TransferResumeSupport { State = TransferResumeSupportState.Off };
I have been using ApplicationDeployment.CurrentDeployment.DataDirectory to store content downloaded by the client at runtime, which is expected to be there every time the app launches. However, I've now found that this path changes, seemingly at random, when the application is updated.
What is the best reliable method for storing user data for the application in click-once deployments?
Currently I've been using the following method
private const string LocalPath = "data";

public string GetStoragePath() {
    string dir;
    if (ApplicationDeployment.IsNetworkDeployed) {
        ApplicationDeployment ad = ApplicationDeployment.CurrentDeployment;
        dir = Path.Combine(ad.DataDirectory, LocalPath);
    } else {
        dir = LocalPath;
    }
    return CreateDirectory(dir);
}
I originally followed the article Accessing Local and Remote Data in ClickOnce Applications, under the heading ClickOnce Data Directory, which states that this is the recommended path.
NOTE: CreateDirectory(string) simply creates a directory if it doesn't already exist.
I have found that the root cause of my problem is that I'm creating many files plus an index file, and the index file contains absolute paths. ClickOnce moves (or copies) the content on an upgrade, so the absolute paths no longer exist. I will investigate isolated storage, as Damokles suggests, to see whether it has the same side effect for ClickOnce deployments.
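One way to sidestep that specific root cause, regardless of which storage location is chosen, is to keep only relative paths in the index and resolve them at load time; a minimal sketch, reusing GetStoragePath() from the question and a hypothetical fullPath pointing at a file under that root:

using System.IO;

// When writing the index, store the path relative to the current storage root.
string root = GetStoragePath();
string relative = fullPath.Substring(root.Length).TrimStart(Path.DirectorySeparatorChar);

// When reading the index, resolve against whatever the root is for this version.
string resolved = Path.Combine(GetStoragePath(), relative);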
Another option is to make a directory for your application in the user's AppData folder and store it there. You can get a path to that with this:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
You'll find a lot of applications use that (and its local equivalent). It also doesn't move around between ClickOnce versions.
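A minimal sketch of that approach (the application folder name is a placeholder):

using System;
using System.IO;

// Per-user, version-independent location; "MyClickOnceApp" is a placeholder folder name.
string dir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "MyClickOnceApp");
Directory.CreateDirectory(dir);   // no-op if it already exists

string dataFile = Path.Combine(dir, "data.dat");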
Check out IsolatedStorage; this should help.
It even works in partial trust environments.
To keep your data you need to use the application-scoped IsolatedStorage:
using System.IO;
using System.IO.IsolatedStorage;
...
IsolatedStorageFile appScope = IsolatedStorageFile.GetUserStoreForApplication();
using (IsolatedStorageFileStream fs = new IsolatedStorageFileStream("data.dat", FileMode.OpenOrCreate, appScope))
{
    ...
}
code taken from this post
It depends on the data you are saving.
You are currently saving to the Data Directory, which is fine. What you need to be aware of is that each version of the application has its own Data Directory. When you update, ClickOnce copies all the data from the previous version to the new version the first time the updated application starts up. This gives you a hook to migrate any of the data from one version to the next (see the sketch below). This is good for embedded databases like SQLite or SQL CE.
One thing that I came across is that when you have a large amount of data (4 GB), if you store it in the Data Directory it will all be copied from the old version to the new version, which slows down start-up after an upgrade. If you have a large amount of data, or you don't want to worry about migrating data, you can either store the data in the user's local folder (provided you have full trust) or use isolated storage if you have partial trust.
Isolated Storage
Local User Application Data
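The migration hook mentioned above can be keyed off ApplicationDeployment.CurrentDeployment.IsFirstRun, which is true the first time a given version runs; a sketch, where FixUpIndexPaths is a hypothetical migration routine:

using System.Deployment.Application;

if (ApplicationDeployment.IsNetworkDeployed &&
    ApplicationDeployment.CurrentDeployment.IsFirstRun)
{
    // First run of this version: the previous version's data has just been copied
    // into this version's Data Directory, so fix up anything version-specific here.
    FixUpIndexPaths(ApplicationDeployment.CurrentDeployment.DataDirectory);
}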