I'm working for a client that has given me access to two specific folders and their subfolders only. The first one was our previous working space; we are now switching to the second one.
When I connect to the SFTP server using the WinSCP GUI, it puts me in the old folder. However, I can change that by opening the session settings and entering the "new" path in the remote path field. The session will then take me to the new default folder/workspace automatically when I connect.
My question is: how can I do this using .NET and the WinSCP .NET assembly?
The problem is that the root directory of the session is different from the remote path.
Example:
Session directory is /C/Document/.
Remote path is /C/Inetpub/ftproot/username/
When I use the following commands in the winscp.com console:
winscp.com> open sftp://someone:password;fingerprint=something@ipaddress/C/Inetpub/ftproot/username
winscp.com> put some.txt /in
winscp.com> exit
it works fine, because the session's starting directory is then /C/Inetpub/ftproot/username/.
Is there a way to set session root path in C#?
Solved: you are right, it is a virtual path, so /C/Inetpub instead of C/Inetpub (note the leading slash).
WinSCP .NET assembly does not use the concept of a working directory. It always uses absolute paths.
So if you have your GUI session configured to start in /new/path, use that as an absolute path in WinSCP .NET assembly.
session.PutFiles(@"c:\local\path\*", "/new/path/", false, transferOptions);
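For context, a minimal end-to-end sketch with the WinSCP .NET assembly; the host name, credentials, and fingerprint below are placeholders, and the remote path is the one from the question:

using WinSCP;

var sessionOptions = new SessionOptions
{
    Protocol = Protocol.Sftp,
    HostName = "example.com",   // placeholder
    UserName = "someone",       // placeholder
    Password = "password",      // placeholder
    SshHostKeyFingerprint = "ssh-rsa 2048 ...", // placeholder
};

using (var session = new Session())
{
    session.Open(sessionOptions);
    var transferOptions = new TransferOptions { TransferMode = TransferMode.Binary };
    // No working directory to set; the remote target is simply an absolute path.
    session.PutFiles(@"c:\local\path\*", "/C/Inetpub/ftproot/username/", false, transferOptions).Check();
}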
The modern way of doing things would be to use Renci.SshNet.
Don't forget to use forward slashes and to disconnect your session once you are done.
You can create your session with:
var connectionInfo = new Renci.SshNet.ConnectionInfo(host, username, new PasswordAuthenticationMethod(username, password));
using (var sftp = new SftpClient(connectionInfo))
{
    sftp.Connect();
    // Forward slashes and a leading slash for the virtual root, not "C:\...":
    sftp.ChangeDirectory("/C/Inetpub/ftproot/username/");
    // ... transfers go here ...
    sftp.Disconnect();
}
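For the actual upload, a hedged sketch of what could go in place of the comment above (paths are illustrative):

using (var fileStream = File.OpenRead(@"c:\local\path\some.txt"))
{
    sftp.UploadFile(fileStream, "some.txt"); // relative to the changed directory
}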
I have successfully established a connection to a remote Linux server using the SSH.NET package with the following code (I am using a ShellStream because I have to use sudo su):
using (var client = new SshClient(server, username, password))
{
client.Connect();
List<string> commands = new List<string>();
commands.Add("sudo su - user");
commands.Add("vi test.properties");
ShellStream shellStream = client.CreateShellStream("xterm", 80, 24, 800, 600, 1024);
// Execute commands under root account
foreach (string command in commands) {
WriteStream(command, shellStream);
}
client.Disconnect();
}
private static void WriteStream(string cmd, ShellStream stream)
{
    // Append a sentinel so the end of the command's output can be detected.
    stream.WriteLine(cmd + "; echo this-is-the-end");
    // Wait until the shell has produced some output.
    while (stream.Length == 0)
        Thread.Sleep(500);
}
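The sentinel can then be consumed to wait for each command to finish; a hedged sketch (ShellStream.Expect blocks until the given text appears and returns everything read so far):

private static string ReadUntilMarker(ShellStream stream)
{
    // Blocks until the shell echoes the sentinel written by WriteStream.
    return stream.Expect("this-is-the-end");
}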
I am trying to edit the test.properties file that is in the remote Linux server using a C# function that I created.
My code (C# in Visual Studio) that modifies the text file uses System.IO.File.ReadAllText, but it does not recognize the path of the remote server. For example:
The text file in the Linux server is in this location: /home/user/test.properties, so I am using this in my code:
System.IO.File.ReadAllText("/home/user/test.properties")
I am getting the following error:
Could not find a part of the path 'C:\home\user\test.properties'
For some reason it tries to look in my local file system instead of the remote server.
Is there a different approach I should be taking?
Thanks in advance!
In general, to modify remote files, use SFTP. In SSH.NET that's what SftpClient is for.
Though as you seem to need elevated privileges (su), an important factor that your question title fails to mention, it's way more difficult. The right solution is to avoid the need for su. See the somewhat related:
Allowing automatic command execution as root on Linux using SSH.
Another option would be to try to execute the SFTP server under su. Though that would require modification of SSH.NET code. See related Java question:
Using JSch to SFTP when one must also switch user
If you want to keep your current shell approach with su, you are stuck with simulating shell commands. Note that connecting to an SSH server won't make other .NET classes (like File) magically able to work with remote files (even if SFTP were possible, let alone when it is not, due to the su requirement).
The easiest way to read a remote file using the shell is the cat command:
cat /home/user/test.properties
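Within the shell session from the question, that could look roughly like this (a hedged sketch reusing the question's WriteStream helper and sentinel; not verified against the original setup):

WriteStream("cat /home/user/test.properties", shellStream);
// Everything up to the sentinel is the command echo plus the file contents.
string output = shellStream.Expect("this-is-the-end");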
Alright, so after finishing the task, this is what I did since asking the question here:
1. After your (@Martin Prikryl) response I tried a combination of SSH and WinSCP:
WinSCP to download the file.
.NET to modify the locally downloaded file.
WinSCP to upload the file (deleting it from the local folder afterwards).
SSH to move the file to its appropriate location on the server.
I discarded this solution because it worked pretty well in the lower environment, but in production I had permission issues, so I couldn't even download the file; besides, it might be a security issue (it's a sensitive file).
2. My next solution was using only SSH to simulate shell commands; as you previously mentioned, I was limited to that because I was stuck with sudo su.
I connected to the server with SSH and used the sed command to show only the lines that contain specific words (instead of using cat to get the whole file).
I then used my .NET code to pull the values that I needed for my GET operation.
For the POST operation I used sed again to replace lines, as sketched below.
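Roughly, the two sed calls looked like this (reusing the WriteStream helper from earlier; the key names and values are placeholders):

// GET: print only the lines containing the key
WriteStream("sed -n '/myKey/p' /home/user/test.properties", shellStream);
// POST: replace the matching line in place
WriteStream("sed -i 's/^myKey=.*/myKey=newValue/' /home/user/test.properties", shellStream);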
I have an internal ASP.NET MVC site that needs to read an Excel file. The file is on a different server from the one that ASP.NET MVC is running on and in order to prevent access problems I'm trying to copy it to the ASP.NET MVC server.
It works OK on my dev machine but when it is deployed to the server it can't see the path.
This is the chopped down code from the model (C#):
string fPath = HttpContext.Current.Server.MapPath(@"/virtualdir");
string fName = fPath + "test.xlsm";
if (System.IO.File.Exists(fName))
{
// Copy the file and do what's necessary
}
else
{
if (!Directory.Exists(fPath))
throw new Exception($"Directory not found: {fPath} ");
else
throw new Exception($"File not found: {fName } ");
}
The error I'm getting is
Directory not found:
followed by the path.
The path in the error is correct - I've copied and pasted it into explorer and it resolves OK.
I've tried using the full UNC path, a mapped network drive and a virtual directory (as in the code above). Where required these were given network admin rights (to test only!) but still nothing has worked.
The internal website is using pass through authentication but I've used specific credentials with full admin rights for the virtual directory, and the virtual dir in IIS expands OK to the required folder.
I've also tried giving the application pool (which runs in Integrated mode) full network admin rights.
I'm kind of hoping I've just overlooked something simple and this isn't a 'security feature'.
I found this question copy files between servers asp.net mvc but the answer was to use FTP and I don't want to go down that route if I can avoid it.
Any assistance will be much appreciated.
First, to be sure your directory path is built correctly, I would use Path.Combine:
string fName = Path.Combine(fPath, "test.xlsm");
Second, I would check the following post and try some things there as it seems to be a similar issue.
Directory.Exists not working for a network path
If you are still not able to see the directory, there is a good chance the user does not have access to that network path. Likely the app pool identity running your application on your dev machine has access to the directory, while the app pool on the production box doesn't have that same access. You would have to get with the network engineer to get that resolved.
Alternatively, you could write a PowerShell script that runs as a user with access to both the production and the development server and copies the file over to the production server; if that is your ultimate goal and it is allowed in your environment, your server administrators could schedule it for you.
I have a desktop application in which the user is able to specify the input and output directories. Things work fine for local directories, but people have started complaining about network locations accessed using UNC naming conventions.
If the user pastes a UNC path, the code checks whether the directory exists using the following method:
if (Directory.Exists(selecteddir))
{
// all good
}
This method returns false for some network locations situated on other machines. I have tested using the local machine path \\?\C:\my_dir and the code works fine.
The application runs with administrative rights.
I'm new to accessing network locations in C# code. Is there any specific way to do this? If the user has already performed Windows-based authentication for the UNC shares, won't these shares be accessible to the C# application?
Please advise on how to go forward.
Update:
I have also tried using directory info
DirectoryInfo info1 = new DirectoryInfo(textbox.Text);
if (info1.Exists)
{
return true;
}
I have faced this situation many times. In the end, I believe there is some issue with the Directory.Exists method, so I stopped using it.
Now, I am using DirectoryInfo class to check that like this.
DirectoryInfo info = new DirectoryInfo(#"Your Path");
if (info.Exists)
{
}
It is working fine for now. There may be other reasons too, but it works for me. And of course, it does not resolve the impersonation issue.
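If you need to know why the check fails, one option is to enumerate instead of calling Exists; Directory.Exists and DirectoryInfo.Exists swallow errors and simply report false, while enumeration surfaces the actual exception. A hedged sketch (the helper name is hypothetical):

static bool CanAccessDirectory(string path)
{
    try
    {
        // Throws instead of silently returning false.
        new DirectoryInfo(path).GetFileSystemInfos();
        return true;
    }
    catch (UnauthorizedAccessException)
    {
        // The directory exists but the current identity lacks permission.
        return false;
    }
    catch (DirectoryNotFoundException)
    {
        // The path (or part of it) genuinely does not exist.
        return false;
    }
}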
The file path that I'm checking with File.Exists() resides on a mapped drive (Z:\hello.txt). The code runs fine in debug environment, however in IIS, it always returns false
var fullFileName = string.Format("{0}\\{1}", ConfigurationManager.AppSettings["FileName"], fileName);
if (System.IO.File.Exists(fullFileName))
Why is this so, and how can I workaround this?
I have granted everyone full read/write permissions in that mapped drive
EDIT:
I tried deleting the file via \\192.168.1.12\Examples\Files\2.xml and I get the same result. It doesn't detect the file on IIS, but works fine on debug
I think your application does not have permission on "Z:".
Is "Z:" a network drive?
I have had similar issues using network mapped drives: when running debug code the application works perfectly, but the release version cannot find the file.
If the files are stored on the same server the application is deployed to, one solution is to use the local drive location that the mapped drive points to; for example Z:\files\ could be E:\folder\folder1\.
If the application is deployed on a separate server, we found that using the full network name works, for example \\server1\folder\.
I hope this proves helpful to you.
Your web application is running under a certain security context and you need to find out what context this is. If it's a normal user, open a command prompt as that user (using the runas tool) and map the required drive from that prompt, being sure to use the /persistent:yes flag (e.g. net use Z: \\server\share /persistent:yes).
Alternatively why can't you just use a UNC path (\\serverName\shareName) and avoid all this nonsense?
EDIT: 2013-05-27
To troubleshoot this, create a new application pool, based on whatever app pool you want. Then set the identity that this pool runs under (in IIS, via the app pool's Advanced Settings, using a custom account).
Make sure that this user has the correct privileges on the file share and then retest it.
Maybe you should use Path.DirectorySeparatorChar.
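That is, build the path without hard-coding separators; a sketch of the check from the question rewritten with Path.Combine:

var fullFileName = Path.Combine(ConfigurationManager.AppSettings["FileName"], fileName);
if (System.IO.File.Exists(fullFileName))
{
    // ...
}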
I have been using ApplicationDeployment.CurrentDeployment.DataDirectory to store content downloaded by the client at runtime, which is expected to be there every time the app launches. However, I have now found that this path changes, seemingly at random, when the application is updated.
What is the best reliable method for storing user data for the application in click-once deployments?
Currently I've been using the following method
private const string LocalPath = "data";
public string GetStoragePath() {
string dir;
if (ApplicationDeployment.IsNetworkDeployed) {
ApplicationDeployment ad = ApplicationDeployment.CurrentDeployment;
dir = Path.Combine(ad.DataDirectory, LocalPath);
} else {
dir = LocalPath;
}
return CreateDirectory(dir);
}
I originally followed the article Accessing Local and Remote Data in ClickOnce Applications under the heading ClickOnce Data Directory, which states this is the recommended path.
NOTE: CreateDirectory(string) simply creates a directory if it doesn't already exist.
I have found the root cause of my problem: I'm creating many files plus an index file, and the index file contains absolute paths. ClickOnce moves (or copies) the content on an upgrade, so the absolute paths no longer exist. I will investigate isolated storage, as Damokles suggests, to see if it has the same side effect for ClickOnce deployments.
Another option is to make a directory for your application in the user's AppData folder and store it there. You can get a path to that with this:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
You'll find a lot of applications use that (and its local equivalent). It also doesn't move around between ClickOnce versions.
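For instance, a minimal sketch (the "MyApp" folder name is a placeholder):

string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
string storageDir = Path.Combine(appData, "MyApp"); // placeholder application name
Directory.CreateDirectory(storageDir); // no-op if the directory already exists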
Check out IsolatedStorage; this should help.
It even works in partial trust environments.
To keep your data you need to use the application-scoped IsolatedStorage:
using System.IO;
using System.IO.IsolatedStorage;
// ...
IsolatedStorageFile appScope = IsolatedStorageFile.GetUserStoreForApplication();
using (IsolatedStorageFileStream fs = new IsolatedStorageFileStream("data.dat", FileMode.OpenOrCreate, appScope))
{
    // read from or write to the stream here
}
code taken from this post
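Reading the file back from the same store might look like this (a hedged sketch):

using (var fs = new IsolatedStorageFileStream("data.dat", FileMode.Open, appScope))
using (var reader = new StreamReader(fs))
{
    string data = reader.ReadToEnd();
}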
It depends on the data you are saving.
You are currently saving to the Data Directory, which is fine. What you need to be aware of is that each version of the application has its own Data Directory. When you update, ClickOnce copies all the data from the previous version to the new version when the application is started up. This gives you a hook to migrate any of the data from one version to the next. This is good for in-memory databases like SQLite or SQL CE.
One thing that I came across is that when you have a large amount of data (4 GB), storing it in the Data Directory means it all gets copied from the old version to the new version, which slows down the first start-up after an upgrade. If you have a large amount of data, or you don't want to worry about migrating data, you can either store the data in the user's local folder (provided you have full trust) or use isolated storage if you have partial trust.
Isolated Storage
Local User Application Data