I have a small utility I am working on that deletes old user profiles from domain machines.
Basically, where I am stuck is looking for a better process to delete remote directories.
I know I can use System.IO and delete it via the UNC path, but I am not happy with the performance of the network deletion. It can take hours to delete medium-sized profiles, and with dozens or hundreds of profiles or machines this is not feasible. So that approach appears to be out of the question.
The best I can find appears to be PsExec calls, but I want something managed.
Are there any .NET classes that can invoke the remote machine to complete the deletion of the directory instead of relying on the calling machine?
If your client computers don't detect this as a virus, you can use it to execute remote commands on your network computers, including folder deletions:
PsExec v1.98
Introduction
Utilities like Telnet and remote control programs like Symantec's PC
Anywhere let you execute programs on remote systems, but they can be a
pain to set up and require that you install client software on the
remote systems that you wish to access. PsExec is a light-weight
telnet-replacement that lets you execute processes on other systems,
complete with full interactivity for console applications, without
having to manually install client software. PsExec's most powerful
uses include launching interactive command-prompts on remote systems
and remote-enabling tools like IpConfig that otherwise do not have the
ability to show information about remote systems.
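If driving PsExec from managed code is acceptable as a halfway house, a thin wrapper is enough. A minimal sketch (machine name, credentials, and profile path are placeholders; -u/-p and the remote command are standard PsExec usage):

using System;
using System.Diagnostics;

// Launches PsExec so the rd command runs on the remote machine itself.
var psi = new ProcessStartInfo("psexec.exe",
    @"\\MACHINE1 -u DOMAIN\admin -p secret cmd /c rd /s /q C:\Users\OldProfile")
{
    UseShellExecute = false,
    RedirectStandardOutput = true
};
using (var proc = Process.Start(psi))
{
    Console.WriteLine(proc.StandardOutput.ReadToEnd());
    proc.WaitForExit();  // PsExec's exit code is that of the remote command
}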
I have no idea how it works internally. In .NET code, the idea would be to send RPC calls to a remote application that you control, which is easy enough provided you already have such an application running on the target computers. The mechanism used would be .NET Remoting or WCF.
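To make that concrete, a minimal WCF shape might look like the following; all names are hypothetical, and it assumes you can deploy and run such an agent on each target machine:

using System;
using System.IO;
using System.ServiceModel;

// Contract shared by the agent and the calling utility.
[ServiceContract]
public interface IProfileCleaner
{
    [OperationContract]
    void DeleteDirectory(string path);
}

// Runs on the target machine, so the recursive delete is local disk I/O.
public class ProfileCleaner : IProfileCleaner
{
    public void DeleteDirectory(string path)
    {
        Directory.Delete(path, true);
    }
}

class AgentHost
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(ProfileCleaner),
            new Uri("net.tcp://localhost:8523/cleaner")))
        {
            host.AddServiceEndpoint(typeof(IProfileCleaner), new NetTcpBinding(), "");
            host.Open();
            Console.WriteLine("Agent running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}

On the calling side, a ChannelFactory<IProfileCleaner> pointed at net.tcp://targetmachine:8523/cleaner would give you the proxy to invoke DeleteDirectory on.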
Inspired by this answer, I made some minor modifications. I can't get it to run 100% managed, as I get error code 9 (The storage control block address is invalid) when I try to run the rd command from within the code itself.
The base functionality is blindingly fast on my small test setup, but given that it overrides the "Are you sure?" prompt, it is also fairly dangerous if you specify the wrong path, so wear your hard hat as you proceed:
If you execute echo Y | rd /S c:\Temp\test in any command shell, you'll remove C:\Temp\Test and anything below it very quickly and without warning.
But executing this solution directly from the code doesn't work, so my quick fix is to place a bat file (called DeleteTest.bat) on the machine, containing only this line, and then execute the bat file via WMI.
In my small test, it deletes ~900 files totalling ~200 MB in a second or so.
Also, in addition to the cited answer, I retrieve the return code, so my full code becomes:
using System;
using System.Management;  // add a reference to System.Management.dll

var processToRun = new[] { "c:\\Temp\\DeleteTest.bat" };
var connection = new ConnectionOptions();
connection.Username = "me";
connection.Password = "password";

// Connect to the CIMv2 namespace on the remote machine; Win32_Process.Create
// launches the process there, so the deletion runs locally on that machine.
var wmiScope = new ManagementScope(
    String.Format("\\\\{0}\\root\\cimv2", "MyRemoteMachine"), connection);
var wmiProcess = new ManagementClass(
    wmiScope, new ManagementPath("Win32_Process"), new ObjectGetOptions());
var result = wmiProcess.InvokeMethod("Create", processToRun);

// A return value of 0 means the process was created successfully.
Console.WriteLine("Creation of process returned: " + result);
You will obviously also need the bat file to be generated (by code or pre-generated) and copied to the destination, but that should be trivial.
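For example, if the administrative share is reachable with the same credentials, generating and copying the bat file could be as simple as this (a sketch reusing the machine name and command from above):

// Writes the one-line bat file straight to the remote machine's C: drive.
System.IO.File.WriteAllText(
    @"\\MyRemoteMachine\c$\Temp\DeleteTest.bat",
    @"echo Y | rd /S c:\Temp\test");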
Related
I have successfully established a connection to a remote Linux server using the SSH.NET package with the following code (I am using a ShellStream because I have to use sudo su):
using System.Collections.Generic;
using System.Threading;
using Renci.SshNet;

using (var client = new SshClient(server, username, password))
{
    client.Connect();

    List<string> commands = new List<string>();
    commands.Add("sudo su - user");
    commands.Add("vi test.properties");

    ShellStream shellStream = client.CreateShellStream("xterm", 80, 24, 800, 600, 1024);

    // Execute commands under the root account
    foreach (string command in commands)
    {
        WriteStream(command, shellStream);
    }

    client.Disconnect();
}

private static void WriteStream(string cmd, ShellStream stream)
{
    // Append a marker so we can tell when the command has finished.
    stream.WriteLine(cmd + "; echo this-is-the-end");
    while (stream.Length == 0)
        Thread.Sleep(500);
}
I am trying to edit the test.properties file on the remote Linux server using a C# function that I created.
My code (C# in Visual Studio) uses System.IO.File.ReadAllText to modify the text file, but it does not recognize the path on the remote server. For example:
The text file in the Linux server is in this location: /home/user/test.properties, so I am using this in my code:
System.IO.File.ReadAllText("/home/user/test.properties")
I am getting the following error:
Could not find a part of the path 'C:\home\user\test.properties'
For some reason it tries to look in my local file system instead of the remote server.
Is there a different approach I should be taking?
Thanks in advance!
In general, to modify remote files, use SFTP. In SSH.NET that's what SftpClient is for.
Though as you seem to need to use elevated privileges (su) – an important factor that your question title fails to mention – it's way more difficult. The right solution is to avoid the need for su. See somewhat related:
Allowing automatic command execution as root on Linux using SSH.
Another option would be to try to execute the SFTP server under su. Though that would require modification of SSH.NET code. See related Java question:
Using JSch to SFTP when one must also switch user
If you want to keep your current shell approach with su, you are stuck with simulating shell commands. Note that connecting to an SSH server won't make other .NET classes (like File) magically able to work with remote files (even if SFTP were possible, let alone when it is not, due to the su requirement).
The easiest way to read a remote file using the shell is the cat command:
cat /home/user/test.properties
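With the su-elevated ShellStream from the question, that could look roughly like this (a sketch reusing the asker's end-marker trick; ShellStream.Expect blocks until the marker appears):

// Run cat and wait for a marker so we know the output is complete.
shellStream.WriteLine("cat /home/user/test.properties; echo this-is-the-end");
string output = shellStream.Expect("this-is-the-end");
// 'output' still contains the echoed command line and the marker;
// strip those to be left with just the file contents.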
Alright, so after finishing the task, this is what I did since asking the question here:
1. After your (@Martin Prikryl) response, I tried using a combination of SSH and WinSCP:
WinSCP to download the file.
.NET to modify the locally downloaded file.
WinSCP to upload the file (and delete it from the local folder afterwards).
SSH to move the file to its appropriate location on the server.
I discarded this solution because, while it worked pretty well in the lower environment, in production I had permission issues, so I couldn't even download the file, let alone deal with the security concern (it's a sensitive file).
2. My next solution used only SSH to simulate shell commands; as you previously mentioned, I was limited to that because I was stuck using sudo su.
I connected to the server with SSH and used the sed command to show only the lines containing specific words (instead of using cat to get the whole file).
I then used my .NET code to pull the values I needed for my GET operation.
For the POST operation I used sed again to replace lines.
I have a web service that uses SSH.NET to call a shell script on a Unix box.
If I run the script normally, it works fine and does its work correctly on the Informix DB.
Just some background:
I call a script that executes a .4gl (can't show this as it's business knowledge).
The 4gl gives the following error back in a log when I execute it with SSH.NET:
fglgo: error while loading shared libraries: libiffgisql.so: cannot open shared object file: No such file or directory
file_load ended: 2017-09-21 15:37:01
C# code that executes the script via SSH.NET:
var sshclients = new SshClient(p, 22, username, password);  // p holds the host name
sshclients.Connect();
sshclients.KeepAliveInterval = new TimeSpan(0, 0, 1);
sshclients.RunCommand("sh " + Script_dir);
I added the KeepAliveInterval to see if it helps.
My question is about the error I am getting from Unix/4gl.
Why is this happening, and how can I get the script to execute correctly?
The SshClient.RunCommand uses the SSH "exec" channel internally. By default, it (rightfully) does not allocate a pseudo terminal (PTY) for the session. As a consequence, a different set of startup scripts may be sourced, and/or different branches in the scripts may be taken, based on the absence or presence of the TERM environment variable. So the environment can differ from the interactive session you use with your SSH client.
So, in your case, the PATH is probably set differently, and consequently the shared object cannot be found.
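A quick way to confirm that from code is to dump the relevant variables over the same "exec" channel and compare with an interactive login (a sketch using the names from the question):

// If these differ from what an interactive shell prints, the startup scripts
// are the culprit (the shared-library error points at the library search
// path in particular).
var check = sshclients.RunCommand("echo $PATH; echo $LD_LIBRARY_PATH");
Console.WriteLine(check.Result);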
Another way to verify that this is the root cause is to disable the pseudo terminal allocation in your SSH client. For example, in PuTTY it's Connection > SSH > TTY > Don't allocate a pseudo terminal. Then go to Connection > SSH > Remote command, enter your 4gl command, check Session > Close window on exit > Never, and open the session. You should get the same "No such file or directory" error.
Ways to fix this, in preference order:
Fix the scripts not to rely on a specific environment.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems, this works (note that *nix uses a colon as the PATH separator):
sshclients.RunCommand("PATH=\"$PATH:/path/to/g4l\" && sh ...");
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel.
Though SSH.NET does not support this. You would have to modify its code to issue the SendPseudoTerminalRequest request in the RunCommand implementation (I didn't test this).
You can also try to use the "shell" channel via the CreateShell method. For that, SSH.NET does support pseudo terminal allocation.
Though using a pseudo terminal to automate command execution can bring nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
For similar issues, see:
Renci SSH.NET - no result string returned for opmnctl
Certain Unix commands fail with "... not found", when executed through Java using JSch
Commands executed using JSch behave differently than in SSH terminal (bypasses confirm prompt message of "yes"/"no")
JSch: Is there a way to expose user environment variables to "exec" channel?
I have seen similar questions asked by Informix-4gl developers as they transition to FourJs Genero and use its Web Services functionality. The question I'll put to them is: who owns the fglgo/fglrun process that the Genero Application Server has launched, where is it running from, and what is its environment? If needed, I'll illustrate with a simple program that does something like ...
MAIN
    RUN "env > /tmp/myname.txt"
    RUN "who >> /tmp/myname.txt"
    RUN "pwd >> /tmp/myname.txt"
END MAIN
... and compare the output with the same program run from the command line. As in the earlier answer, it is normally a matter of configuring things so that the environment is set correctly before the 4gl program is executed.
Background
I'm writing a web application so I can control an Ubuntu Server from a web site.
One idea I had was to run the 'screen' application from Mono and redirect my input and output from there.
Running 'screen' from Mono:
using System.Diagnostics;

// Start 'screen' with redirected stdin/stdout so it can be driven from code.
ProcessStartInfo info = new ProcessStartInfo("screen", "-m");
info.UseShellExecute = false;
info.RedirectStandardOutput = true;
info.RedirectStandardInput = true;

var p = new Process();
p.StartInfo = info;
p.Start();

var output = p.StandardOutput;
var input = p.StandardInput;
but running 'screen' with RedirectStandardInput fails with the error:
Must be connected to a terminal
I've tried many different arguments, and none seems to work with redirected standard input.
Other ideas for controlling a server would also be greatly appreciated.
I think this is the typical question in which you're asking how to implement your chosen solution instead of asking how to solve the underlying problem. I don't think you should do hacky things like making a web app that tunnels user actions to the server via a terminal.
I think you can bypass all that and, without writing a single line of code, take advantage of what the platform (Gtk+ in this case) already provides you:
You could run gnome-terminal on the server with the Broadway GDK backend. That way gnome-terminal still runs on the server, but instead of drawing to a local display it serves its UI over HTTP on the port you specify; you can then use any WebSockets-enabled browser to control it.
This is the easiest and least hacky solution compared to the others offered so far. If you are still excited about using Mono for web development you still can, and you could embed this access in an iframe or something.
(PS: If you don't want to depend on GTK being installed on the server, you could use WebSockets in the client part of your webpage to send events between browser and server, and the SSH.NET library to send the user's input directly through the wire.)
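For that last variant, the relay between the browser and the shell could be sketched with SSH.NET roughly like this (host, credentials, and prompt are assumptions; the WebSocket wiring is left out):

using System;
using Renci.SshNet;

// One SSH shell per browser session; the web layer forwards keystrokes in
// and pushes the captured output back out over its WebSocket.
using (var ssh = new SshClient("localhost", "webuser", "password"))
{
    ssh.Connect();
    var shell = ssh.CreateShellStream("xterm", 80, 24, 800, 600, 1024);
    shell.WriteLine("ls -la");           // would come from the browser
    Console.Write(shell.Expect("$ "));   // would go back to the browser
}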
screen will need a terminal of some sort. It's also massive overkill.
You may wish to investigate the pty program from the Advanced Programming in the Unix Environment book (pty/ in the sources) to provide a pseudo-terminal that you can drive programmatically. (You'd probably run the pty program as-provided and write your driver in Mono if you're so inclined.) (The pty program will make far more sense if studied in conjunction with the book.)
The benefit of using the pty program, or functionality similar to it, is that you'd properly handle programs such as passwd that open("/dev/tty") to prompt the user for a password. If you simply redirect the standard IO streams via the pipe() and dup2() system calls, you won't have a controlling terminal for the programs that need one. (That still covers a lot of useful programs, but not enough for a remote administration tool.)
There may be a Mono interface to the pty(7) system; if so, it may be more natural to use it than to use the C API, but the C API is what does the actual work, so it may be easier to just write directly in the native language.
A different approach to solve the same problem is shellinabox. Also interesting is this page from the anyterm website that compares different products that implement this kind of functionality.
Using shellinabox is very simple:
# ./shellinaboxd -s /:LOGIN
(this is the example given on their website) will start a webserver (in your case, on the Ubuntu server). When you point your browser to http://yourserver:4200 you'll see a login screen, just like you would when opening a session with ssh/putty/telnet/..., but in your browser.
You could provide the required remote access functionality to the server's shell by just including an iframe that points to that service in your application's webpage.
I'm working on a Mono application that will run on Linux, Mac, and Windows, and I need the ability for apps (on a single OS) to send simple string messages to each other.
Specifically, I want a Single Instance Application. If a second instance is attempted to be started, it will instead send a message to the single instance already running.
DBus is out, as I don't want to have that be an additional requirement.
Socket communication seems hard, as Windows seems not to allow permission to connect.
Memory Mapped Files seems not to be supported in Mono.
Named Pipes appears not to be supported in Mono.
IPC seems not to be supported on Mono.
So, is there a simple method to send string messages on a single machine to a server app that works on each OS, without requiring special permissions or additional dependencies?
On my Ubuntu (10.10, Mono version 2.6.7) I've tried using WCF for interprocess communication with BasicHttpBinding, NetTcpBinding, and NetNamedPipeBinding. The first two worked fine; for NetNamedPipeBinding I got the error:
Channel type IDuplexSessionChannel is not supported
when calling the ChannelFactory.CreateChannel() method.
I've also tried using Remoting (a legacy technology since WCF came out) with IpcChannel; the example from this MSDN page started and worked without problems on my machine.
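For reference, the IpcChannel setup has roughly this shape (a sketch from memory, not the exact MSDN listing):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Server side: expose a MarshalByRefObject over a named IPC port.
public class MessageSink : MarshalByRefObject
{
    public void Send(string message)
    {
        Console.WriteLine("Got: " + message);
    }
}

class Server
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new IpcChannel("myapp-ipc"), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(MessageSink), "sink", WellKnownObjectMode.Singleton);
        Console.ReadLine();  // keep the server process alive
    }
}

// Client side (separate process):
//   var sink = (MessageSink)Activator.GetObject(
//       typeof(MessageSink), "ipc://myapp-ipc/sink");
//   sink.Send("hello");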
I suppose you shouldn't have problems using WCF or Remoting on Windows either; not sure about Mac though, as I don't have one around to test. Let me know if you need any code examples.
Hope this helps, regards
I wrote about this on the mono-dev mailing list. Several general-purpose inter-process messaging systems were considered, including DBus, the System.Threading.Mutex class, WCF, Remoting, and Named Pipes... The conclusion was basically that Mono doesn't support the Mutex class (it works inter-thread, not inter-process) and that nothing platform-agnostic is available.
I have only been able to come up with three possible solutions. All have their drawbacks. Maybe there's a better solution available, or better solutions for specific purposes, or maybe there are cross-platform 3rd-party libraries you could include in your app (I don't know). But these are the best solutions I've been able to find so far:
Open or create a file in a known location with an exclusive lock (FileShare.None). Each application tries to open the file, do its work, and close the file. If it fails to open, Thread.Sleep(1) and try again. This is kind of ghetto, but it works cross-platform as an inter-process mutex.
Sockets. The first application listens on localhost on some high-numbered port. A second application attempting to listen on that port fails (because another process already has it), so it instead connects and sends a message to the first process, which is already listening; see the sketch after this list.
If you have access to a transactional database or a message-passing system (SQS, RabbitMQ, etc.), use it.
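Here is a minimal sketch of the socket approach from the list above (the port number is arbitrary):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class SingleInstance
{
    const int Port = 48923;  // any fixed high port

    static void Main(string[] args)
    {
        var listener = new TcpListener(IPAddress.Loopback, Port);
        try
        {
            listener.Start();  // we won the port: we are the first instance
        }
        catch (SocketException)
        {
            // Port already taken: hand our message to the running instance.
            using (var client = new TcpClient("127.0.0.1", Port))
            using (var writer = new StreamWriter(client.GetStream()))
                writer.Write(string.Join(" ", args));
            return;
        }

        while (true)
        {
            using (var peer = listener.AcceptTcpClient())
            using (var reader = new StreamReader(peer.GetStream()))
                Console.WriteLine("Second instance says: " + reader.ReadToEnd());
        }
    }
}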
Of course, you could detect which platform you're on, and then use whatever works on that platform.
I solved my problem with two techniques: a named mutex (so that the app can be run on the same machine by different users) and a watcher on a message file. The file is opened and written to for communication. Here is a basic solution, written in IronPython 2.6:
import os
import sys
import System
import System.IO
import System.Threading

# 'locked' receives the Mutex constructor's createdNew out-parameter:
# True when we are the first instance to create the named mutex.
(mutex, locked) = System.Threading.Mutex(True, "MyApp/%s" % System.Environment.UserName, None)
if locked:
    # First instance: watch the message file for writes from later instances.
    watcher = System.IO.FileSystemWatcher()
    watcher.Path = path_to_user_dir
    watcher.Filter = "messages"
    watcher.NotifyFilter = System.IO.NotifyFilters.LastWrite
    watcher.Changed += handleMessages  # user-defined callback
    watcher.EnableRaisingEvents = True
else:
    # Second instance: append the command to the message file and exit.
    messages = os.path.join(path_to_user_dir, "messages")
    fp = file(messages, "a")
    fp.write(command)
    fp.close()
    sys.exit(0)
Given your simple reason for needing IPC, I'd look for another solution.
This code is confirmed to work on Linux and Windows, and should work on Mac as well:
using System.Collections.Generic;

// Minimal DTO; it is deliberately named Process here, which is why
// System.Diagnostics.Process must be fully qualified below.
public class Process
{
    public int Pid { get; set; }
    public string Name { get; set; }
}

public static IList<Process> Processes()
{
    IList<Process> processes = new List<Process>();
    foreach (System.Diagnostics.Process process in System.Diagnostics.Process.GetProcesses())
    {
        Process p = new Process();
        p.Pid = process.Id;
        p.Name = process.ProcessName;
        processes.Add(p);
    }
    return processes;
}
Just iterate through the list and look for your own ProcessName.
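For instance, a sketch of the single-instance check on top of that helper:

using System.Linq;

// More than one process with our name means another instance is running.
string me = System.Diagnostics.Process.GetCurrentProcess().ProcessName;
bool alreadyRunning = Processes().Count(p => p.Name == me) > 1;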
To send a message to your application, just use MyProcess.StandardInput to write to the application's standard input. This only works assuming your application is a GUI application, though.
If you have problems with that, you could use a specialized "lock" file. With the FileSystemWatcher class you can detect when it changes; the second instance writes a message into the file, and the first instance notices the change and reads the file's contents to get the message.
I'm finally set up to work from home via VPN (using Shrew as a client), and I have only one annoyance left. We use some batch files to upload config files to a network drive. This works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file I get a ton of "invalid drive" errors because I'm not a domain user.
The solution I've found so far is to make a batch file with the following:
explorer \\MACHINE1
explorer \\MACHINE2
explorer \\MACHINE3
Then I manually log in to each machine using my domain credentials as the prompts pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires.
I'm looking into using the answer to this question to make a little C# app that'll take the login info once and login programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached?
Is there an app that does something like this automatically?
Unfortunately, domain authentication via the VPN isn't an option, according to our admin.
EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
EDIT: In case anyone else has the same problem, here's the solution I wound up using. It requires Ruby and the Highline gem.
require "highline/import"
domain = ask("Domain: ")
username = ask("Username: ")
password = ask("Password: ") { |q| q.echo = false }
machines = [
'\\MACHINE1\SHARE',
'\\MACHINE2\SHARE',
'\\MACHINE3\SHARE',
'\\MACHINE4\SHARE',
'\\MACHINE5\SHARE'
]
drives = ('f'..'z').to_a[-machines.length..-1]
drives.each{|d| system("net use #{d}: /delete >nul 2>nul"); }
machines.zip(drives).each{|machine, drive| system("net use #{drive}: #{machine} #{password} /user:#{domain}\\#{username} >nul 2>nul")}
It figures out how many mapped drives I need, then starts mapping them to the requested shares. In this case, it maps them from V: to Z: and assumes I don't have anything mapped to those drive letters already.
If you already have an Explorer window open to one of the shares, it may give an error, so before I ran the Ruby script, I ran:
net use * /delete
That cleared up the "multiple connections to a share not permitted" error, and allowed me to connect with no problems.
You could create a batch file that uses "NET USE" to connect to your shares. You'd need to use a drive letter for each share, but it'd be super simple to implement.
Your batch file would look like this:
net use h: \\MACHINE1 <password> /user:<domain>\<user>
net use i: \\MACHINE2 <password> /user:<domain>\<user>
net use j: \\MACHINE3 <password> /user:<domain>\<user>
UPDATE
Whether the connection remains or not depends on what you specified for the /persistent switch. If you specified yes, it will attempt to reconnect at your next logon; if you specified no, it won't. The worrying thing is that the documentation says it defaults to the value you used last!
If you specified no, the connection will remain until you next reboot. If you drop your VPN connection the drive will be unavailable, but if you reconnect to the VPN the drive should be available again, as long as you haven't removed it.
I don't know of a way to use it without mapping to a drive letter; the documentation would lead you to believe that it isn't possible.
I understand your problem: you're just trying to give Explorer the correct credentials so it stops nagging you with login boxes. Using mapped drives, though not perfect, will at least alleviate your pain.
To pass credentials to Explorer from the command line, take a look at the net use command.
Use API WNetAddConnection2() via P/Invoke.
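A sketch of that P/Invoke (the standard mpr.dll signature; passing no local name authenticates against the UNC path without mapping a drive letter, and Explorer then reuses the cached connection):

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

public static class NetworkShare
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private class NETRESOURCE
    {
        public int dwScope;
        public int dwType;
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;
        public string lpRemoteName;
        public string lpComment;
        public string lpProvider;
    }

    private const int RESOURCETYPE_DISK = 1;

    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    private static extern int WNetAddConnection2(
        NETRESOURCE lpNetResource, string lpPassword, string lpUserName, int dwFlags);

    public static void Connect(string uncPath, string user, string password)
    {
        // Leaving lpLocalName null means no drive letter is assigned.
        var res = new NETRESOURCE { dwType = RESOURCETYPE_DISK, lpRemoteName = uncPath };
        int rc = WNetAddConnection2(res, password, user, 0);
        if (rc != 0)
            throw new Win32Exception(rc);
    }
}

// Usage: NetworkShare.Connect(@"\\MACHINE1\SHARE", @"DOMAIN\user", "password");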