I have a script that starts and then kills/cleans up two jobs, launched via a batch file before I package it as an EXE. It works great in elevated ISE but immediately fails when run from the console or a command prompt.
The code I have put together is here: https://pastebin.com/FWaZD249
I tested it with:
PS1 to EXE, get the same results
Non-elevated ISE same results
Elevated console same results
Elevated CMD same results
Elevated ISE works (only after saving?)
It's really close to being done. Basically, it's a little script that checks port 5900 for established connections, updates a form with a list of connections, and sends a small notification when someone new has connected. It runs on our print server, which is screen-shared remotely via TightVNC, so operators don't get surprised when their mouse starts moving on them.
The cmd batch file looks like this:
powershell.exe -NoExit ". C:\Users\VS-Print-Server\Desktop\Userchecker.PS1"
As I mentioned in the comment: you initialize the jobs $job1 and $job2 with $FormLib and $LisLib, respectively, before those variables are set/initialized. Moving lines 1 through 30 of the script (everything before Write-Verbose -Verbose 'Before:') to the very end should make it work.
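To make the ordering concrete, here is a minimal sketch of the fix; the script-block bodies and the -InitializationScript usage are assumptions, since the real code lives in the pastebin:

# Define the library script blocks first; the bodies here are placeholders.
$FormLib = { <# build/update the WinForms connection list #> }
$LisLib  = { <# poll port 5900 for established connections #> }

Write-Verbose -Verbose 'Before:'

# Only now start the jobs, so the variables they are initialized with already exist.
$job1 = Start-Job -InitializationScript $FormLib -ScriptBlock { <# job 1 work #> }
$job2 = Start-Job -InitializationScript $LisLib  -ScriptBlock { <# job 2 work #> }

This also likely explains why only elevated ISE appeared to work: ISE keeps session state between runs, so $FormLib and $LisLib could still be defined from an earlier run, whereas a fresh console or cmd-launched session starts with them undefined.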
Related
I have developed a console application using MVS and compiled it so it can run on Linux.
It runs on Linux when I start it manually and everything works so far, but whenever I try to run it at boot using crontab it doesn't seem to work.
My application is an HttpListener. The Linux environment I run this on belongs to a robot. I haven't used Linux very much, so I found and used the commands below to get it to run at boot:
@reboot /home/rauman/Downloads/webserver
Then I tried setting a delay of 20 seconds:
@reboot sleep 20 && /home/rauman/Downloads/webserver
I normally run this application from a terminal like below, and it works fine:
./webserver
I'm accessing the robot using PuTTY.
After adding this to run at boot, I could see the PID of the application with:
pidof webserver
So I guess it runs, but doesn't have permission or something?
I have given execute permission for the file using:
chmod +x webserver
Any help is appreciated.
Edit: Solved
As Mr. R pointed out:
@reboot sleep 20 && cd /home/rauman/Downloads/ && ./webserver
I ran into the same problem a few days ago. My app runs flawlessly when I call it from its own working directory, but it failed when called from outside. It turned out the app reads a file by a relative path, so I had to change either the program or the working directory (pwd).
The easiest fix I came up with is changing the pwd:
@reboot sleep 20 && cd /home/rauman/Downloads/ && ./webserver
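The alternative (fixing the program instead of the pwd) is to resolve any relative file name against the executable's own directory. A minimal C# sketch, assuming the app reads some file by a relative name; data.txt here is a hypothetical placeholder:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // AppContext.BaseDirectory is where the executable lives,
        // independent of whatever working directory cron starts us in.
        // "data.txt" is a placeholder for the file the app actually reads.
        string path = Path.Combine(AppContext.BaseDirectory, "data.txt");
        Console.WriteLine(File.Exists(path) ? "found " + path : "missing " + path);
    }
}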
crontab runs processes as root, so any other file (such as configuration files) belonging to your user profile folder cannot be accessed by crontab.
Is this your scenario?
If yes, you should move them into /etc or into /root.
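If you want to confirm which user and environment the cron job actually gets, a throwaway crontab line like the following makes it visible (the output path is arbitrary):

@reboot id > /tmp/cron-env.txt 2>&1; env >> /tmp/cron-env.txt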
I am working on a text-editor app that can be activated via the command line. The argument provided is treated as a path to a file, i.e. pad launches the app, but pad "C:\Sample.txt" launches the app and then displays the Sample.txt file. This works well in Command Prompt, but in PowerShell the command only completes after the app is closed. Also, the execution is recorded as failed. Is there any way I can notify the shell that the command completed successfully from my app itself?
Example of the difference in command-line activation of Windows Terminal in both Command Prompt and PowerShell:
https://drive.google.com/file/d/15gBiCcio3feGwxappgyBm68lezK5I0t2/view?usp=sharing
Since you are comparing how a UWP app starts from cmd.exe vs. powershell.exe: remember that cmd.exe simply does not hold/wait for the process/exe it started, whereas PowerShell does, for UWP apps at least, by design.
As a workaround, if you want the PowerShell consolehost to immediately be available for more interactive steps, then you can use jobs to simulate what you are seeing when using cmd.exe. So, from a WT PowerShell session...
Start-Job -ScriptBlock {wt}
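If you do this often, a small wrapper keeps the prompt tidy. launch-pad is a hypothetical helper name, and this assumes pad is on PATH as an app execution alias, just like wt:

# Hypothetical wrapper: start the UWP editor in a background job and
# discard the job object so it doesn't clutter Get-Job output.
function launch-pad {
    param([string]$Path)
    Start-Job -ScriptBlock { param($p) pad $p } -ArgumentList $Path | Out-Null
}

launch-pad "C:\Sample.txt"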
I have a web service that uses SSH.NET to call a shell script on a Unix box.
If I run the script normally, it works fine, does its work correctly on the Informix DB.
Just some background:
I call a script that executes a .4gl (can't show this, as it's business knowledge).
The 4gl is giving the following error back in a log when I execute it with SSH.NET:
fglgo: error while loading shared libraries: libiffgisql.so: cannot open shared object file: No such file or directory
file_load ended: 2017-09-21 15:37:01
C# code to execute SSH.NET script
sshclients = new SshClient(p, 22, username, password);
sshclients.Connect();
sshclients.KeepAliveInterval = new TimeSpan(0, 0, 1);
sshclients.RunCommand("sh " + Script_dir);
I added the KeepAliveInterval to see if it helps.
My question is about the error I am getting from Unix/4gl.
Why is this happening, and how can I get the script to execute correctly?
SshClient.RunCommand uses the SSH "exec" channel internally. By default, it (rightfully) does not allocate a pseudo terminal (PTY) for the session. As a consequence, a different set of startup scripts is (or might be) sourced, and/or different branches in the scripts are taken, based on the absence/presence of the TERM environment variable. So the environment might differ from the interactive session you use with your SSH client.
So, in your case, the PATH is probably set differently; and consequently the shared object cannot be found.
To verify that this is the root cause, disable the pseudo terminal allocation in your SSH client. For example in PuTTY, it's Connection > SSH > TTY > Don't allocate a pseudo terminal. Then, go to Connection > SSH > Remote command and enter your 4gl command. Check Session > Close window on exit > Never and open the session. You should get the same "No such file or directory" error.
Ways to fix this, in preference order:
Fix the scripts not to rely on a specific environment.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems, this works:
sshclients.RunCommand("PATH=\"$PATH:/path/to/4gl\" && sh ...");
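Whichever variant you use, it also helps to keep the SshCommand object and inspect its stderr and exit status rather than only the stdout from Execute(), so the fglgo error becomes visible on the C# side. A minimal sketch with placeholder host, credentials, and paths:

using System;
using Renci.SshNet;

class RunRemoteScript
{
    static void Main()
    {
        // Placeholder connection details - substitute your own.
        using (var client = new SshClient("unix-host", 22, "username", "password"))
        {
            client.Connect();

            // Prepend the environment the script needs (as in the PATH fix above);
            // the directory and script path here are assumptions.
            var cmd = client.CreateCommand(
                "PATH=\"$PATH:/path/to/4gl\" && sh /path/to/script.sh");
            string stdout = cmd.Execute();

            Console.WriteLine("exit status: " + cmd.ExitStatus);
            Console.WriteLine("stdout: " + stdout);
            Console.WriteLine("stderr: " + cmd.Error); // fglgo errors typically land here

            client.Disconnect();
        }
    }
}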
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel.
SSH.NET does not support this, though. You would have to modify its code to issue the SendPseudoTerminalRequest request in the .RunCommand implementation (I didn't test this).
You can also try to use the "shell" channel via the .CreateShell method. For that, SSH.NET does support pseudo terminal allocation.
Though using a pseudo terminal to automate command execution can bring you nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
For similar issues, see:
Renci SSH.NET - no result string returned for opmnctl
Certain Unix commands fail with "... not found", when executed through Java using JSch
Commands executed using JSch behave differently than in an SSH terminal (bypasses confirm prompt message of "yes"/"no")
JSch: Is there a way to expose user environment variables to "exec" channel?
I have seen similar questions asked by Informix-4gl developers as they transition to FourJs Genero and use its Web Services functionality. The question I'll put to them is "who owns the fglgo/fglrun process that the Genero Application Server has launched, where is it running from, and what is its environment?". If needed, I'll illustrate with a simple program that does something like ...
MAIN
RUN "env > /tmp/myname.txt"
RUN "who >> /tmp/myname.txt"
RUN "pwd >> /tmp/myname.txt"
END MAIN
... and compare the output with when the program is run from the command line. It is normally a case, as in the earlier answer, of configuring things so that the environment is set correctly before the 4gl program is executed.
I am trying to run a few commands on a remote machine using Win32_Process, but I can't get it to work.
This is what I tried first:
var processClass = new ManagementClass(@"\\server.domain.co.uk\root\cimv2:Win32_Process");
var inParams = processClass.GetMethodParameters("Create");
inParams["CommandLine"] = #"echo. 2>C:\users\user.name\desktop\EmptyFile.txt";
inParams["CurrentDirectory"] = #"C:\windows\system32";
var outParams = processClass.InvokeMethod("Create", inParams, null);
But nothing happens. I also tried running this locally at root\cimv2:Win32_Process, but again there was no effect. I was able to get it working locally when calling notepad.exe instead of the command line, but this does not work on the remote computer.
How can I work out what is going wrong with this?
In outParams, which is a System.Management.ManagementBaseObject, I can see that ClassPath contains the value Evaluation timed out - could this be a clue as to why it isn't working?
After reading around this and running some tests, it seems that, similar to processes started from CScript, these processes cannot be made interactive:
You can use Win32_Process.Create to execute a script or application on a remote computer. However, for security reasons, the process cannot be interactive. When Win32_Process.Create is called on the local computer, the process can be interactive.
I am still a bit confused, because the code I ran above did not appear in the task manager, yet the article goes on to say:
The remote process has no user interface but it is listed in the Task Manager
Either way, this suits my needs as I am running a shell command - however it is worth noting that executing cmd commands in this way requires that they are formatted as:
cmd /c <command-goes-here>
In my case, the command needs to be run from a specific location, so I need to pass:
cmd /c cd <path-to-executable> && <command-goes-here>
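Putting this together with the original Create call, here is a sketch of the full invocation; the server name is the same placeholder as in the question, and the angle-bracket placeholders are left as-is:

using System;
using System.Management;

class RemoteCmd
{
    static void Main()
    {
        // Placeholder server - substitute your own.
        var processClass = new ManagementClass(
            @"\\server.domain.co.uk\root\cimv2:Win32_Process");

        var inParams = processClass.GetMethodParameters("Create");
        // <path-to-executable> and <command-goes-here> are the placeholders from above.
        inParams["CommandLine"] = @"cmd /c cd <path-to-executable> && <command-goes-here>";
        inParams["CurrentDirectory"] = @"C:\windows\system32";

        var outParams = processClass.InvokeMethod("Create", inParams, null);

        // ReturnValue 0 means the remote process was created; ProcessId is its PID.
        Console.WriteLine("ReturnValue: " + outParams["ReturnValue"]);
        Console.WriteLine("ProcessId:   " + outParams["ProcessId"]);
    }
}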
I am using the .NET library Renci.SshNet to connect to a remote Solaris machine (it's a VM on ESXi). It connects fine.
I use the following method to execute commands and get the standard output. This works fine on any Linux machine and with almost all commands on Solaris (except a few, which is where the issue is):
outstring = sshClient.RunCommand(command).Execute();
For example, when command = "cat /etc/release | grep Solaris", it works fine.
However, when command = "smbios -t SMB_TYPE_SYSTEM", it doesn't return anything. I tried redirecting it to a file. The file gets created but doesn't have anything in it.
I connect to the system using PuTTY and run the command - it runs perfectly and gives the desired output.
I am perplexed by this behavior. I am using a username with root privileges to log on, so privileges are ruled out (anyway, the same user gets the output in PuTTY).
I am wondering if there is any setting or restriction on Solaris (I am running version 11.3) which does not allow the smbios command to run like this over a remote connection, or is it something else? Any guidance will be extremely helpful. If any further info is required, please let me know.
Well, it turns out that it was to do with PATH settings. When you log in through PuTTY, the $PATH defined in your profile gets set; hence smbios runs from PuTTY.
But in a non-interactive SSH "exec" session (which is what RunCommand uses) that $PATH does not get applied, so it's not able to find smbios to run. If you give the full path to smbios, like /usr/sbin/smbios, it executes fine over Renci.SshNet.
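A minimal sketch of the working call, with placeholder connection details and the full smbios path:

using System;
using Renci.SshNet;

class SolarisSmbios
{
    static void Main()
    {
        // Placeholder host and credentials - substitute your own.
        using (var sshClient = new SshClient("solaris-host", 22, "root", "password"))
        {
            sshClient.Connect();

            // Absolute path, so the non-interactive "exec" session does not
            // depend on the interactive profile's $PATH.
            var cmd = sshClient.CreateCommand("/usr/sbin/smbios -t SMB_TYPE_SYSTEM");
            string outstring = cmd.Execute();

            Console.WriteLine(outstring);
            Console.WriteLine("stderr: " + cmd.Error); // empty when the command succeeds

            sshClient.Disconnect();
        }
    }
}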