I'm trying to run a PowerShell script from C# on .NET Core.
I have tried many different solutions by now, but none of them seem to work. I just want to execute a PowerShell script, setting the execution policy and scope so that it works.
But I always get an exception saying that the execution policy doesn't allow me to run the script that way.
On top of that, with the current configuration and code you find below, I don't get any feedback from the debugger after it reaches the point where .Invoke(); is executed. Waiting for a response and letting the software do its thing in the background always leads to a StackOverflowException.
commandParameters is a simple Dictionary<string, string>
Any thoughts on this?
Cheers.
var iss = InitialSessionState.CreateDefault2();
// Set its script-file execution policy.
iss.ExecutionPolicy = Microsoft.PowerShell.ExecutionPolicy.Unrestricted;
iss.ExecutionPolicy = (ExecutionPolicy)Microsoft.PowerShell.ExecutionPolicyScope.CurrentUser;
// Create a PowerShell instance with a runspace based on the
// initial session state.
PowerShell ps = PowerShell.Create(iss);
ps.AddCommand(path + "\\" + fileName);
ps.AddParameters(commandParameters);
var results = ps.InvokeAsync().GetAwaiter().GetResult();
Assigning to iss.ExecutionPolicy only ever controls the execution policy for the current process.
Do not assign a Microsoft.PowerShell.ExecutionPolicyScope value, as it is an unrelated enumeration that defines the execution-policy scope, relevant only to the Set-ExecutionPolicy and Get-ExecutionPolicy cmdlets. The numeric values of this unrelated enumeration just so happen to overlap with the appropriate [Microsoft.PowerShell.ExecutionPolicy] enumeration values, so that policy scope CurrentUser maps onto policy RemoteSigned (value 0x1).
In effect, your second iss.ExecutionPolicy = ... assignment overrides the first one and sets the process-scope execution policy to RemoteSigned.
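In code terms, the overlap described above means the cast compiles but silently changes the policy:
// ExecutionPolicyScope.CurrentUser and ExecutionPolicy.RemoteSigned both have value 1,
// so the cast yields RemoteSigned:
var policy = (ExecutionPolicy)Microsoft.PowerShell.ExecutionPolicyScope.CurrentUser;
Console.WriteLine(policy); // prints "RemoteSigned"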
The process-scope execution policy, i.e. for the current process only, is the only one you can set via an initial session state; see the next section for how to change policies for persistent scopes.
Therefore, iss.ExecutionPolicy = Microsoft.PowerShell.ExecutionPolicy.Unrestricted; alone is enough to invoke *.ps1 files in the session at hand - remove the second iss.ExecutionPolicy = ... assignment.
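Putting the advice together, the question's snippet reduces to something like this (same path, fileName, and commandParameters variables as in the question):
var iss = InitialSessionState.CreateDefault2();
// Set the script-file execution policy for the current process only.
iss.ExecutionPolicy = Microsoft.PowerShell.ExecutionPolicy.Unrestricted;
using (PowerShell ps = PowerShell.Create(iss))
{
    ps.AddCommand(path + "\\" + fileName)
      .AddParameters(commandParameters);
    var results = ps.Invoke();
}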
If you want to modify the execution policy persistently, you must use .AddCommand('Set-ExecutionPolicy') with the appropriate arguments and invoke that command. Caveat:
Changing the persistent current-user configuration also takes effect for regular PowerShell sessions (interactive ones / CLI calls) in the respective PowerShell edition.
By contrast, if you change the persistent machine configuration - which requires running with elevation (as admin):
With the Windows PowerShell SDK, the change also takes effect for regular PowerShell sessions.
With the PowerShell (Core) SDK, it only takes effect for the SDK project at hand.[1]
See this answer for sample code.
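For illustration, a minimal sketch of such a call (the CurrentUser scope and RemoteSigned policy are example choices, not requirements):
using (var ps = PowerShell.Create())
{
    // Persistently set the current user's execution policy;
    // -Force suppresses the confirmation prompt.
    ps.AddCommand("Set-ExecutionPolicy")
      .AddParameter("Scope", "CurrentUser")
      .AddParameter("ExecutionPolicy", "RemoteSigned")
      .AddParameter("Force", true);
    ps.Invoke();
}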
Caveat:
If the current user's / machine's execution policy is controlled via GPOs (Group Policy Objects), you fundamentally cannot override it programmatically (except via GPO changes).
To check if a GPO-based policy is in effect:
Run Get-ExecutionPolicy -List to list policies defined for each available scope, in descending order of precedence.
If either the MachinePolicy or the UserPolicy scope has a value other than Undefined, then a GPO policy is in effect (run Get-ExecutionPolicy without arguments to see the effective policy for the current session).
Limited workaround for when a GPO-based policy prevents script-file execution:
Assuming all of the following (script file refers to *.ps1 files):
Your script file calls no other script files.
It (directly or indirectly) loads no modules that on import happen to call script files.
Your script file doesn't rely on knowing its own location (directory) or name on disk.
You can load your script file's content into a string and pass that string to the SDK's .AddScript() method, as the following simplified example shows:
using (var ps = PowerShell.Create())
{
    // Execute the file's content as an in-memory script block.
    var results =
        ps.AddScript(File.ReadAllText(path + "\\" + fileName))
          .AddParameters(commandParameters)
          .Invoke();
}
[1] Windows PowerShell stores the execution policies in the registry, which both regular sessions and the SDK consult. By contrast, PowerShell (Core) stores them in powershell.config.json files, and in the case of the machine policy alongside the PowerShell executable (DLL). Since SDK projects have their own executable, its attendant JSON file is not seen by the executable of a regular PowerShell (Core) installation.
Related
I am using System.Management.Automation.PowerShell for a project, and I'm trying to get a list of host names on the network. I found a way to get host names out of IPs, but when I try to get all active IPs on the network, the Get-NetNeighbor command gives absolutely no return.
PowerShell ps = PowerShell.Create();
ps.AddScript("Get-NetNeighbor -AddressFamily IPv4 -IPAddress 192.168.0.*");
var result = ps.Invoke();
When this gets run, I get an empty Collection of PSObjects from the Invoke() method.
I also tried parsing the output through Format-List by doing Get-NetNeighbor -AddressFamily IPv4 -IPAddress 192.168.0.* | Format-List -Property IPAddress, still to no avail.
The only solution I found is using arp -a, but I would prefer to avoid that command, as parsing its results is a bit messy.
Edit
It errors out because it cannot load the module: the execution policy doesn't allow it, since the session runs as Restricted even though the system is set to RemoteSigned.
How do I change the ExecutionPolicy for the code?
It appears that when C# runs PowerShell through System.Management.Automation.PowerShell, it uses the process execution policy. The easiest way to fix that was to add the following bit of code before the Get-NetNeighbor command:
ps.AddScript("Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned");
This will set the execution policy for that instance of PowerShell to RemoteSigned, i.e. all local scripts may run, while scripts from the internet must be signed by a trusted publisher.
The command doesn't require administrator privileges, because it sets the ExecutionPolicy only for the Process scope.
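Putting both pieces together, a minimal sketch (the two command strings are the ones shown above, run in a single session):
using (PowerShell ps = PowerShell.Create())
{
    // Relax the policy for this process only, then run the query in the same session.
    ps.AddScript(
        "Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned; " +
        "Get-NetNeighbor -AddressFamily IPv4 -IPAddress 192.168.0.*");
    var result = ps.Invoke();
}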
I am having trouble with a script I am writing that uses System.Net.WebClient (called from PowerShell, but I guess the problem should occur with anything that uses the same cache as System.Net.WebRequest).
For context (as there may be a better solution than what I found):
I made an extension for IE (yes, some clients still use it) in C# (yes, it's not recommended but I had no choice)
this extension needs to run with EPM activated (so low-privileged).
it needs a configuration file that is available on a server accessed by HTTPS.
the configuration needs to be available when IE is launched so we have to cache it (also, each tab has its own instance of the extension)
that cached configuration has to stay in a privileged folder (the extension injects code into some of the pages according to that configuration, so you don't want the user or any process to have write access to it)
To solve the problem of caching the configuration, I wrote a PowerShell script that is launched through the Task Scheduler. The script uses System.Net.WebClient to download the file, and I set it to respect the cache of the file:
$webclient = New-Object System.Net.WebClient
$cacheLevel = [System.Net.Cache.RequestCacheLevel]::CacheIfAvailable
$webclient.CachePolicy = New-Object System.Net.Cache.RequestCachePolicy($cacheLevel)
When I launch the script using "Run as administrator", the cache is respected (provided the server is well configured).
When I launch the script from the Task Scheduler (as user NT AUTHORITY\SYSTEM, since I need privileges to save the file in the extension installation dir), the cache is not respected and the file is downloaded every single time.
Any idea how to solve this issue? I need caching so that I can poll the file without having to do a full download every time (the file is small, but the number of users is high :D).
Maybe it would be possible to use the date of the file that was previously downloaded?
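For what it's worth, a minimal sketch of that last idea in C# terms (the same System.Net types are usable from PowerShell; the local path and URL below are illustrative placeholders, not from the question): send a conditional request based on the previous download's timestamp, and treat a 304 response as "cache still valid".
using System;
using System.IO;
using System.Net;

var localPath = @"C:\Program Files\MyExtension\config.json";            // illustrative
var request = (HttpWebRequest)WebRequest.Create("https://example.com/config.json"); // illustrative
if (File.Exists(localPath))
{
    // Ask the server to send the file only if it changed since our copy.
    request.IfModifiedSince = File.GetLastWriteTimeUtc(localPath);
}
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var file = File.Create(localPath))
    {
        response.GetResponseStream().CopyTo(file); // server sent a newer version
        File.SetLastWriteTimeUtc(localPath, response.LastModified.ToUniversalTime());
    }
}
catch (WebException ex) when (
    (ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.NotModified)
{
    // 304 Not Modified: keep the existing file, no download happened.
}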
I have a web service that uses SSH.NET to call a shell script on a Unix box.
If I run the script normally, it works fine and does its work correctly on the Informix DB.
Just some background:
I call a script that executes a .4gl (can't show this as it's business knowledge).
The 4gl gives the following error back in a log when I execute it with SSH.NET:
fglgo: error while loading shared libraries: libiffgisql.so: cannot open shared object file: No such file or directory
file_load ended: 2017-09-21 15:37:01
C# code to execute the script via SSH.NET:
sshclients = new SshClient(p, 22, username, password);
sshclients.Connect();
sshclients.KeepAliveInterval = new TimeSpan(0, 0, 1);
sshclients.RunCommand("sh " + Script_dir);
I added the KeepAliveInterval to see if it helps.
My question is about the error I am getting from Unix/4gl.
Why is this happening, and how can I get the script to execute correctly?
The SshClient.RunCommand uses the SSH "exec" channel internally. By default, it (rightfully) does not allocate a pseudo terminal (PTY) for the session. As a consequence, a different set of startup scripts is (or might be) sourced, and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable. So the environment might differ from the interactive session you use with your SSH client.
So, in your case, the PATH is probably set differently; and consequently the shared object cannot be found.
To verify that this is the root cause, disable pseudo terminal allocation in your SSH client. For example, in PuTTY it's Connection > SSH > TTY > Don't allocate a pseudo terminal. Then go to Connection > SSH > Remote command and enter your 4gl command. Check Session > Close window on exit > Never and open the session. You should get the same "No such file or directory" error.
Ways to fix this, in preference order:
Fix the scripts not to rely on a specific environment.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
sshclients.RunCommand("PATH=\"$PATH;/path/to/g4l\" && sh ...");
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel.
SSH.NET, though, does not support this. You would have to modify its code to issue the SendPseudoTerminalRequest request in the .RunCommand implementation (I didn't test this).
You can also try to use the "shell" channel via the .CreateShell method. For it, SSH.NET does support pseudo terminal allocation.
Though using a pseudo terminal to automate command execution can bring nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
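For instance, a rough sketch with SSH.NET's related CreateShellStream method, which does request a pseudo terminal (the terminal name and size values are arbitrary illustration choices):
using (var shell = sshclients.CreateShellStream("xterm", 80, 24, 800, 600, 1024))
{
    // Runs in a PTY session, so interactive startup scripts are sourced;
    // the output will contain prompts and echoed input that must be filtered out.
    shell.WriteLine("sh " + Script_dir);
}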
For similar issues, see:
Renci SSH.NET - no result string returned for opmnctl
Certain Unix commands fail with "... not found", when executed through Java using JSch
Commands executed using JSch behaves differently than in SSH terminal (bypasses confirm prompt message of "yes"/"no")
JSch: Is there a way to expose user environment variables to "exec" channel?
I have seen similar questions asked by Informix-4gl developers as they transition to FourJs Genero and use its Web Services functionality. The question I'll put to them is: "Who owns the fglgo/fglrun process that the Genero Application Server has launched, where is it running from, and what is its environment?" If needed, I'll illustrate with a simple program that does something like ...
MAIN
    RUN "env > /tmp/myname.txt"
    RUN "who >> /tmp/myname.txt"
    RUN "pwd >> /tmp/myname.txt"
END MAIN
... and then compare with when the program is run from the command line. It is normally, as in the earlier answer, a case of configuring things so that the environment is set correctly before the 4gl program is executed.
I'm developing an open source .NET assembly (WinSCP .NET assembly) that spawns a native (C++) application and communicates with it via events and file mapping objects.
The assembly spawns the application using the Process class, with no special settings. The assembly creates a few events (using EventWaitHandle) and a file mapping (using the P/Invoked CreateFileMapping), and the application "opens" these using OpenEvent and OpenFileMapping.
It works fine in most cases. But now I have a user who uses the assembly from an ASPX application on Windows Server 2008 R2 64-bit.
In his case, both OpenEvent and OpenFileMapping return NULL and GetLastError returns ERROR_ACCESS_DENIED.
I have tried to improve the assembly code by explicitly granting the current user the necessary permissions to the event objects, and the application code to require only the really needed access rights (instead of the original EVENT_ALL_ACCESS), as per the Microsoft Docs example. It didn't help, so I did not even bother to try the same for the file mapping object.
The C# code that creates the event is:
EventWaitHandleSecurity security = new EventWaitHandleSecurity();
string user = Environment.UserDomainName + "\\" + Environment.UserName;

EventWaitHandleAccessRule rule;
rule =
    new EventWaitHandleAccessRule(
        user, EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify,
        AccessControlType.Allow);
security.AddAccessRule(rule);

rule =
    new EventWaitHandleAccessRule(
        user, EventWaitHandleRights.ChangePermissions, AccessControlType.Deny);
security.AddAccessRule(rule);

new EventWaitHandle(
    false, EventResetMode.AutoReset, name, out createdNew, security);
The C++ code that "opens" the events is:
OpenEvent(EVENT_MODIFY_STATE, false, name);
(For other events the access level is SYNCHRONIZE, depending on needs).
I have also tried to add Global\ prefix to the object names. As expected this didn't solve the problem.
Does anyone have any idea what causes the "access denied" error in OpenEvent (or CreateFileMapping)?
My guess is that the event is created by either the anonymous user or the logged-in user, depending on how the website is set up, but the sub-process is being launched as the base process's user. This can be checked with Process Monitor by looking at the ACL of the event handle to see who the creator is, and then looking at the sub-process to see who it is running as.
If this is the case, then you can update the ACL on the event to include the base process's account. In addition, you may still need the Global\ prefix to make sure the event can be used across user boundaries.
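Building on the question's own ACL code, a sketch of that adjustment ("IIS APPPOOL\DefaultAppPool" is an illustrative account name; use whatever identity the base process actually runs under):
var security = new EventWaitHandleSecurity();
// Grant the current (impersonated) user, as in the original code ...
security.AddAccessRule(new EventWaitHandleAccessRule(
    Environment.UserDomainName + "\\" + Environment.UserName,
    EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify,
    AccessControlType.Allow));
// ... and also the account the base process runs as (example name only):
security.AddAccessRule(new EventWaitHandleAccessRule(
    @"IIS APPPOOL\DefaultAppPool",
    EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify,
    AccessControlType.Allow));
// Global\ prefix so the event is visible across user/session boundaries.
new EventWaitHandle(
    false, EventResetMode.AutoReset, @"Global\" + name, out createdNew, security);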
I have written some code to execute PowerShell via C# using RunspaceFactory.
I am loading the default PowerShell profile like this:
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();
string scriptText = @". .\" + scriptFileName + "; " + command;
Pipeline pipeline = runspace.CreatePipeline(scriptText);
command is a function from the profile that I know works.
All of this PowerShell stuff is wrapped in Impersonator.
For the avoidance of doubt, $profile = C:\Users\Administrator\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
This is a web application running under IIS 7.5
If my IIS app runs under the 'administrator' account, it works. Under any other account it throws the error:
"The term '.\Microsoft.PowerShell_profile.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."
As I am impersonating the 'administrator' account I assumed the location would be correct.
Some logging with an invoked 'get-location' reports the directory is what it should be.
Out of bloody-mindedness I tried to force it with:
System.Environment.CurrentDirectory = dir;
... and also an attempt at an invoked 'set-location'. These work, but if I invoke 'get-location' from a runspace it reports the same directory as before (the correct one).
I thought that, perhaps, there might be a problem with the impersonation, so I wrote some tests that touch the file system in ways the app pool shouldn't be able to. These work.
I also checked this:
string contextUserName = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
This reports the correct user both when the code is 'wrapped' in Impersonator and when it is executed via the app pool identity. Then I got desperate and tried to invoke:
@"cmd /c dir"
... (and also Get-ChildItem). Both commands return:
{}
...when running under the app pool identity (regardless of impersonation), but a full and accurate directory listing of the correct directory when the app pool is running as 'administrator'.
I am sure I am missing something stupid and fundamental here, if anyone could give me some guidance on where I've made a mistake in my thinking (and code), that would be great.
This is kind of a "well known" issue with using PowerShell in ASP.NET when in an impersonated context: it doesn't work the way people think it should work.
The reason for that is because PowerShell, behind the covers, spins up another thread to actually do all of its work. That new thread inside PowerShell does not inherit the impersonated context.
The fix isn't exactly pretty. This MSDN blog post recommends setting ASP.NET to always flow the impersonation policy (so that the thread behind the scenes gets the identity):
<configuration>
  <runtime>
    <legacyImpersonationPolicy enabled="false"/>
    <alwaysFlowImpersonationPolicy enabled="true"/>
  </runtime>
</configuration>
Another, even uglier (though less hacky) approach is to use WinRM. You can use a loopback PowerShell session (connect to localhost) and let WinRM handle the impersonation. This requires version 3.0.0.0 of System.Management.Automation.
var password = "HelloWorld";
var ss = new SecureString();
foreach (var passChar in password)
{
    ss.AppendChar(passChar);
}
var psCredential = new PSCredential("username", ss);
var connectionInfo = new WSManConnectionInfo(
    new Uri("http://localhost:5985/wsman"),
    "http://schemas.microsoft.com/powershell/Microsoft.PowerShell",
    psCredential);
connectionInfo.EnableNetworkAccess = true;
using (var runspace = RunspaceFactory.CreateRunspace(connectionInfo))
{
    runspace.Open();
    using (var powershell = PowerShell.Create())
    {
        // Bind the PowerShell instance to the loopback runspace and do the
        // impersonation-sensitive work there ("whoami" is just a placeholder).
        powershell.Runspace = runspace;
        powershell.AddScript("whoami");
        var results = powershell.Invoke();
    }
}
This is also a tacky solution because it requires WinRM's Windows service to be running and WinRM to be configured. WinRM isn't exactly something "that just works", but it gets the job done if the first option isn't suitable.
The URL used in the WSManConnectionInfo is a localhost WinRM endpoint (by default it listens on port 5985 in version 3), and credentials are specified for the connection.