Unable to pipe infinite output of console app using PowerShell - c#

During development of a console app I noticed that I'm unable to pipe its output into itself in PowerShell.
I created a small repro (source below) that works like this:
PS> .\but-why.exe print # continuously prints a random number every 500 ms
1746112985
1700785785
331650882
...
PS> .\but-why.exe read # echoes stdin
foo # this was typed
read 'foo'
bar # this too
read 'bar'
PS> "foo","bar" | .\but-why.exe read
read 'foo'
read 'bar'
But when I try to feed the output of print into read, nothing happens:
PS> .\but-why.exe print | .\but-why.exe read
The same happens when I redirect all output to the success stream:
PS> .\but-why.exe print *>&1 | .\but-why.exe read
However, when I use CMD everything works as expected:
CMD> but-why.exe print | but-why.exe read
read '317394436'
read '1828759797'
read '767777814'
...
Through debugging I found that the second instance .\but-why.exe read never seems to be started.
Maybe it's my rather old PS version?
PS> $host.Version
Major Minor Build Revision
----- ----- ----- --------
5 1 19041 610
Source of the console app (net5.0):
using System;
using System.Threading;
switch (args[0]) {
case "print": Print(); break;
case "read": Read(); break;
}
void Print() {
var rng = new Random();
while (true) {
Console.WriteLine(rng.Next());
Thread.Sleep(500);
}
}
void Read() {
string? text;
while ((text = Console.ReadLine()) != null) {
Console.WriteLine($"read '{text}'");
}
}

You're seeing a design limitation in Windows PowerShell that has since been fixed in the cross-platform PowerShell [Core] 7+ edition:
When Windows PowerShell pipes data to an external program (which is then invariably text), it unexpectedly does not exhibit the usual streaming behavior.
That is, instead of passing the originating command's lines (stringified objects) on as they're being produced, Windows PowerShell tries to collect them all in memory first, before piping them to the external program.
In your case, because the first program never stops producing output, the Windows PowerShell engine never stops waiting for all output to be collected and therefore effectively hangs (until it eventually runs out of memory) - the target program is never even launched, because that only happens after collecting the output has finished.
Workarounds:
If feasible, switch to PowerShell [Core] 7+, where this limitation has been removed.
In Windows PowerShell, invoke your pipeline via cmd.exe, which does exhibit the expected streaming behavior, as you've observed.
# Workaround via cmd.exe
cmd /c '.\but-why.exe print | .\but-why.exe read'

Related

GhostscriptRasterizer Objects Returns 0 as PageCount value

txtStatus.Text = "";
if (!File.Exists(txtOpenLocation.Text))
{
txtStatus.Text = "File Not Found";
return;
}
txtStatus.Text = "File Found";
const string DLL_32BITS = "gsdll32.dll";
const string DLL_64BITS = "gsdll64.dll";
//select DLL based on arch
string NomeGhostscriptDLL;
if (Environment.Is64BitProcess)
{
NomeGhostscriptDLL = DLL_64BITS;
}
else
{
NomeGhostscriptDLL = DLL_32BITS;
}
GhostscriptVersionInfo gvi = new GhostscriptVersionInfo(NomeGhostscriptDLL);
var rasterizer = new GhostscriptRasterizer();
try
{
rasterizer.Open(txtOpenLocation.Text, gvi, true);
Console.WriteLine(rasterizer.PageCount); //This line always prints 0
} catch(Exception er)
{
txtStatus.AppendText("\r\nUnable to Load the File: "+ er.ToString());
return;
}
I have googled it, but found no solution and no helpful documentation about the rasterizer.Open() function.
The Console.WriteLine(rasterizer.PageCount); always prints 0, regardless of which pdf file I load.
txtStatus is a multiline TextBox in the UI. txtOpenLocation is another TextBox in the UI, non-editable by the user, and its value is set by an OpenFileDialog.
I am using Visual Studio 2019 Community Edition.
Another observation I feel is worth mentioning: for every pdf file on my machine, when I try to open it with either Adobe Acrobat DC or Foxit Reader, the reader first hangs (becomes 'not responsive') for about 10 to 15 seconds and then opens the file.
I had the same problem yesterday. I downloaded version 9.26 from here https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs926/gs926aw32.exe, and it works!
I think this is a bug in the Ghostscript 9.27 release.
I suspect this isn't a bug at all (I certainly do not believe it's a Ghostscript bug), but rather a change in behaviour. Due to reported security vulnerabilities, the Ghostscript developers have been removing access to many non-standard PostScript extensions (unique to Ghostscript). Most recently, access to the dictionary for processing PDF files has been secured.
My suspicion is that Ghostscript.NET (which is not maintained by the Ghostscript developers) is using one or more non-standard extensions to do the work of retrieving the count of pages. Without knowing what exactly is being used currently I can't be sure of course.
If the developer of Ghostscript.NET would like to contact us and confirm this is the problem then we can discuss the currently supported method for retrieving the count of pages in a PDF file.
It won't help at all to send me a project using Ghostscript.NET, since I don't know anything about it. I'm also not a C# or .NET developer, so the code would likely be meaningless to me.
Ghostscript returns considerable information on the back channel, stdout and/or stderr. These can be redirected to an application-defined data sink. I imagine that Ghostscript.NET will give you some means to retrieve these, and if you plan to do any real development involving Ghostscript then I would very strongly recommend that you find out how to get this information.
When you say 'no error is thrown from Ghostscript' I think you may be confusing Ghostscript and Ghostscript.NET. Without seeing the back channel from Ghostscript I don't see how you can tell if Ghostscript is generating an error.
NB if you plan to distribute your application you must abide by the terms of the AGPL version 3 (which is the license applying to Ghostscript), and that includes shipping a copy of the license, and some means for informing users where they can get the original.
As with the OP and the primary answer to this question, I too encountered this exact issue just yesterday.
I just want to add that for me the suggested version of ghostscript (9.26) wasn't working. It complained that I should be using a 64 bit version.
For those who need that, it's here: https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs926/gs926aw64.exe
I had to just guess at the URL. I'm amazed at how difficult it has been to find older versions.
This issue has been fixed in the latest release v.1.2.2 of GhostScript.NET
The fix was to stop using pdfdict and GS_PDF_ProcSet if the version was over 9.26 as these two functions were made private by the Ghostscript team for security reasons.
I am not very familiar with GhostScript or PostScript; however, I have traced the issue down within the GhostScript.NET code, which uses the gsapi to execute functions. The function that is being executed and failing on gs is in the file GhostscriptViewerPdfFormatHandler.cs in the GhostScript.NET project.
Upon further testing, using both gs 9.26 (as recommended by Oswaldo Cotes Solano) and gs 9.52 with a test script and comparing the results, I have found that GS_PDF_ProcSet causes an 'Unrecoverable error, exit code 1' on gs 9.52.
This leads to failure when using the gs 9.52 API; however, this is by design as of gs 9.27, to add security. While -dNOSAFER is not recommended for production-ready applications, it will get us by.
An example of the intended execution and result that works in gs9.26 should be similar to:
gswin32c.exe -q -dNOSAFER -sPDFname=c:/pdfs/test.pdf c:/pdfs/pdfpagecount.ps
Executing:
/GSNETViewer_PDFpage {
(%GSNET_VIEWER_PDF_PAGE: ) print dup == flush
pdfgetpage /Page exch store
Page /MediaBox pget
{ (%GSNET_VIEWER_PDF_MEDIA: ) print == flush }
if
Page /CropBox pget
{ (%GSNET_VIEWER_PDF_CROP: ) print == flush }
if
Page /Rotate pget not { 0 } if
(%GSNET_VIEWER_PDF_ROTATE: ) print == flush
} def
Executing:
/Page null def
/Page# 0 def
/PDFSave null def
/DSCPageCount 0 def
Executing:
GS_PDF_ProcSet begin
pdfdict begin
Executing: (C:/pdfs/Output.pdf) (r) file runpdfbegin
Executing: /FirstPage where { pop FirstPage } { 1 } ifelse
Executing: /LastPage where { pop LastPage } { pdfpagecount } ifelse
Executing: flush (%GSNET_VIEWER_PDF_PAGES: ) print exch =only ( ) print =only (
) print flush
%GSNET_VIEWER_PDF_PAGES: 1 1
Executing: process_trailer_attrs
Executing: 1 GSNETViewer_PDFpage
%GSNET_VIEWER_PDF_PAGE: 1
%GSNET_VIEWER_PDF_MEDIA: [0.0 0.0 612.0 792.0]
%GSNET_VIEWER_PDF_CROP: [0.0 0.0 612.0 792.0]
%GSNET_VIEWER_PDF_ROTATE: 0
Executing: Page pdfshowpage_init pdfshowpage_finish
Loading NimbusSans-Regular font from %rom%Resource/Font/NimbusSans-Regular... 4124032 2548352 5183568 3818848 3 done.
showpage, press <return> to continue
Running 9.52 with -dNOSAFER, I also added the -dNOSAFER argument to the CLI to avoid the file access error, and changed the GhostScript.NET source to allow the same functionality. The -dNOSAFER option is not the ideal choice and may have vulnerabilities, but to test without diving in further, I've used this method.
C:\Program Files\gs\-\bin>gswin64c.exe -q -dNOSAFER -sPDFname=test.pdf c:/pdfs/pdfpagecount.ps
Error: /undefined in GS_PDF_ProcSet
Operand stack:
Execution stack:
%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1990 1 3 %oparray_pop 1989 1 3 %oparray_pop 1977 1 3 %oparray_pop 1833 1 3 %oparray_pop --nostringval-- %errorexec_pop .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval--
Dictionary stack:
--dict:738/1123(ro)(G)-- --dict:0/20(G)-- --dict:84/200(L)--
Current allocation mode is local
Current file position is 992
GPL Ghostscript 9.52: Unrecoverable error, exit code 1
Ultimately, making a minor change in 3 locations of the source has resulted in a working solution with 9.52. I'll do a pull request with our changes and update the community when it has been issued; otherwise, you can pull directly from our fork.
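If you want to see that back channel from C# while testing, rather than from the command prompt, a minimal sketch that simply runs the console interpreter with redirected stdout/stderr looks like this (the gswin64c.exe path and the pdfpagecount.ps location are placeholders from the commands above, not anything GhostScript.NET does for you):
using System;
using System.Diagnostics;

// Sketch only: paths and arguments are assumptions taken from the commands shown earlier.
// This just captures Ghostscript's back channel (stdout/stderr) so any errors are visible.
var psi = new ProcessStartInfo
{
    FileName = @"C:\Program Files\gs\gs9.52\bin\gswin64c.exe",
    Arguments = "-q -dNOSAFER -sPDFname=c:/pdfs/test.pdf c:/pdfs/pdfpagecount.ps",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};
using (var gs = Process.Start(psi))
{
    var stderrTask = gs.StandardError.ReadToEndAsync(); // read stderr concurrently to avoid a pipe deadlock
    string stdout = gs.StandardOutput.ReadToEnd();
    string stderr = stderrTask.Result;
    gs.WaitForExit();
    Console.WriteLine($"exit code: {gs.ExitCode}");
    Console.WriteLine(stdout);
    Console.WriteLine(stderr);
}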
I've had the same problem. I was using C# (.NET) Ghostscript.NET (version 1.2.3). The problem was the PDF file name: if it contained a parenthesis, ( or ), the problem occurred.
I had to rename the PDF file to get rid of those characters.
using System.IO;
using System.Text.RegularExpressions;
using Ghostscript.NET.Rasterizer;

var strFilePath = @"C:\PdfFile(.pdf";

using (var rasterizer = new GhostscriptRasterizer())
{
    rasterizer.Open(strFilePath);
    var strPageCount = rasterizer.PageCount; // returns 0 because of the '(' in the file name
}

// Strip the offending characters from the file name (not the full path) and rename the file.
var pattern = "[^A-Za-z0-9 .-]+";
var safeName = Regex.Replace(Path.GetFileName(strFilePath), pattern, "");
var safePath = Path.Combine(Path.GetDirectoryName(strFilePath), safeName);
File.Move(strFilePath, safePath);

using (var rasterizer = new GhostscriptRasterizer())
{
    rasterizer.Open(safePath);
    var strPageCount1 = rasterizer.PageCount; // returns the number of pages
}

Retrieving shell script exit code (return value)

Trying to validate a Logstash config file. When running the following line from a Windows command line:
C:> C:\Logstash\bin\logstash -t -f C:\Logstash\config\my.config
I can then check the result using
echo %errorlevel%
which returns 1 in case of a syntax error. Now I want to do this programmatically in C#, so:
using System.Diagnostics;
var logstashProcess = Process.Start(@"C:\Logstash\bin\logstash", @"-t -f C:\Logstash\config\my.config");
logstashProcess.WaitForExit();
return logstashProcess.ExitCode == 0;
The problem is that it always returns true (exit code is zero) - even when the config file is totally messed up.
My guess: since C:\Logstash\bin\logstash is a shell script, the zero I get is the shell itself running successfully - not the Logstash process (which is executed from within that script using jruby). Any idea on how to get the real return value? Will a batch file work? (I prefer not to add an extra script to the party at this point)
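One thing I'm considering (an untested sketch, on the assumption that cmd /c propagates the batch file's errorlevel as its own exit code) is to call the .bat wrapper explicitly through cmd.exe:
using System.Diagnostics;

// Sketch: invoke the .bat explicitly via cmd.exe, so ExitCode reflects cmd's %errorlevel%
// rather than whatever launches the extension-less "logstash" command.
// Whether this surfaces the real result still depends on logstash.bat itself
// propagating jruby's exit code.
var psi = new ProcessStartInfo
{
    FileName = "cmd.exe",
    Arguments = @"/c C:\Logstash\bin\logstash.bat -t -f C:\Logstash\config\my.config",
    UseShellExecute = false
};
var logstashProcess = Process.Start(psi);
logstashProcess.WaitForExit();
return logstashProcess.ExitCode == 0;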

Displaying C# console app output in real-time on a website using PHP [duplicate]

I am trying to run a process on a web page that will return its output in real time. For example, if I run the 'ping' process, it should update my page every time it returns a new line (right now, when I use exec(command, output), I am forced to use the -c option and wait until the process finishes to see the output on my web page). Is it possible to do this in PHP?
I am also wondering what the correct way is to kill this kind of process when someone leaves the page. In the case of the 'ping' process, I can still see it running in the system monitor (which makes sense).
This worked for me:
$cmd = "ping 127.0.0.1";
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, realpath('./'), array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print $s;
flush();
}
}
echo "</pre>";
This is a nice way to show real time output of your shell commands:
<?php
header("Content-type: text/plain");
// tell php to automatically flush after every output
// including lines of output produced by shell commands
disable_ob();
$command = 'rsync -avz /your/directory1 /your/directory2';
system($command);
You will need this function to prevent output buffering:
function disable_ob() {
// Turn off output buffering
ini_set('output_buffering', 'off');
// Turn off PHP output compression
ini_set('zlib.output_compression', false);
// Implicitly flush the buffer(s)
ini_set('implicit_flush', true);
ob_implicit_flush(true);
// Clear, and turn off output buffering
while (ob_get_level() > 0) {
// Get the curent level
$level = ob_get_level();
// End the buffering
ob_end_clean();
// If the current level has not changed, abort
if (ob_get_level() == $level) break;
}
// Disable apache output buffering/compression
if (function_exists('apache_setenv')) {
apache_setenv('no-gzip', '1');
apache_setenv('dont-vary', '1');
}
}
It doesn't work on every server I have tried it on, though. I wish I could offer advice on what to look for in your PHP configuration to determine whether or not you should pull your hair out trying to get this type of behavior to work on your server! Does anyone else know?
Here's a dummy example in plain PHP:
<?php
header("Content-type: text/plain");
disable_ob();
for($i=0;$i<10;$i++)
{
echo $i . "\n";
usleep(300000);
}
I hope this helps others who have googled their way here.
I checked all the answers, and nothing worked for me...
I found the solution here.
It works on Windows (I think this answer is helpful for users searching for that):
<?php
$a = popen('ping www.google.com', 'r');
while($b = fgets($a, 2048)) {
echo $b."<br>\n";
ob_flush();flush();
}
pclose($a);
?>
A better solution to this old problem, using modern HTML5 Server-Sent Events, is described here:
http://www.w3schools.com/html/html5_serversentevents.asp
Example:
http://sink.agiletoolkit.org/realtime/console
Code: https://github.com/atk4/sink/blob/master/admin/page/realtime/console.php#L40
(Implemented as a module in Agile Toolkit framework)
For command-line usage:
function execute($cmd) {
$proc = proc_open($cmd, [['pipe','r'],['pipe','w'],['pipe','w']], $pipes);
while(($line = fgets($pipes[1])) !== false) {
fwrite(STDOUT,$line);
}
while(($line = fgets($pipes[2])) !== false) {
fwrite(STDERR,$line);
}
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
return proc_close($proc);
}
If you're trying to run a file, you may need to give it execute permissions first:
chmod('/path/to/script',0755);
Try this (tested on a Windows machine + WAMP server):
header('Content-Encoding: none;');
set_time_limit(0);
$handle = popen("<<< Your Shell Command >>>", "r");
if (ob_get_level() == 0)
ob_start();
while(!feof($handle)) {
$buffer = fgets($handle);
$buffer = trim(htmlspecialchars($buffer));
echo $buffer . "<br />";
echo str_pad('', 4096);
ob_flush();
flush();
sleep(1);
}
pclose($handle);
ob_end_flush();
I've tried various PHP execution commands on Windows and found that they differ quite a lot.
Don't work for streaming: shell_exec, exec, passthru
Kind of works: proc_open, popen -- "kind of" because you cannot pass arguments to your command (i.e. won't work with my.exe --something, will work with _my_something.bat).
The best (easiest) approach is:
You must make sure your exe is flushing its output (see the printf flushing problem). Without this you will most likely receive batches of about 4096 bytes of text whatever you do.
If you can, use header('Content-Type: text/event-stream'); (instead of header('Content-Type: text/plain; charset=...');). This will not work in all browsers/clients, though! Streaming will work without it, but at least the first lines will be buffered by the browser.
You also might want to disable caching with header('Cache-Control: no-cache');.
Turn off output buffering (either in php.ini or with ini_set('output_buffering', 'off');). This might also have to be done in Apache/Nginx/whatever server you use in front.
Turn off compression (either in php.ini or with ini_set('zlib.output_compression', false);). This might also have to be done in Apache/Nginx/whatever server you use in front.
So in your C++ program you do something like (again, for other solutions see printf flushing problem):
void Logger::log(const char *text) {
    printf("%s", text);
    fflush(stdout);   // flush right away so the PHP side sees each line as it is produced
}
In PHP you do something like:
function setupStreaming() {
// Turn off output buffering
ini_set('output_buffering', 'off');
// Turn off PHP output compression
ini_set('zlib.output_compression', false);
// Disable Apache output buffering/compression
if (function_exists('apache_setenv')) {
apache_setenv('no-gzip', '1');
apache_setenv('dont-vary', '1');
}
}
function runStreamingCommand($cmd){
echo "\nrunning $cmd\n";
system($cmd);
}
...
setupStreaming();
runStreamingCommand($cmd);
First check whether flush() works for you. If it does, good, if it doesn't it probably means the web server is buffering for some reason, for example mod_gzip is enabled.
For something like ping, the easiest technique is to loop within PHP, running "ping -c 1" multiple times, and calling flush() after each output. Assuming PHP is configured to abort when the HTTP connection is closed by the user (which is usually the default, or you can call ignore_user_abort(false) to make sure), then you don't need to worry about run-away ping processes either.
If it's really necessary that you only run the child process once and display its output continuously, that may be more difficult -- you'd probably have to run it in the background, redirect output to a stream, and then have PHP echo that stream back to the user, interspersed with regular flush() calls.
If you're looking to run system commands via PHP, look into the exec documentation.
I wouldn't recommend doing this on a high-traffic site, though; forking a process for each request is quite a heavy operation. Some programs provide the option of writing their process id to a file so that you can check for, and terminate, the process at will, but for commands like ping I'm not sure that's possible; check the man pages.
You may be better served by simply opening a socket on the port you expect to be listening (e.g. port 80 for HTTP) on the remote host; that way you know everything is going well in userland as well as on the network.
If you're attempting to output binary data, look into PHP's header function, and ensure you set the proper Content-Type and Content-Disposition. Review the documentation for more information on using/disabling the output buffer.
Try changing the php.ini file to set output_buffering = Off. You should be able to get real-time output on the page.
Use the system command instead of exec; system will flush the output.
Why not just pipe the output into a log file and then use that file to return content to the client? Not quite real time, but perhaps good enough?
I had the same problem and could only solve it using the Symfony Process component (https://symfony.com/doc/current/components/process.html).
Quick example:
<?php
use Symfony\Component\Process\Process;
$process = new Process(['ls', '-lsa']);
$process->run(function ($type, $buffer) {
if (Process::ERR === $type) {
echo 'ERR > '.$buffer;
} else {
echo 'OUT > '.$buffer;
}
});
?>

redirecting output from an external dll in powershell

I have a C# assembly that processes an xml file and at the end spits out the results to the console.
e.g. Console.WriteLine(_header.ToString());
I can load this dll in powershell and call the right method like this:
[sqlproj_doctor.sqlprojDoctor]::ProcessXML($file) | out-file ./test.xml
All is well.
The problem begins when I want to redirect the output. For some reason stdout is empty. What am I missing? I need to further process the output of this dll.
Note: If I compile the same code as an executable, it correctly populates the standard output stream and I can redirect the output.
Another note: as a workaround, I changed the method from void to string, and I can now manipulate the returned string.
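For example (a sketch only; the real method lives in the sqlproj_doctor assembly, so the class body and _header field here are stand-ins):
using System;
using System.Text;

public static class sqlprojDoctor
{
    static readonly StringBuilder _header = new StringBuilder("stand-in for the processed XML");

    // Before: the result only ever goes to the console's stdout.
    public static void ProcessXMLToConsole(string file)
    {
        Console.WriteLine(_header.ToString());
    }

    // After: return the text so PowerShell can capture, pipe, or redirect it.
    public static string ProcessXML(string file)
    {
        return _header.ToString();
    }
}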
When you call [Console]::WriteLine('SomeText'), it writes to the PowerShell process's stdout, not to the command's output stream, and so it cannot be redirected from inside the same PowerShell process by standard PowerShell operators, like this:
[Console]::WriteLine('SomeText')|Out-File Test.txt
You have to spawn new PowerShell process, and redirect output of that new process:
powershell -Command "[Console]::WriteLine('SomeText')"|Out-File Test.txt
If a command uses [Console]::WriteLine to write to the console, you can capture that output without starting a new PowerShell instance:
$OldConsoleOut=[Console]::Out
$StringWriter=New-Object IO.StringWriter
[Console]::SetOut($StringWriter)
[Console]::WriteLine('SomeText') # That command will not print on console.
[Console]::SetOut($OldConsoleOut)
$Results=$StringWriter.ToString()
$Results # That variable would contain "SomeText" text.
This does not help, however, if the command does not use [Console]::WriteLine but writes to the stdout stream directly:
$OldConsoleOut=[Console]::Out
$StringWriter=New-Object IO.StringWriter
[Console]::SetOut($StringWriter)
$Stdout=[Console]::OpenStandardOutput()
$Bytes=[Console]::OutputEncoding.GetBytes("SomeText"+[Environment]::NewLine)
$Stdout.Write($Bytes,0,$Bytes.Length) # That command will print on console.
$Stdout.Close()
[Console]::SetOut($OldConsoleOut)
$Results=$StringWriter.ToString()
$Results # That variable will be empty.

Turning C output into C# input

I am currently trying to develop a program which takes the output of an existing program (written in C) and uses it as input (in C#). The problem I am having is that the existing program prints data in redundant format but it dynamically changes. An example could be a random name generator and I need to make a program that logs all of the random names as they appear.
Could I just pipe this and the output will be grabbed as it comes? The C program is run from a CLI.
You could redirect the output streams from the Process object to get direct access to it. You make a call and it will invoke an event of your choice when output is received. One of my questions may provide some explanation on how to do this - C# Shell - IO redirection:
processObject.StartInfo.UseShellExecute = false; // required for output redirection
processObject.StartInfo.RedirectStandardOutput = true;
processObject.OutputDataReceived += new DataReceivedEventHandler(processObject_OutputDataReceived);
/* ... */
processObject.Start();
processObject.BeginOutputReadLine();
And then later:
public void processObject_OutputDataReceived(object sender, DataReceivedEventArgs e) {
ProcessNewData(e.Data);
}
Note that e.Data contains one line at a time, without the trailing newline, and is null once the output stream closes.
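Put together, a minimal self-contained version might look like this (cprogram.exe is a stand-in for the existing C program):
using System;
using System.Diagnostics;

class NameLogger
{
    static void Main()
    {
        var processObject = new Process();
        processObject.StartInfo.FileName = "cprogram.exe";    // stand-in for the existing C program
        processObject.StartInfo.UseShellExecute = false;      // required for stream redirection
        processObject.StartInfo.RedirectStandardOutput = true;
        processObject.OutputDataReceived += (sender, e) =>
        {
            if (e.Data != null)   // null means the output stream has closed
                Console.WriteLine("got: " + e.Data);
        };
        processObject.Start();
        processObject.BeginOutputReadLine();   // start raising OutputDataReceived as each line arrives
        processObject.WaitForExit();
    }
}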
Or you can just pipe it and use Console.ReadLine to read it in. On the command line, you would execute:
cprogram | csharp_program
Unfortunately, you can't do this directly with the Process object - just use the method above. If you do choose to go this route, you can use:
string input = "";
int currentChar = 0;
while ( (currentChar = Console.Read()) > -1 ) {
input += Convert.ToChar(currentChar);
}
Which will read from the input until EOF.
If the C# program is also command-line based, running this command from the command-line will chain them together:
my_c_program.exe | my_csharp_program.exe
The C# program will receive the output of the C program, via its standard input stream (Console.Read, etc.)
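The consuming side can then be as small as the sketch below. One caveat: if the C program's output only arrives in large chunks, it may need to fflush(stdout) after each line, since the C runtime block-buffers stdout when it is writing to a pipe rather than a console.
using System;

class StdinLogger
{
    static void Main()
    {
        string line;
        // Read whatever the C program pipes in, one line at a time, until EOF.
        while ((line = Console.ReadLine()) != null)
        {
            Console.WriteLine("logged: " + line);
        }
    }
}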
