Process not working - C#

I have tried launching a process (a console application) using the following code:
ProcessStartInfo pi = new ProcessStartInfo();
pi.UseShellExecute = false;
pi.RedirectStandardOutput = true;
pi.CreateNoWindow = true;
pi.FileName = @"C:\fakepath\go.exe";
pi.Arguments = "FOO BAA";
Process p = Process.Start(pi);
StreamReader streamReader = p.StandardOutput;
char[] buf = new char[256];
string line = string.Empty;
int count;
while ((count = streamReader.Read(buf, 0, 256)) > 0)
{
line += new String(buf, 0, count);
}
It works in only some cases.
The file that does not work is about 1.30 MB;
I don't know if the size is the reason it fails.
In the failing case, line ends up as an empty string.
I hope this is clear.
Can someone point out my error? Thanks in advance.

A couple of thoughts:
The various Read* methods of StreamReader require the child process to have finished producing output before they run; otherwise you may get no output at all, depending on timing. Look at the Process.WaitForExit() method if you want to go this route.
Also, unless you have a specific reason for allocating buffers (a pain, in my opinion), just use ReadLine() in a loop or, once the process has exited, ReadToEnd() to get the whole output (sketched below). Neither requires arrays of char, which open you up to arithmetic errors with buffer sizes.
If you want to go asynchronous and consume output as the process runs, use the BeginOutputReadLine() method (see MSDN).
Don't forget that errors are handled separately, so if your app writes to STDERR for any reason, you will need the corresponding error-output members to read that stream as well.
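For illustration, here is a minimal sketch of that synchronous route, reusing the question's placeholder path and arguments; the key point is to read to the end before waiting, so the child can never block on a full stdout pipe:

using System;
using System.Diagnostics;

ProcessStartInfo pi = new ProcessStartInfo();
pi.UseShellExecute = false;
pi.RedirectStandardOutput = true;
pi.CreateNoWindow = true;
pi.FileName = @"C:\fakepath\go.exe";
pi.Arguments = "FOO BAA";

using (Process p = Process.Start(pi))
{
    // Drain stdout first; calling WaitForExit() before reading can deadlock
    // if the child fills the pipe and blocks waiting for us to read it.
    string line = p.StandardOutput.ReadToEnd();
    p.WaitForExit();
    Console.WriteLine(line);
}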

Related

Read 7-Zip progress from Process.StandardOutput

I am redirecting Process.StandardOutput and Process.StandardError from a System.Diagnostics.Process that uses 7-Zip to extract and create archives, and I am unable to read the progress from the process.
It appears that 7-Zip, like some other applications, writes a partial line of progress data and then erases it with backspace/delete characters in order to show progress in place. I am trying to read those partial-line outputs from the target process but am unable to do so. However, I may be wrong in this assumption.
What I have tried:
var process = new Process
{
StartInfo =
{
FileName = command,
Arguments = arguments,
UseShellExecute = false,
RedirectStandardOutput = true,
RedirectStandardError = true
}
};
process.Start();
After the above code block, I have tried using various methods of reading the data.
I have tried the async event handlers:
process.OutputDataReceived += (sender, args) => { Console.WriteLine(args.Data); };
process.ErrorDataReceived += (sender, args) => { Console.WriteLine(args.Data); };
process.BeginOutputReadLine();
process.BeginErrorReadLine();
I have tried using the async methods of the StandardOutput:
while (!process.StandardOutput.EndOfStream)
{
char[] buffer = new char[256];
int read = process.StandardOutput.ReadAsync(buffer, 0, buffer.Length).Result;
Console.Write(buffer, 0, read);
}
and
process.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
And have tried using the async methods of the underlying base stream.
while (!process.StandardOutput.EndOfStream)
{
byte[] buffer = new byte[256];
int read = process.StandardOutput.BaseStream.ReadAsync(buffer, 0, buffer.Length).Result;
string data = Encoding.UTF8.GetString(buffer, 0, read);
Console.Write(data);
}
As an example, run 7Zip from the terminal with the following command:
"c:\program files\7-zip\7z.exe" x -o"C:\target" "K:\Disk_23339.secure.7z"
This shows progress output when run directly in a command prompt, with each successive progress update overwriting the previous one; 7-Zip uses backspace characters to overwrite the previously written progress.
Running the same command and arguments using Process.Start():
var process = new Process
{
StartInfo =
{
FileName = @"c:\program files\7-zip\7z.exe",
Arguments = @"x -o""C:\target"" ""K:\Disk_23339.secure.7z""",
UseShellExecute = false,
RedirectStandardOutput = true,
RedirectStandardError = true
}
};
process.Start();
process.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
process.WaitForExit();
When running this and attempting to read the redirected standard output, any characters the child process emits without a trailing newline (either a line feed or carriage return + line feed) never surface from System.Diagnostics.Process's standard output or standard error, and hence are never written to the console.
7-Zip, of course, is just one example. The same issue occurs with numerous PowerShell and Python scripts.
Is there any way to read these characters from Process.StandardOutput and Process.StandardError?
I am not sure, but I think the issue is that the underlying stream reader reads one line at a time and never returns these partial lines because they never include line-ending characters.
Though the post has been up for months, in case your problem (not being able to get progress from 7-Zip when executing it on the command line) still exists: use the -bsp1 switch in the command-line arguments.
I was looking for a solution to the same issue and just got it working (tested successfully in my case). Redirect StandardOutput, then repeatedly call cmd.StandardOutput.ReadLine() (I tested this synchronous method rather than the asynchronous event-handler approach, but I think the async method would work too), and use a regular expression to detect the progress and update my UI.
My command (run in .NET)
"C:\Program Files\7-Zip\7z.exe" a -t7z -mx=9 -bsp1 "G:\Temp\My7zTest_Compressed.7z" "G:\SourceFolder"
Credit to @justintjacob
How to show extraction progress of 7zip inside cmd?
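A minimal sketch of that approach, assuming (as reported above) that -bsp1 makes progress arrive in chunks ReadLine() can pick up; the percentage pattern is illustrative, not part of 7-Zip's documented output contract:

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

var psi = new ProcessStartInfo
{
    FileName = @"C:\Program Files\7-Zip\7z.exe",
    Arguments = "a -t7z -mx=9 -bsp1 \"G:\\Temp\\My7zTest_Compressed.7z\" \"G:\\SourceFolder\"",
    UseShellExecute = false,
    RedirectStandardOutput = true
};
using (var cmd = Process.Start(psi))
{
    var progress = new Regex(@"(\d+)%"); // matches e.g. " 42%"
    string line;
    while ((line = cmd.StandardOutput.ReadLine()) != null)
    {
        var m = progress.Match(line);
        if (m.Success)
            Console.WriteLine("progress: " + m.Groups[1].Value + "%"); // or update your UI here
    }
    cmd.WaitForExit();
}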

Call jpegOptim with RedirectStandardInput and RedirectStandardOutput

I'm trying to do something that seems like it should be relatively simple: Call jpegoptim from C#.
I can get it to write to disk fine, but getting it to accept a stream and emit a stream has so far eluded me - I always end up with 0 length output or the ominous "Pipe has been ended."
One approach I tried:
var processInfo = new ProcessStartInfo(
jpegOptimPath,
"-m" + quality + " -T1 -o -p --strip-all --all-normal"
);
processInfo.CreateNoWindow = true;
processInfo.WindowStyle = ProcessWindowStyle.Hidden;
processInfo.UseShellExecute = false;
processInfo.RedirectStandardInput = true;
processInfo.RedirectStandardOutput = true;
processInfo.RedirectStandardError = true;
using(var process = Process.Start(processInfo))
{
await Task.WhenAll(
inputStream.CopyToAsync(process.StandardInput.BaseStream),
process.StandardOutput.BaseStream.CopyToAsync(outputStream)
);
while (!process.HasExited)
{
await Task.Delay(100);
}
// Do stuff with outputStream here - always length 0 or exception
}
I've also tried this solution:
http://alabaxblog.info/2013/06/redirectstandardoutput-beginoutputreadline-pattern-broken/
using (var process = new Process())
{
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = true;
process.StartInfo.RedirectStandardError = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.FileName = fileName;
process.StartInfo.Arguments = arguments;
process.Start();
//Thread.Sleep(100);
using (Task processWaiter = Task.Factory.StartNew(() => process.WaitForExit()))
using (Task<string> outputReader = Task.Factory.StartNew(() => process.StandardOutput.ReadToEnd()))
using (Task<string> errorReader = Task.Factory.StartNew(() => process.StandardError.ReadToEnd()))
{
Task.WaitAll(processWaiter, outputReader, errorReader);
standardOutput = outputReader.Result;
standardError = errorReader.Result;
}
}
Same problem. Output length 0. If I let jpegoptim run without the output redirect I get what I'd expect - an optimized file - but not when I run it this way.
There's gotta be a right way to do this?
Update: Found a clue - don't I feel sheepish - jpegoptim never supported piping to stdin until an experimental build in 2014, fixed this year. The build I have is from an older library, dated 2013. https://github.com/tjko/jpegoptim/issues/6
A partial solution - see deadlock issue below. I had multiple problems in my original attempts:
You need a build of jpegoptim that can read and write pipes instead of only files. As mentioned, builds prior to mid-2014 can't do it. The GitHub "releases" of jpegoptim are just zips of the source, not built binaries, so you'll need to look elsewhere for actual built releases.
You need to call it properly, passing --stdin and --stdout, and, depending on how you respond to its output, avoid parameters that might cause it to write nothing, like -T1 (which, when the optimization would be only 1% or less, causes it to emit nothing to stdout).
You need to perform the non-trivial task of redirecting both input and output on the Process class,
while avoiding a buffer overrun on the input side that again gets you zero output: the obvious stream.CopyToAsync() overruns Process's very limited 4096-byte (4 KB) buffer and gets you nothing.
So many routes to nothing, none signalling why.
var processInfo = new ProcessStartInfo(
jpegOptimPath,
"-m" + quality + " --strip-all --all-normal --stdin --stdout"
);
processInfo.CreateNoWindow = true;
processInfo.WindowStyle = ProcessWindowStyle.Hidden;
processInfo.UseShellExecute = false;
processInfo.RedirectStandardInput = true;
processInfo.RedirectStandardOutput = true;
processInfo.RedirectStandardError = true;
using(var process = new Process())
{
process.StartInfo = processInfo;
process.Start();
int chunkSize = 4096; // Process has a limited 4096 byte buffer
var buffer = new byte[chunkSize];
int bufferLen = 0;
var inputStream = process.StandardInput.BaseStream;   // child's stdin
var outputStream = process.StandardOutput.BaseStream; // child's stdout
// input/output below are the caller's source and destination streams
do
{
bufferLen = await input.ReadAsync(buffer, 0, chunkSize);
await inputStream.WriteAsync(buffer, 0, bufferLen);
inputStream.Flush();
}
while (bufferLen == chunkSize); // note: stdin is never closed here, which likely contributes to the deadlock described below
do
{
bufferLen = await outputStream.ReadAsync(buffer, 0, chunkSize);
if (bufferLen > 0)
await output.WriteAsync(buffer, 0, bufferLen);
}
while (bufferLen > 0);
while (!process.HasExited)
{
await Task.Delay(100);
}
output.Flush();
}
There are some areas for improvement here. Improvements welcome.
Biggest problem: On some images, this deadlocks on the outputStream.ReadAsync line.
It all belongs in separate methods to break it up - I unrolled a bunch of methods to keep this example simple.
There are a bunch of flushes that may not be necessary.
The code here is meant to handle anything that streams in and out. The 4096 bytes is a hard limit that any Process will deal with, but the assumption that all the input goes in and then all the output comes out is likely a bad one; based on my research it could result in a deadlock for other types of process. jpegoptim does appear to behave in this (very buffered, very unpipe-like) way when passed --stdin --stdout, however, so this code copes well for this specific task.
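For what it's worth, one way to attack that deadlock, sketched under the same assumptions (a pipe-capable jpegoptim build; input and output are the caller's source and destination streams from the code above), is to pump stdin and stdout concurrently and close stdin once the input is exhausted, so the child sees EOF while its output is still being drained:

// Sketch only: pump both pipes at once instead of sequentially.
var stdin = process.StandardInput.BaseStream;
var stdout = process.StandardOutput.BaseStream;

var pumpIn = Task.Run(async () =>
{
    var buf = new byte[4096];
    int n;
    while ((n = await input.ReadAsync(buf, 0, buf.Length)) > 0)
        await stdin.WriteAsync(buf, 0, n);
    stdin.Close(); // EOF: lets jpegoptim finish and flush its output
});

var pumpOut = Task.Run(async () =>
{
    var buf = new byte[4096];
    int n;
    while ((n = await stdout.ReadAsync(buf, 0, buf.Length)) > 0)
        await output.WriteAsync(buf, 0, n);
});

await Task.WhenAll(pumpIn, pumpOut);
process.WaitForExit();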

Very Slow to pass "large" amount of data from Chrome Extension to Host (written in C#)

I am using Chrome's Native Messaging API to pass the DOM of a page to my host. When I try passing a small string from my extension to my host, everything works, but when I try to pass the entire DOM (which isn't that large...only around 260KB), everything runs much slower and I eventually get a Native host has exited error preventing the host from responding.
My main question: Why does it take so long to pass a 250KB - 350KB message from the extension to the host?
According to the developer's site:
Chrome starts each native messaging host in a separate process and communicates with it using standard input (stdin) and standard output (stdout). The same format is used to send messages in both directions: each message is serialized using JSON, UTF-8 encoded and is preceded with 32-bit message length in native byte order. The maximum size of a single message from the native messaging host is 1 MB, mainly to protect Chrome from misbehaving native applications. The maximum size of the message sent to the native messaging host is 4 GB.
The pages whose DOMs I'm interested in sending to my host are no more than 260 KB (occasionally 300 KB), well below the 4 GB imposed maximum.
popup.js
document.addEventListener('DOMContentLoaded', function() {
var downloadButton = document.getElementById('download_button');
downloadButton.addEventListener('click', function() {
chrome.tabs.query({currentWindow: true, active: true}, function (tabs) {
chrome.tabs.executeScript(tabs[0].id, {file: "getDOM.js"}, function (data) {
chrome.runtime.sendNativeMessage('com.google.example', {"text":data[0]}, function (response) {
if (chrome.runtime.lastError) {
console.log("Error: " + chrome.runtime.lastError.message);
} else {
console.log("Response: " + response);
}
});
});
});
});
});
host.exe
private static string StandardOutputStreamIn() {
Stream stdin = Console.OpenStandardInput();
int length = 0;
byte[] bytes = new byte[4];
stdin.Read(bytes, 0, 4);
length = System.BitConverter.ToInt32(bytes, 0);
string input = "";
for (int i = 0; i < length; i++)
input += (char)stdin.ReadByte();
return input;
}
Please note, I found the above method from this question.
For the moment, I'm just trying to write the string to a .txt file:
public static void Main(string[] args) {
string msg = StandardOutputStreamIn();
System.IO.File.WriteAllText(@"path_to_file.txt", msg);
}
Writing the string to the file takes a long time (~4 seconds, and sometimes up to 10 seconds).
The amount of text actually written varied, but it was never more than the top document declaration and a few IE comment tags. (Update: all the text now shows up.)
The file with barely any text in it was 649 KB, while the actual document should be only 205 KB (when I download it). The file is still slightly larger than it should be (216 KB when it should be 205 KB).
I've tested my getDOM.js function by just downloading the file, and the entire process is almost instantaneous.
I'm not sure why this process is taking such a long time, why the file is so huge, or why barely any of the message is actually being sent.
I'm not sure if this has something to do with deserializing the message in a specific way, if I should create a port instead of using the chrome.runtime.sendNativeMessage(...); method, or if there's something else entirely that I'm missing.
All help is very much appreciated! Thank you!
EDIT
Although my message is correctly sent FROM the extension TO the host, I now receive a Native host has exited error before the extension receives the host's message.
This question is essentially asking, "How can I efficiently and quickly read information from the standard input?"
In the above code, the problem is not between the Chrome extension and the host, but rather between the standard input and the method that reads from the standard input stream, namely StandardOutputStreamIn().
The way the method works in the OP's code is that a loop runs through the standard input stream and continuously concatenates the input string with a new string (i.e. the character it reads from the byte stream). This is an expensive operation, and we can get around this by creating a StreamReader object to just grab the entire stream at once (especially since we know the length information contained in the first 4 bytes). So, we fix the speed issue with:
public static string OpenStandardStreamIn()
{
//Read 4 bytes of length information
System.IO.Stream stdin = Console.OpenStandardInput();
int length = 0;
byte[] bytes = new byte[4];
stdin.Read(bytes, 0, 4);
length = System.BitConverter.ToInt32(bytes, 0);
char[] buffer = new char[length];
using (System.IO.StreamReader sr = new System.IO.StreamReader(stdin))
{
int read = 0;
// Advance the offset on each read so successive chunks don't overwrite each other
while (read < length && sr.Peek() >= 0)
{
read += sr.Read(buffer, read, buffer.Length - read);
}
}
string input = new string(buffer);
return input;
}
While this fixes the speed problem, I am unsure why the extension is throwing a Native host has exited error.
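One more hedge against encoding trouble: the 4-byte prefix counts bytes, while the char[length] buffer above counts UTF-16 chars, so a multi-byte UTF-8 payload could still mismatch. A byte-exact variant (a sketch; the native-messaging docs quoted earlier specify UTF-8) sidesteps that:

using System;
using System.IO;
using System.Text;

public static string ReadNativeMessage()
{
    Stream stdin = Console.OpenStandardInput();
    byte[] lengthBytes = new byte[4];
    stdin.Read(lengthBytes, 0, 4); // a robust version would loop until all 4 bytes arrive
    int length = BitConverter.ToInt32(lengthBytes, 0);

    byte[] payload = new byte[length];
    int total = 0;
    while (total < length)
    {
        int n = stdin.Read(payload, total, length - total);
        if (n == 0) break; // pipe closed early
        total += n;
    }
    return Encoding.UTF8.GetString(payload, 0, total);
}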

Issue with memory management and program performance

OK, I made a C# WinForms app; it's a File_Splitter_Joiner.
You just give it a file and it splits it into a number of pieces you specify.
The splitting is done in a separate thread.
Everything was working pretty fine until I sliced a 1 GB file!
In Task Manager, I saw my program consuming 1 GB of memory, and my computer almost died!
Not just that: when slicing finished, the consumption didn't change!
(I don't know if this means the garbage collector isn't working, though I'm pretty sure I lost all references to whatever was holding the big data chunks, so it should work.)
Here's the Splitter constructor (just to give you a better idea):
public FileSplitter(string FileToSplitPath, string PiecesFolder, int NumberOfPieces, int PieceSize, SplittingMethod Method)
{
FileToSplitInfo = new FileInfo(FileToSplitPath);
this.FileToSplitPath = FileToSplitPath;
this.PiecesFolder = PiecesFolder;
this.NumberOfPieces = NumberOfPieces;
this.PieceSize = PieceSize;
this.Method = Method;
SplitterThread = new Thread(Split);
}
And here is the method that did the actual splitting:
(I'm still a newbie, so what you're about to see 'may not' be done in the best way ever possible, I'm just learning here)
private void Split()
{
int remainingSize = 0;
int remainingPos = -1;
bool isNumberOfPiecesEqualInSize = true;
int fileSize = (int)FileToSplitInfo.Length; // FileToSplitInfo is a FileInfo object
if (fileSize % PieceSize != 0)
{
remainingSize = fileSize % PieceSize;
remainingPos = fileSize - remainingSize;
isNumberOfPiecesEqualInSize = false;
}
byte[] fileBytes = new byte[fileSize];
var _fs = File.Open(FileToSplitPath, FileMode.Open);
BinaryReader br = new BinaryReader(_fs);
br.Read(fileBytes, 0, fileSize);
br.Close();
_fs.Close();
for (int i = 0, index = 0; i < NumberOfPieces; i++, index += PieceSize)
{
var fs = File.Create(PiecesFolder + "\\" + Path.GetFileName(FileToSplitPath) + "." + (i+1).ToString());
var bw = new BinaryWriter(fs);
bw.Write(fileBytes, index, PieceSize);
if(i == NumberOfPieces-1 && !isNumberOfPiecesEqualInSize && Method == SplittingMethod.NumberOfPieces)
bw.Write(fileBytes, remainingPos, remainingSize);
bw.Close();
fs.Close();
}
MessageBox.Show("File has been splitted successfully!");
SplitterThread.Abort();
}
Now, instead of reading the bytes of the file via a BinaryReader, I originally read them via File.ReadAllBytes. It worked fine with small files, but I got a System.OutOfMemoryException when I dealt with our big guy; I don't know why I didn't get that exception when I read the bytes via a BinaryReader.
(That was an in-between question.)
So, the main question is: how can I load big files (gigabytes, speaking) in a way that doesn't consume so much memory? I mean, how can I make my program not consume all that memory?
And how can I free the used memory after the splitting is done?
(I actually used
bw.Dispose(); fs.Dispose();
instead of
bw.Close(); fs.Close();
and it was the same.)
I know the question might not make sense, because when we load something it goes into our memory, not somewhere else. The reason I ask is that I used another splitting/joining program (not written by me) just to see whether it had the same problem: when I loaded the file, it consumed about 5 MB of RAM, and when I started splitting it used about 10 MB!
Now that is a VERY big difference. Probably that app was written in C/C++.
So to sum up: what's at fault? Is it my code, and if so how can I fix it? Or is it C# when it comes to performance?
Thank you SOOO much for anything you could hook me up with :)
The following 2 lines will kill you:
int fileSize = (int)FileToSplitInfo.Length; // a FileInfo object
...
byte[] fileBytes = new byte[fileSize];
Your code will fail when the size is over Int32.MaxValue. The cast is unnecessary; just use long fileSize = FileToSplitInfo.Length;
This corrected code will fail when there is not enough contiguous memory. Fragmentation (of the LOH) will bring you down sooner or later.
You allocate memory for the entire file, but you only need PieceSize bytes at a time.
You don't even need to know the fileSize, just
byte[] pieceBuffer = new byte[PieceSize];
while (true)
{
int nBytes = br.Read(pieceBuffer, 0, pieceBuffer.Length);
if (nBytes == 0)
break;
// write this piece, the length is nBytes
}
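Fleshed out with streams and disposal, a minimal sketch of that loop might look like this (the field names and the piece-naming scheme follow the question's Split method):

byte[] pieceBuffer = new byte[PieceSize];
using (var br = new BinaryReader(File.OpenRead(FileToSplitPath)))
{
    int pieceIndex = 1;
    while (true)
    {
        int nBytes = br.Read(pieceBuffer, 0, pieceBuffer.Length);
        if (nBytes == 0)
            break; // end of file
        string piecePath = Path.Combine(PiecesFolder,
            Path.GetFileName(FileToSplitPath) + "." + pieceIndex++);
        using (var fs = File.Create(piecePath))
            fs.Write(pieceBuffer, 0, nBytes);
    }
}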
There are different aspects that can be made better:
if you are working with a big file, why first read it all into an array and afterwards write it into another file? Just write into the new file while you read from the other.
use using to guarantee disposal of the streams in any case, whether or not there is an exception.
if you begin to work with really big files, like 1 GB or even more, I would recommend looking at memory-mapped files. You get a considerable memory-consumption benefit at some performance cost (a short sketch follows).
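A minimal sketch of the memory-mapped route, assuming the question's field names; only the mapped window is brought into memory, not the whole file:

using System.IO;
using System.IO.MemoryMappedFiles;

using (var mmf = MemoryMappedFile.CreateFromFile(FileToSplitPath, FileMode.Open))
{
    long offset = 0; // byte offset of the piece; assumes offset + PieceSize is within the file
    using (var view = mmf.CreateViewStream(offset, PieceSize, MemoryMappedFileAccess.Read))
    using (var fs = File.Create(Path.Combine(PiecesFolder, "piece.1"))) // illustrative name
    {
        view.CopyTo(fs); // streams just this window out to the piece file
    }
}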

Capturing binary output from Process.StandardOutput

In C# (.NET 4.0 running under Mono 2.8 on SuSE) I would like to run an external batch command and capture its output in binary form. The external tool I use is called 'samtools' (samtools.sourceforge.net) and among other things it can return records from an indexed binary file format called BAM.
I use Process.Start to run the external command, and I know that I can capture its output by redirecting Process.StandardOutput. The problem is, that's a text stream with an encoding, so it doesn't give me access to the raw bytes of the output. The almost-working solution I found is to access the underlying stream.
Here's my code:
Process cmdProcess = new Process();
ProcessStartInfo cmdStartInfo = new ProcessStartInfo();
cmdStartInfo.FileName = "samtools";
cmdStartInfo.RedirectStandardError = true;
cmdStartInfo.RedirectStandardOutput = true;
cmdStartInfo.RedirectStandardInput = false;
cmdStartInfo.UseShellExecute = false;
cmdStartInfo.CreateNoWindow = true;
cmdStartInfo.Arguments = "view -u " + BamFileName + " " + chromosome + ":" + start + "-" + end;
cmdProcess.EnableRaisingEvents = true;
cmdProcess.StartInfo = cmdStartInfo;
cmdProcess.Start();
// Prepare to read each alignment (binary)
var br = new BinaryReader(cmdProcess.StandardOutput.BaseStream);
while (!cmdProcess.StandardOutput.EndOfStream)
{
// Consume the initial, undocumented BAM data
br.ReadBytes(23);
// ... more parsing follows
But when I run this, the first 23 bytes I read are not the first 23 bytes of the output, but come from somewhere several hundred or thousand bytes downstream. I assume the StreamReader does some buffering, so the underlying stream has already advanced, say, 4 KB into the output. The underlying stream does not support seeking back to the start.
And I'm stuck here. Does anyone have a working solution for running an external command and capturing its stdout in binary form? The output may be very large, so I would like to stream it.
Any help appreciated.
By the way, my current workaround is to have samtools return the records in text format, then parse those, but this is pretty slow and I'm hoping to speed things up by using the binary format directly.
Using StandardOutput.BaseStream is the correct approach, but you must not use any other property or method of cmdProcess.StandardOutput. For example, accessing cmdProcess.StandardOutput.EndOfStream will cause the StreamReader for StandardOutput to read part of the stream, removing the data you want to access.
Instead, simply read and parse the data from br (assuming you know how to parse the data, and won't read past the end of stream, or are willing to catch an EndOfStreamException). Alternatively, if you don't know how big the data is, use Stream.CopyTo to copy the entire standard output stream to a new file or memory stream.
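A minimal sketch of that advice against the question's cmdProcess: touch only the base stream, never the StandardOutput reader. The MemoryStream is just to keep the sketch short; for genuinely large output, copy to a file or parse incrementally with the BinaryReader instead.

cmdProcess.Start();

// Never query cmdProcess.StandardOutput.EndOfStream here: the StreamReader
// would buffer (and effectively steal) the first chunk of the pipe.
using (var ms = new MemoryStream())
{
    cmdProcess.StandardOutput.BaseStream.CopyTo(ms);
    cmdProcess.WaitForExit();
    byte[] bamBytes = ms.ToArray(); // raw binary output, ready for parsing
}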
Since you explicitly specified running on SuSE Linux and Mono, you can work around the problem by using native Unix calls to create the redirection and read from the stream. For example:
using System;
using System.Diagnostics;
using System.IO;
using Mono.Unix;
class Test
{
public static void Main()
{
int reading, writing;
Mono.Unix.Native.Syscall.pipe(out reading, out writing);
int stdout = Mono.Unix.Native.Syscall.dup(1);
Mono.Unix.Native.Syscall.dup2(writing, 1);
Mono.Unix.Native.Syscall.close(writing);
Process cmdProcess = new Process();
ProcessStartInfo cmdStartInfo = new ProcessStartInfo();
cmdStartInfo.FileName = "cat";
cmdStartInfo.CreateNoWindow = true;
cmdStartInfo.Arguments = "test.exe";
cmdProcess.StartInfo = cmdStartInfo;
cmdProcess.Start();
Mono.Unix.Native.Syscall.dup2(stdout, 1);
Mono.Unix.Native.Syscall.close(stdout);
Stream s = new UnixStream(reading);
byte[] buf = new byte[1024];
int bytes = 0;
int current;
while((current = s.Read(buf, 0, buf.Length)) > 0)
{
bytes += current;
}
Mono.Unix.Native.Syscall.close(reading);
Console.WriteLine("{0} bytes read", bytes);
}
}
Under unix, file descriptors are inherited by child processes unless marked otherwise (close on exec). So, to redirect stdout of a child, all you need to do is change the file descriptor #1 in the parent process before calling exec. Unix also provides a handy thing called a pipe which is a unidirectional communication channel, with two file descriptors representing the two endpoints. For duplicating file descriptors, you can use dup or dup2 both of which create an equivalent copy of a descriptor, but dup returns a new descriptor allocated by the system and dup2 places the copy in a specific target (closing it if necessary). What the above code does, then:
Creates a pipe with endpoints reading and writing
Saves a copy of the current stdout descriptor
Assigns the pipe's write endpoint to stdout and closes the original
Starts the child process so it inherits stdout connected to the write endpoint of the pipe
Restores the saved stdout
Reads from the reading endpoint of the pipe by wrapping it in a UnixStream
Note, in native code, a process is usually started by a fork+exec pair, so the file descriptors can be modified in the child process itself, but before the new program is loaded. This managed version is not thread-safe as it has to temporarily modify the stdout of the parent process.
Since the code starts the child process without managed redirection, the .NET runtime does not change any descriptors or create any streams. So, the only reader of the child's output will be the user code, which uses a UnixStream to work around the StreamReader's encoding issue.
I checked what's happening with Reflector. It seems to me that the StreamReader doesn't read until you call Read on it, though it's created with a buffer size of 0x1000, so maybe it does. Luckily, until you actually read from it, you can safely get the buffered data out of it: it has a private field byte[] byteBuffer and two integer fields, byteLen and bytePos; the first says how many bytes are in the buffer, the second how many of them you have consumed (it should be zero). So first read this buffer with reflection, then create the BinaryReader.
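A sketch of that reflection trick; note the private field names ("byteBuffer", "byteLen", "bytePos") match the classic .NET Framework StreamReader and are not a stable API, so they may differ on other runtimes (Mono and .NET Core use different names):

using System.IO;
using System.Reflection;

var sr = cmdProcess.StandardOutput;
var flags = BindingFlags.NonPublic | BindingFlags.Instance;
var byteBuffer = (byte[])typeof(StreamReader).GetField("byteBuffer", flags).GetValue(sr);
int byteLen = (int)typeof(StreamReader).GetField("byteLen", flags).GetValue(sr);
int bytePos = (int)typeof(StreamReader).GetField("bytePos", flags).GetValue(sr);

// Bytes in byteBuffer[bytePos .. byteLen-1] were already pulled off the pipe
// and will never reappear on BaseStream: consume them first, then continue
// reading raw bytes from the base stream.
var br = new BinaryReader(cmdProcess.StandardOutput.BaseStream);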
Maybe you can try something like this:
public class ThirdExe
{
private static TongueSvr _instance = null;
private Diagnostics.Process _process = null;
private Stream _messageStream;
private byte[] _recvBuff = new byte[65536];
private int _recvBuffLen;
private Queue<TonguePb.Msg> _msgQueue = new Queue<TonguePb.Msg>();
void StartProcess()
{
try
{
_process = new Diagnostics.Process();
_process.EnableRaisingEvents = false;
_process.StartInfo.FileName = "d:/code/boot/tongueerl_d.exe"; // Your exe
_process.StartInfo.UseShellExecute = false;
_process.StartInfo.CreateNoWindow = true;
_process.StartInfo.RedirectStandardOutput = true;
_process.StartInfo.RedirectStandardInput = true;
_process.StartInfo.RedirectStandardError = true;
_process.ErrorDataReceived += new Diagnostics.DataReceivedEventHandler(ErrorReceived);
_process.Exited += new EventHandler(OnProcessExit);
_process.Start();
_messageStream = _process.StandardInput.BaseStream;
_process.BeginErrorReadLine();
AsyncRead();
}
catch (Exception e)
{
Debug.LogError("Unable to launch app: " + e.Message);
}
}
private void AsyncRead()
{
_process.StandardOutput.BaseStream.BeginRead(_recvBuff, 0, _recvBuff.Length
, new AsyncCallback(DataReceived), null);
}
void DataReceived(IAsyncResult asyncResult)
{
int nread = _process.StandardOutput.BaseStream.EndRead(asyncResult);
if (nread == 0)
{
Debug.Log("process read finished"); // process exit
return;
}
_recvBuffLen += nread;
Debug.LogFormat("recv data size.{0} remain.{1}", nread, _recvBuffLen);
ParseMsg();
AsyncRead();
}
void ParseMsg()
{
if (_recvBuffLen < 4)
{
return;
}
int len = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(_recvBuff, 0));
if (len > _recvBuffLen - 4)
{
Debug.LogFormat("current call can't parse the NetMsg for data incomplete");
return;
}
TonguePb.Msg msg = TonguePb.Msg.Parser.ParseFrom(_recvBuff, 4, len);
Debug.LogFormat("recv msg count.{1}:\n {0} ", msg.ToString(), _msgQueue.Count + 1);
_recvBuffLen -= len + 4;
_msgQueue.Enqueue(msg);
}
}
The key is _process.StandardOutput.BaseStream.BeginRead(_recvBuff, 0, _recvBuff.Length, new AsyncCallback(DataReceived), null); the important part is converting to asynchronous reads on the base stream, analogous to the Process.OutputDataReceived event.
