For part of a C# application I am executing some relatively simple networking commands. (ping, ipconfig, tracert, nslookup, etc.)
I have read this answer on when to use C# vs CMD/PowerShell in the general sense:
https://stackoverflow.com/a/4188135/8887398
However, I'm wondering, particularly for networking and the System.Net library, whether there is any major advantage in taking the time to implement these in C# as opposed to creating a new command-line Process() within the C# app and executing them that way. (The Process() route is really simple/easy to code within a C# app.)
Question: What are the main advantages of implementing networking commands with the C# System.Net library vs creating a new Process() within a C# app and proceeding internally as if you were using the command line?
There is certainly nothing wrong with starting a process. You can even start PowerShell and use that for sequences of actions that are easy to do in PowerShell.
Starting a process is not as easy as you might think.
You need to read both standard output and standard error, for example; otherwise the process can hang seemingly at random once a pipe buffer fills up. Doing this correctly is surprisingly hard.
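As a rough sketch of the usual pattern (the command and arguments here are placeholders), reading both streams asynchronously keeps either pipe from filling up and blocking the child:

    using System;
    using System.Diagnostics;

    var psi = new ProcessStartInfo("tracert", "example.com")   // placeholder command and arguments
    {
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using var process = new Process { StartInfo = psi };
    process.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
    process.ErrorDataReceived  += (s, e) => { if (e.Data != null) Console.Error.WriteLine(e.Data); };

    process.Start();
    process.BeginOutputReadLine();   // drain stdout asynchronously
    process.BeginErrorReadLine();    // drain stderr asynchronously
    process.WaitForExit();
    Console.WriteLine($"Exit code: {process.ExitCode}");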
Getting results from commands is harder compared to using a .NET class. In .NET you get objects and exceptions back. A process can only send you text and an error code.
There is also more overhead. Whether that matters depends on the frequency of such operations.
You could leave child processes orphaned. The best solution is to use a Windows Job Object to kill the child process tree when the parent exits.
So it depends on the specific case. I would definitely do a ping from C# since that is very easy to do. Other commands might benefit more from starting a process.
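For example, a minimal sketch of a ping using the built-in System.Net.NetworkInformation.Ping class (the host name is a placeholder); you get a typed PingReply back instead of text to parse:

    using System;
    using System.Net.NetworkInformation;

    using var ping = new Ping();
    PingReply reply = ping.Send("example.com", 2000);   // placeholder host, 2000 ms timeout

    if (reply.Status == IPStatus.Success)
        Console.WriteLine($"Reply from {reply.Address} in {reply.RoundtripTime} ms");
    else
        Console.WriteLine($"Ping failed: {reply.Status}");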
Related
Don't close it as a duplicate, since I have a subtle but significant change from the similar questions:
Is it possible to capture the output of an external process (i.e. stdout) in Java, when I didn't create the process, and all I know is the process name?
I'm running on Windows 7.
EDIT:
If there is a way to do it in another language (C#\C++), then I can write a "CaptureOutput" program that captures the output and writes it to stdout, and in my Java code launch "CaptureOutput" and read its stdout.
Ugly, but might work.
So an answer in another language is also okay with me.
First let me say that what you're asking breaks all the rules of process isolation.
If your process does not create the process whose output you want to capture, and you also don't have access to modify the calling process (command shell? service manager? you haven't said which), then your only chance, and it is a slim one at best, is to inject a thread into the process and, while all its other threads are suspended, alter the global stdout (and stderr?). This can only be done by a process with full access privileges to the target process. Performing such surgery on a running process is not for the faint of heart.
What you are trying to do is pretty dangerous. It would be very easy to accidentally corrupt the memory of the process you're trying to get into. Test, test, test. Then test some more. And good luck - I know I wouldn't want to have to pull this off.
This article - API Hooking - explains how to get started with what you want (using C++). Once you have your code injected into a running process, there are other Windows API calls to replace STDOUT (e.g. SetStdHandle).
Do you have any control over when the process starts? If so, you could start the process yourself and have it pipe its stdout to a file that you can read, or to another program you write that could log it to a database, the event viewer, etc.
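If you do control the launch, a hedged C# sketch of capturing a child's stdout into a log file might look like this (the executable name and log path are placeholders):

    using System;
    using System.Diagnostics;
    using System.IO;

    var psi = new ProcessStartInfo("SomeTool.exe")   // placeholder executable
    {
        RedirectStandardOutput = true,
        UseShellExecute = false
    };

    using var process = Process.Start(psi)!;
    using var log = File.CreateText(@"C:\logs\sometool.log");   // placeholder path

    string? line;
    while ((line = process.StandardOutput.ReadLine()) != null)
        log.WriteLine($"{DateTime.Now:O} {line}");   // timestamp each captured line

    process.WaitForExit();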
Under Linux, check out the operating system's IPC mechanisms such as message queues, pipes, shared memory, and sockets; these allow for inter-process communication. If you're particularly interested in a program's output, a workaround could be to have the first process write the data out to a file on disk and have a separate process read it. That way you can use multiple languages for the task: a simple example would be to have C++ write some data out to a file and have Java read and use the data from the same file. Hope I came close to answering, if at all.
I thought this could've been a common question, but it has been very difficult to find an answer. I've tried searching here and other forums with no luck.
I'm writing a C# (.NET version 4) program to monitor a process. It already raises an event when the process starts and when it stops, but I also need to check where this process is reading from and writing to; especially writing to, since I know this process writes a large amount of data every time it runs. We process batches of data, and the path the process writes to contains the Batch ID, which is an important piece of information for logging the results of the process.
I've looked into the System.Diagnostics.Process.BeginOutputReadLine method, but since the documentation says that StandardOutput must be redirected, I'm not sure if this can be done on a process that is currently running, or if it affects the write operation originally intended by the process.
It is a console application in C#. If anyone has any idea how to do this, it would be much appreciated.
Thanks in advance!
Output redirection would only help you solve the problem of intercepting the process' standard output stream. This would have no effect on read/write operations to other files or streams that the program would use.
The easiest way to do this would be to avoid reverse engineering this information and exert some control over where the process writes its data (e.g. pass a command line parameter to it to specify the output path and you can monitor that output path yourself).
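As a rough sketch of that approach in C# (the worker executable, its --output switch, and the directory are all hypothetical):

    using System;
    using System.Diagnostics;
    using System.IO;

    var outputDir = @"C:\batches\output";   // hypothetical path that you control

    // Watch the output location yourself instead of reverse engineering the process.
    using var watcher = new FileSystemWatcher(outputDir)
    {
        IncludeSubdirectories = true,
        EnableRaisingEvents = true
    };
    watcher.Created += (s, e) => Console.WriteLine($"New file: {e.FullPath}");

    // Tell the process where to write via a command-line parameter (hypothetical switch).
    var psi = new ProcessStartInfo("BatchWorker.exe", $"--output \"{outputDir}\"");
    using var process = Process.Start(psi)!;
    process.WaitForExit();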
If that is impossible for some reason, you can look into these approaches, all of which are quite advanced and have various drawbacks:
Use Detours to launch the process and redirect calls to CreateFile to a function that you define (e.g. you could call into some other function to track the file name that it used and then call the real CreateFile). Note that a license to use Detours costs money and it requires you to build an unmanaged DLL to define your replacement function.
Read the data from the Microsoft-Windows-Kernel-File event tracing provider. This provider tracks all file operations for everything on the system. Using this data requires advanced knowledge of ETW and a lot of P/Invoke calls if you are trying to consume it from C#.
Enumerate the open handles of the process once it is started. A previous stackoverflow.com question has several possible solutions. Note that this is not foolproof as it only gives you a snapshot of the activity at a point in time (e.g. the process may open and close handles too quickly for you to observe it between calls to enumerate them) and most of those answers require calling into undocumented functions.
I came across this implementation recently: DetectOpenFiles, but I have not used or tested it. Feel free to try it. It seems to deliver open file handle information for a given process ID. Looking forward to reading about your experience with it! ;-)
I've written an SSH server in C# and I thought it'd be neat to hook up PowerShell as a shell. I've tried two methods to get this to work properly, but both are far from perfect. Here's what I've tried:
1. Launch powershell.exe and redirect its std(in/out). This doesn't work well since powershell.exe detects that it is redirected and changes its behaviour. What's more, it expects input data on stdin, not commands, so it uses the console API to read commands.
2. Host PowerShell in a "wrapper" application. This has the advantage of being able to provide a "console" implementation to PowerShell (via the PSHostRawUserInterface). This works better, but you can still invoke commands (mostly real console applications) like "... | more" that expect to be able to use the console API and subsequently try to read from the console of the wrapper process.
So what I'd like to do is have a set of functions replace the regular console input/output functions that console applications use, so I can handle them. But that seems rather drastic to the point of being a bad design idea (imo).
Right now I am considering manipulating the console by sending the relevant keys with native/P/Invoke functions like WriteConsoleInput. I gather that it might be possible to fake the console that way, but I don't see how I would then "read" what happens on the console.
Also keep in mind that it's a service, so preferably it shouldn't spawn an actual console window, although perhaps in Windows session 0 that wouldn't show up and wouldn't matter.
You've got PSSession for this purpose and the Enter-PSSession cmdlet. What will your SSH with PowerShell do that PSSession is not doing?
But if you want to do that, here is a solution without writing anything: Using PowerShell through SSH.
Edited 02/11/2011
PowerShellInside provides another way to do it without writing anything (free for personal use).
The Host03 sample can perhaps provide basic code to do what you want to do.
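For reference, a minimal sketch of hosting PowerShell in-process with System.Management.Automation (the script is a placeholder; a full custom host like Host03 additionally implements PSHost and PSHostUserInterface):

    using System;
    using System.Management.Automation;   // a reference to System.Management.Automation is assumed

    using var ps = PowerShell.Create();
    ps.AddScript("Get-Process | Sort-Object CPU -Descending | Select-Object -First 5");   // placeholder script

    foreach (PSObject result in ps.Invoke())
        Console.WriteLine(result);

    // Errors do not throw by default; check the error stream explicitly.
    foreach (ErrorRecord error in ps.Streams.Error)
        Console.Error.WriteLine(error);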
I installed PowerShellInside as suggested by JPBlanc, but didn't use it for very long. The one-connection limit is just too restrictive, and I don't like being limited (especially if that limitation is profit based, but that's a whole other discussion I shouldn't get into). And despite being a solution to the original problem, it feels unsatisfactory because it doesn't solve the programming problem I ran into.
However, I did eventually manage to solve said problem, indeed by using the Windows API calls in a wrapper process. Because there are quite a few pitfalls, I decided to answer my own question and give others looking at the same problem some pointers. The basic structure is as follows:
Start the wrapper process with redirected stdin/stdout (and stderr if you want). (In my case stdin and stdout will be streams of xterm control sequences and data, because that is the SSH way.)
Using GetStdHandle(), retrieve the redirected input and output handles. Next, SetStdHandle() them to the CreateFile() handles of "CONIN$" and "CONOUT$", so that child processes inherit the console and do not have the redirections of the wrapper process. (Note that a security descriptor allowing inheritance is needed for CreateFile; see the P/Invoke sketch after these steps.)
Set up the console mode, size, title, Ctrl-C handlers, etc. Note: be sure to set a font if you want Unicode support; I used Lucida Console (.FontFamily = 54, .FaceName = "Lucida Console"). Without this, reading the characters from your console output will return codepaged versions, which are horrible to work with in managed code.
Reading output can be done with SetWinEventHook(); be sure to use out-of-context notification, because I'm pretty sure that having your managed application suddenly run in another process's context/address space is a Bad Idea™ (I'm so sure that I didn't even try). The event will fire for every console window, not just your own, so filter all calls to the callback by window handle. Retrieve the window handle of the current console application with GetConsoleWindow(). Also don't forget to unhook the callback when the application is done.
Note: up to this point, be sure not to use (or do anything that causes the loading of) the System.Console class, or things will more than likely go wrong. Usage after this point will behave as if the subprocess had written to the output.
Spawn the needed subprocess. (Note: you must use .UseShellExecute = false or it will not inherit the console.)
You can start providing input to the console using WriteConsoleInput()
At this point (or on a separate thread) you have to run a Windows message loop or you will not receive console event notification callbacks. You can simply use the parameterless Application.Run() to do this. To break the message loop, you must at some point post an exit message to it; I did this with Application.Exit() in the subprocess's .Exited event. (Note: .EnableRaisingEvents must be set for this to work.)
Calls will now be made to your win event callback when something on your console changes. Pay attention to the scroll event; it might behave somewhat unexpectedly. Also make no assumptions about synchronous delivery: if the subprocess writes three lines, by the time you are processing the first event the remaining lines might already have been written. To be fair, Windows does a nice job of coalescing events so that you don't get swamped with single-character changes and can keep up with the changes.
Be sure to mark all P/Invoke definitions with CharSet = CharSet.Unicode if they contain a character anywhere in their input or output. PInvoke.net misses quite a few of these.
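To make the handle-swapping step more concrete, here is a minimal, hedged P/Invoke sketch of that part of the wrapper (standard Win32 declarations; error handling and cleanup are omitted, and wiring the returned handles into your stream handling is up to you):

    using System;
    using System.Runtime.InteropServices;

    static class ConsoleRedirect
    {
        const int STD_INPUT_HANDLE = -10;
        const int STD_OUTPUT_HANDLE = -11;
        const uint GENERIC_READ = 0x80000000;
        const uint GENERIC_WRITE = 0x40000000;
        const uint FILE_SHARE_READ = 0x1;
        const uint FILE_SHARE_WRITE = 0x2;
        const uint OPEN_EXISTING = 3;

        [StructLayout(LayoutKind.Sequential)]
        struct SECURITY_ATTRIBUTES
        {
            public int nLength;
            public IntPtr lpSecurityDescriptor;
            public bool bInheritHandle;   // marshals as a 4-byte Win32 BOOL
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr GetStdHandle(int nStdHandle);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetStdHandle(int nStdHandle, IntPtr hHandle);

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern IntPtr CreateFile(string lpFileName, uint dwDesiredAccess, uint dwShareMode,
            ref SECURITY_ATTRIBUTES lpSecurityAttributes, uint dwCreationDisposition,
            uint dwFlagsAndAttributes, IntPtr hTemplateFile);

        // Returns the original redirected handles so the wrapper can keep talking to the outside world.
        public static (IntPtr redirectedIn, IntPtr redirectedOut) AttachChildrenToRealConsole()
        {
            IntPtr redirectedIn = GetStdHandle(STD_INPUT_HANDLE);
            IntPtr redirectedOut = GetStdHandle(STD_OUTPUT_HANDLE);

            // Inheritable handles so child processes get the real console, not the wrapper's pipes.
            var sa = new SECURITY_ATTRIBUTES
            {
                nLength = Marshal.SizeOf<SECURITY_ATTRIBUTES>(),
                bInheritHandle = true
            };
            IntPtr conIn = CreateFile("CONIN$", GENERIC_READ | GENERIC_WRITE,
                FILE_SHARE_READ | FILE_SHARE_WRITE, ref sa, OPEN_EXISTING, 0, IntPtr.Zero);
            IntPtr conOut = CreateFile("CONOUT$", GENERIC_READ | GENERIC_WRITE,
                FILE_SHARE_READ | FILE_SHARE_WRITE, ref sa, OPEN_EXISTING, 0, IntPtr.Zero);

            SetStdHandle(STD_INPUT_HANDLE, conIn);
            SetStdHandle(STD_OUTPUT_HANDLE, conOut);
            return (redirectedIn, redirectedOut);
        }
    }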
The net result of all of this: a wrapper application for the Windows console API. The wrapper can read/write the redirected stdin and stdout to communicate with the world. Of course, if you want to get fancy you could use any stream here (named pipe, TCP/IP, etc.). I implemented a few xterm control sequences and managed to get a fully working terminal wrapper that should be capable of wrapping any Windows console process, translating the xterm input into input on the target application's console and translating the application's output into xterm control sequences. I even got the mouse to work. Starting powershell.exe as a subprocess now solves the original problem of running PowerShell in an SSH session. Cmd.exe also works. If anyone is interested I'll see about posting the full code somewhere.
I have a C++ application that needs to communicate with a C# application (a Windows service) running on the same machine. I want the C++ application to be able to write as many messages as it wants, without knowing or caring when/if the C# app is reading them, or even whether it's running. The C# app should just be able to wake up every now and then and request the latest messages, even if the C++ app has been shut down.
What is the simplest way to achieve this? I think this kind of thing is what MSMQ is for, but I haven't found a good way to do it in C++. I'm using Named Pipes right now, but that's not really working out as the way I'm doing it requires a connection between the two apps, and the C++ call to WriteLine blocks until the read takes place.
Currently the best solution I can think of is just writing the messages to a file with a timestamp on each message that the C# application checks periodically against its last update timestamp. That seems a little crude, though.
What is the simplest way to achieve this sort of messaging?
I would use a named pipe.
Well, the simplest way actually is using a file to store the messages. I would suggest using an embedded database like SQLite, though: the advantage will be better performance and a nice way to query for changes (i.e. SELECT * FROM messages WHERE timestamp > last_app_start).
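As a rough sketch of the C# reader side, assuming the Microsoft.Data.Sqlite package and a hypothetical messages table (timestamp stored as ISO-8601 text, body as text) that the C++ writer populates:

    using System;
    using Microsoft.Data.Sqlite;   // assumed NuGet package; System.Data.SQLite would look much the same

    // Timestamps are assumed to be stored as ISO-8601 text by the C++ writer.
    string lastAppStart = "2024-01-01T00:00:00";   // e.g. persisted from the previous run

    using var conn = new SqliteConnection(@"Data Source=C:\queue\messages.db");   // hypothetical shared file
    conn.Open();

    var cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT timestamp, body FROM messages WHERE timestamp > $since ORDER BY timestamp";
    cmd.Parameters.AddWithValue("$since", lastAppStart);

    using var reader = cmd.ExecuteReader();
    while (reader.Read())
        Console.WriteLine($"{reader.GetString(0)} {reader.GetString(1)}");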
MSMQ definitely sounds like what you want, or the more basic approach of reading and writing files in a common area, but then you need to watch out for contention on the files.
VC++ help on MSMQ.
The requirement of both apps not always running at the same time but still being able to message each other definitely means you need a third component to store/queue messages. Whether you use a shared database/file or you write a third app that acts as a message store is up to you. Either way you will find sharing always causes contention.
Personally I would look at 0MQ before MSMQ, but neither will solve your problem as is. An SQLite database would be my first choice.
I have a console application that will be kicked off by a scheduler. If, for some reason, part of that file cannot be built, I need a GUI front end so we can run it the next day with specific input.
Is there a way to pass parameters to the application entry point to start either the console application or the GUI application based on the arguments passed?
It sounds like what you want is to either run as a console app or a Windows app based on a command-line switch.
If you look at the last message in this thread, Jeffrey Knight posted code to do what you are asking.
However, note that many "hybrid" apps actually ship two different executables (look at Visual Studio: devenv.exe is the GUI, devenv.com is the console). Using a "hybrid" approach can sometimes lead to hard-to-track-down issues.
Go to your main method (Program.cs). You'll put your logic there, determine what to do, and conditionally execute Application.Run().
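A minimal sketch of such a Main, assuming a hypothetical --gui switch, a placeholder MainForm, and a placeholder RunBatch for the scheduled path:

    using System;
    using System.Linq;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            if (args.Contains("--gui", StringComparer.OrdinalIgnoreCase))   // hypothetical switch
            {
                Application.EnableVisualStyles();
                Application.Run(new MainForm());   // show the GUI for the manual re-run
            }
            else
            {
                RunBatch(args);   // hypothetical scheduled console path
            }
        }

        static void RunBatch(string[] args)
        {
            Console.WriteLine("Running scheduled build...");
            // ... the normal console logic goes here ...
        }
    }

    class MainForm : Form { }   // placeholder for the real WinForms front end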
I think Philip is right. Although I've been using the "hybrid" approach in a widely deployed commercial application without problems. I did have some issues with the "hybrid" code I started out with, so I ended up fixing them and re-releasing the solution.
So feel free to take advantage of it. It's actually quite simple to use. The hybrid system is on google code and updates an old codeguru solution of this technique and provides the source code and working example binaries.
Write the GUI output to a file that the console app checks when loading. This way your console app can do the repair operations and the normal operations in one scheduled operation.
One solution to this would be to have the console app write the config file for a GUI app (WinForms is simplest).
I like the hybrid approach; the command-line switch appears to be fluff.
It could be simpler to have two applications using the same engine for common functionality. The way to think of it is that the console app is for computers to use while the GUI app is for humans to use. Since the CLI app executes first, it can communicate its data to the GUI app through the config file.
One side benefit would be that the interface to the processing engine would be more concise and thus easier to maintain in the future.
This would be the simplest, because the Config file mechanism is easily available and you do not have to write a bunch of formatting and parsing routines.
If you don't want to use the config mechanism, you could also serialize directly to JSON or XML in a file to transfer the data easily.
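For instance, a hedged sketch of handing a batch result from the console run to the GUI via XmlSerializer (the BatchResult type and its fields are made up for illustration):

    using System;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical data handed from the console run to the GUI front end.
    public class BatchResult
    {
        public string BatchId { get; set; } = "";
        public DateTime RunDate { get; set; }
        public bool NeedsManualRerun { get; set; }
    }

    public static class BatchResultFile
    {
        static readonly XmlSerializer Serializer = new XmlSerializer(typeof(BatchResult));

        public static void Save(BatchResult result, string path)
        {
            using var stream = File.Create(path);
            Serializer.Serialize(stream, result);
        }

        public static BatchResult Load(string path)
        {
            using var stream = File.OpenRead(path);
            return (BatchResult)Serializer.Deserialize(stream)!;
        }
    }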