We have a bit of a complicated scenario in the office where we have multiple standalone applications that can also be combined into a single workflow. I'm now looking into strategies to avoid running half a dozen apps for this one workflow and I'm fairly confident that the most appropriate solution is to write an over-arching app that runs these smaller apps in sequence.
The apps don't rely on each other's results as such, but they must be run in a specific order, and you can't run step 2 if step 1 fails, etc. Roll-back isn't necessary. Some of the apps are used in standalone scenarios as well as in this workflow, so it seems like a controlling application would allow me to re-use those apps rather than duplicate code.
A controlling app also allows for the workflow to be extensible; I can "plug in" a new step between step 1 and step 2 if there is a required amendment to the workflow. Further, it should allow me to do things like build a queue system so that the workflow can just be constantly run.
Am I on the right track with my thoughts? Are there limitations to this approach?
1) If you have the source code of these smaller apps, the best thing to do would be to create a single application that acts as a "workspace", with the various steps of the work included directly in this bigger app.
The benefits of this approach:
faster execution (instead of loading a new process/application each time, you use only one)
simpler deployment (one application is simpler than X)
a better, customized GUI
2) If, on the other hand, you don't have the source code of these apps, so recreating them is impossible (excluding reverse-engineering), your approach seems to be the only one possible for your scenario.
In this case, if these apps don't have an API you can use, the simplest working approach would be to use the System.Diagnostics.Process
class to start a process for every app involved whenever necessary.
Here is an example of this approach:
Process process = new Process();
string path = @"C:\path\to\the\app1.exe";
ProcessStartInfo processStartInfo = new ProcessStartInfo(path);
processStartInfo.UseShellExecute = false;
process.StartInfo = processStartInfo;
process.Start();
and so on, every time you want to launch an application.
For killing these applications you have two possibilities: kill the process manually
by calling process.Kill(), or let the application exit on its own.
Related
The title of my question might already give away the fact that I'm not sure about what I want, as it might not make sense.
For a project I want to be able to run executables within my application, while redirecting their standard in and out so that my application can communicate with them through those streams.
At the same time, I do not want to allow these executables to perform certain actions like use the network, or read/write outside of their own working directory (basically I only want to allow them to write and read from the standard in and out).
I read in different places on the internet that these permissions can be set with PermissionStates when creating an AppDomain in which you can then execute the executables. However, I did not find a way to then communicate with the executables through their standard in and out, which is essential. I can do this when starting a new Process (Process.Start()), but then I cannot set boundaries on what the executable is allowed to do.
My intuition tells me I should somehow execute the Process inside the AppDomain, so that the process kind of 'runs' in the domain, though I cannot see a way to directly do that.
A colleague of mine accomplished this by creating a proxy-application, which basically is another executable in which the AppDomain is created, in which the actual executable is executed. The proxy-application is then started by a Process in the main application. I think this is a cool idea, though I feel like I shouldn't need this step.
I could add some code containing what I've done so far creating a process and appdomain, though the question is pretty long already. I'll add it if you want me to.
The "proxy" application sounds like a very reasonable approach (given that you only ever want to run .NET assemblies).
You get the isolation of different processes, which allows you to communicate via stdin/stdout and gives the additional robustness that the untrusted executable cannot crash your main application (which it could if it were running in an AppDomain inside your main application's process).
The proxy application would then set up a restricted AppDomain and execute the sandboxed code, similar to the approach described here:
How to: Run Partially Trusted Code in a Sandbox
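The host side of that setup, i.e. launching the proxy executable with redirected standard streams, might look like the sketch below. The proxy path and the command protocol are placeholders; the sandbox configuration itself lives inside the proxy, as in the linked article.

```csharp
using System;
using System.Diagnostics;

class ProxyHost
{
    static void Main()
    {
        var startInfo = new ProcessStartInfo(@"C:\path\to\SandboxProxy.exe")
        {
            UseShellExecute = false,       // required for stream redirection
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
        };

        using (var proxy = Process.Start(startInfo))
        {
            // Communicate over stdin/stdout as with any child process.
            proxy.StandardInput.WriteLine("some command");   // placeholder protocol
            string reply = proxy.StandardOutput.ReadLine();
            Console.WriteLine(reply);
            proxy.WaitForExit();
        }
    }
}
```

Because the sandboxing happens inside the proxy process, the main application keeps the plain Process-based stream communication it already knows how to do.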
In addition, you can make use of operating-system-level mechanisms to reduce the attack surface of a process. This can be achieved, e.g., by starting the proxy process with lowest integrity, which removes write access to most resources (e.g. allowing files to be written only in AppData\LocalLow). See here for an example.
Of course, you need to consider whether this level of sandboxing is sufficient for you. Sandboxing, in general, is hard, and any isolation will only ever be partial.
Hi all. This may seem like a duplicate question, there being plenty of questions about elevating and de-elevating child processes on here.
I'm aware of using UseShellExecute = true together with the Verb = "runas" option to start one's child process with elevated rights.
Example (found in many of the answers):
var info = new ProcessStartInfo(Assembly.GetEntryAssembly().Location)
{
    UseShellExecute = true, // required for the "runas" verb to take effect
    Verb = "runas",
};
Process.Start(info); // triggers the UAC prompt
My problem here is not starting child processes with elevated access, I'd like to know if there is a way to prompt for elevated access only once, after which starting the same child process a few times, say 8 of them for example.
The application I'm working on uses a Queuing system to process tasks. According to the user's number of cores, a child process (QueueProcessor) is started for each of the cores, which then starts processing this queue. So in essence, I would like to check the user's rights, prompt once for elevated rights, and start all 8 Queue Processors.
I can't use the manifest route to require the entire application to run with admin rights, as this is not an option for many of our users. EDIT as per @MPatel's comment: To clarify, we specifically don't force users to run the application as an Administrator, as per design decisions.
And obviously you can see how it would become annoying when you'd have to click the UAC dialogue 8 times each time you'd want to run one of these tasks.
I came across this article: Vista UAC The Definitive Guide, during my searches, but I'm having trouble implementing the C++ dll in my C#, .NET 3.5 project. This is the article that basically lead me to ask whether this is possible.
One other thing which might be worth mentioning: I'm not sure if I'm misunderstanding the problem slightly. The users have to have at least enough access to use the basic SQL functionality our program uses. So I think my problem might be less about having the child process run with full admin rights than simply getting the child process to start up. The issue I'm getting is that .NET won't allow certain users to run child processes from the running app. Having the user run the main app using "Run As Administrator" seems to fix this problem. Is there some UAC-specific right I can possibly set once through a prompt to simply allow the main app to start the child .exe's?
You could write your own process (C# application) that will be started by your main application with elevated rights.
Then, whenever you need to start a process with elevated rights, you forward this request to your elevated process using some kind of inter-process communication (named pipes, TcpClient, etc.), and that one will start the process as usual, resulting in an elevated process because it was started from an elevated one.
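A minimal sketch of that broker pattern is below. The pipe name and the one-path-per-line message format are made up for illustration; the point is that the broker itself is started once with Verb = "runas" (one UAC prompt), and every child it spawns afterwards inherits the elevated token without further prompts.

```csharp
// Elevated broker process: started once by the main app with
// UseShellExecute = true and Verb = "runas" (a single UAC prompt).
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;

class ElevationBroker
{
    static void Main()
    {
        // Pipe name is illustrative; the main app connects with a
        // NamedPipeClientStream using the same name.
        using (var server = new NamedPipeServerStream("ElevationBroker"))
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            {
                string exePath;
                // Protocol (assumed): one executable path per line.
                while ((exePath = reader.ReadLine()) != null)
                {
                    // Started from an elevated process, so no new UAC prompt.
                    Process.Start(exePath);
                }
            }
        }
    }
}
```

For the asker's case, the main app would send the QueueProcessor path eight times (once per core) over the pipe after a single elevation prompt.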
First of all, Microsoft released a guideline about handling UAC. The philosophy behind showing annoying user prompts is that Microsoft wants developers to create applications with minimal permissions from scratch. I do NOT think there is some magic parameter to fork several elevated child processes with only one user prompt.
Compared with the solution of elevating your main application, you can create a server/service application and move all administrative work into it. That application will run without a UI in the background and will show the UAC prompt only once. You can set up a thread pool with a particular pool size to handle tasks efficiently. When you create any child process, the child process can run non-administrative tasks locally and silently. To do an administrative task, it will communicate with the server/service application using any IPC technique.
I hope my advice can help you. Sure, the solution requires extra coding effort. I can share more leads, if you think the approach is doable for you.
I have a console application that writes to a txt file information retrieved from a database. Until now I have manually executed the executable generated by the console application.
Now I need to automate the invocation of the .exe from my web application, so that each time a specific condition occurs in my code-behind I can run the .exe with "fire and forget" logic.
My goals are:
1) Users must not be affected in any way by the console application execution (the SQL queries and txt file generation might take around 3 to 5 minutes), hence the "fire and forget" logic delegated to a separate process.
2) Since the executable will still be run manually in some cases, I would prefer having all the logic in one place, to avoid the risk of divergent behaviour.
Can I safely use System.Diagnostics.Process to achieve this?
// The static Process.Start overload launches the exe and returns the Process object.
System.Diagnostics.Process cmd = System.Diagnostics.Process.Start("Logger.exe");
Does the process end automatically, or do I have to set a timeout and explicitly close it? Is it "safe", in a web application environment with different users accessing the web application, to let them call the executable without the risk of concurrent accesses?
Thanks.
EDIT:
Changed to use the built in class for more clarity, thanks for the hint.
As far as the mechanics, I assume CommandLineProcess wraps Process? If so, I don't see anything necessarily wrong with it, at first glance. I just have some issue with running this as an executable from a web application, as you are more likely to reduce security to get it working than rearchitect (if you follow the normal path I see in development).
If you encapsulate the actual business code in a class library, you can run the code in the web application. The main rule is that the folder it saves to should be under the webroot (physically or logically) so you don't have to reduce security. But if the logic is encapsulated, you can run the "file creator" in the web process without spinning up a Process.
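To make the class-library suggestion concrete, here is a sketch. The type and method names are illustrative, not from the original post; the point is that the same library is referenced by both the console app and the web app, so the logic lives in exactly one place.

```csharp
// Shared class library, referenced by BOTH the console app and the web app.
public class ReportLogger // illustrative name
{
    public void WriteReport(string outputFolder)
    {
        // ... run the SQL queries and write the txt file here ...
    }
}

public static class Callers
{
    // Console app entry point simply delegates to the library:
    public static void ConsoleMain(string[] args)
    {
        new ReportLogger().WriteReport(args[0]);
    }

    // Web app "fire and forget": run the same library code on a
    // background thread instead of spawning a Process.
    public static void FireAndForget(string pathUnderWebroot)
    {
        System.Threading.ThreadPool.QueueUserWorkItem(
            _ => new ReportLogger().WriteReport(pathUnderWebroot));
    }
}
```

This keeps the "one place for the logic" goal from the question while avoiding the security and lifetime questions that come with launching an external .exe from a web process.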
Your other option is wrap the process in a service (I like a non-HTTP WCF service, but you can go windows service, if you want). I would only go this direction if it makes sense to follow a SOA path with a service endpoint. As this is likely to be isolated to a single application, in process makes more sense (unless you are saving to a directory outside of webroot).
Hope this makes sense.
Yes, it will die on its own, provided that the .exe file terminates on its own. It will run with the same credentials as the web server.
Keep in mind this is considered unsafe, since you are executing code based on whatever your webapp is doing. However, the problem is with .exe files being executed this way in general and not with the actual users accessing the app.
Similar question here How do I run a command line process from a web application?
I've created an exe file that does some maintenance work on my server.
I want to be able to launch it from the website that sits on the server.
The exe has to be launched on the server itself and not on the client.
My instincts tell me it's not possible, but I had to check with you guys.
If I need to set certain permissions / security - I can.
Yes, it can be done, but it's not recommended.
An ideal solution for running maintenance scripts/executables is to schedule them using cron on Unix/Linux systems or using Scheduled Tasks in Windows. Some advantages of the automated approach vs. remote manual launch:
The server computer is self-maintaining. Clients can fail and people can forget. As long as the server is running the server will be keeping itself up to date, regardless of the status of client machines or persons.
When will the executable be launched? Every time a certain page is visited? What if the page is refreshed? For a resource-intensive script/executable this can severely degrade server performance. You'll need to code rules to handle multiple requests and monitor running maintenance processes. Cron & scheduled tasks handle these possibilities already.
A very crude option, Assuming IIS: Change Execute Access from "Scripts Only" or "None" to "Scripts and Executables"
To make this less crude, you should have the executable implement a CGI interface (if that is under your control).
And, if you want to use ASP.NET to add authorization/authentication, the code (C#) to do this would be:
var startInfo = new System.Diagnostics.ProcessStartInfo(@"C:\file.exe");
var process = new System.Diagnostics.Process();
process.StartInfo = startInfo;
process.Start();
process.WaitForExit();
It's possible, but almost certainly it's a bad idea. What environment/webserver? You should just need to set the relevant 'execute' permissions for the file.
I really suggest that you don't do this, however, and configure the task to run automatically in the background. The reasoning is that, configured badly, you could end up letting people run any executable, and depending on other factors, completely take over your machine.
It depends what language you're using; most server-side scripting languages give you a way to execute shell commands, for example:
$result = `wc -l /etc/passwd`;
which executes a Unix command from Perl.
Most web languages (I know at least Java and PHP) allow you to execute a command line argument from within a program.
I have a console application that will be kicked off with a scheduler. If for some reason part of that file cannot be built, I need a GUI front end so we can run it the next day with specific input.
Is there a way to pass parameters to the application entry point so it starts either the console application or the GUI application based on the arguments passed?
It sounds like what you want is to either run as a console app or a windows app based on a commandline switch.
If you look at the last message in this thread, Jeffrey Knight posted code to do what you are asking.
However, note that many "hybrid" apps actually ship two different executables (look at visual studio- devenv.exe is the gui, devenv.com is the console). Using a "hybrid" approach can sometimes lead to hard to track down issues.
Go to your main method (Program.cs). You'll put your logic there, determine what to do, and conditionally execute Application.Run().
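A minimal sketch of that switch might look like the following. The `--gui` flag, the `MainForm` class, and `RunConsoleTasks` are illustrative names, and note that a WinForms-type project won't attach a real console window without extra work (e.g. AllocConsole), which is part of why hybrid apps are tricky.

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "--gui")
        {
            // Human-driven repair run with a front end.
            Application.EnableVisualStyles();
            Application.Run(new MainForm()); // MainForm is a placeholder
        }
        else
        {
            // Scheduled, unattended run.
            RunConsoleTasks(args);
        }
    }

    static void RunConsoleTasks(string[] args)
    {
        // ... normal batch processing ...
    }
}
```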
I think Philip is right. Although I've been using the "hybrid" approach in a widely deployed commercial application without problems. I did have some issues with the "hybrid" code I started out with, so I ended up fixing them and re-releasing the solution.
So feel free to take advantage of it. It's actually quite simple to use. The hybrid system is on google code and updates an old codeguru solution of this technique and provides the source code and working example binaries.
Write the GUI output to a file that the console app checks when loading. This way your console app can do the repair operations and the normal operations in one scheduled operation.
One solution to this would be to have the console app write the config file for a GUI app (WinForms is simplest).
I like the Hybrid approach, the command line switch appears to be fluff.
It could be simpler to have two applications using the same engine for common functionality. The way to think of it is that the console app is for computers to use while the GUI app is for humans to use. Since the CLI app executes first, it can communicate its data to the GUI app through the config file.
One side benefit would be the interface to the processing engine would be more concise thus easier to maintain in the future.
This would be the simplest, because the Config file mechanism is easily available and you do not have to write a bunch of formatting and parsing routines.
If you don't want to use the Config mechanism, you could directly write JSON or XML serialization to a file to easily transfer the data as well.