I have a console application that writes information retrieved from a database to a txt file. Until now I have manually executed the executable generated by the console application.
Now I need to automate the invocation of the .exe from my web application, so that each time a specific condition occurs in my code-behind I can run the .exe with a "fire and forget" logic.
My goals are:
1) Users must not be affected in any way by the console application's execution (the SQL queries and txt file generation might take around 3 to 5 minutes), hence the "fire and forget" logic delegated to a separate process.
2) Since the executable will still be run manually in some cases, I would prefer having all the logic in one place, to avoid the risk of diverging behaviour.
Can I safely use System.Diagnostics.Process to achieve this?
System.Diagnostics.Process cmd = new System.Diagnostics.Process();
cmd.StartInfo.FileName = "Logger.exe";
cmd.Start();
Does the process end automatically, or do I have to set a timeout and explicitly close it? Is it "safe" in a web application environment, where different users of the web application can trigger the executable, without the risk of concurrent accesses?
Thanks.
EDIT:
Changed to use the built-in class for more clarity, thanks for the hint.
As far as the mechanics, I assume CommandLineProcess wraps Process? If so, I don't see anything necessarily wrong with it at first glance. I just have some issue with running this as an executable from a web application, as you are more likely to reduce security to get it working than to rearchitect (if you follow the normal path I see in development).
If you encapsulate the actual business code in a class library, you can run the code in the web application. The main rule is that the folder it saves to should be under webroot (physically or logically) so you don't have to reduce security. But if the logic is encapsulated, you can run the "file creator" in the web process without spinning up a Process.
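A minimal sketch of that encapsulation; all the names here (ReportGenerator, WriteReport) are illustrative, not taken from the original code:

// Shared class library, referenced by both the console app and the web app.
public class ReportGenerator
{
    public void WriteReport(string outputFolder)
    {
        // run the SQL queries and write the txt file here, in one place
    }
}

// Console app (manual runs):
//   new ReportGenerator().WriteReport(args[0]);

// Web app (automated runs), queued to a background thread so the request
// returns immediately:
//   System.Threading.ThreadPool.QueueUserWorkItem(
//       _ => new ReportGenerator().WriteReport(Server.MapPath("~/App_Data")));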
Your other option is to wrap the process in a service (I like a non-HTTP WCF service, but you can go with a Windows service if you want). I would only go this direction if it makes sense to follow an SOA path with a service endpoint. As this is likely to be isolated to a single application, in-process makes more sense (unless you are saving to a directory outside of webroot).
Hope this makes sense.
Yes, it will die on its own - provided that the .exe terminates on its own. It will run with the same credentials as the web server.
Keep in mind this is considered unsafe, since you are executing code based on whatever your webapp is doing. However, the problem is with .exe files being executed this way in general, not with the actual users accessing the app.
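For the fire-and-forget part, a minimal sketch (assuming the Logger.exe from the question); the Exited handler is just one way to release the process handle without blocking the request:

var logger = new System.Diagnostics.Process();
logger.StartInfo = new System.Diagnostics.ProcessStartInfo("Logger.exe")
{
    UseShellExecute = false,  // run directly under the worker process account
    CreateNoWindow = true     // no console window on the server
};
logger.EnableRaisingEvents = true;
logger.Exited += (s, e) => logger.Dispose();  // release the handle once the exe terminates
logger.Start();                               // no WaitForExit(): fire and forget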
Similar question here: How do I run a command line process from a web application?
Related
The title of my question might already give away the fact that I'm not sure about what I want, as it might not make sense.
For a project I want to be able to run executables within my application, while redirecting their standard in and out so that my application can communicate with them through those streams.
At the same time, I do not want to allow these executables to perform certain actions like use the network, or read/write outside of their own working directory (basically I only want to allow them to write and read from the standard in and out).
I read in various places on the internet that these permissions can be set with permission sets (PermissionState) when creating an AppDomain, in which you can then execute the executables. However, I did not find a way to then communicate with the executables through their standard in and out, which is essential. I can, however, do this when starting a new Process (Process.Start()), though then I cannot set boundaries as to what the executable is allowed to do.
My intuition tells me I should somehow execute the Process inside the AppDomain, so that the process kind of 'runs' in the domain, though I cannot see a way to directly do that.
A colleague of mine accomplished this by creating a proxy-application, which basically is another executable in which the AppDomain is created, in which the actual executable is executed. The proxy-application is then started by a Process in the main application. I think this is a cool idea, though I feel like I shouldn't need this step.
I could add some code showing what I've done so far, creating a Process and an AppDomain, though the question is pretty long already. I'll add it if you want me to.
The "proxy" application sounds like a very reasonable approach (given that you only ever want to run .NET assemblies).
You get the isolation of different processes, which allows you to communicate via stdin/stdout and gives the additional robustness that the untrusted executable cannot crash your main application (which it could if it were running in an AppDomain inside your main application's process).
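A minimal sketch of the parent side; the proxy name SandboxProxy.exe and its single path argument are assumptions for illustration:

var psi = new System.Diagnostics.ProcessStartInfo("SandboxProxy.exe", @"C:\untrusted\Plugin.exe")
{
    UseShellExecute = false,        // required for stream redirection
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
    CreateNoWindow = true
};

using (var proxy = System.Diagnostics.Process.Start(psi))
{
    proxy.StandardInput.WriteLine("ping");          // write to the sandboxed code's stdin
    string reply = proxy.StandardOutput.ReadLine(); // read its stdout
}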
The proxy application would then set up a restricted AppDomain and execute the sandboxed code, similar to the approach described here:
How to: Run Partially Trusted Code in a Sandbox
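Inside the proxy, the restricted AppDomain could be set up roughly like this (a sketch following that article; args[0] is assumed to be the path of the untrusted assembly):

using System;
using System.IO;
using System.Security;
using System.Security.Policy;

class Proxy
{
    static void Main(string[] args)
    {
        var evidence = new Evidence();
        evidence.AddHostEvidence(new Zone(SecurityZone.Internet));

        // Internet-zone grant set: execution allowed, file/network access denied.
        PermissionSet permissions = SecurityManager.GetStandardSandbox(evidence);

        var setup = new AppDomainSetup { ApplicationBase = Path.GetDirectoryName(args[0]) };
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", evidence, setup, permissions);

        // Console streams are process-wide, so the sandboxed code still talks
        // over the stdin/stdout that the main application redirected.
        sandbox.ExecuteAssembly(args[0]);
    }
}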
In addition, you can make use of operating-system-level mechanisms to reduce the attack surface of a process. This can be achieved, e.g., by starting the proxy process with low integrity, which removes write access to most resources (e.g. it allows writing files only under AppData\LocalLow). See here for an example.
Of course, you need to consider whether this level of sandboxing is sufficient for you. Sandboxing, in general, is hard, and the isolation will always be only partial.
I have an ASP.NET (C#) website that uses a third-party DLL to process the data that users POST via a web form. The call is pretty straightforward:
string result = ThirdPartyLib.ProcessData(myString);
Once in a blue moon this library hangs and (according to my hosting provider's logs) consumes 100% of the CPU. The website is on shared hosting, so I have no access to IIS or the event logs. When this happens, my website is automatically stopped by the hosting provider's performance monitor, and I have to switch it back on manually.
Now, I know that the right thing to do is investigate the problem and fix (or replace) the DLL. But as it's third-party software, I am unable to fix it, and their support is not helpful at all. Moreover, I can't reproduce the problem. Replacing the library is a pain too.
Is there a way in C# to detect when this DLL starts consuming 100% CPU and to kill the process automatically from my ASP.NET code?
You cannot "detect" if the current process is hanging because as the caller of a method (third party or not) you're simply not in control until it returns.
What you can do is move the call to the third party library into a separate executable and have it output its result via the standard output (you can simply use Console.WriteLine(string) for this).
Once you've done that, you can start a separate Process that runs this executable, read the result via StandardOutput and use WaitForExit(int) to wait a certain amount of time (maybe a few seconds) for the process to finish. The return value of WaitForExit() tells you whether the process actually exited. If it didn't, you can Kill() it and move on without the IIS worker process hanging as a whole.
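Put together, a sketch of that pattern; the wrapper name ThirdPartyRunner.exe is hypothetical (it would call ThirdPartyLib.ProcessData and print the result via Console.WriteLine), and ReadToEndAsync assumes .NET 4.5+ (reading asynchronously avoids a deadlock if the child fills its output buffer):

var psi = new System.Diagnostics.ProcessStartInfo("ThirdPartyRunner.exe", myString)
{
    UseShellExecute = false,
    RedirectStandardOutput = true,
    CreateNoWindow = true
};

using (var proc = System.Diagnostics.Process.Start(psi))
{
    var output = proc.StandardOutput.ReadToEndAsync();  // read without blocking

    if (proc.WaitForExit(10000))        // give the wrapper 10 seconds
    {
        string result = output.Result;  // clean exit: the full stdout
    }
    else
    {
        proc.Kill();                    // hung: kill it, the worker process moves on
    }
}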
In a .NET web site I need to get code submitted by users, compile it and execute it. But I need the code to be executed in an isolated environment so that no malicious code can harm my system (for instance, no Directory.Delete("C:\Windows") should ever be executed).
Is it possible to execute code in a kind of chroot environment?
You can compile and run the code in a sandbox. This is a newly created AppDomain with restricted permissions.
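A minimal sketch of that combination, compiling with CodeDom and then executing in a domain whose grant set allows execution only; the paths, the submittedSource variable and the assumption that the user code contains a Main method are all illustrative:

using System;
using System.CodeDom.Compiler;
using System.Security;
using System.Security.Permissions;
using Microsoft.CSharp;

// submittedSource holds the user's code (assumed to contain a Main method).
var compiler = new CSharpCodeProvider();
var options = new CompilerParameters
{
    GenerateExecutable = true,
    OutputAssembly = @"C:\Sandbox\UserCode.exe"   // illustrative path
};
CompilerResults results = compiler.CompileAssemblyFromSource(options, submittedSource);
if (results.Errors.HasErrors)
    throw new InvalidOperationException("compilation failed");

// Grant set: permission to execute and nothing else.
var permissions = new PermissionSet(PermissionState.None);
permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

var setup = new AppDomainSetup { ApplicationBase = @"C:\Sandbox" };
AppDomain sandbox = AppDomain.CreateDomain("UserCodeSandbox", null, setup, permissions);

// Directory.Delete(@"C:\Windows") inside the user code now throws SecurityException.
sandbox.ExecuteAssembly(options.OutputAssembly);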
You can take a look at AppDomains: an isolated environment in which applications execute.
I've created an exe file that does some maintenance work on my server.
I want to be able to launch it from the website that sits on the server.
The exe has to be launched on the server itself and not on the client.
My instincts tell me it's not possible but I've had to check with you guys.
If I need to set certain permissions / security - I can.
Yes, it can be done, but it's not recommended.
An ideal solution for running maintenance scripts/executables is to schedule them using cron on Unix/Linux systems or using Scheduled Tasks in Windows. Some advantages of the automated approach vs. remote manual launch:
The server computer is self-maintaining. Clients can fail and people can forget. As long as the server is running the server will be keeping itself up to date, regardless of the status of client machines or persons.
When will the executable be launched? Every time a certain page is visited? What if the page is refreshed? For a resource-intensive script/executable this can severely degrade server performance. You'll need to code rules to handle multiple requests and monitor running maintenance processes. Cron & scheduled tasks handle these possibilities already.
A very crude option, assuming IIS: change Execute Access from "Scripts Only" or "None" to "Scripts and Executables".
To make this less crude, you should have the executable implement a CGI interface (if that is under your control).
And if you want to use ASP.NET to add authorization/authentication, the code (C#) to do this would be:
System.Diagnostics.Process process = new System.Diagnostics.Process();
process.StartInfo = new System.Diagnostics.ProcessStartInfo(@"C:\file.exe");
process.Start();
process.WaitForExit();
It's possible, but almost certainly it's a bad idea. What environment/webserver? You should just need to set the relevant 'execute' permissions for the file.
I really suggest that you don't do this, however, and configure the task to run automatically in the background. The reasoning is that, configured badly, you could end up letting people run any executable and, depending on other factors, completely take over your machine.
Depends on what language you're using; most server-side scripting languages give you a way to execute shell commands, for example:
$result=`wc -l /etc/passwd`;
executes a Unix command from Perl.
Most web languages (I know at least Java and PHP) allow you to execute a command-line program from within your code.
I have a console application that will be kicked off by a scheduler. If for some reason part of the file cannot be built, I need a GUI front end so we can run it the next day with specific input.
Is there a way to pass parameters to the application entry point to start either the console application or the GUI application, based on the arguments passed?
It sounds like what you want is to either run as a console app or as a Windows app based on a command-line switch.
If you look at the last message in this thread, Jeffrey Knight posted code to do what you are asking.
However, note that many "hybrid" apps actually ship two different executables (look at Visual Studio: devenv.exe is the GUI, devenv.com is the console). Using a "hybrid" approach can sometimes lead to hard-to-track-down issues.
Go to your main method (Program.cs). You'll put your logic there, determine what to do, and conditionally execute Application.Run(), as in the sketch below.
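A minimal sketch of that Main; the /console switch, RunScheduledBuild and RepairForm are illustrative names, not from the thread:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0].Equals("/console", StringComparison.OrdinalIgnoreCase))
        {
            RunScheduledBuild(args);            // unattended path for the scheduler
        }
        else
        {
            Application.EnableVisualStyles();
            Application.Run(new RepairForm());  // interactive path for the next-day fix
        }
    }

    static void RunScheduledBuild(string[] args) { /* build the file */ }
}

class RepairForm : Form { }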
I think Philip is right. Although I've been using the "hybrid" approach in a widely deployed commercial application without problems. I did have some issues with the "hybrid" code I started out with, so I ended up fixing them and re-releasing the solution.
So feel free to take advantage of it. It's actually quite simple to use. The hybrid system is on Google Code; it updates an old CodeGuru solution of this technique and provides the source code and working example binaries.
Write the GUI output to a file that the console app checks when loading. This way your console app can do the repair operations and the normal operations in one scheduled operation.
One solution to this would be to have the console app write the config file for a GUI app (WinForms is simplest).
I like the hybrid approach; the command-line switch appears to be fluff.
It could be simpler to have two applications using the same engine for common functionality. The way to think of it is that the console app is for computers to use, while the GUI app is for humans to use. Since the CLI app executes first, it can communicate its data through the config file to the GUI app.
One side benefit is that the interface to the processing engine would be more concise and thus easier to maintain in the future.
This would be the simplest approach, because the config file mechanism is readily available and you do not have to write a bunch of formatting and parsing routines.
If you don't want to use the config mechanism, you could also serialize the data directly to JSON or XML in a file for an easy hand-off, as in the sketch below.
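A sketch of the XML variant; RepairRequest, its properties and the file name are all illustrative:

using System.IO;
using System.Xml.Serialization;

// Hypothetical payload describing what the console run couldn't build.
public class RepairRequest
{
    public string FileName { get; set; }
    public string FailureReason { get; set; }
}

// Console app: record the failure for the GUI to pick up.
var serializer = new XmlSerializer(typeof(RepairRequest));
using (var writer = File.CreateText("pending-repair.xml"))
{
    serializer.Serialize(writer,
        new RepairRequest { FileName = "report.txt", FailureReason = "source row missing" });
}

// GUI app: load the request on startup and pre-fill the form.
RepairRequest request;
using (var reader = File.OpenText("pending-repair.xml"))
{
    request = (RepairRequest)serializer.Deserialize(reader);
}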