What Difference Does UseShellExecute Have? - c#

I am starting a small console application from within my IIS web application. The code is started from within an app pool using code like this,
Process process = new Process();
ProcessStartInfo processStartInfo = new ProcessStartInfo();
processStartInfo.CreateNoWindow = true;
processStartInfo.WindowStyle = ProcessWindowStyle.Hidden;
// ..
process.StartInfo = processStartInfo;
process.Start();
I used to intermittently get an error,
Win32Exception exception has occurred. Message: No such interface supported
ErrorCode: 80004005 NativeErrorCode: 80004002
I proved that when this happened the console application wouldn't start at all.
I added to the code above this,
processStartInfo.UseShellExecute = false;
And the problem has gone away (so far, fingers crossed). I understand that by making this change it no longer requires a valid desktop context to run, but what exactly does that mean? If it means we cannot run the above code when there is no desktop (which applies to an IIS app pool running as a system user), then why did it used to run sometimes in the past rather than fail every time?
Does anybody have any idea why this would make a difference? What does "No such interface supported" mean in this context?
UPDATE:
I have taken on board everything people have said and done more research myself. To summarise: if you have UseShellExecute = true (which is the default), the framework calls ShellExecuteEx in shell32.dll to execute the process. Internally it does this (decompiled from System.dll using ILSpy):
public bool ShellExecuteOnSTAThread()
{
    if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA)
    {
        ThreadStart start = new ThreadStart(this.ShellExecuteFunction);
        Thread thread = new Thread(start);
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();
    }
    else
    {
        this.ShellExecuteFunction();
    }
    return this._succeeded;
}
If you have UseShellExecute = false then it will call CreateProcess in kernel32.dll to start the process.
I was wondering whether there is a problem with the fact that the ShellExecuteOnSTAThread code above creates a new thread. Could the app pool reach some limit on threading which could indirectly cause a Win32Exception?

This error can occur when certain COM objects aren't registered, although it's a bit of a mystery to me why it's intermittent.
In fairness though, spawning a local executable from within IIS is a pretty rare thing to do. It may actually cause a security problem, or at the least cause an issue with IIS if the command fails for some reason and doesn't give control back to the system.
In reality, the best practice for something like this is to record the action that needs to happen in the registry, a database, or some kind of settings file, and have a local application perform it, running as a scheduled task or a Windows service.
For reference, UseShellExecute states whether the kernel should launch the exe directly, or whether it should ask the shell (Explorer) to launch the file.
You might be getting this problem when no one is logged in, so there isn't necessarily a shell loaded to launch the exe.
Ultimately though, what you're currently trying to do is a bad thing in production: you cannot guarantee the state of IIS when it tries to launch this exe, and rightly so; IIS is not a shell.
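The queue-based decoupling recommended above could be sketched roughly like this. This is a minimal sketch, not a prescribed design: the queue directory, file naming, and the worker's hosting model (scheduled task vs. service) are all illustrative assumptions.

```csharp
// Sketch: the IIS app only records the request; a separately scheduled
// worker (Windows service or scheduled task) performs the real work.
using System;
using System.Diagnostics;
using System.IO;

static class WorkQueue
{
    const string QueueDir = @"C:\AppData\WorkQueue"; // hypothetical path

    // Called from the IIS application: just record the requested action.
    public static void Enqueue(string arguments)
    {
        Directory.CreateDirectory(QueueDir);
        string file = Path.Combine(QueueDir, Guid.NewGuid() + ".job");
        File.WriteAllText(file, arguments);
    }

    // Called from the scheduled task/service: run the console app per job.
    public static void ProcessPending(string exePath)
    {
        foreach (string file in Directory.GetFiles(QueueDir, "*.job"))
        {
            var psi = new ProcessStartInfo(exePath, File.ReadAllText(file))
            {
                UseShellExecute = false, // no shell/desktop context required
                CreateNoWindow = true
            };
            using (var p = Process.Start(psi))
                p.WaitForExit();
            File.Delete(file);
        }
    }
}
```

Because the worker runs in an interactive-less session of its own choosing, the IIS process never has to spawn anything, and a failed job cannot tie up an app-pool thread.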

Related

Port conflicts after update

I have a weird case of (TCP) listening port conflicts on my hands.
The application uses a lot of ports; fewer than a hundred, but several dozen. This is probably irrelevant, as I get the conflict on the first bind operation, which happens to be a listener.
I can repeatedly close and restart the application in quick succession without issue. As far as I'm aware, I neatly stop all threads and dispose all sockets on close.
The issue arises when the application is updated. To allow the executable and its dependencies to be overwritten, the update is delegated to an updater application. The main application starts the updater and immediately closes itself (in a graceful fashion, using a WM_CLOSE message). The updater unzips the update package and overwrites the binaries and what more. When done, it restarts the (now updated) main application.
At this point the main application reports the port conflict. It is a port that was used by the previous version.
I understand Windows reuses sockets under the hood, keeping them open even when an application closes them, and then uses the same cached socket when the application connects again. So I figured Windows could be fooled by the new version, not recognizing it as the same application.
But here's the kicker. The updater stays up for a while, allowing the user to read the update report. The user can close it; if he doesn't, it will automatically close after one minute. It appears that while the updater is running, the main application cannot be started without the port conflict occurring. As soon as the updater is closed, the main application can be started without issue again. And the updater itself does NOTHING with sockets!
Starting the updater and the main application is done using Process.Start(). It is as if something links the two processes (main app and updater). Task Manager, however, confirms that the main application is really gone after it closes.
Mind blown. Any insights would be much appreciated.
NineBerry's links were insightful but when trying to create an extension method for Process that takes an inherit argument I ran into the problem that ProcessStartInfo properties do not map nicely to the Win32 STARTUPINFO struct at all. This prevented me from keeping it compatible with existing code which used some features of ProcessStartInfo that I could not transfer to a call to CreateProcess. I do not understand how Process.Start() does this under the hood and could not be bothered anymore after I discovered a workaround.
It appears that setting ProcessStartInfo.UseShellExecute to true makes the whole problem go away. This may not be good for everybody because it has some additional properties but for me this was sufficient.
On GitHub, people have asked for a ProcessStartInfo property that allows control over the inherit value. It does not seem to have been picked up yet and would likely only be implemented in future .NET Core releases.
An explanation of the seeming discrepancy between ProcessStartInfo on the one hand and STARTUPINFO on the other would still be interesting, so if anyone cares to explain, please do.
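For what it's worth, a likely mechanism behind the workaround is handle inheritance: with UseShellExecute = false, Process.Start calls CreateProcess with handle inheritance enabled, so the updater can inherit the main app's listening-socket handles and keep those ports bound for as long as it runs. A minimal sketch of the workaround, where "Updater.exe" is a placeholder for the real updater path:

```csharp
using System.Diagnostics;

class UpdaterLauncher
{
    static void Main()
    {
        var psi = new ProcessStartInfo("Updater.exe")
        {
            // With UseShellExecute = false, CreateProcess is called with
            // handle inheritance enabled, so the child would inherit the
            // parent's socket handles and keep its ports "in use".
            // With true, ShellExecuteEx launches it and the parent's
            // handles are not inherited.
            UseShellExecute = true
        };
        Process.Start(psi);
    }
}
```

This fits the observed symptom: the conflict lasts exactly as long as the updater process (the accidental handle holder) stays alive.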

C#: Attach debugger to process in a clean way

We are developing an open source Visual Studio extension for running tests written with the C++ Google Test framework within VS. Part of the VS API for test adapters is the possibility to run tests with a debugger attached. However, that API does not allow grabbing the output of the executing process: it only returns the process id, and as far as I know, there's no way to access that output if the process is already running.
Thus, we'd like to launch our own process and attach a debugger to it ourselves (following the approach described in the accepted answer of this question). This works so far, but we have one issue: it seems that attaching a debugger is only possible once the process is already running, resulting in missed breakpoints; the reason seems to be that the breakpoints may already have been passed by the time the debugger is attached. Note that we do hit breakpoints, so the approach works in general, but it's not exactly reliable.
Here's the code for launching the process (where command is the executable produced by the Google Test framework):
var processStartInfo = new ProcessStartInfo(command, param)
{
    RedirectStandardOutput = true,
    RedirectStandardError = false,
    UseShellExecute = false,
    CreateNoWindow = true,
    WorkingDirectory = workingDirectory
};
Process process = new Process { StartInfo = processStartInfo };
process.Start();
DebuggerAttacher.AttachVisualStudioToProcess(vsProcess, vsInstance, process);
And here's the utility method for attaching the debugger:
internal static void AttachVisualStudioToProcess(Process visualStudioProcess, _DTE visualStudioInstance, Process applicationProcess)
{
    // Find the process you want the VS instance to attach to...
    DTEProcess processToAttachTo = visualStudioInstance.Debugger.LocalProcesses
        .Cast<DTEProcess>()
        .FirstOrDefault(process => process.ProcessID == applicationProcess.Id);

    // Attach the debugger to the process.
    if (processToAttachTo != null)
    {
        processToAttachTo.Attach();
        ShowWindow((int)visualStudioProcess.MainWindowHandle, 3);
        SetForegroundWindow(visualStudioProcess.MainWindowHandle);
    }
    else
    {
        throw new InvalidOperationException("Visual Studio process cannot find specified application '" + applicationProcess.Id + "'");
    }
}
Is there any way to attach a debugger in a more reliable manner? For instance, is it possible to launch a process from C# such that the process waits for, say, 1s before starting to execute the passed command? That would give us enough time to attach the debugger (at least on my machine; I've tested this by adding a 1s wait at the start of the main() method of the Google Test executable, but that's not an option, since our users would need to change their test code to be able to debug it with our extension). Or is there even a clean way (the described way might obviously fail, e.g. on slow machines)?
Update: Let's recap the problem statement: Our users have a C++ solution including tests written with the Google Test framework (which are compiled into an executable to be run e.g. from the command line). We provide a VS extension written in C# (a test adapter) which discovers the executable, runs it with the help of a Process, collects the test results, and displays them within the VS test explorer. Now, if our users click Debug tests, we start the process running the C++ executable and then attach a debugger to it. However, in the time it takes to attach the debugger, the executable has already started running, and some tests have already been executed, resulting in breakpoints within those tests being missed.
Since we do not want to force our users to change their C++ code (e.g. by adding a waiting period at the beginning of the main() method of the test code, or with one of the approaches referenced by Hans below), we need a different way to attach the debugger. In fact, the VS test framework allows launching a process with a debugger attached (and that approach does not suffer from our problem; that's what we are doing now), but it does not allow grabbing the process' output, since all we get is the process id of an already running process (at least I don't know how that can be done in this case; I have done my research on that, so I believe :-) ). Grabbing the output would have some significant benefits for our extension (which I will not list here; let me know in the comments if you are interested), so we are looking for a different way to handle such situations.
So how can we run the executable (including grabbing the executable's output) and immediately attach a debugger to it, such that no breakpoints are missed? Is that possible at all?
You can PInvoke CreateProcess (see example How to call CreateProcess()...) to launch your debuggee using the CREATE_SUSPENDED creation flag (see Creation Flags for more detail) and then PInvoke ResumeThread to continue once your debugger is attached.
You may need to tweak the options to CreateProcess, depending on your specific needs, but that should do it.
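A hedged sketch of that approach follows. The struct layouts are the standard Win32 ones but are abbreviated to the essentials, and error handling is minimal; treat this as an outline, not production code.

```csharp
using System;
using System.Runtime.InteropServices;

static class SuspendedLauncher
{
    const uint CREATE_SUSPENDED = 0x00000004;

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct STARTUPINFO
    {
        public int cb;
        public string lpReserved, lpDesktop, lpTitle;
        public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars,
            dwFillAttribute, dwFlags;
        public short wShowWindow, cbReserved2;
        public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct PROCESS_INFORMATION
    {
        public IntPtr hProcess, hThread;
        public int dwProcessId, dwThreadId;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CreateProcess(string lpApplicationName, string lpCommandLine,
        IntPtr lpProcessAttributes, IntPtr lpThreadAttributes, bool bInheritHandles,
        uint dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory,
        ref STARTUPINFO lpStartupInfo, out PROCESS_INFORMATION lpProcessInformation);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint ResumeThread(IntPtr hThread);

    // Launch the debuggee suspended; its primary thread runs no user code yet.
    public static PROCESS_INFORMATION StartSuspended(string commandLine)
    {
        var si = new STARTUPINFO { cb = Marshal.SizeOf(typeof(STARTUPINFO)) };
        if (!CreateProcess(null, commandLine, IntPtr.Zero, IntPtr.Zero,
                false, CREATE_SUSPENDED, IntPtr.Zero, null, ref si, out var pi))
            throw new System.ComponentModel.Win32Exception();
        return pi;
    }

    // Call after the debugger has attached to pi.dwProcessId.
    public static void Resume(PROCESS_INFORMATION pi) => ResumeThread(pi.hThread);
}
```

Usage would be: StartSuspended, then attach (e.g. via the AttachVisualStudioToProcess helper above, using pi.dwProcessId), then Resume; no test code executes before the breakpoints are in place.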
Update:
A much better option, since you are writing a VS extension, is to use the IVsDebugger4 interface to call LaunchDebugTargets4. The interface is documented and you can find plenty of examples on GitHub (just search for LaunchDebugTargets4). This method also avoids that pesky break in VS once you attach with the native debug engine.
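A rough sketch of what that call might look like from a package is below. The debug-engine GUID and the exact field set should be verified against the Microsoft.VisualStudio.Shell.Interop documentation; treat this as an assumption-laden outline, not a verified implementation.

```csharp
using System;
using Microsoft.VisualStudio.Shell.Interop;

static class DebugLauncher
{
    public static void LaunchWithDebugger(IServiceProvider sp, string exe,
        string args, string workDir)
    {
        var debugger = (IVsDebugger4)sp.GetService(typeof(SVsShellDebugger));

        var targets = new VsDebugTargetInfo4[1];
        targets[0].dlo = (uint)DEBUG_LAUNCH_OPERATION.DLO_CreateProcess;
        targets[0].bstrExe = exe;
        targets[0].bstrArg = args;
        targets[0].bstrCurDir = workDir;
        // GUID of the native debug engine (assumption; verify for your setup).
        targets[0].guidLaunchDebugEngine =
            new Guid("3B476D35-A401-11D2-AAD4-00C04F990171");

        var results = new VsDebugTargetProcessInfo[targets.Length];
        debugger.LaunchDebugTargets4((uint)targets.Length, targets, results);
    }
}
```

Because VS itself launches the target here, there is no attach race at all, which is the point of preferring it over the suspend-and-attach route.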

Strategies for streamlining a workflow of multiple apps

We have a bit of a complicated scenario in the office where we have multiple standalone applications that can also be combined into a single workflow. I'm now looking into strategies to avoid running half a dozen apps for this one workflow and I'm fairly confident that the most appropriate solution is to write an over-arching app that runs these smaller apps in sequence.
The apps don't rely on each others' results as such, but they must be run in a specific order, and you can't run step 2 if step 1 fails, etc. Roll-back isn't necessary. Some of the apps are used in standalone scenarios as well as in this workflow, so it seems like a controlling application would allow me to re-use those apps rather than duplicate code.
A controlling app also allows for the workflow to be extensible; I can "plug in" a new step between step 1 and step 2 if there is a required amendment to the workflow. Further, it should allow me to do things like build a queue system so that the workflow can just be constantly run.
Am I on the right track with my thoughts? Are there limitations to this approach?
1) If you have the source code of these smaller apps, the best thing to do would be to recreate them as a single application that acts as a "workspace", with the various steps of the work included directly in this bigger app.
The benefits of this approach:
faster execution (instead of loading a new process/application every time, you will use only one)
simpler deployment (one application is simpler than X)
a better, customized GUI
2) If, on the other hand, you don't have the source code of these apps, so recreating them is impossible (excluding reverse engineering), your approach seems to be the only one possible for your scenario.
In this case, if these apps don't have an API to use, the crudest approach that works is to simply use the System.Diagnostics.Process class to start a process for every app involved, whenever necessary.
Here an example of this approach:
Process process = new Process();
string path = @"C:\path\to\the\app1.exe";
ProcessStartInfo processStartInfo = new ProcessStartInfo(path);
processStartInfo.UseShellExecute = false;
process.StartInfo = processStartInfo;
process.Start();
and so on, every time you want to launch an application.
For killing these applications you have two possibilities: kill the process manually by calling process.Kill(), or let the application close itself.
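The sequential controller described in the question could be sketched like this. The step paths are placeholders, and the "non-zero exit code means failure" convention is an assumption about how the step apps report errors.

```csharp
using System;
using System.Diagnostics;

class WorkflowController
{
    static void Main()
    {
        // Placeholder paths for the individual step applications.
        string[] steps =
        {
            @"C:\apps\step1.exe",
            @"C:\apps\step2.exe",
            @"C:\apps\step3.exe"
        };

        foreach (string exe in steps)
        {
            var psi = new ProcessStartInfo(exe) { UseShellExecute = false };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
                if (process.ExitCode != 0) // assumed: non-zero exit = failure
                {
                    Console.WriteLine("Step '" + exe + "' failed; aborting workflow.");
                    break; // don't run step N+1 if step N failed
                }
            }
        }
    }
}
```

Inserting a new step between two existing ones is then just an edit to the array (or to whatever queue/config source replaces it).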

Process.WaitForExit inconsistent across different machines

This code runs as expected on a large number of machines. However, on one particular machine, the call to WaitForExit() returns immediately and the process is reported as exited.
static void Main(string[] args)
{
    Process proc = Process.Start("notepad.exe");
    Console.WriteLine(proc.HasExited); // Always false
    proc.WaitForExit(); // Blocks on all but one machine
    Console.WriteLine(proc.HasExited); // **See comment below
    Console.ReadLine();
}
Note that unlike a similar question on SO, the process being called is notepad.exe (for testing reasons), so it is unlikely the fault lies with it; i.e. it is not spawning a second sub-process and closing. Even so, that would not explain why it works on all the other machines.
On the problem machine, the second call to Console.WriteLine(proc.HasExited)) returns true even though notepad is still clearly open, both on the screen and in the task manager.
The machine is running Windows 7 and .NET 4.0.
My question is; what conditions on that particular machine could be causing this? What should I be checking?
Edit - Things I've tried so far / Updates / Possibly relevant info:
Reinstalled .NET.
Closed any processes I don't know in task manager.
Windows has not yet been activated on this machine.
Following advice in the comments, I tried getting the 'existing' process Id using GetProcessesByName but that simply returns an empty array on the problem machine. Therefore, it's hard to say the problem is even with WaitForExit, as the process is not returned by calling GetProcessesByName even before calling WaitForExit.
On the problem machine, the resulting notepad process's parent ID is the ID of the notepad process the code manually starts; in other words, notepad is spawning a child process and terminating itself.
The problem is that, by default, Process.StartInfo.UseShellExecute is set to true. With this set to true, rather than starting the process yourself, you are asking the shell to start it for you. That can be quite useful: it allows you to do things like "execute" an HTML file (the shell will use the appropriate default application).
It's not so good when you want to track the application after executing it (as you found), because the launching application can sometimes get confused about which instance it should be tracking.
The inner details of why this happens are probably beyond my capability to answer. I do know that when UseShellExecute == true, the framework uses the ShellExecuteEx Windows API, and when UseShellExecute == false, it uses CreateProcess (or CreateProcessWithLogonW when credentials are supplied), but why one leads to trackable processes and the other doesn't, I don't know, as both seem to return the process ID.
EDIT: After a little digging:
This question pointed me to the SEE_MASK_NOCLOSEPROCESS flag, which does indeed seem to be set when using ShellExecute. The documentation for the mask value states:
In some cases, such as when execution is satisfied through a DDE conversation, no handle will be returned. The calling application is responsible for closing the handle when it is no longer needed.
So it does suggest that returning the process handle is unreliable. I still have not gotten deep enough to know which particular edge case you might be hitting here though.
A cause could be a virus that replaced notepad.exe to hide itself.
If executed, it spawns notepad and exits (just a guess).
try this code:
var process = Process.Start("notepad.exe");
var process2 = Process.GetProcessById(process.Id);
while (!process2.HasExited)
{
    Thread.Sleep(1000);
    try
    {
        process2 = Process.GetProcessById(process.Id);
    }
    catch (ArgumentException)
    {
        break;
    }
}
MessageBox.Show("done");
After Process.Start(), check the process id of notepad.exe in Task Manager and verify it is the same as process.Id.
Oh, and you really should use the full path to notepad.exe:
var notepad = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.Windows),
    "notepad.exe");
Process.Start(notepad);

C# program to crash system to halt instantly (for system recoverability testing), simulating serious hardware error

I need to run tests on system recoverability, which includes suddenly crashing a system without warning (a "hard crash", with no shutdown workaround).
I'm looking for something as close as possible to a serious hardware error that fully crashes the system (a blue-screen halt or worse, e.g. a sudden reboot similar to non-recoverable memory/CPU errors).
How could I do something like this in C# (probably with unmanaged code)?
I always find flipping the power switch (on the wall socket) works perfectly for this solution - especially when I only meant to turn the monitor off.
If you need to do it from the keyboard, check here for a way to generate a BSOD.
EDIT: a quick google suggests there are 3 ways:
write a device driver and dereference a null pointer
do the keyboard shortcut described above
run windbg in kernel mode and type .crash at the prompt.
Find and kill the process running csrss.exe. That will take care of it.
I guess the easiest way to do it, especially if you want to build it into some kind of automated test (which I guess you will, given that you ask "How could I do something like this in C#"), is to create a new AppDomain.
I.e. your automated test will create a new AppDomain and then start your application inside it. Your automated test can then unload the AppDomain, which will completely abort the application. It will be close to 100% identical to what happens during a hardware crash, so it will allow you to test your recovery code. I don't think it will leave your file system corrupt, however (thus not 100% identical).
Note that if you are not used to working with multiple AppDomains, there are a few things to be aware of. E.g. when you access an object in another AppDomain, the object will be serialized across the AppDomain boundary, unless it inherits from MarshalByRefObject.
I have a similar situation, where we are testing exactly the same thing (recovery from an unexpected crash). Here, the code that launches a new AppDomain creates a "bootstrapper" object inside the new AppDomain. This bootstrapper is a MarshalByRefObject specialization and has the responsibility of executing application startup logic.
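A minimal sketch of that bootstrapper arrangement follows. The type names are illustrative, and note that AppDomain.CreateDomain/Unload exist only on .NET Framework (they are unsupported on .NET Core/5+).

```csharp
using System;

// Lives inside the child AppDomain; marshaled by reference, not serialized.
public class Bootstrapper : MarshalByRefObject
{
    // Runs the application-under-test's startup logic in the child domain.
    public void Run() { /* start the app under test here */ }
}

class CrashTest
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("AppUnderTest");
        var boot = (Bootstrapper)domain.CreateInstanceAndUnwrap(
            typeof(Bootstrapper).Assembly.FullName,
            typeof(Bootstrapper).FullName);
        boot.Run();

        // Abruptly tear down everything running in the domain,
        // approximating a sudden process death for recovery testing.
        AppDomain.Unload(domain);
    }
}
```

Because Bootstrapper derives from MarshalByRefObject, the test talks to it through a proxy across the domain boundary rather than pulling a serialized copy into the test's own domain.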
There's a way to manually BSOD a machine. A tutorial is available here. After setting up the necessary registry entries you can crash the machine by pressing Right Ctrl and Scroll Lock.
I was trying something new today and totally crashed my system successfully... Check out my code:
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        Process[] processes = Process.GetProcesses(); // get all the processes on your system
        foreach (var process in processes)
        {
            try
            {
                Console.WriteLine(process.ProcessName);
                process.PriorityClass = ProcessPriorityClass.BelowNormal; // set the process to below-normal priority
                process.Kill();
            }
            catch (Exception E)
            {
                Console.WriteLine(E.Message + " :: [ " + process.ProcessName + " ] could not be killed");
            }
        }
    }
}
