This code runs as expected on a large number of machines. However, on one particular machine, the call to WaitForExit() returns immediately, as if it were ignored, and in fact marks the process as exited.
static void Main(string[] args)
{
    Process proc = Process.Start("notepad.exe");
    Console.WriteLine(proc.HasExited); // always False
    proc.WaitForExit();                // blocks on all but one machine
    Console.WriteLine(proc.HasExited); // **see comment below
    Console.ReadLine();
}
Note that unlike a similar question on SO, the process being called is notepad.exe (for testing reasons), so it is unlikely the fault lies with it, i.e. it is not spawning a second sub-process and closing. Even if it were, that would not explain why it works on all the other machines.
On the problem machine, the second call to Console.WriteLine(proc.HasExited) returns true even though notepad is still clearly open, both on the screen and in the task manager.
The machine is running Windows 7 and .NET 4.0.
My question is: what conditions on that particular machine could be causing this? What should I be checking?
Edit - Things I've tried so far / Updates / Possibly relevant info:
Reinstalled .NET.
Closed any processes I don't know in task manager.
Windows has not yet been activated on this machine.
Following advice in the comments, I tried getting the 'existing' process Id using GetProcessesByName but that simply returns an empty array on the problem machine. Therefore, it's hard to say the problem is even with WaitForExit, as the process is not returned by calling GetProcessesByName even before calling WaitForExit.
On the problem machine, the resulting notepad process's ParentID is the ID of the notepad process the code manually starts, or in other words, notepad is spawning a child process and terminating itself.
The problem is that by default Process.StartInfo.UseShellExecute is set to true. With this variable set to true, rather than starting the process yourself, you are asking the shell to start it for you. That can be quite useful: it allows you to do things like "execute" an HTML file (the shell will use the appropriate default application).
It's not so good when you want to track the application after executing it (as you found), because the launching application can sometimes get confused about which instance it should be tracking.
The inner details of why this happens are probably beyond my ability to answer. I do know that when UseShellExecute == true, the framework uses the ShellExecuteEx Windows API, and when UseShellExecute == false, it uses CreateProcessWithLogonW; but why one leads to trackable processes and the other doesn't, I don't know, as they both seem to return the process ID.
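For what it's worth, here is a minimal sketch of the change this implies, assuming the original test program: set UseShellExecute to false so the framework creates the process itself and tracks that exact instance.

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var psi = new ProcessStartInfo("notepad.exe")
        {
            UseShellExecute = false // bypass the shell; track the created process directly
        };

        using (Process proc = Process.Start(psi))
        {
            Console.WriteLine(proc.HasExited); // False
            proc.WaitForExit();                // now blocks until notepad is closed
            Console.WriteLine(proc.HasExited); // True
        }
    }
}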
EDIT: After a little digging:
This question pointed me to the SEE_MASK_NOCLOSEPROCESS flag, which does indeed seem to be set when using ShellExecute. The documentation for the mask value states:
In some cases, such as when execution is satisfied through a DDE
conversation, no handle will be returned. The calling application is
responsible for closing the handle when it is no longer needed.
So it does suggest that returning the process handle is unreliable. I still have not gotten deep enough to know which particular edge case you might be hitting here though.
A cause could be a virus that replaced notepad.exe to hide itself.
If executed, it spawns notepad and exits (just a guess).
Try this code:
var process = Process.Start("notepad.exe");
var process2 = Process.GetProcessById(process.Id);
while (!process2.HasExited)
{
    Thread.Sleep(1000);
    try
    {
        process2 = Process.GetProcessById(process.Id);
    }
    catch (ArgumentException)
    {
        break;
    }
}
MessageBox.Show("done");
After Process.Start(), check the process id of notepad.exe with the task manager and verify it is the same as process.Id.
Oh, and you really should use the full path to notepad.exe
var notepad = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.Windows),
    "notepad.exe");
Process.Start(notepad);
This is the code:
ConsoleKeyInfo cki;
while ((cki = Console.ReadKey(true)).Key != ConsoleKey.Escape)
{
    Console.WriteLine(cki.Key);
}
When I run it from cmd or PowerShell with dotnet run, everything works fine. However, when I run it from Git Bash it throws the following exception:
Unhandled exception. System.InvalidOperationException: Cannot read keys when either application does not have a console or when console input has been redirected. Try Console.Read.
Presumably, then, Git Bash is using IO redirection (which it is allowed to do) and the others ... aren't. The solution, then, is to use Read rather than ReadKey, at least if redirection is in play. You can probably detect this via Console.IsInputRedirected and choose the most useful strategy for what is possible, but you won't be able to detect keys in the same way, so you may need a slightly different user experience in this scenario.
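A minimal sketch of that strategy, assuming the original loop (Console.IsInputRedirected is available from .NET 4.5 / .NET Core onwards):

using System;

class Program
{
    static void Main()
    {
        if (Console.IsInputRedirected)
        {
            // No interactive console (e.g. under Git Bash): fall back to Console.Read.
            int ch;
            while ((ch = Console.Read()) != -1)
            {
                Console.WriteLine((char)ch);
            }
        }
        else
        {
            // Interactive console: the original key-by-key loop works as before.
            ConsoleKeyInfo cki;
            while ((cki = Console.ReadKey(true)).Key != ConsoleKey.Escape)
            {
                Console.WriteLine(cki.Key);
            }
        }
    }
}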
Basically, 100% of what Marc said in his answer.
Side note to that: under Linux-like terminals (and Git Bash is one of them), the typical (or even standard) way of aborting an interactive application/script that currently blocks/holds the console is pressing Ctrl+C. .NET Core console applications support that. It's much easier to make it quit via Ctrl+C than to try to peek at what keys are being pressed.
IIRC, .NET Core applications automatically detect Ctrl+C being pressed and by default they just quit, which makes the console usable again. This means that no extra code needs to be written, and even while(true) loops can be halted this way (the event handler that handles Ctrl+C is run on the thread pool, regardless of the main thread being busy).
https://learn.microsoft.com/en-us/dotnet/api/system.console.cancelkeypress?view=netcore-3.1
By default, the Cancel property is false, which causes program execution to terminate when the event handler exits. Changing its property to true specifies that the application should continue to execute.
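As a short illustration of that behaviour, here is a sketch of a Ctrl+C handler that requests a clean shutdown instead of letting the runtime terminate the process (the _keepRunning flag is just an example):

using System;
using System.Threading;

class Program
{
    private static volatile bool _keepRunning = true;

    static void Main()
    {
        // The handler runs on a thread-pool thread, so it fires even while Main is busy.
        Console.CancelKeyPress += (sender, e) =>
        {
            e.Cancel = true;      // don't terminate immediately...
            _keepRunning = false; // ...ask the main loop to stop instead
        };

        while (_keepRunning)
        {
            Thread.Sleep(200); // stand-in for real work
        }

        Console.WriteLine("Exited cleanly after Ctrl+C.");
    }
}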
This still happens on both OSes, Windows and Linux, when dealing with Unicode characters that have a rune width greater than 1 during a window resize, with .NET 5.0.
We are developing an open source Visual Studio extension for running tests written with the C++ Google Test framework within VS. Part of the VS API for test adapters is the possibility to run tests with a debugger attached. However, that API does not allow grabbing the output of the executing process: it only returns the process id, and as far as I know, there's no way to access that output if the process is already running.
Thus, we'd like to launch our own process and attach a debugger to it ourselves (following the approach described in the accepted answer of this question). This works so far, but we have one issue: it seems that attaching a debugger is only possible once the process is already running, resulting in missed breakpoints; the reason seems to be that the breakpoints may already have been passed by the time the debugger is attached. Note that we do hit breakpoints, so the approach seems to work in general, but it's not exactly reliable.
Here's the code for launching the process (where command is the executable produced by the Google Test framework):
var processStartInfo = new ProcessStartInfo(command, param)
{
    RedirectStandardOutput = true,
    RedirectStandardError = false,
    UseShellExecute = false,
    CreateNoWindow = true,
    WorkingDirectory = workingDirectory
};
Process process = new Process { StartInfo = processStartInfo };
process.Start();
DebuggerAttacher.AttachVisualStudioToProcess(vsProcess, vsInstance, process);
And here's the utility method for attaching the debugger:
internal static void AttachVisualStudioToProcess(Process visualStudioProcess, _DTE visualStudioInstance, Process applicationProcess)
{
    // Find the process you want the VS instance to attach to...
    DTEProcess processToAttachTo = visualStudioInstance.Debugger.LocalProcesses
        .Cast<DTEProcess>()
        .FirstOrDefault(process => process.ProcessID == applicationProcess.Id);

    // Attach the debugger to the process.
    if (processToAttachTo != null)
    {
        processToAttachTo.Attach();
        ShowWindow((int)visualStudioProcess.MainWindowHandle, 3);
        SetForegroundWindow(visualStudioProcess.MainWindowHandle);
    }
    else
    {
        throw new InvalidOperationException("Visual Studio process cannot find specified application '" + applicationProcess.Id + "'");
    }
}
Is there any way to attach a debugger in a more reliable manner? For instance, is it possible to launch a process from C# such that the process waits for, say, 1s before starting to execute the passed command? That would give us enough time to attach the debugger (at least on my machine; I've tested this by adding a 1s wait at the start of the main() method of the Google Test executable, but that's not an option, since our users would need to change their test code in order to debug it with our extension). Or is there even a clean way (the described approach might obviously fail, e.g. on slow machines)?
Update: Let's recap the problem statement: our users have a C++ solution including tests written with the Google Test framework (which are compiled into an executable to be run, e.g., from the command line). We provide a VS extension written in C# (a test adapter) which discovers the executable, runs it with the help of a Process, collects the test results, and displays them within the VS test explorer. Now, if our users click Debug tests, we start the process running the C++ executable and then attach a debugger to that process. However, in the time it takes to attach the debugger, the executable has already started running, and some tests have already been executed, resulting in breakpoints within those tests being missed.
Since we do not want to force our users to change their C++ code (e.g. by adding some waiting period at the beginning of the main() method of the test code, or with one of the approaches referenced by Hans below), we need a different way to attach that debugger. In fact, the VS test framework allows launching a process with a debugger attached (and that approach does not suffer from our problem; it's what we are doing now), but it does not allow grabbing the process's output, since all we get is the process id of an already running process (at least I don't know how that could be done in this case, and I have done my research on that, so I believe). Grabbing the output would have some significant benefits for our extension (which I do not list here; let me know in the comments if you are interested), so we are looking for a different way to handle such situations.
So how can we run the executable (including grabbing the executable's output) and immediately attach a debugger to it, such that no breakpoints are missed? Is that possible at all?
You can PInvoke CreateProcess (see example How to call CreateProcess()...) to launch your debuggee using the CREATE_SUSPENDED creation flag (see Creation Flags for more detail) and then PInvoke ResumeThread to continue once your debugger is attached.
You may need to tweak the options to CreateProcess, depending on your specific needs, but that should do it.
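A rough sketch of that approach is below. The P/Invoke declarations mirror CreateProcessW, ResumeThread and CloseHandle; the StartSuspended helper and its attachDebugger callback are illustrative names, error handling is minimal, and the stdout redirection the question also needs is omitted.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Text;

static class SuspendedLauncher
{
    private const uint CREATE_SUSPENDED = 0x00000004;

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct STARTUPINFO
    {
        public int cb;
        public string lpReserved, lpDesktop, lpTitle;
        public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
        public short wShowWindow, cbReserved2;
        public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct PROCESS_INFORMATION
    {
        public IntPtr hProcess, hThread;
        public int dwProcessId, dwThreadId;
    }

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool CreateProcess(string lpApplicationName, StringBuilder lpCommandLine,
        IntPtr lpProcessAttributes, IntPtr lpThreadAttributes, bool bInheritHandles,
        uint dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory,
        ref STARTUPINFO lpStartupInfo, out PROCESS_INFORMATION lpProcessInformation);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern uint ResumeThread(IntPtr hThread);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr hObject);

    // Starts the executable with its primary thread suspended, lets the caller attach a
    // debugger to the returned process id, then resumes the thread so execution begins.
    public static int StartSuspended(string exePath, string arguments, string workingDirectory,
        Action<int> attachDebugger)
    {
        var startupInfo = new STARTUPINFO { cb = Marshal.SizeOf(typeof(STARTUPINFO)) };
        var commandLine = new StringBuilder("\"" + exePath + "\" " + arguments);

        PROCESS_INFORMATION processInfo;
        if (!CreateProcess(null, commandLine, IntPtr.Zero, IntPtr.Zero, false,
                CREATE_SUSPENDED, IntPtr.Zero, workingDirectory,
                ref startupInfo, out processInfo))
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }

        try
        {
            attachDebugger(processInfo.dwProcessId); // nothing has executed yet, so no breakpoints are missed
            ResumeThread(processInfo.hThread);
            return processInfo.dwProcessId;
        }
        finally
        {
            CloseHandle(processInfo.hThread);
            CloseHandle(processInfo.hProcess);
        }
    }
}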
Update:
A much better option, since you are writing a VS extension, is to use the IVsDebugger4 interface to call LaunchDebugTargets4. The interface is documented and you can find plenty of examples on GitHub (just search for LaunchDebugTargets4). This method will also avoid that pesky break in VS once you attach with the native debug engine.
I am starting a small console application from within my IIS web application. The code is started from within an app pool using code like this,
Process process = new Process();
ProcessStartInfo processStartInfo = new ProcessStartInfo();
processStartInfo.CreateNoWindow = true;
processStartInfo.WindowStyle = ProcessWindowStyle.Hidden;
// ..
process.Start();
I used to intermittently get an error,
Win32Exception has occurred. Message: No such interface supported
ErrorCode: 80004005, NativeErrorCode: 80004002
I proved that when this happened the console application wouldn't start at all.
I added this to the code above:
processStartInfo.UseShellExecute = false;
And the problem has gone away (so far, fingers crossed). I understand that by making this change it no longer requires a valid desktop context to run, but what exactly does that mean? If it means we cannot run the above code when there is no desktop (which applies to an IIS app pool running as a system user), then why did it sometimes run in the past rather than fail every time?
Does anybody have any idea why this would make a difference? What does no interface supported mean in this context?
UPDATE:
I have taken on board everything people have said, and done more research myself. So, to summarise: if you have UseShellExecute = true (which is the default), it will call ShellExecuteEx in shell32.dll to execute the process. It actually does this (copied from System.dll using ILSpy):
public bool ShellExecuteOnSTAThread()
{
    if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA)
    {
        ThreadStart start = new ThreadStart(this.ShellExecuteFunction);
        Thread thread = new Thread(start);
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();
    }
    else
    {
        this.ShellExecuteFunction();
    }
    return this._succeeded;
}
If you have UseShellExecute = false then it will call CreateProcess in kernel32.dll to start the process.
I was wondering if there is a problem with the fact that the ShellExecuteOnSTAThread code above is creating a new thread? Could the app pool reach some limit on threading which could indirectly cause a Win32Exception?
This error can occur when certain COM objects aren't registered, although it's a bit of a mystery to me why it's intermittent.
In fairness though, spawning a local executable from within IIS is a pretty rare thing to do, and it may actually cause a security problem, or at the least cause an issue with IIS if the command fails for some reason and doesn't give control back to the system.
In reality, the best practice for something like this is to record the action that you need to happen in the registry, a database, or some kind of settings file, and have your local application run as a scheduled task or a Windows service.
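For illustration only, here is one minimal way such a hand-off could look; the folder path, the ".job" extension and the class name are assumptions, not an established convention:

using System;
using System.Diagnostics;
using System.IO;

public static class WorkQueue
{
    private const string QueueFolder = @"C:\AppQueue"; // hypothetical shared location

    // Called from the IIS application instead of Process.Start().
    public static void Enqueue(string arguments)
    {
        Directory.CreateDirectory(QueueFolder);
        File.WriteAllText(Path.Combine(QueueFolder, Guid.NewGuid() + ".job"), arguments);
    }

    // Called periodically by the Windows service or scheduled task, which runs in its own session.
    public static void ProcessPending(string exePath)
    {
        foreach (string job in Directory.GetFiles(QueueFolder, "*.job"))
        {
            string arguments = File.ReadAllText(job);
            Process.Start(exePath, arguments);
            File.Delete(job);
        }
    }
}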
For reference, UseShellExecute states whether the kernel should launch the exe directly, or whether it should ask Explorer (the shell) to launch the file.
You might be getting this problem when no one is logged in, so there isn't necessarily a shell loaded to launch the exe.
Ultimately though, what you're currently trying to do is a bad thing in production - you cannot guarantee the state of IIS when it tries to launch this exe and rightly so, IIS is not a Shell.
I have a C# application which launches another executable using Process.Start().
99% of the time this call works perfectly fine. After the application has run for quite some time though, Process.Start() will fail with the error message:
Insufficient system resources exist to complete the requested service
Initially I thought this must have been due to a memory leak in my program. I've profiled it fairly extensively and it doesn't appear there's a leak; the memory footprint is still reasonable even when this failure occurs.
Immediately after a failure like this, if I print some of the system statistics it appears that I have over 600MB of RAM free, plenty of space on disk, and the CPU usage is effectively at 0%.
Is there some other system resource I haven't thought of? Am I running into a memory limit within the .NET VM?
Edit2:
I opened up the application in SysInternals Process Explorer and it looks like I'm leaking Handles left and right:
Handles Used: 11,950,352 (!)
GDI Handles: 26
USER Handles: 22
What's strange here is that the Win32 side of handles seems very reasonable, but somehow my raw handle count has exploded waaaaay out of control. Any ideas what could cause a handle leak like this? I was originally convinced it was Process.Start(), but that would be USER handles, wouldn't it?
Edit:
Here's an example of how I'm creating the process:
var pInfo = new ProcessStartInfo(path, ClientStartArguments)
{
    UseShellExecute = false,
    WorkingDirectory = workingDirectory
};
ClientProcess = Process.Start(pInfo);
Here's an example of how I kill the same process (later in the program after I have interacted with the process):
Process[] clientProcesses = Process.GetProcessesByName(ClientProcessName);
if (clientProcesses.Length > 0)
{
    foreach (var clientProcess in clientProcesses.Where(
        clientProcess => clientProcess.HasExited == false))
    {
        clientProcess.Kill();
    }
}
The problem here is with retained process handles. As we can see from your later edits you are keeping a reference to the Process object returned by Process.Start(). As mentioned in the documentation of Process:
Like many Windows resources, a process is also identified by its handle, which might not be unique on the computer. A handle is the generic term for an identifier of a resource. The operating system persists the process handle, which is accessed through the Handle property of the Process component, even when the process has exited. Thus, you can get the process's administrative information, such as the ExitCode (usually either zero for success or a nonzero error code) and the ExitTime. Handles are an extremely valuable resource, so leaking handles is more virulent than leaking memory.
I especially like the use of the word virulent. You need to dispose and release the reference to Process.
Also check out this excellent question and its corresponding answer: Not enough memory or not enough handles?
Since the Process class implements IDisposable, it is good practice to properly dispose of it when you are done. In this case, it will prevent handle leaks.
using (var p = new Process())
{
    p.StartInfo = new ProcessStartInfo(@"C:\windows\notepad.exe");
    p.Start();
    p.WaitForExit();
}
If you are calling Process.Kill() and the process has already exited, you will get an InvalidOperationException.
That's not an uncommon problem to have with little programs like this. The problem is that you are using a large amount of system resources but very little memory. You don't put enough pressure on the garbage collected heap so the collector never runs. So finalizable objects, the wrappers for system handles like Process and Thread, never get finalized.
Simply disposing the Process object after the process has exited will go a long way toward solving the problem. But it might not solve it completely: any threads that the Process class uses, or that you use yourself, consume 5 operating system handles each. The Thread class doesn't have a Dispose() method. It should, but it doesn't; it is next to impossible to call it correctly.
The solution is triggering a garbage collection yourself. Count the number of times you start a process. Every, say, hundredth time, call GC.Collect(). Keep an eye on the Handle count with Taskmgr.exe (use View + Select Columns to add it). Fine-tune the GC.Collect() calls so that the handle count doesn't increase beyond, say, 500.
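A minimal sketch of that counting idea; the threshold of 100 is the arbitrary figure from above and should be tuned while watching the Handle column:

using System;
using System.Diagnostics;

static class ProcessLauncher
{
    private static int _startCount;

    public static Process StartTracked(ProcessStartInfo startInfo)
    {
        if (++_startCount % 100 == 0)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers(); // let finalizers close the wrapped OS handles
        }
        return Process.Start(startInfo);
    }
}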
I need to run tests on system recoverability which includes suddenly crashing a system without warning ("hard crash", no shutdown workaround).
I'm looking for something as close as possible to a serious hardware error that fully crashes the system (a blue-screen HALT or worse, e.g. a sudden reboot similar to non-recoverable memory/CPU errors).
How could I do something like this in C# (probably unmanaged code?)?
I always find flipping the power switch (on the wall socket) works perfectly for this, especially when I only meant to turn the monitor off.
If you need to do it from the keyboard, check here for a way to generate a BSOD.
EDIT: a quick Google search suggests there are 3 ways:
write a device driver and dereference a null pointer
do the keyboard shortcut described above
run windbg in kernel mode and type .crash at the prompt.
Find and kill the process running csrss.exe. That will take care of it.
I guess that the easiest way to do it, especially if you want to build it into some kind of automated test (which I guess you will, given that you ask "How could I do something like this in C#"), is to create a new AppDomain.
I.e. your automated test will create a new AppDomain and then start your application inside the new AppDomain. Your automated test can then unload the AppDomain. That will completely abort the application. It will be close to 100% identical to what happens during a hardware crash, so it will allow you to test your recovery code. I don't think it will leave your file system corrupt, however (thus not 100% identical).
Note that if you are not used to working with multiple AppDomains, there are a few things to be aware of. E.g. when you access an object in another AppDomain, the object will be serialized across the AppDomain boundary, except if it inherits from MarshalByRefObject.
I have a similar situation, where we are testing exactly the same (recovery from an unexpected crash). Here, the code that launches a new AppDomain creates a "bootstrapper" object inside the new AppDomain. This bootstrapper is a MarshalByRefObject specialization and has the responsibility of executing application startup logic.
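A minimal sketch of that setup, assuming .NET Framework (AppDomain.Unload is not supported on .NET Core / .NET 5+); the Bootstrapper type and its StartApplication method are illustrative names:

using System;

public class Bootstrapper : MarshalByRefObject
{
    public void StartApplication()
    {
        // Application startup logic runs here, inside the child AppDomain.
    }
}

public static class CrashSimulator
{
    public static void RunAndAbort()
    {
        AppDomain domain = AppDomain.CreateDomain("ApplicationUnderTest");
        var bootstrapper = (Bootstrapper)domain.CreateInstanceAndUnwrap(
            typeof(Bootstrapper).Assembly.FullName, typeof(Bootstrapper).FullName);

        bootstrapper.StartApplication();

        // Unloading aborts everything running inside the domain, approximating a crash.
        AppDomain.Unload(domain);
    }
}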
There's a way to manually BSOD a machine. A tutorial is available here. After setting up the necessary registry entries you can crash the machine by pressing Right Ctrl and Scroll Lock.
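Presumably the registry entries in question are Microsoft's documented CrashOnCtrlScroll setting; a hedged sketch of enabling it from C# (requires administrator rights and a reboot, and uses the i8042prt key instead of kbdhid for PS/2 keyboards):

using Microsoft.Win32;

class EnableKeyboardCrash
{
    static void Main()
    {
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Services\kbdhid\Parameters"))
        {
            key.SetValue("CrashOnCtrlScroll", 1, RegistryValueKind.DWord);
        }
    }
}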
I was trying something new today and totally crashed my system successfully... Check out my code:
using System;
using System.Diagnostics;
class Program
{
    static void Main(string[] args)
    {
        Process[] processess = Process.GetProcesses(); // get all the processes on your system
        foreach (var process in processess)
        {
            try
            {
                Console.WriteLine(process.ProcessName);
                process.PriorityClass = ProcessPriorityClass.BelowNormal; // set the process to below-normal priority
                process.Kill();
            }
            catch (Exception E)
            {
                Console.WriteLine(E.Message + " :: [ " + process.ProcessName + " ] Could not be killed");
            }
        }
    }
}