Disable "Attach to debugger" Debugger.Launch - c#

I've got a problem: I accidentally left a "Debugger.Launch();" call in my project, which was needed for debugging because the application is a Windows Service.
Now I'm done with the project and it's working as intended (mostly), BUT every time you start the service, it asks if you want to attach a debugger.
The service has been packed into an MSI package and is more or less ready for delivery. The person who handles all the packaging is out of the office, and nobody else knows how to do it or has the authority to do it.
Enough with the backstory...
Can I in any way disable the debugger code without repackaging the service, or do I have to repackage?
Is there any startup command or similar to prevent it from asking for a debugger?
I have been searching a lot about this, but most of the existing questions/posts on the subject concern "prebuild" solutions, while I'm looking for a "postbuild" solution.
[EDIT]
Solution (of sorts)
I still have no idea whether it is even possible to prevent the prompt from the outside; from the research I've done, it seems impossible. Therefore I had to recompile the service.
As many commenters suggested, I implemented a key in the app.config and a simple if-statement around the "Debugger.Launch()" call, which works perfectly. Now I can simply choose whether to attach the debugger or not.
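For reference, a minimal sketch of that config-gated call; the appSettings key name LaunchDebugger is a made-up placeholder, not necessarily what the OP used:

    using System.Configuration;   // requires a reference to System.Configuration
    using System.Diagnostics;

    static class ServiceEntry
    {
        // In app.config (key name is hypothetical):
        // <appSettings>
        //   <add key="LaunchDebugger" value="false" />
        // </appSettings>
        static void Main()
        {
            bool launch;
            if (bool.TryParse(ConfigurationManager.AppSettings["LaunchDebugger"], out launch) && launch)
            {
                Debugger.Launch();
            }
            // ... start the service as usual ...
        }
    }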
Tamir Vered's solution worked on my local machine, but I did not even try it on the customer's server, for the reason he also stated about not basing my code on this kind of tweak.
I will accept his answer, as it could partly fix the initial problem.
Thank you all for answering.

Usually I would recommend recompiling the application and allowing it to be invoked with an argument or configuration setting that cancels the Debugger.Launch call, but since you don't want to recompile...
As the documentation of the Debugger.Launch() method states:
If a debugger is already attached, nothing happens.
You can take advantage of that fact by making another small process that will "debug" your original process.
Since your process is a Windows service, you might need to use auto debugger attach via the Image File Execution Options registry key:
Open the Registry Editor by typing regedit in the cmd.
Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options.
Add a key named after your debugged application's .exe.
Add a string value to that key with the name Debugger, where the value is the path to your new "debugger" process.
With your fake debugger attached, the original process will return from the Debugger.Launch method without invoking another debugger.
Also, you might want your fake debugger to detach itself somehow later on, so you can still really debug your application when needed.
Note that this is sort of a tweak, and you don't want to base your production code on this kind of stuff.
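If it helps, here is a minimal sketch of creating that registry entry from C# rather than by hand; the service name MyService.exe and the fake-debugger path are hypothetical placeholders, and writing under HKLM requires administrative rights:

    using Microsoft.Win32;

    class IfeoSetup
    {
        static void Main()
        {
            // Hypothetical names; adjust to the real service executable and tool path.
            // Note: a 32-bit process on 64-bit Windows is redirected to WOW6432Node.
            const string keyPath =
                @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyService.exe";

            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
            {
                key.SetValue("Debugger", @"C:\Tools\FakeDebugger.exe", RegistryValueKind.String);
            }
        }
    }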

Related

Is it possible to override Environment.GetCommandLineArgs() at runtime?

I am working on a "debug dispatcher" C# program that is a debug assistance tool. This is not a new application; it has been a part of this project and invaluable to debugging it for some time. However, it has some limitations, which I have been trying to address to enable a more complete debugging experience.
This debug dispatcher takes the place of a system service that accepts requests to launch applications, and its purpose is to permit an attached debugger to automatically interact with code that would ordinarily be launched in a child process. The child processes are themselves .NET applications.
When this tool was made (years ago), the first thing that was investigated was whether there might be any way to launch a child process with the current debugger already attached to it. None was found, and so instead the tool creates an independent AppDomain within which to launch each process, then loads the application as an assembly and calls its entry method. This is almost working perfectly, but the problem I'm running into is that if those child processes call Environment.GetCommandLineArgs, they get the debug dispatcher tool's command-line instead of the command-line intended to be passed into a child process.
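For context, the dispatcher's launch step presumably looks something like the sketch below (all paths, names, and arguments are invented); the final comment marks exactly the problem being described:

    using System;

    class Dispatcher
    {
        static void Main()
        {
            AppDomainSetup setup = new AppDomainSetup
            {
                ApplicationBase = @"C:\Apps\Child",
                ConfigurationFile = @"C:\Apps\Child\Child.exe.config"
            };
            AppDomain domain = AppDomain.CreateDomain("Child-1", null, setup);

            // ExecuteAssembly runs the assembly's entry point on this thread.
            // The args array reaches Main(string[]), but inside the child,
            // Environment.GetCommandLineArgs() still reports the dispatcher's
            // own command line -- the exact problem described above.
            int exitCode = domain.ExecuteAssembly(@"C:\Apps\Child\Child.exe",
                                                  new[] { "--port", "8080" });
            Console.WriteLine("child exited with " + exitCode);
        }
    }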
I have been trying to find a way to override Environment.GetCommandLineArgs.
Based on the publicly-available source code, it looks like if my application were .NET Core, there would in fact be an internal method SetCommandLineArgs I could invoke via reflection. The fact that this is internal isn't particularly troubling to me as this tool is specifically a debug assistant; if it happens to break down the road because the implementation changed, so be it. It serves no purpose whatsoever outside of a debugging context and won't ever be on a non-dev machine. But... .NET Core and .NET 5 don't support AppDomains at all, and never will, so that's a non-starter.
I have tried using Ryder to redirect Environment.GetCommandLineArgs to my own implementation, but it doesn't seem to work, even with a .ini file specifying a [.NET Framework Debugging Control] section with AllowOptimize=0. It almost looks as though the JIT has special handling for this specific method, because even though the reference source shows it making an icall into a native method, when I request disassembly of the JIT output in the debugger, it shows no calls at all, simply loading a value directly from an inlined memory address.
I searched for ways to change the current process's command-line at the Win32 level, but that appears to be unmodifiable.
In the context of supporting multiple concurrent applications inside the same process by means of AppDomains (solely for assisting debugging), is there any way to intercept and/or override the return value of Environment.GetCommandLineArgs, so that I can support hosting applications that obtain their command-line arguments exclusively via that method?
Okay, well, I'm not sure what I did that changed it, but at some point redirecting Environment.GetCommandLineArgs using Ryder seemed to go from being unreliable (some calls would redirect, others wouldn't -- in some debug sessions, Ryder seemed to have no effect at all) to reliable (every call gets redirected). Ryder's redirection apparently doesn't automatically apply in all AppDomains, so I have to reinstall it each time I create an AppDomain, after which my experience has been that the process dies a messy death if I try to unload the AppDomain. But, for debug purposes... I think it's adequate.
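For anyone trying to reproduce this, a hedged sketch of installing the redirection, assuming Ryder exposes a Redirection.Redirect overload taking the original and replacement MethodInfos; the replacement arguments are made up, and as noted above it must be re-run inside each new AppDomain:

    using System;
    using System.Reflection;
    using Ryder;

    static class CommandLineOverride
    {
        // Invented replacement values for illustration.
        private static readonly string[] FakeArgs = { "Child.exe", "--port", "8080" };

        public static string[] GetCommandLineArgs()
        {
            return FakeArgs;
        }

        public static void Install()
        {
            MethodInfo original = typeof(Environment).GetMethod(
                "GetCommandLineArgs", BindingFlags.Public | BindingFlags.Static);
            MethodInfo replacement = typeof(CommandLineOverride).GetMethod(
                "GetCommandLineArgs", BindingFlags.Public | BindingFlags.Static);

            // The redirection does not carry over into new AppDomains,
            // so call Install() again after each CreateDomain.
            Redirection.Redirect(original, replacement);
        }
    }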

How can I prevent my application from causing a 0xc0000142 error in csc.exe?

The application in question is written in C#. We are late in the development cycle, close to launch on our application. One of my coworkers is seeing the following issue:
When he logs out of his Windows 7 session while the application is running, he gets a "csc.exe - Application Error" popup window that says "The application was unable to start correctly (0xc0000142). Click OK to close the application."
I believe that I have tracked this down to the fact that we update the application's XML config file on exit, and the code uses XmlSerializer. According to this question, XmlSerializer launches csc.exe to compile serialization assemblies dynamically on an as-needed basis, at run time. My suspicion is bolstered by the fact that, if I remove the update to the config file at exit time, then my coworker no longer sees the error message in question.
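For illustration, a minimal sketch of the pattern that triggers this; AppConfig is a stand-in for the real config type:

    using System.IO;
    using System.Xml.Serialization;

    public class AppConfig   // stand-in for the real config type
    {
        public string LastUser { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // The first use of this constructor for a given type generates,
            // compiles, and loads a temporary serialization assembly -- the
            // step that launches csc.exe at run time. If that first happens
            // while the session is being torn down at logout, csc.exe can
            // fail to start with 0xc0000142.
            var serializer = new XmlSerializer(typeof(AppConfig));

            using (var writer = new StreamWriter("settings.xml"))
            {
                serializer.Serialize(writer, new AppConfig { LastUser = "alice" });
            }
        }
    }

If that is indeed the mechanism, one low-risk mitigation may be to construct the serializer once at startup (and cache it), so the compile happens while the session is healthy rather than at logout.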
Can someone explain to me in more detail what is happening here? Why does csc.exe fail to start properly when executed at system logout? Is there some low-risk solution that I can put in place to mitigate the problem?
Things I have considered:
Use sgen to generate the serialization assemblies and deploy them with the application. This sounds promising, but my experiments with it were pretty dismal. It seems to only be able to generate a DLL either for an entire assembly or for a single class, with no way to specify a list of classes. Also, when I point it at one of my assemblies, it starts complaining about classes in the assembly with duplicate names.
Use another means to read / write the XML. I'm not confident about implementing this at our current stage of development. We are hoping to launch soon, and this feels like too much of a risk.

Windows Application doesn't crash when debugging, but crashes otherwise

I have a Windows application that calls an external .dll. After a while, fatal errors having to do with user marshaling were brought to my attention. An online source suggested that for that particular error I should change my build target to x86 rather than AnyCPU. I did so, and now whenever I let the app run freely, it drops out of debug mode and the application crashes. But if I set a breakpoint immediately after the .dll call and step over each line until I receive control of the application again, it doesn't crash. Is there anything specific that could be causing this? How does one debug this issue?
Thanks!
Stepping through code and thereby "solving" an issue is often a symptom of timing problems in the original code. If an external resource loads asynchronously, it will not show up on the current thread's stack in the debugger, but it can still be called. Stepping over code introduces a delay in the flow.
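As a contrived sketch of that failure mode (all names are invented), the code below crashes when run normally but survives if you step through it slowly:

    using System;
    using System.Threading;

    class NativeWrapper
    {
        private static string result;

        public static void BeginLoad()
        {
            // Stands in for an external DLL that finishes its work asynchronously.
            new Thread(() => { Thread.Sleep(200); result = "loaded"; }).Start();
        }

        public static string Result { get { return result; } }
    }

    class Program
    {
        static void Main()
        {
            NativeWrapper.BeginLoad();
            // Run normally, Result is still null here and this line throws.
            // Stepping in the debugger adds enough delay for the load to
            // finish, which is why the crash disappears while debugging.
            Console.WriteLine(NativeWrapper.Result.Length);
        }
    }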
Thank you all for your suggestions! Fortunately, I ended up getting it to work (with minimal understanding as to why) by changing the build target specifically to x86 machines rather than AnyCPU. This was suggested by a website I can no longer find :\ Hope this helps others that run into a similar issue!
I consider the most common cause of this sort of thing to be uninitialized variables. They pick up whatever was in memory, and the presence of a debugger can easily change what's in the unused part of the stack -- the memory that's going to become local variables when the next routine is called. Check the DLL's code.
Note that your "fix" makes me even more suspect that this is the real answer.
(Then there's also the really crazy case of a problem with the debugger. Long ago I hit a case where the debugger had no problem loading an invalid value into a segment register if you were single stepping.)

System.Diagnostics.Debugger.Break() stopped working

I'm working on a program which uses the System.Diagnostics.Debugger.Break() method to allow the user to set a breakpoint from the command-line. This has worked fine for many weeks now. However, when I was working on fixing a unit test today, I tried to use the debug switch from the command-line, and it didn't work.
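The setup being described presumably looks something like this sketch (the --debug switch name is invented):

    using System;
    using System.Diagnostics;
    using System.Linq;

    class Program
    {
        static void Main(string[] args)
        {
            // Invented switch name; the pattern is letting the user request
            // a breakpoint from the command line.
            if (args.Contains("--debug"))
            {
                Debugger.Break();
            }

            Console.WriteLine("running...");
        }
    }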
Here's what I've tried:
I've confirmed that the Break() method is really being called (by putting a System.Console.WriteLine() after it)
I've confirmed that the build is still in Debug
I've done a clean build
I've restarted Product Studio
A quick Google search didn't reveal anything, and the API documentation for .Net doesn't mention anything about this function not performing correctly. So... any ideas?
I finally figured out what was happening. For some reason, something changed on my machine so that just calling Debugger.Break wasn't sufficient anymore (still don't understand what changed). In any case, I can now cause the debugger to come up by using:
if (Debugger.IsAttached == false) Debugger.Launch();
The following note is extracted from the MSDN documentation:
Starting with .NET Framework 4, the runtime no longer exercises tight control of launching the debugger for the Break method, but instead reports an error to the Windows Error Reporting (WER) subsystem. WER provides many settings to customize the problem reporting experience, so a lot of factors will influence the way WER responds to an error such as operating system version, process, session, user, machine and domain. If you're having unexpected results when calling the Break method, check the WER settings on your machine. For more information on how to customize WER, see WER Settings. If you want to ensure the debugger is launched regardless of the WER settings, be sure to call the Launch method instead.
I think it explains the behavior detected.
I was using the Debugger.Launch() method and it suddenly stopped working. Using
if (Debugger.IsAttached == false) Debugger.Launch();
as suggested in this answer also did not bring up the debugger.
I tried resetting my Visual Studio settings and it worked!
Are you using VS 2008 SP1? I had a lot of problems around debugging in that release, and all of them were solved by this Microsoft patch.
Breakpoints put in loops or in recursive functions are not hit in all processes at each iteration. Frequently, some processes may pass through many iterations of a loop, ignoring the breakpoint, before a process is stopped.
Breakpoints are hit, but they are not visible when you debug multiple processes in the Visual Studio debugger.
There are a few other debugger-related problems also fixed.

Reasons to NOT run a business-critical C# console application via the debugger?

I'm looking for a few talking points I could use to convince coworkers that it's NOT OK to run a 24/7 production application by simply opening Visual Studio and running the app in debug mode.
What's different about running a compiled console application vs. running that same app in debug mode?
Are there ever times when you would use the debugger in a live setting? (live: meaning connected to customer facing databases)
Am I wrong in assuming that it's always a bad idea to run a live configuration via the debugger?
You will suffer from reduced performance when running under the debugger (not to mention the complexity concerns mentioned by Bruce), and there is nothing to keep you from getting the same functionality when compiled in release mode -- you can always set your program up to log unhandled exceptions and generate a core dump, which will allow you to debug issues even after restarting your app.
In addition, it sounds just plain wrong to be manually managing an app that needs 24/7 availability. You should be using scheduled tasks or some sort of automated process restarting mechanism.
Stepping back a bit, this question may provide some guidance on influencing your team.
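A minimal sketch of the unhandled-exception logging suggested above (the log file name is invented); a full crash dump could additionally be captured with WER or a tool such as procdump:

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            // Log fatal exceptions to a file instead of relying on a debugger.
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                File.AppendAllText("crash.log",
                    DateTime.UtcNow + ": " + e.ExceptionObject + Environment.NewLine);
            };

            // ... the actual 24/7 work goes here ...
        }
    }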
In itself, there's no issue with running it under the debugger if the performance is good enough. What strikes me as odd is that you are running business-critical 24/7 applications as users, perhaps even on a workstation. If you want to ensure robustness and availability, you should consider running this on dedicated hardware that no one uses besides the application. If you are indeed running this on a user's machine, accidents can easily happen, such as closing the "wrong" Visual Studio or crashing the computer.
Running in debug should be done in the test environment. Where I've worked, we usually have three environments: Production, Release, and Test.
Production
Dedicated hardware
Limited access, usually only the main developers/technology
Version control, a certain tagged version from SVN/CVS
Runs the latest stable version that has been promoted to production status
Release
Dedicate hardware
Full access to all developers
Version control, a certain tagged version from SVN/CVS
Runs the next version of the product, not yet promoted to production status, but will probably be. "Gold" if you like.
Test
Virtual machine or lousy hardware
Full access
No version control, could be the next, next version, or just a custom build that someone wanted to test out on "near prod environment"
This way we can easily test new versions in Release, even debug them there. In the Test environment it's anything-goes; it's more for when someone wants to test something involving more than their own box.
This way you are protected against quick hacks that weren't tested enough, by having dedicated test machines, while still being able to release those hacks in an emergency.
Speaking very generically, when you run a program under a debugger you're actually running two processes - the target and the debugger - and tying them together pretty intimately. So the opportunities for unexpected influences and errors (that aren't in a production run) exist. Of course, the folks who write the debuggers do their best to minimize these effects, but running that scenario 24/7 is likely to expose any issues that do exist.
If you're trying to track down a particular failure, sometimes running under a debugger is the best solution; but even there, often enabling tracing of one sort or another is a lower-impact solution that is just as effective.
The debugger is also using up resources - depending on the machine and the app, that could be an issue. If you need more specific examples of things that could go wrong using a debugger 24/7 let me know.
Ask them if they'd like to be publicly mocked on The Daily WTF. (Because with enough details in the write up, this would qualify.)
I can't speak for everyone's experience, but for me Visual Studio crashes a lot. It not only crashes itself, it takes Explorer down with it. This is exacerbated by add-ons and plugins. I'm not sure it has ever been tested to run for days and days the same way the OS has.
You're essentially putting the running of your app at the mercy of this huge behemoth of a second app that is easily orders of magnitude larger and more complex than your own. You're just going to get bug reports, and most of them are going to involve Visual Studio crashing.
Also, are you paying for Visual Studio licenses for production machines?
You definitely don't want an application that needs to be up 24/7 to be run manually from the debugger, regardless of the performance issues. If you have to convince your co-workers of that, find a new job.
I have sometimes used the debugger live (i.e. against live customer data) to debug data-related application problems in situations where I couldn't exactly reproduce the production data in a test environment.
Simple answer: you will almost certainly reduce performance (most likely considerably) and you will vastly increase your dependencies. In one step you've added the entire VS stack including the IDE and every other little bit to your dependencies. Smart people keep the dependencies of high-uptime services as tight as possible.
If you want to run under a debugger, then you should use a lighter-weight debugger like ntsd; this is just madness.
We never run it via the debugger. There are compiler options that may accidentally be turned on or off, optimizations aren't turned on, and running it in production is a huge security risk.
Aside from the debug code possibly having different code paths (#if DEBUG, Debug.Assert(), etc.), code-wise it will run the same.
A little scary, mind you: breakpoints being set, setting the next line of code to execute, interactive exception popups, and the not-as-stable experience of running under Visual Studio. There are also debugger options that break whenever an exception occurs. Even inspecting classes can cause side effects if you haven't written your code carefully... It sure isn't something I'd want as the normal 24x7 process.
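To make the "different code paths" point above concrete, here is a small illustrative example (names invented); both constructs vanish or change behavior in a Release build:

    using System;
    using System.Diagnostics;

    class Checkout
    {
        public void Process(int quantity)
        {
            // Debug.Assert is marked [Conditional("DEBUG")], so this call is
            // compiled out of Release builds entirely -- a Debug build can stop
            // here while a Release build sails straight past.
            Debug.Assert(quantity > 0, "quantity must be positive");

    #if DEBUG
            // This path only exists in Debug builds.
            Console.WriteLine("Processing " + quantity + " item(s)");
    #endif
        }
    }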
The only reason to run from the debugger is to debug the application. If you're doing that on a regular basis in production, it's a big red flag that your code and your process need help.
To date I've never had to run debug mode interactively in production. On rare occasions we switched over to a debug build for extra logging, but we never sat there with Visual Studio open.
I would ask them what is the advantage of running it via Visual Studio?
There are plenty of disadvantages that have been listed in the replies. I can't think of any advantages.
