System.NullReferenceException only in Release Build - C#

I'm getting a "System.NullReferenceException: Object reference not set to an instance of an object." error when I launch a release build of my web application. It tells me to do a debug build to get more information, but when I launch the debug build the error no longer occurs. Without the line numbers that accompany most errors in debug builds, it is very hard (as far as I know) to pinpoint the cause of this vague error.
Can anyone point me in the right direction to narrow down the cause of this exception?
Thanks.

As a quick remedy for your problem (if you don't have time to rewrite your code), check the Event Log on the machine where you deployed the application. There is a good chance you're simply missing some DLLs.
As a long-term solution, I think you can start by adding some logging functionality to your application (Enterprise Library, log4net, etc., or even your own logger). Printing a complete stack trace is an invaluable source of help, especially when you include the .pdb files with your release build; it lets the running code tell you exactly which line threw the exception.
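For example, a minimal sketch of what that might look like in an ASP.NET web application, assuming log4net is already configured (the Global class and logger name here are just illustrative):

// Global.asax.cs: log the full exception, including its stack trace, for any unhandled error.
using System;
using System.Web;
using log4net;

public class Global : HttpApplication
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Global));

    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        if (ex != null)
        {
            // ex.ToString() contains the stack trace; with the .pdb files deployed
            // alongside the binaries it will also include file names and line numbers.
            Log.Error("Unhandled exception", ex);
        }
    }
}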
Hope this helps,
Piotr

It is possible to get file names and line numbers in your stack traces even if you build in Release mode. See Display lines number in Stack Trace for .NET assembly in Release mode and Is stacktrace information available in .NET release mode build? for example.
In general, I think you should avoid introducing different program behavior in Debug versus Release mode (though perhaps you didn't introduce that difference deliberately).
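As a rough illustration of the approach in those links, assuming the matching .pdb files are deployed next to the release binaries, you can pull file and line information out of a caught exception yourself:

using System;
using System.Diagnostics;

static class ErrorReporter
{
    public static void LogWithLineNumbers(Exception ex)
    {
        // Passing fNeedFileInfo = true asks the runtime to resolve file/line data from the .pdb,
        // which works in a Release build as long as the .pdb sits next to the assembly.
        var trace = new StackTrace(ex, true);
        StackFrame[] frames = trace.GetFrames();
        if (frames == null) return;

        foreach (StackFrame frame in frames)
        {
            Console.WriteLine("{0} in {1}:line {2}",
                frame.GetMethod(),
                frame.GetFileName(),
                frame.GetFileLineNumber());
        }
    }
}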

Related

VisualStudio debugger breaking on handled exceptions when library linked as DLL

I am having a nuisance problem with a C# app. To simplify the scenario, I have a main .exe project which references another C# library as a direct DLL dependency. This DLL throws exceptions in a particular place, and the debugger is breaking on those. However, the exception is handled and not re-thrown. My Exceptions dialog in VS is configured so the debugger breaks only on user-unhandled exceptions of this type (InvalidOperationException), yet it is still breaking.
However if I link the same library as a project reference (rather than to the compiled DLL) the debugger no longer breaks on this exception.
Likewise, if I run the .exe directly (outside the debugger), I see no evidence that this exception is handled any differently than I expect: no errors, and my logging indicates the expected control flow.
The related code has been mostly unchanged for some time, but I have refactored my solutions and projects; I was previously using only project references, so perhaps never spotted this issue until I went to DLL references.
Can anyone suggest anything else I might look at as to why the debugger is breaking on this handled exception?
I did some more googling and fiddling, and I have settled on the idea that the 'Enable Just My Code' option is what is causing the debugger to break.
This link gave me some clear insights into how this option affects the debugger: http://www.jaylee.org/post/2010/07/05/VS2010-On-the-Impacts-of-Debugging-with-Just-My-Code.aspx
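For illustration, a simplified, hypothetical version of the pattern in question: the InvalidOperationException is thrown and handled entirely inside the library, so the calling .exe never sees it. When that library is referenced as a compiled, optimized DLL instead of a project, 'Enable Just My Code' no longer treats its frames as your code, which appears to be what makes the debugger break even though the exception never escapes.

// Inside the referenced library (external code when linked as a compiled DLL):
using System;

public static class Parser
{
    public static int TryParseOrDefault(string text)
    {
        try
        {
            return ParseStrict(text); // may throw InvalidOperationException
        }
        catch (InvalidOperationException)
        {
            // Handled right here; the exception never reaches the calling .exe,
            // which matches the behavior observed when running outside the debugger.
            return 0;
        }
    }

    private static int ParseStrict(string text)
    {
        if (string.IsNullOrEmpty(text))
            throw new InvalidOperationException("Empty input");
        return int.Parse(text);
    }
}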

What is behind a silent failure to load resource on x64 bit machine with .NET 4.0?

A user of my program has reported an inability to startup the application. I am not yet done troubleshooting, but I'm simply baffled.
Logging still works, so I used logging statements and was able to narrow down the crash to a single line in a user control's InitializeComponent:
this.HorizontalBox.Image =
((System.Drawing.Image)(resources.GetObject("HorizontalBox.Image")));
Here are the relevant clues from his end:
64 bit Windows 7
Correct .NET Framework (4.0 Client Profile)
No visual elements ever show, and no error dialogs. It is a silent shutdown when starting.
Logging works, but there were no logged errors.
He has uninstalled and reinstalled the .NET 4.0 Client Profile framework.
He doesn't have any Visual Studio or other development tools mucking with stuff.
I have spent a week or so eliminating theories and I'm becoming confused and desperate. Here are relevant details and things I have found:
I am targeting x86 explicitly.
The logging which failed to log any exception is set up to catch and log all unhandled exceptions and thread-abort exceptions (a rough sketch of this setup appears at the end of this question).
Whatever is killing the application also prevents the final "shutting down" logging message in the program's basic entry point.
I had read that certain icon (.ICO) file formats don't work in Windows XP. A far-fetched theory, since this is Windows 7, but it is the one and only case of ICO files in the project, so I was suspicious and switched it to PNG. No difference. I now figure that the image is failing merely because it is the first image loaded from a resource.
I had read that the Form_Load event may swallow exceptions (and only when debugging). Also, InitializeComponent() is in the constructor, so the theory was shaky. Nonetheless, I wrapped the call to InitializeComponent() in a try/catch, but the catch and its associated logging never get called.
I have seen posts about resource compilation problems between x86 and x64, but nothing relevant to runtime issues. (See this post)
I assumed there must be something wrong unique with the program showing issues, so I made a WindowsFormsApplication1 test application with nothing more than a single image embedded in the associated resource file. This also fails to load in the same way. This test application was also targeting x86.
It works fine on other x86 and x64 machines!
What could possibly be going on with his machine? Why is exception handling failing me? This problem is crazy!
Edit: More Details, and I'm still baffled!
I have since sent the test application (a single form with a single image on it) built as x86, x64, and "Any Cpu". The x64 and "Any Cpu" applications both work.
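For reference, a rough sketch of the exception-handling setup described in the list above (the class names and logger are illustrative, not the actual code):

using System;
using System.IO;
using System.Windows.Forms;

static class Program
{
    // Hypothetical stand-in for the real logging framework.
    internal static void Log(string message)
    {
        File.AppendAllText("app.log", DateTime.Now + " " + message + Environment.NewLine);
    }

    [STAThread]
    static void Main()
    {
        // The catch-and-log hooks mentioned above.
        AppDomain.CurrentDomain.UnhandledException +=
            (s, e) => Log("Unhandled: " + e.ExceptionObject);
        Application.ThreadException +=
            (s, e) => Log("Thread exception: " + e.Exception);

        Log("Starting up");
        Application.Run(new MainForm());
        Log("Shutting down"); // on the failing machine this line is never logged
    }
}

class MainForm : Form
{
    public MainForm()
    {
        try
        {
            InitializeComponent(); // designer-generated; loads images from the .resx
        }
        catch (Exception ex)
        {
            Program.Log("InitializeComponent failed: " + ex); // never reached on the failing machine
            throw;
        }
    }

    private void InitializeComponent()
    {
        // Designer code elided; this is where resources.GetObject("HorizontalBox.Image") is called.
    }
}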
Some questions spring to mind. Have you got a similar build machine with which to test? This may help to identify whether it is the build/program integration or some possible issue with his machine (i.e. a Windows problem, a virus, etc.).
Has he installed to the default folder or did he do a customised install?
Has he tried a full uninstall / reinstall of your app? (I note you said the runtime was refreshed) - possibly to a different folder to make sure.
Can you recreate it on a similar setup (same OS version) with VS installed so you can do a code walkthrough in the debugger? The stack trace and output window may help identify the cause (so may disassembly), and you can set the debugger to stop on all exceptions.
Unfortunately, unhandled exceptions cannot always be caught in C# (especially post 2.0), so a debugger such as WinDbg may be your only option in the end (yuk!).
Can I suggest something first, though... just a thought:
Before the line that fails, as a test, output something like this:
var obj = resources.GetObject("HorizontalBox.Image");
Console.WriteLine("Obj = " + (obj is Bitmap));
Because I have a feeling that the failure is happening when trying to marshal the resource into the Bitmap type and hitting a memory exception (maybe something is corrupt with the image stride/pixel format, etc., or maybe something on the culprit machine is making the image file look like a non-image file).
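Expanding on that idea a little (still just a diagnostic sketch; the resource name comes from the question, and the owner type is whatever user control actually holds the image):

using System;
using System.ComponentModel;
using System.Drawing;

static class ResourceDiagnostics
{
    // Call this from the user control's constructor, before InitializeComponent().
    public static void ProbeImage(Type ownerType)
    {
        try
        {
            var resources = new ComponentResourceManager(ownerType);
            object obj = resources.GetObject("HorizontalBox.Image");
            Console.WriteLine("Obj = " + (obj == null ? "null" : obj.GetType().FullName));
            Console.WriteLine("Is Bitmap = " + (obj is Bitmap));
        }
        catch (Exception ex)
        {
            // Route this to the application's logger if no console is visible.
            Console.WriteLine("GetObject failed: " + ex);
        }
    }
}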

what's the difference between C# compilation setting "/debug:pdbonly" and "/debug:full"? [duplicate]

In Visual Studio for a C# project, if you go to Project Properties > Build > Advanced > Debug Info you have three options: none, full, or pdb-only.
Which setting is the most appropriate for a release build?
So, what are the differences between full and pdb-only?
If I use full will there be performance ramifications? If I use pdb-only will it be harder to debug production issues?
I would build with pdb-only. You will not be able to attach a debugger to the released product, but if you get a crash dump, you can use Visual Studio or WinDBG to examine the stack traces and memory dumps at the time of the crash.
If you go with full rather than pdb-only, you'll get the same benefits, except that the executable can be attached directly to a debugger. You'll need to determine if this is reasonable given your product & customers.
Be sure to save the PDB files somewhere so that you can reference them when a crash report comes in. If you can set up a symbol server to store those debugging symbols, so much the better.
If you opt to build with none, you will have no recourse when there's a crash in the field. You won't be able to do any sort of after-the-fact examination of the crash, which could severely hamper your ability to track down the problem.
A note about performance:
Both John Robbins and Eric Lippert have written blog posts about the /debug flag, and they both indicate that this setting has zero performance impact. There is a separate /optimize flag which dictates whether the compiler should perform optimizations.
WARNING
The MSDN documentation for the /debug switch (in Visual Studio it is Debug Info) seems to be out of date! This is what it says, which is incorrect:
If you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code.
One difference between /debug:pdbonly and /debug:full is that with /debug:full the compiler emits a DebuggableAttribute, which is used to tell the JIT compiler that debug information is available.
Then, what is true now?
Pdb-only – prior to .NET 2.0, it helped to investigate crash dumps from the released product (customer machines), but it didn't allow attaching a debugger. That is no longer the case as of .NET 2.0; it is exactly the same as Full.
Full – this helps us to investigate crash dumps, and it also allows us to attach a debugger to the release build. But contrary to what MSDN says, it doesn't impact performance (since .NET 2.0). It does exactly the same as Pdb-only.
If they are exactly the same, why do we have these options? John Robbins (Windows debugging god) found out they are there for historical reasons.
Back in .NET 1.0 there were differences, but in .NET 2.0 there isn't. It looks like .NET 4.0 will follow the same pattern. After double-checking with the CLR Debugging Team, there is no difference at all.
What controls whether the JITter does a debug build is the /optimize switch. <…>
The bottom line is that you want to build your release builds with /optimize+ and any of the /debug switches so you can debug with source code.
Then he goes on to prove it.
Now the optimization is controlled by a separate switch, /optimize (in Visual Studio it is called Optimize code).
In short: irrespective of whether the DebugInfo setting is pdb-only or full, we get the same results. The recommendation is to avoid None, since it would deprive you of the ability to analyze crash dumps from the released product or to attach a debugger.
You'll want PDB only, but you won't want to give the PDB files to users. Having them for yourself though, alongside your binaries, gives you the ability to load crash dumps into a debugger like WinDbg and see where your program actually failed. This can be rather useful when your code is crashing on a machine you don't have access to.
Full debug adds the [Debuggable] attribute to your code. This has a huge impact on speed. For example, some loop optimizations may be disabled to make single stepping easier. In addition, it has a small effect on the JIT process, as it turns on tracking.
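If you want to verify what a particular build actually emitted, here is a small sketch that reads the attribute's flags back at runtime (broadly, IsJITOptimizerDisabled follows /optimize and IsJITTrackingEnabled follows /debug:full):

using System;
using System.Diagnostics;
using System.Reflection;

class DebugInfoCheck
{
    static void Main()
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        object[] attrs = asm.GetCustomAttributes(typeof(DebuggableAttribute), false);

        if (attrs.Length == 0)
        {
            Console.WriteLine("No DebuggableAttribute emitted.");
            return;
        }

        var dbg = (DebuggableAttribute)attrs[0];
        Console.WriteLine("JIT optimizer disabled: " + dbg.IsJITOptimizerDisabled);
        Console.WriteLine("JIT tracking enabled:   " + dbg.IsJITTrackingEnabled);
    }
}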
I'm in the process of writing an unhandled exception handler, and the stack trace includes the line number when pdb-only is used; otherwise I just get the name of the Sub/Function when I choose None.
If I don't distribute the .pdb I don't get the line number in the stack trace even with the pdb-only build.
So, I'm distributing (XCOPY deploy on a LAN) the pdb along with the exe from my VB app.

Generating debug symbols for Symbol Server from CI process to aid remote debugging

Does anyone have any advice about extending our SVN & Cruise Control CI process to populate a Symbol Server?
We are trying to remotely debug test environments for our ASP.NET 2.0 C# website and have been running into problems getting the correct symbols to always load.
Our build process is done in release mode, not debug mode, so how does this affect the creation of PDB files?
Using VS2008, we have solved several issues in connecting to remote debugging since the test environments are not in the same domain. We are now getting this message when trying to watch variables:
Cannot obtain value of local or argument 'xxxxx' as it is not available at this instruction pointer, possibly because it has been optimized away
Is this because our build and subsequent deployment process is in release mode?
This error message appears because the CLR itself has optimized out the variables. The PDBs still contain all of the information about the locals in release mode; the debugger is simply unable to access them.
It is possible, though, to build in release mode and generally avoid this problem. One of the factors in whether the CLR will optimize in such a way that locals are not visible is the DebuggableAttribute class.
This attribute is generally emitted by the compiler, and it changes the flags based on the project's configuration: Release or Debug. If the attribute already exists in your project, though, the compiler will not overwrite it.
If you have a web application (as opposed to a web site), you can just add the following line to AssemblyInfo.cs and it should fix the problem:
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.DisableOptimizations)]
Note that this disables performance optimizations, so you probably don't want to actually release this way, but it's helpful for debugging.
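For completeness, a sketch of how that might sit in AssemblyInfo.cs, guarded by a hypothetical build symbol so it only applies to the test-environment builds you want to debug remotely:

using System.Diagnostics;

#if REMOTE_DEBUG // hypothetical symbol; define it only for the test-environment build configuration
// Keeps the JIT from optimizing locals away so the remote debugger can inspect them.
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.DisableOptimizations)]
#endif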
