Avoid "Unreachable code" warning for preprocessor-dependent code - c#

I'm trying to figure out if there's any way to avoid getting an "Unreachable code" warning for something that's caused by the preprocessor. I don't want to suppress all such warnings, only those that depend on the preprocessor, e.g.
#if WINDOWS
public const GamePlatform platform = GamePlatform.PC;
#else
public const GamePlatform platform = GamePlatform.MAC;
#endif
And later on there's code that goes:
if (platform == GamePlatform.PC)
{
    ...
}
else
{
    ...
}
One of those two sections will always be detected as "Unreachable code", and we've got those all over the place. I'd like to try and get rid of the many warnings it creates, but I still want to get warnings for legitimately unreachable code. (In reality, there's more than just two platforms, so each chunk of platform-specific code creates a pile of unnecessary warnings.)

Option 1: Use the preprocessor directives directly wherever you have the if-statements. The dead branches are then never compiled at all, which is more performant, but perhaps a bit uglier.
Option 2: Make the platform variable not const. Changing it to static readonly made the warning go away for me.
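A minimal sketch of Option 2, reusing the declarations from the question (GamePlatform is the question's enum; only const changes):

#if WINDOWS
public static readonly GamePlatform platform = GamePlatform.PC;
#else
public static readonly GamePlatform platform = GamePlatform.MAC;
#endif

Since a static readonly field is not a compile-time constant, the compiler can no longer prove either branch of the if unreachable, so the warning disappears, while genuinely unreachable code elsewhere still gets flagged. The trade-off is that the comparison now happens at run time and the dead branch stays in the assembly.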

Related

Mobile apps: is it a good practice to use a constant to distinguish the Free/Pro version?

I have two apps which do essentially the same, with small differences (different logo, ads hidden in the Pro, different Facebook app id for the login flow, etc.)
Currently, I use a
public const bool isProVersion = false;
which I toggle before compiling, and it changes the whole app's behavior depending on its value.
The only issue is that I have a lot of "unreachable code detected" notices in my error list.
This is intended, because some code must never be reached in that version, but it doesn't look very clean.
I could use a static variable instead of a constant, but then, for example, the ads code would still be compiled into (and "reachable" in) the Pro version, which is not needed and could hurt performance.
Is there any better alternative?
Expanding on the comment by Jeroen Vannevel: you really should use preprocessor directives. Define an ISPROVERSION symbol and two build configurations, one that defines ISPROVERSION (the pro configuration) and one that doesn't (the free configuration).
So, instead of doing this:
if (YourClassName.isProVersion)
{
    // user has paid, yey!
    SomeClass.LoadAds(false);
}
else
{
    // user didn't pay, scr** them!
    SomeClass.LoadAds(true);
}
You would be doing something like this:
#if ISPROVERSION
    // user has paid, yey!
    SomeClass.LoadAds(false);
#else
    // user didn't pay, scr** them!
    SomeClass.LoadAds(true);
#endif
This way, if you build using the pro configuration, the code in the #else branches won't even be compiled.
Read more about defining preprocessor directives here: /define (C# Compiler Options)
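As a self-contained sketch of the two-configuration setup (the csc command lines below are one way to define the symbol; in Visual Studio you would add ISPROVERSION to the pro configuration's conditional compilation symbols instead):

// Free build:  csc Program.cs
// Pro build:   csc /define:ISPROVERSION Program.cs
using System;

class Program
{
    static void Main()
    {
#if ISPROVERSION
        Console.WriteLine("pro build: the ad code below was never compiled");
#else
        Console.WriteLine("free build: loading ads");
#endif
    }
}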

#if DEBUG for exception handling

I have a public method ChangeLanguage in a class library that will be used by other people who have no idea what the source code is, but know what it can do.
public string ChangeLanguage(string language)
{
#if DEBUG
    // Throw if the new language is null or an empty string.
    language.ThrowArgumentNullExceptionIfNullOrEmpty("language", GetString("0x000000069"));
#else
    if (string.IsNullOrEmpty(language))
        language = _fallbackLanguage;
#endif
    // ... (rest of the method elided)
}
For me it looked obvious to check whether a language was actually passed to the method and, if not, throw an error to the developer. Though the check is only a very small performance cost, I think it's better not to throw an exception but to just use the fallback language when no language was provided.
I'm wondering if this is a good design approach and if I should keep using it at other places in the library like here:
_appResDicSource = Path.Combine("\\" + _projectName + ";component", _languagesDirectoryName, _fileBaseName + "_" + language + ".xaml");
_clsLibResDicSource = "\\Noru.Base;component\\Languages\\Language_" + language + ".xaml";

ResourceDictionary applicationResourceDictionary;
ResourceDictionary classLibraryResourceDictionary;

try { applicationResourceDictionary = new ResourceDictionary { Source = new Uri(_appResDicSource, UriKind.RelativeOrAbsolute) }; }
catch
{
#if DEBUG
    throw new IOException(string.Format(GetString("1x00000006A"), _appResDicSource));
#else
    return ChangeLanguage(_fallbackLanguage);
#endif
}

try { classLibraryResourceDictionary = new ResourceDictionary { Source = new Uri(_clsLibResDicSource, UriKind.RelativeOrAbsolute) }; }
catch
{
#if DEBUG
    throw new IOException(string.Format(GetString("1x00000006B"), _clsLibResDicSource));
#else
    return ChangeLanguage(_fallbackLanguage);
#endif
}
It depends on the semantics of the call, but I would consider Debug.Fail (calls to which are eliminated if DEBUG is not defined):
public string ChangeLanguage(string language)
{
    // Requires: using System.Diagnostics;
    if (string.IsNullOrEmpty(language))
    {
        Debug.Fail("language is NOK");
        language = _fallbackLanguage;
    }
    // ... (rest of the method elided)
}
The issue is that, on the one hand, as mentioned by @OndrejTucny and @poke, it is not desirable to have different logic for different build configurations. That is correct.
But on the other hand, there are cases where you do not want the application to crash in the field due to a minor error. Yet if you just ignore the error unconditionally, you decrease the chances of detecting it even on your local system.
I do not think there is a universal solution. In general, you may end up deciding to throw or not, to log or not, to add an assertion or not, always or sometimes. The answers depend on the concrete situation.
No, this is certainly not a good design approach. The problem is that your method's DEBUG and RELEASE contracts are different. I wouldn't be happy using such an API as a developer.
The inherent problem of your solution is that you will end up with production behavior that cannot be reproduced in the dev/test environment.
Either not providing a language code is an error, in which case you should always raise an exception, or it is a valid condition that leads to some pre-defined behavior, such as substituting the fallback language, in which case you should always substitute it. It shouldn't be both, decided only by which particular compilation of your assembly was selected.
I think this is a bad idea. Mostly because this separates the logic of a debug build and a release build. This means that when you—as a developer—only build debug builds you will not be able to detect errors with the release-only logic, and you would have to test that separately (and in case of errors, you would have a release build, so you couldn’t debug those errors properly). So this adds a huge testing burden which needs to be handled for example by separate unit tests that run on debug and release builds.
That being said, performance is not really an issue. That extra logic only happens in debug builds and debug builds are not optimized for performance anyway. Furthermore, it’s very unlikely that such checks will cause any noticeable performance problems. And unless you profile your application and actually verify that it is causing performance problems, you shouldn’t try to optimize it.
How often is this method getting called? Performance is probably not a concern.
I don't think the decision to select the fallback language is the library's to make. Throw the exception, and let the clients choose to select a default, fallback if desired.
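A sketch of that single, consistent contract, reusing names from the question (the exception message is my own):

public string ChangeLanguage(string language)
{
    // Same contract in every build: a missing language code is the caller's error.
    if (string.IsNullOrEmpty(language))
        throw new ArgumentException("A language code is required.", "language");
    // ... load the resource dictionaries for the requested language ...
    return language;
}

A client that wants fallback behavior can then decide that explicitly, e.g. pre-check (or catch the exception) and call ChangeLanguage(_fallbackLanguage) itself.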

Code contracts static checking does not seem to be working

I am using VS 2010 and testing something really basic:
class Program
{
    static void Main(string[] args)
    {
        var numbers = new int[2];
        numbers[3] = 0;
    }
}
I have gone to Properties > Code Contracts and enabled static checking. No errors/warnings/squiggly underlines show up on compile/build.
EDIT:
When turning the warning level up to max I get this warning, which is not the warning I am after:
Warning 1 CodeContracts: Invoking method 'Main' will always lead to an error. If this is wanted, consider adding Contract.Requires(false) to document it
It's not clear what warning you're expecting (you state "I get this warning, which is not the warning I am after" without actually saying what warning you are after), but perhaps this will help.
First up:
var numbers = new int[2];
numbers[3] = 0;
This is an out-of-bounds access that will fail at runtime. This is the cause of the error you're getting, which states that "Invoking method 'Main' will always lead to an error." - that's perfectly accurate, it will always lead to an error because that out-of-bounds array access will always throw a runtime exception.
Since you state that this isn't the warning you're expecting, though, I've had to employ a bit of guesswork as to what you were expecting. My best guess was that due to having ticked the 'Implicit Non-Null Obligations' checkbox, and also having tried adding Contract.Requires(args != null) to your code, you're expecting to get a warning that your Main method could potentially be called with a null argument.
The thing is, Code Contracts will only inspect your own code to make sure that you always provide a non-null argument when calling Main. But you never call Main at all - the operating system calls Main, and Code Contracts is not going to inspect the operating system's code!
There's no way to provide compile-time checking of the arguments provided to Main - you have to check these at runtime, manually. Again, Code Contracts works by checking that calls you make to a function meet the requirements you set - if you're not actually making the calls yourself, Code Contracts has no compile-time say in the matter.
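A minimal runtime guard, as a sketch (the usage message and argument handling are illustrative):

using System;

class Program
{
    static void Main(string[] args)
    {
        // Input from the outside world can only be validated at run time;
        // no static checker can prove anything about what the OS passes in.
        if (args.Length < 1)
        {
            Console.Error.WriteLine("usage: program <name>");
            return;
        }
        Console.WriteLine("Hello, " + args[0]);
    }
}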
I have tried this (albeit with Visual Studio 2013 + Code Contracts) and I found the following:
With the Warning Level set to "low" (like you have), I do not get a warning.
With the Warning Level set to "hi", I do get a warning.
So I suggest increasing your warning level slider.

Does including Contract.Assert have any effect on execution?

I have seen code which includes Contract.Assert, like:
Contract.Assert(t != null);
Will using Contract.Assert have a negative impact on my production code?
According to the manual on page 11, they're only included in your assembly when the DEBUG symbol is defined (which, of course, includes Debug builds, but not Release builds (by default)).
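The mechanism behind that is the [Conditional] attribute: the compiler removes calls to a method marked with it unless the named symbol is defined where the call is compiled. A minimal illustration of the same idea (my own example, not the Contracts implementation):

using System.Diagnostics;

static class Checks
{
    // Call sites compile to nothing unless DEBUG is defined.
    [Conditional("DEBUG")]
    public static void NotNull(object value, string name)
    {
        if (value == null)
            Debug.Fail(name + " is null");
    }
}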
In addition to the runtime benefits of Contract.Assert, you can also consider the alternative Contract.Assume (manual section 2.5, page 11) when making calls to legacy code that has no contracts defined, particularly if you like having the static warning level cranked up high (static checking level 3+, with Show Assumptions turned on).
Contract.Assume gives you the same runtime benefits as Contract.Assert, but also suppresses static warnings that can't be proven because of the legacy assembly.
For example, in the code below, static checking at warning level 3 gives the warning CodeContracts: requires unproven: someString != null when checking the contracts of MethodDoesNotAllowNull:
var aString = LegacyAssembly.MethodWithNoContract();
MethodDoesNotAllowNull(aString);

private void MethodDoesNotAllowNull(string someString)
{
    Contract.Requires(someString != null);
}
With the legacy assembly code:
public static string MethodWithNoContract()
{
    return "I'm not really null :)";
}
Contract.Assume suppresses the warning (but gives the runtime Assert benefit in debug builds):
var aString = LegacyAssembly.MethodWithNoContract();
Contract.Assume(aString != null);
MethodDoesNotAllowNull(aString);
This way, you still get the empirical runtime benefit of the Contract.Assert in debug builds.
As for good practice: it is better to have a solid set of unit tests. Then code contracts are not that necessary; they may be helpful, but are less important.

Is '#IF DEBUG' deprecated in modern programming?

This is my first StackOverflow question so be nice! :-)
I recently came across the following example in some C# source code:
public IFoo GetFooInstance()
{
#if DEBUG
    return new TestFoo();
#else
    return new Foo();
#endif
}
Which led me to these questions:
Is the use of "#if DEBUG" unofficially deprecated? If not, what is considered to be a good implementation of its use?
Using a class factory and/or tools like Moq, Rhino Mocks, etc., how could the above example be implemented?
Using an IoC container, the entire function becomes redundant; instead of calling GetFooInstance you'd have code similar to:
ObjectFactory.GetInstance<IFoo>();
The setup of your IoC container could be in code or through a configuration file.
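A hand-rolled sketch of the idea (real projects would use StructureMap, Castle Windsor, etc.; the registry below is illustrative only):

using System;
using System.Collections.Generic;

static class ObjectFactory
{
    static readonly Dictionary<Type, Func<object>> registrations =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) where T : class
    {
        registrations[typeof(T)] = () => factory();
    }

    public static T GetInstance<T>() where T : class
    {
        return (T)registrations[typeof(T)]();
    }
}

// Composition root: decide once which implementation IFoo resolves to.
// ObjectFactory.Register<IFoo>(() => new Foo());      // production wiring
// ObjectFactory.Register<IFoo>(() => new TestFoo());  // test wiring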
Nope. We use it all the time to sprinkle diagnostic information in our assemblies. For example, I have the following shortcut used when debugging:
#if DEBUG
if( ??? ) System.Diagnostics.Debugger.Break();
#endif
where I can change ??? to any relevant expression, like Name == "Popcorn", etc. This ensures that none of the debugging code leaks into the release build.
Just like some of the other posters have mentioned, I use #if statements all the time for debugging scenarios.
The style of code that you have posted is more of a factory creation pattern, which is common. I use it frequently, and not only do I not consider it deprecated, I consider #if and #define directives an important tool in my bag of tricks.
I believe Castle Windsor (http://www.castleproject.org/container/index.html) also has an IoC container. The general pattern is that, in the configuration file, you state which concrete class (TestFoo or Foo) should be created for IFoo when Castle Windsor initializes the container.
Yes. I would strongly advise AGAINST using "#if DEBUG" except in rare circumstances.
It was commonplace in C, but you should not use it in a modern language such as C#, for several reasons:
Code (and header) files become nightmarish
It is too easy to make a mistake and leave a conditional in (or out of) a Release build
Testing becomes a nightmare: you need to test many combinations of builds
Not all code is compile-checked, unless you compile for ALL possible conditional symbols!
@Richard's answer shows how you can replace it using IoC (much cleaner).
I strongly deprecate #if; instead use if with a manifest constant:
public const bool DEBUG = false; // defined once in a central class (in C this would be a header or the command line)

public IFoo GetFooInstance()
{
    if (DEBUG)
        return new TestFoo();
    else
        return new Foo();
}
Why is #if bad? Because in the #if version, not all code is type-checked. For complicated, long-lived code, bitrot can set in, and then suddenly you need DEBUG but the code won't build. With the if version, the code always builds, and given even the most minimal optimization settings, your compiler will completely eliminate the unreachable branch. In other words, as long as DEBUG is known to be always false (and a const will guarantee that), there is no run-time cost to using if.
And you are guaranteed that if you change DEBUG from false to true, the system will still build.
Your compiler should be smart enough to figure out that, in code like this:
final boolean DEBUG = false;
...
if (DEBUG)
    return new debugMeh();
else
    return new meh();
the first branch never executes and can, in fact, be compiled out of the final assembly.
That plus the fact that even the unoptimized performance difference wouldn't amount to anything significant makes using some different syntax mostly unnecessary.
EDIT: I'd like to point out something interesting here. Someone in comments just said that this:
#if DEBUG
    return new TestFoo();
#else
    return new Foo();
#endif
was easier to write than this:
if (DEBUG)
    return new TestFoo();
else
    return new Foo();
I find it really amazing the lengths that people will go to in defending the way they've done things as correct. Another said that they would have to define a DEBUG variable in each file. I'm not sure about C#, but in Java we generally put a
public static final boolean DEBUG = true;
in our logger or another central factory object (although we actually find better ways to do this, such as using DI).
I'm not saying the same facilities don't exist in C#; I'm just amazed at the lengths people will go to in order to prove that the solution they hold onto out of habit is correct.
