I am using VS 2010 and testing something really basic:
class Program
{
    static void Main(string[] args)
    {
        var numbers = new int[2];
        numbers[3] = 0;
    }
}
I have gone to Properties > Code Contracts and enabled static checking, but no errors, warnings, or squiggly underlines show up on compile/build.
EDIT:
When I turn the warning level up to the maximum, I get this warning, which is not the warning I am after:
Warning 1 CodeContracts: Invoking method 'Main' will always lead to an error. If this is wanted, consider adding Contract.Requires(false) to document it
It's not clear what warning you're expecting (you state "I get this warning, which is not the warning I am after" without actually saying what warning you are after), but perhaps this will help.
First up:
var numbers = new int[2];
numbers[3] = 0;
This is an out-of-bounds access that will fail at runtime, and it is the cause of the warning you're getting: "Invoking method 'Main' will always lead to an error." That is perfectly accurate - invoking Main will always lead to an error, because that out-of-bounds array access will always throw a runtime exception.
Since you state that this isn't the warning you're expecting, though, I've had to employ a bit of guesswork as to what you were expecting. My best guess was that due to having ticked the 'Implicit Non-Null Obligations' checkbox, and also having tried adding Contract.Requires(args != null) to your code, you're expecting to get a warning that your Main method could potentially be called with a null argument.
The thing is, Code Contracts will only inspect your own code to make sure that you always provide a non-null argument when calling Main - but you never call Main at all. The operating system calls Main, and Code Contracts is not going to inspect the operating system's code!
There's no way to provide compile-time checking of the arguments provided to Main - you have to check these at runtime, manually. Again, Code Contracts works by checking that calls you make to a function meet the requirements you set - if you're not actually making the calls yourself, Code Contracts has no compile-time say in the matter.
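For example, a minimal sketch of what that manual runtime check might look like (the guard below is purely illustrative; Code Contracts will not verify it for you):
static void Main(string[] args)
{
    // The operating system calls Main, so the static checker cannot prove
    // anything about args; guard it yourself at runtime.
    if (args == null)
    {
        throw new ArgumentNullException("args");
    }

    // ... rest of the program ...
}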
I have tried this (albeit with Visual Studio 2013 + Code Contracts) and I found the following:
With the Warning Level set to "low" (like you have), I do not get a warning.
With the Warning Level set to "hi", I do get a warning.
So I suggest increasing your warning level slider.
Related
I have seen code which includes Contract.Assert, like:
Contract.Assert(t != null);
Will using Contract.Assert have a negative impact on my production code?
According to the manual (page 11), they're only included in your assembly when the DEBUG symbol is defined, which of course means Debug builds but not, by default, Release builds.
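As a minimal sketch of what that means in practice (assuming a plain Debug vs. Release build with the default Code Contracts settings):
using System;
using System.Diagnostics.Contracts;

class AssertDemo
{
    static void Process(string t)
    {
        // Included in Debug builds (DEBUG symbol defined); omitted from
        // Release builds by default, so it adds no cost to production code.
        Contract.Assert(t != null);

        Console.WriteLine(t.Length);
    }
}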
In addition to the runtime benefits of Contract.Assert, you can also consider the alternative Contract.Assume (manual section 2.5, page 11) when making calls to legacy code which does not have contracts defined, especially if you like having the static warning level cranked up high (e.g. static checking level "More warnings" or "All warnings" - level 3+ - and "Show Assumptions" turned on).
Contract.Assume gives you the same runtime benefits as Contract.Assert, but also suppresses static checks which can't be proven because of the legacy assembly.
For example, the code below, with static checking enabled and warnings set to level 3, gives the warning: CodeContracts: requires unproven: someString != null when checking the contracts of MethodDoesNotAllowNull.
var aString = Legacy.MethodWithNoContract();
MethodDoesNotAllowNull(aString);

private void MethodDoesNotAllowNull(string someString)
{
    Contract.Requires(someString != null);
}
With Legacy Assembly code:
public static string MethodWithNoContract()
{
    return "I'm not really null :)";
}
Contract.Assume suppresses the warning (but gives the runtime Assert benefit in debug builds):
var aString = Legacy.MethodWithNoContract();
Contract.Assume(aString != null);
MethodDoesNotAllowNull(aString);
This way, you still get the empirical runtime benefit of the Contract.Assert in debug builds.
As for good practices: it is better to have a solid set of unit tests; then code contracts are not as necessary. They may still be helpful, but they are less important.
Does this question seem strange? Yes, and what happened is also strange. Let me explain.
I found a snippet in this question: Covariance and Contravariance with C# Arrays
string[] strings = new string[1];
object[] objects = strings;
objects[0] = new object();
Jon Skeet explains that the above code will throw an ArrayTypeMismatchException, and indeed it does.
What I did: I put a breakpoint on line 3, and using a debugger visualizer I manually set objects[0] = new object(). It doesn't throw any error and it works. Checking strings[0].GetType() afterwards returns System.Object. And not only System.Object: any type can be put into the string[] by the procedure above.
I have no idea how this happened. I raised this as a comment on the very same question where I saw the snippet, but got no answers.
I am curious to know what is happening behind the scenes. Can anybody explain, please?
EDIT 1: This is even more interesting.
After reproducing the above behaviour, try this:
int len = strings[0].Length;
If you hover the mouse over the Length property, it says strings[0].Length threw ArgumentException with the message "Cannot find the method on the object instance", but it doesn't actually throw an exception; the code runs and yields len = 0.
Your example seems to answer the question: yes, a string reference can refer to a non-string object. This is not intended, however.
Consider what you have found to be a bug in the debugger.
As Jon Skeet explains in the answer you mention, because .NET arrays have this "crazy" covariance even though arrays are not read-only but read-write, every time one writes to an array of references the framework has to check the type of the object being written and throw an ArrayTypeMismatchException if it is the wrong type, for example when assigning an instance of Cat to an array of Dogs (a runtime Dog[]) which has been cast by "crazy" covariance into an Animal[].
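For instance, outside the debugger that check fires exactly as expected (Dog, Cat and Animal here are just illustrative placeholder types, not from the original question):
class Animal { }
class Dog : Animal { }
class Cat : Animal { }

class CovarianceDemo
{
    static void Main()
    {
        Dog[] dogs = new Dog[1];
        Animal[] animals = dogs;   // legal, thanks to array covariance

        // The runtime checks the actual element type on every write:
        // storing a Cat into what is really a Dog[] throws
        // ArrayTypeMismatchException here.
        animals[0] = new Cat();
    }
}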
What you have demonstrated is that when we use the Immediate window of the Visual Studio debugger (or similar windows), this required type check is not done, and as a result this can lead to any type Y (except pointer types probably) being assigned to a reference type variable of any reference type X. Like this:
X[] arrayOfX = new X[1];
object[] arrayCastByCrazyCovariance = arrayOfX;
Y badObject = new Y(); // or another constructor or method to get a Y
// Set breakpoint here.
// In Immediate window assign: arrayCastByCrazyCovariance[0] = badObject
// Detach debugger again.
X anomalousReferenceVariable = arrayOfX[0];
anomalousReferenceVariable.MemberOfX(); // or other bad things
This can make a Cat bark like a Dog, and stuff like that.
In the linked thread on Bypassing type safeguards, the answer by CodesInChaos shows an unrelated technique with which you can put a reference to an object of a "wrong" and unrelated type into a reference variable.
(I have preferred to rewrite my answer because the previous one had too many updates and wasn't clear enough).
Apparently, a not-so-perfect behaviour has been found in one of the tools (the Immediate Window) in the VS debugger. This behaviour does not affect the normal execution of the code at all and, strictly speaking, does not even affect the debugging process.
What I mean by the last sentence is that, when I debug code, I never use the Immediate Window; I just write whatever code I want, execute it, and see what the debugger shows. The reported problem does not affect this process (which could be called "debugging actually-executed code"; in the proposed example, pressing F11 when you are on objects[0] = new object();); if it did, that would imply a serious problem in VS. So from my point of view (the kind of debugging I do), and from the execution point of view, the reported error has no effect at all.
The problem only shows up when using the "Immediate Window" functionality, a feature of the debugger which estimates what the code will deliver before it actually runs (what might be called "debugging not-executed code" or "estimating expected outputs from non-executed code"; in the proposed example, sitting on the line objects[0] = new object();, not pressing F11 but using the Immediate Window to input values and letting this feature tell you what is expected to happen).
In summary, the reported problem has to be understood in the right context: it is not a general problem, not even a problem in the whole debugger (when you press F11 on the line in question, the debugger outputs an error, so it understands perfectly that the situation is wrong), but just in one of its tools. I am not even sure whether this behaviour is acceptable for that tool (i.e., what the Immediate Window delivers is a prediction which might not be 100% right; if you want to know for sure what will happen, execute the code and let the debugger show you the result).
QUESTION: Can a String[] hold System.Object inside it?
ANSWER: NO.
CLARIFICATION: Covariance is a complex reality which might not be perfectly accounted for by some of the secondary tools in VS (e.g. the Immediate Window), and thus there might be cases where the aforementioned statement does not seem to fully apply. BUT this is a local behaviour/bug in the specific tool, with no effect on the actual execution of the code.
I have the following code inside a method:
string username = (string)context["UserName"];
string un = (string)context["UserName"];
The problem is that the first string, username, is not assigned, while the second one is.
To make it stranger, when I stopped debugging after the first line and copied the line into the Immediate window, dropping the variable type declaration, it was assigned successfully.
I have done a Rebuild All and checked the project properties, which seem to be OK.
The context variable is a System.Configuration.SettingsContext, which is a hash table. To be more specific, I'm implementing a profile provider's GetPropertyValues method.
I am using VS 2012 and .NET 4.5
EDIT:
I am using Code Contracts in my project, with compile-time code injection for runtime checking. I disabled it and everything works. I'll try removing the contracts one by one to find which one is causing the problem.
What you are seeing is similar to a Code Contracts bug I saw before. I wrote something about it here a few months back. If you have this bug, you probably also have a lambda or LINQ expression in your method that uses username.
For future reference, this is the bug I saw:
I had, in the same method, a lambda expression capturing a local variable, let's say values, and a Contract.Requires() expression checking something completely unrelated. While debugging that method, the debugger shows the variable values twice in the Locals window, and always reports the value of values as null, even when this is clearly not the case.
To reproduce:
static void Reproduction(string argument)
{
    Contract.Requires(argument != null); // <-- (1)
    int[] values = new int[1];
    Debug.Assert(values != null);
    Func<int, bool> d = i => values[i] >= 0; // <-- (2)
    Console.WriteLine(values);
}
Put a breakpoint somewhere in this method after the assignment to values and ensure that the method gets called. When the debugger hits the breakpoint, look at the Locals list of Visual Studio for the duplicate variable. Hover the mouse over values to find that it gets reported as being null, something which is clearly not the case.
The problem disappears when removing either the contract (1) or the line with the lambda (2).
After investigating and disabling contracts, I found that the problem appears only when runtime contract checking is enabled and this contract is present:
Contract.Ensures(Contract.Result<System.Configuration.SettingsPropertyValueCollection>() != null);
If I delete this line, the code works, so it looks like a Code Contracts bug, though I couldn't recreate it in a test project.
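For reference, the contract sits at the top of the provider method roughly like this (a reconstructed sketch under my assumptions about the surrounding code; the class and method body shown are illustrative, not the original provider):
using System.Configuration;
using System.Diagnostics.Contracts;

class ProfileProviderSketch
{
    // In the real code this is an override of ProfileProvider.GetPropertyValues.
    public SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection collection)
    {
        // Removing this postcondition made the problem disappear.
        Contract.Ensures(
            Contract.Result<SettingsPropertyValueCollection>() != null);

        string username = (string)context["UserName"];
        string un = (string)context["UserName"];

        var values = new SettingsPropertyValueCollection();
        // ... populate values for the current user ...
        return values;
    }
}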
Ok, consider the following code:
const bool trueOrFalse = false;
const object constObject = null;

void Foo()
{
    if (trueOrFalse)
    {
        int hash = constObject.GetHashCode();
    }
    if (constObject != null)
    {
        int hash = constObject.GetHashCode();
    }
}
trueOrFalse is a compile-time constant, and as such the compiler correctly warns that int hash = constObject.GetHashCode(); is not reachable.
Also, constObject is a compile-time constant, and as such the compiler again correctly warns that int hash = constObject.GetHashCode(); is not reachable, as constObject != null will never be true.
So why doesn't the compiler figure out that:
if (true)
{
    int hash = constObject.GetHashCode();
}
is 100% sure to result in a runtime exception, and thus issue a compile-time error? I know this is probably a silly corner case, but the compiler seems pretty smart about reasoning over compile-time constant value types, so I was expecting it could also figure out this small corner case with reference types.
UPDATE: This question was the subject of my blog on July 17th 2012. Thanks for the great question!
Why doesn't the compiler figure out that my code is 100% sure to be a runtime exception and thus issue out a compile time error?
Why should the compiler make code that is guaranteed to throw into a compile-time error? Wouldn't that make:
int M()
{
    throw new NotImplementedException();
}
into a compile-time error? But that's exactly the opposite of what you want it to be; you want this to be a runtime error so that the incomplete code compiles.
Now, you might say, well, dereferencing null is clearly undesirable always, whereas a "not implemented" exception is clearly desirable. So could the compiler detect just this specific situation of there being a null ref exception guaranteed to happen, and give an error?
Sure, it could. We'd just have to spend the budget on implementing a data flow analyzer that tracks when a given expression is known to be always null, and then make it a compile time error (or warning) to dereference that expression.
The questions to answer then are:
How much does that feature cost?
How much benefit does the user accrue?
Is there any other possible feature that has a better cost-to-benefit ratio, and provides more value to the user?
The answer to the first question is "rather a lot" -- code flow analyzers are expensive to design and build. The answer to the second question is "not very much" -- the number of situations in which you can prove that null is going to be dereferenced are very small. The answer to the third question has, over the last twelve years, always been "yes".
Therefore, no such feature.
Now, you might say, well, C# does have some limited ability to detect when an expression is always/never null; the nullable arithmetic analyzer uses this analysis to generate more optimal nullable arithmetic code (*), and clearly the flow analyzer uses it to determine reachability. So why not just use the already existing nullability and flow analyzer to detect when you've always dereferenced a null constant?
That would be cheap to implement, sure. But the corresponding user benefit is now tiny. How many times in real code do you initialize a constant to null, and then dereference it? It seems unlikely that anyone would actually do that.
Moreover: yes, it is always better to detect a bug at compile time instead of run time, because it is cheaper. But the bug here -- a guaranteed dereference of null -- will be caught the first time the code is tested, and subsequently fixed.
So basically the feature request here is to detect at compile time a very unlikely and obviously wrong situation that will always be caught and fixed immediately the first time the code is run anyway. It is therefore not a very good candidate for spending budget on; we have lots of higher priorities.
(*) See the long series of articles on how the Roslyn compiler does so which begins at http://ericlippert.com/2012/12/20/nullable-micro-optimizations-part-one/
While unreachable code is useless and does not affect your execution, code that throws an exception is executed. So
if (true) { int hash = constObject.GetHashCode(); }
is more or less the same as
throw new NullReferenceException();
You might very well want to throw that null reference. Whereas the unreachable code is just taking up space if it were to be compiled.
It also won't warn about the following code that has the same effect:
throw new NullReferenceException();
There's a balance with warnings. Most compiler errors happen when the compiler can't produce anything meaningful from the code.
Some happen with things that affect verifiability, or which cross a threshold of how likely they are to be a bug. For example, the following code:
private void DoNothing(out string str)
{
    return;
}

private void UseNothing()
{
    string s;
    DoNothing(s);
}
Won't compile, though if it did it would do no harm (the only place DoNothing is called doesn't use the string passed, so the fact that it is never assigned isn't a problem). There's just too high a risk that I'm doing something stupid here to let it go.
Warnings are for things that are almost certainly foolish or at least not what you wanted to happen. Dead code is likely enough to be a bug to make a warning worthwhile, but likely enough to be sensible (e.g. trueOrFalse may change as the application is developed) to make an error inappropriate.
Warnings are meant to be useful, rather than nuisances, so the bar for them is put quite high. There's no exact science, but it was deemed that unreachable code made the cut, and trying to deduce when throwing exceptions wasn't the desired behaviour didn't.
It no doubt helps that the compiler already detects unreachable code (and doesn't compile it) but sees one deliberate throw much like another, no matter how convoluted on the one hand or direct on the other.
Why would you even want that to be a compile-time error? Why would you want code that is guaranteed to throw an exception to be invalid at compile time? What if I, the programmer, want the semantics of my program to be:
static void Main(string[] args) {
    throw new NullReferenceException();
}
It's my program. The compiler has no business telling me this isn't valid C# code.
My team is using FxCop to help clean up an existing ASP.NET application.
We have noticed some strange behavior in the way FxCop counts warnings.
It seems that on one pass through the code, FxCop only finds and counts the first warning related to a specific rule in each method.
So, if I have:
public void test3()
{
    int a = 0; // DoNotInitializeUnnecessarily
    int b = 0; // DoNotInitializeUnnecessarily
}
...my FxCop report will only find and count the first warning of type DoNotInitializeUnnecessarily in method test3(). Is there any way to make FxCop find and count both instances of this problem in test3()?
The current method of counting is problematic for us, because FxCop is under-reporting the number of warnings. This makes it difficult to estimate how much time will be required to fix the existing FxCop warnings, since we don't actually know how many are in the application.
Did you try changing Tools -> Settings -> Project Defaults -> "Disable rules after [ 1] exceptions"?