① In the following C# code, CS1729 occurs, but I understand that CS0122 would be more appropriate.
namespace A
{
    class Program
    {
        static void Main()
        {
            Test test = new Test(1);
        }
    }

    class Test
    {
        Test(int i) { }
    }
}
CS1729: 'A.Test' does not contain a constructor that takes 1 arguments
CS0122: 'A.Test.Test(int)' is inaccessible due to its protection level
② In the following C# code, CS0122 occurs, but I understand that CS1729 would be more appropriate.
namespace A
{
    class Program
    {
        static void Main()
        {
            Test test = new Test();
        }
    }

    class Test
    {
        Test(int i) { }
    }
}
CS0122: 'A.Test.Test(int)' is inaccessible due to its protection level
CS1729: 'A.Test' does not contain a constructor that takes 0 arguments
Question: Is there any reason why CS0122 and CS1729 are swapped in ① and ②, or is this a C# compiler bug?
P.S.: The errors in ① and ② can be reproduced with the Microsoft Visual C# 2010 Compiler, version 4.0.30319.1.
Full disclosure: I work on the C# team at Microsoft.
Diagnostic reporting from a compiler is a tricky business! We spend a lot of time trying to ensure that the "best" diagnostic is reported for a particular error condition. However, this sometimes requires taking heuristics into account, and we don't always get that right. In this case, as Henrik Holterman points out, both errors are reasonable (at least for the second case).
The first example is clearly a bug, though it's of low severity. After all, it's still an error with a somewhat "correct" (I'm being a bit gracious here) diagnostic. In the second example, both errors are correct, but the compiler failed to pick the "best", and hopefully, the most helpful diagnostic.
With the Roslyn C# compiler, we've had an opportunity to take a fresh look at our diagnostic reporting and make better choices. For these particular examples, the Roslyn compilers do in fact produce the errors that you were expecting. In the first example, CS0122 is reported, and in the second case, CS1729 is reported. So, you can rest assured that this is already fixed in a future release.
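For completeness, here is a sketch of what the diagnostics are pointing at (my addition, not part of the original exchange): in ①, the only real problem is accessibility, since a constructor declared without a modifier is private, so adding public fixes it; in ②, no zero-argument constructor exists at all, so CS1729 is the genuinely accurate complaint.
namespace A
{
    class Program
    {
        static void Main()
        {
            Test test = new Test(1); // snippet ①: compiles once the constructor is public
        }
    }

    class Test
    {
        public Test(int i) { } // was implicitly private without the modifier
    }
}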
I think this is a compiler bug.
The following console application compiles and executes flawlessly when compiled with VS 2015:
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var x = MyStruct.Empty;
        }

        public struct MyStruct
        {
            public static readonly MyStruct Empty = new MyStruct();
        }
    }
}
But now it's getting weird: This code compiles, but it throws a TypeLoadException when executed.
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var x = MyStruct.Empty;
        }

        public struct MyStruct
        {
            public static readonly MyStruct? Empty = null;
        }
    }
}
Do you experience the same issue? If so, I will file an issue at Microsoft.
The code looks senseless, but I use it to improve readability and to achieve disambiguation.
I have methods with different overloads like
void DoSomething(MyStruct? arg1, string arg2)
void DoSomething(string arg1, string arg2)
Calling a method this way...
myInstance.DoSomething(null, "Hello world!")
... does not compile.
Calling
myInstance.DoSomething(default(MyStruct?), "Hello world!")
or
myInstance.DoSomething((MyStruct?)null, "Hello world!")
works, but looks ugly. I prefer it this way:
myInstance.DoSomething(MyStruct.Empty, "Hello world!")
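Pulling those fragments together, here is a self-contained sketch of the ambiguity (the Worker and Demo names are mine, introduced only for illustration):
using System;

public struct MyStruct { }

public class Worker
{
    public void DoSomething(MyStruct? arg1, string arg2) { Console.WriteLine("MyStruct? overload"); }
    public void DoSomething(string arg1, string arg2) { Console.WriteLine("string overload"); }
}

class Demo
{
    static void Main()
    {
        var myInstance = new Worker();
        // myInstance.DoSomething(null, "Hello world!");         // CS0121: the call is ambiguous
        myInstance.DoSomething((MyStruct?)null, "Hello world!");  // compiles, but is noisy at the call site
    }
}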
If I put the Empty variable into another class, everything works okay:
public static class MyUtility
{
    public static readonly MyStruct? Empty = null;
}
Strange behavior, isn't it?
UPDATE 2016-03-29
I opened a ticket here: http://github.com/dotnet/roslyn/issues/10126
UPDATE 2016-04-06
A new ticket has been opened here: https://github.com/dotnet/coreclr/issues/4049
First off, it is important when analyzing these issues to make a minimal reproducer, so that we can narrow down where the problem is. In the original code there are three red herrings: the readonly, the static and the Nullable<T>. None are necessary to repro the issue. Here's a minimal repro:
struct N<T> {}
struct M { public N<M> E; }
class P { static void Main() { var x = default(M); } }
This compiles in the current version of VS, but throws a type load exception when run.
The exception is not triggered by use of E. It is triggered by any attempt to access the type M. (As one would expect in the case of a type load exception.)
The exception reproduces whether the field is static or instance, readonly or not; this has nothing to do with the nature of the field. (However it must be a field! The issue does not repro if it is, say, a method.)
The exception has nothing whatsoever to do with "invocation"; nothing is being "invoked" in the minimal repro.
The exception has nothing whatsoever to do with the member access operator ".". It does not appear in the minimal repro.
The exception has nothing whatsoever to do with nullables; nothing is nullable in the minimal repro.
Now let's do some more experiments. What if we make N and M classes? I will tell you the results:
The behaviour only reproduces when both are structs.
We could go on to discuss whether the issue reproduces only when M in some sense "directly" mentions itself, or whether an "indirect" cycle also reproduces the bug. (The latter is true.) And as Corey notes in his answer, we could also ask "do the types have to be generic?" No; there is a reproducer even more minimal than this one with no generics.
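For instance, here is a sketch of an indirect cycle, built from the minimal repro above (my construction; not necessarily the exact shape that was tested):
struct N<T> { }
struct A { public N<B> E; }
struct B { public N<A> E; }
// Per the discussion above, merely touching A or B reportedly triggers the type load
// exception on the affected runtimes, even though this compiles without complaint.
class P { static void Main() { var x = default(A); } }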
However I think we have enough to complete our discussion of the reproducer and move on to the question at hand, which is "is it a bug, and if so, in what?"
Plainly something is messed up here, and I lack the time today to sort out where the blame ought to fall. Here are some thoughts:
The rule against structs containing members of themselves plainly does not apply here. (See section 11.3.1 of the C# 5 specification, which is the one I have present at hand. I note that this section could benefit from a careful rewriting with generics in mind; some of the language here is a bit imprecise.) If E is static then that section does not apply; if it is not static then the layouts of N<M> and M can both be computed regardless.
I know of no other rule in the C# language that would prohibit this arrangement of types.
It might be the case that the CLR specification prohibits this arrangement of types, and the CLR is right to throw an exception here.
So now let us sum up the possibilities:
The CLR has a bug. This type topology should be legal, and it is wrong of the CLR to throw here.
The CLR behaviour is correct. This type topology is illegal, and it is correct of the CLR to throw here. (In this scenario it may be the case that the CLR has a spec bug, in that this fact may not be adequately explained in the specification. I don't have time to do CLR spec diving today.)
Let us suppose for the sake of argument that the second is true. What can we now say about C#? Some possibilities:
The C# language specification prohibits this program, but the implementation allows it. The implementation has a bug. (I believe this scenario to be false.)
The C# language specification does not prohibit this program, but it could be made to do so at a reasonable implementation cost. In this scenario the C# specification is at fault, it should be fixed, and the implementation should be fixed to match.
The C# language specification does not prohibit the program, but detecting the problem at compile time cannot be done at reasonable cost. This is the case with pretty much any runtime crash; your program crashed at runtime because the compiler couldn't stop you from writing a buggy program. This is just one more buggy program; unfortunately, you had no reason to know it was buggy.
Summing up, our possibilities are:
The CLR has a bug
The C# spec has a bug
The C# implementation has a bug
The program has a bug
One of these four must be true. I do not know which it is. Were I asked to guess, I'd pick the first one; I see no reason why the CLR type loader ought to balk on this one. But perhaps there is a good reason that I do not know; hopefully an expert on the CLR type loading semantics will chime in.
UPDATE:
This issue is tracked here:
https://github.com/dotnet/roslyn/issues/10126
To sum up the conclusions from the C# team in that issue:
The program is legal according to both the CLI and C# specifications.
The C# 6 compiler allows the program, but some implementations of the CLI throw a type load exception. This is a bug in those implementations.
The CLR team is aware of the bug, and apparently it is hard to fix on the buggy implementations.
The C# team is considering making the legal code produce a warning, since it will fail at runtime on some, but not all, versions of the CLI.
The C# and CLR teams are on this; follow up with them. If you have any more concerns with this issue please post to the tracking issue, not here.
This is not a bug in VS2015 but possibly a C# language bug. The discussion below relates to why instance members cannot introduce loops and why a Nullable<T> will cause this error, but that reasoning should not apply to static members.
I would submit it as a language bug, not a compiler bug.
Compiling this code in VS2013 gives the following compile error:
Struct member 'ConsoleApplication1.Program.MyStruct.Empty' of type 'System.Nullable' causes a cycle in the struct layout
A quick search turns up this answer which states:
It's not legal to have a struct that contains itself as a member.
Unfortunately the System.Nullable<T> type which is used for nullable instances of value types is also a value type and must therefore have a fixed size. It's tempting to think of MyStruct? as a reference type, but it really isn't. The size of MyStruct? is based on the size of MyStruct... which apparently introduces a loop in the compiler.
Take for instance:
public struct Struct1
{
    public int a;
    public int b;
    public int c;
}

public struct Struct2
{
    public Struct1? s;
}
Using System.Runtime.InteropServices.Marshal.SizeOf() you'll find that Struct2 is 16 bytes long, indicating that Struct1? is not a reference but a struct that is 4 bytes (standard padding size) longer than Struct1.
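As a sketch of that measurement (my illustration, using Unsafe.SizeOf from System.Runtime.CompilerServices rather than Marshal.SizeOf; the exact numbers depend on the runtime's layout rules):
using System;
using System.Runtime.CompilerServices;

public struct Struct1 { public int a, b, c; }
public struct Struct2 { public Struct1? s; }

class SizeDemo
{
    static void Main()
    {
        // Struct1 is three ints; Struct1? adds a HasValue flag plus alignment padding.
        Console.WriteLine(Unsafe.SizeOf<Struct1>()); // typically 12
        Console.WriteLine(Unsafe.SizeOf<Struct2>()); // typically 16
    }
}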
What's not happening here
In response to Julius Depulla's answer and comments, here is what is actually happening when you access a static Nullable<T> field. From this code:
public struct foo
{
    public static int? Empty = null;
}

public void Main()
{
    Console.WriteLine(foo.Empty == null);
}
Here is the generated IL from LINQPad:
IL_0000: ldsflda UserQuery+foo.Empty
IL_0005: call System.Nullable<System.Int32>.get_HasValue
IL_000A: ldc.i4.0
IL_000B: ceq
IL_000D: call System.Console.WriteLine
IL_0012: ret
The first instruction gets the address of the static field foo.Empty and pushes it on the stack. This address is guaranteed to be non-null as Nullable<Int32> is a structure and not a reference type.
Next the Nullable<Int32> hidden member function get_HasValue is called to retrieve the HasValue property value. This cannot result in a null reference since, as mentioned previously, the address of a value type field must be non-null, regardless of the value contained at the address.
The rest is just comparing the result to 0 and sending the result to the console.
At no point in this process is it possible to 'invoke a null on a type' whatever that means. Value types do not have null addresses, so method invocation on value types cannot directly result in a null object reference error. That's why we don't call them reference types.
Now that we've had a lengthy discussion about what and why, here's a way to work around the issue without having to wait on the various .NET teams to track down the issue and determine what if anything will be done about it.
The issue appears to be restricted to fields whose types are value types that refer back to the declaring type in some way, either through generic parameters or static members. For instance:
public struct A { public static B b; }
public struct B { public static A a; }
Ugh, I feel dirty now. Bad OOP, but it demonstrates that the problem exists without invoking generics in any way.
So because they are value types, the type loader decides there is a circularity involved, even though the static keyword means that circularity should be ignored. The C# compiler was smart enough to figure that out. Whether it should have or not is up to the specs, on which I have no comment.
However, by changing either A or B to class the problem evaporates:
public struct A { public static B b; }
public class B { public static A a; }
So the problem can be avoided by using a reference type to store the actual value and convert the field to a property:
public struct MyStruct
{
    private static class _internal { public static MyStruct? empty = null; }

    public static MyStruct? Empty => _internal.empty;
}
This is a bunch slower because it's a property instead of a field and calls to it will invoke the get method, so I wouldn't use it for performance-critical code, but as a workaround it at least lets you do the job until a proper solution is available.
And if it turns out that this doesn't get resolved, at least we have a kludge we can use to bypass it.
I decompiled a C# assembly using ILSpy. Opened it as a project in VC.
A small portion of the code throws errors that I don't know how to fix. Here's the code:
public static class CoroutineUtils
{
    [DebuggerHidden]
    public static IEnumerator WaitForRealSeconds(float time)
    {
        CoroutineUtils.<WaitForRealSeconds>c__Iterator2F <WaitForRealSeconds>c__Iterator2F = new CoroutineUtils.<WaitForRealSeconds>c__Iterator2F();
        <WaitForRealSeconds>c__Iterator2F.time = time;
        <WaitForRealSeconds>c__Iterator2F.<$>time = time;
        return <WaitForRealSeconds>c__Iterator2F;
    }
}
And here's the error: Unexpected character '$' (at line 8 in this case).
And if I open the .cs file in which the error appears, the compiler starts throwing a dozen more errors like Identifier expected at line 6 (right after "CoroutineUtils.")
Don't know what to do.
You can't just copy/paste decompiled code and be sure it will work. The compiler can use identifiers that are not valid in C# source code but are valid in IL. That happens mostly for compiler-generated code: automatic properties, anonymous types, iterators and async/await methods converted to state machines, and so on. That's the case here.
It's really hard to say what the code is supposed to do, so it's really hard to say how to fix it.
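For illustration, the usual fix is to rewrite such members as ordinary iterator methods and let the compiler regenerate the state machine itself; here is a sketch (the body is mine, since the original logic isn't shown in the question):
using System.Collections;
using System.Diagnostics;

public static class CoroutineUtils
{
    [DebuggerHidden]
    public static IEnumerator WaitForRealSeconds(float time)
    {
        // Writing the method with yield return lets the compiler emit the hidden
        // <WaitForRealSeconds>c__Iterator2F class for you, so the source stays valid C#.
        // The actual waiting logic from the decompiled assembly is unknown here.
        yield return time;
    }
}
If the original waiting logic matters, you would reconstruct it from the MoveNext method of the generated iterator class rather than trying to compile the decompiled state machine as-is.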
Why does the code below crash the .NET compiler? It was tested on csc.exe version 4.0.
See e.g. this online demo on a different compiler version; it crashes in the same manner, even though it says dynamic is not supported (https://dotnetfiddle.net/FMn59S):
Compilation error (line 0, col 0): Internal Compiler Error (0xc0000005 at address xy): likely culprit is 'TRANSFORM'.
The extension method works fine on List<dynamic> though.
using System;
using System.Collections.Generic;

static class F {
    public static void M<T>(this IEnumerable<T> enumeration, Action<T> action) { }

    static void U(C.K d) {
        d.M(kvp => Console.WriteLine(kvp));
    }
}

class C {
    public class K : Dictionary<string, dynamic> { }
}
Update: this doesn't crash the compiler
static void U(Dictionary<string, dynamic> d)
{
    d.M(kvp => Console.WriteLine(kvp));
}
Update 2: the same bug was reported at http://connect.microsoft.com/VisualStudio/feedback/details/892372/compiler-error-with-dynamic-dictinoaries. The bug was reported for FirstOrDefault, but it seems the compiler crashes on any extension method applied to a class derived from Dictionary<T1,T2> where at least one of the type parameters is dynamic. See an even more general description of the problem below by Erik Funkenbusch.
Update 3: another piece of non-standard behaviour. When I try to call the extension method as a static method, that is, F.M(d, kvp => Console.WriteLine(kvp));, the compiler doesn't crash, but it cannot find the overload:
Argument 1: cannot convert from 'C.K' to 'System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string,dynamic>>'
Update 4 - SOLUTION (kind of): Hans sketched a second workaround, which is semantically equivalent to the original code but works only for the extension-method call, not for the standard call. Since the bug is likely caused by the compiler failing to cast a class derived from a generic class with multiple type parameters (one of them dynamic) to its supertype, the solution is to provide an explicit cast. See https://dotnetfiddle.net/oNvlcL:
((Dictionary<string, dynamic>)d).M(kvp => Console.WriteLine(kvp));
M((Dictionary<string, dynamic>)d, kvp => Console.WriteLine(kvp));
It is dynamic that is triggering the instability; the crash disappears when you replace it with object.
That is one workaround; the other is to help it infer the correct T:
static void U(C.K d) {
    d.M(new Action<KeyValuePair<string, dynamic>>(kvp => Console.WriteLine(kvp)));
}
The feedback report that you found is a strong match, no need to file your own I'd say.
Well, the answer to your question as to WHY it crashes the compiler is that you've encountered a bug that... crashes the compiler.
The VS2013 compiler says "Internal Compiler Error (0xc0000005 at address 012DC5B5): likely culprit is 'TRANSFORM'", so clearly it's a bug.
0xC0000005 is typically a null pointer, or a reference to unallocated or freed memory. It's a general protection fault.
EDIT:
The problem is also present in pretty much any multi-parameter generic type where any type parameter is dynamic. For instance, it crashes on:
List<Tuple<string, dynamic>>{}
It also crashes on
List<KeyValuePair<dynamic, string>>{}
But does not crash on
List<dynamic>{}
but does crash on
List<List<dynamic>>{}
I was trying to mock an interface which takes a DateTimeOffset? as one of its parameters. All of a sudden, Visual Studio started reporting an 'Internal Compiler Error' and that it has 'stopped working'. After a lot of trial and error, I started removing files one by one, and then code line by line. That reduced it to the code below, which reproduces the error:
public class testClass
{
    public interface ITest
    {
        void Test(DateTimeOffset? date);
    }

    public void test2()
    {
        var mock = new Mock<ITest>();
        mock.Setup(x => x.Test(new DateTime(2012, 1, 1)));
    }
}
The issue seems to be this line:
mock.Setup(x => x.Test(new DateTime(2012, 1, 1)));
If I comment it out, the compiler works fine. Also, the issue is that I am setting up a new DateTime(), which fits in a DateTimeOffset.
Is this a bug in Moq, or VS2012? Anyone ever got this error before?
UPDATE
The following code sample also results in a compile error, both with the regular Visual Studio 2012 compiler and with Roslyn CTP September 2012:
using System;
using System.Linq.Expressions;

public interface ITest
{
    void Test(DateTimeOffset? date);
}

public class TestClass
{
    Expression<Action<ITest>> t = x => x.Test(new DateTime(2012, 1, 1));
}
The error:
1>CSC : error CS0583: Internal Compiler Error (0xc0000005 at address 00D77AFB): likely culprit is 'BIND'.
This code has nothing to do with Moq.
It is clearly a bug in the semantic analyzer. (The text "likely culprit is BIND" is characteristic of bugs in the semantic analyzer, which is called the "binder" internally.) The scenario here is that we have a lifted-to-nullable user-defined conversion in a lambda that is being converted to an expression tree. That code was a bug farm. I thought I wrote a test case for this exact scenario, but maybe I didn't.
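To spell out the shape being described, here is a sketch of my own (not Eric's internal test case): DateTime converts to DateTimeOffset through a user-defined implicit operator, the DateTimeOffset? target brings nullable lifting into play, and the crash was specific to performing that conversion inside a lambda converted to an expression tree.
using System;
using System.Linq.Expressions;

class LiftedConversionDemo
{
    static void Main()
    {
        // The conversion by itself is fine: DateTime -> DateTimeOffset -> DateTimeOffset?.
        DateTimeOffset? direct = new DateTime(2012, 1, 1);
        Console.WriteLine(direct);

        // The same conversion inside a lambda converted to an expression tree is what
        // reportedly crashed the C# 5 binder; current compilers handle it without issue.
        Expression<Func<DateTimeOffset?>> tree = () => new DateTime(2012, 1, 1);
        Console.WriteLine(tree);
    }
}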
In any event, the problem is likely my bad, so sorry about that. Not much I can do about it now though.
What is truly bizarre though is that the bug allegedly repros on both Roslyn and the C# 5 compiler. That is a crazy coincidence, because the Roslyn and C# 5 compilers have completely different code for this part of the semantic analysis. We rewrote most of it from scratch. Strange that we'd get it wrong in the same way twice.
Anyway, Kevin will see this since you tagged it Roslyn, and if you want to enter a bug on the Connect site, that would be appreciated by the team I'm sure.
UPDATE:
Wait, you got the exact same error in Roslyn? Then what is happening is probably that the IDE is still using the C# 5 analysis library. If you write code that loads the offending code into a Roslyn compilation and analyzes it, you likely will not get the error. Right?
This is impressive, crashing the C# compiler like this is a very rare feat. You can report it at connect.microsoft.com, albeit that Microsoft ought to have received a bunch of WER reports from it. Several from me anyway :)
You can work around the problem by rewriting the code. Either with:
static DateTimeOffset? arg = new DateTime(2012, 1, 1);
Expression<Action<ITest>> t = x => x.Test(arg);
Or with the cleaner:
public class TestClass
{
    Expression<Action<ITest>> t;

    public TestClass() {
        DateTimeOffset? arg = new DateTime(2012, 1, 1);
        t = x => x.Test(arg);
    }
}
Consider the following code:
namespace ConsoleApplication
{
    using NamespaceOne;
    using NamespaceTwo;

    class Program
    {
        static void Main(string[] args)
        {
            // Compilation error. MyEnum is an ambiguous reference
            MethodNamespace.MethodClass.Frobble(MyEnum.foo);
        }
    }
}

namespace MethodNamespace
{
    public static class MethodClass
    {
        public static void Frobble(NamespaceOne.MyEnum val)
        {
            System.Console.WriteLine("Frobbled a " + val.ToString());
        }
    }
}

namespace NamespaceOne
{
    public enum MyEnum
    {
        foo, bar, bat, baz
    }
}

namespace NamespaceTwo
{
    public enum MyEnum
    {
        foo, bar, bat, baz
    }
}
The compiler complains that MyEnum is an ambiguous reference in the call to Frobble(). Since there is no ambiguity in what method is being called, one might expect the compiler to resolve the type reference based on the method signature. Why doesn't it?
Please note that I'm not saying that the compiler should do this. I'm confident that there is a very good reason that it doesn't. I would simply like to know what that reason is.
Paul is correct. In most situations in C#, we reason "from inside to outside".
there is no ambiguity in what method is being called,
That it is unambiguous to you is irrelevant to the compiler. The task of overload resolution is to determine whether the method group Frobble can be resolved to a specific method given known arguments. If we can't determine what the argument types are then we don't even try to do overload resolution.
Method groups that just happen to contain only one method are not special in this regard. We still have to have good arguments before overload resolution can succeed.
There are cases where we reason from "outside to inside", namely, when doing type analysis of lambdas. Doing so makes the overload resolution algorithm exceedingly complicated and gives the compiler a problem to solve that is at least NP-HARD in bad cases. But in most scenarios we want to avoid that complexity and expense; expressions are analyzed by analyzing child subexpressions before their parents, not the other way around.
More generally: C# is not a "when the program is ambiguous, use heuristics to make guesses about what the programmer probably meant" language. It is an "inform the developer that their program is unclear and possibly broken" language. The portions of the language that are designed to try to resolve ambiguous situations -- like overload resolution, method type inference, or implicitly typed arrays -- are carefully designed so that the algorithms have clear rules that take versioning and other real-world aspects into account. Bailing out as soon as one part of the program is ambiguous is one way we achieve this design goal.
If you prefer a more "forgiving" language that tries to figure out what you meant, VB or JScript might be better languages for you. They are more "do what I meant not what I said" languages.
I believe it's because the C# compiler won't typically backtrack.
NamespaceOne and NamespaceTwo are defined in the same code file. That would be equivalent to putting them in different code files and referencing them via using directives.
In that case you can see why the names clash. You have equally named enums in two different namespaces, and the compiler can't guess which one you mean, even though Frobble has a NamespaceOne.MyEnum parameter. Instead of
MethodNamespace.MethodClass.Frobble(MyEnum.foo)
use
MethodNamespace.MethodClass.Frobble(NamespaceOne.MyEnum.foo)
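Alternatively (my addition, not part of the original answer), a using alias resolves the ambiguity once at the top of the file instead of at every call site:
namespace ConsoleApplication
{
    using NamespaceOne;
    using NamespaceTwo;
    // The alias names the intended enum explicitly, so MyEnum is no longer ambiguous.
    using MyEnum = NamespaceOne.MyEnum;

    class Program
    {
        static void Main(string[] args)
        {
            MethodNamespace.MethodClass.Frobble(MyEnum.foo);
        }
    }
}
This keeps the call sites short while still being explicit about which MyEnum is meant.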