Console.WriteLine(Enum.Value) gives different output in C# and VB.Net

I am basically a C# guy, but I'm writing VB.Net code these days.
Today I came across a case where the two languages behave very differently.
C# Code
enum Color
{
    Red,
    Green,
    Blue
}

class Demo
{
    public static void Main()
    {
        System.Console.WriteLine(Color.Red);
    }
}
This prints Red
But when this code is written in VB.Net it prints 0.
VB.Net Code
Module Module1
    Sub Main()
        System.Console.WriteLine(Color.Red)
    End Sub
End Module

Enum Color
    Red
    Green
    Blue
End Enum
Why so different?

There is no Console.WriteLine(Enum) overload, so the compilers are forced to pick one of the other overloads. Overload resolution rules are very arcane, and the VB.NET and C# rules are not the same, but both compilers are willing to pick an overload when there's an implicit conversion to the parameter type, preferring the one that takes the least amount of work.
Which is where another rule applies: this kind of statement is perfectly valid in VB.NET:
Dim example As Integer = Color.Red '' Fine
But the C# compiler spits at:
int example = Color.Red; // CS0266
Insisting that you apply an (int) cast. C# defines only an explicit conversion from an enum to its underlying type, not an implicit one like VB.NET does.
So the C# compiler ignores all the overloads that take an integral argument; none of them are candidates because only explicit conversions exist for them. Except one: the Console.WriteLine(Object) overload. There is an implicit conversion for that one: a boxing conversion.
The VB.NET compiler sees it as well, but now the "better conversion" rule comes into play. A boxing conversion is a very expensive conversion; converting to Integer is very cheap and requires no extra code. So it likes that one better.
Workarounds are simple:
System.Console.WriteLine(CObj(Color.Red)) '' or
System.Console.WriteLine(Color.Red.ToString())
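
Going the other way in C#, a cast is what forces the numeric output (a minimal sketch, assuming the Color enum from the question):

System.Console.WriteLine(Color.Red);        // binds to WriteLine(Object), prints "Red"
System.Console.WriteLine((int)Color.Red);   // binds to WriteLine(Int32), prints "0"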

C# and VB.NET have different method overload resolution rules.
C# picks Console.WriteLine(Object), while VB.NET picks Console.WriteLine(Int32). Let's see why each compiler does so.
VB.NET rules:
1. Accessibility. It eliminates any overload with an access level that prevents the calling code from calling it.
2. Number of Parameters. It eliminates any overload that defines a different number of parameters than are supplied in the call.
3. Parameter Data Types. The compiler gives instance methods preference over extension methods. If any instance method is found that requires only widening conversions to match the procedure call, all extension methods are dropped and the compiler continues with only the instance method candidates. If no such instance method is found, it continues with both instance and extension methods. In this step, it eliminates any overload for which the data types of the calling arguments cannot be converted to the parameter types defined in the overload.
4. Narrowing Conversions. It eliminates any overload that requires a narrowing conversion from the calling argument types to the defined parameter types. This is true whether the type checking switch (Option Strict Statement) is On or Off.
5. Least Widening. The compiler considers the remaining overloads in pairs. For each pair, it compares the data types of the defined parameters. If the types in one of the overloads all widen to the corresponding types in the other, the compiler eliminates the latter. That is, it retains the overload that requires the least amount of widening.
6. Single Candidate. It continues considering overloads in pairs until only one overload remains, and it resolves the call to that overload. If the compiler cannot reduce the overloads to a single candidate, it generates an error.
There are a lot of overloads for WriteLine; some of them are discarded at step 3. We're basically left with the following possibilities: Object and the numeric types.
The 5th point is interesting here: Least Widening. So what do the widening rules say?
Any enumerated type (Enum) widens to its underlying integral type and any type to which the underlying type widens.
Any type widens to Object
So, your Color enum first widens to Int32 (its underlying data type) - and this is a 100% match for Console.WriteLine(Int32). It would require yet another widening conversion to go from Int32 to Object, but the rules above say to retain the overload that requires the least amount of widening.
As for C# (from the C# 5 spec at §7.5.3.2):
Given an argument list A with a set of argument expressions { E1, E2, ..., EN } and two applicable function members MP and MQ with parameter types { P1, P2, ..., PN } and { Q1, Q2, ..., QN }, MP is defined to be a better function member than MQ if
for each argument, the implicit conversion from EX to QX is not better than the implicit conversion from EX to PX, and
for at least one argument, the conversion from EX to PX is better than the conversion from EX to QX.
Ok, now how is better defined (§7.5.3.4)?
Given a conversion C1 that converts from a type S to a type T1, and a conversion C2 that converts from a type S to a type T2, C1 is a better conversion than C2 if at least one of the following holds:
An identity conversion exists from S to T1 but not from S to T2
T1 is a better conversion target than T2 (§7.5.3.5)
Let's look at §7.5.3.5:
Given two different types T1 and T2, T1 is a better conversion target than T2 if at least one of the following holds:
An implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists
T1 is a signed integral type and T2 is an unsigned integral type.
So, we're converting from Color to either Object or Int32. Which one is better according to these rules?
There is an implicit conversion from Color to Object
There is no implicit conversion from Object to Color (obviously)
There is no implicit conversion from Color to Int32 (these are explicit in C#)
There is no implicit conversion from Int32 to Color (except for 0)
Spec §6.1:
The following conversions are classified as implicit conversions:
Identity conversions
Implicit numeric conversions
Implicit enumeration conversions
Implicit nullable conversions
Null literal conversions
Implicit reference conversions
Boxing conversions
Implicit dynamic conversions
Implicit constant expression conversions
User-defined implicit conversions
Anonymous function conversions
Method group conversions
Implicit numeric conversions make no mention of enum types, and Implicit enumeration conversions deal with the other way around:
An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type and to any nullable-type whose underlying type is an enum-type. In the latter case the conversion is evaluated by converting to the underlying enum-type and wrapping the result (§4.1.10).
Enums are handled by boxing conversions (§6.1.7):
A boxing conversion permits a value-type to be implicitly converted to a reference type. A boxing conversion exists from any non-nullable-value-type to object and dynamic, to System.ValueType and to any interface-type implemented by the non-nullable-value-type. Furthermore an enum-type can be converted to the type System.Enum.
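
To see these classifications in action, here is a minimal C# sketch (assuming the Color enum from the question; the commented-out lines are the ones the compiler rejects):

Color c = 0;               // OK: implicit enumeration conversion for the literal 0
// Color c2 = 1;           // error CS0266: no implicit conversion for other values
object o = Color.Red;      // OK: boxing conversion, the one WriteLine(Object) relies on
// int i = Color.Red;      // error CS0266: enum to int is explicit in C#
int j = (int)Color.Red;    // OK with an explicit cast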

Related

C# confuse parameter object and enumerator [duplicate]

First, a bit of background. Read the question and accepted answer posted here for a specific scenario for my question. I'm not sure if other, similar cases exist but this is the only case I am aware of.
The above "quirk" is something that I've been aware of for a long time. I didn't understand the full breadth of the cause until just recently.
Microsoft's documentation on the SqlParameter class sheds a little more light on the situation.
When you specify an Object in the value parameter, the SqlDbType is inferred from the Microsoft .NET Framework type of the Object.
Use caution when you use this overload of the SqlParameter constructor to specify integer parameter values. Because this overload takes a value of type Object, you must convert the integral value to an Object type when the value is zero, as the following C# example demonstrates.
Parameter = new SqlParameter("@pname", Convert.ToInt32(0));
If you do not perform this conversion, the compiler assumes that you are trying to call the SqlParameter (string, SqlDbType) constructor overload.
(emph. added)
My question is: why does the compiler assume that when you specify a hard-coded "0" (and only the value "0") you are trying to specify an enumeration type, rather than an integer type? In this case, it assumes that you are passing a SqlDbType value, instead of the value 0.
This is non-intuitive and, to make matters worse, the error is inconsistent. I have old applications that I've written which have called stored procedures for years. I'll make a change to the application (oftentimes not even associated with my SQL Server classes), publish an update, and this issue will all of a sudden break the application.
Why is the compiler confused by the value 0, when a class has two similar constructor signatures, one taking an object/integer and the other accepting an enumeration?
As I've mentioned, I've never seen this as a problem with any other constructor or method on any other class. Is this unique to the SqlParameter class, or is this a bug inherent in C#/.Net?
It's because a zero-integer is implicitly convertible to an enum:
enum SqlDbType
{
    Zero = 0,
    One = 1
}

class TestClass
{
    public TestClass(string s, object o)
    { System.Console.WriteLine("{0} => TestClass(object)", s); }

    public TestClass(string s, SqlDbType e)
    { System.Console.WriteLine("{0} => TestClass(Enum SqlDbType)", s); }
}
// This is perfectly valid:
SqlDbType valid = 0;
// Whilst this is not (error CS0266):
// SqlDbType ohNoYouDont = 1;

var a1 = new TestClass("0", 0);
// prints: 0 => TestClass(Enum SqlDbType)
var a2 = new TestClass("1", 1);
// prints: 1 => TestClass(object)
(Adapted from Visual C# 2008 Breaking Changes - change 12)
When the compiler performs the overload resolution 0 is an Applicable function member for both the SqlDbType and the object constructors because:
an implicit conversion (Section 6.1) exists from the type of the argument to the type of the corresponding parameter
(Both SqlDbType x = 0 and object x = 0 are valid)
The SqlDbType parameter is better than the object parameter because of the better conversion rules:
If T1 and T2 are the same type, neither conversion is better.
object and SqlDbType are not the same type
If S is T1, C1 is the better conversion.
0 is not an object
If S is T2, C2 is the better conversion.
0 is not a SqlDbType
If an implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists, C1 is the better conversion.
No implicit conversion from object to SqlDbType exists
If an implicit conversion from T2 to T1 exists, and no implicit conversion from T1 to T2 exists, C2 is the better conversion.
An implicit conversion from SqlDbType to object exists, so the SqlDbType is the better conversion
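Given that, the usual fix is to take the decision away from overload resolution by casting to object yourself (a small sketch reusing the TestClass above):

var a3 = new TestClass("0 boxed", (object)0);
// prints: 0 boxed => TestClass(object)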
Note that what exactly constitutes a constant 0 has (quite subtly) changed in Visual C# 2008 (Microsoft's implementation of the C# spec), as Eric explains in his answer.
RichardTowers' answer is excellent, but I thought I'd add a bit to it.
As the other answers have pointed out, the reason for the behaviour is (1) zero is convertible to any enum, and obviously to object, and (2) any enum type is more specific than object, so the method that takes an enum is therefore chosen by overload resolution as the better method. Point two is, I hope, self-explanatory, but what explains point one?
First off, there is an unfortunate deviation from the specification here. The specification says that any literal zero, that is, the number 0 actually literally appearing in the source code, may be implicitly converted to any enum type. The compiler actually implements that any constant zero may be thusly converted. The reason for that is because of a bug whereby the compiler would sometimes allow constant zeroes and sometimes not, in a strange and inconsistent manner. The easiest way to solve the problem was to consistently allow constant zeroes. You can read about this in detail here:
https://web.archive.org/web/20110308161103/http://blogs.msdn.com/b/ericlippert/archive/2006/03/28/the-root-of-all-evil-part-one.aspx
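A short illustration of the deviation (a sketch assuming a hypothetical MyFlags enum; a strict reading of the spec allows only the first assignment, but the Microsoft compilers accept both):

[System.Flags]
enum MyFlags { None = 0, A = 1 }

const int ConstZero = 0;
MyFlags f1 = 0;          // literal zero: what the spec permits
MyFlags f2 = ConstZero;  // constant zero: what the compiler actually implements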
Second, the reason for allowing zeros to convert to any enum is to ensure that it is always possible to zero out a "flags" enum. Good programming practice is that every "flags" enum have a value "None" which is equal to zero, but that is a guideline, not a requirement. The designers of C# 1.0 thought that it looked strange that you might have to say
for (MyFlags f = (MyFlags)0; ...
to initialize a local. My personal opinion is that this decision has caused more trouble than it was worth, both in terms of the grief over the abovementioned bug and in terms of the oddities it introduces into overload resolution that you have discovered.
Finally, the designers of the constructors could have realized that this would be a problem in the first place, and made the signatures of the overloads such that the developer could clearly decide which ctor to call without having to insert casts. Unfortunately this is a pretty obscure issue, and so a lot of designers are unaware of it. Hopefully anyone reading this will not make the same mistake: do not create an ambiguity between object and any enum if you intend the two overloads to have different semantics.
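
For instance, a hypothetical API could heed that advice by not overloading on object and an enum at all (all names here are invented for illustration):

enum WidgetKind { Default = 0, Fancy = 1 }

class Widget
{
    // Risky design: Widget(string, object) next to Widget(string, WidgetKind)
    // would let a literal 0 silently bind to the enum overload.
    public Widget(string name, WidgetKind kind) { /* ... */ }

    // Safer: give the object-taking path a distinct name, so there is no competition.
    public static Widget FromValue(string name, object value)
    {
        return new Widget(name, WidgetKind.Default);
    }
}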
This is apparently a known behavior and affects any function overloads where there is both an enumeration and object type. I don't understand it all, but Eric Lippert summed it up quite nicely on his blog
This is caused by the fact that the integer literal 0 has an implicit conversion to any enum type. The C# specification states:
6.1.3 Implicit enumeration conversions
An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type and to any nullable-type whose underlying type is an enum-type. In the latter case the conversion is evaluated by converting to the underlying enum-type and wrapping the result.
As a result, the most specific overload in this case is SqlParameter(string, SqlDbType).
This does not apply for other int values, so the SqlParameter(string, object) constructor is the most specific.
When resolving an overloaded call, C# selects the most specific option. The SqlParameter class has two constructors that take exactly two arguments, SqlParameter(String, SqlDbType) and SqlParameter(String, Object). When you provide the literal 0, it can be interpreted as an Object or as a SqlDbType. Since SqlDbType is more specific than Object, it is assumed to be the intent.
You can read more about overload resolution in this answer.

Why doesn't the C# compiler consider this generic type inference ambiguous?

Given the following class:
public static class EnumHelper
{
    // Overload 1
    public static List<string> GetStrings<TEnum>(TEnum value)
    {
        return EnumHelper<TEnum>.GetStrings(value);
    }

    // Overload 2
    public static List<string> GetStrings<TEnum>(IEnumerable<TEnum> value)
    {
        return EnumHelper<TEnum>.GetStrings(value);
    }
}
What rules are applied to select one of its two generic methods? For example, in the following code:
List<MyEnum> list = new List<MyEnum>();
EnumHelper.GetStrings(list);
it ends up calling EnumHelper.GetStrings<List<MyEnum>>(List<MyEnum>) (i.e. Overload 1), even though it seems just as valid to call EnumHelper.GetStrings<MyEnum>(IEnumerable<MyEnum>) (i.e. Overload 2).
For example, if I remove overload 1 entirely, the call still compiles fine, instead choosing the method marked as overload 2. This seems to make generic type inference kind of dangerous, as it was calling a method which intuitively seems like a worse match. I'm passing a List/Enumerable as the argument, which seems very specific and seems like it should match a method with a similar parameter (IEnumerable<TEnum>), but it's choosing the method with the more general generic parameter (TEnum value).
What rules are applied to select one of its two generic methods?
The rules in the specification - which are extremely complex, unfortunately. In the ECMA C# 5 standard, the relevant bit starts at section 12.6.4.3 ("better function member").
However, in this case it's relatively simple. Both methods are applicable, with type inference occurring separately for each method:
For method 1, TEnum is inferred to be List<MyEnum>
For method 2, TEnum is inferred to be MyEnum
Next the compiler starts checking the conversions from arguments to parameters, to see whether one conversion is "better" than the other. That goes into section 12.6.4.4 ("better conversion from expression").
At this point we're considering these conversions:
Overload 1: List<MyEnum> to List<MyEnum> (as TEnum is inferred to be List<MyEnum>)
Overload 2: List<MyEnum> to IEnumerable<MyEnum> (as TEnum is inferred to be MyEnum)
Fortunately, the very first rule helps us here:
Given an implicit conversion C1 that converts from an expression E to a type T1, and an implicit conversion C2 that converts from an expression E to a type T2, C1 is a better conversion than C2 if at least one of the following holds:
E has a type S and an identity conversion exists from S to T1 but not from S to T2
There is an identity conversion from List<MyEnum> to List<MyEnum>, but there isn't an identity conversion from List<MyEnum> to IEnumerable<MyEnum>, therefore the first conversion is better.
There aren't any other conversions to consider, therefore overload 1 is seen as the better function member.
Your argument about "more general" vs "more specific" parameters would be valid if this earlier phase had ended in a tie, but it doesn't: "better conversion" for arguments to parameters is considered before "more specific parameters".
In general, overload resolution is incredibly complicated. It has to take into account inheritance, generics, type-less arguments (e.g. the null literal, the default literal, anonymous functions), parameter arrays, and all the possible conversions. Almost any time a new feature is added to C#, it affects overload resolution :(
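
If you do want overload 2, you can take inference out of the picture by supplying the type argument explicitly (a sketch using the question's types):

List<string> viaInference = EnumHelper.GetStrings(list);         // overload 1: TEnum = List<MyEnum>
List<string> viaExplicit = EnumHelper.GetStrings<MyEnum>(list);  // overload 2: TEnum = MyEnum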

Implicit convertion of generic types parameterized with reference types vs value types

Why does C# implicitly convert generic types parameterized with a reference type implementing an interface to the same generic type parameterized with the implemented interface, but not perform the same implicit conversion for value types?
Essentially, why does the first line compile but the second one fail?
IEnumerable<IComparable<Version>> x = Enumerable.Empty<Version>();
IEnumerable<IComparable<int>> y = Enumerable.Empty<int>();
Especially great would be a reference to the part of the spec that describes this behavior.
Short answer
Despite the name "implicit", implicit conversions don't apply unless the rules explicitly say they do, and the rules don't allow boxing conversions when going from IEnumerable<int> to IEnumerable<IComparable<int>>. As a simpler case, you can't go from IEnumerable<int> to IEnumerable<object> for the same reason, and that case is well documented.
Long answer
OK, first of all, why would IEnumerable<T> convert to IEnumerable<IComparable<T>> at all? This is covered in §6.1.6 (C# Language Specification 5.0):
The implicit reference conversions are:
[...]
From any reference-type to an interface or delegate type T if it has an implicit identity or reference conversion to an interface or delegate type T0 and T0 is variance-convertible (§13.1.3.2) to T.
And §13.1.3.2 says:
A type T<A1, …, An> is variance-convertible to a type T<B1, …, Bn> if T is either an interface or a delegate type declared with the variant type parameters T<X1, …, Xn>, and for each variant type parameter Xi one of the following holds:
Xi is covariant and an implicit reference or identity conversion exists from Ai to Bi
Since IEnumerable<T> is covariant in T, this means that if there is an implicit reference or identity conversion from T to IComparable<T>, then there is an implicit reference conversion from IEnumerable<T> to IEnumerable<IComparable<T>>, by virtue of these being variance-convertible.
I emphasized "reference" for a reason, of course. Since Version implements IComparable<Version>, there is an implicit reference conversion:
From any class-type S to any interface-type T, provided S implements T.
Right, so now, why doesn't IEnumerable<int> implicitly convert to IEnumerable<IComparable<int>>? After all, int implicitly converts to IComparable<int>:
IComparable<int> x = 0; // sure
But it does so not through a reference conversion or an identity conversion, but through a boxing conversion (§6.1.7):
A boxing conversion exists from any non-nullable-value-type [...] to any interface-type implemented by the non-nullable-value-type.
The rules of §13.1.3.2 do not allow boxing conversions in considering whether a variance conversion is possible, and there is no other rule that would enable an implicit conversion from IEnumerable<int> to IEnumerable<IComparable<int>>. Despite the name, implicit conversions are covered by explicit rules.
There is actually a much simpler illustration of this problem:
object x = 0; // sure, an int is an object
IEnumerable<object> y = new int[] { 0 }; // except when it's not
This isn't allowed for the same reason: there is no reference conversion from int to object, only a boxing conversion, and those are not considered. And in this form, there are several questions on Stack Overflow that explain why this is not allowed (like this one). To summarize: it's not that this is impossible, but to support it, the compiler would have to generate supporting code to stick the code for the boxing conversion somewhere. The C# team valued transparency in this case over ease of use and decided to allow identity-preserving conversions only.
Finally, as a matter of practical consideration, suppose you had an IEnumerable<int> and you needed an IEnumerable<IComparable<int>>, how would you get it? Well, by doing the boxing yourself:
Func<int, IComparable<int>> asComparable = i => i; // compiles to ldarg ; box ; ret
IEnumerable<IComparable<int>> x = Enumerable.Empty<int>().Select(asComparable);
Of course using Enumerable.Cast would be more practical here; I wrote it this way to highlight that an implicit conversion is involved. There is a cost to this operation, and that's just the point; the C# designers wanted this cost to be explicit.
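
For reference, the Enumerable.Cast route mentioned above looks like this; it boxes each element as it is iterated, so the cost is still paid, just per element (sketch):

IEnumerable<IComparable<int>> viaCast = Enumerable.Empty<int>().Cast<IComparable<int>>();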

Implicit cast of Func<MyType> to MyType

Given the following class:
public class MyType
{
    public static implicit operator MyType(Func<MyType> wrapper)
    {
        return wrapper();
    }
}
From the implicit cast of Func<MyType> to MyType, I assumed the following would be possible:
public MyType MyTypeWrapper()
{
    return new MyType();
}

public void MyTestMethod()
{
    MyType m = MyTypeWrapper; // not a call!
}
However I'm getting:
Cannot convert method group 'MyTypeWrapper' to non-delegate type 'Test.MyType'. Did you intend to invoke the method?
Which, unfortunately for me, when searched for (as I half expected) resulted in tons of questions to which the answer was:
Hey, ya dun goofed; toss () on the end of WhateverMethod!
Now, as I'm typing this, I've noticed that an explicit cast does in fact compile:
MyType m = (MyType) MyTypeWrapper;
Why is it that I cannot implicitly cast a Func<MyType> to MyType as I've described?
This is unfortunate. I'm pretty sure you've found a compiler bug, and this section of the specification is extremely difficult to read.
Section 6.4.4 of the C# 4 specification explains why your implicit conversion is illegal.
The algorithm goes like this. First look at the source type and target type. There is no source type because a method group has no type. The target type is MyType. So search MyType for user-defined implicit conversions. Now the question is: what is the set of applicable user-defined operators ... that convert from a type encompassing S? S is the source type and we have already established that there is no source type. So this is already evidence that the conversion should fail. But even if the compiler for some reason decides that your Func<MyType> conversion is applicable, the rule is a standard implicit conversion ... is performed. Method group conversions are deliberately not classified as standard conversions.
So that's why it should be illegal.
Why then is the explicit cast legal?
There's no justification for that. This appears to be a bug.
This is unfortunate; many apologies for the error. I shall report it to my former colleagues; if they have an analysis which conflicts with mine, I'll update the answer.
UPDATE: My former colleagues inform me that the spec problem whereby the source expression is assumed to have a type will be addressed by a rewording in the next release of the spec. No word yet as to whether the explicit cast behavior is a bug.
You're already using the built-in implicit conversion from method group to Func<MyType>.
The compiler won't do two implicit conversions at once.
Once you have an explicit cast to your class, the compiler knows to look for an implicit cast to any type that can be explicitly cast to your class.
Because the C# compiler won't convert the method group MyTypeWrapper into new Func<MyType>(MyTypeWrapper) for you here. There's a difference between a method group and an actual delegate.
This compiles and runs fine:
MyType m = new Func<MyType>(MyTypeWrapper);
There is an implicit conversion from a method group to a delegate type that matches that group, and there is your user defined implicit conversion from that delegate to a type. The general idea here is that the compiler is only going to use one implicit conversion in a row at a time. When it has an A and needs a C it looks for conversions from A to C, not from A to any type B and from that type to C. That algorithm goes from O(n) to O(n^2) (not to mention possibly being quite confusing for programmers).
The reason your code works when using an explicit cast to MyType is that you're no longer chaining implicit conversions.
The signature of MyTypeWrapper matches the signature of Func<MyType>, but a method group is not itself a Func<MyType>. The language defines an implicit conversion that lets you assign such a method group to a Func, but you must cast explicitly here, because the compiler will not chain implicit casts together for you:
MyType m = (Func<MyType>)MyTypeWrapper; // not a call!
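
Putting the pieces together, any of these spellings gives the compiler only one conversion to perform per step (a sketch using the question's types):

Func<MyType> f = MyTypeWrapper;   // step 1: method group to delegate (implicit)
MyType m1 = f;                    // step 2: delegate to MyType (user-defined implicit)
MyType m2 = MyTypeWrapper();      // or simply invoke the method yourself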

Equivalent implicit operators: why are they legal?

Update!
See my dissection of a portion of the C# spec below; I think I must be missing something, because to me it looks like the behavior I'm describing in this question actually violates the spec.
Update 2!
OK, upon further reflection, and based on some comments, I think I now understand what's going on. The words "source type" in the spec refer to the type being converted from -- i.e., Type2 in my example below -- which simply means that the compiler is able to narrow the candidates down to the two operators defined (since Type2 is the source type for both). However, it cannot narrow the choices any further. So the key words in the spec (as it applies to this question) are "source type", which I previously misinterpreted (I think) to mean "declaring type."
Original Question
Say I have these types defined:
class Type0
{
    public string Value { get; private set; }

    public Type0(string value)
    {
        Value = value;
    }
}

class Type1 : Type0
{
    public Type1(string value) : base(value) { }

    public static implicit operator Type1(Type2 other)
    {
        return new Type1("Converted using Type1's operator.");
    }
}

class Type2 : Type0
{
    public Type2(string value) : base(value) { }

    public static implicit operator Type1(Type2 other)
    {
        return new Type1("Converted using Type2's operator.");
    }
}
Then say I do this:
Type2 t2 = new Type2("B");
Type1 t1 = t2;
Obviously this is ambiguous, as it is not clear which implicit operator should be used. My question is -- since I cannot see any way to resolve this ambiguity (it isn't like I can perform some explicit cast to clarify which version I want), and yet the class definitions above do compile -- why would the compiler allow those matching implicit operators at all?
Dissection
OK, I'm going to step through the excerpt of the C# spec quoted by Hans Passant in an attempt to make sense of this.
Find the set of types, D, from which user-defined conversion operators will be considered. This set consists of S (if S is a class or struct), the base classes of S (if S is a class), and T (if T is a class or struct).
We're converting from Type2 (S) to Type1 (T). So it seems that here D would include all three types in the example: Type0 (because it is a base class of S), Type1 (T) and Type2 (S).
Find the set of applicable user-defined conversion operators, U. This set consists of the user-defined implicit conversion operators declared by the classes or structs in D that convert from a type encompassing S to a type encompassed by T. If U is empty, the conversion is undefined and a compile-time error occurs.
All right, we've got two operators satisfying these conditions. The version declared in Type1 meets the requirements because Type1 is in D and it converts from Type2 (which obviously encompasses S) to Type1 (which is obviously encompassed by T). The version in Type2 also meets the requirements for exactly the same reasons. So U includes both of these operators.
Lastly, with respect to finding the most specific "source type" SX of the operators in U:
If any of the operators in U convert from S, then SX is S.
Now, both operators in U convert from S -- so this tells me that SX is S.
Doesn't this mean that the Type2 version should be used?
But wait! I'm confused!
Couldn't I have only defined Type1's version of the operator, in which case, the only remaining candidate would be Type1's version, and yet according to the spec SX would be Type2? This seems like a possible scenario in which the spec mandates something impossible (namely, that the conversion declared in Type2 should be used when in fact it does not exist).
Ultimately, it can't be prohibited with complete success. You and I could publish two assemblies. Then we could start using each other's assemblies, while updating our own. Then we could each provide implicit casts between types defined in each assembly. Only when we release the next version could this be caught, rather than at compile time.
There's an advantage in not trying to ban things that can't be banned, as it makes for clarity and consistency (and there's a lesson for legislators in that).
We don't really want it to be a compile-time error just to define conversions which might cause ambiguity. Suppose that we alter Type0 to store a double, and for some reason we want to provide separate conversions to signed integer and unsigned integer.
class Type0
{
    public double Value { get; private set; }

    public Type0(double value)
    {
        Value = value;
    }

    public static implicit operator Int32(Type0 other)
    {
        return (Int32)other.Value;
    }

    public static implicit operator UInt32(Type0 other)
    {
        return (UInt32)Math.Abs(other.Value);
    }
}
This compiles fine, and I can use both conversions with
Type0 t = new Type0(0.9);
int i = t;
UInt32 u = t;
However, it's a compile error to try float f = t because either of the implicit conversions could be used to get to an integer type which can then be converted to float.
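Concretely (a sketch using the Type0 above; the commented-out line is the one that fails):

Type0 t = new Type0(0.9);
int i = t;        // OK: the Int32 operator is the unique best match
UInt32 u = t;     // OK: the UInt32 operator is the unique best match
// float f = t;   // error CS0457: ambiguous user-defined conversions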
We only want the compiler to complain about these more complex ambiguities when they're actually used, since we'd like the Type0 above to compile. For consistency, the simpler ambiguity should also cause an error at the point you use it rather than when you define it.
EDIT
Since Hans removed his answer which quoted the spec, here's a quick run through the part of the C# spec that determines whether a conversion is ambiguous, having defined U to be the set of all the conversions which could possibly do the job:
Find the most specific source type, SX, of the operators in U:
If any of the operators in U convert from S, then SX is S.
Otherwise, SX is the most encompassed type in the combined set of source types of the operators in U. If no most encompassed type can be found, then the conversion is ambiguous and a compile-time error occurs.
Paraphrased, we prefer a conversion which converts directly from S, otherwise we prefer the type which is "easiest" to convert S to. In both examples, we have two conversions from S available. If there were no conversions from Type2, we'd prefer a conversion from Type0 over one from object. If no one type is obviously the better choice to convert from, we fail here.
Find the most specific target type, TX, of the operators in U:
If any of the operators in U convert to T, then TX is T.
Otherwise, TX is the most encompassing type in the combined set of target types of the operators in U. If no most encompassing type can be found, then the conversion is ambiguous and a compile-time error occurs.
Again, we'd prefer to convert directly to T, but we'll settle for the type that's "easiest" to convert to T. In Dan's example, we have two conversions to T available. In my example, the possible targets are Int32 and UInt32, and neither is a better match than the other, so this is where the conversion fails. The compiler has no way to know whether float f = t means float f = (float)(Int32)t or float f = (float)(UInt32)t.
If U contains exactly one user-defined conversion operator that converts from SX to TX, then this is the most specific conversion operator. If no such operator exists, or if more than one such operator exists, then the conversion is ambiguous and a compile-time error occurs.
In Dan's example, we fail here because we have two conversions left from SX to TX. We could have no conversions from SX to TX if we chose different conversions when deciding SX and TX. For example, if we had a Type1a derived from Type1, then we might have conversions from Type2 to Type1a and from Type0 to Type1. These would still give us SX=Type2 and TX=Type1, but we don't actually have any conversion from Type2 to Type1. This is OK, because this really is ambiguous. The compiler doesn't know whether to convert Type2 to Type1a and then cast to Type1, or cast to Type0 first so that it can use that conversion to Type1.
