I have a bunch of different enums, such as...
public enum MyEnum
{
    [Description("Army of One")]
    one,
    [Description("Dynamic Duo")]
    two,
    [Description("Three Amigos")]
    three,
    [Description("Fantastic Four")]
    four,
    [Description("The Jackson Five")]
    five
}
I wrote an extension method for any Enum to get the Description attribute if it has one. Simple enough right...
public static string GetDescription(this Enum currentEnum)
{
    var fi = currentEnum.GetType().GetField(currentEnum.ToString());
    var da = (DescriptionAttribute)Attribute.GetCustomAttribute(fi, typeof(DescriptionAttribute));
    return da != null ? da.Description : currentEnum.ToString();
}
I can use this very simply and it works like a charm, returning the description or ToString() as expected.
Here is the problem though. I would like to have the ability to call this on an IEnumerable of MyEnum, YourEnum, or SomeoneElsesEnum. So I wrote the following extension just as simply.
public static IEnumerable<string> GetDescriptions(this IEnumerable<Enum> enumCollection)
{
    return enumCollection.ToList().ConvertAll(a => a.GetDescription());
}
This doesn't work. It compiles fine as a method, but using it gives the following error:
Instance argument: cannot convert from 'System.Collections.Generic.IEnumerable<MyEnum>' to 'System.Collections.Generic.IEnumerable<System.Enum>'
So why is this?
Can I make this work?
The only answer I have found at this point is to write extension methods for generic T as follows:
public static IEnumerable<string> GetDescriptions<T>(this List<T> myEnumList) where T : struct, IConvertible
public static string GetDescription<T>(this T currentEnum) where T : struct, IConvertible
Someone must have a better answer for this, or an explanation of why I can extend an Enum but not an IEnumerable of Enum...
Anyone?
.NET generic covariance only works for reference types. Here, MyEnum is a value type, and System.Enum is a reference type (casting from an enum type to System.Enum is a boxing operation).
So, an IEnumerable<MyEnum> is not an IEnumerable<Enum>, as that would change the representation of each enumerated item from a value type to a reference type; only representation-preserving conversions are allowed. You need to use the generic method trick you've posted to get this to work.
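For what it's worth, here is a minimal sketch of that generic version (same method names as in the question; the class name EnumDescriptionExtensions is just illustrative, and on C# 7.3+ the constraint could be tightened to where T : struct, Enum):
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

public static class EnumDescriptionExtensions
{
    // T is inferred as the concrete enum type (e.g. MyEnum), so no boxing
    // conversion to System.Enum is needed at the call site.
    public static string GetDescription<T>(this T currentEnum)
        where T : struct, IConvertible
    {
        var fi = currentEnum.GetType().GetField(currentEnum.ToString());
        var da = fi?.GetCustomAttribute<DescriptionAttribute>();
        return da != null ? da.Description : currentEnum.ToString();
    }

    public static IEnumerable<string> GetDescriptions<T>(this IEnumerable<T> enumCollection)
        where T : struct, IConvertible
    {
        return enumCollection.Select(e => e.GetDescription());
    }
}

// usage:
// var names = new[] { MyEnum.one, MyEnum.two }.GetDescriptions(); // "Army of One", "Dynamic Duo"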
Starting with version 4, C# supports covariance and contravariance for generic interfaces and delegates. Unfortunately, these variances only work for reference types; they do not work for value types such as enums.
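A small illustration of the difference, assuming the MyEnum type from the question:
IEnumerable<string> strings = new List<string>();
IEnumerable<object> objects = strings;            // OK: string is a reference type, so covariance applies

IEnumerable<MyEnum> enums = new List<MyEnum>();
// IEnumerable<Enum> boxed = enums;               // does not compile: no variance for value types
IEnumerable<Enum> boxed = enums.Cast<Enum>();     // works (Enumerable.Cast from System.Linq), but boxes each element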
Related
I have Enum
public enum ContentMIMEType
{
    [StringValue("application/vnd.ms-excel")]
    Xls,
    [StringValue("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")]
    Xlsx
}
In my extensions class I have 2 methods to get the attribute value:
public static string GetStringValue<TFrom>(this TFrom enumValue)
    where TFrom : struct, IConvertible
{
    ...
}
and
public static string GetStringValue(this Enum @enum)
{
    ...
}
These methods have different signatures, but when I execute ContentMIMEType.Xlsx.GetStringValue(), the 1st method is the one that gets called.
Why does this happen? To me the 2nd method is the more obvious choice (I have tried changing the sort order, but that doesn't help).
Here is more.
Put simply, from the site:
overloading is what happens when you have two methods with the same
name but different signatures. At compile time, the compiler works out
which one it's going to call, based on the compile time types of the
arguments and the target of the method call.
And when the compiler cannot deduce which one is correct, it reports an error.
EDIT:
Based on Constraints on type parameters and the Enum class: an enum is a struct and implements IConvertible, so it meets the requirements, and the compiler uses the first match. There is no conflict with Enum, because Enum sits lower than struct in the inheritance hierarchy.
The signature:
public static string GetStringValue<TFrom>(this TFrom enumValue)
Is a generic signature, which means that it is allowed to be treated as:
public static string GetStringValue<ContentMIMEType>(this ContentMIMEType enumValue)
Which is more specific than:
public static string GetStringValue(this Enum @enum)
And is therefore the method chosen.
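A small illustration, assuming both extension methods from the question are in scope: boxing the value explicitly is what makes the non-generic Enum overload the one that gets chosen.
string viaGeneric = ContentMIMEType.Xlsx.GetStringValue();          // generic overload, TFrom = ContentMIMEType (exact match)
string viaEnum    = ((Enum)ContentMIMEType.Xlsx).GetStringValue();  // non-generic overload on System.Enum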
I have created a method with two generic type parameters, where the type of one parameter (itemsToAdd) must match the element type of the other parameter (inputList).
See this demo code:
public class GenericsDemo
{
    public void AddToList<TList, TItems>(TList inputList, params TItems[] itemsToAdd)
        where TItems : IConvertible
        where TList : IEnumerable<TItems>
    {
        IEnumerable<IConvertible> someOtherList;

        // Sounds good, doesn't work..
        //someOtherList = inputList;

        // This works
        someOtherList = (IEnumerable<IConvertible>)inputList;
    }
}
}
I would expect that inputList could be assigned directly to the IEnumerable<IConvertible> someOtherList, but it needs a cast. Why is the cast needed?
Covariance only works for classes, not for structs (Source).
Thus, if you restrict TItems to reference types, your code compiles (fiddle):
where TItems : class, IConvertible
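For completeness, a sketch of the demo method with that constraint in place (same shape as the code in the question):
using System;
using System.Collections.Generic;

public class GenericsDemo
{
    public void AddToList<TList, TItems>(TList inputList, params TItems[] itemsToAdd)
        where TItems : class, IConvertible      // reference-type constraint enables covariance
        where TList : IEnumerable<TItems>
    {
        // IEnumerable<out T> is covariant, and TItems is now known to be a
        // reference type, so the assignment compiles without a cast.
        IEnumerable<IConvertible> someOtherList = inputList;
    }
}
The trade-off is that with the class constraint you can no longer pass enums or other value types, which is exactly the limitation the variance rules impose.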
A code sample:
interface IFoo { }

class FooImpl : IFoo { }

static void Bar<T>(IEnumerable<T> value)
    where T : IFoo
{
}

static void Bar<T>(T source)
    where T : IFoo
{
}
Can anybody explain why this method call:
var value = new FooImpl[0];
Bar(value);
targets Bar<T>(T source) (and, hence, doesn't compile)?
Does the compiler take type parameter constraints into account at all when resolving overloads?
UPD.
To avoid confusion with arrays. This happens with any implementation of IEnumerable<T>, e.g.:
var value = new List<FooImpl>();
UPD 2.
@ken2k mentioned covariance.
But let's forget about FooImpl. This:
var value = new List<IFoo>();
Bar(value);
produces the same error.
I'm sure that an implicit conversion from List<IFoo> to IEnumerable<IFoo> exists, since I can easily write something like this:
static void SomeMethod(IEnumerable<IFoo> sequence) {}
and pass value into it:
SomeMethod(value);
Does the compiler take type parameter constraints into account at all when resolving overloads?
No, because generic constraints are not part of the function signature. You can verify this by adding a Bar overload that is identical except for the generic constraints:
interface IBar { }

static void Bar<T>(IEnumerable<T> value)
    where T : IFoo
{
}

static void Bar<T>(T source)
    where T : IBar
{
    // fails to compile: Type ____ already defines a member called 'Bar' with the same parameter types
}
The reason your code doesn't compile is because the compiler chooses the "best" match based on the method signature, then tries to apply the generic constraints.
One possible reason why it doesn't is that this call would be ambiguous:
// suppose List<T> had an Add<T>(IEnumerable<T> source) method
List<object> junk = new List<object>();
junk.Add(1);                      // OK
junk.Add("xyzzy");                // OK
junk.Add(new[] { 1, 2, 3, 4 });   // ambiguous - do you intend to add the _array_ or the _contents_ of the array?
The obvious fix is to use a different name for the Bar method that takes a collection (as is done in the BCL with Add and AddRange).
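A sketch of what that rename could look like (BarRange is just an illustrative name, mirroring Add/AddRange):
static void Bar<T>(T source)
    where T : IFoo
{
}

// A distinct name removes the overload competition entirely.
static void BarRange<T>(IEnumerable<T> values)
    where T : IFoo
{
}

// usage:
// Bar(new FooImpl());
// BarRange(new List<FooImpl>());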
EDIT: Ok, the reason why Bar<T>(T source) is selected over Bar<T>(IEnumerable<T> source) when passing a List is the "7.5.3.2 Better function member" section of the C# language specification. It says that when overload resolution must occur, the argument types are matched against the parameter types of the applicable function members (section 7.5.3.1), and the better function member is selected by the following set of rules:
• for each argument, the implicit conversion from EX to QX is not better than the implicit conversion from EX to PX, and
• for at least one argument, the conversion from EX to PX is better than the conversion from EX to QX.
(PX being the parameter types of the first method, QX of the second one)
This rule is applied "after expansion and type argument substitution". Since type argument substitution turns Bar<T>(T source) into Bar<List<FooImpl>>(List<FooImpl> source), that method's parameter is an exact match for the argument and is therefore better than Bar<T>(IEnumerable<T> value), which needs a conversion.
I couldn't find an online version of the language reference, but you can read it here.
EDIT: I misunderstood the question initially; working on finding the correct answer in the C# language spec. Basically, IIRC the method is selected by considering the most appropriate type, and if you don't cast your parameter to IEnumerable<> exactly, then Bar<T>(T source) will match the parameter type exactly, just like in this sample:
public interface ITest { }

public class Test : ITest { }

private static void Main(string[] args)
{
    test(new Test()); // outputs "anything" because Test is matched to any type T before ITest
    Console.ReadLine();
}

public static void test<T>(T anything)
{
    Console.WriteLine("anything");
}

public static void test(ITest it)
{
    Console.WriteLine("it");
}
I will link to it when found.
Because the cast between an array and an enumerable must be explicit: this compiles
var value = new FooImpl[0].AsEnumerable();
Bar(value);
and so does this:
var value = new FooImpl[0] as IEnumerable<IFoo>;
Bar(value);
From the doc:
Starting with the .NET Framework 2.0, the Array class implements the System.Collections.Generic.IList<T>, System.Collections.Generic.ICollection<T>, and System.Collections.Generic.IEnumerable<T> generic interfaces. The implementations are provided to arrays at run time, and as a result, the generic interfaces do not appear in the declaration syntax for the Array class.
So the compiler doesn't know that the array matches the signature for Bar, and you have to cast it explicitly.
As of C# 7.3, generic constraints are now taken into account when selecting overload candidates. From What's new in C# 7.0 through C# 7.3: Improved overload candidates:
When a method group contains some generic methods whose type arguments do not satisfy their constraints, these members are removed from the candidate set.
Thus in C# 7.3 / .NET Core 2.x and later, the code shown in the question compiles and runs successfully, and Bar<T>(IEnumerable<T> value) is called as desired.
A demo .NET 5 fiddle here now compiles successfully.
A demo .NET Framework 4.7.2 fiddle here still fails to compile with the error:
The type 'FooImpl[]' cannot be used as type parameter 'T' in the generic type or method 'TestClass.Bar<T>(T)'. There is no implicit reference conversion from 'FooImpl[]' to 'IFoo'.
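On older compilers, one workaround (besides the casts shown elsewhere in this thread) is to supply the type argument explicitly, which makes the Bar<T>(T source) candidate inapplicable because List<FooImpl> is not convertible to FooImpl. A minimal sketch:
var value = new List<FooImpl>();
Bar<FooImpl>(value);   // only Bar<T>(IEnumerable<T>) is applicable with T = FooImpl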
That's a covariance issue. List<T> is not covariant, so there is no implicit conversion between List<FooImpl> and List<IFoo>.
On the other hand, starting from C# 4, IEnumerable<T> now supports covariance, so this works:
var value = Enumerable.Empty<FooImpl>();
Bar(value);
var value = new List<FooImpl>().AsEnumerable();
Bar(value);
var value = new List<FooImpl>();
Bar((IEnumerable<IFoo>)value);
I have a few questions about generic classes with Enums.
First of all, I declared my class like this:
public class MyClass<TEnum> where TEnum : struct, IConvertible
But, I'm getting an error that states that my class cannot be used with type arguments.
Moreover, I need to convert the Enum's value to an Integer. How can I do that?
public void SomeMethod(TEnum value)
{
    int a = (int)value; // Doesn't work, need to cast to Enum first (?).
}
Thanks.
You already have what you need, since you declared the IConvertible requirement. Just use the ToInt32 (or similar) method:
public class MyClass<TEnum> where TEnum : struct, IConvertible
{
    public int SomeMethod(TEnum value)
    {
        return value.ToInt32(null);
    }
}
For example, the .NET type decimal is a struct and an IConvertible:
MyClass<decimal> test = new MyClass<decimal>();
Console.WriteLine(test.SomeMethod(150m));
For other types, be sure that they implement IConvertible.
You have declared your generic type parameter to implement IConvertible and that interface has a ToInt32 method.
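Applied to the enum scenario from the question, a minimal self-contained sketch (MyEnum here is just an illustrative enum; MyClass and SomeMethod are the names from the question):
using System;
using System.Globalization;

public class MyClass<TEnum> where TEnum : struct, IConvertible
{
    public int SomeMethod(TEnum value)
    {
        // IConvertible.ToInt32 returns the underlying integral value;
        // the interface requires an IFormatProvider argument, so pass one (or null).
        return value.ToInt32(CultureInfo.InvariantCulture);
    }
}

public enum MyEnum { One = 1, Two = 2 }

// usage:
// var c = new MyClass<MyEnum>();
// Console.WriteLine(c.SomeMethod(MyEnum.Two)); // 2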
Consider the following code:
namespace MyApp
{
    using System;
    using System.Collections.ObjectModel;

    class Program
    {
        static void Main(string[] args)
        {
            var col = new MyCollection();
            col.Add(new MyItem { Enum = MyEnum.Second });
            col.Add(new MyItem { Enum = MyEnum.First });

            var item = col[0];
            Console.WriteLine("1) Null ? {0}", item == null);

            item = col[MyEnum.Second];
            Console.WriteLine("2) Null ? {0}", item == null);

            Console.ReadKey();
        }
    }

    class MyItem { public MyEnum Enum { get; set; } }

    class MyCollection : Collection<MyItem>
    {
        public MyItem this[MyEnum val]
        {
            get
            {
                foreach (var item in this) { if (item.Enum == val) return item; }
                return null;
            }
        }
    }

    enum MyEnum
    {
        Default = 0,
        First,
        Second
    }
}
I was surprised to see the following result:
1) Null ? True
2) Null ? False
My first expectation was that because I was passing an int, the default indexer should be used, and the first call should have succeeded.
Instead, it seems that the overload expecting an enum is always called (even when casting 0 as int), and the test fails.
Can someone explain this behavior to me?
And give a workaround to maintain two indexers: one by index, and one for the enum?
EDIT: A workaround seems to be casting the collection to Collection<MyItem>; see this answer.
So:
Why does the compiler choose the most "complex" overload instead of the most obvious one (even though the obvious one is inherited)? Is the indexer considered a plain int method, and why is there no warning that it hides the parent indexer?
Explanation
With this code we are facing two problems:
The 0 value is always convertible to any enum.
Overload resolution always checks the most derived class before walking up the inheritance chain, so the enum indexer is chosen.
For more precise (and better formulated) answers, see the following links:
original answer by James Michael Hare
summary by Eric Lippert
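For reference, a minimal sketch of the workaround mentioned in the question, using col from the code above: upcasting to the base Collection<MyItem> leaves only the inherited int indexer in scope.
// 0 is now interpreted as a position again, not as MyEnum.Default.
MyItem first = ((Collection<MyItem>)col)[0];
Console.WriteLine("Null ? {0}", first == null);   // False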
The various answers here have sussed it out. To sum up and provide some links to explanatory material:
First, the literal zero is convertible to any enum type. The reason for this is because we want you to be able to initialize any "flags" enum to its zero value even if there is no zero enum value available. (If we had to do it all over again we'd probably not implement this feature; rather, we'd say to just use the default(MyEnum) expression if you want to do that.)
In fact, any constant zero, not just the literal zero, is convertible to any enum type. This is for backwards compatibility with a historic compiler bug that is more expensive to fix than to enshrine.
For more details, see
http://blogs.msdn.com/b/ericlippert/archive/2006/03/28/the-root-of-all-evil-part-one.aspx
http://blogs.msdn.com/b/ericlippert/archive/2006/03/29/the-root-of-all-evil-part-two.aspx
That then establishes that your two indexers -- one which takes an int and one which takes an enum -- are both applicable candidates when passed the literal zero. The question then is which is the better candidate. The rule here is simple: if any candidate is applicable in a derived class then it is automatically better than any candidate in a base class. Therefore your enum indexer wins.
The reason for this somewhat counter-intuitive rule is twofold. First, it seems to make sense that the person who wrote the derived class has more information than the person who wrote the base class. They specialized the base class, after all, so it seems reasonable that you'd want to call the most specialized implementation possible when given a choice, even if it is not an exact match.
The second reason is that this choice mitigates the brittle base class problem. If you added an indexer to a base class that happened to be a better match than one on a derived class, it would be unexpected to users of the derived class that code that used to choose the derived class suddenly starts choosing the base class.
See
http://blogs.msdn.com/b/ericlippert/archive/2007/09/04/future-breaking-changes-part-three.aspx
for more discussion of this issue.
As James correctly points out, if you make a new indexer on your class that takes an int then the overload resolution question becomes which is better: conversion from zero to enum, or conversion from zero to int. Since both indexers are on the same type and the latter is exact, it wins.
It seems that, because the enum is int-compatible, it prefers to use the implicit conversion and chooses the indexer that takes an enum defined in your class.
(UPDATE: The real cause turned out to be that it is preferring the implicit conversion from the const int of 0 to the enum over the super-class int indexer because both conversions are equal, so the former conversion is chosen since it is inside of the more derived type: MyCollection.)
I'm not sure why it does this, when there's clearly a public indexer with an int argument out there from Collection<T> -- a good question for Eric Lippert if he's watching this as he'd have a very definitive answer.
I did verify, though, that if you re-define the int indexer in your new class as follows, it will work:
public class MyCollection : Collection<MyItem>
{
    public new MyItem this[int index]
    {
        // make sure we get Collection<T>'s indexer instead.
        get { return base[index]; }
    }
}
From the spec it looks like the literal 0 can always be implicitly converted to an enum:
13.1.3 Implicit enumeration conversions
An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type.
Thus, if you had called it as
int index = 0;
var item = col[index];
It would work because you are forcing it to choose the int indexer, or if you used a non-zero literal:
var item = col[1];
Console.WriteLine("1) Null ? {0}", item == null);
Would work, since 1 cannot be implicitly converted to an enum.
It's still weird, I grant you, considering the indexer from Collection<T> should be just as visible. But it looks like the compiler sees the enum indexer in your subclass, knows that 0 can implicitly be converted to satisfy it, and doesn't go up the class-hierarchy chain.
This seems to be supported by section 7.4.2 Overload Resolution in the specification, which states in part:
and methods in a base class are not candidates if any method in a
derived class is applicable
Which leads me to believe that since the subclass indexer works, it doesn't even check the base class.
In C#, the constant 0 is always implicitly convertible to any enum type. You have overloaded the indexer, so the compiler chooses the most specific overload. Note that this happens at compile time. So if you write:
int x = 0;
var item = col[x];
Now the compiler doesn't infer that x is always equal to 0 on the second line, so it will choose the original this[int value] overload. (The compiler isn't very smart :-))
In early versions of C#, only a literal 0 would be implicitly converted to an enum type. Since version 3.0, all constant expressions that evaluate to 0 can be implicitly converted to an enum type. That's why even (int)0 is converted to an enum.
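To illustrate with the collection from the question (col as defined in Main):
const int Zero = 0;     // a constant expression with value 0
var a = col[Zero];      // enum indexer: looks for MyEnum.Default, returns null

int x = 0;              // not a constant
var b = col[x];         // inherited int indexer: returns the item at position 0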
Update: Extra information about the overload resolution
I always thought that the overload resolution just looked at the method signatures, but it also seems to prefer methods in derived classes. Consider for example the following code:
public class Test
{
    public void Print(int number)
    {
        Console.WriteLine("Number: " + number);
    }

    public void Print(Options option)
    {
        Console.WriteLine("Option: " + option);
    }
}

public enum Options
{
    A = 0,
    B = 1
}
This will result in the following behavior (with var t = new Test()):
t.Print(0);          // "Number: 0"
t.Print(1);          // "Number: 1"
t.Print(Options.A);  // "Option: A"
t.Print(Options.B);  // "Option: B"
However, if you create a base class and move the Print(int) overload to the base class, then the Print(Options) overload will have a higher preference:
public class TestBase
{
    public void Print(int number)
    {
        Console.WriteLine("Number: " + number);
    }
}

public class Test : TestBase
{
    public void Print(Options option)
    {
        Console.WriteLine("Option: " + option);
    }
}
Now the behavior is changed:
t.Print(0);          // "Option: A"  (0 converts implicitly to Options, and the derived overload wins)
t.Print(1);          // "Number: 1"
t.Print(Options.A);  // "Option: A"
t.Print(Options.B);  // "Option: B"
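And if you do want the base overload for a literal 0 in that situation, an upcast restores the original resolution (a small sketch using the classes above):
((TestBase)t).Print(0);   // "Number: 0" - only TestBase.Print(int) is visible on the cast expression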