C#: assign a derived class to a base generic variable [duplicate]

What is the real reason for that limitation? Is it just work that had to be done? Is it conceptually hard? Is it impossible?
Sure, one couldn't use the type parameters in fields, because they are always read-write. But that can't be the answer, can it?
The reason for this question is that I'm writing an article on variance support in C# 4, and I feel that I should explain why it is restricted to delegates and interfaces. Just to reverse the onus of proof.
Update:
Eric asked about an example.
What about this (don't know if that makes sense, yet :-))
public class Lookup<out T> where T : Animal {
    public T Find(string name) {
        Animal a = _cache.FindAnimalByName(name);
        return a as T;
    }
}
var findReptiles = new Lookup<Reptile>();
Lookup<Animal> findAnimals = findReptiles;
The reason for having that in one class could be the cache that is held in the class itself. And please don't give pets of different types the same name!
BTW, this brings me to optional type parameters in C# 5.0 :-)
Update 2: I'm not claiming the CLR and C# should allow this. I'm just trying to understand what led to it not being allowed.

First off, as Tomas says, it is not supported in the CLR.
Second, how would that work? Suppose you have
class C<out T>
{ ... how are you planning on using T in here? ... }
T can only be used in output positions. As you note, the class cannot have any field of type T because the field could be written to. The class cannot have any methods that take a T, because those are logically writes. Suppose you had this feature -- how would you take advantage of it?
This would be useful for immutable classes if we could, say, make it legal to have a readonly field of type T; that way we'd massively cut down on the likelihood that it be improperly written to. But it's quite difficult to come up with other scenarios that permit variance in a typesafe manner.
If you have such a scenario, I'd love to see it. That would be points towards someday getting this implemented in the CLR.
UPDATE: See
Why isn't there generic variance for classes in C# 4.0?
for more on this question.

As far as I know, this feature isn't supported by the CLR, so adding it would require significant work on the CLR side as well. I believe that co- and contravariance for interfaces and delegates was actually supported by the CLR before version 4.0, so this was a relatively straightforward extension to implement.
(Supporting this feature for classes would be definitely useful, though!)

If they were permitted, useful, 100% type-safe (no internal typecasts) classes or structures could be defined that were covariant with regard to their type parameter T, provided their constructors accepted one or more Ts or T suppliers. Useful, 100% type-safe classes or structures could be defined that were contravariant with respect to T if their constructors accepted one or more T consumers. I'm not sure there's much advantage of a class over an interface, beyond the ability to use "new" rather than a static factory method (most likely on a class whose name is similar to that of the interface), but I can certainly see use cases for having immutable structures support covariance.
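In today's C#, the closest approximation of such a covariant "T supplier" class is to keep the class itself invariant and expose it through a covariant interface. A minimal sketch (assuming using System; plus Animal, Reptile and a FindReptileByName helper, none of which are defined here):

// The class stays invariant; the covariant interface over it gives the
// assignment from the question. T only ever flows out.
public interface IAnimalLookup<out T> where T : Animal
{
    T Find(string name);
}

public class AnimalLookup<T> : IAnimalLookup<T> where T : Animal
{
    private readonly Func<string, T> _finder;

    public AnimalLookup(Func<string, T> finder)
    {
        _finder = finder;
    }

    public T Find(string name)
    {
        return _finder(name);
    }
}

// Usage: the interface reference is covariant even though the class is not.
IAnimalLookup<Reptile> findReptiles = new AnimalLookup<Reptile>(FindReptileByName);
IAnimalLookup<Animal> findAnimals = findReptiles;   // legal because of 'out T'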

Related

Is it possible to create a variant value type?

The paper Valued Conversions by Kevlin Henney gives a motivation for a so-called variant value type functionality, as well as an outline of a C++ implementation. It is a good read and it covers exactly what I would like to have available in C#: a general type that can hold values of different value-types.
I have not been able to find anything like this in C# though. Somewhat similar questions on SO have unsatisfactory answers and comments like "this is probably not what you want". This surprises me because it looks like fairly commonly required functionality. Henney's C++ boost::any class is widely used.
Is it not possible to create this functionality in C#?
Edit: Responding to one of the answers, I do not think that generics will do the trick. Using a generic requires the developer to know what kind of value-type the Variant variable is holding, and that type becomes fixed for that particular Variant variable as well. But the Variant type I am talking about should be able to hold different types. For example, a function Variant ReadValue() could read an entry from a file, parse it, fill the Variant value accordingly and then return it. The caller does not know in advance what kind of type will be contained in the returned Variant.
This is what generics are for. List<T> where T is anything at all. Generics provide both compile-time and runtime type safety.
You could create your own generic type to store any value you want. You could also cast anything to object and pass it around as such.
You can also use generic constraints to limit your type, such as wanting to only have T be a reference type:
public class MyClass<T> where T : class
Or a value type:
public class MyClass<T> where T : struct
See more here: http://msdn.microsoft.com/en-us/library/d5x73970.aspx
You can look into using dynamic for this as well.
The dynamic type enables the operations in which it occurs to bypass compile-time type checking. Instead, these operations are resolved at run time.
Type dynamic behaves like type object in most circumstances. However, operations that contain expressions of type dynamic are not resolved or type checked by the compiler.
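A tiny, purely illustrative sketch (ReadValue is the hypothetical method from the question, assumed to return some value whose static type is unknown):

dynamic value = ReadValue();     // hypothetical helper; could hold an int, a string, ...
Console.WriteLine(value + 1);    // resolved at run time; throws a RuntimeBinderException
                                 // if the stored value doesn't support '+'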
From what I understand, using any in C++ is the same as using a combination of object and the Convert.ChangeType method in C#, except that any has nice syntax for automatic conversion into and out of the any type, and it is not limited to value types.
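As a rough sketch of that object-plus-ChangeType idea (the Variant type below is purely illustrative, not a BCL type; assumes using System;):

// A minimal holder: store as object, convert on the way out.
public struct Variant
{
    private readonly object _value;

    public Variant(object value)
    {
        _value = value;
    }

    // The caller picks the target type when reading, not when storing.
    public T As<T>()
    {
        return (T)Convert.ChangeType(_value, typeof(T));
    }
}

// Usage:
var v = new Variant("42");
int i = v.As<int>();       // 42
double d = v.As<double>(); // 42.0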
Henney's article is quite old (year 2000). In a live lesson (London DevWeek 2008) I remember him explaining low coupling and implementing towards abstractions (interfaces) for the OCP (Open-Closed Principle). He was quite fond of generics, and more so of generic interfaces. So conceptually it's most probably exactly what he had written about back then, albeit I must admit I didn't read the article. C# generics are even a bit more robust than C++ templates; you should look at Covariance and Contravariance in Generics.
On another note:
What you can't do with generics is variable-arity (variadic) templates, which are available in C++ (and, in the form of variadic functions, in C).
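The nearest everyday workaround (not a true equivalent, just the usual substitute) is a params array, which is variadic in the number of arguments but not in the number of type parameters:

// Variadic in argument count, but every argument shares one static type.
static void Log(params object[] items)
{
    foreach (var item in items)
    {
        Console.WriteLine(item);
    }
}

// Usage:
Log("answer", 42, DateTime.Now);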

Why was IEnumerable<T> made covariant in C# 4?

In earlier versions of C# IEnumerable was defined like this:
public interface IEnumerable<T> : IEnumerable
Since C# 4 the definition is:
public interface IEnumerable<out T> : IEnumerable
Is it just to make the annoying casts in LINQ expressions go away?
Won't this introduce the same problems as with string[] <: object[] (broken array variance) in C#?
How was the addition of the covariance done from a compatibility point of view? Will earlier code still work on later versions of .NET or is recompilation necessary here? What about the other way around?
Was previous code using this interface strictly invariant in all cases, or is it possible that certain use cases will behave differently now?
Marc's and CodeInChaos's answers are pretty good, but just to add a few more details:
First off, it sounds like you are interested in learning about the design process we went through to make this feature. If so, then I encourage you to read my lengthy series of articles that I wrote while designing and implementing the feature. Start from the bottom of the page:
Covariance and contravariance blog posts
Is it just to make the annoying casts in LINQ expressions go away?
No, it is not just to avoid Cast<T> expressions, but doing so was one of the motivators that encouraged us to do this feature. We realized that there would be an uptick in the number of "why can't I use a sequence of Giraffes in this method that takes a sequence of Animals?" questions, because LINQ encourages the use of sequence types. We knew that we wanted to add covariance to IEnumerable<T> first.
We actually considered making IEnumerable<T> covariant even in C# 3 but decided that it would be strange to do so without introducing the whole feature for anyone to use.
Won't this introduce the same problems as with string[] <: object[] (broken array variance) in C#?
It does not directly introduce that problem because the compiler only allows variance when it is known to be typesafe. However, it does preserve the broken array variance problem. With covariance, IEnumerable<string[]> is implicitly convertible to IEnumerable<object[]>, so if you have a sequence of string arrays, you can treat that as a sequence of object arrays, and then you have the same problem as before: you can try to put a Giraffe into that string array and get an exception at runtime.
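A short sketch of that preserved hole (assumes using System.Collections.Generic;):

IEnumerable<string[]> stringArrays = new List<string[]> { new[] { "hello" } };
IEnumerable<object[]> objectArrays = stringArrays;   // fine in C# 4: IEnumerable<out T>

foreach (object[] arr in objectArrays)
{
    // The "Giraffe into a string array" problem: this compiles, but the
    // underlying array really is a string[], so it throws
    // ArrayTypeMismatchException at run time.
    arr[0] = new object();
}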
How was the addition of the covariance done from a compatibility point of view?
Carefully.
Will earlier code still work on later versions of .NET or is recompilation necessary here?
Only one way to find out. Try it and see what fails!
It's often a bad idea to try to force code compiled against .NET X to run against .NET Y if X != Y, regardless of changes to the type system.
What about the other way around?
Same answer.
Is it possible that certain use cases will behave differently now?
Absolutely. Making an interface covariant where it was invariant before is technically a "breaking change" because it can cause working code to break. For example:
if (x is IEnumerable<Animal>)
    ABC();
else if (x is IEnumerable<Turtle>)
    DEF();
When IE<T> is not covariant, this code chooses either ABC or DEF or neither. When it is covariant, it never chooses DEF anymore.
Or:
class B { public void M(IEnumerable<Turtle> turtles){} }
class D : B { public void M(IEnumerable<Animal> animals){} }
Before, if you called M on an instance of D with a sequence of turtles as the argument, overload resolution chooses B.M because that is the only applicable method. If IE is covariant, then overload resolution now chooses D.M because both methods are applicable, and an applicable method on a more-derived class always beats an applicable method on a less-derived class, regardless of whether the argument type match is exact or not.
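Concretely, with B and D as above (Turtle and Animal being the usual stand-in classes), the call that changes meaning looks like this:

IEnumerable<Turtle> turtles = new List<Turtle>();
new D().M(turtles);   // C# 3: only B.M is applicable, so B.M runs.
                      // C# 4: D.M(IEnumerable<Animal>) is also applicable and wins.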
Or:
class Weird : IEnumerable<Turtle>, IEnumerable<Banana> { ... }

class B
{
    public void M(IEnumerable<Banana> bananas) { }
}

class D : B
{
    public void M(IEnumerable<Animal> animals) { }
    public void M(IEnumerable<Fruit> fruits) { }
}
If IE is invariant then a call to d.M(weird) resolves to B.M. If IE suddenly becomes covariant then both methods D.M are applicable, both are better than the method on the base class, and neither is better than the other, so, overload resolution becomes ambiguous and we report an error.
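For illustration, the call that flips from resolvable to ambiguous (assuming Weird is fleshed out enough to instantiate):

var d = new D();
d.M(new Weird());   // invariant IE: only B.M(IEnumerable<Banana>) applies.
                    // covariant IE: both D.M overloads apply, neither is better,
                    // and the compiler reports an ambiguous-call error.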
When we decided to make these breaking changes, we were hoping that (1) the situations would be rare, and (2) when situations like this arise, almost always it is because the author of the class is attempting to simulate covariance in a language that doesn't have it. By adding covariance directly, hopefully when the code "breaks" on recompilation, the author can simply remove the crazy gear trying to simulate a feature that now exists.
In order:
Is it just to make the annoying casts in LINQ expressions go away?
It makes things behave like people generally expect ;p
Won't this introduce the same problems as with string[] <: object[] (broken array variance) in C#?
No, since it doesn't expose any Add mechanism or similar (and can't; out and in are enforced by the compiler)
How was the addition of the covariance done from a compatibility point of view?
The CLI already supported it; this merely makes C# (and some of the existing BCL methods) aware of it
Will earlier code still work on later versions of .NET or is recompilation necessary here?
It is entirely backwards compatible; however, C# code that relies on C# 4.0 variance won't compile with a C# 2.0 (etc.) compiler
What about the other way around?
That is not unreasonable
Was previous code using this interface strictly invariant in all cases, or is it possible that certain use cases will behave differently now?
Some BCL calls (IsAssignableFrom) may return different results now
Is it just to make the annoying casts in LINQ expressions go away?
Not only when using LINQ. It's useful everywhere you have an IEnumerable<Derived> and the code expects an IEnumerable<Base>.
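For example (Animal and Giraffe being the usual stand-in classes, not defined here):

// Before C# 4 this needed giraffes.Cast<Animal>() or similar; now it just works.
void PrintAll(IEnumerable<Animal> animals)
{
    foreach (var a in animals)
    {
        Console.WriteLine(a);
    }
}

IEnumerable<Giraffe> giraffes = new List<Giraffe>();
PrintAll(giraffes);   // legal: IEnumerable<out T> is covariant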
Won't this introduce the same problems as with string[] <: object[] (broken array variance) in C#?
No, because covariance is only allowed on interfaces that return values of that type, but don't accept them. So it's safe.
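A quick contrast showing where the compiler draws that line:

IEnumerable<string> strings = new List<string>();
IEnumerable<object> objects = strings;    // allowed: IEnumerable<out T> only produces T

IList<string> stringList = new List<string>();
// IList<object> objectList = stringList; // compile error: IList<T> also consumes T
//                                        // (Add(T)), so it has to stay invariant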
How was the addition of the covariance done from a compatibility point of view? Will earlier code still work on later versions of .NET or is recompilation necessary here? What about the other way around?
I think already compiled code will mostly work as is. Some runtime type-checks (is, IsAssignableFrom, ...) will return true where they returned false earlier.
Was previous code using this interface strictly invariant in all cases, or is it possible that certain use cases will behave differently now?
Not sure what you mean by that
The biggest problems are related to overload resolution. Since additional implicit conversions are now possible, a different overload might be chosen.
void DoSomething(IEnumerable<Base> bla);
void DoSomething(object blub);

IEnumerable<Derived> values = ...;
DoSomething(values);   // invariant IE: binds to DoSomething(object)
                       // covariant IE: binds to DoSomething(IEnumerable<Base>)
But of course, if these overloads behave differently, the API is already badly designed.

In Ruby, what is the equivalent to an interface in C#?

I'm currently trying to learn Ruby and I'm trying to understand more about what it offers in terms of encapsulation and contracts.
In C# a contract can be defined using an interface. A class which implements the interface must fulfil the terms within the contract by providing an implementation for each method and property (and maybe other things) defined. The individual class that implements an interface can do whatever it needs within the scope of the methods defined by the contract, so long as it accepts the same types of arguments and returns the same type of result.
Is there a way to enforce this kind of thing in Ruby?
Thanks
A simple example of what I mean in C#:
interface IConsole
{
    int MaxControllers { get; }
    void PlayGame(IGame game);
}

class Xbox360 : IConsole
{
    public int MaxControllers
    {
        get { return 4; }
    }

    public void PlayGame(IGame game)
    {
        InsertDisc(game);
        NavigateToMenuItem();
        Click();
    }
}

class NES : IConsole
{
    public int MaxControllers
    {
        get { return 2; }
    }

    public void PlayGame(IGame game)
    {
        InsertCartridge(game);
        TurnOn();
    }
}
There are no interfaces in Ruby, since Ruby is a dynamically typed language. Interfaces are basically used to make different classes interchangeable without breaking type safety. Your code can work with every console as long as it behaves like a console, which in C# means it implements IConsole. "Duck typing" is the keyword you can use to catch up with the dynamic languages' way of dealing with this kind of problem.
Further, you can and should write unit tests to verify the behavior of your code. Every object has a respond_to? method you can use in your assertions.
Ruby has Interfaces just like any other language.
Note that you have to be careful not to conflate the concept of the Interface, which is an abstract specification of the responsibilities, guarantees and protocols of a unit, with the concept of the interface, which is a keyword in the Java, C# and VB.NET programming languages. In Ruby, we use the former all the time, but the latter simply doesn't exist.
It is very important to distinguish the two. What's important is the Interface, not the interface. The interface tells you pretty much nothing useful. Nothing demonstrates this better than the marker interfaces in Java, which are interfaces that have no members at all: just take a look at java.io.Serializable and java.lang.Cloneable; those two interfaces mean very different things, yet they have the exact same signature.
So, if two interfaces that mean different things have the same signature, what exactly is the interface even guaranteeing you?
Another good example:
interface ICollection<T> : IEnumerable<T>, IEnumerable
{
    void Add(T item);
}
What is the Interface of System.Collections.Generic.ICollection<T>.Add?
that the length of the collection does not decrease
that all the items that were in the collection before are still there
that item is in the collection
And which of those actually shows up in the interface? None! There is nothing in the interface that says that the Add method must even add at all; it might just as well remove an element from the collection.
This is a perfectly valid implementation of that interface:
class MyCollection<T> : ICollection<T>
{
    public void Add(T item)
    {
        Remove(item);
    }

    // (remaining ICollection<T> members omitted for brevity)
}
Another example: where in java.util.Set<E> does it actually say that it is, you know, a set? Nowhere! Or more precisely, in the documentation. In English.
In pretty much all cases of interfaces, both from Java and .NET, all the relevant information is actually in the docs, not in the types. So, if the types don't tell you anything interesting anyway, why keep them at all? Why not stick just to documentation? And that's exactly what Ruby does.
Note that there are other languages in which the Interface can actually be described in a meaningful way. However, those languages typically don't call the construct which describes the Interface "interface", they call it type. In a dependently-typed programming language, you can for example express the properties that a sort function returns a collection of the same length as the original, that every element which is in the original is also in the sorted collection and that no bigger element appears before a smaller element.
So, in short: Ruby does not have an equivalent to a Java interface. It does, however, have an equivalent to a Java Interface, and it's exactly the same as in Java: documentation.
Also, just like in Java, Acceptance Tests can be used to specify Interfaces as well.
In particular, in Ruby, the Interface of an object is determined by what it can do, not what class it is, or what module it mixes in. Any object that has a << method can be appended to. This is very useful in unit tests, where you can simply pass in an Array or a String instead of a more complicated Logger, even though Array and Logger do not share an explicit interface apart from the fact that they both have a method called <<.
Another example is StringIO, which implements the same Interface as IO and thus a large portion of the Interface of File, but without sharing any common ancestor besides Object.
Interfaces are usually introduced to statically typed OO languages in order to make up for the lack of multiple inheritance. In other words, they are more of a necessary evil than something useful per se.
Ruby, on the other hand:
Is a dynamically typed language with "duck typing", so if you want to call method foo on two objects, they don't need to inherit from the same ancestor class or implement the same interface.
Supports multiple inheritance through the concept of mixins, so again there is no need for interfaces here.
Ruby doesn't really have them; interfaces and contracts generally live more in the static world, rather than the dynamic.
There is a gem called Handshake that can implement informal contracts, if you really need it.
Ruby uses the concept of Modules as a stand-in (kinda) for interfaces. Design Patterns in Ruby has a lot of really great examples on the differences between the two concepts and why ruby chooses the more flexible alternative to interfaces.
http://www.amazon.com/Design-Patterns-Ruby-Russ-Olsen/dp/0321490452
Jorg has a good point: Ruby has Interfaces, just not the keyword. In reading some of the replies, I think this is a negative of dynamic languages. Instead of enforcing an interface through the language, you must write unit tests instead of having a compiler catch methods that aren't implemented. It also makes methods harder to reason about, as you have to hunt down what an object is when you are trying to call it.
Take as an example:
def my_func(options)
  ...
end
If you look at the function, you have no clue what options is and what methods or properties of it will be called, without hunting for the unit tests, the other places it is called, and even looking inside the method. Worse yet, the method may not even use those options but pass them on to further methods. Why write unit tests when this should have been caught by a compiler? The problem is that you must write code differently to compensate for this downside of dynamic languages.
There is one upside to this, though: dynamic programming languages are FAST for writing a piece of code. I don't have to write any interface declaration, and later I can add new methods and parameters without going back to the interface to expose them. The trade-off is writing speed for maintainability.

Will C#4 allow "dynamic casting"? If not, should C# support it?

I don't mean dynamic casting in the sense of casting a lower interface or base class to a more derived class; I mean taking an interface definition that I've created, and then dynamically casting to that interface a different object that is NOT derived from that interface but supports all the calls.
For example,
interface IMyInterface
{
    bool Visible { get; }
}
TextBox myTextBox = new TextBox();
IMyInterface i = (dynamic<IMyInterface>)myTextBox;
This could be achieved at compile time for known types, and runtime for instances declared with dynamic. The interface definition is known, as is the type (in this example) so the compiler can determine if the object supports the calls defined by the interface and perform some magic for us to have the cast.
My guess is that this is not supported in C#4 (I was unable to find a reference to it), but I'd like to know for sure. And if it isn't, I'd like to discuss if it should be included in a future variant of the language or not, and the reasons for and against. To me, it seems like a nice addition to enable greater polymorphism in code without having to create whole new types to wrap existing framework types.
Update
Lest someone accuse me of plagiarism, I was not aware of Jon Skeet having already proposed this. However, nice to know we thought of exceedingly similar syntax, which suggests it might be intuitive at least. Meanwhile, "have an original idea" remains on my bucket list for another day.
I think Jon Skeet has had such a proposal (http://msmvps.com/blogs/jon_skeet/archive/2008/10/30/c-4-0-dynamic-lt-t-gt.aspx), but so far, I haven't heard that C# 4.0 is going to have it.
I think that's problematic. You are introducing coupling between two classes which are not coupled.
Consider the following code.
public interface IFoo
{
    int MethodA();
    int MethodB();
}

public class Bar
{
    int MethodA();
    int MethodB();
}

public class SomeClass
{
    int MethodFoo(IFoo someFoo);
}
should this then be legal?
int blah = someClass.MethodFoo((dynamic<IFoo>)bar);
It seems like it should be legal, because the compiler should be able to dynamically type bar as something that implements IFoo.
However, at this point you are coupling IFoo and Bar through a call in a completely separate part of your code.
If you edit Bar because it no longer needs MethodB, suddenly someClass.MethodFoo doesn't work anymore, even though Bar and IFoo are not related.
In the same way, if you add MethodC() to IFoo, your code would break again, even though IFoo and Bar are ostensibly not related.
The fact is, although this would be useful in select cases where there are similarities amongst objects that you do not control, there is a reason that interfaces have to be explicitly attached to objects, and the reason is so that the compiler can guarantee that the object implements it.
There is no need for C# to support this, as it can be implemented very cleanly as library.
I've seen three or four separate implementations (I started writing one myself before I found them). Here's the most thorough treatment I've seen:
http://bartdesmet.net/blogs/bart/archive/2008/11/10/introducing-the-c-ducktaper-bridging-the-dynamic-world-with-the-static-world.aspx
It will probably be even easier to implement once the DLR is integrated into the runtime.
Because the wrapper/forwarder class for a given interface can be generated once and then cached, and then a given object of unknown type can be wrapped once, there is a lot of scope for caching of call sites, etc. so the performance should be excellent.
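To make the wrapper/forwarder idea concrete, here is a hand-written version of what such a library generates and caches at run time, reusing IMyInterface and TextBox from the question (the class name is made up):

// Hand-written equivalent of a generated duck-typing forwarder.
class TextBoxVisibleForwarder : IMyInterface
{
    private readonly TextBox _inner;

    public TextBoxVisibleForwarder(TextBox inner)
    {
        _inner = inner;
    }

    // Each interface member simply forwards to the structurally matching member.
    public bool Visible
    {
        get { return _inner.Visible; }
    }
}

// Usage: wrap once, then use the object through the interface.
IMyInterface i = new TextBoxVisibleForwarder(new TextBox());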
In contrast, I think the dynamic keyword, which is a language feature, and a hugely complex one, is an unnecessary and potentially disastrous digression, shoe-horned into a language that previously had a very clear static typing philosophy, which gave it an obvious direction for future improvement. They should have stuck to that and made the type inference work better and better until typing became more invisible. There are so many areas where they could evolve the language, without breaking existing programs, and yet they don't, simply due to resource constraints (e.g. the reason var can't be used in more places is because they would have to rewrite the compiler and they don't have time).
They are still doing good stuff in C# 4.0 (the variance features), but there is so much else that could be done to make the type system smarter, more automatic, more powerful at detecting problems at compile time. Instead, we're essentially getting a gimmick.
The open-source framework Impromptu-Interface does this using C# 4 and the DLR.
using ImpromptuInterface;

interface IMyInterface
{
    bool Visible { get; }
}

TextBox myTextBox = new TextBox();
IMyInterface i = myTextBox.ActLike<IMyInterface>();
Since it uses the DLR, it will also work with ExpandoObject and DynamicObject.
