I was just reading this answer:
Which overload is called and how?
by Jon Skeet, and I just don't understand how overload resolution can be done at compile time. How is that possible? You don't know the type of the object until run time, do you?
I always thought that all method calls were bound at run time (late binding).
What are the exceptions to that?
I'll give an example:
public void DoWork(IFoo foo)
public void DoWork(Bar bar)
IFoo a = new Bar();
DoWork(a);
Which method gets called here and why?
When a method call is encountered by the compiler the types of all the parameters are known because C# is a statically-typed language: all expressions and variables are of a particular type and that type is definite and known at compile time.
This is ignoring dynamic which slightly complicates things.
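A minimal sketch of the contrast (the names here are invented for illustration): the compile-time type of the argument picks the overload, except when the argument is dynamic, in which case resolution is deferred to run time.

```csharp
using System;

class OverloadDemo
{
    public static string Describe(object o) => "object overload";
    public static string Describe(string s) => "string overload";

    static void Main()
    {
        object boxed = "hello";                 // compile-time type: object
        Console.WriteLine(Describe(boxed));     // object overload (bound at compile time)

        dynamic deferred = "hello";             // resolution deferred to run time
        Console.WriteLine(Describe(deferred));  // string overload (bound at run time)
    }
}
```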
Edit: This is a response to your edit. For clarity, I translated your code into the following:
interface IFoo { }
class Bar : IFoo { }
class Test {
public void DoWork(IFoo a) { }
public void DoWork(Bar b) { }
}
class Program {
static void Main(string[] args) {
IFoo a = new Bar();
Test t = new Test();
t.DoWork(a);
}
}
You are asking which method is called here (Test.DoWork(IFoo) or Test.DoWork(Bar)) when invoked as t.DoWork(a) in Main. The answer is that Test.DoWork(IFoo) is called. This is basically because the parameter is typed as an IFoo. Let's go to the specification (§7.4.3.1):
A function member is said to be an applicable function member with respect to an argument list A when all of the following are true:
The number of arguments in A is identical to the number of parameters in the function member declaration.
For each argument in A, the parameter passing mode of the argument (i.e., value, ref, or out) is identical to the parameter passing mode of the corresponding parameter, and
for a value parameter or a parameter array, an implicit conversion (§6.1) exists from the argument to the type of the corresponding parameter, or
for a ref or out parameter, the type of the argument is identical to the type of the corresponding parameter. After all, a ref or out parameter is an alias for the argument passed.
The issue here (see the conversion requirement for value parameters quoted above) is that there is no implicit conversion from IFoo to Bar. Therefore, the method Test.DoWork(Bar) is not an applicable function member. Clearly Test.DoWork(IFoo) is an applicable function member and, as the only choice, will be chosen by the compiler as the method to invoke.
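To make that concrete, here is a compilable version of the example (with the methods returning strings so the chosen overload is visible); the only way to reach DoWork(Bar) is to change the compile-time type of the argument with a cast:

```csharp
using System;

interface IFoo { }
class Bar : IFoo { }

class Test
{
    public string DoWork(IFoo a) => "DoWork(IFoo)";
    public string DoWork(Bar b) => "DoWork(Bar)";
}

class Program
{
    static void Main()
    {
        IFoo a = new Bar();
        var t = new Test();
        Console.WriteLine(t.DoWork(a));       // DoWork(IFoo): no implicit IFoo -> Bar conversion
        Console.WriteLine(t.DoWork((Bar)a));  // DoWork(Bar): the cast changes the compile-time type
    }
}
```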
I think you are confusing overload resolution with virtual method dispatch. Yes, the runtime type of the object will determine which method to run but the method has already been bound by the compiler. This is a type-safe way of allowing polymorphic behavior without all the flexibility and danger of true late-binding.
The compiler will determine which method overload to call based on the types of the arguments as written in your code.
Virtual methods (including calls through an interface) are dispatched on the type of the receiver at runtime (late binding).
Nonvirtual methods are resolved at compile time, using the compile-time type of the reference. This is why shadowing (the new modifier) results in different behaviour depending on whether you call the shadowed method through a base class reference or a derived class reference.
In all cases, overload resolution uses the compile-time types of the arguments. Only the receiver -- that is, the x in a call of the form x.SomeMethod(y, z) -- is considered for late binding. Thus, if y is typed as object, and the only overloads are string y and int y, the compiler will error, even if at runtime y would actually be a string or int -- because it's only considering the compile-time type (the declared type of the variable).
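A small sketch of both rules together, assuming a base/derived pair invented for illustration:

```csharp
using System;

class Base
{
    public virtual string Virt() => "Base.Virt";   // virtual: runtime type decides
    public string NonVirt() => "Base.NonVirt";     // non-virtual: compile-time type decides
}

class Derived : Base
{
    public override string Virt() => "Derived.Virt";
    public new string NonVirt() => "Derived.NonVirt";  // shadows, does not override
}

class Program
{
    static void Main()
    {
        Base b = new Derived();
        Console.WriteLine(b.Virt());     // Derived.Virt  (virtual dispatch at run time)
        Console.WriteLine(b.NonVirt());  // Base.NonVirt  (bound to the reference's type)

        Derived d = (Derived)b;
        Console.WriteLine(d.NonVirt());  // Derived.NonVirt (same object, different reference type)
    }
}
```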
The only thing decided at run time is virtual dispatch: the call goes to the override appropriate for the runtime type of your object.
In the following code I am trying to accomplish default type deduction in a factory pattern. I would like to call the factory method with an arbitrary combination of arguments and have the generic type parameters inferred from the default parameters for the omitted arguments. This attempt at that, however, results in error CS1750. CS1750 is mentioned in a handful of places in the roslyn repo, but reviewing those didn't reveal much about the cause in this case.
The full text of the error is as follows:
(parameter) S a = new S()
A value of type 'S' cannot be used as a default parameter because there are no standard conversions to type 'S' [TypeInference]csharp(CS1750)
public class T {}
public class T0 : T {}
public class T1 : T {}
// T2{} .. T9{}
public class T10 : T {}
public struct S<T> {}
public class X {} // X is a complex object strongly-typed with T1..T10
public static class C {
public static X Factory<A,B/*,C..J*/>(
S<T0> s = new S<T0>(), // this is fine
S<A> a = new S<T0>(), // ERROR: no standard conversions to type...
S<B> b = new S<T0>() // ...'S<A>' [TypeInference]csharp(CS1750)
//S<C> c,
//...
//S<J> j
)
where A : T
where B : T
=> throw new System.NotImplementedException();
static void usage1() {
/* I'd like to be able to omit an arbitrary portion
of the factory arguments and have type inference use
the default values.*/
Factory(b : new S<T1>());
}
// simplified example
public static void Foo<A>(A a) {}
public static void Bar<A>(A a=42) {} // ERROR: CS1750
static void usage2() {
Foo(42); // this is fine
}
}
I have been trying to understand why CS1750 is raised. According to the 5th edition of the spec
15.6.2 Method Parameters...
The expression in a default-argument shall be one of the following:
...
an expression of the form new S() where S is a value type
new S<T0>() seems to meet this criterion. Indeed, the parameter declaration S<T0> s = new S<T0>() doesn't raise an error.
Reading (admittedly without completely absorbing) the spec's type inference section it wasn't obvious why default values wouldn't be considered during type inference. The language in that section even seems to be careful to distinguish optional parameters with and without corresponding arguments. For example, in this sentence a missing optional argument is excluded as cause for inference failure:
12.6.3 Type Inference...
If ... there is a non-optional parameter with no corresponding argument, then inference immediately fails.
Rather than rule out default parameters, this seems more like a weak suggestion that type inference could be based on them.
Why is the compiler trying to convert S<T0> to S<A> instead of inferring A to be T0?
Does the spec prohibit type inference based on default parameters?
The answer to your question 1, I think, is 'the compiler doesn't work that way'. The compiler expects, as I did in my comment, that the programmer provides a valid type for a default parameter based on the current type constraints. The compiler won't infer a generic parameter for you based on the type of a default parameter. Here the only constraints are that both A and T0 descend from T, so in general we can't expect S<A> a = new S<T0>() to work. I'm pretty sure that's what the error means.
Your question 2 is really 'could it work that way?', I think. I think it could theoretically, but there are some problems. Consider what happens if, in your example, someone makes the call C.Factory<T1, T1>(), thus explicitly setting A to be of type T1 without providing any parameters. Now we've got an error case when we try to assign new S<T0>() to our parameter a of type S<A> = S<T1>, since S<T0> is not assignable to S<T1>. How should the compiler handle that? Since A is only constrained to be a descendant of T, and Factory has an overload that takes no parameters, it looks like the call is valid. So any exception is going to be confusing at best, and probably arguably wrong. But we don't want to infer A to be T0 after the programmer has explicitly asked for a different type.
What is the difference between the following two method signatures:
public static void test<T>()
vs
public static void test(Type t)
I know that the second one allows a type to be passed to the method but I am not clear on exactly what the first one is doing differently.
With the former, your type must be known at compile time, and you will be able to use T within the method as a stand-in for the name of the type for things like variable declarations or casting, as if you were writing normal code.
With the latter the type might not be known until runtime, but you will have to use reflection or dynamic objects to accomplish certain things that would be much easier (and type-safe) with the generic.
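A hedged sketch of that practical difference (the MakeList name is invented for illustration): the generic version keeps static typing, while the Type-based version must fall back to object plus reflection, and callers have to cast.

```csharp
using System;
using System.Collections.Generic;

static class TypeDemo
{
    // Generic: T is fixed at compile time, so it can appear in the return type.
    public static List<T> MakeList<T>() => new List<T>();

    // Run-time Type: the result can only be typed as object, and the
    // concrete list type must be built via reflection.
    public static object MakeList(Type t) =>
        Activator.CreateInstance(typeof(List<>).MakeGenericType(t));

    static void Main()
    {
        List<int> a = MakeList<int>();              // statically typed
        var b = (List<int>)MakeList(typeof(int));   // cast required
        Console.WriteLine(a.GetType() == b.GetType());  // True
    }
}
```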
The first one requires the type (T) to be specified at compile time, and is a generic method.
The second one allows you to specify a type at runtime, and is a non-generic method.
The first one is a generic method. This means a number of things:
The type is resolved at compile time. You can call test<int> or test<String>, but you cannot call test<t> where t is a variable holding a Type.
Since the type is resolved at compile time, you can use this type in other parts of the method, e.g. as the return type, as the type of a parameter or as the type of a variable inside the method. Example:
public static T test<T>(T param) { ... }
int x = test(myString); // Causes a compile-time error
While refactoring some code, I came across this strange compile error:
The constructor call needs to be dynamically dispatched, but cannot be because it is part of a constructor initializer. Consider casting the dynamic arguments.
It seems to occur when trying to call base methods/constructors that take dynamic arguments. For example:
class ClassA
{
public ClassA(dynamic test)
{
Console.WriteLine("ClassA");
}
}
class ClassB : ClassA
{
public ClassB(dynamic test)
: base(test)
{
Console.WriteLine("ClassB");
}
}
It works if I cast the argument to object, like this:
public ClassB(dynamic test)
: base((object)test)
So, I'm a little confused. Why do I have to put this nasty cast in - why can't the compiler figure out what I mean?
The constructor chain has to be determined for certain at compile-time - the compiler has to pick an overload so that it can create valid IL. Whereas normally overload resolution (e.g. for method calls) can be deferred until execution time, that doesn't work for chained constructor calls.
EDIT: In "normal" C# code (before C# 4, basically), all overload resolution is performed at compile-time. However, when a member invocation involves a dynamic value, that is resolved at execution time. For example consider this:
using System;
class Program
{
static void Foo(int x)
{
Console.WriteLine("int!");
}
static void Foo(string x)
{
Console.WriteLine("string!");
}
static void Main(string[] args)
{
dynamic d = 10;
Foo(d);
}
}
The compiler doesn't emit a direct call to Foo here - it can't, because in the call Foo(d) it doesn't know which overload it would resolve to. Instead it emits code which does a sort of "just in time" mini-compilation to resolve the overload with the actual type of the value of d at execution time.
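As a rough sketch (not the actual DLR machinery, which generates and caches call sites), the emitted call behaves something like this reflection-based dispatch:

```csharp
using System;
using System.Reflection;

class MiniDispatch
{
    public static string Foo(int x) => "int!";
    public static string Foo(string x) => "string!";

    static void Main()
    {
        object d = 10;
        // Pick the overload from the run-time type of the value, the way a
        // dynamic call site would (the real DLR also caches this decision).
        MethodInfo m = typeof(MiniDispatch).GetMethod("Foo", new[] { d.GetType() });
        Console.WriteLine((string)m.Invoke(null, new[] { d }));  // int!
    }
}
```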
Now that doesn't work for constructor chaining, as valid IL has to contain a call to a specific base class constructor. (I don't know whether the dynamic version can't even be expressed in IL, or whether it can, but the result would be unverifiable.)
You could argue that the C# compiler should be able to tell that there's only actually one visible constructor which can be called, and that constructor will always be available... but once you start down that road, you end up with a language which is very complicated to specify. The C# designers usually take the position of having simpler rules which occasionally aren't as powerful as you'd like them to be.
Suppose I have two classes:
class a
{
public void sayGoodbye() { Console.WriteLine("Tschüss"); }
public virtual void sayHi() { Console.WriteLine("Servus"); }
}
class b : a
{
new public void sayGoodbye() { Console.WriteLine("Bye"); }
override public void sayHi() { Console.WriteLine("Hi"); }
}
If I call a generic method that requires type 'T' to be derived from class 'a':
void call<T>() where T : a
Then inside that method, when I call methods on an instance of type 'T', the method calls are bound to type 'a', as if the instance had been cast to 'a':
call<b>();
...
void call<T>() where T : a
{
T o = Activator.CreateInstance<T>();
o.sayHi(); // writes "Hi" (virtual method)
o.sayGoodbye(); // writes "Tschüss"
}
By using reflection I am able to get the expected results:
call<b>();
...
void call<T>() where T : a
{
T o = Activator.CreateInstance<T>();
// Reflections works fine:
typeof(T).GetMethod("sayHi").Invoke(o, null); // writes "Hi"
typeof(T).GetMethod("sayGoodbye").Invoke(o, null); // writes "Bye"
}
Also, by using an interface for class 'a' I get the expected results:
interface Ia
{
void sayGoodbye();
void sayHi();
}
...
class a : Ia // 'a' implements 'Ia'
...
call<b>();
...
void call<T>() where T : Ia
{
T o = Activator.CreateInstance<T>();
o.sayHi(); // writes "Hi"
o.sayGoodbye(); // writes "Bye"
}
The equivalent non-generic code also works fine:
call();
...
void call()
{
b o = Activator.CreateInstance<b>();
o.sayHi(); // writes "Hi"
o.sayGoodbye(); // writes "Bye"
}
Same thing if I change the generic constraint to 'b':
call<b>();
...
void call<T>() where T : b
{
T o = Activator.CreateInstance<T>();
o.sayHi(); // writes "Hi"
o.sayGoodbye(); // writes "Bye"
}
It seems that the compiler is generating method calls to the base class specified in the constraint, so I guess I understand what is happening, but this is not what I expected. Is this really the correct result?
Generics aren't C++ Templates
Generics are a general type: the compiler outputs only one generic class (or method). Generics don't work by replacing T with the actual type provided at compile time, which would require compiling a separate instance per type argument; instead, the compiler makes one type with empty "blanks". Within the generic type the compiler then resolves actions on those "blanks" without knowledge of the specific type arguments. It thus uses the only information it has: the constraints you provide, plus global facts such as everything-is-an-object.
So when you say...
void call<T>() where T : a {
T o = Activator.CreateInstance<T>();
o.sayGoodbye();//nonvirtual
...then type T of o is only relevant at compile time - the runtime type may be more specific. And at compile time, T is essentially a synonym for a - after all, that's all the compiler knows about T! So consider the following completely equivalent code:
void call<T>() where T : a {
a o = Activator.CreateInstance<T>();
o.sayGoodbye();//nonvirtual
Now, calling a non-virtual method ignores the run-time type of a variable. As expected, you see that a.sayGoodbye() is called.
By comparison, C++ templates do work the way you expect: the compiler actually expands the template at compile time, rather than making a single definition with "blanks", so a specific template instance can use methods only available to its specialization. As a matter of fact, even at run time the CLR avoids instantiating separate copies of a generic type's code where it can: since all the calls are either virtual (making specialized code unnecessary) or non-virtual to a specific class (again, no point in specializing), the CLR can use the same bytes - probably even the same x86 code - to cover multiple type arguments. This isn't always possible (e.g. for value types), but for reference types it saves memory and JIT time.
Two more things...
Firstly, your call method uses Activator - that's not necessary; there's a special constraint, new(), you can use instead that does the same thing but with compile-time checking:
void call<T>() where T : a, new() {
T o = new T();
o.sayGoodbye();
Attempting to compile call<TypeWithoutDefaultConstructor>() will then fail at compile time with a human-readable message.
Secondly, it may seem as though generics are largely pointless if they're just blanks - after all, why not simply work on a-typed variables all along? Well, although at compile-time you can't rely on any details a sub-class of a might have within the generic method, you're still enforcing that all T are of the same subclass, which allows in particular the usage of the well-known containers such as List<int> - where even though List<> can never rely on int internals, to users of List<> it's still handy to avoid casting (and related performance and correctness issues).
Generics also allow richer constraints than normal parameters: for example, you can't normally write a method that requires its parameter to be both a subtype of a and IDisposable - but you can have several constraints on a type parameter, and declare a parameter to be of that generic type.
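For instance (the types here are invented for illustration), a single type parameter can require both a base class and an interface at once, which no ordinary parameter type could express:

```csharp
using System;

class Widget
{
    public string Name = "widget";
}

class DisposableWidget : Widget, IDisposable
{
    public bool Disposed;
    public void Dispose() => Disposed = true;
}

static class ConstraintDemo
{
    // T must be BOTH a Widget and IDisposable; a plain parameter
    // would have to pick one type or the other.
    public static void UseAndDispose<T>(T item) where T : Widget, IDisposable
    {
        Console.WriteLine(item.Name);  // Widget member is available
        item.Dispose();                // IDisposable member is available
    }

    static void Main()
    {
        var w = new DisposableWidget();
        UseAndDispose(w);
        Console.WriteLine(w.Disposed);  // True
    }
}
```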
Finally, generics may have run-time differences. Your call to Activator.CreateInstance<T>() is a perfect illustration of that, as would be the simple expression typeof(T) or if(myvar is T).... So, even though in some sense the compiler "thinks" of the return type of Activator.CreateInstance<T>() as a at compile time, at runtime the object will be of type T.
sayGoodbye is not virtual.
The compiler only "knows" T is of type a. It will call sayGoodbye on a.
On type b you redefine sayGoodbye, but the compiler is not aware of type b; it cannot know all derivatives of a. You can tell the compiler that sayGoodbye may be overridden by making it virtual. This causes the compiler to emit a virtual call, which is dispatched on the runtime type.
Method hiding is not the same as polymorphism, as you've seen. You can always call the A version of the method simply by downcasting from B to A.
With a generic method, with T constrained to type A, there is no way for the compiler to know whether it could be some other type, so it would be very unexpected, in fact, for it to use a hiding method rather than the method defined on A. Method hiding is for convenience or interoperability; it has nothing to do with substituting behavior; for that, you need polymorphism and virtual methods.
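In other words (using the classes from the question, with the methods returning strings so the difference is visible), a cast is all it takes to reach the hidden method, while an override cannot be bypassed that way:

```csharp
using System;

class A
{
    public string SayGoodbye() => "Tschüss";    // non-virtual: can be hidden
    public virtual string SayHi() => "Servus";  // virtual: can be overridden
}

class B : A
{
    public new string SayGoodbye() => "Bye";
    public override string SayHi() => "Hi";
}

class Program
{
    static void Main()
    {
        B b = new B();
        Console.WriteLine(b.SayGoodbye());       // Bye
        Console.WriteLine(((A)b).SayGoodbye());  // Tschüss: the cast selects the hidden method
        Console.WriteLine(((A)b).SayHi());       // Hi: the override survives the cast
    }
}
```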
EDIT:
I think the fundamental confusion here is actually Generics vs. C++ style templates. In .NET, there is only one base of code for the generic type. Creating a specialized generic type does not involve emitting new code for the specific type. This is different from C++, where a template specialization involves actually creating and compiling additional code, so that it will be truly specialized for the type specified.
The new keyword is kind of a hack in C#. It contradicts polymorphism, because the method called depends on the type of the reference you hold.
Method overloading allows us to define many methods with the same name but with a different set of parameters ( thus with the same name but different signature ).
Are these two methods overloaded?
class A
{
public static void MyMethod<T>(T myVal) { }
public static void MyMethod(int myVal) { }
}
EDIT:
Shouldn't statement A<int>.MyMethod(myInt); throw an error, since constructed type A<int> has two methods with the same name and same signature?
Are the two methods overloaded?
Yes.
Shouldn't statement A<int>.MyMethod(myInt); throw an error, since constructed type A<int> has two methods with the same signature?
The question doesn't make sense; A is not a generic type as you have declared it. Perhaps you meant to ask:
Should the statement A.MyMethod(myInt); cause the compiler to report an error, since there are two ambiguous candidate methods?
No. As others have said, overload resolution prefers the non-generic version in this case. See below for more details.
Or perhaps you meant to ask:
Should the declaration of type A be illegal in the first place, since in some sense it has two methods with the same signature, MyMethod and MyMethod<int>?
No. The type A is perfectly legal. The generic arity is part of the signature. So there are not two methods with the same signature because the first has generic arity zero, the second has generic arity one.
Or perhaps you meant to ask:
class G<T>
{
public static void M(T t) {}
public static void M(int t) {}
}
Generic type G<T> can be constructed such that it has two methods with the same signature. Is it legal to declare such a type?
Yes, it is legal to declare such a type. It is usually a bad idea, but it is legal.
You might then retort:
But my copy of the C# 2.0 specification as published by Addison-Wesley states on page 479 "Two function members declared with the same names ... must have parameter types such that no closed constructed type could have two members with the same name and signature." What's up with that?
When C# 2.0 was originally designed that was the plan. However, then the designers realized that this desirable pattern would be made illegal:
class C<T>
{
public C(T t) { ... } // Create a C<T> from a given T
public C(Stream s) { ... } // Deserialize a C<T> from disk
}
And now we'd have to say: sorry, buddy, because you could say C<Stream>, causing the two constructors to unify, the whole class is illegal. That would be unfortunate. Obviously it is unlikely that anyone will ever construct this thing with Stream as the type argument!
Unfortunately, the spec went to press before the text was updated to the final version. The rule on page 479 is not what we implemented.
Continuing to pose some more questions on your behalf:
So what happens if you call G<int>.M(123) or, in the original example, if you call A.MyMethod(123)?
When overload resolution is faced with two methods that have identical signatures due to generic construction, the one that results from generic construction is considered to be "less specific" than the "natural" one. A less specific method loses to a more specific method.
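A compilable sketch of that preference (here M returns a string so the chosen overload is visible):

```csharp
using System;

class G<T>
{
    public static string M(T t) => "M(T)";
    public static string M(int t) => "M(int)";
}

class Program
{
    static void Main()
    {
        // In G<int> both methods have the effective signature M(int);
        // the non-substituted ("natural") overload wins.
        Console.WriteLine(G<int>.M(123));     // M(int)
        Console.WriteLine(G<string>.M("x"));  // M(T)
        Console.WriteLine(G<string>.M(5));    // M(int)
    }
}
```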
So why is it a bad idea, if overload resolution works?
The situation with A.MyMethod isn't too bad; it is usually pretty easy to unambiguously work out which method is intended. But the situation with G<int>.M(123) is far worse. The CLR rules make this sort of situation "implementation defined behaviour" and therefore any old thing can happen. Technically, the CLR could refuse to verify a program that constructs type G<int>. Or it could crash. In point of fact it does neither; it does the best it can with the bad situation.
Are there any examples of this sort of type construction causing truly implementation-defined behaviour?
Yes. See these articles for details:
https://ericlippert.com/2006/04/05/odious-ambiguous-overloads-part-one/
https://ericlippert.com/2006/04/06/odious-ambiguous-overloads-part-two/
Yes. MyMethod(int myVal) will be called when the type of the argument is an int; the generic overload will be called for all other arguments, even when the argument is implicitly convertible to (or is a derived class of) the hardcoded type. Overload resolution goes for the best fit, and the generic overload resolves to an exact match at compile time.
Note: You can explicitly invoke the generic overload and use an int by providing the type parameter in the method call, as Steven Sudit points out in his answer.
short s = 1;
int i = s;
MyMethod(s); // Generic
MyMethod(i); // int
MyMethod((int)s); // int
MyMethod(1); // int
MyMethod<int>(1); // Generic
MyMethod(1.0); // Generic
// etc.
Yes, they are. They will allow code as such:
A.MyMethod("a string"); // calls the generic version
A.MyMethod(42); // calls the int version
Yes, they are overloaded. The compiler is supposed to prefer explicit method signatures against generic methods if they are available. Beware, however, that if you can avoid this kind of overload you probably should. There have been bug reports with respect to this sort of overload and unexpected behaviors.
https://connect.microsoft.com/VisualStudio/feedback/details/522202/c-3-0-generic-overload-call-resolution-from-within-generic-function
Yes. They have the same name "MyMethod" but different signatures. The C# specification, however, specifically handles this by saying that the compiler will prefer the non-generic version over the generic version, when both are options.
Yes. Off the top of my head, if you call A.MyMethod(1);, it will always run the second method. You'd have to call A.MyMethod<int>(1); to force it to run the first.