I have found this topic, but it's VB... and there they actually get an error:
vb issue
Here are my method signatures. Notice one has a different return type.
public static bool PopulateRunWithSimpleValueByFieldIdSync(string fieldValue, string fieldId, IViewModel myself, int index)
VS
public static void PopulateRunWithSimpleValueByFieldIdSync(string fieldValue, string fieldId, IViewModel myself, int index = 0, PopulateDoneCallback populateDone = null)
The actual call I was making:
PopulateRunWithSimpleValueByFieldIdSync(date, dtx.DateField, saver, index);
The compiler decided to pick the first method and did not give me an error. Once the first method was removed (it was unused code), the call started resolving to the second method.
Is there an option somewhere to treat this as an error?
You're going to need to use some form of third-party code analysis if you want this flagged at compile time, since the C# language spec defines the current behavior as what should happen.
According to the C# language guide (emphasis mine),
Use of named and optional arguments affects overload resolution in the following ways:
...
If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
You can use a third-party analysis tool to flag this as an error, or use Visual Studio's built-in static code analysis with a custom rule that you implement.
This is by design, according to the spec:
Use of named and optional arguments affects overload resolution in the following ways:

A method, indexer, or constructor is a candidate for execution if each of its parameters either is optional or corresponds, by name or by position, to a single argument in the calling statement, and that argument can be converted to the type of the parameter.

If more than one candidate is found, overload resolution rules for preferred conversions are applied to the arguments that are explicitly specified. Omitted arguments for optional parameters are ignored.

If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
So, no - you can't.
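For illustration, here is a minimal repro of the behavior (a sketch with simplified, hypothetical signatures rather than the asker's actual types):

using System;

public static class Repro
{
    // Overload with no optional parameters.
    public static bool Populate(string value, int index)
    {
        Console.WriteLine("picked the overload without optional parameters");
        return true;
    }

    // Overload with optional parameters.
    public static void Populate(string value, int index = 0, Action done = null)
    {
        Console.WriteLine("picked the overload with optional parameters");
    }

    public static void Main()
    {
        // Both overloads are applicable, so the tie-breaker applies: the
        // candidate that omits no optional parameters wins. No error is raised.
        Populate("x", 1);
    }
}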
Related
I recently noticed that the C# compiler allows method overloads like the following:
public static string foo(string a, bool x = false)
{
return "a";
}
public static string foo(string a)
{
return "b";
}
As far as I have tested, it always returns "b" as long as the second parameter is not given, which makes sense. However, I think the compiler really should not allow this type of overloading. May I ask why this feature is designed like this rather than the compiler giving an error?
While questions like this are fundamentally impossible to answer, since it is impossible to guess the language designers' intentions, I can make a guess.
Optional arguments are handled by transforming the code to inject the argument at the call site. So
public static void Test(int a = 42){...}
...
Test();
would be transformed to
public static void Test(int a){...}
...
Test(42);
by the compiler. From this point on, regular overload resolution can run without conflicts. Why was it designed this way? I have no idea, but common reasons for non-intuitive features are backward compatibility or compatibility with other languages.
For this reason it is important to be very careful when using optional arguments in public APIs: the user of the library will use the default value from the version of the API it was compiled against, not the version it is running against.
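To illustrate the versioning pitfall (a sketch; Greeter and Greet are hypothetical names):

using System;

// Library, version 1:
public static class Greeter
{
    public static void Greet(string name = "world")
    {
        Console.WriteLine($"Hello, {name}!");
    }
}

// Client code compiled against version 1:
// Greeter.Greet();
// is compiled as if it were:
// Greeter.Greet("world");

// If version 2 of the library changes the default to "everyone", a client
// compiled against version 1 still passes "world" until it is recompiled,
// because the constant was baked into the call site.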
I can't speak to why this is part of the design so much as simply explain why you see which overload is favored in your testing.
If you look at the reference documentation, you'll note the following three bullets to describe overload resolution:
A method, indexer, or constructor is a candidate for execution if each of its parameters either is optional or corresponds, by name or by position, to a single argument in the calling statement, and that argument can be converted to the type of the parameter.
If more than one candidate is found, overload resolution rules for preferred conversions are applied to the arguments that are explicitly specified. Omitted arguments for optional parameters are ignored.
If two candidates are judged to be equally good, preference goes to a candidate that doesn't have optional parameters for which arguments were omitted in the call. Overload resolution generally prefers candidates that have fewer parameters.
I would assert that in your test, bullet #3 is most applicable to your observations - because you omit the second argument and either method is equally good, resolution favors your second method and you see "b" returned.
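A quick check of this, using the foo overloads from the question above:

Console.WriteLine(foo("a"));       // "b" - the overload without the optional parameter wins
Console.WriteLine(foo("a", true)); // "a" - only the two-parameter overload is applicable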
I have recently found an interesting behavior of the C# compiler. Imagine an interface like this:
public interface ILogger
{
void Info(string operation, string details = null);
void Info(string operation, object details = null);
}
Now if we do
logger.Info("Create")
The compiler will complain that it does not know which overload to choose (ambiguous invocation...). That seems logical, but when you try this:
logger.Info("Create", null)
It will suddenly have no trouble figuring out that null is a string. Moreover, it seems that the behavior of finding the right overload has changed over time, and I found a bug in old code that had worked before and stopped working because the compiler decided to use another overload.
So I am really wondering why C# does not generate the same error in the second case as it does in the first. It seems very logical to do so, but instead it tries to resolve it to a random overload.
P.S. I don't think it's good to provide such ambiguous interfaces, and I don't recommend it, but legacy is legacy and has to be maintained :)
There was a breaking change introduced in C# 6 that made overload resolution better. Here it is in the list of new features:
Improved overload resolution
There are a number of small improvements to overload resolution, which will likely result in more things just working the way you’d expect them to. The improvements all relate to “betterness” – the way the compiler decides which of two overloads is better for a given argument.
One place where you might notice this (or rather stop noticing a problem!) is when choosing between overloads taking nullable value types. Another is when passing method groups (as opposed to lambdas) to overloads expecting delegates. The details aren’t worth expanding on here – just wanted to let you know!
but instead it tries to resolve it to a random overload.
No, C# doesn't pick overloads randomly; that case is the ambiguous-call error. C# picks the better method. Refer to section 7.5.3.2 Better function member in the C# spec:
7.5.3.2 Better function member
Otherwise, if MP has more specific parameter types than MQ, then MP is better than MQ. Let {R1, R2, …, RN} and {S1, S2, …, SN} represent the uninstantiated and unexpanded parameter types of MP and MQ. MP’s parameter types are more specific than MQ’s if, for each parameter, RX is not less specific than SX, and, for at least one parameter, RX is more specific than SX:
Given that string is more specific than object and that null is implicitly convertible to string, the mystery is solved.
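To see both behaviors side by side, here is a minimal sketch (ConsoleLogger is a hypothetical implementation of the ILogger interface from the question):

using System;

public class ConsoleLogger : ILogger
{
    public void Info(string operation, string details = null)
    {
        Console.WriteLine("string overload: " + operation);
    }

    public void Info(string operation, object details = null)
    {
        Console.WriteLine("object overload: " + operation);
    }
}

// ILogger logger = new ConsoleLogger();
// logger.Info("Create");       // error CS0121: the call is ambiguous
// logger.Info("Create", null); // compiles; prints "string overload: Create"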
I created an extension method:
public static class XDecimal
{
public static decimal Floor(
this decimal value,
int precision)
{
decimal step = (decimal)Math.Pow(10, precision);
return decimal.Floor(step * value) / step;
}
}
Now I try to use it:
(10.1234m).Floor(2)
But the compiler says Member 'decimal.Floor(decimal)' cannot be accessed with an instance reference; qualify it with a type name instead. I understand there is a static decimal.Floor(decimal) method, but it has a different signature. Why is the compiler unable to choose the correct method?
You have two good and correct answers here, but I understand that answers which simply quote the specification are not always that illuminating. Let me add some additional details.
You probably have a mental model of overload resolution that goes like this:
Put all the possible methods in a big bucket -- extension methods, static methods, instance methods, etc.
If there are methods that would be an error to use, eliminate them from the bucket.
Of the remaining methods, choose the unique one that has the best match of argument expressions to parameter types.
Though this is many people's mental model of overload resolution, regrettably it is subtly wrong.
The real model -- and I will ignore generic type inference issues here -- is as follows:
Put all the instance and static methods in a bucket. Virtual overrides are not counted as instance methods.
Eliminate the methods that are inapplicable because the arguments do not match the parameters.
At this point we either have methods in the bucket or we do not. If we have any methods in the bucket at all then extension methods are not checked. This is the important bit right here. The model is not "if normal overload resolution produced an error then we check extension methods". The model is "if normal overload resolution produced no applicable methods whatsoever then we check extension methods".
If there are methods in the bucket then there is some more elimination of base class methods, and finally the best method is chosen based on how well the arguments match the parameters.
If this happens to pick a static method then C# will assume that you meant to use the type name and used an instance by mistake, not that you wish to search for an extension method. Overload resolution has already determined that there is an instance or static method whose parameters match the arguments you gave, and it is going to either pick one of them or give an error; it's not going to say "oh, you probably meant to call this wacky extension method that just happens to be in scope".
I understand that this is vexing from your perspective. You clearly wish the model to be "if overload resolution produces an error, fall back to extension methods". In your example that would be useful, but this behaviour produces bad outcomes in other scenarios. For example, suppose you have something like
mystring.Join(foo, bar);
The error given here is that it should be string.Join. It would be bizarre if the C# compiler said "oh, string.Join is static. The user probably meant to use the extension method that does joins on sequences of characters, let me try that..." and then you got an error message saying that the sequence join operator -- which has nothing whatsoever to do with your code here -- doesn't have the right arguments.
Or worse, if by some miracle you did give it arguments that worked but intended the static method to be called, then your code would be broken in a very bizarre and hard-to-debug way.
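To see that safety system in action, consider a concrete version of the Join example (a sketch; the values are illustrative):

using System.Linq; // the sequence extension methods are in scope

string mystring = "abc";

// error CS0176: Member 'string.Join(string, params string[])' cannot be
// accessed with an instance reference; qualify it with a type name instead
// var broken = mystring.Join(",", new[] { "x", "y" });

// The compiler binds to the static string.Join and reports the mistake
// instead of silently searching for an extension method named Join:
var joined = string.Join(",", new[] { "x", "y" }); // "x,y"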
Extension methods were added very late in the game and the rules for looking them up make them deliberately prefer giving errors to magically working. This is a safety system to ensure that extension methods are not bound by accident.
The process of deciding which method to call has lots of small details described in the C# language specification. The key point applicable to your scenario is that extension methods are considered for invocation only when the compiler cannot find a method to call among the methods of the receiving type itself (i.e. decimal).
Here is the relevant portion of the specification:
The set of candidate methods for the method invocation is constructed. For each method F associated with the method group M:
If F is non-generic, F is a candidate when:
M has no type argument list, and
F is applicable with respect to A (§7.5.3.1).
According to the above, decimal.Floor(decimal) is a valid candidate.
If the resulting set of candidate methods is empty, then further processing along the following steps are abandoned, and instead an attempt is made to process the invocation as an extension method invocation (§7.6.5.2). If this fails, then no applicable methods exist, and a binding-time error occurs.
In your case the set of candidate methods is not empty, so extension methods are not considered.
The signature of decimal.Floor is
public static Decimal Floor(Decimal d);
I'm no specialist in type inference, but I guess that since there is an implicit conversion from int to Decimal, the compiler chooses this as the best-fitting method.
If you change your signature to
public static decimal Floor(
this decimal value,
double precision)
and call it like
(10.1234m).Floor(2d)
it works. But of course a double as precision is somewhat strange.
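Alternatively, since an extension method is just a static method, you can keep the int parameter and invoke it through its declaring class instead (a sketch based on the XDecimal class above):

decimal result = XDecimal.Floor(10.1234m, 2); // 10.12 - no ambiguity with decimal.Floor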
EDIT: A quote from Eric Lippert on the algorithm:
Any method of the receiving type is closer than any extension method.
Floor is a method of the "receiving type" (Decimal). As to why the C# developers implemented it like this, I can make no statement.
Say I have the following methods:
public static void MyCoolMethod(params object[] allObjects)
{
}
public static void MyCoolMethod(object oneAlone, params object[] restOfTheObjects)
{
}
If I do this:
MyCoolMethod("Hi", "test");
which one gets called and why?
It's easy to test - the second method gets called.
As to why - the C# language specification has some pretty detailed rules about how ambiguous function declarations get resolved. There are lots of questions on SO surrounding interfaces, inheritance and overloads with some specific examples of why different overloads get called, but to answer this specific instance:
C# Specification - Overload Resolution
7.5.3.2 Better function member
For the purposes of determining the better function member, a stripped-down argument list A is constructed containing just the argument expressions themselves in the order they appear in the original argument list.

Parameter lists for each of the candidate function members are constructed in the following way:

The expanded form is used if the function member was applicable only in the expanded form.

Optional parameters with no corresponding arguments are removed from the parameter list.

The parameters are reordered so that they occur at the same position as the corresponding argument in the argument list.
And further on...
In case the parameter type sequences {P1, P2, …, PN} and {Q1, Q2, …, QN} are equivalent (i.e. each Pi has an identity conversion to the corresponding Qi), the following tie-breaking rules are applied, in order, to determine the better function member.

If MP is a non-generic method and MQ is a generic method, then MP is better than MQ.

Otherwise, if MP is applicable in its normal form and MQ has a params array and is applicable only in its expanded form, then MP is better than MQ.

Otherwise, if MP has more declared parameters than MQ, then MP is better than MQ. This can occur if both methods have params arrays and are applicable only in their expanded forms.
The last tie-breaking rule quoted above (the candidate with more declared parameters wins when both are applicable only in their expanded forms) is the one that applies in this case. The specification goes into detail about how params arrays are treated in normal and expanded forms, but ultimately the rule of thumb is that the most specific overload, in terms of the number and types of parameters, will be called.
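For illustration, here is a quick way to confirm the resolution, a sketch that simply adds output to the methods from the question:

using System;

public static class Program
{
    public static void MyCoolMethod(params object[] allObjects)
    {
        Console.WriteLine("params-only overload");
    }

    public static void MyCoolMethod(object oneAlone, params object[] restOfTheObjects)
    {
        Console.WriteLine("object + params overload");
    }

    public static void Main()
    {
        MyCoolMethod("Hi", "test"); // prints "object + params overload"
        MyCoolMethod("Hi");         // also prints "object + params overload"
    }
}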
The second one; the compiler will first try to resolve against explicitly declared parameters before falling back on the params collection.
This overload is tricky...
MyCoolMethod("Hi", "test") obviously calls the 2nd overload, but
MyCoolMethod("Hi"); also calls the 2nd overload. I tested this.
Maybe since both of the inputs are objects, the compiler assumes anything passed in will be an array of objects and completely ignores the 1st overload.
It probably has to do with the Better function member resolution mentioned by womp
http://msdn.microsoft.com/en-us/library/aa691338(v=VS.71).aspx
Suppose I have an existing assembly, and some of the classes have overloaded methods, with default behavior or values assumed for some of those overloads. I think this is a pretty typical pattern:
Type2 _defaultValueForParam2 = foo;
Type3 _defaultValueForParam3 = bar;
public ReturnType TheMethod(Type1 param1)
{
return TheMethod(param1, _defaultValueForParam2);
}
public ReturnType TheMethod(Type1 param1, Type2 param2)
{
return TheMethod(param1, param2, _defaultValueForParam3);
}
public ReturnType TheMethod(Type1 param1, Type2 param2, Type3 param3)
{
// actually implement the method here.
}
And I understand that optional params in C# are supposed to let me consolidate that down to a single method. If I produce a method with some params marked optional, will it work with downlevel callers of the assembly?
EDIT: By "work" I mean that a downlevel caller (an app compiled with the C# 2.0 or 3.5 compiler) will be able to invoke the method with one, two, or three params, just as if I had used overloads, and the downlevel compiler won't complain.
I do want to refactor and eliminate all the overloads in my library, but I don't want to force the downlevel callers using the refactored library to provide every parameter.
I haven't read the docs on the new language standard, but I would assume that your pre-4.0 callers will have to pass all declared parameters, just as they do now. This is because of the way parameter-passing works.
When you call a method, the arguments are pushed onto the stack. If three 32-bit arguments are passed, then 12 bytes will be pushed onto the stack; if four 32-bit arguments are passed, then 16 bytes will be pushed onto the stack. The number of bytes pushed onto the stack is implicit in the call: the callee assumes that the correct number of arguments was passed.
So if a function takes four 32-bit parameters, it will look on the stack at the 16 bytes preceding the return address of the caller. If the caller has passed only 12 bytes, then the callee will read 4 bytes of whatever was already on the stack before the call was made. It has no way of knowing that all 16 expected bytes were not passed.
This is the way it works now. There's no changing that for existing compilers.
To support optional parameters, one of two things has to happen:
The caller can pass an additional value that explicitly tells the callee how many arguments (or bytes) were pushed onto the stack. The callee can then fill in the default values for any omitted parameters.
The caller can continue passing all declared parameters, substituting default values (which would be read from the callee's metadata) for any optional parameters omitted in the code. The callee then reads all parameter values from the stack, just as it does now.
I suspect that it will be implemented as in (2) above. This is similar to how it's done in C++ (although C++, lacking metadata, requires that the default parameters be specified in the header file); it is more efficient than option (1), since it is all done at compile time and doesn't require an additional value to be pushed onto the stack; and it is the most straightforward implementation. The drawback to option (2) is that, if the default values change, all callers must be recompiled, or else they will continue to pass the old defaults, since those have been compiled in as constants. This is similar to the way public constants work now. Note that option (1) does not suffer this drawback.
Option (1) also does not support named parameter passing, whereby given a function declared like this:
static void Foo(int a, int b = 0, int c = 0){}
it can be called like this:
Foo(1, c: 2);
Option (1) could be modified to allow for this, by making the extra hidden value a bitmap of omitted arguments, where each bit represents one optional parameter. This arbitrarily limits the number of optional parameters a function can accept, although given that this limitation would be at least 32, that may not be such a bad thing. It does make it exceedingly unlikely that this is the actual implementation, however.
Given either implementation, the calling code must understand the mechanics of optional parameters in order to omit any arguments in the call. Additionally, with option (1), an extra hidden parameter must be passed, which older compilers would not even know about, unless it was added as a formal parameter in the metadata.
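To make option (2) concrete, here is what the rewriting implies for the Foo declaration above (and this is, in fact, essentially how C# 4.0 ended up implementing optional parameters):

// Source as written:
Foo(1, c: 2);

// What the call site effectively compiles to under option (2):
// the omitted parameter b is filled in with its default value, 0.
Foo(1, 0, 2);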
In C# 4.0, when an optional parameter is omitted, a default value for that parameter is substituted, to wit:
public void SendMail(string toAddress, string bodyText, bool ccAdministrator = true, bool isBodyHtml = false)
{
// Full implementation here
}
For your downlevel callers, this means that if they use one of the variants that is missing parameters, C# will substitute the default value you have provided for the missing parameter. This article explains the process in greater detail.
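For example, all of the following calls compile against the single SendMail method above (the argument values are illustrative):

SendMail("user@example.com", "Hello");              // ccAdministrator = true, isBodyHtml = false
SendMail("user@example.com", "Hello", false);       // isBodyHtml = false
SendMail("user@example.com", "Hello", false, true);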
Your existing downlevel calls should all still work, but you will have to recompile your clients with the C# 4.0 compiler.
Well, I think that if you replace all three methods with a single method with optional parameters, the code that uses your library will still work, but it will need to be recompiled.