Optional parameters in C# with an interfaces/inheritance dimension added in - c#

I got into a tricky situation using optional parameters in tandem with method overriding and interfaces in C#. I have read this.
I just wanted to add another dimension to the whole picture. There were quite a few code illustrations in that post; I picked the one involving tags by VS1 and added another dimension to it, since it demonstrates interfaces as well as inheritance. Though the code posted over there works and displays the appropriate string as defined in the subclass, base class, and interface, the following code does not.
void Main()
{
    SubTag subTag = new SubTag();
    ITag subTagOfInterfaceType = new SubTag();
    BaseTag subTagOfBaseType = new SubTag();

    subTag.WriteTag();
    subTagOfInterfaceType.WriteTag();
    subTagOfBaseType.WriteTag();
}

public interface ITag
{
    void WriteTag(string tagName = "ITag");
}

public class BaseTag : ITag
{
    public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); }
}

public class SubTag : BaseTag
{
    public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); }
}
And the output is
SubTag
ITag
BaseTag
So it appears that the type of the reference holding the inherited/implemented subclass instance determines which optional parameter value gets picked up.
Has anyone faced a similar issue and found a solution? Or has C# got some workaround for this in later releases? (The one I am using is 4.0.)
Thanks.

The C# team did not like adding optional arguments to the language; this is a rather good demonstration of why.
It helps to understand how they are implemented. The CLR is quite oblivious of the feature; it is implemented by the compiler. If you write a method call with a missing argument, the C# compiler actually generates the code for the method call with that argument, passing the default value. Easy to see with ildasm.exe.
You can see this reflected in the language rules: the optional value must be a constant expression, in other words a value that can be determined at compile time. You cannot use the new keyword or an expression that uses variables. This is required so the compiler can embed the default value in the assembly metadata. It will need it again when it compiles a call to a method with optional arguments that's declared in another assembly.
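As a quick illustration of the constant-expression rule (the names here are made up; CS1736 is the error the compiler raises):

```csharp
class Defaults
{
    const int Limit = 10;

    void Ok(int n = Limit) { }          // fine: const is a compile-time constant
    void AlsoOk(string s = "tag") { }   // fine: string literal

    // Neither of these compiles -- the default is not a constant expression:
    // void Bad(string s = new string('x', 3)) { }     // error CS1736
    // void AlsoBad(int n = DateTime.Now.Second) { }   // error CS1736
}
```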
The friction here is that the compiler cannot figure out which virtual method is actually going to be called at runtime. Dynamic dispatch is a pure runtime feature.
So all it reasonably can go by is the declared type of the object reference. You used all three versions so you get all three default argument values.
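To make the mechanism concrete, here is roughly what the compiler emits for the three calls in the question; a sketch, reusing the question's types, with the defaults baked in from each declared type:

```csharp
SubTag subTag = new SubTag();
ITag subTagOfInterfaceType = new SubTag();
BaseTag subTagOfBaseType = new SubTag();

// The compiler rewrites each call site with the default from the declared type:
subTag.WriteTag("SubTag");
subTagOfInterfaceType.WriteTag("ITag");
subTagOfBaseType.WriteTag("BaseTag");

// Dispatch is still virtual: SubTag's override runs in all three cases,
// but the argument was already chosen at compile time.
```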

I think optional parameters are syntactic sugar only, so they get picked up at compile time. The compiler doesn't know the runtime types of the objects, so the optional values are chosen based on the type of the reference.
If you need this behavior, then you can provide two different methods, one with the parameter and one without, then you can implement the parameterless method differently in different implementations. Of course this only works for fixed parameter layouts.
Update: tested and confirmed, given a void x(int z = 8) method, the method call x() is compiled to x(8), so the parameter values are baked in as constants.
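The two-method approach suggested above might look like this for the question's types (a sketch):

```csharp
public interface ITag
{
    void WriteTag();                // parameterless overload carries the per-type default
    void WriteTag(string tagName);
}

public class BaseTag : ITag
{
    public virtual void WriteTag() { WriteTag("BaseTag"); }
    public virtual void WriteTag(string tagName) { Console.WriteLine(tagName); }
}

public class SubTag : BaseTag
{
    public override void WriteTag() { WriteTag("SubTag"); }
}
```

Because the parameterless overload is virtual, all three reference types in the question's Main now print "SubTag".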

A common way to solve this is to have a special "sentinel" value (often null) which the implementing methods recognise and substitute with the desired value.
For your example, it might look something like this:
public interface ITag
{
    void WriteTag(string tagName = null);
}

public class BaseTag : ITag
{
    public virtual void WriteTag(string tagName = null)
    {
        if (tagName == null)
            tagName = "BaseTag";
        Console.WriteLine(tagName);
    }
}

public class SubTag : BaseTag
{
    public override void WriteTag(string tagName = null)
    {
        if (tagName == null)
            tagName = "SubTag";
        Console.WriteLine(tagName);
    }
}
Then your test code will output
SubTag
SubTag
SubTag
which I think is what you want?

Related

Adding an optional parameter to a library in C# without creating a breaking change

I have a NuGet package; in it I have a method with optional args (v1.0):
public void MyMethod(int a = 0){}
I later created a new version of my package with another optional arg, thinking it wasn't a breaking change (v1.1):
public void MyMethod(int a = 0, int b = 1){}
This package is consumed by another NuGet package which references v1.0. I have both v1.1 and the other NuGet package included in my project. This means that at a binding level I'm using v1.1, and the package using v1.0 is redirecting to the 1.1 dll.
This causes a missing method exception because of the following:
https://stackoverflow.com/a/9884700/1070291
I want to fix my library so that either signature will function. My first thought was this:
public void MyMethod(int a = 0, int b = 1){}
public void MyMethod(int a = 0){ MyMethod(a,1); }
However, this causes an ambiguous method call when it's used elsewhere.
I'm wondering if there is some way to fill in the old method for backward compatibility without creating something ambiguous going forward.
I almost want to mark the old signature with something to instruct the compiler to include it in the assembly but not bind any new call sites to it.
Provide a method with explicitly no arguments. For example:
public void MyMethod(int a = 0, int b = 1) { }
public void MyMethod(int a = 0) { MyMethod(a, 1); }
public void MyMethod() { MyMethod(0, 1); }
This removes the ambiguity of having no arguments passed to MyMethod.
Extending for more arguments means you can write something like:
public void MyMethod(int a, int b, int c) { }
public void MyMethod(int a, int b) { MyMethod(a, b, 2); }
public void MyMethod(int a) { MyMethod(a, 1); }
public void MyMethod() { MyMethod(0); }
Note that we can completely remove optional arguments by explicitly implementing each overload. We do lose some IntelliSense help with default arguments, but this can be somewhat replaced with code comments.
I think that trying to solve an issue introduced by default parameters by adding more overloads with default parameters is just asking for more trouble in the future...
Non-breaking change: what I'd do is to add a new method with a different name and keep the old method as-is. Callers of the old interface won't be affected by this change and new users will have the fully featured new method available.
public void MyMethod(int a = 0) {
    MyBetterMethod(a, 1);
}

public void MyBetterMethod(int a, int b) { }
Migrate from old syntax: mark the old method [Obsolete] (and remove it in a future version). Do not forget to suggest the right method to use in the warning message.
Keep it as a warning for a few versions:
[Obsolete("This method is obsolete, use MyBetterMethod() instead.")]
public void MyMethod(int a = 0){}
With your next major release make it a compile-time error:
[Obsolete("This method is obsolete, use MyBetterMethod() instead.", true)]
public void MyMethod(int a = 0){}
Later you may consider removing that code definitively and breaking binary compatibility. The life-cycle is better described in this Eric Lippert's SE answer (note that for simple deployments, if you do not need to keep binary compatibility, he suggests removing the code as soon as the deprecation becomes a compile-time error).
Favor usage of the new syntax: hide the obsolete method to minimize the chance that new users will call it instead of the new one:
[EditorBrowsable(EditorBrowsableState.Never)]
[Obsolete("This method is obsolete, use MyBetterMethod() instead.")]
public void MyMethod(int a = 0){}
Learn from this: do not use optional parameters; this is just one of the issues you will have (of course there are proper use-cases, but IMO they're the exception, not the rule).
In general, when writing a library you need a different approach to your public interface; be very careful about your class contract (read as: quick fixes are a legacy you do not want to carry).
When your method has too many parameters and you want to make callers' lives easier, you probably have a design issue; for the rare cases where a default is really required you should strongly consider an overloaded version. That said, I can't judge because I do not see your real code; there is a chance the new method is doing something different, or that it is doing too much.
When many parameters are unavoidable you should consider grouping them all in a separate class that is the only parameter of the method. At any time in the future you can add more properties to that class without breaking compatibility. See, for example, Process.Start() and its ProcessStartInfo. In this case you do not even need to rename the new method:
public sealed class MyMethodInfo {
    public int A { get; set; } = 0;
    public int B { get; set; } = 1;
}

public void MyMethod(MyMethodInfo info) {
}

[EditorBrowsable(EditorBrowsableState.Never)]
[Obsolete("This method is obsolete, use MyMethod(MyMethodInfo) instead.")]
public void MyMethod(int a = 0) {
    MyMethod(new MyMethodInfo { A = a });
}
One important note: default values are an important part of your interface contract. Use them cum grano salis, and only when they really make sense; otherwise force callers to specify a value. Always.
I would define one non-optional parameter to make it unambiguous.
public void MyMethod(int a, int b){} // first new method: this will be your main functioning method
public void MyMethod(int a = 0){ MyMethod(a,1); } // second old method: calls the main method above
There should not be ambiguity here, because:
0 parameters will call the second (old) method, which will call the first (new) method
1 parameter will call the second (old) method, which will call the first (new) method
2 parameters will call the first (new) method
In short, every call is directed to the first new method.
Just for completeness here is how I eventually solved this issue.
public void MyMethod(int a = 0){ MyMethod(1,a); }
public void MyMethod(int b, int a = 0){}
Reinstated the original method signature in place
Added the new parameter in a new overload as a required parameter (which forced it before the existing optionals in the argument list)
Called through with the default value from the old signature to the new
The net effect is an 'optional' parameter done with an overload.
This works because the new overload can now only be called with the new argument, making it unambiguous with the old implementation.
There are things I dislike about having to do this:
Parameter order isn't as logical
If you repeat this process things get messier and you have to deal with permutations
Rules for NuGet and optional parameters
Here are my pseudo-rules to help do this better in the future.
If you add a parameter (regardless of whether there are optionals or not) it must be done in a new overload
New parameters cannot be optional (ever)
If you need to add an optional parameter (like something with [CallerMemberName]) you have to do so in an overload which also changes the signature (either by adding another non-optional parameter, changing the method name, etc.)
It's important to note this only applies to libraries which are shipped pre-compiled (like NuGet); if you are project-referencing things you won't run into this problem.
Thanks to all who answered and helped me think this through

Implicitly casting a dynamic object at runtime

Say I have the following code:
class MyField : DynamicObject
{
    public dynamic Value { get; private set; }

    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        result = binder.Type == Value.GetType() ? Value : null;
        return result != null;
    }

    public MyField(dynamic v)
    {
        Value = v;
    }
}

// ...

public static class Program
{
    static void doSomething(ulong address) { /* ... */ }

    public static void Main(string[] args)
    {
        dynamic field = new MyField((ulong)12345);
        doSomething(field);        // fails as field is not a ulong.
        doSomething((ulong)field); // succeeds as field can be cast to a ulong.
        ulong field2 = field;      // also succeeds
    }
}
Is there a way to get the first call to doSomething to succeed? I'm writing a library to read a particular file format which uses serialized C-style structures; reading the file entails reading these saved structure definitions and then "populating" them with the data contained in the rest of the file. I have a "structure" DynamicObject class (to support dot-notation access) and a "field" DynamicObject class, which is primarily necessary to hold additional information on the contents of the field; although I could probably get rid of it, it would make certain other operations more difficult. What I'd like to do is just "pretend" MyField is a certain type (well, technically just any built-in primitive or array of primitives, including 2D arrays) and implicitly convert it to that type. However, the runtime fails to try to implicitly convert field to the type required by the underlying method signature if field doesn't match the type required.
In the vein of Greg's answer, I came up with a solution that makes the runtime happy. It's not exactly what I was originally looking for, but it seems like the best solution.
Since I already have a large if-else tree in my source wherein I take an array of bytes and interpret them as an actual value type, and my current source does indeed use an underlying generic MyField&lt;T&gt;, this works fine. I can't recall why I wanted MyField to be dynamic in the first place.
Anyway, this is a modified solution.
class MyField<T>
{
    public dynamic Value { get; private set; }

    public MyField(dynamic v) { Value = v; }

    public static implicit operator T(MyField<T> field)
    {
        return (T)field.Value;
    }
}
I keep coming back to wanting the runtime to just figure out what it needs to cast MyField to at runtime but I guess it's not that big of a deal. If anyone comes up with something better, let me know. I'm going to keep this question open in the meantime.
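For reference, a quick usage sketch of the implicit operator above, reusing the question's doSomething(ulong):

```csharp
var field = new MyField<ulong>((ulong)12345);

doSomething(field);   // compiles now: the user-defined implicit conversion
                      // MyField<ulong> -> ulong is applied at the call site
ulong raw = field;    // same conversion
```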
You potentially might want to look into generics. Coupled with an interface, they may make the dynamic usage far more viable.

public interface Helper<TInput, TOutput>
{
    TOutput DoSomething(TInput input);
}

So when you use this interface with a class, you'll implement your types for both input and output, which will give you quite a bit of flexibility and should avoid those casts you mentioned earlier. This is a small example; you could obviously adjust it based on your needs, but I still don't fully understand what you're trying to do.

C# simpler run time generics

Is there a way to invoke a generic function with a type known only at run time?
I'm trying to do something like:
static void bar()
{
    object b = 6;
    string c = foo<typeof(b)>();
}

static string foo<T>()
{
    return typeof (T).Name;
}
Basically I want to decide on the type parameter only at run time, but the function I'm calling depends on the type parameter.
Also I know this can be done with reflections... but it's not the nicest solution to the problem...
I'm sort of looking for dynamic features in C#...
I'm writing a bridge between two classes: the first is basically a big tree with different types of objects (composite via interfaces), the other is a sort of "super visitor".
The super visitor accepts key-value dictionaries that map types to objects; it looks like:
dic.Add<T>(object value)
and T is not necessarily the type of the value... a lot of the time it isn't...
I know it's written poorly, but I can't fix it...
I can work around it, but only at runtime...
I already did it with reflection, but if there's a better way to do it without it I would be happy to learn...
Thank you
This is a bit of a hack, but you can get dynamic to do the reflection work for you with something like:
class Program
{
    static void Main(string[] args)
    {
        var b = 6;
        var t = (dynamic)new T();
        var n = t.Foo(b);   // n == "Int32"
    }

    class T
    {
        // The type parameter can't also be named T here (CS0694),
        // since it would clash with the containing class name.
        public string Foo<TArg>(TArg a)
        {
            return typeof(TArg).Name;
        }
    }
}
Here the dynamic call will extract the type of b and use it as a type parameter for Foo().
You can use the dynamic keyword if you're using .NET 4. In a word, the type of the variable will be resolved at run time, so it is a super-generic type ;) You can read an article here or read the MSDN documentation.
Sadly, reflection is THE solution to the problem; whether it is nice or not is irrelevant here. It is the runtime mechanism designed to achieve exactly this. As there is no parameter or generic type to use as input, it is the only way to do it. The example as given is also moot, because the type is hardcoded.
If the method where b exists has b as a generic parameter, the type is available for passing to foo. If not, reflection is THE way to go, albeit the syntax looks clumsy. Only one time, though.
This I believe is the only way:
var foo = typeof(Foo<>).MakeGenericType(typeof(bar));
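For the question's foo&lt;T&gt;() specifically, the full reflection route looks like this (a sketch; MakeGenericMethod closes the open generic method over the runtime type of b):

```csharp
using System;
using System.Reflection;

static class Program
{
    static string Foo<T>() => typeof(T).Name;

    static void Main()
    {
        object b = 6;

        // Close Foo<T> over b's runtime type, then invoke it.
        MethodInfo open = typeof(Program).GetMethod(nameof(Foo),
            BindingFlags.NonPublic | BindingFlags.Static);
        MethodInfo closed = open.MakeGenericMethod(b.GetType());

        Console.WriteLine((string)closed.Invoke(null, null));   // prints "Int32"
    }
}
```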
You can set up a class which takes a type parameter that can be used in the methods of that class.

public class GenericClass<T> where T : ICommonInterface
{
    ICommonInterface TheObject;

    public GenericClass(T theObject)
    {
        TheObject = theObject;
    }

    public string GetName()
    {
        return TheObject.Name;
    }
}

But this is only really useful if the types being passed in share interfaces and so have common properties between them. In your example it seems that reflection is the answer, as depending on the type you want to access specific properties.

The type parameter cannot be used with type arguments

I wanted to code a helper method in a unit test project which would initialize the presenter, set the view instance on it, and set the presenter state.
It threw me the error:
the type parameter cannot be used with type arguments
Code:
public static TPresenter<TView> Initialize<TPresenter, TView>()
    where TPresenter : BasePresenter<TView>, new()
    where TView : new()
{
}
After a couple of minutes I found the issue was with my return type TPresenter<TView>.
I read a few posts which didn't clearly explain why I'm not able to say T1<T2>.
I was forced to make the presenter assignment through a reference parameter. Any explanations are welcome!
Basically there's no way of saying that a type parameter is itself a generic type with a particular number of type parameters - which you need to be able to do in order to make TPresenter<TView> make sense.
It's not clear what you mean by making it work via a reference parameter - whatever type you used for that ref parameter should be fine as a return type too. My guess is that it was just of type TPresenter, not TPresenter<TView>.
There is no such thing as a TPresenter<TView>; it is meaningless. TPresenter is just a placeholder: until it is constrained by the where clause it could be anything, e.g. there is no int<TView>, so you can't have that. Once you add the constraint it has to be a BasePresenter<TView> or some derived type, so it will always be a Something<TView>; again, TPresenter<TView> is meaningless.
This is an old one, but I hit it too. In the class definition, just use the bare type parameters; supply the concrete types where you use it. E.g.:

public class Template1<T1, T2> { }

void SomeFunc()
{
    Template1<SomeClass1, SomeClass2> someValue = new Template1<SomeClass1, SomeClass2>();
}

// or even:
void SomeOtherFunc<U, V>()
{
    Template1<U, V> someValue = new Template1<U, V>();
}
I was getting a similar error in my code. @Jon Skeet correctly points in the right direction. The return type is already generic, as specified by TPresenter : BasePresenter<TView>, so we can simply use it as TPresenter instead of TPresenter<TView>.
public class BasePresenter<T>
{
}

public class Demo
{
    public static TPresenter Initialize<TPresenter, TView>() where TPresenter : BasePresenter<TView>, new()
    {
        return null;
    }
}
A small addition: I came here because I was trying to write an extension method;
public static T AutoJoinGroup<T, TD>(this T<TD> groupHubClientBase, string groupName)
    where T : GroupHubClientBase<TD>
{
    ...
}
As you can see, I tried to use T<TD>, which is incorrect; you can just use T.
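For completeness, a corrected sketch of that extension method (GroupHubClientBase is the poster's type; the body here is a placeholder):

```csharp
public class GroupHubClientBase<TD> { }

public static class GroupHubExtensions
{
    // T is used directly; the where-clause ties it to GroupHubClientBase<TD>.
    public static T AutoJoinGroup<T, TD>(this T groupHubClientBase, string groupName)
        where T : GroupHubClientBase<TD>
    {
        // ... join logic would go here ...
        return groupHubClientBase;
    }
}
```

Note that TD cannot be inferred from the arguments (constraints don't participate in type inference), so callers have to supply both type arguments explicitly.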

Why does C# forbid generic attribute types?

This causes a compile-time error:
public sealed class ValidatesAttribute<T> : Attribute
{
}
[Validates<string>]
public static class StringValidation
{
}
I realize C# does not support generic attributes. However, after much Googling, I can't seem to find the reason.
Does anyone know why generic types cannot derive from Attribute? Any theories?
Well, I can't answer why it's not available, but I can confirm that it's not a CLI issue. The CLI spec doesn't mention it (as far as I can see) and if you use IL directly you can create a generic attribute. The part of the C# 3 spec that bans it - section 10.1.4 "Class base specification" doesn't give any justification.
The annotated ECMA C# 2 spec doesn't give any helpful information either, although it does provide an example of what's not allowed.
My copy of the annotated C# 3 spec should arrive tomorrow... I'll see if that gives any more information. Anyway, it's definitely a language decision rather than a runtime one.
EDIT: Answer from Eric Lippert (paraphrased): no particular reason, except to avoid complexity in both the language and compiler for a use case which doesn't add much value.
An attribute decorates a class at compile-time, but a generic class does not receive its final type information until runtime. Since the attribute can affect compilation, it has to be "complete" at compile time.
See this MSDN article for more information.
I don't know why it's not allowed, but this is one possible workaround
[AttributeUsage(AttributeTargets.Class)]
public class ClassDescriptionAttribute : Attribute
{
    public ClassDescriptionAttribute(Type KeyDataType)
    {
        _KeyDataType = KeyDataType;
    }

    public Type KeyDataType
    {
        get { return _KeyDataType; }
    }

    private Type _KeyDataType;
}

[ClassDescriptionAttribute(typeof(string))]
class Program
{
    ....
}
This is not truly generic and you still have to write a specific attribute class per type, but you may be able to use a generic base interface to code a little defensively, write less code than otherwise required, get the benefits of polymorphism, etc.
// An interface, which means it can't have its own implementation;
// you might need extension methods on this interface for that.
public interface ValidatesAttribute<T>
{
    T Value { get; } // or whatever that is
    bool IsValid { get; } // etc.
}

public class ValidatesStringAttribute : Attribute, ValidatesAttribute<string>
{
    //...
}

public class ValidatesIntAttribute : Attribute, ValidatesAttribute<int>
{
    //...
}

[ValidatesString]
public static class StringValidation
{
}

[ValidatesInt]
public static class IntValidation
{
}
Generic Attributes are available since C# 11. Now, this is possible:
[GenericAttribute<int>()]
public int Method();
However, this is not possible yet:
[GenericAttribute<T>()]
public int Method<T>(T param);
T is not known at compile time.
Also:

The type arguments must satisfy the same restrictions as the typeof operator. Types that require metadata annotations aren't allowed. For example, the following types aren't allowed as the type parameter:

dynamic
string? (or any nullable reference type)
(int X, int Y) (or any other tuple type using C# tuple syntax)

These types aren't directly represented in metadata. They include annotations that describe the type. In all cases, you can use the underlying type instead:

object for dynamic.
string instead of string?.
ValueTuple<int, int> instead of (int X, int Y).
Source: https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-11#generic-attributes
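A minimal C# 11 sketch of declaring and reading back a generic attribute, reusing the question's names (the Demo class is illustrative):

```csharp
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public sealed class ValidatesAttribute<T> : Attribute
{
    public Type ValidatedType => typeof(T);
}

[Validates<string>]
public static class StringValidation
{
}

public static class Demo
{
    public static void Main()
    {
        // The closed generic attribute can be read back via reflection:
        var attr = typeof(StringValidation).GetCustomAttribute<ValidatesAttribute<string>>();
        Console.WriteLine(attr.ValidatedType);   // System.String
    }
}
```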
This is a very good question. In my experience with attributes, I think the constraint is in place because when reflecting on an attribute it would create a condition in which you would have to check for all possible type permutations: typeof(Validates<string>), typeof(Validates<SomeCustomType>), etc...
In my opinion, if a custom validation is required depending on the type, an attribute may not be the best approach.
Perhaps a validation class that takes in a SomeCustomValidationDelegate or an ISomeCustomValidator as a parameter would be a better approach.
This is not currently a C# language feature; however, there is much discussion on the official C# language repo.
From some meeting notes:
Even though this would work in principle, there are bugs in most
versions of the runtime so that it wouldn't work correctly (it was
never exercised).
We need a mechanism to understand which target runtime it works on. We
need that for many things, and are currently looking at that. Until
then, we can't take it.
Candidate for a major C# version, if we can make a sufficient number
of runtime versions deal with it.
Generic attributes are supported since .NET 7 and C# 11 (in preview in .NET 6 and C# 10).
My workaround is something like this:
public class DistinctType1IdValidation : ValidationAttribute
{
    private readonly DistinctValidator<Type1> validator;

    public DistinctType1IdValidation()
    {
        validator = new DistinctValidator<Type1>(x => x.Id);
    }

    public override bool IsValid(object value)
    {
        return validator.IsValid(value);
    }
}

public class DistinctType2NameValidation : ValidationAttribute
{
    private readonly DistinctValidator<Type2> validator;

    public DistinctType2NameValidation()
    {
        validator = new DistinctValidator<Type2>(x => x.Name);
    }

    public override bool IsValid(object value)
    {
        return validator.IsValid(value);
    }
}

...

[DataMember, DistinctType1IdValidation]
public Type1[] Type1Items { get; set; }

[DataMember, DistinctType2NameValidation]
public Type2[] Type2Items { get; set; }
