When I omit the parameter identifiers in a delegate declaration, the compiler reports an error saying an identifier is required. Why does declaring a delegate require parameter names at all? Isn't the type information alone enough in the declaration?
public delegate void MyDel(object o, EventArgs e); // accepted by compiler
public delegate void MyDel(object, EventArgs); // throws error, why?
NOTE: C++ allows declarations with only types. Coming from a C++ background, I expected the same behavior here.
If nothing else, so that while you're writing the documentation you can clearly indicate which of the parameters you're discussing (especially for delegates with multiple parameters of the same type).
It's also consistent with other areas (such as abstract methods or interface methods) that also have no body, but still require the parameters to be named.
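For example (IShape and Shape here are made up just for illustration), these members have no body either, yet the compiler still insists on named parameters:
public interface IShape
{
    double Scale(double factor);      // "factor" is required
    // double Scale(double);          // error: identifier expected
}

public abstract class Shape : IShape
{
    public abstract double Scale(double factor);   // same rule here
}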
There is also the matter of named arguments when invoking the delegate:
MyDel myDel = MyMethod;
myDel(o: sender, e: eve);
C# allows this, and without parameter names you couldn't do it.
Note that I can't find anything official about this. The following is my guess.
First of all, having parameter names makes it easy for people to know what each parameter does. If you have just object instead of object sender, it is ambiguous what this parameter is for. If you put the word sender there, people will know at first sight that this parameter represents the sender of an event.
Secondly, this makes it easy for the IDE to generate code for you. Ever tried making the Windows Forms Designer generate an event handler for you? It generates the parameter names according to the parameters in the delegate declaration. If you don't put parameter names in the declaration, the IDE can't generate meaningful names for you.
And lastly, keeping this syntax similar to that of method declarations is probably less work for the compiler developers. :)
There are at least two good reasons that come to my mind immediately:
Consistency with method declarations
You are right that the parameter name is not part of the signature and as such it is technically not required to match the delegate (consequently, the parameter names are ignored when matching the delegate to a method). However, consistency is an important feature of a language. It makes learning it easier and reduces cognitive workload, which in turn increases productivity. A delegate is a "placeholder" for a method. To be consistent, it makes sense to make its definition as similar to a method definition as possible. A method would be declared like this:
void PropertyChangedHandler(object sender, PropertyChangedEventArgs e) {
//...
}
A delegate to this method can be defined like:
delegate void PropertyChangedEventHandler(object sender, PropertyChangedEventArgs e);
As you can see, the only difference is the delegate keyword (and of course the lack of a method body, which is irrelevant here, because that's not part of the signature). That's easy to learn and remember.
Development aids
In Visual Studio you can type the event name, then +=, press Tab twice, and the event handler method will be generated for you. The names of the delegate's parameters are used for the generated method. If the delegate came with only parameter types and not names, the parameters would have to be named param1, param2, etc., which wouldn't be very meaningful. The same applies to other development aids; for example, when you write code to invoke the delegate, IntelliSense will show you the names of the delegate's parameters. That's much more useful than just their types.
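As a rough sketch of that (the Sensor and Display types and the event name here are hypothetical, not from any framework), note how the generated stub can only be as descriptive as the delegate declaration allows:
using System;

// Delegate declared with meaningful parameter names.
public delegate void TemperatureChangedHandler(object sender, double newTemperature);

public class Sensor
{
    public event TemperatureChangedHandler TemperatureChanged;

    public void Report(double newTemperature)
    {
        if (TemperatureChanged != null)
            TemperatureChanged(this, newTemperature);
    }
}

public class Display
{
    public Display(Sensor sensor)
    {
        // Typing "sensor.TemperatureChanged +=" and pressing Tab twice
        // generates a stub that reuses the delegate's parameter names:
        sensor.TemperatureChanged += Sensor_TemperatureChanged;
    }

    private void Sensor_TemperatureChanged(object sender, double newTemperature)
    {
        // With a nameless declaration the IDE could only offer
        // something like (object param1, double param2).
    }
}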
Any good programmer will choose parameter names that clearly indicate what each parameter is for. If only the type were allowed, it would lead to confusion; that confusion would demand documentation and careful reading of it, which costs productivity. For this reason the language designers decided that parameter names must be given when defining a delegate or an interface.
Imagine the SqlCommand definition:
SqlCommand(string, SqlConnection)
vs
SqlCommand(string cmdText, SqlConnection connection)
Related
When learning more about the standard event model in .NET, I found that before generics were introduced in C#, the method that handles an event was represented by this delegate type:
//
// Summary:
// Represents the method that will handle an event that has no event data.
//
// Parameters:
// sender:
// The source of the event.
//
// e:
// An object that contains no event data.
public delegate void EventHandler(object sender, EventArgs e);
But after generics were introduced in C# 2, I think this delegate type was rewritten to use generics:
//
// Summary:
// Represents the method that will handle an event when the event provides data.
//
// Parameters:
// sender:
// The source of the event.
//
// e:
// An object that contains the event data.
//
// Type parameters:
// TEventArgs:
// The type of the event data generated by the event.
public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e);
I have two questions here:
First, why wasn't the TEventArgs type parameter made contravariant?
If I'm not mistaken it is recommended to make the type parameters that appear as formal parameters in a delegate's signature contravariant and the type parameter that will be the return type in the delegate signature covariant.
In Joseph Albahari's book, C# in a Nutshell, I quote:
If you’re defining a generic delegate type, it’s good practice to:
Mark a type parameter used only on the return value as covariant (out).
Mark any type parameters used only on parameters as contravariant (in).
Doing so allows conversions to work naturally by respecting inheritance relationships between types.
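A minimal sketch of what that guidance looks like, using a made-up Transformer delegate:
using System;

// Contravariant input (in), covariant output (out), per the guidance above.
public delegate TResult Transformer<in TInput, out TResult>(TInput input);

public static class VarianceDemo
{
    public static void Main()
    {
        Transformer<object, string> general = o => o.ToString();

        // Legal because TInput is contravariant and TResult is covariant:
        Transformer<string, object> specific = general;

        Console.WriteLine(specific("hello")); // prints "hello"
    }
}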
Second question: Why was there no generic constraint to enforce that TEventArgs derives from System.EventArgs?
As follows:
public delegate void EventHandler<TEventArgs> (object source, TEventArgs e) where TEventArgs : EventArgs;
Edited to clarify the second question:
It seems the generic constraint on TEventArgs (where TEventArgs : EventArgs) existed in earlier versions and was later removed by Microsoft, so seemingly the design team realized it didn't make much practical sense (see the .NET Reference Source).
First off, to address some concerns in the comments to the question: I generally push back hard on "why not" questions because it's hard to find concise reasons why everyone in the world chose to not do this work, and because all work is not done by default. Rather, you have to find a reason to do work, and take away resources from other work that is less important to do it.
Moreover, "why not" questions of this form, which ask about the motivations and choices of people who work at a particular company, may only be answerable by the people who made that decision, who are probably not around here.
However, in this case we can make an exception to my general rule of closing "why not" questions because the question illustrates an important point about delegate covariance that I have never written about before.
I did not make the decision to keep event delegates non-variant, but had I been in a position to do so, I would have kept event delegates non-variant, for two reasons.
The first is purely an "encourage good practices" point. Event handlers are usually purpose-built for handling a particular event, and there is no good reason I'm aware of to make it easier than it already is to use delegates that have mismatches in the signature as handlers, even if those mismatches can be dealt with through variance. An event handler that matches exactly in every respect the event it is supposed to be handling gives me more confidence that the developer knows what they're doing when constructing an event-driven workflow.
That's a pretty weak reason. The stronger reason is also the sadder reason.
As we know, generic delegate types can be made covariant in their return types and contravariant in their parameter types; we normally think of variance in the context of assignment compatibility. That is, if we have a Func<Mammal, Mammal> in hand, we can assign it to a variable of type Func<Giraffe, Animal> and know that the underlying function will always take a mammal -- because now it will only get giraffes -- and will always return an animal -- because it returns mammals.
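In code, with some placeholder animal classes, the assignment described above looks like this:
using System;

class Animal { }
class Mammal : Animal { }
class Giraffe : Mammal { }

static class VarianceAssignment
{
    static void Main()
    {
        Func<Mammal, Mammal> f = m => m;

        // Legal: the underlying function accepts any Mammal (so Giraffes are fine)
        // and returns Mammals (which are Animals).
        Func<Giraffe, Animal> g = f;

        Animal a = g(new Giraffe());
        Console.WriteLine(a.GetType().Name); // Giraffe
    }
}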
But we also know that delegates may be added together; delegates are immutable, so adding two delegates together produces a third; the sum is the sequential composition of the summands.
Field-like events are implemented using delegate summation; that's why adding a handler to an event is represented as +=. (I am not a big fan of this syntax, but we're stuck with it now.)
Though both these features work well independently of each other, they work poorly in combination. When I implemented delegate variance, our tests discovered in short order that there were a number of bugs in the CLR regarding delegate addition where the underlying delegate types were mismatched due to variance-enabled conversions. These bugs had been there since CLR 2.0, but until C# 4.0, no mainstream language had ever exposed the bugs, no test cases had been written for them, and so on.
Sadly, I do not recall what the reproducers for the bugs were; it was twelve years ago and I do not know if I still have any notes on it tucked away on a disk somewhere.
We worked with the CLR team at the time to try and get these bugs addressed for the next version of the CLR, but they were not considered high enough priority compared to their risk. Lots of types like IEnumerable<T> and IComparable<T> and so on were made variant in those releases, as were the Func and Action types, but it is rare to add together two mismatched Funcs using a variant conversion. But for event delegates, their only purpose in life is to be added together; they would be added together all the time, and had they been variant, there would have been risk of exposing these bugs to a great many users.
I lost track of the issues shortly after C# 4 and I honestly do not know if they were ever addressed. Try adding together some mismatched delegates in various combinations and see if anything bad happens!
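For the curious, here is a sketch of that experiment; the Animal and Mammal classes are placeholders, and the failure mode (a run-time ArgumentException from the delegate addition) matches what the final answer on this page describes for mismatched EventHandler<T> delegates:
using System;

class Animal { }
class Mammal : Animal { }

static class MismatchedCombine
{
    static void Main()
    {
        Action<Mammal> onMammal = m => Console.WriteLine("mammal handler");
        Action<Animal> onAnimal = a => Console.WriteLine("animal handler");

        // Variant conversion: an Action<Animal> can stand in for an Action<Mammal>.
        Action<Mammal> converted = onAnimal;

        try
        {
            // The static types match, but the underlying runtime delegate types
            // do not (Action<Mammal> vs Action<Animal>), so the addition
            // (Delegate.Combine) throws at run time.
            Action<Mammal> combined = onMammal + converted;
            combined(new Mammal());
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}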
So that's a good but unfortunate reason why to not make event delegates variant in the C# 4.0 release timeframe. Whether there is still a good reason, I don't know. You'd have to ask someone on the CLR team.
Up until now, C# type inference has always worked well for me. I have created a test example to simplify the case.
class Parent
{
public void InferrenceTesting<T>() where T : Parent
{
}
}
class Child : Parent
{
public void Test()
{
//this line gives me a compiler error : The type arguments for method 'Parent.InferrenceTesting<T>()' cannot be inferred from the usage. Try specifying the type arguments explicitly.
this.InferrenceTesting();
}
}
I have read quite a lot on inference, but I am clueless as to why this doesn't work.
Inference of generic method type arguments to the method type parameters proceeds by making inferences based on the relationships between the formal arguments and the formal parameters.
Your method has zero formal arguments and zero formal parameters, so no inferences are made.
Note that in particular inferences are never made from generic parameter constraints. Constraints are not part of the signature of a method and inference concerns itself with signatures. Rather, constraints are checked after type inference has succeeded. If you're expecting some sort of inference to be made from your where clause, your expectation is mistaken.
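As a sketch building on the question's code: either supply the type argument explicitly, or add a parameter (the one-parameter overload below is an addition for illustration) so that inference has an argument to work from:
class Parent
{
    // Original method: no parameters, so nothing to infer T from.
    public void InferrenceTesting<T>() where T : Parent { }

    // Hypothetical overload with a parameter: now T can be inferred.
    public void InferrenceTesting<T>(T instance) where T : Parent { }
}

class Child : Parent
{
    public void Test()
    {
        this.InferrenceTesting<Child>();   // explicit type argument, compiles
        this.InferrenceTesting(this);      // T inferred as Child from the argument
    }
}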
I have read quite a lot on inference, but I am clueless as to why this doesn't work.
You may wish to read my blog articles on type inference if this subject interests you. They may be more accurate than some of the other articles you've read on this subject; I occasionally see misinformation out there. From my current blog:
https://ericlippert.com/category/csharp/type-inference/
And my former Microsoft blog:
https://blogs.msdn.microsoft.com/ericlippert/tag/type-inference/
In particular, see
https://blogs.msdn.microsoft.com/ericlippert/2009/12/10/constraints-are-not-part-of-the-signature/
The comments to that blog are quite interesting. If you've ever wanted to see like a hundred people tell me that I'm wrong, the design is wrong, the implementation is wrong, well, that's the place to go.
There's nothing it can use to infer the type from. You've said that T has to be a type of Parent but since you're not passing a parameter of that type (which the compiler can use to infer the type) you'll have to explicitly name the type.
The compiler has no information to infer from - you need a parameter or some other information to tell the compiler what T should be in order to infer it.
I'm no expert in this, but I think there is nothing to infer from.
The simple fact that this method is declared in a Parent derivate has nothing to do with it.
You need an argument of type T for that method so that the compiler has something from which it can infer what should be used as T.
In your call, T does not need to be either Parent or Child. It can be anything as long as it inherits from Parent.
Is there a name for this pattern?
Let's say you want to create a method that takes a variable number of arguments, each of which must be one of a fixed set of types (in any order or combination), and some of those types you have no control over. A common approach would be to have your method take arguments of type Object, and validate the types at runtime:
void MyMethod (params object[] args)
{
foreach (object arg in args)
{
if (arg is SomeType)
DoSomethingWith((SomeType) arg);
else if (arg is SomeOtherType)
DoSomethingElseWith((SomeOtherType) arg);
// ... etc.
else throw new Exception("bogus arg");
}
}
However, let's say that, like me, you're obsessed with compile-time type safety, and want to be able to validate your method's argument types at compile time. Here's an approach I came up with:
void MyMethod (params MyArg[] args)
{
// ... etc.
}
struct MyArg
{
public readonly object TheRealArg;
private MyArg (object obj) { this.TheRealArg = obj; }
// For each type (represented below by "GoodType") that you want your
// method to accept, define an implicit cast operator as follows:
static public implicit operator MyArg (GoodType x)
{ return new MyArg(x); }
}
The implicit casts allow you to pass arguments of valid types directly to your routine, without having to explicitly cast or wrap them. If you try to pass a value of an unacceptable type, the error will be caught at compile time.
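A self-contained usage sketch (SomeType, SomeOtherType and the body of MyMethod are placeholders for your real types and logic):
using System;

class SomeType { }
class SomeOtherType { }

struct MyArg
{
    public readonly object TheRealArg;
    private MyArg(object obj) { this.TheRealArg = obj; }

    static public implicit operator MyArg(SomeType x) { return new MyArg(x); }
    static public implicit operator MyArg(SomeOtherType x) { return new MyArg(x); }
}

static class Demo
{
    static void MyMethod(params MyArg[] args)
    {
        foreach (MyArg arg in args)
            Console.WriteLine(arg.TheRealArg.GetType().Name);
    }

    static void Main()
    {
        // Acceptable types convert implicitly, in any order or combination:
        MyMethod(new SomeType(), new SomeOtherType(), new SomeType());

        // MyMethod("oops"); // compile-time error: no conversion from string to MyArg
    }
}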
I'm sure others have used this approach, so I'm wondering if there's a name for this pattern.
There doesn't seem to be a named pattern on the Interwebs, but based on Ryan's comment to your question, I vote the name of the pattern should be Variadic Typesafety.
Generally, I would use it very sparingly, but I'm not judging the pattern as good or bad. Many of the commenters have made good points pro and con, which we see for other patterns such as Factory, Service Locator, Dependency Injection, MVVM, etc. It's all about context. So here's a stab at that...
Context
A variable set of disparate objects must be processed.
Use When
Your method can accept a variable number of arguments of disparate types that don't have a common base type.
Your method is widely used (i.e. in many places in code and/or by a great number of users of your framework). The point being that the type safety provides enough of a benefit to warrant its use.
The arguments can be passed in any order, yet the set of disparate types is finite and the only set acceptable to the method.
Expressiveness is your design goal and you don't want to put the onus on the user to create wrappers or adapters (see Alternatives).
Implementation
You already provided that.
Examples
LINQ to XML (e.g. new XElement(...))
Other builders such as those that build SQL parameters.
Processor facades (e.g. those that could accept different types of delegates or command objects from different frameworks) to execute the commands without the need to create explicit command adapters.
Alternatives
Adapter. Accept a variable number of arguments of some adapter type (e.g. Adapter<T> or subclasses of a non-generic Adapter) that the method can work with to produce the desired results. This widens the set your method can use (the types are no longer finite), but nothing is lost if the adapter does the right thing to enable the processing to still work. The disadvantage is that the user has the additional burden of specifying existing and/or creating new adapters, which perhaps detracts from the intent (i.e. adds "ceremony" and weakens "essence").
Remove Type Safety. This entails accepting a very general base type (e.g. Object) and performing runtime checks. The burden of knowing what to pass shifts to the user, but the code is still expressive. Errors don't reveal themselves until runtime.
Composite. Pass a single object that is a composite of others. This requires a pre-method-call build up, but devolves back to using one of the above patterns for the items in the composite's collection.
Fluent API. Replace the single call with a series of specific calls, one for each type of acceptable argument. A canonical example is StringBuilder.
This is an anti-pattern, more commonly known as Poltergeist.
Update:
If the types of the args are always from a fixed set and the order does not matter, then create overloads that each take a collection (IEnumerable<T>), changing T in each overload to the type you need to operate on (see the sketch after this list). This will reduce your code complexity by:
removing the MyArg class,
eliminating the need for type casting in your MyMethod, and
adding extra compile-time type safety: if you try to call the method with a list of args that the method can't handle, you will get a compiler error.
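A rough sketch of that alternative, again with placeholder types; note that it trades away the ability to mix types in a single call:
using System;
using System.Collections.Generic;

class SomeType { }
class SomeOtherType { }

static class OverloadDemo
{
    // One overload per acceptable element type; no runtime type checks needed.
    static void MyMethod(IEnumerable<SomeType> args)
    {
        foreach (SomeType arg in args) { /* DoSomethingWith(arg); */ }
    }

    static void MyMethod(IEnumerable<SomeOtherType> args)
    {
        foreach (SomeOtherType arg in args) { /* DoSomethingElseWith(arg); */ }
    }

    static void Main()
    {
        MyMethod(new List<SomeType> { new SomeType(), new SomeType() });
        MyMethod(new[] { new SomeOtherType() });
        // MyMethod(new[] { "oops" }); // compile-time error: no matching overload
    }
}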
This looks like a subset of the Composite Pattern. Quoting from Wikipedia:
The composite pattern describes that a group of objects are to be treated in the same way as a single instance of an object.
Could you give me some reasons for the limitations of the dynamic type in C#? I read about them in "Pro C# 2010 and the .NET 4 Platform". Here is an excerpt (if quoting books is not allowed here, tell me and I will remove it):
While a great many things can be defined using the dynamic keyword, there are some limitations regarding its usage. While they are not show stoppers, do know that a dynamic data item cannot make use of lambda expressions or C# anonymous methods when calling a method. For example, the following code will always result in errors, even if the target method does indeed take a delegate parameter which takes a string value and returns void.
dynamic a = GetDynamicObject();
// Error! Methods on dynamic data can’t use lambdas!
a.Method(arg => Console.WriteLine(arg));
To circumvent this restriction, you will need to work with the underlying delegate directly, using the techniques described in Chapter 11 (anonymous methods and lambda expressions, etc.). Another limitation is that a dynamic point of data cannot understand any extension methods (see Chapter 12). Unfortunately, this would also include any of the extension methods which come from the LINQ APIs. Therefore, a variable declared with the dynamic keyword has very limited use within LINQ to Objects and other LINQ technologies:
dynamic a = GetDynamicObject();
// Error! Dynamic data can’t find the Select() extension method!
var data = from d in a select d;
Tomas's conjectures are pretty good. His reasoning on extension methods is spot on. Basically, to make extension methods work we need the call site to somehow know at runtime what using directives were in force at compile time. We simply did not have enough time or budget to develop a system whereby this information could be persisted into the call site.
For lambdas, the situation is actually more complex than the simple problem of determining whether the lambda is going to expression tree or delegate. Consider the following:
d.M(123)
where d is an expression of type dynamic. What object should get passed at runtime as the argument to the call site "M"? Clearly we box 123 and pass that. Then the overload resolution algorithm in the runtime binder looks at the runtime type of d and the compile-time type of the int 123 and works with that.
Now what if it was
d.M(x=>x.Foo())
Now what object should we pass as the argument? We have no way to represent "lambda method of one variable that calls an unknown function called Foo on whatever the type of x turns out to be".
Suppose we wanted to implement this feature: what would we have to implement? First, we'd need a way to represent an unbound lambda. Expression trees are by design only for representing lambdas where all types and methods are known. We'd need to invent a new kind of "untyped" expression tree. And then we'd need to implement all of the rules for lambda binding in the runtime binder.
Consider that last point. Lambdas can contain statements. Implementing this feature requires that the runtime binder contain the entire semantic analyzer for every possible statement in C#.
That was orders of magnitude out of our budget. We'd still be working on C# 4 today if we'd wanted to implement that feature.
Unfortunately this means that LINQ doesn't work very well with dynamic, because LINQ of course uses untyped lambdas all over the place. Hopefully in some hypothetical future version of C# we will have a more fully-featured runtime binder and the ability to do homoiconic representations of unbound lambdas. But I wouldn't hold my breath waiting if I were you.
UPDATE: A comment asks for clarification on the point about the semantic analyzer.
Consider the following overloads:
class C {
public void M(Func<IDisposable, int> f) { ... }
public void M(Func<int, int> f) { ... }
...
}
and a call
d.M(x=> { using(x) { return 123; } });
Suppose d is of compile time type dynamic and runtime type C. What must the runtime binder do?
The runtime binder must determine at runtime whether the expression x=>{...} is convertible to each of the delegate types in each of the overloads of M.
In order to do that, the runtime binder must be able to determine that the second overload is not applicable. If it were applicable then you could have an int as the argument to a using statement, but the argument to a using statement must be disposable. That means that the runtime binder must know all the rules for the using statement and be able to correctly report whether any possible use of the using statement is legal or illegal.
Clearly that is not restricted to the using statement. The runtime binder must know all the rules for all of C# in order to determine whether a given statement lambda is convertible to a given delegate type.
We did not have time to write a runtime binder that was essentially an entire new C# compiler that generates DLR trees rather than IL. By not allowing lambdas we only have to write a runtime binder that knows how to bind method calls, arithmetic expressions and a few other simple kinds of call sites. Allowing lambdas makes the problem of runtime binding on the order of dozens or hundreds of times more expensive to implement, test and maintain.
Lambdas: I think that one reason for not supporting lambdas as parameters to dynamic objects is that the compiler wouldn't know whether to compile the lambda as a delegate or as an expression tree.
When you use a lambda, the compiler decides based on the type of the target parameter or variable. When it is Func<...> (or other delegate) it compiles the lambda as an executable delegate. When the target is Expression<...> it compiles lambda into an expression tree.
Now, when you have a dynamic type, you don't know whether the parameter is a delegate or an expression tree, so the compiler cannot decide what to do!
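A sketch of that decision: the same lambda text compiles to either an executable delegate or an expression tree depending on the target type, and a dynamic receiver is exactly the case where no target type is available:
using System;
using System.Linq.Expressions;

static class LambdaTargets
{
    static void Main()
    {
        // Target type is a delegate: compiled to executable code.
        Func<int, int> asDelegate = x => x + 1;
        Console.WriteLine(asDelegate(41));   // 42

        // Target type is Expression<...>: compiled to a data structure.
        Expression<Func<int, int>> asTree = x => x + 1;
        Console.WriteLine(asTree);           // x => (x + 1)

        dynamic d = new object();
        // d.Method(x => x + 1);  // error: no target type, so neither form can be chosen
    }
}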
Extension methods: I think that the reason here is that finding extension methods at runtime would be quite difficult (and perhaps also inefficient). First of all, the runtime would need to know what namespaces were referenced using using. Then it would need to search all classes in all loaded assemblies, filter those that are accessible (by namespace) and then search those for extension methods...
Eric (and Tomas) says it well, but here is how I think of it.
This C# statement
a.Method(arg => Console.WriteLine(arg));
has no meaning without a lot of context. Lambda expressions themselves have no types; rather, they are convertible to delegate (or Expression) types. So the only way to gather the meaning is to provide some context which forces the lambda to be converted to a specific delegate type. That context is typically (as in this example) overload resolution; given the type of a, and the available overloads of Method on that type (including extension members), we can possibly place some context that gives the lambda meaning.
Without that context to produce the meaning, you end up having to bundle up all kinds of information about the lambda in the hopes of somehow binding the unknowns at runtime. (What IL could you possibly generate?)
In stark contrast, once you put a specific delegate type there,
a.Method(new Action<int>(arg => Console.WriteLine(arg)));
Kazam! Things just got easy. No matter what code is inside the lambda, we now know exactly what type it has, which means we can compile IL just as we would any method body (we now know, for example, which of the many overloads of Console.WriteLine we're calling). And that code has one specific type (Action<int>), which means it is easy for the runtime binder to see if a has a Method that takes that type of argument.
In C#, a naked lambda is almost meaningless. C# lambdas need static context to give them meaning and rule out ambiguities that arise from many possible coercions and overloads. A typical program provides this context with ease, but the dynamic case lacks this important context.
Is it possible to create a list containing delegates of different types?
For example, consider these two delegates:
class MyEventArgs1 : EventArgs {}
class MyEventArgs2 : EventArgs {}
EventHandler<MyEventArgs1> handler1;
EventHandler<MyEventArgs2> handler2;
I would like to do something like this:
List<EventHandler<EventArgs>> handlers = new List<EventHandler<EventArgs>>();
handlers.Add((EventHandler<EventArgs>)handler1);
handlers.Add((EventHandler<EventArgs>)handler2);
But the cast from one delegate type to another doesn't seem possible.
My goal is to store the delegates in a list not to call them, but just to unregister them automatically.
You will be able to do this in C# 4.0 thanks to generic variance, but until then you need to find another way (maybe ArrayList).
Yes, this doesn't work; the delegates are completely unrelated types. Normally, generic types would have only System.Object as the common base type. But here, since they are delegates, you could store them in a List<Delegate>. I doubt that's going to help you get them unregistered, though. But I can't really envision what that code might look like.
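Since a List<Delegate> stores them but does not make unsubscribing any easier, one sketch of an alternative is to store uniformly typed unsubscribe actions instead; Publisher, Subscriber and the event names here are hypothetical:
using System;
using System.Collections.Generic;

class MyEventArgs1 : EventArgs { }
class MyEventArgs2 : EventArgs { }

class Publisher
{
    public event EventHandler<MyEventArgs1> Event1;
    public event EventHandler<MyEventArgs2> Event2;

    public void RaiseBoth()
    {
        if (Event1 != null) Event1(this, new MyEventArgs1());
        if (Event2 != null) Event2(this, new MyEventArgs2());
    }
}

class Subscriber
{
    private readonly List<Action> unsubscribers = new List<Action>();

    public void Attach(Publisher p)
    {
        EventHandler<MyEventArgs1> handler1 = (s, e) => Console.WriteLine("event 1");
        EventHandler<MyEventArgs2> handler2 = (s, e) => Console.WriteLine("event 2");

        p.Event1 += handler1;
        p.Event2 += handler2;

        // Store uniformly typed "unregister" actions instead of the
        // differently typed delegates themselves.
        unsubscribers.Add(() => p.Event1 -= handler1);
        unsubscribers.Add(() => p.Event2 -= handler2);
    }

    public void DetachAll()
    {
        foreach (Action unsubscribe in unsubscribers)
            unsubscribe();
        unsubscribers.Clear();
    }
}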
It's possible for a generic delegate declaration to specify that certain type parameters should be covariant or contravariant, which would allow the types of assignments you're after. Unfortunately, the internal implementation of multicast delegates makes it impossible to combine delegates of different types (a "simple" delegate holds information about its type, along with a method pointer and a reference to its target; a multicast delegate holds information about its type, along with the method pointers and target references for each of its constituent delegates, but it does not hold any reference to the original delegates that were combined, nor any information about their types). An attempt to combine an EventHandler<DerivedEventArgs> with an EventHandler<EventArgs> will thus fail at run time.
If EventHandler<T> were contravariant with respect to T, an attempt to pass an EventHandler<EventArgs> to a standard event Add handler which expects an EventHandler<DerivedEventArgs> would compile, and would even succeed if no other handlers were subscribed, since the EventHandler<EventArgs> would be combined (via Delegate.Combine) with null and thus stored in the event's delegate field as an EventHandler<EventArgs>. Unfortunately, a subsequent attempt to add an EventHandler<DerivedEventArgs> (which is actually the expected type) would fail, since its type doesn't match the delegate it's being combined with. Microsoft decided this behavior would violate the Principle of Least Astonishment (if passing the "wrong" delegate will cause any problem, it should do so when that delegate is passed, rather than when a later one is passed), and decided to minimize the likelihood of the scenario by making it so that an attempt to pass an EventHandler<EventArgs> to a handler that expects an EventHandler<DerivedEventArgs> will fail compilation, even though the act could succeed if it were the only subscription.
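To see that failure mode concretely, here is a sketch using a hypothetical contravariant delegate (EventHandler<T> itself is not declared contravariant, so a stand-in is used):
using System;

class DerivedEventArgs : EventArgs { }

// Hypothetical contravariant event-handler delegate.
delegate void MyHandler<in TEventArgs>(object sender, TEventArgs e);

static class CombineFailureDemo
{
    static void Main()
    {
        MyHandler<EventArgs> broad = (s, e) => Console.WriteLine("broad handler");
        MyHandler<DerivedEventArgs> exact = (s, e) => Console.WriteLine("exact handler");

        // The contravariant conversion compiles, and "subscribing" the broad
        // handler first succeeds because it is only combined with null:
        MyHandler<DerivedEventArgs> field = broad;

        try
        {
            // Adding a handler of the expected type now fails, because the
            // runtime types of the two delegates don't match.
            field += exact;
            field(null, new DerivedEventArgs());
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}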