Lambda syntax in C# 3 makes it really convenient to create one-liner anonymous methods. They're a definite improvement over the wordier anonymous delegate syntax that C# 2 gave us. The convenience of lambdas, however, brings with it a temptation to use them in places where we don't necessarily need the functional programming semantics they provide.
For instance, I frequently find that my event handlers are (or at least start out as) simple one-liners that set a state value, or call another function, or set a property on another object, etc. For these, should I clutter my class with yet another simple function, or should I just stuff a lambda into the event in my constructor?
There are some obvious disadvantages to lambdas in this scenario:
I can't call my event handler directly; it can only be triggered by the event. Of course, in the case of these simple event handlers, there's hardly a time I would need to call them directly.
I can't unhook my handler from the event. On the other hand, I rarely ever need to unhook event handlers, so this isn't much of an issue, anyway.
These two things don't bother me much, for the reasons stated. And I could solve both of those problems, if they really were problems, by storing the lambda in a member delegate, but that would kind of defeat the purposes of using lambdas for their convenience and of keeping the class clean of clutter.
There are two other things, though, that I think are maybe not so obvious, but possibly more problematic.
Each lambda function forms a closure over its containing scope. This could mean that temporary objects created earlier in the constructor stay alive for much longer than they need to due to the closures maintaining references to them. Now hopefully, the compiler is smart enough to exclude objects from the closure that the lambda doesn't use, but I'm not sure. Does anybody know?
Luckily again, this isn't always an issue, as I don't often create temporary objects in my constructors. I can imagine a scenario where I did, though, and where I couldn't easily scope it outside of the lambda.
Maintainability might suffer. Big time. If I have some event handlers defined as functions, and some defined as lambdas, I worry it might make it more difficult to track down bugs, or to just understand the class. And later, if and when my event handlers end up expanding, I'll either have to move them to class-level functions, or deal with the fact that my constructor now contains a significant amount of the code that implements the functionality of my class.
So I want to draw on the advice and experience of others, perhaps those with experience in other languages with functional programming features. Are there any established best practices for this kind of thing? Would you avoid using lambdas in event handlers or in other cases where the lambda significantly outlives its enclosing scope? If not, at what threshold would you decide to use a real function instead of a lambda? Have any of the above pitfalls significantly bitten anybody? Are there any pitfalls I haven't thought of?
I generally have one routine dedicated to wiring up event handlers. There, I use anonymous delegates or lambdas for the actual handlers, keeping them as short as possible. These handlers have two tasks:
Unpack event parameters.
Call a named method with appropriate parameters.
This done, I've avoided cluttering up my class namespace with event handler methods that cannot be cleanly used for other purposes, and forced myself to think about the needs and purposes of the action methods that I do implement, generally resulting in cleaner code.
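A minimal sketch of this pattern (all type and member names here are invented for illustration, not taken from the original post): the wiring routine attaches lambdas that only unpack event arguments and forward to a named method holding the real logic.

```csharp
using System;

// Hypothetical event source used only for this sketch.
public class TemperatureSensor
{
    public event EventHandler<double> ReadingTaken;
    public void Publish(double celsius) => ReadingTaken?.Invoke(this, celsius);
}

public class ThermostatController
{
    private readonly TemperatureSensor _sensor = new TemperatureSensor();
    public double LastReading { get; private set; }

    public ThermostatController() => WireEvents();

    // Single routine where all handlers are attached.
    private void WireEvents()
    {
        // The lambda does two things only: unpack, then call a named method.
        _sensor.ReadingTaken += (sender, celsius) => RecordReading(celsius);
    }

    // The named method carries the real logic and is reusable elsewhere.
    private void RecordReading(double celsius) => LastReading = celsius;

    public void SimulateReading(double celsius) => _sensor.Publish(celsius);
}
```

The handler itself never grows; any added behavior goes into `RecordReading` or another named method.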
Each lambda function forms a closure over its containing scope. This could mean that temporary objects created earlier in the constructor stay alive for much longer than they need to due to the closures maintaining references to them. Now hopefully, the compiler is smart enough to exclude objects from the closure that the lambda doesn't use, but I'm not sure. Does anybody know?
From what I have read, the C# compiler either generates a plain (often static) anonymous method, or a compiler-generated class to hold the captured variables, depending on whether it needs to close over the containing scope.
In other words, if you don't access the containing scope from within your lambda, it won't generate the closure.
However, this is a bit of "hearsay", and I'd love to have someone who is more knowledgeable with the C# compiler weigh in on that.
All that said, the old C# 2.0 anonymous delegate syntax did the same thing, and I've almost always used anonymous delegates for short event handlers.
You have covered the various pros and cons quite well. If you need to unhook your event handler, don't use an anonymous method; otherwise, I'm all for it.
Based on a little experiment with the compiler, I would say it is smart enough to create a closure only when one is needed. What I did was a simple constructor with two different lambdas, each used as a Predicate in List.Find().
The first lambda used a hard-coded value; the second used a parameter of the constructor. The first lambda was implemented as a private static method on the class. The second was implemented as a compiler-generated class which performed the closing-over.
So your assumption that the compiler is smart enough is correct.
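A minimal reconstruction of the kind of experiment described above (the class name and values are illustrative, not from the original post). The first predicate captures nothing, so the compiler can emit it as a cached static method; the second captures `threshold`, so the compiler generates a hidden "display class" that holds the captured value.

```csharp
using System;
using System.Collections.Generic;

public class Finder
{
    private readonly List<int> _numbers = new List<int> { 3, 7, 12 };

    public int FindAboveTen()
    {
        // No captured variables: compiled to a (cached) static method,
        // no closure object is allocated.
        return _numbers.Find(n => n > 10);
    }

    public int FindAbove(int threshold)
    {
        // Captures 'threshold': compiled to a method on a
        // compiler-generated closure class holding the value.
        return _numbers.Find(n => n > threshold);
    }
}
```

Inspecting the compiled assembly (e.g. with an IL viewer) shows the two different strategies side by side.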
Most of the same characteristics of lambdas apply equally well in other places where you can use them. If event handlers aren't a place for them, I can't think of a better one. An event handler is a single, self-contained unit of logic, located at its single point of use.
In many cases, the event is designed to get a little package of context that turns out to be just right for the job at hand.
I consider this to be one of the "good smells" in a refactoring sense.
Concerning lambdas, this question I asked recently has some relevant facts about effects on object lifespan in the accepted answer.
Another interesting thing I recently learned is that the C# compiler treats multiple closures in the same scope as a single closure with respect to the things it captures and keeps alive. Sadly I can't find the original source for this; I will add it if I stumble upon it again.
Personally, I don't use lambdas as event handlers because I feel the readability advantage really comes when the logic is flowing from a request to a result. The event handler tends to be added in a constructor or initialiser, but it will rarely be called at this point in the object's lifecycle. So why should my constructor read like it's doing things now that are actually happening much later?
On the other hand, I do use a slightly different kind of event mechanism overall, which I find preferable to the C# language feature: an iOS-style NotificationCenter rewritten in C#, with a dispatch table keyed by Type (derived from Notification) and with Action<Notification> values. This ends up allowing single-line "event" Type definitions, like so:
public class UserIsUnhappy : Notification { public int unhappiness; }
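The poster's NotificationCenter implementation isn't shown; the following is only a guess at what a minimal version of such a dispatch table might look like, with all member names (`Subscribe`, `Post`) invented for illustration.

```csharp
using System;
using System.Collections.Generic;

public abstract class Notification { }

// Hypothetical minimal dispatch table: handlers are keyed by the concrete
// Notification subtype and stored uniformly as Action<Notification>.
public class NotificationCenter
{
    private readonly Dictionary<Type, Action<Notification>> _handlers
        = new Dictionary<Type, Action<Notification>>();

    public void Subscribe<T>(Action<T> handler) where T : Notification
    {
        // Wrap the typed handler so every entry shares one delegate type.
        Action<Notification> wrapper = n => handler((T)n);
        if (_handlers.TryGetValue(typeof(T), out var existing))
            _handlers[typeof(T)] = existing + wrapper;   // multicast combine
        else
            _handlers[typeof(T)] = wrapper;
    }

    public void Post(Notification n)
    {
        if (_handlers.TryGetValue(n.GetType(), out var handler))
            handler(n);
    }
}

public class UserIsUnhappy : Notification { public int unhappiness; }
```

Posting `new UserIsUnhappy { unhappiness = 3 }` then invokes every subscriber registered for that Type.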
I have done a fair bit of reading and I am at the stage where I am beginning to grasp what they do, but am at a loss when it comes to why or where I would use them. In each example I've seen the recurring definition seems to be as a method pointer, and you can use this in place of a call to the method which is apparently useful when the developer doesn't know which method to call or the selection of a method is based on a condition or state.
This is where I struggle a bit: why can't I just have an if statement or a switch statement and call the method directly based on the outcome? What's so bad about calling a method directly from an object instance? From my understanding a delegate offers a better way to do this, but I can't understand what's better about it; from my perspective it's just a roundabout way to achieve the same thing an if statement could do when deciding which method to call.
I'm at a loss and have been rambling on for quite a bit now, any help at all on the matter would be greatly appreciated!
why can't I just have an if statement or a switch statement and then call the method directly based on the outcome?
This would be fine if you had 2 or 3 different branches and methods. Now imagine having tens or hundreds of methods which could potentially be called depending on the situation. I wouldn't want to be the one to write that if statement.
Imagine having 100 different potential abilities for a character in a game. Each ability can have its own method. Depending on what abilities a player has, you can just throw those methods into a list for that character using delegates. Now it's fully customizable, and players' abilities aren't hard-coded; they can be picked up or lost during the course of the game super easily, and there can be thousands of abilities, not to mention the number of potential combinations.
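A sketch of that idea (all names here are illustrative, not from the original answer): each ability is just a method, and a character carries a list of delegates to whichever abilities it currently has. No if/switch over ability names is needed anywhere.

```csharp
using System;
using System.Collections.Generic;

public class Character
{
    // The character's current abilities, as delegates to plain methods.
    public List<Action<Character>> Abilities { get; } = new List<Action<Character>>();
    public int Health = 100;

    public void UseAll()
    {
        // Invoke whatever the character happens to have right now.
        foreach (var ability in Abilities)
            ability(this);
    }
}

public static class AbilityLibrary
{
    public static void Heal(Character c) => c.Health += 10;
    public static void Harden(Character c) => c.Health += 5;
}
```

Abilities can be gained or lost at runtime with a simple `hero.Abilities.Add(AbilityLibrary.Heal)` or `Remove`; no code path needs to enumerate every possible ability.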
Think about it this way. According to the SOLID principles of OOD, for example, every object should have responsibility over a single part of the functionality. Using this principle we can assume that:
Classes are responsible for working with custom objects, structs for sets of data, methods for actions, events for signaling that something has happened, and delegates for the corresponding actions that should take place in response to those events.
Events and methods are 'busy' with their own single part of the functionality and therefore cannot handle the events themselves. That's why we need delegates...
I am new to the delegates concept. I've learnt it is similar to pointers in C++. Among its advantages, it is said that effective use of delegates improves performance.
Considering it's a pointer, how does it improve the performance of an application?
If anybody could explain this with a simple example, that would be helpful.
Delegates aren't directly about improving performance - they are about abstracting invocation. In C++ terms, it is indeed like a method pointer.
Most uses of delegates are not related to performance. Let's be clear about that from the outset.
However, one main case where they can help with performance is meta-programming. Code (usually library code) can construct complex chains of behaviour from configuration information at runtime, and then compile that information into a method via Expression, TypeBuilder or DynamicMethod (or basically any API that lets you construct IL). But to invoke such a dynamically generated method, you need a delegate, because your static IL that was compiled from C# can't refer to a method that didn't exist at the time.
Note that an alternative way to do this would be to use TypeBuilder to create (at runtime) a new type that inherits from a subclass or implements a known interface, then create an instance of the dynamically generated type, which can be cast to the expected API in the usual manner and invoked normally.
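As a small, self-contained illustration of the Expression route (not from the original answer; the helper name is invented): an expression tree is assembled at runtime, compiled, and the resulting method is only reachable through the delegate that `Compile()` returns.

```csharp
using System;
using System.Linq.Expressions;

public static class RuntimeCode
{
    // Builds the equivalent of: x => x + constant, at runtime.
    public static Func<int, int> BuildAdder(int constant)
    {
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression body = Expression.Add(x, Expression.Constant(constant));

        // Compile() emits IL for a method that did not exist at compile
        // time; the delegate is our only handle to call it.
        return Expression.Lambda<Func<int, int>>(body, x).Compile();
    }
}
```

Calling `RuntimeCode.BuildAdder(5)` yields a delegate that behaves exactly like a hand-written `x => x + 5`, but whose body was chosen at runtime.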
Delegates do not have a significant positive or negative impact on the performance of your application. What they provide is a means of decoupling aspects of your application from each other.
Let's say you have a situation where class A calls B.foo(). A is now partially coupled to B. You might then have a situation where B needs to call A.bar(). You now risk tightly coupling the two together. If, instead of exposing A to B, you provide bar as a delegate, then you have removed that coupling.
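The decoupling described above can be sketched like this (class and member names follow the answer's A/B/foo/bar naming; the rest is illustrative): B never learns A's type, it only holds a delegate to call back.

```csharp
using System;

public class B
{
    private readonly Action _callback;

    // B depends on "something callable", not on class A itself.
    public B(Action callback) => _callback = callback;

    public void Foo() => _callback();
}

public class A
{
    public bool BarCalled;
    public void Bar() => BarCalled = true;

    public void Run()
    {
        // A hands B a delegate to Bar instead of a reference to itself,
        // so B compiles and tests without any knowledge of A.
        var b = new B(Bar);
        b.Foo();
    }
}
```

Swapping in a different callback (a test stub, a logger) requires no change to B at all.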
There's a lot of code like this in company's application I'm working at:
var something = new Lazy<ISomething>(() =>
(ISomething)SomethingFactory
.GetSomething<ISomething>(args));
ISomething sth = something.Value;
From my understanding of Lazy this is totally meaningless, but I'm new at the company and I don't want to argue without reason.
So - does this code have any sense?
Code that is being actively developed is never static, so one possibility of why they code it this way is in case they need to move the assignment to another place in the code later on. However, it sounds as if this is occurring within a method, and normally I would expect Lazy initialization to occur most often for class fields or properties, where it would make more sense (because you may not know which method in the class would first use it).
Unfortunately, it could just as likely be more a lack of knowledge of how the Lazy feature works in C# (or lazy init in general), and maybe they are just trying to use the latest "cool feature" they found out about.
I have seen weird or odd things proliferate in code at a company, simply because people saw it coded one way, and then just copied it, because they thought the original person knew what they were doing and it made sense. The best thing to do is to ask why it was done that way. Worst case, you'll learn something about your company's procedures or coding practices. Best case, you may wind up educating them if they say "gee, I don't know".
Well, in this case it is meaningless, of course, because you are getting the value right after creating the object, but maybe it is done to follow a standard or something like that.
At my company we do similar things registering the objects in the Unity container and calling Unity to create the instance just after registering it.
Unless they are using the value multiple times in the method, it seems pretty useless, and slightly less efficient than just performing the action immediately. Otherwise, Lazy<T> goes through the Value getter, checks whether the value has been materialized yet, and performs a Func call. Useful for deferred loading, but pointless if the value is used only once, immediately, in a method.
Lazy<T>, however, is usually really helpful for properties on a class.
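A minimal sketch of that property pattern (names are illustrative, and the counter exists only to make the behavior visible): the expensive value is built at most once, and only on first access.

```csharp
using System;

public class ReportService
{
    public static int BuildCount;   // visible only to demonstrate laziness

    // The factory runs on first access to .Value, then the result is cached.
    private readonly Lazy<string> _report =
        new Lazy<string>(() => { BuildCount++; return "expensive report"; });

    // Repeated reads reuse the single materialized value.
    public string Report => _report.Value;
}
```

If no caller ever reads `Report`, the factory never runs at all, which is the point of deferring the work.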
It can be useful if the Lazy.Value is going to be moved out of the method in the future, but even then it could be considered over-engineering, and not the best implementation, as the Lazy declaration should have been extracted to a property in that case.
So, in short: yes, it's useless.
Say we have a Game class.
The Game class needs to pass down a reference to its SpriteBatch. That is, the class calls a method, passing it along, and that method in turn passes it to other methods, until it is finally used.
Is this bad for performance? Is it better to just use statics?
I see one obvious disadvantage of statics, though: being unable to have duplicate instances of the functionality in the same application.
It is not easy to answer your question, as you have not specifically described the requirement, but generally I can give you some advice.
Always consider encapsulation: do not expose properties if they are not used elsewhere.
Performance: for reference types there is no performance penalty, as a reference is what gets passed anyway; if your type is a value type, there will be a very small copying penalty.
So there is a design/performance trade-off; unless your method is called millions of times, you never have to think about a public static property.
There are cons and pros like in everything.
Whether this is good or bad from a performance point of view depends on how computationally intensive that code is and how often it is used inside your game.
So here are my considerations on subject.
Passing like parameter:
Cons: you pass one more variable on the stack with each function call. That is very fast, but again, it depends on how the code in question is used, so its absence can bring some benefit; that's why I list this under cons.
Pros: you explicitly declare that the function at the top of the calling stack needs that parameter for reading and/or writing, so someone looking at the code can easily see the semantic dependencies of your calls.
Use like static:
Cons: there is no clear evidence (short of direct knowledge or well-written documentation) of which parameters affect the computation inside those functions.
Pros : You don't pass it on the stack for all functions in chain.
I would personally recommend passing it as a parameter, because this clearly shows what the calling code depends on, and even if there were some measurable performance drawback, it would most probably not be relevant in your case. But again, as Rico Mariani always suggests: measure, measure, measure...
Statics are usually not the best way, because if later on you want to create multiple instances, you might be in trouble.
Passing references does cost a little performance, but how much it matters depends on how many calls you make. Unless you are creating millions of objects in a short amount of time, it is unlikely to be an issue.
In Java, with no delegates, events are modeled with interface callbacks after the observer pattern. It strikes me that if working on a framework with more than half a dozen events, using delegates becomes a fairly verbose exercise.
As a Java developer who forgot his C#, I was wondering whether there is EVER a valid reason to use interfaces for events, or whether one really ought to use delegates throughout.
If it always makes sense to react to multiple callbacks, then it would potentially make sense to use an interface. However, you might want to write some adapter methods to allow the interface to be implemented by providing delegates for some of the callbacks - just the ones you want.
This is how Reactive Extensions works... almost no-one ever really implements IObserver<T> - they use the IObservable<T>.Subscribe extension method which allows the caller to specify the OnNext, OnCompleted and OnError handlers via delegates.
That way you get the benefits of delegates (which are generally easier to specify than interfaces, due to lambda expressions etc) but also one consistent object to pass around which represents all the related callbacks.
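The adapter idea can be sketched without Rx itself (all names below are invented for illustration): the interface is implemented once by a class that simply forwards to delegates, so callers supply only the callbacks they care about.

```csharp
using System;

// A multi-callback listener interface, Java-observer style.
public interface IDownloadListener
{
    void OnProgress(int percent);
    void OnCompleted();
}

// One reusable adapter: implements the interface by forwarding to delegates.
public class DelegateDownloadListener : IDownloadListener
{
    private readonly Action<int> _onProgress;
    private readonly Action _onCompleted;

    public DelegateDownloadListener(Action<int> onProgress = null,
                                    Action onCompleted = null)
    {
        // Callbacks the caller didn't supply default to no-ops.
        _onProgress = onProgress ?? (_ => { });
        _onCompleted = onCompleted ?? (() => { });
    }

    public void OnProgress(int percent) => _onProgress(percent);
    public void OnCompleted() => _onCompleted();
}
```

A caller who only wants progress updates writes `new DelegateDownloadListener(onProgress: p => bar.Update(p))` and never touches the other member, while the framework still receives one object carrying all the related callbacks.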
Delegates are much more flexible. Since there are no anonymous classes in C# (in the Java sense), you cannot easily implement interfaces inline. Therefore, whenever an API requires me to implement an interface, I have to go and write out that class, which, compared with lambdas, forces me to move related logic physically further apart, and that often decreases readability.