Uses of delegates in C# (or other languages) - c#

I have always wondered how delegates can be useful and why we should use them. Other than being type-safe and all the other advantages listed in the Visual Studio documentation, what are the real-world uses of delegates?
I already found one, but it's very narrow.
using System;

namespace HelloNamespace {
    class Greetings {
        public static void DisplayEnglish() {
            Console.WriteLine("Hello, world!");
        }
        public static void DisplayItalian() {
            Console.WriteLine("Ciao, mondo!");
        }
        public static void DisplaySpanish() {
            Console.WriteLine("¡Hola, mundo!");
        }
    }

    delegate void delGreeting();

    class HelloWorld {
        static void Main(string[] args) {
            int iChoice = int.Parse(args[0]);
            delGreeting[] arrayofGreetings = {
                new delGreeting(Greetings.DisplayEnglish),
                new delGreeting(Greetings.DisplayItalian),
                new delGreeting(Greetings.DisplaySpanish)
            };
            arrayofGreetings[iChoice - 1]();
        }
    }
}
But this doesn't show me exactly the advantage of using delegates rather than a conditional "if ... { }" that parses the argument and runs the appropriate method.
Does anyone know why it's better to use a delegate here rather than "if ... { }"? Also, do you have other examples that demonstrate the usefulness of delegates?
Thanks!

Delegates are a great way of injecting functionality into a method, and they greatly help with code reuse because of this.
Think about it: let's say you have a group of related methods that have almost the same functionality but vary in just a few lines of code. You could refactor everything these methods have in common into one single method, and then inject the specialised functionality via a delegate.
Take, for example, all of the IEnumerable extension methods used by LINQ. All of them define common functionality but need a delegate passed to them to define how the return data is projected, or how the data is filtered, sorted, etc.
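To make that concrete, here is a minimal sketch (the NumberFilter class and its Where method are names of mine, not from the answer): the shared iterate-and-collect skeleton lives in one method, and the varying test is injected as a Func<int, bool> delegate.
using System;
using System.Collections.Generic;

static class NumberFilter
{
    // The shared skeleton: iterate, test, collect.
    public static List<int> Where(IEnumerable<int> source, Func<int, bool> predicate)
    {
        var result = new List<int>();
        foreach (var n in source)
        {
            if (predicate(n))   // the injected, specialised part
                result.Add(n);
        }
        return result;
    }
}

// Usage - the same skeleton reused with different behaviour:
// var evens     = NumberFilter.Where(numbers, n => n % 2 == 0);
// var positives = NumberFilter.Where(numbers, n => n > 0);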

The most common real-world everyday use of delegates that I can think of in C# would be event handling. When you have a button on a WinForm, and you want to do something when the button is clicked, then what you do is you end up registering a delegate function to be called by the button when it is clicked.
All of this happens for you automatically behind the scenes in the code generated by Visual Studio itself, so you might not see where it happens.
A real-world case that might be more useful to you would be if you wanted to make a library that people can use that will read data off an Internet feed, and notify them when the feed has been updated. By using delegates, then programmers who are using your library would be able to have their own code called whenever the feed is updated.
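A rough sketch of that feed-library idea (FeedReader, FeedUpdated and OnFeedUpdated are hypothetical names of mine; the event is backed by an Action<string> delegate):
using System;

public class FeedReader
{
    // The event is just a field of delegate type under the hood.
    public event Action<string> FeedUpdated;

    // Called internally whenever new data arrives on the feed.
    protected void OnFeedUpdated(string headline)
    {
        FeedUpdated?.Invoke(headline);
    }
}

// A consumer wires up its own code without the library knowing anything about it:
// var reader = new FeedReader();
// reader.FeedUpdated += headline => Console.WriteLine("New item: " + headline);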

Lambda expressions
Delegates were mostly used in conjunction with events, but dynamic languages showed they have much broader uses. That's why delegates were underused up until C# 3.0, when we got lambda expressions. It's very easy to do something using a lambda expression (which the compiler turns into a delegate).
Now imagine you have an IEnumerable of strings. You can easily define a delegate (using a lambda expression or any other way) and apply it to every element (trimming excess spaces, for instance), and do it without writing any loop statements. Of course, your delegates may do even more complex tasks.
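A minimal sketch of that trimming example, assuming LINQ's Select: the lambda is compiled to a delegate and applied to every element with no explicit loop.
using System;
using System.Collections.Generic;
using System.Linq;

class TrimExample
{
    static void Main()
    {
        IEnumerable<string> names = new[] { "  Alice ", "Bob  ", "  Carol" };

        // The lambda s => s.Trim() becomes a delegate that runs for each element.
        IEnumerable<string> trimmed = names.Select(s => s.Trim());

        Console.WriteLine(string.Join("|", trimmed)); // Alice|Bob|Carol
    }
}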

I will try to list some examples that go beyond a simple if-else scenario:
Implementing callbacks. For example, you are parsing an XML document and want a particular function to be called when a particular node is encountered. You can pass delegates to the parsing functions.
Implementing the strategy design pattern: assign the delegate to the required algorithm/strategy implementation (see the sketch after this list).
Anonymous delegates, for the case where you want some functionality executed on a separate thread (and that function has nothing to send back to the main program).
Event subscription, as suggested by others.
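A hypothetical strategy-pattern sketch (the Sorter class is mine): the algorithm is just a Comparison<int> delegate that can be swapped at runtime, with no if/else over which algorithm to use.
using System;

public class Sorter
{
    // The strategy is just a delegate; any method matching Comparison<int> can be plugged in.
    private Comparison<int> _compare;

    public Sorter(Comparison<int> compare)
    {
        _compare = compare;
    }

    public void SetStrategy(Comparison<int> compare)
    {
        _compare = compare;
    }

    public void Sort(int[] data)
    {
        Array.Sort(data, _compare);
    }
}

// Usage:
// var sorter = new Sorter((a, b) => a.CompareTo(b));   // ascending
// sorter.SetStrategy((a, b) => b.CompareTo(a));        // swap to descending at runtime
// sorter.Sort(numbers);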

Delegates are simply .NET's implementation of first-class functions, and they allow the languages using them to provide higher-order functions.
The principal benefit of this style is that common aspects can be abstracted out into a function which does just what it needs to do (for example, traversing a data structure) and is given another function (or functions) that it asks to do something as it goes along.
The canonical functional examples are map and fold, which can be made to do all sorts of things by supplying some other operation.
If you want to sum a list of T's and have some function add which takes two T's and adds them together, then (via partial application) fold add 0 becomes sum. fold multiply 1 would become the product, and fold max 0 the maximum. In all these examples the programmer need not think about how to iterate over the input data, and need not worry about what to do if the input is empty.
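The same idea translated to C#, where LINQ's Aggregate plays the role of fold and a delegate supplies the combining operation (a sketch of mine, not from the answer itself):
using System;
using System.Linq;

class FoldExamples
{
    static void Main()
    {
        var xs = new[] { 3, 1, 4, 1, 5 };

        int sum     = xs.Aggregate(0, (acc, x) => acc + x);   // fold add 0
        int product = xs.Aggregate(1, (acc, x) => acc * x);   // fold multiply 1
        int max     = xs.Aggregate(int.MinValue, Math.Max);   // fold max

        Console.WriteLine("{0} {1} {2}", sum, product, max);  // 14 60 5
    }
}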
These are simple examples (though they can be surprisingly powerful when combined with others), but consider tree traversal, a more complex task: all of it can be abstracted away behind a treefold function. Writing the tree fold function can be hard, but once done it can be reused widely without having to worry about bugs.
This is similar in concept and design to the addition of foreach loop constructs to traditional imperative languages: you don't have to write the loop control yourself (which introduces the chance of off-by-one errors and adds verbosity that obscures what you are doing to each entry by showing how you are getting each entry). Higher-order functions simply allow you to separate the traversal of a structure from what to do while traversing, extensibly, within the language itself.
It should be noted that delegates in C# have been largely superseded by lambdas, because the compiler can simply treat a lambda as a less verbose delegate if it wants, but it is also free to pass the expression the lambda represents through to the function it is given, allowing (often complex) restructuring or re-targeting of the intent into some other domain, such as database queries via LINQ-to-SQL.
A principal benefit of the .NET delegate model over C-style function pointers is that a delegate is actually a tuple (two pieces of data): the function to call and the optional object on which the function is to be called. This lets you pass around functions with state, which is even more powerful. The compiler can use this to construct classes behind your back (1), instantiate a new instance of such a class, and place local variables in it, thus allowing closures.
(1) It doesn't always have to do this, but for now that is an implementation detail.

In your example your greetings are all the same, so what you actually need is an array of strings.
If you'd like to see how delegates are used in the Command pattern, imagine you have:
public static void ShakeHands()
{ ... }
public static void HowAreYou()
{ ... }
public static void FrenchKissing()
{ ... }
You can substitute any method with the same signature but a different action.
You picked far too simple an example; my advice would be to go and find the book C# in Depth.
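For illustration, a minimal self-contained sketch of that substitution (the method bodies here are placeholders of mine): any method matching Action's signature can be assigned to the same delegate variable.
using System;

class Greeter
{
    public static void ShakeHands()
    {
        Console.WriteLine("Shaking hands");
    }

    public static void HowAreYou()
    {
        Console.WriteLine("How are you?");
    }

    static void Main()
    {
        // Any method matching Action's signature can sit in the same delegate variable.
        Action greet = ShakeHands;
        greet();

        greet = HowAreYou;   // same slot, different behaviour
        greet();
    }
}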

Here's a real-world example. I often use delegates when wrapping some sort of external call. For instance, we have an old app server (that I wish would just go away) which we connect to through .NET Remoting. I call the app server in a delegate from a 'safe call' function like this:
private delegate T AppServerDelegate<T>();

private T processAppServerRequest<T>(AppServerDelegate<T> delegate_) {
    try {
        return delegate_();
    }
    catch {
        // Do a bunch of standard error handling here which will be
        // the same for all appserver calls.
        throw; // rethrow (or return a default) so that every code path returns a value
    }
}

// Wrapped public call to AppServer
public int PostXYZRequest(string requestData1, string requestData2,
                          int pid, DateTime latestRequestTime) {
    return processAppServerRequest<int>(
        delegate {
            return _appSvr.PostXYZRequest(
                requestData1,
                requestData2,
                pid,
                latestRequestTime);
        });
}
Obviously the error handling is done a bit better than that, but you get the rough idea.

Delegates are used to "call" code in other classes (which might not necessarily be in the same class, the same .cs file, or even the same assembly).
In your example, delegates can simply be replaced by if statements, as you pointed out.
However, delegates are pointers to functions that "live" somewhere else in the code, somewhere you may not have (easy) access to, for organizational reasons for instance.

Delegates and the related syntactic sugar have significantly changed the C# world (2.0+).
Delegates are type-safe function pointers, so you use a delegate anywhere you want to invoke/execute a block of code at some future point in time.
Broad categories I can think of:
Callbacks/event handlers: do this when EventX happens, or do this when you are ready with the results of my async method call.
myButton.Click += delegate { Console.WriteLine("Robbery in progress. Call the cops!"); };
LINQ: selection, projection, etc. of elements where you want to do something with each element before passing it down the pipeline, e.g. select all numbers that are even, then return the square of each of those:
var list = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }
    .Where(delegate(int x) { return ((x % 2) == 0); })
    .Select(delegate(int x) { return x * x; });
// results in 4, 16, 36, 64, 100

Another use that I find a great boon: when I wish to perform the same operation, pass the same data, or trigger the same action on multiple instances of the same object type.

In .NET, delegates are also needed when updating the UI from a background thread. Since you cannot update controls from a thread other than the one that created them, you need to invoke the update code within the creating thread's context (mostly using this.Invoke).
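A minimal WinForms-style sketch of that (the MainForm class and statusLabel are made-up names; assumes a reference to System.Windows.Forms): the update is wrapped in a delegate and handed to the UI thread via Control.Invoke.
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label statusLabel = new Label();

    public MainForm()
    {
        Controls.Add(statusLabel);
    }

    // Safe to call from a background thread.
    public void UpdateStatus(string text)
    {
        if (statusLabel.InvokeRequired)
        {
            // Wrap the update in a delegate and let the UI thread execute it.
            statusLabel.Invoke(new Action(() => statusLabel.Text = text));
        }
        else
        {
            statusLabel.Text = text;
        }
    }
}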

Related

Passing constructor delegate or object for unmanaged resources

In my (simplified) problem I have a method "Reading" that can use many different implementations of some IDisposableThing. I am passing constructor delegates right now so I can use the using statement.
Is this approach of passing a delegate for the constructor of my object appropriate?
My problem is that things like List<Func<IDisposable>> start looking a bit scary (because delegates look like crap in C#), and passing in an object seems more usual and a clearer statement of intent.
Is there a better/different way of managing this situation without delegates?
public void Main()
{
    Reading(() => new DisposableThingImplementation());
    Reading(() => new AnotherDisposableThingImplementation());
}

public void Reading(Func<IDisposableThing> constructor)
{
    using (IDisposableThing streamReader = constructor())
    {
        //do things
    }
}
As I said in the comment, it's difficult to say what's best for your situation, so instead I'll just list your options so you can make an informed decision:
Continue doing what you're doing
Having to pass around objects with an unpleasantly complicated-looking type is maybe not ideal visually, but in your situation it may well be perfectly appropriate.
Use a custom delegate type
You can define a delegate like:
public delegate IDisposableThing DisposableThingConstructor();
Then anywhere you would write Func<IDisposableThing>, you can just write DisposableThingConstructor instead. For a commonly used delegate type, this may improve code readability, though this too is a matter of taste.
Move the using statements out of Reading
This really depends on whether it's sensible for the lifecycle management of these objects to be a responsibility of the Reading method or not. Given what we have of your code at the moment, we can't really judge this for you. An implementation with the lifecycle management moved out would look like:
public void Main()
{
    using (var disposableThing = new DisposableThingImplementation())
        Reading(disposableThing);
}

public void Reading(IDisposableThing disposableThing)
{
    //do things
}
Use a factory pattern
In this option, you create a class which returns new IDisposableThing implementations. Lots of information can be found on the factory pattern which you may well already know, so I won't repeat it all here. This option may well be overkill for your purposes here, adding a lot of pointless complexity, but depending on how those DisposableThings are constructed, it may have additional benefits which make it worthwhile.
Use a generic argument
This option will only work if all of your IDisposableThing implementations have a parameterless constructor. I'm guessing that's not the case, but in case it is, it's a relatively straightforward approach:
public void Reading<T>() where T : IDisposableThing, new()
{
    using (var disposableThing = new T())
    {
        //do things
    }
}
Use an Inversion of Control container
This is another option which would certainly be overkill if used for this purpose alone. I include it mostly for completeness. Inversion of control containers like Ninject will give you easy ways to manage the lifecycles of objects passed into others.
I very much doubt this would be an appropriate solution in your case, especially since the disposable objects are not being used in another class's constructor. If you later run into a situation where you're trying to manage object lifecycle in a larger, complex object graph, this option might be worth revisiting.
Construct the objects outside of the using statement
This is specifically described as "not a best practice" in the MSDN documentation, but it is an option. You can do:
public void Main()
{
    Reading(new DisposableThingImplementation());
}

public void Reading(IDisposableThing disposableThing)
{
    using (disposableThing)
    {
        //do things
    }
}
At the end of the using statement, the Dispose method will be called, but the object will not be garbage collected because it is still in scope. Trying to use the object after that would very likely cause problems because it has already been disposed. So again, while this is an option, it's unlikely to be a good one.
Is this approach of passing a delegate for the constructor of my object appropriate? My problem is that things like List<Func<IDisposable>> start looking a bit scary (because delegates look like crap in C#), and passing in an object seems more usual and a clearer statement of intent.
Yes, it's fine. However, I understand your concern about passing a list of those things... Perhaps creating a custom delegate with the same signature as Func<IDisposable> and a more explicit name (e.g. SomethingFactory) would be clearer.
Is there a better/different way of managing this situation without delegates?
You could pass a factory or a list of factories to the method. I don't think it's really "better", though; it's mostly the same, since your factory would typically be represented as an interface with a single method, which is essentially the same as a delegate.
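For comparison, a small sketch of that single-method factory (the interface name is my own; the other types come from the question's code):
public interface IDisposableThingFactory
{
    IDisposableThing Create();
}

public class DisposableThingFactory : IDisposableThingFactory
{
    public IDisposableThing Create()
    {
        return new DisposableThingImplementation();
    }
}

// Reading would then take the factory instead of the delegate:
// public void Reading(IDisposableThingFactory factory)
// {
//     using (IDisposableThing thing = factory.Create())
//     {
//         //do things
//     }
// }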

Delegates (Lambda expressions) Vs Interfaces and abstract classes

I have been looking for a neat answer to this design question with no success. I could not find help neither in the ".NET Framework design guidelines" nor in the "C# programing guidelines".
I basically have to expose a pattern as an API so the users can define and integrate their algorithms into my framework like this:
1)
// This is what I provide
public abstract class AbstractDoSomething
{
    public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; they have to implement the DoSomething method (which I can then call from within my framework and use).
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething
{
    public String Id;
    // Note: a member cannot share its enclosing class's name, so the delegate field is named Action here
    public Func<SomeThing> Action;
}
In this case, a user can only use the DoSomething class this way:
DoSomething doSomething = new DoSomething()
{
    Id = "ThisIsMyID",
    Action = () => new SomeThing()
};
Question
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
EDIT
In case 1, the registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case 2, the registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It also can have advantages in terms of allowing the user to manage lifetimes of the objects (ie: you can let the class implement IDisposable and dispose of instances on shutdown, etc), as well as being easy to extend in future versions in a way that doesn't break backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use if you want to use something like MEF to compose this automatically, which can simplify/remove the process of "registration" from the user's standpoint (as they can just create the subclass, and drop it in a folder, and have it discovered/used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so DoSomething's implementation can be split into several private/protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
Though personally, if it were up to me, I would always go with the delegate model. I just love the simplicity and elegance of higher-order functions. But while implementing the model, be careful about memory leaks. Subscribed events are one of the most common causes of memory leaks in .NET: if an object exposes events, its subscribers will not be collected until they unsubscribe, because an event subscription creates a strong reference from the publisher to the subscriber.
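A minimal sketch of that leak concern (Publisher and Subscriber are illustrative names of mine): subscribe with +=, and make sure the matching -= runs, here via IDisposable, so the publisher's delegate list stops referencing the subscriber.
using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        SomethingHappened?.Invoke(this, EventArgs.Empty);
    }
}

public class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.SomethingHappened += OnSomethingHappened;   // strong reference created here
    }

    private void OnSomethingHappened(object sender, EventArgs e)
    {
        Console.WriteLine("Handled.");
    }

    public void Dispose()
    {
        _publisher.SomethingHappened -= OnSomethingHappened;   // break the reference so this instance can be collected
    }
}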
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda being used as a callback type of functionality, where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with lambdas -- they are specific and targeted at very specific situations.
Using the abstract class (or interface) really comes down to cases where your class's behavior is driven by the environment around it: what's happening, which client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to communicate clearly to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider :
How many different functions/delegates would need to be overridden? If many, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function but many sub-portions that can be delegated out to the implementor, this is a classic case of the "Template Method" pattern, which makes the most sense to implement via inheritance.
How many different implementations of the same function will be needed? If just one, then inheritance is good, but if you have many implementations, a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them, or will it only ever use a single implementation? If switching is required, delegates might be easier, but I would be cautious here, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then use inheritance. If it can rely only on public members, then use a delegate.
Other choices would be events, and extension methods as well.

How to enforce the use of a method's return value in C#?

I have a piece of software written with fluent syntax. The method chain has a definitive "ending", before which nothing useful is actually done in the code (think NBuilder, or Linq-to-SQL's query generation not actually hitting the database until we iterate over our objects with, say, ToList()).
The problem I am having is there is confusion among other developers about proper usage of the code. They are neglecting to call the "ending" method (thus never actually "doing anything")!
I am interested in enforcing the usage of the return value of some of my methods so that we can never "end the chain" without calling that "Finalize()" or "Save()" method that actually does the work.
Consider the following code:
//The "factory" class the user will be dealing with
public class FluentClass
{
//The entry point for this software
public IntermediateClass<T> Init<T>()
{
return new IntermediateClass<T>();
}
}
//The class that actually does the work
public class IntermediateClass<T>
{
private List<T> _values;
//The user cannot call this constructor
internal IntermediateClass<T>()
{
_values = new List<T>();
}
//Once generated, they can call "setup" methods such as this
public IntermediateClass<T> With(T value)
{
var instance = new IntermediateClass<T>() { _values = _values };
instance._values.Add(value);
return instance;
}
//Picture "lazy loading" - you have to call this method to
//actually do anything worthwhile
public void Save()
{
var itemCount = _values.Count();
. . . //save to database, write a log, do some real work
}
}
As you can see, proper usage of this code would be something like:
new FluentClass().Init<int>().With(-1).With(300).With(42).Save();
The problem is that people are using it this way (thinking it achieves the same as the above):
new FluentClass().Init<int>().With(-1).With(300).With(42);
So pervasive is this problem that, with entirely good intentions, another developer once actually changed the name of the "Init" method to indicate that THAT method was doing the "real work" of the software.
Logic errors like these are very difficult to spot, and, of course, it compiles, because it is perfectly acceptable to call a method with a return value and just "pretend" it returns void. Visual Studio doesn't care if you do this; your software will still compile and run (although in some cases I believe it throws a warning). This is a great feature to have, of course. Imagine a simple "InsertToDatabase" method that returns the ID of the new row as an integer - it is easy to see that there are some cases where we need that ID, and some cases where we could do without it.
In the case of this piece of software, there is definitively never any reason to eschew that "Save" function at the end of the method chain. It is a very specialized utility, and the only gain comes from the final step.
I want somebody's software to fail at the compiler level if they call "With()" and not "Save()".
It seems like an impossible task by traditional means - but that's why I come to you guys. Is there an Attribute I can use to prevent a method from being "cast to void" or some such?
Note: The alternate way of achieving this goal that has already been suggested to me is writing a suite of unit tests to enforce this rule, and using something like http://www.testdriven.net to bind them to the compiler. This is an acceptable solution, but I am hoping for something more elegant.
I don't know of a way to enforce this at a compiler level. It's often requested for objects which implement IDisposable as well, but isn't really enforceable.
One potential option which can help, however, is to set up your class, in DEBUG only, to have a finalizer that logs/throws/etc. if Save() was never called. This can help you discover these runtime problems while debugging instead of relying on searching the code, etc.
However, make sure that this is not used in release mode, as it will incur a performance overhead; adding an unnecessary finalizer is bad for GC performance.
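A rough sketch of that DEBUG-only idea (the SaveTracker class and its members are names of mine): Debug.Fail complains if an instance is finalized without Save() ever having been called.
using System;
using System.Diagnostics;

public class SaveTracker
{
    private bool _saved;

    public void Save()
    {
        _saved = true;
        // ... do the real work here ...
#if DEBUG
        GC.SuppressFinalize(this);   // no need to run the finalizer once Save has been called
#endif
    }

#if DEBUG
    ~SaveTracker()
    {
        if (!_saved)
        {
            Debug.Fail("Save() was never called on this instance.");
        }
    }
#endif
}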
You could require specific methods to use a callback like so:
new FluentClass().Init<int>(x =>
{
    x.Save(y =>
    {
        y.With(-1);
        y.With(300);
    });
});
The With method returns some specific object, and the only way to get that object is by calling x.Save(), which itself takes a callback that lets you set up your indeterminate number of With statements. So Init takes something like this:
public T Init<T>(Func<MyInitInputType, MySaveResultType> initSetup)
I can think of a few solutions, none of them ideal.
AIUI what you want is a function which is called when the temporary variable goes out of scope (as in, when it becomes available for garbage collection, although it will probably not be garbage collected for some time yet). (See: The difference between a destructor and a finalizer?) This hypothetical function would say "if you've constructed a query in this object but not called Save, produce an error". C++/CLI calls this RAII: in C++/CLI there is the concept of a "destructor", run when the object isn't used any more, and a "finalizer", called when it's finally garbage collected. Very confusingly, C# has only a so-called destructor, but it is only called by the garbage collector (it would be valid for the framework to call it earlier, as if it were partially cleaning up the object immediately, but AFAIK it doesn't do anything like that). So what you would like is a C++/CLI destructor. Unfortunately, AIUI this maps onto the concept of IDisposable, which exposes a Dispose() method that can be called when a C++/CLI destructor would be called, or when the C# destructor is called -- but AIUI you still have to call Dispose manually, which defeats the point.
Refactor the interface slightly to convey the concept more accurately. Call the init function something like "prepareQuery" or "AAA" or "initRememberToCallSaveOrThisWontDoAnything". (The last is an exaggeration, but it might be necessary to make the point.)
This is more of a social problem than a technical problem. The interface should make it easy to do the right thing, but programmers do have to know how to use the code! Get all the programmers together and explain this simple fact once and for all. If necessary, have them all sign a piece of paper saying they understand, and that if they wilfully continue to write code which doesn't do anything, they're worse than useless to the company and will be fired.
Fiddle with the way the operators are chained, e.g. have each of the IntermediateClass functions assemble an aggregate IntermediateClass object containing all of the parameters (you mostly do it this way already?), but require an Init-like function on the original class to take that object as an argument rather than having the calls chained after it. Then you can have Save and the other functions return two different class types (with essentially the same contents), and have Init only accept the correct type.
The fact that it's still a problem suggests that either your coworkers need a helpful reminder, or they're rather sub-par, or the interface wasn't very clear (perhaps it's perfectly good, but the author didn't realise it wouldn't be clear if you only used it in passing rather than getting to know it), or you yourself have misunderstood the situation. A technical solution would be good, but you should probably also think about why the problem occurred and how to communicate more clearly, probably asking someone senior for input.
After great deliberation and trial and error, it turns out that throwing an exception from the Finalize() method was not going to work for me. Apparently, you simply can't do that: the exception gets swallowed, because garbage collection operates non-deterministically. I was also unable to get the software to call Dispose() automatically from the destructor. Jack V.'s comment explains this well; here is the link he posted, for redundancy/emphasis:
The difference between a destructor and a finalizer?
Changing the syntax to use a callback was a clever way to make the behavior foolproof, but the agreed-upon syntax was fixed, and I had to work with it. Our company is all about fluent method chains. I was also a fan of the "out parameter" solution to be honest, but again, the bottom line is the method signatures simply could not change.
Helpful information about my particular problem includes the fact that my software is only ever to be run as part of a suite of unit tests - so efficiency is not a problem.
What I ended up doing was using Mono.Cecil to reflect upon the calling assembly (the code calling into my software). Note that System.Reflection was insufficient for my purposes, because it cannot pinpoint method references, but I still needed(?) to use it to get the "calling assembly" itself (Mono.Cecil remains under-documented, so it's possible I just need to get more familiar with it in order to do away with System.Reflection altogether; that remains to be seen...).
I placed the Mono.Cecil code in the Init() method, and the structure now looks something like:
public IntermediateClass<T> Init<T>()
{
    ValidateUsage(Assembly.GetCallingAssembly());
    return new IntermediateClass<T>();
}

void ValidateUsage(Assembly assembly)
{
    // 1) Use Mono.Cecil to inspect the codebase inside the assembly
    var assemblyLocation = assembly.CodeBase.Replace("file:///", "");
    var monoCecilAssembly = AssemblyFactory.GetAssembly(assemblyLocation);

    // 2) Retrieve the list of Instructions in the calling method
    var methods = monoCecilAssembly.Modules...Types...Methods...Instructions
    // (It's a little more complicated than that...
    // if anybody would like more specific information on how I got this,
    // let me know... I just didn't want to clutter up this post)

    // 3) Those instructions refer to OpCodes and Operands....
    // Defining "invalid method" as a method that calls "Init" but not "Save"
    var methodCallingInit = method.Body.Instructions.Any(
        instruction => instruction.OpCode.Name.Equals("callvirt")
            && instruction.Operand is IMethodReference
            && instruction.Operand.ToString().Equals(INITMETHODSIGNATURE));

    var methodNotCallingSave = !method.Body.Instructions.Any(
        instruction => instruction.OpCode.Name.Equals("callvirt")
            && instruction.Operand is IMethodReference
            && instruction.Operand.ToString().Equals(SAVEMETHODSIGNATURE));

    var methodInvalid = methodCallingInit && methodNotCallingSave;

    // Note: this is partially pseudocode;
    // It doesn't 100% faithfully represent either Mono.Cecil's syntax or my own
    // There are actually a lot of annoying casts involved, omitted for sanity

    // 4) Obviously, if the method is invalid, throw
    if (methodInvalid)
    {
        throw new Exception(String.Format("Bad developer! BAD! {0}", method.Name));
    }
}
Trust me, the actual code is even uglier looking than my pseudocode.... :-)
But Mono.Cecil just might be my new favorite toy.
I now have a method that refuses to be run its main body unless the calling code "promises" to also call a second method afterwards. It's like a strange kind of code contract. I'm actually thinking about making this generic and reusable. Would any of you have a use for such a thing? Say, if it were an attribute?
What if you made it so Init and With don't return objects of type FluentClass? Have them return, e.g., an UninitializedFluentClass which wraps a FluentClass object. Then calling .Save() on the UninitializedFluentClass object calls it on the wrapped FluentClass object and returns it. If they don't call Save, they don't get a FluentClass object.
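A rough sketch of that wrapper idea (the wrapper type is named after the answer; here it delegates to the question's IntermediateClass<T>, and the details are my assumption):
public class UninitializedFluentClass<T>
{
    private readonly IntermediateClass<T> _inner;

    internal UninitializedFluentClass(IntermediateClass<T> inner)
    {
        _inner = inner;
    }

    public UninitializedFluentClass<T> With(T value)
    {
        // Keep returning the wrapper, never the real object.
        return new UninitializedFluentClass<T>(_inner.With(value));
    }

    public IntermediateClass<T> Save()
    {
        // Only Save() hands back the underlying, fully "finished" object.
        _inner.Save();
        return _inner;
    }
}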
In Debug mode, besides implementing IDisposable, you can set up a timer that will throw an exception after one second if the result method has not been called.
Use an out parameter! All the outs must be used.
Edit: I am not sure if it will help, though...
It would break the fluent syntax.

why do we need delegates [duplicate]

I'm looking to implement the Observer pattern in VB.NET or C# or some other first-class .NET language. I've heard that delegates can be used for this, but can't figure out why they would be preferred over plain old interfaces implemented on observers. So,
Why should I use delegates instead of defining my own interfaces and passing around references to objects implementing them?
Why might I want to avoid using delegates, and go with good ol'-fashioned interfaces?
When you can directly call a method, you don't need a delegate.
A delegate is useful when the code calling the method doesn't know/care what the method it's calling is -- for example, you might invoke a long-running task and pass it a delegate to a callback method that the task can use to send notifications about its status.
Here is a (very silly) code sample:
enum TaskStatus
{
    Started,
    StillProcessing,
    Finished
}

delegate void CallbackDelegate(Task t, TaskStatus status);

class Task
{
    public void Start(CallbackDelegate callback)
    {
        callback(this, TaskStatus.Started);

        // calculate PI to 1 billion digits
        for (...)
        {
            callback(this, TaskStatus.StillProcessing);
        }

        callback(this, TaskStatus.Finished);
    }
}

class Program
{
    static void Main(string[] args)
    {
        Task t = new Task();
        t.Start(new CallbackDelegate(MyCallbackMethod));
    }

    static void MyCallbackMethod(Task t, TaskStatus status)
    {
        Console.WriteLine("The task status is {0}", status);
    }
}
As you can see, the Task class doesn't know or care that -- in this case -- the delegate is to a method that prints the status of the task to the console. The method could equally well send the status over a network connection to another computer. Etc.
You're an O/S, and I'm an application. I want to tell you to call one of my methods when you detect something happening. To do that, I pass you a delegate to the method of mine which I want you to call. I don't call that method of mine myself, because I want you to call it when you detect the something. You don't call my method directly because you don't know (at your compile-time) that the method exists (I wasn't even written when you were built); instead, you call whichever method is specified by the delegate which you receive at run-time.
Well, technically you don't have to use delegates (except when using event handlers; then they're required). You can get by without them. Really, they are just another tool in the toolbox.
The first thing that comes to mind about using them is Inversion of Control. Any time you want to control how a function behaves from outside of it, the easiest way to do that is to take a delegate as a parameter and have the function execute it.
You're not thinking like a programmer.
The question is, Why would you call a function directly when you could call a delegate?
A famous aphorism of David Wheeler goes: "All problems in computer science can be solved by another level of indirection."
I'm being a bit tongue-in-cheek. Obviously, you will call functions directly most of the time, especially within a module. But delegates are useful when a function needs to be invoked in a context where the containing object is not available (or relevant), such as event callbacks.
There are two places that you could use delegates in the Observer pattern. Since I am not sure which one you are referring to, I will try to answer both.
The first is to use delegates in the subject instead of a list of IObservers. This approach seems a lot cleaner at handling multicasting since you basically have
private delegate void UpdateHandler(string message);
private UpdateHandler Update;

public void Register(IObserver observer)
{
    Update += observer.Update;
}

public void Unregister(IObserver observer)
{
    Update -= observer.Update;
}

public void Notify(string message)
{
    Update(message);
}
instead of
public Subject()
{
    observers = new List<IObserver>();
}

public void Register(IObserver observer)
{
    observers.Add(observer);
}

public void Unregister(IObserver observer)
{
    observers.Remove(observer);
}

public void Notify(string message)
{
    // call update method for every observer
    foreach (IObserver observer in observers)
    {
        observer.Update(message);
    }
}
Unless you need to do something special and require a reference to the entire IObserver object, I would think the delegates would be cleaner.
The second case is to pass delegates instead of IObservers, for example:
public delegate void UpdateHandler(string message);
private UpdateHandler Update;

public void Register(UpdateHandler observerRoutine)
{
    Update += observerRoutine;
}

public void Unregister(UpdateHandler observerRoutine)
{
    Update -= observerRoutine;
}

public void Notify(string message)
{
    Update(message);
}
With this, observers don't need to implement an interface. You could even pass in a lambda expression. This change in the level of control is pretty much the difference; whether that is good or bad is up to you.
A delegate is, in effect, a reference to a single method that you pass around, not to an object. An interface is a reference to a subset of the methods implemented by an object.
If, in some component of your application, you need access to more than one method of an object, then define an interface representing the subset of the object's methods that the component needs, implement that interface on all the classes you might need to pass to this component, and then pass instances of those classes by that interface instead of by their concrete class.
If, on the other hand, in some method or component all you need is one of several methods, which can live on any number of different classes but all have the same signature, then you need a delegate.
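To illustrate the contrast (the type and member names here are only illustrative):
// Several related methods on the same object -> use an interface.
public interface IObserver
{
    void Update(string message);
    void Reset();
}

// Exactly one method of a known signature -> use a delegate.
public delegate void UpdateHandler(string message);

public class Subject
{
    public void NotifyViaInterface(IObserver observer)
    {
        observer.Update("hello");
    }

    public void NotifyViaDelegate(UpdateHandler handler)
    {
        handler("hello");
    }
}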
I'm repeating an answer I gave to this question.
I've always like the Radio Station metaphor.
When a radio station wants to broadcast something, it just sends it out. It doesn't need to know whether anybody is actually out there listening. Your radio is able to register itself with the radio station (by tuning in with the dial), and all radio station broadcasts (events in our little metaphor) are received by the radio, which translates them into sound.
Without this registration (or event) mechanism, the radio station would have to contact each and every radio in turn and ask if it wanted the broadcast, and if your radio said yes, send the signal to it directly.
Your code may follow a very similar paradigm, where one class performs an action, but that class may not know, or may not want to know, who will care about or act on that action taking place. So it provides a way for any object to register or unregister itself for notification that the action has taken place.
Delegates are strong typing for function/method interfaces.
If your language takes the position that there should be strong typing, and that it has first-class functions (both of which C# does), then it would be inconsistent to not have delegates.
Consider any method that takes a delegate. If you didn't have a delegate, how would you pass something to it? And how would the callee have any guarantees about its type?
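A minimal sketch of that typing guarantee, using the built-in Comparison<T> delegate type: only a method or lambda with the matching signature compiles here.
using System;
using System.Collections.Generic;

class TypedDelegateDemo
{
    static void Main()
    {
        var words = new List<string> { "pear", "fig", "banana" };

        // Sort takes a Comparison<string>, so only an int (string, string) shape is accepted.
        words.Sort((a, b) => a.Length.CompareTo(b.Length));

        Console.WriteLine(string.Join(", ", words)); // fig, pear, banana
    }
}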
I've heard some "events evangelists" talk about this, and they say that the more decoupled events are, the better.
Preferably, the event source should never know about the event listeners, and the event listener should never care about who originated the event. This is not how things are today, because in the event listener you normally receive the source object of the event.
With this said, delegates are the perfect tool for this job. They allow decoupling between the event source and the event observer, because the event source doesn't need to keep a list of all observer objects; it only keeps a list of "function pointers" (delegates) to the observers.
Because of this, I think this is a great advantage over Interfaces.
Look at it the other way. What advantage would using a custom interface have over using the standard way that is supported by the language in both syntax and library?
Granted, there are cases where a custom-tailored solution might have advantages, and in such cases you should use it. In all other cases, use the most canonical solution available. It's less work, more intuitive (because it's what users expect), has more support from tools (including the IDE), and chances are the compiler treats it differently, resulting in more efficient code.
Don't reinvent the wheel (unless the current version is broken).
Actually, there was an interesting back-and-forth between Sun and Microsoft about delegates. While Sun took a fairly strong stance against delegates, I feel that Microsoft made an even stronger case for using them. Here are the posts:
http://java.sun.com/docs/white/delegates.html
http://msdn.microsoft.com/en-us/vjsharp/bb188664.aspx
I think you'll find these interesting reading...
I think it is more related to syntactic sugar and a way to organize your code; a good use would be handling several methods related to a common context, whether they belong to an object or to a static class.
It is not that you are forced to use them; you can program something with or without them, but using them or not might affect how organized, readable, and, why not, how cool the code is, and maybe save some lines of code.
Every example given here is a good place where you could implement them. As someone said, delegates are just another feature in the language that you can play with.
Greetings
Here is something that I can write down as a reason for using delegates.
The following code is written in C#; please follow the comments.
public delegate string TestDelegate();

protected void Page_Load(object sender, EventArgs e)
{
    TestDelegate TD1 = new TestDelegate(DisplayMethodD1);
    TestDelegate TD2 = new TestDelegate(DisplayMethodD2);
    TD2 = TD1 + TD2; // Make TD2 a multi-cast delegate

    lblDisplay.Text = TD1(); // invoke delegate
    lblAnotherDisplay.Text = TD2();

    // Note: Using a delegate allows the programmer to encapsulate a reference
    // to a method inside a delegate object. It's like a function pointer
    // in C or C++.
}

// The signature has to be the same.
public string DisplayMethodD1()
{
    //lblDisplay.Text = "Multi-Cast Delegate on EXECUTION"; // Enable on multi-cast
    return "This is returned from the first method of delegate explanation";
}

// The method can also be static.
public static string DisplayMethodD2()
{
    return " Extra words from second method";
}
Best Regards,
Pritom Nandy,
Bangladesh
Here is an example that might help.
There is an application that uses a large set of data. A feature is needed that allows the data to be filtered; six different filters can be specified.
The immediate thought is to create six different methods that each return the data filtered. For example:
public Data FilterByAge(int age)
public Data FilterBySize(int size)
.... and so on.
This is fine, but it is very limited and produces rubbish code because it's closed to expansion.
A better way is to have a single Filter method and to pass in information about how the data should be filtered. This is where a delegate can be used. The delegate is a function that can be applied to the data in order to filter it:
public Data Filter(Func<Data, bool> filter)
then the code to use this becomes
Filter(data => data.age > 30);
Filter(data => data.size == 19);
The code data => ... becomes a delegate. The code becomes much more flexible and remains open.

C#: Why can't we have inner methods / local functions?

Very often it happens that I have private methods which become very big and contain repeated tasks, but these tasks are so specific that it doesn't make sense to make them available to any other part of the code.
So it would be really great to be able to create 'inner methods' in this case.
Is there any technical (or even philosophical?) limitation that prevents C# from giving us this? Or did I miss something?
Update from 2016: This is coming and it's called a 'local function'. See marked answer.
Well, we can have "anonymous methods" defined inside a function (I don't suggest using them to organize a large method):
void test() {
    Action t = () => Console.WriteLine("hello world");              // C# 3.0+
    // Action t = delegate { Console.WriteLine("hello world"); };   // C# 2.0+
    t();
}
If something is long and complicated, then it's usually good practice to refactor it into a separate class (either normal or static, depending on context); there you can have private methods which are specific to that functionality only.
I know a lot of people don't like regions, but this is a case where they could prove useful, by grouping your specific methods into a region.
Could you give a more concrete example? After reading your post I have the following impression, which is of course only a guess, due to limited information:
Private methods are not available outside your class, so they are hidden from any other code anyway.
If you want to hide private methods from other code in the same class, your class might be too big and might violate the single responsibility principle.
Have a look at anonymous delegates and lambda expressions. It's not exactly what you asked for, but they might solve most of your problems.
Achim
If your method becomes too big, consider putting it in a separate class or creating private helper methods. Generally I create a new method whenever I would otherwise have written a comment.
The better solution is to refactor this method into a separate class. Create an instance of that class as a private field in your initial class. Make the big method public, and refactor it into several private methods so it is much clearer what it does.
Seems like we're going to get exactly what I wanted with Local Functions in C# 7 / Visual Studio 15:
https://github.com/dotnet/roslyn/issues/2930
private int SomeMethodExposedToObjectMembers(int input)
{
    int InnerMethod(bool b)
    {
        // TODO: Change return based on parameter b
        return 0;
    }

    var calculation = 0;
    // TODO: Some calculations based on input, store result in calculation

    if (calculation > 0) return InnerMethod(true);

    return InnerMethod(false);
}
Too bad I had to wait more than 7 years for this :-)
See also other answers for earlier versions of C#.
