I need to fire the SpeechRecognized event manually for unit testing, so I can't use the EmulateRecognize method of a SpeechRecognitionEngine.
Edit:
I have already encapsulated the speech recognition in a separate class with its own interface so that I can mock it.
I need to raise the event because I Set() an AutoResetEvent inside the handler, and the unit test needs that in order to proceed.
The general idea with unit tests is not to use the real thing, because the real thing is usually either:
slow (e.g. a database),
dangerous to poke too often (e.g. the Google Search API), or
not available (e.g. a web service or hardware).
For such scenarios you're supposed to use mocks/stubs: in other words, things that behave identically but are, in reality, fully under your control.
In your case, even if a SpeechRecognitionEngine might be available, it would be too cumbersome for unit tests. Who or what would speak things to it? And even if you only want to trigger an event, why instantiate a real SpeechRecognitionEngine at all?
Looking at the MSDN definition of SpeechRecognitionEngine shows that it doesn't implement an interface, which means it is difficult to mock/stub directly.
For this case, you need to wrap (in other words, encapsulate) SpeechRecognitionEngine in a class of your own which implements your own interface. Then all you need is two implementations of that interface: one holding a real SpeechRecognitionEngine for real speech recognition, and another for unit tests, which simply invokes your callback itself instead of relying on the SpeechRecognized event.
You just swap one instance for the other, and your code won't see a difference, as both implement the same interface. A sketch of this shape follows below.
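Here is a minimal sketch of that shape. All of the interface and class names are invented for illustration; only SpeechRecognitionEngine and its members are the real API:

using System;
using System.Speech.Recognition;

// Illustrative interface; the names are made up for this sketch.
public interface ISpeechRecognizer
{
    event Action<string> PhraseRecognized;
    void StartListening();
}

// Production implementation wrapping the real engine.
public sealed class RealSpeechRecognizer : ISpeechRecognizer
{
    private readonly SpeechRecognitionEngine _engine = new SpeechRecognitionEngine();

    public event Action<string> PhraseRecognized;

    public void StartListening()
    {
        // Grammar setup omitted; the engine needs at least one grammar loaded.
        _engine.SpeechRecognized += (s, e) => OnPhrase(e.Result.Text);
        _engine.SetInputToDefaultAudioDevice();
        _engine.RecognizeAsync(RecognizeMode.Multiple);
    }

    private void OnPhrase(string text)
    {
        var handler = PhraseRecognized;
        if (handler != null) handler(text);
    }
}

// Test double: fires the same event on demand, no engine, no microphone.
public sealed class FakeSpeechRecognizer : ISpeechRecognizer
{
    public event Action<string> PhraseRecognized;

    public void StartListening() { }

    public void SimulateRecognition(string text)
    {
        var handler = PhraseRecognized;
        if (handler != null) handler(text);
    }
}

In a test you construct the FakeSpeechRecognizer, hand it to the code under test wherever ISpeechRecognizer is expected, and call SimulateRecognition("whatever") to drive the handler (and release the AutoResetEvent mentioned in the question).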
If you just want to simulate the event, you can call the event handler directly, since it is an ordinary method, or call another method if you can't construct the EventArgs. But the problem is that you'll have to expose internal methods of your class (e.g. mark them public or internal), and that does look ugly.
private void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
this.ProcessSpeechRecognition(e.Result);
}
public void ProcessSpeechRecognition(RecognitionResult result)
{
// your logic here
}
Then in the test you just call something similar to the below:
ProcessSpeechRecognition(result);
One caveat: RecognitionResult has no public constructor, so you can't actually write new RecognitionResult { Text = "test" }. In practice you'd change ProcessSpeechRecognition to accept a string (or a small result type of your own) and pass "test" directly.
I posted an answer earlier describing best practices for TDD; here is an answer specific to SpeechRecognitionEngine.
Microsoft has already thought about emulating speech recognition. Here is the MSDN article for the SpeechRecognitionEngine.EmulateRecognize method:
https://learn.microsoft.com/en-us/dotnet/api/system.speech.recognition.speechrecognitionengine.emulaterecognize
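For completeness, here is a small sketch of how emulation is typically used. The grammar choice and the demo class are just for illustration; EmulateRecognize itself is the real API:

using System;
using System.Speech.Recognition;

class EmulationDemo
{
    static void Main()
    {
        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new DictationGrammar());
            engine.SpeechRecognized += (s, e) =>
                Console.WriteLine("Recognized: " + e.Result.Text);

            // Feeds text through the normal recognition pipeline, so the
            // SpeechRecognized event fires just as it would for real audio.
            engine.EmulateRecognize("hello world");
        }
    }
}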
Related
There seems to be a ton of advice for this sort of thing in the context of a GUI application, but I think my particular scenario is different enough to warrant asking. To sum up my question: how do you test events?
Now for the long-winded explanation. I've worked with Point of Service hardware for a little while now, which means I've had to write a few OPOS Service Objects. After a few years of that, I managed to write my first COM-visible C# Service Object and put it into production. I tried my best to unit test the entire project and found it rather difficult, but I was able to come up with good unit tests for most of the Service Object's interface implementation. The hardest part was the events.
Granted, this scenario is the biggest and most common one I've faced, but I've come across similar scenarios in other applications of mine where testing an event just seemed awkward. So, to set the scene: the common Control Object (CO) has at most 5 events that a person can subscribe to. When the CO's Open method is called, OPOS finds the Service Object (SO), creates an instance of it, and then calls its OpenService method. The third parameter of OpenService is a reference to the CO. I don't have to define the entire CO; I only have to define the callback methods for those 5 events. An example of the MSR's definition is this:
[ComImport, InterfaceType(ComInterfaceType.InterfaceIsDual), Guid("CCB91121-B81E-11D2-AB74-0040054C3719")]
internal interface MsrControlObject
{
[DispId(1)]
void SOData(int status);
[DispId(2)]
void SODirectIO(int eventNumber, ref int pData, ref string pString);
[DispId(3)]
void SOError(int resultCode, int resultCodeExtended, int errorLocus, ref int pErrorResponse);
[DispId(4)]
void SOOutputCompleteDummy(int outputId);
[DispId(5)]
void SOStatusUpdate(int data);
}
and my OpenService method would have this line of code
public class FakeMsrServiceObject : IUposBase
{
    MsrControlObject _controlObject;

    public int OpenService(string deviceClass, string deviceName, object dispatchObject)
    {
        _controlObject = (MsrControlObject)dispatchObject;
        return 0; // success
    }

    // example of how to fire an event
    private void FireDataEvent(int status)
    {
        _controlObject.SOData(status);
    }
}
So I thought to myself: for better testing, let's make a ControlObjectDispatcher. It will let me enqueue events and then fire them at the CO when the conditions are correct. This is where I'm at. I know roughly how to test-drive the implementation, but it just feels wrong.
Let's take the DataEvent as an example. Two conditions have to be met for a DataEvent to fire: the boolean property DataEventEnabled must be true, and the boolean property FreezeEvents must be false. Also, all events are strictly FIFO, so a Queue is perfect. And since I've written this before, I know what the implementation will be. But writing a test for it that instills confidence in someone new to the project is difficult. Consider this pseudo-code:
[Test]
public void WhenMultipleEventsAreQueuedTheyAreFiredSequentiallyWhenConditionsAreCorrect()
{
_dispatcher.EnqueueDataEvent(new DataEvent(42));
_dispatcher.EnqueueStatusUpdateEvent(new StatusUpdateEvent(1));
Sleep(1000);
_spy.AssertNoEventsHaveFired();
_spy.AssertEventsCount(2);
_serviceObject.SetNumericProperty(PIDX_DataEventEnabled, 1);
_spy.AssertDataEventFired();
_spy.AssertStatusUpdateEventFired();
_serviceObject.GetNumericProperty(PIDX_DataEventEnabled).Should().BeEqualTo(0, "because firing a DataEvent sets DataEventEnabled to false");
}
Everyone reading this could wonder (without knowing the implementation): how do I know that this event doesn't fire, say, a minute later? How do I know that that crazy Robert Snyder person didn't use a for loop and forget to exit the FireDataEvent routine after the iterations were all up? You really don't. Granted, you could test for a minute... but that defeats the purpose of a unit test.
So to sum up again: how does a person write a test for events? Events can fire whenever, and they can sometimes take longer to process and fire than expected. In the integration tests for my first implementation, if I didn't sleep for, say, 50 ms before asserting that an event was called, the test would fail with something like: expected data event to have fired, but was never fired.
Are there any test frameworks built for events? Are there any common coding practices that cover this?
It's a bit unclear whether you're looking to do unit testing or integration testing for your events, since you talk about both. However, given the tags on the question, I'm going to assume your primary interest is from a unit testing perspective. From that perspective it doesn't make much difference whether you are testing an event or a normal method. The goal of a unit test is to exercise individual chunks of functionality in isolation, so whilst having a sleep in an integration test might make sense (although I'd still try to avoid it and use some other kind of synchronisation where possible), in a unit test I'd take it as a flag that the functionality being tested hasn't been isolated sufficiently.
So, for me, there’s two slices of event driven testing. Firstly you want to test that any events your class fires are fired when the appropriate conditions are met. Secondly you want to test that any handlers for the events perform the expected actions.
Testing that the handlers behave as expected should be similar to any other test that you would write. You set up the handler in the expected state, set your expectations, call into it as if you were the event generator, and then verify any relevant behaviour.
Testing that events are fired is essentially the same, setup the state that you would expect to fire an event, and then verify that an appropriately populated event is fired and any other state change takes place.
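For the "event is fired" side, a spy can be as simple as a local variable captured by a handler, which keeps the test synchronous and sleep-free. A minimal NUnit-style sketch, with all type and member names invented for illustration:

[Test]
public void DataEvent_Fires_WhenDataEventEnabledBecomesTrue()
{
    var dispatcher = new ControlObjectDispatcher();    // hypothetical class under test
    int? received = null;
    dispatcher.DataEvent += status => received = status;

    dispatcher.EnqueueDataEvent(42);
    Assert.That(received, Is.Null, "nothing should fire while disabled");

    dispatcher.DataEventEnabled = true;                // the condition that releases the queue
    Assert.That(received, Is.EqualTo(42));
}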
So, looking at your pseudo code, I would say that you have at least two tests:
// Setup (used for both tests)
// Test may be missing setup for dispatcher state == data event disabled.
_dispatcher.EnqueueDataEvent(new DataEvent(42));
_dispatcher.EnqueueStatusUpdateEvent(new StatusUpdateEvent(1));
// Sleep(1000); // This shouldn’t be in a unit test.
// Test 1 – VerifyThatDataAndStatusEventsDoNotFireWhenDataEventDisabled
_spy.AssertNoEventsHaveFired();
_spy.AssertEventsCount(2);
// Test 2 – VerifyThatEnablingDataEventFiresPendingDataEvents
_serviceObject.SetNumericProperty(PIDX_DataEventEnabled, 1);
_spy.AssertDataEventFired();
_spy.AssertStatusUpdateEventFired();
// Test 3? – This could be part of Test 2, or it could be a different test to VerifyThatDataEventsAreDisabledOnceADataEventHasBeenTriggered.
_serviceObject.GetNumericProperty(PIDX_DataEventEnabled).Should().BeEqualTo(0, "because firing a DataEvent sets DataEventEnabled to false");
Looking at the test code, there isn't anything to suggest that you've implemented the actual serviceObject using any kind of for loop, or that a minute of testing would have any impact on its behaviour. If you weren't thinking about it from the perspective of event programming, would you really be wondering whether calling into SetNumericProperty could result in 'a for loop, forgetting to exit the FireDataEvent routine after the iterations were all up'? If that were the case, then either SetNumericProperty wouldn't return, or you have a non-linear implementation and you're possibly testing the wrong thing in the wrong place. Without seeing your event generation code it's hard to advise further, though...
Events can fire whenever... and they can sometimes take longer to process and fire than expected
Whilst this may be true when your application is running, it shouldn’t be true when you’re doing unit testing. Your unit tests should trigger events for defined conditions and test that those events have been triggered and been generated correctly. To achieve this you need to aim for unit isolation and you may have to accept that some thin elements need to be integration tested, rather than unit tested in order to achieve this isolation. So if you were dealing with events triggered from outside your app you may end up with something like this:
public interface IInternalEventProcessor
{
    void SOData(int status);
    void SODirectIO(int eventNumber, int pData, string pString);
}

public class ExternalEventProcessor
{
    IInternalEventProcessor _internalProcessor;

    // Ideally also pass in an interface to the external system to allow unit testing.
    public ExternalEventProcessor(IInternalEventProcessor internalProcessor)
    {
        _internalProcessor = internalProcessor;
        // Register event subscriptions with the external system.
    }

    public void SOData(int status)
    {
        _internalProcessor.SOData(status);
    }

    public void SODirectIO(int eventNumber, ref int pData, ref string pString)
    {
        _internalProcessor.SODirectIO(eventNumber, pData, pString);
    }
}
The purpose of the ExternalEventProcessor is to decouple the dependency on the external system so that unit testing the event handling in the InternalEventProcessor is easier. Ideally, you would still unit test that the ExternalEventProcessor registers for events correctly and passes them through, by supplying mocks of the external system and the internal implementation; however, if you can't, then because the class is cut down to the bare minimum, having integration testing only for this class might be a realistic option.
I have been looking for a neat answer to this design question without success. I could not find help in either the ".NET Framework Design Guidelines" or the "C# Programming Guidelines".
I basically have to expose a pattern as an API so the users can define and integrate their algorithms into my framework like this:
1)
// This what I provide
public abstract class AbstractDoSomething
{
    public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; they have to implement the DoSomething method (which I can then call from within my framework and use).
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething
{
    public String Id;
    // Note: the field can't be named DoSomething, because a member can't
    // share the name of its enclosing type. It also has to be public for
    // the object-initializer syntax below to work.
    public Func<SomeThing> DoSomethingFunc;
}
In this case, a user can only use the DoSomething class this way ("do" is a reserved word, so the variable needs another name):
var doSomething = new DoSomething
{
    Id = "ThisIsMyID",
    DoSomethingFunc = () => new SomeThing()
};
Question
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
EDIT
In case 1, the registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case 2, the registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It can also have advantages in terms of letting the user manage object lifetimes (i.e. you can let the class implement IDisposable and dispose of instances on shutdown, etc.), and it is easy to extend in future versions without breaking backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use with something like MEF to compose this automatically, which can simplify or remove the "registration" step from the user's standpoint (they can just create the subclass, drop it in a folder, and have it discovered and used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
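One possible shape for that registration API, as a sketch; the framework internals here are invented, and SomeThing is the return type from the question:

using System;
using System.Collections.Generic;

public static class MyFramework
{
    private static readonly Dictionary<string, Func<SomeThing>> Operations =
        new Dictionary<string, Func<SomeThing>>();

    public static void AddOperation(string id, Func<SomeThing> operation)
    {
        Operations[id] = operation;
    }

    public static SomeThing Run(string id)
    {
        return Operations[id](); // invoke the user-supplied delegate
    }
}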
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so its implementation can be split into several private/protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
Though personally, if it were up to me, I would always go with the delegate model; I just love the simplicity and elegance of higher-order functions. But while implementing it, be careful about memory leaks: subscribed events are one of the most common causes of memory leaks in .NET. If an object exposes events, every subscriber is kept alive until it unsubscribes, because the event holds a strong reference to it. A quick sketch of the mechanics follows below.
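In this sketch (types invented for illustration), the publisher's event pins the subscriber in memory until the -= runs:

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    public void Handle(object sender, EventArgs e) { /* react */ }
}

public static class LeakDemo
{
    public static void Run(Publisher longLived)
    {
        var subscriber = new Subscriber();
        longLived.SomethingHappened += subscriber.Handle;
        // While subscribed, 'subscriber' cannot be collected: the event's
        // invocation list inside 'longLived' references it.
        longLived.SomethingHappened -= subscriber.Handle; // now it can be
    }
}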
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda as being used for callback-type functionality, where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with the lambdas -- they are specific and are targeted for very specific situations.
Using the abstract class (or interface) really comes down to where your class' behavior is driven by the environment around it. What's happening, what client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to clearly communicate to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider :
How many different functions/delegates would need to be overridden? If many functions, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function but many sub-portions that can be delegated out to the implementor, this is a classic case of the Template Method pattern, which makes the most sense to inherit.
How many different implementations of the same function will be needed? If just one, then inheritance is good; but if you have many implementations, a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them, or will it only ever use a single implementation? If switching is required, delegates might be easier, but I would caution this, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then inheritance. If it can rely only on public members, then a delegate.
Other choices would be events, and extension methods as well.
I just realized static events exist, and I'm curious how people use them. I wonder how the comparison holds up relative to static vs. instance methods. For instance, a static method is basically a global function. But I've always associated events with instances of objects, and I'm having trouble thinking of them at the global level.
Here is some code to refer to if it helps the explanation:
void Main()
{
var c1 = new C1();
c1.E1 += () => Console.WriteLine ("E1");
C1.E2 += () => Console.WriteLine ("E2");
c1.F1();
}
// <<delegate>>+D()
public delegate void D();
// +<<event>>E1
// +<<class>><<event>>E2
// +F()
// <<does>>
// <<fire>>E1
// <<fire>>E2
public class C1
{
public void F1()
{
OnE1();
OnE2();
}
public event D E1;
private void OnE1()
{
if(E1 != null)
{
E1();
}
}
static public event D E2;
static private void OnE2()
{
if(E2 != null)
{
E2();
}
}
}
Be wary of static events. Remember that, when an object subscribes to an event, a reference to that object is held by the publisher of the event. That means that you have to be very careful about explicitly unsubscribing from static events as they will keep the subscriber alive forever, i.e., you may end up with the managed equivalent of a memory leak.
Much of OOP can be thought of in terms of message passing.
A method call is a message from the caller to the callee (carrying the parameters) and a message back with the return value.
An event is a message from the source to the subscriber. There are thus potentially two instances involved, the one sending the message and the one receiving it.
With a static event, there is no sending instance (just a type, which may or may not be a class). There still can be a recipient instance encoded as the target of the delegate.
In case you're not familiar with static methods
You're probably already familiar with static methods. In case you're not, an easy-to-understand difference is that you don't need to create an instance of an object to use a static method, but you DO need to create an instance to call a non-static method.
A good example is the System.IO.Directory and System.IO.DirectoryInfo classes.
The Directory class offers static methods, while the DirectoryInfo class does not.
There are two articles describing them here for you to see the difference for yourself.
http://visualcsharptutorials.com/2011/01/system-io-directory-class/
http://visualcsharptutorials.com/2011/01/system-io-directoryinfo-class/
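A quick sketch of the difference, using those two real classes:

using System.IO;

class StaticVsInstance
{
    static void Main()
    {
        // Static: call directly on the class, no instance needed.
        string[] paths = Directory.GetFiles(@"C:\temp");

        // Instance: create a DirectoryInfo first, then call its members.
        var info = new DirectoryInfo(@"C:\temp");
        FileInfo[] files = info.GetFiles();
    }
}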
Now on to static events...
However, static events are seldom seen in the wild. There are very few cases where I can think of actually wanting one, but there is a CodeProject article that shows one potential use.
http://www.codeproject.com/KB/cs/staticevent.aspx
The key thought here is taken from the article's explanation:
We saw this property as a separate object and we made sure that there is only one instance of it at a time. And all instances of transactions knew where to find it when needed. There is a fine difference though. The transactions will not need to know about the changes happening on the exchange rate, rather they will use the last changed value at the time that they use it by requesting the current value. This is not enough when, for example, we want to implement an application where the user interface reacts immediately on changes in the UI characteristics like font, as if it has to happen at real-time. It would be very easy if we could have a static property in the Font class called currentFont and a static method to change that value and a static event to all instances to let them know when they need to update their appearance.
As .NET developers we're trained to work with a disconnected model; think of ADO.NET compared to classic ADO. In a VB6 app, you could use data controls that allowed the following functionality: if you were running the app on your PC, the data in your grid would update when someone on another PC edited the data.
This isn't something that .NET developers are used to. We're very used to the disconnected model. Static events enable a more "connected" experience. (even if that experience is something we're not used to any more.)
For some insight, check this link: http://www.codeproject.com/KB/cs/staticevent.aspx
A static event can be used:
when no instance exists
to multicast an event to all existing instances
when you have a static class which can fire events
BUT one should use them with caution... see this discussion: http://groups.google.com/group/microsoft.public.dotnet.languages.csharp/browse_thread/thread/2ac862f346b24a15/8420fbd9294ab12a%238420fbd9294ab12a?sa=X&oi=groupsr&start=1&num=2
more info
http://msdn.microsoft.com/en-us/library/8627sbea.aspx
http://dylanbeattie.blogspot.com/2008/05/firing-static-events-from-instance.html
http://www.nivisec.com/2008/09/static-events-dont-release.html
Static members are not "global," they are simply members of the class, not of class instances. This is as true for events as it is for methods, properties, fields, etc.
I can't give an example for using a static event, because I generally don't find static members to be useful in most cases. (They tend to hint at anti-patterns, like Singleton.)
I'm looking to implement the Observer pattern in VB.NET or C# or some other first-class .NET language. I've heard that delegates can be used for this, but can't figure out why they would be preferred over plain old interfaces implemented on observers. So,
Why should I use delegates instead of defining my own interfaces and passing around references to objects implementing them?
Why might I want to avoid using delegates, and go with good ol'-fashioned interfaces?
When you can directly call a method, you don't need a delegate.
A delegate is useful when the code calling the method doesn't know/care what the method it's calling is -- for example, you might invoke a long-running task and pass it a delegate to a callback method that the task can use to send notifications about its status.
Here is a (very silly) code sample:
enum TaskStatus
{
Started,
StillProcessing,
Finished
}
delegate void CallbackDelegate(Task t, TaskStatus status);
class Task
{
public void Start(CallbackDelegate callback)
{
callback(this, TaskStatus.Started);
// calculate PI to 1 billion digits
for (int i = 0; i < 10; i++) // placeholder loop so the sample compiles
{
callback(this, TaskStatus.StillProcessing);
}
callback(this, TaskStatus.Finished);
}
}
class Program
{
static void Main(string[] args)
{
Task t = new Task();
t.Start(new CallbackDelegate(MyCallbackMethod));
}
static void MyCallbackMethod(Task t, TaskStatus status)
{
Console.WriteLine("The task status is {0}", status);
}
}
As you can see, the Task class doesn't know or care that -- in this case -- the delegate is to a method that prints the status of the task to the console. The method could equally well send the status over a network connection to another computer. Etc.
You're an O/S, and I'm an application. I want to tell you to call one of my methods when you detect something happening. To do that, I pass you a delegate to the method of mine which I want you to call. I don't call that method of mine myself, because I want you to call it when you detect the something. You don't call my method directly because you don't know (at your compile-time) that the method exists (I wasn't even written when you were built); instead, you call whichever method is specified by the delegate which you receive at run-time.
Well technically, you don't have to use delegates (except when using event handlers, then it's required). You can get by without them. Really, they are just another tool in the tool box.
The first thing that comes to mind about using them is Inversion Of Control. Any time you want to control how a function behaves from outside of it, the easiest way to do it is to place a delegate as a parameter, and have it execute the delegate.
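A small sketch of that inversion, with invented names: the function owns the iteration, the caller owns the behaviour.

using System;
using System.IO;

static class LineProcessor
{
    // The caller decides what "process" means by passing a delegate.
    public static void ForEachLine(string path, Action<string> process)
    {
        foreach (var line in File.ReadLines(path))
        {
            process(line);
        }
    }
}

// Usage: behaviour injected from outside the function.
// LineProcessor.ForEachLine("log.txt", line => Console.WriteLine(line.ToUpperInvariant()));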
You're not thinking like a programmer.
The question is, Why would you call a function directly when you could call a delegate?
A famous aphorism of David Wheeler goes: "All problems in computer science can be solved by another level of indirection."
I'm being a bit tongue-in-cheek. Obviously, you will call functions directly most of the time, especially within a module. But delegates are useful when a function needs to be invoked in a context where the containing object is not available (or relevant), such as event callbacks.
There are two places that you could use delegates in the Observer pattern. Since I am not sure which one you are referring to, I will try to answer both.
The first is to use delegates in the subject instead of a list of IObservers. This approach seems a lot cleaner at handling multicasting since you basically have
private delegate void UpdateHandler(string message);
private UpdateHandler Update;
public void Register(IObserver observer)
{
Update+=observer.Update;
}
public void Unregister(IObserver observer)
{
Update-=observer.Update;
}
public void Notify(string message)
{
    if (Update != null) // guard: null until at least one observer registers
    {
        Update(message);
    }
}
instead of
public Subject()
{
observers = new List<IObserver>();
}
public void Register(IObserver observer)
{
observers.Add(observer);
}
public void Unregister(IObserver observer)
{
observers.Remove(observer);
}
public void Notify(string message)
{
// call update method for every observer
foreach (IObserver observer in observers)
{
observer.Update(message);
}
}
Unless you need to do something special and require a reference to the entire IObserver object, I would think the delegates would be cleaner.
The second case is to pass delegates instead of IObservers, for example:
public delegate void UpdateHandler(string message);
private UpdateHandler Update;
public void Register(UpdateHandler observerRoutine)
{
Update+=observerRoutine;
}
public void Unregister(UpdateHandler observerRoutine)
{
Update-=observerRoutine;
}
public void Notify(string message)
{
    if (Update != null) // guard: null until at least one observer registers
    {
        Update(message);
    }
}
With this, observers don't need to implement an interface; you can even pass in a lambda expression, as in the usage sketch below. This change in the level of control is pretty much the difference. Whether that is good or bad is up to you.
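For example, assuming the members above live on a Subject class:

var subject = new Subject();

// No IObserver implementation anywhere: the observer is just a lambda.
subject.Register(message => Console.WriteLine("observer saw: " + message));
subject.Notify("price changed");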
A delegate is, in effect, a reference to a method being passed around, not an object. An interface is a reference to a subset of the methods implemented by an object.
If, in some component of your application, you need access to more than one method of an object, then define an interface representing that subset of the object's methods, and implement that interface on all classes you might need to pass to this component. Then pass instances of those classes by that interface instead of by their concrete class.
If, on the other hand, in some method or component all you need is one of several methods, which can live in any number of different classes but all have the same signature, then you need a delegate.
I'm repeating an answer I gave to this question.
I've always liked the radio station metaphor.
When a radio station wants to broadcast something, it just sends it out. It doesn't need to know if there is actually anybody out there listening. Your radio is able to register itself with the radio station (by tuning in with the dial), and all radio station broadcasts (events in our little metaphor) are received by the radio who translates them into sound.
Without this registration (or event) mechanism, the radio station would have to contact each and every radio in turn and ask if it wanted the broadcast, and if your radio said yes, send the signal to it directly.
Your code may follow a very similar paradigm, where one class performs an action, but that class may not know, or may not want to know who will care about, or act on that action taking place. So it provides a way for any object to register or unregister itself for notification that the action has taken place.
Delegates are strong typing for function/method interfaces.
If your language takes the position that there should be strong typing, and that it has first-class functions (both of which C# does), then it would be inconsistent to not have delegates.
Consider any method that takes a delegate. If you didn't have delegates, how would you pass something to it? And how would the callee have any guarantees about its type?
I've heard some "events evangelists" talk about this, and they say that the more decoupled events are, the better.
Preferably, the event source should never know about the event listeners, and the event listener should never care about who originated the event. This is not how things usually are today, because in the event listener you normally receive the source object of the event.
With this said, delegates are the perfect tool for this job. They allow decoupling between event source and event observer because the event source doesn't need to keep a list of all observer objects. It only keeps a list of "function pointers" (delegates) of the observers.
Because of this, I think this is a great advantage over Interfaces.
Look at it the other way. What advantage would using a custom interface have over using the standard way that is supported by the language in both syntax and library?
Granted, there are cases where a custom-tailored solution might have advantages, and in such cases you should use it. In all other cases, use the most canonical solution available: it's less work, more intuitive (because it's what users expect), has more support from tools (including the IDE), and chances are the compiler treats it differently, resulting in more efficient code.
Don't reinvent the wheel (unless the current version is broken).
Actually there was an interesting back-and-forth between Sun and Microsoft about delegates. While Sun made a fairly strong stance against delegates, I feel that Microsoft made an even stronger point for using delegates. Here are the posts:
http://java.sun.com/docs/white/delegates.html
http://msdn.microsoft.com/en-us/vjsharp/bb188664.aspx
I think you'll find these interesting reading...
I think it is mostly about syntactic sugar and a way to organize your code. A good use is handling several methods that relate to a common context, whether they belong to an object or to a static class.
It's not that you're forced to use them; you can program something with or without them. But using them, or not, can affect how organized, readable and, why not, how cool the code is, and maybe save a few lines.
Every example given here is a good place to apply them; as someone said, it's just another feature in the language you can play with.
Greetings
Here is something I can write down as a reason for using delegates.
The following code is written in C#; please follow the comments.
public delegate string TestDelegate();

protected void Page_Load(object sender, EventArgs e)
{
    TestDelegate TD1 = new TestDelegate(DisplayMethodD1);
    TestDelegate TD2 = new TestDelegate(DisplayMethodD2);
    TD2 = TD1 + TD2; // make TD2 a multicast delegate
    lblDisplay.Text = TD1(); // invoke delegate
    lblAnotherDisplay.Text = TD2();
    // Note: Using a delegate allows the programmer to encapsulate a reference
    // to a method inside a delegate object. It's like a function pointer
    // in C or C++.
}

// The signature has to be the same.
public string DisplayMethodD1()
{
    //lblDisplay.Text = "Multi-Cast Delegate on EXECUTION"; // Enable on multi-cast
    return "This is returned from the first method of the delegate explanation";
}

// The method can also be static.
public static string DisplayMethodD2()
{
    return " Extra words from the second method";
}
Best Regards,
Pritom Nandy,
Bangladesh
Here is an example that might help.
There is an application that uses a large set of data. A feature is needed that allows the data to be filtered. 6 different filters can be specified.
The immediate thought is to create 6 different methods that each return the data filtered. For example
public Data FilterByAge(int age)
public Data FilterBySize(int size)
.... and so on.
This works, but it is very limited and produces rigid code, because it's closed to extension.
A better way is to have a single Filter method and to pass in information on how the data should be filtered. This is where a delegate can be used: the delegate is a function that can be applied to the data in order to filter it.
public Data Filter(Func<Data, bool> filter)
then the code to use this becomes
Filter(data => data.age > 30);
Filter(data => data.size == 19);
The code data => ... is the delegate (note that Action wouldn't work here, since a filter has to take an item and return a bool). The code becomes much more flexible and remains open to extension; a compilable sketch of the idea follows.
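In this sketch the Person type and its field names are invented, and LINQ's Where does the actual filtering:

using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public string Name;
    public int Age;
    public int Size;
}

public static class DataSet
{
    // One Filter method covers every criterion anyone will ever need.
    public static IEnumerable<Person> Filter(
        IEnumerable<Person> data, Func<Person, bool> predicate)
    {
        return data.Where(predicate);
    }
}

// Usage: each new filter is just a lambda, no new method per criterion.
// var overThirty   = DataSet.Filter(people, p => p.Age > 30);
// var sizeNineteen = DataSet.Filter(people, p => p.Size == 19);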
I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and introduced tight coupling as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C, you can do:
public event FooHandler Foo
{
add
{
c.Foo += value;
}
remove
{
c.Foo -= value;
}
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either NInject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; It may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
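A rough sketch of that dispatcher, with all names invented: a common message header, a queue, and handlers registered per message type.

using System;
using System.Collections.Generic;

public class Message
{
    public string Type;    // common header: routing key
    public object Payload; // message-specific data
}

public class MessageDispatcher
{
    private readonly Queue<Message> _queue = new Queue<Message>();
    private readonly Dictionary<string, Action<Message>> _handlers =
        new Dictionary<string, Action<Message>>();

    // Each handler registers at startup for the message types it wants.
    public void Register(string type, Action<Message> handler)
    {
        _handlers[type] = handler;
    }

    public void Post(Message message)
    {
        _queue.Enqueue(message);
    }

    // Picks up each queued message in turn and routes it to its handler.
    public void DispatchAll()
    {
        while (_queue.Count > 0)
        {
            var message = _queue.Dequeue();
            Action<Message> handler;
            if (_handlers.TryGetValue(message.Type, out handler))
            {
                handler(message);
            }
        }
    }
}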
You can check out the EventBroker object in the Microsoft patterns & practices library if you want centralised events.
Personally I think it's better to think about your architecture instead; even though we use the EventBroker here, none of our new code uses it, and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs