Are there any guidelines on when you should use a delegate for indirect association, and an observer?
In C#, you can use delegates for simple callbacks. I suppose function pointers and pointers to member functions can be considered delegates too (am I right?).
I realize that to use an observer, you need to create an interface and implement it, so it is more strongly typed and the relationship is more formal. For a delegate, as long as the function signature and accessibility match, you can "hook them up".
Do delegates make the observer pattern moot? How do you decide between a delegate and an observer pattern?
The observer pattern is already implemented for you in the form of events.
The advantage of events is that they can have multiple subscribers, while with a delegate, you can only have one. Events are better for public interfaces, and scenarios where you don't have complete control over who wants to get notified that something happens. In reality, events are just automatically managed lists of delegates. You'll have to see what makes more sense in your scenario.
Edit: As commenter Rabbi mentions, the above isn't entirely true, as any delegate can become a multicast delegate. The purpose of the event modifier is to make a delegate that can only be invoked inside the class that defines it. This is most useful for ensuring encapsulation in public interfaces.
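As a minimal sketch of the difference (the type and member names are made up for illustration, not taken from the question):

using System;

public class Publisher
{
    // A plain delegate field: outside code can overwrite it or invoke it directly.
    public Action<string> Changed;

    // An event over the same delegate type: outside code can only += / -=,
    // and only this class can raise it.
    public event Action<string> ChangedEvent;

    public void RaiseBoth(string message)
    {
        Changed?.Invoke(message);      // both are multicast: every subscriber runs
        ChangedEvent?.Invoke(message);
    }
}

// Usage:
// var p = new Publisher();
// p.Changed += m => Console.WriteLine("delegate subscriber: " + m);
// p.ChangedEvent += m => Console.WriteLine("event subscriber: " + m);
// p.RaiseBoth("hello");
// p.ChangedEvent("oops");  // does not compile: an event can only be raised by its declaring class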
One advantage of the observer pattern is that if you have a large number of events that are generally always subscribed to by an interested party, then passing a single object into a method to subscribe to all of them is much easier than subscribing to each event individually (see the sketch below). Since C# lacks Java's ability to declare an anonymous class that implements an interface inline, implementing the observer pattern is a bit more laborious, so most people opt for events anyway.
Another benefit of the traditional observer pattern is that it handles cases where you need to interrogate the subscriber for some reason. I've come across this need with objects that cross a web-service boundary: delegates are problematic there, whereas an observer is simply a reference to another object, so it works fine as long as your serialization keeps the integrity of references within the object graph (as the NetDataContractSerializer does). In those cases you can decide which subscribers to remove before crossing the service boundary based on whether the referenced subscriber is also within the same object graph.
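A minimal interface-based sketch of what the two paragraphs above describe (the names are illustrative): one Subscribe call covers several notifications, and the subject holds plain object references that graph-preserving serializers can track.

using System.Collections.Generic;

public interface IOrderObserver
{
    void OnCreated(int orderId);
    void OnShipped(int orderId);
}

public class OrderService
{
    private readonly List<IOrderObserver> _observers = new List<IOrderObserver>();

    // One call subscribes the observer to every notification at once.
    public void Subscribe(IOrderObserver observer) => _observers.Add(observer);
    public void Unsubscribe(IOrderObserver observer) => _observers.Remove(observer);

    public void Ship(int orderId)
    {
        foreach (var observer in _observers)
            observer.OnShipped(orderId);   // the subject can also inspect each observer before notifying it
    }
}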
Delegates can be used to implement the observer pattern - think of events.
To do it without events take a look here: http://www.dofactory.com/Patterns/PatternObserver.aspx
It wouldn't take much to refactor that into using delegates if you preferred.
The only advantage I can think of with implementing an interface is member name consistency across all implementations.
I am working on a serialization system using JSON, but I need to save events from Buttons (onClick, onHover, etc.). Is there a way to do this efficiently? (NOTE: the events are all Actions.)
Frankly, it is a terrible idea to try to serialize events.
JSON is usually used to serialize data; events are not data - they are implementation details. Most JSON serializers (or more broadly: most serializers) are not interested in delegates / events, because that isn't relevant to data, so: there's a good chance that anything you'd want to do here will need to be manual. Specifically, the problem here is that an event (or rather, the underlying multicast delegate) is effectively zero, one, or multiple pairs of "instance" (optional) and "method" (required).
The method here is a MethodInfo, and there aren't great ways to serialize a MethodInfo as text (it is at least theoretically possible, but it would be very brittle against changes to your code).
The instance, however, is an object - and most serializers hate that; in this case, it would combine object (reference) tracking, possibly of objects not otherwise inside the payload, of indeterminate types (so: possibly needing to store type metadata).
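To see why, here is a hedged sketch of what a hand-rolled serializer would find when it inspects an event's backing delegate (the Button class and Clicked event are hypothetical stand-ins for onClick):

using System;

public class Button
{
    public event Action Clicked;

    public void DumpSubscribers()
    {
        if (Clicked == null) return;

        foreach (Delegate d in Clicked.GetInvocationList())
        {
            // d.Method is a MethodInfo: brittle to persist as text (a rename breaks it).
            // d.Target is an arbitrary object (null for static methods): most serializers
            // have no sane way to store it.
            Console.WriteLine($"{d.Method.DeclaringType}.{d.Method.Name} on {d.Target ?? "(static)"}");
        }
    }
}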
Also, deserializing an object model that allows you to point to arbitrary types and methods is a massive security hole, and is a well-known RCE weakness in serializers that (unwisely, IMO) allow this kind of thing (such as BinaryFormatter; for a longer discussion of this topic, see here).
As for what to do instead: whenever an implementation isn't a great fit for a given serializer, the most pragmatic option is to stop fighting the serializer and work with it instead of against it. For example, you might create a model that looks kinda like your domain model, but instead of having events/delegates it just has a string[] / List<string> that represents the events you need to apply, and your code worries about how to map between them (mapping methods to strings, figuring out what the target instance should be, etc.). This avoids all of the pain points above, and additionally means that your data is now platform independent, with the payload and the implementation details (your form layout) kept separate from each other.
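A hedged sketch of that idea (ButtonModel, EventNames and the handler names are all illustrative, not from the question): the JSON carries only strings, and the loading code maps them back to real handlers.

using System;
using System.Collections.Generic;
using System.Text.Json;

// Serializable stand-in for the button: data only, no delegates.
public class ButtonModel
{
    public string Name { get; set; }
    public List<string> EventNames { get; set; } = new List<string>();
}

public static class ButtonLoader
{
    // Known handlers, keyed by the names stored in the JSON.
    private static readonly Dictionary<string, Action> Handlers = new Dictionary<string, Action>
    {
        ["ShowHelp"] = () => Console.WriteLine("help shown"),
        ["Save"] = () => Console.WriteLine("saved"),
    };

    public static void Rehydrate(string json, Action<string, Action> wireUp)
    {
        var model = JsonSerializer.Deserialize<ButtonModel>(json);
        foreach (var name in model.EventNames)
            if (Handlers.TryGetValue(name, out var handler))
                wireUp(model.Name, handler);   // your code decides how to attach it to a real button
    }
}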
We are about to start a project from scratch, and I would like to raise this topic as part of the design discussions.
Most of the time, I have seen abstract classes used just to provide some default/common behaviour through a few concrete methods.
So I am considering defining those concrete methods as extension methods on the interface I am going to develop, instead of using abstract classes.
Can someone guide me on this design decision? If you disagree with my approach, please justify your argument with scenarios or issues we could face by doing so, so that I can learn from it.
The two approaches are very different.
Using an abstract class and abstract/virtual methods, you allow the derived classes to override a behaviour, which is not the case for extension methods. Extension methods are, at the end of the day, extensions: they are not part of the type and are hard to spot when someone is examining the API and the features the type provides.
Second, creating an extension method for a type that you create yourself is not that logical, IMHO. Using an abstract base class keeps your hierarchy clear and keeps your model open to overriding behaviour.
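A minimal sketch of that difference (the report types are illustrative):

public abstract class ReportBase
{
    // Default behaviour that a derived class can override.
    public virtual string Header() => "Generic report";
}

public class SalesReport : ReportBase
{
    public override string Header() => "Sales report";   // picked up polymorphically at the call site
}

public interface IReport { }

public static class ReportExtensions
{
    // The same "default behaviour" as an extension: static dispatch,
    // so no implementation of IReport can ever override it.
    public static string Header(this IReport report) => "Generic report";
}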
Extension methods were introduced in C# because of a very particular requirement.
When they were designing LINQ, they realized that they did not want to create a new interface containing all the LINQ methods, like Where or Select, because it would mean that every enumerable or collection implementation would need to implement it.
That approach would have had an important drawback: it would have required extensive changes to the source code of many classes in the Base Class Library, and any third-party library or project implementing custom collections could not have taken advantage of LINQ at all.
Then they thought about an approach that could work directly with iterators (i.e. IEnumerator<T>) and be compatible with any IEnumerable<T>, without having to modify any existing code but just by adding new code in new assemblies.
And they invented extension methods, which are implemented as static methods but can be called as if they were instance members of a given type.
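That is exactly the shape of the LINQ operators. A simplified sketch of Where, roughly as it exists in System.Linq.Enumerable:

using System;
using System.Collections.Generic;

public static class MyEnumerable
{
    // A static method that the compiler lets you call as if it were an instance
    // member of any IEnumerable<T>, without modifying any existing collection type.
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (var item in source)
            if (predicate(item))
                yield return item;
    }
}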
Since the inception of extension methods, they've been implemented in many other scenarios, but they always cover these two use cases:
I have a large code base and I want to offer functionality to all types deriving from (classes) or implementing (interfaces) some other type, without modifying them all to implement a new interface across a lot of code (and increasing the chance of introducing new bugs).
I don't own the source code of some project and I want to extend some types to support some new methods.
Anything outside these use cases is an abuse of extension methods.
Extension methods aren't a replacement for regular class-based object-oriented programming.
Basically you can extend any class or interface - that is exactly what the LINQ extension methods do.
However, you cannot define those methods directly in the interface; you always need a public static class that contains the extensions.
To answer your question: I doubt that defining default behaviour within extension methods is a good thing, as it completely compromises the actual intention of the interface. When you create an extension method, all instances of the extended class/interface share those methods, so what you are effectively saying is "every instance of my interface can be treated as my abstract class".
Having said this, you should distinguish between the behaviour (the interface) and the actual processing (the class). Mixing both will eventually make your design quite complicated.
Next, by defining extension methods you completely bypass inheritance. What if you want to override the default behaviour? You would be stuck shadowing members with new or some other weird workaround, because your design was not open for inheritance at all.
My last point is that you should use extension methods for classes you don't have control over. When you can modify the code, you probably won't use them.
What is your preferred implementation style for a factory pattern? For example, consider a website where I want to use a factory pattern to save to two or more external systems. This is my first impression of a clean implementation:
Create a class named ExternalSystemManagerFactory
In the constructor of this class pass in an enumeration to indicate the target external system. For example: ExternalSystemManager.System1 or ExternalSystemManager.System2
Create a property on this class named ExternalSystemManager of type IExternalSystemManager
The constructor would set this property value based on the constructor argument
Create a method stub on IExternalSystemManager named SaveToExternalSystem
Create 2 concrete classes for my external systems that implement IExternalSystemManager (EsmSystem1, EsmSystem2)
Then in my client class, I could save to ExternalSystem1 like this:
new ExternalSystemManagerFactory(ExternalSystemManager.System1).ExternalSystemManager.SaveToExternalSystem();
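A hedged sketch of the design described above (the switch is an assumption about the intended implementation; the question's proposed enum would share its name with the property and clash inside the factory, so the sketch calls it ExternalSystemKind):

public enum ExternalSystemKind { System1, System2 }

public interface IExternalSystemManager
{
    void SaveToExternalSystem();
}

public class EsmSystem1 : IExternalSystemManager { public void SaveToExternalSystem() { /* push to system 1 */ } }
public class EsmSystem2 : IExternalSystemManager { public void SaveToExternalSystem() { /* push to system 2 */ } }

public class ExternalSystemManagerFactory
{
    public IExternalSystemManager ExternalSystemManager { get; }

    public ExternalSystemManagerFactory(ExternalSystemKind target)
    {
        switch (target)
        {
            case ExternalSystemKind.System1: ExternalSystemManager = new EsmSystem1(); break;
            case ExternalSystemKind.System2: ExternalSystemManager = new EsmSystem2(); break;
            default: throw new System.ArgumentOutOfRangeException(nameof(target));
        }
    }
}

// Usage, as in the question:
// new ExternalSystemManagerFactory(ExternalSystemKind.System1).ExternalSystemManager.SaveToExternalSystem();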
Does this seem like a reasonable implementation? Do you see any potential issues with this implementation? Is this a fairly common implementation style or is there a general trend towards a different implementation style?
In my opinion, when it comes to patterns, it typically has to do with how it "feels" when you use it. If you are comfortable with accessing your data in the way you have written it, then by all means go for it. I'm a firm believer that there really isn't a perfect way to implement a pattern, and I actually avoid them unless my code blatantly has a need and they emerge naturally. So my advice is: don't force it, but if it feels good, then do it.
The approach that you describe is ok, if it is only about two implementations. If the number of external systems that you want to access increases, you'd always have to change
the enum
the switch statement in the constructor that chooses the concrete implementation.
In the abstract factory pattern that the Gang of Four describes, you'd get rid of the enum and implement it like this:
An abstract base class/interface for the factory.
An implementation of the factory for each concrete external system.
You create the concrete factory at one spot in your code and always access it through the interface.
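A hedged sketch of that shape, reusing the EsmSystem1/EsmSystem2 classes from the earlier sketch (all names illustrative):

public interface IExternalSystemManagerFactory
{
    IExternalSystemManager Create();
}

public class System1ManagerFactory : IExternalSystemManagerFactory
{
    public IExternalSystemManager Create() => new EsmSystem1();
}

public class System2ManagerFactory : IExternalSystemManagerFactory
{
    public IExternalSystemManager Create() => new EsmSystem2();
}

// The concrete factory is chosen at one spot (e.g. from configuration) and used only through the interface:
// IExternalSystemManagerFactory factory = new System1ManagerFactory();
// factory.Create().SaveToExternalSystem();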
An advantage of this implementation is that you can easily configure which factory to create instead of using a switch statement in your code. Besides not having to adjust a switch statement each time you connect a new external system, it also allows you to add implementations for new systems without touching the factory's assembly at all.
Another approach you might want to consider, if you have lots of dependencies to create, is to use an Inversion of Control container. You register the types that should be created for each interface at the beginning of your application, then ask the IoC container for an instance when you need one, or have it injected into the constructors of your classes. There are several IoC containers available, e.g. Microsoft Unity, Ninject, Autofac, .... This will save you lots of time if you have several or huge factories.
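As one concrete illustration, a hedged sketch using Microsoft.Extensions.DependencyInjection rather than the containers named above (chosen purely because it ships with .NET; registration looks broadly similar in Unity, Ninject and Autofac). It reuses the IExternalSystemManager/EsmSystem1 types from the sketches above:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<IExternalSystemManager, EsmSystem1>();   // swap the mapping here, or drive it from configuration

using var provider = services.BuildServiceProvider();
var manager = provider.GetRequiredService<IExternalSystemManager>();
manager.SaveToExternalSystem();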
I have been programming for about two months now and I'm self-taught. While I get the basics of inheritance, polymorphism, interfaces, delegates, data & reference types, loops, if/switch, LINQ, XML, SQL, etc., I just cannot wrap my head around events!
I have read at least 4-5 different tutorials and write-ups online, but they are way too confusing to me. There's an Event type, EventHandler, delegates, event raising/subscribing - there's just too much going on, and I don't know if I'm thick-headed or not, but it's INCREDIBLY confusing for me.
Please explain events to me in a way that a complete beginner programmer like me can understand, many thanks!
Action and reaction.
As John said in your comments.. "When I click a button, something happens!"
It doesn't get much simpler than this:
http://en.wikipedia.org/wiki/Observer_pattern
At their core, events simply contain a list of functions (or at least a way to access those functions) which will be called when the event is raised. Raising an event is simply the act of the subject notifying all the functions which are subscribed (read: contained in the list) to that event.
How this list of subscriptions is built varies greatly depending on the capabilities of the framework. In the observer pattern (commonly used in Java) you do this by passing in an object that implements the appropriate interface. The subject iterates through the list of observers and calls the function defined by the interface. The drawback to this pattern is the potential for a naming collision between two entirely different subjects, which can be difficult (though not impossible) to work around.
Delegates remedy this issue by allowing you to pass in the function or method itself. A delegate is sort of like an interface in that it establishes a contract, but instead of class members it just specifies a set of parameters for a function. The subject can then iterate through a list of these methods, commonly referred to as event handlers, and pass them the appropriate parameters. Delegates are less troublesome than an observer pattern, but they can still be time-consuming to wire up.
More recently C# added the generic delegates Action and Func which are a bit easier to work with.
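A minimal, self-contained example tying the pieces together (the Kettle class and Boiled event are made up):

using System;

public class Kettle
{
    // The event: a list of Action<string> callbacks that interested code can add itself to.
    public event Action<string> Boiled;

    public void Boil()
    {
        // ... heat the water ...
        Boiled?.Invoke("Water is ready");   // raising the event calls every subscriber
    }
}

class Program
{
    static void Main()
    {
        var kettle = new Kettle();
        kettle.Boiled += message => Console.WriteLine("Make tea: " + message);     // subscribe
        kettle.Boiled += message => Console.WriteLine("Make coffee: " + message);  // another subscriber
        kettle.Boil();   // action -> reaction: both handlers run
    }
}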
You have an immutable object, and you set its internal variables in the constructor, which accepts a couple of parameters.
Question:
Do you see any problem with validating constructor parameters in the constructor of an immutable object and throwing an ArgumentException if they are not valid?
(To me it makes sense, but I wanted to ask in case there are better ways or something wrong with this - for example, whether it is better design to move validation from the constructor to a factory.)
Or, to generalize by rephrasing the question:
Is it OK to put business-rule logic in constructors? Or should constructors always do nothing more than set the object's internals?
Thanks
In a way, it makes sense to validate in the constructor itself because you know that all usages of it will pass through that single point, and any other developer that will use your code will be protected from making mistakes because of your "low-level" validations.
If you move the validation higher up the call chain, you leave the class code cleaner but you expose the code to the possibility of "you're using it wrong" bugs.
Constructor validation has a slight problem in case of invalid data: What do you do then? You have to throw an exception, which might be awkward and also a performance hit, if you create "invalid" instances often.
To get rid of try ... catch every time you instantiate the object, you would have to create a factory anyway.
I think the factory is a good approach, but in a slightly different way - validate the arguments given to the factory method and only then create a (valid) instance.
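A small sketch of that factory idea (the Temperature type is illustrative): validate the arguments first, construct only when the data is known to be good, and offer a Try-style method so callers that often expect bad input are not forced into try/catch.

using System;

public sealed class Temperature
{
    public double Celsius { get; }

    private Temperature(double celsius) => Celsius = celsius;   // private: only the factory methods create instances

    public static Temperature Create(double celsius)
    {
        if (celsius < -273.15)
            throw new ArgumentOutOfRangeException(nameof(celsius), "Below absolute zero.");
        return new Temperature(celsius);
    }

    // Non-throwing alternative for callers that often deal with invalid input.
    public static bool TryCreate(double celsius, out Temperature result)
    {
        result = celsius >= -273.15 ? new Temperature(celsius) : null;
        return result != null;
    }
}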
A class should, to the best of its ability, document the guarantees it makes and do its best to keep itself in a valid state at all times. Any incoming call that is either inappropriate or would put the object in an invalid state should generate an exception.
This holds true for constructors too. A constructor that doesn't validate its inputs makes it possible for others to create invalid instances of your class. But if you always validate, then anyone with a reference to your class can be confident that it is valid.
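A minimal sketch of an immutable type guarding its invariants in the constructor (the DateRange type is illustrative):

using System;

public sealed class DateRange
{
    public DateTime Start { get; }
    public DateTime End { get; }

    public DateRange(DateTime start, DateTime end)
    {
        if (end < start)
            throw new ArgumentException("End must not be earlier than start.", nameof(end));

        Start = start;
        End = end;
        // From here on, every DateRange anyone holds is guaranteed to be valid.
    }
}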
If it were me, I'd validate the parameters before passing them into the constructor. You never know how your code is going to evolve, so doing the validation in a factory, as you suggest, should provide a bit more visibility and feels 'cleaner'.
If you have a choice of where to raise an exception, go with wherever you're most likely to remember to surround the call with a try..catch; it helps to consider the other users of your codebase too. More often than not this depends on the purpose of the class and how you see it being used. However, consistency is also important.
Sometimes it's useful not to raise exceptions in either place and instead have a separate ValidateInstance() function for immutable types. Your other choices are, as you say, at class creation (via a factory or the constructor) or at class usage (usually a bad idea if an error can be thrown sooner, but it sometimes makes sense).
Putting the validations in the constructor has the advantage that they will also apply in a factory method, if you choose to add one later.
HTH