Are interface-based callbacks/events ever used over delegates? - c#

In Java, with no delegates, events are modeled with interface callbacks after the observer pattern. It strikes me that if working on a framework with more than half a dozen events, using delegates becomes a fairly verbose exercise.
As a Java developer who has forgotten his C#, I was wondering whether there is EVER a valid reason to use interfaces for events, or whether one really ought to use delegates throughout.

If it always makes sense to react to multiple callbacks, then it would potentially make sense to use an interface. However, you might want to write some adapter methods to allow the interface to be implemented by providing delegates for some of the callbacks - just the ones you want.
This is how Reactive Extensions works... almost no-one ever really implements IObserver<T> - they use the IObservable<T>.Subscribe extension method which allows the caller to specify the OnNext, OnCompleted and OnError handlers via delegates.
That way you get the benefits of delegates (which are generally easier to specify than interfaces, due to lambda expressions etc) but also one consistent object to pass around which represents all the related callbacks.
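For illustration, here is a minimal sketch of that pattern, assuming the System.Reactive NuGet package; only the three delegates are supplied, and the library builds the IObserver<T> for you:

using System;
using System.Reactive.Linq; // assumes the System.Reactive NuGet package

class RxSketch
{
    static void Main()
    {
        IObservable<int> numbers = Observable.Range(1, 3);

        // No hand-written IObserver<int> here: the Subscribe extension
        // method packages the three delegates into one observer object.
        numbers.Subscribe(
            x => Console.WriteLine($"OnNext: {x}"),
            ex => Console.WriteLine($"OnError: {ex.Message}"),
            () => Console.WriteLine("OnCompleted"));
    }
}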

Delegates are much more flexible. Since there are no anonymous classes in C# (in the Java sense) you cannot easily implement interfaces inline. Therefore, whenever an API requires that I implement an interface, I have to go out and write that class, which, in contrast to a lambda, forces me to move the logic physically further away from where it is used, often hurting readability.
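To make the contrast concrete, here is a small hedged sketch (all names are hypothetical) of the same callback exposed both ways:

using System;

// Interface-based callback: the handler must live in a named class,
// away from the call site.
public interface IDownloadListener
{
    void OnCompleted(string path);
}

public class LoggingListener : IDownloadListener
{
    public void OnCompleted(string path) => Console.WriteLine($"Done: {path}");
}

// Delegate-based callback: the handler can be an inline lambda,
// right next to the code that registers it.
public class Downloader
{
    public event Action<string> Completed;

    public void Finish(string path) => Completed?.Invoke(path);
}

// usage:
//   var d = new Downloader();
//   d.Completed += path => Console.WriteLine($"Done: {path}");
//   d.Finish("report.pdf");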

TDD - Extract interface or make methods virtual

Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now if the constructor of that class is public and isn't too complex or dependent on complex types, it would have the same effect to just make the method in question virtual and inherit.
Is this preferable over extracting an interface? If so, why?
Edit:
class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // slow even when s is a MemoryStream
        var result = new Dictionary<string, int>();
        // ... lengthy parsing work elided ...
        return result;
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
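For reference, a sketch of the two options under discussion, using the Parser example above:

using System.Collections.Generic;
using System.IO;

// Option 1: extract an interface; tests stub IParser.
public interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

// Option 2: mark the method virtual; tests inherit and override.
public class Parser : IParser
{
    public virtual IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // ... slow parse elided ...
        return new Dictionary<string, int>();
    }
}

// A hand-rolled test double, usable under either option.
public class StubParser : Parser
{
    public override IDictionary<string, int> DoLengthyParseTask(Stream s)
        => new Dictionary<string, int> { ["stubbed"] = 42 };
}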
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit and overwrite those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask rather than the method directly. This will provide a more robust test suite as well. You also need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you, basically a promise that your implementation will provide access to a specified set of contact points (methods, properties, etc), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class, on the other hand, in addition to a contract, specifies at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still enables you to call into the base implementation, while providing your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, while multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract, or whether you want to extract some behaviour as well; the answer should be obvious for a specific case.
As for the IParser / Parser pairs, first they are great for unit testing and for dependency injection, and second, they do not charge you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things, and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) this method in their implementations, but that you actually expect them to (and do you?).
Such design comes with rather heavy consequences, so unless you explicitly need it - you shouldn't use it. Try sticking to programming to interface instead.
One of good object oriented design principles state that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (not only your classes should implement interfaces/abstract classes, but also consist of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is created; for now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data-stream mining parser, and who knows what else. Do it properly now, and save yourself time and trouble later.

Is there a way to force a C# class to implement certain static functions?

I am developing a set of classes that implement a common interface. A consumer of my library shall expect each of these classes to implement a certain set of static functions. Is there any way that I can decorate these classes so that the compiler will catch the case where one of the functions is not implemented?
I know it will eventually be caught when building the consuming code. And I also know how to get around this problem using a kind of factory class.
Just curious to know if there is any syntax/attributes out there for requiring static functions on a class.
Edit: Removed the word 'interface' to avoid confusion.
No, there is no language support for this in C#. There are two workarounds that I can think of immediately:
use reflection at runtime; cross your fingers and hope...
use a singleton / default-instance / similar to implement an interface that declares the methods
(update)
Actually, as long as you have unit testing, the first option isn't as bad as you might think if (like me) you come from a strict "static typing" background. The fact is, it works fine in dynamic languages. And indeed, this is exactly how my generic operators code works: it hopes you have the static operators. At runtime, if you don't, it will laugh at you in a suitably mocking tone... but it can't check at compile time.
No. Basically it sounds like you're after a sort of "static polymorphism". That doesn't exist in C#, although I've suggested a sort of "static interface" notion which could be useful in terms of generics.
One thing you could do is write a simple unit test to verify that all of the types in a particular assembly obey your rules. If other developers will also be implementing the interface, you could put that test code into some common place so that everyone implementing the interface can easily test their own assemblies.
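As a sketch of that idea, the following reflection-based check could run from a unit test; the attribute and the required Parse(string) method are hypothetical stand-ins for your own rules:

using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker: classes carrying this attribute must declare
// a public static Parse(string) method.
[AttributeUsage(AttributeTargets.Class)]
public sealed class RequiresStaticParseAttribute : Attribute { }

public static class StaticContractChecker
{
    public static void Check(Assembly assembly)
    {
        // Find marked types that are missing the required static method.
        var offenders = assembly.GetTypes()
            .Where(t => t.GetCustomAttribute<RequiresStaticParseAttribute>() != null)
            .Where(t => t.GetMethod("Parse",
                BindingFlags.Public | BindingFlags.Static,
                binder: null, types: new[] { typeof(string) }, modifiers: null) == null)
            .ToList();

        if (offenders.Any())
            throw new InvalidOperationException(
                "Missing static Parse(string): " + string.Join(", ", offenders.Select(t => t.Name)));
    }
}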
This is a great question and one that I've encountered in my projects.
Some people hold that interfaces and abstract classes exist for polymorphism only, not for forcing types to implement certain methods. Personally, I consider polymorphism a primary use case, and forced implementation a secondary. I do use the forced implementation technique fairly often. Typically, it appears in framework code implementing a template pattern. The base/template class encapsulates some complex idea, and subclasses provide numerous variations by implementing the abstract methods. One pragmatic benefit is that the abstract methods provide guidance to other developers implementing the subclasses. Visual Studio even has the ability to stub the methods out for you. This is especially helpful when a maintenance developer needs to add a new subclass months or years later.
The downside is that there is no specific support for some of these template scenarios in C#. Static methods are one. Another one is constructors; ideally, ISerializable should force the developer to implement the protected serialization constructor.
The easiest approach probably is (as suggested earlier) to use an automated test to check that the static method is implemented on the desired types. Another viable idea already mentioned is to implement a static analysis rule.
A third option is to use an Aspect-Oriented Programming framework such as PostSharp. PostSharp supports compile-time validation of aspects. You can write .NET code that reflects over the assembly at compile time, generating arbitrary warnings and errors. Usually, you do this to validate that an aspect usage is appropriate, but I don't see why you couldn't use it for validating template rules as well.
Unfortunately, no, there's nothing like this built into the language.
While there is no language support for this, you could use a static analysis tool to enforce it. For example, you could write a custom rule for FxCop that detects an attribute or interface implementation on a class and then checks for the existence of certain static methods.
The singleton pattern does not help in all cases. My example is from an actual project of mine. It is not contrived.
I have a class (let's call it "Widget") that inherits from a class in a third-party ORM. If I instantiate a Widget object (therefore creating a row in the db) just to make sure my static methods are declared, I'm making a bigger mess than the one I'm trying to clean up.
If I create this extra object in the data store, I've got to hide it from users, calculations, etc.
I use interfaces in C# to make sure that I implement common features in a set of classes.
Some of the methods that implement these features require instance data to run. I code these methods as instance methods, and use a C# interface to make sure they exist in the class.
Some of these methods do not require instance data, so they are static methods. If I could declare interfaces with static methods, the compiler could check whether or not these methods exist in the class that says it implements the interface.
No, there would be no point in this feature. Interfaces are basically a scaled down form of multiple inheritance. They tell the compiler how to set up the virtual function table so that non-static virtual methods can be called properly in descendant classes. Static methods can't be virtual, hence, there's no point in using interfaces for them.
The approach that gets you closer to what you need is a singleton, as Marc Gravell suggested.
Interfaces, among other things, let you provide some level of abstraction to your classes, so you can use a given API regardless of the type that implements it. However, since you DO need to know the type of a static class in order to use it, why would you want to force that class to implement a set of functions?
Maybe you could use a custom attribute like [ImplementsXXXInterface] and provide some run time checking to ensure that classes with this attribute actually implement the interface you need?
If you're just after getting those compiler errors, consider this setup:
Define the methods in an interface.
Declare the methods abstract in a base class that implements the interface.
In each concrete class, implement the public static methods, and have the abstract method overrides simply call the static methods.
It's a little bit of extra code, but you'll know when someone isn't implementing a required method.
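Here is a minimal sketch of that setup; all type and method names are hypothetical:

public interface IDefaultFactory
{
    object CreateDefaultInstance();
}

public abstract class WidgetBase : IDefaultFactory
{
    // Every concrete subclass is forced to override this.
    public abstract object CreateDefaultInstance();
}

public class Widget : WidgetBase
{
    // The static method the library consumer actually expects on each class.
    public static Widget CreateDefault() => new Widget();

    // If CreateDefault is missing or renamed, this override fails to
    // compile, so the omission is caught at compile time.
    public override object CreateDefaultInstance() => CreateDefault();
}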

Utility classes... Good or Bad?

I have been reading that creating dependencies by using static classes/singletons in code is bad form, and creates problems, i.e. tight coupling and difficulty with unit testing.
I have a situation where I have a group of url parsing methods that have no state associated with them, and perform operations using only the input arguments of the method. I am sure you are familiar with this kind of method.
In the past I would have proceeded to create a class and add these methods and call them directly from my code eg.
UrlParser.ParseUrl(url);
But wait a minute, that is introducing a dependency on another class. I am unsure whether these 'utility' classes are bad, as they are stateless, which minimises some of the problems with said static classes and singletons. Could someone clarify this?
Should I be moving the methods to the calling class (that is, if only the calling class will be using the method)? This may violate the 'Single Responsibility Principle'.
From a theoretical design standpoint, I feel that Utility classes are something to be avoided when possible. They basically are no different than static classes (although slightly nicer, since they have no state).
From a practical standpoint, however, I do create these, and encourage their use when appropriate. Trying to avoid utility classes is often cumbersome, and leads to less maintainable code. However, I do try to encourage my developers to avoid these in public APIs when possible.
For example, in your case, I feel that UrlParser.ParseUrl(...) is probably better handled as a class. Look at System.Uri in the BCL - this provides a clean, easy-to-use interface for Uniform Resource Identifiers, which works well and maintains the actual state. I prefer this approach to a utility method that works on strings, forcing the user to pass around a string, remember to validate it, etc.
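For example, System.Uri validates once at construction and then exposes the parts as typed state:

using System;

class UriDemo
{
    static void Main()
    {
        // One validated object instead of re-parsing raw strings everywhere.
        var uri = new Uri("https://example.com/products?id=42");
        Console.WriteLine(uri.Scheme);        // https
        Console.WriteLine(uri.Host);          // example.com
        Console.WriteLine(uri.AbsolutePath);  // /products
        Console.WriteLine(uri.Query);         // ?id=42
    }
}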
Utility classes are OK... as long as they don't violate design principles. Use them as happily as you'd use the core framework classes.
The classes should be well named and logical. Really they aren't so much "utility" as part of an emerging framework that the native classes don't provide.
Using things like extension methods can be useful as well, to align functionality onto the "right" class. BUT, they can be a cause of some confusion, as the extensions usually aren't packaged with the class they extend, which is not ideal; still, they can be very useful and produce cleaner code.
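A small sketch of that idea (names are hypothetical): the same helper, exposed as an extension method so it reads as if it lived on string itself:

using System;

public static class UrlStringExtensions
{
    // Aligns a URL check onto string, the type it conceptually belongs to.
    public static bool IsAbsoluteUrl(this string s)
        => Uri.TryCreate(s, UriKind.Absolute, out _);
}

// usage:
//   bool ok = "https://example.com".IsAbsoluteUrl();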
You could always create an interface and use that with dependency injection with instances of classes that implement that interface instead of static classes.
The question becomes, is it really worth the effort? In some systems the answer is yes, but in others, especially smaller ones, the answer is probably no.
This really depends on the context, and on how we use it.
Utility classes, in themselves, are not bad. However, they become bad if we use them the wrong way. Every design pattern (especially the Singleton pattern) can easily be turned into an anti-pattern, and the same goes for utility classes.
In software design, we need a balance between flexibility and simplicity. If we're going to create a StringUtils which is only responsible for string manipulation:
Does it violate the SRP (Single Responsibility Principle)? -> Nope, it's the developers who put too many responsibilities into utility classes that violate the SRP.
"It cannot be injected using DI frameworks" -> Is the StringUtils implementation gonna vary? Are we gonna switch its implementation at runtime? Are we gonna mock it? Of course not.
=> Utility classes, themselves, are not bad. It's the developers' misuse that makes them bad.
It all really depends on the context. If you're just gonna create a utility class that has a single responsibility and is only used privately inside a module or a layer, then you're still good with it.
I agree with some of the other responses here that it is the classic singleton which maintains a single instance of a stateful object which is to be avoided and not necessarily utility classes with no state that are evil. I also agree with Reed, that if at all possible, put these utility methods in a class where it makes sense to do so and where one would logically suspect such methods would reside. I would add, that often these static utility methods might be good candidates for extension methods.
I really, really try to avoid them, but who are we kidding... they creep into every system. Nevertheless, in the example given I would use a URL object which would then expose various attributes of the URL (protocol, domain, path and query-string parameters). Nearly every time I want to create a utility class of statics, I can get more value by creating an object that does this kind of work.
In a similar way I have created a lot of custom controls that have built in validation for things like percentages, currency, phone numbers and the like. Prior to doing this I had a Parser utility class that had all of these rules, but it makes it so much cleaner to just drop a control on the page that already knows the basic rules (and thus requires only business logic validation to be added).
I still keep the parser utility class and these controls hide that static class, but use it extensively (keeping all the parsing in one easy to find place). In that regard I consider it acceptable to have the utility class because it allows me to apply "Don't Repeat Yourself", while I get the benefit of instanced classes with the controls or other objects that use the utilities.
Utility classes used in this way are basically namespaces for what would otherwise be (pure) top-level functions.
From an architectural perspective there is no difference if you use pure top-level "global" functions or basic (*) pure static methods. Any pros or cons of one would equally apply to the other.
Static methods vs global functions
The main argument for using utility classes over global ("floating") functions is code organization, file and directory structure, and naming:
You might already have a convention for structuring class files in directories by namespace, but you might not have a good convention for top-level functions.
For version control (e.g. git) it might be preferable to have a separate file per function, but for other reasons it might be preferable to have them in the same file.
Your language might have an autoload mechanism for classes, but not for functions. (I think this would mostly apply to PHP)
You might prefer to write use Acme\Url; Url::parse($url); over use function Acme\parse_url; parse_url($url);. Or you might prefer the latter.
You should check if your language allows passing static methods and/or top-level functions as values. Perhaps some languages only allow one but not the other.
So it largely depends on the language you use, and conventions in your project, framework or software ecosystem.
(*) You could have private or protected methods in the utility class, or even use inheritance - something you cannot do with top-level functions. But most of the time this is not what you want.
Static methods/functions vs object methods
The main benefit of object methods is that you can inject the object, and later replace it with a different implementation with different behavior. Calling a static method directly works well if you don't ever need to replace it. Typically this is the case if:
the function is pure (no side effects, not influenced by internal or external state)
any alternative behavior would be considered as wrong, or highly strange. E.g. 1 + 1 should always be 2. There is no reason for an alternative implementation where 1 + 1 = 3.
You may also decide that the static call is "good enough for now".
And even if you start with static methods, you can make them injectable/pluggable later. Either by using function/callable values, or by having small wrapper classes with object methods that internally call the static method.
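A sketch of that second approach, with hypothetical names: a thin wrapper implements an interface and delegates to the existing static method, so current callers of the static stay untouched:

using System;

public static class UrlParser
{
    // The existing static utility; callers can keep using it directly.
    public static string GetHost(string url) => new Uri(url).Host;
}

public interface IUrlParser
{
    string GetHost(string url);
}

public sealed class DefaultUrlParser : IUrlParser
{
    // Object method internally calls the static method, making the
    // behavior injectable and replaceable after the fact.
    public string GetHost(string url) => UrlParser.GetHost(url);
}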
They're fine as long as you design them well (that is, you don't have to change their signatures from time to time).
These utility methods do not change that often, because they do one thing only. The problem comes when you tie a more complex object to another; if one of them needs to change or be replaced, it will be harder to do if they are highly coupled.
Since these utility methods won't change that often, I would say that is not much of a problem.
I think it would be worse if you copy/paste the same utility method over and over again.
The video "How to Design a Good API and Why it Matters" by Joshua Bloch explains several concepts to bear in mind when designing an API (that would be your utility library). Although he's a recognized Java architect, the content applies to all programming languages.
Use them sparingly; you want to put as much logic as you can into your classes so they don't become just data containers.
But at the same time you can't really avoid utilities; they are required sometimes.
In this case I think it's OK.
FYI, there is the System.Web.HttpUtility class, which contains a lot of common HTTP utilities that you may find useful.

Lambdas for event handlers?

Lambda syntax in C# 3 makes it really convenient to create one-liner anonymous methods. They're a definite improvement over the wordier anonymous delegate syntax that C# 2 gave us. The convenience of lambdas, however, brings with it a temptation to use them in places where we don't necessarily need the functional programming semantics they provide.
For instance, I frequently find that my event handlers are (or at least start out as) simple one-liners that set a state value, or call another function, or set a property on another object, etc. For these, should I clutter my class with yet another simple function, or should I just stuff a lambda into the event in my constructor?
There are some obvious disadvantages to lambdas in this scenario:
I can't call my event handler directly; it can only be triggered by the event. Of course, in the case of these simple event handlers, there's hardly a time I would need to call them directly.
I can't unhook my handler from the event. On the other hand, rarely do I ever need to unhook event handlers, so this isn't much of an issue, anyway.
These two things don't bother me much, for the reasons stated. And I could solve both of those problems, if they really were problems, by storing the lambda in a member delegate, but that would kind of defeat the purposes of using lambdas for their convenience and of keeping the class clean of clutter.
There are two other things, though, that I think are maybe not so obvious, but possibly more problematic.
Each lambda function forms a closure over its containing scope. This could mean that temporary objects created earlier in the constructor stay alive for much longer than they need to due to the closures maintaining references to them. Now hopefully, the compiler is smart enough to exclude objects from the closure that the lambda doesn't use, but I'm not sure. Does anybody know?
Luckily again, this isn't always an issue, as I don't often create temporary objects in my constructors. I can imagine a scenario where I did, though, and where I couldn't easily scope it outside of the lambda.
Maintainability might suffer. Big time. If I have some event handlers defined as functions, and some defined as lambdas, I worry it might make it more difficult to track down bugs, or to just understand the class. And later, if and when my event handlers end up expanding, I'll either have to move them to class-level functions, or deal with the fact that my constructor now contains a significant amount of the code that implements the functionality of my class.
So I want to draw on the advice and experience of others, perhaps those with experience in other languages with functional programming features. Are there any established best practices for this kind of thing? Would you avoid using lambdas in event handlers or in other cases where the lambda significantly outlives its enclosing scope? If not, at what threshold would you decide to use a real function instead of a lambda? Have any of the above pitfalls significantly bitten anybody? Are there any pitfalls I haven't thought of?
I generally have one routine dedicated to wiring up event handlers. Therein, I use anonymous delegates or lambdas for the actual handlers, keeping them as short as possible. These handlers have two tasks:
Unpack event parameters.
Call a named method with appropriate parameters.
This done, I've avoided cluttering up my class namespace with event handler methods that cannot be cleanly used for other purposes, and forced myself to think about the needs and purposes of the action methods that I do implement, generally resulting in cleaner code.
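A sketch of this wiring style, with hypothetical names:

using System;

public class SaveButton
{
    public event EventHandler<EventArgs> Clicked;
    public void Click() => Clicked?.Invoke(this, EventArgs.Empty);
}

public class OrderView
{
    private readonly SaveButton saveButton = new SaveButton();

    public OrderView() => WireEvents();

    // One routine dedicated to wiring up event handlers.
    private void WireEvents()
    {
        // The lambda only unpacks the event parameters and forwards to a
        // named method, which can be called and tested directly.
        saveButton.Clicked += (sender, e) => SaveOrder();
    }

    private void SaveOrder() => Console.WriteLine("Order saved.");
}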
Each lambda function forms a closure over its containing scope. This could mean that temporary objects created earlier in the constructor stay alive for much longer than they need to due to the closures maintaining references to them. Now hopefully, the compiler is smart enough to exclude objects from the closure that the lambda doesn't use, but I'm not sure. Does anybody know?
From what I have read, the C# compiler either generates a plain anonymous method or a compiler-generated closure class, depending on whether it needs to close over the containing scope.
In other words, if you don't access the containing scope from within your lambda, it won't generate the closure.
However, this is a bit of "hearsay", and I'd love to have someone who is more knowledgeable with the C# compiler weigh in on that.
All that said, the old C# 2.0 anonymous delegate syntax did the same thing, and I've almost always used anonymous delegates for short event handlers.
You have covered the various pros and cons quite well. If you need to unhook your event handler, don't use an anonymous method; otherwise I'm all for it.
Based on a little experiment with the compiler, I would say the compiler is smart enough to create a closure only when needed. What I did was write a simple constructor which had two different lambdas, used as Predicates in List.Find().
The first lambda used a hard-coded value; the second used a parameter of the constructor. The first lambda was implemented as a private static method on the class. The second lambda was implemented as a class which performed the closure.
So your assumption that the compiler is smart enough is correct.
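A sketch of that experiment, reconstructed with hypothetical values:

using System.Collections.Generic;

public class ClosureDemo
{
    public ClosureDemo(int threshold)
    {
        var numbers = new List<int> { 1, 5, 10 };

        // Captures nothing: the compiler can emit this lambda as a
        // (cacheable) private static method.
        int fixedMatch = numbers.Find(n => n > 3);

        // Captures 'threshold': the compiler emits a hidden "display
        // class" holding the captured variable, i.e. a real closure.
        int capturedMatch = numbers.Find(n => n > threshold);
    }
}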
Most of the same characteristics of lambdas can apply equally well in other places where you can use them. If event handlers isn't a place for them, I can't think of any better. It's a single-point self-contained unit of logic, located at its single point.
In many cases, the event is designed to get a little package of context that turns out to be just right for the job at hand.
I consider this to be one of the "good smells" in a refactoring sense.
Concerning lambdas, this question I asked recently has some relevant facts about effects on object lifespan in the accepted answer.
Another interesting thing I recently learned is that the C# compiler treats multiple closures in the same scope as a single closure with respect to the things it captures and keeps alive. Sadly I can't find the original source for this; I will add it if I stumble upon it again.
Personally, I don't use lambdas as event handlers because I feel the readability advantage really comes when the logic is flowing from a request to a result. The event handler tends to be added in a constructor or initialiser, but it will rarely be called at this point in the object's lifecycle. So why should my constructor read like it's doing things now that are actually happening much later?
On the other hand, I do use a slightly different kind of event mechanism overall, which I find preferable to the C# language feature: an iOS-style NotificationCenter rewritten in C#, with a dispatch table keyed by Type (derived from Notification) and with Action<Notification> values. This ends up allowing single-line "event" Type definitions, like so:
public class UserIsUnhappy : Notification { public int unhappiness; }
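A minimal sketch of such a dispatch table (reusing the UserIsUnhappy type above; the real implementation's details may differ):

using System;
using System.Collections.Generic;

public abstract class Notification { }

public class NotificationCenter
{
    // Dispatch table: notification Type -> list of handlers.
    private readonly Dictionary<Type, List<Action<Notification>>> handlers =
        new Dictionary<Type, List<Action<Notification>>>();

    public void Subscribe<T>(Action<T> handler) where T : Notification
    {
        if (!handlers.TryGetValue(typeof(T), out var list))
            handlers[typeof(T)] = list = new List<Action<Notification>>();
        // Wrap the typed handler so all entries share one delegate shape.
        list.Add(n => handler((T)n));
    }

    public void Post(Notification notification)
    {
        if (handlers.TryGetValue(notification.GetType(), out var list))
            foreach (var handler in list)
                handler(notification);
    }
}

// usage:
//   var center = new NotificationCenter();
//   center.Subscribe<UserIsUnhappy>(n => Console.WriteLine(n.unhappiness));
//   center.Post(new UserIsUnhappy { unhappiness = 9 });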
