How should comments for interface and class methods be different? - C#

I ran into this dilemma when working on an ASP.NET web application using the Web Client Software Factory (WCSF) in C#, and the same could apply to other platforms and languages. My situation is this:
I am defining an IView interface for each web page/user control based on the WCSF paradigm, then having the page class implement the IView interface, basically implementing each of the methods defined in the interface. When I tried to add XML documentation at the method level, I found myself basically repeating the same comment content for both the interface method and its counterpart in the implementing class.
So my question is: should there be some substantial difference between the documentation on the interface method and on the corresponding class method? Should they emphasize different aspects?
Somebody told me that the interface method comment should say "what" the method is supposed to do, and the class method comment should say "how" it does it. But I remember reading somewhere that a method-level comment should only say "what" the method is supposed to do, never the implementation details, since the implementation should not be a concern for callers and it might change.

Personally, I think these comments should be the same - both should say "what the method is going to do", in your terms.
There is no reason for XML comments to mention implementation details. The one exception, potentially, would be to mention potential side effects (e.g. this method may take a long time), but I personally would put that in the <remarks> section of the XML doc comments.
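As a minimal sketch of that approach (the view interface and method names here are made up, not from the original question), both members carry the same <summary>, and the side-effect note lives in <remarks>:
public interface IMessageView
{
    /// <summary>
    /// Displays the given status message to the user.
    /// </summary>
    /// <remarks>May cause a full page refresh; can be slow on large pages.</remarks>
    void ShowMessage(string message);
}

public class MessagePage : IMessageView
{
    /// <summary>
    /// Displays the given status message to the user.
    /// </summary>
    /// <remarks>May cause a full page refresh; can be slow on large pages.</remarks>
    public void ShowMessage(string message)
    {
        // write the message to a label, status bar, etc.
    }
}
Some documentation tools also honor an <inheritdoc/> tag on the implementing member, which removes the duplication entirely, but support for it varies by toolchain.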

Call me a nut, but I'd use a descriptive name for the method and call it a day (no comments for either). I might add comments to the implementation if something about it is surprising or if why it's there is non-obvious.


Is it acceptable to use extension methods on a class which you can modify

I've recently been toying with the idea of using extension methods to implement helper utilities on classes which I control (i.e., they are in the same program and I can modify them). The rationale is that many of these helper utilities are used in very specific scenarios and don't require access to the class's internal values.
For instance, let's say I have a StackExchange class. It'd have methods like PostQuestion and Search and AnswerQuestion.
Now, what if I wanted to manually calculate my reputation to ensure that StackOverflow isn't cheating me? I'd implement something along the lines of:
int rep = 0;
foreach (var post in StackExchangeInstance.MyPosts)
{
    rep += post.RepEarned;
}
I could add a method to the StackExchange class, but it doesn't require any internals, and it is only used from one or two other portions of the program.
Now imagine if instead you had 10 or 20 of these specific helper methods. Useful in a certain scenario for sure, but definitely not for the general case. My idea is changing something like
public static class RepCalcHelpers
{
    public static int CalcRep(StackExchange inst) { ... }
}
To something like
namespace Mynamespace.Extensions.RepCalculations
{
    public static class RepCalcExtensions
    {
        public static int CalcRep(this StackExchange inst) { ... }
    }
}
Note the namespace. I'd ideally use this to group extension methods within a certain scenario. For instance, "RepCalculations", "Statistics", etc.
I've tried searching to find out whether this type of pattern is at all common, and haven't found any evidence of extension methods being used for anything but classes you can't modify.
What shortcomings are there with this "pattern"? Should I instead stick to inheritance or composition, or just a good ol' static helper class for this?
I would read the section of Framework Design Guidelines on extension methods. Here is a post by one of the authors of the 2nd edition. The use case you are describing (specialized helper methods) is cited by Phil Haack as a valid use for extension methods, with the drawback that it requires extra knowledge of the API to find those "hidden" methods.
Not mentioned in that post, but recommended in the book, is that the extension methods go into a separate namespace from the extended class. Otherwise they will always appear in IntelliSense and there is no way to turn them off.
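A rough sketch of the opt-in behavior this gives you, reusing the namespaces and the hypothetical StackExchange class from the question: a caller only sees CalcRep in IntelliSense after importing the extensions namespace.
// In a file that needs rep calculations, opt in explicitly:
using Mynamespace.Extensions.RepCalculations;

// ...
var site = new StackExchange();   // the question's hypothetical class
int rep = site.CalcRep();         // resolves to RepCalcExtensions.CalcRep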
I think I have seen this pattern somewhere else. It could be quite confusing, but also quite powerful. That way you can provide a class in a library and a set of extension methods in a separate namespace. Then whoever is using your library can choose to import the namespace with your extension methods or provide their own.
A good candidate for this pattern would be if you have some extension methods used for unit testing only (e.g. to compare if two objects are equal in a sense you'd need for unit tests only).
You seem to be making the comparison that the extension method is equivalent to a public instance method. It's really not.
An extension method is just a public static utility method that happens to have a more convenient syntax for being called.
So first we have to ask ourselves: is it appropriate for this method to be an instance method of the class itself, or is it more appropriate for it to be a static method of an external class? The fact that very few users of the class need this functionality, because it's highly localized and not truly behavior that the class itself performs but rather behavior performed on the class by an external entity, means that it's appropriate for it to be static. The primary drawback is that the behavior is potentially harder to find if someone has a User and wants to recalculate their rep. Now, in this particular case it's a bit on the fence, and you could go the other way, but I am leaning towards a static method.
Now that we've decided it should be static, it's an entirely separate question whether or not it should be an extension method. This is much more subjective and falls into the realm of personal preference. Are the methods likely to be chained? If so, extension methods chain much more nicely than nested calls to static methods. Is it likely to be used a lot in the files that do use it? If yes, extension methods will likely simplify the code a bit; if not, it doesn't really help much, or may even hurt. For the toy example I'd probably say that I personally wouldn't, but I wouldn't have any problem at all with someone who did (after all, you can still call an extension method as if it were a regular public static method, syntax-wise). For a non-toy example it's mostly a case-by-case decision. A key point is to be careful which classes you extend, and to ask yourself whether a user is willing to clutter the IntelliSense of a type just to call a method slightly more conveniently (this again comes back to how much it's used per file it's used in).
It's also worth mentioning that there are a few edge cases where extension methods can be more powerful than instance methods, in particular by utilizing type inference. With a regular instance method it's easy enough to accept a type or any sub-type of that type, but sometimes it's useful to return whatever type was passed in instead of the parent type. This is used particularly in fluent APIs. It isn't a very common case, though, and is only loosely related to your question, so I won't expand on it beyond the sketch below.
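A small sketch of that type-inference trick (all of the types here are hypothetical): because the extension is generic, the chain keeps returning the caller's concrete type rather than the base type.
using System.Collections.Generic;

public class TaggedItem
{
    public List<string> Tags { get; } = new List<string>();
}

public class Question : TaggedItem
{
    public string Title { get; set; }
}

public static class FluentExtensions
{
    // T is inferred from the argument, so the chain stays strongly typed.
    public static T WithTag<T>(this T item, string tag) where T : TaggedItem
    {
        item.Tags.Add(tag);
        return item;
    }
}

// Usage: Title is still available because WithTag returned Question, not TaggedItem.
// var q = new Question { Title = "..." }.WithTag("c#").WithTag("extension-methods");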
Extension methods can be very useful in cases where your class implements an interface and you want to avoid having to implement the same method on other "future" classes that implement the same interface. For example, StackExchange implements IStackExchange and ProgrammersExchange also implements IStackExchange. Your example extension method would be useful for implementing CalcRep just once, and not having to re-implement it on both classes. This is exactly the reason for all the extension methods in the static Enumerable class.
Other than this, I don't see a compelling reason for using extension methods on a class you can already modify. If anything, it has the disadvantage of being considered late in the overload resolution process.
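A sketch of that idea using the question's hypothetical types: write CalcRep once against the interface and every implementation picks it up, much like the methods on the static Enumerable class work against any IEnumerable<T>.
using System.Collections.Generic;
using System.Linq;

public class Post
{
    public int RepEarned { get; set; }
}

public interface IStackExchange
{
    IEnumerable<Post> MyPosts { get; }
}

public static class StackExchangeExtensions
{
    // One implementation serves StackExchange, ProgrammersExchange, and any future site class.
    public static int CalcRep(this IStackExchange site)
    {
        return site.MyPosts.Sum(post => post.RepEarned);
    }
}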

Are empty interfaces a code smell? [closed]

I have a function that returns the same kind of objects (query results) but with no properties or methods in common. In order to have a common type I resorted to using an empty interface as the return type and "implemented" it on both.
That doesn't sound right, of course. I can only console myself by clinging to the hope that someday those classes will have something in common and I will move that common logic into my (currently empty) interface. Yet I'm not satisfied, and I'm wondering whether I should instead have two different methods and call one or the other conditionally. Would that be a better approach?
I've also been told that the .NET Framework uses empty interfaces for tagging purposes.
My question is: is an empty interface a strong sign of a design problem or is it widely used?
EDIT: For those interested, I later found out that discriminated unions in functional languages are the perfect solution for what I was trying to achieve. C# doesn't seem friendly to that concept yet.
EDIT: I wrote a longer piece about this issue, explaining the issue and the solution in detail.
Although it seems there is a design pattern for this use case (many have mentioned "marker interface" now), I believe that the use of such a practice is an indication of a code smell (most of the time, at least).
As #V4Vendetta posted, there is a static analysis rule that targets this:
http://msdn.microsoft.com/en-us/library/ms182128(v=VS.100).aspx
If your design includes empty interfaces that types are expected to implement, you are probably using an interface as a marker or a way to identify a group of types. If this identification will occur at run time, the correct way to accomplish this is to use a custom attribute. Use the presence or absence of the attribute, or the properties of the attribute, to identify the target types. If the identification must occur at compile time, then it is acceptable to use an empty interface.
This is the quoted MSDN recommendation:
Remove the interface or add members to it. If the empty interface is being used to label a set of types, replace the interface with a custom attribute.
This also reflects the critique section of the already-posted Wikipedia link.
A major problem with marker interfaces is that an interface defines a contract for implementing classes, and that contract is inherited by all subclasses. This means that you cannot "unimplement" a marker. In the example given, if you create a subclass that you do not want to serialize (perhaps because it depends on transient state), you must resort to explicitly throwing NotSerializableException (per ObjectOutputStream docs).
You state that your function "returns entirely different objects based on certain cases" - but just how different are they? Could one be a stream writer, another a UI class, another a data object? No ... I doubt it!
Your objects might not have any common methods or properties; however, they are probably alike in their role or usage. In that case, a marker interface seems entirely appropriate.
If not used as a marker interface, I would say that yes, this is a code smell.
An interface defines a contract that the implementer adheres to - if you have empty interfaces that you don't use reflection over (as one does with marker interfaces), then you might as well use Object as the (already existing) base type.
You answered your own question... "I have a function that returns entirely different objects based on certain cases"... Why would you want the same function to return completely different objects? I can't see a reason for this to be useful; maybe you have a good one, in which case, please share.
EDIT: Considering your clarification, you should indeed use a marker interface. "Completely different" is quite different from "the same kind". If they were completely different (not just lacking shared members), that would be a code smell.
As many have probably already said, an empty interface does have valid use as a "marker interface".
Probably the best use I can think of is to denote an object as belonging to a particular subset of the domain, handled by a corresponding Repository. Say you have different databases from which you retrieve data, and you have a Repository implementation for each. A particular Repository can only handle one subset, and should not be given an instance of an object from any other subset. Your domain model might look like this:
//Every object in the domain has an identity-sourced Id field
public interface IDomainObject
{
    long Id { get; }
}

//No additional useful information other than this is an object from the user security DB
public interface ISecurityDomainObject : IDomainObject { }

//No additional useful information other than this is an object from the Northwind DB
public interface INorthwindDomainObject : IDomainObject { }

//No additional useful information other than this is an object from the Southwind DB
public interface ISouthwindDomainObject : IDomainObject { }
Your repositories can then be made generic to ISecurityDomainObject, INorthwindDomainObject, and ISouthwindDomainObject, and you then have a compile-time check that your code isn't trying to pass a Security object to the Northwind DB (or any other permutation). In situations like this, the interface provides valuable information regarding the nature of the class even if it does not provide any implementation contract.
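As a rough sketch of that compile-time check (the repository and entity classes below are invented for illustration), a repository typed to the security marker simply cannot accept a Northwind entity:
public class Repository<T> where T : IDomainObject
{
    public void Save(T entity)
    {
        // persist the entity to the database this repository wraps...
    }
}

public class SecurityUser : ISecurityDomainObject
{
    public long Id { get; private set; }
}

public class NorthwindOrder : INorthwindDomainObject
{
    public long Id { get; private set; }
}

// var securityRepo = new Repository<ISecurityDomainObject>();
// securityRepo.Save(new SecurityUser());    // fine
// securityRepo.Save(new NorthwindOrder());  // compile-time error: wrong domain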

Why are methods in C# not automatically virtual? [duplicate]

Possible Duplicate:
Why C# implements methods as non-virtual by default?
It would be much less work to define which methods are NOT overridable instead of which are overridable, because (at least for me), when you're designing a class, you don't care whether its inheritors will override your methods or not...
So why are methods in C# not automatically virtual? What is the reasoning behind this?
Anders Hejlsberg answered that question in this interview and I quote:
There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.
A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."
When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.
Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.
You should care which members can be overridden in derived classes.
Deciding which methods to make virtual should be a deliberate, well-thought-out decision - not something that happens automatically - the same as any other decisions regarding the public surface of your API.
Beyond the design and clarity reasons, a non-virtual method is also technically better for a couple of reasons:
Virtual methods take longer to call (because the runtime needs to navigate through the virtual lookup table to find the actual method to call)
Virtual methods can't be inlined (because the compiler doesn't know at compile time which method will eventually be called)
Therefore, unless you have specific intentions to override the method, it is better for it to be non-virtual.
Convention? Nothing more, I would think. I know Java automatically makes methods virtual, while C# does not, so there's clearly some disagreement at some level as to what's better. Personally, I prefer the C# default - consider that overriding methods is a lot less common than not overriding them, so it would seem more concise to define virtual methods explicitly.
See also the answer of Anders Hejlsberg (the inventor of C#) at A Conversation with Anders Hejlsberg, Part IV.
To paraphrase Eric Lippert, one of the guys who designed C#:
So your code doesn't get accidentally broken when the source code you received from a third party changes. In other words, to prevent the Brittle Base Class problem.
If a method is virtual, it's because you (supposedly) made the conscious decision to allow the method to be replaced, and designed, tested, and documented around that. What happens if, say, you made a function "frob" and, in some subsequent version, the base class's makers decide to also make a function "frob"?
So it's clear whether you're allowing overriding of a method or forcing hiding of a method (via the new keyword).
Forcing you to add the keyword removes any ambiguity that might be there.
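A minimal sketch of what those keywords look like in practice (the logger classes are made up for illustration): the base class opts a method in with virtual, and each derived class must say explicitly whether it overrides or hides it.
using System;

public class BaseLogger
{
    // Opted in to overriding: derived classes may replace this behavior.
    public virtual void Log(string message)
    {
        Console.WriteLine(message);
    }

    // Not virtual: derived classes cannot replace it, only hide it.
    public void Flush() { }
}

public class TimestampLogger : BaseLogger
{
    public override void Log(string message)
    {
        base.Log($"{DateTime.UtcNow:O} {message}");
    }
}

public class QuietLogger : BaseLogger
{
    // Hides rather than overrides: only callers using a QuietLogger reference see this.
    public new void Log(string message) { }
}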
There are always two approaches when you want to specify that you are allowing or denying something. You can either trust everyone and punish sinners or you can distrust everyone and force them to ask permission.
There are some minor performance problems with virtual methods - can't be inlined, slower to call than non-virtual methods - but that really isn't that important.
More significantly, they pose a threat to your design. It's not about caring what others will do with your class; it's about good object design. When a method is virtual, you are saying that it can be unplugged and replaced with a different implementation. As I said, you must treat such a method as an enemy - you can't trust it. You can't rely on any side effects. You have to set up a very strict contract for that method and stick with it.
If you consider that humans are very lazy and forgetful creatures, which approach is more prudent?
I have personally never used a virtual method in my designs. If there is some logic that my class uses and I want it to be interchangeable, I just create an interface for it. That interface then constitutes the above-mentioned contract. There are some situations where you really need virtual methods, but I think those are quite rare.
I believe there is an efficiency issue as well as the reasons others have posted. There is no need to spend the CPU cycles to look up an override if the method is not virtual.
When someone inherits from your class, that gives them the ability to change how any method works when the base class uses it. If you have a method that absolutely needs to perform an action a certain way in the base class, you would have no way to prevent someone from changing that functionality.
Here's one example. Suppose you have a function that you expect not to return an error. Someone comes in and decides to change it so that on Tuesdays it throws an out-of-range exception. Now the code in the base class fails, because something it depended on has changed.
Because it's not Java
Seriously, it's just a different backing philosophy. Java wanted extensibility to be the default and encapsulation to be explicit, and C# wanted extensibility to be explicit and encapsulation to be the default.
Actually, that's bad design practice - not caring which methods are overridable and which are not, I mean. You should always think about what should and shouldn't be overridable, just as you should carefully consider what should or shouldn't be public!

Interface design? Can I do it iteratively? How should I handle changes to the interface?

What is the best approach for defining interfaces in either C# or Java? Should we try to make them complete and generic up front, or add methods as and when the real need arises?
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List.Sort consumes the IComparer interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
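A sketch of the "_V2 interface" approach mentioned earlier (the names here are invented for illustration): the shipped interface is never modified; new members go into a second interface that extends it, and consumers probe for the newer one when they need it.
public interface IReportGenerator
{
    void Generate();
}

// Added in a later release; existing IReportGenerator implementations keep compiling.
public interface IReportGeneratorV2 : IReportGenerator
{
    void GenerateSummary();
}

// Consumer side:
// if (generator is IReportGeneratorV2 v2)
//     v2.GenerateSummary();
// else
//     generator.Generate();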
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface it is very difficult to change or remove it. Be very conservative in the creation of your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points then you probably didn't need interfaces to begin with but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design, because changes to the interface are going to be difficult if a lot of code will suddenly break in the next iteration.
Changes to the interface need to be made with care; therefore, it would be ideal if changes didn't have to be made after the initial release. This means that the first iteration is very important in terms of the design.
However, if changes are required, one way to implement them would be to deprecate the old methods and provide a transition path for old code to use the newly designed features. This does mean that the deprecated methods will still stick around to prevent code using the old methods from breaking - this is not ideal, so it is the "price to pay" for not getting it right the first time around.
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods to an interface later immediately breaks all implementations of the interface that didn't accidentally implement those methods. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface - the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look for additional methods that are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code after all... Never blindly follow best practice advice without understanding its rationale.
Now, if you're making a public API that another team or a customer will work with (think plugins, extension points, or things like that), then you have to be conservative about what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff - though many of them still apply. Know what does and doesn't apply to your situation.
No rule will make up for lack of common sense.

Is this a design pattern?

All over our codebase we have this repeated pattern where there's an interface with one method. Is this a real design pattern? If so, what is it, and what would the benefits be?
Here are a few examples:
public interface IRunnable
{
    void Run();
}

public interface IAction
{
    void Perform();
}

public interface ICommand
{
    void Execute(ActionArgs _actionargs);
}
I've seen this referenced as the Command pattern.
I first learned about it reading Uncle Bob's Agile Principles, Patterns, and Practices in C#.
I believe its elegance is its simplicity. I've used it when I wrote a file-processing service. The service performed all of the administration of reading/deleting files. When a file needed to be processed, its respective plugin was loaded. Each plugin implemented a Process method and did whatever was needed to process that type of file. (Mainly, parse the contents and insert them into a database.)
Every time I had to process a new file type with a new layout, all I had to do was create a new plugin that implemented Process.
This worked for me because I needed a simple solution. If you need to take in more than one parameter, this probably is not the pattern to use.
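A rough sketch of the shape described above (the interface and plugin names are guesses, not the original code):
public interface IFileProcessorPlugin
{
    // Each file type gets its own implementation of this one method.
    void Process(string filePath);
}

public class CsvInvoicePlugin : IFileProcessorPlugin
{
    public void Process(string filePath)
    {
        // parse the CSV contents and insert the rows into the database...
    }
}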
Any of these could very well be specific cases of the Command Pattern, depending on how it's being used and the context. Part of this would depend on why and how you're setting this up.
The command pattern also normally includes a concept of state and of various objects. Typically, this type of interface would suggest that, so I'm guessing this is what you are thinking of as a design pattern here, but without the caller or multiple targets it's difficult to tell if this is a classic example of it or not...
However, this, in and of itself, is just basic interface abstraction to me, and not something I'd classify as a design pattern.
As has been said, it is the Command design pattern. But to me it is more like the Java way of achieving the result. In C# you can use delegates, and in C++ function pointers and functors.
There is not much sense in creating more and more classes if you already have the implementation of the reaction in some class method, which you can bind to in C++ or assign to a delegate in C#. In Java I suppose you have no choice but to write the code you have found.
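For example, a built-in delegate type such as Action can often stand in for a one-method interface in C#, so no extra class is needed per behavior (a sketch, not from the original code):
using System;

public class Button
{
    private readonly Action _onClick;

    public Button(Action onClick)
    {
        _onClick = onClick;
    }

    public void Click()
    {
        _onClick();   // invokes whatever method or lambda was bound
    }
}

// Usage: bind an existing method or an inline lambda directly.
// var saveButton = new Button(() => Console.WriteLine("Saving..."));
// saveButton.Click();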
I'm not sure whether you could call it a design pattern, as the interfaces you provided do not offer solutions to commonly experienced problems but rather solutions to very specific problems in the project that you're developing.
The reason you're properly using interfaces is that you cannot have all the classes that need these methods extend a base class containing them, yet you need to know that specific classes promise to implement them.
Might be, as some of the previous posters suggested: http://en.wikipedia.org/wiki/Command_pattern
You can remove this repetition (or prevent it for future code) by using lambda expressions. Lambda expressions are exactly for this situation.
If anything, it's a functor. It's used in languages without first-class functions (or function pointers) for the sort of things functions (or function pointers) are used for, such as the main function for a thread.
There are applications for Interfaces with only one method. I mean, in .NET there are plenty - INotifyPropertyChanged, for one (the PropertyChanged event). It just guarantees that an object has a certain method (regardless of what type of object it actually is), so you can call it (again, regardless of type).
Dim runnableObjects As New List(Of Object)
runnableObjects.Add(New MyRunnableObject1)
runnableObjects.Add(New MyRunnableObject2)

For Each o As IRunnable In runnableObjects
    o.Run()
Next
Maybe I'm missing something, but the first two look like they could be part of the Strategy pattern. Basically, an object has a member of type IAction, and that member is assigned/reassigned at runtime based on the needs of the system, to perform a task in a particular way (i.e. using a particular strategy).
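A brief sketch of that strategy reading, reusing the question's IAction interface (the surrounding classes are hypothetical): the owner holds an IAction member, and the concrete implementation can be swapped at runtime.
using System;

public class TaskRunner
{
    // The strategy: which IAction is plugged in decides how the task is performed.
    public IAction Strategy { get; set; }

    public void RunOnce()
    {
        Strategy.Perform();
    }
}

public class LogAction : IAction
{
    public void Perform()
    {
        Console.WriteLine("logging the task");
    }
}

public class SilentAction : IAction
{
    public void Perform() { }
}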
