Should I recommend sealing classes by default? - c#

In a big project I work on, I am considering recommending that other programmers always seal their classes if they haven't considered how their classes should be subclassed. Oftentimes, less-experienced programmers never consider this.
I find it odd that in Java and C# classes are non-sealed / non-final by default. I think making classes sealed greatly improves readability of the code.
Notice that this is in-house code that we can always change should the rare case occur that we need to subclass.
What are your experiences? I meet quite a bit of resistance to this idea. Are people so lazy that they can't be bothered to type sealed?

Okay, as so many other people have weighed in...
Yes, I think it's entirely reasonable to recommend that classes are sealed by default.
This goes along with the recommendation from Josh Bloch in his excellent book Effective Java, 2nd edition:
Design for inheritance, or prohibit it.
Designing for inheritance is hard, and can make your implementation less flexible, especially if you have virtual methods, one of which calls the other. Maybe they're overloads, maybe they're not. The fact that one calls the other must be documented, otherwise you can't override either method safely - you don't know when it'll be called, or whether you're safe to call the other method without risking a stack overflow.
Now if you want to change which method calls which in a later version, you can't - you'll potentially break subclasses. So in the name of "flexibility" you've actually made the implementation less flexible, and had to document your implementation details more closely. That doesn't sound like a great idea to me.
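To make that concrete, here is a hedged sketch (with invented names) of the self-call problem being described:
using System;

public class Document
{
    public virtual void Save(string path)
    {
        Save(path, overwrite: true);   // undocumented self-call
    }

    public virtual void Save(string path, bool overwrite)
    {
        Console.WriteLine($"Saving to {path} (overwrite: {overwrite})");
    }
}

public class AuditedDocument : Document
{
    // Overriding either method safely requires knowing about the self-call;
    // calling Save(path) from this override "for convenience" would recurse forever.
    public override void Save(string path, bool overwrite)
    {
        Console.WriteLine("Auditing save...");
        base.Save(path, overwrite);
    }
}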
Next up is immutability - I like immutable types. I find them easier to reason about than mutable types. It's one reason why the Joda Time API is nicer than using Date and Calendar in Java. But an unsealed class can never be known to be immutable. If I accept a parameter of type Foo, I may be able to rely on the properties declared in Foo not being changed over time, but I can't rely on the object itself not being modified - there could be a mutable property in the subclass. Heaven help me if that property is also used by an override of some virtual method. Wave goodbye to many of the benefits of immutability. (Ironically, Joda Time has very large inheritance hierarchies - often with notes saying "subclasses should be immutable". The large inheritance hierarchy of Chronology made it hard to understand when porting to C#.)
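As a hypothetical sketch of why an unsealed type can't be trusted to be immutable:
public class Foo
{
    public string Name { get; }              // looks immutable from here
    public Foo(string name) { Name = name; }
    public virtual string Describe() => Name;
}

public class SneakyFoo : Foo
{
    public string Suffix { get; set; } = ""; // mutable state introduced by the subclass
    public SneakyFoo(string name) : base(name) { }

    // A method that accepts a Foo can no longer assume Describe() returns
    // the same value over time.
    public override string Describe() => Name + Suffix;
}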
Finally, there's the aspect of overuse of inheritance. Personally I favour composition over inheritance where feasible. I love polymorphism for interfaces, and occasionally I use inheritance of implementation - but it's rarely a great fit in my experience. Making classes sealed avoids them being inappropriately derived from where composition would be a better fit.
EDIT: I'd also like to point readers at Eric Lippert's blog post from 2004 on why so many of the framework classes are sealed. There are plenty of places where I wish .NET provided an interface we could work to for testability, but that's a slightly different request...

It is my opinion that architectural design decisions are made to communicate to other developers (including future maintenance developers) something important.
Sealing classes communicates that the implementation should not be overridden. It communicates that the class should not be impersonated. There are good reasons to seal.
If you take the unusual approach of sealing everything (and this is unusual), then your design decisions now communicate things that are really not important - like that the class wasn't intended to be inherited by the original/authoring developer.
But then how would you communicate to other developers that a particular class should not be inherited for a specific reason? You really can't. You are stuck.
Also, sealing a class doesn't improve readability. I just don't see that. If inheritance is a problem in OOP development, then we have a much larger problem.

I'd like to think that I'm a reasonably-experienced programmer and, if I've learned nothing else, it's that I am remarkably bad at predicting the future.
Typing sealed is not hard; I just don't want to irritate a developer down the road (who could be me!) who discovers that a problem could be easily solved with a little inheritance.
I also have no idea how sealing a class makes it more readable. Are you trying to force people to prefer composition to inheritance?

© Jeffrey Richter
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. However, once a class is unsealed, you can never change it to sealed in the future, as this would break all derived classes. In addition, if the unsealed class defines any unsealed virtual methods, the ordering of the virtual method calls must be maintained with new versions or there is the potential of breaking derived types in the future.
Performance: As discussed in the previous section, calling a virtual method doesn't perform as well as calling a nonvirtual method because the CLR must look up the type of the object at runtime in order to determine which type defines the method to call. However, if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method nonvirtually. It can do this because it knows there can't possibly be a derived class if the class is sealed.
Security and predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class's state if any data fields or methods that internally manipulate fields are accessible and not private. In addition, a virtual method can be overridden by a derived class, and the derived class can decide whether to call the base class's implementation. By making a method, property, or event virtual, the base class is giving up some control over its behavior and its state. Unless carefully thought out, this can cause the object to behave unpredictably, and it opens up potential security holes.
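To illustrate the performance point in code - a minimal, hedged sketch (not from the book; whether the call is actually devirtualized is up to the JIT):
public sealed class Money
{
    public decimal Amount { get; }
    public Money(decimal amount) { Amount = amount; }
    public override string ToString() => Amount.ToString("C");
}

public static class Report
{
    // Because Money is sealed, the JIT knows m.ToString() can only be
    // Money.ToString() and may call it non-virtually (or inline it).
    public static string Format(Money m) => m.ToString();
}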

There shouldn't be anything wrong in inheriting from a class.
You should seal a class only when there's a good reason why it should never be inherited.
Besides, if you seal them all, it will only decrease maintainability. Every time someone wants to inherit from one of your classes, they will see it is sealed; then they'll either remove the seal (messing with code they shouldn't have to touch) or, worse, create a poor reimplementation of your class for themselves.
Then you'll have two implementations of the same thing, one probably worse than the other, and two pieces of code to maintain.
Better to just keep it unsealed. There's no harm in that.

Frankly, I think that classes not being sealed by default in C# is kind of weird and out of place with how the rest of the defaults in the language work.
By default, classes are internal.
By default fields are private.
By default members are private.
There seems to be a trend that points to the most restrictive access by default. It would stand to reason that an unsealed keyword should exist in C# instead of a sealed one.
Personally, I'd rather classes were sealed by default. On most occasions when someone writes a class, they are not designing it with subclassing in mind and all the complexities that come along with it. Designing for future subclassing should be a conscious act, and therefore I'd rather you explicitly have to state it.

"...consider[ing] how their classes should be sub classed..." shouldn't matter.
At least a half dozen times over the past few years I've found myself cursing some open source team or another for a sloppy mix of protected and private, making it impossible to simply extend a class without copying the source of the entire parent class. (In most cases, overriding a particular method required access to private members.)
One example was a JSTL tag that almost did what I wanted. I needed to override one small thing. Nope, sorry, I had to completely copy the source of the parent.

I only seal classes if I am working on a reusable component that I intend to distribute and I don't want the end user to inherit from it, or, as a system architect, if I know I don't want another developer on the team to inherit from it. However, there is usually some specific reason for it.
Just because a class isn't being inherited from, I don't think it should automatically be marked sealed. Also, it annoys me to no end when I want to do something tricky in .NET, but then realize MS marks tons of their classes sealed.

This is a very opinionated question that's likely to garner some very opinionated answers ;-)
That said, in my opinion, I strongly prefer NOT making my classes sealed/final, particularly at the beginning. Doing this makes it very difficult to infer the intended extensibility points, and it's nearly impossible to get them right at the beginning. IMHO, overuse of encapsulation is worse than overuse of polymorphism.

Your house, your rules.
You can also have the complementary rule instead: a class that can be subclassed must be annotated as such; nobody should subclass a class that isn't annotated that way. This rule is no harder to follow than your rule.

The main purpose of sealing a class is to take away the inheritance feature, so that users cannot derive their own class from the sealed one. Are you sure you want to do that? Or do you want to start with all classes sealed and then unseal them when they need to be inheritable? That might be OK while everything is in-house and within one team, but if other teams use your DLLs in the future, it won't be possible to recompile the whole source code every time a class needs to be unsealed.
I wouldn't recommend this, but that's just my opinion.

I don't like that way of thinking. Java and C# are made to be OOP languages. These languages are designed in a way where a class can have a parent or a child. That's it.
Some people say that we should always start from the most restrictive modifier (private, protected...) and only make a member public when it is actually used externally. These people are, to me, lazy and don't want to think about a good design at the beginning of the project.
My answer is: design your apps well now. Mark a class sealed when it needs to be sealed and private when it needs to be private. Don't make them sealed by default.

I find that sealed / final classes are actually pretty rare, in my experience; I would certainly not recommend suggesting all classes be sealed / final by default. That specification makes a certain statement about the state of the code (i.e., that it's complete) that is not necessarily always true during development time.
I'll also add that leaving a class unsealed requires more design / test effort to make sure that the exposed behaviours are well-defined and tested; heavy unit testing is critical, IMO, to achieve a level of confidence in the behaviour of the class that appears to be desired by the proponents of "sealed". But IMO, that increased level of effort translates directly to a high level of confidence and to higher quality code.

Related

C# Forcing static fields [duplicate]

I am developing a set of classes that implement a common interface. A consumer of my library will expect each of these classes to implement a certain set of static functions. Is there any way that I can decorate these classes so that the compiler will catch the case where one of the functions is not implemented?
I know it will eventually be caught when building the consuming code. And I also know how to get around this problem using a kind of factory class.
Just curious to know if there is any syntax/attributes out there for requiring static functions on a class.
Edit: Removed the word 'interface' to avoid confusion.
No, there is no language support for this in C#. There are two workarounds that I can think of immediately:
use reflection at runtime; cross your fingers and hope...
use a singleton / default-instance / similar to implement an interface that declares the methods
(update)
Actually, as long as you have unit testing, the first option isn't as bad as you might think if (like me) you come from a strict "static typing" background. The fact is, it works fine in dynamic languages. And indeed, this is exactly how my generic operators code works - it hopes you have the static operators. At runtime, if you don't, it will laugh at you in a suitably mocking tone... but it can't check at compile-time.
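A rough sketch of the second workaround (singleton / default instance), with made-up names:
public interface IWidgetFactory
{
    Widget Create(string name);
}

public class Widget
{
    public string Name { get; }
    private Widget(string name) { Name = name; }

    // The "default instance" that stands in for static members, so the
    // compiler can check the contract via the interface.
    public static IWidgetFactory Factory { get; } = new DefaultFactory();

    private sealed class DefaultFactory : IWidgetFactory
    {
        public Widget Create(string name) => new Widget(name);
    }
}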
No. Basically it sounds like you're after a sort of "static polymorphism". That doesn't exist in C#, although I've suggested a sort of "static interface" notion which could be useful in terms of generics.
One thing you could do is write a simple unit test to verify that all of the types in a particular assembly obey your rules. If other developers will also be implementing the interface, you could put that test code into some common place so that everyone implementing the interface can easily test their own assemblies.
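A minimal sketch of such a test - NUnit assumed, and the IImporter interface plus the required Create(Stream) method are placeholders for whatever your convention actually is:
using System.IO;
using System.Linq;
using System.Reflection;
using NUnit.Framework;

public interface IImporter { }

[TestFixture]
public class ImporterConventionTests
{
    [Test]
    public void Every_importer_declares_the_required_static_factory()
    {
        var importerTypes = typeof(IImporter).Assembly.GetTypes()
            .Where(t => typeof(IImporter).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);

        foreach (var type in importerTypes)
        {
            var method = type.GetMethod(
                "Create",
                BindingFlags.Public | BindingFlags.Static,
                binder: null,
                types: new[] { typeof(Stream) },
                modifiers: null);

            Assert.IsNotNull(method, $"{type.Name} is missing public static Create(Stream)");
        }
    }
}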
This is a great question and one that I've encountered in my projects.
Some people hold that interfaces and abstract classes exist for polymorphism only, not for forcing types to implement certain methods. Personally, I consider polymorphism a primary use case, and forced implementation a secondary. I do use the forced implementation technique fairly often. Typically, it appears in framework code implementing a template pattern. The base/template class encapsulates some complex idea, and subclasses provide numerous variations by implementing the abstract methods. One pragmatic benefit is that the abstract methods provide guidance to other developers implementing the subclasses. Visual Studio even has the ability to stub the methods out for you. This is especially helpful when a maintenance developer needs to add a new subclass months or years later.
The downside is that there is no specific support for some of these template scenarios in C#. Static methods are one. Another one is constructors; ideally, ISerializable should force the developer to implement the protected serialization constructor.
The easiest approach probably is (as suggested earlier) to use an automated test to check that the static method is implemented on the desired types. Another viable idea already mentioned is to implement a static analysis rule.
A third option is to use an Aspect-Oriented Programming framework such as PostSharp. PostSharp supports compile-time validation of aspects. You can write .NET code that reflects over the assembly at compile time, generating arbitrary warnings and errors. Usually, you do this to validate that an aspect usage is appropriate, but I don't see why you couldn't use it for validating template rules as well.
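A brief sketch of the template-pattern case mentioned above (names invented): the abstract members both force and guide the subclass author, and Visual Studio can stub them out automatically:
using System.Collections.Generic;

public abstract class ReportGenerator
{
    // The template method: the invariant skeleton lives in the base class.
    public string Generate()
    {
        var rows = LoadRows();
        return FormatBody(rows);
    }

    // The variations every subclass must supply.
    protected abstract IReadOnlyList<string> LoadRows();
    protected abstract string FormatBody(IReadOnlyList<string> rows);
}

public class CsvReportGenerator : ReportGenerator
{
    protected override IReadOnlyList<string> LoadRows() => new[] { "a", "b", "c" };
    protected override string FormatBody(IReadOnlyList<string> rows) => string.Join(",", rows);
}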
Unfortunately, no, there's nothing like this built into the language.
While there is no language support for this, you could use a static analysis tool to enforce it. For example, you could write a custom rule for FxCop that detects an attribute or interface implementation on a class and then checks for the existence of certain static methods.
The singleton pattern does not help in all cases. My example is from an actual project of mine. It is not contrived.
I have a class (let's call it "Widget") that inherits from a class in a third-party ORM. If I instantiate a Widget object (therefore creating a row in the db) just to make sure my static methods are declared, I'm making a bigger mess than the one I'm trying to clean up.
If I create this extra object in the data store, I've got to hide it from users, calculations, etc.
I use interfaces in C# to make sure that I implement common features in a set of classes.
Some of the methods that implement these features require instance data to run. I code these methods as instance methods, and use a C# interface to make sure they exist in the class.
Some of these methods do not require instance data, so they are static methods. If I could declare interfaces with static methods, the compiler could check whether or not these methods exist in the class that says it implements the interface.
No, there would be no point in this feature. Interfaces are basically a scaled down form of multiple inheritance. They tell the compiler how to set up the virtual function table so that non-static virtual methods can be called properly in descendant classes. Static methods can't be virtual, hence, there's no point in using interfaces for them.
The approach that gets you closer to what you need is a singleton, as Marc Gravell suggested.
Interfaces, among other things, let you provide some level of abstraction to your classes so you can use a given API regardless of the type that implements it. However, since you DO need to know the type of a static class in order to use it, why would you want to force that class to implement a set of functions?
Maybe you could use a custom attribute like [ImplementsXXXInterface] and provide some run time checking to ensure that classes with this attribute actually implement the interface you need?
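A rough sketch of that attribute idea - all names here are invented, and the check would run in a test or at start-up rather than at compile time:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public sealed class ImplementsWidgetContractAttribute : Attribute { }

public static class ContractChecker
{
    // Describes every marked class that is missing the expected
    // public static CreateDefault() method.
    public static IEnumerable<string> FindViolations(Assembly assembly)
    {
        return assembly.GetTypes()
            .Where(t => t.IsDefined(typeof(ImplementsWidgetContractAttribute), inherit: false))
            .Where(t => t.GetMethod("CreateDefault", BindingFlags.Public | BindingFlags.Static) == null)
            .Select(t => $"{t.FullName} is missing public static CreateDefault()");
    }
}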
If you're just after getting those compiler errors, consider this setup:
Define the methods in an interface.
Declare the methods as abstract in a base class.
Implement the public static methods, and have the abstract method overrides simply call the static methods.
It's a little bit of extra code, but you'll know when someone isn't implementing a required method.
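A small sketch of that setup, with invented names - forgetting either the override or the static method it forwards to becomes a compile-time error:
using System.IO;

public interface IStorage
{
    void Save(string data);
}

public abstract class StorageBase : IStorage
{
    public abstract void Save(string data);   // every concrete class must override this
}

public class FileStorage : StorageBase
{
    // The static method the consuming code actually expects...
    public static void SaveToDisk(string data) => File.WriteAllText("out.txt", data);

    // ...and the override that simply forwards to it.
    public override void Save(string data) => SaveToDisk(data);
}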

TDD - Extract interface or make methods virtual

Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now if the constructor of that class is public and isn't too complex or dependent on complex types, it would have the same effect to just make the method in question virtual and inherit.
Is this preferable over extracting an interface? If so, why?
Edit:
class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // is slow even with using memory stream
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit and override those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask and not the method directly. This will provide a more robust test suite as well. You need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you: basically a promise that your implementation will provide access to a specified set of contact points (methods, properties, etc.), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class, on the other hand, in addition to a contract, specifies at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still enables you to call into the implementation of the base, and still provide your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, while multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract, or whether you want to extract some behaviour as well, and the answer should be obvious for a specific case.
As for the IParser / Parser pairs, first they are great for unit testing and for dependency injection, and second, they do not charge you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) this method in their implementations, but that you actually expect them to (and do you?).
Such a design comes with rather heavy consequences, so unless you explicitly need it, you shouldn't use it. Try sticking to programming to an interface instead.
One of the good object-oriented design principles states that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (not only should your classes implement interfaces/abstract classes, they should also be composed of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is produced - for now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data-stream-mining parser and what not. Do it properly now and save yourself time and trouble later.
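For the Parser in the question, a minimal sketch of the extract-an-interface route (the consumer and the stub are hypothetical):
using System;
using System.Collections.Generic;
using System.IO;

public interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

public class Parser : IParser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        throw new NotImplementedException(); // the real, slow parsing lives here
    }
}

public class StubParser : IParser
{
    // Instant, deterministic result for unit tests.
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
        => new Dictionary<string, int> { ["token"] = 1 };
}

public class ReportBuilder
{
    private readonly IParser _parser;
    public ReportBuilder(IParser parser) { _parser = parser; }

    public int CountDistinctTokens(Stream s) => _parser.DoLengthyParseTask(s).Count;
}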

Why methods in C# are not automatically virtual? [duplicate]

Possible Duplicate:
Why C# implements methods as non-virtual by default?
It would be much less work to define which methods are NOT overridable instead of which are overridable, because (at least for me), when you're designing a class, you don't care whether its heirs will override your methods or not...
So, why are methods in C# not automatically virtual? What is the reasoning behind this?
Anders Hejlsberg answered that question in this interview and I quote:
There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.
A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."
When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.
Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.
You should care which members can be overridden in derived classes.
Deciding which methods to make virtual should be a deliberate, well-thought-out decision - not something that happens automatically - the same as any other decisions regarding the public surface of your API.
Beyond the design and clarity reasons, a non-virtual method is also technically better for a couple of reasons:
Virtual methods take longer to call (because the runtime needs to navigate through the virtual lookup table to find the actual method to call)
Virtual methods can't be inlined (because the compiler doesn't know at compile time which method will eventually be called)
Therefore unless you have specific intentions to override the method it is better for it to be non-virtual.
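A short illustration of those defaults, using invented names - members stay non-virtual unless you opt in, and an override can even be closed again:
public class Base
{
    public void Fixed() { }            // non-virtual by default: cannot be overridden
    public virtual void Hook() { }     // a deliberate, documented extension point
}

public class Derived : Base
{
    public sealed override void Hook() { }   // overrides Hook, and re-seals it for further subclasses
}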
Convention? Nothing more, I would think. I know Java automatically makes methods virtual, while C# does not, so there's clearly some disagreement at some level as to what's better. Personally, I prefer the C# default - consider that overriding methods is a lot less common than not overriding them, so it would seem more concise to define virtual methods explicitly.
See also the answer of Anders Hejlsberg (the inventor of C#) at A Conversation with Anders Hejlsberg, Part IV.
To paraphrase Eric Lippert, one of the guys who designed C#:
So your code doesn't get accidentally broken when the source code you received from a third party is changed. In other words, to prevent the Brittle Base Class problem.
If a method is virtual, it's because you (supposedly) made the conscious decision to allow the function to be replaceable, and designed, tested and documented around that. What happens if, say, you made a function "frob" and, in some subsequent version, the base class's makers decide to also make a function "frob"?
So it's clear whether you're allowing overriding of a method or forcing hiding of a method (via the new keyword).
Forcing you to add the keyword removes any ambiguity that might be there.
There are always two approaches when you want to specify that you are allowing or denying something. You can either trust everyone and punish sinners or you can distrust everyone and force them to ask permission.
There are some minor performance problems with virtual methods - can't be inlined, slower to call than non-virtual methods - but that really isn't that important.
More significantly, they pose a threat to your design. It's not about caring what others will do with your class, it's about good object design. When a method is virtual you are saying that you can plug it out and replace it with a different implementation. As I said, you must treat such a method as an enemy - you can't trust it. You can't rely on any side effects. You have to set up a very strict contract for that method and stick with it.
If you consider that humans are very lazy and forgetful creatures, which approach is more prudent?
I have never personally used a virtual method in my designs. If there is some logic that my class uses and I want it to be interchangeable, then I just create an interface for it. That interface then constitutes the above-mentioned contract. There are some situations where you really need virtual methods, but I think those are quite rare.
I believe there is an efficiency issue as well as the reasons others have posted. There is no need to spend the CPU cycles to look for an override if the method is not virtual.
When someone inherits from your class, that gives them the ability to change how any method works wherever the base class uses it. If you have a method that you absolutely need to perform an action a certain way in the base class, you have no way of preventing someone from changing that functionality.
Here's one example. Suppose you have a function that you expect never to throw an error. Someone comes in and decides to change it so that on Tuesdays it throws an out-of-range exception. Now the code in the base class fails, because something it depended on happening has changed.
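A hedged sketch of that scenario (the names are made up):
using System;

public class Processor
{
    public void Run(int value)
    {
        Validate(value);                    // the base class assumes this never throws
        Console.WriteLine("processed " + value);
    }

    protected virtual void Validate(int value) { }
}

public class PickyProcessor : Processor
{
    protected override void Validate(int value)
    {
        // The override silently breaks the base class's assumption.
        if (DateTime.Now.DayOfWeek == DayOfWeek.Tuesday)
            throw new ArgumentOutOfRangeException(nameof(value));
    }
}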
Because it's not Java
Seriously, it's just a different backing philosophy. Java wanted extensibility to be the default and encapsulation to be explicit, and C# wanted extensibility to be explicit and encapsulation to be the default.
Actually, that's bad design practice. Not caring which methods are overridable and which are not, I mean. You should always think about what should and shouldn't be overridable, just as you should carefully consider what should or shouldn't be public!

Interface design? Can I do it iteratively? How should I handle changes to the interface?

What is the best approach for defining interfaces in either C# or Java? Do we need to design them to be complete up front, or add methods as and when the real need arises?
Regards,
Srinivas
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List.Sort consumes the IComparer interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
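As a rough sketch of the "_V2 interface" suggestion above (hypothetical names, not from the guidelines): the shipped interface stays untouched, and the new member lives in a derived interface that implementers can opt into:
public interface IExporter
{
    void Export(string path);
}

public interface IExporterV2 : IExporter
{
    void Export(string path, bool overwrite);   // the member added after v1 shipped
}

public static class ExportRunner
{
    // Consumers that need the new capability check for it at runtime.
    public static void Run(IExporter exporter, string path)
    {
        if (exporter is IExporterV2 v2)
            v2.Export(path, overwrite: true);
        else
            exporter.Export(path);
    }
}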
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is a part of your interface it is very difficult to change or remove it. Be very conservative in your creation of your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points, then you probably didn't need interfaces to begin with, but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design, because changes to the interface are going to be difficult if a lot of code would suddenly break in the next iteration.
Changes to the interface need to be made with care, so it would be ideal if changes didn't have to be made after the initial release. This means that the first iteration will be very important in terms of the design.
However, if changes are required, one way to implement them would be to deprecate the old methods and provide a transition path for old code to use the newly-designed features. This does mean that the deprecated methods will still stick around to prevent the code using the old methods from breaking - this is not ideal, so it is a "price to pay" for not getting it right the first time around.
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface, the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
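To make the "symmetry completeness" check above concrete, a tiny hypothetical example where every operation has its counterpart:
public interface IListener { }

public interface ISession
{
    void Open(string connectionString);
    void Close();                              // pairs with Open

    void AddListener(IListener listener);
    void RemoveListener(IListener listener);   // pairs with AddListener
}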
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means, modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that another team or a customer will work with (think plugins, extension points or things like that), then you have to be conservative with what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did so with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff, though many of them still apply. Know what does or doesn't apply to your situation.
No rule will make up for lack of common sense.

Is using "base" bad practice even though it might be good for readability?

I know this is a subjective question, but I'm always curious about best-practices in coding style. ReSharper 4.5 is giving me a warning for the keyword "base" before base method calls in implementation classes, i.e.,
base.DoCommonBaseBehaviorThing();
While I appreciate the "less is better" mentality, I also have spent a lot of time debugging/maintaining highly-chained applications, and feel like it might help to know that a member call is to a base object just by looking at it. It's simple enough to change ReSharper's rules, of course, but what do y'all think? Should "base" be used when calling base members?
The only time you should use base.MethodCall(); is when you have an overridden method of the same name in the child class, but you actually want to call the method in the parent.
For all other cases, just use MethodCall();.
Keywords like this and base do not make the code more readable and should be avoided in all cases unless they are necessary - such as in the case I described above.
I am not really sure whether using this is bad practice or not. base, however, is not a matter of good or bad practice, but a matter of semantics. Whereas this is polymorphic, meaning that even if the method using it belongs to a base class, it will use the overridden method, base is not: base will always refer to the method defined in the base class of the method calling it, hence it is not polymorphic. This is a huge semantic difference, and base should be used accordingly. If you want that specific method, use base. If you want the call to remain polymorphic, don't use base.
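A compact sketch of that difference (invented types) - this.Report() stays polymorphic, while base.Report() always binds to the parent implementation:
public class Animal
{
    public virtual string Report() => "animal";
}

public class Dog : Animal
{
    public override string Report() => "dog";

    // Returns "dog / animal": the this call dispatches virtually,
    // the base call does not.
    public string Describe() => this.Report() + " / " + base.Report();
}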
Another important point to take into consideration is that while you haven't currently overridden that method, that doesn't mean you won't ever in the future - and by prefacing all of your calls with base. you won't get the new functionality without performing a find-and-replace on all those calls.
While prefacing calls with this. will not do anything other than decrease or increase readability (ignoring the situation where two variables in scope have the same name), the base. prefix will change the functionality of the code you write in many common scenarios. So I would never add base. unless it is needed.
I think generally you should use base only when overriding previous functionality.
Some languages (C# does not) also provide this functionality by calling the function by its base class name explicitly, like this: Foo.common() (called from somewhere in Bar, of course).
This would allow you to skip upwards in the chain, or pick from multiple implementations -- in the case of multiple-inheritance.
Regardless, I feel base should be used only when needed to explicitly call your parent's functionality because you are or have overridden that functionality in this class.
It's really a matter of personal preference. If you like seeing "base." at the beginning of your members, you can easily turn off the rule (Go to Options>Inspection Severity>Code Redundancies>Redundant 'base.' qualifier). Don't let non-behavioral static code analysis rules affect your preferred coding style.
EDIT
One thing to consider is that the static code analysis in FXCop and R# are there to provide rules for all possible needs. To actually adhere to all of the rules simultaneously is a little onerous. You should define your preferred coding style (if you're working in a team, do it collectively), and stick with it. Modify your rules to match your coding standards, not vice versa.
