How do get/set methods stop dependencies? (C#)

So I understand that if we want to change the implementation details of a class, using those details outside of the class will cause errors when things are changed; this is why we make those fields private. However, if we use get/set methods with a private field, doesn't this do the same thing? If I decided I didn't want my class to have a name and a username, just a name, and I deleted the private username field, the get/set methods would break with it, and that would cause the places where those methods are used to break as well. Isn't referencing one class a dependency no matter what, in case we change that class's methods or fields? What is the point of get/set methods, then, and how do they stop code from breaking like this?

However, if we use get/set methods with a private field, doesn't this do the same thing?
Yes. Arguably, yes. The original idea of Object Oriented Programming, as Alan Kay (who coined the term) initially conceived it, has been distorted. Alan Kay has expressed his dislike for setters:
Lots of so called object oriented languages have setters and when you have a setter on an object you turned it back into a data structure.
-- Alan Kay - Programming and Scaling (video).
Isn't referencing one class a dependency no matter what, in case we change that class's methods or fields?
Correct. If you are referencing a class from another, your classes are tightly coupled. In that case a change in one class will propagate to the other, regardless of whether the change is in public fields, getters, setters, or something else.
If you are using an interface or similar indirection, they are loosely coupled. This looseness gives you an opportunity to stop the propagation of the change, which you may or may not take.
Finally, if you are using an observer pattern or similar (e.g. events or listeners), you can have classes decoupled. This is, in a way, retrofitting the idea of passing messages as originally conceived by Alan Kay.
What is the point of get/set methods, then, and how do they stop code from breaking like this?
They allow you to change the internal representation of the class. While the common approach is to have setters and getters correspond to a field, that does not have to be the case. A getter might return a constant, or compute a value from multiple fields. Similarly, a setter might update multiple fields (or even do nothing).
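For instance, a minimal sketch of a getter computed from multiple fields, and a setter that updates multiple fields (the PersonName type is hypothetical, made up for illustration):

public class PersonName
{
    private string first = "";
    private string last = "";

    // No "FullName" field exists; the value is computed on the fly.
    public string FullName
    {
        get { return (first + " " + last).Trim(); }
        set
        {
            // One setter updates two fields behind a single property.
            var parts = (value ?? "").Split(new[] { ' ' }, 2);
            first = parts.Length > 0 ? parts[0] : "";
            last = parts.Length > 1 ? parts[1] : "";
        }
    }
}

Either internal representation (one field or two) can change later without touching the callers.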
Reasons to have setters (several of these, and of the getter reasons below, are illustrated in the sketch after the lists):
They give you an opportunity to implement validations.
They give you an opportunity to raise "changed" events.
They might be necessary to work with other systems (e.g. some dependency injection frameworks, and also some user interface frameworks).
You need to update multiple fields to keep an invariant. Presumably, updating those other fields doesn't result in some public property changing value in an unexpected way (and doesn't break the single responsibility principle, but that should be obvious). See the principle of least astonishment.
Reasons to have getters:
They give you an opportunity to implement lazy initialization.
They give you an opportunity to return computed values.
They might make debugging easier. Consider some getters for DEBUG builds only.
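A short sketch combining several of the points from both lists (validation, a "changed" event, lazy initialization); the Temperature class and its limit are made up for illustration:

using System;

public class Temperature
{
    private double celsius;
    private string description;              // lazily initialized cache

    public event EventHandler CelsiusChanged;

    public double Celsius
    {
        get { return celsius; }
        set
        {
            if (value < -273.15)             // validation opportunity
                throw new ArgumentOutOfRangeException(nameof(value), "Below absolute zero.");
            celsius = value;
            description = null;              // invalidate the cached value
            CelsiusChanged?.Invoke(this, EventArgs.Empty);  // "changed" event
        }
    }

    public string Description
    {
        get
        {
            // Lazy initialization: computed on first access only.
            if (description == null)
                description = celsius < 0 ? "freezing" : "above freezing";
            return description;
        }
    }
}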
If you had public fields, and then you decided you needed anything like what I described above, you would have to change them to getters and setters. That change requires recompiling the code that uses the class (even if its source stays the same, which would be the case with C# properties). This is a reason it is advised to do it preemptively, in particular in code libraries, so that an application using the library does not have to be recompiled when a newer version of the library needs these changes.
These are reasons to not have getters: Often, getters exist to access a member to call a method on it, which leads to very awkward interfaces (see the Law of Demeter). Or to take a decision, which may lead to a time-of-check to time-of-use bug, which also means the interface is not thread-safe ready. Or to do a computation, which is often better done by the class itself in a method (Tell, Don't Ask).
Setters, aside from being a code smell of bad encapsulation, can be indicative of an unintended state machine. If code needs to call a setter (change the state) to make sure a value is in place before calling a method, just make it a parameter (yes, even if you are going to repeat that parameter in a lot of methods). An interface like that is easy to misuse, and it is not thread-safe ready. In general, avoid any interface design in which the calling code has to invoke things in an order that the interface does not enforce; a good design will not let you call things in an order that results in an invalid state (see poka-yoke). Of course, not every contract can be expressed in the interface; we have exceptions for the rest.
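A sketch of the "unintended state machine" point, with hypothetical types; instead of requiring a setter call before the method, pass the value as a parameter:

// Error-prone: the caller must remember to set Currency before calling
// Format, and two threads can interleave the set and the call.
public class PriceFormatterBad
{
    public string Currency { get; set; }
    public string Format(decimal amount) => Currency + amount.ToString("0.00");
}

// Safer: the value travels with the call, so there is no order to get wrong.
public class PriceFormatter
{
    public string Format(decimal amount, string currency) => currency + amount.ToString("0.00");
}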
A thread-safe ready interface is one that can be implemented in a thread-safe fashion. If an interface is not thread-safe ready, the only way to avoid threading problems while using it is to wrap access to it with locks external to it, regardless of how the interface is implemented. This is often because the interface prevents consolidating reads and writes, leading to a time-of-check to time-of-use bug or an ABA problem.
There is value in public fields too, when appropriate. In particular for performance, and for interoperability with native code. You will find, for example, that the vector types used in game development libraries often have public fields for their coordinates.
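For illustration, a hypothetical vector type along those lines (a sketch, not any particular library's type):

// Public fields: no accessor call overhead, and a predictable memory
// layout, which matters when passing arrays of these to native code.
public struct Vector3
{
    public float X;
    public float Y;
    public float Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }
}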
As you can see, there can be good reasons for both having and not having getters and setters. Similarly, there can be good reasons for both having or not having public fields. Plus, either case can be problematic if not used appropriately.
We have guidelines and "best practices" to avoid the pitfalls. Not having public fields is a very good default. And not every field needs getters and setters. However, you can make getters and setters, and you can make fields public. Do that if you have a good reason to do it.
If you make every field public, you will likely run into trouble by breaking encapsulation. If you make getters and setters for each and every field, it is not much better. Use them thoughtfully.


Why is it good to use Properties instead of fields?

I read a lot about how I am never supposed to use fields in my models and DTOs, but I don't ever read why this is.
public int Property { get; set; }
public int Foo;
Under the hood, what is the difference between these two?
One important difference is that interfaces can have properties but not fields.
This article by Jon Skeet is very useful in understanding this.
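For example (a hypothetical interface):

// An interface can declare a property, but it cannot declare a field.
public interface INamed
{
    string Name { get; set; }
    // string Name;   // would not compile: interfaces cannot contain fields
}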
One good reason is that they allow you to include logic and validation inside the getters and setters.
Taken from: http://csharpindepth.com/Articles/Chapter8/PropertiesMatter.aspx
Practical benefits of properties
There are times when you could use non-private fields, because for whatever reason you don't care about the compatibility reasons above. However, there are still benefits to using properties even for trivial situations:
There's more fine-grained access control with properties. Need it to be publicly gettable but really only want it set with protected access? No problem (from C# 2 onwards, at least).
Want to break into the debugger whenever the value changes? Just add a breakpoint in the setter.
Want to log all access? Just add logging to the getter.
Properties are used for data binding; fields aren't.
None of these are traditional "adding real logic" uses of properties, but all are tricky/impossible with plain fields. You could do this on an "as and when I need it" basis, but why not just be consistent to start with? It's even more of a no-brainer with the automatic properties of C# 3.
The philosophical reason for only exposing properties
For every type you write, you should consider its interface to the rest of the world (including classes within the same assembly). This is its description of what it makes available, its outward persona. Implementation shouldn't be part of that description, any more than it absolutely has to be. (That's why I prefer composition to inheritance, where the choice makes sense - inheritance generally exposes or limits possible implementations.)
A property communicates the idea of "I will make a value available to you, or accept a value from you." It's not an implementation concept, it's an interface concept. A field, on the other hand, communicates the implementation - it says "this type represents a value in this very specific way". There's no encapsulation, it's the bare storage format. This is part of the reason fields aren't part of interfaces - they don't belong there, as they talk about how something is achieved rather than what is achieved.
I quite agree that a lot of the time, fields could actually be used with no issues in the lifetime of the application. It's just not clear beforehand which those times are, and it still violates the design principle of not exposing implementation.
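To make the "under the hood" part of the question concrete: an auto-property compiles to a hidden backing field plus accessor methods, roughly like the sketch below (the member names are illustrative; the compiler's actual generated names are not valid C# identifiers):

public class Example
{
    // What you write:
    public int Foo { get; set; }

    // Roughly what the compiler generates for it (illustrative names;
    // the real accessors are called get_Foo() and set_Foo()):
    private int _foo;
    public int GetFoo() { return _foo; }
    public void SetFoo(int value) { _foo = value; }
}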

Should I recommend sealing classes by default?

In a big project I work on, I am considering recommending that other programmers always seal their classes if they haven't considered how those classes should be subclassed. Oftentimes, less experienced programmers never consider this.
I find it odd that in Java and C# classes are non-sealed/non-final by default. I think making classes sealed greatly improves the readability of the code.
Notice that this is in-house code that we can always change, should the rare case occur that we need to subclass.
What are your experiences? I meet quite some resistance to this idea. Are people so lazy that they can't be bothered to type sealed?
Okay, as so many other people have weighed in...
Yes, I think it's entirely reasonable to recommend that classes are sealed by default.
This goes along with the recommendation from Josh Bloch in his excellent book Effective Java, 2nd edition:
Design for inheritance, or prohibit it.
Designing for inheritance is hard, and can make your implementation less flexible, especially if you have virtual methods, one of which calls the other. Maybe they're overloads, maybe they're not. The fact that one calls the other must be documented otherwise you can't override either method safely - you don't know when it'll be called, or whether you're safe to call the other method without risking a stack overflow.
Now if you later want to change which method calls which in a later version, you can't - you'll potentially break subclasses. So in the name of "flexibility" you've actually made the implementation less flexible, and had to document your implementation details more closely. That doesn't sound like a great idea to me.
Next up is immutability - I like immutable types. I find them easier to reason about than mutable types. It's one reason why the Joda Time API is nicer than using Date and Calendar in Java. But an unsealed class can never be known to be immutable. If I accept a parameter of type Foo, I may be able to rely on the properties declared in Foo not changing over time, but I can't rely on the object itself not being modified - there could be a mutable property in the subclass. Heaven help me if that property is also used by an override of some virtual method. Wave goodbye to many of the benefits of immutability. (Ironically, Joda Time has very large inheritance hierarchies - often with things saying "subclasses should be immutable". The large inheritance hierarchy of Chronology made it hard to understand when porting to C#.)
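A sketch of the immutability point, with hypothetical types:

// Looks immutable: no setters, state assigned once in the constructor.
public class Money                     // note: NOT sealed
{
    public decimal Amount { get; }
    public Money(decimal amount) { Amount = amount; }
    public virtual decimal Tax() => Amount * 0.2m;
}

// A subclass quietly reintroduces mutable state...
public class SneakyMoney : Money
{
    public decimal Surcharge { get; set; }     // mutable!
    public SneakyMoney(decimal amount) : base(amount) { }
    public override decimal Tax() => (Amount + Surcharge) * 0.2m;
}

// ...so code that accepts a Money parameter cannot rely on Tax()
// returning the same value twice.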
Finally, there's the aspect of overuse of inheritance. Personally I favour composition over inheritance where feasible. I love polymorphism for interfaces, and occasionally I use inheritance of implementation - but it's rarely a great fit in my experience. Making classes sealed avoids them being inappropriately derived from where composition would be a better fit.
EDIT: I'd also like to point readers at Eric Lippert's blog post from 2004 on why so many of the framework classes are sealed. There are plenty of places where I wish .NET provided an interface we could work to for testability, but that's a slightly different request...
It is my opinion that architectural design decisions are made to communicate to other developers (including future maintenance developers) something important.
Sealing classes communicates that the implementation should not be overridden. It communicates that the class should not be impersonated. There are good reasons to seal.
If you take the unusual approach of sealing everything (and this is unusual), then your design decisions now communicate things that are really not important - like that the class wasn't intended to be inherited by the original/authoring developer.
But then how would you communicate to other developers that a particular class should not be inherited for a specific reason? You really can't. You are stuck.
Also, sealing a class doesn't improve readability. I just don't see that. If inheritance is a problem in OOP development, then we have a much larger problem.
I'd like to think that I'm a reasonably-experienced programmer and, if I've learned nothing else, it's that I am remarkably bad at predicting the future.
Typing sealed is not hard; I just don't want to irritate a developer down the road (who could be me!) who discovers that a problem could be easily solved with a little inheritance.
I also have no idea how sealing a class makes it more readable. Are you trying to force people to prefer composition to inheritance?
© Jeffrey Richter
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. However, once a class is unsealed, you can never change it to sealed in the future, as this would break all derived classes. In addition, if the unsealed class defines any unsealed virtual methods, ordering of the virtual method calls must be maintained with new versions or there is the potential of breaking derived types in the future.
Performance: As discussed in the previous section, calling a virtual method doesn't perform as well as calling a nonvirtual method because the CLR must look up the type of the object at runtime in order to determine which type defines the method to call. However, if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method nonvirtually. It can do this because it knows there can't possibly be a derived class if the class is sealed.
Security and predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class's state if any data fields or methods that internally manipulate fields are accessible and not private. In addition, a virtual method can be overridden by a derived class, and the derived class can decide whether to call the base class's implementation. By making a method, property, or event virtual, the base class is giving up some control over its behavior and its state. Unless carefully thought out, this can cause the object to behave unpredictably, and it opens up potential security holes.
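The versioning point is easy to see in a sketch (hypothetical type):

// v1 ships sealed: nobody can derive from it, so nothing can break.
public sealed class Invoice
{
    public decimal Total { get; }
    public Invoice(decimal total) { Total = total; }
}

// v2 may simply drop 'sealed' if inheritance turns out to be wanted;
// no existing caller is affected. The reverse change, adding 'sealed'
// in v2, would break every subclass written against v1.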
There shouldn't be anything wrong with inheriting from a class.
You should seal a class only when there's a good reason why it should never be inherited.
Besides, if you seal them all, it will only decrease maintainability. Every time someone wants to inherit from one of your classes, they will see it is sealed, and then they'll either remove the seal (messing with code they shouldn't have to mess with) or, worse, create a poor implementation of your class for themselves.
Then you'll have two implementations of the same thing, one probably worse than the other, and two pieces of code to maintain.
Better just keep it unsealed. No harm in it being unsealed.
Frankly, I think that classes not being sealed by default in C# is kind of weird and out of place with how the rest of the defaults in the language work.
By default, classes are internal.
By default, fields are private.
By default, members are private.
There seems to be a trend that points to the least plausible access by default. It would stand to reason that an unsealed keyword should exist in C# instead of a sealed one.
Personally, I'd rather classes were sealed by default. On most occasions when someone writes a class, they are not designing it with subclassing in mind and all the complexities that come along with it. Designing for future subclassing should be a conscious act, and therefore I'd rather you explicitly had to state it.
"...consider[ing] how their classes should be sub classed..." shouldn't matter.
At least a half dozen times over the past few years I've found myself cursing some open source team or another for a sloppy mix of protected and private, making it impossible to simply extend a class without copying the source of the entire parent class. (In most cases, overriding a particular method required access to private members.)
One example was a JSTL tag that almost did what I wanted. I needed to override one small thing. Nope, sorry, I had to completely copy the source of the parent.
I only seal classes if I am working on a reusable component that I intend to distribute and don't want the end user to inherit from, or, as a system architect, if I know I don't want another developer on the team to inherit from it. However, there is usually some reason for it.
Just because a class isn't being inherited from, I don't think it should automatically be marked sealed. Also, it annoys me to no end when I want to do something tricky in .NET, but then realize MS marks tons of their classes sealed.
This is a very opinionated question that's likely to garner some very opinionated answers ;-)
That said, in my opinion, I strongly prefer NOT making my classes sealed/final, particularly at the beginning. Doing this makes it very difficult to infer the intended extensibility points, and it's nearly impossible to get them right at the beginning. IMHO, overuse of encapsulation is worse than overuse of polymorphism.
Your house, your rule.
You can also have the complementary rule instead: a class that can be subclassed must be annotated; nobody should subclass a class that's not annotated so. This rule is not harder to follow than your rule.
The main purpose of a sealed class is to take away the inheritance feature from the user, so they cannot derive a class from a sealed class. Are you sure you want to do that? Or do you want to start with all classes sealed, and then unseal each one when you need to make it inheritable? Well, that might be OK when everything is in-house and in one team, but in case other teams use your DLLs in the future, it will not be possible to recompile the whole source code every time a class needs to be unsealed.
I won't recommend this, but that's just my opinion.
I don't like that way of thinking. Java and C# are made to be OOP languages. These languages are designed in a way where a class can have a parent or a child. That's it.
Some people say that we should always start from the most restrictive modifier (private, protected, ...) and set a member to public only when it is used externally. These people are, to me, lazy and don't want to think about a good design at the beginning of the project.
My answer is: design your apps in a good way now. Make your class sealed when it needs to be sealed and private when it needs to be private. Don't make them sealed by default.
I find that sealed / final classes are actually pretty rare, in my experience; I would certainly not recommend suggesting all classes be sealed / final by default. That specification makes a certain statement about the state of the code (i.e., that it's complete) that is not necessarily always true during development time.
I'll also add that leaving a class unsealed requires more design / test effort to make sure that the exposed behaviours are well-defined and tested; heavy unit testing is critical, IMO, to achieve a level of confidence in the behaviour of the class that appears to be desired by the proponents of "sealed". But IMO, that increased level of effort translates directly to a high level of confidence and to higher quality code.

Is it good practice to use reflection in your business logic?

I need to work on an application that consists of two major parts:
The business logic part with specific business classes (e.g. Book, Library, Author, ...)
A generic part that can show Books, Libraries, ... in data grids, map them to a database, and so on.
The generic part uses reflection to get the data out of the business classes without the need to write specific data-grid or database logic in the business classes. This works fine and allows us to add new business classes (e.g. LibraryMember) without the need to adjust the data grid and database logic.
However, over the years, code was added to the business classes that also makes use of reflection to get things done in the business classes. E.g. if the Author of a Book is changed, observers are called to tell the Author itself that it should add this book to its collection of books written by it (Author.Books). In these observers, not only are the instances passed, but also information that is directly derived from reflection (the FieldInfo is added to the observer call so that the caller knows that the field "Author" of the book has changed).
I can clearly see advantages in using reflection in these generic modules (like the data grid or database interface), but it seems to me that using reflection in the business classes is a bad idea. After all, shouldn't the application work without relying on reflection as much as possible? Or is the use of reflection the 'normal way of working' in the 21st century?
Is it good practice to use reflection in your business logic?
EDIT: Some clarification on the remark of Kirk:
Imagine that Author implements an observer on Book.
Book calls all its observers whenever some field of Book changes (like Title, Year, #Pages, Author, ...). The FieldInfo of the changed field is passed to the observer.
The Author-observer then uses this FieldInfo to decide whether it is interested in the change. In this case, if the FieldInfo is for the field Author of Book, the Author-observer will update its own vector of Books.
The main danger with reflection is that the flexibility can escalate into disorganized, unmaintainable code, particularly if more junior developers are the ones making changes; they may not fully understand the reflection code, or be so enamored of it that they use it to solve every problem, even when simpler tools would suffice.
My observation has been that over-generalization leads to over-complication. It gets worse when the actual boundary cases turn out to not be accommodated by the generalized design, requiring hacks to fit in the new features on schedule, transmuting flexibility into complexity.
I avoid using reflection. Yes, it makes your program more flexible. But this flexibility comes at a high price: There is no compile-time checking of field names or types or whatever information you're collecting through reflection.
Like many things, it depends on what you're doing. If the nature of your logic is that you NEVER compare the field names (or whatever) found to a constant value, then using reflection is probably a good thing. But if you use reflection to find field names, and then loop through them searching for the fields named "Author" and "Title", you've just created a more-complex simulation of an object with two named fields. And what if you search for "Author" when the field is actually called "AuthorName", or you intend to search for "Author" and accidentally type "Auhtor"? Now you have errors that won't show up until runtime instead of being flagged at compile time.
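A sketch of that failure mode, using a hypothetical Book class; the typo compiles fine and only fails at runtime:

using System.Reflection;

public class Book
{
    public string AuthorName = "";
}

public static class Demo
{
    public static void Main()
    {
        var book = new Book();
        book.AuthorName = "A. Writer";   // compile-time checked: a typo here would not build

        // Reflection: "Auhtor" compiles happily and simply returns null at runtime.
        FieldInfo field = typeof(Book).GetField("Auhtor");
        System.Console.WriteLine(field == null
            ? "Field not found - discovered only at runtime"
            : field.Name);
    }
}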
With hard-coded field names, your IDE can tell you every place that a certain field is used. With reflection ... not so easy to tell. Maybe you can do a text search on the name, but if field names are passed around as variables, it can get very difficult.
I'm working on a system now where the original authors loved reflection and similar techniques. There are all sorts of places where they need to create an instance of a class and instead of just saying "new" and the class, they create a token that they look up in a table to get the class name. What does this gain? Yes, we could change the table to map that token to a different name. And this gains us ... what? When was the last time that you said, "Oh, every place that my program creates an instance of Customer, I want to change to create an instance of NewKindOfCustomer." If you have changes to a class, you change the class, not create a new class but keep the old one around for nostalgia.
To take a similar issue, I make a regular practice of building data entry screens on the fly by asking the database for a list of field names, types, and sizes, and then laying it out from there. This gives me the advantage of using the same program for all the simpler data entry screens -- just pass in the table name as a parameter -- and if a field is added or deleted, zero code change is required. But this only works as long as I don't care what the fields are. Once I start having validations or side effects specific to this screen, the system is more trouble than it's worth, and I'm better off to fall back to more explicit coding.
Based on your edit, it sounds like you are using reflection purely as a mechanism for identifying fields, as opposed to dynamic behavior such as looking fields up by name, which should be avoided when possible (since such lookups usually use strings, which ruin static type safety). Using FieldInfo to provide an identifier for a field is fairly harmless, though it does expose some internals (the info class) in a way that is not entirely ideal.
I tend not to use reflection where I can help it. By using interfaces and coding against them, I can do a lot of things that some would use reflection for.
But I'm a big fan of "if it works, it works."
Also, by using reflection you probably have something that can adapt fairly easily.
I.e., the only objection most would have is fairly religious... and if your performance is fine and the code is maintainable and clear, who cares?
Edit: based on your edit, I would indeed use interfaces to achieve what you want, unless I misunderstand you.
I think it is a good idea to stay away from reflection when possible, but don't be afraid to resort to it when it provides a better or more flexible solution to your problem. The performance hit for anything but tight-loop operations is likely to be minimal in the overall scheme of an application or web form request.
Just a good article to share about reflection -
http://www.simple-talk.com/dotnet/.net-framework/a-defense-of-reflection-in-.net/
I tend to use interfaces in my business layer and leave the reflection to my presentation layer. This is not an absolute but rather a guideline.

C# / Object oriented design - maintaining valid object state

When designing a class, should logic to maintain valid state be incorporated in the class or outside of it ? That is, should properties throw exceptions on invalid states (i.e. value out of range, etc.), or should this validation be performed when the instance of the class is being constructed/modified ?
It belongs in the class. Nothing but the class itself (and any helpers it delegates to) should know, or be concerned with, the rules that determine valid or invalid state.
Yes, properties should check for valid/invalid values when being set. That's what they're for.
It should be impossible to put a class into an invalid state, regardless of the code outside it. That should make it clear.
On the other hand, the code outside it is still responsible for using the class correctly, so frequently it will make sense to check twice. The class's methods may throw an ArgumentException if passed something they don't like, and the calling code should ensure that this doesn't happen by having the right logic in place to validate input, etc.
There are also more complex cases where there are different "levels" of client involved in a system. An example is an OS - an application runs in "User mode" and ought to be incapable of putting the OS into an invalid state. But a driver runs in "Kernel mode" and is perfectly capable of corrupting the OS state, because it is part of a team that is responsible for implementing the services used by the applications.
This kind of dual-level arrangement can occur in object models; there can be "exterior" clients of the model that only see valid states, and "interior" clients (plug-ins, extensions, add-ons) which have to be able to see what would otherwise be regarded as "invalid" states, because they have a role to play in implementing state transitions. The definition of invalid/valid is different depending on the role being played by the client.
Generally this belongs in the class itself, but to some extent it also has to depend on your definition of 'valid'. For example, consider the System.IO.FileInfo class. Is it valid if it refers to a file that no longer exists? How would it know?
I would agree with @Joel. Typically this would be found in the class. However, I would not have the property accessors implement the validation logic. Rather, I'd recommend a validation method for the persistence layer to call when the object is being persisted. This allows you to localize the validation logic in a single place and make different choices for valid/invalid based on the persistence operation being performed. If, for example, you are planning to delete an object from the database, do you care that some of its properties are invalid? Probably not; as long as the ID and row versions are the same as those in the database, you just go ahead and delete it. Likewise, you may have different rules for inserts and updates, e.g., some fields may be null on insert but required on update.
It depends.
If the validation is simple, and can be checked using only information contained in the class, then most of the time it's worth while to add the state checks to the class.
There are times, however, when it's not really possible or desirable to do so.
A great example is a compiler. Checking the state of abstract syntax trees (ASTs) to make sure a program is valid is usually not done by either property setters or constructors. Instead, the validation is usually done by a tree visitor, or a series of mutually recursive methods in some sort of "semantic analysis class". In either case, however, properties are validated long after their values are set.
Also, with objects used to hold UI state, it's usually a bad idea (from a usability perspective) to throw exceptions when invalid values are set. This is particularly true for apps that use WPF data binding. In that case, you want to display some sort of modeless feedback to the customer rather than throwing an exception.
The class really should maintain valid values. It shouldn't matter whether these are entered through the constructor or through properties; both should reject invalid values. If a constructor parameter and a property require the same validation, you can either use a common private method to validate the value for both, or do the validation in the property and use the property inside your constructor when setting the local variables. Personally, I would recommend the common validation method, as sketched below.
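A sketch of the shared-validation approach just described (the Person type and its range are hypothetical):

using System;

public class Person
{
    private int age;

    public Person(int age)
    {
        ValidateAge(age);          // the constructor applies the same check
        this.age = age;
    }

    public int Age
    {
        get { return age; }
        set
        {
            ValidateAge(value);
            age = value;
        }
    }

    // One private method keeps the constructor and the property in agreement.
    private static void ValidateAge(int value)
    {
        if (value < 0 || value > 150)
            throw new ArgumentOutOfRangeException(nameof(value), "Age out of range.");
    }
}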
Your class should throw an exception if it receives invalid values. All in all, good design can help reduce the chances of this happening.
The valid state of a class is best expressed with the concept of a class invariant: a boolean expression which must hold true for objects of that class to be valid.
The Design by Contract approach suggests that you, as a developer of class C, should guarantee that the class invariant holds:
After construction
After a call to a public method
This implies that, since the object is encapsulated (no one can modify it except via calls to public methods), the invariant will also be satisfied on entering any public method, or on entering the destructor (in languages with destructors), if any.
Each public method states preconditions that the caller must satisfy, and postconditions that will be satisfied by the class at the end of every public method. Violating a precondition effectively violates the contract of the class, so that it can still be correct but it doesn't have to behave in any particular way, nor maintain the invariant, if it is called with a precondition violation. A class that fulfills its contract in the absence of caller violations can be said to be correct.
A concept different from correct but complementary to it (and certainly belonging to the multiple factors of software quality) is that of robust. In our context, a robust class will detect when one of its methods is called without fulfilling the method preconditions. In such cases, an assertion violation exception will typically be thrown, so that the caller knows that he blew it.
So, answering your question, both the class and its caller have obligations as part of the class contract. A robust class will detect contract violations and spit. A correct caller will not violate the contract.
Classes belonging to the public interface of a code library should be compiled as robust, while inner classes could be tested as robust but then run in the released product as just correct, without the precondition checks on. This depends on a number of things and was discussed elsewhere.
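A sketch of how the robust/correct distinction can look in C# (the BankAccount type is hypothetical; the invariant check compiles away when DEBUG is not defined, matching the idea of shipping without the precondition checks on):

using System;
using System.Diagnostics;

public class BankAccount
{
    private decimal balance;

    // Class invariant: the balance never goes negative.
    [Conditional("DEBUG")]       // calls to this method vanish in release builds
    private void CheckInvariant() => Debug.Assert(balance >= 0);

    public void Withdraw(decimal amount)
    {
        // Precondition: the caller's side of the contract.
        if (amount <= 0 || amount > balance)
            throw new ArgumentOutOfRangeException(nameof(amount));

        balance -= amount;

        CheckInvariant();        // the class's side: the invariant holds on exit
    }
}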

Is using "base" bad practice even though it might be good for readability?

I know this is a subjective question, but I'm always curious about best practices in coding style. ReSharper 4.5 is giving me a warning for the keyword "base" before base method calls in implementation classes, e.g.,
base.DoCommonBaseBehaviorThing();
While I appreciate the "less is better" mentality, I also have spent a lot of time debugging/maintaining highly-chained applications, and feel like it might help to know that a member call is to a base object just by looking at it. It's simple enough to change ReSharper's rules, of course, but what do y'all think? Should "base" be used when calling base members?
The only time you should use base.MethodCall(); is when you have an overridden method of the same name in the child class, but you actually want to call the method in the parent.
For all other cases, just use MethodCall();.
Keywords like this and base do not make the code more readable and should be avoided for all cases unless they are necessary--such as in the case I described above.
I am not really sure whether using this is bad practice or not. base, however, is not a matter of good or bad practice, but a matter of semantics. Whereas this is polymorphic, meaning that even if the method using it belongs to a base class, the overridden method will be used, base is not: base will always refer to the method as defined in the base class, hence it is not polymorphic. This is a huge semantic difference, and base should be used accordingly. If you want that specific method, use base. If you want the call to remain polymorphic, don't use base.
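In code, the distinction looks like this (hypothetical types):

public class Animal
{
    public virtual string Describe() => "an animal";
}

public class Dog : Animal
{
    // base.Describe() is a non-polymorphic call to the parent's method.
    // Calling Describe() here instead would be polymorphic and would
    // recurse into this override forever.
    public override string Describe() => base.Describe() + ", specifically a dog";
}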
Another important point to take into consideration is that while you haven't currently overridden that method, that doesn't mean you won't ever in the future, and by prefacing all of your calls with base. you won't get the new functionality without performing a find-and-replace over all your calls.
While prefacing calls with this. will not do anything other than decrease or increase readability (ignoring the situation where two variables in scope have the same name), the base. prefix will change the functionality of the code you write in many common scenarios. So I would never add base. unless it is needed.
I think generally you should use base only when overriding previous functionality.
Some languages (though not C#) also provide this functionality by calling the function by its base class name explicitly, like this: Foo.common() (called from somewhere in Bar, of course).
This would allow you to skip upwards in the chain, or pick from multiple implementations -- in the case of multiple-inheritance.
Regardless, I feel base should be used only when needed to explicitly call your parent's functionality, because you are overriding or have overridden that functionality in this class.
It's really a matter of personal preference. If you like seeing "base." at the beginning of your members, you can easily turn off the rule (Go to Options>Inspection Severity>Code Redundancies>Redundant 'base.' qualifier). Don't let non-behavioral static code analysis rules affect your preferred coding style.
EDIT
One thing to consider is that the static code analysis rules in FxCop and R# are there to cover all possible needs. Actually adhering to all of the rules simultaneously is a little onerous. You should define your preferred coding style (if you're working in a team, do it collectively) and stick with it. Modify your rules to match your coding standards, not vice versa.
