SOLID, SRP, IComparable - C#

OK, is implementing IComparable and other interfaces (like IDisposable) on a single class a violation of the SRP?
SRP states that every class should have a single responsibility, and that its methods should be interconnected to a high degree to achieve cohesive classes.
Isn't comparison another responsibility?
Some clarification would be appreciated.

If I were you I would try to adhere to SRP, but not so strictly that the effort becomes counter-productive. So with that said, what should you do? Either implement IComparable and have comparison fully encapsulated in the object, or have a separate comparator and implement the comparison logic in it.
As far as SRP is concerned: if the comparison is fairly basic and should not be subject to change, I would implement IComparable and be done with it. If you can reasonably foresee changes in the future, or if the comparison is use case-dependent, then I would go the comparator route.
The ultimate goal is to develop closed components and make them cooperate by composing them. So if the comparison has little chance to change, the component can be closed and you won't hear about it again. You could also comment the use of IComparable in your code and, if the change that was said not to happen does indeed happen, switch to composing with a comparator.
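To make the two routes concrete, here is a minimal sketch (the Invoice type and its Total property are invented for illustration): option 1 encapsulates comparison in the object via IComparable<T>; option 2 moves it into a separate comparer.

using System;
using System.Collections.Generic;

// Option 1: comparison fully encapsulated in the object.
public class Invoice : IComparable<Invoice>
{
    public decimal Total { get; set; }

    public int CompareTo(Invoice other)
    {
        // By convention, any instance compares greater than null.
        if (other == null) return 1;
        return Total.CompareTo(other.Total);
    }
}

// Option 2: a separate comparator; handy when the ordering is use case-dependent.
public class InvoiceByTotalComparer : IComparer<Invoice>
{
    public int Compare(Invoice x, Invoice y)
    {
        if (ReferenceEquals(x, y)) return 0;
        if (x == null) return -1;
        if (y == null) return 1;
        return x.Total.CompareTo(y.Total);
    }
}

Switching from the first to the second later is mostly mechanical: callers move from invoices.Sort() to invoices.Sort(new InvoiceByTotalComparer()).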

I would argue that implementations of IComparable and IDisposable are not responsibilities at all, and therefore do not violate SRP.
In the context of SRP, a responsibility is to an interactor of your system (i.e. a user, role or external system). If your system has a Business Requirements Document, all responsibilities should be at least inferred within the functional or non-functional requirements. If not, ask yourself which business owner is going to ask you to change how an object disposes itself.
On the first project I worked on after learning about SRP, we interpreted it as "one public method per class" and applied it as a hard rule. While that made it easy to stay in "compliance", we ended up with code that was far more complicated than it needed to be.
If your IComparable/IDisposable implementations need to change, that change will be driven by the functional (business) part of your class also requiring change (at the same time and for the same reason).

Related

TDD - Extract interface or make methods virtual

Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now if the constructor of that class is public and isn't too complex or dependent on complex types, it would have the same effect to just make the method in question virtual and inherit.
Is this preferable over extracting an interface? If so, why?
Edit:
class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // is slow even when using a MemoryStream
        throw new NotImplementedException();
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit and overwrite those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask rather than the method directly. This will provide a more robust test suite as well. You also need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you, basically a promise that your implementation will provide access to a specified set of contact points (methods, properties, etc), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class on the other hand, in addition of a contract, specifies at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still enables you to call in the implementation of the base, and still provide your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, and multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract, or whether you want to extract some behaviour as well; the answer should be obvious for a specific case.
As for the IParser / Parser pairs, first they are great for unit testing and for dependency injection, and second, they do not charge you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things, and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) it in their implementations, but that you actually expect them to (do you?).
Such a design comes with rather heavy consequences, so unless you explicitly need it, you shouldn't use it. Try sticking to programming to an interface instead.
One of the good object-oriented design principles states that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (your classes should not only implement interfaces/abstract classes, but also consist of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is created - for now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data stream mining parser and what-not-else. Do it properly now, and save yourself time and trouble later.
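As a hedged sketch of the interface-extraction route (IParser and StubParser are invented names, not from the question):

using System;
using System.Collections.Generic;
using System.IO;

public interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

public class Parser : IParser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // the real, slow parsing lives here
        throw new NotImplementedException();
    }
}

// In tests, a hand-rolled stub sidesteps the slow parse entirely:
public class StubParser : IParser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        return new Dictionary<string, int> { { "token", 42 } };
    }
}

Consumers then take an IParser (e.g. via constructor injection), so the test decides which implementation they get.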

Should I recommend sealing classes by default?

In a big project I work for, I am considering recommending other programmers to always seal their classes if they haven't considered how their classes should be subclassed. Often times, less-experienced programmers never consider this.
I find it odd that in Java and C# classes are non-sealed / non-final by default. I think making classes sealed greatly improves readability of the code.
Notice that this is in-house code that we can always change should the rare case occur that we need to subclass.
What are your experiences? I meet quite some resistance to this idea. Are people so lazy that they can't be bothered to type sealed?
Okay, as so many other people have weighed in...
Yes, I think it's entirely reasonable to recommend that classes are sealed by default.
This goes along with the recommendation from Josh Bloch in his excellent book Effective Java, 2nd edition:
Design for inheritance, or prohibit it.
Designing for inheritance is hard, and can make your implementation less flexible, especially if you have virtual methods, one of which calls the other. Maybe they're overloads, maybe they're not. The fact that one calls the other must be documented otherwise you can't override either method safely - you don't know when it'll be called, or whether you're safe to call the other method without risking a stack overflow.
Now if you want to change which method calls which in a later version, you can't - you'd potentially break subclasses. So in the name of "flexibility" you've actually made the implementation less flexible, and had to document your implementation details more closely. That doesn't sound like a great idea to me.
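A minimal sketch of the overload trap described above (Logger and its methods are invented for illustration):

using System;

public class Logger
{
    // Undocumented detail: this overload forwards to the other one.
    public virtual void Log(string message)
    {
        Log(message, null);
    }

    public virtual void Log(string message, Exception error)
    {
        Console.WriteLine(error == null ? message : message + ": " + error.Message);
    }
}

public class AuditLogger : Logger
{
    // Overriding only this overload silently changes the behaviour of
    // Log(string) too, because the base forwards to it. And if this
    // override called base.Log(message), the base would dispatch back
    // here, recursing until the stack overflows.
    public override void Log(string message, Exception error)
    {
        Console.WriteLine("AUDIT " + message);
    }
}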
Next up is immutability - I like immutable types. I find them easier to reason about than mutable types. It's one reason why the Joda Time API is nicer than using Date and Calendar in Java. But an unsealed class can never be known to be immutable. If I accept a parameter of type Foo, I may be able to rely on the properties declared in Foo not changing over time, but I can't rely on the object itself not being modified - there could be a mutable property in the subclass. Heaven help me if that property is also used by an override of some virtual method. Wave goodbye to many of the benefits of immutability. (Ironically, Joda Time has very large inheritance hierarchies - often with documentation saying "subclasses should be immutable". The large inheritance hierarchy of Chronology made it hard to understand when porting it to C#.)
Finally, there's the aspect of overuse of inheritance. Personally I favour composition over inheritance where feasible. I love polymorphism for interfaces, and occasionally I use inheritance of implementation - but it's rarely a great fit in my experience. Making classes sealed avoids them being inappropriately derived from where composition would be a better fit.
EDIT: I'd also like to point readers at Eric Lippert's blog post from 2004 on why so many of the framework classes are sealed. There are plenty of places where I wish .NET provided an interface we could work to for testability, but that's a slightly different request...
It is my opinion that architectural design decisions are made to communicate to other developers (including future maintenance developers) something important.
Sealing classes communicates that the implementation should not be overridden. It communicates that the class should not be impersonated. There are good reasons to seal.
If you take the unusual approach of sealing everything (and this is unusual), then your design decisions now communicate things that are really not important - like that the class wasn't intended to be inherited by the original/authoring developer.
But then how would you communicate to other developers that the class should not be inherited because of something? You really can't. You are stuck.
Also, sealing a class doesn't improve readability. I just don't see that. If inheritance is a problem in OOP development, then we have a much larger problem.
I'd like to think that I'm a reasonably-experienced programmer and, if I've learned nothing else, it's that I am remarkably bad at predicting the future.
Typing sealed is not hard, I just don't want to irritate a developer down the road (who could be me!) who discovers that a problem could be easily solved with a little inheritance.
I also have no idea how sealing a class makes it more readable. Are you trying to force people to prefer composition to inheritance?
© Jeffrey Richter
There are three reasons why a sealed class is better than an unsealed class:

Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. However, once a class is unsealed, you can never change it to sealed in the future as this would break all derived classes. In addition, if the unsealed class defines any unsealed virtual methods, ordering of the virtual method calls must be maintained with new versions or there is the potential of breaking derived types in the future.

Performance: As discussed in the previous section, calling a virtual method doesn't perform as well as calling a nonvirtual method because the CLR must look up the type of the object at runtime in order to determine which type defines the method to call. However, if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method nonvirtually. It can do this because it knows there can't possibly be a derived class if the class is sealed.

Security and predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class's state if any data fields or methods that internally manipulate fields are accessible and not private. In addition, a virtual method can be overridden by a derived class, and the derived class can decide whether to call the base class's implementation. By making a method, property, or event virtual, the base class is giving up some control over its behavior and its state. Unless carefully thought out, this can cause the object to behave unpredictably, and it opens up potential security holes.
There shouldn't be anything wrong with inheriting from a class.
You should seal a class only when there's a good reason why it should never be inherited.
Besides, if you seal them all, it will only decrease maintainability. Every time someone wants to inherit from one of your classes, he will see it is sealed, and then he'll either remove the seal (messing with code he shouldn't have to touch) or, worse, create a poor reimplementation of your class for himself.
Then you'll have 2 implementations of the same thing, one probably worse than the other, and 2 pieces of code to maintain.
Better just keep it unsealed. No harm in it being unsealed.
Frankly, I think that classes not being sealed by default in C# is kind of weird and out of place with how the rest of the defaults work in the language.
By default, classes are internal.
By default fields are private.
By default members are private.
There seems to be a trend that points to the least possible access by default. It would stand to reason that an unsealed keyword should exist in C# instead of a sealed one.
Personally I'd rather classes were sealed by default. On most occasions when someone writes a class, he is not designing it with subclassing in mind and all the complexities that come along with it. Designing for future subclassing should be a conscious act, and therefore I'd rather you explicitly had to state it.
"...consider[ing] how their classes should be sub classed..." shouldn't matter.
At least a half dozen times over the past few years I've found myself cursing some open source team or another for a sloppy mix of protected and private, making it impossible to simply extend a class without copying the source of the entire parent class. (In most cases, overriding a particular method required access to private members.)
One example was a JSTL tag that almost did what I wanted. I need to override one small thing. Nope, sorry, I had to completely copy the source of the parent.
I only seal classes if I am working on a reusable component that I intend to distribute, and I don't want the end user to inherit from it, or as a system architect if I know I don't want another developer on the team to inherit from it. However there is usually some reason for it.
Just because a class isn't being inherited from, I don't think it should automatically be marked sealed. Also, it annoys me to no end when I want to do something tricky in .NET, but then realize MS marks tons of their classes sealed.
This is a very opinionated question that's likely to garner some very opinionated answers ;-)
That said, in my opinion, I strongly prefer NOT making my classes sealed/final, particularly at the beginning. Doing this makes it very difficult to infer the intended extensibility points, and it's nearly impossible to get them right at the beginning. IMHO, overuse of encapsulation is worse than overuse of polymorphism.
Your house, your rule.
You can also have the complementary rule instead: a class that can be subclassed must be annotated; nobody should subclass a class that's not annotated so. This rule is not harder to follow than your rule.
The main purpose of a sealed class is to take away inheritance from the user, so they cannot derive from it. Are you sure you want to do that? Or do you want to start with all classes sealed, and then unseal each one when you need to make it inheritable? That might be OK when everything is in-house and in one team, but if other teams use your DLLs in the future, it won't be possible to recompile the whole source code every time a class needs to be unsealed...
I won't recommend this, but that's just my opinion.
I don't like that way of thinking. Java and C# are made to be OOP languages. These languages are designed in a way where a class can have a parent or a child. That's it.
Some people say that we should always start from the most restrictive modifier (private, protected...) and make a member public only when it's used externally. These people are, to me, lazy and don't want to think about a good design at the beginning of the project.
My answer is: design your apps in a good way now. Make your class sealed when it needs to be sealed and private when it needs to be private. Don't make them sealed by default.
I find that sealed / final classes are actually pretty rare, in my experience; I would certainly not recommend suggesting all classes be sealed / final by default. That specification makes a certain statement about the state of the code (i.e., that it's complete) that is not necessarily always true during development time.
I'll also add that leaving a class unsealed requires more design / test effort to make sure that the exposed behaviours are well-defined and tested; heavy unit testing is critical, IMO, to achieve a level of confidence in the behaviour of the class that appears to be desired by the proponents of "sealed". But IMO, that increased level of effort translates directly to a high level of confidence and to higher quality code.

Are SOLID principles really solid?

The principle the first letter in this acronym stands for is the Single Responsibility Principle. Here is a quote:
the single responsibility principle states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class.
That's simple and clear until we start to code. Suppose we have a class with a well-defined single responsibility. To serialize its instances we need to add special attributes to the class. So now the class has another responsibility. Doesn't that violate the SRP?
Let's see another example - an interface implementation. When we implement an interface we simply add other responsibilities, say disposing of its resources or comparing its instances or whatever.
So my question. Is it possible to strictly keep to SRP? How can it be done?
As you will one day discover, none of the most known principles in software development can be 100% followed.
Programming is often about making compromises - abstract pureness vs. code size vs. speed vs. efficiency.
You just need to learn to find the right balance: don't let your application fall into an abyss of chaos, but don't tie your own hands with a multitude of abstraction layers.
I don't think that being serializable or disposable amounts to multiple responsibilities.
Well I suppose the first thing to note is that these are just good Software Engineering principles - you have to apply judgment also. So in that sense - no they are not solid (!)
I think the question you asked raises the key point - how do you define the single responsibility that the class should have?
It is important not to get too bogged down in details when defining a responsibility - just because a class does many things in code doesn't mean that it has many responsibilities.
Please do stick with it, though. Although it is probably impossible to apply in all cases, it is still better than having a single "God Object" (anti-pattern) in your code.
If you are having problems with these I would recommend reading the following:
Refactoring - Martin Fowler: Although it is obviously about refactoring, this book is also very helpful in showing how to decompose problems into their logical parts or responsibilities - which is key to SRP. It also touches on the other principles, but in a much less academic way than you may have seen before.
Clean Code - Robert Martin: Who better to read than the greatest exponent of the SOLID principles? Seriously, I found this to be a really helpful book in all areas of software craftsmanship - not just the SOLID principles. Like Fowler's book, it is pitched at all levels of experience, so I would recommend it to anyone.
To better understand the SOLID principles you have to understand the problem that they solve:
Object-oriented programming grew out of structured/procedural programming - it added a new organizational system (classes, et al.) as well as behaviors (polymorphism, inheritance, composition). This meant that OO was not separate from structured/procedural programming, but was a progression, and that developers could write very procedural OO if they wanted.
So... SOLID came around as something of a litmus test to answer the question "Am I really doing OO, or am I just using procedural objects?" The five principles, if followed, mean that you are quite far toward the OO side of the spectrum. Failing to meet these rules doesn't mean you're not doing OO, but it means it's much more structural/procedural OO.
There's a legitimate concern here, as these cross-cutting concerns (serialization, logging, data binding notification, etc.) end up adding implementation to multiple classes that is only there to support some other subsystem. This implementation has to be tested, so the class has definitely been burdened with additional responsibilities.
Aspect-Oriented Programming is one approach that attempts to resolve this issue. A good example in C# is serialization, for which there is a wide range of different attributes for different types of serialization. The idea here is that the class shouldn't implement code that performs serialization, but rather declare how it is to be serialized. Metadata is a very natural place to include details that are important for other subsystems, but not relevant to the testable implementation of a class.
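For instance, with .NET's XML serialization the class only declares, via attributes, how it should be serialized; the behaviour lives entirely in XmlSerializer. A minimal sketch, with an invented Customer type:

using System.Xml.Serialization;

public class Customer
{
    [XmlElement("customer-name")]   // declares the element name, nothing more
    public string Name { get; set; }

    [XmlIgnore]                     // not part of the serialized form
    public string CachedDisplayName { get; set; }
}

// Usage: the serializer owns the behaviour; Customer only carries metadata.
// new XmlSerializer(typeof(Customer)).Serialize(writer, customer);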
The thing to remember about design principles is that there are always exceptions, and you won't always find that your scenario/implementation matches a given principle 100%.
That being said, adding attributes to the properties isn't really adding any functionality or behavior to the class, assuming your serialization/deserialization code is in some other class. You're just adding information about the structure of your class, so it doesn't seem like a violation of the principle.
I think there are many minor, common tasks a class can do without them obscuring the primary responsibility of a class: Serialisation, Logging, Exception Handling, Transaction Handling etc.
It's down to your judgement as to what tasks in your class constitute an actual responsibility in terms of your business/application logic, and what is just plumbing code.
So, if instead of designing one "Dog" class with "Bark", "Sleep" and "Eat" methods, I must design "AnimalWhoBarks", "AnimalWhoSleeps", "AnimalWhoEats" classes, etc.? Why? How does that make my code any better? How am I supposed to simply express the fact that my dog will not go to sleep and will bark all night if he hasn't eaten?
"Split big classes in smaller ones", is a fine practical advice, but "every object should have a single responsibility" is an absolute OOP dogmatic nonsense.
Imagine the .NET framework was written with SRP in mind. Instead of 40000 classes, you would have millions.
Generalized SRP ("Generalized" is the important word here) is just a crime IMHO. You can't reduce software development to 5 bookish principles.
By changing your definition of "single responsibility" - SOLID principles are quite liquid and (like other catchy acronyms of acronyms) don't mean what they seem to mean.
They can be used as a checklist or cheat sheet, but not as complete guidelines and certainly not learning material.
What is a responsibility? In the words of Robert C. Martin (aka. Uncle Bob) himself:
SRP = A class should have one, and only one, reason to change.
The "single responsibility" and "one reason to change", in SRP, has been a source of much confusion (e.g. Dan North, Boris). Likely since responsibilities of pieces of code can, of course, be arbitrarily conceived and designated at will by the programmer. Plus, reasons/motivations for changing a piece of code may be as multi-faceted as humans are. But what Uncle Bob meant was "business reason".
The SRP was conceived in an enterprise business context where multiple teams have to coordinate work, and it's important that they can work independently most of the time, within their various business subunits (teams/departments etc.), so they don't step on each other's toes too much (which also explains SOLID's infatuation with interfaces...), and so that changing code to meet a business request (a business use case, or a typical request from a business department) can be localised and cohesive, making a targeted and isolated change to the code without touching common/global code.
In summary: SRP means that code should have "one business reason" to change, and have "a single business responsibility".
This is evident from the original source, where Robert C. Martin aka "Uncle Bob" writes (where brackets and emphasis are mine):
Gather together those things that change for the same reason, and separate those things that change for different reasons.
This principle is often known as the single responsibility principle, or SRP. In short, it says that a subsystem, module, class, or even a function, should not have more than one reason to change.
The classic example is a class that has methods that deal with business rules, reports, and databases:
public class Employee {
    public Money calculatePay() ...
    public String reportHours() ...
    public void save() ...
}
Some programmers might think that putting these three functions together in the same class is perfectly appropriate. After all, classes are supposed to be collections of functions that operate on common variables. However, the problem is that the three functions change for entirely different reasons. The calculatePay function will change whenever the business rules for calculating pay do. The reportHours function will change whenever someone wants a different format for the report. The save function will change whenever the DBAs [Database Administrators] change the database schema. These three reasons to change combine to make Employee very volatile. It will change for any of those reasons.
Source: Chapter 76. The Single Responsibility Principle (from the book: "97 Things Every Programmer Should Know")
Why was it important to separate these two responsibilities into separate classes? Because each responsibility is an axis of change. When the requirements change, that change will be manifest through a change in responsibility amongst the classes. If a class assumes more than one responsibility, then there will be more than one reason for it to change. If a class has more than one responsibility, then the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class’ ability to meet the others. This kind of coupling leads to fragile designs that break in unexpected ways when changed.
Source: Chapter 9 - Single Responsibility Principle
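A hedged sketch of how that Employee might be pulled apart along the three axes of change (the class names below are invented, not Uncle Bob's):

// Employee keeps only the data and invariants common to all three concerns.
public class Employee
{
    public string Id { get; set; }
    public decimal HourlyRate { get; set; }
}

// Changes when the business rules for pay change.
public class PayCalculator
{
    public decimal CalculatePay(Employee e)
    {
        return e.HourlyRate * 40m; // placeholder pay policy
    }
}

// Changes when someone wants a different report format.
public class HoursReporter
{
    public string ReportHours(Employee e)
    {
        return "Hours report for " + e.Id; // placeholder formatting
    }
}

// Changes when the DBAs change the database schema.
public class EmployeeRepository
{
    public void Save(Employee e)
    {
        // persistence details go here
    }
}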
Much misunderstanding would have been averted if Uncle Bob instead of writing "reasons" had written "business reasons", which was only made clearer when one reads and interprets the fine print later on. Or if people went to the original source of SOLID and read it thoroughly, instead of going by some hearsay version of it online.
SOLID criticisms:
Why I don't teach SOLID (Brian Geihsler, 2014)
Why Every Element of SOLID is Wrong (Dan North, 2017)
The StackOverflow Podcast transcript (Joel Spolsky and Jeff Atwood, 2018)
SOLID defense:
In Defense of the SOLID Principles (Carlos Schults, 2018),
Related to the Single-Responsibility Principle (SRP):
connascence.io and Connascence at wikipedia: "In software engineering, two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system." and "Components that are "born" together will often need to change together over time. Excessive connascence in our software means that the system is hard to change and hard to maintain." from Jim Weirich's youtube talk
PS:
SRP can be compared with its closely related sibling, BoundedContext, which has perhaps best been defined as "A specific responsibility enforced by explicit boundaries" by Sam Newman, at 12:38. A lot of these principles are just derivations/restatements of the over-arching important software principle Prefer Cohesion over Coupling. Uncle Bob even introduces SRP by saying: "This principle was described in the work of Tom DeMarco and Meilir Page-Jones. They called it cohesion. As we’ll see in Chapter 21, we have a more specific definition of cohesion at the package level. However, at the class level the definition is similar."
SRP is a bit of a vague term, in my view. No one can clearly define what a responsibility is supposed to be.
The way I implement it is that I strictly keep my method size below 40 lines and target below 15.
Try to follow common sense and don't be over-obsessed with it. Try to keep classes below 500 lines and methods below 30 lines at most. That will allow each to fit on a single page.
Once this becomes your habit you will notice how easy it is to scale your codebase.
Reference: Uncle Bob Martin in Clean Code
S.O.L.I.D stands for:
Single Responsibility Principle
Open/Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle
These are the standards we refer to, when we talk about OOP. However, none of those principles can be fulfilled perfectly in software development.
You can view a presentation that explains this topic well here: http://www.slideshare.net/jonkruger/advanced-objectorientedsolid-principles

Interface design? Can I do it iteratively? How should I handle changes to the interface?

What is the best approach for defining interfaces in either C# or Java? Do we need to make them generic up front, or add methods as and when the real need arises?
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
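For example (a hedged sketch; IShape is an invented interface, not from the answer):

public interface IShape
{
    double Area();
}

// IShape has shipped, so it must not change. New capability goes in a V2
// interface that extends it; old implementations keep compiling.
public interface IShapeV2 : IShape
{
    double Perimeter();
}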
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.

DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List<T>.Sort consumes the IComparer<T> interface.

DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface it is very difficult to change or remove it. Be very conservative in creating your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points then you probably didn't need interfaces to begin with but go ahead and change them.
If the interface is going to be public, a good deal of care needs to be put into the design, because changing the interface later will be difficult if a lot of code suddenly breaks in the next iteration.
Changes to the interface need to be made with care, so ideally no changes would have to be made after the initial release. This means that the first iteration is very important in terms of design.
However, if changes are required, one way to implement them is to deprecate the old methods and provide a transition path for old code to use the newly-designed features. This does mean that the deprecated methods will stick around to prevent code using the old methods from breaking - not ideal, but it is the price to pay for not getting it right the first time around.
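In C#, that transition path often looks something like the following hedged sketch (ReportService and its members are invented), where the deprecated method forwards to the newly-designed one:

using System;

public class ReportOptions
{
    public string Format { get; set; }
}

public class ReportService
{
    [Obsolete("Use Render(ReportOptions) instead; this overload will be removed.")]
    public string Render(string format)
    {
        // transition path: old calls are funneled into the new API
        return Render(new ReportOptions { Format = format });
    }

    public string Render(ReportOptions options)
    {
        // the newly-designed implementation
        return "report in " + options.Format;
    }
}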
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface: the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at which additional methods are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that other teams or customers will work with (think plugins, extension points or things like that), then you have to be conservative about what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff - though many of them still apply. Know what applies or not to your situation.
No rule will make up for lack of common sense.

How to implement SOLID principles into an existing project

I apologize for the subjectiveness of this question, but I am a little stuck and I would appreciate some guidance and advice from anyone who's had to deal with this issue before:
I have (what's become) a very large RESTful API project written in C# 2.0, and some of my classes have become monstrous. My main API class is an example of this - with several dozen members and methods (probably approaching hundreds). As you can imagine, it's becoming a small nightmare - not only maintaining this code, but even just navigating it has become a chore.
I am reasonably new to the SOLID principles, and I am a massive fan of design patterns (but I am still at the stage where I can implement them, yet don't quite know when to use them in situations where it's not so obvious).
I need to break my classes down in size, but I am at a loss of how best to go about doing it. Can my fellow StackOverflow'ers please suggest ways that they have taken existing code monoliths and cut them down to size?
Single Responsibility Principle - A class should have only one reason to change. If you have a monolithic class, then it probably has more than one reason to change. Simply define your one reason to change, and be as granular as reasonable. I would suggest to start "large". Refactor one third of the code out into another class. Once you have that, then start over with your new class. Going straight from one class to 20 is too daunting.
Open/Closed Principle - A class should be open for extension, but closed for change. Where reasonable, mark your members and methods as virtual or abstract. Each item should be relatively small in nature, and give you some base functionality or definition of behavior. However, if you need to change the functionality later, you will be able to add code, rather than change code to introduce new/different functionality.
Liskov Substitution Principle - A class should be substitutable for its base class. The key here, in my opinion, is to do inheritance correctly. If you have a huge case statement, or two pages of if statements that check the derived type of the object, then you're violating this principle and need to rethink your approach.
Interface Segregation Principle - In my mind, this principle closely resembles the Single Responsibility principle. It just applies specifically to a high level (or mature) class/interface. One way to use this principle in a large class is to make your class implement an empty interface. Next, change all of the types that use your class to be the type of the interface. This will break your code. However, it will point out exactly how you are consuming your class. If you have three instances that each use their own subset of methods and properties, then you now know that you need three different interfaces. Each interface represents a collective set of functionality, and one reason to change.
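The end state of that exercise might look like the following hedged sketch (invented names); the big class stays in one piece, but each consumer now depends only on the slice it actually uses:

public class Order { /* ... */ }

public interface IOrderReader
{
    Order GetOrder(int id);
}

public interface IOrderWriter
{
    void SaveOrder(Order order);
}

public interface IOrderReporting
{
    string RenderOrderReport(int id);
}

// The monolithic class implements all three; callers are typed
// against whichever interface matches their use.
public class OrderApi : IOrderReader, IOrderWriter, IOrderReporting
{
    public Order GetOrder(int id) { return new Order(); }
    public void SaveOrder(Order order) { /* ... */ }
    public string RenderOrderReport(int id) { return string.Empty; }
}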
Dependency Inversion Principle - The parent/child analogy made me understand this. Think of a parent class. It defines behavior, but isn't concerned with the dirty details. It's dependable. A child class, however, is all about the details, and can't be depended upon because it changes often. You always want to depend upon the parent, responsible classes, and never the other way around. If you have a parent class depending upon a child class, you'll get unexpected behavior when you change something. In my mind, this is the same mindset as SOA. A service contract defines inputs, outputs, and behavior, with no details.
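A small sketch of that direction of dependency (invented names): the high-level policy owns the abstraction, and the detail depends on it rather than the other way around.

// Abstraction defined from the point of view of the high-level policy.
public interface INotificationSender
{
    void Send(string recipient, string message);
}

public class OrderService
{
    private readonly INotificationSender _sender;

    public OrderService(INotificationSender sender)
    {
        _sender = sender;
    }

    public void PlaceOrder(string customer)
    {
        // ...business logic...
        _sender.Send(customer, "Order received");
    }
}

// The volatile detail plugs in underneath the abstraction.
public class SmtpNotificationSender : INotificationSender
{
    public void Send(string recipient, string message)
    {
        // SMTP specifics go here
    }
}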
Of course, my opinions and understandings may be incomplete or wrong. I would suggest learning from people who have mastered these principles, like Uncle Bob. A good starting point for me was his book, Agile Principles, Patterns, and Practices in C#. Another good resource was Uncle Bob on Hanselminutes.
Of course, as Joel and Jeff pointed out, these are principles, not rules. They are to be tools to help guide you, not the law of the land.
EDIT:
I just found these SOLID screencasts which look really interesting. Each one is approximately 10-15 minutes long.
There's a classic book by Martin Fowler - Refactoring: Improving the Design of Existing Code.
There he provides a set of design techniques and examples of decisions to make your existing codebase more manageable and maintainable (which is what the SOLID principles are all about). Even though there are some standard routines in refactoring, it is a very custom process and no single solution can be applied to all projects.
Unit testing is one of the cornerstones for this process to succeed. You need to cover your existing codebase with enough test coverage to be sure you don't break stuff while changing it. Actually, using a modern unit testing framework with mocking support will encourage you towards better design.
There are tools like ReSharper (my favorite) and CodeRush to assist with tedious code changes. But those usually handle the trivial mechanical stuff; making design decisions is a much more complex process with far less tool support. Using class diagrams and UML helps; that's where I would start, actually. Try to make sense of what is already there and bring some structure to it. From there you can make decisions about decomposition and relations between different components and change your code accordingly.
Hope this helps and happy refactoring!
It will be a time-consuming process. You need to read the code, identify parts that do not meet the SOLID principles, and refactor them into new classes. Using a VS add-in like ReSharper (http://www.jetbrains.com) will assist with the refactoring process.
Ideally you will have good coverage of automated unit tests so that you can ensure your changes do not introduce problems with the code.
More Information
In the main API class, you need to identify methods that relate to each other and create a class that more specifically represents the actions those methods perform.
e.g.
Let's say I had an Address class with separate variables containing street number, name, etc. This class is responsible for inserting, updating, deleting, etc. If I also needed to format an address a specific way for a postal address, I could have a method called GetFormattedPostalAddress() that returned the formatted address.
Alternatively, I could refactor this method into a class called AddressFormatter that takes an Address in its constructor and has a get property called PostalAddress that returns the formatted address.
The idea is to separate different responsibilities into separate classes.
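A hedged sketch of that refactoring (the field names are invented):

public class Address
{
    public string StreetNumber { get; set; }
    public string StreetName { get; set; }
    // insert/update/delete stay here for now; they could move to a
    // repository class in a further step
}

public class AddressFormatter
{
    private readonly Address _address;

    public AddressFormatter(Address address)
    {
        _address = address;
    }

    public string PostalAddress
    {
        get { return _address.StreetNumber + " " + _address.StreetName; }
    }
}

Formatting can now change without touching the Address data or its persistence.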
What I've done when presented with this type of thing (and I'll readily admit that I haven't used SOLID principles before, but from what little I know of them, they sound good) is to look at the existing codebase from a connectivity point of view. Essentially, by looking at the system, you should be able to find some subset of functionality that is internally highly coupled (many frequent interactions) but externally loosely coupled (few infrequent interactions). Usually, there are a few of these pieces in any large codebase; they are candidates for excision.

Once you've identified your candidates, you have to enumerate the points at which they are externally coupled to the system as a whole. This should give you a good idea of the level of interdependency involved (there usually is a fair bit). Evaluate the subsets and their connection points for refactoring; frequently (but not always) there end up being a couple of clear structural refactorings that can increase the decoupling.

With an eye on those refactorings, use the existing couplings to define the minimal interface required to allow the subsystem to work with the rest of the system. Look for commonalities in those interfaces (frequently, you find more than you'd expect!). And finally, implement the changes that you've identified.
The process sounds terrible, but in practice, it's actually pretty straightforward. Mind you, this is not a roadmap towards getting to a completely perfectly designed system (for that, you'd need to start from scratch), but it very certainly will decrease the complexity of the system as a whole and increase the code comprehensibility.
OOD - Object Oriented Design
SOLID - class design
Single Responsibility Principle - SRP - introduced by Uncle Bob. A method, class, or module is responsible for doing only a single thing (one single task).
Open/Closed Principle - OCP - introduced by Bertrand Meyer. A method, class, or module is open for extension and closed for modification. Use the power of inheritance, abstraction, polymorphism, extension, and wrappers.
Liskov Substitution Principle - LSP - introduced by Barbara Liskov and Jeannette Wing. A subtype can replace its supertype without side effects.
Interface Segregation Principle - ISP - introduced by Uncle Bob. Your interfaces should be as small as possible.
Dependency Inversion Principle - DIP - introduced by Uncle Bob. An inner class or layer should not depend directly on an external class or layer; when you have an aggregation dependency, you should rather depend on an abstraction/interface.
Six principles about packages/modules (.jar, .aar, .framework):
What to put inside a package:
The Release-Reuse Equivalency Principle
The Common Closure Principle
The Common Reuse Principle
Couplings between packages:
The Acyclic Dependencies Principle
The Stable Dependencies Principle
The Stable Abstractions Principle
