The principle the first letter in this acronym stands for is the Single Responsibility Principle. Here is a quote:
the single responsibility principle states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class.
That's simple and clear until we start to code. Suppose we have a class with a well-defined single responsibility. To serialize its instances, we need to add special attributes to that class. So now the class has another responsibility. Doesn't that violate the SRP?
Let's look at another example: implementing an interface. When we implement an interface, we simply add more responsibilities, say disposing of its resources, or comparing its instances, or whatever.
So my question is: is it possible to strictly adhere to the SRP? How can it be done?
As you will one day discover, none of the best-known principles in software development can be followed 100% of the time.
Programming is often about making compromises: abstract purity vs. code size vs. speed vs. efficiency.
You just need to learn to find the right balance: don't let your application fall into an abyss of chaos, but don't tie your own hands with a multitude of abstraction layers.
I don't think that being serializable or disposable amounts to multiple responsibilities.
Well, I suppose the first thing to note is that these are just good software engineering principles - you have to apply judgment as well. So in that sense - no, they are not solid (!)
I think the question you asked raises the key point - how do you define the single responsibility that the class should have?
It is important not to get too bogged down in details when defining a responsibility - just because a class does many things in code doesn't mean that it has many responsibilities.
However, please do stick with it. Although it is probably impossible to apply in all cases, it is still better than having a single "God Object" (an anti-pattern) in your code.
If you are having problems with these I would recommend reading the following:
Refactoring - Martin Fowler: Although it is obviously about refactoring, this book is also very helpful in showing how to decompose problems into their logical parts or responsibilities - which is key to SRP. This book also touches on the other principles - however, it does so in a much less academic way than you may have seen before.
Clean Code - Robert Martin: Who better to read than the greatest exponent of the SOLID principles? Seriously, I found this to be a really helpful book in all areas of software craftsmanship - not just the SOLID principles. Like Fowler's book, it is pitched at all levels of experience, so I would recommend it to anyone.
To better understand the SOLID principles you have to understand the problem that they solve:
Object-oriented programming grew out of structured/procedural programming - it added a new organizational system (classes, et al.) as well as behaviors (polymorphism, inheritance, composition). This meant that OO was not separate from structured/procedural programming, but a progression from it, and that developers could write very procedural OO if they wanted.
So... SOLID came around as something of a litmus test to answer the question "Am I really doing OO, or am I just using procedural objects?" The five principles, if followed, mean that you are quite far toward the OO side of the spectrum. Failing to meet these rules doesn't mean you're not doing OO, but it does mean your OO is much more structured/procedural.
There's a legitimate concern here, as these cross-cutting concerns (serialization, logging, data binding notification, etc.) end up adding implementation to multiple classes that is only there to support some other subsystem. This implementation has to be tested, so the class has definitely been burdened with additional responsibilities.
Aspect-Oriented Programming is one approach that attempts to resolve this issue. A good example in C# is serialization, for which there is a wide range of different attributes for different types of serialization. The idea here is that the class shouldn't implement code that performs serialization, but rather declare how it is to be serialized. Metadata is a very natural place to include details that are important for other subsystems, but not relevant to the testable implementation of a class.
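For example, here is a minimal sketch of what that looks like in C# with XML serialization attributes (the Invoice class is invented for illustration; other serializers use the same declarative style):

    using System.Xml.Serialization;

    // The class declares *how* it is to be serialized via metadata,
    // but contains no serialization logic of its own.
    [XmlRoot("invoice")]
    public class Invoice
    {
        [XmlAttribute("id")]
        public int Id { get; set; }

        [XmlElement("total")]
        public decimal Total { get; set; }

        [XmlIgnore] // runtime-only state, excluded from serialization
        public bool IsDirty { get; set; }
    }

    // The actual serialization work lives in another subsystem:
    //   new XmlSerializer(typeof(Invoice)).Serialize(stream, invoice);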
The thing to remember about design principles is that there are always exceptions, and you won't always find that your scenario/implementation matches a given principle 100%.
That being said, adding attributes to the properties isn't really adding any functionality or behavior to the class, assuming your serialization/deserialization code is in some other class. You're just adding information about the structure of your class, so it doesn't seem like a violation of the principle.
I think there are many minor, common tasks a class can do without them obscuring the primary responsibility of the class: serialisation, logging, exception handling, transaction handling, etc.
It's down to your judgement as to what tasks in your class constitute an actual responsibility in terms of your business/application logic, and what is just plumbing code.
So, if instead of designing one "Dog" class with "Bark", "Sleep" and "Eat" methods, I must design "AnimalWhoBarks", "AnimalWhoSleeps", "AnimalWhoEats" classes, and so on? Why? How does that make my code any better? And how am I supposed to express the simple fact that my dog will not go to sleep, and will bark all night, if he hasn't eaten?
"Split big classes in smaller ones", is a fine practical advice, but "every object should have a single responsibility" is an absolute OOP dogmatic nonsense.
Imagine the .NET framework was written with SRP in mind. Instead of 40000 classes, you would have millions.
Generalized SRP ("Generalized" is the important word here) is just a crime IMHO. You can't reduce software development to 5 bookish principles.
By changing your definition of "single responsibility", SOLID principles become quite liquid, and (like other catchy acronyms of acronyms) they don't mean what they seem to mean.
They can be used as a checklist or cheat sheet, but not as complete guidelines and certainly not learning material.
What is a responsibility? In the words of Robert C. Martin (aka. Uncle Bob) himself:
SRP = A class should have one, and only one, reason to change.
The "single responsibility" and "one reason to change", in SRP, has been a source of much confusion (e.g. Dan North, Boris). Likely since responsibilities of pieces of code can, of course, be arbitrarily conceived and designated at will by the programmer. Plus, reasons/motivations for changing a piece of code may be as multi-faceted as humans are. But what Uncle Bob meant was "business reason".
The SRP was conceived in an enterprise business context where multiple teams have to coordinate work, and it's important that they can work independently most of the time, within their various business subunits (teams/departments, etc.), so they don't step on each other's toes too much (which also explains SOLID's infatuation with interfaces..), and so that changing code to meet a business request (a business use case, or a typical request from a business department) can be localised and cohesive - a targeted and isolated change to the code that doesn't touch common/global code.
In summary: SRP means that code should have "one business reason" to change, and have "a single business responsibility".
This is evident from the original source, where Robert C. Martin aka "Uncle Bob" writes (where brackets and emphasis are mine):
Gather together those things that change for the same reason, and separate those things that change for different reasons.
This principle is often known as the single responsibility principle, or SRP. In short, it says that a subsystem, module, class, or even a function, should not have more than one reason to change. The classic example is a class that has methods that deal with business rules, reports, and databases:
    public class Employee {
        public Money calculatePay() ...
        public String reportHours() ...
        public void save() ...
    }
Some programmers might think that putting these three functions together in the same class is perfectly appropriate. After all, classes are supposed to be collections of functions that operate on common variables. However, the problem is that the three functions change for entirely different reasons. The calculatePay function will change whenever the business rules for calculating pay do. The reportHours function will change whenever someone wants a different format for the report. The save function will change whenever the DBAs [Database Administrators] change the database schema. These three reasons to change combine to make Employee very volatile. It will change for any of those reasons.
Source: Chapter 76. The Single Responsibility Principle (from the book: "97 Things Every Programmer Should Know")
Why was it important to separate these two responsibilities into separate classes? Because each responsibility is an axis of change. When the requirements change, that change will be manifest through a change in responsibility amongst the classes. If a class assumes more than one responsibility, then there will be more than one reason for it to change. If a class has more than one responsibility, then the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class’ ability to meet the others. This kind of coupling leads to fragile designs that break in unexpected ways when changed.
Source: Chapter 9 - Single Responsibility Principle
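To make the "axes of change" concrete, here is one possible refactoring of that Employee example, sketched in C# (the class names are invented for illustration, not taken from the book):

    // Each class now has exactly one business reason to change.
    public class Employee               // pure data; changes with the domain model
    {
        public string Name { get; set; }
        public decimal HourlyRate { get; set; }
        public int HoursWorked { get; set; }
    }

    public class PayCalculator          // changes when payroll business rules change
    {
        public decimal CalculatePay(Employee e) => e.HourlyRate * e.HoursWorked;
    }

    public class HoursReporter          // changes when the report format changes
    {
        public string ReportHours(Employee e) => $"{e.Name}: {e.HoursWorked}h";
    }

    public class EmployeeRepository     // changes when the database schema changes
    {
        public void Save(Employee e) { /* persistence code elided */ }
    }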
Much misunderstanding would have been averted if Uncle Bob, instead of writing "reasons", had written "business reasons" - something only made clear once one reads and interprets the fine print later on. Or if people went to the original source of SOLID and read it thoroughly, instead of going by some hearsay version of it online.
SOLID criticisms:
Why I don't teach SOLID (Brian Geihsler, 2014)
Why Every Element of SOLID is Wrong (Dan North, 2017)
The StackOverflow Podcast transcript (Joel Spolsky and Jeff Atwood, 2018)
SOLID defense:
In Defense of the SOLID Principles (Carlos Schults, 2018),
Related to the Single-Responsibility Principle (SRP):
connascence.io and Connascence at Wikipedia: "In software engineering, two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system." and "Components that are "born" together will often need to change together over time. Excessive connascence in our software means that the system is hard to change and hard to maintain." - from Jim Weirich's YouTube talk
PS:
SRP can be compared with its closely related sibling, BoundedContext, which has perhaps best been defined as "A specific responsibility enforced by explicit boundaries" by Sam Newman, at 12:38. A lot of these principles are just derivations/restatements of the over-arching important software principle Prefer Cohesion over Coupling. Uncle Bob even introduces SRP by saying: "This principle was described in the work of Tom DeMarco and Meilir Page-Jones. They called it cohesion. As we’ll see in Chapter 21, we have a more specific definition of cohesion at the package level. However, at the class level the definition is similar."
SRP is a bit of a vague term, in my view. No one can clearly define what a responsibility is supposed to be.
The way I implement it is that I strictly keep my method size below 40 lines, and target below 15.
Try to follow common sense and don't be overly obsessed with it. Try to keep classes below 500 lines and methods below 30 lines at most. That will allow a method to fit on a single page.
Once this becomes a habit, you will notice how easy it is to scale your codebase.
Reference: Uncle Bob Martin in Clean Code
S.O.L.I.D. stands for:
Single Responsibility Principle
Open/Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle
These are the standards we refer to when we talk about OOP. However, none of these principles can be followed perfectly in software development.
You can view a very good presentation explaining this topic here: http://www.slideshare.net/jonkruger/advanced-objectorientedsolid-principles
Related
OK, is implementing IComparable and other interfaces (like IDisposable) on a single class a violation of the SRP?
SRP states that every class should implement a single responsibility, and that methods should be interconnected to a high degree to achieve cohesive classes.
Isn't comparison another responsibility?
Some clarification would be appreciated.
If I were you, I would try to adhere to SRP, but not so strictly that the effort becomes counter-productive. So, with that said, what should you do? Either you implement IComparable and have comparison fully encapsulated in the object, or you have a separate comparator and implement the comparison logic in it.

As far as SRP is concerned: if the comparison is fairly basic and should not be subject to change, I would implement IComparable and be done with it. If you can reasonably foresee some changes in the future, or if the comparison is use case-dependent, then I would go the comparator route.

The ultimate goal is to develop closed components and make them cooperate by composing them, so if comparison has little chance to change, the component can be closed and you won't hear about it again. You could also comment the use of IComparable in your code and, if some change does happen in the future, switch to composing with a comparator once the change that was said never to happen did indeed happen.
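To illustrate both routes, here is a minimal C# sketch (the Order class and its two orderings are invented for the example):

    using System;
    using System.Collections.Generic;

    // Route 1: a stable, intrinsic ordering encapsulated in the class.
    public class Order : IComparable<Order>
    {
        public DateTime PlacedOn { get; set; }
        public decimal Total { get; set; }

        public int CompareTo(Order other) =>
            PlacedOn.CompareTo(other.PlacedOn);   // natural order: by date
    }

    // Route 2: a use-case-dependent ordering composed in from outside.
    public class OrderByTotalComparer : IComparer<Order>
    {
        public int Compare(Order x, Order y) => x.Total.CompareTo(y.Total);
    }

    // Usage:
    //   orders.Sort();                            // intrinsic order
    //   orders.Sort(new OrderByTotalComparer());  // composed order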
I would argue that implementations of IComparable and IDisposable are not responsibilities at all, and therefore do not violate SRP.
In the context of SRP, a responsibility is to an interactor of your system (i.e. a user, role or external system). If your system has a Business Requirements Document, all responsibilities should be at least inferred within the functional or non-functional requirements. If not, ask yourself which business owner is going to ask you to change how an object disposes itself.
On the first project I worked on after learning about SRP, we interpreted it as "one public method per class" and applied it as a hard rule. While that made it easy to stay in "compliance", we ended up with code that was far more complicated than it needed to be.
If your IComparable/IDisposable implementations need to change, that change will be driven by the functional (business) part of your class also requiring change (at the same time and for the same reason).
I am trying to learn unit testing, but a design issue arises from it. Consider a class A that has a dependency on class B. If you want to create a stub for B in order to unit test A, most isolation frameworks require that B be an interface, or that all the methods used by A be virtual. In essence, B can't be a concrete class with non-virtual methods if you want to unit test A.
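To make the restriction concrete, here is a minimal sketch (all names hypothetical) of the interface extraction such frameworks push you toward:

    // "B" is hidden behind an interface solely so a test can fake it.
    public interface IPriceService
    {
        decimal GetPrice(string productId);
    }

    public class OrderCalculator                   // class "A" under test
    {
        private readonly IPriceService _prices;    // dependency "B", abstracted
        public OrderCalculator(IPriceService prices) => _prices = prices;

        public decimal Total(string productId, int quantity) =>
            _prices.GetPrice(productId) * quantity;
    }

    // A hand-rolled stub; an isolation framework would generate this for you.
    public class StubPriceService : IPriceService
    {
        public decimal GetPrice(string productId) => 10m;
    }

    // In a test: new OrderCalculator(new StubPriceService()).Total("x", 3) == 30m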
This imposes major restrictions on the design of production code. If I have to create an interface for every dependency, the number of classes will double. Following the Single Responsibility Principle leads to small classes that depend on each other, so this will blow up the number of interfaces. Normally I create interfaces only for volatile dependencies (likely to change in the future), or where the design requires them for extensibility. Polluting the production code with interfaces just for testing will increase its complexity significantly.
Making all methods virtual doesn't seem to be a good solution either. It gives inheritors the impression that these methods are fine to override even when they aren't, when in reality this is just a side effect of unit testing.
Does this mean that testable object-oriented design doesn't allow concrete dependencies, or does it mean that concrete dependencies shouldn't be faked? "Every dependency must be faked (stub or mock) to unit test correctly" is what I have learned so far, so I don't think the latter is the case. Isolation frameworks other than JustMock and Isolator don't allow faking concrete dependencies without virtual methods, and some argue that the power of JustMock and Isolator leads to bad design. I think the ability to mock any class is very powerful, and it will keep the design of production code clean if you know what you are doing.
I realized later that this question asks the same thing, and there seems to be no solution. Choosing between creating an interface and making all methods virtual is a restriction of C#, a statically typed language. A duck-typed language such as Ruby doesn't impose this, and a fake object can easily be created without changing the original class. In Ruby, the fake object just needs to define the appropriate methods, and it can then be used in place of the original dependency.
Edit:
I finished reading the book The Art of Unit Testing by Roy Osherove and found that the following paragraphs are related:
Testable designs usually only matter in static languages, such as C# or VB.NET, where testability depends on proactive design choices that allow things to be replaced. Designing for testability matters less in more dynamic languages, where things are much more testable by default. In such languages, most things are easily replaceable, regardless of the project design. This rids the community of such languages from the straw-man argument that the lack of testability of code means it's badly designed and lets them focus on what good design should achieve, at a deeper level.

Testable designs have virtual methods, nonsealed classes, interfaces, and a clear separation of concerns. They have fewer static classes and methods, and many more instances of logic classes. In fact, testable designs correlate to SOLID design principles but don't necessarily mean you have a good design. Perhaps it's time that the end goal should not be testability but good design alone.
This basically means that making the design testable because of the restrictions of a static language doesn't inherently make it a "good design". For me, a good design accomplishes what is needed for today's requirements and doesn't think about the future too much. Of course, making every dependency abstract is good for future maintainability, but it makes the API very complex. I want to make a dependency an interface when it's likely to change, or when many concrete classes implement that interface - not because testability requires it. Doing it because testability requires it leads to "bad design".
In my self-directed efforts to bring my programming skills and habits into the 21st Century (migrating from Pascal & Fortran to C# and C++), I've been studying a fair amount of available source code. So far as I have been able to determine, Classes are unique 'standalone' entities (much like their Function ancestors).
I have, however, run across numerous instances where one or more Classes are nested within another Class. My 'gut instinct' in this regard is that doing so is simply due to extremely poor methodology - however, I'm not yet familiar enough with modern OOP methodology to truly make such a determination.
Hence, the following overlapping questions:
Is there legitimate reasoning for nesting one Class inside another? And, if so, what is the rationale in doing so as opposed to each Class being wholly independent?
(Note: The examples I've seen have been using C#, but it seems that this aspect applies equally to C++.)
Hmm, this might call for a very subjective answer, and therefore many will disagree with mine. I am one of those people who believe that there is no real need for nested classes, and I tend to agree with your statement:
My 'gut instinct' in this regard is that doing so is simply due to extremely poor methodology
The cases where people feel the need for nested classes are those where functionality is tightly coupled to the behavior designed in the outer class. E.g. event handling can be designed in an inner class, or threading behavior can find its way into inner classes.
I would rather refactor specific behavior out of the 'outer' class so that I end up with two smaller classes that both have clear responsibilities.
To me, the main drawback of designing inner classes is that they tend to clutter functionality, which becomes hard(er) to use with principles such as TDD (test-driven development).
If you are not relying on test-driven principles, I think it will not harm you a lot. It is (like so many things) a matter of taste, not a matter of right or wrong. I have learned that the topic can lead to long and exhausting discussions, quite similar to whether you should use static helper classes that tend to do more than just 'be a helper' and acquire more and more state over time.
The discussions can become a lot more concrete if you ever run into a real life example. Until then it will be mostly people's "gut feeling".
Common uses for nested classes in C# include classes that are for internal use by your class and that you don't want exposed in your module's namespace. For example, you may need to throw an exception that you don't expose to the outside world. In that case you would want to make it a nested class so that others can't use it.
Another example is a binary tree's Node class. Not only is it something you wouldn't want exposed outside of your class, but it might also need access to private members for internal operations.
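A minimal sketch of that idea (the tree itself is simplified for illustration):

    using System;

    // Node is an implementation detail: invisible outside BinaryTree<T>,
    // yet free to participate in the tree's private internals.
    public class BinaryTree<T> where T : IComparable<T>
    {
        private Node _root;

        private class Node
        {
            public T Value;
            public Node Left, Right;
            public Node(T value) => Value = value;
        }

        public void Add(T value) => _root = Add(_root, value);

        private Node Add(Node node, T value)
        {
            if (node == null) return new Node(value);
            if (value.CompareTo(node.Value) < 0)
                node.Left = Add(node.Left, value);
            else
                node.Right = Add(node.Right, value);
            return node;
        }
    }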
One area where you encounter nested classes quite frequently (although perhaps without noticing) is enumerators (i.e. classes implementing IEnumerator<T>).
In order for a collection to support multiple simultaneous enumerations (e.g. on multiple threads, or even just nested foreach loops over the same collection), the enumeration logic needs to be separated from the collection into another class.
However, enumerators often need specific knowledge of the implementation details of a collection in order to work correctly. If you were to do this with an entirely separate (non-nested) class, you'd have to make those internals more accessible than they really should be, thus breaking encapsulation. (One could perhaps argue that it could be done using internal members, but I think this still breaks encapsulation to some extent, even if it is "only" exposing implementation details to the members of the same assembly).
By using a private nested class for the enumerator, these problems go away whilst maintaining proper encapsulation, since the nested class can access the internals of its containing class.
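Here is a stripped-down sketch of the pattern (a hypothetical collection; the real BCL collections are more elaborate):

    using System.Collections;
    using System.Collections.Generic;

    public class FixedList<T> : IEnumerable<T>
    {
        private readonly T[] _items;   // private detail the enumerator may read
        public FixedList(params T[] items) => _items = items;

        public IEnumerator<T> GetEnumerator() => new Enumerator(this);
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();

        // Private nested enumerator: it sees _items directly, but each call
        // to GetEnumerator() gets its own independent cursor, so nested or
        // concurrent foreach loops don't interfere with each other.
        private class Enumerator : IEnumerator<T>
        {
            private readonly FixedList<T> _owner;
            private int _index = -1;
            public Enumerator(FixedList<T> owner) => _owner = owner;

            public T Current => _owner._items[_index];
            object IEnumerator.Current => Current;
            public bool MoveNext() => ++_index < _owner._items.Length;
            public void Reset() => _index = -1;
            public void Dispose() { }
        }
    }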
There are pros and cons to having nested classes. The pros center on deliberate uses such as the IEnumerator<T> case that Iridium mentioned, and nesting classes to facilitate the Command pattern in WPF/MVVM. It is popular to encapsulate SQL Create, Read, Update and Delete (CRUD) operations in classes/commands in code using these patterns for line-of-business/data-driven apps.
The negatives associated with nesting classes center on debugging: the more classes you nest, the greater the complexity, the easier it is to introduce bugs, and the harder it is to debug it all. Even using something like the Command pattern, as organizing as it is, can bloat your class with too many classes/commands.
Consider a hypothetical situation where an old, legacy presentation library has been maintained over the years and has gradually had more and more business logic coded into it through a process of hasty corrections and a lack of proper architectural oversight. Alternatively, consider a business class or namespace that is not separated from presentation by assembly boundaries, and is thus capable of referencing something like System.Windows.Forms without being forced to add a reference (a much more sobering action than a simple using clause).
In situations like this, it's not unimaginable that the business code used by this UI code will eventually be called upon for reuse. What is a good way to refactor the two layers apart to allow for this?
I'm loosely familiar with design patterns - at least in principle, anyway. However, I don't have a whole ton of practical experience, so I'm somewhat unsure of my intuitions. I've started down the path of using the Strategy pattern for this. The idea is to identify the places where the business logic calls up to UI components to ask the user a question and gather data, and then to encapsulate those into a set of interfaces. Each method on such an interface will contain the UI-oriented code from the original workflow, and the UI class will then implement that interface.
The new code that wants to reuse the business logic in question will also implement this interface, but will substitute either new windows or possibly pre-fab or parameterized answers to the questions originally answered by the UI components. This way, the business logic can be treated as a real library, albeit one with a somewhat awkward interface parameter passed to some of its methods.
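To make that concrete, here is a rough sketch of what I have in mind (all names invented):

    // The interface captures the questions the business logic used to
    // ask directly through UI components.
    public interface IUserInteraction
    {
        bool ConfirmOverwrite(string fileName);
        string AskForAccountCode();
    }

    public class BatchProcessor                    // business logic, now UI-free
    {
        private readonly IUserInteraction _ui;
        public BatchProcessor(IUserInteraction ui) => _ui = ui;

        public void Process(string fileName)
        {
            if (!_ui.ConfirmOverwrite(fileName)) return;
            string account = _ui.AskForAccountCode();
            // ... the business rules that previously lived in the form ...
        }
    }

    // A reuse context substitutes pre-fab answers for the original dialogs.
    public class SilentInteraction : IUserInteraction
    {
        public bool ConfirmOverwrite(string fileName) => true;
        public string AskForAccountCode() => "DEFAULT";
    }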
Is this a decent approach? How better should I go about this? I will defer to your collective internet wisdom.
Thanks!
I humbly suggest Model-View-Controller: MVC has a high probability of being a successful solution to your problem. It separates the various kinds of logic, much as you describe.
HTH
You seem to be taking a good approach, in which you break dependencies between concrete elements in your design to instead depend on abstractions (interfaces). When you break dependencies like this, you should immediately start using unit tests to cover your legacy code base and to evolve the design with improved assurance.
I've found the book Working Effectively with Legacy Code to be invaluable in these situations. Also, don't jump right into the patterns without first looking at the principles of object oriented design, like the SOLID principles. They often guide your choice of patterns and decisions about the evolution of the system.
I would approach it by clearly identifying the entities and the actions they can perform or have performed on them. Then, one by one, start creating independent business-logic objects for those, refactoring the logic out of the UI and making the UI call into the BL objects.

At that point, if I understand your scenario correctly, you would have a handful of BL objects, some portion of which still make WinForms calls; those WinForms calls would need to be promoted out into the UI layer.

Then, as JustBoo says, I think you'll have a distinct enough situation to start abstracting controllers out of your BL and UI and make it all function in an MVC design.
Okay, given your various comments, I would take Mr. Hoffa's advice and extend it. I'm sure you've heard that hard problems should be broken down into ever-smaller units of work until they can be "conquered."

Using that technique, coupled with the methodologies of Refactoring, could solve your problems. There is a book and lots of information on the web about it. You now have a link; that page has a ton of links to further information.

One more link, from the author of the book.

So you refactor, slowly but surely, toward the creamy goodness of MVC, step by step.
HTH
I apologize for the subjectiveness of this question, but I am a little stuck and I would appreciate some guidance and advice from anyone who's had to deal with this issue before:
I have (what's become) a very large RESTful API project written in C# 2.0, and some of my classes have become monstrous. My main API class is an example of this, with several dozen members and methods (probably approaching hundreds). As you can imagine, it's becoming a small nightmare: not only is the code hard to maintain, but even just navigating it has become a chore.
I am reasonably new to the SOLID principles, and I am a massive fan of design patterns (though I am still at the stage where I can implement them without quite knowing when to use them, in situations where it's not so obvious).
I need to break my classes down in size, but I am at a loss as to how best to go about doing it. Can my fellow StackOverflow'ers please suggest ways that they have taken existing code monoliths and cut them down to size?
Single Responsibility Principle - A class should have only one reason to change. If you have a monolithic class, then it probably has more than one reason to change. Simply define your one reason to change, and be as granular as is reasonable. I would suggest starting "large": refactor a third of the code out into another class, and once you have that, start over with your new class. Going straight from one class to 20 is too daunting.
Open/Closed Principle - A class should be open for extension, but closed for modification. Where reasonable, mark your members and methods as virtual or abstract. Each item should be relatively small in nature and give you some base functionality or definition of behavior. However, if you need to change the functionality later, you will be able to add code, rather than change code, to introduce new or different functionality.
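For instance, a minimal sketch (hypothetical names) of adding behavior without touching existing code:

    // Existing, tested code stays closed for modification...
    public abstract class PriceRule
    {
        public abstract decimal Apply(decimal price);
    }

    // ...while new behavior arrives as new code, via extension.
    public class HolidayDiscount : PriceRule
    {
        public override decimal Apply(decimal price) => price * 0.9m;
    }

    public class TaxSurcharge : PriceRule
    {
        public override decimal Apply(decimal price) => price * 1.2m;
    }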
Liskov Substitution Principle - A class should be substitutable for its base class. The key here, in my opinion, is to do inheritance correctly. If you have a huge case statement, or two pages of if statements that check the derived type of the object, then you're violating this principle and need to rethink your approach.
Interface Segregation Principle - In my mind, this principle closely resembles the Single Responsibility principle. It just applies specifically to a high level (or mature) class/interface. One way to use this principle in a large class is to make your class implement an empty interface. Next, change all of the types that use your class to be the type of the interface. This will break your code. However, it will point out exactly how you are consuming your class. If you have three instances that each use their own subset of methods and properties, then you now know that you need three different interfaces. Each interface represents a collective set of functionality, and one reason to change.
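A bare-bones sketch of that discovery technique (names invented):

    // Step 1: a deliberately empty interface, implemented by the monolith.
    public interface IApi { }

    public class MonolithicApi : IApi
    {
        public void ImportOrders() { /* ... */ }
        public void RenderReport() { /* ... */ }
        public void PurgeCache()   { /* ... */ }
    }

    // Step 2: retype every consumer to IApi. The compiler now flags each
    // call site, revealing which subset of members each consumer really
    // uses - and therefore which focused interfaces to extract:
    public interface IOrderImporter  { void ImportOrders(); }
    public interface IReportRenderer { void RenderReport(); }
    public interface ICachePurger    { void PurgeCache(); }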
Dependency Inversion Principle - The parent/child allegory made me understand this. Think of a parent class. It defines behavior, but isn't concerned with the dirty details; it's dependable. A child class, however, is all about the details and can't be depended upon, because it changes often. You always want to depend upon the dependable, responsible parent classes, and never the other way around. If you have a parent class depending upon a child class, you'll get unexpected behavior when you change something. In my mind, this is the same mindset as SOA: a service contract defines inputs, outputs, and behavior, with no details.
Of course, my opinions and understandings may be incomplete or wrong. I would suggest learning from people who have mastered these principles, like Uncle Bob. A good starting point for me was his book, Agile Principles, Patterns, and Practices in C#. Another good resource was Uncle Bob on Hanselminutes.
Of course, as Joel and Jeff pointed out, these are principles, not rules. They are tools to help guide you, not the law of the land.
EDIT:
I just found these SOLID screencasts, which look really interesting. Each one is approximately 10-15 minutes long.
There's a classic book by Martin Fowler - Refactoring: Improving the Design of Existing Code.
There he provides a set of design techniques and examples of decisions to make your existing codebase more manageable and maintainable (which is what the SOLID principles are all about). Even though there are some standard routines in refactoring, it is a very custom process, and one solution cannot be applied to all projects.
Unit testing is one of the cornerstones of making this process succeed. You need to cover your existing codebase with enough test coverage that you can be sure you don't break things while changing it. Actually, using a modern unit-testing framework with mocking support will encourage you toward a better design.
There are tools like ReSharper (my favorite) and CodeRush to assist with tedious code changes, but those usually handle trivial, mechanical stuff; making design decisions is a much more complex process, and there is far less tool support for it. Using class diagrams and UML helps. That is what I would start from, actually: try to make sense of what is already there and bring some structure to it. From there, you can make decisions about decomposition and the relations between different components, and change your code accordingly.
Hope this helps and happy refactoring!
It will be a time-consuming process. You need to read the code, identify parts that do not meet the SOLID principles, and refactor them into new classes. Using a VS add-in like ReSharper (http://www.jetbrains.com) will assist with the refactoring process.
Ideally you will have good coverage of automated unit tests so that you can ensure your changes do not introduce problems with the code.
More Information
In the main API class, you need to identify methods that relate to each other and create a class that more specifically represents the actions those methods perform.
e.g.
Let's say I had an Address class with separate variables containing the street number, name, etc. This class is responsible for inserting, updating, deleting, and so on. If I also needed to format an address a specific way for a postal address, I could have a method called GetFormattedPostalAddress() that returns the formatted address.
Alternatively, I could refactor this method into a class called AddressFormatter that takes an Address in its constructor and has a get property called PostalAddress that returns the formatted address.
The idea is to separate different responsibilities into separate classes.
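A minimal sketch of that split (details invented for illustration):

    public class Address                       // persistence responsibility only
    {
        public string StreetNumber { get; set; }
        public string StreetName { get; set; }
        public string City { get; set; }
        public string PostCode { get; set; }

        public void Insert() { /* ... */ }
        public void Update() { /* ... */ }
        public void Delete() { /* ... */ }
    }

    public class AddressFormatter              // formatting responsibility only
    {
        private readonly Address _address;
        public AddressFormatter(Address address) => _address = address;

        public string PostalAddress =>
            $"{_address.StreetNumber} {_address.StreetName}\n" +
            $"{_address.City} {_address.PostCode}";
    }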
What I've done when presented with this type of thing (and I'll readily admit that I haven't used SOLID principles before, but from what little I know of them, they sound good) is to look at the existing codebase from a connectivity point of view. Essentially, by looking at the system, you should be able to find some subset of functionality that is internally highly coupled (many frequent interactions) but externally loosely coupled (few infrequent interactions). Usually, there are a few of these pieces in any large codebase; they are candidates for excision.

Essentially, once you've identified your candidates, you have to enumerate the points at which they are externally coupled to the system as a whole. This should give you a good idea of the level of interdependency involved. There usually is a fair bit of interdependency involved. Evaluate the subsets and their connection points for refactoring; frequently (but not always) there end up being a couple of clear structural refactorings that can increase the decoupling.

With an eye on those refactorings, use the existing couplings to define the minimal interface required to allow the subsystem to work with the rest of the system. Look for commonalities in those interfaces (frequently, you find more than you'd expect!). And finally, implement the changes that you've identified.

The process sounds terrible, but in practice it's actually pretty straightforward. Mind you, this is not a roadmap toward a completely, perfectly designed system (for that, you'd need to start from scratch), but it very certainly will decrease the complexity of the system as a whole and increase the comprehensibility of the code.
OOD - Object Oriented Design
SOLID - class design
Single Responsibility Principle - SRP - introduced by Uncle Bob. A method, class, or module is responsible for doing only a single thing (one single task).
Open/Closed Principle - OCP - introduced by Bertrand Meyer. A method, class, or module is open for extension and closed for modification. Use the power of inheritance, abstraction, polymorphism, extension, and wrappers. [Java example], [Swift example]
[Liskov Substitution Principle] - LSP - introduced by Barbara Liskov and Jeannette Wing. A subtype can replace its supertype without side effects.
Interface Segregation Principle - ISP - introduced by Uncle Bob. Your interfaces should be as small as possible.
[Dependency Inversion Principle(DIP)] - DIP - introduced by Uncle Bob. A high-level class or layer should not depend on a low-level class or layer; for example, when you have an aggregation [About] dependency, you should rather use some abstraction/interfaces. [DIP vs DI vs IoC]
Six principles about packages/modules (.jar, .aar, .framework):

What to put inside a package:
The Release/Reuse Equivalency Principle
The Common Closure Principle
The Common Reuse Principle

Couplings between packages:
The Acyclic Dependencies Principle
The Stable Dependencies Principle
The Stable Abstractions Principle
[Protocol-Oriented Programming (POP)]