I'm reading an example provided in the book "Head First Design Patterns" about the Decorator pattern.
I have noticed two things:
If you need to remove a decorator from the stack of wrapped decorators, you have to iterate one by one through the component references, which is O(n) complexity.
Conceptually I find it wrong to wrap (encapsulate) the base component into the decorator object. It should be reversed; the component object should encapsulate the decorating objects.
I'm new to design patterns, and there is a high probability that I'm wrong. Please explain to me what specifically is wrong with the way I think, so I can learn.
I have created a different design which solves the problems I have mentioned. Maybe it adds new problems; please feel free to point out the issues.
Here is the UML Diagram of the suggestion:
Basically, what I did is create a dictionary in the Component class that records which decorators have been added, and make the Decorator abstract class inherit not from the Component but from the interface (which the Component abstract class implements as well).
In this way, we can remove any decoration we want with O(1) complexity, and it is constructed more logically: the component wraps the decorators, not vice versa.
I understand that maybe I haven't noticed some advantage of the original Decorator pattern design. Please advise me.
Here is my code url.
Edit:
An example of when a customer would need to remove a decorator:
Say the customer is choosing condiments: adding whip, removing caramel, and seeing each time how the total price varies based on what is currently added as a decorator.
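Roughly, the idea looks something like this (a simplified sketch with made-up names, not the exact code from the link):

using System.Collections.Generic;

public interface IBeverage
{
    double GetCost();
}

// In this design the decorators implement the interface directly instead of wrapping a component.
public abstract class CondimentDecorator : IBeverage
{
    public abstract double GetCost();
}

public class Whip : CondimentDecorator
{
    public override double GetCost() { return 0.10; }
}

public class Caramel : CondimentDecorator
{
    public override double GetCost() { return 0.20; }
}

// The component keeps a dictionary of its decorators, so any one of them can be removed by key in O(1).
public class Espresso : IBeverage
{
    private readonly Dictionary<string, CondimentDecorator> condiments =
        new Dictionary<string, CondimentDecorator>();

    public void Add(string name, CondimentDecorator condiment) { condiments[name] = condiment; }
    public void Remove(string name) { condiments.Remove(name); }

    public double GetCost()
    {
        double total = 1.99; // base espresso price
        foreach (CondimentDecorator condiment in condiments.Values)
            total += condiment.GetCost();
        return total;
    }
}

So removing "caramel" is a single dictionary removal, and the total price is recomputed from whatever is currently in the dictionary.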
If you need to remove a decorator from the stack of wrapped decorators, you have to iterate one by one through the component references, which is O(n) complexity.
It's true that removing decorators is theoretically more complex when they're wrapped. However, you need to consider what a likely n is. I'm going to guess that for the Decorator pattern as it was proposed, there are probably only a small number of decorators (max(n) == 20 or so). Iterating over that many would not be a practical problem.
Conceptually I find it wrong to wrap (encapsulate) the base component into the decorator object. It should be reversed; the component object should encapsulate the decorating objects.
The Decorator pattern seeks to add functionality (via concrete decorators) without having to modify the component class. In some cases, a Component can't be changed (e.g., it comes from a standard library). With the approach you propose, the Component would have to be modified to encapsulate its decorators. This is not the intention of Decorator.
In the original GoF book, the design patterns have clear definitions of the problem they solve, and the consequences (which are not all positive!) of the design pattern. In line with your complexity point (removing decorators), the authors mentioned this consequence:
Lots of little objects. A design that uses Decorator often results in systems composed of lots of little objects that all look alike. The objects differ only in the way they are interconnected, not in their class or in the value of their variables. Although these systems are easy to customize by those who understand them, they can be hard to learn and debug.
Why would you need to "remove a decorator from the stack of wrapped decorators"? What does this mean? I think you're confusing two different concepts: the Decorator pattern and stacks. The first is a design pattern in object-oriented programming; the latter is a data structure.
The Decorator pattern exists so that new functionality can be added to a base component without the need to redefine the components that use it or depend on it. That's why the decorator component "encapsulates" the base component: so that it can use whatever functionality the base contains while adding whatever else is required. If the base component encapsulated the decorator components, how would you reference a functionality present in any given one of them?

Following your example, imagine I call Mocca.GetCost(). If it's not overridden nor redefined, CondimentDecorated.GetCost() will be called, which in turn, I imagine, considering what you're trying to do, will call Beverage.GetCost(). What will this method do? Iterate over the dictionary to look for which decorator method to call? This doesn't make sense, because in calling CondimentDecorated.GetCost() you'll only be calling Beverage.GetCost() again. How will all this work if you can, as you said, "remove any decoration you want" from the decorators dictionary? What will the behaviour be then, when you call Mocca.GetCost()?
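For contrast, in the classic wrapped form the cost call simply chains through the wrappers. Here is a minimal sketch (the names loosely mirror the Head First coffee example; the prices are made up):

public abstract class Beverage
{
    public abstract double GetCost();
}

public class Espresso : Beverage
{
    public override double GetCost() { return 1.99; }
}

// A decorator IS a Beverage and HAS a Beverage: it wraps the component it decorates.
public abstract class CondimentDecorator : Beverage
{
    protected readonly Beverage beverage;
    protected CondimentDecorator(Beverage beverage) { this.beverage = beverage; }
}

public class Mocha : CondimentDecorator
{
    public Mocha(Beverage beverage) : base(beverage) { }

    // Each decorator adds its own cost, then delegates to whatever it wraps.
    public override double GetCost() { return 0.20 + beverage.GetCost(); }
}

// Usage: new Mocha(new Mocha(new Espresso())).GetCost() returns 2.39.

No class needs to know what it is wrapped by; adding a condiment is just another wrap, and neither the base component nor the other decorators have to change.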
It's not that what you're trying to do isn't possible, and it's great to question why something is the way it is. But there are just a lot of OOP misconceptions and violations here. Meaning: question not only how things could be made better, but also why they are done the way they are.
Related
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail inside the CrmService; however, there is one problem. The SalesforceService uses some exceptions and enums, and it would be weird if my CrmService threw SalesforceExceptions or required you to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".

The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.

Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It's a lot of upfront effort, but if you ever have to change out Salesforce in a few years you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are never likely to change out Salesforce, is it really needed? That's for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface will include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx); that is, pass an ICrm as a parameter to its constructor (or to its methods if you prefer). That way you keep your CrmService class quite cohesive and independent, so you can create and use different CRMs without the need to change most of your code.
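Put together, the shape could look something like this sketch. The ICrm, CrmSalesForce and CrmService names come from the description above; the Salesforce member names, the exception constructor and the enum values are assumptions for illustration only, not the real Salesforce API:

using System;

// Minimal stand-ins for the vendor API mentioned in the question (names assumed).
public enum SalesforceOrderStatus { Open, Closed }
public class SalesforceException : Exception { }
public class SalesforceService
{
    public void SubmitPayment(decimal amount) { /* vendor call */ }
    public SalesforceOrderStatus GetOrderStatus(string orderId) { return SalesforceOrderStatus.Open; }
}

// Provider-agnostic contract and types.
public enum OrderStatus { Pending, Completed }
public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public interface ICrm
{
    void MakePayment(decimal amount);
    OrderStatus CheckOrder(string orderId);
}

// Adapter: translates Salesforce calls, exceptions and enums into the common ICrm terms.
public class CrmSalesForce : ICrm
{
    private readonly SalesforceService salesforce = new SalesforceService();

    public void MakePayment(decimal amount)
    {
        try { salesforce.SubmitPayment(amount); }
        catch (SalesforceException ex)
        {
            throw new CrmException("Payment failed in Salesforce.", ex);
        }
    }

    public OrderStatus CheckOrder(string orderId)
    {
        SalesforceOrderStatus status = salesforce.GetOrderStatus(orderId);
        return status == SalesforceOrderStatus.Closed ? OrderStatus.Completed : OrderStatus.Pending;
    }
}

// The product-agnostic service depends only on ICrm (constructor injection).
public class CrmService
{
    private readonly ICrm crm;
    public CrmService(ICrm crm) { this.crm = crm; }

    public void Pay(decimal amount) { crm.MakePayment(amount); }
    public OrderStatus GetOrderStatus(string orderId) { return crm.CheckOrder(orderId); }
}

Swapping Salesforce for another CRM then means writing one new ICrm implementation and passing it to CrmService; nothing else has to change.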
Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now if the constructor of that class is public and isn't too complex or dependent on complex types, it would have the same effect to just make the method in question virtual and inherit.
Is this preferable over extracting an interface? If so, why?
Edit:
using System;
using System.Collections.Generic;
using System.IO;

class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // is slow even when s is a memory stream
        throw new NotImplementedException(); // actual parsing omitted
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
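For reference, the two options I am weighing look roughly like this (VirtualParser and StubParser are made-up names just so both variants can sit side by side; in practice you would pick one approach for the real Parser):

using System.Collections.Generic;
using System.IO;

// Option 1: extract an interface; a test double implements it.
public interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

public class Parser : IParser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // the real, slow parsing
        return new Dictionary<string, int>();
    }
}

// Option 2: make the method virtual; a test double overrides it.
public class VirtualParser
{
    public virtual IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // the real, slow parsing
        return new Dictionary<string, int>();
    }
}

public class StubParser : VirtualParser
{
    public override IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // canned result, returned instantly in tests
        return new Dictionary<string, int> { { "sample", 1 } };
    }
}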
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit from the class and override those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask and not the method directly. This will provide a more robust test suite as well. You also need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you, basically a promise that your implementation will provide access to a specified set of contact points (methods, properties, etc), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class, on the other hand, in addition to a contract, specifies at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still enables you to call into the implementation of the base, while providing your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, and multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract or you want to extract some behaviour as well, and the answer should be obvious for a specific case.
As for the IParser / Parser pairs, first they are great for unit testing and for dependency injection, and second, they do not charge you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things, and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) this method in their implementations, but that you actually expect them to (and do you?).
Such a design comes with rather heavy consequences, so unless you explicitly need it, you shouldn't use it. Try sticking to programming to an interface instead.
Good object-oriented design principles state that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (not only should your classes implement interfaces/abstract classes, but they should also consist of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is created; for now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data-stream-mining parser and what not. Do it properly now and save yourself time and trouble later.
There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out.
The pattern is pretty simple, and consists of two parts:
A factory method that creates objects based on the arguments passed in.
Objects created by the factory.
So far this is just a standard "factory" pattern.
The issue that I'm asking about, however, is that the parent in this case maintains a set of references to every child object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created.
When receiving a request for a "new" object, the parent first searches the dictionary to see if an object with the required arguments already exists. If it does, it returns that object; if not, it creates a new object, stores a reference to it in the dictionary, and returns it.
This pattern prevents having duplicative objects representing the same underlying "thing". This is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging - having one object per item being represented can prevent multiple messages/events for a single underlying source.
There are probably other reasons to use this pattern, but this is where I've found this useful.
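In code, the mechanism is roughly this (illustrative names only; as mentioned, the stored references can be weak rather than strong, which is omitted here for brevity):

using System.Collections.Generic;

public class Widget
{
    public string Key { get; private set; }
    public Widget(string key) { Key = key; }   // imagine relatively expensive construction here
}

// The "parent": a factory that remembers every child it has created, keyed by the creation arguments.
public class WidgetFactory
{
    private readonly Dictionary<string, Widget> created = new Dictionary<string, Widget>();

    public Widget Get(string key)
    {
        Widget existing;
        if (created.TryGetValue(key, out existing))
            return existing;                  // same arguments, same exact object

        Widget widget = new Widget(key);      // expensive creation happens only once per key
        created[key] = widget;
        return widget;
    }
}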
My question is: what to call this?
In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. There are multiple instances of this class, however, so it's not at all a true singleton.
In my own personal terminology, I tend to call the parent class a "global singleton". I then call the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item) then the reference they each hold must be to the same exact object, hence "reference equality".
But these are my own invented terms, and I am not sure that they are good ones.
Is there standard terminology for this concept? And if not, could some naming suggestions be made?
Thanks in advance...
Update #1:
In Mark Seemann's reply, below, he gives the opinion that "The structure you describe is essentially a DI Container used as a Static Service Locator (which I consider an anti-pattern)."
While I agree that there are some similarities, and Mark's article is truly excellent, I think that this Dependency Injection Container / Static Service Locator pattern is actually a narrower implementation of the general pattern that I am describing.
In the pattern described in the article, the service (the 'Locator' class) is static, and therefore requires injection to have variability in its functionality. In the pattern I am describing, the service class need not be static at all. One could provide a static wrapper, if one wants, but being a static class is not at all required, and without a static class, dependency injection is not needed (and, in my case, not used).
In my case the 'Service' is either an interface or an abstract class, but I don't think that 'Service' and 'Client' classes are even required to be defined for the pattern I am describing. It is convenient to do so, but if all the code is internal, the 'Service' class could simply be a 'Parent' class that controls the creation of all children via a factory method and keeps weak (or possibly strong) references to all of its children. No injection, nothing static, and not even a requirement to have defined interfaces or abstract classes.
So my pattern is really not a 'Static Service Locator' and neither is it a 'Dependency Injection Container'.
I think the pattern I'm describing is much more broad than that. So the question remains: can anyone identify the name for this approach? If not, then any ideas for what to call this are welcome!
Update #2:
Ok, it looks like Gabriel Ščerbák got it with the GoF "Flyweight" design pattern. Here are some articles on it:
Flyweight pattern (Wikipedia)
Flyweight design pattern (dofactory.com)
Flyweight Design Pattern (sourcemaking.com)
A 'flyweight factory' (server) and 'flyweight objects' (client) approach using interfaces or abstract classes is well explained in the dofactory.com article and is exactly what I was trying to explain here.
The Java example given in the Wikipedia article is the approach I take when implementing this approach internally.
The Flyweight pattern also seems to be very similar to the concept of hash consing and the Multiton pattern.
Slightly more distantly related would be an object pool, which differs in that it would tend to pre-create and/or hold on to created objects even beyond their usage to avoid the creation & setup time.
Thanks all very much, and thanks especially to Gabriel.
However, if anyone has any thoughts on what to call these child objects, I'm open to suggestions. "Internally-Mapped children"? "Recyclable objects"? All suggestions are welcome!
Update #3:
This is in reply to TrueWill, who wrote:
The reason this is an anti-pattern is because you are not using DI. Any class that consumes the Singleton factory (aka service locator) is tightly coupled to the factory's implementation. As with the new keyword, the consumer class's dependencies on the services provided by the factory are not explicit (cannot be determined from the public interface). The pain comes in when you try to unit test consumer classes in isolation and need to mock/fake/stub service implementations. Another pain point is if you need multiple caches (say one per session or thread)
Ok, so Mark, below, said that I was using DI/IoC. Mark called this an anti-pattern.
TrueWill agreed with Mark's assessment that I was using DI/IoC in his comment within Mark's reply: "A big +1. I was thinking the same thing - the OP rolled his own DI/IoC Container."
But in his comment here, TrueWill states that I am not using DI and it is for this reason that it is an anti-pattern.
This would seem to mean that this is an anti-pattern whether using DI or not...
I think there is some confusion going on, for which I should apologize. For starters, my question begins by talking about using a singleton. This alone is an anti-pattern. This succeeded in confusing the issue with respect to the pattern I am trying to achieve by implying a false requirement. A singleton parent is not required, so I apologize for that implication.
See my "Update #1" section, above, for a clarification. See also the "Update #2" section discussing the Flyweight pattern that represents the pattern that I was trying to describe.
Now, the Flyweight pattern might be an anti-pattern. I don't think that it is, but that could be discussed. But, Mark and TrueWill, I promise you that in no way am I using DI/IoC, honest, nor was I trying to imply that DI/IoC is a requirement for this pattern.
Really sorry for any confusion.
It looks like the classic Flyweight design pattern from the GoF book. It does not cover the singleton factory with a hash map, but it generally covers saving on space and performance by reusing an already created object that is referenced by many other objects. Check out this pattern.
The objects themselves aren't following the Singleton pattern, so referring to the fetched objects as singletons can be confusing.
What about a Recycling Factory? :]
The structure you describe is essentially a DI Container used as a Static Service Locator (which I consider an anti-pattern).
Each of the services created by this Service Locator has a so-called Singleton lifetime. Most DI Containers support this as one of several available lifetimes.
What is the best approach for defining interfaces in either C# or Java? Should we make them general up front, or add methods as and when the real need arises?
Regards,
Srinivas
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
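As a rough illustration of the _V2 idea (all names here are invented):

public interface IReportWriter
{
    void Write(string report);
}

// The shipped interface stays untouched; the new member lives in a V2 interface that extends it.
public interface IReportWriter_V2 : IReportWriter
{
    void WriteSummary(string report);
}

// Implementations that need the new capability opt into the V2 contract.
public class FileReportWriter : IReportWriter_V2
{
    public void Write(string report) { /* ... */ }
    public void WriteSummary(string report) { /* ... */ }
}

Existing implementations of IReportWriter keep compiling; callers that need WriteSummary ask for IReportWriter_V2.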
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.

DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List<T>.Sort consumes the IComparer<T> interface.

DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface it is very difficult to change or remove it. Be very conservative in your creation of your interfaces, as they are binding contracts.
As a side note, here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins", or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points, then you probably didn't need interfaces to begin with, but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design, because changes to the interface will be difficult if a lot of code will suddenly break in the next iteration.
Changes to the interface need to be made with care; therefore, it would be ideal if changes didn't have to be made after the initial release. This means that the first iteration will be very important in terms of the design.
However, if changes are required, one way to implement them would be to deprecate the old methods and provide a transition path for old code to use the newly designed features. This does mean that the deprecated methods will still stick around to prevent the code using the old methods from breaking. That is not ideal, so it is the "price to pay" for not getting it right the first time around.
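In C# this is typically done with the Obsolete attribute (Java's rough equivalent is @Deprecated). A small sketch of the deprecate-and-forward approach, with invented class and member names:

using System;

public class ReportService
{
    // Old member kept so existing callers keep compiling; marked deprecated and forwarded to the replacement.
    [Obsolete("Use GenerateReport(ReportOptions) instead.")]
    public void GenerateReport(string format)
    {
        GenerateReport(new ReportOptions { Format = format });
    }

    // The newly designed replacement.
    public void GenerateReport(ReportOptions options)
    {
        // ... actual report generation ...
    }
}

public class ReportOptions
{
    public string Format { get; set; }
}

Callers of the old overload get a compiler warning pointing them at the new one, while their code continues to work during the transition.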
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods already. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface, the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so that they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
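For instance, a symmetric and complete interface might read like this (names made up, following the examples above):

public interface IFooBarSession
{
    void OpenXYZ();
    void CloseXYZ();                  // every "open" has its matching "close"

    void AddFooBar(string name);
    void RemoveFooBar(string name);   // every "add" has its matching "remove"
}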
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blah blah" kind of stuff. It is YOUR code, after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that another team or a customer will work with (think plugins, extension points or things like that), then you have to be conservative with what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff, though many of them still apply. Know what applies or not to your situation.
No rule will make up for lack of common sense.
I apologize for the subjectiveness of this question, but I am a little stuck and I would appreciate some guidance and advice from anyone who's had to deal with this issue before:
I have (what's become) a very large RESTful API project written in C# 2.0, and some of my classes have become monstrous. My main API class is an example of this, with several dozen members and methods (probably approaching hundreds). As you can imagine, it's becoming a small nightmare, not only to maintain this code but even just to navigate it.
I am reasonably new to the SOLID principles, and I am a massive fan of design patterns (but I am still at that stage where I can implement them, yet not quite at the point of knowing when to use them in situations where it's not so obvious).
I need to break my classes down in size, but I am at a loss of how best to go about doing it. Can my fellow StackOverflow'ers please suggest ways that they have taken existing code monoliths and cut them down to size?
Single Responsibility Principle - A class should have only one reason to change. If you have a monolithic class, then it probably has more than one reason to change. Simply define your one reason to change, and be as granular as reasonable. I would suggest starting "large": refactor one third of the code out into another class. Once you have that, start over with your new class. Going straight from one class to 20 is too daunting.
Open/Closed Principle - A class should be open for extension, but closed for change. Where reasonable, mark your members and methods as virtual or abstract. Each item should be relatively small in nature, and give you some base functionality or definition of behavior. However, if you need to change the functionality later, you will be able to add code, rather than change code to introduce new/different functionality.
Liskov Substitution Principle - A class should be substitutable for its base class. The key here, in my opinion, is to do inheritance correctly. If you have a huge case statement, or two pages of if statements that check the derived type of the object, then you're violating this principle and need to rethink your approach.
Interface Segregation Principle - In my mind, this principle closely resembles the Single Responsibility principle. It just applies specifically to a high level (or mature) class/interface. One way to use this principle in a large class is to make your class implement an empty interface. Next, change all of the types that use your class to be the type of the interface. This will break your code. However, it will point out exactly how you are consuming your class. If you have three instances that each use their own subset of methods and properties, then you now know that you need three different interfaces. Each interface represents a collective set of functionality, and one reason to change.
Dependency Inversion Principle - The parent / child allegory made me understand this. Think of a parent class. It defines behavior, but isn't concerned with the dirty details. It's dependable. A child class, however, is all about the details, and can't be depended upon because it changes often. You always want to depend upon the parent, responsible classes, and never the other way around. If you have a parent class depending upon a child class, you'll get unexpected behavior when you change something. In my mind, this is the same mindset of SOA. A service contract defines inputs, outputs, and behavior, with no details.
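To make that dependency direction concrete, here is a small, hedged sketch (all names invented): the high-level class depends only on an abstraction, and the volatile detail implements it.

using System;

// The stable abstraction the high-level policy depends on.
public interface IMessageSender
{
    void Send(string message);
}

// High-level class: knows nothing about how messages are actually sent.
public class OrderProcessor
{
    private readonly IMessageSender sender;

    public OrderProcessor(IMessageSender sender) { this.sender = sender; }

    public void Process(string orderId)
    {
        // ... order handling details omitted ...
        sender.Send("Order " + orderId + " processed.");
    }
}

// Low-level detail: can change or be replaced without touching OrderProcessor.
public class ConsoleMessageSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine(message);   // stand-in for e-mail, queue, etc.
    }
}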
Of course, my opinions and understandings may be incomplete or wrong. I would suggest learning from people who have mastered these principles, like Uncle Bob. A good starting point for me was his book, Agile Principles, Patterns, and Practices in C#. Another good resource was Uncle Bob on Hanselminutes.
Of course, as Joel and Jeff pointed out, these are principles, not rules. They are to be tools to help guide you, not the law of the land.
EDIT:
I just found these SOLID screencasts which look really interesting. Each one is approximately 10-15 minutes long.
There's a classic book by Martin Fowler - Refactoring: Improving the Design of Existing Code.
There he provides a set of design techniques and examples of decisions to make your existing codebase more manageable and maintainable (and that is what the SOLID principles are all about). Even though there are some standard routines in refactoring, it is a very custom process and one solution can't be applied to all projects.
Unit testing is one of the corner pillars for this process to succeed. You need to cover your existing codebase with enough tests to be sure you don't break stuff while changing it. Actually, using a modern unit testing framework with mocking support will encourage you towards a better design.
There are tools like ReSharper (my favorite) and CodeRush to assist with tedious code changes. But those usually handle the trivial mechanical stuff; making design decisions is a much more complex process, and there isn't as much tool support. Using class diagrams and UML helps. That's what I would start from, actually: try to make sense of what is already there and bring some structure to it. Then from there you can make decisions about decomposition and relations between different components, and change your code accordingly.
Hope this helps and happy refactoring!
It will be a time-consuming process. You need to read the code, identify parts that do not meet the SOLID principles and refactor them into new classes. Using a VS add-in like ReSharper (http://www.jetbrains.com) will assist with the refactoring process.
Ideally you will have good coverage of automated unit tests so that you can ensure your changes do not introduce problems with the code.
More Information
In the main API class, you need to identify methods that relate to each other and create a class that more specifically represents the actions those methods perform.
e.g.
Let's say I had an Address class with separate variables containing street number, name, etc. This class is responsible for inserting, updating, deleting, etc. If I also needed to format an address a specific way for a postal address, I could have a method called GetFormattedPostalAddress() that returned the formatted address.
Alternatively, I could refactor this method into a class called AddressFormatter that takes an Address in its constructor and has a get property called PostalAddress that returns the formatted address.
The idea is to separate different responsibilities into separate classes.
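A sketch of that split (the member names are assumed; auto-properties are used for brevity even though the question mentions C# 2.0, where you would use fields or full properties):

public class Address
{
    public string StreetNumber { get; set; }
    public string StreetName { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }

    // Insert/Update/Delete members would live here (or, better still, in yet another
    // class whose single responsibility is persistence).
}

// Formatting is a separate responsibility, so it gets its own class.
public class AddressFormatter
{
    private readonly Address address;

    public AddressFormatter(Address address) { this.address = address; }

    public string PostalAddress
    {
        get
        {
            return address.StreetNumber + " " + address.StreetName + "\n" +
                   address.City + " " + address.PostalCode;
        }
    }
}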
What I've done when presented with this type of thing (and I'll readily admit that I haven't used SOLID principles before, but from what little I know of them, they sound good) is to look at the existing codebase from a connectivity point of view. Essentially, by looking at the system, you should be able to find some subset of functionality that is internally highly coupled (many frequent interactions) but externally loosely coupled (few infrequent interactions). Usually, there are a few of these pieces in any large codebase; they are candidates for excision.

Essentially, once you've identified your candidates, you have to enumerate the points at which they are externally coupled to the system as a whole. This should give you a good idea of the level of interdependency involved. There usually is a fair bit of interdependency involved. Evaluate the subsets and their connection points for refactoring; frequently (but not always) there ends up being a couple of clear structural refactorings that can increase the decoupling.

With an eye on those refactorings, use the existing couplings to define the minimal interface required to allow the subsystem to work with the rest of the system. Look for commonalities in those interfaces (frequently, you find more than you'd expect!). And finally, implement these changes that you've identified.
The process sounds terrible, but in practice, it's actually pretty straightforward. Mind you, this is not a roadmap towards getting to a completely perfectly designed system (for that, you'd need to start from scratch), but it very certainly will decrease the complexity of the system as a whole and increase the code comprehensibility.
OOD - Object Oriented Design
SOLID - class design
Single Responsibility Principle - SRP - introduced by Uncle Bob. A method, class, or module should be responsible for doing only one thing (a single task).
Open/Closed Principle - OCP - introduced by Bertrand Meyer. A method, class, or module should be open for extension and closed for modification. Use the power of inheritance, abstraction, polymorphism, extension, and wrappers.
Liskov Substitution Principle - LSP - introduced by Barbara Liskov and Jeannette Wing. A subtype should be able to replace its supertype without side effects.
Interface Segregation Principle - ISP - introduced by Uncle Bob. Your interfaces should be as small as possible.
Dependency Inversion Principle - DIP - introduced by Uncle Bob. An internal (high-level) class or layer should not depend on an external (low-level) class or layer; for example, when you have an aggregation dependency, you should rather depend on some abstraction/interface.
6 principles about packages/modules (.jar, .aar, .framework):

What to put inside a package:
The Release/Reuse Equivalency Principle
The Common Closure Principle
The Common Reuse Principle

Couplings between packages:
The Acyclic Dependencies Principle
The Stable Dependencies Principle
The Stable Abstractions Principle
See also: Protocol Oriented Programming (POP).