Interfaces separated from the class implementation in separate projects? [closed] - c#

We work on a mid-size project (3 developers over more than 6 months) and need to make the following decision: we'd like to have interfaces separated from their concrete implementations. The first step is to store each interface in its own file.
We'd like to go further and separate things even more: we'd like to have one project (CSPROJ) with the interface in one .CS file plus another .CS file with helper classes (public classes used by this interface, some enums, etc.). Then we'd like to have another project (CSPROJ) with a factory, the concrete interface implementation, and the other "worker" classes.
Any class that wants to create an object implementing this interface must reference the first project, which contains the interfaces and public classes, not the implementation itself.
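For illustration, here is a rough sketch of the layout being described (all names are invented for the example):

// --- Project 1: Widgets.Contracts (the interface plus helper types) ---
public enum WidgetKind { Simple, Fancy }

public interface IWidget
{
    WidgetKind Kind { get; }
    void Run();
}

// --- Project 2: Widgets.Impl (references Widgets.Contracts) ---
public static class WidgetFactory
{
    public static IWidget Create(WidgetKind kind) =>
        kind == WidgetKind.Fancy ? (IWidget)new FancyWidget() : new SimpleWidget();
}

internal class SimpleWidget : IWidget
{
    public WidgetKind Kind => WidgetKind.Simple;
    public void Run() { /* do simple work */ }
}

internal class FancyWidget : IWidget
{
    public WidgetKind Kind => WidgetKind.Fancy;
    public void Run() { /* do fancy work */ }
}

Consumers reference only Widgets.Contracts; the implementations can even stay internal because creation goes through the factory.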
This solution has one big disadvantage: it doubles the number of assemblies, because for every "normal" project you would have one project with the interface and one with the implementation.
What would you recommend? Do you think it's a good idea to place all interfaces in one separate project rather than one interface in its own project?

I would distinguish between interfaces like this:
Standalone interfaces whose purpose you can describe without talking about the rest of your project. Put these in a single dedicated "interface assembly", which is probably referenced by all other assemblies in your project. Typical examples: ILogger, IFileSystem, IServiceLocator.
Class coupled interfaces which really only make sense in the context of your project's classes. Put these in the same assembly as the classes they are coupled to.
An example: suppose your domain model has a Banana class. If you retrieve bananas through an IBananaRepository interface, then that interface is tightly coupled to bananas. It is impossible to implement or use the interface without knowing something about bananas. Therefore it is only logical that the interface resides in the same assembly as Banana.
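A minimal sketch of that coupling (the members are hypothetical):

// The repository contract cannot even be declared without referencing
// Banana, so it belongs in the same assembly as Banana.
public class Banana
{
    public int Id { get; set; }
    public decimal Curvature { get; set; }
}

public interface IBananaRepository
{
    Banana GetById(int id);
    void Add(Banana banana);
}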
The previous example has a technical coupling, but the coupling might just be a logical one. For example, an IFecesThrowingTarget interface may only make sense as a collaborator of the Monkey class, even if the interface declaration has no technical link to Monkey.
My answer does depend on the notion that it's okay to have some coupling to classes. Hiding everything behind an interface would be a mistake. Sometimes it's okay to just "new up" a class, instead of injecting it or creating it via a factory.

Yes, I think this is a good idea. Actually, we do it here all the time, and we ultimately have to, for a simple reason:
We use Remoting to access server functionality. So the Remote Objects on the server need to implement the interfaces and the client code has to have access to the interfaces to use the remote objects.
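For illustration, a minimal sketch of that layout, assuming classic .NET Remoting (all names invented):

// --- Contracts.dll: shared by client and server ---
public interface ICalculator
{
    int Add(int a, int b);
}

// --- Server.dll: references Contracts.dll ---
public class Calculator : System.MarshalByRefObject, ICalculator
{
    public int Add(int a, int b) => a + b;
}

// --- Client: references only Contracts.dll, never Server.dll ---
// ICalculator calc = (ICalculator)System.Activator.GetObject(
//     typeof(ICalculator), "tcp://server:8080/Calculator");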
In general, I think you are more loosely coupled when you put the interfaces in a separate project, so just go along and do it. It isn't really a problem to have 2 assemblies, is it?
ADDITION:
Just crossed my mind: By putting the interfaces in a separate assembly, you additionally get the benefit of being able to reuse the interfaces if a few of them are general enough.

I think you should first consider whether ALL interfaces belong to the 'public interface' of your project.
If they are to be shared by multiple projects, executables and/or services, I think it's fair to put them into a separate assembly.
However, if they are for internal use only and exist for your convenience, you could choose to keep them in the same assembly as the implementation, thus keeping the overall number of assemblies relatively low.

I wouldn't do it unless it offers a proven benefit for your application's architecture.
It's good to keep an eye on the number of assemblies you're creating. Even if an interface and its implementation are in the same assembly, you can still achieve the decoupling you rightly seek with a little discipline.

If an implementation of an interface ends up having a lot of dependencies (on other assemblies, etc.), then having the interface in an isolated assembly can simplify life for higher-level consumers.
They can reference the interface without inadvertently becoming dependent on the specific implementation's dependencies.
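A hypothetical illustration (the project names and the AWS dependency are invented for the example):

// --- Storage.Contracts.dll: depends on nothing but the BCL ---
public interface IFileStore
{
    void Save(string path, byte[] data);
}

// --- Storage.S3.dll: references Storage.Contracts.dll plus the AWS SDK,
// JSON serializers, and whatever else the implementation needs ---
// public class S3FileStore : IFileStore { ... }

// A consumer that only needs the contract references Storage.Contracts.dll
// and never transitively picks up the implementation's heavy dependencies.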

We used to have quite a number of separate assemblies in our shared code. Over time, we found that we almost invariably referenced these in groups. This made more work for the developers, and we had to hunt to find what assembly a class or interface was in. We ended up combining some of these assemblies based on usage patterns. Life got easier.
There are a lot of considerations here - are you writing a library for developers, are you deploying the DLLs to offsite customers, are you using remoting (thanks, Maximilian Mayerl) or writing WCF services, etc. There is no one right answer - it depends.
In general I agree with Jeff Sternal - don't break up the assemblies unless it offers a proven benefit.

There are pros and cons to the approach, and you will also need to temper the decision with how it best fits into your architectural approach.
On the "pro" side, you can achieve a level of separation to help enforce correct implementations of the interfaces. Consider that if you have junior- or mid-level developer working on implementations, the interfaces themselves can be defined in a project that they only have read access on. Perhaps a senior-level, team lead, or architect is responsible for the design and maintenance of the interfaces. If these interfaces are used on multiple projects, this can help mitigate the risk of unintentional breaking changes on other projects when only working in one. Also, if you work with third party vendors who you distribute an API to, packaging the interfaces is a very good thing to do.
Obviously, there are some downsides. The interface assembly contains no executable code, and some shops I have worked at frown upon shipping an assembly without functionality, regardless of the reason. There is definitely additional overhead. And depending on how you set up your physical file and namespace structure, you might end up with multiple assemblies doing much the same thing (although that is not required).
On a semi-random note, make sure to document your interfaces well. Documentation inheritance from interfaces using GhostDoc is a beautiful thing.

This is a good idea and I appreciate some of the distinctions in the accepted answer. Since both enumerations and especially interfaces are by their very nature dependency-less, they have special properties: they are immune to circular dependencies, and even to the merely complex dependency graphs that make a system "brittle". A co-worker of mine once called a similar technique the "memento pattern" and never failed to point out a useful application of it.
Put an interface into a project that already has many dependencies, and that interface, at least with respect to that project, comes with all of the project's dependencies. Do this often and you're more likely to face circular dependencies; the temptation is then to compensate with patches that wouldn't otherwise be needed.
It's as if coupling interfaces to projects with many dependencies contaminates them. The design intent of interfaces is to decouple, so in most cases it makes little sense to couple them to classes.

Related

Dependency injection: injecting a single class from a large assembly

To preface this: it is likely that we will ultimately end up injecting the majority of the classes implemented in the assembly, but for the purpose of this question I am only interested in a single class.
A colleague is in the process of implementing a new service following a clean architecture pattern, and we were discussing the initial solution/project layout. I noted that a number of projects did not sit under the infrastructure project even though they were responsible for calling external APIs. In my mind, unless we reach a level of complexity that dictates splitting these into separate projects, infrastructure code should remain in a single project. There were two additional projects that each contained a single class implementing an interface defined in core.
The only argument he provided that I felt might carry some weight was on the subject of DI. His comment was that if the class was contained within a large assembly, there would be increased overhead injecting that class into whatever process needed it, compared to loading the same class from an assembly that contained only that class.
Essentially, what I am looking to understand is: if assembly A contained classes A, B, C and D and I only injected class A, would the whole assembly be loaded before instantiating class A?
This got me thinking about whether that is a valid argument, or whether, while potentially a valid point, the overhead is so negligible that you might as well ignore it.
How could I demonstrate what the difference, if any, is between the two scenarios?
You only want to include what should be included and nothing more; you don't ever want to reference an entire assembly just because you wanted to use a single class or a couple of functions.
This will not only violate things like single responsibility and the concept of modularity, it will cause you many more issues down the line if you start doing things like this. Right now it might seem like a small overhead when it's "just one class in just two projects", but that quickly becomes 10 classes in 4 projects with 100 dependencies.
I'm not exactly clear on what you're specifically trying to do, but any project/service/module/component/class should import and include only what's needed for its own functionality and nothing more. If you're using DI, it should inject exactly the resource that's required, nothing more. There are very few edge cases where that rule can be broken, e.g. if following it causes significant cost or there's some other valid reason for violating it.
(I don't know how your assembly libraries work so my answer is purely based on general design.)
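As for demonstrating it empirically: the CLR loads an assembly as a whole the first time any of its types is touched, and then JIT-compiles individual methods on demand. A minimal sketch that makes the loading visible (LargeAssembly.ClassA stands in for a hypothetical type from a large assembly):

using System;

class Program
{
    static void Main()
    {
        // Print every assembly as the runtime loads it.
        AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
            Console.WriteLine("Loaded: " + args.LoadedAssembly.FullName);

        // Touching a single type pulls in its whole assembly exactly once;
        // individual methods are then JIT-compiled only when first called.
        var a = new LargeAssembly.ClassA(); // hypothetical type
        Console.WriteLine(a);
    }
}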

Architecture: Dependency Injection, Loosely Coupled Assemblies, Implementation Hiding

I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose, but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class. However, some books I've been reading while building this solution have spoken against this. The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them). As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
Martin Style
What I've normally seen
I immediately saw the advantage in Martin's diagram, that it allows the lower assemblies to be swapped out for another, given that it has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: If you want to swap out the assembly from an upper layer, you essentially "steal" the interface away that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice; once with a public class and once with an internal class. This way, the public class could merely wrap/decorate the internal class, like this:
Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        // The public class merely wraps the internal implementation.
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that, in case you move the abstractions to a different library, and let both the consuming and the implementing assembly depend on that assembly, those assemblies don't have to depend on each other. This means that it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of its own) however is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
This is a somewhat opinion based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical and you have to weigh the practical value vs costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Would sealed classes help enforce your architecture instead?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way of how to provide/use interfaces. If your answers to the questions indicate additional value by further splitting up/protecting the code that may be fine, too. But you'd have to tell us more about your application domain and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures, but it is very hard to design them as simply as possible. Simpler is better most of the time.
When coming up with an architecture you want to consider those factors upfront otherwise they'll come and haunt you later in the form of technical debt.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes, that are bound to abstractions in your Composition Root, could probably be used in an explicit way somewhere else for some other reasons. I don't see any benefit from hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the assumption that DI and the Composition Root necessarily imply a container behind them.
In fact, the infrastructure can be completely "container-agnostic", in the sense that you still have your dependencies injected but you don't think about "how". A Composition Root that uses a container is one choice; a Composition Root where you manually compose dependencies is just as valid. In other words, the Composition Root should be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not against the idea of a Dependency Injection container.
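A minimal sketch of such a container-less ("Pure DI") Composition Root, reusing the hypothetical IMechanism types from the question:

// Pure DI: the object graph is composed by hand. Only this class knows the
// concrete types; the rest of the code stays container-agnostic.
public static class CompositionRoot
{
    public static MechanismProxy.IMechanism ComposeMechanism()
    {
        return new MechanismImpl.Mechanism();
    }
}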
A short tutorial of mine can possibly shed some light here
http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html

Where should shared interfaces in Prism be placed?

I understand this could be interpreted as an opinion question, but it is technical and a problem I am currently trying to solve.
In the Prism documentation, it is stated that modules should have loose coupling with no direct references, only going through shared interfaces. Like in the following picture:
My issue is, if only a few modules required an IOrdersRepository, the infrastructure is the wrong place for it, as this contains shared code for all of the modules. If I placed the interface in another module, then both modules will need to directly reference that one, breaking the loose coupling.
Should I simply create a library which contains this interface and doesn't follow the module pattern?
Thanks,
Luke
It should definitely be the Infrastructure module. Markus' argument is absolutely right - you shouldn't create a separate assembly for each shared set of interfaces. It's much better to have an Infrastructure module with a lot of interfaces than a lot of modules with a few interfaces in each one. Imagine that one day you find that two of your "sets of interfaces" should use some shared interface! What will you do? Add yet another assembly for those "super-shared" interfaces? Or combine those modules into one? That's wrong, I think.
So - definitely the Infrastructure module!
PS. Imagine if the .NET Framework had thousands of libraries - one for collections, another one for math functions, etc....
UPDATE:
Actually, I use the Infrastructure module mostly for interfaces and very basic DTOs. All other shared code I move to separate assemblies (like YourApplication.UIControls, YourApplication.DAL, etc.). I don't have hard reasons for doing it exactly this way, but this is how I understand Prism's recommendations. Just IMHO.
UPDATE 2:
If you want to share your service that widely, I think it absolutely makes sense to have a structure like:
YourApplication.Infrastructure - "very-shared" interfaces (like IPaymentService)
YourApplication.Modules.PaymentModule - "very-shared" implementation of your PaymentService
YourApplication.WPF.Infrastructure - infrastructure of your WPF application (in addition to YourApplication.Infrastructure)
YourApplication.WPF.Modules.PaymentUI - WPF-specific UI for your YourApplication.Modules.PaymentModule
YourApplication.WebSite.Modules.PaymentUI - UI for the web site
And so on. Your modules will almost always have references to YourApplication.Infrastructure and YourApplication.TYPEOFAPP.Infrastructure, where TYPEOFAPP can be WPF, WebSite, WinService, etc. Or you can name it like YourApplication.Modules.PaymentUI.WPF.

How to implement SOLID principles into an existing project

I apologize for the subjectiveness of this question, but I am a little stuck and I would appreciate some guidance and advice from anyone who's had to deal with this issue before:
I have (what's become) a very large RESTful API project written in C# 2.0, and some of my classes have become monstrous. My main API class is an example of this - with several dozen members and methods (probably approaching hundreds). As you can imagine, it's becoming a small nightmare: not only is maintaining this code a chore, even just navigating it is one.
I am reasonably new to the SOLID principles, and I am a massive fan of design patterns (but I am still at that stage where I can implement them, but not quite enough to know when to use them in situations where it's not so obvious).
I need to break my classes down in size, but I am at a loss of how best to go about doing it. Can my fellow StackOverflow'ers please suggest ways that they have taken existing code monoliths and cut them down to size?
Single Responsibility Principle - A class should have only one reason to change. If you have a monolithic class, then it probably has more than one reason to change. Simply define your one reason to change, and be as granular as reasonable. I would suggest starting "large": refactor one third of the code out into another class. Once you have that, start over with your new class. Going straight from one class to 20 is too daunting.
Open/Closed Principle - A class should be open for extension, but closed for change. Where reasonable, mark your members and methods as virtual or abstract. Each item should be relatively small in nature, and give you some base functionality or definition of behavior. However, if you need to change the functionality later, you will be able to add code, rather than change code to introduce new/different functionality.
Liskov Substitution Principle - A class should be substitutable for its base class. The key here, in my opinion, is to do inheritance correctly. If you have a huge case statement, or two pages of if statements that check the derived type of the object, then you're violating this principle and need to rethink your approach.
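A hypothetical before/after sketch of that smell and its fix (the Account types are invented):

namespace Before
{
    public abstract class Account { }
    public class PremiumAccount : Account { }
    public class StandardAccount : Account { }

    public static class FeeCalculator
    {
        // Checking derived types like this signals an LSP violation.
        public static decimal GetFee(Account account)
        {
            if (account is PremiumAccount) return 0m;
            if (account is StandardAccount) return 5m;
            throw new System.ArgumentException("Unknown account type");
        }
    }
}

namespace After
{
    // The varying behavior moves onto the hierarchy, so any subtype
    // substitutes cleanly for the base class.
    public abstract class Account
    {
        public abstract decimal Fee { get; }
    }

    public class PremiumAccount : Account { public override decimal Fee => 0m; }
    public class StandardAccount : Account { public override decimal Fee => 5m; }
}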
Interface Segregation Principle - In my mind, this principle closely resembles the Single Responsibility principle. It just applies specifically to a high level (or mature) class/interface. One way to use this principle in a large class is to make your class implement an empty interface. Next, change all of the types that use your class to be the type of the interface. This will break your code. However, it will point out exactly how you are consuming your class. If you have three instances that each use their own subset of methods and properties, then you now know that you need three different interfaces. Each interface represents a collective set of functionality, and one reason to change.
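A hypothetical sketch of that empty-interface technique (the API and member names are invented):

// Step 1: the monolithic class implements a new, deliberately empty interface.
public interface IOrderApi { }

public class MonolithicApi : IOrderApi
{
    public void PlaceOrder() { /* ... */ }
    public void CancelOrder() { /* ... */ }
    public string RenderReport() { return ""; }
}

// Step 2: change consumers to declare IOrderApi instead of MonolithicApi.
// Compilation now fails exactly where members are used, revealing which
// subset each consumer needs - i.e. the smaller interfaces to extract:
public interface IOrderCommands
{
    void PlaceOrder();
    void CancelOrder();
}

public interface IReportRenderer
{
    string RenderReport();
}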
Dependency Inversion Principle - The parent / child allegory made me understand this. Think of a parent class. It defines behavior, but isn't concerned with the dirty details. It's dependable. A child class, however, is all about the details, and can't be depended upon because it changes often. You always want to depend upon the parent, responsible classes, and never the other way around. If you have a parent class depending upon a child class, you'll get unexpected behavior when you change something. In my mind, this is the same mindset of SOA. A service contract defines inputs, outputs, and behavior, with no details.
Of course, my opinions and understandings may be incomplete or wrong. I would suggest learning from people who have mastered these principles, like Uncle Bob. A good starting point for me was his book, Agile Principles, Patterns, and Practices in C#. Another good resource was Uncle Bob on Hanselminutes.
Of course, as Joel and Jeff pointed out, these are principles, not rules. They are to be tools to help guide you, not the law of the land.
EDIT:
I just found these SOLID screencasts which look really interesting. Each one is approximately 10-15 minutes long.
There's a classic book by Martin Fowler - Refactoring: Improving the Design of Existing Code.
There he provides a set of design techniques and examples of decisions to make your existing codebase more manageable and maintainable (which is what the SOLID principles are all about). Even though there are some standard routines in refactoring, it is a very custom process and one solution cannot be applied to all projects.
Unit testing is one of the cornerstones of this process succeeding. You need to cover your existing codebase with enough test coverage to be sure you don't break stuff while changing it. In fact, using a modern unit testing framework with mocking support will encourage you toward better design.
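As a hedged illustration of such a test, assuming xUnit and Moq (the types are invented): a characterization test that pins current behavior before refactoring.

using Moq;
using Xunit;

public interface IRateProvider
{
    decimal GetRate(string currencyCode);
}

public class PriceCalculator
{
    private readonly IRateProvider _rates;

    public PriceCalculator(IRateProvider rates)
    {
        _rates = rates;
    }

    public decimal Total(decimal amount, string currencyCode)
    {
        return amount * _rates.GetRate(currencyCode);
    }
}

public class PriceCalculatorTests
{
    [Fact]
    public void Total_multiplies_amount_by_rate()
    {
        // The mock stands in for the real rate source, so the test
        // exercises PriceCalculator in isolation.
        var rates = new Mock<IRateProvider>();
        rates.Setup(r => r.GetRate("EUR")).Returns(1.1m);

        var calculator = new PriceCalculator(rates.Object);

        Assert.Equal(11.0m, calculator.Total(10m, "EUR"));
    }
}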
There are tools like ReSharper (my favorite) and CodeRush to assist with tedious code changes. But those usually handle trivial mechanical stuff; making design decisions is a much more complex process with far less tool support. Using class diagrams and UML helps. That's what I would start from, actually: try to make sense of what is already there and bring some structure to it. Then from there you can make decisions about decomposition and relations between different components and change your code accordingly.
Hope this helps and happy refactoring!
It will be a time consuming process. You need to read the code and identify parts that do not meet the SOLID principles and refactor into new classes. Using a VS add-in like Resharper (http://www.jetbrains.com) will assist with the refactoring process.
Ideally you will have good coverage of automated unit tests so that you can ensure your changes do not introduce problems with the code.
More Information
In the main API class, you need to identify methods that relate to each other and create a class that more specifically represents the actions those methods perform.
e.g.
Let's say I had an Address class with separate variables containing street number, name, etc. This class is responsible for inserting, updating, deleting, etc. If I also needed to format an address a specific way for a postal address, I could have a method called GetFormattedPostalAddress() that returns the formatted address.
Alternatively, I could refactor this method into a class called AddressFormatter that takes an Address in its constructor and has a get-property called PostalAddress that returns the formatted address.
The idea is to separate different responsibilities into separate classes.
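A rough sketch of that refactoring (members are illustrative):

public class Address
{
    public string StreetNumber { get; set; }
    public string StreetName { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
    // Persistence (Insert/Update/Delete) stays here or, better, moves to a
    // separate repository class.
}

public class AddressFormatter
{
    private readonly Address _address;

    public AddressFormatter(Address address)
    {
        _address = address;
    }

    // Formatting is now a responsibility separate from persistence.
    public string PostalAddress =>
        $"{_address.StreetNumber} {_address.StreetName}\n{_address.City} {_address.PostalCode}";
}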
What I've done when presented with this type of thing (and I'll readily admit that I haven't used SOLID principles before, but from what little I know of them, they sound good) is to look at the existing codebase from a connectivity point of view. Essentially, by looking at the system, you should be able to find some subset of functionality that is internally highly coupled (many frequent interactions) but externally loosely coupled (few infrequent interactions). Usually, there are a few of these pieces in any large codebase; they are candidates for excision. Once you've identified your candidates, you have to enumerate the points at which they are externally coupled to the system as a whole. This should give you a good idea of the level of interdependency involved - usually a fair bit. Evaluate the subsets and their connection points for refactoring; frequently (but not always) a couple of clear structural refactorings emerge that can increase the decoupling. With an eye on those refactorings, use the existing couplings to define the minimal interface required to allow the subsystem to work with the rest of the system. Look for commonalities in those interfaces (frequently, you find more than you'd expect!). And finally, implement the changes you've identified.
The process sounds terrible, but in practice, it's actually pretty straightforward. Mind you, this is not a roadmap towards getting to a completely perfectly designed system (for that, you'd need to start from scratch), but it very certainly will decrease the complexity of the system as a whole and increase the code comprehensibility.
OOD - Object-Oriented Design
SOLID - class design principles:
Single Responsibility Principle (SRP) - introduced by Uncle Bob. A method, class, or module should be responsible for only a single thing (one task).
Open/Closed Principle (OCP) - introduced by Bertrand Meyer. A method, class, or module should be open for extension and closed for modification. Use the power of inheritance, abstraction, polymorphism, extension, and wrappers.
Liskov Substitution Principle (LSP) - introduced by Barbara Liskov and Jeannette Wing. A subtype can replace its supertype without side effects.
Interface Segregation Principle (ISP) - introduced by Uncle Bob. Your interfaces should be as small as possible.
Dependency Inversion Principle (DIP) - introduced by Uncle Bob. A high-level class or layer should not depend on a low-level class or layer; when you have an aggregation dependency, you should rather depend on some abstraction/interface.
There are also six principles about packages/modules (.jar, .aar, .framework):
What to put inside a package:
The Release/Reuse Equivalency Principle
The Common Closure Principle
The Common Reuse Principle
Couplings between packages:
The Acyclic Dependencies Principle
The Stable Dependencies Principle
The Stable Abstractions Principle
See also: Protocol-Oriented Programming (POP).

Interfaces in Class Files

Should my interface and concrete implementation of that interface be broken out into two separate files?
If you want other classes to implement that interface, it would probably be a good idea, if only for cleanliness. Anyone looking at your interface should not have to look at your implementation of it every time.
If there is only one implementation: why the interface?
If there is more than one implementation: where do you put the others?
If by different files you mean different xxx.cs files within your assembly, then normally, per my own practice, I would say yes - but this comes down to the house standards you use. If you're just programming for yourself, then I would say this is good coding practice; it keeps everything clean and easy to read. The smaller the blocks of code in any given file, the easier something is to follow (within reason). Obviously you can start getting into partial classes, where things can get ridiculous if you don't keep a rein on it.
As a rule, I keep my projects in a logical folder structure where portions of the project might be allocated into folders DAL or BM and within there I might have a number of logically named folders which each contain a number of files: one interface, one implementation and any helper classes specific to those.
However, all that said, your team/in-house best practices should be adopted if you're working within a team of developers.
Separate files... FTW! You might even want to create separate projects/assemblies depending on how extensible your code is. At the very least it should probably be in a separate namespace.
The whole point of an interface is so that the code that uses the interface doesn't care about the implementation. Therefore they should be as loosely associated as possible, which they won't be if they are in the same file.
But as #balabaster notes, it depends on what your team's practices (although they are not always "best practices") are.
Yes. And for splitting a single class across multiple files, there are partial classes - take a look at the documentation.
General rule of thumb: yes. An interface may be implemented by many classes, and it is cleaner and easier to manage when they are clearly in separate files.
What's more, depending on the level of separation and isolation your application is going to take, you may even want to place your interfaces in their own project. Then consuming projects would reference the interface project instead of each and every assembly that carries implementations of those interfaces.
Yes - even if one offers counterarguments such as there being only one implementation, or foreseeing only one implementation for a long time, or being the only user/developer, etc. If there were multiple implementations, multiple users, etc., it would be obvious that you would want to keep them in separate files. So why treat the single-implementation case any differently?
