Is this a misuse of a singleton? - C#

I've read all about singletons and how they pose the threat of being an anti-pattern if misused. So I wanted to get a second opinion as to whether or not this is a misuse of the pattern.
Essentially I have 5 repositories. All they do is store data. In reality, all of the data they store is closely related; I've only split it into 5 different repositories so that the classes stay short and easy to pick through. I know that if I make each of these repositories a singleton, I can say goodbye to any maintainable unit tests. However, I had this idea: make each repository a normal class, and then make one singleton that simply stores a single copy of each of the repositories.
This way I can fulfill the requirement that there is only one central location for the data per instance of my program, but I can also unit test each repository in the operations it needs to perform.

Sure, you may be able to unit-test repositories, but you will certainly have a hard time unit testing all the other code which depends on these repositories. Singleton couples the calling code to the one-and-only implementation through direct access scattered all over your code base. And even if there really is a single copy of your data, i.e. an in-memory database of some kind, there is no reason to let other layers know this.
Also, by "making them a simple class" (I presume you mean, by exposing a public constructor?), you are defeating the whole point of a singleton, i.e. the notion that no other code can instantiate a different instance.
If repositories are a dependency for a certain class, then simply pass them using constructor injection and make your testing easier. This will allow you to easily mock each repository when testing classes from the domain layer.
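For example, a rough sketch of what that constructor injection looks like (ICustomerRepository and OrderService are made-up names standing in for one of your repositories and a class that depends on it):

public class Customer
{
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class OrderService
{
    private readonly ICustomerRepository _customers;

    // The repository arrives through the constructor, so a unit test
    // can pass in a fake implementation instead of the real data store.
    public OrderService(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public string DescribeOwner(int customerId)
    {
        return _customers.GetById(customerId).Name;
    }
}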

What constitutes misuse of a singleton (or its overall usefulness at all) is one of those religious arguments that has no clear-cut answer.
However, you have hit on one of the major drawbacks of the singleton pattern, and that is that it complicates unit tests. Worse, it generally degrades the overall maintainability of your code.
Consider this possible replacement:
public interface IRepositoryContainer
{
    IRepository GetRepository<T>() where T : IRepository;
}

public partial class App : Application, IRepositoryContainer
{
    private List<IRepository> repositories =
        new List<IRepository>() { new MyRepository() };

    public IRepository GetRepository<T>()
        where T : IRepository
    {
        // Find the single registered repository of the requested type.
        return repositories.Where(t => t is T).SingleOrDefault();
    }
}
Then, where before you would have called the singleton, you instead write:
IRepository repo = (Application.Current as IRepositoryContainer).GetRepository<MyRepository>();
Just a thought: there is already only one instance of the application, and interfaces can help you use that to solve this problem.
(yes, the example code above is WPF, but the principles should apply)

Related

Interface inheritance to break up god objects?

I work on a fairly large product. It's been in development since .Net 1.0 was still a work-in-progress and so it has a lot of bad quality code and was not written with unit tests in mind. Now we're trying to improve the quality and implement tests for each feature and bug fix. One of the biggest problems we're having now is dependency hell and god objects. There is one god object in particular that's bad: Session. Basically, anything related to the current session of the program is in this object. There are also a few other god objects.
Anyway, we've made this god object "mockable" by using Resharper to extract an interface from it. However, this still makes it hard to test, because most of the time you have to look at the code you're writing to figure out what really needs to be mocked out of the 100 different methods and properties.
Just splitting this class is out of the question right now because there are literally hundreds if not thousands of references to this class.
Because I have an interface (and nearly all code has been refactored to use the interface), though, I had an interesting idea: what if I made the ISession interface inherit from other, smaller interfaces?
For instance, if we had something like this:
interface IBar
{
    string Baz { get; set; }
}

interface IFoo
{
    string Biz { get; set; }
}

interface ISession : IFoo, IBar
{
}
In this way, existing code using ISession wouldn't have to be updated, nor would the actual implementation. But in new code we write and refactor, we can depend on the more granular IFoo or IBar interfaces while still passing in an ISession.
I also see this as probably making it easier to eventually break up the actual ISession/Session god interface and object.
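For instance (a rough sketch; BazFormatter is a made-up name), a newly written consumer could ask only for the role it actually needs:

class BazFormatter
{
    // Depends only on the IBar role, not on the whole session.
    public string Format(IBar bar)
    {
        return "Baz = " + bar.Baz;
    }
}

// Call sites that only have an ISession can still pass it straight in,
// because ISession inherits IBar:
//     string text = new BazFormatter().Format(currentSession);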
Now to you. Is this a good way of testing against these god objects and eventually breaking them up? Is this a documented approach and/or design pattern? Have you ever done anything like this?
From my standpoint this is a good and right approach. Later you can inject more specific service instances as IFoo/IBar rather than ISession. I would say this is a good intermediate step before further refactoring extracts the god class into many specific services.
Some pros I see:
First stage: extract interfaces
A super (god) class type is abstracted by interfaces
Code becomes less coupled since it relies on single-responsibility interfaces (services)
You have ground to move further and split the god class into many services without massive refactoring, since everything already relies on the interface APIs
Second stage: split the god class into many small services, keeping the Single Responsibility Principle in mind
Third stage: restructure existing unit tests so they are grouped per service type rather than all together around the god class

Is this a bad use of a static property?

If I have a class with a service that I want all derived classes to have access to (say a security object, or a repository) then I might do something like this:
public abstract class A
{
    static ISecurity _security;
    public ISecurity Security { get { return _security; } }
    public static void SetSecurity(ISecurity security) { _security = security; }
}
public class Bootstrapper
{
    public Bootstrapper()
    {
        A.SetSecurity(new Security());
    }
}
It seems like lately I see static properties being shunned everywhere as something to absolutely avoid. To me, this seems cleaner than adding an ISecurity parameter to the constructor of every single derived class I make. Given all I've read lately though, I'm left wondering:
Is this an acceptable application of dependency injection, or am I violating some major design principle that could come back to haunt me later? I am not doing unit tests at this point, so maybe if I were I would suddenly realize the answer to my question. To be honest, I probably won't change my design over that alone, but if there is some other important reason why I should change it then I very well might.
Edit: I made a couple stupid mistakes the first time I wrote that code... it's fixed now. Just thought I'd point that out in case anyone happened to notice :)
Edit: SWeko makes a good point about all deriving classes having to use the same implementation. In cases where I've used this design, the service is always a singleton so it effectively enforces an already existing requirement. Naturally, this would be a bad design if that weren't the case.
This design could be problematic for a couple of reasons.
You already mention unit testing, which is rather important. Such a static dependency can make testing much harder. Whenever the fake ISecurity has to be anything other than a Null Object implementation, you will find yourself having to remove the fake implementation during test tear-down, so that other tests aren't influenced when you forget to clean up that fake object. Tear-down code makes your tests more complicated. Not much more complicated, but it adds up when many tests have tear-down code, and you'll have a hard time finding a bug in your test suite when one test forgets to run its tear-down. You will also have to make sure the registered ISecurity fake object is thread-safe and won't influence other tests that might run in parallel (test frameworks such as MSTest run tests in parallel for obvious performance reasons).
Another possible problem with injecting the dependency as a static is that you force this ISecurity dependency to be a singleton (and probably to be thread-safe). This prevents you, for instance, from applying interceptors or decorators that have a lifestyle other than singleton.
Another problem is that removing this dependency from the constructor disables any analysis or diagnostics that could be done by the DI framework on your behalf. Since you manually set this dependency, the framework has no knowledge about this dependency. In a sense you move the responsibility of managing dependencies back to the application logic, instead of allowing the Composition Root to be in control over the way dependencies are wired together. Now the application has to know that ISecurity is in fact thread-safe. This is a responsibility that in general belongs to the Composition Root.
The fact that you want to store this dependency in a base type might even be an indication of a violation of a general design principle: the Single Responsibility Principle (SRP). It has some resemblance to a design mistake I made myself in the past. I had a set of business operations that all inherited from a base class. This base class implemented all sorts of behavior, such as transaction management, logging, audit trailing, fault tolerance, and... security checks. It became an unmanageable God Object, simply because it had too many responsibilities; it violated the SRP. Here's my story if you want to know more about this.
So instead of having this security concern (it's probably a cross-cutting concern) implemented in a base class, try removing the base class altogether and use a decorator to add security to those classes. You can wrap each class with one or more decorators, and each decorator can handle one specific concern. This keeps each decorator class easy to follow because each one follows the SRP.
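To make that concrete, here is a minimal sketch of such a decorator. IOrderProcessor, Order, and the IsAllowed member on ISecurity are names invented for the example; your real interfaces will differ.

using System;

public class Order { }

public interface IOrderProcessor
{
    void Process(Order order);
}

// Decorator that wraps the real processor and adds the security check.
public class SecureOrderProcessorDecorator : IOrderProcessor
{
    private readonly IOrderProcessor _decoratee;
    private readonly ISecurity _security;   // injected, not pulled from a static

    public SecureOrderProcessorDecorator(IOrderProcessor decoratee, ISecurity security)
    {
        _decoratee = decoratee;
        _security = security;
    }

    public void Process(Order order)
    {
        // IsAllowed is an assumed member; substitute your real check.
        if (!_security.IsAllowed("ProcessOrder"))
            throw new UnauthorizedAccessException("Not allowed to process orders.");

        _decoratee.Process(order);
    }
}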
The problem is that this is not really dependency injection, even if it is encapsulated in the definition of the class. Admittedly,
static Security _security;
would be worse than the ISecurity version, but still, the instances of A do not get to use whatever security the caller passes to them; they have to depend on the global setting of a static property.
What I'm trying to say is that your usage is not that different from:
public static class Globals
{
    public static ISecurity Security { get; set; }
}

How do I mock this?

In a .NET Windows app, I have a class named EmployeeManager. On instantiation, this class loads the employees that haven't completed registration from the database into a List. I'd like to use EmployeeManager in a unit test. However, I don't want to involve the database.
From what I understand about this scenario, I need an IEmployeeManager interface, which is only used for testing purposes. This doesn't seem right since the interface has no other use. However, it will allow me to create some EmployeeManager test class that loads employees without involving the database. This way, I can assign values that would have otherwise come from the database.
Is the above correct and do I need to Mock it? Mocking (Moq framework) seems to use lots of code just to do simple things such as assigning a property. I don't get the point. Why mock when I can just create a simple test class from IEmployeeManager that will provide what I need?
Inversion of control is your solution, not Mock objects. Here's why:
You mock the interface to make sure that some code that utilizes your IEmployeeManager is using it properly. You aren't using the test code to prove IEmployeeManager works. So there has to be another class that takes an IEmployeeManager, for instance, which you will actually be testing with your mock object.
If you are actually just testing EmployeeManager, you can do much better. Consider dependency injection. In this manner, you will expose a constructor for EmployeeManager that takes at least one parameter typed as an interface. Your EmployeeManager code will internally use this interface for any implementation-specific calls it needs to make.
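A rough sketch of what that could look like; IEmployeeStore and its LoadUnregisteredEmployees method are names invented for the example, and Employee is assumed to be your existing domain type:

using System.Collections.Generic;

// Invented abstraction over the database access.
public interface IEmployeeStore
{
    IList<Employee> LoadUnregisteredEmployees();
}

public class EmployeeManager
{
    private readonly List<Employee> _employees;

    // The store is injected, so a test can supply an in-memory implementation
    // and the database never gets involved.
    public EmployeeManager(IEmployeeStore store)
    {
        _employees = new List<Employee>(store.LoadUnregisteredEmployees());
    }

    public int EmployeeCount
    {
        get { return _employees.Count; }
    }
}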
See Strategy Pattern
This will lead you into a whole, exciting world of Inversion of Control. And as you dig into that, you will find that problems like these have been effectively solved with IoC containers such as AutoFac, Ninject, and Structure Map, to name a few.
Mocking interfaces is great, and you can mock an interface that you then pass into IoC. But you'll find that IoC is a much more robust solution to your problem. And yes, while you might only be implementing a second alternative just for testing, it is still important to do for that very reason -- separating the strategy under test from the business logic of EmployeeManager.
From what I understand about this scenario, I need an IEmployeeManager interface, which is only used for testing purposes. This doesn't seem right since the interface has no other use.
It's well worth creating the interface. Note also that the interface actually has multiple purposes:
The interface identifies roles or responsibilities provided by an actor. In this case, the interface identifies the roles and responsibilities of the EmployeeManager. By using an interface you're preventing an accidental dependency on something database specific.
The interface reduces coupling. Since your application won't depend on the EmployeeManager, you're free to swap out its implementation without needing to recompile the rest of the application. Of course, this depends on project structure, number of assemblies, etc., but it nevertheless allows this type of reuse.
The interface promotes testability. When you use an interface it becomes much easier to generate dynamic proxies that allow your software to be more easily tested.
The interface forces thought1. Ok, I kind of already alluded to it, but it's worth saying again. Just using an interface alone should make you think about an object's roles and responsibilities. An interface shouldn't be a kitchen sink. An interface represents a cohesive set of roles and responsibilities. If an interface's methods aren't cohesive or aren't almost always used together then it's likely that an object has multiple roles. Though not necessarily bad, it implies that multiple distinct interfaces are better. The larger an interface the harder it is to make it covariant or contravariant and, therefore, more malleable in code.
However, it will allow me to create some EmployeeManager test class that loads employees without involving the database.... I don't get the point. Why mock when I can just create a simple test class from IEmployeeManager that will provide what I need?
As one poster pointed out, it sounds like you're talking about creating a stub test class. Mocking frameworks can be used to create stubs, but one of the most important features about them is that they allow you to test behavior instead of state. Now let's look at some examples. Assume the following:
interface IEmployeeManager {
    void AddEmployee(ProspectiveEmployee e);
    void RemoveEmployee(Employee e);
}

class HiringOfficer {
    private readonly IEmployeeManager manager;

    public HiringOfficer(IEmployeeManager manager) {
        this.manager = manager;
    }

    public void HireProspect(ProspectiveEmployee e) {
        manager.AddEmployee(e);
    }
}
When we test the HiringOfficer's HireProspect behavior, we're interested in validating that he correctly communicated to the employee manager that this prospective employee should be added as an employee. You'll often see something like this:
// you have an interface IEmployeeManager and a stub class
// called TestableEmployeeManager that implements IEmployeeManager
// and is pre-populated with test data
[Test]
public void HiringOfficerAddsProspectiveEmployeeToDatabase() {
    var manager = new TestableEmployeeManager();              // Arrange
    var officer = new HiringOfficer(manager);                 // BTW: poor example of real-world DI
    var prospect = CreateProspect();
    Assert.AreEqual(4, manager.EmployeeCount());

    officer.HireProspect(prospect);                           // Act

    Assert.AreEqual(5, manager.EmployeeCount());              // Assert
    Assert.AreEqual("John", manager.Employees[4].FirstName);
    Assert.AreEqual("Doe", manager.Employees[4].LastName);
    //...
}
The above test is reasonable... but not good. It's a state-based test. That is, it verifies the behavior by checking the state before and after some action. Sometimes this is the only way to test things; sometimes it's the best way to test something.
But, testing behavior is often better, and this is where mocking frameworks shine:
// using Moq for mocking
[Test]
public void HiringOfficerCommunicatesAdditionOfNewEmployee() {
    var mockEmployeeManager = new Mock<IEmployeeManager>();                 // Arrange
    var officer = new HiringOfficer(mockEmployeeManager.Object);
    var prospect = CreateProspect();

    officer.HireProspect(prospect);                                         // Act

    mockEmployeeManager.Verify(m => m.AddEmployee(prospect), Times.Once);   // Assert
}
In the above we tested the only thing that really mattered -- that the hiring officer communicated to the employee manager that a new employee needed to be added (once, and only once... though I actually wouldn't bother checking the count in this case). Not only that, I validated that the employee that I asked the hiring officer to hire was added by the employee manager. I've tested the critical behavior. I didn't need even a simple test stub. My test was shorter. The actual behavior was much more evident -- it becomes possible to see the interaction and validate interaction between objects.
It is possible to make your stub test class record interactions, but then you're emulating the mocking frameworks. If you're going to test behavior -- use a mocking framework.
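For comparison, here is roughly what such a hand-rolled recording fake looks like, reusing the IEmployeeManager interface from the example above; once you write this, you have re-implemented a small slice of what Moq already gives you.

using System.Collections.Generic;

// Hand-written fake that records interactions.
class RecordingEmployeeManager : IEmployeeManager
{
    public readonly List<ProspectiveEmployee> Added = new List<ProspectiveEmployee>();

    public void AddEmployee(ProspectiveEmployee e) { Added.Add(e); }
    public void RemoveEmployee(Employee e) { }
}

// In the test:
//     var fake = new RecordingEmployeeManager();
//     new HiringOfficer(fake).HireProspect(prospect);
//     Assert.AreEqual(1, fake.Added.Count);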
As another poster mentioned, dependency injection (DI) and inversion of control (IoC) are important. My example above isn't a good example of this, but both should be carefully considered and judiciously used. There's a lot of writing on the subject available.
1 - Yes, thinking is still optional, but I'd strongly recommend it ;).
Creating an IEmployeeManager interface in order to be able to mock it is the way most .NET developers would go about making such a class testable.
Another option is to inherit from EmployeeManager and override the database-loading behavior so that the test does not involve the database; this too means you will need to change your design.
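Very roughly, and assuming you are willing to make the loading step a protected virtual method (and that Employee has a parameterless constructor), that looks like this:

using System.Collections.Generic;

public class EmployeeManager
{
    protected readonly List<Employee> Employees = new List<Employee>();

    public EmployeeManager()
    {
        LoadEmployees();   // note: virtual call from a constructor, use with care
    }

    // Virtual so a test subclass can replace the database access.
    protected virtual void LoadEmployees()
    {
        // real implementation queries the database here
    }
}

// Test double that supplies canned data instead of hitting the database.
class TestableEmployeeManager : EmployeeManager
{
    protected override void LoadEmployees()
    {
        Employees.Add(new Employee());
    }
}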
Extracting interface in your scenario is a good idea. I would not worry too much about the fact that you only need this for testing. Extracting this interface makes your code decoupled from database. After that you will have a choice between writing your own implementation for testing or use mocking framework to generate this implementation for you. This is a matter of personal preference. It depends on how familiar you are with mocking framework and whether you want to spend time learning new syntax.
In my opinion it is worth learning. It will save you a lot of typing. Mocking frameworks are also flexible and don't always require an interface to generate a test implementation. RhinoMocks, for example, can mock concrete classes as long as they have a parameterless constructor and virtual methods. Another advantage is that mocking APIs use consistent naming, so you will get familiar with 'Mocks', 'Stubs', etc. In your scenario, by the way, you need a stub, not a mock. Writing an actual mock manually may be more labor-intensive than using a framework.
The danger with mocking frameworks is that some of them are so powerful that they can mock pretty much anything, including private fields (TypeMock). In other words, they are too forgiving of design mistakes and allow you to write very tightly coupled code.
This is a good read on the subject of hand-written vs. generated stubs.
By making your classes implement interfaces, you are not only making them more testable, you're also making your application more flexible and maintainable. Your statement "this doesn't seem right since the interface has no other use" is flawed, because the interface does have another use: it allows you to loosely couple your classes.
If I could suggest a couple of books, Head First Design Patterns and Head First Software Development will do a much better job of explaining the concepts than I could in an SO answer.
If you don't want to use a mocking framework like Moq, it's simple enough to roll your own mocks/stubs; here is a quick blog post on it: Rolling your own Mock Objects.

What do programmers mean when they say, "Code against an interface, not an object."?

I've started the very long and arduous quest to learn and apply TDD to my workflow. I'm under the impression that TDD fits in very well with IoC principles.
After browsing some of TDD tagged questions here in SO, I read it's a good idea to program against interfaces, not objects.
Can you provide simple code examples of what this is, and how to apply it in real use cases? Simple examples is key for me (and other people wanting to learn) to grasp the concepts.
Consider:
class MyClass
{
//Implementation
public void Foo() {}
}
class SomethingYouWantToTest
{
public bool MyMethod(MyClass c)
{
//Code you want to test
c.Foo();
}
}
Because MyMethod accepts only a MyClass, if you want to replace MyClass with a mock object in order to unit test, you can't. Better is to use an interface:
interface IMyClass
{
    void Foo();
}

class MyClass : IMyClass
{
    //Implementation
    public void Foo() { }
}

class SomethingYouWantToTest
{
    public bool MyMethod(IMyClass c)
    {
        //Code you want to test
        c.Foo();
        return true; // placeholder so the snippet compiles
    }
}
Now you can test MyMethod, because it uses only an interface, not a particular concrete implementation. Then you can implement that interface to create any kind of mock or fake that you want for test purposes. There are even libraries like Rhino Mocks' Rhino.Mocks.MockRepository.StrictMock<T>(), which take any interface and build you a mock object on the fly.
It's all a matter of intimacy. If you code to an implementation (a realized object) you are in a pretty intimate relationship with that "other" code, as a consumer of it. It means you have to know how to construct it (ie, what dependencies it has, possibly as constructor params, possibly as setters), when to dispose of it, and you probably can't do much without it.
An interface in front of the realized object lets you do a few things -
For one, you can/should leverage a factory to construct instances of the object. IoC containers do this very well for you, or you can make your own. With construction duties outside of your responsibility, your code can just assume it is getting what it needs. On the other side of the factory wall, you can construct either real instances or mock instances of the class. In production you would use real instances, of course, but for testing you may want to create stubbed or dynamically mocked instances to test various system states without having to run the system.
You don't have to know where the object is. This is useful in distributed systems where the object you want to talk to may or may not be local to your process or even system. If you ever programmed Java RMI or old skool EJB you know the routine of "talking to the interface" that was hiding a proxy that did the remote networking and marshalling duties that your client didn't have to care about. WCF has a similar philosophy of "talk to the interface" and let the system determine how to communicate with the target object/service.
** UPDATE **
There was a request for an example of an IOC Container (Factory). There are many out there for pretty much all platforms, but at their core they work like this:
You initialize the container in your application's startup routine. Some frameworks do this via config files, code, or both.
You "register" the implementations that you want the container to create for you as a factory for the interfaces they implement (e.g. register MyServiceImpl for the Service interface). During this registration process there is typically some behavioral policy you can provide, such as whether a new instance is created each time or a single(ton) instance is used.
When the container creates objects for you, it injects any dependencies into those objects as part of the creation process (i.e., if your object depends on another interface, an implementation of that interface is in turn provided, and so on).
Pseudo-codishly it could look like this:
IocContainer container = new IocContainer();
//Register my impl for the Service Interface, with a Singleton policy
container.RegisterType(Service, ServiceImpl, LifecyclePolicy.SINGLETON);
//Use the container as a factory
Service myService = container.Resolve<Service>();
//Blissfully unaware of the implementation, call the service method.
myService.DoGoodWork();
When programming against an interface you will write code that uses an instance of an interface, not a concrete type. For instance, you might use the following pattern, which incorporates constructor injection. Constructor injection and other parts of inversion of control aren't required in order to program against interfaces; however, since you're coming from the TDD and IoC perspective, I've wired it up this way to give you some context you're hopefully familiar with.
public class PersonService
{
    private readonly IPersonRepository repository;

    public PersonService(IPersonRepository repository)
    {
        this.repository = repository;
    }

    public IList<Person> PeopleOverEighteen
    {
        get
        {
            return (from e in repository.Entities where e.Age > 18 select e).ToList();
        }
    }
}
The repository object is passed in and is an interface type. The benefit of passing in an interface is the ability to 'swap out' the concrete implementation without changing the usage.
For instance one would assume that at runtime the IoC container will inject a repository that is wired to hit the database. During testing time, you can pass in a mock or stub repository to exercise your PeopleOverEighteen method.
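As a rough sketch (assuming IPersonRepository exposes Entities as an IQueryable<Person> and that Person has a settable Age property, which the query above implies), a test-time stub might look like this:

using System.Collections.Generic;
using System.Linq;

// Stub with canned data; no database involved.
class StubPersonRepository : IPersonRepository
{
    public IQueryable<Person> Entities
    {
        get
        {
            return new List<Person>
            {
                new Person { Age = 25 },
                new Person { Age = 12 }
            }.AsQueryable();
        }
    }
}

// In a test:
//     var service = new PersonService(new StubPersonRepository());
//     Assert.AreEqual(1, service.PeopleOverEighteen.Count);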
It means think generic. Not specific.
Suppose you have an application that notifies the user by sending them a message. If you work against an interface, IMessage for example:
interface IMessage
{
    void Send();
}
you can customize, per user, the way they receive the message. For example, somebody wants to be notified with an email, so your IoC container creates an EmailMessage concrete class. Somebody else wants SMS, and you create an instance of SMSMessage.
In all these cases the code for notifying the user never changes, even if you add another concrete class.
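A minimal sketch of those two implementations (the delivery details are obviously elided, and how the instance is resolved depends on your IoC container):

class EmailMessage : IMessage
{
    public void Send()
    {
        // deliver via SMTP - details omitted
    }
}

class SMSMessage : IMessage
{
    public void Send()
    {
        // deliver via an SMS gateway - details omitted
    }
}

// The notifying code only ever deals with IMessage:
//     IMessage message = ...; // resolved per user by your IoC container
//     message.Send();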
The big advantage of programming against interfaces when performing unit testing is that it allows you to isolate a piece of code from any dependencies you want to test separately or simulate during the testing.
An example I've mentioned here before somewhere is the use of an interface to access configuration values. Rather than looking directly at ConfigurationManager you can provide one or more interfaces that let you access config values. Normally you would supply an implementation that reads from the config file but for testing you can use one that just returns test values or throws exceptions or whatever.
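For example (a sketch; IAppSettings and its member names are made up), you might wrap ConfigurationManager like this:

using System.Collections.Generic;

// Made-up abstraction over configuration access.
public interface IAppSettings
{
    string Get(string key);
}

// Production implementation: reads the application's config file.
public class ConfigFileSettings : IAppSettings
{
    public string Get(string key)
    {
        return System.Configuration.ConfigurationManager.AppSettings[key];
    }
}

// Test implementation: returns canned values, or could throw to simulate failures.
public class FakeSettings : IAppSettings
{
    private readonly IDictionary<string, string> _values;

    public FakeSettings(IDictionary<string, string> values)
    {
        _values = values;
    }

    public string Get(string key)
    {
        return _values[key];
    }
}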
Consider also your data access layer. Having your business logic tightly coupled to a particular data access implementation makes it hard to test without having a whole database handy with the data you need. If your data access is hidden behind interfaces you can supply just the data you need for the test.
Using interfaces increases the "surface area" available for testing allowing for finer grained tests that really do test individual units of your code.
Test your code like someone who would use it after reading the documentation. Do not test anything based on knowledge you have because you have written or read the code. You want to make sure that your code behaves as expected.
In the best case you should be able to use your tests as examples, doctests in Python are a good example for this.
If you follow these guidelines changing the implementation shouldn't be an issue.
Also, in my experience, it is good practice to test each "layer" of your application. You will have atomic units, which in themselves have no dependencies, and you will have units which depend on other units, until you eventually get to the application, which is itself a unit.
You should test each layer; do not rely on the fact that by testing unit A you also test unit B which unit A depends on (the rule applies to inheritance as well). This, too, should be treated as an implementation detail, even though you might feel as if you are repeating yourself.
Keep in mind that, once written, tests are unlikely to change, while the code they test will almost certainly change.
In practice there is also the problem of IO and the outside world, so you want to use interfaces so that you can create mocks if necessary.
In more dynamic languages this is not that much of an issue, here you can use duck typing, multiple inheritance and mixins to compose test cases. If you start disliking inheritance in general you are probably doing it right.
This screencast explains agile development and TDD in practice for C#.
Coding against an interface means that in your test you can use a mock object instead of the real object. With a good mock framework, you can make your mock object do whatever you like.

How many levels of abstraction do I need in the data persistence layer?

I'm writing an application using DDD techniques. This is my first attempt at a DDD project. It is also my first greenfield project, and I am the sole developer. I've fleshed out the domain model and user interface. Now I'm starting on the persistence layer. I start with a unit test, as usual.
[Test]
public void ShouldAddEmployerToCollection()
{
    var employerRepository = new EmployerRepository();
    var employer = _mockery.NewMock<Employer>();

    employerRepository.Add(employer);

    _mockery.VerifyAllExpectationsHaveBeenMet();
}
As you can see I haven't written any expectations for the Add() function. I got this far and realized I haven't settled on a particular database vendor yet. In fact I'm not even sure it calls for a db engine at all. Flat files or xml may be just as reasonable. So I'm left wondering what my next step should be.
Should I add another layer of abstraction... say a DataStore interface or look for an existing library that's already done the work for me? I'd like to avoid tying the program to a particular database technology if I can.
With your requirements, the only abstraction you really need is a repository interface that has basic CRUD semantics so that your client code and collaborating objects only deal with IEmployerRepository objects rather than concrete repositories. You have a few options for going about that:
1) No more abstractions. Just construct the concrete repository in your top-level application where you need it:
IEmployeeRepository repository = new StubEmployeeRepository();
IEmployee employee = repository.GetEmployee(id);
Changing that in a million places will get old, so this technique is only really viable for very small projects.
2) Create repository factories to use in your application:
IEmployeeRepository repository = repositoryFactory<IEmployee>.CreateRepository();
IEmployee employee = repository.GetEmployee(id);
You might pass the repository factory into the classes that will use it, or you might create an application-level static variable to hold it (it's a singleton, which is unfortunate, but fairly well-bounded).
3) Use a dependency injection container (essentially a general-purpose factory and configuration mechanism):
// A lot of DI containers use this 'Resolve' format.
IEmployeeRepository repository = container.Resolve<IEmployeeRepository>();
IEmployee employee = repository.GetEmployee(id);
If you haven't used DI containers before, there are lots of good questions and answers about them here on SO (such as Which C#/.NET Dependency Injection frameworks are worth looking into? and Data access, unit testing, dependency injection), and you would definitely want to read Martin Fowler's Inversion of Control Containers and the Dependency Injection pattern.
At some point you will have to make a call as to what your repository will do with the data. When you're starting your project it's probably best to keep it as simple as possible, and only add abstraction layers when necessary. Simply defining what your repositories / DAOs are is probably enough at this stage.
Usually, the repository / repositories / DAOs should know about the implementation details of which database or ORM you have decided to use. I expect this is why you are using repositories in DDD. This way your tests can mock the repositories and be agnostic of the implementation.
I wrote a blog post on implementing the Repository pattern on top of NHibernate, I think it will benefit you regardless of whether you use NHibernate or not.
Creating a common generic and extensible NHiberate Repository
One thing I've found with persistence layers is to make sure there is a spot where you can start doing abstraction. If your database grows, you might need to start implementing sharding, and unless an abstraction layer is already available, it can be difficult to add one later.
I believe you shouldn't add yet another layer below the repository classes just for the purpose of unit testing, especially if you haven't chosen your persistence technology. I don't think you can create an interface more granular than "repository.GetEmployee(id)" without exposing details about the persistence method.
If you're really considering using flat text or XML files, I believe the best option is to stick with the repository interface abstraction. But if you have decided to use databases, and you're just not sure about the vendor, an ORM tool might be the way to go.
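In other words, something with roughly the shape below is probably all the abstraction you need for now (the names and members are illustrative, not prescriptive):

using System.Collections.Generic;

// Illustrative repository abstraction; adjust members to your domain.
public interface IEmployerRepository
{
    Employer GetById(int id);
    IEnumerable<Employer> GetAll();
    void Add(Employer employer);
    void Remove(Employer employer);
}

// Later you can supply a SqlEmployerRepository, an XmlEmployerRepository,
// or a FlatFileEmployerRepository without touching the callers; the rest
// of the application only ever depends on the interface.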
