Repository pattern - is a global repository a good idea? - c#

I have the following interfaces in a WCF service:
IProductRepository
IFieldRepository
IFieldValueRepository
ICategoryRepository
I implement each of these in a separate project called DatabaseRepository, while Product, Field, FieldValue and Category all sit in a common library shared between the service and the repository project.
Products contain Fields, which in turn contain FieldValues. I don't like the idea of my service constructor having 4 repositories passed in, so I also have IGlobalRepository, which contains a property for each other repository. I instantiate my service by passing a concrete implementation of IGlobalRepository to the constructor, using Ninject conventions based binding to handle this for me when the service is hosted in IIS.
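For reference, the wiring described above can be sketched like this (the GetByCategory signature is an assumption; the other child interfaces are shown empty):

```csharp
using System.Collections.Generic;

public record Product(int CategoryId, string Name);

// Only one method is shown per child repository; the signatures are assumptions.
public interface IProductRepository { IEnumerable<Product> GetByCategory(int categoryId); }
public interface IFieldRepository { }
public interface IFieldValueRepository { }
public interface ICategoryRepository { }

// The aggregate: one property per child repository, so the service
// constructor takes a single dependency instead of four.
public interface IGlobalRepository
{
    IProductRepository ProductRepository { get; }
    IFieldRepository FieldRepository { get; }
    IFieldValueRepository FieldValueRepository { get; }
    ICategoryRepository CategoryRepository { get; }
}
```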
So on my web service GetProductsByCategory looks something like this:
private IGlobalRepository Repo { get; set; }
public IEnumerable<Product> GetProductsByCategory(int categoryId){
    return Repo.ProductRepository.GetByCategory(categoryId);
}
This is all well and good, except that products have fields, and fields have values. Does that mean that I need to pass concrete implementations of IFieldRepository and IFieldValueRepository to Repo.ProductRepository?
I'm sure someone will suggest that ProductRepository should also be responsible for fetching field data, but I have it in a separate repository so that I can fetch fields independent of the product they are attached to.
Before I started adopting the repository pattern, I would simply call a static method on Field or FieldValue to get what I needed. Passing around repositories seems like a much less elegant way of working.
So now to my actual question:
Is a global repository a good idea? I realise this is partly subjective, but would love to hear the opinions of others, and more importantly, what is considered best practice for this kind of scenario.

I don't think a global repository is a good idea.
You should define each repository according to how it is used. So yes: the product repo should return all the data its callers require, and that has nothing to do with the FieldRepo or any other repo.
The app is coupled only to the interface, and one repository can implement multiple interfaces. Also, you can have different concrete repositories working with the same db. The point of the repository is not to have one repo per entity; it's to provide an interface for the app to get what it needs from persistence. How you structure things INSIDE persistence is a different story.
So, at least as an experiment, try redefining the repo interfaces to return directly the objects the app needs, ignoring which one is an entity, which object is part of another, and so on.
Then start implementing the concrete repos. Remember, you have as many models as the app needs. Only in trivial cases is there one model to rule them all.
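A sketch of that redefinition (all names here are hypothetical): one interface per use case, returning read models already shaped for the caller rather than mirroring the entity graph.

```csharp
using System.Collections.Generic;

// Hypothetical read models shaped for the caller, not for the tables.
public record FieldValueView(string Value);
public record FieldView(string Name, IReadOnlyList<FieldValueView> Values);
public record ProductView(string Name, IReadOnlyList<FieldView> Fields);

// The app asks for exactly what it needs; whether the implementation runs
// one query or three, or which tables exist, stays inside persistence.
public interface IProductCatalog
{
    IReadOnlyList<ProductView> GetProductsByCategory(int categoryId);
}
```

With this shape, the question of passing IFieldRepository into ProductRepository disappears: assembling fields and values is a concern of the concrete IProductCatalog implementation, not of the interface its consumers see.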

Related

3 Tier Architecture with NHibernate, Ninject and Windows Forms

So I'm in the middle of refactoring a small to medium sized Windows Forms application backed by a SQLite database accessed through NHibernate. The current solution contains only an App Project and a Lib Project, so it is not very well structured and is tightly coupled in many places.
I started off with a structure like in this answer but ran into some problems down the road.
DB initialization:
Since the code building the NHibernate SessionFactory is in the DAL, and I need to inject an ISession into my repositories, I have to reference the DAL and NHibernate directly in my Forms project to set up the DI with Ninject (which should be done in the App Project / Presentation Layer, right?).
Isn't that one of the things I'm trying to avoid with such an architecture?
In an ideal world, which projects should reference each other?
DI in general:
I have a decently hard time figuring out how to do DI properly. I read about using a composition root to only have one place where the Ninject container is directly used but that doesn't really play well with the current way NHibernate Sessions are used.
We have a MainForm which is obviously the application's entry point and keeps one Session during its whole lifetime. In addition, the user can open multiple SubForms (mostly, but not exclusively, for editing single entities), each of which currently has a separate Session with a shorter lifetime. This is accomplished with a static helper exposing the SessionFactory and opening new Sessions as required.
Is there another way of using DI with Windows Forms besides the composition root pattern?
How can I make use of Ninject's capabilities for scoped injection to manage my NHibernate Sessions on a per-form basis (if that is possible at all)?
Terminology:
I got a little confused as to what is a Repository versus a Service. One comment on the posted answer states "it is ok for the repository to contain business-logic, you can just call it a service in this case". It felt a little useless for our repositories to contain only basic CRUD operations when we often wanted to push filtering etc. into the database. So we went ahead and extended the repositories with methods like GetByName or the more complex GetAssignmentCandidates. That felt appropriate since the implementations are in the Business Layer, but they are still called repositories. We also went with Controllers for classes interacting directly with UI elements, but I think that name is more common in the web world.
Should our Repositories actually be called Services?
Sorry for the wall of text. Any answers would be greatly appreciated!
Regarding 1:
Yes and no. Yes, you would prefer the UI layer not to depend on the specifics of a layer several levels down. But it doesn't. The composition root just resides in the same assembly; logically it's not the same layer.
Regarding 2:
Limit the usage of the container. Factories (for Sessions, etc.) are sometimes necessary. Using statics should be avoided. Some frameworks, however, prevent you from using the ideal design; in that case, approximate it as closely as possible.
If you can currently write new FooForm(), then you can replace this with DI or a DI factory (e.g. Ninject.Extensions.Factory). If you have absolutely no control over how a type is instantiated, then you'll need to use a static to access the kernel like a service locator and then "locate" direct dependencies (while indirect dependencies are injected into the direct dependencies by the DI container).
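To make the factory option concrete, here is a hand-rolled sketch standing in for what Ninject.Extensions.Factory would generate from an interface. The ISession/ISessionFactory interfaces below are simplified stand-ins for NHibernate's, and the form ties a session's lifetime to its own:

```csharp
using System;

// Simplified stand-ins for NHibernate's session types.
public interface ISession : IDisposable { }
public interface ISessionFactory { ISession OpenSession(); }

// A sub-form owns one session for its whole lifetime.
public class SubForm : IDisposable
{
    public ISession Session { get; }
    public SubForm(ISession session) => Session = session;
    public void Dispose() => Session.Dispose();  // session dies with the form
}

// Injected wherever sub-forms are opened; replaces "new SubForm(...)" and
// keeps the SessionFactory out of the forms themselves.
public class SubFormFactory
{
    private readonly ISessionFactory _sessions;
    public SubFormFactory(ISessionFactory sessions) => _sessions = sessions;
    public SubForm Create() => new SubForm(_sessions.OpenSession());
}

// Minimal fakes so the sketch is self-contained.
public class FakeSession : ISession
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}
public class FakeSessionFactory : ISessionFactory
{
    public ISession OpenSession() => new FakeSession();
}
```

With Ninject.Extensions.Factory you would instead declare an ISubFormFactory interface and bind it with ToFactory(), letting the container supply the constructor arguments.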
Regarding 3: I think this is somewhat controversial and probably often misunderstood. I don't think it's really that important what you call your classes (of course it is, but consistency across your code base matters more than deciding whether to name them all Repository or Service); what's important is how you design their responsibilities and relationships.
As such, I myself prefer to extract filters and the like into -Query-named classes, each providing exactly one method. But others have other preferences... I think there have been enough blog posts on this topic that there's no use rehashing it here.
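A sketch of that query-class style (names hypothetical; a real one would take an NHibernate ISession in its constructor, so a plain collection stands in for it here):

```csharp
using System.Collections.Generic;
using System.Linq;

public record Employee(int Id, string Name, bool IsActive);

// One class per query, one method each; the filter lives here instead of
// growing the repository interface with GetByName, GetAssignmentCandidates, etc.
public class GetAssignmentCandidatesQuery
{
    private readonly IEnumerable<Employee> _source;
    public GetAssignmentCandidatesQuery(IEnumerable<Employee> source) => _source = source;

    public IReadOnlyList<Employee> Execute() =>
        _source.Where(e => e.IsActive).OrderBy(e => e.Name).ToList();
}
```

The payoff is that each filter is independently testable and the repository interface stays small; the cost is more classes, which is exactly the trade-off people argue about.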
The best practice for a situation like yours is to use the MVP design pattern. Here is the architecture I can offer you.
MyApp.Infrastructure // Base Layer - No reference
MyApp.Models // Domain Layer - Reference to Infrastructure
MyApp.Presenter // Acts like controllers in MVC - Reference to Service, Models,
MyApp.Repository.NH // DAL layer - Reference to Models, Infrastructure
MyApp.Services // BLL Layer - Reference to Repository, Models
MyApp.Services.Cache // Cached BLL Layer(Extremely recommended) - Reference to Services, Models
MyApp.UI.Web.WebForms // UI Layer - Reference to all of layers
I will try to do my best to explain with the example of basic implementation of 'Category' model.
-Infrastructure-
EntityBase.cs
BusinessRule.cs
IEntity.cs
IRepository.cs
-Models-
Categories(Folder)
Category.cs // Implements IEntity and derives from EntityBase
ICategoryRepository.cs // Implements IRepository
-Presenter-
Interfaces
IHomeView.cs // Put every property and methods you need.
ICategoryPresenter.cs
Implementations
CategoryPresenter.cs // Implements ICategoryPresenter
CategoryPresenter(IHomeView view, ICategoryService categoryService){
}
-Repository-
Repositories(Folder)
GenericRepository.cs // Implements IRepository
CategoryRepository.cs // Implements ICategoryRepository and derives from GenericRepository
-Services-
Interfaces
ICategoryService.cs
AddCategory(Category model);
Implementations
CategoryService.cs // Implements ICategoryService
CategoryService(ICategoryRepository categoryRepository){}
AddCategory(Category model){
// Do stuff via the ICategoryRepository implementation.
}
-Services.Cache-
// It all depends on your choice: Redis or web cache.
-UI.Web.WebForms-
Views - Home(Folder) // Implement a structure like in MVC views.
Index.aspx // Implements IHomeView
Page_Init(){
// Get an instance of the presenter
var categoryPresenter = new CategoryPresenter(this, new CategoryService(new CategoryRepository()));
}
I'm not sure I got your question right, but maybe this gives you an idea. :)

Is it bad practice to have a class helper to convert DAL objects to Core objects

I'm struggling to get a good architecture for my current project. It's my first time designing a serious n-tier app while trying to use software engineering best practices (DI, unit tests, etc.). My project uses the Onion architecture.
I have 4 layers
The Core Layer: it holds my business objects. Here I have classes representing my business entities with their methods. Some of these objects hold a reference to a Service interface.
The DAL (Data Access) Layer: it defines POCO objects and implements the Repository interfaces defined in the Core Layer. In this layer I thought it was a good idea to design a big utility class whose role is to convert the DAL's POCO objects into Business Objects from the Core.
The Service Layer: it implements the Service interfaces defined in the Core. Its role is to provide access to the Repositories defined in the DAL. I initially believed this layer was useless, so I used the Repository interfaces defined in my Core Layer directly. However, after some weeks spent writing very long instantiation code (constructors taking 5-6 IRepository parameters), I got the point of the Service Layer.
The Presentation Layer: nothing special to say here, except that I configure dependency injection in this layer (I'm using Ninject).
I've changed my architecture and rewritten my code at least three times, because time and again I saw that something was wrong with it (things like long constructors with long parameter lists). Fortunately, bit by bit, I'm getting the point of the various coding patterns found in the literature.
However, I've just run into a cyclic dependency in my DI, and I'm seriously wondering whether my DAL2Core helper was a good idea...
Thanks to this helper I can write code such as :
DAL.Point p = DAL2Core.PointConverter(point); // point is a Core Object
context.Points.Add(p);
context.SaveChanges();
This removes a little code redundancy. Each of my repositories defined in the DAL then has its own DAL2Core member:
private IDAL2CoreHelper DAL2Core;
And I inject it from the Repository constructor.
The DAL2Core class itself is a bit messy...
First of all, it has a property for every Repository and every Processor (Service Layer). The processors are there because my Core objects need a Processor dependency injected. Below are a few of the repositories and processors referenced in my DAL2Core utility class, just to illustrate:
[Inject]
private Core.IUserRepository UserRepository{ get; set; }
[Inject]
private Core.IPointsRepository PointsRepository { get; set; }
...
[Inject]
private Core.IUserProcessor UserProcessor{ get; set; }
[Inject]
private Core.IPointsProcessor CoursProcessor { get; set; }
(Since the DAL2Core helper is required by the repositories, constructor injection would cause cyclic dependencies.)
This class then has lots of simple methods such as:
public Core.User UserConverter(DAL.User u)
{
    Core.User user = new Core.User(UserProcessor);
    user.FirstName = u.FirstName;
    user.Name = u.Name;
    user.ID = u.ID;
    user.Phone = u.Phone;
    user.Email = u.Email;
    user.Birthday = u.Birthday;
    user.Photo = u.Photo;
    return user;
}
This class is around 600 lines. Thinking about it, I realize I don't save much code, because most of the time the DAL2Core conversion code is only called from one place, so perhaps it would be better to leave that code in the repositories? And, the biggest problem: since I decided to decouple this helper from my repository classes, Ninject throws cyclic dependency exceptions.
What do you think about the design I tried; is it a good / common practice? And how can I perform this DAL2Core conversion smartly and efficiently, without code smells? I really look forward to solving this architecture issue; I've spent the last three weeks dealing with plumbing and architecture rather than advancing the project, and I'm running very late. However, I really want to produce high-quality code. I just want to avoid architectural solutions that look like overkill to me, with lots of factories and so on. But I admit that sometimes this feeling just comes from a lack of understanding on my part (as with the Service Layer).
Thanks in advance for your help !
What you are looking for is AutoMapper, ValueInjecter, or something similar.
Essentially, it is good practice to separate data models between layers, to reduce coupling and increase testability. If you come up with a generic mapper, you will reduce code redundancy.
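To show the idea without committing to a library yet, here is a minimal hand-rolled mapper that copies same-named, compatible-typed properties via reflection (AutoMapper adds flattening, projections, configuration validation and much more; DalUser/CoreUser are illustrative stand-ins). Note it requires a parameterless constructor on the target, so objects needing injected dependencies, like the Core.User above, would still need a custom converter:

```csharp
using System.Linq;

public static class SimpleMapper
{
    // Copies every readable source property to a writable destination
    // property with the same name and a compatible type. No nesting, no
    // type conversion; just enough to replace field-by-field converters.
    public static TDest Map<TSource, TDest>(TSource source) where TDest : new()
    {
        var dest = new TDest();
        foreach (var dp in typeof(TDest).GetProperties().Where(p => p.CanWrite))
        {
            var sp = typeof(TSource).GetProperty(dp.Name);
            if (sp != null && sp.CanRead && dp.PropertyType.IsAssignableFrom(sp.PropertyType))
                dp.SetValue(dest, sp.GetValue(source));
        }
        return dest;
    }
}

// Illustrative stand-ins for the DAL and Core user types.
public class DalUser { public int ID { get; set; } public string Email { get; set; } }
public class CoreUser { public int ID { get; set; } public string Email { get; set; } }
```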
Hope this helps.

Should DAL classes be public?

Say I've got a DAL that multiple applications use to access the same data. The DAL defines its own classes and interfaces for dealing with that data, but should the applications using the DAL be working with those classes, or just the interfaces?
Another way; should it be:
List<Product> products = MyDAL.Repository.GetProducts();
or:
List<IProduct> products = MyDAL.Repository.GetProducts();
Is it good or bad that each application utilizing the DAL will have to create its own implementation details for Product?
Passing interfaces around instead of classes is one (very good) thing. Making your DAL classes private is a different (and not necessarily good) thing.
For instance, what if one of the applications that uses your DAL wants to change the behavior of Product slightly? How can you subclass or decorate the original class if it's private?
Say one of your apps is a web application that needs to store a product's image as a URL instead of a file path. Or wants to add caching, logging or something else on top of Product?
There are way too many open questions here to determine the best way.
If these applications can reuse the additional functionality given by the classes in your DAL, then I'd say absolutely reuse them.
Taking "Product" for example. If the DAL has a definition of Product that is pretty close to or the same as the definition the applications need, then reuse is your best bet.
If the applications explicitly do NOT want the functionality given by the classes and instead want to provide their own implementation, then just use the interfaces.
Again, looking at "Product": if the applications have their own definition of Product with perhaps additional or just plain different properties and methods then they should implement the interface.
It's really a question of how the classes in question are going to be used.
Returning interfaces is better, but then GetProducts() will need to know about the concrete implementations to properly query the data store. You might want to use an IoC framework for that.
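One way to reconcile the two snippets above without pushing implementation work onto every caller: keep the concrete class internal to the DAL assembly and have the repository return the interface. Callers get List&lt;IProduct&gt; but never need their own Product implementation unless they actually want different behaviour. A compressed sketch, reusing the names from the question:

```csharp
using System.Collections.Generic;

public interface IProduct
{
    string Name { get; }
}

// Internal to the DAL assembly: consumers only ever see IProduct,
// but the repository can still construct the concrete type.
internal class Product : IProduct
{
    public string Name { get; set; }
}

public class Repository
{
    // In a real DAL this would query the data store; a literal stands in here.
    public List<IProduct> GetProducts() =>
        new List<IProduct> { new Product { Name = "Widget" } };
}
```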

How Should viewmodels communicate with repositories?

I've got a bunch of repositories. They retrieve data from an EF 3.5 generated model. For the sake of simplicity, let's just call them Repository1, 2 and 3. They all implement the generic interface public interface IRepository<T>{..}. How should my viewmodels communicate with these repositories? I tried to create some kind of factory with an IRepository GetRepository(string repositoryName){..} method, but I couldn't get it to work. So should I just reference the repositories in the viewmodels when needed, or is there a better solution? I would prefer a code sample, thanks.
cheers
These answers and the free introductory chapter from Dependency Injection in .NET recommend keeping the repositories and UI separated from the business logic. Dependencies should point towards the core logic, like this:
dal/repositories -> business layer, models and IRepository <- UI
I have also wondered where the ViewModels fit into this. They should definitely not be connected to the repositories at all, but whether the ViewModels belong in the business layer (service layer) or with the UI seems debatable. I'm just starting out with asp.net mvc and am currently most in favour of putting them in the business layer to keep the controllers simple. It also seems reasonable that the business layer gathers items from various repositories that logically belong together and that they are acted on together via the ViewModel; maybe as a transaction, so that updates to all repositories must succeed or be rolled back.
I can't think of a situation where your view model should EVER communicate with your repository. A ViewModel should be a flat model for use by the Client.
What exactly are you trying to do?
You might find the BookLibrary sample application of the WPF Application Framework (WAF) interesting. It uses the Entity Framework together with MVVM. But it doesn't introduce a repository to work with the Entity Framework.
A repository serves up T's. What I've done is add a static property to my T's to get the repository via IoC:
public class Part // This is one kind of T
{
public static IRepository<Part> Repository { get { return IoC.GetInstance<IRepository<Part>>(); } }
...
}
then when I need a Part...
var part = Part.Repository.Find(id);
For my unit testing, IoC serves up mock repositories; in production, the real thing.

How many levels of abstraction do I need in the data persistence layer?

I'm writing an application using DDD techniques. This is my first attempt at a DDD project; it is also my first greenfield project, and I am the sole developer. I've fleshed out the domain model and the user interface, and now I'm starting on the persistence layer. I start with a unit test, as usual.
[Test]
public void ShouldAddEmployerToCollection()
{
    var employerRepository = new EmployerRepository();
    var employer = _mockery.NewMock<Employer>();
    employerRepository.Add(employer);
    _mockery.VerifyAllExpectationsHaveBeenMet();
}
As you can see, I haven't written any expectations for the Add() function. I got this far and realized I haven't settled on a particular database vendor yet. In fact, I'm not even sure the app calls for a db engine at all; flat files or XML may be just as reasonable. So I'm left wondering what my next step should be.
Should I add another layer of abstraction, say a DataStore interface, or look for an existing library that's already done the work for me? I'd like to avoid tying the program to a particular database technology if I can.
With your requirements, the only abstraction you really need is a repository interface that has basic CRUD semantics so that your client code and collaborating objects only deal with IEmployerRepository objects rather than concrete repositories. You have a few options for going about that:
1) No more abstractions. Just construct the concrete repository in your top-level application where you need it:
IEmployeeRepository repository = new StubEmployeeRepository();
IEmployee employee = repository.GetEmployee(id);
Changing that in a million places will get old, so this technique is only really viable for very small projects.
2) Create repository factories to use in your application:
IEmployeeRepository repository = repositoryFactory.CreateRepository<IEmployee>();
IEmployee employee = repository.GetEmployee(id);
You might pass the repository factory into the classes that will use it, or you might create an application-level static variable to hold it (it's a singleton, which is unfortunate, but fairly well-bounded).
3) Use a dependency injection container (essentially a general-purpose factory and configuration mechanism):
// A lot of DI containers use this 'Resolve' format.
IEmployeeRepository repository = container.Resolve<IEmployeeRepository>();
IEmployee employee = repository.GetEmployee(id);
If you haven't used DI containers before, there are lots of good questions and answers about them here on SO (such as Which C#/.NET Dependency Injection frameworks are worth looking into? and Data access, unit testing, dependency injection), and you would definitely want to read Martin Fowler's Inversion of Control Containers and the Dependency Injection pattern.
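To make option 3 runnable without picking a container library, here is a toy resolver with the same Bind/Resolve shape (every type here is a hypothetical stand-in; real containers such as Ninject add constructor injection, lifetimes, and configuration on top of this idea):

```csharp
using System;
using System.Collections.Generic;

public interface IEmployee { int Id { get; } }
public interface IEmployeeRepository { IEmployee GetEmployee(int id); }

public class StubEmployee : IEmployee
{
    public int Id { get; set; }
}

public class StubEmployeeRepository : IEmployeeRepository
{
    public IEmployee GetEmployee(int id) => new StubEmployee { Id = id };
}

// A toy container: maps a service interface to a factory delegate.
public class Container
{
    private readonly Dictionary<Type, Func<object>> _bindings = new();

    public void Bind<TService>(Func<TService> factory) where TService : class =>
        _bindings[typeof(TService)] = factory;

    public TService Resolve<TService>() => (TService)_bindings[typeof(TService)]();
}
```

At the composition root you bind once (container.Bind&lt;IEmployeeRepository&gt;(() =&gt; new StubEmployeeRepository())) and resolve wherever a repository is needed; swapping the stub for a real database-backed repository is then a one-line change.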
At some point you will have to make a call as to what your repository will do with the data. When you're starting your project it's probably best to keep it as simple as possible, and only add abstraction layers when necessary. Simply defining what your repositories / DAOs are is probably enough at this stage.
Usually, the repository / repositories / DAOs should know about the implementation details of the database or ORM you have decided to use. I expect this is why you are using repositories in DDD: that way your tests can mock the repositories and stay agnostic of the implementation.
I wrote a blog post on implementing the Repository pattern on top of NHibernate; I think it will benefit you whether or not you use NHibernate:
Creating a common generic and extensible NHibernate Repository
One thing I've found with persistence layers is to make sure there is a spot where you can start abstracting. If your database grows, you might need to start implementing sharding, and unless an abstraction layer is already in place, it can be difficult to add one later.
I believe you shouldn't add yet another layer below the repository classes just for the purpose of unit testing, especially if you haven't chosen your persistence technology yet. I don't think you can create an interface more granular than repository.GetEmployee(id) without exposing details of the persistence method.
If you're really considering flat text or XML files, I believe the best option is to stick with the repository interface abstraction. But if you have decided to use a database and are just not sure about the vendor, an ORM tool might be the way to go.
