This is more of a question where I am looking for opinions. I am working on a project that uses both NHibernate and Entity Framework (this is by design; we wanted flexibility). So I went ahead and started working on a Repository pattern, but came across a slight dilemma.
Basically, I wanted to know what you guys think about the following areas:
Should the Repository be a singleton? - This would allow me to keep sessions open, but at the same time I think it would keep connections to the database open as well. For NHibernate, the ORM can only guarantee an object is the same instance within the same session. This is ideal for easy coding, but there are definitely ways to overcome it using keys and overriding the GetHashCode and Equals methods.
If it's not a singleton (or even if it is), should I be closing the connections as soon as they are used? For NHibernate, that means closing the session each time the Repository is "Disposed", which is after every use.
Have you implemented a Repository pattern for either NHibernate or EF 4.0 and found any useful ideas?
Don't code the creation of singletons yourself (i.e. the singleton pattern itself); use an IoC framework like StructureMap to handle the lifecycle management of your objects.
This we can't answer. If it's a singleton, it must be thread-safe with regard to the resources it manages internally (like a pool of live DB connections). Thread-safe code isn't trivial.
This we can't answer. It depends on how you work with your model. It also depends on whether you want people to be able to read through a DataReader, which requires an active connection to the database. This also affects things like lazy loading, which requires active sessions and becomes a nightmare with databinding.
Here's everything I've come up with in regards to creating a repository pattern for NH: Creating a common generic and extensible NHibernate Repository version 2
First question: does it have to be NHibernate? Why not take a look at using EF4 with an IoC container? My favorite is StructureMap. Then you no longer have to worry about making your repository a singleton, as StructureMap gives you options to keep the scope open per request, per HttpContext, or hybrid. You of course have the option to use the Singleton pattern with your Repository; I'm just not sold on it being a viable option in a case like this.
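For example, the wiring might look roughly like this with StructureMap 2.x (IOrderRepository and EfOrderRepository are hypothetical names used for illustration, not anything from the question):

```csharp
using StructureMap;

ObjectFactory.Initialize(x =>
{
    // Hybrid scope: one instance per HttpContext in a web app,
    // one per thread outside of one.
    x.For<IOrderRepository>()
     .HybridHttpOrThreadLocalScoped()
     .Use<EfOrderRepository>();
});
```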
Hope that helps more than it confuses you.
I'm working on my first Blazor Server project and am slowly fixing a lot of the initial design errors I made when I started out. I've been using C# for a while, but I'm new to web development, ASP.NET, Blazor, and web architecture standards in general, which is why I made so many mistakes early on, before I had a strong understanding of how best to implement my project in a way that promotes clean code and long-term maintainability.
I've recently restructured my solution so that it follows the "Clean Architecture" outlined in this Microsoft documentation. I now have the following projects, which aim to mirror those described in the document:
CoreWebApp: A Blazor project, pages and components live here.
Core: A Class Library project, the domain model, interfaces, business logic, etc, live here.
Infrastructure: Anything to do with having EF Core access the underlying database lives here, i.e. ApplicationDbContext, any implementations of Repositories, etc.
I am at a point where I want to move existing implementations of the repository pattern into the Infrastructure project. This will allow me to decouple the Core project from the Infrastructure project by utilising the Dependency Injection system so that any business logic that uses the repositories depends only on the interfaces to those repositories (as defined in Core) and not the actual implementations themselves (to be defined in Infrastructure).
Both the Microsoft documentation linked above, and this video by CodeWrinkles on YouTube make the following two suggestions on how to correctly use DbContext in a Blazor Server project (I'll talk specifically about using DbContext in the context of a repository):
Scope usage of a repository to each individual database request. Basically every time you need the repository you instantiate a new instance, do what needs to be done, and as soon as the use of the repo goes out of scope it is automatically disposed. This is the shortest lived scope for the underlying DbContext and helps to prevent concurrency issues, but also forgoes the benefits of change tracking.
Scope the usage of a repository to the lifecycle of a component. Basically you create an instance of a repository in OnInitializedAsync and destroy it in the component's Dispose() method. This allows usage of EF Core's change tracking.
The problem with these two approaches is that they don't allow for use of the DI system; in both cases the repository must be new'd up, and thus the coupling between Core and Infrastructure remains unbroken.
The one thing that I can't seem to understand is why case 2 can't be achieved by declaring the repository as a Transient service in Program.cs. (I suppose case 1 could also be achieved; you'd just hide spinning up a new DbContext on every access to the repository within the methods it exposes.) In both the Microsoft documentation and the CodeWrinkles video, they lean pretty heavily on this wording for why the Transient scope isn't well aligned with DbContext:
Transient results in a new instance per request; but as components can be long-lived, this results in a longer-lived context than may be intended.
It seems counterintuitive to make this statement and then recommend a solution to the DbContext lifetime problem that produces exactly the lifetime the statement warns about.
Scoping a repository to the lifetime of a component seems, to me, to be exactly the same as injecting a Transient instance of a repository as a service. When the component is created a new instance of the service is created, when the user navigates away from the page this instance is destroyed. If the user comes back to the page another instance is created and it will be different to the previous instance due to the nature of Transient services.
What I'd like to know is if there is any reason why I shouldn't create my repositories as Transient services? Is there some deeper aspect to the problem that I've missed? Or is the information that has been provided trying to lead me into not being able to take advantage of the DI system for no apparent reason? Any discussion on this is greatly appreciated!
It's a complex issue with no silver-bullet solution. Basically, you can't have your cake and eat it.
You either use EF as an ORM (Object Relational Mapper), or you let EF manage your complex objects and in the process surrender your "Clean Design" architecture.
In a Clean Design solution, you map data classes to tables or views. Each transaction uses a "unit of work" DbContext obtained from a DbContextFactory. You only enable tracking on Create/Update/Delete transactions.
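A minimal sketch of that shape, assuming a hypothetical OrderDbContext registered with builder.Services.AddDbContextFactory (the Order entity here anticipates the example that follows):

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical model and context, for illustration only.
public class Order { public int Id { get; set; } }

public class OrderDbContext : DbContext
{
    public OrderDbContext(DbContextOptions<OrderDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public class OrderDataService
{
    private readonly IDbContextFactory<OrderDbContext> _factory;

    public OrderDataService(IDbContextFactory<OrderDbContext> factory)
        => _factory = factory;

    // Read: a short-lived context with change tracking disabled.
    public async Task<List<Order>> GetOrdersAsync()
    {
        using var db = _factory.CreateDbContext();
        return await db.Orders.AsNoTracking().ToListAsync();
    }

    // Update: tracking exists only for the duration of this method.
    public async Task SaveOrderAsync(Order order)
    {
        using var db = _factory.CreateDbContext();
        db.Update(order);
        await db.SaveChangesAsync();
    }
}
```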
An order is a good example.
A Clean Design solution has data classes for the order and the order items. A composite order object in the Core domain is built by making two queries into the data pipeline: one item query to get the order and one list query to get the order items associated with that order.
EF lets you build a data class that includes both the order data and a list of order items. You can open that data class in a DbContext, "process" the order by making changes, and then call SaveChangesAsync to save it back to the database. EF does all the complex work of building the queries and tracking the changes. It also holds the DbContext open for a long period.
Using EF to manage your complex objects closely couples your application domain with your infrastructure domain. Your application is welded to EF and the data stores it supports. It's why you will see some authors asserting that implementing the Repository Pattern with EF is an anti-pattern.
Taking the Order example above, you normally use a Scoped DI View Service to hold and manage the Order data. Your Order Form (Component) injects the service, calls an async get method to populate the service with the current data and displays it. You will almost certainly only ever have one Order open in an SPA. The data lives in the view service not the UI front end.
You can use transient services, but you must ensure they:
Don't use DbContexts
Don't implement IDisposable
Why? The DI container retains a reference to any Transient service it creates that implements IDisposable, because it needs to make sure the service gets disposed. However, it only disposes those services when the container itself is disposed, so you build up redundant instances until the SPA shuts down.
There are some situations where a Scoped service is too broad but the Transient option isn't applicable, such as a service that implements IDisposable. Using OwningComponentBase can help you solve that problem, but it can introduce a new set of problems of its own.
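For illustration, a minimal sketch of the OwningComponentBase approach (OrderViewService and its LoadAsync method are hypothetical names, not part of the framework):

```csharp
using Microsoft.AspNetCore.Components;

// OwningComponentBase<T> gives the component its own DI scope, which is
// disposed together with the component. In a .razor file this is written
// as: @inherits OwningComponentBase<OrderViewService>
public class OrdersPage : OwningComponentBase<OrderViewService>
{
    protected override async Task OnInitializedAsync()
    {
        // Service is resolved from the component-owned scope, so it (and
        // anything it holds, such as a DbContext) lives exactly as long
        // as this component.
        await Service.LoadAsync();
    }
}
```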
If you want to see a working Clean Design Repository pattern example there's an article here - https://www.codeproject.com/Articles/5350000/A-Different-Repository-Pattern-Implementation - with a repo.
Is it correct to create Unit of Work in order to share the DbContext among the Repositories?
If it isn't, what is the recommendation? I really think it is sometimes necessary to share the DbContext.
I'm asking this because of the answer to this question: In-memory database doesn't save data
Is it correct to create Unit of Work in order to share the DbContext among the Repositories?
It is a design decision, but yes. There is no problem in doing that. It is absolutely valid for code from multiple repositories to execute under one single connection.
I really think it is needed to share the DbContext sometimes.
Absolutely; there are many times when you need to share DbContext.
Your linked answer is really good. I especially like the three points it mentions. The OP on that question is doing some unnecessarily complicated things, like Singleton, Service Locator, and async calls, without understanding how they work. All of these are good tools, but only if they are used at the right time in the right place.
The following is from your linked answer:
The best thing is that all of these could be avoided if people stopped attempting to create a Unit of Work + Repository pattern over yet another Unit of Work and Repository. Entity Framework Core already implements these:
Yes, this is true. But even so, a repository and UoW may be helpful in some cases. That is a design decision based on business needs. I have explained this in my answers below:
https://stackoverflow.com/a/49850950/5779732
https://stackoverflow.com/a/50877329/5779732
Using the ORM directly in calling code has the following issues:
It makes the code a little more complicated.
Database code gets merged into the business logic.
As many ORM objects are used inline in the calling code, it is very hard to unit test.
All of those issues can be overcome by creating concrete repositories in the Data Access Layer. The DAL should expose its concrete repositories to calling code (BLL, services, controllers, whatever) through interfaces. That way, your database and ORM code is fully contained in the DAL, and you can easily unit test the calling code by mocking the repositories. Refer to this article explaining the benefits of a repository even with ORMs.
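As a small illustration of that layering (all names here are made up, and nothing is tied to a specific ORM):

```csharp
// A DTO exposed by the DAL instead of ORM entity types.
public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The interface the DAL exposes to calling code.
public interface IOrderRepository
{
    OrderDto GetById(int id);
    void Add(OrderDto order);
}

// The calling layer (BLL/service/controller) depends only on the
// interface, so it can be unit tested with a mock repository.
public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders) => _orders = orders;

    public OrderDto GetOrder(int id) => _orders.GetById(id);
}
```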
Apart from all of the above, one other issue generally discussed is "what if we decide to change the ORM in the future". In my personal experience this concern is mostly misplaced: it happens very rarely, and in most cases it should not be considered during design.
I recommend avoiding overthinking and over-design. Focus on your business objectives.
Refer to this example code to understand how to inject a UoW into repositories. The code sample uses Dapper, but the overall design may still be useful to you.
What you need is a class that contains multiple repositories and creates a UoW. Then, when you have a use case in which you need to use multiple repositories with a shared UoW, this class creates it and passes it to the repositories.
I typically call this class Service, but I think there is not some standardized naming.
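A rough sketch of such a class, assuming an EF Core DbContext as the shared unit of work (this is not the linked Dapper sample; all names are illustrative):

```csharp
public class SalesService
{
    private readonly AppDbContext _uow;            // the shared unit of work
    private readonly OrderRepository _orders;
    private readonly CustomerRepository _customers;

    public SalesService(AppDbContext uow)
    {
        _uow = uow;
        _orders = new OrderRepository(uow);        // both repositories work
        _customers = new CustomerRepository(uow);  // against one DbContext
    }

    public async Task PlaceOrderAsync(Customer customer, Order order)
    {
        _customers.Add(customer);
        _orders.Add(order);
        await _uow.SaveChangesAsync();             // one atomic commit
    }
}
```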
I'm implementing a DAL using Entity Framework. Our application has three layers (DAL, business layer, and presentation); it is a web app. When we began implementing the DAL, our team thought the DAL should have classes whose methods receive an ObjectContext given by services in the business layer and operate over it. The rationale behind this decision is that different ObjectContexts see different DB states, so some operations can be rejected due to foreign key mismatches and other inconsistencies.
We noticed that generating and propagating an ObjectContext from the services layer creates high coupling between layers. Therefore we decided to use DTOs mapped by AutoMapper (rather than unmanaged entities or self-tracking entities, citing high coupling, exposure of entities to upper layers, and low efficiency) and UnitOfWork. So, here are my questions:
Is this the correct approach to designing a web application's DAL? Why?
If you answered "yes" to 1, how is the concept of DTOs to be reconciled with the UnitOfWork pattern?
If you answered "no" to 1, what would be a correct approach to designing a DAL for a web application?
Please, if possible, give bibliography supporting your answer.
About the current design:
The application has been planned to be developed in three layers: presentation, business, and DAL. The business layer has both facades and services.
There is an interface called ITransaction (with only two methods, to dispose and to save changes) visible only to the services. To manage a transaction, there is a class Transaction extending ObjectContext and implementing ITransaction. We designed it this way because at the business layer we do not want other ObjectContext methods to be accessible.
In the DAL, we created an abstract repository using two generic types (one for the entity and the other for its associated DTO). This repository has CRUD methods implemented in a generic way, plus two generic methods to map between the DTOs and entities of the generic repository with AutoMapper. The abstract repository constructor takes an ITransaction as argument and expects it to be an ObjectContext, in order to assign it to its protected ObjectContext property.
The concrete repositories should only receive and return .NET types and DTOs.
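From that description, the abstract repository presumably looks something like this sketch (reconstructed from the text above, not the poster's actual code; it uses the classic static AutoMapper Mapper API):

```csharp
using System.Data.Objects;   // EF4-era ObjectContext
using AutoMapper;

public abstract class RepositoryBase<TEntity, TDto> where TEntity : class
{
    protected ObjectContext Context { get; }

    protected RepositoryBase(ITransaction transaction)
    {
        // The design expects the ITransaction to really be the ObjectContext.
        Context = (ObjectContext)transaction;
    }

    protected TDto ToDto(TEntity entity) => Mapper.Map<TEntity, TDto>(entity);
    protected TEntity ToEntity(TDto dto) => Mapper.Map<TDto, TEntity>(dto);

    public virtual TDto Create(TEntity entity)
    {
        Context.CreateObjectSet<TEntity>().AddObject(entity);
        // Note: the entity has no persistent id until SaveChanges() runs,
        // which is exactly the problem described below.
        return ToDto(entity);
    }
}
```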
We are now facing this problem: the generic create method does not generate a temporary or persistent id for the attached entities until we call SaveChanges(), which breaks the transactionality we want. This implies that service methods cannot use it to associate DTOs in the BL.
There are a number of things going on here... The assumption I'll make is that you're using a 3-tier architecture. That said, I'm unclear on a few design decisions you've made and what the motivations were behind making them. In general, I would say that your ObjectContext should not be passed around between your classes. There should be some sort of manager or repository class which handles the connection management. This solves your DB state management issue. I find that a Repository pattern works really well here. From there, you should be able to implement the unit of work pattern fairly easily, since your connection management will be handled in one place.

Given what I know about your architecture, I would say that you should be using a POCO strategy. Using POCOs does not tightly couple you to any ORM provider. The advantage is that your POCOs will be able to interact with your ObjectContext (probably via a Repository of some sort), and this will give you visibility into change tracking. Again, from there you will be able to implement the Unit of Work (transaction) pattern to give you full control over how your business transactions should behave.

I find this an incredibly useful article for explaining how all this fits together. The code is buggy but accurately illustrates best practices for the type of architecture you're describing: Repository, Specification and Unit of Work Implementation
The short version of my answer to question number 1 is "no". The above link provides what I believe to be a better approach for you.
I have always believed that code can explain things better than words for programmers, and this is especially true for this topic. That's why I suggest you look at a great sample application in which all the concepts you're expecting are implemented.
The project is called Sharp Architecture. It is centered around MVC and NHibernate, but you can use the same approaches, just replacing the NHibernate parts with EF ones where you need to. The purpose of this project is to provide an application template embodying all the community best practices for building web applications.
It covers all the common and most of the uncommon topics when using ORMs: managing transactions, managing dependencies with IoC containers, use of DTOs, etc.
And here is a sample application.
I urge you to read and try this; it will be a real treasure for you, as it was for me.
You should take a look at what dependency injection, and inversion of control in general, mean. That would provide the ability to control the lifecycle of the ObjectContext "from outside". You could ensure that only one instance of the object context is used for each HTTP request. To avoid managing dependencies manually, I would recommend using StructureMap as a container.
Another useful (but quite tricky and hard to get right) technique is abstraction of persistence. Instead of using the ObjectContext directly, you would use a so-called Repository, which is responsible for providing a collection-like API for your data store. This provides a useful seam which you can use to switch the underlying data storage mechanism or to mock out persistence completely for tests.
As Jason suggested already, you should also use POCOs (plain old CLR objects). Despite the implicit coupling with Entity Framework that would still remain (and that you should be aware of), it's much better than using the generated classes.
Things you might not find elsewhere fast enough:
Try to avoid usage of unit of work; your model should define the transactional boundaries.
Try to avoid usage of generic repositories (and do note the point about IQueryable too).
It's not mandatory to spam your code with the repository pattern name.
Also, you might enjoy reading about domain-driven design. It helps to deal with complex business logic, and gives great guidelines for making code less procedural and more object-oriented.
I'll focus on your current issues: to be honest, I don't think you should be passing around your ObjectContext. I think that is going to lead to problems. I'm assuming that a controller or a business service will be passing the ObjectContext/ITransaction to the Repository. How will you ensure that your ObjectContext is disposed of properly downstream? What happens when you use nested transactions? What manages the rollbacks for transactions downstream?
I think your best bet lies in putting some more definition around how you expect to manage transactions in your architecture. Using TransactionScope in your controller/service is a good start, since the ObjectContext respects it. Of course, you may need to take into account that controllers/services may make calls to other controllers/services which have transactions in them. In order to allow for scenarios where you want full control over your business transactions and the subsequent database calls, you'll need to create some sort of TransactionManager class which enlists, and generally manages, transactions up and down your stack. I've found that NCommon does an extraordinary job of both abstracting and managing transactions. Take a look at the UnitOfWorkScope and TransactionManager classes in there. Although I disagree with NCommon's approach of forcing the Repository to rely on the UnitOfWork, that could easily be refactored out if you wanted.
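For example, a hedged sketch of the TransactionScope approach (the two services and their method names are illustrative, not from the question):

```csharp
using System.Transactions;

public void PlaceOrder(OrderDto orderDto)
{
    using (var scope = new TransactionScope())
    {
        // Each service call can open its own ObjectContext/connection;
        // all of them enlist in the same ambient transaction.
        _orderService.PlaceOrder(orderDto);
        _inventoryService.ReserveStock(orderDto);

        // Commit. If an exception skips this line, Dispose rolls back
        // everything enlisted in the scope.
        scope.Complete();
    }
}
```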
As far as your persistent ID issue goes, check this out
On my quest to learn NHibernate I have reached the next hurdle: how should I go about integrating it with StructureMap?
Although code examples are very welcome, I'm more interested in the general procedure.
What I was planning on doing was...
Use Fluent NHibernate to create my class mappings for use in NHibernate's Configuration
Implement ISession and ISessionFactory
Bootstrap an instance of my ISessionFactory into StructureMap as a singleton
Register ISession with StructureMap, with per-HttpRequest caching
However, don't I need to call various tidy-up methods on my session instance at the end of the HttpRequest (because that's the end of its life)?
If I do the tidy-up in Dispose(), will StructureMap take care of this for me?
If not, what am I supposed to do?
Thanks
Andrew
I use StructureMap with Fluent NHibernate (and NHibernate Validator) in three of my current projects. Two of those are ASP.NET MVC apps and the third is a WCF web service.
Your general strategy sounds about right (except you won't be making your own ISession or ISessionFactory implementations, as was already pointed out in the comments). For details, snag my configuration code from here:
http://brendanjerwin.github.com/development/dotnet/2009/03/11/using-nhibernate-validator-with-fluent-nhibernate.html
The post is really about integrating NHibernate Validator and Fluent NHibernate, but you can see exactly how I register the session factory and ISession with StructureMap in the "Bonus" section of the post.
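The registration is roughly this shape (a sketch, not the exact code from the post; OrderMap stands in for one of your Fluent NHibernate ClassMaps, and the database configuration is omitted):

```csharp
using FluentNHibernate.Cfg;
using NHibernate;
using StructureMap;

ObjectFactory.Initialize(x =>
{
    // One expensive session factory for the life of the application.
    // (.Database(...) configuration omitted for brevity.)
    x.For<ISessionFactory>().Singleton().Use(
        Fluently.Configure()
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<OrderMap>())
                .BuildSessionFactory());

    // Cheap sessions: per HTTP request in a web app, per thread elsewhere.
    x.For<ISession>()
     .HybridHttpOrThreadLocalScoped()
     .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
});
```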
RE: tidy-up: you should try to always work within a transaction, and either commit it or roll it back at the end of your unit of work. NHibernate only uses SQL connections when it needs them and will take care of cleaning up that limited resource for you. Normal garbage collection will take care of the sessions themselves.
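In code, that discipline typically looks like this (a sketch; sessionFactory and order are illustrative):

```csharp
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    try
    {
        session.Save(order);   // 'order' is a mapped entity
        tx.Commit();           // flush and commit the unit of work
    }
    catch
    {
        tx.Rollback();         // roll back on any failure
        throw;
    }
}
```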
The session factory is a very expensive object that you will want to initialize only once and keep around for the life of your app.
I've not used StructureMap, but maybe I can still help guide you in the right direction. Fluent NHibernate is awesome, and a good choice over the hbm files.
As for the HTTP request: you do need to ensure that you close the session when the HTTP request ends; if you don't, you'll end up leaking NHibernate sessions. I'm not sure whether StructureMap will handle this for you. What I've done is write an HTTP module which closes the session.
One thing to note, though, that bit me: you will want to make sure you wrap all your data access in a transaction and ensure NHibernate actually commits its changes. If you do this as part of your session close, you could miss the chance to handle errors. I'm curious to hear what you end up having to do to get this working.
I was thinking of making an internal data access class a Singleton but couldn't convince myself of the choice, mainly because the class has no state except for local variables in its methods.
What is the purpose of designing such classes to be Singletons after all?
Is it to guarantee sequential access to the database? That's not convincing, since most modern databases handle concurrency well.
Is it the ability to reuse a single connection, something that connection pooling already takes care of?
Or is it saving memory by running a single instance?
Please enlighten me on this one.
I've found that the singleton pattern is appropriate for a class that:
Has no state
Is full of basic "Service Members"
Has to tightly control its resources.
An example of this would be a data access class.
You would have methods that take in parameters and return, say, a DataReader, but you don't manipulate the state of the reader in the singleton; you just get it and return it.
At the same time, you can take logic that would otherwise be spread across your project (for data access) and integrate it into a single class that manages its resources (database connections) properly, regardless of who is calling it.
All that said, the Singleton pattern was invented prior to the .NET concept of fully static classes, so I am on the fence as to whether you should go one way or the other. In fact, that is an excellent question to ask.
From "Design Patterns: Elements Of Reusable Object-Oriented Software":
It's important for some classes to
ahve exactly one instance. Although
there can be many printers in a
system, there should only be one
printer spooler. There should only be
one file system and one window
manager. ...
Use the Singleton pattern when:
there must be exactly one instance of a class, and it must be accessible to clients from a well-known access point
the sole instance should be extensible by subclassing and clients should be able to use an extended instance without modifying their code
Generally speaking, in web development, the only things that should actually implement Singleton pattern are in the web framework itself; all the code you write in your app (generally speaking) should assume concurrency, and rely on something like a database or session state to implement global (cross-user) behaviors.
You probably wouldn't want to use a Singleton for the circumstances you describe. Having all connections to a DB go via a single instance of a DBD/DBI type class would seriously throttle your request throughput performance.
The Singleton is a useful Design Pattern for allowing only one instance of your class. The Singleton's purpose is to control object creation, limiting the number to one but allowing the flexibility to create more objects if the situation changes. Since there is only one Singleton instance, any instance fields of a Singleton will occur only once per class, just like static fields.
Source: java.sun.com
Using a singleton here doesn't really give you anything; it just limits flexibility.
You WANT concurrency, or you won't scale.
Worrying about connections and memory here is premature optimization.
As one example, object factories are very often good candidates to be singletons.
If a class has no state, there's no point in making it a singleton; all well-behaved languages will create at most a single pointer to the vector table (or equivalent structure) for dispatching its methods.
If there is instance state that can vary among instances of the class, then a singleton pattern won't work; you need more than one instance.
It follows, then, by exhaustion, that the only case in which Singleton should be used is when there is state that must be shared among all accessors, and only state that must be shared among all accessors.
There are several things that can lead to something like a singleton:
The Factory pattern: you construct and return an object, using some shared state.
Resource pools: you have a shared table of some limited resource, like database connections, that you must manage among a large group of users. (The bumpo version is where there is one DB connection held by a singleton.)
Concurrency control of an external resource: a semaphore is generally going to be a variant of singleton, because P/V operations must atomically modify a shared counter.
The Singleton pattern has lost a lot of its shine in recent years, mostly due to the rise of unit testing.
Singletons can make unit testing very difficult- if you can only ever create one instance, how can you write tests that require "fresh" instances of the object under test? If one test modifies that singleton in some way, any further tests against that same object aren't really starting with a clean slate.
Singletons are also problematic because they're effectively global variables. We had a threading issue a few weeks back at my office due to a Singleton global that was being modified from various threads; the developer was blinded by the use of a sanctioned "Pattern", not realizing that what he was really creating was a global variable.
Another problem is that it can be pathologically difficult to create true singletons in certain situations. In Java for example, it's possible to create multiple instances of your "singleton" if you do not properly implement the readResolve() method for Serializable classes.
Rather than creating a Singleton, consider providing a static factory method that returns an instance; this at least gives you the ability to change your mind down the road without breaking your API.
Josh Bloch has a good discussion of this in Effective Java.
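As a sketch of that suggestion (illustrative C#, not code from the book): callers go through the factory method, so it can later return a cached instance, a subclass, or a pooled object without breaking the API.

```csharp
public class DataAccess
{
    private DataAccess() { }

    // Today: a fresh instance on every call. Tomorrow this could return
    // a singleton or pooled instances, and no caller would need to change.
    public static DataAccess GetInstance() => new DataAccess();
}
```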
You have a repository layer that you want created once, and that reference used everywhere else.
If you go with a standard singleton, there is a bad side effect: you basically kill testability. All code becomes tightly coupled to the singleton instance, and now you cannot test any code without hitting the database (which greatly complicates unit testing).
My advice:
Find an IoC container that you like and integrate it into your application (StructureMap, Unity, Spring.NET, Castle Windsor, Autofac, Ninject... pick one).
Implement an interface for your repository.
Tell the IoC container to treat the repository as a singleton, and to return it when code asks for the repository by its interface (see the sketch after this list).
Learn about dependency injection.
This is a lot of work for a simple question. But you will be better off.
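To make steps 2-4 concrete, here's a hedged sketch in StructureMap terms (IRepository and SqlRepository are placeholder names; any of the containers listed above can do the same):

```csharp
using StructureMap;

// Register the concrete repository as a singleton behind its interface.
ObjectFactory.Initialize(x =>
{
    x.For<IRepository>().Singleton().Use<SqlRepository>();
});

// Consuming code asks for the interface via constructor injection,
// so unit tests can substitute a mock IRepository instead.
public class CustomerController
{
    private readonly IRepository _repository;

    public CustomerController(IRepository repository) => _repository = repository;
}
```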
With C#, I would say that a singleton is rarely appropriate; most uses for a singleton are better served by a static class. Being mindful of thread safety is extremely important, though, with anything static. For database access, you probably don't want a single shared connection, as mentioned above. Your best bet is to create a connection per use and rely on the built-in pooling. You can create a static method that returns a fresh connection to reduce code, if you like (see the sketch below). However, an ORM pattern/framework may be better still.
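A sketch of that static helper idea (the connection string is a placeholder; in practice it would come from configuration):

```csharp
using System.Data.SqlClient;

public static class Db
{
    private const string ConnectionString =
        "Server=.;Database=App;Integrated Security=true";

    public static SqlConnection OpenConnection()
    {
        var conn = new SqlConnection(ConnectionString);
        conn.Open();   // the physical connection comes from the pool
        return conn;   // caller disposes; Dispose returns it to the pool
    }
}
```

Callers then just write using (var conn = Db.OpenConnection()) { ... } and let disposal hand the connection back to the pool.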
In C# 3.0 (.NET 3.5), extension methods may be more appropriate than a static class.