What I have:
public interface IRepository
{
    IDisposable CreateConnection();
    User GetUser();
    // other methods, doesn't matter
}

public class Repository : IRepository
{
    private SqlConnection _connection;

    public IDisposable CreateConnection()
    {
        _connection = new SqlConnection();
        _connection.Open();
        return _connection;
    }

    public User GetUser()
    {
        // using _connection, gets the User from the database;
        // assumes _connection is not null and open
    }

    // other methods, doesn't matter
}
This enables classes that use IRepository to be easily testable and IoC-container friendly. However, anyone using this class has to call CreateConnection before calling any method that reads from the database; otherwise an exception is thrown. That in itself is somewhat good - we don't want long-lasting connections in the application. So I use this class like this:
using (_repository.CreateConnection())
{
    var user = _repository.GetUser();
    // do something with user
}
Unfortunately this is not a very good solution, because people using this class (including even me!) often forget to call _repository.CreateConnection() before calling methods that read from the database.
To resolve this I was looking at Mark Seemann's blog post SUT Double, where he implements the Repository pattern the correct way. Unfortunately he makes the Repository implement IDisposable, which means I cannot simply inject it via IoC/DI into classes and keep using it, because after just one usage it will be disposed. He uses it once per request, relying on ASP.NET Web API to dispose it after request processing is done. This is something I cannot do, because my class instances that use the Repository are working all the time.
What is the best possible solution here? Should I use some kind of factory that gives me a disposable IRepository? Will it be easily testable then?
There are a few problematic spots in your design. First of all, your IRepository interface mixes multiple levels of abstraction. Fetching a user is a much higher-level concept than connection management. By placing these behaviours together you are breaking the Single Responsibility Principle, which dictates that a class should have only one responsibility, one reason to change. You are also violating the Interface Segregation Principle, which pushes us toward narrow role interfaces.
On top of that, the CreateConnection() and GetUser() methods are temporally coupled. Temporal Coupling is a code smell, and you are already witnessing the problem, because it is possible to forget the call to CreateConnection().
Besides this, connection creation is something you will start to see in every repository in the system, and every piece of business logic will need to either create a connection or get an existing one from the outside. This becomes unmaintainable in the long run. Connection management is a cross-cutting concern; you don't want the business logic to be involved in such a low-level concern.
You should start by splitting the IRepository into two different interfaces:
public interface IRepository
{
    User GetUser();
}

public interface IConnectionFactory
{
    IDisposable CreateConnection();
}
Instead of letting the business logic manage the connection itself, you can manage the transaction at a higher level. This could be the request, but that might be too coarse-grained. What you need is to start the transaction somewhere between the presentation-layer code and the business-layer code, but without duplicating yourself. In other words, you want to apply this cross-cutting concern transparently, without writing it over and over again.
This is one of the many reasons that I started to use application designs as described here a few years ago, where business operations are defined using message objects and their corresponding business logic is hidden behind a generic interface. After applying these patterns, you will have a very clear interception point where you can start transactions with their corresponding connections and let the whole business operation run within that same transaction. For instance, you can use the following generic code that can be applied around every piece of business logic in your application:
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decorated.Handle(command);
            scope.Complete();
        }
    }
}
This code wraps everything around a TransactionScope. This allows your repository to simply open and close a connection; this wrapper will ensure that the same connection is used nonetheless. This way you can inject an IConnectionFactory abstraction into your repository and let the repository directly close the connection at the end of its method call, while under the covers .NET will keep the real connection opened.
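A repository built on these abstractions might then look like the sketch below. This is illustrative only: it assumes the concrete factory hands back an open IDbConnection (rather than a bare IDisposable), and the Users table and User properties are made up for the example.

```csharp
using System.Data;

public class SqlUserRepository : IRepository
{
    private readonly IConnectionFactory _connectionFactory;

    public SqlUserRepository(IConnectionFactory connectionFactory)
    {
        _connectionFactory = connectionFactory;
    }

    public User GetUser()
    {
        // Open and close per call; inside an ambient TransactionScope,
        // ADO.NET connection pooling hands back the same enlisted connection.
        using (var connection = (IDbConnection)_connectionFactory.CreateConnection())
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT TOP 1 Id, Name FROM Users";
            using (var reader = command.ExecuteReader())
            {
                reader.Read();
                return new User { Id = reader.GetInt32(0), Name = reader.GetString(1) };
            }
        }
    }
}
```

The repository stays oblivious of transactions; the decorator above is the only place where transactional behaviour lives.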
Create a repository factory that creates IDisposable repositories.
public interface IRepository : IDisposable
{
    User GetUser();
    // other methods, doesn't matter
}

public interface IRepositoryFactory
{
    IRepository Create();
}
You create them within a using and they are disposed of when done.
using (var repository = factory.Create())
{
    var user = repository.GetUser();
    // do something with user
}
You can inject the factory and create the repositories as needed.
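A consumer then takes only the factory as a dependency. A sketch (the UserService class and the User.Name property are illustrative, not from the question):

```csharp
public class UserService
{
    private readonly IRepositoryFactory _factory;

    public UserService(IRepositoryFactory factory)
    {
        _factory = factory;
    }

    public string GetUserName()
    {
        // A fresh repository (and connection) per operation, disposed when done.
        using (var repository = _factory.Create())
        {
            return repository.GetUser().Name;
        }
    }
}
```

In a unit test the factory simply hands out a fake IRepository, so testability is preserved.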
So, you already mentioned that
we don't want to have long-lasting connections in the application
which is absolutely right!
You need to open the connection in each repository method implementation, execute the queries or commands against the database, and then close the connection. I don't see why you would expose anything like a connection to the domain layer. In other words, remove the CreateConnection() method from the repositories; it is not needed. Each method opens and closes the connection internally.
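For instance, a repository method might look like this sketch (ADO.NET flavoured; the IUserRepository interface, the connection string, and the Users table are assumptions for the example):

```csharp
using System.Data.SqlClient;

public class UserRepository : IUserRepository
{
    private readonly string _connectionString;

    public UserRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public User GetUser(int id)
    {
        // The connection never escapes the method; callers only see a User.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Users WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                return reader.Read()
                    ? new User { Id = reader.GetInt32(0), Name = reader.GetString(1) }
                    : null;
            }
        }
    }
}
```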
There are times when you will want to wrap several repository method calls into something, but that is related to transactions, not connections. In that case there are two answers:
Check the correctness of your Repository pattern implementation. You should have repositories only for Aggregate Roots. Not every entity qualifies as aggregate root. Aggregate root is the guaranteed transaction boundary, so you should not be worried about transactions anyway out of repository - each repository method call will naturally follow the boundary, since it handles only a single aggregate root at a time.
If you still need to execute operations against several aggregate roots in one go, then you will have to implement a pattern called Unit of Work. This is essentially a business layer transaction implementation. I don't recommend relying on built-in transaction features into storage technologies for this specific case (several aggregates in one go), because they differ from vendor to vendor (while relational DBs can guarantee several aggregate roots in one go, NoSQL DBs only guarantee single aggregate at a time).
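For that rare multi-aggregate case, the Unit of Work abstraction can be as small as this sketch (interface and member names are illustrative):

```csharp
public interface IUnitOfWork : System.IDisposable
{
    void RegisterDirty(object aggregateRoot); // track a modified aggregate root
    void Commit();                            // persist all tracked changes atomically
}

// Illustrative usage: touch several aggregate roots, then commit once.
//
// using (var uow = unitOfWorkFactory.Create())
// {
//     uow.RegisterDirty(order);
//     uow.RegisterDirty(customer);
//     uow.Commit();
// }
```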
From my experience, you usually only need to modify a single aggregate at a time. Unit of Work is a rarely needed pattern. So just rethink your repositories and aggregate roots; that should do the trick for you.
Just for the completeness of the answer: you do need repository interfaces, which you already have. Thus your approach is already unit-testable.
You are mixing apples with oranges and peaches.
There are three concepts at play here:
The repository contract
The implementation details
Repository lifetime management
Your repository conceptually holds users, but it has a CreateConnection() method that indicates details of the implementation (a connection is needed). Not good.
What you need to do is remove the CreateConnection() method from the interface. Now you have a true definition of what a user repository is (by the way, you should call it that, IUserRepository).
On to the implementation details:
You have a user repository that talks to a database, so you should implement a DatabaseUserRepository class. This is where the details of creating a connection and handling it are stored. You may decide to keep an open connection for the lifetime of the object, or you may decide it's best to open and close a connection for every operation.
On to the lifetime of the object:
You have a dependency container. You may have decided you want your repository to be used as a singleton because your DatabaseUserRepository class implements atomic, thread-safe operations, or you may want your repository to be transient so a new instance is created because it implements a unit of work pattern which means that all changes are saved together (e.g. EF.SaveChanges()).
See the difference now?
The interface allows for unit testing. Any component that needs data from the database can use a mock repository that loads garbage from memory (e.g. MemoryUserRepository).
The implementation provides a repository that stores users in a database. You may even decide to have two versions of this class that implement the interface along with different strategies or patterns.
The lifetime of the repository will be setup according to the implementation details in the dependency container.
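With a container such as Microsoft.Extensions.DependencyInjection (any container works; DatabaseUserRepository and MemoryUserRepository are the names suggested above), those lifetime decisions become registration details:

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Singleton: safe if DatabaseUserRepository is thread-safe and atomic.
services.AddSingleton<IUserRepository, DatabaseUserRepository>();

// Or transient: a new instance per consumer, e.g. for unit-of-work style saves.
// services.AddTransient<IUserRepository, DatabaseUserRepository>();

// In tests, the same interface resolves to the in-memory double instead.
// services.AddSingleton<IUserRepository, MemoryUserRepository>();
```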
I would create a Connection Factory...
public class ConnectionFactory
{
    public IDbConnection Create()
    {
        // your logic here
    }
}
Now make it a dependency of your repositories, and use it inside your repositories as well... You don't need an IDisposable repository; you need to dispose the connection.
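A repository consuming that factory might be sketched as follows (illustrative; the QueryUserById helper is made up, and the point is that only the connection is disposed, not the repository):

```csharp
public class UserRepository
{
    private readonly ConnectionFactory _connectionFactory;

    public UserRepository(ConnectionFactory connectionFactory)
    {
        _connectionFactory = connectionFactory;
    }

    public User GetUser(int id)
    {
        // The connection is the disposable resource; it lives only for this call.
        using (var connection = _connectionFactory.Create())
        {
            connection.Open();
            return QueryUserById(connection, id); // illustrative helper
        }
    }
}
```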
I'm on my cellphone, so it's hard to give a more detailed example. If you need one, I can edit this later.
Recently I have read many articles on how to use the Unit of Work and Repository patterns, and also different opinions on not using them, because they are just an additional layer of complexity and the Entity Framework DbContext is already such an implementation.
Also, many of the tutorials are simple ones where the code looks like the following:
public async Task ControllerTestMethod(...)
{
    await _testUnitOfWork.TestRepository.AddAsync(...);
    await _testUnitOfWork.CommitAsync();
}
This is kind of the simple variant where the UoW instantiates the TestRepository.
In the solution that we are currently refactoring we have the following layering:
// layer 1 Controllers
public TestController(ITestService testService)
// layer 2 Services
public TestService(ITestRepository testRepository, ITestItemsRepository testItemsRepository)
// layer 3 Repositories
public TestRepository(TestDbContext context)
public TestItemsRepository(TestDbContext context)
A simple flow in the service would be represented by something like
public async Task TestIt()
{
    var test = await _testRepository.GetAsync(...);
    // business logic
    await _testRepository.UpdateAsync(...);
    await _testItemsRepository.UpdateAsync(...);
}
Of course this is not DDD; there is a lack of clearly defined aggregate roots and many other things. But life is not easy: you try to be a boy scout and make things better without being an expert, learning things along the way...
As you may have noticed, the main concerns are the lack of atomicity and transactions, and performance degradation due to multiple round trips to the DB.
The repository calls SaveChangesAsync in each operation hitting the DB each time.
If some repository call fails in the middle, the data might get inconsistent.
At this step of the refactoring, eventual consistency seems to be out of the question, so we are trying to move in the direction of a Unit of Work or transactions.
We are thinking about where the UnitOfWork would fit in, layer-wise.
Questions and discussion points where any help would be appreciated
I. We think that a Unit of Work calling SaveChangesAsync at the end, with the repositories themselves not hitting the DB, would be better than explicit transactions, because of the following:
a. A single round trip to the database (if there are not too many operations in the batch) vs. multiple round trips with an explicit transaction containing multiple SaveChangesAsync calls
b. Taking advantage of EF features (including here also retry execution strategy ...)
.....
II. UnitOfWork unit testing and dependency injection through Service layer
I do not really like the version where the UnitOfWork creates the repository instances and exposes them through getters.
Even though it seems good to have the UnitOfWork control the repositories (because the same DbContext should be used by both the UnitOfWork and the repositories), I have an issue with doing everything through _testUnitOfWork.TestRepository.DoSomething(...): from the service's point of view it does not feel right to have the entire UnitOfWork injected as a dependency into the services, refactoring it to something like
public TestService(IUnitOfWork testUnitOfWork)
...
public async Task TestIt()
{
    var test = await _testUnitOfWork.TestRepository.GetAsync(...);
    // business logic
    await _testUnitOfWork.TestRepository.UpdateAsync(...);
    await _testUnitOfWork.TestItemsRepository.UpdateAsync(...);
    await _testUnitOfWork.CommitAsync();
}
What also feels not entirely clean, but is perhaps the best current solution, is to have the UnitOfWork receive only the DbContext and have no knowledge of the repositories. The code would then look like
public TestService(ITestRepository testRepository, ITestItemsRepository testItemsRepository)
....
public async Task TestIt()
{
    var test = await _testRepository.GetAsync(...);
    // business logic
    await _testRepository.UpdateAsync(...);
    await _testItemsRepository.UpdateAsync(...);
    await _testUnitOfWork.CommitAsync();
}
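In that variant the UnitOfWork can be a thin wrapper over the scoped DbContext. A minimal sketch, assuming EF Core and the TestDbContext from the layering above:

```csharp
using System.Threading.Tasks;

public interface IUnitOfWork
{
    Task CommitAsync();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly TestDbContext _context;

    // The same scoped TestDbContext instance is injected here and into the repositories.
    public EfUnitOfWork(TestDbContext context)
    {
        _context = context;
    }

    // Repositories only stage changes; a single SaveChangesAsync flushes them atomically.
    public Task CommitAsync()
    {
        return _context.SaveChangesAsync();
    }
}
```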
This seems doable through the dependency injection mechanism, using scoped lifetime registrations so that the same DbContext is injected into both the UnitOfWork and the repositories, but I have some concerns:
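The scoped wiring might look like this (ASP.NET Core style; EfUnitOfWork here stands for an assumed IUnitOfWork implementation that wraps the context):

```csharp
// AddDbContext registers TestDbContext with a scoped lifetime by default,
// so within one request all of these resolve against the same context instance.
services.AddDbContext<TestDbContext>(options => options.UseSqlServer(connectionString));
services.AddScoped<IUnitOfWork, EfUnitOfWork>();
services.AddScoped<ITestRepository, TestRepository>();
services.AddScoped<ITestItemsRepository, TestItemsRepository>();
```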
a. Any service will have a UnitOfWork passed in that exposes several repositories through getters, even if the service uses only a couple of them. The other option I see is to have all kinds of units of work combined with different repositories depending on the needs of the service (you might say aggregate roots and a UnitOfWork per aggregate root, but as in the current implementation, there are valid scenarios where only some of the repositories are involved in an operation, and other scenarios where more repositories are involved)
b. Unit testing is not so straightforward, but it might be acceptable to set up the getters of an IUnitOfWork mock to return an ITestRepository mock...
(From what I understand, it is not recommended to have the DbContext injected directly without repositories and to test through an InMemoryDatabase.)
c. What about the scenario where the consumer is a background task or a Windows service or something where the scope and lifetime of the DbContext are not so straightforward? The functionality would be an infinite loop where data is retrieved and updated.
In this case, is it even acceptable to have a single instance of the UnitOfWork and DbContext and call CommitAsync several times through the same instance of TestService, like:
while (true)
{
// do stuff
await _testService.TestIt();
}
Or would it be better to explicitly create a scope on each run, so that each iteration gets different UnitOfWork and DbContext instances, like:
while (true)
{
using (var scope = services.CreateScope())
{
var testService = scope.ServiceProvider.GetRequiredService<ITestService>();
// do stuff
await testService.TestIt();
}
}
III. Where should the UnitOfWork be injected, given that we have reusable services that are part of bigger flows (some kind of enlisting of multiple services in a unit of work), like the following:
public class SuperComplexTestService
{
    public SuperComplexTestService(IUnitOfWork unitOfWork)
    ....
    public async Task TestAll(..)
    {
        await _testService.TestIt(..);
        foreach (.....)
        {
            await someOtherService.DoIt(...);
        }
        await _unitOfWork.CommitAsync();
    }
}
In conclusion, I would be happy to hear any suggestions on how to refactor the current solution (hundreds of services with specific repositories injected) into a solution with transactional support using Entity Framework (preferably UnitOfWork, because of fewer database calls and the other features mentioned...)
Maybe you also have some hints on how this should be implemented from scratch (wishful thinking for a future where time is unlimited and we can rewrite what we want).
Maybe some reference application with a service layer and unit of work (not thinking at this point about eventual consistency or distributed transaction scenarios with multiple data sources).
Take into account the points mentioned, the current architecture, and the fact that we cannot rewrite everything from scratch.
I'm using Elastic Search with my ASP.NET Core Web API. I don't quite get where the line is drawn when it comes to repository responsibility.
Here is how I have defined my implementations:
public class SearchRepository : ISearchRepository<Product>
{
    private ElasticClient _client;

    public async Task<ISearchResponse<Product>> SearchAsync(ISearchRequest request)
    {
        return await _client.SearchAsync<Product>(request);
    }

    // ... others
}
In my controller:
public class SearchController : Controller
{
    private ISearchRepository _repo;

    public SearchController(ISearchRepository repo)
    {
        _repo = repo;
    }

    public async Task<IActionResult> Search()
    {
        // build my search request from Request.Query
        var response = await _repo.SearchAsync(request);
        var model = new SearchModel
        {
            Products = response.Documents,
            Aggregations = response.Aggregations
        };
        return Ok(model);
    }
}
As it stands, the repo passes the Elastic Search response through as-is. My question is: have I drawn the line right? What if I moved _client into my controller, or moved building the request and constructing the model into _repo? How do you get your repositories right?
The fact that you use Elastic Search should be an implementation detail that especially the controller shouldn't know about, so you are absolutely right in abstracting this away from the controller. I often look at the SOLID principles to get a sense whether I'm on the right track or not. If we look at the Dependency Inversion Principle, you'll see that it guides us towards a style that is also known as Ports and Adapters, which basically means that the use of an external tool is abstracted away (the port), and on the boundary of the application you implement an Adapter that connects to that third party.
So from the sense of the Dependency Inversion Principle, you're on the right track.
There is however a lot of misunderstanding of what Martin Fowler's Repository Pattern is trying to solve. The definition is as follows:
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.
Important to note here is that a repository is intended to be used by the domain layer.
There is however a lot of misuse of the Repository pattern, because many developers start to use it as a grouping structure for queries. The repository is, as I see it, not intended for all queries in the system, but only for the queries the domain needs. These queries support the domain in making decisions about mutations of the system.
Most queries that your system requires, however, are not this kind of query. Your code is a good example, since in this case you skip the domain completely and only perform a read operation.
This is something that is not suited for the repository. We can verify this by comparing it with the SOLID principles again.
Let's say we have the following repository interface:
public interface IUserRepository
{
    User[] FindUsersBySearchText(string searchText, bool includeInactiveUsers);
    User[] GetUsersByRoles(string[] roles);
    UserInfo[] GetHighUsageUsers(int reqsPerDayThreshold);
    // More methods here
}
This is a typical repository abstraction that you'll see developers write. Such abstraction is problematic from perspective of the SOLID principles, because:
The Interface Segregation Principle is violated, because the interfaces are wide (have many methods) and consumers of those interfaces are forced to depend on methods that they don’t use.
The Single Responsibility Principle is violated, because the methods in the repository implementation are not highly cohesive. The only thing that relates those methods is the fact that they belong to the same concept or entity.
The design violates the Open/Closed Principle, because almost every time a query is added to the system, an existing interface and its implementations need to be changed. Every interface has at least two implementations: one real implementation and one test implementation.
A design like this also causes a lot of pain down the road, because it becomes hard to apply cross-cutting concerns (like security, auditing, logging, caching, etc) down the line.
So this is not something the Repository pattern is intended to solve; such design is just a big SOLID violation.
The solution here is to model queries separately in your system and not use a repository at all. There are many articles written about this, and you can read my take on this here.
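The shape this leads to looks roughly like the following sketch (the interface and class names are illustrative, and the handler body is deliberately left open):

```csharp
// Each query is a technology-agnostic message describing what the caller wants...
public class FindProductsQuery
{
    public string SearchText { get; set; }
}

// ...handled behind one generic abstraction.
public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

// The Elastic Search specifics live only in the adapter at the boundary.
public class ElasticFindProductsHandler : IQueryHandler<FindProductsQuery, Product[]>
{
    public Product[] Handle(FindProductsQuery query)
    {
        // translate FindProductsQuery into an Elastic Search request here
        throw new System.NotImplementedException();
    }
}
```

Cross-cutting concerns (caching, logging, security) can then be wrapped around every handler with decorators, just like the transaction decorator shown earlier in this thread.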
If I look at your design, it actually bears some resemblance to the design I'm promoting here, since you seem to have a generic query method that can handle many types of queries. The query message (your ISearchRequest) seems, however, specific to Elastic Search. That is something you should strive to prevent, as the Dependency Inversion Principle states.
I'm a little bit familiar with Entity Framework for some simple projects, but now I want to go deeper and write better code.
There are plenty of topics discussing whether to use static methods in the DAL or not. For the moment I'm more on the side of the people who think yes, we can use static methods.
But I'm still thinking if some practices are good or not.
Lot of people do like this:
public IList<Person> GetAll()
{
    using (var dbContext = new MyDbContext())
    {
        return dbContext.Persons.ToList();
    }
}
But I'm wondering if doing like this is a good practice:
public static IQueryable<Person> GetAll()
{
    var dbContext = new MyDbContext();
    return dbContext.Persons;
}
The goal is to use only static methods in a static class, which I think is legitimate because this class is just a DAL; there will never be any property. I also need to do it like this instead of using a using() scope, to avoid disposing the context, since this method returns an IQueryable.
I'm sure some people already think "OMG no, your context will never be disposed", so please read this article: http://blog.jongallant.com/2012/10/do-i-have-to-call-dispose-on-dbcontext.html
I tried it myself, and yes, the context is disposed only once I no longer need it.
I repeat: the goal here is to use static methods, so I can't use a dbContext field that the constructor instantiates.
So why people always use the using() scope?
Is it a bad practice to do it like I would like to?
Another bonus question: where is the [NotMapped] attribute in EF6? I've checked both System.ComponentModel.DataAnnotations and System.ComponentModel.DataAnnotations.Schema but can't find it; the attribute is not recognized by the compiler.
Thanks for your answers
Following the Repository pattern, IQueryable<T> shall never be returned anyway.
Repository pattern, done right
Besides, your repositories depend on your DbContext. Let's say you have to work on customers in an accounting system.
Customer
public class Customer
{
    public int Id { get; protected set; }
    public string GivenName { get; set; }
    public string Surname { get; set; }
    public string Address { get; set; }
}
CustomerRepository
public class CustomerRepository
{
    private readonly DbContext context;

    public CustomerRepository(DbContext context)
    {
        if (context == null) throw new ArgumentNullException("context");
        this.context = context;
    }

    public IList<Customer> GetAll() { return context.Customers.ToList(); }

    public IList<Invoice> GetInvoicesFor(Customer customer)
    {
        return context.Invoices
            .Where(invoice => invoice.Customer.Id == customer.Id)
            .ToList();
    }
}
So in fact, to answer your question more concisely and precisely, I think neither approach is good. I would rather use a DbContext per business concern. When you access, let's say, the customer-management features, instantiate a single DbContext that is shared across all of the required repositories, then dispose of this very DbContext once you exit that set of features. This way you won't have to use using statements, and your contexts will be managed adequately.
Here's another short and simple good reference for the Repository pattern:
Repository (Martin Fowler)
In response to comments from the OP
But actually the point is I don't want to follow the Repository pattern. People say, "what if your data source changes?" I want to answer: what if it never changes? What's the point of having such a powerful class but not using it, just in case the database provider may change one day?
Actually, the Repository pattern doesn't only serve the purpose of easier data source changes; it also encourages better separation of concerns and a more functional approach closer to the business domain, since the members of the repository all revolve around business terminology.
For sure the repository itself cannot take control of disposing a data context or whatever object it uses to access the underlying data source, since the context doesn't belong to it; it is only lent to it so that it can fulfill its tasks.
As for your point about whether the data source will change someday: no one can predict it. It is most likely never to change in most of the systems I have worked on. A database change is more likely to be seen after ten years of the initial development, for modernization purposes. That day, however, you'll understand how the Repository pattern saves you time and headaches compared to tightly coupled code. I work with tightly coupled code in legacy systems, and I see the benefits firsthand. Prevention is better than cure.
But please let's focus on instantiating the dbContext in methods without the using() statement. Is it really bad? I mean, when we inject the context in the constructor we don't handle Dispose() either; we let Entity Framework do it, and it manages it pretty well.
No, it isn't necessarily bad not to use using statements, as long as you dispose of all unnecessary resources once they are no longer used. The using statement serves this purpose by doing it automatically, instead of you having to take care of it.
As for the Repository pattern, it can't dispose the context that is passed to it, and it shouldn't be disposed there either, because the context is actually scoped to a certain matter and is used across other features within a given business context.
Let's say you have Customer management features. Within them, you might also require to have the invoices for this customer, along with the transaction history. A single data context shall be used across all of the data access as long as the user works within the customer management business context. You shall then have one DbContext injected in your Customer management feature. This very same DbContext shall be shared across all of the repositories used to access your data source.
Once the user exits the customer-management functionality, the DbContext should be disposed of accordingly, as it may otherwise cause memory leaks. It is false to believe that as soon as it is no longer used, everything gets garbage collected. You never know how the .NET Framework manages its resources or how long it will take to dispose of your DbContext. You only know that it might get disposed somehow, someday.
If the DbContext gets disposed immediately after a data access is performed, you'll have to instantiate a new instance everytime you need to access the underlying data source. It's a matter of common sense. You have to define the context under which the DbContext shall be used, and make it shared across the identified resources, and dispose it as soon as it is no longer needed. Otherwise, it could cause memory leaks and other such problems.
In response to comment by Mick
I would go further and suggest you should always return IQueryable to enable you to reuse that result passing it into other calls on your repositories. Sorry but your argument makes absolutely no sense to me. Repositories are not meant to be stand-alone one stop shops they should be used to break up logic into small, understandable, encapsulated, easily maintained chunks.
I disagree with always returning IQueryable<T> from a Repository; otherwise, what are the multiple methods good for? To retrieve the data within your repository, one could simply do:
public class Repository<T> where T : class
{
    private readonly DbContext context;

    public Repository(DbContext dataContext) { context = dataContext; }

    public IQueryable<T> GetAll() { return context.Set<T>(); }
}
and place predicates everywhere in your code to filter the data per the views' needs or whatsoever. When it is time to change a filter criterion or the like, you'll have to browse all of your code to make sure no one used a filter that was actually unexpected and might cause the system to misbehave.
On a side note, I do understand your point, and I might admit that for the reasons described in your comment, it can be useful to return IQueryable<T>. Still, I wonder what good it is, since a repository's responsibility is to provide a class with everything it needs to get its information. If one needs to pass IQueryable<T>s along to another repository, it sounds to me as if every possible way to retrieve the data hasn't been fully investigated. If one needs some data to process another query, lazy loading can do it, with no need to return IQueryables. As the name says, IQueryables are made to perform queries, and it is the Repository's responsibility to access the data, that is, to perform the queries.
I implemented the Repository pattern in my application's data access layer.
I have :
public class EFRepository<T> where T : class
{
    protected DbContext DataContext; // my db context

    public IQueryable<T> GetQuery()
    {
        return DataContext.Set<T>();
    }
}
Now let's say I have a user repository:
public class UserRepository : EFRepository<User>
{
    public User GetUserDetails(int userId)
    {
        return GetQuery().Where(u => u.Id == userId).First();
    }
}
My problem is how to release the DbContext when I use the EF repository in derived repositories.
Let's say UserRepository : EFRepository<User> uses GetQuery(); then I have to dispose of the context.
Any good ideas on how to handle this in a generic repository?
You should think about what your unit of work is (there are many tutorials on the internet). The idea is to keep the same DbContext and re-use it while within the same unit of work. This way, entities will already be attached to the context when you need them, etc.
Now, this being a web application, your unit of work would in this case be a request. While in the same request, reuse your DbContext. There are many ways to do this; just off the top of my head, you will want something like OnActionExecuting, where you take care of your context.
But even better would be to use an Inversion of Control container (there are many frameworks out there; I primarily use Ninject). It will automatically create a new instance of a certain class when needed, depending on the scope you configured - in this case a per-request scope. There is a lot more to say about IoC, but that is beyond the scope of the question.
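With Ninject, that per-request scoping is a one-liner per binding. A sketch, assuming the Ninject.Web.Common package and an illustrative MyDbContext class:

```csharp
using Ninject.Modules;
using Ninject.Web.Common;

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // One DbContext per web request, shared by every repository resolved within it.
        Bind<MyDbContext>().ToSelf().InRequestScope();
    }
}
```

Ninject then disposes the context at the end of the request, so the repositories never have to.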
I have used a similar pattern in the past and in my case I actually inherited from DbContext, which itself implements IDisposable. Provided you are using EFRepository or classes derived from it in a using block you should be fine.
If you would prefer a DbContext member variable, then EFRepository will need to implement IDisposable and call DbContext.Dispose() from its Dispose method.
Environment: ASP.NET MVC3 C#
Say I have some repository (semi-pseudo):
public interface IRepository
{
    create(); read(); update(); delete(); opendb(); closedb();
}

public class CarRepository : IRepository
{
    private DbContext namedDbContext;

    public void opendb()
    {
        namedDbContext = new DbContext();
    }

    public void closedb()
    {
        namedDbContext.Dispose();
    }
}
And then in a controller the repository is injected and used as follows to manually control the db connection lifetime:
public class SomeController : Controller
{
    private IRepository CarRepository;

    public SomeController(IRepository _carRepository)
    {
        CarRepository = _carRepository;
    }

    public ActionResult SomeAction(int CarId)
    {
        CarRepository.opendb();
        var car = CarRepository.read(CarId);
        CarRepository.closedb();
    }
}
Is this considered bad practice because it is taking the control of the connection from the repository and placing it in the controller? I am worried about memory leaks from using dependency injection and want to ensure duplicate connections are not opened, nor long running and unused.
Yes. Sure. Most ADO.NET drivers use connection pooling, so the actual connection process isn't that heavy. And you have TransactionScope, which can take care of a transaction over multiple connections, but it won't be as fast as one transaction over one connection.
I am worried about memory leaks from using dependency injection and want to ensure duplicate connections are not opened, nor long running and unused.
An IoC container is guaranteed to clean up the connection (a large user base has made sure of that). There is no guarantee that a programmer will do the cleanup in all places.
The Repository pattern provides an abstraction of the persistence layer. It shouldn't expose any of the persistence details, such as a db connection. What if the storage is an XML file, or cloud storage?
So yes, it is bad practice. If you want more control, you might make the repository use the Unit of Work pattern, so that a higher level decides when a transaction is committed, but that's it. No knowledge of the database should be exposed by the repository.
As for memory leaks, make the repository implement IDisposable (where you close any outstanding open connections) and just make sure that the DI container manages one repository instance per request; it will call Dispose on it.
Part of a repository is abstracting away the details of persistence.
I see two problems with your proposal:
You are leaking the abstraction more than necessary by naming these methods "opendb" and "closedb" and
If you go down this route, you should return IDisposable (the connection object) from the opendb() method, and wrap the action in a using block to ensure that the connection gets closed.
Typically, you can just let the repository create a connection for each method, so you just have to get it right in your repository methods. The challenge comes when you want to perform multiple actions against the repository, without using a separate connection for each piece.
To achieve that, you could expose the notion of a unit-of-work from the repository. Your unit of work will implement the interface for the repository's methods, so you can't call them outside of a unit-of-work. It will also implement IDisposable, so whenever you call into your repository you will use a using block. Internally, the repository will manage the connection, but will neither expose the connection nor "talk about it."
For example:
public ActionResult SomeAction(int CarId)
{
    using (var repo = CarRepository.BeginUnitOfWork())
    {
        var car = repo.read(CarId);
        // do something meaningful with the car, do more with the repo, etc.
    }
}
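A sketch of what BeginUnitOfWork() might hand back (illustrative; the class name, constructor, and read body are assumptions, and the point is that the connection is managed internally and never exposed):

```csharp
using System;
using System.Data.SqlClient;

public class CarUnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;

    internal CarUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open(); // one connection for the whole unit of work
    }

    public Car read(int carId)
    {
        // every repository method reuses _connection without ever exposing it
        throw new NotImplementedException();
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}
```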