What are the best practices to connect to PostgreSQL from a microservice - C#

I have a multi-tenant C# project with 60 microservices connected to multiple PostgreSQL databases. I'm opening and closing a connection for each transaction, and I'm not sure this is best practice.
Should each microservice open one connection per database and reuse it across all activities, or open/close a connection for each transaction?

You should consider using the Unit of Work pattern together with Dependency Injection.
Here is a Microsoft document explaining the Unit of Work pattern:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
If this approach isn't valid for you, using an ORM like Dapper or Entity Framework directly is another option, but in that case I strongly recommend isolating the data access in "repository" classes. That way, every time you instantiate and use a repository (via Dependency Injection) you won't have to deal with connection details: the connection is opened when the repository is created and closed when its Dispose method is called.
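As a minimal sketch of what such a repository could look like with Npgsql and Dapper (the CustomerRepository and Customer names, the customers table, and the tenant_id column are illustrative assumptions, not taken from the question):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Npgsql;

public sealed record Customer(int Id, string Name);

// Sketch: a repository that owns its connection's lifecycle.
public sealed class CustomerRepository : IDisposable
{
    private readonly NpgsqlConnection connection;

    public CustomerRepository(string connectionString)
    {
        // Opened when the repository is instantiated (e.g. by the DI container)...
        connection = new NpgsqlConnection(connectionString);
        connection.Open();
    }

    public Task<IEnumerable<Customer>> GetByTenantAsync(int tenantId)
    {
        // Dapper executes the query on the repository's open connection.
        return connection.QueryAsync<Customer>(
            "SELECT id, name FROM customers WHERE tenant_id = @tenantId",
            new { tenantId });
    }

    // ...and released when the container disposes the repository.
    public void Dispose() => connection.Dispose();
}

Note also that Npgsql pools connections by default, so closing a connection normally returns it to the pool rather than tearing down the physical socket; that is why the open/close-per-transaction pattern from the question is usually not a performance problem.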

Related

How and where to implement transactions when using the service and repository patterns? [duplicate]

This question already has answers here: How to nest transactions in EF Core 6?
I am trying to implement the service and repository patterns in my ASP.NET Core 6 based web application (using EF Core), but I think I am doing a few things wrong. In my controllers I inject a service instance which I can then use like this to get, create, update or delete entities:
[HttpPost]
public async Task<IActionResult> CreateProject([FromBody] Project body)
{
    int projectId = await this.projectService.CreateProjectAsync(body);
    return CreatedAtAction(nameof(GetProject), new { id = projectId }, null);
}
The CreateProjectAsync function then performs validations (if necessary) and calls the corresponding CreateProjectAsync function of the ProjectRepository class. One important thing to note is that the Project class is created by myself and serves as a view model, too. It is mapped to the corresponding EF Core type (such as TblProject) in the repository before it is created/updated in the database or after it has been read from the database.
This approach works fine in many cases but I often encounter problems when I need to use transactions. One example would be that in addition to projects, I also have related entities which I want to create at the same time when creating a new project. I only want this operation to be successful when the project and the related entities were both successfully created, which I cannot do without using transactions. However, my services are not able to create a transaction because they are not aware of the EF Core DbContext class, so my only option right now is to create the transactions directly in the repository. Doing this would force me to create everything in the same repository, but every example I've seen so far suggests to not mix up different entities in a single repository.
How is this usually done in similar projects? Is there anything wrong with my architecture which I should consider changing to make my life easier with this project?
Great question! By default, the EF context is registered as a scoped dependency, so there is one per request. You could create a simple transaction service (also registered as a scoped dependency) that exposes transaction handling. This service could be injected into both your repositories and other service layers as needed.
By returning the transaction object when a transaction is begun, you give each layer autonomy over committing or rolling back its own transaction.
You could also add cleanup logic to the Dispose method of the transaction service to commit or roll back any open transaction when the service is disposed.
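A minimal sketch of what such a service could look like (the AppDbContext name is an assumption; BeginTransactionAsync and IDbContextTransaction are standard EF Core APIs):

using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.Storage;

// Sketch: register as scoped, alongside the DbContext:
// builder.Services.AddScoped<TransactionService>();
public sealed class TransactionService : IDisposable
{
    private readonly AppDbContext context; // assumed application DbContext
    private IDbContextTransaction? current;

    public TransactionService(AppDbContext context) => this.context = context;

    // Returns the transaction so the calling layer decides when to commit or roll back.
    public async Task<IDbContextTransaction> BeginAsync()
    {
        current = await context.Database.BeginTransactionAsync();
        return current;
    }

    // Cleanup: disposing an uncommitted transaction rolls it back.
    public void Dispose() => current?.Dispose();
}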
Edit, from personal practice:
I have abandoned the repository pattern in several projects, except for frequent get operations. Using the EF context directly in service layers lets me take advantage of navigation properties to perform complex multi-table inserts or edits. I also get the added benefit of a single round trip to the database to execute these operations, as well as implicit transaction wrapping. Personally, I find it more and more difficult to make the case for the repository pattern on EF projects, other than that you get firmly locked into EF, because it ends up all over the lower service layers.
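For example, a multi-table insert through navigation properties needs no explicit transaction, because SaveChangesAsync wraps it in an implicit one (TblProject is from the question above; the Tasks navigation property, TblTask, and the TblProjects DbSet are illustrative assumptions):

// Both the project row and its task rows are saved in one implicit transaction.
var project = new TblProject
{
    Name = "New project",
    Tasks =
    {
        new TblTask { Title = "First task" },
        new TblTask { Title = "Second task" },
    },
};
context.TblProjects.Add(project); // related TblTask rows are tracked via the navigation property
await context.SaveChangesAsync(); // written atomically; rolled back as a whole on failure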

OData queries and disposable SQL connections

When implementing an OData service using Web API 2.0, the standard practice is to expose an IQueryable from your service to the ApiController's action. This way the framework can apply the OData query to your IQueryable.
I'm currently reading, among other things, about how important it is to always call Dispose on your database connections, preferably by using a "using" statement like so:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Execute operations against the database
} // Connection is automatically closed.
My question is: when is the database connection closed in the case of OData? You obviously can't dispose of the connection yourself before the framework applies the OData query - this would throw an exception.
A side topic: do you agree with exposing IQueryable<> from your services? I've read about this and it's a long-debated issue - some argue that database work should be contained in the repository, while others like to give querying freedom to the services' clients. I agree with containing the queries in the repository, but in the case of OData I don't like to over-complicate things: if the framework expects IQueryable, then I give it IQueryable. What do you think?
Usually in such cases the database connection is closed when the ASP.NET controller instance is disposed. Suppose you use an Entity Framework context to run your queries. You then create that context instance (maybe lazily) when needed, override the Dispose method of ODataController, and dispose of the context there. For example, take a look at this article: http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/create-an-odata-v4-endpoint.
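A minimal sketch of that pattern (MyDbContext and Product are illustrative assumptions; ODataController and EnableQuery come from the Web API 2 OData packages):

using System.Linq;
using System.Web.OData;

public class ProductsController : ODataController
{
    private readonly MyDbContext db = new MyDbContext(); // assumed EF context

    [EnableQuery]
    public IQueryable<Product> Get()
    {
        // Not executed yet: the framework applies the OData query options first.
        return db.Products;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Runs when the controller is disposed, after the query has executed.
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}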
I personally never use this approach because I prefer to have more control over the allowed operations. However, I see no harm in allowing read access to certain tables via the IQueryable approach described above in certain cases - namely, when the client has rich filtering capabilities. In such cases you would end up reinventing this approach anyway, with custom filters or a querying method that accepts a lot of parameters.

Should I be opening a database connection in my controllers?

My controllers need to get data that is passed back to the view. Where should I open the connection that I pass to the repository? Here is an example from one of my controllers:
using (var connection = this.GetActiveConnection())
{
    var repository = new RefRepository(connection);
    var codes = repository.GetPoACodes();
}
Is it bad practice to open a connection in the controller? If I shouldn't pass it in from the controller, where should the connection be supplied to the repository?
Repositories should handle connections themselves; it shouldn't be a controller concern.
Controller classes should remain thin; a fat controller is a code smell.
It would be very good practice to use a Dependency Injection framework (like Ninject, StructureMap, ...) to wire up those dependencies: the DbContext (EF) or Session and SessionFactory (NHibernate), transactions - namely the Unit of Work pattern - and exception handling and logging, if you want to go that far.
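As a rough sketch of that wiring with Ninject (IRefRepository is an assumed abstraction extracted from the question's RefRepository; the controller name is illustrative):

using Ninject.Modules;
using Ninject.Web.Common;
using System.Web.Mvc;

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // The repository opens and disposes its own connection; one instance per web request.
        Bind<IRefRepository>().To<RefRepository>().InRequestScope();
    }
}

public class PoACodesController : Controller
{
    private readonly IRefRepository repository;

    public PoACodesController(IRefRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        // The controller never touches the connection.
        return View(repository.GetPoACodes());
    }
}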
If you're using Visual Studio, there is a project template option that creates repositories as well; you can create a sample project, read the code, and learn how it is organized.
In this article, under the Web API header, you can find out how to do it.
No, you should not open a database connection in your controller.
The controller should talk to your "domain model", which is not the same as your database schema.
In your case, the repository should probably handle it.
Better yet, if you're using Entity Framework or NHibernate, for example, it'd be better to let them handle that.

Architecture for database-aware Application

I'm looking for a reference implementation of the "Unit of Work" and "Repository" patterns for MS SQL Server with plain old ADO.NET. But all the samples I find are built around an existing context like LINQ to SQL or EF. As I understand it, those technologies themselves almost implement these patterns.
But how do I deal with a "plain" SQL repository without any context or SaveChanges() method? Is TransactionScope the right way? For example: collect all SQL operations in a list of commands and then simply execute them one after another within a TransactionScope... or is that too simplistic?
Why am I looking for this? I have the task of building a data layer that can deal with both an ancient Sybase database and SQL Server (possibly, in addition, in conjunction with a POCO-based EF4 component).
My idea is to create an abstraction layer with the Repository and Unit of Work patterns and provide a different implementation for each technology.
Update:
I was on vacation last week; sorry for the delay. Today I put together a basic picture of my architecture (s7.directupload.net/file/d/2570/whb7ulbs_jpg.htm). My idea is to create a simple context, similar to the EF ObjectContext, that exists in parallel with the EF context and is used by my repository. This context collects atomic SQL operations in a kind of stack and executes them within a transaction in the Unit of Work part. Good idea? Bad idea? Hard to do? I'm looking forward to your views on this.
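A rough sketch of that idea with plain ADO.NET, as I picture it (all names are illustrative; this is one possible shape, not a reference implementation):

using System;
using System.Collections.Generic;
using System.Data;

// Sketch: repositories enqueue atomic operations; Commit runs them in one transaction.
public sealed class SqlUnitOfWork
{
    private readonly Queue<Action<IDbCommand>> operations = new Queue<Action<IDbCommand>>();
    private readonly Func<IDbConnection> connectionFactory; // Sybase or SQL Server, chosen by caller

    public SqlUnitOfWork(Func<IDbConnection> connectionFactory)
    {
        this.connectionFactory = connectionFactory;
    }

    public void Enqueue(Action<IDbCommand> operation) => operations.Enqueue(operation);

    public void Commit()
    {
        using (var connection = connectionFactory())
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                while (operations.Count > 0)
                {
                    using (var command = connection.CreateCommand())
                    {
                        command.Transaction = transaction;
                        operations.Dequeue()(command); // sets CommandText/parameters and executes
                    }
                }
                // Any exception above skips this line, and disposing the transaction rolls back.
                transaction.Commit();
            }
        }
    }
}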
I don't envy your task; supporting multiple backend databases in your application is going to be tricky.
Here's an example of a Unit Of Work pattern using ASP.NET MVC and LightSpeed: link
Personally, I would use EF or NHibernate (preferring EF); SQL Anywhere supports ADO.NET and Entity Framework, so (ideally) you wouldn't need to do anything special to support that database.
Good luck!
If you are just worried about transaction scope, let me point you towards the System.Transactions library and its TransactionScope class. It's a great class: any SQL or other transaction-managed system that is manipulated on the same thread that instantiated the transaction scope is automatically enlisted in the transaction. That way, if any part of the code fails and throws an exception, you simply don't call the scope.Complete() method, and all operations within the transaction scope are rolled back.
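The basic usage pattern looks like this (the Accounts table and connectionString are illustrative):

// Connections opened inside the scope enlist in the ambient transaction automatically.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var command = new SqlCommand(
        "UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1", connection))
    {
        command.ExecuteNonQuery();
    }

    // Omit this call (e.g. because an exception was thrown) and disposing
    // the scope rolls back everything done inside it.
    scope.Complete();
}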

Suggestions for pluggable database layer

I am currently working on a software project which uses a SQL Server database to store data. Since there are plans to move away from SQL Server to Oracle, one of the requirements is to build a pluggable database layer.
So my question is what is the best way to do it?
You have lots of choices. One option is to go with one of the various Object-Relational Mapper (ORM) frameworks out there. NHibernate is a popular one, but Microsoft's Entity Framework (in v4) is a reasonable possibility as well, and it integrates better with LINQ, if that's the sort of thing you're interested in.
A second option (not necessarily exclusive of the above) is to implement something like the Repository pattern and run all database access through a repository layer. Some folks see ORM frameworks as a replacement for the Repository pattern; others see a repository layer as adding value on top of the ORM framework. The real value a repository gives you is the ability to swap out ORM layers. Only you know whether that's a reasonable likelihood, and even if it is, it may be more work to implement the additional repository level than to simply re-bind everything to the new ORM.
I suggest using Entity Framework: it is quite a bit easier than NHibernate and is maturing very fast. For Oracle support in EF, you'll need the Oracle provider.
Along with Entity Framework, I suggest using a pattern such as the Repository pattern. That way you can use Dependency Injection to swap in the implementation of your choice, and your application becomes independent of the database and of the ORM itself. So you could also use NHibernate if you wish.
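For instance, the abstraction can be as small as this (ICustomerRepository and the Customer type are illustrative assumptions):

using System;

// Consumers depend on the interface, never on EF or NHibernate directly.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

public class EfCustomerRepository : ICustomerRepository
{
    // Backed by an EF DbContext (omitted in this sketch).
    public Customer GetById(int id) { throw new NotImplementedException(); }
    public void Add(Customer customer) { throw new NotImplementedException(); }
}

public class NHibernateCustomerRepository : ICustomerRepository
{
    // Backed by an NHibernate ISession (omitted in this sketch).
    public Customer GetById(int id) { throw new NotImplementedException(); }
    public void Add(Customer customer) { throw new NotImplementedException(); }
}

// Swapping ORMs then means changing a single DI registration, e.g.:
// services.AddScoped<ICustomerRepository, EfCustomerRepository>();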
I would recommend using NHibernate for your data layer. With it, you can easily swap out the configuration to work with almost any database driver you want.
NHibernate has good support for both MS SQL and Oracle.
You can write all your queries in Hibernate's query language (HQL), which is dialect-agnostic. Another option is to use the LINQ provider in NHibernate 3 to get strongly typed data access.
As some others have mentioned, I would also recommend using the Repository pattern and injecting a Unit of Work or a SessionFactory.
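For example, an HQL query through an injected ISessionFactory stays dialect-agnostic (the Product entity is an illustrative assumption):

// HQL queries entities, not tables; NHibernate's dialect generates
// the right SQL for MS SQL or Oracle from the same query string.
using (var session = sessionFactory.OpenSession())
{
    var expensive = session
        .CreateQuery("from Product p where p.Price > :minPrice")
        .SetParameter("minPrice", 100m)
        .List<Product>();
}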
Edit: Oracle has now released a beta of their Entity Framework provider: http://thedatafarm.com/blog/data-access/oracle-entity-framework-beta-released-today/
If you want to go the ORM way, you can use NHibernate as alexn suggested, or Telerik's OpenAccess ORM. There are also EF providers for Oracle and other DBMSes, but they are not free.
If you want to build your whole DAL from scratch, then go with the Repository pattern. Create an interface for the repository and a repository implementation for each database. Here's a discussion about it.
