I'm trying to implement the Unit of Work repository pattern with Dapper in an ASP.NET Core Web API.
I have created the model, repository, and UoW. When I try to do a GET request I get an error:
System.InvalidOperationException: BeginExecuteReader requires the
command to have a transaction when the connection assigned to the
command is in a pending local transaction. The Transaction property
of the command has not been initialized.
Here is my controller:
public class CityController : ControllerBase
{
private readonly IUnitOfWork unitOfWork;
public CityController(IUnitOfWork unitOfWork)
{
this.unitOfWork = unitOfWork;
}
[HttpGet]
public async Task<IEnumerable<City>> GetAll()
{
return await unitOfWork.Cities.All();
}
}
CityRepository.cs
internal class CityRepository : GenericRepository<City>, ICityRepository
{
public CityRepository(IDbTransaction transaction)
: base(transaction)
{
}
public async Task<IEnumerable<City>> All()
{
var model = await Connection.QueryAsync<City>("SELECT * FROM DT_Inspection.City");
return model.ToList();
}
}
UnitOfWork.cs
public IConfiguration configuration;
private IDbTransaction _transaction;
private IDbConnection _connection;
ICityRepository _cityRepository;
private bool _disposed;
public UnitOfWork(IConfiguration _configuration)
{
configuration = _configuration;
_connection = new SqlConnection(_configuration.GetConnectionString("DefaultConnection"));
_connection.Open();
_transaction = _connection.BeginTransaction();
}
public ICityRepository Cities { get { return _cityRepository ?? (_cityRepository = new CityRepository(_transaction)); } }
public void Commit()
{
try
{
_transaction.Commit();
}
catch
{
_transaction.Rollback();
throw;
}
finally
{
_transaction.Dispose();
_transaction = _connection.BeginTransaction();
resetRepositories();
}
}
For starters, that's the exact opposite of a Unit of Work. Unit of work means you have a single, indivisible bunch (a unit) of operations (work) that needs to be committed or discarded as one. Once it completes, it's gone and can't be reused. That's a feature.
A UoW typically implies that the work doesn't affect the data source until it's committed, but your code starts an expensive long-lived transaction that does lock records from the very first read.
The class you use, though, creates a global long-lived connection and a global, implicit transaction. That's a very bad practice. These lines specifically are a major bug:
_transaction = _connection.BeginTransaction();
resetRepositories();
You could achieve a similar effect in most databases through connection settings, but very few people do this, and for good reason.
Database connections and transactions are meant to be short-lived. Otherwise they accumulate locks and tie up resources on the server, causing blocking, long delays or even deadlocks between different transactions, even with just a couple of concurrent clients. This was a huge problem in the 1990s, before disconnected operations and optimistic concurrency were introduced. What you're trying to do puts you back in the 1990s.
The difference really can be 1000x worse performance and needing 10x+ more database servers to handle the same amount of traffic.
That's why the docs, courses and tutorials (the good ones) all show connections and transactions created right before they're used:
using (var cn = new SqlConnection(...))
{
    cn.Open();
    using (var tx = cn.BeginTransaction())
    {
        using (var cmd1 = new SqlCommand(sql1, cn, tx))
        {
            ...
        }
        using (var cmd2 = new SqlCommand(sql2, cn, tx))
        {
            ...
        }
        tx.Commit();
    }
}
If you use an explicit database transaction, you must pass the active transaction to the command itself. That's what the exception you got says. The alternative is to use a TransactionScope and open the connections inside it; in that case, the connection is implicitly enlisted in the transaction:
using (var cn = new SqlConnection(...))
{
    using (var scope = new TransactionScope())
    {
        cn.Open();
        using (var cmd = new SqlCommand(sql, cn))
        {
            ...
        }
        ...
        scope.Complete();
    }
}
Dapper is a thin mapper over ADO.NET; it doesn't replace it. This means you still have to use ADO.NET connections and transactions correctly. If you want to use explicit transactions, you need to pass the transaction through the transaction parameter of Query or Execute:
using (var cn = new SqlConnection(...))
{
    cn.Open();
    using (var tx = cn.BeginTransaction())
    {
        var results1 = await cn.QueryAsync<City>(sql1, transaction: tx);
        var results2 = await cn.QueryAsync<City>(sql2, transaction: tx);
        tx.Commit();
    }
}
Or you can use a TransactionScope:
// TransactionScopeAsyncFlowOption lets the ambient transaction flow across awaits
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    using (var cn = new SqlConnection(...))
    {
        cn.Open();
        var results1 = await cn.QueryAsync<City>(sql1);
        var results2 = await cn.QueryAsync<City>(sql2);
        scope.Complete();
    }
}
The abstraction is already leaking: the "repository" (it's actually a Data Access Object, not a Repository) needs access to the _transaction field's value. Or you could use a TransactionScope and forget about that UoW; after all, access to the database is the DAO/Repository's job, not the UoW's. Maybe you could use the UoW as a thin wrapper over a TransactionScope, or have the Repository create and initialize the UoW with an explicit transaction from the connection it owns.
Assuming you use a TransactionScope, your UoW should be nothing more than a wrapper:
class UnitOfWork : IDisposable
{
    TransactionScope _scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

    // nothing is committed unless Commit is called before Dispose
    public void Commit() => _scope.Complete();

    public void Dispose() => _scope.Dispose();
}
The "repository" shouldn't even know about the UoW. It should control connections though:
internal class CityRepository
{
    string _connString;

    public CityRepository(IConfiguration configuration)
    {
        _connString = configuration.GetConnectionString("DefaultConnection");
    }

    public async Task<IEnumerable<City>> All()
    {
        using (var cn = new SqlConnection(_connString))
        {
            var model = await cn.QueryAsync<City>("SELECT * FROM DT_Inspection.City");
            return model.ToList();
        }
    }
}
Only the controller would need to create the UoW, and then only if there's any chance of modifying data. Reads don't need transactions:
public class CityController : ControllerBase
{
    private ICityRepository _cityRepo;

    public CityController(ICityRepository cityRepo)
    {
        _cityRepo = cityRepo;
    }

    [HttpGet]
    public Task<IEnumerable<City>> GetAll()
    {
        return _cityRepo.All();
    }

    [HttpPost]
    public async Task Post(City[] cities)
    {
        using (var uow = new UnitOfWork())
        {
            foreach (var city in cities)
            {
                await _cityRepo.Insert(city);
            }

            uow.Commit();
        }
    }
}
Based on these two samples
https://github.com/jasontaylordev/CleanArchitecture
https://github.com/jasontaylordev/NorthwindTraders
I added an Application and an Infrastructure layer to my API project. The important part is that I will only use the MySQL.Data package for the database work (no Entity Framework or other helper libraries).
I thought it would be good practice to define interfaces for the repositories in the Application layer
public interface IUsersRepository
{
Task<IList<User>> GetUsers();
Task<User> GetUserByUsername(string username);
// ...
}
and implement them in the Infrastructure layer. So when it comes to the DI container setup via IServiceCollection, I can register those repositories with services.AddTransient(typeof(IUsersRepository), typeof(UsersRepository));. Because I'm not using an ORM tool, I have to set up the connection myself. That's why I defined an interface in the Application layer
public interface IDatabaseContext
{
DbConnection DatabaseConnection { get; }
}
and create the connection to the MySQL database in the Infrastructure layer
public class DatabaseContext : IDatabaseContext
{
public DbConnection DatabaseConnection { get; }
public DatabaseContext()
{
DatabaseConnection = new MySqlConnection("server=127.0.0.1;uid=root;pwd=12345;database=test");
}
}
To make this injectable I add it to the services collection with services.AddSingleton(typeof(IDatabaseContext), typeof(DatabaseContext));
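Put together, the registrations look roughly like this (a sketch of a standard Startup.ConfigureServices; the generic overloads are equivalent to the typeof-based calls above):
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // one connection holder for the whole application lifetime
    services.AddSingleton<IDatabaseContext, DatabaseContext>();

    // repositories are lightweight, so a new instance per resolution is fine
    services.AddTransient<IUsersRepository, UsersRepository>();
}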
I think the implementing repositories should only care about their own queries, because they might get chained together in a transaction. Currently they don't manage the connection:
public class UsersRepository : IUsersRepository
{
private readonly IDatabaseContext databaseContext;
public UsersRepository(IDatabaseContext databaseContext)
{
this.databaseContext = databaseContext;
}
public async Task<IList<User>> GetUsers()
{
using (DbCommand getUsersCommand = databaseContext.DatabaseConnection.CreateCommand())
{
// setup command string, parameters and execute command afterwards
}
}
}
The problem is that now every repository call requires connection handling in the Application layer before execution. By that I mean I have to wrap the call like so:
await databaseContext.DatabaseConnection.OpenAsync();
IList<User> users = await usersRepository.GetUsers();
// ...
await databaseContext.DatabaseConnection.CloseAsync();
so the calling class needs to inject both the repository and the IDatabaseContext. I'm also not sure if opening/closing the connection for each query/transaction is a good idea.
Maybe there are better approaches to improve the current one. I would like to create a self-managing database connection. The Application layer shouldn't open/close connections; it should only call the repository methods. The repository methods shouldn't do it either, because they might run in a transaction where only the first query should open the connection and the last one should close it.
It would be awesome to define new repository methods with the SQL logic only, with all the connection handling done in one place. Any ideas?
First, if you enable connection pooling on the MySql connector, you can skip the CloseAsync call and just Dispose the connection each time you have used it; that allows the connector's pooling mechanism to reuse connections as needed. To enable it, add Pooling=True to your connection string.
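As a sketch, the per-call pattern then looks like this (the connection string values are just the ones from the question plus Pooling=True):
var connectionString = "server=127.0.0.1;uid=root;pwd=12345;database=test;Pooling=True";

using (var connection = new MySqlConnection(connectionString))
{
    // with pooling enabled this usually just grabs an idle physical connection
    await connection.OpenAsync();

    // ... create and execute commands here ...
}   // Dispose returns the physical connection to the pool instead of closing it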
Second, to avoid all the extra code you can create a base class for the repositories and implement all the connection handling in it. I would create a function that takes a Func<DbConnection, Task<T>> and some kind of static factory to reduce repeated code:
//static DB factory
public static class DBFactory
{
    public static async Task<DbConnection> GetConnection()
    {
        //Create your connection here
        var newCon = //..
        await newCon.OpenAsync();
        return newCon;
    }

    public static async Task ExecuteTransaction(Func<DbConnection, DbTransaction, Task<bool>> TransactedCode)
    {
        using (var dbConnection = await GetConnection())
        {
            var transact = dbConnection.BeginTransaction();
            try
            {
                if (await TransactedCode(dbConnection, transact))
                    transact.Commit();
                else
                    transact.Rollback();
            }
            catch { transact.Rollback(); }
        }
    }
}
//Base class for repositories
public abstract class BaseRepository
{
    protected async Task<T> ExecuteResultWithConnection<T>(Func<DbConnection, DbTransaction, Task<T>> RepositoryMethod)
    {
        using (var dbCon = await DBFactory.GetConnection())
        {
            return await RepositoryMethod(dbCon, null);
        }
    }

    protected async Task ExecuteWithConnection(Func<DbConnection, DbTransaction, Task> RepositoryMethod)
    {
        using (var dbCon = await DBFactory.GetConnection())
        {
            await RepositoryMethod(dbCon, null);
        }
    }
}
//Example of repository
public class TestRepository : BaseRepository
{
    public async Task<IList<TestObject>> GetTestObjects(DbConnection con = null, DbTransaction transact = null)
    {
        if (con != null)
        {
            //execute the code without calling the base function,
            //using con as your connection and transact if supplied
            return yourResult;
        }
        else
        {
            return await ExecuteResultWithConnection(async (dbCon, transaction) => {
                //Here you have your connection ready to be used as dbCon,
                //without a transaction
                return yourResult;
            });
        }
    }

    public async Task AddTestObject(TestObject NewObject, DbConnection con = null, DbTransaction transact = null)
    {
        if (con != null)
        {
            //execute the code without calling the base function,
            //using con as your connection and transact if supplied
        }
        else
        {
            await ExecuteWithConnection(async (dbCon, transaction) => {
                //Here you have your connection ready to be used as dbCon,
                //without a transaction
            });
        }
    }
}
Now, calling a repository is totally clean:
var repo = new TestRepository();
var objs = await repo.GetTestObjects();
await repo.AddTestObject(new TestObject{ /* whatever */ });
Also, you can create transactions:
await DBFactory.ExecuteTransaction(async (dbCon, transact) => {
var someObject = await repo.GetTestObjects(dbCon, transact);
await repo.AddTestObject(new TestObject{ /* whatever */ }, dbCon, transact);
await repo.AddTestObject(new TestObject{ /* whatever */ }, dbCon, transact);
await repo.AddTestObject(new TestObject{ /* whatever */ }, dbCon, transact);
return true;
//If any of the inserts fails with an exception the transaction
//will be automatically rolled back.
//You can also return false if the transaction must be rolled back.
});
Remember, this is just an example, in the real world you will have a more complex infrastructure, this only gives you an idea of what you could do.
I'm currently using EF6 and a service pattern for business rules. I need transactions. Unfortunately, I don't have deep knowledge of how EF manages transactions. Please look at the code below:
public class Service1
{
private DbContext context;
private string connectionString = "yada, yada, yada";
public Service1()
{
// DbContext is really a context from an EF db-first project; this is only for illustration
context = new DbContext(connectionString);
}
public void SaveStuff(object stuff)
{
var transaction = context.Database.BeginTransaction();
try
{
var s2 = new Service2();
context.[DbEntity].Add(stuff);
s2.SaveOtherStuff("stuff");
context.SaveChanges();
transaction.Commit();
}
catch
{
transaction.Rollback();
}
}
}
public class Service2
{
private DbContext context;
private string connectionString = "yada, yada, yada";
public Service2()
{
context = new DbContext(connectionString);
}
public void SaveOtherStuff(object stuff)
{
context.[DbEntity].Add(stuff);
}
}
The transaction is created with Service1 context object. Will the transaction follow to the Service2 or is Service2 using a context object without any knowledge of the ongoing transaction? Do I need to pass the context object owning the transaction to Service2 in some way (contructor)?
The answer is no: the transaction will not carry over to Service2. And this all seems like a bad design.
Some notes: you should not really be creating and holding on to DbContexts or Transactions; ideally both should be disposed in a using statement.
However, I believe you can use the same transaction for multiple DbContexts, though I think this might depend on the SQL Server version and on whether the databases live in the same SQL Server instance (don't quote me on that).
using (TransactionScope scope = new TransactionScope())
{
using (myContext context = new myContext())
{
Test t = new Test();
t.Name = "TEST";
context.AddToTests(t);
context.SaveChanges();
}
using (myContext context = new myContext())
{
Demo d = context.Demos.First(s => s.Name == "Demo1");
context.DeleteObject(d);
context.SaveChanges();
}
scope.Complete();
// also make sure you add error checking and rollback
}
Lastly, having the same context across different service calls just seems wrong; you do not need to cache a DbContext, EF does this job extremely well behind the scenes.
Update
I just noticed you create a new DbContext per service. Anyway, use a using statement for both the DbContext and the Transaction, or risk being vilified by your colleagues :)
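Applied to the question's Service1, that advice looks roughly like this (a sketch; the entity access stays a placeholder just like in the question):
public void SaveStuff(object stuff)
{
    // short-lived context and transaction, both disposed as soon as the work is done
    using (var context = new DbContext(connectionString))
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            // context.[DbEntity].Add(stuff);   // placeholder, as in the question
            context.SaveChanges();
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}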
I don't understand why you need to instantiate two DbContexts. If you share the same DbContext across the services, you don't need to work with an explicit transaction (.SaveChanges() creates a SQL transaction itself); just pass the DbContext to Service2 in its constructor.
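A minimal sketch of that first option (SomeEntity is a placeholder for whatever [DbEntity] stands for):
public class SomeEntity { }   // placeholder entity type

public class Service2
{
    private readonly DbContext context;

    public Service2(DbContext sharedContext)   // the same instance Service1 created
    {
        context = sharedContext;
    }

    public void SaveOtherStuff(SomeEntity stuff)
    {
        // only staged here; Service1 calls SaveChanges once, which wraps
        // all pending changes in a single SQL transaction
        context.Set<SomeEntity>().Add(stuff);
    }
}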
If you want to work with two different DbContexts, you can share just the transaction and the connection of the first DbContext:
public void SaveStuff(object stuff)
{
using(var transaction = context.Database.BeginTransaction())
{
try
{
var s2 = new Service2(transaction, context.Database.Connection);
context.[DbEntity].Add(stuff);
s2.SaveOtherStuff("stuff");
context.SaveChanges();
transaction.Commit();
}
catch
{
transaction.Rollback();
}
}
}
public class Service2
{
private DbContext context;
public Service2(DbContextTransaction transaction, DbConnection connection)
{
    // reuse the first context's connection (without taking ownership of it)
    context = new DbContext(connection, contextOwnsConnection: false);
    // enlist in the already-open transaction
    context.Database.UseTransaction(transaction.UnderlyingTransaction);
}
public void SaveOtherStuff(object stuff)
{
context.[DbEntity].Add(stuff);
}
}
I have written the following database transaction wrapper:
public class ExampleWrapper : IDisposable
{
public DbContextTransaction Transaction { get; set; }
public SomeContext DatabaseContext { get; set; }
public ExampleWrapper ()
{
DatabaseContext = new SomeContext();
Transaction = DatabaseContext.Database.BeginTransaction();
}
public void Dispose()
{
Transaction.Dispose();
}
public void Commit()
{
Transaction.Commit();
}
public void RollBack()
{
Transaction.Rollback();
}
}
I have code something similar to the following in my DAL :
public async Task<Object> Retrieve(string Id)
{
using (var context= new SomeContext())
{
return context.Object.Find(Id);
}
}
I am trying to keep the business layer separate from any Entity Framework dependency and that's the reason for the wrapper Class. If I am trying to do a transaction from the BLL for example:
using(var wrapper = new ExampleWrapper())
{
//make calls to the DAL
something.Retrieve(Id)
}
My question is that I am initializing the DbContext twice (here while creating a transaction, as well as in the DAL). Does anyone have a suggestion for how I can do this better?
PS: just using Retrieve in a transaction as an example.
You can use Dependency Injection (e.g. Ninject) to inject the DbContext into the wrapper. Then you can define the lifetime of the DbContext instances. If it is a web application, you can scope the DbContext to the web request (InRequestScope), so that it is automatically disposed after the web request ends.
You would no longer need to create the DbContext yourself, since Ninject creates it for you and manages its disposal.
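A minimal sketch of that wiring, assuming Ninject.Web.Common and that ExampleWrapper (and the DAL) take the context as a constructor parameter instead of newing it up (the module and type names are illustrative):
public class DataModule : NinjectModule
{
    public override void Load()
    {
        // one context per web request, disposed automatically when the request ends
        Bind<SomeContext>().ToSelf().InRequestScope();

        // the wrapper and the DAL both receive the same request-scoped context instance
        Bind<ExampleWrapper>().ToSelf();
    }
}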
I'm trying to build a quick test that deletes and recreates a database every time it runs. I have the following:
[TestClass]
public class PocoTest
{
private TransactionScope _transactionScope;
private ProjectDataSource _dataSource;
private Repository _repository = new Repository();
private const string _cstring = "Data Source=.;Initial Catalog=test_db;Trusted_Connection=True";
[TestInitialize]
public virtual void TestInitialize()
{
_dataSource = new ProjectDataSource(_cstring);
_dataSource.Database.Delete();
_dataSource.Database.CreateIfNotExists();
_transactionScope = new TransactionScope();
}
[TestMethod]
public void TestBasicOperations()
{
var item = _repository.AddItem(new Item(){Details = "Test Item"});
// AddItem makes a call through the data context to add a set and then calls datacontext.SaveChanges()
}
[TestCleanup]
public void TestCleanup()
{
// rollback
if (_transactionScope != null)
{
_transactionScope.Dispose();
}
}
}
However when I run the test I get the following error:
Result Message: Test method
Project.Repository.UnitTests.PocoTest.TestBasicOperations threw
exception: System.Data.SqlClient.SqlException: CREATE DATABASE
statement not allowed within multi-statement transaction.
ProjectDataSource is here:
public class ProjectDataSource : DbContext, IProjectDataSource
{
public ProjectDataSource() : base("DefaultConnection")
{
}
public ProjectDataSource(string connectionString) : base(connectionString)
{
}
public DbSet<Item> Items { get; set; }
}
Repository:
public class Repository : IRepository
{
private readonly ProjectDataSource _db = new ProjectDataSource();
public Item AddItem(Item item)
{
_db.Items.Add(item);
_db.SaveChanges();
return item;
}
}
Why is this happening?
Also - if it makes any difference - the error doesn't occur if I comment out the AddItem line in TestMethod.
You can also use db.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction, sqlCommand);
See https://stackoverflow.com/a/24344654/375114 for details
For your information, this error occurs by design: it happens whenever commands that cannot run inside a transaction are issued to Microsoft SQL Server while a transaction is active.
The solution, therefore, is to ensure that Database.CreateIfNotExists() hits the database outside of any transaction scope. Remember, SQL Profiler is your friend.
You can find a reasonably up-to-date list of commands that are not allowed to run within transactions.
Note: in case you wonder why I am pointing to a list based on a Sybase product, bear in mind that Microsoft SQL Server shares much of its lineage with Sybase's engine. For further reading, refer to https://en.wikipedia.org/wiki/Microsoft_SQL_Server
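The general shape of that advice, as a sketch (dataSource here stands for whichever context ends up issuing the CREATE DATABASE):
// suppress any ambient transaction around the database-creating call,
// so CREATE DATABASE is never issued inside the test's TransactionScope
using (var noTransaction = new TransactionScope(TransactionScopeOption.Suppress))
{
    dataSource.Database.CreateIfNotExists();
    noTransaction.Complete();
}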
In case anyone else runs into this issue:
In my Repository class, I have another instance of what's commonly called a "dbContext" (ProjectDataSource). This means that one context was created in my test class, while another was created in my Repository object. Passing the connection string to my Repository class solved the problem:
In Repository:
public class Repository : IRepository
{
private readonly ProjectDataSource _db;
public Repository(string connectionString)
{
_db = new ProjectDataSource(connectionString);
}
public Repository()
{
_db = new ProjectDataSource();
}
From my test:
private TransactionScope _transactionScope;
private Repository _repository;
private ProjectDataSource _dataSource;
private const string _connectionString = "Data Source=.;Initial Catalog=test_db;Trusted_Connection=True";
[TestInitialize]
public virtual void TestInitialize()
{
_repository = new Repository(_connectionString);
_dataSource = new ProjectDataSource(_connectionString);
_dataSource.Database.Delete();
_dataSource.Database.CreateIfNotExists();
_transactionScope = new TransactionScope();
}
You cannot wrap certain SQL commands in a user transaction. Creating and deleting databases is an example; SQL Server will autocommit them.
See the remarks section in the MS SQL help.
http://msdn.microsoft.com/en-us/library/ms176061.aspx
and something on Auto Commit for more info...
http://msdn.microsoft.com/en-us/library/ms187878%28v=sql.105%29
Try this code
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Suppress))
{
var sqlCommand = String.Format("Create DATABASE [{0}]", "TempBackupDB");
_context.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction, sqlCommand);
ts.Complete();
}
You need to run the Update-Database command in the Package Manager Console.
I know that MongoDB is not supposed to support unit of work, etc. But I think it would be nice to implement a repository that stores only the intentions (similar to criteria) and then commits them to the DB. Otherwise, in every method of your repository you have to create a connection to the DB and then close it. If we place the DB connection in some BaseRepository class, then we tie our repository to a concrete DB and it becomes really difficult to test the repositories, or to test the IoC container that resolves them.
Is creating a session in MongoDB a bad idea? Is there a way to separate the connection logic from the repository?
Here is some code by Rob Conery. Is it a good idea to always connect to your DB on every request? What is the best practice?
There is one more thing: imagine I want to create an index for a collection. Previously I did it in a constructor, but with Rob's approach it doesn't seem logical to do it there.
using Norm;
using Norm.Responses;
using Norm.Collections;
using Norm.Linq;
public class MongoSession {
private string _connectionString;
public MongoSession() {
//set this connection as you need. This is left here as an example, but you could, if you wanted,
_connectionString = "mongodb://127.0.0.1/MyDatabase?strict=false";
}
public void Delete<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new() {
//not efficient, NoRM should do this in a way that sends a single command to MongoDB.
var items = All<T>().Where(expression);
foreach (T item in items) {
Delete(item);
}
}
public void Delete<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.Database.GetCollection<T>().Delete(item);
}
}
public void DeleteAll<T>() where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.Database.DropCollection(typeof(T).Name);
}
}
public T Single<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new() {
T retval = default(T);
using(var db = Mongo.Create(_connectionString))
{
retval = db.GetCollection<T>().AsQueryable()
.Where(expression).SingleOrDefault();
}
return retval;
}
public IQueryable<T> All<T>() where T : class, new() {
//don't keep this longer than you need it.
var db = Mongo.Create(_connectionString);
return db.GetCollection<T>().AsQueryable();
}
public void Add<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().Insert(item);
}
}
public void Add<T>(IEnumerable<T> items) where T : class, new() {
//this is WAY faster than doing single inserts.
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().Insert(items);
}
}
public void Update<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().UpdateOne(item, item);
}
}
//this is just some sugar if you need it.
public T MapReduce<T>(string map, string reduce) {
T result = default(T);
using(var db = Mongo.Create(_connectionString))
{
var mr = db.Database.CreateMapReduce();
MapReduceResponse response =
mr.Execute(new MapReduceOptions(typeof(T).Name) {
Map = map,
Reduce = reduce
});
MongoCollection<MapReduceResult<T>> coll = response.GetCollection<MapReduceResult<T>>();
MapReduceResult<T> r = coll.Find().FirstOrDefault();
result = r.Value;
}
return result;
}
public void Dispose() {
_server.Dispose();
}
}
Don't worry too much about opening and closing connections. The MongoDB C# driver maintains an internal connection pool, so you won't suffer overheads of opening and closing actual connections each time you create a new MongoServer object.
You can create a repository interface that exposes your data logic, and build a MongoDB implementation that is injected where it's needed. That way, the MongoDB-specific connection code is abstracted away from your application, which only sees the IRepository.
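A minimal sketch of that abstraction, assuming the current official 2.x C# driver (the IUserRepository/User names are illustrative, not from the question):
public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IUserRepository
{
    Task AddAsync(User user);
    Task<List<User>> GetAllAsync();
}

public class MongoUserRepository : IUserRepository
{
    private readonly IMongoCollection<User> _users;

    public MongoUserRepository(IMongoDatabase database)
    {
        _users = database.GetCollection<User>("users");
    }

    public Task AddAsync(User user) => _users.InsertOneAsync(user);

    public Task<List<User>> GetAllAsync() =>
        _users.Find(FilterDefinition<User>.Empty).ToListAsync();
}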
Be careful trying to implement a unit-of-work type pattern with MongoDB. Unlike SQL Server, you can't enlist multiple queries in a transaction that can be rolled back if one fails.
For a simple example of a repository pattern that has MongoDB, SQL Server and JSON implementations, check out the NBlog storage code. It uses Autofac IoC to inject concrete repositories into an ASP.NET MVC app.
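The Autofac side of that is small; something like this (a sketch reusing the illustrative MongoUserRepository from above):
var builder = new ContainerBuilder();

// one MongoClient for the whole application; the driver pools connections internally
builder.RegisterInstance(new MongoClient("mongodb://127.0.0.1"))
       .As<IMongoClient>()
       .SingleInstance();

builder.Register(c => c.Resolve<IMongoClient>().GetDatabase("MyDatabase"))
       .As<IMongoDatabase>();

builder.RegisterType<MongoUserRepository>()
       .As<IUserRepository>()
       .InstancePerLifetimeScope();

var container = builder.Build();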
While researching design patterns, I was creating a basic repository pattern for .NET Core and MongoDB. Reading over the MongoDB documentation, I came across an article about transactions in MongoDB. The article specifies that:
Starting in version 4.0, MongoDB provides the ability to perform
multi-document transactions against replica sets.
Looking around the intertubes I came across a library that does a really good job of implementing the Unit of Work pattern for MongoDB.
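As a sketch, a multi-document transaction with the 2.x driver against a replica set looks like this (the Order/AuditEntry types and collection names are illustrative):
public static async Task SaveOrderAtomicallyAsync(IMongoClient client, Order order, AuditEntry entry)
{
    var db = client.GetDatabase("shop");
    var orders = db.GetCollection<Order>("orders");
    var audit = db.GetCollection<AuditEntry>("audit");

    using (var session = await client.StartSessionAsync())
    {
        session.StartTransaction();
        try
        {
            await orders.InsertOneAsync(session, order);
            await audit.InsertOneAsync(session, entry);
            await session.CommitTransactionAsync();   // both writes become visible together
        }
        catch
        {
            await session.AbortTransactionAsync();    // neither write is applied
            throw;
        }
    }
}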
If you are interested in an implementation similar to Rob Conery's and the NBlog storage code, but using the MongoDB C# driver 2.0 (which is asynchronous), you can look at:
https://github.com/alexandre-spieser/mongodb-generic-repository
You can then write a custom repository inheriting from BaseMongoRepository.
public interface ITestRepository : IBaseMongoRepository
{
void DropTestCollection<TDocument>();
void DropTestCollection<TDocument>(string partitionKey);
}
public class TestRepository : BaseMongoRepository, ITestRepository
{
public TestRepository(string connectionString, string databaseName) : base(connectionString, databaseName)
{
}
public void DropTestCollection<TDocument>()
{
MongoDbContext.DropCollection<TDocument>();
}
public void DropTestCollection<TDocument>(string partitionKey)
{
MongoDbContext.DropCollection<TDocument>(partitionKey);
}
}
Briefly
You can use the NuGet package UnitOfWork.MongoDb. It is a wrapper for MongoDb.Driver with some helpful functions and features. You can also find sample code and a video (in Russian).
Read settings for connection
// read MongoDb settings from appSettings.json
services.AddUnitOfWork(configuration.GetSection(nameof(DatabaseSettings)));
// --- OR ----
// use hardcoded
services.AddUnitOfWork(config =>
{
config.Credential = new CredentialSettings { Login = "sa", Password = "password" };
config.DatabaseName = "MyDatabase";
config.Hosts = new[] { "Localhost" };
config.MongoDbPort = 27017;
config.VerboseLogging = false;
});
Injections
namespace WebApplicationWithMongo.Pages
{
public class IndexModel : PageModel
{
private readonly IUnitOfWork _unitOfWork;
private readonly ILogger<IndexModel> _logger;
public IndexModel(IUnitOfWork unitOfWork, ILogger<IndexModel> logger)
{
_unitOfWork = unitOfWork;
_logger = logger;
}
public IPagedList<Order>? Data { get; set; }
}
}
After injection you can get a repository.
Get repository
public async Task<IActionResult> OnGetAsync(int pageIndex = 0, int pageSize = 10)
{
var repository = _unitOfWork.GetRepository<Order, int>();
Data = await repository.GetPagedAsync(pageIndex, pageSize, FilterDefinition<Order>.Empty, HttpContext.RequestAborted);
return Page();
}
GetPagedAsync is one of several helpful methods included.
Transactions
If you need ACID operations (transactions), you can use IUnitOfWork something like this (a replica set must be correctly set up). For example:
await unitOfWork.UseTransactionAsync<OrderBase, int>(ProcessDataInTransactionAsync1, HttpContext.RequestAborted, session);
The method ProcessDataInTransactionAsync1 can look like this:
async Task ProcessDataInTransactionAsync1(IRepository<OrderBase, int> repositoryInTransaction, IClientSessionHandle session, CancellationToken cancellationToken)
{
await repositoryInTransaction.Collection.DeleteManyAsync(session, FilterDefinition<OrderBase>.Empty, null, cancellationToken);
var internalOrder1 = DocumentHelper.GetInternal(99);
await repositoryInTransaction.Collection.InsertOneAsync(session, internalOrder1, null, cancellationToken);
logger!.LogInformation("InsertOne: {item1}", internalOrder1);
var internalOrder2 = DocumentHelper.GetInternal(100);
await repositoryInTransaction.Collection.InsertOneAsync(session, internalOrder2, null, cancellationToken);
logger!.LogInformation("InsertOne: {item2}", internalOrder2);
var filter = Builders<OrderBase>.Filter.Eq(x => x.Id, 99);
var updateDefinition = Builders<OrderBase>.Update.Set(x => x.Description, "Updated description");
var result = await repositoryInTransaction.Collection
.UpdateOneAsync(session, filter, updateDefinition, new UpdateOptions { IsUpsert = false }, cancellationToken);
if (result.IsModifiedCountAvailable)
{
logger!.LogInformation("Update {}", result.ModifiedCount);
}
throw new ApplicationException("EXCEPTION! BANG!");
}
This NuGet package is open source: Calabonga.UnitOfWork.MongoDb