I am trying to test a FromSql statement against an in-memory database; we are attempting to use SQLite.
Running the following SQL passes the unit test without error:
select * from dbo.Product
However, the test also passes with incorrect SQL syntax. I would like the test to fail when the SQL is invalid. How can we test FromSql properly?
No error results from the following bad syntax:
seledg24g5ct * frofhm dbo.Product
Full Code:
namespace Tests.Services
{
public class ProductTest
{
private const string InMemoryConnectionString = "DataSource=:memory:";
private SqliteConnection _connection;
protected TestContext testContext;
public ProductTest()
{
_connection = new SqliteConnection(InMemoryConnectionString);
_connection.Open();
var options = new DbContextOptionsBuilder<TestContext>()
.UseSqlite(_connection)
.Options;
testContext = new TestContext(options);
testContext.Database.EnsureCreated();
}
[Fact]
public async Task GetProductByIdShouldReturnResult()
{
var productList = testContext.Product
.FromSql($"seledg24g5ct * frofhm dbo.Product");
Assert.Equal(1, 1);
}
}
}
Using .NET Core 3.1
There are two things to be taken into consideration here.
First, the FromSql method is just a thin bridge for using raw SQL queries in EF Core. No validation or parsing of the passed SQL string occurs when the method is called, other than finding the parameter placeholders and associating db parameters with them. To get validated, the SQL has to be executed.
Second, to support query composition over the FromSql result set, the method returns IQueryable<T>. This means it is not executed immediately, but only if/when the result is enumerated. That can happen when you use a foreach loop over it, or call methods like ToList, ToArray, or the EF Core specific Load extension method, which is similar to ToList but without creating a list - the equivalent of a foreach loop without a body, e.g.
foreach (var _ in query) { }
With that being said, the code snippet
var productList = testContext.Product
.FromSql($"seledg24g5ct * frofhm dbo.Product");
does basically nothing, hence does not produce an exception for the invalid SQL. You must execute it using one of the aforementioned methods, e.g.
productList.Load();
or
var productList = testContext.Product
.FromSql($"seledg24g5ct * frofhm dbo.Product")
.ToList();
and assert the expected exception.
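For example, a minimal failing-SQL test could look like this (a sketch only; with the Microsoft.Data.Sqlite provider the syntax error surfaces as a SqliteException, other providers throw their own exception types):
[Fact]
public async Task GetProductByIdShouldFailForInvalidSql()
{
    var query = testContext.Product
        .FromSql($"seledg24g5ct * frofhm dbo.Product");
    // Enumerating the query is what actually sends the SQL to the database,
    // so this is where the invalid syntax is rejected.
    await Assert.ThrowsAsync<SqliteException>(() => query.ToListAsync());
}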
For more info, see Raw SQL Queries and How Queries Work sections of EF Core documentation.
@ivan-stoev has answered your question as to why your .FromSql statement does nothing - i.e. the query is never actually materialized. But to try to add some additional value, I'll share my unit test setup, as it works well for me. Of course, YMMV.
Create a reusable class to handle generic in-memory database creation and easy population of tables with test data. NB: this requires the NuGet packages:
ServiceStack.OrmLite.Core
ServiceStack.OrmLite.Sqlite
I am using OrmLite as it allows for mocking and unit testing by providing a non-disposing connection factory which I can neatly inject into the Test classes via Dependency Injection:
/// <summary>
/// It is not possible to directly mock the Dapper commands I'm using to query the underlying database. There is a NuGet package called Moq.Dapper, but this approach doesn't need it.
/// It is not possible to mock in-memory properties of a .NET Core DbContext such as the IDbConnection - i.e. the bit we actually want for Dapper queries.
/// For this reason, we need to use a different in-memory database and load entities into it to query. Approach as per: https://mikhail.io/2016/02/unit-testing-dapper-repositories/
/// </summary>
public class TestInMemoryDatabase
{
private readonly OrmLiteConnectionFactory dbFactory =
new OrmLiteConnectionFactory(":memory:", SqliteDialect.Provider);
public IDbConnection OpenConnection() => this.dbFactory.OpenDbConnection();
public void Insert<T>(IEnumerable<T> items)
{
using (var db = this.OpenConnection())
{
db.CreateTableIfNotExists<T>();
foreach (var item in items)
{
db.Insert(item);
}
}
}
}
A DbConnectionManager<TContext> class provides a wrapper around the database connection using the EF context you will already have created. It grabs the database connection from the EF context and abstracts away the opening/closing operations:
public class DbConnectionManager<TContext> : IDbConnectionManager<TContext>, IDisposable
where TContext : DbContext
{
private TContext _context;
public DbConnectionManager(TContext context)
{
_context = context;
}
public async Task<IDbConnection> GetDbConnectionFromContextAsync()
{
var dbConnection = _context.Database.GetDbConnection();
if (dbConnection.State.Equals(ConnectionState.Closed))
{
await dbConnection.OpenAsync();
}
return dbConnection;
}
public void Dispose()
{
var dbConnection = _context.Database.GetDbConnection();
if (dbConnection.State.Equals(ConnectionState.Open))
{
dbConnection.Close();
}
}
}
An accompanying injectable interface for the above:
public interface IDbConnectionManager<TContext>
where TContext : DbContext
{
Task<IDbConnection> GetDbConnectionFromContextAsync();
void Dispose();
}
In your .NET Project Startup class, register this interface with the inbuilt DI container (or whatever one you're using):
public void ConfigureServices(IServiceCollection services)
{
services.AddScoped(typeof(IDbConnectionManager<>), typeof(DbConnectionManager<>));
}
Now our Unit Test class looks like this:
/// <summary>
/// All tests to follow the naming convention: MethodName_StateUnderTest_ExpectedBehaviour
/// </summary>
[ExcludeFromCodeCoverage]
public class ProductTests
{
//private static Mock<ILoggerAdapter<Db2DbViewAccess>> _logger;
//private static Mock<IOptions<AppSettings>> _configuration;
private readonly Mock<IDbConnectionManager<Db2Context>> _dbConnection;
private readonly List<Product> _listProducts = new List<Product>
{
new Product
{
Id = 1,
Name = "Product1"
},
new Product
{
Id = 2,
Name = "Product2"
},
new Product
{
Id = 3,
Name = "Product3"
},
};
public ProductTests()
{
//_logger = new Mock<ILoggerAdapter<Db2DbViewAccess>>();
//_configuration = new Mock<IOptions<AppSettings>>();
_dbConnection = new Mock<IDbConnectionManager<Db2Context>>();
}
[Fact]
public async Task GetProductAsync_ResultsFound_ReturnListOfAllProducts()
{
// Arrange
// Using a SQL Lite in-memory database to test the DbContext.
var testInMemoryDatabase = new TestInMemoryDatabase();
testInMemoryDatabase.Insert(_listProducts);
_dbConnection.Setup(c => c.GetDbConnectionFromContextAsync())
.ReturnsAsync(testInMemoryDatabase.OpenConnection());
//_configuration.Setup(x => x.Value).Returns(appSettings);
var productAccess = new ProductAccess(_dbConnection.Object); //, _logger.Object, _configuration.Object);
// Act
var result = await productAccess.GetProductAsync("SELECT * FROM Product");
// Assert
result.Count.Should().Be(_listProducts.Count);
}
}
Notes on the above:
You can see I'm testing a ProductAccess data access class which wraps my database calls, but that should be easy enough to change for your setup. My ProductAccess class expects other services such as logging and configuration to be injected, but I have commented these out for this minimal example.
Note that setting up the in-memory database and populating it with your test list of entities is now a simple two lines (you could even do this just once in the test class constructor if you want to use the same test dataset across tests):
var testInMemoryDatabase = new TestInMemoryDatabase();
testInMemoryDatabase.Insert(_listProducts);
Related
I apologize for the verbose posting. I don't like seeing those myself, but my question is about structure and I think all the pieces are needed for asking it.
The interesting part is at the bottom, though, so feel free to scroll down to the question.
Here is a Controller. I am injecting a context and a command factory. The controller returns a list of objects that are read from an Oracle database.
public class aController : ControllerBase
{
protected readonly IDB db;
public aController(IContext context, ICommandFactory factory)
{
db = IDB.dbFactory(context, factory);
}
[HttpGet]
public ActionResult<S> GetS()
{
return Ok(db.DbGetS());
}
}
What is currently not injected is the persistence class. There will be a manageable number of stored procedures that are mapped to a model, all hand-coded to a spec. This interface has a factory to construct an implementation (so that I can mock it should the need arise), and our data retrieval method.
public interface IDB
{
public static IDB dbFactory(
IContext context,
ICommandFactory factory)
{
return new DB(context, factory);
}
public S DbGetS();
}
This class implements the interface. It has a constructor that passes the injected items to the base constructor and otherwise does the Oracle interaction by calling generic access methods in the base class.
public class DB: dbBase, IDB
{
public DB(
IContext context,
ICommandFactory factory)
: base(context, factory)
{ }
public S DbGetS()
{
S s = new S();
IEnumerable<S> ss = GetData<S>("proc-name");
return ss.SingleOrDefault();
}
}
Then there is a base class to all the model classes that uses generics and does the heavy lifting. This is very much simplified.
public abstract class dbBase
{
private readonly IContext _context;
private readonly ICommandFactory _commandFactory;
protected delegate IEnumerable<T> ParseResult<T>(IDbCommand cmd);
protected dbBase(IContext context, ICommandFactory factory)
{
_context = context;
_commandFactory = factory;
}
protected IEnumerable<T> GetData<T>(string sproc) where T : ModelBase, new()
{
IEnumerable<T> results = null;
var cmd = this._commandFactory.GetDbCommand(sproc, this._context);
// boilerplate code omitted that sets up the command and executes the query
results = parseResult<T>(cmd); // this method will read from the refCursor
return results;
}
private IEnumerable<T> parseResult<T>(IDbCommand cmd) where T : ModelBase, new()
{
// This cast is the problem:
OracleRefCursor rc = (OracleRefCursor)cmd.Parameters["aCursor"];
using (OracleDataReader reader = rc.GetDataReader())
{
while (reader.Read())
{
// code omitted that reads the data and returns it
And here is the Unit Test that should test the Controller:
public void S_ReturnsObject()
{
// Arrange
var mockFactory = new Mock<ICommandFactory>();
var mockContext = new Mock<IContext>();
var mockCommand = new Mock<IDbCommand>();
var mockCommandParameters = new Mock<IDataParameterCollection>();
var mockParameter = new Mock<IDbDataParameter>();
mockCommandParameters.SetupGet(p => p[It.IsAny<string>()]).Returns(mockParameter.Object);
// Set up the command and parameters
mockCommand.SetupGet(x => x.Parameters)
.Returns(mockCommandParameters.Object);
mockCommand.Setup(x => x.ExecuteNonQuery()).Verifiable();
// Set up the command factory
mockFactory.Setup(x => x.GetDbCommand(
It.IsAny<string>(),
mockContext.Object))
.Returns(mockCommand.Object)
.Verifiable();
var controller = new aController(mockContext.Object, mockFactory.Object);
// Act
var result = controller.GetS();
// omitted verification
All stored procedures have refCursor output parameters that contain the results. The only way to obtain an OracleDataReader for this is to cast the query output parameter to OracleRefCursor. Mocking the reader is therefore not possible, because even though I can get a mock parameter, the test will fail with a cast exception in the ParseResult method. Unless I am missing something.
I fear that I need to cut the Oracle API interactions out, even though it would be nice to at least enter parseResult() as part of the test.
I could inject IDB and replace DbGetS() with a mock version, but then there will be not much code coverage by my test and I won't be able to mock any database connection issues and the like. Also, there will be about a dozen IDB level interfaces that would all have to be injected.
How should I restructure this to be able write meaningful tests?
(Disclaimer: The code snippets that I pasted here are for illustration purposes and were heavily edited. The results were not tested and will not compile or run.)
I didn't really expect an answer, or at least, I am fine with not receiving one. As it often happens when I ask a question on SO, just laying out the problem in a way that others can understand it makes it clear enough for me to see through.
In this instance, the essence of the question boiled down to there not being a way around using GetDataReader() on the RefCursor. In other words, if you have Oracle stored procedures with an output cursor (i.e. not a SELECT result set), you cannot mock the database interaction unless you manage to write your own RefCursor and OracleDataReader. If you think this is wrong, please elaborate. OracleCommand, OracleParameter, and operations on the command (ExecuteNonQuery) can be substituted with System.Data equivalents that can be mocked.
So what did I do? I reverted the substitution of the Oracle.ManagedDataAccess types with System.Data types (because the former are less verbose) and injected IDB instead.
This is the resulting unit test:
// Arrange
var mockDbS = new Mock<IDB>();
Model.S expected = new Model.S() { var1 = 1, var2 = 2, var3 = 3 };
mockDbS.Setup(d => d.DbGetS()).Returns(expected);
var controller = new aController(mockDbS.Object);
// Act
ActionResult<Model.S> actionResult = controller.GetS();
Model.S actual = ((ObjectResult)actionResult.Result).Value as Model.S;
// Assert
mockDbS.Verify();
Assert.Equal(200, ((ObjectResult)actionResult.Result).StatusCode);
Assert.Equal(expected, actual);
This does not give me the coverage that I wanted, but it is a basic test of the controller actions.
I am currently trying to use AutoFixture to create a pre-defined fixture as an implementation of ICustomization for ApplicationDbContext, using the In-Memory provider.
public class ApplicationDbContextFixture : ICustomization
{
public void Customize(IFixture fixture)
{
var specimenFactory = new SpecimenFactory<ApplicationDbContext>(CreateDbContext);
fixture.Customize<ApplicationDbContext>(
composer =>
composer.FromFactory(specimenFactory)
);
}
/// <summary>
/// Private factory method to create a new instance of <see cref="ApplicationDbContext"/>
/// </summary>
private ApplicationDbContext CreateDbContext()
{
var dbContextOptions = new DbContextOptionsBuilder<ApplicationDbContext>()
.UseInMemoryDatabase("SomeDatabaseName")
.Options;
var dbContext = new ApplicationDbContext(dbContextOptions);
return dbContext;
}
}
Then, I will apply that customization to my Fixture as follows:
[Fact]
public void TestAddUsersToEmptyDatabase()
{
// Arrange
// Fixture for ApplicationDbContext
var fixture = FixtureFactory.CreateFixture();
var applicationDatabaseFixture = new ApplicationDbContextFixture();
fixture.Customize(applicationDatabaseFixture);
// Fixture for users
var randomUser = fixture.Create<AppUser>();
var normalUser = fixture.Create<AppUser>();
var adminUser = fixture.Create<AppUser>();
// Act & Assert
// Run the test against one instance of the context
// Use a clean instance of the context for each operation too
using (var dbContext = fixture.Create<ApplicationDbContext>())
{
Assert.Empty(dbContext.Users);
dbContext.Users.Add(randomUser);
dbContext.SaveChanges();
}
using (var dbContext = fixture.Create<ApplicationDbContext>())
{
dbContext.Users.AddRange(normalUser, adminUser);
dbContext.SaveChanges();
}
using (var dbContext = fixture.Create<ApplicationDbContext>())
{
Assert.NotEmpty(dbContext.Users);
Assert.NotNull(dbContext.Users.SingleOrDefault(_ => _.Id == randomUser.Id));
Assert.NotNull(dbContext.Users.SingleOrDefault(_ => _.Id == normalUser.Id));
Assert.NotNull(dbContext.Users.SingleOrDefault(_ => _.Id == adminUser.Id));
}
}
FixtureFactory.CreateFixture implementation
/// <summary>
/// Factory method to declare a single <see cref="IFixture"/> for unit tests applications
/// </summary>
internal static class FixtureFactory
{
internal static IFixture CreateFixture()
{
var fixture = new Fixture().Customize(
new AutoMoqCustomization { ConfigureMembers = true });
return fixture;
}
}
Now in my unit test, the assertion Assert.Empty(dbContext.Users); throws System.NotImplementedException : The method or operation is not implemented. because the DbSet<AppUser> Users generated by AutoFixture is a DynamicProxy.
See image dbContext.Users as DynamicProxy
Oddly enough, if I inspect breakpoints inside the factory method (i.e. CreateDbContext()) called from fixture.Create<ApplicationDbContext>(), the DbSet Users is of the expected type.
See image dbContext.Users as InternalDbSet
I am aware that I could replace all usages of dbContext.Users with dbContext.Set<AppUser>() and that would make the unit test pass, but the problem is that in the actual class I am using dbContext.Users for IQueryables and database operations, so I would like to stick with it if possible.
Hence, I need help understanding why AutoFixture uses my factory method to generate the instance of my ApplicationDbContext, yet all the DbSet<> properties inside it are mocked when resolved by the ISpecimenBuilder. Is there a way to remedy this?
I've posted a similar question on their GitHub, but it has not been active recently, so I am also asking here.
Please understand that I only started using AutoFixture two days ago, so if I have written something wrong or there is a misconception about any design patterns, please leave a comment so that I can take it as a lesson.
Update 1:
So I tried initializing a plain fixture without any AutoMoq customization (i.e. fixture = new Fixture()), and this time it throws an AutoFixture.ObjectCreationExceptionWithPath exception, complaining that it is unable to resolve the DbSet properties within the ApplicationDbContext. At this point, I was wondering whether anyone knows how to use a Relay or ISpecimenBuilder to tell AutoFixture to resolve all DbSet<T> properties within the ApplicationDbContext with dbContext.Set<T>(), because that would work if I replaced all usages of DbSets in my unit tests; but as I mentioned, all IQueryables are returned from DbSets, so I cannot simply replace them in ApplicationDbContext.
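For reference, the direction I was exploring with the customization looks roughly like this (an untested sketch; OmitAutoProperties is AutoFixture's built-in way to stop it from overwriting the writable properties, such as the DbSet<T> ones, of the instance returned by my factory):
public void Customize(IFixture fixture)
{
    fixture.Customize<ApplicationDbContext>(composer =>
        composer.FromFactory(() => CreateDbContext())
                .OmitAutoProperties());
}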
Update 2:
I removed and simplified the creation of ApplicationDbContext in my factory method CreateDbContext(), since the code complexity was causing confusion.
It is hard to understand what you are trying to achieve from your post.
I think what you actually need is to test your code that happens to use Entity Framework.
If that's the case, you might want to have a look at a library I have created, EntityFrameworkCore.AutoFixture. It uses the In-Memory database provider as well as the SQLite in-memory provider.
Have a look at the readme for some code examples. If you have any questions drop me a message or open an issue on GitHub.
I have the following code:
public void someMethod(){
...
var accounts = myRepo.GetAccounts(accountId)?.ToList();
...
foreach (var account in accounts)
{
account.Status="INACTIVE";
var updatedAccount = myRepo.AddOrUpdateAccounts(account);
}
}
public Account AddOrUpdateAccounts(Account account){
//I want to compare account in the Db and what is passed in. So get the account from DB
var accountFromDb = myRepo.GetAccounts(account.Id); //this doesn't return whats in the database.
//here accountFromDb.Status is returned as INACTIVE, but in the database the column value is ACTIVE
...
...
}
public IEnumerable<Account> GetAccounts(int id){
return id <= 0 ? null : m_Context.Accounts.Where(x => x.Id == id);
}
Here, inside someMethod() I am calling GetAccounts(), which returns data from the Accounts table.
Then I am changing the Status of the account and calling AddOrUpdateAccounts().
Inside AddOrUpdateAccounts(), I want to compare the account that was passed in with what's in the database. When I call GetAccounts(), it returns a record with Status = "INACTIVE". I haven't called SaveChanges(). Why didn't GetAccounts() return the data from the database? In the DB the status is still "ACTIVE".
The repository method should return IQueryable<Account> rather than IEnumerable<Account>, as this will allow the consumer to continue to refine any criteria or govern how the account(s) should be consumed prior to any query executing against the database:
I would consider:
public IQueryable<Account> GetAccountsById(int id){
return m_Context.Accounts.Where(x => x.Id == id);
}
Don't return null, just the query. The consumer can decide what to do if the data is not available.
From there the calling code looks like:
var accounts = myRepo.GetAccounts(accountId).ToList();
foreach (var account in accounts)
{
account.Status="INACTIVE";
}
Your AddOrUpdateAccounts wouldn't work:
public Account AddOrUpdateAccounts(Account account){
...
var account = myRepo.GetAccounts(account.Id); //this doesn't return whats in the database.
You pass in the Account as "account" and then try declaring a local variable also called "account". If you remove the var keyword, you would load the DbContext's record over top of your modified account and your changes would be lost. Loading the account into another variable isn't necessary as long as the account is still associated with the DbContext.
Edit: After changing the var account = ... statement to look like:
public Account AddOrUpdateAccounts(Account account){
...
var accountToUpdate = myRepo.GetAccounts(account.Id); //this doesn't return whats
accountToUpdate will show the modified status rather than what is in the database, because the DbContext is still tracking the reference to the entity that you modified (account). For instance, if I do this:
var account1st = context.Accounts.Single(x => x.AccountId == 1);
var account2nd = context.Accounts.Single(x => x.AccountId == 1);
Console.WriteLine(account1st.Status); // I get "ACTIVE"
Console.WriteLine(account2nd.Status); // I get "ACTIVE"
account1st.Status = "INACTIVE";
Console.WriteLine(account2nd.Status); // I get "INACTIVE"
Both references point to the same instance. It doesn't matter when I attempt to read the Account the 2nd time, as long as it's coming from the same DbContext and the context is tracking instances. If you read the row via a different DbContext, or use AsNoTracking() with all of your reads then the account can be read fresh from the database. You can reload an entity, but if those variables are pointing at the same reference it will overwrite your changes and set the entity back to Unmodified. This can be a little confusing when watching an SQL profiler output because in some cases you will see EF run a SELECT query for an entity, but the entity returned has different, modified values than what is in the database. Even when loading from the tracking cache, EF can still execute queries against the DB in some cases, but it returns the tracked entity reference.
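To illustrate the AsNoTracking point (a hypothetical sketch using the same Account entity as above):
var trackedAccount = context.Accounts.Single(x => x.AccountId == 1);
trackedAccount.Status = "INACTIVE"; // modified but not yet saved

// AsNoTracking bypasses the tracking cache, so this materializes a separate
// instance populated with the values currently stored in the database.
var freshAccount = context.Accounts
    .AsNoTracking()
    .Single(x => x.AccountId == 1);

Console.WriteLine(trackedAccount.Status); // "INACTIVE" (in-memory change)
Console.WriteLine(freshAccount.Status);   // "ACTIVE"   (database value)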
/Edit
When it comes to saving the changes, it really just boils down to calling SaveChanges on the DbContext that the account is associated with. The "tricky" part is scoping the DbContext so that this can be done. The recommended pattern for this is the Unit of Work. There are a few different ones out there, and the one I recommend for EF is Mehdime's DbContextScope; however, you can implement simpler ones that may be easier to understand and follow. Essentially, a unit of work encapsulates the DbContext so that you can define a scope in which repositories access the same DbContext, then commit those changes at the end of the work.
At the most basic level:
public interface IUnitOfWork<TDbContext> : IDisposable where TDbContext : DbContext
{
TDbContext Context { get; }
int SaveChanges();
}
public class UnitOfWork : IUnitOfWork<YourDbContext>
{
private YourDbContext _context = null;
YourDbContext IUnitOfWork<YourDbContext>.Context
{
get { return _context ?? (_context = new YourDbContext("YourConnectionString")); }
}
int IUnitOfWork<YourDbContext>.SaveChanges()
{
if(_context == null)
return 0;
return _context.SaveChanges();
}
public void Dispose()
{
try
{
if (_context != null)
_context.Dispose();
}
catch (ObjectDisposedException)
{ }
}
}
With this class available, and using dependency injection via an IoC container (Autofac, Unity, or the built-in ASP.NET Core container), you register the unit of work as instance-per-request so that when the controller and repository classes request one in their constructors, they receive the same instance.
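With the built-in ASP.NET Core container, for example, that registration could look something like this (a sketch; Autofac and Unity have their own per-request lifetime equivalents):
public void ConfigureServices(IServiceCollection services)
{
    // Scoped = one instance per web request, so the controller/service and the
    // repository resolved for that request share the same unit of work and DbContext.
    services.AddScoped<IUnitOfWork<YourDbContext>, UnitOfWork>();
    services.AddScoped<IYourRepository, YourRepository>();
}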
Controller / Service:
private readonly IUnitOfWork<YourDbContext> _unitOfWork = null;
private readonly IYourRepository _repository = null;
public YourService(IUnitOfWork<YourDbContext> unitOfWork, IYourRepository repository)
{
_unitOfWork = unitOfWork ?? throw new ArgumentNullException("unitOfWork");
_repository = repository ?? throw new ArgumentNullException("repository");
}
Repository
private readonly IUnitOfWork<YourDbContext> _unitOfWork = null;
public YourRepository(IUnitOfWork<YourDbContext> unitOfWork)
{
_unitOfWork = unitOfWork ?? throw new ArgumentNullException("unitOfWork");
}
private YourDbContext Context { get { return _unitOfWork.Context; } }
Big disclaimer: This is a very crude initial implementation to explain roughly how a unit of work can operate; it is in no way production-suitable code. It has limitations, specifically around disposing the DbContext, but should serve as a demonstration. Definitely look to adopt a library that's already out there and addresses these concerns. Those implementations properly manage DbContext disposal and can manage a scope beyond the context, like a TransactionScope, so that their SaveChanges is still required even if unitOfWork.Context.SaveChanges() is called.
With a unit of work available to the Controller/Service and Repository, the code to use the repository and update your changes becomes:
var accounts = myRepo.GetAccountsById(accountId).ToList();
foreach (var account in accounts)
{
account.Status="INACTIVE";
}
UnitOfWork.SaveChanges();
With a proper unit of work it will look more like:
using (var unitOfWork = UnitOfWorkFactory.Create())
{
var accounts = myRepo.GetAccountsById(accountId).ToList(); // Where myRepo can resolve the unit of work via locator.
foreach (var account in accounts)
{
account.Status="INACTIVE";
}
unitOfWork.SaveChanges();
}
This way if you were to call different repos to fetch data, perform a number of different updates, the changes would be committed all in one call at the end and rolled back if there was a problem with any of the data.
I'd like to use NSubstitute to unit test Entity Framework 6.x by mocking DbSet. Fortunately, Scott Xu provides a good unit testing library, EntityFramework.Testing.Moq, using Moq. So I modified his code to suit NSubstitute, and it has been looking good so far, until I wanted to test the DbSet<T>.Add() and DbSet<T>.Remove() methods. Here are my code bits:
public static class NSubstituteDbSetExtensions
{
public static DbSet<TEntity> SetupData<TEntity>(this DbSet<TEntity> dbset, ICollection<TEntity> data = null, Func<object[], TEntity> find = null) where TEntity : class
{
data = data ?? new List<TEntity>();
find = find ?? (o => null);
var query = new InMemoryAsyncQueryable<TEntity>(data.AsQueryable());
((IQueryable<TEntity>)dbset).Provider.Returns(query.Provider);
((IQueryable<TEntity>)dbset).Expression.Returns(query.Expression);
((IQueryable<TEntity>)dbset).ElementType.Returns(query.ElementType);
((IQueryable<TEntity>)dbset).GetEnumerator().Returns(query.GetEnumerator());
#if !NET40
((IDbAsyncEnumerable<TEntity>)dbset).GetAsyncEnumerator().Returns(new InMemoryDbAsyncEnumerator<TEntity>(query.GetEnumerator()));
((IQueryable<TEntity>)dbset).Provider.Returns(query.Provider);
#endif
...
dbset.Remove(Arg.Do<TEntity>(entity =>
{
data.Remove(entity);
dbset.SetupData(data, find);
}));
...
dbset.Add(Arg.Do<TEntity>(entity =>
{
data.Add(entity);
dbset.SetupData(data, find);
}));
...
return dbset;
}
}
And I created a test method like:
[TestClass]
public class ManipulationTests
{
[TestMethod]
public void Can_remove_set()
{
var blog = new Blog();
var data = new List<Blog> { blog };
var set = Substitute.For<DbSet<Blog>, IQueryable<Blog>, IDbAsyncEnumerable<Blog>>()
.SetupData(data);
set.Remove(blog);
var result = set.ToList();
Assert.AreEqual(0, result.Count);
}
}
public class Blog
{
...
}
The issue arises when the test method calls set.Remove(blog). It throws an InvalidOperationException with the error message:
Collection was modified; enumeration operation may not execute.
This is because the fake data object is modified while the set.Remove(blog) method is being called. However, Scott's original approach using Moq doesn't have this issue.
Therefore, I wrapped the set.Remove(blog) call in a try ... catch (InvalidOperationException) block and let the catch block do nothing; then the test doesn't throw an exception (of course) and passes as expected.
I know this is not the solution, but how can I achieve my goal to unit test DbSet<T>.Add() and DbSet<T>.Remove() methods?
What's happening here?
1. set.Remove(blog); - this calls the previously configured lambda.
2. data.Remove(entity); - the item is removed from the list.
3. dbset.SetupData(data, find); - we call SetupData again, to reconfigure the Substitute with the new list.
4. SetupData runs...
5. In there, dbset.Remove is being called, in order to reconfigure what happens when Remove is called next time.
Okay, we have a problem here. dbset.Remove(Arg.Do<T>...) doesn't reconfigure anything; rather, it adds a behavior to the Substitute's internal list of things that should happen when you call Remove. So we're currently running the previously configured Remove action (1) and, at the same time, down the stack, we're adding an action to the list (5). When the stack returns and the iterator looks for the next action to call, the underlying list of mocked actions has changed. Iterators don't like changes.
This leads to the conclusion: We can't modify what a Substitute does while one of its mocked actions is running. If you think about it, nobody who reads your test would assume this to happen, so you shouldn't do this at all.
How can we fix it?
public static DbSet<TEntity> SetupData<TEntity>(
this DbSet<TEntity> dbset,
ICollection<TEntity> data = null,
Func<object[], TEntity> find = null) where TEntity : class
{
data = data ?? new List<TEntity>();
find = find ?? (o => null);
Func<IQueryable<TEntity>> getQuery = () => new InMemoryAsyncQueryable<TEntity>(data.AsQueryable());
((IQueryable<TEntity>) dbset).Provider.Returns(info => getQuery().Provider);
((IQueryable<TEntity>) dbset).Expression.Returns(info => getQuery().Expression);
((IQueryable<TEntity>) dbset).ElementType.Returns(info => getQuery().ElementType);
((IQueryable<TEntity>) dbset).GetEnumerator().Returns(info => getQuery().GetEnumerator());
#if !NET40
((IDbAsyncEnumerable<TEntity>) dbset).GetAsyncEnumerator()
.Returns(info => new InMemoryDbAsyncEnumerator<TEntity>(getQuery().GetEnumerator()));
((IQueryable<TEntity>) dbset).Provider.Returns(info => getQuery().Provider);
#endif
dbset.Remove(Arg.Do<TEntity>(entity => data.Remove(entity)));
dbset.Add(Arg.Do<TEntity>(entity => data.Add(entity)));
return dbset;
}
The getQuery lambda creates a new query. It always uses the captured list data.
All .Returns configuration calls use a lambda. In there, we create a new query instance and delegate our call there.
Remove and Add only modify our captured list. We don't have to reconfigure our Substitute, because every call reevaluates the query using the lambda expressions.
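For symmetry, a companion test for Add using the same setup would look like this (sketch):
[TestMethod]
public void Can_add_set()
{
    var data = new List<Blog>();
    var set = Substitute.For<DbSet<Blog>, IQueryable<Blog>, IDbAsyncEnumerable<Blog>>()
        .SetupData(data);

    set.Add(new Blog());

    // The query is re-evaluated on enumeration, so the added item is visible.
    var result = set.ToList();
    Assert.AreEqual(1, result.Count);
}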
While I really like NSubstitute, I would strongly recommend looking into Effort, the Entity Framework Unit Testing Tool.
You would use it like this:
// DbContext needs additional constructor:
public class MyDbContext : DbContext
{
public MyDbContext(DbConnection connection)
: base(connection, true)
{
}
}
// Usage:
DbConnection connection = Effort.DbConnectionFactory.CreateTransient();
MyDbContext context = new MyDbContext(connection);
And there you have an actual DbContext that you can use with everything that Entity Framework gives you, including migrations, using a fast in-memory-database.
I know that MongoDB is not supposed to support unit of work, etc. But I think it would be nice to implement a repository that stores only the intentions (similar to criteria) and then commits them to the DB. Otherwise, in every method of your repository you have to create a connection to the DB and then close it. If we place the DB connection in some BaseRepository class, then we tie our repository to a concrete DB and it becomes really difficult to test the repositories, or to test the IoC container that resolves them.
Is creating a session in MongoDB a bad idea? Is there a way to separate the connection logic from the repository?
Here is some code by Rob Conery. Is it a good idea to always connect to your DB on every request? What is the best practice?
There is one more thing. Imagine I want to create an index for a collection. Previously I did this in a constructor, but with Rob's approach it seems out of place to do it there.
using Norm;
using Norm.Responses;
using Norm.Collections;
using Norm.Linq;
public class MongoSession {
private string _connectionString;
public MongoSession() {
//set this connection string as you need. This is left here as an example, but you could, if you wanted, read it from configuration instead.
_connectionString = "mongodb://127.0.0.1/MyDatabase?strict=false";
}
public void Delete<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new() {
//not efficient, NoRM should do this in a way that sends a single command to MongoDB.
var items = All<T>().Where(expression);
foreach (T item in items) {
Delete(item);
}
}
public void Delete<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.Database.GetCollection<T>().Delete(item);
}
}
public void DeleteAll<T>() where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.Database.DropCollection(typeof(T).Name);
}
}
public T Single<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new() {
T retval = default(T);
using(var db = Mongo.Create(_connectionString))
{
retval = db.GetCollection<T>().AsQueryable()
.Where(expression).SingleOrDefault();
}
return retval;
}
public IQueryable<T> All<T>() where T : class, new() {
//don't keep this longer than you need it.
var db = Mongo.Create(_connectionString);
return db.GetCollection<T>().AsQueryable();
}
public void Add<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().Insert(item);
}
}
public void Add<T>(IEnumerable<T> items) where T : class, new() {
//this is WAY faster than doing single inserts.
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().Insert(items);
}
}
public void Update<T>(T item) where T : class, new() {
using(var db = Mongo.Create(_connectionString))
{
db.GetCollection<T>().UpdateOne(item, item);
}
}
//this is just some sugar if you need it.
public T MapReduce<T>(string map, string reduce) {
T result = default(T);
using(var db = Mongo.Create(_connectionString))
{
var mr = db.Database.CreateMapReduce();
MapReduceResponse response =
mr.Execute(new MapReduceOptions(typeof(T).Name) {
Map = map,
Reduce = reduce
});
MongoCollection<MapReduceResult<T>> coll = response.GetCollection<MapReduceResult<T>>();
MapReduceResult<T> r = coll.Find().FirstOrDefault();
result = r.Value;
}
return result;
}
public void Dispose() {
//each method creates and disposes its own Mongo instance, so there is nothing to dispose here.
}
}
Don't worry too much about opening and closing connections. The MongoDB C# driver maintains an internal connection pool, so you won't suffer the overhead of opening and closing actual connections each time you create a new MongoServer object.
You can create a repository interface that exposes your data logic, and build a MongoDB implementation that is injected where it's needed. That way, the MongoDB-specific connection code is abstracted away from your application, which only sees the IRepository.
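As an illustration of that abstraction, a minimal sketch using the official MongoDB.Driver rather than NoRM (the interface shape and names are just examples):
public interface IRepository<T>
{
    Task AddAsync(T item);
    Task<T> GetByIdAsync(string id);
}

// MongoDB-specific implementation; the rest of the application only sees IRepository<T>.
public class MongoRepository<T> : IRepository<T>
{
    private readonly IMongoCollection<T> _collection;

    public MongoRepository(IMongoDatabase database)
    {
        _collection = database.GetCollection<T>(typeof(T).Name);
    }

    public Task AddAsync(T item) => _collection.InsertOneAsync(item);

    public Task<T> GetByIdAsync(string id) =>
        _collection.Find(Builders<T>.Filter.Eq("_id", id)).FirstOrDefaultAsync();
}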
Be careful trying to implement a unit-of-work type pattern with MongoDB. Unlike SQL Server, you can't enlist multiple queries in a transaction that can be rolled back if one fails.
For a simple example of a repository pattern that has MongoDB, SQL Server and JSON implementations, check out the NBlog storage code. It uses Autofac IoC to inject concrete repositories into an ASP.NET MVC app.
While researching design patterns, I was creating a basic repository pattern for .NET Core and MongoDB. While reading the MongoDB documentation, I came across an article about transactions in MongoDB. The article specified that:
Starting in version 4.0, MongoDB provides the ability to perform
multi-document transactions against replica sets.
Looking around the intertubes I came across a library that does a really good job of implementing the Unit of Work pattern for MongoDB.
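For illustration, a multi-document transaction with the official C# driver looks roughly like this (a sketch; client, orders, orderLines and newOrder are hypothetical, and it requires MongoDB 4.0+ running as a replica set):
using (var session = await client.StartSessionAsync())
{
    session.StartTransaction();
    try
    {
        // Pass the session to each operation so they all take part in the same transaction.
        await orders.InsertOneAsync(session, newOrder);
        await orderLines.InsertManyAsync(session, newOrder.Lines);

        await session.CommitTransactionAsync();
    }
    catch
    {
        // If anything fails, none of the writes above are applied.
        await session.AbortTransactionAsync();
        throw;
    }
}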
If you are interested in an implementation similar to Rob Conery's and the NBlog storage code, but using the MongoDB C# driver 2.0 (which is asynchronous), you can look at:
https://github.com/alexandre-spieser/mongodb-generic-repository
You can then write a custom repository inheriting from BaseMongoRepository.
public interface ITestRepository : IBaseMongoRepository
{
void DropTestCollection<TDocument>();
void DropTestCollection<TDocument>(string partitionKey);
}
public class TestRepository : BaseMongoRepository, ITestRepository
{
public TestRepository(string connectionString, string databaseName) : base(connectionString, databaseName)
{
}
public void DropTestCollection<TDocument>()
{
MongoDbContext.DropCollection<TDocument>();
}
public void DropTestCollection<TDocument>(string partitionKey)
{
MongoDbContext.DropCollection<TDocument>(partitionKey);
}
}
Briefly
You can use the NuGet package UnitOfWork.MongoDb. It is a wrapper around MongoDB.Driver with some helpful functions and features. You can also find sample code and a video (in Russian).
Read settings for connection
// read MongoDb settings from appSettings.json
services.AddUnitOfWork(configuration.GetSection(nameof(DatabaseSettings)));
// --- OR ----
// use hardcoded
services.AddUnitOfWork(config =>
{
config.Credential = new CredentialSettings { Login = "sa", Password = "password" };
config.DatabaseName = "MyDatabase";
config.Hosts = new[] { "Localhost" };
config.MongoDbPort = 27017;
config.VerboseLogging = false;
});
Injections
namespace WebApplicationWithMongo.Pages
{
public class IndexModel : PageModel
{
private readonly IUnitOfWork _unitOfWork;
private readonly ILogger<IndexModel> _logger;
public IndexModel(IUnitOfWork unitOfWork, ILogger<IndexModel> logger)
{
_unitOfWork = unitOfWork;
_logger = logger;
}
public IPagedList<Order>? Data { get; set; }
}
}
After injection you can get a repository.
Get repository
public async Task<IActionResult> OnGetAsync(int pageIndex = 0, int pageSize = 10)
{
var repository = _unitOfWork.GetRepository<Order, int>();
Data = await repository.GetPagedAsync(pageIndex, pageSize, FilterDefinition<Order>.Empty, HttpContext.RequestAborted);
return Page();
}
GetPagedAsync is one of several helpful implementations provided.
Transactions
If you need ACID operations (transactions), you can use IUnitOfWork something like this (a replica set must be correctly set up). For example:
await unitOfWork.UseTransactionAsync<OrderBase, int>(ProcessDataInTransactionAsync1, HttpContext.RequestAborted, session);
The method ProcessDataInTransactionAsync1 can look like this:
async Task ProcessDataInTransactionAsync1(IRepository<OrderBase, int> repositoryInTransaction, IClientSessionHandle session, CancellationToken cancellationToken)
{
await repositoryInTransaction.Collection.DeleteManyAsync(session, FilterDefinition<OrderBase>.Empty, null, cancellationToken);
var internalOrder1 = DocumentHelper.GetInternal(99);
await repositoryInTransaction.Collection.InsertOneAsync(session, internalOrder1, null, cancellationToken);
logger!.LogInformation("InsertOne: {item1}", internalOrder1);
var internalOrder2 = DocumentHelper.GetInternal(100);
await repositoryInTransaction.Collection.InsertOneAsync(session, internalOrder2, null, cancellationToken);
logger!.LogInformation("InsertOne: {item2}", internalOrder2);
var filter = Builders<OrderBase>.Filter.Eq(x => x.Id, 99);
var updateDefinition = Builders<OrderBase>.Update.Set(x => x.Description, "Updated description");
var result = await repositoryInTransaction.Collection
.UpdateOneAsync(session, filter, updateDefinition, new UpdateOptions { IsUpsert = false }, cancellationToken);
if (result.IsModifiedCountAvailable)
{
logger!.LogInformation("Update {}", result.ModifiedCount);
}
throw new ApplicationException("EXCEPTION! BANG!");
}
This NuGet package is open source: Calabonga.UnitOfWork.MongoDb