I have the following code
public static class DatabaseFactory
{
    public static DatabaseProvider Create(string dataSource, ProviderType type)
    {
        // dataSource = "Server\Instance", "MyOracleDB"
        if (type == ProviderType.Sql)
            return new SqlDatabaseProvider($"Data Source={dataSource};Integrated Security=True;MultipleActiveResultSets=True;");
        throw new NotImplementedException("Provider not found");
    }
}
Doing it this way I have to hard-code a connection string for each provider I implement. I'm wondering if there is a dynamic way to retrieve a connection string, or to build one based on a value.
The purpose of a factory is to abstract the creation of an object so that the calling code doesn't need to be aware of the specifics, so that you may perform additional operations after construction, and so that you may return a subclass of the factory's return type.
So it might be more typical that your calling code is not even aware of the database type. Your code may look more like this:
var mainProvider = DatabaseFactory.Create("main");
var backupProvider = DatabaseFactory.Create("backup");
Then your factory might look like this:
public static class DatabaseFactory
{
public static DatabaseProvider Create(string key)
{
var providerType = GetProviderTypeFromConfig(key);
var connectionString = GetConnectionFromConfig(key);
if (providerType == ProviderType.Sql)
return new SqlDatabaseProvider(connectionString);
if (providerType == ProviderType.Oracle)
return new OracleDatabaseProvider(connectionString);
throw new NotImplementedException("Provider not found");
}
}
Now you would need to write the code for GetProviderTypeFromConfig and GetConnectionFromConfig which would go off to some XML/JSON file, or even spin up a DB connection itself, to get the actual values used.
This type of code then becomes easier to test too as each part can be unit tested.
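A rough sketch of what those two helpers could look like, assuming a classic App.config/ConfigurationManager setup (the key-naming convention here is made up for illustration):
// using System.Configuration;
private static ProviderType GetProviderTypeFromConfig(string key)
{
    // e.g. <add key="main.providerType" value="Sql" /> in <appSettings>
    var raw = ConfigurationManager.AppSettings[key + ".providerType"];
    return (ProviderType)Enum.Parse(typeof(ProviderType), raw, ignoreCase: true);
}
private static string GetConnectionFromConfig(string key)
{
    // e.g. <add name="main" connectionString="..." /> in <connectionStrings>
    return ConfigurationManager.ConnectionStrings[key].ConnectionString;
}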
I have an already written C# function (it was written years ago), and I have been asked to cover this method with unit tests.
public string PlaceOrder(int requestId, string orderedby)
{
try
{
using (DatabaseContext dbContext = new DatabaseContext("myConnectionStringHere"))
{
var req = dbContext.Orders.Where(row => row.id == requestId).FirstOrDefault();
if (req == null)
return "not found";
req.status="A";
dbContext.SaveChanges();
return "found";
}
}
catch (Exception ex)
{
return "error";
}
}
Now while unit testing I need to make sure that it does not write anything to the database, so I have to mock it with Moq.
How can I mock it when it contains a using block?
I know the architecture could have been better and design patterns should have been followed, but I am not allowed to change the structure of the application as it is a legacy application.
The general guidance here is to prefer an integration test with an in-memory database (without SQLite) over unit testing.
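For example, an integration-style test against the EF Core InMemory provider could look roughly like this. This is only a sketch: it assumes DatabaseContext gets a constructor that accepts DbContextOptions and that PlaceOrder moves into a class (here called OrderService) that takes the context via its constructor, as discussed further below.
// Requires the Microsoft.EntityFrameworkCore.InMemory package.
// using System.Linq;
// using Microsoft.EntityFrameworkCore;
[Test]
public void PlaceOrder_WithExistingOrder_SetsStatusAndReturnsFound()
{
    var options = new DbContextOptionsBuilder<DatabaseContext>()
        .UseInMemoryDatabase("PlaceOrderTests")
        .Options;
    using (var dbContext = new DatabaseContext(options)) // assumed options-based constructor
    {
        dbContext.Orders.Add(new Order { id = 1, status = "P" });
        dbContext.SaveChanges();
        var sut = new OrderService(dbContext); // hypothetical class wrapping PlaceOrder
        var result = sut.PlaceOrder(1, "tester");
        Assert.AreEqual("found", result);
        Assert.AreEqual("A", dbContext.Orders.Single(o => o.id == 1).status);
    }
}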
Let me suggest four helper libraries which can make your testing easier:
EntityFrameworkCoreMock
Github link
The prerequisite here is to mark your DbSet as virtual like this:
public virtual DbSet<Order> Orders { get; set; }
Then you can create a mock where you can populate your Orders collection with some dummy data:
var initialOrders = new[]
{
new Order { ... },
new Order { ... },
};
var dbContextMock = new DbContextMock<DatabaseContext>(new DbContextOptionsBuilder<DatabaseContext>().Options);
var ordersDbSetMock = dbContextMock.CreateDbSetMock(db => db.Orders, initialOrders);
You have to rewrite the containing class of the PlaceOrder method so that it receives a DatabaseContext parameter in its constructor, which lets you inject dbContextMock.Object during testing.
In the assertion phase you can query your data and make assertions against it. Since you do not call Add, Remove or any other CRUD method, you can only Verify the SaveChanges call.
public void GivenAnExistingOrder_WhenICallPlaceOrder_ThenSaveChangesIsCalledOnce()
{
...
//Assert
dbMock.Verify(db => db.SaveChanges(), Times.Once);
}
public void GivenANonExistingOrder_WhenICallPlaceOrder_ThenSaveChangesIsCalledNever()
{
...
//Assert
dbMock.Verify(db => db.SaveChanges(), Times.Never);
}
EntityFrameworkCore.Testing
Github link
It works more or less in the same way as the previous library.
var dbContextMock = Create.MockedDbContextFor<DatabaseContext>();
dbContextMock.Set<Order>().AddRange(initialOrders);
dbContextMock.SaveChanges();
The assertions work in the same way.
A 3rd (less mature) library is called Moq.EntityFrameworkCore.
If you are really keen to perform unit testing while avoiding an in-memory database, then you should give the MockQueryable library a try.
const int requestId = 1;
var orders = new List<Order>();
var ordersMock = orders.AsQueryable().BuildMockDbSet();
ordersMock.Setup(table => table.Where(row => row.Id == requestId)).Returns(...)
Here you are basically mocking what the result of your Where filter should be. In order to be able to use this, the containing class of PlaceOrder should receive a DbSet<Order> parameter via its constructor.
Or if you have an IDatabaseContext interface then you can use that one as well like this:
Mock<IQueryable<Order>> ordersMock = orders.AsQueryable().BuildMock();
Mock<IDatabaseContext> dbContextMock = ...
dbContextMock.Setup(m => m.ReadSet<Order>()).Returns(ordersMock.Object);
Many things should be changed here:
1:
Do not implement your connection string this way, directly in the code base.
Instead, inject your database into your classes with DI.
This pseudo code should help with the general idea.
public void ConfigureServices(IServiceCollection serviceCollection)
{
    ...
    string connectionString = ...; // read from secure storage / configuration
    serviceCollection.AddDbContext<DatabaseContext>(options =>
    {
        options.UseSqlServer(connectionString);
    });
    ...
}
And then
public class OrderRepository
{
    private readonly IServiceScopeFactory _serviceScopeFactory;
    public OrderRepository(IServiceScopeFactory serviceScopeFactory)
    {
        _serviceScopeFactory = serviceScopeFactory;
    }
    ...
    public string PlaceOrder(int requestId, string orderedby)
    {
        try
        {
            using (var scope = _serviceScopeFactory.CreateScope())
            {
                // resolve the DbContext from the scope rather than newing it up
                var dbContext = scope.ServiceProvider.GetRequiredService<DatabaseContext>();
                var req = dbContext.Orders.Where(row => row.id == requestId).FirstOrDefault();
                if (req == null)
                    return "not found";
                req.status = "A";
                dbContext.SaveChanges();
                return "found";
            }
        }
        catch (Exception ex)
        {
            return "error";
        }
    }
    ...
}
If you want to make an integration test, you can then use an InMemory db to emulate whatever you want.
Or you can connect to a "real" db and do it that way.
If you want to make it a unit test, you can look at this link:
How to setup a DbContext Mock
2:
Returning a string saying found/not found for an order being placed seems extremely counterproductive.
If your aim is to log this information, provide a DI-injected logger that can log it. (Try the ILogger interface; it's Microsoft's logging extension, from the Microsoft.Extensions.Logging NuGet package.)
That should enable you to log with DI very efficiently.
If your aim is to let a possible UI display this message, there is no way the message content should originate from back-end or domain logic.
At least not like this.
At a minimum you should make an interface for a response, and return an implementation of said interface that lives somewhere else (one that contains a UI-friendly message, possibly a stack trace/exception, and other relevant information, like what Id you were trying to place an order on), but even that is a bit like peeing your pants.
You should make it something that happens at the boundary between your UI and domain logic, provided that is what the string is intended for. That is where you would expect to see error handling.
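A minimal sketch of that response idea, with hypothetical names (IPlaceOrderResult is not from the original code):
// Hypothetical contract handed back to the UI layer instead of a bare string.
public interface IPlaceOrderResult
{
    bool Succeeded { get; }
    string Message { get; }      // UI-friendly message
    int RequestId { get; }       // the Id you were trying to place an order on
    Exception Error { get; }     // populated only on failure
}
public class PlaceOrderResult : IPlaceOrderResult
{
    public bool Succeeded { get; set; }
    public string Message { get; set; }
    public int RequestId { get; set; }
    public Exception Error { get; set; }
}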
3:
What is up with the catch? You just return "error"? What error? You lose the stack trace this way.
Someone should be punished for that.
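A sketch of a less lossy version, assuming an ILogger<OrderRepository> has been injected as _logger (that field is an assumption, following the DI advice in point 1):
public string PlaceOrder(int requestId, string orderedby)
{
    try
    {
        // ... existing lookup and save logic ...
        return "found";
    }
    catch (Exception ex)
    {
        // Keep the exception and its stack trace instead of swallowing them.
        _logger.LogError(ex, "PlaceOrder failed for request {RequestId}", requestId);
        throw; // or return a result object that carries the exception, as in point 2
    }
}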
I've been writing various tests for my app and now I've run into a problem that I'm unable to solve.
The test I'm writing is a simple command that executes an action that modifies a database, followed by a query to validate that the values are correct.
In order to connect to the database my BaseContext gets the connection string from an interface:
public BaseContext(DbContextOptions options, IConnectionStringProvider connectionStringProvider)
: base(options)
{
_connectionStringProvider = connectionStringProvider;
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
if (!optionsBuilder.IsConfigured)
{
var connString =_connectionStringProvider.GetConnectionString();
optionsBuilder.UseOracle(connString);
}
}
My connection string provider interface looks like this:
public interface IConnectionStringProvider
{
string GetConnectionString();
Dictionary<string, string> GetConnectionSession();
}
And the implementation looks like this:
public class HttpConnectionStringProvider : IConnectionStringProvider
{
public HttpConnectionStringProvider(IConfiguration configuration, IHttpContextAccessor httpContextAccessor)
{
_configuration = configuration ?? throw new NullReferenceException(nameof(configuration));
_httpContext = httpContextAccessor.HttpContext ?? throw new NullReferenceException("HttpContext");
}
public string GetConnectionString()
{
// Do something with http context and configuration file and return connection string
}
}
All these interfaces are registered using Autofac.
Then when executing the following test:
[Test]
public async Task AddProductTest()
{
string connectionString = "fixed_connection_string";
var mock = new MockRepository(MockBehavior.Default);
var mockConnectionStringProvider = new Mock<IConnectionStringProvider>();
mockConnectionStringProvider.Setup(x => x.GetConnectionString())
.Returns(connectionString);
await ProductModule.ExecuteCommandAsync(new AddProductCommand(1, "nameProduct"));
var products = await ProductModule.ExecuteQueryAsync(new GetProducts(1));
// Assert
Assert.That(products.Status, Is.EqualTo(ResultQueryStatus.Ok));
Assert.That(products.Value, Is.Not.Empty);
Assert.That(products.Value.Any(x => x.Name == "nameProduct"));
}
The IConfiguration and IHttpContextAccessor are mocked using the NSubstitute library. But even when I deleted them, just to test whether IConnectionStringProvider returned the value expected in the setup, it didn't work.
When running the test in debug I see that it steps into the real GetConnectionString() method when it should be mocked. I don't know what I'm doing wrong; I suppose there is something that I don't understand about testing.
I am curious what you are attempting to do:
Mocking out the connection string for something like creating a DB connection doesn't make a lot of sense. It is part of the base functionality of the DI bits for C# service handling during setup.
If you want to mess with this, of course you can do it, but the purpose escapes me a bit.
However, if you simply wish to test ProductModule as something you have written, then it makes more sense.
Then when you make this statement:
ProductModule productModule = new ProductModule( - parameters - );
your parameters likely require an IConnectionStringProvider.
That is where you need to use your mockConnectionStringProvider.Object.
That will allow you to insert your mocked object as the one used in your ProductModule constructor.
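Roughly like this (a sketch only; ProductModule's real constructor signature isn't shown in the question, so the other arguments are placeholders):
var mockConnectionStringProvider = new Mock<IConnectionStringProvider>();
mockConnectionStringProvider.Setup(x => x.GetConnectionString())
    .Returns("fixed_connection_string");
// Hand the mock's .Object to the constructor instead of letting Autofac resolve
// the real HttpConnectionStringProvider.
var productModule = new ProductModule(mockConnectionStringProvider.Object /*, other dependencies */);
await productModule.ExecuteCommandAsync(new AddProductCommand(1, "nameProduct"));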
I apologize for the verbose posting. I don't like seeing those myself, but my question is about structure and I think all the pieces are needed for asking it.
The interesting part is at the bottom, though, so feel free to scroll down to the question.
Here is a Controller. I am injecting a context and a command factory. The controller returns a list of objects that are read from an Oracle database.
public class aController : ControllerBase
{
protected readonly IDB db;
public aController(IContext context, ICommandFactory factory)
{
db = IDB.dbFactory(context, factory);
}
[HttpGet]
public ActionResult<S> GetS()
{
return Ok(db.DbGetS());
}
}
What is currently not injected is the persistence class. There will be a manageable number of stored procedures that are mapped to a model, all hand-coded to a spec. This interface has a factory to construct an implementation (so that I can mock it should the need arise), and our data retrieval method.
public interface IDB
{
public static IDB dbFactory(
IContext context,
ICommandFactory factory)
{
return new DB(context, factory);
}
public S DbGetS();
}
This class implements the interface. It has a constructor that passes the injected items to the base constructor and otherwise does the Oracle interaction by calling generic access methods in the base class.
public class DB: dbBase, IDB
{
public DB(
IContext context,
ICommandFactory factory)
: base(context, factory)
{ }
public S DbGetS()
{
S s = new S();
IEnumerable<S> ss = GetData<S>("proc-name");
return ss.SingleOrDefault();
}
}
Then there is a base class for all the model classes that uses generics and does the heavy lifting. This is very much simplified.
public abstract class dbBase
{
private readonly IContext _context;
private readonly ICommandFactory _commandFactory;
protected delegate IEnumerable<T> ParseResult<T>(IDbCommand cmd);
protected dbBase(IContext context, ICommandFactory factory)
{
_context = context;
_commandFactory = factory;
}
protected IEnumerable<T> GetData<T>(string sproc) where T : ModelBase, new()
{
IEnumerable<T> results = null;
var cmd = this._commandFactory.GetDbCommand(sproc, this._context);
// boilerplate code omitted that sets up the command and executes the query
results = parseResult<T>(cmd); // this method will read from the refCursor
return results;
}
private IEnumerable<T> parseResult<T>(IDbCommand cmd) where T : ModelBase, new()
{
    // This cast is the problem:
    OracleRefCursor rc = (OracleRefCursor)cmd.Parameters["aCursor"];
    using (OracleDataReader reader = rc.GetDataReader())
    {
        while (reader.Read())
        {
            // code omitted that reads the data and returns it
        }
    }
}
And here is the Unit Test that should test the Controller:
public void S_ReturnsObject()
{
// Arrange
var mockFactory = new Mock<ICommandFactory>();
var mockContext = new Mock<IContext>();
var mockCommand = new Mock<IDbCommand>();
var mockCommandParameters = new Mock<IDataParameterCollection>();
var mockParameter = new Mock<IDbDataParameter>();
mockCommandParameters.SetupGet(p => p[It.IsAny<string>()]).Returns(mockParameter.Object);
// Set up the command and parameters
mockCommand.SetupGet(x => x.Parameters)
.Returns(mockCommandParameters.Object);
mockCommand.Setup(x => x.ExecuteNonQuery()).Verifiable();
// Set up the command factory
mockFactory.Setup(x => x.GetDbCommand(
It.IsAny<string>(),
mockContext.Object))
.Returns(mockCommand.Object)
.Verifiable();
var controller = new aController(mockContext.Object, mockFactory.Object);
// Act
var result = controller.GetS();
// omitted verification
All stored procedures have refCursor output parameters that contain the results. The only way to obtain an OracleDataReader for this is to cast the query output parameter to OracleRefCursor. Mocking the reader is therefore not possible, because even though I can get a mock parameter, the test will fail with a cast exception in the ParseResult method. Unless I am missing something.
I fear that I need to cut the Oracle API interactions out, even though it would be nice to at least enter parseResult() as part of the test.
I could inject IDB and replace DbGetS() with a mock version, but then there would not be much code coverage from my test and I won't be able to mock any database connection issues and the like. Also, there will be about a dozen IDB-level interfaces that would all have to be injected.
How should I restructure this to be able to write meaningful tests?
(Disclaimer: The code snippets that I pasted here are for illustration purposes and were heavily edited. The results were not tested and will not compile or run.)
I didn't really expect an answer, or at least, I am fine with not receiving one. As often happens when I ask a question on SO, just laying out the problem in a way that others can understand it makes it clear enough for me to see through.
In this instance, the essence of the question boiled down to there not being a way around using GetDataReader() on the RefCursor. In other words, if you have Oracle Stored Procedures with an output cursor (i.e. not a SELECT result set), you cannot mock the database interaction unless you manage to write your own RefCursor and OracleDataReader. If you think this is wrong, please elaborate. OracleCommand, OracleParameter, and operations on the command (ExecuteNonQuery) can be substituted with System.Data equivalencies that can be mocked.
So what did I do? I reverted the substitution of the Oracle.ManagedDataAccess types with System.Data equivalents (because the former are less verbose) and injected IDB instead.
This is the resulting unit test:
// Arrange
var mockDbS = new Mock<IDB>();
Model.S expected = new Model.S() { var1 = 1, var2 = 2, var3 = 3 };
mockDbS.Setup(d => d.DbGetS()).Returns(expected);
var controller = new aController(mockDbS.Object);
// Act
ActionResult<Model.S> actionResult = controller.GetS();
Model.S actual = ((ObjectResult)actionResult.Result).Value as Model.S;
// Assert
mockDbS.Verify();
Assert.Equal(200, ((ObjectResult)actionResult.Result).StatusCode);
Assert.Equal(expected, actual);
This does not give me the coverage that I wanted, but it is a basic test of the controller actions.
I need to build a Data Access Library to be used from many small applications afterwards.
It will heavily use DataReader objects. The tables may exist with the same structure either in SQL Server or in DB2/400. This means that a method such as
GetItemsByWarehouse()
must be able to run against either a SQL Server DB or DB2. Where it runs depends on server availability and user selection.
What I plan to do (and need advice on) is:
1. Implement the DAL based on the Singleton design pattern to ensure that I will have only one instance of my library.
2. Have a property that will set the connection string.
3. Have a property that will set whether the target server is AS400 or SQL.
I don't know if this course of action is correct. Should I implement point #3, or could I get the type from the connection string?
Also, how should I implement such a method as the one above? Check the property and decide inside the method whether I will use SqlConnection or OleDbConnection, etc.?
I pasted this code from my micro ORM. There are multiple overloads for the constructor to specify which Db you want used.
public class DbAccess : IDisposable
{
public DbAccess()
{
var cnx = ConfigurationManager.ConnectionStrings[0];
if (cnx == null) throw new InvalidOperationException("I need a connection!!!");
Init(cnx.ConnectionString, ProviderFactory.GetProviderByName(cnx.ProviderName));
}
public DbAccess(string connectionStringName)
{
var cnx = ConfigurationManager.ConnectionStrings[connectionStringName];
if (cnx == null) throw new InvalidOperationException("I need a connection!!!");
Init(cnx.ConnectionString, ProviderFactory.GetProviderByName(cnx.ProviderName));
}
public DbAccess(string cnxString,string provider)
{
Init(cnxString,ProviderFactory.GetProviderByName(provider));
}
public DbAccess(string cnxString,DBType provider)
{
Init(cnxString,ProviderFactory.GetProvider(provider));
}
public DbAccess(string cnxString,IHaveDbProvider provider)
{
Init(cnxString, provider);
} //other stuff
}
Note that the DAO (DbAccess) doesn't care about the concrete provider.
Here's how the ProviderFactory looks. Here you can add a method to detect the db and to return a provider.
internal static class ProviderFactory
{
public static IHaveDbProvider GetProviderByName(string providerName)
{
switch (providerName)
{
case SqlServerProvider.ProviderName:return new SqlServerProvider();
case MySqlProvider.ProviderName:return new MySqlProvider();
case PostgresProvider.ProviderName:return new PostgresProvider();
case OracleProvider.ProviderName:return new OracleProvider();
case SqlServerCEProvider.ProviderName:return new SqlServerCEProvider();
case SqliteProvider.ProviderName:return new SqliteProvider();
}
throw new Exception("Unkown provider");
}
public static IHaveDbProvider GetProvider(DBType type)
{
switch (type)
{
case DBType.SqlServer: return new SqlServerProvider();
case DBType.SqlServerCE: return new SqlServerCEProvider();
case DBType.MySql: return new MySqlProvider();
case DBType.PostgreSQL:return new PostgresProvider();
case DBType.Oracle:return new OracleProvider();
case DBType.SQLite:return new SqliteProvider();
}
throw new Exception("Unkown provider");
}
}
For more code snippets and inspiration you can check the Github repo
I would advise against the Singleton pattern; it's much better to let a DI container manage the instance lifetime. Also, the app should use the interface of the DAO, not the concrete instance (this will help you in the future).
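For example, with Microsoft.Extensions.DependencyInjection the registration could look roughly like this (IDbAccess is an assumption here, since the snippet above only shows the concrete DbAccess):
// using Microsoft.Extensions.DependencyInjection;
var services = new ServiceCollection();
// The container owns the lifetime (scoped here) instead of a hand-rolled singleton,
// and consumers depend on the interface rather than on DbAccess directly.
services.AddScoped<IDbAccess>(_ =>
    new DbAccess("connection string from configuration", DBType.SqlServer));
var provider = services.BuildServiceProvider();
var db = provider.GetRequiredService<IDbAccess>();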
Take a look at the Abstract Factory pattern.
You can have an interface with the DAL contracts and an implementation for each context. Using a factory, you can decide which implementation to use in each case. The factory will need the "switch rule" to decide what to use. A sketch of that shape follows below.
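A bare-bones sketch, with made-up names (IItemsDal, SqlServerItemsDal, Db2ItemsDal, and DalTarget are illustrative, not from the question):
// using System;
// using System.Data;
public enum DalTarget { SqlServer, Db2 }
// DAL contract shared by both back ends.
public interface IItemsDal
{
    DataTable GetItemsByWarehouse(string warehouseId);
}
public class SqlServerItemsDal : IItemsDal
{
    private readonly string _connectionString;
    public SqlServerItemsDal(string connectionString) { _connectionString = connectionString; }
    public DataTable GetItemsByWarehouse(string warehouseId)
    {
        // SqlConnection/SqlCommand implementation goes here
        throw new NotImplementedException();
    }
}
public class Db2ItemsDal : IItemsDal
{
    private readonly string _connectionString;
    public Db2ItemsDal(string connectionString) { _connectionString = connectionString; }
    public DataTable GetItemsByWarehouse(string warehouseId)
    {
        // OleDbConnection/iSeries implementation goes here
        throw new NotImplementedException();
    }
}
// The "switch rule" lives in one place.
public static class ItemsDalFactory
{
    public static IItemsDal Create(DalTarget target, string connectionString)
    {
        switch (target)
        {
            case DalTarget.SqlServer: return new SqlServerItemsDal(connectionString);
            case DalTarget.Db2: return new Db2ItemsDal(connectionString);
            default: throw new NotSupportedException("Unknown target: " + target);
        }
    }
}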
Currently in code I have used an object factory to return a processor based on a string tag, which has served its purpose up until now.
using Core;
using Data;
public static class TagProcessorFactory
{
public static ITagProcessor GetProcessor(string tag)
{
switch (tag)
{
case "gps0":
return new GpsTagProcessor();
case "analog_manager":
return new AnalogManagerTagProcessor();
case "input_manager":
return new InputManagerTagProcessor();
case "j1939":
return new J1939TagProcessor(new MemcachedProvider(new[] { "localhost" }, "DigiGateway"), new PgnRepository());
default:
return new UnknownTagProcessor();
}
}
}
Calling Code
var processor = TagProcessorFactory.GetProcessor(tag.Name);
if (!(processor is UnknownTagProcessor))
{
var data = processor.Process(unitId, tag.Values);
Trace.WriteLine("Tag <{0}> processed. # of IO Items => {1}".FormatWith(tag.Name, data.Count()));
}
As you can see, one of my items has dependencies, and I'm trying to execute testing code and want to pass in mock repositories and cache providers, but I can't seem to think of a way to do this.
Is this a bad design, or does anyone have any ideas to fix it and make my factory testable?
Thanks
Since you are using Autofac, you can take advantage of the lookup relationship type:
public class Foo
{
private readonly IIndex<string, ITagProcessor> _tagProcessorIndex;
public Foo(IIndex<string, ITagProcessor> tagProcessorIndex)
{
_tagProcessorIndex = tagProcessorIndex;
}
public void Process(int unitId, Tag tag)
{
ITagProcessor processor;
if(_tagProcessorIndex.TryGetValue(tag.Name, out processor))
{
var data = processor.Process(unitId, tag.Values);
Trace.WriteLine("Tag <{0}> processed. # of IO Items => {1}".FormatWith(tag.Name, data.Count()));
}
}
}
See the TypedNamedAndKeyedServices wiki article for more information. To register the various processors, you would associate each with its key:
builder.RegisterType<GpsTagProcessor>().Keyed<ITagProcessor>("gps0");
builder.RegisterType<AnalogManagerTagProcessor>().Keyed<ITagProcessor>("analog_manager");
builder.RegisterType<InputManagerTagProcessor>().Keyed<ITagProcessor>("input_manager");
builder
    .Register(c => new J1939TagProcessor(new MemcachedProvider(new[] { "localhost" }, "DigiGateway"), new PgnRepository()))
    .Keyed<ITagProcessor>("j1939");
Notice we don't register UnknownTagProcessor. That was a signal to the caller of the factory that no processor was found for the tag, which we express using TryGetValue instead.
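This also helps with the testability concern from the question: IIndex<string, ITagProcessor> (from Autofac.Features.Indexed) is just an interface, so a unit test can hand Foo a mock of it instead of a real container. A sketch with Moq, assuming Tag has settable Name/Values properties:
var mockProcessor = new Mock<ITagProcessor>();
var mockIndex = new Mock<IIndex<string, ITagProcessor>>();
// Moq hands back the out value that was captured at setup time.
ITagProcessor found = mockProcessor.Object;
mockIndex.Setup(i => i.TryGetValue("gps0", out found)).Returns(true);
var foo = new Foo(mockIndex.Object);
var tag = new Tag { Name = "gps0" }; // Values left null for brevity
foo.Process(42, tag);
// The processor keyed by "gps0" was asked to process the tag's values.
mockProcessor.Verify(p => p.Process(42, tag.Values), Times.Once);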
Using something like StructureMap, you could use the ObjectFactory which, when configured, would return you a named concrete instance.
http://structuremap.net/structuremap/index.html
I suggest you look through another SO post. It solves several problems at once, including how to replace constructor values without a mess. Specifically, the parameters to the constructor simply become static fields of a "Context" class, which are read by the constructor of the interior class.