In my example, I've got two methods for enabling and disabling an account, and I'm going to write a test method for each one.
The problem is that I've got to consider the original status of the data and restore it after testing, even in a sample database, to keep the data consistent for the next test.
public void DisableAndEnableAccount()
{
    var client = new GwIntegrationServiceSoapClient();
    string userName = "admin";

    Account account = client.GetAccountByUsername(userName);
    int accountId = account.Id;
    bool isActiveOriginalValue = account.IsActive;

    if (isActiveOriginalValue)
    {
        // Account starts enabled: disable it, verify, then re-enable to restore the original state.
        client.DisableAccount(accountId);
        account = client.GetAccountByUsername(userName);
        Assert.IsFalse(account.IsActive);

        client.EnableAccount(accountId);
        account = client.GetAccountByUsername(userName);
        Assert.IsTrue(account.IsActive);
    }
    else
    {
        // Account starts disabled: enable it, verify, then disable it again to restore the original state.
        client.EnableAccount(accountId);
        account = client.GetAccountByUsername(userName);
        Assert.IsTrue(account.IsActive);

        client.DisableAccount(accountId);
        account = client.GetAccountByUsername(userName);
        Assert.IsFalse(account.IsActive);
    }
}
I think my test method isn't written in a good way. Any idea how to deal with such a case?
You should use test data (test user accounts) in your tests, not real ones. (In fact, it is strongly recommended to use a separate test DB for your tests, never the real live production DB.) Then you are free to set it up any way you need prior to the test. By the way, it is recommended to do the setup in a separate setUp() method (or one annotated with @Before in JUnit 4).
Note however, that "classic" unit tests should not depend on external entities like a DB or the file system: they should focus on testing one unit (class, method), isolated from the rest of the world. This is usually achieved by dependency injection and mocking, i.e. hiding them behind interfaces, so that in unit tests, you can supply a dummy implementation which e.g. doesn't connect to the DB, just verifies the calls made to it and the parameters passed.
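To make that concrete, here is a minimal sketch of the idea (the IAccountService interface and the FakeAccountService class are made up for illustration, it assumes the existing Account type exposes Id, UserName and IsActive, and it requires System.Collections.Generic): the code under test depends on an interface, and the test supplies an in-memory implementation that never touches the real web service or database.

public interface IAccountService
{
    Account GetAccountByUsername(string userName);
    void EnableAccount(int accountId);
    void DisableAccount(int accountId);
}

// Test-only implementation: keeps accounts in memory instead of calling the real service.
public class FakeAccountService : IAccountService
{
    private readonly Dictionary<string, Account> _accounts = new Dictionary<string, Account>();

    public void Add(Account account) { _accounts[account.UserName] = account; }

    public Account GetAccountByUsername(string userName) { return _accounts[userName]; }

    public void EnableAccount(int accountId) { SetActive(accountId, true); }

    public void DisableAccount(int accountId) { SetActive(accountId, false); }

    private void SetActive(int accountId, bool isActive)
    {
        foreach (var account in _accounts.Values)
        {
            if (account.Id == accountId) { account.IsActive = isActive; }
        }
    }
}

A test can then construct the class under test with a FakeAccountService, set up whatever account state it needs, and assert on the result without worrying about restoring live data afterwards.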
Testing the whole integrated system is still useful; it is just not unit testing but rather integration testing. Unit tests can be much finer grained, easier to understand and maintain, and faster, so whenever you can, it is best to start with unit tests, then, once you are sure the smaller parts are working fine, put together some integration tests to verify that the system works end to end.
You're always going to have trouble restoring the data to its original state using this method. The solution, in general, is to avoid trying to do this, as @Péter Török points out (I also happen to agree almost completely with his post).
Many things could go wrong between the various steps in your test: what if the internet connection is dropped, and you lose connection to the web service? What happens if you make a mistake in your test code and break the test account's live data?
If you absolutely have to have some kind of automated testing against an external web service, I would suggest coming to an arrangement with the owner of the service to be able to reset your test entities automatically, or something like that. This way, you can focus on having your tests actually test functionality, rather than worrying about how to trick the data back to the way it was to start with.
Related
I am working on unit testing for my controller and service layers (C#, MVC), and I am using the Moq library for mocking the real/dependency objects in unit testing.
But I am a little bit confused about mocking the dependencies versus the real objects. Let's take the example of the unit test method below:
[TestMethod]
public void ShouldReturnDtosWhenCustomersFound_GetCustomers()
{
    // Arrange
    var name = "ricky";
    var description = "this is the test";

    // Set up the mocked DAL to return a list of customers
    // when name and description are passed to GetCustomers
    _customerDalMock.Setup(d => d.GetCustomers(name, description)).Returns(_customerList);

    // Act
    List<CustomerDto> actual = _CustomerService.GetCustomers(name, description);

    // Assert
    Assert.IsNotNull(actual);
    Assert.IsTrue(actual.Any());

    // Verify all setups on the mocked DAL were called by the service
    _customerDalMock.VerifyAll();
}
In the above unit test method I am mocking the GetCustomers method and returning a customer list which is already defined, and which looks like this:
List<Customer> _customerList = new List<Customer>
{
    new Customer { CustomerID = 1, Name = "Mariya", Description = "description" },
    new Customer { CustomerID = 2, Name = "Soniya", Description = "des" },
    new Customer { CustomerID = 3, Name = "Bill", Description = "my desc" },
    new Customer { CustomerID = 4, Name = "jay", Description = "test" },
};
And let's have a look at the assertions comparing the mocked Customer object with the actual object:
Assert.AreEqual(_customer.CustomerID, actual.CustomerID);
Assert.AreEqual(_customer.Name, actual.Name);
Assert.AreEqual(_customer.Description, actual.Description);
But here I don't understand why the above unit test will always pass. In the assertions we are only testing against what we passed in, i.e. what the mocked object returns, and we know the mock will always return exactly the list or object that we gave it.
So what is the point of doing unit testing or mocking here?
The true purpose of mocking is to achieve true isolation.
Say you have a CustomerService class, that depends on a CustomerRepository. You write a few unit tests covering the features provided by CustomerService. They all pass.
A month later, a few changes were made, and suddenly your CustomerService unit tests start failing - and you need to find where the problem is.
So you assume:
Because a unit test that tests CustomerService is failing, the problem must be in that class!!
Right? Wrong! The problem could be either in CustomerService or in any of its dependencies, e.g. CustomerRepository. If any of its dependencies fails, chances are the class under test will fail too.
Now picture a huge chain of dependencies: A depends on B, B depends on C, ... Y depends on Z. If a fault is introduced in Z, all your unit tests will fail.
And that's why you need to isolate the class under test from its dependencies (may it be a domain object, a database connection, file resources, etc). You want to test a unit.
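As an illustration of that isolation (the ICustomerRepository interface and its GetById method, and CustomerService.GetCustomer, are assumptions here; the sketch uses Moq and MSTest), the repository is replaced by a mock, so a fault in the real repository or in the database behind it cannot make this test fail:

[TestMethod]
public void GetCustomer_ReturnsCustomerFromRepository()
{
    // Arrange: the dependency is a mock with a scripted answer.
    var repositoryMock = new Mock<ICustomerRepository>();
    repositoryMock
        .Setup(r => r.GetById(42))
        .Returns(new Customer { CustomerID = 42, Name = "Mariya" });

    var service = new CustomerService(repositoryMock.Object);

    // Act
    Customer result = service.GetCustomer(42);

    // Assert: only CustomerService's own logic is being exercised.
    Assert.AreEqual("Mariya", result.Name);
    repositoryMock.Verify(r => r.GetById(42), Times.Once());
}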
Your example is too simplistic to show off the real benefit of mocking. That's because your logic under test isn't really doing much beyond returning some data.
But imagine as an example that your logic did something based on wall-clock time, say it scheduled some process every hour. In a situation like that, mocking the time source lets you actually unit test such logic, so that your test doesn't have to run for hours waiting for the time to pass.
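For example, a sketch of what that could look like (IClock and HourlyScheduler are hypothetical names, and Moq supplies the fake time source):

public interface IClock
{
    DateTime UtcNow { get; }
}

[TestMethod]
public void RunsJobOncePerHour()
{
    var clock = new Mock<IClock>();
    clock.Setup(c => c.UtcNow).Returns(new DateTime(2024, 1, 1, 10, 0, 0, DateTimeKind.Utc));

    // Hypothetical class under test: runs its job when an hour has elapsed since construction.
    var scheduler = new HourlyScheduler(clock.Object);
    scheduler.Poll();
    Assert.AreEqual(0, scheduler.JobsRun);   // nothing due yet

    // "Wait" an hour by moving the fake clock forward, not by sleeping.
    clock.Setup(c => c.UtcNow).Returns(new DateTime(2024, 1, 1, 11, 0, 0, DateTimeKind.Utc));
    scheduler.Poll();
    Assert.AreEqual(1, scheduler.JobsRun);
}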
In addition to what has already been said:
We can have classes without dependencies; for those, the only thing we need is plain unit testing, without mocks or stubs.
When we have dependencies there are several kinds of them:
Services that our class uses mostly in a 'fire and forget' way, i.e. services that do not affect the control flow of the consuming code.
We can mock these (and all other kinds of) services to test that they were called correctly (integration testing), or simply to inject them where our code requires them.
Two-way services that provide a result but have no internal state and do not affect the state of the system. They can be thought of as complex data transformations.
By mocking these services you can test your expectations about the code's behaviour for different variants of the service implementation, without needing to have all of them.
Services which affect the state of the system, or depend on real-world phenomena or something else out of your control. '#500 - Internal Server Error' gave a good example with the time service.
With mocking you can let the time flow at whatever speed (and in whatever direction) is needed. Another example is working with a DB: in a unit test it is usually desirable not to change the DB state, which is not true of a functional test. For this kind of service, isolation is the main (but not the only) motivation for mocking.
Services with internal state your code depends on.
Consider Entity Framework:
When SaveChanges() is called, many things happen behind the scenes. EF detects changes and fixes up navigation properties. Also, EF won't allow you to add several entities with the same key.
Evidently, it can be very difficult to mock the behaviour and complexity of such dependencies... but usually you don't have to if they are designed well. If you heavily rely on the functionality some component provides, you will hardly be able to substitute that dependency. What is probably needed is, again, isolation: you don't want to leave traces when testing, so the better approach is to tell EF not to use the real DB. Yes, a dependency means more than a mere interface; more often than not it is not the method signatures but the contract for expected behaviour. For instance, IDbConnection has Open() and Close() methods, which implies a certain sequence of calls.
Of course, this is not a strict classification; it is better to treat these kinds as extremes.
@dcastro writes: "You want to test a unit." Yet that statement doesn't answer the question of whether you should.
Let's not discount integration tests. Sometimes knowing that some composite part of the system has a failure is OK.
As to the example with the chain of dependencies given by @dcastro, we can try to find the place where the bug is likely to be:
Assume Z is a final dependency. We create unit tests without mocks for it. All boundary conditions are known, and 100% coverage is a must here. After that we can say that Z works correctly, and if Z fails, our unit tests must indicate it.
The analogy comes from engineering: nobody tests each screw and bolt when building a plane. Statistical methods are used to prove, with some certainty, that the factory producing the parts works fine.
On the other hand, for very critical parts of your system it is reasonable to spend the time and mock the complex behaviour of the dependency. Yes, the more complex it is, the less maintainable the tests are going to be; here I'd rather call them specification checks.
Yes, your API and your tests can both be wrong, but code review and other forms of testing can assure the correctness of the code to some degree. And as soon as these tests fail after some change is made, you either need to change the specs and the corresponding tests, or find the bug and cover the case with a test.
I highly recommend watching Roy's videos: http://youtube.com/watch?v=fAb_OnooCsQ
In this particular case, mocking allows you to fake a database connection, so that you can run the test in place and in memory, without relying on any additional resource, i.e. the database. This test asserts that, when the service is called, a corresponding method of the DAL is called.
However, the later assertions on the list and the values in the list aren't necessary. As you correctly noticed, you are just asserting that the values you "mocked" are returned. That would be useful within the mocking framework itself, to assert that the mocking methods behave as expected, but in your code it is just excess.
In the general case, mocking allows one to:
Test behaviour (when something happens, then a particular method is executed)
Fake resources (for example, email servers, web servers, HTTP API request/response, database)
In contrast, unit tests without mocking usually let you test state: you can detect a change in the state of an object after a particular method was called.
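A rough illustration of the difference (OrderService, IEmailSender and Order are hypothetical names): the first test verifies behaviour through a mock, the second verifies state with no mock at all.

[TestMethod]
public void PlacingOrder_SendsConfirmationEmail()   // behaviour: a particular call was made
{
    var emailSender = new Mock<IEmailSender>();
    var service = new OrderService(emailSender.Object);

    service.PlaceOrder("widget", quantity: 2);

    emailSender.Verify(e => e.Send(It.IsAny<string>()), Times.Once());
}

[TestMethod]
public void PlacingOrder_AddsItemToOrder()   // state: the object changed as expected
{
    var order = new Order();

    order.AddItem("widget", quantity: 2);

    Assert.AreEqual(1, order.Items.Count);
}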
All previous answers assume that mocking has some value, and then they proceed to explain what that value supposedly is.
For the sake of future generations that might arrive at this question looking to satisfy their philosophical objections on the issue, here is a dissenting opinion:
Mocking, despite being a nifty trick, should be avoided at (almost) all costs.
When you mock a dependency of your code-under-test, you are by definition making two kinds of assumptions:
Assumptions about the behavior of the dependency
Assumptions about the inner workings of your code-under-test
It can be argued that the assumptions about the behavior of the dependency are innocent because they are simply a stipulation of how the real dependency should behave according to some requirements or specification document. I would be willing to accept this, with the footnote that they are still assumptions, and whenever you make assumptions you are living your life dangerously.
Now, what cannot be argued is that the assumptions you are making about the inner workings of your code-under-test are essentially turning your test into a white-box test: the mock expects the code-under-test to issue specific calls to its dependencies, with specific parameters, and as the mock returns specific results, the code-under-test is expected to behave in specific ways.
White-box testing might be suitable if you are building high criticality (aerospace grade) software, where the goal is to leave absolutely nothing to chance, and cost is not a concern. It is orders of magnitude more labor intensive than black-box testing, so it is immensely expensive, and it is a complete overkill for commercial software, where the goal is simply to meet the requirements, not to ensure that every single bit in memory has some exact expected value at any given moment.
White-box testing is labor intensive because it renders tests extremely fragile: every single time you modify the code-under-test, even if the modification is not in response to a change in requirements, you will have to go modify every single mock you have written to test that code. That is an insanely high maintenance level.
How to avoid mocks and do black-box testing instead
Use fakes instead of mocks
For an explanation of what the difference is, you can read this article by Martin Fowler: https://martinfowler.com/bliki/TestDouble.html but to give you an example, an in-memory database can be used as a fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
Fakes will give you the same amount of isolation as mocks would, but without all the risky and costly assumptions, and most importantly, without all the fragility.
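For example, a sketch of such a fake (the ICustomerRepository interface and its members are assumptions; requires System.Collections.Generic and System.Linq): unlike a mock, it is a real, working implementation, just backed by a list instead of a database.

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public void Save(Customer customer)
    {
        _customers.Add(customer);
    }

    public Customer GetById(int id)
    {
        return _customers.FirstOrDefault(c => c.CustomerID == id);
    }
}

A test then exercises the code under test against this fake exactly as it would against the real repository, without scripting any expectations about which calls will be made or in what order.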
Do integration testing instead of unit testing
Using the fakes whenever possible, of course.
For a longer article with my thoughts on the subject, see https://blog.michael.gr/2021/12/white-box-vs-black-box-testing.html
How would I use NUnit and a test database to verify my code? In theory I would use mocks (Moq), but my code is more in maintenance/fix-it mode and I don't have the time to set up all the mocks.
Do I just create a test project, then write tests that actually connect to my test database and execute the code as I would in the app? Then I check the code with asserts and make sure what I'm requesting is what I'm getting back correctly?
How would I use NUnit and a test database to verify my code? In theory I would use mocks (Moq), but my code is more in maintenance/fix-it mode and I don't have the time to set up all the mocks.
Using mocks is only useful if you want to test the exact implementation behavior of a class. That means you are literally asserting that one class calls a specific method on another class. For example: I want to assert that Ninja.Attack() calls Sword.Unsheath().
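For instance, a sketch of that kind of interaction test with Moq and NUnit, assuming Ninja takes an ISword dependency (the names are borrowed from the example above):

[Test]
public void Attack_UnsheathesSword()
{
    var sword = new Mock<ISword>();
    var ninja = new Ninja(sword.Object);

    ninja.Attack();

    // Assert the implementation behaviour: Attack() called Unsheath() exactly once.
    sword.Verify(s => s.Unsheath(), Times.Once());
}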
Do I just create a test project, then write tests that actually connect to my test database and execute the code as I would in the app? Then I check the code with asserts and make sure what I'm requesting is what I'm getting back correctly?
This is just a plain old unit test. If there are no obstacles to achieving this, that's a good indicator that this is going to be your most effective method of testing. It's practical and highly effective.
There's one other testing tool you didn't mention, which is called a stub. I highly recommend you read this classic article for more info:
http://martinfowler.com/articles/mocksArentStubs.html
Since we are not talking about a theoretical case, this is what I would do. From my understanding, what you want to test is whether your app properly connects to the DB and fetches the desired data:
Create a test DB with the same schema
Add some dummy data in that
Open a connection to the DB from the code, request desired data
Write assertions to test what you got from the DB against what you expected
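A rough sketch of what such a test could look like with NUnit and System.Data.SqlClient (the connection string, table schema and repository names are all made up for illustration):

[Test]
public void GetCustomerById_ReturnsRowFromTestDatabase()
{
    const string connectionString = @"Server=.\SQLEXPRESS;Database=MyAppTest;Integrated Security=true";

    // Arrange: put known dummy data into the test database.
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = new SqlCommand(
            "INSERT INTO Customers (CustomerID, Name) VALUES (1, 'Mariya')", connection))
    {
            command.ExecuteNonQuery();
        }
    }

    // Act: run the production code against the same test database.
    var repository = new CustomerRepository(connectionString);
    var customer = repository.GetById(1);

    // Assert: what came back matches what was inserted.
    Assert.AreEqual("Mariya", customer.Name);
}

In practice you would also clear the table in a setup method so the test can be rerun from a known state.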
Also, I don't think these tests should be called unit tests, because they are not self-contained and depend on other factors, such as whether your database is up and running. I would say they are closer to integration tests, which check that different components of your application work as expected when used together.
(Dan's answer above pretty much sums up what I wanted to say.)
I'm in the midst of rewriting a database application and using Entity Framework to access the DB. Currently I am using MSTest and a copy of the underlying database as part of these tests. My MSTest involves the following code as part of each test:
[TestInitialize()]
public void MyTestInitialize()
{
    transScope = new TransactionScope(TransactionScopeOption.RequiresNew,
        new TransactionOptions { Timeout = new TimeSpan(0, 10, 0) });
}

[TestCleanup()]
public void MyTestCleanup()
{
    Transaction.Current.Rollback();
    transScope.Dispose();
}
Now, this seems to work pretty well for testing and resets the DB between tests. My tests use the DB context to do CRUD operations against the test DB and then roll them back afterward.
I've read a bit about isolating the C# library from the underlying DB for testing, but I'm wondering what this actually buys me. As part of this rewrite, most (but not all) of the code that was in stored procedures has been moved into the C# layer, but a few remain, which are called via triggers on tables. What do I gain from going through the exercise of mocking out that database layer? Frankly, when I look at doing this, it seems like a lot of additional work without any obvious value, but perhaps I'm missing the point here.
Thoughts?
It depends on the kinds of tests you are writing. When writing unit tests, where you want to test one unit of code--typically a class--then usually it's good for two things to hold true:
The tests should run as fast as possible, meaning hundreds of tests per second.
The code you are testing should be isolated from other code such that you can control the way dependencies work and test different kinds of inputs and outputs easily.
Both of these things are difficult to do if you use a real database for all tests. The first because a round-trip to the database usually takes a lot more time than just running some code. The second because you need to have your database setup with lots of different kinds of data, including negative cases and corner cases, and sometimes even make the database fail. It's often easier to instead mock the dependencies of your class and pass in any inputs needed.
That being said, I have written many tests that use a similar pattern to the one you describe and they work well and run relatively fast. Personally, I would use a combination of real unit tests without a database, and tests like the ones you have that are more like functional testing of a component.
In the end, do what works for you.
Here's an integration test I wrote for a class that interacts with a database:
[Test]
public void SaveUser()
{
    // Arrange
    var user = new User();
    // Set a bunch of properties on the above User object

    // Act
    var usersCountPreSave = repository.SearchSubscribersByUsername(user.Username).Count();
    repository.Save(user);
    var usersCountPostSave = repository.SearchSubscribersByUsername(user.Username).Count();

    // Assert
    Assert.AreEqual(usersCountPreSave + 1, usersCountPostSave);
}
It seems to me that I can't test the Save function without involving the SearchSubscribersByUsername function to find out whether the user was successfully saved. I realize that integration tests aren't meant to be unit tests, which are supposed to test one unit of code at a time. But ideally it would be nice if I could test one function of my repository class per test; I just don't know how to accomplish that.
Is what I've written so far fine, or is there a better way?
You have a problem with your test. When you're testing that data is saved into the database, you should be testing that it's in the database, not that the repository says that it's in the database.
If you're testing the functionality of repository, then you can't verify that functionality by asking if it has done it correctly. It's the equivalent of saying to someone 'Did you do this correctly?' They are going to say yes.
Imagine that repository never commits. Your test will pass fine, but the data won't be in the database.
So, what I would do is open a connection (plain SQL) to the database and check that the data has been saved correctly. You only need to do a select count(*) before and after to ensure that the user has been saved. If you do this, you can avoid using SearchSubscribersByUsername as well.
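For illustration, one way to do that before/after count with plain ADO.NET, so the check never goes through the repository (the table and column names here are assumptions; requires System.Data.SqlClient):

private static int CountUsersByUsername(string connectionString, string username)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM Users WHERE Username = @username", connection))
    {
        command.Parameters.AddWithValue("@username", username);
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}

// In the test:
// int before = CountUsersByUsername(connectionString, user.Username);
// repository.Save(user);
// int after = CountUsersByUsername(connectionString, user.Username);
// Assert.AreEqual(before + 1, after);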
If you're testing the functionality of repository, you can't trust repository, by definition.
To unit test something like a "Save" function, you will definitely need some trustworthy channel to check the result of the operation. If you trust SearchSubscribersByUsername (because you have already made some unit tests for that function on its own), you can use it here.
If you don't trust SearchSubscribersByUsername, and you think your unit test could also break because of an error in that function (rather than in Save), you should think about a different channel. Perhaps you can make a bypassing SQL query against your DB to check the result of Save, which may be simpler than the implementation of SearchSubscribersByUsername. However, do not reimplement SearchSubscribersByUsername; that would be pointless. Either way, you will need at least some other function you can trust.
Unless the method you are testing returns definitive information about what you have done I don't see any way to avoid calling other methods. I think you are correct in your assumption that Integration testing needs a different way of thinking from Unit testing.
I would still build tests that focus on individual methods. So in testing Save() I may well use the capabilities of Search(), but my focus is on the edge cases of Save(). I build tests that deal with duplicate insertions or invalid input data. Then later I build a whole raft of Search() tests that deal with the edge cases of Search().
Now one possible way of thinking is that Save and Search have some commonality, a bug in Search might mask a bug in Save. Imagine, for example, if you had a caching layer down there. So possibly an alternative approach is to use some other verification mechanism. For example a direct JDBC call to the database, or alteratively introducing mocking layers at some point in your infrastructure. When building complex Integrated Systems this kind of "Back Door" verification may be essential.
Personally, I've written countless tests very similar to this and think it's fine. The alternative is to stub out the database so that SearchSubscribersByUsername never actually does anything, but that's a great deal of work for what I'd say is little gain.
I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic:
http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
http://haacked.com/archive/2005/12/28/11377.aspx
These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes.
That's great but...
Does this functionality exists somewhere in NUnit natively?
Has this technique been improved upon in the last 4 years?
Is this still the best way to test database-related code?
Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one.
Further, I want to ease this into an existing project that has no testing place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
NUnit now has a [Rollback] attribute, but I prefer to do it a different way. I use the TransactionScope class. There are a couple of ways to use it.
[Test]
public void YourTest()
{
    using (TransactionScope scope = new TransactionScope())
    {
        // your test code here
    }
}
Since you didn't tell the TransactionScope to commit, it will roll back automatically. It works even if an assertion fails or some other exception is thrown.
The other way is to use the [SetUp] to create the TransactionScope and [TearDown] to call Dispose on it. It cuts out some code duplication, but accomplishes the same thing.
[TestFixture]
public class YourFixture
{
    private TransactionScope scope;

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void YourTest()
    {
        // your test code here
    }
}
This is as safe as the using statement in an individual test because NUnit will guarantee that TearDown is called.
Having said all that, I do think that tests that hit the database are not really unit tests. I still write them, but I think of them as integration tests, and I still see them as providing value. One place I use them often is in testing LINQ to SQL code. I don't use the designer; I hand-write the DTOs and attributes. I've been known to get it wrong, and the integration tests help catch my mistakes.
I just went to a .NET user group and the presenter said he used SQLite in test setup and teardown, using the in-memory option. He had to fudge the connection a little and explicitly destroy it, but it gave a clean DB every time.
http://houseofbilz.com/archive/2008/11/14/update-for-the-activerecord-quotmockquot-framework.aspx
I would call these integration tests, but no matter. What I have done for such tests is have my setup methods in the test class clear all the tables of interest before each test. I generally hand write the SQL to do this so that I'm not using the classes under test.
Generally, I rely on an ORM for my data layer and thus I don't write unit tests for much there; I don't feel a need to unit test code that I don't write. For code that I add in that layer, I generally use dependency injection to abstract out the actual connection to the database, so that when I test my code, it doesn't touch the actual database. Do this in conjunction with a mocking framework for best results.
For this sort of testing, I experimented with NDbUnit (working in concert with NUnit). If memory serves, it was a port of DbUnit from the Java platform. It had a lot of slick commands for just the sort of thing you're trying to do. The project appears to have moved here:
http://code.google.com/p/ndbunit/
(it used to be at http://ndbunit.org).
The source appears to be available via this link:
http://ndbunit.googlecode.com/svn/trunk/
Consider creating a database script so that you can run it automatically from NUnit as well as manually for other types of testing. For example, if using Oracle then kick off SqlPlus from within NUnit and run the scripts. These scripts are usually faster to write and easier to read. Also, very importantly, running SQL from Toad or equivalent is more illuminating than running SQL from code or going through an ORM from code. Generally I'll create both a setup and teardown script and put them in setup and teardown methods.
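A rough sketch of what those setup and teardown methods could look like when shelling out to SQL*Plus from NUnit (the credentials, connection alias and script paths are placeholders; requires System.Diagnostics):

[SetUp]
public void RunSetupScript()
{
    RunSqlPlusScript(@"C:\tests\sql\setup.sql");
}

[TearDown]
public void RunTeardownScript()
{
    RunSqlPlusScript(@"C:\tests\sql\teardown.sql");
}

private static void RunSqlPlusScript(string scriptPath)
{
    // Launch sqlplus and wait for the script to finish before the test continues.
    var startInfo = new ProcessStartInfo
    {
        FileName = "sqlplus",
        Arguments = string.Format("testuser/testpassword@TESTDB @\"{0}\"", scriptPath),
        UseShellExecute = false
    };
    using (var process = Process.Start(startInfo))
    {
        process.WaitForExit();
    }
}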
Whether you should be going through the DB at all from unit tests is another discussion. I believe it often does make sense to do so. For many apps the database is the absolute center of action, the logic is highly set based, and all the other technologies and languages and techniques are passing ghosts. And with the rise of functional languages we are starting to realize that SQL, like JavaScript, is actually a great language that was right there under our noses all these years.
Just as an aside, Linq to SQL (which I like in concept though have never used) almost seems to me like a way to do raw SQL from within code without admitting what we are doing. Some people like SQL and know they like it, others like it and don't know they like it. :)
For anyone coming to this thread these days like me, I'd like to recommend trying the Reseed library I'm developing currently for this specific case.
Neither an in-memory DB replacement (lack of features) nor transaction rollback (transactions can't be nested) were suitable options for me, so I ended up with a simple delete/insert cycle to restore the data, and wrote a library to generate those cycles while trying to optimize test speed and simplicity of setup. I'd be happy if it helps anyone else.
Another alternative I'd recommend is using database snapshots to restore data, which is of comparable performance and usability.
Workflow is as follows:
delete existing snapshots;
create db;
insert data;
create snapshot;
execute test;
restore from snapshot;
go to "execute test" until none left;
drop snapshot.
It's suitable if you can have a single data script for all the tests, and it allows you to execute the insertion (which is supposed to be the slowest part) only once; moreover, you don't need a data cleanup script at all.
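For reference, a sketch of the snapshot and restore steps for SQL Server, issued through plain ADO.NET (the database, file path and snapshot names are placeholders):

private static void CreateSnapshot(SqlConnection connection)
{
    // Snapshot the test database after the data has been inserted.
    const string sql =
        "CREATE DATABASE MyAppTest_Snapshot " +
        "ON (NAME = MyAppTest, FILENAME = 'C:\\Snapshots\\MyAppTest_Snapshot.ss') " +
        "AS SNAPSHOT OF MyAppTest";
    using (var command = new SqlCommand(sql, connection))
    {
        command.ExecuteNonQuery();
    }
}

private static void RestoreFromSnapshot(SqlConnection connection)
{
    // Run against master; other connections to the database must be closed first.
    const string sql =
        "RESTORE DATABASE MyAppTest FROM DATABASE_SNAPSHOT = 'MyAppTest_Snapshot'";
    using (var command = new SqlCommand(sql, connection))
    {
        command.ExecuteNonQuery();
    }
}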
For further performance improvement, as such tests could take a lot of time, consider using a pool of databases and tests parallelization.