Mocking web service calls... sometimes - C#

I am trying to write acceptance tests for an existing app.
I've run into a problem, though, when calling a web service that tells us whether a person is in the office, what their hours are, and who their backup is.
In most of the tests, actually calling the web service is fine... yes, ideally it wouldn't be called at all, but creating inputs and outputs for the many, many times this service is called is a HUGE task.
What I'd like to do is have the mock generate a default result regardless of the input, but it will need to be generated by code based on the parameters, as there is temporal data in the call and result.
And, if I choose, I'd like to be able to set up a different result for a few select inputs to the method on a test-by-test basis.
Basically, by default, people are in the office - unless I set up the mock for them not to be.
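To illustrate, here is roughly the behavior I'm imagining, sketched with Moq and made-up type names (IAvailabilityService and Availability are stand-ins for the real service proxy):

```csharp
// All names here (IAvailabilityService, Availability) are invented
// stand-ins for the real web service proxy.
var service = new Mock<IAvailabilityService>();

// Default: everyone is in the office, with the result computed from the
// inputs so the temporal data stays consistent with the query.
service.Setup(s => s.GetAvailability(It.IsAny<string>(), It.IsAny<DateTime>()))
       .Returns((string person, DateTime when) => new Availability
       {
           Person = person,
           InOffice = true,
           Start = when.Date.AddHours(9),
           End = when.Date.AddHours(17)
       });

// Per-test override: in Moq the last matching setup wins, so this beats
// the catch-all above, but only for "alice".
service.Setup(s => s.GetAvailability("alice", It.IsAny<DateTime>()))
       .Returns(new Availability { Person = "alice", InOffice = false, Backup = "bob" });
```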
Can I do that with Moq? And how?
I'm pretty new to writing tests and mocking so if you require more clarification, please ask.

You may be able to do that with Moq or another dynamic Mock, but it doesn't sound like it would be a good idea.
The way you describe your needs, it sounds like you want the web service stand-in to contain a non-trivial amount of logic - perhaps not logic as complex as the final production web service, but at least a set of heuristic rules. That sounds to me a lot like a Fake instead of a Mock.
In short, a Fake is a lightweight implementation of the dependency.
A Mock, on the other hand, provides more or less pre-canned, static answers to input while verifying whether the input was expected or not. That doesn't sound at all like what you describe.
In short, the best way to achieve what you describe is to write a lightweight version of the web service that acts as you describe. If you need to generate some bogus test data, you can consider using AutoFixture.
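As a rough sketch of what such a Fake could look like (all names here are invented; shape it to your real service contract):

```csharp
using System;
using System.Collections.Generic;

// Assumed service contract and result type, invented for illustration.
public interface IAvailabilityService
{
    Availability GetAvailability(string person, DateTime when);
}

public class Availability
{
    public string Person;
    public bool InOffice;
    public string Backup;
    public DateTime Start, End;
}

// The Fake: a small, real implementation with a default heuristic
// ("everyone is in the office") plus per-test overrides.
public class FakeAvailabilityService : IAvailabilityService
{
    // People explicitly marked out of the office for a given test.
    private readonly Dictionary<string, Availability> _overrides =
        new Dictionary<string, Availability>();

    public void SetOutOfOffice(string person, string backup)
    {
        _overrides[person] = new Availability
        {
            Person = person, InOffice = false, Backup = backup
        };
    }

    public Availability GetAvailability(string person, DateTime when)
    {
        Availability result;
        if (_overrides.TryGetValue(person, out result)) return result;

        // Default rule: in the office 9-5 on the day being asked about,
        // so the temporal data stays consistent with the query.
        return new Availability
        {
            Person = person,
            InOffice = true,
            Start = when.Date.AddHours(9),
            End = when.Date.AddHours(17)
        };
    }
}
```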

There is an often-misunderstood difference between mocks and stubs [Martin Fowler: Mocks Aren't Stubs (http://martinfowler.com/articles/mocksArentStubs.html)].
Basically, mock objects are objects with a mechanism that allows you to verify that the method tested in some testing scenario (mostly in unit tests) is using the object correctly (i.e. verify that the tested method called the mock object's methods in the correct order, set the right parameters, accessed the property three times, and so on). The mock serves the purpose of testing whether the method under test uses the objects beyond its boundaries correctly.
On the other hand, stubs simulate some behavior (this is what I think you seek).

You could try the web-based mocking utility from SourceForge. The application allows you to configure a selected response based on a particular input tag value.
http://sourceforge.net/projects/easymocker
Web Service Mocker is an easy-to-use, completely web-based SOAP web service mocking utility. It is very useful in an SOA development environment during unit testing, component integration testing and non-functional requirement testing.

Related

What are the benefits of mocking the dependencies in unit testing?

I am working on unit tests for my controller and service layers (C#, MVC), and I am using the Moq DLL for mocking the real/dependency objects in unit testing.
But I am a little bit confused about mocking the dependencies versus using real objects. Let's take the example of the unit test method below:
[TestMethod]
public void ShouldReturnDtosWhenCustomersFound_GetCustomers()
{
    // Arrange
    var name = "ricky";
    var description = "this is the test";
    // setup mocked dal to return list of customers
    // when name and description passed to GetCustomers method
    _customerDalMock.Setup(d => d.GetCustomers(name, description)).Returns(_customerList);

    // Act
    List<CustomerDto> actual = _CustomerService.GetCustomers(name, description);

    // Assert
    Assert.IsNotNull(actual);
    Assert.IsTrue(actual.Any());
    // verify all setups of mocked dal were called by service
    _customerDalMock.VerifyAll();
}
In the above unit test method I am mocking the GetCustomers method and returning a customer list, which is already defined and looks like this:
List<Customer> _customerList = new List<Customer>
{
    new Customer { CustomerID = 1, Name = "Mariya", Description = "description" },
    new Customer { CustomerID = 2, Name = "Soniya", Description = "des" },
    new Customer { CustomerID = 3, Name = "Bill", Description = "my desc" },
    new Customer { CustomerID = 4, Name = "jay", Description = "test" },
};
And let's have a look at the assertions comparing the mocked customer object and the actual object:
Assert.AreEqual(_customer.CustomerID, actual.CustomerID);
Assert.AreEqual(_customer.Name, actual.Name);
Assert.AreEqual(_customer.Description, actual.Description);
But here I don't understand why the above unit test will always work fine. I mean, we are only testing (in the assertions) what we passed in or what we return (from the mocked object). And we know that the real/actual object will always return the list or object that we passed.
So what is the point of doing unit testing or mocking here?
The true purpose of mocking is to achieve true isolation.
Say you have a CustomerService class that depends on a CustomerRepository. You write a few unit tests covering the features provided by CustomerService. They all pass.
A month later, a few changes are made, and suddenly your CustomerService unit tests start failing - and you need to find where the problem is.
So you assume:
Because a unit test that tests CustomerService is failing, the problem must be in that class!
Right? Wrong! The problem could be either in CustomerService or in any of its dependencies, i.e., CustomerRepository. If any of its dependencies fail, chances are the class under test will fail too.
Now picture a huge chain of dependencies: A depends on B, B depends on C, ... Y depends on Z. If a fault is introduced in Z, all your unit tests will fail.
And that's why you need to isolate the class under test from its dependencies (may it be a domain object, a database connection, file resources, etc). You want to test a unit.
Your example is too simplistic to show off the real benefit of mocking. That's because your logic under test isn't really doing much beyond returning some data.
But imagine as an example that your logic did something based on wall clock time, say scheduled some process every hour. In a situation like that, mocking the time source lets you actually unit test such logic so that your test doesn't have to run for hours, waiting for the time to pass.
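A minimal sketch of that idea (IClock, FakeClock and the scheduler are invented for illustration):

```csharp
using System;

// IClock abstracts the wall clock so tests can control time (assumed interface).
public interface IClock { DateTime UtcNow { get; } }

// A fake clock a test can move forward at will.
public class FakeClock : IClock { public DateTime UtcNow { get; set; } }

public class HourlyScheduler
{
    private readonly IClock _clock;
    private DateTime _lastRun;

    public HourlyScheduler(IClock clock)
    {
        _clock = clock;
        _lastRun = clock.UtcNow;
    }

    // Returns true at most once per elapsed hour of (fake) clock time.
    public bool ShouldRun()
    {
        if (_clock.UtcNow - _lastRun < TimeSpan.FromHours(1)) return false;
        _lastRun = _clock.UtcNow;
        return true;
    }
}
```

A test can now "advance" two hours instantly by setting FakeClock.UtcNow (or by re-configuring a Moq setup for UtcNow), instead of actually waiting.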
In addition to what has already been said:
We can have classes without dependencies, and then the only thing we need is unit testing, without mocks and stubs.
When we have dependencies there are several kinds of them:
Services that our class uses mostly in a 'fire and forget' way, i.e. services that do not affect the control flow of the consuming code.
We can mock these (and all the other kinds of) services to test that they were called correctly (integration testing), or simply inject them where our code requires them.
Two-way services that provide a result but have no internal state and do not affect the state of the system. They can be dubbed complex data transformations.
By mocking these services you can test your expectations about the code's behavior for different variants of the service implementation, without needing to have all of them.
Services which affect the state of the system, or depend on real-world phenomena or something else out of your control. '#500 - Internal Server Error' gave a good example with the time service.
With mocking you can let time flow at whatever speed (and in whatever direction) is needed. Another example is working with a DB: when unit testing, it is usually desirable not to change the DB state, which is not true of a functional test. For this kind of service, 'isolation' is the main (but not the only) motivation for mocking.
Services with internal state your code depends on.
Consider Entity Framework:
When SaveChanges() is called, many things happen behind the scene. EF detects changes and fixups navigation properties. Also EF won't allow you to add several entities with the same key.
Evidently, it can be very difficult to mock the behavior and the complexity of such dependencies... but usually you do not have to if they are designed well. If you heavily rely on the functionality some component provides, you will hardly be able to substitute that dependency. What is probably needed is isolation again: you don't want to leave traces when testing, so the better approach is to tell EF not to use the real DB. Yes, a dependency means more than a mere interface. More often it is not the method signatures but the contract of expected behavior. For instance, IDbConnection has Open() and Close() methods, which implies a certain sequence of calls.
Sure, this is not a strict classification; it is better to treat these kinds as extremes.
@dcastro writes: "You want to test a unit." Yet that statement doesn't answer the question of whether you should.
Let's not discount integration tests. Sometimes knowing that some composite part of the system has a failure is OK.
As for the example with the chain of dependencies given by @dcastro, we can try to find the place where the bug is likely to be:
Assume Z is a final dependency. We create unit tests without mocks for it. All boundary conditions are known; 100% coverage is a must here. After that we can say that Z works correctly, and if Z fails, our unit tests must indicate it.
The analogy comes from engineering: nobody tests each screw and bolt when building a plane. Statistical methods are used to prove, with some certainty, that the factory producing the parts works fine.
On the other hand, for very critical parts of your system it is reasonable to spend the time to mock the complex behavior of the dependency. Yes, the more complex it is, the less maintainable the tests are going to be - here I'd rather call them specification checks.
Yes, your API and tests can both be wrong, but code review and other forms of testing can assure the correctness of the code to some degree. And as soon as these tests fail after a change, you either need to update the specs and corresponding tests, or find the bug and cover the case with a test.
I highly recommend watching Roy's videos: http://youtube.com/watch?v=fAb_OnooCsQ
In this very case, mocking allowed you to fake a database connection, so that you can run the test in place and in-memory, without relying on any additional resource, i.e. the database. This test asserts that, when the service is called, a corresponding method of the DAL is called.
However, the later asserts on the list and the values in the list aren't necessary. As you correctly noticed, you are just asserting that the values you "mocked" are returned. This would be useful within the mocking framework itself, to assert that the mocking methods behave as expected. But in your code it is just excess.
In the general case, mocking allows one to:
Test behaviour (when something happens, a particular method is executed)
Fake resources (for example, email servers, web servers, HTTP API requests/responses, databases)
In contrast, unit tests without mocking usually let you test state. That is, you can detect a change in the state of an object when a particular method is called.
All previous answers assume that mocking has some value, and then they proceed to explain what that value supposedly is.
For the sake of future generations that might arrive at this question looking to satisfy their philosophical objections on the issue, here is a dissenting opinion:
Mocking, despite being a nifty trick, should be avoided at (almost) all costs.
When you mock a dependency of your code-under-test, you are by definition making two kinds of assumptions:
Assumptions about the behavior of the dependency
Assumptions about the inner workings of your code-under-test
It can be argued that the assumptions about the behavior of the dependency are innocent because they are simply a stipulation of how the real dependency should behave according to some requirements or specification document. I would be willing to accept this, with the footnote that they are still assumptions, and whenever you make assumptions you are living your life dangerously.
Now, what cannot be argued is that the assumptions you are making about the inner workings of your code-under-test are essentially turning your test into a white-box test: the mock expects the code-under-test to issue specific calls to its dependencies, with specific parameters, and as the mock returns specific results, the code-under-test is expected to behave in specific ways.
White-box testing might be suitable if you are building high criticality (aerospace grade) software, where the goal is to leave absolutely nothing to chance, and cost is not a concern. It is orders of magnitude more labor intensive than black-box testing, so it is immensely expensive, and it is a complete overkill for commercial software, where the goal is simply to meet the requirements, not to ensure that every single bit in memory has some exact expected value at any given moment.
White-box testing is labor intensive because it renders tests extremely fragile: every single time you modify the code-under-test, even if the modification is not in response to a change in requirements, you will have to go modify every single mock you have written to test that code. That is an insanely high maintenance level.
How to avoid mocks and white-box testing
Use fakes instead of mocks
For an explanation of what the difference is, you can read this article by Martin Fowler: https://martinfowler.com/bliki/TestDouble.html, but to give you an example, an in-memory database can be used as a fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
Fakes will give you the same amount of isolation as mocks would, but without all the risky and costly assumptions, and most importantly, without all the fragility.
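For instance (the repository interface shape here is an assumption), a List<T>-backed fake can stand in for a database-backed repository, and service-layer LINQ runs against it unchanged:

```csharp
using System.Collections.Generic;
using System.Linq;

// Assumed repository abstraction; adjust to your real interface.
public interface IRepository<T> where T : class
{
    void Add(T item);
    IQueryable<T> Query();
}

// The fake: a small, real, working implementation backed by an
// in-memory list instead of a database.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    public void Add(T item) { _items.Add(item); }

    public IQueryable<T> Query() { return _items.AsQueryable(); }
}
```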
Do integration testing instead of unit testing
Using fakes whenever possible, of course.
For a longer article with my thoughts on the subject, see https://blog.michael.gr/2021/12/white-box-vs-black-box-testing.html

How to do "component testing" in Visual Studio

I am looking for ideas to guide me in implementing component testing for my application. Sure, I use unit testing to test single methods, using TestMethods in a separate project, but at this point I am more interested in testing at a higher level. Say I have a class for caching, and I wrote unit tests for each and every method; each test creates its own instance of the class. And it works fine when I run the tests: each initiates an object of the class and does one thing with it. But this doesn't cover the real-life scenario in which a method is called by other methods and so on. I want to be able to test the entire caching component. How should I do it?
It sounds like you are talking about integration testing. Unit testing, as you say, does a great job of testing classes and methods in isolation, but integration testing verifies that several components work together as expected.
One way to do this is to pick a top-level (or high-level) object, create it with all of its dependencies as "real" objects as well, and test that the public methods all produce the expected results.
In most cases you'll probably have to substitute stubs for the lowest-level classes, like DB or file access classes, and instrument them for the tests, but most objects would be the real thing.
Of course, like most testing efforts, all this is achieved much more easily if your classes have been designed with some sort of dependency injection and with attention to good design patterns like separation of concerns.
All of this can be done using the same unit testing tools you've been using.
I would download the NUnit framework:
http://www.nunit.org/
It's free and simple.

Using unit tests and a test database

How would I use NUnit and a test database to verify my code? In theory I would use mocks (Moq), but my code is more in maintenance and fix-it mode, and I don't have the time to set up all the mocks.
Do I just create a test project, then write tests that actually connect to my test database and execute the code as I would in the app? Then I check the code with asserts and make sure what I'm requesting is what I'm getting back correctly?
How would I use NUnit and a test database to verify my code? In theory
I would use mocks (Moq), but my code is more in maintenance and fix-it
mode, and I don't have the time to set up all the mocks.
Using mocks is only useful if you want to test the exact implementation behavior of a class. That means you are literally asserting that one class calls a specific method on another class. For example: I want to assert that Ninja.Attack() calls Sword.Unsheath().
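Sketched with Moq (the Ninja/Sword types are invented to match the example):

```csharp
// The dependency and the unit under test (illustrative only).
public interface ISword { void Unsheath(); }

public class Ninja
{
    private readonly ISword _sword;
    public Ninja(ISword sword) { _sword = sword; }
    public void Attack() { _sword.Unsheath(); }
}

// In a test, the mock asserts the interaction itself:
// var sword = new Mock<ISword>();
// new Ninja(sword.Object).Attack();
// sword.Verify(s => s.Unsheath(), Times.Once());
```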
Do I just create a test project, then write tests that actually
connect to my test database and execute the code as I would in the
app? Then I check the code with asserts and make sure what I'm
requesting is what I'm getting back correctly?
This is just a plain old unit test. If there are no obstacles to achieving this, that's a good indicator that this is going to be your most effective method of testing. It's practical and highly effective.
There's one other testing tool you didn't mention, which is called a stub. I highly recommend you read this classic article for more info:
http://martinfowler.com/articles/mocksArentStubs.html
Since we are not talking about a theoretical case, this is what I would do. From my understanding, what you want to test is whether your app is properly connecting to the DB and fetching the desired data.
Create a test DB with the same schema
Add some dummy data in that
Open a connection to the DB from the code, request desired data
Write assertions to test what you got from the DB against what you expected
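The steps above might look like this as an NUnit test (the connection string, table and column names are assumptions; substitute your own, and ideally go through your app's data access code rather than raw SQL):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CustomerDataTests
{
    // 1. A dedicated test DB with the same schema (hypothetical connection string).
    private const string TestDb =
        "Server=.;Database=MyAppTest;Trusted_Connection=True;";

    [Test]
    public void GetCustomerByName_ReturnsSeededRow()
    {
        using (var conn = new SqlConnection(TestDb))
        {
            conn.Open();

            // 2. Seed some dummy data.
            using (var seed = new SqlCommand(
                "INSERT INTO Customers (Name) VALUES ('ricky')", conn))
                seed.ExecuteNonQuery();

            // 3. Request the desired data.
            using (var query = new SqlCommand(
                "SELECT COUNT(*) FROM Customers WHERE Name = 'ricky'", conn))
            {
                // 4. Assert that what you got matches what you expected.
                Assert.AreEqual(1, (int)query.ExecuteScalar());
            }
        }
    }
}
```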
Also, I don't think these tests should be called unit tests, because they are not self-contained and depend on other factors, like whether your database is up and running. I would say they fall closer to integration tests, which check that different components of your application work as expected when used together.
(Dan's answer ^^ pretty much sums up what I wanted to say.)

Why should I use a mocking framework instead of fakes?

There are some other variations of this question here at SO, but please read the entire question.
By just using fakes, we look at the constructor to see what kind of dependencies a class has, and then create fakes for them accordingly.
Then we write a test for a method by just looking at its contract (the method signature). If we can't figure out how to test the method that way, shouldn't we rather refactor it (most likely breaking it up into smaller pieces) than look inside it to figure out how to test it? In other words, doing so also gives us quality control.
Aren't mocks a bad thing, since they require us to look inside the method we are going to test, and therefore skip the whole "critique the method by looking at its signature" step?
Update to answer the comment
Say a stub, then (just a dummy class providing the requested objects).
A framework like Moq makes sure that method A gets called with the arguments X and Y. And to be able to set up those checks, one needs to look inside the tested method.
Isn't the important thing (the method contract) forgotten when setting up all those checks, as the focus shifts from the method signature/contract to looking inside the method and creating the checks?
Isn't it better to test the method by just looking at the contract? After all, when we use the method, we'll just look at the contract. So it's quite important that its contract is easy to follow and understand.
This is a bit of a grey area and I think there is some overlap. On the whole, I personally prefer using mock objects.
I guess some of it depends on how you go about testing code - test or code first?
If you follow a test driven design plan with objects implementing interfaces then you effectively produce a mock object as you go.
Each test treats the tested object / method as a black box.
It focuses you onto writing simpler method code in that you know what answer you want.
But above all else it allows you to have runtime code that uses mock objects for unwritten areas of the code.
On the macro level it also allows for major areas of the code to be switched at runtime to use mock objects e.g. a mock data access layer rather than one with actual database access.
Fakes are just stupid dummy objects. Mocks enable you to verify that the control flow of the unit is correct (e.g. that it calls the correct functions with the expected arguments). Doing so is very often a good way to test things. An example: a saveProject() function probably wants to call something like saveToProject() on the objects to be saved. I consider this a lot better than saving the project to a temporary buffer and then loading it to verify that everything is fine (which tests more than it should - it also verifies that the saveToProject() implementation(s) are correct).
As for mocks vs stubs, I usually (not always) find that mocks provide clearer tests and (optionally) more fine-grained control over the expectations. Mocks can be too powerful though, allowing you to tie a test to the implementation so tightly that changing the implementation under test, while leaving the result unchanged, still makes the test fail.
By just looking at a method/function signature, you can test only the output, providing some input (stubs that are only able to feed you the needed data). While this is OK in some cases, sometimes you need to test what's happening inside that method - you need to test whether it behaves correctly.
string readDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
You can directly test returned value here, so stub works just fine.
void saveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }
Here you would much rather test whether the Save method got called with the proper argument (doc). The doc content is not changing, and the fileManager does not output anything. That is because the method under test depends on functionality provided by the interface. And the interface is the contract, so you not only want to test whether your method gives correct results; you also test whether it uses the provided contract in the correct way.
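With Moq, for example, the saveDoc case can be covered by verifying the interaction rather than any return value (IFileManager and Document are the assumed types from the snippets above):

```csharp
// Arrange: a mocked file manager and a document to save.
var fileManager = new Mock<IFileManager>();
var doc = new Document();

// Act: exercise the method under test.
saveDoc(doc, fileManager.Object);

// Assert: fails the test unless Save was called exactly once with this document.
fileManager.Verify(fm => fm.Save(doc), Times.Once());
```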
I see it a little different. Let me explain my view:
I use a mocking framework. When I try to test a class, to ensure it will work as intended, I have to test all the situations may happening. When my class under test uses other classes, I have to ensure in certain test situation that a special exceptions is raised by a used class or a certain return value, and so on... This is hardly to simulate with the real implementations of those classes, so I have to write fakes of them. But I think that in the case I use fakes, tests are not so easy to understand. In my tests I use MOQ-Framework and have the setup for the mocks in my test method. In case I have to analyse my testmethod, I can easy see how the mocks are configured and have not to switch to the coding of the fakes to understand the test.
Hope that helps you finding your answer ...

Should I be using mock objects when unit testing?

In my ASP.NET MVC application I am using IoC to facilitate unit testing. The structure of my application is a Controller -> Service Class -> Repository type of structure. In order to do unit testing, I have an InMemoryRepository class that implements my IRepository and, instead of going out to a database, uses an internal List<T> member. When I construct my unit tests, I just pass in an instance of the in-memory repository instead of my EF repository.
My service classes retrieve objects from the repository through an AsQueryable interface that my repository classes implement, allowing me to use LINQ in my service classes while still abstracting out the data access layer. In practice this seems to work well.
The problem I am seeing is that every time I see unit testing discussed, mock objects are used instead of the in-memory approach I described. On the face of it that makes sense, because if my InMemoryRepository fails, not only will my InMemoryRepository unit tests fail, but that failure will cascade down into my service classes and controllers as well. More realistically, I am concerned about failures in my service classes affecting controller unit tests.
My method also requires more setup for each unit test, and as things become more complicated (e.g. I implement authorization in the service classes) the setup becomes much more complicated, because I then have to make sure each unit test authorizes correctly with the service classes so the main aspect of that unit test doesn't fail. I can clearly see how mock objects would help in that regard.
However, I can't see how to solve this completely with mocks and still have valid tests. For example, one of my unit tests checks that if I call _service.GetDocumentById(5), it gets the correct document from the repository. The only way this is a valid unit test (as far as I understand it) is if I have 2 or 3 documents stored and my GetDocumentById() method correctly retrieves the one with an Id of 5.
How would I have a mocked repository with an AsQueryable call, and how would I make sure I don't mask any issues in my LINQ statements by hardcoding the return statements when setting up the mocked repository? Is it better to keep my service class unit tests using the InMemoryRepository but change my controller unit tests to use mocked service objects?
Edit:
After going over my structure again I remembered a complication that is preventing mocking in controller unit tests, as I forgot my structure is a bit more complicated than I originally said.
A Repository is a data store for one type of object, so if my document service class needs document entities, it creates a IRepository<Document>.
Controllers are passed an IRepositoryFactory. The IRepositoryFactory is a class that is supposed to make it easy to create repositories without having to pass repositories directly into the controller, and without the controller having to worry about which service classes require which repositories. I have an InMemoryRepositoryFactory, which gives the service classes InMemoryRepository<Entity> instantiations, and the same idea applies to my EFRepositoryFactory.
In the controllers' constructors, private service class objects are instantiated using the IRepositoryFactory object passed into the controller.
So for example
public class DocumentController : Controller
{
    private DocumentService _documentService;

    public DocumentController(IRepositoryFactory factory)
    {
        _documentService = new DocumentService(factory);
    }
    ...
}
I can't see how to mock my service layer with this architecture so that my controllers are unit tested and not integration tested. I could have a bad architecture for unit testing, but I'm not sure how to better solve the issues that made me want to make a repository factory in the first place.
One solution to your problem is to change your controllers to demand IDocumentService instances instead of constructing the services themselves:
public class DocumentController : Controller
{
    private IDocumentService _documentService;

    // The controller doesn't construct the service itself
    public DocumentController(IDocumentService documentService)
    {
        _documentService = documentService;
    }
    ...
}
In your real application, let your IoC container inject IRepositoryFactory instances into your services. In your controller unit tests, just mock the services as needed.
(And see Miško Hevery's article about constructors doing real work for an extended discussion of the benefits of restructuring your code like this.)
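With that constructor in place, a controller unit test could look like this sketch (Moq shown; the IDocumentService members, DocumentDto, and the Details action are assumptions for illustration):

```csharp
// Arrange: a mocked service - no repository or factory needed.
var service = new Mock<IDocumentService>();
service.Setup(s => s.GetDocumentById(5)).Returns(new DocumentDto { Id = 5 });

var controller = new DocumentController(service.Object);

// Act: exercise the controller only.
var result = controller.Details(5);

// Assert: the controller asked the service for the right document.
service.Verify(s => s.GetDocumentById(5), Times.Once());
```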
Personally, I would design the system around the Unit of Work pattern, which references repositories. This could make things much simpler and allows you to run more complex operations atomically. You would typically have an IUnitOfWorkFactory supplied as a dependency to the service classes. A service class would create a new unit of work, and that unit of work references repositories. You can see an example of this here.
If I understand correctly, you are concerned about errors in one piece of (low-level) code failing a lot of tests, making it harder to see the actual problem. You give InMemoryRepository as a concrete example.
While your concern is valid, I personally wouldn't worry about a failing InMemoryRepository. It is a test object, and you should keep those test objects as simple as possible. This prevents you from having to write tests for your test objects. Most of the time I assume they are correct (however, I sometimes add self-checks to such a class by writing Assert statements). A test will fail when such an object misbehaves. It's not optimal, but in my experience you normally find out quickly enough what the problem is. To be productive, you will have to draw a line somewhere.
Errors in the controller caused by the service are another cup of tea, IMO. While you could mock the service, this would make testing more difficult and less trustworthy. It would be better NOT to test the service at all - only test the controller! The controller will call into the service, and if your service doesn't behave well, your controller tests will find out. This way you only test the top-level objects in your application. Code coverage will help you spot the parts of your code you don't test. Of course this isn't possible in all scenarios, but it often works well. It works especially well when the service runs against a mocked repository (or unit of work).
Your second concern was that those dependencies require a lot of test setup. I've got two things to say about this.
First of all, I try to limit dependency inversion to only what I need to be able to run my unit tests. Calls to the system clock, database, SMTP server and file system should be faked to make unit tests fast and reliable. Other things I try not to invert, because the more you mock, the less reliable the tests become - you are testing less. Minimizing the dependency inversion (to what you need to have good RTM unit tests) helps make test setup easier.
But (second point) you also need to write your unit tests in a way that makes them readable and maintainable (the hard part of unit testing - or, in fact, of making software in general). Big test setups make tests hard to understand and make test code hard to change when a class gets a new dependency. I've found that one of the best ways to make tests more readable and maintainable is to use simple factory methods in your test class to centralize the creation of the types you need in the tests (I never use mocking frameworks). There are two patterns I use. One is a simple factory method, such as one that creates a valid type:
FakeDocumentService CreateValidService()
{
    return CreateValidService(CreateInitializedContext());
}

FakeDocumentService CreateValidService(InMemoryUnitOfWork context)
{
    return new FakeDocumentService(context);
}
This way tests can simply call these factory methods whenever they need a valid object. Of course, when one of these methods accidentally creates an invalid object, many tests will fail. That is hard to prevent, but easily fixed - and easily fixed means the tests are maintainable.
The other pattern I use is a container type that holds the arguments/properties of the actual object you want to create. This is especially useful when an object has many different properties and/or constructor arguments. Mix this with a factory method for the container and a builder method for the object to create, and you get very readable test code:
[TestMethod]
public void Operation_WithValidArguments_Succeeds()
{
    // Arrange
    var validArgs = CreateValidArgs();
    var service = BuildNewService(validArgs);

    // Act
    service.Operation();
}

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Operation_NegativeAge_ThrowsException()
{
    // Arrange
    var invalidArgs = CreateValidArgs();
    invalidArgs.Age = -1;
    var service = BuildNewService(invalidArgs);

    // Act
    service.Operation();
}
This allows the test to specify only what matters, which is very important for making tests readable! The CreateValidArgs() method could create a container with over 100 arguments that make up a valid SUT (system under test). You have now centralized the default valid configuration in one place. I hope this makes sense.
Your third concern was about not being able to test whether LINQ queries behave as expected with the given LINQ provider. This is a valid problem, because it is quite easy to write LINQ (to expression tree) queries that run perfectly over in-memory objects but fail when querying the database. Sometimes it is impossible to translate a query (because you call a .NET method that has no counterpart in the database), or the LINQ provider has limitations (or bugs). Especially the LINQ provider of Entity Framework 3.5 sucks hard.
However, this is a problem you cannot solve with unit tests by definition, because when you call the database in your tests, it's not a unit test anymore. Unit tests, however, never totally replace manual testing :-)
Still, it's a valid concern. In addition to unit testing you can do integration testing. In this case you run your code with the real provider and a (dedicated) test database. Run each test within a database transaction and rollback the transaction at the end of the test (TransactionScope works great with this!). Note however that writing maintainable integration tests is even harder than writing maintainable unit tests. You have to make sure that the model of your test database is in sync. Each integration test should insert the data it needs for that test in the database, which is often a lot of work to write and maintain. Best is to keep the amount of integration tests to a minimum. Have enough integration tests to make you feel confident about making changes to the system. For instance, having to call a service method with a complicated LINQ statement in a single test will often be enough to test if your LINQ provider is able to build valid SQL out of it. Most of the time I just assume the LINQ provider will have the same behavior as the LINQ to Objects (.AsQueryable()) provider. Again, you will have to draw the line somewhere.
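A sketch of that transaction-per-test idea (MSTest attributes to match the earlier examples; the seeded data and the tested query are placeholders):

```csharp
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerIntegrationTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void Setup()
    {
        // Everything this test does to the database happens inside this scope.
        _scope = new TransactionScope();
        // ...insert the rows this specific test needs here...
    }

    [TestCleanup]
    public void Teardown()
    {
        // Disposing without calling Complete() rolls the transaction back,
        // leaving the test database untouched.
        _scope.Dispose();
    }

    [TestMethod]
    public void ComplexLinqQuery_TranslatesToValidSql()
    {
        // ...call the service method that runs the LINQ statement against
        // the real provider and assert on the result...
    }
}
```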
I hope this helps.
I think your approach is sound for testing the service layer itself but, as you suggest, it would be better if the service layer were mocked out completely for your business-logic and other high-level testing. This makes your higher-level tests easier to implement and maintain, as there's no need to exercise the service layer again if it has already been tested.
