What's a great way to perform integration testing? - C#

We have written our own integration test harness in which we can write a number of "operations" or tests, such as "GenerateOrders". We have a number of parameters we can use to configure the tests (such as the number of orders). We then write a second operation to confirm the test has passed or failed (i.e. that the expected orders do or do not exist).
The tool is used for:
Integration testing
Data generation
End-to-end testing (by mixing and matching a number of tests)
It seems to work well, but it requires development experience to write and maintain new tests. Our test team, who have little C# development experience, would like to get involved.
We are just about to start a new greenfield project, and I am doing some research into the best way to write and maintain integration tests.
The questions are as follows:
How do you perform integration testing?
What tool do you use for it (FitNesse, something custom, NUnit)?
I am looking forward to people's suggestions and comments.
Thanks in advance,
David

Integration testing may be done at the user interface level (via automated functional tests, or AFTs) or at the service/API level.
There are several tools in both cases:
I have worked on projects that successfully used Sahi or Selenium for AFT of web apps, White for AFT of .NET WPF or WinForms apps, SWTBot for AFT of Eclipse rich client apps, and Frankenstein for AFT of Java Swing apps.
FitNesse is useful for service/API-level tests or for tests that run just below the UI. When done right, it has the advantage of producing business-readable tests, i.e. non-developers can read and understand them. Tools like NUnit are less useful for this purpose. SoapUI is particularly suited to testing SOAP web services.
Factors to consider:
Duration: Can you tolerate 8-hour test runs?
Brittleness: AFTs can be quite brittle against an evolving application (e.g. IDs and positions of widgets may change). Adequate skill and effort are needed to avoid hard-coding the parts that change.
Fidelity: How close to the real world do you want it to be? For example, you may have to mock out interactions with a payment gateway unless the provider gives you a test environment that you can pummel with your tests.
Some nuances are captured here.
Full disclosure: The author is associated with the organization behind most (not all) of the above free and open source tools.

You could try the Concordion framework for writing user acceptance tests in HTML files. It takes a BDD-style approach. There is a .NET port as well.

It's not out of beta yet, but StoryTeller looks promising.

Related

Unit testing WCF calls, is it possible and how?

I'm working on a tool that checks some of a large application's functionality. All of my calls to the large application are made using WCF, and the application is too big to create a mock of it. I want to create unit tests for my tool so I won't break functionality while refactoring or extending it.
Is it possible, and how?
You can, but they won't be unit tests in the normal sense of the word, they will be more like automated regression tests. A typical test method might contain the following:
Write expected test data into the database
Make the WCF call
Read data out of the database and assert that it's what you expect
Reset the database if necessary
It takes a lot of care to get this right, but it does work.
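For illustration, here's a minimal NUnit sketch of that pattern; the connection string, the Customers/Orders tables, and the OrderServiceClient proxy are all invented placeholders, not details from your application.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class PlaceOrderRegressionTests
{
    // Hypothetical connection string to the integration test database.
    private const string ConnectionString =
        "Server=.;Database=AppIntegrationTests;Trusted_Connection=True;";

    [Test]
    public void PlaceOrder_WritesOrderRowToDatabase()
    {
        // 1. Write expected test data into the database.
        Execute("INSERT INTO Customers (Id, Name) VALUES (42, 'Test Customer')");

        // 2. Make the WCF call (OrderServiceClient stands in for your generated proxy).
        using (var client = new OrderServiceClient())
        {
            client.PlaceOrder(customerId: 42, productCode: "ABC", quantity: 3);
        }

        // 3. Read data out of the database and assert it is what we expect.
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = 42 AND ProductCode = 'ABC'", conn))
        {
            conn.Open();
            Assert.AreEqual(1, (int)cmd.ExecuteScalar());
        }

        // 4. Reset the database so the next test starts from a known state.
        Execute("DELETE FROM Orders WHERE CustomerId = 42");
        Execute("DELETE FROM Customers WHERE Id = 42");
    }

    private static void Execute(string sql)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

In practice you'd usually move the reset step into a [TearDown] method so it still runs when an assertion fails.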
The phrase "application is too big to create Mock out of it" raised a warning sign. I do not mean to sound harsh but if your application logic cannot be broken down into smaller unit-testable parts then there's something wrong with the architecture of the application.
I've been unit testing WCF-powered Silverlight ViewModels for years using Moq, with great results. The same concept can be applied to ASP.NET/MVC as well.
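To give a rough idea of that approach (the IOrderService interface and OrderListViewModel below are made-up examples, not from the original post), the ViewModel takes the service as an interface, so a test can hand it a Moq fake instead of the real WCF proxy:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Hypothetical abstraction over the WCF service; the real proxy implements it.
public interface IOrderService
{
    IList<string> GetOpenOrderNumbers(int customerId);
}

// Hypothetical ViewModel that takes the service as a constructor dependency.
public class OrderListViewModel
{
    private readonly IOrderService _orderService;
    public OrderListViewModel(IOrderService orderService) { _orderService = orderService; }

    public IList<string> OrderNumbers { get; private set; }

    public void Load(int customerId)
    {
        OrderNumbers = _orderService.GetOpenOrderNumbers(customerId);
    }
}

[TestFixture]
public class OrderListViewModelTests
{
    [Test]
    public void Load_PopulatesOrderNumbersFromService()
    {
        var service = new Mock<IOrderService>();
        service.Setup(s => s.GetOpenOrderNumbers(42))
               .Returns(new List<string> { "ORD-1", "ORD-2" });

        var viewModel = new OrderListViewModel(service.Object);
        viewModel.Load(42);

        Assert.AreEqual(2, viewModel.OrderNumbers.Count);
        service.Verify(s => s.GetOpenOrderNumbers(42), Times.Once());
    }
}

The design choice that makes this possible is constructor injection of the service interface; in production the real WCF client proxy implements the same interface.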
I guess it depends on whether you're looking at unit testing the classes behind the service or functionally testing the service itself.
For functional testing of WCF services I use WCF Storm, which could do with a little polish but works great. You can build and run functional tests as well as performance tests. A very useful little tool.
http://www.wcfstorm.com/

Automated testing for an "untestable" application in C#. Unit/Integration/Functional/System Testing - How To?

I am having a hard time introducing automated testing to a particular application. Most of the material I've found focuses on unit testing. This doesn't help with this application for several reasons:
The application is multi-tiered
Not written with testing in mind (lots of singletons holding application-level settings, objects that aren't mockable, "state" required in many places)
Application is very data-centric
Most "functional processes" are end-to-end (client -> server -> database)
Consider this scenario:
var item1 = server.ReadItem(100);
var item2 = server.ReadItem(200);
client.SomethingInteresting(item1, item2);
server.SomethingInteresting(item1, item2);
Definition of server call:
public void SomethingInteresting(Item item1, Item item2)
{
    // Semi-interesting things before going to the database.
    // Possibly a stored procedure call doing some calculation.
    database.SomethingInteresting(item1, item2);
}
How would one set up some automated tests for that, where there is client interaction, server interaction then database interaction?
I understand that this doesn't constitute a "unit". This is possibly an example of functional testing.
Here are my questions:
Is testing an application like this a lost cause?
Can this be accomplished using something like NUnit or MSTest?
Would the "setup" of this test simply force a full application start up and login?
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Do you guys have any tips on how to start this daunting process?
Any assistance would be greatly appreciated.
If this is a very important application in your company that is expected to last several years, then I would suggest starting small. Start looking for little pieces that you can test as-is, or that you can test after small modifications to the code to make them testable.
It's not ideal, but in a year or so you'll probably be much better off than you are today.
You should look into integration test runners like Cucumber or FitNesse. Depending on your scenario, an integration test runner will either act as a client to your server and make the appropriate calls into it, or it will share some domain code with your existing client and run that code. Your scenarios specify a sequence of calls that should be made and verify that the correct results are output.
I think you'll be able to make at least some parts of your application testable with a little work. Good luck!
Look at Pex and Moles. Pex can analyse your code and generate unit tests that cover all the borderline conditions, etc. You can then introduce a mocking framework to start stubbing out some of the complex bits.
To take it further, you have to start hitting the database with integration tests. To do this you have to control the data in the database and clean up after each test is done. There are a few ways of doing this. You can run SQL scripts as part of the set-up/tear-down for each test. You can use a compile-time AOP framework such as PostSharp to run the SQL scripts by just specifying them in attributes. Or, if your project doesn't want to use PostSharp, you can use a built-in .NET feature called "Context Bound Objects" to inject code that runs the SQL scripts in an AOP fashion. More details here: http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
Basically, here are the steps I would suggest:
1) Install Pex and generate some unit tests. This will be the quickest, least-effort way to generate a good set of regression tests.
2) Analyse the generated unit tests and see if there are other tests or permutations you could add. If so, write them by hand. Introduce a mocking framework if you need one.
3) Start doing end-to-end integration testing. Write SQL scripts to insert data into and delete data from the database. Write a test that runs the insert SQL scripts, calls some very high-level method in your application front end, such as "AddNewUser", and makes sure the data makes it all the way to the back end. The front-end, high-level method may call any number of intermediate methods in various layers.
4) Once you find yourself writing a lot of integration tests in the pattern "run SQL script to insert test data" -> call high-level functional method -> "run SQL script to clean up test data" (sketched below), you can clean up the code and encapsulate the pattern using AOP techniques such as PostSharp or Context Bound Objects.
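Here is a plain, pre-AOP sketch of that insert-script / call / cleanup-script pattern. The script file names, the UserFrontEnd.AddNewUser method, and the connection string are placeholders:

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class AddNewUserIntegrationTests
{
    // Placeholder connection string for the integration test database.
    private const string ConnectionString =
        "Server=.;Database=AppIntegrationTests;Trusted_Connection=True;";

    [SetUp]
    public void InsertTestData()
    {
        RunSqlScript(@"TestData\InsertUserTestData.sql"); // hypothetical script file
    }

    [TearDown]
    public void CleanUpTestData()
    {
        RunSqlScript(@"TestData\DeleteUserTestData.sql"); // hypothetical script file
    }

    [Test]
    public void AddNewUser_PersistsUserToDatabase()
    {
        // Call a high-level front-end method; UserFrontEnd/AddNewUser are stand-ins.
        var frontEnd = new UserFrontEnd();
        frontEnd.AddNewUser("jdoe", "John Doe");

        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE LoginName = 'jdoe'", conn))
        {
            conn.Open();
            Assert.AreEqual(1, (int)cmd.ExecuteScalar());
        }
    }

    private static void RunSqlScript(string path)
    {
        // Assumes the script is a single batch (no GO separators).
        var sql = File.ReadAllText(path);
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Once several fixtures repeat this shape, the [SetUp]/[TearDown] pair is exactly what the attribute-based AOP approach in the linked article factors out.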
Ideally you would refactor your code over time to make it more testable. Any incremental improvement in this area will allow you to write more unit tests, which should drastically increase your confidence in your existing code and your ability to write new code without breaking tested code.
I find such refactoring often gives other benefits too, such as a more flexible and maintainable design. These benefits are good to keep in mind when trying to justify the work.
Is testing an application like this a lost cause?
I've been doing automated integration testing for years and have found it to be very rewarding, both for myself and for the companies I've worked for :) I only started understanding how to make an application fully unit-testable within the last three years or so, and I was doing full integration tests (with custom-implemented/hacked-in test hooks) before that.
So, no, it is not a lost cause, even with the application architected the way it is. But if you know how to do unit testing, you can get a lot of benefits from refactoring the application: stability and maintainability of tests, ease of writing them, and isolation of failures to the specific area of code that caused the failure.
Can this be accomplished using something like nUnit or MS Test?
Yes. There are web page/UI testing frameworks that you can use from .NET unit testing libraries, including one built into Visual Studio.
You might also get a lot of benefit by calling the server directly rather than through the UI (similar benefits to those you get if you refactor the application). You can also try mocking the server, and testing the GUI and business logic of the client application in isolation.
Would the "setup" of this test simply force a full application start up and login?
On the client side, yes. Unless you want to test login, of course :)
On the server side, you might be able to leave the server running, or do some sort of DB restore between tests. A DB restore is painful (and if you write it wrong, will be flaky) and will slow down the tests, but it will help immensely with test isolation.
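For what it's worth, a crude restore-between-tests step (the database name, backup path, and connection string below are assumptions) can be run from a [SetUp] method:

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class ServerIntegrationTests
{
    // Connects to master so the test database itself can be dropped/restored.
    private const string MasterConnectionString =
        "Server=.;Database=master;Trusted_Connection=True;";

    [SetUp]
    public void RestoreKnownDatabaseState()
    {
        // Hypothetical database name and backup file path.
        const string sql = @"
            ALTER DATABASE [AppTestDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            RESTORE DATABASE [AppTestDb] FROM DISK = N'C:\Backups\AppTestDb_baseline.bak' WITH REPLACE;
            ALTER DATABASE [AppTestDb] SET MULTI_USER;";

        using (var conn = new SqlConnection(MasterConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.CommandTimeout = 300; // restores can be slow
            cmd.ExecuteNonQuery();
        }
    }

    // ... tests that exercise the server against the freshly restored database ...
}

Restoring a small baseline backup per fixture rather than per test is a common compromise between isolation and speed.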
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Typical integration tests are based around the concept of a Finite State Machine. Such tests treat the code under test as if it is a set of state transitions, and make assertions on the final state of the system.
So you'd set the DB to a known state beforehand, call a method on the server, then check the DB afterwards.
You should always do state-based assertions on a level below the level of code you are exercising. So if you exercise web service code, validate the DB. If you exercise the client and are running against a real version of the service, validate at the service level or at the DB level. You should never exercise a web service and perform assertions on that same web service (for example). Doing so masks bugs, and can give you no real confidence that you're actually testing the full integration of all the components.
Do you guys have any tips on how to start this daunting process?
Break up your work. Identify all the components of the system (every assembly, each running service, etc.). Try to group them into natural categories. Spend some time designing test cases for each category.
Design your test cases with priority in mind. Examine each test case. Think of a hypothetical scenario that could cause it to fail, and try to anticipate whether that would be a show-stopping bug, a bug that could sit around a while, or one that could be punted to another release. Assign priority to your test cases based on these guesses.
Identify a piece of your application (possibly a specific feature) that has as few dependencies as possible, and write only the highest priority test cases for it (ideally, they should be the simplest/most basic tests). Once you are done, move on to the next piece of the application.
Once you have all the logical pieces of your application (all assemblies, all features) covered by your highest priority test cases, do what you can to get those test cases run every day. Fix test failures in those cases before working on additional test implementation.
Then get code coverage running on your application under test. See what parts of the application you have missed.
Repeat with each successive priority level of test cases, until the whole application is tested at some level, and you ship :)

Any ASP.NET (WebForm) apps which have good unit tests (CodePlex or anywhere)?

I am looking for any ASP.NET (WebForms & C#) app which has some good unit tests in its solution. Good meaning the tests cover different kinds of edge cases and achieve good code coverage. Any app on CodePlex, GitHub or anywhere is fine.
This is for educational purposes, so I prefer smaller apps to large ones.
Any recommendations?
Clarification:
While the app is WebForms, the unit tests I am interested in are more about the business logic, not the UI. Yes, any .NET app would do, but if it is WebForms with some UI testing, all the better.
ScrewTurn wiki is open-source, version 3.x is coded in C# and ASP.NET 3.5, and the source comes bundled with tests for the components.
I personally believe that unit tests in the business logic layer are not sufficient for any web application, no matter what the framework used.
The best policy is to have a serious "smoke test", perhaps kept in an Excel workbook, where you exercise every known possible set of inputs and actions and are able to determine that "everything works" whenever you do a new deployment.
Unit tests are valuable, but they cannot reproduce the behavior of an application in production.
Take a look at this. I don't know exactly if it's what you wanted :) It is ASP.NET with the MVP pattern implemented, and it has some test coverage - .NET Framework 3.5 Bundle
I've just finished a web e-commerce site.
For the business logic I created a separate DLL, and I use NUnit to test it. NUnit is simple to use, it's easy (and fast) to launch from a .bat file, and it also has a graphical interface.
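A minimal NUnit test against such a business-logic DLL might look like this (PriceCalculator is an invented example, not from the project described):

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscountOverOneHundred()
    {
        // PriceCalculator is a hypothetical class from the business-logic DLL.
        var calculator = new PriceCalculator(discountThreshold: 100m, discountRate: 0.10m);

        decimal total = calculator.Total(unitPrice: 60m, quantity: 2); // 120 before discount

        Assert.AreEqual(108m, total); // 120 - 10% = 108
    }
}

From a .bat file you would then point the NUnit console runner at the test assembly, e.g. nunit-console.exe MyCompany.BusinessLogic.Tests.dll.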
In the web site, I use NLog for logging user actions. In this case the best approach (after you have tested it for a while yourself) is to give your site to some end customers and watch the log file every time there is an exception.
I know you are looking for code examples to learn unit testing, but
Foundations of Programming - Building Better Software by Karl Seguin
is a good book to help guide you through the process of unit testing and mocking. It's only 79 pages long and should get you up and running quickly.
http://openmymind.net/FoundationsOfProgramming.pdf

Stress Testing Multi-User Application?

I have an application that I'm building that has had concurrency problems in the past.
This was before implementing any LINQ error handling.
Now, I have some LINQ error handling added to my code, and I was wondering if you could give me tips about how to stress test the hell out of my application. It is super important that everything works when I deploy this thing, so your input would help.
I have two boxes set up at my desk right now to simulate two users doing whatever.
Edit:
Environment: Windows XP
App Type: WinForm
User Count: 15
I strongly suggest using CHESS, a free tool from Microsoft for finding and reproducing concurrency bugs. I have found it invaluable, indispensable, and generally life-saving.
A lot of this depends on the way your application is set up. Do you have unit tests defined? Is it a multi-layer (data/UI/business/whatever) app where you don't need direct human interaction to test the functions that you care about?
There are third-party testing applications that let you set up and run scripts for testing your applications (often by recording steps that you do and then replaying them). WinRunner is but one example. These test systems are usually configurable to some degree.
You can also create a test project (this will be a lot easier if you have a multi-tiered application), and effectively "roll your own" test application. That may give you more control over your tests - you can either choose to do tests with predictable output in a predictable order, or generate random data and test things in random orders at random intervals, or some combination thereof.
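As one sketch of the "roll your own" route, a small harness can spin up several threads acting as simulated users hammering your data layer at once; OrderRepository and its methods below are placeholders for whatever your LINQ-backed code really does:

using System;
using System.Collections.Generic;
using System.Threading;

public static class ConcurrencyStressHarness
{
    // Simulates 'userCount' users each performing 'iterations' read/update cycles.
    public static void Run(int userCount, int iterations)
    {
        var errors = new List<Exception>();
        var errorLock = new object();
        var threads = new List<Thread>();

        for (int u = 0; u < userCount; u++)
        {
            int userId = u; // capture loop variable for the closure
            var thread = new Thread(() =>
            {
                var random = new Random(userId); // per-user seed for repeatability
                for (int i = 0; i < iterations; i++)
                {
                    try
                    {
                        // Hypothetical data-layer calls; substitute your real LINQ operations.
                        var repository = new OrderRepository();
                        var order = repository.GetRandomOrder(random);
                        order.Notes = "updated by simulated user " + userId;
                        repository.Save(order);
                    }
                    catch (Exception ex)
                    {
                        lock (errorLock) { errors.Add(ex); } // collect failures instead of crashing
                    }
                }
            });
            threads.Add(thread);
            thread.Start();
        }

        foreach (var t in threads) t.Join();

        Console.WriteLine("Completed with {0} errors.", errors.Count);
        foreach (var ex in errors) Console.WriteLine(ex.Message);
    }
}

Calling ConcurrencyStressHarness.Run(15, 1000) approximates your 15-user load far more aggressively than two machines at a desk, and collecting the exceptions tells you whether the new LINQ error handling actually holds up.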

System Testing a desktop application

I've got a desktop application written in C#, created using VS2008 Pro, and unit tested with the NUnit framework and the TestDriven.Net plugin for VS2008. I need to conduct system testing on the application.
I've previously done web-based system tests using Bad Boy and the Selenium plugin for Firefox, but I'm new to Visual Studio and C#.
I would appreciate if someone could share their advice regarding this.
System testing will likely need to be done via the UI. This gives you two options:
1) You can manually conduct the test cases by clicking on elements.
2) You can automate the test cases by programming against the UI. There are plenty of commercial tools to do this, or you can use a programming framework like the Microsoft UI Automation framework (see the sketch below); these tend to use the accessibility APIs built into Windows to access your UI.
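For instance, a minimal UI Automation sketch (the window title "My Desktop App" and the "okButton" AutomationId are made-up placeholders) needs references to UIAutomationClient and UIAutomationTypes and looks roughly like this:

using System.Windows.Automation;

public static class UiSmokeTest
{
    public static void ClickOkOnMainWindow()
    {
        // Find the application's main window by its title (placeholder name).
        AutomationElement mainWindow = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "My Desktop App"));

        // Find a button by its AutomationId (placeholder id) anywhere in the window.
        AutomationElement okButton = mainWindow.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "okButton"));

        // Invoke (click) the button through its InvokePattern.
        var invoke = (InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}

Wrapped in NUnit assertions, calls like this let you drive the real UI from the same test framework you are already using.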
Whether you go the manual or automated route depends on how many times you will be running the tests. If you are just going to run them once or twice, don't spend the time automating. You will never earn it back. If you are going to run them often, automating can be very handy.
A word of caution: Automating the UI isn't hard, but it is very brittle. If the application is changing a lot, the tests will require a lot of maintenance.
As Thomas Owens commented on your question, first you must decide what kind of system testing you want to do. Assuming you want to start with functional system tests, prepare the use cases you want to automate. Then you must find the proper tool.
Just for a start:
AutoIt – not a test automation tool as such, but it lets you automate some tasks, so you could record/script use cases. Not really recommended, but it can be done.
HP QuickTest Pro – this can easily be done via recording/scripting with this tool, but it is expensive, so maybe not worth it for personal use.
IBM Rational Robot – same as HP QTP.
PowerShell – you could write scripts in PowerShell and execute them. If you use dedicated IDE-like tools for PowerShell, you can record tests as well. I did some web automation via PowerShell and it worked. With a bit of work, you could probably script around your desktop app.
The best approach would be to try different tools and use the one that suits you best. Try this link and this link.
System tests usually cover use cases, end-to-end scenarios, and other scripted functions that real people execute. These are the tests that don't lend themselves well to automation, as they ask your unit-tested cogs to work with each other. You might have great unit tests for your "nuts" and your "wrenches", but only a comprehensive system test will tell you whether you have the right-sized wrench for the nut at hand, how to select it from and return it to the drawer, etc.
In short - manual tests.
If you're willing to put money down, you could look at something like TestComplete.
Although I haven't really used it yet (our company just bought it), it seems quite nice. You can record clicks and keypresses and stuff, define success criteria, and it will replay the test for you later. It appears to be quite smart about UI changes - it remembers which button you clicked, not just the (x,y) of each click.
It's scriptable, or drag-and-drop programmable.
I'm not affiliated in any way, and this is not an endorsement, because I haven't really formed an opinion of it yet.
Perhaps NUnitForms could be useful for you?
