Simple console app tests - C#

I'm currently doing some trainee work & looking for a bit of advice.
I've been set a pretty simple task of writing some unit tests for a service application in C#. The service mainly has methods which query and look up an SQL database for various things. I've been asked to write unit tests for most of these methods, plus other things like simply checking the connection. I've been told to keep it simple and probably just write the tests in a console application. I'm wondering what the best way to go about this would be?
Would simply calling the methods from a console app with hardcoded input be suitable, then just checking what the output is and writing whether it passes to the console? Or is this too simple and nasty?

You can run both MSTest and NUnit tests from the command line and in my opinion that would be far more preferable than writing your own test runner from scratch. Concentrate on writing quality tests, not the scaffolding required to execute them and deliver the results.
I would suggest it's as simple to do it "properly" as it is to craft your own solution.
Note, though, that tests connecting to the database are integration tests, not unit tests.
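For example, a first test might look something like the sketch below. The CustomerService class, its methods and the connection string are made-up placeholders for whatever your service actually exposes, so treat this as a shape rather than working code for your project.

using NUnit.Framework;

[TestFixture]
public class CustomerServiceTests
{
    private CustomerService _service;

    [SetUp]
    public void SetUp()
    {
        // Point at a dedicated test database, never the production one.
        _service = new CustomerService(
            "Server=localhost;Database=MyAppTest;Trusted_Connection=True;");
    }

    [Test]
    public void GetCustomerById_KnownId_ReturnsCustomer()
    {
        var customer = _service.GetCustomerById(42);

        Assert.IsNotNull(customer);
        Assert.AreEqual(42, customer.Id);
    }

    [Test]
    public void CanOpenConnection()
    {
        // "Checking the connection" becomes an ordinary assertion.
        Assert.IsTrue(_service.TestConnection());
    }
}

The resulting test assembly can then be run from the command line with the NUnit console runner (nunit3-console.exe MyTests.dll) or, for newer project types, with dotnet test, so there is still no hand-rolled runner to maintain.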

Related

Which testing method to use?

First of all, I'm new to testing software; I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration testing, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp of how best to test your software, and which method to use.
For example, I'm using GUI elements as well; should I test those? Should I just test them by using the application and visually confirming all is in order? I do know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So, in summary: how do you choose which testing method is best, and where do you draw the line when using that method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, function tests and end-to-end tests.
Unit tests check whether your classes work as intended (JUnit). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes work together in your code. A component can be a lot of things, but it's basically more than unit testing and less than function testing. The goal coverage is around 75%.
Function tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a functionality of the program, and that is what you test. The goal coverage here is around 50%.
End-to-end tests test your whole application. These can be quite heavyweight, and you likely can't and don't want to test everything; this kind of test is there to check that the whole system works. The goal coverage is around 25%.
This is the order of importance too.
There is no such thing as "better" among these, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate the GUI part of the application for this reason and keep the GUI as dumb as possible to avoid errors there. If you can successfully separate the graphics layer, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
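To illustrate the idea, here is a rough MVP-style sketch in C#; the view, presenter and service names are invented, and the only point is that the GUI is reduced to a dumb interface that can be mocked in tests.

// The view exposes no logic of its own, only data and simple notifications.
public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowError(string message);
}

public interface IAuthService
{
    bool Validate(string userName, string password);
}

// All decision-making lives in the presenter, which only knows interfaces,
// so it can be unit tested without ever creating a window.
public class LoginPresenter
{
    private readonly ILoginView _view;
    private readonly IAuthService _auth;

    public LoginPresenter(ILoginView view, IAuthService auth)
    {
        _view = view;
        _auth = auth;
    }

    public void Login()
    {
        if (!_auth.Validate(_view.UserName, _view.Password))
            _view.ShowError("Invalid credentials");
    }
}

In a test, ILoginView and IAuthService are replaced with mocks or fakes (for example with Moq), so the presenter logic gets full coverage while the real form stays out of the picture.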
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not familiar with the other books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (the calculations), then you should write some integration tests that test not just separate methods, but the workflow from start to finish, so try to emulate what a typical user would do. I can hardly imagine a user calling one particular method - most likely it's some specific sequence of methods. If the calculations are OK, then the chance is low that the GUI will display them wrong. Besides, automating the GUI is a time-consuming process, and it will require a lot of skill and energy to build and maintain, as every simple change might break everything. My advice is to start by writing integration tests with different values that cover the most common scenarios.
This answer is specific to the type of application you are involved in - WinForms
For the GUI, use the MVC/MVP pattern. Avoid writing any code in the code-behind file. This will allow you to unit test your UI code. When I say UI code, I mean you will be able to test the code that is invoked when a button is clicked or when any action needs to be taken on a UI event.
You should be able to unit test each of your class files (except the UI code). This will mainly focus on state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So there are two things to focus on: state-based testing and interaction testing.

Unit testing WCF calls, is it possible and how?

I'm working on a tool that checks some of a large application's functionality. All of my calls to the large application are made using WCF; the application is too big to create a mock of it. I want to create unit tests for my tool so I won't break functionality while refactoring or extending it.
Is it possible, and how?
You can, but they won't be unit tests in the normal sense of the word, they will be more like automated regression tests. A typical test method might contain the following:
Write expected test data into the database
Make the WCF call
Read data out of the database and assert that it's what you expect
Reset the database if necessary
It takes a lot of care to get this right, but it does work.
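A rough shape of such a test with NUnit is sketched below. The OrderServiceClient proxy, the table and column names and the connection string are placeholders; the only point is the write -> call -> verify -> reset cycle described above.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderServiceRegressionTests
{
    private const string ConnectionString =
        "Server=localhost;Database=MyAppTest;Trusted_Connection=True;";

    [SetUp]
    public void InsertExpectedTestData()
    {
        Execute("INSERT INTO Orders (Id, Status) VALUES (9001, 'New')");
    }

    [Test]
    public void ApproveOrder_SetsStatusToApproved()
    {
        // Make the WCF call through the generated client proxy.
        using (var client = new OrderServiceClient())
        {
            client.ApproveOrder(9001);
        }

        // Read the data back out and assert that it's what we expect.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT Status FROM Orders WHERE Id = 9001", connection))
        {
            connection.Open();
            Assert.AreEqual("Approved", (string)command.ExecuteScalar());
        }
    }

    [TearDown]
    public void ResetDatabase()
    {
        Execute("DELETE FROM Orders WHERE Id = 9001");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}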
The phrase "application is too big to create a mock of it" raises a warning sign. I do not mean to sound harsh, but if your application logic cannot be broken down into smaller unit-testable parts, then there's something wrong with the architecture of the application.
I've been unit testing WCF-powered Silverlight ViewModels for years using Moq with great results, a concept which can be applied to ASP.NET/MVC as well.
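As an illustration of that approach (all the type names here are invented), the WCF service sits behind an interface so the ViewModel can be tested with Moq and never touches a real endpoint.

using Moq;
using NUnit.Framework;

public interface ICustomerService
{
    string GetCustomerName(int id);
}

public class CustomerViewModel
{
    private readonly ICustomerService _service;
    public string DisplayName { get; private set; }

    public CustomerViewModel(ICustomerService service)
    {
        _service = service;
    }

    public void Load(int id)
    {
        DisplayName = _service.GetCustomerName(id);
    }
}

[TestFixture]
public class CustomerViewModelTests
{
    [Test]
    public void Load_SetsDisplayNameFromService()
    {
        // The WCF proxy is replaced by a mock, so no endpoint is needed.
        var serviceMock = new Mock<ICustomerService>();
        serviceMock.Setup(s => s.GetCustomerName(7)).Returns("Ada Lovelace");

        var viewModel = new CustomerViewModel(serviceMock.Object);
        viewModel.Load(7);

        Assert.AreEqual("Ada Lovelace", viewModel.DisplayName);
    }
}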
I guess it depends whether you're looking at unit testing the classes behind the service or functionally testing the service itself.
For functional testing of WCF services I use WCF Storm which could do with a little polish, but works great. You can build and run functional tests as well as performance tests. Very useful little tool.
http://www.wcfstorm.com/

Automated testing for an "untestable" application in C#. Unit/Integration/Functional/System Testing - How To?

I am having a hard time introducing automated testing to a particular application. Most of the material I've found focuses on unit testing. This doesn't help with this application, for several reasons:
The application is multi-tiered
Not written with testing in mind (lots of singletons hold application level settings, objects not mockable, "state" is required many places)
Application is very data-centric
Most "functional processes" are end to end (client -> server -> database)
Consider this scenario:
var item1 = server.ReadItem(100);
var item2 = server.ReadItem(200);
client.SomethingInteresting(item1, item2);
server.SomethingInteresting(item1, item2);
Definition of the server call:
// In the Server class:
public void SomethingInteresting(Item item1, Item item2)
{
    // Semi-interesting things happen before going to the database,
    // possibly a stored procedure call doing some calculation.
    database.SomethingInteresting(item1, item2);
}
How would one set up some automated tests for that, where there is client interaction, server interaction then database interaction?
I understand that this doesn't constitute a "unit". This is possibly an example of functional testing.
Here are my questions:
Is testing an application like this a lost cause?
Can this be accomplished using something like nUnit or MS Test?
Would the "setup" of this test simply force a full application start up and login?
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Do you guys have any tips on how to start this daunting process?
Any assistance would be greatly appreciated.
If this is a very important application in your company that is expected to last several years, then I would suggest starting small. Start looking for little pieces that you can test, or can test with small modifications to the code that make them testable.
It's not ideal, but in a year or so you'll probably be in a much better position than you are today.
You should look into integration test runners like Cucumber or FitNesse. Depending on your scenario, an integration test runner will either act as a client to your server and make the appropriate calls into the server, or it will share some domain code with your existing client and run that code. You give it scenarios describing a sequence of calls that should be made, and it verifies that the correct results are output.
I think you'll be able to make at least some parts of your application testable with a little work. Good luck!
Look at Pex and Moles. Pex can analyse your code and generate unit tests that will test all the borderline conditions, etc. You can then start introducing a mocking framework to stub out some of the complex bits.
To take it further, you have to start hitting the database with integration tests. To do this you have to control the data in the database and clean up after each test is done. There are a few ways of doing this. You can run SQL scripts as part of the set-up/tear-down for each test. You can use a compile-time AOP framework such as PostSharp to run the SQL scripts by just specifying them in attributes. Or, if your project doesn't want to use PostSharp, you can use a built-in .NET feature called "Context Bound Objects" to inject code that runs the SQL scripts in an AOP fashion. More details here - http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
Basically, the steps I would suggest,
1) Install Pex and generate some unit tests. This will be the quickest, least-effort way to generate a good set of regression tests.
2) Analyse the unit tests and see if there are other tests, or permutations of the tests, you could use. If so, write them by hand. Introduce a mocking framework if you require one.
3) Start doing end-to-end integration testing. Write SQL scripts to insert and delete data in the database. Write a unit test that runs the insert SQL scripts, then calls some very high-level method in your application front end, such as "AddNewUser", and makes sure it makes it all the way to the back end (see the sketch after this list). The front-end, high-level method may call any number of intermediate methods in various layers.
4) Once you find yourself writing a lot of integration tests in this pattern - "run SQL script to insert test data" -> call high-level functional method -> "run SQL script to clean up test data" - you can clean up the code and encapsulate the pattern using AOP techniques such as PostSharp or Context Bound Objects.
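A sketch of what step 3's pattern might look like with NUnit is below. UserManager, AddNewUser, the script paths and the connection string are all placeholders for whatever your application actually exposes.

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class AddNewUserIntegrationTests
{
    private const string ConnectionString =
        "Server=localhost;Database=MyAppTest;Trusted_Connection=True;";

    [SetUp]
    public void SetUp() => RunSqlScript("Scripts/InsertTestData.sql");

    [TearDown]
    public void TearDown() => RunSqlScript("Scripts/CleanUpTestData.sql");

    [Test]
    public void AddNewUser_PersistsUserAllTheWayToTheDatabase()
    {
        // High-level front-end entry point; it may pass through any number
        // of intermediate layers before reaching the database.
        var manager = new UserManager(ConnectionString);
        manager.AddNewUser("jdoe", "John", "Doe");

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE Login = 'jdoe'", connection))
        {
            connection.Open();
            Assert.AreEqual(1, (int)command.ExecuteScalar());
        }
    }

    private static void RunSqlScript(string path)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(File.ReadAllText(path), connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Once this pattern repeats across many tests, the set-up and tear-down calls are exactly the part the answer suggests pulling out into attributes or another AOP mechanism.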
Ideally you would refactor your code over time so you can make it more testable. Ideally any incremental improvement in this area will allow you to write more unit tests, which should drastically increase your confidence in your existing code, and your ability to write new code without breaking existing tested code.
I find such refactoring often gives other benefits too, such as a more flexible and maintainable design. These benefits are good to keep in mind when trying to justify the work.
Is testing an application like this a lost cause?
I've been doing automated integration testing for years and have found it very rewarding, both for myself and for the companies I've worked for :) I only really came to understand how to make an application fully unit testable in the last 3 years or so, and before that I was doing full integration tests (with custom-implemented/hacked-in test hooks).
So, no, it is not a lost cause, even with the application architected the way it is. But if you know how to do unit testing, you can get a lot of benefits from refactoring the application: stability and maintainability of tests, ease of writing them, and isolation of failures to the specific area of code that caused the failure.
Can this be accomplished using something like nUnit or MS Test?
Yes. There are web page/UI testing frameworks that you can use from .Net unit testing libraries, including one built into Visual Studio.
You might also get a lot of benefit by calling the server directly rather than through the UI (similar benefits to those you get if you refactor the application). You can also try mocking the server, and testing the GUI and business logic of the client application in isolation.
Would the "setup" of this test simply force a full application start up and login?
On the client side, yes. Unless you want to test login, of course :)
On the server side, you might be able to leave the server running, or do some sort of DB restore between tests. A DB restore is painful (and if you write it wrong, will be flaky) and will slow down the tests, but it will help immensely with test isolation.
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Typical integration tests are based around the concept of a Finite State Machine. Such tests treat the code under test as if it is a set of state transitions, and make assertions on the final state of the system.
So you'd set the DB to a known state before hand, call a method on the server, then check the DB afterwards.
You should always do state-based assertions on a level below the level of code you are exercising. So if you exercise web service code, validate the DB. If you exercise the client and are running against a real version of the service, validate at the service level or at the DB level. You should never exercise a web service and perform assertions on that same web service (for example). Doing so masks bugs, and can give you no real confidence that you're actually testing the full integration of all the components.
Do you guys have any tips on how to start this daunting process?
Break up your work. Identify all the components of the system (every assembly, each running service, etc). Try to piece those apart into natural categories. Spend some time designing test cases for each category.
Design your test cases with priority in mind. Examine each test case. Think of a hypothetical scenario that could cause it to fail, and try to anticipate whether it would be a show stopping bug, if the bug could sit around a while, or if it could be punted to another release. Assign priority on your test cases based off these guesses.
Identify a piece of your application (possibly a specific feature) that has as few dependencies as possible, and write only the highest priority test cases for it (ideally, they should be the simplest/most basic tests). Once you are done, move on to the next piece of the application.
Once you have all the logical pieces of your application (all assemblies, all features) covered by your highest priority test cases, do what you can to get those test cases run every day. Fix test failures in those cases before working on additional test implementation.
Then get code coverage running on your application under test. See what parts of the application you have missed.
Repeat with each successive higher priority set of test cases, until the whole application is tested at some level, and you ship :)

Integration Testing vs. Unit Testing

I've recently started reading The Art of Unit Testing, and the light came on regarding the difference between Unit tests and Integration tests. I'm pretty sure there were some things I was doing in NUnit that would have fit better in an Integration test.
So my question is, what methods and tools do you use for Integration testing?
In my experience, you can use (mostly) the same tools for unit and integration testing. The difference is more in what you test, not how you test. So while setup, code tested and checking of results will be different, you can use the same tools.
For example, I have used JUnit and DBUnit for both unit and integration tests.
At any rate, the line between unit and integration tests can be somewhat blurry. It depends on what you define as a "unit"...
Selenium along with JUnit for unit + integration testing, including the UI.
Integration tests are the "next level" for people passionate about unit testing.
NUnit itself can be used for integration testing (no tool change).
Example scenario:
A unit test was created using NUnit with mocks (wherever it touches the DB/API).
To turn it into an integration test, we do as follows:
instead of mocks, use the real DB
leads to data input in the DB
leads to data corruption
leads to deleting and recreating the DB on every test
leads to building a framework for data management (tool addition?)
As you can see, from #2 onwards we are heading into unfamiliar territory as unit test developers, even though the tool remains the same.
leads to you wondering: why so much time for integration test setup?
leads to: shall I stop unit testing, since doing both kinds of tests takes more time?
leads to: we only do integration testing
leads to: would all devs agree to this? (some devs might hate testing altogether)
leads to: since there are no unit tests, there is no code coverage.
Now we are heading into issues with business goals and developer psychology.
I think I answered your question a bit more than needed. Anyway, if you'd like to read more and you think unit tests are a danger, then head to this.
1) Method: the Test Point Metrics approach is, in my view, the best approach in any environment. With this approach we can not only do unit and integration testing, but also validate the requirements.
The time to write the Test Point Metrics is right after understanding the requirements.
A template for the Test Point Metrics is available here:
http://www.docstoc.com/docs/80205542/Test-Plan
Typically there are 3 kinds of testing:
1. Manual
2. Automated
3. Hybrid approach
The Test Point Metrics approach works in all of the above cases.
2) Tool:
The tool will depend upon the requirements of the project; anyhow, the following are the best tools according to my R&D:
1. QTP
2. Selenium
3. AppPerfect
For a clearer answer about tools, please specify your type of project.
I mostly use JUnit for unit testing, in combination with Mockito to mock/stub out dependencies, so I can test my unit of code in isolation.
Integration tests normally involve 'integration' with an external system/module like a database, message queue, framework, etc., so to test these your best bet is to use a combination of tools.
For example, I use JUnit as well, but rather than mocking out the dependencies I actually configure those dependencies as the calling code would. In addition, I test a flow of control, so that each method is not tested in isolation as it is in unit testing, but together with the others. Regarding database connectivity, I use an embedded database with some dummy test data, etc.
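In a C#/NUnit project the same embedded-database idea could look roughly like the sketch below, using an in-memory SQLite database (Microsoft.Data.Sqlite) purely as an illustration; the table and the code under test would of course be your own.

using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class EmbeddedDatabaseIntegrationTests
{
    [Test]
    public void InMemoryDatabase_CanRoundTripDummyData()
    {
        // The database lives only for the duration of the test,
        // so every run starts from a known, controllable state.
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();

            var create = connection.CreateCommand();
            create.CommandText = "CREATE TABLE Tenants (Id INTEGER, Name TEXT)";
            create.ExecuteNonQuery();

            var insert = connection.CreateCommand();
            insert.CommandText = "INSERT INTO Tenants VALUES (1, 'Test Tenant')";
            insert.ExecuteNonQuery();

            var query = connection.CreateCommand();
            query.CommandText = "SELECT Name FROM Tenants WHERE Id = 1";
            Assert.AreEqual("Test Tenant", (string)query.ExecuteScalar());
        }
    }
}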

Unit testing database application with business logic performed in the UI

I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database; it manages around 1,000 rental units, about 3k tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static and controllable dataset, so that requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common Use Case scenarios and translate them to an application that uses your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework.
Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens if you enter unexpected values?
Get the green light again. If the unit test fails, then it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting, you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your business logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which involves creating a mock tenant object and saving "him" to the DB. For your next test you would write "GetTenant" and try to get the tenant you just created from the DB into your in-memory representation, then compare your first and second tenant to make sure all fields match. Etc.
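That round trip might look roughly like this with NUnit; TenantRepository, Tenant and the connection string are placeholders for your own types.

using NUnit.Framework;

[TestFixture]
public class TenantRepositoryTests
{
    private TenantRepository _repository;

    [SetUp]
    public void SetUp()
    {
        // A dedicated test database keeps these tests away from live data.
        _repository = new TenantRepository(
            "Server=localhost;Database=RentalsTest;Trusted_Connection=True;");
    }

    [Test]
    public void AddNewTenant_ThenGetTenant_ReturnsMatchingFields()
    {
        var tenant = new Tenant { Name = "Jane Roe", Unit = "12B" };

        int id = _repository.AddNewTenant(tenant);
        var fetched = _repository.GetTenant(id);

        Assert.AreEqual(tenant.Name, fetched.Name);
        Assert.AreEqual(tenant.Unit, fetched.Unit);
    }
}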
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming. Here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is, within your UI classes, isolate all the business logic and the different business-related functions, and then within each UI constructor or Page_Load have unit test calls that exercise each of the business functions. For improved readability you could put a #region around the business functions.
For your long-term benefit, you should study design patterns. Pick a pattern that suits your project's needs and redo the project using that design pattern.
It depends on the language you are using, but in general start with a simple testing class that uses some made-up data (but still something 'real') to test your code with. Make it simulate what would happen in the app. If you are making a change in a particular part of the app, write something that works before you change the code. Since you have already written the code, getting testing in place is going to be quite a challenge if you try to test the entire app, so I would suggest starting small. From now on, though, write the unit test first, then write your code. You might also consider refactoring, but I would weigh the costs of refactoring versus rewriting as you add unit tests along the way.
I haven't tried adding tests to a legacy application, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI into a separate layer, you may add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is really a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest that you use some mocking tools and DbUnit-style tools while creating your tests, to simulate database-related issues.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or xUnit.net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start by writing the tests first.
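For example, a first test target could be as small as the class below; the class and its fee rule are made up, but this is the scale of thing to start with.

using NUnit.Framework;

// A small, self-contained class with real behavior and no dependencies.
public class LateFeeCalculator
{
    public decimal Calculate(decimal rent, int daysLate)
    {
        if (daysLate <= 0) return 0m;
        return rent * 0.05m + daysLate * 2m;   // 5% penalty plus $2 per day late
    }
}

[TestFixture]
public class LateFeeCalculatorTests
{
    [Test]
    public void NoFeeWhenPaidOnTime()
    {
        var calculator = new LateFeeCalculator();
        Assert.AreEqual(0m, calculator.Calculate(1000m, 0));
    }

    [Test]
    public void FeeGrowsWithDaysLate()
    {
        var calculator = new LateFeeCalculator();
        Assert.AreEqual(1000m * 0.05m + 3 * 2m, calculator.Calculate(1000m, 3));
    }
}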
