I have recently taken over a team that uses Ruby with Cucumber for BDD-style integration and UI tests. My unit's Chief Executive Officer (who does not have a software background) wants to see KPIs for testing coverage... specifically lines of code covered, etc., that you would normally see when using a unit testing framework such as NUnit or xUnit. With BDD, you generally don't have these types of metrics... or am I wrong? If there is someone out there who knows how to get lines-of-code-covered metrics from a BDD-style framework like Ruby/Cucumber, your thoughts would be appreciated!
Your CEO is right. BDD and Cucumber are certainly compatible with measuring code coverage.
If I understand correctly, you're using Ruby Cucumber to test a C# application. Set up the test instance of the C# application to collect coverage as it runs, run your Cucumber tests, stop the application, and generate a coverage report. Unfortunately, I don't know C#, so I can't give details.
BDD is not only acceptance testing (Cucumber); BDD uses acceptance tests to drive important scenarios and unit tests to drive details. (If the team you've taken over uses only Cucumber for testing, they probably have too many Cucumber scenarios and their test suite takes a long time to run.) Assuming you do have both acceptance and unit tests, you'll want to merge coverage from both types of tests and report total coverage. The coverage achieved by the acceptance test suite or the unit test suite alone is much less important than the coverage achieved by the two suites together.
I have a class that I need to Unit Test.
For background I'm developing in c# and using NUnit, but my question is more theoretical:
I don't know if I've written enough test methods and if I checked all the scenarios.
Is there a known working method/best practices/collection of rules for that?
Something like
"check every method in your class ...bla bla "
"check all the inserts to DB ...bla bla "
(this is a silly example of possible rules but if I had something not silly on my mind I wouldn't ask this question)
There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric that obscures the actual quality of the tests.
Personally I think you would get better results from investigating test driven development - using this approach you know you have good coverage (both in terms of lines of code and in terms of functionality of your class) because you have been writing the tests to exercise your class before you wrote the class methods themselves.
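For instance, here is a minimal sketch of that test-first rhythm in C# with NUnit (the ShoppingCart class is hypothetical): the test is written first, watched failing, and only then is the class filled in to make it pass.

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Hypothetical class under test. In TDD it is written *after* the
    // test below already exists and has been seen to fail.
    public class ShoppingCart
    {
        private readonly List<decimal> _prices = new List<decimal>();
        public void Add(decimal price) => _prices.Add(price);
        public decimal Total => _prices.Sum();
    }

    [TestFixture]
    public class ShoppingCartTests
    {
        [Test]
        public void Total_IsSumOfAddedPrices()
        {
            var cart = new ShoppingCart();
            cart.Add(9.99m);
            cart.Add(5.01m);
            Assert.AreEqual(15.00m, cart.Total);
        }
    }

Because every line of ShoppingCart exists only to satisfy a test, line coverage tends to follow automatically.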
You might want to look at your test coverage. NCover is a popular code coverage tool for .NET that works well with NUnit.
You can look into NCover or the Visual Studio code coverage tool; both support NUnit.
The measure used for quantifying test coverage of code is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own coverage goals. I don't know if there is an industry "best practice" number, but most of my projects set this number at 80%.
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tools for code coverage is NCover.
As others have mentioned, coverage provides one metric by which to measure the quality of your tests, but it does not tell you anything about how well the tests test your code. Just because a line is executed, it does not mean that all possible permutations of that line have been executed.
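To make that concrete, here is a hedged C# sketch (all names are hypothetical): the method below reports 100% line coverage from a single test, yet whole classes of inputs are never exercised.

    using NUnit.Framework;

    public static class Pricing
    {
        // One executable line, but many behaviours hide in it:
        // zero or negative quantities, discounts above 100%, rounding...
        public static decimal Discounted(decimal price, int quantity, decimal discount)
            => price * quantity * (1 - discount);
    }

    [TestFixture]
    public class PricingTests
    {
        // This single test executes the line above, so a coverage tool marks
        // it covered, even though the edge cases were never tested.
        [Test]
        public void Discounted_HappyPath()
        {
            Assert.AreEqual(90m, Pricing.Discounted(100m, 1, 0.1m));
        }
    }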
You may find some use in a tool such as Pex, which will test your code with various inputs to see what it does in those situations. This will give you good coverage (as it will tailor the inputs to exercise all of the possible paths through your code), but it will also give you good coverage of the possible inputs (like ensuring your methods are tested with null inputs, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
There are other intriguing initiatives, such as mutation testing: a tool removes or alters lines of code, recompiles, and re-runs the tests. If no tests fail, it assumes you have a missing test, as something should depend on that line; or else why is it there? I'll look for a link to that.
I've recently started reading The Art of Unit Testing, and the light came on regarding the difference between Unit tests and Integration tests. I'm pretty sure there were some things I was doing in NUnit that would have fit better in an Integration test.
So my question is, what methods and tools do you use for Integration testing?
In my experience, you can use (mostly) the same tools for unit and integration testing. The difference is more in what you test, not how you test. So while setup, code tested and checking of results will be different, you can use the same tools.
For example, I have used JUnit and DBUnit for both unit and integration tests.
At any rate, the line between unit and integration tests can be somewhat blurry. It depends on what you define as a "unit"...
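As a hedged illustration in C# with NUnit (every type name here is hypothetical), the tool stays the same while the wiring differs:

    using NUnit.Framework;

    public interface IOrderRepository { decimal TotalFor(int orderId); }

    // In-memory stand-in: no database involved.
    public class FakeOrderRepository : IOrderRepository
    {
        public decimal TotalFor(int orderId) => 42m;
    }

    public class OrderService
    {
        private readonly IOrderRepository _repo;
        public OrderService(IOrderRepository repo) => _repo = repo;
        public decimal TotalFor(int orderId) => _repo.TotalFor(orderId);
    }

    [TestFixture]
    public class OrderServiceTests
    {
        // Unit test: the dependency is replaced by an in-memory fake.
        [Test]
        public void Total_Unit()
        {
            var service = new OrderService(new FakeOrderRepository());
            Assert.AreEqual(42m, service.TotalFor(1));
        }

        // Integration test: same framework and same assertion, but in a real
        // suite the fake would be swapped for a repository pointing at a test
        // database, with matching setup and teardown.
        [Test, Category("Integration")]
        public void Total_Integration()
        {
            IOrderRepository repo = new FakeOrderRepository(); // placeholder for a DB-backed repository
            var service = new OrderService(repo);
            Assert.AreEqual(42m, service.TotalFor(1));
        }
    }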
Selenium along with JUnit for unit + integration testing, including the UI.
Integration tests are the "next level" for people passionate about unit testing.
NUnit itself can be used for integration testing (no tool change).
Example scenario:
A unit test was created with NUnit, using mocks wherever it touches the DB/API.
To turn it into an integration test, we do as follows (a sketch of the resulting pattern follows this answer):
1. Instead of mocks, use the real DB
2. ...which leads to data input in the DB
3. ...which leads to data corruption
4. ...which leads to deleting and recreating the DB on every test
5. ...which leads to building a framework for data management (a tool addition?)
As you can see, from #2 onwards we are heading into unfamiliar territory as unit test developers, even though the tool remains the same.
...which leads to you wondering why integration test setup takes so much time
...which leads to: shall I stop unit testing, since both kinds of tests take more time?
...which leads to: we only do integration testing
...which leads to: would all devs agree to this? (some devs might hate testing altogether)
...which leads to: since there are no unit tests, there is no code coverage
Now we are heading into issues with business goals and developer psychology.
I think I answered your question a bit more than needed. Anyway, if you'd like to read more and wonder whether unit tests are a danger, then head to this
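For step 4 above ("deleting and recreating the DB on every test"), here is a hedged NUnit sketch; the connection string, schema, and data are all hypothetical:

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture, Category("Integration")]
    public class OrderIntegrationTests
    {
        // Hypothetical connection string for a dedicated test database.
        private const string Cs =
            "Server=localhost;Database=OrdersTest;Trusted_Connection=True";

        [SetUp]
        public void ResetDatabase()
        {
            // Recreate known-good data before every test so one test's
            // writes cannot corrupt the next: this is the data-management
            // burden the answer above is pointing at.
            using (var conn = new SqlConnection(Cs))
            {
                conn.Open();
                new SqlCommand("DELETE FROM Orders", conn).ExecuteNonQuery();
                new SqlCommand(
                    "INSERT INTO Orders (Id, Total) VALUES (1, 42)", conn)
                    .ExecuteNonQuery();
            }
        }

        [Test]
        public void Total_ReadsFromRealDatabase()
        {
            using (var conn = new SqlConnection(Cs))
            {
                conn.Open();
                var total = (decimal)new SqlCommand(
                    "SELECT Total FROM Orders WHERE Id = 1", conn).ExecuteScalar();
                Assert.AreEqual(42m, total);
            }
        }
    }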
1) Method: The Test Point Metrics approach works well in any environment. With it we can not only do unit and integration testing but also validate the requirements.
The time to write the Test Point Metrics is right after understanding the requirements.
A template for Test Point Metrics is available here:
http://www.docstoc.com/docs/80205542/Test-Plan
Typically there are three kinds of testing:
1. Manual
2. Automated
3. Hybrid approach
The Test Point Metrics approach works in all of the above cases.
2) Tool:
The tool will depend on the requirements of the project; in any case, the following are the best tools according to my R&D:
1. QTP
2. Selenium
3. AppPerfect
For a clearer answer about tools, please specify your type of project.
Regards:
Muhammad Husnain
I mostly use JUnit for unit testing, in combination with Mockito to mock/stub out dependencies so I can test my unit of code in isolation.
Integration tests normally involve 'integration' with an external system/module like a database, message queue, framework, etc., so to test these your best bet is to use a combination of tools.
For example, I use JUnit as well, but rather than mocking out the dependencies I actually configure those dependencies as the calling code would. In addition, I test a flow of control, so that methods are not tested in isolation as in unit testing, but together. Regarding database connectivity, I use an embedded database with some dummy test data, etc.
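The same split expressed in C# terms, as a hedged sketch using NUnit with the Moq library (the Notifier and IMessageQueue types are hypothetical):

    using Moq;
    using NUnit.Framework;

    public interface IMessageQueue { void Publish(string message); }

    public class Notifier
    {
        private readonly IMessageQueue _queue;
        public Notifier(IMessageQueue queue) => _queue = queue;
        public void OrderShipped(int orderId) => _queue.Publish($"shipped:{orderId}");
    }

    [TestFixture]
    public class NotifierTests
    {
        // Unit test: the external queue is mocked out so the unit stays isolated;
        // an integration test would instead wire up a real queue or database.
        [Test]
        public void OrderShipped_PublishesMessage()
        {
            var queue = new Mock<IMessageQueue>();

            new Notifier(queue.Object).OrderShipped(7);

            // Verify the interaction with the dependency rather than doing real I/O.
            queue.Verify(q => q.Publish("shipped:7"), Times.Once());
        }
    }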
We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact on the user-experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do what I want. Do you have any comments on that, with respect to my situation?
I don't think something that just generates test code from a static analysis, à la NStub, is useful here. I suppose it is actually the generation of representative test data that is really important.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
Well, you could look at Pex - but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).
We have written our own integration test harness in which we can write a number of "operations" or tests, such as "GenerateOrders". We have a number of parameters we can use to configure the tests (such as the number of orders). We then write a second operation to confirm the test has passed/failed (i.e. there are(n't) orders).
The tool is used for
Integration Testing
Data generation
End-to-end testing (By mixing and matching a number of tests)
It seems to work well, but it requires development experience to maintain and write new tests. Our test team, who have little C# development experience, would like to get involved.
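A hedged reconstruction of what such an operations harness might look like in C#; every name below is hypothetical, inferred from the description above:

    using System;
    using System.Collections.Generic;

    // An "operation" is one configurable step; a test mixes and matches them.
    public interface IOperation
    {
        void Execute(TestContext context);
    }

    // Shared configuration and state passed between operations.
    public class TestContext
    {
        public int OrderCount { get; set; }
        public IList<int> CreatedOrderIds { get; } = new List<int>();
    }

    public class GenerateOrders : IOperation
    {
        public void Execute(TestContext context)
        {
            // Stand-in for creating real orders in the system under test.
            for (var i = 0; i < context.OrderCount; i++)
                context.CreatedOrderIds.Add(i + 1);
        }
    }

    // The follow-up operation that confirms the test passed or failed.
    public class VerifyOrdersExist : IOperation
    {
        public void Execute(TestContext context)
        {
            if (context.CreatedOrderIds.Count != context.OrderCount)
                throw new Exception("Expected orders were not created.");
        }
    }

    public static class Harness
    {
        public static void Run(TestContext context, params IOperation[] operations)
        {
            foreach (var op in operations)
                op.Execute(context);
        }
    }

Running Harness.Run(new TestContext { OrderCount = 10 }, new GenerateOrders(), new VerifyOrdersExist()) would then be one integration test, and mixing in other operations gives data generation or end-to-end runs.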
We are just about to start a new Greenfield project and I am doing some research into the optimum way to write and maintain integration tests.
The questions are as follows:
How do you perform integration testing?
What tool do you use for it (FitNesse? Custom? NUnit?)
I am looking forward to people's suggestions/comments.
Thanks in advance,
David
Integration testing may be done at a user interface level (via automated functional tests - AFT) or at a service/API interface level.
There are several tools in both cases:
I have worked on projects that successfully used Sahi or Selenium for AFT of web apps, White for AFT of .NET WPF or WinForms apps, SWTBot for AFT of Eclipse rich client apps, and Frankenstein for AFT of Java Swing apps.
FitNesse is useful for service/API-level tests or for tests that run just below the UI. When done right, it has the advantage of having business-readable tests, i.e. non-developers can read and understand the tests. Tools like NUnit are less useful for this purpose. SoapUI is particularly suited to testing SOAP web services.
Factors to consider:
Duration: Can you tolerate 8-hour test runs?
Brittleness: AFTs can be quite brittle against an evolving application (e.g. ids and positions of widgets may change). Adequate skill and effort is needed to avoid hard-coding the changing parts; see the page-object sketch after this list.
Fidelity: How close to the real world do you want it to be? E.g. you may have to mock out interactions with a payment gateway unless the provider gives you a test environment that you can pummel with your tests.
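On brittleness, the usual defence is a page object that centralizes locators, so a changed widget is fixed in one place instead of in every test. A hedged sketch using Selenium's C# bindings (the element ids are hypothetical):

    using OpenQA.Selenium;

    // Page object: tests never touch locators directly, so when an id or
    // position changes, only this class needs updating.
    public class LoginPage
    {
        private readonly IWebDriver _driver;
        public LoginPage(IWebDriver driver) => _driver = driver;

        public void LogIn(string user, string password)
        {
            // Prefer stable ids over brittle positional XPath such as "//div[3]/input[2]".
            _driver.FindElement(By.Id("username")).SendKeys(user);
            _driver.FindElement(By.Id("password")).SendKeys(password);
            _driver.FindElement(By.Id("login-button")).Click();
        }
    }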
Some nuances are captured here.
Full disclosure: The author is associated with the organization behind most (not all) of the above free and open source tools.
You could try the Concordion framework for writing user acceptance tests in HTML files. It takes a BDD-style approach. There is a .NET port as well.
It's not out of Beta yet, but StoryTeller looks promising:
Is there such a thing as unit test generation? If so...
...does it work well?
...what are the auto-generation solutions available for .NET?
...are there examples of using a technology like this?
...is this only good for certain types of applications, or could it be used to replace all manually written unit testing?
Take a look at Pex. It's a Microsoft Research project. From the website:
Pex generates Unit Tests from hand-written Parameterized Unit Tests through Automated Exploratory Testing based on Dynamic Symbolic Execution.
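A hand-written parameterized unit test is just a test method with arguments; Pex then explores concrete inputs for it. A hedged sketch (the Pex attribute and assumption APIs are quoted from memory; treat them as assumptions):

    using Microsoft.Pex.Framework;
    using NUnit.Framework;

    [PexClass]
    public partial class StringSplitterTests
    {
        // Pex generates concrete inputs for this method, hunting for
        // boundary conditions such as empty strings or strings with
        // no separator at all.
        [PexMethod]
        public void Split_RoundTrips(string value)
        {
            PexAssume.IsNotNull(value);

            var parts = value.Split(',');
            var rejoined = string.Join(",", parts);

            // The invariant that must hold for every explored input.
            Assert.AreEqual(value, rejoined);
        }
    }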
UPDATE for 2019:
As mentioned in the comments, Pex is now called IntelliTest and is a feature of Visual Studio Enterprise Edition. It supports emitting tests in MSTest, MSTest V2, NUnit, and xUnit format and it is extensible so you can use it with other unit test frameworks.
But be aware of the following caveats:
Supports only C# code that targets the .NET Framework.
Does not support x64 configurations.
Available in Visual Studio Enterprise Edition only
I believe there's no point in Unit test generation, as far as TDD goes.
You only make unit tests so that you're sure that you (as a developer) are on track with regard to design and specs. Once you start generating tests automatically, it loses that purpose. Sure, it would probably mean 100% code coverage, but that coverage would be senseless and empty.
Automated unit tests also mean that your strategy is test-after, which is the opposite of TDD's test-before tenet. Again, TDD is not about tests.
That being said, I believe MSTest does have an automatic unit-test generation tool; I was able to use one with VS2005.
Updated for 2017:
Unit Test Boilerplate Generator works for VS 2015-2017 and is being maintained. Seems to work as advertised.
Parasoft .TEST has test-generation functionality. It uses the NUnit framework for test description and assertion evaluation.
It is possible to prepare a regression test suite by automatically generating scenarios (constructing inputs and calling the tested methods) and creating assertions based on the current behavior of the code base. Later, as the code base under test evolves, the assertions indicate regressions, or they can easily be recorded again.
I created ErrorUnit. It generates MSTest or NUnit unit tests from a paused Visual Studio session or from your error logs, mocking class variables, method parameters, and EF data access so far. See http://ErrorUnit.com
No unit test generator can do everything. Unit tests are classically separated into three parts: Arrange, Act, and Assert. The Arrange portion is the largest part of a unit test; it sets up all the preconditions of the test, mocking all the data that is going to be acted upon. The Act portion is usually one line: it invokes the code being tested, passing in that data. Finally, the Assert portion takes the results of the Act portion and verifies that they meet expectations (it can be zero lines when you're just making sure there is no error).
Unit test generators generally can only produce the Arrange and Act portions; they generally do not write the Assert portion, as only you know what is correct and what is incorrect for your purposes. So some manual entry/extension of unit tests is necessary for completeness.
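For illustration, here are the three parts in a minimal NUnit test (the Invoice class is hypothetical), with Arrange doing most of the work, just as described:

    using System;
    using NUnit.Framework;

    public class Invoice
    {
        public decimal Amount { get; set; }
        public DateTime DueDate { get; set; }

        // 10% late fee once the due date has passed.
        public decimal TotalDue(DateTime asOf)
            => asOf > DueDate ? Amount * 1.10m : Amount;
    }

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void LateFee_IsAppliedAfterDueDate()
        {
            // Arrange: usually the largest part; build all preconditions and data.
            var invoice = new Invoice
            {
                Amount = 100m,
                DueDate = new DateTime(2024, 1, 1),
            };
            var today = new DateTime(2024, 2, 1);

            // Act: usually one line; invoke the code under test.
            var total = invoice.TotalDue(today);

            // Assert: verify expectations; this is the part a generator
            // cannot write for you.
            Assert.AreEqual(110m, total);
        }
    }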
I agree with Jon. Certain types of testing, like automated fuzz testing, definitely benefit from automated generation. While you can use the facilities of a unit testing framework to accomplish this, doing so doesn't accomplish the goals associated with good unit test coverage.
I've used NStub to stub out tests for my classes. It works fairly well.
I've used tools to generate test cases. I think it works well for higher-level, end-user oriented testing. Stuff that's part of User Acceptance Testing, more so than pure unit testing.
I use the unit test tools for this acceptance testing. It works well.
See Tooling to Build Test Cases.
There is a commercial product called AgitarOne (www.agitar.com) that automatically generates JUnit test classes.
I haven't used it so can't comment on how useful it is, but if I was doing a Java project at the moment I would be looking at it.
I don't know of a .NET equivalent (Agitar did once announce a .NET version, but AFAIK it never materialised).
I know this thread is old, but for the sake of all developers: there is a good Visual Studio extension for generating NUnit tests:
https://marketplace.visualstudio.com/items?itemName=NUnitDevelopers.TestGeneratorNUnitextension
EDIT:
Please use xUnit (supported by Microsoft): https://github.com/xunit/xunit
Auto-generated tests:
https://marketplace.visualstudio.com/items?itemName=YowkoTsai.xUnitnetTestGenerator
Good dev
Selenium (via its IDE recorder) can generate tests from user actions on a web page, which is pretty nifty.