Auto-generation of .NET unit tests [closed] - c#

Is there such a thing as unit test generation? If so...
...does it work well?
...what are the auto-generation solutions available for .NET?
...are there examples of using a technology like this?
...is this only good for certain types of applications, or could it replace all manually written unit tests?

Take a look at Pex. It's a Microsoft Research project. From the website:
Pex generates Unit Tests from hand-written Parameterized Unit Tests through Automated Exploratory Testing based on Dynamic Symbolic Execution.
UPDATE for 2019:
As mentioned in the comments, Pex is now called IntelliTest and is a feature of Visual Studio Enterprise Edition. It supports emitting tests in MSTest, MSTest V2, NUnit, and xUnit format and it is extensible so you can use it with other unit test frameworks.
But be aware of the following caveats:
Supports only C# code that targets the .NET Framework.
Does not support x64 configurations.
Available in Visual Studio Enterprise Edition only.
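For illustration, here is a minimal sketch of the kind of hand-written parameterized unit test that Pex/IntelliTest explores; StringUtils and the test names are made-up examples, not part of any real project:

    using System;
    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical class under test, included so the sketch is self-contained.
    public static class StringUtils
    {
        public static string Reverse(string value)
        {
            char[] chars = value.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }
    }

    // A hand-written parameterized unit test: Pex/IntelliTest explores it and
    // emits concrete [TestMethod] cases for the interesting inputs it finds.
    [PexClass(typeof(StringUtils))]
    [TestClass]
    public partial class StringUtilsTests
    {
        [PexMethod]
        public void Reverse_RoundTrips(string input)
        {
            PexAssume.IsNotNull(input); // tell the explorer to skip null inputs
            Assert.AreEqual(input, StringUtils.Reverse(StringUtils.Reverse(input)));
        }
    }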

I believe there's no point in unit test generation, as far as TDD goes.
You write unit tests so that you, as a developer, are sure you're on track with regard to design and specs. Once you start generating tests automatically, they lose that purpose. Sure, it would probably mean 100% code coverage, but that coverage would be senseless and empty.
Automated unit tests also mean that your strategy is test-after, which is the opposite of TDD's test-first tenet. Again, TDD is not about tests.
That being said, I believe MSTest does have an automatic unit-test generation tool; I was able to use one with VS2005.

Updated for 2017:
Unit Test Boilerplate Generator works for VS 2015-2017 and is being maintained. Seems to work as advertised.

Parasoft .TEST includes test-generation functionality. It uses the NUnit framework for test definitions and assertion evaluation.
It can prepare a regression test suite by automatically generating scenarios (constructing inputs and calling the tested method) and creating assertions based on the current behavior of the code base. Later, as the code base under test evolves, those assertions indicate regressions or can easily be re-recorded.
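To make that concrete, here is a hand-written sketch (not actual Parasoft output) of the kind of regression test such a tool records, using NUnit; PriceEngine and the recorded values are hypothetical:

    using NUnit.Framework;

    // A hypothetical class whose current behaviour we want to pin down.
    public class PriceEngine
    {
        public double Quote(int quantity) =>
            quantity * 9.99 * (quantity >= 10 ? 0.9 : 1.0);
    }

    [TestFixture]
    public class PriceEngineRegressionTests
    {
        // The shape of a generated regression test: inputs are constructed,
        // the method is called, and the assertion captures whatever the
        // current code returns, so later behaviour changes show up as failures.
        [TestCase(1, 9.99)]
        [TestCase(10, 89.91)]
        public void Quote_MatchesRecordedBehaviour(int quantity, double expected)
        {
            Assert.AreEqual(expected, new PriceEngine().Quote(quantity), 0.001);
        }
    }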

I created ErrorUnit. It generates MSTest or NUnit unit tests from a paused Visual Studio session or from your error logs, mocking class variables, method parameters, and EF data access so far. See http://ErrorUnit.com
No unit test generator can do everything. Unit tests are classically separated into three parts: Arrange, Act, and Assert. The Arrange portion is the largest part of a unit test; it sets up all the preconditions and mocks all the data the test is going to act upon. The Act portion is usually one line: it invokes the code being tested, passing in that data. Finally, the Assert portion takes the results of the Act portion and verifies that they meet expectations (it can be zero lines when you are just making sure there is no error).
Unit test generators can generally produce the Arrange and Act portions when creating a test, but they generally cannot write the Assert portion, because only you know what is correct and what is incorrect for your purposes. So some manual writing or extending of unit tests is still necessary for completeness.
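As a minimal illustration of the three parts, an NUnit test might look like the following; OrderCalculator and everything in it are hypothetical names used only for this example:

    using NUnit.Framework;

    // Hypothetical class under test, defined here so the example compiles.
    public class OrderCalculator
    {
        private readonly decimal _discountRate;
        public OrderCalculator(decimal discountRate) => _discountRate = discountRate;

        public decimal Total(decimal price, int quantity) =>
            price * quantity * (1 - _discountRate);
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        public void Total_AppliesDiscount()
        {
            // Arrange: construct the object under test and its inputs --
            // the part a generator can usually scaffold for you.
            var calculator = new OrderCalculator(discountRate: 0.10m);

            // Act: exercise the behaviour under test (typically one line).
            decimal total = calculator.Total(price: 100m, quantity: 2);

            // Assert: state the expected outcome -- the part only you can
            // supply, because only you know what "correct" means here.
            Assert.AreEqual(180m, total);
        }
    }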

I agree with Jon. Certain types of testing, like automated fuzz testing, definitely benefit from automated generation. While you can use the facilities of a unit testing framework to do this, it doesn't achieve the goals associated with good unit test coverage.

I've used NStub to stub out tests for my classes. It works fairly well.

I've used tools to generate test cases. I think they work well for higher-level, end-user-oriented testing: the kind of thing that belongs to user acceptance testing more than to pure unit testing.
I use unit test tools for this acceptance testing, and it works well.
See Tooling to Build Test Cases.

There is a commercial product called AgitarOne (www.agitar.com) that automatically generates JUnit test classes.
I haven't used it, so I can't comment on how useful it is, but if I were doing a Java project at the moment I would be looking at it.
I don't know of a .NET equivalent (Agitar did once announce a .NET version, but AFAIK it never materialised).

I know this thread is old, but for the sake of all developers, there is a good test-generator extension for NUnit:
https://marketplace.visualstudio.com/items?itemName=NUnitDevelopers.TestGeneratorNUnitextension
EDIT:
Please use xUnit (supported by Microsoft): https://github.com/xunit/xunit
Auto-generated tests:
https://marketplace.visualstudio.com/items?itemName=YowkoTsai.xUnitnetTestGenerator
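For reference, the tests you end up with are plain xUnit classes; here is a minimal hand-written sketch, where Calculator and the test itself are hypothetical examples:

    using Xunit;

    // Hypothetical class under test, included so the example compiles.
    public class Calculator
    {
        public int Add(int a, int b) => a + b;
    }

    public class CalculatorTests
    {
        // One test method per behaviour of the class under test; a generator
        // scaffolds the skeleton, you supply the assertion.
        [Fact]
        public void Add_ReturnsSum()
        {
            var calculator = new Calculator();
            Assert.Equal(5, calculator.Add(2, 3));
        }
    }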
Good dev

Selenium can generate tests from recorded user actions on a web page, which is pretty nifty.

Related

C# Testing Lines of Code Covered Using Ruby/Cucumber Testing Framework

I have recently taken over a team that uses Ruby with Cucumber for BDD-style integration and UI tests. My unit's Chief Executive Officer (who does not have a software background) wants to see KPIs for test coverage, specifically lines of code covered and the other metrics you would normally see when using a unit testing framework such as NUnit or xUnit. With BDD, you generally don't have these types of metrics... or am I wrong? If anyone out there knows how to get lines-of-code-covered metrics from a BDD-style framework like Ruby/Cucumber, your thoughts would be appreciated!
Your CEO is right. BDD and Cucumber are certainly compatible with measuring code coverage.
If I understand correctly, you're using Ruby/Cucumber to test a C# application. Set up the test instance of the C# application to collect coverage as it runs, run your Cucumber tests, stop the application, and generate a coverage report. Unfortunately I don't know C#, so I can't give details.
BDD is not only acceptance testing (Cucumber); BDD uses acceptance tests to drive important scenarios and unit tests to drive details. (If the team you've taken over uses only Cucumber for testing, they probably have too many Cucumber scenarios and their test suite takes a long time to run.) Assuming you do have both acceptance and unit tests, you'll want to merge coverage from both types of tests and report total coverage. The coverage achieved by the acceptance test suite or the unit test suite alone is much less important than the coverage achieved by the two suites together.

Selective Test Runner for NUnit, MSTest and xUnit [closed]

I want to create a custom test runner which allows running unit tests for the following unit testing frameworks:
MSTest
NUnit
xUnit
This custom test runner should make it possible to define criteria for which tests to run. For example:
run specific test method (by name)
run test methods for specific tested method
run test methods for specific tested class
I've found a solution for NUnit, and for xUnit I should implement ITestRunner. But do you know how I can run tests from code for MSTest?
Do you know how I can define criteria so that only the methods matching the criteria above are run?
You can try Fixie (still under development). It allows you to write custom conventions to determine what is considered a test, regardless of the unit testing framework used. For example, you can write a convention that identifies all public void methods in classes ending with "Tests" as unit tests, regardless of attributes and so on (in fact, this is its default convention). It is very flexible. To write a custom convention, you implement the Convention class in your assembly and Fixie will discover it. Read its documentation for more information and see the excellent samples and tests included in the project.
It is already integrated with Visual Studio through TestDriven.Net. If you don't have that, you'll need to run it through its console runner.
Thanks for replying to my comment. I think I can suggest a possible solution; I haven't tried this myself, but I believe it is doable. The suggestion is a programmatic approach mixed with the built-in command-line options provided by each test framework you mentioned (NUnit, MSTest, xUnit).
Since all of these test frameworks can execute tests from the command line, whether selectively, as a group, or as an entire test suite, you can always use a console app with predefined parameters to execute the tests you want.
As far as I know, all of these frameworks provide command-line test runners, but the options they offer differ.
The first thing is to analyse each framework's command-line arguments so you can execute tests the way you want.
The MSTest.exe command line provides some options you might want to look into, such as:
/test
/Category
The NUnit console also provides options, but I haven't looked into it much yet.
xUnit has a console test runner you can look into as well (e.g. xunit.console MyTestLibrary.dll).
It is important to note that both xUnit.net and NUnit are extensible, so if they don't provide the options you need, you can look at extending what the framework offers; even creating your own console runner is not that hard. Unlike MSTest, both of these frameworks are open source, so you can see the options they already provide. Given that running tests selectively is a key requirement, I would imagine these frameworks have built-in options and you won't have to do much customization.
Once you have identified the abilities of each framework's console runner, you can create your own client/console app in which you specify the tests to run selectively. This client app feeds the selected tests to the appropriate runner and executes its console executable (see the sketch at the end of this answer). You can automate this further with some sort of configurable metadata/manifest file: for example, you could list the tests to run in a manifest file and have your client app read from it.
Depending on how the client app is configured, if it is a console app you can also invoke it from the build system.
Hope this points you in the direction of what you want to achieve.
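As a rough sketch of the client-app idea (not a tested implementation), the following shells out to each framework's console runner; the runner file names and filter switches are assumptions and should be checked against the runner versions you actually install:

    using System;
    using System.Diagnostics;

    // Pick the appropriate console runner per framework and pass it a
    // test-selection argument. Runner names and switches are assumptions.
    public static class SelectiveRunner
    {
        public static int RunTest(string framework, string assembly, string testName)
        {
            var (exe, args) = framework switch
            {
                "mstest" => ("MSTest.exe", $"/testcontainer:{assembly} /test:{testName}"),
                "nunit"  => ("nunit3-console.exe", $"{assembly} --test={testName}"),
                "xunit"  => ("xunit.console.exe", $"{assembly} -method {testName}"),
                _        => throw new ArgumentException($"Unknown framework: {framework}")
            };

            using var process = Process.Start(new ProcessStartInfo
            {
                FileName = exe,
                Arguments = args,
                UseShellExecute = false
            });
            process.WaitForExit();
            return process.ExitCode; // non-zero typically indicates failing tests
        }
    }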

Integration Testing vs. Unit Testing

I've recently started reading The Art of Unit Testing, and the light came on regarding the difference between Unit tests and Integration tests. I'm pretty sure there were some things I was doing in NUnit that would have fit better in an Integration test.
So my question is, what methods and tools do you use for Integration testing?
In my experience, you can use (mostly) the same tools for unit and integration testing. The difference is more in what you test, not how you test. So while setup, code tested and checking of results will be different, you can use the same tools.
For example, I have used JUnit and DBUnit for both unit and integration tests.
At any rate, the line between unit and integration tests can be somewhat blurry. It depends on what you define as a "unit"...
Selenium along with JUnit for unit + integration testing, including the UI.
Integration tests are the "next level" for people passionate about unit testing.
NUnit itself can be used for integration testing (no tool change).
Example scenario:
A unit test was created with NUnit, using mocks wherever it touches the DB/API.
To turn it into an integration test, we do the following:
instead of mocks, use the real DB (a sketch follows at the end of this answer)
which leads to data being written to the DB
which leads to data corruption
which leads to deleting and recreating the DB on every test
which leads to building a framework for data management (another tool?)
As you can see, from the second point onwards we are heading into unfamiliar territory as unit test developers, even though the tool remains the same.
which leads to wondering why integration test setup takes so much time
which leads to: shall I stop unit testing, since both kinds of tests take more time?
which leads to: we only do integration testing
which leads to: would all devs agree to this? (some devs might hate testing altogether)
which leads to: since there are no unit tests, there is no code coverage.
Now we are heading into issues with business goals and developer psychology.
I think I answered your question a bit more than needed. Anyway, if you'd like to read more and you think unit tests are a danger, then head to this
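Here is a small sketch of the first step above (real DB instead of mocks), using the same NUnit tooling with an in-memory SQLite database created and dropped around every test; the schema and names are invented for illustration:

    using Microsoft.Data.Sqlite;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerIntegrationTests
    {
        private SqliteConnection _connection;

        [SetUp]
        public void CreateDatabase()
        {
            // A real (here: in-memory SQLite) database instead of a mock,
            // recreated for every test to avoid data corruption.
            _connection = new SqliteConnection("Data Source=:memory:");
            _connection.Open();
            using var cmd = _connection.CreateCommand();
            cmd.CommandText = "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT)";
            cmd.ExecuteNonQuery();
        }

        [TearDown]
        public void DropDatabase() => _connection.Dispose();

        [Test]
        public void Insert_ThenRead_RoundTrips()
        {
            using (var insert = _connection.CreateCommand())
            {
                insert.CommandText = "INSERT INTO Customers (Name) VALUES ('Ada')";
                insert.ExecuteNonQuery();
            }

            using var select = _connection.CreateCommand();
            select.CommandText = "SELECT Name FROM Customers WHERE Id = 1";
            Assert.AreEqual("Ada", (string)select.ExecuteScalar());
        }
    }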
1) Method: Test Point Metrics is the best approach in any environment. With this approach we can not only do unit and integration testing but also validate the requirements.
The time to write the Test Point Metrics is just after understanding the requirements.
A Test Point Metrics template is available here:
http://www.docstoc.com/docs/80205542/Test-Plan
Typically there are 3 kinds of testing:
1. Manual
2. Automated
3. A hybrid approach
The Test Point Metrics approach works in all of the above cases.
2) Tool:
The tool will depend on the requirements of the project; anyhow, the following are the best tools according to my R&D:
1. QTP
2. Selenium
3. AppPerfect
For a clearer answer about tooling, please specify your type of project.
Regards,
Muhammad Husnain
I mostly use JUnit for unit testing, in combination with Mockito to mock/stub out dependencies so I can test my unit of code in isolation.
Integration tests normally involve 'integration' with an external system/module like a database, message queue, framework, etc., so to test these your best bet is to use a combination of tools.
For example, I use JUnit as well, but rather than mocking out the dependencies I actually configure those dependencies as the calling code would. In addition, I test a flow of control, so that methods are not tested in isolation as in unit testing but together. Regarding database connectivity, I use an embedded database with some dummy test data, etc.

Introduction to unit testing for C#

I'm looking for a good introduction/tutorial for unit testing C#. Most tutorials I've come across so far have either been too basic to be useful or too complex for someone new to unit testing.
(Using Visual Studio 2008 Professional for Windows applications)
Read The Art of Unit Testing by Roy Osherove. It is very good.
Is it just a specific tool for which you're having trouble finding good tutorials? When I was new to the subject I found the NUnit tutorial to be a good starting point:
http://www.nunit.org/index.php?p=quickStart&r=2.4
Rhino Mocks would be good to learn as well to complement the unit testing framework:
https://stackoverflow.com/questions/185021/rhino-mocks-good-tutorials
Perhaps a book? I would recommend Pragmatic Unit Testing in C# with NUnit.
It's very thorough, in my opinion.
It was when I started reading about Moq that I realized unit testing didn't have to be painful. There are some good links near the bottom of the page as to how unit tests can be built with mocking.
One nice thing about using interfaces for controlled coupling and testing is that adding an interface to an existing code base is not a breaking change. I'm adding new features to some legacy code and I've been creating interfaces for existing classes so that the new features can be developed and tested in isolation. It's been working well so far and I plan to continue this style of testing on other projects. For me, the key was to avoid designing complex stub classes with lots of ugly conditional code to expose different cases for my tests. It got to the point where the test code was so complex that I couldn't be sure if it was the code or the test that was broken.
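To show what that looks like in practice, here is a minimal sketch using Moq with NUnit and a hand-rolled interface; IMailSender, OrderService, and the test itself are hypothetical names used only for illustration:

    using Moq;
    using NUnit.Framework;

    // Hypothetical interface and service, defined here so the sketch compiles.
    public interface IMailSender
    {
        void Send(string to, string subject);
    }

    public class OrderService
    {
        private readonly IMailSender _mailSender;
        public OrderService(IMailSender mailSender) => _mailSender = mailSender;

        public void PlaceOrder(string customerEmail) =>
            _mailSender.Send(customerEmail, "Order confirmation");
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void PlaceOrder_SendsConfirmationMail()
        {
            // The mock stands in for the real mail dependency, so no ugly
            // conditional stub classes are needed.
            var mailSender = new Mock<IMailSender>();
            var service = new OrderService(mailSender.Object);

            service.PlaceOrder("alice@example.com");

            mailSender.Verify(
                m => m.Send("alice@example.com", It.IsAny<string>()),
                Times.Once());
        }
    }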

Generating tests from run-time analysis

We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact on the user-experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do want I want. Do you have any comments on that, with respect to my situation?
I don't think something that just generates test code from static analysis, a la NStub, is useful here. I suppose it is actually the generation of representative test data that is really important.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
Well, you could look at Pex, but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).
