I am working on unit testing a code generator. A unit test's basic flow is:
1. The unit test calls the appropriate method and code is generated. Easy enough.
2. The unit test compiles the generated C# code from step 1. If the code compiles, proceed to step 3; otherwise stop everything.
3. If step 2 succeeded, the unit test runs other, pre-written unit tests against the compiled code from step 2. For this I will use the solutions described in Running individual NUnit tests programmatically and NUnit API And Running Tests Programmatically.
The approach for step 2 is what this question is about. I am thinking I have two options: (1) run the Visual Studio command line to compile the solution, or (2) use CSharpCodeProvider with CompilerParameters. Any recommendations would be greatly appreciated.
I personally use Roslyn on a daily basis, and so I was tempted to go with Kenneth and recommend it, but in your case, if the only information you want is whether the code compiled, I would lean more towards the CSharpCodeProvider class, especially if each method being unit tested generates a single file of code. If you had to do any kind of analysis on the generated code, it might be worth using Roslyn, but I doubt this is your case. The only other pro Roslyn might bring you is that you can open a whole project/solution directly instead of compiling every separate file, which might appeal to you (it's a lot simpler to use than you might think).
Besides this advice, all I can say is that if you just have to choose between CSharpCodeProvider and the command-line option, I would definitely go with CSharpCodeProvider, since it already wraps and exposes the data and operations relevant to your analysis (are there any errors? => let's check whether compiledResults.Errors.Count == 0). You don't need to call an external process (the C# compiler) to get what you want, which makes it a much simpler option in my opinion, while also giving you a lot of flexibility (i.e. the CompilerParameters).
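A minimal sketch of that check might look like this (the generated source string below is just a stand-in for whatever your generator emits):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class GeneratedCodeCompileCheck
{
    static void Main()
    {
        // Stand-in for whatever source your generator produced in step 1.
        string generatedSource =
            "public class Generated { public int Answer() { return 42; } }";

        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters
            {
                GenerateInMemory = true,    // keep the compiled assembly off disk
                GenerateExecutable = false  // build a class library, not an .exe
            };
            parameters.ReferencedAssemblies.Add("System.dll");

            CompilerResults results =
                provider.CompileAssemblyFromSource(parameters, generatedSource);

            if (results.Errors.Count == 0)
            {
                // Step 2 passed; results.CompiledAssembly can be handed to the
                // pre-written NUnit tests of step 3.
                Console.WriteLine("Compiled: " + results.CompiledAssembly.FullName);
            }
            else
            {
                foreach (CompilerError error in results.Errors)
                    Console.WriteLine("{0} (line {1}): {2}",
                        error.ErrorNumber, error.Line, error.ErrorText);
            }
        }
    }
}
```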
I don't know if you have started playing around with it, but you shouldn't have too many problems with this method. Hope this helps.
Related
I use Visual Studio with Resharper and NUnit test framework.
Sometimes a small change in business logic code breaks a lot of unit tests. That's OK: you know the unit test results will be different and that the new values are now valid. Is there a way to quick-fix all of them?
You can use the various refactor tools which come with VS to make (small) changes to your code that are not a result of a change in business logic. Examples of this are renaming variables and functions or moving code to a different namespace.
Especially when you use ReSharper, there are lots of options that will help you to refactor code. (Resharper menu > Refactor).
If you are changing the business logic of your application then the software requirements must have changed. Therefore the unit tests that apply to that logic should fail and there is no way to automagically correct this.
Actually, there is no quick-fix for expected values. If your changes break a lot of tests, you have to correct all of them manually.
The only hint is to keep the expected values close together (for example, in shared constants or test-case data) so there is less copy-pasting to update.
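One hedged way to do that with NUnit is to drive the assertions from [TestCase] data, so all the expected values sit in one spot (InterestCalculator here is a hypothetical class, just for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class InterestCalculatorTests
{
    // All expected values live in these rows; when the business rule legitimately
    // changes, this is the only place that needs editing.
    [TestCase(1000.0, 0.05, 1050.0)]
    [TestCase(2000.0, 0.10, 2200.0)]
    public void Calculate_ReturnsExpectedBalance(double principal, double rate, double expected)
    {
        double actual = InterestCalculator.Calculate(principal, rate); // hypothetical class under test

        Assert.AreEqual(expected, actual, 0.001);
    }
}
```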
I'm currently researching and deciding on a code coverage tool for my company, and have so far tried NCover (Bolt and Desktop), DotCover, and NCrunch. All tools I've tried so far work well for measuring/highlighting code coverage in code called directly by unit tests, but any code called through CSLA (DataPortal_Fetch, for example) is never detected as being covered. As much of our code base resides in these functions, I'm finding the tools to be next to useless for much of what I need tested and measured.
My question then is how can I get code coverage results for CSLA code? Does anyone know of a tool that would work with these kinds of calls, or certain options/extensions I can use to get coverage results with the tools I'm using?
The project is coded in C#, and I'm using Visual Studio 2013 Professional, CSLA 3.8, and .NET 4.0. I have the latest versions of NCover Bolt and DotCover (both on trial), as well as the newest OpenCover that I could find.
Thanks in advance!
NCover Support here.
If you are using NCover Desktop, you can auto-configure to detect any .NET code that is being loaded by your testing (Bolt can only profile test runners).
We have this video that shows auto-detecting NUnit, as an example:
http://www.ncover.com/resources/videos/ncover-creating-a-new-code-coverage-project
And a lot of the same info in this help doc:
http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
Please contact us at support@ncover.com if you have extra questions. Hope this helps.
Unlike TyCobb's entirely outdated opinion, current versions of CSLA don't invoke methods via reflection (except on iOS) and haven't since around 2007. But the data portal does use dynamic invocation via expression trees and that's probably the issue causing you trouble.
One option in current versions of CSLA is that the data portal is now described by an interface so you can mock the data portal, potentially creating a mock that does nothing but invoke your DP_XYZ methods directly. Even that's tricky though, unless you make them public and allow other code in your app to easily break encapsulation (yuck). The problem is that you won't be able to call the methods without using reflection, or rewriting the dynamic expression tree invocation code used inside CSLA...
Though perhaps the code coverage tools would see the code executing if it were run via reflection instead of via a runtime compiled expression?
I've spent the last few weeks trying to figure out a way to implement (or find someone who has implemented) regression testing for our build process, but so far I haven't found anything that works. We use TFS2008 and VS2010, and upgrading to TFS2010 is not an option for us. I've tried to use NDepend to give us the list of changed methods and type dependencies, but running it through our build script has proven supremely unreliable (if I run the same build twice without changing anything, I would not be surprised to get one perfect NDepend report and one exception saying NDepend can't run for one reason or another).
Unfortunately, I'm pretty much stuck with the tools I have (TFS2008, VS2010, MSBuild, and MSTest). I could probably get another tool, but changing the tools I already have (such as moving from MSTest to NUnit, or TFS2008 to TFS2010) will not be possible.
Has anyone done this already? Or can someone point me in the right direction for finding which methods and types changed between two builds programmatically?
If you have unit tests and a coverage report, then you could diff the coverage report from before and after the change. Any changes to the coverage would show up in that diff, and you could then regression test off it (which I assume is manual).
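As a rough sketch of that diff, assuming each run's coverage has been exported to an XML file where every method shows up as a <Method> element with name and coveredBlocks attributes (those element and attribute names are hypothetical; adjust them to whatever your coverage tool actually emits):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class CoverageDiff
{
    static void Main(string[] args)
    {
        // args[0] = coverage export from the previous build, args[1] = from the current build.
        Dictionary<string, int> before = Load(args[0]);
        Dictionary<string, int> after = Load(args[1]);

        foreach (string method in before.Keys.Union(after.Keys).OrderBy(name => name))
        {
            int oldBlocks = before.ContainsKey(method) ? before[method] : 0;
            int newBlocks = after.ContainsKey(method) ? after[method] : 0;

            if (oldBlocks != newBlocks)
                Console.WriteLine("{0}: {1} -> {2} covered blocks", method, oldBlocks, newBlocks);
        }
    }

    // Reads a hypothetical export shaped like <Method name="..." coveredBlocks="..."/>.
    static Dictionary<string, int> Load(string path)
    {
        return XDocument.Load(path)
            .Descendants("Method")
            .ToDictionary(
                m => (string)m.Attribute("name"),
                m => (int)m.Attribute("coveredBlocks"));
    }
}
```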
I have a class that I need to Unit Test.
For background, I'm developing in C# and using NUnit, but my question is more theoretical:
I don't know if I've written enough test methods and if I checked all the scenarios.
Is there a known working method/best practices/collection of rules for that?
Something like
"check every method in your class ...bla bla "
"check all the inserts to DB ...bla bla "
(this is a silly example of possible rules but if I had something not silly on my mind I wouldn't ask this question)
There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric that obscures the actual quality of the tests.
Personally I think you would get better results from investigating test driven development - using this approach you know you have good coverage (both in terms of lines of code and in terms of functionality of your class) because you have been writing the tests to exercise your class before you wrote the class methods themselves.
You might want to look at your test coverage. NCover is the code coverage solution from the developers of NUnit.
You can look into NCover or the Visual Studio code coverage tool, both of which support NUnit.
The measure used for test coverage of code is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own coverage goals. I don't know if there is an industry "best practice" number, but most of my projects set this number at 80%.
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tool for code coverage is NCover.
As the others have mentioned, coverage provides one metric by which to measure the quality of your tests, but it does not tell you how well the tests exercise your code. Just because a line is executed, it does not mean that all possible permutations of that line have been executed.
You may find a tool such as Pex useful; it will test your code with various inputs to see what it does in those situations. This will give you good coverage (as it will tailor the inputs to exercise all of the possible paths through your code), but it will also give you good coverage of the possible inputs (like ensuring your methods are tested with null inputs, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
There are other intriguing initiatives too, like a tool which removes lines of code, recompiles, and re-runs the tests. If no tests fail in this scenario, then it assumes you have a missing test, as there should be something which depends on that line (or else why is it there?). I'll look for a link to that.
I manage a rather large application (50k+ lines of code) by myself, and it manages some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3,000 tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true, three tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static, controllable dataset, so requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
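A minimal sketch of that setup with NUnit and ADO.NET might look like this (the connection string, table name, and expected row count are illustrative, not from the original answer):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class TenantQueryTests
{
    // Dedicated test database seeded with a known, static dataset (illustrative values).
    private const string TestDbConnectionString =
        @"Server=.\SQLEXPRESS;Database=AppTestDb;Integrated Security=true";

    [Test]
    public void ActiveTenantsQuery_ReturnsSeededRowCount()
    {
        using (var connection = new SqlConnection(TestDbConnectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Tenants WHERE IsActive = 1", connection))
        {
            connection.Open();
            int count = (int)command.ExecuteScalar();

            // The seed script inserts exactly 3 active tenants.
            Assert.AreEqual(3, count);
        }
    }
}
```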
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
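As a hedged illustration of such a bug-driven test (the class and the bug here are hypothetical):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class LateFeeCalculatorTests
{
    // Regression test for a hypothetical bug: a late fee was charged even when
    // rent was paid exactly on the due date. It fails until the bug is fixed,
    // then guards that code path against future changes.
    [Test]
    public void NoLateFee_WhenRentPaidOnDueDate()
    {
        var calculator = new LateFeeCalculator(); // hypothetical class under test

        decimal fee = calculator.CalculateLateFee(
            dueDate: new DateTime(2013, 5, 1),
            paidDate: new DateTime(2013, 5, 1));

        Assert.AreEqual(0m, fee);
    }
}
```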
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
1. Isolate the code you want to test into code libraries (if it's not already in libraries).
2. Write out the most common use-case scenarios and translate them into a small application that uses your code libraries.
3. Make sure your test program works as you expect it to.
4. Convert your test program into unit tests using a testing framework (a sketch follows this list).
5. Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
6. Increase the code and scenario coverage of your unit tests: what if unexpected values are entered?
7. Get the green light again. If a unit test fails, it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
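For step 4, a converted scenario might look like this minimal NUnit sketch (the library namespace, class, and values are only illustrative):

```csharp
using NUnit.Framework;
using MyCompany.Billing; // illustrative name for the extracted code library

[TestFixture]
public class RentCalculationTests
{
    [Test]
    public void MonthlyRent_IncludesParkingSurcharge()
    {
        // The same scenario the throwaway test program exercised, now repeatable.
        var calculator = new RentCalculator(baseRent: 800m, parkingSpaces: 1); // hypothetical class

        Assert.AreEqual(850m, calculator.MonthlyTotal());
    }
}
```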
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your biz logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your db interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which will involve creating a mock tenant object and saving "him" to the db. For your next test you would write "GetTenant" and try to get the tenant you just created out of the db and into your in-memory representation... then compare your first and second tenant to make sure all fields match. Etc., etc.
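A hedged sketch of that pair of steps as a single NUnit test (the Tenant and TenantRepository types are hypothetical names, not from the original answer):

```csharp
using NUnit.Framework;

[TestFixture]
public class TenantRepositoryTests
{
    [Test]
    public void AddTenant_ThenGetTenant_ReturnsMatchingFields()
    {
        // Hypothetical repository pointing at a dedicated test database.
        var repository = new TenantRepository("Server=.;Database=AppTestDb;Integrated Security=true");

        var original = new Tenant { Name = "John Doe", Unit = "12B" };
        repository.Add(original);

        Tenant loaded = repository.GetByName("John Doe");

        Assert.AreEqual(original.Name, loaded.Name);
        Assert.AreEqual(original.Unit, loaded.Unit);
    }
}
```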
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming. Here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is, within your UI classes, isolate all the business logic and the different business functions, and then within each UI constructor or Page_Load have unit test calls that exercise each of those business functions. For improved readability you could apply a #region tag around the business functions.
For your long term benefit, you should study design patterns. Pick a pattern that suits your project needs and redo your project using the design pattern.
It depends on the language you are using, but in general start with a simple test class that uses some made-up data (but still something "real") to exercise your code. Make it simulate what would happen in the app. If you are making a change in a particular part of the app, write something that verifies the current behaviour before you change the code. Since you have already written the code, getting testing in place for the entire app is going to be quite a challenge, so start small. From now on, as you write code, write the unit test first and then the code. You might also consider refactoring, but I would weigh the costs of refactoring versus rewriting as you add unit tests along the way.
I haven't tried adding tests to legacy applications, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you may add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use some mocking tools and DBUnit-style tools when creating your tests, to simulate the database-related issues.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or XUnit.Net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
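For example, a first test for such a class might be as small as this NUnit sketch (the class under test is a stand-in for whichever small, self-contained class you pick):

```csharp
using NUnit.Framework;

[TestFixture]
public class RentProrationTests
{
    // RentProration stands in for whichever small, self-contained class you chose.
    [Test]
    public void MidMonthMoveIn_ChargesHalfTheMonthlyRent()
    {
        decimal prorated = RentProration.Calculate(
            monthlyRent: 1000m, daysOccupied: 15, daysInMonth: 30);

        Assert.AreEqual(500m, prorated);
    }
}
```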
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start by writing the tests first.