I'm currently researching and deciding on a code coverage tool for my company, and have so far tried NCover (Bolt and Desktop), DotCover, and NCrunch. All tools I've tried so far work well for measuring/highlighting code coverage in code called directly by unit tests, but any code called through CSLA (DataPortal_Fetch, for example) is never detected as being covered. As much of our code base resides in these functions, I'm finding the tools to be next to useless for much of what I need tested and measured.
My question then is how can I get code coverage results for CSLA code? Does anyone know of a tool that would work with these kinds of calls, or certain options/extensions I can use to get coverage results with the tools I'm using?
The project is coded in C#, and I'm using Visual Studio 2013 Professional, CSLA 3.8, and .NET 4.0. I have the latest versions of NCover Bolt and DotCover (both on trial), as well as the newest OpenCover that I could find.
Thanks in advance!
NCover Support here.
If you are using NCover Desktop, you can auto-configure to detect any .NET code that is being loaded by your testing (Bolt can only profile test runners).
We have this video that shows auto-detecting NUnit, as an example:
http://www.ncover.com/resources/videos/ncover-creating-a-new-code-coverage-project
And a lot of the same info in this help doc:
http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
Please contact us at support@ncover.com if you have extra questions. Hope this helps.
Contrary to TyCobb's entirely outdated answer, current versions of CSLA don't invoke methods via reflection (except on iOS) and haven't since around 2007. The data portal does, however, use dynamic invocation via expression trees, and that's probably what's causing you trouble.
One option in current versions of CSLA is that the data portal is now described by an interface so you can mock the data portal, potentially creating a mock that does nothing but invoke your DP_XYZ methods directly. Even that's tricky though, unless you make them public and allow other code in your app to easily break encapsulation (yuck). The problem is that you won't be able to call the methods without using reflection, or rewriting the dynamic expression tree invocation code used inside CSLA...
Though perhaps the code coverage tools would see the code executing if it were run via reflection instead of via a runtime compiled expression?
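As a purely speculative, test-only sketch of that reflection idea: the DirectDataPortal class below is hypothetical (it is not part of CSLA), and it assumes a single-argument DataPortal_Fetch whose parameter type matches the criteria object. It trades encapsulation for letting the profiler see the method execute.

```csharp
using System;
using System.Reflection;

// Hypothetical pass-through "portal" for test builds only: it invokes the
// private DataPortal_Fetch method directly via reflection, so coverage
// profilers attribute the executed lines to your business object.
public class DirectDataPortal<T> where T : new()
{
    public T Fetch(object criteria)
    {
        var obj = new T();

        // Locate the non-public DataPortal_Fetch(criteria) overload.
        var method = typeof(T).GetMethod(
            "DataPortal_Fetch",
            BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public,
            null,
            new[] { criteria.GetType() },
            null);

        if (method == null)
            throw new MissingMethodException(typeof(T).Name, "DataPortal_Fetch");

        method.Invoke(obj, new[] { criteria });
        return obj;
    }
}
```

Whether the coverage tools pick this up is exactly the open question above; a reflection `Invoke` still runs the target method's IL in-process, so in principle a line-level profiler should see it.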
I am working on unit testing a code generator. A unit test's basic flow is:
The Unit Test calls the appropriate method and code is generated. Easy enough.
The Unit Test compiles the generated C# code (of step #1). If the code compiles, proceed to step #3; else stop everything.
If step #2 succeeded, the Unit Test then runs other, pre-written unit tests on the compiled code generated in step #2. For this I will utilize the solutions described here: Running individual NUnit tests programmatically and NUnit API And Running Tests Programmatically.
The approach for step #2 is what this question is about: I am thinking I have two options: (1) run the Visual Studio command line to compile the solution, or (2) use CSharpCodeProvider with CompilerParameters. Any recommendations would be greatly appreciated.
I personally use Roslyn on a daily basis, and so I was tempted to side with Kenneth and recommend it. But in your case, if the only information you want is whether the code compiled, I would lean more towards the CSharpCodeProvider class, especially if each method being unit tested generates a single file of code. If you had to do any kind of analysis on the generated code, it might be worth using Roslyn, but I doubt this is your case. The only other pro Roslyn might bring you is that you can open up a whole project/solution directly instead of compiling every separate file, which might appeal to you (it's a lot simpler to use than you might think).
Besides this advice, all I can say is that if you just have to choose between CSharpCodeProvider and the command-line option, I would definitely go with CSharpCodeProvider, since it already wraps and exposes the data and operations relevant to your analysis (Are there any errors? => let's check if compiledResults.Errors.Count == 0). You don't need to call an external process (the C# compiler) to get what you want, which makes it a much simpler option in my opinion, while also offering a lot of flexibility (i.e., the CompilerParameters).
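For what it's worth, a minimal sketch of that approach (the GeneratedCodeCompiler wrapper and its parameter defaults are illustrative choices, not a prescribed setup):

```csharp
using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class GeneratedCodeCompiler
{
    // Compiles a single generated source string in memory and returns the
    // CompilerResults so the caller can inspect Errors for pass/fail.
    public static CompilerResults Compile(string source)
    {
        var parameters = new CompilerParameters
        {
            GenerateInMemory = true,     // no assembly written to disk
            GenerateExecutable = false   // build a library, not an .exe
        };
        parameters.ReferencedAssemblies.Add("System.dll");

        using (var provider = new CSharpCodeProvider())
        {
            return provider.CompileAssemblyFromSource(parameters, source);
        }
    }
}
```

Usage would then be checking results.Errors.HasErrors and, on failure, iterating the CompilerError entries to report line numbers and messages before deciding whether to proceed to step #3.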
I don't know if you have started playing around with it, but you shouldn't have too many problems with this method. Hope this helps.
In my project we have BDD tests which I have written using SpecFlow, NUnit, and WatiN. I run these tests from Visual Studio using ReSharper. Now I want to expose these features and scenarios to non-technical people and let them run these tests.
Something like: I want to list all the tests in a browser, and the user should be able to run those tests by clicking on them. Can this be achieved? Is there any add-in?
Currently we use Team Foundation Server as our build server.
TeamCity, a continuous integration server by JetBrains, provides this as web-based functionality. It even provides statistics and test output results.
It supports NUnit out of the box.
SpecFlow and Watin are supported with some configuration.
The BIGGEST problem you are going to have is that the plain-text feature file automatically gets converted to an xxx.feature.cs file by the SpecFlow Visual Studio plugin. So your process is this:
Modify xxxx.feature file
Find some way to get the SpecFlow plugin to generate xxx.feature.cs
Compile
Run tests by using NUnit/Xunit (as configured)
Gather and present test success report
To me this process has a name; I'd call it development.
BDD however is a different process, it's all about collaboration and communication with the business in order to devise a specification. In the beginning there were no tools, but the process still worked.
A number of my co-workers have been using BDD techniques on a variety of real-world projects and have found the techniques very successful. The JBehave story runner – the part that verifies acceptance criteria – is under active development.
Dan North, Introducing BDD (2006)
Don't get caught up on the tools alone or you'll miss the vital part of the process. You'll get so much benefit by working with your BA to define the new specification collaboratively.
P.S. Another way to consider this is that the specification and code should always be in step. Just by defining a new example, we don't magically move the code forwards to meet that example. Instead the most common practice is to develop the code to meet the new example, and then check in the new specification and code as a single change set.
You can use the Pickles project to produce stakeholder-friendly documentation (including HTML) from the Gherkin specifications in your source control.
https://github.com/picklesdoc/pickles
There's no facility for running the tests from the HTML. It's open-source so perhaps you can extend it this way... however, I personally don't see the value in having non-technical users actually execute the specifications. I would have your continuous integration server run the SpecFlow tests and generate a step definition report periodically. The non-technical users can then browse to these reports to see current project status.
To give non-technical people access to your feature files, you can use http://www.speclog.net/
SpecLog will allow non-technical users to edit and create new features and will automatically synchronise them with TFS.
Unfortunately it's not free, and you can't run the specs from that tool.
I've spent the last few weeks trying to figure out a way to implement (or find someone who has implemented) regression testing for our build process, but so far I haven't found anything that works. We use TFS2008 and VS2010, and upgrading to TFS2010 is not an option for us. I've tried using NDepend to give us the list of changed methods and type dependencies, but running it through our build script has proven supremely unreliable: if I run the same build twice without changing anything, I would not be surprised to get one perfect NDepend report and one exception saying NDepend can't run for one reason or another.
Unfortunately, I'm pretty much stuck with the tools I have (TFS2008, VS2010, MSBuild, and MSTest). I could probably get another tool, but changing the tools I already have (such as moving from MSTest to NUnit, or TFS2008 to TFS2010) will not be possible.
Has anyone done this already? Or can someone point me in the right direction for finding, programmatically, which methods and types changed between two builds?
If you have unit tests and a coverage report, you could diff the coverage report before and after. Any changes to the coverage would show up in that diff, and you could then regression test off it (which I assume is manual).
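A rough sketch of that diff, assuming a simplified report where each method appears as a `<method name="..." coverage="..."/>` element (the real schema depends on which coverage tool you use, so adapt the element and attribute names accordingly):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class CoverageDiff
{
    // Returns the names of methods that are new in the "after" report or
    // whose coverage percentage changed since the "before" report.
    public static IEnumerable<string> ChangedMethods(string beforeXml, string afterXml)
    {
        Func<string, Dictionary<string, double>> load = xml =>
            XDocument.Parse(xml)
                .Descendants("method")
                .ToDictionary(
                    m => (string)m.Attribute("name"),
                    m => (double)m.Attribute("coverage"));

        var before = load(beforeXml);
        var after = load(afterXml);

        return after
            .Where(kv => !before.ContainsKey(kv.Key) || before[kv.Key] != kv.Value)
            .Select(kv => kv.Key);
    }
}
```

The resulting method list is what you'd hand to whoever does the manual regression pass.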
I'm looking for a tool (preferably free) that analyzes incremental code coverage of our C# solution. What I mean by this is that I don't want to know what the total code coverage is for all code or even for a namespace, but only new lines of code or perhaps lines of code that changed since the last checkin. (We use subversion for source control.)
I would like to call this tool as part of our automated build process and report back when someone checks in new code with less than X% code coverage.
Does anyone know of a tool that accomplishes this?
Thanks.
NDepend boasts the following:
NDepend gathers code coverage data from NCover™ and Visual Studio Team System™. From this data, NDepend infers some metrics on methods, types, namespaces and assemblies: PercentageCoverage, NbLinesOfCodeCovered, NbLinesOfCodeNotCovered and BranchCoverage (from NCover only).
These metrics can be used in conjunction with other NDepend features. For example, you can know which code has been added or refactored since the last release and is not thoroughly covered by tests. You can write a CQL constraint to continuously check that a set of classes is 100% covered. You can list which complex methods need more tests.
I seem to recall NDepend being able to compare with data from earlier builds, so it looks like the combination of NDepend and NCover might do the trick. Haven't tried it myself, though.
Depending on the version of .NET, you can use NCover for free. However, if you are on the newer versions of .NET, it's not so cheap. You would probably still have to write your own stylesheet to parse NCover's results and get specifically what you are asking for.
Other than that, I have not heard of another tool that does this, unless you want to write it yourself.
NCover basically uses the .NET Profiling API, so in theory you could do the same.
I use PartCover to analyse my unit tests, to good effect. For the data you're looking for, you can use the console tool and extract the visit and len counts from the report XML.
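A rough sketch of pulling those counts out with LINQ to XML. The pt element with visit and len attributes follows the report description above, but verify the exact names against the output of your PartCover version; the PartCoverStats wrapper is just an illustrative name.

```csharp
using System.Linq;
using System.Xml.Linq;

static class PartCoverStats
{
    // Computes the percentage of instrumented code length (len) that was
    // actually visited (visit > 0) across all coverage points in the report.
    public static double PercentVisited(string reportXml)
    {
        var points = XDocument.Parse(reportXml).Descendants("pt").ToList();
        if (points.Count == 0)
            return 0;

        int totalLen = points.Sum(p => (int)p.Attribute("len"));
        int coveredLen = points
            .Where(p => (int)p.Attribute("visit") > 0)
            .Sum(p => (int)p.Attribute("len"));

        return totalLen == 0 ? 0 : 100.0 * coveredLen / totalLen;
    }
}
```

For the incremental-coverage use case in the question, you would run this per file or per method and compare against the previous build's numbers in your build script.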
In addition to the Rythmis answer, I provide this blog post that explains in detail how NDepend coupled with NCover or VSTS coverage answers the question:
Are you sure added and refactored code is covered by tests?
I just got a heaping pile of (mostly undocumented) C# code and I'd like to visualize its structure before I dive in and start refactoring. I've done this in the past (in other languages) with tools that generate call graphs.
Can you recommend a good tool for facilitating the discovery of structure in C#?
UPDATE
In addition to the tools mentioned here I've seen (through the tubes) people say that .NET Reflector and CLR Profiler have this functionality. Any experience with these?
NDepend is pretty good at this. Additionally, Visual Studio 2008 Team System has a bunch of features that let you keep track of cyclomatic complexity, but it's much more basic than NDepend. (Run Code Analysis.)
Concerning NDepend, it can produce usable call graphs. A call graph can be made clearer by grouping its methods by parent class, namespace, or project.
Find more explanations about NDepend call graph here.
It's a bit late, but http://sequenceviz.codeplex.com/ is an awesome tool that shows the caller graph/sequence diagram. The diagrams are generated by reverse-engineering .NET assemblies.
As of today (June 2017), the best tool in class is ReSharper's Inspect feature. It allows you to find all incoming calls, outgoing calls, value origin/destination, etc.
The best part of ReSharper, compared to other tools mentioned above: it's less buggy.
I've used Doxygen with some success. It's a little confusing, but free, and it works.
Visual Studio 2010.
Plus, on a method-by-method basis - Reflector (Analyzer (Ctrl+R); "Depends On" and "Used By")
SequenceViz and DependencyStructureMatrix for Reflector might help you out: http://www.codeplex.com/reflectoraddins
I'm not sure if it will do it over just source code, but ANTS Profiler will produce a call graph for a running application (may be more useful anyway).