I'm currently researching and deciding on a code coverage tool for my company, and have so far tried NCover (Bolt and Desktop), DotCover, and NCrunch. All tools I've tried so far work well for measuring/highlighting code coverage in code called directly by unit tests, but any code called through CSLA (DataPortal_Fetch, for example) is never detected as being covered. As much of our code base resides in these functions, I'm finding the tools to be next to useless for much of what I need tested and measured.
My question then is how can I get code coverage results for CSLA code? Does anyone know of a tool that would work with these kinds of calls, or certain options/extensions I can use to get coverage results with the tools I'm using?
The project is coded in C#, and I'm using Visual Studio 2013 Professional, CSLA 3.8, and .NET 4.0. I have the latest versions of NCover Bolt and DotCover (both on trial), as well as the newest OpenCover that I could find.
Thanks in advance!
NCover Support here.
If you are using NCover Desktop, you can auto-configure to detect any .NET code that is being loaded by your testing (Bolt can only profile test runners).
We have this video that shows auto-detecting NUnit, as an example:
http://www.ncover.com/resources/videos/ncover-creating-a-new-code-coverage-project
And a lot of the same info is in this help doc:
http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
Please contact us at support@ncover.com if you have extra questions. Hope this helps.
Contrary to TyCobb's entirely outdated opinion, current versions of CSLA don't invoke methods via reflection (except on iOS) and haven't since around 2007. The data portal does, however, use dynamic invocation via expression trees, and that's probably the issue causing you trouble.
One option in current versions of CSLA is that the data portal is now described by an interface, so you can mock the data portal, potentially creating a mock that does nothing but invoke your DP_XYZ methods directly. Even that's tricky, though, unless you make them public and allow other code in your app to easily break encapsulation (yuck). The problem is that you won't be able to call the methods without using reflection, or without rewriting the dynamic expression tree invocation code used inside CSLA...
Though perhaps the code coverage tools would see the code executing if it were run via reflection instead of via a runtime compiled expression?
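If you want to experiment with that, here is a minimal sketch of a unit test that drives a DataPortal_Fetch body directly via reflection so the coverage profiler sees it execute. CustomerEdit and CustomerCriteria are hypothetical stand-ins for one of your own business objects and its criteria type, and this bypasses the data portal plumbing entirely, so it exercises only your fetch logic:

using System;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class DataPortalCoverageTests
{
    [Test]
    public void DataPortalFetch_IsSeenByCoverageTools()
    {
        // CSLA business objects usually have non-public constructors.
        var obj = (CustomerEdit)Activator.CreateInstance(typeof(CustomerEdit), nonPublic: true);

        // Locate the private DataPortal_Fetch method by name.
        MethodInfo fetch = typeof(CustomerEdit).GetMethod(
            "DataPortal_Fetch",
            BindingFlags.Instance | BindingFlags.NonPublic);

        // Invoke it directly so the instrumented method body actually runs.
        fetch.Invoke(obj, new object[] { new CustomerCriteria(42) });
    }
}

Since the method body runs normally once invoked, an IL-instrumenting coverage tool should record those lines regardless of how the call was dispatched.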
I am trying to automatically open an NDepend Project when the Solution builds in an automated build in TFS2010.
This stems from this previous question. The aforementioned post is where I tried (and failed) to integrate NDepend's code metrics software with an automated Team Build via messing with the XML of my solution.
I decided since I wasn't getting anywhere in messing with the XML, that I would try a different route. In another program I have developed, I used
System.Diagnostics.Process.Start("blah.txt");
to trigger Notepad to run and open the text file "blah.txt."
I figured I could use the same concept to possibly help me with this NDepend integration. So I researched MSDN to see if I could find out more about the Process.Start method. And using this example
Process.Start("IExplore.exe", "C:\\myPath\\myFile.htm");
I substituted in my own paths to what I believe should open the project file "myProj.ndproj" inside the VisualNDepend application like this
System.Diagnostics.Process.Start("C:\\tools\\NDepend\\VisualNDepend","C:\\myProj\\myProj.ndproj");
I may be taking that example and tweaking it out of context, I'm not sure, but it seemed to me that what I tried should work. The solution built fine without any errors, but VisualNDepend didn't run.
It finally hit me that this code would only execute when the program ran, when I really need it to execute when the solution builds within TFS and Visual Studio.
I asked my coworkers if they knew of any built-in ways within TFS or VS to recognize whether or not the solution was being built, and they didn't really know of anything in particular. I tried "Googling" this topic and couldn't find any information that was useful to me.
Does anyone know of how to accomplish this? Or am I chasing a lost cause by trying to execute some C# code behind the solution? In which case, is my best bet trying to tweak the XML like I had previously been attempting?
I would recommend writing a custom build task (or tasks). You can essentially make the task do anything you'd like -- run a process, spit out results, etc., and it can be invoked directly from your MSBuild script.
I'm not sure if I'm answering your question (or if I even have a grasp on what you're trying to do), but that's probably the area I'd be looking to find my solution.
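If you do go the custom-task route, a minimal sketch might look like the following. The RunNDepend class and its property names are made up for illustration; it just wraps the Process.Start call from the question in a task that MSBuild can invoke at build time:

using System.Diagnostics;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// A custom MSBuild task that launches VisualNDepend against an .ndproj file.
public class RunNDepend : Task
{
    // Both paths are supplied from the build script rather than hard-coded.
    [Required]
    public string NDependExePath { get; set; }

    [Required]
    public string ProjectPath { get; set; }

    public override bool Execute()
    {
        Log.LogMessage("Launching NDepend on {0}", ProjectPath);

        using (Process process = Process.Start(NDependExePath, ProjectPath))
        {
            process.WaitForExit();
            return process.ExitCode == 0;
        }
    }
}

You would compile that into a small assembly, register it in the project file with a <UsingTask TaskName="RunNDepend" AssemblyFile="MyBuildTasks.dll" /> element (names illustrative), and call it from a target such as AfterBuild. That hook is what makes it run when the solution builds, rather than when your program runs.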
I have a C# user control project which causes an intermittent .NET run-time error, a generic error, and I'm wondering if there is any code analysis tool that I can point at my .sln file which would tell me what may be causing my error.
Is there a tool that will tell you what you're doing wrong?
No. That's part of the fun of programming. It's impossible for a computer program to look at a piece of code and definitively determine what all of the errors are.
Are there tools out there that can tell me some things my program is doing wrong?
Yes, these are called static analysis tools. FxCop is a free tool available from Microsoft that will do an amazing amount of static analysis on your code base.
I'm not 100% sure if the standalone version can be pointed at a .sln file. But it can easily be pointed at the build output from a solution.
http://msdn.microsoft.com/en-us/library/bb429476.aspx
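The standalone runner for that is FxCopCmd.exe, which you can script against your build output. Here is a rough sketch, with the install path, assembly path and report path all placeholders, and using the /file and /out switches as I recall them (double-check against your FxCop version):

using System.Diagnostics;

class FxCopRunner
{
    static void Main()
    {
        // Point FxCopCmd at the compiled assemblies, not the .sln itself.
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe",
            Arguments = @"/file:C:\myProj\bin\Debug\MyProj.dll /out:C:\myProj\FxCopReport.xml",
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}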
What you need is a static code analysis tool. Besides FxCop, which JaredPar mentioned, there are others.
Another option I have found recently which gives a useful way of finding issues like this is Pex, which does white-box unit testing. When you run the Pex explorations, it will attempt to throw a lot of values at your methods via its auto-generated unit tests, which may help find odd issues caused by strange/unexpected data.
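To give a feel for it, a Pex exploration is driven by a parameterized test roughly like the one below. Parser is a hypothetical stand-in for one of your own classes; the attributes and helper classes come from Microsoft.Pex.Framework:

using Microsoft.Pex.Framework;

// Pex generates many concrete inputs for this parameterized test,
// trying to reach every branch of the method under test.
[PexClass(typeof(Parser))]
public partial class ParserTests
{
    [PexMethod]
    public void ParseHandlesArbitraryInput(string input)
    {
        // Rule out inputs the method is not meant to accept.
        PexAssume.IsNotNull(input);

        var result = new Parser().Parse(input);

        // Any exception or failed assertion becomes a generated repro test.
        PexAssert.IsTrue(result != null);
    }
}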
But I would not confuse a static analysis tool with a debugger, and I believe what you need here is debugging. In other words, FxCop might tell you that everything's great, but you can still get a run-time exception.
Some errors at run time can be really difficult to spot simply by looking at the code (race conditions with multiple threads, for example), so there is no "code analysis tool" that could catch a run-time exception simply by analyzing the code.
Check this link for some examples on debugging: http://msdn.microsoft.com/en-us/library/ms954594.aspx. You will probably have to do some stepping through your code using a debugger, maybe Trace some data to a log file, and then try to find exactly where it goes wrong.
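For the tracing part, here is a minimal System.Diagnostics sketch; the log path and the RunSuspectCode method are placeholders for your own code:

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Mirror all Trace output into a log file.
        Trace.Listeners.Add(new TextWriterTraceListener(@"C:\logs\mycontrol.log"));
        Trace.AutoFlush = true;

        Trace.WriteLine("Entering the code path that fails intermittently");
        try
        {
            RunSuspectCode(); // stand-in for the user-control logic
        }
        catch (Exception ex)
        {
            // Record the full detail that the generic run-time error hides.
            Trace.WriteLine(ex.ToString());
            throw;
        }
    }

    static void RunSuspectCode() { /* ... */ }
}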
As far as tools go, FxCop is really good for code analysis and it's free, but something else to look at is http://www.jetbrains.com/resharper/ for on-the-fly code analysis; it also promotes good development practices.
But these may be the wrong tools for the job and may not solve the problem you're having; the code may be syntactically correct but contain a logic error. It's like a spell checker: all the words may be spelled correctly, but "Evert bird is conical" means something completely different than "Every word is correct".
You're probably going to need to spend some time in the debugger or using some form of trace tool like dotTrace Profiler; there are a couple more out on the Visual Studio Gallery, like http://www.debuginspector.com/
NDepend is fully integrated with VS2005, VS2008 and VS2010, so you can simply point NDepend at the .sln you wish to analyze, and NDepend will build a full report for you.
In a few clicks, you can visualize which types depend on which types, etc. This will obviously not magically solve all your problems, but it is likely to put you on the right track.
I'm looking for a tool (preferably free) that analyzes incremental code coverage of our C# solution. What I mean by this is that I don't want to know what the total code coverage is for all code or even for a namespace, but only new lines of code or perhaps lines of code that changed since the last checkin. (We use subversion for source control.)
I would like to call this tool as part of our automated build process and report back when someone checks in new code with less than X% code coverage.
Does anyone know of a tool that accomplishes this?
Thanks.
NDepend boasts the following:
NDepend gathers code coverage data from NCover™ and Visual Studio Team System™. From this data, NDepend infers some metrics on methods, types, namespaces and assemblies: PercentageCoverage, NbLinesOfCodeCovered, NbLinesOfCodeNotCovered and BranchCoverage (from NCover only).
These metrics can be used in conjunction with other NDepend features. For example, you can know what code has been added or refactored since the last release and is not thoroughly covered by tests. You can write a CQL constraint to continuously check that a set of classes is 100% covered. You can list which complex methods need more tests.
I seem to recall NDepend being able to compare with data from earlier builds, so it looks like the combination of NDepend and NCover might do the trick. Haven't tried it myself, though.
Depending on the version of .NET, you can use NCover for free; however, if you are on the newer versions of .NET, it's not so cheap. You would probably still have to write your own stylesheet to parse the results of NCover and get specifically what you are asking for.
Other than that I have not heard or seen of another tool to do this unless you wanted to write it yourself.
NCover basically uses the .NET Profiling API, so in theory you could just do the same.
I use PartCover to analyse my unit tests to good effect. For the data you're looking for, you can use the console tool and extract the visit and len counts from the report XML.
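Here is a rough sketch of that extraction using LINQ to XML. The report path is a placeholder, and treat the element layout as an assumption to verify against your PartCover version; only the visit and len attribute names are taken from the answer above:

using System;
using System.Linq;
using System.Xml.Linq;

class PartCoverReport
{
    static void Main()
    {
        XDocument report = XDocument.Load(@"C:\build\partcover-results.xml");

        // Collect every element carrying the visit/len attribute pair.
        var points = report.Descendants()
            .Where(e => e.Attribute("visit") != null && e.Attribute("len") != null)
            .Select(e => new
            {
                Visits = (int)e.Attribute("visit"),
                Length = (int)e.Attribute("len")
            })
            .ToList();

        int covered = points.Where(p => p.Visits > 0).Sum(p => p.Length);
        int total = points.Sum(p => p.Length);

        Console.WriteLine("Covered {0} of {1} ({2:P1})",
            covered, total, total == 0 ? 0.0 : (double)covered / total);
    }
}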
In addition to the Rythmis answer, I provide this blog post that explains in detail how NDepend coupled with NCover or VSTS coverage answers the question:
Are you sure added and refactored code is covered by tests?
I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:
What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
What kind of hardware will I need for this?
Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
How often should we make this kind of build?
How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
Is there anything else I'm not seeing here?
I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.
EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX.
Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
Update: Jenkins is the most up to date version of Hudson. Everyone should be using Jenkins now. I'll be updating the links accordingly.
Hudson is free and extremely easy to configure and will easily run on a VM.
Partly from an old post of mine:
We use it to
Deploy Windows services
Deploy web services
Run MSTest suites & display as much information as any JUnit tests
Keep track of low/med/high tasks
Trend-graph warnings and errors
Here is some of the built-in .NET stuff that Hudson supports:
MSBuild
NAnt
MSTest
NUnit
Team Foundation Server
FxCop
StyleCop
Compiler warnings
Code tasks
Also, God forbid you are using Visual SourceSafe, it supports that as well. I'd recommend you take a look at Redsolo's article on building .NET projects using Hudson.
Your questions
Q: What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
A: I just installed Visual Studio on a fresh copy of a VM running a fresh, patched install of a Windows Server OS, so you'd need the licenses to handle that. Hudson will install itself as a Windows service and run on port 8080, and you will configure how often you want it to scan your code repository for updated code, or you can tell it to build at a certain time. All configurable through the browser.
Q: What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
A: You will get an email on the first time a build fails, or becomes unstable. A build is unstable if a unit test fails or it can be marked unstable through any number of criteria that you set. When a unit test or build fails you will be emailed and it will tell you where, why and how it failed. With my configuration, we get:
list of all commits since the last working build
commit notes of those commits
list of files changed in the commits
console output from the build itself, showing the error or test failure
Q: What kind of hardware will I need for this?
A: A VM will suffice
Q: Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
A: Hudson can do whatever you want with it, and that includes ID'ing it via the MD5 hash, uploading it, copying it, archiving it, etc. It does this automatically and provides you with a long-running history of build artifacts.
Q: How often should we make this kind of build?
A: We have ours poll SVN every hour, looking for code changes, then running a build. Nightly is OK, but somewhat worthless IMO, since what you worked on yesterday won't be fresh in your mind in the morning when you get in.
Q: How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
A: That's up to you. After a while, I move our build artifacts to long-term storage or delete them, but I keep all the data stored in text/XML files; this lets me store the changelog, trend graphs, etc. on the server with very little space consumed. You can also set Hudson up to only keep artifacts from a trailing number of builds.
Q: Is there anything else I'm not seeing here?
A: No. Go get Hudson right now; you won't be disappointed!
We've had great luck with the following combo:
Visual Studio (specifically, using the MSBuild.exe command-line tool and passing it our solution files, which removes the need for MSBuild scripts)
NAnt (we like its XML syntax/task library better than MSBuild's; it also has options for Perforce source control operations)
CruiseControl.net - built-in web dashboard for monitoring/starting builds.
CCNet has built-in notifiers to send emails when builds succeed/fail
On justification: This takes the load off developers doing manual builds and does a lot to take human error out of the equation. It is very hard to quantify this effect, but once you do it you will never go back. Having a repeatable process to build and release software is paramount. I'm sure you've been places where they build the software by hand and it gets out in the wild, only to have your build guy say "Oops, I must have forgotten to include that new DLL!"
On hardware: as powerful as you can get. More power/memory = faster build times. If you can afford it you'll never regret getting a top-notch build machine, no matter how small the group.
On space: Helps to have plenty of hard disk space. You can craft your NAnt scripts to delete intermediate files every time a build starts, so the real issue is keeping log histories and old application installers. We have software that monitors disk space and sends alerts. Then we clean up the drive manually. Usually needs to be done every 3-4 months.
On build notifications: This is built in to CCNet, but if you are going to add automated testing as an additional step then build this into the project from the get-go. It is extremely hard to back fit tests once a project gets large. There is tons of info on test frameworks out there (probably a ton of info on SO as well), so I'll defer on naming any specific tools.
At my previous workplace we used TeamCity. It's very easy to use and powerful. It can be used for free with some restrictions, and there is also a tutorial on Dime Casts. The reason we didn't use CruiseControl.NET is that we had a lot of small projects, and it's quite painful to set each one up in CC.NET. I would highly recommend TeamCity. To summarize: if you lean toward open source, CC.NET is the granddaddy, with a slightly higher learning curve; if your budget allows, definitely go with TeamCity, or check out the free version.
How? Have a look at Carel Lotz's blog.
Why? There are several reasons that I can think of:
A working build, when properly implemented, means that all your developers can build on their machine when the build is green
A working build, when properly implemented, means that you are ready to deploy at any time
A working build, when properly implemented, means that whatever you release has made a trip to your source control system.
A working build, when properly implemented, means that you integrate early and often, reducing your integration risk.
Martin Fowler's article on Continuous Integration remains the definitive text. Have a look at it!
The main argument in favour is that it will cut the cost of your development process, by alerting you as soon as possible that you have a broken build or failing tests.
The problem of integrating the work of multiple developers is the main danger of growing a team. The larger the team gets, the harder it is to coordinate their work and stop them messing with each other's changes. The only good solution is to tell them to "integrate early and often", by checking in small units of work (sometimes called "stories") as they are completed.
You should make the build machine rebuild every time someone checks in, throughout the day. With CruiseControl, you can get an icon on your task bar that turns red (and even talks to you!) when the build is broken.
You should then do a nightly full clean build where the source version is labeled (given a unique build number) that you can choose to publish to your stakeholders (product managers, QA people). This is so that when a bug is reported, it is against a known build number (that's extremely important).
Ideally you should have an internal site where builds can be downloaded, and have a button you can click to publish the previous nightly build.
Just trying to build a bit on what mjmarsh said, since he laid a great foundation...
Visual Studio. MSBuild works fine.
NAnt.
NAntContrib. This will provide additional tasks such as Perforce operations.
CruiseControl.net. This is again basically your "build dashboard".
All of the above (save for VS) is open source, so you're not looking at any additional licensing.
As Earwicker mentioned, build early, build often. Knowing when something breaks, and that you can still produce a deliverable, is useful for catching stuff early on.
NAnt includes tasks for nunit/nunit2 as well, so you can actually automate your unit testing. You can then apply stylesheets to the results, and with the help of the framework provided by CruiseControl.net, have nice readable, printable unit test results for every build.
The same applies to the ndoc task. Have your documentation produced and available, for every build.
You can even use the exec task to execute other commands, for instance, producing a Windows Installer using InstallShield.
The idea is to automate the build as much as possible, because human beings make mistakes. Time spent up front is time saved down the road. People don't have to babysit the build by walking through the build process by hand. Identify all the steps of your build, create NAnt scripts for each task, and build your NAnt scripts one by one until you've wholly automated your entire build process. It also puts all of your builds in one place, which is good for comparison purposes. Did something break in Build 426 that worked fine in Build 380? Well, there are the deliverables ready for testing -- grab them and test away.
No licenses needed. CruiseControl.net is freely available and only needs the .NET SDK to build.
A build server, even without automated unit tests still provides a controlled environment for building releases. No more "John usually builds on his machine but he's out sick. For some reason I can't build on my machine"
Right now I have one set up in a Virtual PC session.
Yes. The build needs to be dumped somewhere accessible. Development builds should have debugging turned on; release builds should have it turned off.
How often is up to you. If set up correctly, you can build after each check-in with very little overhead. This is a great idea if you have (or are planning on having) unit tests in place.
Keep milestones and releases as long as required. Anything else depends on how often you build: continuously? Throw them away. Daily? Keep a week's worth. Weekly? Keep two months' worth.
The larger your project gets the more you will see the benefits of an automated build machine.
It is all about the health of the build. What this gets you is that you can set up anything you want to happen with the builds. Among these, you can run tests, static analysis, and a profiler.
Problems are dealt with much, much faster when you have recently worked on that part of the application. If you commit small changes, then it almost tells you where you broke it :)
This of course assumes, you set it up to build with every check in (continuous integration).
It can also help bring QA and Dev closer, as you can set up functional tests to run with it, along with a profiler and anything else that improves feedback to the dev team. This doesn't mean the functional tests run with every check-in (they can take a while), but you set up builds/tests with tools that are common to the whole team. I have been automating smoke tests, so in my case we collaborate even more closely.
Why:
Ten years ago, we as software developers used to analyse something to the nth degree, get the documents (written in a human language) 'signed off', and then start writing code. We would unit test and string test, and then we would hit system test: the first time the system as a whole would be run together, sometimes weeks or months after we got the documents signed off. It was only then that we would uncover all the assumptions and misunderstandings we had when we analysed everything.
Continuous Integration as an idea causes you to build a complete (although, initially, very simple) system end to end. Over time the system functionality is built out orthogonally. Every time you do a complete build you are doing the system test early and often. This means you find and fix bugs and assumptions as early as possible, when it is the cheapest time to fix them.
How:
As for the how, I blogged about this a little while ago: [Click Here]
Over 8 posts it goes step by step on how to set up a Jenkins server in a windows environment for .NET solutions.