One of the most useful code analysis features of ReSharper is marking symbols as unused when no usage is found in the solution.
Unfortunately, any symbol that is covered by unit tests is regarded as used.
So I am looking for a way to exclude the unit tests from this usage analysis.
Scanning through the ReSharper options I found a button labeled "Edit Items to Skip". It has a long description text that says, amongst other things, "...if a certain symbol in solution is only used in files that you skip, this symbol will be highlighted as never used."
This sounded like exactly what I wanted. But putting the unit test project on the skip list not only reveals every effectively unused symbol, it also disables code analysis for the whole test project. Of course I still want to write good unit test code and thus make use of all of ReSharper's code analysis features. I just do not want usages inside the test project to count.
Any ideas?
Discovered a very simple answer:
Unload the test project and refresh the code analysis.
If I specify <SonarQubeTestProject>true</SonarQubeTestProject>, it will exclude the project not just from code coverage but also from code smell and duplicate detection.
Is there a way to exclude it from code coverage only, but still scan it for all the other aspects?
Take a look at Exclusions, which allow you to exclude files from:
consideration altogether
issues
coverage
duplications
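As a sketch, a coverage-only exclusion can be set per project in the .csproj (the SonarQubeSetting item is SonarScanner for MSBuild syntax; the file pattern here is hypothetical):

<ItemGroup>
  <!-- Excluded from coverage calculation only; issue and duplication
       detection still run on these files. -->
  <SonarQubeSetting Include="sonar.coverage.exclusions">
    <Value>**/Generated/**/*.cs</Value>
  </SonarQubeSetting>
</ItemGroup>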
I suspect the answer is no, but I'll ask anyway...
TL;DR
I know I can exclude a class or method from coverage analysis with the [ExcludeFromCodeCoverage] attribute, but is there a way to exclude only part of a method?
Concrete example
I have a method that lazily generates a sequence of int.MaxValue elements:
private static IEnumerable<TElement> GenerateIterator<TElement>(Func<int, TElement> generator)
{
for (int i = 0; i < int.MaxValue; i++)
{
yield return generator(i);
}
}
In practice, it's never fully enumerated, so the end of the method is never reached. Because of that, DotCover considers that 20% of the method is not covered, and it highlights the closing brace as uncovered (which corresponds to return false in the generated MoveNext method).
I could write a test that consumes the whole sequence, but it takes a very long time to run, especially with coverage enabled.
So I'd like to find a way to tell DotCover that the very last instruction doesn't need to be covered.
Note: I know I don't really need to have all the code covered by unit tests; some pieces of code can't or don't need to be tested, and I usually exclude those with the [ExcludeFromCodeCoverage] attribute. But I like to have 100% reported coverage for the code that I do test, because it makes it easier to spot untested parts of the code. Having a method with 80% coverage when you know there is nothing more to test in it is quite annoying...
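For context, the member-level exclusion I use today looks like this (the type and member names are made up):

using System;
using System.Diagnostics.CodeAnalysis;

public static class Diagnostics
{
    // Removes the entire member from the coverage report; the attribute
    // cannot target a single statement such as an iterator's closing brace.
    [ExcludeFromCodeCoverage]
    public static void DumpState()
    {
        Console.WriteLine("not worth testing");
    }
}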
No, there is no way to exclude "part of a method" from coverage analysis with dotCover.
In the general sense you have a couple of options:
Extract the uncovered part into its own method, so you can properly ignore that method from analysis
Ignore the problem
In this case there may be a third option. Since your test code exercises the majority of your method, perhaps you should just write a test method that makes sure the code runs to completion?
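One sketch of that third option, assuming the bound can be made a parameter, lets a fast test enumerate a short sequence to completion so the closing brace is reached:

private static IEnumerable<TElement> GenerateIterator<TElement>(
    Func<int, TElement> generator, int count = int.MaxValue)
{
    for (int i = 0; i < count; i++)
    {
        yield return generator(i);
    }
}

// A fast test can now run the iterator to its end:
// GenerateIterator(i => i * 2, count: 3).ToList() reaches the closing brace.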
First and foremost, while "code coverage" can be an important metric, one must realize that 100% code coverage just might not be attainable. It is one of those metrics you should aspire to but may never reach; i.e. get as close as you possibly can.
OTOH, don't go crazy trying to get 100% code coverage. More importantly: is your code readable? Is it testable (I presume so, since you're looking at code coverage)? Is it maintainable? Is it SOLID? Do you have passing unit, integration, and end-to-end tests? These things matter more than achieving 100% code coverage. What code coverage tells you is how extensive your testing is (I'm not sure whether the built-in code coverage analysis engine counts only unit tests or all types of tests in its statistics), which gives you an indication of whether you have enough tests. And while it tells you how extensive your tests are (i.e. how many lines of code are executed by your tests), it won't tell you if your tests are any good (i.e. whether they really test what needs to be tested to ensure your application works correctly).
Anyway, this may not be an answer, but food for thought.
I'm writing a Tiger compiler in C# and I'm going to translate the Tiger code into IL.
While implementing the semantic check of every node in my AST, I created lots of unit tests for that phase. That is pretty simple, because my CheckSemantics method looks like this:
public override void CheckSemantics(Scope scope, IList<Error> errors) {
...
}
so, if I want to write some unit test for the semantic check of some node, all I have to do is build an AST, and call that method. Then I can do something like:
Assert.That(errors.Count == 0);
or
Assert.That(errors.Count == 1);
Assert.That(errors[0] is UnexpectedTypeError);
Assert.That(scope.ExistsType("some_declared_type"));
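Put together, one such test might look like this (NUnit; BuildNodeWithUndeclaredType is a hypothetical helper that assembles the AST under test):

[Test]
public void CheckSemantics_UndeclaredType_ReportsUnexpectedTypeError()
{
    var scope = new Scope();
    var errors = new List<Error>();
    var node = BuildNodeWithUndeclaredType();  // hypothetical AST construction

    node.CheckSemantics(scope, errors);

    Assert.That(errors.Count, Is.EqualTo(1));
    Assert.That(errors[0], Is.InstanceOf<UnexpectedTypeError>());
}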
but I'm starting on code generation at the moment, and I don't know what a good practice for writing unit tests for that phase would be.
I'm using the ILGenerator class. I've thought about the following:
Generate the code of the sample program I want to test
Save generated code as test.exe
Execute test.exe and store the output in results
Assert against results
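Steps 3 and 4 might look roughly like this with System.Diagnostics.Process (the expected output string is a placeholder):

var psi = new ProcessStartInfo("test.exe")
{
    RedirectStandardOutput = true,
    UseShellExecute = false  // required to redirect output
};
using (var process = Process.Start(psi))
{
    string results = process.StandardOutput.ReadToEnd();
    process.WaitForExit();
    Assert.That(results, Is.EqualTo("42\r\n"));  // placeholder expected output
}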
but I'm wondering if there is a better way of doing it?
That's exactly what we do on the C# compiler team to test our IL generator.
We also run the generated executable through ILDASM and verify that the IL is produced as expected, and run it through PEVERIFY to ensure that we're generating verifiable code. (Except of course in those cases where we are deliberately generating unverifiable code.)
I've created a post-compiler in C# and I used this approach to test the mutated CIL:
Save the assembly in a temp file, that will be deleted after I'm done with it.
Use PEVerify to check the assembly; if there's a problem I copy it to a known place for further error analysis.
Test the assembly contents. In my case I'm mostly loading the assembly dynamically in a separate AppDomain (so I can tear it down later) and exercising a class in there (so it's like a self-checking assembly: here's a sample implementation).
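A minimal sketch of that last step (tempExePath is whatever the first step produced):

var domain = AppDomain.CreateDomain("PostCompilerTest");
try
{
    // Run the generated assembly's entry point in the isolated domain;
    // a self-checking assembly signals failure through a nonzero exit code.
    int exitCode = domain.ExecuteAssembly(tempExePath);
    Assert.That(exitCode, Is.EqualTo(0));
}
finally
{
    // Unloading releases the file lock so the temp file can be deleted.
    AppDomain.Unload(domain);
}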
I've also given some ideas on how to scale integration tests in this answer.
You can think of testing as doing two things:
letting you know if the output has changed
letting you know if the output is incorrect
Determining if something has changed is often considerably faster than determining if something is incorrect, so it can be a good strategy to run change-detecting tests more frequently than incorrectness-detecting tests.
In your case you don't need to run the executables produced by your compiler every time if you can quickly determine that the executable has not changed since a known good (or assumed good) copy of the same executable was produced.
You typically need to do a small amount of manipulation on the output that you're testing to eliminate differences that are expected (for example setting embedded dates to a fixed value), but once you have that done, change-detecting tests are easy to write because the validation is basically a file comparison: Is the output the same as the last known good output? Yes: Pass, No: Fail.
So the point is that if you see performance problems with running the executables produced by your compiler and detecting changes in the output of those programs, you can choose to run tests that detect changes a stage earlier by comparing the executables themselves.
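As a sketch, assuming the only expected difference between two builds is the PE header timestamp (a real compiler may also need to normalize other fields, such as the MVID), such a change-detecting test can be little more than a file comparison:

[Test]
public void CompiledOutput_MatchesKnownGoodExecutable()
{
    // Hypothetical paths to the fresh output and the stored known good copy.
    byte[] actual = Normalize(File.ReadAllBytes(@"output\test.exe"));
    byte[] expected = Normalize(File.ReadAllBytes(@"golden\test.exe"));
    Assert.That(actual, Is.EqualTo(expected));
}

static byte[] Normalize(byte[] image)
{
    // Zero the COFF TimeDateStamp: 4 bytes at (PE signature offset + 8),
    // where the signature offset is stored at 0x3C in the DOS header.
    int peOffset = BitConverter.ToInt32(image, 0x3C);
    Array.Clear(image, peOffset + 8, 4);
    return image;
}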
Has anyone come across a tool to report on commented-out code in a .NET app? I'm talking about patterns like:
//var foo = "This is dead";
And
/*
var foo = "This is dead";
*/
This won't be found by tools like ReSharper or FxCop which look for unreferenced code. There are obvious implications around distinguishing commented code from commented text but it doesn't seem like too great a task.
Is there any existing tool out there which can pick this up? Even if it was just reporting of occurrences by file rather than full IDE integration.
Edit 1: I've also logged this as a StyleCop feature request. Seems like a good fit for the tool.
Edit 2: Yes, there's a good reason why I'd like to do this and it relates to code quality. See my comment below.
You can get an approximate answer by using a regexp that recognizes comments that end with ";" or "}".
For a more precise scheme, see this answer:
Tool to find commented out VHDL code
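A rough sketch of that heuristic in C# (the file path is a placeholder, and expect some false positives):

using System;
using System.IO;
using System.Text.RegularExpressions;

class CommentedCodeFinder
{
    static void Main()
    {
        // Flag single-line comments whose content ends in ';' or '}':
        // rare in English prose, common in commented-out code.
        var commentedCode = new Regex(@"^\s*//.*[;}]\s*$", RegexOptions.Multiline);
        foreach (Match match in commentedCode.Matches(File.ReadAllText("Program.cs")))
        {
            Console.WriteLine(match.Value.Trim());
        }
    }
}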
I've been down this road for the same reason. What I did was more or less what Ira Baxter suggested (though I focused on variable_type variable = value patterns and specifically looked for lines consisting of zero or more whitespace characters, followed by //, followed by code; to handle /* */ comments, I wrote a preprocessor that converted them into //'s). I tweaked the regexp to cut down on false positives and also did a manual inspection just to be safe; fortunately there were very few cases where the comment was doing pseudo-code-like things, as drachenstern suggests above; YMMV. I'd love to find a tool that could do this, but some false positives from valid yet possibly overly detailed pseudo code are going to be really hard to rule out, especially if the authors are using literate programming techniques to make the code as "readable" as pseudo code.
(I also did this for VB6 code; on the one hand, the lack of ;'s made it harder to write an "easy" regexp; on the other hand, the code used far fewer classes, so it was easier to match on variable types, which tend not to appear in pseudo code.)
Another option I looked at, but didn't have available, was to inspect the revision control logs for lines that were code in version X and then //the same line in version Y... this of course assumes that A) they are using revision control, B) you can view it, and C) they didn't write the code and comment it out in the same revision. (And it gets a little trickier if they use /* */ comments.)
There is another option for this: Sonar. It is actually a Java-centric app, but there is a plugin that can handle C# code. Currently it can report on:
StyleCop errors
FxCop errors
Gendarme errors
Duplicated code
Commented code
Unit test results
Code coverage (using NCover, OpenCover)
It does take a while to scan, mostly due to the duplication checks (AFAIK it uses text matching rather than C# syntax trees), and if you use the internal default Derby database it can fail on large code bases. However, it is very useful for the code-base metrics you gain, and it has snapshot features that let you see how things have changed and (hopefully) got better over time.
Since StyleCop is not actively maintained (see https://github.com/StyleCop/StyleCop#considerations), I decided to roll out my own, dead-csharp:
https://github.com/mristin/dead-csharp
Dead-csharp uses heuristics to find code patterns in the comments. The comments starting with /// are intentionally ignored (so that you can write code in structured comments).
StyleCop will catch the first pattern. It suggests you use //// as a comment for code so that it will ignore the rule.
Seeing as you mentioned NDepend, it can also tell you the percentage of comments: http://www.ndepend.com/Metrics.aspx#PercentageComment. This metric is defined for applications, assemblies, namespaces, types, and methods.
I need a tool I can run that will show me a list of unused methods, variables, properties, and classes. CSS classes would be an added bonus.
I heard FxCop can do this? Or NDepend, or something?
Look at ReSharper.
Code Analysis in VSTS will generate warnings about this during the build process. You can set it up to treat Warnings As Errors.
You can use ReSharper to find unused code and Dust-Me Selectors to find unused CSS.
The tool NDepend can help find unused code in a .NET code base. Disclaimer: I am one of the developers of this tool.
NDepend lets you write Code Rules over LINQ Queries (CQLinq). Around 200 default code rules are provided, 3 of them dedicated to unused/dead code detection:
Potentially dead Types (hence detect unused class, struct, interface, delegate...)
Potentially dead Methods (hence detect unused method, ctor, property getter/setter...)
Potentially dead Fields
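For a flavor of CQLinq, a stripped-down rule in the spirit of the "Potentially dead Methods" rule might look like this (the shipped rule is considerably more thorough, handling recursion, entry points, and reflection hints):

// <Name>Sketch: methods with no callers</Name>
warnif count > 0
from m in JustMyCode.Methods
where !m.MethodsCallingMe.Any()  // no caller found by static analysis
   && !m.IsPublic                // public methods may be called from other assemblies
select m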
NDepend integrates into Visual Studio, so these rules can be checked/browsed/edited right inside the IDE. The tool can also be integrated into your CI process, where it can build reports showing the violated rules and the culprit code elements.
If you click the 3 links above to the source code of these rules, you'll see that the ones concerning types and methods are a bit complex. This is because they detect not only unused types and methods, but also types and methods that are used only by unused types and methods (recursively).
This is static analysis, hence the prefix Potentially in the rule names. If a code element is used only through reflection, these rules might consider it unused even though it is not.
In addition to using these 3 rules, I'd advise measuring code coverage by tests and striving for full coverage. Often you'll see that code which cannot be covered by tests is actually unused/dead code that can be safely discarded. This is especially useful in complex algorithms where it is not clear whether a branch of code is reachable.
Gendarme also has various rules to find unused code.