I started playing with Roslyn. It’s relatively easy to parse code and do static analysis.
I wonder if it’s possible to use it for runtime analysis? I want to call a method with parameters and check which branches were executed. In other words, I need a runtime execution plan.
Is it something which could be done with Roslyn?
I don't know what the best solution is and I would defer to anything SLaks recommends in most cases.
However...
If you want to do this with Roslyn you certainly can. In fact, my company does something similar (we map unit tests to the methods they invoke).
Here's a high level overview of our approach.
Rewrite every single function in the solution to log, in some global static lookup/data structure, when it is hit. You can iterate over the files one at a time and use a CSharpSyntaxRewriter on each one; a minimal sketch follows these steps. (In your case you'll be rewriting on a branch or line-by-line basis.)
Run each unit test one at a time and see what gets run by analyzing your global lookup.
Aggregate the results across all your unit tests to understand your complete code coverage.
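Here's roughly what that rewriter could look like. This is only a sketch, assuming the Microsoft.CodeAnalysis.CSharp package; CoverageLog.Hit is a hypothetical static logging method that you would add to the instrumented solution yourself.

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class HitLoggingRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
    {
        // Skip abstract/extern/expression-bodied methods in this sketch.
        if (node.Body == null)
            return node;

        // Prepend a statement that records the method name at runtime.
        var logCall = SyntaxFactory.ParseStatement(
            $"CoverageLog.Hit(\"{node.Identifier.Text}\");");

        var newBody = node.Body.WithStatements(
            node.Body.Statements.Insert(0, logCall));

        return node.WithBody(newBody);
    }
}

You would then parse each file with CSharpSyntaxTree.ParseText, run the rewriter over the root node, and write the rewritten source back out before building and running the tests.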
I suspect the answer is no, but I'll ask anyway...
TL;DR
I know I can exclude a class or method from coverage analysis with the [ExcludeFromCodeCoverage] attribute, but is there a way to exclude only part of a method?
Concrete example
I have a method that lazily generates a sequence of int.MaxValue elements:
private static IEnumerable<TElement> GenerateIterator<TElement>(Func<int, TElement> generator)
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        yield return generator(i);
    }
}
In practice, it's never fully enumerated, so the end of the method is never reached. Because of that, DotCover considers that 20% of the method is not covered, and it highlights the closing brace as uncovered (which corresponds to return false in the generated MoveNext method).
I could write a test that consumes the whole sequence, but it takes a very long time to run, especially with coverage enabled.
So I'd like to find a way to tell DotCover that the very last instruction doesn't need to be covered.
Note: I know I don't really need to have all the code covered by unit tests; some pieces of code can't or don't need to be tested, and I usually exclude those with the [ExcludeFromCodeCoverage] attribute. But I like to have 100% reported coverage for the code that I do test, because it makes it easier to spot untested parts of the code. Having a method with 80% coverage when you know there is nothing more to test in it is quite annoying...
No, there is no way to exclude "part of a method" from coverage analysis with dotCover.
In the general sense you have a couple of options:
Extract the uncovered part into its own method, so you can properly exclude that method from analysis (a sketch follows this list)
Ignore the problem
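A generic illustration of the first option (it doesn't map cleanly onto the iterator in the question, since the uncovered code there is the loop's own exit path, but it shows the pattern; the method and message are made up):

using System.Diagnostics.CodeAnalysis;

public static class Guard
{
    // The untestable tail lives in its own method, which the coverage tool is
    // told to skip; the calling method can then reach 100% reported coverage.
    [ExcludeFromCodeCoverage]
    public static void FailUnreachable(string message)
    {
        throw new System.InvalidOperationException(message);
    }
}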
In this case there may be a third option. Since your test code exercises the majority of your method, perhaps you should just write a test method that makes sure the code runs to completion?
First and foremost, while "code coverage" can be an important metric, one must realize that 100% code coverage just might not be attainable. It is one of those metrics you should aspire to, knowing you may never quite reach it; get as close as you possibly can.
OTOH, don't go crazy trying to get 100% code coverage. More importantly: is your code readable? Is it testable (I presume so, since you're looking at code coverage)? Is it maintainable? Is it SOLID? Do you have passing unit, integration, and end-to-end tests? These things matter more than achieving 100% code coverage. What code coverage will tell you is how extensive your testing is (I'm not sure whether the built-in code coverage analysis engine counts only unit tests or all types of tests when calculating its statistics), which gives you an indication of whether or not you have enough tests. Also, while it tells you how extensive your tests are (i.e. how many lines of code they execute), it won't tell you whether your tests are any good (i.e. whether they really test what needs to be tested to ensure your application works correctly).
Anyway, this may be not an answer, but food for thought.
I'm writing a Tiger compiler in C# and I'm going to translate the Tiger code into IL.
While implementing the semantic check of every node in my AST, I created lots of unit tests. That is pretty simple, because my CheckSemantics method looks like this:
public override void CheckSemantics(Scope scope, IList<Error> errors) {
    ...
}
so, if I want to write some unit test for the semantic check of some node, all I have to do is build an AST, and call that method. Then I can do something like:
Assert.That(errors.Count == 0);
or
Assert.That(errors.Count == 1);
Assert.That(errors[0] is UnexpectedTypeError);
Assert.That(scope.ExistsType("some_declared_type"));
but I'm starting the code generation in this moment, and I don't know what could be a good practice when writing unit tests for that phase.
I'm using the ILGenerator class. I've thought about the following:
Generate the code of the sample program I want to test
Save generated code as test.exe
Execute test.exe and store the output in results
Assert against results
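A minimal sketch of steps 3 and 4, assuming NUnit and assuming the compiled program writes its results to standard output (the path and expected output are illustrative):

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class CodeGenerationTests
{
    [Test]
    public void SampleProgram_PrintsExpectedOutput()
    {
        var startInfo = new ProcessStartInfo("test.exe")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            string results = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            Assert.That(process.ExitCode == 0);
            Assert.That(results.Trim() == "42");
        }
    }
}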
but I'm wondering if there is a better way of doing it?
That's exactly what we do on the C# compiler team to test our IL generator.
We also run the generated executable through ILDASM and verify that the IL is produced as expected, and run it through PEVERIFY to ensure that we're generating verifiable code. (Except of course in those cases where we are deliberately generating unverifiable code.)
I've created a post-compiler in C# and I used this approach to test the mutated CIL:
Save the assembly to a temp file that will be deleted after I'm done with it.
Use PEVerify to check the assembly; if there's a problem I copy it to a known place for further error analysis.
Test the assembly contents. In my case I'm mostly loading the assembly dynamically in a separate AppDomain (so I can tear it down later) and exercising a class in there (so it's like a self-checking assembly: here's a sample implementation).
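As a rough illustration of that last step (a sketch that assumes the .NET Framework, where AppDomains can be unloaded; the proxy type and the self-check it performs are made up for the example):

using System;
using System.IO;
using System.Reflection;

// Must derive from MarshalByRefObject so calls can cross the AppDomain boundary.
public class AssemblyChecker : MarshalByRefObject
{
    public bool RunSelfChecks(string assemblyPath)
    {
        // Load the mutated assembly inside this (separate) domain and exercise it.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        // ...instantiate the self-checking class and return whether its checks pass...
        return assembly != null;
    }
}

public static class AssemblyTestHarness
{
    public static bool CheckInSeparateDomain(string tempAssemblyPath)
    {
        var setup = new AppDomainSetup { ApplicationBase = Path.GetDirectoryName(tempAssemblyPath) };
        AppDomain domain = AppDomain.CreateDomain("AssemblyUnderTest", null, setup);
        try
        {
            var checker = (AssemblyChecker)domain.CreateInstanceFromAndUnwrap(
                typeof(AssemblyChecker).Assembly.Location,
                typeof(AssemblyChecker).FullName);
            return checker.RunSelfChecks(tempAssemblyPath);
        }
        finally
        {
            // Tear the domain down so the temp file can be deleted afterwards.
            AppDomain.Unload(domain);
        }
    }
}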
I've also given some ideas on how to scale integration tests in this answer.
You can think of testing as doing two things:
letting you know if the output has changed
letting you know if the output is incorrect
Determining if something has changed is often considerably faster than determining if something is incorrect, so it can be a good strategy to run change-detecting tests more frequently than incorrectness-detecting tests.
In your case you don't need to run the executables produced by your compiler every time if you can quickly determine that the executable has not changed since a known good (or assumed good) copy of the same executable was produced.
You typically need to do a small amount of manipulation on the output that you're testing to eliminate differences that are expected (for example setting embedded dates to a fixed value), but once you have that done, change-detecting tests are easy to write because the validation is basically a file comparison: Is the output the same as the last known good output? Yes: Pass, No: Fail.
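A minimal sketch of such a change-detecting test, assuming NUnit and assuming the only expected difference between two builds is the PE header's TimeDateStamp (the paths are illustrative):

using System.IO;
using NUnit.Framework;

public static class PeNormalizer
{
    // Zero out the COFF TimeDateStamp so two otherwise identical builds compare equal.
    public static byte[] Normalize(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);

        // The offset of the "PE\0\0" signature is stored at 0x3C in the DOS header;
        // the 4-byte TimeDateStamp starts 8 bytes after that signature.
        int peOffset = System.BitConverter.ToInt32(bytes, 0x3C);
        for (int i = 0; i < 4; i++)
            bytes[peOffset + 8 + i] = 0;

        return bytes;
    }
}

[TestFixture]
public class CompilerOutputChangeTests
{
    [Test]
    public void SampleProgram_OutputHasNotChanged()
    {
        byte[] actual = PeNormalizer.Normalize("output/test.exe");       // freshly compiled
        byte[] expected = PeNormalizer.Normalize("baselines/test.exe");  // last known good copy
        CollectionAssert.AreEqual(expected, actual);
    }
}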
So the point is that if you see performance problems with running the executables produced by your compiler and detecting changes in the output of those programs, you can choose to run tests that detect changes a stage earlier by comparing the executables themselves.
I am in a situation where the same repetitive refactoring task has to be done for a huge number of methods in my code.
For example, imagine an interface with 100 methods, each of which has one or more parameters as well as a return value. For each of these methods I need to jump to the implementation, change the return type, and add a line of code which converts the old return value to its new type for callers of the interface method.
Is there any way to quickly automate such refactorings?
I even thought about writing a custom script to do it, but writing an intelligent script would probably take longer than doing it manually.
A tool supporting such a task could save a lot of time.
It's a good question, but in the time it took since you posted it (not to mention the time you spent searching for an answer before posting), you could have completed the changes manually.
I know, I know, it's utterly unsatisfying, but if you think of it as a form of meditation, and only do this once a year, it's not that bad.
If your problem is one interface with 100 methods, then I agree with another poster: just doing it may seem painful but it is limited in effort and you can be done really soon.
If you have this problem repeatedly, or you have very large code base (many, many interfaces for which you want to perform this task), then what you need is a tool for implementing automated change: a program transformation engine. Such a tool provides the ability to parse source code, build a program representation (an abstract syntax tree), and enables one to apply "scripted" operations on the tree either through procedural interfaces and/or through source-to-source transformation patterns.
Our DMS Software Reengineering Toolkit is such a program transformation system. It has a C# Front End to enable its application to C# code. Configuring such a tool for a complex task is not a matter of hours, so it is not useful for "small scale" changes. For large scale changes, such tools can make it possible to do things that are simply not practical by hand.
Resharper and CodeRush both have features which can help with this kind of task.
Resharper's change signature functionality is probably the closest match.
Can't you generate a new interface from the class you have and then remove the members you don't need, if it's that simple?
Change the return type: by changing... the return type, provided it is not a standard type (...); the converter can be implemented by a TypeConverter.
When I have such a boring task to do, I often switch away from VS2010 and use a tool that allows regex search and replace. In your example, maybe change 'return xxx;' to 'var yyy = convert(xxx); return yyy;'.
(For example, the editor Notepad++ (free) already offers quite some possibilities to change everything in a project; use with caution.)
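Purely as an illustration of that kind of text-based rewrite (it is as fragile as any regex refactoring, so treat it as a sketch; "Convert" stands in for whatever conversion helper you introduce):

using System.Text.RegularExpressions;

public static class ReturnRewriter
{
    // Turns "return xxx;" into "var converted = Convert(xxx); return converted;".
    public static string Rewrite(string source)
    {
        return Regex.Replace(
            source,
            @"return\s+(\w+);",
            "var converted = Convert($1); return converted;");
    }
}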
I have one class which talks to the database.
I have my integration tests, which talk to the DB and assert relevant changes. But I want those tests to be ignored when I commit my code, because I do not want them to be run automatically later on.
(For now I only use them during development.)
When I put the [Ignore] attribute on them they are not called, but code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically on the build machine, in such a way that the fact that they are ignored does not influence the code-coverage percentage?
Whatever code coverage tool you use most likely has some kind of CoverageIgnoreAttribute or something along those lines (at least the ones I've used do), so you just place that on the methods that get called from those tests and you should be fine.
What you request does not seem to make sense. Code coverage is measured by executing your tests and logging which statements/conditions etc. are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify that only some groups run automatically and keep the others for use outside of that. You didn't specify your unit testing framework, but it might have something similar.
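For example, if you happen to be using NUnit (an assumption; the category name is made up), test categories give you the same kind of grouping:

using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryTests
{
    // Tagged so the build machine can filter it out instead of marking it [Ignore].
    [Test, Category("Integration")]
    public void SavingACustomer_WritesARowToTheDatabase()
    {
        // ...talks to the real database; run locally during development...
    }
}

On the build server you then exclude the category when running the tests (NUnit 3's console runner, for instance, accepts --where "cat != Integration"), so the integration tests are simply never selected rather than reported as ignored.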
I do not know if this is applicable to your situation, but spontaneously I am thinking of a setup where you have two solution files (.sln), one with unit/integration tests and one without. The two solutions share the same code and project files, with the exception that your development/testing solution includes your unit tests (which are built and run at compile time) and the other solution doesn't. Both solutions should be under source control, but only the one without unit tests is built by the build server.
This kind of setup should not require you to change existing code (much), which I would prefer over rewriting code to fit your test setup.
So, every time I have written a lambda expression or anonymous method inside a method that I did not get quite right, I am forced to recompile and restart the entire application or unit test framework in order to fix it. This is seriously annoying, and I end up wasting more time than I saved by using these constructs in the first place. It is so bad that I try to stay away from them if I can, even though Linq and lambdas are among my favourite C# features.
I suppose there is a good technical reason for why it is this way, and perhaps someone knows? Furthermore, does anyone know if it will be fixed in VS2010?
Thanks.
Yes there is a very good reason for why you cannot do this. The simple reason is cost. The cost of enabling this feature in C# (or VB) is extremely high.
Editing a lambda function is a specific case of a class of ENC issues that are very difficult to solve with the current ENC (Edit'n'Continue) architecture. Namely, it's very difficult to ENC any method where the edit does one of the following:
Generates Metadata in the form of a class
Edits or generates a generic method
The first issue is more of a logic constraint, but it also bumps into a couple of limitations in the ENC architecture. Generating the first class isn't terribly difficult; what's bothersome is generating the class after the second edit. The ENC engine must start tracking the symbol table not only for the live code, but for the generated classes as well. Normally this is not so bad, but it becomes increasingly difficult when the shape of a generated class is based on the context in which it is used (as is the case with lambdas, because of closures). More importantly, how do you resolve the differences against instances of the classes that are already alive in the process?
The second issue is a strict limitation in the CLR ENC architecture. There is nothing that C# (or VB) can do to work around this.
Lambdas unfortunately hit both of these issues dead on. The short version is that ENC'ing a lambda involves lots of mutations on existing classes (which may or may not have been generated from other ENC's). The big problem comes in resolving the differences between the new code and the existing closure instances alive in the current process space. Also, lambdas tend to use generics a lot more than other code and hit issue #2.
The details are pretty hairy and a bit too involved for a normal SO answer. I have considered writing a lengthy blog post on the subject. If I get around to it I'll link it back into this particular answer.
According to a list of Supported Code Changes, you cannot add fields to existing types. Anonymous methods are compiled into oddly-named classes (something like <>c__DisplayClass1), which are precisely that: types. Even though your modifications to the anonymous method may not include changing the set of enclosed variables (adding one would mean adding a field to an existing class), I guess that's the reason it's impossible to modify anonymous methods.
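As a hand-written illustration (not the compiler's actual output; the real generated type has a name like <>c__DisplayClass1):

using System;

class ClosureIllustration
{
    // Original source: the lambda captures the local 'threshold'.
    static Func<int, bool> MakePredicate(int threshold)
    {
        return x => x > threshold;
    }

    // Roughly what the compiler generates instead: the captured local becomes a
    // field of a compiler-generated class. Changing the set of captured variables
    // would mean adding fields to that existing type, which ENC does not support.
    sealed class DisplayClass
    {
        public int threshold;
        public bool Predicate(int x) { return x > this.threshold; }
    }

    static Func<int, bool> MakePredicateLowered(int threshold)
    {
        var closure = new DisplayClass { threshold = threshold };
        return new Func<int, bool>(closure.Predicate);
    }
}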
It is a bit of a shame that this feature is partially supported in VB but not in C#:
http://msdn.microsoft.com/en-us/library/bb385795.aspx
Implementing the same behaviour in C# would reduce the pain level by 80% for functions that contain lambda expressions, where we do not need to modify the lambda expressions nor any expression that depends on them, and probably not at a "monster cost".
Restarting a unit test should take a matter of seconds, if that. I've never liked the "edit and continue" model to be honest - you should always rerun from scratch IMO, just in case the change midway through execution would have affected the code which ran earlier. Given that, you're better off using unit tests which can be run with a very quick turnaround. If your individual unit tests take an unbearable time to start, that's something you should look at addressing.
EDIT: As for why it doesn't work - you may find that it works for some lambdas but not others. Lambda expressions which don't capture any variables (including this) are cached in a private static variable, so that only one instance of the delegate is ever created. Changing the code means reinitialising that variable which could have interesting side-effects I suspect.
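A hand-written equivalent of that caching (not the compiler's actual output; the field and method names are made up):

using System;

class CachedDelegateIllustration
{
    // Hidden cache field: only one delegate instance is ever created for the
    // non-capturing lambda "x => x * x".
    static Func<int, int> cachedSquare;

    static Func<int, int> GetSquare()
    {
        return cachedSquare ?? (cachedSquare = new Func<int, int>(Square));
    }

    static int Square(int x) { return x * x; }
}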
I just want to point out that Visual Studio's notion of "editing" in this context is (or at least can be) a bit stupid. When I checked out an older commit as part of an interactive rebase in git and then attempted to run a unit test, that resulted in 9 errors (ENC0014 and some others).
So with no files modified, every time I attempted to debug the unit test I got those errors. Restarting Visual Studio made the errors go away, so I guess the underlying problem is missing cache invalidation: Visual Studio does not detect/react to files being changed outside of its own editor windows.