We have a unit test project which still uses the "old-style" private accessors in many tests.
Since they're a maintenance nightmare, we're trying to get rid of them and move to the new Microsoft Fakes framework, using Shims where needed.
Recently we wrote some new unit tests which use Shims, and noticed that for some reason this caused a few OTHER, older tests, which were not modified, to run considerably slower: run times of roughly 10 seconds instead of about 900 milliseconds for the affected tests.
Running the affected tests on their own didn't seem to have this effect, though - it only occurs when running them after tests with Shims.
Initially we thought this might be simply due to initialization problems, causing tests to influence one another.
However, after some experimentation, we found that the slowdown occurs even without adding any new test code. Simply adding the following snippet before one of the slowed-down tests caused the same slowdown:
using (ShimsContext.Create()) {}
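In context, the repro looked roughly like this (class and method names are placeholders, not our real test names):

using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SlowedDownTests
{
    [TestMethod]
    public void PreviouslyFastTest()
    {
        // Create and immediately dispose an empty ShimsContext;
        // no shims are ever registered, yet the code under test
        // below still runs roughly ten times slower.
        using (ShimsContext.Create()) { }

        // ... original, unmodified test body ...
    }
}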
Debugging seemed to show that the code being tested was indeed running much slower (not the unit test code itself), but we couldn't pinpoint which part of it, nor why these tests are affected while others are not.
At this point we tried profiling these tests (using the new "profile test" option in Visual Studio). However, it turns out that profiling tests with Shims is not possible for some reason; the following exception was thrown:
Microsoft.QualityTools.Testing.Fakes.UnitTestIsolation.UnitTestIsolationException: UnitTestIsolation instrumentation failed to initialize. Please restart Visual Studio and rerun this test
As a last resort we also tried moving all tests using Shims to a separate test project in the same solution. This did seem to help, and all test run times returned to normal. We used test playlists to run each project's tests before the other's, and in both cases run times were OK. It's not really a solution though, and feels more like circumventing the actual issue.
So, we're not sure how to proceed. Any thoughts and ideas would be helpful.
Thanks.
The Microsoft documentation, Better Unit Testing with Microsoft Fakes (RTM).pdf, states that you will see a performance decrease when using Shims.
This article also goes over the performance impact of Shims:
http://technet.microsoft.com/en-us/windows/jj863250.aspx
"Other" tests should be explicitly executed in shimless context (ShimsContext.ExecuteWithoutShims), because it looks like even disposed ShimsContext in other tests may have detours to logic that doesn't use shims.
Related
I have configured some tests to run in parallel using Selenium and NUnit, but sometimes one of the tests dies early and misbehaves. It's not always the same test, and they are basic examples, so I'm not sure what's happening.
I also followed the example in the following link (https://www.youtube.com/watch?v=18zrtO1l7EU) to the letter, but it also does not work. Sometimes one of the tests halts, doesn't continue, and then fails. I've tried with both the Chrome and IE drivers, but the same thing happens. Could this be an NUnit version issue?
I think this was because I was initially declaring the base WebDriver as static. Since removing the static modifier, it works fine.
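For illustration, the change was roughly the following (assuming NUnit 3; class, field, and URL are made up, the point is the instance field instead of a static one):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable] // may run in parallel with other fixtures
public class ExamplePageTests
{
    // Problematic: a static driver is shared by every fixture running
    // in parallel, so tests trample each other's browser session.
    // private static IWebDriver Driver;

    // Fix: one driver instance per fixture, created fresh for each test.
    private IWebDriver driver;

    [SetUp]
    public void SetUp()
    {
        driver = new ChromeDriver();
    }

    [TearDown]
    public void TearDown()
    {
        driver.Quit();
    }

    [Test]
    public void PageHasTitle()
    {
        driver.Navigate().GoToUrl("https://example.com"); // placeholder URL
        Assert.That(driver.Title, Is.Not.Empty);
    }
}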
We updated our solution from SpecFlow 1.9 to 2.0 and NUnit 2.6.4 to 3.2.1. After adapting some attributes and project settings, all tests run fine in NUnit. However, when the SpecFlow tests are executed with NCrunch, we get a SpecFlowException:
TechTalk.SpecFlow.SpecFlowException : The ScenarioContext.Current static accessor cannot
be used in multi-threaded execution. Try injecting the scenario context to the binding
class. See http://go.specflow.org/doc-multithreaded for details.
at TechTalk.SpecFlow.ScenarioContext.get_Current()
We intentionally designed our SpecFlow tests for a single-threaded environment (to keep the effort low) and we just want to continue executing these tests in one thread. So instead of injecting the scenario context as proposed (we use Ninject instead of the SpecFlow mini-IoC), we're looking for some setting to convince SpecFlow that it is running in a single-threaded environment.
My NCrunch version is 2.23.0.2 (the settings screenshot is not reproduced here).
I added the following attribute to the Assembly.cs files of all SpecFlow tests:
[assembly: Parallelizable(ParallelScope.None)]
Without success; the exception keeps showing up.
Does anybody have a clue how to force SpecFlow 2.0 in NCrunch 2.23.0.2 with NUnit 3.2.1 so that it thinks it's executing in a single-threaded environment?
Thank you for your effort!
Update (2016-05-31):
I installed the new version 2.1 of SpecFlow (available since 2016-05-25) but it didn't solve the problem.
I created an example project with a minimal amount of code to reproduce the problem. The calculator implementation is stateful and cannot be tested in a multithreaded environment.
SpecFlow throws the exception due to the (dummy) static reference 'ScenarioContext.Current' in CustomContext. Yes, I know you should inject it if you intend to run in a multithreaded test environment. The problem is that SpecFlow THINKS it is in a multithreaded environment, but it isn't, and it shouldn't.
On investigation, this appears to be a 3-way compatibility problem between NCrunch, SpecFlow, and NUnit3.
As part of its behaviour, NCrunch will re-use test processes by calling into them multiple times (i.e. once for each batch of tests in the Processing Queue). Because NUnit3 kicks off a new thread for each test session, it ends up using a different thread for each call into SpecFlow.
SpecFlow identifies multi-threaded execution by tracking thread IDs, and since each session has a new thread, it incorrectly thinks the code is being run in parallel when actually it's just different threads being used synchronously.
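Conceptually, the detection amounts to something like the sketch below; this is only an illustration of the idea, not SpecFlow's actual source:

using System.Threading;

static class ParallelismDetectionSketch
{
    private static int? firstThreadId; // locking omitted for brevity

    // The first call records its thread ID; any later call from a
    // different thread is flagged as "parallel", even when the calls
    // are strictly sequential. That is exactly what happens when
    // NCrunch re-uses a test process across NUnit3 sessions.
    public static bool LooksLikeParallelExecution()
    {
        int currentId = Thread.CurrentThread.ManagedThreadId;
        if (firstThreadId == null)
        {
            firstThreadId = currentId;
            return false;
        }
        return firstThreadId.Value != currentId;
    }
}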
Setting the 'Test process memory limit' global NCrunch configuration setting to '1' will allow you to work around the problem, as this will cause NCrunch to throw away a test process after each batch, rather than re-using it. Unfortunately, this will have a significant impact on performance.
I've reported this problem to SpecFlow. Because of its nature, the most sensible thing would be for it to be fixed in SpecFlow itself - https://github.com/techtalk/SpecFlow/issues/638
You need to regenerate the code-behind files of the feature files after the upgrade.
See the upgrade steps here: http://gasparnagy.com/2016/01/specflow-tips-how-to-upgrade-your-project-to-specflow-v2/
I've run into this problem where NUnit tests aren't executed by Resharper's test runner. After a git bisect session I've isolated the commit that causes this, but I can't pin down why; most of the solutions I find refer to corrupt app.config files, but my commit only changed C# code. It only fails for one of my test projects - the unit tests - while other tests (integration and acceptance tests, also driven by NUnit) run fine.
So, I've tried to troubleshoot other ways, and following this guy's troubleshooting, I installed the NUnit Test Adapter for Visual Studio to try to run the tests with VS instead of R#.
Now, re-building the entire solution and checking the Test Output window, I see the following:
NUnit 1.2.0.0 discovering tests is started
Exception System.ArgumentNullException, Exception thrown discovering tests in D:\Code\ThisProject\src\MainWebApplication\bin\MainWebApplication.dll
Exception System.ArgumentNullException, Exception thrown discovering tests in D:\Code\ThisProject\src\UnitTests\bin\Debug\UnitTests.dll
NUnit 1.2.0.0 discovering test is finished
Hm... I did introduce a new method in this commit, that throws argument null exceptions. I wonder what happens if I comment out those checks?
NUnit 1.2.0.0 discovering tests is started
Exception System.ArgumentNullException, Exception thrown discovering tests in D:\Code\ThisProject\src\MainWebApplication\bin\MainWebApplication.dll
NUnit 1.2.0.0 discovering test is finished
A test with the same name 'UnitTests.SomeNamespace.SomeTestClass.SomeTestMethod(someparameter)' already exists. This test is not added to the test window.
Wait, what?
Removing an argument null check in my code, which is in a library assembly (i.e. in neither of the assemblies that initially failed, although both of them call into this method), made test discovery perform (slightly) better. What is going on?
But it gets weirder still:
After having seen the above oddness, I resumed R# (which I had suspended as part of troubleshooting) and tried to run the tests again. They all ran, and passed. Yes, I double-checked and uncommented the null checks (changing nothing else), and I'm back to square one - "Inconclusive: test wasn't run", for every single test in the assembly.
What is going on here?
I have no idea of how it could be relevant, but there are a couple of "exceptional" things about the particular method with the null checks, so I figure I can't leave them out of a question like this (where, to my knowledge, everything including Mickey Mouse could be relevant...):
The method is often called with one of the parameters filled by a service location call (bad practice, I know, but it's a huge project with legacy code, and the calls are littered all over; there's no way to change that). If a call is made like that and the IoC container hasn't been set up, the argument will indeed be null.
The method calls out to HttpContext.GetGlobalResourceObject, passing the arguments on (which is why I want a null check in the first place...), and we've configured a custom resource provider through web.config. I haven't changed any of this configuration, just moved the call to a different place.
With the help of R# support, I managed to squash this bug.
The root cause was that my static method, which threw the ArgumentNullException, was in one case called in the constructor of an attribute. When I refactored that attribute so that no exceptions could be caused during construction, the problem went away.
Without having confirmation, I guess that the problem with throwing exceptions from attribute constructors is that NUnit instantiates the attributes during test discovery, and doesn't handle exceptions correctly. Thus, NUnit completely borks as soon as an attribute constructor throws anything, and this doesn't give any of the test runners a chance to let the user know what the problem is.
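To make that concrete, here is a contrived sketch of the pattern; all names are invented, and the service locator stub merely stands in for a container that is never initialized at discovery time:

using System;

public interface IResourceProvider { }

// Stand-in for the real service locator; returns null until the
// application wires up its IoC container, which never happens
// while a test runner is only discovering tests.
public static class ServiceLocator
{
    public static T Resolve<T>() where T : class { return null; }
}

public static class Guard
{
    public static void NotNull(object value, string name)
    {
        if (value == null) throw new ArgumentNullException(name);
    }
}

// Reflection instantiates attributes while tests are being discovered,
// so anything thrown from this constructor aborts discovery for the
// whole assembly without a useful error message.
[AttributeUsage(AttributeTargets.Method)]
public sealed class RequiresResourceAttribute : Attribute
{
    public RequiresResourceAttribute()
    {
        IResourceProvider provider = ServiceLocator.Resolve<IResourceProvider>();
        Guard.NotNull(provider, "provider"); // throws during discovery
    }
}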
To find the offending code, it proved very useful to run VS with the following command:
devenv.exe /ReSharper.LogLevel Verbose /ReSharper.LogFile c:\path\to\logfile.txt
I then compiled my project and started a unit test run (in which all tests reported "Inconclusive; test wasn't run") and then exited VS again. Toward the end of the log file, there was a stack trace that told me where the exception was coming from.
Is there a way to end an MSTest test as passing prematurely, and not run the rest of the test code?
Assert.Inconclusive comes the closest, but marks the test as inconclusive, which causes other issues for me (emails are generated based on the status of the test).
I am doing this because I am running the test's code before the rest of the tests in the class, and don't want it to run again when it executes normally. Basically this:
https://groups.google.com/forum/?fromgroups=#!topic/specflow/6bzgl9LYOFI
EDIT:
I found a way around this issue by ignoring the feature_setup portion, so it only runs when I trigger it through reflection and is ignored as part of the normal MSTest execution.
Sadly, this answer is only applicable in the very specific case I have here, using a combination of SpecFlow and MSTest. I think using return is the correct solution for a normal MSTest test.
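For a plain MSTest test, the early exit can be as simple as the sketch below (the guard flag is a placeholder for whatever condition applies):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExampleTests
{
    // Placeholder guard; set when the test body has already been run
    // ahead of time (e.g. triggered through reflection).
    private static bool alreadyRan;

    [TestMethod]
    public void ExpensiveTest()
    {
        if (alreadyRan)
        {
            // Returning before any assertion fails leaves the test
            // marked Passed, unlike Assert.Inconclusive, which marks
            // it Inconclusive.
            return;
        }

        // ... actual test code ...
    }
}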
I hope somebody can help me.
I have a lot of unit tests for my C# application in VS2010, and therefore I want to execute them in parallel so I can benefit from my four-core machine.
This is "easily" done by adding parallelTestCount="0" to the Execution element in Local.testsettings.
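For reference, the relevant part of the .testsettings file looks roughly like this (a sketch; everything apart from parallelTestCount is omitted or illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- parallelTestCount="0" lets MSTest use one worker per CPU core -->
  <Execution parallelTestCount="0">
    <!-- ... other execution settings unchanged ... -->
  </Execution>
</TestSettings>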
But some of my unit tests (around 50) are not thread-safe, and instead of reworking them I just want them to be run in non-parallel mode.
Is that possible and if so how to do it?
You can't change the parallelism on a single test or set of tests. You could try to create a second test settings file and a second assembly containing your "unsafe" tests; you can define the folder that test assemblies are loaded from under the "Unit test" tab of the test settings dialog.
That said, your tests should be thread-safe. A unit test should be able to run in any order, in any environment, and always pass -- unless, of course, something changed in the code they're testing.