Microsoft Visual Studio Tests - Successful Individually, Failing as Group - c#

I've been working on some automation code that includes about 65 small tests. All 65 tests run successfully when executed individually. However, once we run the tests all together, they seem to fail: once one test fails, all the tests after it seem to fail as well. Is there a way to prevent this from happening? Is there something happening in the transition from one failing test to the next that automatically makes the next one fail?
Thanks
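The pattern described here (one failure triggering a chain of failures) is very often caused by state that one test leaves behind for the next, such as a shared static browser or session. A minimal, purely hypothetical MSTest sketch of that mechanism (FakeSession and the test bodies are invented for illustration and are not taken from the original code):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CascadingFailureExample
    {
        // Shared mutable state: it survives from one test to the next.
        private static FakeSession _session = new FakeSession();

        [TestMethod]
        public void Test_A()
        {
            _session.Login("user1");
            Assert.IsTrue(_session.IsLoggedIn);
            // If the assertion above fails, Logout() below never runs...
            _session.Logout();
        }

        [TestMethod]
        public void Test_B()
        {
            // ...so this test starts with a session that is still logged in
            // (or otherwise broken) and fails for reasons unrelated to its own logic.
            Assert.IsFalse(_session.IsLoggedIn);
            _session.Login("user2");
            _session.Logout();
        }
    }

    // Minimal stand-in for whatever shared resource the real tests use.
    public class FakeSession
    {
        public bool IsLoggedIn { get; private set; }
        public void Login(string user) { IsLoggedIn = true; }
        public void Logout() { IsLoggedIn = false; }
    }

If the real tests share anything like this, moving setup and cleanup into per-test [TestInitialize]/[TestCleanup] methods (or removing the static state entirely) usually stops a single failure from cascading.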

Related

Parallel test failure using Selenium and Nunit

I have configured some tests to run in parallel using Selenium and NUnit, but sometimes one of the tests dies early and misbehaves. It's not always the same test, and they are basic examples, so I'm not sure what's happening.
I also followed the example in the following link (https://www.youtube.com/watch?v=18zrtO1l7EU) to the letter, but it also does not work. Sometimes one of the tests halts, doesn't continue, and then fails. I've tried with both the Chrome and IE drivers, but the same thing happens. Could this be an NUnit version issue?
I think this was because I was initially declaring the base WebDriver as static. Since removing the static modifier, it works fine.
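A minimal sketch of that fix, assuming NUnit 3 and Selenium's ChromeDriver (the URL and the test body are placeholders): each fixture keeps its own non-static driver, created in [SetUp] and disposed in [TearDown], so parallel tests never share a single WebDriver instance.

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    [Parallelizable(ParallelScope.Fixtures)]
    public class ParallelSafeTests
    {
        // Instance field, NOT static: each fixture gets its own driver.
        private IWebDriver _driver;

        [SetUp]
        public void StartBrowser()
        {
            _driver = new ChromeDriver();
        }

        [Test]
        public void PageTitle_IsNotEmpty()
        {
            _driver.Navigate().GoToUrl("https://example.com"); // placeholder URL
            Assert.That(_driver.Title, Is.Not.Empty);
        }

        [TearDown]
        public void StopBrowser()
        {
            // Quit closes the browser and ends the WebDriver session.
            _driver?.Quit();
            _driver?.Dispose();
        }
    }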

Test cases fail when running all of them, but pass when run one by one

I built a test suite containing 15 test cases. When I ran each test case one by one, they all passed. But when I ran all of them at once, only the first 7 test cases passed and the rest all failed. The errors all showed that Visual Studio was unable to find the web element (I used XPath).
I added Thread.Sleep() to the code so that each test case would have time to detect the elements, but this didn't work either.
I'm sure that all the XPaths are correct. Could anyone please give me some ideas?
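One commonly suggested alternative to fixed Thread.Sleep() calls is an explicit wait that polls for the element until a timeout. A small sketch, assuming Selenium WebDriver with the Selenium.Support package (the helper name, the XPath argument, and the timeout are placeholders, not taken from the original suite):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public static class WaitHelpers
    {
        // Polls for the element until it appears (or the timeout elapses),
        // instead of sleeping for a fixed time and hoping it was long enough.
        public static IWebElement WaitForElement(IWebDriver driver, string xpath,
                                                 int timeoutSeconds = 10)
        {
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
            return wait.Until(drv => drv.FindElement(By.XPath(xpath)));
        }
    }

That said, if the tests only fail when run together, the root cause may also be shared state (for example a single browser or login session reused across tests) rather than timing alone.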

Unit tests become slow when using Microsoft Fakes in OTHER tests

We have a unit test project which still uses the "old-style" private accessors in many tests.
Since they're a maintenance nightmare, we're trying to get rid of them and move to the new Microsoft Fakes framework, using Shims where needed.
Recently we wrote some new unit tests which use Shims, and noticed that for some reason this caused a few OTHER, older tests, which were not modified, to run considerably slower. By slower I mean run times of about ~10 seconds instead of ~900 milliseconds for the affected tests.
Running the affected tests on their own didn't seem to have this effect, though - it only occurs when running them after the tests that use Shims.
Initially we thought this might be simply due to initialization problems, causing tests to influence one another.
However, after some experimentation, we found that the slowdown occurs even without actually adding any new test code. Simply adding the following snippet before one of the slowed-down tests caused the same effect of the test running slower:
using (ShimsContext.Create()) {}
Debugging seemed to show that the code being tested was indeed running much slower (not the unit test code itself), but we couldn't identify which part of it. We're also not able to identify why these tests are affected while others are not.
At this point we tried profiling these tests (using the new "Profile Test" option in Visual Studio). However, it turns out that profiling tests with Shims is not possible for some reason. The following exception was thrown:
Microsoft.QualityTools.Testing.Fakes.UnitTestIsolation.UnitTestIsolationException: UnitTestIsolation instrumentation failed to initialize. Please restart Visual Studio and rerun this test
As a last resort we also tried moving all tests using Shims to a separate test project in the same solution. This did seem to help, and all test run times returned to normal. We used test playlists to run each project's tests before the other's, and in both cases run times were OK. It's not really a solution though, and feels more like circumventing the actual issue.
So, we're not sure how to proceed. Any thoughts and ideas would be helpful.
Thanks.
The Microsoft documentation, Better Unit Testing with Microsoft Fakes (RTM).pdf, states that you will see a performance decrease when using Shims.
This article also goes over the performance impact of Shims:
http://technet.microsoft.com/en-us/windows/jj863250.aspx
"Other" tests should be explicitly executed in a shimless context (ShimsContext.ExecuteWithoutShims), because it looks like even a disposed ShimsContext from other tests may leave detours into logic that doesn't use shims.

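A minimal sketch of that suggestion, assuming MSTest and the Microsoft Fakes framework (SystemUnderTest and DoExpensiveWork are hypothetical placeholders for the production code the affected tests exercise):

    using Microsoft.QualityTools.Testing.Fakes;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical stand-in for the production code the slowed-down tests call.
    public static class SystemUnderTest
    {
        public static void DoExpensiveWork() { /* ... */ }
    }

    [TestClass]
    public class NonShimTests
    {
        [TestMethod]
        public void AffectedTest_RunsWithShimDetoursBypassed()
        {
            // Run the code under test with shim instrumentation bypassed, so any
            // detours left behind by other tests' ShimsContexts cannot slow it down.
            ShimsContext.ExecuteWithoutShims(() => SystemUnderTest.DoExpensiveWork());
        }
    }
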
vs 2008 unit testing prompt meaning

I'm unit testing in VS 2008, and every time I run now it says:
Executing the current test run will produce a new test run result, which will exceed
the limit of 25 that is currently specified. If you continue, older test run results and
their associated deployments will be deleted from the hard drive...
What does this mean, and how do I clear the older test run results? Why is this message important?
This message basically means that the Unit Test project has saved/recorded 25 (your threshold) results of previously run unit tests.
By proceeding, it'll remove one of those 25 to make room for the results of your next run.
You can modify this limit in the Tools -> Options dialog.

Spurious errors from WatiN with MSTest

I'm running MSTest with the browser-automation framework WatiN. When I run the tests individually they always pass; however, when I run the entire set of automation tests, 7-8 tend to fail, with different errors each time. All the tests are isolated and log in to the site from the beginning every time, so I don't think it's related to the way the tests are written.
Has anyone else encountered this?
Is it likely to be MSTest related?
I would guess that you are not closing the browser at the end of each and every test. This can cause the session from the previous test to still be around when you run the next one. For example, you might still be logged in when your test is expecting a login page.
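A sketch of that fix, assuming MSTest and WatiN's IE browser class (the URL and the assertion are placeholders): open a fresh browser in [TestInitialize] and close it in [TestCleanup], so no logged-in session survives into the next test.

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using WatiN.Core;

    [TestClass]
    public class LoginPageTests
    {
        private IE _browser;

        [TestInitialize]
        public void OpenBrowser()
        {
            _browser = new IE("http://example.com/login"); // placeholder URL
        }

        [TestMethod]
        public void LoginPage_IsShown()
        {
            Assert.IsTrue(_browser.ContainsText("Login")); // placeholder assertion
        }

        [TestCleanup]
        public void CloseBrowser()
        {
            // Closing the browser after every test prevents the previous session
            // (e.g. an already-logged-in user) from leaking into the next test.
            if (_browser != null)
            {
                _browser.Close();
            }
        }
    }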
