I'm using MSTest to drive the browser-automation framework WatiN. When I run tests individually they always pass, but when I run the entire set of automation tests, 7-8 tend to fail with different errors each time. All the tests are isolated and log in to the site from the beginning every time, so I don't think it's related to the way the tests are written.
Has anyone else encountered this?
Is it likely to be MSTest related?
I would guess that you are not closing the browser at the end of each and every test. This can leave the session from the previous test still around when you run the next one. For example, you might still be logged in when your test is expecting a login page.
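As an illustration, with MSTest you could give every test a fresh browser and dispose of it afterwards along these lines (a minimal sketch; the URL, class, and field names are placeholders):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

[TestClass]
public class LoginTests
{
    private IE _browser;   // one browser instance per test, never shared

    [TestInitialize]
    public void StartBrowser()
    {
        // Fresh browser, and therefore a fresh session, for every test.
        _browser = new IE("http://example.com/login");
    }

    [TestMethod]
    public void LoginPage_IsShown()
    {
        Assert.IsTrue(_browser.ContainsText("Login"));
    }

    [TestCleanup]
    public void CloseBrowser()
    {
        // Always close and dispose, even when the test fails, so the next
        // test does not inherit a logged-in session.
        _browser.Close();
        _browser.Dispose();
    }
}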
We have a large test set (based on NUnit) running within Azure DevOps. Recently we enabled the "Rerun failed tests" option within the Visual Studio Test task. At first, this didn't work due to a bug in VSTest's handling of custom test display names (which are required for our tests): tests would still only run once. However, by setting a batch size, this issue was fixed and tests are finally retried correctly.
It works wonderfully, except for one strange effect: for no apparent reason, the detailed logs no longer show our own custom output (generated by Console.WriteLine). This worked without problems before. All we get now is the default output and the final test result.
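To illustrate, the output in question is written from inside the tests roughly like this (a trivial sketch; the test name and messages are made up):

using System;
using NUnit.Framework;

[TestFixture]
public class ImportTests
{
    [Test]
    public void ImportCustomers_Succeeds()
    {
        // This kind of diagnostic output used to appear in the detailed build log.
        Console.WriteLine("Starting customer import with test data set A");

        // ... actual test steps would go here ...

        Console.WriteLine("Import finished, verifying results");
    }
}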
Logs without "Specify a batch size" enabled are way more informative:
The logs themselves are still written: we know this because (almost) all the information is also included per test. This is shown in the test results:
Also, simply disabling the batch size option makes the logs show up again.
Does anyone have an idea what causes this behavior, and how to fix it? So far, switching between Console.WriteLine/Trace.WriteLine/Debug.WriteLine etc. hasn't helped, and I haven't found much other information about this specific issue. There are certain situations where having one complete log file is necessary (or far more practical), so it would be nice if we could have both retries and full logging.
Thanks in advance!
I posted the question on developercommunity.visualstudio.com as well. The discussion took a bit of time, but summarized, the following response was given:
The supported behavior is to have the trace information from the tests be present as part of the test results file (trx) / standard console logs, and not as part of the build logs (as you have mentioned in your question). Do note this is also dependent on the test framework being used. There is no plan to have the trace information from the tests flow to the build logs. The different behavior you are seeing when you turn on the batching option is due to the fact that the internal flow of execution changes a bit (and eventually all flows will converge to the same as the batch option in the coming days). We recommend not taking a dependency on the build logs. Instead, the Tests tab is the place where you will get better logs in the context of a test case/test run.
A full test run output should be available in the trx files found in the test run. Upon checking, this is indeed the case:
When opening the file in Notepad++, I finally see the logs of my complete run:
- All these lines are written by using Console.WriteLine().
- If there is more than one trx file, the largest file holds the logs of the complete run; the smaller file only has the logs for the selected test.
NOTE:
We found that when a test run times out, this file is not generated. This has been reported to Microsoft and is acknowledged as an issue:
Got the point here. We are working on advanced diagnostics in the VSTest task. For instance, we will abort the test if a test is taking too long to complete; in this case a dump of the test process will be created, and of course we will have the trx uploaded as well. In the second scenario, where no particular test is taking too long but the overall run times out, we will take a dump of the test process and abort the run. Dumps will help you debug the issue.
At some point in the future, this dump should become available to use.
Alright, so there are many questions similar to this one; however, I have exhausted all the options I have seen. I am hoping that someone will have more information to share.
The Details
I run five test instances at a time, using NUnit (v3.10.1) with ChromeDriver (v2.41) on Selenium Grid (v3.14). I only get this error when running tests on the grid; it does not happen when running locally. I typically use the default configuration for both the Selenium hub and the Selenium node.
What I Have Tried
I have spent quite some time researching this issue to find a solution to my problem. A few results seemed promising, but none of them have had any effect:
- Increasing the timeout/browserTimeout in the hub/node config
- Running 1 session at a time instead of 5
- Separating the tests onto different nodes on the same hub
- Adding the '--no-sandbox' argument to ChromeDriver (see the sketch after this list)
- Increasing wait times within my framework to give the driver more time to start up
- Catching the WebDriverException (though it sometimes seems to be uncatchable)
- Reverting to older versions of Selenium Grid/ChromeDriver
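For context, this is roughly how a remote driver could be created against the hub with those tweaks applied (a sketch only; the hub URL, timeout value, and class name are illustrative):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public static class GridDriverFactory
{
    public static IWebDriver Create()
    {
        var options = new ChromeOptions();
        options.AddArgument("--no-sandbox");   // one of the attempted fixes

        // Illustrative hub address; the command timeout is raised to give the
        // node more time to start a new Chrome session.
        return new RemoteWebDriver(
            new Uri("http://selenium-hub:4444/wd/hub"),
            options.ToCapabilities(),
            TimeSpan.FromMinutes(3));
    }
}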
Let me know if there is any more information needed. I'm hoping that someone has some helpful information for this topic.
Out of curiosity I kept digging and noticed something I hadn't before. When I go to the Selenium hub, more specifically to hubaddress/wd/hub/sessions, I see the following as a result:
As a driver closes and a new one opens, the active session's [ext key ...] changes. So it seems that this exception always exists and is not triggered by individual instances (I think). Is it possible that NUnit cannot determine the reason for the test failure and displays this exception instead?
I have configured some tests to run in parallel using Selenium and NUnit, but sometimes one of the tests dies early and misbehaves. It's not always the same test, and they are basic examples, so I'm not sure what's happening.
I also followed the example in the following link (https://www.youtube.com/watch?v=18zrtO1l7EU) to the letter, but it also does not work. Sometimes one of the tests halts, doesn't continue, and then fails. I've tried with both the Chrome and IE drivers, but the same thing happens. Could this be an NUnit version issue?
I think this was because I was initially declaring the base WebDriver as static. Since removing the static modifier, it works fine.
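For anyone hitting the same thing, a minimal sketch of the non-static setup, assuming NUnit with fixture-level parallelism (the class name and URL are illustrative):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable]   // this fixture may run in parallel with other fixtures
public class CheckoutTests
{
    private IWebDriver _driver;   // instance field, NOT static

    [SetUp]
    public void SetUp()
    {
        // A static driver would be shared by everything running in parallel;
        // an instance field gives each fixture its own browser.
        _driver = new ChromeDriver();
        _driver.Navigate().GoToUrl("http://example.com");
    }

    [Test]
    public void Basic_page_loads()
    {
        Assert.That(_driver.Title, Is.Not.Empty);
    }

    [TearDown]
    public void TearDown()
    {
        _driver?.Quit();
    }
}

With the driver as an instance field, nothing can be torn down underneath a test that is still running in another parallel fixture.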
The company I work for uses Visual Studio to develop its website and all of its features, and there is also a separate site that's been developed for testing the site. This 'testing' site can run individual test cases against the website, and must be run for each possible case.
Everything is written in VB.NET, and each time the program is run a single thread is created to run the test. However, at the 'end' of the test the thread still seems to linger. The Stop button in Visual Studio must be clicked manually in order to terminate the application. Also, a process icon lingers in the task bar long after the application has closed.
It appears to me that the program is not correctly terminating all threads run during the tests, but I'm not sure if this is an issue worth bringing up in the office, so I ask the following question...
What is the purpose of properly closing an application and all threads running on it, and what are the consequences, if any, of not doing so?
Well, it's probably a small problem now, but it's not good practice, IMHO. Imagine what would happen if the same code were being executed by a continuous integration server, for instance TeamCity (or Jenkins, or ...), and the unit tests were being run continuously and automatically by said build server.
What would happen to the build status when those threads fail to close down cleanly? We often face this problem due to bad design decisions in threading, or due to simple (and possibly idiotic) mistakes in our unit testing code. The net effect, though, is a hung build process.
I've seen CI servers hang for almost half a day before someone (mercifully) killed the build process. Essentially, this indicates a problem in our code that may or may not become a huge issue. If this were server-side code, it could potentially lead to a pretty bad situation. My advice would be to dig out your introspection toolkits (memory profiling, perf profiling, etc.), see what exactly is going on, and resolve it.
We had a similar problem with an application that is called to index SPA pages on our application server. It was throwing an exception in some cases and threads were not closing. The biggest downside is that it will consume the server's memory, which is bad.
Another downside, since it runs as a web application, is that it will consume the available ports and stop running when it runs out of them.
The code should be modified to end the thread gracefully when it finishes or when an exception occurs, and of course to report any exceptions.
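Something along these lines, as a rough sketch in C# (the actual work and the reporting are placeholders):

using System;
using System.Threading;

public class TestRunner
{
    public void RunTest(Action testWork)
    {
        var worker = new Thread(() =>
        {
            try
            {
                testWork();
            }
            catch (Exception ex)
            {
                // Report instead of swallowing, so failures are visible.
                Console.Error.WriteLine("Test thread failed: " + ex);
            }
            // Nothing left to do here: the thread ends when this delegate returns.
        });

        // A background thread will not keep the process alive on its own,
        // so the application can still exit if something goes wrong.
        worker.IsBackground = true;
        worker.Start();
        worker.Join();   // wait for the test to finish before returning
    }
}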
We have a unit test project which still uses the "old-style" private accessors in many tests.
Since they're a maintenance nightmare, we're trying to get rid of them and move to the new Microsoft Fakes framework, using Shims where needed.
Recently we wrote some new unit tests which use Shims, and noticed that for some reason this caused a few OTHER, old tests, which were not modified, to run considerably slower. By slower I mean run times of about ~10 seconds instead of ~900 milliseconds for the affected tests.
Running the affected tests on their own didn't seem to have this effect, though - it only occurs when running them after tests with Shims.
Initially we thought this might be simply due to initialization problems, causing tests to influence one another.
However, after some experimentation, we found that the slowdown occurs even without actually adding any new test code. Simply adding the following snippet before one of the slowed-down tests caused the same effect of the test running slower:
using (ShimsContext.Create()) {}
Debugging seemed to show that the code being tested was indeed running much slower (not the unit test code itself), but we couldn't identify which part of it. We're also not able to identify why these tests are affected while others are not.
At this point we tried profiling these tests (using the new "profile test" option in VisualStudio). However, it turns out that profiling tests with Shims is not possible for some reason. The following exception was thrown:
Microsoft.QualityTools.Testing.Fakes.UnitTestIsolation.UnitTestIsolationException: UnitTestIsolation instrumentation failed to initialize. Please restart Visual Studio and rerun this test
As a last resort we also tried moving all tests using Shims to a separate test project in the same solution. This did seem to help, and all test run times returned to normal. We used test playlists to run each project's tests before the other's, and in both cases run times were OK. It's not really a solution though, and feels more like circumventing the actual issue.
So, we're not sure how to proceed. Any thoughts and ideas would be helpful.
Thanks.
The Microsoft documentation, Better Unit Testing with Microsoft Fakes (RTM).pdf, states that you will see a performance decrease when using Shims.
This article also goes over the performance impact of Shims:
http://technet.microsoft.com/en-us/windows/jj863250.aspx
"Other" tests should be explicitly executed in shimless context (ShimsContext.ExecuteWithoutShims), because it looks like even disposed ShimsContext in other tests may have detours to logic that doesn't use shims.