I am trying to run my Unity unit tests with the following command:
"C:\Program Files\Unity\Hub\Editor\2019.4.3f1\Editor\Unity.exe" -runTests -quit -batchmode -projectPath "X:\MyProject" -logFile ./log.txt -testResults ./results.xml
However, no report is generated and no message is printed to the CMD console.
Here is what I have in the log.txt:
https://pastebin.com/aaZBkmUT
Why can't I run the tests? What am I doing wrong?
P.S.
What does "ERROR Failed to connect to local IPC" mean? Could this be the cause of my issue somehow?
Try it either without -quit
The example in the documentation doesn't include it. The thing is that, as far as I know, tests run asynchronously, so with -quit you might just be shutting down Unity before the results are available.
Alternatively, I think you could add the -runSynchronously flag:
If included, the test run will run tests synchronously, guaranteeing that all tests run in one editor update call. Note that this is only supported for EditMode tests, and that tests which take multiple frames (i.e. [UnityTest] tests, or tests with [UnitySetUp] or [UnityTearDown] scaffolding) will be filtered out.
to make sure the editor stays alive until the tests are done.
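For example, taking the command from the question, removing -quit and adding -runSynchronously gives something like:

"C:\Program Files\Unity\Hub\Editor\2019.4.3f1\Editor\Unity.exe" -runTests -batchmode -runSynchronously -projectPath "X:\MyProject" -logFile ./log.txt -testResults ./results.xml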
Related
I am running test cases, developed in Visual Studio with C#, using WinAppDriver. I see that a few test cases are flaky: the first time they fail and the second time they pass. I capture a video of the test execution; if the test passes I delete the video, and if it fails I keep the video and attach it to the test result.
Now I want to keep the video if a test case passes on a "rerun".
Is there any way to programmatically know if the current test execution is a rerun?
Note: I run the test cases in a DevOps pipeline, and the rerun is set to 3 attempts.
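One way to detect this programmatically (a sketch, not a built-in pipeline feature: it assumes the rerun attempts happen on the same agent, so a file written by an earlier attempt is still present, and it uses NUnit's TestContext for illustration; the folder and file names are made up): drop a marker file when a test fails, and treat the presence of that marker on the next execution of the same test as a rerun.

using System.IO;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class RerunAwareTests
{
    // Illustrative marker folder; any location that survives between attempts works.
    private static readonly string MarkerDir =
        Path.Combine(Path.GetTempPath(), "rerun-markers");

    private static string MarkerFor(string testName) =>
        Path.Combine(MarkerDir, testName + ".failed");

    private bool isRerun;

    [SetUp]
    public void DetectRerun()
    {
        Directory.CreateDirectory(MarkerDir);
        // A leftover marker from a previous attempt means this execution is a rerun.
        isRerun = File.Exists(MarkerFor(TestContext.CurrentContext.Test.Name));
    }

    [TearDown]
    public void RecordOutcome()
    {
        string marker = MarkerFor(TestContext.CurrentContext.Test.Name);
        bool failed = TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed;

        if (failed)
        {
            File.WriteAllText(marker, "failed"); // remember the failure for the next attempt
        }
        else
        {
            if (isRerun)
                TestContext.WriteLine("Passed on rerun - keeping the video.");
            if (File.Exists(marker))
                File.Delete(marker); // clean up once the test passes
        }
    }
}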
I have configured some tests to run in parallel using Selenium and NUnit, but sometimes one of the tests dies early and misbehaves. It's not always the same test, and they are basic examples, so I am not sure what's happening.
I also followed the example in the following link (https://www.youtube.com/watch?v=18zrtO1l7EU) to the letter, but it also does not work. Sometimes one of the tests halts, doesn't continue, and then fails. I've tried with both the Chrome and IE drivers, but the same thing happens. Could this be an NUnit version issue?
I think this was because I was initially setting the base WebDriver as static. Since taking the static off, it works fine.
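For reference, a minimal sketch of the non-static pattern (NUnit + Selenium; the fixture name, URL, and assertion are placeholders):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable] // this fixture may run in parallel with other fixtures
public class HomePageTests
{
    // Instance field: each fixture gets its own driver. A static field would be
    // shared by tests running in parallel, which is exactly the kind of thing
    // that produces the intermittent hangs and failures described above.
    private IWebDriver driver;

    [SetUp]
    public void StartBrowser()
    {
        driver = new ChromeDriver();
    }

    [Test]
    public void TitleIsNotEmpty()
    {
        driver.Navigate().GoToUrl("https://example.com"); // placeholder URL
        Assert.That(driver.Title, Is.Not.Empty);
    }

    [TearDown]
    public void StopBrowser()
    {
        driver?.Quit();
    }
}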
We updated our solution from SpecFlow 1.9 to 2.0 and from NUnit 2.6.4 to 3.2.1. After adapting some attributes and project settings, all tests run fine in NUnit. However, when the SpecFlow tests are executed with NCrunch, we get a SpecFlowException:
TechTalk.SpecFlow.SpecFlowException : The ScenarioContext.Current static accessor cannot
be used in multi-threaded execution. Try injecting the scenario context to the binding
class. See http://go.specflow.org/doc-multithreaded for details.
at TechTalk.SpecFlow.ScenarioContext.get_Current()
We intentionally designed our SpecFlow tests for a single-threaded environment (to keep the effort low), and we just want to continue executing these tests in one thread. So instead of injecting the scenario context, as the proposed solution suggests (we use Ninject instead of the SpecFlow mini-IoC), we're looking for a setting to convince SpecFlow that it is running in a single-threaded environment.
Here are the NCrunch 2.23.0.2 settings:
I added the following attribute to the Assembly.cs files of all SpecFlow tests:
[assembly: Parallelizable(ParallelScope.None)]
Without success; the exception keeps showing up.
Does anybody have a clue how to force SpecFlow 2.0 in NCrunch 2.23.0.2 with NUnit 3.2.1 so that it thinks it's executing in a single-threaded environment?
Thank you for your effort!
Update 2016-05-31:
I installed the new version 2.1 of SpecFlow (available since 2016-05-25), but it didn't solve the problem.
I created an example project with a minimal amount of code to reproduce the problem. The calculator implementation is stateful and cannot be tested in a multithreaded environment.
SpecFlow throws the exception due to the (dummy) static reference ScenarioContext.Current in CustomContext. Yes, I know you should inject it if you intend to run in a multithreaded test environment. The problem is that SpecFlow THINKS it is in a multithreaded environment, but it isn't and it shouldn't be.
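(For completeness, the injection pattern the exception message refers to would look roughly like this; class and step names are illustrative, and it is exactly what we are trying to avoid:)

using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    private readonly ScenarioContext scenarioContext;

    // SpecFlow's built-in DI supplies the ScenarioContext of the current
    // scenario via constructor injection, so no static accessor is needed.
    public CalculatorSteps(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
    }

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEntered(int number)
    {
        scenarioContext["lastNumber"] = number; // illustrative state
    }
}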
On investigation, this appears to be a 3-way compatibility problem between NCrunch, SpecFlow, and NUnit3.
As part of its behaviour, NCrunch will re-use test processes by calling into them multiple times (i.e. once for each batch of tests in the Processing Queue). Because NUnit3 kicks off a new thread for each test session, it ends up using a different thread for each call into SpecFlow.
SpecFlow identifies multi-threaded execution by tracking thread IDs, and since each session has a new thread, it incorrectly thinks the code is being run in parallel when actually it's just different threads being used synchronously.
Setting the 'Test process memory limit' global NCrunch configuration setting to '1' will allow you to work around the problem, as this will cause NCrunch to throw away a test process after each batch, rather than re-using it. Unfortunately, this will have a significant impact on performance.
I've reported this problem to SpecFlow. Because of its nature, the most sensible thing would be for it to be fixed in SpecFlow itself: https://github.com/techtalk/SpecFlow/issues/638
You need to regenerate the code-behind files of the feature files after the upgrade.
See the upgrade steps here: http://gasparnagy.com/2016/01/specflow-tips-how-to-upgrade-your-project-to-specflow-v2/
I've been assigned a task of setting up a build server (jenkins) and running automated tests after the build agent completes the build.
We are using NUnit and selenium to run automated tests.
The main concern is wait time. Suppose several users check in their sources; a build is run, and automated tests are run afterwards (there could be several hundred of these). What's the best way to set this up so that each user does NOT have to wait in a queue for test results? I'm also to consider things like test result reports, etc.
Where do I start? What do I even google?
I'm very new at this stuff, and any info on doing this would be greatly appreciated. Thanks!
The first thing you'll want to do is to separate your unit tests from your integration tests.
Unit tests should be fast. Integration tests will obviously be slower since you're interacting with external components.
As far as configuring your environment, to do what you're trying to do properly, you'll need to research using Jenkins in a Master/multiple-Slave configuration. This isn't terribly complex, but can take some time to set up.
What you'll likely end up doing is setting up a number of jobs within Jenkins to handle each part of your build process: i.e., one job to do the compilation, at least one job to run the unit tests, and at least one job to run the integration tests (and then maybe packaging or deployment jobs, depending on how far you want to take this).
Depending on how slow your overall build process is, you could easily have one job for each component's integration tests and run these concurrently on different slave machines. A parent job would then aggregate the results and determine whether or not the check-in passed.
For reporting, you'll want to install the HTML Publisher Plugin, and the NUnit Plugin. These plugins will allow you to bundle the reports produced with the rest of the build artifacts.
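For example, a test job's build step can produce the XML that the NUnit plugin then parses (a sketch assuming the NUnit 3 console runner; the DLL path is illustrative, and format=nunit2 is the format the Jenkins NUnit plugin has traditionally consumed):

nunit3-console.exe path\to\YourTests.dll --result=TestResult.xml;format=nunit2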
In order to give feedback to your team, you'll also want to look at the Wall Display Plugin to display the status of the jobs.
I'm unit testing in VS2008, and every time I run now it says:
Executing the current test run will produce a new test run result, which will exceed
the limit of 25 that is currently specified. If you continue, older test run results and
their associated deployments will be deleted from the hard drive...
What does this mean, and how do I clear the older test run results? Why is this message important?
This message basically means that the unit test project has saved/recorded 25 (your configured threshold) results of previously run test sessions.
By proceeding, it'll delete the oldest of those 25 to make room for the results of your next run.
You can modify this limit (and the warning itself) in the Tools->Options dialog, under Test Tools->Test Execution ("Limit number of old test results to"). The saved results themselves live in the TestResults folder next to your solution file; deleting old subfolders there frees the disk space the message refers to.