How to output results when invoked by xUnit? - C#

For example, there is a class to be tested:
using System.Diagnostics;

public class ToBeTested
{
    public static void Method1()
    {
        Debug.WriteLine("....for debugging....");
    }
}
And it's invoked by an xUnit test method:
[Fact]
public void Test1()
{
    ToBeTested.Method1();
}
However, the line Debug.WriteLine("....for debugging...."); doesn't write anything to the Visual Studio debug output. Why not?

The line Debug.WriteLine("....for debugging...."); will write to the Debug output window of Visual Studio only when the tests are run in Debug mode. Instead of "Run Tests" you can use "Debug Tests" and then see the output in that window.
But if you want to capture output while running xUnit tests, it is better to use ITestOutputHelper from the Xunit.Abstractions namespace.
A code sample is available at https://xunit.github.io/docs/capturing-output.html:
using Xunit;
using Xunit.Abstractions;

public class MyTestClass
{
    private readonly ITestOutputHelper output;

    public MyTestClass(ITestOutputHelper output)
    {
        this.output = output;
    }

    [Fact]
    public void MyTest()
    {
        var temp = "my class!";
        output.WriteLine("This is output from {0}", temp);
    }
}
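If you specifically want the existing Debug.WriteLine calls to reach the test output, one option (a minimal sketch of my own, not from the linked docs; the listener class name is made up) is to register a TraceListener that forwards to ITestOutputHelper:
using System;
using System.Diagnostics;
using Xunit;
using Xunit.Abstractions;

// Forwards Debug/Trace output to xUnit's ITestOutputHelper.
public class XunitTraceListener : TraceListener
{
    private readonly ITestOutputHelper output;

    public XunitTraceListener(ITestOutputHelper output)
    {
        this.output = output;
    }

    public override void Write(string message) => output.WriteLine(message);
    public override void WriteLine(string message) => output.WriteLine(message);
}

public class DebugForwardingTests : IDisposable
{
    private readonly XunitTraceListener listener;

    public DebugForwardingTests(ITestOutputHelper output)
    {
        listener = new XunitTraceListener(output);
        // On .NET Core, Debug.WriteLine goes through the shared Trace.Listeners collection;
        // on .NET Framework you would add the listener to Debug.Listeners instead.
        Trace.Listeners.Add(listener);
    }

    public void Dispose() => Trace.Listeners.Remove(listener);

    [Fact]
    public void Test1()
    {
        ToBeTested.Method1();   // its Debug.WriteLine now shows up in the test output
    }
}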

Related

Getting Exception System.MissingMethodException: Method not found: 'System.Windows.Rect System.Windows.Automation.... in TestStack.White Application

I am writing a test method to launch a Windows application.
Below is my code:
namespace UnitTestProject1
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            ProcessStartInfo P = new ProcessStartInfo("C:\\Windows\\System32\\notepad.exe");
            Application app = Application.Launch(P);
        }
    }
}
After running this test a Notepad window opens up and then the exception below is thrown:
System.MissingMethodException: Method not found: 'System.Windows.Rect
System.Windows.Automation.Provider.IRawElementProviderFragment.get_BoundingRectangle()'
I don't know which namespace Application belongs to, but it's probably the wrong one; you are most likely aiming for the Process class. Below is the code you need to perform this task with Process.Start():
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace UnitTestProject1
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            ProcessStartInfo P = new ProcessStartInfo("C:\\Windows\\System32\\notepad.exe");
            Process.Start(P);
        }
    }
}
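If you go the Process route, a slightly more defensive version (a sketch of my own, not part of the answer above) asserts that the launch succeeded and cleans up the spawned window afterwards:
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace UnitTestProject1
{
    [TestClass]
    public class UnitTest2
    {
        [TestMethod]
        public void LaunchesNotepad()
        {
            var startInfo = new ProcessStartInfo("C:\\Windows\\System32\\notepad.exe");
            using (Process process = Process.Start(startInfo))
            {
                Assert.IsNotNull(process);           // the process was actually launched
                Assert.IsFalse(process.HasExited);   // and is still running
                process.Kill();                      // don't leave a Notepad window behind
            }
        }
    }
}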

Selenium tests running in parallel causing error: invalid session id

Looking to get some help with making my tests parallelizable. I have a Selenium C# setup that uses a combination of NUnit, C# and Selenium to run tests in sequence, locally on my machine or on the CI server.
I've looked into parallelizing the tests before but have been unable to make the jump, and running in sequence was fine.
At the moment, when I add the NUnit [Parallelizable] attribute, I get an 'OpenQA.Selenium.WebDriverException : invalid session id' error. Based on the reading I've done, I need to make each new driver I create unique. However, I'm uncertain how to do this, or even where to start. Is this even possible within my current setup?
My tests currently perform limited smoke tests and just remove the repetitive regression testing against multiple browsers; however, I foresee a need to vastly expand my test coverage.
I will probably be looking at getting BrowserStack or Sauce Labs in the long term, but obviously that requires funding, and I need to get that signed off, so I will be looking to get it running locally for now.
Here is a look at the basic setup of my code.
Test files:
1st .cs test file:
{
    [TestFixture]
    [Parallelizable]
    public class Featur2Tests1 : TestBase
    {
        [Test]
        [TestCaseSource(typeof(TestBase), "TestData")]
        public void test1(string BrowserName, string Environment, string System)
        {
            Setup(BrowserName, Environment, System);
            // Run test steps....
        }

        [Test]
        [TestCaseSource(typeof(TestBase), "TestData")]
        public void test2(string BrowserName, string Environment, string System)
        {
            Setup(BrowserName, Environment, System);
            // Run test steps....
        }
    }
}
2nd .cs test file:
{
    [TestFixture]
    [Parallelizable]
    public class FeatureTests2 : TestBase
    {
        [Test]
        [TestCaseSource(typeof(TestBase), "TestData")]
        public void test1(string BrowserName, string Environment, string System)
        {
            Setup(BrowserName, Environment, System);
            // Run test steps....
        }

        [Test]
        [TestCaseSource(typeof(TestBase), "TestData")]
        public void test2(string BrowserName, string Environment, string System)
        {
            Setup(BrowserName, Environment, System);
            // Run test steps....
        }
    }
}
TestBase.cs, where the setup for each test lives:
{
    public class TestBase
    {
        public static IWebDriver driver;

        public void Setup(string BrowserName, string Environment, string System)
        {
            Driver.Initialize(BrowserName);
            // do additional setup before test run...
        }

        [TearDown]
        public void CleanUp()
        {
            Driver.Close();
        }

        public static IEnumerable TestData
        {
            get
            {
                string[] browsers = Config.theBrowserList.Split(',');
                string[] Environments = Config.theEnvironmentList.Split(',');
                string[] Systems = Config.theSystemList.Split(',');
                foreach (string browser in browsers)
                {
                    foreach (string Environment in Environments)
                    {
                        foreach (string System in Systems)
                        {
                            yield return new TestCaseData(browser, Environment, System);
                        }
                    }
                }
            }
        }
    }
}
The IEnumerable TestData comes from a file called config.resx and contains the following data:
{Name}: {Value}
theBrowserList: Chrome,Edge,Firefox
theEnvironmentList: QA
theSystemList: WE
This is where I create my driver, in Driver.cs:
{
    public class Driver
    {
        public static IWebDriver Instance { get; set; }

        public static void Initialize(string browser)
        {
            string appDirectory = Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).Parent.Parent.Parent.FullName;
            string driverFolder = $"{appDirectory}/Framework.Platform/bin/debug";
            if (browser == "Chrome")
            {
                ChromeOptions chromeOpts = new ChromeOptions();
                chromeOpts.AddUserProfilePreference("safebrowsing.enabled", true);
                chromeOpts.AddArgument("start-maximized");
                chromeOpts.AddArgument("log-level=3");
                Instance = new ChromeDriver(driverFolder, chromeOpts);
            }
            else if (browser == "IE")
            {
                var options = new InternetExplorerOptions { EnsureCleanSession = true };
                options.AddAdditionalCapability("IgnoreZoomLevel", true);
                Instance = new InternetExplorerDriver(driverFolder, options);
                Instance.Manage().Window.Maximize();
            }
            else if (browser == "Edge")
            {
                EdgeOptions edgeOpts = new EdgeOptions();
                Instance = new EdgeDriver(driverFolder, edgeOpts);
                Instance.Manage().Window.Maximize();
                Instance.Manage().Cookies.DeleteAllCookies();
            }
            else if (browser == "Firefox")
            {
                FirefoxOptions firefoxOpts = new FirefoxOptions();
                Instance = new FirefoxDriver(driverFolder, firefoxOpts);
                Instance.Manage().Window.Maximize();
            }
            else
            {
                Assert.Fail($"Browser Driver; {browser}, is not currently supported by the Initialize method");
            }
        }

        public static void Close(string browser = "other")
        {
            if (browser == "IE")
            {
                Process[] ies = Process.GetProcessesByName("iexplore");
                foreach (Process ie in ies)
                {
                    ie.Kill();
                }
            }
            else
            {
                Instance.Quit();
            }
        }
    }
}
All your tests use the same driver, which is defined in TestBase as static. The two fixtures will run in parallel and will both affect the state of that driver. If you want two tests to run in parallel, they cannot both be using the same state, with the exception of constant or readonly values.
The first thing to do is to make the driver an instance member, so that each of the derived fixtures works with a different driver. If that doesn't solve the problem, it will at least take you a step closer to a solution.
Do not use static, and that should help resolve your issue:
public IWebDriver Instance { get; set; }
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

namespace Nunit_ParalelizeTest
{
    public class Base
    {
        protected IWebDriver _driver;

        [SetUp]
        public void Setup()
        {
            _driver = new ChromeDriver();
            _driver.Manage().Window.Maximize();
        }

        [TearDown]
        public void TearDown()
        {
            _driver.Close();
            _driver.Quit();
        }
    }
}
I see there is no [SetUp] attribute on top of the Setup method in your TestBase. The invalid session error is caused by trying to close a window that is no longer there. Also try replacing driver.Close() with driver.Quit().
You should create the driver separately in each test; otherwise NUnit opens only one driver for all instances. Hope this makes sense to you.
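For illustration, a fixture built on that Base might look like the sketch below (my own example; the URLs and assertions are placeholders, not from the question). Because _driver is an instance field created in [SetUp], each fixture owns its own browser session when fixtures run in parallel:
using NUnit.Framework;

namespace Nunit_ParalelizeTest
{
    [TestFixture]
    [Parallelizable]   // fixtures run in parallel; each fixture instance has its own _driver
    public class FeatureTests : Base
    {
        [Test]
        public void CanLoadHomePage()
        {
            _driver.Navigate().GoToUrl("https://example.com");        // placeholder URL
            Assert.That(_driver.Title, Is.Not.Empty);
        }

        [Test]
        public void CanLoadAboutPage()
        {
            _driver.Navigate().GoToUrl("https://example.com/about");   // placeholder URL
            Assert.That(_driver.Url, Does.Contain("about"));
        }
    }
}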

C# selenium - SetUp - test name

Trying to get the name of the test in SetUp, I get: "AdhocTestMethod"...
[SetUp]
public void SetUpFunc()
{
    var asd = TestContext.CurrentContext.Test.Name;
}

[Test(Description = "testingSetup")]
public void TestName123()
{
    Assert.IsTrue(false);
}
I'm using NUnit 2.6.3
A new answer has just been posted regarding "AdhocTestMethod"; search for "customattribute adhoctestmethod".
I think you've got some problem with your setup. It works fine for me.
using NUnit.Framework;
using System;

namespace UnitTestProject1
{
    public class Tests
    {
        [SetUp]
        public void SetUpFunc()
        {
            var asd = TestContext.CurrentContext.Test.Name;
            Console.WriteLine($"Setup: {asd}");
        }

        [Test(Description = "testingSetup")]
        public void TestName123()
        {
            var asd = TestContext.CurrentContext.Test.Name;
            Console.WriteLine($"Test: {asd}");
            Assert.IsTrue(false);
        }
    }
}
It prints
Setup: TestName123
Test: TestName123
I have NUnit 2.6.3 and NUnitTestAdapter 2.1.1 installed.
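Once the real test name comes through, a common use in Selenium suites is building per-test artifact names, for example screenshots. A rough sketch of that pattern (my own illustration; the driver creation and file path are assumptions, and SaveAsFile(string) is the overload in recent Selenium .NET bindings):
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public class ScreenshotPerTest
{
    private IWebDriver driver;
    private string testName;

    [SetUp]
    public void SetUpFunc()
    {
        testName = TestContext.CurrentContext.Test.Name;   // captured once per test
        driver = new ChromeDriver();
    }

    [TearDown]
    public void TearDownFunc()
    {
        // Save a screenshot named after the test, then close the browser.
        ((ITakesScreenshot)driver).GetScreenshot().SaveAsFile($"{testName}.png");
        driver.Quit();
    }

    [Test]
    public void TestName123()
    {
        driver.Navigate().GoToUrl("https://example.com");   // placeholder URL
    }
}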

AssemblyInitialize method doesnt run before tests

I am using the MSTest V2 framework for my tests.
I have a test automation framework (TAF) project and a project with tests.
The tests project inherits from the TAF and contains only tests.
In the TAF I have a class containing a method that should run before all tests, but it doesn't run at all.
By the way, the BeforeTest method works fine.
public class TestBase
{
    [AssemblyInitialize]
    public static void BeforeClass(TestContext tc)
    {
        Console.WriteLine("Before all tests");
    }

    [TestInitialize]
    public void BeforeTest()
    {
        Console.WriteLine("Before each test");
    }
}

[TestClass]
public class FirstTest : TestBase
{
    [TestMethod]
    public void FailedTest()
    {
        Assert.IsTrue(false, "ASDASDASD");
    }
}
If I put the "AssemblyInitialize" method into the tests project, it works.
What am I doing wrong?
Just put [TestClass] onto your TestBase:
[TestClass]
public class TestBase
{
    [AssemblyInitialize]
    public static void BeforeClass(TestContext tc)
    {
        Console.WriteLine("Before all tests");
    }

    [TestInitialize]
    public void BeforeTest()
    {
        Console.WriteLine("Before each test");
    }
}
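For completeness, here is a sketch of my own showing the pieces together in one test project once the attribute is in place (without [TestClass], MSTest never inspects TestBase, so the [AssemblyInitialize] method is not discovered):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]   // required so MSTest discovers the [AssemblyInitialize] method
public class TestBase
{
    [AssemblyInitialize]
    public static void BeforeClass(TestContext tc)
    {
        Console.WriteLine("Before all tests");   // runs once, before any test in the assembly
    }

    [TestInitialize]
    public void BeforeTest()
    {
        Console.WriteLine("Before each test");   // runs before every test
    }
}

[TestClass]
public class FirstTest : TestBase
{
    [TestMethod]
    public void FailedTest()
    {
        // Expected console order: "Before all tests", then "Before each test", then the assertion fails.
        Assert.IsTrue(false, "ASDASDASD");
    }
}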

why isn't TestInitialize getting called automatically?

I'm using Microsoft.VisualStudio.TestTools.UnitTesting, but the method I marked as [TestInitialize] isn't getting called before the test. I've never used this particular testing framework before, but in every other framework there is always a way of registering a setup and teardown method that will automatically run before and after every single test. Is this not the case with the Visual Studio unit testing framework?
[TestClass]
public class RepoTest
{
    private const string TestConnectionString = @"Server=localhost\SQL2014EXPRESS64; Database=RepoTest; Trusted_Connection=True;";
    private const string MasterConnectionString = @"Server=localhost\SQL2014EXPRESS64; Database=master; Trusted_Connection=True;";

    [TestInitialize]
    private void Initialize()
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(Resources.Initialize, connection))
        {
            command.ExecuteNonQuery();
        }
    }

    [TestCleanup]
    private void Cleanup()
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(Resources.Cleanup, connection))
        {
            command.ExecuteNonQuery();
        }
    }

    [TestMethod]
    public void CreateARepo()
    {
        var repo = new Repo(TestConnectionString);
    }
}
Make Initialize and Cleanup public. You can also check that on MSDN all the examples use a public accessor.
To reproduce, make a test class like this:
[TestClass]
public class Tests
{
    [TestInitialize]
    public void Initialize()
    {
        Console.WriteLine("initialize");
    }

    [TestCleanup]
    public void Cleanup()
    {
        Console.WriteLine("cleanup");
    }

    [TestMethod]
    public void Test()
    {
        Console.WriteLine("test body");
    }
}
With public accessors, that test prints initialize, test body and cleanup to the console. Making Initialize and Cleanup private, you'll see only test body being printed.
I used the Microsoft.VisualStudio.QualityTools.UnitTestFramework assembly (version 10.1.0.0) as the unit testing framework and ReSharper 8.2 as the test runner.
