I'm a beginner in WebDriver and C#. I want to use a variable from the first test in other tests — how do I do that? I got to this point with some examples, but it doesn't work. I can see that the first test gets the login right, but when I start the second test and try SendKeys, I get that loginName is null. (The code is a shortened version, just to give you an idea of what I'm trying to do.)
[TestFixture]
public class TestClass
{
    private IWebDriver driver;
    private StringBuilder verificationErrors;
    private string baseURL;
    private bool acceptNextAlert = true;

    public static String loginName;
    public static String loginPassword;

    [SetUp]
    public void SetupTest()...

    [TearDown]
    public void TeardownTest()...

    [Test]
    public void GetLoginAndPassword()
    {
        loginName = driver.FindElement(By.XPath("...")).Text;
        loginPassword = driver.FindElement(By.XPath("...")).Text;
        Console.WriteLine(loginName);
    }

    [Test]
    public void Test1()
    {
        driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
        driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
    }
}
You cannot (and should not) pass variables between tests. Test methods are independent of one another... and should actually Assert() something.
Your first method, GetLoginAndPassword(), isn't really a test method but a utility method. If you use the Selenium PageObject pattern, this is probably a method of your PageObject class that you can call at the beginning of your actual Test1() method.
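For illustration, a minimal PageObject-style sketch (the class name, constructor and property names are mine, not from your code; the XPath placeholders stay elided):
using OpenQA.Selenium;

// Hypothetical PageObject: exposes the credentials shown on the page so a
// test can read them when it needs them, instead of relying on static
// fields populated by another test.
public class AccountPage
{
    private readonly IWebDriver driver;

    public AccountPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public string LoginName
    {
        get { return driver.FindElement(By.XPath("...")).Text; }
    }

    public string LoginPassword
    {
        get { return driver.FindElement(By.XPath("...")).Text; }
    }
}
Test1() could then read new AccountPage(driver).LoginName at the point it actually needs the value.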
The problem is that the methods marked with TestAttribute do not necessarily run sequentially in the order you implemented them, so Test1 may well run long before GetLoginAndPassword. You have to call that method yourself: either once from the constructor or during test initialization, or at the start of every test run.
[Test]
public void Test1()
{
    GetLoginAndPassword();
    driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
    driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
}
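If you prefer the test-initialization option instead, a rough sketch (keeping the elided parts of your existing SetupTest exactly as they are) could look like this:
[SetUp]
public void SetupTest()
{
    // ... your existing driver / baseURL initialization stays here ...

    // Populate the shared credentials before each test, so no test
    // depends on GetLoginAndPassword() having run first.
    loginName = driver.FindElement(By.XPath("...")).Text;
    loginPassword = driver.FindElement(By.XPath("...")).Text;
}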
Your GetLoginAndPassword probably isn't even a method under test but a helper used by your tests (unless you actually have a method called GetLoginAndPassword in your system under test). Either way, since there are no asserts at all, these tests are somewhat odd.
The purpose of unit testing is to test if a specific unit (meaning a group of closely related classes) works as specified. It is not meant to test if your complete elaborate program works as specified.
Test-driven design has the advantage that you are more aware of what each function should do. Each function is supposed to transform a precondition into a specified postcondition, regardless of what you did before or after calling it.
If your tests assume that other tests are run before your test is run, then you won't test the use case that the other functions are not called, or only some of these other required functions are called.
This leads to the conclusion that each test method should be able to be run independently. Each test should set up the precondition, call the function and check if the postcondition is met.
But what if my function A only works correctly if another function B has been called first?
In that case the specification of A ought to describe what happens if B was called before A was called, as well as what would happen if A was called without calling B first.
If your unit test would first test B and then A with the prerequisite that B was called, you would not test whether A would react according to specification without calling B.
Example.
Suppose we have a Divider class that divides any number by a denominator that can be set using a property.
public class Divider
{
    public double Denominator { get; set; }

    public double Divide(double numerator)
    {
        return numerator / this.Denominator;
    }
}
It is obvious that in normal usage one ought to set property Denominator before calling Divide:
Divider divider = new Divider() { Denominator = 3.14 };
Console.WriteLine(divider.Divide(10));
Your specification ought to describe what happens if Divide is called without setting Denominator to a non-zero value. That description could read:
If method Divide is called with a parameter value X and the value of Denominator is a non-zero Y, then the return value is X/Y. If the value of Denominator is zero, then the result is positive or negative infinity (or NaN for a zero numerator): with double arithmetic, division by zero does not throw, and a System.DivideByZeroException would only occur if integer arithmetic were used.
You should create at least two tests. One for the use case that Denominator was set at a proper non-zero value, and one for the use case that the Denominator is not set at all. And if you are very thorough: a test for the use case that the Denominator is first set to a non-zero value and then to a zero value.
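A minimal sketch of those two tests in NUnit (the test names are mine; the zero case reflects the double-arithmetic behaviour described above):
using NUnit.Framework;

[TestFixture]
public class DividerTests
{
    [Test]
    public void Divide_WithNonZeroDenominator_ReturnsQuotient()
    {
        var divider = new Divider { Denominator = 4.0 };

        Assert.AreEqual(2.5, divider.Divide(10.0), 1e-9);
    }

    [Test]
    public void Divide_WithoutSettingDenominator_ReturnsInfinity()
    {
        // Denominator defaults to 0.0; double division by zero yields
        // infinity rather than throwing.
        var divider = new Divider();

        Assert.IsTrue(double.IsPositiveInfinity(divider.Divide(10.0)));
    }
}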
Assume I have a test class with a theory method that runs with two inputs, "irrelevant" and "irrelevant2". The test first checks whether a static class is initialized (via IsInitialized()); if it is, the test fails.
Then, the test initializes the static class by calling "Initialize".
[Theory]
[InlineData("irrelevant")]
[InlineData("irrelevant2")]
public void Test(string param)
{
    if (MyStaticClass.IsInitialized()) { throw new Exception(); }
    MyStaticClass.Initialize();
}

public static class MyStaticClass
{
    private static bool Initialized = false;

    public static void Initialize()
    {
        Initialized = true;
    }

    public static bool IsInitialized()
    {
        return Initialized;
    }
}
What I expect is that both tests will pass, as the static class is only initialized after calling "Initialize". However, the result is that the first test passes and the second fails, because the static class remains in memory. I'd expect the static class state to revert to its initial value. I can understand why this happens, because a static is used. However, I'm trying to figure out whether there's a way to configure the test to dispose of the static class's memory as if a new test case were being run.
This also happens if I have two "Fact"s with the same code in them. When running each Fact separately, both tests pass. When running the whole test class, one passes (the first) and the second fails.
What you're describing is expected behavior. If you set the value of static field Initialized to "true" in one test, when you run the next test it's going to be "true".
There are a few ways to look at this:
If all you're testing is that setting a field's value actually works, just don't test that. It's not so much a behavior of your code as a feature of the language. Do we need to write tests to verify that setting a property really sets it? Likely not.
If you have to test it, don't write tests that must fail if the language works correctly. Write a test that verifies that the value is whatever you set it to. A test that verifies what the value is not isn't very useful.
Consider not using a static class and property. The reasons for using a static vs. an instance class are specific to whatever you're coding. I don't know your reasons. But often the challenges we encounter when testing reflect the problems we'll have when using the code "for real."
You asked if you could "dispose" it from memory. You can't do that, or at least the process for doing it is so complicated that you shouldn't do it. You could add a method to your static class that "resets" the value. But that's a little messy. If we have to add methods to static classes just so we can test them then maybe we should consider not using a static class.
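As a hedged sketch of that last suggestion (the names are illustrative, not from the question): replacing the static class with an instance class means every test constructs its own object, so no state leaks between tests.
using Xunit;

// Illustrative instance-based replacement for MyStaticClass.
public class MyService
{
    public bool IsInitialized { get; private set; }

    public void Initialize()
    {
        IsInitialized = true;
    }
}

public class MyServiceTests
{
    [Fact]
    public void Initialize_sets_IsInitialized()
    {
        var service = new MyService();   // fresh state for this test

        Assert.False(service.IsInitialized);
        service.Initialize();
        Assert.True(service.IsInitialized);
    }
}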
How to programmatically tell NUnit to repeat a test?
Background:
I'm running NUnit from within my C# code, using a SimpleNameFilter and a RemoteTestRunner. My application reads a csv file, TestList.csv, that specifies what tests to run. Up to that point everything works ok.
Problem:
The problem occurs when I put the same test name twice in my TestList file. In that case, my application correctly reads the file and loads the SimpleNameFilter with two instances of the test name. The filter is then passed to the RemoteTestRunner. NUnit, however, executes the test only once. It seems that when NUnit sees a second instance of a test it has already run, it ignores it.
How can I override this behavior? I'd like NUnit to run the same test two or more times, as specified in my TestList.csv file.
Thank you,
Joe
http://www.nunit.org/index.php?p=testCase&r=2.5
TestCaseAttribute serves the dual purpose of marking a method with
parameters as a test method and providing inline data to be used when
invoking that method. Here is an example of a test being run three
times, with three different sets of data:
[TestCase(12,3, Result=4)]
[TestCase(12,2, Result=6)]
[TestCase(12,4, Result=3)]
public int DivideTest(int n, int d)
{
    return( n / d );
}
Running an identical test twice should produce the same result. An individual test can either pass or fail. If you have tests that pass sometimes and fail at other times, then it feels like the wrong thing is happening, which is why NUnit doesn't support this out of the box. I imagine it would also cause problems in the reporting of the test run: does it say that test X passed, or failed, if both happened?
The closest thing you're going to get in NUnit is something like the TestCaseSource attribute (which you already seem to know about). You can use TestCaseSource to specify a method, which can in turn read from a file. So you could, for example, have a file "cases.txt" that looks like this:
Test1,1,2,3
Test2,wibble,wobble,wet
Test1,2,3,4
And then use this from your tests like so:
[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c) {
}

[Test]
[TestCaseSource("Test2Source")]
public void Test2(string a, string b, string c) {
}

public IEnumerable Test1Source() {
    return GetCases("Test1");
}

public IEnumerable Test2Source() {
    return GetCases("Test2");
}

public IEnumerable GetCases(string testName) {
    var cases = new List<IEnumerable>();
    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    foreach (var line in lines) {
        var args = line.Split(',');
        var currentcase = new List<object>();
        for (var i = 1; i < args.Count(); i++) {
            currentcase.Add(args[i]);
        }
        cases.Add(currentcase.ToArray());
    }
    return cases;
}
This is obviously a very basic example that results in Test1 being called twice and Test2 being called once, with the arguments taken from the text file. However, this again only works if the arguments passed to the test are different, since NUnit uses the arguments to create a unique test name. You could work around this by having the test source generate a unique number for each case and passing it to the test as an extra argument that the test simply ignores, as sketched below.
An alternative would be to run NUnit from a script that invokes it again for each line of the file, although I imagine this may cause you other issues when you consolidate the reporting from the multiple runs.
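A hedged sketch of that unique-argument workaround, written as a drop-in variant of the GetCases helper above (the caseCounter field is mine; it assumes the same using directives the original needs, i.e. System.Collections, System.IO and System.Linq):
private static int caseCounter;

public IEnumerable GetCases(string testName) {
    foreach (var line in File.ReadAllLines(@"cases.txt")
                             .Where(x => x.StartsWith(testName))) {
        var args = line.Split(',').Skip(1).Cast<object>().ToList();
        args.Add(caseCounter++);   // uniquifier, ignored by the test
        yield return args.ToArray();
    }
}
The test signatures then gain one unused parameter, e.g. public void Test1(string a, string b, string c, int ignored), so duplicate lines in cases.txt still produce distinct test cases.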
I am writing unit tests using NUnit for a C# project.
I am trying to run a single test multiple times with different data using the TestCaseSource attribute.
I am doing this elsewhere without any problems, but now I am finding that the first time I run my tests, the code passes. The next time, it doesn't. Using some Console.WriteLine statements, I can see that the test data is different each time.
The method used to generate the data is internal to the test class, is not static and generates all required dependencies from scratch for each test.
--
I have a fake class which holds a queue of values to return when a given function is called. A new class is created for each test.
However, if the first time the test is run, it exhausts the queue, the next time it is run, no data is found. Surely this should be regenerated every time?
--
It is as if NUnit is not calling the method specified by the TestCaseSource attribute every time the test is run - only when the project is first loaded.
Is this expected? Is there a workaround?
EDIT:
OK, here is a very basic example:
using System;
using Moq;
using NUnit.Framework;

[TestFixture]
public class Tests
{
    public interface IEntry
    {
        object Read();
    }

    [TestCaseSource("TestData")]
    public void Test(Mock<IEntry> entry)
    {
        object o = entry.Object.Read();
        object o2 = entry.Object.Read();
    }

    public System.Collections.IEnumerable TestData()
    {
        var entry = new Mock<IEntry>();
        int call = 0;
        entry.Setup(x => x.Read()).Returns(() =>
        {
            Console.WriteLine(call);
            return null;
        }).Callback(() =>
        {
            call++;
        });
        yield return new TestCaseData(entry);
    }
}
If you watch the test output in NUnit, it should always display 0, followed by 1. Instead, the counter keeps incrementing each time the test is run, i.e. second run: 2 and 3, third run: 4 and 5, etc.
If you move the TestData code into Test, then the correct values are returned every time.
I assume you're using the NUnit GUI runner?
What you're seeing is (I believe; I can't easily find confirmation in the NUnit source) an optimization of the test runner. Rather than re-create the values provided by the TestCaseSource attribute on every test run, it only does so when the assembly under test changes.
If you change your code to remove the Moq dependency, it's a little clearer:
using System;
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class SampleTests
{
    [TestCaseSource("TestData")]
    public void Test(CallTracker callTracker)
    {
        callTracker.Call++;
        callTracker.Call++;
    }

    public IEnumerable TestData()
    {
        yield return new TestCaseData(new CallTracker());
    }

    public class CallTracker
    {
        int call;

        public int Call
        {
            get
            {
                return call;
            }
            set
            {
                call = value;
                Console.WriteLine(call);
            }
        }
    }
}
This yields the same behavior as your code. CallTracker is created anew whenever the TestCaseSource is evaluated, but since the call count keeps going up, the test runner must be reusing the same instance (for what I'm assuming are performance reasons).
ReSharper's test runner in Visual Studio does not exhibit this behavior; it always shows 1 and 2 for repeated runs without recompiling. This is probably why it's slower to start running tests than the NUnit GUI runner. Similarly, the NUnit console runner does not exhibit this behavior, since it always starts cold.
I am working on some unit test projects in VS 2008 in C#. I created one simple, small method to unit test:
public int addNumber(int a, int b)
{
    return a + b;
}
Well, I created a unit test method as below:
[TestMethod()]
public void addNumberTest()
{
    Mathematical target = new Mathematical(); // TODO: Initialize to an appropriate value
    int a = 4; // TODO: Initialize to an appropriate value
    int b = 2; // TODO: Initialize to an appropriate value
    int expected = 0; // TODO: Initialize to an appropriate value
    int actual;
    actual = target.addNumber(a, b);
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}
But when I try to run the unit test project, I am receiving an Inconclusive message. My questions are:
What exactly is Inconclusive, and when does it come into the picture?
What do I need to do to make my unit test pass?
You need to decide what the criteria are for a unit test to be considered passing. There isn't a blanket answer to what makes a unit test pass; the specification ultimately dictates what constitutes a passing unit test.
If the method you are testing really is just adding two numbers, then Assert.AreEqual(expected, actual) is probably enough for this particular unit test, provided expected is initialized to the value you actually expect (6 for inputs 4 and 2, rather than the template's 0). You may also want to tack on another assertion such as Assert.IsTrue(actual > 0) for this particular case.
You'll want to test it again though with other values like negatives, zeros, and really large numbers.
You won't need the Inconclusive assertion for your unit tests of the addNumber method; it is more useful in situations where a test cannot reach a meaningful pass/fail verdict. Calling Assert.Inconclusive like you have will always prevent the test from passing and will always report the message passed into it.
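For reference, a sketch of the generated test fixed up so it can pass (expected becomes 6 and the Inconclusive call goes away), plus an extra case along the lines suggested above:
[TestMethod()]
public void addNumberTest()
{
    Mathematical target = new Mathematical();

    int actual = target.addNumber(4, 2);

    Assert.AreEqual(6, actual);   // expected is 6, not the template's 0
    // No Assert.Inconclusive call, so the test can report Passed.
}

[TestMethod()]
public void addNumberTest_handles_negatives_and_zero()
{
    Mathematical target = new Mathematical();

    Assert.AreEqual(-2, target.addNumber(-4, 2));
    Assert.AreEqual(0, target.addNumber(0, 0));
}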
I always try to stick to one assertion per test, but sometimes I have trouble doing so.
For example.
Say I've written a cryptographic class that encrypts and decrypts strings.
public class CryptoDummy
{
    public string Decrypt(string value)
    {
    }

    public string Encrypt(string value)
    {
    }
}
How would I create my unit test if the decryption depends upon the output of the encryption?
Most, if not all, of my tests up until now consist of one method call per test and one assertion per test.
So, to the point: is it fine to have multiple calls per test and to assert on the final result produced by the method I called last?
[TestClass]
public class CryptoDummyTest
{
    private static CryptoDummy _cryptoDummy;

    // Use ClassInitialize to run code before running the first test in the class
    [ClassInitialize]
    public static void MyClassInitialize(TestContext testContext)
    {
        _cryptoDummy = new CryptoDummy();
    }

    [TestMethod]
    public void Encrypt_should_return_ciphered_64string_when_passing_a_plaintext_value()
    {
        const string PLAINTEXT_VALUE = "anonymous@provider.com";

        string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);

        Assert.IsTrue(cipheredString != PLAINTEXT_VALUE);
    }

    [TestMethod]
    public void Decrypt_should_return_plaintext_when_passing_a_ciphered_value()
    {
        const string PLAINTEXT_VALUE = "anonymous@provider.com";

        string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);
        string plaintextString = _cryptoDummy.Decrypt(cipheredString);

        Assert.IsTrue(plaintextString == PLAINTEXT_VALUE);
    }
}
Thank you in advance.
You shouldn't have one test depending upon another. The best way to do this would be to output the encrypted text somewhere and save it. Then in the decrypt test you can start with an encrypted string and verify that you decrypt it correctly. If you use the same encryption key (which is fine for testing), the encrypted string will always be the same. So change your second unit test to something like this:
[TestMethod]
public void Decrypt_should_return_plaintext_when_passing_a_ciphered_value()
{
    const string PLAINTEXT_VALUE = "anonymous@provider.com";
    string cipheredString = "sjkalsdfjasdljs"; // ciphered value captured from a previous run

    string plaintextString = _cryptoDummy.Decrypt(cipheredString);

    Assert.IsTrue(plaintextString == PLAINTEXT_VALUE);
}
This sounds strange to me. My view of unit testing is that a unit test should test one specific situation with a definite set of data provided. If one test depends on the result of another test, the outcome is not deterministic. And secondly, you cannot be sure of the order in which the tests are executed!
I'm not so religious as to say that you can have only one assert per test. If the result you're testing contains, for example, some kind of tree structure, you'll have to assert that every level of the tree is correct, which leads to multiple asserts, because it makes (in my eyes) no sense to write a separate test for every step.
Also, in your given example I can't see that your last test depends on any other test. It simply calls the unit under test twice, and you aren't really interested in how it encrypts and decrypts the data; all you're interested in is that the round trip works. For that purpose your tests are absolutely okay.
If you need to test the algorithms used for encryption and decryption themselves, you'll have to write two tests and compare the results with some pre-defined constants, to make sure that nobody changes the algorithm used.
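For example, a hedged sketch of that constants approach, added to the CryptoDummyTest fixture from the question (the expected ciphertext is a placeholder you would capture once the algorithm and key are fixed):
[TestMethod]
public void Encrypt_should_produce_the_known_ciphertext_for_a_known_input()
{
    const string PLAINTEXT_VALUE = "anonymous@provider.com";
    const string EXPECTED_CIPHERTEXT = "..."; // captured reference value

    string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);

    Assert.AreEqual(EXPECTED_CIPHERTEXT, cipheredString);
}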