Unit testing methods where the output of one is the input of another - C#

I always try to stick to one assertion per test, but sometimes I have trouble doing so.
For example:
Say I've written a cryptographic class that encrypts and decrypts strings.
public class CryptoDummy
{
    public string Decrypt(string value)
    {
    }

    public string Encrypt(string value)
    {
    }
}
How would I create my unit test if the decryption depends on the output of the encryption?
Most of my tests, if not all up until now, are composed of one method call per test and one assertion per test.
So, to the point: is it fine to have multiple calls per test and assert the final result produced by the last method called?
public class CryptoDummyTest
{
    private static CryptoDummy _cryptoDummy;

    // Use ClassInitialize to run code before running the first test in the class
    [ClassInitialize]
    public static void MyClassInitialize(TestContext testContext)
    {
        _cryptoDummy = new CryptoDummy();
    }

    [TestMethod]
    public void Encrypt_should_return_ciphered_64string_when_passing_a_plaintext_value()
    {
        const string PLAINTEXT_VALUE = "anonymous@provider.com";

        string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);

        Assert.IsTrue(cipheredString != PLAINTEXT_VALUE);
    }

    [TestMethod]
    public void Decrypt_should_return_plaintext_when_passing_a_ciphered_value()
    {
        const string PLAINTEXT_VALUE = "anonymous@provider.com";

        string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);
        string plaintextString = _cryptoDummy.Decrypt(cipheredString);

        Assert.IsTrue(plaintextString == PLAINTEXT_VALUE);
    }
}
Thank you in advance.

You shouldn't have one test depending upon another. The best way to do this would be to output the encrypted text somewhere and save it. Then, in the decrypt test, you can start from that saved encrypted text and verify that it decrypts correctly. If you use the same encryption key (which is fine for testing), the encrypted string will always be the same. So change your second unit test to something like this:
[TestMethod]
public void Decrypt_should_return_plaintext_when_passing_a_ciphered_value()
{
    const string PLAINTEXT_VALUE = "anonymous@provider.com";
    string cipheredString = "sjkalsdfjasdljs"; // ciphered value captured
    string plaintextString = _cryptoDummy.Decrypt(cipheredString);
    Assert.IsTrue(plaintextString == PLAINTEXT_VALUE);
}

This sounds strange to me. My view of unit testing is that a unit test should exercise one specific situation with a definite set of data provided. If one test depends on the result of another test, the result is not deterministic. The second issue is that you cannot be assured of the order in which the tests are executed!

I'm not so religious as to say that you can have only one assert per test. If the result you are testing contains, for example, some kind of tree structure, you'll have to assert that every stage in the tree is correct, thus leading to multiple asserts, because it makes (in my eyes) no sense to write a single test for every step.
Also, in your given example I can't see that your last test depends on any other test. It simply calls the unit under test two times, and indeed you are not really interested in how it encrypts and decrypts the data. All you are interested in is that it works. So for that kind of check, your tests are absolutely okay.
If you need to test the algorithm used for decryption and encryption, you'll have to write two tests and compare the results with some pre-defined constants, to make sure that nobody changes the algorithm.
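If the encryption is deterministic (fixed key, no random IV), such a known-answer test could look like the sketch below; the Base64 ciphertext constant is hypothetical and would be captured once from a trusted run of the current implementation:

[TestMethod]
public void Encrypt_should_return_known_ciphertext_for_fixed_key()
{
    const string PLAINTEXT_VALUE = "anonymous@provider.com";
    // Hypothetical value captured from a trusted run with the fixed test key.
    const string KNOWN_CIPHERTEXT = "3q2+7w==";

    string cipheredString = _cryptoDummy.Encrypt(PLAINTEXT_VALUE);

    // Fails if anyone changes the algorithm, key, or encoding.
    Assert.AreEqual(KNOWN_CIPHERTEXT, cipheredString);
}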


Coded UI test case code generation [duplicate]

This question already has answers here: How to run a test many times with data read from .csv file (data driving) (3 answers). Closed 6 years ago.
I have a desktop application for which I have 200 test cases with different input parameters.
The issue is that I am currently recording each and every test case separately with its different input parameters.
Is there any way I can copy the code and change only the parameters, so that the code stays the same for all the test cases and only the input parameters change?
There are a few things to address here. Firstly, you can run the test using a data-driven approach, as described in the link above.
More important, in my opinion anyway, is how you write your tests so that they can be data driven, and what exactly you are testing that needs so many combinations.
When writing tests, it is important to have reusable code to test with. I would recommend looking at something like Code First Scaffolding or Coded UI Page Modeling (I wrote the page modeling stuff). With these approaches, your test code is far more maintainable and flexible (easier to change by hand). This would allow for extremely simple data-driven tests.
public void WhenPerformingCalculation_ThenResultIsCorrect()
{
    // imagine a calculator with two numbers and a sign
    var testResult = modelUnderTest
        .LeftSideNumber.SetValue(3)   // set first number
        .Operator.SetValue("*")       // set sign
        .RightSideNumber.SetValue(10) // set right number
        .Evaluate.Click()             // press evaluate button
        .Result;                      // get the result
    Assert.AreEqual(testResult, 30);
}
becomes
public class CalculationParameters
{
    public double LeftNumber { get; set; }
    public string Operator { get; set; }
    public double RightNumber { get; set; }
    public double Result { get; set; }

    public override string ToString() { return $"{LeftNumber} {Operator} {RightNumber} = {Result}"; }
}

public void WhenPerformingCalculation_ThenResultIsCorrect()
{
    ICollection<CalculationParameters> parameters = getParameters();
    List<Exception> exceptions = new List<Exception>();

    foreach (CalculationParameters parameter in parameters)
    {
        try
        {
            var testResult = modelUnderTest
                .LeftSideNumber.SetValue(parameter.LeftNumber)   // set first number
                .Operator.SetValue(parameter.Operator)           // set sign
                .RightSideNumber.SetValue(parameter.RightNumber) // set right number
                .Evaluate.Click()                                // press evaluate button
                .Result;                                         // get the result
            Assert.AreEqual(testResult, parameter.Result);
        }
        catch (Exception e)
        {
            exceptions.Add(new Exception($"Failed for parameters: {parameter}", e));
        }
    }

    if (exceptions.Any())
    {
        throw new AggregateException(exceptions);
    }
}
Secondly, why do you need to test so many combinations of input/output in a given test? If you are testing something like "Given the login page, when supplying invalid credentials, then a warning is shown to the user", how many invalid inputs do you really need to test? There would be a second test for valid credentials, and no data driving is necessary.
I would also caution you to be careful that you are not testing through the UI things that should be unit tests. It sounds like you are testing different combinations of inputs to see whether the UI generates the correct output, which should probably be a unit test of your underlying system. When testing the UI, it is typically sufficient to test that the bindings to your view models are correct, not that calculations or other server logic are performed accurately.
My example above shows what I would NOT test client side, unless that calculator only exists client side (no server-side validation or logic regarding the calculation). Even then, I would probably use a JavaScript test runner to test the view model powering my calculator instead of using Coded UI for this test.
Would you be able to provide some example of the combinations of input/output you are testing?
You can pass arguments to the application through the command line with a batch script, or you can create a function that receives the requested parameters.
In the application's entry point you can use
static void Main(string[] args)
where the string array will contain the arguments from the command line.
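As a minimal sketch of that idea (the argument convention here is made up for illustration), the entry point could forward command-line values into your test code like this:

using System;

class Program
{
    static void Main(string[] args)
    {
        // Hypothetical convention: first argument is the test name,
        // the remaining arguments are the input parameters for that case.
        if (args.Length < 2)
        {
            Console.WriteLine("Usage: TestRunner <testName> <param1> [param2] ...");
            return;
        }

        string testName = args[0];
        string[] parameters = args[1..];

        Console.WriteLine($"Running {testName} with {parameters.Length} parameter(s).");
        // ... dispatch to the recorded test, passing these parameters ...
    }
}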

How to use variables between unit tests? C# WebDriver

I'm a beginner with WebDriver and C#. I want to use a variable from the first test in other tests; how do I do that? I got to this point with some examples, but it does not work. I can see that the first test gets the login right, but when I start the second test and try SendKeys, I find that loginName is null. (The code is a shortened version, only to give you an idea of what I'm trying to do.)
[TestFixture]
public class TestClass
{
    private IWebDriver driver;
    private StringBuilder verificationErrors;
    private string baseURL;
    private bool acceptNextAlert = true;
    static public String loginName;
    static public String loginPassword;

    [SetUp]
    public void SetupTest()...

    [TearDown]
    public void TeardownTest()...

    [Test]
    public void GetLoginAndPassword()
    {
        loginName = driver.FindElement(By.XPath("...")).Text;
        loginPassword = driver.FindElement(By.XPath("...")).Text;
        Console.WriteLine(loginName);
    }

    [Test]
    public void Test1()
    {
        driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
        driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
    }
}
You cannot (and should not) pass variables between tests. Test methods are independent of one another... and should actually Assert() something.
Your first method, GetLoginAndPassword(), isn't a test method per se but a utility method. If you use the Selenium PageObject pattern, this is probably a method of your PageObject class that you can run at the beginning of your actual Test1() method.
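As a rough sketch of that idea (the LoginPage class and its locators are hypothetical):

// Hypothetical PageObject wrapping the login page.
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    // A utility method, not a test: reads the displayed credentials.
    public (string Name, string Password) ReadDisplayedCredentials()
    {
        string name = driver.FindElement(By.XPath("...")).Text;
        string password = driver.FindElement(By.XPath("...")).Text;
        return (name, password);
    }

    public void Login(string name, string password)
    {
        driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(name);
        driver.FindElement(By.Id("Password")).SendKeys(password);
    }
}

Test1() would then create the page object, read the credentials, and log in, all within the one test.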
The problem is that the methods marked with TestAttribute do not necessarily run sequentially in the order you implemented them. Thus it might be possible that Test1 runs long before GetLoginAndPassword. You have to either call that method once from within the constructor or during test initialization, or before every test run.
[Test]
public void Test1()
{
    GetLoginAndPassword();
    driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
    driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
}
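Alternatively, if GetLoginAndPassword is reworked into a plain helper method (dropping its [Test] attribute), NUnit's [SetUp] method can call it before every test, so no test depends on another having run first:

[SetUp]
public void SetupTest()
{
    // ... existing driver initialization ...

    // Fetch the credentials before every test.
    GetLoginAndPassword();
}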
Probably your GetLoginAndPassword isn't even a method to test but a method used by your tests (unless you actually have a method called GetLoginAndPassword within your system under test). In any case, since there are no asserts at all, your tests are somewhat odd.
The purpose of unit testing is to test whether a specific unit (meaning a group of closely related classes) works as specified. It is not meant to test whether your complete, elaborate program works as specified.
Test-driven design has the advantage that you become more aware of what each function should do. Each function is supposed to transform a precondition into a specified postcondition, regardless of what you did before or after calling the function.
If your tests assume that other tests run before them, then you won't test the use case where those other functions are not called, or where only some of them are called.
This leads to the conclusion that each test method should be able to run independently. Each test should set up the precondition, call the function, and check whether the postcondition is met.
But what if my function A only works correctly if another function B is called first?
In that case the specification of A ought to describe what happens if B was called before A, as well as what happens if A is called without calling B first.
If your unit test would only test B followed by A, you would not test whether A reacts according to specification when B has not been called.
Example.
Suppose we have a Divider class that divides any number by a denominator that can be set using a property.
public class Divider
{
    public double Denominator { get; set; }

    public double Divide(double numerator)
    {
        return numerator / this.Denominator;
    }
}
It is obvious that in normal usage one ought to set property Denominator before calling Divide:
Divider divider = new Divider() { Denominator = 3.14 };
Console.WriteLine(divider.Divide(10));
Your specification ought to describe what happens if Divide is called without setting Denominator to a non-zero value. The description would be something like:
If method Divide is called with a parameter value X and the value of Denominator is a non-zero Y, then the return value is X/Y. If the value of Denominator is zero, the result is infinity (note that for double operands C# returns double.PositiveInfinity or double.NegativeInfinity rather than throwing; System.DivideByZeroException is only thrown for integer division).
You should create at least two tests: one for the use case where Denominator was set to a proper non-zero value, and one for the use case where Denominator is not set at all. And if you are very thorough: a test for the use case where Denominator is first set to a non-zero value and then to zero.
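A minimal sketch of those two tests (NUnit-style, matching the double behavior described above):

[Test]
public void Divide_should_return_quotient_when_denominator_is_set()
{
    Divider divider = new Divider() { Denominator = 4.0 };

    Assert.AreEqual(2.5, divider.Divide(10));
}

[Test]
public void Divide_should_return_infinity_when_denominator_is_not_set()
{
    // Denominator defaults to 0.0; dividing a double by zero
    // yields infinity in C# instead of throwing.
    Divider divider = new Divider();

    Assert.IsTrue(double.IsPositiveInfinity(divider.Divide(10)));
}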

How to programmatically tell NUnit to repeat a test?

Background:
I'm running NUnit from within my C# code, using a SimpleNameFilter and a RemoteTestRunner. My application reads a CSV file, TestList.csv, that specifies which tests to run. Up to that point everything works OK.
Problem:
The problem occurs when I put the same test name twice in my TestList file. In that case, my application correctly reads the file and loads the SimpleNameFilter with two instances of the test name. The filter is then passed to the RemoteTestRunner, but NUnit executes the test only once. It seems that when NUnit sees the second instance of a test it has already run, it ignores it.
How can I override this behavior? I'd like NUnit to run the same test two or more times, as specified in my TestList.csv file.
Thank you,
Joe
http://www.nunit.org/index.php?p=testCase&r=2.5
TestCaseAttribute serves the dual purpose of marking a method with parameters as a test method and providing inline data to be used when invoking that method. Here is an example of a test being run three times, with three different sets of data:
[TestCase(12, 3, Result = 4)]
[TestCase(12, 2, Result = 6)]
[TestCase(12, 4, Result = 3)]
public int DivideTest(int n, int d)
{
    return n / d;
}
Running an identical test twice should produce the same result. An individual test can either pass or fail. If you have tests that sometimes work and sometimes fail, then it feels like the wrong thing is happening, which is why NUnit doesn't support this out of the box. I imagine it would also cause problems when reporting the results of the test run: does it say that test X worked, or failed, if both happened?
The closest thing you're going to get in NUnit is something like the TestCaseSource attribute (which you already seem to know about). You can use TestCaseSource to specify a method which can, in turn, read from a file. So you could, for example, have a file "cases.txt" which looks like this:
Test1,1,2,3
Test2,wibble,wobble,wet
Test1,2,3,4
And then use this from your tests like so:
[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c)
{
}

[Test]
[TestCaseSource("Test2Source")]
public void Test2(string a, string b, string c)
{
}

public IEnumerable Test1Source()
{
    return GetCases("Test1");
}

public IEnumerable Test2Source()
{
    return GetCases("Test2");
}

public IEnumerable GetCases(string testName)
{
    var cases = new List<IEnumerable>();
    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    foreach (var line in lines)
    {
        var args = line.Split(',');
        var currentcase = new List<object>();
        for (var i = 1; i < args.Count(); i++)
        {
            currentcase.Add(args[i]);
        }
        cases.Add(currentcase.ToArray());
    }
    return cases;
}
This is obviously a very basic example that results in Test1 being called twice and Test2 once, with the arguments from the text file. However, this again only works if the arguments passed to the test are different, since NUnit uses the arguments to create a unique test name. You could work around this by having the test source generate a unique number for each method call and passing it to the test as an extra argument that the test simply ignores.
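That workaround might look something like this sketch, where the extra uniqueId argument exists only to make each generated case distinct:

public IEnumerable GetCases(string testName)
{
    var cases = new List<object[]>();
    var uniqueId = 0; // distinguishes otherwise-identical cases

    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    foreach (var line in lines)
    {
        var args = line.Split(',').Skip(1).Cast<object>().ToList();
        args.Add(uniqueId++); // extra argument the test simply ignores
        cases.Add(args.ToArray());
    }
    return cases;
}

[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c, int uniqueId)
{
    // uniqueId is ignored; it only exists so NUnit sees each run as distinct.
}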
An alternative would be to run NUnit from a script that calls it over and over again, once for each line of the file, although I imagine this may cause you other issues when consolidating the reports from the multiple runs.

Returning multiple assert messages in one test

I'm running some tests on my code at the moment. My main test method is used to verify some data, but within that check there is a lot of potential for it to fail at any one point.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I typed is displayed as expected. However, if my method fails multiple times, only the first error is shown. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally, I'd like my failure output to display every failed message in one pass. Is such a thing possible?
As per the comments, here is how I'm setting up a couple of the checks in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail(" Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail(" Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail(" Search Display View entity type was different from what was expected");
    }
    return true;
}
Why not have a string/StringBuilder that holds all the fail messages, check its length at the end of your code, and pass it into Assert.Fail? Just a suggestion :)
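Applied to the method in the question, that suggestion might look like this sketch:

private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    var errors = new StringBuilder();

    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        errors.AppendLine("Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        errors.AppendLine("Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        errors.AppendLine("Search Display View entity type was different from what was expected");
    }

    // Report every failure in one pass instead of stopping at the first one.
    if (errors.Length > 0)
    {
        Assert.Fail(errors.ToString());
    }
    return true;
}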
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break your test up into smaller, single-assert tests. In general, you only want to be testing one thing per test anyway.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.AreEqual) and only falling back to Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally what I'd like is the ability to have my failure message display every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of the code duplication that usually appears when tests are complex, you can use setup methods. They are usually marked by attributes:
NUnit - SetUp,
MSTest - TestInitialize,
xUnit - constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";

    EntityModel.MultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModel.MultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names can provide the same information as the messages you would otherwise write:
Right now, I've set up multiple Assert.Fail statements within my method and when the test is failed, the message I type is displayed as expected. However, if my method fails multiple times, it only shows the first error. When I fix that, it is only then I discover the second error.
This behavior is correct, and many testing frameworks follow it.
I'd recommend you stop using Assert.Fail(), because it forces you to write a specific message for every failure. The common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModel.MultiIndexEntities();

// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend moving to a should-framework like Fluent Assertions, which makes your code more readable and provides better output.
// Act
var context = new EntityModel.MultiIndexEntities();

// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);

How to test callbacks with NUnit

Is there any special support for testing callbacks with NUnit? Or some kind of "best practice" that is better than my solution below?
I just started writing tests and methods, so I still have full control. However, I think it might be annoying if there are better ways to test callbacks thoroughly, especially as complexity increases. So this is a simple example of how I am testing right now:
The method under test uses a delegate that calls a callback function, for instance as soon as a new XML element is discovered in a stream. For testing purposes I pass the NewElementCallback method to the delegate and store the argument's content in some properties of the test class when the function is called. These properties I use for assertion. (Of course they are reset in the test setup.)
[Test]
public void NewElement()
{
    String xmlString = @"<elem></elem>";
    this.xml.InputStream = new StringReader(xmlString);
    this.xml.NewElement += this.NewElementCallback;
    this.xml.Start();
    Assert.AreEqual("elem", this.elementName);
    Assert.AreEqual(0, this.elementDepth);
}

private void NewElementCallback(string elementName, int elementDepth)
{
    this.elementName = elementName;
    this.elementDepth = elementDepth;
}
You could avoid the need for private fields if you use a lambda expression; that's how I usually do this:
[Test]
public void NewElement()
{
    String xmlString = @"<elem></elem>";
    string elementName = null;
    int elementDepth = -1;
    this.xml.InputStream = new StringReader(xmlString);
    this.xml.NewElement += (name, depth) => { elementName = name; elementDepth = depth; };
    this.xml.Start();
    Assert.AreEqual("elem", elementName);
    Assert.AreEqual(0, elementDepth);
}
It makes your tests more cohesive, and having fields on any test class is always asking for disaster!
There isn't anything special in NUnit for this that I know of. I test these things the same way you do. I do tend to put the callback method and the state it stores on another class. I think it makes it a bit cleaner, but it isn't fundamentally different.
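A sketch of that idea, with the callback and its recorded state moved to a small helper class (the names here are made up for illustration):

// Hypothetical helper that records what the callback received.
public class NewElementRecorder
{
    public string ElementName { get; private set; }
    public int ElementDepth { get; private set; }

    public void Handle(string elementName, int elementDepth)
    {
        ElementName = elementName;
        ElementDepth = elementDepth;
    }
}

[Test]
public void NewElement()
{
    var recorder = new NewElementRecorder();
    this.xml.InputStream = new StringReader(@"<elem></elem>");
    this.xml.NewElement += recorder.Handle;
    this.xml.Start();
    Assert.AreEqual("elem", recorder.ElementName);
    Assert.AreEqual(0, recorder.ElementDepth);
}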
From your example, I can't tell exactly what you're trying to do. NUnit doesn't provide any specific way to test this kind of thing, but this link should give you some ideas on how to start unit testing asynchronous code: Unit Testing Asynchronous Code
