I'd like to have one file or class in which I can control which tests should be executed.
I know there is TestNG for Java, which can be used for that.
But I can't find anything for C# on Google or here on Stack Overflow related to this problem.
My current test framework has 17 automation tests (17 classes) and many more will be added this year.
Therefore I'd like to have one file/class/method in which I can set which tests should (or should not) be executed, as I don't want every test to be triggered when I'm actively working on 2-3 automation tests.
My first idea:
In NUnit we can set an [Ignore("reason")] attribute above the class or method, which skips the test.
Is it possible to control this attribute from outside the class?
I'd be happy and thankful for any other suggestions!
NUnit is the way to go for a single class with however many [Test] methods you create.
You can use the [Ignore] attribute you suggested to filter out tests that are not ready or that you just don't want to execute.
You can also use event listeners. They are a very useful tool that lets you hook into the before, after, fail, pass, etc. stages of all your tests. You can specify a condition on the unwanted tests and, in the "TestStarted" event listener, assert an ignore; this will affect all your tests marked by that specific condition.
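One way to approximate that idea inside the test assembly is an NUnit action attribute. Here is a minimal sketch (assuming NUnit 3); the TestSwitchboard and RunIfEnabled names are hypothetical, invented for this example:

using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

// Hypothetical central "switchboard": the single place that decides which fixtures may run.
public static class TestSwitchboard
{
    public static readonly HashSet<string> EnabledFixtures = new HashSet<string>
    {
        "LoginPageTests",
        "CheckoutTests"
    };
}

// Put [RunIfEnabled] on your test classes; anything not listed above is ignored at runtime.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class RunIfEnabledAttribute : Attribute, ITestAction
{
    public ActionTargets Targets => ActionTargets.Test;

    public void BeforeTest(ITest test)
    {
        var fixtureName = test.Fixture?.GetType().Name ?? test.Name;
        if (!TestSwitchboard.EnabledFixtures.Contains(fixtureName))
            Assert.Ignore("Disabled in TestSwitchboard");
    }

    public void AfterTest(ITest test) { }
}

Because the attribute runs before each test, disabled tests still show up in the results as ignored rather than silently disappearing.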
Related
When writing a unit test, is there a simple way to ensure that nothing unexpected happened?
Since the list of possible side effects is infinite, adding tons of Asserts to ensure that nothing changed at every step seems vain, and it obfuscates the purpose of the test.
I might have missed some framework feature or good practice.
I'm using C#7, .net 4.6, MSTest V1.
Edit:
The simplest example would be testing the setter of a viewmodel; two things should happen: the value should change and the PropertyChanged event should be raised.
These two things are easy to check, but now I would need to make sure that other properties' values didn't change, no other event was raised, the system clipboard was not touched...
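For context, the "easy" part might look like this minimal sketch, assuming MSTest and a hypothetical PersonViewModel implementing INotifyPropertyChanged; the question is about everything this test does not assert:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PersonViewModelTests
{
    [TestMethod]
    public void NameSetter_ChangesValue_AndRaisesPropertyChanged()
    {
        var vm = new PersonViewModel();
        var raised = new List<string>();
        vm.PropertyChanged += (sender, e) => raised.Add(e.PropertyName);

        vm.Name = "Alice";

        Assert.AreEqual("Alice", vm.Name);          // the value changed
        CollectionAssert.Contains(raised, "Name");  // PropertyChanged was raised for "Name"
    }
}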
You're missing the point of unit tests. They are "proofs". You cannot logically prove a negative assertion, so there's no point in even trying.
The assertions in each unit test should prove that the desired behavior was accomplished. That's all.
If we reduce the question to absurdity, every unit test would require that we assert that the function under test didn't start a thermonuclear war.
Unit tests are not the only kind of tests you'll need to perform. There are functional tests, integration tests, usability tests, etc. Each one has its own focus. For unit tests, the focus is proving the expected behavior of a single function. So if the function is supposed to accomplish 2 things, just assert that each of those 2 things happened, and move on.
One of the options to ensure that nothing 'bad' or unexpected happens is to follow good practices of dependency injection and mocking:
// using Rhino.Mocks;
[Test]
public void TestSomething()
{
    // Arrange
    var barMock = MockRepository.GenerateStrictMock<IBar>();
    var foo = new Foo(barMock);

    // Act
    foo.DoSomething();

    // Assert
    // ... your usual assertions; any unexpected call on the strict mock
    // has already failed the test by this point.
}
In the example above, if Foo accidentally touches Bar, that will result in an exception (because the mock is strict) and the test fails. Such an approach might not be applicable in all test cases, but it serves as a good addition to other potential practices.
An addition to your edit:
In Test-Driven Development you write only the code that will pass the test, and nothing more. Furthermore, you want to choose the simplest possible solution to accomplish this goal.
That said, you will most likely start with a failing unit test. In your situation you would not get a failing unit test at the beginning.
If you push it to the limits, you would have to check that format C:\ is not called in your application when you want to check every possible outcome. You might want to have a look at design principles like the KISS principle (Keep it simple, stupid).
If the scope of "check that nothing else happened" is to ensure the state of the model didn't change (which appears to be the case from the question), write a helper function that takes the model before your event and the model after, and compares them. Let it return the properties that changed; then you can assert that only the properties you intended to update are in the returned list. This sort of helper is portable, maintainable, and reusable.
Checking model state is a valid application of a unit test.
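A minimal sketch of such a helper, assuming public readable properties and simple value equality; the ModelDiff name is illustrative, and the caller is responsible for taking a snapshot copy of the model before acting on it:

using System.Collections.Generic;
using System.Linq;

public static class ModelDiff
{
    // Returns the names of public properties whose values differ between two snapshots.
    public static IReadOnlyList<string> ChangedProperties<T>(T before, T after)
    {
        return typeof(T)
            .GetProperties()
            .Where(p => p.CanRead)
            .Where(p => !Equals(p.GetValue(before, null), p.GetValue(after, null)))
            .Select(p => p.Name)
            .ToList();
    }
}

// Usage in a test:
//   var changed = ModelDiff.ChangedProperties(snapshotBefore, viewModel);
//   CollectionAssert.AreEquivalent(new[] { "Name" }, changed.ToList());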
This is only possible in referentially transparent languages such as Safe Haskell.
I am trying to run a SpecFlow scenario from code instead of through Test Explorer or the command line. Has someone managed to do this?
From a scenario, I can extract the method name and test method with recursion, but I cannot run this scenario method. It seems to need proper initialization and teardown, but I couldn't manage to do this.
My first thought was to use the TechTalk.SpecFlow.TestRunner class, but it doesn't seem to have a scenario selection method.
EDIT on why I want to do this:
We want to run specific scenarios from TFS. It is very cumbersome to connect test methods to work items in TFS, because:
You can only assign one test method to one work item.
For each work item you have to search for the method name, which in itself is a hassle, because the list is very long with lots of SpecFlow scenarios.
When your SpecFlow scenario gets a different name (which happens a lot), TFS cannot find the correct method anymore.
SpecFlow Scenario Outlines become practically unusable, even though they are a very powerful feature.
I want to create a mechanism where each automated work item gets the same method assigned. This method extracts the work item ID and then finds and executes the scenario(s) tagged with that work item.
I had a similar problem since my tests have some dependencies between Scenarios (shame on me, but it saves tons of copy-paste lines per Feature file). In most cases I would stick to isolated Scenarios of course.
I used reflection (see the sketch after this list):
Find all Types with a DescriptionAttribute (aka Features)
Find their MethodInfos with a TestAttribute and DescriptionAttribute (aka Scenarios)
Store them to a Dictionary
Call them by "Title of the Feature/Title of the Scenario" with Activator.CreateInstance and Invoke
You have to set the (private) field "testRunner" according to your needs of course.
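A rough sketch of that approach, assuming NUnit-generated SpecFlow code-behind classes. The ScenarioInvoker name is made up, the attribute lookup is done by name to avoid binding to a specific NUnit version, and whether "testRunner" is an instance or static field (and how it must be initialized) depends on your SpecFlow version:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using TechTalk.SpecFlow;

public static class ScenarioInvoker
{
    // Builds a "Feature title/Scenario title" -> (fixture type, test method) map
    // by scanning the generated SpecFlow code-behind classes.
    public static Dictionary<string, Tuple<Type, MethodInfo>> BuildCatalog(Assembly testAssembly)
    {
        var catalog = new Dictionary<string, Tuple<Type, MethodInfo>>();
        foreach (var type in testAssembly.GetTypes())
        {
            var featureTitle = GetDescription(type);
            if (featureTitle == null) continue;            // not a generated feature class

            foreach (var method in type.GetMethods())
            {
                var scenarioTitle = GetDescription(method);
                var isTest = method.CustomAttributes.Any(a => a.AttributeType.Name == "TestAttribute");
                if (scenarioTitle == null || !isTest) continue;

                catalog[featureTitle + "/" + scenarioTitle] = Tuple.Create(type, method);
            }
        }
        return catalog;
    }

    public static void Run(Type fixtureType, MethodInfo scenarioMethod)
    {
        var fixture = Activator.CreateInstance(fixtureType);

        // The generated scenario method expects the "testRunner" field to be populated;
        // normally the generated feature/scenario setup code does this.
        var field = fixtureType.GetField("testRunner",
            BindingFlags.Instance | BindingFlags.Static | BindingFlags.NonPublic);
        if (field != null)
            field.SetValue(fixture, TestRunnerManager.GetTestRunner());

        scenarioMethod.Invoke(fixture, null);
    }

    // Reads the text of [Description("...")] without referencing NUnit types directly.
    private static string GetDescription(MemberInfo member)
    {
        var attr = member.CustomAttributes
            .FirstOrDefault(a => a.AttributeType.Name == "DescriptionAttribute");
        return attr == null ? null : attr.ConstructorArguments[0].Value as string;
    }
}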
I have a fairly large Coded UI test and have set up each task in its own .cs class file. The main objective of the test is to check that objects have loaded on various pages in a browser. The test is set up to loop through an XML config file and invoke each method listed in the XML as the user sees fit.
Because I don't want every test method to run every time, I do not have the [TestMethod] attribute declared at the top of each class/method. Unfortunately, this means that each method that is invoked will not show up individually in the test results view, which is a big disadvantage.
Is there a way that I can apply the [TestMethod] attribute each time a method is invoked, but only for the methods I want?
The test runner uses reflection on the test assemblies to find methods with the [TestMethod] attribute and then calls those methods one by one to execute the tests. To do what you want you'd need to change the test runner, and even then you'd have to do something to change the IL of the test assemblies to dynamically add the attributes, reload the assemblies, and probably a whole lot of other things I'm glossing over. You'd basically be writing your own test framework if you got that far.
Instead, is there a reason you don't want to use test lists? They do what you seem to be asking for.
I need to develop a fairly simple algorithm, but am kind of confused as to how best to write a test for it.
General description: User needs to be able to delete a Plan. Plan has Tasks associated with it, these need to be deleted as well (as long as they're not already done).
Pseudo-code as how the algorithm should behave:
PlanController.DeletePlan(plan)
=>
    PlanDbRepository.DeletePlan()
    ForEach Task t in plan.Tasks
        If t.Status = Status.Open Then
            TaskDbRepository.DeleteTask(t)
        End If
    End ForEach
Now as far as I understand it, unit tests are not supposed to touch the Database or generally require access to any outside systems, so I'm guessing I have two options here:
1) Mock out the Repository calls, and check whether they have been called the appropriate number of times as Asserts
2) Create stubs for both repository classes, setting their delete flag manually and then verify that the appropriate objects have been marked for deletion.
In both approaches, the big question is: What exactly am I testing here? What is the EXTRA value that such tests would give me?
Any insight into this would be highly appreciated. This is technically not linked to any specific unit testing framework, although we have Rhino Mocks available. But I'd prefer a general explanation, so that I can properly wrap my head around it.
You should mock the repositories and then construct a dummy plan in your unit test containing both open and closed tasks. Then call the actual method, passing this plan, and at the end verify that the DeleteTask method was called with the correct arguments (only the tasks with status Open). This way you ensure that only open tasks associated with the plan are deleted by your method. Also don't forget (probably in a separate unit test) to verify that the plan itself has been deleted, by asserting that the DeletePlan method was called for the object you are passing.
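A sketch of that test with Rhino Mocks and NUnit. The IPlanRepository/ITaskRepository interfaces, the PlanController constructor, the collection initializer on Plan.Tasks and the Status.Closed value are all assumptions about your design:

using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class PlanControllerTests
{
    [Test]
    public void DeletePlan_DeletesPlanAndOnlyOpenTasks()
    {
        // Arrange
        var planRepo = MockRepository.GenerateMock<IPlanRepository>();
        var taskRepo = MockRepository.GenerateMock<ITaskRepository>();

        var openTask = new Task { Status = Status.Open };
        var closedTask = new Task { Status = Status.Closed };
        var plan = new Plan { Tasks = { openTask, closedTask } };

        var controller = new PlanController(planRepo, taskRepo);

        // Act
        controller.DeletePlan(plan);

        // Assert
        planRepo.AssertWasCalled(r => r.DeletePlan(plan));
        taskRepo.AssertWasCalled(r => r.DeleteTask(openTask));
        taskRepo.AssertWasNotCalled(r => r.DeleteTask(closedTask));
    }
}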
To add to Darin's answer I'd like to tell you what you are actually testing. There's a bit of business logic in there, for example the check on the status.
This unit test might seem a bit dumb right now, but what about future changes to your code and model? This test is necessary to make sure this seemingly simple functionality will always keep working.
As you noted, you are testing that the logic in the algorithm behaves as expected. Your approach is correct, but consider the future - Months down the road, this algorithm may need to be changed, a different developer chops it up and redoes it, missing a critical piece of logic. Your unit tests will now fail, and the developer will be alerted to their mistake. Unit testing is useful at the start, and weeks/months/years down the road as well.
If you want to add more, consider how failure is handled: have your DB mock throw an exception on the delete command, and test that your algorithm handles this correctly.
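For example, a sketch of such a failure test with Rhino Mocks, again using the hypothetical repository interfaces and controller from the sketch above, and assuming the contract is that the exception propagates:

using System;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class PlanControllerFailureTests
{
    [Test]
    public void DeletePlan_WhenTaskDeleteFails_PropagatesException()
    {
        var planRepo = MockRepository.GenerateMock<IPlanRepository>();
        var taskRepo = MockRepository.GenerateMock<ITaskRepository>();

        // Any attempt to delete a task blows up, simulating a database failure.
        taskRepo.Stub(r => r.DeleteTask(Arg<Task>.Is.Anything))
                .Throw(new InvalidOperationException("DB unavailable"));

        var plan = new Plan { Tasks = { new Task { Status = Status.Open } } };
        var controller = new PlanController(planRepo, taskRepo);

        // Adjust this assertion if your design wraps, logs, or swallows the error instead.
        Assert.Throws<InvalidOperationException>(() => controller.DeletePlan(plan));
    }
}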
The extra value provided by your tests is to check that your code does the right things (in this case, delete the plan, delete any open tasks associated with the plan and leave any closed tasks associated with the plan).
Assuming that you have tests in place for your Repository classes (i.e. that they do the right things when delete is called on them), then all you need to do is check that the delete methods are called appropriately.
Some tests you could write are:
Does deleting an empty plan only call DeletePlan?
Does deleting a plan with two open tasks call DeleteTask for both tasks?
Does deleting a plan with two closed tasks not call DeleteTask at all?
Does deleting a plan with one open and one closed task call DeleteTask once on the right task?
Edit: I'd use Darin's answer as the way to go about it though.
Interesting, I find unit testing helps to focus the mind on the specifications.
To that end let me ask this question...
If I have a plan with 3 tasks:
Plan1 {
Task1: completed
Task2: todo
Task3: todo
}
and I call delete on it, what should happen to the Plan?
Plan1 : ?
Task1: not deleted
Task2: deleted
Task3: deleted
Is Plan1 deleted, orphaning Task1? Or is it otherwise marked as deleted?
This is a big part of the value I see in unit tests (although it is only one of the four values):
1) Spec
2) Feedback
3) Regression
4) Granularity
As for how to test, I wouldn't suggest mocks at all. I would consider a two-part method.
The first part would look like:
public void DeletePlan(Plan p)
{
    var objectsToDelete = GetDeletedPlanObjects(p);
    DeleteObjects(objectsToDelete);
}
And I wouldn't test this method.
I would test the method GetDeletedPlanObjects, which wouldn't touch the database anyway, and which would allow you to send in scenarios like the situation above... which I would then assert with www.approvaltests.com, but that's another story :-)
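A sketch of what that second part and its test might look like. GetDeletedPlanObjects is shown here as a static helper for brevity (in the two-part method above it would live on the same class as DeletePlan); the Plan/Task shapes and the Status.Done value are assumptions, and plain NUnit asserts are used instead of ApprovalTests:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public static class PlanDeletion
{
    // Pure function: decides what should be deleted without touching the database.
    public static IList<object> GetDeletedPlanObjects(Plan plan)
    {
        var objects = new List<object> { plan };
        objects.AddRange(plan.Tasks.Where(t => t.Status == Status.Open).Cast<object>());
        return objects;
    }
}

[TestFixture]
public class GetDeletedPlanObjectsTests
{
    [Test]
    public void OnlyPlanAndOpenTasksAreSelectedForDeletion()
    {
        var done = new Task { Status = Status.Done };
        var todo1 = new Task { Status = Status.Open };
        var todo2 = new Task { Status = Status.Open };
        var plan = new Plan { Tasks = { done, todo1, todo2 } };

        var toDelete = PlanDeletion.GetDeletedPlanObjects(plan);

        CollectionAssert.AreEquivalent(new object[] { plan, todo1, todo2 }, toDelete);
    }
}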
Happy Testing,
Llewellyn
I would not write unit tests for this, because to me this is not testing behaviour but rather implementation. If at some point you want to change the behaviour to not delete the tasks but rather set them to a state of 'disabled' or 'ignored', your unit tests will fail. If you test all controllers this way, your unit tests become very brittle and will need to be changed often.
Refactor out the business logic to a 'TaskRemovalStrategy' if you want to test the business logic for this and leave the implementation details of the removal up to the class itself.
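One possible shape for such a strategy; the interface and class names are illustrative only:

using System.Collections.Generic;
using System.Linq;

public interface ITaskRemovalStrategy
{
    // Decides which tasks should be removed when their plan is deleted.
    IEnumerable<Task> SelectTasksToRemove(Plan plan);
}

public class OpenTasksOnlyRemovalStrategy : ITaskRemovalStrategy
{
    public IEnumerable<Task> SelectTasksToRemove(Plan plan)
    {
        return plan.Tasks.Where(t => t.Status == Status.Open);
    }
}

This keeps the business rule ("only open tasks go away with the plan") testable in isolation, while the repository or controller stays free to change how the actual removal is performed.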
IMO you can write your unit tests around the abstract PlanRepository and the same tests should be useful in testing the data integrity in the database also.
For example, you could write a test like this:
void DeletePlanTest()
{
    PlanRepository repo = new PlanDbRepository("connection string");
    repo.CreateNewPlan();                            // create plan and populate with tasks
    Assert.IsTrue(repo.Plan.OpenTasks.Count == 2);   // check tasks are in open state
    repo.DeletePlan();
    Assert.IsTrue(repo.Plan.OpenTasks.Count == 0);
}
This test will work even if your repository deletes the plan and your database deletes the related tasks via a cascaded delete trigger.
The value of such a test is that whether it is run against the PlanDbRepository or a MockRepository, it still checks that the behavior is correct. So when you change any repository code or even your database schema, you can run the tests to check that nothing is broken.
You can create such tests which cover all the possible behaviors of your repository and then use them to make sure that any of your changes do not break the implementation.
You can also parameterize this test with a concrete repository instance and reuse it to test any future implementations of repositories.
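A sketch of that parameterization, assuming an abstract PlanRepository base class, NUnit's TestCaseSource, and a hypothetical in-memory implementation:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class PlanRepositoryContractTests
{
    // Every repository implementation you want to verify goes here.
    private static IEnumerable<PlanRepository> Repositories()
    {
        yield return new PlanDbRepository("connection string");
        yield return new InMemoryPlanRepository();   // hypothetical test double
    }

    [TestCaseSource(nameof(Repositories))]
    public void DeletePlan_RemovesAllOpenTasks(PlanRepository repo)
    {
        repo.CreateNewPlan();                        // create plan and populate with tasks
        Assert.IsTrue(repo.Plan.OpenTasks.Count > 0);

        repo.DeletePlan();

        Assert.AreEqual(0, repo.Plan.OpenTasks.Count);
    }
}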
I remember something like 'Explicit', and Google says that NUnit has such an attribute.
Does Microsoft.VisualStudio.TestTools.UnitTesting provide something like this?
The MSTest tools do not explicitly support this type of behavior at the attribute level. At the attribute level you can either enable a test via the TestMethod attribute or completely disable it with the Ignore attribute. Once the Ignore attribute is added, MSTest will not run the test until it is removed. You cannot override this behavior via the UI.
What you can do though is disable the test via the property page. Open up the test list editor, select the test you want and hit F4 to bring up the property page. Set the Test Enabled property to false. The test will now not run until you re-enable it through the property page. It's not exactly what you're looking for but likely the closest equivalent.
You can create a "Run Manually" category for your tests using the Category attribute, and then exclude that category from your tests in the GUI. These tests will be grayed out, and you can put them back in whenever you want. I do this often for slow-running tests.
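In MSTest (on versions that have it) the corresponding attribute is TestCategory; a minimal sketch, where the category name is just a convention:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SlowIntegrationTests
{
    // Exclude the "RunManually" category in your test runner or test list for normal runs,
    // and include it only when you explicitly want these tests.
    [TestMethod, TestCategory("RunManually")]
    public void RebuildEntireSearchIndex()
    {
        // ... long-running test body ...
    }
}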
I haven't used it, and it looks pretty old (March 2008), but I see that TestListGenerator claims to auto-generate test lists based on attributes you set on your tests. If it works, this would effectively provide categories for MSTest. While not the same as Explicit, it may let you achieve what you want.