As a beginner to TDD I am trying to write a test that assumes a property has had its value changed on a PropertyGrid (C#, WinForms, .NET 3.5).
Changing a property in code on the object shown in the PropertyGrid does not fire the grid's changed event (fair enough, as it's a UI-raised event, so I can see why a change to the owned object may be invisible to it).
I also had the same issue with getting an AfterSelect on a TreeView to fire when changing the SelectedNode property.
I could have a function that my unit test can call to simulate the code a UI event would run, but that would clutter up my code, and unless I make it public I would have to write all my tests in the same project, or even class, as the objects I am testing (again, I see this as clutter). This seems ugly to me and would suffer from maintainability problems.
Is there a convention for doing this sort of UI-based unit testing?
To unit test your code you will need to mock out the UI element behind an interface. There are many tools you can use to do this, and I can't recommend one over another. There's a good comparison between Moq and Rhino Mocks on Phil Haack's blog that I've found useful and might be useful to you too.
Another thing to consider if you're using TDD: creating an interface for your views will assist in the TDD process. There is a design pattern for this (probably more than one, but this is the one I use) called Model View Presenter (now split into Passive View and Supervising Controller). Following one of these will make your code-behind far more testable in the future.
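As a rough illustration of that idea (all of the names here are invented, not taken from your code), the presenter works only against a view interface, so a test can drive it with a fake view and never touch a real TreeView or PropertyGrid:
using System;

public interface IMainView
{
    event EventHandler SelectedNodeChanged;
    string SelectedNodeText { get; }
    void ShowDetails(string details);
}

public class MainPresenter
{
    private readonly IMainView view;

    public MainPresenter(IMainView view)
    {
        this.view = view;
        this.view.SelectedNodeChanged += (s, e) => HandleSelectionChanged();
    }

    // All the logic lives here; a test raises SelectedNodeChanged on a fake view
    // and asserts that ShowDetails was called with the expected text.
    public void HandleSelectionChanged()
    {
        view.ShowDetails("Details for " + view.SelectedNodeText);
    }
}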
Also, bear in mind that testing the UI itself cannot be done through unit testing. A test automation tool as already suggested in another answer will be appropriate for this, but not for unit testing your code.
Microsoft has UI Automation built into the .Net Framework. You may be able to use this to simulate a user utilising your software in the normal way.
There is an MSDN article, "Using UI Automation for Automated Testing", which is a good starting point.
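A rough sketch of what driving the UI through UI Automation can look like (the executable name and the "okButton" AutomationId are placeholders, and you'll need references to UIAutomationClient and UIAutomationTypes):
using System.Diagnostics;
using System.Windows.Automation;

public class UiAutomationExample
{
    public void ClickOkButton()
    {
        // Launch the application under test ("MyApp.exe" is a placeholder).
        Process app = Process.Start("MyApp.exe");
        app.WaitForInputIdle();

        // Find the application's main window by process id.
        AutomationElement mainWindow = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.ProcessIdProperty, app.Id));

        // Find a button by its AutomationId and click it the way a user would.
        AutomationElement okButton = mainWindow.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "okButton"));

        InvokePattern invoke = (InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}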
One option I would recommend for its simplicity is to have your UI event handlers just call a helper class or method, and unit test that. Make sure the event handler in the UI has as little logic as possible, and from there I'm sure you'll know what to do.
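For example (the names below are invented for illustration), the click handler only delegates, and the test targets the plain helper class:
using System.Collections.Generic;
using System.Linq;

// Plain helper with the real logic - no UI dependency, so it is trivial to unit test.
public static class PriceCalculator
{
    public static decimal Total(IEnumerable<decimal> linePrices)
    {
        return linePrices.Sum();
    }
}

// In the form, the click handler does nothing but delegate:
// private void calculateButton_Click(object sender, EventArgs e)
// {
//     totalLabel.Text = PriceCalculator.Total(GetLinePrices()).ToString();
// }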
It can be pretty difficult to reach 100% coverage in your unit tests. By difficult I really mean inefficient. Even once you get good at something like that, it will, in my opinion, probably add more complexity to your code base than the unit tests would merit. If you're not sure how to get your logic segmented into a separate class or method, that's another question I would love to help with.
I'll be interested to see what other techniques people have to work with this kind of issue.
Related
I have developed an application in WPF using MVVM because of the added benefits of separation and testability. However, I am trying to write some unit tests as part of this and am confused about what to test. I know how to write the unit tests, but am unsure of what I should be testing in the view model, which is made up of properties for data bindings and methods for some logic.
Furthermore, most of my view model methods are private because they only need to be accessed from inside the view model, so they cannot simply be tested via unit tests like a public method would be. This means I can test very little of the view model, which undermines the supposed testing value of MVVM and, from a quality point of view, is disadvantageous as I have to rely on manual tests to prove the functionality of my code.
I might be wrong and am new to using MVVM but any help would be appreciated on how to go about this.
When I write WPF applications I focus my testing on the models.
I test view-models by calling commands and setting properties like the user would do by using the user interface. For trivial view models that just wrap a model one-to-one or call a service with 4 lines of code I don't write any initial tests.
As soon as I find something that doesn't work as expected when running the application I go back and write a test for that particular use case. That initial "bug" usually shows what was tricky to implement in that particular view model and is a good starting point to write more tests and continue development in a more test driven fashion.
You can test the same things that the user can do on your UI.
By definition those things will be public, since the view will be binding to them.
E.g. say you have a Widgets collection and an AddWidgetCommand. You can test that executing the command will add a widget to the collection.
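A self-contained sketch of that kind of test (the view model here is deliberately minimal and invented for illustration):
using System;
using System.Collections.ObjectModel;
using System.Windows.Input;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Invented, deliberately minimal view model for the example.
public class WidgetListViewModel
{
    public ObservableCollection<string> Widgets { get; private set; }
    public ICommand AddWidgetCommand { get; private set; }

    public WidgetListViewModel()
    {
        Widgets = new ObservableCollection<string>();
        AddWidgetCommand = new DelegateCommand(() => Widgets.Add("new widget"));
    }

    // Tiny ICommand implementation so the sample stands alone.
    private class DelegateCommand : ICommand
    {
        private readonly Action execute;
        public DelegateCommand(Action execute) { this.execute = execute; }
        public event EventHandler CanExecuteChanged { add { } remove { } }
        public bool CanExecute(object parameter) { return true; }
        public void Execute(object parameter) { execute(); }
    }
}

[TestClass]
public class WidgetListViewModelTests
{
    [TestMethod]
    public void ExecutingAddWidgetCommandAddsAWidget()
    {
        var viewModel = new WidgetListViewModel();

        viewModel.AddWidgetCommand.Execute(null);

        Assert.AreEqual(1, viewModel.Widgets.Count);
    }
}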
There is a need to create small-scale WinForms applications for private use by my company for interaction with our database. Period. I know that ASP.NET MVC has a very mature unit testing story for the MVC pattern, and I want to do TDD.
Thus, is it possible to use NUnit or some other easy/mature unit testing framework (EDIT: based on my experience with ASP.NET) with the following tutorial given here? I have googled and checked my technical book library, and there is a distinct lack of documentation on how to do effective unit testing for WinForms. Thus, I am turning to this forum hoping individuals can share their domain knowledge with me.
The most I have found has recommended the MVC pattern (which I agree with), but not how to address specific issues with the winform. For example, how do I test a button click and subsequent action by that button?
I plan on using C# VS13 .NET 4.5. I am excited to join this resource and will contribute rep to all who answer my inquiry. Thanks.
As you have probably noticed, the idea there is to have your view described with an interface.
Thus, the controller doesn't really need a window, it needs a class implementing the interface.
And thus, you could have yet another, auto-mocked or manual, implementation of the view interface that doesn't involve the WinForms subsystem but rather exposes the data needed to write your assertions.
Having your stub view, you just write a script against it. You expose some methods from the class that allow you to automate the interaction:
public class ViewStub : IView
{
    private IController controller;

    // let the test (or the controller's constructor) wire up the controller
    public void SetController(IController controller)
    {
        this.controller = controller;
    }

    // implement the view interface as it is, but also add
    // extra members that let you automate it in your unit tests
    public void RaiseButtonClick()
    {
        this.controller.DoTheButtonClickStuff();
    }
}
Then your test becomes (I follow the convention from the tutorial):
ViewStub form = new ViewStub();
IModel mdl = new IncModel();
IController cnt = new IncController(form, mdl);
form.SetController(cnt);   // or however the tutorial wires the view to its controller
form.RaiseButtonClick();
Unit Testing a GUI is something that is independent of the GUI library used. The answer to the general case answers your case as well:
How can I unit test a GUI?
Unit testing a GUI is done by minimizing the footprint of classes that do depend on your GUI framework. Then the classes that still depend on the GUI framework are not unit tested (which is not at all the same as "are not tested"). Those classes are the "View" in patterns like MVP, MVC, MVVM.
Conclusion: Every good .Net Unit Testing framework is a good WinForms Unit Testing framework. One example of such a framework is NUnit, which you mentioned in your question.
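To make that concrete, a minimal NUnit test against a presenter-level class (the class here is a deliberately trivial, invented example) looks no different from any other unit test:
using NUnit.Framework;

// Invented presenter-level class with no WinForms dependency.
public class OrderPresenter
{
    private decimal total;
    public decimal Total { get { return total; } }
    public void AddLine(decimal price) { total += price; }
}

[TestFixture]
public class OrderPresenterTests
{
    [Test]
    public void TotalIsSumOfLinePrices()
    {
        var presenter = new OrderPresenter();
        presenter.AddLine(10m);
        presenter.AddLine(5m);

        Assert.AreEqual(15m, presenter.Total);
    }
}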
Unit testing is not intended for UI. Of course you can do it, but I don't recommend it, for a few reasons: you cannot unit test DPI changes, multiple screens, how your program acts in different window states, or what happens if the user moves between your controls with the Tab key. Unit testing the UI will only give you false safety.
Use UI automation tools to test the UI automatically if you want to, but leave unit testing out of it. Keep the UI layer as thin as possible; this is also good design practice. You can use unit testing for the other classes that your WinForms app uses.
Whilst practicing some TDD at work on an ASP.NET MVC project, I ran into a number of scenarios where I was writing tests to ensure that particular actions returned the correct views or had particular attributes on them ([ChildActionOnly] etc.). (In fact, I found a number of interesting posts here on SO about useful extension methods to help achieve this.)
When I was first introduced to the concepts of unit testing and TDD on a course a few years ago, the emphasis was strongly on tests focusing on the logic behind the user-desired features and functionality - the core project 'requirements', if you will.
My question is: if this is the case, are menial tests that check for the correct view file being rendered, or for an action having a particular attribute, etc. not really encompassing what the unit testing methodology is all about? Am I writing tests for the wrong reasons (i.e. simply protecting myself and other colleagues from making a refactoring mistake), or are these valid cases of valuable unit tests?
If a handler method could return one of two (or more) views depending on some logic then a unit test that asserted the correct view would be useful. Same goes for a handler method that inserted particular attributes depending on the logic.
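For instance, a sketch of such a test (the controller here is invented, with an action that picks one of two views):
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Invented controller for illustration: the action returns a different view depending on a flag.
public class OrdersController : Controller
{
    public ActionResult Edit(bool isLocked)
    {
        return isLocked ? View("ReadOnly") : View("Edit");
    }
}

[TestClass]
public class OrdersControllerTests
{
    [TestMethod]
    public void EditReturnsReadOnlyViewWhenOrderIsLocked()
    {
        var controller = new OrdersController();

        var result = controller.Edit(true) as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("ReadOnly", result.ViewName);
    }
}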
Am I writing tests for the wrong reasons (i.e. simply protecting other
colleagues from making a refactoring mistake) or are these valid cases
of valuable unit tests?
Catching regression errors is one of the benefits of unit tests, and it's especially useful when refactoring. If you inadvertently changed the view returned while doing some refactoring, that would be very useful to catch early, rather than waiting for it to surface only when the application was running.
If your action returns different views (or action results) depending on some logic, write a test.
If you always return View() then don't.
If you return a view with a name different from your action's name - then you could argue it would be a good idea to write a test.
It depends. To use one of your examples: checking whether an action has a particular attribute is okay, but it's better to write a test that verifies the behaviour is functioning as expected, one that would fail if the attribute were missing.
That said, ultimately your tests are a safety net. If it has a reasonable chance of preventing a bug from slipping in at some point in the future, it's doing its job.
If it's a simple test that has low overhead and a low chance of breaking arbitrarily in the future, go for it. I'd rather have too many tests than too few.
Indeed, your tests should be testing your logic. That should not go away. However, ideally you should write tests for anything that can go wrong. For instance, ensuring that all methods that need to be secured have a proper Authorize attribute. This is a security test.
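A sketch of such a security test using reflection (AdminController and Delete are placeholder names standing in for your own controller and action):
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stand-in for your real controller, just so the sample compiles.
public class AdminController : Controller
{
    [Authorize]
    public ActionResult Delete(int id) { return View(); }
}

[TestClass]
public class SecurityTests
{
    [TestMethod]
    public void DeleteActionRequiresAuthorization()
    {
        var method = typeof(AdminController).GetMethod("Delete");

        // Fails if someone removes the [Authorize] attribute during a refactor.
        var attributes = method.GetCustomAttributes(typeof(AuthorizeAttribute), true);

        Assert.IsTrue(attributes.Length > 0, "Delete must be protected with [Authorize].");
    }
}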
Ultimately, you decide what's useful for you to test.
I recently joined a company where they use TDD, but it's still not clear for me, for example, we have a test:
[TestMethod]
public void ShouldReturnGameIsRunning()
{
var game = new Mock<IGame>();
game.Setup(g => g.IsRunning).Returns(true);
Assert.IsTrue(game.Object.IsRunning);
}
What's the purpose of it? As far as I understand, this is testing nothing! I say that because it's mocking an interface, and saying, return true for IsRunning, it will never return a different value...
I'm probably thinking wrong, as everybody says it's a good practice etc etc... or is this test wrong?
This test is exercising an interface and verifying that the interface exposes a Boolean getter named IsRunning.
This test was probably written before the interface existed, and probably well before any concrete implementations of IGame existed either. I would imagine, if the project is of any maturity, there are other tests that exercise the actual implementations and verify behavior at that level, and probably use mocking in the more traditional isolationist context rather than stubbing theoretical shapes.
The value of these kinds of tests is that it forces one to think about how a shape or object is going to be used. Something along the lines of "I don't know exactly what a game is going to do yet, but I do know that I'm going to want to check if it's running...". So this style of testing is more to drive design decisions, as well as validate behavior.
Not the most valuable test on the planet, but serves its purpose in my opinion.
Edit: I was on the train last night when I was typing this, but I wanted to address Andrew Whitaker's comment directly in the answer.
One could argue that the signature in the interface itself is enough of an enforcement of the desired contract. However, this really depends on how you treat your interfaces and ultimately how you validate the system that you are trying to test.
There may be tangible value in explicitly stating this functionality as a test. You are explicitly verifying that this is a desired functionality for an IGame.
Let's take a use case: say that as a designer you know you want to expose a public getter for IsRunning, so you add it to the interface. But suppose another developer sees the property, doesn't find any usages of it anywhere else in the code, and assumes it is eligible for deletion (or code-rot removal, which is normally a good thing) without harm. The existence of this test, however, explicitly states that this property should exist.
Without this test, a developer could modify the interface, drop the implementation, and the project would still compile and run seemingly correctly. And it would not be until much later that someone might notice the interface had been changed.
With this test, even if it appears as if no one is using it, your test suite will fail. The self-documenting nature of the test, ShouldReturnGameIsRunning(), tells you why it is there, and at the very least it should spawn a conversation about whether IGame should really expose an IsRunning getter.
The test is invalid. Moq is a framework used to "mock" objects that the method under test depends on. If you were testing IsRunning, you might use Moq to provide certain implementations for the methods that IsRunning depends on, for example to cause certain execution paths to execute.
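For contrast, here is a more typical use of Moq, where the dependency is mocked rather than the class under test (GameEngine and IClock are invented for the example):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Invented types: the class under test is real, only its dependency is mocked.
public interface IClock { DateTime Now { get; } }

public class GameEngine
{
    private readonly IClock clock;
    public DateTime? StartedAt { get; private set; }
    public bool IsRunning { get { return StartedAt.HasValue; } }

    public GameEngine(IClock clock) { this.clock = clock; }
    public void Start() { StartedAt = clock.Now; }
}

[TestClass]
public class GameEngineTests
{
    [TestMethod]
    public void GameIsRunningAfterStart()
    {
        // Mock the dependency (IClock), not the class under test (GameEngine).
        var clock = new Mock<IClock>();
        clock.Setup(c => c.Now).Returns(new DateTime(2020, 1, 1));

        var game = new GameEngine(clock.Object);
        game.Start();

        Assert.IsTrue(game.IsRunning);
    }
}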
People can tell you all day that testing first is better; you won't realize that it actually is until you start doing it yourself and realize that you have fewer bugs with code you wrote tests for first, which will save you and your co-workers time and probably make the quality of your code higher.
It could be that the test was written before the code, which is good practice.
It might equally be that this is a pointless test created by a tool.
Another possibility is that the code was written by somebody who, at the time, didn't understand how to use that particular mocking framework - and wrote one or more simple tests to learn. Kent Beck talked about that sort of "learning test" in his original TDD book.
This test could have had a point in the past. It could have been testing a concrete class. It could have been testing an abstract one, as it sometimes makes sense to mock an abstract class that you want to test. I can see how a less informed follower of the TDD way of doing things (like myself) could have left a small mess like this. Cleanup of dead code and dead test code does not really fit in the natural workflow of red, green, refactor (since it doesn't break any tests!).
However, history is not justification for a test that does nothing but confuse the reader. So, be a good boyscout and remove it.
In my honest opinion this is just rubbish; it only proves that Moq works as expected.
Many times I find myself torn between making a method private to prevent someone from calling it in a context that doesn't make sense (or would screw up the internal state of the object involved), or making the method public (or typically internal) in order to expose it to the unit test assembly. I was just wondering what the Stack Overflow community thought of this dilemma?
So I guess the question truly is, is it better to focus on testability or on maintaining proper encapsulation?
Lately I've been leaning towards testability, as most of the code is only going to be leveraged by a small group of developers, but I thought I would see what everyone else thought?
It's NOT OK to change method visibility on methods that the customers or users can see. Doing this is ugly, a hack, and exposes methods that any dumb user could try to use and blow up your app... it's a liability you do not need.
You are using C#, yes? Check out the InternalsVisibleTo attribute.
You can declare your testable methods as internal, and allow your unit testing assembly access to your internals.
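For example, in the production assembly's AssemblyInfo.cs ("MyApp.Tests" is a placeholder for whatever your test assembly is actually called):
using System.Runtime.CompilerServices;

// Grants the unit test assembly access to this assembly's internal members.
[assembly: InternalsVisibleTo("MyApp.Tests")]
Note that if your assemblies are strong-named, the attribute needs the test assembly's full public key as well.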
It depends on whether the method is part of a public API or not. If a method does not belong to part of a public API, but is called publicly from other types within the same assembly, use internal, friend your unit test assembly, and unit test it.
However, if the method is not/should not be part of a public API, and it is not called by other types internal to the assembly, DO NOT test it directly. It should be protected or private, and it should only be tested indirectly by unit testing your public API. If you write unit tests for non-public (or what should be non-public) members of your types, you are binding test code to internal implementation details.
That's a bad kind of coupling. It increases the number of unit tests you need, and it increases your workload both in the short term (more unit tests) and in the long term (more test maintenance and modification in response to refactoring internal implementation details). Another problem with testing non-public members is that you test code that may not actually be needed or used. A GREAT way to find dead code is to look for code that is not covered by any of your unit tests when your public API is covered 100%. Removing dead code is a great way to keep your code base lean and mean, and it is impossible if you are not careful about what you put into your public API and which parts of your code you unit test.
EDIT:
As a quick additional note...with a properly designed public API, you can very effectively use a tool like Microsoft PEX to automatically generate full-coverage unit tests that test every execution path of your code. Combined with a few manually written tests that cover critical behavior, anything not covered can be considered dead code and removed, and you can greatly shortcut your unit testing process.
This is a common thought.
It's generally best to test the private methods by testing the public methods that call them (so you don't explicitly test the private methods). However, I understand that there are times when you really do want to test those private methods.
The answers to this question (Java) and this question (.NET) should be helpful.
To answer the question: no, you shouldn't change method visibility for the sake of testing. You generally shouldn't be testing private methods, and when you do, there are better ways to do it.
In general I agree with #jrista. But, as usual, it depends.
When trying to work with legacy code, the key is to get it under test. After that, you can add tests for new features and existing bugs, refactor to improve design, etc. This is risky without tests. Legacy code tends to be rife with dependencies, and is often extremely difficult to get under test.
In Working Effectively with Legacy Code, Michael Feathers suggests multiple techniques for getting code under test. Many of these techniques involve breaking encapsulation or complicating the design, and the author is up front about this. Once tests are in place, the code can be improved safely.
So for legacy code, do what you have to do.
In .NET you should use accessors for unit testing, rather than the InternalsVisibleTo attribute. Accessors allow you to get access to any method in the class, even if it is private. They even let you test abstract classes using an empty mock derived object (see the "PrivateObject" class).
Basically, in your test project you use the accessor class rather than the actual class with the methods you want to test. The accessor class is the same as the "real" class, except everything is public to your test project. Visual Studio can generate accessors for you.
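A rough sketch of the same idea using the MSTest PrivateObject helper (the ShoppingCart class and its private method are invented for illustration):
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Invented class with a private method, to show how an accessor-style test reaches it.
public class ShoppingCart
{
    private decimal CalculateDiscount(int quantity)
    {
        return quantity >= 10 ? 0.1m : 0m;
    }
}

[TestClass]
public class ShoppingCartTests
{
    [TestMethod]
    public void NoDiscountForSmallOrders()
    {
        // PrivateObject invokes the private method by name via reflection.
        var accessor = new PrivateObject(new ShoppingCart());

        var discount = (decimal)accessor.Invoke("CalculateDiscount", 5);

        Assert.AreEqual(0m, discount);
    }
}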
NEVER make a type more visible to facilitate unit testing.
IMO it is WRONG to say that you should not unit test private methods. Unit tests are of exceptional value for regression testing, and there is no reason why private methods should not be regression tested with granular unit tests.