Having written a small article on BDD, I got questions from people asking whether there are any cases of large-scale use of BDD (and specifically NBehave).
So my question goes to the community: do you have a project that used BDD successfully? If so, what benefits did you get, and what could have been better? Would you do BDD again? Would you recommend it to other people?
We've used a form of BDD at the code level in different scenarios (open source and ND projects).
Telling the view, in an MVC scenario, what kind of input to accept from the user (DDD and rule-driven UI validation in .NET):
result = view.GetData(
    CustomerIs.Valid,
    CustomerIs.From(AddressIs.Valid, AddressIs.In(Country.Russia)));
Telling the service layer about the exception-handling behavior (an ActionPolicy is injected into the decorators):
var policy = ActionPolicy
    .Handle<WebException>()
    .Retry(3);
Using these approaches has greatly reduced code duplication and made the codebase more stable and flexible. It has also made everything simpler, because the complex details are logically encapsulated.
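For anyone curious how such a fluent, behavior-describing API can be put together, here is a minimal, hypothetical sketch in C#. It is not the actual ActionPolicy implementation (only Handle<T>() and Retry(n) appear in the snippet above); every other name is invented for illustration.

using System;

// Hypothetical minimal policy builder, illustrating the fluent style shown above.
public class SimplePolicy
{
    private readonly Func<Exception, bool> _handles;
    private readonly int _retries;

    private SimplePolicy(Func<Exception, bool> handles, int retries)
    {
        _handles = handles;
        _retries = retries;
    }

    // Entry point: which exception type should the policy react to?
    public static SimplePolicy Handle<TException>() where TException : Exception
        => new SimplePolicy(ex => ex is TException, 0);

    // How many attempts should be retried before the exception is allowed to escape?
    public SimplePolicy Retry(int count) => new SimplePolicy(_handles, count);

    // Execute an action under the policy.
    public void Do(Action action)
    {
        for (var attempt = 0; ; attempt++)
        {
            try { action(); return; }
            catch (Exception ex) when (_handles(ex) && attempt < _retries)
            {
                // Swallow and retry until the retry budget is exhausted.
            }
        }
    }
}

A decorator could then wrap a call as SimplePolicy.Handle<WebException>().Retry(3).Do(() => service.Call()) - the behavior reads the same way it is described, which is the point of the style.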
I was on a small team that used BDD on a website.
The way we used it was essentially TDD, but the tests were simply written as behaviors using a DSL. We did not do a large upfront design of behaviors, but we did create a large number of them and used them exactly as you would use tests.
As you might expect, it worked much like TDD and was generally good. Phrasing the tests as behaviors was nice when interacting with the customers and made for a pretty decent document, but I kind of wish the behaviors had been written in English and the tests programmed, instead of trying to come up with some awkward intermediate language that doesn't fit either purpose perfectly.
It would still be BDD, just without the cute trick of trying to twist the language into one delineated by a random_looking.set of_Punctuation rather_than simple.spaces. But that was only my grumpy-old-programmer attitude; everyone else was 100% happy with it.
The site is available and fully operational, so I'd call it a success: Have a look
I recently used the BDD style of GWT (Given/When/Then) in a high-level requirements document. I didn't get any feedback about the GWT from the customer, but my boss said he liked it, as it was very clear and easy to understand. Note that he has no knowledge of BDD that I know of. I didn't put in user stories, as this would probably have been a bit too airy-fairy for people with a traditional waterfall background. Maybe I'll try putting in user stories next time.
By the way, this was not an eyeball-UI project. It was an integration project syncing data from a web service into a database. So it shows that GWT works even for non-"eyeball" UIs.
I've been using Context-Specification style on several projects (using MSpec) with great success. I am still trying to understand the real benefits of the Scenario style. The more I use the context-specification style, the more I like it, and the tighter my applications feel.
I'm new to BDD, but I found it very interesting and want to develop my next project using BDD. After googling and watching screencasts I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I have seen were written in terms of the UI (imperative):
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons. I think they are a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer the declarative style, how do you describe things like the 'Home page' or 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I have seen used WatiN, White, or something similar to implement the scenarios from the user's point of view: starting a browser, clicking buttons. I think it's extremely slow and brittle. Well, I could use something like the Page Object pattern to make the tests less brittle, but that is another chunk of work, especially for desktop applications with a complex UI.
How do you implement scenarios in real-life projects - by exercising the UI, or by testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to be in the system (e.g. some products for a shop application). How do you implement that part - by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links to the questions.
Declarative is the proper way, IMO. If you're talking about page .aspx file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find value in. This is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement. It's what your system must do. The only way to really tell whether you've done this is to in fact exercise the UI. BDD SpecFlow stories, to me, don't replace unit tests; rather, they're your integration tests. If you break one of these, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. They can't tell you whether A and B actually work together all the time (in theory they should; in practice interesting [read: unexpected] things happen when you actually have the two parts playing with each other). Automated end-to-end tests can also help your QA: if a functional area breaks, you know about it, and they can spend their time in other areas of the application while you determine what broke the integration tests.
This is a tricky one. We've done a hybrid approach. We do use the database (integration tests, after all, test the system functioning as one thing rather than as individual components), but rather than resetting configurations all the time we use Deleporter to replace our repositories with Moq stubs when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
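To make the mechanics concrete, here is a minimal sketch of stubbing a repository with Moq. IProductRepository and Product are hypothetical names, and the Deleporter wiring that pushes the stub into the running application is deliberately left out.

using System.Collections.Generic;
using Moq;

// Hypothetical domain/repository types standing in for the real ones.
public class Product { public string Name { get; set; } public decimal Price { get; set; } }
public interface IProductRepository { IEnumerable<Product> GetAll(); }

public static class TestRepositories
{
    // Builds an in-memory stand-in so a "Given some products exist" step
    // never has to touch the real database.
    public static IProductRepository WithProducts(params Product[] products)
    {
        var repo = new Mock<IProductRepository>();
        repo.Setup(r => r.GetAll()).Returns(products);
        return repo.Object;
    }
}

A Given step (or fixture setup) can then hand TestRepositories.WithProducts(...) to whatever composition mechanism the application uses - Deleporter, an IoC container override, or plain constructor injection.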
Edit: I found this article just now describing the concept of limiting yourself to talking only about specific domains to avoid brittle scenarios.
His main point is that the minimum set of domains you can talk about is the problem domain and the solution domain. If you're talking about anything outside those two domains, then you involve too many stakeholders, you introduce too much noise, and you make your scenarios brittle.
He also mentions that an absolutely "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of your abstraction you can "chunk up" or "chunk down". This means you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?" You chunk down by asking "how will we do this?" So I guess I wouldn't get too hung up on declarative vs. imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domain each term belongs in, possibly by identifying which stakeholder is the expert for that domain. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before Edit: (some of this still applies. The "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way to think for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about earlier on in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
how do you describe things like the 'Home page' or 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think it's extremely slow and brittle
Yes, it is. Perform every high level scenario/requirement with the UI and full DB integration, but don't exercise every single code path with end to end UI automation, and certainly not edge cases. If you do, you'll spend more time getting them working, and a lot less time actually testing your application.
You can architect your application so that you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes.
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.
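As one illustration of testing between two layers, here is a sketch of SpecFlow-style step definitions for the declarative login scenario above, bound to a presenter instead of a browser. SpecFlow is already mentioned in this thread; the LoginPresenter, ISessionService, and stub types are invented for the example.

using TechTalk.SpecFlow;
using Xunit;

// Hypothetical application-layer types standing in for the real presenter and session service.
public interface ISessionService { bool Authenticate(string user, string password); }

public class LoginPresenter
{
    private readonly ISessionService _sessions;
    public LoginPresenter(ISessionService sessions) { _sessions = sessions; }
    public bool LogIn(string user, string password) => _sessions.Authenticate(user, password);
}

[Binding]
public class LoginSteps
{
    // Stub authentication so the steps exercise the presenter, not a real back end or UI.
    private class AlwaysValidSessions : ISessionService
    {
        public bool Authenticate(string user, string password) => true;
    }

    private readonly LoginPresenter _presenter = new LoginPresenter(new AlwaysValidSessions());
    private bool _loggedIn;

    [Given(@"I am not logged in")]
    public void GivenIAmNotLoggedIn() => _loggedIn = false;

    [When(@"I log in using valid credentials")]
    public void WhenILogInUsingValidCredentials() => _loggedIn = _presenter.LogIn("user", "correct-password");

    [Then(@"I should be logged in")]
    public void ThenIShouldBeLoggedIn() => Assert.True(_loggedIn);
}

The same scenario text could also be bound to full UI automation in a separate, much smaller suite, which is one way of striking the balance described above.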
Hi! I recently tried developing a small project in C#, and during the whole project our team used the Test-Driven Development (TDD) technique (xUnit, Moq).
I really think this was awesome, because (when paired with C#) this approach let us relax when coding, relax when designing, and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it eventually allowed me to get the same result with fewer brain cells working.
Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD there was a step back in terms of rapid application development.
I had some moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them, and swearing at my monitor.
And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe. But if there are any people who are used to TDD in C++ (and probably C# too), could you please advise me on how to do this properly?
Framework recommendations, architecture approaches, plain coding advice - if you are experienced in TDD and C++, please respond.
I think TDD is much harder to do in C++ than in C#. The lack of reflection, and the common (and often well-justified) reluctance to rely on dynamic polymorphism (interfaces and inheritance) rather than static polymorphism, make it harder to mock out many classes.
There are some extremely clever unit test frameworks for C++, but the thing that's so clever about them is mainly that they try to bypass the language limitations.
TDD works best in dynamic languages. It's a great way to work in Python. It's doable in C# (which isn't dynamic, but has comprehensive reflection capabilities).
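To show what that looks like on the C# side, here is a small sketch using xUnit and Moq (both mentioned by the asker); IPriceService and OrderProcessor are made-up names. Getting the equivalent seam in C++ typically means a hand-written fake or a template-based substitute, which is part of the extra effort discussed here.

using Moq;
using Xunit;

// A dependency expressed as an interface is trivial to mock in C#.
public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class OrderProcessor
{
    private readonly IPriceService _prices;
    public OrderProcessor(IPriceService prices) { _prices = prices; }

    public decimal Total(string sku, int quantity) => _prices.GetPrice(sku) * quantity;
}

public class OrderProcessorTests
{
    [Fact]
    public void Total_multiplies_unit_price_by_quantity()
    {
        var prices = new Mock<IPriceService>();
        prices.Setup(p => p.GetPrice("ABC")).Returns(10m);

        var processor = new OrderProcessor(prices.Object);

        Assert.Equal(30m, processor.Total("ABC", 3));
    }
}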
In C++, it's often problematic. That doesn't mean it can't, or shouldn't, be done, but when you do it, expect to have to work a bit harder at it. And sometimes, you may be better off using another approach entirely.
TDD is something that takes some practice to get right, regardless of the platform. What some people don't seem to realize is that the nature of the problem you're trying to solve can have a big impact on how easily you can apply TDD to the solution. I've had problems in the past where I knew the solution I wanted to move towards, but it was extremely difficult to figure out how to break the problem up in a way that fit the TDD model. There are several reasons why this may happen, and it's impossible to say what the "right" way to handle that situation is.
At this point my first reaction to running into this sort of problem is to re-examine my original assumptions about the problem. Am I making it more complex than it needs to be? Am I trying to write tests to arrive at a design I've already decided on instead of letting the tests guide the design? Is it just a funky problem, and I need to accept that the typical TDD approach isn't going to work in this case?
For an interesting discussion of this, look at this blog post from Uncle Bob Martin, where he talks about an attempt by Ron Jeffries to create a Sudoku solver using TDD that didn't really work. The fact that this attempt did not produce a good solution doesn't mean that TDD is useless; it just means that the problem being solved was more complex and did not lend itself to the emergent-design approach of TDD.
Try the easiest - CxxTest.
I find test-driven development very hard to do properly all the time; sometimes the tests just flow, sometimes a bit of a jump is required. To keep things fast I frequently step away from the TDD approach. That isn't a problem for me, as I maintain a full set of unit tests for all the code I've 'completed' (which allows for the relaxation while coding the new bits and refactoring).
The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is, for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code, to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, given that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes, no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better. Processes like TDD and tools to automate unit testing are helping, but for many older systems a lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration are often "just make it do what it does now, but on the new platform" - not very useful when people do not know or remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code, using a product like BeyondCompare, to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
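One sketch of what an automated side-by-side check can look like: a parameterized xUnit test that runs the same case through both systems and compares the results. The two helper methods are placeholders; in a real migration they might call the VB6 app via COM interop, read exported result files, or query the two systems' databases.

using Xunit;

public class SideBySideTests
{
    // Placeholder stand-ins for whatever comparable entry points the two systems expose.
    private static decimal LegacyTotal(string orderId) => /* e.g. read a result exported from the VB6 app */ 0m;
    private static decimal MigratedTotal(string orderId) => /* e.g. call the translated .NET code */ 0m;

    [Theory]
    [InlineData("ORDER-001")]
    [InlineData("ORDER-002")]
    public void Migrated_system_matches_legacy_result(string orderId)
    {
        Assert.Equal(LegacyTotal(orderId), MigratedTotal(orderId));
    }
}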
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
What best practices and methods would you enforce on a new .NET development team?
Cheers
Use only Visual Studio
If you need a database, use a server (reduces SQL issues early on)
Use Version Control
Good question. I've had to deal with this very recently with my team. Here are a couple of quick points:
Come up with coding and documentation standards. A search for C# style guidelines will yield some good results. StyleCop and FxCop might be useful for enforcing your standards.
Source control. SVN is popular, but I prefer Mercurial.
Depending upon what type of projects you are working on, you might want to decide on a standard architecture. Typically, we use a UI - Application - Business Logic - Infrastructure architecture (see the sketch after these points).
Put your database in version control.
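A minimal, hypothetical sketch of how those layers can relate in code (all type names invented for illustration); the point is simply that each layer depends only on the one below it, ideally through abstractions.

// Business logic: domain types and rules, no infrastructure details.
public class Customer { public string Email { get; set; } public bool IsActive { get; set; } }

// Abstraction implemented by the infrastructure layer (database, web service, etc.).
public interface ICustomerRepository { Customer FindByEmail(string email); }

// Application layer: orchestrates use cases, depends only on abstractions.
public class CustomerService
{
    private readonly ICustomerRepository _customers;
    public CustomerService(ICustomerRepository customers) { _customers = customers; }
    public bool CanLogIn(string email) => _customers.FindByEmail(email)?.IsActive == true;
}

// UI layer: controllers/views call into the application layer.
public class AccountController
{
    private readonly CustomerService _service;
    public AccountController(CustomerService service) { _service = service; }
    public string LogIn(string email) => _service.CanLogIn(email) ? "Welcome" : "Access denied";
}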
Update
MSDN - Design Guidelines for Class Library Developers - All Versions
I had also assumed the OP was referring to coding standards. As for the more general practices:
Unified Development Environment (Visual Studio will probably net the best results)
Version Control (Team Foundation Server is great if you can afford it, if not SVN)
Team Collaboration (Trac if you go with SVN, TFS has some stuff as well)
You are asking for a shelf of books. I don't think you'd want to read an answer long enough to actually cover what you asked.
Microsoft's Patterns & Practices group may have some suggestions that could be useful as a pointer to good practices.
Continuous Integration would be another practice I'd introduce, along with the concept of Technical Debt.
I'd review various Agile practices and see what the team thinks is worth adopting and what isn't. Tribal Leadership would also be something I'd examine, to see what stage the tribe is at and try to bring it to stage 4 if possible.
If I could instill some values in the team, they would be to take pride in our work, respect one another, and think in terms of what is good for the team rather than individual gain. Granted, culture wasn't part of the question, but it is a natural follow-up to my mind.
You need to use version control (SVN is great), but at the same time you shouldn't check everything into source control. Skip checking in compilation output and configuration files; instead, check in the config files as app.config.template files and have each dev make his own copy called app.config. Check new changes into the .template file and have all devs regularly check and update their local version if it changes.
If possible, pair up junior members with more senior members. Either way, definitely have code reviews. I'd also encourage them to have scheduled workshops or discussions so that they can get more well-rounded skills and to increase their exposure to different areas that they might not currently be aware of.
I'd also encourage them to go to user group meetings.
I would start by looking through the MSDN Developer Centers site:
http://msdn.microsoft.com/en-us/aa937802.aspx
Since you are using C# I would recommend using StyleCop to maintain consistency in code layout. Since you've stated it's a new team, I'm assuming that the code base is new as well. Starting fresh with StyleCop is far easier than trying to get rid of warnings in an existing code base.
Most people would agree that having automated unit tests is a very good thing. You may want to go the TDD route and never code anything that doesn't already have a test, or you may want to write tests after the code and just focus on the key areas of concern rather than striving for 100% coverage. Either way, decide what you want to achieve with testing and make sure it is adhered to. Without a strict rule on unit tests, you may well find that some, if not all, of your code has no automated tests, and the only way that code gets tested is when someone goes into the UI and actually uses it.
In no particular order,
Agile / Scrum
A nice suite of tools - ReSharper, Red Gate SQL tools, FxCop, etc.
Test Driven Development
Continuous Integration
I am building a prototype for a web-based application and was considering building the front-end in HTML, which could then be reused later for the actual application. I had done a Flash-based prototype earlier, which embedded the .swf into a C# executable. Flash made for rapid turnaround time, while the Windows application provided unlimited access to fancy APIs for DB access and sound.
I want to consider something similar for this one too. Does this approach make sense? I am particularly concerned about the way the HTML would communicate with the container app. From what I understand out of preliminary research, it would be only through JavaScript, which might quickly get unwieldy. This is especially so because unlike the Flash-based prototype which implemented a lot of its functionality in the .swf, the HTML UI will depend entirely upon the shell to maintain state. Also, I don't need anything more than access to a database. So a desktop application might be overkill.
Another alternative that comes to mind is to build the prototype using PHP and deploy it with a portable server stack such as Server2Go or XAMPP. But I've never done something like this before. Anybody here shed some light on drawbacks of this approach?
The key requirement is rapid iterations of the UI, reusable front-end code and simplified deployment without any installations or configuration.
Some of the best programming advice I've seen came from Code Complete, and was along the lines of, "evolutionary prototypes are fine things, and throwaway prototypes are fine things, but you run into trouble when you try to make one from the other." That is, know which type of prototype you're developing, and respect it. If you're developing a throwaway prototype, don't permit yourself to use any of it, however tempting it may be, in the production system. And if you're developing an evolutionary prototype - one intended to become the production system - don't compromise quality in any way.
It sounds like you're trying to get both: the rapid development of a throwaway and the reusability of an evolutionary prototype - and you can't. Make up your mind, and stand by it. You can't have your cake and eat it, too.
I think you're off to the wrong start here. Why would you want your prototype to be fully functional? A prototype is intended to be thrown away and to help flesh out requirements and UI. If you need full functionality, why not just skip to the final product? If prototyping is really something you want to do, I suggest looking into a specialized prototyping tool.
Are you prototyping the user interface for a customer? If you are, consider something less unwieldy like paper prototypes or presentation software (like PowerPoint) until you get the UI nailed down. If you can establish the UI and are clear about the customer's requirements, you can then develop the application in whatever the actual platform is going to be with a clear model in mind.
In my current project, I prototyped the UI in PowerPoint first. In a subsequent iteration, I used static web pages and some jQuery plugins to simulate actual user interaction. That proved to be very effective in demonstrating the interface, and I didn't have to build the application first.
I would join in on folks suggesting paper prototyping as the "idea", but not necessarily the implementation. The biggest point here is that tools such as HTML or Flash let you get "bogged down" in the details - what does this color look like? What's the text on this thing? Lots of time can pass by that way. Instead, what you should be focusing on is user flows.
One tool that keeps the spirit of paper prototyping without all the "paper" drawbacks is Balsamiq: http://www.balsamiq.com/demos/mockups/Mockups.html. It was covered by Jeff and Joel in one of the Stack Overflow podcasts; I've been using it for my own projects for a while. It's freeware, and it does its job magnificently.
If you know C# then another option you can look at is Silverlight. You can then leverage your knowledge of C# and/or JavaScript and interact with a rich object model.
Would that do what you are looking for? The installation would be minimal on the part of the client - just download and install the Silverlight plugin.
If prototyping is something you truly wish to accomplish here, paper and pencil will be your best friends. You can draw out as many iterations as necessary. While none of this is ultimately useful later on once you begin coding, it is as quick as it gets.
As mentioned previously, there are many prototyping tools, which have a bit of a learning curve, but an alternative to consider would be a framework such as CakePHP or Ruby on Rails, which makes the application logic fast to build and leaves customizing the front end as the main remaining work. Plus, you're left with a mostly functional application when you're done with your prototyping, which can be tweaked as needed.
In either scenario, you're paying with your time either upfront (in the case of learning a new framework) or over time (in the case of prototyping on paper or coding by hand).