I'm new to BDD, but I found it very interesting and want to develop my next project using BDD. After googling and watching screencasts I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I've seen were written in terms of the UI (imperative).
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons; I think they're a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer the declarative style, how do you describe things like a 'Home page' or 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I've seen used WatiN, White, or something similar to implement scenarios from the user's point of view: starting the browser, clicking buttons. I think that's extremely slow and brittle. Well, I could use something like a Page Object to make the tests less brittle, but that's yet more work - especially for desktop applications with a complex UI.
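By Page Object I mean roughly something like this hand-rolled sketch (Selenium WebDriver here, with made-up element ids and success text):
using OpenQA.Selenium;
// A hand-rolled Page Object: tests call LogIn() and never touch individual
// controls, so only this class has to change when the markup changes.
// Element ids and the success message are invented for this sketch.
public class LoginPage
{
    private readonly IWebDriver _driver;
    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }
    public void LogIn(string userName, string password)
    {
        _driver.FindElement(By.Id("UserName")).SendKeys(userName);
        _driver.FindElement(By.Id("Password")).SendKeys(password);
        _driver.FindElement(By.Id("Login")).Click();
    }
    public bool ShowsLoggedInMessage()
    {
        return _driver.PageSource.Contains("You are logged in");
    }
}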
How do you implement scenarios in real-life projects - exercising UI, or via testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to already be in the system (e.g. some products for a shop application). How do you implement that part - by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links on questions.
Declarative is the proper way, IMO. If you're talking about .aspx page file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find value in - that is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement; it's what your system must do. The only way to really tell whether you've done this is to in fact exercise the UI. BDD SpecFlow stories, to me, don't replace unit tests; rather, they're your integration tests. If you break one, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. That can't tell you whether A and B actually work together all the time (in theory it should; in practice interesting [read: unexpected] things happen when you actually have the two parts playing with each other). Automated end-to-end tests can help your QA as well: if a functional area breaks, you know about it, and QA can spend their time in other areas of the application while you determine what broke the integration tests.
This is a tricky one. We've taken a hybrid approach. We do use the database (integration tests, after all, test the system functioning as one thing rather than as individual components), but rather than resetting configurations all the time we use Deleporter to replace our repositories with Moq fakes when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
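For illustration, the Moq side of that swap looks roughly like this - Product, IProductRepository and the FakeRepositories helper are made-up stand-ins for our own code, and Deleporter is only the mechanism that runs the swap inside the web server's process:
using System.Collections.Generic;
using Moq;
// Made-up domain types standing in for our real entities and repositories.
public class Product { public string Name { get; set; } }
public interface IProductRepository
{
    IList<Product> GetTopSelling(int count);
}
public static class FakeRepositories
{
    // Builds a canned repository; in our setup a delegate like this runs
    // server-side so the application resolves the fake instead of the real thing.
    public static IProductRepository TopSellingStub()
    {
        var fake = new Mock<IProductRepository>();
        fake.Setup(r => r.GetTopSelling(It.IsAny<int>()))
            .Returns(new List<Product> { new Product { Name = "Anvil" } });
        return fake.Object;
    }
}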
Edit: I found this article just now describing the concept of limiting yourself to talking only about specific domains to avoid brittle scenarios.
His main point is that the smallest set of domains you can talk about is the problem domain and the solution domain. If you're talking about anything outside those two domains then you involve too many stakeholders, you introduce too much noise, and you make your scenarios brittle.
He also mentions that an absolute "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of your abstraction you can "chunk up" or "chunk down". This means you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?" You chunk down by asking "how will we do this?" So I guess I wouldn't get too hung up on declarative vs imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domains each term belongs in, possibly by identifying which stakeholder is the expert for the domain that term belongs in. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before Edit: (some of this still applies. The "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way to think for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about earlier on in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
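As a rough sketch of what that can look like with SpecFlow (the LoginWorkflow helper below is entirely invented), the declarative steps don't care whether the implementation drives a browser or calls a presenter directly:
using TechTalk.SpecFlow;
using NUnit.Framework;
[Binding]
public class LoginSteps
{
    private readonly LoginWorkflow _workflow = new LoginWorkflow();
    [Given(@"I am not logged in")]
    public void GivenIAmNotLoggedIn()
    {
        _workflow.EnsureLoggedOut();
    }
    [When(@"I log in using valid credentials")]
    public void WhenILogInUsingValidCredentials()
    {
        _workflow.LogIn("AUser", "APassword");
    }
    [Then(@"I should be logged in")]
    public void ThenIShouldBeLoggedIn()
    {
        Assert.IsTrue(_workflow.IsLoggedIn);
    }
}
// Invented stand-in so the sketch hangs together; in a real project this could
// wrap page objects, a presenter, or a service - the scenario text stays the same.
public class LoginWorkflow
{
    public bool IsLoggedIn { get; private set; }
    public void EnsureLoggedOut() { IsLoggedIn = false; }
    public void LogIn(string userName, string password) { IsLoggedIn = true; }
}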
how do you describe things like a 'Home page' or 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think that's extremely slow and brittle
Yes, it is. Perform every high level scenario/requirement with the UI and full DB integration, but don't exercise every single code path with end to end UI automation, and certainly not edge cases. If you do, you'll spend more time getting them working, and a lot less time actually testing your application.
You can architect your application so that you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.
Related
I walked into a new job and started working on a system that made me realize, again, the importance of unit testing - and I mean proper unit testing that doesn't require a person to click buttons and interact with the test, or hit the database, WCF, or any other architectural boundary. You should be able to run those tests anywhere, anytime, as many times as you like.
But this system is huge: about 700 projects spread across more than 300 solution files. It directly references DLLs from a deployed location under 'Program Files', and circular references are found in numerous places. After the developers change something in the code base, they manually copy DLLs around or email them to each other. Some of the controllers are 21,000 (21K) lines of code. It could fairly be called a 'big ball of mud'.
I showed them how and where to add a seam or two to the system that wouldn't affect anyone, so we could start unit testing the system; they seemed to agree that they need this safety net, and they have noticed that the whole industry is talking about its benefits. But now I am starting to get resistance from fellow architects on the team when 'a new property to expose something on a class' breaks their build.
This was the change. A new property exposing an existing private field.
public INavigationComponent CurrentNavigationComponent { get { return _currentNavigationComponent; } }
An architect and one of his developers didn't have the patience (or know-how) to figure out where they had an old DLL, so they asked me to remove the property, which I did - and he still couldn't build.
So if the team is not willing to add a single property to a class which would enable another level of testing, then I feel like I am preaching to a wall.
Is a unit testable system too strict (or optimistic) a requirement for consideration of a new work environment?
I'm guessing people would respond 'That it is my/your job to sell the concept and benefits of unit testing to your team, and if you can't do that, then...'.
Welcome to the real world.
In my experience comprehensive unit testing typically increases development time by a factor of ~5; the jury is still out on how much time it saves in the long term. If you work in an industry where quality control is paramount (aviation and medicine are two that immediately spring to mind) then it's a no-brainer, but for most industries there are many other factors at play including time-to-market issues, cash flow, marketing requirements, competition forces and a whole lot of other stuff that bears no relation to actual programming whatsoever.
It's important to keep things in perspective. As a new hire it's not your job to tell them how to develop their systems. The best you can do is demonstrate that you'll work with your team and the other departments for the good of the company, even if it means having to make compromises. Don't get me wrong, it's your responsibility to communicate to your superiors your concerns about the project, particularly when you think those concerns may have long-lasting ramifications. But then it's also your job to accept whatever decisions are made, even when they go against you, and do your job the best you can regardless.
Bottom line: being a good coder is only one small part of being a good programmer.
I need help figuring out how to test old legacy ASP.NET Web Forms pages.
A lot of the web pages in our project were written a long time ago, and it has got to the point where maintaining them or adding extra features is a pain in the neck. There are no methods whatsoever. The code is not modularized, and server-side code is scattered all over the front pages (.aspx), mixed in with the UI logic.
Rewriting these legacy ASP.NET Web Forms seems to be the only way to go for long-term benefit. However, here is our problem: these pages all work fine right now, but no one on our team completely understands the business logic behind them, and reading through the code line by line would be very painful. We thought that writing some test cases, applying the same tests to our newly refactored, modern web forms, and comparing the results would be more promising and accurate.
Does anyone know how I can go about this? How do you test legacy ASP.NET Web Forms when the code is not organized or modularized? Any suggestion or recommendation would be helpful.
So far I have looked at Selenium, but it seems to be more for UI testing than for business logic. My main focus is what data gets pulled from the database and displayed on the form, and what data gets written to the database (especially which tables) after the submit.
I also looked at the Visual Studio built-in test suite, but that approach seems to require the code to be organized into methods and functions, so I didn't continue reading.
Another thought I have is to monitor the database and see which tables change while I manually open a web page and input/submit some data. Would that be a good option?
Any thoughts will be appreciated. Thanks!
These pages all work fine right now, but no one on our team completely understands the business logic behind them, and reading through the code line by line would be very painful.
Ok, so this is really the crux of your problem. Do you have access to the stakeholders of this application (i.e. whoever it was designed for)? They are probably the best people to explain to you how it's supposed to work. You need to get access to these folks and have them give you at least a crash course in the "domain" of the application.
Only when you and your colleagues fully understand how this system works can you test it. If you don't have access to the stakeholders, then don't panic - just take this thing one module at a time and start mapping it out.
You don't have to go all out and learn the whole thing up front - take it one module or subsystem at a time, make plenty of diagrams of how the various parts of the business domain work from the perspective of the users, and then do the same for how they currently work in the code. Put both diagrams side by side and start planning how you might refactor the code to be organised more like the business flow and less like the existing flow.
This can be a tricky process for sure, but once you get going - especially once you know how the system should ideally flow, based on the previous point - it's not that bad. Bear in mind that you will surely be able to copy/paste a lot of your existing code; in fact, you should probably avoid the temptation to fix bugs on the fly at this stage. Focus instead on the organisation of your classes, etc., so as to make them adhere to SOLID - any classes that broadly stick to this will typically be very testable.
Any bugs or really poorly written code can be flagged at this stage for fixing later on; a key point here is reorganise, not rewrite!
Armed with that knowledge, the next step is to write a test specification for the various parts of the application, based on the new design of the modules. That means lots of tests and test methods (using whatever framework you like - MSTest, xUnit, etc.). You really can't avoid this, but remember: one module at a time!
As DanielMann pointed out, it might be worth looking at something like SpecFlow, which will let you write test specifications in a natural(ish) language form - you may even be able to get the stakeholders on board to help write the tests!
You don't have to have literally every detail specified at first; once you have identified the major "business units" in terms of logic, you can break them down into smaller and smaller chunks of conceptual behaviour.
So you may end up with tests like (just an example)
[Test]
public void LoginModule_WhenPasswordIsWrong_RedirectsWithErrorMessage()
{
    // Write code in here that exercises the LoginModule and asserts that it behaves as expected.
    // The really important thing is to write these tests based on the NEW design
    // and NOT the existing system.
    Assert.Fail("Write the test!");
}
Now, the key thing here - most, if not all of these tests, will not even compile and even the few that do, will probably fail. That's actually a good thing! Because now you have a clear path of what you have to do - which is to make those tests pass by implementing the new design. Best to do this in a branch of the original!
So in the example above, you might not even have a clearly defined login "module" - the code might be scattered across several pages and classes. But by writing your "ideal" tests up-front, based on an ideal design, you now have a target to aim for. And also you don't have to be totally purist about it - there is no sin in bending the rules and making some tests less granular than the ideal case - you can come back and do that later.
Rinse and repeat - every module you do is one less to do tomorrow!
Once your initial set of test methods is passing, you can then "zoom in" and start refining them, in the process fixing bugs and crappy code (the same thing, in many respects :) you came across earlier.
Best of luck with it!
I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database; it manages around 1,000 rental units, about 3,000 tenants, and all the associated finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test it by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done in event handlers. To complicate things, everything is database-driven, and I haven't seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to move the business logic out of the UI classes (a lot of work), or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time-consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static, controllable dataset, so requests and responses can be guaranteed. We then write C# code to simulate various scenarios. We use NUnit for this.
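A typical test in that style looks something like this (the connection string, table, and row are placeholders for whatever is in your controlled dataset):
using System.Data.SqlClient;
using NUnit.Framework;
[TestFixture]
public class TenantDataTests
{
    // Dedicated test database with a static, known dataset (name is an assumption).
    private const string TestConnectionString =
        @"Server=(local);Database=RentalApp_Tests;Integrated Security=true";
    [Test]
    public void KnownTenant_ExistsExactlyOnce()
    {
        using (var connection = new SqlConnection(TestConnectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Tenants WHERE Name = @name", connection))
        {
            command.Parameters.AddWithValue("@name", "Known Test Tenant");
            connection.Open();
            var count = (int)command.ExecuteScalar();
            Assert.AreEqual(1, count);
        }
    }
}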
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
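For instance, a bug report like "rent is prorated wrongly for a 31-day month" would turn into a small failing test first; the fix makes it pass and the test stays in the suite as a tripwire. ProrationCalculator and the rule are invented here purely to illustrate the shape of such a test:
using NUnit.Framework;
[TestFixture]
public class BugRegressionTests
{
    // ProrationCalculator is a stand-in for whatever class the bug lives in.
    // 930 per month over 31 days is 30 per day; 15 days occupied should be 450.
    [Test]
    public void ProratedRent_ThirtyOneDayMonth_UsesActualDayCount()
    {
        var calculator = new ProrationCalculator();
        var amount = calculator.Prorate(monthlyRent: 930m, daysOccupied: 15, daysInMonth: 31);
        Assert.AreEqual(450m, amount);
    }
}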
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common Use Case scenarios and translate them to an application that uses your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework (see the sketch after this list).
Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens when you feed in unexpected input?
Get the green light again. If the unit test fails, then it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
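To make step 4 concrete, a converted scenario might end up looking like this (LateFeeCalculator is a made-up stand-in for whatever class you extract into your library):
using NUnit.Framework;
// Made-up stand-in for a class extracted into one of your code libraries.
public class LateFeeCalculator
{
    private readonly decimal _dailyFee;
    public LateFeeCalculator(decimal dailyFee) { _dailyFee = dailyFee; }
    public decimal FeeFor(int daysLate) { return daysLate * _dailyFee; }
}
[TestFixture]
public class LateFeeScenarioTests
{
    // "Tenant pays ten days after the due date" - a common use-case scenario
    // converted from the throwaway test program into a repeatable unit test.
    [Test]
    public void PaymentTenDaysLate_AddsTenDaysOfFees()
    {
        var calculator = new LateFeeCalculator(5m);
        Assert.AreEqual(50m, calculator.FeeFor(10));
    }
}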
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your biz logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which will involve creating a mock tenant object and saving "him" to the DB. For your next test you would write "GetTenant" and try to get the tenant you just created from the DB into your in-memory representation... then compare your first and second tenant to make sure all fields match. Etc. etc.
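In code, that "add then get" pair reduces to something like this sketch (Tenant and TenantRepository are stand-ins for your own refactored data layer and test database):
using NUnit.Framework;
// Simple in-memory representation of a tenant (field names are invented).
public class Tenant
{
    public string Name { get; set; }
    public string Unit { get; set; }
    public decimal MonthlyRent { get; set; }
}
[TestFixture]
public class TenantPersistenceTests
{
    [Test]
    public void AddThenGetTenant_RoundTripsAllFields()
    {
        // TenantRepository stands in for your refactored data-access class, pointed at a test DB.
        var repository = new TenantRepository("RentalApp_Tests");
        var original = new Tenant { Name = "Jane Doe", Unit = "12B", MonthlyRent = 950m };
        repository.Add(original);
        var reloaded = repository.GetByName("Jane Doe");
        Assert.AreEqual(original.Name, reloaded.Name);
        Assert.AreEqual(original.Unit, reloaded.Unit);
        Assert.AreEqual(original.MonthlyRent, reloaded.MonthlyRent);
    }
}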
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming; here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is to isolate all the business logic and business-related functions within your UI classes, and then within each UI constructor or Page_Load have unit test calls that exercise each of the business functions. For improved readability you could wrap the business functions in a #region block.
For your long term benefit, you should study design patterns. Pick a pattern that suits your project needs and redo your project using the design pattern.
It depends on the language you are using, but in general start with a simple test class that uses some made-up data (but still something "real") to exercise your code. Make it simulate what would happen in the app. If you are making a change in a particular part of the app, write something that works before you change the code. Since you have already written the code, getting tests in place for the entire app is going to be quite a challenge, so I would suggest starting small. From now on, as you write code, write the unit tests first and then the code. You might also consider refactoring, but I would weigh the cost of refactoring versus rewriting a little at a time as you add unit tests along the way.
I haven't tried adding tests to legacy applications, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you could add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use some mocking tools and DbUnit-style tools while creating your tests to simulate database-related issues.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or XUnit.Net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
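For example, something as small as a discount calculator makes a good first target - class and tests together fit on one screen (everything below is invented purely as a starting exercise):
using System;
using NUnit.Framework;
// A small, self-contained class with real behaviour and no dependencies -
// the kind of first target described above (invented for illustration).
public class PercentageDiscount
{
    private readonly decimal _percent;
    public PercentageDiscount(decimal percent) { _percent = percent; }
    public decimal Apply(decimal price)
    {
        if (price < 0m) throw new ArgumentOutOfRangeException("price");
        return price - (price * _percent / 100m);
    }
}
[TestFixture]
public class PercentageDiscountTests
{
    [Test]
    public void TenPercentOff_ReducesPriceByATenth()
    {
        Assert.AreEqual(90m, new PercentageDiscount(10m).Apply(100m));
    }
    [Test]
    public void NegativePrice_IsRejected()
    {
        Assert.Throws<ArgumentOutOfRangeException>(() => new PercentageDiscount(10m).Apply(-1m));
    }
}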
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start by writing the tests first.
The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use a tool in VSTS 10 to create a UML model of this code, to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, in light of the fact that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better - processes like TDD and tools that automate unit testing are helping - but for many older systems a lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration are often "just make it do what it does now, but on the new platform" - not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code using a product like Beyond Compare, to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
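At its simplest, the comparison half of side-by-side testing is "capture the same output from both systems and diff it"; a stripped-down NUnit sketch (the file paths are placeholders) might be:
using System.IO;
using NUnit.Framework;
[TestFixture]
public class SideBySideReportTests
{
    [Test]
    public void MonthlyReport_MigratedOutputMatchesLegacyOutput()
    {
        // Both files are produced by running the same use case against the
        // legacy VB6 app and the migrated .NET app (paths are placeholders).
        var legacy = File.ReadAllText(@"C:\migration\baseline\monthly-report.txt");
        var migrated = File.ReadAllText(@"C:\migration\candidate\monthly-report.txt");
        Assert.AreEqual(legacy, migrated);
    }
}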
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
Having written a small article on BDD, I got questions from people asking whether there are any cases of large-scale use of BDD (and specifically NBehave).
So my question goes to the community: do you have a project that used BDD successfully? If so, what benefits did you get, and what could have been better? Would you do BDD again? Would you recommend it to other people?
We've used a form of BDD at the code level in different scenarios (open source and ND projects).
Telling the view, in an MVC scenario, what kind of input to accept from the user (DDD and Rule-driven UI Validation in .NET):
result = view.GetData(
    CustomerIs.Valid,
    CustomerIs.From(AddressIs.Valid, AddressIs.In(Country.Russia)));
Telling the service layer about the exception-handling behavior (ActionPolicy is injected into the decorators):
var policy = ActionPolicy
    .Handle<WebException>()
    .Retry(3);
Using these approaches has immensely reduced code duplication and made the codebase more stable and flexible. Additionally, it has made everything simpler, thanks to the logical encapsulation of complex details.
I was on a small team that used BDD on a website.
The way we used it was essentially TDD, but the tests are simply written as behaviors using a DSL. We did not get into large upfront design of behaviors, but we did create a large number of them, and used them exactly as you would tests.
As you might expect, it worked much as TDD, generally good. Phrasing the tests as behaviors was nice when interacting with the customers and made for a pretty decent document, but I kind of wish the behaviors were written in English and the tests programmed instead of trying to come up with some difficult intermediate language that doesn't fit either purpose perfectly.
It would still be BDD, just without this cute trick of trying to twist the language into a language delineated by a random_looking.set of_Punctuation rather_than simple.spaces, but that was only my grumpy-old-programmer attitude, everyone else was 100% happy with it.
The site is available and fully operational, so I'd call it a success: Have a look
I recently used the BDD Given/When/Then (GWT) style in a high-level requirements document. I didn't get any feedback about the GWT from the customer, but my boss said he liked it, as it was very clear and easy to understand. Note that he has no knowledge of BDD that I know of. I didn't put in user stories, as this would probably have been a bit too airy-fairy for people with a traditional waterfall background. Maybe I'll try putting in user stories next time.
By the way, this was not an "eyeball" UI project; it was an integration project syncing data from a web service into a database. So it shows that GWT works even for non-"eyeball" UIs.
I've been using Context-Specification style on several projects (using MSpec) with great success. I am still trying to understand the real benefits of the Scenario style. The more I use the context-specification style, the more I like it, and the tighter my applications feel.