The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, given that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions?
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better: processes like TDD and tools to automate unit testing are helping, but for many older systems, lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration are often "just make it do what it does now, but on the new platform." Not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing successive generations of the translated code using a product like Beyond Compare, to verify that each translation-tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases, and automated unit tests; the second is answered by looking at the application UI and at the results (data, web pages, reports) from both systems and comparing them (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
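To make the approval-based side of that concrete, here is a minimal sketch, assuming both the legacy VB6 app and the migrated .NET app can export a given result set (report, query output, etc.) to a text file; the file paths and helper names are illustrative, not part of any particular tool.

    // Minimal approval-style comparison of legacy vs. migrated output.
    // Assumes each system has already written its results to a text file.
    using System;
    using System.IO;

    static class SideBySideCheck
    {
        public static void AssertOutputsMatch(string legacyOutputPath, string migratedOutputPath)
        {
            var legacyLines = File.ReadAllLines(legacyOutputPath);
            var migratedLines = File.ReadAllLines(migratedOutputPath);

            // Report the first differing line so the mismatch can be traced back to a
            // specific translation-tuning change, rather than just "the files differ".
            int max = Math.Max(legacyLines.Length, migratedLines.Length);
            for (int i = 0; i < max; i++)
            {
                string legacy = i < legacyLines.Length ? legacyLines[i] : "<missing>";
                string migrated = i < migratedLines.Length ? migratedLines[i] : "<missing>";
                if (legacy != migrated)
                    throw new Exception($"Mismatch at line {i + 1}: legacy='{legacy}' vs .NET='{migrated}'");
            }
        }
    }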
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
I have read a lot about Continuous Deployment and Continuous Delivery, but I still can't fully understand how to use them properly.
(I mean, I can throw together some lines of bash and there you go, but I am not sure that's the correct way to do it.)
So these are my questions:
What are your tools, implementations and logic? How do you implement Continuous Deployment?
How do you ensure that everything you send to production actually works without unit tests (I am not allowed to write them... I know)?
Let's assume we have a project written in Angular 8 and one in ASP.NET (.NET Framework), and you have to set up Continuous Deployment and Delivery to an IIS server.
Which tools would you use, and why?
I have looked at TeamCity, Jenkins, GitLab CI/CD, Azure, etc., but none of them seem like the correct choice to me (maybe because my Continuous Deployment/Delivery commands/business logic were poor).
Now let's assume you have to update the database as well. You can use SqlPackage and a DACPAC to do it. But let's assume you have already deployed the "server" app in the previous step and the database didn't update because of some trouble with schemas. How do you handle that?
Sorry for the very long post and the (maybe stupid?) questions, but I am trying to learn how to use this properly, and unfortunately I am the only dev in my company.
There's a lot to unpack here. Continuous Deployment and Delivery are beefy subjects in themselves, before you even get to the specifics of your situation.
It might be helpful to pull apart the software delivery practices (CI/CD) from their implementations (Jenkins, TeamCity, etc.) and from the tools (server, database, framework, etc.). That's at least three separate kinds of beast to make sense of.
CI/CD exists to help businesses mitigate risk and learn from customers as fast as possible by constantly and iteratively putting software in users' hands. CI/CD has been so successful that a company benefits greatly when its engineers leverage existing services that implement these practices as much as possible.
Jenkins/TeamCity, AWS/DigitalOcean, and all these other types of services know what they're doing, so you can more or less "simply" hand your code to them - assuming everything is properly set up - and they'll pull the code, run the tests (if they exist, hopefully; otherwise what's the point), build the project, and ship it to production. This works for "rich" or "poor", big or small, complex or simple applications; it really is agnostic to the project's perceived state.
So going with the simplest option and jumping through the hoops to get everything set up is one step to take. To start quickly, sometimes the less you worry about the nitty-gritty details and the smaller the steps you take, the better; setting up a sample CI/CD pipeline with a "Hello World" type of application and a dummy unit test (because why not; see the sketch below) is a good confidence boost and can help clear up misunderstandings about that piece of the puzzle.
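For the dummy-test part, something as trivial as the following is enough to prove that the CI server restores, builds, and runs tests end to end (xUnit is assumed here; any framework your CI server understands will do):

    // A throwaway "prove the pipeline works" test: its only job is to give the
    // CI server something green to run on every push.
    using Xunit;

    public class PipelineSmokeTests
    {
        [Fact]
        public void Pipeline_CanRunTests()
        {
            Assert.Equal(4, 2 + 2);
        }
    }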
Then once that's out of the way, playing with specific tools, framework of choice, favorite technology and whatnot becomes another part of the problem to tackle.
I walked into a new job and started working on a system that made me realize, again, the importance of unit testing; and I mean proper unit testing that doesn't require a person to click buttons and interact with the test, or hit the database, WCF, or any other architectural boundary within its context. You should be able to run those tests anywhere, anytime, as many times as you like.
But this system is huge: it has about 700 projects spread across more than 300 solution files. The system directly references DLLs from a deployed location in 'Program Files', and circular references are found in numerous places. After the developers change something in the code base, they manually copy DLLs around or email them to each other. Some of the controllers are 21,000 (21K) lines of code. You could call it a 'big ball of mud'.
I showed them how and where to add a seam or two to the system that wouldn't affect anyone so we could aim towards starting to unit test the system, because they seem to agree that they need this safety net and they noticed the whole industry is talking about its benefits. But now I am starting to get resistance from fellow architects on the team when 'a new property to expose something on a class' breaks their build.
This was the change: a new property exposing an existing private field.
public INavigationComponent CurrentNavigationComponent { get { return _currentNavigationComponent; } }
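For context, this is roughly the kind of in-memory test that the seam was meant to enable; ShellController and OrdersNavigationComponent are hypothetical stand-ins (only CurrentNavigationComponent comes from the snippet above), and xUnit is assumed as the test framework.

    // Hypothetical example of the test the new property enables: no UI clicks,
    // no database, no WCF, just an in-memory assertion on navigation state.
    using Xunit;

    public class NavigationTests
    {
        [Fact]
        public void NavigatingToOrders_SetsCurrentNavigationComponent()
        {
            var shell = new ShellController();      // hypothetical class under test
            shell.NavigateTo("Orders");             // hypothetical action

            Assert.IsType<OrdersNavigationComponent>(shell.CurrentNavigationComponent);
        }
    }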
An architect and one of his developers didn't have the patience (or know-how) to figure out where they had an old DLL, so they asked me to remove the property, which I did, and he still couldn't build.
So if the team is not willing to add a single property to a class which would enable another level of testing, then I feel like I am preaching to a wall.
Is a unit testable system too strict (or optimistic) a requirement for consideration of a new work environment?
I'm guessing people would respond, 'It is my/your job to sell the concept and benefits of unit testing to the team, and if you can't do that, then...'.
Welcome to the real world.
In my experience comprehensive unit testing typically increases development time by a factor of ~5; the jury is still out on how much time it saves in the long term. If you work in an industry where quality control is paramount (aviation and medicine are two that immediately spring to mind) then it's a no-brainer, but for most industries there are many other factors at play including time-to-market issues, cash flow, marketing requirements, competition forces and a whole lot of other stuff that bears no relation to actual programming whatsoever.
It's important to keep things in perspective. As a new hire it's not your job to tell them how to develop their systems. The best you can do is demonstrate that you'll work with your team and the other departments for the good of the company, even if it means having to make compromises. Don't get me wrong, it's your responsibility to communicate to your superiors your concerns about the project, particularly when you think those concerns may have long-lasting ramifications. But then it's also your job to accept whatever decisions are made, even when they go against you, and do your job the best you can regardless.
Bottom line: being a good coder is only one small part of being a good programmer.
I am interested in writing a generic Intellisense enabled editor for SQL and C# (et al. if possible!). I would like to do this in C# as an overridden or extended WPF richTextBox-type control. I know there are many example projects available and I have implemented a basic version of my own; but most of the examples that I have come across (and indeed my own) are just that, basic.
A couple of code examples are:
DIY Intellisense By yetanotherchris
CodeTextBox - another RichTextBox control with syntax highlighting and Intellisense By Tamas Honfi
I have, however, found a great example of an SQL editor with Intellisense: QueryCommander SQL Editor by Mikael Håkansson, which seems to work well. Microsoft must use an XML library of command keywords, but my question is: How (in detail) do Microsoft implement their Intellisense (as-you-type Intellisense), and how hard would it be for me to create my own of the same standard?
Edit A: A year on, and I have managed to develop my own editor control with basic intellisense, mainly for my own "enjoyment". I thought I would come back and provide a list of freely available .NET projects that helped me with my own development and can be used out-of-the-box and free of charge:
ICSharpCode (WinForms)
AvalonEdit (WPF)
ScintillaNET (WinForms)
Query Commander [for example of intellisense implementation] (WinForms)
Edit B: 15 months after the question was asked, I am still looking for new, improved editors. This one is nice...
RoslynPad is cool!
Edit C: 2 years+ on from the question, I have found the following projects, both using WPF and backed by AvalonEdit.
CodeCompletion for AvalonEdit using NRefactory. This project is really nice and has a full implementation of intellisense using NRefactory.
ScriptCS ScriptCS makes it easy to write and execute C# with a simple text editor.
How (in detail) do Microsoft implement their as-you-type Intellisense?
I can describe it to any level of detail you care to name, but I don't have the time for more than a brief explanation. I'll explain how we do it in Roslyn.
First, we build an immutable model of the token stream using a data structure that can efficiently represent edits, since obviously edits are precisely what there are going to be a lot of.
The key insight to making it efficient for persistent reuse is to represent the character lengths of the tokens but not their character positions in the edit buffer; remember, a token at the end of the file is going to change position on every edit but the length of the token does not change. You must at all costs minimize the number of total re-lexings if you want to be efficient on extremely large files.
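As a toy illustration of the "store lengths, not positions" idea (this is a sketch of the principle, not Roslyn's actual internal representation):

    // Each token records only its width. A token near the end of the file can be
    // reused unchanged after an edit near the start, because its absolute position
    // is never stored; it is recomputed on demand by summing the preceding widths.
    using System.Collections.Generic;

    public sealed class Token
    {
        public Token(string kind, int width) { Kind = kind; Width = width; }
        public string Kind { get; }
        public int Width { get; }    // character length only, no offset
    }

    public static class TokenStream
    {
        public static int PositionOf(IReadOnlyList<Token> tokens, int index)
        {
            int position = 0;
            for (int i = 0; i < index; i++)
                position += tokens[i].Width;
            return position;
        }
    }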
Once you have an immutable model that can handle inserts and deletions to build up an immutable token stream without re-lexing the entire file every time, you then have to do the same thing, but for grammatical analysis. This is in practice a considerably harder problem. I recommend that you obtain an undergraduate or graduate degree in computer science with an emphasis on parser theory if you have not already. We obtained the help of people with PhDs who did their theses on parser theory to design this particular bit of the algorithm.
Then, obviously, build a grammatical analyzer that can analyze C#. Remember, it has to analyze broken C#, not correct C#; IntelliSense has to work while the program is in a non-compiling state. So start by coming up with modifications to the grammar that have good error-recovery characteristics.
OK, so now you've got a parser that can efficiently do grammatical analysis without re-lexing or re-parsing anything but the edited region, most of the time, which means that you can do the work between keystrokes. I forgot to mention, of course you will need to come up with some mechanism to not block the UI thread while doing all of these analyses should the analysis happen to take longer than the time between two keystrokes. The new "async/await" feature of C# 5 should help with that. (I can tell you from personal experience: be careful with the proliferation of tasks and cancellation tokens. If you are careless, it is possible to get into a state where there are tens of thousands of cancelled tasks pending, and that is not fast.)
Now that you've got a grammatical analysis you need to build a semantic analyzer. Since you are only doing IntelliSense, it does not need to be a particularly sophisticated semantic analyzer. (Our semantic analyzer must do an analysis suitable for generating code from correct programs and correct error analysis from incorrect programs.) But of course, again it has to do good semantic analysis on broken programs, which does increase the complexity considerably.
My advice is to start by building a "top level" semantic analyzer, again using an immutable model that can persist the state of the declared-in-source-code types from edit to edit. The top level analyzer deals with anything that is not a statement or expression: type declarations, directives, namespaces, method declarations, constructors, destructors, and so on. The stuff that makes up the "shape" of the program when the compiler generates metadata.
Metadata! I forgot about metadata. You'll need a metadata reader. You need to be able to produce IntelliSense on expressions that refer to types in libraries, obviously. I recommend using the CCI libraries as your metadata reader, and not Reflection. Since you are only doing IntelliSense, obviously you don't need a metadata writer.
Anyway, once you have a top-level semantic analyzer, then you can write a statement-and-expression semantic analyzer that analyzes the types of the expressions in a given statement. Pay particular attention to name lookup and overload resolution algorithms. Method type inference will be particularly tricky, especially inside LINQ queries.
Once you've got all that, an IntelliSense engine should be easy; just work out the type of the expression at the current cursor position and display a dropdown appropriately.
how hard would it be for me to create my own of the same standard?
Well, we've got a team of, call it ten people, and it'll probably take, call it five years altogether to get the whole thing done from start to finish. But we have lots more to do than just the IntelliSense engine. That's maybe only 40% of the work. Oh, and half those people work on VB, now that I think about it. But those people have on average probably five or ten years' experience in doing this sort of work, so they're faster at it than you will be if you've never done this before.
So let's say it should take you about ten to twenty years of full time work, working alone, to build a Roslyn-quality IntelliSense engine for C# that can do acceptably-close-to-correct analysis of large programs in the time between keystrokes.
Longer if you need to do that PhD first, obviously.
Or, you could simply use Roslyn, since that's what it's for. That'll take you probably a few hours, but you don't get the fun of doing it yourself. And it is fun!
You can download the preview release here:
http://www.microsoft.com/download/en/details.aspx?id=27746
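For a rough idea of what the "just use Roslyn" route looks like today, here is a minimal sketch of "work out the type of the expression and list its members", assuming the Microsoft.CodeAnalysis.CSharp NuGet package; a real IntelliSense engine would additionally filter by accessibility and cope with broken code.

    // Parse a snippet, bind it, find the expression of interest, and list the
    // members of its type - roughly what a completion dropdown would show.
    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class CompletionSketch
    {
        static void Main()
        {
            var code = @"class C { void M() { var s = ""hello""; var n = s.Length; } }";
            var tree = CSharpSyntaxTree.ParseText(code);
            var compilation = CSharpCompilation.Create("demo",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
            var model = compilation.GetSemanticModel(tree);

            // The "s" in "s.Length" stands in for the expression at the cursor.
            var identifier = tree.GetRoot().DescendantNodes()
                .OfType<IdentifierNameSyntax>()
                .First(id => id.Identifier.Text == "s");

            var type = model.GetTypeInfo(identifier).Type;
            foreach (var member in type.GetMembers().Take(15))
                Console.WriteLine(member.Name);
        }
    }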
This is an area where Microsoft typically produces great results - Microsoft developer tools really are awesome. And there is a clear commercial advantage for sales of their developer tools and for sales of Windows to having the best intellisense so it makes sense for Microsoft to devote the kind of resources Eric describes in his wonderfully detailed answer. Still, I think it's worth pointing out a few of things:
Your customers may not actually need all the features that Microsoft's implementation provides. The Microsoft solution might be incredibly over-engineered in terms of the features that you need to provide to your customers/users. Unless you're actually implementing a generic coding environment that is intended to be competitive with Visual Studio, it is likely that there are aspects of your intended use that either simplify the problem, or that allow you to make compromises on the solution that Microsoft feels they cannot make. Microsoft will likely spend resources decreasing response times that are already measured in hundreds of milliseconds. That may not be something you need to do. Microsoft is spending time on providing an API for others to use for code analysis. That's likely not part of your plan. Prioritize your features and decide what "good enough" looks like for you and your customers then estimate the cost of implementing that.
In addition to bearing the obvious costs of implementing requirements that you may not actually have, Microsoft also carries some costs that may not be obvious if you haven't worked in a team. There are huge communication costs associated with teams. It's actually incredibly easy to have five smart people take longer to produce a solution than it takes for a single smart person to produce the equivalent solution. There are aspects of Microsoft's hiring practices and organizational structure that make this scenario more likely. If you hire a bunch of smart people with egos and then empower all of them to make decisions, you too can get a 5% better solution for 500% of the cost. That 5% better solution might be profitable for Microsoft, but it could be deadly for a small company.
Going from a 1 person solution to a 5 person solution increases the costs, but that's just the intra-team development costs. Microsoft has separate teams that are devoted to (roughly) design, development, and testing even for a single feature. The project-related communication between peers across these boundaries has higher friction than within each of the disciplines. This not only increases communication costs between individuals, but it also results in larger team sizes. And more than that - since it's not a single team of 12 individuals, but is instead 3 teams of 5 individuals, there is 3x the upward communication cost. More costs that Microsoft has chosen to carry that may not translate to similar costs for other companies.
My point here is not to describe Microsoft as an inefficient company. My point is that Microsoft makes a ton of decisions about everything from hiring, to team organization, to design and implementation that start from assumptions about profitability and risk that simply do not apply to companies that are not Microsoft.
In terms of the intellisense thing, there are various ways of thinking about the problem. Microsoft is producing a very generic, reusable solution that doesn't just solve intellisense, but also targets code navigation, refactoring, and various other uses for code analysis. You don't need to do things the same way if your sole goal is to make it easy for developers to enter code without having to type much. Targeting that feature doesn't take years of effort and there are all sorts of creative things you can do if you're not just providing an API, but you actually control the UI too.
I'm new to BDD, but I find it very interesting and want to develop my next project using BDD. After googling and watching screencasts, I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I have seen were written in terms of the UI (imperative):
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons. I think it's a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer the declarative style, how do you describe things like the 'Home page' or 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I have seen used WatiN, White, or something like that to implement scenarios from the user's point of view: starting the browser, clicking buttons. I think it's extremely slow and brittle. Well, I could use something like the Page Object pattern to make the tests less brittle, but that's another chunk of work, especially for desktop applications with a complex UI.
How do you implement scenarios in real-life projects - exercising UI, or via testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to be in the system (e.g. some products for a shop application). How do you implement that part: by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links to the questions.
Declarative is the proper way, IMO. If you're talking about .aspx page file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find value in. This is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement: it's what your system must do. The only way to really tell if you've done this is to in fact exercise the UI. BDD/SpecFlow stories to me don't replace unit tests; rather, they're your integration tests. If you break this, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. That can't tell you whether A and B actually work together all the time (in theory it should; in practice interesting [read: unexpected] things happen when you actually have the two parts playing with each other). Automated end-to-end tests can help with your QA as well. If a functional area breaks, you know about it, and QA can spend their time in other areas of the application while you determine what broke the integration tests.
This is a tricky one. We've done a hybrid approach. We do use the database (integration tests, after all, test the system functioning as one thing rather than as individual components), but rather than resetting configurations all the time, we use Deleporter to replace our repositories with Moq stubs when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
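For the repository-stubbing half of that hybrid, the general shape is something like the sketch below, assuming Moq; IProductRepository and Product are illustrative names, and in our case the swap actually happens remotely via Deleporter rather than in-process as shown here.

    // Build a stubbed repository with known data for a "Given some products" step.
    using System.Collections.Generic;
    using Moq;

    public interface IProductRepository { IEnumerable<Product> GetAll(); }
    public class Product { public string Name { get; set; } }

    public static class GivenSomeProducts
    {
        public static IProductRepository Build()
        {
            var repository = new Mock<IProductRepository>();
            repository.Setup(r => r.GetAll()).Returns(new[]
            {
                new Product { Name = "Anvil" },
                new Product { Name = "Rocket skates" }
            });
            return repository.Object;   // handed to the controller under test
        }
    }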
Edit: I found this article just now describing the concept of limiting yourself to talking only about specific domains to avoid brittle scenarios.
His main point is that the minimum number of domains you can talk about is two: the problem domain and the solution domain. If you're talking about anything outside those two domains, then you involve too many stakeholders, you introduce too much noise, and you make your scenarios brittle.
He also mentions that a purely "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of abstraction, you can "chunk up" or "chunk down". This means you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?" You chunk down by asking "how will we do this?" So I guess I wouldn't get too hung up on declarative vs. imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domains each term belongs in, possibly by identifying which stakeholder is the expert for the domain that term belongs in. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before Edit: (some of this still applies. The "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way to think for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about earlier on in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
how do you describe things like the 'Home page' or 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think it's extremely slow and brittle
Yes, it is. Perform every high level scenario/requirement with the UI and full DB integration, but don't exercise every single code path with end to end UI automation, and certainly not edge cases. If you do, you'll spend more time getting them working, and a lot less time actually testing your application.
You can architect your application so you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes.
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
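Steps like those might be bound along the following lines with SpecFlow; the IAuthenticationDriver abstraction is hypothetical (and would need to be registered with SpecFlow's container), it is simply a seam that lets the same declarative step drive either the real UI or a controller/presenter.

    // Hypothetical SpecFlow bindings for the declarative scenario above.
    using TechTalk.SpecFlow;
    using Xunit;

    public interface IAuthenticationDriver
    {
        void LogIn(string userName, string password);
        bool IsShowingTopSellingProducts();
    }

    [Binding]
    public class LoginSteps
    {
        private readonly IAuthenticationDriver _driver;   // UI driver or controller wrapper

        public LoginSteps(IAuthenticationDriver driver) { _driver = driver; }

        [When(@"I log in")]
        public void WhenILogIn() => _driver.LogIn("a.user", "a.password");

        [Then(@"I should see an overview of ACME's top selling products")]
        public void ThenIShouldSeeTopSellingProducts() =>
            Assert.True(_driver.IsShowingTopSellingProducts());
    }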
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.
Having written a small article on BDD, I got questions from people asking whether there are any cases of large-scale use of BDD (and specifically NBehave).
So my question goes to the community: do you have a project that used BDD successfully? If so, what benefits did you get, and what could have been better? Would you do BDD again? Would you recommend it to other people?
We've used a degree of BDD at the code level in different scenarios (open source and ND projects).
Telling the view, in an MVC scenario, what kind of input to accept from the user (DDD and Rule driven UI Validation in .NET):
result = view.GetData(
CustomerIs.Valid,
CustomerIs.From(AddressIs.Valid, AddressIs.In(Country.Russia)));
Telling the service layer about the exception-handling behavior (the ActionPolicy is injected into the decorators):
var policy = ActionPolicy
.Handle<WebException>()
.Retry(3);
Using these approaches has immensely reduced code duplication and made the codebase more stable and flexible. Additionally, it has made everything simpler, thanks to the logical encapsulation of complex details.
I was on a small team that used BDD on a website.
The way we used it was essentially TDD, but the tests are simply written as behaviors using a DSL. We did not get into large upfront design of behaviors, but we did create a large number of them, and used them exactly as you would tests.
As you might expect, it worked much like TDD, and generally worked well. Phrasing the tests as behaviors was nice when interacting with the customers and made for a pretty decent document, but I kind of wish the behaviors were written in English and the tests programmed, instead of trying to come up with some difficult intermediate language that doesn't fit either purpose perfectly.
It would still be BDD, just without this cute trick of trying to twist the language into a language delineated by a random_looking.set of_Punctuation rather_than simple.spaces, but that was only my grumpy-old-programmer attitude, everyone else was 100% happy with it.
The site is available and fully operational, so I'd call it a success: Have a look
I recently used the BDD style of GWT in a high-level requirements document. I didn't get any feedback about the GWT from the customer, but my boss said he liked it, as it was very clear and easy to understand. Note that he has no knowledge of BDD that I know of. I didn't put in user stories, as that would probably have been a bit too airy-fairy for people with a traditional waterfall background. Maybe I'll try putting in user stories next time.
By the way, this was not an "eyeball UI" project; it was an integration project syncing data from a web service into a database. So it shows that GWT works even for non-"eyeball" UIs.
I've been using Context-Specification style on several projects (using MSpec) with great success. I am still trying to understand the real benefits of the Scenario style. The more I use the context-specification style, the more I like it, and the tighter my applications feel.