I read a lot about Continuous Deployment and Continuous Delivery, but I still can't fully understand how to use them PROPERLY.
(I mean, I can throw together a few lines of bash and there you go, but I'm not sure that's the correct way to do it.)
So this is the question:
What are your tools, implementations and logic? How do you implement Continuous Deployment?
How do you ensure that everything you ship to production actually works without unit testing (I'm not allowed to write unit tests... I know...)?
Let's assume we have one project written in Angular 8 and one in ASP.NET Framework, and you have to set up Continuous Deployment and Delivery to an IIS server.
Which tools would you use, and why?
I've looked at TeamCity, Jenkins, GitLab CI/CD, Azure, etc., but none of them seemed like the right choice to me (maybe because my Continuous Deployment/Delivery commands/business logic were poor).
Now let's assume you have to update the database as well. You can use SqlPackage and a dacpac to do it. But suppose you have already deployed the "server" app in the previous step and the database didn't update because of some trouble with the schemas. How do you handle that?
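For concreteness, the kind of step I'm imagining looks roughly like this (the paths, names, and connection string are invented); my instinct is to run the database step first and stop the pipeline if it fails, but I don't know if that's the right way to handle it:

```powershell
# Rough sketch of the ordering I have in mind (all paths, names, and connection strings are made up).
# Upgrade the database first; only touch the IIS site if the schema upgrade succeeded.
& "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:"C:\build\MyDb.dacpac" `
    /TargetConnectionString:"Server=.;Database=MyDb;Integrated Security=True"

if ($LASTEXITCODE -ne 0) {
    Write-Error "Database upgrade failed - stopping before the app is deployed."
    exit 1   # fail the pipeline so the old app keeps running against the old schema
}

# Only now deploy the ASP.NET app (msdeploy, robocopy to the IIS site folder, etc.)
```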
Sorry for the very LONG post and these (maybe stupid?) questions, but I'm trying to learn how to do this properly and unfortunately I'm the only dev in my company.
There's a lot to unpack here. Continuous Deployment and Delivery are beefy subjects in themselves, even before getting into the specifics of your situation.
It might be helpful to pull apart the software delivery practices (CI/CD) from their implementations (Jenkins, TeamCity, etc.) and from the tools (server, database, framework, etc.). That's at least three separate kinds of beast to make sense of.
CI/CD exists to help a business mitigate risk and learn from customers as fast as possible by constantly and iteratively putting software in users' hands. The practice has been so successful that a company benefits greatly when its engineers leverage existing services that implement it as much as possible.
Jenkins/TeamCity, AWS/DigitalOcean, and all these other kinds of services know what they're doing, so you can more or less "simply" hand your code to them - assuming everything is properly set up - and they'll pull the code, run the tests (hopefully they exist, otherwise what's the point), build the project, and ship it to production. This works for "rich" or "poor", big or small, complex or simple applications; it really is agnostic to the project's perceived state.
So going with the simplest option and jumping through the hoops to get everything set up is one step to take. To start quickly, sometimes the less you worry about the nitty-gritty details and the smaller the steps you take, the better; setting up a sample CI/CD pipeline with a "Hello World" type of application and a dummy unit test (because why not) is a good confidence boost and can clear up some misunderstandings about that piece of the puzzle.
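As a concrete starting point, here's a minimal sketch of the kind of script any of those servers could be told to run; the solution and project names are made up, and it assumes the .NET Framework build tools and NuGet are already on the build agent:

```powershell
# Minimal build-and-test script a CI server (Jenkins, TeamCity, GitLab CI, ...) could invoke.
# All names are hypothetical; the point is simply "restore, build, test, fail loudly".
nuget restore .\HelloWorld.sln
if ($LASTEXITCODE -ne 0) { exit 1 }

msbuild .\HelloWorld.sln /p:Configuration=Release /verbosity:minimal
if ($LASTEXITCODE -ne 0) { exit 1 }

# Run the dummy unit test project so the pipeline has a test stage from day one.
vstest.console.exe .\HelloWorld.Tests\bin\Release\HelloWorld.Tests.dll
exit $LASTEXITCODE
```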
Then, once that's out of the way, playing with the specific tools, framework of choice, favorite technology, and whatnot becomes the next part of the problem to tackle.
Are there any best practices for enforcing a TFS check-in policy? Are there any good guides on how to implement various types of policies as well as their pros and cons?
Things I'd particularly like to do are ensuring that the code compiles (note that compilation can take up to five minutes) and that obvious bits of the coding standards are followed (summary tags must exist, naming conventions are followed, etc.).
TFS 2010 (and 2008, though I have not used 2008) allows a gated check-in, which forces a build to succeed before the code is actually checked in.
Activating this is a (reasonably) straightforward process; see, for example, these guides:
http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx
http://intovsts.net/2010/04/18/the-gated-check-in-build-in-tfs2010/
There is a prerequisite step to make all of this happen: setting up a TFS build server. That can be a complex process depending on your infrastructure, etc. Here is an MSDN guide:
http://msdn.microsoft.com/en-us/library/ms181712.aspx
The pros are that the code in the repository can be reasonably stable. For a large team this can save a LOT of time.
There are a number of cons worth weighing against this benefit. Firstly, the installation and maintenance of an extra build server. This includes disk space allocation, patches, etc.
Secondly, there is the extra time required for each person to check in a file. Waiting for a build to succeed before the code is checked in (and available for others to get) can take a while.
Thirdly, when (not if) the build server is not available, a contingency plan needs to be in place to allow developers to continue their work.
There is a lot of extra process required to reap the rewards of gated check-ins. However, if this process is governed properly, it can lead to a much smoother development cycle.
Although we do not use gated check-ins, we do use a TFS build server for continuous integration with scheduled builds. This minimises the minute-to-minute dependency on the build server while ensuring (with reasonable effectiveness) that when a build breaks, we are notified and can rectify it ASAP. This method empowers the developers to understand code integration and how to avoid breaking the code in the repository.
I think the premise of this question is somewhat wrong. I think a good question of this nature should be something along the lines of: my team is having a problem with code stability, conflicting change-sets, developers not running tests, poor coverage, or other metric reporting to management, and we'd like to use TFS to help solve that issue (or those issues). Yes, I do realize that the OP stated that ensuring compilation is considered a goal, but that comes part and parcel with having an automated build server.
I would question any feature that adds friction to a developer's work cycle without a clearly articulated purpose. Although I've never used them, gated check-ins sound like a feature in search of a problem. If the stability of your codebase is impacting developer productivity and you can't fix it by changing the componentization of your software, the dev team structure, or the branching strategy, then I guess it's a solution. I've worked in a large shop on a global project where ClearCase was the mandated tool, and I've encountered that kind of corporate-induced fail, but the team I worked on didn't go there quietly or willingly.
The ideal policy is not to have one. Let developers work uninhibited and with as little friction as possible. Code reviews do much more than a set of rules enforced by a soulless server ever will. A team that supports testing and is properly structured will do more for stability than a gated check-in will ever achieve. Tools that support branching and local check-ins make it easier for developers to try new things without fear of breaking the build, and that helps mitigate the kind of technical debt that kills large projects.
You should look at chapter 8 of "Patterns & practices: Team Development with Visual Studio Team Foundation Server"
http://tfsguide.codeplex.com/
The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how to deliver quality migrated code, in light of the fact that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, but then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better; processes like TDD and tools to automate unit testing are helping, but for many older systems lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration often are "just make it do what it does now, but on the new platform." Not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code using a product like BeyondCompare, to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
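To make the side-by-side idea concrete, here's a rough sketch of the comparison step, assuming both applications have been run against the same test data and have exported their results to text files (all paths are hypothetical):

```powershell
# Hypothetical sketch: diff the exported outputs of the legacy VB6 app and the migrated .NET app.
$legacy   = Get-Content .\results\vb6\customers-report.txt
$migrated = Get-Content .\results\dotnet\customers-report.txt

$diff = Compare-Object -ReferenceObject $legacy -DifferenceObject $migrated
if ($diff) {
    $diff | Format-Table -AutoSize
    Write-Error "Side-by-side outputs differ - investigate before signing off this area."
} else {
    Write-Host "Outputs match for this scenario."
}
```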
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
I will be taking on the role of support for a complex application that is transitioning from the development team. This application is a SharePoint solution that connects to several (7) web services. The development team is rolling off almost immediately and will be available only for small questions.
I'm new to this role, so I'm wondering what suggestions you have for me as I take on this large project. What are some considerations that should be made so that the transition to support is smooth and uninterrupted?
I've been reading the documentation, but I can already see some gaps that need to be filled. The application is very (perhaps overly) configurable and there is a lot of injected code. Stepping through the code is about the only way I can gain an understanding of what is actually happening.
It sounds like you've already got your environment set up if you're able to debug the application, so that's the first thing I was going to suggest in a knowledge-transfer situation. Some general things that I would get from the developers before they depart:
A list of third-party components that the application uses, along with license information and website logins if applicable.
Access to every part of the environment that this thing runs on, both production and development. That means the source code management system, database server(s), etc. It sounds like you have some of these already but make sure you get access to absolutely everything.
If your development environment was given to you "as is" (i.e. you took it over from one of the departing developers), make sure you know how to rebuild it from scratch. They might have a document that describes the process of building a development box, but if not maybe you can get them to show you how to set up a fresh machine.
Point 3 will go a long way towards this, but if setting up a server to run the application is different in any way from setting up a development environment, you'd want to know how, so you can diagnose server configuration issues if they crop up, or even rebuild a server. Although this sort of thing may be someone else's responsibility, depending on your organization.
Once you have those, you probably want to get some understanding of why the application does the things that it does. That will give you the context you need to understand support and enhancement requests when they come in.
Are the original developers the only source of this information, or are there business people who you will be working with after the developers leave? One of the first things I try to do when starting on an existing application that's new to me is to find someone who knows the business well and have them give me a high-level run-down of the application's purpose in life. From there you can go into more detail on individual components/features/whatever as needed. The business people may be a better source for this information than the developers are, so you may want to try them first.
Hopefully some of that helps.
If you're not the systems admin (as opposed to the SharePoint admin), develop an understanding with them of what tasks you are able to do and what you need of them.
This may include things like stopping and starting services (IIS, Timer Service, etc.) and filesystem and DB monitoring and maintenance. Getting this sorted out up front saves a lot of pain later.
If the sys admins don't have some understanding of SharePoint, educate them. They will need to know what the deal is with things like code deployments.
It's best not to feel my pain.
I've got a desktop application written in C#, created using VS2008 Pro and unit tested with the NUnit framework and the TestDriven.NET plugin for VS2008. I need to conduct system testing on the application.
I've previously done web based system tests using Bad Boy and Selenium plugin for Firefox, but I'm new to Visual Studio and C#.
I would appreciate if someone could share their advice regarding this.
System testing will likely need to be done via the UI. This gives you two options:
1) You can manually conduct the test cases by clicking on elements.
2) You can automate the test cases by programming against the UI. There are plenty of commercial tools to do this or you can use a programming framework like the Microsoft UI Automation Framework. These tend to use the accessibility APIs built into Windows to access your UI.
Whether you go the manual or automated route depends on how many times you will be running the tests. If you are just going to run them once or twice, don't spend the time automating. You will never earn it back. If you are going to run them often, automating can be very handy.
A word of caution: Automating the UI isn't hard, but it is very brittle. If the application is changing a lot, the tests will require a lot of maintenance.
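To give a flavour of option 2, here's a rough sketch that drives a window through the Windows UI Automation APIs from PowerShell; the window title and button AutomationId are invented, and a real test would need waits, error handling, and assertions:

```powershell
# Rough sketch of UI automation via the Windows accessibility APIs (window title and
# AutomationId below are hypothetical).
Add-Type -AssemblyName UIAutomationClient
Add-Type -AssemblyName UIAutomationTypes

$ae   = [System.Windows.Automation.AutomationElement]
$root = $ae::RootElement

# Find the application's main window by its title.
$windowCondition = New-Object System.Windows.Automation.PropertyCondition $ae::NameProperty, "My Desktop App"
$window = $root.FindFirst([System.Windows.Automation.TreeScope]::Children, $windowCondition)

# Find a button by its AutomationId and click it via the Invoke pattern.
$buttonCondition = New-Object System.Windows.Automation.PropertyCondition $ae::AutomationIdProperty, "saveButton"
$button = $window.FindFirst([System.Windows.Automation.TreeScope]::Descendants, $buttonCondition)
$invoke = $button.GetCurrentPattern([System.Windows.Automation.InvokePattern]::Pattern)
$invoke.Invoke()

# A real test would then read another element's state back and assert on it.
```

Even this tiny sketch shows the brittleness mentioned above: rename the window or the button and the test breaks.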
As Thomas Owens commented on your question, first you must decide what kind of system testing you want to do. But assuming you want to start with functional system tests: prepare the use cases you want to automate, then find the proper tool.
Just for a start:
AutoIt – not a test automation tool as such, but it lets you automate some tasks, so you could record/script use cases. Not really recommended, but it can be done.
HP QuickTest Pro – this can easily be done via recording/scripting, but it is expensive, so maybe not worth it for personal use.
IBM Robot – same story as HP QTP.
PowerShell – you could write scripts in PowerShell and execute them. If you use dedicated IDE-like tools for PowerShell you can record tests as well. I did some web automation via PowerShell and it worked; with a bit of work you could probably script around your desktop app.
And the best would be to try different tools, and use one that suits you best. Try this link and this link.
System tests usually have use cases, end to end scenarios and other scripted functions that real people execute. These are the tests that don't lend themselves well to automation as they are asking your unit-tested cogs to work with each other. You might have great unit tests for your "nuts" and your "wrenches" but only a comprehensive system test will let you know if you have the right sized wrench for the nut at hand, how to select/return it from/to the drawer, etc.
In short - manual tests.
If you're willing to put money down, you could look at something like TestComplete.
Although I haven't really used it yet (our company just bought it), it seems quite nice. You can record clicks and keypresses and stuff, define success criteria, and it will replay the test for you later. It appears to be quite smart about UI changes - it remembers which button you clicked, not just the (x,y) of each click.
It's scriptable, or drag-and-drop programmable.
I'm not affiliated in any way, and this is not an endorsement, because I haven't really formed an opinion of it yet.
Perhaps NUnitForms could be useful for you?
I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:
What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
What kind of hardware will I need for this?
Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
How often should we make this kind of build?
How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
Is there anything else I'm not seeing here?
I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.
EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX.
Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
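If you've never done it, the exercise boils down to something like the following (the solution name is made up); once this works in a plain console, Hudson can run exactly the same commands:

```powershell
# Restore packages and build the solution from a plain console,
# exactly as the CI server will (the solution name is hypothetical).
nuget restore .\MyProduct.sln
msbuild .\MyProduct.sln /t:Rebuild /p:Configuration=Release
```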
Update: Jenkins is the most up to date version of Hudson. Everyone should be using Jenkins now. I'll be updating the links accordingly.
Hudson is free and extremely easy to configure and will easily run on a VM.
Partly from an old post of mine:
We use it to
Deploy Windows services
Deploy web services
Run MSTests & display as much information as any JUnit tests
Keep track of low/med/high tasks
Trend-graph warnings and errors
Here is some of the built-in .NET stuff that Hudson supports:
MSBuild
NAnt
MSTest
NUnit
Team Foundation Server
FxCop
StyleCop
compiler warnings
code tasks
Also - God forbid you are using Visual SourceSafe - it supports that as well. I'd recommend you take a look at Redsolo's article on building .NET projects using Hudson.
Your questions
Q: What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
A: I just installed Visual Studio on a fresh copy of a VM running a fresh, patched install of a Windows Server OS, so you'd need the licenses to handle that. Hudson will install itself as a Windows service and run on port 8080, and you will configure how often you want it to scan your code repository for updated code, or you can tell it to build at a certain time. It's all configurable through the browser.
Q: What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
A: You will get an email on the first time a build fails, or becomes unstable. A build is unstable if a unit test fails or it can be marked unstable through any number of criteria that you set. When a unit test or build fails you will be emailed and it will tell you where, why and how it failed. With my configuration, we get:
list of all commits since the last working build
commit notes of those commits
list of files changed in the commits
console output from the build itself, showing the error or test failure
Q: What kind of hardware will I need for this?
A: A VM will suffice
Q: Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
A: Hudson can do whatever you want with it; that includes ID'ing it via the MD5 hash, uploading it, copying it, archiving it, etc. It does this automatically and provides you with a long-running history of build artifacts.
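Hudson does the fingerprinting for you, but if you ever want to check by hand that an artifact you grabbed matches what the build server produced, it's a one-liner (the path is made up):

```powershell
# Compute the MD5 fingerprint of a build artifact (the path is hypothetical)
# and compare it against the one the build server recorded.
Get-FileHash -Algorithm MD5 .\artifacts\MyApp-build342.zip
```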
Q: How often should we make this kind of build?
A: We have ours poll SVN every hour, looking for code changes, and then run a build. Nightly is OK, but somewhat worthless IMO, since what you worked on yesterday won't be fresh in your mind in the morning when you get in.
Q: How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
A: That's up to you. After a while I move our build artifacts to long-term storage or delete them, but I keep all the data stored in text/XML files; this lets me keep the changelog, trend graphs, etc. on the server with very little space consumed. You can also set Hudson up to only keep artifacts from a trailing number of builds.
Q: Is there anything else I'm not seeing here?
A: No. Go get Hudson right now; you won't be disappointed!
We've had great luck with the following combo:
Visual Studio (specifically, using the MSBuild.exe command-line tool and passing it our solution files; this removes the need for MSBuild scripts)
NAnt (we like the XML syntax/task library better than MSBuild's. It also has tasks for P4 source control operations)
CruiseControl.NET - built-in web dashboard for monitoring/starting builds.
CCNet has built-in notifiers to send emails when builds succeed/fail.
On justification: This takes the load off developers doing manual builds and does a lot to take human error out of the equation. It is very hard to quantify this effect, but once you do it you will never go back. Having a repeatable process to build and release software is paramount. I'm sure you've been places where they build the software by hand and it gets out in the wild, only to have your build guy say "Oops, I must have forgotten to include that new DLL!"
On hardware: as powerful as you can get. More power/memory = faster build times. If you can afford it you'll never regret getting a top-notch build machine, no matter how small the group.
On space: Helps to have plenty of hard disk space. You can craft your NAnt scripts to delete intermediate files every time a build starts, so the real issue is keeping log histories and old application installers. We have software that monitors disk space and sends alerts. Then we clean up the drive manually. Usually needs to be done every 3-4 months.
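That manual cleanup can also be scripted; here's a rough sketch of the kind of housekeeping we mean (the path and the 90-day cutoff are arbitrary):

```powershell
# Delete build output folders older than 90 days, leaving logs and reports alone
# (the path and retention period are made up - adjust to taste).
$cutoff = (Get-Date).AddDays(-90)
Get-ChildItem -Path 'D:\builds\artifacts' -Directory |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item -Recurse -Force
```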
On build notifications: This is built in to CCNet, but if you are going to add automated testing as an additional step then build this into the project from the get-go. It is extremely hard to back fit tests once a project gets large. There is tons of info on test frameworks out there (probably a ton of info on SO as well), so I'll defer on naming any specific tools.
At my previous workplace we used TeamCity. It's very easy and powerful to use, and it can be used for free with some restrictions. There is also a tutorial on Dime Casts. The reason we didn't use CruiseControl.NET is that we had a lot of small projects and it's quite painful to set each one up in CC.NET. I would highly recommend TeamCity. To summarize: if you lean toward open source, then CC.NET is the granddaddy, with a slightly steeper learning curve. If your budget allows, definitely go with TeamCity, or check out the free version.
How? Have a look at Carel Lotz's blog.
Why? There are several reasons that I can think of:
A working build, when properly implemented, means that all your developers can build on their machine when the build is green
A working build, when properly implemented, means that you are ready to deploy at any time
A working build, when properly implemented, means that whatever you release has made a trip to your source control system.
A working build, when properly implemented, means that you integrate early and often, reducing your integration risk.
Martin Fowler's article on Continuous Integration remains the definitive text. Have a look at it!
The main argument in favour is that it will cut the cost of your development process, by alerting you as soon as possible that you have a broken build or failing tests.
The problem of integrating the work of multiple developers is the main danger of growing a team. The larger the team gets, the harder it is to coordinate their work and stop them messing with each other's changes. The only good solution is to tell them to "integrate early and often", by checking in small units of work (sometimes called "stories") as they are completed.
You should make the build machine rebuild EVERY time someone checks in, throughout the day. With Cruise Control, you can get an icon on your task bar that turns red (and even talks to you!) when the build is broken.
You should then do a nightly full clean build where the source version is labeled (given a unique build number) that you can choose to publish to your stakeholders (product managers, QA people). This is so that when a bug is reported, it is against a known build number (that's extremely important).
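Cruise Control can generate the label for you, but as a rough sketch of the principle (the numbering scheme and paths are invented):

```powershell
# Stamp the nightly build with a unique, human-readable label so bug reports
# can reference an exact build (the scheme and paths are hypothetical).
$label = "nightly-{0:yyyyMMdd}" -f (Get-Date)

msbuild .\MyProduct.sln /t:Rebuild /p:Configuration=Release
Set-Content -Path .\artifacts\BUILD_LABEL.txt -Value $label

# Also label/tag the same revision in source control (e.g. 'p4 label', 'svn copy', 'git tag')
# so the exact sources for this build can be recovered later.
```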
Ideally you should have an internal site where builds can be downloaded, and have a button you can click to publish the previous nightly build.
Just trying to build a bit on what mjmarsh said, since he laid a great foundation...
Visual Studio. MSBuild works fine.
NAnt.
NAntContrib. This will provide additional tasks such as Perforce operations.
CruiseControl.NET. This is again basically your "build dashboard".
All of the above (save for VS) is open source, so you're not looking at any additional licensing.
As Earwicker mentioned, build early, build often. Knowing that something broke, and that you can still produce a deliverable, is useful for catching stuff early on.
NAnt includes tasks for nunit/nunit2 as well, so you can actually automate your unit testing. You can then apply stylesheets to the results, and with the help of the framework provided by CruiseControl.net, have nice readable, printable unit test results for every build.
The same applies to the ndoc task. Have your documentation produced and available, for every build.
You can even use the exec task to execute other commands, for instance, producing a Windows Installer using InstallShield.
The idea is to automate the build as much as possible, because human beings make mistakes. Time spent up front is time saved down the road. People aren't having to babysit the build by going through the build process. Identify all the steps of your build, create NAnt scripts for each task, and build your NAnt scripts one by one until you've wholly automated your entire build process. It also then puts all of your builds in one place, which is good for comparison purposes. Something break in Build 426 that worked fine in Build 380? Well, there are the deliverables ready for testing -- grab them and test away.
No licenses needed. CruiseControl.NET is freely available and only needs the .NET SDK to build.
A build server, even without automated unit tests still provides a controlled environment for building releases. No more "John usually builds on his machine but he's out sick. For some reason I can't build on my machine"
Right now I have one set up in a Virtual PC session.
Yes. The build needs to be dumped somewhere accessible. Development builds should have debugging turned on. Release build should have it turned off.
How often is up to you. If set up correctly, you can build after each check-in with very little overhead. This is a great idea if you have (or are planning on having) unit tests in place.
Keep milestones and releases as long as required. Anything else depends on how often you build: continuously? Throw them away. Daily? Keep a week's worth. Weekly? Keep two months' worth.
The larger your project gets the more you will see the benefits of an automated build machine.
It is all about the health of the build. What this gets you is that you can set up any kind of thing you want to happen with the builds. Among these, you can run tests, static analysis, and a profiler.
Problems are dealt with much, much faster when you have recently worked on that part of the application. If you commit small changes, it almost tells you where you broke it :)
This of course assumes you set it up to build with every check-in (continuous integration).
It can also help bring QA and Dev closer, since you can set up functional tests to run with it, along with the profiler and anything else that improves feedback to the dev team. This doesn't mean the functional tests run with every check-in (that can take a while), but you set up builds/tests with tools that are common to the whole team. I have been automating smoke tests, so in my case we collaborate even more closely.
Why:
Ten years ago, we as software developers used to analyse something to the nth degree, get the documents (written in a human language) 'signed off', and then start writing code. We would unit test, string test, and then we would hit system test: the first time the system as a whole would be run together, sometimes weeks or months after we got the documents signed off. It was only then that we would uncover all the assumptions and misunderstandings we had when we analysed everything.
Continuous Integration as an idea causes you to build a complete (although, initially, very simple) system end to end. Over time the system's functionality is built out orthogonally. Every time you do a complete build you are doing the system test early and often. This means you find and fix bugs and assumptions as early as possible, when it is the cheapest time to fix them.
How:
As for the how, I blogged about this a little while ago: [Click Here]
Over eight posts, it goes step by step through how to set up a Jenkins server in a Windows environment for .NET solutions.