I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:
What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
What kind of hardware will I need for this?
Once a build has been finished and tested, is it common practice to put that build up on an FTP site or have some other way to access it internally? The idea is that this machine makes the official build, and we all go to it for builds, but can still make debug builds locally if we have to.
How often should we make this kind of build?
How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
Is there anything else I'm not seeing here?
I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.
EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX.
Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
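As a rough illustration of that exercise (the file and solution names here are made up, not taken from the original project), a thin MSBuild wrapper that rebuilds the whole solution from the command line might look like this:

```xml
<!-- build.proj: thin command-line build wrapper (hypothetical file and solution names).
     Invoke with: msbuild build.proj /t:Rebuild /p:Configuration=Release -->
<Project DefaultTargets="Rebuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition="'$(Configuration)' == ''">Release</Configuration>
  </PropertyGroup>
  <Target Name="Rebuild">
    <!-- Rebuild every project in the solution with the chosen configuration -->
    <MSBuild Projects="MySolution.sln" Targets="Rebuild" Properties="Configuration=$(Configuration)" />
  </Target>
</Project>
```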
Update: Jenkins is the actively maintained fork of Hudson; everyone should be using Jenkins now. I'll be updating the links accordingly.
Hudson is free and extremely easy to configure and will easily run on a VM.
Partly from an old post of mine:
We use it to
Deploy Windows services
Deploy web services
Run MSTest suites and display as much information as any JUnit tests
Keep track of low-, medium-, and high-priority tasks
Trend-graph warnings and errors
Here is some of the built-in .NET tooling that Hudson supports:
MSBuild
NAnt
MSTest
NUnit
Team Foundation Server
FxCop
StyleCop
Compiler warnings
Code tasks
Also, God forbid you are using Visual SourceSafe, it supports that as well. I'd recommend you take a look at Redsolo's article on building .NET projects using Hudson.
Your questions
Q: What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
A: I just installed Visual Studio on a fresh VM running a fresh, patched install of a Windows Server OS, so you'd need the licenses to cover that. Hudson installs itself as a Windows service and runs on port 8080, and you configure how often you want it to scan your code repository for updated code, or tell it to build at a certain time; it is all configurable through the browser (a trigger sketch follows below).
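As a rough sketch of what that scheduling boils down to (you normally set it through the web UI, and the exact elements vary by Hudson/Jenkins version), a job's config.xml ends up with a trigger section along these lines:

```xml
<!-- Fragment of a Hudson/Jenkins job config.xml (illustrative; normally set via the web UI) -->
<triggers>
  <!-- Poll the source control repository every 15 minutes for new commits -->
  <hudson.triggers.SCMTrigger>
    <spec>*/15 * * * *</spec>
  </hudson.triggers.SCMTrigger>
  <!-- ...or build at a fixed time, e.g. 2 AM nightly -->
  <hudson.triggers.TimerTrigger>
    <spec>0 2 * * *</spec>
  </hudson.triggers.TimerTrigger>
</triggers>
```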
Q: What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
A: You will get an email the first time a build fails or becomes unstable. A build is unstable if a unit test fails, or it can be marked unstable through any number of criteria that you set. When a unit test or build fails you will be emailed, and it will tell you where, why, and how it failed. With my configuration, we get the following (a notifier sketch follows the list):
list of all commits since the last working build
commit notes of those commits
list of files changed in the commits
console output from the build itself, showing the error or test failure
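The email notification itself is just another publisher on the job. A hedged sketch of the corresponding config.xml fragment (the recipient address is a placeholder):

```xml
<!-- Fragment of a job config.xml: send mail on failed/unstable builds -->
<publishers>
  <hudson.tasks.Mailer>
    <recipients>dev-team@example.com</recipients>
    <dontNotifyEveryUnstableBuild>false</dontNotifyEveryUnstableBuild>
    <sendToIndividuals>true</sendToIndividuals>
  </hudson.tasks.Mailer>
</publishers>
```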
Q: What kind of hardware will I need for this?
A: A VM will suffice
Q: Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
A: Hudson can do whatever you want with it: that includes identifying it via its MD5 hash, uploading it, copying it, archiving it, and so on. It does this automatically and provides you with a long-running history of build artifacts.
Q: How often should we make this kind of build?
A: We have ours poll SVN every hour, looking for code changes, and then run a build. Nightly is OK, but somewhat less useful IMO, since what you worked on yesterday won't be fresh in your mind when you get in the next morning.
Q: How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
A: That's up to you. After a while I move our build artifacts to long-term storage or delete them, but I keep all the data stored in text/XML files; that lets me keep the changelog, trend graphs, and so on on the server with very little space consumed. You can also set Hudson up to keep artifacts only from a trailing number of builds, as sketched below.
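For example, the per-job retention policy (again normally set through the UI) reduces to a few numbers in config.xml; the values below are placeholders:

```xml
<!-- Keep build records for 30 days, but artifacts only for the last 10 builds -->
<logRotator>
  <daysToKeep>30</daysToKeep>
  <numToKeep>-1</numToKeep>
  <artifactDaysToKeep>-1</artifactDaysToKeep>
  <artifactNumToKeep>10</artifactNumToKeep>
</logRotator>
```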
Q: Is there anything else I'm not seeing here?
A: No. Go get Hudson right now; you won't be disappointed!
We've had great luck with the following combo:
Visual Studio (specifically, the MSBuild.exe command-line tool, passing it our solution files; this removes the need for separate MSBuild scripts)
NAnt (we like its XML syntax/task library better than MSBuild's; it also has tasks for Perforce source control operations)
CruiseControl.NET - built-in web dashboard for monitoring/starting builds
CCNet has built-in notifiers to send emails when builds succeed or fail (a minimal ccnet.config sketch follows this list)
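Here is a minimal sketch of how that combo wires together in ccnet.config. The paths, project name, and addresses are invented, and attribute details vary by CCNet version, so treat this as a shape rather than a drop-in config:

```xml
<!-- ccnet.config: one project pulled from Perforce, built with MSBuild, with email on failure -->
<cruisecontrol>
  <project name="MyProduct">
    <triggers>
      <!-- Poll source control every 60 seconds -->
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="p4">
      <view>//depot/MyProduct/...</view>
      <client>ccnet-build</client>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
        <projectFile>C:\Builds\MyProduct\MyProduct.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
        <targets>Rebuild</targets>
      </msbuild>
    </tasks>
    <publishers>
      <xmllogger />
      <email from="build@example.com" mailhost="smtp.example.com" includeDetails="true">
        <users>
          <user name="dev-team" group="developers" address="dev-team@example.com" />
        </users>
        <groups>
          <group name="developers">
            <notifications>
              <notificationType>Failed</notificationType>
              <notificationType>Fixed</notificationType>
            </notifications>
          </group>
        </groups>
      </email>
    </publishers>
  </project>
</cruisecontrol>
```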
On justification: This takes the load off developers doing manual builds and does a lot to take human error out of the equation. It is very hard to quantify this effect, but once you do it you will never go back. Having a repeatable process to build and release software is paramount. I'm sure you've been places where they build the software by hand and it gets out in the wild, only to have your build guy say "Oops, I must have forgotten to include that new DLL!"
On hardware: as powerful as you can get. More power/memory = faster build times. If you can afford it you'll never regret getting a top-notch build machine, no matter how small the group.
On space: It helps to have plenty of hard disk space. You can craft your NAnt scripts to delete intermediate files every time a build starts (see the clean target sketched below), so the real issue is keeping log histories and old application installers. We have software that monitors disk space and sends alerts; then we clean up the drive manually. This usually needs to be done every 3-4 months.
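A sketch of the kind of NAnt housekeeping target meant here (directory names are placeholders):

```xml
<!-- Part of a NAnt build file: wipe intermediate output before each build -->
<target name="clean" description="Delete intermediate build output">
  <delete dir="build\obj" failonerror="false" />
  <delete dir="build\bin" failonerror="false" />
  <!-- Old logs and installers are kept; those get cleaned up manually -->
</target>
```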
On build notifications: This is built into CCNet, but if you are going to add automated testing as an additional step, then build it into the project from the get-go. It is extremely hard to retrofit tests once a project gets large. There is a ton of info on test frameworks out there (probably a ton on SO as well), so I'll defer on naming any specific tools.
At my previous workplace we used TeamCity. It's very easy to use and powerful. It can be used for free with some restrictions, and there is also a tutorial on Dime Casts. The reason we didn't use CruiseControl.NET is that we had a lot of small projects, and it's quite painful to set each one up in CC.NET. I would highly recommend TeamCity. To summarize: if you lean toward open source, CC.NET is the granddaddy, with a slightly steeper learning curve. If your budget allows it, go with TeamCity, or check out the free version.
How? Have a look at Carel Lotz's blog.
Why? There are several reasons that I can think of:
A working build, when properly implemented, means that all your developers can build on their machine when the build is green
A working build, when properly implemented, means that you are ready to deploy at any time
A working build, when properly implemented, means that whatever you release has made a trip to your source control system.
A working build, when properly implemented, means that you integrate early and often, reducing your integration risk.
Martin Fowler's article on Continuous Integration remains the definitive text. Have a look at it!
The main argument in favour is that it will cut the cost of your development process, by alerting you as soon as possible that you have a broken build or failing tests.
The problem of integrating the work of multiple developers is the main danger of growing a team. The larger the team gets, the harder it is to coordinate their work and stop them messing with each other's changes. The only good solution is to tell them to "integrate early and often", by checking in small units of work (sometimes called "stories") as they are completed.
You should make the build machine rebuild EVERY time someone checks in, throughout the day. With CruiseControl, you can get an icon on your taskbar that turns red (and even talks to you!) when the build is broken.
You should then do a nightly full clean build where the source version is labeled (given a unique build number) that you can choose to publish to your stakeholders (product managers, QA people). This is so that when a bug is reported, it is against a known build number (that's extremely important).
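In CruiseControl terms, that nightly labelled build is roughly a schedule trigger plus a labeller. A sketch with placeholder values (element details vary by CCNet version):

```xml
<!-- Inside a ccnet.config <project>: force a clean, labelled build every night at 23:30 -->
<triggers>
  <scheduleTrigger time="23:30" buildCondition="ForceBuild" name="Nightly" />
</triggers>
<labeller type="defaultlabeller">
  <!-- Produces build numbers like 1.0.42 that bug reports can reference -->
  <prefix>1.0.</prefix>
</labeller>
```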
Ideally you should have an internal site where builds can be downloaded, and have a button you can click to publish the previous nightly build.
Just trying to build a bit on what mjmarsh said, since he laid a great foundation...
Visual Studio. MSBuild works fine.
NAnt.
NAntContrib. This will provide additional tasks such as Perforce operations.
CruiseControl.NET. This is again basically your "build dashboard".
All of the above (save for VS) is open source, so you're not looking at any additional licensing.
As Earwicker mentioned, build early, build often. Knowing when something breaks, and whether you can still produce a deliverable, is useful for catching problems early on.
NAnt includes tasks for nunit/nunit2 as well, so you can actually automate your unit testing. You can then apply stylesheets to the results and, with the help of the framework provided by CruiseControl.NET, have nice, readable, printable unit-test results for every build.
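A minimal sketch of the nunit2 task (the assembly path is a placeholder); the XML formatter output is what the CruiseControl.NET dashboard stylesheets consume:

```xml
<!-- Part of a NAnt build file: run unit tests and emit XML results for the CCNet dashboard -->
<target name="test" depends="compile">
  <nunit2>
    <formatter type="Xml" usefile="true" extension=".xml" outputdir="build\test-results" />
    <test assemblyname="build\bin\MyProduct.Tests.dll" />
  </nunit2>
</target>
```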
The same applies to the ndoc task. Have your documentation produced and available, for every build.
You can even use the exec task to execute other commands, for instance, producing a Windows Installer using InstallShield.
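For instance, here is a hedged sketch of wrapping an installer build in NAnt's exec task; the InstallShield path and flags are purely illustrative:

```xml
<!-- Part of a NAnt build file: call an external installer builder -->
<target name="installer" depends="test">
  <exec program="C:\Program Files\InstallShield\System\IsCmdBld.exe"
        commandline="-p MyProduct.ism -r Release"
        failonerror="true" />
</target>
```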
The idea is to automate the build as much as possible, because human beings make mistakes. Time spent up front is time saved down the road. People don't have to babysit the build by walking through the build process by hand. Identify all the steps of your build, create NAnt scripts for each task, and build up your NAnt scripts one by one until you've wholly automated your entire build process. It also puts all of your builds in one place, which is good for comparison purposes. Did something break in Build 426 that worked fine in Build 380? Well, there are the deliverables ready for testing -- grab them and test away.
No licenses needed. CruiseControl.NET is freely available and only needs the .NET SDK to build.
A build server, even without automated unit tests still provides a controlled environment for building releases. No more "John usually builds on his machine but he's out sick. For some reason I can't build on my machine"
Right now I have one set up in a Virtual PC session.
Yes. The build needs to be dumped somewhere accessible. Development builds should have debugging turned on; release builds should have it turned off.
How often is up to you. If set up correctly, you can build after each check-in with very little overhead. This is a great idea if you have (or are planning on having) unit tests in place.
Keep milestones and releases as long as required. Anything else depends on how often you build: continuously? Throw them away. Daily? Keep a week's worth. Weekly? Keep two months' worth.
The larger your project gets the more you will see the benefits of an automated build machine.
It is all about the health of the build. What this gets you is the ability to set up any kind of action you want to happen with the builds: among other things, you can run tests, static analysis, and a profiler.
Problems are dealt with much, much faster when you have recently worked on that part of the application. If you commit small changes, it almost tells you exactly where you broke it :)
This of course assumes you set it up to build with every check-in (continuous integration).
It can also help bring QA and Dev closer together, since you can set up functional tests to run with it, along with a profiler and anything else that improves feedback to the dev team. This doesn't mean the functional tests run with every check-in (they can take a while), but you set up builds/tests with tools that are common to the whole team. I have been automating smoke tests, so in my case we collaborate even more closely.
Why:
Ten years ago, we as software developers used to analyse something to the nth degree, get the documents (written in a human language) 'signed off', and then start writing code. We would unit test and string test, and then we would hit system test: the first time the system as a whole would be run together, sometimes weeks or months after we got the documents signed off. It was only then that we would uncover all the assumptions and misunderstandings we had when we analysed everything.
Continuous Integration as an idea causes you to build a complete (although, initially, very simple) system end to end. Over time the system functionality is built out orthogonally. Every time you do a complete build you are doing the system test early and often. This means you find and fix bugs and assumptions as early as possible, when it is the cheapest time to fix them.
How:
As for the how, I blogged about this a little while ago: [Click Here]
Over 8 posts it goes step by step through setting up a Jenkins server in a Windows environment for .NET solutions.
Related
I read a lot about Continuous Deployment and Continuous Delivery, but I still can't fully understand how to use it PROPERLY.
(I mean, I can throw up a few lines of bash and there you go, but I am not sure that's the correct way to do it.)
So this is the question:
What are your tools, implementations and logic? How do you implement Continuous Deployment?
How do you ensure everything you send to production actually works without unit testing (I am not allowed to write them... I know...)?
Let's assume we have a project written in Angular 8 and one in ASP.NET Framework, and you have to integrate Continuous Deployment and Delivery to an IIS server.
Which tools would you use, and why?
I looked at TeamCity, Jenkins, GitLab CI/CD, Azure, etc., but none of them seemed like the correct choice to me (maybe because my Continuous Deployment/Delivery commands/business logic were poor).
Now let's assume you have to update the database as well. You can use SqlPackage and a DACPAC to do it. Yeah, but let's assume you have already deployed the "server" app in the previous step and the database didn't update because there is some trouble with the schemas. How do you handle that?
Sorry for the very LONG post and those (maybe stupid?) questions, but I am trying to learn how to use this properly, and unfortunately I am the only dev in my company.
There's a lot to unpack here. Continuous Deployment and Delivery are beefy subjects in themselves, in addition to the specifics of your situation.
It might be helpful to pull apart the software delivery practices (CI/CD) from their implementations (Jenkins, TeamCity, etc.) and the tools (server, database, framework, etc.). That's at least three separate types of beast to make sense of.
CI/CD exists to help businesses mitigate risk and learn from customers as fast as possible by constantly and iteratively putting software in users' hands. CI/CD has been so successful that a company benefits greatly when its engineers leverage existing services that implement these practices as much as possible.
Jenkins/TeamCity, AWS/DigitalOcean, and all these other types of services know what they're doing, so you can more or less "simply" hand your code to them - assuming everything is properly set up - and they'll pull the code, run the tests (if any exist; hopefully they do, otherwise what's the point), build the project, and ship it to production. This can work for "rich" or "poor", big or small, complex or simple applications; it really is agnostic to the project's perceived state.
So going with the simplest option and jumping through the hoops to get everything set up is one step to take. To start quickly, sometimes the less you worry about the nitty-gritty details and the smaller the steps you take, the better; setting up a sample CI/CD pipeline with a "Hello World" application and a dummy unit test (because why not) is a good confidence boost and can help clear up misunderstandings about that piece of the puzzle.
Then, once that's out of the way, playing with specific tools, your framework of choice, your favorite technology, and whatnot becomes the next part of the problem to tackle.
I've spent the last few weeks trying to figure out a way to implement (or find someone who has implemented) regression testing for our build process, but so far I haven't found anything that works. We use TFS 2008 and VS 2010, and upgrading to TFS 2010 is not an option for us. I've tried to use NDepend to give us the list of changed methods and type dependencies, but running it through our build script has proven supremely unreliable (if I run the same build twice without changing anything, I would not be surprised to get one perfect NDepend report and one exception saying NDepend can't run for one reason or another).
Unfortunately, I'm pretty much stuck with the tools I have (TFS2008, VS2010, MSBuild, and MSTest). I could probably get another tool, but changing the tools I already have (such as moving from MSTest to NUnit, or TFS2008 to TFS2010) will not be possible.
Has anyone done this already? Or can someone point me in the right direction for finding which methods and types changed between two builds programmatically?
If you have unit tests and a coverage report, then you could diff the coverage report before and after. Any changes to the coverage would show up in that diff, and you could then regression test off it (which I assume is a manual process).
Are there any best practices for enforcing a TFS check-in policy? Are there any good guides on how to implement various types of policies as well as their pros and cons?
Things I'd particularly like to do are ensure that code compiles (note that compilation can take up to five minutes) and that obvious bits of the coding standards are followed (summary tags must exist, naming conventions are followed, etc).
TFS 2010 (and 2008, but I have not used 2008) allows a gated check-in, which forces a build to pass before the code is checked in.
Activating this is a (reasonably) straightforward process; see for example these guides:
http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx
http://intovsts.net/2010/04/18/the-gated-check-in-build-in-tfs2010/
There is a prerequisite step required to make all this happen: setting up a TFS build server. That can be a complex process depending on your infrastructure, etc. Here is an MSDN guide:
http://msdn.microsoft.com/en-us/library/ms181712.aspx
The pros are that the code in the repository can be reasonably stable. For a large team this can save a LOT of time.
There are a number of cons worth weighing against this benefit. First, there is the installation and maintenance of an extra build server, including disk space allocation, patches, etc.
Second, there is the extra time required for each person to check in a file: waiting for a build to succeed before the code is checked in (and available for others to get) can take a while.
Third, when (not if) the build server is unavailable, a contingency plan needs to be in place so developers can continue their work.
There is a lot of extra process required to reap the rewards of gated check-ins. However, if this process is governed properly, it can lead to a much smoother development cycle.
Although we do not use gated check-ins, we do use a TFS build server for continuous integration and scheduled builds. This minimises the minute-to-minute dependency on the build server while ensuring (with reasonable effectiveness) that when a build breaks, we are notified and can rectify it ASAP. This approach empowers the developers to understand how to integrate code and how to avoid breaking the code in the repository.
I think the premise of this question is somewhat wrong. A good question of this nature should be something along the lines of: my team is having a problem with code stability, conflicting change-sets, developers not running tests, poor coverage, or some other metric reported to management, and we'd like to use TFS to help solve that issue (or those issues). Yes, I do realize that the OP stated that ensuring compilation is a goal, but that comes part and parcel with having an automated build server.
I would question any feature that adds friction to a developer's work cycle without a clearly articulated purpose. Although I've never used them, gated check-ins sound like a feature in search of a problem. If the stability of your codebase is impacting developer productivity and you can't fix it by changing the componentization of your software, the dev team structure, or your branching strategy, then I guess it's a solution. I've worked in a large shop on a global project where ClearCase was the mandated tool, and I've encountered that kind of corporate-induced fail, but the team I worked on didn't go there quietly or willingly.
The ideal policy is not to have one. Let developers work uninhibited and with as little friction as possible. Code reviews do much more than a set of rules enforced by a soulless server will ever do. A team that supports testing, and is properly structured, will do more for stability than a gated check-in will ever achieve. Tools that support branching and local check-ins make it easier for developers to try new things without fear of breaking the build, and help mitigate the kind of technical debt that kills large projects.
You should look at chapter 8 of "Patterns & practices: Team Development with Visual Studio Team Foundation Server"
http://tfsguide.codeplex.com/
We currently build our .NET website in C# in Visual Studio 2010 Pro on our dev server, then manually publish it and upload it to the live server, where it is copied over the current files to go live.
We want to automate this process as much as possible and if possible push it at a certain time, such as every day at midnight. We don't currently use any Source Control so this probably makes it essential anyway...
Is Team Foundation Server [TFS] the best solution to enable this? If so, how much would it cost our client for this or how can we find out? We're in the UK and they do have an MSDN subscription.
At this point, you need to slow down and set more realistic goals. Here's my biggest red flag:
"We don't currently use any Source
Control so this probably makes it
essential anyway..."
Without proper SCC, you're not capable of getting where you need to go. A full scale TFS implementation can most certainly do what you want to do, and it has a couple of really nice features that you can use to integrate automated deployment scenarios, which is great, but you really need to learn to walk before you can learn to run.
I've commented on TFS cost before, so I won't do that in this post, but suffice it to say that a TFS implementation which does what you want will cost a significant amount of effort, especially if you factor in the time it will take to set it up and script out the automated publishing workflow you want.
I don't know what your budgets are or how big your teams are, or the nature of your development strategy, or a number of things that might actually change my answer, but I'm proceeding on the assumption that you have a limited budget and no dedicated staff of people that you can draw upon to set up a first-class TFS implementation, so here's what I would recommend (in this order!)
Set up version control using something that's free, such as Subversion or Git. For an organization that's just starting off with SCC, I'd recommend Subversion over Git, because it's conceptually a lot simpler to get started with. This is the bedrock of everything you're going to do. Unlike when adding a fuze to a 2000-pound bomb or assembling a bicycle, I'd recommend that you read the manual before and during your SVN installation.
Make a build file using MSBuild. Yes, you can use NAnt, but MSBuild is fairly equivalent in most scenarios and is a bit friendlier with TFS, if you ever decide to go in that direction in the distant, distant future. Make sure your builds work properly on your development boxes and servers.
Come up with a deployment script. This may very well just equate to a target in your MSBuild file (a sketch follows after these steps). Or it may be an MSI file -- I don't know your environment well enough to say, but guessing from the fact that you said you copy stuff over to production, an MSBuild target will likely suffice.
Set up a Continuous Integration server such as Hudson or CruiseControl.NET. I personally use CruiseControl, but the basic idea behind both is that they are automated services which watch your SCC system for changes and perform the builds for you. If you have set up an MSBuild target to perform your deployment, you can configure a "project" in CCNet (or probably Hudson) to do the deployment as well.
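As promised above, a sketch of what step 3 can look like as an MSBuild target that the CI server in step 4 simply invokes (the solution, paths, and share name are placeholders):

```xml
<!-- deploy.proj: build, then copy the site to the production share (illustrative only) -->
<Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <DeployDir>\\webserver\wwwroot\MySite</DeployDir>
  </PropertyGroup>
  <Target Name="Build">
    <MSBuild Projects="MySite.sln" Properties="Configuration=Release" />
  </Target>
  <Target Name="Deploy" DependsOnTargets="Build">
    <ItemGroup>
      <!-- Everything in the site folder except source files -->
      <SiteFiles Include="MySite\**\*.*" Exclude="MySite\**\*.cs;MySite\**\*.csproj" />
    </ItemGroup>
    <Copy SourceFiles="@(SiteFiles)"
          DestinationFiles="@(SiteFiles->'$(DeployDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
  </Target>
</Project>
```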
The total software cost of these solutions is $0, but you will likely face quite a learning curve on it all. TFS's learning curve, IMO, is even steeper, and its software cost is definitely north of $0. Either way, the takeaway is not to try to bite it all off in one chunk at one time, or you will probably fail. Go step by step, and you will get there. And have fun! I personally loved learning about all of this stuff!
If your client has MSDN then TFS is free!
Whether you have MSDN Professional, Premium or Ultimate, you get both a CAL to access ANY TFS server and a licence to run a TFS server in production. You just need to make sure that all your users have MSDN. If they do not, you can buy a retail TFS licence for $500, which covers the first 5 users without a CAL. You can then add CAL packs, which are cheaper than MSDN, for users who need access to the data. If you have internal users that only need access to the work items they have created, then they are also FREE.
As long as the person who kicks off the build has an MSDN licence, your build server is also free. You can use the sequence that Dean describes, but I would suggest shelling out a little cash for FinalBuilder and using it to customise the process. It integrates well with TFS and provides a nice UI.
The advantage is you get Dev -> Test -> Deploy all recorded, audited, and reportable in one product...
http://www.finalbuilder.com/download.aspx
Sounds like you are after a Continuous Integration (Build) server.
One I have used a lot with .NET is Hudson.
You can kick off a build with a number of triggers, such as at a particular time, and run various steps in sequence. These can include running batch commands (Windows or Linux, depending on which platform you run it on) or running MSBuild. It has a lot of plugins, and most major tools are supported.
A common sequence for the apps we build is the following (a minimal job configuration sketch follows the list):
Update from source control (though that doesn't mean you can't do something like taking a copy from a file share)
Compile using MSBuild
Run unit tests using NUnit
Deploy the built project to a test server
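That sequence is mostly just a couple of build steps in the job. A rough sketch of the relevant fragment of a Hudson/Jenkins freestyle job's config.xml (paths are placeholders, and you would normally set this up through the UI):

```xml
<!-- Fragment of a Hudson/Jenkins job config.xml: compile, then run the unit tests -->
<builders>
  <hudson.tasks.BatchFile>
    <command>msbuild MySolution.sln /t:Rebuild /p:Configuration=Release</command>
  </hudson.tasks.BatchFile>
  <hudson.tasks.BatchFile>
    <command>nunit-console.exe Tests\bin\Release\MyProduct.Tests.dll /xml:nunit-result.xml</command>
  </hudson.tasks.BatchFile>
</builders>
```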
TFS Team Build is certainly capable of doing what you wish by setting up a build which executes the Deploy target of a web application. The key is MSDeploy, which in reality can be executed in many ways and is not dependent on any one tool; you could simply schedule a task to execute MSDeploy. A sketch of driving it through MSBuild follows.
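A hedged sketch of that idea for a VS 2010 web application project, driving Web Deploy (MSDeploy) through MSBuild properties; the project, server, and site names below are invented, and the exact property set depends on your publish configuration:

```xml
<!-- deploy-web.proj: push a web application to IIS via Web Deploy / MSDeploy (illustrative) -->
<Project DefaultTargets="Publish" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Hypothetical target server and IIS application path -->
    <DeployProperties>Configuration=Release;DeployOnBuild=true;DeployTarget=MSDeployPublish;MSDeployServiceURL=https://liveserver:8172/msdeploy.axd;DeployIisAppPath=Default Web Site/MyWebApp;MSDeployPublishMethod=WMSvc;AllowUntrustedCertificate=true</DeployProperties>
  </PropertyGroup>
  <Target Name="Publish">
    <MSBuild Projects="MyWebApp\MyWebApp.csproj" Properties="$(DeployProperties)" />
  </Target>
</Project>
```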
See these two links for more information:
http://weblogs.asp.net/scottgu/archive/2010/07/29/vs-2010-web-deployment.aspx
http://www.hanselman.com/blog/WebDeploymentMadeAwesomeIfYoureUsingXCopyYoureDoingItWrong.aspx
There are a lot of really cool things you can do with the new Build system based on Windows Workflow in TFS 2010. Learning to customize your Build Process Templates is a worthwhile investment.
Well, TFS is the ultimate solution, but to reduce cost you can make use of MSBuild to achieve your task. You can create a Windows scheduled task which fires MSBuild at a particular time. There is an open-source MSBuild task library available at http://msbuildtasks.tigris.org/ through which you can even upload your files via FTP.
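A sketch of that approach, assuming the Zip and FtpUpload tasks from the MSBuild Community Tasks library; the names, paths, and FTP address are placeholders, so check the library's documentation for the exact task parameters:

```xml
<!-- nightly.proj: build the site and push the output to an FTP server (illustrative).
     Schedule with Windows Task Scheduler, e.g.:
     schtasks /create /tn "NightlyBuild" /sc daily /st 00:00 /tr "msbuild C:\builds\nightly.proj" -->
<Project DefaultTargets="BuildAndUpload" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
  <Target Name="BuildAndUpload">
    <MSBuild Projects="MySite.sln" Properties="Configuration=Release" />
    <ItemGroup>
      <PublishFiles Include="MySite\bin\Release\**\*.*" />
    </ItemGroup>
    <!-- Zip and FtpUpload both come from the community tasks package imported above -->
    <Zip Files="@(PublishFiles)" ZipFileName="MySite-nightly.zip" />
    <FtpUpload LocalFile="MySite-nightly.zip"
               RemoteUri="ftp://liveserver/builds/MySite-nightly.zip" />
  </Target>
</Project>
```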
You need to do Continuous Integration. TFS 2010 is fully capable of doing this for you, but before you continue you should move your sources into TFS source control. We are doing the same thing you need: all our sources reside in TFS; with each check-in, a build runs on the build server and is then deployed to a remote live server.
I work with two other developers for a medium-sized company writing internal applications in asp.net. We have about 10 discrete web applications, about 5 class libraries, and probably two dozen assorted command line and WinForms apps. Management expects us to be able to roll out an application multiple times per day, as required by their business rules du jour.
We are currently (mostly) using Microsoft.Net 1.1 and SourceSafe. When we need to roll out a web app, we get latest from SourceSafe, rebuild, and then copy to the production web server. We are also in the habit of creating massive solution files with 5-10 projects so that everything gets rebuilt and copied to our "master" bin folder instead of opening up each project one by one to rebuild them.
I know there must be a better way to do this, and with Visual Studio 2010 and Microsoft.Net 4.0 being released in the coming months it seems like a good time to upgrade our environment. Does Microsoft have an official opinion/whitepaper on how to set things up? My biggest problem in the past was having a system that worked well with how quickly we're expected to push code into production.
There's a build server for .NET called CruiseControl.NET. You may find it useful as it can be heavily automated.
See "patterns & practices Team Development with Visual Studio Team Foundation Server".
Read the whole thing. It contains things you may never have known existed.
Just for the sake of offering different options, you can also look at Microsoft's Team System. It does cost a good bit and also has a bit of a learning curve. However, we use it where I work, and it makes the scheduling of builds and source control easy. I know some people are totally against everything Microsoft, but I honestly haven't run into any problems with TFS yet. Just another thought.