This pertains to Lightweight Architecture Decision Records (ADRs) and their usage in TFS with consumer tools in TFS/PowerShell.
Based on what exists today:
https://github.com/npryce/adr-tools
I wanted to find out whether there is a corresponding .NET library or project for use in TFS.
None that I know of.
The tool you reference simply creates some formatted text files; converting similar bash scripts to PowerShell is not that hard, so you could do that and share the result with the community by publishing your repo.
If you want to create custom work items to track this information, you can do that as well. There is plenty of sample code around, such as Igor's PowerShell cmdlets.
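To show how little the file-creation part involves, here is a minimal sketch of what adr-tools' `adr new` command does, written in Python for illustration (a PowerShell port would follow the same shape; the template text and four-digit numbering mirror adr-tools' conventions, everything else is an assumption):

```python
from datetime import date
from pathlib import Path

ADR_TEMPLATE = """# {number}. {title}

Date: {date}

## Status

Accepted

## Context

...

## Decision

...

## Consequences

...
"""

def next_adr_number(adr_dir: Path) -> int:
    """Scan existing ADR files (NNNN-*.md) and return the next sequence number."""
    numbers = [int(p.name[:4]) for p in adr_dir.glob("[0-9][0-9][0-9][0-9]-*.md")]
    return max(numbers, default=0) + 1

def new_adr(adr_dir: Path, title: str) -> Path:
    """Create the next numbered ADR file from the template, as `adr new` would."""
    adr_dir.mkdir(parents=True, exist_ok=True)
    number = next_adr_number(adr_dir)
    slug = "-".join(title.lower().split())
    path = adr_dir / f"{number:04d}-{slug}.md"
    path.write_text(ADR_TEMPLATE.format(
        number=number, title=title, date=date.today().isoformat()))
    return path
```

Checking the resulting files into TFS version control alongside the code is then all the "tooling" most teams need.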
Related
My team has a fairly large set of desktop applications with many shared libraries between them all in one common solution file in our repository. We'd like to use semantic versioning for a number of reasons, chief of which is to make it easier for our users to install updates. However, given the number of assemblies we're dealing with, we're finding it pretty tedious updating AssemblyInfo files for each one, especially if it's for a library that's a dependency for multiple applications.
I was wondering if there's an easy way to use git tags or some kind of external tool to tell the build server that, for example, XYZ has a bug fix and its patch number needs to be updated.
Use GitVersion: https://gitversion.readthedocs.io/en/latest/
It will derive the semantic version automatically from the last tag and the git history.
You can use GitVersionTask if you build with MSBuild, or (better) use it with build tools like FAKE or Cake.
Edit: there are now also easier-to-use alternatives: https://www.nuget.org/packages/Nerdbank.GitVersioning/, https://www.nuget.org/packages/GitInfo/, ...
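To make the tag-based flow concrete, here is a small Python sketch of the core computation GitVersion automates: take the most recent semver tag and bump it according to a hint. The `+semver:` commit-message syntax is the one GitVersion actually supports; the rest is deliberately simplified (real GitVersion also considers branch names, merge history, and pre-release labels):

```python
import re

def bump_version(last_tag: str, change: str) -> str:
    """Compute the next semantic version from the most recent tag.

    `change` is one of 'major', 'minor', 'patch'.
    """
    m = re.fullmatch(r"v?(\d+)\.(\d+)\.(\d+)", last_tag)
    if not m:
        raise ValueError(f"not a semver tag: {last_tag}")
    major, minor, patch = map(int, m.groups())
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

def change_from_message(message: str) -> str:
    """Map a GitVersion-style '+semver:' commit-message hint to a bump level."""
    m = re.search(r"\+semver:\s*(major|minor|patch)", message)
    return m.group(1) if m else "patch"
```

So a commit message like "Fix crash in exporter +semver: minor" on top of tag v1.2.3 would produce 1.3.0, with no AssemblyInfo editing at all.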
I am trying to work on an addon developed by Microsoft for its old Azure Cloud Service. The aim is to render Blender scenes using the Azure environment.
Here it is: https://github.com/Azure/azure-batch-apps-blender
As Microsoft doesn't support this addon anymore, and as it was originally created to work with the old Azure, I want to update it and make it work with the new Azure. Basically, here is what I understood :
The Python part is the Blender side: it defines the Blender UI, authenticates the user, and registers the assets (Blender models?) with Azure. Then it should start the process.
The C# part is the Azure side: it is meant to be executed on Azure and references a Blender executable. It has one class to split the computation and another class to process it.
I'm using Visual Studio 2015 and Blender 2.77a.
What I don't understand is that the code seems short, especially the C# part. I don't see how the splitting is done (there is no logic around the Blender model), and I don't see why the main functions of the main classes (like Split in JobSplitter.cs) are never called. Did I miss some code?
I spent some days on various general documentation about Azure, but it didn't help me much with this specific application. I also asked Microsoft, but this product is no longer supported.
Thanks for your interest in the Blender plugin!
The "missing code" that you mention here is actually part of the old Batch Apps C# SDK, which exposed an interface, allowing us to override select functions with Blender specific functionality.
While I'm afraid I can't find any old documentation for it, this project should no longer be necessary, as using the Batch API, the tasks can be constructed in Python from the Blender plugin.
I've actually started porting this plugin to support the Batch API. You can find my code in the dev branch of my fork here:
https://github.com/annatisch/azure-batch-apps-blender/tree/dev
There are still a lot of things I have yet to clean up, including the dependency checking, but I've put some instructions in the issue filed here:
https://github.com/Azure/azure-batch-apps-blender/issues/7
I'm hoping to make some progress on this project in August after Siggraph. Though I would be happy to accept any PRs!
Regarding the cloud-side code, as I mentioned above, this is now no longer necessary (though I may re-introduce something similar later for richer feature support), as the entire cloud-side task is constructed within the plugin. The downside is that at present I haven't implemented persisting rendered frames to Azure Storage, but you can download them using the Azure Portal before the VM pool is deleted.
This plugin currently runs only Linux nodes for rendering (Ubuntu) and installs Blender dynamically with apt-get.
Please post to the Github issues board if you have any trouble using the updated plugin and I'll be happy to help. :)
Cheers
I'm building an integration system that needs to execute some code on a Team Foundation Server (2010+) server when a user checks in some changes. I can programmatically access check-ins no problem, but I would like to know when new check-ins are added. Ideally, I would like to be notified and given the check-in data so I can manipulate it and post what I need to a different API, but even a simple notification that a new check-in exists would be sufficient. Ideally, I would be able to have TFS execute a call to my own C# code on the same machine.
I've been looking around the internet for the last two days, and I'm fairly confident this is possible, but I can't seem to find any details on how to do it, and frankly I'm running out of ideas on where to look. If anybody has any ideas on where to start, where to look, or ideally any sample code, it would be greatly appreciated.
Mainly, I've been digging around in the TFS Integration Tools, but the docs for those are still questionable at best. I've found all of the existing adapter source code (for ClearCase etc.) but don't see anything that triggers execution anywhere; I suspect those are more meant for one-way migration.
There are different ways you can approach this:
Team Build. By using a TFS Build Server you can create a Continuous Integration build or a Gated Checkin build. In the build workflow you can then respond to whatever changes you've detected. You can use the TFS Client Object Model to grab the Changeset object. That contains all the data you'll need. The ALM Rangers have written an extensive guide explaining how to extend and customize the build process to suit your needs.
Checkin Policy. By creating a custom checkin policy you can run code pre-checkin on the client (inside Visual Studio). This policy could serve as a sample on how to interact with the pending changes.
ISubscriber TFS Application Tier plugin. Already mentioned by #ppejovic. The Application Tier plugin is installed on the TFS server and will run in process. Since it's hosted in process, you can do quite a bit. Samples that act on Work items and/or Source Control are the Merge Work Items handler, the TFS Aggregator. You can also fall back to the Client Object Model if needed, as described here.
The SOAP API. This is the precursor to the ISubscriber interface. You can still use it, but you'll have more power and efficiency from the ISubscriber solution.
The Client Object Model. You can always create a service or a scheduled job on a system that periodically connects to TFS to request the history since the last time it checked. By simply querying for everything newer than the highest changeset number you've seen so far, you can get all the information you need without having to extend TFS itself. You'll be looking for the VersionControlServer class. The QueryHistory method is the one you'll need to fetch the changesets.
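The polling pattern in that last option is simple to sketch. The following Python stands in for the C# Client Object Model code: `query_history` is a placeholder for a call like `VersionControlServer.QueryHistory`, and the "high-water mark" is just the highest changeset ID processed so far:

```python
def new_changesets(query_history, last_seen: int):
    """Return changesets newer than the highest ID processed so far.

    `query_history` is any callable returning (id, committer, comment)
    tuples; in the real system it would wrap QueryHistory.
    """
    fresh = [cs for cs in query_history() if cs[0] > last_seen]
    fresh.sort(key=lambda cs: cs[0])
    return fresh

def poll_once(query_history, state: dict):
    """One polling cycle: handle anything new, then advance the high-water mark."""
    fresh = new_changesets(query_history, state.get("last_seen", 0))
    for changeset_id, committer, comment in fresh:
        # Forward the changeset data to the external API here.
        pass
    if fresh:
        state["last_seen"] = fresh[-1][0]
    return fresh
```

Persist `state["last_seen"]` between runs (a file or a small table) and the job can survive restarts without reprocessing old check-ins.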
There's a nice Pluralsight course that takes you through some of these scenarios.
As with most of these items, documentation is scarce, and tools like Red Gate's .NET Reflector or JetBrains dotPeek are invaluable.
My team is developing a new application (C#, .Net 4) that involves a repository for shared users content. We need to decide where to store it. The requirements are as follows:
Share files among users.
Support versions.
Enable search by tags and support further queries such as "all the files created by people from group X"
Different views for different people (team X sees its own content and nobody else can see theirs).
I'm not sure what's best, so:
can I search over SVN using tags (not SVN tags of course, more like stackoverflow's tags)?
Is there any sense in thinking of duplication - both SVN and SQL - the content?
Any other suggestions?
Edit
The application enables users to write validation tests that they later execute. Those tests are shared among many groups on different sites. We need versioning for the regular reasons - undo changes, sudden deletions etc. This calls for SVN.
The thing is, we also want to add the option to find all the tests that are tagged "urgent" and were executed by now, for tracking purposes.
I hope I made myself more clear now :)
Edit II
I ran into SvnQuery and it looks good, but does it have an API I can use? I'd rather use their mechanism with my own GUI.
Edit III
My colleague strongly supports using only a database and forget file based storage. He claims it is better for persistence (which is needed - a test is more than the list of commands to execute). I'd appreciate inputs on this issue, as I think it should be possible to do it this way or the other.
Thanks!
Firstly, consider using Git rather than SVN. It's much faster, and I suspect it's more appropriate in your use case: it's designed to be distributed, meaning your users will be able to use it without internet access, and you won't have any overhead from communicating with the server when saving documents.
Other than that, I'm not making full sense of your question but it seems like the gist of it might be better rephrased like so: "Can I do tag-based searches/access restriction onto my version control system, or do I need to create a layer on top to do so?"
If so, the answer is that you need a layer on top. Some exist already, both web-based (e.g., Trac) and desktop-based (e.g. GitX). They won't necessarily implement exactly what you need but they can be a good starting point to do what you're seeking.
You could use SVN.
Shared files: obvious and easy. It also supports the centralised locking that you might need for binary files.
Versions. Obviously.
Search... Now we're getting into difficult territory. There are Lucene-based add-ons that allow web searching of your repo: OpenGrok, SvnQuery or svn-search. These would be your best starting points for that.
There is no way to stop people seeing what's present in a svn repo, but you can stop them from accessing it. I don't know if the access control could be extended easily to provide hidden folders, you could ask the svn developers.
There's some great APIs for working with SVN, probably the most accessible is SharpSVN which gives you a .net assembly, but there's Python and C and all sorts available.
As mentioned, there are web tools which sit on top of SVN to provide a view into it, there's Trac, and Redmine and several repo-viewers like webSVN, so there's plenty of sample code to use to cook up your own.
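Whichever repository you pick, the tag search is really just a metadata index kept alongside it: the files live in SVN, and queries only touch the index. A minimal Python sketch of such a layer (the class and method names are invented for illustration):

```python
from collections import defaultdict

class TagIndex:
    """A metadata layer over a version-controlled file store.

    The index maps tags and authors to repository paths, so queries
    like "all tests tagged urgent" never need to scan the repo itself.
    """

    def __init__(self):
        self._by_tag = defaultdict(set)
        self._meta = {}

    def register(self, path: str, author: str, tags):
        """Record (or replace) the metadata for one repository path."""
        self._meta[path] = {"author": author, "tags": set(tags)}
        for tag in tags:
            self._by_tag[tag].add(path)

    def by_tag(self, tag: str):
        return sorted(self._by_tag.get(tag, ()))

    def by_author(self, author: str):
        return sorted(p for p, m in self._meta.items() if m["author"] == author)
```

In practice you would back this with the SQL database you already have, which also answers the "duplicate in both SVN and SQL" question: store the content once (in the repo) and only the queryable metadata in SQL.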
Would you use a DVCS like git or mercurial? I wouldn't. Though these have good mechanisms in themselves, it doesn't sound like they're what you're after. They allow people to work on their own and share with others on a peer-to-peer basis (though you can set up a 'central' repo and have everyone work with it as a peer). They do not work in a centralised, shared way. For example, if you and I both edit a test case locally and then push to the central repo, we might have issues merging, and we will have issues merging if the file is binary or otherwise non-mergeable. In that case you risk losing one person's changes. That's one main reason for not using a DVCS in your case.
If you're trying to get shared tests together, have you looked at apps that already do this? I noticed TestRail recently, which sounds like what you're trying to do. It's not free (alas) but it's cheap.
We currently build our .net website in C# in Visual Studio 2010 Pro on our dev server, then manually publish it and upload to the live server where it is copied over the current files to go live.
We want to automate this process as much as possible and if possible push it at a certain time, such as every day at midnight. We don't currently use any Source Control so this probably makes it essential anyway...
Is Team Foundation Server [TFS] the best solution to enable this? If so, how much would it cost our client for this or how can we find out? We're in the UK and they do have an MSDN subscription.
At this point, you need to slow down and set more realistic goals. Here's my biggest red flag:
"We don't currently use any Source Control so this probably makes it essential anyway..."
Without proper SCC, you're not capable of getting where you need to go. A full scale TFS implementation can most certainly do what you want to do, and it has a couple of really nice features that you can use to integrate automated deployment scenarios, which is great, but you really need to learn to walk before you can learn to run.
I've commented on TFS cost before, so I won't do that in this post, but suffice it to say that a TFS implementation which does what you want will cost a significant amount of effort, especially if you factor in the time it will take you to set it up and script out the automated publishing workflow you want.
I don't know what your budgets are or how big your teams are, or the nature of your development strategy, or a number of things that might actually change my answer, but I'm proceeding on the assumption that you have a limited budget and no dedicated staff of people that you can draw upon to set up a first-class TFS implementation, so here's what I would recommend (in this order!)
Set up version control using something that's free, such as Subversion or Git. For an organization that's just starting off with SCC, I'd recommend Subversion over Git, because it's conceptually a lot simpler to get started with. This is the bedrock of everything you're going to do. Unlike adding a fuze to a 2000 pound bomb or assembling a bicycle, I'd recommend that you read the manual before and during your SVN installation.
Make a build file using MSBuild. Yes, you can use nAnt, but MSBuild is fairly equivalent in most scenarios, and is a bit more friendly with TFS, if you ever decide to go in that direction in the distant, distant future. Make sure your builds work properly on your development boxes and servers.
Come up with a deployment script. This may very well just equate to a target in your MSBuild file. Or it may be an MSI file -- I don't know your environment well enough to say, but guessing by the fact that you said you copied stuff over to production, an MSBuild target will likely suffice.
Set up a Continuous Integration server such as Hudson or CruiseControl.NET. I personally use CruiseControl, but the basic idea behind both is that they are automated services which watch your SCC system for changes and perform the builds for you. If you have set up a MSBuild target to perform your deployment, you can configure a "project" in CCNET (or probably Hudson) to do the deployment as well.
The total software cost of these solutions is $0, but you will likely face quite a bit of a learning curve on it all. TFS's learning curve, IMO, is even steeper, and the software cost is definitely north of $0. Either way, the take away is not to try to bite it all off in one chunk at one time, or you will probably fail. Go step-by-step, and you will get there. And have fun! I personally loved learning about all of this stuff!
If your client has MSDN then TFS is free!
Whether you have MSDN Professional, Premium or Ultimate, you get both a CAL to access ANY TFS server and a licence to run a TFS server in production. You just need to make sure that all your users have MSDN. If they do not, you can buy a retail TFS licence for $500, which covers the first 5 users without CALs. You can then add CAL packs, which are cheaper than MSDN, for users who need access to the data. If you have internal users that need to access only the work items they have created, then they are also FREE.
As long as the person that kicks off the build has an MSDN licence, your build server is also free. You can use the sequence that Dean describes, but I would suggest shelling out a little cash for FinalBuilder and using it to customise the process. It integrates well into TFS and provides a nice UI.
The advantage is you get Dev->test->Deploy all recorded, audited and reportable in one product...
http://www.finalbuilder.com/download.aspx
Sounds like you are after a Continuous Integration (Build) server.
One I have used a lot with .net is Hudson.
You can kick off a build with a number of triggers, such as at a particular time, and can run various steps in sequence. These can include running batch commands (Windows or Linux, depending on the platform you run it on) or running MSBuild. It has a lot of plugins and most major tools are supported.
A common sequence for apps that we build is:
Update from source control (but that doesn't mean you can't do something like take a copy from a file share)
Compile using MSBuild
Run unit tests using NUnit
Deploy the built project to a test server
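The sequence above is just steps run in order, stopping at the first failure, which is exactly what the CI server automates for you. A minimal Python sketch of that loop (the commands here are placeholder `echo`s; substitute your real svn/MSBuild/NUnit invocations):

```python
import subprocess

# Hypothetical pipeline in the order described above; every command
# is a placeholder standing in for the real tool invocation.
PIPELINE = [
    ("update", ["echo", "svn update"]),
    ("build", ["echo", "msbuild Site.sln /t:Build"]),
    ("test", ["echo", "nunit-console Tests.dll"]),
    ("deploy", ["echo", "msbuild Site.sln /t:Deploy"]),
]

def run_pipeline(steps):
    """Run each step in order, stopping at the first non-zero exit code."""
    results = []
    for name, cmd in steps:
        code = subprocess.call(cmd)
        results.append((name, code))
        if code != 0:
            break
    return results
```

Hudson or CruiseControl.NET add the rest on top of this core: triggers (SCC polling, timed builds), history, and notifications.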
TFS Team Build is certainly capable of doing what you wish by setting up a Build which executes the Deploy target of a Web App. The key is MSDeploy which in reality can be executed in many ways and is not dependent upon any one tool. You could simply schedule a task to execute MSDeploy.
See these two links for more information:
http://weblogs.asp.net/scottgu/archive/2010/07/29/vs-2010-web-deployment.aspx
http://www.hanselman.com/blog/WebDeploymentMadeAwesomeIfYoureUsingXCopyYoureDoingItWrong.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+ScottHanselman+(Scott+Hanselman+-+ComputerZen.com)
There are a lot of really cool things you can do with the new Build system based on Windows Workflow in TFS 2010. Learning to customize your Build Process Templates is a worthwhile investment.
Well, TFS is the ultimate solution, but to reduce cost you can make use of MSBuild to achieve your task. You can create a Windows scheduled task which fires MSBuild at a particular time. There is an open-source collection of MSBuild tasks available at http://msbuildtasks.tigris.org/ through which you can even upload your files via FTP.
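For the scheduled-task route, the scheduler only needs a correctly assembled MSBuild command line. A small Python helper that builds one (the `/t:` and `/p:` switches are standard MSBuild syntax; the project name and property values are placeholders for your own):

```python
def msbuild_command(project: str, configuration: str = "Release",
                    targets=("Build",), properties=None) -> list:
    """Assemble an MSBuild command line suitable for a scheduled task."""
    cmd = [
        "msbuild",
        project,
        f"/t:{';'.join(targets)}",          # targets to run, in order
        f"/p:Configuration={configuration}", # build configuration
    ]
    for key, value in (properties or {}).items():
        cmd.append(f"/p:{key}={value}")
    return cmd
```

The scheduled task then just executes the returned command; the FTP upload step from the msbuildtasks library would be a target inside the project file itself.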
You need to do Continuous Integration. TFS 2010 is fully capable of doing this for you, but before you continue you should move your sources to TFS Source Control Management. We are doing the same thing you need: all our sources reside in TFS; with each check-in, a build occurs on the build server and is then deployed to a remote live server.