So, I have a staging and live environment of Umbraco.
Our content guys make changes in Live because they need something to be visible straight away.
Now, to back this up, I'm currently copying and pasting what they've done onto our staging environment and putting it into source control.
Is there a better way of doing this?
From what I know of your situation, I would recommend setting up the staging site and the production site with the same database. Unless you are using the ContentService to pull content into your templates (which you should avoid because it hits the DB), your Umbraco site should only be hitting the App_Data/umbraco.config XML cache and the Examine indexes in App_Data/TEMP/ExamineIndexes. This means that even though your staging and production sites will be sharing the same database, changes that you make on the staging site won't show up on the production site until you log in and republish the entire site or republish the specific node.
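As a minimal sketch of that distinction (Umbraco 7 API; the node id is hypothetical), the cached read and the database read look like this:

    using Umbraco.Core;
    using Umbraco.Core.Models;
    using Umbraco.Web;

    public static class CacheVsDbExample
    {
        // Reads from the umbraco.config XML cache -- no database hit, so with
        // a shared database, staging edits stay invisible until republished.
        public static IPublishedContent FromCache(int nodeId)
        {
            var helper = new UmbracoHelper(UmbracoContext.Current);
            return helper.TypedContent(nodeId); // nodeId (e.g. 1234) is hypothetical
        }

        // Queries the database directly -- avoid this in templates; in the
        // shared-database setup it would also expose staging changes immediately.
        public static IContent FromDatabase(int nodeId)
        {
            return ApplicationContext.Current.Services.ContentService.GetById(nodeId);
        }
    }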
This approach is definitely not appropriate for every scenario. For example, we have clients who won't like that the database is shared, for security reasons. They want as much separation between the production site and the staging site as possible. I also wouldn't use this if the content on the site is very time-sensitive. If content being accidentally published before it is ready would be very bad for your client, this might not be the best solution. We haven't experienced any trouble with the XML cache being automatically refreshed when we weren't ready, but I wouldn't trust a cache to protect sensitive information from being released early.
We have been using it and are very happy with the simplicity. There are very few moving parts, so compared to some of the other deployment methods below, this is a pretty safe way to deploy. To make things more user-friendly for our clients, we rig up a button on our staging site that, when clicked, will republish the cache for that node on the production site. I hope to release this as a package and will update this answer with a link to the package when it is ready.
UPDATE
I would consider the above approach experimental. Umbraco has been putting a lot of work into load-balancing scenarios in later versions of Umbraco 7, and some of the work they have done may invalidate what I was talking about. Just keep that in mind if you do decide to try this out.
Here are some other tools that are interesting to think about when dealing with content deployment:
Conveyor: https://our.umbraco.org/projects/backoffice-extensions/conveyor
Courier/Umbraco Deploy: http://umbraco.com/products/more-add-ons/courier-2
Umbraco Cloud: https://umbraco.com/products/umbraco-cloud/
CMS Import: http://soetemansoftware.nl/cmsimport
uMirror: https://our.umbraco.org/projects/backoffice-extensions/umirror
uSync Content Edition: https://our.umbraco.org/projects/developer-tools/usynccontentedition
Conveyor is a young package (at least right now it is). It has a dashboard that you would be able to use to selectively export content from your production site. You can then log in to the backoffice on your staging site and import the content. I am trying this out for the first time this month. It looks very promising, so far, but I can't give you a lot of advice from experience.
Courier is meant to be the ultimate solution in content deployment. It is one of the few content deployment options that lets you selectively deploy only the content you want. You can right-click on content and deploy from staging to production or from production to staging. Courier also tries to detect dependencies and deploy them along with your content selections. The trick with Courier is that when something goes wrong, it is a big deal. Sites can go down, and depending on what went wrong, it could take a lot of time to recover them. Courier might try to deploy a document type that it detected as a dependency and accidentally ruin things. I've also found that it requires a lot of training to use properly. I haven't had a lot of success allowing non-technical folk to use Courier. If you use Courier, set up a test environment and play around for a while. Make sure you know what workflows work for you and what will break things. Courier will let you shoot yourself in the foot.
Update: Umbraco has been using Courier a lot for their new Umbraco as a Service. They have been finding and fixing a lot of the bugs. The 2015 versions of Courier are much more stable. If you want to use Courier, make sure you are using the newest versions for Umbraco 7. I've recently been doing some testing on Courier version 2.50.1. Much, much better. I'd still tread carefully, though.
Another update: Umbraco has been depending more and more heavily on Courier. They have announced a new and reworked Courier called Umbraco Deploy. I look forward to it. Once it is released, it will be a better choice than Courier, and I expect it to function similarly.
Umbraco Cloud is a whole SaaS setup that Umbraco has been working on very heavily. They can host your Umbraco site in Azure and have a very neat UI and process for deploying not only the content and media of your site but also all of the code, document types, and data types. This is still somewhat new, and a lot of very complex sites may not be a good fit for Umbraco Cloud. Also, sites that rely heavily on document type inheritance rather than document type composition might have problems. As far as I can tell, Umbraco Cloud is nice for small to medium sized sites, but Umbraco does have some very, very large sites hosted on Umbraco Cloud as well. Umbraco Cloud relies heavily on the new Umbraco Deploy that is based on Courier. Chances are that if your site is having trouble with the new Courier, it will still have problems on Umbraco Cloud.
uMirror is one that I've never used, but it exists and could be useful.
uSync Content Edition is another one that I've never used. We do have experience using the regular uSync, and I've found that the author is very responsive to issues and questions.
It sounds like you are seeking something like uSync.ContentEdition, which will allow you to export the database content to disk.
You can copy the files over to staging, and then import them into the database.
Be careful though, the author himself states that it is "Experimental (but getting better)".
An alternative option would be to copy the database itself from live to staging every so often, assuming that the staging database can be overwritten. This is the approach I would take.
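A hedged sketch of that periodic copy, assuming SQL Server and a shared backup location (server names, database name, and paths are all hypothetical):

    using System.Data.SqlClient;

    // Hedged sketch: refresh the staging database from a fresh live backup.
    // All server names, database names, and paths are hypothetical.
    class RefreshStaging
    {
        static void Main()
        {
            const string backupFile = @"\\share\backups\umbraco_live.bak";

            using (var live = new SqlConnection("Server=LIVESQL;Database=master;Integrated Security=true"))
            {
                live.Open();
                var backup = new SqlCommand(
                    "BACKUP DATABASE [UmbracoCms] TO DISK = @file WITH INIT", live);
                backup.Parameters.AddWithValue("@file", backupFile);
                backup.CommandTimeout = 0; // backups of large databases take a while
                backup.ExecuteNonQuery();
            }

            using (var staging = new SqlConnection("Server=STAGESQL;Database=master;Integrated Security=true"))
            {
                staging.Open();
                // WITH REPLACE overwrites the existing staging copy. If the file
                // layout differs between servers you would also need WITH MOVE,
                // and you may need to kick active connections (SINGLE_USER) first.
                var restore = new SqlCommand(
                    "RESTORE DATABASE [UmbracoCms] FROM DISK = @file WITH REPLACE", staging);
                restore.Parameters.AddWithValue("@file", backupFile);
                restore.CommandTimeout = 0;
                restore.ExecuteNonQuery();
            }
        }
    }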
We have a web application under development by a mid-size team. We maintain several environments for development and testing.
A problem I've come up against is not knowing which version of the code is currently on a particular server. We work around this by just re-deploying whenever we're unsure.
Is there any way to include information from Git in (for example) the website footer? Ideally we'd be able to see the latest commit, the branch, and any tags.
Without reinventing the wheel, does anyone know of a way to do this?
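To make it concrete, here's a rough sketch of the kind of thing we're imagining: reading the commit and branch by shelling out to git once at startup (all names are hypothetical, and it assumes git and the .git folder are available on the server; stamping the version into the assembly at build time would avoid that):

    using System.Diagnostics;

    // Rough sketch: capture commit and branch once at startup by shelling out
    // to git. Assumes git is on PATH and the .git folder is deployed with the
    // site; a build-time version stamp would avoid both assumptions.
    public static class GitInfo
    {
        public static readonly string Commit = Run("rev-parse --short HEAD");
        public static readonly string Branch = Run("rev-parse --abbrev-ref HEAD");

        private static string Run(string args)
        {
            var psi = new ProcessStartInfo("git", args)
            {
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var process = Process.Start(psi))
            {
                return process.StandardOutput.ReadToEnd().Trim();
            }
        }
    }

The footer would then just render GitInfo.Commit and GitInfo.Branch; running "git describe --tags" the same way would pick up the latest tag.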
Does anybody have experience automating deployments with Kentico? E.g. the difficulty of synchronizing document types, bizforms etc to another server?
I've used the built-in content staging module to do this sort of thing. Unfortunately it's not all unicorns and rainbows. There were definitely some bugs in the module, which essentially serializes the data from one server and deserializes it on the target server.
That was back in version 5.5 or 5.5R2, though, and they released version 6 a few months ago. I would take some time to look at the documentation for its limitations, and then maybe give it a test before committing to it. It can definitely work for some, but it may not be content-editor friendly.
Kentico Developer Documentation on Content Staging Module
Another possibility would be to utilize a tool that does database comparisons and syncing. I've used the SQL Examiner Suite before, but I've heard that Red Gate makes good tools too.
SQL Examiner
SQL Data Examiner
Red Gate Tools SQL Compare
While this probably isn't the best method, it can work. If you're not making significant changes on a regular basis, this can be good for one-off syncs between your local/dev server and production. This probably wouldn't be a good solution for "content staging", but more for changes that occurred due to development-oriented tasks.
Another option is to use the Export/Import feature in Kentico: http://devnet.kentico.com/docs/6_0/devguide/index.html?export_and_import_overview.htm.
I haven't automated this process, but you can have a look at the ExportManager class in Kentico's API Reference: http://devnet.kentico.com/Documentation.aspx.
Hope this helps
With Kentico 10 you could use the Continuous Integration Feature. It is now working much better than in Kentico 9.
With the Continuous Integration feature, database objects can be deployed together with the code files and are serialized automatically into the target database.
If you do not want to use this module, you need to use the Object Export Feature in Kentico (Site => Export site or objects).
In both scenarios you have to know that content (pages) is difficult to stage between different servers. Content staging is only useful if you have a "real" staging server, where content editors prepare the content that should be staged to the live server on time.
If you stage from a DEV server to the LIVE server, pages will be overwritten by the dev version if the GUIDs of the pages match.
If you use Continuous Integration, all pages that are not in the DEV server instance will be deleted!
All other objects (development objects like templates, web parts, page types, etc.) can be imported without any issues.
My team is developing a new application (C#, .Net 4) that involves a repository for shared users content. We need to decide where to store it. The requirements are as follows:
Share files among users.
Support versions.
Enable search by tags and support further queries such as "all the files created by people from group X"
Different views for different people (team X sees its own content and nobody else can see theirs).
I'm not sure what's best, so:
Can I search over SVN using tags (not SVN tags, of course; more like Stack Overflow's tags)?
Does it make sense to duplicate the content in both SVN and SQL?
Any other suggestions?
Edit
The application enables users to write validation tests that they later execute. Those tests are shared among many groups on different sites. We need versioning for the usual reasons: undoing changes, recovering from sudden deletions, etc. This calls for SVN.
The thing is, we also want to add the option to find all the tests that are tagged "urgent" and were executed by now, for tracking purposes.
I hope I made myself more clear now :)
Edit II
I ran into SvnQuery and it looks good, but does it have an API I can use? I'd rather use their mechanism with my own GUI.
Edit III
My colleague strongly supports using only a database and forgetting file-based storage. He claims it is better for persistence (which is needed; a test is more than the list of commands to execute). I'd appreciate input on this issue, as I think it should be possible to do it either way.
Thanks!
Firstly, consider using Git rather than SVN. It's much faster, and I suspect it's more appropriate in your use case: it's designed to be distributed, meaning your users will be able to use it without internet access, and you won't have any overhead related to communicating with the server when saving documents.
Other than that, I'm not making full sense of your question but it seems like the gist of it might be better rephrased like so: "Can I do tag-based searches/access restriction onto my version control system, or do I need to create a layer on top to do so?"
If so, the answer is that you need a layer on top. Some exist already, both web-based (e.g., Trac) and desktop-based (e.g. GitX). They won't necessarily implement exactly what you need but they can be a good starting point to do what you're seeking.
You could use SVN.
Shared files: obvious and easy. It also supports the centralised locking that you might need for binary files.
Versions. Obviously.
Search... now we're getting into difficult territory. There are Lucene-based add-ons that allow web searching of your repo: OpenGrok, SvnQuery, or svn-search. These would be your best starting points for that.
There is no way to stop people seeing what's present in an SVN repo, but you can stop them from accessing it. I don't know if the access control could be extended easily to provide hidden folders; you could ask the SVN developers.
There are some great APIs for working with SVN; probably the most accessible is SharpSVN, which gives you a .NET assembly, but there are Python and C bindings and all sorts available.
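For instance, a minimal SharpSVN sketch (the repository URL and local paths are hypothetical) that checks out a working copy and commits a change:

    using SharpSvn;

    // Minimal SharpSVN sketch -- repository URL and local paths are hypothetical.
    class SvnExample
    {
        static void Main()
        {
            using (var client = new SvnClient())
            {
                // Check out a working copy (use client.Update for an existing one).
                client.CheckOut(new SvnUriTarget("http://server/svn/tests/trunk"),
                                @"C:\work\tests");

                // ... edit files under C:\work\tests here ...

                // Commit everything changed under the working copy.
                client.Commit(@"C:\work\tests",
                              new SvnCommitArgs { LogMessage = "Update test case" });
            }
        }
    }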
As mentioned, there are web tools which sit on top of SVN to provide a view into it; there's Trac and Redmine, and several repo viewers like WebSVN, so there's plenty of sample code to use to cook up your own.
Would you use a DVCS like Git or Mercurial? I wouldn't. Though these have good mechanisms in themselves, it doesn't sound like they're what you're after. They allow people to work on their own and share with others on a peer-to-peer basis (though you can set up a 'central' repo and work with that as everyone's peer). They do not work in a centralised, shared way. For example, if you and I both edit a test case locally and then push to the central repo, we might have issues merging. We will have issues merging if the file is a binary or otherwise non-mergeable file. In this case you have a problem with losing one person's changes. That's one main reason for not using a DVCS in your case.
If you're trying to get shared tests together, have you looked at some apps that already do this? I noticed TestRail recently, which sounds like what you're trying to do. It's not free (alas) but it's cheap.
A little bit of Background first:
I have been using Team Foundation Server for a few months and know pretty much how to use it. I have been using it for my project on CodePlex. They required TFS, and it was in my Visual Studio installation, so I never really knew what it took to get it working; it worked seamlessly inside Visual Studio and I just had to do check-in and check-out stuff...
But now I wanted to see what other alternatives were available. I first installed the Mercurial command line (which I never used), then searched for a GUI alternative and installed TortoiseHg, following the instructions in the documentation on its website. Then it said to install a three-way diff tool. I searched for one and found TortoiseSVN; I thought it must be some plugin or something, so I searched SO for questions related to my situation, when I stumbled upon this SO question and was pretty mesmerized by so many tools for different jobs.
Now:
Can somebody explain what all these tools are for in source control? Do I have to install a different tool for every task, or is there a single package for all of them? And basically, what are the tasks we perform in source control? I only know check-in, check-out, and checking differences from the CodePlex website. What else should I know?
Does every website (GitHub, Bitbucket, etc.) use a different Tortoise(xxx) for its source control?
Are source control and version control different terms?
Please help.
This is a huge topic and will be impossible to provide a single all-encompassing answer. Nonetheless here are a few thoughts, assuming you are looking for more of a Software Configuration Management solution rather than a simple Revision Control System type approach:
Release Management:
In addition to concurrency control (check-in, check-out, etc.), your SCM can/should also provide history, tagging, branching, and other release-management capabilities. That is, it should always provide a single source of truth as to which source files went into which release, service pack, etc. In order to do this, your build environment needs to be well integrated into your SCM.
WIP Management:
A good SCM system will allow you to compare your work-in-progress to the latest checked-in revision. It should also let you revert your WIP, shelve it temporarily, or merge another's changes on a file-by-file basis.
Documentation & Training
Do not underestimate how important it is to use a tool that can give you a ton of help, books, documentation, community support, and even paid support if needed. Also selecting a "popular" tool can mean that some new developers have one less thing to learn.
Continuous Integration:
Automated builds are a must for any serious organization, and you should pick an SCM that can be accessed by your build systems (e.g. Hudson, CruiseControl, Bamboo).
Security:
The SCM system should have a built-in authentication system and also be able to use outside authentication providers, as many organizations change over time. In addition, it should be able to support developers working outside the firewall, preferably over HTTP.
IDE and Build Tool Integration:
To make all this stuff easier, your SCM must be able to be seamlessly linked into your development system and any command-line tools you use. This is made easier by the fact that almost all non-Microsoft IDEs support all SCM tools.
Source Browsing:
Most SCM tools that I've seen have a number of very high quality, third party browsers such as Fisheye. So I discount this as a differentiating factor.
So which tool to use?
If your organization is fairly well contained within your company, then pick Subversion. It is very popular, integrates with every IDE/OS/build tool, works with TortoiseSVN, supports all platforms and multiple protocols, has several UIs, a powerful command line, and a huge community, is free, and is rock solid. It also has an excellent free book.
If you have a highly distributed development group and/or expect to receive open-source contributions from many different folks, go with the distributed capabilities of Git.
Beyond these two, save yourself a ton of time and hassle and forget everything else....really. I realize I am being opinionated, but you kinda asked for an opinion.
If I were to advise something to you, it would be:
Use Mercurial (aka hg), and start by learning it in the command line. That way you will learn all the basic concepts, which could be somewhat hidden from you when using only a GUI overlay such as TortoiseHg. All with a good simplistic tutorial of course, perhaps the widely known hginit, which covers some simple usage scenarios.
That would be the answer to the "What else should I know" part, at least for a start. You can then explore by yourself, having a limited but somewhat solid base. Or, at least, you will be able to ask more concise questions to learn more, or make more sense of the SO question you quote. Your question is somewhat broader than this, of course, but I would advise you not to try to grasp everything at once. Each system has its own quirks and specialties, but you shouldn't be worried by that fact now. Just as with programming: you should not try to learn many languages at once if you don't know any yet.
Ah, and as a finishing touch: Tortoise(xxx) is not exactly a revision control system; that's just a typical name for a shell-integrated Windows client for system xxx. As far as I can tell, the "Tortoise" part refers to the "shell".
PS. the "Mercurial" advice is due to my personal taste of course, but also by the feeling that learning Hg will enable you to grasp most of the ideas from other systems quite easy (if you ever need to).
From my personal experience I would recommend looking at the new generation of source control systems, called distributed version control systems. These are systems like Git (and I think Mercurial, but I haven't used that) that actually store a full version control repository locally; when you commit to the remote repository (push, in Git terms), you push the changes in your local version control system to the master version control system on the server.
Also, Git is designed to make branching a breeze. In systems like Subversion, branching is not as easy, but with Git, branching is the recommended practice for making changes. I have used Git, Subversion (SVN), and SourceSafe (the worst source control system of the three by far!), and this is the major advantage of Git over more traditional source control systems.
For example, if you are fixing a bug or adding a feature in a code base that uses SVN, the standard practice would be to:
Check out the branch you are going to work in.
Make any bug fixes and test them.
Check in the changes.
With Git or similar systems you would:
Branch the master branch locally (i.e. development, production version 1.1, etc.).
Make any bug fixes and test in your locally branched version (i.e. you made a jira-123-bugfix branch for version 1.1).
Merge the branch back into your local copy of the master branch that you created it from and make sure everything is OK.
Then push the changes you made to your local copy of the master branch to the central Git repository.
The advantage of this is that if you have to go back and revisit the bug fix, you still have your local copy of that branch.
See articles like A Successful Git Branching Model for more info.
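If you wanted to drive that same branch/fix/merge flow from .NET code rather than the command line, a hedged LibGit2Sharp sketch might look like this (repository path, branch name, and identity are all hypothetical):

    using System;
    using LibGit2Sharp;

    // Hedged LibGit2Sharp sketch of the branch/fix/merge flow described above.
    // Repository path, branch name, and author identity are hypothetical.
    class BugfixFlow
    {
        static void Main()
        {
            using (var repo = new Repository(@"C:\work\myproject"))
            {
                // Branch the local master and switch to the new branch.
                // (Commands.Checkout is the newer API; older versions used repo.Checkout.)
                var fix = repo.CreateBranch("jira-123-bugfix");
                Commands.Checkout(repo, fix);

                // ... make and commit the fix on the branch ...

                // Merge the branch back into the local master.
                Commands.Checkout(repo, repo.Branches["master"]);
                var who = new Signature("Dev", "dev@example.com", DateTimeOffset.Now);
                repo.Merge(fix, who);

                // Finally, push master to the central repository
                // (repo.Network.Push -- omitted here for brevity).
            }
        }
    }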
I will be taking on the role of support for a complex application that is transitioning from the development team. This application is a sharepoint solution that connects to several (7) web services. The development team is rolling off almost immediately and will be available only for small questions.
I'm new to this role, so I'm wondering what suggestions you have for me as I take on this large project. What are some considerations that should be made so that the transition to support is smooth and uninterrupted?
I've been reading the documentation, but I can already see some gaps that need to be filled. The application is very (perhaps overly) configurable, and there is lots of injected code. Stepping through the code is about the only way I can gain an understanding of what is actually happening.
It sounds like you've already got your environment set up if you're able to debug the application, so that's the first thing I was going to suggest in a knowledge-transfer situation. Some general things that I would get from the developers before they depart:
A list of third-party components that the application uses, along with license information and website logins if applicable.
Access to every part of the environment that this thing runs on, both production and development. That means the source code management system, database server(s), etc. It sounds like you have some of these already but make sure you get access to absolutely everything.
If your development environment was given to you "as is" (i.e. you took it over from one of the departing developers), make sure you know how to rebuild it from scratch. They might have a document that describes the process of building a development box, but if not, maybe you can get them to show you how to set up a fresh machine.
Item three will go a long way towards this, but if setting up a server to run the application is different in any way from setting up a development environment, you'll want to know how, so you can diagnose server configuration issues if they crop up, or even rebuild a server. Although this sort of thing may be someone else's responsibility, depending on your organization.
Once you have those, you probably want to get some understanding of why the application does the things that it does. That will give you the context you need to understand support and enhancement requests when they come in.
Are the original developers the only source of this information, or are there business people who you will be working with after the developers leave? One of the first things I try to do when starting on an existing application that's new to me is to find someone who knows the business well and have them give me a high-level run-down of the application's purpose in life. From there you can go into more detail on individual components/features/whatever as needed. The business people may be a better source for this information than the developers are, so you may want to try them first.
Hopefully some of that helps.
If you're not the systems admin (as opposed to the SharePoint admin), develop an understanding with them of what tasks you are able to do and what you need of them.
This may include things like stopping and starting services (IIS, Timer Service, etc.) and filesystem and DB monitoring and maintenance. Getting this sorted out up front saves a lot of pain later.
If the sys admins don't have some understanding of SharePoint, educate them. They will need to know what the deal is with things like code deployments.
It's best not to feel my pain.