Based upon your experience: if you were given the opportunity to set up the development processes for a small development team, what would you implement (tools, documents, methodology), and how would you implement it?
I wish to implement the following:
Source Control
Bug tracking database
Formal Spec templates
Code Reviews
Coffee cup meetings (simple, quick, informal meetings over coffee :) )
Strict coding conventions
Please keep in mind this would be for a C# .NET focused environment.
If you plan on purchasing, or already have purchased, MSDN subscriptions, there is a major change in licensing that plays out in your favor for Team Foundation Server 2010. Server licensing is now included with MSDN, and you are provided with one CAL per subscription. More licensing details are available here.
Team Foundation Server will cover almost all of your needs in one package. I strongly prefer using as few tools as possible to complete a job, which is one of the reasons I recommend that you look into Team Foundation Server.
Some notes into your specific requirements:
Source Control - One of the primary functions of Team Foundation Server. It works well and easily integrates with AD groups. We have security groups set up by role per project and also for roles across projects. A company developer will be a part of the "Developer" SG and the specific SGs for the projects that he/she is involved in. This allows us to give developers full access to the projects that they are working on and read access to all projects. The system has the added benefit that contractors do not belong to the generalist development group - effectively bucketing them into specific projects.
Bug Tracking Database - Integration with source control is a definite advantage. By using one package, you have a built-in relationship between work items and changesets (which you can further enforce by requiring that changesets are only created in the context of work items; a rough sketch of querying that linkage follows after this list). Work item relationships are very nice - and vary, depending on your template choice. Both Microsoft's SCRUM and Agile templates are well thought out, and have served our needs quite well to date.
Formal Specification Templates - There are a couple of ways to approach this. You could create a specific work item type for each template and prepopulate some of the content, or if you wanted a more traditional approach, you could store document templates within the project's Document tree (which is, effectively, a document library on the Team Project Portal site).
Code Reviews - Basic functionality like Annotate (or Blame, if you prefer) is built in. Diff tools are provided as well - plus you can swap the diff tools out for others if you do not like what ships with it. (Personally, I use DevArt's CodeCompare.) As for the actual review process, I am a fan of TeamReview.
Strict Coding Conventions - StyleCop is a must, in my opinion. As such, I also believe that ReSharper is a must. Providing conventions is one thing, but being able to visibly put them in front of the developer is another. Using something like StyleCop for ReSharper will provide real-time feedback on violations of policies. Anything that you want to add that isn't a part of StyleCop can be created via custom rules, and you can put the ReSharper configuration for SCFR in TFS so that it is shared by the team.
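To illustrate the work item/changeset linkage mentioned above, here is a rough sketch using the TFS 2010 client object model; the collection URL and changeset id are hypothetical, and the assemblies come from the Team Explorer install:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class ChangesetWorkItems
    {
        static void Main()
        {
            // Hypothetical collection URL and changeset id.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var versionControl = collection.GetService<VersionControlServer>();

            Changeset changeset = versionControl.GetChangeset(1234);
            Console.WriteLine("Changeset {0}: {1}", changeset.ChangesetId, changeset.Comment);

            // Work items linked to the changeset through the built-in association.
            foreach (var workItem in changeset.WorkItems)
            {
                Console.WriteLine("  {0} {1}: {2}", workItem.Type.Name, workItem.Id, workItem.Title);
            }
        }
    }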
Bonus items that you get that you did not explicitly mention:
Build Management - The build tools in Team Foundation Server 2010 are completely overhauled. Builds are now defined as workflows using Workflow Foundation, but can still be manually manipulated for more complex build scenarios. Gated check-ins, often referred to as "buddy builds", help keep bad code out of the trunk.
Test Management - Testers can benefit from the streamlined testing tools in Visual Studio 2010. Automation of CodedUI tests, and the Test Lab Management tools are major strides in the evolution of Visual Studio. Having a tester be able to capture the state of the machine and automatically insert it into a work item is brilliant, and long overdue. Plus, if you end up using the Test Lab tools, actually being able to capture a snapshot of the VM at the moment of the crash is just pure gravy.
Collaboration - Even if you don't plan on using it at first, the Team Project Portals that are created for each Team Project are ripe with opportunity for collaboration. Just for document management and project wikis alone, it is worth its weight in gold.
Reporting - The reports vary with the templates that you use, but most templates have the type of reports that management cares about ready to go. Adding new reports is fairly simple, due to the way that the TFS team has presented data in the cube. A little SSRS knowledge will have you creating detailed custom reports in no time at all.
Planning - You do not mention what type of methodology you are using - but the Agile template has some really nice sprint planning tools built in. You literally launch a sprint planning worksheet from Visual Studio, which opens in Excel, and any changes you make are reflected in TFS. Really works great for a group planning session.
Support - This is one of the most important factors to me. Having all of the above in a single package also means that I only need to go to one vendor for support. It is invaluable to me to be able to have one phone number to call in the rare occasion that something does go wrong, and know that I have the support incidents to cover it already paid for, thanks to my MSDN subscription.
All that being said, TFS does have a bit of a learning curve. Installation and setup are actually quite simple, assuming that you follow the documentation. The learning curve comes from the fact that there is so much you can do with TFS. You might not use all of the features, so the amount of time you actually need to spend learning may vary. The native integration with Visual Studio (and Office) does provide a seamless feel, though, which should translate well to the developers using the system.
Source Control: For a .NET environment, Team Server is good stuff. If you want a free solution, I like Mercurial.
Bug Tracking: FogBugz (of course :)
Spec Templates: I think defining these should be a collaborative process between the development team and business units. Start with a very light framework and let them evolve; don't prescribe a massive document that includes information no one ever uses.
Code Reviews: Pair programming time offers this in abundance. Call out good practices to the team; don't use reviews as a way to humiliate developers in public.
Meetings: I'm an Agile kind of guy, so meetings (standups) should be brief, to the point, and happen every day at the same time.
Coding Conventions: Again, if you have capable developers you shouldn't have to prescribe strict conventions. Agree as a team on basic conventions and address friction points as necessary.
If your team can foot the bill for Team Foundation Server, you'll have most of what you want in one convenient package.
Source Control: Changeset-based configuration control system with full branching support.
Bug tracking database: Work items - configurable, including bug tracking and reports.
Formal Spec templates: Work items - configurable, including requirements (CMMI), scenarios (Agile), custom types, etc.
Code reviews: Work items - track your reviews just like any other TFS piece of work. A review work item ships with the CMMI process template.
Strict coding convention - I've heard of people integrating StyleCop with their check-in policy (a rough sketch of running StyleCop as a check-in gate follows below).
As for coffee cup meetings, you won't need a tool for that. :)
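To make the StyleCop idea concrete, here is a rough sketch of running StyleCop programmatically so a check-in gate (or build step) can fail on violations. It assumes the StyleCop 4.x assemblies and a Settings.StyleCop file discoverable next to the sources; treat the exact constructor arguments as illustrative rather than authoritative.

    using System;
    using StyleCop;   // StyleCop.dll from the StyleCop 4.x install

    class StyleCopGate
    {
        // Returns true when the given source files have no StyleCop violations.
        static bool Analyze(string projectDir, string[] sourceFiles)
        {
            // null settings path = pick up Settings.StyleCop discovered on disk
            var console = new StyleCopConsole(null, false, null, null, true);

            var project = new CodeProject(0, projectDir, new Configuration(null));
            foreach (var file in sourceFiles)
            {
                console.Core.Environment.AddSourceCode(project, file, null);
            }

            int violations = 0;
            console.ViolationEncountered += (sender, e) =>
            {
                violations++;
                Console.WriteLine("Line {0}: {1}", e.LineNumber, e.Message);
            };

            console.Start(new[] { project }, true);   // true = full analysis
            return violations == 0;
        }
    }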
I'm at a customer where I successfully developed and deployed a WCF service layer (compiled against .NET 4.5). It works perfectly and everything is dandy.
However, we just got an additional requirement - I'm supposed to rebuild (or at least redesign) the layer to incorporate WSSF. There's no old functionality that we'd need to integrate with and all operations in the services are based on executing SPs in a DB.
Should I do that or is it wiser to argue against it? I'm not certain because I've never worked with WSSF and I got virtually no explanation as to why we should use it at this particular workplace (which could mean that they don't want us to know, or simply that they don't know themselves).
My worries are based on, but not limited to, the following.
The latest release is from August 2010.
There's nothing listed in documentation section.
The license seems to be in conflict with commercial activities.
WSSF isn't widely used as technology today (or is it?!).
The purpose of WSSF is only to WCF-fy an old service layer (or isn't it?!).
Points 4 and 5 especially are not the strongest statements in my arsenal at the moment, so I'll gladly stand corrected should anybody have a few wise words to contribute on the subject.
Short story is that it doesn't look good. From MSDN: Web Service Software Factory 2010:
The Web Service Software Factory is now maintained by the community and can be found on the Service Factory site. This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. Retired: November 2011
1) So, it looks like it's totally being run by the community. However, looking at the discussion forum there aren't many postings and quite a few have no responses.
2) I find it's fairly common for the Documentation tab to be empty on CodePlex; there frequently is documentation, just not on that tab.
3) In terms of licensing Ms-PL is quite permissive so I wouldn't imagine there would be any issues.
4) Not to belittle it but I don't think it was/is very popular. Definitely not a standard.
5) The intent of the service factory was to provide guidance -- both written and code based. See Web Service Software Factory for a discussion.
WSSF was a tool that incorporated best practices for building WCF services. It's been years since I've used it, but basically I recall a wizard that asked several (actually lots of) questions about the service (contract), data (model), etc. What it would produce is a nicely organized solution with several projects with proper naming conventions and verbose declarations, like adding IsOneWay=true/false to [OperationContract], or IsRequired=true/false, Order=n, etc. to [DataMember]. In other words, it generated very verbose code that most of us blow off until we need it.
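For a feel of that kind of explicit contract code, here is a small hand-written approximation; the names and namespaces are made up, but the attributes themselves are standard WCF:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract(Namespace = "http://example.com/ordering/2010/01")]
    public interface IOrderService
    {
        [OperationContract(IsOneWay = false)]
        OrderAck SubmitOrder(Order order);

        [OperationContract(IsOneWay = true)]
        void CancelOrder(string orderNumber);
    }

    [DataContract(Namespace = "http://example.com/ordering/2010/01")]
    public class Order
    {
        [DataMember(IsRequired = true, Order = 0)]
        public string OrderNumber { get; set; }

        [DataMember(IsRequired = false, Order = 1)]
        public decimal Total { get; set; }
    }

    [DataContract(Namespace = "http://example.com/ordering/2010/01")]
    public class OrderAck
    {
        [DataMember(IsRequired = true, Order = 0)]
        public string ConfirmationId { get; set; }
    }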
It did more though, such as structuring your solution so that service contracts were in one project, data contracts in another, and implementation in yet another. It created test projects (I believe) - so, a very granular layout of the solution. I remember the simplest of services would result in about 6-7 projects in the solution. It was a little intimidating at first until you poked through the code it generated.
Another cool feature it had (at the time, one many were asking for) was a way to do contract-first development. Given existing web service metadata, you could construct a new service solution.
Anyway, once it was completed, you essentially just had to provide implementations for the methods. Personally, I never really embraced it for services development. But, at the time, I appreciated it and often referred customers to it who were new to services development, because I knew it would get them off to a proper start.
To comment on your worries though...
That's correct, and it is not getting any resources to update it.
Actually, there is quite a bit of documentation. Just move over to the Home tab and you will see links to it.
Not sure about this. The code it generates is yours. You still have to compile it and it's yours to maintain going forward. No different than any other code-generation tool (as far as I know).
Nope, it is not. Also, consider the time when this was developed, .NET Framework 2 - 3.x. There's been a lot added to WCF since then. There's also been some new guidance on service development. If you're using some of the newer features added in .NET Framework 3.5SP and beyond (which you probably are), then this definitely is not something I would recommend using.
Again, that was one of the nice features (contract first development). But, that really wasn't the main idea. It was a tool to build out the framework for new services too. In fact, new service development was the original motivation of the tool as I recall. Once you took the time to go through the dialogs, you had a really nice solution to start building on.
A little bit of Background first:
I have been using Team Foundation Server for a few months and know pretty much how to use it. I have been using it for my project on CodePlex. They required TFS and it was in my Visual Studio installation, so basically I never knew what it took to get it to work, as it worked seamlessly inside Visual Studio and I just had to do the Check In and Check Out stuff...
But now I wanted to see what other alternatives were available, and first installed the Mercurial command line (which I never used), then searched for a GUI alternative and installed TortoiseHg, following the instructions from the documentation on its website. Then it said to install a 3-way diff tool... I searched for one and then found TortoiseSVN; I thought it must be some plugin or something, so I searched SO for questions related to my situation, when I stumbled upon this SO question and was pretty mesmerized by so many tools for different work.
Now:
Can somebody explain what all these tools are for in source control? Do I have to install a different tool for every different task? Isn't there a single package for all of them? And basically, what are the tasks we perform in source control? I only know Check In, Check Out, and checking diffs from the CodePlex website. What else should I know?
Does every site like GitHub, Bitbucket, etc. use a different Tortoise(xxx) for its source control?
Are source control and version control different terms?
Please help.
This is a huge topic and will be impossible to provide a single all-encompassing answer. Nonetheless here are a few thoughts, assuming you are looking for more of a Software Configuration Management solution rather than a simple Revision Control System type approach:
Release Management:
In addition to concurrency control (check-in, check-out, etc.) your SCM can/should also provide history, tagging, branching, and other release management type capabilities. That is, it should always provide a single source of truth as to which source files went into which release, service pack, etc. In order to do this, your build environment needs to be well integrated into your SCM.
WIP Management:
A good SCM system will allow to you compare your work-in-progress to the latest checked in revision. It should also let you revert your WIP, shelve it temporarily, or merge another's changes on a file by file basis.
Documentation & Training:
Do not underestimate how important it is to use a tool that can give you a ton of help, books, documentation, community support, and even paid support if needed. Also selecting a "popular" tool can mean that some new developers have one less thing to learn.
Continuous Integration:
Automated builds are a must for any serious organization and you should pick an SCM that can be accessed by your build systems (e.g. Hudson, CruiseControl, Bamboo, etc.).
Security:
The SCM system should have a built-in authentication system and also be able to use outside authentication providers, as many organizations change over time. In addition, it should be able to support developers working outside the firewall, preferably over HTTP.
IDE and Build Tool Integration:
To make all this stuff easier, your SCM must be able to be seamlessly linked into your development system and any command line tools you use. This is made easier by the fact that almost all non-Microsoft IDEs support all the major SCM tools.
Source Browsing:
Most SCM tools that I've seen have a number of very high quality, third party browsers such as Fisheye. So I discount this as a differentiating factor.
So which tool to use?
If your organization is fairly well contained within your company then pick Subversion. It is very popular, integrates with every IDE/OS/build tool, works with TortoiseSVN, supports all platforms, supports multiple protocols, has several UIs, a powerful command line, and a huge community, is free, and is rock solid. It also has an excellent free book.
If you have a highly distributed development group and/or expect to receive open-source contributions from many different folks, go with the distributed capabilities of Git.
Beyond these two, save yourself a ton of time and hassle and forget everything else....really. I realize I am being opinionated, but you kinda asked for an opinion.
If I were to advise something to you, it would be:
Use Mercurial (aka hg), and start by learning it on the command line. That way you will learn all the basic concepts, which could be somewhat hidden from you when using only a GUI overlay such as TortoiseHg. All with a good, simplistic tutorial of course, perhaps the widely known hginit, which covers some simple usage scenarios.
That would be the answer to the "What else should I know" part, at least for a start. You can then explore by yourself, having a limited but somewhat solid base. Or, at least, you will be able to ask more concise questions to learn more, or make more sense of the SO question you quote. Your question is somewhat broader than this, of course, but I would advise not trying to grasp everything at once. Each system has its own quirks and specialties, but you shouldn't be worried by that fact now. Just as with programming - you should not try to learn many languages at once if you don't know any yet.
Ah, and as a finishing touch: Tortoise(xxx) is not exactly a revision control system; that's just a typical name for a shell-integrated Windows client for system xxx. As far as I'm concerned, the "Tortoise" part refers to "shell".
PS. the "Mercurial" advice is due to my personal taste of course, but also by the feeling that learning Hg will enable you to grasp most of the ideas from other systems quite easy (if you ever need to).
From my personal experience I would recommend looking at the new generation of source control systems, called Distributed Version Control Systems. These are systems like Git (and I think Mercurial, but I haven't used that) that actually store a full version control system locally; when you commit to the remote repository (push, in Git terms) you push the changes in your local version control system to the master version control system on the server.
Also, Git is designed to make branching a breeze. In systems like Subversion branching is not as easy, but with Git branching is the recommended practice for making changes. I have used Git, Subversion (SVN) and SourceSafe (the worst source control system of the three by far!) and this is the major advantage of Git over more traditional source control systems.
For example, if you are fixing a bug or adding a feature in a code base that uses SVN, the standard practice would be to:
Check out the branch you are going to work in.
Make any bug fixes and test them.
Check in the changes.
With Git or similar systems you would:
Branch the master branch locally (i.e. development, production version 1.1, etc.).
Make any bug fixes and test in your locally branched version (i.e. you made a jira-123-bugfix branch for version 1.1).
Merge the branch back into your local copy of the master branch that you created it from and make sure everything is OK.
Then push the changes you made to your local copy of the master branch to the central Git repository.
The advantage of this is that if you have to go back and revisit the bug fix, you still have your local copy of that branch (a rough code sketch of this flow follows at the end of this answer).
See articles like A Successful Git Branching Model for more info.
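Since this thread is .NET-centric, here is a rough sketch of the same local branch/merge flow using the LibGit2Sharp library; the repository path, branch names and identity are all made up, and pushing to the central repository would follow via repo.Network.Push once a remote and credentials are configured:

    using System;
    using LibGit2Sharp;

    class BugfixFlow
    {
        static void Main()
        {
            using (var repo = new Repository(@"C:\src\myproject"))
            {
                var signature = new Signature("Dev Name", "dev@example.com", DateTimeOffset.Now);

                // 1-2. Branch locally from master and check the branch out.
                var bugfix = repo.CreateBranch("jira-123-bugfix");
                Commands.Checkout(repo, bugfix);

                // ... edit files, run tests ...

                // 3. Commit the fix on the local branch.
                Commands.Stage(repo, "*");
                repo.Commit("Fix null reference in invoice export", signature, signature);

                // 4. Merge the branch back into the local master copy.
                Commands.Checkout(repo, repo.Branches["master"]);
                repo.Merge(bugfix, signature);
            }
        }
    }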
I'm investigating technologies with which to develop a medium-scale (up to 100 or 200 simultaneous users) database-driven web application, and someone suggested Morfik. However, outside of the Morfik company I can find practically zero community support - no active blogs, no tutorials, no videos, no books - and this is of some concern (especially when compared to C# / ASP.NET / nHibernate etc support). Deciding between Morfik (untried and not used widely AFAIK) and the other technologies I mentioned (tried, tested, used widely) is becoming a critical issue for my company.
Has anyone had success using Morfik in these kind of circumstances? What kind of performance did you achieve?
Having been a Morfik user for the last 2-3 months, trying to do a quite large project, I totally understand your concern.
The community is small, Morfik developers though try to help you and answer almost all your questions. It was one of my concerns before purchasing it, but it's not a big deal actually.
However, it lacks documentation and tutorials. Yes, there is a chm help file, but it is outdated and lacking in many ways. There are not enough examples; you have to figure a lot of stuff out on your own. But they say enhancing the documentation is one of the Morfik team's first priorities for the upcoming release.
We chose not to use Firebird as the DB (Morfik supports it natively) and went with PostgreSQL over ODBC. There are issues to overcome there too. We had to dive in and modify (override) our own security wrapper for Postgres, etc. But overall, Morfik integrates with it quite fine. You should be prepared for small annoyances, though.
We chose to go with the Pascal version, as it is the major language the developers use. But, oh, I hate Pascal so much :) It had been 10+ years since I last used Pascal, and it can be really annoying with the quirky code editor of Morfik. I miss Visual Studio, or even Notepad++, as an editor!
Since we started our app, I have seen new components and examples released quite frequently. The Morfik team has invested in a separate team that develops add-ons for Morfik, which is a good thing.
So, in terms of support (not community, but staff) you should not worry. It's still far from being a mature product but it does the job. Our relationship with Morfik is a love-and-hate one. I am quite sure our big project will be successfully completed with Morfik, and I can do small enterprise solutions with Morfik very (I mean very) fast. But I would also really think twice about using Morfik again for a big project like the one we are doing now.
I hope I make sense :)
You might try looking at www.morfikwatch.com, which is a blog dedicated to Morfik. There you will find links to a couple of Morfik user communities. You can then ask around.
We use Morfik for a variety of purposes, all intranet based. We are looking at migrating all of our in-house corporate applications by refactoring them into Morfik applications.
Morfik is a new product, and as such, the community is still growing. Although Morfik 1 has been around for a while, Morfik 2 is the first version that makes it easy to develop plugins and other third-party tools. Now small websites are starting to appear that create plugins and support Morfik (http://www.pannonrex.com/ for example).
Morfik is in its infancy yet offers a solution to be found nowhere else. I would recommend it highly. Just give it time and the developer community will appear, just as it did for Delphi and the rest.
best regards
Dalton Calford
Distributel Communications
I'm sorry, when I saw 100-200 simultaneous connections, I immediately thought you meant intranet. We average 300-450 concurrent users on our apps, so we do not consider it an internet based app until you look at a possible 5,000+ users.
The design criteria for such a system is very different than a system with under 1000 users.
When you approach such a system, you are looking at a cloud configuration. As our company is a telecommunications company, and we are required by law to meet 5-9's service for our customers, we use firebird in all our back end processes. Although we have used DB2, Oracle and other products in the past, Firebird has either been more reliable or outperformed the others.
With the about-to-be-released Firebird 2.5 (now in RC 2 if you wish to play with it), you can use Firebird as its own middle tier, with one database connecting to multiple other databases to perform both DML and DDL actions. You can have one Firebird database that has no tables whatsoever, just stored procedures, views etc. That database can then surface the data from multiple sources without the client application knowing. As the connection can be dynamically built within the stored procedures, you can have the backend databases change as needed without changing any front end code. This allows you to load balance, have multiple web servers share a single cluster of databases etc.
So, since Morfik supports Firebird intrinsically, I would say that yes, Morfik can scale well to a larger environment without trouble. As for Firebird support, it has one of the most active user communities on the web.
From the point of view of Morfik, Morfik is a great way to generate a web-based UI while leveraging your existing developer base without having to learn a series of new languages. And it currently lets the developer use the tools for n-tier development without getting in the way. I like that. I do not want a tool that tries to be everything and, in turn, does nothing well.
best regards
Dalton Calford
Distributel Communications
Something that I am very concerned about is 3rd party components. GWT has a fairly large collection of components. We make extensive use of data grids that need to be data aware and very rich, meaning it needs to be able to do grouping and sub groupings and master detail relationships.
You must also be able to create new groupings on the fly.
We also make use of pivot grids a lot, so we need them as well, and a quick google search doesn't show any components that can compare to what is already available in GWT.
It is a pity though, since the Morfik development environment seems very integrated. The GWT environment is a bit funny to me, since I am used to the Visual Studio and Delphi environments, so the way Eclipse works is a bit foreign to me, especially when adding new components to the different designers and editors in Eclipse.
Morfik is a quite limited web development environment for very basic web development. Even if it gives some benefits in the very beginning, in the long term it will tie you down.
I worked with Morfik for two years. You can undoubtedly build management-style applications fairly quickly, and design and maintenance are literally three clicks. But when you want to add slightly more robust functionality it can become a headache, not to mention the inconvenience of adjusting the reports; it has little documentation and the majority of the components are paid.
If you want an app in a short time and it does not need to be very robust, Morfik is a good fit; if you want something more, I do not recommend it.
I have just won 1 Telerik Premium Collection for .NET Developer with subscription (lucky me!) and was wondering whether the OpenAccess ORM is worth learning? Has anyone thrown away their open source variant and are now using the Telerik ORM tools instead?
Are there any benefits from using the Telerik ORM tools instead of an open source variant?
Any thoughts or suggestions?
BTW I can't wait to start using their RadControls for ASP.NET AJAX!!
I've been a happy Telerik customer for more than 5 years. I used their ORM in only one solution and have never used an open source ORM.
Throw away the existing one?
NO - if you have no problems and the thing does what it should do I wouldn't change.
That has nothing to do with quality or other aspects of telerik ORM.
It's just a fact that using a new product means learning new things, solving some already-solved things again in a different way, and so on.
BUT - if you have problems (or must make compromises) with your current product, it's surely worth giving it a try.
Without knowing other ORMs, I have one clear reason why I would try Telerik ORM.
It's their (Telerik's) outstanding support.
None of my other vendors offers or does what Telerik does.
Simply take a look at their forums http://www.telerik.com/community/forums.aspx and you'll see what I mean.
You have a problem - they solve it; and that with very fast response times.
And that's a point you should think about when making a decision about ORM (or any other kind of product).
This is an older post, but I thought I would weigh in.
We recently started using Telerik's SiteFinity product for a client website. It is a very good, developer-oriented tool for creating a web content system without the size or expense of SharePoint or something similar.
We also went with a cloud solution, as Telerik's ORM supports Azure and thus so does SiteFinity - which uses OpenAccess (the ORM) to communicate with its database.
I was very impressed with the speed and flexibility of it all, being my first Cloud (Azure) development project. Telerik's customer support and personal attention is beyond reproach. I have been using Telerik products for years and was not surprised how well it worked.
Two days before the site was to go live everything bombed with a very inexplicable .Net error. As it turns out Microsoft announced they were upgrading their Azure SQL servers starting July, 2011: "This upgrade is also significant in that it represents a big first step towards providing a common base and feature set between the cloud SQL Azure service and our upcoming release of SQL Server Code Name 'Denali'."
(http://blogs.msdn.com/b/windowsazure/archive/2011/07/13/announcing-sql-azure-july-2011-service-release.aspx)
By their very nature, cloud servers are upgraded and moved around behind the scenes so you don't have to mess with them. OpenAccess failed to take this into account, however, and when our SQL Azure server group was upgraded, OpenAccess failed to recognize its version and bombed.
Telerik, of course, was very quick about releasing a patch - but it still took them a few days. We couldn't wait that long, unfortunately, having already lost quite a bit of time just trying to figure out what was going on. The practical result was that I got to work nonstop for two days with no sleep to move the whole thing into a regular .Net solution with Entity Framework 4 as the ORM.
So to answer the question: Is Telerik ORM worth learning and / or better than an open source solution? I agree with the above statement that if you already have an open source solution, it is working well, has good performance, and is intuitive to develop against - absolutely stick with that.
The value of open source is the community that supports it and your ability to make changes to the underlying system if need be. Had my project been based on an open source ORM, I could have changed the code to default to the most recent version of SQL if it finds it is working with a version higher, and problem solved - sleep had.
The value of a product like OpenAccess ORM is that it is in competition with other products, open source or otherwise, so it has to perform well, be customer oriented, have a manual (very important), and be easier than doing it yourself or learning an open source system that may or may not be very intuitive.
Throw in that Telerik's support is top notch, and I would say you could do worse - as long as you are willing to give up some control and have to wait for upgrades / patches to handle things like I described above.
First off I want you to know that I am not a Telerik evangelist...
We did move away from Telerik's ASP.NET AJAX controls, but only because we desired greater control over the look and feel of our UI (we still use the controls for quick internal solutions); I must say their products are excellent given the right conditions. Our web product team started to use the OpenAccess ORM for our solutions and honestly we never looked back. The first reason that comes to mind to choose a Telerik solution is grade-A support, which has never failed to provide a resolution to a problem regarding any of their solutions within 24 hours, usually including sample code... Although I can usually dig through blogs for hours to find solutions for most of my issues regarding Microsoft technology, I must say it is nice to have support when we need it.
I would certainly recommend using the Telerik OpenAccess ORM. I have used Telerik products (e.g. Telerik AJAX/Silverlight) for a number of years and they are best in class, and the technical support is second to none. The company makes money by providing software that works.
Unfortunately this does not apply to open source, since by definition anything can be changed without proper control. All it needs is one duff developer and an entire suite can be rendered useless.
In order to use the products correctly, swiftly, and efficiently, it is necessary to be a highly competent web developer.
I'm one of those people that won a license at a users group meeting. Thankfully I got to experience how crappy this software is without paying for it.
I never got to test the ORM capabilities because the Visual Studio integration failed. Any click on the Telerik menu in VS threw an exception. And the VS item template that was supposed to be installed was not there, so I couldn't even begin to test the functionality.
Don't be fooled by the pretty designer screenshots, they can't even get the installer to work correctly.
What is the benefit of using Windows Workflow foundation (WF) versus rolling your own workflow framework?
From what I can tell, WF only provides a pretty bare-bones runtime engine, a bunch of classes, and a schema (XAML-based) for defining workflows. All the hard stuff such as persistence, providing a host process for the runtime, and implementing distributed workflows (across processes) is left up to you.
Plus there is a learning curve to using WF... if we created our own workflow framework we would simply leverage skills that all developers already have (C#, XML, SQL, etc).
I saw this blog from an MS evangelist which tries to explain why we should use WF:
Why Workflow?...
IMO it doesn't do a good job of convincing because it just states that it helps "developer productivity", while admitting that developers could roll their own.
Can any of the smart folks here come up with a better reason?
SUMMARY FROM ANSWERS GIVEN BELOW:
I think the most convincing reason is that using a standardized workflow platform such as WF (versus rolling your own) will allow you to leverage current and future tooling such as the Visual Designer, provided by MS and third parties.
Also because it is part of the MS stack of .NET based technologies, it will likely have better integration/migration path with future MS technologies (such as Azure).
Finally, the number of developers with WF experience will increase (as it will benefit them career-wise), turning it into a basic commodity skill such as SQL or HTML, meaning it will become easier to find people who can start working with it with minimal ramp up time.
The choice to use WF takes some evaluation and I'm going to try to provide a fairly comprehensive list here of the pros and cons. Keep in mind that if you're going to use WF, don't use anything other than WF4+, because it was rewritten and is significantly more mature than its predecessors.
Pros
Cost
Flexibility
Durability
Distributability
Future
Cost
The cost of WF is important to note when comparing it against other paths. Those paths may include BizTalk, an open source code-based framework like Objectflow, or even rolling your own. Bear in mind that unless you need something fairly simple, rolling your own would be the most expensive approach every time. So, if you need a sizable piece of functionality but also need control over the source code, I would recommend an open source framework.
Flexibility
WF is a very flexible framework in contrast with a framework like BizTalk. In WF you can write your own custom activities and do what you need to do outside of the framework - this really gives you the power you need.
Durability
WF includes a very powerful durability framework. It's durable in the sense that the state of a workflow can be persisted, the workflow can be set idle (to preserve resources), and then recalled later. But, that durability goes a lot further because it's already setup for durability across a host farm. In other words a workflow can be started on one host, persisted, and then recalled on another host.
Assumes that the workflows are hosted via a web service (i.e. WorkflowService).
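As a rough, self-hosted sketch of that durability piece (assuming the instance store database has already been created with the SQL scripts that ship with .NET 4; a WorkflowService host wires up the same store through configuration instead, and the connection string here is hypothetical):

    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;
    using System.Activities.Statements;

    class DurableHost
    {
        static void Main()
        {
            // Hypothetical connection string to the prepared instance store database.
            var store = new SqlWorkflowInstanceStore(
                "Server=.;Database=WorkflowInstanceStore;Integrated Security=True");

            Activity definition = new Sequence
            {
                Activities =
                {
                    new WriteLine { Text = "Started" },
                    new Delay { Duration = TimeSpan.FromMinutes(5) },  // goes idle here
                    new WriteLine { Text = "Resumed" }
                }
            };

            var app = new WorkflowApplication(definition) { InstanceStore = store };

            // Persist and unload the instance as soon as it goes idle.
            app.PersistableIdle = e => PersistableIdleAction.Unload;
            app.Run();

            Console.WriteLine("Instance {0} started; press Enter to exit.", app.Id);
            Console.ReadLine();

            // Later - possibly on another host in the farm - the same definition
            // plus the instance id is enough to pick the workflow back up:
            //   var resumed = new WorkflowApplication(definition) { InstanceStore = store };
            //   resumed.Load(instanceId);
            //   resumed.Run();
        }
    }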
Distributability
WF is already setup to be distributed across a host farm.
Assumes that the workflows are hosted via a web service (i.e. WorkflowService).
Future
WF is the replacement orchestration engine for BizTalk and is in fact developed by the same people that built BizTalk. Therefore WF has a bright future in the Microsoft stack. In fact, right now Microsoft is working on building individual components to replace every feature of BizTalk. For example, Windows Server AppFabric (and more specifically the plug-in to IIS) is the replacement for the monitoring services that exist within BizTalk today.
Why is Microsoft doing this? Because BizTalk, being one massive install, isn't really well suited for the cloud, whereas the components they are building could be deployed to a cloud solution.
Cons
Flexibility
Monitoring
Flexibility
WF's flexibility can also be its pitfall because sometimes you don't need the flexibility that it provides and thus spend more time building stuff that you would otherwise want to just be included. Sometimes you need a framework that makes a lot of assumptions and maybe works off of convention instead (e.g. MVC). However, generally speaking I have found that this isn't true when coupling the WF4 framework with the open source extensions provided by Ron Jacobs.
Monitoring
The monitoring for WF is still very young and this is its biggest pitfall. However, this will advance very quickly over time and in the meantime you can build your own monitoring tools with custom tracking mechanisms.
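For example, a custom tracking mechanism is just a small class derived from TrackingParticipant; a rough sketch follows (a real one would write to a database or log store rather than the console, and the workflow definition name is made up):

    using System;
    using System.Activities.Tracking;

    // Receives every tracking record the workflow runtime emits
    // (activity states, faults, custom tracking records).
    public class ConsoleTrackingParticipant : TrackingParticipant
    {
        protected override void Track(TrackingRecord record, TimeSpan timeout)
        {
            Console.WriteLine("{0:o}  {1}", record.EventTime, record);
        }
    }

    // Registered on the host like any other extension:
    //   var app = new WorkflowApplication(myWorkflowDefinition);
    //   app.Extensions.Add(new ConsoleTrackingParticipant());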
Resources
Your best resource is Ron Jacobs. I have never met somebody as willing to help the community of developers that have to use Microsoft's frameworks as he is. Believe me, he has provided a vast amount of information surrounding WF via numerous channels; just get on Google and check it out.
The main reasons I can think of to lean towards using WF over another workflow framework are:
Microsoft is supporting it as a core part of the framework, so it can/will be easier to integrate into their other technologies like Sharepoint and Azure "cloud applications"
The tooling is likely to improve and be really slick in another few versions, which should improve developer productivity
I have had to create Workflow activities at my job, and I can't even tell you the answer.
One not very valid reason is that invalid values/inputs can be determined and refused at design time for workflow diagrams, so compile-time errors basically don't exist (assuming all the boilerplate code you wrote has no compile-time errors).
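A small example of that design-time checking: marking an argument as required on a custom activity makes the designer and the workflow validator flag any workflow that leaves it unset, before anything runs (the activity name and argument here are made up):

    using System.Activities;

    public sealed class ApproveClaim : CodeActivity
    {
        // Leaving this unbound in the designer produces a validation error
        // at design time instead of a failure at run time.
        [RequiredArgument]
        public InArgument<decimal> ClaimAmount { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            decimal amount = ClaimAmount.Get(context);
            // ... approval logic would go here ...
        }
    }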
Short answer: it's free and it gets the job done. If you can roll a better framework for managing workflow and want to spend your time on it, by all means do. But consider that your time is worth money, so how much money are you willing to commit to building a better framework for managing workflow? I could see that getting expensive.
Also, I'm pretty sure that persistence (to disk or SQL) is handled out of the box.
There is some reasonably nice designer support in Visual Studio that I'd rather not have to roll for myself, and it's a framework supported by someone else rather than me, meaning someone fixes the architecture bugs and does the main testing, leaving me to test just my workflow. I mean, I could roll my own versions of GDI+ calls, but I'd rather not. Same goes for my own serialization framework, XML parser, or some other element of the .NET framework.
When it comes down to it, these things are provided as a toolkit. Whether you choose to use a tool or not depends entirely on the problem you're solving, the suitability of the tool, and the time and resources you have available to achieve the goal.
It's a new technology, or you could say the latest, with a promise of constantly updated features.
It respects the previous working environment, builds on it, and adds features that are very helpful for developing long-running programs (large projects).
It puts capabilities directly into the hands of the developer that previously ran in the background, where there was little interaction between the core concepts and the programmer.
Yes, it's a little complex, but it also puts more power in the hands of the programmer.
You can expect better frameworks and features in the coming future.
It's the future of programming, so we had better start learning it today.