Is WSSF wise to use today on a new WCF service layer? - c#

I'm at a customer where I successfully developed and deployed a WCF service layer (compiled against .NET 4.5). It works perfectly and everything is dandy.
However, we just got an additional requirement - I'm supposed to rebuild (or at least redesign) the layer to incorporate WSSF. There's no old functionality that we'd need to integrate with and all operations in the services are based on executing SPs in a DB.
Should I do that, or is it wiser to argue against it? I'm not certain because I've never worked with WSSF, and I got virtually no explanation as to why we should use it at this particular workplace (which could mean that they don't want us to know, or that they simply don't know themselves).
My worries are based on, but not limited to, the following:
1) The latest release is from August 2010.
2) There's nothing listed in the documentation section.
3) The license seems to be in conflict with commercial activities.
4) WSSF isn't widely used as a technology today (or is it?!).
5) The purpose of WSSF is only to WCF-fy an old service layer (or isn't it?!).
Points #4 and #5 especially are not the strongest statements in my arsenal at the moment, so I'll gladly stand corrected should anybody have a few wise words to contribute on the subject.

Short story is that it doesn't look good. From MSDN: Web Service Software Factory 2010:
The Web Service Software Factory is now maintained by the community
and can be found on the Service Factory site. This content is outdated
and is no longer being maintained. It is provided as a courtesy for
individuals who are still using these technologies. This page may
contain URLs that were valid when originally published, but now link
to sites or pages that no longer exist. Retired: November 2011
1) So, it looks like it's totally being run by the community. However, looking at the discussion forum there aren't many postings and quite a few have no responses.
2) I find it's fairly common for the documentation tab to be empty on CodePlex; frequently there is documentation, just not on the documentation tab.
3) In terms of licensing, Ms-PL is quite permissive, so I wouldn't imagine there would be any issues.
4) Not to belittle it but I don't think it was/is very popular. Definitely not a standard.
5) The intent of the service factory was to provide guidance -- both written and code based. See Web Service Software Factory for a discussion.

WSSF was a tool that incorporated best practices for building WCF services. It's been years since I've used it, but basically I recall a wizard that asked several (actually, lots of) questions about the service (contract), data (model), etc. What it would produce is a nicely organized solution with several projects with proper naming conventions, and verbose declarations like adding IsOneWay=true/false to [OperationContract]s, or IsRequired=true/false, Order=n, etc. to [DataMember]s. In other words, it generated very verbose code that most of us blow off until we need it.
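To give a flavor of that verbosity, here's a small hand-written approximation in the style of what it generated (ICustomerService and CustomerDto are invented names, not actual WSSF output):

using System.Runtime.Serialization;
using System.ServiceModel;

// Approximation of WSSF-style generated code: every option is spelled
// out explicitly, even where the defaults would have done.
[ServiceContract(Namespace = "http://example.com/CustomerService/2010/01")]
public interface ICustomerService
{
    [OperationContract(IsOneWay = false)]
    CustomerDto GetCustomer(int customerId);
}

[DataContract(Namespace = "http://example.com/CustomerService/2010/01")]
public class CustomerDto
{
    [DataMember(IsRequired = true, Order = 0)]
    public int Id { get; set; }

    [DataMember(IsRequired = false, Order = 1)]
    public string Name { get; set; }
}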
It did more though, such as structuring your solution so that service contracts were in one project, data contracts in another, and implementation in yet another. It created test projects (I believe). So: a very granular layout of the solution. I remember the simplest of services would result in about 6-7 projects in the solution. It was a little intimidating at first, until you poked through the code it generated.
Another cool feature it had (at the time, one many were asking for) was a way to do contract-first development. Given existing web service metadata, you could construct a new service solution.
Anyway, once it was completed, you essentially just had to provide implementations for the methods. Personally, I never really embraced it for services development. But, at the time, I appreciated it and often referred customers to it who were new to services development, because I knew it would get them off to a proper start.
To comment on your worries though...
1) That's correct, and it is not getting any resources to update it.
2) Actually, there is quite a bit of documentation. Just move over to the Home tab and you will see links to it.
3) Not sure about this. The code it generates is yours. You still have to compile it, and it's yours to maintain going forward. No different from any other code-generation tool (as far as I know).
4) Nope, it is not. Also, consider the time when this was developed: .NET Framework 2.0 - 3.x. There's been a lot added to WCF since then. There's also been some new guidance on service development. If you're using some of the newer features added in .NET Framework 3.5 SP1 and beyond (which you probably are), then this definitely is not something I would recommend using.
5) Again, that was one of the nice features (contract-first development). But that really wasn't the main idea. It was a tool to build out the framework for new services too. In fact, new service development was the original motivation for the tool, as I recall. Once you took the time to go through the dialogs, you had a really nice solution to start building on.

Related

Google AngularJS Framework - Worth the risk?

I have been asked to build a small web application for one of our clients and think it might be a good opportunity to try out a different framework for building web applications. Most of the applications we build are based on ASP.NET Web Forms; we have not yet done anything in an MVC architecture, but I am eager to start building web applications in a more structured manner with the right tools.
I have been researching things like ASP.NET MVC and the like, which look quite good, but I am wondering: is there anything to be said for using something like the Google AngularJS framework?
If possible I would still like to be able to write my server-side code in C#, and I have not researched AngularJS enough to know if this is even possible, although I assume I could use web services.
Has anyone had any experience with developing an app using AngularJS and if so, how was it and can you point me in the right direction for some tutorials?
We have been developing a port of a Swing fat-client application in AngularJS for the last couple of months and I think it is worth recommending. As far as learning resources go, check out the official project site (and be sure to read the tutorial) and the mailing list (the authors are very helpful).
The good stuff:
great testability
the two-way data binding is a very powerful feature, and it can be extremely helpful once you "get it"
IMO the AngularJS templates are much less brittle than using data- attributes or "special" CSS classes to mark elements that do something
it greatly reduces the need for using jquery plugins, because implementing that functionality in AngularJS is very easy (stuff like trees, tabs, accordions, etc.)
The bad stuff:
the learning curve seems pretty steep (I didn't have much of a problem, but I've seen some people struggle with it)
validations in AngularJS suck for the time being (a new implementation is on the way)
not all libraries/jquery plugins play nicely with Angular and usually you have to wrap them
the API is still being polished, so expect breaking changes (not a big problem with frequent releases and very good changelog, though)
performance is OK up until several thousand bindings on a page - most of the time this is not a limitation, but there are cases when this could be a problem.
Some pointers (if you ever decide to learn AngularJS):
some people really overuse widgets. In my experience, it's much better to use HTML "partials" + services, and only use widgets sporadically.
read source code of the library - it's the best place to learn stuff about angular
no DOM manipulation in services/controllers
if you use css classes to bind to events, you are doing it wrong
+1 #psycho's answer
AngularJS is a client-side framework, so you can use any language on the server. It's designed to work well together with jQuery, with a big emphasis on testing...
Here are some resources you might find useful:
TUTORIAL: http://docs.angularjs.org/#!/tutorial
API DOCS: http://docs.angularjs.org/#!/api
Developer Guide: http://docs.angularjs.org/#!/guide
Some example apps:
http://cburgdorf.github.com/angular-todo-app
http://www.fluid.ie/angular/calculate/
http://hookercookerman.github.com/angularjs-todos/
http://paul-hammant.github.com/StoryNavigator/navigator.html
Adapter for SenchaTouch: https://github.com/tigbro/sencha-touch-angular-adapter
Adapter for jQ Mobile: https://github.com/tigbro/jquery-mobile-angular-adapter
Feel free to ask any questions on the mailing list!
We are still in beta, but there are already several internal apps at Google, powered by AngularJS.
UPDATE (26th July 2012):
AngularJS v1.0 has been released.
For some public AngularJS-powered apps, check out http://builtwith.angularjs.org
IMHO developing something for a client which they may have difficulty supporting is unprofessional. You have to bear in mind that it will be difficult for your client to hire experienced Angular professionals, or train their own people to climb that "steep learning curve". Also, so far the documentation is not that great. Can you easily, in a few moments, answer the question, "How can I connect my shiny Angular app to my client's database?" Can your client sometime in the future easily grab some existing code and adapt it to their potential future needs? Be honest.
Compare plain old reliable LAMP development to Angular. For a "small web application" I really believe that a professional should give his client something maintainable and simple.
It's not to say that Angular isn't cool and sexy etc etc. But you have your client's future maintainability to think about in addition to the latest framework fad. Tread lightly would be my recommendation. Build your own website with Angular first and see what you think before you bestow your fabulous new skills on some trusting client.
I remember reading this SO thread a couple of months back with the same question in my mind. We decided to go ahead with AngularJS, and it's the best decision we've made on this project yet.
We are using AngularJS + ASP.NET MVC4 REST WebAPI.
Most probably, with such a nice client-side JavaScript MVC framework, you would only need a REST API layer interacting with the Business Logic Layer at the server side, and no MVC at the server side (ASP.NET MVC/Spring/Struts would feel like old memories).
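To illustrate how thin that server side can get, here's a minimal sketch of a Web API controller (OrdersController, Order and OrderLogic are invented names); the AngularJS client would call it with $http and get JSON back:

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical DTO and business-logic stand-ins.
public class Order
{
    public int Id { get; set; }
    public string Item { get; set; }
}

public class OrderLogic
{
    public IEnumerable<Order> GetOpenOrders()
    {
        yield return new Order { Id = 1, Item = "sample" };
    }
    public void Save(Order order) { /* persist via the BLL/DB */ }
}

// A thin REST endpoint: no server-side views, just JSON in and out.
public class OrdersController : ApiController
{
    // GET api/orders - Angular's $http receives this as JSON.
    public IEnumerable<Order> Get()
    {
        return new OrderLogic().GetOpenOrders();
    }

    // POST api/orders - Web API binds the JSON body onto the Order parameter.
    public void Post(Order order)
    {
        new OrderLogic().Save(order);
    }
}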
You will find Angular-UI a good add-on (especially ng-grid).
Soon after our project finishes, we might release some of the directives we wrote to the open-source world.
I have been researching the merits of AngularJS for many months to utilize as a core framework for a product I am creating.
There are many aspects of AJS that make it worthwhile to learn. Yes, there is a bit of a learning curve, but it's well worth it, especially if you wish to have more control over client-side capability.
jQuery manipulates the DOM at run time, whereas AJS situates itself within the JS rendering lifecycle. This allows you to teach the DOM new tricks by creating your own HTML elements and attributes. This is very, very powerful, as what you are able to do is introduce new element behaviors for whatever your purpose and need. In AJS these custom HTML attributes/elements are called Directives. With the ability to craft your own Directives, you are able to build functionality that current HTML doesn't have, pushing out capabilities that will run on all modern browsers now and into the future. Of the many approaches to inducing new behavior, AJS appears to be the safest direction one could take, due to how they have chosen to implement it.
There is a huge performance gain over JQuery in AJS.
I love the simplicity of the two-way data binding and the separation of concerns in their client-side MVC pattern, which, as pointed out above, provides great testability. The scope object is the glue between the View (HTML), the Model (your data) and your custom Controllers. The scope provides access to parent attributes and can be isolated at the sibling level, which is important for some reusable templates.
Templates can be created and reused across your application, and can contain 0 or more custom directives.
I have been using frameworks such as PRISM and MEF, but I am finding that AJS has most of the same features that exist in these .NET frameworks, all in a 29K footprint. There are rumors that they are working on lazy loading, which, if delivered, will provide for some very interesting LOB-type capabilities.
There are a number of UI frameworks being built for AJS, but you can wrap any 3rd-party control lib as needed, given a bit of effort. The trick is to ensure that when these 3rd-party controls induce changes, AJS is properly notified using its apply method.
If you combine AJS with MS TypeScript within VS 2012, it provides the ability to manage and build some very impressive projects which will work well for those who are more comfortable with projects within VS.
There are a ton of other reasons to look at AJS, but if you are considering frameworks such as Knockout, I'd highly recommend AJS instead, regardless of its perceived learning curve. Knockout is a library; AJS is a framework.
So far I think Google's Angular is great. I particularly like the data binding and dependency injection.
For other JS frameworks, there are knockout.js, backbone.js, etc.
Here are some posts:
angular.js example in backbone.js and/or knockout.js
I realise this post is old and you haven't gone with Angular, but I have a similar background to you, and I'm at the same point as you when asking the question.
So for the benefit of future visitors, some of the "risks" and links to resources I've found useful...
As many have already mentioned, Angular can have a very steep learning curve: "Not only me, but co-workers that I consider highly smart developers, have struggled with some of the basic concepts" (from AngularJS is amazing... and hard as hell, which also has some good tutorial links, which you asked for). And the version 2 stuff is looking more like Java, which wouldn't have been a problem with your C# background; in my opinion, Directives are hard enough to understand without verbose annotations and so on.
Angular performance can be poor in some cases, especially when using ng-repeat on a large number of elements; see Considering Speed and Slowness in AngularJS and Scalyr's experience. Others have mentioned that performance really degrades over ~2000 bound elements, but that's usually met with arguments that any app with more than that many elements probably isn't a good app. Keep it in mind, though, if you have legitimate use cases which call for many bound objects.
Angular is popular in terms of contributors, but ranks way way behind, say, jQuery in terms of production usage. Finding Angular developers might be tough, and jQuery or other developers converting have that "steep learning curve" to deal with.
Because Angular is young, you have no guarantee that it'll gain enough traction for your new Angular skills to be employable, or that your new application won't quickly become legacy code.
In v1.2 Angular doesn't support IE7 and below and v1.3 will drop IE8. For >=IE9, you need to follow some special coding practices.
The many JavaScript widgets, plugins and libraries which you might be used to using can't be used properly with Angular without heavy modification, and people often suggest re-writing your component in Angular anyway.
UPDATE March 2014: after 2 months attempting to build a non-trivial, densely functional one-page app: There are many versions of Angular, and it's hard to say which is the best or most stable. It will depend on what you're coding with it. I'm finding some bugs in Angular that are fixed by upgrading to a later version, and others fixed by regressing to an earlier one. I've never had to go version shopping like this with jQuery.
UPDATE May 2014: Young, broken tools. Batarang is great until it doesn't work. I can't trust it until they fix this one.
And finally, the three best resources I've found for learning this stuff
Todd Motto's ultimate guide, and
UPDATE April 2014: this eBook chapter is quite amazing. I didn't buy the rest of the book yet, but the concept is fantastic
A full non-trivial app written in Angular (the accompanying book is OK, but doesn't really talk about the non-trivial app as much as they appear to advertise on their site)
I would say yes to this and check out John Papa's hottowel implementation as a way to do it.

What is the best way to expose key classes/methods of my core API to 3rd party developers?

I have an application that I have designed and this app has a pretty decent core dll that contains an API that my main view's exe uses. I would like to allow other developers to access this core dll as well but I don't want them to have as much access as me since it would be a security risk. What is the standard way of exposing my core dll? Are there any particular design patterns I should be looking at?
I'm using C#
Edit: my question was a little vague so here is some clarification
My program is deployed as a windows exe which references the core.dll. I want other people to create extensions which dynamically get loaded into my program at start up by loading dlls in the /extensions directory. The 3rd party dlls will inherit/implement certain classes/interfaces in my core.dll. I only want to give 3rd parties limited access to my core but I want to give my exe additional access to the core.
I should mention that this is the first time I have written a program that imports DLLs. Perhaps this whole method of allowing users to add extensions is wrong.
How do I modify/expose my API for other developers?
To deliberately allow other developers to work with an API you've built touches on many things, which can be broken into two areas:
Resources (documentation, samples, etc.) that make it easier for them to understand (yes - basically an SDK).
Architecting, constructing and deploying your solution so that it's easy to actually work with.
Examples include:
By packaging it in a way that suits re-use.
By using naming conventions and member names that others can easily follow.
Documentation, samples.
Providing the source code (as open source) if you're happy for them to modify it.
I would like to allow other developers to access this core dll as well but I don't want them to have as much access as me since it would be a security risk.
Ok, so this gets us right into the second area - the actual solution.
The problem you have is not a trivial one - but it's also quite do-able; I'd suggest:
Looking into existing material on plugins (https://stackoverflow.com/questions/tagged/plugins+.net)
Personally, I've found using attributes and Dependency Inversion to be a great approach.
There's also stuff like the Managed Extensibility Framework which you should consider.
The big issue you face is that you're into serious architecture territory - the decisions you make now will have a profound impact on all aspects of the solution over time. So you might not be able to make an informed decision quickly. Still - you have to start somewhere :)
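To make the plugin idea concrete, here's a minimal reflection-based sketch; IExtension and the /extensions folder are hypothetical, yours to define in core.dll, and a real solution (or MEF) would add error handling and isolation:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical contract your core.dll would expose to 3rd parties;
// keep it narrow so extensions only see what you choose to publish.
public interface IExtension
{
    string Name { get; }
    void Initialize();
}

public static class ExtensionLoader
{
    // Scans the extensions folder, loads each dll, and instantiates every
    // public, concrete type implementing IExtension (parameterless ctor assumed).
    public static IExtension[] LoadAll(string folder)
    {
        return Directory.GetFiles(folder, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(a => a.GetExportedTypes())
            .Where(t => typeof(IExtension).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IExtension)Activator.CreateInstance(t))
            .ToArray();
    }
}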
The "design patterns" in terms of an API are more related to things like REST.
I don't want them to have as much access as me since it would be a security risk
Then I would (for the sake of maintenance) layer extra logic on top of the core DLL to prevent this.
The thing is, the "clients" call the API, not the Core DLL.
"How" the API accesses the Core DLL is under your full control. Just only expose operation contracts that you wish.
Since you're using C#, I would look at Microsoft's Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries and use FxCop to enforce many of them (latest version here). This won't be all you'll likely need, but it would help put you in the right direction.
Also, take a look at the freely available distillation of Framework Design Guidelines by the same author.

Morfik - suitability for medium-scale web enterprise applications

I'm investigating technologies with which to develop a medium-scale (up to 100 or 200 simultaneous users) database-driven web application, and someone suggested Morfik. However, outside of the Morfik company I can find practically zero community support - no active blogs, no tutorials, no videos, no books - and this is of some concern (especially when compared to C# / ASP.NET / nHibernate etc support). Deciding between Morfik (untried and not used widely AFAIK) and the other technologies I mentioned (tried, tested, used widely) is becoming a critical issue for my company.
Has anyone had success using Morfik in these kind of circumstances? What kind of performance did you achieve?
I've been a Morfik user for the last 2-3 months, trying to do a quite large project, and I totally understand your concern.
The community is small, Morfik developers though try to help you and answer almost all your questions. It was one of my concerns before purchasing it, but it's not a big deal actually.
However, it lacks documentation and tutorials. Yes, there is a chm help file, but it's outdated and lacking in many ways. There are not enough examples; you have to figure out a lot of stuff on your own. But they say enhancing the documentation is one of the Morfik team's first priorities for the upcoming release.
We chose not to use Firebird as the db (Morfik supports it natively), going with PostgreSQL over ODBC instead. There are issues to overcome there too. We had to dive in and write (override) our own security wrapper for PostgreSQL, etc. But overall, Morfik integrates with it quite fine. You should be prepared for small annoyances, though.
We chose to go with the Pascal version, as it is the major language the developers use. But, oh, I hate Pascal so much :) It had been 10+ years since I last used Pascal, and it can be really annoying with the quirky code editor of Morfik. I miss Visual Studio, or even Notepad++, as an editor!
Since we started our app, I've seen new components and examples released quite frequently. The Morfik team invested in a separate team that develops add-ons for Morfik, which is a good thing.
So, in terms of support (not community, but staff) you should not worry. It's still far from being a mature product, but it does the job. Our relationship with Morfik is a love-and-hate one. I am quite sure our big project will be successfully completed with Morfik, and I can do small enterprise solutions with Morfik very (I mean very) fast. But I would really think twice before using Morfik again for a big project like the one we are doing now.
I hope I make sense :)
You might try looking at www.morfikwatch.com, which is a blog dedicated to Morfik. There you will find links to a couple of Morfik user communities. You can then ask around.
We use Morfik for a variety of purposes, all intranet based. We are looking at migrating all in-house corporate applications, refactoring them into Morfik applications.
Morfik is a new product, and as such, the community is still growing. Although Morfik 1 has been around for a while, Morfik 2 is the first version that makes it easy to develop plugins and other third-party tools. Now small websites are starting to appear that create plugins and support Morfik (http://www.pannonrex.com/ for example).
Morfik is in its infancy yet offers a solution to be found nowhere else. I would recommend it highly. Just give it time and the developer community will appear, just as it did for Delphi and the rest.
best regards
Dalton Calford
Distributel Communications
I'm sorry; when I saw 100-200 simultaneous connections, I immediately thought you meant intranet. We average 300-450 concurrent users on our apps, so we do not consider it an internet-based app until you look at a possible 5,000+ users.
The design criteria for such a system is very different than a system with under 1000 users.
When you approach such a system, you are looking at a cloud configuration. As our company is a telecommunications company, and we are required by law to meet five-nines (99.999%) service for our customers, we use Firebird in all our back-end processes. Although we have used DB2, Oracle and other products in the past, Firebird has either been more reliable than or outperformed the others.
With the about to be released Firebird 2.5 (now in rc 2 if you wish to play with it), you can use firebird as it's own middle tier, with one database connecting to multiple other databases to perform both DML and DDL actions. You can have one Firebird database that has no tables whatsoever, just stored procedures, views etc. That database can then surface the data from multiple sources without the client application knowing. As the connection can be dynamically built within the stored procedures, you can have the backend databases change as needed without changing any front end code. This allows you to load balance, have multiple web servers share a single cluster of databases etc.
So, since Morfik supports Firebird intrinsically, I would say that yes, Morfik can scale well to a larger environment without trouble. As for Firebird support, it has one of the most active user communities on the web.
From the point of view of Morfik itself, it is a great way to generate a web-based UI while leveraging your existing developer base, without having to learn a series of new languages. And it currently lets the developer use the tools for n-tier development without getting in the way. I like that. I do not want a tool that tries to be everything and, in turn, does nothing well.
best regards
Dalton Calford
Distributel Communications
Something that I am very concerned about is 3rd party components. GWT has a fairly large collection of components. We make extensive use of data grids that need to be data aware and very rich, meaning it needs to be able to do grouping and sub groupings and master detail relationships.
You must also be able to create new groupings on the fly.
We also make use of pivot grids a lot, so we need them as well, and a quick google search doesn't show any components that can compare to what is already available in GWT.
It is a pity though, since the Morfik development environment seems very integrated. The GWT environment is a bit funny to me, since I am used to the Visual Studio and Delphi environments, so the way Eclipse works is a bit foreign to me, especially when adding new components to the different designers and editors in Eclipse.
Morfik is a quite limited web development environment for very basic web development. Even if it gives some benefits in the very beginning, in the long term it will tie you up.
I worked with Morfik for two years. You can undoubtedly build management applications fairly quickly, and design and maintenance is literally 3 clicks. But when you want to add slightly more robust functionality, it can become a headache, not counting the inconvenience of adjusting the reports; there is little documentation, and the majority of the components are paid.
If you want a not-very-robust app in a short time, Morfik is suitable; if you want something more, I do not recommend it.

Porting a PowerBuilder Application to .NET

Does anyone have any advice for migrating a PowerBuilder 10 business application to .NET?
My company is considering migrating a legacy PB application to .NET (C#) and I am just wondering if anyone has any experience - good or bad - that you would like to share.
The application is rather large with 10 PBL libraries, some PFC as well as custom frameworks. There are a large number of DLL calls being made as well. Finally, it uses a Microsoft SQL Server database.
We have discussed porting the "core" application code to .NET and then porting more advanced functionality across as-needed.
When I saw the title, I was just going to lurk, being a renowned PB bigot. Oh well. Thanks for the vote of confidence, Bernard.
My first suggestion would be to ditch the language of self-deception. If I eat half of a "lite" cheesecake, I'm still going to lose sight of my belt. A migration can take as little as 10 minutes. What you'll be doing is a rewrite. The time needs to be measured as a rewrite. The risk needs to be measured as a rewrite. And the design effort should be measured as a rewrite.
Yes, I said design effort. "Migrate" conjures up images of pumping code through some black box with a translation mirroring the original coming out the other side. Do you want to replicate the same design mistakes that were made back in 1994 that you've been living with for years? Even with excellent quality code, I'd guess that excellent design choices in PowerBuilder may be awful design choices in C#. Does a straight conversion neglect the power and strengths of the platform? Will you be living with the consequences of neglecting a good C# design for the next 15 years?
That rant aside, since you don't mention your motivation for moving "to .NET," it's hard to suggest what options you might have to mitigate the risk of a rewrite. If your management has simply decided that PowerBuilder developers smell bad and need to be expunged from the office, then good luck on the rewrite.
If you simply want to deploy Windows Forms, Web Forms, Assemblies or .NET web services, or to leverage the .NET libraries, then as Paul mentioned, moving to 11.0 or 11.5 could get you there, with an effort closer to a migration. (I'd suggest again reviewing and making sure you've got a good design for the new platform, particularly with Web Forms, but that effort should be significantly smaller than a rewrite.) If you want to deploy a WPF application, I know a year is quite a while to wait, but looking into PowerBuilder 12 might be worth the effort. Pulled off correctly, the WPF capability may put PowerBuilder into a unique and powerful position.
If a rewrite is guaranteed to be in your future (showers seem cheaper), you might want to phase the conversion. DataWindow.NET makes it possible to take your DataWindows with you. (My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.) Being able to drop in pre-existing, pre-tested, multi-row, scrollable, minimal-resource-consuming, printable, data-bound dynamic UI, generating minimal SQL with built-in logical record locking and database error conversion to events, into a new application is a big leg up.
You can also phase the transition by converting your PowerBuilder code to something that is consumable by a .NET application. As mentioned, you can produce COM objects with the PB 10 you've got, but will have to move to 11.0 or 11.5 to produce assemblies. The value of this may depend on how well partitioned your application is. If your business logic snakes through GUI events and functions instead of being partitioned out to non-visual objects (aka custom classes), the value of this may be questionable. Still, this is a design faux pas that should probably be fixed before a full conversion to C#; this is something that can be done while still maintaining the PowerBuilder application as a preliminary step to a phased and then a full conversion.
No doubt I'd rather see you stay with PowerBuilder. Failing that, I'd like to see you succeed. Just remember, once you take that first bite, you'll have to finish it.
Good luck finding that belt,
Terry.
I see you've mentioned moving "core components" to .NET to start. As you might guess by now, I think a staged approach is a wise decision. Now the definition of "core" may be debatable, but how about a contrary point of view. Food for thought? (Obviously, this was the wrong week to start a diet.) Based on where PB is right now, it would be hard to divide your application between PB and C# along application functionality (e.g. Accounts Receivable in PB, Accounts Payable in C#). A division that may work is GUI vs business logic. As mentioned before, pumping business logic out of PB into executables C# can consume is already possible. How about building the GUI in C#, with the DataWindows copied from PB and the business logic pumped out as COM objects or assemblies? Going the other way, to consume .NET assemblies in PB, you'll either have to move up to 11.x and migrate to Windows Forms, or put them in a COM callable wrapper.
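For the COM callable wrapper route, a minimal sketch with invented names; you'd compile this and register the assembly with regasm.exe so PB can instantiate it as a COM object:

using System;
using System.Runtime.InteropServices;

// Hypothetical business-logic contract exposed to COM.
[ComVisible(true)]
[Guid("0B1C6C2C-1111-4222-8333-444455556666")]
public interface IInvoiceCalc
{
    decimal Total(decimal net, decimal taxRate);
}

[ComVisible(true)]
[Guid("1C2D3E4F-AAAA-4BBB-8CCC-DDDDEEEEFFFF")]
[ClassInterface(ClassInterfaceType.None)]
public class InvoiceCalc : IInvoiceCalc
{
    // Plain .NET logic, callable from PowerBuilder once the assembly
    // is registered (regasm.exe /codebase InvoiceCalc.dll).
    public decimal Total(decimal net, decimal taxRate)
    {
        return net * (1 + taxRate);
    }
}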
Or, just train your C# developers in PowerBuilder. This just may be a rumour, but I hear the new PowerBuilder marketing tag line will be "So simple, even a C# developer can use it." ;-)
I think gbjbaanb gave you a good answer above.
Some other questions worth considering:
Is this PB10 app a new, well-written PB10 app, or was it one made in 1998 in PB4, then gradually converted to PB10 over the years? A well-written app should have some decent segregation between the business logic and the GUI, and you should be able to systematically port your code to .Net. At least, it should be a lot easier than if this is a legacy PB app, in which case it would be likely that you'd have tons of logic buried in buttons, datawindows, menus, and who knows what else. Not impossible, but more difficult to rework.
How well is the app running? If it's OK and stable, and doesn't need a lot of new features, then maybe it doesn't need rewriting. Or, as gbjbaanb said, you can put .Net wrappers around some pieces and then expose the functionality you need without a full rewrite. If, on the other hand, your app is cantankerous, nasty, not really satisfying business needs, and is making your users inefficient, then you might have a case for rewriting, or perhaps some serious refactoring and then some enhancements. There are PB guys serving sentences, er, I mean, making a living with the second scenario.
I'm not against rewrites if the software is exceedingly poor and is negatively affecting the company's business, but even then gradual adjustments and improvements are a less risky way to achieve system evolution.
Also, don't bail on this thread until after Terry Voth posts. He's on StackOverflow and is one of the top PB guys.
If it's rather large, you might have better results writing a front-end for it in .NET (or a web-based GUI) and using that to interact with your PB code, assuming you can expose its functionality as an API.
If you're using PB 9 or greater, you can generate COM or .NET DLLs, which you can then consume from a C# GUI. I'd recommend this over a rewrite in any new language.
Remember, rewrites are never a silver bullet, they always end up more time-consuming, difficult, and buggy than you first expect.
You might want to spend some time investigating PowerBuilder 11.5 (recently released) which adds some significant .NET integration.
Migrating to PowerBuilder 11.5 in order to make use of new .NET code will certainly be a lot easier than completely rewriting the entire app in C#.
I don't know if it's good or not, but check out this (commercial) product: PB.Net
My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.
I'd back that theory. I went through an attempted conversion from PB8 to Java on a project several years ago that failed miserably, even using the first-gen HTML DataWindow. My current employer is hell-bent on moving to C#, not using DataWindow.NET despite > 2K DWOs in our current product. I'm not looking forward to the day when the realization sets in. (The entire product consists of several user applications, more than a dozen services, and uses about 70 PBDs.)
OP - unless your application is unusually well-structured (originally written for EA Server maybe?), this will not be a port. Things work too differently in the PB & .NET environments for a plain port to work satisfactorily. I cannot stress this enough - if you're really using the PB event model, a "port" will likely be a failure.
You need to look at logic flow (intertwined UI & process), control flow (who owns the process or data right now), data access (UI, data layer, ??) and the parts of the DW event model you're using from code. If you're thinking about ASP.NET (as we are), your whole user interaction experience will have to change, and that will feed back into the other considerations.
Not directly related to code, build automation will change (we use PowerGen for consistent PB builds; MSBuild is very different) as will your installation & setup.
I think anyone considering this for a large app would be pretty crazy not to very seriously consider using the DataWindow.NET, so as not to lose their investment in the DWs.
PHB's at major corporations think that PowerBuilder is a toy language and that migrating to a new language like C# is trivial and can be done at low cost. In fact, migrating a PB application to any other language will cost at least as much as developing an entirely new application in the new language. The resulting app will generally lose functionality compared to the original and will result in user dissatisfaction. I have seen a number of attempts - all have failed because of the difficulty and the user issues.
If it ain't broke, don't fix it.
Yes, it's doable now without rewriting service components, period.
PB 12.5 and above target GUI and service-component migrations and integrations with C#.
Your migration/integration strategy may vary depending on your project scope, scalability, resources and timeline.
You can use these target and project types in PowerBuilder .NET.
Refer to this link: Sybase_PB .Net
Target type: corresponding project types
WPF Window Application: WPF Window Application, WCF Client Proxy, or REST Client Proxy
PB Assembly: WCF Client Proxy, REST Client Proxy, or PB Assembly
.NET Assembly: WCF Client Proxy, REST Client Proxy, or .NET Assembly
WCF Service: WCF Client Proxy, REST Client Proxy, or WCF Service

What am I missing about WCF?

I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head and with each iteration and version I thought their technologies were getting stronger and stronger and looked forward to each release.
However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML.
It just does not work out of the box, and I think it should. We found all of the above issues either while testing ourselves or when our products were out on site.
I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.
I suppose what I'm asking is:
Am I looking at WCF the wrong way?
What strengths does it have over the alternatives?
Under what circumstances should I choose to use WCF?
OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)
Some clarifications
My main pain points with WCF, I suppose, fall into the following areas:
While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
The size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration available is a good thing, but understanding it all, what to use and under which circumstances, can be difficult. Especially when deploying software on site with different security requirements, etc. Speaking of configuration, we've had to hide lots of ours in a back-end database because security and network people on-site were trying to change things in configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
I know the world moves on, I've moved on a number of times over the last (ahem 22 years I've been developing) and am actively using WCF, so don't get me wrong, I do understand what it's for and where it's heading.
I just think there should be simpler configuration/deployment options available, easier set-up and better management for configuration (SQL config provider maybe, rather than just the web.config/app.config files).
I use WCF all the time now and I share your pain. It seems like it was grossly over-engineered, but we are going to be stuck with it for a long, long time so I'm trying to learn it.
One thing I am certain about, XML sucks. I've had nothing but problems using XML to control it and have since switched to handling everything via code.
The concerns you listed were:
The size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration available is a good thing, but understanding it all, what to use and under which circumstances, can be difficult. Especially when deploying software on site with different security requirements, etc. Speaking of configuration, we've had to hide lots of ours in a back-end database because security and network people on-site were trying to change things in configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
here's my take:
(1) addressed a valid concern that customers had with ASMX. It was too wide-open, with no way to easily control it. The 8k limit is easily lifted if you know where to look. I guess you can count that as a surprise, but it's more of a one-time thing. Once you know about it, you can lift it and be done with it forever, if you choose.
(2) is also configurable.
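For what it's worth, here is roughly where those knobs for (1) and (2) live when set in code rather than config; IMyService and the address are invented for the sketch:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IMyService   // invented contract for the sketch
{
    [OperationContract]
    string Echo(string text);
}

public static class ClientSetup
{
    public static ChannelFactory<IMyService> CreateFactory()
    {
        // (1) Lift the 8K string quota and the overall message-size ceiling.
        var binding = new BasicHttpBinding { MaxReceivedMessageSize = 10 * 1024 * 1024 };
        binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024; // default is 8192

        var factory = new ChannelFactory<IMyService>(binding,
            new EndpointAddress("http://localhost:8000/MyService")); // invented address

        // (2) Lift the object-graph quota on the serializer.
        foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
        {
            var dcs = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
            if (dcs != null) dcs.MaxItemsInObjectGraph = int.MaxValue;
        }
        return factory;
    }
}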
(3) is known, but there are boilerplate ways to work around this. The StockTrader code for example, demonstrates a proven pattern. You can re-use the code in your own app. Not sure if this is fixed in WCF for .NET 4.0. I know it was an open request.
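The boilerplate for (3) usually has something like the following shape (reusing the hypothetical IMyService from the previous sketch; this is the general pattern, not the StockTrader code itself):

using System.ServiceModel;

// Check for a faulted channel before each call; abort it and build
// a fresh one from the factory instead of reusing the dead proxy.
public class ResilientClient
{
    private readonly ChannelFactory<IMyService> _factory;
    private IMyService _channel;

    public ResilientClient(ChannelFactory<IMyService> factory)
    {
        _factory = factory;
        _channel = factory.CreateChannel();
    }

    public string Echo(string text)
    {
        var comm = (ICommunicationObject)_channel;
        if (comm.State == CommunicationState.Faulted)
        {
            comm.Abort();                        // never Close() a faulted channel
            _channel = _factory.CreateChannel(); // recreate instead
        }
        return _channel.Echo(text);
    }
}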
(4) The config is a beast. This is a concern for a lot of people. The problem here is that WCF is so flexible, and config of all of that flexibility is exposed through xml files. It can be overwhelming. An approach that seems to work is to take it in small bites, as you need it.
(5) I don't understand.
I vastly prefer ASP.NET MVC and Web API over WCF. If I had to summarize WCF to a developer who was just being introduced to it, I would say, "WCF is a well-meaning attempt to replace over-engineered, Java EE style RPC development." Unfortunately, many of the decisions made require you to become an expert in configuring low level, unimportant items (message sizes, timeouts, uninteresting protocol elements, etc.) while abstracting absolutely critical pieces (URL design, parameter serialization, response serialization, etc.). The difference in productivity and aggravation between teams I know using WCF vs. Web API is night and day.
To come clean a little: I have always hated the core concept of .NET Remoting. I feel that developers need a thorough understanding of the resource structure of their application and how these resources are serialized. Furthermore, the use of the "POST" verb for simple data retrieval is worrisome in a read heavy application that needs to scale.
I'll address the rest of your issues after clarification. In the meantime, I can address your question on when you should choose to use WCF: always.
WCF is the replacement for the old ASMX technologies, including WSE. It is also the replacement for .NET Remoting. It is the only technology upon which high-level communications features in .NET will be based for the foreseeable future.
For example, consider Windows Azure. It was not inevitable that the new concept of "cloud computing" would have its communications aspects covered by WCF. Yet, WCF was flexible enough to be extended to cover those cases, with very little change in code.
If you're having trouble with WCF, then you'd do well to make sure Microsoft knows about it. WCF is the present and future of web service and other service-oriented development in .NET, so they've got a very strong incentive to listen to you and resolve your pain points. Either contact them directly through Connect, or ask questions here on SO (tag with WCF, please), and a lot of people will help you.
The biggest advantage of using WCF from a programmer's point of view: it separates the definition of exposed services (operations, contracts, etc.) from the protocol's specific details, unlike ASMX, where you expose a class as a web service directly in the code using attributes. Using a real example of mine: we were able to easily switch the transport protocol between web services and named pipes, whichever better suited the deployment and performance needs, without changing a line of code.
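The sketch below shows the idea in code, with an invented IMyService contract; in practice we did the same swap purely in config, which is the real win:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IMyService   // hypothetical contract; the point is it never changes
{
    [OperationContract]
    string Echo(string text);
}

public static class HostSetup
{
    // Only the binding/address pair changes; contract and implementation don't.
    public static ServiceHost Open(Type serviceType, bool useNamedPipes)
    {
        Binding binding = useNamedPipes
            ? (Binding)new NetNamedPipeBinding()  // fast, same-machine IPC
            : new BasicHttpBinding();             // interoperable web service
        string address = useNamedPipes
            ? "net.pipe://localhost/MyService"
            : "http://localhost:8000/MyService";

        var host = new ServiceHost(serviceType);
        host.AddServiceEndpoint(typeof(IMyService), binding, address);
        host.Open();
        return host;
    }
}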
WCF is intended for SOA methodologies. Working with it professionally is a nightmare. I delivered a SOA solution using WCF as the tool and, hell, hundreds of configurations and hidden tips! My past distributed solutions using old-style Web Services and Remoting were more stable. I've spent days working out the solution for the error "The underlying connection was closed: An unexpected error occurred", which makes no sense to happen for one method among 4 in the same contract. I'm very disappointed. It took me back to when .NET was first introduced with lots of promises and, when we got hands-on, hell, lots of problems came up!
To address the problem of the maintenance nightmare of application config, standards like UDDI and WS-Discovery exist; WS-Discovery will be supported by WCF in .NET 4.0.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Can you be more explicit? I think you are talking about service behavior configured in code.
You can easily code behavior extensions to configure what you are talking about in a config file instead of code, BUT I think that if Microsoft didn't do that, there is a good reason.
For example, a service with this behavior:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall, ConcurrencyMode=ConcurrencyMode.Single)]
The implementation knows that the instance is not shared between multiple threads, so it's developed differently than:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
In this case the service implementation should take care of concurrency problems.
The implementation is coupled with the attribute ServiceBehavior, so moving this behavior in a XML file is not a good idea.
What if you could change an InstanceContextMode.PerCall service to an InstanceContextMode.Single service inside the config file? You'd break the application!
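A contrived example of why: this implementation is only correct because PerCall hands it a fresh instance per request (ICounterService is invented for the illustration):

using System.ServiceModel;

[ServiceContract]
public interface ICounterService
{
    [OperationContract]
    int Process(int[] items);
}

// Correct only under PerCall: the field starts at zero for every request.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class CounterService : ICounterService
{
    private int _itemsProcessed; // fresh per call under PerCall

    public int Process(int[] items)
    {
        _itemsProcessed += items.Length;
        // Flip InstanceContextMode to Single from a config file and this
        // field silently accumulates across every caller's requests, so
        // the returned count is wrong: the config change breaks the code.
        return _itemsProcessed;
    }
}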
Given how you mention XML and SQL, it sounds like you are using WCF to build a web application or an actual web service (a service on the Web, and not just SOAP exchange).
It helps thinking about WCF as a replacement for .NET Remoting (or DCOM, CORBA etc), which also happens to support web services as one of the transports. Interfaces declared in assemblies, behavior of proxies, certain configuration options and other aspects of the framework that look unnatural and complicated from perspective of web apps - actually do work out of the box for DCOM-style systems of distributed objects.
To answer the question: no, you are not missing anything and using WCF for web applications is complicated, because WCF is not a framework for building web applications. Probably such framework can be built on top of it, but I would hate to see WCF itself changed to move into web realm.
