Should I go with C# / .Net OR C++ for my application?

I am working on a project which talks to SQL Server, and most of the back-end code is in C++.
This is an application which controls the flow of a few fluids while loading them into carriers. Some of the back-end modules, which talk to the controllers that in turn control the flow of fluids, are in C++. Since they have memory leaks and some other bugs, there has been an attempt to migrate them to .Net.
My understanding is that performance drops when .Net is used for back-end modules, so my opinion was NOT to convert these modules to .Net but to fix the issues in the C++ itself.
The code in question is an application which interacts with the firmware of the controllers: it basically sends commands and gets responses back. This code has no UI, and the same code also talks to SQL Server to update the data. Both are part of one exe.
.Net is believed to be a good fit when performance requirements are not rigorous. It would have been suitable if new code had to be written, especially code involving UI design. Another school of thought is that .Net is good for the higher layers of a multi-tier architecture, but not for the lower layers.
I would like to hear others' opinions from different perspectives. Some of the aspects to consider are:
speed
maintainability of code
migration related risks in future
etc.
Please comment from the angle of rewriting existing code. If we decide to go for it, it will be a one-to-one, line-by-line conversion from C++ to C#.

Quick Answer:
If you have capable C++ programmers who can use debuggers and who understand their application domain and how to use the controllers, it would probably be easier to do a careful review and fix the memory bugs and other issues. After all, time and effort have already been spent on the code, and unless it is trivial code, rewriting it in C# could introduce new logic errors.
Questions for the OP:
The code in discussion is driver code which interacts with firmware of controllers. It basically takes some commands and gets response from controllers. This code does not have UI and the same driver code interacts with SQL as well to update the data
Are you talking about user-mode software that you have named the "driver", or are you talking about a kernel-mode device driver?
It would help if you could provide more information about these controllers running firmware that control fluid flow. Does the C++ back-end connect to the controllers through RS232 (Serial)? Ethernet? USB? TCP/IP? PCI?
If you're connecting to the controller hardware via TCP/IP or RS232 (serial), C#/.NET is well equipped to handle the task. For anything else, like USB, PCI, raw Ethernet, etc., you're going to need a device driver, which has to be programmed in C or C++ depending on the requirements of the driver. Of course you can encapsulate the user-mode part that is in C++, or encapsulate direct calls to Win32, but that will add more development tasks to your project.
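For the serial case, here is a minimal C# sketch of the command/response loop using System.IO.Ports.SerialPort. It is only a sketch: the port name, baud settings and the "FLOW?" query are hypothetical placeholders that would have to come from the controller's documentation.

    using System;
    using System.IO.Ports;

    class ControllerLink
    {
        static void Main()
        {
            // Placeholder settings -- take the real ones from the controller's manual.
            using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
            {
                port.NewLine = "\r";      // many controllers terminate replies with CR
                port.ReadTimeout = 2000;  // ms; fail fast if the controller stops answering

                port.Open();
                port.WriteLine("FLOW?");          // hypothetical query command
                string reply = port.ReadLine();   // blocks until terminator or timeout
                Console.WriteLine("Controller replied: " + reply);
            }
        }
    }

The same shape works for TCP/IP with System.Net.Sockets.TcpClient in place of SerialPort; the framework handles the transport, and the remaining work is your command protocol.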

Apparently the only problem with the existing C++ code is memory leaks.
That seems to me an insufficient reason to rewrite it all in C#.
Instead I'd suggest running memory leak detection software to find the leaks.

Every language is special in its own way; you should find out which language suits the scenario best.

Don't rewrite a whole program in a different language because of a few bugs -- there will just be different ones in the new product, and the QA cycle will have to be restarted. I'd fix the bugs in the C++ program. If the issue is memory management, I'd strongly suggest looking into std::auto_ptr or std::tr1::shared_ptr, which will automatically delete memory for you when you're finished with it. If that's not an option, I'm sure that running the program through Valgrind or even paying for a commercial memory checker would be cheaper than rewriting the whole thing (in both time and money).

"language is special in its own way" man I need a hug, for real. Don't change languages because code is not written well...write better code and use resources available to you.

Related

Track Data Input Through Application Code and System Libraries

I am a security dude, and I have done extensive research on this one, and at this point I am looking for guidance on where to go next.
Also, sorry for the long post.
What I am trying to do at a high level is simple:
I am trying to input some data into a program, and "follow" this data, and track how it's processed, and where it ends up.
For example, if I input my login credentials into FileZilla, I want to track every memory reference that accesses them, and initiate traces to follow where that data went and which libraries it was sent to -- with bonus points if I can correlate it down to the network packet.
Right now I am focusing on the Windows platform, and I think my main question comes down to this:
Are there any good APIs to remote control a debugger that understand Windows forms and system libraries?
Here are the key attributes I have found so far:
The name of this analysis technique is "Dynamic Taint Analysis"
It's going to require a debugger or a profiler
Inspect.exe is a useful tool to find Windows UI elements that take input
The Windows automation framework in general may be useful (see the sketch after this list)
Automating debuggers seems to be a pain. The IDebugClient interface allows for richer data, but debuggers like IDA Pro or even CheatEngine have better memory analysis utilities
I am going to need to place memory breakpoints and track the references and registers that are associated with the input.
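To illustrate the UI Automation point above, here is a small hedged C# sketch (referencing UIAutomationClient and UIAutomationTypes) that finds an Edit control programmatically, the same kind of element Inspect.exe surfaces. The choice of window is arbitrary, and nothing here is specific to any one target application.

    using System;
    using System.Windows.Automation;

    class FindInputElement
    {
        static void Main()
        {
            // Take the first top-level window on the desktop (arbitrary choice),
            // then search its descendants for the first Edit control -- the same
            // elements Inspect.exe shows for input fields.
            AutomationElement window = AutomationElement.RootElement
                .FindFirst(TreeScope.Children, Condition.TrueCondition);

            AutomationElement edit = window.FindFirst(TreeScope.Descendants,
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Edit));

            Console.WriteLine(edit == null
                ? "No edit control found."
                : "Found input element: " + edit.Current.Name);
        }
    }

Locating the input element is of course only the front half of the problem; the taint tracking itself still needs a debugger or instrumentation framework attached to the target.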
Here are a collection of tools I have tried:
I have played with all of the following tools: WinDbg (awesome tool), IDA Pro, CheatEngine, x64dbg, vdb (a Python debugger), Intel's Pin, Valgrind, etc.
Next, I tried a few Dynamic Taint Analysis tools, but they don't support detecting .NET components or offer the other conveniences that the Windows debugging framework provides natively through utilities like Inspect.exe:
https://github.com/wmkhoo/taintgrind
http://bitblaze.cs.berkeley.edu/temu.html
I then tried writing my own C# program using the IDebugClient interface, but it's poorly documented, and the best project I could find was from this fellow, and it is 3 years old:
C# app to act like WINDBG's "step into" feature
I am willing to contribute code to an existing project that fits this use case, but at this point I don't even know where to start.
I feel like, as a whole, dynamic program analysis and debugging tools could use some love... I feel kind of stuck and don't know where to move from here. There are so many different tools and approaches to solving this problem, and all of them are lacking in some manner or another.
Anyway, I appreciate any direction or guidance. If you made it this far thanks!!
-Dave
If you insist on doing this at runtime, Valgrind or Pin might be your best bet. As I understand it (having never used them), you can configure these tools to interpret each machine instruction in an arbitrary way. You want to trace dataflows through machine instructions to track tainted data (reads of such data, followed by writes to registers or condition code bits). A complication will likely be tracing the origin of an offending instruction back to a program element (a DLL? a link module? a named subroutine?) so that you can complain appropriately.
In terms of effort, this is a task you might succeed at doing as an individual.
This should work for applications.
I suspect one of your problems will be tracing where the data goes in the OS. That's a lot harder, although the same principle applies; your difficulty will be getting the OS supplier to let you track instructions executed in the OS.
Doing this as runtime analysis has the downside that if a malicious application doesn't do anything bad on your particular execution, you won't find any problems. That's the classic shortcoming of dynamic analysis.
You could consider tracking the data at the source code level using classic compiler techniques. This requires that you have access to all the source code that might be involved (which is actually really hard if your application depends on a wide variety of libraries), that you have tools that can parse and track dataflows through source modules, and that these tools talk to each other across different languages (assembler, C, Java, SQL, HTML, even CSS...).
As static analysis, this has a chance of detecting an undesired dataflow no matter which execution occurs. Turing limitations mean that you likely cannot detect all such issues. That's the shortcoming of static analysis.
Building your own tools, or even integrating individual ones, to do this is likely outside what you can reasonably do as an individual. You'll need to find a uniform framework for building such tools. [Check my bio for one.]

A full operating system in C#

I saw this thread here. I was wondering if this is legit (it sounds like it) and what the drawbacks of doing this are. What does it entail to run it stand-alone on some architecture?
Thanks
Trying to create an operating system in a managed language is currently an "interesting research problem". This means that it seems possible, but there are still quite a few important issues that need to be resolved (for example, I wouldn't expect "managed windows" anytime soon).
For example, take a look at the Singularity project (also available at CodePlex). It still has some native parts, but very few of them. As far as I know, even the garbage collector is written in managed code (with some language extension that allows safe manipulation with pointers).
The trick is that even managed code is eventually compiled to native code. In .NET, the compilation is usually done by the JITter when you start the application. In Singularity, it is done in advance, so you run native code (but generated from managed code). Singularity has some other interesting aspects -- for example, processes communicate via messages (and cannot dynamically load code), which makes it possible to do some aggressive optimizations when generating native code.
There's an open source project that's trying to achieve exactly that.
It's called the "Managed Operating System Alliance". Mainly targeted as a framework (supplying users with a compiler, libraries, interfaces, tools and an example kernel), it will also feature a complete operating system kernel and small apps.
For further information:
Website: http://mosa-project.org/projects/mosa
IRC: #mosa on freenode
It is legit. The drawbacks are clear: this is a microkernel. It is going to be a while before your video adapter driver is fully managed as well. That takes acquiring critical mass, with many devs and manufacturers jumping on the bandwagon. Difficult, but it has happened before, with Linux as the obvious example.
This is being pursued by Microsoft as well. Singularity has been widely written about, and it has evolved into a secret research project named Midori. There have been enough leaks about it to know its goal; Wikipedia has an article about it. I think many of the devs that worked on the original CLR joined this project. Whether it will come to a good end is an open question, but if it does, a project backer like that is probably enough to get the critical mass rolling.
Microsoft's Singularity project is an operating system architecture framework which will allow people to write customizable operating systems, and Microsoft's future operating systems may well be based on Singularity.
.NET is a very powerful framework; it has evolved to contain everything from metadata attributes to LINQ, and it certainly frees us from bad pointer errors.
Just as with Windows Phone and the iPhone, people will be able to write customizable operating systems for devices.
Today most firewalls and (hardware) routers contain a customized Linux; that could be replaced with a Singularity kernel and your own business process.
The Singularity kernel is small; it looks like a perfect alternative to embedded Windows/Linux.
I don't think there is any drawback, except that it is a totally new system and it will take time for hardware vendors to supply devices compatible with it, but that will happen in the future.

Migrating MFC (VC6) application to .NET 2008

I want a very specific answer from developers who have done this, and I want to know how you resolved the problems you faced.
We have a very big MFC (VC6) 32-bit application that has existed for 10 years. Now we would like to migrate it to .NET as an unmanaged 64-bit application. We have some constraints: our UI should not change; we may need some managed .NET classes for easier development; we need to know how to add managed code alongside unmanaged code without affecting the architecture; lots of Win32 APIs might have changed to new APIs; the application should run on XP, Vista and Windows 7 machines without any change; these activities should not be time-consuming; and analysis of the new technologies is needed, as we are MFC programmers...
Please share your experience, and if you have any clear documents, they would be very helpful...
Note:
For clear understanding, I am re-phrasing some points again. We want to migrate our VC6 native-code 32-bit application to VS2008 (or VS2010) native code (unmanaged C++) with 64-bit support. The primary requirement is that there should not be any change to the existing UI. In addition, if .NET supports combining managed code with unmanaged code, we can try using some features like .NET remoting in an unmanaged C++ environment. One more important thing I would like to convey to all is that we are not going to start any coding in C# or from scratch.
We've done this in steps (VC6 -> VS2005 -> VS2008 -> (soon) VS2010) and most issues were related to changes in the API.
Unsafe string operations (strcpy vs. strcpy_s) that give out a TON of warning messages (use the _CRT_SECURE_NO_WARNINGS preprocessor define to silence them if you do not want to fix them all)
Changes in the prototypes for message handlers (LRESULT return values, changes in WPARAM and LPARAM, ...)
Deprecated APIs (you'll find them soon enough; I think there's a page on MSDN about that)
The compiler might be a bit more strict with regard to standard C++.
It's hard to go into specifics...
but have a look at this blog entry for some more info: http://insidercoding.com/post/2008/08/20/Migrating-from-VC6-to-VC9.aspx
Good luck.
Max.
What you're asking for isn't really a migration -- it's a nearly complete rewrite, at least of the entire UI. Unless your "very big ... application" has a very small, simple UI, I'd sit back and think hard. Read Joel's Things You Should Never Do.
Bottom line: this is almost certainly a really bad idea. If you do it anyway, you need to start out aware that it will be quite time consuming.
If you decide to go ahead with this, you almost certainly need to do it incrementally. To do that, I'd probably start by finding discrete "pieces" of functionality in your existing code and turning those pieces into ActiveX controls. Once you've gotten to the point that your main program is basically a fairly minimal framework that mostly instantiates and uses ActiveX controls, it becomes fairly easy to create a new framework in managed code that does roughly the same thing, delegating most of the real work to the existing ActiveX controls. Once you've done that, you can start to migrate individual controls to managed code as you see fit.
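As a hedged sketch of what that minimal managed framework can look like once the pieces are COM-visible: late-binding to one of the carved-out components from C#. The ProgID and member names here are hypothetical; a visual ActiveX control would instead be hosted through the AxHost wrappers that aximp.exe generates for Windows Forms.

    using System;

    class ManagedShell
    {
        static void Main()
        {
            // "MyApp.FlowControl" is a hypothetical ProgID for one of the COM
            // components carved out of the legacy C++ application.
            Type comType = Type.GetTypeFromProgID("MyApp.FlowControl");
            dynamic component = Activator.CreateInstance(comType);

            // Late-bound calls go through IDispatch into the unmanaged code;
            // the member names are whatever the component's IDL exposes.
            component.Initialize();
            Console.WriteLine("Status: " + component.Status);
        }
    }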
The point of doing things this way is to avoid having a long interval during which you're just writing new code that duplicates the old code, but can't do any updates for your customers. Nearly the only reasonable alternative I've seen work reasonably well is to have two separate development teams: one continues to work on and update the old code base at the same time as the second team starts the total rewrite from the ground up. If you have enough money, that approach can work out quite well.
Microsoft, for one example, has done things like this a number of times. Just for a couple of examples, years ago when there were rumors that Borland (then Microsoft's biggest competitor in programming language tools) was going to produce a TurboBASIC, Microsoft decided QuickBASIC (V2 at the time) really couldn't compete. They responded by setting up two teams: one to do as much upgrading as reasonable to QuickBASIC 2 to produce QuickBASIC 3. The second team did a total rewrite from the ground up, producing what became QuickBASIC 4.
Another example is Windows 95/98/... vs. Windows NT. They continued development of the existing Windows code base at the same time as they did a total rewrite from the ground up to produce Windows NT. Though they kept the two teams synchronized so UIs looked similar and such, the two were developed almost entirely separately from each other. Only after years of overlap between the two did they finally quit working on the old code base (and I sometimes wonder whether the crappiness of Windows Me wasn't at least partly intentional, to more or less force users to migrate to the NT code base).
Unless you can do that, however, nearly your only chance of success is to use an incremental approach.
As Jerry has mentioned, Joel talks about this in his blog post Things You Should Never Do.
Moreover, there are other things to be considered with this conversion:
What would you do with the existing VC++ 6.0 code base?
The qualification time (different OSes, SQL versions, etc.) needed to test the entire product after the changes are made.
How do you manage two code bases, with and without 64-bit support?
PS: Most importantly, by the time you fix and qualify your product in VS2010, I guess VS2012 will have been released :)

Converting C (not C++) to C#

I have some old C 32-bit DLLs that use Oracle's Pro*C precompiler (proc.exe) to expose a hundred or so sproc/function calls to an even older VB6 GUI, which references these functions through explicit Declare statements like so:
Declare Function ConnectToDB Lib "C:\windows\system32\EXTRACT32.DLL" (CXN As CXNdets, ERR As ERRdets) As Long
All the structures in the C header files are painstakingly replicated in the VB6 front end. At least the SQL is precompiled.
My question is: is it worth trying to impose a .Net interface (by conversion to an assembly) onto the C code and upgrade the VB6 to C#, or do you think I should just abandon the whole thing and start from scratch? As always, time is of the essence, hence my appeal for prior experience. I know that if I keep the Declares in .Net I will have to add lots of complicated marshalling decorations, which I'd like to avoid.
I've never had to convert C to .Net before, so my main question, if everything else is ignored, is: are there any porting limitations that make this inadvisable?
... At least the SQL is precompiled.
Is this the only reason you've got code in C? If so, my advice is to abandon that and simply rewrite the entire thing in C# (or even VB6 if that's what your app is written in) ... unless you've profiled it and can prove a measurable difference, you won't be getting any perf benefits from having sql/sproc calls in C. You will only get increased maintenance costs due to the complexity of having to maintain this interop bridge.
You should continue to use the DLL in .NET by creating an assembly around the Declares. That one assembly would probably go a little quicker in VB.NET than in C#. Then have your new UI reference that assembly. Once you have that going, you have bought yourself time to convert the C code into .NET. You do this by initially keeping the assembly and replacing the Declares with new .NET code. Soon you will have replaced everything and can refactor it to a different design.
The time killer is breaking behavior. The closer you can preserve the behavior of the original application, the faster the conversion will be. Remember, there is nothing wrong with referencing a traditional DLL; .NET is built on many layers of APIs which ultimately drill down to the traditional DLLs that continue to be used by Windows. Again, once you have the .NET UI working, you have more time to work on the core and bring everything into .NET.
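To make the "assembly around the Declares" idea concrete, here is a hedged C# sketch of what the VB6 Declare shown above might become. The struct fields are hypothetical placeholders; the real layouts must mirror the Pro*C headers field for field.

    using System.Runtime.InteropServices;

    // Layouts must match the C structs in the Pro*C headers exactly;
    // the fields below are invented for illustration.
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    struct CXNdets
    {
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
        public string Username;
        public int Flags;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct ERRdets
    {
        public int Code;
    }

    static class Extract32
    {
        // VB6 "As Long" is 32 bits, so the return type is int, not long.
        // VB6 Declare passes parameters ByRef by default, hence the ref modifiers.
        [DllImport("EXTRACT32.DLL", CallingConvention = CallingConvention.StdCall)]
        public static extern int ConnectToDB(ref CXNdets cxn, ref ERRdets err);
    }

This is the marshalling decoration the OP wants to avoid, but note there is one of these per function, written once, and the rest of the application never sees it.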
I always advise extreme caution before setting out to rewrite anything. If you use a decent tool to upgrade the VB6 to .NET, it will convert the Declare statements automatically, so don't stress about them too much!
It's a common pitfall to start out optimistically rewriting a large piece of software, make good early progress fixing some of the well-known flaws in the old architecture, and then get bogged down in the functionality that you've just been taking for granted for years. At this point your management begin to get twitchy and everything can get very uncomfortable. I have been there and it's no fun. Sounds like your users are already twitchy, which is a bad sign.
...and here's a blog post by a Microsofty that agrees with me:
Many companies I worked with in the early days of .NET looked first at rewriting, driven in part by a strong desire to improve the underlying architecture and code structures at the same time as they moved to .NET. Unfortunately many of those projects ran into difficulty and several were never completed. The problem they were trying to solve was too large
...and some official advice from Microsoft UK regarding migrating from VB6 to .NET
Performing a complete rewrite to .NET is far more costly and difficult to do well [than converting] ... we would only recommend this approach for a small number of situations.
Maybe your program is small, and you have a great understanding of the problems it solves, and you are great at estimating accurately and keeping your projects on track, and it will all be fine.
If you move from VB6 to VB.NET or C#, throw away the C code and use the appropriate ODP.NET classes or LINQ to access those stored procedures. Since the C layer (as I understand it) has no logic other than exposing the stored procedures, it won't be useful anymore after the switch. By doing that, you get (at least) much better exception handling (i.e. actual exceptions instead of magic return codes), maintainability, etc.
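For example, a minimal ODP.NET sketch of calling one of those procedures directly from C# -- the connection string, package, procedure and parameter names are all hypothetical:

    using System.Data;
    using Oracle.DataAccess.Client;  // or Oracle.ManagedDataAccess.Client for the managed driver

    class SprocCall
    {
        static void Main()
        {
            using (var cxn = new OracleConnection("User Id=app;Password=secret;Data Source=ORCL"))
            using (var cmd = new OracleCommand("PKG_EXTRACT.CONNECT_TO_DB", cxn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("p_user_id", OracleDbType.Int32).Value = 42;

                cxn.Open();
                cmd.ExecuteNonQuery();  // failures raise OracleException instead of magic return codes
            }
        }
    }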
See also: Automatically create C# wrapper classes around stored procedures

Porting a PowerBuilder Application to .NET

Does anyone have any advice for migrating a PowerBuilder 10 business application to .NET?
My company is considering migrating a legacy PB application to .NET (C#) and I am just wondering if anyone has any experience - good or bad - that you would like to share.
The application is rather large with 10 PBL libraries, some PFC as well as custom frameworks. There are a large number of DLL calls being made as well. Finally, it uses a Microsoft SQL Server database.
We have discussed porting the "core" application code to .NET and then porting more advanced functionality across as-needed.
When I saw the title, I was just going to lurk, being a renowned PB bigot. Oh well. Thanks for the vote of confidence, Bernard.
My first suggestion would be to ditch the language of self-deception. If I eat half of a "lite" cheesecake, I'm still going to lose sight of my belt. A migration can take as little as 10 minutes. What you'll be doing is a rewrite. The time needs to be measured as a rewrite. The risk needs to be measured as a rewrite. And the design effort should be measured as a rewrite.
Yes, I said design effort. "Migrate" conjures up images of pumping code through some black box with a translation mirroring the original coming out the other side. Do you want to replicate the same design mistakes that were made back in 1994 that you've been living with for years? Even with excellent quality code, I'd guess that excellent design choices in PowerBuilder may be awful design choices in C#. Does a straight conversion neglect the power and strengths of the platform? Will you be living with the consequences of neglecting a good C# design for the next 15 years?
That rant aside, since you don't mention your motivation for moving "to .NET," it's hard to suggest what options you might have to mitigate the risk of a rewrite. If your management has simply decided that PowerBuilder developers smell bad and need to be expunged from the office, then good luck on the rewrite.
If you simply want to deploy Windows Forms, Web Forms, Assemblies or .NET web services, or to leverage the .NET libraries, then as Paul mentioned, moving to 11.0 or 11.5 could get you there, with an effort closer to a migration. (I'd suggest again reviewing and making sure you've got a good design for the new platform, particularly with Web Forms, but that effort should be significantly smaller than a rewrite.) If you want to deploy a WPF application, I know a year is quite a while to wait, but looking into PowerBuilder 12 might be worth the effort. Pulled off correctly, the WPF capability may put PowerBuilder into a unique and powerful position.
If a rewrite is guaranteed to be in your future (showers seem cheaper), you might want to phase the conversion. DataWindow.NET makes it possible to take your DataWindows with you. (My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.) Being able to drop in pre-existing, pre-tested, multi-row, scrollable, minimal-resource-consuming, printable, data-bound dynamic UI, generating minimal SQL with built-in logical record locking and database error conversion to events, into a new application is a big leg up.
You can also phase the transition by converting your PowerBuilder code to something that is consumable by a .NET application. As mentioned, you can produce COM objects with the PB 10 you've got, but will have to move to 11.0 or 11.5 to produce assemblies. The value of this may depend on how well partitioned your application is. If your business logic snakes through GUI events and functions instead of being partitioned out to non-visual objects (aka custom classes), the value of this may be questionable. Still, this is a design faux pas that should probably be fixed before a full conversion to C#; this is something that can be done while still maintaining the PowerBuilder application as a preliminary step to a phased and then a full conversion.
No doubt I'd rather see you stay with PowerBuilder. Failing that, I'd like to see you succeed. Just remember, once you take that first bite, you'll have to finish it.
Good luck finding that belt,
Terry.
I see you've mentioned moving "core components" to .NET to start. As you might guess by now, I think a staged approach is a wise decision. Now, the definition of "core" may be debatable, but how about a contrary point of view -- food for thought? (Obviously, this was the wrong week to start a diet.) Based on where PB is right now, it would be hard to divide your application between PB and C# along lines of application functionality (e.g. Accounts Receivable in PB, Accounts Payable in C#). A division that may work is GUI vs. business logic. As mentioned before, pumping business logic out of PB into executables that C# can consume is already possible. How about building the GUI in C#, with the DataWindows copied from PB and the business logic pumped out as COM objects or assemblies? Going the other way, to consume .NET assemblies in PB, you'll either have to move up to 11.x and migrate to Windows Forms, or put them in a COM callable wrapper.
Or, just train your C# developers in PowerBuilder. This just may be a rumour, but I hear the new PowerBuilder marketing tag line will be "So simple, even a C# developer can use it." ;-)
I think gbjbaanb gave you a good answer above.
Some other questions worth considering:
Is this PB10 app a new, well-written PB10 app, or was it one made in 1998 in PB4, then gradually converted to PB10 over the years? A well-written app should have some decent segregation between the business logic and the GUI, and you should be able to systematically port your code to .Net. At least, it should be a lot easier than if this is a legacy PB app, in which case it would be likely that you'd have tons of logic buried in buttons, datawindows, menus, and who knows what else. Not impossible, but more difficult to rework.
How well is the app running? If it's OK and stable, and doesn't need a lot of new features, then maybe it doesn't need rewriting. Or, as gbjbaanb said, you can put .Net wrappers around some pieces and then expose the functionality you need without a full rewrite. If, on the other hand, your app is cantankerous, nasty, not really satisfying business needs, and is making your users inefficient, then you might have a case for rewriting, or perhaps some serious refactoring and then some enhancements. There are PB guys serving sentences, er, I mean, making a living with the second scenario.
I'm not against rewrites if the software is exceedingly poor and is negatively affecting the company's business, but even then gradual adjustments and improvements are a less risky way to achieve system evolution.
Also, don't bail on this thread until after Terry Voth posts. He's on StackOverflow and is one of the top PB guys.
If it's rather large, you might have better results writing a front end for it in .NET (or a web-based GUI) and using that to interact with your PB code, assuming you can expose its functionality as an API.
If you're using PB 9 or greater, you can generate COM or .NET DLLs that you can then consume from a C# GUI. I'd recommend this over a rewrite in any new language.
Remember, rewrites are never a silver bullet; they always end up more time-consuming, difficult, and buggy than you first expect.
You might want to spend some time investigating PowerBuilder 11.5 (recently released) which adds some significant .NET integration.
Migrating to PowerBuilder 11.5 in order to make use of new .NET code will certainly be a lot easier than completely rewriting the entire app in C#.
I don't know if it's good or not, but check out this (commercial) product: PB.Net
My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.
I'd back that theory. I went through an attempted conversion from PB8 to Java on a project several years ago that failed miserably, even using the first-gen HTML DataWindow. My current employer is hell-bent on moving to C# without using DataWindow.NET, despite > 2K DWOs in our current product. I'm not looking forward to the day when the realization sets in. (The entire product consists of several user applications and more than a dozen services, and uses about 70 PBDs.)
OP - unless your application is unusually well-structured (originally written for EA Server maybe?), this will not be a port. Things work too differently in the PB & .NET environments for a plain port to work satisfactorily. I cannot stress this enough - if you're really using the PB event model, a "port" will likely be a failure.
You need to look at logic flow (intertwined UI & process), control flow (who owns the process or data right now), data access (UI, data layer, ??) and the parts of the DW event model you're using from code. If you're thinking about ASP.NET (as we are), your whole user interaction experience will have to change, and that will feed back into the other considerations.
Not directly related to code, build automation will change (we use PowerGen for consistent PB builds; MSBuild is very different) as will your installation & setup.
I think anyone considering this for a large app would be pretty crazy not to very seriously consider using the DataWindow.NET, so as not to lose their investment in the DWs.
PHBs at major corporations think that PowerBuilder is a toy language and that migrating to a new language like C# is trivial and can be done at low cost. In fact, migrating a PB application to any other language will cost at least as much as developing an entirely new application in the new language. The resulting app will generally lose functionality compared to the original and will result in user dissatisfaction. I have seen a number of attempts -- all have failed because of the difficulty and the user issues.
If it ain't broke, don't fix it.
Yes, it's doable now without rewriting service components, period.
In PB 12.5 and later you can target GUI and service component migrations and integrations to C#.
Your migration/integration strategy may vary depending on your project's scope, scalability, resources and timeline.
You can use these target and project types in PowerBuilder .NET (refer to this link: Sybase_PB .Net):
Target: WPF Window Application -- Project types: WPF Window Application, WCF Client Proxy, or REST Client Proxy
Target: PB Assembly -- Project types: WCF Client Proxy, REST Client Proxy, or PB Assembly
Target: .NET Assembly -- Project types: WCF Client Proxy, REST Client Proxy, or .NET Assembly
Target: WCF Service -- Project types: WCF Client Proxy, REST Client Proxy, or WCF Service
