Has anyone seen a recent (and fairly balanced) study into the relative costs for software development using differing languages? I would particularly like to see the relative costs of Java vs. C# vs. Delphi.
No. But I'm not a fanatic of any of them; I work as a consultant and regularly recommend one of them depending on the requirements at hand. So here are some facts to make it easier to choose which one to use for the system development requirements you have.
In Common:
All of them are the best in their fields:
Java is the best Java development option.
C# is the best .NET development option.
Delphi is the best Native development option.
All of them have:
Worldwide third party vendors that provide quality components and libraries.
Worldwide well-known applications created with them (for example, the Delphi ones are perhaps the best known: Yahoo Go for TV!, Macromedia Captivate, TotalCommander, MediaMonkey, FinalBuilder, InstallAware, WinLicense, MySQL Administrator, etc).
All of them are:
Highly reliable technologies with RAD capabilities.
Supported by the best development aid tools (UML, etc).
Releasing major upgrades in their technologies (Java 7, .NET 4.0, and Delphi multiplatform).
Differences:
3 Things in which C# is better:
Quantity of developers available (compared with Java) that can code in it (*).
Has Microsoft behind.
Cheaper development costs in terms of wages (usually).
3 Things in which Java is better:
Quantity of developers available (compared with Delphi) that can code in it (*).
Portability.
Has Sun behind.
3 Things in which Delphi is better:
Speed (better performance for time critical systems).
Small footprint (the Delphi compiler generates really small binaries).
Has no explicit dependencies (easier distribution).
(*) More developers whose main language is something else can also code in C# than can code in Java, which means it's easier to find C# programmers. Maybe that explains why on many websites (like this one) and forums that allow multi-language questions, refactorings, etc., there are usually more C# questions and answers (84k vs. 50k). Also, since Java jobs are better paid in many parts of the world, common sense suggests that Java developers remain longer in their jobs than C# developers, which makes it more difficult to find Java developers available than C# ones. Of course there are other factors that can be discussed, but I'm pretty sure it's usually easier to find a C# programmer than a Java one.
I don't know about formal studies, but I've heard plenty of anecdotal accounts of companies taking an existing app in Delphi and rewriting it in C# for one reason or another. They all end about the same way.
It took twice as long to rewrite the program in C# as it did to originally write it in Delphi, even with all the business logic and domain knowledge already worked out and present in the form of the existing Delphi codebase. During this time, they were not releasing updates because all their resources were busy with the rewrite, allowing their competition to gain market share. And when it was done, it was a 1.0-level product. Glitchy, slow, and hard to use, often with severe backwards-compatibility issues.
The reasons why are open to interpretation, but I think one of the major factors that makes Delphi so much more productive than C# (or Java) is the language's look and feel.
It's common knowledge that a lot more work, time, and effort goes into maintaining and debugging modern programs than into initially writing them, but that principle isn't often followed to its logical conclusion. If what requires the most work is maintaining the program, then picking a language on the basis of it being easy or quick to write code in is premature optimization. You get a better return on your investment if you use a language that is easy to read and maintain. And when it comes to code readability, Pascal (Delphi) beats the C family hands-down.
It's not a formal study, but it's worth thinking about.
Quantitative comparisons of this sort would be very hard to pin down, due to the number of complicating variables: developers' experience with the language, suitability of the language to the target domain, developers' overall quality (it's been argued that non-mainstream languages attract higher quality developers), tradeoffs with the resulting product (is a Ruby or Python app as fast as a well-written Delphi or C++ app?), etc.
In Code Complete, 2nd Ed., Steve McConnell lists several languages in terms of their expressive power (how many lines of equivalent C code can be expressed in a single statement of each language). It's been suggested that programmers' productivity in lines of code is relatively constant regardless of language; if this is true, then the expressive power of each language should give a rough estimate of the relative cost of development in each language. From Table 4.1, page 62:
LANGUAGE        LEVEL RELATIVE TO C
C               1
C++             2.5
Fortran 95      2
Java            2.5
Perl            6
Python          6
Smalltalk       6
Visual Basic    4.5
He lists several sources for this table: Estimating Software Costs, Software Cost Estimation with Cocomo II, and "An Empirical Comparison of Seven Programming Languages" (by Prechelt, from IEEE Computer, October 2000).
The figures that McConnell cites are all several years old, but from what I understand, the Cocomo II model is ridiculously detailed, so current Cocomo II material may offer current numbers on Delphi and C#.
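To make the reasoning above concrete, here is a toy Python sketch (my own illustration, not McConnell's method) of how the table's multipliers translate into rough relative-effort figures; the 50,000-line project size is invented:

    # If productivity in lines of code is constant across languages, effort
    # scales with lines written, so relative cost is roughly 1 / language level.
    LEVELS = {"C": 1.0, "C++": 2.5, "Fortran 95": 2.0, "Java": 2.5,
              "Perl": 6.0, "Python": 6.0, "Smalltalk": 6.0, "Visual Basic": 4.5}

    C_EQUIVALENT_LINES = 50_000  # invented project size, in lines-of-C terms

    for lang, level in LEVELS.items():
        lines_written = C_EQUIVALENT_LINES / level
        print(f"{lang:13} ~{lines_written:7.0f} lines, ~{1 / level:.2f}x the cost of C")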
I've never looked for such a study, but I would be surprised if one existed. Any experiment designed to measure and compare actual development costs across multiple languages in a proper scientific fashion would be incredibly expensive.
To do it properly:
You would need to specify a number of non-trivial projects across a range of application domains.
You would need to form a number of project teams, each of which is composed of developers with significant experience in developing large-scale applications in one of the languages.
Then you would need to implement each project N times for each language ... to get a statistically significant result.
So you would need developer effort equivalent to project-size * nos-languages * nos-projects * nos-repetitions. Assuming a non-trivial project takes 1 man-year, that there are 5 projects and they are developed 5 times in each language (to give us a large enough sample size to be statistically significant), that is 25 experienced-developer-years ... say US$2 million to US$5 million ... PER LANGUAGE EXAMINED.
Those numbers are (obviously) pulled out of the air, but my point is that a proper scientific comparison of development costs for different languages would be prohibitively expensive.
And even then, the study results would not address:
on-going maintainability / maintenance costs,
how the numbers scale to large projects,
language specific effects of team size,
availability, costs and benefits of development tools for respective languages,
ease / difficulty of forming experienced teams for each language,
and so on.
And the results would be out-dated in 3 to 5 years.
Peopleware (by Tom DeMarco and Timothy Lister) contains a section in chapter eight about "Coding War Games". From 1984 to 1986, more than 600 developers participated.
In their analysis of the game results, they discovered that the programming language had little or no correlation to performance. (Only the assembly language participants got badly left behind by all the other language groups.)
The US Air Force was interested and found Delphi to be significantly faster to code in. Every year, a C++ speed-coding competition attracts teams; Delphi coders shadow this competition and almost always come in significantly faster with the required code.
After his career as head of Air Force development, my former boss Bill Roetzheim wrote the book on estimating software development costs. His choice, head and shoulders above all others, was Delphi. That was version 3/4. Rational used his estimating pattern. I still use it, and nothing better has been forthcoming in all the years I have been doing it.
Clarity of design and the power of expression in code don't change much over versions. Most of the time you are looking at visual changes and incremental augmentation. The core best practices from 20 years ago still apply; that is what makes architecture possible. We know what best practices look like because, at a certain scale, the code has to conform to a certain set of standard requirements that don't vary much. You can almost always make it nicer to use, or have fewer stupid awkward interfaces, but data, security/filtering, and workflow systems that make business systems work still use the same design patterns from the GoF Design Patterns book. And if small devices have taught us anything, it is that intense clarity and simplicity are to be commended. It matters a great deal how easy your code base is to use for its purpose.
All major environments can do domain design pretty well. The speed of the system and ease of development make Delphi and Node.js my two back-end preferences, but capability-wise C# and Java are both fine. If I were concerned about securing the environment against developers, I would go for C# in some situations, because it is harder for coders to violate the rules. But when I don't need those rules, i.e. most of the time, I prefer a more open environment that scales. When I don't much care about security I might prefer Node.js because it gets it done in a hurry, though most of the time I find it too easy to make mistakes in Node and I eventually need full test code coverage. Delphi is my first choice on balance.
"quality of developers" is hard to gauge. Java and (to a lesser extent) C# are used a lot in schools and universities to train pupils in the rudiments of programming.
Many of these end up on support forums with homework questions and would be counted somehow as being programmers (and poor ones) using that language.
In reality the vast majority of them never will write a single line of code after completing that mandatory introductory course, and most of the rest probably won't write in that language.
--- rant about "comparative studies" about programmer competence complete ---
As said, it's very hard if not impossible to give a cost comparison estimate for implementing something in different languages, at least as a general case to be used for all projects.
Some things lend themselves better to .NET, others to Java, and still others might be best done in Excel macros.
And development cost is usually only a fraction of the TCO of a system, especially if it's something like a multitier application running on application servers with databases, etc.
If the customer already has a server farm running IIS with MS SQL Server databases as a backend, selling them a Java EE application using an Oracle backend is doing them a disservice, even if that would otherwise be the most logical choice for the application.
The development cost might be lower, but the running cost for the customer would be a lot higher.
On the other end of the scale, a website for your corner grocery store who wants to start taking orders through the net for delivery in the neighbourhood shouldn't be implemented in either .NET or Java EE. The cost of the solution (especially hosting) would far outweigh the benefits.
A simple thing based on, for example, PHP or Rails would serve that customer far better. Hosting cost is reduced, and no expensive license fees for database and application servers need be paid; he might actually make some money using the resulting website.
Like others said, there are no studies... because nobody is interested. There is no measurable difference. Take almost any book on project management and you'll see no mention of languages barring examples, and no reliance on specific language features. Most of the time- and money-consuming problems over a project's life cycle are not coding problems, but architectural and organizational ones.
To put things in perspective: if you encounter a serious drawback of the language and have to implement some workaround, you lose a few hours. The person maintaining it might spend a few more hours understanding what you did there and why; a work day or two will be lost. Well, if you come to work in a wrong mood, you lose the same day. If you have problems understanding a requirement or communicating with colleagues and management, you easily lose weeks and months.
Recently I was talking with a friend of mine who had started a C++ class a couple months ago (his first exposure to programming). We got onto the topic of C# and .NET generally, and he made the point to me that he felt it was 'doomed' for all of the commonly-cited issues (low speed, breakable bytecode, etc). I agreed with him on all those issues, but I held back in saying it was doomed, only because I felt that, in time, languages like C# could instead become native code (if Microsoft so chose to change the implementation of .NET from a bytecode, JIT runtime environment to one which compiles directly to native code like your C++ program does).
My question is, am I out to lunch here? I mean, it may take a lot of work (and may break too many things), but there isn't some type of magical barrier which prevents C# code from being compiled natively (if one wanted to do it), right? There was a time where C++ was considered a very high-level language (which it still is, but not as much as in the past) yet now it's the bedrock (along with C) for Microsoft's native APIs. The idea that .NET could one day be on the same level as C++ in that respect seems only to be a matter of time and effort to me, not some fundamental flaw in the design of the language.
EDIT: I should add that if native compilation of .NET is possible, why does Microsoft choose not to go that route? Why have they chosen the JIT bytecode path?
Java uses bytecode. C#, while it uses IL as an intermediate step, has always compiled to native code. IL is never directly interpreted for execution as Java bytecode is. You can even pre-compile the IL before distribution, if you really want to (hint: performance is normally better in the long run if you don't).
The idea that C# is slow is laughable. Some of the winforms components are slow, but if you know what you're doing C# itself is a very speedy language. In this day and age it generally comes down to the algorithm anyway; language choice won't help you if you implement a bad bubble sort. If C# helps you use more efficient algorithms from a higher level (and in my experience it generally does) that will trump any of the other speed concerns.
Based on your edit, I also want to explain the (typical) compilation path again.
C# is compiled to IL. This IL is distributed to local machines. A user runs the program, and that program is then JIT-compiled to native code for that machine once. The next time the user runs the program on that machine they're running a fully-native app. There is also a JIT optimizer that can muddy things a bit, but that's the general picture.
The reason you do it this way is to allow individual machines to make compile-time optimizations appropriate to that machine. You end up with faster code on average than if you distributed the same fully-compiled app to everyone.
Regarding decompilation:
The first thing to note is that you can pre-compile to native code before distribution if you really want to. At this point you're close to the same level as if you had distributed a native app. However, that won't stop a determined individual.
It also largely misunderstands the economics at play. Yes, someone might perhaps reverse-engineer your work. But this assumes that all the value of the app is in the technology. It's very common for a programmer to over-value the code, and undervalue the execution of the product: interface design, marketing, connecting with users, and on-going innovation. If you do all of that right, a little extra competition will help you as much as it hurts by building up demand in your market. If you do it wrong, hiding your algorithm won't save you.
If you're more worried about your app showing up on warez sites, you're even more misguided. It'll show up there anyway. A much better strategy is to engage those users.
At the moment, the biggest impediment to adoption (imo) is that the framework redistributable has become mammoth in size. Hopefully they'll address that in a relatively near release.
Are you suggesting that the fact that C# is managed code is a design flaw??
C# can be natively compiled using a tool such as NGEN, and the Mono (open-source .NET framework) team has developed full AOT (ahead-of-time) compilation, which allows C# to run on the iPhone. However, full compilation is cumbersome because it destroys cross-platform compatibility, and some machine-specific optimizations cannot be done. It is also important to note that .NET is not an interpreted language, but a JIT (just-in-time) compiled language, which means it runs natively on the machine.
Dude, FYI, you can always compile your C# assemblies into a native image using ngen.exe.
And are you suggesting .NET is a flawed design? It was .NET that brought MS back into the game from their crappy VB 5, VB 6, COM days; it was one of their biggest bets.
Java does the same stuff, so are you suggesting Java too is a mistake?
Regarding big vendors: please note that .NET has been hugely successful across companies of all sizes (except for those open source guys, and nothing wrong with that). All these companies have made significant investments in the .NET framework.
And comparing C# speed with C++ is a crazy idea, in my opinion. Does C++ give you a managed environment along with a world-class, powerful framework?
And you can always obfuscate your assemblies if you are that paranoid about decompilation.
It's not about C++ vs. C#, or managed vs. unmanaged. Both are equally good and equally powerful in their own domains.
C# could be natively compiled but it is unlikely the base class library will ever go there. On the flip side, I really don't see much advantage to moving beyond JIT.
It certainly could, but the real question is why? I mean, sure, it can be slow(er), but most of the time any major differences in performance come down to design problems (wrong algorithms, thread contention, hogging resources, etc.) rather than issues with the language. As for the "breakable" bytecode, it doesn't really seem to be a huge concern of most companies, considering adoption rates.
What it really comes down to is, what's the best tool for the job? For some, it's C++; for others, Java; for others, C#, or Python, or Erlang.
Doomed? Because of supposed performance issues?
How about comparing the price of:
programmer's hour
hardware components
If you have performance issues with applications, it's much cheaper to just buy yourself better hardware, compared to the benefits you lose in switching from a higher-abstraction language to a lower one (and I don't have anything against C++; I've been a C++ developer for a long time).
How about comparing maintenance problems when trying to find memory leaks in C++ code compared to garbage-collected C# code?
"Hardware is Cheap, Programmers are Expensive": http://www.codinghorror.com/blog/archives/001198.html
Straightforward C#/Java code is extremely difficult to parallelize, multi-thread, etc. As a result, straightforward C#/Java code will use less and less of the total processing power on a box (because everything is now going to be multi-core).
Solving this problem in C# and Java is not simple. Mutability and side effects are key to getting stuff done in C# and Java, but this is exactly what makes multi-core, multi-threading programming so difficult.
Hence, functional programming is going to become increasingly important.
Given that the J2EE/Ruby world will splinter amongst many functional/multi-core approaches (just like it does for just about everything else) while the .NET folks will all use F#, this line of thinking suggests that F# will be huge in two years.
What is wrong with this line of thinking? Why isn't it obvious that F# is going to be huge?
(Edit) Larry O'Brien nails it in this blog post: "Language-wise, in my opinion, this is a set of exercises where C and C++ shine — at least until the multithreading stuff. Languages with list-processing idioms will also do well initially, but may have memory-consumption issues (especially functional languages). Ultimately, I think that the managed C-derived language (Java and C#) have the easiest route to Exercise 9 and then face serious shortcomings with Exercise 10, where concurrency issues play the major role. In my opinion, concurrency is going to become the central issue in professional development in the next half-decade, so these shortcomings are very significant."
Straightforward C#/Java code is extremely difficult to parallelize
Not if you use the Task Parallel Library.
Whether F# becomes huge depends on whether the cost/benefit is there, which is not at all obvious. If .NET developers find out that they can write some programs in 1/3 of the time using a functional rather than an imperative approach (which I think might be true for certain types of programs), then there should be some motivation for F# adoption.
Paul Graham's story of his use of Lisp in a startup company is illustrative of this process. Lisp provided them with a huge competitive advantage, yet Lisp didn't take over the world, not because it wasn't powerful, but for other reasons, like lack of library support. That F# has access to the .NET framework gives it a fighting chance.
http://www.paulgraham.com/avg.html
Functional programming is harder to get your head around than imperative programming. F# is a more difficult language in many ways than C#. Most 'developers' don't understand functional programming concepts, and can't even write very good imperative code in C#. So what hope have they got of writing good functional code in F#?
And when you consider that everybody on the team needs to be able to understand, write, debug, fix, etc. the code in the language you choose, it means you need a very strong team -- not just a very strong person -- to be able to use F# as it's meant to be used. And there aren't many of those around.
Add into the mix the fact that there's 8 years of C#/VB code lying around which is unlikely to be migrated, and that it's easier to create libraries that look and feel like the BCL in C#/VB as it's less easy to leak stuff like tuples etc. through public interfaces, and I reckon that F# will struggle to gain anything more than usage by strong teams on new, internal projects.
Ask a programming question on SO and specify you are using F#.
Ask the same question and specify you are using C#.
Compare the answers.
Using a novel programming language is a calculated risk--you may get more built-in functionality and syntactic sugar, but you will lose in community support, ability to hire programmers, and working around blind spots in the language.
I'm not picking on F#--every decision of programming language is a risk equation you need to work out. If people didn't take that risk on C#, we'd all still be using VB6 and C++ now. Same with those languages versus their predecessors. You have to decide for your project whether the advantages outweigh the risks.
There isn't really any case against F#, but you have to understand the context of the situation we, as developers, are in currently.
The multi-core architecture is still in its infancy. The major driving force to change single-threaded apps over to a parallel architecture is going to take time.
F# is very useful for a number of reasons, parallelism being one of them, but not the only one. Functional programming is also extremely useful for scientific purposes. This will be huge in many sectors.
However, the way you're wording your question it sounds like you're stipulating that F# is already fighting a losing battle, which is definitely not the case. I've talked to many scientists to date that are using things such as MatLab and the like, and a lot of them are already aware of F#, and excited about it.
Imperative code is easier to write than functional code. (At least, it's easier to find people who can write acceptable imperative code vs. functional code.)
Some things are inherently single threaded (UI* is the best known example).
There's a lot of C#/C/C++ code out there already, and multiple languages in the same project make management of said project more difficult.
Personally, I think functional languages will become increasingly mainstream (heck F# itself is a testament to that) but probably never gain lingua franca status like C/C++/Java/C#/etc. have or will.
*This is apparently a somewhat contentious view, so I'll expand upon it.
In a multi-threaded UI, each UI event is dispatched asynchronously and on a thread of its own (the actual management of threads is probably more sophisticated than just spinning up a new one, but that's not really germane to the discussion).
Imagine if this were the case, and you're rendering the window.
The window manager asks you to draw each element (expect a message, or a function invocation, for each element).
Each element reads its state (implicitly reading the application state)
Each element draws itself.
In step 2, every element MUST lock the application state (or the subset of it that affects display). Otherwise, in the event the application state is updated, the end result of rendering the window could include elements that reflect two different application states.
This is a lock convoy. Each render thread will lock, render, and then release; therefore they'll execute serially.
Now, imagine you're dealing with user input. First, users are pretty slow, so the benefits are going to be non-existent unless you're doing considerable work on the (one-of-many) UI thread; so I'm going to assume that's the case.
The Window Manager informs your application of user input (once again, message, function call, whatever).
Read what's needed from the application state. (Locks needed here)
Spend noticeable time crunching some numbers.
Update the application state. (Locks needed here as well)
All you've accomplished is changing from explicitly starting a worker thread, to implicitly doing so; at the cost of potential heisenbugs & deadlocks if you're loose with locking your state.
The fundamental problem with UI APIs is that you're dealing with a many-to-one (or one-to-many, depending on how you look at it) relationship. Either many windows, many elements, or many "input types" all affect a single window/surface. Some sort of synchronization has to happen, and when it does, multi-threading doesn't have any benefits anymore, just detractions.
What is wrong with this line of thinking? Why isn't it obvious that F# is going to be huge?
You're assuming the large masses actually write programs that need multicore support, or that the programs would gain significant benefit from being parallelized. That's a false assumption.
Server side, there's even less need for a parallel language.
Backend server processing already takes enough advantage of multicore/processor support by its inherent nature of being concurrent (work is divided among clients via threads and among processes, e.g. one app server, one DB server, one web container).
What is wrong with this line of reasoning is that it assumes that everything will work out as planned.
There is the assumption that it will be easier to write multithreaded programs in F# than in C#. Historically, functional languages have not done all that well in popularity, and there are probably reasons why. So while it is generally easier to multithread in functional than in imperative languages, it has generally been easier to find people to program in imperative languages. These two things balance out somehow, depending probably on the people and the app. It may or may not be easier in general to write multithreaded applications in functional or imperative languages; it's far too early to tell.
There's the assumption that people are going to demand efficient use of their 1K-core computers. There are always applications that can take as much CPU power as they can find, but these aren't the most common applications. Most applications people run are not in any way limited by CPU power nowadays, but by delays in local I/O, networking, and users. This may change, but it won't change at all quickly.
Also, it isn't clear that massively multicore processors are the wave of the future. There may be a fairly small market for them, so chip manufacturers will produce smaller chips instead of more powerful, or will devote resources to other things that we aren't clear about right now.
There's the assumption that F# is going to be the winner among functional languages. As the VS 2010 functional language, it does have a considerable advantage. However, the race hasn't really started yet, and there's plenty of time for things to happen. It may turn out that F#.NET isn't a particularly good language to program massively parallel PCs, and something else may come about. It may happen that Microsoft and .NET won't be all that important by the time 64-core processors routinely come on cheap laptops. (Shifts like that aren't all that common, but they tend to come by surprise. They also are more likely to happen during times of conceptual change, and a mass move to functional languages would qualify.)
Assume that F# will continue to be the primary Microsoft functional language, that Microsoft programming languages will continue to be dominant, that getting maximum performance out of massively multicore processors will be important, that all the technical arguments won't be swamped by business inertia, and that F# will be considerably better than C# and other such languages at writing massively multithreaded applications; then you're right. However, that's a whole lot of assumptions strung together and linked by plausible reasons rather than rigid logic.
You seem to be trying to predict the future as a combination of next year's stuff extended by one line of reasoning about technical issues, and that's extremely unreliable.
The only 'case' against it (if there is such a thing) is that most modern, professional developers use different tools (as well as different tool types). F# brings some unique tools to the game, and those of us who embrace them will find our respective, specialized talents useful for other programming tasks -- especially those tasks involving analysis and manipulation of large data collections.
What I've seen of F# truly amazes me. Every demo leaves me grinning because F# strikes me as an advanced edition of what I remember from 'the good old days' when functional programming was much more common (probably more 'old' than 'good' to be sure, but such is nostalgia).
I disagree with the premise that C# is hard to parallelize. It really isn't if you know what you're doing. Additionally, parallel linq will make this even easier. Do I wish there was an OpenMP for C#? Of course, but the tools C# provides allow you to do almost everything you want if you are good enough (and I feel one doesn't even have to be that good anymore).
There are a few things worth noting about technology:
The best technical solution is not always the most popular or most used. (And I don't know if F# is any good.) I would argue that SQL is the most used, most asked-for programming language by employers, and it's not a nice, cool, fast, friendly, fun language in my book. If the best technical solution always "won", how do you explain QWERTY keyboards? And if you ever read the "design" for x86/x64 processors... ;)
Azul with 864 core servers exclusively uses Java, and the trend is bigger servers in future.
If we assume the battle is between C# and F#, I do not think F# will win over C# within 2 years for the following reasons:
The features of F# that C# does not have are not features people have been missing. For instance, I think Seq.map, Seq.iter, Seq.fold and friends are great, but I don't see a majority of developers switching from foreach to these constructs.
The performance benefits of multicores are irrelevant to most existing programs, as only a few programs are CPU-bound. For those programs where performance really is important (e.g. video games), C++ will remain predominant, at least for the 2 years to come. It's not that hard to use threads in C++, assuming one avoids side-effects (which you can decide to do even in C++). Isn't that what Google is doing?
For F# to become really big, I think it has to become one of the main languages used in teaching, the way Java has been. This is actually quite likely, seeing how the academic world is fond of functional languages. Should that happen, I don't think the effects will become visible before 5 years.
Linking assemblies together is not trivial.
F# is tied to the .NET typing system, which is significantly more restricted than, say, PHP. It's probably right up there with Java in the land of Strong Typing. That makes the entry barrier pretty high for someone who isn't intimately familiar with the .NET types.
Single-assignment code is hard to write; most algorithms use the typical Turing machine model, which permits multiple assignments, and single-assignment code does not neatly fit into a good model for How We Think. At least, for those of us who write Turing Machine code for a living. Perhaps it's different for those of us who write Lambda Machine code out there...
F# is tied to Microsoft, which produces knee-jerk hate from many geeks. They would rather use Lisp or Scheme or Haskell (or whatever). Although Mono supports it, it didn't support it well the last time I tried to work on Mono (it was quite slow).
Most of our existing code lives in imperative, sequential code bases, and most of our applications are oriented around imperative, sequential operations with side-effects.
Which is all to say, pure functional approaches do not neatly model the real world, so F# is going to have to carve out a niche where it easily manages real-world problems. It cannot be a general purpose language, because it does not neatly solve general purpose problems.
I wonder why a C++, C#, or Java developer would want to learn a dynamic language.
Assuming the company won't switch its main development language from C++/C#/Java to a dynamic one what use is there for a dynamic language?
What helper tasks can be done by the dynamic languages faster or better after only a few days of learning than with the static language that you have been using for several years?
Update
After seeing the first few responses it is clear that there are two issues.
My main interest would be something that is justifiable to the employer as an expense.
That is, I am looking for justifications for the employer to finance the learning of a dynamic language. Aside from the obvious point that the employee will have a broader view, employers are usually looking for some "real" benefit.
A lot of times some quick task comes up that isn't part of the main software you are developing. Sometimes the task is a one-off, i.e. compare this file to the database and let me know the differences. It is a lot easier to do text parsing in Perl/Ruby/Python than it is in Java or C# (partially because it is a lot easier to use regular expressions). It will probably take a lot less time to parse the text file using Perl/Ruby/Python (or maybe even VBScript, cringe) and then load it into the database than it would to create a Java/C# program to do it or to do it by hand.
Also, due to the ease with which most of the dynamic languages parse text, they are great for code generation. Sure, your final project must be in C#/Java/Transact-SQL, but instead of cutting and pasting 100 times, finding errors, and cutting and pasting another 100 times, it is often (but not always) easier just to use a code generator.
A recent example at work is we needed to get data from one accounting system into our accounting system. The system has an import format, but the old system had a completely different format (fixed-width, although some things had to be matched). The task is not to create a program to migrate the data over and over again; it is to shove the data into our system and then maintain it there going forward. So even though we are a C# and SQL Server shop, I used Python to convert the data into the format that could be imported by our application. Ultimately it doesn't matter that I used Python; it matters that the data is in the system. My boss was pretty impressed.
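As a hypothetical sketch of that kind of one-off conversion (the field names and widths below are invented; the real layout belonged to the old system), the whole job boils down to something like:

    import csv

    # (name, width) pairs describing the old system's fixed-width layout
    FIELDS = [("account", 10), ("date", 8), ("amount", 12), ("memo", 30)]

    def parse_fixed_width(line):
        record, pos = {}, 0
        for name, width in FIELDS:
            record[name] = line[pos:pos + width].strip()
            pos += width
        return record

    with open("legacy_export.txt") as src, open("import.csv", "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=[name for name, _ in FIELDS])
        writer.writeheader()
        for line in src:
            writer.writerow(parse_fixed_width(line.rstrip("\n")))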
Where I often see dynamic languages used is testing. It is much easier to create a Python/Perl/Ruby program to link to a web service and throw some data against it than it is to create the equivalent Java program. You can also use Python to hit command line programs, generate a ton of garbage (but still valid) test data, etc., quite easily.
The other thing that dynamic languages are big on is code generation: creating the C#/C++/Java code. Some examples follow:
The first code generation task I often see is people using dynamic languages to maintain constants in the system. Instead of hand-coding a bunch of enums, a dynamic language can be used to fairly easily parse a text file and create the Java/C# code with the enums (see the sketch after these examples).
SQL is a whole other ball game, but often you get better performance by cutting and pasting 100 times instead of trying to use a function (due to caching of execution plans, or because putting complicated logic in a function causes you to go row by row instead of operating on a set). In fact, it is quite useful to use the table definition to create certain stored procedures automatically.
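For the enum example above, a minimal Python sketch might look like this (constants.txt, the one-name-per-line input format, and the enum name are all invented for illustration):

    # Generate a C# enum from a text file listing one constant per line.
    with open("constants.txt") as f:
        names = [line.strip() for line in f if line.strip()]

    members = ",\n".join(f"    {name} = {i}" for i, name in enumerate(names))
    print(f"public enum SystemConstants\n{{\n{members}\n}}")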
It is always better to get buy-in for a code generator. But even if you don't, is it more fun to spend time cutting/pasting, or is it more fun to create a Perl/Python/Ruby script once and then have that generate the code? If it takes you hours to hand-code something but less time to create a code generator, then even if you use it once you have saved time, and hence money. If it takes you longer to create a code generator than it takes to hand-code once, but you know you will have to update the code more than once, it may still make sense. If it takes you 2 hours to hand-code and 4 hours to do the generator, but you know you'll have to hand-code equivalent work another 5 or 6 times, then it is obviously better to create the generator.
Also, some things are easier with dynamic languages than with Java/C#/C/C++. In particular, regular expressions come to mind. If you start using regular expressions in Perl and realize their value, you may suddenly start making use of the Java regular expression library, if you haven't before. If you have, then there may be something else.
I will leave you with one last example of a task that would have been great for a dynamic language. My work mate had to take a directory full of files and burn them to various CDs for various customers. There were a few customers but a lot of files, and you had to look in them to see what they were. He did this task by hand... A Java/C# program would have saved time, but for one time, and with all the development overhead, it isn't worth it. However, slapping something together in Perl/Python/Ruby probably would have been worth it. He spent several hours doing it; it would have taken less than one to create the Python script to inspect each file, match which customer it goes to, and then move the file to the appropriate place... Again, not part of the standard job, but the task came up as a one-off. Is it better to do it yourself, spend the larger amount of time making Java/C# do the task, or spend a much smaller amount of time doing it in Python/Perl/Ruby? If you are using C or C++ the point is even more dramatic due to the extra concerns of programming in C or C++ (pointers, no array bounds checking, etc.).
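A rough sketch of what that script might have looked like; the directory names and the idea of matching customers by a marker string inside each file are my guesses at the details:

    import shutil
    from pathlib import Path

    # marker found inside a file -> customer directory it belongs to
    CUSTOMERS = {"ACME-": "acme", "GLOBEX-": "globex"}

    for path in Path("incoming").iterdir():
        text = path.read_text(errors="ignore")
        for marker, customer in CUSTOMERS.items():
            if marker in text:
                dest = Path("sorted") / customer
                dest.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), dest / path.name)
                break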
Let me turn your question on its head by asking what use it is to an American English speaker to learn another language?
The languages we speak (and those we program in) inform the way we think. This can happen on a fundamental level, such as C++ versus JavaScript versus Lisp, or on an implementation level, in which a Ruby construct provides a eureka moment for a solution in your "real job."
Speaking of your real job, if the market goes south and your employer decides to "right size" you, how do you think you'll stack up against a guy who is flexible because he's written software in tens of languages, instead of your limited exposure? All things being equal, I think the answer is clear.
Finally, you program for a living because you love programming... right?
I don't think anyone has mentioned this yet. Learning a new language can be fun! Surely that's a good enough reason to try something new.
I primarily program in Java and C# but use dynamic languages (ruby/perl) to support smoother deployment, kicking off OS tasks, automated reporting, some log parsing, etc.
After a short time learning and experimenting with ruby or perl you should be able to write some regex manipulating scripts that can alter data formats or grab information from logs. An example of a small ruby/perl script that could be written quickly would be a script to parse a very large log file and report out only a few events of interest in either a human readable format or a csv format.
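For instance, a minimal sketch of such a log filter; the timestamp-then-ERROR line format is an assumed example, not any particular log standard:

    import csv
    import re
    import sys

    PATTERN = re.compile(r"^(?P<time>\S+) ERROR (?P<message>.*)$")

    writer = csv.writer(sys.stdout)
    writer.writerow(["time", "message"])
    with open("app.log") as log:
        for line in log:
            m = PATTERN.match(line)
            if m:
                writer.writerow([m.group("time"), m.group("message")])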
Also, having experience with a variety of different programming languages should help you think of new ways to tackle problems in more structured languages like Java, C++, and C#.
One big reason to learn Perl or Ruby is to help you automate any complicated tasks that you have to do over and over.
Or if you have to analyse the contents of log files and you need more munging than is available using grep, sed, etc.
Also using other languages, e.g. Ruby, that don't have much "setup cost" will let you quickly prototype ideas before implementing them in C++, Java, etc.
HTH
cheers,
Rob
Do you expect to work for this company forever? If you're ever out on the job market, perhaps some prospective employers will be aware of the Python paradox.
A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.
- Wayne Gretzky
Our industry is always changing. No language can be mainstream forever. To me, Java, C++, and .NET are where the puck is right now, and Python, Ruby, and Perl are where the puck is going to be. Decide for yourself if you wanna be good or great!
Paul Graham posted an article several years ago about why Python programmers made better Java programmers. (http://www.paulgraham.com/pypar.html)
Basically, regardless of whether the new language is relevant to the company's current methodology, learning a new language means learning new ideas. Someone who is willing to learn a language that isn't considered "business class" means that he is interested in programming, beyond just earning a paycheck.
To quote Paul's site:
And people don't learn Python because it will get them a job; they learn it because they genuinely like to program and aren't satisfied with the languages they already know.
Which makes them exactly the kind of programmers companies should want to hire. Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it. And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don't learn merely to get a job.
If an employer was willing to pay for the cost of learning a new language, chances are the people who volunteered to learn (assuming it wasn't a mandatory class) would be the same people who are already on the "fast track".
When I first learned Python, I worked for a Java shop. Occasionally I'd have to do serious text-processing tasks which were much easier to do with quick Python scripts than Java programs. For example, if I had to parse a complex CSV file and figure out which of its rows corresponded to rows in our Oracle database, this was much easier to do with Python than Java.
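A schematic version of that task, with sqlite3 standing in for the Oracle connection (in practice it would be an Oracle DB-API driver) and invented table, column, and file names:

    import csv
    import sqlite3

    conn = sqlite3.connect("local_copy.db")
    cur = conn.execute("SELECT order_id FROM orders")
    known_ids = {row[0] for row in cur}

    with open("export.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["order_id"] not in known_ids:
                print("not in database:", row["order_id"])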
More than that, I found that learning Python made me a much better Java programmer; having learned many of the same concepts in another language I feel that I understand those concepts much better. And as for what makes Python easier than Java, you might check out this question: Java -> Python?
Edit: I wrote this before reading the update to the original question. See my other answer for a better answer to the updated question. I will leave this as is as a warning against being the fastest gun in the west =)
Over a decade ago, when I was learning the ways of the Computer, the Old Wise Men With Beards explained how C and C++ are the tools of the industry. No one used Pascal and only the foolhardy would risk their companies with assembler.
And of course, no one would even mention the awful slow ugly thing called Java. It will not be a tool for serious business.
So. Um. Replace the languages in the above story and perhaps you can predict the future. Perhaps you can't. Point is, Java will not be the Last Programming Language ever and also you will most likely switch employers as well. The future is charging at you 24 hours per day. Be prepared.
Learning new languages is good for you. Also, in some cases it can give you bragging rights for a long time. My first university course was in Scheme. So when people talk to me about the new language du jour, my response is something like "First-class functions? That's so last century."
And of course, you get more stuff done with a high-level language.
Learning a new language is a long-term process. In a couple of days you'll learn the basics, yes. But! As you probably know, the real practical applicability of any language is tied to the standard library and other available components. Learning how to use them efficiently requires a lot of hands-on experience.
Perhaps the only immediate short-term benefit is that developers learn to distinguish the nails that need a Python/Perl/Ruby hammer. And, if they are any good, they can then study some more (online, perhaps!) and become real experts.
The long-term benefits are easier to imagine:
The employee becomes a better developer. Better developer => better quality. We are living in a knowledge economy these days. It's wiser to invest in those brains that already work for you.
It is easier to adapt when the next big language emerges. It is very likely that the NBL will have many of the features present in today's scripting languages: first-class functions, closures, streams/generators, etc.
New market possibilities and ability to respond more quickly. Even if you are not writing Python, other people are. Your clients? Another vendor in the project? Perhaps a critical component was written in some other language? It will cost money and time, if you do not have people who can understand the code and interface with it.
Recruitment. If your company has a reputation of teaching new and interesting stuff to people, it will be easier to recruit the top people. Everyone is doing Java/C#/C++. It is not a very effective way to differentiate yourself in the job market.
Toward answering the updated question: it's a chicken-and-egg problem. The best way to justify an expense is to show how it reduces a cost somewhere else, so you may need to spend some extra/personal time to learn something first and build some kind of functional prototype.
Show your boss a demo like: "Hey, I did this thing, and it saves me this much time [or better yet, this much $$]. Imagine if everyone could use this; how much money would we save?"
Then, after they agree, explain that it is some other technology, and that it is worth the expense to get more training, and training for others, to do it better.
I have often found that learning another language, especially a dynamically typed one, can teach you things about other languages and make you an overall better programmer. Learning Ruby, for example, will teach you Object Oriented programming in ways Java won't, and vice versa. All in all, I believe that it is better to be a well-rounded programmer than stuck in a single language. It makes you more valuable to the companies/clients you work for.
Check out the answers to this thread:
https://stackoverflow.com/questions/76364/what-is-the-single-most-effective-thing-you-did-to-improve-your-programming-ski#84112
Learning new languages is about keeping an open mind and learning new ways of doing things.
I'm not sure if this is what you are looking for, but we write our main application in Java at the small company I work for, and have used Python to write smaller scripts quickly: backup software, temporary scripts to manipulate data and push out results. It just seems easier sometimes to sit down with Python and write a quick script than to mess with classes and stuff in Java.
Temp scripts that aren't going to stick around don't need a lot of design time wasted on them.
And I am lazy, but it is good to just learn as much as you can of course and see what features exist in other languages. Knowing more never hurts you in future career changes :)
It's all about broadening your horizons as a developer. If you limit yourself to only strong-typed languages, you may not end up the best programmer you could.
As for tasks, Python/Lua/Ruby/Perl are great for small simple tasks, like finding some files and renaming them. They also work great when paired with a framework (e.g. Rails, Django, Lua for Windows) for developing simple apps quickly. Hell, 37Signals is based on creating simple yet very useful apps in Ruby on Rails.
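As a trivial sketch of the find-and-rename kind of task mentioned above (the directory and suffix are invented):

    from pathlib import Path

    # Give every .txt file in a folder a .bak suffix.
    for path in Path("reports").glob("*.txt"):
        path.rename(path.with_name(path.name + ".bak"))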
They're useful for the "Quick Hack", that is, for plugging a gap in your main language with a quick (and potentially dirty) fix faster than it would take to develop the same in your main language. An example: a simple script in Perl to go through a large text file and replace all instances of an email address with another is trivial, with the time taken in the 10-minute range. Hacking a console app together to do the same in your main language would take multiples of that.
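That ten-minute Perl job looks about the same in any scripting language; here is a Python sketch with an invented file name and addresses:

    import re

    with open("contacts.txt") as f:
        text = f.read()

    text = re.sub(r"old@example\.com", "new@example.com", text)

    with open("contacts.txt", "w") as f:
        f.write(text)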
You also have the benefit that exposing yourself to additional languages broadens your abilities, and learning to attack problems from a different language's perspective can be as valuable as the language itself.
Finally, scripting languages are very useful in the realm of extension. Take Lua as an example. You can bolt a Lua interpreter into your app with very little overhead, and you now have a way to create rich scripting functionality that can be exposed to end users or altered and distributed quickly without requiring a rebuild of the entire app. This is used to great effect in many games, most notably World of Warcraft.
Personally I work on a Java app, but I couldn't get by without perl for some supporting scripts.
I've got scripts to quickly flip what db I'm pointing at, scripts to run build scripts, scripts to scrape data & compare stuff.
Sure I could do all that with java, or maybe shell scripts (I've got some of those too), but who wants to compile a class (making sure the classpath is set right etc) when you just need something quick and dirty. Knowing a scripting language can remove 90% of those boring/repetitive manual tasks.
Learning something with a flexible OOP system, like Lisp or Perl (see Moose), will allow you to better expand and understand your thoughts on software engineering. Ideally, every language has some unique facet (whether it be CLOS or some other technique) that enhances, extends and grows your abilities as a programmer.
If all you have is a hammer, every problem begins to look like a nail.
There are times when having a screwdriver or pair of pliers makes a complicated problem trivial.
Nobody asks contractors, carpenters, etc, "Why learn to use a screwdriver if i already have a hammer?". Really good contractors/carpenters have tons of tools and know how to use them well. All programmers should be doing the same thing, learning to use new tools and use them well.
But before we use any power tools, let's take a moment to talk about shop safety. Be sure to read, understand, and follow all the safety rules that come with your power tools. Doing so will greatly reduce the risk of personal injury. And remember this: there is no more important rule than to wear these: safety glasses.
-- Norm
I think the main benefits of dynamic languages can be boiled down to
Rapid development
Glue
The short design-code-test cycle time makes dynamic languages ideal for prototyping, tools, and quick & dirty one-off scripts. IMHO, the latter two can make a huge impact on a programmer's productivity. It amazes me how many people trudge through things manually instead of whipping up a tool to do it for them. I think it's because they don't have something like Perl in their toolbox.
The ability to interface with just about anything (other programs or languages, databases, etc.) makes it easy to reuse existing work and automate tasks that would otherwise need to be done manually.
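A tiny glue sketch in that spirit: drive another program and post-process its output. The df command and the 90% threshold are arbitrary choices for illustration:

    import subprocess

    result = subprocess.run(["df", "-P"], capture_output=True, text=True, check=True)
    for line in result.stdout.splitlines()[1:]:   # skip the header row
        fields = line.split()
        usage = int(fields[4].rstrip("%"))        # e.g. "87%" -> 87
        if usage > 90:
            print(f"{fields[5]} is {usage}% full")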
Don't tell your employer that you want to learn Ruby. Tell him you want to learn about the state of the art in web framework technologies. It just happens that the hottest ones are Django and Ruby on Rails.
I have found the more that I play with Ruby, the better I understand C#.
1) As you switch between these languages, you'll find that each of them has its own constructs and philosophies behind the problems it tries to solve. This will help you when finding the right tool for the job or the domain of a problem.
2) The role of the compiler (or interpreter, for some languages) becomes more prominent. How does Ruby's type system differ from the .NET/C# system? What problems does each of these solve? You'll find yourself understanding the constructs of the compiler, and its influence on the language, at a lower level.
3) Switching between Ruby and C# really helped me to understand Design Patterns better. I really suggest implementing common design patterns in a language like C# and then in a language like Ruby. It often helped me see through some of the compiler ceremony to the philosophy of a particular pattern.
4) A different community. C#, Java, Ruby, Python, etc all have different communities that can help engage your abilities. It is a great way to take your craft to the next level.
5) Last, but not least, because new languages are fun :)
Given the increasing focus on running dynamic languages (the da Vinci VM, etc.) on the JVM, and the increasing number of dynamic languages that do run on it (JRuby, Groovy, Jython), I think the use cases are just increasing. Some of the scenarios I found really benefited are:
Prototyping: use RoR or Grails to build quick prototypes, with the advantage of being able to run them on a standard app server and (maybe) reuse existing services, etc.
Testing: write unit tests much, much faster in dynamic languages.
Performance/automation test scripting: some of these tools are starting to allow the use of a standard dynamic language of choice to write the test scripts, instead of proprietary scripting languages. A side benefit might be being able to reuse some unit test code you've already written.
Philosophical issues aside, I know that I have gotten value from writing quick-and-dirty Ruby scripts to solve brute-force problems that Java was just too big for. Last year I had three separate directory structures that were all more-or-less the same, but with lots of differences among the files (the client hadn't heard of version control and I'll leave the rest to your imagination).
It would have taken a great deal of overhead to write an analyzer in Java, but in Ruby I had one working in about 40 minutes.
Often, dynamic languages (especially Python and Lua) are embedded in programs to add more plugin-like functionality, and because they are high-level languages that make it easy to add certain behavior where a low/mid-level language is not needed.
Lua specifically lacks all the low-level system calls because it was designed for ease of use in adding functionality within the program, not as a general programming language.
You should also consider learning a functional programming language like Scala. It has many of the advantages of Ruby, including a concise syntax and powerful features like closures. But it compiles to Java class files and integrates seamlessly into a Java stack, which may make it much easier for your employer to swallow.
Scala isn't dynamically typed, but its "implicit conversion" feature gives many, perhaps even all of the benefits of dynamic typing, while retaining many of the advantages of static typing.
Dynamic languages are fantastic for prototyping ideas. Often for performance reasons they won't work for permanent solutions or products. But, with languages like Python, which allow you to embed standard C/C++/Java inside them or vice versa, you can speed up the really critical bits but leave it glued together with the flexibility of a dynamic language.
...and so you get the best of both worlds. If you need to justify this in terms of why more people should learn these languages, just point out how much faster you can develop the same software and how much more robust the solution is (because debugging/fixing problems in dynamic languages is, in my experience, considerably easier!).
Knowing grep and Ruby made it possible to narrow down, and verify the fix for, an issue involving tons of Java exceptions on some production servers. Because I threw the solution together in Ruby, it was done (designed, implemented, tested, run, bug-fixed, re-run, enhanced, results analyzed) in an afternoon instead of a couple of days. I could have solved the same problem using an all-Java solution or a C# solution, but it most likely would have taken me longer.
Having dynamic language expertise also sometimes leads you to simpler solutions in less dynamic languages. In ruby, perl or python, you just intuitively reach for associative arrays (hashes, dictionaries, whatever word you want to use) for the smallest things, where you might be tempted to create a complex class hierarchy in a statically typed language when the problem doesn't necessarily demand it.
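For instance, counting word frequencies in C# needs nothing more than a dictionary (a minimal sketch with made-up input):

    using System;
    using System.Collections.Generic;

    class WordCount
    {
        static void Main()
        {
            var counts = new Dictionary<string, int>();
            foreach (var word in "the quick brown fox jumps over the lazy dog the".Split(' '))
            {
                int n;
                counts.TryGetValue(word, out n); // n is 0 if the word is new
                counts[word] = n + 1;
            }
            Console.WriteLine(counts["the"]); // 3
        }
    }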
Plus you can plug in most scripting languages into most runtimes. So it doesn't have to be either/or.
The "real benefit" that an employer could see is a better programmer who can implement solutions faster; however, you will not be able to provide any hard numbers to justify the expense and an employer will most likely have you work on what makes money now as opposed to having you work on things that make the future better.
The only time you can get training on the employer's dime is when they perceive a need for it and it's cheaper than hiring a new person who already has that skill set.
I learned Java in college, and then I was hired by a C# shop and have used that ever since. I spent my first week realizing that the two languages were almost identical, and the next two months figuring out the little differences. For the most part, I was noticing the things that Java had that C# doesn't, and thus was mostly frustrated. (Example: enum types which are full-fledged classes, not just integers with a fresh coat of paint.) I have since come to appreciate the C# world, but I can't say I knew Java well enough to really contrast the two, so I'm curious to get a community cross-section.
What are the relative merits and weaknesses of C# and Java? This includes everything from language structure to available IDEs and server software.
Comparing and contrasting the two languages can be quite difficult, as in many ways it is the associated libraries that you use with each language that best showcase its advantages over the other.
So I'll try to list out as many things I can remember or that have already been posted and note who I think has the advantage:
GUI development (thick or thin). C# combined with .NET is currently the better choice.
Automated data source binding. C# has a strong lead with LINQ; a wealth of 3rd party libraries also gives it the edge
SQL connections. Java
Auto-boxing. Both languages provide it, but C# properties provide a better design for setters and getters (a short sketch follows this list)
Annotation/Attributes. C# attributes are a stronger and clearer implementation
Memory management - the Java VM in all the testing I have done is far superior to the CLR
Garbage collection - Java is another clear winner here. Unmanaged code with the C#/.NET framework makes this a nightmare, especially when working with GUIs.
Generics - I believe the two languages are basically tied here... I've seen good points showing either side being better. My gut feeling is that Java is better, but nothing logical to base it on. Also, I've used C# generics a lot and Java generics only a few times...
Enumerations. Java all the way, C# implementation is borked as far as I'm concerned.
XML - Toss up here. The XML and serialization capabilities you get with .NET natively beat what you get with eclipse/Java out of the box. But there are lots of libraries for both products to help with XML... I've tried a few and was never really happy with any of them. I've stuck with native C# XML combined with some custom libraries I made on my own, and I'm used to it, so it's hard to give this a fair comparison at this point...
IDE - Eclipse is better than Visual Studio for non-GUI work. So Java wins for non-GUI and Visual Studio wins for GUI...
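To illustrate the property point from the list above (a sketch only; the Person class is invented):

    // One C# auto-implemented property replaces a field plus two accessor methods.
    public class Person
    {
        public string Name { get; set; }
    }

    // The rough Java equivalent:
    //   private String name;
    //   public String getName() { return name; }
    //   public void setName(String name) { this.name = name; }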
Those are all the items I can think of for the moment... I'm sure you could literally pick hundreds of items to compare and contrast between the two. Hopefully this list is a cross-section of the more commonly used features...
One difference is that C# can work with Windows better. The downside of this is that it doesn't work well with anything but Windows (except maybe with Mono, which I haven't tried).
Another thing to keep in mind, you may also want to compare their respective VMs.
Comparing the CLR and Java VM will give you another way to differentiate between the two.
For example, if doing heavy multithreading, the Java VM has a stronger memory model than the CLR (.NET's equivalent).
C# has a better GUI with WPF, something that Java has traditionally been poor at.
C# has LINQ which is quite good.
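For example, a small LINQ-to-Objects query looks like this (the data is invented for the sketch):

    using System;
    using System.Linq;

    class LinqDemo
    {
        static void Main()
        {
            int[] orders = { 120, 45, 300, 80 };
            var large = from o in orders
                        where o > 100
                        orderby o descending
                        select o;
            foreach (var o in large)
                Console.WriteLine(o); // 300, then 120
        }
    }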
Otherwise the 2 are practically the same - how do you think they created such a large class library so quickly when .NET first came out? Things have changed slightly since then, but fundamentally, C# could be called MS-Java.
Don't take this as anything more than an opinion, but personally I can't stand Java's GUI. It's just close enough to Windows but not quite, so it gets into an uncanny valley area where it's just really upsetting to me.
C# (and other .Net languages, I suppose) allow me to make programs that perfectly blend into Windows, and that makes me happy.
Of course, it's moot if we're not talking about developing a desktop application...
Java:
Enums in Java kick so much ass, it's not even funny.
Java supports generic variance
C#:
C# is no longer limited to Windows (Mono).
The lack of the keyword internal in Java is rather disappointing.
You said:
enum types which are full-fledged classes, not just integers with a fresh coat of paint
Have you actually looked at the output? If you compile an application with enums in it and then read the CIL, you'll see that an enum is actually a sealed class deriving from System.Enum.
Tools such as Red Gate's (formerly Lutz Roeder's) Reflector will disassemble it as close to the original C# as possible, so it may not be easily visible what is actually happening under the hood.
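For example, compiling something like this and opening it in Reflector or ildasm shows the sealed class (the CIL below is abbreviated, so treat it as approximate):

    // This C# declaration...
    public enum Color { Red, Green, Blue }

    // ...compiles to CIL roughly like:
    //   .class public sealed Color extends [mscorlib]System.Enum
    //   {
    //       .field public specialname rtspecialname int32 value__
    //       .field public static literal valuetype Color Red   = int32(0)
    //       .field public static literal valuetype Color Green = int32(1)
    //       .field public static literal valuetype Color Blue  = int32(2)
    //   }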
As Elizabeth Barrett Browning said: How do I love thee? Let me count the ways.
Please excuse the qualitative (vs. quantitative) aspect of this post.
Comparing these 2 languages (and their associated run-times) is very difficult. Comparisons can be at many levels and focus on many different aspects (such as GUI development mentioned in earlier posts). Preference between them is often personal and not just technical.
C# was originally based on Java (and the CLR on the JRE) but, IMHO, has, in general, gone beyond Java in its features, expressiveness and possibly utility. Being controlled by one company (vs. a committee), C# can move forward faster than Java can. The differences ebb and flow across releases with Java often playing catch up (such as the recent addition of lambdas to Java which C# has had for a long time). Neither language is a super-set of the other in all aspects as both have features (and foibles) the other lacks.
A detailed side-by-side comparison would likely take several hundred pages. But my net is that for most modern business-related programming tasks they are similar in power and utility. The most critical difference is probably in portability. Java runs on nearly all popular platforms, while C# runs mostly on Windows-based platforms (ignoring Mono, which has not been widely successful). Java, because of its portability, arguably has a larger developer community and thus more third party library and framework support.
If you feel the need to select between them, your best criteria is your platform of interest. If all your work will run only on Windows systems, IMHO, C#/CLR, with its richer language and its ability to directly interact with Windows' native APIs, is a clear winner. If you need cross system portability then Java/JRE is a clear winner.
PS. If you need more portable jobs skills, then IMHO Java is also a winner.
Has anyone ever written an application bigger than its .NET luggage?
People used to criticize VB6 for its 2 MB runtime but it rarely dwarfed the app it accompanied.
Today despite having Vista on my machine I had to download 35 MB of the 3.5 framework and reboot to then try out an app half that size.
When you factor in the decreased source code security, I wonder why anyone would develop a Windows application in .NET rather than in a language that allowed for the building of native executables.
What is superior about .NET that outshadows these drawbacks when it comes to writing applications to run on Windows?
PEOPLE: Please note that this was written in February, 2009, and what is said was appropriate at that time - yelling at me in late 2012 (3+ years later) is meaningless. :-)
Delphi has some considerable advantages for Win32. Not that .NET apps are inherently bad, but try:
running a .NET app (any version) on Win95/ME, where .NET doesn't exist (AFAIK)
distributing any small ( < 1.5 MB) .NET app (yes, floppy drives still exist)
providing any .NET app on a system that has no Internet access (yes, they exist)
distributing your .NET apps in countries without widespread high bandwidth
keep people from seeing your source code without spending a ton of dough (Reflection, anyone?)
Garbage collection in .NET might be really nice, but anyone who knows anything about programming can also handle manual allocation/deallocation of memory easily with Delphi, and GC is available with reference-counted interfaces. Isn't VB's pseudo-GC one of the things that brought about the proliferation of non-programmers? IMO, GC is one of the things that makes .NET dangerous, in the same way VB was dangerous - it makes things too easy and allows people who really have no clue what they're doing to write software that ends up making a mess. (And, before I get flamed to death here, it's great for the people who do know what they're doing as well, and so was VB; I'm just not so sure that the advantage to the skilled outweighs the hazards to us from the unskilled.)
Delphi Prism (AKA RemObjects Oxygene, formerly Chrome) provides the GC version of Delphi that those who need it are looking for, along with ASP.NET and WPF/Silverlight/CE, with the readability (and lack of curly braces) of Delphi. For those (like me) for whom Unicode support isn't a major factor, Delphi 2007 provides ASP.NET and VCL.NET, as well as native Win32 support. And, at a place like where I work, when workstations are only upgraded (at a minimum) every three years, and where we just got rid of the last Win95 machine because it wasn't a priority to upgrade, the .NET framework is an issue. (Especially with company proxy requirements only allowing Internet access to a handful of people, limiting bandwidth and download capabilities, and proper non-admin accounts with no USB devices allowed, all still running across a Netware network - no such thing as Windows Update, and never a virus so far because nothing gets in.)
I work some in .NET languages (C#, Delphi Prism), but the bread and butter, both full-time and on the side, comes from Win32 and Delphi.
Okay, I doubt this will persuade you as you don't want to be persuaded, but here's my view of the advantages of .NET over older technologies. I'm not going to claim that every advantage applies to every language you mentioned in the question, or that .NET is perfect, but:
A managed environment catches common errors earlier and gives new opportunities:
Strong typing avoids treating one type as another improperly
Garbage collection largely removes memory management concerns (not totally, I'll readily admit)
"Segmentation fault" usually translates to "NullReferenceException" - but in a much easier to debug manner!
No chance of buffer overruns (aside from the potential for CLR bugs, of course) - that immediately removes a big security concern
A declarative security model and a well-designed runtime allows code to be run under a variety of trust levels
JITting allows the CLR to take advantage of running on a 64 bit machine with no recompilation necessary (other than for some interop situations)
Future processor developments can also be targeted by the JITter, giving improvements with no work on the part of the developer (including no need to rebuild or distribute multiple versions).
Reflection allows for all kinds of things which are either impossible or hard in unmanaged environments
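A quick sketch of the null-dereference point (illustrative only):

    using System;

    class NullDemo
    {
        static void Main()
        {
            string s = null;
            try
            {
                Console.WriteLine(s.Length); // would be a raw segfault in native code
            }
            catch (NullReferenceException ex)
            {
                // A full stack trace with method names - much easier than a core dump.
                Console.WriteLine(ex);
            }
        }
    }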
A modern object-oriented framework:
Generics with execution time knowledge (as opposed to type erasure in Java; see the sketch after this list)
Reasonable threading support, with a new set of primitives (Parallel Extensions) coming in .NET 4.0
Internationalisation and Unicode support from the very start - just one string type to consider, for one thing :)
Windows Presentation Foundation provides a modern GUI framework, allowing for declarative design, good layout and animation support etc
Good support for interoperating with native libraries (P/Invoke is so much nicer than JNI, for example)
Exceptions are much more informative (and easier to deal with) than error codes
LINQ (in .NET 3.5) provides a lovely way of working with data in-process, as well as giving various options for working with databases, web services, LDAP etc.
Delegates allow a somewhat-functional style of coding from VB and C#; this is better in C# 3.0 and VB9 due to lambda expressions.
LINQ to XML is the nicest XML library I've used
Using Silverlight as an RIA framework allows you to share a lot of code between your lightweight client and other access methods
A mostly-good versioning story, including binding redirection, runtime preference etc
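As a sketch of the generics point above, type arguments in C# are still visible at execution time:

    using System;

    class Generics
    {
        static void PrintType<T>()
        {
            // In Java this information is erased at compile time;
            // in C# it is available to the running code.
            Console.WriteLine(typeof(T));
        }

        static void Main()
        {
            PrintType<int>();    // System.Int32
            PrintType<string>(); // System.String
        }
    }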
One framework targeted by multiple languages:
Simpler to share components than with (say) COM
Language choice can be driven by task and team experience. This will be particularly significant as of .NET 4.0: where a functional language is more appropriate, use F#; where a dynamic language is more appropriate, use IronRuby or IronPython; interoperate easily between all languages
Frankly, I just think C# is a much cleaner language than VB or C++. (I don't know Delphi and I've heard good things about it though - and hey, you can target .NET with Delphi now anyway.)
The upshot of most of this - and the soundbite, I guess - is that .NET allows faster development of more robust applications.
To address the two specific issues you mentioned in the question:
If your customer is running Vista, they already have .NET 3.0. If they're running XP SP2 or SP3, they probably have at least .NET 2.0. Yes, you have to download a newer version of the framework if you want to use it, but I view that as a pretty small issue. I don't think it makes any sense to compare the size of your application with the size of the framework. Do you compare the size of your application with the size of the operating system, or the size of your IDE?
I don't view decompilation as nearly such a problem as most people. You really need to think about what you're afraid of:
If you're afraid of people copying your actual code, it's usually a lot easier to code from scratch if you're aware of the basic design. Bear in mind that a decompiler won't give local variable names (assuming you don't distribute your PDB) or comments. If your original source code is only as easy to understand as the decompiled version, you have bigger problems than piracy.
If you're afraid of people bypassing your licensing and pirating your code, you should bear in mind how much effort has gone into stopping people from pirating native applications - and how ineffective it's been.
A lot of the use of .NET is on the server or for internal applications - in neither of these cases is decompilation an issue.
I've written more on this topic in this article about decompilation and obfuscation.
To name a few:
Automatic memory management, garbage collection
Type safety
Bounds checking
Access to thousands of classes that you will not have to create
OK, first up: no one language/platform is ever going to be universally superior.
Specialization will always provide a better use case in certain areas but general purpose languages will be applicable to more domains.
Multi-paradigm languages will suffer from complex boundary cases between paradigms e.g.
Type inference in any functional language that also allows OOP, when presented with subclasses
The grammar of C++ is astonishingly complex; this has a direct effect on the abilities of its tool chain.
The complexities of co/contra variance coupled with generics in c# are very hard to understand.
Older languages will have existing code bases that work, this is both positive (experience, well tested, extensive supporting literature) but also a negative (the resulting inertia against change, multiple different ways to do things leading to confusion for new entrants).
The selection/use of both languages and platforms is, as are most things, a balancing of the pros and cons.
In the following lists Delphi has some of the same pros and cons, but differs on many too.
Potential Negatives of .Net (if they are not an issue to you they aren't negatives)
Yes, you need the runtime deployed (and installed), and it's big.
If you wanted a language with multiple inheritance you're not going to get it
The BCL collections library has some serious flaws
Not widely supported outside the MS universe (mono is great but it lags the official implementation significantly)
Potential patent/copyright encumbrance
Jitted (ignoring NGen) start-up time is always going to be slower, and more memory will be needed.
There are more but these are the highlights.
Potential Positives (again if they don't matter to you)
A universal GC, with no reference counting that prevents certain data structures from being usable. I know of no widely used functional language without GC, and I can't think of a significant language of the last 10 years without at least optional GC. If you believe this is not a big deal, you would appear to be in a minority.
A large BCL (some parts not so good as others but it's very broad)
Vast numbers of languages (and a surprising number of paradigms) can be used and interact with each other (I use c#, f#, and C++/CLI within one wider application, using each where it makes most sense, while still being able to easily use aspects of one from another).
Full-featured introspection combined with declarative programming support. A wide variety of frameworks become much simpler and easier to use in this manner (see the sketch after this list).
Jitted - alterations in the underlying CPU architecture can be largely transparent, and sophisticated runtime optimizations not available to pre-compiled languages are possible (Java is doing rather better on this currently)
memory access safety
Fusion DLL loading and the GAC for system DLLs
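A sketch of the introspection-plus-declarative-programming point (all names are invented for the example):

    using System;
    using System.Reflection;

    [AttributeUsage(AttributeTargets.Method)]
    class JobAttribute : Attribute { }

    static class Tasks
    {
        [Job] public static void Nightly() { Console.WriteLine("nightly run"); }
        public static void Helper() { }
    }

    class Runner
    {
        static void Main()
        {
            // Discover and invoke every [Job] method - no manual registration needed.
            foreach (MethodInfo m in typeof(Tasks).GetMethods(BindingFlags.Public | BindingFlags.Static))
                if (m.GetCustomAttributes(typeof(JobAttribute), false).Length > 0)
                    m.Invoke(null, null);
        }
    }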
Likewise specifically for c#
Con:
Syntax based on C underpinnings
(pre 4.0) late binding solely via inheritance
More verbose than some imperative languages
poor handling of complex embedded literals (regexes/xml/multi line strings)
variable capture within closures can be confusing (see the sketch after this list)
nested generators are both cumbersome and perform appallingly
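The closure-capture confusion mentioned above is easy to demonstrate (a minimal sketch):

    using System;
    using System.Collections.Generic;

    class Capture
    {
        static void Main()
        {
            var actions = new List<Action>();
            for (int i = 0; i < 3; i++)
                actions.Add(() => Console.WriteLine(i)); // captures the variable, not its value

            foreach (var a in actions)
                a(); // prints 3, 3, 3 - not 0, 1, 2

            // The usual fix is to copy into a loop-local:
            //   int copy = i; actions.Add(() => Console.WriteLine(copy));
        }
    }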
Pro:
Syntax based on C underpinnings
Much functional support through lambdas
Expression trees allowing compile-time validation of non-code areas such as LINQ to SQL
Strongly typed but with some Type inference to make this easier
if you really need it, unsafe is there for you
interaction with existing C++ ABI code via P/Invoke is both simple and clear (see the sketch after this list).
multicast event model built in.
light weight runtime code generation
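The P/Invoke point is worth a sketch; calling a native Win32 function takes only a few lines (Beep lives in kernel32.dll):

    using System;
    using System.Runtime.InteropServices;

    class Beeper
    {
        [DllImport("kernel32.dll")]
        static extern bool Beep(uint frequency, uint duration);

        static void Main()
        {
            Beep(440, 500); // 440 Hz for half a second
        }
    }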
The C underpinnings really are both a pro and a con. They are understandable by a vast number of programmers (compared to a Pascal-based style) but carry a certain amount of cruft (switch statements being a clear example).
Strong/Weak/Static/Dynamic type systems are a polarising debate but it is certainly not contentious to say that, where the type system is more constraining it should strive to not require excessive verbosity as a result, c# is certainly better than many in that regard.
For many internal Line of Business applications a vast number of the .Net platform Cons are absolutely immaterial (controlled deployment being a common and well solved problem within corporations).
As such using .Net (and this does largely mean c#, sorry VB.Net guys) is a pretty obvious choice for new development within a windows architecture.
The "simplicity" of developing complex(and simple) applications using it. A lot of basic stuff is already coded for you in the framework and you can just use it. And downloading 35mb file today is much easier than 2mb file 8-6 years ago.
There are a lot of reasons. I don't know much about RealBasic, but as far as Delphi goes:
Less widespread than .NET, smaller development community. Many of the Delphi resources on the net are ancient and outdated.
Until Delphi 2009, Delphi didn't have full unicode support.
I don't know about Delphi 2009, but 2007 didn't have very good garbage collection. It had some sort of clunky reference counting that required some intervention on the part of the developer. .NET has a much more advanced GC that does virtually everything for you.
.NET has a larger standard library and more up-to-date 3rd party libraries.
.NET languages like C# are arguably better, and certainly easier to understand for those new to the language.
There are a lot of supposed advantages cited by .NET developers here that shouldn't be in that comparison, simply because Delphi has them as well:
Type safety
Bounds checking
Access to thousands of classes (components) that you will not have to create
There are however some things in .NET that Delphi doesn't have out-of-the box, and only some of those can be added by libraries and own code. To name a few:
Support for multiple languages working on top of the same runtime - allowing to choose the matching language for the problem (e.g. functional programming using F#)
Dynamic source code generation and compilation - this is something so alien to Delphi programmers that they probably don't even see how it could be useful [1]
Multicast events (a short sketch follows the footnote below)
Better multi-threading support (for example BackgroundWorker class, asynchronous delegates)
Seamless support for both 32 and 64 bit processes
Weak references
[1] If you don't know but are interested, check out the home page of Marc Clifton, especially the articles about declarative programming.
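A minimal sketch of the multicast-events point (names invented):

    using System;

    class Publisher
    {
        public event EventHandler Changed;

        public void RaiseChanged()
        {
            EventHandler handler = Changed;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }

    class Demo
    {
        static void Main()
        {
            var p = new Publisher();
            p.Changed += (s, e) => Console.WriteLine("first subscriber");
            p.Changed += (s, e) => Console.WriteLine("second subscriber");
            p.RaiseChanged(); // both handlers run
        }
    }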
Edit: I'd like to respond to the comment by Mason Wheeler:
Re dynamic code: I know that there are solutions to have Pascal scripting embedded in the application. There is however a distinct difference between making parts of your internal object hierarchy available to the scripting engine, and having the same compiler that is used for your code available at runtime as well. There are always differences between the Delphi compiler and the compiler of the scripting engine. Anyway, what you get with .NET goes far beyond anything that is available for Delphi. And in any case, the point is not whether one would be able to code similar infrastructure for Delphi; the point is that with .NET it's already there for you when you need it.
Re Multicast events: Exactly, there's ways to code it, but it's not part of Delphi / the VCL out-of-the-box. That's what I was saying above.
Re weak references: You are sadly mistaken. Try to use interfaces in a non-trivial way, creating circular references along the way. Then you have to start using typecasts and wish for weak references.
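For reference, this is roughly what the .NET side gives you out of the box (a sketch; the Cache class is invented):

    using System;

    class Cache
    {
        private WeakReference _data = new WeakReference(null);

        public byte[] GetData()
        {
            byte[] data = (byte[])_data.Target; // null if the GC has collected it
            if (data == null)
            {
                data = new byte[1024 * 1024];   // recreate on demand
                _data.Target = data;            // hold it weakly again
            }
            return data;
        }
    }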
Well, the .NET Framework is shared by all .NET applications, so you have it only once on your machine, and 35 MB is nothing today (compare it to the size of your Vista installation). For your second .NET application you don't have to download it again.
For Windows app, .NET (using C# or whatever) gives you more direct access to the latest and greatest Windows features. It's also very well supported by Microsoft, has a huge community and lots of books written about it.
REALbasic (now Xojo) is for cross-platform apps. Using it just on Windows can sometimes be useful, but that would not be its strength (which is that it's amazingly easy to use).
I don't know much about Delphi.
From what I can see RealBASIC doesn't have much (if anything) in the way of Object Relational tools and probably wouldn't be as good a choice for n-tier, database-centric applications.