Does single-paradigm OOP lead to abstraction inversion?

For those of you who aren't familiar with the concept, abstraction inversion is the implementation of low-level constructs on top of high-level constructs, and is generally considered a bad thing because it adds both needless complexity and needless overhead. Of course, this is a somewhat imprecise, subjective definition.
In your opinion, does programming in a single-paradigm OOP language where everything must be part of a class and things like pointers aren't exposed, such as Java or C#, inevitably lead to abstraction inversion? If so, in what cases?

Single-paradigm anything violates the First Commandment of Abstractions, which few people seem to know about and even fewer care about.
Thou shalt not make unto thee any abstraction that cannot be overridden if necessary, that thy code be free of ugly abstraction inversions.
No matter what your paradigm, once you start writing non-trivial code to solve real-world problems, you are going to end up with some things that just don't fit the paradigm very well. If you declare that that one paradigm is all there is, this is when things get ugly.
To mix a couple of metaphors: when all you have is a mallet, everything starts to look like a peg. And if all your holes are round, and you suddenly end up with some square pegs, and you don't have any saws (just mallets), you've got yourself a problem.
There's no free lunch. Everything is a trade-off. The easier code becomes to write, the harder it becomes for someone else to read and maintain, or for you or anyone else to debug when problems occur below the level of abstraction you're working at. (And they will eventually, because no abstraction is perfect.)
This is the fundamental flaw with "helpful" technologies and paradigms such as managed code, garbage collection, JIT compilation and the "everything is an object" fiat. They impose a baseline level of abstraction that you are not allowed to reach beneath, and when something goes wrong below that level, there's nothing you can do about it. You're stuck working around a bad abstraction because you can't fix it.

real programmers can write FORTRAN in any language
in other words, it ain't the language, it's the programmer

Java and C# have static methods which are equivalent to functions, so the language isn't forcing anything on you.
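For example (a minimal sketch): nothing stops you from writing procedural-style code in C# by collecting free-standing functions as static methods, where the class is just a namespace-like container that is never instantiated.

public static class MathUtil
{
    // Plain functions in all but name; no object is ever created.
    public static int Square(int x) { return x * x; }
    public static int SumOfSquares(int a, int b) { return Square(a) + Square(b); }
}

// Call sites read like ordinary function calls:
// int result = MathUtil.SumOfSquares(3, 4);  // 25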
That said, once I really got OO, I haven't had any desire whatsoever to go to another style of programming.
If your simple OO layer is covering something complex, I suggest that it's probably your code that's an issue. If OO is done right it should be simple all the way down to the metal.

I don't think a language like Java or C# suffers from this in any tangible way, simply because the libraries accompanying the framework are so rich that the need to break into another paradigm isn't really required for general programming.
That said, the rich libraries themselves can suffer from abstraction inversion. Because the internal implementations are often hidden, or key classes are sealed, developers attempting to extend these libraries are often forced into re-implementing/replicating functionality to make use of the extension points provided by the base class library. This is not so much a language issue as a conscious choice by the developers of the base class libraries, made to avoid exposing functionality that may be brittle or change often with new releases of the .NET Framework/JDK.
Also, because the .NET Framework and the Java runtime both allow for interop with other languages sitting atop the common runtime, the ability to target other paradigms (such as functional programming, dynamic languages, etc.) provides another route to breaking away from single-paradigm constraints.

Related

Why is there no declarative immutability in C#?

Why did the designers of C# not allow for something like this?
public readonly class ImmutableThing
{
...
}
One of the most important ways to make multi-threading safe is the use of immutable objects/classes, yet there is no way to declare a class as immutable. I know I can make it immutable by proper implementation, but having this enforced by the class declaration would make it so much easier and safer. Commenting a class as immutable is a "door prop" solution at best.
One look at a class declaration and you would instantly know it was immutable. If you had to modify someone else's code you would know a class does not allow changes by intent. I can only see advantages here but I can't believe no one thought about this before. So why is not supported?
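For reference, here is a minimal sketch of the "proper implementation" workaround I mean: readonly fields set once in the constructor, getters only, no setters. The compiler enforces each piece individually, but nothing marks the class as immutable as a whole.

public sealed class ImmutableThing
{
    private readonly string name;
    private readonly int value;

    public ImmutableThing(string name, int value)
    {
        this.name = name;
        this.value = value;
    }

    // Read-only access; state cannot change after construction.
    public string Name { get { return name; } }
    public int Value { get { return value; } }

    // "Mutations" return a new instance instead of modifying this one.
    public ImmutableThing WithValue(int newValue)
    {
        return new ImmutableThing(name, newValue);
    }
}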
EDIT
Some say this is not a very important feature, but that does not really convince me. Multicore processors showed up because increasing performance by frequency hit a wall. Supercomputers are heavily multiprocessor machines. Parallel processing is more and more important and is one of the main ways to improve performance. The support for multithreading and parallel processing in .NET is significant (various lock types, thread pool, tasks, async calls, concurrent collections, blocking collection, parallel foreach, PLINQ and so on), and it seems to me everything that helps you write parallel code more easily gives an edge, even if it's non-trivial to implement.
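To illustrate why immutability pays off here (a minimal sketch, assuming nothing beyond standard PLINQ): the lambda below only reads its input and shares no mutable state, so the query can run across cores without any locks.

using System;
using System.Linq;

class PlinqExample
{
    static void Main()
    {
        int[] inputs = Enumerable.Range(1, 1000000).ToArray();

        // Safe to parallelize: the lambda has no side effects and only
        // reads its (effectively immutable) input, so no locking is needed.
        long sumOfSquares = inputs
            .AsParallel()
            .Select(x => (long)x * x)
            .Sum();

        Console.WriteLine(sumOfSquares);
    }
}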
Basically, because it's complicated - and as usr wrote, features need a lot of work in various ways before they're ready to ship. (It's easy being an armchair language designer - I'm sure it's incredibly difficult to really do it, in a language with millions of developers with critical code bases which can't be broken by changes.)
It's tricky for a compiler to verify that a type is visibly-immutable without being overly restrictive in some cases. As an example, String is actually mutable within mscorlib, but the code of other types (e.g. StringBuilder) has been written very carefully to avoid the outside world ever seeing that mutability.
Eric Lippert has written a lot on immutability - it's a complex topic which would/will need a lot of work to turn into a practical language feature. It's also quite hard to retrofit onto a language and framework which didn't have it to start with. I'd love C# to at least make it easier to write immutable types, and I suspect the team has spent quite a while thinking about it - whether they'll ever be happy enough with their ideas to turn it into a production language feature is a different matter.
Features need to be designed, implemented, tested, documented, deployed and supported. That's why we get the most important features first, and the less important ones late or never.
Your proposal is ok, but there is an easy workaround (as you said). Therefore it is not an "urgent" feature.
There is also a thing called representational immutability, where state mutations inside the object are allowed but are never made visible to the outside. Example: a lazily-calculated field. This would not be possible under your proposal, because the compiler could never prove the class to be immutable to the outside while its fields are routinely written to.
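A minimal sketch of what I mean (the class and field names are invented for illustration): the class below is immutable as far as any caller can observe, yet it writes to a private field to cache a lazily-calculated value, so a compiler checking for "no writes after construction" would have to reject it.

using System;

public sealed class Circle
{
    private readonly double radius;

    // Written after construction, but the mutation is never observable:
    // every caller sees the same value for Area. (A production version
    // might use Lazy<double> to also make the caching thread-safe.)
    private double? cachedArea;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public double Area
    {
        get
        {
            if (cachedArea == null)
            {
                cachedArea = Math.PI * radius * radius;  // internal mutation
            }
            return cachedArea.Value;
        }
    }
}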

F# or C# for personal Silverlight project?

I'm about to start working on a rich-internet-application project for a student organization at my university. I will be the only programmer, and what technologies to use is totally up to me. I've already decided on going with Silverlight, but I'm not sure whether to use C# or F#. Here are some of the things I'm keeping in mind:
C#:
I already know it and have used it extensively with Silverlight at work. I have no F# and little general FP experience.
Some say the OOP paradigm works better for complex stateful UIs.
Maintenance: I'll be in school for three more years, but after that if the app is still in use they may have a better time finding someone else to maintain it if I use a more common language.
C# experience is probably more valuable in the "real world".
F#:
The main reason is I want to learn something new. Functional programming languages seem pretty cool (I find myself using the FP features of C# very often, and think they're the biggest improvement in C# 3.0). I think I'd have a lot more fun if I used F#, but am I being unrealistic in thinking the cost in time and effort might not outweigh the benefits?
In my opinion, when you are a student, you should be trying to put your fingers in as many pots as possible.
The more languages you play with, the more understanding you will have of the "best" ways of doing things in a specific language.
As for "experience" being more valuable in the "real world". Personally I only ever consider true commercial experience when looking at potential candidates. Experience in a language when you're in a job and being paid is extremely different to experience in using a language when learning / studying it. Things you do whilst studying are about gaining skills and knowledge whereas things you do in a commercial environment give you experience in solving real life problems.
Bottom line... play with the cool stuff whilst you still can!
Because you expect to create something useful that will live past your tenure as maintainer, I would suggest writing the majority in C#. What you can do to scratch your new-technology-itch, though, is pull out distinct, well-defined components that don't interact directly with the UI and write those in a separate F# assembly.
I've done something similar with a project that I've open-sourced in the past. My fundamental UI logic (in this case, the V-VM parts of the M-V-VM) was in C# because it works so well with WPF technologies. Then I broke certain functionally-oriented components of the Model out into a separate assembly and wrote them in F#, just to get some limited exposure to the language.
It's not a jump-with-both-feet approach to learning technology, so I probably didn't learn as much as I could have. An F#-only project wouldn't have taught me nearly as much about exposing F# functionality to the greater .Net world in a friendly way, either, though.
No matter what, the key in a situation like this is for you to have fun and enjoy what you're doing. :)
You can put the F# business logic in its own project (a DLL), and write the user interface in C#. In the user-interface project you then add a reference to the F# library.
This is a good solution in general when using Silverlight: the power of F# is in (functional) programming, but currently C# has better tool support.
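From the C# side, consuming such an F# library is just an ordinary assembly reference, since an F# module compiles down to a static class. A hypothetical sketch (the BusinessLogic module and its rank function are invented for illustration):

// Assumes an F# project compiled to BusinessLogic.dll containing
// a module BusinessLogic.Scoring with a function rank : int list -> int list.
// An F# module shows up in C# as a static class with static methods.
using Microsoft.FSharp.Collections;

class Program
{
    static void Main()
    {
        // F# lists surface in C# as FSharpList<T>; ListModule.OfArray builds one.
        FSharpList<int> scores = ListModule.OfArray(new[] { 3, 1, 2 });
        FSharpList<int> ranked = BusinessLogic.Scoring.rank(scores);
    }
}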
I know it's not in your list, but if you're interested in learning something new, you might consider GWT - You write your client in Java (which ought to be an easy jump from C#), and then the compiler turns the client side into JavaScript. Should be a bit more cross-platform compatible than Silverlight, and it's an interesting fusion of technologies (CSS, JavaScript, and Java aren't going anywhere in the near future).
I just gave a talk about programming reactive Silverlight applications in F# at a London F# user-group meeting. The recording of the talk (and the samples) is available here, so you can take a look at that.
Here are a few points you could consider:
I think F# has some very nice features that make programming this kind of application more elegant than in C# (for example, you can nicely model the program as a state machine and encode this directly in code).
F# is still relatively new, but I believe there is a decent chance that finding someone familiar with F# three years from now will be much easier than it is today (and finding younger students interested in learning something new should be easier :-)).
I was surprised that there is already quite a demand for good F# programmers in the London area. This will be probably different in different places, but I think that F# is becoming a "nice-to-have" feature on CV for some jobs.
I'm presuming that this will be used in an intranet environment. Otherwise, I'd question whether the choice of Silverlight is really the best due to market penetration.
The second point I'd raise is that one of the really key skills for most web developers is JavaScript (nowadays, that means JavaScript with a library like jQuery to manipulate the DOM, simplify AJAX, etc.). Unless the application is particularly complex, there might be some merit in considering DHTML+JavaScript as a starting point, and only looking at other technologies if it proves too much for that.
However, if you're set on going down the Silverlight route, then C# is by far the most likely to be supported. If you're still learning, then it's also the route that has the best documentation. F# has some excellent documentation around, but unfortunately not nearly as much as for C#.
You briefly mention the time and cost commitment. Unless you're quite comfortable with functional programming, F# is liable to take significantly longer, in part due to unfamiliarity and in part due to the smaller amount of reference documentation available to help you on your way.
While it is undoubtedly good to have knowledge of a range of programming languages under your belt, what's more valuable to most employers is a solid understanding of their language of choice, so diversifying too much can work against that. When looking to learn an unfamiliar programming language, starting with something like the Project Euler problems may be a better way in than diving straight into a major project with the new language. If you start in C#, you can always create an F# project that implements functions suited to its strengths, and reference it from the C# one, to dip your toe in the water without committing a lot of additional time.

What are the advantages of doing 100% managed development using C++/CLI?

What are the advantages (the list of possible disadvantages is lengthy) of doing 100% managed development using C++/CLI (that is, compile with /clr:safe which "generates ... assemblies, like those written in ... C#")? Especially when compared to C# (note C++/CLI : Advantages over C# and Is there any advantage to using C++/CLI over either standard C++ or C?# are mostly about managed/unmanaged interop).
For example, here are a few off the top of my head:
C++-style references for managed types, not as elegant as full blown non-nullable references but better than nothing or using a work-around.
templates which are more powerful than generics
preprocessor (this may be a disadvantage!, but macros can be useful for code generation)
stack semantics for reference types--automatically calling IDisposable::Dispose()
easier implementation of Dispose() via C++ destructor
C# 3.0 added auto-implemented properties, so that is no longer a C++/CLI advantage.
I would think that the single biggest advantage is the managed/unmanaged interop. Writing pure managed C++/CLI, without interop with C# or other .NET languages, seems (to me at least) like missing the point entirely. Yes, you could do this, but why would you?
If you're going to write pure managed code, why not use C#? Especially (like nobugs said) if VS2010 drops IntelliSense support for C++/CLI. Also, in VS2008 the IntelliSense for C++/CLI isn't as good as the C# IntelliSense; so from a developer standpoint, it's easier to work/explore/refactor in C# than C++/CLI.
If you want some of the C++ benefits you list like the preprocessor, stack semantics and templates, then why not use C++?
Odd, I like C++/CLI but you listed exactly its features I dislike. My criticisms:
On the C++-style references: okay, but accidental use of the hat is pretty widespread, getting the value of the value type boxed without warning. There is no way to diagnose this mistake.
On templates: power that comes at a high price. Templates you write are not usable in any other .NET language. If anything, it worsens the C++ template export problem. The complete failure of STL/CLR is worth pondering too.
On the preprocessor: erm, no.
On stack semantics: this was IMO a serious mistake. It is already difficult to avoid problems with accidental boxing, as outlined in the first point, and stack semantics makes it seriously difficult for any starting programmer to sort this out. It was a design decision to placate C++ programmers, which is okay, but the using statement was a better solution (a sketch follows these points).
On easier implementation of Dispose(): not sure how it is easier. The GC.SuppressFinalize() call is automatic, that's all. It is very rare for anybody to write a finalizer, but you can't stop the auto-generated code from making the call. That's inefficient and a violation of the 'you don't pay for what you don't use' principle. Add to this that writing the destructor also forces a default finalizer to be auto-generated: one you'd never use, and wouldn't want to be used, if you forgot or chose not to use the destructor.
Well, that's all very subjective, perhaps. The death knell will come with VS2010, which will ship without IntelliSense support for C++/CLI.
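For contrast, a minimal sketch of the using statement I mean (the file name is just a placeholder): disposal is deterministic and visible at the call site, with no finalizer generated behind your back.

// Dispose() runs when the block exits, even if an exception is thrown,
// and the reader can see it happening.
using (var reader = new System.IO.StreamReader("data.txt"))
{
    string firstLine = reader.ReadLine();
    System.Console.WriteLine(firstLine);
}
// reader.Dispose() has already run by this point.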
In C++/CLI you can define functions outside of classes; you can't do that in C#. But I don't know if that is an advantage.
Like others here, I can't think of any general cases where a clear advantage exists, so my thinking turned to situational advantages -- are there any cases where there is an advantage in a particular scenario?
Advantage: Leverage the C++ skill set of technical staff in a rapid prototyping scenario.
Let me elaborate ...
I have worked quite a bit with scientists and (non-software) engineers who aren't formally trained programmers. Many of these people use C++ for developing specific modules involving high-end physics/mathematics. If a pure .NET module is required in a rapid prototyping scenario and the skill set of the scientist/engineer responsible for the module is C++, I would teach them a small amount of additional syntax (public ref, ^ and % and gcnew) and get them to program up their module as a 100% managed C++/CLI DLL.
I recognize there are a whole heap of possible "Yes, but ..." responses, but I think leveraging the C++ skill set of technical staff is a possible advantage of C++/CLI.
I agree with what you have mentioned. As an example of preprocessor use, see the Boost Preprocessor library being used to generate a set of types based on a list of basic types (e.g. PointI32, PointF32, etc.) in C++/CLI.
You can have enums and delegates as generic constraints in C++/CLI, but not in C#.
https://connect.microsoft.com/VisualStudio/feedback/details/386194/allow-enum-as-generic-constraint-in-c
There is a library to simulate these constraints in C#.
http://code.google.com/p/unconstrained-melody/
One could imagine the following requirements for a hypothetical product:
Quick time-to-market on Windows
Eventual deploy to non-Windows platforms
Must not rely on Mono for non-Windows
In such a scenario, using e.g. C# for 1 would stymie you on 2 and 3 without a rewrite. So, one could develop in C++/CLI, suitably munged with macros and template shenanigans to look as much like ordinary C++ as possible, to hit reqt 1; then to hit reqt 2 one would need to (a) reimplement said macros and template shenanigans to map to pukka C++ and (b) implement the .NET framework classes used, in pukka C++. Note that (a) and (b) could be reused in future once done once.
The most obvious objection would be "well why not do the whole thing in native C++ then?"; well maybe there's lots of good stuff in the vast .NET class library that you want to use to get to market asap.
All a bit tenuous I admit, so I very much doubt this has ever been done, but it'd be a fun thing to try out !

What is the case against F#? [closed]

Straightforward C#/Java code is extremely difficult to parallelize, multi-thread, etc. As a result, straightforward C#/Java code will use less and less of the total processing power on a box (because everything is now going to be multi-core).
Solving this problem in C# and Java is not simple. Mutability and side effects are key to getting stuff done in C# and Java, but this is exactly what makes multi-core, multi-threading programming so difficult.
Hence, functional programming is going to become increasingly important.
Given that the J2EE/Ruby world will splinter amongst many functional/multi-core approaches (just like it does for just about everything else) while the .NET folks will all use F#, this line of thinking suggests that F# will be huge in two years.
What is wrong with this line of thinking? Why isn't it obvious that F# is going to be huge?
(Edit) Larry O'Brien nails it in this blog post: "Language-wise, in my opinion, this is a set of exercises where C and C++ shine — at least until the multithreading stuff. Languages with list-processing idioms will also do well initially, but may have memory-consumption issues (especially functional languages). Ultimately, I think that the managed C-derived language (Java and C#) have the easiest route to Exercise 9 and then face serious shortcomings with Exercise 10, where concurrency issues play the major role. In my opinion, concurrency is going to become the central issue in professional development in the next half-decade, so these shortcomings are very significant."
Straightforward C#/Java code is extremely difficult to parallelize
Not if you use the Task Parallel Library.
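For instance, a minimal sketch using the TPL (shipped with .NET 4): the loop body is spread across cores with no explicit thread management.

using System;
using System.Threading.Tasks;

class TplExample
{
    static void Main()
    {
        double[] results = new double[1000];

        // Parallel.For partitions the iteration range across worker threads.
        // Each iteration writes to a distinct index, so no locking is needed.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        Console.WriteLine(results[999]);
    }
}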
Whether F# becomes huge depends on whether the cost/benefit is there, which is not at all obvious. If .NET developers find out that they can write some programs in 1/3 of the time using a functional rather than an imperative approach (which I think might be true for certain types of programs), then there should be some motivation for F# adoption.
Paul Graham's story of his use of Lisp in a startup company is illustrative of this process. Lisp provided them with a huge competitive advantage, yet Lisp didn't take over the world, not because it wasn't powerful, but for other reasons, like lack of library support. That F# has access to the .NET framework gives it a fighting chance.
http://www.paulgraham.com/avg.html
Functional programming is harder to get your head around than imperative programming. F# is a more difficult language in many ways than C#. Most 'developers' don't understand functional programming concepts, and can't even write very good imperative code in C#. So what hope have they got of writing good functional code in F#?
And when you consider that everybody on the team needs to be able to understand, write, debug, fix, etc. the code in the language you choose, it means you need a very strong team -- not just a very strong person -- to be able to use F# as it's meant to be used. And there aren't many of those around.
Add into the mix the fact that there's 8 years of C#/VB code lying around which is unlikely to be migrated, and that it's easier to create libraries that look and feel like the BCL in C#/VB as it's less easy to leak stuff like tuples etc. through public interfaces, and I reckon that F# will struggle to gain anything more than usage by strong teams on new, internal projects.
Ask a programming question on SO and specify you are using F#.
Ask the same question and specify you are using C#.
Compare the answers.
Using a novel programming language is a calculated risk--you may get more built-in functionality and syntactic sugar, but you will lose in community support, ability to hire programmers, and working around blind spots in the language.
I'm not picking on F#--every decision of programming language is a risk equation you need to work out. If people didn't take that risk on C#, we'd all still be using VB6 and C++ now. Same with those languages versus their predecessors. You have to decide for your project whether the advantages outweigh the risks.
There isn't really any case against F#, but you have to understand the context of the situation we, as developers, are in currently.
The multi-core architecture is still in its infancy. The shift from single-threaded apps to parallel architectures is going to take time.
F# is very useful for a number of reasons, parallelism being one of them, but not the only one. Functional programming is also extremely useful for scientific purposes. This will be huge in many sectors.
However, the way you're wording your question, it sounds like you're stipulating that F# is already fighting a losing battle, which is definitely not the case. I've talked to many scientists who are using things such as MATLAB and the like, and a lot of them are already aware of F# and excited about it.
Imperative code is easier to write than functional code. (At least, it's easier to find people who can write acceptable imperative code than functional code.)
Some things are inherently single threaded (UI* is the best known example).
There's a lot of C#/C/C++ code out there already, and multiple languages in the same project make management of said project more difficult.
Personally, I think functional languages will become increasingly mainstream (heck F# itself is a testament to that) but probably never gain lingua franca status like C/C++/Java/C#/etc. have or will.
*This is apparently a somewhat contentious view, so I'll expand upon it.
In a multi-threaded UI, each UI event is dispatched asynchronously and on a thread of its own (the actual management of threads is probably more sophisticated than just spinning up a new one, but that's not really germane to the discussion).
Imagine if this were the case, and you're rendering the window.
The window manager asks you to draw each element (expect a message, or a function invocation, for each element).
Each element reads its state (implicitly reading the application state)
Each element draws itself.
In step 2, every element MUST lock the application state (or the subset of it that affects display). Otherwise, in the event the application state is updated, the end result of rendering the window could include elements that reflect two different application states.
This is a lock convoy. Each render thread will lock, render, and then release; therefore they'll execute serially.
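A hedged C# sketch of that convoy (all the types here are invented for illustration): every render callback must take the same lock, so the supposedly parallel renders execute one after another.

// Hypothetical multi-threaded UI: each element renders on its own thread,
// but all of them need a consistent view of shared application state.
class AppState { public string Theme = "light"; }
class Element { public void Draw(AppState s) { /* paint using s */ } }

class Renderer
{
    private readonly object stateLock = new object();
    private AppState state = new AppState();

    // Called once per element, each call on its own thread.
    public void RenderElement(Element element)
    {
        lock (stateLock)          // every render thread queues up here,
        {
            element.Draw(state);  // so the renders run serially:
        }                         // a textbook lock convoy.
    }
}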
Now, imagine you're dealing with user input. First, users are pretty slow, so the benefits are going to be non-existent unless you're doing considerable work on the (one-of-many) UI thread; so I'm going to assume that's the case.
The Window Manager informs your application of user input (once again, message, function call, whatever).
Read what's needed from the application state. (Locks needed here)
Spend noticeable time crunching some numbers.
Update the application state. (Locks needed here as well)
All you've accomplished is changing from explicitly starting a worker thread, to implicitly doing so; at the cost of potential heisenbugs & deadlocks if you're loose with locking your state.
The fundamental problem with UI api's is that you're dealing with a many-to-one (or one-to-many depending on how you look at it) relationship. Either many windows, many elements, or many "input types" all of which affect a single window/surface. Some sort of synchronization has to happen, and when it does multi-threading doesn't have any benefits anymore just detractions.
What is wrong with this line of thinking? Why isn't it obvious that F# is going to be huge?
You're assuming the large masses actually write programs that need multicore support, or programs that would gain significant benefit from being parallelized. That's a false assumption.
Server-side, there's even less need for a parallel language.
Backend server processing already takes enough advantage of multicore/multiprocessor support by its inherent nature of being concurrent: work is divided among clients via threads and among processes (e.g. one app server, one DB server, one web container, and so on).
What is wrong with this line of reasoning is that it assumes that everything will work out as planned.
There is the assumption that it will be easier to write multithreaded programs in F# than in C#. Historically, functional languages have not done all that well in popularity, and there are probably reasons why. Therefore, while it is generally easier to multithread functional than imperative languages, it's generally been easier to find people to program in imperative languages. These two things balance out somehow, depending probably on the people and the app. It may or may not be easier in general to write multithreaded applications in functional or imperative languages. It's far too early to tell.
There's the assumption that people are going to demand efficient use of their 1K-core computers. There are always applications that can take as much CPU power as they can find, but these aren't the most common applications. Most applications people run are not in any way limited by CPU power nowadays, but by delays in local I/O, networking, and users. This may change, but it won't change at all quickly.
Also, it isn't clear that massively multicore processors are the wave of the future. There may be a fairly small market for them, so chip manufacturers will produce smaller chips instead of more powerful, or will devote resources to other things that we aren't clear about right now.
There's the assumption that F# is going to be the winner among functional languages. As the VS 2010 functional language, it does have a considerable advantage. However, the race hasn't really started yet, and there's plenty of time for things to happen. It may turn out that F#.NET isn't a particularly good language to program massively parallel PCs, and something else may come about. It may happen that Microsoft and .NET won't be all that important by the time 64-core processors routinely come on cheap laptops. (Shifts like that aren't all that common, but they tend to come by surprise. They also are more likely to happen during times of conceptual change, and a mass move to functional languages would qualify.)
Assume that F# will continue to be the primary Microsoft functional language, that Microsoft programming languages will continue to be dominant, that getting maximum performance out of massively multicore processors will be important, that all the technical arguments won't be swamped by business inertia, and that F# will be considerably better than C# and other such languages at writing massively multithreaded applications: then you're right. However, that's a whole lot of assumptions strung together and linked by plausible reasons rather than rigid logic.
You seem to be trying to predict the future as a combination of next year's stuff extended by one line of reasoning about technical issues, and that's extremely unreliable.
The only 'case' against it (if there is such a thing) is that most modern, professional developers use different tools (as well as different tool types). F# brings some unique tools to the game, and those of us who embrace them will find our respective, specialized talents useful for other programming tasks -- especially those tasks involving analysis and manipulation of large data collections.
What I've seen of F# truly amazes me. Every demo leaves me grinning because F# strikes me as an advanced edition of what I remember from 'the good old days' when functional programming was much more common (probably more 'old' than 'good' to be sure, but such is nostalgia).
I disagree with the premise that C# is hard to parallelize. It really isn't if you know what you're doing. Additionally, parallel linq will make this even easier. Do I wish there was an OpenMP for C#? Of course, but the tools C# provides allow you to do almost everything you want if you are good enough (and I feel one doesn't even have to be that good anymore).
There are a few things worth noting about technology:
The best technical solution is not always the most popular or most used. (And I don't know if F# is any good.) I would argue that SQL is the most used, most asked-for programming language by employers, and it's not a nice, cool, fast, friendly, fun language in my book. If the best technical solution always "won", how do you explain QWERTY keyboards? And if you ever read the "design" for x86/x64 processors... ;)
Azul, with its 864-core servers, exclusively uses Java, and the trend is toward bigger servers in the future.
If we assume the battle is between C# and F#, I do not think F# will win over C# within 2 years for the following reasons:
The features of F# that C# does not have are not features people have been missing. For instance, I think Seq.map, Seq.iter, Seq.fold and friends are great, but I don't see a majority of developers switching from foreach to these constructs (compare the two styles in the sketch after this list).
The performance benefits of multicores are irrelevant to most of the existing programs, as only few programs are cpu-bound. For those programs where performance really is important, (e.g. video games), C++ will remain predominant, at least for the 2 years to come. It's not that hard to use threads in C++, assuming one avoids side-effects (which you can decide to do even in C++). Isn't that what Google is doing?
For F# to become really big, I think it has to become one of the main languages used in teaching, the way Java has been. This is actually quite likely, seeing how the academic world is fond of functional languages. Should that happen, I don't think the effects will become visible before 5 years.
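To make the first point concrete, here is the same computation in both styles, written in C# since LINQ's Select/Aggregate are close cousins of Seq.map and Seq.fold; most developers still reach for the foreach version by habit.

using System;
using System.Linq;

class TwoStyles
{
    static void Main()
    {
        int[] xs = { 1, 2, 3, 4 };

        // Familiar imperative style: explicit loop, mutable accumulator.
        int loopSum = 0;
        foreach (int x in xs)
        {
            loopSum += x * x;
        }

        // Functional style, analogous to Seq.map |> Seq.fold in F#.
        int foldSum = xs.Select(x => x * x).Aggregate(0, (acc, x) => acc + x);

        Console.WriteLine(loopSum == foldSum);  // True
    }
}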
Linking assemblies together is not trivial.
F# is tied to the .NET typing system, which is significantly more restricted than, say, PHP. It's probably right up there with Java in the land of Strong Typing. That makes the entry barrier pretty high for someone who isn't intimately familiar with the .NET types.
Single-assignment code is hard to write; most algorithms assume the typical Turing-machine model, which permits multiple assignment, and single-assignment code does not fit neatly into a good model for How We Think. At least, for those of us who write Turing-machine code for a living. Perhaps it's different for those who write Lambda-machine code out there...
F# is tied to Microsoft, which produces knee-jerk hate from many geeks, who would rather use Lisp or Scheme or Haskell (or whatever). Although Mono supports it, it didn't support it well the last time I tried (it was quite slow).
Most of our existing code lives in imperative, sequential code bases, and most of our applications are oriented around imperative, sequential operations with side-effects.
Which is all to say, pure functional approaches do not neatly model the real world, so F# is going to have to carve out a niche where it easily manages real-world problems. It cannot be a general purpose language, because it does not neatly solve general purpose problems.

oo-spaghettio web architecture

I've noticed that the majority of enterprise web apps I've worked on over the past few years have seemingly mis-used the powers of oo.
That is, what once would have been perhaps 1,000 lines of HTML and script has often now morphed into 10,000 lines of code, 50 classes, and 2,000 method calls to do basically the same thing. In other words, OO and layered architecture appear to be over-used and/or ill-used, often leading to longer development times, higher cost, and nightmarish maintenance.
How often are other people seeing this happen?
How can oo be effectively utilized to, as the Buddha himself has said: as much as possible try not to harm...as much as possible try to help...
"The road to hell is paved with the best of intentions."
I haven't personally encountered this myself, but all the times I've heard stories it seems to be an issue of architecture astronauts (people who spend too much time thinking) or bad developers (people who spend too little time thinking).
In the earlier days of programming, you didn't see as much of this because of the limits of the hardware, languages, etc.
However, developers now are trying to focus on writing code that's understandable by humans, aiming for loose coupling and higher maintainability by incorporating as many design patterns and OO principles as they can; but, like everything, it can be overdone.
On the other hand, some developers might just not be thinking enough about the problems they're attempting to solve and writing extra code just because it gets the job done and not thinking about the bigger picture.
In either case, the developers might not be malicious or even incompetent, and may want the best for the projects they're working on, but they still over-apply principles simply because they are trying too hard.
So I would say the solution is to remind developers to use OOP principles as guidelines, but just that. There comes a point when you have to find a happy medium between thinking and programming and just stop thinking and start programming.
Jeff wrote a good blog post about just this kind of thing: KISS and YAGNI.
I see this all the time :( Basically, if people are going to make a mess, they will make it whether or not they try to use OO design. It gets equally awful in either case.
Update 1: it is important to understand how and what will be reused (without going crazy on it, as that would hinder productivity), since we don't want to end up with tons of classes where not a single one is reused and each fulfills tons of different functions.
Basically, the main issue is understanding and caring about what is being built. You could apply OO, TDD, DDD, anything, and if the devs don't understand what they are doing, it will end up in the same mess... or worse :(
Bottom line: these things do help, but they aren't magic, and they won't replace the developers' skill at creating maintainable code.
Update 2: Also note that a checklist or some bullet points won't do it. I mean, I love SOLID and the many related ideas, and I think they really clear things up, but they usually make the most impact on the people who were already trying to avoid the mess.
