What algorithm to use for Dynamic Scheduling System? - c#

I'm planning to develop an expert system that automatically fits the school faculty's workload (time, teaching load, etc.) and generates class sections and room assignments that are at least 90% in agreement with what the Director of a given department would have assigned for that semester.
What algorithm to use? Heuristics? Optimization? Any suggestions or help is highly appreciated!

Two friends of mine did something similar for a class project. They used the simulated annealing heuristic. They concluded that it might not be the best tool for the job.
Hey, knowing what not to do can be useful, right? :)

Here are some general observations:
1) Manual scheduling is rarely attempted from scratch. Instead, somebody starts with the schedule for the previous year and alters it to take account of changes in requirements. One way of mimicking this with a computer is to use a hill-climbing algorithm, which repeatedly tries a number of small changes to improve the solution so far. This can then be started off at the current schedule (a minimal sketch follows this list).
2) Does the manual process ever terminate with the conclusion that the requirements are collectively unachievable and that some of them must be dropped? In that case your algorithm must be transparent enough that failures can be understood, or at least capable of proposing such changes (e.g. by minimising a penalty function, which allows it to produce a "least bad" solution that does not satisfy all of the original constraints). I know of one case where a sophisticated constraint-based approach was replaced by a much simpler algorithm because failures of the constraint-based system did not give enough user feedback.
3) Curiously enough, the next generation system did not use sophisticated scheduling at all. It turned out - roughly speaking - that at the time the decisions had to be made not all of the consequences of sophisticated scheduling decisions could be foreseen, and, in the long run, a simple predictable schedule that could be maintained indefinitely was more productive than constantly rearranging schedules to grab small momentary advantages.
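To make observation (1) concrete, here is a minimal hill-climbing sketch in C#. The Schedule type, the Penalty function (counting clashes, overloaded staff, unassigned sections, and so on) and RandomSmallChange are hypothetical placeholders; the point is only the shape of the loop: start from last year's schedule and keep any small random change that does not make things worse.

    // Hill-climbing sketch (hypothetical Schedule, Penalty and RandomSmallChange).
    Schedule current = lastYearsSchedule;
    int currentPenalty = Penalty(current);
    var rng = new Random();
    for (int i = 0; i < 100000; i++)
    {
        Schedule candidate = RandomSmallChange(current, rng);  // swap two sections, move a class, etc.
        int candidatePenalty = Penalty(candidate);
        if (candidatePenalty <= currentPenalty)                // keep improvements and sideways moves
        {
            current = candidate;
            currentPenalty = candidatePenalty;
        }
    }
    // 'current' is now locally optimal; a penalty of 0 means every constraint is satisfied.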

Take a look at the curriculum course lesson scheduling example of Drools Planner (open source, java I am afraid). It uses meta-heuristics such as simulated annealing and tabu search.
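For what it's worth, the only thing that separates simulated annealing from the plain hill climbing sketched above is the acceptance test: a worse candidate is sometimes accepted, with a probability that shrinks as a "temperature" parameter decays. A rough sketch of that test (delta being the change in the hypothetical penalty value):

    // Simulated-annealing acceptance test (sketch).
    // delta = candidatePenalty - currentPenalty; temperature decays toward zero over time.
    bool Accept(int delta, double temperature, Random rng)
    {
        if (delta <= 0) return true;                               // always keep improvements
        return rng.NextDouble() < Math.Exp(-delta / temperature);  // occasionally escape local optima
    }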

Here is a paper on dynamic scheduling using genetic algorithms... you might find some of the ideas here useful... even if the domain isn't the same, I think the idea is more generally applicable.


Thread Affinity

I have a multithreaded program which consist of a C# interop layer over C++ code.
I am setting threads affinity (like in this post) and it works on part of my code, however on second part it doesn't work. Can Intel Compiler / IPP / MKL libs / inline assembly interfere with external affinity setting?
UPDATE:
I can't post code, as it is a whole environment with many DLLs. I set the environment variables OMP_NUM_THREADS=1, MKL_NUM_THREADS=1 and IPP_NUM_THREADS=1. When it runs in a single thread, it runs OK, but when I use a number of C# threads and set affinity per thread (on a quad-core machine), the initialization goes fine on separate cores, yet during processing all threads start using the same core. I hope I am clear enough.
Thanks.
We've had this exact problem; we'd set our thread affinity to what we wanted, and the IPP/MKL functions would blow that away! The answer to your question is 'yes'.
Auto Parallelism
The issue is that, by default, the Intel libraries like to automatically execute multi-threaded versions of the routines. So a single FFT gets computed by a number of threads set up by the library specifically for this purpose.
Intel's intent is that the programmer could get on with the job of writing a single threaded application, and the library would allow that single thread to benefit from a multicore processor by creating a number of threads for the maths work. A noble intent (your source code then need know nothing about the runtime hardware to extract the best achievable performance - handy sometimes), but a right bloody nuisance when one is doing one's own threading for one's own reasons.
Controlling the Library's Behaviour
Take a look at these Intel docs, section Support Functions / Threading Support Functions. You can either programmatically control the library's threading tendencies, or there are some environment variables you can set (like MKL_NUM_THREADS) before your program runs. Setting the number of threads was (as far as I recall) enough to stop the library doing its own thing.
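As a sketch of both halves of that advice: the environment variables mentioned above can be set from managed code, provided it happens before the Intel libraries initialise, and thread affinity can be pinned with the standard Win32 SetThreadAffinityMask call. The class below is an assumption-laden outline, not a drop-in fix; note also that a managed thread is not guaranteed to stay on one OS thread, so Thread.BeginThreadAffinity is worth a look as well.

    using System;
    using System.Runtime.InteropServices;

    static class ThreadingSetup
    {
        [DllImport("kernel32.dll")]
        static extern IntPtr GetCurrentThread();

        [DllImport("kernel32.dll")]
        static extern UIntPtr SetThreadAffinityMask(IntPtr hThread, UIntPtr mask);

        public static void KeepIntelLibrariesSingleThreaded()
        {
            // Must run before MKL/IPP initialise, or they will have already sized their thread pools.
            Environment.SetEnvironmentVariable("MKL_NUM_THREADS", "1");
            Environment.SetEnvironmentVariable("IPP_NUM_THREADS", "1");
            Environment.SetEnvironmentVariable("OMP_NUM_THREADS", "1");
        }

        public static void PinCurrentThreadToCore(int core)
        {
            // Windows-only sketch, no error handling: pin the calling OS thread to a single core.
            SetThreadAffinityMask(GetCurrentThread(), new UIntPtr(1u << core));
        }
    }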
Philosophical Essay Inspired By Answering Your Question (best ignored)
More or less everything Intel is doing in CPU design and software (e.g. IPP/MKL) is aimed at making it unnecessary for the programmer to Worry About Threads. You want good math performance? Use MKL. You want that for loop to go fast? Turn on Auto Parallelisation in ICC. You want to make the best use of cache? That's what Hyperthreading is for.
It's not a bad approach, and personally speaking I think that they've done a pretty good job. AMD too. Their architectures are quite good at delivering good real world performance improvements to the "Average Programmer" for the minimal investment in learning, re-writing and code development.
Irritation
However, the thing that irritates me a little bit (though I don't want to appear ungrateful!) is that whilst this approach works for the majority of programmers out there (which is where the profitable market is), it just throws more obstacles in the way of those programmers who want to spin their own parallelism. I can't blame Intel for that of course, they've done exactly the right thing; they're a market led company, they need to make things that will sell.
By offering these easy features, the situation of there being too many under-skilled and under-trained programmers becomes more entrenched. If all programmers can get good performance without having to learn what auto parallelism is actually doing, then we'll never move on. The pool of really good programmers who actually know that stuff will remain really small.
Problem
I see this as a problem (though only a small one, I'll explain later). Computing needs to become more efficient for both economic and environmental reasons. Intel's approach allows for increased performance, and better silicon manufacturing techniques produce lower power consumption, but I always feel like it's not quite as efficient as it could be.
Example
Take the Cell processor at the heart of the PS3. It's something that I like to witter on about endlessly! However, IBM developed that with a completely different philosophy to Intel. They gave you no cache (just some fast static RAM instead to use as you saw fit), the architecture was pretty much pure NUMA, you had to do all your own parallelisation, etc. etc. The result was that if you really knew what you were doing you could get about 250 GFLOPS out of the thing (I think the non-PS3 variants went to 320 GFLOPS), for 80 watts, all the way back in 2005.
It's taken Intel chips about another 6 or 7 years for a single device to get to that level of performance. That's a lot of Moore's law growth. If the Cell got manufactured on Intel's latest silicon fab and was given as many transistors as Intel put into their big Xeons, it would still blow everything else away.
No Market
However, apart from the PS3, Cell was a non-starter as a market proposition. IBM decided that it would never be a big enough seller to be worth their while. There just weren't enough programmers out there who could really use it, and indulging the few of us who could made no commercial sense and wouldn't have pleased the shareholders.
Small Problem, Bigger Problem
I said earlier that this was only a small problem. Well, most of the world's computing isn't about high maths performance; it's become Facebook, Twitter, etc. That sort is all about I/O performance, and for that you don't need high maths performance. So in that sense the dependence on Intel Doing Everything For You so that the average programmer can get good maths performance matters very little. There's just not enough maths being done to warrant a change in design philosophy.
In fact, I strongly suspect that the world will ultimately decide that you don't need a large chip at all; an ARM should do just fine. If that does come to pass then the market for Intel's very large chips with very good general purpose maths compute performance will vanish. Effectively, those of us who want good maths performance are being heavily subsidised by those who want to fill enormous data centres with Intel based hardware and put Intel PCs on every desktop.
We're simply lucky that Intel apparently has a desire to make sure that every big CPU they build is good at maths regardless of whether most of their users actually use that maths performance. I'm sure that desire has its foundations in marketing prowess and wanting the bragging rights, but those are not hard, commercially tangible artifacts that bring shareholder value.
So if those data centre guys decide that, actually, they'd rather save electricity and fill their data centres with ARMs, where does that leave Intel? ARMs are fine devices for the purpose for which they're intended, but they're not at the top of my list of Supercomputer chips. So where does that leave us?
Trend
My take on the current market trend is that 'Workstations' (PCs as we call them now) are going to start costing lots and lots of money, just like they did in the 1980s / early 90s.
I think that better supercomputers will become unaffordable because no one can spare the $10 billion it would take to do the next big chip. If people stop having PCs there won't be a mass market for large all-out GPUs, so we won't even be able to use those instead. They're an exclusive thing, but supercomputers do play a vital role in our world and we do need them to get better. So who is going to pay for that? Not me, that's for sure.
Oops, that went on for quite a while...

When is optimization premature? [closed]

I see this term used a lot but I feel like most people use it out of laziness or ignorance. For instance, I was reading this article:
http://blogs.msdn.com/b/ricom/archive/2006/09/07/745085.aspx
where he talks about the decisions he makes when implementing the types necessary for his app.
If it was me, talking about these for code that we need to write, other programmers would think either:
I am thinking way too much ahead when there is nothing and thus prematurely optimizing.
Over-thinking insignificant details when there are no slowdowns or performance problems being experienced.
or both.
and would suggest to just implement it and not worry about these until they become a problem.
Which attitude is preferable?
How to make the differentiation between premature optimization vs informed decision making for a performance critical application before any implementation is done?
Optimization is premature if:
Your application isn't doing anything time-critical. (Which means, if you're writing a program that adds up 500 numbers in a file, the word "optimization" shouldn't even pop into your brain, since all it'll do is waste your time.)
You're doing something time-critical in something other than assembly, and still worrying whether i++; i++; is faster or i += 2... if it's really that critical, you'd be working in assembly and not wasting time worrying about this. (Even then, this particular example most likely won't matter.)
You have a hunch that one thing might be a bit faster than the other, but you need to look it up. For example, if something is bugging you about whether Stopwatch is faster or Environment.TickCount, it's premature optimization, since if the difference were bigger, you'd probably be more sure and wouldn't need to look it up.
If you have a guess that something might be slow but you're not too sure, just put a //NOTE: Performance? comment, and if you later run into bottlenecks, check such places in your code. I personally don't worry about optimizations that aren't too obvious; I just use a profiler later, if I need to.
Another technique:
I just run my program, randomly break into it with the debugger, and see where it stopped -- wherever it stops is likely a bottleneck, and the more often it stops there, the worse the bottleneck. It works almost like magic. :)
This proverb does not (I believe) refer to optimizations that are built into a good design as it is created. It refers to tasks specifically targeted at performance, which otherwise would not be undertaken.
This kind of optimization does not "become" premature, according to the common wisdom — it is guilty until proven innocent.
Optimisation is the process of making existing code run more efficiently (faster speed, and/or less resource usage)
All optimisation is premature if the programmer has not proven that it is necessary. (For example, by running the code to determine if it achieves the correct results in an acceptable timeframe. This could be as simple as running it to "see" if it runs fast enough, or running under a profiler to analyze it more carefully).
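That "run it and see" check can literally be a few lines; a minimal sketch, assuming a hypothetical SaveDocument() operation and a one-second requirement:

    using System;
    using System.Diagnostics;

    var sw = Stopwatch.StartNew();
    SaveDocument();                    // placeholder for the operation under test
    sw.Stop();
    Console.WriteLine("Save took {0} ms", sw.ElapsedMilliseconds);
    // If this is comfortably inside the requirement (say, under 1000 ms), optimising it now is premature.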
There are several stages to programming something well:
1) Design the solution and pick a good, efficient algorithm.
2) Implement the solution in a maintainable, well coded manner.
3) Test the solution and see if it meets your requirements on speed, RAM usage, etc. (e.g. "When the user clicks "Save", does it take less than 1 second?" If it takes 0.3s, you really don't need to spend a week optimising it to get that time down to 0.2s)
4) IF it does not meet the requirements, consider why. In most cases this means go to step (1) to find a better algorithm now that you understand the problem better. (Writing a quick prototype is often a good way of exploring this cheaply)
5) IF it still does not meet the requirements, start considering optimisations that may help speed up the runtime (for example, look-up tables, caching, etc). To drive this process, profiling is usually an important tool to help you locate the bottlenecks and inefficiencies in the code, so you can make the greatest gain for the time you spend on the code.
I should point out that an experienced programmer working on a reasonably familiar problem may be able to jump through the first steps mentally and then just apply a pattern, rather than physically going through this process every time, but this is simply a shortcut that is gained through experience.
Thus, there are many "optimisations" that experienced programmers will build into their code automatically. These are not "premature optimisations" so much as "common-sense efficiency patterns". These patterns are quick and easy to implement, but vastly improve the efficiency of the code, and you don't need to do any special timing tests to work out whether or not they will be of benefit:
Not putting unnecessary code into loops. (Similar to the optimisation of removing unnecessary code from existing loops, but it doesn't involve writing the code twice!)
Storing intermediate results in variables rather than re-calculating things over and over (see the sketch after this list).
Using look-up tables to provide precomputed values rather than calculating them on the fly.
Using appropriate-sized data structures (e.g. storing a percentage in a byte (8 bits) rather than a long (64 bits) will use one-eighth of the RAM)
Drawing a complex window background using a pre-drawn image rather than drawing lots of individual components
Applying compression to packets of data you intend to send over a low-speed connection to minimise the bandwidth usage.
Drawing images for your web page in a style that allows you to use a format that will get high quality and good compression.
And of course, although it's not technically an "optimisation", choosing the right algorithm in the first place!
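To illustrate the "store intermediate results" pattern from the list above (the orders collection and GetExchangeRate are hypothetical; the point is simply that the invariant work is done once rather than per iteration):

    // Wasteful: re-does the same lookup on every iteration.
    foreach (var order in orders)
        total += order.Quantity * GetExchangeRate("USD", "EUR");

    // Common-sense version: compute the invariant value once, outside the loop.
    decimal rate = GetExchangeRate("USD", "EUR");
    foreach (var order in orders)
        total += order.Quantity * rate;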
For example, I just replaced an old piece of code in our project. My new code is not "optimised" in any way, but (unlike the original implementation) it was written with efficiency in mind. The result: Mine runs 25 times faster - simply by not being wasteful. Could I optimise it to make it faster? Yes, I could easily get another 2x speedup. Will I optimise my code to make it faster? No - a 5x speed improvement would have been sufficient, and I have already achieved 25x. Further work at this point would just be a waste of precious programming time. (But I can revisit the code in future if the requirements change)
Finally, one last point: The area you are working in dictates the bar you must meet. If you are writing a graphics engine for a game or code for a real-time embedded controller, you may well find yourself doing a lot of optimisation. If you are writing a desktop application like a notepad, you may never need to optimise anything as long as you aren't overly wasteful.
When starting out, just delivering a product is more important than optimizing.
Over time you are going to profile various applications and will learn coding skills that will naturally lead to optimized code. Basically at some point you'll be able to spot potential trouble spots and build things accordingly.
However don't sweat it until you've found an actual problem.
Premature optimization is making an optimization for performance at the cost of some other positive attribute of your code (e.g. readability) before you know that it is necessary to make this tradeoff.
Usually premature optimizations are made during the development process without using any profiling tools to find bottlenecks in the code. In many cases the optimization will make the code harder to maintain and sometimes also increases the development time, and therefore the cost of the software. Worse, some premature optimizations turn out not to make the code any faster at all, and in some cases even make it slower than it was before.
When you have less than 10 years of coding experience.
Having (lots of) experience might be a trap. I know many very experienced programmers (C/C++, assembly) who tend to worry too much because they are used to worrying about clock ticks and superfluous bits.
There are areas such as embedded or realtime systems where these do count, but in regular OLTP/LOB apps most of your effort should be directed towards maintainability, readability and changeability.
Optimization is tricky. Consider the following examples:
Deciding on implementing two servers, each doing its own job, instead of implementing a single server that will do both jobs.
Deciding to go with one DBMS and not another, for performance reasons.
Deciding to use a specific, non-portable API when there is a standard (e.g., using Hibernate-specific functionality when you basically need the standard JPA), for performance reasons.
Coding something in assembly for performance reasons.
Unrolling loops for performance reasons.
Writing a very fast but obscure piece of code.
My bottom line here is simple. Optimization is a broad term. When people talk about premature optimization, they don't mean you need to just do the first thing that comes to mind without considering the complete picture. They are saying you should:
Concentrate on the 80/20 rule - don't consider ALL the possible cases, but the most probable ones.
Don't over-design stuff without any good reason.
Don't write code that is not clear, simple and easily maintainable if there is no real, immediate performance problem with it.
It really all boils down to your experience. If you are an expert in image processing, and someone requests you do something you did ten times before, you will probably push all your known optimizations right from the beginning, but that would be ok. Premature optimization is when you're trying to optimize something when you don't know it needs optimization to begin with. The reason for that is simple - it's risky, it's wasting your time, and it will be less maintainable. So unless you're experienced and you've been down that road before, don't optimize if you don't know there's a problem.
Note that optimization is not free (as in beer)
it takes more time to write
it takes more time to read
it takes more time to test
it takes more time to debug
...
So before optimizing anything, you should be sure it's worth it.
That Point3D type you linked to seems like the cornerstone of something, and the case for optimization was probably obvious.
Just like the creators of the .NET library didn't need any measurements before they started optimizing System.String. They would have to measure during though.
But most code does not play a significant role in the performance of the end product. And that means any effort in optimization is wasted.
Besides all that, most 'premature optimizations' are untested/unmeasured hacks.
Optimizations are premature if you spend too much time designing those during the earlier phases of implementation. During the early stages, you have better things to worry about: getting core code implemented, unit tests written, systems talking to each other, UI, and whatever else. Optimizing comes with a price, and you might well be wasting time on optimizing something that doesn't need to be, all the while creating code that is harder to maintain.
Optimizations only make sense when you have concrete performance requirements for your project, and then performance will matter after the initial development and you have enough of your system implemented in order to actually measure whatever it is you need to measure. Never optimize without measuring.
As you gain more experience, you can make your early designs and implementations with a small eye towards future optimizations, that is, try to design in such a way that will make it easier to measure performance and optimize later on, should that even be necessary. But even in this case, you should spend little time on optimizations in the early phases of development.

Update .NET desktop application in real-time

I have no experience building a .NET desktop application, all my experience is with the web. A friend of mine has asked me to do a quick estimate for a small desktop application.
The application just displays a list of items from the database. When rows are deleted/added from the database, they need to be deleted/added from the list on the user's desktop.
Is this done pretty easily in a desktop application, or do I need to do any sort of "reload" every X seconds?
The simplest design would involve polling the database every so often to look for new records. Adjust the number of seconds between polling to best reflect the appearance of real time and also for performance.
Any design that would allow the database management system to broadcast updates to a desktop application would be quite complicated and (depending on your needs) would most likely be overkill.
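A hedged sketch of that polling design (LoadItemsFromDatabase, ItemsEqual, RefreshListOnUiThread and currentItems are placeholders; in WinForms or WPF the actual UI update would go through Control.Invoke or the Dispatcher):

    using System;
    using System.Threading;

    // Poll the database every few seconds and refresh the list only when something changed.
    // Keep a reference to the timer so it is not garbage collected.
    var pollTimer = new Timer(_ =>
    {
        var items = LoadItemsFromDatabase();      // e.g. SELECT the rows you display (placeholder)
        if (!ItemsEqual(items, currentItems))     // cheap change check: row count, max timestamp, etc.
        {
            currentItems = items;
            RefreshListOnUiThread(currentItems);  // marshal back to the UI thread in a real app
        }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));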
Elaborating on Andrew Hare's design slightly, I'd suggest that you include some sort of mechanism to 'short-circuit' the refresh cycle when user interaction occurs, i.e.
Refresh every x seconds
AND
Immediately if the user clicks a control that is deemed to be a critical one AND the required update is less than x records
EXCEPT
where this would increase the refresh rate beyond a certain throttle value
Basically, you want to give the impression of high performance. Perceived performance doesn't mean accomplishing tasks quickly; it means doing the slow work during periods when you expect the user to be thinking, faffing around or typing something, rather than when they're waiting for a response. Very few applications are busy for even a small fraction of the time they are running - any perceived slowness comes from a poor design where the program does too much work at once at the point the user asks for it, requiring them to wait. Caching in the background allows you to assign only the bare minimum amount of work to directly responding to user input, improving the user's perception of performance.
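One way that refresh policy might look in code, as a sketch only (the interval, the throttle and RefreshList are placeholders, and the "less than x records" condition is left out for brevity):

    DateTime lastRefresh = DateTime.MinValue;
    TimeSpan normalInterval = TimeSpan.FromSeconds(10);
    TimeSpan throttle = TimeSpan.FromSeconds(1);       // never refresh more often than this

    void MaybeRefresh(bool userClickedCriticalControl)
    {
        TimeSpan sinceLast = DateTime.UtcNow - lastRefresh;
        bool due = sinceLast >= normalInterval;                                  // the regular cycle
        bool shortCircuit = userClickedCriticalControl && sinceLast >= throttle; // user-driven refresh
        if (due || shortCircuit)
        {
            RefreshList();                             // placeholder for the actual reload
            lastRefresh = DateTime.UtcNow;
        }
    }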
Trying to be directly helpful:
You state you're using .Net - which is handy. .Net databinding is very rich and powerful, and is quite likely to make this job a breeze.
However - read on...
There is a chance that it won't do exactly what you want. This is where databinding becomes a massive pain. Databinding requires certain things to be set up the way .Net wants it, and if they aren't, it's quite a lot of work reimplementing the basic functionality in the way you require. In this case, don't hesitate to reach for the MSDN documentation, and StackOverflow. Ask Early, Ask Often.

Multicore programming: the hard parts

I'm writing a book on multicore programming using .NET 4 and I'm curious to know what parts of multicore programming people have found difficult to grok or anticipate being difficult to grok?
What's a useful unit of work to parallelize, and how do I find/organize one?
All these parallelism primitives aren't helpful if you fork a piece of work that is smaller than the forking overhead; in fact, that buys you a nice slowdown instead of what you are expecting.
So one of the big problems is finding units of work that are obviously more expensive than the parallelism primitives. A key problem here is that nobody knows what anything costs to execute, including the parallelism primitives themselves. Clearly, calibrating these costs would be very helpful. (As an aside, we designed, implemented, and daily use a parallel programming language, PARLANSE, whose objective was to minimize the cost of the parallelism primitives by allowing the compiler to generate and optimize them, with the goal of making smaller bits of work "more parallelizable".)
One might also consider discussing big-O notation and its applications. We all hope that the parallelism primitives have cost O(1). If that's the case, then if you find work with cost O(x) > O(1), that work is a good candidate for parallelization. If your proposed work is also O(1), then whether it is effective or not depends on the constant factors, and we are back to calibration as above.
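A quick way to feel out where that break-even point sits on a given machine is to time the same loop sequentially and in parallel at different sizes; a sketch, with DoWork standing in for the per-item cost:

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    foreach (int n in new[] { 100, 10000, 1000000 })
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) DoWork(i);       // sequential baseline
        long sequentialMs = sw.ElapsedMilliseconds;

        sw.Restart();
        Parallel.For(0, n, i => DoWork(i));          // pays the forking overhead
        long parallelMs = sw.ElapsedMilliseconds;

        Console.WriteLine("n={0}: sequential {1} ms, parallel {2} ms", n, sequentialMs, parallelMs);
        // Below some n the parallel version loses: the work is cheaper than the primitives that fork it.
    }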
There's the problem of collecting work into large enough units, if none of the pieces are large enough. Code motion, algorithm replacement, ... are all useful ideas to achieve this effect.
Lastly, there's the problem of synchronization: when do my parallel units have to interact, what primitives should I use, and how much do those primitives cost? (More than you expect!)
I guess some of it depends on how basic or advanced the book/audience is. When you go from single-threaded to multi-threaded programming for the first time, you typically fall off a huge cliff (and many never recover, see e.g. all the muddled questions about Control.Invoke).
Anyway, to add some thoughts that are less about the programming itself, and more about the other related tasks in the software process:
Measuring: deciding what metric you are aiming to improve, measuring it correctly (it is so easy to accidentally measure the wrong thing), using the right tools, differentiating signal versus noise, interpreting the results and understanding why they are as they are.
Testing: how to write tests that tolerate unimportant non-determinism/interleavings, but still pin down correct program behavior.
Debugging: tools, strategies, when "hard to debug" implies feedback to improve your code/design and better partition mutable state, etc.
Physical versus logical thread affinity: understanding the GUI thread, understanding how e.g. an F# MailboxProcessor/agent can encapsulate mutable state and run on multiple threads but always with only a single logical thread (one program counter).
Patterns (and when they apply): fork-join, map-reduce, producer-consumer, ...
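As a small example of the last item, the producer-consumer pattern maps fairly directly onto BlockingCollection<T> in .NET 4; a sketch, with the input file, ParseLine and Process as placeholders:

    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    var queue = new BlockingCollection<string>(boundedCapacity: 1000);  // bounded so the producer can't run away

    var producer = Task.Factory.StartNew(() =>
    {
        foreach (var line in File.ReadLines("input.txt"))   // placeholder data source
            queue.Add(line);
        queue.CompleteAdding();                             // signals the consumer that no more items are coming
    });

    var consumer = Task.Factory.StartNew(() =>
    {
        foreach (var line in queue.GetConsumingEnumerable())
            Process(ParseLine(line));                       // placeholder work
    });

    Task.WaitAll(producer, consumer);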
I expect that there will be a large audience for e.g. "help, I've got a single-threaded app with 12% CPU utilization, and I want to learn just enough to make it go 4x faster without much work" and a smaller audience for e.g. "my app is scaling sub-linearly as we add cores because there seems to be contention here, is there a better approach to use?", and so a bit of the challenge may be serving each of those audiences.
Since you are writing a whole book on multi-core programming in .Net, I think you can also go a little beyond multi-core.
For example, you could include a chapter on parallel computing in a distributed system in .Net. Unfortunately, there are no mature frameworks in .Net yet; DryadLinq is the closest. (On the other side, Hadoop and its friends on the Java platform are really good.)
You can also use a chapter demonstrating some GPU computing stuff.
One thing that has tripped me up is which approach to use to solve a particular type of problem. There's agents, there's tasks, async computations, MPI for distribution - for many problems you could use multiple methods but I'm having difficulty understanding why I should use one over another.
To understand: low level memory details like the difference between acquire and release semantics of memory.
Most of the rest of the concepts and ideas (anything can interleave, race conditions, ...) are not that difficult with a little usage.
Of course the practice, especially if something is failing sometimes, is very hard as you need to work at multiple levels of abstraction to understand what is going on, so keep your design simple and as far as possible design out the need for locking etc. (e.g. using immutable data and higher level abstractions).
It's not so much the theoretical details as the practical implementation details that trip people up.
What's the deal with immutable data structures?
All the time, people try to update a data structure from multiple threads, find it too hard, and someone chimes in "use immutable data structures!", and so our persistent coder writes this:
    ImmutableSet set;

    void ThreadLoop1()
    {
        foreach (Customer c in dataStore1)
            set = set.Add(ProcessCustomer(c));   // unsynchronized read-modify-write on a shared field
    }

    void ThreadLoop2()
    {
        foreach (Customer c in dataStore2)
            set = set.Add(ProcessCustomer(c));   // races with the loop above
    }
The coder has heard all their life that immutable data structures can be updated without locking, but the new code doesn't work, for obvious reasons.
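The reason it fails is that set = set.Add(...) is a non-atomic read-modify-write: both threads can read the same old set, and whichever writes last silently discards the other's addition. One way to repair it, sketched with the same hypothetical types as above, is to retry the swap with Interlocked.CompareExchange (giving each thread its own set and merging at the end is another):

    using System.Threading;

    // Retry loop: snapshot, apply, and only publish if nobody else won the race in between.
    static void AddSafely(ref ImmutableSet set, Customer c)
    {
        var processed = ProcessCustomer(c);                    // do the work once, retry only the swap
        while (true)
        {
            ImmutableSet seen = set;                           // snapshot of the current set
            ImmutableSet updated = seen.Add(processed);
            if (Interlocked.CompareExchange(ref set, updated, seen) == seen)
                return;                                        // our update went in; otherwise retry
        }
    }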
Even if you're targeting academics and experienced devs, a little primer on the basics of immutable programming idioms can't hurt.
How to partition roughly equal amounts of work between threads?
Getting this step right is hard. Sometimes you break up a single process into 10,000 steps which can be executed in parallel, but not all steps take the same amount of time. If you split the work on 4 threads, and the first 3 threads finish in 1 second, and the last thread takes 60 seconds, your multithreaded program isn't much better than the single-threaded version, right?
So how do you partition problems with roughly equal amounts of work between all threads? Lots of the good heuristics for solving bin-packing problems should be relevant here.
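One practical .NET 4 answer is not to pre-split the work into a fixed chunk per thread at all, but to hand out smallish ranges dynamically so that threads which finish early simply grab more; a sketch, with steps and ExecuteStep as placeholders:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Dynamic load balancing: a thread that draws a slow range doesn't leave the other cores idle,
    // because the fast threads keep pulling the remaining ranges.
    var partitioner = Partitioner.Create(0, steps.Length, rangeSize: 100);
    Parallel.ForEach(partitioner, range =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
            ExecuteStep(steps[i]);                   // placeholder per-step work
    });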
How many threads?
If your problem is nicely parallelizable, adding more threads should make it faster, right? Well not really, lots of things to consider here:
Even on a single-core processor, adding more threads can make a program faster, because more threads give the OS more opportunities to schedule your work, so it gets more execution time than the single-threaded program would. But the law of diminishing returns applies: adding more threads increases context-switching, so at a certain point, even if your program is getting the most execution time, performance can still be worse than the single-threaded version.
So how do you spin off just enough threads to minimize execution time?
And if there are lots of other apps spinning up threads and competing for resources, how do you detect performance changes and adjust your program automagically?
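There is no universal answer, but the usual starting point for CPU-bound work is one worker per core, then measure and adjust; a sketch (workItems and ProcessItem are placeholders):

    using System;
    using System.Threading.Tasks;

    // Start from one worker per core for CPU-bound work; oversubscribe only if measurement says so.
    var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
    Parallel.ForEach(workItems, options, item => ProcessItem(item));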
I find the conceptions of synchronized data moving across worker nodes in complex patterns very hard to visualize and program.
Usually I find debugging to be a bear, also.

Line of business applications: Will F# make my life easy?

I develop mainly line-of-business applications. No scientific operations. No complex calculations. Just tying the user interface to the database. The only reason I use threading is to do some work in the background and still keep the UI responsive.
This may not be the best approach, but this is what I follow:
1. Create a working application first (without threads) and give it to the end user to play with, for the sake of feedback.
2. Once all requirements are locked, I try to use threads wherever it makes sense to improve performance.
The code for steps 1 & 2 is overwhelmingly different, and the threading code dominates the actual code.
1. Will F# make my life easier in the case of Line of Business applications?
2. Are there any particular UI technologies that will fit best with F#? I mainly work on ASP.NET & Silverlight, and WPF now & then.
3. Are there any good examples of Line of Business applications/demos with F#?
I am not asking whether it is possible to develop Line of Business application in F#. I am asking whether F# will make my life easier to develop Line of Business applications when compared to C#? My main concern will be threading & UI synchronization.
I'm in the same boat as you, doing lots and lots of line-of-business type apps, nothing "fun" like games or compilers or search engines.
Your mileage will vary, but at least in my own experience a lot of dev teams are reluctant to jump right into F# because no one else on the team knows or has even heard of it. And right off the bat, you have to fight those questions like "what does it do differently from C#?" If you can convince your boss to let you write a few demo apps with it, go for it!
So with that being said, I find that C# isn't very good at business rules, but it handles GUIs like a champ; F#'s immutability makes GUI development awkward, but business rules and workflows feel natural. So the two languages have their own strengths and complement one another's weaknesses.
In my own LOB apps, F# runs circles around C# in a few areas:
F#'s async workflows and mailbox processors are orders of magnitude easier to work with than native threads and even the Task Parallel Library. Since switching to mailbox processors for inter-thread communication, I don't even remember the last time I've had to lock or Thread.Join() anything for synchronization at all.
Defining business rules engines and DSLs with unions FTW! Every non-trivial LOB app I've ever worked on has its own half-baked rules language and interpreter, and it's almost always based on recursively switching on an enum to drill through the rules and find a match. The current project I have now contains 1300+ public classes; 900 or so are simple container classes that represent a rule. I think representing the rules as an F# union would substantially reduce the code bloat and make for a better engine.
Immutable code just works better -- if you get a new object with an invalid state, you don't have to search far to find the offending line of code; everything you need to know is on the call stack. If you have a mutable object with an invalid state, sometimes you have to spend a lot of time tracking it down. You can write immutable code in C#, but it's really hard not to fall back on mutability, especially when you're modifying an object in a loop.
I hate nulls. I can't give an exact estimate, but it feels like half the bugs we get in production are null reference exceptions -- an object is improperly initialized and you don't know about it until you're 30 stack frames deep in code. I think F# would help us write more bug free code the first time around.
C# usually works well when:
You're writing GUI code or working with inherently mutable systems.
Exposing a DLL or web service to many different clients.
Your boss won't let you use the tools you want ;)
So if you can get over the "why would we want to use a new language hurdle", I think F# will indeed make your life easier.
That's a very hard question to answer - I think that F# would make some aspects of your life much easier and some a bit harder.
On the plus side, dealing with threading should be relatively painless - F# async workflows are a huge benefit. Also, F# Interactive makes rapidly iterating and exploring code very easy. To me, this is a huge benefit over C#, since I can test out minor code changes without going through a full build.
On the down side, the tooling for F# isn't where it is for C#, which means that you won't have access to GUI-builders, refactoring tools, etc. It's hard to say how big of a productivity hit this would cause you without knowing more about your scenario. One workaround would be to create a multi-language solution which has a thin C# front-end, but again the complexity may not be worth the benefit.
@Juliet and @kvb have good answers, I just want to reiterate how useful F# is for making threading easy. In my blog post "An RSS Dashboard in F#, part six (putting it all together with a WPF GUI)", I say
...Note the nifty use of ‘async’ here – I use Async.StartImmediate, which means all the code runs on the UI thread (where the Click handler starts), but I can still do non-blocking sleeps that don’t jam up the UI. This is one way that F# async just blows the doors off everything else. ...
Our “ago” information (“3 minutes ago”) will quickly get stale if we don’t refresh it, so we start a loop that redraws every minute. Once again, F# async kicks butt, as I can just write this as though it were a synchronous loop running on the UI thread, but the non-blocking sleep call ensures that the UI stays live. Awesome. ...
but that blog post is an example of using F# to hand-code the entire UI. That implies trading away all of the great GUI tooling you get with C# or VB. I imagine that a hybrid approach can potentially net almost all of the benefits of both (with the cost of having two projects in the solution where you previously just had one), but I don't (yet) have direct experience of my own to share here.
(Is there a canonical "problem example" of a C# GUI app where you need to add threading to improve perf or keep the app live during some long operation? If so, I should check it out.)
Something you might like to see:
The First Substantial Line of Business Application in F#
A big LOB app in F#.
To address this, I posted some thoughts of mine on using F#:
http://fadsworld.wordpress.com/2011/04/13/f-in-the-enterprise-i/
http://fadsworld.wordpress.com/2011/04/17/fin-the-enterprise-ii-2/
I'm also planning to do a video tutorial to finish up the series and show how F# can contribute in UX programming.
I'm only talking in context of F# here.
-Fahad
