Haskell's Performance
This thread (still active at the time of writing) on the haskell-cafe mailing list is a nice example of the one major problem I have with Haskell: performance.
To summarise the thread: the infamous great language shootout (which is only infamous if you’re into alternative languages like Haskell, which get their asses handed to them in microbenchmarks for varying numbers of silly reasons) showed Haskell placing last in the word count test. (Note that the benchmark results have since been updated with a new GHC-specific version of the solution, which is far uglier, but also about 60x faster than the original version.) The OCaml guys went to town on this result, and of course claimed that Haskell was dreadfully inefficient, with a half-condescending remark at the end that “Haskell’s approach has been expensive in terms of performance”. The Haskell community ignited, and within a day or two came up with a version that is twice as fast as GNU @wc@, which is more or less on par with the OCaml @wc@ version. This is good. However, two rather bad things stand out to me:
- Malcolm Wallace’s “Space leak or what?” post. Sure, Haskell has a space leak here. The problem is, you don’t notice these space leaks unless you’re an expert Haskell programmer. (I certainly can’t tell that the code has a space leak, and I’ve been using Haskell for quite a while now.) The bigger problem is knowing how to fix those space leaks once you do find them. (For a concrete illustration of what a space leak looks like, see the sketch just after this list.)
- The final @wc@ solution submitted for the contest "looks like a goat":http://shootout.alioth.debian.org/lang/ghc/wc.ghc.html. Why should I have to use @unsafeRead@ (and thus potentially destroy any guarantees that Haskell’s awesome type system gives me) to get decent performance out of this thing, and why do I need to resort to @IOUArray@ and lots of uses of @seq@? Compare the Haskell version to the OCaml version, and draw your own conclusions about which is more declarative, high-level, and, most importantly, which one doesn’t require an expert in the language to write.
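To make the space-leak point concrete, here is a minimal sketch (my own toy example, not Malcolm’s @wc@ code): the lazy fold builds a million-thunk chain before anything is evaluated, while the @seq@-annotated version runs in constant space.

```haskell
module Main where

-- A classic space leak: foldl's lazy accumulator builds a chain of
-- unevaluated thunks (((0 + 1) + 2) + ...) that is only forced at
-- the very end, so memory use grows linearly with the input.
sumLeaky :: [Int] -> Int
sumLeaky = foldl (+) 0

-- Forcing the accumulator at each step with seq keeps it evaluated,
-- so the fold runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = acc `seq` go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

Both functions compute the same answer; only their memory behaviour differs, and nothing in the types warns you about it.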
The main problem I see with all this is that it’s just too hard for an average Haskell programmer to get good performance out of Haskell. As people have shown on the haskell-cafe mailing list, you certainly can get very good performance out of Haskell: in plenty of cases, surpassing an equivalent version written in C. However, you can only do that if you’re a complete Haskell guru, and even then, things start looking really ugly: you might as well just rewrite that part of the program in C and call it via the Haskell Foreign Function Interface. I think the main cause of this is that Haskell is simply too unpredictable: in most cases, it’s laziness which causes the unpredictability. Note that this is different from saying laziness leads to slower performance: this is untrue, because in many cases, laziness can improve performance. Predictability is the important factor.
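As a tiny illustration of laziness working in your favour (@expensive@ here is just a hypothetical stand-in for some costly function):

```haskell
import Data.List (find)

-- Thanks to laziness, 'map expensive' is only evaluated as far as
-- 'find' needs: the search stops at the first match, even though
-- the code reads as "transform the whole list, then search it".
firstBig :: [Int] -> Maybe Int
firstBig xs = find (> 1000) (map expensive xs)
  where
    expensive n = n * n  -- hypothetical stand-in for real work

main :: IO ()
main = print (firstBig [1 ..])  -- terminates despite the infinite list
```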
For optimisation, you must know what factors are causing things to be slow; otherwise you’ll be searching in the dark for which bits to optimise. So, Haskell users will tell you to profile (which is sound advice no matter what language you’re using). That’s fine. The problem then is that you have to know why something is slow, and that’s where Haskell gets complex. Is there a space leak? (Do you even know what a space leak is?) Is your array strict instead of lazy? If it’s strict, is it boxed instead of unboxed? (Do you even know what unboxing is?) Can you use @unsafeRead@ instead of @readArray@? (Did you know that @unsafeRead@ exists? I sure didn’t!)
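To give a flavour of what that style of code looks like, here is a minimal sketch (my own, not the shootout entry) of summing an unboxed mutable array with @unsafeRead@ and @seq@:

```haskell
import Data.Array.Base (unsafeRead)
import Data.Array.IO   (IOUArray, newListArray)

-- Sum an unboxed, mutable array without the bounds check that
-- readArray performs on every access. unsafeRead trades safety
-- for speed: an out-of-range index is undefined behaviour.
sumUnsafe :: IOUArray Int Int -> Int -> IO Int
sumUnsafe arr n = go 0 0
  where
    go acc i
      | i >= n    = return acc
      | otherwise = do
          x <- unsafeRead arr i
          let acc' = acc + x
          acc' `seq` go acc' (i + 1)

main :: IO ()
main = do
  arr <- newListArray (0, 9) [1 .. 10] :: IO (IOUArray Int Int)
  sumUnsafe arr 10 >>= print  -- 55
```

Every one of those decisions (unboxed array, manual bounds elision, explicit strictness) is one the programmer has to know to make; none of them is the obvious first spelling of “sum an array”.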
I think if Haskell were to give the programmer slightly more control (or, in more formal terms, enable the programmer to choose between different operational semantics), Haskell optimisation could move from the expert’s domain to the novice’s domain. This is one area where the Clean language has innovated over Haskell: one may claim that uniqueness types and heavyweight use of strictness/unboxing annotations make Clean programs far less pretty than their Haskell versions (and I agree), but if you need speed, you can get it much more easily in Clean than in Haskell. The language specification gives you more control over what’s going on; strictness annotations are one example of such a control mechanism. GHC does give the programmer access to many new operations intended for optimisation, but in a syntactically heavy manner, which discourages you from using them. This is in contrast to Clean, where you can, for example, make an unboxed, strict array just by writing @[# 1, 2, 3 !]@ rather than @[1, 2, 3]@. I don’t know how to make a strict, unboxed array in Haskell without looking up GHC’s library documentation.
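For the record, once you do dig up the documentation, the closest Haskell spelling of Clean’s @[# 1, 2, 3 !]@ turns out to be a library call rather than syntax:

```haskell
import Data.Array.Unboxed (UArray, listArray, (!))

-- An immutable, strict, unboxed array: Haskell's answer to Clean's
-- [# 1, 2, 3 !], built with a library function instead of dedicated
-- syntax.
xs :: UArray Int Int
xs = listArray (0, 2) [1, 2, 3]

main :: IO ()
main = print (xs ! 1)  -- 2
```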
It’ll be interesting to see whether the Haskell language itself evolves to plug this hole. I know I would recommend Haskell over OCaml to more people if it made performance optimisation easier. Note that what I’m discussing here is coaxing good performance out of a Haskell compiler, rather than blazing performance: if you absolutely require blazing performance and nothing else will do, code that small fragment in C or assembly and call it via the FFI. Multi-language approaches are good. The goal, however, is to make it easy to optimise a program from an abysmal run time to a good run time: a run time that you wouldn’t be embarrassed to demonstrate to non-Haskell folks.
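The FFI route is at least straightforward; a minimal sketch binding C’s @sqrt@ looks like this:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Bind C's sqrt from math.h; the hot inner loop can live in C while
-- the rest of the program stays in Haskell.
foreign import ccall unsafe "math.h sqrt"
  c_sqrt :: CDouble -> CDouble

main :: IO ()
main = print (c_sqrt 2.0)
```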
Update: This was originally written toward the end of 2004. I’m glad to say that things have changed since then: Don Stewart’s @Data.ByteString@ library has closed the performance gap between Haskell and C for string processing and binary I/O (even beating C occasionally, thanks to GHC’s compiler optimisations being far superior to gcc’s), which is arguably the main area where you’d easily run into serious performance problems. GHC 6.6 has also gained bang patterns and some syntactic sugar for arrays. (The latter isn’t quite as nice as Clean’s strict array syntax, but it’s a huge improvement.) Both features greatly lower the barrier to entry if you’re trying to coax blazing performance out of Haskell.
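By way of illustration, a @wc@-style counter over @Data.ByteString@ is short, high-level, and fast (a sketch of mine, not the actual shootout entry):

```haskell
import qualified Data.ByteString.Char8 as B

-- Count lines, words and bytes from standard input, wc-style, with
-- none of the unsafeRead contortions of the old shootout entry.
main :: IO ()
main = do
  s <- B.getContents
  print (length (B.lines s), length (B.words s), B.length s)
```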