Mike's one of the more fascinating people around: he's one of the only people ever to receive a discharge from the Navy as a conscientious objector after becoming a pacifist.
Since you're using the auto-tuned index for FLANN, it's going to build a large number of indexes trying to find the best-performing one, which can take a long time. Properly tuned, I would expect FLANN to be much faster, though that's to be expected since it doesn't do exact searches. Also, benchmarking the index build time together with query times doesn't make sense to me, since depending on the application you may not care how long the index takes to build.
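With a benchmarking harness like criterion you can keep the build outside the timed region entirely. Here's a minimal sketch; buildIndex and query are toy placeholders for illustration, not FLANN bindings:

```haskell
import Criterion.Main
import Data.List (sort, minimumBy)
import Data.Ord (comparing)

-- Toy stand-ins for a real index: "building" is just sorting,
-- and a "query" scans for the nearest value.
buildIndex :: [Double] -> [Double]
buildIndex = sort

query :: [Double] -> Double -> Double
query idx q = minimumBy (comparing (abs . subtract q)) idx

main :: IO ()
main = defaultMain
  [ -- env constructs the index once, outside the timed region,
    -- so this benchmark measures query time alone.
    env (return (buildIndex [1 .. 100000])) $ \idx ->
      bench "query only" $ nf (query idx) 42.5
    -- If build time matters for your application, time it separately.
  , bench "build only" $ nf buildIndex [1 .. 100000]
  ]
```

criterion's env builds and fully forces the environment before any measurement starts, so the index construction never shows up in the query timings.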
"I’d guess that it took about 10 times as much work to create my Haskell-based cover tree than it would have taken to create a similar C-based implementation. (I’m roughly as proficient in each language.)"
Interesting, since I would have imagined that implementing something in C takes much longer than doing the same in something like Python or Java. On the other hand, I suppose getting something to work is not the same as getting it to run fast.
He says "Because Haskell code is so high level, it requires aggressive compiler optimizations to perform well."
I think you are basically right. It is easier to make something correct in Haskell, and it is generally easier to make something reasonably fast too. But in the specific circumstances where you want maximum performance and care about the assembly or intermediate code output, the performance optimization becomes easier in a low-level language like C.
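For example, checking whether GHC actually produced the tight loop you hoped for means reading the intermediate Core it dumps rather than the source. A small illustration (the function is made up; -ddump-simpl is a standard GHC flag):

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- A tight numeric loop. Compile with: ghc -O2 -ddump-simpl Sum.hs
-- and look for an unboxed Int# accumulator in the dumped Core.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 10000000)
```

In C you'd just read the generated assembly; in Haskell the gap between source and machine code is wide enough that this kind of inspection takes compiler-specific tooling.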
Having said that, the 10x effort is an investment: if you get a reusable library out of it, it will pay off, because the rest of your code is in Haskell, not C!
Carter claims he can perform optimizations that aren't possible in other languages, though I can't personally speak to them since I'm not well versed in numerical code.
It's not yet in shape for general use, nor for being advocated. There is some tech there that's unique in terms of handling matrix computation on rich formats nicely.
I'd prefer we reserve advocacy for when I get it to the point where there's public documentation, or at least examples :)
This algorithm operates on trees, but I've always wondered how one can write efficient algorithms that operate on general graphs in a functional language such as Haskell.
For example, how would one write a fast version of Dijkstra's shortest path algorithm in Haskell?
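One standard answer is to lean on persistent maps and sets: Data.Map for the adjacency lists and Data.Set as a priority queue with lazy deletion. A rough sketch, assuming Int node labels (the Graph type here is just an illustration):

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph = M.Map Int [(Int, Int)]   -- node -> [(neighbour, edge weight)]

-- Shortest distances from src to every reachable node, using a
-- Data.Set of (distance, node) pairs as the priority queue.
dijkstra :: Graph -> Int -> M.Map Int Int
dijkstra g src = go (S.singleton (0, src)) M.empty
  where
    go queue dist = case S.minView queue of
      Nothing -> dist
      Just ((d, v), rest)
        | v `M.member` dist -> go rest dist  -- stale entry: v already settled
        | otherwise ->
            let dist'  = M.insert v d dist
                relax q (u, w)
                  | u `M.member` dist' = q
                  | otherwise          = S.insert (d + w, u) q
                queue' = foldl relax rest (M.findWithDefault [] v g)
            in  go queue' dist'

-- ghci> dijkstra (M.fromList [(1,[(2,7),(3,2)]), (2,[]), (3,[(2,1)])]) 1
-- fromList [(1,0),(2,3),(3,2)]
```

This runs in O(E log V), but it loses constant factors to a mutable-array implementation; when that matters, the usual escape hatches are ST-monad mutable arrays or a dedicated graph library like fgl.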
Not that I'm some hotshot Haskell expert, but I don't think that GHC does the most amazing job of applying optimizations, and I definitely think its performance model is really hard to reason about.
Still, it was a nice breakdown of a real, non-toy problem in Haskell!
I've heard quite the opposite: that GHC is state of the art in the depth and breadth of optimizations it can perform. I'm struggling to remember whether the context was compilers in general or just pure functional programming.
GHC has great optimizations within the scope of a module (and seems to strike the right balance between polymorphism, modularity, and linking), and it can inline some things across module boundaries. But at the level of entire programs, more aggressive optimizations are possible; I gather there are tradeoffs there too.
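One concrete knob for the cross-module case is the INLINABLE pragma, which ships a function's full definition in the module's interface file so that other modules can inline or specialise it. A tiny sketch (the module and function names are made up):

```haskell
module SumSq (sumSquares) where

-- INLINABLE keeps the full definition in the interface file, so
-- callers in other modules can inline and specialise it at their use
-- sites; without a pragma, GHC only exposes functions it judges
-- small enough to unfold across module boundaries.
{-# INLINABLE sumSquares #-}
sumSquares :: Num a => [a] -> a
sumSquares = sum . map (\x -> x * x)
```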