Does anyone else think that none of this is really that important? Really, is not having to name a function and being able to use it inline such an important feature, compared to defining it just beforehand? And is map really that much more powerful than a for loop? I get the argument that it's a nice-to-have, but does it justify a post called "can your language do this"? Really?
Automatic theorem provers are stuck in AI winter. Pre/post-conditions and dependent types currently need too much handholding. Once the AI firms deign to release heuristic language models for proof discovery, we can have nice things like that.
> In fact, if you have two CPUs handy, maybe you could write some code to have each CPU do half of the elements, and suddenly map is twice as fast.
I remember this came up a lot in late-aughts functional programming discussions. But is there any language where this actually works? Where map is just parallel because of course it is?
F# defines a parallel map with the same type signature as sequential map, but you have to ask for Array.Parallel.map rather than Array.map or whatever, because of the possibility of side effects. Only in a pure language like Haskell could you safely parallelize automatically. I'm not sure why Haskell doesn't - maybe overhead?
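To make the "same signature" point concrete, here's a minimal sketch in Python (not F# - `parallel_map` is a hypothetical helper, not a standard library function). It's a drop-in for `map`, and like `Array.Parallel.map` it's only safe when the function being mapped has no side effects:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, xs, workers=4):
    """Drop-in replacement for map(f, xs); only safe if f is pure."""
    # Threads keep the sketch simple; for CPU-bound work in Python
    # you'd want processes instead, to sidestep the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order in its results.
        return list(pool.map(f, xs))
```

The caller can't tell this apart from sequential `map` by its type, which is exactly why purity matters: with side effects, the evaluation order suddenly becomes observable.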
I think it's pretty much impossible to tell statically whether you're going to be drowning in overhead spinning up a thread for every pixel on the screen, or leaving tons of parallelism on the table. Maybe with PGO it could be made to work. But even if it could be automatic, parallelism can be so important, and usually requires enough explicit design to enable, that you probably don't want to delegate it.
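The usual manual answer to the "thread per pixel" problem is chunking: one task per worker rather than per element, so scheduling overhead is amortized. A rough Python sketch (the chunk-size heuristic here is a guess, which is the commenter's point - the right granularity depends on the workload):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_parallel_map(f, xs, workers=4):
    """Map f over xs in parallel, one chunk of work per worker task."""
    # Chunk size chosen so each worker gets roughly one chunk; a real
    # implementation would tune this against per-task overhead.
    n = max(1, len(xs) // workers)
    chunks = [xs[i:i + n] for i in range(0, len(xs), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each task runs f sequentially over its chunk.
        results = pool.map(lambda chunk: [f(x) for x in chunk], chunks)
    return [y for chunk in results for y in chunk]
```

Picking `n` well is precisely the judgment a compiler can't easily make statically: too small and you pay per-task overhead on every element, too large and cores sit idle.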
This to me looks like a great example of The Wrong Abstraction. You're removing a tiny bit of code duplication at the cost of creating a weird, hard-to-read, inflexible, limited-use helper; he's taken something straightforward and turned it into, honestly, a bit of a hash.