"My major gripe with haskell was that I could never tell the space/time complexity of my code without serious analysis (that among other things involves second-guessing the compiler's ability to optimize). This makes writing good quality code harder than it needs to be."
It seems to me that there is some connection/analogy with garbage collection. Garbage collection also makes reasoning about the space usage of programs difficult, since it happens at some future, unspecified time. Yet GC is almost always regarded as good (although historically there was a lot of resistance to it), while lazy evaluation is regarded as bad by many. Maybe the issue really is "good enough compilers"?
It's really easy to fuck up what will be evaluated, when, and wind up with disastrous performance ramifications.
As an example, I wanted to accumulate a list of summary data as I iterated through a recursive function. Through a coding error, the elements of the list (although not the list itself) were not evaluated until all the iterations were complete. This meant that, instead of a list of structs, which I thought I had, I had a list of zero-argument functions that, when called, would yield structs, _each of which contained a reference to a large array computed at an earlier iteration of the computation_.
So, I hemorrhaged space. This was a fuck-up, not a missed optimization.
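A minimal sketch of that kind of leak, with made-up names (the original code isn't shown): the accumulator's cons cells get built, but each element stays an unevaluated thunk that keeps its source array alive until the whole traversal finishes.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Hypothetical reconstruction of the bug described above.
data Summary = Summary { total :: Int } deriving Show

-- Leaky version: `Summary (sum xs)` is a thunk, so every `xs`
-- stays referenced until some consumer finally forces the elements.
leakySummaries :: [[Int]] -> [Summary]
leakySummaries = go []
  where
    go acc []        = reverse acc
    go acc (xs:rest) = go (Summary (sum xs) : acc) rest

-- Fixed version: force each summary before consing it, so each
-- array becomes garbage as soon as its iteration completes.
strictSummaries :: [[Int]] -> [Summary]
strictSummaries = go []
  where
    go acc []        = reverse acc
    go acc (xs:rest) =
      let !s = Summary $! sum xs  -- bang pattern + ($!) evaluate now
      in go (s : acc) rest
```

Both versions return the same list; they differ only in *when* the sums are computed, which is exactly the kind of difference that's invisible in the types and easy to miss in review.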