How quickly do algorithms improve? (news.mit.edu)
57 points by npalli on Oct 17, 2021 | 20 comments



In section III B (p.8) the paper states that “in 1982…Gries[24] devised another linear algorithm” for the maximum subarray problem. I remember that. Jon Bentley visited Cornell, and posed the problem to me and Gries together. We thought on it independently overnight and presented a linear solution to Bentley the next day. Gries had a simpler invariant, and I had simpler code (using min). I never knew he wrote the paper until now, and am not one of the several people he mentions in the Acknowledgement, although he does say “Bentley…challenged US with this problem”.
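For anyone curious what a linear solution "using min" can look like, here is a minimal sketch in Python (my own reconstruction of the idea, not the code presented to Bentley): track the running prefix sum and the minimum prefix sum seen so far; the best subarray ending at each position is the current prefix minus that minimum.

    def max_subarray(xs):
        # Assumes xs is non-empty; returns the maximum sum over all contiguous subarrays.
        best = float("-inf")
        prefix = 0       # sum of elements seen so far
        min_prefix = 0   # smallest prefix sum seen before the current position
        for x in xs:
            prefix += x
            best = max(best, prefix - min_prefix)
            min_prefix = min(min_prefix, prefix)
        return best

    print(max_subarray([2, -3, 4, -1, 2, -5, 3]))  # 5, from the subarray [4, -1, 2]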


There are a lot of trivial papers to be written when a field is young. Someone has to write them even if the discovery wasn't hard to make.


> Algorithms are sort of like a parent to a computer

One of the less illuminating analogies I've heard. People should stick to "recipe" or "assembly instructions" and not try to get so creative.


Creativity pollutes the communication, but a lack of creativity pollutes the purpose.


recipe / assembly instructions are not quite right, either.

more like a process in a factory - material handling, coordination, input, output


I'm curious how they went about this. A lot of algorithms "improve" on older ones by taking, e.g., a probabilistic approach (thus introducing failure modes that weren't there previously), by using mechanical techniques like SIMD, or by parallelizing previously sequential solutions. How do you measure and compare improvements like that? The first one is hard to compare fairly, and the other ones can easily depend on the hardware. Of course there are cases where two algorithms solve exactly the same problem and one is just plain faster too, but did they really evaluate hundreds of those?


The statement more directly backed up by the graphs in the paper is "changes in big-O did more to speed up solving large instances of some problems than did increases in SPECint," and they do seem to mean exactly the same problem. The paper is at

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=954...

It loses any feeling of objectivity, but it'd be interesting to broaden the question to approximate or specialized solutions, or things that make larger problems practical to solve without improving big-O. For some examples:

- probabilistic sketches have been very successful at approximately answering questions that took a lot more work to answer exactly (see the sketch after this list)

- adapting algorithms to hardware like GPUs doesn't reduce how many operations are done, but can make the operations cheaper/faster for the right problem

- similarly, a good parallel or distributed approach to a problem doesn't change the big-O, but is often what allows scaling up to big n in practice

- the B-tree (1970) did not improve on the big-O of the AVL tree (1962), but it made databases as we know them a lot more practical

- LSM trees and compressed columnar storage also don't improve big-O, but provide big benefits for write-heavy and scan-heavy workloads
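To make the first bullet concrete, here is a toy bottom-k ("KMV") distinct-count sketch in Python (my own illustration, not anything from the paper): keep only the k smallest hash values ever seen; if hashes are uniform on [0, 1), the k-th smallest is roughly k/n, so (k - 1) divided by it estimates the number of distinct items using O(k) memory.

    import hashlib, heapq

    def kmv_estimate(items, k=256):
        heap = []      # max-heap (negated values) of the k smallest hashes seen
        kept = set()   # hashes currently in the heap, to skip exact duplicates
        for item in items:
            digest = hashlib.sha1(str(item).encode()).digest()
            h = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
            if h in kept:
                continue
            if len(heap) < k:
                heapq.heappush(heap, -h)
                kept.add(h)
            elif h < -heap[0]:
                evicted = -heapq.heappushpop(heap, -h)
                kept.discard(evicted)
                kept.add(h)
        if len(heap) < k:
            return len(heap)        # fewer than k distinct hashes: the count is exact
        return (k - 1) / -heap[0]   # k-th smallest hash -> estimated distinct count

    # ~50,000 distinct values estimated from a few KB of state, within a few percent.
    print(round(kmv_estimate(i % 50_000 for i in range(200_000))))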

Generally that pushes me more towards "hardware isn't responsible for all the advancements". Also, there's a fun conversation (happening out there, I assume) about how to value the sort of work that's more pragmatic than improving on a complexity bound but also more fundamental than just tuning some asm or C.


(One more thing I might put in the list of "not a big-O improvement but a rearrangement that made new things possible" is an essentially economic observation Google and others made: once you're planning to serve enough queries/sec, putting unreasonable-sounding quantities of data into expensive storage (like much of your Web search index in RAM) starts to make sense. I saw it referred to as "online data-intensive" at https://perspectives.mvdirona.com/2012/03/power-management-o... and linked papers but don't know if it has a more common name.)


Yeah, looks like the answer to my question is their paragraph about "exact algorithm, exact solution" on page 8. Sadly their supplementary materials don't seem to be online yet, so we'll have to see how they're comparing, but it looks interesting.


> how to value the sort of work that's more pragmatic than improving on a complexity bound but also more fundamental than just tuning some asm or C

I'd bet lazily slapping caches on things has saved more total time than all other computer optimizations combined.
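For what it's worth, the lazy version often looks as simple as this (a hypothetical Python example: memoizing an expensive pure function with functools.lru_cache, no algorithmic change at all):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Naive exponential-time recursion; the cache makes the whole thing linear in n.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # returns instantly with the cache; hopeless without it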


> I'd bet lazily slapping caches on things has saved more total time than all other computer optimizations combined.

For compilers, "Proebsting's law" suggests the total gain from all compiler research to date has been ~3%/year or >3.3x total: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29...


Really depends on the baseline, it’s easy to accidentally write something that would take billions of years to finish on real data but is fine for a mock-up. Caching often seems like the first optimization simply because systems tend to already be optimized enough to be useful.


Link to full paper: https://ide.mit.edu/wp-content/uploads/2021/09/How_Fast_Do_A...

If you prefer video, I've made a short summary video: https://youtu.be/R0k80fSRLts



This is a great study showing that algorithms have improved faster than Moore's law and other advances in computing in many cases; if only the abstractions allowed frontend and end users to experience these gains more often.


I'm torn. A lot of the advances in algorithms required advances in memory capacity, so much so that it seems odd to claim they outpaced Moore's law. True, but they could really only do it with said law.


I think these concepts do not compete.

The gains are not linearly related to the capacity or speed of memory. Memory improvements just remove a bottleneck.

You remove a bottleneck and everything flows faster instead of pooling. So the increase itself can be greater than the increase in the size of the corridor.


They were somewhat linear in relation to memory capacity, but they also might not have been possible with the capacity of a decade ago.

This is easily seen if you look at fun algorithms such as BDDs. If you have the memory for them, they are amazingly fast. But, if you don't have the memory for it, it gets you nothing.

Edit: removed a "not" that I didn't mean to type. :(
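To make the memory point concrete, here is a toy ROBDD sketch in Python (my own illustration, not code from any particular BDD package): the speed comes from a unique table that shares equal subgraphs and a memo table for apply(), and those tables are exactly the structures that eat RAM.

    TRUE, FALSE = True, False
    unique = {}   # (var, lo, hi) -> canonical shared node ("unique table")
    memo = {}     # (op, a, b) -> cached result of apply

    def mk(var, lo, hi):
        # Reduction rule: drop redundant tests; hash-cons everything else.
        if lo is hi:
            return lo
        return unique.setdefault((var, lo, hi), (var, lo, hi))

    def variable(i):
        # BDD for the single variable x_i.
        return mk(i, FALSE, TRUE)

    def apply_and(a, b):
        if a is FALSE or b is FALSE:
            return FALSE
        if a is TRUE:
            return b
        if b is TRUE:
            return a
        key = ("and", a, b)
        if key in memo:
            return memo[key]
        v = min(a[0], b[0])   # split on the earlier variable in the ordering
        a0, a1 = (a[1], a[2]) if a[0] == v else (a, a)
        b0, b1 = (b[1], b[2]) if b[0] == v else (b, b)
        result = mk(v, apply_and(a0, b0), apply_and(a1, b1))
        memo[key] = result
        return result

    # AND of 20 variables: the final BDD is a 20-node chain, not a 2^20-row table,
    # but only because the tables above are allowed to grow as needed.
    f = variable(0)
    for i in range(1, 20):
        f = apply_and(f, variable(i))
    print(len(unique), "nodes in the unique table,", len(memo), "memoized calls")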


How quickly do business requirements cancel out algorithm improvements?




