
There's a spectrum of near vs. far from the machine, but there's no spectrum of computability: all Turing-complete languages are equally Turing complete. There's no difference between Lisp and assembly in this regard. In fact, if you take big-O time complexity into account, assembly is better than the version of Lisp described in the OP, because the latter suffers a logarithmic penalty due to immutability.
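To make that penalty concrete, here's a rough Haskell sketch (my own illustration, not anything from the OP): modelling memory as a persistent balanced tree costs O(log n) per read or write, whereas a flat array indexes in O(1) the way real RAM does.

    import qualified Data.Map.Strict as M
    import Data.Array

    -- Immutable "memory": a persistent balanced tree, O(log n) per operation,
    -- and every write keeps the old version alive.
    type PureMem = M.Map Int Int

    pureWrite :: Int -> Int -> PureMem -> PureMem
    pureWrite addr val mem = M.insert addr val mem   -- O(log n)

    pureRead :: Int -> PureMem -> Int
    pureRead addr mem = M.findWithDefault 0 addr mem -- O(log n)

    -- Mutable-style "memory": a flat array, O(1) indexing like real RAM.
    ramRead :: Array Int Int -> Int -> Int
    ramRead mem addr = mem ! addr                    -- O(1)

    main :: IO ()
    main = do
      let mem = foldr (\i m -> pureWrite i (i * i) m) M.empty [0 .. 999]
          ram = listArray (0, 999) [i * i | i <- [0 .. 999]]
      print (pureRead 500 mem)  -- 250000, via an O(log n) tree lookup
      print (ramRead ram 500)   -- 250000, via O(1) indexing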

More to the point, Lisp is nice because it's in a sweet spot of language complexity vs self-interpreter complexity, and lambda calculus is nice because it's a simple basis of computation. But there are many languages that make self-interpreters simple (assembly, Forth...) and many simple bases of computation (SKI calculus, rule 110...)
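As a sketch of how little machinery one of those simple bases needs (my own toy example, not taken from anywhere in the thread), an SKI-combinator reducer fits in a dozen lines of Haskell:

    -- Minimal SKI-combinator reducer: a "simple basis of computation"
    -- needs only three combinators and one reduction loop.
    data Term = S | K | I | App Term Term
      deriving Show

    -- One step of weak head reduction; Nothing means no redex at the head.
    step :: Term -> Maybe Term
    step (App I x)                 = Just x
    step (App (App K x) _)         = Just x
    step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
    step (App f x)                 = (`App` x) <$> step f
    step _                         = Nothing

    -- Reduce until no head redex remains.
    eval :: Term -> Term
    eval t = maybe t eval (step t)

    main :: IO ()
    main = print (eval (App (App (App S K) K) (App I K)))
    -- S K K x reduces to x, so this prints K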

To call something the "Maxwell equations of software", IMO you need something more substantial. For example, Haskell folks can claim that lazy evaluation is provably superior (in termination and time complexity) to any other evaluation strategy. The dependent typing folks can claim that all valid programs correspond to valid proofs and vice versa, and that strict and lazy evaluation are equivalent w.r.t. termination. The guy behind Kernel can claim that the one true Lisp should allow you to mapcar a macro. There are many kinds of wonderful math that give you insight into computation!
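For what it's worth, the termination half of that lazy-evaluation claim is easy to demo (a minimal example of my own, not the commenter's): a lazy strategy finds an answer whenever a diverging argument is never actually needed.

    -- The argument diverges, but lazy evaluation never forces it.
    loop :: Int
    loop = loop            -- a computation that never terminates

    constThree :: Int -> Int
    constThree _ = 3       -- ignores its argument entirely

    main :: IO ()
    main = print (constThree loop)
    -- Prints 3 under call-by-name/need; a call-by-value strategy would
    -- evaluate `loop` first and diverge.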




It's true that Turing complete languages are (almost by definition) equivalent for writing programs of type "Nat -> Nat". However, things are not so simple at other types and it turns out there is actually something of a spectrum of computability. Andrej Bauer has a nice post on such things here: http://math.andrej.com/2011/07/27/definability-and-extension...


Please correct me if I'm wrong, since this is very much not my field, but as I understand it, there are at least 4-5 mathematically equivalent fundamental models of computation, and in practice the Turing machine and the lambda calculus (developed by Turing's thesis adviser ... I gather that was a small world back then) are the useful ones.

E.g. does the additional math you're talking about stand on top of either of those? I don't know (By nature and calling I'm a scientist of the continuous world, not discrete, damnit! :-).

Of course, I'm moving the goalposts by using "Lisp" as a stand-in for the lambda calculus (and I remember someone, maybe McCarthy, saying Lisp is what you get after you've read and understood only the first chapter of a book on it :-), which I don't even know at the poet's level (but am maybe starting to correct that; don't know if the payoff is worth it to me at this point in my life).


Yeah, you understand correctly. The thing is, I don't know any mathematical insight that singles out lambda calculus as a basis for computation among other possible bases, or any mathematical insight that singles out Lisp as an approach to writing self-interpreters. In other words, there's no theorem saying Lisp lies at the extreme end of any spectrum.

To many people, Lisp is simply the most advanced language they know, so they view other languages as "Lisp plus some features that you could implement with macros". They might as well view other languages as "assembly plus some features that you could implement with assembly". It's just the blub paradox all over again.

To add insult to injury, many of today's advanced languages aren't even built on Lisp. They throw away the key idea of Lisp (easy equivalence between code and data) in order to achieve other things which are not achievable in Lisp (provability of certain classes of statements about all valid programs). To put it in an exaggerated but not entirely untrue way, these days Lisp seems like a dead end in terms of research. Most of the interesting stuff is coming from ML-like languages instead.
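To illustrate the trade-off I mean (a hedged sketch of my own, not anyone's real compiler): in an ML/Haskell-style typed AST, the host type system rejects ill-typed object programs at compile time, which is exactly the sort of guarantee you give up when code is just undifferentiated list data.

    {-# LANGUAGE GADTs #-}

    -- Object-language expressions indexed by the type they evaluate to.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int  -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

    -- The evaluator needs no runtime type checks: every Expr the compiler
    -- accepts is well-typed by construction.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e

    main :: IO ()
    main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))
    -- Something like `Add (IntLit 1) (BoolLit True)` doesn't even compile.

The flip side, of course, is that you can no longer treat that syntax as plain list data the way Lisp does.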



