I was just reading this; really interesting stuff. However, I wonder how much has changed in the 25 years since this book was written. Or to put it differently: what should I read next?
A lot has changed since then. In particular:
1) Control-flow analysis has become much better understood, and there's a lot more you can do in your optimization phases to dramatically speed up code and reduce allocations (allocations and heap accesses are the bane of functional language implementation, btw. Unless you're Haskell, in which case you also have to deal with mutation for sharing lazy computation results).
2) Certain tricks such as monomorphisation (http://mlton.org/Monomorphise ) dramatically improve the compiler's ability to generate optimized code from originally polymorphic inputs without paying huge representation overheads.
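On the Haskell point in #1: a call-by-need runtime implements sharing with thunks that mutate themselves on first force, so two references to the same lazy computation never evaluate it twice. Here's a loose Python model of that idea (this is a toy sketch of what a lazy runtime does with heap-allocated thunks, not GHC's actual representation):

```python
# A self-updating thunk: the "pure" surface language hides mutation
# in the runtime. On first force we run the computation, overwrite
# the thunk with its value, and drop the closure.

class Thunk:
    def __init__(self, compute):
        self.compute = compute
        self.forced = False

    def force(self):
        if not self.forced:               # first demand:
            self.value = self.compute()   # run the suspended code...
            self.forced = True            # ...and mutate in place
            self.compute = None           # drop the closure (space!)
        return self.value

calls = []
t = Thunk(lambda: calls.append("ran") or 42)
print(t.force(), t.force())   # 42 42
print(calls)                  # ['ran'] -- computed only once, shared after
```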
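For a feel of what monomorphisation in #2 does, here's a toy sketch in Python: each use of a polymorphic function at a concrete type gets its own specialized copy, so no copy ever needs a boxed, universal value representation. (The mini "compiler" below is entirely hypothetical; MLton does this over a full SML intermediate language.)

```python
# Toy monomorphisation: duplicate a polymorphic function once per
# concrete type it is instantiated at. Each specialized copy can then
# use an unboxed, type-specific representation.

def monomorphise(poly_defs, call_sites):
    """poly_defs: {fname: template}, where template maps a concrete
    type name to specialized code (a string, for this sketch).
    call_sites: list of (fname, concrete_type) instantiations."""
    specialized = {}
    for fname, ty in call_sites:
        key = f"{fname}_{ty}"
        if key not in specialized:        # one copy per type, not per call
            specialized[key] = poly_defs[fname](ty)
    return specialized

# 'id : 'a -> 'a' used at int and string yields exactly two copies:
defs = {"id": lambda ty: f"fun id_{ty} (x : {ty}) = x"}
calls = [("id", "int"), ("id", "string"), ("id", "int")]
out = monomorphise(defs, calls)
for name in sorted(out):
    print(name, "->", out[name])
```

The trade-off, of course, is code size: whole-program duplication is why this works for MLton's whole-program compilation model and is harder with separate compilation.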
Unfortunately, #1 and #2 are mainly written up "in the source" of modern functional language compilers or at best in JFP (Journal of Functional Programming) articles :-( But the Appel book provides most of the fundamentals you'll need to get bootstrapped with anything more modern, so I'd recommend it anyway!
If you are still interested in learning more about writing compiler optimizations after reading the Appel book, there are two really fantastic Ph.D. theses:
Olin Shivers, Control Flow Analysis in Scheme
David Tarditi, Design and Implementation of Code Optimizations for a Type-Directed Compiler for Standard ML
The first can be a bit mathematically hairy, particularly if you have a weak or engineering-focused math background. Fortunately, you can skip the heavy math and still get a lot out of both the static analyses and particularly the optimizations listed at the back. Several of the ones he lists still offer promise, and folks are even now trying to figure out how to implement them efficiently (e.g. super-beta).
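To give a flavor of what's in the Shivers thesis, here is a toy 0-CFA in Python over the pure lambda calculus: it computes, for each variable, the set of lambda terms that may flow to it, ignoring calling context entirely (that's the "0"). Real CFA runs over CPS and tracks far more; this is just the constraint/fixpoint skeleton.

```python
# Minimal 0-CFA: iterate to a fixpoint on "which lambdas may flow to
# which variables". Terms are tuples: ("var", x), ("lam", x, body),
# ("app", f, a).

Var = lambda x: ("var", x)
Lam = lambda x, b: ("lam", x, b)
App = lambda f, a: ("app", f, a)

def cfa0(expr):
    flows = {}            # variable name -> set of lambda terms
    changed = True

    def values(e):
        """Abstract values (lambdas) an expression may evaluate to."""
        tag = e[0]
        if tag == "lam":
            return {e}
        if tag == "var":
            return flows.get(e[1], set())
        # application: whatever the body of each possible callee yields
        result = set()
        for lam in values(e[1]):
            result |= values(lam[2])
        return result

    def walk(e):
        nonlocal changed
        tag = e[0]
        if tag == "lam":
            walk(e[2])
        elif tag == "app":
            walk(e[1]); walk(e[2])
            args = values(e[2])
            for lam in values(e[1]):     # each callee binds the argument
                bound = flows.setdefault(lam[1], set())
                if not args <= bound:
                    bound |= args
                    changed = True

    while changed:
        changed = False
        walk(expr)
    return flows

# ((lambda x. x) (lambda y. y)): the identity on the right flows to x.
idy = Lam("y", Var("y"))
prog = App(Lam("x", Var("x")), idy)
print(cfa0(prog))   # {'x': {('lam', 'y', ('var', 'y'))}}
```

Once you know which lambdas can reach a call site, optimizations like the ones at the back of the thesis (e.g. turning an unknown call into a direct call when the set is a singleton) fall out naturally.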
The second is a wonderful implementation-focused thesis that covers in even more detail than the Appel book how to perform those optimizations on a typed IR. A typed IR is much tougher to work with than an untyped one, because keeping the IR well-typed across transformations can be quite tricky (e.g. if you change a function to take an additional argument, you need to update its type... and everywhere that type occurs, including possibly in tuples passed to functions that pass them on to other functions, etc.). Further, he includes proofs of correctness that will prepare you for the kinds of things that appear in more modern coverage of compiler optimizations.
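A tiny illustration of why that bookkeeping is invasive (a hypothetical mini type language, not Tarditi's actual IR): changing one function's type forces a rewrite of every occurrence of that type, including deep inside tuple types flowing through other functions.

```python
# Types are tuples: ("arrow", dom, cod), ("tuple", t1, t2, ...), or a
# base type name. Changing a function's signature means substituting
# its old type with its new one everywhere in the program's types.

def substitute(ty, old, new):
    """Replace every occurrence of type 'old' inside ty with 'new'."""
    if ty == old:
        return new
    if isinstance(ty, tuple):
        return tuple(substitute(t, old, new) for t in ty)
    return ty

# f originally has type int -> int; we give it an extra bool argument:
old_f = ("arrow", "int", "int")
new_f = ("arrow", ("tuple", "int", "bool"), "int")

# g takes a pair containing f and passes it on, so g's type must change too:
g_ty = ("arrow", ("tuple", old_f, "string"), "unit")
new_g = substitute(g_ty, old_f, new_f)
print(new_g)
```

In an untyped IR the rewrite would touch only f's definition and call sites; here it ripples through every type annotation that mentions f's type.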
Beyond that, getting into specific journal papers, you really can't go wrong just browsing TOPLAS and JFP for the particular topics you're interested in. Some papers are, of course, particularly good, but the editorial staff on both journals is just fantastic. If you have specific interests, feel free to drop me a mail directly for a pointer to a good entry paper in the area.
larsberg has already given several suggestions, but since nobody else has mentioned it yet, you might want to round out your view of functional programming language implementation by taking a look at "Lisp in Small Pieces" by Christian Queinnec (http://www.amazon.com/Lisp-Small-Pieces-Christian-Queinnec/d...).
It would be great if one of our resident HN functional geeks could give a short review of the book and explain why it and the lambda calculus are still relevant (more relevant?) now.
In addition to the other comments (at the time of this writing: silentbicycle and dons), I would say:
- Regarding the importance of the book: its discussion of the G-machine (probably also interesting is another paper by SPJ, "The Spineless Tagless G-machine") is very comprehensive and textbook-style. I think it is very important to study it alongside the other model of virtual machine used for implementing functional programming languages, viz. the SECD machine (named after its four stacks: Stack, Environment, Control, Dump). The book in which I first read about the SECD machine is Peter Kogge's "The Architecture of Symbolic Computers", and I recommend it dearly.
- Regarding the use of the lambda calculus: it's not just equivalent to a Turing machine, but much more fundamental to the theory of implementing programming languages. Aside from the eager/lazy evaluation of functional programming languages (corresponding to applicative- and normal-order evaluation of terms), the typed lambda calculus, for instance, is fundamental for studying type systems, one of the more important research areas of the last decade. Besides, it is also the basis of LISP (the eval and apply functions).
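For the curious, the SECD model mentioned above is small enough to sketch in a few lines of Python. This is a toy subset with my own instruction encoding (not Landin's or Kogge's exact machine): LDC loads a constant, LD loads a variable by (frame, index), LDF builds a closure, AP applies it, RTN returns, ADD adds.

```python
# A toy SECD machine: Stack, Environment, Control, Dump. AP pushes
# the current (s, e, c) onto the dump and enters the closure's code;
# RTN pops the dump and carries the result back.

def secd(control):
    s, e, c, d = [], [], list(control), []
    while c:
        op = c.pop(0)
        if op[0] == "LDC":                    # push constant
            s.append(op[1])
        elif op[0] == "LD":                   # push variable value
            depth, idx = op[1]
            s.append(e[depth][idx])
        elif op[0] == "LDF":                  # closure = (code, env)
            s.append((op[1], e))
        elif op[0] == "ADD":
            b, a = s.pop(), s.pop()
            s.append(a + b)
        elif op[0] == "AP":                   # enter the closure
            code, cenv = s.pop()
            arg = s.pop()
            d.append((s, e, c))               # save caller state on the dump
            s, e, c = [], [[arg]] + cenv, list(code)
        elif op[0] == "RTN":                  # restore caller, push result
            result = s.pop()
            s, e, c = d.pop()
            s.append(result)
    return s[-1]

# ((lambda x. x + 1) 41), compiled by hand:
body = [("LD", (0, 0)), ("LDC", 1), ("ADD",), ("RTN",)]
prog = [("LDC", 41), ("LDF", body), ("AP",)]
print(secd(prog))   # 42
```

Note how eager the machine is: the argument is fully evaluated on the stack before AP fires, which is exactly where it differs from the G-machine's graph-reduction approach.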
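And the applicative- vs. normal-order distinction in one toy Python example: under normal order (modeled here by passing a thunk) the argument of a constant function is never evaluated, so even a diverging argument is harmless; applicative order evaluates the argument first and would loop forever.

```python
# (lambda x. 42) applied to a diverging term. Normal order
# (call-by-name, modeled with a Python thunk) never demands the
# argument; applicative order (Python's own call-by-value) would
# evaluate it first and never return.

def diverge():
    while True:
        pass

def const42(thunk):
    # never calls thunk(), so the argument is never evaluated
    return 42

print(const42(lambda: diverge()))   # 42, despite the diverging argument
# Applicative order would be: const42(diverge())  -- this never returns.
```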
Classic text on implementing typed Haskell/ML-like languages circa the late '80s. Many of the terms and concepts remain relevant, though the state of the art in runtime systems has advanced a lot.
This is in no way new, and is fairly easy to find for anybody looking for this sort of material, so why post it to HN? It's a good book, I used it myself for a school project, but it has been available for free for at least 5 years now.
You are of course 100% correct (although it seems to me that most stuff on HN is "new" stuff). My gripe is that I use the RSS feed, so I see EVERYTHING that gets posted to HN, and this is one of those posts where I feel the target audience is quite small, and those who need it will have no issue finding it themselves. But I guess I'm just biased because I have actually used this book before, and therefore don't find it interesting when it is brought to my attention at a time when I have no further use for it.
I don't have the links handy, but there are RSS feeds that only serve up HN items that hit certain point levels, such as 20, 50, or 100 points. I like them because a lot of noise gets submitted to HN, and I often only want to see what has managed to garner some attention.