Hacker News
Racket – Lisp beyond Clojure (slides.com)
220 points by macco on Sept 26, 2016 | 173 comments



I find DSLs to be extraordinarily powerful for writing and working with my own programs. I find them painful when I'm trying to work with other people's programs.

Making a DSL lets me construct a language to express my program that meshes with the way I think. Unfortunately, we all think differently, and what is intuitive and friendly for me is hostile and awkward for others.

Reading someone else's program that uses a DSL may be superficially easy, but the minute you try to get under the surface, you're left scrambling to understand what a given term or idea meant to the author. There is no language definition to rely on for common understanding.

There are exceptions naturally.

I know a lot of people will disagree, and that's fine of course. It's opinions all the way down.


> Reading someone else's program that uses a DSL may be superficially easy, but the minute you try to get under the surface, you're left scrambling to understand what a given term or idea meant to the author. There is no language definition to rely on for common understanding.

I have generally found that this is the same problem as with user-defined functions, and it has the same solution: programmers need to document their code.


Agreed. I have two follow up thoughts:

1. Badly designed functions can do less design damage per-function than badly designed languages per construct, but user-defined functions tend to have much higher volume than both language constructs and their usages.

2. Powerful techniques, including DSLs, lead to denser code. Denser code takes more time to read per unit of code. Total reading time may be either more or less, but the cold start ramp is always much steeper.

Combined, this means that people are less sensitive to suffering the complexity caused by user-defined functions vs DSLs/etc because that complexity has lower intensity spread out over a greater area. I'll refrain from any further value judgement on this state of affairs; just wanted to share some more insights into the tradeoffs.


> Badly designed functions can do less design damage per-function than badly designed languages per construct

Really? When was the last time you successfully used a C library without reading the documentation?


Straw man. When was the last time you had to check the documentation for the parameter evaluation strategy of a C function call?

I don't think it is a controversial statement that a badly designed macro can do more damage than a badly designed function. I believe that this is true, even normalizing for bad macro systems. This is not a statement against macros, so much as it is a statement to consider when making tradeoffs.


It is controversial. The gets() function is probably responsible for a huge number of security breaches, with a cost possibly ranging into the billions of dollars (there's no way to really know). What would you cite as a comparable example of a badly designed macro?


Part of the reason gets() has been such a disaster is that people use it.


What would you cite as an example of a badly designed macro?


If you need documentation, probably you are not writing good code. Documentation is absolutely needed when writing third-party libraries, but everyone on your team should know all the aspects of the codebase, at least at a high level. If someone doesn't know it, then documentation is not the solution. Doing some pair-programming, adding them as reviewers on more pull requests, or writing a wiki page that explains the architecture of the various components are much more effective.

Remember that documentation is not free: you need to spend a certain amount of time to build it and an even greater amount of time to maintain it. Every time I've seen some attempt at writing documentation in javadoc (or whatever it's called in .NET), there were always missing parameters in the methods, wrong arguments and method names, parameters that no longer exist, and so on.

Naming is also a very important aspect. Just yesterday I found in a pull request something like "SingleLineTextBoxTextTooLongShowTooltipHandler". This doesn't really add any information and just makes it impossible to understand what it is meant to do. I proposed changing it to "TextOverflowTooltipHandler"; maybe still not ideal, but I think that is a definite improvement over the original.

DSLs in theory should be self-documenting. Gherkin is quite easy even if you don't read the documentation and even if you don't write code. If, on the other hand, you write some Lisp-esque ultra-compact DSL that has no connection whatsoever with the real world, then in that case too the best solution is not to write documentation; it is to write a better DSL.


> Doing some pair-programming, adding them as reviewers of more pull requests, writing some wiki page that explains the architecture of the various components are much more effective ways

When you get a new member on the team, how do pull-request reviews and pair-programming help them get familiar with old code? Documentation is a way of stating explicitly what might be learned implicitly from those tasks.

> Every time that I've seen some attempt at writing documentation in javadoc or whatever is called in .net, there were always missing parameters in the methods, wrong arguments and method names, parameters not existing anymore and so on

So fix that. Review the comments as part of code review; if the documentation is in the code this should be easy. From my memory of Java, doesn't Eclipse have some sort of doc string checker? I'm sure I recall the in-comment parameter names changing when you updated the method name...


That's one thing I liked with FP: the code is often combinator-friendly, so you can compose a lot out of a common core, which helps tremendously in decrypting the author's idea.


Also be consistent with their grammar, naming schemes, and conventions.


I think this conceptual disconnect can happen even without a DSL. So, then you have the huge amount of code, plus, a bunch of new and confusing concepts applied in ways that don't fit with your understanding of those concepts.

A DSL may mean that the language doesn't fit how you think about a given problem, sure, but a poorly fitting abstraction can happen with or without those language abilities. I tend to think that having those poorly fitting abstractions expressed in a compact form is probably better than having them expressed in libraries with very large surface area and oddly connecting/dependent parts.

Now, I don't actually know this to be true; my experience with Lisp projects is very limited. I have worked with Perl and Tcl code that made use of the higher-level capabilities of those languages to create little DSLs for specific tasks, and I didn't find it problematic; but neither is as far along as Lisp on the DSL spectrum. But I have also worked with big projects in languages like PHP that get the abstractions very wrong (at least, wrong from my perspective), and have found the verbosity to be exhausting... saying so much in order to accomplish so little (and to do it wrongly) seems even worse than tiny amounts of code that don't fit my mental model.


Agreed; in fact, a well-defined and documented DSL could make things easier to follow than user-defined functions and large chunks of code.


This is why it's always important to consider semantics when you define an abstraction—whether it's a DSL or a normal library. Life is a lot easier when your semantics are simple, precise and composable (the semantics of an expression are clearly derived from the semantics of its components).

It makes the up-front design more difficult—you can't just muddle through a design without thinking deeply about it—but pays off in the long run.

If you have nice semantics, I think the advantages of building a DSL (mostly greater expressiveness and flexibility) vastly outweigh any confusion people might have about the different syntax. If the semantics are genuinely simple, learning how the syntax maps to them should not be difficult.

And with unclear semantics, even a non-DSL library is going to be hard to use and understand.


> I find DSLs to be extraordinarily powerful for writing and working with my own programs. I find them painful when I'm trying to work with other people's programs.

Maybe this isn't the case for you, but for me, "myself in a year" fits nicely in the "other people" category, so I'm always asking myself: will I be happy or sad about this if I stumble upon it again while doing something totally different a year from now?


Creating reusable functions and types and whatnot is also creating your own DSL, as another poster hinted at.

However, the main problem with bad DSLs is that they aren't composable. Which really means users have to learn the right sequence of events and the right context in which certain actions function properly. People often complain about awkward symbols or unfamiliar constructs, but really, they are missing the forest for the trees.

This is why plain pure functions are always better: they are composable and reusable, which is the ingredient many DSL authors are missing.


Unhappy with the current state of functional programming. I know I shouldn't be. I have a lot of options.

I went to school at UC Berkeley. My first language was Scheme. Then Elisp. CLOS was in the first few too. I really didn't learn C or C++ until second semester.

A few months ago I had to pick up enough Scala to figure out what some code did in a project and I wasn't really happy with the language. I realize it is multi-paradigm, but the functional part just seems a little bolted on at times. The notation was clunky. Maybe it is because I prefer Lisp-ish and APL-ish languages. And the Scala docs aren't very friendly. I actually find it easier to write imperative code in Scala than functional code.

Clojure seems nice, but lacks a lot of things that you kind of want in a LISP (proper tail recursion). Racket has very little market penetration.

Maybe I'm just picky, but I don't see anything I really want to put time into.


I guess I don't understand what the big deal is with tail call optimization. Could someone give an example where it really shines and clojure's loop/recur just doesn't?

If you are looking to put time into a programming language that is interesting in and of itself, I'd suggest Haskell.


loop/recur is good for replacing structured loops but tail recursion is more general, since it can replace arbitrary gotos.
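
To make the "arbitrary gotos" point concrete, here is a minimal sketch (hypothetical names, in Scheme/Racket) of a two-state machine written as mutually tail-calling procedures; with guaranteed tail calls it runs in constant stack space, which loop/recur alone can't express:

    ;; Two "goto targets" written as procedures; every call is in tail
    ;; position, so a tail-calling implementation never grows the stack.
    (define (state-a n)
      (if (zero? n)
          'done
          (state-b (- n 1))))   ; "goto" state-b

    (define (state-b n)
      (state-a n))              ; "goto" state-a

    (state-a 1000000)           ; => 'done, no stack overflow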

One example is given by Guy Steele in "Why Object-Oriented Languages need Tail Calls". Basically, tail recursion lets you keep O(1) stack space even in the face of opaque method dispatching in OO code. https://eighty-twenty.org/2011/10/01/oo-tail-calls

The "Lambda the Ultimate" papers are also required reading, if you haven't read them yet. The "GOTO" one is particularly relevant here. http://library.readscheme.org/page1.html


Thanks! These must be where the website/blog got its name. I actually thought that was what you were referring to at first.


An interesting thing about general tail recursion is that it's composable.

Clojure's loop/recur is a special case implementation of a common pattern. That's OK, but it can't be extended.

Any higher-order function in Scheme can be parameterized with procedures that are expected to be tail recursive. No special language support needed. That's why for-each is a procedure instead of a special form in Scheme.

This allows libraries (like SRFI-1) or DSLs to be created that expect callers to conform by contract instead of by using special forms. Callers, in turn, can create abstractions over the libraries and DSLs that are equally general, instead of being strongly influenced by what they're built on.
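
As a rough illustration of that point (names made up): a user-extensible loop can be an ordinary procedure, because the call into the caller-supplied procedure is just another tail call:

    ;; `iterate` tail-calls the user-supplied `step`, and `step` re-enters
    ;; the loop by tail-calling `continue`. No special form is needed, and
    ;; with guaranteed tail calls the whole thing runs in constant stack space.
    (define (iterate state step)
      (step state (lambda (next) (iterate next step))))

    (iterate 1000000
             (lambda (n continue)
               (if (zero? n)
                   'done
                   (continue (- n 1)))))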

Clojure does a really nice job of creating reusable abstractions. From what I understand, tail-call optimization isn't possible in the JVM, so we have iterators and lazy evaluation first-class instead; which has other nice advantages for creating abstractions.


Clojure has recur, which handles tail calls in general (not just with loop), and trampoline for mutual recursion/corecursion:

recur:

  (defn factorial
    ([n]
     (factorial n 1))
    ([n acc]
     (if (= n 0)
       acc
       (recur (dec n) (* acc n)))))
trampoline:

  (declare is-even?)

  (defn is-odd? [n]
    (if (zero? n)
      false
      #(is-even? (dec n))))

  (defn is-even? [n]
    (if (zero? n)
      true
      #(is-odd? (dec n))))

  (trampoline (is-odd? 10000))


Yep, and the same can be implemented in any language that supports functions-as-objects (for example, javascript).

It's a practical solution/workaround, but isn't quite as composable/generalizable as first-class support for TCO (unless the community is willing to follow some convention like "TCO-like behavior is a 'good thing', so libraries should try to support an interface compatible with `trampoline`").

This is a little bit like `function*` / yield in newer versions of JavaScript. It's not TCO, but it is a strong community convention (with the assist of special syntax to enforce the semantics).

Clojure is a great example of how practical/tasteful decisions by language author can create re-usable abstractions that are really powerful (even while at the same time working around limitations of the underlying technology).

Some might also argue that TCO is "bad" because it's less debuggable (e.g. when an exception is thrown, the stack trace is gone). We can see the same phenomenon with non-blocking IO, for example. So maybe it's a Good Thing to have "less general" semantics in a language (e.g. function, trampoline, and loop/recur may be good enough while at the same time preserving debuggability).

There are always tradeoffs. What's nice about general TCO is that the composability of unrelated libraries/DSLs is more probable, whereas with conventions like trampoline, DSL/library authors need to understand the advantages and design the DSL/library in a compatible way in advance. As Clojure shows with lazy sequences, this can absolutely work given enough community awareness of the convention.


Another thing to mention: having guarantee of TCO supports higher-level control-flow abstractions (capturing continuations), which is also an interesting outcome...


It's a big deal since it's not really about optimization, but more about the overall coding style and thought process.

With TCO, you start seeing functions as composable code blocks rather than something that is called and then returned from. State machines are just mutually tail-calling functions---it's not that state machines are written so, but that in your mental model the two are the same. And usually a program is a humongous, multi-layered state machine.

Suppose you want to abstract out part of the loop. You just write a generic version that tail-calls a user-passed function. Users can extend the loop freely, even interlacing two independent loops by mutually calling them (which can't be done with a callback model without consuming stack frames).

It's really a matter of style, but for those who are used to thinking in this way, not having guaranteed TCO is like writing code with one hand tied behind your back.


Looping is fine for imperative languages. It should be the norm for performance sensitive things since that is how the CPU works. It is just easier than trying to make a smart compiler turn your recursion into looping constructions.

However, for functional languages you really want TCO to work properly so you can write your algorithms in a properly functional way. And you don't just want self-calls to work, but full mutual recursion (A calls B that calls A etc) to be properly optimized too.


the CPU is just doing a jump to an instruction at the end of each iteration.

When you do tail call recursion you are doing just that. The assembly jumps to a tag and in recursion you jump to a function name, which is also effectively a tag.

So it's not really about mapping better to the CPU instructions or anything like that

The added benefit is that with tail call recursion you can have mutually recursive functions (function A calls function B at the end and function B calls function A at the end), which isn't possible with a simple iterative loop, so the functional style is actually more powerful than an iterative approach and still maps directly to "the way the CPU works"


You are forgetting about the function prologue, epilogue, and the maintenance of the various pointers. It certainly isn't as simple as a jump as you imply. To do proper TCO most implementations resort to function trampolines and other self-modifying trickery to get close to loop-like performance. Within the confines of the JVM, without doing extensive source analysis, I doubt there is even a way to do it.


this is outside my field of expertise. Can you give an example of a tail call that needs "trickery"?

I'm not 100% on this, but when you have a tail call I don't think you have to remember the state of the stack frame. "function prologue, epilogue, and the maintenance of the various pointers" aren't god given rules - they're things dictated by something like the C ABI so you can resume the caller. So if you're thinking in terms of C ABI then yes, it's a compiler optimization, but in principle I think it's a zero cost abstraction (though I'd argue that the iterative loop is the actual abstraction)

In tail call recursion you're ensured the caller will never resume!

When you enter a tail-recursive function you save (as you normally would) a return pointer for the program counter and allocate registers/memory for your function arguments. At the end of the function you've done all the computation that that frame will ever do, so you are free to overwrite the argument variables when calling the next iteration/recursive-step. The return pointer doesn't need to be touched at all. Where is the complexity you're talking about?


When TCO is "lexically scoped" like loop/recur, the compiler can handle it.

When HoF are involved, you may have a case where a procedure calls a procedure-parameter, which calls another, and another... Something about the runtime has to recognize this or else the compiler has to accept more constraints.

See, for example, how Gambit-C implements tail calls by compiling modules to single C functions and how there is a trade off for procedure calls across module boundaries versus Chicken Scheme's approach to essentially garbage-collecting the call stack.


Okay, but at that point you're talking about things that are way beyond the capabilities of an iterative loop. I think my point still stands - that implementing a tail recursion in place of a loop is not something you will have to pay for. Both structures will map to the same instructions.


The difficulty with tail recursion optimization is related to calling conventions. Some calling conventions expect the caller to do some cleanup after the callee returns, which effectively means that no function calls actually occur in tail position. For example, in the C calling convention the caller is responsible for allocating and freeing the stack memory dedicated to the callee's function parameters. This system makes vararg functions like printf easy to implement but makes it hard to do TCO. Another example is Rust, where having destructors automatically run when a variable goes out of scope prevented them from implementing TCO in a clean manner. I'm not familiar with the JVM internals but I think the limitations are going to be similar to the ones I mentioned here.


It's not just that, it's that last time there was serious talk of it, LLVM's musttail wasn't properly supported across important platforms for us. So it got put by the wayside, and there's always so much to do, nobody has worked through the semantics now that support is better.

We did reserve a "become" keyword for this purpose though. Instead of "return", you say "become", and TCO is guaranteed. That's the basic idea anyway, we'll see what happens.


Yep, you're absolutely right. I guess that maybe if you care about inventing new kinds of loops (e.g. a DSL that wants to loop as a specialized kind of `fold` construct), there's more flexibility with TCO.

But Clojure shows that there's tons of mileage and composability that can happen by using runtime protocols and tasteful language-design decisions even without TCO.

Capturing continuations is another interesting theoretical outcome of TCO, although as various scheme implementations show: the approach to supporting TCO also imposes some performance constraints on call/cc.


I will have to look for mutual recursion support (or lack thereof) in clojure.

Clojure's loop/recur is a work-around for some issue with the JVM and its lack of fine-grained control over the stack. That said, it's not a simple loop: the keyword "loop" is effectively an anonymous function that gets called by the "recur" keyword, with parameters.

An example:

  (loop [iter 1
         acc  0]
    (if (> iter 10)
      (println acc)
      (recur (inc iter) (+ acc iter))))
I don't know if that is any different than other lisps or not. In case it's less than clear, the bracketed portion after the "loop" keyword is an initial binding form (setting iter to 1, and acc to 0), and thereafter iter and acc are just considered parameters.

For self recursion, that seems like a fair compromise. No help at all for mutual recursion though.
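
For comparison, a rough sketch of the closest equivalent in Racket/Scheme: a named let, where re-entering the loop is an ordinary tail call rather than a special keyword:

    (let loop ((iter 1)
               (acc 0))
      (if (> iter 10)
          (displayln acc)
          (loop (+ iter 1) (+ acc iter))))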


I never had to use it but I recall there was a 'trampoline' fn for mutual recursion


Exactly. I once read a great article [1] written by Martin Trojer I think, it was about recursion in Clojure.

EDIT: [1] https://martintrojer.github.io/clojure/2011/11/20/tail-calls...


> Could someone give an example where it really shines and clojure's loop/recur just doesn't?

In practice you don't replace TCO with loop/recur; you replace it with lazy sequences. They both solve the "how to work on unbounded data while consuming bounded memory" problem even though at first glance loop/recur seems more similar.

Most people who complain about not having TCO have not fully wrapped their heads around lazy sequences.


Can you elaborate a little on what you think lacks in Clojure?

I ask because I've never been happier with any language (I came from Java/Prolog) and I rarely use loops (maybe one for every 2kloc, mostly because a reduce does it in most cases) so I don't know what else I might be missing without knowing it.


Well, of course you're limiting yourself to lisps (except Scala, about which I share your dislikes). How about F#, which is a very practical functional language with good tooling on all platforms? Or Ocaml, or Haskell?


I don't use Windows much, and Mono sucks from all my previous attempts at using it, so that kind of kills F# from what I hear.

I tried OCaml (many) years ago, but it always seemed like a kitchen sink research language.

I like well-designed languages. Ones with purpose when they start out. Ones I can see a path for. Languages of accretion (Python, Ruby, JavaScript, etc.) don't interest me as much unless they are languages of production value (Java). Java started off as a language of purpose (Write Once, Run Anywhere) but soon became a language of accretion. Nobody cares about WORA anymore; server-side Java is where it is at now.

Maybe I'll just go back to my little corner of APL called KDB+/Q/K[1][2][3] and wait for a language that really piques my interest.

[1]https://en.wikipedia.org/wiki/Kdb%2B

[2]https://en.wikipedia.org/wiki/Q_(programming_language_from_K...

[3]https://en.wikipedia.org/wiki/K_(programming_language)


What about Standard ML? [http://www.smlnj.org/sml.html]

Much simpler than Scala, OCaml etc. Many would say much better designed. It is the godfather of many languages. There are several very good compilers available. [http://www.smlnj.org/] [http://mosml.org/] [http://mlton.org/]


Elixir and Erlang were built with a purpose. OCaml's kitchen sink is due to its use in production, btw. Like Erlang, it is a language that evolved with industry needs more than through "research".


> good tooling on all platforms

Eh... I've run into loads of trouble with F# and Mono, and with .NET Core you have to deal with platform fragmentation if you use any NuGet packages. Editors are pretty good across platforms now (as long as you stay far away from Xamarin Studio/Mono Develop), but I wouldn't call it "good" for non-Windows platforms.


Haskell and OCaml are both great. I've had a fair amount of experience with both of them as well as Racket and, while I rather liked Racket, I would definitely order them Haskell > OCaml > Racket. (I did 61A in Scheme too, coincidentally.)

I'm using Haskell professionally now and, while there are still a few things I miss from Racket and even OCaml, Haskell delivers the best overall programming experience I've encountered.


Also started with 61a in Scheme. How would you compare Scheme to Racket?


Imagine a superset of 61a Scheme with a more robust standard library, better tools and some cool facilities for domain-specific languages. It still feels like the Scheme you wrote while having some useful basic features like structs built in:

    (struct document (author title content))
    (document "Me" "My Document" "Blarg")
It's not strictly a superset of Scheme (for example, lists are immutable) but it feels like it. It's not like you usually used set-car! and set-cdr! in normal Scheme either!

A lot of random small things, like the syntax for importing modules, are better. Then, if you dive in further, there are some cool packages like an optional static type system, a fancy contract system and some cool facilities for transforming Racket into another language (complete with new syntax).

I saw people doing some really cool stuff with that. For example, Rosette[1] lets you write normal Scheme with a #lang rosette annotation on top. Then, instead of running your code directly, you can turn it into a compiler that produces a logic formula from your code. It redefines core language constructs like if to be symbolically evaluated. I'm working with a system that does something similar in Haskell and its implementation is a lot more awkward.

[1]: https://docs.racket-lang.org/rosette-guide/index.html
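
If I remember the Rosette basics correctly, a tiny example looks roughly like this (treat the details as approximate): instead of running the code on concrete values, you ask a solver for values that satisfy an assertion.

    #lang rosette
    ;; Declare a symbolic integer and ask the solver for a value of x
    ;; that makes the assertion hold.
    (define-symbolic x integer?)
    (solve (assert (= (+ x 2) 10)))   ; => a model binding x to 8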

If you're doing something really meta like that, Racket is great—easily one of the best options around. If you're just writing normal code, it's basically a marginally better Scheme which is fine.

But not great. I worked on a compiler written in Racket without using any of the meta features and it was okay, better for the task than any other dynamic language, but it would have been better with a more functional language with a real type system. The standard library is certainly better than Scheme's but that's because Scheme is tiny; I was constantly missing functions I'm used to having in Haskell whereas, except for the metaprogramming features, there's very little I miss in the other direction.

That's my take on it, anyhow. Obviously, other people disagree. If you're just comparing Racket against Scheme, you can mostly think of it as a superset that's a strict upgrade.


Most of the rough edges in Scala syntax are where it handles static typing around functional code and I think they've done a solid job achieving that. I'm from a lisp background and I loved learning Clojure, but for my day job I wouldn't give up Scala


Have you tried learning an ML variant? SML and OCaml are great, and Haskell isn't too bad, either.


Yes. Years ago I learned OCaml (and I really didn't care for it. Seemed like a big mess of random stuff). I haven't touched it in years though. I used to learn a new language about every other year. My resume even used to be written in hand coded PostScript.


This is addressing a question I'm interested in, but on first read I'm struggling to see the value that's being presented here.

- "parameterize" is great, but seems exactly the same as Clojure's "binding" on a dynamic variable. [Edit: modulo being "sticky" with regard to continuations... but Clojure doesn't have continuations, so it's a pretty subtle difference.]

- Reader macros are an interesting feature, but do move away from the syntactic regularity of LISP -- and were, AFAIK, explicitly rejected in Clojure

- Custodians are kind-of interesting -- but I ended up having to do some Googling [1] to find out exactly what they do...
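
For what it's worth, a minimal sketch of the parameterize and custodian points above (the surrounding names are made up, but make-parameter, parameterize, make-custodian and custodian-shutdown-all are the actual Racket API):

    ;; A dynamic parameter, rebound only for the extent of the parameterize body.
    (define verbose? (make-parameter #f))

    ;; A custodian manages the threads, ports, etc. created under it, so a
    ;; single call can release everything at once.
    (define cust (make-custodian))
    (parameterize ([current-custodian cust]
                   [verbose? #t])
      (thread (lambda () (let loop () (sleep 1) (loop)))))
    (custodian-shutdown-all cust)   ; kills the thread and closes its resources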

Is there anywhere else I should be looking for a more systematic Racket/Clojure comparison?

[1] - https://docs.racket-lang.org/reference/eval-model.html#%28pa...


> - Reader macros are an interesting feature, but do move away from the syntactic regularity of LISP -- and were, AFAIK, explicitly rejected in Clojure

That does not make sense to me since Clojure introduced explicit irregular syntax for things like maps and vectors. In Common Lisp, in contrast, the "we-the-language-developers can introduce special typography but the users cannot" thing doesn't exist.


The majority of Clojure's special syntax is for data structures, and while Clojure doesn't have reader macros, it does have data readers.


I don't know if it's systematic per se, but I wrote a comparison here: http://technomancy.us/169

My take is that the biggest difference is not in the language but the runtime. Nearly all the things that Racket lacks (except world-class GC and JIT compilation) can be added in after the fact due to its massive flexibility; Racket is a natural choice when you want a lisp but can't afford the memory or launch time overhead of the JVM. Unfortunately the things that Racket does well typically cannot be retrofitted onto Clojure due to its sloppy semantics around nil, but maybe Clojure's port of Racket's contracts system will change the balance there.


The place where I've found Racket to shine is the reader macros.

More specifically, a project can be broken up to be expressed in a way that makes sense, using what looks like, on the surface, multiple languages.

Most of what I have done in Racket used #lang racket for the core, and lazy or typed everywhere else.

Contracts and laziness made debugging a breeze.
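
To give a flavour of the contract side (a sketch with made-up names): contracts attach at module boundaries, and when one fails the error blames the offending side, which is a big part of why debugging is pleasant.

    #lang racket
    ;; The contract is enforced at the module boundary: a caller passing a
    ;; zero divisor gets a contract error blaming the caller, not a cryptic
    ;; division error deep inside the module.
    (provide (contract-out
              [safe-div (-> number? (and/c number? (not/c zero?)) number?)]))

    (define (safe-div a b)
      (/ a b))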

Edit: My clumsy hand hit reply before I was done.


I've recently started porting some bash scripts over to Racket, using https://docs.racket-lang.org/shell-pipeline and it's been pretty straightforward and painless so far!


Link to your code? I would be very interested to see this thus far.

I have looked at avesh (CL) and xonsh (Python) because I think the future of fun *nix tooling is libraries that implement a bash layer, to gradually move code away from shell languages. This is the sweet spot I would want in my life.


Not the cleanest, but it's at http://chriswarbo.net/projects/repos/theory-exploration-benc...

Note that the static HTML views are truncated to 10 commits, to save space (useful on larger repos), but the repo itself contains the full history.


Just got back around to looking at this. Thanks, I will check it out.


Are there (m)any advantages to doing this? I've seen a number of shell-interop modules for various languages but they all appear to add an extra layer that reduces portability or ease of maintenance.


> Are there (m)any advantages to doing this?

[trigger warning: cynicism distilled from 15+ years of bitter tears]

More fun and better job security for the current maintainer. The former because one hacks away in the preferred language, the latter because one cannot be replaced easily by some unix geek unless s/he happens to speak the same language fluently.

One can chose from various implementations in various languages (eg: psh and scsh) but hacking your own is easy:

* Just re-implement the basic unix tools and shell semantics and invent some nifty convenience syntax and features along the way.

* Either don't document at all or document everything with at least 2 pages aimed at people who have never used a computer before. Both ways will prevent the only ones who could replace you from even looking at your "shell in X".

* Never ever do this in a typed language, write a ton of unit-, functional- and integration tests to make up for that - of course all of them completely undocumented.

* On top of your creation, invent a fragile "shell syntax like" layer to make adoption easier.

* Under no circumstances create some new abstraction layer that goes beyond shell semantics that brings something substantial to the table and is well documented.

* Enjoy the time saved because you don't have to "man $tool" anymore, it's all your code anyway.

* Under no circumstances ever write in pure posix sh again - the realization that with all the stuff you've learned along the way posix sh covers 99% of the use-cases in a more expressive way will be crushing.


That actually doesn't sound much like SCSH to me. SCSH was really just a library for posix integration, as well as some common shell idioms that came with it. And a damned good one, too. It was originally designed for situations in which you might write (notoriously unmaintainable) shell scripts, but it's so good that some of its APIs have become almost de-facto standard. In particular, SRE and the AWK macro really caught on.


Remind me never to hire you for anything ever. You've truly internalized the Tao of the BOFH.


Thanks for the compliment. I'm usually hired for that attitude (and my ability to hide it except for the few meetings where it really counts) ;)


> Are there (m)any advantages to doing this?

In the general case, as always, "it depends". I'm of the opinion that shell scripts are very powerful for quickly prototyping something, but as soon as you need to do real software engineering (e.g. test suites, etc.) then it's probably worth porting over to a saner language (yes, there are test frameworks for bash; no, that doesn't make bash's semantics any more sane ;) )

Piping data between subprocesses is almost universally painful in anything other than a shell, so these sorts of libraries are great for that sort of glue; I wouldn't use them for anything that's not a bog-standard "foo | bar | baz | ..." though, since that's what the rest of the language is for!

In this case it was a no-brainer: the project requires a lot of s-expression manipulation, which is implemented as small racket scripts; these are called from bash scripts which contain the main logic, data flow, etc. I wrote it this way since I'd never used racket before, but figured it would be better suited to this s-expr manipulation than bash, python, haskell, etc. (which it is!); the remaining parts were simple enough to do with bash.

This worked fine for a while, but a recent change in requirements has invalidated a lot of the code's assumptions, so I need to ensure I'm making changes in the right place, that test cases are updated to reflect the changes, etc. which motivates porting to something more sophisticated, and racket is the clear choice here.

I wanted to use a shell-interop library after previously finding Scala's "process" package to be very pleasant to use. It's certainly made the porting job much more manageable, as it just becomes a case of recursively refactoring each part:

    bash:
    echo "foo" | ./bar.sh  | ./baz.sh  | grep "quux"

    racket:

    ---- Convert shell scripts to racket functions

    bash:
    echo "foo" | ./bar.rkt | ./baz.rkt | grep "quux"

    racket:
    (define (bar) ...)
    (define (baz) ...)

    ---- Move pipeline over to racket
    
    bash:

    racket:
    (define (bar) ...)
    (define (baz) ...)
    (run-pipeline '(echo "foo") '(./bar.rkt) '(./baz.rkt) '(grep "quux"))

    ---- Call racket functions directly, rather than invoking scripts

    bash:

    racket:
    (define (bar) ...)
    (define (baz) ...)
    (run-pipeline '(echo "foo") `(,bar) `(,baz) '(grep "quux"))

    ---- Encapsulate stdio

    bash:

    racket:
    (define (bar) ...)
    (define (baz) ...)
    (with-input-from-string "foo"
      (lambda ()
        (with-output-to-string
          (lambda ()
            (run-pipeline `(,bar) `(,baz) '(grep "quux"))))))

    ---- Replace stdio with arguments and return values

    bash:

    racket:
    (define (bar x) ...)
    (define (baz x) ...)
    (string-join (filter (lambda (line)
                           (string-contains? line "quux"))
                         (string-split (baz (bar "foo"))
                                       "\n")))

    ---- Replace strings with more useful datastructures

    bash:

    racket:
    (define (bar x) ...)
    (define (baz x) ...)
    (filter (lambda (line)
              (string-contains? line "quux"))
            (baz (bar "foo")))


> bog-standard "foo | bar | baz | ..."

Surely if you're doing that then shell is the way to go?

What cases are there for performing that sort of piping inside a "real software engineering" program? (And if you're doing it inside a single program, why aren't you making several independent programs that pipe to each other?)


When there are a few pipes, surrounded by hundreds of lines of logic, parsing, pretty-printing, error reporting, etc. then it makes sense to use a non-shell language.

The problem is that invoking subprocesses can be quite verbose.

If you do the naive thing and pass strings in/out of each step, it increases verbosity, gives you temporary variables to abstract away and can eat up a lot of memory with temporary data.

If you do the clever thing and set up pipes between the subprocesses, it removes some of the variables, but takes more code and hits deadlocks when the buffers fill up.

If you do the right thing and spawn separate threads to read/write data to the pipeline then it takes even more code, and now you have to worry about multithreading.

Hiding that sort of boilerplate is why I like shells and shell-like libraries. Both are rather unsuited to anything else, but at least with a shell-like library you can fall back to a decent language.


>The problem is that invoking subprocesses can be quite verbose.

Which is why SCSH is kickass.


I'd think Common Lisp is the Lisp beyond Clojure.

Also, I just see slides that have (googleable) terms, is there a recording of this presentation?

EDIT: see mkozlows' comment (THANKS!!!)


I have been trying to get into the whole Lisp paradigm for 1-2 years now. I own Realm of Racket, amongst a collection of Lisp books.

Racket is all the promise of developer-centric power tooling that puts Common Lisp to shame.

I recommend you watch one of the core devs, Matthew Flatt, build a hygienic macro expander.

https://www.youtube.com/watch?v=Or_yKiI3Ha4

Notice how his talk, likely written in its own documentation language Scribble, shows demos of DrRacket. I am more of an emacs guy, but do you see the interactive colorized debugger pointing out variable binding and flow control? That is the only thing that I have seen in the Lisp world that takes SLIME and laughs at its dogged simplicity.

Also, as you watch this talk, observe how he takes the complexity of something I am still certain I cannot do, build a macro system (let alone understand macros others write), and boils it down to the essence with the color-coded schematic to match his environment and visualize his thought process. I dare say Rich Hickey et al would be impressed, tipping the hat to the "Simple Made Easy" talk he is famous for.

If I have bored you at this point, you will probably not check out Rackjure. Someone basically implemented a subset of Clojure as a language in Racket. So it is safe to say you can subsume Clojure with Racket.

I know we are all smug Lisp weenies, but the whole Brown/NEU PLT group who works on Racket deserve serious praise. I listen to all the core devs and watch their talks, because they push the boundaries of what a good computer scientist is.

They just happened to choose Scheme/Lisp to make me feel dumb. Watch them ditch it and go for Haskell. We will all be sorry then.


> Racket is all the promise of developer-centric power tooling that puts Common Lisp to shame.

Have you ever used a commercial Common Lisp like Allegro ?


We can't entirely blame them: those cost significant money while mainstream stuff usually has free IDEs that are good, without limitations. Allegro even mentioned royalties last time I looked at them. Royalties!?

It's only natural that hobbyists overlook this. But, yes, LispWorks and Allegro are very powerful environments. Comparing AllegroCache to Hibernate might be fun for newcomers too. Haha.


I guess we were spoiled by being able to have seen such environments, back when it was common to pay for software tools.

Even Macintosh Common Lisp would already have been a good experience.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...


That was pretty interesting. Especially that the Mac CL survived so long in about the same form. The one thing that confused me was Apple acquiring "Allegro" Common LISP. That's the name of Franz's product. The history of Franz says nothing about Apple. I'm assuming they're different products with the same name just to trip us archaeologists up?


Coral Common Lisp was once renamed 'Macintosh Allegro Common Lisp', because Coral went into a marketing agreement with Franz Inc. Apple later bought Coral and its products, and published MACL then as Macintosh Common Lisp (MCL).

MACL had technically nothing to do with Franz' Allegro CL.


Now that makes sense. I just couldn't see one not saying something about the other with the same name. Tks for the tip.


No, sadly I have not. But I am very interested if I could afford such things and this was not a hobby.

Ahmon Dancy talked about his experience doing DB programming for Allegro, and from this talk they sound, as a group and as a generalization, super competent; so much so that I would love to have a reason, as a cheap wannabe, to use their graph DB.

https://www.youtube.com/watch?v=S7nEZ3TuFpA


Cider's integration of Clojure into Emacs is reasonably comparable to what traditional SLIME integration for Common Lisp is like, as a user experience.

Likewise, Geiser is a sort-of capable mode for most of the other more prominent Schemes; though it is not quite as elegant or polished as Cider.


geiser is quite nice. I used it a couple of years ago with racket to work through the first three chapters of SICP, including the part that involves drawing pictures https://i.imgur.com/fwCUUZI.png


Geiser is looking for a maintainer for its Racket support, IIRC.


Racket 6.6 doesn't even start on my Mac... crashes immediately.


I hope you have filed a bug report? (Also - did you use the official version from download.racket-lang.org ?)


Racket isn't anywhere near close to what CL offers.

Your comments make me think that you've never used CL/SLIME extensively.

CL and SLIME are geared for __interactive image-based development__. The DrRacket IDE makes you jump through hoops to get but a tiny subset of that interactivity [last I remember, the Racket devs themselves said publicly that interactive development is not a priority to them].

I see Racket as an interesting experiment with lots of applications in academia but compared to CL, it's not anywhere near as pragmatic or capable. To believe otherwise you're simply deluding yourself.


TBH, almost any Lisp is beyond Clojure.

Don't get me wrong, Clojure is an amazing tool. While it addresses a very specific use case (functional programming on the JVM) that use case is common enough that it's a very useful tool. But when compared to other Lisps in a context where non-JVM toolsets are acceptable, Clojure leaves a lot to be desired.


But, when I'm spoiled with built-in immutable persistent collections and awesome concurrency semantics, am I not also going to desire those features in other lisps?


Can you be more specific? In particular, about stuff you'd consider to be lacking in Clojure?


Conditions and restarts, read macros, interactive debugging, native code compilation, extremely easy interface with C, intrinsics, compiler macros, full control of code generation, built-in disassembler and so on and so forth.

Clojure is ok if all you're doing is based on the JVM. If that's not the case, then it simply doesn't exist.


You're just saying 'Clojure isn't as good as CL because it's not CL'. When one of the biggest selling points is "it works seamlessly on the JVM", it's a bit disingenuous to say "it's not as good as CL because it doesn't have a good C interface".


The C interface is a bit silly, but the rest of the points are pretty good. Clojure is a reasonable lisp-on-the-JVM, but lacks some of the really good things about lisps.


You could use JNA (Java Native Access) with Clojure[0], exactly like ABCL does with the CFFI library.

[0] https://nakkaya.com/2009/11/16/java-native-access-from-cloju...


Restarts, call/cc, reasonable error reporting, reasonable error reporting, real tail-call elimination.


I thought the general consensus on call/cc now is that although it's very powerful in principle it's not such a great abstraction in practice. It's difficult to implement efficiently and abstractions created using it tend to be very brittle. See e.g.:

http://okmij.org/ftp/continuations/against-callcc.html


Racket provides delimited continuations. The page you linked to is criticizing undelimited continuations -- including quoting Matthias Felleisen -- and contains a link to the following:

http://okmij.org/ftp/continuations/undelimited.html#delim-vs...


Yes I know, but call/cc without any context generally refers to undelimited continuations. Delimited continuations use differently named primitives.


In lisp-like languages I am aware of two implementations of delimited continuations using call/cc (racket and cl-cont) but zero that use shift/reset.


Well, real conses would be nice, for a start...


why?


There are all kinds of useful structures that you can make out of conses: trees (with two cells per node), infinite lists, and so on. They're also useful in conveying intent.


I don't see why you can't make those out of the data structures provided - or even the Clojure conses?


Well, it's nice to have a uniform cell structure, and you can't make them out of Clojure conses, because Clojure conses don't really have a cdr, not as such, as Clojure's cdr cannot contain arbitrary data.


Can't it? This seems to work just fine: (cons :a (cons '[x] (cons {:foo :bar} #{:x :y}))) and cljs.user=> (rest (cons :a (cons '[x] (cons {:foo :bar} #{:x :y})))) ;=> ([x] {:foo :bar} :x :y)

Maybe I don't understand what you mean. I also don't see why a uniform cell structure would be preferable?


The result you got isn't actually what you'd get with proper cons cells. With real conses, you wouldn't get a list, you'd get an improper list, or a pair, i.e.:

Scheme:

  (define x (cons 'a (cons 'b '())))
  ;> (a b)
  (define y (cons 'a 'b))
  ;> (a . b)
  (cdr x)
  ;> (b)
  (cdr y)
  ;> b
  (eq? x y)
  ;> #f
If Clojure doesn't behave like that, then it's not using real conses.

As for why a uniform cell structure is preferable, it's a preference thing, but I feel it makes dealing with lists and other cons-based structures (mostly lists, but you get the occasional alist or plist or tree in there too) much nicer.


Got you.

Once upon a time, there was a serious continuations-on-the-JVM proposal, but it fizzled. Rather sad.

Fixnums would also help Clojure.


UI is weird. Left/right goes between main topics, up/down goes through the slides on that topic. So if you just go right, right, right, you're only seeing the headings.


Over here in Firefox 52.0a1, something is cut off the margin of the slides.. all of them. I can briefly see the cut-off part when flipping to the next slide (vertically or horizontally).


Switching to fullscreen fixes it...


Use the n and p keys for next and previous respectively.

Use the ? key for the help and other useful keybindings.


I tried to get more into CL because of implementations like SBCL and because it's standardized, but modularization/package management turned out to be too verbose and archaic for my taste.

In Racket, by default a file creates a module, I import things, export stuff and done. That's really very handy, because I hate fiddling around with package declarations and paths -- these are a pain in the ass in almost every older language.


That's not very Lisp-like.

Personally I really don't like it when files are modules. For me that's orthogonal.


It's not necessarily 1:1 file:module.

`#lang foo ....` in name.rkt is (in part) a shorthand for `(module name foo ....)`.

A single Racket file can have 1 or more modules.

Each module can also have sub-modules.

Modules generalize runtime vs. compile-time to many kinds of time, including but not limited to "test time", "doc time", etc. Also they enable reproducible compiles by clarifying what "compile-time" actually means relative to other modules. It's more than "paste these s-expressions at the top-level prompt and auto-prefix some names."
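
A small sketch of the "many kinds of time" point: one file can carry a test submodule that is only instantiated at test time (e.g. by raco test), not when the enclosing module is required normally:

    #lang racket
    (provide double)

    (define (double n) (* 2 n))

    ;; Lives in the same file, but only runs under `raco test`.
    (module+ test
      (require rackunit)
      (check-equal? (double 21) 42))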

Racket has a wonderful module system and sometimes I miss that in Clojure, although I enjoy Clojure very much in other respects.


In Common Lisp, files are in fact "modules" (or "compilation units") also. To treat multiple files as one unit, you have to use the with-compilation-unit macro:

http://clhs.lisp.se/Body/m_w_comp.htm

The packaging systems people use nowadays over CL are not a part of CL. Whatever subjective suckage they introduce is their own. If you want racket-style modules, hack them up.

The grandparent's observation that "modularization/package management turned out to be too verbose and archaic for my taste" is very ironic --- in particular, the "archaic" part. ASDF is from around the turn of the century (the 21st that is).

The big hurdle in CL modularization is something very trivial: the fact that a file cannot refer to neighboring files easily by a short path. There is no

  (load "foo")
which will look for a "foo" in the same directory as the file which is invoking the load form.

This steers package management toward the external mode, whereby some definition exists outside of all the files and handles their inter-dependencies and the manner of actually locating the groups of files.

I fixed this in TXR Lisp, and so simple modularization is a cinch! You load some main file, and that just does (load "foo") (load "bar") ... to load its related files, no matter where they have been located.

I made load a macro, and that macro accesses the source file location at macro-expansion time, imbuing it into the resulting form that actually does the loading when evaluated. If the load path is relative, then the caller's path is used to resolve it, rather than the current working directory.


A compilation unit is not a module.

> The packaging systems people use nowadays over CL are not a part of CL

Packaging systems were never a part of CL.

> The big hurdle in CL modularization is something very trivial: the fact that a file cannot refer to neighboring files easily by a short path. There is no

    (load (merge-pathnames "test1" *load-pathname*))
If that's not short enough, define a function L which does above...

> This steers package management toward the external mode, whereby some definition exists outside of all the files and handles their inter-dependencies and the manner of actually locating the groups of files.

This is the way how to do it.

> I fixed this in TXR Lisp

Oh no..., well there have been zillions of similar attempts...


"The whatever subjective suckage they introduce is their own. If you want racket-style modules, hack them up."

I don't think it really makes sense to try to roll your own module system? I mean, the whole point is to be able to easily share code with the community, right?


> I don't think it really makes sense to try to roll your own module system?

Obviously Fare Rideau thought it was a good idea when he started hacking on ASDF. He could just have used Defsystem or whatever. It's very popular now, but its name stands for "another system definition facility" for Lisp!

If you don't like anything, roll your own.

Sharing with the community isn't the entire point of a module system; it's also to internally organize the software you're working on, whether you're just one hacker or a team of one hundred. Niklaus Wirth's Modula-2 language has a module system, and so does Ada. The concept of sharing modules with the community didn't even exist.

A module system as a hub for sharing is a relatively new thing: it's a fusion of the ideas from open source OS distro package management and language modules.


ASDF was originally not written by Fare Rideau. It was developed by Dan Barlow in 2001/2002. He did not use MK-DEFSYSTEM, because he wanted a free/open-source DEFSYSTEM with an implementation using more of CLOS and easier to maintain. ASDF was later developed into ASDF2 and 3 with a lot of work by Fare Rideau and others.


There are plenty of module systems already there.

If you want racket-style modules you can also have them by implementing them.


Great solution if you never want to use code anyone else has written. To be fair this seems like a pretty common mindset in CL-land.


ASDF as a system building tool is widely used in CL-land.

> To be fair this seems like a pretty common mindset in CL-land.

My Lisp Machine has around 60000 functions, and I have only written a tiny fraction of them... Strange. If nobody else has written them, where are they coming from?


I came from python and take the opposite view: it's nice to be able to look at a call to a function, and at least have a reasonable clue at where it's going.


If you like racket modules, you should look at package-inferred-system in ASDF. Not quite as little boilerplate, but close.


I think it is more of a timeline than a question of what is better/more powerful.


Who said there was only one?


Sure, there doesn't need to be only one. If one is more widely known it's common lisp.


I'm not so sure anymore. Scheme is at least taught in schools and isn't uncommon as a scripting language. Where are you likely to run into Common Lisp?


For me it was the Practical Common Lisp book.


So the main use case of the language is the handbook of that language? :)


For me it's at work.


I'm not a fan of Racket, as I've made clear elsewhere. Its insistence on using as many paradigms as possible frustrates me, and while I like "batteries included" in general, I wish some things like contracts, custodians, the OO system, etc. had been moved out to other libraries to make core easier to wrap your head around. I am also not a fan of syntax-case: it violates the macro abstraction, and is overly complex for what it does, IMHO.

In essence, I wished that core had picked a paradigm and stuck with it, instead of like 10 of them.

Practical or no, I'd take a simpler Scheme over Racket any day. Usually Chicken, which is quite practical, and has a lightning-fast call/cc implementation (non-delimited, with delimited implemented in terms of call/cc by an external library, if you want it).

But just because I don't like it doesn't mean you won't: objectively, it's powerful, technically competent, and fairly well designed. It's just not to my tastes, any more than CL is, and so it looks like I'll have to wait until R7RS Large before I get a large lisp I like.

And yes, I said lisp, not scheme, because Scheme is a lisp, and also to annoy the people who say otherwise.


Can you elaborate on how syntax-case "violates the macro abstraction", and maybe what you think is important about the quality it breaks? Not a description I've heard before. I do agree it is complicated!


The macro abstraction is that the AST is exposed as a series of lists, which is what macros process. Instead, syntax-case exposes 'syntax' objects, which are entirely distinct from any other datatype, and kind of complicated.

This isn't a horrific sin (new macro abstractions pop up like rabbits, although they rarely change something so fundamental, at least in Lisp macro systems), but if you're going to replace an abstraction everybody knows and understands with a more complex one, you better have a good reason, and I don't think the benefits outweigh the costs.
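
For readers who haven't seen it, a small sketch of what's being discussed: a syntax-case macro pattern-matches a syntax object (which carries source location and binding information) rather than a raw list.

    ;; The classic swap macro with syntax-case: `stx` is a syntax object,
    ;; not a list, and #'... builds new syntax objects.
    (define-syntax (swap! stx)
      (syntax-case stx ()
        [(_ a b)
         #'(let ([tmp a])
             (set! a b)
             (set! b tmp))]))

    (define x 1)
    (define y 2)
    (swap! x y)   ; hygiene: the introduced `tmp` can't capture user bindings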


I thought that the syntax objects resulted in better error messages, due to keeping track of source code position, lexical bindings for variables and so on. Isn't that worth the complexity?


Talking about CL, all the things that are done with syntax objects are done with environments (`&environment`). Type declarations, lexical bindings, etc. You can even process the same code in two different environments if this is useful. Other bindings from symbols to values can be stored in the symbols themselves: things like original code source (as a string; I recovered deleted code thanks to that in the past), original location, custom properties, etc.


So like the sc macro system, but more featureful. In theory, the sc macros could record this sort of data (maybe), but I don't think they record as much as syntax objects.


I'm not sure it is. As junke pointed out below, however, some of this can be achieved through other, less intrusive methods, such as environments, which are actually used by the sc macro mechanism. You don't get it all, but it's usually enough to know that the error was triggered in the expansion of a macro on line <n>. That's dramatically better than most C error messages already. :-D


A bit of a nitpick, but the use of dynamic-wind with a custodian in the slides bothers me. Dynamic-wind shouldn't be used directly to ensure cleanup, since a continuation captured in the body may be invoked after control has left the dynamic-wind, restarting the body even though the resources are already gone.

In the R5RS era, casual users looking for a try/catch construct in Scheme could only find dynamic-wind, so they used it; but the two aren't the same. Dynamic-wind is a low-level construct that can be used to implement the try/catch idiom, among other things.

Now that Scheme has guard, it's better to use that to write cleanup code. It's a bit verbose, though (you have to write the cleanup both in the handler and at the end of the body), so I personally wrote an unwind-protect macro on top of it and use that.
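A minimal sketch of what such a macro might look like on top of R7RS guard (not the poster's actual code; the name unwind-protect is borrowed from CL): cleanup runs on normal return and when an exception is raised, but, unlike a dynamic-wind after-thunk, it is not re-triggered by continuations jumping in and out of the body.

    ;; Hypothetical unwind-protect built on guard (R7RS).
    (define-syntax unwind-protect
      (syntax-rules ()
        ((_ body cleanup ...)
         (let ((result (guard (exn (else cleanup ... (raise exn)))
                         body)))
           cleanup ...
           result))))

    ;; Usage sketch (port is a placeholder binding):
    ;; (unwind-protect
    ;;   (read-line port)
    ;;   (close-port port))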


I really don't get the title of this. Why is it "beyond Clojure"? Maybe if we can see the actual talk it will be more clear.


Clojure ideas are beyond Clojure.


I need types. But Racket has the world's greatest macro system and everybody should imitate it. Period.


Have you tried Typed Racket? https://docs.racket-lang.org/ts-guide/
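For anyone who hasn't tried it, a tiny illustrative example of the flavor: you annotate definitions with types and the checker enforces them statically (occurrence typing handles the null? test).

    #lang typed/racket

    ;; Illustrative only: the annotation is checked at compile time.
    (: sum-list (-> (Listof Integer) Integer))
    (define (sum-list xs)
      (if (null? xs)
          0
          (+ (car xs) (sum-list (cdr xs)))))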


Somebody always asks :).

1) I want my libraries to be typed all the way down.

2) Improving the implementation of Typed Racket is harder/less likely than fixing the macro systems of Rust and Haskell.

3) Network effects of those two languages.


How am I supposed to read the slides? Top to bottom, left to right? It is not intuitive.


Top to bottom. The "top row" of slides are section headers. Go down from a section header to see the section. When you get to the bottom of a section, go right, and it will put you at the top of the next section.


I don't understand - is it just 8 slides? That's all?


I don't understand why this slide format is popular, but I believe you go down until the end of that column, then right, then down again, then right, and so on.


Or just press space to go to the next slide.


Let's try with other languages:

    Rust - C beyond Go
See any problem?


C isn't a big family of languages; it is a very narrow family. Rust and Go are not in that family. A proper analogy requires some close dialectal ties, for instance:

    ObjC - C beyond C++
What Rust and Go have in common is that they are Von Neumann model languages built around pushing word-sized quantities from memory through a CPU, like C, Pascal, Algol, Modula, ... thus:

    Rust - "Von Neumann Model Algol-Like Blub Programming" beyond Go
:)


Lisp is a family of languages that includes Racket and Clojure.

C is a language, not a family of languages that includes Rust and Go.

The Lisp family of languages also contains Common Lisp. In some contexts, it's reasonable to assume "Lisp" means "Common Lisp". This is not one of those contexts.


Newcomers are always confused by the distinction, and putting them all in the same big bag doesn't help at all. This classification is not wrong, but it's mostly useless when you see how the different "dialects" have evolved (they are grown-up languages nowadays).

> C is a language, not a family of languages that includes Rust and Go.

But Rust and Go are constantly compared to C. We always talk about C-like languages. Maybe I should have said "Algol" instead, but there is a family of languages rooted at "C", and we don't call it the C family.

Go is "a compiled, statically typed language in the tradition of Algol and C". "The syntax of Rust is similar to C and C++", but "Rust is semantically very different from C and C++.[citation needed]" (wikipedia).

Likewise, Scheme is semantically very different from Common Lisp. And Racket is not considered a Scheme, even though it is related to it. You can't copy-paste just any Typed Racket expression and run it with Chez Scheme.

So, yes, they all belong to the same proto-language family, but constantly referring to them as a unique family is doing a disservice to each of them.

Take Clojure, for example: the wikipedia page says that it is inspired by C++, C#, Common Lisp, Erlang, Haskell, Mathematica, ML, Prolog, Scheme, Java, Racket and Ruby. There are plenty of influences that go into a language; why not talk about the other ones? Parentheses?


> So, yes, they all belong to the same proto-language family, but constantly referring to them as a unique family is doing a disservice to each of them.

This is like saying that referring to dialects of English as "dialects" is doing them a disservice. If you are studying them, it is, if you are learning English as a second language, it is not.

If you look close enough, the differences are huge (after all, that's why someone created a specific Lisp dialect in the first place), but from a distance, they are all still pretty similar.

And finally, I have rarely witnessed any discussion about those differences among Lisp programmers; they were merely referred to when discussing how to implement something: "X-style Y", where Y is some CS concept and X is some Lisp dialect tailored to Y - and implementing anything in your preferred dialect is usually "trivial", though not necessarily performing as well as an implementation in a dialect tailored to the task.


> Newcomers are always confused by the distinction, and putting them all in the same big bag doesn't help at all. This classification is not wrong, but it's mostly useless when you see how the different "dialects" have evolved (they are grown-up languages nowadays).

Sadly, language is determined by usage, which only incidentally correlates to usefulness. And critically, there's a feedback loop here: a word is only useful as it's used, because otherwise people don't know what it means, and you've failed to communicate your intent. "Lisp" might hypothetically be more useful if it meant "Common Lisp", but it's not useful for meaning "Common Lisp" (at least not on Hacker News) because when you say "Lisp" people assume you're talking about all the languages with the parentheses, and you've failed to communicate.

> But Rust and Go are constantly compared to C. We always talk about C-like languages. Maybe I should have said "Algol" instead, but there is a family of languages rooted at "C", and we don't call it the C family.

Right, the terminology commonly used for that is "C-family languages", not "C".

> Go is "a compiled, statically typed language in the tradition of Algol and C".

Note how they don't say "Go is a C".

> Likewise, Scheme is semantically very different from Common Lisp. And Racket is not considered a Scheme, even though it is related to it. You can't copy-paste just any Typed Racket expression and run it with Chez Scheme.

Yes, but in a context not centered around Common Lisp, people use "Lisp" to refer to all these languages. You can argue whether that's good or bad, but you're not able to change it.

> Take Clojure, for example: the wikipedia page says that it is inspired by C++, C#, Common Lisp, Erlang, Haskell, Mathematica, ML, Prolog, Scheme, Java, Racket and Ruby. There are plenty of influences that go into a language; why not talk about the other ones? Parentheses?

Parentheses, macros, some functional programming constructs.

Look, I'm not saying the terminology is ideal. I'm saying that, in this context, "Lisp" doesn't mean what you and I want it to mean, and trying to change that will a) fail to communicate and b) fail to change the meaning of the word in this context.


I'll just leave two of my old comments here... https://news.ycombinator.com/item?id=10207199 and https://news.ycombinator.com/item?id=3423646

Short summary: I do think on HN it's reasonable to assume "Lisp" without further qualification refers to the Lisp family. The usefulness of the Lisp-family idea (which afaict amounts to having sexps and maybe something like defmacro) is questionable. Among Lisp and Lisp-family practitioners outside of HN, and maybe even on HN itself (it'd be interesting to see a poll result), Lisp means Common Lisp, as it basically has since Common Lisp was created, and the distinction with other lisps is usually made by saying things like "Clojure is a Lisp", that article "a" being important here to say X is a member of the Y family; it would be nice to have it in the slide title.

I agree with your argument about common usage but I'm not sure what the common usage on HN really is and what's a vocal minority using it incorrectly until it becomes common. But we've already lost several wars like that, in the wider culture ('literally' for one..) and in tech (a recent one that bugs me being 'isomorphic' JavaScript) so it's probably best to sigh in resignation and maybe think/complain about the lisp/lisp family stuff once every few years at most. ;)


> I agree with your argument about common usage but I'm not sure what the common usage on HN really is and what's a vocal minority using it incorrectly until it becomes common.

I just don't think "correctly" and "incorrectly" are words that have meaning when it comes to semantics. I care more about your goals. If your goal is to have "Lisp" mean "Common Lisp", then you can try to enforce that, but it's a strange goal to fight for, and probably not an achievable one.

Personally, my goal is to understand what people are saying and communicate my own ideas effectively. Toward that end, I just say "Common Lisp" or "CL" when I mean Common Lisp, and Lisp-family languages or Lisp-like languages when I'm talking about the larger group. And when reading, I use context clues to figure out which the person is talking about.

Certainly in the OP, it's clear that "Lisp" refers to the s-expression-y languages.


Here are some ways words can be incorrect: http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/

In the end it's not a very fulfilling battle even if won, to preserve the meaning of a word. I always liked the line "In the face of ambiguity, refuse the temptation to guess" as a guide to be more explicit and ask for clarification in general communication.


Yeah, 17-20 in that link do a better job of expressing what I wanted to express than I did. :)


An article titled "Lisp beyond Clojure" would indeed talk about the Lisp family of languages. What is very bad about that title is that it focuses on Racket, just as if the title were: "Racket - this is what Lisp looks like beyond Clojure". And that is the main problem I have with the title. The ideal title would IMO be "What could Clojure learn from Racket".


If you understand what the title means well enough to object to its usage, that's solid proof that the title has communicated effectively.

> What is very bad about that title is that it focuses on Racket, just as if the title were: "Racket - this is what Lisp looks like beyond Clojure".

That's your own interpretation--you could easily have interpreted it as "Racket--an example of a Lisp beyond Clojure".


I can understand and consider the choice of words to be bad.


But on what basis would you consider them to be bad?


I agree with you about actual usage, but your wish for Lisp==CL doesn't make sense either. Is LISP 1.5 "Lisp"? What about MacLisp or InterLisp or any of the other languages that both predated Common Lisp and had "lisp" in their name?

The people who created Common Lisp didn't think that it was coextensive with "Lisp", so I'm not sure why that would have become _more_ true in the last 30 years.


Lisp 1.5, Maclisp, Interlisp and CL are all Lisp. They share a common core. Common Lisp can run Lisp 1.5 code with no or little changes. The Common Lisp Object System was developed from Interlisp's LOOPS. Interlisp-D had Common Lisp and Interlisp running in one Lisp image side by side. Maclisp and Common Lisp shared complex macros like the LOOP macro in one source file at MIT.

Common Lisp even runs pre-Lisp 1.5 code. See for example:

https://gist.github.com/lispm/93bba58caf3d3c7aab3b

That's original McCarthy code from 1960 in s-expression syntax. Runs mostly unchanged.


This is frequently held up as an example of CL being more true to McCarthy's vision than Scheme. The counterargument is that code compatibility is a poor measure of conceptually realizing McCarthy's ideals--rather Scheme's more academic approach has caused it to compromise less on McCarthy's vision. Where CL compromised for pragmatic reasons, Scheme has kept purer abstractions, even improving on McCarthy's original Lisp. In this argument, Scheme results from a deeper understanding of an idealized Lisp, whereas CL is a pragmatic compromise that allows Lisp to be used for industrial applications.

Personally, I don't take a side in this debate, because I don't care.


> realizing McCarthy's ideals--rather Scheme's more academic approach has caused it to compromise less on McCarthy's vision.

What were those 'ideals'? What was that vision?

> even improving on McCarthy's original Lisp

Common Lisp improved the original Lisp, too. It vastly expanded into the areas which were interesting for McCarthy: AI programming. Common Lisp was the base for thousands of research & development projects. That was McCarthy's vision: a tool for AI research.

>In this argument, Scheme results from a deeper understanding of an idealized Lisp

No, Scheme went away from the idealized Lisp. -> R6RS.

> Personally, I don't take a side in this debate, because I don't care.

Yeah, sure.


> What were those 'ideals'? What was that vision?

I don't know that.

> That was McCarthy's vision: a tool for AI research.

You don't know that.

> No, Scheme went away from the idealized Lisp. -> R6RS.

Neat! I'm glad we cleared that up.

EDIT: Okay, maybe "You don't know that" is a bit strongly worded; there's no doubt that that was part of McCarthy's vision, but there's a great deal more to Lisp than that, and notably, Scheme is also a pretty good tool for AI research. The real fundamental point I'm making is that the "Common Lisp is the One True Lisp" vs "Scheme is the One True Lisp" vs "There Are Many True Lisps" debate is based on unclear goals that nobody really knows. McCarthy is dead, and it's questionable whether he was the sole proprietor of the "ideal Lisp" concept anyway.

Or put more tersely, I don't care.


> You don't know that.

Sure we know what McCarthy developed Lisp for: as a tool for AI research. It's already mentioned in the very first paragraph of the 1960 paper on Lisp.

Sure there are several Lisp dialects and a bunch of derived languages, like Scheme.

> Or put more tersely, I don't car.

Yes, sure. car and cdr.


> Sure we know what McCarthy developed Lisp for: as a tool for AI research. It's already mentioned in the very first paragraph of the 1960 paper on Lisp.

> Sure there are several Lisp dialects and a bunch of derived languages, like Scheme.

Sure, that's one thing that it was when McCarthy started developing it, but the original Lisp paper doesn't say "only" and you don't know how McCarthy's intentions developed as the language developed.

> > Or put more tersely, I don't car.

> Yes, sure. car and cdr.

Hah! That was an entirely unintentional pun.


> Sure, that's one thing that it was when McCarthy started developing it, but the original Lisp paper doesn't say "only" and you don't know how McCarthy's intentions developed as the language developed.

McCarthy later did not care too much about Lisp development and direction himself, since he was working on core AI topics then. But the field of AI caused massive investment into Lisp during the 70s and 80s, until it died out in the early 90s. We are talking about something like 2 billion USD.


Ugh, when you say "Lisp" you're talking about "Common Lisp"?

The AI industry has invested in a lot of languages, including Scheme, but also stuff like C++/Java/Python/OCaml. Google alone has trivially invested more than $2 billion in Python AI, and I wouldn't be surprised if Jane Street has invested more than that in OCaml AI. So I'm not really sure what you're trying to prove here.


> Ugh, when you say "Lisp" you're talking about "Common Lisp"?

When I say Lisp, I mean Lisp dialects: Maclisp, Interlisp, Standard Lisp, Franz Lisp, EULisp, Euslisp, LeLisp, ISLisp, Common Lisp, ...

But not Scheme, Racket, Logo, Dylan, Javascript, Clojure, Ruby, Python, Mathematica, Perl, TCL, ...


There isn't really a C family; maybe there's a not-quite-C family that includes the C++/Java/C# languages (but not C). C isn't very statically typed. C++/Java/C# are object-oriented (or sort of, for C++) with generics, statics, inheritance, a few functional features, and limited type inference. Similar looking syntax.

Rust syntax isn't really that close to C; I think it's closer to ML.


> Similar looking syntax.

Exactly my point.


Lisp is a language family. It'd be more like "F#: ML beyond SML" or something.



