I have often thought that programmers can simply choose to make Rust easy by using a cyclic garbage collector such as Samsara. [1] If cyclic GC in Rust works as well as I think it can, it should be the best option for the majority of high-level projects that need fast development at the cost of slightly lower efficiency. I suspect we'll see a "hockey stick" adoption curve once everyone figures this out.
I am still waiting for a scripting language to be bolted on top of Rust. Something that will silently Box all the values so the programmer does not have to think about the Rust specifics, but can still lean on all of the Rust machinery and libraries. If performance/correctness becomes a problem, the scripting layer could be replaced piecemeal with real Rust.
Perhaps you mean to say that you're waiting for a new scripting language to be created that's designed to be "almost Rust." That could be interesting! OTOH, the bindings for existing languages have matured significantly.
I definitely am thinking of something more Rust-forward. As Rusty as possible without having to worry about lifetimes, the borrow checker, whatever. A huge performance hit is acceptable, so long as it remains trivial to intermix the Rust and scripting code. Something that gives a smooth on-ramp to push the heavy bits into pure Rust if required. The Python+C strategy in a more integrated package.
You're very much describing the PowerShell -> .NET -> C# path, so I would be curious to hear your take there. There's also the mad-lad project to support Rust on .NET: https://github.com/FractalFir/rustc_codegen_clr/
I know. We're all just rediscovering Lisp in our own way.
... And yet the fact that most of us know we're reinventing Lisp, and still doing it anyway, says something. I guess it says that we're just trying to get our jobs done.
Lisp is a language family, not one specific language. Do you have a particular one in mind? There are many languages that can be called Lisp which are different from each other, and some have multiple implementations.
Mainstream Lisp dialects have had objects other than lists for many, many decades. The LISP I Programmer's Manual from 1960, which documents the original language that started it all, already describes zero-based arrays.
In some Lisp-like languages, such as Janet, the syntax processing itself is based on arrays: the parenthesized notation turns into a nested array, not a nested linked list.
In Lisps where the syntax is based on lists, that doesn't imply that your program has to work with lists at run time. The code transformations (macros), which happen at compile time, are what work with the linked lists.
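A tiny illustration (the names here are made up): the macro below receives its arguments as a linked list at macro-expansion time, but the code it produces touches only a vector at run time.

    ;; INDICES is a linked list, but only while the macro expands
    (defmacro sum-slots (vec &rest indices)
      `(+ ,@(mapcar (lambda (i) `(aref ,vec ,i)) indices)))

    ;; (sum-slots v 0 2 4) expands to (+ (aref v 0) (aref v 2) (aref v 4));
    ;; no list is traversed when the compiled program runs.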
Budding computer scientists and engineers like to write toy Lisp dialects (sometimes in one weekend). Often those languages work only with linked lists and are interpreted, meaning that the linked lists representing the code structure are traversed to execute the program, and traversed repeatedly in the case of loops.
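Something like this minimal sketch (illustrative only, not any particular project):

    ;; a toy list-walking evaluator of the kind described above
    (defun toy-eval (form env)
      (cond ((numberp form) form)                    ; literals evaluate to themselves
            ((symbolp form) (cdr (assoc form env)))  ; variables: alist lookup
            ((eq (first form) '+)                    ; (+ a b ...): evaluate and sum
             (reduce #'+ (mapcar (lambda (f) (toy-eval f env)) (rest form))))
            (t (error "unknown form: ~S" form))))

    ;; (toy-eval '(+ x 2) '((x . 40)))  ; => 42
    ;; the conses of the form are re-walked every time it is evaluated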
(If you're making remarks about an important historic language family based on familiarity with someone's toy Lisp project on GitHub, or even some dialect with an immature implementation, that is a gross intellectual mistake. You wouldn't do that, would you?)
Linked lists may "kind of suck" on cached hardware with prefetch, but that doesn't prevent them from being widely used in kernels, system libraries, utilities, and language run-times (internally, even in the run-times of languages not known for exposing linked lists to the programmer). C programmers use linked lists like they are going out of style.
The most popular Lisp dialects are linked-list based (Common Lisp, Scheme, I think Guile as well).
No need to be pedantic. Obviously I'm not talking about a random toy Lisp someone hacked together.
Linked lists have their uses, obviously, but being the core data abstraction for your entire language kinda sucks nowadays.
I'm talking about Lisp the language, not the philosophical concept.
When people just say "Lisp" referring to a specific language, you can safely guess either Scheme or Common Lisp.
The dialects you mentioned have a list-based syntax. They are list based in the same way that C++ is token based. (Actually, I believe this is not strictly true of Scheme, which is defined from the character level up by a grammar, like many other programming languages. Bootstrapping compilers for Scheme have been written that do not read the program as a nested list. Those features of Scheme that calculate list structure, like quotation and quasi-quotation, have to do that at run time, of course, but that doesn't require their syntax to be treated as a list during compilation.)
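To make the quasi-quotation point concrete: a form like the one below constructs a fresh linked list every time it runs, regardless of how the compiler represented the source text.

    (let ((x 2))
      `(1 ,x 3))  ; => (1 2 3), a fresh list built at run time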
You say you're not talking about a random toy Lisp someone threw together. Yet those kinds of projects are the ones that have lists as the core, or perhaps the only, data abstraction for the entire language. If we search for the Lisps that make your remarks correct, that's mainly what we find.
I think this is a rare exception among production Lisps. One notable example is PicoLisp. People take it seriously and use it, so we can't call it a toy; yet it does almost everything with lists.
When people say Lisp nowadays, no, you cannot guess that it's Scheme or Common Lisp. It could be Clojure, or Fennel, or others.
Scheme and Common Lisp are very different languages.
From the table of contents of the Common Lisp specification you can see that the language prominently describes: CLOS objects, structures (records), condition objects (-> errors), symbols, packages (namespaces for symbols), multi-dimensional arrays, strings, hash tables, files, streams, ...
None of these standard data structures are linked list based.
For example, when I write a Lisp form to define a structure, a record-like data structure:
(defstruct packet
  sender
  receiver
  header
  payload)
then the SOURCE is an s-expression, a linked list.
DEFSTRUCT is a macro which defines a record data structure and a bunch of functions for it (accessors/getters, a constructor, a type predicate, ...).
The Lisp compiler will expand the macro form into a much larger s-expression -> again a nested list.
The compiler will then process lists and a lot of other data structures (see above) and create MACHINE CODE for the code defined by the record definition above.
Structures themselves are by default VECTOR-like objects with static access to their components. A getter accesses a fixed offset into the record, and the code for that will usually be inlined into the calling code.
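For instance, the PACKET definition above gives us all of this for free (the slot values here are just made-up sample data):

    (defvar *p* (make-packet :sender "alice" :receiver "bob"))
    (packet-sender *p*)  ; => "alice", a fixed-offset read, not a list traversal
    (packet-p *p*)       ; => T, the generated type predicate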
So we have two aspects:
* processing linked lists on current CPUs is several orders of magnitude faster than on the machines where Lisp was originally defined. For most use cases on modern machines the overhead does not matter; for example, any Apple Silicon machine is great for running Lisp.
* Lisp offers many other data structures, which are widely used in Lisp applications.
For example, if I needed a bit vector, I would not use a linked list of numbers but a real bit vector:
CL-USER 1 > (describe #*0000010010000011000000000)
#*0000010010000011000000000 is a (SIMPLE-ARRAY (UNSIGNED-BYTE 1) (25))
CL-USER 2 > (sbit #*0000010010000011000000000 5) ; fetch the bit at index 5 (zero-based)
1
Here the operations are written as lists, but they operate on real vectors of bits.
The result is that optimizing Common Lisp compilers can generate code that is fast enough for many applications.
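A minimal sketch of what that looks like; the generated code is implementation-specific, but with declarations like these an optimizing compiler such as SBCL can typically use unboxed float arithmetic:

    (defun dot (a b)
      ;; the type and optimize declarations let the compiler
      ;; open-code the array accesses and the float math
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3)))
      (let ((s 0d0))
        (declare (type double-float s))
        (dotimes (i (length a) s)
          (incf s (* (aref a i) (aref b i))))))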
So, in Common Lisp, is the linked list the "core data abstraction for your entire language"?
That would be misleading. The "entire language" has many more data structures which are not built on top of linked lists. For example, arrays (strings, vectors, bit vectors, multidimensional arrays) are part of the language, are widely used, and are not made of linked lists.
[1] https://github.com/chc4/samsara