> Despite my decades of dynamic typed languages, I hate going back to dynamic languages anymore. YMMV.
Mine does vary - while static typing is helpful, it still (even with more advanced type systems) leads to boilerplate code that I dislike writing. In a compiler written in OCaml that I worked on for a bit, there were hundreds of lines of code dedicated just to stringifying variants. It could have been generated by a syntax transform (the newer tools for this are actually quite good), but that's another dependency and more cognitive overhead. In Kotlin, the lack of structural types means that rabid "clean architecture" fans create 3 classes for each piece of data, with the same 10 fields (names and types), plus methods to convert between those classes - 10x as much code for very little gain. The lack of refinement types leaves the type system mostly unable to encode anything about numeric values, other than the min/max of a given type. Kotlin has reflection (OCaml doesn't) that you can use, but then we're back to everything being an Object/Any, with runtime downcasts everywhere.
I think gradual type systems are a good compromise, for now at least. I'd prefer the Typed Racket approach of clearly delineating typed and untyped code while generating dynamic contracts from static types when a value crosses the boundary. Unfortunately, that's not going to work for existing languages, so the next best thing is something like TypeScript or mypy.
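A minimal sketch of that boundary in Python (function names are mine, just for illustration): the annotated function is checked by mypy, while the unannotated caller stays dynamic - and, unlike Typed Racket, nothing inserts a runtime contract when a value crosses over.

```python
def mean(values: list[float]) -> float:
    # Typed code: mypy verifies annotated callers against this signature.
    return sum(values) / len(values)

def untyped_caller(data):
    # Untyped code: `data` is inferred as Any, so mypy accepts this call
    # no matter what is passed; no runtime contract guards the boundary.
    return mean(data)

print(untyped_caller([1.0, 2.0, 3.0]))
```

The lack of a generated runtime check is exactly the gap the Typed Racket approach closes.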
Of course, convenient, hygienic, deliberately Turing-complete compile-time execution and macros would, to some extent, alleviate the problems a simplistic type system causes. Good examples are Haxe, Nim, Rust, Scala 3, etc. Without such features, though, I'm not willing to part with the runtime reflection and metaprogramming facilities dynamic languages provide - the alternative is many more lines of code that need to be written (or generated), and I don't like that.
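As a small illustration of the runtime-metaprogramming side of that trade-off: in Python, the stringification boilerplate mentioned above mostly disappears, because `@dataclass` inspects the class annotations at class-creation time and synthesizes the methods.

```python
from dataclasses import dataclass

# Runtime metaprogramming: @dataclass reads the field annotations when the
# class is created and generates __init__, __repr__, and __eq__ for us -
# no hand-written stringification code.
@dataclass
class Point:
    x: int
    y: int

print(Point(3, 4))  # Point(x=3, y=4)
```

The same effect in OCaml or Kotlin requires a syntax transform, a compiler plugin, or hand-rolled code.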
---
More to the topic: logic variables. The `amb` operator from Scheme, for example, or what Mozart/Oz has, or Logtalk, or Prolog of course. They're a powerful, incredibly succinct way of solving constraints without writing a solver (just state the problem declaratively and you're done - as close to magic as it gets). No popular language offers an internal logic DSL, although there are some external DSLs out there.
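A toy `amb`-style sketch in Python (my own helper, not a real library, and far weaker than Prolog's backtracking): exhaustively try every combination of choices and keep those satisfying the stated constraint.

```python
from itertools import product

def amb(constraint, *choice_sets):
    """Yield every combination of choices that satisfies the constraint."""
    for values in product(*choice_sets):
        if constraint(*values):
            yield values

# State the problem declaratively: digits x, y, z in 1..4
# with x + y == z and x < y.
solutions = list(amb(lambda x, y, z: x + y == z and x < y,
                     range(1, 5), range(1, 5), range(1, 5)))
print(solutions)
```

A real logic DSL would prune the search space instead of brute-forcing it, but the "just state the problem" flavor comes through.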
Also, coroutines. No more manual trampolining, no need for nested callbacks; the state of execution can be saved and resumed later mostly transparently. Lua has them built-in, and Kotlin implements a CPS transform in the compiler. Nowadays almost all popular languages provide them, mostly exposed as async/await primitives. Scheme and Smalltalk could implement them natively inside the language and did so for ages; it's nice to see mainstream languages catch up.
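Python's generator-based coroutines show the "save and resume" idea in a few lines - local state survives across suspensions with no manual scaffolding:

```python
def counter():
    # `total` is ordinary local state; it is preserved across each
    # suspension at `yield` with no explicit struct or callback.
    total = 0
    while True:
        n = yield total   # suspend, then resume with the sent value
        total += n

c = counter()
next(c)       # prime: run to the first yield, which produces 0
print(c.send(3))  # resumes, adds 3, yields 3
print(c.send(4))  # resumes, adds 4, yields 7
```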
REPLs. Not a language feature per se, but an implementation decision that has a big impact on productivity. They're relatively commonplace now - even Java has jshell - but most REPLs are pretty bad at executing "in the context" of a project or module. Racket, Clojure, Common Lisp, Erlang, and Elixir are the gold standard, still unmatched, but you can get pretty far with Jupyter notebooks.
Destructuring/pattern matching. It was gradually added in simplified forms (mostly destructuring sequences) in many languages, then support for wildcards and splicing, then support for hashes/dicts, and now Python finally has a proper `match` statement. I think more languages will implement it in the near future.
Some sort of coroutine solution is definitely on my list too. I'm actually not too passionate about which one it is, except for a distaste for async/await on the grounds that the compiler ought to be able to do it for me. But generators, threads or actors cheap enough to use freely, coroutines - something that allows me to break out of the strictly hierarchical structured-programming system and retain some degree of state within a function when I need to. It's possible to hack something together in a language lacking this by moving all function state into a struct/object, but all the manual scaffolding is painful and error-prone.
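The contrast between the two approaches can be sketched in Python (both versions are mine, purely illustrative): the same stream of squares, once with state hoisted into an object by hand, once as a generator where the loop variable simply persists.

```python
from itertools import islice

class Stepper:
    """The manual hack: execution state moved into an object field."""
    def __init__(self):
        self.i = 0
    def next_value(self):
        self.i += 1
        return self.i * self.i

def stepper():
    """The coroutine version: the local variable keeps its state for us."""
    i = 1
    while True:
        yield i * i
        i += 1

print(list(islice(stepper(), 3)))  # first three squares
```

For two lines of logic the class is tolerable; once the function has branches and nested loops, flattening its control flow into object fields by hand is exactly the painful scaffolding described above.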