Change takes time! This is a great example, as he has specific instances of evolving syntax to describe the same thing, semantically.
The big problem, of course, is that code hangs around. Can you evolve your syntax as people get used to it? Not only are you adding work to your parser, but people who use your language also have to deal with the multiple iterations of the syntax that linger in various codebases. This is one reason I detest dealing with Perl codebases.
Anyhow, as an exercise of evolving syntax while keeping the underlying semantics, I find Reason [1], a new syntactic layer over OCaml, particularly interesting.
> Can you evolve your syntax as people get used to it? Not only are you adding work to your parser, but people who use your language also have to deal with the multiple iterations of the syntax that linger in various codebases.
Clippy [1] helps a lot in this case: it helps people write idiomatic Rust, and it's updated when new syntax introduces new ways to do things, so the old way becomes deprecated _de facto_.
Syntax is one of the easiest things to migrate, because it can be statically detected, though it could be a bit trickier with optional types and pattern matching.
If your company decides not to use try!, then you can set that up (and have an opt-out for when you "really" know what you're doing).
Reading about Milo reminded me of the MS Office Ribbon. And now I wonder if there are reflections or theories about social fluid dynamics and how to redirect a flow of people into something radical-looking without creating turmoil.
The same site has this later post about the transition process for a big change, which partially answers your question: http://www.ribbonfarm.com/2014/09/24/the-rhythms-of-informat.... It's a very alien worldview for most developers, who tend to focus all their attention on designing a feature and none on designing the transition process for the feature.
I often feel like I'm the only software developer in the world that thinks most programming languages are evolving in the wrong direction.
Our civilization is going to end up with countless dead languages -- dead programming languages, that is -- because future generations won't know how to read the increasingly terse and arcane syntax that is all the rage these days. Either that, or the effort needed to read these crazy languages will be considered an ill-advised investment of resources. We'll end up reinventing the wheel indefinitely, but in different languages.
I would like to see much, much simpler programming languages, even if they're more verbose, as long as the result is improved readability and longevity of the code written in said languages. I think some of the older languages are far superior in many ways, if only because, with some of them, you can actually read and understand the code without even any formal training in the language.
> I would like to see much, much simpler programming languages, even if they're more verbose, as long as the result is improved readability and longevity of the code written in said languages.
I'm really sad to see this misconception over and over on HN. Verbosity is so much worse than syntax complexity, because you end up having dozens of different patterns that do the same thing overall but with subtle differences in edge cases.
If you look at JavaScript pre-ES6, you had at least 10 ways of doing OOP, with really different behaviors when it came to inheritance or encapsulation. This makes the code so hard to understand.
ES6 formalizing a specific semantics for OOP was a godsend.
As long as you introduce syntax to help with what people are actually doing (and not just because it looks cool), syntax additions enhance code readability.
JavaScript's OO problem was never syntax or verbosity, it was that it's a prototype-oriented language. Any language like that will invite conflicting ways of doing OO. The new ES6 class stuff adds yet another way -- but one that at least corrals developers into following a single official convention. The underlying object model is, for better or worse, the same.
Ada and Dylan are good arguments against the notion that verbosity is harmful.
> I'm really sad to see this misconception over and over on HN.
Please don't be rude.
> Verbosity is so much worse than syntax complexity, because you end up having dozens of different patterns that do the same thing overall but with subtle differences in edge cases.
There's a level of verbosity/terseness that's ideal for (your average) human. I don't know precisely what that level is, but I think many programming languages, such as C++ and Rust, have stepped way past that line.
> If you look at JavaScript pre-ES6, you had at least 10 ways of doing OOP, with really different behaviors when it came to inheritance or encapsulation. This makes the code so hard to understand.
The problem with JavaScript OO was, in my opinion, the use of prototypes.
> As long as you introduce syntax to help with what people are actually doing (and not just because it looks cool), syntax additions enhance code readability.
> I don't know precisely what that level is, but I think many programming languages, such as C++ and Rust, have stepped way past that line.
There are no languages in the same domain that don't have a comparable level of syntactic and semantic complexity, given the intrinsic nature of the information they need to encode.
And I would rather take sigils than use a bunch of different keywords every time I need to talk about lifetimes and pointers.
To date I have not seen languages with comparable complexity expressed with more elegant syntax. Some domains just have irreducible complexity.
The use of prototypes, excessive falsy values, function scope, for-each looping over object properties vs. arrays, and undefined vs. null are just a few things that will confuse people who see a Java-like name and syntax and expect Java-like behavior. Maybe if you learn JavaScript first it makes sense, but for everybody coming from other languages with C-like syntax, it is just too surprising.
It's a trade-off. Obviously JavaScript's main problem is that it is too dynamic, classes or not. Classes can make code more readable, though, but JavaScript still lacks a way to declare private members explicitly. They should fix that ASAP.
> Verbosity is so much worse than syntax complexity, because you end up having dozens of different patterns that do the same thing overall but with subtle differences in edge cases.
So vague.
> As long as you introduce syntax to help with what people are actually doing (and not just because it looks cool), syntax additions enhance code readability.
So you are saying Rust's syntax gets it just right. But that says nothing about the design of other languages. Just open your mind.
Keep it as simple as possible but don't try to make it simpler than it really is.
Programming languages can make things more complicated, and having worked in C++14, Python, and VBA, I can tell you that VBA does not come out on top even though it is seemingly simple. Compared to Python, VBA lacks so much ecosystem and so many language features that you will have a hard time even parsing XML, processing loosely structured data, and serving it via HTTP.
Looking at python vs. VBA and C++11 vs. C++03, I'd say that programming languages have generally been moving in the right direction.
When I was young, it was all about the "next guy." You iterate on it to make it clean, make it simple, make the docs right, etc because you wanted to set up the next guy for success debugging your shit. It was a code of honor, we are all the next guy in a way. You don't hear about it as much any more. Working and getting it done quick seem to outweigh real craftsmanship.
There are certainly newer domain-specific languages that fit your criteria, such as Go.
It can be verbose due to its lack of expressivity and its syntax.
But simplicity to me is more about semantics. Should a successor to Javascript or PHP feature the same equality table or even be Turing complete?
There are general purpose languages that enable people to author complex programs because they have terse syntax and are highly expressive. In such cases the programs are complex but the languages are often semantically simple and consistent (e.g. Haskell).
I certainly disagree in the case of languages like C++, where, for example, they are attempting to combine the semantics of runtime memory management with the lambda calculus.
Some examples might be COBOL, Java, BASIC, and Logo.
Of course, reading any program still requires a mind accustomed to logical thought. A subsistence farmer has a different sort of "no formal training" than an 18-year-old taking Chemistry courses.
In any case there is no need to worry...the important thing is the ideas not the syntax. And the useful ideas tend to be conveyed through time in multiple languages.
I can't speak for COBOL and Logo... but Java, really? When you're looking at a hello-world program in it, 80% of it doesn't make sense to someone not familiar with the concepts: import? class? static? void?
BASIC is even worse in many respects. It starts out deceptively simple, but do you think that someone looking at these two lines:
PRINT a, b;
PRINT a; b
would be able to tell the difference? Or, say, what does this do?
LINE (0, 0) - (100, 100),, BF
(no, it doesn't draw a line)
And then if we're talking about classic BASIC, you have to remember that A% is an integer while A$ is a string, etc. None of that is at all obvious.
Or, say, you see this:
NAME x AS y
A reasonable guess would be that it renames a variable, or maybe creates an alias, right? But no - it actually renames a file with a name corresponding to a value in variable x, to a new name corresponding to a value in variable y. And many BASIC dialects will even helpfully stringize it for you, if the variables were, say, integers.
No you are not the only programmer who thinks that languages are going in the wrong direction. For example, the lambda operator of Java 8. All it does is hide the (very useful) name of the method being called. I'm forever having to look up the definition of some arcane single method interface to ascertain the method name so I can have a clue as to what the -> operator is doing/calling. :(
Wait what? In context of the typical usage of lambda expressions, the method name does not matter at all. I mean, what difference does it make to you that if it's a Runnable the method is called run(), if it's a Consumer the method is called accept(), and if it's a Function the method is called apply()?
OTOH the usual, pre-Java-8 way of allocating anonymous subclasses when you mean to pass a function is at best obscuring the meaning of your code...
We're going to have to agree to disagree here. I think there are languages that do what you want (Go seems like a good recent example), but in general terms I find that keeping the "reading age" of code low makes comprehension in the large more difficult. Why, for instance, do I keep reading a for loop followed by an if followed by a continue when filter/where exists?
Again, this is just a difference in attitude, but I find verbosity in general to be cognitive noise that would be better spent on precise, higher-level constructs.
I think you can let your fears go :) See, science gets more and more complicated every decade. It's much more difficult to learn all of math today than it was in the Middle Ages. But we are not buried under all of this new research.
Following this rule, maybe it's time to add a shorthand for std::unique_ptr<> to C++ already. I'm not really a fan of asterisks or ampersands either, but 17 characters, 5 of them requiring the shift modifier, for something you write that often quickly becomes annoying.
Which doesn't work, even in Python. Python 1.0 could iterate over various collections with straightforward syntax that still works. No such programmer would recognize the functional style that's taken over for iteration in the modern language.
Likewise the evolution from list comprehensions to generators left us with some terribly overlapping syntax.
Can't argue with that, but what I really said was that, in contrast to the article and Stroustrup's rule, if I were to design a language I would put "one way to do it" very high in my priorities.
Going with "let's have explicit syntax now until the users get familiar with it, and we will make it more terse later" is not a good idea IMHO.
But I do understand and appreciate that sometimes evolutionary changes will bring in duplication, so I am in no way attacking Rust and the specific example of the evolution of the error handling syntax.
> Going with let's have "explicit syntax" now until the
> users get familiar with it and we will make it more
> terse later is not a good idea IMHO.
I don't think this is a conscious decision. IMO, when you're first designing a language, you don't necessarily know what idioms are going to be most common, so you use special syntax sparingly, and force users to be explicit everywhere else. Then, once you have experience with how people use the language and can empirically observe what idioms are popular, you favor those idioms with shorthand syntax.
Getting back to the specific example in the OP, for a long time there was a discussion on whether Rust should use the question mark for some dedicated syntax (having removed the C-style ternary operator in 2012 as redundant with `if`). But people disagreed on what to use it for: some wanted to use it to designate functions that return booleans (like Ruby does); others wanted to use it to designate functions that return Option (my original stance, way back when); others wanted it as sugar for constructing and typing Options (as Swift eventually did); some wanted a C#-style coalescing operator. But after years of experience it turns out that none of the above are especially prevalent in Rust, especially once the community embraced the Result type (which matured relatively late in Rust's development) for most of what Option had originally been used for. In retrospect having a terse way to handle Results is an obvious win and I absolutely adore the new `?` operator, but it takes time to produce the evidence that such things are truly worth their weight.
I don't think you're wrong, I just think that you're overlooking that not all features eventually receive terse notation. Some stay loud and explicit forever! Only a few, relatively important features receive terse notation, and knowing which features those are requires observing how the language is used.
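To make that trade-off concrete, here is a minimal sketch (the function names are my own, purely illustrative) contrasting the fully explicit match that early Rust code spelled out with the terse `?` form that eventually won:

```rust
use std::num::ParseIntError;

// Explicit style: match on every fallible step and propagate manually.
fn double_explicit(s: &str) -> Result<i64, ParseIntError> {
    let n = match s.parse::<i64>() {
        Ok(n) => n,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

// Terse style: `?` propagates the error for us.
fn double_terse(s: &str) -> Result<i64, ParseIntError> {
    Ok(s.parse::<i64>()? * 2)
}

fn main() {
    assert_eq!(double_explicit("21"), Ok(42));
    assert_eq!(double_terse("21"), Ok(42));
    assert!(double_terse("oops").is_err());
}
```

Both functions behave identically; the `?` version just removes the boilerplate that the match version spells out, which is why the shorthand earned its keep once Result-based error handling became ubiquitous.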
I think most languages go through a sort of growth phase (or multiple) where the users of the language grow and learn new patterns the language affords. As they learn, the style (idiomaticity??) of the language evolves. Expecting the language designers to anticipate this style from the outset seems almost impossible. I think the approach of starting with a possibly verbose base that provides the options needed and then boiling it down seems appropriate. But IANALD (language designer). :-)
Let's use Rust as an example. While I understand what a lot of operators in Rust do, like let X : Y = Z or !foo, I personally adore the Python habit of starting with a word-based grammar and graduating common functionality into more obscure abbreviations (def) or glyphs (%).
Using those examples, let X as Y = Z is just one more letter, but it eliminates any confusion for someone unfamiliar with the syntax, since it's fairly distinct from other C-like languages.
It's also not particularly cumbersome for a language to support both a ! operator and the keyword "not" for logical negation.
Same with : and "in" in for loops.
That, and syntax features like whitespace significance, are IMO a great basis for good style.
For a language like Rust where it is now, adding these optionally would greatly increase my enjoyment of the language, because I generally like code that I can read like Python, despite having half a decade of C++ experience, so it's not for lack of benefiting from the terseness.
Whenever I come across someone whose favorite language is Python, their argument is usually that "my pseudocode is basically Python!". While Python is certainly not my favorite language I really adore its syntax. A joy to work with. The wordiness and significant whitespace are features to me. For any feature, the syntax is usually the simplest and most obvious thing you can think of. No reason to look things up; guessing is actually a viable alternative! It's the lack of compile-time checks, support for functional programming, run-time weirdness and slowness that I don't like. It's an excellent replacement for shellscript, though.
Ruby is even better as a shell-script replacement. It has things like backticks for command execution, built-in regex operators, string interpolation, and other nice shortcuts. Python is a bit too rigid/dogmatic.
Now Python has asynchronous list comprehensions, for which there seem to be few use cases. If Python supported compute parallelism it might be useful, but Python async is only for async I/O.
>There should be one-- and preferably only one --obvious way to do it.
The Zen of Python isn't saying "only have one way to do anything"; it says "have at least one obvious way, and preferably only one obvious way, to do it".
There can be lots of ways to iterate over a list:
[x for x in l]

for x in l:
    x

it = iter(l)
while True:
    try:
        x = next(it)
    except StopIteration:
        break
Maybe languages should provide a starter pack, not batteries (as in, libraries made noob-friendly). Just enough vocabulary in a module to cover most needs.
It only works when your function returns an error of the same type as the expression inside the try! macro or ? operator. So in a lot of cases it doesn't work, or it forces the programmer to return the underlying error type instead of an error that belongs to the current level of visibility (in other words, it provokes the programmer to violate encapsulation for the sake of laziness). Also, it means we have two ways to do the same thing, and users have to know this small trick even to read code (so it raises the entry level).
I love the Rust language and am sorry for this criticism. But small things are important.
> It only works when your function returns an error of the same type as the expression inside the try! macro or ? operator.
The language has actually thought of this case -- it turns out you can map lower-level errors into your higher-level error type by implementing the `From<E>` trait. E.g.:
use std::fs::File;
use std::io;

pub enum MyErr { Io(io::Error), Custom(String) }

impl From<io::Error> for MyErr {
    fn from(err: io::Error) -> MyErr { MyErr::Io(err) }
}

fn do_stuff() -> Result<(), MyErr> {
    let f = try!(File::open(...));
    // ...
}
Or you can call map_err to do an on-the-spot transformation if you don't have a general way to go from Error A to Error B.
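As a sketch of that on-the-spot `map_err` transformation (the error type and function here are invented for illustration, not taken from the thread):

```rust
use std::num::ParseIntError;

// Illustrative custom error type for this sketch.
#[derive(Debug, PartialEq)]
enum ConfigErr {
    BadPort(String),
}

// map_err converts the ParseIntError into our own type right here,
// so no blanket From impl is needed.
fn parse_port(s: &str) -> Result<u16, ConfigErr> {
    s.trim()
        .parse::<u16>()
        .map_err(|e: ParseIntError| ConfigErr::BadPort(e.to_string()))
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
}
```

This keeps the conversion local to one call site, which is handy when a general `From` impl would be overkill or ambiguous.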
I still sometimes make mistakes reading Rust code with the ? at the end of the line, but it's just a question of what the type is and whether the line returns. Both are handled by the compiler in the end, so you get a little extra assurance: even if you misread the line of code, there are often limits to how much damage you can cause.
The error-chain [0] crate helps to reduce this boilerplate. I usually don't like macro magic like this, but the code is so straightforward that you can trust the macro to work as expected.
Well, ordinary pattern matching will often be even shorter, so I'm talking exactly about the cases where laziness provokes the programmer to just return the underlying type rather than write a pattern match or implement the trait.
It would still be true with a simple error code. It's always easier to reuse an existing abstraction (in this case, existing set of codes) than it is to define a new one.
You definitely need to use the `From` trait if you haven't already. I know others have already recommended this, but it cannot be encouraged enough. It's a great pattern for idiomatic error handling--much better than matching or `map_err`.
Maybe you already know this, but if not (and for others that don't), try it immediately. You'll be glad you did.
I know this, and I don't like it. Every level of abstraction should have its own errors, without dependencies on some other level's errors; otherwise the code will be too fragile. And that's not a theoretical assumption, but a lesson from practice.
And if you don't trust me on this question, you can try reading Uncle Bob. I thought it wasn't applicable to Rust (because initially it was about OOP), but in one of my refactorings I got a lot of broken dependencies in From impls. Then I realized that good rules are worth remembering.
> It only works when your function returns an error of the same type as the expression inside the try! macro or ? operator. So in a lot of cases it doesn't work, or it forces the programmer to return the underlying error type
Or you can implement a conversion from the underlying error to your custom error type[1], and 'try'/'?' will just work.
Please see my reply in the nearby comment (to follow the DRY principle :)), and I want to note that the second reason is also important: a lot of people are already scared off by the initial complexity of Rust and Rust's syntax. That's a mistake, because after a couple of months Rust looks no more difficult than PHP or JavaScript (but gives much more satisfaction). So the fewer quirks Rust has, the more readable and predictable its code will be for newcomers.
I really don't see Rust as a good example of a language getting its syntax right. It reads like Perl. You know what's good syntax? The one that reads like pseudocode (Python is the closest to that).
Perl has infinitely many ways of describing the same computation, with no clear best practice in many cases.
Rust's problem is the same one any language in its domain encounters: it must encode a lot more information than similar scripting languages in order to describe the semantics of its programs.
It most certainly does not read like Perl. The sigils are weird, but there are few of them, and they're used consistently and without overloaded meanings. And unless you want to make a horrendously verbose language by assigning keywords to everything related to lifetimes, pointers, and references, it strikes a balance between readability and compactness for the experienced developer.
Rust used to have a lot more sigils. Its syntax and semantics are nowadays a lot simpler than languages in the same domain (C++ I'm looking at you) and a lot of the complexity has been pushed into the traits system, IMO for the better.
Have you looked at Rust lately? We used to get that comparison a lot, back when we had a lot of sigils, but we've removed those by now. The ones we do have are very similar to C-family languages.
"syntax doesn't matter, only semantics!" is what I classify as geek machismo. I used to see a lot of that over at http://lambda-the-ultimate.org in its heyday.
It's basically a pedant's creed. I really wish everyone who desired to express that point would translate it into some forgotten language first, just to drive the irony deep.
[1] https://facebook.github.io/reason/