>This does mean writing for n in (1..10).iter(), but Rust already requires that for collections, so it’s more consistent.
Even this needn't be the case. `Range` can implement `IntoIterator` to plug into `for` loop syntax.
Another related problem is that the `SliceIndex` trait (https://doc.rust-lang.org/std/slice/trait.SliceIndex.html), which is used to implement indexing, is perma-unstable. So even if you build your own better range, you can't make it play nicely with slices.
The problem is that their proposal also says that start <= end needs to be enforced (or none of the "faster" things would work).
Now consider the usability nightmare that `(1..10).unwrap()` would be, and `&slice[(1..10).unwrap()]`, and `(1..10).and_then(|range| slice.get(range))` instead of `slice.get(1..10)`.
That's the actual problem, not that `Range` implements `Iterator`.
Oh, and most ranges are used ad hoc (created and then directly consumed), so for many use cases going with Range + IntoIterator would increase the overhead.
Besides that, while SliceIndex is perma-unstable, `Index` is not, so if you control the container you can make it work. Alternatively, you can always do `my_range.index(slice)`.
Author here. In my idea, x..y would be like x+y: not fallible, but may panic. There would be a separate fallible function for creating ranges, analogous to checked_add.
(Of course unlike x+y, x..y would require checks in release builds too.)
Agree with you that there won't be any problems with panicking in this setup.
Although it'll require some non-minor API adjustments: at the moment, Range's fields are public. To maintain the invariant, we would have to make them private and provide getters. Which we actually already do for RangeInclusive, because of that extra bool field. Which is an inconsistent mess :)
Basically all of your statements apply to your own post. You created a pointless subthread and resorted to direct insults that are far more inflammatory than anything I said.
error: this range is empty so it will yield no values
--> src/main.rs:2:9
|
2 | let x = 5..0;
| ^^^^
|
= note: `#[deny(clippy::reversed_empty_ranges)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#reversed_empty_ranges
help: consider using the following if you are attempting to iterate over this range in reverse
|
2 | let x = (0..5).rev();
| ^^^^^^^^^^^^
I disagree. If this happens, you want to document these edge cases by failing early. It's very bad code if you don't and just let the loop happen, when most reviewers would have no idea whether the result is expected behavior or not.
This tripped me up a few times in Swift in the beginning, when I wrote a range `lowerBound ..< higherBound` or `lowerBound ... higherBound` and assumed that in cases where `lowerBound > higherBound` actually held (which can often happen naturally), it would default to the empty range. I still think the empty range would be the sensible behavior here, but at least Swift throws an exception, so after an initial wtf it wasn't a problem.
Range's idiosyncrasies, and the antipathy to improving its ergonomics manifest in the issues raised against the project, have been a capstone mental block to my investing emotionally in Rust.
I continuously really want to like the language. The premise is good, and a lot of the ideas are really compelling, but when it comes down to aspects I disagree with, I get a strong sense that Rust demands users subjugate themselves to all the choices and opinions of the architects, and is hostile to the notion of a user expressing themselves through their tools, and not just their output. As a result, every time I interact with Rust my hackles end up getting raised, and so far it would seem that a (currently absent) existential motivator would be required for me to get past it.
This is very far away from the Rust I know, both language and contributors.
Since 1.0 Rust has improved a lot while maintaining compatibility, even through a big deprecation step like the 2018 edition.
A good chunk of those improvements are about making Rust more usable and more forgiving. Some were things I didn't initially recognise as usability problems, but what came out the other side was absolutely better – the modules changes are a good example.
Rust's lead contributors have been welcoming and humble – while also being human. It can be very frustrating running an open source project, and Rust is trying very hard to be open and inclusive, which increases the challenge.
When writing Rust, there are definitely paths of least resistance. You often read about the moment people realise they're going "against the grain" and discover why it's safer and/or faster to do it "the Rust way". But that's not subjugation. It's about understanding your tools, how they work, and what they're for.
> I get a strong sense of Rust demanding that users subjugate themselves to all the choices and opinions of the architects [..]
I don't really get where you're coming from here, especially with regards to this issue -- it seems like almost everyone, including Rust core team members, agree that `impl Iterator for Range` was an API design mistake, that came out of (IIRC) Iterator existing prior to IntoIterator in the pre-1.0 days. This seems to me to just be an unfixable design flaw due to backwards-compatibility rules, not something that really speaks to the design of the language as a whole.
> I get a strong sense of Rust demanding that users subjugate themselves to all the choices and opinions of the architects
I don't know why you feel that way. Any major language or standard library change goes through an RFC process, where anyone in the community can submit feedback, and from what I have seen, the official teams are very responsive to concerns.
- The big one is that, at least from my admittedly flawed perspective, there appears to be active annoyance, among a significant number of participants on the various issues, at the importance of maintaining library ergonomics under constrained execution environments, with bare metal / free-standing environments being an obvious scenario. An example that comes to mind is situations where 'global allocation' is actively harmful, which, in addition to happening in kernel development, also happens when building managed runtimes.
- What comes across as an almost begrudging foreign-function interface, no ABI commitments, and lack of clarity about what is undefined behaviour under the `unsafe` operation needed to engage in any of this
- Cargo VS rustc and integration into other build systems. Sanctioned VS unsanctioned paths more generally.
- Prominent people in the community making comments about actively suppressing individual stylistic preferences because it's better for the collective
That last one is a common enough idea nowadays. Prominent people in every community advocate enforcing code style rules (Douglas Crockford's JavaScript linter is famous as an early example, and gofmt is practically enforced in the Go community).
The rest of the issues you mentioned are being actively worked on or investigated. Yes Rust 1.0 didn't solve all problems out of the gate and even now there's a lot of work still to do. But a lot of progress has been made and is being made.
I get that if you don't follow issues, working groups, zulip chats, etc that it may not be obvious what is and isn't being worked on but I'm really not sure where you're getting this "arrogance" from.
> That last one is a common enough idea nowadays. Prominent people in every community advocate enforcing code style rules (Douglas Crockford's JavaScript linter is famous as an early example, and gofmt is practically enforced in the Go community).
I would argue that this is because there are differences in the severity of the tooling. gofmt for example will not override your decisions regarding line breaks last time I checked, while rustfmt will not afford you any freedoms there.
> there appears to be an active annoyance by a significant number of participants on the various issues on the importance of maintaining library ergonomics under constrained execution environments
My understanding is that this is being worked on. Of course any language (or really any project) will have people annoyed that the maintainers' priorities don't align perfectly with their own.
> What comes across as an almost begrudging foreign-function interface,
I haven't gotten any impression that FFI is begrudging. It really seems like a first-class citizen to me, and is certainly better than the FFI story in, say, Java or Go. Rust even has language features that exist just for FFI, such as unions and C-style variadic arguments.
> no ABI commitments
This is a recognized problem. It is also a hard problem, since lifetimes don't exist after compile time. But using the C-ABI is quite usable in some cases. And there have been a few RFCs around this recently (specifically around stable ABIs for vtables).
> lack of clarity about what is undefined behaviour under the `unsafe` operation needed to engage in any of this
This is again a known problem, and something that is improving.
> Cargo VS rustc and integration into other build systems.
I think there was some work being done on this, but I don't know the current status. There are people that use rust with bazel.
Not necessarily. A new type could be introduced and the old one deprecated.
Even without introducing a new type, it could be improved in a backwards compatible way. For example, derive the Copy trait, and add a contains method that takes an owned value instead of a reference (at least as long as the Idx type is Copy).
> In the 2021 edition the compiler team could change the meaning of (3..5) to resolve to a new type. Then if you wanted the old type you would have to request it by a fully qualified name or something.
And the response was:
> That would be pretty awful. I think it would be better to change IntoIterator and warn for differences in copy and intoiterator in both editions so you can write portable code.
I was super excited about the idea of Rust but constantly frustrated trying to use it to write real code.
I've basically decided to stay in my (ever-improving) comfort zone of C++ until Rust gets named/default arguments, which is my arbitrary litmus test for whether it's going to actually be a usable language for me.
This doesn't demonstrate either named or default arguments. You can expand on it to implement the equivalent of default arguments, sure, but it doesn't solve the ergonomic reason to have default arguments, and it doesn't touch on named arguments at all.
You're missing the point. Here's a good article that talks about the problems of having neither overloading nor default parameters, but in Go.[1] The problem is made significantly worse in Rust when all struct fields need to be initialized and there are no varargs. I really see no solution to the "stable API with more options" problem in Rust, aside from a builder pattern, which I really hate.
In Go, you can use an interface to implement private struct fields. Then you can make a "New" function to implement default values as needed. I made a post about this last month:
I've always seen Rust as a transitional language like Smalltalk or ALGOL. It's a glance into the future of programming and the next era of modern languages, but probably not the next 40-year language itself.
I see scope creep. I don't think it will be very suitable for quick get-webshit-done tasks because of its goals and design; they should focus on systems / high-performance programming, like Zig is doing.
I broadly agree with this, and am most frequently confused by the fact that a `Range<T>` isn't `Copy` even if `T: Copy`. I feel this was a small design mistake (having `Range` itself be an iterator instead of implementing the `IntoIterator` trait), but there is a proposal to fix this that looks like it wouldn't be breaking, and so should be possible:
Yeah, this sounds like it could be fixed with a one-liner without any breaking changes: `impl<T> Copy for Range<T> where T: Copy {}`.
That being said, I can understand why this might not necessarily be desirable for some people; Copy is generally reserved for types that are "small enough" that implicitly copying the bytes will be cheap enough to ignore. It might seem obvious that a struct containing two types that are Copy should also be cheap enough to be Copy, but the line has to be drawn _somewhere_, and for any number of bytes chosen as the threshold where the cost is too high, it's pretty easy to pick a size that's below that threshold while a struct containing two instances of that size is above it. Because of the unclear boundaries for that sort of thing, I generally tend only to derive Copy on things in my code that are thin wrappers around existing types that are Copy, e.g. `struct Foo(i32)`.
The reason this can't be done quite as you describe is outlined in the linked issue: if something implements both `Iterator` (as Range does) as well as `Copy`, then you can end up in situations where you accidentally end up copying an iterator you think you're mutating. A good example is provided here:
Hmm, interesting. I honestly don't think I've ever really used an iterator like that, but I can understand why that would be considered undesirable. It does seem like having Range implement IntoIterator rather than Iterator would probably have been best, but that's tough to fix now.
Making `RangeInclusive` an iterator (instead of merely an IntoIterator) was definitely a design mistake, and `Range` probably shouldn't be one either.
Adding a `Copy` constraint seems a bit weird to me, since I expect those borrows (e.g. on `contains`) for consistency with collections. Deriving `Copy` (i.e. implementing it when the index type is `Copy`) should be enough.
One small mistake in the article is that you need a `PartialOrd` constraint not a `PartialEq` constraint to ensure start <= end.
I actually love this type, specifically because you can build ranges over complex key types. For example, two keys in a BTreeMap can be used to define a range selector to collect items out of the map. It made me very happy to be able to do this:
The example you provide appears to be using the same type for both beginning and ending values, and would also appear to explicitly provide an ordering (<=). Maybe I missed something, but that level of abstraction did not seem to be what was under indictment in the article, but rather scenarios where the types of first and last were disjoint, or the values were incomparable.
Range<Idx> only has one type parameter. Thankfully, you really cannot have a range with an i32 on one end and a String on another.
It's also not possible to call methods like contains() on Range<Idx> unless the type is (Partial?)Ord, because of the constraint on Idx in the impl which defines those methods.
Seems to me that you could still have this syntactic sugar if you use a separate type for iterable ranges and non-iterable bounds and then have them both implement RangeBounds.
I think you’d still want something where all the types Borrow to some consistent type. Otherwise, PartialEq/PartialOrd becomes complex to reason about. I haven’t tried this, but maybe the Idx of Range could be Borrow, much like the get requirements on HashMap.
I feel like this is an example of what Jonathan Blow calls a "Big Idea" or a "100% solution". His thesis is that when you make a feature of a language too abstract and usable in many different contexts, eventually there will be so many corner cases that the result will almost certainly be clunky and full of footguns.
He claims that language designers should aim for "80% solutions" instead, which cover most common usages but limit themselves enough to avoid complexity. This runs in contrast to a lot of commonly accepted language design wisdom.
> He claims that language designers should aim for "80% solutions" instead, which cover most common usages but limit themselves enough to avoid complexity. This runs in contrast to a lot of commonly accepted language design wisdom.
This is easy enough to say, and indeed I do think it's a good approach, but the problem is identifying that 80% in the first place. The reason that language designers tend to favor general approaches is because they presume not to know how people are going to want to use certain things. It's an approach borne out of humility, not ideology. You need time observing how things are used in the wild before you can identify which 20% not to support; get this wrong and people will be more frustrated than if you had saddled them with the baggage of the general approach.
In the specific case of Rust's Range API, we can observe this problem acutely. Rust hugely benefited from the period between 2011 and 2015 where it was able to iterate aggressively on design and observe what opinionated stances were worthwhile. But the Range type came relatively late to the party: it was devised and stabilized only months before 1.0 as a replacement for an old, hardcoded slicing syntax that worked with no types other than plain integers, and only in very limited syntactic contexts. With little time to observe use in the wild (and with all the other madness and work that was going on in the run-up to 1.0), the reasonable approach was to not over-constrain. Now that we have experience with it one could devise ways to do it better, certainly, and with luck Rust may be able to move the type in that direction, but other than that it may just be a lesson for those languages that are yet to come.
> With little time to observe use in the wild … the reasonable approach was to not over-constrain.
Given a new language feature and limited time to observe actual use, IMHO the reasonable approach would be to constrain it as tightly as possible. It's much easier to relax constraints to enable new uses later than it is to reign in inadvisable uses of an underconstrained interface. For example, if the original Range interface had simply consisted of two private, immutable fields with Copy + PartialOrd constraints and an implementation of the IntoIterator trait then it would be trivial to add setters (or public fields), an internal Iterator implementation, and looser type constraints later on if these were deemed necessary. Going the other way, however, breaks programs that have come to depend on these dubious features.
But it's an 80% solution, just not the 80% the author likes.
One of the main uses of `Range` is to be an iterator.
The other is to casually slice data structures.
For both use cases the proposed changes would lead to major usability regressions and breakage, because you can't enforce valid ranges at compile time (many are not created at compile time, e.g. `a.start()..b.mid()`), and making range creation fallible would in practice be a massive usability nightmare. E.g. consider `for x in (start..end).unwrap() { .. }` instead of `for x in start..end { .. }`.
The current solution, while imperfect, was chosen to fit its most common use cases best.
For some very performance-sensitive use cases where you need range slicing and the way the std range does things is too slow, you can have alternatives that are faster but have usability drawbacks. But that's the exceptional case, not the normal one.
Many of the other examples shown also seem kinda strange.
E.g. `get_unchecked` is well defined: "an out of bounds array index" is itself well defined, and it's only defined for Range<usize>. (It's also an unstable experimental API...)
Range needs clones => only in use cases it was not primarily designed for.
Range is unsure when it's valid => No, it knows it's always valid; it's just that not every valid range can be used in every place without errors. Indexing a slice with a range can panic anyway (out-of-bounds access), so moving the error handling there is fairly sane. Also, you really can't have fallible Range creation.
Range hides a footgun => any exclusive from-to range in any language has this problem; it's why mathematics has 4 types of intervals.
A Recipe for Rearranging Range => the author somehow assumes you can magically ensure start <= end without error handling; setting aside that oversight, these changes would make Range a usability nightmare.
I don't think Range is intended to be a 100% solution at all. It only supports iteration over integers and slice indexing over a specific integer size, which are its main use cases. Also the author is exaggerating on the footgun, as Range is explicitly end-exclusive, and the Regex is explicitly end-inclusive (e.g. [a-z] includes z), so a RangeInclusive should be a straightforward decision.
Yes, I think it was a rushed type which should have been a lot more constrained at 1.0 to allow for modifications later, which is how stuff is usually done in Rust. It probably also should have implemented IntoIterator instead of Iterator directly. It may be my least favorite Rust type, but I don't think it's really that bad, and I'm thankful to whoever designed it, as I still find it much better than slice-specific indexing syntax.
The footgun is not just the exclusivity but the fact that `RangeInclusive` has an extra bool field and so is wastefully large for use cases that need the inclusivity. i.e. it's a perf footgun.
Then again, TFA analyzes Range in a vacuum, as if some Range<T> must make sense in arbitrary context A but also arbitrary context Z.
In reality, Range of some T generally makes sense in a local API or program. Even if that same Range<T> doesn't necessarily make sense in every other place T might be used.
Now that I know the details of Rust's range type it is extremely weird. The constraints need to be separate types. Why should range be so general as to support things that are obviously not ranges, only to return values indicating the range is malformed?
> The constraints need to be separate types. Why should range be so general as to support things that are obviously not ranges, only to return values indicating the range is malformed?
I'm not sure what malformed return values this is referring to, because I can't think of any. Is it referring to the fact that ranges where the start is greater than the end will result in an empty range? Without dependent types, which Rust doesn't have, there's no way to detect that; even in the subset of cases where the range bounds are computable at compile-time, back at 1.0 Rust didn't have the compile-time evaluation machinery necessary to make that happen. You could instead choose to interpret that a range where the start is greater than the end indicates a descending range, but plenty of other people will regard that behavior as a flaw.
Whether 5..0 being an empty range is "bad stuff" or "good stuff" is a matter of perspective. It is often "good stuff" for me, when computing some indices to slice with. Panicking on construction would force one perspective on every use case.
> But why isn't it panicking on len()? How is 0 the right answer there?
- `len` is `ExactSizeIterator::len()`, which is the length of `Range` as an iterator, i.e. the number of items yielded by `next`. Which is 0.
- When slicing with 5..0, the range is treated not as an empty iterator but as an out-of-bounds access. This is, without question, slightly inconsistent and not my favorite choice, but it was decided this way explicitly because it makes it much easier to catch bugs in wrongly constructed slices. Also, it only panics if you use `Index`, which can panic anyway; it won't panic if you use e.g. `get`, where it returns `None`. So treating the "bad" empty case differently for slicing doesn't add a new error path, but doing so for iteration and `len` would, especially given that `ExactSizeIterator::len()` isn't supposed to panic, as it's a size hint.
The range is invalid, not empty; someone had to do a validity check to return 0 to prevent it from returning -5 or trying to count up from 0 (depending on what it was willing to assume). A big point of the article is that a range of size 0 should always be iterable, but it somehow isn't, because it isn't actually of size 0.
If you index a slice with an out-of-bounds index, it will panic regardless of whether the index is a usize or a Range<usize>. If you use `get` with an out-of-bounds index, you always get None.
Sure, it's open for discussion whether a range with start > end should be treated the same as an out-of-bounds index or as an empty slice. But doing the former makes it easier to catch errors.
Enforcing start <= end would mean that range construction is fallible, which would be a major usability nightmare: you would need two syntaxes, one for normal error handling and one for panicking, or you would need to add a lot of unwraps or similar.
Ranges are mainly used ad hoc (e.g. `slice[start..=mid+2]` or `for x in x..y {...}`) and are optimized for those usage patterns. For other usages they might not be optimal. But you can always write your own types.
> obviously not ranges, only to return values indicating the range is malformed?
It's not the case. The only thing affected by Range being generic is that `contains` takes a reference instead of a copy (which, btw., can likely be eliminated by the optimizer). That's necessary to allow things like `Range<BigNum>`.
All the other things have nothing to do with it being generic, but with which use cases it was designed for.
In the end, in Rust, a Range is mainly an iterator.
If it's a Range<usize>, and only then, you can also use it to slice arrays/vectors/slices.
Which means that e.g. the unstable experimental `get_unchecked` function is actually very well defined.
Lastly, the reason you can't enforce `start <= end` is that it would make the creation of a range fallible, which would be a horrible usability nightmare, a thing the author somehow misses completely.
The thing is, indexing a slice can already panic, so moving the panic there is generally a good idea. Similarly, you always want a non-panicking path, which would be e.g. `[T]::get()`, which for a "bad" range does the same as for a "bad" index: it returns `None`.
In the end, both `Range` and `RangeInclusive` are compromises focused on the most common use cases of ranges, which is ad-hoc creation "just around" the place you consume them, for iteration or for slicing. This also means that e.g. the fact that `RangeInclusive` is bigger is no problem, as at the place it's used you would otherwise need to turn it into an iterator anyway, adding even more overhead than the current `RangeInclusive`. Sure, if you want to store a lot of `RangeInclusive`s, then that's not the use case it was designed for, and you are better off defining your own inclusive range.
But shouldn't len() panic instead of returning 0? I don't even understand how it could return 0 without having already done all the work to determine it should have returned a negative number.
Reminds me of D at times. It has many fancy features and powerful metaprogramming. But it also comes with drawbacks, many simple language improvement proposals are being shot down because they break in presence of some advanced usage of those features.
`Range` was discussed a lot before being stabilized, and its drawbacks were well known when it was stabilized.
The reason it was stabilized that way anyway is that it happens to work out best for the most common use cases.
It's more of a "practically useful but theoretically imperfect compromise" thing.
The main usages of range are:
1. To iterate over it
2. To slice things with it
Which is what its design focuses on.
Sure, ranges could be `Copy`, but one of their main purposes is to be an iterator, so it's reasonable not to make them Copy, as that would be a usability nightmare.
Sure, it's strange that you can construct an invalid range and then panic when you use it to slice something. But the alternative would be to make range creation fallible, which is a usability nightmare. Furthermore, validity depends on what you use it on, so a backwards range might be a very reasonable thing for some use cases; practically, it's best to make every `Range` valid, but not necessarily every usage of one.
Sure, exclusive ranges based on start+end can't contain the maximal value, but that's a fundamental property of exclusive ranges defined through start+end. There is a reason mathematics has 4 kinds of intervals (differing in exclusiveness at start/end).
Sure, `.contains` takes a reference, but that's a limitation of Rust's trait system: traits can't specialize to take parameters by value for Copy types. Not taking a reference would prevent the use of ranges of `BigNum`s or similar reasonable usages.
Sure, `RangeInclusive` could be made smaller, but that also means you couldn't have empty inclusive ranges and couldn't use it directly as an iterator, which is probably the most common use case of inclusive ranges.
All in all, the `Range` types are a compromise optimized for their most common use cases. That makes some parts sub-optimal for other cases, but you can always use your own types, so it's not really a problem in practice.
Also, the author makes some mistakes:
- You can't enforce `start <= end` at construction time without making the constructor fallible, which would be an ergonomic nightmare. This means that neither `[T]::get()` nor `Range::len()` would get faster, nor would `is_empty` get easier.
The last point also sadly means that for certain arithmetic high performance tasks it can make sense to not use the rust provided range type but a custom one.
Well, as others have said: Hitting exactly 80% is pretty much impossible. And we've established that hitting more than 80% produces languages that are often bad, in some way or another.
So, logically it follows: aim for less than 80%. You can always add things; you can't take things away.
I've noticed this as well. It also attracts a lot of criticism. People are really insistent that a language should be perfect in exactly one area and terrible in others. They tend to support this kind of thinking with arguments about how you should pick the right tool for the application, as though most applications only care about one criterion or another (and need a language that trades everything for that one criterion).
That said, I think Rust does an impressive job at squeezing efficiency out of these tradeoffs. Sure, it trades off some developer productivity for extreme performance and safety, but its developer productivity story is still markedly better than other systems languages (and probably on par with some of the more cumbersome managed languages). Similarly, the tooling story is pretty great while every other systems language has pretty awful tooling (especially build systems). Moreover, Rust is getting better at a remarkable pace. I don't think it will ever close some of these gaps, but I think it will get close enough to pose a real threat.
This statement seems very true, even to the point of “duh” for people who have designed and maintained a semi-widely used API or application. I wonder where the language design wisdom to the contrary comes from.
I'm unfamiliar with marketing but I suspect the situation is much better now with all the tracking. And I can imagine for many APIs, people do know what most frequently used features are, and what typical users are like.
This is crazy, but I kind of like the way web does new features.
They start off with a name that obviously will never be needed (e.g. "moz-blur") and work with that for a while until it becomes apparent what "blur" should be.
If the Rust developers had named "Range" as "RustRange" or something else weird to start with, then they could come back later and rename it to the more desirable name. This seems like a good tactic whenever you're still trying to figure something out but intend to put it in production anyway.
This is true, but a lot of users will clamor for a 30% solution as well. A user may be writing a back end in your language and ask for first class SQL. After all, they write a lot of SQL and being able to have typechecked, safe SQL statements sounds great, right? Except, not everybody writes SQL. Indeed, SQL may be dead in 10 years (I'm not making this claim, but it is a possibility) and replaced by a different language.
A good language designer will see that users want a general way to query over data, and create something like LINQ.
Of course, you're right that language designers shouldn't go for a 100% solution. Monads are kind of the classic 100% solution. You can do anything with monads, but that means you can do anything with monads.
I'm not sure that this is true. Common Lisp shoots for "100% solutions" a lot and it never ends up with the kind of crap that this article describes. This could just be a result of the fact that a lot of clunky-looking code can be handed off to macros and a human will never have to see it or touch it.
Haskell’s Foldable and Traversable typeclasses represent these use cases, are more general, and have none of the clunky edges mentioned in the article.
But Range is a concrete type, not an interface, right? And most of the discussion here is about the implementation of Range, not about the interface it exposes to users.
Note also that the clone issue and the borrow issue are not applicable to Haskell, and that the performance characteristics of Range may be hard to replicate while implementing Foldable or Traversable.
Right, the trick is to not use concrete types when unnecessary. As the OP makes clear, it doesn't make sense to stuff all these use cases into a single concrete type.
> Note also that the clone issue and the borrow issue are not applicable to Haskell
No, but Rust switching to a typeclass-based iterator syntax should help with this too.
> the performance characteristics of Range may be hard to replicate while implementing Foldable or Traversable.
I don't see why - rustc generally does (and must do) a great job of specializing parametric code.
.len() is not the length of a slice induced by the range or the distance between start and end; it's the len method from `ExactSizeIterator`, which in turn is a "special case" for when `Iterator::size_hint()` is known to return a correct value.
The thing is, `Iterator::size_hint()` returns a size as usize.
So `Range<u64>` could only implement `ExactSizeIterator` correctly on 64-bit targets, which I guess is why someone decided it's better not to implement it at all, so as not to hinder portability of libraries - it would be quite easy to accidentally write a lib that doesn't work on 32-bit. Not sure if that is the right decision tbh.
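To make the tradeoff concrete, here's a small sketch of what does and doesn't compile today (the commented-out line is the one that fails):

```rust
fn main() {
    // Range<u32> implements ExactSizeIterator, so len() is available:
    assert_eq!((0u32..5).len(), 5);

    // This line does not compile: Range<u64> has no
    // ExactSizeIterator impl, so there is no len():
    // assert_eq!((0u64..5).len(), 5);

    // Iterator::count still works; it just isn't O(1) by contract:
    assert_eq!((0u64..5).count(), 5);
}
```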
It is an interesting, awkward edge case. But the utility of a u64 length might be rarer than you think. `Range<usize>` will be more common for wrangling memory sizes, and will handle 64-bit sizes on 64-bit targets - so the utility of Range<u64> would mainly be for larger-than-address-space sizes, which probably means fallible I/O calls.
I've been considering upstreaming a trait into the read_write_at crate to provide std::io::Result<u64> lengths for Mutex<impl std::io::Seek> / std::fs::File (on platforms where length is available without mutating seek position - such as on windows via file.metadata().map(|m| m.file_size()) per:)
There's a whole slew of u64 offsets and sizes... the occasional subtraction to calculate a maybe-larger-than-memory size doesn't seem like that big a deal. Occasionally there are methods implemented for it - typically named "file_size()" instead of "len()" though.
This is another Rust design mistake: conflating the platform pointer size with "lengths". It's possible to process files bigger than 4 GB on 32-bit platforms. It's even possible to memory map them (using sliding windows/views).
In some Rust forums, the language designers were shocked to learn that this is possible.
At any rate, the Range type was very clearly designed for slices of in-memory contiguous arrays. Then template magic was used to make them "generic over all types". So now we have a situation where Range acts like an array subset and a B-Tree range selector, and a source for ordinals in iteration, and a bunch of other things. Some of which are incompatible.
Using pointer-sized sizes for in-memory collections and slices is no mistake, and every single Rust API involving file offsets or sizes in the Rust stdlib that I'm aware of uses u64 or i64, not pointer sized stuff. Perhaps you didn't mean to imply otherwise, but you do.
That iterators and ranges have this 1% edge case where you'll need to roll your own trait if you want "(0u64..1).len()" to compile because ExactSizedIterator was designed for the 99% case of in-memory collections, could be argued as a design mistake - or could be argued as a reasonable avoidance of overcomplication in std in favor of allowing the end users who encounter that edge case to solve the problem how they please, if it is indeed a problem for them.
Meanwhile, I'm still inheriting C/C++ codebases using APIs that know they're dealing with files and still use pointer-sized integers. In their defense, the system APIs they use often predate widespread 64-bit integer support in 32-bit C compilers (I'm looking at you, fseek/ftell). Less in their defense, the wrappers around said system APIs often postdate the very same, and postdate a slew of alternative APIs that don't even need 64-bit integer support.
> This is abusing the borrow checker as a bad linter. Range undermines the story of lifetimes and ownership, making the borrow checker feel arbitrary.
This is the key part. You really shouldn't try to "censor" the math to "help" users. It just causes more pain.
I've recently been annoyed with the pushback against "DynSized" again, which IMO is a symptom of the same thing. "DynSized" is the natural way to generalize Rust's notions of size/layout so we can implement custom DSTs, but people are scared of another ?Trait making Rust "more difficult" or "more complicated", so we're going to get some crapped-up plan of epicycles if we get anything at all.
-------
Incidentally, the Jonathan Blow quote below is misapplied a full 180. The issue is people trying to dial something back, not push it to its natural conclusion, because they don't like the natural conclusion. Too bad, don't do that.
Are you sure about this? I've heard the quote applied specifically in the context of RAII, where he complained that there is no such thing as a generalized "resource", and that the same mechanism for handling memory access should not be used for file handles and texture maps. I don't have a link right now, but I'm pretty sure it was in his first "ideas for a programming language for games" video back in 2014. Seems to me like this range situation would be fairly analogous to that.
I remember his video on RAII. My opinion on his take is that it's pure tosh. Incidentally I also hate Golang... He's used to languages with thin abstractions like C where it's basically impossible to get anything done fast without dissociating allocation and creation (and in particular, allocation and mutation).
Modern languages and Rust specifically address that problem by letting you write clean programs with the illusion of immutability but still basically mutating state all over the place in the actual executable.
And that's if you care that much about speed. Even slow Rust is pretty fast, and the readability/consistency/maintainability benefit of RAII is immense compared to the tiny speed gains.
>If you try to enforce that start <= end, you lose the ability to make a Range of non-comparable things
What even is a "range of non-comparable things"? Doesn't the very definition, included in the article, imply an ordering because of the "upper" and "lower" bounds? What on earth is a situation where "upper" is not necessarily greater than "lower"?
If you try to enforce start <= end you get a fallible range constructor which is a usability nightmare, which is why it's not enforced. It has nothing to do with `Range` being generic.
None of the problems come from this not being constrained on PartialOrd. Instead they are related to other things, like the usability benefit of range construction not being fallible (which is why we can't have a start <= end constraint in the type).
Some things could be done differently by constraining the type to always be an integer, but except that `contains` would no longer need a reference, none of the things the article complained about would be different just by constraining it to an integer. In practice, many of its methods are only defined for some Range types anyway - e.g. you can only use `Range<usize>` for slicing.
As others have said you don't often put constraints on types in struct definitions. Most of the time if you do it is for using associated types to simplify generics.
The reason why constraints on structs isn't a good API design is because it limits future API design scope unnecessarily. In this case, maybe you want to reuse the Range syntax as part of a DSL?
- Float ranges can't be used for indexing/slicing slices
- Float ranges don't implement Iterator
- Rust is clever enough to make is_empty checks NaN-robust, so `is_empty` returns true
- contains checks whether a value is inside the start bound and inside the end bound, but whichever of the bounds is NaN, nothing is inside that bound, so the range won't contain anything (which matches the behaviour of is_empty)
So it works well.
But float ranges are kinda useless for anything but the contains/is_empty methods.
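A quick sketch of those NaN semantics (using `Range::is_empty`, which is stable on recent toolchains):

```rust
fn main() {
    let r = 0.0_f64..f64::NAN;

    // NaN comparisons are always false, so a range with a NaN bound
    // is considered empty and contains nothing:
    assert!(r.is_empty());
    assert!(!r.contains(&0.5));

    // An ordinary float range supports contains/is_empty...
    let r2 = 0.0_f64..1.0;
    assert!(r2.contains(&0.5));
    assert!(!r2.is_empty());

    // ...but not iteration: `for x in 0.0..1.0 {}` does not compile,
    // because f64 does not implement Step.
}
```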
I kept grumbling about the `Copy` limitations on iterators for years, but at the end of the day, someone has to write the lint for "iterating on a copy of a variable" and I don't think that has happened yet.
But unlike the other complaints this one is still fixable, with zero backwards incompatibilities.
All we'd need is someone to implement that lint (in, say, clippy). What it'd need to do is look for `IntoIterator::into_iter(x)` calls expanded from `for` loops, where `x` is a variable of a `Copy` type. And maybe look for mentions of the same `x` after the loop.
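The gotcha such a lint would catch looks roughly like this - today `Range` is not `Copy`, so `for` moves it, and `by_ref` is the explicit opt-out:

```rust
fn main() {
    let mut r = 0..5;

    // `for _ in r { ... }` would move r, making it unusable afterwards.
    // If Range were Copy, the loop would silently iterate a copy instead
    // and r would still sit at 0 here - the footgun the lint would flag.
    for _ in r.by_ref().take(2) {}

    // Because we iterated by mutable reference, r has visibly advanced:
    let rest: Vec<i32> = r.collect();
    assert_eq!(rest, vec![2, 3, 4]);
}
```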
This was an enjoyable read, and it also did an excellent job of explaining some issues I have frequently had with ranges, particularly having to borrow integers, which always feels awkward.
I'm only just learning Rust so I don't know much about Rust development, but what are the chances that any of the suggestions listed at the end make it into the language?
I'd love to see RangeInclusive fixed in the manner described. Part of what's blocking such a fix is that we have hard backwards compatibility guarantees. We'd have to do some kind of edition-based transition, where ranges start desugaring to a different type in a new edition than they did in the previous edition. We could do that, and there are various discussions going on to consider such approaches, but we're somewhat hesitant to go down that path.
The problems with Range are well-understood by now, so I don't think that it would be too much of a stretch to argue that a brand-new type could be added to the stdlib with some variation on the semantics given at the end of the post. However, transitioning the dedicated range syntax to that new type would be the tricky part.
Funnily enough, there was a ... syntax for inclusive ranges, but because when looking at code at a glance it is hard to distinguish between .. and ..., it was replaced with the current syntax: ..=
Pretty much zero. All his points are 100% valid, but they're probably annoyances that we can live with, and Range is a really core type - changing it would break basically all Rust code in existence.
New types could be introduced, a..b and other range syntax could resolve to them in next edition and cargo fix could add ".into()" to convert when needed.
I can’t even figure out how to iterate over a Range of Vectors? iter and collect both don’t exist... Google doesn’t seem helpful either. I don’t know any rust but it seems unforgiving at the first hurdle here :-)
It looks like you can't. You can create a Range out of it, but the range is not an iterator. If you look at the Range docs[0], you'll see that it only implements Iterator where its generic type (A) implements std::iter::Step.
Going to the Step docs[1] you'll see that it's only implemented by integer types and char. If Vec implemented Step, you could iterate over it.
Also, if you're referring to the "contains" guessing puzzle, you have to look at the Range::contains docs to see it uses Vec's PartialOrd implementation (basically comparison), whose docs state that elements are compared lexicographically.
Usually in Rust the first thing I'll do when trying to do something very specific (iterate over a Range of Vecs) is try it at the playground (play.rust-lang.org) and/or just look at std docs.
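A small illustration of the Step boundary (the commented line is the one that fails to compile):

```rust
fn main() {
    // char implements Step, so char ranges iterate:
    let letters: Vec<char> = ('a'..='d').collect();
    assert_eq!(letters, vec!['a', 'b', 'c', 'd']);

    // A Range<Vec<i32>> can be constructed, but it is not an Iterator:
    let r = vec![1]..vec![2];
    // let v: Vec<_> = r.collect(); // error: Vec<i32> doesn't implement Step

    // contains still works, via Vec's lexicographic PartialOrd:
    assert!(r.contains(&vec![1, 5]));
}
```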
The range doesn't have values. A range defined by Rust has a start point and an end point, and no other semantics. Some additional functions then add behavior if the types contained in the range can be compared via greater-than, etc. Vecs in Rust can be compared if their contents can be compared. For a range of two vecs, foo and bar, to ask if a third vec is "contained" in that range is to ask if it compares greater than foo and less than bar.
You don't, because that doesn't make mathematical sense.
Let's say I have the range [1]..[2], what is in this range?
Well vectors are sorted in lexicographic order, so every vector that starts with 1.
What comes after [1]?
Suppose it's any vector whose first non zero digit is in place n (e.g. n = 2 and the vector is [1, 0, 3, 1]). I can make a vector that comes earlier in the ordering by just inserting another 0 (e.g. [1, 0, 0, 3, 1]). This means that the vector has no non zero digit... but it is obvious that [1] doesn't come after [1], and that [1, 1] comes before [2] so [2] isn't the next vector, so (by proof by contradiction) there actually just isn't a well defined concept of next vector. As a result I definitely can't just list the vectors in a range in order.
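The ordering facts that argument relies on can be checked directly against Vec's lexicographic `PartialOrd` (a minimal sketch):

```rust
fn main() {
    // A strict prefix always compares smaller:
    assert!(vec![1u32] < vec![1, 0]);

    // Inserting a 0 before a non-zero element moves a vector
    // earlier in the order (the step used in the argument above):
    assert!(vec![1u32, 0, 0, 3, 1] < vec![1, 0, 3, 1]);

    // [1, 1] sits strictly between [1] and [2], so [2] is not
    // the successor of [1]:
    assert!(vec![1u32] < vec![1, 1] && vec![1u32, 1] < vec![2]);
}
```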
> there actually just isn't a well defined concept of next vector
I think there is a well-defined next vector, it just isn't very useful. The next vector after [1] for a vector of unsigned integers would have to be [1, 0]. And then [1, 0, 0], [1, 0, 0, 0], etc. (For signed integers substitute T::MIN instead of 0.)
Not really a Rust programmer, but I would expect a Range of vectors to operate something like a vector of Ranges. So if you say [0,10]..[20, 30], the length would be twenty, and the elements would be [1,11], [2,12]... Presumably if there was asymmetry in the values like [1,10]..[20, 50] you'd end up with...something? Not really sure what it should be, there are lots of options!
The fact that there are options is my point: it's ambiguous. That's the reason there is no implementation for it in the std lib. You're absolutely free in Rust to define your own though.
It's not a range of a vector but a range whose start/end are defined by vectors.
The best thing I can come up with is that such a range defines a path from the tip of one (geometric) vector to another, and because a Vec in Rust can (theoretically) have usize::MAX elements, it lives in a usize::MAX-dimensional space ;=)
In rust the (useful) trait implementations on `Range<T>` only exist for `T: Step`.
Sure you can create other `Range<T>`'s but you can't use them for anything useful. (Ok, if they are PartialOrd you still can use `contains` and `is_empty`).
The article makes it look like a lot of problems come from Rust being too generic over the range type, but that's not the case (except for contains requiring a reference, but then BigNums are a thing too).
I.e. the only reason you can create a `Range<Vec<Mutex<Rc<String>>>>` is that it doesn't hurt anyone to allow it. But I won't be able to use that range for anything. It won't implement `is_empty`/`contains`, nor will it be iterable. Heck, it doesn't even implement `Clone`. But none of the implementations around iterability, is_empty, contains etc. run into any practical problem because of this type being valid.
Instead the mentioned problems come mainly from:
- start <= end not being enforceable at the type level, as a fallible range constructor would have a horrible UX.
- Range<usize>, when used for indexing, treating things like 5..0 as "out of bounds", while for iterating it's just an "empty" sequence (which is not nice, but a very reasonable decision that generally improves usability in practice given how range types are normally used in Rust - just confusing for some less common use cases).
- The author being hung up on the exact definition of "out of bounds" not being defined clearly enough in the documentation of an experimental nightly API which is perma-unstable, i.e. we are basically talking about internal (but visible) implementation details of the standard library...
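The second point in practice (using non-literal bounds, since rustc's `reversed_empty_ranges` lint rejects an obviously backwards literal):

```rust
fn main() {
    let v = [1, 2, 3];
    let (start, end) = (2usize, 1usize);

    // As an iterator, a backwards range is simply empty:
    assert_eq!((start..end).count(), 0);

    // As a slice index, the same range is "out of bounds":
    assert!(v.get(start..end).is_none()); // fallible form returns None
    // let _ = &v[start..end];            // panicking form would panic here
}
```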
Well, the std lib disagrees with you. In Rust, type definitions tend to be maximally generic and restricted with traits only in their implementation. Have a look at BTreeMap for example:
The two type parameters are unbounded, this gives flexibility in the implementation because it allows you to define separate `impl` blocks where the bounds are different and more specific to their use case.
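A minimal sketch of that pattern (the `Wrapper` type is hypothetical, just to illustrate):

```rust
use std::fmt::Debug;

// Maximally generic definition: no bounds on T here.
struct Wrapper<T> {
    value: T,
}

// Constructors and basic methods need no bounds...
impl<T> Wrapper<T> {
    fn new(value: T) -> Self {
        Wrapper { value }
    }
}

// ...while bounds appear only on the impl blocks that need them.
impl<T: Debug> Wrapper<T> {
    fn describe(&self) -> String {
        format!("{:?}", self.value)
    }
}

fn main() {
    assert_eq!(Wrapper::new(42).describe(), "42");
    // A Wrapper of a non-Debug type still constructs fine;
    // it just can't call describe().
}
```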
The worst bug I fought in one of my Rust programs involved the range type. I needed to go from n to 1 included - so from n included to 0 excluded. Of course `(n..0)` didn't work, so I tried `(0..n).rev()`, which didn't work either (it yields the numbers in [0; n-1] in reverse, i.e. n-1 down to 0), and I can't blame the language, it makes sense, but it's highly unintuitive (in Python, for instance, `range(n, 0, -1)` does what is expected).
I'm not a seasoned rust programmer though, is there a better, more explicit way to do what I attempted there?
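For the record, the usual idiom for "n down to 1 inclusive" is to reverse an inclusive range:

```rust
fn main() {
    let n = 5u32;
    // 1..=n is 1, 2, 3, 4, 5; rev() walks it backwards lazily:
    let down: Vec<u32> = (1..=n).rev().collect();
    assert_eq!(down, vec![5, 4, 3, 2, 1]);
}
```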
Given how general Range is, how does it even implement a len() with the behavior in the article? (I tried looking into the documentation and came up short on Ranges even having len; I see it provides a TrustedLen to other things via a trait, but that seems unrelated, and it has some len related to an overload of one specific iterator type, but I feel like I must have misunderstood it.) Either it assumes subtraction, which should discover the negative length, or it would have to iterate the distance, which would take a ridiculous amount of work to get from 5 to 0. How is len() there at all, much less able to return 0 for 5..0? (FWIW, in my libraries like this I have a different type for things that have a start/end for iteration than for things that have a start/length for indexing, as they are often extremely different concepts used for different purposes, even if it feels like the former is a conceptual superset of the latter.)
> how does it even implement a len() which has the behavior in the article?
It's from `ExactSizeIterator::len()` and is a specialization of the case where `Iterator::size_hint()` returns values known to be correct.
It's a bit confusing on `Range<>` as it's not the distance between start and end or anything like that (well, it's the number of steps when iterating the Range, so kind of the distance).
It also has the side effect that it's not implemented for Range<u64>, I guess so as not to hinder 32-bit compatibility (len/size_hint return a usize).
Because it's the iteration length, it's also calculated based on iteration criteria - simplified, `if start < end { end - start } else { 0 }`. Except in practice it goes from `len` to `size_hint` to the `Step::steps_between` API, inlining all the calls and optimizing away the unnecessary overhead.
Range implements the ExactSizeIterator trait, and this is why you can call `(0..5).len()`. When you call len(), ExactSizeIterator just calls size_hint [1].
The size_hint is implemented here [2]. This is where the "start < end" check happens.
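A simplified sketch of that logic (not the actual std source, which routes through `Step::steps_between`):

```rust
// How Range<usize> effectively reports its iteration length:
fn range_len(start: usize, end: usize) -> usize {
    if start < end { end - start } else { 0 }
}

fn main() {
    assert_eq!(range_len(0, 5), 5);
    assert_eq!(range_len(5, 0), 0); // backwards range: zero steps

    // Matches the std behavior:
    assert_eq!((0..5).len(), 5);
    let (a, b) = (5usize, 0usize);
    assert_eq!((a..b).len(), 0);
}
```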
> Is this backwards range valid? Its len() is 0, it contains() nothing, it yields nothing as an iterator. But if you try to use it to index into a slice, you get a panic! So it looks valid but is primed to explode!
Rust has comparison traits... why aren't those involved here? It seems like it would be straightforward to ensure that any Range's start and end can only be Ord's, and that the first value must be < the second.
> Range<Vec> requires the borrow, so the vastly more common Range<usize> etc. forces it as well.
Not sure why this has to be the case. Why the implicit reference? Why not, when you want to use references in your range (a fairly exotic usecase), you have to do so explicitly?
So my first thought was "hey, why doesn't it take AsRef instead?" Then I checked the rust playground and... AsRef is not available for simple numbers, which I find confusing - since you can always reference an int, why isn't it?
I guess the same reason that `impl AsRef<T> for T` isn't implemented. (Not sure exactly what that is, but that would probably be a better question to look into, since you could argue that `AsRef<T>` should be manually implemented for any std type T with your argument)
I don’t think they’re related. The blanket AsRef<T> impl doesn’t exist because that would prevent you from providing different behavior (or blocking this behavior) for any of your types (eg smart pointers/containers), because Rust doesn’t yet have impl specialization or override.
I don't know, I just find the concept of lexicographically ordering vectors something that isn't really useful. Can you think of examples where this might be something you would want?
Obviously you need a PartialOrd (and Ord, PartialEq and Eq) implementation for all fundamental data structures, since otherwise they are unusable (can't sort to canonicalize ordering, sort+uniq, use as btree keys, etc.).
Lexicographic order is the most common choice for vector; the other reasonable choice is length-then-lexicographic.
I have nothing against String having PartialOrd. I don't see why this means Vec<u8> should be PartialOrd–especially because treating a String as a Vec<u8> sounds very wrong.
Why does it sound wrong? Since strings are UTF-8, this just boils down to lexicographical ordering of code points.
Ironically though, that should be actually more objectionable than ordering arrays by lexicographical order of elements, because the underlying code point order is pretty much useless for any other purpose than having some arbitrary deterministic order. Meaningfully ordering strings is an inherently locale-dependent operation, and PartialOrd has no way to take this into account.
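A couple of checks that show both halves of that point - code point order coincides with byte order of the UTF-8 encoding, and neither is locale-aware:

```rust
fn main() {
    // Code point order is the same as byte-wise order of the UTF-8:
    assert!("abc" < "abd");
    assert_eq!("abc" < "abd", "abc".as_bytes() < "abd".as_bytes());

    // But it is not a meaningful collation: 'Z' (U+005A) sorts
    // before 'a' (U+0061), which no locale-aware sort would do:
    assert!("Z" < "a");
}
```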
> A practical problem is writing correct bounds checks. For example, consider the get_unchecked function on slice - it says “an out-of-bounds index is undefined behavior” but never defines what out of bounds means. So how does one even call this function safely?
Except that it is well defined: out of bounds means the start or end is past the end of the array/slice/vector - the same way it's defined for any other array/slice/vector access.
Could the documentation of this nightly experimental API be better, sure. Is it undefined or unusable? No it's as well defined as "an array index being out of bounds".
Oh, and before you wonder: there is no confusion with ranges of custom types - it's only defined for Range<usize>. This is a method of a trait implemented only for Range<usize>, which generalizes the indexing of slices and of which everything but the name is currently unstable/experimental.
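In code, the safe/fallible/unchecked trio looks like this - the guard before `get_unchecked` is exactly the "out of bounds" condition being discussed:

```rust
fn main() {
    let v = [10, 20, 30, 40, 50];

    // Fallible form: None when the range is out of bounds:
    assert_eq!(v.get(1..3), Some(&[20, 30][..]));
    assert_eq!(v.get(3..10), None);

    // get_unchecked is UB if start > end or end > len, so guard it:
    let (start, end) = (1, 3);
    if start <= end && end <= v.len() {
        let s = unsafe { v.get_unchecked(start..end) };
        assert_eq!(s, &[20, 30]);
    }
}
```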
I don't even know what PartialOrd and PartialEq even are, mathematically speaking...
The docs about these feel written by someone who knew that "some API like this" would be a good idea, but that somehow never managed to flesh out what these traits should semantically imply.
Which is kind of dumb, given that there was _excellent_ prior art about this when Rust was created (Elements of Programming, From mathematics to generic programming, the C++ standard library and the dozen papers about operator<=>, ...).
And that's one of the things I dislike most about Rust. The way to overload operators like + or < uses trait names like Add or PartialOrd, which suggest that these operators have certain semantics (particularly when using Add in where clauses), but in practice they lack any semantic meaning and are just syntactic things.
Which is why, e.g., the standard library implements "Add" for strings. That doesn't mean that it implements "Addition" for strings, but rather that it overloads the Plus operator. And in the String case it does so to implement "Concatenation".
Which is IMO super dumb, because they could have just fixed this by naming the `Add` trait `Plus` instead, which is what languages that do the same thing, like C++, already do (`std::plus`, `operator+`, ....).
You could argue that using `+` to implement concatenation is "bad", but it's far less bad than using "Addition" to implement concatenation, which is what Rust ends up requiring everybody to do, because that's just how you overload the `+` operator.
> I don't even know what PartialOrd and PartialEq even are, mathematically speaking...
They are binary relations, just ones that are more "niche" than the more common ones. PartialEq actually has a link to its mathematical definition in the docs [0].
Thanks for making my point: an API that provides _a_ binary relation is IMO useless. There are millions of binary relations one could implement for a type, and often many that make sense for a particular type, which this API doesn't support (e.g. the IEEE standard defines both a strict partial order and a total order for floats; this API implements neither).
For this to be useful, the docs would at least need to say what one can assume about the partial order implemented by PartialOrd (is it strict? non-strict? something else?), and ideally there would be a solid ordering hierarchy so that APIs and algorithms can pick what makes sense for them, instead of having to assume the lowest possible denominator imaginable - which results in, e.g., it not making sense to implement ordering for floats in the standard library, even though IEEE compliance would actually require it.
This is because tuples like `(T, U)` implement `PartialOrd` - this is actually useful. If you are going to implement `PartialOrd` for tuples, it makes sense to implement it for 0-tuple too.
I disagree, I think the fact Range is polymorphic is extremely useful. Perhaps if your only interaction with it is writing a range of usize's. The only real 'pain point' I've ever had to deal with is passing `&`, but I'd hardly call that a pain point.
Why not make the compiler issue deprecation warnings and provide a few complimentary, specialized range types? It sounds like it needs to be broken down.
You started a flamewar and fueled it with 35 comments. That's beyond the pale. In fact, it makes me wonder what the record is for a single user in a flamewar on HN.
You did the same thing in other threads today too (e.g. https://news.ycombinator.com/item?id=24544124). We ban accounts that do this, or at a minimum rate limit them. I'm not going to do those things right now, but if you do it again we'll have to.
I know that, which is why I didn't rate limit or ban your account. These things mostly happen unintentionally. The thing is, though, we have to judge them by their effects, not intentions, since it's the effects that have...effects. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
You're a good HN user and the only thing that's needed here is a bit more mindfulness. I like the 'arson' analogy for this sort of thing, except it's the wrong word since it implies intent. The right analogy is not arson, but negligence, i.e. playing with fire for fun or what have you, but then it burns down the building.
There's nothing intrinsically wrong with this thread. It's perfectly good watercooler conversation. That's part of the original intention for HN (let me dig up an old pg link...ah yes: https://news.ycombinator.com/item?id=8314). We all enjoy talking smack with coworkers or friends—same way one might argue about whether so-and-so is a good athlete or actor or whatever. On that level, what you did wasn't wrong at all, it's perfectly fine and good fun.
The issue is just that this particular watercooler gets broadcasted to millions of people, which suddenly makes for completely different dynamics. For example, the person you're talking about may well read the thread, which may sour them on HN and make them less likely to participate here, which would impoverish the site for all of us. That doesn't mean you shouldn't ever post criticism, but it should be alloyed with respect, and if the thread starts to turn into a cage match, you should do something to try to walk it back. (By "you" I don't mean you personally, but all of us.)
The biggest issue, though, are the feedback loops that happen when you're no longer talking to a couple people you already have good connections with, but an entire crowd of people who you have zero connections with. Your comments are guaranteed to get interpreted in ways that you not only didn't intend, but would find horrifying and would never dream of meaning. A lot of dark stuff flows in that way. Then people read it and feel licensed to start doing the same and worse—why not, if that's the kind of place this is? This is how internet forums decay, and the decay is exponential, so we have to be proactive about not letting that happen. That also is one of the founding intentions of HN: https://news.ycombinator.com/newswelcome.html.
Unfortunately, this does mean we need to have more restrained conversation than would be most enjoyable in a small group. People instinctively do this when talking to large groups, but HN is a large group that feels like a small group, so those instincts don't kick in. And of course it's also one of the best things about HN that it feels like a small group.
Past comments about this, if anyone wants a fuller picture:
Goodness, the lack of any compiler at all would disqualify a lot of cutting-edge academic research on programming languages, much less the existence of a publicly available compiler. Often all you get is a diagram full of (basically) inscrutable derivation rules.
Yes, but now you've shifted the goalposts, because I think most people would generally consider PL researchers to have an understanding of programming language design.
You can be more than one single thing. Jonathan Blow is a game designer, a game programmer, an entrepreneur, a programming language designer, etc.
At the end of the day he's someone who has been writing software for a living for decades, and has strong opinions and expectations. And he's using this to create the tools he wants to have.
I'm also not a fan of closed-source compilers (or fully closed, even worse), but it's unfair to disregard his work and opinions just because it's not publicly available.
Regardless of that, this is not a discussion about compilers or compiler design/programming, but rather about PL design, which are two different areas and disciplines.
They are currently closed beta testing the compiler, but it isn’t an abstract or imaginary thing.. he works with it all the time on his livestreams on Twitch.
I'm not as dismissive to jblow, but I am frustrated with his model of development and the inflated expectations that come with it. It's quite likely that Jai will be a great language but it won't be the silver bullet some seem to think that it is. There's nothing more annoying than talking about the tradeoffs between compile time and static analysis only for someone to say "but Jai solved it" (and yes, I have conversations with people who do say this).
Imagine I had a startup and kept on posting about how amazing my product was and how everybody was going to find it revolutionary, and maybe I even shared a few streams of me using it, but I didn't release. And this continued for like, 6 years. Wouldn't you get a little suspicious? Even if I was running a closed beta. Even if I had a proven track record.
Writing a language and keeping it locked up while you perfect it just isn't a great model of development. Languages are not just the compiler and the corresponding features. They're tooling and infrastructure. They're libraries. They're communities. It may be wonderful to work sans the usual politics of open source, but you're not gonna end up making a language that people use.
And maybe that's not jblow's goal. Maybe he just wants a language for himself. Well, good for him. But that seems a little sad. Programming languages are some of the most generous, wonderful gifts to the CS world. If I were him, I'd hate to see my language die with me.
I wasn't familiar with him and found this video where he seems to say a lot of things that are kind of sketchy about concurrent programming at the very start of the video: https://youtu.be/hjvtzriNlMU
Because I can't download a compiler for his language and try out his ideas. I've designed fantasy programming languages, too, but you don't see me telling everyone on the internet that will listen to me about them, dragging other PL designers that have actually released working code through the mud as I do.
He demonstrably has a working compiler and programming language. Calling it a "fantasy" just because he has not given you full access to the source is plain incorrect.
He streams so much of his development work on the compiler that a determined party could probably reproduce it by just copying the text from his text editor. He streams so much of his game development using the compiler that it obviously and unambiguously works - and so long as you know all the idiosyncrasies of the language (a big caveat) works reasonably well.
He has put up, just not in the medium that you would prefer. You have not, so perhaps you should take the second half of your advice.
I can make a python script that prints out stuff while I'm streaming, too. He hasn't demonstrated squat.
But, let's say he does have a working compiler. It's very probable that he does. Why doesn't he release it, then?
Well you see, the only way it's possible to really effectively criticize a programming language is to use it. By not releasing the compiler, he avoids real criticism.
So, no, I don't take him seriously on PL design. At best, he's scared of criticism, but wants to be able to criticize others. At worst, he's a fraud.
I'm not a PL designer, but from everything I've heard him say regarding PL design, I agree with your assessment that one probably shouldn't take PL design advice from him (or at the least take them with a grain of salt).
However I don't think there is any need to suggest he's a "fraud", the language is "fantasy" or faked for the stream. I also don't think that there is any need for him to release the compiler.
Have you considered that Jai is just a hobby project? Many programmers choose writing a programming language as a hobby project so why shouldn't he? Releasing a compiler to the public and having everyone demand that he also maintains it for every corner case they have is a lot less fun than just working on it for your own scope and showing it off/discussing it in your stream.
Did he say something that offended you? He shares his opinions to people who ask about it. Often his opinions are shared on his livestream - which is an environment he ultimately owns.
I’m a bit shocked at your attitude with this. Your conviction is unnerving. The existence of jai doesn’t prevent the existence of Rust or other languages.
> I can make a python script that prints out stuff while I'm streaming, too. He hasn't demonstrated squat.
Creating a Python script that makes it look like you have an actual compiler is harder than you think it is.
> At best, he's scared of criticism, but wants to be able to criticize others. At worst, he's a fraud.
I wouldn't say he is scared of criticism, more like he is not yet at the state where he wants to accept criticism. I wouldn't want people to comment on my half-finished projects either, but this doesn't mean I can't say that something else could be better.
So people can watch him develop software? The same reason why you might stream a "let's play" but not be interested in somebody commenting on how bad of a gamer you are?
The issue is that he uses that platform to criticize other languages. That's my root problem, here. If all he did was write code, then nobody would care.
I'd be okay if he never spoke about it as the next big thing in programming with no intention of releasing it for public criticism. It's 100% okay to have private, in-house tools.
> I can make a python script that prints out stuff while I'm streaming, too.
I really doubt this. The amount of effort and acting that would need to go into being able to script in advance the sort of mistakes you make when programming would be astronomic. You would have to never make a typo when you weren't supposed to, always make a typo when you are, somehow deal with the fact that when you get stuck you need to be able to respond to audience feedback, and so on and so forth. (Oh, and interactive 3d programs not just text to a terminal).
That's a lot harder than making a programming language.
> Why doesn't he release it, then?
Because he doesn't think public feedback would be helpful at this time. Because he wants to release software that he can take pride in instead of an alpha-level programming language. Because that's what he is used to from game development. Because he feels like it. Etc. Why does it matter? You aren't entitled to it.
If he doesn't feel like public feedback would be helpful, then why is he streaming about it? That doesn't make any sense at all.
I'm not entitled to shit, but he's also not entitled to a platform. He's a game designer. He's released games. If he wants to talk about games design, I'm all ears.
> If he doesn't feel like public feedback would be helpful, then why is he streaming about it? That doesn't make any sense at all.
He _does_ take feedback from the public. That is, from people who take him seriously and participate in the conversation.
The compiler _is_ released in a closed beta. You could be a part of that if you were a productive voice in his language design discussions.
Instead, you're condescending:
> I can make a python script that prints out stuff while I'm streaming, too. He hasn't demonstrated squat.
> He's a game designer. He's released games. If he wants to talk about games design, I'm all ears.
Why would _anyone_ want to hear your ideas on their language if you believe these statements?
He's provided hours and hours of discussion on Jai and language design, explained most decisions he's taken and changes he's made since the inception of the language, documented the language's journey, _paid_ for people to work on the language, and we get to read "he could be faking it?" Yeah, he faked compiling programs with literal input from Twitch chat. The guy's a magician, not a programmer.
Why is the default that everyone should release the things they make? Maybe he just wants to use it for himself and is uninterested in anyone else using it.
I think that's a really toxic attitude. My thoughts about programming languages are still valid, even though I've never made one of my own. I have opinions about what works and what doesn't, based on experience using programming languages, and just because I haven't actually tried to write a compiler doesn't mean I should "shut up" about it.
If you don't like his ideas (or him) just ignore them. No one's forcing you to listen.
I think it's toxic to suggest your way or ideas are better without giving people any way to verify that they are. Of course anyone can have an opinion about anything.
Take him however you wish; I don't follow him or know the backstory here. But adopting a "put up or shut up" attitude makes it less likely that people excited about something will share it, especially if they're newer or less experienced. That sucks. We should try to make the tech community a welcoming place that fosters enthusiasm.
I neither know Jonathan Blow nor understand why he must create a programming language to critique them. Is he someone who has worked on programming languages as his day job for some time, yet somehow avoided applying his opinions to his own work?
It's more the issue that he has one that he has worked on in private. He hasn't released a version publicly. Someone earlier in the thread was saying the fact that he has designed a language means he is worth listening to, where the counterargument is: does it really count if he's never released it and no one really uses it?
He's a video game designer who announced a new programming language called "Jai" in 2014, and no compiler has yet been released for it (but he discusses new language features on his Twitch stream)[1].
I know nothing about the discussion of whoever this dude is, but I am very impressed that HN hasn’t rate limited your comments with all the grey I’m seeing.
Because it hasn't been released, and it likely never will be. It's impossible to criticize a programming language without actually using it, so any discussion of its merits is moot without an actual release.
C was largely "finished" when it came out–future additions have not added much. And by "finished" I don't mean "never changes", I mean "finished enough for a 1.0".
Aside from some syntax around function prototypes, it's really not, unless "people tend to use certain things less often" is how you're counting that difference.
If you're counting the language by "how many pages of specification" it takes up, perhaps. To me the language really is the many features it had when it came out, and the standard library improvements since then have been "string routines" and then "more string routines" and then "atomics, and also some string routines". Now C++, that's a language that has changed a huge deal: your C++03 code is obviously out-of-place in a C++11 codebase, but nobody will bat an eye if you drop some C89 code in a C11 project.
I've used C for long enough to meaningfully contribute to this conversation. And yes, by my definition C takes much of its inspiration from B, although there are enough differences that I would not claim that B is some sort of release of C.
Maybe he has different standards than you? I assume Jai will be similar to his games as far as attention to details goes. I would describe his games as having a maniacal attention to detail.
...Except that he did talk about all of his games almost from the get-go? He showed the earliest prototypes of Braid and The Witness publicly when the games were just blocky prototypes, in the latter case a good 7 years before the game came out. I don't know where you got this idea from; the way he is handling the development of the language seems to be pretty much an extension of how he develops his games.
It's his hobby/passion project, I really don't understand your objection.
You seem to have a problem with the fact he criticises other languages. Would you also treat someone who isn't currently designing a language with the same harshness if they also criticised a language the way Jonathan Blow does?
But he is designing a programming language. Even if it's all a fantasy or a Python script, like you claim, there's still a design.
His opinions and views on PL design are still up to scrutiny, so it's not a glass house like you claim. PL design is not compiler writing. You're free to implement all his ideas and write a Jai compiler yourself if you need a proper compiler in your hands to properly criticise the language.
And frankly, the only thing mentioned on this thread about "80% features" is not as controversial as you're making it. Nobody here is claiming it as revolutionary or game-changing, so even if you're right, your posts are a bit uncalled for.
Just because it has been debated extensively doesn't mean it's not a wart.
Having run into the "reverse range does not contain what you obviously expect it to contain" behavior before and wasted a few hours on it, like many other people have and will continue to do in the future, definitely makes me want to call it a wart.
IMO, the current behavior is correct: it would be absolutely horrible if a lower bound greater than the upper bound suddenly reversed the iteration order instead of producing an empty range (think dynamic ranges, not hardcoded ones). Perhaps the operator should be .>>., not .., as an improvement.
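For anyone who hasn't hit this: a quick sketch of the current behavior, where a "reversed" range is simply empty and counting down requires an explicit `.rev()` on a forward range.

```rust
fn main() {
    let forward: Vec<i32> = (1..4).collect();
    assert_eq!(forward, vec![1, 2, 3]);

    // A "reversed" range is simply empty -- it does not count down.
    let backward: Vec<i32> = (4..1).collect();
    assert!(backward.is_empty());

    // Counting down requires an explicit .rev() on a forward range.
    let counted_down: Vec<i32> = (1..4).rev().collect();
    assert_eq!(counted_down, vec![3, 2, 1]);
}
```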
Rust tries to be conservative with its semantics. You can always create your own range type and implement Deref/From for it to convert it into a Rust range.
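A minimal sketch of that approach; the `OrderedRange` type and its normalizing behavior are hypothetical, but `From` conversion into `std::ops::Range` works as shown.

```rust
use std::ops::Range;

// Hypothetical wrapper that stores its bounds in ascending order,
// then converts into a standard Range for use with slices etc.
struct OrderedRange {
    start: usize,
    end: usize,
}

impl OrderedRange {
    fn new(a: usize, b: usize) -> Self {
        // Normalize so that start <= end regardless of argument order.
        if a <= b {
            OrderedRange { start: a, end: b }
        } else {
            OrderedRange { start: b, end: a }
        }
    }
}

impl From<OrderedRange> for Range<usize> {
    fn from(r: OrderedRange) -> Range<usize> {
        r.start..r.end
    }
}

fn main() {
    let data = [10, 20, 30, 40, 50];
    // Bounds given "backwards" still produce a usable forward range.
    let r: Range<usize> = OrderedRange::new(4, 1).into();
    assert_eq!(&data[r], &[20, 30, 40]);
}
```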
It seems like in this case Rust actually failed to act conservatively in its semantics, by allowing ranges on overly generic types in a way that doesn't make sense.
I think the problem here is more that the semantics are documented in the "documentation", but are contrary to the intuition derived from the semantics in the type system.
`.contains()` is only implemented for `Range<Idx: PartialOrd>`, which to me implies that when checking whether a value is contained in the Range, it has enough knowledge about the ordering of numbers that it should be able to still do a bounds check on reversely ordered numbers.
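The mismatch between that intuition and the actual behavior can be seen directly: even though both bounds and the value are `PartialOrd`, a reversed range contains nothing.

```rust
fn main() {
    // An ordered range behaves as you'd expect...
    assert!((1..5).contains(&3));

    // ...but a reversed one contains nothing, even though the bounds
    // are PartialOrd and 3 lies numerically between them. `contains`
    // checks start <= item && item < end, which can never hold here.
    assert!(!(5..1).contains(&3));
}
```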
> Range<Vec> requires the borrow, so the vastly more common Range<usize> etc.
So I have no idea about Rust, but in C++ a big feature of templated code is that you can make type-appropriate specializations; is that not possible in Rust, or did its standard library maintainers just sleep on the job?
Rust has specialization but I don't think you can change the signature like that. In C++ of course it's the callee that decides how arguments are passed.
But it would also be uncommon in C++ to specialize to pass by value. Generic code normally just passes T by reference, ex. max, push_back. Since this is transparent to the caller in C++ though, you don't have to write the &s.
It's been in the works for years and there's no end in sight. IIRC sound specialization might not be possible until some precursor work for GATs is complete.
Last I checked (a month or two ago), the specialization feature warning even included a bit about how it can crash the compiler (the crashes were the reason I gave up on specialization two years ago)
Edit: Turns out there's been some progress since I used it last! Feature `min_specialization` seems to now be a non-crashing subset of specialization [1]
Even this needn't be the case. `Range` can implement `IntoIterator` to plug into `for` loop syntax.
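A sketch of how that would work for a custom range type (the `MyRange` name is made up for illustration): implementing `IntoIterator` is all `for` loop desugaring requires, so the type itself never has to be an iterator.

```rust
// Hypothetical range type that is not itself an iterator,
// but still works with `for` via IntoIterator.
struct MyRange {
    start: u32,
    end: u32,
}

impl IntoIterator for MyRange {
    type Item = u32;
    // Reuse the std range as the iterator for simplicity.
    type IntoIter = std::ops::Range<u32>;

    fn into_iter(self) -> Self::IntoIter {
        self.start..self.end
    }
}

fn main() {
    let mut sum = 0;
    for n in (MyRange { start: 1, end: 4 }) {
        sum += n;
    }
    assert_eq!(sum, 6); // 1 + 2 + 3
}
```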
Another related problem is that `SliceIndex` (https://doc.rust-lang.org/std/slice/trait.SliceIndex.html) trait, which is used to implement indexing, is perma-unstable. So, even if you build your own better range, you can't make it play nicely with slices.
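That said, as noted elsewhere in the thread, `Index` itself is stable, so if you control the container you can accept a custom range type there. A minimal sketch (both `MyRange` and `MyVec` are hypothetical names for illustration):

```rust
use std::ops::Index;

// Hypothetical custom range type.
struct MyRange {
    start: usize,
    end: usize,
}

// Hypothetical container you control; since `Index` (unlike
// `SliceIndex`) is stable, it can accept the custom range.
struct MyVec(Vec<i32>);

impl Index<MyRange> for MyVec {
    type Output = [i32];

    fn index(&self, r: MyRange) -> &[i32] {
        &self.0[r.start..r.end]
    }
}

fn main() {
    let v = MyVec(vec![1, 2, 3, 4]);
    assert_eq!(&v[MyRange { start: 1, end: 3 }], &[2, 3]);
}
```

This doesn't help with `&[T]` or `Vec<T>` themselves, which is exactly the limitation the perma-unstable `SliceIndex` imposes.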