It's jarring initially, but it becomes natural very quickly. Writing loops like "for i in 1:length(arr) ... end" is pretty neat compared to C++ or even Python. Plus, in math, sequences typically start at index 1.
I would go as far as to say that it is now largely an archaic idiom to write «for i in 1:length(arr) ... end», and there is no reason to write such code to process a collection of elements (an array, a list, etc.) in its entirety. Yet people keep on writing it, and then spend countless man-hours chasing subtle bugs that blow up later in production.
Most modern languages have a «foreach» operator or its functional equivalent as a collection item generator. «foreach elem in my_collection {}» also has clearly defined semantics: «process everything». The code is cleaner and more concise, and it reduces the cognitive load. And since many languages now support lambdas, the code can quickly and without errors be converted into a staged data-processing pipeline with no temporary variables that serve no purpose, e.g. «foreach elem in my_collection.filter (_ -> _.my_selection_criteria ("42")) { do_something (elem); }». Whether the collection is indexed from 0 or from 1 becomes irrelevant.
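In Julia specifically, the same pattern is already there in Base; here is a rough sketch of the equivalent («my_collection», «my_selection_criteria» and «do_something» are made-up placeholders for illustration, not anything from a real package):

    # rough Julia equivalent of the pipeline above, with placeholder names
    my_collection = ["41", "42", "420"]
    my_selection_criteria(s) = occursin("42", s)   # stand-in predicate
    do_something(s) = println("processing ", s)

    # «process everything», no indices involved
    for elem in my_collection
        do_something(elem)
    end

    # the same thing as a staged pipeline, with no temporary variables
    foreach(do_something, Iterators.filter(my_selection_criteria, my_collection))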
You should never write loops that way, at least in code you're going to share (assuming you're going to have some arr[i] in the loop body; if not, you would just do "for e in arr").
Assuming that arrays start at 1 is a source of occasional bugs in public packages. The existence of OffsetArrays means that arrays can start at any index (so if you're someone who gets nauseated by 1-based arrays, you can change it).
Instead, write "for i in eachindex(arr)".
In fact, Julia's array syntax means you can loop over and manipulate arrays without using indexes much of the time, so you often don't even need to know where they start.
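For instance (a quick sketch; OffsetArrays is the third-party package mentioned above, so this assumes it's installed):

    using OffsetArrays

    arr = OffsetArray([10, 20, 30], 0:2)   # same data, but the indices are 0:2

    # "for i in 1:length(arr)" would blow up here with a BoundsError at i = 3

    for i in eachindex(arr)    # iterates 0:2, whatever the array's axes are
        println(i, " => ", arr[i])
    end

    for x in arr               # and often you don't need indices at all
        println(x)
    end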
> Plus, in math, sequences typically start at index 1.
I'm not sure this is true often enough to say "typically". In terms of undergraduate exposure to math, I think more people have taken Calculus (or Analysis) than Linear Algebra, and I think Calculus textbooks tend to index from 0 while Linear Algebra textbooks tend to index from 1.
That is such a weird argument to me. I can count on the fingers of one hand the number of times in my life that I've had to compose ranges; it seems a strange thing to optimize for.
You seem to have a weird fetish for composable ranges, judging by your history of comments on exactly this topic (even in this thread!). The number of times I have had to think about composability in the sense you mention is close to zero. In fact, the off-by-one errors in the C family of languages, due to the open-ended nature of the interval, have bitten me about 3-4 orders of magnitude more often. There is an entire class of security bugs because of this. I find this obsession very strange, but YMMV obviously.
I used Julia for a project back in 2016, and had the same adverse reaction to 1-based indexing. It's the same reason I have a hard time with Lua. Why? I suppose it's all rather superficial, but it's one of those things that just grates on me and I'll avoid it if I can. I'm not sure there's a logical explanation, but I've found the sentiment to be rather common among developers.
Same, but it's probably more of a familiarity bias than a logical one. I think there is a strong argument for indexing from 1, as is done in MATLAB, R, etc., since it matches the way most people refer to and think about lists in reality.
A brave decision from the authors, though; I imagine they would have gotten less flak by matching the more popular 0-indexed languages.
Ha, I was thinking of replying to your parent comment, "Half-open ranges are such an unpleasant thing as to be unforgivable to me. Different subjective preferences."
I've always found 0-indexing mildly unpleasant too, even though C was the language I learnt programming with, and I felt like I'd found home when I came across 1-indexed languages.
With half-open ranges, the length of the range is the end minus the beginning, which is nice. It's also much simpler to write an empty range: if left = right, then left..right would be the empty range, whereas in Julia I'd need... left:(right-1)? But is 1:0 the empty range or the right-to-left inclusive range [1, 0]? Very confusing and hard to work with all around.
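Concretely, here's the dance as I understand it (just Base Julia; correct me if I've got the behaviour wrong):

    left = right = 4

    # half-open: left..right with left == right would already be empty;
    # with Julia's closed ranges you write the -1 yourself
    r = left:right-1      # 4:3
    isempty(r)            # true
    collect(1:0)          # Int64[] -- stop < start means empty, not reversed
    collect(1:-1:0)       # [1, 0]  -- right-to-left needs an explicit step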
> With half-open ranges, the length of the range is the end minus the beginning, which is nice.
But I already know from real life that if there's maintenance in blocks 5 to 15, that's eleven blocks under maintenance. Half-open ranges once again introduce confusion and make things unintuitive.
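A quick check with Julia's closed ranges (nothing beyond Base here):

    length(5:15)   # 11 -- matches the everyday "blocks 5 to 15" count
    15 - 5         # 10 -- what end-minus-start would give you instead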
We can each find countless examples where each style comes out better; e.g., `1:N` most often expresses the intent better than `range(N+1)`. I find people who prefer half-open ranges weird and incomprehensible, and you probably find my preferences the same. I just don't like the categorical statement often made in this regard that one is inherently and universally superior to the other.
Yes, range(n+1) is bad. I hate Python as well. You want something like Rust's m..n and m..=n so you have the choice. Anyway, if you have half-open ranges, it's trivial to get a closed range (add one to the right endpoint), but going the other direction is not so simple.
Idk, typing +1 seems easy enough... 1:N == union(1:(N÷2), (N÷2+1):N).
But really, these discussions are funny to me: each side pretending their convention is how God intended indexing to be done. You see, it's naturally composable because N/2 shows up twice, which is really the perfect number of times to show up, and as you recall I just defined composable in that way (not to mention it matches the fact that N/2 occurs twice in [0, N)!).