Inko Programming Language (inko-lang.org)
195 points by gautamcgoel on Nov 14, 2023 | 94 comments



> Inko uses lightweight processes for concurrency, and its concurrency model is inspired by Erlang and Pony. Processes are isolated from each other and communicate by sending messages. Processes and messages are defined as classes and methods, and the compiler type-checks these to ensure correctness.

> The compiler ensures that data sent between processes is unique, meaning there are no outside references to the data. This removes the need for (deep) copying data, and makes data races impossible. Inko also supports multi-producer multi-consumer channels, allowing processes to communicate with each other without needing explicit references to each other.

Now I'm very interested. I was wondering why, with "actors", we'd still mark functions as async. It seems that a class marked async is analogous to an actor.

  class async Counter {
    let @value: Int
  
    fn async mut increment {
      @value += 1
    }
  
    fn async send_to(channel: Channel[Int]) {
      channel.send(@value)
    }
  }
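
For those of us coming from Rust, the rough shape of such an actor can be modeled with a thread that owns its state and a channel as its mailbox. A minimal sketch (plain Rust with std, all names illustrative; nothing Inko-specific):

    use std::sync::mpsc;
    use std::thread;

    // The messages the "process" understands, mirroring the async
    // methods above (all names here are illustrative).
    enum Msg {
        Increment,
        SendTo(mpsc::Sender<i64>),
    }

    // Spawn a thread that exclusively owns its state and reacts to
    // messages; no outside references to `value` can exist.
    fn spawn_counter() -> mpsc::Sender<Msg> {
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            let mut value = 0i64;
            for msg in rx {
                match msg {
                    Msg::Increment => value += 1,
                    Msg::SendTo(reply) => {
                        let _ = reply.send(value);
                    }
                }
            }
        });
        tx
    }

    fn main() {
        let counter = spawn_counter();
        counter.send(Msg::Increment).unwrap();

        let (reply_tx, reply_rx) = mpsc::channel();
        counter.send(Msg::SendTo(reply_tx)).unwrap();
        println!("count = {}", reply_rx.recv().unwrap());
    }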


A class marked as "async" can still have regular methods, which are only available inside the class (i.e. to the process itself, not to other processes sending messages to it). So you can do something like this:

    class async SomeProcess {
      fn async some_message(...) {
        ...
        some_helper_method
      }

      fn some_helper_method {
        ...
      }
    }

    SomeProcess {}.some_message # This is fine
    SomeProcess {}.some_message # This is a compile-time error
I toyed with the idea of _not_ allowing regular methods and thus implicitly making all of them async for an async class, but that might lead to a pattern of async classes being "mirrored" by non-async "helper" classes just to reuse some methods between the different messages of an async class, i.e. you end up with this:

    class async SomeProcess {
      fn some_message(...) {
        SomeProcessHelper {}.some_random_method
      }
    }

    class SomeProcessHelper {
      fn some_random_method {
        ...
      }
    }


I'm confused by

    SomeProcess {}.some_message # This is fine
    SomeProcess {}.some_message # This is a compile-time error
Aren't those the same call? Or is it calling it twice that's the error? Or did you mean the 2nd to call some_helper_method instead?


I'm guessing it's a typo. Should probably be

    SomeProcess {}.some_message # This is fine
    SomeProcess {}.some_helper_method # This is a compile-time error


You're right! Sadly I can't edit the comment any more to correct it :<


Glad to see you here. Thanks, that all makes sense: making the async class's methods async by default would then need another keyword to make one non-async.

Is the mut marking also for consistency (and/or non-message methods)? Presumably each actor/object would only process one message at a time so could always have full access to all its data.

One more question, is memory allocated globally or from arenas/slabs/etc?


    > Is the mut marking also for consistency (and/or non-message methods)?  Presumably each actor/object would only process one message at a time so could always have full access to all its data.
This is indeed correct: processes have exclusive access to their data and run a single message at a time. Still having to tag methods as mutable is to prevent you from accidentally mutating data, and to keep it consistent with regular methods.
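
A rough analogy in Rust terms (my sketch, not Inko code): mutation stays opt-in at the signature level even when access is exclusive:

    struct Counter {
        value: i64,
    }

    impl Counter {
        // Read-only receiver: the compiler rejects mutation here.
        fn get(&self) -> i64 {
            self.value
        }

        // Mutation must be declared, even though access is exclusive.
        fn increment(&mut self) {
            self.value += 1;
        }
    }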


This is all an amazing achievement, congrats on making it so far/long.

I always felt that Rust wasn't a local maximum and there was something that could be added/removed. The only thing I'm not sold on is async (outside of actor use) vs. futures, where waiting explicitly blocks the waiter but can be composed without waits. There seems to be some debate around Rust's choice of implementation.


> To add a package, first create a GitHub repository for your package. While Inko's package manager supports the use of any Git repository (e.g. one hosted on GitLab), the above list is populated using GitHub repositories only.

So another community that wants to lock itself into Microsoft products as well as Git in general. Why should a user’s choice for DVCS & forge make them feel less supported or inferior?


This is just the list on https://inko-lang.org/packages/ – which has a grand total of 14 packages. Even the text you quoted yourself says the package manager supports $any git repo, just that the website doesn't (yet).

In the Inko repo there are 13 contributors, with the #2 contributor having just 8 commits. The Website repo has 7 contributors, with the #2 having 5 commits. This is pretty much a one-person show.

But I guess every project, no matter how small the team, must support "my favourite VCS" and "do things my favourite way". Why don't you go and write a patch if Mercurial support or whatnot is so important for you?

This is the textbook definition of open source entitlement. You can't do anything without some asshole shouting at you for doing something "wrong".


Because PL devs and their communities, until very large, have limited resources; supporting niche use cases needs to be provided by people wanting that niche served, not by the primary contributors.


When did the ability to clone a repo outside of GitHub become a niche use case? It's not something you need to explicitly support; in fact, you have to actively go out of your way to make sure only GitHub works.


Inko's package manager supports non-GitHub repositories just fine. What the original comment is likely referring to is the website only including GitHub-based packages. This is simply because the script that populates that list hasn't been extended to support more than GitHub, because there hasn't been a need for it so far.


…Or use something generic like a tarball? Almost all DVCSs have the ability to export an archive. Don't build on lock-in; build on the generic thing… and then provide an "enhanced" experience for something specific like Git or a specific forge.


When there are people using Inko and hosting their packages on another forge such as GitLab, and said forge has an API to get the data needed, I'll happily accept any patches to allow listing of non-GitHub packages on the website. The reason this hasn't been done thus far is simply because there hasn't been a need for it.


It's not even the worst I've seen. E.g. Kinesis and others sell keyboards with the open-source ZMK firmware, where the official way to change the key map is to fork a GitHub repository, edit it, and use GitHub to build it. I wish I was making this up.

So yes, it seems GitHub has joined Twitter and WhatsApp as another thing the world runs on.


> make them feel less supported or inferior?

Your point would have been much better made without the hysterical language


?

How should a developer who learned on & is happy with Pijul or Mercurial… or Codeberg, or whatever, feel otherwise? There's an implicit bias in supporting a narrow band of tools, especially a proprietary one, that gets codified when you don't start with something more generic. And if your project aims to be long-lived, it'd be equally myopic to assume Git will remain the most popular tool til the end of time. As noted in a sibling comment, had the project accepted a tarball, then almost any platform, past, present, & future (assuming filesystems with files & folders are still used), would be supported. A good example of this is Nix, where the base is a tarball and all shorthands are merely convenience DSLs that map down to the tarball.


I can't believe you replied after 2 days.

And why are you on another rant about tools and support thereof.

I was pointing out that your hysterical language was not helping your point.

You're not doing yourself any favours here


> Why should a user’s choice for DVCS & forge make them feel less supported or inferior?

do you really feel that way, mate? a bit dramatic isn't it? let me show you a better way.

yo, Yorick Peterse, the owner of Inko prog lang. you see, I actually want to publish shit for Inko. but you guys seem to only support github. what kind of dance / computer programming should I perform so your tools can show my tool?

and in case you guys don't wanna / cannae support my super sekrit "forge", too bad. this is where we part way.

simple, yeah?


Related. Others?

Show HN: Inko 0.10.0 – build concurrent software with confidence - https://news.ycombinator.com/item?id=32811621 - Sept 2022 (3 comments)

Inko 0.5.0 released, featuring the first steps towards a self-hosting compiler - https://news.ycombinator.com/item?id=20988908 - Sept 2019 (7 comments)

Inko (a gradually-typed object-oriented programming language) 0.4.0 released - https://news.ycombinator.com/item?id=19893035 - May 2019 (1 comment)

Show HN: Inko – A safe and concurrent object-oriented programming language - https://news.ycombinator.com/item?id=17702237 - Aug 2018 (45 comments)


This looks enough like Rust that I'm not sure why I'd pick it over Rust. What would be some concrete programs that would be difficult or tedious to write in Rust and easy in Inko? It could really use a section on "here's Rust code you'd like to write but the compiler won't let you" and how Inko handles it.


From a quick glance, a "simpler" Rust with an Erlang like concurrency model seems like a very appealing language. It's also a sentiment I've seen repeated quite often.


You’ll note lifetimes aren’t a thing here, which is enough for me to give it a shot.


Apparently it's just moving that to runtime, which is kind of terrifying and misses the point. "If the owned value is dropped but references to it still exist, a panic is produced and the program is aborted; protecting you against use-after-free errors." [1]

That "protection" seems pretty worthless to me, since a non-trivial number of the use-after-free bugs I've seen are triggered only in rare cases, which means you're still crashing in prod.

Overall, Rust's lifetimes are sometimes hard, but generally only when memory safety is also hard. Inko's docs claim it makes it easier to implement self-referential data structures without unsafe/raw pointers, but to be honest the references here don't seem significantly safer than raw pointers.

[1] https://docs.inko-lang.org/manual/latest/getting-started/mem...


This sounds like very useful protection to me: yes, you can have crashes in prod, but those are unexploitable at least. That's a very big deal in some systems.


So use Rust. I’m not understanding these complaints.


How many times did you write the `for<'a>` construct, manually use the `Pin<...>` type, or specify `where 'a: 'b`? I personally haven't needed to reach for these "dark arts" in at least the last two years, thanks to the Rust compiler developers making lifetime inference much better than it was a few years ago.
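
For anyone who hasn't run into them, here's what two of those constructs look like (a self-contained, illustrative Rust sketch):

    // A higher-ranked trait bound: `f` must accept a &str of *any*
    // lifetime, including one local to this function.
    fn call_with_local<F>(f: F) -> usize
    where
        F: for<'a> Fn(&'a str) -> usize,
    {
        let local = String::from("hello");
        f(&local)
    }

    // An explicit outlives bound: 'long must live at least as long
    // as 'short.
    fn pick_first<'long, 'short>(a: &'long str, _b: &'short str) -> &'long str
    where
        'long: 'short,
    {
        a
    }

    fn main() {
        assert_eq!(call_with_local(|s: &str| s.len()), 5);
        let s1 = String::from("x");
        let s2 = String::from("y");
        println!("{}", pick_first(&s1, &s2));
    }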

But there are still some warts around e.g. generic async functions, where you do sometimes need to think very hard about lifetimes and know plenty of tricks. Arguably those are situations where memory management is hard, but it's not something that e.g. TypeScript, C#, Swift or Scala would make you think about.


You seem to misunderstand the intent of my comment.

The feedback (from Rust devs, at least) on this language seems to boil down to "But why isn't it Rust?", which also seems to be kind of a resounding sentiment around many different languages that Are Not Rust. Conversations around Zig and Go get derailed (by Rust devs) all the time.

So my question is, if a language Is Not Rust, and ultimately your demand is to use Rust, why even bother complaining? Just use Rust. The rest of us don't want Rust, so languages in this space (this space being defined as "Statically typed, compiled languages that Are Not Rust"), can be interesting. Derailing to poke at all the ways a language isn't Rust is just a waste of everybody's time.


Crashing in prod is still a hundred times better than silent data corruption and security issues from use-after-free.


Generally speaking: if you're looking for a non-systems language that gives you similar guarantees as Rust, but with a lower mental cost, and you can stomach the drawbacks of a young language, then Inko might be worth looking into.

As for comparisons to actual Rust programs: I haven't done any side-by-side comparisons, mainly because I worry they may come across as a bit disingenuous, as you'd never pick examples that make your language look worse than whatever language you're comparing to.


I know this is a purely subjective opinion, but I feel that Inko looks a lot nicer than Rust, from a syntactic perspective. I could never get over how many symbols and keywords Rust has, not to mention how it requires semicolons (despite them being unnecessary in all but a few cases).


This is a good selling point because my main issue with Rust is the mental overload that comes with handholding myself throughout the development process.

I'm sure a lot of people just roll out of bed and naturally dance with the borrow checker like it's an old friend, but to me it's a hindrance. Maybe it's because I don't only do Rust and do plenty of other languages, or my brain is inferior, or both.


I feel that when using Rust for low-level stuff, all the mental overhead is worth it compared to using C: dance with the borrow checker, or get bitten by subtle bugs -- you choose :)

Using Rust for non-perf-critical web apps is a nice exercise, and yields a super fast and efficient app. But in that case the mental overhead is not worth it compared to a language and developer experience like Kotlin.


FWIW, for most of my Rust use cases I found I can get away with a relatively simple heuristic:

- if you pass it as a parameter, pass a read-only reference.

- if you return a value as the result of a function call, clone as necessary and return an owned object; try to never return a reference.

- if the above two don’t fit, it’s thinking time; either the structure of the program can still accommodate them with some refactoring, or it’s genuinely incompatible.

The first avoids excessive copying and limits bugs from accidentally modifying a cloned object; the second limits the lifetime-management gymnastics required; the third keeps the program logic relatively simple except where complexity is really required.
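
A minimal sketch of that heuristic (plain Rust, illustrative names):

    #[derive(Clone)]
    struct Config {
        name: String,
    }

    // Rule 1: take a read-only reference; no ownership transfer needed.
    fn describe(config: &Config) -> String {
        // Rule 2: build/clone as needed and return an owned value, so
        // the caller never has to reason about this function's lifetimes.
        format!("config: {}", config.name)
    }

    fn main() {
        let config = Config { name: String::from("prod") };
        let summary = describe(&config); // `config` remains usable
        println!("{} (from {})", summary, config.name);
    }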

Hope that this is of some use :-)


- if you pass it as a parameter, pass a read-only reference.

Completely breaks with a lot of async code


I had only been following this language with some interest. I guess this was born at GitLab; not sure if the creator(s) still work there. This is what I'd have wanted golang to be (albeit with GC when you do not have clear lifetimes).

But how would you differentiate yourself from https://gleam.run, which can leverage OTP? I'd be more interested if we could adapt Gleam to GraalVM isolates so we can leverage the JVM ecosystem.


While I indeed worked for GitLab until 2021, Inko's development began before I joined GitLab, and GitLab wasn't involved in it at all.

Inko was hosted on GitLab for a while (using their FOSS sponsorship plan), but I moved it back to GitHub to make contributing easier and to increase visibility. As much as I prefer GitLab, the sad reality is that by hosting a project there you'll make it a lot harder for people to report bugs, submit patches, or even just find out about your project in the first place. I could go on for a long time about how GitLab really wasted its chance of overtaking GitHub, but that's for another time :)


Inko means “one more” in Telugu, a language spoken in South India (Satya Nadella’s mother tongue). So it’s one more programming language!


How is Satya even relevant here?


An advantage over Erlang and Elixir is that this looks like any language from the C/Java family, including all the clutter, as in

  import std.fs.file.ReadOnlyFile
  import std.stdio.STDOUT
  
  class async Main {
    fn async main {
We basically have a zero signal-to-noise ratio here, but it's a declarative pattern common to many successful languages, so it will make many people feel at home. I think it's a good design decision.

On the other hand, I feel like the naming decisions in the error-handling section are weird. The expect method breaks my expectations:

  let file = ReadOnlyFile
    .new('README.md')
    .expect("the file doesn't exist")
  
  file
    .read_all(bytes)
    .expect('failed to read the file')

Well, no: I expect the file to exist and to be able to read from it.

An expect method is common in test libraries and it behaves like an assertion. When I see an expect method I expect it to have a condition and stop the program if it evaluates to false, or propagate the error in any way appropriate to the language. So I expected

  let file = ReadOnlyFile
    .new('README.md')
    .fail("no_ro", "the file exists but it is not read only")
    .expect("created")
    .expect("open") 
I intentionally changed the probable behavior of the class, which seems not to create a read-only file but only to open it. I made it create the file to show a softer version of expect() than an assertion: at least one of them must be true. How to chain those methods without failing at the first failed expect() is an implementation detail.

And as for .unwrap_or(0), what does that even mean? The linked post [1] hints about a

  "wrapper type,” like Maybe<T>
so it's an established naming convention in some languages, but if it works the way I think it does, what about this?

  .default(0)
That's because unwrap_or seems to ignore errors and return a default. If it doesn't, that confirms it's a bad name.

[1] https://joeduffyblog.com/2016/02/07/the-error-model/


You're right that "expect" combined with the error message here could be better. Specifically, I've been wanting a better name for the method, but haven't really done anything about it yet. The name itself is taken from Rust.

"unwrap_or" could perhaps be called "unwrap_or_use_default", but that's a tad too much to type. What it basically does is this:

    match the_value {
      case Some(val) -> val
      case _ -> 0
    }
i.e. if you have a Some, you "unwrap" its value and return it; otherwise you return a default value.
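
Since the name is taken from Rust, Rust's own `unwrap_or` behaves the same way:

    fn main() {
        let present: Option<i64> = Some(7);
        let absent: Option<i64> = None;

        assert_eq!(present.unwrap_or(0), 7); // unwraps the Some value
        assert_eq!(absent.unwrap_or(0), 0);  // falls back to the default
    }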


> The name itself is taken from Rust.

That's great if you want to grow your user base with Rust developers. It's what Elixir did with Ruby.

As for me, I unwrap presents IRL but have never unwrapped anything in a programming language, at least not under that name. That's why it was totally opaque to me.


I’ve always been partial to borrowing a bit from Perl and renaming .expect() to .or_die()


ok_or()


`unwrap_or` and `expect` work exactly like they do in Rust. The naming of `expect` works nicely if the argument is the precondition that failed (see https://doc.rust-lang.org/std/error/index.html#common-messag...), e.g.

    .expect("should have opened file")


With all due respect for a language that has succeeded, and the evidence that its choices are at least good enough, I think they got it backwards.

  expect("the file is open")
is a positive statement.

  expect("should have opened file")
is kind of negative and less easy to understand. They should have nudged developers toward the correct style with a different method:

  should("have opened file")
That said, it did not hinder Rust; maybe it even contributed to its success, so it's OK even if it feels strange to me. But unwrap is really an awkward word to pick.


OT pet peeve/PSA: don't put `.new`/function on a separate line after the type/variable name--at least until our editors/tools (e.g. GitHub) can all do multi-line/parser-aware searches.


Given the big similarity to Rust, and the similar target group (people who care about concurrency in compiled languages), what made you decide to stray from Rust's syntax, @YorickPeterse? Personal preference?

It's all about small stuff like: `async fn` (Rust) vs `fn async` (Inko), skipping parens for 0-arity functions in Inko, `::` vs `.`, `Generic<T>` vs `Generic[T]`.


fn async means that by looking at the first token, we already know we are parsing a fn.

When you see an `async`, on the other hand, you only know you're parsing a fn as long as fn is the only thing in the language that can be async.


`<T>` in Rust is due to C++ pandering (assuming that's where it came from; unsure). I don't see why "people who care about concurrency in compiled languages" should care about C++ syntax (which is both ugly to read and hard for computers to parse).


I have mixed feelings on Rust's syntax, especially around generics, lifetimes, and the `modifier -> keyword` syntax (i.e. `async fn` or `pub fn`). For Inko, I wanted something that's easy to parse by hand, and no context specific parsing (e.g. `QUOTE -> something` being the start of a lifetime in one place, but a char literal in another place).

Another motivator for that is that years ago I worked on Rubinius for a while (an implementation of Ruby), and helped out with a parser for Ruby (https://github.com/whitequark/parser). The Ruby developers really liked changing their already impossible syntax in even more impossible ways on a regular basis, making it a real challenge to provide syntax related tools that support multiple Ruby versions. I wanted to avoid making the same mistake with Inko, hence I'm actively trying to keep the syntax as simple as is reasonable.

As for the specific examples:

`fn async` means your parser only needs to look for `A | B | fn` in a certain scope, instead of `A | B | fn | async fn`. This cuts down the amount of repetition in the parser. An example is found at https://github.com/inko-lang/inko/blob/8f5ad1e56756fe00325a3..., which parses the body of a class definition.

Skipping parentheses is directly lifted from Ruby, because I really like it. Older versions took this further by also letting you write `function arg1 arg2`, but I got rid of that to make parsing easier. It's especially nice so you can do things like `if foo.bar.baz? { ... }` instead of `if foo().bar().baz?()`, though I suspect opinions will differ on this :)

Until recently we did in fact use `::` as a namespace separator, but I changed that to `.` to keep things consistent with the call syntax, and because it removes the need for remembering "Oh for namespaces I need to use ::, but for calls .".

`[T]` for generics is because most editors automatically insert a closing `]` if you type `[`, but not when you type `<`. If they do, then trying to write `10<20` is annoying because you'd end up with `10<>20`. I also just like the way it looks more. The usual ambiguity issues surrounding `<>` (e.g. what leads to `foo::<T>()` in Rust) doesn't apply to Inko, because we don't allow generics in expressions (i.e. `Array[Int].with_capacity(42)` isn't valid syntax) in the first place.


> Skipping parentheses is directly lifted from Ruby, because I really like it. Older versions took this further by also letting you write `function arg1 arg2`, but I got rid of that to make parsing easier. It's especially nice so you can do things like `if foo.bar.baz? { ... }` instead of `if foo().bar().baz?()`, though I suspect opinions will differ on this :)

I've spent a little bit of time in Ruby land and such cute syntactical tricks tend to have limited benefits (none?), but significant downsides.

This:

    foo.bar()
communicates very clearly that arbitrary code will get executed now.

But:

    foo.bar
could be arbitrary code execution or a trivial and cheap field access. No way to know which without extra information. Reading code like this stutters; every instance of x.y requires pausing to look up type information.

There's a second problem. Allowing:

    foo.bar
but not:

    foo.bar x y
applies special treatment to functions with zero arity. Is there a reason for that? As far as I know function arity carries no special significance.

Consistency shouldn't be disturbed without a sound justification.


There's one advantage of foo.bar calling a function: it works well with referential transparency. Whether it's a property/value or a function, only the result matters. I can't say it's a big difference though; I've only been mildly annoyed by having to change all call sites when changing between them. Other languages allow code bodies for getters/setters (foo.bar = ...), so it still hides the call. For a C-style syntax language, having the () seems less surprising.


When it comes to correctness, only the result matters, sure. But not so for performance and code understandability. Obscuring points of arbitrary code execution is awful.

The getter/setter pattern is also bad, but a little less so since it's exceedingly rare any real code will get executed there.


Ah that is indeed making the parser much nicer. Thanks for the explanation!

Also, thanks a ton for your work on Ruby parser gem, so many awesome projects are using it now! https://rubygems.org/gems/parser/reverse_dependencies


Most likely not wanting to repeat Rust's mistakes.


Having pre-built binaries would be hugely helpful; I can't get it to compile (it can't "find" LLVM, even though it's installed and everything should be set up right – it doesn't help that the llvm-sys error message isn't very helpful in explaining what went wrong).

I have no doubt I could get it to work if I spent more time on it, but ... I have no idea if that's going to be worth the effort, because I don't know if Inko is even interesting for me, as I can't actually try it.



Are there binaries for ivm? If ivm is intended to make it easier to manage inko versions, having to install it via another language's toolchain (Rust) seems a bit onerous.

Edit: Oh I see, ivm itself relies on Cargo to build inko rather than downloading binaries.

Still, it would be nice if it didn't need another language's toolchain just to use Inko :(


There are no pre-built binaries. If you're using Arch Linux there's an AUR package (see https://docs.inko-lang.org/manual/main/getting-started/ivm/#...), but for other platforms you'll need to use `cargo install`. I'm hopeful that over time the list of package managers including ivm will grow, making this process easier.


> Inko allows...moving of the borrowed values while borrows exist

Yet

> With Inko you never again have to worry about NULL pointers, use-after-free errors, unexpected runtime errors, data races, and other types of errors commonly found in other languages

I’m wondering how that can possibly work? Surely the memory safety guarantees are not as strong as Rust’s guarantees.


Inko defaults to heap allocating objects (except for Int, Float, Nil, and Bool), meaning the data a pointer points to stays in a stable place. Thus, moving them around is just moving a pointer around, not a memcpy of the underlying data. This in turn means it's fine to keep references around.

To prevent you from _dropping_ a value while references still exist, Inko uses runtime reference counting. Owned values being moved around incurs no reference counting cost, but creating and dropping references does (just a regular increment for most objects, so pretty cheap). When a value is dropped, the reference count is checked, and if it's _not_ zero a runtime panic is produced, terminating the program. Refer to https://docs.inko-lang.org/manual/latest/getting-started/mem... for some additional details.
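
As a toy model of that drop-time check, in Rust terms (this is not Inko's actual runtime code; note that in this sketch safe Rust's borrow checker would normally keep the panic branch from ever being reachable, whereas in Inko the reference isn't lifetime-checked, so it genuinely is):

    use std::cell::Cell;

    // Toy model: an owned value that counts live borrows.
    struct Owned<T> {
        value: T,
        refs: Cell<usize>, // plain, non-atomic count
    }

    impl<T> Owned<T> {
        fn new(value: T) -> Self {
            Owned { value, refs: Cell::new(0) }
        }

        // Creating a reference is just `count += 1`.
        fn borrow(&self) -> Guard<'_, T> {
            self.refs.set(self.refs.get() + 1);
            Guard { owner: self }
        }
    }

    impl<T> Drop for Owned<T> {
        fn drop(&mut self) {
            // The drop-time check: surviving references would dangle,
            // so panic instead of freeing the memory.
            if self.refs.get() != 0 {
                panic!("dropped a value that still has references");
            }
        }
    }

    // Dropping a reference is just `count -= 1`.
    struct Guard<'a, T> {
        owner: &'a Owned<T>,
    }

    impl<'a, T> Drop for Guard<'a, T> {
        fn drop(&mut self) {
            self.owner.refs.set(self.owner.refs.get() - 1);
        }
    }

    fn main() {
        let owned = Owned::new(42);
        {
            let r = owned.borrow(); // count = 1
            println!("{}", r.owner.value);
        } // guard dropped, count back to 0
    } // owner dropped with zero refs: no panic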

This setup is perfectly memory safe and sound, though in its current form the debugging experience for such errors (which are surprisingly rare, but that might just be me) is a bit painful; something I want to improve over time.

My long-term vision is to start adding more compile-time checks, such that maybe 80% of the cases where a reference outlives its owned value are detected at compile time. For the remaining 20% or so, the runtime fallback would be used.

In theory this should provide a good balance and only require a fraction of the mental cost associated with Rust. Whether that will work out remains to be seen :)


> To prevent you from _dropping_ a value while references still exist, Inko uses runtime reference counting.

> Inko doesn't rely on garbage collection to manage memory.

It sounds like Inko is in fact garbage collected? I have no problem with a refcounted language, it's totally reasonable, but reference counting is garbage collection. Am I misunderstanding something here?


The reference counts are used to prevent dropping a value that still has references to it, but it doesn't dictate _when_ that drop takes place. Instead, an owned value is dropped as soon as it goes out of scope, just like Rust.

So no, Inko isn't garbage collected :)


Sounds like the worst of both worlds. You have the overhead of reference counting and you still have to fight a borrow checker?


The cost of reference counting is only present when creating and destroying aliases; moving owned values around incurs no cost. That alone significantly reduces the overhead.

In addition, for objects other than String, Channel and processes, the count is just a regular increment (i.e. `count += 1`), not an atomic increment.

It's true that this can have an impact on caches, as the count increment is a write to the object, and so over time I hope to add optimizations to remove redundant increments whenever possible.

As for the borrow checker: Inko doesn't have one, at least not like Rust's (i.e. it's perfectly fine to both mutably and immutably borrow a value at the same time), so that cost isn't present.


I'm not sure I understand but I'll have to just look into it.


Colloquially, "garbage collection" typically refers to non-deterministic automatic memory management (and/or stop-the-world), whereas ref-counting is typically considered deterministic

Not really correct in an academic sense, but this isn't the only language I've seen talk about ref-counting as something other than garbage collection


In common usage of the terms, "reference counting" and "garbage collection" are completely different.


Well, they shouldn't be.


The first bold claim I see on the page is:

Deterministic automatic memory management

but actually it looks like it's neither deterministic (refcounts!) nor actually memory management (deallocating memory can randomly crash the program?! this is even worse than C, where e.g. use-after-free can crash a program, but then you're doing the wrong thing!)


The reference counts don't dictate when memory is released, that happens when an owned value goes out of scope, just as is the case for Rust. The reference counts are merely used as a form of correctness checking. The result is that allocations, destructors, and deallocations are perfectly deterministic.

Deallocating memory itself doesn't crash the program either; rather, it's a check performed _before_ doing so (though that's mostly pedantry). This strictly _is_ better than C, because if the program kept running you'd trigger undefined behaviour, and all sorts of nasty things could happen.

If you're familiar with Rust, this idea is somewhat similar to Rust's RefCell type, which lets you defer borrow checking to the runtime, at the cost of potentially triggering a panic if you try to mutably borrow the cell's contents when another borrow already exists.
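
For those who haven't used it, a minimal RefCell demonstration of a check moved from compile time to runtime:

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(5);
        let shared = cell.borrow(); // runtime-tracked shared borrow

        // This compiles, but panics at runtime with "already borrowed":
        // the aliasing rule is enforced when the borrow happens, not at
        // compile time.
        let exclusive = cell.borrow_mut();

        drop(exclusive);
        drop(shared);
    }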

You can also find some backstory on the idea Inko uses from this 2006 paper (mirrored by Inko as the original source is no longer available): https://inko-lang.org/papers/ownership.pdf


I believe Swift does something similar.


How is this worse than C? In C the program might not even crash and instead have a remote code execution vulnerability.


How does this work for collection types (e.g. a dynamic array) that can reallocate after growing, if there are multiple references, with at least one being mutable?

Is there an extra pointer hop compared to, for example, Rust's Vec? I.e. is the value on the stack a pointer to some heap data that has a pointer to the actual array data?


Arrays store just the pointers to their values; storing them inline isn't supported. Like so:

    ptr to array  --> array
                      header
                      size
                      capacity
                      [
                        val1 ptr --> val1
                        val2 ptr --> val2
                      ]
At some point I'd like to support stack allocated values as an optimization, including the ability to allocate values directly into arrays, but that's not a priority at this point.


Hmm, I wasn't talking about the values in the array (whether those are pointers or Int or whatever doesn't matter), but about the allocation of the array data itself.

If the collection has capacity=2 (2 words) and another element is pushed in, typically you'd double the capacity, allocate new memory (4 words), copy the data over, and deallocate the old data.

If the square brackets in your diagram actually represent another pointer then I think we're on the same page, but otherwise I don't see how the data could be allocated in the same chunk as the header/size/capacity if there can be multiple (potentially mutable) references.

(hopefully the formatting on this works)

    ptr -> header
           size
           capacity
           data ptr -> [ val1
                         val2 ]


Ah gotcha. Yes, the actual layout of arrays is:

    header
    size
    capacity
    data ptr
Where `data ptr` is a raw pointer to data that is realloc'd as the array grows.
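
In Rust-style notation (hypothetical field names, not Inko's actual runtime definitions), that's roughly:

    // Hypothetical sketch of the layout described above.
    #[repr(C)]
    struct Header {
        type_id: usize, // type information etc.
    }

    #[repr(C)]
    struct InkoArray {
        header: Header,
        size: usize,        // elements currently in use
        capacity: usize,    // allocated slots
        data: *mut *mut u8, // realloc'd buffer of pointers to values
    }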


It looks like there’s a little information here: https://docs.inko-lang.org/manual/latest/internals/compiler/...


If the moved value is still not allowed to be read or mutated while borrowed elsewhere, I don't trivially see the issue.


But how could you statically check that?


I don't see how it would be different if the owner changes, as long as the borrow takes precedence.


I assumed reference counting and locking would be employed transparently as needed.


Seems like a very stripped down Rust.


Yeah that is my read as well, and for a ton of use cases that's exactly what I want. Very exciting.


Excellent!


Precisely what we need


Funny, in my language inko (Telugu) means "another". So it's yet another programming language.


Inko is Japanese for 'parrot'. Although it can also mean 'obscenity', which came to mind...


Sadly, another language with concurrency support that fails to learn the lessons of occam: https://en.wikipedia.org/wiki/Occam_(programming_language).

Erlang, Go, Rust, Raku and now Inko all missed the point: (1) synchronous channels guarantee correctness, so (2) the same code can be run concurrently (i.e. lightweight processes on one CPU) or in parallel, mapped to multiple non-shared-memory CPUs.


Go channels are synchronous by default.


yes - but ... in the occam model that is not safe so it is not allowed


> Sadly another language with concurrency support that fails to learn the lessons from occam.

Weren't the top two lessons learned from occam about indeterminacy?

1. If you ignore indeterminacy, you haven't really tackled concurrency, so you become irrelevant. CSP 1 ignored it, so occam 1 and 2 were designed to ignore it. While they became irrelevant for several reasons, I've long thought it was the fatal mistake of ignoring indeterminacy that made the demise of the original occam series inevitable.

2. If you tackle indeterminacy, but not its deeper consequences, you remain irrelevant. occam-π, which added indeterminacy constructs as an afterthought, has the problem that it's tackled it from the outside in.

During the 2019 concurrency talk panel that brought together three concurrency giants -- the late Joe Armstrong (Erlang), Sir Tony Hoare (CSP / occam), and the late Carl Hewitt (Actor Model) -- Tony said:

> The test for capturing the essence of concurrency is that you can use the same language for design of hardware and software, because the interface between those will become fluid. We've got to have a hierarchical design philosophy in which you can program each individual ten nanoseconds at the same time as you program over a 10 year time. And sequentiality and concurrency enter in to both those scales. So bridging the scale of granularity and of time and space is what every application has to do. The language can help them do that and that's a real criterion to designing the language.

Both Joe and Carl instantly agreed about what Tony said but also instantly disagreed about the central role of indeterminacy, and one could be forgiven for thinking Tony still hasn't learned the lesson of the mistake he made with CSP 1 and occam 1.

Erlang's "let it crash" concept distilled Joe's fundamental understanding of the nature of the physical universe, and how to write correct code given the inescapable uncertainty principle aka inescapable indeterminacy.

The Actor Model, which is a simple, purely mathematical model of purely physical computation, ironically the right theoretical grounding if you apply "Occam's Razor" to concurrent computation in our physical universe, contrasts sharply with the more "airy fairy" process calculi, which abstract processes as if one can truly ignore that, for such calculi to be useful in reality, the processes they describe must occur in reality -- and then indeterminacy becomes the key characteristic to confront, not an afterthought.

At least, that's my understanding.

I recall loving occam when I first read about it in the mid 1980s, partly because I was writing BCPL for a living at the time and occam's syntax was based on BCPL's, but also because I was getting interested in concurrency and fell in love with the Transputer, which was created by Inmos, a Bristol, UK outfit; I lived in the UK, having spent a couple of years a few miles from Bristol.

But one can't deny reality, and the laws of physics, and the growing complexity of software, and the consequences of those two fundamentals, so eventually I concluded the Actor Model was going to outlive CSP and occam. I still think that, but am ever open to being persuaded of the error of my ways of thinking...



