Brings me back too. But what stuck in my memory was something I read here on HN: at some point, one Yahoo dev added his affiliate code to the links!!
I have used Rust a little, but this book was most useful to me when I was working on a concurrent data structure for an old C program. It’s a very good book for anyone writing low-level multi-threaded code in C or C++ as well as Rust, because they have basically the same primitives.
The only places I know where it isn’t applicable are the Linux kernel and Java, because their memory models and concurrency primitives predate and significantly differ from the Rust/C++/C models.
I guess there must be at least one book about the Java Memory Model, which is very different but fascinating? I don't know of any specific books to recommend.
For many languages there is nothing resembling this; they tend not to get into the details Mara covers. If you get a mutex and maybe atomic arithmetic, then they're done.
If you wondered about C or C++: this book covers the same content as it would for those languages, just with Rust's syntax. The discrepancy between Rust's memory model and the memory model adopted in C++11 (and subsequently C) is mostly about a feature that's not available in your C or C++ compiler and (which is why Rust doesn't have it) probably never will be.
The biggest syntax difference is that C++'s x.store(r1) compiles, while in Rust it doesn't. But chances are that after reading Mara's book you will think it's weird not to specify the Ordering you need, and you'll never use this, uh, convenience.
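A minimal sketch of that difference (the names are made up):

    use std::sync::atomic::{AtomicUsize, Ordering};

    static X: AtomicUsize = AtomicUsize::new(0);

    fn publish(r1: usize) {
        // C++: x.store(r1); compiles and silently defaults to
        // memory_order_seq_cst.
        // Rust: X.store(r1) is a compile error; the Ordering is mandatory.
        X.store(r1, Ordering::SeqCst);

        // If plain atomicity is all you need, you have to say so explicitly:
        X.store(r1, Ordering::Relaxed);
    }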
Java atomics are actually sequentially consistent. C# relaxes this to acquire/release. Though the general concept of happens-before is still immensely useful for learning atomics, as sequential consistency is a superset of acquire/release.
All of the memory models in question are based on data-race-free, which says (in essence) that as long as all cross-thread interactions follow happens-before, then you can act as if everybody is sequentially-consistent.
The original Java 5 memory model only offered sequentially-consistent atomics to establish cross-thread happens-before in a primitive way. The C++11 memory model added three more kinds of atomics: acquire/release, consume/release (which was essentially a mistake [1]), and relaxed atomics (which, to oversimplify, establish atomicity without happens-before). Pretty much every memory model since C++11--which includes the Rust memory model--has based its definition on that memory model, with most systems defaulting an otherwise unadorned atomic operation to sequentially-consistent. Even Java has retrofitted ways to get weaker atomic semantics [2].
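To make the "atomicity without happens-before" part concrete, a rough sketch (in Rust, since that's the book under discussion) of the classic use for relaxed atomics: a counter whose intermediate values never synchronize anything:

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::thread;

    static EVENTS: AtomicU64 = AtomicU64::new(0);

    fn main() {
        thread::scope(|s| {
            for _ in 0..4 {
                s.spawn(|| {
                    for _ in 0..1_000 {
                        // Relaxed: the increment is atomic (no lost updates),
                        // but it establishes no happens-before, so it must
                        // not be used to publish other data.
                        EVENTS.fetch_add(1, Ordering::Relaxed);
                    }
                });
            }
        });
        // The scope only returns after all spawned threads have joined,
        // so the final count is well-defined here.
        println!("{}", EVENTS.load(Ordering::Relaxed));
    }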
As a practical matter, most atomics could probably safely default to acquire/release over fully sequentially-consistent. The main difference between the two is that sequentially-consistent is safer if you've got multiple atomic variables in play (e.g., you're going with some fancy lockless algorithm), whereas acquire/release tends to largely be safe if there's only one atomic variable of concern (e.g., you're implementing locks of some kind).
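As a sketch of that single-variable lock case (made-up names; a real spinlock would also want a guard type):

    use std::sync::atomic::{AtomicBool, Ordering};

    pub struct SpinLock {
        locked: AtomicBool,
    }

    impl SpinLock {
        pub const fn new() -> Self {
            SpinLock { locked: AtomicBool::new(false) }
        }

        pub fn lock(&self) {
            // Acquire: once we observe the flag flip, everything the previous
            // holder wrote before its Release store is visible to us.
            while self.locked
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_err()
            {
                std::hint::spin_loop();
            }
        }

        pub fn unlock(&self) {
            // Release: publishes our writes to whoever acquires the lock next.
            // With a single atomic variable in play, SeqCst buys nothing extra.
            self.locked.store(false, Ordering::Release);
        }
    }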
[1] A consume operation is an acquire, but only for loads data-dependent on the load operation. This is supposed to represent a situation that requires no fences on any system not named Alpha, but it turns out for reasons™ that compilers cannot reliably preserve source-level data dependencies, so no compiler really implemented consume/release.
[2] Even Java 5 may have had it in sun.misc.Unsafe; I was never familiar with that API, so I don't know for certain.
> as long as all cross-thread interactions follow happens-before, then you can act as if everybody is sequentially-consistent.
I don't think that's the actual guarantee. You can enforce happens-before with just acquire/release, but AFAIK that's not enough to recover SC in the general case [1].
As far as I understand, the Data-Race-Free Sequentially Consistent memory model (DRF-SC) used by C++11 (and I think Java) says that as long as all operations on atomics are SC and the program is data-race-free, then the whole program can be proven to be sequentially consistent.
[1] but it might in some special cases, for example when all operations are mutex lock and unlock.
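The usual counterexample is store buffering; a rough sketch of what "not enough to recover SC" looks like:

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::thread;

    static X: AtomicBool = AtomicBool::new(false);
    static Y: AtomicBool = AtomicBool::new(false);

    fn main() {
        let mut a = false;
        let mut b = false;
        thread::scope(|s| {
            s.spawn(|| {
                X.store(true, Ordering::Release);
                a = Y.load(Ordering::Acquire);
            });
            s.spawn(|| {
                Y.store(true, Ordering::Release);
                b = X.load(Ordering::Acquire);
            });
        });
        // With Release/Acquire, a == false && b == false is allowed: nothing
        // orders each thread's store before its own later load of the other
        // variable. Make all four operations SeqCst and that outcome is
        // forbidden, which is what "recovering SC" means here.
        println!("a = {a}, b = {b}");
    }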
The book is good but it has a couple important drawbacks:
* it tells you how to do lock-free programming but doesn't teach you why, nor whether you should.
* it has a relatively narrow focus on linearizability, but the truth is memory is neither linearizable nor sequentially consistent. These days it is agreed that Lamport's "happens before" relationship and acquire-release are a better way to reason about multithreaded code (see the sketch below).
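For instance, the message-passing pattern is reasoned about entirely in terms of happens-before; a rough sketch (the names are made up):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::thread;

    static mut DATA: u64 = 0;
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        thread::scope(|s| {
            s.spawn(|| {
                unsafe { DATA = 42 };                  // plain, non-atomic write
                READY.store(true, Ordering::Release);  // publish it
            });
            s.spawn(|| {
                while !READY.load(Ordering::Acquire) {}
                // The acquire load that observed `true` happens-after the
                // release store, so the write to DATA is visible and this is
                // not a data race: it prints 42.
                println!("{}", unsafe { DATA });
            });
        });
    }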
I'm leading an in-house team that's developing custom software for a niche financial institution.
The original product was outsourced to a software factory; fast-forward two years and they decided the project wasn't worth their time and left.
I took on the responsibility of moving the project forward, at that point with only one other dev.
What worked great for me as a dev?
. the direct connection with the CEO, as he became a high-level PO. With this arrangement we were always sure that our work was delivering value to the business.
. I learned a LOT about the business and loved it
. I was in charge of growing the team when the amount of work justified it. I got several wonderful devs on board.
What worked great for the business?
. they decided and prioritized the direction of the (customized) product.
. they understood the strategic constraints and possibilities of their software
What was tough for all?
. at least the first full year was spent killing bugs and making the project work properly. Operations needed too much support in that initial period. Stressful times.
Starting with a pair of Senior Devs is a good choice IMHO. If you trust them, even better.
The points I mentioned above that I found great are your selling points for hiring some good developers.
Eventually there will be a "Raspberry Pi" for LLMs. How long it will take to get there is anyone's guess, but I'd rather see it sooner than later personally.