Because nixos.wiki was the primary source until very recently. Then wiki.nixos.org was introduced, and mass edits were made (without the consent of nixos.wiki's owner) to redirect people from nixos.wiki to wiki.nixos.org, and now both coexist.
Reminds me a bit of the archlinux.fr vs wiki.archlinux.org French wiki situation from 10 years ago.
This is France; stop expecting the legal system to work like in the US. Dude was arrested on a /Sunday/, and we have an offence called violation of the secrecy of the investigation (yeah, it's often not respected, but still).
He is presumed innocent, and if there is a trial, it will be public.
Secrecy of the investigation is sadly less and less respected, as regular papers turn into Anglo-style tabloids and professional social media whiners think everyone owes them information.
Phones can also transmit beacons. Same as laptops, actually: some of them constantly broadcast BLE beacons.
The signature is based on physical-layer characteristics, according to the linked paper: “They result in two measurable metrics in BLE and WiFi transmissions: Carrier Frequency Offset (CFO) and I/Q imperfections, specifically: I/Q offset and I/Q imbalance.”
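For the curious, here's a rough Python sketch (my own illustration, not from the paper) of how CFO and I/Q offset could be estimated from complex baseband samples captured with an SDR:

  import numpy as np

  def estimate_cfo(samples, sample_rate):
      # A carrier frequency offset shows up as a constant phase rotation
      # between consecutive samples of a known, nominally constant-frequency
      # segment (e.g. the packet preamble).
      phase_step = np.angle(np.mean(samples[1:] * np.conj(samples[:-1])))
      return phase_step * sample_rate / (2 * np.pi)  # Hz

  def estimate_iq_offset(samples):
      # A DC offset in the transmitter's I/Q mixer leaks the carrier,
      # which shows up as a nonzero mean of the complex samples.
      return complex(np.mean(samples))

Both quantities come from analog imperfections that vary from chip to chip but stay stable for a given chip, which is what makes them usable as a fingerprint.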
Yes, you would probably need an SDR to do this tracking, but SDRs are generally not that expensive.
> Phrases were designed to mimic the writing style of elementary school students, including typical spelling errors observed at that age (Quinn, 2020).
Well, the paper is from a university in Italy. The fact they're writing research papers in English is a benefit to English speakers everywhere, including us :)
In reality, the papers' authors probably understood that the OpenAI team artificially readjusted these biases through RLHF and that there is nothing to find there, except that it still works when the words are written with typos, because no manual “bias redressing” examples with such typos were provided.
> this looks ace, so a serious question: is it widely used?
I know a French cloud provider that uses it, and… I'm not sure they're happy with it. Issues with YAQL, a lot of weird bugs. They mentioned Temporal looking much cleaner.
For those wondering: keep in mind that the reproducibility of Nix / NixOS / Nixpkgs is only reproducibility of the sources: if the sources change, you are warned, but that says nothing about reproducibility of the binaries (which can change with each build). Binary reproducibility of Nix / NixOS / Nixpkgs is indeed not really tested, at least not systematically.
Guix, Arch Linux, and Debian do binary reproducibility better than Nix / NixOS / Nixpkgs.
To emphasize chpatrick's point below, there are two definitions of "reproducibility" in this context:
* Input reproducibility, meaning "perfect cache invalidation for inputs". Nix and Guix do this perfectly by design (which sometimes leads to too many rebuilds). This is not on the radar for Debian and Arch Linux, which handle the rebuild problem ("which packages should I rebuild if a particular source file is updated?") on an ad-hoc basis by triggering manual rebuilds.
* Output reproducibility, meaning "the build process is deterministic and will always produce the same binary". This is the topic of the OP. Nix builds packages in a sandbox, which helps but is not a silver bullet. Nix is in the same boat as Debian and Arch Linux here; indeed, distros frequently upstream patches to increase reproducibility and benefit all the other distros. In this context, https://reproducible.nixos.org is the analogue of the other links you posted, and I agree Nix reports aren't as detailed (which does not mean binary reproducibility is worse on Nix). A toy sketch of the distinction follows below.
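To make the distinction concrete, here's a toy Python sketch (the names and hashing scheme are mine, a big simplification of what Nix actually does):

  import hashlib

  def input_address(build_script, sources, dep_hashes):
      # Input reproducibility: the cache key covers everything that goes
      # into the build, so any change to an input forces a rebuild and
      # identical inputs hit the cache. Nix/Guix get this by construction.
      h = hashlib.sha256()
      h.update(build_script)
      h.update(sources)
      for dep in sorted(dep_hashes):
          h.update(dep.encode())
      return h.hexdigest()

  def output_address(artifact):
      # Output reproducibility: two independent builds from the same inputs
      # yield artifacts with the same hash. This only holds if every build
      # step is deterministic (no timestamps, stable file ordering, ...).
      return hashlib.sha256(artifact).hexdigest()

Nix guarantees that equal input addresses mean identical build inputs; whether two builds then also agree on the output address is exactly what https://reproducible.nixos.org checks.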
Your comment can be misinterpreted as saying "Nix does not do binary reproducibility very well, just input reproducibility", which is false. That's the whole point of the milestone being celebrated here!
> Your comment can be misinterpreted as saying "Nix does not do binary reproducibility very well, just input reproducibility", which is false.
It's only "false" insofar as nobody has actually tried to rebuild the entire nixpkgs package repository, which to my knowledge remains an open problem nobody has really worked on.
The current result is "only" ~800 packages and the set has regular regressions.
I am probably misunderstanding your point, BUT I have actually depended on Nix for reproducible Docker images in a confidential-compute use case, so that all parties can independently verify the workload image hash. It rarely failed to produce bit-identical images (actually only once); every other time it successfully produced bit-identical images on very different machine setups. Granted, this is Docker images rather than ISOs, but I would say Nix does produce reproducible builds for many complex real-world uses.
I'm very sure you are actually just rebuilding the container images themselves, not the package tree you are depending on. Building reproducible ISOs, or container images, with a package repository as a base isn't particularly hard these days.
It's about bit-by-bit reproducibility not just of the binaries but also of how they get packed into an ISO (i.e. r13y.com is outdated; as far as I remember, the missing <1% was an _upstream_ Python regression, as reproducibility of the binaries themselves, ignoring the packaging into an ISO, was already achieved a few years ago).
When it comes to packages beyond the core ISO, things become complicated to compare, due to the subtle but (in this regard) significant differences in how the distros handle packages: e.g. a bunch of packages you would find on Arch in the AUR are normal packages in nixpkgs, and most of the -bin upstream packages are simply not needed with Nix.
In general, Nix makes it easier to create reproducible builds, but (independent of Nix) that doesn't mean it's always possible, and it often needs patching, which is often but not always done. Combine this with the default package repository of Nix being much larger (>80k) than e.g. Arch's (<15k non-AUR), and comparing percentages isn't very useful.
One very common misconception, though, is that the hash in the Nix store path is based on the build output. It is instead based on all the sources (whether binary or not) used to build it in an isolated environment.
This means it doesn't have quite the security benefit some people might think, but in turn that's necessary, as it lets Nix use software whose build is not reproducible while still producing reasonably reproducible deployments (as in: not necessarily all bits the same, but all functionality, compiler configs, dependency versions, users, configuration, etc. being the same).
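A toy illustration of that point (the path format is heavily simplified; the real scheme hashes the derivation and uses truncated base32):

  import hashlib

  def store_path(derivation, name):
      # The hash comes from the *inputs* (sources, dependencies, build
      # script, serialized into the derivation), never from the output.
      digest = hashlib.sha256(derivation).hexdigest()[:32]
      return "/nix/store/%s-%s" % (digest, name)

Two builds of the same derivation land at the same path even if the produced binaries differ bit for bit, so the path alone proves nothing about binary reproducibility.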
> but it is not a question of the reproducibility of the binaries (which can change at each build). This binary reproducibility of Nix / NixOS / Nixpkgs is indeed not really tested, at least not systematically.
Isn't that exactly what your first source and OP are about? They check that the binaries are the same when built from the same sources on different machines. The point is exactly that the binaries don't change with every build.
> How are these tested?
> Each build is run twice, at different times, on different hardware running different kernels.
I think that's also a bit of an unfair comparison given the number of AUR packages you usually use on Arch. With nixpkgs there isn't a distinction between official and community packages.
I'm just saying that X% of Arch official packages being reproducible isn't a complete statistic when many day-to-day things are in the AUR, most of which are in nixpkgs, not NUR.
AUR is unsupported, and the fact that nixpkgs decides to support everything is for them to decide.
Reaching reproducible-builds support in Arch is a more attainable goal than for nixpkgs. Properly maintaining 80k packages without regressions is going to be a lot more work in the long term.
That is not true at all, with respect to the aims or the reality of nixpkgs. The original post here is talking about reproducing the (binary) minimal iso, which contains a bunch of binary packages.
It is true. The original post writes about reproducing the minimal iso, which contains probably around 1% of the packages in nixpkgs. The remaining packages are not tested regarding binary reproducibility, or, at least, not in a systematic manner, which means regressions may happen regularly (which is exactly what happened with the .iso, see the previous announcement from 2021: https://discourse.nixos.org/t/nixos-unstable-s-iso-minimal-x... .)
While I would love testing reproducibility more systematically, it would not really have helped for the Python 3.10 regression: in this case we knew full well that it would break reproducibility even before we merged the change, but the performance advantage it unlocked seemed too big to ignore. Such trade-offs are luckily rare - I'm happy that with Python 3.11 we can now have both :)
Doesn’t the content-addressed derivation experimental feature address this issue? Instead of store hashes being input-addressed as you mention, the derivation outputs are used to calculate the store hash, which ensures binary reproducibility.
Ish. This is covered in section 6.4.1 of Eelco's thesis (https://edolstra.github.io/pubs/phd-thesis.pdf). It all becomes much simpler if evaluating a build many times can only ever result in one output, but the Nix content-addressed model does permit multiple outputs. In such cases, the system just has to choose a canonical output and use that one, rewriting hashes as necessary to canonicalise inputs which are non-canonical.
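Roughly, the canonicalisation step amounts to something like this (a bare-bones sketch of the idea from the thesis, operating on raw bytes; not Nix's actual code):

  def canonicalise(store_object, noncanonical_hash, canonical_hash):
      # If a rebuild produced an equivalent-but-different output, downstream
      # objects may embed references to the non-canonical hash; rewriting
      # them makes the closure converge on one canonical output per build.
      return store_object.replace(noncanonical_hash, canonical_hash)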
With input-addressing you look things up in the store based on the input hash. You can determine the input hash yourself, but you have to trust the store to provide a response that corresponds to the sources. With Reproducible Builds you can have a third party confirm that output matches that input.
With content-addressing, you look things up in the store based on the output hash. You no longer need to trust the store here: you can check yourself that the response matches the hash. However, you now have to trust whoever told you that output hash corresponds to the input you're interested in. With Reproducible Builds you can now have a third party confirm that output hash matches that input.
I have not worked with content-addressed nix in depth yet, but my understanding is that this stores the mapping between the inputs and their output hashes in 'realizations' which are also placed in the store. Reproducible Builds will still be useful to validate this mapping is not tampered with.
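A hedged sketch of the two trust models (the function names and the dict-like store are mine, not Nix's API):

  import hashlib

  def fetch_input_addressed(store, input_hash):
      # You cannot check the store's answer locally: you must trust the
      # store (or its signature) that this artifact was really built from
      # the inputs behind input_hash.
      return store[input_hash]

  def fetch_content_addressed(store, output_hash):
      # The response is self-verifying...
      data = store[output_hash]
      assert hashlib.sha256(data).hexdigest() == output_hash
      # ...but the realisation, i.e. the claim "these inputs build to this
      # output hash", still has to come from somewhere you trust, and
      # Reproducible Builds let third parties confirm it independently.
      return data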
It is, and I assume that's what they'll end up using.
So they might end up technically meeting their target of "unmodified current devices" being able to use this, given that Samsung announced they'll be integrating 5G NTN IoT into some of their devices – but I highly doubt it would work with regular old (broadband) LTE or 5G phones.
You unplug/plug it, cold boot it, and then it works again.