
I did an OS dev course in uni, some 15 years ago. We used Minix, which is super cleanly written (a ~5 kloc kernel) and is covered by many textbooks (sorry, I can't remember which one I used, but there's also the official one).

I got to implement a simple server (Minix is a μkernel, so that's how most drivers work) and do some kernel hacking. I read the course material in advance and didn't even attend any lectures, and still got an 8/10 grade ^^

I've also heard many good things about NetBSD and of course SerenityOS (Andreas did a lot of development on live streams).

It is indeed easy once you know where to start.


> can't remember which one I used, but there's also the official one

Andrew Tanenbaum, "Modern Operating Systems" (the Red Book)


You could probably skip some of the difficult parts if you bring in an existing bootloader that can provide a UEFI environment (that's how Linux & the BSDs boot on ARM Macs). But Serenity is all about DIY/NIH.

openSUSE does the same to boot on Raspberry Pi. Probably a few other distros do as well.

This is a noble and worthy goal in itself. Too much software development nowadays is just copying or gluing existing code. Clean-room implementations ensure we (collectively) still have the wits to rebuild and maintain the difficult parts.

It also means we burn time and energy "rediscovering" the same knowledge we failed to better preserve and communicate.

I'm all for greenfield when appropriate but I also get more quality work done standing on giants.


You may also discover a new and better way to solve an old problem.

For most things, build on the work of others, but every now and then we should check whether those "giants" actually found the best solution, so that we may change direction if we're heading down the wrong path.


The knowledge is already preserved. It's about keeping the muscles fit.

Rediscovering the same knowledge has kind of been the point of Serenity OS since the project's inception.

Danger is the nature of this kind of "sport". A friend and I were hitchhiking through Albania when we ran into a "carbage" race from Poland to Greece. Both of us had to travel in different cars (with our backpacks in a third), because otherwise the vehicles wouldn't make it uphill. We passed another team who had lost a wheel (it fell off and rolled down a hill at night - effectively irretrievably lost); they didn't have a spare, so they just sold their car for something like €25.

Fun times.


> The only exploit I can think of with this bypass is that someone who gets access to the developer's release channel can host their own signed apk, which [...] gets installed in case of first install.

That still enables a supply chain attack, which should not be dismissed - virtually all modern targeted attacks involve some complex chain of exploits; a sufficiently motivated attacker will use this.


You missed the latter part of the quote:

>But in either case, its still the same reproducible build, only the signature is different.

That means the attacker still has to compromise the source repo. If they don't, and instead try to upload a backdoored APK, it will mismatch the reproducible build and be rejected. If you can compromise the source repo, you're already screwed regardless; APK signature checks can't protect you against that.


In a previous post you said that - in case of matching builds - the dev's version is used. Why is the "dev's" version relevant? And assuming I'm correct that it isn't: what is the added benefit vs. just building from source (from a known-good state, e.g. a blessed git hash)?

Android will block any update to an existing app that wasn't signed with the same signature. The benefit of using the developer's signature (even if the app is built by F-Droid) is that the F-Droid release of the app is not treated as a "different app" by the Android OS, and thus it can be updated by other app stores or through direct APK releases from the developer. If the user chooses to stop using F-Droid in the future, they can still receive updates through other means without uninstalling and reinstalling the app.
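
If you want to check this yourself, apksigner (shipped with the Android SDK build-tools) can print each APK's signing certificate; the file names below are made up, but matching certificate digests mean Android treats both releases as the same app:

  apksigner verify --print-certs app-from-fdroid.apk
  apksigner verify --print-certs app-from-devsite.apk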

It also allows the user to place a little less trust in F-Droid, because both the developer and F-Droid must confirm any release before it can be distributed. (Now that I think of it, that probably creates an issue: if malware somehow slips in, F-Droid has no power to remove it via an automatic update. Perhaps they should have a malware response or notification system?)

More: https://f-droid.org/2023/09/03/reproducible-builds-signing-k...


>In a previous post you said that - in case of matching builds - the dev's version is used

Which post are you talking about? https://news.ycombinator.com/item?id=42592150 was made by FuturisticGoo, not me.

Also, the wording on F-Droid suggests the version that F-Droid hosts is built by them, rather than a version uploaded by the dev. If you go to any app and check the download section, it says

> It is built by F-Droid and guaranteed to correspond to this source tarball.


> "pgp is too complex"

PGP is too complex. I've known my way around the command line since before I learned to hand-write, and I still have to look up the commands to fetch the keys and/or verify the blob every single time. Keyservers regularly fail to respond. There's no desktop integration to speak of. The entire UX stinks of XKCD 196.

Don't blame the CIA for obvious deficiencies in usability.


I was with you right up until the end. I think the only thing that would stop me from sabotaging a small project like PGP (as it was in the early days) is moral aversion. The FOSS and academic circles where these things originate are generally friendly and open, and there is plenty of money (and length of rubber hose) for anyone who doesn't welcome the mole into their project.

I'm not saying I have evidence that this happened to PGP specifically, just that it doesn't seem at all implausible. If the CIA told me my code was never to get too easy to use, but otherwise I could live a long and happy life and maybe get a couple of government contracts, it would be hard to argue.

The fact that a mass-market interface never took off (GPG and other descendants notwithstanding) may indicate that the whole cryptographic idea is inherently not amenable to user-friendliness, but I don't find that hypothesis as compelling.

(It could also be an unlikely coincidence that there's a good solution not found for lack of looking, but that's even less plausible to me.)


Then why are no such efforts being pursued for PGP (GPG) nowadays?

signify[1] is approachable, at least for power users - I could print out that man page on a T-shirt. HTTPS is ubiquitous and easy, thanks to ACME & Let's Encrypt. E2EE with optional identity verification is offered in mainstream chat apps.
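
For comparison, the entire verification flow is one memorable command (file and key names below are invented for the example):

  # verify release.tgz against release.tgz.sig using the publisher's key
  signify -V -p publisher.pub -m release.tgz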

And of course there are usability improvements to GPG being made by third parties: Debian introduced package verification a couple of decades ago, GitHub does commit verification, etc. What's to stop e.g. Nautilus or Dolphin from introducing similar features?

[1]: https://man.openbsd.org/signify


> Then why no such efforts are being pursued for PGP(GPG) nowadays?

I wonder why there aren't more, but there are some, for example Proton's efforts towards encrypted email.

https://proton.me/support/how-to-use-pgp

(I won't mention the relative shortcomings of HTTPS and E2E chat apps here.)


You'd think if the CIA didn't want it to happen, then somebody somewhere else would make it, though. It's not like the CIA and the FSB would collude - they serve different oligarchs.

For what purpose? Setting up PGP signing and encryption for emails in Thunderbird is dead simple. If only I knew anyone else willing to use it!

I think you are right that the UI sucks in many cases, but I think it's not intrinsic to PGP - it's fixable.


I agree, but not with the "fixable" part.

Encrypted email is near useless. The metadata (subject, participants, etc.) is unencrypted, and often as important as the content itself. There are no ephemeral keys, because the protocol doesn't support them (it's crudely bolted on top of SMTP and optionally MIME). Key exchange is manual and a nuisance few will bother with, and only the most dedicated will rotate their keys regularly. It leaves key custody/management to the user: if there was anything good about the cryptocurrency bubble, it's that it proved that this is NOT something you can trust an average person with.

Signed email is also hard to use securely: unless the sender bothered to re-include all relevant metadata in the message body, someone else can just copy-paste the message content and use it out of context (as long as they can fake the sender header). It's also trivial to mount an invisible salamanders attack (the server needs to cooperate).

The gold standard of E2EE UX is Signal, iMessage, and WhatsApp: all the details of signing and encryption are invisible. Anything less is insecure, because if security is optional or difficult, people will gravitate towards the easy path.

The only use-case I have for PGP is verifying the integrity of downloads, but with ubiquitous HTTPS it's just easier to run sha256sum and trust the hash that was published on the website. The chain of trust is more complicated and centralised (involves CAs and browser vendors), but the UX is simpler, and therefore it does a better job.
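
E.g., assuming the site publishes a SHA-256 digest next to the download (placeholder digest and file name below):

  # sha256sum -c expects "<digest><two spaces><file name>"
  echo "<published-sha256>  image.iso" | sha256sum -c -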


I know more people who use terminal user interfaces for email than I know people who use Thunderbird, and I say that as a techie.

The UI still sucks, though, because people ask me what the .ASC attachments sent with all of my emails are and if I've been hacked. When I explain that's for encryption, they may ask how to set that up on their phones if they care, but most of them just look at me funny.

I do use email encryption at my job, through S/MIME, and that works fine. Encryption doesn't need terrible UI, but PGP needs support from major apps (including webmail) for it to gain any traction beyond reporting bug bounties.


Yes, but making sure you can still read your encrypted emails after something goes wrong with your setup and you have to reinstall is already harder. How PGP integrates with a system is not trivial to understand.

if only everyone used my preferred set of tools (thunderbird + pgp)...

So it's their fault that every other toolmaker refuses to provide the facilities at the same level of simplicity? They gave an example to show it was possible; it doesn't mean their example was the only way - other developers decided that the public was too dumb to use those kinds of tools.

> I have to look up the commands to fetch the keys and/or verify the blob every single time.

I have no doubt that this is true, but I very much question whether any alternate UX would solve this problem for you, because the arguments for these two tasks are given very obvious names: `gpg --receive-keys <keyIDs>` and `gpg --verify <sigfile>`. There's no real way to make it easier than that, you just have to use it more.

The tool also accepts abbreviations of commands to make things easier, i.e. you could also just blindly type `gpg --receive <keyID>` and it would just work.
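
Put together, the whole ritual for a typical release is just (key ID and file names below are made-up examples):

  # fetch the signer's key, then check the detached signature
  gpg --receive-keys 0xABCDEF0123456789
  gpg --verify foo-1.0.tar.gz.sig foo-1.0.tar.gz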


> There's no real way to make it easier than that

If we accept that the world has moved to webmail, and we use a GUI client, then the way to make it easier is to bake it into the client and make it seamless, so there's no manual futzing with anything. Make it like TLS certs: a padlock icon for encrypted mail, yellow for insecure, and a big red warning for mail that fails validation.

Unfortunately, purists in the community could not accept that, so it never happened, and GPG failed to get critical mass before alternatives popped up.



PGP is only complex because there was a jail sentence waiting for anyone willing to discuss or improve it at the crucial start time. Go learn history and rethink your argument.

With that stigma, no company invested in that entire space for decades! We are still gluing together scraps from Canadian PhDs when it comes to PGP UX.

Now that crypto is cool you will get passkeys, which are the obvious evolution of the "URL padlock": either the login button is enabled or not. Don't question what's happening behind the curtain.

...the fact that this entire comment thread is mixing up my loose points about the URL padlock (consequence) with the CIA's actions on PGP (cause)... sigh. I won't bother anymore. Enjoy the bliss.


> cross platform/web/mobile distribution

Unfortunately, each and every platform has its own unique quirks[^], and I think the problem is too large for a project like LÖVE to tackle.

In fact, LÖVE's authors have already done "90% of the work in 10% of the code": you can make a ZIP file, change the extension to .love, and just distribute that. Install LÖVE, double-click YourGame.love, and you're done. You can also concatenate the game executable with the ZIP file to create a self-contained binary distribution for that target platform (a super old & cute trick; it's possible because ZIP's "header" is, in fact, at the end of the archive).
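
The concatenation step itself is a one-liner (file names below are examples; this mirrors what LÖVE's own documentation suggests):

  # on Windows (cmd.exe): copy /b love.exe+YourGame.love YourGame.exe
  # on Linux/BSD:
  cat love YourGame.love > YourGame && chmod +x YourGame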

[^]: In 2016 I attempted a "one game per month" challenge, but I also wanted to distribute executables for Windows, macOS, and Linux, so that all of my friends could just play them. It went something like this:

- Windows: distributing executables from your personal domain is considered a crime; the browser and the OS will yell at the user, block the download, blacklist you, etc.

- macOS: simply concatenating won't work; you need an elaborate dance to create a .app bundle, and distribute that in a .dmg - good luck building that on a Linux host, also good luck debugging without an actual Mac. Nevermind code signing, you can bypass that with shift-right-click-open. Oh wait, macOS 15 won't let you.

- Linux: are you ready for the .so hell?

- BSD: bonus! The documentation for SDL_GetPlatform doesn't even enumerate all of the possible values! You need to source-dive to get them: https://github.com/libsdl-org/SDL/blob/f45ce45603a00e92332f9... - plus any possible patches in ports; most developers won't ever care, and will just hardcode "Linux". Checking for $DISPLAY is also unreliable: macOS users may have XQuartz installed, Wayland purists may run without XWayland, etc.


DragonRuby did it: https://dragonruby.org/

And many other frameworks did it too. LÖVE decided it's out of scope, and that's a 100% valid choice, considering the breadth and complexity of the problem. They still went quite far in ensuring it's no more difficult than it actually needs to be. It's similar with e.g. cross-compiling Go programs (unfortunately, no cgo-less, cross-platform framework for interacting with windowing systems/video hardware exists).

Yes, but most other engines have a much larger scope and a fundamentally different purpose than LÖVE. DragonRuby, however, is in the same camp: code-first, scripting language (Ruby vs Lua), beginner-to-expert friendly. So my point was that there is at least one project very similar in purpose and size/scope to LÖVE that did solve cross-platform distribution.

I really wish that 1% of the sustained effort that is currently being devoted to building terminal emulators and TUI applications was collectively redirected at researching a better future to fill this niche.

Terminal emulators and TUI applications belong with the DMG Game Boys, Commodore 64s, and DOSBoxes: as historical curiosities and something for enthusiasts to enjoy after hours. But we've come to rely on them almost to the point where they're being fetishised; most of us using this technology nowadays aren't even questioning its negative impact.

We need better command line / REPL environments. We need better cross-platform GUI toolkits. We need better remote access. And we need to let go of the TV teletype.


It's an X11/systemd/IPv4 problem. People agreeing there is a problem doesn't mean that any single solution will be accepted. With terminals, you have a system that works with everything from HPCs to serial devices. An alternative would probably need to drop support for something, meaning congratulations: the number of standards has increased by 1 - probably 2 by the time the people unsupported by the modern alternative decide that they too would like something new.

The current effort can at least be pointed towards something.


I don't think we need a new standard (the web is already a standard, and PWAs/Electron build on top of it), we simply need better incentives/trade-offs for people who would like to write portable software, but are stuck between TUI and a hard place.

I'm weirdly attracted to Tk (with Tcl or Tkinter). It's not shiny, but it's very practical.


I think you're both right: we need standards, and we'll get them, but with fractures. At the same time, we could try harder to reduce the incidence of those fractures, because at the end of the day this is all a social problem, and social problems can only be fixed by society coming together.

Have you heard of Arcan? https://arcan-fe.com

Yes, I've been following it for a while - it's exactly the kind of research that I had on my mind. I don't necessarily agree with some of the design choices (I don't think we need a middle layer underneath VT100 emulation to make TUI apps, TUI is IMHO just terrible as a concept to begin with), but SHMIF has the mark of a battle-tested protocol, and many other concepts are very interesting.

They treat CLIs as a subclass of TUIs. Underneath it all it is just a high-level decoupling API to replace ncurses/readline/.... The output becomes a window with a packed-text buffer format. You can mix and match graphical windows with TUI(CLI) ones in the same client.

I appreciate the commitment it has to backwards compatibility - the TUI apps are better by virtue of running on top of Arcan's layer, with no changes.

1% of the sustained effort being spent on terminal emulators would be like a third of an engineer, so I have some great news for you.

We're living in a world of frivolous take-down demands, where the law says shoot first, ask questions later, and all the power is concentrated with the copyright mafia. (How meta.)

GPL does not restrict how you can use the program (freedom #0), unless (as is the case with lexer/parser generators) the output contains a part of the program's own source - e.g. GNU Bison's license has an explicit exception to allow that.

In case of Minecraft, you do have Redstone and Command Blocks at your disposal, and people have built things like an 8086 or Tetris. I wonder if there's an edge case with those WG tools, where the generated world becomes GPL-contaminated.

(Personally I'm a fan of BSD/MIT licenses, for those kinds of reasons - I'd rather not have anyone pay a lawyer to answer a simple question like, "can I actually use this program?")

