This NSA report is a wonderfully thorough guide to configuring UEFI Secure Boot, although it is another example of how unusable security tools can be. This complexity was my motivation for writing the safeboot[1] scripts, which wrap all of the signing key management, TPM key sealing, and attestation into a hopefully easier-to-use package.
(Additional plug) Portions of which are used to enable github.com/google/go-attestation, which aims to make verifying system boot state in a heterogeneous environment practical and is now being actively used inside Google.
I've been following safeboot for a while and it looks really, really cool!
One concern I've had regarding a read-only root file system – an idea that I really like! – is how cumbersome software updates (say, through apt) and quick config changes (in /etc) must be. AFAIU I'd have to manually sign a new rootfs image every single time[0], which looks rather painful to me. I wish Linux distributions provided a clear separation between user-facing software & configs and system-internal stuff that one hardly ever has to touch: IMHO software & configs should by default get installed on a per-user basis and not require root. (And applications should also be sandboxed by default but I'm digressing…)
In practice it is a bit of a pain during the initial setup and package installation -- I was probably rebooting to recovery mode once a day or more to install some command line tool that I had forgotten about. Once the machine is configured it becomes a much rarer occurrence, although your usage might vary.
Something that I'm not happy about is that the snaps all live on the writable /var since they want to do automatic updates all the time. This is problematic for a locked-down configuration and might argue against a snap-based distribution.
Separating out the bootable bits from the rest of the packages might help, as would running more things in sandboxes. Another option that we're exploring is some lvm magic to create a snapshot, upgrade the snapshot, sign it, and then on the next reboot use it as the real root. This is also useful for fleet management -- the new root filesystem, kernel, initrd, etc can arrive "behind the scenes" and on the next reboot is the one that is used. Since the PCRs can be predicted as well, the PCR policy can be signed and sent along with the upgrade to make it seamless.
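Roughly, the flow we're experimenting with looks like this (the volume group/LV names are made up, and the "sign it" step is sketched here with dm-verity as just one possibility):

    # snapshot the current read-only root and upgrade the copy offline
    lvcreate --snapshot --size 10G --name root_next /dev/vg0/root
    mount /dev/vg0/root_next /mnt
    chroot /mnt apt-get upgrade -y    # in practice with /proc, /dev, etc bind-mounted first
    umount /mnt
    # regenerate the integrity data for the upgraded volume
    veritysetup format /dev/vg0/root_next /dev/vg0/root_next_hashes
    # on the next boot the signed kernel cmdline points at root_next instead of root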
> Another option that we're exploring is some lvm magic to create a snapshot, upgrade the snapshot, sign it, and then on the next reboot use it as the real root.
This sounds very nice and similar to Android's A/B partitions!
> Since the PCRs can be predicted as well
This may be a stupid question but… what are PCRs? Google yields "polymerase chain reaction" – a method used, among others, for detecting the coronavirus but I'm sure that's not it. :)
It's amazing that you disregard the most basic attack in your threat model (https://safeboot.dev/threats/): going after the TPM itself. TPMs are usually FIPS 140-2 L2: not something that's meant to be hardened against even basic hardware attacks.
IMHO the TPM should be a required piece but not the only piece of the puzzle. If I lose my laptop, I don't want the goods to be protected exclusively by a key that's trivial to recover from it (stored in something that's not a secure element).
tl;dr: Use the TPM (and potentially other technologies like SGX) as part of your KDF to strengthen the PIN/passphrase that the user provides. This breaks the asymmetry of offline attacks (the attacker will always be bound by TPM/SGX speed). Do NOT give it the only key required to decrypt your data.
Some forms of TPM tampering are explicitly addressed in the threat model:
> The PCR values in the TPM are not "secret", so an adversary with physical access could directly wire to the TPM and provide it with the correct measurements to extend the PCRs to match the signed values. The user PIN is still necessary to unseal the secret and the TPM dictionary attack protections both rate-limit and retry-limit the attacker.
Decapping chips to recover secrets is outside of the threat model, however.
What I'm arguing in my talk is that it shouldn't be. Odds are your phone does it better :)
Decapping a chip from a lost laptop is far from science fiction and can be performed at a fixed cost. Mitigation is super-cheap... There's just no good reason to store the "final" key on the TPM.
Here I interleave rounds of argon2id (configured with parameters that fit my system: use up all the RAM and all the cores since there's nothing else to do in the initrd) with HMAC rounds from TPM and/or SGX (configured with the right policies so that they rate-limit and only unlock if the PCRs check out).
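In shell terms the loop looks something like this -- a minimal sketch, where the persistent HMAC-key handle 0x81000001 is a placeholder for a key created and made persistent beforehand, and the argon2 parameters are illustrative:

    # memory-hard round on the host CPU (argon2id, ~1 GiB, 4 lanes)
    KEY=$(printf '%s' "$PIN" | argon2 "$SALT" -id -t 8 -m 20 -p 4 -r)
    for i in 1 2 3; do
        # rate-limited round inside the TPM: HMAC with a key that never leaves the chip
        printf '%s' "$KEY" | tpm2_hmac -c 0x81000001 -o /tmp/round.bin
        # feed the TPM's answer back into another memory-hard round
        KEY=$(xxd -p -c 256 /tmp/round.bin | argon2 "$SALT$i" -id -t 8 -m 20 -p 4 -r)
    done
    # KEY then unlocks the LUKS volume, e.g. piped to cryptsetup --key-file=-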
Surely TPM is only useful for getting to a trusted point/input for the key that encrypts the drive?
If an attacker can get hold of the hw long enough to get into the TPM they could just copy the encrypted drive, and replace the laptop entirely - the only thing needed would be to ship the typed pass-phrase home on next login?
I suppose ideally there'd be some kind of challenge-response to verify the TPM (very naive version - type in a wrong pin/pw first - if it's accepted you know the system is compromised..).
But, assuming the attacker can replace the whole system - I'm not sure I see how it could be trusted fully, assuming it's not under 24/7 watch (and even then, it could of course be compromised, but shifting the attack toward eg bribery, betrayal, neglect etc).
It is very much news to me that key-recovery from the TPM is not supposed to be that hard.
If that is the case, how does it strengthen a PIN? Any attacker wanting to find a decryption key could simply extract the key, and then brute-force the PIN outside of the TPM constraints, can't they?
It definitely does when there is no attempt made at protecting against it. L2 means "tamper evidence"; you need L3 before things start to be designed to prevent it from being "basic".
SGX is L3, you'll be hard pressed to find a TPM that does better than L2.
Phrasing it another way: Even if you don't have the skills/equipment to do it. How much do you think it costs to get someone to do it for you? How reproducible is that process? Why are we assuming it's hard?
I'm pretty sure their benchmark for hard is needing to take the device apart and do surgery on it. To me, that is what I would qualify as hard despite knowing how to do it; it's hard by way of being annoying.
However, regarding cryptography, the NSA's cryptographic expertise and resources are secret, so it's very hard to include them in a threat model.
They could know more than civilian cryptographers, have new direct attacks that we don't know yet, e.g. algebraic attacks and specialized hardware to solve gigantic systems of equations. Or, they could have a working quantum computer with many qubits. We don't know, do we?
My threat models for my clients use a state sponsored APT, and generic SIGINT and HUMINT agencies all the time. The idea being that SIG agency does passive interception and traffic analysis, where the HUM agency does targeted collection, and the APT is opportunistic zero day.
It's not just the NSA, it's literally everyone else as a class of threat they might need to consider. Also, I use opposition researchers as threats for politically exposed people, and who cross over into foreign spy level stuff.
The controls it prescribes are straightforward, and realistically, it's a risk you just understand, do your best to mitigate, and accept. If you are going to not do business because you are afraid of state level consequences, you've got a legal/regulatory problem, and not a technical one.
That leak says something about the resources, not the expertise. For example both the NSA and almost every NATO equivalent of it tends to design cryptographic primitives with openly documented “weird” interfaces (key checksums, self-synchronizing remarkably slow stream ciphers...) and probably nobody outside of these agencies really knows why.
The way I see it, safeboot is a bunch of scripts helping to set up solutions already available.
I didn't look at it in detail, but in one of the screenshots the system asks for a PIN to unlock the disk.
I agree that storing a full decryption key in the TPM may be risky. Even if the threat model should be kept in perspective (it may not be interesting for an attacker to go around doing this to Joe Random's laptop), it is something that users should be aware of.
I don't like TPM. I generally don't like any hardware-based security features. Security must rely on a sufficiently long passphrase that stays in my brain and remains secret.
Perhaps I can use Shamir's Secret Sharing to share a key with other people I trust, including a lawyer I pay, who must keep clients' secrets and is exempt from police raids by law.
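A quick sketch with the ssss package (the 2-of-3 split and the token name are arbitrary):

    # split the recovery passphrase into 3 shares, any 2 of which reconstruct it
    echo "my-recovery-passphrase" | ssss-split -t 2 -n 3 -w laptop
    # later: any 2 shareholders enter their shares to recover it
    ssss-combine -t 2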
Much like Trammell I have been annoyed by the complexity of the current secure boot tooling ecosystem. This led me to read through the 2,500-page UEFI specification and write some tooling from scratch.
sbctl is essentially a secure boot key manager. It enrolls keys and ensures the relevant files are signed on your system. It works fine and I use it day-to-day these days, but it lacks several nice UX features.
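The day-to-day flow is roughly the following (assuming the firmware has been put into setup mode so that keys can be enrolled):

    sbctl create-keys                        # generate a new PK/KEK/db key set
    sbctl enroll-keys --microsoft            # enroll them, optionally keeping Microsoft's certs
    sbctl sign --save /boot/vmlinuz-linux    # sign the kernel and remember it for future updates
    sbctl verify                             # check that everything on the ESP is signed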
The second thing I did was reimplement the UEFI API portion in native Go code from scratch. It is currently feature-comparable to sbsigntools, but in pure Go. The top-level API is not completely nailed down and it lacks some granularity, but I have written several test tools that replicate the sbsigntools binaries.
I said this last time the topic came up, and I'll say it again: secure boot should not be trusted.
Fortunately, there's an easy alternative if you want to protect against evil maid attacks: use full disk encryption and keep the bootloader (and key) on a USB drive on your person.
A separate bootloader and key on USB does not protect against many physical attacks, nor ones that involve changing the firmware or nvram configuration through software attacks. Without some sort of sealed keys or attestation of the platform configuration, your external bootloader has no guarantees that the device itself has not been backdoored. mjg59's tpm-totp talk[1] discussed the difficulty of trusting the firmware that loaded the bootloader that loaded the kernel that is now asking for your password (although even with that it is necessary to add integrity protections on the encrypted disk, otherwise there are a variety of attacks against the systems).
Secure Boot is trustable, if you remove the vendor keys and reprogram the platform key with one under your own control. Likewise, the TPM is useful for protecting your secrets, not just enforcing DRM, if you take ownership of it and make use of the sealed key policies. See the safeboot.dev threat model[2] for how these protections are applied and how they detect or prevent many sorts of attacks.
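For reference, "taking ownership" with the stock efitools looks roughly like this (the firmware has to be in setup mode, and the certificate names are placeholders):

    # create an owner certificate and wrap it as a signed EFI signature list
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my platform key/" -keyout PK.key -out PK.crt
    cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
    sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
    # enroll it as the new Platform Key (similar steps for KEK and db)
    efi-updatevar -f PK.auth PK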
I think there are some misunderstandings in this thread about what Secure Boot is and how it works. Secure Boot doesn't protect your disk encryption key.
The purpose of Secure Boot is to validate that the bootloader is trusted so that you can have some assurance that you're not giving your disk encryption password to a fake bootloader which phishes you.
Secure Boot doesn't give any agency more control over your machine than if you were not running Secure Boot. Using Secure Boot is strictly more secure than not using it, even if you don't trust the parties who made the implementation.
You could say "Your computer is trustable, if you have verified the silicon of your CPU, motherboard, etc." and it would be equally true. Secure Boot isn't perfect, but it's a lot better than no Secure Boot.
The only argument against it is that it provides a false sense of security, which is only a problem if you decrease security in other areas as a result of using Secure Boot.
What most people don't understand is that Secure Boot means trusting a foreign overseas company with financial motivations.
And additionally trusting them despite the possibility of an NSL, which very likely was sent to them in the past already and means they probably have an automated pipeline for handing over the keys to federal institutions.
I'd never trust any OEM BIOS with anything. Just as I won't trust Intel ME.
Or are you literally replacing all OEM firmware, using purely open hardware. Using purely open source firmware. Verifying the firmware you have corresponds to the sources you have. Verifying that there is no additional secret firmware you don't know about, verifying that the hardware you have actually corresponds to the open hardware specs you have, etc. i.e. Doing an insane amount of steps that are so impractical, you might as well make your own computer starting from first principles.
I totally agree with you there that most things are super impractical and impossible to realize for end-users (or even engineers).
What you can do, though, is try your best with what you can influence with your own skillset. I would never claim that any device is secure (heck, ever since BadUSB not even my power adapter is) but I'd have a better feeling when using coreboot that I configured, built and flashed via my CH341a adapter instead of an OEM SeaBIOS, for example. I mean, software is my skillset. Software I can influence. Hardware: not so much.
I don't know whether there are government-level exploits available for coreboot or libreboot, but I think that's the level of security where we can just dump our hardware into the trash anyways.
Additionally I don't have the skillset of verifying that a RISC-V chipset is really open, verified or secure. Therefore I would have to trust somebody else to do it, which might become the centralized point where the red tape fails for all of us.
When it comes to open hardware, mntmn [1] got pretty far already. Even though I personally think that the touchpad is still unusable in terms of modern UX. But I really admire them for what they do, and that they do not compromise on their core principles.
As a slightly less snarky addendum to this: Good risk management is all about balancing the likelihood and severity of a threat against the cost of mitigating it.
If you spend $1000 (or equivalent in time/whatever you care about) mitigating a risk that at worst would cause you $10 worth of damage, that is a poor use of resources.
Of course some people like locking things down as a hobby. Nothing wrong with that, but at that point you're doing it for fun, not to protect yourself.
The only acceptable phrasing of "I don't trust anything" is to finish that sentence with "... therefore I don't use computers". The very idea of using computers means that data is processed, and so eliminating all attack surface is not in play.
NSL is usually used specifically to indicate a National Security Letter issued by a US government agency. National Security Letters only apply to particular types of information, and even then only to transactional records and not content. They also must be targeted to information specific to a particular investigative target. I would hope any company receiving an NSL trying to compel providing keys, especially in some sort of automated way, would challenge such an NSL in court.
> With a Librem Key linked to your encrypted drive, you can boot your system, insert your key, and enter your PIN when prompted. You can always fall back to your passphrase if your Librem Key isn’t at hand.
Emphasis mine. Since the bootloader is not protected, it's susceptible to evil maid attacks.
"Technical report" meaning "guide created for government system admins." It's crazy that the NSA cybersecurity mission seems so similar to one of a thousand generic tech how-to blogs.
Even the CIA's leaked internal PowerPoints look fairly similar to what you'd get in the valley, for the most part. The budget's big but - especially for defensive operations - it's the same kind of thing most other people do in that sector.
For offensive work, the motto (of one of the NRO's satellites) comes to mind: doing god's work with other people's money. Everything is tapped; we should assume at least a proportion of what we take for granted now is unsafe. It's unlikely they'll have broken any big fry protocols or schemes, but planting a backdoor is trivial for them (if you can manipulate the entropy on 10% of computers, you should be able to crack it in 10 years - think of all the kids you could save!).
Of course it's the same. Their offices are full of the same boring Dells that you'd find in any other office. Best practices are best practices anywhere.
I believe the organization is silo'd into defensive and offensive sub-organizations. My blind guess is the defensive side is pretty separate from the rest of what's going on (any intelligence agency naturally tries to compartmentalize as much as possible). I wouldn't be surprised if there's internal politics where the defensive side may want to release a security advisory or other defensive guidance but the offensive side blocks them due to wanting to hoard something as a potential weapon in the arsenal.
And I'm very likely in the minority of HN on this one, but I think this is generally probably fine and warranted. That kind of hoarding is exactly what I would expect and want them to do, as opposed to the warrantless domestic dragnet surveillance I don't want them doing. If you're in a non-stop ever-changing arms race, you want every edge you can get, as long as there's a carefully considered cost-benefit analysis (which they likely at least attempt to perform).
"any intelligence agency naturally tries to compartmentalize as much as possible"
Why is compartmentalization natural? The business world analog is "silos", and we're forever trying to break them down, or work around them or something. Are intelligence agency compartments just jargon-justification for bureaucratic fiefdoms? We know human organizations tend towards individual small warring tribes, are compartments just a justification of that?
Would an intelligence agency that scraps compartmentalization have an advantage? How would you see that advantage?
Because intelligence agencies are always also concerned with counterintelligence as a major function.
> The business world analog is "silos", and we're forever trying to break them down, or work around them or something.
Most businesses try to keep highly sensitive data that has adverse consequences for release siloed. Unlike intelligence agencies, for most businesses such information is exceptional, rather than the rule.
> Are intelligence agency compartments just jargon-justification for bureaucratic fiefdoms?
They aren't just that, which is why the practice is universal. There is, of course, the perennial risk that the legitimate need gets exploited for that, though.
> Would an intelligence agency that scraps compartmentalization have an advantage?
As long as they were never penetrated by a hostile agency, maybe (though it might also reduce focus, contribute to analysis paralysis, and have other deleterious effects without penetration.) But the impacts of any penetration would be magnified, and while major penetrations may be rare because of compartmentalization, penetrations of intelligence agencies aren't rare enough for magnifying their impact to be discounted.
From what various related sources have said, the defense budget is minuscule compared to the offense one. I get the feeling anyone on the defense team is unlikely to try and oppose what else is going on.
>That kind of hoarding is exactly what I would expect and want them to do, as opposed to the warrantless domestic dragnet surveillance I don't want them doing
Why do you think they aren't collecting these exploits for more domestic surveillance?
>Why do you think they aren't collecting these exploits for more domestic surveillance?
They may very well be. But, first, because a 0-day in Microsoft Word or something isn't really helpful for spying on hundreds of millions of people; it's for rare, highly targeted spear phishing and other kinds of very precisely-aimed operations, and I think that's the type of stuff they generally discover and/or are given/sold
In theory some kind of major flaw in TLS or networking equipment could enable it, but the latter is risky to be doing all the time (dragnet implies constant surveillance), and the former is as well unless it can be done purely from passive observation of traffic, and I think such a critical vulnerability in modern TLS requiring no active interference (e.g. not Heartbleed) is fairly unlikely and rare - though of course definitely not impossible.
Also, I think after all the leaks and recent high-ranking court rulings, it's just not very tenable for them to keep that going as it existed before. Even if only due to future leaks and backlash. Plus, PRISM and XKEYSCORE are cool and have rad cyberpunk codenames and stuff, but from what I can tell the actual valuable, actionable intelligence they got out of it wasn't worth even 1% of what they put into it, due to having so much raw data to deal with. Trying to filter the signal out of the noise is like a needle in a galaxy-sized haystack. Future ML and other software developments could maybe make finding the needle easier, but it'll always be a very technically challenging problem.
And now that there's a precedent of leaking, there's a higher risk that a future dragnet surveillance program might get exposed by people who otherwise wouldn't have exposed different programs. "Vacuum everything, ask questions later" / "collect them all and let God sort them out" just seems technically, politically, legally, and practically not worth continuing. I'd also like to think some percentage of employees have probably been swayed and now morally oppose it, even if they wouldn't say it openly.
And, finally, I actually don't personally care much about being caught in that dragnet myself, so the thought of it doesn't really bother me. I work in infosec and am very privacy-conscious, too, to the point of some friends thinking I'm paranoid - I've just been in enough positions to know that it's like being the Earth: you feel important, but relative to the universe you're so small you might as well not exist. My threat model and risk profile is just very different. However, it's of course unconstitutional and unethical, and the fact that many other people feel very violated by it is more than enough reason for me to oppose it, even if it's more on abstract, philosophical grounds.
This was the same in the UK: GCHQ had offensive and defensive arms. But they’ve since officially split; the defensive role is now taken by NCSC. Having said that, GCHQ is still its parent organisation and I have no doubt that they maintain strong ties.
Why should they not be on the offensive also? Wouldn’t it be a bit naive to think that the US would sustain from offensive cyber ops, when the other major world powers actively do so?
We already know they’re on the offense, so much so that if you asked any random American on the street what the NSA did their response would likely be “spying on me/other countries/terrorists”. Very few people know that the NSA is expected to (and sometimes does) spend its effort on defensive measures, and it would be a good idea for the agency to improve this record.
> Wouldn’t it be a bit naive to think that the US would sustain from offensive cyber ops
you mean 'abstain'?
and no, it would be responsible to abstain, because offensive cyber relies on knowledge of vulnerabilities in software and hence creates an incentive to not fix them, which in turn weakens security for everyone.
You could still read their charter online 10 or so years ago. Their mission was defense. Today they have replaced it with "Mission & Values" which they define without any immediately apparent legal basis.
Would you extend that to bioweapons and depleted uranium munitions that are well documented to have caused tens of thousands of birth defects in innocent children?
X terrorist does it, so why can't the US right? Is this line in the sand really drawn at cyber? And does cyber not kill people in meatspace? Last I looked you drone strike weddings based on metadata.
TPM should still be scrutinized in my opinion. It is mainly used to bind licenses to hardware. The keys act as an identifiable serial number. Yes, it is possible to shield against third party manipulation and there are some cryptographic benefits, but not in any practical sense. Not supporting it can even be a security boon; spies probably would have incentives to not use it.
So I think it is fair to stay critical if the NSA supports unique identifiers for hardware.
The TPM is not a DRM enforcement mechanism if you set it up for your own use. It is a very useful tool for taking control of a machine that you own - it provides a way to prove* to yourself that the system is booting with the firmware that you've approved, in the configuration that you set up, and running the kernel and initrd that you've signed. https://safeboot.dev/attestation/#i-thought-remote-attestati...
No it is not, but I am fairly sure that is one of the main use cases. And you have a uniquely identifiable machine which creates new security problems.
We also know from smartphones that manufacturers can indeed be motivated to lock bootloaders. I think the main reason we don't have that on PC is that there are still multiple manufacturers and legacy considerations.
I cannot read the minds of Microsoft, but I have my assumptions that I believe are quite safe.
https://trustedcomputinggroup.org/ has rebranded themselves because they got a bad name. Justified in my opinion. People have identified the motivation on day one.
But again, yes, it can have some security advantages to weigh against the numerous disadvantages. I think it is bad for open computing overall. There are certainly mechanisms to secure your OS that don't rely on TPM. It may benefit you, but I would actually like to see it removed from my machine with all the consequences (which would be not being able to play DRM protected media).
Can someone ELI5 what secure boot does? Does it just prevent malware from overwriting the MBR? I turned it off on my most recent install, even though I still use full disk encryption. Had some issues with VirtualBox VMs that indicated secureboot might have been the cause.
If legacy boot is off, UEFI and TPM/secure boot offer code signing for everything running on your computer.
UEFI checks first-stage boot loaders and/or kernels for a signature backed by a key/cert in the firmware's signature database. AFAIK, out of the box, that means Microsoft, RedHat/IBM, Canonical or a handful of others have signed your bootloader/kernel.
Whatever you run next is supposed to check that any further code is signed - at least code allowed into kernel space.
If there exist any signed snippets with exploitable errors, the whole house of cards collapses (but you can limit exposure by only allowing code you yourself signed, such as a single build of the Linux kernel).
Linux can/will require signed drivers with secure boot enabled - so that can lead to some issues (that can be fixed, eg by adding a signing key and signing the drivers).
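For out-of-tree modules (e.g. the VirtualBox ones mentioned upthread) the usual fix is to enroll a Machine Owner Key through shim and sign the module with the kernel's sign-file helper -- roughly, with paths that vary by distro:

    # enroll a Machine Owner Key; shim's MokManager asks for confirmation on the next boot
    mokutil --import MOK.der
    # then sign the out-of-tree module with the matching private key
    /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der vboxdrv.ko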
In theory, you'll never run untrusted code in the kernel - no blue pill hypervisor root kit etc.
UEFI Secure Boot only applies to UEFI booting. So if you're booting via an MBR (courtesy of a UEFI Compatibility Support Module) then Secure Boot won't help you. You need to turn the CSM off for any decent booting guarantees, and then MBRs are ignored and malware overwriting an MBR is of no consequence -- it won't even stop the machine from booting if it zeroes it out completely [1].
The purpose of Secure Boot is to verify that the binaries (e.g. bootloader) that the firmware is executing from your EFI System Partition (Yes, UEFI systems are aware of both partitions and filesystems, unlike BIOS systems) are digitally signed with a key in its database. Likewise, those binaries are themselves supposed to verify that the things they're loading (e.g. kernels) are signed with a trusted key, which can either be a key built into the Secure Boot database, or a key built into the bootloader (where changing such a key would invalidate the signature on the bootloader itself).
If you're running Linux, you can even eschew a bootloader entirely, by building the kernel itself as an EFI binary and relying on the UEFI Boot Manager to load it directly. This is called EFI stub mode, and is still compatible with Secure Boot if you sign the kernel binary yourself, with a key that you provision into the database. This is how my NAS boots.
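A sketch of that setup (device, partition, and key file names are placeholders):

    # sign the kernel with your own db key and drop it on the EFI System Partition
    sbsign --key db.key --cert db.crt \
        --output /boot/efi/EFI/Linux/linux.efi /boot/vmlinuz-linux
    # add a boot manager entry that loads the kernel directly, no bootloader involved
    efibootmgr --create --disk /dev/sda --part 1 --label "Linux (EFI stub)" \
        --loader '\EFI\Linux\linux.efi' --unicode 'root=/dev/sda2 rw quiet'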
Note that nothing here implies any sort of encryption. Whether you use disk encryption or not is independent of whether you use Secure Boot or not -- Secure Boot does not require, or even provide, any disk encryption services. Something like Microsoft's BitLocker can use a TPM to store the decryption key, and Windows will not require that UEFI Secure Boot is enabled to do this. However, changing the system firmware settings after the fact (e.g. turning Secure Boot on or off) will make the TPM (correctly) refuse to divulge the disk encryption key you've sealed into it during BitLocker setup, rendering the machine unbootable again until you either (a) undo your configuration change or (b) enter your BitLocker recovery code and set up BitLocker all over again.
[1] Speaking only of the bootloader portion, which is the first 440-ish bytes. The disk identifier and partition table are in the rest of the first 512 bytes, but on a UEFI system booting in UEFI mode, there is (usually) only a single "partition" in the table here anyway: a protective entry covering the whole disk. The actual GPT with the real list of partitions follows after. Implementations differ on whether they actually require a protective MS-DOS partition table, so a GPT-only disk (no protective MBR) could be bootable on some systems anyway.
This guide does a good job of discussing Secure Boot, but what I've always been curious about is the feasibility of enrolling my LUKS key in the TPM in order to roll my own Bitlocker. I've seen a few guides on this but I'm hesitant to mess around with my encryption key on my everyday systems. Has anyone successfully done this along with Secure Boot and if so can they point me to the guide they used?
If you want LUKS encryption with signed policies to prevent brittle PCRs, rollback prevention with monotonic counters, and a user PIN to prevent dictionary attacks, plus TPM-sealed TOTP to attest to the state of the firmware: https://safeboot.dev/
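(If you only want the basic seal-a-keyslot-to-the-TPM part without the rest, a lighter-weight sketch is systemd-cryptenroll -- assuming systemd >= 248, a systemd-based initrd, and /dev/nvme0n1p2 as a placeholder LUKS device:

    # bind an extra LUKS keyslot to the TPM, sealed against the Secure Boot state (PCR 7)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
    # let the initrd try the TPM before falling back to the passphrase
    echo 'root  /dev/nvme0n1p2  none  tpm2-device=auto' >> /etc/crypttab

but it doesn't give you the signed PCR policies or the attestation pieces.)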
It performs bootchain hardening, eliminating any vendor Secure Boot certificate and replacing it with your own. The entire bootchain gets signed, including the initramfs and the grub configs.
The header and footer remind me of the limitless budgets government agencies have. Nobody in the private sector would put that much color on a print-ready security document for wide distribution.
No. You can also turn off Secure Boot and I would recommend it. It is mostly a scam and was probably introduced to try to lock down PCs like mobile devices, Microsoft being the usual offender here.
Security benefits are questionable. There is no scenario where non-technical users boot into a malignant OS. They open mails with strange attachments. The security analysis leading to this initiative is some kind of fantasy novel.
> There is no scenario where non-technical users boot into a malignant OS.
According to the report there is, and I believe the report is correct, when the "malignant OS" is not actually a full OS, but rather a rootkit. This rootkit might have been the result of a non-technical user opening a mail with a strange attachment.
Right, there are such scenarios, but it comes with disadvantages that are also relevant to security, to a larger degree in my opinion, and the mechanism does not exclude every form of rootkit. If you open a malignant attachment and execute it, there is only so much you can do.
Linux distros that want to boot out of the box now have to ship their bootloaders as Windows-style PE binaries that are digitally signed by Microsoft. Talk about a branding iron. It amazes me how no one talks about this. Or maybe there's just some NSA gravy train for online commentators I haven't found yet :(
I'm running Arch on a "professional" HP ProBook with secure boot activated. This laptop has no official linux support whatsoever from HP. It's fairly recent, too – 2018 I think.
The BIOS allowed me to set up my own keys, which I use to sign the kernel. A fun fact is that, with the way it is set up currently, it refuses to boot Windows because of the... wrong signature!
> The BIOS allowed me to set up my own keys, which I use to sign the kernel
Not all manufacturers allow this, and even then, it adds considerable user complexity to have to do this to install Linux, as opposed to a normal liveUSB GUI flow, not to mention that it screws up dual-boot, as you said. So, it is practically a requirement, even if it is technically possible to work around it with good technical skills on specialized hardware.
The HP has a box to tick in the UEFI where it loads the MS keys. I never tried it, so I don't know if that is in addition to whatever keys are configured or if it replaces the custom keys. As there's a different option to clear all keys, I would hope it's the former.
I think in this case there are two problems, more related to the hardware and the Secure Boot implementation rather than Secure Boot itself.
1. Shoddy hardware that doesn't allow the user to control it.
2. A process that is relatively involved, although I'm not sure how you could go about providing an "easy way" for people without technical skills.
One way or another, the keys have to get into the UEFI. It's technically possible to configure them from inside Linux (while it's running) – this worked on an HP EliteDesk – but UEFI Secure Boot has to be disabled. I suppose that's relatively simple to do, certainly simple enough for someone interested in trying Linux (as opposed to people who don't care what OS they use as long as they can accomplish what they want with their computers).
I guess a nice way would be for the UEFI to expose some sort of interface where the OS can change the keys and then, on reboot, ask the user if they really wanted the change. But this would still be a problem for the initial setup, because, if booting Linux (instead of the usual Windows), you are actually booting a different OS than what the UEFI expects.
1: https://safeboot.dev/