The pervasiveness of secure boot has genuinely made things difficult for attackers - there'd have been no reason for the BlackLotus bootkit to jump through all the hoops it did if it weren't for secure boot, and the implementation of UEFI secure boot does make it possible to remediate in a way that wouldn't be the case without it.
But secure boot at the OS level (in the PC world, at least) is basically guaranteed to give users the ability to enable or disable it, change the policy to something that uses their own keys, and ensure that the system runs the software they want. When applied to firmware, that's not the case - if Boot Guard (or AMD's equivalent, Platform Secure Boot) is enabled, you don't get to replace your firmware with code you control. There's still a threat here (we've seen firmware-level attacks for pre-Boot Guard systems), but the question is whether the security benefit is worth the loss of freedom. I wrote about this a while back (https://mjg59.dreamwidth.org/58424.html) but I lean towards thinking that in most cases the defaults are bad, and if users want to lock themselves into only using vendor firmware that's something that users should be able to opt into.
Would it be sensible to make that choice using a good old fashioned jumper? For example, when the jumper is connected to pins 1 and 2, the firmware must be signed by a list of vendor-controlled keys; when the jumper is connected to pins 2 and 3, the firmware must be signed by a list of user-managed keys. That way, I can choose what kind of freedom makes sense for me. Most people value the freedom of not having to worry about firmware, while others value the freedom to use or create their own firmware.
No, because the entire point of this is to be resilient against physical attack - anyone re-flashing your firmware already has your case open and can just move the jumper while they're at it.
Taken to the extreme, someone with physical access could replace the whole unit with something that has the malware pre-installed.
A switch that can't be controlled via software is at least more secure than the alternatives. If you check the jumper before booting up, you can still be 100% sure even if someone previously flashed stuff onto it. Remember, it is called secure boot, not secure flash, so firmware gets verified again during boot-up.
> Taken to the extreme, someone with physical access could replace the whole unit, to something which has the malware pre installed.
Not really. Secure Boot also guards access to tamper-resistant security modules like the TPM. Replacing the whole machine would never give you access to the old TPM. And if, for example, the disk is encrypted using keys stored in the TPM, replacing the board won't work. The same goes for OS-level keychains and credential stores, even if the entire disk isn't encrypted.
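The TPM binding described above can be sketched in miniature: each boot component gets hashed into a Platform Configuration Register via an extend operation, and a key sealed against the resulting value only becomes available if the exact same measurements are replayed. This toy Python sketch illustrates the idea only - a real TPM enforces the check in hardware with policy sessions, and the `seal` function here is a stand-in, not the actual sealing mechanism:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend: new_pcr = H(old_pcr || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot_measurements(components) -> bytes:
    pcr = b"\x00" * 32  # PCRs reset to zero at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

def seal(key: bytes, expected_pcr: bytes) -> bytes:
    # Toy stand-in for the TPM's hardware-enforced PCR policy check:
    # the unsealed result only matches if the PCR value matches.
    return hashlib.sha256(expected_pcr + key).digest()

good_chain = [b"firmware-v1", b"bootloader", b"kernel"]
evil_chain = [b"firmware-evil", b"bootloader", b"kernel"]

sealed = seal(b"disk-encryption-key", boot_measurements(good_chain))
# Replaying the same measurements reproduces the unsealing input...
assert seal(b"disk-encryption-key", boot_measurements(good_chain)) == sealed
# ...but replaced firmware (or a replaced board with a different TPM)
# yields different PCR values, so unsealing fails.
assert seal(b"disk-encryption-key", boot_measurements(evil_chain)) != sealed
```

This is why swapping the motherboard gets the attacker nothing: the disk key only exists inside the original TPM, and even that TPM refuses to release it unless the measured boot chain matches.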
If I have access to the TPM and the system, I can MitM it.
Secure boot is perfect in a theoretical, ideal vacuum - something that bored researchers look at and nod.
But keys leaking, hardware hacks, etc. are not even considered, so as not to disturb the safety blanket everyone has wrapped themselves in. Yeah, it makes things inconvenient for bootkits, but that's it. If you can install OS updates, I can install a bootkit.
> if i have access to the tpm and the system, i can MitM it.
This is if you unlock your machine after an attacker has had physical access to it. "Evil maid" attacks are well-known (that is what these are called, someone installing MitM hardware on your computer). Whether contemporary machines are actually resistant to it in practice I am not sure.
However, all UEFI implementations (on PCs at least) allow anybody with physical access to disable Secure Boot - the classic method of just popping the button battery still works to this day.
So, isn't this firmware protection with BootGuard only really meant to prevent rootkits from getting persistence?
PS: thanks for all the blog posts you share on the matter! they are really golden
> So, isn't this firmware protection with BootGuard only really meant to prevent rootkits from getting persistence?
I've seen this "but it only prevents persistence" argument several times and I wonder...
Isn't preventing a rootkit from getting persistence already a big win? It also means that once a new signed kernel is installed containing a fix for the hole the rootkit was exploiting, the rootkit stops working entirely. The attacker now needs not only to re-root the machine at every boot, he also needs to cross his fingers that a kernel patch closing the hole he's exploiting doesn't get installed (or he needs to prevent the new kernel from being installed while simultaneously making it appear to have been installed).
Which also raises the probability the exploit he's using at every boot gets detected at some point.
How is this a win for attackers?
Are black hat hackers really thinking "Great, BootGuard and SecureBoot are getting ubiquitous, everything up to the kernel loading is signed and enforced, so now things are easier for me!"?
Kind of true? If you disable secure boot at least on Windows BitLocker will no longer unlock your disk at boot, and so you'll need to enter the recovery code at least once.
If the target doesn't have BootGuard, you replace the firmware with one that pretends Secure Boot is enabled even when it isn't, and BitLocker is unaware anything's changed.
Secure Boot configuration is usually stored in flash, not battery-backed CMOS, so on most boards it won't be wiped if you simply remove the battery. But if you do have physical access, you can simply rewrite that variable in flash to disable it - doing so will change the TPM measurements, so BitLocker (or whatever) will complain. It's not silent.
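As an aside, the OS-visible side of that state is easy to inspect. On Linux the firmware exposes the `SecureBoot` variable through efivarfs, where each variable file is a 4-byte attributes field followed by the payload (here a single byte, 1 for enabled). A minimal sketch - the GUID is the UEFI spec's global-variable namespace, and the path assumes a standard efivarfs mount:

```python
from pathlib import Path

# GUID of the EFI global variable namespace (fixed by the UEFI spec)
EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"

def parse_efivar(data: bytes) -> int:
    # efivarfs prefixes every variable's payload with a 4-byte
    # little-endian attributes field; the value follows.
    if len(data) < 5:
        raise ValueError("truncated EFI variable")
    return data[4]

def secure_boot_enabled(efivars: str = "/sys/firmware/efi/efivars") -> bool:
    path = Path(efivars) / f"SecureBoot-{EFI_GLOBAL_GUID}"
    return parse_efivar(path.read_bytes()) == 1

# Example payload: attributes 0x06, value 0x01 -> Secure Boot enabled
assert parse_efivar(b"\x06\x00\x00\x00\x01") == 1
```

Of course, as the comment above notes, this only tells you what the firmware reports - a pre-Boot Guard machine with malicious firmware can report whatever it likes.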
Then you don't need a jumper - you can trust the firmware to impose any policy you've set (re-flashing firmware is mediated by the firmware, so you can just have a setup option that lets you choose which signing keys are trusted).
Then Netflix won't serve you HD content unless you run your OS with the pins 1-2 setup, as those are the only keys they "trust" :) This is what "trusted platform" is all about.
This would have such a niche market, but also be so wonderful. I would love to have a physical switch that toggles between, essentially, secure boot and a user-programmable boot.
>if users want to lock themselves into only using vendor firmware that's something that users should be able to opt into.
But even this is a potential risk all by itself if you aren't making sure it can only be done by someone with physical access to the hardware. Case in point: Dell and AMD EPYC CPUs, which were locked to Dell firmware if they had ever been booted on a Dell motherboard. It's bad enough that processors were being locked to Dell without the user making the choice, but it also allows for some pretty potent ransomware: not just holding data for ransom but holding hardware as well, with the same durable cryptographic guarantees.
Absolutely agree here. As Benjamin Franklin once said: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
The freedom to choose which software runs on your computer is also the freedom to choose that certain software shouldn't run on your computer. The freedom to make that decision should be left up to the individuals rather than being imposed on them, but making an explicit choice that my computer should only run specific firmware builds is an expression of freedom, not a rejection of it.
Except that describes a fantasy reality, not our reality, where the user has no say, and even if you are lucky enough to be able to run the software you want, it won't have access to the keys demanded by third parties like the media industry. The mere existence of "trusted computing" is a threat to free computing.
The key term there is "essential liberty", by which he meant political rights. Choosing to delegate management of a computer's firmware to a third party isn't giving away any political rights, and is a rational choice in some circumstances. It's a choice I might make for some computers I own or operate, and not others. Suggesting that this choice has anything to do with what Franklin was talking about is nonsense on stilts.
Yes, delegating firmware management unaudited is giving away political rights. You do not know what is in there.
The ones who write the code make the rules. The ones who make the rules, wield power. Those that are attracted to the exercise of power, inevitably abuse it.
Trying to handwave that "computer says no" couldn't possibly be abused to political ends is literally shoving your head in the sand. DRM would not be a thing if computing was fundamentally apolitical.
Any code I didn't write myself, from the ground up, using no third-party components, in a language I developed, "could" possibly be abused in this way. Even then, do I also have to write the microcode in the chip and supervise the fab? This extreme absolutist stance is completely non-viable. I certainly don't see how I could live life taking that level of extreme paranoia seriously, or how anyone could in an advanced technological society.
The whole point of laws and government is to outsource such concerns. That’s why free speech, the rule of law and democracy matter so much. Those are the essential freedoms he was talking about. Everything else is based on those, because with those you don’t need to trust the vendor, because your legal rights will be protected.
So sure, essential freedoms are exactly that, essential, but extending that to absurd lengths is fuzzy thinking that obscures what’s actually important by confusing it with things that aren’t practically attainable generally anyway.
Not OP, and I agree that freedom quote is definitely an overreaction in this sense.
Allow me to go off on a bit of a tangent. I find this call for freedom much better than what is beaten into children/students in our schools in a small European country, namely Croatia. It is obedience, rote memorization, compliance, anti-individualism, hopelessness, anti-leadership and dependence. Ask any citizen over here what freedom even is, and people would stare at you blankly because frankly, very few would even know how to try to describe it.
For a normal citizen over here, it means what the Government allows you to do. And now, in my fourth decade in this country, it is less and less. People are afraid to speak up because the Government is the biggest employer by far - every fourth citizen works for it (directly or indirectly). The previous generation could at least count on having a home, so if they got fired, they could try to get a job somewhere else. My generation (and the generation after me) have no economic freedom. I bet 99% of homes are bought with a mortgage, so they are all owned by the banks, and people can lose them very easily. And they do. This is not America, where jobs are like a revolving door. Losing a job here is a very concerning thing, and getting employed again is difficult. No wonder young people are emigrating en masse - they cannot afford anything here. At least somewhere else they have a chance at a normal life. Here, they'll be almost slaves their whole lives.
People here also don't know how to protest and self-organize, so protests are very rare. It also doesn't help that the police are very active in suppressing any kind of "undesirable" political activity. As my ex-neighbour (now deceased), who was a chief of a regional branch of the secret police, said: "everyone is doing something illegal; if you stick your head into certain matters it will get chopped, and there won't be newspaper articles about it."
When the government enacts laws to remove some remaining freedoms and rights, it passes here without even a whimper. It's not even headline news - it's a non-news item. "Well, at least they didn't take away my TV, so all is fine" is the somewhat sarcastic reply of an average citizen here.
Just wanted to put this into perspective when one claims that Americans are oversensitive in their "freedom fetishism". That's fine in my view - millions of people died for that freedom. For most of the rest of the world, freedom probably just means something only rich people can afford.
Americans are very uncritically accepting of their system of government in general. It gets taught as this "wow, our system of checks and balances, isn't it amazing!?" and by and large is never criticized or substantially analyzed as to whether it's a system that produces good outcomes. It simply is.
The largest criticism, imo, is that it biases against action. Every dimension of the population gets represented (popular, geographic, regional, etc.), and if any segment of the population disagrees, the whole process can be dragged to a halt. And as the Polish Sejm showed hundreds of years ago, the liberum veto is a terrible idea politically. And everyone knows that - but what is the threshold at which it becomes a bad idea? 90% consensus? 70%? Pure majority? The US system requires very high consensus, as some issues show - there are some issues with >90% popular support that still cannot get passed.
Consensus building is good but at the same time there always needs to be someone in the drivers' seat, the idea of "split government" in the sense of the executive and legislature being opposed is fundamentally and innately a bad idea that people just uncritically accept because that's how it is. Again, the executive being the leader of the coalition that controls the legislature is a good way to do that - and then we can work on making the legislature represent the population fairly in the desired ways.
Most of the problems with the US really boil down to "bias against action" and "split government" and "the senate in its entire conception". And federalism really is not great when taken to the degree that the US takes it, either, but again that's something where it's taught as "wow, federalism, how great!" and its downsides are never mentioned. Having regions of the country where human rights are 100 years in the past is pretty bad. Gay marriage wasn't constitutionally protected until like 10 years ago, and it was by court decree, not actual constitutional process. Texas just goes around killing people, some absurd % of the executions happen there and the evidentiary bar is quite low. Social services tend to be similarly scant in these regions and again, it’s not a good thing that states can just choose not to fund (or to place arbitrary restrictions on) senior care or other funding for vulnerable populations. Federalism is supposed to be backstopped by a minimum bar that in practice doesn’t exist in the states, in human rights or social services or many other areas.
Unfortunately, a lot of this was historically necessary to get the US built - you wouldn't have gotten the slave states onboard if they didn't have disproportionate representation and mechanisms for dragging abolitionism to a halt. And it's produced one civil war and a half-dozen-odd constitutional crises over the centuries. But that part gets separated away from the “design choices”, and people only hear the positive.
It's not that they're all inherently bad either but they're deliberate design-decisions that have consequences both positive and negative, they are "tech debt" from politicians who wanted to move quickly and break things, and now they're these sacred cows. And in hindsight some of those design-decisions have been ones that had immensely negative consequences and can't be easily undone... but they were necessary at the time.
> Americans are very uncritically accepting of their system of government in general. It gets taught as this "wow, our system of checks and balances, isn't it amazing!?" and by and large is never criticized or substantially analyzed as to whether it's a system that produces good outcomes.
Is this really different anywhere else? Pretty much all the schooling I have been through (none of it in the US) has been 90% about deferring to authority, if you really think about it.
Sorry, what? This literally happened, THIS YEAR, and not a single one cared beyond saying "oh no, this is terrible, if only there was something we could have done!"
If "something truly salacious turns up", I would bet the source politician will deny it and want the whole thing forgotten as quickly as possible, not work to pass a bipartisan bill against it, because the leak was an embarrassment to him.
Like what? Another homophobe southern Republican politician gets outed as a pedophile?
A half dozen people testifying that our last president was a serial rapist?
The head of the Supreme Court whining that when congress appoints a probable rapist to the court, the double standard is that the court should be above politics, and not comment on such things, but the press covers it anyway?
At least two of those things actually made the news yesterday.
I’m really not able to think of anything sufficiently salacious to get any more shit on the fan.
Despite my comment I'm not in favor of deliberately exposing private medical records of anyone, politicians or otherwise. My point is that the fixed and blinkered view of politicians won't change until they're personally affected or harmed.
I had nothing particular in mind. But we only have to look at the long line of past political scandals to know others that are sufficiently newsworthy will soon turn up—ones politicians wouldn't want exposed.
> There'll be more, eventually something truly salacious will turn up.
Have we lived through the same past few years? At this point, if one of them turns out to be an actual baby-eating reptilian, he'll still keep his fucking seat next election, as long as he has the right letter next to his name, while the party that professes to hate baby-eaters keeps voting for him.
If you have no shame, scandal slides off you like water off a duck's back.
It worked several times before (Clinton email server, Hunter Biden laptop, various examples from other countries) so no reason why this argument would not work again.
It worked against Democrats because the Republican base does not care about facts and just needs a reason, fact or fiction, to justify the outrage they are told to feel.
Ironic that you would confuse your emotionally felt bias for a fact. There are many good and decent people, firmly rooted in reality and facts, who count themselves as part of the Republican base. We need to return to political argument rather than vilification.
Just because people are "good and decent" doesn't mean that the outcomes they support are thus.
As for facts even without the recent "alternate facts" the truth is you can easily come to radically different conclusions even based on the same facts.
That's all fodder for valid political debate. My objection is the denigration and condemnation of those with differing opinions. It's immature and toxic.
> Just because people are "good and decent" doesn't mean that the outcomes they support are thus.
That's exactly what many on the right think of leftist policies. That the bulk of sustainable and healthy solutions should come from the grassroots, and that many government interventions are unsustainable and have profound unintended negative consequences that aren't fully appreciated by the calculus of the left.
> There are many good and decent people, who are firmly rooted in reality and facts, that count themselves as part of the Republican base.
I'm sure that's true, but those people aren't being voted for. Those people aren't in power and as long as people are voting for literal Nazis there is no discussion to be had. Can't have a rational discussion with people who don't believe you should even exist.
The most recent research suggests partisanship, especially negative partisanship, is rooted in identity. Which is to say facts, beliefs, and positions are beside the point.
There have been many more recent follow ups, but the two books I read were Democracy for Realists by Achen and Bartels, and Why We're Polarized by Ezra Klein.
As a recovering activist, this has been very hard for me to accept and adjust to. What hope is there if discourse and persuasion don't, can't work? I have no clue.
I didn't realize that Hunter Biden, or his laptop were running for an election, or for a cabinet position in 2020.
And I don't recall anyone calling the Clinton emails a Russian anything, unless you think that the people building the case against her were compromised.
I do recall that someone ran on a campaign of prosecuting her for it, but somehow, that was forgotten right as soon as the words left his mouth...
I was unclear in my original post, and now can’t edit it. I do not support this stance, I’m saying that this is what they might argue, given the incident the OP posted.
Of course that's their job. The police force is not designed to be a vehicle for social change nor for justice.
The way the system is supposed to work is that engaged citizenry actively overhaul unjust laws and apparatuses, and the police then enforce those new laws.
Unfortunately we have abysmally low civic engagement in most of the western world which leads to the mess we are currently in.
I like to make fun of the French as much as anyone else but I really respect and admire the French people's propensity for protest and to stand up for what they believe. That's advanced citizenship in action.
Advanced citizenship is about not needing revolutions; it is about having a sustainable governing process, one that integrates citizens into their daily lives. So far, I've seen shittier governments take out better ones because they have more might. I don't think what currently exists on Earth is by any means close to ideal, because governments spend so many resources worrying about other governments taking them out.
I'll agree that the citizens of France feel empowered while Americans feel disempowered. And empowerment is a natural consequence of identification with control of larger structures.
Most Americans don't feel like they have any control over larger structures in their countries.
> Most Americans don't feel like they have any control over larger structures in their countries.
This is called "working the refs" or "talking their book".
Americans' main interest in local government is blocking new housing projects to pump their property values. So no matter what happens, they just say there weren't enough meetings about it and the project needs to be cancelled because it didn't get community input.
You don't have to believe them on larger political topics for the same reason you don't have to believe them here.
> I like to make fun of the French as much as anyone else but I really respect and admire the French people's propensity for protest and to stand up for what they believe. That's advanced citizenship in action.
You'll be disappointed then. The French are protesting for benefits and not for social change. Aka, they are not better than the police they are engaging with.
The French aren't protesting for good things, they all believe in 100 different conspiracy theories at the same time and are protesting that the government won't reveal Qanon is real.
It's also completely ineffective, because the government can just ignore them when they get bored, and they don't actually get voted out or anything.
Sometimes they do, sometimes they don't. Democratic systems of government are messy, and that's okay. It's not for you or me to decide what's a "good" thing to protest.
So, your claim is that protests, whose purpose is to change public opinion and to inform leaders of public opinion, have no part in a democratic process - a system in which a government's authority derives from public support? Am I understanding your claim correctly?
Yes. There is no part of the official process where anyone is required to listen to or respect the opinion of a protest. That's kind of obvious.
If you want a rep to know your opinion on something you can call them.
The point of a protest is for when they're not listening to you, which is why they feature more in not-officially-democratic politics like dictatorships. But for that to work, it has to be so large they can't ignore you. (Otherwise, if they're small and still work through being annoying it's minority rule, which isn't democratic. But like I said that doesn't happen because you can just ignore them.)
All politics depends on popular support because your subjects can either leave the country or have you killed if they really don't like you. Democratic politics means there's official channels for exercising this by voting instead of having to do this.
> There is no part of the official process where anyone is required to listen to or respect the opinion of a protest. That's kind of obvious.
Actually, I think there is one - government media. CBC/BBC/etc should cover these instead of covering them up. If they represented the citizens voices honestly, they might not feel they have to blockade parliament to get their point across.
> Democratic politics means there's official channels for exercising this by voting instead of having to do this.
That breaks down a bit when your media only presents one side of certain issues.
There's no requirement for corporate media to be honest, but for state media paid for by the citizens that's basically its only purpose.
> If you want a rep to know your opinion on something you can call them.
You can contact them, but if you can't do it in public others won't know and they can basically kill issues by just ignoring them.
In the French case, media does interview the protestors, which is why we know they're not upset about economic unfairness or whatever but instead think the government is suppressing that the pyramids were built by Jews to mind control you.
This is exactly what the media does, point to the craziest and imply that the group shares all their views. Even if everyone in the group was a conspiracy theorist they can still be right that workers are feeling the strain more than others.
We need to steelman the views of our countrymen, not mock them.
Think of what they'd say about your side if the tables were turned.
We continue to elect charismatic fools (and sometimes charismatic criminals) because we're charmed, or scared, or bewildered, or just too damned tired to care.
Until that stops, there's really no justification for expecting anything rational to come from our leadership.
The generation currently in charge thinks psychology is a dirty word. And even if they would do us all a favor and get some help, they probably wouldn’t use an app to get it.
By the time millennials and gen z are running the place, we’ll have moved on to misunderstanding AI or something like that.
It's funny how blatantly the NSA doesn't care. They feign ignorance when asked or FOIA requested about it, but then also ask for a backdoor to opt-out of the Intel Management Engine[0]. It's like there's no coordinated effort to deny that it's an extreme vulnerability.
You're going to have to explain how, grounded in a reality where politicians fundamentally do not understand the idea of security and have to make decisions based on who sounded the most confident when they argued complete nonsense.
Politicians know this can be abused - just like the NSA, FBI, or even local sheriffs do - but they're glad, because they're the ones in a position to abuse it.
My tinfoil-hatted guess? China had talks with FR, RU, BR, and others this past week. They probably got 3+ favors in exchange for this leak.
Or someone wanted to blacklist someone else's key without a public name and included it in this leak. I wouldn't doubt that MSI, Intel, and safe-pc-for-criminals-r-us keys happen to be present there.
Kind of/almost a good thing. More and more security processors seem to have irrevocable keys or other non-user setups. It's. Just. Not. OK.
And more and more governments are making demands about decrypting users' data on demand, about blowing up security for their own greedy needs. They have no idea what an awful, horrific mess - what buggy-by-design systems - we get when we forgo security for their paranoid snooping. This is such a textbook lesson. Alas that we need another blow to the face to remind ourselves.
The TPM2 spec allows it to do much more than just hold private keys, it can act as a device identifier for attestation. If that's something you care about, then you might want to disable it.
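The attestation flow being alluded to works roughly like this: a remote verifier sends a fresh nonce, the TPM signs its current PCR digest together with the nonce using an attestation key that never leaves the chip, and the verifier checks both the signature and that the PCRs match known-good values. A toy Python sketch of that protocol - the HMAC with a shared secret is only a stand-in for the TPM's asymmetric signature, and `AK_SECRET` is a hypothetical name for provisioned key material:

```python
import hashlib
import hmac
import os

# Stand-in for the attestation key held inside the TPM; a real TPM
# signs with an RSA/ECC key that cannot be extracted from the chip.
AK_SECRET = b"attestation-key"

def quote(pcr_digest: bytes, nonce: bytes) -> bytes:
    # A TPM quote covers the PCR digest plus verifier-supplied
    # qualifying data (the nonce), which prevents replaying old quotes.
    return hmac.new(AK_SECRET, pcr_digest + nonce, hashlib.sha256).digest()

def verify(pcr_digest: bytes, nonce: bytes, sig: bytes, golden: bytes) -> bool:
    # The verifier checks the signature AND that the reported PCRs
    # match the known-good ("golden") values.
    return hmac.compare_digest(quote(pcr_digest, nonce), sig) and pcr_digest == golden

golden = hashlib.sha256(b"known-good boot chain").digest()
nonce = os.urandom(16)  # fresh per-challenge nonce

assert verify(golden, nonce, quote(golden, nonce), golden)
tampered = hashlib.sha256(b"modified firmware").digest()
assert not verify(tampered, nonce, quote(tampered, nonce), golden)
```

Because the attestation key is unique and bound to one physical chip, a quote also identifies the device - which is exactly the privacy concern the comment above raises.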
The fact that Windows 11 won't work without a TPM is a bonus.
> It was a genius idea - you cannot install Windows 11 on an old computer. So you need to buy a new one.
> Monopoly practice hidden as security.
Actually you can - it's only checked by the installer, and if you modify it, there is nothing stopping you from installing it. I installed it on my parents' computer. The CPU wasn't supported, but after installation everything worked OK.
The point is that it's more difficult and acts as discouragement. Ditto for the elusiveness of the word "support" --- it can mean anything from "nearly impossible" to "it works but we don't want you to".
Yeah. Microsoft not allowing me to install Windows 11 on my 486 is a secret ploy to force me to upgrade, when Microsoft makes no money on new sales of computers!
Eh, it's a bit more exclusive than that. I'm writing this comment on a Sandy Bridge Core i7 which handles current gen games (mostly thanks to the GPU, but still), yet Win 11 won't support it.
Not a huge loss in my eyes, I see Win 11 as a strictly worse OS than Win 10. Haven't yet seen any feature whatsoever that strikes me as an improvement.
"Microsoft could opt for Windows 11 or Windows 12 in future, but if people upgrade to Windows 10 and the regular updates do the trick then everyone will just settle for just "Windows" without even worrying about the version number."
Windows 11 is fetid garbage. There is no improvement worth subjecting myself to it, and as far as I can see there never will be. I will never install or use it. LTSC for now and the foreseeable future for my use cases. Updates are done only if I deem them additive, not as routine. Security is not my biggest red line anyway. Updates are a choice, not an inevitability, and retaining that choice is my right as a user as well as my responsibility to exercise wisely. Forcing compliance is a bad way to get me to do things. TPM, UEFI, and attestation systems of any kind are used only as I see fit and how I deem acceptable.
I rue the slow march away from customization, self-administration, interoperability and the long walk toward closed systems with trusted components required to work and perform basic things. That approval of the gatekeepers. I will stay my own gatekeeper, thanks. Netflix can suck it, i'll torrent it instead if it won't work without golden keys.
All that stuff is really only the foundation for the long game of future-proofing revenue streams for content makers anyway. A means to exercise control over it and you using it.
Which requires a 12th-generation CPU or newer. For those of us on a budget, that may not really apply. IIRC the cheapest one is the i5-12600KF at ~$270. So maybe not that unaffordable.
> Better support for high refresh rate monitors and variable refresh rate setups.
Isn't this more to do with CPU/GPUs? I personally don't see this as a feature of a OS. This is also the first time I have heard of this.
> Windows Subsystem for Android.
Which *only works for apps on the Amazon App Store*. Also known as functionally nonexistent for most of us. What a letdown, and what a useless feature. I was very excited to do native testing of apps without an emulator of some description. They just had to vendor-lock it. Wish I knew why, but pretty much everyone I know was quite bummed when they found out.
> More cohesive UI
In what world is the Windows 11 UI more cohesive? I have Windows 10 on my home PC and Windows 11 on my work PC.
I can tell you exactly which UI is better, simply because one of them actually makes some bloody sense. They didn't even fix the hybrid settings menu mess with control panel. Or even allow opening multiple settings menus. The bloody start button starts in the middle! There are ads in the start menu for social media, ads in the start page for browsers. Task Manager's UI is literally broken, with non functional scaling.
I could list off a dozen other things just off of what I remember.
> Better hardware and driver development support going forward
Not relevant IMO. Not until Windows 10 leaves long-term support, anyway. This isn't a feature, just like how Google abandoning your device, and you the customer being forced to move on, isn't a "feature".
I’m on planes more than once a month (nomadding+consulting). I work with people who travel 80% of the time.
Stealing luggage is so rare it's meaningless. If someone were really going to steal your luggage, the easiest way is to grab it at baggage claim and just knock the lock off after leaving the airport.
There isn’t a ring of thieves at the airport stealing your clothes.
Lost luggage and delayed luggage is a bigger concern.
So I mentioned in another reply that I’m in a plane more than once a month on average between nomadding and consulting.
The rest of the story is that my wife and I literally have everything we own - aside from our former home that's now a rental and our "winter home" that's rented out 7 months a year - in 4 suitcases and two carry-on book bags.
If we lost all four of our suitcases, it would be a nuisance. But we could rebuy everything for less than $1000.
The only time our stuff was mishandled was when I made a mistake and put her portable mixer (Amazon - not an affiliate link: https://a.co/d/iBQivv5) in my checked bag. I could see why X-rays thought it looked suspicious. They broke the power cord.
On the other hand, never underestimate the ability of the "security" establishment to spin this as a bad thing and instill more fear and paranoia.
All these user-hostile, in-the-name-of-"security" features do is take control away from you and put it in the hands of some centralised entity whose interests certainly do not align completely with yours.
Isn't it good? Does leaked key mean that now owners of hardware will be able to read and modify the firmware, including IME, and check it for backdoors?
Such keys should be in the hands of users, not Intel.
Part of the blame, imo, lies with how clunky tools are at the lower levels. I've seen plenty of hardware based signing protocols that don't allow for key hierarchies.
Higher level tools push this along as well. Hashicorp Vault also, last I checked, doesn't allow for being a front end to an HSM. You can store the master unlock key for a Vault in an HSM, but all of the keys Vault works with will still be in Vault, in memory.
HSMs are not secure against sustained, competent hardware attacks. This should have been kept on HSMs in multiple secure signing-service facilities with authenticated access, and never handed to an OEM of any kind in any form.
It is not a conspiracy. Just like the iOS App Store it is for your own protection. There is no legitimate reason to run your own software on general purpose computing hardware.
Doesn't really matter if it's /s or not; it's a ridiculously reductive and extremist position either way.
Security is about tradeoffs, most notably security vs convenience, but also many others.
Anyone who suggests that their personal preferences in tradeoffs are not just universally correct but also the only reasonable position to hold is just silly.
This rings more true than many would want to accept: Most people view and use computers as household appliances; they just use whatever is installed on it and if it breaks they go out and buy a new one.
For most people there are, in fact, no legitimate reasons to run "their own" software on "general purpose" (read: household appliance) computing hardware. Almost nobody runs custom software on their washing machine or toaster.
Is everything that is going to fail eventually just useless theater? Like, are new cars just transport theater because they will have to be junked eventually?
I agree that master private keys are bad security design, and we can and should do better. I'm just not willing to say that all past security value is retroactively nullified. That feels polemic more than realistic.
There's a difference between temporary security and security theater.
Real but temporary security -> This 2048-bit key you generated will be commercial-grade protection until at least 2030. Sometime after that, computers will be strong enough to brute-force it. Do not store anything with this key that will still be highly sensitive in 7 years. It's possible the underlying algorithm gets cracked, or a leap in quantum computers happens that makes the key obsolete sooner.
Security theater -> All software running on this chip must be signed with our master key. Please trust all software we sign with this key, and no malicious party will have access to it. You are not allowed to run arbitrary software on your hardware because it is not signed with our key.
In the first case, the security is real. You own the lock, you own the key, and you control the entire security process. In the second case, you own neither the lock nor the key, and you have only limited access to your own hardware.
The trick is seeing different personas rather than just "you".
IT admins are thrilled to have limited access to their own hardware, as long as adversaries do too.
In corporate IT, the greatest fear is insider attacks, either knowing or because statistically some users will inevitably make mistakes. Secure boot is fantastic in this context, even if it feels like an unreasonable impingement to gamers / tech enthusiasts.
I honestly don't understand why people act like this. Wanting to be able to ensure firmware isn't maliciously modified is a good thing. Open firmware is also a good idea obviously but there has to be a way to ensure firmware is signed either by OEM or your own keys like secure boot.
As for games, lots of people play games and want good anticheat. If you don't like that you don't have to play those games but no need to act like the way you are because other people want decent anticheat.
>honestly don't understand why people act like this.
Because it’s social pressure to compromise your computer to a gaming company to get to play a game.
People don’t care about the anticheat on their computer, they want it foisted on everyone else who plays, which is a sucky proposition for privacy and security minded people.
It’s like advocating for the TSA to be controlling access to the grocery store because you want to feel safe there and don’t mind the privacy violation.
>People don’t care about the anticheat on their computer, they want it foisted on everyone else who plays, which is a sucky proposition for privacy and security minded people.
No they want games without hackers. Which kernel based anticheats helps with. Can it also impact privacy and security? Yes no doubt but so can any program running on the computer even in userspace. Remember we are talking about kernel anticheats on windows lol.
If you are really worried about it you could dual boot like many people. Either way this whole argument seems silly to me.
You say “no” but then repeat what I said worded differently. When the current market for “games without hackers” is filled with kernel modules, it sucks.
It's worth remembering that we already had a solution to the games-with-hackers problem: smaller community-run servers where people can get to know each other. But game companies want the centralized model so that they can stay in control of how the game is used, to continuously monetize it and to forcefully retire it once they want to push a newer one.
Hackers are mainly only a problem for anonymous ranked matchmaking. That's not to say cheaters don't exist without it, but they are a) much less disruptive, b) have much smaller reach, and therefore c) are less motivated.
Like most ills being used to push anti-user technology, cheating is primarily a problem created by the industry itself.
It's a bit of a stretch, but maybe we as a society should be considering the social pressure itself as the problem. Why do we allow ourselves to be 'controlled' in this way by others? Especially young people? What if we could learn to move past this?
What do you mean by this? As the user you are intending to have the game and its anticheat run. Having to download and run a game on your computer isn't compromising your computer either. Maybe the only thing which doesn't give the game company power to run potentially malicious code on your machine is cloud gaming. That also solves the cheating problem at least.
>You're advocating for installing a kernel module that you don't even know what it does exactly when running a random game.
You don't know what the game will do either. It requires trusting Riot even if there isn't an anticheat.
Also most users will never know what the other kernel level drivers do.
>Would you also support a full cavity search each time you decide to fly a plane?
I don't see how this is related?
>The kernel module has full access to your hardware, you don't know what it does exactly.
The same can be said about any other kernel level driver and even about Windows itself.
>People got so complacent in recent years, and this is on a technology forum no less.
What Riot wants to do is not possible with a user level anticheat. Once Windows eventually gets its security improved such that apps can query the integrity of the system Riot would likely be able to get away with a less privileged anticheat.
>I guess today the Sony rootkit[1] would be totally acceptable.
If it didn't try and hide itself I would agree with you.
Sure, isolation exists except for Riot Games (and any other company that adds similar mandatory modules, which eventually will be all). Oh yeah, assuming there also won't be any vulnerabilities, but that's impossible, because we all know about the high quality software coming from gaming industry.
2. All it actually does is keep users trapped in Windows. God forbid anyone actually use Linux, or even a VM!
The only actually effective anti-cheat is the original: moderation.
Now that users aren't able to host their own servers, they can't do moderation. Game studios don't want to do moderation themselves, so they keep trying (and failing) to replace it with automated anticheat systems.
This here. Cheaters will always find ways to cheat. There are already cheats that run on a completely separate machine so they can't be detected. Legitimate customers keep getting screwed.
Game companies want the centralized model so that they can stay in control of how the game is used to continuously monetize it and to forcefully retire it once they want to push a newer one.
Because people prefer to pay 10x as much to have apple/google/etc host it for them. It's still a thing for people who know what they're doing and the experience is vastly better and more private and secure.
> As for games, lots of people play games and want good anticheat
Great, let's install a backdoor in every computer so that some people can play games and watch movies. No. A computer is a thing for computing numbers, not a replacement for a TV.
I can't take people like you seriously. The anticheat isn't a backdoor. It doesn't ship with the operating system or come preinstalled in any way. You opt into it when you play the game. Literally nothing is forcing you to use it or have it installed on your computer.
I understand this is the internet and being super dramatic is part of it but can we please be for real for one moment?
What's wrong with people being upset that in order to play the game they have to install low-level kernel modules? It's flawed software that hits an ant with a sledgehammer. Adding insult to injury, there are tons of reports online of people getting BSODs and of other security features not working while the module is enabled. It's invasive, and it's silly to be as dismissive as you are of people's concerns.
I meant things like Microsoft's signing keys embedded inside BIOS, Microsoft Pluton CPU, secure enclaves inside CPU (now deprecated though) and things made for Secure Boot and DRM.
> Wanting to be able to ensure firmware isn't maliciously modified is a good thing.
I'm not sure it's a good thing at its core. The intent seems legit on the surface, but digging into the implementation you'll always end up having an adversarial relation with your user's security and device ownership.
On games, I kinda see this as an argument for preserving a special status for consoles, where the maker keeps the right to secure everything to insane levels. Doing the same on a general-purpose computing platform isn't acceptable. Banking and digital currencies are more of a blurry line, but games definitely shouldn't be accessing the most secure parts of the platform.
If anything, opening the door for a whole community to hack the base security of your computing life, when literal life-and-death applications also rely on it, shouldn't be allowed.
But how will this provide the game company with continuous revenue streams from selling you database entries that you can show off to complete strangers?
Riot's anticheat is quite invasive, but Valorant is a competitive ranked first-person shooter; allowing cheaters violates the integrity of any player ranking system, and that ranking system is one of the primary appeals of the game.
Wanting to play competitive games without cheaters is something that real users actually want and they get real value from. Your mockery of these people doesn't remove the value they get from being able to play without cheaters.
Again, you are ignoring that users gain real value from this. The reason copyright owners are making more profit is because users are finding value in their product and are being given what they want.
So don't play the game. Personally, I want kernel-level anticheats because they make it much harder to cheat in the game. I want to know that my opponents are not cheaters. That's something I don't have in CS:GO, a game rife with cheaters, or TF2, a game rife with bots. (Valve's usermode anticheat is absolutely useless.)
Yet it's still pretty dang easy to bypass VGK and cheat in Valorant if you even slightly know what you're doing. Now you have the worst of both worlds. In theory, Valve's VACnet and Trust Factor are the ideal solutions, but in practice... not so much.
Make every player pay a deposit which is confiscated when they get caught cheating. Make servers with different deposit levels, so that people who really care about cheating pay over $1000 for example.
Better than having keys which I cannot control on my computer. And I don't play games anyway.
Or allow people to run their own gaming servers that they can moderate. That solves the cheating problem, and the problem of what happens when a gaming company stops supporting their online game or goes out of business.
I've read a lot of anticheat RE in the past; it seems the cheater/modder people have found their way to the infosec community. Can you elaborate on how this would accelerate Valorant cheating?
Is their Vanguard thing using some Intel feature?
Doesn't it only require secure boot on Windows 11? For now you can get around that requirement by simply staying on Windows 10, until they retire support for that.
Is there any tutorial that I can learn to do it? Should I Google "dump Intel firmware" or some other more specific ones? I'm going to do some research after going through my training this afternoon.
Nothing's prevented you from reading the firmware - this is a signing key, not an encryption key. Multiple people have spent time reverse engineering the ME firmware, people have found bugs but no evidence of a backdoor.
There was this recent article (here on HN) about these "evil public charging ports that can hack your smartphone" and how there is an entire ecosystem of devices to protect against them... when in practice no one has heard of a single example of such an evil charging port, and carrying out such an attack is so target-specific and leaves so many warning signs that the entire thing sounds implausible, to say the least.
These evil maids are even more implausible than that. It has to be ridiculously targeted. If you are really targeted by such a powerful state-like entity, wouldn't it make much more sense for them to just send a national security letter to Intel (or whatever the weakest link in your chain is - and there are plenty of extremely weak links here, like the BIOS manufacturer) and/or backdoor the hell out of it?
Secure Boot was never about security for normal users, nor security for the majority of us. This is like https://xkcd.com/1200/ all over again. At the point the attacker can write arbitrary bytes to your hard disk, it's way past the point where the majority of users care.
It's not just about evil maids and physical access. Even if an attacker got root-level RCE, they still did not have the ability to screw with hardware security. With the UEFI keys, they suddenly have a whole new level of persistence, meaning that if you ever get pwned, you can basically throw your hardware in the trash, because even a system-level wipe is not a guaranteed way to clean the malware.
If your attacker has root, and your system allows flashing the BIOS from root (many do), he can simply disable Secure Boot, or enroll one extra signature -- his. If the system doesn't allow flashing a BIOS even if an attacker has root access, then Secure Boot makes no difference whatsoever.
What does the boot rom have to do with the root user of an operating system? How does root help you disable secure boot if there is a password to change UEFI settings for instance?
At the point where you have root, you've basically won. You can ship the user's data elsewhere. You can install a keylogger. You can empty their bank account.
But yes, if the OS also lets you change the boot ROM, then you can make your root access semi-permanent.
All smartphones use ARM and USB and Android, and _even then_ the evil USB charging port is targeted -- you still have to tailor it to the target's screen ratio, Android version, Android UI/skin, even launcher if they have one, etc.
> it'd just be a matter of replacing a binary with a iffy'd version that runs before any decryption happens, e.g. replacing plymouth.
You'd at least need to imitate the UI your target is using for unlocking the disk (e.g. plymouth theme). Then, after the user types something, either virtualize the rest of the boot process (which is already extremely implausible), or otherwise reboot in a way that does not immediately cause the user to be suspicious. All of this is as targeted as it gets. A generic version would get as far as your average phishing email.
But... how do you plan to replace my bootloader in the first place? You'd need root access for that. At that point, it is already game over for the target! Why would you need to tamper with the bootloader at that point?
Or are you thinking about breaking into my house and doing that to my offline computers? How is that not a "targeted attack"?
Adding `store password somewhere` doesn't get in the way of plymouth's theming (which is separate), and it doesn't change the rest of the boot process, etc. etc. It's taking an open-source project, adding some lines to it, compiling, and swapping a binary out. Why would it need to do any of this other stuff?
> You'd need root access for that. At that point, it is already game over for the target! Why would you need to tamper with the bootloader at that point?
Yes that is the crux of the Evil Maid attack, a drive-by install of software. e.g. at a coffeeshop while one is on the toilet, at an office, at a hotel by an evil maid, etc etc. AEM is about detecting changes in trust: if the loading sequence is changed, then the verifier (another device like a usb dongle) can't verify (since the TPM can no longer unlock the prior secret due to the chain changing).
You might want to look into the article I linked in my earlier comment to get the full idea of what is meant by evil maid
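The detection mechanism AEM relies on can be sketched with a toy measurement chain. This is a minimal sketch of the TPM-style "extend" operation, with hypothetical component names; a real TPM extends binary digests into PCRs and gates release of a sealed secret on a PCR policy:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: new PCR = H(old PCR || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(components) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# Enroll: record the PCR value produced by the trusted boot chain
trusted_chain = [b"firmware v1", b"bootloader v1", b"plymouth v1"]
enrolled_pcr = measure_chain(trusted_chain)

def secret_released(chain) -> bool:
    # The TPM only unseals the prior secret if the measurements match
    return measure_chain(chain) == enrolled_pcr

print(secret_released(trusted_chain))                      # unmodified boot -> True
print(secret_released([b"firmware v1", b"bootloader v1",
                       b"evil plymouth"]))                 # swapped binary -> False
```

Because each extend folds in the previous value, the evil maid can't swap one stage and recompute the rest: any change anywhere in the chain yields a different final PCR, so the verifier's secret never unseals.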
> Yes that is the crux of the Evil Maid attack, a drive-by install of software. e.g. at a coffeeshop while one is on the toilet, at an office, at a hotel by an evil maid, etc etc.
If the laptop was left online and unlocked: What do you expect to gain by installing a patched plymouth versus installing a traditional remote control software and/or keylogger ? You don't even need root for the latter!
If the laptop was left locked: do you plan to open the laptop, remove the disk, transfer some files to it (matching the same distro & version of all components your target was using, otherwise the entire thing may just crash or look different and reveal the attack), hope the target doesn't notice his laptop was literally taken apart (most laptops just can't be opened at all, for the ones which can, even mine has a simple open-circuit tamper detector...), then come back in the future _and do the same_ again to recover the captured password? And how is this not a ridiculously targeted attack?
Besides, at that point, you could simply install a wiretap on the keyboard, an attack which, unlike the evil maid crap, I have seen _millions_ of times in the wild (e.g. at public PIN pads, card readers at gas stations, etc.).
I agree that this is the reason, but having Intel as the guard only makes it so that it could have already been hacked/leaked/bypassed and you never know.
At least if it was user controlled we can ensure that other people's leaked keys don't bypass our security.
If it's user controlled what stops an attacker from bypassing it as the "user"? Most people just want to have a secure device and will not think about security, not want to do any work to secure their device.
"The name refers to the scenario where a maid could subvert a device left unattended in a hotel room – but the concept itself also applies to situations such as a device being intercepted while in transit, or taken away temporarily by airport or law enforcement personnel. "
I genuinely hate this "cute" yet condescending name. Maids are on the low skill low wage end of the spectrum. Even if there is a motive to mount a physical attack, possibly a targeted one, it will either be performed by a person impersonating a maid or with the help of an operator giving instructions. So, either an "evil" maid who is not really evil, or an evil "maid" who is not really a maid. Contrived, inaccurate and demeaning.
Most people don't choose low paying physically demanding jobs when they can paste together stack overflow answers.
That said, "Evil Maid" fits here because of that - they are not someone you expect to need technical protections from, but theoretically they could be a genius adversary, or just hired by one.
I tend to agree with your analysis. But that is precisely my beef with the term. You seem to be saying that the term fits _because_ it describes a population that violates security expectation, _because_ it is "generally not smart, except for theoretical surprises" or "easy to hire for nefarious purpose". Neither one is very flattering, neither one equates to "evil" and neither one applies specifically to maids. A neutral term would have been an "adversary with temporary physical access" but that is not nearly as catchy.
It's definitely on the high end of attacks and a bit unlikely, but I don't think it's exclusively nation states. Well within the reach of thieves who want to steal your bank info or something.
The most alarming part of the article is that we are only really getting a revocation of these keys because they didn't pay a ransom and the ransomers were apparently too stupid to sell them secretly instead of releasing them to the public.
As far as we know, if MSI had paid no one would know that Intel shipped shared private keys to multiple vendors who could then lose them like drunken monkeys.
People ask why these weren't on HSMs.. The article seems to claim that they weren't even able to generate the most important ones in the correct locations, let alone on HSMs with non-extractable settings.
My concern is that MSI not announcing they were compromised implies they were going to act as if the keys were still secret. The keys were permanently compromised as soon as they were exfiltrated, but there's no longer a market for buying them.
Knowing the ransom is important as a window into how incompetent Intel was... but paying a ransom for a secret is extra dumb. Glad they didn't, and announced.
AFAIK MSI didn't announce, the hackers leaked the keys themselves as a public service. For all we know MSI was still hoping to reach a lower ransom amount and then never disclose that the keys were most likely sold in the exploit market.
I think we have to assume Intel was willing to put together a broken system with total incompetents. I think the US is better off with foreign chips watched closely than Intel watched poorly.
So true, I forgot about that, wow, how the frog has boiled so slowly over the years.
I now see every GitHub repo has a 'report' button, to facilitate censorship, in the name of 'safety' as usual. With a huge list of categories.
Hasn't Intel heard of locking keys in hardware, e.g. with hardware security modules similar to, but faster and more flexible than, a TPM? Surely one of the main developers of the TPM understands that concept... right? /s
Intel gave it to MSI, but I may have been incorrect before. Apparently the keys were shared across multiple OEMs (at least that is how I read this below):
>The leaked private keys affect Intel’s 11th, 12th, and 13th generation processors and were distributed to various OEMs, including Intel itself, Lenovo, and Supermicro.
...isn't such signing usually done the other way around? Intel would hold the signing keys, MSI (or any other OEM manufacturer) then sends (over a secure channel) their firmware to Intel, Intel signs it and sends the signature back.
The processor has to verify the signature, which means the public key needs to be embedded in the processor. It makes sense that the processor manufacturer creates the keys and distributes them. If it was the other way around, then a) fifty motherboard manufacturers would make their own keys and the processor would need space to embed fifty of them, and b) a new fifty-first motherboard manufacturer coming into the market would be unable to support any existing CPUs. It would also explain why the same key was used for signing multiple motherboard manufacturers' firmware.
Why distribute the keys themselves instead of having some way for the motherboard manufacturers to upload an unsigned firmware and then download a signed one? That is probably laziness.
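The signing-service flow proposed above can be sketched as follows. This is a toy illustration only: HMAC stands in for an asymmetric signature so the sketch stays stdlib-only, whereas the real scheme would use a private key held at the signer and a public key embedded in the processor; function names are hypothetical:

```python
import hmac
import hashlib
import secrets

# The key lives only inside the signing service, never at the OEM.
_SIGNING_KEY = secrets.token_bytes(32)

def signing_service(firmware: bytes) -> bytes:
    """OEM uploads unsigned firmware, gets a signature back.
    The key itself is never distributed."""
    return hmac.new(_SIGNING_KEY, firmware, hashlib.sha256).digest()

def cpu_verifies(firmware: bytes, signature: bytes) -> bool:
    """What the boot ROM would do before executing the firmware."""
    expected = hmac.new(_SIGNING_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

fw = b"vendor BIOS image v1.23"
sig = signing_service(fw)
print(cpu_verifies(fw, sig))              # genuine firmware -> True
print(cpu_verifies(b"tampered", sig))     # modified firmware -> False
```

The point of the design is visible even in the toy: the OEM only ever handles firmware and signatures, so a breach at the OEM (the MSI scenario) leaks nothing that lets an attacker sign new images.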
If this is the Intel key that's burned into the CPU (or PCH, I forget) e-fuses, and that manufacturers use to sign their own BIOS and UEFI payloads, do you think this will enable us to edit the manufacturer BIOSes to tweak variables, re-sign some of their EFI payloads, or just remove some of those payloads? (Like the ones that check PCI or USB IDs against an approved whitelist of WWAN and WLAN cards.)
I ask because this would enable a lot more security: I could mod my BIOS and add an extra EFI module signed with my key that wouldn't trigger Boot Guard. Then, later during the boot, an encrypted grub or whatever could check the modules, say with a TPM-enrolled key holding their hashes: this could make the computer refuse to boot further if, say, the MAC address of the network card (or the CPU serial, or the NVMe serial) doesn't match what I added.
Yes, an evil maid attack could do the same to me (by removing the checks I've added and replacing them with, say, NOPs), but that would alter their hash.
And if the Secure Boot payloads signed with my own key first check for the existence of these modules, and then verify that their signatures (kept inside my signed payloads) match my versions (and not the evil maid's or the manufacturer's originals), I'd still get the benefits of Secure Boot - just with my own keys all the way down.
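The check described above could be as simple as comparing module hashes against enrolled values. A minimal sketch, with hypothetical module names and contents; in the real scheme the enrolled hashes would live sealed in the TPM rather than in a Python dict:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical extra EFI module added to the firmware image
my_module = b"EFI module: refuse to boot if NIC MAC != enrolled value"

# Enrolled at setup time (sealed in the TPM in the real scheme)
enrolled_hashes = {"mac_check.efi": digest(my_module)}

def boot_stage_check(modules: dict) -> bool:
    """Run by the signed payload: continue booting only if every
    enrolled module is present and unmodified (not a NOP-filled copy)."""
    return all(
        name in modules and digest(modules[name]) == expected
        for name, expected in enrolled_hashes.items()
    )

print(boot_stage_check({"mac_check.efi": my_module}))    # intact -> True
print(boot_stage_check({"mac_check.efi": b"\x90" * 8}))  # nopped out -> False
print(boot_stage_check({}))                              # removed -> False
```

Because the expected hashes ride inside a payload signed with the owner's key, the evil maid can't just delete the modules or patch the check: either the hashes mismatch, or the signature on the payload carrying them breaks.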
On the one hand that is the goal; on the other hand the applicable standard seems to be FIPS 140, which counts products with a flawless security history like OpenSSL amongst its long-time adherents. In other words, there is a non-zero chance that the modules will just spit out the key if you even look at them funny.
What factory reset? PC motherboards are not Android phones. When you flash a new BIOS you can't get back to the old one without flashing it, and obviously new BIOS versions after this incident will likely forbid flashing of older versions.
On many modern motherboards there is a backup bios that you can boot into by shorting certain pins. This can be done in about 10-15 minutes by a person with a metal paperclip, some basic technical knowledge and instructions from a YouTube video. I don't think there is even a mechanism to update this backup version, it's just a "known good" bios shipped with the hardware so that a bad bios update does not brick the device.
So even if they push an update, people can pretty easily downgrade to a vulnerable version if they want to.
I doubt there are actually many boards with dual BIOS. For a while Gigabyte had a lot of them, but for many years now very many of their boards don't have this feature.
Anyone saying secure boot "brings absolutely nothing" clearly doesn't understand how secure boot works (or is just arguing in bad faith). Secure boot has issues (see the key revocation issue & the vulnerable UEFI program used by malware to install a bootkit), but it does address a real security issue.
People might not like who holds the commonly preinstalled keys (Microsoft and motherboard OEMs), but even then you can add your own keys and sign your own images if you want (there was just a post yesterday about doing this for Raspberry Pis).
> People might not like who holds the commonly preinstalled keys (Microsoft and motherboard OEMs) but even then you can add your own keys and sign your own images if you want (there was just a post yesterday about doing this for raspberry pis),
I like Secure Boot, and I like that I can select my own keys to sign things the UEFI will run, but I don't like that I can't replace the UEFI itself, since it's protected by Boot Guard.
Now if I could edit the UEFI, that's a game-changer: I could have my signed UEFI payloads check that the UEFI firmware has the parts I want (and not the parts I don't want) and refuse to keep booting if it doesn't.
I don't think it's absurd at all. It isn't required in any way (opt-in), it lets you use your own keys (no preinstalled Microsoft or other big-corp keys), and it isn't possible for someone to modify which keys you installed.
Of course, if you lose your keys you can't sign anything else, and that would make it basically e-waste - but most things end up as waste when you take actions that are reckless and can't be reversed (which is what losing the keys would be). Plus, tech tends to end up as e-waste after less than a decade anyway. Like, sure, you could still be using an AMD Steamroller CPU, but realistically after 10 years you'd be better off with a cheaper, more power-efficient chip anyway.
> Plus tech tends to ends up as ewaste after less than a decade anyways. Like sure you could still be using an AMD steamroller CPU but realistically after 10 years you'd be better off using a cheaper more power efficient chip anyways.
I'm not sure what you are trying to argue, but people routinely buy used computers on marketplaces. Raspberry Pis with locked keys are essentially paperweights once the owner doesn't want to use them anymore.
And realistically, the biggest e-waste generators nowadays are smartphones, which are too locked down to be reused well.
> I'm not sure what you are trying to argue but people routinely buy used computers on market place. Rasperry pies with locked keys are essentially paper weights once the owner doesn't want to use them anymore.
Why can't the owner who wants to sell their locked Pi give the buyer the key?
That wiki page buries what might be the rationale in the "What is UEFI Secure Boot?" section:
> Other Linux distros (Red Hat, Fedora, SUSE, Ubuntu, etc.) have had SB working for a while, but Debian was slow in getting this working. This meant that on many new computer systems, users had to first disable SB to be able to install and use Debian. The methods for doing this vary massively from one system to another, making this potentially quite difficult for users.
> Starting with Debian version 10 ("Buster"), Debian included working UEFI Secure Boot to make things easier.
Sounds plausible, but I don't know how seriously to take it, when that wiki page also includes very generous and regurgitated-sounding bits like:
> UEFI Secure Boot is not an attempt by Microsoft to lock Linux out of the PC market here; SB is a security measure to protect against malware during early system boot. Microsoft act as a Certification Authority (CA) for SB, and they will sign programs on behalf of other trusted organisations so that their programs will also run. There are certain identification requirements that organisations have to meet here, and code has to be audited for safety. But these are not too difficult to achieve.
I normally look to Debian to be relatively savvy about detecting and pushing back against questionable corporate maneuvers, but it's not perfectly on top of everything that goes on.
Can you provide examples of such pushback from Debian? I always viewed them as a typically understaffed, underfunded volunteer effort without the resources to push back against funded technology. I'm ready to be wrong on this, if you can help me out!
For example, Debian putting their foot down on closed drivers and (for a long time) downloadable device firmware blobs.
I've also seen Debian be very responsive when I pointed out that a particular package was phoning home before consent was given.
And one of the perennially annoying parts of the Debian installer is that when you think it's started a long unattended stretch of package installation, it soon pauses to ask you to opt in to some package usage telemetry (so at least they're asking before doing it).
I definitely get the understaffed vibe from Debian, but I'm also still pleasantly surprised how well they execute in general.
Contrast with a certain commercial derivative -- which snoops, installs closed software without the user understanding that's what they're doing, pushes an IMHO horrible different package manager, is sloppier about regressions in security updates, etc.
I wish I had time to volunteer right now to scratch some of the itches I have with Debian, and very much appreciate all the work that others have done and are doing on it.
Debian keeps track of all remaining privacy issues in all packages (i.e. issues that have not yet been corrected or patched by the Debian package maintainer):
I can see your point but, geez, that's pretty depressing if it's the only reason it's supported!
As a side note, having installed Debian with Secure Boot on several systems, I'd say I still had to muck around quite a bit in the BIOS/UEFI. The latest one I scratched my head over was an AMD 3700X on an ASRock mobo, where I somehow had to turn CSM (Compatibility Support Module) off, otherwise Debian would stubbornly start the non-UEFI (and hence no Secure Boot) installer. On my Asus / AMD 7700X things were a bit easier, but I still had to toggle some Secure Boot setting (from "custom" to "default" or the contrary, I don't remember). All this to say: it's still not totally streamlined, and users still need to muck around anyway.
I’m confused, why would a kernel need to be signed if you don’t actually care about secureboot? If you have a signed GRUB, then UEFI with SB will run it, after which you can use GRUB to launch any kernel, signed or not, right?
No, a MOK (Machine Owner Key) is just an unprotected UEFI variable being added. It's not the same as adding your own key, which can, say, disallow running payloads signed by Microsoft's key.
On one of my systems, disabling Secure Boot also disables other aspects of the BIOS. I forget what, maybe use of the integrated Intel graphics? It was severe enough that I spent an hour figuring out how to make Secure Boot work instead.
Software can still modify the bootloader. Secure Boot does not protect against that. It will just complain on the next boot ... unless the replacement bootloader has been signed with the MS signature, the BIOS manufacturer signature, the OEM signature, or a bazillion other signatures.
Even if you were to completely replace all of the signatures with your own, you are going to have to trust some of the MS/manufacturer ones (unless you replace all the manufacturer-signed firmware modules with your own).
>unless you replace all the manufacturer-signed firmware modules with your own
... of which there might not be any. E.g., none of my half-dozen SB-using systems (desktops and laptops) have anything in the ESP other than the bootloader and UKIs I put there, and they boot with my own keys just fine.
I think this is not general enough. What we would need is the Microsoft Secure Boot private key, so we could just sign EFI binaries and have them work everywhere without mucking around in the BIOS setup.
Afaiu, this key is specific to certain generations of Intel CPUs.
There seems to be a bit of a precedent with the AACS DVD encryption keys that got leaked (https://en.m.wikipedia.org/wiki/AACS_encryption_key_controve...): the suppression of that key seems to have failed, it was widely copied, and you can even find a copy of it in my link to Wikipedia.
"Your honor, I wasn't copying that movie. You see, I applied a mathematical formula to the .zip file, and it just happened to produce the movie as output. Coincidence!"
(That's not to say the key is copyrightable, it's not. I think the relevant law would be the DMCA anti-circumvention provision.)
"I didn't distribute the movie, just a file that XOR'd every byte with 255!"
Technical people tend to see the law as a technical thing, where technical arguments will win. Courts are generally unamused, since every judge has years of experience with defendants who think that they've discovered one simple trick.
> How can you scan your firmware on linux, without running an unknown payload, to know if you're affected?
According to the Twitter thread above, you would upload the original OEM firmware for your device to Binarly's web portal at https://fwhunt.run. The firmware file matching your device could be obtained from the OEM's website rather than from the running system. I haven't tried this myself, and I don't know if it requires unpacking or pre-processing the OEM firmware file format.
Maybe someone could add key manifest inspection to this OSS tool, https://fiedka.app.
Hopefully Intel and OEMs will make official statements soon.
If you're copying a firmware file from the OEM's website to Binarly's website, then receiving a text report, they would have an IP address, browser fingerprint and device model number, but little else.
Sure, but Intel is ultimately left holding the bag here, not the OEM, and it was totally within their power to put stipulations in the contract around key management.
Contracts don't work. IMO Intel should not provide raw keys to the OEMs but HSMs with the keys preloaded. And MS should require good key management for the Secured Core sticker (although probably few/no MSI products are Secured Core anyway).
There is no reason to use a manufacturer key anyway, at least for Secure Boot.
Obviously it isn't in everyone's skillset, but if you have the means, there is nothing preventing you from generating and using your own key.
Honestly, it seems like a good basic security precaution, not only to protect against leaks like this, but also to counteract any backdoors (although that's kind of a moot point with chipmakers).
1. Some EFIs are broken in ways that make using private platform keys hard or impossible.
2. There are PCIe cards that need option ROMs to be executed (most commonly dedicated GPUs). These ROMs are often (though not always) signed by one of the Microsoft keys, and removing that key from the trust db will prevent the ROMs from running, leading to all kinds of problems, e.g. not having any video output and in turn not being able to undo the EFI setting / disable Secure Boot. You can make sure the specific ROMs are whitelisted, but then you need to be very, very careful about e.g. graphics drivers updating the GPU firmware and the like. And putting the right data in the trust db isn't easy either.
Is there a way to know whether it is safe to enroll my own keys? I always wanted to, but never did, because I often read that it can make the system unbootable.
The question is whether you have any mandatory UEFI drivers or not. If they're in the ESP you can just look there to check, but UEFI drivers can also be loaded from PCI cards or baked in the firmware itself.
Thanks, I'm not seeing these lines in the event log, that's good. Are these the only things that could brick my laptop, or are there other known "security features" that vendors implement?
Option ROMs are not "security features"; they are more of a legacy x86 thing which, if corrupted (e.g. modified by malware), would be a complete security nightmare: malware in a place where it can access most things and can't be detected at all (slightly oversimplified).
Because of this, validating option ROMs is _very_ important, to the point that some would argue it's more important than validating that e.g. your Linux kernel has not been modified.
Honestly, option ROMs IMHO should not exist in the way they do (for security reasons), but they predate Secure Boot by quite a bit and, like many of these "old legacy features a lot of things depend on", are just really hard to get rid of.
Though without question, the design this ended up with has terrible UX.
But most laptops with a reasonable EFI implementation and no dedicated graphics card luckily do not have option ROMs (though their firmware update tool might have something similar, but you can just not use it).
Most option ROMs are signed by the same set of Microsoft keys. You can make sure to enroll these keys alongside your custom platform key (e.g. the tool sbctl can do this for you). While this does tend to work, it has one drawback: anyone who holds that key can break your secure boot chain, i.e. it weakens the benefits custom platform keys give you.
Option ROMs signed by vendor keys are somewhat rare, but sometimes (parts of) firmware update tools are signed by vendor keys, in which case you can choose between not updating your firmware, disabling Secure Boot while doing so, or hoping you can find the public keys they used somewhere on the internet.
In the end you probably want to see if someone already tried it and had success.
Secure Boot keys are unrelated to the leaked key in question. The Boot Guard key is used to verify the firmware itself that the CPU executes on boot. What the firmware happens to do afterwards, say it's a UEFI firmware that implements Secure Boot, is irrelevant to Boot Guard.
If your firmware and its UEFI modules were originally signed by these leaked signatures, what are you going to do? You can't just un-trust those.
In many BIOSes where you can apparently remove all keys and the firmware still loads its own built-in components, that's because the manufacturer has put in a backdoor so that their own components can always be loaded regardless of Secure Boot state. (Otherwise users would be able to brick their own machines.) MSI, for example, does this. And guess what: with these leaked keys, anyone can now claim to be a "manufacturer's own component". Secure Boot is useless.
> If your firmware and its UEFI modules were originally signed by these leaked signatures, what are you going to do? You can't just un-trust those.
You are going to tweak them again, and check down the line (e.g. in GRUB) that they match the hash you've stored, say inside your Secure Boot db, or in your TPM.
This should get you MORE security: before, you had to accept whatever the board manufacturer thought was OK (e.g. any WWAN or WLAN card matching just these IDs). Now you can add more checks (only this serial or MAC address) and refuse to boot otherwise.
Someone could do the same, but if you verify at the next level (e.g. GRUB) that your tweaks are present, you can refuse to boot if they aren't.
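The "check down the line that they match the hash you've stored" step is just comparing a freshly computed digest of the firmware against a known-good value recorded when you flashed it. A minimal sketch in Python, with made-up placeholder bytes standing in for the firmware image:

```python
import hashlib

def measure(blob: bytes) -> str:
    """Compute a SHA-256 digest, the way a measured-boot stage would."""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical: the digest you recorded (e.g. in the TPM) when you
# flashed your tweaked firmware.
firmware = b"my tweaked firmware image"
stored_good_hash = measure(firmware)

def verify_or_refuse(current_blob: bytes) -> bool:
    """Later-stage (e.g. GRUB) check: only continue if the measurement matches."""
    return measure(current_blob) == stored_good_hash

assert verify_or_refuse(firmware)               # untouched image: boot continues
assert not verify_or_refuse(b"evil firmware")   # modified image: refuse to boot
```

The real mechanism extends hashes into TPM PCRs rather than comparing strings, but the decision logic is the same comparison.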
There are plenty of mobos that gladly brick themselves until you reset the configuration. The most common way is Device Guard getting enabled and the GPU not being initialized after that. I haven't looked at manufacturers' own components and those defaults, however.
Why do people think that these security schemes based on a "secure" central key are ever going to work in the long run? The most secret documents of the US state are periodically leaked by dumbasses, and yet the FBI thinks it can keep backdoor keys to all our devices secure.
Conversely I think events like this make things more secure. It highlights the danger of having a third party control what should be your keys.
For the sake of convenience we ended up with an inversion of control. There can still be a chain of trust; however, the owner of the machine should be the root of the chain, not the vendor/manufacturer. The vendor should sign their firmware, and the owner should authorize that signature on the machine.
How poor must your security be, at such a big company, for no one and no system to detect 1.5TB of data being exfiltrated to a remote host? Especially supposedly extra-sensitive data like private keys...
>>In April, MSI fell victim to a cyberattack perpetrated by the ransomware group Money Message, who successfully infiltrated MSI’s internal systems and exfiltrated a staggering 1.5TB of data, predominantly comprising source code.<<
> 1.5TB of data, predominantly comprising source code
that's a lot of source code it makes my typing fingers tired just thinking of it
It's possible they could have written it if all 2,672 MSI employees typed this source code constantly in 8-hour shifts seven days a week for five years with no lunch break no restroom break and not allowed to think before typing.
From the linked article, I'm left with the impression that this is only a problem for MSI (and a few other vendors) devices.
If Intel Boot Guard works by including a public key in a fuse in all CPUs from a set of series and now the corresponding private key is leaked, why isn't this a global problem? The same CPU with the same public key must be in every machine with an Intel CPU from these generations. What am I missing here?
> The boot chain uses an RSA public key (its hash is hard-coded inside the CPU) and an OEM private key. The OEM sets the final configuration and writes it to one-time-programmable Intel chipset fuses during the manufacturing process, thus making it almost impossible for an attacker to modify the BIOS without knowing the private part of the OEM Root Key. However, because some OEMs might fail to properly configure Intel Boot Guard, attackers could end up injecting code and permanently modifying BIOS.
> At Black Hat 2017, security researcher Alex Matrosov presented some vulnerabilities in poor BIOS implementations, explaining that not all vendors enable the protections offered by modern hardware. Because of that, attackers could elevate privileges, bypass protections, and install rootkits, he explained.
Some HP business devices don't use Intel BootGuard, because HP has their own proprietary solution for firmware integrity verification, https://news.ycombinator.com/item?id=35845073
There is a better solution already designed into Intel Boot Guard, which avoids the problems of OEM "secrets" and allows an owner-defined firmware root of trust. As described in a 2017 MIT paper from Victor Costan, Ilia Lebedev and Srinivas Devadas, "Secure Processors Part I: Background, Taxonomy for Secure Enclaves and Intel SGX Architecture", https://www.nowpublishers.com/article/Details/EDA-051
> The TPM's measurement can be subverted by an attacker who can re-flash the computer's firmware .. the attack .. can be defeated by having the initialization microcode hash the computer's firmware (specifically, the PEI code in UEFI) and communicate the hash to the TPM module. This is marketed as the Measured Boot feature of Intel's Boot Guard.
> Sadly, most computer manufacturers use Verified Boot (also known as "secure boot") instead of Measured Boot (also known as "trusted boot"). Verified Boot means that the processor's microcode only boots into PEI firmware that contains a signature produced by a key burned into the processor's e-fuses. Verified Boot does not impact the measurements stored on the TPM, so it does not improve the security.
On a related note, some HP devices have a dedicated security co-processor (SureStart) to verify and/or fix system firmware, instead of relying on a CPU vendor root-of-trust like Intel BootGuard. Since HP's proprietary security co-processor can be disabled by a device owner, those HP devices may be amenable to OSS firmware like coreboot.
> We apply this approach to detect attacks targeting the System Management Mode (SMM), a highly privileged x86 execution mode executing firmware code at runtime .. We instrument two open-source firmware implementations: EDKII and coreboot. We evaluate the ability of our approach to detect state-of-the-art attacks and its runtime execution overhead by simulating an x86 system coupled with an ARM Cortex A5 co-processor.
> Every time the PC powers on, HP Sure Start automatically validates the integrity of the firmware to help ensure that the PC is safeguarded from malicious attacks. Once the PC is operational, runtime intrusion detection constantly monitors memory. In the case of an attack, the PC can self-heal using an isolated “golden copy” of the firmware in minutes.
Measured boot is great in theory. But it is only really practical for determining that your BIOS hasn't changed at all. If you are going to trust updates, you are ultimately going to have to make the same mistake as verified boot, just manually.
The Measured Boot mode of Intel Boot Guard is about removing the need for an Intel/OEM private key and e-fuse to verify the initial code.
For OS-specific measured boot of coreboot open-source firmware with a reproducible build, there would be a 1:1 mapping between the measured firmware hash and the coreboot source code revision used to generate the firmware.
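With reproducible builds, that 1:1 mapping can literally be a lookup table from measured digest to source revision. A toy illustration (the image bytes and revision names are invented; a real table would map actual firmware hashes to coreboot git revisions):

```python
import hashlib

def digest(image: bytes) -> str:
    """Digest of a firmware image, as a measured-boot stage would record it."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical reproducible-build artifacts: each source revision
# deterministically produces exactly one firmware image.
images = {
    "coreboot-4.19": b"image built from coreboot 4.19",
    "coreboot-4.20": b"image built from coreboot 4.20",
}

# Invert the mapping: measured hash -> source revision that produced it.
hash_to_rev = {digest(img): rev for rev, img in images.items()}

measured = digest(b"image built from coreboot 4.20")
print(hash_to_rev.get(measured, "unknown firmware"))  # -> coreboot-4.20
```

The point of reproducibility is that anyone can rebuild the image from the named revision and confirm the hash, so the table itself needs no trusted signer.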
Separately, the issue of firmware blobs (Intel FSP, AMD AGESA) would remain, although AMD OpenSIL is promising to reduce those binary blobs by 2026.
You want to be able to deploy and execute code outside the control of whoever physically controls the machine.
Either you implement it with security features hidden from the device holder, in which case it will always be broken eventually, or you guarantee the capabilities with mathematics - in which case a security break cannot happen even if the physical machine's description is completely public.
There are certainly layers to this that I'm missing, but I think homomorphic compute is the only unbreakable answer to secure compute in general.
My understanding (which might be wrong; crypto is a complex topic and I am an amateur) is that homomorphic encryption would hide the data being worked on from the algorithm working on it. Here we want to verify that the (non-secret) algorithm has been approved (code signing), which we then run on non-secret data. I don't think homomorphic encryption can help with that, since it's kind of a different problem.
The issue here, of the key holder leaking the key, also seems impossible to work around in general, since the requirements are: 1) there exists someone who can sign code; 2) that person cannot screw up (e.g. leak the key) and allow the wrong code to be signed. These are pretty contradictory requirements that no amount of crypto can fix. Ultimately it is a social problem, not a technical one; there is no full technical definition of misusing a key. There are things that can help, like HSMs, splitting the key between multiple parties, or better methods of revoking and replacing compromised keys (hard without network access and with an unwillingness to brick old devices). Not the same domain, but AACS is an interesting example of a system somewhat resilient to key compromise.
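Of those mitigations, splitting the key between multiple parties is the simplest to illustrate: a 2-of-2 XOR split, where neither share alone reveals anything about the key. This is a toy sketch (real signing-key custody uses threshold schemes or HSM-backed MPC, not this), with a placeholder byte string standing in for the secret:

```python
import secrets

def split(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 split: share1 is uniformly random, share2 = key XOR share1."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = b"oem-signing-key"       # hypothetical secret
s1, s2 = split(key)
assert combine(s1, s2) == key  # both shares together recover the key
```

Because share1 is uniform random, each share on its own is statistically independent of the key, so a single careless party can't leak anything useful.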
There's a good chance that I'm conflating some ideas here, but I think there might be a kernel of something that isn't completely useless.
I'm not sure if it's possible (given the overlap with public-key/private-key encryption, it may be), but I think that if you could produce a homomorphic computer capable of plaintext export, this would be a resolvable problem.
I do not want malware authors to be able to run code on my machine outside of my control. That prevents me from knowing whether it is installed, what it is doing, or to have a way to get rid of it.
Homomorphic encryption allows someone's interests to be secured. But I'm dubious that I'm the one who will actually benefit here.
Do you have any idea how much software is on the average consumer device, and how poorly equipped the average consumer is to determine its provenance let alone decide what is trustworthy?
Not to mention that there are economic reasons to run untrusted software. For example no matter how little I trust Zoom and Slack, I don't have a job if I am not willing to run them.
I like the approach of having a dedicated PC and phone that is permitted to be riddled with remote-management and malware and used only for those purposes, and my own devices completely separate.