Two Hidden Instructions Discovered in Intel CPUs Enable Microcode Modification (infoq.com)
449 points by andrewnicolalde on June 7, 2021 | 121 comments


> on the good side of things, getting an Intel CPU to enter the red state is not easy to accomplish. In fact, it should never happen unless there are vulnerabilities in the Intel Management Engine (ME), an almost undocumented subsystem present in all Intel CPUs since 2008 that Intel says is required to provide full performance.

"unless there are vulnerabilities or backdoors in the Intel Management Engine (ME)".

There, fixed it for you.


The IME itself is a backdoor in the first place. I remember a story from a few years back when a full line of CPUs went out with the IME having no password set at all, allowing hackers a field day even when your computer was shut down but still receiving standby power. Intel had to recall all of them, but only after the news blew up in the media. Otherwise I'd suspect Intel would've just let it stay, because money talks.


I regularly hear stories about the IME that all seem disproportionate. From what I've seen, AMD has an equivalent, and mobile SoCs are worse. Also, it looks like you really have to go out of your way to enable IME remote control, which makes sense because it is a premium feature aimed at professionals.

And even if you actually have a backdoor on your PC, hackers first need to get onto your local network to exploit it, which is not that easy on a home network unless your computer is the one to initiate the connection. It also probably won't work on anything but the built-in Ethernet port.

I don't know what "field day" hackers had, but I suspect it only helped them compromise corporate networks they could access, where IME remote control (vPro?) was actively used. It is indeed a serious concern and warrants a recall, but I've yet to see how it can be a concern for most individuals, at least compared to OS-level attacks. And if it is, firewalls, including the simple ones in home routers, should be effective.


> allowing a field day for hackers even when your computer was shut down but still receiving power on standby.

AFAIK vPro doesn't let anything talk to its management interface over the network until you actually do the local setup.

I found this out to my irritation when I once acquired a freshly wiped used workstation with vPro. Thinking I could just set it up by plugging its management-network NIC into my switch and talking to it over the LAN, I found to my dismay that it wouldn't even bother to acquire an IP address for the management interface until I enabled vPro in the BIOS and then told it to use DHCP.

(Am I wrong about this? Perhaps, do OEMs ship batches of machines with vPro pre-configured in certain ways, for clients that explicitly specify that they're going to use vPro for remote provisioning? Or does every workstation order have an implicit "unload from the pallet, plug into a KVM, enable vPro, then install at location" step?)


Your parent gave an anecdote:

"I remember a story a few years past when a full line of CPU's went out with IME having no password set at all"

They describe a pretty serious snag. My experience of iDRAC/iLO/vPro and the like is that they have all had some pretty major problems.

My advice is to keep your monitoring/management interfaces on their own network/VLAN. While you are at it, get the logging and firewall rules sorted.
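For a quick audit, here's a minimal sketch that flags hosts exposing the TCP ports commonly associated with AMT/ASF remote management (16992-16995 and 623/664, per public Intel docs; verify the list for your own gear):

```python
# Conceptual sketch: flag hosts whose open-port set includes the TCP
# ports commonly associated with Intel AMT/ASF remote management.
# The port list is an assumption based on public AMT documentation.
AMT_PORTS = {623, 664, 16992, 16993, 16994, 16995}

def looks_like_amt(open_ports):
    """Return the subset of open ports that match known AMT/ASF ports."""
    return sorted(AMT_PORTS & set(open_ports))

def scan_host(host, ports=AMT_PORTS, timeout=0.5):
    """Best-effort TCP connect scan of the AMT ports on `host`."""
    import socket
    found = []
    for port in sorted(ports):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found
```

If `looks_like_amt` comes back non-empty for a host on your regular VLAN, that's a candidate for moving behind the management firewall.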


I think the parent you're referring to was echoing a story that popped up as something of a conspiracy on the vestiges of the Internet, hinted at by the claim that the machines could be accessed when powered off. I don't think that's true, and you'd have to have access to the network they're connected to. Though, I suppose you could power such systems up (maybe?) and then gain access to them if you did have access via, say, a corporate network.

The closest analog to the claims that I could find in short order did mention a known AMT exploit[1] in 2017.

[1] https://www.theregister.com/2017/05/05/intel_amt_remote_expl...


>I think the parent you're referring to was echoing a story that popped up as something of a conspiracy on the vestiges of the Internet, hinted at by the claim that the machines could be accessed when powered off.

If by "vestiges" you mean "fringes", I find that a bit disturbing assuming you are sincere.

Whatever thought process led you to think it was an obscure conspiracy theory - did you check Wikipedia?

The third sentence of the Wikipedia page on the IME is:

"The Intel Management Engine always runs as long as the motherboard is receiving power, even when the computer is turned off."

https://en.wikipedia.org/wiki/Intel_Management_Engine

There isn't a specific source for this given, but I looked at some of the references and found this in Intel documentation:

"This interface can retrieve the current power state and change the power state of the hosting machine via commands to Intel AMT."

https://software.intel.com/sites/manageability/AMT_Implement...

There is also a link to "Black Hat 2017" given as a source for the ability to manipulate computers when they are turned off through the Management Engine.

And then, looking at your Register link, I don't see that it says anything about the capability of control when powered off.


> Whatever thought process led you to think it was an obscure conspiracy theory - did you check Wikipedia?

I'm guessing you haven't encountered the conspiracists on sites like 4chan and Gab where they literally claim that you can extricate all data from the machines while powered off.

Do not underestimate the idiocy and extremism behind some of these conspiracies. They make wildly stupid claims that are outright impossible, but because it's terrifying to users who don't know better, the truth gets muddied.

Yes, IME/AMT can be accessed as long as the machine has utility power. However, the machine still has to be remotely powered up to access anything else (e.g. if there were an exploit that allowed remote extrication of in-memory data loaded by the OS because, well, the OS has to be booted).

So yes, I'm well aware of this. I think the problem is that you haven't encountered some of the really wild and ridiculously extreme interpretations of this conspiracy on, shall we say, the vestigial fringes of the Internet that harbor some equally stupid notions. It's no accident that the people who think $THREE_LETTER_AGENCY can access all of your personal photos and information with your machine powered off via IME/AMT are almost uniformly also the types who think that 5G is going to instigate mind control or believe the Earth is flat.

I know of these because I regularly find it perversely amusing to debate them because I'm either stupid, crazy, or both.


They're talking about potential backdoors, perhaps disguised as misconfiguration. Later vPro versions would connect back to a VPN.

You shouldn't just trust Intel's word that it won't respond to network access when you've told it not to.


I agree.

I don't want to rehash it here, but if you look at my reply to your sibling, you'll get a better insight into my reasoning.

Basically the TL;DR version is that there are some absolutely asinine conspiracists who literally think that government agencies can extract all of their data while the computer is powered off using IME/AMT.

Obviously, there's a lot more to it, but I do agree that I don't trust Intel, and IME/AMT are dangerous.

I'd link one of the posts I ran into that made this claim, but I don't really want to give the idiots who peddle this garbage any more exposure than necessary, because they appear to be into the 5G "mind control" conspiracies as well.

No, I'm not kidding.


iDRAC/iLO typically run some sort of Linux or embedded system that is rarely updated and insufficiently fuzzed for vulnerabilities. The prevailing fallacy in physical infrastructure commodity hardware is that iDRAC/iLO, ethernet switch serial console ports, UPS/PDU management, KVMs, and so forth are supposed to be isolated on a separate, dedicated, and secured internal network and so the endpoints "don't need Defense-in-Depth."


> until I enabled vPro in the BIOS and then told it to use DHCP.

It may have been disabled by the prior owner of the workstation. The servers we get come in "all management enabled, will get an IP from DHCP" mode. It's possible these workstations left the factory with vPro enabled.


I think it’s maybe a difference between servers and workstations, then. I did a full BIOS factory reset, and it was in the “Intel AMT”-disabled state after I did so.

To be clear, are you talking about Intel AMT specifically (which I’m not really aware of being a “thing” with Xeon), or just regular server-OEM BMCs (iLO, DRAC, etc.)?

BMCs are definitely usually pre-configured. That’s part of the point of BMCs: you can rack the servers and run cable from their management NICs to the management-VLAN switches; then populate their drive bays; and then batch-provision everything at once through the BMCs (including, hopefully, setting up actual security on the BMC.)

But Intel AMT isn’t quite the same thing. (For one, vPro-badged computers don’t usually have a separate management NIC — though some do! — but rather Intel AMT usually uses the IOMMU to virtualize another NIC onto the same physical motherboard RJ45 socket. So it’d be a much worse security design to have AMT default-enabled from the factory, since that’d put new machines’ control-plane interfaces out onto your regular-traffic VLAN by default...)


I'm in the minority, but I think IME is a great feature to have from a business/IT perspective. As a business, you own the computer, not the user. And IME lets you provision/control it at a lower level than most tools can.


Which is fine, except I'm not a business and I _do_ own my computer, but I can't avoid IME.


You guys are confusing two things - ME and AMT (=vPro).

The first one everyone gets. It doesn't allow remote management; it is intended for internal housekeeping, plus things like emulated TPM and DRM.

The second one, which builds on top of the first, allows management in about the same scope as any other BMC, and it is pretty easy to avoid: just don't purchase SKUs with vPro. You have to pay extra for vPro, so doing that is pretty easy.

That said, I also consider vPro generally useful, though somewhat flaky and unreliable. It is also about the only way to get BMC-like functionality for desktops or laptops (hey, Intel, any plans for new NUCs with vPro?).


Ah, you're correct. I was indeed conflating the two.


https://github.com/corna/me_cleaner works reasonably well for me - at least on motherboards with removable BIOS chips.


Yes, that part is annoying (if you absolutely need to use intel).


AMD have their own "IME" called PSP.


ARM has something similar (I forgot what it's called).


Why not then let the purchasing user disable the IME, whether it's an average person or a business?


DARPA, CIA, FBI, NSA -- they are all against that.

A computer is too dangerous a tool to leave to the public without state control.


I don't know how it's implemented, but I assume they don't want the user defeating the feature if they're not the owner. The only choice seems to be to use AMD, which probably works quite well given the smackdown AMD has been delivering recently.


The proprietary nature is the issue. IBM Power 9 stuff has something similar, but much less shady.


How is the rest of the CPU not proprietary?


Do you have a link to an IME tutorial? Or at least the official docs? To me the whole thing is just a black box that doesn't serve any purpose that is in my interest.

Wikipedia is missing such a link.


Link?



Conceptually


The 3-letter agencies have probably been playing with this for a long time.


This is speculation and doesn't add anything to the conversation. Is there a precedent of TLAs conspiring with CPU manufacturers to create backdoors like this? Or are you just a conspiracy theorist?


Chris Domas works at Battelle Memorial Institute. Battelle is a "non-profit R&D house" that just so happens to hire "retiring" CIA operatives and work on tons of CIA/NSA contracts.

Completely coincidentally, Chris Domas managed to discover the only Intel CPU backdoor ever found, and he publicized it the very same year Intel had already patched it.

https://www.youtube.com/watch?v=lR0nh-TdpVg


After the Snowden revelations, those agencies have no benefit of the doubt anymore.


You're moving the goalposts, and it's thus still speculation.


Let the guy speculate? The diversity of opinions is why I enjoy HN. I would argue that your comment added even less (subtracted, even), much as mine has now.


When you get mystery people screeching at you for speculating, you’re getting a confirmation that there’s something to speculate about.

The US government has 60,000 full time shills telling you to take your meds and stop speculating. And studies have shown that they target their chidings.


I remember when discussions about Echelon were for tin-foil-hat kooks. It's all a conspiracy theory until it turns out to be true.


It’s healthy to speculate about these types of issues. In fact, it’s required. The Shadow Brokers fiasco showed us that our overseer class can’t keep their spy tools under control. So why not speculate? I doubt it can be taken too far, my glowing friend.


a backdoor is a vulnerability. no need to call it out separately, I don't think.

I don't shop for groceries and canned corn, I shop for groceries.

I don't listen to music and <insert artist>, I listen to music.

in fact, the way you separate them seems to indicate that you believe that backdoors are not vulnerabilities, and they absolutely are vulnerabilities.

vulnerabilities are things that make you vulnerable. undocumented backdoors definitely make you vulnerable.


Is that supposed to be a serious limitation?


My initial guess was "WRMSR, and the 64-bit version of WRMSR". Fortunately, this appears to be an actually new finding rather than the memorable case a few years back of someone claiming to have discovered something that was clearly documented in the datasheet.

The whole idea that there's a "red state" and a bunch of others on a CPU, normally hidden, should immediately raise the attention of many who wonder what those extra modes can do, and more interestingly, why they're hidden. From the (very little!) research I've done, it appears this is not unlike SGX where only Intel has the key[1] to some part of the hardware you bought from them, and based on some leaked internal documents, it's very plausible that this is indeed a backdoor which only they can use. Ostensibly, for debugging purposes. Here's a previous comment I made on this: https://news.ycombinator.com/item?id=26521359

[1] If these keys were leaked, which I very much hope will happen at some point, no doubt the "security" community will be heavily against it and spread plenty of FUD about how it makes everyone's computers insecure. But IMHO it should be eagerly awaited and received with the same optimism as the other DRM key leaks (HDCP, HDDVD, etc.) --- it is a path to freedom.


I assume it will be a public-private keypair with some challenge-response mechanism, so no amount of reverse engineering could extract the key from a CPU.

Intel probably have the master private key on a hardware security module, and have set up some API so their engineers can do the challenge-response dance to get into RED mode. But after poweroff, it'll need to be done again.

If that's the setup, the key won't leak. There are probably only 5 hardware cards in the world with the key, and they're probably set to not allow key extraction.

To get the key, you would need to physically steal the card, and have an exploit for silicon designed to be key storage...

Not going to happen.


And the key on the silicon would self-destruct if a fraudulent attempt is detected.


It's interesting that they submitted a paper for a talk at Black Hat USA 2021 but were rejected [0]. Looks like geopolitics (and politics) has permeated every aspect of technology.

[0] https://twitter.com/h0t_max/status/1397441062705057793


It's Sklyarov. Personally, if I were him, I wouldn't be going anywhere near the USA again.


What was the specific geopolitics here? That tweet didn’t give any real info.


Positive Technologies has had sanctions imposed on them by the US Government


Well, that doesn’t sound very positive!

Thanks


Why?


From the US Treasury press release announcing the sanctions:

> Positive Technologies is a Russian IT security firm that supports Russian Government clients, including the FSB. Positive Technologies provides computer network security solutions to Russian businesses, foreign governments, and international companies and hosts large-scale conventions that are used as recruiting events for the FSB and GRU.


It was involved in the SolarWinds attack.


[citation needed]


[flagged]


17 intelligence agencies agree!


I'd guess that amongst tens of thousands of submissions their paper didn't stand out as much as it should have.


The article is based on the same tweet as this discussion from 3 months ago: https://news.ycombinator.com/item?id=26519693

I also want to state how impressive their work is. These are the true undocumented-instruction finders, as opposed to sandsifter.


What's the difference between sandsifter and this work?


My understanding is that sandsifter just detects byte sequences the processor decodes. That indicates an instruction the decoder recognizes, but says nothing about what it does. This is about two specific instructions with a known purpose.
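For what it's worth, the length-probing idea can be caricatured in a few lines. As I understand it, sandsifter exposes candidate bytes one at a time across a page boundary and watches which fault it gets; here a fake software decoder stands in for the real CPU, so this is a toy model of the idea, not the actual tool:

```python
# Toy model of sandsifter-style instruction-length probing: expose
# one more byte of the candidate at a time, and see whether the
# "CPU" (a fake lookup-table decoder here) can decode it yet.
def fake_decoder(code):
    """Pretend CPU: return the instruction length it would consume,
    or raise ValueError (standing in for #UD) if the bytes don't decode."""
    table = {b"\x90": 1, b"\x0f\x31": 2, b"\x48\x89\xd8": 3}
    for length in (1, 2, 3):
        if bytes(code[:length]) in table:
            return length
    raise ValueError("#UD")

def probe_length(code):
    """Infer instruction length by widening the 'mapped' window one
    byte at a time, mimicking the page-boundary trick."""
    for mapped in range(1, len(code) + 1):
        try:
            return fake_decoder(code[:mapped])  # decoded: length found
        except ValueError:
            continue                            # needs more bytes, or true #UD
    return None                                 # never decoded
```

The real tool of course observes hardware faults rather than a lookup table, but the search structure is the same.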


Rejected? Not a problem. Upcoming talks on Red Pill for Intel Atom.

https://zeronights.ru/en/reports-en/chip-red-pill-how-we-ach...


In theory this should open up the possibility of unlocking all the market-segmentation-gated Intel bullshit like ECC, AVX, multiplier-change overclocking, etc. Maybe even manipulating CPUID.


Maybe, but that might be risky in the sense that if a chip has completely faulty behaviour at high clock speeds (I'm not familiar with how they bin things in practice), they can sell it as a low-speed one. Intel likes to segment its products, but it's not entirely for profit.


Yeah, but things like ECC are totally artificially segmented. It's an identical memory controller.


You can just stress-test your CPU for a day afterwards and roll it back if it crashes or makes a mistake.


Exactly my thoughts from the last time this was posted https://news.ycombinator.com/item?id=26522208 .

I hope some talented hacker/s somehow manage/s to actually pull it off.


I've heard on the grapevine that binning decisions like that are made with fuses and lasers. ECC might even be another mask.


Lasers and masks, no. Fuses, yes. But fuses are just non-volatile memory. The fuses are not connected directly to the hardware. There is either logic or microcode that loads the fuse values into registers that actually configure the hardware features on and off.

So the question becomes whether you can access those registers, or otherwise affect the fuse loading process, using these features. That depends on the design (e.g. whether those registers are accessible and not locked down after initial load).

Fuses (and antifuses) are the only form of nonvolatile, programmable memory available in these process nodes (as they require no extra masks or processing steps beyond a regular CMOS process), that is why they are used. They are write-once memory, and they are based on breaking or making electrical connections, but they are not literally used to directly reconfigure chip wiring; that hasn't been the case for many years now. Modern fuse memory is just one more functional block you throw into your chip, and it comes with its own requirements, read amplifiers, etc (the fuses aren't actually "binary"; what happens is the electrical resistance increases, but you still need some analog-ish circuitry to set threshold levels to read them reliably, and carefully controlled programming voltages and timings, etc).
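The threshold-based readout described above can be caricatured in a few lines; all resistance values and the threshold here are invented purely for illustration:

```python
# Conceptual model of fuse readout: a blown fuse has high resistance,
# an intact one low, and a sense threshold turns the analog value into
# a bit. The threshold and resistances are illustrative, not real.
BLOWN_THRESHOLD_OHMS = 10_000  # assumed sense-amp threshold

def read_fuse_bits(resistances, threshold=BLOWN_THRESHOLD_OHMS):
    """Map measured fuse resistances to bits: 1 = blown (high R)."""
    return [1 if r >= threshold else 0 for r in resistances]

def feature_enabled(fuse_bits, bit_index):
    """Feature gating: logic/microcode copies fuse bits into config
    registers at reset; here we just index the loaded value."""
    return bool(fuse_bits[bit_index])
```

The interesting attack surface, per the paragraph above, is not the fuses themselves but whatever plays the role of `feature_enabled`'s register load, if it is reachable and not locked after reset.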


There was at least a long period of time, until recently, when Intel didn't even tape out ECC for consumer designs. That's what I meant by masks: their product differentiation for an individual chip in that case goes all the way down to the mask. See how all of the chips Intel enables ECC on are Xeons or the occasional Atom intended for servers: https://ark.intel.com/content/www/us/en/ark/search/featurefi... The only question is whether they still have it taped out on some other masks but universally disable it there as a later product decision. There's a tiny chance that's the case, so that they can make product decisions in real time rather than waiting out the multi-year process of taping out new chips.

And lasers were used at least very very recently at Intel.


Look at the E3-1245 v3, then the 4770K, and finally the 4770. Can you spot a difference? It's ECC and TSX-NI. Same die.


Right, so when I said ECC _might_ not even be taped out, it obviously depends heavily on the die. Sometimes they repackage Xeon dies as extreme high end consumer parts, but it's very rare and doesn't apply to the general case (and your examples are almost a decade old at this point).


Doesn't Intel use something like Qualcomm's eFuse, or is it the same thing?


Not sure what specific technology they use/license (they might have an in-house design), but it's all vaguely equivalent from a high level point of view. Qualcomm calls theirs QFPROM.

In the end, all of these companies end up licensing whatever fuse block is compatible with their process, e.g. I'm sure TSMC has something in-house for theirs. Intel of course own their own fabs, so they probably have their own. DesignWare also have their own (generic-ish?) thing, etc.

Basically every modern SoC or really IC of any complexity uses fuses for something (e.g. calibration data, switching in spare/redundant blocks when manufacturing defects happen, binning/product segmentation, crypto key storage, etc).


At least the max multiplier limit is definitely switchable in microcode. In 2013 Intel released a microcode update changing the multiplier limit when the CPU is on H81/B85/H87 boards. In 2015 they even somehow convinced Microsoft to push this microcode globally in an update.

https://bit-tech.net/news/tech/cpus/intel-overclocking-block...

Before the microcode update: full overclock. After: locked multipliers.


TL;DR

1. If you control Intel Management Engine (exploit/backdoor/whatever), you have full control over the system.

2. That full control comes with, among everything else, the ability to put the CPU in a deep debug state ("Red Unlock")

3. In Red Unlock, you can basically play with the CPU's internals at will using a debug cable (just a modified A-A USB3 cable on many modern systems; "DCI").

4. It turns out that in Red Unlock state there are also undocumented instructions that let you do the same thing straight from code running on the CPU itself.

Notice how the security relevance stops at #1. We already know that if you control ME, you control the system. So there is no security impact to this discovery. The prerequisite is already total control.

Also notice how what these instructions let you do isn't new. The same researchers already showed how to do the same thing via an external debugger (CRBUS access) last year. So this does not open any new capabilities for CPU research.

What it does do is make things more convenient. Now you can do this without an external debugger, "only" with an ME patch/exploit and code running on the system itself, which means you could e.g. have it apply custom microcode patches on every boot (by patching your UEFI firmware to do it). Also, the USB debug thing doesn't work on all motherboards (some are missing the required connections), while this would work.


> Notice how the security relevance stops at #1.

Security of a single PC, or security of an entire operation? Seems like this would be a great "in" for personalized unattended exploits of airgapped systems, kinda like what Stuxnet/Flame/Duqu were famous for but at an even lower level.


You still need to compromise ME, at which point there are a million easier ways to accomplish your goals than a microcode patch. For example, SMM is much easier to compromise and arguably more powerful (you can hide large amounts of code in SMM, not so in microcode); SMM backdoors that interact with userspace code have already been demonstrated. Or you could do something in ME directly, which again has plenty more space to hide stuff in than microcode.

The point is this isn't a bug, it's a feature; if you've gotten the CPU into red unlock anyway, you are already thoroughly screwed. This is like saying if you install a development version of an OS and enable remote debugging, you can take over the machine remotely. Well.... yes.

This is interesting because it's undocumented, not because it has security implications.


> because it's undocumented, not because it has security implications.

"They're the same picture."


No. The Apple M1 is undocumented, but so far the only security problem I've found while reverse engineering it was an actual unintentional bug. It is otherwise extremely well-designed as far as security goes.


>" Dmitry Sklyarov"

I assume he is that famous guy from ElcomSoft who put Adobe and the DOJ to shame. Good to know he is still productive.


> it allows to craft your own persistent microcode patch without external debugger.

These are persistent? Meaning they survive reboots? Is it stored in flash memory on the CPU or something? I thought all microcode updates are re-applied on each boot.


They mean you could put code into your UEFI to apply the patch using these instructions on every boot, as opposed to the state of affairs until now, when you needed an external debugger connected via DCI to do the same thing.

The CPU still needs to be in the Red state, which means you need to control ME. This applies to both cases.


Oh I see! Thanks!


If you were to override the default microcode update, which should be doable if you have that level of access, you could call it persistent, no?


Not really, honestly. I mean, the microcode update is done by the firmware and/or the OS on boot. Assuming that's correct, whether it's performed or not is entirely dictated by those entities, not by anything related to the CPU. It's like saying "I'm moving to New York" just because there's a car that drives you there from New Jersey every day. And if you really want to call that "persistent", I'm not sure what a non-persistent update would even look like.


Microcode updates have to be reapplied each time you restart your system.
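For the curious: on Linux/x86 the currently loaded revision is visible per logical CPU in /proc/cpuinfo under the "microcode" field, and it reverts to the firmware-supplied value on reboot unless something reapplies the update. A small parser over sample text (the field name is the one Linux uses on x86):

```python
# Extract the microcode revision Linux reports for each logical CPU.
# Works on the text of /proc/cpuinfo; a sample is included so this
# runs anywhere, not just on a Linux/x86 box.
def microcode_revisions(cpuinfo_text):
    """Return the 'microcode' field value for each logical CPU."""
    revs = []
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            revs.append(line.split(":", 1)[1].strip())
    return revs

SAMPLE = """\
processor\t: 0
microcode\t: 0xde
processor\t: 1
microcode\t: 0xde
"""
```

On a real machine you'd feed it `open("/proc/cpuinfo").read()` and compare the revision before and after a reboot.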


Clickbait. Only in debug mode.


I don't think they need to resort to clickbait articles.

Dmitry Sklyarov is the same guy who, 20 years ago, was arrested by the FBI for DMCA violations while visiting the US because, despite being a Russian citizen working for a Russian company (there's no DMCA in Russia), he wrote software that circumvented Adobe's e-book copy protection. Had social media been around back then, #freedmitry would have been a trending hashtag, like all the similarly named sites that spawned shortly after the arrest.

https://en.wikipedia.org/wiki/United_States_v._Elcom_Ltd.


Unless Twitter censored it.


When I was deciding which country to immigrate to in 2006, I had three job offers: one from Canada, one from the US, and one from the UK, so those were the potential targets. The CFAA and the Sklyarov case were among the major reasons I decided against the USA.


Not quite. Only in "red state", which is a subset of debug mode. So, even more restrictive than what you said.

But I still don't think that makes it clickbait. From the article:

> The three researchers have posted a video demonstrating how to access the two instructions with only root/admin privileges. This requires uploading a custom UEFI to SPI flash and then rebooting the system, which definitely requires having physical access to it.

So, not remote-able (at the moment). But there's enough there that I can't agree with calling it clickbait.


Even aside from security vulnerabilities, wouldn't the ability to write your own microcode for Intel CPUs be interesting for many hobbyists?


Right. Writing custom microcode can be useful for understanding fine details of the CPU's microarchitecture, which in turn is useful for things ranging from low-level optimization to discovering microarchitectural vulnerabilities that don't require custom microcode to exploit.


Or you can treat your Intel processor as an FPGA and write instructions that do what you want them to. Unlocking features locked to higher-end SKUs is also not off the table. So it may or may not be a security issue, but it makes the machine hackable in a way it hasn't been before.


Should you be capable enough to write your own microcode, there is an elevated chance that you could also encrypt and sign that code like Intel microcode.


Not unless you break RSA, which would be much more of an accomplishment than working out how microcode works.


From the article:

> As a matter of fact, several vulnerabilities in Intel ME have been discovered in the past. Among others, Ermolov, Sklyarov, and Goryachy described a method to extract the secret key that is used inside the CPU to decrypt microcode updates, which also led to the possibility of executing your own microcode on the CPU or reading Intel's microcode.

So you don't have to break RSA. You just have to use the private key to derive the public key, which I believe is do-able.


Encryption and signing are two different things, both using RSA keys. The public key used for encryption does not need to be part of the same key pair as the private key used for signing.

Not sure whether this is the case here.


Also, they talk about a secret key, not a private key. That means it's likely using symmetric-key encryption, with RSA for signing (and maybe to update the symmetric key a la RSA-KEM). This is normal; nobody does direct asymmetric message encryption.
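A toy illustration of why extracting the decryption key doesn't buy you forgery (XOR stands in for the cipher and HMAC for the signature; this is emphatically not Intel's actual scheme):

```python
# Toy: the key that *decrypts* an update can be entirely separate
# from the key that *authenticates* it. Knowing ENC_KEY lets you
# read updates; without SIGN_KEY you cannot make one that verifies.
import hashlib
import hmac

ENC_KEY = b"\x42"          # symmetric key an attacker might extract
SIGN_KEY = b"keep-secret"  # separate authentication key

def xor_crypt(data, key=ENC_KEY):
    """Trivial symmetric cipher stand-in (self-inverse)."""
    return bytes(b ^ key[0] for b in data)

def package(update):
    """'Vendor' side: encrypt, then tag the ciphertext."""
    ct = xor_crypt(update)
    tag = hmac.new(SIGN_KEY, ct, hashlib.sha256).digest()
    return ct, tag

def cpu_accepts(ct, tag):
    """'CPU' side: only accept ciphertext with a valid tag."""
    expected = hmac.new(SIGN_KEY, ct, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

With ENC_KEY in hand you can decrypt `ct` and read the update, but any ciphertext you craft yourself fails `cpu_accepts`, which is exactly the situation the replies above describe.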


> This requires uploading a custom UEFI to SPI flash and then rebooting the system, which definitely requires having physical access to it.

Does this require something other than writing the file to the UEFI partition, setting it as the next boot target, and rebooting into that boot target? I tend to think I misunderstand because that would require root privileges under Linux but not physical access.


If the attacker is flashing custom ME firmware to flash, you are already well and truly fucked.

They already have persistent and unlimited access to the ultimate back door and modifying the microcode isn't going to get them any extra access to your data or programs.


At a minimum, though, this would be harder to detect. All of the usual places to look are completely unpatched.


But it's not persistent by itself.

You're going to need both persistent modified ME firmware to enable red mode and some x86 code to do the actual microcode modification on each boot.

If the attacker has that, they can already do a traditional VM rootkit to hide their modifications.


A publicly documented cheap and reliable way to backdoor the cpu would bring this ME vector to many more people's radars.


I don't think you read the article closely enough. You only need "red state" (aka debug mode) if you are using the instructions with unsigned microcode updates. But if you have a way to pop the machine into red state, you can exfiltrate the secret key and actually sign microcode changes that will be accepted as legit. All of those things are well within the capability of an APT, especially if you don't need the EXACT target system[1].

Personally I thought IME was a huge problem when Intel announced it, and I haven't changed that opinion.

[1] An open question I don't know the answer to is whether all CPU SKUs share the same secret key (which would be logical if you wanted to push a microcode change for a particular SKU).


> But if you have a way to pop the machine into redstate you can exfiltrate the secret key and actually sign microcode changes that will be accepted as legit.

No. The article mentions that the key required to decrypt microcode updates can be extracted. http://inertiawar.com/microcode/ indicates that the encrypted microcode image is also signed, and being able to decrypt the image doesn't mean you can also generate an appropriate signature.
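A tiny textbook-RSA sketch of that distinction (toy primes, purely illustrative, nothing like Intel's actual parameters; `pow(e, -1, m)` needs Python 3.8+): everything the CPU holds is the public pair `(n, e)`, which is enough to verify a signature but not to produce one.

```python
# Textbook RSA with toy primes (illustration only, NOT secure).
p, q = 61, 53
n = p * q                            # 3233, public modulus
e = 17                               # public exponent (what the CPU ships with)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (never leaves the vendor)

def sign(m):
    return pow(m, d, n)              # only the private-key holder can do this

def verify(m, sig):
    return pow(sig, e, n) == m       # anyone with (n, e) can check it

msg = 42                             # stand-in for a microcode image hash
sig = sign(msg)
assert verify(msg, sig)              # genuine update accepted
assert not verify(msg + 1, sig)     # tampered update rejected
```

So extracting the symmetric decryption key lets you read updates, but forging an accepted one would still require `d`.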


Microcode updates are signed with an asymmetric key ever since the Core line, so no, you can't "extract" the key because the only key the computers have is the public key, not the private key.


I misread this paragraph:

> As a matter of fact, several vulnerabilities in Intel ME have been discovered in the past. Among others, Ermolov, Sklyarov, and Goryachy described a method to extract the secret key that is used inside the CPU to decrypt microcode updates, which also led to the possibility of executing your own microcode on the CPU or reading Intel's microcode.

Where it talked about extracting the secret key.


Personally, I didn't understand the title as describing a security flaw. I simply understood it as a step forward in understanding/documenting the way Intel CPUs work. So I don't think it is clickbait.

The only part I'm not clear on is whether this was possible before. The article seems to imply that physical debugging tools (JTAG-like, I presume) were already available for uOP exploration?


Yeah, previously the same team found a JTAG like mechanism that exposed microarchitectural internals like R/W access to microcode storage.


Not quite clickbait. Previously there was no way to experiment with and introspect the changes in any Intel microcode updates. This gives unverified r/w to microcode update RAM, allowing reverse engineering of at least Apollo Lake updates and update mechanisms.

I've been a little wigged out by the post-Spectre world of encrypted, unknown blobs changing every couple of months. I prefer to treat vendors in a "trust in Allah, but tie up your camel" sort of way as much as possible.


Clever, thanks for the explanation.


This title should have been appended with " in debug mode."


I wonder, are there implications for Intel SGX?


How did they find these instructions? Did they just try things while in the red state?


Why is it legal to sell a device without providing full documentation, incl ME?


It's blatantly obvious that the US intelligence agencies have backdoors (or known exploits, if you want to give them the benefit of the doubt) into Intel ME, and its AMD counterpart.

The DoD is the only Intel client that is allowed to disable ME. They sell chips without ME. They just won't sell it to anyone but the DoD. Why is that?


Can you provide a legal definition of what "full documentation" would be?


A description of every intentional design feature that is not immediately and obviously apparent.


You don't need this to do anything with the CPU that was promised when you bought it.


What do you mean? What if I want to change the microcode to something I like?


That wasn't a product claim that Intel made. It's like "warranty void if opened", "no user-serviceable parts inside".

There is no legal obligation to sell you an open and fully-documented CPU.


"warranty void if opened" is also illegal in most consumer cases in the US, as common as it is for companies to apply this policy without being challenged on it, and certainly in many other countries. Of course, they are certainly allowed to refuse to service warranty claims where a third-party part or repair caused the problem.

Source: https://www.ftc.gov/news-events/blogs/business-blog/2018/04/... (blog post on the official FTC site - look at the third bullet point in their examples)

Not mentioned in the above source, but the burden of proof that the third-party part or repair caused the problem is on the warrantor, not the consumer.

