That's a pretty nice write-up, and continues to show why hardened hardware is needed, not just software, if you actually have secrets to protect.
One thing that struck me is "After running these experiments for more than a day, resulting in more than 1 million experiments, we did not observe any successful glitch…"
That's like 100ms per test. Are those devices that fast to put into uart mode and reset and wait for the result? Were they running this with multiple devices in parallel?
> continues to show why hardened hardware is needed, not just software, if you actually have secrets to protect
Right, but I tend to disagree with the premise: I generally think that an attacker with hardware access will compromise that hardware sooner or later, and you should design your security measures around that assumption.
The only uses I see for this are:
* Giving your HW to somebody else to operate, while hoping that this person won't recover secrets from the inside.
* As a variant, locking customers out from using the HW with software it was not designed for, or from extracting the SW it came with.
I am most often on the receiving end of the second option, and have no sympathy for it. The first is a trust issue, and I would avoid those situations anyway. Another option is:
* Protect an unattended device against tampering
Which is a bit saner, but you probably shouldn't leave a sensitive device unattended anyway, as it can always bite you back. What is desperately needed is the hardware equivalent of a software checksum, to verify that the device conforms to specs.
> You probably shouldn't leave a sensitive device unattended anyway
I find this area fascinating in terms of how you protect devices that are both sensitive and need to be left unattended in someone else's possession. One of the best explorations I've seen of this is Big Clive's teardown of a smart meter. The thing is absolutely full of anti-tampering measures, including several switches that will cut off the power supply if the casing is removed, but most interesting to me was the cellular modem being connected to a big capacitor designed to give it just enough power to send out a last cry for help even if the entire radio module is pulled off the meter.
The capacitors may just be part of the standard power supply circuit for the mobile telephone modem.
I looked at the SIM800 module a few months ago - people use them to add mobile connectivity to embedded hardware like Arduino. The module is designed with a 4 V LiPo battery in mind, as it can draw 2 amps for a few milliseconds when transmitting, so it needs a buck converter and hefty capacitors rather than the usual linear voltage regulator.
NB: Not a hardware engineer - was just interested in adding SMS notification to my home alarm.
> Giving your HW to somebody else to operate, while hoping that this person won't recover secrets from the inside. [...] is a trust issue, and I would avoid those situations anyway.
I hate DRM with a passion, but the same problem is always my concern. Suppose I lost my smartcard (or USB dongle) that stores an OpenPGP private key in a hotel. Yes, everything is possible with a physical attack, and I should use an extra PIN to symmetrically encrypt the key material on the card anyway. But still, if the attacker decides to extract anything from the card, I'd like to find a way that forces the attacker to pay the maximum price.
An inexpensive security chip can cost the attacker an extra $1000 to break. If the attacker must decap the chip and use semiconductor test equipment, even better.
However, there's a small problem: while forcing the attacker to pay, say, $1000 to develop the attack is easy, it's difficult to guarantee the cost per attack will remain the same - if a noninvasive attack is ever found (e.g. fault injection), the marginal cost of new attacks becomes essentially zero. And unlike software, the smartcard is useless at this point, and it cannot be patched. If it's a FOSS hardware design, everyone is now vulnerable. Worse, these attacks are rarely published - before you start using a card, you have no idea how secure it really is, and after you start using it, you have no idea how vulnerable it has become.

Solution? Use the latest security chip and upgrade your smartcard to a new design frequently. Unfortunately, the secure chip industry is basically owned by the DRM/financial companies, the best chips are behind layers and layers of NDA, and even then, they're often only sold to a restricted list of customers. It protects them, not us...
Ah, yes, this is true. One option is to use that as just one authentication factor, and revoke it if it is lost.
One could use it with a password, as is commonly done, but you could imagine combining it with a key stored on a server, making it a two-layer key system.
That way, extracting the key would likely be of no use to someone who stole it, provided you can revoke access quickly enough. So yeah, hardware security measures can be useful, but do not necessarily need to be 100% foolproof.
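A minimal sketch of that two-layer idea, in Python (the names and the HMAC-based derivation are just illustrative assumptions, not any particular product's scheme): the token holds one key share, the server holds the other, and the actual data key only exists when both are combined, so revoking the server share makes a stolen token useless.

```python
import hashlib
import hmac

def derive_data_key(token_share: bytes, server_share: bytes) -> bytes:
    # HMAC as a simple two-input KDF: neither share alone yields the key.
    return hmac.new(server_share, token_share, hashlib.sha256).digest()

token_share = bytes.fromhex("00" * 32)   # lives on the smartcard/dongle
server_share = bytes.fromhex("ff" * 32)  # fetched from the server after auth
data_key = derive_data_key(token_share, server_share)
```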
> you probably shouldn't leave a sensitive device unattended anyway
Almost every ESP32 device is left unattended while storing the owner's wifi credentials.
You can't exactly make a wifi doorbell, or wifi thermostat, or wifi lightbulb, or wifi security camera, without storing a wifi password.
Of course, a cynic would say product manufacturers deploy these features hoping to protect their source code, rather than to keep the customer's wifi passwords safe.
IMO that's more an issue with (customer) WiFi than with the device itself. WiFi is about the last place where people living together have to share one common secret for their security, and that's pretty ridiculous. At least with WPA3 we get perfect forward secrecy even with a shared password...
Not sure if it's supported by the ESP32, and it's kind of complex to set up, but I think you could set up per-device passwords with WPA Enterprise (PEAP/MSCHAPv2) and a RADIUS authentication server.
What you want to do with that is another question. You can't really do anything against someone who decides to fake your thermostat readings; having access to the hardware already defeats any protection there.
At least you limit the impact to the one device that was stolen. Having someone check that the device is still there could be helpful. And since that attack still takes some time to perform, some monitoring could detect that the thermostat/etc hasn't been connected in a while.
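A sketch of what such monitoring could look like, assuming devices that check in periodically (the function names and the 15-minute threshold are made up for illustration):

```python
import time

ALERT_AFTER_S = 15 * 60  # longer than a plausible on-site attack window
last_seen: dict[str, float] = {}

def record_checkin(device_id: str) -> None:
    # Call this whenever a device pings, publishes a reading, etc.
    last_seen[device_id] = time.monotonic()

def silent_devices() -> list[str]:
    # Devices that haven't been heard from recently; worth a closer look.
    now = time.monotonic()
    return [d for d, t in last_seen.items() if now - t > ALERT_AFTER_S]
```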
I am not sure how useful extracting a Wi-Fi password that's only used by the doorbell would be?
> You can't exactly make a wifi doorbell, or wifi thermostat, or wifi lightbulb, or wifi security camera, without storing a wifi password
What I am thinking of doing for the outdoor weather station I will someday get around to finishing is to put it on a separate WiFi network. To get data from it, you will have to connect to that WiFi network.
Inside the house, I'll have an RPi or something like that which has WiFi and Ethernet that will use its WiFi to connect to the weather station's network and Ethernet to connect to my home LAN.
If someone steals the weather station, the only password they get is the password to the weather station network, assuming that it has one (not sure yet...may make it open so that anyone in WiFi range can connect and read the data).
Same thing for the automatic wildlife camera I'm contemplating. Probably that will go on the same WiFi network as the weather station.
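The bridge piece could be as simple as this sketch (the station's address, endpoint, and port are all assumptions): the RPi fetches readings over its WiFi leg and re-serves them on the LAN leg, so the station network's credentials, if any, never leave the Pi.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATION_URL = "http://192.168.50.2/data"  # station on the isolated network

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull the latest reading from the station side, re-serve it on LAN.
        with urllib.request.urlopen(STATION_URL, timeout=5) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```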
Depending on the capabilities of your WiFi gear, you can just run these devices off of a second wireless network which backends onto a separate VLAN. You can then use firewall rules to determine what traffic is allowed to pass between VLANs.
I do this for my cameras (no traffic out from the VLAN, but allow connections in from the NVR machine) and IoT devices (traffic to the internet is allowed, only some traffic to my other networks is allowed). It works very well.
I use Unifi for my networking gear, but I've done the same thing with DD-WRT on an Asus router/AP device.
> You can't exactly make a wifi doorbell, or wifi thermostat, or wifi lightbulb, or wifi security camera, without storing a wifi password.
Sure you could. It would just require the access point to provide a passwordless guest network for such devices to use, and then encrypt the traffic end to end for the actual security.
A) Routers must support a network-isolated guest network.
B) Anyone can connect to my internet to download stuff.
C) If my computer and IoT hub are on my main network, and the lightbulb is on the guest one, how do they talk? Through the third-party server? What if there is no internet? Then I can't control my lights.
The actual solution is to have individualised passwords for each device that can be revoked, isolated, etc. Those features are not found in 'normal' wifi routers
Why would that network need to provide internet connectivity? Have the Wi-Fi gateway perform the authentication and provide an interface, or use a VLAN connected to a server that handles that.
> Those features are not found in 'normal' wifi routers
Then deplore the state of current consumer wi-fi routers and protocols. We can do better than this.
More to the point, Wi-Fi wasn't really designed for this, so maybe a new kind of extension is warranted (like NAN, P2P, mesh, etc). Or use another protocol: Zigbee, Lora, z-wave, etc. These generally have the guarantees you want, sometimes do mesh networking, etc.
In the past, I've used a Raspberry Pi 3 as an "IoT hub", using its own wi-fi network to connect "smart" devices. The third-party server was my hub. Everything works without internet; the dedicated wi-fi network doesn't have any. You can even lower the channel width to 20 MHz and the rate to 1 Mbps to achieve greater range on that channel.
> If my computer and IoT hub are on my main network, and the lightbulb is on the guest one, how do they talk?
If your IoT devices aren't on the same subnet as your smartphone or computer, the vast majority of their "frontend apps" are already DOA. If you're lucky, those apps might let you explicitly specify the device's IP address (in a half-baked fashion that still doesn't work very well).
Nope, the client device only needs the server's public key, which isn't a secret. The session keys are a secret but they're also different for every device, transient and never written to permanent storage.
The device can identify itself with a UUID, which isn't key material and doesn't allow anyone to break the channel between the actual device and the server, which is still authenticated by the client using the server's public key. The attacker could generate a random UUID and connect to the server, but then it just shows up as an unconfigured device and gets ignored.
The attacker would have to extract the UUID from the device it wanted to impersonate, and even that wouldn't let it impersonate any other device the way a WiFi password does. It would also be detectable, because there would be two devices claiming the same UUID, at which point you generate a new one for the real device after addressing whatever allowed the attacker to extract it in the first place.
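Roughly, in Python (the hostname, port, hello message, and certificate file are illustrative assumptions; the point is that the only baked-in material is a UUID plus the server's pinned certificate):

```python
import socket
import ssl

DEVICE_UUID = "123e4567-e89b-12d3-a456-426614174000"  # per-device identifier

# PROTOCOL_TLS_CLIENT verifies the hostname and requires a valid cert;
# loading only the pinned cert means only this server is trusted.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("pinned_server_cert.pem")  # shipped with firmware

with socket.create_connection(("iot.example.com", 8883)) as raw:
    with ctx.wrap_socket(raw, server_hostname="iot.example.com") as tls:
        # TLS authenticates the server via the pinned cert; a cloned UUID
        # just shows up server-side as a duplicate that can be flagged.
        tls.sendall(f"HELLO {DEVICE_UUID}\n".encode())
```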
> What is desperately needed is the hardware equivalent of a software checksum, to verify that the device conforms to specs.
The problem is, how can you know the OS hasn't been replaced with a fake that bypasses the checksum and keylogs your password?
Even without Secure Boot, a TPM already accomplishes what you're asking -- it won't release the disk encryption key to the OS unless the "hardware checksum" is correct.
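For intuition, here's a rough model of what the TPM accumulates (this is the PCR-extend idea sketched in stdlib Python, not the actual TPM API): each boot stage is hashed into a register, and the disk key is only unsealed if the final value matches the one it was sealed against.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = H(PCR_old || H(measurement)); order-sensitive by design.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel"):  # measured boot chain
    pcr = pcr_extend(pcr, stage)
# The TPM unseals the disk key only if `pcr` equals the sealed-to value,
# so any modified stage changes the result and the key stays locked.
```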
A TPM usually only checks the software (firmware); you could theoretically plant something in the hardware and have it go undetected by the TPM.
I am pretty sure there is no way to achieve that currently, but in such a scheme, the device performing the checksum would be the trusted element (like a USB stick with verified HW+SW).
An example of such a hardware checksum might be a turntable used to compute the inertia matrix of your hardware. Most tampering (like the implant from the Bloomberg story) should be detectable, up to a certain scale. Unfortunately, it can be compensated for, and it is not effective against some classes of attacks (frying components, microscopic circuit tampering, etc.). It depends on your threat model, anyway.
There was a great write-up by someone about designing a trusted-computing type of authentication device and how it should operate. I believe the requirements were write-only flash for the program code and an interface to reflash it. That way you can create whatever program you want and embed whatever secrets you want in it yourself, and if you lose it, nobody should be able to retrieve them. I think that would be a use case for hardened hardware. Same with a lost cell phone.
> I generally think that an attacker with hardware access will compromise that hardware sooner or later, and you should design your security measures around that assumption
There are two types of hardware attackers. Those who have home tools and those who have access to lab equipment. Unless you're designing a smartcard, you're only going to have the budget to protect against the former.
But you can't guarantee that the security won't be broken in 30 years, when we have another Infineon-class fiasco (https://nvd.nist.gov/vuln/detail/CVE-2017-15361, I think), or in 300 years, when you'll be able to buy nanites by the kilogram at the convenience store.
It seems obvious (I hope) that you can't protect that kind of technology against secret leakage on those timescales. I think it's wishful thinking that the technology you are relying on won't be broken in a one-year timeframe. Unlikely, I agree, but it happens, and you just hope that it will be someone else's tech, not the one you rely on to keep your secrets. The only practical measure you can take at that point is buying insurance :)
So, in any case, you have to make sure the secrets you are storing in HW aren't too important, and can be rotated easily.
I would add a third class of hardware attacker to your list, the one I think justifies these developments: the kind of attack that can be performed by an evil maid. No shutdown, 30 seconds to 2 hours to perform the attack, on-site. Luckily, it's rather easy to protect against that (a solid case would do).
You can't guarantee security of anything, ever. But that doesn't mean you shouldn't try to prevent attacks.
Of course timescales and assets are important. For the movie streaming industry, it's a certain number of months before and after a stream is publicly released. After sales have been made, there are diminishing returns when trying to protect the content.
> I think it's wishful thinking that the technology you are relying on won't be broken in a one-year timeframe
It depends on what your assets are, your threat coverage and who you are trying to protect against. Some people are successful at doing this for more than a year's timeframe, which is enough to be overall successful. If you're developing something as popular and precious as an iPhone, it's far more challenging.
Nothing lasts for years. The economic side of security is always about reducing the likelihood and impact of threats until they aren't worth worrying about.
> But you can't guarantee that the security won't be broken in 30 years
Yes, you can. Easily, relative to what you already need to protect against physical tampering today. Include a real time clock, and a comparator that, after 29 years, sets off the anti-tampering magnesium/thermite/whatever charge to destroy your data storage. In fact, you probably don't even need the comparator as such; just use a carry bit from the RTC. Guaranteeing that the security won't be broken in less than 30 years (really, less than X years for whatever X you set the self-destruct to) is the hard part.
I know you can't infer it from my post, but I'm totally on the side that pretty much no device should have secrets to protect, and the vast majority of secure boot features are anti-consumer.
However, I still think it's interesting to understand how to make things secure if you need to.
Using your devices how you want is reasonable, I believe. It's reasonable if someone wants to turn their IoT hub or device into something else. However, an important issue is protecting access to cloud resources: making sure a stolen hub doesn't give an attacker remote access to someone's home and appliances.
Yes, in general, small microcontrollers are extremely fast to boot. They run code directly out of on-chip Flash memory, so there's no delay from copying kernel data from a disk into RAM. On-chip peripherals are pretty much fixed, you don't need to do feature discovery and load drivers. Booting from power-on takes single-digit milliseconds, booting from various sleep modes takes tens to hundreds of microseconds. There's nothing that's going to give you the 60-second-plus boot time of an old Windows PC.
The limiting factor is typically the system clocks, specifically the internal phase-locked loop (PLL) oscillators. The worst-case binned silicon under extreme temperature conditions might need 1000 clock cycles of a 32.768 kHz reference before the PLL stabilizes at the correct frequency, so a worst-case power-on reset can stretch from single-digit milliseconds into the tens-of-milliseconds range (1000 cycles at 32.768 kHz is about 30 ms).
Time-to-start is a critical parameter that affects battery life, functionality, and user experience, so it's something manufacturers try to optimize. These PLLs are extremely low power (single-digit microamps) and are typically among the last things to shut down when the uC is put to sleep, so boot time from deep sleep to active - for example, to read a sensor value and buffer the reading to memory every minute - might be as short as a couple hundred microseconds.

You might think that a sensor and microcontroller that draw 10 mA when active and are powered by a 200 mAh coin cell will be dead after 20 hours and be pretty useless. But if your controller draws 10 uA in sleep mode, and you boot in 200 microseconds, measure the sensor, write to memory, and go back to sleep within 400 microseconds more, you can be 'awake' for 600 us, or just 0.001% of the time, and have an average current consumption of 10.1 uA and a battery life of more than 2 years.
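Checking that arithmetic with the numbers above:

```python
active_a, sleep_a = 10e-3, 10e-6   # 10 mA active, 10 uA asleep
awake_s, period_s = 600e-6, 60.0   # awake 600 us out of every minute
battery_ah = 0.200                 # 200 mAh coin cell

duty = awake_s / period_s                       # 1e-5, i.e. 0.001%
avg_a = active_a * duty + sleep_a * (1 - duty)  # ~10.1e-6 A
years = battery_ah / avg_a / (24 * 365)         # ~2.3 years
print(f"{duty:.3%} duty, {avg_a * 1e6:.1f} uA average, {years:.1f} years")
```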
And yes, in a well-run shop, when a device costs a small fraction of an engineer's hourly rate, they'll be able to run tests on multiple units in parallel.
Agreed. One additional point: many people have some limited exposure to microcontrollers via Arduino, and from that experience may have concluded that "Arduinos take around a full second to boot" without realizing that that delay is the bootloader timeout listening for a new code download, completely unrelated to the inherent boot time of the underlying AVR.
> That's like 100ms per test. Are those devices that fast to put into uart mode and reset and wait for the result? Were they running this with multiple devices in parallel?
Whilst I expect they probably did use multiple devices, I regularly see reboot-to-UART in the range of 150-250 ms.
That being said, depending on what you do (like undervolting), you can see up to an agonising 500 ms.
"That's like 100ms per test. Are those devices that fast to put into uart mode and reset and wait for the result?"
Perhaps each run did multiple tests with the "execute arbitrary code / load_ram" CVE. And they are counting those individually as tests.
I don't know about boot time, but they are fast for a microcontroller at 160 or 240 MHz clock speed.
Edit: Various posts seem to indicate that boot time is related to application size. I saw claims of 300 ms for a small program, and closer to a full second for larger apps.
A common use case for secure boot is when I want to use my device however I want while being reasonably confident that I'm not letting 3rd parties use my device however they want. Especially if the device stores information that might be sensitive (user passwords, encryption keys, a bitcoin wallet, safe combinations, thermostat settings, whatever).
This is a good thing --- it lets you actually own the hardware you purchased! I wonder what it is about the security industry that seems to make it full of corporate bootlickers who are all too proud to find and "fix" these "vulnerabilities" and continue the trend of locking users out.
And this is why I don't understand people who think that without secure boot devices like Pinephone are insecure.
Secure boot is nice optional protection, but real security comes from the ability of the platform to reliably boot from known good media, without any ability of the software outside that media to intervene.
That is the only way you can know what you're booting/running.
Relying on some crypto implementation and allowing other people to attempt to break it is a nice defense in depth, but not the ultimate protection, and it may even induce a false sense of security.
But that defeats the point of tamper-proof hardware: it should not be modifiable via software.
Unless you make adding your own keys a one-time event and then blow fuses, I'm not sure you can have a secure boot chain along with user control over the hardware keys.
It's about ownership. I want to own my phone. That means whatever I do with it is by definition not tampering.
The procedure should involve repeated ability to add and remove keys, but require direct physical access to the chip, perhaps by means of a separate device that's coded to only work with your phone, and that you would never carry together with your phone. Lacking that, I guess "one-time event and then blow fuses" is fine too, though end-users need to be warned up front they're doing something irreversible and must be absolutely sure to protect their key.
Aren’t the trusted boot signing keys from MSFT burned in and therefore not revocable? Maybe they have multiple keys with some offline backup keys or something.
Actually the solution is to store the secure-boot key and the full-disk-encryption key in the same place, and design the hardware so that erasing the former also erases the latter. You don't need any one-time anything.
Or, if you need more flexibility, some kind of out-of-band mechanism that doesn't rely on the OS that is booting and that you are trying to verify. This could be loaded via some kind of internal memory card, or via an interface that requires access to the board. This opens you up to physical attacks, but you get a middle ground between security and flexibility.
> Secure boot is nice optional protection, but real security comes from the ability of the platform to reliably boot from known good media
Which is exactly what (UEFI) Secure Boot usually aims to assist.
I mean... one can complain about how it tries to achieve that, but whether you realize it or not, you are seemingly in full agreement with the overall design goal of Secure Boot.
As long as in-device firmware is modifiable and the protection relies on lack of bugs in its implementation, then I'm not in agreement.
What I have in mind is just a fixed boot order enforced by firmware stored in ROM, where external storage takes priority, and where you can easily control the external storage (as easily as inserting a uSD card).
TF-A, the bootloader, and the kernel are all loaded from the external storage, so no software at all is loaded from any modifiable storage that's part of the device itself, if you don't want it to be.
Any attacker-made changes to the software on the storage inside the device would be irrelevant. Such a setup seems much better than having to assume there are no bugs in the existing secure-boot-related firmware or HW.
I've wasted enough time trying to get a Bus Pirate to behave when I wanted to test something that I've pretty much given up. Now I just keep a random Arduino board around and push snippets of scratch code to it for the same purpose.
(And I say that as someone who doesn't really use Arduino boards for the final project.)
fair enough - i guess the idea of the Bus Pirate was that its flow was supposed to be faster than writing code, i.e. you want to peek & poke a target more interactively or exploratively, just to see how it behaves. but yeah, if you know exactly what you want to do, why not just skip right to writing code for it with the Arduino.
Hrmm, this vulnerability seems to only affect previous revisions of the ESP32? There was a revision made after that glitching attack around the same time last year.
ESP32-D0WDQ6 is the newest revision.
It even says so in the Security Advisory: "The ESP32-D0WD-V3 and related products support a new RSA-based Secure Boot implementation (ESP32 Secure Boot V2) and have a feature to permanently disable the UART Download Mode via eFuse. Because of these modifications, these attacks can be prevented on ESP32 V3 SoCs and modules." (If I recall correctly, that permanent disable is the UART_DOWNLOAD_DIS eFuse, burned with Espressif's espefuse.py tool; it's irreversible, so not something to do casually.)