Attacking Titan M with Only One Byte (quarkslab.com)
175 points by ZeroCool2u on Aug 15, 2022 | 76 comments



I'm very happy to see that a vulnerability introduced in the May 2022 update was found, diagnosed, and fixed in the June 2022 update.

After something went wrong, a bunch of things went right very quickly. Nice to have good news on a Monday morning.


It's a bit confusing to me. The doc says:

> … the vulnerable firmware was introduced by Google's Pixel security update of May 2022.

However, further down, in the timeline section, the doc indicates that the issue was reported before May 2022, all the way back in March 2022:

> 2022-03-02: Vulnerability reported. Opened issue 222318108 in Google's Issue Tracker, providing a technical description of the bug and a Proof of Concept (PoC) exploit…


> Thanks to the leak functionality that we built with this exploit, we can now read arbitrary memory on the chip. This means we can now have access to any readable address. As a consequence, we can dump the secrets stored in the chip (such as the Root of Trust sent by the Pixel bootloader when the Titan M is updated).

> One of the most interesting consequences of this attack is the ability to retrieve any StrongBox protected key, defeating the highest level of protection of the Android Keystore. Similarly to what happens in TrustZone, these keys can only be used inside Titan M, while they are stored in an encrypted key blob on the device.

I thought the whole point of making a hardware security chip (rather than using a general-purpose microcontroller, possibly with crypto acceleration hardware) is that the private keys would be protected by the hardware design. So you could use the private key to e.g. create a digital signature, but it's impossible to read out the private key itself, outside of potential side channels.
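My mental model of that interface is something like the following C sketch (every name here is hypothetical, not the actual Titan M command set): keys are addressed by slot, and "read the key out" simply doesn't exist as an operation:

  /* Hypothetical sketch: keys live only in chip-internal storage and
   * are addressed by slot; no handler ever copies key bytes out. */
  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>

  #define MAX_SLOTS 8
  static uint8_t private_keys[MAX_SLOTS][32];   /* never leaves the chip */

  /* "Use the key" exists; "export the key" does not, by construction. */
  size_t cmd_sign(uint32_t slot, const uint8_t *msg, size_t len,
                  uint8_t sig[64]) {
      if (slot >= MAX_SLOTS)
          return 0;
      const uint8_t *key = private_keys[slot];  /* used, never copied out */
      (void)key; (void)msg; (void)len;          /* real code: ECDSA here */
      memset(sig, 0, 64);                       /* placeholder signature */
      return 64;
  }

The exploit doesn't break that interface; the arbitrary-read primitive reads the storage underneath it.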


This is amazing work!

I was surprised to see that the reward was set at 10k initially. Granted, it was bumped to 75k later, but even that seems on the low side considering the degree of compromise that occurred here.

I may have given up too early during my (fairly brief) research on CVE-2019-9465. I let the lack of firmware source code availability stop me at the time, but in hindsight the presence of "0dd0adde0dd0adde" in the ciphertext likely indicated a crash in Titan M as well. Perhaps there would have been a similarly interesting path to exploitation there.


They missed the opportunity to call the article "Attack on Titan M".


Very cool.

I wonder why companies still leave the UART pins accessible. Fine, they're on the chip, but just removing the trace to slow down attack development is surely worth the cost of a board revision...


Isn't it better to leave them exposed and make it easier for security researchers who genuinely want to test the chip? Someone interested in and capable of developing and using/selling an exploit won't be deterred by needing a special cable to get a UART console, whereas a security researcher might appreciate the simpler access.

So long as it doesn't weaken the actual security model, companies should make their products as easy to analyze as possible imo.


Then sell a dev unit surely, not a consumer grade device...


If THAT is your idea of security, I hope you do not work on any hardware whose security matters. The first thing anyone would do is find the pins and connect to them. It buys you nothing, and if anything it tells me that I should go look for them.

Visible and labeled UART pins tell me that you've (hopefully) thought through the consequences of me having access to them. Hidden UART tells me that most likely nobody ever gave that half a thought.


Do you have __any idea__ how difficult removing the chip and re-surface-mounting it for an attack is...

Removing the trace means an extra step, which is the whole point. Ffs


Yes I do. Done it. At home. For fun. Which means anyone motivated to do it can easily get it done too.


With data intact, after etching a custom PCB for a custom chip? I'd be impressed if that skillset overlaps with someone hacking bytecode.


I bet that skillset overlaps more than you think. Especially once you consider motivated teams of two or three.


Again, this additional step slows down an attack, which discourages people from attacking the product. If it achieves this, it's a valid step as part of a security strategy.

No single solution/step is 100% secure; if you think that, throw your devices away now because they're probably already compromised.

Stop with the ego-peddling security stuff and live/work in the real world, where small changes have real positive impacts.

It's the same with DRM. It doesn't have to be uncrackable; as long as it keeps the sales for the first x months, it's worked. (Not that I think DRM is acceptable, but that's another discussion.)


Sometimes, idiotic moves like that DRAW people to your product who'd otherwise pass it by. The only reason Cypress got publicly embarrassed by their secure ROM being completely broken [1] was that the way they talked about it annoyed me.

[1] http://dmitry.gr/?r=05.Projects&proj=23.%20PSoC4


I...

Please stop arguing against a step which could have been taken as part of a valid security model...

I'm not saying this would have magically fixed the chip firmware. I'm not saying this would have magically stopped anyone ever getting into the device. I'm not saying this would stop Google accidentally shipping an unprovisioned unit.

I am saying a small move that strengthens the whole unit should be strongly considered. I'm sorry that backtracking from such a flippant response is so difficult for you.

Frankly, it's a custom chip design; they could burn an eFuse to cripple the UART in production consumer units, which has the same effect for 99.99% of chips that would sell. Something like the sketch below.
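A toy C sketch of that idea; the fuse bit and "register" are invented here, since we're only speculating about the hardware:

  #include <stdint.h>
  #include <stdio.h>

  #define FUSE_UART_DISABLE (1u << 3)   /* hypothetical fuse bit */

  /* Stand-in for a fuse shadow register so the sketch runs standalone;
   * on real silicon this would be read-only hardware state. */
  static uint32_t fuse_shadow = FUSE_UART_DISABLE;

  static void uart_init(void) { puts("UART console enabled"); }

  int main(void) {
      if (fuse_shadow & FUSE_UART_DISABLE)
          puts("production part: UART fused off");   /* consumer units */
      else
          uart_init();                               /* dev units keep it */
      return 0;
  }

Fusing it off in boot code rather than cutting the trace also keeps the debug path available for factory bring-up and for unfused research samples.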


The one bit I didn't understand was how they bypassed W^X. How did they manage to get the new code marked as RX after writing it?

I thought I read the whole thing. Did I miss that explanation?


AIUI, they didn't inject any code, just mangled the stack to hijack the execution flow towards specific code fragments ("gadgets") already present in executable memory.
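A toy illustration of the technique in plain C (not the actual exploit): the attacker-controlled "stack" is just a sequence of addresses of code that already exists, and control flow "returns" through them in order:

  #include <stdio.h>

  /* Pretend these are gadgets already present in the firmware image. */
  static void gadget_load(void) { puts("gadget: load secret into register"); }
  static void gadget_leak(void) { puts("gadget: write register to UART"); }

  int main(void) {
      /* What a smashed stack effectively becomes: a chain of return
       * targets chosen by the attacker, no new code required. */
      void (*fake_stack[])(void) = { gadget_load, gadget_leak };
      for (unsigned i = 0; i < 2; i++)
          fake_stack[i]();       /* each "return" pops the next gadget */
      return 0;
  }

Since only existing, already-executable code runs, W^X never triggers.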


Hmm, doesn't ARM have mitigations against return-oriented programming?


Yeah, some devices support PAC and use that feature to sign return pointers. But not everyone uses it (even when available), and there are methods to bypass PAC, from attacking the microarchitecture to finding signing oracles.
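For intuition, a toy model of pointer signing in C (real PAC is a hardware instruction keyed per boot, typically using the QARMA cipher; this only shows the shape of the check):

  #include <stdint.h>

  static const uint64_t pac_key = 0x5bd1e995;       /* per-boot secret */

  /* Assume pointers fit in 40 bits, as in a small firmware address
   * space, so the high bits are free to hold a toy 24-bit "MAC". */
  static uint64_t pac_sign(uint64_t ptr) {
      uint64_t mac = (ptr * pac_key) >> 40;
      return ptr | (mac << 40);
  }

  static uint64_t pac_auth(uint64_t signed_ptr) {
      uint64_t ptr = signed_ptr & ((1ull << 40) - 1);
      /* real hardware faults on mismatch; we return 0 instead */
      return pac_sign(ptr) == signed_ptr ? ptr : 0;
  }

  int main(void) {
      uint64_t ra = pac_sign(0x1234);
      return pac_auth(ra) == 0x1234 ? 0 : 1;        /* authenticates */
  }

An attacker who overwrites a saved return address without knowing the key produces a pointer that fails authentication, which is what frustrates plain ROP.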


PAC (pointer signing) and Branch Target Identification are not available on 32-bit Arm chips, and judging by the assembly in the blog post, the Titan M is a 32-bit chip.


There's a new PAC specification for ARMv8-M: https://community.arm.com/arm-community-blogs/b/architecture...


There are multiple mitigations, including some compiler passes, IIRC. But the pointer authentication hardware ones defined by ARM were probably specified after Titan M's original design.


Sounds like amateur hour at that Google team. While the post's authors put the blame on the unsafety of C, the absence of user input validation, like that integer from a message, is a path to a very unhappy place independent of language. The rest of the exploited places in that Titan software seem similarly sloppy.


Most languages don't let you (or at least make it hard to) directly convert user input into memory locations though. The scope of the issue in other languages would likely be much more limited.


It is unrelated. For example, the input may be used only as a parameter for read operations, i.e. one can easily imagine a situation where, even in the safest language, using unvalidated input results in a call/query producing info outside of what would be expected for valid parameters.
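The missing step is the same in any language: treat the byte from the message as untrusted and range-check it before it parameterizes anything. A minimal C sketch (the names are hypothetical, not Titan M's):

  #include <stdint.h>
  #include <stdbool.h>

  #define N_ENTRIES 16
  static uint32_t table[N_ENTRIES];

  bool read_entry(uint8_t untrusted_index, uint32_t *out) {
      if (untrusted_index >= N_ENTRIES)
          return false;              /* reject instead of reading OOB */
      *out = table[untrusted_index];
      return true;
  }

A memory-safe language turns the missing check into a runtime error instead of a silent out-of-bounds access, but the check itself still has to exist for the logic to be correct.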


You can create special purpose languages which lean on type safety heavily to just outright forbid this sort of error at compilation. "This is an untrusted input byte, those range from 0 to 255, so the rest of the code must work for all values in that range or else that's a build error".

Languages like that aren't going to be suitable for writing a whole web browser or a desktop operating system, but they might well be enough for the Titan chip. This is the sort of work where the compilation is tricky (it definitely wouldn't fit on a small ARM core) but the machine code it spits out is much the same, except safe.


Yes, one can always imagine such a scenario. However, more often than not, real discovered vulnerabilities (that is, vulnerabilities in the real world, not some hypothetical situation) exploit permissive memory modification in C/C++. In the scenario you describe, even discovering the vulnerability would require deeply understanding the application logic. In the vulnerability discussed here, simple fuzzing was enough, because the invalid memory access was easy to detect.
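For reference, the harness that finds this class of bug can be tiny. A libFuzzer-style sketch, where parse_command() is a hypothetical stand-in for the chip's command parser built to run on the host or in an emulator:

  #include <stdint.h>
  #include <stddef.h>

  void parse_command(const uint8_t *data, size_t size);  /* code under test */

  /* libFuzzer calls this with mutated inputs; out-of-bounds accesses
   * surface as crashes or sanitizer reports, no app knowledge needed. */
  int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      parse_command(data, size);
      return 0;
  }

Build with something like clang -fsanitize=fuzzer,address and let it run.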


This is an elegant attack that effectively compromises all Titan M chips. They were even able to dump all securely stored private cryptographic keys, which Google acknowledges in the disclosure timeline.

Even so, the award Google initially gave was only $10k USD(!). They finally bumped it to $75k USD after complaint and review, but Google's bug bounty program advertises payouts of up to $1 million USD.

If fully compromising Google's own security chip to dump all private keys isn't worth the full $1 Million bounty, I honestly don't know what is.

Really, what would, in the mind of those on the internal committee, constitute justification for the $1 Million bounty?


From my point of view, the fact that the exploit requires either running as root on a rooted device or direct physical access to an internal serial bus is enough to justify not paying at the top level.

These things should probably be more transparent, but I would assume the $1M level would be for exploits that could be deployed on a fresh-from-the-box device with no rooting/mods.


Is internal serial bus access really an issue? Your phone could get stolen.


Sure, but it's a sliding scale, right? Zero-click network exploits are more severe than interactive exploits, which are more severe than those requiring physical access, etc.


Quoting a random article from when this was initially announced:

> Google says that if researchers manage to find "a full chain remote code execution exploit with persistence" that also compromises data protected by Titan M, they are willing to pay up to $1 million to the bug hunter who finds it.

So a compromise that doesn't require physical access or root, presumably.

Cue also the inevitable discussion that bug bounties are too low.


> Really, what would, in the mind of those on the internal committee, constitute justification for the $1 Million bounty?

Probably something that doesn't require prolonged physical access to the device in order to extract the keys?


Remote 0-day on all Google internal infra?


That's literally worth billions, and could be sold to many governments.

If the right people don't buy these zero-days, the wrong people will.


Billions? Couldn't a government just get a person affiliated with them hired by Google?


Most Google employees do not have direct access to all infra.


[flagged]


Probably an error-shift-encoded one-time pad scheme. The post is the publicly distributed ciphertext.


Google's bug bounty program is a sham. When I tried to report a critical vulnerability in their authentication system, they told me it was a 'feature' and threatened to sue if I disclosed it.

Then they fixed it two and a half years later, and wrote an article about how complicated a bug they had just found and how proud and secure they are with their pentesters.


On what grounds could they sue you if it was truly considered a "feature", and they used that exact terminology in their response?


You have inadvertently discovered the value of pentesters, which is making the handling of security issues a win for management.


Do you have a link?


It was a looong time ago (2011). You could use an app-specific password to change the master password and disable 2FA, with a script.

Funny thing: I had to discover this myself when I lost the phone with my Authenticator app. I took my mail client password and discovered that, with some header magic, I was able to hijack my own account. I couldn't believe it. So I created another account, protected it with 2FA, and did the same thing. Got gaslighted by the Google bug bounty team and decided it wasn't worth it.


This article [1] includes a quote describing a specific vulnerability that would pay the $1 million.

> "We are introducing a top prize of $1 million for a full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices. Additionally, we will be launching a specific program offering a 50% bonus for exploits found on specific developer preview versions of Android, meaning our top prize is now $1.5 million," Jessica Lin of the Android security team said.

> The Titan M bounty applies to the Google phones that have the chip, which include the Pixel 3 and 3 XL, 3a and 3a XL, and 4.

[1] https://duo.com/decipher/hack-the-titan-m-get-usd1-million


For a 2018 chip, and for a company like Google, the decision to go with C despite all their knowledge of C/C++ memory issues (hello, Chrome) is a bit sad.


I would imagine it wasn't a hard decision: they needed to build something in an environment where they were already paying 40-80k C++ developers, versus maybe 1-10k Rust developers scattered around various teams who may not have wanted to hop to a new team right then. Also, it was released to the consumer market in 2018, so it was probably built in 2016-17. Rust and other memory-safe systems programming alternatives didn't have nearly the same uptake back then, so 1-2k would be a safer bet. Not to mention, Rust didn't get tier-1 ARM support until 2021, and even then only when running on Linux, which the Titan M chip is most likely not running.


I think the $1M is probably reserved for something that would tank Google stock imo, but maybe I'm cynical.


Nah, you're not cynical enough. My cynical take is there is nothing they would ever pay $1M for. "Up to" is a marketing term to get people to think that if they work hard enough, they might qualify for this mythical unicorn bounty, but at the end of the day they just get peanuts.


"You might win this car!"


Really sign me up!


First, we'll need you to fill out this form with some basic information:

  --First, Middle, Last Name
  --Phone number, email address, social contacts
  --Mother's Maiden Name
  --First concert you attended
  --Name of the street you grew up on
  --Name of your best friend
  --Name of your first pet
  --Make/Model of your first car


I would imagine the higher bounty would be for extracting device keys. From my reading, this exploit would not allow you to extract the key needed to unlock a powered-off device. I'm no security expert, so please correct me if I'm wrong.


>2022-06-20: Quarkslab sent Google a new exploit that demonstrates code execution on the chip and exfiltration of encryption keys from it. A detailed description of the exploitation technique, the exploit's source code, and a video showing its use to exfiltrate a StrongBox-protected AES key were provided.

This sounds close enough to me, but perhaps there's some subtle nuance between device keys and other keys in the chip.


> perhaps there's some subtle nuance between device keys and other keys in the chip.

That's what I'm wondering too, particularly this line from the mitigations section of the report:

>> However, we do want to point out an interesting feature that would have made the StrongBox key blob leak impossible. Indeed, an application can create a key that is authentication-bound, specifying setUserAuthenticationRequired(true) when building it with KeyGenParameterSpec. This way, users need to authenticate before using the key and the key blob is encrypted a second time using a special key derived from the user password that we do not have.

So unless your phone doesn't have a password, I don't see how they could retrieve device keys.
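Structurally, the protection the report describes is double wrapping, something like this toy C sketch (XOR stands in for a real AEAD and all names are made up; this is only the shape, not real crypto):

  #include <stdint.h>
  #include <stddef.h>

  static void toy_wrap(uint8_t key, const uint8_t *in, uint8_t *out, size_t n) {
      for (size_t i = 0; i < n; i++)
          out[i] = in[i] ^ key;                 /* placeholder for AEAD */
  }

  /* Layer 1 uses a device key; layer 2 uses a key derived from the
   * user's password. Dumping chip memory yields the device key but
   * not the password-derived one, so the blob stays opaque. */
  void wrap_auth_bound(uint8_t device_key, uint8_t pw_derived_key,
                       const uint8_t *keymat, uint8_t *blob, size_t n) {
      uint8_t inner[256];
      if (n > sizeof inner)
          return;
      toy_wrap(device_key, keymat, inner, n);   /* layer 1: device key */
      toy_wrap(pw_derived_key, inner, blob, n); /* layer 2: user auth  */
  }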


Yeah, okay I definitely missed that part. Makes more sense. Still, the bounty seems shockingly low. They probably could've gotten a lot more for it on the open market.


Yeah, I think it's a noteworthy attack and definitely worth more than $10k. Also, Google should be more open about how bug bounties are evaluated.


Those committees have just lost contact with the real world. Ridiculous.


I guess there's someone who orders the marketing department to say "$1 million" while telling the operational side "$10k", because his bonus rides on it.


That's not how it works. Bug bounties work like this.

Somebody sets up a bounty program and defines a framework for deciding how much to pay out. Security is complicated as hell and you cannot possibly devise a framework that accounts for all possible things so this framework is necessarily brittle. For example, you might reasonably decide that the highest payouts require very minimal attacker capabilities (fully remote unauthenticated attacks being the top payouts). This makes sense since those are the easiest attacks to mount.

So now a bounty comes in. It goes to a triage person or, at best, a small group. They refer to the framework. Your bug doesn't really match any of the categories but it kind of looks like this thing over here so it gets bucketed as such. Maybe there is some discussion. Ultimately, the rules say "max payout requires unauthenticated remote attacks" so the payout ends up lower, even if the attack is exciting. Maybe somebody managing the system takes a note to update the framework and policy moving forward. The community then rages about how this bug is actually a big deal and deserves a lot of reward.

In my experience, the people managing these programs get rewarded based on the amount they pay out going up, not down. But you need a payment framework otherwise each bug is paid out on somebody's whim (and trust me, the security researchers will complain to high heaven if they perceive inconsistency in bounty sizes). So you end up with novel bug structures that aren't handled well by the framework and get treated weirdly.


I'm still surprised by the $10k payout given the disclosure timeline. For example, they indicate that on 2022-05-04 there was a "conference call between Quarkslab engineers, Google Android Security Team members, and a Titan engineering team member." Since there were both Android security team members and a Titan engineer on the call, there were clearly engineers in the loop who understood the technical details of the issue as well as its severity. The initial $10k payout was made on 2022-06-07, a month later.

One would imagine that this would have been escalated to some pretty senior security folks at Google before the payout was decided. That would mean that there would be some amount of discretion on Google's end as to the payout, since there would (presumably) be someone senior enough to look at this closely and authorize a higher amount even if there was some rubric that might seem to award a lower amount. Obviously this is ultimately what happened, as they eventually did increase the payout. It's a little strange to me though that this wasn't done sooner.


> One would imagine that this would have been escalated to some pretty senior security folks at Google before the payout was decided.

Bug bounties are routine. "How much do you want to pay out" is way down the list of things that leadership is focused on for these things. "How do we mitigate this" and "how does the researcher get paid" are often questions owned by different people and teams. Directors aren't swooping in to make payout decisions.


> As a reminder, there are two conditions to perform this attack. First, we need to be able to send commands to the chip, either from a rooted device (required to use nosclient), or physically accessing the SPI bus.

> Then, we need a way to access the key blobs on the Android file system, which can be done again by being root, or with some exploit to bypass File Based Encryption or the uid access control.


For a moment I thought this was something related to the anime.


I myself was pulled in hoping for a historical tale of attacking vulnerabilities in Titan ICBMs.


Just curious: is the usage of the acronym here a soft euphemism? (So that only those who know, or care to know, get it.)

edit: Thanks for the clarifications. That helps. I'm asking for 2 reasons:

  1. Discussing "nukes" openly where I come from would raise some eyebrows.

  2. I see acronyms used on HN frequently - sometimes ambiguously even considering the context.


I don't think so; people could just say "nukes", but there are plenty of nukes that aren't capable of hitting targets halfway around the world. ICBM seems like the fastest way of saying "nukes that can hit stuff really far away" while also distinguishing them from sub-launched and cruise-missile-launched nukes.


The payload on the missile doesn't need to be a nuclear weapon of any sort. The important thing about the ICBM is that being further away doesn't stop it. Nuclear weapons are the obvious choice because it's hard to imagine why you'd want to strike something so very far away, at such great expense, with conventional explosives.

The German V2 rocket from World War II had a maximum range of about 320 km, so you literally couldn't fire one from, say, Berlin to London. They were actually launched from coastal sites in the Netherlands and other occupied countries, and as the Allies took territory after Overlord, the targets changed to cities nearer Germany because the launchers were pulled back.


ICBM is a very common term. For example, the NY Times uses it in headlines. https://www.nytimes.com/2022/05/24/world/asia/north-korea-ba...


That would be "only one bite"


A lot of software can be cracked "with only one byte". Finding which one is the hard part.

Don't lose sight of the fact that the purpose of this and other TPM-like devices is to hide secrets from its owner.


> hide secrets from its owner.

It makes sense to use exactly the same technology even if you are "the owner", unless you are somehow only ever running software you wrote on data you obtained, and maybe not even then, if other people are able to influence that data.

Most of us use a lot of software we didn't write to process data we got from some third party who may or may not have our best interests in mind.


> Don't lose sight of the fact that the purpose of this and other TPM-like devices is to hide secrets from its owner.

That's a complete misunderstanding of a TPM's security model. A TPM guards against key theft in a compromised environment by securely storing artifacts and authenticating the platform. The user doesn't enter this threat picture. It is the platform that gets authenticated, not the user.
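A toy sketch of the "authenticate the platform" idea in C (this mimics a TPM PCR: a measurement register that can only be extended, never set directly; the hash is a placeholder, not a real TPM API):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t pcr;   /* toy 64-bit "PCR"; real TPMs use SHA-256 banks */

  static uint64_t toy_hash(uint64_t acc, const char *data) {
      uint64_t h = acc ^ 0x9e3779b97f4a7c15ull;
      for (const char *p = data; *p; p++)
          h = (h ^ (uint64_t)*p) * 0x100000001b3ull;   /* FNV-style mix */
      return h;
  }

  /* Extend-only: the final value commits to every stage measured. */
  static void pcr_extend(const char *stage) { pcr = toy_hash(pcr, stage); }

  int main(void) {
      pcr_extend("bootloader");
      pcr_extend("kernel");
      printf("platform measurement: %016llx\n", (unsigned long long)pcr);
      return 0;
  }

A key sealed to the expected final measurement is only released when the platform booted the expected software, which is orthogonal to who the user or owner is.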


SafetyNet and "authenticating the platform" are used for remote attestation, so app authors can deny phones access to apps and services if they're running root/kernel-level CPU code chosen by the user (or by an attacker without access to the phone's OS signing keys) rather than by the phone's manufacturer.


That's a great example. Thanks!


Well, there are cases where the owner and the current user aren't one and the same, like an ATM terminal, or when someone else is using your computer.

The whole security landscape seems to have many catch-22s.





