I applaud your creativity, Ken! However, this concept has been explored before, and in fact, you could think of U2F tokens as devices with private keys.
We've already gone through a number of solutions like this, and regardless of the tech behind it, this is just another example of an app-approval flow. Twitter & Facebook allow you to approve logins from already-approved apps, and while their underlying tech is different from yours, your server can still change the tech maliciously.
For a brilliant app idea, see Kryptonite[1]. It lets you install a browser add-on that pretends to be a U2F key, but the private keys are stored in your phone's secure key storage (Keystore on Android, Secure Enclave on iOS). The experience is similar to what I assume your approval-from-a-phone flow would be, but it works on all websites that support U2F.
For your own site, I'd recommend using U2F tokens and TOTP keys; they're considered really good solutions, and while SMS is considered bad, it's still better than no 2FA. Please see the post by Troy Hunt[0] on 2FA from late last year. It covers more than I possibly could.
Sorry for the negative critique, but I'm sure you'll build something great next time ;) Instagram was first launched as a Foursquare clone[3].
I'd like to point out that U2F has been superseded by WebAuthn[0]. It's backwards compatible with U2F keys and doesn't require a bunch of JS, since support is built into browsers[1]. Some hardware/software combinations let you use the platform's native authenticator instead of an external key, such as Chrome with Android biometrics, Chrome with MacBook Touch ID, and Edge with Windows Hello biometrics. Besides 2FA, it can also be used to build a passwordless login experience[2].
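For anyone who hasn't seen the API: here's a minimal sketch of registering a credential with a platform authenticator. The server-supplied values (challengeFromServer, userIdBytes) are placeholders I made up; a real flow fetches them from your backend.

    // Sketch: registering a WebAuthn credential with a platform authenticator.
    const credential = await navigator.credentials.create({
      publicKey: {
        challenge: challengeFromServer,        // random bytes issued by the server (placeholder)
        rp: { name: "Example", id: "example.com" },
        user: { id: userIdBytes, name: "alice", displayName: "Alice" }, // placeholders
        pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // -7 = ES256
        authenticatorSelection: {
          authenticatorAttachment: "platform", // Touch ID / Windows Hello / Android biometrics
          userVerification: "required",        // needed for a passwordless login flow
        },
      },
    });
    // The server stores the credential id and public key, then verifies
    // signed challenges against that key on subsequent logins.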
You could think of FIDO Security Keys as having private keys, and in a sense they do, but you're more likely to mislead yourself if you think of them this way. Keys are tied up with identity, and one thing FIDO tokens deliberately aren't designed to do is have an identity.
The trick WebAuthn/U2F pull off is as if, magically, the device had an effectively infinite number of private keys: not just one, or one per site, but one for every enrollment on a site. If Alice and Bob share the same FIDO key for, say, Facebook, the only way even Facebook itself could discover this is to ask Alice's key to authenticate as Bob, which will work. Every attempt at such a guess requires a human interaction (a button press) and risks discovery. So probably no one will ever try.
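To make the "infinite keys" trick concrete: one way authenticators implement it (a sketch of the general idea, not any particular vendor's scheme) is to store a single master secret and derive a fresh key per registration, handing the site a random key handle that lets the device re-derive the same key later.

    // Sketch: derive a distinct per-registration secret from one master secret.
    // masterKey is the device's only stored secret, imported for HKDF via
    // crypto.subtle.importKey("raw", ..., "HKDF", false, ["deriveBits"]).
    async function deriveRegistrationSeed(
      masterKey: CryptoKey,
      rpId: string,          // the site ("relying party") this registration is for
      keyHandle: Uint8Array, // fresh random bytes, handed back to the site at enrollment
    ): Promise<ArrayBuffer> {
      return crypto.subtle.deriveBits(
        { name: "HKDF", hash: "SHA-256", salt: keyHandle, info: new TextEncoder().encode(rpId) },
        masterKey,
        256, // a real authenticator would seed a per-registration ECDSA key pair from these bits
      );
    }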
> The trick WebAuthn/U2F do is equivalent to if somehow, magically, the device could have an effectively infinite number of private keys, not just one, or one per site, but one for every enrollment on a site.
The Ledger Nano S (a hardware crypto wallet) has a virtually infinite number of public/private key pairs. In fact, all hardware wallets that implement "hierarchical deterministic wallets" do. All the wallet needs to determine the private key for one of its public keys is the "derivation path" of that pair. Even more interesting: if one site knows one of your public keys, it can't determine the rest of them without knowing your "master public key".
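As a toy illustration of deterministic derivation (this is NOT real BIP32, which involves chain codes and elliptic-curve arithmetic; it only shows the shape of the idea):

    // Toy sketch: each child secret is an HMAC of the parent secret and a child
    // index, so one master secret yields a practically unlimited tree of keys.
    async function childSecret(parentSecret: Uint8Array, index: number): Promise<ArrayBuffer> {
      const key = await crypto.subtle.importKey(
        "raw", parentSecret, { name: "HMAC", hash: "SHA-512" }, false, ["sign"],
      );
      // A "derivation path" like m/44'/0'/0'/0/5 is just a list of such indices,
      // applied one level at a time starting from the master secret.
      return crypto.subtle.sign("HMAC", key, new Uint32Array([index]));
    }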
Along with U2F, see also Authy and Keybase. Each asks for an account ID, then uses a prior authenticated device to assert that the new device is OK to authenticate. Keybase definitely uses a genuine PKI to make this happen. Authy might not; it's hard to tell from the app's outward behavior.
Upside is the convenience of not needing the original account passphrase to add new devices. Depending on your threat model, demanding a passphrase to add a new device might not add anything to the security.
Downside (I imagine) is that people can lose their passphrase long before they notice, while the account keeps growing in value; they only realize it once they've lost or destroyed their last authenticated device. Then they're screwed, because they no longer have any way to get back in.
SMS as a second factor is not a net reduction in security. The ability to hijack a phone via number porting or similar could give access to SMS messages, but by definition a second factor should never reduce your security.
What does reduce security is the use of SMS as a password-reset mechanism, or any similar method that uses SMS as the only factor for authentication. Don't do that.
It gives users a false sense of security and providers an excuse not to implement something better. Most (all?) of those who only support SMS 2FA also get the part about no resets wrong.
SMS does still help to mitigate a common attack: folks trying credentials from password dumps of other sites. But I don’t disagree that we need to move on when we have more options to choose from. Right now the best-looking option is WebAuthn with platform authenticators.
This to me seems like faux security. The server can just send its own custom key to an already authenticated device, which will gladly encrypt the main private key and hand it back to the server, which can then decrypt it. Even if the previously authenticated device tries to communicate with the new device directly rather than via the server, presumably the server could also give its own IP as the new endpoint.
I get that security is a bunch of trade-offs, but this post seems to present it as a foolproof method, since the goal is to prevent the server from being able to see the private key ("how can the user's private key be transmitted in a way that doesn't reveal their credentials to the server"), and the given solution doesn't actually fulfil this goal.
In that case, the authenticated device could have the ability to show the public key before accepting a request, and match it with the public key generated on the new device. But yes, at some point you'll have to put trust in the server not doing malicious things like that, which is one reason it's open source
Sorry for the blunt words, but that's bullshit. If there is no proof the client is secure from a malicious server, then this system is useless from a security point of view. You can have an open-source solution, but you might still be running a tweaked version.
The key is to have an open source client and a protocol that protects against a malicious server.
One way to do this is to have the new device generate a random passphrase, display it on the screen, and require it to be typed into the already authenticated device. Then the devices can use a PAKE (password-authenticated key exchange) with that passphrase to establish a secure channel between each other. Even if the data still goes through the server, it's encrypted and the server can't read it.
Another method is to have the new device display its public key as a QR code and have the existing device scan it.
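A sketch of that second method, with WebCrypto ECDH + AES-GCM as stand-ins (the passphrase variant would want a real PAKE such as SPAKE2, not a bare KDF):

    // Sketch: the new device shows its ECDH public key as a QR code; the old
    // device scans it, derives a shared AES-GCM key with its own ephemeral key
    // pair, and relays ciphertext through the server, which can't read it.
    async function makeChannelKey(myPriv: CryptoKey, theirPub: CryptoKey): Promise<CryptoKey> {
      return crypto.subtle.deriveKey(
        { name: "ECDH", public: theirPub },
        myPriv,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"],
      );
    }

    async function sealForNewDevice(channelKey: CryptoKey, payload: Uint8Array) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, channelKey, payload);
      return { iv, ciphertext }; // the old device sends this plus its own ephemeral public key
    }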
I think the trick is to push the trust up a level to the platform owner (you have to trust someone at some point) via WebAuthn or something. If you do that, then the browser itself can be the one to be trusted to show the actual public key being added. As long as you are relying on the server itself to serve trustworthy JS to show and validate the new public key, you are kinda stuck.
It is important to remember that this security model is not that much different from every other JavaScript app in which the app logic is delivered from the server.
"We only handle keys clientside" is irrelevant when the server can ship code, at any time, that delivers keys, passphrases, or plaintext anywhere they like.
If I understand right, when an existing user (who already has an account and a device) wants to add a new device, they…
• On new device, make a key pair.
• On new device, log in to server and send new device's public key to server.
• Server sends approval request to old device, which includes the new device's public key.
• On old device, receive the approval request and the new device's public key. Decide to approve new device.
• On old device, take old device's _private_ key, encrypt with new device's public key, and send encrypted blob to server.
• Server relays approval message to new device.
• New device decrypts blob, and deletes its original key-pair (that was generated at the start). New device now uses old device's private key. The public key is derived from the private key.
So in the end, old device and new device have the same key pair (see the sketch below).
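A sketch of those last steps, with WebCrypto RSA-OAEP and ECDSA standing in for whatever the post actually uses:

    // Sketch of the approval step as described above. The old device wraps its
    // exported private key with the new device's public key; only the new
    // device can unwrap it. (As noted elsewhere in the thread, trusting the
    // server to relay that public key honestly is the weak point.)
    async function approveNewDevice(oldPriv: CryptoKey, newDevicePub: CryptoKey): Promise<ArrayBuffer> {
      const exported = await crypto.subtle.exportKey("pkcs8", oldPriv); // key must be extractable
      // Raw RSA-OAEP only fits small payloads; fine for a P-256 key, but a
      // real system would use hybrid encryption.
      return crypto.subtle.encrypt({ name: "RSA-OAEP" }, newDevicePub, exported);
    }

    async function acceptApproval(newDevicePriv: CryptoKey, blob: ArrayBuffer): Promise<CryptoKey> {
      const pkcs8 = await crypto.subtle.decrypt({ name: "RSA-OAEP" }, newDevicePriv, blob);
      // Import the old device's key and discard the pair generated at the start.
      return crypto.subtle.importKey("pkcs8", pkcs8, { name: "ECDSA", namedCurve: "P-256" }, true, ["sign"]);
    }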
Please let me know if I got that wrong!
Assuming I got it right, this strikes me as problematic, because it means that all the devices are sharing a private key. If any one device were compromised, then all past messages would be exposed; if the compromise were to go undetected, then future messages would also be compromised.
I agree. You would be better off with a private key per device and more logic on the server side (multiple devices per user), allowing for easier revocation of stolen devices.
“Linking multiple devices” via their public keys is still fine, since the user has to log in once with email/password (or by some other means).
If you were to require multiple private keys to decrypt a message, that would be a different thing - maybe to ensure that in a group chat only the current members can decrypt the messages, with a new group secret generated from all members' private keys whenever a member leaves or joins. But then what would be the benefit of this?
Moving the private key seems unnecessary. Wouldn’t it be better to keep the key locked in the secure enclave on each device, and allow multiple keys for each user? Then devices can also be removed from the account.
Isn't the whole point to keep the private keys in some kind of secure element that is tamper resistant and never exposed? On iOS you can't even export private keys generated on the secure enclave, which is exactly how I expect it to work. There are other ways to securely transfer accounts without creating vulnerabilities.
As noted elsewhere in this thread, I’m not advocating this homegrown solution. But, I will play devil’s advocate a bit. Yeah, sure, a Secure Enclave key per device is more secure than not. But, what is your threat model where this is the priority? The only realistic threat model where the enclave (or hardware security key) would come into play is malware running on the device. But, in that situation, you are probably hosed anyway. The malware can read browser session cookies. Or, it can interact with the enclave or external dongle in ways that would grant them access to whatever site they want by sending bogus site challenge requests for site X to the enclave/dongle when you meant to approve a challenge for site Y.
I personally think there is potentially a big security usability win by using a securely (key part here) synced private key (via iCloud Keychain for example).
I’m not advocating for this homegrown solution, but moving private keys around isn’t necessarily the worst idea ever. A unique key per device requires explicit registration on each device. That is “better security”, but also potentially far worse security user experience. Security user experience isn’t to be undervalued. 2FA today is miserable precisely because of the tragic security user experience. Folks don’t understand how fragile things are if they drop their single device with Google Authenticator in the lake. I think a securely synced shared key could be a huge usability win with only nominal security downside. For example, I think Apple could pull this off pretty darn well using their iCloud Keychain system.
This is my thought as well. Keep the "broadcast a notification to all other devices registered to the user" part to verify that the new one is authorized, but instead of copying the private key to the new device, tell the server to authorize the newly-generated private key.
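The server side of that model is small. A sketch (all names made up): keep a set of public keys per user, verify login challenges against the claimed device's key, and revoke a device by deleting its key.

    // Sketch: one public key per device, verified server-side.
    const devices = new Map<string, Map<string, CryptoKey>>(); // userId -> deviceId -> public key

    async function verifyLogin(
      userId: string,
      deviceId: string,
      challenge: Uint8Array,  // random bytes the server issued for this login
      signature: ArrayBuffer, // produced by the device's private key
    ): Promise<boolean> {
      const pub = devices.get(userId)?.get(deviceId);
      if (!pub) return false;
      return crypto.subtle.verify({ name: "ECDSA", hash: "SHA-256" }, pub, signature, challenge);
    }

    function revokeDevice(userId: string, deviceId: string): void {
      devices.get(userId)?.delete(deviceId); // a stolen device's key stops working immediately
    }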
Interesting to see a Hacker News expert coming up with something built from good buzzwords: two-factor and public-key crypto.
That said, I like the Apple approach better here.
No forward secrecy?
Passing private keys all over the place?
Look at how Apple does it. Private keys stay on the device in the Secure Enclave; even the user cannot export them from there. I don't think even system root (which the user and software generally don't have access to) can export private keys.
Adding a new device to the account involves the old device generating a funny-looking wave pattern, which the new device scans. This has always worked fine for me. I assume Apple then adds the public key of the new device to my account.
Ken, thanks for posting, though I imagine it presently feels like you have been nuked from orbit. Once you've digested what's been said here, please consider either updating the blog post with a warning or, possibly, a retraction. I suspect you will be getting some pagerank generated traffic and it would be best for those folks to get a fair warning. Maybe even point everyone back to this discussion, which has been fairly informative for many people.
It seems like you do not verify the public key of the new device (client-side) before encrypting the authenticated device's private key to it?
Your server can then impersonate a "new device": generate an ephemeral key pair, send the public key to all the user's authenticated devices, and perhaps trick the user into encrypting their private key to it. One way to prevent this is to show some sort of fingerprint on both the new device and the authenticated device, proving that the public key is the correct one.
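For example (a sketch): both sides hash the raw public key and display a short string for the user to compare, approving only on a match.

    // Sketch: a short, human-comparable fingerprint of a public key.
    async function fingerprint(rawPublicKey: Uint8Array): Promise<string> {
      const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", rawPublicKey));
      return Array.from(digest.slice(0, 8), b => b.toString(16).padStart(2, "0"))
        .join("")
        .replace(/(.{4})/g, "$1 ") // group as "a1b2 c3d4 ..." for easier reading
        .trim();
    }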
1) According to experiments, users associate the term "fingerprint" with crime, not security.
2) Users, even ones who know what fingerprints are and care, do not validate fingerprints. The rest of the users (the other 99.9%) don't either. This is not a good UX.
I don't know about 1 but definitely agree with you on 2.
You could make validation part of the workflow (before you can even "say yes", you need to scan a QR code for example). You might say this is also not good UX and I wouldn't disagree. Cryptography + good UX is a hard problem.
…which forms the second factor, the "something you have". (The private key.)
I don't really see how it's very different from an external security token that does basically the same thing. (Though an external security device can make it very difficult for something malicious to get at the private key, which is good.)
Firstly, this is just authenticating the same factor over a different channel. (Actually, the exact same credential.) So it's not 2FA.
Secondly, if I can successfully get you to approve my login request then I not only get logged in but also receive a copy of your private key? That is a really bad failure mode.
Sure, but: phone, tablet, laptop, desktop, office desktop, etc. I want my chat apps to work on all of them, be synced on all of them, and be secure.
(Though I'm not convinced the proposal in the article is all that secure - as others have pointed out, it seems trivial for an evil server to obtain the private key..)
This feels vaguely like a homegrown version of WebAuthn, but without any of the niceties of having the browser enforce origin protections and ensure the credential isn’t lost if cookies/local storage are cleared.
I’m curious to know what benefits accrue from sending the user’s private key over the wire (even encrypted). It seems a strange concept, at odds with ephemeral key usage.
What happens if a user is authenticated in a single device and that device breaks? Is there a way for the user to authenticate with another device then?
[0]: https://www.troyhunt.com/beyond-passwords-2fa-u2f-and-google...
[1]: https://krypt.co/
[3]: https://www.theatlantic.com/technology/archive/2014/07/insta...