I like the forced UX of typing something, though "continue" might be glossed over. It would be an interesting study to determine whether typing "I know the risk" is a better safety mechanism for users than "continue" (it could be A/B tested for fewer pass-through events).
Might be a good thing for apps that try to look like something else. G00GLE or FACEB00K could be missed, but having to type a zero manually instead of an O would (hopefully) eliminate that.
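For anyone curious how that A/B comparison might be scored, here's a minimal sketch using a two-proportion z-test on pass-through rates; the variant labels and all counts below are entirely made up for illustration:

```python
# Hypothetical sketch: compare pass-through rates between two warning
# variants (plain "continue" vs. typing "I know the risk") with a
# two-proportion z-test. Every number here is invented for illustration.
import math

def pass_through_z(passes_a, total_a, passes_b, total_b):
    """z-statistic for the difference in pass-through rates."""
    p_a, p_b = passes_a / total_a, passes_b / total_b
    p_pool = (passes_a + passes_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Variant A: "continue" button; Variant B: type "I know the risk".
z = pass_through_z(passes_a=480, total_a=10_000, passes_b=310, total_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly means significant at the 5% level
```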
Nice, though it would probably just cause confusion for a heap of people, and would result in them not installing the app (good) and blaming Google for it (bad for their brand).
I would have thought by now it would be pretty easy for algorithms or AI to pick up on these kinds of tricks, whether by a similarity score or an image processing approach.
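As a rough illustration of the similarity-score idea (the confusables table, brand list, and threshold below are my own invention, not anything Google actually uses):

```python
# Toy homoglyph check: normalise common look-alike characters, then compare
# each token of an app name against a list of protected brand names.
from difflib import SequenceMatcher

CONFUSABLES = str.maketrans({"0": "O", "1": "I", "3": "E", "5": "S", "@": "A"})
PROTECTED = ("GOOGLE", "FACEBOOK", "PAYPAL", "MICROSOFT")

def flag_lookalike(app_name: str, threshold: float = 0.85):
    """Return (token, brand, score) for the first brand-lookalike token, else None."""
    for token in app_name.upper().translate(CONFUSABLES).split():
        for brand in PROTECTED:
            score = SequenceMatcher(None, token, brand).ratio()
            if score >= threshold:
                return token, brand, score
    return None

print(flag_lookalike("G00GLE Drive Sync"))  # ('GOOGLE', 'GOOGLE', 1.0)
print(flag_lookalike("FACEB00K"))           # ('FACEBOOK', 'FACEBOOK', 1.0)
print(flag_lookalike("My Todo List"))       # None
```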
Because you associate it with an error that you have no control over? I find that I'm tuned to recognise patterns of behaviour, so when the patterns look similar to other things, but aren't the same, it's quite confusing.
I just can't shrug off the thought that the manual review approach is a losing game in the long run. It's a process that requires skilled humans and can't be fully automated, while generating malicious code perfectly well can be.
I think of it from a different perspective: you combine humans and machines to do better than either could alone.
Start all-manual. Perhaps you only do it for a subset of applications: pay an extra fee and you get "certified" with special app placement. Then you look for the people who are the best at finding issues, pair them with programmers, and build tools for the things that are gruntwork for them.
Build more and more tools, and you pull more and more people into the program as you build more and more intelligence into the machine.
Let the humans do the NP-hard portions. I'm sure this is what Apple has to be doing behind the scenes.
Tasks of classifying things (in this case into "approved" or "rejected") that humans can routinely do but machines find difficult are areas where ML shines.
Human reviewers today, but once the training set is large enough you can start to let computers take over, with human reviewers handling the lower-certainty cases until the certainties rise further.
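A bare-bones sketch of that routing idea (the model, features, and the 0.95 threshold are placeholders I picked, not anything Google has described):

```python
# Route low-confidence predictions to humans, auto-decide the rest.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))       # stand-in for extracted app features
y_train = (X_train[:, 0] > 0).astype(int)  # stand-in for past human verdicts

model = LogisticRegression().fit(X_train, y_train)

def review(app_features, auto_threshold=0.95):
    """Auto-approve/reject when the model is confident, else queue for a human."""
    proba = model.predict_proba(app_features.reshape(1, -1))[0]
    if proba[1] >= auto_threshold:
        return "approved (auto)"
    if proba[0] >= auto_threshold:
        return "rejected (auto)"
    return "queued for human review"

print(review(rng.normal(size=5)))
```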
Only if similar numbers of people work on each side. With thousands or millions of developers to a handful of reviewers, it may go as well as Google's other attempts at human support (read: terrible).
Charge money for that. If an app takes days to make, then you can also pay for an hour of a reviewer's work if you want to distribute it to the public.
A non-prohibitive barrier to entry wouldn't be a big issue, as users in general want stricter filters that mean less shitty apps instead of looser filters that mean simply more apps.
They already do manual review for many of their high-serving ads, and people shouldn't shy away from some human intervention in these processes.
AI and machine learning are most effective these days when they help assist people (flagging potentially malicious code, bubbling up anomalies, etc.), and it isn't that expensive to get a pair of eyeballs to double check conclusions!
It actually can be automated to quite some degree. This is basically something called an expert system. You could build an automated system to do these reviews from human reviewers and their expert domain knowledge. It might never be 100% accurate and might require human intervention from time to time, but a high level of automation of this process can be achieved.
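For what it's worth, here's a toy version of that idea, with reviewers' knowledge encoded as explicit rules. The app fields, the rules, and the example data are purely illustrative, not an actual review policy:

```python
# Toy expert system: human domain knowledge captured as explicit, auditable rules
# over requested OAuth scopes and app metadata.

RULES = [
    ("requests full Gmail access but describes itself as a game",
     lambda app: "https://mail.google.com/" in app["scopes"]
                 and app["category"] == "games"),
    ("redirect URI is not HTTPS",
     lambda app: not all(u.startswith("https://") for u in app["redirect_uris"])),
    ("brand-lookalike name",
     lambda app: any(b in app["name"].upper().replace("0", "O")
                     for b in ("GOOGLE", "GMAIL"))),
]

def review(app):
    """Return the list of triggered rules; an empty list means no rule fired."""
    return [reason for reason, predicate in RULES if predicate(app)]

app = {"name": "G00gle Mail Cleaner", "category": "productivity",
       "scopes": ["https://mail.google.com/"],
       "redirect_uris": ["http://example.com/callback"]}
print(review(app))  # -> ['redirect URI is not HTTPS', 'brand-lookalike name']
```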
Current machine learning techniques rely on being trained with manually generated data. Google's virtual Assistant learned to understand voice because Google set up Google 411 and had millions of participants train it. Google developed OCR models by setting up Google reCAPTCHA and having millions of people train it.
Manual prompts to users are a great way to develop training data and being able to distribute them at scale to millions of users means you can develop training data in very short order.
On the contrary, the lack of human review is why both the Play Store and the Chrome Web Store (especially this one) are dumpster fires with rampant malware.
Automated tasks are not good at outsmarting humans. When you want to review a human's work for security, you need humans somewhere along the process.
Automation can help those humans do their jobs, but it's simply not a solution here.
Thanks for that info. If it's set up like this, it doesn't affect me at all, rather than being really annoying to work with (since most of my apps are still in development).
But they still don't let you create app-specific passwords/tokens without enabling 2FA. How they think enabling "less secure apps" is better is beyond me. Trying to force an office full of luddites into 2FA does not go down well.
I reported a bug to Google just a couple of days ago, which is very similar to this.
It allows an attacker to present a user with a real Google 'account select' page with their account listed, but if they click that link it actually redirects them to another site (which you can dress up to look like the password page the user is expecting).
It is arguably worse than the previous issue, as I don't need a hoax extension, I can just manipulate the link to inject the malicious redirect behaviour.
They have triaged it and I'll probably write up a report once they are happy for me to do so.
I'm not sure it's worse, since it requires users to type their password into a non-google.com domain. Whereas with the OAuth phishing, everything was on google.com, so it looked legit.
No point in OAuth apps if Google has access to it all anyway. I'd rather pay for my e-mail service than use Google, whose source of revenue is directly in conflict with my interest in privacy and security.
I highly recommend protonmail.com. It has all the bells and whistles, and its major feature is user privacy and security.
Google has become the judge, the jury and the executioner of the internet. Recently a malicious user embedded an image from a site that is on Google's Safe Browsing list in a forum that is itself embedded on a third party site. This nuked a popular third party site where the forum is embedded: it is now flashing red (malicious software detected) in Chrome.
I was wondering how HN would spin this into Evil-Google. It's just tiring at this point. This is a perfectly valid security guard that protects their users.
It is, and with Google in a position of owning much of what happens online, it's also Google acting unilaterally in ways that multiple actors might not.
Both statements are true: it's a good thing, and it's not right to have one company with so much power.
There are plenty of cases where you might reasonably question how much control they have. But this is a mechanism to protect access controls to your Google account, almost the definition of something first-party that they should gate very carefully.
> you might reasonably question how much control they have
"Доверяй, но проверяй" (trust, but verify)
It's very important to always question decisions made from a position of power; vigilance is the price of freedom. Questioning does not necessarily mean disapproval. Everyone concerned about the future of the internet should be questioning Google's motives and intentions every time they unilaterally exercise their power to judge which apps are "acceptable".
In this case, it shouldn't take long to find the answers to that question: Google should have implemented this kind of dismissible warning a long time ago. Hiding the "continue" option behind the possibly-misleading "Advanced" link is a minor problem, but some sort of warning is obviously necessary. (I'm actually impressed they used a type-this-word check instead of yet another ignorable "go away" button.)
I think it really started when Google made its ill-spirited push to make everyone use Plus[1]. Before that point, Google was mostly beloved and most users were truly grateful for Google's free products. After that push, many lost YouTube accounts (including me), people who didn't even want to use Plus for chats were pushed to that page as the popular Google Talk app was killed (and then later forcibly moved from plus.google.* to hangouts.google.*).
In a fairly short period of time, people started seeing the same kind of "We're in charge and you're going to use _____ and like it!" attitude that Microsoft was once famous for. People wanted Plus about as much as they wanted Vista, but it got shoved on them anyway.
In more recent years—with AMP pages (and authors on Plus) getting an edge in search, google.com badgering users to install Chrome, a successful embrace->extend->extinguish strategy being executed against open source Android, etc—moves like this one don't look so innocent as they would have coming from the smaller, goofier and cheerier Google of 2005.
I have a theory that popular opinions within social groups can reach a sort of tipping point.
On one side of the tipping point, the company is seen as basically a bunch of good guys who may make occasional mistakes - but who doesn't? And they're always quick to correct them, or they're out of understandable commercial necessity.
On the other side of the tipping point, the company is seen as basically a detrimental force, hungry to gather and abuse monopoly power, responsive and accountable to no-one, and motivated only by things that will let them take more money and power from the likes of us.
Of course, for any company the size of Google / Microsoft / Apple / Facebook the answer is a little of column A, a little of column B.
But how people will see a given action depends on what side of the tipping point a company is on. When a company does something like buying another company's patent portfolio (for "defensive purposes"), or buying a startup and making their paid offering free, it'll confirm your existing beliefs - whichever side you were on.
If you went to Slashdot in the 90s, Microsoft could do no right - they could give free PCs to orphanages, and there'd be people saying they only did it to cement the dominance of Office.
I'd say Google is just kinda hovering around that tipping point right now, in HN's median opinion. Some people think it's on one side, some on the other.
Of course, us online-forum-readers and designers have to take some of the blame for this sort of reductionism - it's easy to comment without reading the article, or to design a system where parroting the collective opinion is rewarded with upvotes.
Well, it's legitimately founded, in that Google has a lot of power over how things are interpreted online due to its omnipresence. Once you reach the size of Google, every action is an experiment in the butterfly effect, with each change having many unintended consequences. Product launches and discontinuations on a monthly basis, sites flagged as malicious due to false positives, filter-bubbled search results causing discoverability issues, AMP; the list goes on.
Yes, some people will rag on Google for anything, but there are valid concerns when such changes are made without warning and break current functionality. Per the article, this time Google just enabled it for new apps with plans to roll it out to existing apps, but in the past Google has not been as courteous.
It's the classic xkcd situation that every change breaks someone's workflow; the difference is that when you're Google's size and have Google's reach of influence, you end up breaking a lot of people's workflows.
> This is a perfectly valid security guard that protects their users
No it's not. It's Google telling its users which apps to use and not to use. For regular users, Google presents itself (just like Apple does) as an authority on what's a good app. Any other app is automatically not good in the mind of regular users.
When that app gets access to your Google account to do things on your behalf then this is a perfectly valid strategy. Feel free to not integrate with Google's OAuth system if you don't want to deal with them.
> Feel free to not integrate with Google's OAuth system if you don't want to deal with them
Sorry but that's a bit of an ignorant non-argument. Given the size of Google, I believe that it is important that users can use the apps they want, not the apps that Google wants them to use.
Protectionism does not make using Google's services more secure. It serves Google, not its users.
Do you understand what type of apps we are talking about here? It's not Chrome apps. It's not android apps either. It's apps that make use of Google's OAuth connection to do stuff on behalf of the user. So, don't use that if you don't want to deal with the very legitimate security checks.
Again, this is NOT about Android or Chrome apps. So yeah, go use whatever apps you want, as long as it doesn't want the users to grant the developer permission to do stuff with the user's Google account, this does not affect you in the slightest.
This is any web app that /you/ granted access to your Google account details (or a wider scope). If you don't want a web app to access your account, then don't grant access. It really is that simple. Now there's an extra step: Google's stamp of trust.
I agree that for most web apps this will be an extra formality. For me personally, the key point is that there will be web apps that Google does not agree with from a business POV. That is my real problem with this.
If this list was curated by a community rather that Google, I would not have a problem with it.
I mean, I see this as a culmination of the response to the OAuth phishing attack that happened a few months ago [0]. Red flags should go up if you were accessing a site yesterday via OAuth, and all of a sudden today on the same site you get an angry full-page alert that "so-and-so wants your credentials, but they haven't been verified yet". I also noticed the presence of the domain name on the OAuth page, which certainly helps but is probably not widely noticed in practice.
It's certainly worth acknowledging that even if users aren't truly blocked, they may be turned away from legitimate apps that just haven't gone through the review process yet -- too many extra clicks, concerns about the legitimacy of the app, whatever.
I wonder how this in particular differs from the vetting process for any mobile app store (Apple's App Store in particular comes to mind with its reputation as a walled garden [1], but Android does also warn the user when enabling the installation of third-party apps not on the Play Store).
> It's certainly worth acknowledging that even if users aren't truly blocked, they may be turned away from legitimate apps that just haven't gone through the review process yet -- too many extra clicks, concerns about the legitimacy of the app, whatever.
Perhaps it should have a "tell me when they're verified then" option.
Anyone can make such an app. A malware site can just redirect a user to that page, and the user thinks they have to log in. They do so. Most users aren't very tech savvy.
For apps that aren't vetted you have to confirm that you really intended to give them access. This is good! It protects most users who aren't a genius like yourself.
Even if you're not verified, users can still grant your app access! Nothing is blocked! Google just wants to be sure the user realizes what is going on.
> No it's not. It's Google telling it's users which apps to use and
> not to use. For regular users Google presents itself (just like Apple does)
> as an authority of what's a good app.
It's not a good app/bad app question. It's a "can I trust the person who built this app with access to my private data" question that this is attempting to help with. It's about access to your data, including data that could be very sensitive.
It's an extra step, and therefore probably frustrating to a number of people - but I have a hard time seeing how taking a few minutes to declare your intentions around private data usage with the entity supplying that data for your use is a bad thing. I hope other OAuth providers go in a similar direction - that way developers don't just go with "easier" paths that are less secure for their users.