Hacker News
Google Will Eat Itself (2005) (gwei.org)
208 points by slater on April 30, 2019 | hide | past | favorite | 65 comments



Probably don’t visit this site if you have epilepsy.

Discussion from reddit (~7 years ago): https://www.reddit.com/r/todayilearned/comments/htidn/til_of...

And from hn circa 2008: https://news.ycombinator.com/item?id=273337

Definitely a hoax (but a really cool one, indeed) by the art collective Ubermorgen.


Probably? I'll counter: very improbable. It's mostly a blur effect, and the amount it shifts is very small.

Edit: I hope whoever downvoted has actual evidence of danger.

Also, the danger range is generally around 12–20 Hz, and that background is at 4 Hz.


I don't care if it's dangerous. It's annoying and makes me feel uneasy. I had to block it with uBlock.



I was wondering what you were referring to (later comments clarified this).

Once again, I'm happy that I don't allow scripting to run in my browser by default -- I never saw the effect you are talking about.


Just to be clear, there's no scripting involved, it's a plain image file set as the page's background.


Interesting. Then I can't explain why I don't see the effect.



"202.345.117 Years until GWEI fully owns Google."

In less than a galactic year!


This reminds me of Zeno's paradox: even though you keep buying Google stock, each purchase is a smaller and smaller fraction, so even if you buy to infinity you still never own all of it.


Zeno’s paradox only seems like one before you learn differential calculus.

It’s true that dx goes to zero, but dt goes to zero along with it.

We have figured out that this converges to an actual number, in a system that is totally consistent and makes sense.

x0 + sum of all the dx converges to x1 just as t0 + the sum of all the dt converges to t1. And that’s the exact time when Achilles catches up to the tortoise.

Zeno’s paradox ignores that dt is going to zero, and always acts like the sum of all the dt is infinity, thus “Achilles will never reach the tortoise” is wrong.

Simple!
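The convergence argument above can be checked numerically. A quick sketch, with made-up numbers (Achilles at 10 m/s, the tortoise at 1 m/s with a 9 m head start, so the closed-form catch-up time is 9 / (10 - 1) = 1 second):

```python
# Illustrative Zeno simulation with assumed speeds; each loop iteration
# is one "Zeno step" (Achilles closes the current gap, the tortoise
# creeps ahead a little).
achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 9.0

gap = head_start
total_time = 0.0
for _ in range(50):
    dt = gap / achilles_speed      # time for Achilles to close the current gap
    total_time += dt
    gap = tortoise_speed * dt      # how far the tortoise moved in that time

print(round(total_time, 9))  # 1.0 -- the dt sum converges, not diverges
```

The dt values form a geometric series with ratio 0.1, so their sum is 0.9 / (1 - 0.1) = 1, not infinity.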


You don't need 100% though, just 51%


There was a time where you could actually make money by having text ads. But then the whole ecosystem got diluted by fake impressions and clicks. One easy solution would be to require people to solve "captcha", for example when logging in, before being served a text ad.


Have you ever browsed the web using Tor? Cloudflare hits you with a captcha on every other site, and it gets annoying very fast. You don't want this experience for the whole Internet.


We changed our handling of the Tor Browser literally years ago.


Using US Tor exit nodes (as an exercise or obligation, not out of personal necessity), I often see access denial pages by Cloudflare, up to and including within the last day.

Cloudflare is much more permissive if you enable JS for the site. Starting from a Cloudflare error page that denies you access (presumably because of Tor), you might reluctantly decide to enable JS and reload; then you'll probably sit on a different Cloudflare page for several seconds before the actual content page finally loads.

For the scenario above, I don't know whether the site owners (usually news sites, in my case) are aware that their site has been made difficult or more dangerous to access, for some people who feel threatened/oppressed in their countries.

I appreciate that Cloudflare infrastructure is probably under constant abuse, including through Tor, and of course I don't have any snap answers for what they should do differently. What I mention above is just some of the behavior I can see.


The site owners might be explicitly blocking Tor. Cloudflare sets the "CF-IPCountry" header to "T1" when the request comes from a Tor exit node, so site owners can block those requests via a Cloudflare firewall rule or at their origin. It can be tempting to block all of Tor if you keep getting attacked via the Tor network, even if you're a news site.
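A minimal sketch of what such an origin-side check might look like. Assumptions: Cloudflare's country header is `CF-IPCountry` and takes the pseudo-country code `T1` for Tor exit traffic; the dict-based request model here is purely illustrative.

```python
# Illustrative only: inspects the country header Cloudflare attaches to
# proxied requests. "T1" is the pseudo-country code used for Tor.
def is_tor_request(headers: dict) -> bool:
    """Return True if Cloudflare tagged this request as Tor exit traffic."""
    return headers.get("CF-IPCountry", "").upper() == "T1"

print(is_tor_request({"CF-IPCountry": "T1"}))  # True  -> came via Tor
print(is_tor_request({"CF-IPCountry": "US"}))  # False -> regular visitor
```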


> if you keep getting attacked via the tor network

This is why we can't have nice things. Given enough time and traffic, eventually the worst-behaving actors will poke at everything vulnerable any way they can, until everything gets locked down so tight that everyone with half a brain develops acute paranoia and an intrusive sense of defeat. I honestly think it's genetic, though, and there will continue to be a lack of easy answers.

Try to find a few humans in meatspace you can trust. Preferably with skills that balance well with yours.


What's genetic?


I imagine it's getting worse because, very recently, a lot of US local news sites have been blocking so many external countries that it's hard to tell what is really happening inside the USA unless you're in a large city or following a popular topic. Ironically, this is exactly what Tor was built to fight against, but the perceived risk is apparently too high now.


I can't figure out how they serve the Alt-Svc onion address only to the Tor Browser. Browsing via Tor otherwise does not return that header.

I have tried mimicking the user agent and many other headers, but I cannot get CF to send me the Alt-Svc onion address over Tor unless I am actually using the Tor Browser.


I'm sorry, I don't understand this comment. You want to solve captcha in order to be served text ads? Why would you want to do that?


Think of it from the publisher's point of view. You want an easy way to monetize content, and ads used to work well, but now you get very little or nothing from them because most impressions and clicks come from bots. So instead of adding a "paywall" to see the content, you add a "captcha", and users can see the content for free along with some unobtrusive text ads. I think it's already proven that users would rather solve a captcha than use their credit card to gain access.


If I need to solve a captcha to read an article or watch a video, I'll read or watch something else. If I can't adblock it, it's not worth my time.


As it happened, that was only true for a minority of visitors. Many people would happily punch in an answer to recaptcha v1 to read the article one of their friends sent them, and might not have even known it was to increase the legitimacy of ad clicks.


ordinarily I would say the same but if the service seemed valuable I would be okay with solving a captcha


> Think of it from the publisher's point of view.

Any publisher with any sense at all knows that annoying users will simply drive them away. Then there will be no audience at all, except perhaps bots which can solve captchas.


I mean, that's what a paywall is, isn't it? Think of all the things people on HN do to get around those. Solving a captcha is probably worse because it's not annoying enough to be worth circumventing, so you actually just have to do it; on the other hand, whoever solves it is more likely a legitimate user.


Doesn't Google's current version of reCAPTCHA work mostly invisibly? I'm fairly confident they use a similar system to detect bots at the moment. I know the "checkmark" version used, e.g., mouse movement to detect humans.


Unless you're using Tor or another anonymizer, which eats a lot of the clues that the bot detector uses.


But only one captcha per day per site, max, please. That seems like a good trade-off to me. Especially if it unlocks solely text ads.


Iff the answer to paywalls (and not to site access generally) is a CAPTCHA, then I would have been game. The problem is that I'm still not going to disable uBlock, given how ad companies' poor practices have destroyed web usability.


I actively avoid sites that serve me captchas now. Clicking through images of cars, bicycles, and fire hydrants is the most excruciatingly boring thing I've ever done.



If you click randomly eventually the computer gives up and thinks you're a really stupid human and takes pity on you.


Really? If so, I'll just do that from now on. I'd rather waste a few extra seconds than work for Google for free.


I've found sometimes the robots get really upset at your terrible performance and will ask you a lot more frequently, but eventually tire of asking new questions and are like "Sure, whatever." You'll get prompted a lot more often with the usual captchas though.


Same. Nothing is more infuriating than failing a captcha three times... somehow.


Why get one person to do one unit of work to train your AI for free, when they could do three units of work for free?


That kind of captcha seems unusable to me as well. No idea what could be used instead.


Surely you mean before serving text content, or we just block the ad and captcha and get on with our lives?


It looks like this ended pretty quickly.

http://www.gwei.org/pages/google/google_letter_02.html


This page seems unwise: http://www.gwei.org/pages/google/check.html

A check with enough info for G to probably figure out which advertiser it is? Or maybe they have some pool of accounts?

Edit: Ahh, nevermind. They mention a pool of accounts, some already disabled: http://www.gwei.org/pages/google/google_letter.html


I'm confident Google doesn't have any rules against this practice; after all, if they serve legitimate ads on legitimate sites and don't do any fraud, Google still earns money themselves off of the ads.

edit: another commenter pointed out they got a letter and a disabled account due to click fraud already.


I don't know about eating itself (possible) but Google might one day not need human employees at all or be very close to 100% AI. Maybe Sergey and Larry will keep their jobs because of seniority. The rest of us are in trouble.


Meh. Will never happen.


They say they own $450k worth of Google stock?


Hey, almost as much as any one of the 100,000 employees! I am sure the process of self-eating is nearly complete!


"202.345.117 Years until GWEI fully owns Google."


> Amount of USD: 405.413,19

That’s kind of a weird way to format it. So I double checked the math.

> Google Shares owned by GWEI: 819

> Current Google Share Price : 495.01 USD

The current share price is actually $1,287.58

So they own $1,054,528.02 of Google stock?
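The multiplication above checks out, using the figures quoted in this comment:

```python
# Figures taken from the comment above: GWEI's listed share count and
# the GOOG price quoted at the time of this thread.
shares = 819
price_2019 = 1287.58

print(f"${shares * price_2019:,.2f}")  # $1,054,528.02
```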


IIRC most of mainland Europe formats numbers this way ('.' as the thousands separator, ',' as the decimal separator).

Strangely there doesn't seem to be a 'falsehoods programmers believe about numbers' post, but if there were, this would definitely feature :)
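To illustrate the two conventions with the GWEI figure (just string swapping for the sketch, not the `locale` module):

```python
# The same amount rendered US-style and continental-European-style.
amount = 405413.19

us_style = f"{amount:,.2f}"                              # '405,413.19'
# Swap ',' and '.' to get the German/Italian/Spanish rendering.
eu_style = us_style.translate(str.maketrans(",.", ".,"))

print(us_style)  # 405,413.19
print(eu_style)  # 405.413,19
```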


Most? A very large chunk use spaces or thin-spaces as thousand separators. By GDP, at least, periods are probably in the minority.


Germany, Italy, Spain, the Netherlands, and Belgium at least appear to use a period as the thousands separator. Quite a lot of population and GDP there already.


At least Belgium officially uses spaces as the thousands separator [0]

[0] http://ond.vvkso-ict.com/vvksomainnieuw/brochure/Vouwblad%20...


Using spaces is an ISO standard.

Most Europeans I’ve talked to use “,” for the radix and “.” for the separator.

Personally I really like spaces since they’re unambiguous.


European here, in my corner we use spaces for thousands separator and commas for the radix.

But hey, we have several different standards for power plugs and outlets, so no surprise that other stuff is different too.


> Personally I really like spaces since they’re unambiguous.

But not if you are writing with a pen or pencil; there a visible token makes way more sense.


I've seen '⎵' commonly used as an unambiguous space. Maybe that could work?


There was a stock split in 2014. I'm not sure if there were share splits before then.

So, over $1M.

GOOG has grown 957% since May 2005, so their $405k is now worth ~$4M.


European-style formatting.


Their listed share price is outdated by years. I'd be skeptical.


That's funny


Aside: Google Will Eat Itself seems to be a pun on the English band Pop Will Eat Itself, who I can recommend.


I suspect both names are a pun on the old Marxist slogan "Capitalism will eat itself".




