Hacker News

I'm so confuzzled, because later on, the story spins into a bitter, sarcastic rant about how preventing existential risk is considered regulatory capture and hence Bad.

I know that EY is terrified of a "hard takeoff"-style singularity, to the point of calling for datacenters to be bombed if it looks like they're taking risks with AI development. So presumably he doesn't give two shits about an oligopoly of enlightened billionaires choking off open-source AI research, dictating alignment, and gatekeeping access, compared to the looming risk of Skynet he perceives.

So why is his take on "AI safety" so much like my own? I guess he thinks the term has been watered down?



I think you misread that? It's in favour of heavier regulations.


I know, but it's also critical of some of the regulations, for example:

> Student: But a big chunk of the profits are due to regulatory capture. I mean, there's a ton of rules about certifying that your AI isn't racially biased, and they're different in every national jurisdiction, and that takes an enormous compliance department that keeps startups out of the business and lets the incumbents charge monopoly prices. You'd have needed an international treaty to stop that.

I am personally very nervous about regulation precisely because of regulatory capture and the ineptitude of even well-intentioned legislators. So I'm trying and failing to grok EY's perspective here: he seems to echo my concerns about regulatory capture, then summarily dismiss them a paragraph later.


A paragraph after that, he writes something to the effect of "we have to bring on regulatory capture when it's right to, not when it makes us feel good and doesn't damage our profits".



