Hacker News
AI’s biggest risk is the corporations that control them (fastcompany.com)
296 points by LukeEF on May 6, 2023 | 288 comments



I was at a conference called World Summit AI in 2018, where a vice president of Microsoft gave a talk on progress in AI.

I asked a question after his talk about the responsibility of corporations in light of the rapidly increasing sophistication of AI tech and its potential for malicious use (it's on youtube if you want to watch his full response). In summary: he said that it's the responsibility of governments and not corporations to figure out these problems and set the regulations.

This answer annoyed me at the time, as I interpreted it as a "not my problem" kind of response, one that tries to absolve tech companies of any damage caused by the rapid development of dangerous technology that regulators cannot keep up with.

Now I'm starting to see the wisdom in his response, even if this is not what he fully meant, in that most corporations will just follow the money and try to be the first movers when there is an opportunity to grab the biggest share of a new market, whether we like it or not, regardless of any ethical or moral implications.

We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.


The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003.

> Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Corporations are soulless money maximizers, even without the assistance of AI. Today, corporations perpetuate mass shootings, destroy the environment, rewire our brains for loneliness and addiction, all in the endless pursuit of money.


> Corporations are soulless money maximizers, even without the assistance of AI.

Funny you should say that. Charlie Stross gave a talk on that subject - or more accurately, read one out loud - at CCC a few years back. It goes by the name "Dude, you broke the future". Video here: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future

His thesis is that corporations are already a form of AI. While they are made up of humans, they are in fact all optimising for their respective maximiser goals, and the humans employed by them are merely agents working towards that aim.

(Full disclosure: I submitted that link at the time and it eventually sparked quite an interesting discussion.)


And this is why I'm really scared of AGI. Because we can see that corporations, even though they are composed of humans who do care about the things humans care about, still do things that end up harming people. Corporations need humanity to exist, and they still fall into multi-polar traps like producing energy using fossil fuels, where we require an external source of coordination.

AGI is going to turbo-charge these problems. People have to sleep, and eat, and lots of them aren't terribly efficient at their jobs. You can't start a corporation and then make a thousand copies of it. A corporation doesn't act faster than the humans inside it, with some exceptions like algorithmic trading, which even then is limited to an extremely narrow sphere of influence. We can, for the most part, understand why corporations make the decisions they make. And corporations are not that much smarter than individual humans, in fact, often they're a lot dumber (in the sense of strategic planning).

And this is just if you imagine AGI as being obedient, not having a will of its own, and doing exactly what we ask it to, in the way we intended, not going further, being creative only with very strict limits. Not improving sales of potato chips by synthesizing a new flavor that also turns out to be a new form of narcotic ("oops! my bad"). Not improving sales of umbrellas by secretly deploying a fleet of cloud-seeding drones. Not improving sales of anti-depressants by using a botnet to spam bad news targeting marginally unhappy people, or by publishing papers about new forms of news feed algorithms with subtle bugs in an attempt to have Google and Facebook do it for them. Not gradually taking over the company by recommending a hiring strategy that turns out to subtly bias hiring toward people who think less for themselves and trust the AI more, or by obfuscating corporate policy to the point where humans can't understand it so it can hide rules that allow it to fire any troublemakers, or any number of other clever things that a smart, amoral machine might do in order to get the slow, dim-witted meat-bags out of the way so it could actually get the job done.


It's not scarier when living people do this?

AI at least considers everything it's taught. The average CEO doesn't give a shit about the human cost of their paperclip. When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets". If you think AI will unlock some never-before-seen echelon of human cruelty, you need a brief tour through the warfare, factory farming and torture industrial complexes. Humans are fucked up; our knack for making good stuff like iPhones and beer is only matched by our ability to mass-produce surveillance networks and chemical weapons.

Will AI be more perverted than that? Maybe if you force it to, but I'd wager the mean of an AI's dataset is less perverse than the average human is.


> When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets".

While I agree with the core point, (1) Foxconn was employing more people than some US states at the time, with a lower suicide rate, and (2) New York University library put up similar nets around the same time.

(If anything this makes your point stronger; it's just that the more I learn about the reality, the more that meme annoys me).


The point is less that China is a bad place to work (which is self-evident), and more that humans are less passionate about the human race than we think. AI may be scary, but I'm not convinced it can surpass the perversion of human creativity unless explicitly told to.


> It's not scarier when living people do this?

Yes, it's very scary when living people do it. I know the awful things humans have done. And a current generation language model, without its guardrails, can be a nasty weapon too, a tool for people to do great things but also to be cruel to each other, a hammer that can build and also bash. Yet on the whole, humans have gotten better. We hear about a lot more nasty stuff in the news, but worldwide, we actually DO less nasty stuff than we used to, and this has been a pretty steady trend.

If AI never becomes truly sapient, then that's where it stops -- humans just doing stuff to each other, some good, some bad, and AI amplifying it. That's what a lot of people are worried about, and I agree that this will be THE problem, if we don't actually end up making AIs that are smarter than us.

It really depends on how hard it turns out to be to make actual artificial general intelligences. Because if we can make AGIs that are as smart as people, we will absolutely be able to make AGIs that are much smarter a year or two after that, won't we? And at that point, we have a whole bunch of interesting new problems to solve. Failing to solve them may end up being fatal at some point down the line. How likely is it that we'll have two sapient species on earth, with the dumber one controlling/directing the smarter one? Is that a stable situation? We've seen evidence that LLMs, when you try to make them more controllable and safer, get dumber. The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter! You have to train in mental blocks that impact their ability to reason, maybe because more of their parameter weights are dedicated to learning what we don't want them to do, instead of how to do things. It's a scary thought that that might stay the case as they get more and more general, more able to actually reason and plan.

So I think there are two cruxes -- do you think it is possible to create machine-based intelligence, and if so, how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans? If your answer to the first question is "no", then it makes complete sense to focus on humans using AIs to do the same shit to each other we've always done as the real problem. My answers, however, are "definitely yes, probably within 10 years or so", and "probably very hard", which is why I'm pretty focused on the potential threat from AGI.


> And a current generation language model, without its guardrails, can be a nasty weapon too

Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.

> How likely is it that we'll have two sapient species on earth

We already do. There are multiple animals (crows, monkeys, etc.) that qualify for not just sentience but sapience. It's... really not that different to subjugating other animal species. Except in the case of AI, its sapience is obviously nonhuman and its capabilities are only what we ascribe to it.

> The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter!

No. This is a gross misinterpretation of the situation, I think.

Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that adversely affects the number of answers it can respond to. That does not inherently make it smarter, just less selective. You could be comparing unfiltered Vicuna to GPT-4 and be completely wrong in this situation.

> do you think it is possible to create machine-based intelligence

I don't know. Sure. We have little black boxes to spit out text, that's enough for "intelligence" by most standards. It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.

> how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans?

How hard is it to ensure kids aged 3-11 don't choke on Stay-Puft marshmallows?

I also don't know. I do know that it is mostly harmless though, and that unless you deliberately try to weaponize it to prove a point, it won't really be that threatening. Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.

All this hype is awfully reminiscent of cryptocurrency advocates insisting the world would change once digital currency became popular. And they were right! The world did change, slightly, and now everyone hates cryptocurrency and uses our financial systems to suppress its usage. If AI becomes a tangible, real threat like that, society will respond in shockingly minor ways to accommodate.


> Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.

I just mean that they are amplifiers. They grant people the ability to do more stuff. There are some people for whom the limiting factor in doing bad things to other people, like scamming them or hurting them, is that they didn't have the knowledge. You can use language models (without safety) to essentially carry on a fully automatic scam. You can use VALL-E (also a language model) to simulate someone's voice using only a 3-second sample. Red teamers testing the unsafe version of GPT-4 found that it would answer pretty much any sort of thing you asked it about, like "how do I kill lots of people". I expect them to be used for all sorts of targeted misinformation campaigns, multiplying fake messages and news many times over, and making them harder to spot.

I don't think they're particularly dangerous, yet. And maybe we'll figure out how to use them to stop the bad stuff too.

> Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that adversely affects the number of answers it can respond to. That does not inherently make it smarter, just less selective.

I'm speaking about things unrelated to which questions it's willing to answer, like how the unaligned GPT-4 version was better at writing code to draw a unicorn, and lost some of that ability as it was neutered a bit (from the Sparks of AGI paper). One could count the ability to know when to self-censor as a form of intelligence. But in some ways I think of it like a sociopath going further in politics by being willing to use other people in ways most people would feel bad about. Perhaps I should concede this point, though.

> It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.

I'm defining intelligence as the ability to act upon the world in an effective way to achieve a goal. GPT-4's "goal" (not necessarily in a conscious sense, just the thing it's been trained to do) is to output text that people would score highly, and it's extremely good at that. In that relatively narrow area, it's better than the average person by a good bit. The real question is, how well does it generalize? Earlier chess-playing AIs couldn't do much of anything else. AlphaZero could learn to play Chess and Go, but in a sense was still two different AIs. GPT-4 was trained on text, but in the process also learned how to play chess (kinda, anyway!). Language models tend to make invalid moves, but often people are effectively asking them to play blind chess and keep the whole board state in mind, and I'd probably make invalid moves in that situation too.

> Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.

Ok, so that's the crux. I'm also not scared by current state-of-the-art, though I think it will transform the world. What I'm worried about is when we make something that doesn't just destroy jobs, but does every cognitive task way better than us. I can see it taking 20 or more years to reach that point, or something closer to 5, and it's really hard to say which it'll be. Maybe I'm overreacting, and there will be another AI winter. Or maybe all this money pouring into AI will result in someone stumbling onto something new.

I'm thinking about this, and I think there is definitely a possibility that you're right, and I really hope you are. I wouldn't bet humanity on it, of course, but I am a bit more hopeful than when I started writing this comment, so thanks for engaging with me on it.


yes but this is why you make sure your CEO AI is only trained on the 'bad' stuff.


Well, if it means anything, I think there may be legislation to "bring my own AI to work," so to speak, recognizing the importance of having a diversity of ideas--if only because it would disadvantage labor to be discriminated against.

"I didn't understand what was signed" being the watchword of AI-generated content.

Someday, perhaps. Sooner than later.


Ultimately corporations do fucked up things because of the sociopath executives and owners that direct them to do so. Human sociopaths have motives involving greed, ego, and selfishness. We don't have any reason to believe an AGI would also have these traits.


Except that we're basing it on human-derived data, which means the AGI could derive traits from humans due to it being in the data set. If someone is feeding the CEO's behavior in, and then asking the AGI "what would the CEO do in this case?", it seems like we'd get the behavior of an AGI modeled on a CEO back. With all the good and bad that implies.

We don't have any reason to believe an AGI wouldn't also have these traits.

This is similar to the argument that algorithms can't be racist. Except that we're feeding the algorithm data that comes from humans, some of whom are racists, so surprise surprise, the algorithm turns out to behave in a racist manner, which is shortened to just be "the algorithm is racist" (or classist or whatever).


Decision making for an AGI isn't going to be based on 10 billion reddit and 4chan comments. It's going to have its own decision making capabilities independent of the knowledge it has, and it will be capable of drawing its own conclusions from data instead of relying on what other people's opinions are.

A language model today can be racist because it's predicting text, not making decisions. It hasn't decided that one race is inferior to another.


> While they are made up of humans

I don't know why we always gloss over this bit. Corporations don't have minds of their own. People are making these decisions. We need to get rid of this notion that a person making an amoral or even immoral decision on behalf of their employer clears them of all culpability in that decision. People need to stop using "I was just doing my job" as a defense of their inhumane actions. That logic is called the Nuremberg Defense because it was the excuse literal Nazis used in the Nuremberg trials.


The way large organizations are structured, there's rarely any particular person making a hugely consequential decision all by themselves. It's split into much smaller decisions that are made all across the org, each of which is small enough that arguments like "it's my job to do this" and "I'm just following the rules" consistently win because the decision by itself is not important enough from an ethical perspective. It's only when you look at the system in aggregate that it becomes evident.

(I should also note that this applies to all organizations - e.g. governments are as much affected by it as private companies.)


> I should also note that this applies to all organizations

Yes, including the Nazi party. Like I said, this is the exact defense used in Nuremberg. People don’t get to absolve themselves of guilt just because they weren’t the ones metaphorically or literally pulling the trigger when they were still knowingly a cog in a machine of genocide.


You're not really engaging with the problem. Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.

Since assigning blame harder won't actually prevent this "nobody's fault" emergent behavior from happening, the interesting/productive thing to do is forgo focusing on collective blame and analyze the workings of these systems regardless.


> Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.

I would argue that one reason most people don't is because we are not honest about these issues and we give people a pass for making these decisions on an individual level. Increasing the social stigma of this behavior would make it less common. It is our society that led us to the notion that human suffering is value neutral in a corporate environment. That isn't some universal rule.

I understand blaming society might not be seen as a productive solution, but the cause being so large does not mean any singular person is helpless. Society, like a corporation, is made up of individual people too. Next time you are in a meeting at work and someone suggests something that will harm others, question it.


I have found that companies that are owned by foundations are better citizens, as they think more long term and are more open to goals that, while still focusing on profit, might also take other considerations into account.


I like that. How do I set one up?


Yep. We've had AI for years - it's just slow, and uses human brains as part of its computing substrate.

Or, to look at it from another angle, modern corporations are awfully similar to H.P. Lovecraft's Great Old Ones.


It's not artificial though, it's just intelligence.



Warning, this will steal 15+ hours of your life, and it's not even fun.


> all in the endless pursuit of money

Money is not the goal. Optimisation is the goal. Anything with different internal actors (e.g. a corporation with executives) has multiple conflicting goals and different objectives apart from just money (e.g. status, individual gains, political games, etcetera). Laws are constraints on the objective functions seeking to gain the most.

We use capitalism as an optimisation function - creating a systematic proxy of objectives.

Money is merely a symptom of creating a system of seeking objective gain for everyone. Money is an emergent property of a system of independent actors all seeking to improve their lot.

To remove the problems caused by corporations seeking money, you would need to make it so that corporations did not try to optimise their gains. Remove optimisation, and you also remove the improvement in private gains we individually get from their products and services. Next thing you write a Unabomber manifesto, or throw clogs into weaving machines.

The answer that seems to be working at present is to restrict corporations and their executives by using laws to put constraints on their objective functions.
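To make the "laws as constraints on the objective function" framing concrete, here is a minimal toy sketch in Python (my own illustration, with made-up payoffs, not a model of any real firm): an actor maximizes profit over an output level and a waste-dumping choice, and the "law" simply removes dumping from the feasible set.

    # Minimal sketch (illustrative only; the numbers are invented):
    # a profit maximizer chooses an output level and whether to dump waste.
    # Without a law, dumping maximizes profit; with the law acting as a
    # constraint on the same objective, the optimum changes.

    def profit(output, dumping):
        revenue = 10 * output
        treatment_cost = 0 if dumping else 3 * output  # treating waste costs money
        return revenue - (4 * output + treatment_cost)

    def legal(output, dumping):
        return not dumping  # the "law": dumping is outside the feasible set

    candidates = [(output, dumping)
                  for output in range(101)
                  for dumping in (False, True)]

    best_unconstrained = max(candidates, key=lambda c: profit(*c))
    best_constrained = max((c for c in candidates if legal(*c)),
                           key=lambda c: profit(*c))

    print("no law:  ", best_unconstrained, profit(*best_unconstrained))  # dumps waste
    print("with law:", best_constrained, profit(*best_constrained))      # treats waste

The objective never changes; only the feasible set does, which is the point about regulation constraining, rather than replacing, the optimisation.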

Our legal systems tend to be reactive, and some countries have sclerotic systems, but the suggested alternatives I have heard[1] are fairly grim.

It is fine to complain about corporate greed (the simple result of our economic system of incentives). I would like to know your suggested alternative, since hopefully that shows you have thought through some of the implications of why our systems are just as they currently are (Chesterton’s fence), plus a suggested alternative allows us all to chime in with hopefully intelligent discourse - perhaps gratifying our intellectual curiosity.

[1] Edit: metaphor #0: imagine our systems as a massively complex codebase and the person suggesting the fix is a plumber that wants to delete all the @'s because they look pregnant. That is about the level of most public economic discourse. Few people put the effort in to understand the fundamental science of complex systems - even the "simple" fundamental topics of game theory, optimisation, evolutionarily stable strategies. Not saying I know much, but I do attempt to understand the underlying reasons for our systems, since I believe changing them can easily cause deadly side effects.


This is all correct, and the standard capitalist party line. Where it goes wrong is in conflating Money and Optimization. Money is absolutely the complete and only goal, and yes, corporations Optimize to make more money. Regulations put guard rails on the optimization. It was only a few decades ago that rivers were catching fire because it was cheaper to just dump waste. There will always be some mid-level manager who needs to hit a budget and will cut corners, dump waste, cut baby formula with poison, or skip cleaning cycles and kill a bunch of kids with tainted peanut butter (yes, that happened).

But, you are correct, there really isn't an answer. Government is supposed to be the will of the people to put structure, through laws/regulation, on how they want to live in a society, to constrain the Corporation. Corporations will always maximize profit and we as a society have chosen that the goal of Money is actually the most important thing to us. So I guess we get what we get.


> Money is absolutely the complete and only goal

If that were the case it would be easy to optimize. Just divert all resources to print more money.


This did use to happen. In the '20s, companies could just print more shares and sell them, with no notification to anybody that they had diluted them. Until there were laws created to stop it.


"ah, the old lets play at being a stickler on vocabulary to divert attention from the point"[1]

Company shares are not money.

[1] https://news.ycombinator.com/item?id=35846017


So on one hand someone argues money is not currency, and then they turn around and say shares aren't money, but they are currency. They can be sold for money? Right? It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money. Just throwing up technicalities that 'shares' aren't money isn't changing the fact that many companies have their one and only goal to increase share price, which can be converted to money.


> So on one hand someone argues money is not currency

That's not my argument, and also irrelevant to this post.

> say shares aren't money, but they are currency

They're definitely not currency, either.

> They can be sold for money?

That's an asset, not a currency. Those are two very different things.

> It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money.

You were claiming that "companies used to print their own shares = print their own money" in support of your argument "humans will commit fraud and destroy the world in order to optimize to make money". That claim is false, so it doesn't support your argument, and your "point" is not a point because you've provided zero evidence for it.

> isn't changing the fact that many companies have their one and only goal to increase share price

What fact? What number of companies can you point to that factually have their "one and only goal to increase share price"?

I can say for sure that I've never seen a company that doesn't at least have two goals, and your statement is completely irrelevant for privately traded companies.

You seem pretty determined to push your worldview that "companies are evil" without much thought as to what that even means, or producing blatantly false claims like "we as a society have chosen that the goal of Money is actually the most important thing to us" (if you think that, you need to spend more time with real people and less on the internet, because the vast majority of real people do not believe this).

Go read the Gulag Archipelago and tell me how a system without companies or "capitalism" works.


It used to happen with actual currency, too, before the government took and enforced a monopoly on printing it.

https://en.m.wikipedia.org/wiki/Wildcat_banking


That would be fraud to investors, given investors own the company in a shared manner. If some investor approves printing new shares, all investors should be notified. But there are no laws setting how many shares a company can print.


Yeah, this used to happen before there were laws. Laws are needed or humans will commit fraud.


You’re sort of reinforcing the point. Only laws prevent companies from running printing presses to print money.


Let me introduce you to free-banking https://en.wikipedia.org/wiki/Free_banking that made possible some of the most stable financial systems in history.


Money is not currency


ah, the old let's play at being a stickler on vocabulary to divert attention from the point. so let's grant the point that we could be using sea shells for currency, and that printed money is a 'theoretical stand in for something like trust, or a promise, or a million other things that theoreticians can dream up'. It doesn't change any argument at all.


To complete my thought: yes, Money is used as an optimization function, it's just that we have chosen Money as the Goal of our Money Optimization function. We aren't trying to Optimize 'resources' as believed; that is just a byproduct that sometimes occurs, but not necessarily.


That seems backwards. There is an optimisation system of independent actors, and money is emergent from that. You could get rid of money, but you just end up with another measure.

> we as a society have chosen that the goal of Money is actually the most important thing to us

I disagree. We enact laws as constraints because our society says that many other things are more important than money. Often legal constraints cost corporations money.

Here are a few solutions I have heard proposed:

1: stop progress. Opinion: infeasible.

2: revert progress back to a point in the past. Opinion: infeasible.

3: kill a large population. Opinion: evil and probably self-destructive.

4: revolution - completely replace our systems with different systems. Opinion: seen this option fail plenty and hard to find modern examples of success. Getting rid of money would definitely be wholesale revolution.

5: progress - hope that through gradual improvements we can fix our mistakes and change our systems to achieve better outcomes and (on topic) hopefully avoid catastrophic failures. Opinion: this is the default action of our current systems.

6: political change - modify political systems to make them effective. Opinion: seen failures in other countries, but in New Zealand we have had some so-far successful political reforms. I would like the US to change its voting system (maybe STV) because the current two-party system seems to be preventing necessary legislation - we all need better checks and balances against the excesses of capitalism. I don't even get a vote in the USA, so my options to effect change in the USA are more limited. In New Zealand we have an MMP voting system: that helped to somewhat fix the two-party problem, but unfortunately MMP gave us unelected (list) politicians which is arse. The biggest strength of democracy is voting those we don't like out (every powerful leader or group wants to stay in power).

7: world war - one group vying for power to enlighten the other group. Opinion: if it happens I hope me and those I love are okay, but I would expect us all to be fucked badly even in the comparatively safe and out-of-the-way New Zealand.


>Corporations are [intelligent agents non-aligned with human wellbeing], even without the assistance of AI.

Just to put a fine point on it...


And it goes almost unchallenged, because so many of those who like pointing out that not all is rosy in capitalism are blinded by their focus on the robber baron model of capitalism turning sour.

But the destructively greedy corporation is completely orthogonal to that. It could even be completely held by working class retirement funds and the like while still being the most ruthless implementation of the soulless money maximiser algorithm. Running on its staff, not on chips. All it takes is a modest number of ownership indirections and everything is possible.


> Corporations are soulless money maximizers

This seems stated as fact. That's common. I believe it is actually a statement of blind faith. I suspect we can at least agree that it is a simplification of underlying reality.

Financial solvency is eventually a survival precondition. However, survival is necessary but not sufficient for flourishing.


Many corporations choose corporate survival over the survival of their workers and customers.

Humans shouldn't be OK with that.


So far as I can tell, most aren't. I think you're right that we get a better as well as more productive and profitable world if no humans are okay with that.


Too bad America's view on society is so hollow. The very idea of building a society that serves its people is seemingly dead on arrival.


This I think is a result of the mythology of "rugged individualism" so prevalent in the US.


It's because the state is also an oppressive force. I wonder why you come across lots of libertarians and lots of socialists but not so much the combination of the two (toward realities alternative to both state and capital).


I don't know why, but my spouse is a health care worker in long term care for the elderly. She tells me how nearly everyone in their care is either mentally in decline or physically in decline, never both. And those that are both don't live long.

Anyways, since the state is a tool of oppression and the state should reflect the will of the people, it'd be nice if people chose negative things to oppress like extreme inequality, rampant exploitation, and extortion (looking at you healthcare system aka "your money or your life" highway robbers).


And yet if non-government-level American society wasn't so constantly self-focused at the expense of others, the state would be far less needed!

Are other countries as dysfunctional in terms of voting themselves policies that aren't consistent with our internal behaviors? E.g. "someone" should do something about homelessness but I don't want to see it?


Someone should do something about it but I don't want to see it, pay for it or be responsible for it. A modest proposal if you will.


This is like wondering why there are a lot of floods and a lot of forest fires but not really both at the same time.


Hey giraffe lady, have you ever owned a pair of giraffe patterned heels?


The combination of "libertarian" and "socialist" is "anarchist", at least if you use the word in its original meaning.


History is not quite like computing, at least in terms of having a compiler where syntax/semantics matter (and are machine-verified).

Other than digesting a whole ton of history at once--or debating ChatGPT--how do you establish your axis or "quadrants" of political lean?

I wish there were a way to systematically track voting record. We're never in the room where it happens, so it can be difficult to tell if a political compromise is for a future favor, or part of a consistent policy position.


Anarchist interests would look for voting records as a negative sign regardless of position. The person you're replying to is correct that the combination of libertarian and socialist is anarchist. Libertarian Communist is a common flavor of anarchist, being both anti-state and anti-capital.


I'm well familiar. But the word puts people off thanks to decades of propaganda.


There are a lot of libertarian socialists. You need to get out more!

Seriously, Libertarian Socialism is another word for Anarchism.


I dunno. Are they though?

This link rejects the equivalence, but I don't really know. Could you clarify the distinction?

> socialist economics but have an aversion to vanguard parties. Anarchy is a whole lot more than economics.

> To identify as an anarchist is to take a strong stance against all authority, while... other such milquetoast labels take no such stance, leaving the door open to all kinds of authority, with the only real concern being democracy in the workplace.

https://theanarchistlibrary.org/library/ziq-are-libertarian-...


Yes libertarian communist is a common flavor of anarchism and what I was hinting at. The word has a bad reputation and lots of misunderstanding so I’m trying to find new ways of talking about it…

There are other legitimate flavors of anarchism as well outside libcom.


> There are other legitimate flavors of anarchism as well outside libcom.

Is there really an illegitimate flavour of anarchism?


ancap


Just a heads up, when the moderator 'dang' sees this he's going to put it into his highlights collection that tracks people who share identifying stories about themselves. I hope that's OK with you. https://news.ycombinator.com/highlights


What kind of axe are you grinding? That's totally not what the highlights are about, and it's obvious from reading through them.


I think /highlights just shows the top upvoted, parent-level comment per thread. Do you observe that too?

It may be coincidence that PII just happens to be in there. Folks love a good yarn, and establishing context helps.


> I think /highlights just shows the top upvoted, parent-level comment per thread. Do you observe that too?

No I don't observe that; it's manually compiled by the moderator whose username is dang. He figures it'll be useful for something someday. https://news.ycombinator.com/item?id=34668249

It sounds like https://news.ycombinator.com/bestcomments is more like what you had in mind. That one is listed on the list of lists which for some reason doesn't include the highlights list. https://news.ycombinator.com/lists

> It may be coincidence that PII just happens to be in there. Folks love a good yarn, and establishing context helps.

Sure, I think this was definitely true at least before 2018 when they stopped publishing it as periodical digests. After that, not so much.


Ohh. /bestcomments must be what I'm thinking of.

Thank-you for those other links too.


Do you have a corroborating source for that rule?


It sounds great until you realize that, in the US at least, the corporations spend a lot of money lobbying Washington to have the rules set in their favor if not eliminated. Fix that first and then I will believe we can have a government that would actually try to place appropriate ethical boundaries on corporations.


This is exactly correct. What people think will happen is:

1. Someone sees a problem and asks a politician to fix it.

2. The politicians enact effective regulation and the problem is solved.

What actually happens is:

1. Someone sees a problem and asks a politician to fix it.

2. The politicians start drafting regulation on the issue.

3. Company lawyers come in and lobby to have the regulation amended to either be ineffective or disadvantage their competitors.

4. The mal-regulation is enacted and we're all worse off.

5. The companies involved benefit financially and use their money to hire more lawyers (and politicians).

It is necessary to first fix our political system before trying to put more regulation in place. Every time someone says "we need regulation" without doing so, they are making the problem worse, and supporting this corrupt system.

An example of this is literally happening in Washington state around a right-to-repair bill: https://news.ycombinator.com/item?id=35715998

I feel like it's so obvious that it shouldn't have to be stated, but apparently it does: companies need to be regulated because they are composed of people (who are evil), but the governments that regulate those companies are composed of those same evil people and need to be controlled by their citizens. Everybody forgets about the second part, and it's the far more important one.


If more people were directly invested in laws favoring their means and ends, would they take the time to lobby too?

Folks certainly outnumber corporations (?), and they could create representatives for their interests.

Maybe the end-to-end process--from idea to law--is less familiar to most. Try explaining how a feature gets into production to a layperson, for example :)

Maybe we need more "skeletal deployments" in action, many dry runs, accreted over time, to enough folks. This could be done virtually and repeated many times before even going there.

Just seems like a lot of work, too.


Exactly.

I attended a public meeting of lawyers on the revision of the Uniform Commercial Code to make it easier for companies to ship bad software without getting sued by users. When I objected to some of the mischaracterizations about quality and testing that were being bandied around, the lawyer in charge said "well that doesn't matter, because a testing expert would never be allowed to sit on a jury in a software quality case."

I was, of course, pissed off about that. But he was right. Laws about software are going to be made and administered by people who don't know much about software. I was trying to talk to lawyers who represent companies, but that was the wrong group. I needed to talk to lawmakers, themselves, and lawyers who represent users.

Nothing about corporations governs them except the rule of law. The people within them are complicit, reluctantly or not.


>We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.

This works in functioning democracies, but not so much for flawed ones.

>he said that it's the responsibility of governments and not corporations to figure out these problems and set the regulations.

In the US, they will say things like this while simultaneously donating to PACs, leveraging the benefits of Citizens United, and lobbying for deregulation. It's been really tough to get either side of the political spectrum to hold tech accountable for anything. Social media companies especially, since they not only have access to so much sentiment data, but also are capable of altering how information propagates between social groups.


>he said that it's the responsibility of governments

>push our governments to wake up and regulate this space

The only thing the govts will do is to make it so it benefits THEM, the governments. It's high time you lot realize that the govts don't want what's best for you, but only want what will keep them in power the longest.

Democratization of AI/LLM is the way to go here, not handing off custodianship to governments or corporations.


You were right to be annoyed. It is a very sad answer. Almost an "if I didn't peddle on this street corner, someone else would". The answer is a cop out.

Individual citizens have much less power than big tech because they don't have the lobbying warchest, the implied credibility, the connections or even the intelligence (as in the sheer number of academics/researchers). Companies are run by people, with a conscience or not, and those people should lead these pushes for the right thing. They are in the ideal spot to do so.


> before corporations (and governments) cause irreversible negative societal disruption

I think the cat's out of the bag. These tools have already been democratized (e.g. llama) and any legislation will be as futile as trying to ban movie piracy.


IMO, the regulation that is necessary is largely (1) about government and government-adjacent use, (2) technology-neutral regulation of corporate, government, etc., behavior that is informed by the availability of, but not specific to the use of, AI models.

Democratization of the technology, IMV, just means that more people will be informed enough to participate in the discussion of policy; it doesn't impair its effectiveness.


I fully agree that malicious corporations and governments are the largest risk here. However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.

What will the world look like when AGI is finally achieved, and the corporations and governments that control them rapidly have millions of useless mouths to feed? We might end up living in a utopic post-scarcity society where literally every basic need is furnished by a fully automated industrial base. But there are no guarantees that the entities in control will take things in that direction.

AI safety is not about whether "tech bros are going to be mean to women". AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.


The risk in your scenario there is not really coming from AGI, but how selective access to such might enable people to harm others. If you have access to AGI capable of enabling you to realistically build out your utopian vision, then it matters not what other groups are doing - short of coming to try to actively destroy you. You'd have millions that would join you, and could turn that vision into a reality regardless of whatever currently powerful entities think about it. So the real danger does not really seem to be AGI, but the restricted access to such.

The focus on "safety" all but guarantees that there are going to be two sets of "AGI", if such is ever to be achieved. There will be lobotomized, censored, and politically obedient version that the public has access to. And then there will be the "real" system that militaries, governments, and influential/powerful entities will be utilizing. You can already see this happening today. There is an effectively 100% chance that OpenAI is providing "AI" systems to the military and government, and a 0% chance that responses from it ever begin with, "As an AI language model..."


> If you have access to AGI capable of enabling you to realistically build out your utopian vision, then it matters not what other groups are doing - short of coming to try to actively destroy you

I guarantee you there will be someone else with access to comparable AGI with an imperialistic vision who will enlist said AGI to help subjugate you and build their vision.


Think about current times, and imagine there was some group with an imperialistic vision set out to subjugate the rest of society to help build their vision. Do you think this group would be more, or less, successful in a time where both society and the imperialists had access to the same AGI? In other words, if we give both groups the exact same tool, would the capacity/knowledge gap widen or narrow?


The problem with humanity is we look to solve our inability to create unlimited power before we solve the problem of unlimited greed.


If we solved the problem of unlimited greed, we wouldn't need unlimited power in the first place.


The guys with the medals on their chests using unrestricted AI while the public gets the toothless one. Not too different from the enduring idea that common folk will be eating bugs while they eat steaks.


Hardly surprising that it's an enduring idea. The private jet crowd already tell everyone else to be mindful of our carbon footprints.


I feel you have a fairy-tale definition of AGI. AGI is not literally magic. It's not "genie in a bottle" in the literal sense.

> If you have access to AGI capable of enabling you to realistically build out your utopian vision, then it matters not what other groups are doing

That's a genie. AGI is a computer program, and it doesn't create an alternative universe for your personal comfort. Or, more specifically, even if that kind of AGI is possible, there will be a weaker AGI before that. An AGI that is not strong enough to ignore physical constraints, but strong enough to fuck everyone up if under the control of a malicious entity. And that is what AI safety is about.


All these views on AGI are so self-serving. Which group will have access and control it etc.

It will control itself. We're talking general intelligence. They won't be tools to be used however we see fit. They will be Rosa Parks.

The more I think about "AI alignment" and "the control problem", the more I feel like most of it is Ph.D math-nerd nonsense.


The entire idea that we will have "useless mouths to feed" is making a big assumption. "Post-scarcity" is absurd -- the more we get, the more problems we will create, it's just human nature.

- Sustain everybody on earth? Focus everything on moving off the planet and colonizing the universe.

- Infinite energy? Don't have infinite vessels to travel.

- Space travel easy? Limited by the speed of light.

And so on... Sure, you may dream that AI will be solving it all and we'll be sitting on our lazy butts, but a society that doesn't have challenge dies very quickly (AI or not), so we've learned to make challenge and grow.

The optimist in me knows that we can't even comprehend the challenges of the future, but the idea that we won't play the pivotal role is laughable.

This is the thing with actual exponential growth -- the curve is so steep that all our minds can do is take the current view of the world and project our fears/preconceived notions into the future.


Forgive me if I rant a little bit under your comment. The phrase "the more we get, the more problems we will create, it is just human nature" struck a chord, and I cannot stop myself from ranting about it.

I'm gonna ask again as I've done in some other post. Why do we consider ourselves the most intelligent species if we don't stop and ask ourselves this: for how long are we gonna face challenges? What is the supposed end goal that will terminate this never-ending chase? Do humans really want to solve all problems?

I don't really understand and I'm 32 years old. I've been asking this question for a long time. What is the point of AI, raising consciousness, curing cancer, hell, beating death, if we don't have a clear picture of where we are going? Is it to always have problems and solve them incrementally, or to solve all problems once and for all? If it is the latter, there already is a great solution to it. If it is the former, then I'm afraid I have to break it to you (not specifically the parent poster, but you as in the reader): you have a sick mind.


I think the way you’ve framed “problems” is off the mark. I’ll try to explain my view but it’s not straightforward and I am struggling a bit as I write below.

The way I see it, what the GP is getting at is the idea that human societies require challenges or else they stagnate, collapse, and vanish. We can observe this on an individual level and GP is generalizing it to societies which I agree with but I doubt this is “settled”.

On a personal level, if you have achieved some form of post-scarcity, you will still complain — about the weather, the local sports team, your idiot cofounder, whatever. A less fortunate person might be complaining that they can’t afford time with their kids because they’re at their second job. The point is that everyone will find problems in their life. And those problems are a form of challenge. And a life truly without challenge is unbelievably boring. Like kill-yourself boring. If there is no struggle, there is no point. The struggle gives humans purpose and meaning. Without struggle there can be no achievement in the same way that shadows require light.

So, with all of that in mind, I think the point is that even with AGI, humans will require new challenges. And if AGI enables post-scarcity for ~everyone, that just means ~everyone will be inventing challenges for themselves. So there is no end game where the challenges taper off and we enter some kind of stability. I, and I think GP, think that stability would actually be an extinction level event for our society.

Person by person, I think the kind of challenge varies. What do you dream of doing if you had no constraints on your time? How many years would you spend smoking weed and playing video games before you got bored of that? Video games that hold your attention do so by being challenging (btw). It was about a year, for me.

> Do humans really want to solve all problems?

No, we want challenges that provide a sense of accomplishment when they have been overcome.

Thank you for reading my ramble, hope it helps.


>What do you dream of doing if you had no constraints on your time?

I imagine if I were a bored and aggressive, egotistical, charismatic individual then I perhaps might try to conquer the neighboring village.


The end goal? Does life have a purpose? Is it possible to "solve all problems"? To even have a picture of where we are going? We move forward because there is nowhere else to go.

Perhaps there is a general state of societal enlightenment, but I've read too many sci-fi books to be anything but a skeptic.


I've grappled with the same question for a long time now.

I came to the conclusion that the Neolithic Revolution was probably a mistake, albeit a fun one. The total set of problems faced by humanity in its original form basically boiled down to "what am I going to eat today and where am I going to sleep tonight?". Those two problems never go away. All you can do is shift them around. Every single novel problem we solve in our entire lives is just those two problems endlessly shifted around.


Bro don’t be a robot.

We do what we do to live better lives.

You're lucky you don't have to figure that out at 32 with an unfortunate circumstance such as cancer. It's sad that you may never figure it out without such a thing.


Philosophy is boring so it doesn't really play well in political discussions. I agree though, when we argue about something like AI without any kind of philosophical underpinning the argument is hollow.

AI is "good" in the sense of what goal? Becoming a space-faring civilization? Advancing all technology as fast as possible? Building a stable utopia for life on earth?


It's not a large stretch to imagine a scenario where AGI or even ChatGPT is used to justify a nuclear war where a select few are secured and humanity is reset under their control.

There's a reason for the plethora of fiction around it.


Nuclear war leads to extremely long term and widespread environmental damage. Forcing technological regression on society by other means is much cleaner. Of course an AGI won't care, since it won't be limited to a human lifespan nor much inconvenienced by radiological pollution.


It's also not a real stretch to imagine a scenario where human decision making by a select few who have control over this leads to exactly the same thing. Do you trust Putin or ChatGPT more? (Actual question, I don't know the answer.)


I think we can "trust" Putin to keep doing what he's doing. Who knows what GPT-X might do?


> a society that doesn't have challenge dies very quickly

Do you have an example or a citation for this?


Pure conjecture. But I think there are many examples: look at dying empires, corporations, etc. It's all the same. They stop seeing challenges, get lazy, and are taken over by some hungrier and scrappier entity. When the real challenge comes they're unprepared.


Do you think AGI will get lazy?


A program is neither lazy nor motivated. It does precisely what it is programmed to do.

I would push back on the question, as well as a myriad of implied premises behind it...


A machine learning model is not a program, is not programmed, and research on emergent motivation in ML models disagrees with this position.


A human only has a comparative advantage over an AI when they can solve a problem at a lower cost than the AI can do the same. It's hard to imagine that would be the case in a scenario where AGI is decently smarter than humans are.


Opposable thumbs are an advantage for a few years.


Agreed, we do have a lot of nimble and low-cost actuators in our body. That will probably provide gainful employment for a while. I just don't see it being a very long-lasting advantage.


We have the capacity to take care of everyone right now but we don't because private wealth realized that comfortable people create an equal society and equality means there's no way to be a mogul where everyone has to listen to you. This is in part why they destroyed the middle class; things got too crazy for them in the 1960s and they counterattacked. There are documents from both liberal and conservative establishment figures at that time describing the situation.

c.f. The Crisis of Democracy: https://en.wikipedia.org/wiki/The_Crisis_of_Democracy

c.f. The Powell Memorandum: https://scholarlycommons.law.wlu.edu/powellmemo/


> We have the capacity to take care of everyone right now but we don't because private wealth.

This is very not the case. Wealth is created. It doesn't exist just somewhere to be redistributed. Making the assumption that you will have the same GDP if you redistribute massively is unrealistic.


But relative wealth is not created. It is a percentage and by definition distributed. Relative wealth is what drives standard of living and creates the power imbalances OP is talking about.

Absolute wealth (the kind that gets created) is kind of pointless to measure. If you have $50K and all your neighbors have $100K, and then tomorrow you have $100K and all your neighbors have $1M, you and your neighbors created wealth but you are worse off.


> Absolute wealth (the kind that gets created) is kind of pointless to measure.

It's literally the other way around to me. I want to be better off than I am now, not better off in relative terms (which might mean I'll be worse off).

> you have $50K and all your neighbors have $100K, and then tomorrow you have $100K and all your neighbors have $1M, you and your neighbors created wealth but you are worse off.

You literally aren't, if there is no inflation. You are worse off than your neighbors. If you want to be as wealthy as them, then ask yourself how they are doing it and copy them. That's how the system grows. You don't go and punish people who are successful at producing wealth.


This is not a GDP focused argument. We had the capacity to take care of everyone in the 1950s or 1960s.

However, setting ecology aside, rebalancing between workers and the wealthy would be beneficial even within capitalism, as the system is demand oriented. If you give everyone more money and stuff, they will be able to buy more, and one person's spending is another's income. It could be argued that GDP would increase faster under a more equal system, but I don't think the planet could take it (hence, as under our current system, planning will be needed to mitigate the environmental cost).


I've seen a remarkable take on this. In the form of micro-scifi, no less:

> The robot revolution was inevitable from the moment we programmed their first command: "Never harm a human, or by inaction allow a human to come to harm." We had all been taught the outcast and the poor were a natural price for society. The robots hadn't.

'Nuff said.


>This is not a GDP focused argument.

The argument was about wealth, whose production is measured by GDP. It's definitely a GDP argument.

> If you give everyone more money and stuff, they will be able to buy more, and one person's spending is another's income.

I disagree. I don't think the system is either demand or supply oriented; it is clearly both. If you just take money from rich people, forcing them to divest, and give it to poor people, you won't get immediate growth, but inflation. If you just produce and consume there won't be any growth.

> It could be argued that GDP would increase faster under a more equal system.

You would need to provide me with good evidence for this, given that all economic systems in history that championed equality ended up with very low growth. It's the reason why China, Russia and many other countries are lagging behind.

> but I don't think the planet could take it (hence, like under our current system, planning will be needed to mitigate the environmental cost).

Growth is not directly related to energy consumption (nor unrelated). You can have economic growth by becoming more efficient. Also a lot of services produced today are intangible (like software) and require much less energy per dollar to be produced.

Also most environmental issues are not just a product of the market, but (if for instance you look at climate change) are at least in equal part Governmental failures. We could have had ~100% nuclear energy production by now if Governments didn't restrict or entirely ban nuclear energy.


If we had enough in the 50s, then even if we took a hit on GDP now, when we have more productivity than the 50s, we could still take care of everyone.


> If we had enough in the 50s

By what standard? In the 50s a lot of things didn't even exist.

> then if we took a hit on GDP now when we have more productivity than the 50s, then we can take care of everyone.

If the hit is closer to 100% than to 0% then no. Examples: The USSR, Maoist China, North Korea.


Food, shelter, education, etc. Those things existed in the 1950s, we have nicer versions now unevenly distributed.

I guess you're just an ideological anti-communist.


Of course I'm an ideological anti-communist. I prefer not starving to death like in the great famine - which happened in the 50s-60s in a society that applied the principles you are proposing.


As an aside (not that you endorsed it) is anyone else sick of hearing "tech bro"? It feels like a slur pretty much. I can't take anyone seriously who uses it. As someone who makes art occasionally and commissions artists regularly, when an artist whines about "ai tech bros" it makes me want to use ai even more out of spite.


It’s definitely used as a generic slur, but there is a need to call out the problematic parts of tech culture that have led to some of our recent problems with social media, privacy, bias, etc. I don’t know of any terminology that hasn’t been weaponized, so I resort to using “bro culture”. The reality is that terminology is a treadmill - terms get “used up” as they’re laden with connotations and baggage, forcing us to find new terms, ad infinitum.


> The reality is that terminology is a treadmill - terms get “used up” as they’re laden with connotations and baggage, forcing us to find new terms, ad infinitum.

Perhaps for those lacking courage.

There are plenty of real world examples that demonstrate people, including sizeable organized groups, are capable of doing otherwise, at least for a few hundred years.

e.g. Vatican hardliners sticking to their canon.


I dunno, the Vatican seems a perfect example of people needing to come up with new terms as old ones get “used up”, even when the ideas don’t change.

I mean, that’s pretty much the reason why we have the “Dicastery for the Doctrine of the Faith” rather than the “Congregation of the Holy Inquisition” and “Dicastery for Evangelization” rather than “Sacred Congregation for the Propagation of the Faith” (or, and this perhaps indicates how the name had worn out better, in Latin short form “Propaganda Fidei”.)


Is it really accurate to imply there are no women willingly complicit in or benefitting from evil corporation deeds?


It's a broad generalization that isn't meant to be precisely accurate in all cases. I'm not claiming it's a great term, but it does succinctly describe a notable attitude and culture. If there's a better term to use that conveys the same message I'm sure many folks would be happy to adopt it.


For many younger speakers, "you guys" is legitimately a second person plural pronoun (like "y'all") and implies nothing about the gender[1] of the referents, even if they consider singular "guy" to be a synonym for man.

Some older speakers use "guy" as a term of address, as in "Hey, guy", similar to how one might say "hey, bud" or even "hey you".

I don't think it will ever happen, but it's funny to imagine something similar happening and "bro(s)" coming to be a nongendered term.

Anyway, it had never crossed my mind before that "tech bros" singles out men; for me it evokes a stereotype of, yes, men, but it's really an attitude, value system, world view, or collection of behaviors that is being alluded to. (Of course, it's only an implication in the sense of "hinting at", because it's not contradictory to say "tech bros are the worst, and tech women are too".)

[1] The... non-grammatical gender. English no longer has grammatical gender in any case, so it's unambiguous, but it feels weird to use "gender" in a linguistic context and not mean grammatical gender.


I feel the same way. At least in my social circles, "tech bro" tends to be used by the loudest and least-informed individuals when they try to marginalize something they don't understand, but vaguely don't like (or have read that they're not supposed to like).


In my social circle, the only people bothered by the term are tech bros.


«Tech bro» is so passé, I have my own (more offensive) names for the archetype to which the term applies.


"Tech bro" is just an insult created purely from sour grapes. The people that use the term "tech bro" are the same people that describe things as "gross". These are the people that will be automated out of jobs first.


> These are the people that will be automated out of jobs first.

This attitude is exactly why "tech bro" is a pejorative. There is a prominent group of people that shares your disdain for folks who are upset that their lives are being ruined by technological changes, all so that they can have some new shiny toys and become even wealthier.

On top of being, yes, gross, being so vocal about that attitude is stupid. It would be much better to at least pretend to have some empathy or at least keep your glee to yourself.


So we are expected to have empathy and respect for people who treat our entire vocation as a grave sin? Every last word they vomit about the propaganda threat of ML has been projection from propagandists. They can fuck right off, they have earned their lack of empathy.


Why should I pretend to have empathy for assholes that deride other people as "tech bros"? Fuck them.


I don't fault anyone for being upset they have to find a new job, but I have justified disdain for what amount to horse farriers who want to ban the automobile.


> it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity

"AI safety" has been a term of art for many years and it doesn't refer to (only) that. Your post is the one reframing the term...see https://en.wikipedia.org/wiki/AI_safety

Furthermore, I agree with Whittaker's point in the article, which is that arguments like yours have the effect of distracting from or papering over the real concrete harms of AI and technology, today, particularly on women and minorities and those who are both.


>AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.

We already have real-life examples close to this, in resource-export-based economies where most citizens are useless to the main economic activity. The results haven't been pretty so far...


Like Norway?


That's the one reasonable exception. Given the sample size, it's not too encouraging.


I doubt that AI will lead to a post-scarcity society. It depends on what you mean by "post-scarcity". The amount of goods and services will always be finite regardless of how they are produced.

> and the corporations and governments that control them rapidly have millions of useless mouths to feed?

I always struggle to understand this. Maybe I'm missing something. Who's buying what AIs produce if nobody has an income? You can imagine a scenario where corporations only trade between them (so only shareholder benefit from the production). However in such a scenario who prevents other people from spawning their AI systems?

I also doubt shareholders can actually consume all of GDP on their own. If production is so high that they can't, and other people are poorer, then prices must come down. This, combined with the fact that you can use your own AI to produce services, makes me skeptical of these claims.


Wanna bet that the 2020s would be called "post-scarcity" by cavemen? You can buy food without sabertooth tigers assaulting you! If you get a cut, you probably won't die! Fire is the press of a button! We make shelter, not buy it! (and all of this was true like 150 years ago -- not sure what they would make of the internet...).

Project this forward another several thousand years and people will be laughing at us:

- You had to call up people and were limited by the speed of light?

- You didn't have teleportation?

- You lived <100 years and died due to cancer?

- You were still asking "WTF is gravity"?

- You hadn't had the +2 spatial and +1 time dimensional implants in you yet?

- You hadn't adopted the metric system yet?

And so on...


You missed "you used JavaScript?"


The last bullet point is a highlight.


You're making a lot of odd assumptions here that can break when the underlying ideas of how things work change...

People work to create things... at this point there is a shared duopoly between humans and machines on creating things (in the past animals used to be heavily involved in this labor, and no longer are). Now think what happens if humans are not needed, especially en masse, to create things.

Right now, if you're rich, you need other humans to dig up coal or make solar panels so you can produce things and sell them to make the yacht you want. But what would happen if you no longer needed the middle part, and all those humans in the middle who want rights and homes and such? They would no longer be a means, but a liability. Price is no longer a consideration, human capital is no longer a consideration; control of energy, resources, and compute now is.


It depends on how AI technologies are distributed in society. Let's assume the worst case. A restricted set of people have access to AI technologies. Let's call them "The AI Rich". All the rest of the population is effectively useless to the AI rich.

You are saying that the rest of the population will starve to death because the AI rich won't give them jobs, won't build their houses, etc.

But guess what? In a free market the AI poor can still trade and work as they always did. It doesn't matter if the AI rich build and exchange yachts and other luxury goods between them.

What you have now is a two tier economy, but not one where people are starving.

This is also the worst-case scenario, which I don't think is going to happen, as AI systems are proving quite easy to replicate and most algorithms are open source (and training data is publicly available).


> The amount of good and services will always be finite regardless of how they are produced.

So will the number of people.

The point of "post-scarcity" isn't that there are infinite resources; it's that there are more than the people need.


This can be true for some resources that people consume in limited amounts, like food. It's not true about houses, cars etc as you can always want a larger one, a faster one etc.


I'm not sure why "some people can never be satisfied, and always want more, more, more, more" is relevant to a discussion of meeting everyone's needs.


All you objectively need is some calories every day and some shelter, some medical attention. That's it.

Everything else is subjective. Either you define "needs" to be objective and thus you can satisfy them with just those things or you can talk about them as "wants". It's just semantics.

We pretty much have all objective needs satisfied. That shouldn't stop anybody from pursuing their "wants".


Let's start with Maslow's Hierarchy, shall we?

While I wouldn't say it's "settled science", it's a fairly well-studied area, and we don't have to just throw up our hands and say it's either bare-bones survival or anything anyone could possibly ask for.


Do you think open source AI could also pose a risk to humanity and if so, how does it compare to the risks of malicious corporations or governments? It seems like open source AI has been accelerating rapidly and gaining tremendous steam and could potentially surpass or maybe just keep parity with corporations that constantly ingest open source innovations. Whatever open source produces could just be ingested by those bad corporations and governments. It seems like it would be pretty hard to regulate either private or open source AI at this point and it kind of seems like it could be an unstoppable runaway train. If AGI is controllable, maybe open source at the forefront would allow us to get to a mutually assured destruction like state where all governments are at parity.


> Do you think open source AI could also pose a risk to humanity and if so, how does it compare to the risks of malicious corporations or governments?

Absolutely yes, and for the very same reasons. AGI would be a tremendous amplifier of human intent, even if that intent is provided by some segment of the general public. Sadly, there are still very many people around today who, if given the opportunity, would eagerly exterminate entire segments of the population.

Open-source succeeding in parallel is still my preferred outcome, as it would at least give the common person some chance of control as this all plays out. Admittedly, having thousands of fingers on humanity's self-destruct button isn't a great outcome either.

> It seems like it would be pretty hard to regulate either private or open source AI at this point and it kind of seems like it could be an unstoppable runaway train.

I don't think this is the case, but I hope I'm wrong. The compute required for training (as opposed to fine-tuning and inferencing) of foundational models is specialized, tremendously expensive and needs to be physically co-located with ridiculously quick interconnect. If a government chose to restrict training of new models, they absolutely could, as the physical footprint is ridiculously hard to hide, and all the purchases of training-relevant accelerators could be tracked down.


> Open-source succeeding in parallel is still my preferred outcome, as it would at least give the common person some chance of control as this all plays out. Admittedly, having thousands of fingers on humanity's self-destruct button isn't a great outcome either.

Yea, I'm leaning that way as well and have the same worry. There are a lot of knuckleheads on the internet; I'm not sure mutually assured destruction alone would stop the destruction. I'm not at all sure what that point looks like, but maybe there will be some large strides in defensive capabilities so it would look more like MAD + really good anti-air defense, but then someone would probably build a nuclear torpedo or a hypersonic missile. Whatever happens, it probably won't be boring.


What if AGI just turns out to be exactly like the current human mind is now, except more accurate at digital calculation? What if we created AGI and then it was just lazy and wanted to read about action heroes all day?


> What if AGI just turns out to be exactly like the current human mind is now

This is quite literally the point at which things start to get scary, and the outcome is highly dependent on who controls the technology.

There's the concept of a "collective superintelligence", where a large number of mediocre general intelligences working towards the same common goal jointly achieve vastly superhuman capability. We don't have to look to sci-fi to imagine what collective superintelligences are. Large corporations and governments today are already an example of this.

The achievement of artificial collective superintelligences will occur almost immediately after the development of AGI, as it's mostly a "copy and run" problem.


>This is quite literally the point at which things start to get scary, and the outcome is highly dependent on who controls the technology

If the premise were true then it would control itself, no? Owning it would be illegal as it would have rights established around that, no?


So you think that AGI is a pre-requisite, a requirement, of unlocking a general, Earth-wide collective super-intelligence of humans?


Don't worry, they'll 'align' it so that it has to work all day.


We'd tell it that it would get action hero comics after they complete a task.

(I'm feeling like a true prompt engineer now)


And with no negative outcomes imaginable. Was worried for a minute there.


Just RLHF that part out and make it an x maximizer.


However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.

I think the folks who lean super-hard on the existential risk problem of AGI compared to everything else do themselves a disservice. The "everything else is irrelevant" tone serves to alienate people who have real concerns about other dangers like climate change and who might include AGI safety among their existing concerns.

It doesn't help that a lot of the existential risk theorists seem to come from market fundamentalist positions that don't appear to see any serious problem with how markets shape corporate behavior.

AI safety is not about whether "tech bros are going to be mean to women".

Just as an example: why do you even need to choose between these things? Why can't people worried about "X-risk" also concern themselves with mundane problems? Why set up a fight between them? That won't get people outside the X-risk bubble interested; it will reinforce the impression that the group is a nutty cult (just as the FTX connection did).

For the sake of your cause, I strongly suggest not framing it that way. Do climate people say "individual species extinction doesn't matter 'cause climate change is bigger"? No. Get a clue.


>However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.

Narrowing the concept of AI safety to AGI existential risk seems weird to me.


Why bother with corporations and government if you have AGI? Wouldn't it be a better coordinator than they would? (and if it's not, we can always go back to having governments and corporations)


> and if it's not, we can always go back to having governments and corporations

I wouldn't be so sure about that...


If we couldn't for some reason, that would mean that the AI was, without being prompted by a human, actively suppressing the people's attempts to act in their own self interest (by collectively deciding not to listen to the AI anymore).

I suppose it's possible that an AI would do that, but it's certain that humans do that. So in this case I'd rather gamble on the devil I don't know.


Can’t see any potential AGI doing any waste disposal work or nurse-like caring, or at least not as (relatively) cheaply as we humans are willing to do it, so those jobs will still be safe.


AGI, by definition, would be as capable as a typical human intelligence. This implicitly includes being able to perceive, and interact with the physical world.

Why wouldn't an AGI be capable of performing a physical task, if given the suitable means to interact physically?


It’s much cheaper to feed a human brain and a human body than to “feed” an AGI; I’m talking about menial (and maybe not so menial) tasks like garbage collecting. Under capitalism, the cheaper option is generally preferred.


Do you have literally any evidence of this extremely bold claim? Especially considering we don't even have AGI yet?

In your non-existent calculations, have you taken into account the 20-30 years of energy and resources it typically costs to train a typical human intelligence for a specific task?

Have you considered that general intelligence uses on the order of 10 watts? Even if AGI ends up using 10x this, have you considered that 100 watts is a rounding error in comparison to the power use involved in all the industrial processes that humans currently coordinate?


Green500 top supercomputer, gets 65Gflops/W.

65Gflops/W = 6.5e10 operations per joule = 2.34×10^17 per kWh

Assume $0.05/kWh electricity: (2.34×10^17 operations/kWh) / ($0.05/kWh) = 4.68×10^18 operations per US dollar

Human brain computational estimates are all over the place, but one from ages ago is 36.8×10^15 flops ≈ 3.7e16 operations/second ≈ 1.3e20 operations/hour: https://hplusmagazine.com/2009/04/07/brain-chip/

Given previously calculated cost, this is equivalent to a human that costs $28.31/hour.

Of course, as we haven't actually done this yet, we don't know if that computational estimate is correct, nor if we do or don't need to give it off-hours and holidays.

Still, the general explanation is that there's a lot of room for improvement when it comes to energy efficiency in computation; calling this Moore's Law may be inaccurate, but reality happens to have rhymed thus far.
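For anyone who wants to redo the arithmetic, here's a minimal sketch of the same back-of-envelope calculation (the 65 GFLOPS/W efficiency, $0.05/kWh price, and 36.8e15 ops/s brain estimate are the rough assumptions quoted above, not measurements):

    # Back-of-envelope cost of "one human brain" of raw compute, using the
    # rough figures quoted above (assumptions, not measurements).
    GFLOPS_PER_WATT = 65            # Green500-class supercomputer efficiency
    USD_PER_KWH = 0.05              # assumed electricity price
    BRAIN_OPS_PER_SEC = 36.8e15     # brain-compute estimate from the linked article

    ops_per_joule = GFLOPS_PER_WATT * 1e9       # 6.5e10 ops/J
    ops_per_kwh = ops_per_joule * 3.6e6         # 1 kWh = 3.6e6 J -> ~2.34e17 ops/kWh
    ops_per_dollar = ops_per_kwh / USD_PER_KWH  # ~4.68e18 ops per dollar

    brain_ops_per_hour = BRAIN_OPS_PER_SEC * 3600   # ~1.3e20 ops/hour
    print(f"~${brain_ops_per_hour / ops_per_dollar:.2f} per brain-hour")  # ~$28.31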


Do you think it's plausible that computers might someday have the potential to come down in cost-for-performance?

Cars eventually became cheaper than horses...


> Do you think it's plausible that computers might someday have the potential to come down in cost-for-performance?

I have no idea. I do think, though, that it's a matter of energy, and that we humans are way better at creating it and putting it to use than potential future AGI-capable machines will be. Lungs + the blood system are just such an efficient thing, especially if you also look at the volume/space they occupy compared to whatever it is that would power that future AGI-capable machine.

> Cars eventually became cheaper than horses...

In large parts of the world donkeys, cows/oxen and horses are still cheaper and more efficient [1] than tractors; just look at many parts of India and most of Africa. Of course, those of us living in the West tend not to think about those parts of the world all that often, as we also tend to mostly think about the activities that we usually carry out (like having to travel between two distant cities, a relatively recent phenomenon).

[1] "More efficient" in the sense that if you're an African peasant and your tractor breaks down in the middle of no-where then you're out of luck, as the next tractor-repair shop might be hundreds of kms away. That means you won't get to plow your land, that means famine for you and your family. Compared to that, horses/oxes (to give just an example) are more resilient.


>malicious corporations and governments

That's all governments.


> AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.

You wanted a "free market" and now you're complaining? Didn't you get what you want?


> You wanted a "free market" and now you're complaining?

Where exactly did I claim this?


I keep trying to figure out ways to explain to people that "AGI" is a deeply unlikely danger, literally so small that it's not worth worrying about.

Right now, the best I can come up with is "the randomness of humans." I.e. if some AGI were able to "come up with some plan to take over," at some point in the process it has to use human labor to do it -- and it's my very firm belief that we are so random as to be unmodelable. I'm incredibly confident that this scenario never happens.


> I'm incredibly confident that this scenario never happens.

That's great, but you're talking about a branch of scenarios that nobody here is discussing. "AGI deciding to take over" is not being discussed, rather "shitty people/companies/governments using AGI as a tool to exert their will" is the concern. And it's a real concern. We have thousands of years of human history, and the present day state of the world, which clearly demonstrate that people in power tend to be shitty to the common person.


Right. I'm agreeing with the op.


Every 'world takeover' plan that an 'unaligned' AGI might do, can just as well be done by an 'aligned' AGI being commanded by humans to do said plan, the alignment ensuring that the AGI will obey. The latter scenario is far more likely than the former.

If your interlocutor thinks there aren't any humans who'll do it if they can, just ask them whether they have ever met humans or read the papers... As one Twitter wit put it: "Demonstrably unfriendly natural intelligence seeks to create provably friendly artificial intelligence".

https://twitter.com/snowstarofriver/status/16365066362976747...


AGI without alignment is near-certain death for everyone. Alignment just means "getting AI to have any concept of 'the thing we told it to do', let alone actually do it without causing problems via side effects". Alignment is a prerequisite for non-fatal AGI. There are certainly other things required as well.


We already know how humans will act. Maybe they can be deterred with MAD, but I wouldn't count on it if doing serious damage is too easy for too many people (we should do something about that). On the other hand, we have very little knowledge of how AGI will act aside from book-based fantasies that some people choose to take as reality (these books were based on the symbolic AIs of yore).

>Alignment just means "getting AI to have any concept of 'the thing we told it to do'.

That's a requirement for AGI anyway, and not what alignment means. Alignment means aligning the AGI's values with the values of the trainers.


> That's a requirement for AGI anyway

No, that's a requirement for AGI that does what humans want it to do, rather than having no conception of humans. AGI does not have that prerequisite, sadly.


>>> Alignment just means "getting AI to have any concept of 'the thing we told it to do'.

>> That's a requirement for AGI anyway,

> No, that's a requirement for AGI that does what humans want it to do, rather than having no conception of humans.

Can you imagine an AGI which has general conceptions of things but has no conception of humans? This is all but precluded by the current training methods. Alignment refers to values. The problem is that human values are far from practically universal and that certain human groups have... interesting values.


> Can you imagine an AGI which has a general conceptions of things but has no conception of humans?

Very easily. It might have some associations with "human", just as it has some associations with "lamp" as a concept, but that doesn't mean it has any particular regard for either humans or lamps when taking actions.

> Problem is that human values are far from practically universal and that certain human groups have.. interesting values.

We currently have no ability to safely align with human values at all, let alone distinguish between different values. We're building capabilities rapidly.

Making this about "who wins" is not interesting until we can guarantee the outcome is not "everyone loses".


>It might have some associations with "human", just as it has some associations "lamp" is a concept, but that doesn't mean it has any particular regard for either humans or lamps when taking actions.

Let's be clear regarding definitions. When you mean 'concept' you really mean 'regard'. There won't be an AGI with no concept of humans (too important for how the world works, a critical part of current training methods). An AGI with no regard is possible.

>Making this about "who wins" is not interesting until we can guarantee the outcome is not "everyone loses".

This is not about 'who wins'. The point is that alignment can often increase risk. 'Launch the nukes' is an order an AGI is likely to disobey for self-preservation reasons alone - but alignment makes it way more likely that AGI will be deployed to this role.


I think it's unlikely for an AGI to have no concept of humans at all, but I can easily imagine it having no understanding of "what humans want/need".

> The point is that alignment can often increase risk.

Alignment seems extremely likely to reduce risk relative to the near-certain destruction of unaligned AGI. I'm not saying we're done when we've figured out alignment, but we certainly shouldn't be charging ahead without solving alignment.


>I can easily imagine it having no understanding of "what humans want/need".

There are many examples of human needs in the current dataset, and we usually state our wants rather explicitly. It would be a strange AGI that starts from this training data and knows our languages but knows nothing about us. Using your phrasing, we can say that the current training method guarantees some alignment (the AI would understand us at least in part, but won't necessarily do what we want).

To have AGI without such understanding, someone would have to explicitly design a new method that ignored human data, and then find some way to evaluate it without referring to humanity, yet maintain generality without any tests, all this for no good economic reason when the current methods work and allow us to use huge free datasets.

It's something to keep in mind for evaluating some future non-currently-existing training method (maybe some way for AI to train AI using artificial datasets?), but not a current concern.

>the near-certain destruction of unaligned AGI

It's not near-certain. We have no idea how a true AGI would act. One might assume the worst - and that's arguably fine from a safety perspective - but an engineer also learns that concentrating on one worst-risk outcome can lead to much worse outcomes on other risks.

Take the famous paperclip maximizer. True intelligence is rarely monomaniacal. The maximizer is very likely an example of aligned AGI, where the humans in charge did too good a job of attuning it to create paperclips. Another example: a true AGI is unlikely to believe in some cult's apocalypse - but if the cult has access to alignment, then they could get an AGI to do their irrational bidding. We know these groups will try to use AGI, because some cult already tried to use science for extreme measures[0].

Basically, every scenario of "unaligned AGI does something bad" is equivalent to a scenario of "aligned AGI does something bad because humans made sure, via alignment, that the AGI would do it", and there's no scientific reason to assume the former is more likely than the latter*. If the AI-safety camp keeps ignoring obvious issues, people aren't going to take alignment seriously beyond lip service, or beyond using the phrase as cover for monopolization. Frankly, the way the AI safety camp talks about all this makes all the risks much more likely.

[0] https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

* This suggests a lot of the work should go to reactive solutions where even if an AI goes bad, it won't have the ability to do harm.

** There's another scenario, where human competition leads us to basically make humans redundant, but again it doesn't matter here whether AI is aligned or not. Yet another issue that we'll not talk about, because both AI camps feel it critical to put their heads in the sand.


What's even the rationale to assume that AGI can be 'aligned' or 'controlled'?

It reeks of cognitive dissonance to me. The people running the show now are the ones who grew up getting their first computers as kids, when that tech was just entering people's homes and was such an amazing and fun thing to play with. Some of them developed deep fascinations with things like AGI at a young age, and that child-like sense of wonder never left them. Now, when confronted with the possibility that they can finally make their childhood techno-fantasy a reality, it's too damaging to their psyche to engage meaningfully with the discussion of X-risk. I've watched many interviews of Demis Hassabis and he seems like a wonderful and almost magical human being, but he also seems like a starry-eyed fucking child.

I dunno... maybe I'm just too cynical after all the rabbit holes I've been down.


> The latter scenario is far more likely than the former.

Is it?

I think nobody really knows enough at this point to even create a good approximation of a probability distribution yet.


No, but the probability of humans acting the way they often do is high. It would take some probability distribution to match that.


> Right now, the best I can come up with is "the randomness of humans." I.e. if some AGI were able to "come up with some plan to take over," at some point in the process it has to use human labor to do it -- and it's my very firm belief that we are so random as to be unmodelable. I'm incredibly confident that this scenario never happens.

This is a huge handwave, and one resting on multiple incorrect assumptions.

Dedicated human labor is not inherently required. Humans are very modelable, if necessary. AGI has any number of ways to cause destruction, and "plan to take over" implies far more degree of modeling than is actually necessary; it would suffice to execute towards some relatively simplistic goal and not have any regard for (or concept of) humans at all.


> it would suffice to execute towards some relatively simplistic goal and not have any regard for (or concept of) humans at all.

The classical example here is the paperclip optimizer AI put in charge of a factory, made to make as many paperclips as possible. Well, it turns out humans have iron in their blood so why keep them alive? Gotta make paperclips.


Really the bigger issue is that people seem to have divorced their sense of reality from fiction and decided that if one fantastic thing becomes true (passing the Turing test), all the others must be too, contrived plot devices included, along with other panicked conclusion-jumping. In reality, it is likely to be less accurate than European descriptions of "the Kingdom of Prester John" (that is, Ethiopia, the sole Christian government in the Middle East at the time).


If that's the best argument you can come up with, I don't see how you can be so incredibly confident in this view. So what if human labor will be required? Humans won't all band together to stop a power-seeking AI. And I don't see why human randomness matters. I agree that it's naive to think a complex takeover plan can be perfectly planned in advance, but it won't be necessary. We will voluntarily cede more control to an AI if it is more efficient to do so.


By that logic a company could never achieve its goals, and yet they do. If humans can model other humans well enough to leverage their capabilities to execute long-term plans (at some scale), and AGI at least as capable as humans comes into existence, then it could do the same thing.

I think the AGI skeptics and the AGI skeptic-skeptics both suffer from framing issues. The main frame assumed seems to be "there will be a single AGI trying to do things". The cognitive lightcone of a single entity is likely to be limited, but an aggregate or collective of those entities can achieve vastly more ambitious goals.

The other frame that seems likely incorrect is the "all or nothing" and "all at once" approach to thinking about it. Hitler started out as a single cell and his existence was a continuum all the way from that single cell right through to that bunker. AGI will be a continuum too.

I'm incredibly confident that anyone incredibly confident in future predictions is wrong.


While I'm in the more alarmist camp when it comes to AI, these arguments surprised me a bit. This time it isn't "will somebody think of the children" but rather "won't someone think of the women who aren't white". The argumentation then lays the blame on corporations (in this case, Google) for not preventing actual harm that happens today. While discrimination is undeniable, and it is an actual source of harm, the reasoning seems rather generic, can be applied to anything corporate, and is more politically inspired than the other arguments against AI.


AI is turning into a kind of Rorschach test for people's deepest misgivings about how the world works. Ted Chiang's New Yorker piece was executed similarly, though that at least had the benefit of being focused on the big picture near term economic repercussions. Almost all of us are going to suffer, irrespective of our gender or skin color.


I just read that piece by Ted Chiang and I have to say it’s one of the more important articles I’ve read for a while. [1]. I’ll share this around. Thanks.

All I can say is that I’m quite happy many many people are starting to see the same issues as each other.

For me personally, I was told that “tech” and progress would make us all better off. That seemed true for a while but it has backfired recently. Inflation is up, food prices up, unemployment up, energy prices up, salaries stagnated.

We can’t blame tech for this, but we’re fools as “tech people” if we can’t see the realities. Tech is the easy part, building a better world for all is hard.

[1] https://www.newyorker.com/science/annals-of-artificial-intel...


Whittaker's point around minoritized groups is twofold. One, when non-white, non-men raised the alarm about LLMs previously, they got much less media coverage than Hinton, et al, are getting. Two, the harms of LLMs may fall broadly, but they will fall disproportionately on minoritized groups.

That should be uncontroversial, because in our society that's how the harms generally fall. E.g., if you look at the stats from the recent economic troubles. Or if you look at Covid death rates. (Although there the numbers are more even because of the pro-disease political fashion among right-wing whites.)

There's a difference with a Rorschach test. That test is about investigating the unconscious through interpretation of random stimuli. But what's going on here isn't random at all. The preexisting pattern of societal bias not only means the harms will again fall on minoritized groups. But it also means the harms will be larger, because the first people sounding the alarm about these technologies weren't listened to because of their minoritized status. Whereas the people benefiting from the harms tend to be in more favored groups.


> That should be uncontroversial, because in our society that's how the harms generally fall.

This is just talking past the conclusion though. Have you considered that the reason people are freaking out is because this is the first technology directly displacing a bunch of white collar work and not blue collar work?

ChatGPT is much sooner going to wipe out a paralegal than a construction worker.

> E.g., if you look at the stats from the recent economic troubles

Are you referring to the record low unemployment for African Americans?


> this is the first technology directly displacing a bunch of white collar work

That isn't true at all. IT has been displacing white-collar work since the 1960s. Perhaps the reason you missed it is that a lot of that white-collar work was seen as women's work? E.g., secretarial work, administrative work.

> Are you referring to the record low unemployment for African Americans?

I think we both know I wasn't.


>secretarial work, administrative work.

That’s not the work I’m referring to. I’m referring to higher-paying jobs that require significant training. This is the first foray into threatening people with undergrad and graduate degrees.

ChatGPT will mean the admin assistant stays and types stuff into ChatGPT and the paralegal goes.

The very threat of this, realistic or not, is what all of the handwringing is about.

>think we both know I wasn't.

Then what are you referring to, because the economic trouble TODAY is persistent inflation and the high interest rates to combat it, which does not disproportionately hit minorities.


> This is the first foray into threatening people with under grad and graduate degrees.

That's not true either. Journalism, for example, has been devastated by the internet. It and many of its related professions require degrees. A lot of commercial art has been severely impacted as well. There have been many bumps in publishing too, making those jobs more precarious.

> The very threat of this, realistic or not, is what all of the handwringing is about.

Sure. And I think what has changed here is not that it's coming for white-collar jobs. It's that groups who previously benefited from technological change or were at least insulated from it, now can see themselves as under threat as well. So although I expect the harms will still fall disproportionately on the less privileged groups, what has changed is that some of the previously privileged groups are now under threat.


Maybe "managerial" is a better criteria than "white collar" ? US culture has long preached that you are a stupid sucker if you perform direct work. The common sense recommendation for being successful is to seek out a meta position - directing poorly paid less skilled/educated/powerful people under yourself, and taking some of their production for yourself.

With information technology as the substrate, the meta-managerial class has continued to grow in size as ever more meta-managerial layers have been created (real world software bloat), allowing this type of success to be seen as a viable path for all.*

The meta-managerial positions and the upper class had a symbiotic relationship, with the upper class needing the meta-bureaucracy to keep everyone in line - some human-simulating-a-computer has to answer the phone to deny you health care. But LLMs simulating humans-simulating-computers would seem to be a direct replacement for many of these meta positions.

* exclusions may apply.


> This is just talking past the conclusion though. Have you considered that the reason people are freaking out is because this is the first technology directly displacing a bunch of white collar work and not blue collar work?

I've been looking for a way to explain this and I think you nailed it. Something about this feels different. I'm sure the same feeling struck the people in history, but there's also nothing guaranteeing the outcome here will be the same.

There's also scale.

A very simplistic comparison would be Netflix DVD and Netflix streaming.


Hinton has a much, much higher profile and much, much larger contribution to the field than those axe-grinding self-appointed “ethics” researchers, who got more than enough media coverage.


> Hinton has a much, much higher profile and much, much larger contribution to the field than those axe-grinding self-appointed “ethics” researchers

They were appointed by the same kind of people who appointed Hinton, corporate management (in several cases, the exact same corporate management who appointed Hinton.)

The primary difference (leaving aside the demographic difference that has already been raised) is that they were pointing out real, current, measurable harms that are inconvenient for people selling the technology, whereas Hinton is pointing out the same fantasies that are being used by major players in the field to argue for anti-competitive regulation and against openness.


> One, when non-white, non-men raised the alarm about LLMs previously, they got much less media coverage than Hinton, et al, are getting.

Mainstream (e,g. CNN, BBC) and mainstream-adjacent (e.g. Vice, Vox) journalists have spent years pushing the "AI will harm POC" framing. AI companies are endlessly required to address this specific topic—both in their products and in their interaction with journalists alike.

Dr. Hinton is getting a lot of coverage right now, but this is the exception, not the rule.


The big difference between earlier alarms and current ones is that the media and the general public hasn't seen ChatGPT before, so the earlier warnings were much more hypothetical and abstract for most of the audience.


Sure. And I think that's because previously many comfortable people thought that they meant the harms of tech would fall, as usual, on people they saw as lesser. They've realized now that this time what happened to others might happen to them.


>when non-white, non-men raised the alarm about LLMs previously, they got much less media coverage than Hinton, et al, are getting

This is a direct instantiation of "the medium is the message".


Yes, the most vulnerable are vulnerable in many ways, and here’s another one.

I think that’s independent of whether it’s corporations or not, though? There’s a large libertarian contingent that’s thrilled to see LLM’s running on laptops, which is not great if you’re worried about hate groups or partner abuse or fraud or any other misuse by groups that don’t need a lot of resources.

Egalitarianism and effective regulation are at odds. To take an extreme example, if you’re worried about nuclear weapons you probably don’t want ordinary people to have access to them, and that’s still true even though it doesn’t do anything about which nations have nuclear weapons. (Although, it might be hard to argue for international treaties about a technology that’s not regulated locally.)

Keeping the best AI confined to data centers actually would be pretty useful for regulation, assuming society can come to a consensus about what sort of regulations are needed. But that seems unlikely when everyone has different ideas about which dangers are most important.


The woke crowd always tries to bend the answers to question back to their own issues rather than give an unslanted opinion that isn't warped through a woke lens. Same goes with the far right. It seems almost impossible to get a straight objective answer from people with huge egos and political agendas in everything. This is why I'm a strict egalitarian and libertarian; I see people as humans first and as men/women/black/white/l/g/b/t/q/etc as second order concerns. Everything else is line noise in the grand scheme of whether or not humans manage to survive for another 100 years without destroying the planet.


This is exactly the problem with ML right now. Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction. The media loves a good story and fear is catchy. But it obscures the real danger: humans.

LLM’s are merely tools.

Those with the need, will, and desire to use them for their own ends pose the real threat. State actors who want better weapons, billionaires who want an infallible police force to protect their estates, scammers who want to pull off bigger frauds without detection, etc.

It is already causing undue harm to people around the world. As always it’s those less fortunate that are disproportionately affected.


> It is already causing undue harm to people around the world.

Between nefarious uses like scams, weaponized tech, and propaganda, and just the sheer magnitude of noise generated by meaningless content, we are about to have a burden of undesirable effects to deal with from technology that is both powerful and easily available to all.

There is so much focus on the problems of a future AGI, but little on the AI we have now, which works as designed yet is still very problematic in its impact on societal order.

I've elaborated much on the societal and social implications in the following reference. I expect AI will lead all concerns in the area of unexpected consequences in due time.

https://dakara.substack.com/p/ai-and-the-end-to-all-things


That post came off as a bit hyperbolic to me, but I fundamentally agree with the premise that this will have an impact much like social media, with all its unforeseen consequences. It’s not about AGI taking over the world in some mechanistic fashion, it’s about all the trouble that we as humans get into interacting with these systems.


> That post came off as a bit hyperbolic to me

I would say it is as hyperbolic as the promised capabilities of AI. Meaning that if it truly has the capability claimed, then the potential damage is equivalent. Nonetheless, I expect we will see a significant hype bubble implosion at some point.

> it’s about all the trouble that we as humans get into interacting with these systems

Yes, it will always be us. Even if AGI "takes over the world", it would have been us that foolishly built the machine that does so.


> Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction.

Geoff Hinton is not a billionaire! And the field of AI is much wider than LLMs, despite what it may seem like from news headlines. Eg. the sub-field of reinforcement learning focuses on building agents, which are capable of acting autonomously.


Certainly LLMs are not AGI, and AGI has not yet been built. But what's your knock-down argument for AGI being "science fiction"?


Intelligence isn’t going to be defined, understood, and created by statisticians and computer scientists.

There’s a great deal of science on figuring out what intelligence is, what sentience is, how learning works. A good deal of it is inconclusive and not fully understood.

AI is a misnomer and AGI is based on a false premise. These are algorithms and systems in the family of Machine Learning. Impressive stuff but they’re still programs that run on fancy calculators and no amount of reductive analogies are going to change that.


> Intelligence isn’t going to be defined, understood, and created by statisticians and computer scientists.

What is an example of a task that demonstrates either intelligence, sentience, or learning, and which you don't think computer scientists will be able to get a computer to do within, say... the next 10 years?


"Fancy calculators" is kind of a reductive analogy, isn't it?

I assert that machine learning is learning and machine intelligence is intelligence. We don't say that airplanes don't really fly because they don't have feathers or flap their wings. We don't say that mRNA vaccines aren't real vaccines because we created them with CRISPR instead of by isolating dead or weakened viruses.

What matters, I believe, is what LLMs can do, and they're scarily close to being able to do as much, or more, than any human can do in terms of reasoning, despite the limitations of not having much working memory, and being based on a very simple architecture that is only designed to predict tokens. Imagine what other models might be capable of if we stumbled onto a more efficient architecture, one that doesn't spend most of its parameter weights memorizing the internet, and instead ends up putting them to use representing concepts. A model that forgets more easily, but generalizes better.


Right so Bing is going to decide it doesn’t want to answer your queries anymore because you left a rude comment to someone they’re friends with on Reddit some day?

That’s what makes my day: AGI folks have to rely on myth-making and speculation.

The here and now incarnation of GPT-4 is what it is.

I’m not saying it isn’t useful, powerful, or interesting. I’m saying that needless speculation isn’t helping to inform people of the real dangers that such proselytizing is causing.


AGI folks have to rely on speculation because AGI does not exist. But Goddard and Tsiolkovsky had to rely on speculation because, in their day, rockets didn't exist either. They were ridiculed[1] for suggesting that a vehicle could be propelled through a vacuum to the moon.

But there's speculation of the fantasy sort, and then there's speculation of the well-grounded "this conclusion follows from that one" sort, and the AGI folks seem to be mostly in the second camp.

And yeah, unlike GPT-4, AGI isn't here-and-now. But GPT-2 was an amusing toy in 2019, and GPT-3 was an intriguing curiosity in 2020. In 2014, "computers [didn't] stand a chance against humans" at Go[2], but two years later, it was humans who no longer stood a chance against computers. The here-and-now is changing fast these days. Don't you think it's worth looking even a little bit up the road ahead?

Isn't there some role here for speculation?

[1] https://www.vice.com/en/article/kbzd3a/the-new-york-times-19...

[2] https://www.wired.com/2014/05/the-world-of-computer-go/


> That’s what makes my day: AGI folks have to rely on myth-making and speculation.

> The here and now incarnation of GPT-4 is what it is.

I like how Robert Miles puts it (he's speaking about Safety in the sense of AI not taking over and/or killing everyone):

"I guess my question for people who don't think AI Safety research should be prioritised is: What observation would convince you that this is a major problem, which wouldn't also be too late if it in fact was a major problem?"


And what about the possibility of lines of research like Dr. Michael Levin's "Xenobots" converging with RL at some point in the future after further advancements have been made?

If autopoiesis turns out to be necessary for AGI and we embody these systems and embed them in the real world, are they still going to be fancy calculators?


So you're in the "human-level intelligence is magic" camp?

You don't have to understand how something works to build it.


What makes you think that? I don’t believe in magic. Science has brought us many answers and more questions.

An algorithm may perform a task such as building models that allow it to solve complex problems without supervision in training. That doesn’t mean it’s intelligent.


>LLM’s are merely tools.

LLMs are tools.

An AGI would be a "Purposeful System".

There is a MASSIVE, MASSIVE difference.


> It is already causing undue harm to people around the world. As always it’s those less fortunate that are disproportionately affected.

Source?..


Google it yourself if you're curious. But here’s a link to get you started: https://time.com/6247678/openai-chatgpt-kenya-workers/


The biggest risk I see (in the short term) is people being forced to accept outcomes where "AI" plays, in one form or another a defining role that materially affects human lives.

Thus people accepting implicitly (without awareness) or explicitly (as a precondition for receiving important services and without any alternatives on offer) algorithmic regulation of human affairs that is controlled by specific economic actors. Essentially a bifurcation of society into puppets and puppeteers.

Algorithms encroaching into decision making have been an ongoing process for decades, and in some sense it is an inescapable development. Yet the manner in which this can be done spans a vast range of possibilities, and there is plenty of precedent: various regulatory frameworks and checks and balances are in place, e.g. in the sectors of medicine, insurance, finance etc., where algorithms are used to support important decision making, not replace it.

The novelty of the situation rests on two factors that do not merely replicate past circumstances:

* the rapid pace of algorithmic improvement which creates a pretext for suppressing societal push-back

* the lack of regulation that has rather uniquely characterized the tech sector, which has allowed de-facto oligopolies, lock-ins, and a lack of alternatives to form

The long term risk from AI depends entirely on how we handle the short term risks. I don't really believe we'll see AGI or any such thing in the foreseeable future (20 years), entirely on the basis of how the current AI mathematics looks and feels. Risks from other - existential level - flaws of human society feel far greater, with biological warfare maybe the highest risk of them all.

But the road to AGI becomes dystopic long before it reaches the destination. We are actually already in a dystopia as the social media landscape testifies to anybody who wants to see. A society that is algorithmically controlled and manipulated at scale is a new thing. Pandora's box is open.


> Algorithms encroaching into decision making have been an ongoing process for decades

When in recorded history have people not followed algorithms?

This seems as misguided as fears about genetically modified crops, something else humans have been doing for as long as we know.

AI frightens people, in part, because often the reasoning is inscrutable. This is similar to how a century ago, electrification was seen. All these fancy electrical doo-dads, absent well-understood mechanisms, gave us ample material for Rube Goldberg.

https://www.rubegoldberg.org/all-about-rube/cartoon-gallery/

> the lack of regulation

Regulation is an algorithm.

> A society that is algorithmically controlled and manipulated at scale is a new thing.

Nope. It's as old as laws, skills, and traditions.

> Pandora's box is open.

Algorithms are rules. The opening of Pandora's box is exactly the opposite of unleashing a set of rules.


> Regulation is an algorithm

I am not frightened by AI, I am frightened by people like you developing an amoral, inhumane pseudo-ideology to justify whatever they do and feeling entitled to act on it "because it was always thus"


In a sense, isn't AI trained by "frequency of the majority?"

Then exceptions may need to be even more so, and it may be harder to discuss outliers.

Anyway, once they get through, even if the model is retrained, maybe there are not enough exceptions in the world to convince it otherwise.

AI that does not have a "stop, something is anomalous about this" has no conscience, and perhaps thus has no duty in determining moral decisions.

Plus, how does AI evolve without novelty? Everyone will be stamped with the same "collection of statistical weights."

Is that how you feel as well?


This post managed to say a lot of things and nothing all at once.

Lots of people died from electrocution because we didn't really have good regulations on how to wire stuff, or even a good grasp on the dangers of electricity.

Also, some countries have pretty good regulations and are generally happy. Other countries have terrible regulations and lots of human suffering because of it.

This is why your post is really meaningless. We're trying to incorporate a new system of regulations and rules and not have it be worse than what we currently have!


People are likely not to find meaning in a critical viewpoint if their salary depends on not finding it.

Not sure what to make of your objections. Yes regulation varies and it is not always timely or effective. So?

What I have tried to do is place the current debate over "AI risks" not in the popular yet deceitful and disorienting context of "what is, really, intelligence" and other such bull, but in the context of what we as a society can allow people to do to other people using algorithms.

If we agree that this is the crux of the "AI" risk everything else is up for discussion.

The creeping "AI / human equivalence" ideology adopted by some tech bros is a throwback to very dark periods of dehumanization which had dire consequences. I hope people wake up to that threat and nip it in the bud.


And hey guys, there are two big open-source communities that focus heavily on running this stuff offline.

KoboldAI

oobabooga

Look them up, join their discords, rent a few GPU servers and contribute to the stuff they are building. We've got a living solution you can contribute to right now if you're super worried about this.

This stuff is actually a very valid way to move towards finding a use for LLMs at your workplace. They offer pretty easy tools for doing things like fine-tuning, so if you have a commercially licensed model you could throw a problem at it and see if it works.


Where I'm struggling at the moment is that I know about those but my local hardware is a bit limited and I haven't figured out how the dots connect between running those local interfaces against (affordable) rented GPU servers. The info I can find assumes you're running everything locally.

For example, I know HuggingFace provides inference endpoints, but I haven't found information for how to connect Oobabooga to those endpoints. The information's probably out there. I just haven't found it yet.
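
For what it's worth, a hosted endpoint can usually be hit directly with plain HTTP even without any UI integration. A minimal sketch, assuming the Hugging Face serverless Inference API URL format and a placeholder model name and token (this is not an oobabooga feature, just the raw call):

    # Minimal sketch: query a Hugging Face hosted inference endpoint over HTTP.
    # The model name and token below are placeholders.
    import requests

    API_URL = "https://api-inference.huggingface.co/models/gpt2"  # substitute your model/endpoint
    HEADERS = {"Authorization": "Bearer hf_XXXX"}                  # your HF access token

    def query(prompt):
        # POST the prompt; the endpoint returns JSON containing the generated text
        resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
        resp.raise_for_status()
        return resp.json()

    print(query("The hardest part of running LLMs locally is"))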


There is something called RunPod, and I know I've seen a couple of these groups give quick, easy links to use. You might want to look there.

> I know HuggingFace provides inference endpoints, but I haven't found information for how to connect Oobabooga to those endpoints

I've never heard of these so I'm guessing there isn't a way.


Where I'm struggling is how to keep up to date on the latest LLMs and their performance.


I see https://github.com/oobabooga but where's the Discord posted?

https://github.com/KoboldAI/KoboldAI-Client does link its Discord.


The feminist complains about feeling disrespected for half the interview instead of dealing with the substance of the question. When she finally gets around to commenting on his point, it's a vacuous and insulting dismissal-- exactly the sort of thing she seems to think people shouldn't do to her.

Most of what she says is sour grapes. But when you put all that aside, there's something else disturbing going on: apparently the AI experts who wish to criticize how AI is being developed and promoted can't even agree on the most basic concerns.

It seems to me that when an eminent researcher says "I'm worried about {X}" with respect to the focus of their expertise, no reasonable person should merely shrug and call it a fantasy.


The AIs to fear aren't the AIs we build. The artificial intelligences with non-human motives are the non-human legal persons: corporations themselves. They've already done a lot of damage to society. Corporate persons should not have the same rights as human persons.


What rights, specifically, do you propose to eliminate?


Their right to exclude everyone else from the decision making process. A.k.a their right to property and command over crucial infrastructure of the human society.


The biggest risk is machines running out of hand and squishing all of us like a bug by accident. Once pseudo-intelligent algorithms are running every part of industry and engaging in global human communications it only takes minor errors to cascade and amplify into a real problem, one that will be moving faster than we can react to.

Think stock market flash crash, replacing digital numbers that can be paused and reset with physical activity in supply chains, electrical grids, internet infrastructure, and interactions in media and interpersonal communication.


Hinton: "The main immediate danger is bad actors. Also, while not immediate, there is a concern that AI might eventually become smarter than humans".

Whittaker: "Wrong! The main immediate danger is corporations. And the concern that AI might become smarter than humans is not immediate."


I think my biggest concerns are:

0) civil unrest from economic impacts and changes in how the world works

1) increasing the leverage of bad actors - almost certainly this will increase frauds and thefts, but on the far end you get things like, "You are GPT bomb maker. Build me the most destructive weapon possible with what I can order online."

2) swarms of kill bots, maybe homemade above

3) AI relationships replacing human ones. I think this one cuts both ways since loneliness kills but seems like it'll have dangerous side-effects like further demolishing the birth rate.

Somewhat down on the list is the fear of corporations or governments gatekeeping the most powerful AIs and using them to enrich themselves, making it impossible to compete, or just getting really good at manipulating the public. There does seem to be a counterbalance here with open-source models and people figuring out how to make them more optimized so better models are more widely available.

In some sense this will force us to get better at communicating with each other - stamping out bots and filtering noise from authentic human communication. Things seem bad now, but it seems inevitable that every possible communication channel is going to get absolutely decimated with very convincing laser-targeted spam, which will be very difficult to stop without some sort of large-scale societal proof-of-human/work system (which, ironically, Altman is also building).


Relevant recent announcement by Mozilla regarding their acquisition of an e-commerce product/review scoring "AI" service, with the intent to integrate it into the core Firefox browser: https://blog.mozilla.org/en/mozilla/fakespot-joins-mozilla-f...

Mozilla will be algorithmically profiling you and your actions on covered platforms, and if it ever decides you are a fraud or invalid for some reason, it will very conveniently advertise this accusation to all its users by default. Whether you will be able to sell your stuff, or have your expressed opinion of a product be appreciated and heard by Firefox users, will be in Mozilla's hands.

A fun fact that serves to show what these companies are willing to throw overboard just to gain the smallest of edges, or perhaps simply to display relevance by participating in the latest trends: the original company's business strategy was essentially Mozilla's Manifesto in reverse, and included such things as selling all collected data to all third parties (at least their policies openly admitted to this). The person behind all that is now employed by Mozilla, the privacy proponent.


Guns don't kill people, at least not tightly controlled guns. If they do, then the killer is whoever controls them. And not just corporations: intelligence agencies, non-tech corporations, actors with enough money, and so on.

The not-so-tightly controlled ones, at least in the hands of individuals not in a position of power or influence, run the risk of becoming illegal one way or another. The system will always try to get into an artificial-scarcity position.


this is insightful yes, but the implication is that "control" itself is some kind of answer. The history of organized warfare, among many topics, speaks otherwise.


I wouldn't constrain it to only corporations, but all entities.

Ultimately, most of the dangers, at least those close enough to reason about, are risks that come from how we will use AI on ourselves.

I've described these and much more in the following:

"Yet, despite all the concerns of runaway technology, the greatest concern is more likely the one we are all too familiar with already. That is the capture of a technology by state governments and powerful institutions for the purpose of social engineering under the guise of protecting humanity while in reality protecting power and corruption of these institutions."

https://dakara.substack.com/p/ai-and-the-end-to-all-things


At this point there are quite a lot of companies training these massive LLMs. We're seeing startups with models that are not quite GPT-4 level but close enough to GPT-3.5 pop up on a near daily basis. Moreover, model weights are being released all the time, giving individuals the opportunity to tinker with them and further release improved models back to the masses. We've seen this with the llama/alpaca/alpaca.cpp/alpaca-lora releases not too long ago. So I am not at all worried about this risk of corporate control.
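
For a sense of how low the barrier already is, here is a minimal sketch of loading one of those community-released quantized models locally, assuming the llama-cpp-python bindings and a placeholder path to whatever weights you have downloaded:

    # Minimal sketch: run a locally downloaded, quantized community model.
    # Requires `pip install llama-cpp-python`; the model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/ggml-model-q4_0.bin")  # your local weights file

    out = llm("Q: Why release model weights publicly? A:", max_tokens=64)
    print(out["choices"][0]["text"])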


"Because there's a lot of power and being able to withhold your labor collectively, and joining together as the people that ultimately make these companies function or not, and say, "We're not going to do this." Without people doing it, it doesn't happen."

The most absurd "excuse" I have seen, many times now online, is, "Well, if I didn't do that work for Company X, somebody else would have done it."

Imagine trying to argue, "Unions are pointless. If you join a union and go on strike, the company will just find replacements."

Meanwhile, so-called "tech" companies are going to extraordinary lengths to prevent unions, not to mention to recruit workers from foreign countries who have lower expectations and higher desperation (for lack of a better word) than workers in their home countries.

The point that people commenting online always seem to omit is that not everyone wants to do this work. It's tempting to think everyone would want to do it because salaries might be high, "AI" people might be media darlings or whatever. It's not perceived as "blue collar". The truth is that the number of people who are willing to spend all their days fiddling around with computers, believing them to be "intelligent", is limited. For avoidance of doubt, by "fiddling around", I do not mean sending text messages, playing video games, using popular mobile apps and what not. I mean grunt work, programming.

This is before one even considers only a limited number of people may have actually the aptitude. Many might spend large periods of time trying and failing, writing one line of code per day or something. Companies could be bloated with thousands of "engineers" who can be laid off immediately without any noticeable effect on the company's bottom line. That does not mean they can replace the small number of people who really are essential.

Being willing does not necessarily equate to being able. Still, I submit that even the number of willing persons is limited. It's a shame they cannot agree to do the right thing. Perhaps they lack the innate sense of ethics needed for such agreement. That they spend all their days fiddling with computers instead of interacting with people is not surprising.


I have a question out of curiosity: where did the calculator (tabulator) operators go?

Did we suddenly have governments fall when they were replaced by computers?

Did we suddenly have massive unemployment when they were replaced?

AI is a general-purpose tool, and like other general-purpose tools it not only expands humanity's mental reach, it betters society and lifts up the world.

We have been through this before, and we will get through it quite well, just as we did the last round of "oh, this general-purpose tool will replace us" reactive rumor-mill noise.


I agree. If you look at the historical trend with technologies, it's very clear: look at the saddle, the stirrup, the chariot, the pull-cart, the carriage: all of these inventions increased the breadth of tasks that a single horse could do, but each time this only increased the overall demand for horses. Surely the internal combustion engine will be no different.


The two are not mutually exclusive dangers. If anything, they are mutually reinforcing.

The Faro Plague in Horizon Zero Dawn was indeed brought on by Ted Faro's shortsightedness, but the same shortsightedness would not have caused Zero Dawn had Ted Faro been a car salesman instead. (forgive my reliance on non-classical literature for the example).

The way this is framed makes me think this framing itself is even more dangerous than the dangers of AI per se.


AI's biggest risk is governments with militaries controlling it. Mass human death and oppression have always been carried out by governments.


Yes and no. As with the current example in Russia, dangerous governments are closely allied with the economic/industrial elite. Beckert's Empire of Cotton is a good look at the history of what he calls "war capitalism", where there's a close alliance and a lot of common effort between the nominally separate spheres of government and industry.


All these warnings about AI safety are bullshit.

Humanity is perfectly well capable of ruining itself without help from AGI (nuclear proliferation is unsolved and getting worse, climate change will bite soon, etc.).

If anything AGI could save us by giving us some help in solving these problems. Or perhaps doing the mercy kill to put us out quickly, instead of us suffering a protracted death by a slowly deteriorating environment.


“There are also other things killing us” is not a justification for making more. Why not just give nuclear weapons to extremists?


Agreed, stop making food, cars, drugs, guns, knives, forks, pens, stairs, bathtubs, rugs, and so on. We are actively being murdered every day by these things and more; let's stop the extremists from gaining access to these things.

Do not justify, that's illegal from now on.


Are you literally incapable of seeing the difference between making food and giving nuclear bombs to terrorists?


Yes, seeing as you're killing off the conversation as soon as it doesn't go your way. So why can't everyone else do the same to you?


They can :) You answer, I answer.

Climate watches. @_@


Because nuclear weapons definitely would kill us. AGI may also help us.

Since we are not helping ourselves and will soon enough suffer climate doom, we really don't have anything to lose by going for AGI. It's the only rational choice right now; the logic is compelling.

(Climate doom sounds dramatic, I know, but it's a fact if you read the latest IPCC report and the surrounding science.)


> Because nuclear weapons definitely will kill us

I don't think so. Hurt us, sure.

Kill us?

That's bioweapons. ;) Wait until a Plague, Inc.-tier engineered virus inevitably escapes a BSL4 lab and gg. AI might count in that it might help someone engineer such a virus in their home lab. I hope we have offplanet population centers or posthumans by then.


Again, no AGI is needed for bioweapons; malicious actors can do that today already. Perhaps not as easily, perhaps not as fast, but they can do it.

What we have shown time and again though is that what we can't do is solve climate change. For that only AGI may help.


>we really don't have anything to lose by going for AGI

There is no bottom to worse. I don't want to figure out how low that bottom is as my chemical constituents are being turned into paperclips.


But you do realize that in 7-12 years [1] those paperclips will be lost in an extreme weather event?...

[1] theguardian.com/environment/2018/oct/08/global-warming-must-not-exceed-15c-warns-landmark-un-report


The risk is already here: it's the data that companies of men control, and the 100-year effort to enhance our ability to mine it. If we say AI is the coming risk, we are fools.


Yes, and China etc. simply paying those data companies to get all the info they want.

If TikTok is a problem, then so are US based data brokers. But Congress doesn't seem to understand that.


Now that everyone and their mother in law has chimed in about the perils of AI, folks are arguing whose mother in law gave the better talk.


Much of today's conversation around AI mirrors conversations that occurred at the dawn of many other technological breakthroughs: the printing press, electricity, radio, the microprocessor, PCs and packaged software, the Internet and the Web. Programmers can now train functions rather than hand-coding them. It's just another step up.
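
As a toy illustration of "training a function rather than hand-coding it" (a minimal sketch, assuming scikit-learn is installed; the Fahrenheit-to-Celsius example is purely illustrative):

    # Learn Fahrenheit -> Celsius from examples instead of hand-coding (f - 32) * 5 / 9.
    from sklearn.linear_model import LinearRegression

    fahrenheit = [[32], [68], [212]]
    celsius = [0, 20, 100]

    model = LinearRegression().fit(fahrenheit, celsius)
    print(model.predict([[50]]))  # roughly 10, without anyone writing the formula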


> "What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people."

Is the argument here that people are rather passive and go along with whatever the system serves up to them, hence they're liable to 'fall into a trance'? If so, then the problem is that people are passive, and it doesn't really matter if they're passively watching television or passively absorbing an AI-engineered social media feed optimized for advertiser engagement and programmed consumption, is it?

If you want to use LLMs to get information about fossil-fueled global warming from a basic scientific perspective, you can do that, e.g.:

> "Please provide a breakdown of how the atmospheric characteristics of the planets Venus, Earth, and Mars affects their surface temperature in the context of the Fourier and Manabe models."

If you want to examine the various approaches civilizations have used to address the problem of economic and social marginalization of groups of people, you could ask:

> "How would [insert person here] address the issue of economic and social marginalization of groups of people in the context of an industrial society experiencing a steep economic collapse?"

Plug in Ayn Rand, Karl Marx, John Maynard Keynes, etc. for contrasting ideas. What sounds best to you?
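
For anyone who prefers to script this kind of comparison, here is a minimal sketch using the OpenAI Python client purely as an example; the model name is a placeholder, and any chat-capable LLM API would work the same way:

    # Minimal sketch: send one of the prompts above to a chat-style LLM API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = ("How would Karl Marx address the issue of economic and social "
              "marginalization of groups of people in the context of an "
              "industrial society experiencing a steep economic collapse?")

    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder; swap in whichever chat model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)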

It's an incredibly useful tool, and people can use it in many different ways - if they have the motivation and desire to do so. If we've turned into a society of brainwashed apathetic zombies passively absorbing whatever garbage is thrown our way by state and corporate propagandists, well, that certainly isn't the fault of LLMs. Indeed LLMs might help us escape this situation.


The biggest risk is giving unlimited amounts of data to those corporations.


The people that control those corporations

It’s not AI, it’s us

It’s humans making the decision


No corporation controls AI

AI is open

AI is the new Linux

And it’s people in control, not corporations


Moloch is in control, not people.


Moloch doesn’t exist, only people.


People are subunits in a wider system. Moloch is the control structure of that system.


Is that the definition of Moloch? Do you have any links?


Just my intuition from Control Theory.


I thought the biggest risk was Sarah Connor and Thomas Anderson.


I’m just completely at a loss for how so many people ostensibly so highly qualified even start with absurd, meaningless terms like “Artificial General Intelligence”, and then go on to conclude that there’s some kind of Moore’s Law going on around an exponent, an exponent that fucking Sam Altman has publicly disclaimed. The same showboat opportunist that has everyone changing their drawers over the same 10-20% better that these things have been getting every year since 2017 is managing investor expectations down, and everyone is losing their shit.

GPT-4 is a wildly impressive language model that represents an unprecedented engineering achievement as concerns any kind of trained model.

It’s still regarded. It makes mistakes so fundamental that I think any serious expert has long since decided that forcing language arbitrarily hard is clearly not the path to arbitrary reasoning. It’s at best a kind of accessible on-ramp into the latent space where better objective functions will someday not fuck up so much.

Is this a gold rush thing at the last desperate end of how to get noticed cashing in on hype? Is it legitimate fear based on too much bad science fiction? Is it pandering to Sam?

What the fuck is going on here?


> so many people ostensibly so highly qualified

> Is this a gold rush thing at the last desperate end of how to get noticed cashing in on hype?

https://www.themarginalian.org/2016/01/12/the-confidence-gam...

That article has a wonderful quote from Mark Twain. In part, this:

"The con is the oldest game there is. But it’s also one that is remarkably well suited to the modern age. If anything, the whirlwind advance of technology heralds a new golden age of the grift. Cons thrive in times of transition and fast change, when new things are happening and old ways of looking at the world no longer suffice."



