Hacker News

Can someone make an honest argument for how OpenAI staff are missionaries, after the coup?

I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.

(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)



That is just an act of a corpo-CEO bullshitting employees and the press about high moral standards, mission, etc. Don't trust any of his words.


Anytime someone tells you to be in it for the mission, you are expendable and underpaid.


I don't at all disagree with you, but at the kind of money you'd be making at an org like OAI, it's easy to envision there being a ceiling, past which the additional financial compensation doesn't necessarily matter that much.

The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.

That said, Sam Altman is also a guy who stuck nondisparagement terms into their equity agreement... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.


We shouldn’t even be using the offensive word “poaching.” As an employee, I am not a deer or wild boar owned by a feudal lord by working for his company. And another employer isn’t some thief stealing me away. I have agency and control over who I enter into an employment arrangement with!


I don't disagree with this either -- it's very clearly just a free market working both ways.

It also immediately reminds me of the no-cold-call agreements companies had with each other 10 or 15 years ago.


So then, is "headhunting" more or less bad?


I think anything that evokes “hunting on someone else’s land for his property” is equally inappropriate.


Would "bought" be better then? implies slavery!


There's a word for this, it's called being hired.


"Making a more competitive offer"


Those could be genuine words. The mission is to be expendable and make them rich.

Don't forget about the mission during the next round of layoffs and record-high quarterly profits.


Totally agree.

Well said.

Man, you are on a mission, to enable manumission!

https://en.m.wikipedia.org/wiki/Manumission


Crazy that this proves that engineers making >1 million USD/year can still be underpaid.


Yes, capitalism is an amazing thing.


Could Facebook hire away OpenAI people just by matching their comp? Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.

And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"

In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.


> Facebook is widely hated and embarrassing to work at.

Not sure it's widely hated (disclaimer: I work there), despite all the bad press. The vast majority of people I meet respond with "oh how cool!" when they hear that someone works for the company that owns Instagram.

"Embarassing to work at" - I can count on one hand the number of developers I've met who would refuse to work for Meta out of principle. They are there, but they are rarer than HN likes to believe. Most devs I know associate a FAANG job with competence (correctly or incorrectly).

> Could Facebook hire away OpenAI people just by matching their comp?

My guess is some people might value Meta's RSUs, which are very liquid, higher than OAI's illiquid stock? I have no clue how equity compensation works at OAI.


Within my (admittedly limited) social circle of engineers/developers there is consensus that working at Facebook is pretty taboo. I’ve personally asked recruiters to not bother.


Honestly I’d be happy to work at any FAANG. Early FB in particular was great in terms of keeping up with friends.

I’ve only interviewed with Meta once and failed during a final interview. Aside from online dating and defense, I don’t have any moral qualms regarding employment.

My dream in my younger days was to hit 500k tc and retire by 40. Too late now


> defense

By defense do you mean like weapons development, or do you mean the entire DoD-and-related contractor system, including like tiny SBIR-chasing companies researching things like, uh

"Multi-Agent Debloating Environment to Increase Robustness in Applications"

https://www.sbir.gov/awards/211845

Which was totally not named in a backronym-gymnastics way of remembering the lead researcher's last vacation destination or hometown or anything, probably.


I'm trying to avoid anything primarily DoD related.

I guess I'd be ok with getting a job at Atlassian even if some DoD units use Jira.

I don't have anything against anyone who works on DoD projects; it's just not something I'm comfortable with.


> Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.

I’m at a point in my career and life, at 51, where I wouldn’t work for any BigTech company (again) even if I made twice what I make now. Not that I ever struck it rich. But I’m doing okay. Yes, I’ve turned down overtures from GCP, Azure, etc.

But I did work for AWS (ProServe) remotely from 46 to 49, knowing going in that it was a toxic shit show, for both the money and for the niche I wanted to pivot to (cloud consulting). I knew it would open doors, and it has.

If I were younger and still focused on money instead of skating my way to retirement working remotely, doing the digital nomad thing off and on, etc., I would have no moral qualms about grinding leetcode and exchanging my labor for as much money as possible at Meta. No one is out here feeding starving children or making the world a better place working for a for-profit company.

My “mission” would be to exchange my labor for as much money as possible, and I tell all of the younger grads the same thing.


I wonder what it is that Facebook offered? It can’t be money, so I think it’s more responsibility or freedom. Or did they have some secret breakthroughs?


It's money. It's also a fresh, small org and a new project, which is exciting for a variety of reasons.


I can't explain why, but I don't think money is it. A new project or whatever can't be it either. It's just too small a value proposition when you're already at OpenAI making banger models used by the world.


According to reports, the comp packages were in the hundreds of millions of dollars. I doubt anyone but execs are making that kind of money at OpenAI; it’s the sort of money you hope for from a successful exit after years of effort. I don’t blame them for jumping ship.


I need you to be a team player on this one.


And will be fired/thrown under the bus the moment firing you is barely more profitable for the CxO than having you around.


Yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 what you could make at FAANG... of course, when it came to our sick customers, they needed to pay market rates.


Yes, especially not his


There are a couple of ways to read the "coup" saga.

1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.

2) Altman was trying to raise cash and saw an opportunity to make loads of money

3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.

Now, what were the board's concerns?

The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?

The answer to the above shapes the reaction I feel I would have as a missionary.

If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing stopping it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long-term/short-term safety.

However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am, many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.


Honest answer*:

I think building super intelligence for the company that owns, and will deploy, the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.

Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.

In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.

*Because I don't have an emotional connection to OpenAI's corporate structure changing away from being a non-profit.


As a thought exercise, OpenAI can partner to apply the technology to:

- online gambling

- kids gambling

- algorithmic advertising

Are these any better? All of these are of course money wells and a logical move for a for-profit, IMHO.

And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.

All in all, I'm not seeing them having any moral high ground, even purely hypothetically.


Wait, if an online gambling company uses the OpenAI API and then hosts it all on AWS, somehow OpenAI is more morally culpable than AWS? Why?


I saw the discussion as whether OpenAI is on a better moral ground than Meta, so this was my angle.

On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.

That, in turn, could have the most impact on what happens when such companies succeed: if they become OpenAI's biggest clients, it wouldn't be surprising if OpenAI put more and more weight on being well suited to online gambling companies.

Does AWS get specially impacted by hosting online gambling services? I honestly don't expect them to, any more than by hosting community sites or concert ticket sellers.


There is no world in which online gambling beats other back-office automation in pure revenue terms. I'm comfortable saying that OpenAI would probably have to spend more money policing to make sure their APIs aren't used by gambling companies than they'd make off of them. Either way, these are all imagined horrors, so it is difficult to judge.

I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.


> There is no world in which online gambling beats other back-office automation in pure revenue terms.

A massive share of Apple's revenue comes from in-app purchases, which are mainly games, and online betting has also entered the picture. We had Tim Cook on the stand explaining that they need that money and can't let Epic open that gate.

I think we're already there in some form or another; the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people).

> I am judging the two companies for what they are, not what they could be

Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.


My point exactly. The App Store has no play in back-office automation, so the comparison doesn’t make sense. AFAICT, OpenAI is already making billions on back-office automation. I just came from a doctor’s visit where she was using some medical-grade ChatGPT wrapper to transcribe my medical conversation, meanwhile I fight with Instagram for the attention of my family members.

AI is already here [1]. Could there be better owners of super intelligence? Sure. Is OpenAI better than Meta? 100%.

[1] https://www.cnbc.com/amp/2025/06/09/openai-hits-10-billion-i...


If you have "superintelligence" and it's used to fine-tune a corporate product that preexisted it, you don't have superintelligence.


> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is a 100x worse than whatever OpenAI is doing,

OpenAI announced in April they'd build a social network.

I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.


Like I told another commenter, "I am judging the two companies for what they are, not what they could be."

I'm sure Sam Altman wants OpenAI to do everything, but I'm betting most of the projects will die on the vine. Social networks especially, and no one's better than Meta at manipulating feeds to juice their social networks.


> In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.

There ain't no missionaries; they're all doing it for the money and will apply it to anything that will turn dollars.


An honest argument is that cults often have missionaries.


I'm not very informed about the coup -- but doesn't it just depend on what side most of the employees sat/sit on? I don't know how much of the coup was just egos or really an argument about philosophy that the rank and file care about. But I think this would be the argument.


There was a petition with a startlingly high percentage of employees signing it, but no telling how many of them felt pressured to in order to keep their jobs.


The thing where dozens of them simultaneously posted “OpenAI is nothing without its people” on Twitter during the coup was so creepy, like actual Jonestown vibes. In an environment like that, there’s no way there wasn’t immense pressure to fall into line.


That seems like kind of an uncharitable take when it can otherwise be explained as collective political action. I’d see the point if it were some repeated ritual but if they just posted something on Twitter one time then it sounds more like an attempt to speak more loudly with a collective voice.


They didn't need pressuring. There was enough money at risk without Sam that they did what they thought was the best way to protect their nest eggs.


That was actually the kind of pressure I was thinking of, not social/managerial pressure, though I think either could apply in that situation, depending on the individual.


Altman has to be the most transparently two-faced tech CEO there is. I don't understand why people still lap up his bullshit.


Money.


What money is in it for the "rationalist", AI-doom crowd that builds up the narrative Altman wants for free?


Suggesting that the AI doom crowd is building up a narrative for Altman is sort of like saying the hippies protesting nuclear weapons are in bed with the arms makers because they're hyping up the destructive potential of hydrogen bombs.


That analogy falls flat. For one, we have seen the destructive power of hydrogen bombs through nuclear tests. Nuclear bombs are a proven, real threat that exists now. AGI is the boogeyman under the bed that somehow ends up never being there when you look for it.


It's a real negotiating tactic: https://en.wikipedia.org/wiki/Brinkmanship

If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree with outrageous, unnecessary investments to reach the perceived goal first. This exactly happened during the Cold War when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap


Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds It, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.


He absolutely is. Again, refer to the nuclear bomb and the unconscionable capital that was invested as a result of early successes in nuclear tests.

That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By framing AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do. Which is a facetious argument when AI has yet to prove it could kill a single person by generating text.


> He absolutely is.

When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite: "oh, you should absolutely hurry to be the first one to build this".

> By framing AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do.

False. This is not a bomb where you can choose where it goes off. The literal title of the book is "if anyone builds it, everyone dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".

If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.


Though there's still the problem that telling a child at an early stage of cognitive development not to do something virtually guarantees that they will try to do it.

One of my favorite Tweets:

https://x.com/AlexBlechman/status/1457842724128833538

> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

> Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus


Edward Teller worried about the possibility that the Trinity nuclear test might start a chain reaction with the nitrogen in the Earth's atmosphere, enveloping the entire planet in a nuclear fireball that destroyed the whole world and all humans along with it. Even though this would have meant that the bomb would have had approximately a billion times more destructive power than advertised, and made it far more of a doomsday weapon, I think it would also not have been an appealing message to the White House. And I don't think that realization made anyone feel it was more urgent to be the first to develop a nuclear bomb. Instead, it became extremely urgent to prove (in advance of the first test!) that such a chain reaction would not happen.

I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.


The grandparent asked what money was in it for rationalists.

You're saying an AI researcher selling AI Doom books can't be profiting off hype about AI?


This reminds me a lot of climate skeptics pointing out that climate researchers stand to make money off books about climate change.

Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).


The inverse is true, though - climate skeptics are oftentimes paid by the (very rich) petrol lobby to espouse skepticism. It's not an asinine attack, just an insecure one from an audience that also overwhelmingly accepts money in exchange for astroturfing opinions. The clear fallacy in their polemic being that ad-hominem attacks aren't addressing the point people care about. It's a distraction from global warming, which is the petrol lobby's end goal.

Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin: grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign-policy commentary that would make Tom Clancy blush.


What sort of unsupported ridiculous forecast do you mean? Can you point to one?


I'm not the grandparent, but the more interesting question is what could possibly constitute "supporting evidence" for an AI Doom scenario.

Depending on your viewpoint this could range from "a really compelling analogy" to "A live demonstration akin to the trinity nuclear test."


FWIW, in the case of Eliezer's book, there's a good chance that at the end of the day when we account for all the related expenses, it makes very little net profit, and might even be unprofitable on net (which is totally fine, since the motivation from writing the book isn't making money).


Dumb people need symbols. Same reason Elon gets worship.


"He looks like such a nice young man"


People buy into the BS and are terrified of missing out or being left behind.


Tim Cook is right there. If I say "Vision Pro" I'll probably get downvoted out of a mere desire to not want to talk about that little excursion.


The Vision Pro flopped, but I don't see the connection to two-faced-ness. Help?


The "this is our best product yet" to "this is an absolute flop" pipeline has forced HN into absolute denial over the "innovation" their favorite company is capable of.


Missionary (from wikipedia):

A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary

Post-coup, they are both for-profit entities.

So the difference seems to be that when Meta releases its models (like bibles), it is promoting its faith more openly than OpenAI, which interposes itself as an intermediary.


I'd bet 100 quatloos that your comment will not have honest arguments below. You can't nurture missionaries in an exploitative environment.


Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.

The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .

Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.


There's one slightly more common outcome of your so-called "missionary activities".


Eh? There are plenty of cults, like Jehovah's Witnesses, that are exploitative as hell.


This is just a CEO gaslighting his employees to "think of the mission" instead of paying up

No different than "we are a family"


But “we are family”


I got all my sisters with me.


> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”

tl;dr: knife fights in the hallways over the remaining lifeboats.


100% agree. You are hearing the dictator claim righteousness.


Yeah... didn't the missionaries all leave after the coup? And the folks who remain are the mercenaries looking for the big stock win after sama figures out a way to be acquired or IPO?

All the chatter here, at least, was that the OpenAI folks were sticking around because they were looking for a big payout.


[flagged]


They didn't mean it as a pun, but understanding it as a pun helps understand the situation.

In religions, missionaries are those people who spread the word of god (gospel) as their mission in life for a reward in the afterlife. Obviously, mercenaries are paid armies who are in it for the money and any other spoils of war (sex, goods, landholdings, etc.)

So I guess he's trying to frame it as them being missionaries for an Open and accepting and free Artificial Intelligence and framing Meta as the guys who are only in it for the money and other less savory reasons. Obviously, only true disciples would believe such framing.


English is my first language: they mean that Sam Altman's people are preaching a righteous future for AI, or something vague like that.


Close. A missionary is what the sex position was named after.


Specifically, Catholic missionaries indoctrinating indigenous cultures into their church's imaginary sexual hangups. All other positions were considered sinful.

Again, not a label I'd self-apply if I wanted to take the high road.



