I sensed anxiety and frustration at NeurIPS 24 (kyunghyuncho.me)
181 points by wavelander 21 hours ago | 116 comments





Trained as a physicist, I became acutely aware of what mismatches in the academic job market look like and, in particular, how smoking-hot fields can become burned-out fields in about the time it takes to complete a PhD.

(Some physicists told me about how quickly Ken Wilson's application of RG to phase transitions went from the next big thing to old hat, for instance.)

When I can bear to read editorials in CACM, I see the CS profession has long been bothered by whipsawing demand for undergraduate CS degrees. I've never heard about serious employment problems for CS PhDs, and maybe I never will, because they have a path to industry that saves face better than the paths for physics.

Maybe we will hear about a bust this time. As a cog in the social sciences department, I used to have a view of a baseball diamond out my office window but now there is a construction site for a new building to house the computer science, information science and "statistics and data science" departments which are bulging in undergraduate enrollment.

Will there finally be a bust?


AI has always had the concept of a “winter” or bust. [1] As in computer science generally, people shift their concentration to whatever pays the most, and that shift can happen mid-PhD. The thing about the AI winter this time is that ML, deep learning, and LLMs advanced so quickly that a large bust in AI didn’t really occur. Also, I have heard about people getting the AI jobs and then complaining it’s all about cleaning data and nothing about applying new AI techniques to solve a big challenging problem. For CS and IT in general, this could be LLMs doing some replacement of jobs, or it could just be a business cycle.

[1] https://en.wikipedia.org/wiki/AI_winter?wprov=sfti1#


If you want to "solve a big challenging problem", somebody has to do the data cleaning. As Edison put it: genius is 1% inspiration and 99% perspiration.

  > Deep learning and LLMs advanced so quickly that a large bust in AI didn’t really occur.

I think the problem is we railroad too much. It’s LLMs/large models or bust. It can be hard to even publish if you aren’t using one (e.g. building on a pretrained model). The problem is putting all our eggs in one basket when we should diversify. The big tech companies hired a lot of people to freely research, but research just narrowed. We ignored the limitations that many discussed for a long time, and we could have solved them by now if we had spent even a small portion of the time and money we pour into the railroaded topics.

What I’ve seen is research becoming very product focused. Fine for industry research, but we also have to have the more academic research. Academia is supposed to do the low TRLs (1-4) while industry does the higher ones (say 4-6). But when it’s all mid or high, you’ve got nothing in the pipeline. Even if the railroad will get us there, there are always other paths and ways to increase efficiency.

This is a good time to have this conversation, as we haven’t finished CVPR reviews. So if you’re a reviewer, an AC, or a meta-reviewer, remember this when evaluating. Evaluate beyond benchmarks and remember the context of the compute power people have. If you require more than an A100 node, you only allow industry in. Most universities don’t even have a full node without industry support.

https://www.nasa.gov/directorates/somd/space-communications-...


Other machine learning techniques are still being researched. Deep learning and LLMs are just more popular in general due to being in vogue, and the public can understand GenAI and what it does.

Somebody from Nvidia donated some money to our university to buy a few H100s, but it’s been tied up in procurement hell between the CS/DS department and IT for almost a year now.

I had gotten a contract with an unspecified California energy lab because they wanted to continue my summer internship research, and after two terms it still wasn’t solidified… so we moved on. People say government is slow, but it wasn’t the government in my case… All these administrators drive up the cost, but I’ve failed to see the value add.

east bay or peninsula?

The one with the bigger computer ;)

I think you two are talking about different things. The AI winter as I understand it refers to slowing technological advancement in “expert systems” for example. But technological development is moving faster than ever.

This is different. It’s a weird combination of huge amounts of capital chasing a very specific idea of next token prediction, and a slowdown in SWE hiring that may be related. It’s the first time “AI” as an industry has eaten itself.


The AI winter refers to the collapse in government and industry funding. It came as a result of a lot of hype like expert systems not panning out but it was about the money that dried up. This time “it’s different” but VCs are just as fickle as grant reviewers and executives - that funding could dry up at any moment if one of the big AI companies has a down round or goes bankrupt altogether. I don’t think a second AI winter is likely because of how useful LLMs have proven this time around, but we’re probably going to have a correction sooner or later.

I don’t think the hiring slowdown and AI are related. Some companies are using rhetoric about AI to save face but the collapse in the job market was due to an end to ZIRP.


The down rounds have started for those companies that either haven’t found product-market fit or aren’t in contention on leaderboards. Especially those companies that are an OpenAI wrapper and can be easily replaced by a new, more broadly applicable foundation model or a new modality model.

https://www.businessinsider.com/ai-down-rounds-rise-valuatio...


From what I read, AI "winters" have not just had less progress but also less research activity in general: during them, AI was seen as a dead end that didn't provide value, which translates to less funding, which translates to fewer people doing research on it (or introducing AI into the research they wanted to do already).

Universities are definitely trying to use data science as a draw for students in the face of declining enrollment. I know one R1 employee who is taking it upon themselves to enable the training of tens of thousands of students over the years. People are more excited about AI than coding.

Physics degrees are great for a successful career outside of physics.

If you can do an undergrad in physics and also hold a conversation, your path is open to getting to the top 0.1% in terms of problem-solving ability.

If someone believed they would earn 2-5x more than in academia, with full freedom to work on whatever interests them and no need to deliver value to the employer... well, let's say "ok", we have all been young and naive. But if their advisors have not adjusted their expectations, the advisors are at fault, maybe even fraudulent.

Even in elite research groups at the most prestigious companies, you are evaluated on product and company impact, which has nothing to do with how groundbreaking your research is, how many awards it gets, or how many people cite it. I had colleagues at Google Research who were bitter that I was getting promoted (doing research addressing product needs, and later publishing it, "systems" papers that are frowned upon by "true" researchers), while with their highly cited theoretical papers they would get a "meets expectations" perf eval and never a promotion.


Yet your Google Research colleagues still earned way more than in academia, even without the promo.

Plus, there were quite a few places where a good publication stream did earn a promotion, without any company/business impact. FAIR, Google Brain, DM. Just not Google Research.

DeepMind didn't have any product impact for God knows how many years, but I bet they did have promos happening:)


You don't understand the Silicon Valley grind mindset :) I personally agree with you - I am happy working on interesting stuff, getting a good salary, and don't need a promo. Most times I switched jobs it was a temporary lowering of my total comp and often the level. But most Googlers are obsessed with levels/promotion, talk about it, and the frustration is real. They are hyper ambitious and see level as their validation.

And if you join as a fresh PhD grad (RS or SWE), the L4 salary is OK, but not amazing compared to the cost of living there. From L6 on it starts to be really, really good.


I assure you, before the LLM race, those research shops (DM, FAIR) had many directors that didn't contribute to any product whatsoever.

> I am happy working on interesting stuff, getting a good salary, and don't need a promo

People who don't contribute to the bottom line are the first to get a PIP or to be laid off. Effectively the better performers are subsidizing their salary, until the company sooner or later decides to cut dead wood.


Someone correct me if this is wrong, but wasn't that pretty much the premise of the Institute for Advanced Study? Minus the very high-paying salaries. Just total intellectual freedom, with zero other commitments and distractions.

I know Feynman was somewhat critical of the IAS, and stated that the lack of accountability and commitment could set researchers up to just follow their dreams forever, and eventually end up with some writer's block that could take years to resolve.


> Minus very high-paying salaries.

The very high salaries are central to the situation.

If you remove high salary then you have a lot more freedom. The tradeoff is the entire point of discussion.


Come be a professor in the Netherlands! You can even run a company on the side. Freedom is real. You don’t get paid well for it.

> you are evaluated on product and company Impact, which has nothing to do with how groundbreaking your research is,

I wonder... There are some academics who are really big names in their fields, who publish like crazy at some FAANG. I assume the company benefits from just having its name on their papers at top conferences.


> with full freedom to work on whatever interests them, and no need to deliver value to the employer

You know that in academia you constantly have to beg for money by trying to convince government agencies that you’re bringing them value right?


> full freedom to work on whatever interests them, and no need to deliver > value to the employer...

That was an exaggeration. No employee has full freedom, and I am sure it was expected that you do something which, within some period of time, even if not immediately, has prospects for productization; or that when something becomes productizable, you would divert some of your efforts towards it.


It wasn't an exaggeration! :) The shock of many of my colleagues (often not even junior... sometimes professors who decided to join industry) was real: "wait, I need to talk to product teams and ask them about their needs, requirements, trade-offs, and performance budgets, and I can't just show them my 'amazing' new toy experiment I wrote a paper about that costs 1000x their whole budget and works 50% of the time, and they won't jump to putting it into production?" :) They don't want to think about products and talk to product teams (but get evaluated based on research that gets into products and makes a difference there); they just want to do their own ivory-tower research.

One of many reasons why Google invented Transformers and many components of GPT pre-training, but ChatGPT caught them "by surprise" many years later.


Well there are a few. The Distinguished Scientists at Microsoft Research probably get to work on whatever interests them. But that is a completely different situation from a new Ph.D. joining a typical private company.

I believe the above post was highlighting that as a misconception young people may have, not saying it is the case.

As one of those lucky few at Google Brain who switched into ML early enough to catch the wave, I fielded my fair share of questions from academic friends about how they could get one of those sweet gigs paying $$$ where they could do whatever they wanted. I wrote these two essays basically to answer this class of question.

Why did Brain Exist? https://www.moderndescartes.com/essays/why_brain/
Who pays you? And why? https://www.moderndescartes.com/essays/who_pays_you/

Free lunch briefly existed for a small lucky few in 2017-2021, but today there is definitely no more free lunch.


I appreciated the humility in this paragraph in particular

> once the first generation of lucky PhD’s (including me!) who were there not out of career prospects but mostly out of luck (or unluck), we started to have a series of much more brilliant and purpose-driven PhD’s working on deep learning. because these people were extremely motivated and were selected not by luck but by their merits and zeal, they started to make a much faster and more visible progress. soon afterward, this progress started to show up as actual products.

I have to say, though, that AI and robotics are going through similar transitions to those highlighted in TFA. Robotics has been basically defined by self-driving cars for a long time, and we're starting to see the closure of some very big programs. Already the exorbitant salaries are much lower based on what I've seen, and demand is flatlining. My hope is that the senior-level engineers with good PhD backgrounds move out into the broader field and bring their experience and research zeal with them as a force multiplier. I expect the diaspora of talent to reinvigorate industry innovation in robotics.

So it will be with LLM-focused researchers in industry in the next phase after we pass peak hype. But the things those battle-scarred researchers will do for the adjacent fields that were not hype-cycled to death will probably be amazing.

Unless they succeed in replacing themselves with general AI. Then all bets are off.


TFA?

Slang for "the fine article". Comes from RTFA, "read the fine article". Or some other adjective of your preference that starts with F.

I always thought it was "The Featured Article" to be polite about it.

(Where 99% of usage refers to fuck, not fine.)

But despite this, it's more often than not comedic rather than unkind.

Dunno, but RTFA was always presented to me in the less kind version, so it's pretty odd to see someone use it with a positive spin.

Totally different typical usage:

RTFA == "you should have actually read the article instead of wasting everyone's time"

TFA == "referring specifically to the [original, in this context] article"


I know this comment doesn't add much to the discussion, but why, god, why does the author feel the need to write completely in lowercase? It seems completely unnecessary. Impress with your words, not with weird formatting.

The lowercase formatting reminded me of my uncle who used to write exclusively in uppercase. His reasoning was that lower case letters were superfluous. Also something about how uppercase letters were invented to distinguish between royals and commoners in writing (I'm not sure how true that was).

Funnily enough, he has started publishing academic papers in which he uses proper casing.


Completely agree, it made it hard to continue reading, and to take the author seriously. I wonder if they write their papers the same way.

I felt the same. Especially from someone who claims to have

    and, some of these PhD students and postdocs are my own under my supervision
How can you supervise students and write this way? This doesn't make any sense to me.

This also doesn't add much, but I was waiting for this comment. It makes me laugh that non-capitalization (or other non-conforming structure) is the bane of HN's existence.

The effort it takes most people to write, publish, and share their ideas is already steep. I'd much rather have this than have them not share it at all over trivialities such as capitalization.

I agree, it's a very annoying trend recently. Just use proper capitalization, stop calling attention to your writing style.

it's a stylistic choice, youthful & casual,,

Oh, I hadn't even realized that, because I let ElevenLabs read it for me!

People are really not nice today! :( I have a problem with my eyesight, so I try to avoid reading as much as I can and use an app to read most things aloud. I'm not trying to endorse it or anything.

Two things: 1. Your comment was unnecessary; and 2. It came across as an ad, especially at a time when AI companies are spamming all spaces left and right.

Downvotes are often capricious, including on HN.

At the end of the day it's just numbers in a database somewhere.

It's actually kind of surprising how seriously people take it. I can easily imagine someone saying to the inventor of downvotes: "Yeah but what happens if people just... ignore the number? Why would anyone actually give a crap?"


Yes, normally, I just ignore it, too. But today I feel a bit different. Sorry for the sentimental note.

Big conferences suck in just about every area. Back when I was going to the Joint Mathematics Meetings (a few thousand math people), the dominant smell was adrenaline, as departments tried to streamline their hiring process into the conference setting... PhDs seeking postdoc positions were therefore trying to talk up their work publicly while rushing around to interviews on the side. It sucks as a conference, though: you get lots of heat and little light.

I get the same feeling going to big ML conferences now. The incentives are all about job prospects and hype, rather than doing cool stuff.

The sweet spot is the 200-person conference, imo. You have a much better focus on the science and can literally meet everyone if you put your mind to it. And the spaces tend to be more about the community and the research directions, rather than job prospects.


What does this comment have to do with the article?

The article is about NeurIPS, a very large conference. It’s in the title.

Yes, and the comment doesn’t have anything to do with the contents of the article, which only mentions NeurIPS tangentially because the author was there recently.

As someone who finished a PhD four years ago and joined an industry research lab, I can relate to this a lot; I think it's very much spot on.

> a lot of these PhD’s hired back then were therefore asked to and free to do research; that is, they chose what they want to work on and they publish what they want to publish. it was just like an academic research position however with 2-5x better compensation as well as external visibility and without teaching duties,

exactly!

> such process standardization is however antithetical to scientific research. we do not need a constant and frequent stream of creative and disruptive innovations but incremental and stable improvements based on standardized processes.

A lot of the early-wave AI folks struggle with this. They want to keep pushing wild research ideas, but the industry needs slow, incremental stuff focused on serving.


"There are much fewer opportunities if they do not want to work on productizing large-scale language models, and these positions are disappearing quickly."

Yes. In 1900, the thing to be was an expert electrician. I had a friend with a PhD in bio who ended up managing a coffee shop because she'd picked the wrong branch of bio.

LLMs may be able to help with productizing LLMs, further reducing the need for people in that area.


As a consultant data scientist who attended NeurIPS this year, I was surprised at how few of the talks and posters covered innovations practically applicable outside of niche domains. Those that did (DL for tabular data, joint LLM training with RAG, time-series foundation models) were swamped with attendees. It's not that there aren't jobs for innovators; it's just that there needs to be some amount of applicability of your research outside the theory of learning.

What was the joint LLM training with RAG work? Sounds interesting.

Isn't this just the culture of "true" AI researchers? Until very recently (2019?) it was almost exclusively an academic pursuit.

The physical conference itself is now an excuse for companies to throw lavish parties, but NeurIPS as a publication venue is still mainly for academics.

For research that translates directly to an industry setting, look someplace like KDD. Venues like that are where data science/ML teams who do some research but are mainly product focused tend to publish.


This story plays out so often that there should be a law about it: supply lags demand, prices soar, everybody hears about it, everybody pours in, supply surges, demand normalizes, supply overshoots demand, prices collapse. It already happened to software engineering and data science, and it keeps happening to hardware production every few years. Sounds like AI research is headed that way too.
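
You can see the mechanism in a toy cobweb-style simulation (all parameters invented for illustration, nothing empirical): producers set next period's supply from this period's price, so supply always arrives one cycle late and swings past the equilibrium.

    # Toy cobweb model: supply chases last period's price with a one-period
    # lag, so it repeatedly overshoots. All numbers are made up.
    demand_intercept, demand_slope = 100.0, 1.0  # demand curve: Q_d = 100 - P
    supply_slope = 0.8                           # supply curve: Q_s = 0.8 * P_last

    price = 20.0                                 # start well below equilibrium
    for period in range(8):
        supply = supply_slope * price            # producers react to last price
        price = (demand_intercept - supply) / demand_slope  # market clears
        print(f"period {period}: supply={supply:6.1f}  price={price:6.1f}")

    # Output oscillates: cheap -> everyone piles in -> glut -> prices collapse
    # -> exits -> shortage -> prices soar, damping toward equilibrium (~55.6).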


"The boom and bust cycle is a process of economic expansion and contraction that occurs repeatedly. The boom and bust cycle is a key characteristic of capitalist economies and is sometimes synonymous with the business cycle." [1] Somewhat similar is the bullwhip effect[2], although there isnt a long supply chain for labor.

1. https://www.investopedia.com/terms/b/boom-and-bust-cycle.asp
2. https://www.investopedia.com/bullwhip-effect-definition-5499...


I think the author is twisting things a bit to suit their own hiring needs.

Researchers at big tech companies always struggled to get promoted without showing some sort of impact.

What changed is that before, they could just publish papers (with zero reproducibility due to "AI ethics") and still coast by on a decent salary.

ChatGPT ended the whole “the paper is the product” phenomenon, and the end of ZIRP ended the smooth run for slackers in big tech overall.

AI PhDs are still in an amazing position compared to any other new-grad software engineer.


"betrayed", "promised" - these are quite telling about the sense of entitlement these people have (rightly or wrongly).

The world doesn't owe you anything, even if you did a PhD in AI.

Sorry if you picked the wrong thing, but it's the same for anything at degree level.


Is there a point to saying something like this other than to be explicitly uncompassionate towards those affected?

I could simply respond with "sorry to hear you upset yourself by considering these people entitled", and it would have about the same merit.

It's like telling somebody, after a close relative or someone else dear to them died, that what, did they expect that person would live forever? No, do you?

Do you genuinely think their feelings of betrayal stem from an unreasonable notion of the world? Have you at all considered that the expectations they harbored were not 100% of their own creation?


Sorry, but these people are not victims. I went through a tech PhD; it was well known how fast the wind changes and the trendy topic falls by the wayside. Big data and crowdsourcing pre-AI boom, then ML, then ethics of AI (around 2020-21), and now we are at AI agents, which are inherently more product-oriented. Beyond this, we had COVID and a now-brutal tech market, despite the US economy seemingly way up and the AI craze continuing. If you went into a CS PhD thinking nothing changes and people will beg you to take cushy salaried positions with total freedom, then you simply didn't do your research.

I suspect many commentators would say the same thing to the English majors out there who were told that passion and a college degree would lead to economic mobility and not a load of debt. Weren’t they also harboring unreasonable views that were not 100% of their own creation?

I'm unable to read minds, so if in your head that self-serving scenario played out such that they were, then they were. Does that justify being uncompassionate towards them by calling them entitled? Is entitled really the best fit for what they were being, rather than say, just plain unreasonable or misguided? Can you read minds that you know they were thinking the world owes them and that's how they developed a passion and completed a college degree?

I’m glad you are willing to treat both groups the same. I suspect many on this site wouldn’t and would take pains to explain how they’re entirely different :)

Maybe I'm just not smart/eloquent enough to come up with a response like GP's, but often I'll have arguments in my head that look like:

- make a point A

- oh, counterexample with point B

- ehh that's a good point logically...

but I'll just go with point A anyways, especially when point A is something generally optimistic, and point B is cynical.

I don't like going into social arguments for this reason - it's very easy (and often logically correct) to create an ad-absurdum counter-argument to a lot of our general social beliefs.

But to be a functioning human being, sometimes you just have to take the optimistic choice regardless. I know that certainly when I was constantly listening to B, I was depressed out of my mind; would rather be "delusional and stubborn" on some things than depressed.

Twice two might not be five, but it keeps your sanity.


In their defense nobody expected America to jettison education. You can’t plan your career around idiocy like that, it just happens to you.

> Is there a point to saying something like this other than to be explicitly uncompassionate towards those affected?

those "affected" have the same little compassion towards many millions of others who are not lucky to work on AI in top school and visiting top AI conference. That's what makes them entitled.


How would you know that everyone who felt "betrayed" also "shows the same lack of compassion towards the many millions of others who are not lucky enough to work on AI at a top school and attend a top AI conference"?

This is just blatantly painting high profile individuals with a negative brush because it personally appeals to your fantasies.


> everyone who felt "betrayed"

Feeling betrayed is my test criterion, since many others are "betrayed" much more; those individuals focused on their feelings and not on systemic issues in general, which makes them "entitled".

> This is just blatantly painting high profile individuals with a negative brush because it personally appeals to your fantasies.

you have to check your language if you want to continue productive discussion.


I disagree with your test criterion. It doesn't adequately support that entitlement is present in or applies to these people. You can be swayed by your feelings for a myriad of things; on its own I do not consider that sufficient, even if they come from a privileged background or gained a privileged status by virtue of studying where they did.

> you have to check your language if you want to continue productive discussion.

I disagree that my use of language was unreasonable. And to clarify, I do not wish to continue this conversation, productively or otherwise.


The mistake in their argument was appealing to structural issues while essentializing that stratum as "entitled", an individualistic attribution. It betrays its own resentment while preaching structural causes. Anyway, it's the holidays and we could all use a little compassion and nonjudgmental empathy.

> I disagree that my use of language was unreasonable. And to clarify, I do not wish to continue this conversation, productively or otherwise.

bye then


I think PhD programs and advisors did make some implicit or explicit promises to recruit students to work for them, very hard, for years, at very low salaries.

The PhD is not like a master's program; it's an apprenticeship relationship where you essentially work for a senior professor, and your work directly financially benefits the department as well.

So there is a deal here, and if one end of it is collapsing, the students in some departments might very well feel betrayed -- not by society but by their advisors and programs.


There's no reason to do a Ph.D. if you don't aspire to an academic position. If you want to be a practitioner, do a Masters.

It's really not that simple. But it's fair to say that while a master's can often be very straightforward training to get you ready for a particular job, you really shouldn't go into a PhD program without a good idea of a) why and b) where you are aiming for afterward. For STEM I would usually add c) someone else is paying for it.

This is true overall, but for the last decade there have been many research-oriented positions at large tech companies for which the PhD degree is a formal job requirement. Others don't make it a formal requirement, but demand research experience, which is most easily gotten as a PhD student.

The competitive investment world has acquired a taste for ever-increasing requirements... it fits with the scarcity model they thrive in.

It's kinda funny: the general dread regarding AI in tech circles is mostly due to fears that all jobs will be automated away, while the concerns expressed by the researchers, according to the author, are ironically a lot narrower - that they haven't learned the currently cool topic.

Also, not quite. I believe the author is mostly talking about how undergraduates and master's students can already do what PhD students set out to do a few years ago. So they might complete their PhD, but it will be them vs. someone cheaper and younger with the same hot skills and knowledge. The extra PhD oomph isn't providing more value.

I always thought PhDs often get long in the tooth because research takes a long time and things can change drastically over that period, whether because the research doesn't go the direction they expected or because the world around them changed. If I were to go down the PhD path, I'd accept it might end up taking quite a long time, for exactly the reasons you mention in your comment.

In my lab, we try to focus on the science and less on applied deep learning. Of course it has its uses, and we do develop "fashionable" deep learning with some students, but I think the value of a PhD remains in the path taken.

It's all that you learn doing it: not just knowledge, but the scientific method, social skills, learning to communicate science.

A PhD is a personal path, an experience that changes life profoundly. I hope those students will be able to handle any kind of future issue, not just a very niche applied aspect of deep learning.


This is probably anecdotal evidence that the field is maturing rather rapidly.

PhD hires will continue, of course; it's the volume and selection criteria that get adjusted. When a technical domain gets standardized, it's natural that the ratio of PhDs to Masters on a team falls.

Mostly, teams with a mandate to innovate (or at least signaling that intention) will have more of the former. A successful PhD will always be a strong filter for certain types of abilities and dispositions. Just don't expect the stampedes to last forever.


I'll take "what are capital letters" for $500 Alex

For real. It's very distracting to read this with no uppercase letters at the beginnings of sentences. Strangely, many acronyms have properly placed uppercase letters. I don't understand the decision to omit them for single-letter words or at the beginning of a sentence.

Guessing this is a generational thing. I wasn't distracted by it because it's the same style that many of my friends text with.

This is the first time I've seen it in an actual article, however.


Relevant quote:

> this post will be more or less a stream of thoughts rather than a well-structured piece


Why bother with punctuation then? It feels deliberate.

I'm reading the author's PhD thesis at the moment, which he seems to have successfully applied to some modelling projects.

I'd imagine the most significant aspect of this betrayal of PhD students, as far as it exists, is to polarise the domain along research lines that never very clearly mapped to the perennial objectives of modelling or mere curve-fitting. Had a thread of 'modelling science' been retained then it would always be useful.

Businesses may, at the moment, appear to have been confused about the value of curve-fitting; the generic skills of algorithm design, scientific modelling, and the like will survive the next wave of business scam.


Wrong capitalization makes me feel really anxious and frustrated.

Me too. At least they’re happy to capitalize “PhD”, oh wait, that somehow makes it worse.

> some of them probably feel betrayed, as the gap between what they were promised earlier and what they see now is growing rapidly.

When you enter a PhD program, you are promised nothing else but hard work (hopefully within a great research group). I don’t know what the author is referring to here.


This is probably the best example of what it feels like to deal with the cognitive dissonance of elite overproduction in AI.

https://en.m.wikipedia.org/wiki/Elite_overproduction

Since 2012 (AlexNet winning ImageNet with DL), “AI” has been dominated by corporations for one simple and obvious reason that Sutton has pointed out over and over: the group with the best data wins.

That wasn’t always true. It used to be the case that the government had better data, or academia had better data, but it’s no longer even close, to the point where both of those groups are nearly irrelevant in either applied AI or AI research.

I’ve been applying “AI” at every possible chance since 1998 (Unreal Engine + A* + kNN …), and the field has foundationally changed three or four times since.

When I started, people looked at those of us who said out loud that we wanted to work on AGI as totally insane.

Bengio spoke at the AGI 2014 conference, and at a lunch between myself, Ben G, Joscha Bach, and Bengio, we all agreed DL was going to be central to AGI - but it was unclear who would own the data. Someone argued that AGI would most likely come out of an open source project, and my argument was that there are no structured incentives to allow for the type of organization necessary at the open source level for that.

My position was that DL isn’t sufficient - we need a forward-looking RL streaming reward system that is baked into the infrastructure between planning and actuation. I still think that’s true, and it's why AGI (robots doing things instead of people, at greater-than-human level) is probably only a decade away.
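
To make that concrete, here is a minimal sketch of what I mean by a reward stream wired between planning and actuation. Everything in it (the toy Env, the bandit-style Planner) is a hypothetical illustration, not anyone's real system:

    import random

    class Env:
        """Toy 1-D world: the agent starts at 0 and wants to reach +10."""
        def __init__(self):
            self.pos = 0
        def step(self, action):                      # action is +1 or -1
            self.pos += action
            reward = 1.0 if action == +1 else -1.0   # dense, streaming reward
            return self.pos, reward, self.pos >= 10

    class Planner:
        """Bandit-style planner whose action values are updated online."""
        def __init__(self):
            self.value = {+1: 0.0, -1: 0.0}
        def plan(self):
            if random.random() < 0.1:                # occasional exploration
                return random.choice([+1, -1])
            return max(self.value, key=self.value.get)
        def update(self, action, reward):
            # incremental value estimate fed by the reward stream
            self.value[action] += 0.2 * (reward - self.value[action])

    env, planner = Env(), Planner()
    done = False
    while not done:
        action = planner.plan()                      # planning
        _, reward, done = env.step(action)           # actuation
        planner.update(action, reward)               # reward between the two

The point isn't the toy learning rule; it's that the reward signal flows continuously between the planner and the actuator, instead of being bolted on after training.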

It’s pretty wild that stating out loud that your goal is to build superhuman artificial intelligence is still seen as some kind of crazy idea - even though it’s so obviously the trajectory we’re on.

As much as I dislike OpenAI and the rest of the corporate AI companies, I respect the fact that they’ve been very vocal about trying to build general intelligence and superhuman intelligence.


> My position was that DL isn’t sufficient - we need a forward looking RL streaming reward system [...]

I agree, but don't you think some degree of symbolism or abstraction building is necessary for AGI? Current systems seem to be too fragile and data-intensive.


I guess I'm a little bit optimistic, but looking at the title I was thinking that the AI models had reached the point where they were the ones expressing anxiety and frustration :)

Never mind . . .


This is the best time in history to research AI and get paid for it.

I respect your perspective and acknowledge I might be missing key context since I didn’t attend the conference. That said, one interpretation is that you’re grappling with the rapid acceleration of innovation—a pace you’ve helped shape. Challenges that once seemed distant now feel imminent, creating a mix of excitement and unease, like the moment a rollercoaster drop starts to terrify.


Great read! Though it's a bit off-putting how paragraphs start with a lowercase letter.

Not just the paragraphs, every sentence.

There's an underlying sentiment throughout the article that there isn't more significant research to do to further improve AI and generate more revenue. This is a large gap, and myopic.

For example, the amount of compute needed to reach the state of the art on a benchmark is only obtainable by big labs. How can we improve the learning efficiency of deep learning so that one can obtain similar performance with 100x less compute?


I could've sworn I posted this article as well.

Now imagine how 80% of humanities grads feel when they didn't get the job they thought they were promised (with 10x the debt a CS PhD will finish with).

Absolutely no chance am I reading an article that doesn’t even give the respect of capitalising sentences. Honestly despicable and I feel personally offended

It's an interesting piece if you can muster the strength to make it through the irksome writing style. Either way, I don't think your comment adds much. Why not suffer in silence?

(Shift+)Tab control; also make it work with the mouse like the others.

The field is now owned by Big Capital.

Democratization of AI was a joke.


The author is certainly doing his part fighting against capital (letters)

How about governments pass a law requiring all AI research and engineering efforts to provide all the data they use, with all technical details open access and nothing AI-related patentable? Would that solve the democratisation issue?

There's next to no chance of that happening though.

It'd be like asking whether a law turning all of the companies in the S&P 500 into co-ops owned equally by all Americans would solve the democratization issue.


No, because the vast majority of the data used was not theirs to provide. If people, and especially organizations, found their data in that release, the courts would make anyone using the model pay the appropriate licensing fees. So everyone now owns the model, and you have to send checks to god knows how many people if you use it.

Basically, what you’re suggesting is that we admit to stealing data at scale. It would kill model production, because no one could be sure of the provenance of their training data. In all honesty, we can’t even be sure right now.



