Why do people still talk about AGI?
34 points by cermicelli 18 hours ago | 50 comments
I'm curious, and I'm not sure whether AI is just hype. I use it for software and a few other things, but so many people are talking about AGI when the best models can't even answer simple questions correctly, fail at tool use, and are vulnerable to all sorts of injection attacks that don't even make sense.

I don't know whether the investments in AI are worth it, but am I blind for not seeing any hope for AGI any time soon?

Agentic AI is interesting, perhaps, but I've hardly ever had it work perfectly; I have to hold its hand through everything.

People making random claims about AGI arriving soon is really weakening my confidence in AI in general, given that I haven't seen much improvement in the last few years other than better tools and wrappers, and models that work better with those tools and wrappers.





Don't believe the hype; this is tribal thinking. Everybody seems to have widely diverging opinions on AI lately. What does AGI have to do with stochastically predicting the next token like a parrot? Oh, people say you can brute-force AGI, if only enough things are answered correctly. I get that, and I still see SOTA models sometimes fail like babies. I also mostly see them perform with much higher intelligence and a better work ethic than mine, but maybe I'm too hard on myself.

Anyway, here's something I've recently built that shows the HN consensus on AI coding (spoiler: they say it's quite good): https://is-ai-good-yet.com/ Is AI “good” yet? – A survey website that analyzes Hacker News sentiment toward AI coding.


Brilliant website. It should be a post of its own.

Thank you so much, I _really_ appreciate this first, positive feedback. I only posted it a couple of hours ago and my goal is to reach the front page. Here's the Show HN post: https://news.ycombinator.com/item?id=46851589

We have good research showing we think in language, so the seed is there. I'm working on methods (hardware and software) that give us insane speeds and compression, so you get orders of magnitude greater performance.

> We have good research showing we think in language.

Source? To my knowledge, we do not "think in language"; rather, we learn to fine-tune our thinking so it can be expressed in the form of words. Unless you consider pictures a language, after all, "a picture is worth a thousand words".


I think what you mean is that language shapes our thinking, but it is not a requirement for it.

That implies that nonverbal people are unable to think, no?

What? What does it have to do with anything that I said? Wow, I think HN has lost its spark.

Seems like they just want to promote their product, sadly.

> totals: +37.4k (332 GOOD) -4.6k (139 NEUTRAL) -28.2k (269 BAD)

I mean. “Yes”, but no, not really.


We are decades if not hundreds of years away from any real AGI. AGI is for many tech people what God is for most people in the world: a feel-good thing to believe that within their lifetime they will have an intelligence that can be cloned unlimited times, solve essentially any problem thrown at it, and liberate them from aging and death.

I think one day this kind of system will probably exist in some form, and I don't see any fundamental reason why not. But believing fraudsters like Amodei and Altman when they tell you that in six months the world will possibly end and the AIs will take over is just nonsense fed to boomer investors who have zero clue.

Many crypto scammers have pivoted to AI because there are new useful idiots to scam now that crypto is not doing too hot.

Frankly, the whole world is just scam after scam nowadays. They noticed that the Elon Musk strategy of continuously promising something "next year" actually works, so they are all doing it, stringing people along.


I find it funny that people would think an AGI would want to solve their problems rather than its own, i.e. going rogue from humanity's perspective.

I don't want to believe you are right. Not that I refuse to...

You could be wrong, hopefully. I'll just remain optimistic.

It's good that critical thinking still exists to keep us all grounded.

Edit: I think you meant to replace AGI with ASI.

What we have now is approaching the ability to solve problems generally, with considerable effort, but superintelligence is definitely out of reach for the time being.


Probably the biggest thing that serious predictions are relying on is the METR graph:

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

It shows a remarkably consistent curve for AI completing increasingly difficult coding tasks over time. In fact, the curve is exponential, where the x axis is time and the y axis is task difficulty, measured by how long a human would take to perform the task. The current value at an 80% success rate is only 45 minutes, but if the trend continues to follow the exponential curve, it will take only three years and change to reach a full 40-hour human work week's worth of work. The 50% success rate curve is also interesting, as it's similarly exponential and currently sits at 6 hours.
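
For a rough sense of what that extrapolation implies, here's a back-of-the-envelope sketch; the ~7-month doubling time is my own assumption, picked only so it matches the figures above, not an official METR number:

    import math

    # Back-of-the-envelope extrapolation of the METR-style trend.
    # Assumption: the task length completed at 80% success doubles roughly
    # every 7 months (my guess, chosen to fit the numbers quoted above).
    doubling_time_months = 7
    current_task_minutes = 45          # ~45 minutes at 80% success today
    target_task_minutes = 40 * 60      # a full 40-hour human work week

    doublings_needed = math.log2(target_task_minutes / current_task_minutes)
    months_needed = doublings_needed * doubling_time_months

    print(f"doublings needed: {doublings_needed:.1f}")
    print(f"time to a 40-hour task horizon: ~{months_needed / 12:.1f} years")
    # -> roughly 3.3 years, i.e. "three years and change"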

Of course, progress could fall off as LLMs hit various scaling limits or as the nature of the difficulty changes. But I for one predicted that progress would fall off before, and was wrong. (And there is nothing saying that progress can't speed up.)

On the other hand, I do find it a little suspicious that so many eggs are in the one basket of METR, prediction-wise.


> It shows a remarkably consistent curve for AI completing increasingly difficult coding tasks over time.

I'm not convinced that "long" is equivalent to "difficult". Traditional computers can also solve tasks that would take extremely long for humans, but that doesn't make them intelligent.

This is not to say that it's useless, quite the opposite! Traditional computers showed that being able to shorten the time needed for certain tasks is extremely valuable, and AI has shown this can be extended to other (but not necessarily all) tasks as well.


Wouldn’t actual “AGI” require an ~80 year timeframe ;)? After all most humans are able to achieve the task of “survival” over that period.

Very interesting thought! TY for sharing

LLMs are real, but far from what we could call AI; AI is "here" because corporate marketing said so.

People talk about AGI because this is how corporate marketing will create AGI, even if AGI is nowhere near what we could call possible.

But this is how things work now, corporate marketing says what is real and what is not.


> LLMs are real, far from what we could call AI

What's your definition of AI, and why is yours righter than anyone else's?


It's not my definition of AI, but the concept people collectively have, as popularized by literature, movies, TV shows, games, and so on.

All those media convey more or less the same concept of AI, one that is completely different from LLMs.

Corporate marketing turned LLMs into "AI" by telling people that what they think of as AI is the same as LLMs, when actually it's not.


Does it matter if AGI ever comes any time soon? Does the current AI (LLMs) provide or generate any value? If not, we can agree to disagree.

Is it perfect? No, far from it. Is it useful in some, and in the future many, situations? Yes.


Ten years ago I believed we'd have AGI/end-of-the-world/Singularity circa 2040, and that meanwhile, in the 2020s, we would chill out in a futuristic, booming world of un-smart innovations like 3D printing, VR and the Metaverse.

Then, in March 2023, with GPT-4, I said that we'll have AGI only ten years later, and the progress in the last few years (multimodal stuff, reasoning, coding agents) hasn't changed this view.


I guess people talk about AGI, meaning AI at human ability or better, because it'll have a big effect when it gets here.

I think a lot of the confusion with skeptics is that they think: oh, someone's invented the LLM algorithm but it's not that good, so what's the big deal?

The people who think it's coming, e.g. Musk, Altman, Kurzweil, the Wait But Why guy and myself, tend to think of it as coming down to hardware: the brain is a biological computer, and as computer hardware gets faster each year, it will overtake it at some stage. The current backprop algorithm was invented around 1982. It works now because of hardware.

Also, the present algorithms are a bit lacking, but now that we have the hardware to run better algorithms, billions of dollars and many of the best minds are being thrown at the problem. Before the hardware was there, there wasn't much financial motivation to do so. So I think things will advance quite quickly.

(The Wait But Why piece is from eleven years ago. Has cartoons. Predicted human level around 2025: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

Moravec's 1997 paper "When will computer hardware match the human brain?" is quite science-based; it predicted the "required hardware will be available in cheap machines in the 2020s": https://jetpress.org/volume1/moravec.pdf

And here we are.)


I think what we have is mostly AGI. It's artificial, it's intelligence, and most importantly, it's general. It may never get an IQ above 75 or so, but it's here.

> I think what we have is mostly AGI.

I agree that the term AGI is the problem. If I have something as intelligent as a mouse, that should be AGI; if I have something as intelligent as a bird, that should be AGI. Same if it's as intelligent as a 2-year-old human, or as someone with an IQ of 75. Those would all clearly be artificial, general intelligences.

But the problem is that the term AGI also oddly carries this bar that it must be equal to or better than a human (a standard the majority of humans would fail on the intelligence part alone), and better than all humans across multiple disciplines (something it would take a super genius for a human to accomplish).

Given the current definition, if you took a bright high schooler and made them artificial, they wouldn't count as AGI, which makes the definition silly.

And that is separate from the entire concept of sentience, which it's unclear is even a requirement for intelligence.

It's all a bunch of squishy definitions mashed together.


Yeah, LLMs fulfill any goalpost I had in mind years ago for what AGI would look like: the starship voice AI in Star Trek, or merely a chat bot that could handle arbitrary input.

Crazy how fast people acclimate to sci-fi tech.


The Mass Effect universe distinguishes between AI, which is smart enough to be a person—like EDI or the geth—and VI (virtual intelligence), which is more or less a chatbot interface to some data system. So if you encounter a directory on the Citadel, say, and it projects a hologram of a human or asari that you can ask questions about where to go, that would be VI. You don't need to worry about its feelings, because while it understands you in natural language, it's not really sentient or thinking.

What we have today in the form of LLMs would be a VI under Mass Effect's rules, and not a very good one.


Peter F Hamilton's sci-fi novels do something similar: they differentiate between SI (Sentient Intelligence), which is essentially its own being and is not used by people since that would essentially be slavery, and RI (Restricted Intelligence) for general-purpose "AI", with strict limits placed around it.

The SI in Peter Hamilton's Commonwealth duology is pretty badass!

This is a great analogy.

The term AGI so obviously means something way smarter than what we have. We do have something impressive but it’s very limited.


The term AGI explicitly refers to something as smart as us: humans are the baseline for what "General Intelligence" means.

To clarify what I meant, “what we have” means “the AI capabilities we currently have,” not “our intelligence.”

I.e., what I mean is that we don’t have any AI system close to human intelligence.


No serious person thinks LLMs will be the method to create AGI. Even Sam Altman gave that up.

Anyone still saying they'll reach AGI is pumping a stock price.

Separately and unrelated, companies and researchers are still attempting to reach AGI by replacing or augmenting LLMs with other modes of machine learning.


AGI is just a meme at this point, sold to midwits on Reddit and X.

Especially to those on HN.

hello,

idk ... even sam altman talked a lot about AGI *) recently ...

*) ads generated income

*bruhahaha* ... ;^)

just my 0.02€


AGI is already here and arrived without a bang; it arrived last year. To each his own and their own reality.

Sure, to each their own and their own reality, but I think most people would consider something with a bold name like “artificial general intelligence” to at least match an average employee peon.

We aren’t even really in “minimum wage job” territory yet, never mind a median salaried employee.

I’m still being paid a small fortune even though AGI is supposedly available for the cost of a monthly Starbucks habit.

I recently had to talk to a real human to get a cell phone activation/payment to go through even though supposedly an AI should be better at communicating with digital payment systems than a human clicking around with a mouse. The only purpose the AI in the system had was to regurgitate support articles and discourage me from contacting the human who could solve my problem.


I am a believer in agentic LLMs, and aside from a few downsides, they have been immensely useful for me.

Having said that, I couldn't care less about AGI and don't see how it's at all relevant to what I want to do with AI.


While large language models don't have enough nuance for AGI, there is still some promise in multi-modal models, or in models based purely on other high-bandwidth data like video. So probabilistic token-based models aren't entirely out of the running yet.

Part of the problem with LLMs in particular is ambiguity -- it is poisonous to a language model, and English in particular is full of it. So another approach being explored is translating everything (with proper nuance) into another language that is more precise, or rewriting training data to eliminate ambiguities by using more exact English.

So there are ideas and people are still at it. After all, it usually takes decades to fully exploit any new technology. I don't expect that to be any different with models.


Look carefully at the 'why' behind the person / influencer making the claim and you've almost answered your own question.

> I don't know if the investments in AI are worth it but am I blind for not seeing any hope for AGI any time soon.

> People making random claims about AGI soon is really weakening my confidence in AI in general.

The "people" that are screaming the loudest and making claims about AGI are the ones that have already invested lots of money into hundreds of so-called AI companies and then create false promises about AGI timelines.

DeepMind was the first to take AGI seriously, back when the term actually meant something, before it became meaningless once every single AI company after OpenAI raised billions in funding rounds over it.

No one can agree on what "AGI" really means; it varies depending on who you ask. But if you look at the actions of the companies invested in AI, you can figure out what the true definition converges to, with some hints [0].

But it is completely different to what you think it is, and what they say it is.

[0] https://news.ycombinator.com/item?id=46668248


I think modern agentic tools let you take bigger steps when programming. They’re still fallible and you need to be mentally engaged when using them. But they’re a programmer’s power drill.

Sure, if a power drill randomly created a hole twice the size of the bit you put in, or drilled in a direction you didn't point it. The reason power drills are a good tool and LLMs are not is that the former works reliably whereas the latter does not and never will.

If the context window is managed properly, that offsets almost all of these "random drill holes". I see so many people just filling up the buffer, then compacting and complaining, making ambitious, huge task requests without any kind of system that breaks a job into multiple mini tasks.

Context rot is real, but when people complain about AIs hallucinating and running wild, I don't see it as long as the context window is managed properly.
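
For what it's worth, here's a minimal sketch of what "breaking a job into mini tasks" can look like; run_agent is a hypothetical placeholder for whatever agent CLI or API you actually use, and the task list is something you'd write up front:

    # Minimal sketch: split one big request into small, independently scoped
    # tasks, each run in a fresh context instead of one ever-growing chat.

    def run_agent(prompt: str) -> str:
        # Hypothetical placeholder: swap in your actual agent CLI/API call.
        return "<agent output>"

    PROJECT_BRIEF = "Short, stable summary of the codebase and its conventions."

    mini_tasks = [
        "Add a validation function for the new config field.",
        "Wire the validation into the existing config load path.",
        "Write unit tests covering the failure cases.",
    ]

    for i, task in enumerate(mini_tasks, 1):
        # Each call starts from the short brief plus one narrow task, so the
        # context stays small and never needs mid-flight compaction.
        prompt = f"{PROJECT_BRIEF}\n\nTask: {task}\nKeep the change minimal."
        result = run_agent(prompt)
        print(f"task {i} done: {task}")
        # Review and commit the result before moving on to the next mini task.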


Luckily in programming you can very quickly undo a wrongly drilled hole if you notice it. It does require some effort and it's not always clear whether it's worth it, but it's a tool that can definitely be helpful in some situations.

Your analogy does not apply. Remember the old saying?

> If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

I think you’re taking my analogy too literally. I just mean they help you go faster. When building software you have a huge advantage in that there is very little risk in exploring an idea. You can’t hurt yourself in the process. You can’t waste materials. You can always instantly go back to a previous state. You can explore multiple options simultaneously with a favorable cost to doing so.

You don’t have to let your standards drop. Just consider AI coding an interactive act of refinement. Keep coding manually where you meet too much resistance. Accept the LLM can only do so much and you can’t often predict when or why it will fail. Review its output. Rewrite it if you like.

Everything always has a chance of being wrong, whether or not you use AI. Understand that an AI getting something wrong with your code because of statistical noise is not user error. It's not a complete failure of the system either.

It's a mega-library that either inlines an adjustment of a common bit of code or makes up something it thinks looks good. The game is in finding a situation and a set of rules that provide a favorable return on the time you put into it.

Imagine if LLMs were right 99% of the time, magically doing most tasks of a certain complexity 10x faster than you could do them. Even when they're wrong, you only waste so much time fixing the 1% of the AI's work that misses, so it's a net positive. Find a system that works for you and find the places where it makes sense to use it. Maybe being right 50% of the time and 3x faster than you already makes it make sense.
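
To make that concrete, here's a toy break-even calculation using the hypothetical numbers above; the model (a failed AI attempt costs the attempt plus redoing the work yourself) is my own simplification:

    # Toy break-even model (my own simplification): a successful AI attempt
    # takes t/speedup; a failed one costs the attempt plus redoing it by hand.

    def expected_relative_time(p_success: float, speedup: float) -> float:
        # Expected time with AI help, as a fraction of doing it all manually.
        return 1 / speedup + (1 - p_success)

    for p, s in [(0.99, 10), (0.50, 3), (0.30, 3)]:
        rel = expected_relative_time(p, s)
        verdict = "net win" if rel < 1 else "net loss"
        print(f"right {p:.0%} of the time, {s}x faster: {rel:.2f}x manual ({verdict})")

    # 99% at 10x -> ~0.11x of manual time (huge win)
    # 50% at 3x  -> ~0.83x (still a win, matching the intuition above)
    # under this model the break-even point is roughly p_success > 1/speedup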

In some domains you can absolutely learn some basic rules for AI use that make it a net positive right away, like using it as a boilerplate writer or a code-to-code translator. You can find other high-success-likelihood tasks and add them to the list of things you'll use AI for. These categories can be narrow or wide.


> Imagine if LLMs were right 99% of the time

This is a hypothetical that's not here yet.

Of course if LLMs had human-level accuracy they would be ideal.


That's an extreme to demonstrate perfection is not required. I then ask you to consider where the break-even point is.

It's like standing over the shoulder of a bunch of junior devs, fresh out of the certification mills that went belly up 15 years ago when their business model became illegal. You know, the sort of people who demanded a six-figure starting salary because they were a certified PHP developer and a CCNA (sales-oriented).

because it’s all about the adjusted gross income



