
It's also likely we're just seeing the limits of what the tech can do. It's not actually "intelligent"; it's effectively a cool trick, and at some point the gap between what the trick can do and what a "perfect" AI could do becomes obvious. It's really not that different from Eliza being cool for a few minutes before being obviously found wanting. It just takes longer for the shine to wear off as the technology gets better. Ditto for self-driving cars: a steep climb to a 90% solution, then an impossible chasm to cross to something that actually handles the edge cases.



I refuse to believe that anybody who compares LLMs to Eliza has interacted with the technology for more than 10 minutes.


I think the poster's point was that initial reactions to LLMs are similar to initial reactions to Eliza. Everyone was gobsmacked for a while, but then once the shine wore off and its limitations were exposed, everyone got bored and moved on quickly.


It’s also not trained the way humans are. We’re trained interactively: from an early age we say things in order to express ourselves and to try to get what we want. Machine learning models are generally trained passively.


The chatbots using the model over at character.ai give you a sort of genetic-algorithm-like interface: you can choose from alternative responses and vote on them as the conversation progresses, and you can make your own bots, train them, and get some very interesting results. I'm personally extremely optimistic about the future of this technology, and I think we've collectively only begun to scratch the surface.
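For anyone curious what that pick-and-vote loop looks like mechanically, here is a minimal sketch in Python. generate_reply is a hypothetical stand-in for whatever model backs the bot; nothing here reflects character.ai's actual internals, it just illustrates collecting preference data from alternative responses.

    import random

    def generate_reply(history, temperature=0.9):
        # placeholder: sample one candidate continuation from a model
        return f"candidate reply ({random.random():.3f})"

    def collect_preference(history, n_candidates=3):
        # show the user several alternative responses
        candidates = [generate_reply(history) for _ in range(n_candidates)]
        for i, c in enumerate(candidates):
            print(f"[{i}] {c}")
        choice = int(input("pick a reply: "))
        chosen = candidates[choice]
        rejected = [c for i, c in enumerate(candidates) if i != choice]
        # (chosen, rejected) pairs are the raw material for preference
        # tuning, or for deciding which bot variants to keep -- the
        # "genetic" flavour mentioned above.
        return chosen, rejected

The interesting part isn't the code, it's that every conversation quietly doubles as labeled preference data.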


Ah yes, the AI Effect in action.

https://en.wikipedia.org/wiki/AI_effect



