The managers may believe that's what they're asking their developers to do, but doesn't this whole charade expose the fact that this technology doesn't come even close to its claimed capabilities?

I see it as wishful thinking in the extreme to suppose that probabilistic mashing together of plagiarized jigsaw pieces of code could somehow approach human intelligence and reasoning—and yet, the parlour trick is convincing enough that this has escalated into a mass delusion.

Philosophy becomes key here. True human intelligence is not well defined, and possibly cannot be divorced from concepts like “consciousness” or “agency”, at which point claiming that the thing is “human-like” opens the operator to accusations of running a torture chamber, or of being a slave owner of entities that can feel.

Agreed, though long before such qualms come to the fore I'd like to see even a shred of evidence that this entire approach to AI is at all capable of formulating mental models of the kind that have enabled humans to produce all the wonderful mathematics, physics, chemistry, biology, philosophy, poetry, literature, art, etc. of the past several centuries.

I see the supposed reasoning tokens this latest crop of models produce as merely an extension of the parlour trick. We're so deep into this delusion that it's so very tempting to anthropomorphize this ersatz stream of consciousness as being 'thought'. I remain unconvinced that it's anything of the sort.

This comes to mind: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it." (Upton Sinclair)

This latest bubble smacks ever more of being a con.


> We're so deep into this delusion that it's so very tempting to anthropomorphize this ersatz stream of consciousness as being 'thought'. I remain unconvinced that it's anything of the sort.

Coincidentally, I’m listening to an interesting episode[0] of QAA that goes through various instances of people (some of them educated and technically literate) proving mentally unequipped to handle ML-based chatbot tech. The podcast mostly focuses on extreme cases, but I think far too many people are succumbing to more low-key versions of the same delusion.

As an example, even on this forum people constantly argue that unlicensed works should be allowed in ML training datasets because, if humans are free to learn from and be inspired by such works, then so should the model be. It’s crazy to apply notions of freedom and human rights to a [commercially operated] software tool, yet here we are. Considering how handy this framing is for the tool’s operator, its hardware suppliers, and whoever owns the respective stocks, some of this confusion is probably financially motivated; but even if half of it is genuine, that would be alarming.

[0] https://podcasts.apple.com/us/podcast/qaa-podcast/id14282093...
