Hacker News

I think you are right in your sentiment that deep learning is becoming over-hyped. I've written before that it's mostly a method of brute forcing the problem of AI that happens to work very efficiently on modern GPUs and other parallel processing units. That being said, for the tasks it does well on (mostly facial and other types of simple image recognition), it does extremely well compared to everything else out there. (And it may very well end up as a part of a future architecture that better simulates general intelligence.)

However, I don't think it is reasonable to assume that AI tools can only be valuable if they can be rigorously mathematically characterized. The "holy grail" of intelligence -- that is, actual human intelligence -- certainly can't or at least hasn't been mathematically characterized, and I don't think anyone will ever offer any "proof" (mathematical or otherwise) that the biological brain is inherently wired to arrive at or at least tend to converge to correct solutions to intelligence problems. And of course no one will argue that human intelligence isn't valuable!



>The "holy grail" of intelligence -- that is, actual human intelligence...

When people talk about human-level AI, are they referring to specific beneficial subsets of human intelligence, like holding a conversation or interacting with the world around them, or does it literally include all of human cognition? In other words, would the "holy grail" include AI that acts extremely irrational and makes poor decisions based on temporary chemical imbalances in the body? Does the perfect human-level AI get depressed and commit suicide some of the time? Are we trying to replicate all of the parts, or only the good parts?


Good point. I would compare it with human speech. Human speech is a powerful means of communication, so it is very tempting to develop artificial systems that can communicate in the same way. But human language is far from optimal. In the same way, I believe the human brain is powerful but far from optimal.

The strength of the human brain is its ability to adapt to a fast-changing context. The solution (function) it finds, however, is usually far from optimal.

I also think that the future of computing lies in systems that can efficiently adapt their rules of action as the execution context evolves. The programmers of today will then become trainers.


If my machine learning algorithms went wrong as regularly as the term "human error" denotes, I wouldn't be able to use them to run much of anything. Identifying faces, for instance, is just a matter of statistical accuracy, of pattern-matching, but a more complex task like automated asset-trading or world domination requires a program that will make fewer mistakes than even many trained humans.
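The error-tolerance contrast here can be sketched numerically. In classification, errors are roughly independent and cost about the same, so accuracy is the whole story; in trading, outcomes compound, so what matters is the expected log return per decision, not raw accuracy. A minimal sketch, with entirely hypothetical numbers:

```python
import math

p = 0.95                 # per-decision accuracy (hypothetical)
gain, loss = 1.01, 0.70  # multiplicative outcome per decision (hypothetical)

# Face recognition: 95% accuracy simply means 95% of faces
# are labelled correctly; each error costs the same.
# Trading: capital compounds multiplicatively, so the relevant
# quantity is the expected log return per decision.
expected_log = p * math.log(gain) + (1 - p) * math.log(loss)

# Here expected_log is negative: a strategy that is "right"
# 95% of the time still loses money on average, because the
# rare mistakes are far more costly than the routine wins.
```

This is why a per-item accuracy figure that is excellent for pattern-matching can still be fatal for a task where mistakes compound.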


Why? Even in your hypothetical world domination scenario, a program can make many mistakes, but as long as it wins more territory than it loses, it is still successful.
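The point above is just an expected-value argument: mistakes are fine as long as the expectation per engagement stays positive. A minimal sketch, with hypothetical numbers:

```python
p_win = 0.6             # per-engagement win rate (hypothetical)
gain, loss = 1.0, 1.0   # territory gained / lost per engagement (hypothetical)

# Expected territory change per engagement: the program errs
# 40% of the time, yet the expectation is positive, so over
# many engagements it still comes out ahead.
expected_change = p_win * gain - (1 - p_win) * loss
# 0.6 * 1.0 - 0.4 * 1.0 = 0.2 per engagement
```

Note this assumes additive, symmetric stakes; the compounding objection from the parent comment is precisely that this assumption fails for tasks like trading.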


Making a mistake because a choice based on erroneous or insufficient data went wrong is one thing (no matter how great the AI, this will always be an issue); making a mistake even with complete and correct data, as humans are prone to do, is quite another.


Because I don't want my empire covered in paperclips ;-).



