
This seems to be Hinton's MO, though. A few years back he ripped out convolutions in favor of capsules, and while he claims it's better and some people might claim it "has potential", no one really uses it for much because, as with this, the actual numerical performance is worse on the tests people care about (e.g. ImageNet accuracy).

https://en.wikipedia.org/wiki/Capsule_neural_network




I mean, yes, this should be the MO of a tenured professor: making large speculative bets, not hyper-optimizing benchmarks.


But some of those bets should be right, or else he'd be better off spending his time and accumulated knowledge writing a historical monograph.


Specifically, tenure exists to remove the "you'd better be right" pressure, so professors are free to take meandering tangents through the solution space that don't seem like they'll pay off immediately.

The failure mode of tenure is that the professor just rests on their past accomplishments and doesn't do anything. That's a risk the system takes. In this case, though, Geoff Hinton is doing everything right: far from sitting around doing nothing, he's actively trying to obsolete the paradigm he helped usher in, just in case there is a better option out there. I think that's admirable.


The backpropagation paper was published in 1986.

It took >20 years for it to be right.

Maybe we ought to give this one some time?


Not sure what you mean by >20 years to be right. I built and trained a 3-layer back-propagating neural net to do OCR on an Apple II in 1989, based on that paper. Admittedly, just the 26 uppercase characters, but it clearly worked better than the alternatives.
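
For concreteness, a minimal sketch of that kind of 3-layer backprop net in modern NumPy. The 7x5 bitmap input size, hidden-layer width, and random toy "glyph" data are illustrative assumptions, not details from the original setup:

    # Sketch of a 3-layer (input/hidden/output) backprop net for 26-class OCR.
    # Dimensions and the random toy data are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 35, 20, 26   # e.g. 7x5 bitmap -> hidden -> A..Z
    lr = 0.5

    # Toy data: one random 7x5 "bitmap" per letter (stand-in for real glyphs).
    X = (rng.random((26, n_in)) > 0.5).astype(float)
    Y = np.eye(26)                        # one-hot targets, one row per letter

    W1 = rng.normal(0, 0.1, (n_in, n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, n_out))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        # Forward pass.
        h = sigmoid(X @ W1)
        y = sigmoid(h @ W2)

        # Backward pass: squared-error loss, sigmoid derivatives.
        d_out = (y - Y) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)

        # Plain gradient-descent weight updates.
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_hid

    pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
    print("training accuracy:", np.mean(pred == np.arange(26)))

No biases or momentum here; it's just the bare 1986-style forward/backward loop scaled down enough to memorize 26 patterns.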



