I like the quotes, however:

> With his new thrust on what's called "deep belief networks" he is challenging his own early seminal contribution in the field

I don't know if I agree with that; he still uses backprop. Backprop has always been known to have problems when you scale to millions of connections, and his work on RBMs/DBNs is really quite old. What was novel more recently was showing that the Gibbs sampling step in contrastive divergence need only be performed once (CD-1), rather than ~100 times, while performance remained similar. The networks are generally still 'fine-tuned' with backprop.
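For concreteness, here is a rough numpy sketch of a CD-k update for a binary RBM (the function name, toy dimensions, and learning rate are mine, not from the article); CD-1 just means the Gibbs loop below runs once:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd_update(v0, W, b, c, k=1, lr=0.01):
        """One contrastive-divergence (CD-k) update for a binary RBM.

        v0 : (batch, n_visible) data batch
        W  : (n_visible, n_hidden) weights
        b  : (n_visible,) visible biases
        c  : (n_hidden,)  hidden biases
        k  : number of Gibbs steps (CD-1 means k=1)
        """
        # Positive phase: hidden activations driven by the data.
        ph0 = sigmoid(v0 @ W + c)
        h = (rng.random(ph0.shape) < ph0).astype(float)

        # Negative phase: k steps of alternating Gibbs sampling.
        for _ in range(k):
            pv = sigmoid(h @ W.T + b)
            v = (rng.random(pv.shape) < pv).astype(float)
            ph = sigmoid(v @ W + c)
            h = (rng.random(ph.shape) < ph).astype(float)

        # Approximate gradient: data statistics minus reconstruction statistics.
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - v.T @ ph) / n
        b += lr * (v0 - v).mean(axis=0)
        c += lr * (ph0 - ph).mean(axis=0)
        return W, b, c

The point of the result mentioned above is just that k=1 already gives a usable gradient estimate.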

Still, the focus on generative networks (not sure if that's still the right term, it's been a while) and greedy single-layer training is fairly recent, even if the concepts are quite old.




Mostly agreed; one difference I would like to highlight is that errors are not always backpropagated across all the layers. In addition to contrastive divergence, the breakthrough has been that you can get away with unsupervised learning (as with autoencoders) within each layer.
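For what it's worth, here is a toy numpy sketch of that idea: greedy layer-wise pretraining where each layer is trained as a tied-weight autoencoder on the codes of the layer below, so no error signal ever crosses layer boundaries. All names and sizes are made up for illustration, and a real setup would usually finish with a supervised fine-tuning pass on top.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=100):
        """Train one layer as a tied-weight autoencoder on its own input.

        The reconstruction error stays local to this layer; nothing is
        backpropagated into the layers below.
        """
        n_in = X.shape[1]
        W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        b = np.zeros(n_hidden)   # encoder bias
        c = np.zeros(n_in)       # decoder bias
        for _ in range(epochs):
            H = sigmoid(X @ W + b)        # encode
            R = sigmoid(H @ W.T + c)      # decode with tied weights
            dR = (R - X) * R * (1 - R)    # gradient at the decoder output
            dH = (dR @ W) * H * (1 - H)   # gradient at the hidden layer
            W -= lr * (dR.T @ H + X.T @ dH) / len(X)
            b -= lr * dH.mean(axis=0)
            c -= lr * dR.mean(axis=0)
        return W, b

    def greedy_pretrain(X, layer_sizes):
        """Stack layers; each is trained only on the codes of the previous one."""
        weights = []
        codes = X
        for n_hidden in layer_sizes:
            W, b = train_autoencoder_layer(codes, n_hidden)
            weights.append((W, b))
            codes = sigmoid(codes @ W + b)   # codes become the next layer's "data"
        return weights, codes

    # Toy usage: 200 random 64-dimensional inputs, two stacked layers.
    X = rng.random((200, 64))
    weights, top_codes = greedy_pretrain(X, [32, 16])

Only after this pretraining would backprop be run end to end (or just on a classifier sitting on top_codes), which is the fine-tuning mentioned above.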

On the comment that RBMs are new: I have come to accept that if one looks hard enough, almost all things are old; only the name changes!



