
> Notably, the techniques that researchers employ to rescue overfitted artificial neural networks generally involve sampling from an out-of-distribution or randomized dataset. The overfitted brain hypothesis is that the brains of organisms similarly face the challenge of fitting too well to their daily distribution of stimuli, causing overfitting and poor generalization. By hallucinating out-of-distribution sensory stimulation every night, the brain is able to rescue the generalizability of its perceptual and cognitive abilities and increase task performance.

Bit of a jump there from randomized/out-of-sample data to "dreams are for generalization."




Also a bit of a jump from neural networks to brains.


And if the brain is overfitted, how can it generate out-of-distribution samples? Seems like wildly circular reasoning.


The brain is a lot of different components working together. Perhaps the bits which do statistical learning are functionally distinguishable from the bits which generate permutations on the data.

(This is in fact how modern neural network training often works: start with a set of 50,000 images, run them through a bunch of permutations such as translation, rotation, and lighting changes, and produce a training set of 1 million images that yields better generalization when trained on.)
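
Roughly like this with torchvision (a toy sketch, not from the article; the dataset path and parameter values are made up):

    import torchvision.transforms as T
    from torchvision.datasets import ImageFolder

    augment = T.Compose([
        T.RandomAffine(degrees=15, translate=(0.1, 0.1)),  # rotation + translation
        T.ColorJitter(brightness=0.3, contrast=0.3),       # lighting changes
        T.RandomHorizontalFlip(),
        T.ToTensor(),
    ])

    # Each epoch draws a fresh random permutation of every image, so
    # 50,000 originals behave like a much larger, more varied training set.
    train_set = ImageFolder("data/train", transform=augment)  # path is illustrative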


You'd have to posit multiple models in the brain. For example, one which simulates the world (call it the "world model"), and one which determines the best actions to take in the world (call it the "policy"). The policy might be overfit even while the world model is not. In dreaming, you are drawing samples from the world model and using them to train the policy.
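
That's essentially Dyna-style model-based RL. A toy sketch of the idea (my own illustration, not from the paper): real experience trains both models, and "dreaming" draws transitions from the world model to keep training the policy offline.

    import random
    from collections import defaultdict

    ALPHA, GAMMA = 0.1, 0.95
    q = defaultdict(float)  # "policy": state-action values
    model = {}              # "world model": (state, action) -> (reward, next_state)

    def q_update(s, a, r, s2, actions):
        best_next = max(q[(s2, a2)] for a2 in actions)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

    def wake_step(s, a, r, s2, actions):
        # Real experience updates both the policy and the world model.
        q_update(s, a, r, s2, actions)
        model[(s, a)] = (r, s2)

    def dream(n_steps, actions):
        # Offline: sample transitions from the world model and train the
        # policy on them, with no new sensory input.
        for _ in range(n_steps):
            (s, a), (r, s2) = random.choice(list(model.items()))
            q_update(s, a, r, s2, actions)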


I can't remember what it was, but I've read something within the past 3 months, possibly on these pages, that strongly suggested human cognition is built from a series of overlapping partial models of the world, similar to what you described (but with more simulation models in that layer).


How can computers calculate more accurately than the human brain but also have PRNGs or even true hardware RNGs?


But that’s the thing: it’s NOT random. It’s just unlikely scenarios. Dreams still follow the general model of the world.


Maybe something like turning up the exploration vs exploitation parameter for tree search AI algorithms.
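
For MCTS that parameter is usually the exploration constant c in the UCT selection rule. A rough sketch (standard formula, my own toy code):

    import math

    def uct_score(child_value, child_visits, parent_visits, c=1.4):
        # UCB1 applied to trees: exploitation term plus c times exploration term.
        if child_visits == 0:
            return float("inf")  # always try unvisited moves first
        exploit = child_value / child_visits
        explore = math.sqrt(math.log(parent_visits) / child_visits)
        return exploit + c * explore

    # Turning c up weights the exploration term more heavily, so the search
    # spends more time on rarely visited ("unlikely") branches.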


Of course it is. But that wouldn't make a very interesting hyperbolic headline.



