Hacker News

Even single neurons can do processing way more advanced than the artificial ones we use in machine learning.

How does that work? A single neuron is a single neuron. My understanding is that the brain has lots of neurons, and that they are assembled in certain "NN architectures" that are far more advanced than what we currently have. But I think that if you go down to the single-neuron level, they are pretty similar in terms of problem-solving capabilities.



Well, a neuron can either fire or not fire: 0 or 1. For a neuron to fire, enough of its synapses need to fire. How likely a single synapse is to do so in response to an incoming action potential varies over time according to Hebbian learning. Once enough synapses fire, their inputs are summed linearly to determine whether the neuron as a whole fires. That's a neat abstraction, and it is pretty descriptive, but it's also a little too neat to be true.

The summation is, of course, not actually linear, and neurons react to incoming signals even when they don't fire. How likely a neuron is to fire also depends on how much it has fired recently, as the synapses "get tired" (the concentrations of certain molecules are temporarily depleted), though for some neurons the likelihood first increases as a result of previous firings before it decreases.

Hebbian learning strengthens the connections to neurons that participate in successfully firing the neuron, but connections under a certain strength threshold decay with time, while those above it remain stable (an inability to form these stable connections is linked to Alzheimer's).
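The "too neat to be true" textbook abstraction above can be sketched in a few lines of code: linear summation against a firing threshold, plus a Hebbian rule that strengthens synapses which contributed to a successful firing and lets weak connections decay. All the function names and constants here are illustrative, not from any neuroscience library, and this deliberately omits the nonlinearities and fatigue effects the comment goes on to describe.

```python
def fires(weights, inputs, threshold=1.0):
    """The neat abstraction: linearly sum synaptic inputs and
    compare against a firing threshold (0 or 1, fire or not)."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def hebbian_update(weights, inputs, fired, lr=0.1, decay=0.01, stable=0.5):
    """Illustrative Hebbian rule: strengthen synapses that were active
    when the neuron fired; connections under a strength threshold decay
    with time, while those above it are left stable."""
    new_weights = []
    for w, x in zip(weights, inputs):
        if fired and x:
            w += lr * x        # synapse participated in a successful firing
        elif w < stable:
            w -= decay         # weak connection decays toward zero
        new_weights.append(max(w, 0.0))
    return new_weights

# Two active synapses push the sum (0.6 + 0.5 = 1.1) over the threshold,
# so the neuron fires and those two connections are strengthened, while
# the inactive weak one decays.
weights = [0.6, 0.5, 0.1]
inputs = [1, 1, 0]
fired = fires(weights, inputs)
weights = hebbian_update(weights, inputs, fired)
```

This is, of course, exactly the perceptron-style model the comment is criticizing: real summation is nonlinear, and the fatigue and facilitation effects mentioned above have no counterpart here.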

That's what I remember, and I've only taken a few neuroscience courses. There are multiple books out there attempting merely to describe the behaviour mathematically, let alone explain the underlying mechanisms, which is true "here be dragons" territory.



