Hacker News

"With just a few hundred or thousand neurons, you can easily recognise perhaps a hundred faces".

It makes me think we are missing something when creating artificial neural networks, which need far more neurons to achieve only this specific task. Maybe artificial neurons are too simplified compared to biological ones, or maybe our training process could be much more efficient?




Two points:

First, it's important to keep in mind the difference between artificial "neurons" and real neurons. Real neurons, with their complicated dendritic arbors, are much more complicated than anything you'll see in a typical ANN. So there isn't a one-to-one correspondence between the "few hundred or thousand" neurons in a bee and the number of units in an ANN. Now, is there a one-to-thousand correspondence? I don't know. There's probably research on it, but I'm unfamiliar with it. Certainly for some neurons even a thousand-unit ANN would seem inadequate (look at the arborization of a Purkinje cell, for example).
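For contrast, here is essentially everything a typical ANN "neuron" does: a weighted sum of inputs passed through a nonlinearity. A minimal NumPy sketch (all names and numbers are illustrative):

```python
import numpy as np

def artificial_neuron(x, w, b):
    """The entire model of a 'neuron' in a typical ANN:
    a weighted sum of inputs squashed by a nonlinearity."""
    return np.tanh(np.dot(w, x) + b)

# A biological neuron's dendritic tree performs far richer,
# spatially distributed computation than this single dot product.
x = np.array([0.5, -1.0, 2.0])   # inputs (e.g. presynaptic activity)
w = np.array([0.1, 0.4, -0.3])   # synaptic weights
print(artificial_neuron(x, w, b=0.0))
```

That one line of arithmetic is what gets compared, unit for unit, against a cell with thousands of synapses on an elaborate dendritic arbor.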

Point two: Absolutely, modern ANNs are missing something fundamental. I would wager obscenely large amounts of money that they are missing more than one fundamental idea, and I doubt I could find another neuroscientist who'd take that wager. What are ANNs missing? Obviously I don't know, or I would have published it already. But I'll guarantee you the first step is recurrence. Hell, intelligent recurrence might be the only thing missing and I'd lose my bet. But recurrence is hard. And anyway, as in point one, even the simple facial recognition a bee does with only a thousand neurons would take a few hundred thousand to a few tens of millions of modularly-recurrently connected "neurons." Not exactly a laptop simulation.
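For readers unfamiliar with the term: "recurrence" means the network's units feed back into themselves, so the state at each step depends on history, not just the current input. A minimal, untrained NumPy sketch (random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hidden units

W_in  = rng.normal(0, 0.5, (n, 3))  # input weights
W_rec = rng.normal(0, 0.5, (n, n))  # recurrent weights: units feed back into each other

h = np.zeros(n)  # hidden state carried across time steps
for t in range(5):
    x = rng.normal(0, 1, 3)            # input at time t
    h = np.tanh(W_in @ x + W_rec @ h)  # state depends on current input AND past state

print(h.shape)  # the state vector persists across the loop
```

A pure feedforward net has no `W_rec` term: drop it and each output depends only on the instantaneous input.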


If I had to guess: neural networks have to operate on pixel data whereas real neurons don't. Brains and eyes have evolved in tandem. Perhaps what makes them so efficient is that the eyes handle some of the processing as a consequence of their physical shape and characteristics.

Look at the eyes of bees. Very different from our own (and from the cameras we build) and perhaps very specialized to the limited set of tasks that bees carry out?


On eyes in general: the retina is not like a camera pixel array, rather, it extracts all kinds of features and detects motion:

"Eye smarter than scientists believed: neural computations in circuits of the retina." Gollisch, Tim, and Markus Meister. Neuron 65.2 (2010): 150-164.

http://www.sciencedirect.com/science/article/pii/S0896627309...


Yes. The retina is not so much a part of the eye as an extension of the brain into the eye.


They say the eyes are the window into the soul, but really we're just looking at each other's brains!


The neurons in the eyes pre-process a huge amount of information, converting data from about 126M rods and cones to 1M ganglion cells - a pretty amazing amount of data compression.

Ref: https://www.amazon.com/Visual-Thinking-Kaufmann-Interactive-...
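Working out the ratio from the figures in that comment (the counts themselves are the comment's, not mine):

```python
rods_and_cones = 126_000_000   # photoreceptors per eye (figure from the comment above)
ganglion_cells = 1_000_000     # retinal output channels feeding the optic nerve

ratio = rods_and_cones / ganglion_cells
print(f"~{ratio:.0f}:1 reduction before the signal even leaves the eye")
```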


From my limited understanding, CNNs seem to approximate this functionality in image processing tasks: the convolution layers are akin to the retina extracting info (lines, shapes, etc.) and compressing it (kernel striding and pooling), while the fully-connected layers at the end handle categorization or whatnot.

That probably doesn't answer the question of how a bee's eyes work, though.



I guess that's true for mammalian eyes, but what about insect eyes? If the bumblebee brain only holds 200k neurons, how many do their eyes have on board?


~7000 elements per compound eye according to this page:

https://www.google.co.uk/amp/s/brookfieldfarmhoney.wordpress...


That's individual lenses, right? But it says that each lens has its own photoreceptor(s). So that is a multiple of 7000.


So if that's three photoreceptors per facet, each eye is already rivaling the neuron count of the brain.


That's largely a topology thing... our brain may be the CPU, but we have neurons throughout much of our body, processing extremely low-level sensory inputs into richer information that the brain can process at a higher level.

And different species have very different neural topologies: I've heard Octopoda described as acting more like an eight-member swarm intelligence than an intelligent eight-armed creature. Because their neural density is much more diffuse, with so many neurons in their arms, they do much more processing there, and the central brain acts more like a coordination unit than anything.


Monolithic brain or modular microbrains... it's the Torvalds/Tanenbaum debate all over again!


Probably a little bit of both, but definitely a huge amount of the former. Simplistic models of human intelligence as merely the result of neurons ignore huge amounts of human physiology, such as the fact that we have over 400 recognized types of neurons. Marvin Minsky had a lot of criticism of the overly fantastical claims made for neural networks, and this is one of them.


Much of Minsky's criticism predates current implementations of neural networks, with their extremely large datasets and heavy GPU processing. I'm not saying his criticisms are invalid - they certainly should not be tossed aside!

We are in the very early stages of another "AI Spring". I have the inkling of an opinion that if we are to advance further in the AGI direction, it will be by applying these same kinds of large-dataset tools to other ML models of the past, much as we have done with neural networks - and also by seeking to connect and unify these various parts into a whole. I don't think we should throw the baby out with the bathwater: I think all of our past ML systems have validity - it's more that where they fit in the overall scheme may be lacking.

I do know that such approaches have been tried in the past, but I don't think the idea of applying today's tools to yesterday's models has been pursued as strongly, given all the hype and money (and success!) being poured into neural networks today.


They did come before our current datasets and processing power, but those were very predictable trends back then. I think the biggest flaw in his criticisms was that they were mathematically focused on very simple neural topologies. And while larger datasets and better processing power would have helped all ML methods, NNs have benefitted from the combination of those external advances and very significant topological advances that Minsky didn't foresee.

Much of his criticism still stands even in the face of his failure to predict those topological advances. And his criticisms weren't even the derogatory kind... at their most ideological they were an attempt to argue that conceptually simple neural networks are not complex enough to describe the vast complexity of general intelligence. He still saw their place, as do I: NNs have performed remarkably in areas of sensory perception and processing, but still lag behind many other methods at higher-level tasks like learning a mathematical model of a physical process. After reading a lot of Minsky, I'm pretty sure most of the advances in AI over its entire history are due to AI Winters crushing the dreams of Neats and forcing them back to Scruffier methods.

And I'm right there with you. I'd love to see a new revolution in Expert Systems, Logic Programming, or tree-based models. Hell, we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA. But I want more!


saosebastiao says:> "...we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA."

Are we?

Watson appears to be a framework for applications using classical GOFAI techniques, so I would hesitate to term it "revolutionary". AFAICT its emergence is due to faster von Neumann hardware, not new algorithms. Not that NNs couldn't be rolled into the mix, too, of course.

I believe the current interest in NNs is an AI diversion, something to do until a breakthrough occurs. Now we find that bees can read faces, pull strings to get nectar, count, etc. So what? Are we closer to something that can navigate the world, solve problems like we do, using language to explain how it was done and answer questions?

IOW I await the first version of the Odyssey written by an AI, once its excursions are complete (Kindle version: a Google car describing the perils of its cross-country trip from NY to LA).


My impression from the 1970s AI work was that Minsky thought neural nets were an inefficient way (what a way to run a railroad) to do biological computation, and that general-purpose computers were working at a higher level of abstraction. That appears not to be the case.


Reminds me of a fascinating talk by Gerald Sussman at Strangeloop.

We Really Don't Know How to Compute! https://www.youtube.com/watch?v=O3tVctB_VSU


Maybe it's the difference between digital and analog?

Consider a vinyl record and record player. It's super simple: a long groove with smooth bumps and a needle that slides in the groove. Recording and playback via analog methods are super simple.

Compare that to a compact disk or MP3 file and the complexity required to encode, store, and playback the sound.
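To make the contrast concrete, here's a toy Python sketch of the two things digital audio must do that the groove-and-needle doesn't: sampling at discrete times and quantizing to discrete levels (the rate and bit depth here are arbitrary illustrative choices, lower than CD quality):

```python
import math

SAMPLE_RATE = 8000       # samples per second (illustrative, not CD-quality)
BIT_DEPTH = 8            # bits per sample
levels = 2 ** BIT_DEPTH  # 256 discrete amplitude levels

# Sample one second of a 440 Hz tone and quantize each sample.
samples = []
for i in range(SAMPLE_RATE):
    analog = math.sin(2 * math.pi * 440 * i / SAMPLE_RATE)  # continuous value in [-1, 1]
    digital = round((analog + 1) / 2 * (levels - 1))        # snapped to one of 256 steps
    samples.append(digital)

print(len(samples), min(samples), max(samples))
```

And that's before any of the error correction (CD) or psychoacoustic compression (MP3) layered on top; the vinyl groove just *is* the waveform.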





