How insects like bumblebees do so much with tiny brains (bbc.com)
269 points by happy-go-lucky on Jan 24, 2017 | 135 comments



Highly relevant is the Portia genus of spiders ([0], [1]) and apparently other related jumping spiders ([2]).

One of my favorite excerpts from [1]:

> Harland says Portia’s eyesight is the place to start. Jumping spiders already have excellent vision and Portia’s is ten times as good, making it sharper than most mammals. However being so small, there is a trade-off in that Portia can only focus its eyes on a tiny spot. It has to build up a picture of the world by scanning almost pixel by pixel across the visual scene. Whatever Portia ends up seeing, the information is accumulated slowly, as if peering through a keyhole, over many minutes. So there might be something a little like visual experience, but nothing like a full and “all at once” experience of a visual field.

[0] - https://en.wikipedia.org/wiki/Portia_(spider) [1] - http://www.dichotomistic.com/mind_readings_spider%20minds.ht... [2] - http://news.nationalgeographic.com/2016/01/160121-jumping-sp...


Portia spiders are also referenced in Peter Watts' Echopraxia. (Some spoilers could be inferred from thinking about this too hard while reading the novel.)


Some spoilers could be inferred from thinking about this too hard while reading the novel

Not.. if you don't say that...


Yeah, I guess it's a bit like saying "don't think about a pink elephant", but not saying that doesn't guarantee that they won't think of a pink elephant.

And I'd wager that people who would read about the Portia, and would read Blindsight and Echopraxia are the type to think long and hard about such topics anyway.


Well, iirc they do actually reference these spiders right there in the novel, so it's not THAT much of a spoiler.


Adrian Tchaikovsky's Children of Time is another novel where Portia spiders are featured prominently (great book btw).


Yep! Watts' novels are how I first heard about Portia. I can't wait for the third in the trilogy.

In general, I love a good novel with a bibliography.


For people who don't believe that spiders can be cute:

https://adlayasanimals.files.wordpress.com/2015/01/u2byv.jpg

Also, I got interested and tried to find some documentation on Portia labiata after reading Watts' book, and there's very, very little, but what he says appears to be true. They do have telescopic eyes with vibrating retinas. They do spend ages painstakingly scanning a scene before moving. They are absurdly intelligent for insects. I don't know if Watts' suggestion that they timeshare pattern-matching areas of their brains is true, but it seems plausible.

Here's a good article: http://www.dichotomistic.com/mind_readings_spider%20minds.ht...


If you're afraid of spiders, I would suggest not clicking on that link. I was hoping for cute, but that is just a very large closeup of a spider, and it definitely just hit the fear instinct.



This one has to win for cutest spider, it's even smiling:

https://s-media-cache-ak0.pinimg.com/originals/ef/ca/4d/efca...


That one is ugly. The jumping spiders have a neater look due to their colors. The really colorful ones with big eyes might not hit fear instinct so hard. Try this one I cherry-picked:

http://www.arachne.org.au/_dbase_upl/spi_mm.jpg



maybe to balance these we should have some pictures of terrifying kittens...


Props for sourcing your images! That is often overlooked by just about everyone (especially when trying to meme for a buck), and skipping it breaks at least the spirit of the web.


Those do look neat. The white one looks like the old, wise men of spiders. Haha.


The ugly scary spider is much cuter with the green channel inverted.

http://imgur.com/gallery/WWoRX

Ugh, unfortunately that trick doesn't work for every ugly scary image.

http://imgur.com/gallery/SeAi3


Lol. People who coast through life on acid would use these as additional justification that their way of living is more pleasant. I find it dangerously misleading, though. ;)


Clearly cute is in the eye of the beholder. It looks like it's begging to be petted!


In English, is it correct to say spiders are insects? In my native language (Portuguese) it would not be accurate, as they are in the class Arachnida, along with scorpions, mites and ticks.


A good word for something that encompasses all things creepy, crawly and tiny without involving scientific classification is "bug". Most English speakers would agree that spiders, ants, bees, centipedes and other things like them are all bugs.

This breaks down in Australia where the spiders are as big as your head or in Puerto Rico where the foot long centipedes eat bats. Those aren't bugs, those are nightmare fuel.


'Bug' in that context is mostly American dialect. I'm British, and 'bug' in that context is infantile; by adults, it's either a verb or a small device planted by MI5.

But the OP is right, of course. Spiders are not insects. Although I suspect that most people I'd talk to in the UK would call them such. I'll have to hand in my Pedant's Society card (which is actually made out of plastic)...


Then again, the little grey oval pill bug is a crustacean, not an insect and not an arachnid.


Another source of British vs. American dialect difference/confusion: in my experience, they're known in the U.K. as "woodlice", but in the US it's entirely reasonable for grown adults to refer to them as "roly-polys".

The American use of "critter" also bemuses.


"critter" is a corruption of "creature" and distinctly southern USA (to my Midwestern ears).


Spiders are not insects; from Wikipedia:

"terrestrial arthropods, such as centipedes, millipedes, scorpions, and spiders, are sometimes confused with insects"[1]

[1] https://en.wikipedia.org/wiki/Insect#Phylogeny_and_evolution


It's not correct, but it's a pretty common mistake to refer to bugs in general as "insects."


And even saying "bugs in general" is technically inaccurate!

https://en.wikipedia.org/wiki/Hemiptera


Not among biologists :)

In real life many people call spiders insects, either from ignorance or from not caring about the distinction.


Mostly, though, to humiliate them.


No, it's not.


No, they are not referred to as insects in (correct) English either, they are arachnids - though sometimes people do not make the distinction.

To refer to both insects and spiders and similar in one term, you could use the (rather less formal) terms "bugs" or "creepy crawlies". I can't think of anything more formal off the top of my head.


>They are absurdly intelligent for insects.

You know what I'm going to say don't you.


What, it's a trilogy? Has Watts confirmed a third novel?

Last I saw was what looked like a video game based on Blindsight.


He mentioned it in a blog post some time ago [0].

No news on its status, alas.

[0] http://www.rifters.com/crawl/?p=5438


I will admit that it won’t be quite as upbeat as my other endings.

I'm now more excited for this than The Winds of Winter, wow.


> [not] quite as upbeat as my other endings.

Wow, this guy sure knows how to make a statement.


What, there's a video game?


Shouldn't you put the spoiler warning before the spoiler?


Their hunting strategies are awesome:

http://portiaspiders.weebly.com/hunting-strategies.html


Quickly skimming those - I think I've read most of them. I do recall reading that the Portia spider's eyes don't move; instead it rotates its retinas.

It can mimic the footsteps of other spiders by plucking the web of the spider it's going to kill.

Doesn't the name come from Portia in The Merchant of Venice, a devious character?

Jumping spiders are cool; we have the stripy ones in my region. Spiders give me the creeps, but not jumping spiders - I think it's due to the shorter legs and small size.


That's so different from our concept of vision. It would be like watching a webcam that surveils a park, but only refreshes small bits of the screen at a time. You might see a headless woman strolling along, etc.
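A toy sketch of that kind of scanning (just numpy; the patch size and scene are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))           # the park the webcam watches
    percept = np.full_like(scene, np.nan)  # what has been "seen" so far
    patch = 8                              # hypothetical size of the tiny focal spot

    # Build up the picture one keyhole-sized patch at a time; anything that
    # moves between visits only ever appears in stale fragments.
    for y in range(0, 64, patch):
        for x in range(0, 64, patch):
            percept[y:y+patch, x:x+patch] = scene[y:y+patch, x:x+patch]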


Here's a video of the bumblebee string pulling behaviour to get at the "nectar": https://www.youtube.com/watch?v=gSCr5OxXN1A.

Quite extraordinary.


It's interesting that bees can learn new tasks, but this is amazing:

> Other bumblebees learned by observing trained demonstrators from a distance. Only a small minority solved the task spontaneously. The experiments suggest that learning a nonnatural task in bumblebees can spread culturally through populations.


I know: mind blown. Amazing communication (and observation) skills. I've certainly developed a newfound respect for the humble bee.


Eusocial insects like honeybees, some wasps, and ants take this even one step further by combining the extraordinary but limited powers of one individual insect to make highly-complex decisions with the coordinated input from thousands or millions of individuals. It's often stated that after humans, these eusocial insects are the most advanced life form on earth. After studying, for example, the fascinating voting process of how honeybees choose a new home during swarming, I am in full agreement.
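If you want to play with the idea, here's a minimal quorum-style sketch of that voting process (pure Python; the site qualities, recruitment rates, and quorum size are all invented, and real swarm dynamics are far richer):

    import random

    random.seed(1)
    SITES = {"hollow_tree": 0.9, "old_shed": 0.6}   # hypothetical site qualities
    QUORUM = 15            # scouts needed at one site to trigger the move
    scouts = [None] * 100  # None = uncommitted scout

    for step in range(1000):
        for i, site in enumerate(scouts):
            if site is None:
                committed = [s for s in scouts if s is not None]
                # Recruitment: uncommitted scouts mostly follow the dances of
                # committed ones; occasionally one discovers a site on its own.
                if committed and random.random() < 0.3:
                    scouts[i] = random.choice(committed)
                elif random.random() < 0.05:
                    scouts[i] = random.choice(list(SITES))
            elif random.random() > SITES[site]:
                # Dancing for a poor site is abandoned sooner, so good sites
                # accumulate scouts faster -- quality becomes a quorum.
                scouts[i] = None
        counts = {s: sum(1 for x in scouts if x == s) for s in SITES}
        best = max(counts, key=counts.get)
        if counts[best] >= QUORUM:
            print(f"step {step}: swarm decides on {best}")
            break

The point is just that site quality shows up nowhere except in how long an individual scout keeps advertising, yet the colony reliably picks the better site.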


Bees, ants, and (I think) other hive-living insects share ~75% of their DNA with their hive mates.

This makes it reasonable to think of the hive as one individual!

Just like we are made of individual cells and organs, an ant hill is made up of individual ants. That the parts can move separately is a fairly superficial difference.


I think you mean 100% because ants are clones. The only difference between them and the queen is the nurturing process they go through which influences epigenetics.


Welcome to the complexities of Hymenoptera genetics! Workers are 75% related to each other (sisters) and only 50% related to their mother (the queen). https://en.wikipedia.org/wiki/Haplodiploidy#Relatedness_rati...


It's even more complicated than this with some species.

Those numbers are true if a female mates once. Honeybee (Apis spp.) queens that lead successful colonies, for example, typically mate with a dozen or more male partners. This has beneficial genetic effects but is probably done just to satisfy sperm storage requirements since they only perform one mating flight. The result of this is that the thousands of sisters in the hive have many different fathers, reducing the worker-to-worker relatedness.
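The arithmetic is easy to check (a quick sketch; assumes fathers are unrelated and contribute equally):

    # Fathers are haploid: full sisters share the entire paternal half of the
    # genome, plus half of the maternal half on average: 0.5 + 0.5*0.5 = 0.75.
    # Half sisters (different fathers) share only the maternal side: 0.25.

    def avg_worker_relatedness(n_fathers):
        p_full = 1 / n_fathers              # chance two workers share a father
        return p_full * 0.75 + (1 - p_full) * 0.25

    for n in (1, 2, 12):
        print(n, round(avg_worker_relatedness(n), 3))
    # 1 -> 0.75, 2 -> 0.5, 12 -> 0.292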


I remember reading an essay on bees and altruism. The standard line is that bees are altruistic because workers are 75% related. The author claimed that's bunkum because a) honey bees are rather extreme and b) workers have multiple fathers.

Bee reproductive setups are really diverse with honey bees at one end, solitary bees at the other, and just about anything else between.


The eusocial altruistic behavior of bees and other Hymenoptera requires relatedness (kin selection), but it also requires other ecological benefits like a nest that could be kept safe across multiple generations. This work is detailed exhaustively by Richard Alexander where he discusses origins of altruistic group behavior in both eusocial species and humans.


The hive can be thought of as one superorganism, but I think it's more than a superficial difference.

Eusocial species behavior evolved from fully-functioning individual insects (solitary bees and wasps that still exist, for example) as a behavioral adaptation. A human organ is not a self-surviving entity with social behavior.

In a honeybee hive, queens typically mate with many male partners (10+). While all hive individuals are related, the majority female workers are each the product of diploid sex determination with multiple possible fathers.


Ender's Game explores this idea from a more philosophical perspective.


Read Complex Worlds from Simpler Nervous Systems[1] if you want more information on how very complex behavior can emerge in jumping spiders and bees. After reading that book, it seemed to me that AI researchers should focus much more time on duplicating the feats of these simpler animals (in similarly computationally limited contexts) rather than on duplicating extremely high-level human faculties like reasoning and even playing Go. I mean, imagine a robot with the intelligence of a parrot or the smartest birds in the family Corvidae[2]. I think people would probably be afraid of autonomous robots of this level of intelligence.

[1] https://mitpress.mit.edu/books/complex-worlds-simpler-nervou...

[2] https://en.wikipedia.org/wiki/New_Caledonian_crow


I bet the drone industry will do work in that direction.


"With just a few hundred or thousand neurons, you can easily recognise perhaps a hundred faces".

It makes me think we are missing something when creating artificial neural networks, which need many more neurons to achieve only this specific task. Maybe artificial neurons are too-simplified models compared to biological ones; maybe our training process could be much more efficient?


Two points:

First, it's important to keep in mind the difference between artificial "neurons" and real neurons. Real neurons, with their complicated dendritic arbors, are much more complicated than anything you'll see in a typical ANN. So there isn't a one-to-one correspondence between the "few hundred or thousand" neurons in a bee and the number of units in an ANN. Now, is there a one-to-thousand correspondence? I don't know. There's probably research on it, but I'm unfamiliar with it. Certainly for some neurons even a thousand-unit ANN would seem inadequate (look at the arborization of a Purkinje cell, for example).

Point two: Absolutely modern ANNs are missing something fundamental. I would wager obscenely large amounts of money that they are missing more than one fundamental idea, and I doubt I could find another neuroscientist who'd take that wager. What are ANNs missing? Obviously I don't know or I would have published it already. But I'll guarantee you the first step is recurrence. Hell, intelligent recurrence might be the only thing missing and I'd lose my bet. But recurrence is hard. And anyway, back in point one, even the simple facial recognition in a bee using only a thousand neurons would take a few hundred thousand to a few tens of millions of modularly-recurrently connected "neurons." Not exactly a laptop simulation.
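To make "recurrence" concrete, a minimal sketch (plain numpy, random weights; not a model of any real circuit):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000                                       # "neurons"
    W_in = rng.normal(0, 0.1, (N, 64))             # input weights
    W_rec = rng.normal(0, 1 / np.sqrt(N), (N, N))  # recurrent weights

    h = np.zeros(N)
    for t in range(100):
        x = rng.random(64)                 # a new input frame each step
        # Each unit's next state depends on the input AND on the whole
        # network's previous state -- the feedback a feedforward net lacks.
        h = np.tanh(W_in @ x + W_rec @ h)

Unroll that loop and you see why it's hard: credit assignment has to flow back through every time step.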


If I had to guess: neural networks have to operate on pixel data whereas real neurons don't. Brains and eyes have evolved in tandem. Perhaps what makes them so efficient is that the eyes handle some of the processing as a consequence of their physical shape and characteristics.

Look at the eyes of bees. Very different from our own (and from the cameras we build) and perhaps very specialized to the limited set of tasks that bees carry out?


On eyes in general: the retina is not like a camera pixel array, rather, it extracts all kinds of features and detects motion:

"Eye smarter than scientists believed: neural computations in circuits of the retina." Gollisch, Tim, and Markus Meister. Neuron 65.2 (2010): 150-164.

http://www.sciencedirect.com/science/article/pii/S0896627309...
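One classic example of the kind of computation described there is center-surround filtering; a crude sketch (numpy/scipy, with invented parameters):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))   # stand-in for a raw "pixel" image

    # A difference-of-Gaussians crudely mimics a ganglion cell's
    # center-surround receptive field: it responds at edges and ignores
    # uniform illumination, so features (not raw pixels) leave the eye.
    center = gaussian_filter(image, sigma=1.0)
    surround = gaussian_filter(image, sigma=3.0)
    edges = center - surround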


Yes. The retina is not so much a part of the eye as an extension of the brain into the eye.


They say the eyes are the window into the soul, but really we're just looking at each other's brains!


The neurons in the eyes pre-process a huge amount of information converting data from about 126M rods and cones to 1M ganglion cells - a pretty amazing amount of data compression.

Ref: https://www.amazon.com/Visual-Thinking-Kaufmann-Interactive-...


From my limited understanding, CNNs seem to approximate this functionality in image processing tasks, where the convolution layers are akin to the retina extracting info (lines, shapes, etc) and compression (kernel striding and pooling), while the later fully-connected neural network works for categorization or whatnot.

That probably doesn't answer the question of how a bee's eyes work, though.
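Roughly, and only as an analogy, in PyTorch terms (layer sizes are arbitrary, assuming a 28x28 input):

    import torch.nn as nn

    # Loose analogy only: conv/pool stages ~ retina-like feature extraction
    # and compression; the final linear layer ~ downstream categorization.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3),   # local features ("lines, shapes")
        nn.ReLU(),
        nn.MaxPool2d(2),                  # spatial compression
        nn.Conv2d(8, 16, kernel_size=3),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 5 * 5, 10),        # categorization "or whatnot"
    )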



I guess that's true for mammalian eyes, but what about insect eyes? If the bumblebee brain only holds 200k neurons, how many do their eyes have on board?


~7000 elements per compound eye according to this page:

https://www.google.co.uk/amp/s/brookfieldfarmhoney.wordpress...


That's individual lenses, right? But it says that each lens has its own photoreceptor(s). So that is a multiple of 7000.


So if that's three photoreceptors per facet, each eye is already rivaling the neuron count of the brain.
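Quick arithmetic with the numbers in this thread (the 3-per-facet figure is the guess above; real ommatidia reportedly have more like 8-9 photoreceptors):

    facets_per_eye = 7_000      # from the linked page
    receptors_per_facet = 3     # the guess above
    brain_neurons = 200_000     # the bumblebee figure quoted upthread

    per_eye = facets_per_eye * receptors_per_facet
    print(per_eye, per_eye / brain_neurons)  # 21000 -> ~10% of the brain
    print(facets_per_eye * 9)                # ~63000 per eye if it's 9 per facet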


That's largely a topology thing... our brain may be the CPU, but we have neurons throughout much of our body, processing extremely low-level sensory inputs into richer information that the brain can process at a higher level.

And different species have very different neural topologies: I've heard it said that Octopoda act more like an eight-member swarm intelligence than an intelligent eight-armed creature. Because so much of their neural density is diffused through their arms, they do much more processing there, and the central brain acts more like a coordination unit than anything.


Monolithic brain or modular microbrains... it's the Torvalds/Tanenbaum debate all over again!


Probably a little bit of both, but definitely a huge amount of the former. Simplistic models of human intelligence as just the product of neurons ignore huge amounts of human physiology, such as the fact that we have over 400 recognized types of neurons. Marvin Minsky had a lot of criticisms of the overly fantastical visions for Neural Networks, and this is one of them.


Many of Minsky's criticisms came before current, modern implementations of neural networks, with their extremely large datasets and heavy GPU processing. I'm not saying his criticisms are invalid - they certainly should not be tossed aside!

We are in the very early stages of another "AI Spring" - I have the inkling of an opinion that if we are to advance further in the GAI direction, it will be by applying these same kinds of large-dataset tools to other ML models of the past, much like we have done with neural networks - and also by seeking to connect and unify these various parts into a whole (I don't think we should throw the baby out with the bathwater - I think all of our past ML systems have validity - it's more that where they fit in the overall scheme may be lacking).

I do know that such approaches have been tried in the past, but I don't think applying today's tools to yesterday's models has been pursued as strongly, due to all the hype and money (and success!) being poured into neural networks today.


They did come before our current datasets and processing power, but those were very predictable trends back then. I think the biggest flaw in his criticisms was that they were mathematically focused on very simple neural topologies. And while larger datasets and better processing power would have helped all ML methods, NNs have benefited from the combination of those external advances with very significant topological advances that Minsky didn't foresee.

Many of his criticisms still stand even in the face of his failure to predict those topological advances. And his criticisms weren't even the derogatory kind... at their most ideological they were an attempt to show that conceptually simple neural networks are not complex enough to describe the vast complexity of general intelligence. He still saw their place, as do I: NNs have performed remarkably in areas of sensory perception and processing, but still lag behind many other methods at higher-level tasks like learning a mathematical model of a physical process. After reading a lot of Minsky, I'm pretty sure most of the advances in AI over its entire history are due to AI Winters crushing the dreams of Neats and forcing them back to Scruffier methods.

And I'm right there with you. I'd love to see a new revolution in Expert Systems, Logic Programming, or tree-based models. Hell, we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA. But I want more!


saosebastiao says:

> "...we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA."

Are we?

Watson appears to be a framework for applications using classical GOFAI techniques, so I would hesitate to term it "revolutionary". AFAICT its emergence is due to faster von Neumann hardware, not new algorithms. Not that NNs couldn't be rolled into the mix, too, of course.

I believe the current interest in NNs is an AI diversion, something to do until a breakthrough occurs. Now we find that bees can read faces, pull strings to get nectar, count, etc. So what? Are we closer to something that can navigate the world, solve problems like we do, using language to explain how it was done and answer questions?

IOW I await the first version of the Odyssey written by an AI, once its excursions are complete (Kindle version: a Google car describing the perils of its cross-country trip from NY to LA).


My impression from the 1970s AI stuff was that Minsky was of the opinion that neural nets were an inefficient (what a way to run a railroad) way to do biological computation, and that general-purpose computers were working at a high level of abstraction. That appears not to be the case.


Reminds me of a fascinating talk by Gerald Sussman at Strangeloop.

We Really Don't Know How to Compute! https://www.youtube.com/watch?v=O3tVctB_VSU


Maybe it's the difference between digital and analog?

Consider a vinyl record and record player. It's super simple - a long groove with smooth bumps and a needle that slides in the groove. Recording and playback via analog methods are super simple.

Compare that to a compact disk or MP3 file and the complexity required to encode, store, and playback the sound.


Slightly off-topic but related: BEAM robots, created by Mark Tilden (who later founded WowWee), might be of interest to readers.

From Wikipedia[0]: BEAM robotics (from Biology, Electronics, Aesthetics and Mechanics) is a style of robotics that primarily uses simple analogue circuits, such as comparators, instead of a microprocessor in order to produce an unusually simple design.

....

BEAM robots may use a set of the analog circuits, mimicking biological neurons, to facilitate the robot's response to its working environment.

[0]: https://en.wikipedia.org/wiki/BEAM_robotics


I would also suggest anyone that is interested in BEAM robotics to take a look at Braitenberg Vehicles[1]. These are simple reflex-based robots that can exhibit complex behaviors due to their interaction with their environment. Originally, they were thought experiments by the psychologist Valentino Braitenberg who showed in his book, Vehicles[2], how one might ascribe complex emotional and mental states to a simple automaton if one only observed its external behavior. For a fascinating earlier example of analog robotics, (and the first example of true autonomous robotics to my knowledge) one has the work of Grey Walter and his "tortoise" robots[3].

[1] https://en.wikipedia.org/wiki/Braitenberg_vehicle

[2] https://books.google.com/books?id=7KkUAT_q_sQC (there appear to be excerpts available for free as PDFs)

[3] https://en.wikipedia.org/wiki/William_Grey_Walter
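A Braitenberg vehicle is almost trivially small in code; a toy sketch of "vehicle 2b" (all constants invented):

    import math

    # Vehicle 2b ("aggression"): each light sensor drives the motor on the
    # opposite side, so the vehicle turns toward and charges at the light.
    light = (5.0, 5.0)
    x, y, heading = 0.0, 0.0, 0.0

    def sensor(sx, sy):
        d2 = (sx - light[0])**2 + (sy - light[1])**2
        return 1.0 / (1.0 + d2)   # brighter when closer

    for _ in range(200):
        # Sensors sit at the front-left and front-right of the vehicle.
        lx = x + math.cos(heading + 0.5); ly = y + math.sin(heading + 0.5)
        rx = x + math.cos(heading - 0.5); ry = y + math.sin(heading - 0.5)
        left_motor = sensor(rx, ry)     # crossed wiring
        right_motor = sensor(lx, ly)
        heading += 0.5 * (right_motor - left_motor)
        speed = 0.5 * (left_motor + right_motor)
        x += speed * math.cos(heading); y += speed * math.sin(heading)

Uncross the wiring and the same two "neurons" flee the light instead of charging it - the behavior lives in the topology, not in any processor.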


Maybe the real question is 'why can we do so little with our giant brains?'


Maybe something along the lines of "our brains would overheat" - there's a small temperature window in which proteins won't denature, it takes k T log 2 joules to erase one bit of information (Landauer's principle), and our brain uses around 20 Watts of power.

Maybe tin foil hats make good heatsinks..

Interestingly enough, there is some evidence that the Gibbs free energy of ketone metabolism is more thermodynamically efficient than that of glucose in the brain (cf. Dr. Richard Veech's work). You can measure a lower temperature gradient between the tongue and the brain.

*edit: s/Beech/Veech..
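For scale, a quick back-of-the-envelope (assuming body temperature and, absurdly, a perfectly Landauer-efficient brain):

    import math

    k = 1.380649e-23           # Boltzmann constant, J/K
    T = 310.0                  # roughly body temperature, K
    bit = k * T * math.log(2)  # Landauer limit: ~3e-21 J per erased bit

    power = 20.0               # the brain's power budget, W
    print(power / bit)         # ~6.7e21 bit erasures/s -- the hard ceiling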


Doesn't that imply that people who live in hotter climates have...how can I put this politely?...a disadvantage when it comes to the stability of their brains? I would assume that the brain has a certain amount of redundancy, but has this ever been researched: How hot can it get before people start getting noticeably "stupider"?


Our bodies can cool themselves decently well by sweating. I think that if the climate was affecting you to the point where your brain was getting cooked, the symptom would probably be less "slightly dumber" and more "seizures and organ failure".

But I'm not a doctor or a scientist, so I could be completely wrong. Hopefully someone more informed will weigh in.


When diving to 90ft+ you get 'stupider' (nitrogen narcosis). Our instructor had us do simple math problems on a board underwater and timed us. We were doing them very very slowly but you don't feel that it is slower. It was an interesting experience meant to warn you that you won't realize that you are affected by nitrogen narcosis.


I've been there. It feels so good, you forget things and get unfocused


It'd make sense that there would be a mechanism for graceful degradation, no?


I think that sweating profusely and feeling thirsty is the graceful degradation. I don't think there's a way to gracefully degrade from "my brain is a burger patty now".


Well, getting hotter than you can handle is (at least sometimes) called heatstroke. And some of the symptoms include "Altered mental state or behavior. Confusion, agitation, slurred speech, irritability, delirium, seizures and coma can all result from heatstroke." At least some of those sound roughly like "getting stupider," though it should be no surprise that a complex system doesn't respond in a simple manner to going outside of its "design limits."

http://www.mayoclinic.org/diseases-conditions/heat-stroke/ba...


Don't worry you're not being racist, humans of all ethnicities are made of the same proteins after all. And getting too cold will affect brain function too.


Nope, we have that awesome homeostasis thing going on which is why our bodies (and brains) can live in such a wide range of climates in the first place.

If you get hotter than your behavioural adaptations (shade seeking, splashing yourself with water, turning on the air conditioner) plus your own inbuilt capabilities (vasodilation, sweating) can handle, then you pretty much overheat and die.


There's no association of human body temperature with latitude.

So endogenous human thermal regulation is efficient enough to deal with the normal range of terrestrial environments inhabited by humans. And when it can't (if you're naked in the arctic, for example), you die.


Wouldn't they not get noticeably stupider, because the observer would be getting just as stupid?


Does this also explain why people in Scandinavia+Finland seem to be so clever?


> it takes k T log 2 joules to erase one bit of information...and our brain uses around 20 Watts of power

Yet another time when my Political Science education fails me. What should I take away from this?


The brain has a fixed amount of energy it can use per second, and it takes a fixed amount of energy to erase one bit of information (which you can think of as performing a computation). This means that there are a limited number of computations that the brain can do in one second.

(I'll add that there's no need to make this argument from thermodynamic principles. Your brain has a finite number of neurons, and each neuron can only fire a few times a second, so that limits the number of computations that your brain can do per second as well. And that limit is a lot tighter than the thermodynamic limit.)
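The counting version of that argument, with round, hypothetical numbers:

    import math

    k, T = 1.380649e-23, 310.0                # Boltzmann constant; body temp (K)
    landauer = 20.0 / (k * T * math.log(2))   # ~6.7e21 erasures/s on 20 W

    neurons = 86e9                            # commonly cited human estimate
    rate = 5                                  # spikes/s per neuron, "a few"
    spikes = neurons * rate                   # ~4e11 events per second

    print(landauer / spikes)                  # ~1.6e10: firing, not
                                              # thermodynamics, is the bottleneck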


> What should I take away from this?

Just that you should not omit, with "...", the most important part of the statement. You haven't studied physics; you see the formula along with its textual name - the most probable entry in Wikipedia - and then you throw the most useful bit away and ask what the formula without the name means, even though the name would give you an explanation.

pc86's quote:

> it takes k T log 2 joules to erase one bit of information...and our brain uses around 20 Watts of power

Original:

> it takes k T log 2 joules to erase one bit of information (Landauer's principle), and our brain uses around 20 Watts of power.

The explanation:

https://en.wikipedia.org/wiki/Landauer's_principle


I assumed there was some relation between k T log 2 joules and 20 Watts, probably giving some sort of context to how easily or not easily information in our brains could be lost. The theoretical principles behind the amount of energy required to erase one bit, while interesting, don't necessarily matter for that relationship, do they?


Landauer's principle would come into play if you had a theoretical "computronium" material that could perfectly turn energy into computation. Then the computation you could produce per volume of space would be limited by it.

Our brains are (very) far from being the perfect computing material (so are our chips).

The theoretical minimum is only useful here to show that our brains cannot do free work. (Irreversible) computation always has a cost.

The 20 watts is the important number. Our brain has a certain energy budget and has to perform within that constraint.


The log 2 comes from Shannon coding, because information (tm) encoding is modeled on a bit string of discrete energetic states (something quantum, too, I guess). And there is a transformation to the frequency domain (i.e. via discrete Fourier transform). Entropy is always measured in watts; it doesn't make sense to equate psychological information content with physiological entropic consumption.


[flagged]


The condescension isn't too helpful here unfortunately (is it ever?). You could have made your initial response a bit less condescending by saying something like:

The comment mentions "Landauer's principle", which provides some explanation: https://en.wikipedia.org/wiki/Landauer's_principle

Here as well, you could have made your comment better if you just omitted this part: "which you'd find if you'd attempt to read what you avoided... twice."


What do you find condescending in pointing out that a clear sentence with enough information was stripped of its most important part and then presented as "unclear"?

The way he quoted the comment, it's not even visible that the comment mentions it, as you suggest. He kept the formula of the principle and edited out only its name - which is the exact name of the entry in Wikipedia.

What do you find condescending when the person asking doesn't show that he even tried to read not only what the original sentence pointed to but also what the previous answer directly links, both times on such an obvious place as Wikipedia?

I see it either as intentional trolling, or as somebody who could actually profit from learning to read what's already written, and I don't see how I'm wrong here. But I welcome it if you explain exactly why, in my answer, I should pretend somebody didn't do what he did - or, more importantly, what most of the readers would benefit from. I'm much more a reader than an active writer here, so that's an important question for me.

I very often ask here for the clarifications of the statements for which I can't easily find the answers to. But here the editing out in the single sentence couldn't be more blatant.


Next time you ask for a clarification here I expect a list of the sources you approached first trying to figure it out on your own.

I'll be sure to offer my opinion on the breadth and depth of your research.


You are very smart.


You too, Pavel, my dear, thanks for identifying yourself. I know you say it only because it's true.

I'm very impressed how you masterfully contributed to the discussion.


Please don't be so thorny when other users are clearly participating with such earnest curiosity. This is exactly the attitude that Hacker News flourishes under and it's important not to squash it with such dogged ellipsis litigation.


I was not expecting such a useful and interesting answer. Thanks!


That's interesting indeed - is there any chance there's a connection between ketone metabolism being more efficient and 'hungry intelligence'? http://www.nytimes.com/2006/12/10/magazine/10section1C.t-1.h...


Note too that digestion is an extremely energy-intensive process, so that means less glucose to the brain (etc.). So much so that humans actually lose energy digesting raw meat.


Blood seems to be a pretty decent cooling agent. Our muscles can generate a lot more waste heat and we manage to get rid of it.


It's probably not so much the blood as a whole, but rather the water in it (in fact, I would expect the other parts of blood to make the water less efficient at carrying heat - but they may also have other advantages to that end).


Interesting to imagine a species which has evolved distinct, specialized circulatory systems: one for cooling and one for oxygenating tissues.


That hardly seems relevant. kT log 2 joules is something like 10^-7 times the energy of an electron. I seriously doubt that that is anything but negligible compared to the other operations brains are performing.


I did an estimate a couple of years ago comparing the brain's estimated information processing against the cost of "erasing" the same amount of information, under the assumption that all processed information is lost. It seemed to imply that the cost of forgetting is somewhere in the range of 1/1000 of the energy the actual information processing is thought to consume (approx 8 W).

It seems crazy that it can be that close given the minuscule energy information carries; to me those ~8 mW are completely mind-bogglingly big.

The most reasonable explanation is that my calculation was wrong, but at the time that did not appear to be the case, although it probably is anyway :)


Read Blindsight by Peter Watts. Apart from being an incredible novel it gets into this question, and it's freely available from the author's site: http://www.rifters.com/real/Blindsight.htm

Trying to avoid spoilers here, so ROT13 - Gur nyvraf rapbhagrerq va gur abiry ner abg pbafpvbhf, naq uhzna pbafpvbhfarff vf cerfragrq nf na ribyhgvbanel fvqr rssrpg / zvfgnxr juvpu erdhverf hf gb jnfgr uhtr nzbhagf bs cbjre guvaxvat nobhg guvaxvat, vafgrnq bs whfg guvaxvat. Bhe phygher, oebnqpnfg vagb fcnpr, nccrnef yvxr n QQBF nggnpx bs vafnar vachgf gb gur aba-pbafpvbhf nyvraf orpnhfr bs ubj zhpu rssbeg vg gnxrf gb cnefr guebhtu vg nyy.

(The neurological phenomenon of blindsight is also very, very interesting, and suggests that our brains may do more work than is strictly required)


I really can't communicate how awesome Blindsight is. Have you read the sequel?


I only made it through one third of the sequel. I thought the quality had taken a dramatic step down and couldn't bring myself to finish it.


The story-telling wasn't as great, but the ideas he plays with are still good (though admittedly not as great as the first).


Meh. I thought the ideas were no longer grounded in what's possible in this universe.


Give bumblebees radio, tv, reddit, hackernews, facebook and whatsapp. Pretty soon they too will be doing pretty much nothing...

Also, give them money and the idea that they will have to amass a pile of this paper before they die, instead of gathering the stupid nectar...


> Also, give them money and the idea that they will have to amass a pile of this paper before they die, instead of gathering the stupid nectar...

Are you serious? You can pick up your mobile phone and arrange to have nectar come to you in any amount you require (as long as you have the money to afford it).

In fact, to a honeybee, nectar is pretty much exactly analogous to money.


Like Salarium was once to us. :)


>In fact, to a honeybee, nectar is pretty much exactly analogous to money.

Can you or your newborn eat money?


Tiny ASIC vs large SoC with CPUs and FPGAs?


Can we do so little?

I think not.

I see a brain like an FPGA or a programming language: it can become anything, but it has to be led in the right direction.

Like, some people built Facebook with PHP and some people hacked together one of a million CMSes.

Some people wrote an OS in C and I wrote an Asteroids clone.


Or rather, why our digital neurons do so little. Mammal brains do quite a lot IMO.


The article mentions lots of different insect skills such as spatial representation, route finding, pattern recognition, predation, etc, and then says that "Neurons act a little like wires, carrying electrical signals from one part of the brain to another. They are a biological version of the circuit board in a computer".

After reading the article, I was impressed by the insects, not by the BBC explainers.


Why? That analogy is so widely recognised and accepted that engineers and neuroscientists are experimenting on computer processors to test whether the methods of neuroscience can explain a system we already understand.

"It does so by way of neuroscience’s favourite analogy: comparing the brain to a computer. Like brains, computers process information by shuffling electricity around complicated circuits."

http://www.economist.com/news/science-and-technology/2171497...


Wires conduct signals unconditionally. Neurons may or may not transmit a signal depending on its complex network of inputs and internal state. A better technological analogy for neurons would be a signal processor.

Also, neurons interface using chemical messages in the form of many different neurotransmitters. The electrical phenomenon of cellular depolarization might as well be an implementation detail. If anything resembles a wire, it's the axon.
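Even the crudest neuron model makes the difference from a wire obvious; a leaky integrate-and-fire sketch (toy constants throughout):

    import numpy as np

    rng = np.random.default_rng(0)
    weights = np.ones(50) / 50          # synaptic strengths (internal state)
    v, threshold, leak = 0.0, 1.0, 0.9  # membrane potential dynamics

    for t in range(100):
        inputs = rng.normal(0.1, 0.5, 50)  # 50 presynaptic "terminals"
        v = leak * v + weights @ inputs    # integrate inputs, with decay
        if v >= threshold:
            print(f"spike at t={t}")       # transmits only when inputs conspire
            v = 0.0                        # reset -- a wire would pass everything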


The article essentially says 'insects have very little computers in their heads'. To me at least, this doesn't meaningfully answer the question posed in the title. Furthermore, as per my comment, the BBC article characterizes neurons as 'wires', rather than the very sophisticated low-power processors they in fact are. Wetware implementing/running efficient algorithms honed by millions of years of evolution with (probably) trillions of test cases (all the insects that have ever lived).

[Edit: clarity]


I was hoping there would be something in the article about emergent behavior in colonies of insects. If you think of the bees as a big group of automata, the complexity isn't so much in the individual bee's brain, but in the collective mind of the hive.

The research on nematodes, fruit flies and dragonflies - non-colony creatures - seems to contradict this, except none of their behaviors seemed as complex to me as the problem the bees were solving. But that might just be my programmer's brain over-simplifying the seek and flee behaviors I've put into NPCs.


The article frames the behaviors it describes as "clever", but they're really not. The bees detect some pollen, get as near as they can, and their brain tells their bodies "try stuff!", and they exhibit some behaviors they typically use to get into small spaces or past objects. A "small minority" of the bees (quote from video of the test) get the pollen when they happen to try the right stuff. They excrete a chemical signaling to the other bees that they did something that worked, and the other bees can gradually copy the behavior.

None of this is "clever" -- these are amazing mechanisms that have been selected for, but it's not figuring things out, using logic, or using tools. It's much more akin to a simple learning network playing Super Mario Brothers and dying frequently until it eventually succeeds. It shouldn't surprise us that it only takes a few hundred thousand neurons to do (including a tiny, low-res, colorless visual cortex and olfactory system that maintain a tiny, low-res representation of their surroundings).

I'm not dismissing the wonders of nature, just trying to add some detail to a write-up that glosses over _how_ these things are working.
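In learning-network terms it's something like an epsilon-greedy bandit (a toy sketch; the actions and numbers are invented):

    import random

    random.seed(0)
    actions = ["crawl_under", "push_lid", "pull_string", "bite_flower"]
    value = {a: 0.0 for a in actions}      # the bee's learned action values
    EPSILON = 0.3                          # how often to just "try stuff"

    for trial in range(200):
        if random.random() < EPSILON or not any(value.values()):
            a = random.choice(actions)               # explore
        else:
            a = max(value, key=value.get)            # exploit what worked
        reward = 1.0 if a == "pull_string" else 0.0  # only one action pays off
        value[a] += 0.1 * (reward - value[a])        # simple value update

    print(max(value, key=value.get))       # -> "pull_string"

The "cultural spread" part then amounts to little more than seeding a naive bee's values from a demonstrator's behavior - still no logic or tool concepts required.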


Paramecia manage to do quite a lot without any brains at all [1]. They do have microtubules though [2].

[1] https://www.ebiomedia.com/the-biology-classics-paramecium-be...

[2] http://www.ruf.rice.edu/~bioslabs/studies/invertebrates/micr...


It's astonishing to me that on the one hand the article upends what had been common sense with experimental results, but then blithely goes on and makes assertions about the nature of cognition based on no other justification than it being the au courant model.


Carpenter ants have a mind and are aware of adult-sized predators following them (no harm came to the ants).


Heuristics. Lots of them.



