This passage really distills the essence of what happened:
"Despite skepticism in the neuroscience community, Markram won over the people who really mattered: funders at the European Commission, who seem to have looked less closely at the proposal's scientific feasibility than at its potential economic and political payoff."
Given the limited understanding of neuroscience and the problems of Markram's vision, the whole affair had the earmarks of a doomed project.
One can see similar problems, on a smaller scale, in Palm founder Jeff Hawkins's efforts to jump-start algorithmically oriented neuroscience research with his book On Intelligence and his company Numenta.
With all the failures plainly visible now, I still think there's some validity in seeing whether one can extract a broad vision from the disparate threads of neuroscience research.
There have been events in the history of science when outsiders walked in and saw a forest instead of a lot of trees - see Craig Venter, Alfred Wegener and others (and there have been many more cranks who claimed the same, of course). Seeing whether that kind of approach can gel is worth some effort in the study of the brain, a field whose complexity is so huge that gradual, iterative research could continue in places for a thousand years and not answer basic questions.
These two things have nothing to do with each other. Deep learning, despite the use of the word "neural", has nothing to do with neuroscience, and mapping the human brain is not the same as making an artificial neural network.
Actually, Numenta is doing well.
They keep working towards understanding how a human neocortex works. It took the academic neural net community more than 50 years to achieve its "amazing successes". Compared to that timeline, Numenta is moving pretty fast.
Most ambitious projects fail, period. Most startups don't succeed. I don't know if size is the issue.
I don't think a high failure rate even matters. If only 10% of billion-dollar projects succeed, then spending $10 billion should, in expectation, get you something equivalent to a working human brain simulation. And hopefully at least something is learned from the failures. Even if they don't succeed at building a human brain, that doesn't mean they don't learn anything.
The interesting question for me is how to buy those 9/10 failures more cheaply. Can we get our $1 billion failures for only $100m each? Or even $10 million? Then we can buy a lot more successes.
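To make the arithmetic explicit (a toy model with made-up numbers, not a claim about any real funding program): if each attempt succeeds independently with probability p, you expect 1/p attempts per success, so the expected spend per success is just the cost per attempt divided by p:

    # Toy model: expected spend per success when each attempt succeeds
    # independently with probability p (geometric distribution: 1/p
    # attempts on average).
    def expected_cost_per_success(cost_per_attempt, p):
        return cost_per_attempt / p

    for cost in (1e9, 1e8, 1e7):  # $1B, $100M, $10M per attempt
        print(f"${cost:,.0f}/attempt -> "
              f"${expected_cost_per_success(cost, 0.10):,.0f}/success")

At a 10% hit rate, cheaper failures translate one-for-one into cheaper successes, which is the whole argument for buying failures at a discount.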
I think the main barrier there, though, is that funders have to be in favor of interesting failure. With the described mega-science funding project, there's a big incentive to come up with dramatic plans that are novel and promise certain success.
I think the tech industry isn't perfect at this; we can all name companies that spent way too much money for what they learned. But our FFF/seed/A/B funding model gets us a good discount on interesting failures.
You've got no clue what you are talking about, do you? The thing is that many mega-projects fail for ridiculous reasons; mostly because high-ranking people make decisions outside the range of their expertise. Startup projects have very different characteristics and different stakeholders (i.e. mostly private). If 90% of the money could have been saved, then that's pretty bad.
I was never satisfied with their explanation for why they had to start with the human brain rather than the brain of a MUCH simpler organism like C. elegans.
Exactly. OpenWorm [1] is trying to do exactly that, emulate C. elegans, with all of 302 neurons. It doesn't work yet, but they're plugging along. When that problem is totally solved, and the neural model is working right, it's time to move up in complexity. Probably to lizards or birds.
About two decades ago, I heard Rod Brooks talking about building a human brain. He'd done some good artificial insects and was big on purely reactive AI. At the time, he was promoting Cog [2], as a way to get to human-level AI in one big jump. I asked "You've done an insect brain. Why not try to get to a mouse brain next?". He replied "I don't want to go down in history as the man who created the world's greatest robot mouse." 20 years later, Brooks doesn't even have mouse-level AI, although he does have a robotic vacuum cleaner company.
A mouse has about 1g of brain mass; a human has about 1000g. But they're both mammals, with about 90% similar DNA. If someone could simulate a mouse at the neural level, that would be real progress. Once you have a mouse simulation, human level is a scaling problem.
The Human Brain Project blew through a billion dollars in funding while OpenWorm is a volunteer effort. That's the real problem.
It's not clear at all that Mouse->Human is simply a scaling problem.
It would appear that there is some emergent phenomenon (consciousness) somewhere on the scale between mouse and human. This phenomenon may be triggered by a surplus of "computational resource" with respect to "body to be controlled", which I personally think is unlikely. It may be triggered by a matching of computational resource to controllable complexity, i.e. the hands and fingers -> tool use, who knows?
There is so much we don't know about cognition. For example, are whales/dolphins self-aware/conscious? They have the requisite brain mass, if it's simply a scaling problem.
Do you believe mice ask themselves "why do I exist?" or whether it is better "to be, or not to be"?
I am genuinely interested in your perspective.
I adore my dog. He shows a wide range of what I believe to be genuine emotions. But I do not think he is self-aware, and by self-aware I mean able to contemplate his own existence (and therefore his lack of existence). For that matter, I don't think he ever contemplates the lack of my existence (despite feeling it when I am not around, so I am told).
I think consciousness isn't a binary yes/no kind of property. It's more a range, where you can put humans arbitrarily at 1.0 and then a mouse might be 0.05 or something like that. It certainly has self-awareness of some kind; even basic biological homeostasis requires self-awareness of a limited kind.
Other animals, like chimps, dolphins, and elephants, then present more consciousness and would place somewhere between the mouse and humans.
Thought experiment: Dogs tend to act guilty when they know they've done something wrong. If you get two dogs together, and one does something naughty, do both act guilty afterwards or only one? If only one, then is it not reasonable to assume that the guilty dog internally models the situation as "I did bad thing, I am bad dog" while the other dog internally models it as "I didn't do bad thing, I am good dog"?
I'm not saying dogs sit around quoting Kant, but they're highly social animals and I find it hard to believe they don't have some internal representation of themselves.
No, we could always tell when my dog had gotten into something even before we found the evidence, because she'd be slinking around embodying the word "hangdog" from the moment we walked in the door. There's debate over whether this is actually guilt or simply anticipation of punishment, but there's no question that dogs know when they've done something they weren't supposed to do.
Reminds me of the Vonnegut quote from Cat's Cradle:
“Tiger got to hunt, bird got to fly;
Man got to sit and wonder 'why, why, why?'
Tiger got to sleep, bird got to land;
Man got to tell himself he understand.”
What would you observe differently about your dog if he were self-aware? Self-awareness wouldn't automatically let dogs speak their minds or write philosophical treatises on existence. When a dog is just sitting still, I have no idea if they're being self-reflective or not.
I would expect him to respond less reliably to stimuli. His behavior is closer to a large switch(stimulus){} statement than to the emergent behavior of human reactions to situations and stimuli.
One example: when he was a small puppy, he was stepped on by a very old man. From that point on, and to this day, many years later, he has had a major dislike of old people that he does not know well.
If the same thing happened to a child, I would expect the child to slowly, over time, question why they disliked all other old people when the old people they do know are kind to them. This would result in a change of behavior without someone explicitly teaching them.
Human reactions are hardly freeform. Most human responses are either instinctive or socially scripted/learned - more often, some combination of the two.
The only thing that makes humans truly unique is the fact that we can store, retrieve, and process information outside our bodies. While we can question our responses, as in your example, we don't - unless we live in a social environment that suggests we should. (Not all environments do. Some actively discourage it.)
Everything else we do - solving puzzles, counting, acting socially, teaching in person, showing emotions - appears in simple forms in at least some other animals.
Going up a level, humans are actually an extremely large, planet-spanning colony organism with very limited collective consciousness. We act more like an aimless flock of birds or mound-building insects than a collective with awareness of its own planetary situation.
I think the case for a collective consciousness for humanity is much too weak. An insect colony does way better. At least most individual insects tend to act toward the common good, rather than waging war on other insects in the same colony.
No way, I did not mean to imply I thought that mice do that. But I think that's setting the bar too high. How often does the average person ask themselves such questions after all? I think four year olds are conscious, and yet I doubt that they ask such questions either, and might even look at you funny if you asked them such a question.
Humans have 92% DNA commonality with mice, and 98% commonality with chimpanzees. There's a lot of handwaving about how special humans are, but it's not showing up at the genetic level.
I'm curious--did the Human Brain Project ever produce any noteworthy science, or did it actually blow through a billion bucks with nothing to show? Because, as far as I'm concerned, if we got a billion dollars' worth of science out of it, who cares if they hit the mark? That could still be a lot of good science!
Exactly. So much of neuroscience is still in "unknown unknown" phase, and the idea of just doing this massive interdisciplinary project to simulate a full human brain, built on the pile of sand that is any sort of "theory" of full-brain simulation ... it's like dedicating $1 billion to some dude's unpublished idea of proving P = NP. Bullshit should have been called before this ever got off the drawing board.
Sure seems like the EU has its fair share of incompetent technocrats if you can throw $1 billion at some nonsense any handful of PhDs in the field could have told you was nonsense if you'd just picked up a phone.
> any handful of PhDs in the field could have told you was nonsense
The problem with groundbreaking work is often that any handful of experts can tell you that it's nonsense. E.g., look at the reception that Barry Marshall got when he proposed that stomach ulcers were caused by bacterial infection. [1] All the experts "knew" they were caused by "stress, spicy foods, and too much acid" in the stomach. Eventually he got the Nobel prize, but you have to wonder what else we're missing out on because whoever discovered it wasn't as dogged or as lucky as Marshall.
Examples of this type are not a good guideline for policy decisions, almost by definition. For every Marshall, there's a million cranks whose ideas are rejected by the establishment and rightfully so. At the outset, all these ideas appear equally wild. But resources are scarce -- we can't just fund everything.
So what to do with a billion Euros? Obviously, we shouldn't put everything in a single pot. Particularly not in one crazy pot. That's why people complain about Markram's ego trip and enabling politicians. If you got a certain amount of funding, diversify: fund a couple of cranks, sure, but 90% should go toward what's often called normal science where small steps add up to comparatively predictable progress.
(In Markram's/the HBP's defence: Ultimately, the project will yield a metric tonne of normal science, especially in terms of infrastructure etc. However, everybody will be disappointed at the end...)
I think examples like Marshall can't be the only guideline for policy. But they are excellent examples for policy. Marshall made lives better for millions of people.
Anybody responsible for science funding should be keenly aware that the machine they steer is much better at responding to obvious failures like Markram's than to silent failures like what ulcer treatment was for decades.
Because it's unlikely that C. elegans can even learn. It doesn't have any brain structures equivalent to the ones in humans. Its neurons work totally differently. They are so different from us that anything we learn probably wouldn't apply to human brains. A more intelligent animal, ideally a mammal, would be a much better choice.
Yes, quite true, but in this thread, a bunch of people who know nothing at all about neuroscience and cognitive science - worse, who don't believe anyone anywhere knows anything about those subjects - are going to pontificate on why having a bunch of trained experts study the brain was always a terrible idea.
The thing I was always wondering about brain simulations is how to determine what level of detail is enough. What if neurons evolved to take advantage of some funny quantum phenomena that we may not even know about yet, and without which the simulation won't work? For this reason I would think that getting a simple brain simulation to work would at least help enumerate the physics most involved in their function.
Now maybe there is some extra physics that comes into play at scale (like some quantum-computery effect with human-size brains that induces consciousness), but maybe this would be detectable as small discrepancies in the small simulations (especially as they progress in size, e.g. to mice).
Since the brain has to work in a noisy environment, it is unlikely that its function depends too much on details. Neurons can't depend on effects we don't know about: if an effect is large, we would know about it, and if an effect is small, the brain has to work with noise of that size, so getting it exactly right is not necessary for its function. (It's a different matter for its exact output, but that's for later.)
You don't need to appeal to "quantum effect" to think that being able to understand, well enough to simulate, THE SIMPLEST NEURAL SYSTEM of any organism would be helpful before jumping in feet first with 1 billion euros into the most complex neural system. Of course it is a process over 10 years and they have "only" spent 200 million so far. I would expect that quite a bit of that money is going to study model organisms from c. elegans to mouse to chimpanzee. Studying simpler model organisms is one of the best ways we have to understand fundamental biological principles.
If there were such a quantum effect, it would contradict a lot of conventional theoretical physics: brains operate at high temperatures (~310 Kelvin), and the length scales involved are fairly large for quantum mechanics. You would only expect to see quantum effects at such large length scales at very low temperatures, because otherwise the correlation length is just too small.
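As a back-of-envelope check (my own numbers, and assuming a proton-mass particle as the relevant degree of freedom, which is itself generous): the thermal de Broglie wavelength at body temperature comes out around 0.1 nm, roughly four orders of magnitude below the ~1 micron scale of a synapse:

    import math

    # Back-of-envelope: thermal de Broglie wavelength
    # lambda = h / sqrt(2 * pi * m * kB * T).
    # Assumes a proton-mass particle; a real decoherence
    # estimate is more involved.
    h = 6.626e-34   # Planck constant, J*s
    kB = 1.381e-23  # Boltzmann constant, J/K
    m = 1.673e-27   # proton mass, kg
    T = 310.0       # body temperature, K

    lam = h / math.sqrt(2 * math.pi * m * kB * T)
    print(f"thermal wavelength ~ {lam:.1e} m")                # ~1.0e-10 m
    print(f"synapse scale / wavelength ~ {1e-6 / lam:.0f}x")  # ~10,000x

Anything quantum-coherent would have to survive across a gap that large at 310 K, which is why the conventional expectation is classical behavior at neural length scales.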
My line of thinking is, say it'll cost X to model C. elegans.
For even the simplest mammals (let alone humans), surely the cost is >> X. So might it be a decent idea to give it a shot with C. elegans first? Because if we fail with C. elegans, we're almost guaranteed to fail with a mammal. And it's not immediately clear to me that we currently have the ability to do it successfully even for C. elegans.
I am definitely not a neuroscientist so maybe I am misguided. I'm also not wedded to C. elegans as being the ideal prototype, and I doubt anyone is (except possibly the people already studying C. elegans). But the first attempt should definitely be on something as simple as possible.
It was Dalrymple's site. His work was open source and he collaborated with OpenWorm researchers. Looks like Dalrymple gave up to go work for Twitter. I guess he found a way to leverage his expertise in studying very small brains. Sigh. Greatest minds of our generation, yadda yadda.
John Cleese, of all people, summarized our current knowledge of the nervous system best in this video: https://youtu.be/FQjgsQ5G8ug?t=20
We are clueless. A lot of what is published is just neurobabble, even in 'serious' journals. As others have pointed out, we still don't even know how to simulate the brain of a tiny worm with 302 neurons. Talking about simulating billions of neurons is beyond fantasy, and funding such a project is.. well.. not surprising from politicians, but that is another matter.
Similarly, a lot of scientists are researching themes like 'consciousness' and 'executive functions' and still don't know how to make a robot walk like a normal person on two feet. We should be concentrating on trying to understand the simple stuff - the reflex arc, muscle control, grasping, and pointing a finger toward something.
[Ok, I just saw in another one of your posts that your MA involved perceptual control theory, so yeah, right on. I had this written already and feel a little silly now, but I figure the links are good and worth posting.]
I am finishing up my undergraduate degree in psychology right now. I do want to defend the work being done by cognitive psychologists on concepts at higher levels of abstraction - many researchers working in more mathematical areas of psychology tend toward things like consciousness, perception, and concepts and categorization, and there are cool advances there, though I won't go into them in this post; you end up working with constructs that have higher barriers of entry for communicating outside of groups of researchers who get excited about modeling and perceptual measurement theory. But I digress. The main reason I wanted to comment is that your point about understanding things like pointing hits on one of my favorite areas of psychology, one that has tragically always been somewhat niche, but which provides a lot of interesting connections between different areas if you take the time to get into it. There is a thread in the field that does just this kind of research, sometimes termed ecological psychology. It is often referred to colloquially as Gibsonian psychology, after [JJ Gibson](https://en.wikipedia.org/wiki/James_J._Gibson), who is seen as the father of the field (its practitioners are known colloquially as Gibsonians).
I was lucky enough to take Geoffrey Bingham's Perception/Action course a few years back (he specializes in reach-to-grasp behavior, iirc) and found it immensely rewarding. The primary text for the course was JJ Gibson's classic book "The Ecological Approach to Visual Perception". The philosophical foundation of Gibson's theory of affordances is to take a realist approach to perception, avoiding the troubles that mind-body dualism imposes and taking a more Heideggerian approach. The classic mantra of the field: ask not what is in your mind, but what your mind is in.
I wish I had time to say more now, but I need to run here.
But I'll leave you with a few snippets of what I took away from the course, and some links I think you'll enjoy, and can furnish more if you would be interested.
The Archival Gibson Video Series (Whoever assembled these had perfect choice in music)
Like any project, I'm sure there are many ways it can be improved.
Still, I'd much rather see $1B spent imperfectly on something important - a thousand times over - rather than a trillion plus on the F-35, or so many of the ways governments spend money.
No results from whatever effort is still a waste, no matter how wonderful the payoff might have been. This goes to the heart of why efforts like this get funding in the first place. The funding committees get blinded by the bright light of the potential payoff, and the money gets wasted instead of going to fund and actually deliver valuable projects that aren't as glitzy. But at least you would have ended up with a real, actual benefit.
> No results from whatever effort is still a waste, no matter how wonderful the payoff might have been.
That's easy to say with hindsight. No results from putting money into something doesn't mean you made the wrong choice. That might be the case here, but as a general statement it's really quite dangerously wrong.
> No results from whatever effort is still a waste, no matter how wonderful the payoff might have been.
That's only true from a cynically commercial standpoint. From a scientific standpoint, negative results are every bit as important as positive ones. As Edison said, "I have not failed. I've just found 10,000 ways that won't work."
(Which isn't to say that we shouldn't consider the likelihood of valuable results, positive or negative, in relation to the time, money, and effort spent, just that there are important considerations besides commercial payoff potential.)
IBM started questioning their funding of the project because of the lack of success criteria.
It's cool that you have many calculations running in parallel on a super computer, and each calculation is a simplified model of a neuron, but how do you know when you've actually replicated a brain? A brain is more than a collection of neurons, they need to be connected and do something.
In the current era, technological advances are far more important to progress in the life sciences than ideas or analytical/computational methods. The cycle of innovation is quite fast. The Human Genome Project is a classic example. Celera Genomics came out with a totally new DNA sequencing technology towards the end of the HGP, and managed to sequence the human genome in a fraction of the time. It so happens that the HGP had enough of a head start to release their data at the same time, and that their data and Celera's data were complementary in many ways. But imagine if the HGP had started 5 years later, and Celera had released their complete genome when the HGP had only done 50%? It too would be considered a grand failure.
A grand project needs to be timed very carefully. Even the ones that actually work well will be obsolete by their completion unless they happen to occur in the right part of the technology cycle.
A 10-part documentary is being made by film director Noah Hutton, with each installment detailing a year of the project's workings at the EPFL. Having started filming in 2009, the documentary is planned for release in 2020, after the years of filming and editing have finished. Regular contributions from Henry Markram and the rest of the team provide an insight into the Blue Brain Project, while similar research efforts across the world are touched on.
>While he was still in Germany, his son Kai had been diagnosed with autism. As he told The Guardian in 2013, he wanted “to be able to step inside a simulation of my son's brain and see the world as he sees it.”
Or terrifying, if you consider that a simulation of his son's brain is, in a very real sense, his son. And that when he shuts that simulation down he's killing it.
The Human Brain Project does not actually try to create a realistic human brain; instead it tries to randomly generate neurons and synapses whose statistical distribution matches that of the different kinds of neurons observed in the human brain. While this is utterly useless for things like intelligence or learning (both of which are wildly unrealistic given the simulation speed of the software anyway), for things like autism or Parkinson's, which go along with abnormally distributed brain patterns, it could still be very useful. Beyond the fact that Markram probably oversold the project, I think it also suffers from a clash of cultures. In physics and other branches of natural science, it has long been a given to do simulations instead of experiments once one is reasonably sure how things work. If the Human Brain Project's software is architected in the right way, additional insights into how synapses and neurons work, and how they are connected, could be incorporated over time. Obviously you wouldn't understand consciousness or learning this way, at least not right away, but solely relying on experiments won't get you there either, since they can only deal with dozens of neurons and not much more.
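For a sense of what statistically matched generation can look like (a minimal sketch with made-up neuron types and connection probabilities, not the HBP's actual code or data): draw synapses at random so that the connection probability between each pair of neuron types matches the measured statistics.

    import numpy as np

    # Minimal sketch: random connectivity whose per-type connection
    # probabilities match (hypothetical) observed statistics.
    rng = np.random.default_rng(0)

    types = ["pyramidal", "basket"]  # made-up two-type example
    p_connect = {("pyramidal", "pyramidal"): 0.10,
                 ("pyramidal", "basket"):    0.25,
                 ("basket",    "pyramidal"): 0.30,
                 ("basket",    "basket"):    0.05}

    # Assign types in an observed 80/20 mix, then sample each
    # directed pair against its type-pair probability.
    neurons = rng.choice(types, size=200, p=[0.8, 0.2])
    synapses = [(i, j)
                for i, pre in enumerate(neurons)
                for j, post in enumerate(neurons)
                if i != j and rng.random() < p_connect[(pre, post)]]
    print(f"{len(synapses)} synapses among {len(neurons)} neurons")

The point is that such a network reproduces the statistics of cortical wiring without reproducing any particular brain's wiring, which is exactly why it could be useful for studying distribution-level abnormalities and useless for recovering an individual's cognition.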
I might go so far as to say that the issue isn't with the management structure so much as the complete lack of a management structure. It takes a well run and managed organization to spend a billion euros effectively.
"Despite skepticism in the neuroscience community, Markram won over the people who really mattered: funders at the European Commission, who seem to have looked less closely at the proposal's scientific feasibility than at its potential economic and political payoff."