No, Drexler doesn't think it's nonsense. He just thinks that replicators will be unnecessary for the manufacturing purposes he has in mind. He says:

>In particular, it turns out that developing manufacturing systems that use tiny, self-replicating machines would be needlessly inefficient and complicated. The simpler, more efficient, and more obviously safe approach is to make nanoscale tools and put them together in factories big enough to make what you want.

(note that he explicitly acknowledges the safety risk) and

>The popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply and eat the world. But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task. I worry instead about simpler, more dangerous things that powerful groups might build deliberately - products like cheap, abundant, high-performance weapons with a billion processors in the guidance systems.

This does nothing to diminish the risk of replicators if they are, in fact, created. And there are all sorts of possible problems for which replicators would be essential. For example, we may want to release replicators into the environment to clean up certain kinds of pollution that can't easily be brought to a central facility.


That's like saying, "This does nothing to diminish the risk of a Moon-based Turbo Death Ray if one is, in fact, created." We might be able to build one if we tried really hard, but it would be pointless and difficult, and there are so many more pedestrian failure modes that it hardly seems worth worrying about.

Drexler thinks we underestimate the difficulty of building runaway replicators. Nature has had 4 billion years and hasn't managed it. Yes, I'm aware of the wheel argument.


First, you say building replicators is pointless, without giving an argument. This is clearly wrong: I have already given you one example of a use for replicators (environmental cleanup), and here's another: weapons.

Second, you say it is difficult, with many failure modes. So was going to the Moon. How can you possibly think it's so difficult that it will never get done? Not even in a thousand years?

Third, you cite Drexler to claim that we underestimate the difficulty. (Who underestimates it? Me? The irrational people Drexler is afraid will take away his funding, or the handful of academics who seriously consider the issue?) The argument for extreme caution does not rely on runaway replicators being easy to build, only on their being reasonably possible and the consequences being catastrophic. Can you really argue, with 99% certainty, against the feasibility of a future technology that no physical law rules out?

Fourth, you say you are aware of the wheel argument... so... what is your response? Should we also consider the laser argument? The computer argument? The spaceship argument? Or the argument from any of the nearly countless things humanity has created in the past 40 years that never existed in the previous 4,000,000,000 years that life was around?


I apologize, but I don't have the time to put together a considered response right now, though I wish I did. Maybe this evening.

I do find it interesting how much anger I encounter whenever I even remotely question one of the Singularitarians' babies. There seems to be more... emotional attachment... than is healthy for skeptical inquiry.


You would encounter less frustration if you engaged others' arguments rather than dismissing them out of hand. That doesn't mean you have to argue with every crank on the internet, but there's not much point in writing a comment that declares an argument wrong without explaining why.

I'd very much like to hear what you have to say, because even when I've discussed this with academics who work in nanotech (though I've never spoken to anyone working directly on replicators), I've never heard a better argument than "it's really, really hard".


I'd still very much like to hear what you have to say!



