
Nanotech will probably come first

So I just read this nanotech article and thought: "Damn, this is already possible?" It doesn't seem like it will be long before we have self-replicating nanobots as well. For now I can only hope humans are responsible enough with the vast power nanotech grants not to screw up and cause a grey goo scenario.

Is it better that Friendly Artificial General Intelligence should come first, though? Of course it's impossible to halt progress on nanotech and focus all efforts on FAI, but if we could, should we?

One of the arguments for doing FAI first is that humans are notoriously untrustworthy. The moment nanotech looks promising, governments will swoop in and weaponize it. When nuclear energy came about, its first real application was a very destructive bomb. Even now its nicer, energy-producing capabilities aren't fully harnessed, but you can be sure there are still many nuclear weapons in the world. And given the Middle East situation, why would any of those countries not want nukes when Israel has several? Personally, I believe Iran wants nuclear technology for power, not for weapons, but that's another topic.

The other argument is that humans aren't very careful. It doesn't even require a malicious entity to destroy the entire world with nanotechnology; it could happen from a mere accident. Always install remote kill switches in your nanobots, and if they're self-replicating, make especially sure the replicas have them as well (a toy sketch of the idea follows below). You think nuclear weapons are bad? Nanoweapons are waaaay worse.
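As a minimal sketch of that kill-switch invariant in Python (purely hypothetical, of course; the class, channel name, and signals are all made up for illustration): the point is that replication only happens through a constructor that refuses to build a bot without a kill channel, so every generation stays remotely stoppable.

    # Toy model of kill-switch inheritance in self-replicating bots.
    # Hypothetical names throughout; this illustrates the invariant, not a design.

    class Nanobot:
        population = []  # every live bot registers here

        def __init__(self, kill_channel):
            # Invariant: a bot cannot be constructed without a kill channel.
            assert kill_channel is not None, "refusing to build an unstoppable bot"
            self.kill_channel = kill_channel
            self.alive = True
            Nanobot.population.append(self)

        def replicate(self):
            # Children inherit the parent's kill channel, so the remote stop
            # signal reaches every generation, not just the originals.
            return Nanobot(self.kill_channel)

        def receive(self, channel, signal):
            if channel == self.kill_channel and signal == "KILL":
                self.alive = False  # halt and self-destruct

    def broadcast_kill(channel):
        for bot in Nanobot.population:
            bot.receive(channel, "KILL")

    # One ancestor, two generations of copies, one broadcast stops them all.
    root = Nanobot(kill_channel="channel-7")
    children = [root.replicate() for _ in range(3)]
    grandchildren = [child.replicate() for child in children]
    broadcast_kill("channel-7")
    assert not any(bot.alive for bot in Nanobot.population)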

But I think it could be said that nanotech is the last of humanity's tests. It would just be too easy if we developed Friendly AI tomorrow and the world changed beyond prediction several days later. Humans ought to prove themselves worthy of an intelligence vastly superior to their own. When nuclear weapons came around, humanity wasn't facing so much a species-level test (it's hard to wipe out all human life, even with the number of nukes the world possesses), but there was still a test. There's a classic quote from Einstein that I'll paraphrase here:

I don't quite know with which weapons World War III will be fought, but World War IV will be fought with sticks and stones.

Einstein was referring to the possibility of nuclear warfare in WW3, which would destroy most of civilized humanity but not all of it. He didn't foresee nanotech, I don't think, because if WW3 is fought with nanotech there won't be a human species afterward; even having a planet that still supports life would be lucky. After a merely nuclear war, though, I'd say give us another 5000 years and we'd be back in the same spot.

So if we can get through nanotech without destroying ourselves completely, I think we'd be more than ready for AGI. There are also some arguments to be made from a "practical" side, though to explain them I must unfortunately invoke the fallacy of generalizing from fictional evidence.

Star Trek: warp speed, teleporters, food replicators, and holodecks (invented in that order, if memory serves). Warp speed is a simple engineering problem (in the Star Trek universe); in ours, we might yet be able to approach light speed (whether we can really go faster remains to be seen, but I have to doubt it). Teleportation is done by analyzing the quantum state of an entity, building an exact copy at another location, and then destroying the original. This is a nanotech problem, but not exactly one that requires self-replicators.

Food replicators, though, seem to require self-replication. In Star Trek, food, shelter, and hygiene are givens for humanity. Even if you don't work at all, you still get those. Then you do whatever interests you, and if it benefits humanity in some way, good for you! (Money doesn't exist.) Nanotech seems to imply a similar future: food, shelter, and hygiene become virtually free, money is destroyed, and people work for something more noble.

Holodecks aren't so much a nanotech issue as a computing-power issue. I find it interesting that they were invented so late...

So, we have all these fancy technologies, and even Star Trek doesn't do nanotech justice. But this is where Star Trek fails: Data, an android seemingly at least at the level of human intelligence, seeks to become more human rather than augment his own intelligence and program himself or a new android. With Data, the creators of Star Trek admitted AI was feasible, but of course very hard.

I believe real, general AI is indeed a very hard problem, even harder than nanotech, which is yet another reason nanotech will probably come first. I'm not sure I see this as a bad thing, either. If humanity passes this final test, we can use nanotech to augment our own intelligence. Smarter humans have a better chance of getting Friendly AI on the first try, and I agree it has to be right on the first try. We can't have an AGI that is not friendly to humans; we want to go on existing, after all. But once we make an AGI, we are finished if it's not Friendly with a capital F.

FAI is such an important problem, more important than nanotech, more important than anything. It's hard, too. Out of all possible mind designs (and that space is vast), we're trying to hit the tiny region of Friendly ones. This is an insanely hard problem, and it might be better tackled by augmented humans. I do believe Eliezer has a genuine shot at this, but if nanotech comes along that lets him augment his brain, he should do it immediately.
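To put the "tiny target in a vast space" intuition in rough symbols (a toy framing, not a real estimate; $$\epsilon$$ is just a made-up stand-in for whatever fraction of mind designs is actually Friendly):

[math]
P(\text{Friendly by chance}) = \frac{|\text{Friendly mind designs}|}{|\text{all possible mind designs}|} = \epsilon \ll 1
[/math]

Since we plausibly get only one irreversible try, success has to come from deliberate design rather than sampling the space and hoping, which is exactly what makes the problem so hard.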

Finally, it's more intuitively comfortable if FAI doesn't come first. With nanotech, at least, we can still reasonably imagine what the future will be like. Humans will probably still be much the same: we'll still have lots of sex, we'll still go hiking in the mountains, and we'll still have humans at the top of the intelligence food chain. With a Friendly AI Singularity, the future is off the table for prediction. You can point to standard transhumanist ideas like mind uploading, but even that isn't certain; we might not come out the other side alive. Now, doing things that are intuitively comfortable isn't that great of an idea (I argue against government, and government is intuitively comfortable, as was the idea of the Sun and planets revolving around the Earth).

So I've found I'm not really against the idea of nanotech coming first, and even if we could halt its progress to pursue FAI, I don't think we should. Getting FAI first would of course be the best, ideal outcome, but I can't help but feel nanotech is coming soon, and that's okay. If humanity can't pass the test it's about to face, then we don't deserve the benefits of either nanotech or a Singularity. I guess in that case the universe will just have to wait another age-of-the-universe to produce minds capable of creating other minds.


Posted on 2009-10-16 by Jach

Tags: singularity, technology

Permalink: https://www.thejach.com/view/id/33

