Jach's personal blog

(Largely containing a mind-dump to myselves: past, present, and future)
Current favorite quote: "Supposedly smart people are weirdly ignorant of Bayes' Rule." William B Vogt, 2010

Is general machine intelligence inevitable?

Short answer: yes, provided humanity doesn't go extinct.

We currently have one example of general intelligence, and that's us. But we're not singular objects; our brains and our minds are composed of parts. Just as surely as you can blind someone by removing their eyeballs, you can get the same result by damaging the right parts of the brain, leaving the eyeballs themselves fine. The person's reasoning and tasting faculties will also be fine; they just won't be able to see anymore.

The past century, and in particular the last thirty years, has brought tremendous advances in our understanding of both the human brain and the human mind. Our understanding of the brain is more or less complete in the sense that we can describe it in terms of networks of neurons. There are just so many neurons that it's incredibly hard to model anything sizable with computers right now.

Here is the first reason why general intelligence on a machine is inevitable. Suppose that our neuron-models of the brain are right, and that if you simulate a large enough neuron network shaped like a human's, out will pop a human mind. The second assumption is actually easier to justify than the first. Even knowing nothing about the physics of sound, you know that if you take a WAV file and work through its samples (whether by letting a CPU crunch the numbers or by doing the arithmetic by hand with pen and paper), the resulting voltages will eventually reach a speaker and out will come the song the file encoded. It doesn't matter whether those voltages are determined by guessing, CPU computation, string vibration on an electric guitar, or pen and paper, so long as they agree. Out will pop music.
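The substrate-independence point can be illustrated with a toy computation. Here's a sketch in Python that produces the same audio samples two very different ways: once with the library sine function, and once "by hand" with a Taylor series, standing in for the pen-and-paper calculation. The frequencies and tolerances are illustrative, not drawn from any real audio pipeline.

```python
import math

def sine_sample_builtin(t, freq=440.0):
    """Compute one audio sample using the library sine."""
    return math.sin(2 * math.pi * freq * t)

def sine_sample_taylor(t, freq=440.0, terms=20):
    """Compute the same sample 'by hand' with a Taylor series,
    as if working it out with pen and paper."""
    x = math.fmod(2 * math.pi * freq * t, 2 * math.pi)  # keep the series stable
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

# Two very different procedures, one answer: the "music" that comes out
# doesn't depend on how the numbers were produced, so long as they agree.
for i in range(100):
    t = i / 44100.0  # CD-quality sample times
    assert abs(sine_sample_builtin(t) - sine_sample_taylor(t)) < 1e-9
```

The point isn't the sine wave; it's that any procedure producing the same numbers produces the same song.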

Now, with those two assumptions, the only remaining requirement for human-level general intelligence running on a computer is that we can actually run the simulation. We can't do that yet, but a lot of people agree we'll have that kind of computing power before this century is out.

The recent flattening of clock speeds doesn't matter, for three reasons. One, they're likely to go up again once graphene and other new materials come into use. Two, neurons fire at roughly 20 to 200 hertz, while an i7 processor runs at 3 gigahertz; that's at least 15 million times faster. Three, the magic of the brain is largely its incredibly parallelized structure, and stagnant clock speeds have forced us to start dealing with those parallelization problems, where we're making good progress indeed.
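The "15 million times" figure is just the ratio of the two clock rates; here's the back-of-the-envelope arithmetic, using the generous 200 Hz end of the neuron range:

```python
# Back-of-the-envelope comparison: neuron firing rates vs. a
# 2011-era desktop CPU clock.
neuron_hz_max = 200          # upper end of typical neuron firing rates
cpu_hz = 3_000_000_000       # 3 GHz, e.g. an Intel i7 of the era

ratio = cpu_hz / neuron_hz_max
print(f"{ratio:,.0f}x")      # 15,000,000x
```

Against the 20 Hz end of the range the gap is ten times larger still.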

The second reason machine intelligence is inevitable is because human brains and human minds are composed of parts. Finitely many parts. We've already developed computers that are adequate for visual recognition (seeing), sound recognition (hearing), force recognition (touching), and chemical recognition (tasting and smelling). We've already developed robots that can move in dynamic terrain and pick up a multitude of different objects. What's left? Tying it all together. Intelligence is left. More specifically, a system that can make sense of all its inputs, then act according to some goal, is all we need.
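The "tie it all together" system described above can be sketched abstractly: sensors feed a world model, and actions are chosen according to a goal. This is a minimal toy, and every name in it (the goal string, the temperature reading, the action labels) is illustrative; no real robotics framework is implied.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy sense-then-act loop: fold inputs into beliefs, act toward a goal."""
    goal: str
    beliefs: dict = field(default_factory=dict)

    def make_sense(self, percepts: dict) -> None:
        """Fold raw sensor readings into the agent's world model."""
        self.beliefs.update(percepts)

    def act(self) -> str:
        """Pick an action according to the goal and current beliefs."""
        if self.goal == "stay_warm" and self.beliefs.get("temp_c", 20) < 15:
            return "turn_on_heater"
        return "do_nothing"

agent = Agent(goal="stay_warm")
agent.make_sense({"temp_c": 10})  # vision/touch/etc., reduced to numbers
print(agent.act())                # turn_on_heater
```

The hard part, of course, is everything hidden inside `make_sense` and `act` once the world is more complicated than one thermometer.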

The hard part of this problem is the "making sense" part. The actual input sensors and output actuators, while interesting and challenging, are mechanical engineering problems, mostly a matter of time to put together; they're easier than the "making sense of things" part. What separates us from chimps, when we both have four limbs, a nose, two eyes, and can use tools? Why was it humans who built the pyramids less than 7000 years ago, when before that humans, chimps, and our ancestors alike spent tens of thousands of years accomplishing nothing of note with mostly the same input sensors and output actuators? Clearly the missing piece is the human mind.

Intelligence is powerful. Normal processes like shifting plates, volcanoes, and asteroids create giants like mountains over incredibly long periods of time. Evolution would never create something as large as a mountain, though it did create animals large relative to us, again over very long periods. (By "evolution would never create", I mean there's no feasible way for natural selection to reach a mountain-sized animal.) Human intelligence, on the other hand, can create skyscrapers on a scale of days. We can launch things into space fairly easily now too, a trick we've learned only in the past half-century. We can blow up mountains and kill the biggest animals, both very quickly.

Normal processes, like the behavior of chemicals, created the first strands of RNA and eventually DNA. So far as we can tell, this has happened only once, at least in our neck of the universe. Evolution eventually went from single-celled organisms to multicellular life, but it was a very long time before things like plants and animals and then humans came about. Evolution still produces tiny viruses that can damage the larger things, but they too are slow to emerge. Humans are still mastering this field, but we're getting better: we've made devices smaller than RNA strands, and we can do it fairly quickly, again on a timescale of days (perhaps months for really interesting devices).

So really, this "making sense of things" is just another way to frame intelligence in the most general sense, without the human-specific issues of morality, emotion, hormones, and so on. When there are finitely many things to make sense of, this naively becomes a brute-force search problem, and the only thing stopping us from solving it is, again, insufficient computing power.
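A toy version of "brute force over a finite space" makes the naive framing concrete: enumerate every boolean function of two inputs (there are only 16) and keep the one that matches a target behavior given as input/output examples. The XOR target and the helper names are purely illustrative.

```python
from itertools import product

# Target behavior given as examples: this happens to be XOR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def find_function():
    """Brute-force search: try all 16 truth tables of two boolean inputs."""
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for table in product([0, 1], repeat=4):     # one output bit per input row
        candidate = dict(zip(inputs, table))
        if all(candidate[x] == y for x, y in examples):
            return candidate
    return None

solution = find_function()
print(solution)  # the XOR truth table
```

With 16 candidates this is instant; the catch, as the paragraph says, is that for anything mind-sized the finite space is astronomically large, which is why the bottleneck is computing power.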

If you accept that computing power will eventually increase massively from today's levels, even if it takes 1000 more years, even if it takes 7000 more years because a third world war wipes out all modern civilization and technology, then we have two reasons why machine intelligence is inevitable. The only case in which it's not inevitable is the one where humans no longer exist.

Given that intelligence as a process is so incredibly powerful and useful, I think we should try our hardest to get it done before this century is out, and furthermore not wait for more computing power and a brute-force search to stumble on it, but actually design intelligence mathematically and algorithmically. It would be great, too, if we could really understand human intelligence instead of just throwing the fundamental parts into a simulator and letting a human mind pop out. But if we can encode human intelligence and human minds in a computer, those things suddenly become much easier to work with and improve.

For example, you can build digital logic circuits with a breadboard, the right logic chips (such as AND gates), wires, and power, but putting one together and then changing or improving it is incredibly cumbersome. If you instead encode the circuit in a language such as Verilog, you just change some words in a text file and your change is complete.
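The same circuit-as-text idea can be sketched in Python rather than actual Verilog: describe a gate by a single word, and changing that word rewires the "circuit" with no breadboard in sight. The gate table and function names are illustrative.

```python
import operator

# A circuit described as text/data is trivial to edit;
# rewiring physical chips on a breadboard is not.
GATES = {"AND": operator.and_, "OR": operator.or_, "XOR": operator.xor}

def run_circuit(gate_name: str, a: int, b: int) -> int:
    """Evaluate a one-gate 'circuit' described by a single word."""
    return GATES[gate_name](a, b)

print(run_circuit("AND", 1, 1))  # 1
# Changing one word in the description changes the whole circuit:
print(run_circuit("XOR", 1, 1))  # 0
```

That one-word edit is the whole argument: descriptions in text are cheap to change in a way physical assemblies never are.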

Human brains are currently hard to change, as anyone in the mental health field knows. Imagine being able to help someone by changing words in a text file, instead of having them swallow a capsule that sends chemicals through the bloodstream to the brain, where those chemicals trigger further chemical reactions, electrochemical signaling, and hormone release and absorption, a huge mess of indirection. Text-file changes are strictly simpler, and I'd argue they're easier and less expensive in this case too. There's your economic incentive.

Posted on 2011-11-04 by Jach

Tags: artificial intelligence

