
Tracing Beliefs

Sometimes I wonder where I get some of my ideas. I also try to identify which of my beliefs have justification and which don't. Sometimes I find one whose origin I can't recall, but I can think of some justification, or know where to find some, and end up keeping it; other times I find no justification, can't remember where it came from, and so I stop believing the proposition and increase my uncertainty about it.

Recently, I think, I rediscovered a source for one of my shaping beliefs. The belief is that humanity is only going to survive long into the future if every human acquires the ability to destroy our planet's life completely, but chooses not to. I've blogged about that belief in passing a few times; maybe I'll do a full post devoted to it one of these days. Since I still believe it, it's one of those "I don't know where this came from, I'm pretty sure I didn't think of it on my own unlike some others, but I know some reasons why it could be a good belief to have" types of beliefs mentioned above. But now I'm pretty sure where it came from--not the original thinker, but how I first came across it.

It also passes the makes-a-prediction test: I don't anticipate living in a world thousands of years from now where individuals' control over things is about the same as, or less than, it is now. At the same time, this belief is one of the reasons I think nanotechnology arriving before intelligence augmentation is a bad idea. I don't think everyone will choose not to destroy everything, and molecular nanotech in the hands of everyone who wants it makes that a scary possibility, regardless of the immense benefits such technology would bring. I'm firmly in the camp that further technological progress will be either tremendously awesome or tremendously devastating; it won't be a mix, and it won't be the same old story. This isn't to say that intelligence enhancement won't carry the same risks, but I think more intelligent humans have a better shot at handling existential threats than the current crop. Even if I'm wrong, and we can survive without everyone being on board, I don't see a world where I'm wrong and we also don't have a way to stop people from destroying everything. If 100% isn't needed, the N% not on board will still need to be neutralized somehow for humanity to survive.


Is general machine intelligence inevitable?

Short answer: yes, provided humanity doesn't go extinct.

We currently have one example of general intelligence, and that's us. But we're not just singular objects; our brains and our minds are composed of parts. Just as surely as you can blind someone by removing their eyeballs, you can also damage the right parts of the brain to get the same result, and the eyeballs themselves will be fine. The person's reasoning and tasting faculties will also be fine; they just won't be able to see anymore.

The past century, and the last thirty years in particular, has brought tremendous advances in our understanding of the human brain as well as the human mind. Our understanding of the human brain is more or less complete in the sense that we can describe it in terms of networks of neurons. There are just so many neurons that it's incredibly hard to model anything sizable with computers right now.
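
To get a rough feel for the scale problem, here's a back-of-envelope sketch in Python. The ~86 billion neurons and ~7,000 synapses per neuron are commonly cited ballpark estimates, and 4 bytes per synaptic weight is just an assumption made for the sake of the estimate, so treat the output as an order-of-magnitude figure rather than anything precise:

```python
# Back-of-envelope: memory a naive synapse-level model of a human brain would
# need just to hold the connection weights. All figures are rough ballparks.

neurons = 86e9             # approximate neuron count in a human brain
synapses_per_neuron = 7e3  # rough average synapse count per neuron
bytes_per_synapse = 4      # assume one 32-bit float per synaptic weight

total_synapses = neurons * synapses_per_neuron
memory_bytes = total_synapses * bytes_per_synapse

print(f"Synapses: {total_synapses:.2e}")
print(f"Memory for weights alone: {memory_bytes / 1e12:.0f} TB")
# ~6e14 synapses and ~2,400 TB just for static weights -- before simulating
# any dynamics at all, which is why whole-brain-scale models are out of reach.
```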
