
Tracing Beliefs

Sometimes I wonder where I get some of my ideas. I also try to identify which of my beliefs have justification and which don't. Sometimes I find one whose origin I can't recall, but I can think of some justification (or where to find some) and end up keeping it; other times I find no justification, can't remember where the belief came from, and so I stop believing the proposition and increase my uncertainty about it.

Recently I rediscovered a source for one of my shaping beliefs, I think. The belief is that humanity will only survive long into the future if every human acquires the ability to destroy our planet's life completely, but chooses not to. I've blogged about that belief in passing a few times; maybe I'll do a full post devoted to it one of these days. Since I still believe it, it's one of those "I don't know where this came from, I'm pretty sure I didn't think of it on my own unlike some others, but I know some reasons why it could be a good belief to have" types of beliefs mentioned above. But now I'm pretty sure where it came from--not the original thinker, but how I first came across it.

It also passes the makes-a-prediction test: I don't anticipate living in a world thousands of years from now where individuals' control over things is about the same as or less than it is now. At the same time, this belief is one of the reasons I think nanotechnology arriving before intelligence augmentation is a bad idea. I don't think everyone will choose not to destroy everything, and molecular nanotech in the hands of everyone who wants it makes that a scary possibility, regardless of the immense benefits such technology would bring. I'm firmly in the camp that further technological progress will be either tremendously awesome or tremendously devastating. It won't be a mix; it won't be the same old story. This isn't to say that intelligence enhancement won't carry the same risks, but I think more intelligent humans have a better shot at handling existential threats than the current crop. Even if I'm wrong, and we can survive without everyone choosing restraint, I don't see a world where I'm wrong and we also lack a way to stop people from destroying everything. If 100% isn't needed, the N% not on board will still need to be neutralized somehow for humanity to survive.

More on the source of the belief, which is still pretty interesting: "Talking to God." Go read it if you haven't; it goes pretty fast. Apparently a revision was published in 2010, but the original was published in 2005, which sounds about right for when I remember last reading it. Rereading it now, it's not a particularly great work of writing, and there are many issues one can take with it, but it's still got some kick, as it were. Take for example this quote from God:

‘If you think the dangers of genetic warfare are serious, imagine discovering an algorithm, accessible to any intelligent individual, which, if abused, will eliminate your species instantly. If your progress continues as is, then you can expect to discover that particular self-destruct mechanism in less than a thousand years. Your species needs to grow up considerably before you can afford to make that discovery. And if you don’t make it, you will never leave your Solar System and join the rest of the sapient species on level two.’

I use nanotech dangers purely as an example of existential risk. Nuclear warfare, genetic warfare, nanotech warfare, biotech warfare, manufactured diseases, warfare involving things we can't even currently conceive yet, environmental problems that can kill us like ocean acidification or global warming or the death of our Sun or a giant asteroid collision requiring powerful technologies to overcome... The array of existential risks is vast, and as we get more and more powerful the chance of us undoing ourselves because of our own creations is greater than because of something out of our control.

‘The only ones who reach level two are those who learn to accept and to live with their most dangerous knowledge. Each and every individual in such a species must eventually become capable of destroying their entire species at any time. Yet they must learn to control themselves to the degree that they can survive even such deadly insight. And frankly, they’re the only ones we really want to see leaving their solar systems. Species that haven’t achieved that maturity could not be allowed to infect the rest of the universe, but fortunately that has never required my intervention. The knowledge always does the trick.’

Again the story puts forth this belief. It's justified in the fiction--it's simply the way things are there. The fictional God has experimental evidence backing up his claims. Reasoning from fictional evidence is surprisingly common in humans, even smart ones--just start talking to people about general AI and they'll respond with clichés from the Terminator movies and so on.

The story was most likely the catalyst for my exploring things like transhumanism and for considering intelligence explosions and other Singularities as actual possibilities within my lifetime. I find it fascinating that such a short piece of fiction could have such far-reaching effects; it's even more fascinating that the fiction itself kind of hints that that should be the case. None of the story's ideas were original for the time, but I'm sure it was the first place I read them all at once, six years ago. I'm glad I came upon the story again so I could do the tracing, but a true master should be able to trace all of their beliefs without stumbling upon an incredibly helpful marker. I can still improve a lot; I'm not near a master's level. In any case, I think tracing is a great technique for keeping your beliefs in line.


Posted on 2011-11-26 by Jach

Tags: personal, philosophy, rationality

Permalink: https://www.thejach.com/view/id/219

Trackback URL: https://www.thejach.com/view/2011/11/tracing_beliefs
