TheJach.com

Jach's personal blog

(Largely containing a mind-dump to myselves: past, present, and future)
Current favorite quote: "Supposedly smart people are weirdly ignorant of Bayes' Rule." William B Vogt, 2010

Tracing Beliefs

Sometimes I wonder where I get some of my ideas. I also try to identify which of my beliefs have justification and which don't. Sometimes I find one whose origin I can't recall, but I can think of some justification, or where to find some, and I end up keeping it; other times I find no justification and can't remember where it came from, so I stop believing the proposition and increase my uncertainty about it.

Recently I rediscovered a source for one of my shaping beliefs, I think. The belief is that humanity is only going to survive long into the future if every human acquires the ability to destroy our planet's life completely, but chooses not to. I've blogged about that belief in passing a few times; maybe I'll do a full post devoted to it one of these days. Since I still believe it, it's one of those "I don't know where this came from, I'm pretty sure I didn't think of it on my own unlike some others, but I know some reasons why it could be a good belief to have" types of beliefs mentioned above. But now I'm pretty sure where it came from--not the original thinker, but just how I first came across it.

It also passes the makes-a-prediction test: I don't anticipate living in a world thousands of years from now where individuals' control over things is about the same as, or less than, it is now. At the same time, this belief is one of the reasons I think nanotechnology coming before intelligence augmentation is a bad idea. I don't think everyone will choose not to destroy everything, and molecular nanotech in the hands of everyone who wants it makes that a scary possibility, regardless of the immense benefits such technology would bring. I'm firmly in the camp that further technological progress will either be tremendously awesome or tremendously devastating. It won't be a mix, and it won't be the same old story. This isn't to say that intelligence enhancement won't carry the same risks, but I think more intelligent humans have a better shot at handling existential threats than the current crop. Even if I'm wrong, and we can survive without everyone being on board, I don't see a world where I'm wrong and we also don't have a way to stop people from destroying everything. If 100% isn't needed, the N% who aren't on board will still need to be neutralized somehow for humanity to survive.

See Full Post and Comments

Is general machine intelligence inevitable?

Short answer: yes, provided humanity doesn't go extinct.

We currently have one example of general intelligence, and that's us. But we're not just singular objects; our brains and our minds are composed of parts. Just as surely as you can blind someone by removing their eyeballs, you can blind them by damaging the right parts of the brain, leaving the eyeballs themselves fine. The person's reasoning and tasting faculties will also be fine. They just won't be able to see anymore.

The past century, and in particular the last thirty years, has brought tremendous advances in our understanding of the human brain as well as the human mind. Our understanding of the human brain is more or less complete in the sense that we can describe it in terms of networks of neurons. There are just so many neurons that it's incredibly hard to model anything sizable with computers right now.

See Full Post and Comments

Joins as Matrices

Warning: if you're unfamiliar with SQL Joins, go have a look at the Venn Diagram explanation here.

We'll start with the last example, Cartesian Joins. Recall the definition of a Cartesian Product:

[math]X\times Y = \{\,(x,y)\mid x\in X \ \text{and} \ y\in Y\,\}.[/math]
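To make that concrete in code (a minimal sketch of my own, not from the original post), here is the Cartesian product written out in Clojure; a SQL cross join produces exactly these pairings, with table rows standing in for the set elements:

    ;; Cartesian product of two collections: every pairing of an element
    ;; from xs with an element from ys. A SQL CROSS JOIN does the same
    ;; thing with table rows.
    (defn cartesian-product [xs ys]
      (for [x xs
            y ys]
        [x y]))

    (cartesian-product [:a :b] [1 2 3])
    ;; => ([:a 1] [:a 2] [:a 3] [:b 1] [:b 2] [:b 3])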

See Full Post and Comments

Multicore programming with Clojure

This is really just a blurb, not a serious introduction or set of examples. A few months ago I wrote about how I prefer Python's map() to its list comprehensions, even if list comprehensions look and feel more Pythonic. The main reason is that map makes it simple to extend functional code to multiple cores or machines without changing the original code, just by swapping in a clever version of the map function.

Clojure has such a clever version built in, called pmap. Basically, it works just like map but applies the mapper function to the input dataset in parallel. (Hence it really shines when the mapper function's runtime dominates.) I just wanted to gush over how awesome it is. Clojure also includes a time macro that makes benchmarking easy. Check out the docs here for an example.
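As a rough illustration (my own sketch, not taken from the full post; slow-square and nums are made-up names), swapping map for pmap is a one-character change, and the time macro shows the difference when each call does enough work:

    ;; A deliberately slow mapper function, so per-element work dominates
    ;; the overhead of coordinating threads.
    (defn slow-square [x]
      (Thread/sleep 100) ; simulate 100 ms of work
      (* x x))

    (def nums (range 16))

    ;; Sequential: roughly 16 x 100 ms of wall-clock time.
    (time (doall (map slow-square nums)))

    ;; Parallel: pmap spreads the calls across available cores.
    (time (doall (pmap slow-square nums)))

The doall calls matter because map and pmap are lazy; without forcing the sequence, time would only measure building the lazy seq, not the actual work.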

That's it.

See Full Post and Comments

The Graph Nature of Reality

I'm not talking about the real reality that quantum electrodynamics, quantum chromodynamics, and general relativity describe. I'm not making statements about the fundamental level, the only truly real level, but about what reality kind of looks like at a larger scale if you squint my way for a moment.

Nature has tuned us to think heavily in Cause and Effect. A chain, one thing proceeding to the next. Sometimes human choice dictates the direction of that chain, but human choice contains its own cause and effect cycle with choice and consequence. Only a few smart thinkers in history have seen beyond this, and only for a moment. Consider this quote from George Santayana, circa 1905-1906 in The Life of Reason. (Emphasis mine.)

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

See Full Post and Comments

Where did God get his morality?

I don't remember when I first thought this, or perhaps when I first heard it, but when I've considered it more recently I've regarded it as a clever remark, not a particularly substantive one. It's of the same order as this similar quip: "God is, himself, an atheist because He doesn't believe He has a creator. (At least He hasn't said anything about it.)"

It seems like a rather obvious argument to me, but I don't think it's really that obvious to most people, especially people who wonder how atheists could have any morals at all. I think a background in programming makes it seem obvious: for a good programmer, indirection and recursion start to become natural. "Who created God?" "If this reality is a simulation, is the environment we're simulated in also a simulation?"

The first thing we must realize is that even Divine Morality changes. The Bible demonstrates that God can change His mind, and a plain historical account of the Catholic Church shows that its positions on certain issues differ significantly from its founding views. I don't think this is very controversial, and I don't mean to imply morality can change into anything; it still must fall within certain bounds.

See Full Post and Comments

Smartness

I distinguish smartness from intelligence in the following way: intelligence, specifically human intelligence, is simply what the human species is and does. Every human has intelligence, and roughly the same amount as any other, from the dumbest idiot to the brainiest genius, barring large amounts of brain damage. This is because we're all the same species: our brains are all more or less the same "hardware", our genes are more or less the same, and so on.

The difference between the intelligence of a chimp and that of a human is staggering, even though we share about 95% of our DNA with chimps. Put simply, the smartest chimp can't match the dumbest fully functioning human. There are thoughts that a chimp brain, due to its design, is literally incapable of holding, but that a human brain can.

Yet there's clearly variation among humans. I call this smartness. Intelligence is a spectrum, with a minimum (a rock) and a maximum (AIXI with some modifications), with humans and chimps occupying points on the line very near each other. I hope we as a species will be able to build the next step up from human intelligence and create something not only smarter than us in every measurable way, but simply more intelligent.

See Full Post and Comments