Jach's personal blog

(Largely containing a mind-dump to myselves: past, present, and future)
Current favorite quote: "Supposedly smart people are weirdly ignorant of Bayes' Rule." William B Vogt, 2010

Idiocy, and Why is disabling allowed?

So besides being more practical to allow disabling a violent defector (but not killing them!), why else might it be good, and why should we not kill anyway?

Let's briefly examine the history of mankind. Okay, pretty much everyone who was alive 120 years ago is now dead. Gone. Vanished from the universe, unlikely ever to be repeated. They are just dust. Now, 100 years ago, this seemed pretty inevitable to everyone living at the time. (And it turns out, it was.) If you were alive, you were going to die. If it wasn't from a disease, it would just be from old age eventually.

But then these really, really, really stupid people invented something called war and had the great idea of killing other people! Hey, they're gonna die anyway, right? Why not kill them? Why not get so angry at your cousin being murdered by a Jew that you go out and kill ten random Jews? Why not get so angry at a handful of people you don't know, don't even know anyone who knows them, who got killed by another handful of people you similarly have no personal connections with, and decide to kill all the people associated with that group anyway?

See Full Post and Comments

Research Paper: The Need For Friendly Artificial General Intelligence

This paper is best read as a PDF, which you can download here. I have included a copy-paste of the text here for your benefit, however. This was my final paper project for my Sociology class. (Side note: research papers are easier than full-blown essays since you're just researching what other people have said rather than trying to develop your own techniques!)


The word ``Singularity'' is overloaded: generally, the meaning implies a point of time in the future when an entity with greater-than-modern-human intelligence exists. However, there are three ``schools'' of thought that accompany this definition. (see YudSchools) The first is called Accelerating Change, and is commonly advocated by Ray Kurzweil. Our normal human intuitions are primed to think that roughly the amount of change experienced in our past can be expected in the future, while modern times have uprooted that intuition because with modern technology, the rate of change increases exponentially (and this can be seen using graphs). The invention of the printing press caused a surge in printed materials, and as the printing press improved so did the amount of printed materials. The invention of the modern computer caused a surge in many, many fields, and as the computers get better those fields too get better.

See Full Post and Comments

Sociology Memo: Is it Ethically Permissible to Clone Human Cells?

The ``No'' author, Van Gend, presents an interesting flavor of arguing that initially begins by insulting the benefits of cloning, calling it a waste of hope. Well, yeah, if you have 8 years of no stem cell research (in the US) and in general no time to test these things, there are going to be problems. I will agree with him that cloning is rather a misplaced technology, but only because I see nanotechnology as far more powerful than stem cells, and that's where our money should be going instead.

It seems like everyone arguing that abortion, or now apparently cloning, is wrong brings up the ``cold'' way of treating children. Here's a fun fact about breeding: every month a woman produces an egg that, if not fertilized, exits her body, and every time a guy ejaculates, millions of sperm are released that never go on to fertilize any egg. Shouldn't we, in our infinite caring about the unborn, be harvesting all this reproductive material for potential use? Of course not.

Furthermore, people seem to be under the impression that cloning is an all-or-nothing deal: we either clone the entire human, brain and all, or not clone at all. No, we can clone parts. We never have to create a conscious being unless we really want to, for example in the case where a young child is run over and the parents want that child again. (This will become especially relevant when parents start choosing which genetic traits they wish their children to have, for example a tendency toward higher math skills or resistance to various illnesses and problems like allergies.)

See Full Post and Comments

Sociology Memo: Should the World's Libraries Be Digitized?

When people start arguing against the right to read, I get pretty suspicious of where their motives lie. Keith presents some very odd arguments to start out with before he goes into the old copyright-violation argument. Copyright violation is really the only one that should be taken seriously, as the ones preceding it aren't very good. First of all, if I go and buy a book, I do not thereafter need the author's or publisher's permission to make a copy of it for personal use, or to write in my copy, or to lend it to my friend, or even to donate it to a library where others will be able to check it out and read it for free. The sticky part is when I try to sell the copy, but the existence of used book stores and resales on Amazon makes it fairly clear that this is not an issue either.

What Google is doing is saving time and money, both now and for the future. Scan the whole thing, and if the whole thing is copyrighted, then simply restrict the majority of it from being displayed. Allowing the entire book to be indexed for search, however, is a net benefit to all parties: the author and publisher benefit because their book will receive more exposure and potentially more sales if people find the snippets intriguing enough, and general readers benefit, whether they're interested in some subject or writing a research paper. Keith argues that this could destroy the value of books if people somehow got their hands on them for free. The true value of books cannot be destroyed: the true value of books is in what they give to humanity. Copyright was never a fundamental system, and before copyright laws people still wrote a lot. Copyright is merely allowed by various governments as an incentive to get more works out there, for the benefit of humanity as a whole.

His next weak argument is related to a destruction of value, but is aimed at an implied lack of security on Google's part. Yes, every system has security vulnerabilities, though Google's is particularly secure. In any case, it's not like The Matrix wasn't freely available to download prior to the Google hack. In fact, a huge portion of copyrighted material is available through torrent networks that take little to no effort to find. Yet empirical evidence does not agree with the claims of the copyright holders: as "pirating" (a horrible phrase that simply means "copying"; there is no theft, as the original remains) has increased, oddly, the profits of the movie and other media industries have also increased, particularly for those who have embraced a new business model built on electronic distribution.

See Full Post and Comments

Sociology memo: Is Information Technology a Threat to Privacy?

Taylor talks about lives at stake. Well, the price of freedom is a couple planes in a couple buildings with a few thousand dead, not a decade of foreign occupation, thousands of our own soldiers dead, hundreds of thousands of innocent Arabs dead or displaced, an economy in shambles, and the invasion of Americans' privacy. When a country fights for such things as freedom, there are risks it has to take. The relevant Benjamin Franklin quote always seems to come up in these conversations: ``Any society that would give up a little liberty to gain a little security will deserve neither and lose both.'' Airplanes are safer after 9/11 not because of any security increases at airports (such measures are a joke and trivial for any thinking person to get around, which says something about the enemy if they can't; though I suspect the real problem is the difficulty of motivating people to kill themselves, which is not easy, and a promise of virgins has little to nothing to do with it), but because passengers now know to resist terrorists. The recent ``underwear bomber'' was stupid for being obvious, but passengers also resisted him and he was stopped.

If the US doesn't have the capabilities of Dubai, I suspect they might desire them. Dubai identified the assassins of Mahmoud al-Mabhouh extremely quickly, along with releasing video taken from various locations. It's interesting to wonder how much computer power is behind all that. But I digress...

I don't consider it ridiculous to have a flat ban on wiretapping. Wiretapping catches stupid criminals, not the smart ones. The smart ones will communicate using encryption impossible for government computers to crack (unless they secretly have a quantum computer, which is unlikely). The smart ones will use services such as Tor and various proxy networks to hide their online presence, and there's always the Darknet, which is architected in such a way as to prevent spying on the information. When humans want to communicate secretly, there's nothing the government can do to stop them. One may argue that measures which invade everyone else's privacy may ``thin the herd'' by removing the stupid, but the stupid are mostly harmless and don't require such invasive measures to capture anyway.
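As an illustration of "impossible to crack" (not a reference to any particular tool the smart criminals might use), the textbook example is the one-time pad: XOR the message with a truly random key of the same length, used exactly once. No amount of computing power, quantum or otherwise, recovers the message without the key, because every plaintext of that length is equally consistent with the ciphertext. A minimal Python sketch:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet at dawn"
# A truly random key, as long as the message, used only once.
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)   # encrypt
recovered = xor_bytes(ciphertext, key) # decrypt: XOR with the key again

assert recovered == message
```

The catch, of course, is key distribution: both parties need the same random key in advance, which is why practical systems use ciphers like AES instead; those are merely computationally infeasible to break rather than information-theoretically unbreakable.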

See Full Post and Comments

Sociology memo: Intelligent Machines

It's a sad thing that people don't seem to be able to see science fiction happening in real life. Horgan starts with ``I'm 54,'' as if that makes a difference to his opinion on this matter, but it does reveal his likely generational bias. He even uses the rhetorical claim that ``they're all men,'' which is a falsehood as an absolute statement. There are in fact women, even if he's never seen one.

There are three ``flavors'' of the Singularity (link): Accelerating Change is advocated by Kurzweil, with the exponential increases leading to powerful technology where we can do full-brain emulation; Event Horizon is advocated by Vernor Vinge, where we are likely to either significantly improve human intelligence or create a new one, but then all bets are off for the future after that (``if you could predict a Grandmaster's moves in chess, you would be at least as smart as the Grandmaster''); the third type is Intelligence Explosion, advocated by Eliezer Yudkowsky, with the claim that when we create a recursively modifying intelligence, one of the things we can expect it to do is to improve its own intelligence and enter a positive feedback cycle that goes FOOM, ``like a chain of nuclear fissions gone critical.''

For the third type, basing the AI on the human brain isn't required or even desired. Rather, a mathematical model for how intelligence works and how it can improve itself in a stable way is being worked on. Nevertheless, the human brain is fairly complex. But complexity is not an argument. In 1000 years, do these critics really believe no one will have figured out the brain? Look at what understanding science has given us in just the last 400 years! In 1000 years, only a catastrophic global extinction or near-extermination event would make me consider the possibility of not reaching a point of intelligence improvement. At the very least, we should be able to trivially increase the speed of our brain neurons within these next few years. They only fire at around 60 Hz.

See Full Post and Comments