CEO-monarchs would be best in a patchwork

I've liked the idea of neocameralism since I was first exposed to it. In short, the government runs as a classic monarchy with an absolute King, but it's really structured like a modern corporation, so the King is really a CEO-King who is formally accountable and replaceable. (By the board members, who are themselves replaceable by the shareholders.) This improves on classical succession problems: when the CEO dies or retires, the best person can take the reins, not necessarily the CEO's son. It also improves accountability: if the country is failing and the CEO isn't seen as doing his best to stop it, he can be removed through a clean process to let someone else have a go, much the same as the CEO can replace anyone within the company/country's official org chart.

This has some obvious defects, though some are maybe not worth much concern. The first is: will a CEO-King who is being replaced actually go quietly? Maybe in the distant future you could crypto-lock and secure all weapons of significance to guarantee a peaceful transition, but even without such questionable tech, I think it'd usually go well anyway. For one, humans respect hierarchy quite a bit -- even when guns are in play, even when there are many reasons someone with a cynical disposition could question legitimacy. The historical case of Lincoln and his generals is a good one to reflect on. Why should the President, who typically lacks any military training, be Commander-in-Chief, after all? It's amazing this country has never had a successful military coup against the sitting President (I'm unaware of even an unsuccessful one). Instead, even top generals defer, even go away, and are replaced by other generals. And Lincoln himself, had he not been shot, eventually would have deferred to let another President take office. (Though perhaps, like FDR, not immediately.)

Also, having a nice severance package is a common and good incentive to keep people from doing harm on their way out, or later on.


Grabby Aliens and Paperclips

In a recent bar-talk-style chat with a friend, we got on the topic of AI risk again. Somewhere in the conversation he expressed surprise at some of my arguments, given some of my older arguments. Namely, I put forth some arguments de-emphasizing the importance of recent AI advances when it comes to full-blown AGI, and generally I expressed skepticism that full-blown AGI is "near" -- at least, a lot more skepticism than I've ever expressed to him before.

Ultimately I think I've just been more and more influenced by Robin Hanson's viewpoints, and his recent work on grabby aliens may have been another push, one I've only now connected.

But stepping back a bit, I want to describe the shape of my beliefs about AGI. Before I get to probabilities, I have to first talk about possibilities. First of all, it seems obviously possible that creatures of human intelligence could one day create machine intelligence that rivals and surpasses them. It does not at all seem to be an impossible engineering challenge in the way that a perpetual motion machine is**, nor does it seem like humans should just happen to be close to the limits of general intelligence. If nothing else, we could at least think faster, which alone would be a large advantage even without greater generality of thought.
