
Aumann's Agreement Theorem

Aumann's agreement theorem, roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other's current probability assignments, then they must have equal probability assignments.
- Less Wrong Wiki


Whenever someone says "well, we'll just have to agree to disagree," the parties involved in the disagreement have failed at presenting their cases. It means that at least one party is ignorant of some piece of information the other is implicitly using.

This happens a lot, unfortunately. The more background information your argument depends on, the harder it becomes to actually argue. From a distance such arguments look like each person repeatedly moving the goal posts, when in reality they're just trying to get across more prior information the other(s) don't have access to.

The problem is one of where the belief comes from. Assuming from here on that only two people are disagreeing: does one's belief come as a consequence of other beliefs, or do they "just believe it" as something isolated? I like to use atheism and theism as an example because it's handy and I won't get beheaded in this day and age for it (at least in the United States).

This belief-implication distinction is the difference between an atheist who "just believes it" because they were raised that way, and an atheist who, when asked the god question, answers "no" because (for example) they already have a belief structure that states "I don't believe in things I can't actually see." (Ignore the poor conclusions that belief network produces, like not believing in electrons--my point is that the single belief implies other beliefs, like not believing in god.)

Now switch out the atheist who believes "just because" with a Christian who believes "just because", and pit that Christian against the second atheist. The argument might go: "I believe in God!" "I don't believe in God!" "Well I guess we'll have to agree to disagree." "Pfft, fine."

But they don't have access to each other's prior information. In a sense, the fundamental question of rationality is: what do you believe, and why do you believe it? The two parties have identified to each other what they each believe, but they missed the second part, and so for the moment they're allowed to agree to disagree.

Once the atheist knows the Christian was just raised that way, and once the Christian knows the atheist just doesn't believe in things they can't see, the prior information swap is complete. They know each other's beliefs, so they can no longer agree to disagree. However, they may still actually disagree on the god question: their prior beliefs are still different. What Aumann's theorem goes on to suggest is that through repeated discussion and argument, the two parties' prior beliefs will grow closer to each other if they're acting rationally.
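Here's a minimal sketch of that convergence in Python (my own toy model, not anything from Aumann's actual proof): two idealized Bayesian agents start with opposite Beta priors about a coin's bias and update on the same stream of flips. Their posterior means converge, and notice they converge toward what the evidence supports, not to a polite midpoint.

    import random

    random.seed(0)
    true_bias = 0.7  # hypothetical true probability of heads

    # Beta(alpha, beta) priors: agent A leans heads-heavy, agent B the opposite.
    agents = {"A": [8.0, 2.0], "B": [2.0, 8.0]}

    def posterior_mean(alpha, beta):
        return alpha / (alpha + beta)

    for flip in range(1, 201):
        heads = random.random() < true_bias  # shared evidence
        for params in agents.values():
            params[0 if heads else 1] += 1   # conjugate Beta update
        if flip in (1, 10, 50, 200):
            means = {name: round(posterior_mean(*p), 3) for name, p in agents.items()}
            print(f"after {flip:>3} flips: {means}")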

Unfortunately, things like "faith" and "absolute certainty" create priors that are effectively infinite: believing with probability exactly 1 is believing with infinite odds. An agent with such an infinite prior cannot be reasoned with, because a rationalist only has finite evidence and argument at their disposal, and as any high schooler with pre-calc knowledge can tell you, an infinite amount minus a finite amount is still infinite. In other words, finite evidence cannot shift an infinite belief.
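You can see this directly in the odds form of Bayes' rule (a standard rewrite, not something original to this post). Taking logs, an observation E only ever adds a finite term to your prior log-odds:

[math]
\log \frac{P(H \mid E)}{P(\neg H \mid E)} = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)}
[/math]

If P(H) = 1, the prior log-odds are infinite, and no finite sum of evidence terms can ever bring them back down. That's the "infinite belief" in equation form.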

Of course, human brains can't actually store an infinity, so it's still possible to convince people who claim to believe with absolute certainty or on faith: they really just believe with a very high degree of certainty that they take for granted and perform deduction on. So after the information swap is done, the two people's beliefs are still at least finitely proportional in reality.
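A toy Python update makes the difference vivid (the likelihoods here are made up for illustration): a prior of exactly 1 is frozen under Bayes' rule, while a prior of "merely" 0.999999--the kind a human actually holds--eventually gives way under repeated disconfirming evidence.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    for prior in (1.0, 0.999999):
        p = prior
        for _ in range(40):
            # each observation is 10x likelier if H is false (made-up ratio)
            p = bayes_update(p, 0.05, 0.5)
        print(f"starting prior {prior}: posterior after 40 updates = {p:.6f}")

The first agent prints 1.000000 no matter how much evidence piles up; the second collapses to (effectively) zero.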

Human brains are merely Bayesian approximators, though, so in a perverse way, evidence that should shift a belief one way can end up erroneously shifting it the other way. This is behind the observed effect that if people are told something often enough they'll start to believe it, and if they're subsequently told that the thing was actually a lie, the mere act of mentioning it again strengthens the neural pathways corresponding to it, which in turn leads to a stronger belief in its veracity. Human brains are stupid.

In cases where essentially infinite prior beliefs are encountered, everyone holding such a belief needs an additional piece of background information before they can effectively become a "reasonable person" to have an honest debate with. That piece of background information is the idea that infinite prior beliefs are bad--from many different points of view.

As an example, from the point of view of seeking truth: if you already believe you know the truth with absolute certainty, you're never going to change your belief (assuming you actually, in reality, believe with absolute certainty). This can easily lead to your death or the death of someone else. Again picking on religion: this happens fairly frequently when parents believe that their act of praying will save their child from some disease, instead of taking the child to the hospital. Their infinite belief in the power of prayer caused their child's death. If they had a finite belief in the power of prayer, and additionally acted approximately rationally, they would have taken their child to the hospital, and with much higher experimental frequency the child would have lived. After that, their belief in the power of prayer would go down. To an infinitist, the death wouldn't change their belief in the power of prayer at all. They just say: "Oh, it was His plan."

As another example, from the point of view of game theory: if you're taking a bet with infinite outcomes, watch out. Suppose I'm offering $100 to anyone who can roll 80 6's in a row with a fair die, and suppose you believe with absolute certainty that you'll win. That makes you a Sure Loser, and experimentally you will lose every time you play. Similarly, suppose I'm offering an infinite amount of money to anyone who can roll 80 6's in a row with a fair die. If you include infinities in your rationality system, then you'll play every time (and lose every time) because the infinite payoff can make up for any finite losses. (Note you could win given infinite time, but that's basically a canceling infinity.)
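Some back-of-the-envelope Python for that bet (the $1 entry fee is my own hypothetical; the post doesn't name a stake):

    p_win = (1.0 / 6.0) ** 80
    print(f"P(80 sixes in a row) = {p_win:.3e}")  # about 5.6e-63

    entry_fee = 1.0   # hypothetical cost per attempt
    prize = 100.0
    ev = p_win * prize - (1.0 - p_win) * entry_fee
    print(f"EV of a $1 attempt at the $100 prize: ${ev:.2f}")  # about -$1.00
    # Swap in prize = float('inf') and p_win * prize becomes infinite for any
    # p_win > 0 -- which is exactly how an infinity swamps the decision rule.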

Richard Feynman believed that this information, this idea that you can be wrong, this idea of doubting everything, is the key step to a rational mind. He believed that once a religious person consciously starts down that path it's hard to go back. As you doubt more and more, it becomes harder to believe.

Which is the way it should be. Increased doubt should lower your prior. In some cases you may find some evidence on the way down that can stabilize your total belief again, but it will be lower than before.

Anyway, if you've reached the stage where your beliefs and your opponent's are finitely proportional--where the only difference is in your priors--you can't agree to disagree, but you can agree that your priors are different. Just don't use that the way "agree to disagree" is used, as a stop sign to end the conversation. Keep talking! If you're both sufficiently rational, your priors will come closer, and they won't usually meet in the middle, either. Once they're the same, however, the two of you must agree.


Posted on 2011-10-16 by Jach

Tags: atheism, rationality, thought

Permalink: https://www.thejach.com/view/id/212

Trackback URL: https://www.thejach.com/view/2011/10/aumanns_agreement_theorem
