# Aumann's Agreement Theorem

> Aumann's agreement theorem, roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other's current probability assignments, then they must have equal probability assignments.
>
> - Less Wrong Wiki

Whenever someone says "well, we'll just have to agree to disagree", the parties involved in the disagreement have failed to fully present their cases. It means that at least one party is ignorant of some piece of information the other is implicitly using.

This happens a lot, unfortunately. The more background an argument depends on, the harder that argument becomes to actually make. From a distance such arguments look like each person repeatedly moving the goal posts, when in reality they're just trying to get across prior information the other(s) don't have access to.

The problem is one of where the belief comes from. Assuming from now on that only two people are disagreeing: does one's belief come as a consequence of other beliefs, or do they "just believe it" as something isolated? I like to use atheism and theism as an example because it's handy and I won't get beheaded for it in this day and age (at least in the United States).

That distinction is the difference between an atheist who "just believes it" because they were raised that way, and an atheist who, when asked the god question, answers "no" because (for example) they already hold a belief that states "I don't believe in things I can't actually see". (Ignore the other conclusions that poor belief network produces, like not believing in electrons--my point is that the single belief implies other beliefs, such as not believing in god.)

Now switch out the atheist who believes "just because" with a Christian who believes "just because", and pit that Christian against the second atheist. The argument might go: "I believe in God!" "I don't believe in God!" "Well I guess we'll have to agree to disagree." "Pfft, fine."

But they don't have access to each other's prior information. In a sense, the fundamental question of rationality is: what do you think you believe, and why do you think you believe it? The two parties have identified to each other what they each believe, but they missed the second part, and so for now they're allowed to agree to disagree.

Once the atheist knows the Christian was just raised that way, and once the Christian knows the atheist just doesn't believe in things they can't see, the prior information swap is complete. They know each other's beliefs, so they can no longer agree to disagree. They may still actually disagree on the god question, though, because their prior beliefs are still different. What Aumann's theorem goes on to suggest is that through repeated discussion and argument, the two parties' beliefs will grow closer to each other if they're acting rationally.
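This convergence can be sketched in code. What follows is a toy illustration under assumed parameters, not the full Aumann back-and-forth protocol: two Bayesian agents start from different finite priors on a hypothesis H ("this coin lands heads 80% of the time", versus a fair coin) and update on the same shared stream of flips. Working in log-odds keeps the arithmetic stable: each observation adds the same finite log-likelihood-ratio to both agents, and their probability assignments close in on each other.

```python
import math
import random

def log_odds(p):
    # probability -> log-odds
    return math.log(p / (1 - p))

def prob(lo):
    # log-odds -> probability
    return 1 / (1 + math.exp(-lo))

def evidence(heads):
    # log-likelihood ratio of one flip: log P(obs | H) - log P(obs | fair)
    return math.log((0.8 if heads else 0.2) / 0.5)

random.seed(0)
believer = log_odds(0.9)  # starts fairly convinced of H
skeptic = log_odds(0.1)   # starts fairly convinced of not-H

for _ in range(200):
    flip = random.random() < 0.8  # the coin really is biased
    shift = evidence(flip)        # same finite update for both agents
    believer += shift
    skeptic += shift

print(prob(believer), prob(skeptic))  # both end up close together, near 1
```

Note that the gap between the two agents' log-odds never changes (they receive identical evidence), yet their probability assignments still converge, because in probability space a fixed log-odds gap shrinks toward zero as both agents approach certainty.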

Unfortunately, things like "faith" and "absolute certainty" create priors that are effectively infinite. An agent with such an infinite prior cannot be reasoned with, because a rationalist only has finite evidence and argument at their disposal, and as any high schooler with pre-calc knowledge can tell you, infinity minus any finite amount is still infinity. In log-odds terms, a prior probability of exactly 1 or 0 sits at infinity, while each piece of evidence shifts the log-odds by only a finite amount. In other words, finite evidence cannot shift an infinite belief.

Of course, human brains can't actually store an infinity, so it's still possible to convince people who claim to believe with absolute certainty or faith: they really just believe with a very high, taken-for-granted degree of certainty that they perform deduction on. So after the information swap is done, the two people's beliefs still differ by only a finite amount in reality.
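The difference between a true probability-1 prior and a merely enormous one can be made concrete with Bayes' rule in odds form. A hypothetical sketch (the 0.999999 figure and the likelihood ratio are arbitrary choices for illustration): a prior of exactly 1 has infinite odds, so no finite likelihood ratio touches it, while the merely enormous prior collapses under enough contrary evidence.

```python
def bayes_update(prior, likelihood_ratio):
    # odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio
    if prior == 1.0:
        return 1.0  # infinite prior odds: evidence is irrelevant
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

certain = 1.0
confident = 0.999999  # "absolute certainty" as a brain might actually store it

for _ in range(30):  # thirty pieces of strong contrary evidence
    certain = bayes_update(certain, 0.1)    # each observation is 10x likelier
    confident = bayes_update(confident, 0.1)  # under the rival hypothesis

print(certain)    # still exactly 1.0
print(confident)  # has collapsed toward 0
```

The guard clause for `prior == 1.0` is the whole point: at probability exactly 1 the odds ratio is division by zero, i.e. infinity, and multiplying infinity by any finite likelihood ratio leaves it infinite.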

Human brains are merely Bayesian approximators, though, so in a perverse way, evidence that should shift a belief one direction can end up erroneously shifting it the other direction. This is behind the observed effect that if people are told something often enough they'll start to believe it, and that if they're subsequently told the thing was actually a lie, the mere act of mentioning it again strengthens the corresponding neural pathways, which in turn leads to a stronger belief in its veracity. Human brains are stupid.

In cases where essentially-infinite prior beliefs are encountered, anyone holding such a prior needs an additional piece of background information before they can become a "reasonable person" to have an honest debate with: the idea that infinite prior beliefs are bad, from many different points of view.

As an example, from the point of view of seeking truth: if you already believe you know the truth with absolute certainty, you're never going to change your belief (assuming you actually, in reality, believed with absolute certainty). This can easily lead to your death or the death of someone else. Again picking on religion: this happens fairly frequently when parents believe that their act of praying will save their child from some disease, instead of taking the child to the hospital. Their infinite belief in the power of prayer caused their child's death. If they had a finite belief in the power of prayer, and additionally acted approximately rationally, they would have taken their child to the hospital, and the child would have survived with much higher frequency. Afterward, their belief in the power of prayer would go down. To an infinitist, the death doesn't change their belief in the power of prayer at all. They just say: "Oh, it was His plan."