1a) receive $24k with certainty
1b) receive $27k with 33/34 chance, 1/34 receive nothing
2a) receive $24k with 34/100 chance, 66/100 receive nothing
2b) receive $27k with 33/100 chance, 67/100 receive nothing
Most people prefer 1a and 2b. But according to the math, a consistent decision theory should choose either 1a and 2a or 1b and 2b; no mixing. And considered in isolation, 2b should be preferred because it maximizes expected return, so really the controversy is in the human preference for 1a over 1b.
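To see the inconsistency concretely, here's a sketch of my own (normalizing the utility of receiving nothing to 0): preferring 1a over 1b requires u($24k) > (33/34)·u($27k), while preferring 2b over 2a requires 0.34·u($24k) < 0.33·u($27k), which is the same inequality reversed. No shape of utility function can satisfy both.

```python
import math

# Normalize u(0) = 0. Then:
#   preferring 1a over 1b means       u(24000) > (33/34) * u(27000),
#   preferring 2b over 2a means 0.34 * u(24000) < 0.33 * u(27000),
#   which rearranges to               u(24000) < (33/34) * u(27000).
# Both can't hold at once, whatever shape u takes.
def preferences(u):
    """Which option each scenario's expected utility favors."""
    s1 = "1a" if u(24_000) > (33 / 34) * u(27_000) else "1b"
    s2 = "2a" if 0.34 * u(24_000) > 0.33 * u(27_000) else "2b"
    return s1, s2

# Try a spread of utility shapes: linear, concave, convex.
for name, u in [("linear", lambda x: x),
                ("log", lambda x: math.log(1 + x)),
                ("sqrt", math.sqrt),
                ("square", lambda x: x * x)]:
    print(name, preferences(u))
# Every u yields a "consistent" pair -- ('1a', '2b') never appears.
```

Note that a concave (risk-averse) utility like log can flip the preference to 1a and 2a, but no curvature alone produces the popular 1a-and-2b pattern.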
I don't argue with the math, but in denouncing human behavior, I think humans have a somewhat defensible algorithm. I would go with 1a and 2b despite knowing about this.
The two problems with just applying the math to humans are that, to humans, magnitudes matter, and so does repetition. If you offered me choice 1 a hundred times, but I had to commit to a single option beforehand, I would choose 1b. The odds of coming up with nothing every time are tiny, and if you used a quantum number generator, well, the version of me that came up with nothing would be kicking himself, but the majority of me would be happy, even if some of them got a bit less than a hundred rounds of 1a would have paid. If I got to pick for each repetition, I'd switch my strategy depending on how far 1b had drifted from its expected outcome.
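As a rough check on the repeated-play intuition, here's a sketch (my own arithmetic, assuming a hundred independent rounds) of how committing to 1b compares against the guaranteed total from 1a:

```python
# Committing to 1b for 100 independent rounds vs. taking 1a's
# guaranteed $24k each round.
from math import comb

p = 33 / 34                      # chance each round of 1b pays out
n = 100
payout = 27_000
guaranteed = 24_000 * n          # $2.4M from always taking 1a

expected = n * p * payout        # expected total from always taking 1b
# Probability the 1b strategy ends up below the 1a total: need at most
# guaranteed // payout = 88 wins out of 100 (89 wins already beats $2.4M).
behind = sum(comb(n, k) * p**k * (1 - p)**(n - k)
             for k in range(0, guaranteed // payout + 1))
print(f"expected 1b total: ${expected:,.0f}")     # ~$2.62M vs. $2.4M
print(f"chance of ending behind 1a: {behind:.6%}")
```

Under these assumptions the vast majority of quantum-mes come out ahead; the chance of the 1b commitment underperforming the sure thing is a small fraction of a percent.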
Magnitudes matter because at some point I'll switch to 1b. If you offer me $24k with certainty, vs. 33/34 chance of $1bn, I'll take the chance. $24k is a relatively small sum of money -- not insignificant, but small.
Magnitudes and repetition matter together. A 99.9999% chance that running this one experiment will not destroy the earth seems worth the risk if the experiment should be useful and is a one-shot. The same chance but for a million experiments, please no -- we expect the world to be destroyed by one of them! The flaw in reasoning under a one-shot assumption is that it may not actually be one-shot, which makes it dangerous. Suppose that across the whole human population, people are independently running their own experiments with these odds, and on average one happens on earth every 30 minutes. Then we should expect the world to end sometime in the next 57 years. That's not very good.
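A quick sketch of that arithmetic: with disaster probability p per experiment, the expected number of experiments until the first disaster is 1/p, so the expected wait depends on the rate (here I take one experiment every 30 minutes as the assumption):

```python
# Expected time until the first world-ending experiment, assuming one
# independent experiment every 30 minutes, each with probability 1e-6
# of disaster (i.e. a 99.9999% chance of being fine).
p = 1e-6                        # per-experiment chance of destroying the earth
per_year = 365.25 * 24 * 2      # experiments per year at one every 30 minutes
years = (1 / p) / per_year      # expected experiments until doom, in years
print(f"expected time to doom: {years:.1f} years")   # ~57 years
```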
The other issue with this paradox is risk aversion. In scenario 1, you're essentially being handed $24k. You then have the option of betting that $24k to win a bit more at good odds, or walking away. Even when the odds are good enough that the expected outcome exceeds the cost to play, if it's not much higher, people will want to walk away. In scenario 2, you never have the option of walking away; you're just choosing between two gambles. Humans have evolved good reasons to quit while they're ahead. You can either go home now with 5 roots alongside the rest of your buddies and expect to arrive safely, or spend an extra hour in the dark alone for a chance at 1 more root, at the risk of being eaten yourself. If the goal is to maximize roots, the whole tribe should risk staying out, but that won't necessarily maximize population size: if some always go home and some take the risk, the ones who go home may outbreed the riskier ones. The odds of the risk are also probably unknown, and the closer they are to certainty, the less the gamble matters, unless the potential reward is large enough (like 100 roots) to meaningfully improve your chances of breeding.
Sensitivity to risk depends on a lot of context. If I'm totally homeless and the amounts are $1m vs. $1.5m, I'm not going to gamble away the chance to turn my life (and the lives of all the alternate quantum-mes) around for an extra half million. Even if you offered $1m vs. something like $10m or $100m at good odds, I'd still take the certain number. The reason I'd be okay choosing the 33/34 shot at $1bn over a certain $24k is that $24k isn't much to me at the moment: I'm not homeless, and I have an income and savings.
Taken to the other extreme: $1 vs. $2. In this case I'll go for the shot at $2; the amounts are insignificant.
I think the problem is that the math assumes the utility of dollars maps linearly onto the real numbers, and that a gain's utility mirrors a loss's... I'd have to work out the math to verify this, though. It's a problem because utility can map differently for a loss than for a gain, and more money isn't necessarily more valuable (it could be less!) depending on your values and background. Yes, $2 is strictly greater than $1, but in terms of my utility? I make a good salary. If you're offering to add $1 or $2 to my bank account, I don't care; they fall in the same bucket. But if I'm grocery shopping? Then it's a matter of subtracting, and they don't fall in the same bucket. Is this inconsistent? Maybe, by the math of decision theory.
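Here's a toy sketch of the buckets idea (entirely my own illustration, not standard decision theory): gains and losses run through different curves, so +$1 and +$2 are indistinguishable to me while -$1 and -$2 are not.

```python
# Toy asymmetric utility: gains only register in coarse $100 buckets,
# while losses are felt dollar for dollar. Purely illustrative.
def bucket_utility(delta):
    if delta >= 0:
        return delta // 100   # +$1 and +$2 both land in bucket 0
    return delta              # -$1 and -$2 are distinct

print(bucket_utility(1), bucket_utility(2))     # same bucket: don't care
print(bucket_utility(-1), bucket_utility(-2))   # different: losses matter
```

An agent using a value function like this violates the axioms of expected utility theory, which is exactly the kind of "inconsistency" the paradox exposes.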
Perhaps the problem is with certainty. It feels different if 1a is, say, 99.999999999999% certain while 1b is still 97%. Given that neither is certain, that may indeed make me choose 1b. I wonder if this is more a problem with the math (where, in my opinion, dealing with 0 and 1 in probability theory is "icky" because they're not real probabilities...) or a problem with how humans behave when they think nothing can go wrong.
Anyway, all I wanted to write about is that I don't think this paradox is a great example of human irrationality. There are plenty of other things in the heuristics and biases literature that demonstrate our irrationality.
Posted on 2015-05-30 by Jach