Saturday, February 5, 2011

Agreeing to disagree

If anyone has ever wanted to formally understand Aumann's agreement theorem and the stuff it's built on, I recommend reading sections 0-3 of Aumann's Interactive epistemology I: Knowledge (10 pages) and then Aumann's 3-page Agreeing to Disagree. There may be a better way, but I just did that and it worked for me...and I appreciated the pointer so I'm passing it on.
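To get a concrete feel for the moving parts (a common prior, information partitions, posteriors that become common knowledge), it can help to animate them with the back-and-forth protocol from Geanakoplos and Polemarchakis's "We Can't Disagree Forever": two agents alternately announce their posteriors, each announcement refines the other's information, and the announcements must eventually coincide. The state space, event, and partitions below are invented purely for illustration; a minimal Python sketch:

```python
from fractions import Fraction

# Nine equally likely states (the common prior is uniform) and an
# arbitrary event of interest -- all made up for illustration.
STATES = range(9)
EVENT = {0, 1, 4, 5, 8}

# Hypothetical information partitions: Alice learns the "row" of the
# state, Bob learns the "column".
P_ALICE = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
P_BOB   = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]

def cell_of(partition, state):
    """The partition cell containing a given state."""
    return next(c for c in partition if state in c)

def posterior(cell):
    """P(EVENT | cell) under the uniform prior."""
    return Fraction(len(cell & EVENT), len(cell))

def refine(partition, announcement):
    """Split every cell by an announced event (drop empty pieces)."""
    out = []
    for c in partition:
        for piece in (c & announcement, c - announcement):
            if piece:
                out.append(piece)
    return out

def dialogue(true_state, pa, pb, rounds=10):
    """Alternate posterior announcements until the two agree."""
    pa = [set(c) for c in pa]
    pb = [set(c) for c in pb]
    for _ in range(rounds):
        qa = posterior(cell_of(pa, true_state))
        # Bob learns the set of states where Alice would announce qa.
        ann = {w for w in STATES if posterior(cell_of(pa, w)) == qa}
        pb = refine(pb, ann)
        qb = posterior(cell_of(pb, true_state))
        if qa == qb:
            return qa, qb
        # Alice, symmetrically, learns from Bob's announcement.
        ann = {w for w in STATES if posterior(cell_of(pb, w)) == qb}
        pa = refine(pa, ann)
    return (posterior(cell_of(pa, true_state)),
            posterior(cell_of(pb, true_state)))
```

Running `dialogue` from any true state, the two posteriors coincide after a couple of announcements, even though the agents start out disagreeing. Note how faithfully this leans on the assumptions the comments below pick at: a shared prior, honest announcements, and each agent knowing exactly how the other maps information to posteriors.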

I checked Google Scholar to see whether the latter paper might have the most citations per page of any econ paper, but it was edged out by at least Nash (also brief but important) and White (for just getting so many darn citations).


  1. Exogenous Combustion, February 9, 2011 at 7:51 AM

    I agree that it is a fun and informative theorem. Isn't it also worth noting perhaps the most relevant point at which Aumann's agreement theorem is inapplicable to the "real world"?

    Specifically, I posit that the real world is full of individuals who are looking to actively deceive others. Aumann recognized this in "Agreeing to Disagree", of course, noting that the two "had to trust each other."

    It seems the most demanding of the required assumptions. Thoughts? Should we agree to disagree on this?

  2. In fact it's even worse than you suggest: people don't even need to be trying to deceive each other. We can even have two honest truthseekers, but if their truthseekingness isn't common knowledge, the theorem doesn't apply. Perhaps we are both able to control our biases; nevertheless, if you entertain the possibility that I may suffer from overconfidence in my own opinions, we need not come to agreement.

    But! Don't conclude that Aumann's agreement theorem is "inapplicable to the real world"! You want to posit that people are deceptive. But why posit? The contrapositive of the theorem gives it to you: if people *are* always agreeing to disagree, then perhaps we really are short of honest truthseekers, right?

  3. Exogenous Combustion, February 10, 2011 at 11:59 PM

    I like the turnabout, and it certainly provides an update on whether or not we are short of honest truthseekers.

    I also agree I focused on a less stringent condition, but I think it's the one that's most pertinent in life. Why? Because the priors question just pushes things one step back. Why do we have the priors we do? If I transfer to you the complete dataset of all my life experiences since birth, then surely we should have the same priors, no?

    I recognize that's not true, but it does seem like a reasonable hypothesis: that everyone, before birth, should have the same priors. On the other hand, I posit that we have fundamental principal-agent problems in life that make liars of everyone; I think this claim contains a large amount of truth when it comes to social interactions (we lie to ourselves, we lie to others, and we lie to third parties, implicitly and explicitly).

    There may be some concern about why we have different priors, but my less stringent condition certainly holds.

  4. I agree about priors. Bayes is often misinterpreted as support for the sacredness of one's priors; really, in everyday use, priors are mostly a convenience, a shorthand for the current state of beliefs, which saves us from having to go back and examine all the data we ever collected up to this point. Most of your beliefs are due not primarily to your priors at birth, but to the information you have seen since and the way you processed it...and it is a sin against logic to sneak all that processing into an argument under the guise of sacred priorness.

    And even your prior at birth isn't unimpeachable; to the extent that it was itself generated by a mechanical process (e.g. evolution), you can roll the issue back even further. Indeed it's difficult to see how any difference in beliefs could be due to anything other than differences in *information* and how it is processed. Therefore, while Aumann's assumption of common priors seems strong, it constitutes no loss of generality: if we are in a situation where people have different priors, we can just zoom out, since it is embedded in a larger situation where people do in fact have the same priors.

    Next, as for your concern about our lyingness, you might want to consider the agreement theorem as applying in the usual information-theory setting where people are incentivized to tell the truth. However, for a novel argument for why we will *never* be able to actually trust each other 100%, consider that incentivizing someone to tell the truth relies on knowledge of their preferences, which can never truly be obtained. In particular, I will always place a higher probability on my own sentience than on yours, no matter what you say to the contrary. We will never agree on your probability of sentience, because I cannot trust the information you supply on the matter. There is no test I can do to rule out the possibility that I am, say, living in a simulation and you aren't; in that case, how can I possibly know that I'm giving you an incentive to tell the truth? Nevertheless, conditional on that sort of thing not being the case, the theorem can apply. And most of the time we are quite happy to condition our behavior on that not being the case, so as a practical matter it's not necessarily a big deal.