Friday, August 19, 2011

Usury 3: Me, myself, and I(ntertemporal tradeoffs?)

In the original usury post I said that we could explain a pattern of [splurging up front, massive debt thereafter] with time preferences that strongly favor the present.  Here are a couple possibilities.

1. For starters, maybe people's preferences -- while consistent over time -- are just really impatient.  They care a lot more about the beer and chips today than their increased poverty a year from today.  There's certainly nothing wrong with this in theory.  A person's goal isn't to maximize their bank account in the long run; rather, they care about their own rate of time preference, and it could be making them better off overall to borrow at 15% today in order to consume that much more in the present.
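To see how this can work, here is a minimal sketch with made-up numbers (income, discount factor, and log utility are all my assumptions, not from the post): a perfectly consistent but sufficiently impatient agent who genuinely comes out ahead by borrowing at 15%.

```python
import math

def lifetime_utility(c_now, c_later, delta):
    """Discounted log utility over two periods: u(now) + delta * u(later)."""
    return math.log(c_now) + delta * math.log(c_later)

income = 1000            # assumed income in each period
loan, rate = 100, 0.15   # borrow 100 today, repay 115 next period

delta = 0.7  # assumed: quite impatient, but consistent over time

no_borrow = lifetime_utility(income, income, delta)
borrow = lifetime_utility(income + loan, income - loan * (1 + rate), delta)

print(borrow > no_borrow)  # True: borrowing at 15% raises lifetime utility
```

With these numbers the agent prefers to borrow, and there is nothing irrational about it: the extra consumption today is simply worth more, by the agent's own lights, than the repayment next year.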

While this is possible, some find it unconvincing.  Also, yawn.  It is actually written into my economonomics contract that consistency gets a verbal and written yawn.  Do you prefer option 2?  At the very least, it's actually interesting...


2. Maybe people's preferences display "time inconsistency."  Their "true" preferences are rather patient, but in the moment they cannot resist the beer and chips even though it's "really" not good for them overall.  If they could commit in advance to not borrowing so much from the credit card company, they would.
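The standard way to model this is quasi-hyperbolic ("beta-delta") discounting, where every future period is hit by an extra factor beta, so the drop-off between today and tomorrow is steeper than between any two future dates. A sketch with assumed parameters (beta = 0.5, delta = 0.9, and the reward sizes are illustrative):

```python
def present_value(reward, periods_away, beta=0.5, delta=0.9):
    """Quasi-hyperbolic discounting: today counts fully, all future
    periods get an extra penalty factor beta."""
    if periods_away == 0:
        return reward
    return beta * delta**periods_away * reward

small, large = 10, 15  # small reward sooner vs. large reward one period later

# Viewed a year out (periods 12 vs. 13), the patient choice wins...
plan_ahead = present_value(large, 13) > present_value(small, 12)

# ...but viewed in the moment (periods 0 vs. 1), the impatient choice wins.
in_the_moment = present_value(large, 1) > present_value(small, 0)

print(plan_ahead, in_the_moment)  # True False: a preference reversal
```

The same agent plans to wait for the larger reward, then reverses when the smaller one is immediately at hand: exactly the pattern of wanting to commit in advance to not raiding the credit card, and raiding it anyway.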

Time inconsistency is often considered irrational, but let me take a few minutes to make a relatively deep comment about the nature of preferences.

First of all, the idea that you are actually a single entity over time is at best a simplification, and at worst an actively detrimental illusion.  "Multiple selves" is not just a way of looking at things -- it's reality, right?  At the level of reality, you are not the same person tomorrow!  There is some transition rule that turns your constituent atoms of today into your atoms of tomorrow, and why should they necessarily have to agree with each other?  And even if you were physically constant over time, different instances of you wouldn't have the same bodily experience from the time-dated consumption of a bag of chips (yesterday-you enjoys yesterday-chips more than today-you enjoys yesterday-chips), and different versions of yourself could easily have different preferences over when to eat chips.  Would you expect a hundred clones in a room to agree over who gets the chips?  More likely they'd all want the chips for themselves; they have the "same" preferences but over different identities, which is to say they don't really have the same preferences at all, over states of the world.

That is the default.  We can try to model things with consistency, and we can look for reasons why consistency might come to the surface, but it's important to keep in mind that consistency is not the privileged default state of the world.  It is a special case if it turns out that different versions of yourself all agree with each other.

There are about 8 directions we could go from here.  We could describe some salient theoretical possibilities for the shape of our time preferences.  We could talk about empirical evidence for the actual shape of our time preferences in various settings.  Or we could go a level deeper and talk about where the preferences came from in the first place, and what sorts of preferences were likely to have arisen.  We could talk about the powerful (but not all-powerful) forces pushing our preferences in the direction of consistency, and reasons we might systematically deviate from consistency.

But right now I just want to make a simple remark. As soon as there is any disagreement whatsoever between different instances of "your" preferences, who's to say which is the "true" or "right" set of preferences?  Six years in advance you want to abstain from the beer and chips.  In six years, you don't.  What does it mean to say the advance preferences are the "real" ones?  In some sense there are simply many agents with different preferences in conflict with each other, doing what they can to get their way.  Now perhaps most agents agree that You-2017 should not go for the chips, while You-2017 disagrees, in which case are we really making a comment about social optimality when we say You-2017 is in "error"?  And if social optimality is on the table, by all means, please tell me: what weights are we using?  How are they related to the discounting that's already taking place?  It is so very nice if preferences start out consistent, because then everyone agrees on everything...but as soon as there's disagreement, there's a big discussion to be had. (More to say, another time).

I'm happy to abstract away from this, and a single, consistent agent over time is a great fit in many situations.  But at the level of reality, I don't exactly like to think of time inconsistency as irrational. Irrationality presupposes a correct set of preferences which are being violated, but there is no correct set of preferences here, only a bunch of preferences in disagreement.

(The discussion of why a particular set of preferences may jump out and seem to be "correct" is also for another time).
