Tuesday, September 10, 2013

Donut Time, All the Time!

You know when you read something and you're like, "I sort of already knew that, but it was nice to hear it laid out properly"?

That's the first half of this post.

You know when you eat a donut and you're like, "Yum, but...I really wish I had given that to Xan, since he loves donuts so much more than I do..."

Well, that should happen -- I do really love donuts -- but it doesn't.  And I don't blame you, since not once in my life have I even saved a donut for my own future self.

The second half of this post is about why evolution just couldn't accomplish the seemingly simple task of getting me to save donuts for my future self.  Saving has to be evolutionarily optimal at least sometimes, right?


I. AGENCY

Am I an agent?  
Before I answer that question, I want to impress on you what it truly means to be an agent.

Economists typically use the word agent to refer to a single person with a single set of preferences.
An agent decides how to act, taking his preferences into account, and then actually acts upon that decision.

Contrast this with a collective, a group of agents each with their own preferences.  Whereas an agent has coherent preferences that tell it unambiguously how to behave, a collective is an amalgam of preferences that may not agree with each other. And whereas an agent decides how to act and then pulls the trigger, it is ultimately the agents in the collective who act on its behalf.

There are many instances where a collective can be thought of as having agency.  Whether the collective is a family or nation or firm or sports team, we may not lose much by modeling it as a single entity, depending on the problem.

But it can also be deeply misleading to imagine that a collective has proper agency when it doesn't.  In fact, this is one of the most pervasive logic failures in circulation today. Consider this representative quote:
"We run the risk of going extinct, and the irony is, we did it to ourselves. The ‘smarty pants’ brain that created advanced weapons, complex global economics...is routinely bossed around by the brain that shoots from the hip.... 
No one in their right mind would deliberately create the means of their own extinction, but that’s what we seem to be doing. The only conclusion is that we’re not in our right minds..." 
--K.C. Cole
No, that is not the only conclusion.  Of course people do stupid things, but that's a bit of a red herring here.  When people talk about all the ways in which we are destroying ourselves -- and how stupid to destroy oneself! -- they almost always mean that agents are destroying the collective.   Since the agents are in the collective, these sound like the same thing; however, the deep insight of the tragedy of the commons is that everyone can be doing individually smart things and still end up with a collectively stupid result.
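
To make the commons logic concrete, here is a minimal sketch in Python.  The herder framing and every payoff number are my own invented illustration, not anything from the quote: each herder's dominant strategy is to graze, yet universal grazing leaves everyone worse off.

```python
# A toy commons game with invented payoffs: each of N herders decides
# whether to graze an extra cow on the shared pasture.
N = 10
PRIVATE_GAIN = 5  # benefit to the herder who adds a cow
SHARED_COST = 1   # damage each extra cow imposes on EVERY herder

def payoff(i_graze, total_grazers):
    """One herder's payoff, given his own choice and the total grazing."""
    return PRIVATE_GAIN * i_graze - SHARED_COST * total_grazers

# Individually smart: whatever the others do, my extra cow nets me +5
# while I bear only -1 of the extra shared damage.
for k in range(N):  # k = number of OTHER herders grazing
    assert payoff(1, k + 1) > payoff(0, k)

# Collectively stupid:
print(N * payoff(1, N))  # everyone grazes: total welfare -50
print(N * payoff(0, 0))  # nobody grazes:   total welfare   0
```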

More generally, there is almost always a tension between the preferences of different agents.  In the absence of well-designed laws or norms or enforceable contracts or markets to discipline their interactions, the large-scale effects of these interactions can diverge significantly from what they would collectively choose, if only it were possible to choose collectively.  We cannot hope to understand their joint behavior if we always think of the collective as an agent with the power to dictate its own behavior. Because it's not.


So am I an agent?
In a word, no.  I exist, and exist, and decide, and decide.  But the cold, hard truth is that I am better thought of as a collective of agents that exist at different moments of time.  Do we have a lot in common?  Sure.  But we aren't the same agent.  For one, we have different preferences over which of us gets to eat the donut (I can tell you that each of me wants the donut for himself!).  For another, when all is said and done, each of me must decide for himself what to do.  Today-me cannot flex a muscle to lift the arm of tomorrow-me.

Of course, today-me has influence over the state of the world tomorrow, and as such can manipulate the incentives of tomorrow-me.  Perhaps today-me can make it optimal for tomorrow-me to lift his arm, perhaps even employing a commitment device if one is available.  However, that does not make us the same agent, any more than you become the same person as me when I hold a gun to your head and order you around.  In any case, commitment devices are often either unavailable or prohibitively costly, so the fact remains that versions of me are not always going to agree on how to behave.
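
As a toy illustration of that incentive-manipulation point (the workout framing and all payoff numbers are mine, not the post's): today-me never controls tomorrow-me's choice, but he can sometimes pay to change the payoffs that tomorrow-me will respond to.

```python
# Today-me cannot flex tomorrow-me's muscles, but he can reshape
# tomorrow-me's incentives.  All payoff numbers are invented.

def tomorrows_choice(penalty_for_skipping):
    """Tomorrow-me picks whichever action pays HIM more."""
    payoffs = {"exercise": 1, "skip": 2 - penalty_for_skipping}
    return max(payoffs, key=payoffs.get)

print(tomorrows_choice(penalty_for_skipping=0))  # 'skip'

# A commitment device (say, money staked on the workout) costs today-me
# DEVICE_COST and docks tomorrow-me PENALTY if he skips tomorrow.
DEVICE_COST = 1
PENALTY = 2
VALUE_OF_WORKOUT_TO_TODAY_ME = 3

if VALUE_OF_WORKOUT_TO_TODAY_ME > DEVICE_COST:  # worth buying the device
    print(tomorrows_choice(penalty_for_skipping=PENALTY))  # 'exercise'
```

The device only works when it exists and is cheap enough, which is exactly why its frequent absence leaves the versions of me in disagreement.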

To help break yourself free from any illusion that you are a single agent over time, let me stress how unnatural the idea of single agency is from a physical standpoint.  Your mind itself is under constant reconstruction at the atomic level; hardly any of the atoms your brain was originally made of are still in it.  The strong feeling that you are a single person is a product of inherited memories and characteristics, passed along by the physical laws that govern your transition from one instant to the next.

[Just to say it: You might believe in some sort of incorporeal soul that binds all the versions of you together into one coherent entity.  Fine by me, but that is immaterial (ha!) for the present discussion.  What really matters right now is that different versions of yourself have different preferences, all else equal.]

II. DONUT EVOLUTION

The thing is, there are strong evolutionary reasons for agency.  The evolutionary optimum is a single consumption path, and while a person may technically be an array of agents who need to act at different points in time, it would be most adaptive if they all cooperated along that single optimal path.

But since this so often fails to happen, we are left to wonder why.  I have seen a few evolutionary models that generate dynamic inconsistency (procrastination, preference reversals, etc.) as a second-best outcome, but I have never found them convincing.  Dynamic inconsistency is pervasive, so a truly satisfactory explanation needs to rely on first-order issues.

My feeling is that agency -- which entails dynamic consistency -- is just plain hard to accomplish.  Because the truth of the universe is that we aren't agents.  The null hypothesis in economics is that we are agents, so evolutionary models are forced to come up with reasons why evolution would "prefer" us to act inconsistently.  But can't it just be legitimately hard to make a collection of people arrayed through time agree on everything?

A donut model.  (No relation to a circle model). 
Say you live for 10 periods, but that "you" are really a collection of 10 agents, one per period.  In the first period, you are endowed with a donut which you can save or consume at any time.  And suppose that for whatever reason, it is evolutionarily optimal to eat the donut in period 7.  But because of limited hardware/software, evolution cannot fit a 10-period utility function into your brain.  It can only fit a single, simple, one-period utility function into the brain of each of the ten agents.  The same utility function has to go into every agent, and each agent can only make the simplest of differentiations: "himself" versus "other agents."

This utility function can tell an agent to save the donut or eat it himself.  It cannot tell him to save the donut for agent 7, as he doesn't properly know who agent 7 is.

Constrained in this way, if each agent prefers to save the donut, it will never get eaten.  It's better for each agent to prefer to eat the donut! Then it gets eaten in period 1, which is better than not eating it at all, but far from the unconstrained evolutionary optimum.
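
Here is the model in runnable form, a minimal sketch under exactly the constraints above; the particular utility values are placeholders of my choosing.

```python
# A bare-bones simulation of the donut model.  Evolution must install the
# SAME one-period utility function in all ten agents, and that function
# can only distinguish "I eat the donut" from "someone else does".

PERIODS = 10
OPTIMAL_PERIOD = 7  # when evolution would like the donut eaten

def simulate(u_eat_myself, u_save_for_others):
    """Each period-agent eats iff eating beats saving by his own lights.
    Returns the period the donut is eaten, or None if it never is."""
    for period in range(1, PERIODS + 1):
        if u_eat_myself > u_save_for_others:
            return period
    return None

print(simulate(u_eat_myself=1, u_save_for_others=0))  # 1: eaten on sight
print(simulate(u_eat_myself=0, u_save_for_others=1))  # None: never eaten
# No utility function of this restricted form can hit OPTIMAL_PERIOD.
```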


Complexity is Simplicity for Complicated People.  
I think this minimalist model captures something really true.  At its heart, dynamic inconsistency arises because it's simpler than dynamic consistency, and complexity is evolutionarily expensive. That's it.  That's the real reason.

That sounds so weird to economists, because if you are really a single agent over time, as we tend to assume, then of course it's more complicated to make your preferences vary over time.  But if you are actually a bunch of different agents, then it's more complicated to make your preferences agree!

In this model everyone has the "same" preferences; that's the sense in which dynamic inconsistency is simpler.  But they aren't really the same, are they?  In fact, they all have the same utility function with respect to different reference points: everyone has a different definition of "me"!

As well they should. If a frisbee is hurtling toward your head, you should react the same way that I would react if a frisbee were hurtling toward my head.  That's what a one-size-fits-all program looks like, given that we all see out of our own eyes.

Different versions of ourselves are qualitatively the same story.  People ask, "Why dynamic inconsistency?"  But to the contrary, isn't it amazing that evolution was able to endow us with as much glorious consistency as it did?

