Friday, September 27, 2013

The prisoners' other dilemma

A person is made up of multiple selves across time.  In a sense, you can never really punish the agent who committed a crime, because by the time punishment is delivered, that agent is long gone.

Now, we care about our future selves, so the threat of punishment to our future selves deters us from doing bad acts today.  However, we frown on threats to our family members, even though they would also deter us; is there a qualitative difference?

You could say we are more closely "related" to our future selves than to our family members.  It's wrong to punish your brother for something you did, because he's so obviously not you.

Hypothesis: People who commit serious crimes are systematically less likely to care about their own future selves.  They are not so closely "related" to their future selves, and so do not take the future consequences as deeply into account, which is why they commit crimes.
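To make the hypothesis concrete, here is a minimal sketch of the expected-cost comparison it implies.  The weight on future selves and all the numbers are invented for illustration, and "caring about your future selves" is collapsed into a single parameter:

```python
# Toy sketch: present-me commits the crime iff the immediate gain outweighs
# the expected punishment borne by future-me, scaled by how much present-me
# cares about future-me.  All numbers are invented for illustration.

def commits_crime(gain_now, punishment, p_caught, care_for_future_self):
    expected_future_cost = p_caught * punishment
    return gain_now > care_for_future_self * expected_future_cost

gain, punishment, p = 10.0, 50.0, 0.5

print(commits_crime(gain, punishment, p, care_for_future_self=0.9))  # False: deterred
print(commits_crime(gain, punishment, p, care_for_future_self=0.2))  # True: not deterred
```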

If so, we penalize specifically the people for whom there is the biggest disconnect between the agent who committed the crime and the agents who must pay the price.

That makes me kind of uncomfortable.  But note that the legal system treats "crimes of passion" as a special case, presumably in recognition that the agent who committed a crime of passion is no longer around.

*
Drug addiction is especially susceptible to the problem identified here.  Strong addiction induces an obsessive focus on the present, and thus a disconnect between present and future selves.

Maybe it's wrong to punish your brother on your behalf because he clearly had nothing to do with the bank robbery, and didn't even benefit in any way.  By contrast, future versions of yourself may be beneficiaries of the robbery, and indeed might agree that current-you should have committed it.  Perhaps a sense of common agency comes from agreement over what should be done.

But in the case of drug addiction, that link is broken.  In general, drug addicts harm their own future selves by their present indulgence.  Any current smoking makes it both harder to quit and harder for the addict to satisfy the craving in the future.  The worst!
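Here is a minimal sketch of that double bind.  The "habit stock" and the functional forms are invented purely for illustration; the point is only the direction of the two effects:

```python
# Toy habit-stock sketch (all functional forms invented): smoking today
# raises the habit stock, which (a) makes quitting more costly and
# (b) makes each future cigarette less satisfying.

def next_habit_stock(stock, smoked_today):
    return 0.9 * stock + (1.0 if smoked_today else 0.0)

def pleasure_from_smoking(stock):
    return 10.0 / (1.0 + stock)   # tolerance: each cigarette satisfies less

def cost_of_quitting(stock):
    return 2.0 * stock            # withdrawal scales with the habit

stock = 0.0
for day in range(5):
    stock = next_habit_stock(stock, smoked_today=True)
    print(day, round(pleasure_from_smoking(stock), 2), round(cost_of_quitting(stock), 2))
```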

Wednesday, September 25, 2013

A matter of life and death


If you watch Doctor Who, you know that Daleks love to announce their intentions before carrying them out. EXTERMINATE! EXTERMINATE!  This is probably not the best idea, strategically speaking.  It also has welfare consequences, which we're going to talk about today!  On the way, we will discuss animal slaughter, murder, deathbed wishes, criminalizing private acts, and more.

The argument will proceed in the following steps:
  1. What matters is the welfare of agents with preferences.  
  2. Welfare is exclusively a function of an agent's experience, not the underlying state of the world.
  3. Yet we have a strong intuition that we should be allowed to have preferences over states of the world rather than experiences.  
  4. This intuition leads to strange ideas about life, the universe, and everything.
I'm sure the Daleks will fit in there somewhere.  Allons-y!


1. What matters is the welfare of agents with preferences

Well, I dare you to argue otherwise.  

For something to matter, you need an entity it matters to.  If there is no entity with preferences, the state of the world doesn't matter one bit, to anyone or anything.  There is no "better" or "worse," only indifference.  The clock does not care what time it is.

An agent is an entity with preferences.  His preferences generate a sense of "better" and "worse," and the degree to which these preferences are satisfied is captured in his welfare.  

Of course, the agent can care about whatever he wants, including the state of the clock, which is not itself the welfare of an agent.  But if you ask why something matters, the answer will always trace back to the fundamental welfare of an agent.


2.  Welfare is exclusively a function of an agent's experience, not the state of the world.

The above implies that agents are the gateways to meaning.  Everything that matters has to go through an agent first.  It has to be experienced by an agent.  

It's important to distinguish between an agent's experience and the underlying state of the world. The state of the world -- the way the world really is -- gets filtered by the senses and turned into an experience.  The mental experience is all the agent directly encounters, and thus the agent's welfare is a direct function of that experience, not the underlying state of the world.

The state of the world is important only insofar as it influences experiences.  If your experience is identical in two states of the world, you must be equally well off.  It could be raining or not, but if you can't tell the difference from inside your windowless office, you are currently just as well off either way.
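If you like, you can think of it as a composition: welfare = u(experience(state)).  Here is a toy sketch of the windowless-office point, with made-up functions:

```python
# Sketch: welfare depends on the state of the world only through experience.
# The windowless office filters out the weather, so both weather states map
# to the same experience and hence the same welfare.  Functions are invented.

def experience(state_of_world):
    return {"desk": "beige", "coffee": "lukewarm"}   # no window, no weather info

def welfare(exp):
    return 1.0 if exp["coffee"] == "hot" else -1.0

raining = {"weather": "rain"}
sunny = {"weather": "sun"}

assert welfare(experience(raining)) == welfare(experience(sunny))
```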

3. Yet we have a strong intuition that we should be allowed to have preferences over states of the world rather than experiences. 

Suppose someone has taken a compromising photo of you. Do you prefer it to: 
  • circulate without your knowledge, or 
  • not circulate at all?
I know what you want to say, of course.  But supposing that nobody who sees the photo lets on, you can't tell those states of the world apart, so in point of fact you like them equally well.

Mentally, it's hard to disentangle the two, because I'm essentially asking you to choose one, and choosing a state would seem to entail knowing which state is realized.  Also, there is the sidetracking objection that you might find out about (or be otherwise impacted by) the photo at a later date.  To get rid of these, let's consider someone who is (a) not you, and (b) dead.

In particular, Nabokov:
At the time of his death, he was working on a novel titled The Original of Laura. His wife Véra and son Dmitri were entrusted with Nabokov's literary executorship, and though he asked them to burn the manuscript, they chose not to destroy his final work.
Now put yourself in Dmitri's shoes and consider the decision of whether to publish the work.  Nabokov has expressed a clear preference for the work to be burned.  However, by the time the work is published, he is dead.  He cannot distinguish the two states of the world -- indeed, all his prior life experiences are identical either way -- and so he is made no worse off by the publication.

Let me be clear that just because we won't be alive in the future doesn't mean we can't care about it.  Our past experiences generate beliefs about the likely future states of the world, and we can have preferences over these believed states.  We can even do things today to make certain states more likely than others.  However, our welfare today is a function of these beliefs, not the realized future state.  (Information can't travel back in time!)

Perhaps Nabokov worried this would happen.  This expectation would have hurt his welfare prior to death.  Nabokov could have given his wishes the force of law, in which case he might have been happier, knowing that his wishes would be fulfilled.  Nevertheless, Dmitri did not harm Nabokov one bit by violating his wishes.

(None of this necessarily means it was a good idea to publish, since freely violating deathbed wishes can influence the beliefs of other people about the likelihood that their own future deathbed wishes will be violated.  Similar logic applies to circulating compromising photos, even if it is possible to do so without the subject ever finding out).


4. This intuition leads to strange ideas about life, the universe, and everything.

Well, we are already here.  If you think that welfare is a function of the state of the world, you might think that you are somehow making Nabokov better off by following his deathbed wishes.  But that makes no sense.

Time for a bunch more examples.  


Private acts.  Define a consensual private act as an act unobserved by any agent other than the ones directly involved, who mutually consent.  Societies are always trying to block people from doing "immoral" things, even when all involved parties are consenting, even when no one else knows about it.  This bothers me, because such acts would seem to strictly improve welfare, since each agent in society is either better off or cannot distinguish the states of the world.  However, here are some counterarguments:
  1. Consider the private use of illicit substances in light of my previous post.  In this case, future versions of the drug user may be impacted against their will.  Thus, "consensual private act" has narrower scope than first appeared.
  2. Suppose there is a higher power who is always watching, who disapproves of the act but does not block it.  This too narrows the scope of "consensual private act."  On the other hand, it could be argued that if church and state are to be separate, the state cannot implicitly rely on the existence of an all-knowing entity to motivate any of its laws, which is actually an interesting legal limitation.
  3. Even without a higher power, ordinary members of society may prefer a world in which it is harder to engage in acts of which they disapprove, because this supports beliefs that less "immoral" behavior is happening in the world, which increases their welfare.
Note an interesting implication of (3): In this world, it could be socially optimal to both prohibit a behavior and secretly engage in that behavior!  Compare this to the tragedy of the commons, in which it is socially optimal to prohibit overgrazing but only privately optimal to overgraze.  

(Workbook question: What about the case where agents have limited perception and the commons is very large?  Are carbon emissions effectively a private act, since no individual can make a perceptible difference in the CO2 levels?)

Your stance on government's proper role will determine whether you think any of the above arguments are a legitimate basis for lawmaking.  But note that even if (3) is a basis for law, such laws should be handled differently from other laws.  If a prohibition exists, it can be socially optimal for you to break it.  As long as you are not caught, you do no harm.  Furthermore, in a basic crime deterrence model, we would normally say that the punishment should be (net harm to others)/(probability of being caught), so that the expected punishment equals the harm done.  However, in this case, it should just be the net harm to others (conditional on getting caught), since you do no harm when you aren't caught.
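As a back-of-the-envelope illustration (the numbers are invented; this is just the standard expected-punishment logic):

```python
# Standard deterrence logic: set the fine so that the expected fine equals
# the harm done.  Numbers are invented for illustration.
#
# Ordinary crime: the harm occurs whether or not you are caught,
#   p * fine = harm            =>  fine = harm / p
# "Private act" crime: the harm occurs only when you are caught,
#   p * fine = p * harm_caught =>  fine = harm_caught

harm, p_caught = 100.0, 0.25

fine_ordinary = harm / p_caught      # 400.0: scaled up to offset low detection
fine_private_act = harm              # 100.0: no scaling needed

print(fine_ordinary, fine_private_act)
```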

So many interesting things to think about!  Let's keep going!


A lamb is born and lives an idyllic, free-range life.  At some point before the age of 12 months, it is gathered up by Friendly Farmer John, who has always been nice to the lamb in the past.  On this occasion, Farmer John puts a captive bolt gun to the unsuspecting lamb's head, and instantaneously ends the lamb's consciousness.

How should we feel about this?  Well, our first instinct is to think of the lamb as an agent who wants to live, who has life stripped away from him, therefore harming him.  But that's not right.  To answer this question, it's better to recognize the lamb as a bunch of agents arrayed through time, like in my last post.

Every instance of the lamb prior to death had an experience that the lamb could not distinguish from a scenario in which he was not about to die.  That means that every instance of the lamb prior to death is just as well off as if the farmer had not killed him.

Meanwhile, there are these hypothetical post-slaughter versions of the lamb, which either exist or not, depending on the farmer's actions.  Let me be clear that the choice is not for them to die or not.  The choice is for them to exist or not.

That means you can't rely on the lamb's own preferences to weigh those alternatives, because the lamb's preferences are not consistent across the alternatives.  The lamb isn't either happy to be alive or sad to be dead.  He's happy to be alive or he's dead and does not care.

Of course, you can feel however you want about existing versus nonexisting lambs.  But do you feel differently about a lamb who doesn't exist because he was never born versus a lamb who doesn't exist because he was recently slaughtered?  I don't see the difference.  The "continuity" from current lamb to future lambs feels like an illusion, like we discussed in the last post.

How to feel about agents that do and don't exist?  It's a pure matter of preference.  Personally, I feel no obligation to care about entities that don't exist.  I do care about some of them, of course -- for example, my future selves! But I don't feel bad for lambs that don't exist, and I don't feel differently for lambs that died versus lambs that were never born, because to me those seem like the same thing.  I challenge you to make a compelling case for why these are different.

Now, the slaughtered lamb might have friends and family who miss him.  That is a decent reason to prefer life in this case.

Let's keep climbing the ladder...


People!
Sherlock: People have died.
Moriarty: THAT'S WHAT PEOPLE DO!
People think that death hurts them in some way.  And while the prospect of death can certainly affect our welfare, the actual state of being dead doesn't hurt you at all.  You are not an agent who will live a longer or shorter life, with a single utility function that sums your utility over time, producing a larger total if you live longer.  No, you are a series of agents who will exist or not at each instant in time, each with their own utility function if they exist, and nothing if they don't.  Death is a dividing line, not a state.
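One way to make the contrast concrete, as a sketch with invented utility numbers: in the series-of-agents picture, a shorter life removes agents from the list, but it changes nothing for the agents who do exist.

```python
# Sketch: a person as a list of instantaneous agents, each with a one-period
# utility if it exists and nothing if it doesn't.  Utilities are invented.

long_life = [1.0, 0.8, 0.9, 0.7, 0.6]    # utility of each instant that exists
short_life = long_life[:3]               # same person, but dies after instant 3

# No existing instant is made worse off by the truncation:
assert all(a == b for a, b in zip(short_life, long_life))

# Only the "single agent with a lifetime sum" view registers a loss:
print(sum(long_life), sum(short_life))   # 4.0 vs 2.7
```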

(Of course I'm separating death from the pain often associated with the act of dying). 

Below, I explore a few implications of making the following untraditional assumption which I first introduced when talking about lambs above:

Assumption 1: The nonexistence of a "previously alive" agent is not intrinsically different from the nonexistence of an agent who never existed.  Harm is not inflicted on either type of agent by the fact of its not existing.

This is the assumption that's driving everything.  If you have a problem with this post, which is likely, then this is probably what you want to challenge.  Which I welcome!  (Workbook Question: What happens if a previously alive agent persists in the form of a "soul"?  Does it matter if they're en route to Heaven or Hell?).

But for now, take this assumption as given, and continue to charitably maintain whatever other assumptions I've implied up to now.  Some observations:
  • In some ways, livestock have it lucky.  It is easy to systematically delude them.  You can slaughter every last lamb before their first birthday and they will never catch on.  It's hard to do that to humans, outside of sci-fi.  That said, many of the worlds explored in sci-fi cannot be distinguished from the one in which we live.  The plug on the Matrix could be pulled any time, and the simulators of this world need not feel any guilt at all, even if they have the utmost respect for our sentience.  But it is dangerous to talk about such matters since we don't know who may be listening.
  • There are, obviously, lots of good reasons for murder to be outlawed.  I am not advocating the killing of humans.  However, murder is bad in a bizarre, backwards way: the harm from the (painless) death of an agent comes from its effect on other agents!  This is so counterintuitive, but so is the tragedy of the commons: everyone is hurt by overgrazing, but no individual is hurt by their own overgrazing.  In the same way, when the murder rate rises we are all hurt, even though no murdered agent is hurt by his own murder.  I know, it's still counterintuitive.  Feel free to argue...
    • Oh, and if you are going to kill someone, for God's sake don't tell them they are about to be exterminated.  Daleks have so many opportunities to kill people unexpectedly and instantaneously, but they blow it every time.  (I told you we'd get to Daleks!)
  • I said the slaughtered lamb might have friends and family who are sad when it dies.  That's even more true of people, generally.  But one interesting philosophical case is that of an abortion in which the mother and father are the only ones who know about the baby, and do not want it.  Here, the abortion does not negatively alter the observed experience of any agent who exists.  The fetus is like the lamb; the parents prefer it; everyone else who might not prefer it doesn't know.  However, note that if you believe that a higher power exists and that he has a strong preference for the baby to live, then that's a third agent right there who will know and be sad when his preferences are violated.

Whether you agree with any of this or not, I hope you have found it thought-provoking.  I leave you with two final thoughts:  

1) For an individual, the worst part about dying is not death, it's the act of dying, all the pain and suffering and fear of death that precedes being dead.  That could be a good reason to sign a Do Not Resuscitate order.  Because once you've lost consciousness, you've already gone through all that suffering, and maybe you don't want to do it again.  (It depends on other factors, too, like how much your death will upset your loved ones).

2) Death is scariest when it is imminent.  People may prefer to live in a world where they are taken by surprise, even if it means living slightly shorter lives.  Suppose instantaneous death arrives each day with a fixed probability p, constant over time, for your whole life.  Under what circumstances would that be preferable?  Or what if we could be systematically deluded into thinking we would live for 100 years, only to be euthanized at 75?  Would people choose such a world over the one we're living in?
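For a rough sense of that trade-off (a sketch with invented numbers): under a constant daily death probability p, remaining lifespan is geometric with mean 1/p days, so you can match any average lifespan you like while keeping death a perpetual surprise.

```python
# Sketch: constant daily death probability p gives a geometric lifespan
# with mean 1/p days.  The value of p below is chosen, for illustration,
# so that the average lifespan is 75 years.

DAYS_PER_YEAR = 365
p = 1 / (75 * DAYS_PER_YEAR)

expected_years = (1 / p) / DAYS_PER_YEAR
print(expected_years)   # 75.0 on average, but death always arrives unannounced

# Compare with the delusion scenario: you expect to live to 100 and are
# euthanized at 75.  Same realized length, very different anticipation,
# and anticipation is part of experience.
```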

Tuesday, September 10, 2013

Donut Time, All the Time!

You know when you read something and you're like, "I sort of already knew that, but it was nice to hear it laid out properly"?

That's the first half of this post.

You know when you eat a donut and you are like, "Yum, but...I really wish I had given that to Xan since he loves donuts so much more than me..."

Well, that should happen -- I do really love donuts -- but it doesn't.  And I don't blame you, since not once in my life have I even saved a donut for my own future self.

The second half of this post is about why evolution just couldn't accomplish the seemingly simple task of getting me to save donuts for my future self.  It has to be evolutionarily optimal at least sometimes, right?


I. AGENCY

Am I an agent?  
Before I answer that question, I want to impress on you what it truly means to be an agent.

Economists typically use the word agent to refer to a single person with a single set of preferences.
An agent decides how to act, taking his preferences into account, and then actually acts upon that decision.

Contrast this with a collective, a group of agents each with their own preferences.  Whereas an agent has coherent preferences that tell it unambiguously how to behave, a collective is an amalgam of preferences that may not agree with each other.  And whereas an agent decides how to act and then pulls the trigger itself, a collective cannot act directly; it is ultimately the agents within it who act on its behalf.

There are many instances where a collective can be thought of as having agency.  Whether the collective is a family or nation or firm or sports team, we may not lose much by modeling it as a single entity, depending on the problem.

But it can also be deeply misleading to imagine that a collective has proper agency when it doesn't.  In fact, this is one of the most pervasive logic failures in circulation today. Consider this representative quote:
"We run the risk of going extinct, and the irony is, we did it to ourselves. The ‘smarty pants’ brain that created advanced weapons, complex global economics...is routinely bossed around by the brain that shoots from the hip.... 
No one in their right mind would deliberately create the means of their own extinction, but that’s what we seem to be doing. The only conclusion is that we’re not in our right minds..." 
-K.C. Cole
No, that is not the only conclusion.  Of course people do stupid things, but that's a bit of a red herring here.  When people talk about all the ways in which we are destroying ourselves -- and how stupid to destroy oneself! -- they almost always mean that agents are destroying the collective.   Since the agents are in the collective, these sound like the same thing; however, the deep insight of the tragedy of the commons is that everyone can be doing individually smart things and still end up with a collectively stupid result.
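A toy version of that insight, with invented numbers: each herder's extra cow is privately profitable because its cost is spread over everyone, yet the group is worse off for every cow added.

```python
# Toy commons: adding a cow earns its owner 10 but degrades the shared
# pasture by 15, a cost split evenly among all N herders.  Numbers invented.

N = 20
private_gain, total_damage = 10.0, 15.0

gain_to_me_from_my_cow = private_gain - total_damage / N    # +9.25: individually smart
change_in_group_welfare = private_gain - total_damage       # -5.00: collectively stupid

print(gain_to_me_from_my_cow, change_in_group_welfare)
```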

More generally, there is almost always a tension between the preferences of different agents.  In the absence of well-designed laws or norms or enforceable contracts or markets to discipline their interactions, the large-scale effects of these interactions can diverge significantly from what they would collectively choose, if only it were possible to choose collectively.  We cannot hope to understand their joint behavior if we always think of the collective as an agent with the power to dictate its own behavior. Because it's not.


So am I an agent?
In a word, no.  I exist, and exist, and decide, and decide.  But the cold, hard truth is that I am better thought of as a collective of agents that exist at different moments of time.  Do we have a lot in common?  Sure.  But we aren't the same agent.  For one, we have different preferences over which of us gets to eat the donut (I can tell you that each of me wants the donut for himself!).  For another, when all is said and done, each of me must decide for himself what to do.  Today-me cannot flex a muscle to lift the arm of tomorrow-me.

Of course, today-me has influence over the state of the world tomorrow, and as such can manipulate the incentives of tomorrow-me.  Perhaps today-me can make it optimal for tomorrow-me to lift his arm, perhaps even employing a commitment device if one is available.  However, that does not make us the same agent, any more than you become the same person as me when I hold a gun to your head and order you around.  In any case, commitment devices are often either unavailable or prohibitively costly, so the fact remains that versions of me are not always going to agree on how to behave.

To facilitate breaking yourself free from any illusion that you are a single agent over time, let me stress how unnatural the idea of single agency is from a physical standpoint.  Your mind itself is under constant reconstruction at the atomic level.  Your brain retains few, if any, of the atoms it was originally made of.  The strong feeling that you are a single person is a product of inherited memories and characteristics passed along by the physical laws that govern your transition from one instant to the next.

[Just to say it: You might believe in some sort of incorporeal soul that binds all the versions of you together into one coherent entity.  Fine by me, but that is immaterial (ha!) for the present discussion.  What really matters right now is that different versions of yourself have different preferences, all else equal.]

II. DONUT EVOLUTION

The thing is, there are strong evolutionary reasons for agency.  The evolutionary optimum is a single consumption path, and while a person may technically be an array of agents who must act at different points in time, it would be most adaptive if they all cooperated along that same optimal consumption path.

But since this so often fails to happen, we are left to wonder why.  I have seen a few evolutionary models that generate dynamic inconsistency (procrastination, preference reversals, etc) as a second-best outcome, but never convincingly.  Dynamic inconsistency is pervasive, so a truly satisfactory explanation needs to rely on first-order issues.

My feeling is that agency -- which entails dynamic consistency -- is just plain hard to accomplish.  Because the truth of the universe is that we aren't agents.  The null hypothesis in economics is that we are agents, so evolutionary models are forced to come up with reasons why evolution would "prefer" us to act inconsistently.  But can't it just be legitimately hard to make a collection of people arrayed through time agree on everything?

A donut model.  (No relation to a circle model). 
Say you live for 10 periods, but that "you" are really a collection of 10 agents, one per period.  In the first period, you are endowed with a donut which you can save or consume at any time.  And suppose that for whatever reason, it is evolutionarily optimal to eat the donut in period 7.  But because of limited hardware/software, evolution cannot fit a 10-period utility function into your brain.  It can only fit a single, simple, one-period utility function into the brain of each of the ten agents.  The same utility function has to go into every agent, and each agent can only make the simplest of differentiations: "himself" versus "other agents."

This utility function can tell an agent to save the donut or eat it himself.  It cannot tell him to save the donut for agent 7, as he doesn't properly know who agent 7 is.

Constrained in this way, if each agent prefers to save the donut, it will never get eaten.  It's better for each agent to prefer to eat the donut! Then it gets eaten in period 1, which is better than not eating it at all, but far from the unconstrained evolutionary optimum.
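Here is a minimal simulation of that constrained problem.  The payoffs are invented; the only point is the comparison between the hard-wired "save" rule and the hard-wired "eat" rule.

```python
# Toy donut model: ten one-period agents share one hard-wired rule.
# "eat"  -> the current holder eats the donut immediately (period 1).
# "save" -> every holder passes it along, and it is never eaten.
# The unconstrained optimum (eat in period 7) would require a rule that can
# point at "agent 7", which the hardware can't express.  Payoffs invented.

def period_eaten(rule, periods=10):
    for t in range(1, periods + 1):
        if rule == "eat":        # the same rule sits in every agent's head
            return t
    return None                  # "save": the donut is never eaten

def fitness(eaten_in, optimal_period=7):
    if eaten_in is None:
        return 0.0               # wasted entirely
    return 1.0 - abs(eaten_in - optimal_period) / 10.0

print(fitness(period_eaten("save")))   # 0.0
print(fitness(period_eaten("eat")))    # 0.4: better than never, worse than period 7
print(fitness(7))                      # 1.0: the unattainable unconstrained optimum
```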


Complexity is Simplicity for Complicated People.  
I think this minimalist model captures something really true.  At its heart, dynamic inconsistency arises because it's simpler than dynamic consistency, and complexity is evolutionarily expensive. That's it.  That's the real reason.

That sounds so weird to economists, because if you are really a single agent over time, as we tend to assume, then of course it's more complicated to make your preferences vary over time.  But if you are actually a bunch of different agents, then it's more complicated to make your preferences agree!

In this model everyone has the "same" preferences; that's the sense in which dynamic inconsistency is simpler.  But they aren't really the same, are they?  In fact, they all have the same utility function with respect to a different reference point: everyone has a different definition of "me"!

As well they should. If a frisbee is hurtling toward your head, you should react the same way that I would react if a frisbee were hurtling toward my head.  That's what a one-size-fits-all program looks like, given that we all see out of our own eyes.

Different versions of ourselves are qualitatively the same story.  People ask, "Why dynamic inconsistency?"  But to the contrary, isn't it amazing that evolution was able to endow us with as much glorious consistency as it did?